metadata_version string | name string | version string | summary string | description string | description_content_type string | author string | author_email string | maintainer string | maintainer_email string | license string | keywords string | classifiers list | platform list | home_page string | download_url string | requires_python string | requires list | provides list | obsoletes list | requires_dist list | provides_dist list | obsoletes_dist list | requires_external list | project_urls list | uploaded_via string | upload_time timestamp[us] | filename string | size int64 | path string | python_version string | packagetype string | comment_text string | has_signature bool | md5_digest string | sha256_digest string | blake2_256_digest string | license_expression string | license_files list | recent_7d_downloads int64 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
2.4 | module-qc-database-tools | 2.10.0rc1 | Python wrapper to interface with LocalDB and Production DB for common tasks for pixel modules. | # Module QC Database Tools v2.10.0rc1
The package to register ITkPixV1.1 modules and generate YARR configs from the ITk
production database using the `itkdb` API.
---
<!-- sync the following div with docs/index.md -->
<div align="center">
<!--<img src="https://gitlab.cern.ch/atlas-itk/pixel/module/module-qc-database-tools/-/raw/main/docs/assets/images/logo.svg" alt="mqdbt logo" width="500" role="img">-->
<!-- --8<-- [start:badges] -->
<!-- prettier-ignore-start -->
| | |
| --- | --- |
| CI/CD | [![CI - Test][cicd-badge]][cicd-link] |
| Docs | [![Docs - Badge][docs-badge]][docs-link] |
| Package | [![PyPI - Downloads - Total][pypi-downloads-total]][pypi-link] [![PyPI - Downloads - Per Month][pypi-downloads-dm]][pypi-link] [![PyPI - Version][pypi-version]][pypi-link] [![PyPI platforms][pypi-platforms]][pypi-link] |
| Meta | [![GitLab - Issue][gitlab-issues-badge]][gitlab-issues-link] [![License - MIT][license-badge]][license-link] |
[cicd-badge]: https://gitlab.cern.ch/atlas-itk/pixel/module/module-qc-database-tools/badges/main/pipeline.svg
[cicd-link]: https://gitlab.cern.ch/atlas-itk/pixel/module/module-qc-database-tools/-/commits/main
[docs-badge]: https://img.shields.io/badge/documentation-mkdocs-brightgreen?style=for-the-badge&logo=data:image/png;base64,iVBORw0KGgoAAAANSUhEUgAAAEAAAABNCAYAAAAW92IAAAAAIGNIUk0AAHomAACAhAAA+gAAAIDoAAB1MAAA6mAAADqYAAAXcJy6UTwAAAAGYktHRAD/AP8A/6C9p5MAAAAJcEhZcwAACxMAAAsTAQCanBgAAAAHdElNRQfnAhsVAB+tqG4KAAANnklEQVR42u2cf5BcVZXHP+f168lkQn6QAJJsFgP4AyGajIuKISlqdREEtEopyyrZslbUtVjFcg1QpbuliEWpZQK1u7ilUuW6K1JSurWliL9/gBncFSWTEFiEkCWYYCJOmMTM5Md097v7x/fcfq9nejqvkw7DujlVr6a733v33nPuued8z7nnjlGSVgytJ5n6cz9wKvAnwAuBM4ClwOn++8nAPGAOMBuYBaRAZZpuGkAdOAwcAsaBPwCjwO+BXcBO4Dd+7fTfDxUbCcHYtObDpfhKywqgwHw/8Grg9cD5wFnO7ElAH2Bl22xDFb9mueDaUQAmgDFgBNgG/Ar4MfAAcMgslO6wtAAAQjAzCzcA1zvDM0HmApoFLAJeClwGfBj4bAj2SbNQWgJJ2QcBzEIFzf5MMd+JTgJe7WMsTV0JwCmbaU57ObajEcAfFZ0QwEwPYKbphABmegAzTd0KwDg2oHO8qevxdSeAQB0hr/JQ67mjAGwLGmNp6gIJGghh3YSg56XAK1AcMK+7tnpCdRQn/BbYDHwfuMeMYFZeCUoPOtDUrWeBOyDcCbaI1kDoT1EgdAqwAJgLDCDYWvX+UqR51qaLjDwgmkBB0QFgP7AX2EMeED3l19P+exMEhfJIuLwAbIrWW4Yisd8Dm3IughlWJcfr/YXPfQVBJORLMPOrDtQKzMeo8DBwOIRQs24inV4KwKmCgo/taGbaCMpitDaBZq5nZGYtqjiJBoBlwOMuyOMigAT4OxTnfwfYiNRwNMBBey6Mo0FQWDrbx/FC4JUoIhwF/qqb5roVQEDr+o1+7Qd+BzxtsAOtzV1oWYwiIzUOHERqXENrvOFtRYFF9xXzAXEJ9aNkyjxn9lRgsZktRfZmKfACHxPA3XQ5Ccdquef69aJJv8f1PNHmqk8SQhRAZD5FtqJ4Vf3qOXA7Xq4rKQz+eU0noPBMD2Cm6YQAjuKdP6qUWLdGsA58CTgNeDnPn+ToGLDFx3a8gqEmfQv4GbAcAZDlaG9gMbDQhRI3QHoVOgfyDZMxhDF2ocj0YQTIHkbxQld0tG5wLzAEDFmSELJsNsIDJ7sQ4rUAmO9CGSCPC6rI50cBBYQLauT4/4AzG3eGRlHQ86x/3p8kdjDLjg18disAC1CxgpqFLAMhvYPAM0d4nYyMBDMC1sK+EbIQQpIkUDKam8J8IMVoEMqjwW6NYGrwfuAKFPKWUvFQ+JTolYA1I8DMP4fErMl8aQ5CMB/LFRjvB9JuFt7RxAIXA58FHgUeQmvvCRQHjCCVPQBMQKiDhaMxBE3lCJhZEx7PRnFB3JB9MWbLUWLmZcAPgM9108/R2IAMreFX+AVav+PkiYtR/bV9wD6/N04e28egKLqthKlB0ABwkhnzkB05GdmUBeQ7zpU2Y+uKehULVHxQ89DM/J+hE0hwpgcw03RCAF0+//98Y0QA6MmZ5rIDPRmOWywQmhsjnwAeJN8YWYxg8HO9MdJAbve3CI98H7jbIBDKz2v5jRELUbf2AP8K3IFqdJaizOwy8gqxuDESY4CYHosVYnFjpAiGJ2+MxLggxgR7ve/dKAEbN0Z2+O+N5mCtPBzoej0PDq0v02YfrRsjMQgqboy0C4Ymb4wcav0bah25CzC8Zm1X/PTEoK0cWv+cW8YQYFOXzJ6gNnTME9duSQyv7v3MTNfPyqF1hbhalCUVNq/6UKl2jwkIrdywnlo9m9JWCTtx1MyH0DrmyHwIweqHJzSQrFG67bTYuCFr1M0MVtMEYA3KE9yJUmY9p0q1QqPWuMqMy1E4Ply4fZqZfSyd1bcLWIcMZzkBoG2tM4GtAbbHRMR0s1gUTkHx3gK83du7h6JL6hE1ao1+4N3An6McRFEAg8A1yEXeSRdgLQG+DnwX+CdgIDKVVJrsLQHOYzrMYE1BQqtr6zXFnEGxv0hV8iRS16Wyp/tLC4svZ40Aiu+/iDItFxwnxo6GwhG+l6YUuBrl+H+ehWx/Yi02Zi5KNS1BJTDHjZb/9Baq1dCh/qE7mt4QB4ZXX9cigO/6pdoOa3m5Tp5makxpuCTi7GRPcjemSXTmtZTM6mUzxB2oAphh9SjeOJ7h1WtJkQW/ArjLkmSjv/Ri4FzygxAAr0Gp7xR4IsAWS46geiEweP8t8VsCXIK06pvA4UkIsg+40J95CZASwk7gJ2gJlrXsWb3eIE0rFzhf5wKVQNgB/BT4EbAvZIHBofWkwHXAm53RjQi33wpcPqnhv/UL4EFTNDgy3ShWbliP5cxXkZv8hE/1E8DGAvOnAzcC7yCv9oj0PhfAJzmyzgVgQZpWbgb+BgVkk9v6EfBRS2w4ZBkpitYAZmfBSCzUgPv85T5kH/pR8dEzSKXuZ5oiqUiFUr05wEdd0H3Av9HqphYCtwFXoiX3Q3T8ZQ8KtS9Bwl6G0uKdmO9zQV4B/Bq4Hdjq712AynouRVHrVZYkD6XkFjQ0JIAGAhO3+YPfQ3t/n0Y+NkXRWaOEtVoEfAp4jzN3Cyq03CchGSGEa5z5MeDjyOuMFdq4DVgL3IA0qZMATgHeBHwN+AiqZot3P4fxRuAf0X7mTcA7W/xpDNCDGjuIcvlRQM16veYLGZ3A9BloKb0VJS5u8s4nmmMK4UzkhcRoCLcytQ5w1N9dArzrCAI34D/RUt3d4lGMBvBtpPH/gjTrTW3BzfDqtdFSWrt7kQY3TLXulSSpN7LsPLRDcxFCZzcQwh1tmFuNtOtJ4PZ4v6UPjWMC+Gef3VM6CCBDar8blMAaXnNdsR1QJdl9aDlc2euscGhk2UXAV535x4B3Al9pxxz5ztKDKLtDNmlEBS/4qF/TkaHyvP9qTtaa3N8X+j2ItvcBVvRaAIPAl4EV/v0JdJZPVIjSQshAbha0r9gIBDavag3ECkmPg6gmYDpKUNrs2TaCnkw7/O/CXgtgmV9bUWrrMmQ8hSWSCivv/QwAjXoDcvuicRwZ83Qyu4HW+uNOFCF/1msB1NEavBT4gg/4fcj/9wNYKseTVqugKtMouKqZsXLDLS0NDm5YFz/OQV6pkwAWovKdI+Ukzva/z/RaAEPoVOn/AH+PrK0BH0RYwN1YcyKHfeDnI/TZYidXblhXBBRxC7yTABahcJnQqLcIofB5PvA6//xA17n8FslOVdlRYMxd6T4XxgDKFdyA3Oo68nzB/ciwnQtc64KqvXToZgboL9b9z/F7C0oM8T3APVZJt04Zr+gqBIr+ANx1JA2o+QVwViWtAFS1S9KWDI+pnPYAH0JZ
olnAx4BrCM2di53A55H7uhotlVMHtFrwkx9LkB25knLh13nIZa4IjZbHZyMccSMCc98Aflys5ErCVL72Ild2DnBto97w0jj7BwRZi4wX/xZpN/ABH8DFwM2YPQ38h9//EoLb70Xo7WLgXuTSFqNT6i/3/mJyZnI/EcPtQUDocuAeqyQ/QBB+NgrmLkK26Gcotjicoq0lgF3VSqOlfDsIgNyKorNzkCqPo10hkV6Ixmw3hAZYEUyB3M41zuwaciOEt3e9C+qvkT04v3B/zLXk08BnXAC/o5XGXTv2oqX2c7Sk3tXmuTsQ5N6Oq8KNKI83BLnNKQjiviCJ/hnKEG0FfjGp4dtd0g+ABavIywyvXiu0qDa3IVC0IvZVoH0+qH8H/sKF3YfA0Q8RuKn5M99AwVKRfolsxAjw2KHxiU/1z+m7G3gDsi8p+ocL97pwmv9wwSYbiYCxabX++8Lg0PqOGZosgaTdqpy0RTWdS6pX+tjy2mun3DeMWmiQWmcTNUnL2lJiCRP1Gmml0vb959Vef0v+n1zwGbC5A7IbHFrX8n149XV5nDKJw7plbLnw+plm9flDx7w7PNxxZlo3XQiBjYUA5Vjb74kApD55UjKeEJzqZ6xYgboAIaqnSvSRArMMxps9WGs1bGipmwXyM0IHWtxSz3LGzY5J3O5XwGjUa95ViIeYml1nGskA+YnQ16JkY2yueBrUgNTvzQfODvkzlY1PXVZ8J3HmjTyQWYTig9iihSBBBcGsOMbmaTP3XsXTZ8U2LQoaSJrPGkmKfPJZwOOVtG87sNCw1yG3AYKhvzYxvRyBiiEXxCrDDiMXuMx/24b8+C7D5qFtrFNRoLIMsMEzvvOQdMDO8r43Aa9CbqwPYZMYKg9grDHYBMkrCfxCY7al/vxvgCVB3I0gnDCCwuKlyOWlwBbDXgAsDIHdKLbYk6DkZhWBHRBOnwv8tzc2gf5zzDxveCcCFHtR4vFMlM7ei/D1oL//OMre9KMIbRUCNQ8DdUfMYy6YJShk3o5KYs8ATvMZO4Bg9EtQwPQylGJbgDJJsRR3uTNaRVjlfB/zbufrzQgN9pOX9j+SoNRQg3y/Ldbq9LsUN6O0dURas1zFYj1PVLNTxBh7yM8GZgW1NFftJMgCnI6Ckpq3V/e2x52pSpIXOz0CrESnQs7zWY9Hcy/0ZVakPpTBrvvnCZ+8Awg0bUO5zf2VxVdfMof8VPaoD2KVM/+wz9KAS3uPL4lnfKAjrmKP+kyO+edD3ta4tzPuTMwHUlMRdeZ9jXjf5/jsPdZk0HjWJyRq3GPASGiw2xJq3kcU9E5neAU6Uf4QOuc8H2WmYug919saA0atDZKaD7wNVYLVXMIBBRnH82zwX6Jk5Y7mL90b/bOR6t/l39/qQntkuhf+F0N4SOsZwIo7AAAAJXRFWHRkYXRlOmNyZWF0ZQAyMDIzLTAyLTI3VDIwOjU4OjQ2KzAwOjAwDOG2KgAAACV0RVh0ZGF0ZTptb2RpZnkAMjAyMy0wMi0yOFQwMTo1ODo0MCswMDowMKx+Qb4AAAAodEVYdGRhdGU6dGltZXN0YW1wADIwMjMtMDItMjdUMjE6MDA6MzErMDA6MDBa3S3tAAAAAElFTkSuQmCC
[docs-link]: https://atlas-itk-pixel-mqdbt.docs.cern.ch
[gitlab-issues-badge]: https://img.shields.io/static/v1?label=Issues&message=File&color=blue&logo=gitlab
[gitlab-issues-link]: https://gitlab.cern.ch/atlas-itk/pixel/module/module-qc-database-tools/-/issues
[pypi-link]: https://pypi.org/project/module-qc-database-tools/
[pypi-downloads-dm]: https://img.shields.io/pypi/dm/module-qc-database-tools.svg?color=blue&label=Downloads&logo=pypi&logoColor=gold
[pypi-downloads-total]: https://pepy.tech/badge/module-qc-database-tools
[pypi-platforms]: https://img.shields.io/pypi/pyversions/module-qc-database-tools
[pypi-version]: https://img.shields.io/pypi/v/module-qc-database-tools
[license-badge]: https://img.shields.io/badge/License-MIT-blue.svg
[license-link]: https://spdx.org/licenses/MIT.html
<!-- prettier-ignore-end -->
<!-- --8<-- [end:badges] -->
</div>
| text/markdown | null | Jay Chan <jay.chan@cern.ch> | null | Giordon Stark <kratsg@gmail.com>, Elisabetta Pianori <elisabetta.pianori@cern.ch>, Lingxin Meng <lingxin.meng@cern.ch> | Copyright (c) 2022 ATLAS ITk Pixel Modules
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE. | null | [
"Development Status :: 1 - Planning",
"Intended Audience :: Developers",
"Intended Audience :: Science/Research",
"License :: OSI Approved :: MIT License",
"Operating System :: OS Independent",
"Programming Language :: Python",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: ... | [] | null | null | >=3.9 | [] | [] | [] | [
"arrow",
"cachecontrol<=0.14.1",
"influxdb",
"itkdb>=0.6.17",
"itksn>=0.4.2",
"jsbeautifier",
"jsondiff",
"module-qc-data-tools>=1.5.0rc1",
"packaging",
"pandas",
"pyarrow",
"pymongo>=4.0.0",
"rich",
"typer>=0.18.0",
"typing-extensions>=4.0; python_version < \"3.10\"",
"urllib3<2,>=1.2... | [] | [] | [] | [
"Homepage, https://gitlab.cern.ch/atlas-itk/pixel/module/module-qc-database-tools",
"Bug Tracker, https://gitlab.cern.ch/atlas-itk/pixel/module/module-qc-database-tools/issues",
"Source, https://gitlab.cern.ch/atlas-itk/pixel/module/module-qc-database-tools",
"Documentation, https://atlas-itk-pixel-mqdbt.docs... | twine/6.2.0 CPython/3.11.4 | 2026-02-19T11:44:39.805254 | module_qc_database_tools-2.10.0rc1.tar.gz | 752,767 | 3c/01/db57afab1a14a842d0b9f6f9b282cc08644a2c6c0e1e92ac841a7bc572f5/module_qc_database_tools-2.10.0rc1.tar.gz | source | sdist | null | false | 1d95821e6738872e5347ffa753b19511 | 6bc18d22088820d44178ae11d4de31ec87085b0a020a6f053cb124300f66e668 | 3c01db57afab1a14a842d0b9f6f9b282cc08644a2c6c0e1e92ac841a7bc572f5 | null | [
"LICENSE"
] | 364 |
2.4 | aiohomematic | 2026.2.20 | Homematic interface for Home Assistant running on Python 3. | [![Release][releasebadge]][release]
[![License][license-shield]](LICENSE)
[![Python][pythonbadge]][release]
[![GitHub Sponsors][sponsorsbadge]][sponsors]
# aiohomematic
A modern, async Python library for controlling and monitoring [Homematic](https://www.eq-3.com/products/homematic.html) and [HomematicIP](https://www.homematic-ip.com/en/start.html) devices. Powers the Home Assistant integration "Homematic(IP) Local".
This project is the modern successor to [pyhomematic](https://github.com/danielperna84/pyhomematic), focusing on automatic entity creation, fewer manual device definitions, and faster startups.
## Key Features
- **Automatic entity discovery** from device/channel parameters
- **Extensible** via custom entity classes for complex devices (thermostats, lights, covers, locks, sirens)
- **Fast startups** through caching of paramsets
- **Robust operation** with automatic reconnection after CCU restarts
- **Fully typed** with strict mypy compliance
- **Async/await** based on asyncio
## Documentation
**Full documentation:** [sukramj.github.io/aiohomematic](https://sukramj.github.io/aiohomematic/)
| Section | Description |
| ------------------------------------------------------------------------------------ | -------------------------------- |
| [Getting Started](https://sukramj.github.io/aiohomematic/getting_started/) | Installation and first steps |
| [User Guide](https://sukramj.github.io/aiohomematic/user/homeassistant_integration/) | Home Assistant integration guide |
| [Developer Guide](https://sukramj.github.io/aiohomematic/developer/consumer_api/) | API reference for integrations |
| [Architecture](https://sukramj.github.io/aiohomematic/architecture/) | System design overview |
| [Glossary](https://sukramj.github.io/aiohomematic/reference/glossary/) | Terminology reference |
## How It Works
```
┌─────────────────────────────────────────────────────────┐
│ Home Assistant │
│ │
│ ┌────────────────────────────────────────────────────┐ │
│ │ Homematic(IP) Local Integration │ │
│ │ │ │
│ │ • Home Assistant entities (climate, light, etc.) │ │
│ │ • UI configuration flows │ │
│ │ • Services and automations │ │
│ │ • Device/entity registry integration │ │
│ └────────────────────────┬───────────────────────────┘ │
└───────────────────────────┼─────────────────────────────┘
│
│ uses
▼
┌───────────────────────────────────────────────────────────┐
│ aiohomematic │
│ │
│ • Protocol implementation (XML-RPC, JSON-RPC) │
│ • Device model and data point abstraction │
│ • Connection management and reconnection │
│ • Event handling and callbacks │
│ • Caching for fast startups │
└───────────────────────────────────────────────────────────┘
│
│ communicates with
▼
┌───────────────────────────────────────────────────────────┐
│ CCU3 / OpenCCU / Homegear │
└───────────────────────────────────────────────────────────┘
```
### Why Two Projects?
| Aspect | aiohomematic | Homematic(IP) Local |
| ---------------- | ------------------------------------------------------- | ----------------------------------------------------------------- |
| **Purpose** | Python library for Homematic protocol | Home Assistant integration |
| **Scope** | Protocol, devices, data points | HA entities, UI, services |
| **Dependencies** | Standalone (aiohttp, orjson) | Requires Home Assistant |
| **Reusability** | Any Python project | Home Assistant only |
| **Repository** | [aiohomematic](https://github.com/sukramj/aiohomematic) | [homematicip_local](https://github.com/sukramj/homematicip_local) |
**Benefits of this separation:**
- **Reusability**: aiohomematic can be used in any Python project, not just Home Assistant
- **Testability**: The library can be tested independently without Home Assistant
- **Maintainability**: Protocol changes don't affect HA-specific code and vice versa
- **Clear boundaries**: Each project has a focused responsibility
### How They Work Together
1. **Homematic(IP) Local** creates a `CentralUnit` via aiohomematic's API
2. **aiohomematic** connects to the CCU/Homegear and discovers devices
3. **aiohomematic** creates `Device`, `Channel`, and `DataPoint` objects
4. **Homematic(IP) Local** wraps these in Home Assistant entities
5. **aiohomematic** receives events from the CCU and notifies subscribers
6. **Homematic(IP) Local** translates events into Home Assistant state updates
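A minimal sketch of this flow from the library side, reusing the Quick Start API shown further below; the commented-out event subscription is a placeholder assumption, not a documented aiohomematic call:
```python
import asyncio

async def run(config):  # `config` built as in the Quick Start example below
    central = config.create_central()
    await central.start()                     # steps 2-3: connect to the CCU and discover devices

    for device in central.devices:            # Device objects created by aiohomematic
        print(device.name, device.device_address)

    # Step 5 (placeholder): subscribe to CCU events so state changes can be pushed
    # to wrappers such as Home Assistant entities. The real registration API is in
    # the developer guide; the method name below is an assumption.
    # central.register_event_callback(lambda *args: print("event:", args))

    await asyncio.sleep(30)                   # let a few events arrive before shutting down
    await central.stop()

# asyncio.run(run(config))
```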
## For Home Assistant Users
Use the Home Assistant custom integration **Homematic(IP) Local**:
1. Add the custom repository: https://github.com/sukramj/homematicip_local
2. Install via HACS
3. Configure via **Settings** → **Devices & Services** → **Add Integration**
See the [Integration Guide](https://sukramj.github.io/aiohomematic/user/homeassistant_integration/) for detailed instructions.
## For Developers
```bash
pip install aiohomematic
```
### Quick Start
```python
from aiohomematic.central import CentralConfig
from aiohomematic.client import InterfaceConfig
from aiohomematic.const import Interface
config = CentralConfig(
central_id="ccu-main",
host="ccu.local",
username="admin",
password="secret",
default_callback_port=43439,
interface_configs={
InterfaceConfig(central_name="ccu-main", interface=Interface.HMIP_RF, port=2010)
},
)
central = config.create_central()
await central.start()
for device in central.devices:
print(f"{device.name}: {device.device_address}")
await central.stop()
```
See [Getting Started](https://sukramj.github.io/aiohomematic/getting_started/) for more examples.
## Requirements
- **Python**: 3.13+
- **CCU Firmware**: CCU2 ≥2.61.x, CCU3 ≥3.61.x
- There is no active testing to identify the minimum required firmware versions.
### Important Notes on Backend Support
**Actively tested backends:**
- OpenCCU with current firmware
**Not actively tested:**
- CCU2
- Homegear
Running outdated firmware versions or using untested backends (CCU2, Homegear) is at your own risk.
**Recommendation:** Keep your CCU firmware up to date. Outdated versions may lack bug fixes, security patches, and compatibility improvements that this library depends on.
## Related Projects
| Project | Description |
| --------------------------------------------------------------------- | -------------------------- |
| [Homematic(IP) Local](https://github.com/sukramj/homematicip_local) | Home Assistant integration |
| [aiohomematic Documentation](https://sukramj.github.io/aiohomematic/) | Full documentation |
## Contributing
Contributions are welcome! See the [Contributing Guide](https://sukramj.github.io/aiohomematic/contributor/contributing/) for details.
## License
MIT License - see [LICENSE](LICENSE) for details.
## Support
[![GitHub Sponsors][sponsorsbadge]][sponsors]
If you find this project useful, consider [sponsoring](https://github.com/sponsors/SukramJ) the development.
[license-shield]: https://img.shields.io/github/license/SukramJ/aiohomematic.svg?style=for-the-badge
[pythonbadge]: https://img.shields.io/badge/Python-3.13+-blue?style=for-the-badge&logo=python&logoColor=white
[release]: https://github.com/SukramJ/aiohomematic/releases
[releasebadge]: https://img.shields.io/github/v/release/SukramJ/aiohomematic?style=for-the-badge
[sponsorsbadge]: https://img.shields.io/github/sponsors/SukramJ?style=for-the-badge&label=Sponsors&color=ea4aaa
[sponsors]: https://github.com/sponsors/SukramJ
| text/markdown | null | SukramJ <sukramj@icloud.com>, Daniel Perna <danielperna84@gmail.com> | null | null | MIT License | home, automation, homematic, openccu, homegear | [
"Development Status :: 5 - Production/Stable",
"Environment :: No Input/Output (Daemon)",
"Intended Audience :: End Users/Desktop",
"Intended Audience :: Developers",
"License :: OSI Approved :: MIT License",
"Natural Language :: English",
"Natural Language :: German",
"Operating System :: OS Independ... | [] | https://github.com/sukramj/aiohomematic | null | >=3.13 | [] | [] | [] | [
"aiohttp>=3.12.0",
"pydantic>=2.10.0",
"python-slugify>=8.0.0",
"orjson>=3.11.0; extra == \"fast\""
] | [] | [] | [] | [
"Homepage, https://github.com/sukramj/aiohomematic",
"Source Code, https://github.com/sukramj/aiohomematic",
"Bug Reports, https://github.com/sukramj/aiohomematic/issues",
"Changelog, https://github.com/sukramj/aiohomematic/blob/devel/changelog.md",
"Documentation, https://sukramj.github.io/aiohomematic",
... | twine/6.1.0 CPython/3.13.7 | 2026-02-19T11:43:38.956188 | aiohomematic-2026.2.20.tar.gz | 637,561 | b9/30/145f428b1d5092763e16c8292dab4aeaa071045c4e6da71a1c5d834f94c5/aiohomematic-2026.2.20.tar.gz | source | sdist | null | false | e33cf507ee781ed48591e7801b6e02c0 | 151a1340868fee231e84d788b0670c4c2a0a6e4e24bd2f52ef9c75a8b9f3b96f | b930145f428b1d5092763e16c8292dab4aeaa071045c4e6da71a1c5d834f94c5 | null | [
"LICENSE"
] | 773 |
2.4 | pyomp | 0.5.1 | Python OpenMP library based on Numba | [Documentation](https://pyomp.readthedocs.io/en/latest/?badge=latest)
[Wheels build workflow](https://github.com/Python-for-HPC/PyOMP/actions/workflows/build-upload-wheels.yml)
[Conda build workflow](https://github.com/Python-for-HPC/PyOMP/actions/workflows/build-upload-conda.yml)
[Launch on Binder](https://mybinder.org/v2/gh/Python-for-HPC/binder/HEAD)
# PyOMP
OpenMP for Python CPU/GPU parallel programming, powered by Numba.
PyOMP provides a familiar interface for CPU/GPU programming using OpenMP
abstractions adapted for Python.
Besides effortless programmability, PyOMP generates fast code using Numba's JIT
compiler based on LLVM, which is competitive with equivalent C/C++ implementations.
PyOMP is developed and distributed as an *extension* to Numba, so it uses
Numba as a dependency.
It is currently tested with several Numba versions on the following
architecture and operating system combinations: linux-64 (x86_64), osx-arm64
(mac), and linux-arm64.
The [compatibility matrix](#compatibility-matrix) with Numba versions records
the possible combinations.
Installation is possible through `pip` or `conda`, detailed in the next section.
As PyOMP builds on top of the LLVM OpenMP infrastructure, it also inherits its
limitations: GPU support is only available on Linux.
Also, PyOMP currently supports only NVIDIA GPUs with AMD GPU support in development.
## Installation
### Pip
PyOMP is distributed through PyPI, installable using the following command:
```bash
pip install pyomp
```
### Conda
PyOMP is also distributed through Conda, installable using the following command:
```bash
conda install -c python-for-hpc -c conda-forge pyomp
```
### Compatibility matrix
| PyOMP | Numba |
| ----- | --------------- |
| 0.5.x | 0.62.x - 0.63.x |
| 0.4.x | 0.61.x |
| 0.3.x | 0.57.x - 0.60.x |
Besides a standard installation, we also provide the following options to
quickly try out PyOMP online or through a container.
### Trying it out
#### Binder
You can try it out for free on a multi-core CPU in JupyterLab at the following link:
https://mybinder.org/v2/gh/Python-for-HPC/binder/HEAD
#### Docker
We also provide pre-built containers for arm64 and amd64 architectures with
PyOMP and Jupyter pre-installed.
The following shows how to access the container from a terminal or via Jupyter.
First pull the container
```bash
docker pull ghcr.io/python-for-hpc/pyomp:latest
```
To use the terminal, run a shell on the container
```bash
docker run -it ghcr.io/python-for-hpc/pyomp:latest /bin/bash
```
To use Jupyter, run without arguments and forward port 8888.
```bash
docker run -it -p 8888:8888 ghcr.io/python-for-hpc/pyomp:latest
```
Jupyter will start as a service on localhost with token authentication by default.
Grep the url with the token from the output and copy it to the browser.
```bash
...
[I 2024-09-15 17:24:47.912 ServerApp] http://127.0.0.1:8888/tree?token=<token>
...
```
## Usage
From `numba.openmp` import the `@njit` decorator and the `openmp_context`.
Apply `@njit` to the function you want to parallelize, and express the parallelism
through OpenMP directives inside `with` contexts.
Enjoy the simplicity of OpenMP with Python syntax and parallel performance.
For a list of supported OpenMP directives and more detailed information, check
out the [Documentation](https://pyomp.readthedocs.io).
PyOMP supports both CPU and GPU programming.
For GPU programming, PyOMP implements OpenMP's `target` directive for offloading
and supports the `device` clause to select the offloading target device.
For more information see the [GPU
Offloading](https://pyomp.readthedocs.io/en/latest/openmp.html#openmp-and-gpu-offloading-support)
section in the documentation.
### Example
This is an example of calculating $\pi$ with PyOMP, using a `parallel for` loop
for CPU parallelism:
```python
from numba.openmp import njit
from numba.openmp import openmp_context as openmp
@njit
def calc_pi(num_steps):
step = 1.0 / num_steps
red_sum = 0.0
with openmp("parallel for reduction(+:red_sum) schedule(static)"):
for j in range(num_steps):
x = ((j-1) - 0.5) * step
red_sum += 4.0 / (1.0 + x * x)
pi = step * red_sum
return pi
print("pi =", calc_pi(1000000))
```
and this is the same example using GPU offloading:
```python
from numba.openmp import njit
from numba.openmp import openmp_context as openmp
from numba.openmp import omp_get_thread_num
@njit
def calc_pi(num_steps):
step = 1.0 / num_steps
red_sum = 0.0
with openmp("target map(tofrom: red_sum)"):
with openmp("loop private(x) reduction(+:red_sum)"):
for i in range(num_steps):
x = (i + 0.5) * step
red_sum += 4.0 / (1.0 + x * x)
pi = step * red_sum
return pi
print("pi =", calc_pi(1000000))
```
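To pin the kernel to a specific accelerator, the `device` clause mentioned in the Usage section can be added to the `target` directive string. A minimal sketch based on the same kernel (the device index `0` is an arbitrary assumption; omit the clause to use the default device):
```python
from numba.openmp import njit
from numba.openmp import openmp_context as openmp

@njit
def calc_pi_on_device(num_steps):
    step = 1.0 / num_steps
    red_sum = 0.0
    # Same offloaded kernel as above, but pinned to device 0 via the device clause.
    with openmp("target device(0) map(tofrom: red_sum)"):
        with openmp("loop private(x) reduction(+:red_sum)"):
            for i in range(num_steps):
                x = (i + 0.5) * step
                red_sum += 4.0 / (1.0 + x * x)
    return step * red_sum
```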
## Support
We welcome any feedback, bug reports, or feature requests.
Please open an [Issue](https://github.com/Python-for-HPC/PyOMP/issues) or post
in [Discussions](https://github.com/Python-for-HPC/PyOMP/discussions).
## License
PyOMP is licensed under the BSD-2-Clause license (see [LICENSE](LICENSE)).
The package includes the LLVM OpenMP runtime library, which is distributed under
the Apache License v2.0 with LLVM Exceptions. See
[LICENSE-OPENMP.txt](LICENSE-OPENMP.txt) for details.
| text/markdown | null | null | null | Giorgis Georgakoudis <georgakoudis1@llnl.gov> | null | null | [
"Programming Language :: Python :: 3",
"Operating System :: OS Independent",
"Development Status :: 4 - Beta",
"Intended Audience :: Developers",
"Topic :: Software Development :: Compilers"
] | [] | null | null | <3.15,>=3.10 | [] | [] | [] | [
"numba<0.64,>=0.62",
"lark",
"cffi",
"setuptools"
] | [] | [] | [] | [
"Homepage, https://github.com/Python-for-HPC/PyOMP",
"Issues, https://github.com/Python-for-HPC/PyOMP/issues"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-19T11:42:08.470176 | pyomp-0.5.1.tar.gz | 6,698,560 | e5/10/343164b5d33e46a0ad882e0e0078b01bb29f4bcf29c39b1548fb5a13bdc1/pyomp-0.5.1.tar.gz | source | sdist | null | false | 60850d3d44fc7a25e0e7880ba8903950 | 3e97d4057461370ab67f6d7e0e1dfdb31b414e922f3a5ed269315aa7cdff3d4e | e510343164b5d33e46a0ad882e0e0078b01bb29f4bcf29c39b1548fb5a13bdc1 | BSD-2-Clause | [
"LICENSE",
"LICENSE-OPENMP.txt"
] | 1,085 |
2.4 | autofepg | 0.2.0 | AutoFE - Playground: Automatic Feature Engineering & Selection for Kaggle Playground Competitions | # 🧪 AutoFE-PG
**Automatic Feature Engineering & Selection for Kaggle Playground Competitions**



AutoFE-PG is a production-ready library that automatically generates, evaluates, and selects engineered features to boost your tabular ML models — with zero target leakage.
Designed specifically for Kaggle Playground competitions where synthetic data is common, it includes specialized strategies for **domain alignment**, **Bayesian priors from external data**, **dual-representation features**, and **cross-dataset density analysis**.
---
## ✨ Key Features
| Feature | Description |
|---|---|
| Auto column detection | Automatically identifies categorical vs. numerical columns |
| 25+ feature strategies | Target encoding, domain alignment, Bayesian priors, dual representation, cross-dataset frequency, count encoding, digit extraction, arithmetic interactions, group statistics, and more |
| Zero target leakage | All target-dependent features use strict out-of-fold encoding |
| Greedy forward selection | Adds features one-by-one, keeping only those that improve CV score |
| Optional backward pruning | Removes redundant features after forward selection |
| Original data integration | Snap synthetic values to real clinical grids and inject historical priors |
| GPU acceleration | Automatically uses XGBoost GPU if available |
| Time budget | Set a wall-clock limit; the search stops gracefully |
| Sampling support | Evaluate on a subsample for faster iteration |
| Custom XGBoost params | Pass your own hyperparameters |
| Score variance tracking | Reports mean ± std across folds |
| Classification & regression | Supports both tasks with auto-detection |
| Detailed reports | Auto-generated `.txt` report with full selection history |
---
## 🚀 Quick Start
### Installation
```bash
pip install autofepg
```
Or install dependencies directly:
```bash
pip install -r requirements.txt
```
### Minimal Example
```python
import pandas as pd
from autofepg import select_features
train = pd.read_csv("train.csv")
test = pd.read_csv("test.csv")
X_train = train.drop(columns=["id", "target"])
y_train = train["target"]
X_test = test.drop(columns=["id"])
result = select_features(
X_train, y_train, X_test,
task="classification",
time_budget=3600,
)
X_train_new = result["X_train"]
X_test_new = result["X_test"]
print(f"Baseline AUC: {result['base_score']:.6f}")
print(f"Best AUC: {result['best_score']:.6f}")
print(f"Features added: {len(result['selected_features'])}")
```
### With Original Data (Domain Alignment + Bayesian Priors)
When working with Kaggle Playground competitions where synthetic data is generated from a real dataset, you can pass the original data to unlock powerful de-noising and prior-injection strategies:
```python
import pandas as pd
from autofepg import select_features
train = pd.read_csv("train.csv")
test = pd.read_csv("test.csv")
original = pd.read_csv("original.csv")
X_train = train.drop(columns=["id", "target"])
y_train = train["target"]
X_test = test.drop(columns=["id"])
X_original = original.drop(columns=["target"])
y_original = original["target"]
result = select_features(
X_train, y_train, X_test,
task="classification",
time_budget=3600,
original_df=X_original,
original_target=y_original,
)
X_train_new = result["X_train"]
X_test_new = result["X_test"]
print(f"Baseline AUC: {result['base_score']:.6f}")
print(f"Best AUC: {result['best_score']:.6f}")
print(f"Features added: {len(result['selected_features'])}")
```
### Using the Class API
```python
from autofepg import AutoFE
import pandas as pd
original = pd.read_csv("original.csv")
autofe = AutoFE(
task="classification",
n_folds=5,
time_budget=1800,
improvement_threshold=0.0001,
backward_selection=True,
sample=10000,
original_df=original.drop(columns=["target"]),
original_target=original["target"],
xgb_params={
"n_estimators": 1000,
"max_depth": 8,
"learning_rate": 0.05,
},
)
X_train_new, X_test_new = autofe.fit_select(
X_train, y_train, X_test,
aux_target_cols=["employment_status", "debt_to_income_ratio"],
)
# Inspect results
print(autofe.get_selected_feature_names())
history_df = autofe.get_history()
details_df = autofe.get_selection_details()
```
---
## 📖 How It Works
### 1. Feature Generation
AutoFE-PG generates candidates from a hardcoded priority sequence ordered by expected impact:
| Priority | Strategy | Description | Leakage-free? |
|---|---|---|---|
| 1 | **Domain Alignment** | Snap synthetic values to nearest real-data grid point; expose residual | ✅ No target |
| 2 | **Bayesian Priors** | Inject P(target \| value) from original dataset as external knowledge | ✅ No train target |
| 3 | Target Encoding (single) | OOF mean-target per category | ✅ OOF |
| 4 | Count Encoding (single) | Value counts per category | ✅ No target |
| 5 | **Dual Representation** | Continuous + label-encoded copy of each numerical column | ✅ No target |
| 6 | Target Encoding on pairs | OOF TE on column pair interactions | ✅ OOF |
| 7 | Count Encoding on pairs | Value counts on column pair interactions | ✅ No target |
| 8 | Frequency Encoding | Normalized value counts | ✅ No target |
| 9 | **Cross-Dataset Frequency & Rarity** | How common/rare a value is across train+test+original | ✅ No target |
| 10 | Missing Indicators | Binary NaN flags | ✅ No target |
| 11 | TE with auxiliary targets | OOF TE using a different column as target | ✅ OOF |
| 12 | Unary transforms | log1p, sqrt, square, reciprocal | ✅ No target |
| 13 | Arithmetic interactions | add, sub, mul, div between numerical pairs | ✅ No target |
| 14 | Polynomial features | Square and cross-product terms | ✅ No target |
| 15 | Pairwise label interactions | Label-encoded column pairs | ✅ No target |
| 16 | TE/CE on digits | Target/count encoding on extracted digits | ✅ OOF / No target |
| 17 | Digit × Category TE | Digit-category interaction with OOF TE | ✅ OOF |
| 18 | Quantile binning | Equal-frequency bins | ✅ No target |
| 19 | Raw digit extraction | i-th digit of numerical values | ✅ No target |
| 20 | Digit interactions | Within-feature and cross-feature digit combos | ✅ No target |
| 21 | Rounding features | Round to various decimal places / magnitudes | ✅ No target |
| 22 | Num-to-Cat conversion | Equal-width binning | ✅ No target |
| 23 | Group statistics & deviations | Mean, std, min, max, median by group; diff/ratio to group | ✅ No target |
### 2. Greedy Forward Selection
Each candidate is evaluated by adding it to the current feature set and running XGBoost K-fold CV.
A feature is kept only if it improves the score beyond the configured threshold.
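For intuition, the forward step behaves like the standalone sketch below (not the library's internal code; the scorer, XGBoost settings, and candidate format are illustrative assumptions):
```python
from sklearn.model_selection import cross_val_score
from xgboost import XGBClassifier

def greedy_forward(X_base, y, candidates, threshold=1e-4, n_folds=5):
    """Accept a candidate column only if it lifts the CV score beyond `threshold`."""
    model = XGBClassifier(n_estimators=200, eval_metric="logloss", verbosity=0)
    best = cross_val_score(model, X_base, y, cv=n_folds, scoring="roc_auc").mean()
    current, selected = X_base.copy(), []
    for name, values in candidates.items():        # {feature_name: column values}
        trial = current.assign(**{name: values})
        score = cross_val_score(model, trial, y, cv=n_folds, scoring="roc_auc").mean()
        if score > best + threshold:               # keep only improving features
            best, current, selected = score, trial, selected + [name]
    return current, selected, best
```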
### 3. Optional Backward Pruning
After forward selection, features are tested for removal.
If removing a feature improves (or maintains) the score, it is permanently dropped.
---
## 🧬 Synthetic Data Strategies
AutoFE-PG includes four strategies specifically designed for Kaggle Playground competitions where the training data is synthetically generated from a real-world dataset.
### Domain Alignment (De-noising)
The synthetic generation process often introduces "fuzzy" values that wouldn't exist in a real clinical setting. **Domain Alignment** forces every continuous value in the synthetic set to its nearest neighbor in the original dataset, effectively "snapping" the data back to its true clinical grid. The residual (distance to the snap point) is also exposed as a feature, since it encodes how much the synthetic process perturbed the value.
```python
from autofepg.generators import DomainAlignmentFeature
import numpy as np
# Reference values from original dataset
ref_vals = original["blood_pressure"].dropna().unique()
gen = DomainAlignmentFeature("blood_pressure", reference_values=ref_vals)
```
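Conceptually, the snapping is a nearest-neighbour lookup against the original data's value grid. A plain NumPy sketch of that idea (not the generator's actual implementation; `train` is assumed to be loaded as in the earlier examples):
```python
import numpy as np

grid = np.sort(np.asarray(ref_vals, dtype=float))    # reference grid from above
x = train["blood_pressure"].to_numpy(dtype=float)    # synthetic values to align

# For each value, pick the nearer of its two surrounding grid points.
idx = np.clip(np.searchsorted(grid, x), 1, len(grid) - 1)
left, right = grid[idx - 1], grid[idx]
snapped = np.where(np.abs(x - left) <= np.abs(x - right), left, right)
residual = x - snapped                               # how far the synthetic process moved each value
```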
### Bayesian-Style Priors (External Mapping)
Instead of letting the model learn strictly from the training data, **Bayesian Priors** import external knowledge from the original dataset. By calculating `P(target | value)` in the original file and injecting those probabilities as features, the model starts with a "hint" about which values are clinically dangerous. This uses no information from the training target — zero leakage.
```python
from autofepg.generators import BayesianPriorFeature
# Pre-computed from original data
prior_map = original.groupby("cholesterol")["heart_disease"].mean().to_dict()
gen = BayesianPriorFeature("cholesterol", prior_map=prior_map)
```
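Applying such a map is then a plain lookup with a fallback for unseen values, which is roughly what the injected feature amounts to (a conceptual sketch, not the generator's exact output; `train`/`test` are assumed to be loaded as in the earlier examples and the output column name is illustrative):
```python
global_rate = original["heart_disease"].mean()   # fallback prior for values never seen in the original data

train["prior__cholesterol"] = train["cholesterol"].map(prior_map).fillna(global_rate)
test["prior__cholesterol"] = test["cholesterol"].map(prior_map).fillna(global_rate)
```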
### Dimensionality Expansion (Dual Representation)
The model uses a "dual-representation" strategy for numerical features:
- **Continuous copy**: Treated as a number to capture linear or threshold trends
- **Categorical copy**: Treated as a discrete label-encoded value to allow the tree to create very specific, non-linear splits on exact values
```python
from autofepg.generators import DualRepresentationFeature
gen = DualRepresentationFeature("age")
# Produces: dual__age_cont (float) + dual__age_cat (int label)
```
### Frequency and Density Analysis
Cross-dataset frequency analysis calculates the rarity of values across the entire data ecosystem (train, test, and original). This helps the model identify if a specific data point is an outlier or part of a common cluster — a strong signal in synthetic datasets where certain "modes" are over-represented.
```python
from autofepg.generators import CrossDatasetFrequencyFeature, ValueRarityFeature
import pandas as pd
# Combine counts across all datasets
combined = pd.concat([train["age"], test["age"], original["age"]])
eco_counts = combined.value_counts()
eco_total = len(combined)
freq_gen = CrossDatasetFrequencyFeature("age", eco_counts, eco_total)
rare_gen = ValueRarityFeature("age", eco_counts, eco_total)
```
> **Note:** When you pass `original_df`, `original_target`, and `X_test` to `AutoFE` or `select_features`, all four strategies are automatically generated and evaluated. No manual setup required.
---
## ⚙️ Configuration
| Parameter | Type | Default | Description |
|---|---|---|---|
| `task` | str | `"auto"` | `"classification"`, `"regression"`, or `"auto"` |
| `n_folds` | int | `5` | Number of CV folds |
| `time_budget` | float | `None` | Max seconds (wall clock) |
| `improvement_threshold` | float | `1e-7` | Min score delta to keep a feature |
| `sample` | int | `None` | Subsample rows for faster CV |
| `backward_selection` | bool | `False` | Run backward pruning after forward |
| `max_pair_cols` | int | `20` | Max columns for pairwise features |
| `max_digit_positions` | int | `4` | Max digit positions to extract |
| `xgb_params` | dict | `None` | Custom XGBoost hyperparameters |
| `metric_fn` | callable | `None` | Custom metric `(y_true, y_pred) -> float` |
| `metric_direction` | str | `None` | `"maximize"` or `"minimize"` |
| `random_state` | int | `42` | Random seed |
| `verbose` | bool | `True` | Print progress |
| `original_df` | DataFrame | `None` | Original (real) dataset features for domain alignment & priors |
| `original_target` | Series | `None` | Original dataset target for Bayesian prior computation |
| `report_path` | str | `"autofepg_report.txt"` | Path for detailed selection report |
---
## 📊 Output
The `select_features()` function returns a dictionary:
```python
{
"X_train": pd.DataFrame, # Augmented training data
"X_test": pd.DataFrame, # Augmented test data (if provided)
"autofe": AutoFE, # Fitted AutoFE object
"history": pd.DataFrame, # Full selection history
"selected_features": List[str], # Names of kept features
"selection_details": pd.DataFrame, # Per-feature improvement details
"base_score": float, # Baseline CV mean
"base_score_std": float, # Baseline CV std
"best_score": float, # Final CV mean
"best_score_std": float, # Final CV std
}
```
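Typical downstream use of the returned frames (a sketch; the model settings are arbitrary and `y_train` comes from the Quick Start example):
```python
from xgboost import XGBClassifier

model = XGBClassifier(n_estimators=500, learning_rate=0.05)
model.fit(result["X_train"], y_train)                     # train on the augmented features
test_proba = model.predict_proba(result["X_test"])[:, 1]  # score the augmented test set
```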
---
## 🧪 Running Tests
```bash
pytest tests/ -v
```
---
## 📁 Project Structure
```text
autofepg/
├── autofepg/
│ ├── __init__.py # Public API & exports
│ ├── utils.py # GPU detection, task inference, metrics
│ ├── generators.py # All feature generator classes (25+)
│ ├── builder.py # FeatureCandidateBuilder
│ ├── engine.py # XGBoost CV engine
│ └── core.py # AutoFE class + select_features()
├── tests/
│ ├── __init__.py
│ └── test_autofepg.py # Unit and integration tests
├── examples/
│ ├── example_classification.py
│ ├── example_regression.py
│ └── example_with_original.py
├── .github/
│ └── workflows/
│ └── ci.yml
├── .gitignore
├── LICENSE
├── README.md
├── CHANGELOG.md
├── CONTRIBUTING.md
├── Makefile
├── pyproject.toml
├── setup.py
└── requirements.txt
```
---
## 📋 Generator Reference
### Original Strategies
| Generator | Class | Target used? |
|---|---|---|
| Target Encoding | `TargetEncoding` | ✅ OOF |
| Count Encoding | `CountEncoding` | ❌ |
| Frequency Encoding | `FrequencyEncoding` | ❌ |
| Pair Interaction | `PairInteraction` | ❌ |
| TE on Pairs | `TargetEncodingOnPair` | ✅ OOF |
| CE on Pairs | `CountEncodingOnPair` | ❌ |
| Digit Extraction | `DigitFeature` | ❌ |
| Digit Interaction | `DigitInteraction` | ❌ |
| TE on Digits | `TargetEncodingOnDigit` | ✅ OOF |
| CE on Digits | `CountEncodingOnDigit` | ❌ |
| Digit × Cat TE | `DigitBasePairTE` | ✅ OOF |
| Rounding | `RoundFeature` | ❌ |
| Quantile Binning | `QuantileBinFeature` | ❌ |
| Num-to-Cat | `NumToCat` | ❌ |
| TE with Aux Target | `TargetEncodingAuxTarget` | ✅ OOF (aux) |
| Arithmetic Interaction | `ArithmeticInteraction` | ❌ |
| Missing Indicator | `MissingIndicator` | ❌ |
| Group Statistics | `GroupStatFeature` | ❌ |
| Group Deviation | `GroupDeviationFeature` | ❌ |
| Unary Transform | `UnaryTransform` | ❌ |
| Polynomial Feature | `PolynomialFeature` | ❌ |
### Synthetic Data Strategies (NEW in v0.2.0)
| Generator | Class | Requires | Target used? |
|---|---|---|---|
| Domain Alignment | `DomainAlignmentFeature` | `original_df` | ❌ |
| Bayesian Prior | `BayesianPriorFeature` | `original_df` + `original_target` | ❌ (external only) |
| Dual Representation | `DualRepresentationFeature` | — | ❌ |
| Cross-Dataset Frequency | `CrossDatasetFrequencyFeature` | `original_df` or `X_test` | ❌ |
| Value Rarity | `ValueRarityFeature` | `original_df` or `X_test` | ❌ |
---
## 📝 Changelog
### v0.2.0
- **Domain Alignment**: Snap synthetic values to nearest real-data grid point with residual feature
- **Bayesian Priors**: Inject external P(target|value) from original dataset
- **Dual Representation**: Continuous + categorical copy of numerical features
- **Cross-Dataset Frequency**: Value frequency across train+test+original ecosystem
- **Value Rarity**: Log-inverse-frequency score for outlier detection
- Added `original_df` and `original_target` parameters to `AutoFE` and `select_features`
- Report now includes original data status
- Version bump to 0.2.0
### v0.1.3
- Initial public release
- 20+ feature generation strategies
- Greedy forward selection with optional backward pruning
- GPU acceleration support
- Detailed text report generation
---
## 📄 License
MIT License. See [LICENSE](LICENSE) for details.
| text/markdown | AutoFE-PG Contributors | null | null | null | MIT | feature-engineering, machine-learning, kaggle, playground, xgboost, automated-ml | [
"Development Status :: 4 - Beta",
"Intended Audience :: Developers",
"Intended Audience :: Science/Research",
"License :: OSI Approved :: MIT License",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.8",
"Programming Language :: Python :: 3.9",
"Programming Language :: Pyth... | [] | https://github.com/thomastschinkel/autofepg | null | >=3.8 | [] | [] | [] | [
"numpy>=1.21.0",
"pandas>=1.3.0",
"scikit-learn>=1.0.0",
"xgboost>=1.7.0",
"pytest>=7.0; extra == \"dev\"",
"pytest-cov>=4.0; extra == \"dev\"",
"flake8>=5.0; extra == \"dev\"",
"black>=22.0; extra == \"dev\"",
"isort>=5.0; extra == \"dev\""
] | [] | [] | [] | [
"Homepage, https://github.com/yourusername/autofepg",
"Repository, https://github.com/yourusername/autofepg",
"Issues, https://github.com/yourusername/autofepg/issues"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-19T11:41:43.239938 | autofepg-0.2.0.tar.gz | 36,016 | 1e/45/8056f0e3751e50922257a8b49ee39d652a45b1f2cc7f1fec6b9d7dc3ab0e/autofepg-0.2.0.tar.gz | source | sdist | null | false | c51b69118c69bc6ff90ea399c449d297 | 16bff906112a55f7019ae463ede92c54502e575e0f7f6f2d62b1b216a66ed861 | 1e458056f0e3751e50922257a8b49ee39d652a45b1f2cc7f1fec6b9d7dc3ab0e | null | [
"LICENSE"
] | 243 |
2.4 | changepoint-doctor | 0.0.3 | Fast change-point detection bindings backed by Rust. | # changepoint-doctor Python Bindings (MVP-A)
`changepoint-doctor` exposes fast offline change-point detection from Rust into Python.
For citation and provenance policy, see [`../CITATION.cff`](../CITATION.cff)
and [`../docs/clean_room_policy.md`](../docs/clean_room_policy.md).
## Install
From PyPI (target release `0.0.3`):
```bash
python -m pip install --upgrade pip
python -m pip install changepoint-doctor==0.0.3
```
For local development from this repository:
```bash
cd cpd/python
python -m pip install --upgrade pip maturin
maturin develop --release --manifest-path ../crates/cpd-python/Cargo.toml
python -m pip install --upgrade ".[dev]"
```
Apple Silicon contributors should run the architecture checks and sanity path in
[`../docs/python_apple_silicon_toolchain.md`](../docs/python_apple_silicon_toolchain.md)
before debugging `pyo3`/linker errors.
Common extras:
- `plot`: `python -m pip install "changepoint-doctor[plot]==0.0.3"`
- `notebooks`: `python -m pip install "changepoint-doctor[notebooks]==0.0.3"`
- `parity`: `python -m pip install "changepoint-doctor[parity]==0.0.3"`
- `dev`: `python -m pip install "changepoint-doctor[dev]==0.0.3"`
`plot`/`notebooks`/`parity` extras only install optional Python tooling. They do
not toggle Rust compile-time features. Rust features are set when building the
extension (for example `maturin develop --features preprocess,serde ...`).
> Install/import naming: install with `python -m pip install changepoint-doctor`, then import with `import cpd` in Python. Optional compatibility alias: `import changepoint_doctor as cpd`.
## API Map
- `cpd.Pelt`: high-level PELT detector.
- `cpd.Binseg`: high-level Binary Segmentation detector.
- `cpd.Fpop`: high-level FPOP detector (L2 cost only).
- `cpd.detect_offline`: low-level API for explicit detector/cost/constraints/stopping/preprocess selection, including `detector="segneigh"` (exact fixed-K DP; `dynp` alias supported).
- `cpd.OfflineChangePointResult`: typed result object with breakpoints and diagnostics.
## Streaming `update()` vs `update_many()` Policy
`update_many()` now uses a size-aware GIL strategy in Rust bindings:
- Workloads with `< 16` scalar work items (`n * d`) keep the GIL (lower overhead for tiny micro-batches).
- Workloads with `>= 16` scalar work items (`n * d`) release the GIL (`py.allow_threads`) for throughput and thread fairness.
To reproduce the benchmark snapshot used for this policy:
```bash
cd cpd/python
python -m pip install --upgrade ".[dev]"
pytest -q tests/test_streaming_perf_contract.py
```
Optional controls:
- `CPD_PY_STREAMING_PERF_ENFORCE=1`: enable stricter ratio gates.
- `CPD_PY_STREAMING_PERF_REPORT_OUT=/tmp/cpd-python-streaming-perf.json`: write JSON metrics.
The perf contract uses median latency with outlier-triggered retry rounds to reduce scheduler-noise flakiness.
Reference run (local dev machine, `tests/test_streaming_perf_contract.py`, median ms):
| Batch size | `update()` median ms | `update_many()` median ms | `update_many()` speedup vs `update()` |
| --- | ---: | ---: | ---: |
| 1 | 0.0035 | 0.0097 | 0.36x |
| 8 | 0.0177 | 0.0194 | 0.91x |
| 16 | 0.0356 | 0.0310 | 1.15x |
| 64 | 0.1308 | 0.0891 | 1.47x |
| 4096 | 7.8216 | 4.4616 | 1.75x |
## Masking Risk Guidance
If BinSeg diagnostics indicate masking risk (for example warnings that closely
spaced weaker changes may be hidden), prefer Wild Binary Segmentation (WBS) in
Rust/offline flows (`cpd-offline::Wbs`) for stronger recovery.
Python high-level APIs expose `cpd.Pelt`, `cpd.Binseg`, and `cpd.Fpop`.
WBS and SegNeigh are not yet exposed as Python high-level detector classes; use `detect_offline(...)`.
## Quickstart
See [`QUICKSTART.md`](./QUICKSTART.md) for a full walkthrough.
## Reproducibility Modes
`detect_offline(..., repro_mode=...)` supports `strict`, `balanced` (default),
and `fast`.
For deterministic contracts, cross-platform expectations, and tolerance gates,
see [`../docs/reproducibility_modes.md`](../docs/reproducibility_modes.md).
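A minimal example of pinning the mode on the low-level API (arguments follow the `detect_offline` signature listed later in this README; the signal is synthetic):
```python
import numpy as np
import cpd

x = np.concatenate([np.zeros(60), np.full(60, 3.0)]).astype(np.float64)

res = cpd.detect_offline(
    x,
    detector="pelt",
    cost="l2",
    constraints={"min_segment_len": 2},
    stopping={"n_bkps": 1},
    repro_mode="strict",  # the other modes are "balanced" (the default) and "fast"
)
print(res.breakpoints)
```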
## Result JSON Contract
`OfflineChangePointResult.to_json()` / `OfflineChangePointResult.from_json(...)`
follow the versioned contract in
[`../docs/result_json_contract.md`](../docs/result_json_contract.md), with the
canonical schema marker at `diagnostics.schema_version`.
When available, build provenance is emitted under `diagnostics.build` (for
Python adapters this includes ABI and enabled feature context).
In `0.x`, schema compatibility follows the bounded version window documented in
[`../VERSIONING.md`](../VERSIONING.md): readers accept only supported
schema-marker versions (currently `1..=2` for offline result fixtures).
Serialization + plotting workflow:
```python
import numpy as np
import cpd
x = np.concatenate([
np.zeros(40, dtype=np.float64),
np.full(40, 8.0, dtype=np.float64),
np.full(40, -4.0, dtype=np.float64),
])
pelt = cpd.Pelt(model="l2").fit(x).predict(n_bkps=2)
binseg = cpd.Binseg(model="l2").fit(x).predict(n_bkps=2)
fpop = cpd.Fpop(min_segment_len=2).fit(x).predict(n_bkps=2)
low = cpd.detect_offline(
x,
detector="pelt",
cost="l2",
constraints={"min_segment_len": 2},
stopping={"n_bkps": 2},
)
segneigh = cpd.detect_offline(
x,
detector="segneigh", # 'dynp' alias also supported
cost="l2",
constraints={"min_segment_len": 2},
stopping={"n_bkps": 2},
)
payload = pelt.to_json()
restored = cpd.OfflineChangePointResult.from_json(payload)
assert restored.breakpoints == pelt.breakpoints
try:
fig = restored.plot(x, title="Detected breakpoints")
except ImportError:
# Plotting remains optional.
# Install with: python -m pip install "changepoint-doctor[plot]==0.0.3"
fig = None
```
Compatibility + limitations:
- `from_json(...)` accepts only supported schema markers (`diagnostics.schema_version`,
currently `1..=2` in `0.x`).
- `to_json()` writes the current schema marker (currently `1`) and preserves additive
unknown fields when round-tripping payloads.
- `plot()` requires optional plotting dependencies (`changepoint-doctor[plot]`).
- `plot(values=None, ...)` requires per-segment summaries in the result; if segments
are unavailable, pass explicit `values`.
- `plot(ax=...)` is supported only for univariate data (`diagnostics.d == 1`).
These paths are smoke-tested in CI in
[`tests/test_integration_mvp_a.py`](./tests/test_integration_mvp_a.py), including
fixture compatibility checks and example-script execution.
## Stopping and Penalty Guide
Ruptures-compatible naming is supported in Python:
- `n_bkps`: exact number of change points (`Stopping::KnownK`)
- `pen`: manual penalty scalar (`Stopping::Penalized(Penalty::Manual(...))`)
- `min_segment_len`: minimum segment size (`Constraints.min_segment_len`)
When to use each stopping style:
- `n_bkps` (`KnownK`): use when you know the expected number of changes and need an exact count.
- `pen="bic"`: good default when you want automatic model-selection behavior that scales with sample size.
- `pen="aic"`: less conservative than BIC; can recover weaker changes but may over-segment noisy data.
- `pen=<float>`: use when you need tight operational control over sensitivity (lower finds more changes, higher finds fewer).
- `stopping={"PenaltyPath": [...]}` (pipeline serde form): request multiple penalties in one PELT sweep and inspect diagnostics notes for each path entry.
BIC/AIC complexity terms are model-aware by default:
- `l2` uses `params_per_segment=2` (mean + residual variance proxy)
- `normal` uses `params_per_segment=3` (mean + variance + residual term)
- `normal_full_cov` uses model-aware effective complexity for BIC/AIC: `1 + d + d(d+1)/2` (mean vector + full covariance + residual term)
Advanced users can still override `params_per_segment` in low-level pipeline detector config.
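As a quick illustration of the first three stopping styles on the high-level API (a sketch on a synthetic step signal; passing string penalties such as `"bic"` directly to `predict(pen=...)` is assumed from the list above):
```python
import numpy as np
import cpd

x = np.concatenate([np.zeros(50), np.full(50, 5.0)]).astype(np.float64)

exact_k = cpd.Pelt(model="l2").fit(x).predict(n_bkps=1)    # known number of changes
auto_pen = cpd.Pelt(model="l2").fit(x).predict(pen="bic")  # automatic model selection (assumed form)
manual = cpd.Pelt(model="l2").fit(x).predict(pen=25.0)     # manual sensitivity control
print(exact_k.breakpoints, auto_pen.breakpoints, manual.breakpoints)
```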
### SegNeigh Sizing Guide (`detector="segneigh"` / `"dynp"`)
SegNeigh is exact dynamic programming for fixed-`k` segmentation (`n_bkps` / `KnownK`).
- Let `m` be the effective candidate count after constraints (`jump`, `candidate_splits`, `min_segment_len` filtering).
- Expected scaling is approximately:
- runtime: `O(k * m^2)`
- memory: `O(k * m + m)`
- Practical guidance:
- Use SegNeigh when `k` is known and `m` is modest.
- Increase `jump` and/or `min_segment_len` first when runtime or memory is high.
- Prefer `pelt`/`fpop` when `k` is unknown or when very large `n` requires penalty-based model selection.
Reproducible local benchmark harness for representative `(n, k)` regimes:
```bash
cd cpd
cargo bench -p cpd-bench --bench offline_segneigh
```
## Preprocess Config Contract
`detect_offline(..., preprocess=...)` validates keys and method payloads.
Unknown preprocess stage keys fail with `ValueError`.
Default PyPI wheels include preprocess support.
Canonical shape:
```python
preprocess = {
"detrend": {"method": "linear"}, # or {"method": "polynomial", "degree": 2}
"deseasonalize": {"method": "differencing", "period": 2}, # or method="stl_like" (period >= 2)
"winsorize": {"lower_quantile": 0.05, "upper_quantile": 0.95}, # optional fields
"robust_scale": {"mad_epsilon": 1e-9, "normal_consistency": 1.4826}, # optional fields
}
```
Validation details:
- `detrend.method`: `"linear"` or `"polynomial"` (`degree` required for polynomial).
- `deseasonalize.method`: `"differencing"` (`period >= 1`) or `"stl_like"` (`period >= 2`).
- `winsorize`: defaults to `lower_quantile=0.01`, `upper_quantile=0.99` when omitted.
- `robust_scale`: defaults to `mad_epsilon=1e-9`, `normal_consistency=1.4826` when omitted.
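Passing the stages into detection then looks like this (a sketch; `x` is the synthetic signal from the earlier workflow example and `preprocess` is the dict defined above):
```python
res = cpd.detect_offline(
    x,
    detector="pelt",
    cost="l2",
    constraints={"min_segment_len": 2},
    stopping={"n_bkps": 2},
    preprocess=preprocess,
)
print(res.breakpoints)
```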
## Example Scripts
- `examples/synthetic_signal.py`: synthetic step-function detection with all MVP-A APIs.
- `examples/csv_detect.py`: detect breakpoints from a CSV column.
- `examples/plot_breakpoints.py`: render detected breakpoints over a synthetic signal.
Run from repo root:
```bash
cpd/python/.venv/bin/python cpd/python/examples/synthetic_signal.py
cpd/python/.venv/bin/python cpd/python/examples/csv_detect.py --csv /path/to/data.csv --column 0
cpd/python/.venv/bin/python cpd/python/examples/plot_breakpoints.py --out /tmp/cpd_breakpoints.png
```
## Notebook Examples
- `examples/notebooks/01_offline_algorithms.ipynb`: quick comparison of offline detectors (`Pelt`, `Binseg`, `Fpop`, `segneigh`, and pipeline-form `wbs`).
- `examples/notebooks/02_online_algorithms.ipynb`: streaming workflows for `Bocpd`, `Cusum`, and `PageHinkley`.
- `examples/notebooks/03_doctor_recommendations.ipynb`: doctor recommendation workflow with live CLI execution and snapshot fallback.
- `examples/notebooks/README.md`: notebook launch instructions and workflow overview.
Launch from `cpd/python`:
```bash
python -m pip install --upgrade "changepoint-doctor[notebooks]==0.0.3"
jupyter lab
```
## Ruptures Parity Suite
To run the differential parity suite locally:
```bash
cd cpd/python
python -m pip install --upgrade ".[parity]"
CPD_PARITY_PROFILE=smoke pytest -q tests/test_ruptures_parity.py
CPD_PARITY_PROFILE=full CPD_PARITY_REPORT_OUT=/tmp/cpd-parity-report.json pytest -q tests/test_ruptures_parity.py
```
See [`../docs/parity_ruptures.md`](../docs/parity_ruptures.md) for corpus structure,
tolerance rules, and CI thresholds.
## BOCPD Bayesian Parity Suite
To run BOCPD parity against
`hildensia/bayesian_changepoint_detection` (preferred pin with fallback):
```bash
cd cpd/python
python -m pip install --upgrade ".[parity]"
REF_REPO="https://github.com/hildensia/bayesian_changepoint_detection.git"
PREFERRED_REF="f3f8f03af0de7f4f98bd54c7ca0b5f6d0b0f6f8c"
python -m pip install "git+${REF_REPO}@${PREFERRED_REF}" || \
python -m pip install "git+${REF_REPO}"
CPD_BOCPD_PARITY_PROFILE=smoke pytest -q tests/test_bocpd_bayesian_parity.py
CPD_BOCPD_PARITY_PROFILE=full CPD_BOCPD_PARITY_REPORT_OUT=/tmp/cpd-bocpd-parity-report.json pytest -q tests/test_bocpd_bayesian_parity.py
```
## Extras Validation
Run the metadata sanity checks for optional extras:
```bash
cd cpd/python
pytest -q tests/test_optional_extras_contract.py
```
Optional install commands (one per workflow extra):
```bash
python -m pip install "changepoint-doctor[plot]==0.0.3"
python -m pip install "changepoint-doctor[notebooks]==0.0.3"
python -m pip install "changepoint-doctor[parity]==0.0.3"
python -m pip install "changepoint-doctor[dev]==0.0.3"
```
See [`../docs/parity_bocpd_bayesian.md`](../docs/parity_bocpd_bayesian.md) for
comparison logic, corpus layout, and threshold gates.
## Wheel CI Policy
Cross-platform wheel hardening is enforced by
[`../../.github/workflows/wheel-build.yml`](../../.github/workflows/wheel-build.yml)
and [`../../.github/workflows/wheel-smoke.yml`](../../.github/workflows/wheel-smoke.yml).
- Build backend: `cibuildwheel`
- Platforms:
- Linux manylinux x86_64
- macOS universal2 (validated on `macos-13` and `macos-14`)
- Windows amd64 (`windows-2022`)
- Python matrix:
- Full (`main`/nightly/tag): `3.9`, `3.10`, `3.11`, `3.12`, `3.13`
- Tiered (`pull_request`): representative subset with at least one `3.13` row
- NumPy matrix:
- `1.26.*` and `2.*`
- `3.13 + numpy 1.26.*` is excluded
- Python `3.13` rows are marked `experimental` and soft-gated (`continue-on-error`)
Default wheels are BLAS-free by policy:
- Native dependency reports are gated by
[`../../.github/scripts/wheel_dependency_gate.py`](../../.github/scripts/wheel_dependency_gate.py)
using `auditwheel` (Linux), `delocate` (macOS), and `delvewheel` (Windows).
- Runtime smoke asserts `low.diagnostics.blas_backend is None` for default wheel installs.
## Troubleshooting
1. `TypeError: expected float32 or float64`
Cause: integer/object arrays are passed into `.fit(...)` or `detect_offline(...)`.
Fix: cast first, e.g. `x = np.asarray(x, dtype=np.float64)`.
2. Input contains NaN/missing values and detection fails
Cause: MVP-A Python APIs reject missing values under `MissingPolicy::Error`.
Fix: impute/drop NaNs before calling detectors.
3. `RuntimeError: fit(...) must be called before predict(...)`
Cause: `.predict(...)` called on an unfitted high-level detector.
Fix: always call `.fit(x)` first.
4. Extension import fails after Rust/Python upgrade
Cause: wheel/extension built against a different interpreter environment.
Fix: rebuild via `maturin develop --release` in the active environment.
5. Apple Silicon linker mismatch (`arm64` vs `x86_64`)
Cause: host shell/interpreter/libpython architectures do not match.
Fix: follow
[`../docs/python_apple_silicon_toolchain.md`](../docs/python_apple_silicon_toolchain.md)
to verify architecture and run the CI-aligned local sanity flow.
## API Reference Outline
- `Pelt(model="l2"|"normal"|"normal_full_cov", min_segment_len, jump, max_change_points)`
- `.fit(x)` -> detector
- `.predict(pen=..., n_bkps=...)` -> `OfflineChangePointResult`
- `Binseg(model="l2"|"normal"|"normal_full_cov", min_segment_len, jump, max_change_points, max_depth)`
- `.fit(x)` -> detector
- `.predict(pen=..., n_bkps=...)` -> `OfflineChangePointResult`
- `Fpop(min_segment_len, jump, max_change_points)` (`l2` only)
- `.fit(x)` -> detector
- `.predict(pen=..., n_bkps=...)` -> `OfflineChangePointResult`
- `detect_offline(x, pipeline=None, detector, cost, constraints, stopping, preprocess, repro_mode, return_diagnostics)`
- `detector` accepts `pelt`, `binseg`, `fpop`, or `segneigh` (`dynp` alias). `fpop` requires `cost="l2"`.
- `segneigh` is exact fixed-K dynamic programming (best when `stopping` is `n_bkps`/`KnownK`); runtime/memory can grow quickly on large `n` and high `k`.
- `cost` accepts `l1_median`, `l2`, `normal`, `normal_full_cov`, and (pipeline-only) `nig`.
- `pipeline` accepts both simplified Python dicts (for example `{"detector": {"kind": "segneigh"}}`) and Rust `PipelineSpec` serde shape (for example `{"detector": {"Offline": {"SegNeigh": {...}}}, ...}`).
- `OfflineChangePointResult`
- fields: `breakpoints`, `change_points`, `scores`, `segments`, `diagnostics`
- helpers: `to_json()`, `from_json(payload)`, `plot(values=None, *, ax=None, title=...)`
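A minimal usage sketch based on the outline above; the module import path and the toy signal are assumptions, not taken verbatim from the package:
```python
import numpy as np
from changepoint_doctor import Pelt  # import path assumed from the package name

# Two-segment toy signal, cast to float64 as advised in troubleshooting item 1.
x = np.concatenate([np.zeros(100), np.full(100, 5.0)]).astype(np.float64)

res = Pelt(model="l2").fit(x).predict(n_bkps=1)  # .fit(...) returns the detector per the outline
print(res.breakpoints)   # field documented on OfflineChangePointResult
print(res.to_json())     # documented helper
```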
| text/markdown; charset=UTF-8; variant=GFM | changepoint-doctor contributors | null | null | null | MIT OR Apache-2.0 | null | [
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3 :: Only",
"Programming Language :: Rust",
"License :: OSI Approved :: Apache Software License",
"License :: OSI Approved :: MIT License",
"Operating System :: OS Independent",
"Topic :: Scientific/Engineering"
] | [] | null | null | >=3.9 | [] | [] | [] | [
"numpy>=1.20",
"maturin>=1.7; extra == \"dev\"",
"pytest>=8.0; extra == \"dev\"",
"pytest-cov>=5.0; extra == \"dev\"",
"ruff>=0.6; extra == \"dev\"",
"mypy>=1.10; extra == \"dev\"",
"pre-commit>=3.7; extra == \"dev\"",
"jsonschema>=4.0; extra == \"dev\"",
"jupyterlab>=4.0; extra == \"dev\"",
"ipyk... | [] | [] | [] | [
"Homepage, https://github.com/xang1234/changepoint-doctor",
"Issues, https://github.com/xang1234/changepoint-doctor/issues",
"Repository, https://github.com/xang1234/changepoint-doctor"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-19T11:40:46.003215 | changepoint_doctor-0.0.3.tar.gz | 303,896 | a2/93/6340c9e93382f845efbb0b378e6dee4302bc9272a6d2371da4c11de16467/changepoint_doctor-0.0.3.tar.gz | source | sdist | null | false | 310760435274d47ef94c6479d117f7c8 | 7b76b264e6099bd8416d7df44d94551cca2529f97a3ee606a36398f2d85b4448 | a2936340c9e93382f845efbb0b378e6dee4302bc9272a6d2371da4c11de16467 | null | [] | 367 |
2.4 | dccQuantities | 2.0.0 | Python classes for working with DDC calibration data | # dccQuantities
dccQuantities is a Python library designed for users of PTB’s Digital Calibration Certificates (DCC) in XML format. It provides an object‑oriented interface to parse, serialize, and manipulate calibration data with full support for uncertainties and units. Arithmetic works naturally on scalars, scalar‑vector mixes, and same‑length vectors element‑wise, preserving uncertainty propagation and metadata throughout.
[](https://gitlab1.ptb.de/digitaldynamicmeasurement/dcc-and-dsi/dccQuantities/-/commits/devel)
[](https://gitlab1.ptb.de/digitaldynamicmeasurement/dcc-and-dsi/dccQuantities/-/releases)
---
## Key Features
- **DCC XML Parsing & Serialization**
Import certificates from XML into Python objects and export back to XML, JSON, CSV, Excel, or pandas DataFrames.
- **Uncertainty & Unit Awareness**
All quantity objects wrap values as `ufloat` (via `metas_unclib`) and units via `dsi_unit`, ensuring correct propagation in calculations.
- **Object‑Oriented Arithmetic**
Standard operators (`+`, `-`, `*`, `/`, `**`) are overloaded on:
- **`DccQuantityType`**: single or tabulated quantities
- **`SiRealList`**, **`SiComplexList`**, **`SiHybrid`**: 1D/2D arrays
- **Tables & Fancy Indexing**
The classes `DccLongTable` and `DccFlatTable` transparently implement NumPy-like indexing on efficient table structures described in the [table document](doc/tabellen/tables-de.md). Fancy indexing is supported, and the return type is always a new table.
---
## Linux dependencies
The package requires .NET support on Linux. For that reason, the `mono` runtime must be installed:
```
sudo apt install mono-runtime
```
## Installation
There are multiple ways to install the package.
Read them all and choose the best one for your case:
1. From PyPI (core functionality):
```bash
pip install dccQuantities
```
This installs the latest release from the `main` branch.
2. Installing unreleased changes:
```bash
pip install git+https://gitlab1.ptb.de/digitaldynamicmeasurement/dcc-and-dsi/dccQuantities.git@devel
```
Please consider that unreleased changes might be unstable and can break your code.
3. Cloning the repository:
```bash
git clone https://gitlab1.ptb.de/digitaldynamicmeasurement/dccQuantities.git
cd dccQuantities
pip install -e .
```
This is the best option for developers.
## Deploy local documentation
It is possible to build and read the documentation locally.
To do so, clone the repository as described in option 3 of the _Installation_ section.
Once the repository is cloned and the current working directory is `dccQuantities/`, install the optional dependencies for documentation:
```bash
pip install .[docs]
```
Now you can deploy and open the documentation by running the following command at your terminal:
```
quantity-docs
```
## Under the Hood (Test‑Driven Behavior)
The library’s design is guided by its test suite:
1. **Core Parsing** (`tests/test_parser.py`): reads `<DccQuantityTable>` and `<DccQuantityType>` elements, building Python objects.
2. **Naming** (`tests/test_dccName.py`): parses and normalizes `<DccLangName>` entries for multilingual support.
3. **Quantity Discovery** (`tests/test_quantityTypeCollector.py`): auto‑registers data handlers via `AbstractQuantityTypeData` subclasses.
4. **List Types** (`tests/test_SiRealList_*.py`): handles real, complex, and hybrid lists, including broadcasting and label merging.
5. **Table Flattening** (`tests/test_tables.py`): covers the table classes.
6. **Round‑Trip Serialization** (`tests/test_serilizer.py`): ensures parse→serialize yields equivalent XML.
7. **JSON Interchange** (`tests/test_dccQuantTabJSONDumpingAndLoadingFromFile.json`): lossless JSON dump/load.
---
## Contributing & Contact
We welcome improvements, bug reports, and new features. To contribute:
1. **Fork** the repository.
2. **Create** a feature branch.
3. **Add** tests for new functionality.
4. **Submit** a merge request.
We highly encourage direct personal contact for design discussions or questions. Feel free to create issues; even if you think your question or comment is not worth an issue, it always is!
Or reach out to the maintainer directly:
- **Benedikt Seeger**: benedikt.seeger@ptb.de
## License
This project is licensed under the [LGPL‑2.1‑or‑later](LICENSE).
| text/markdown | Vanessa Stehr, Thomas Bruns | Benedikt Seeger <benedikt.seeger@ptb.de>, Jaime Gonzalez Gomez <jaime.gonzalez-gomez@ptb.de> | null | null | null | null | [
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.9",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Operating System :: OS Independent"
] | [] | null | null | >=3.9 | [] | [] | [] | [
"dsiUnits~=3.1",
"dccXMLJSONConv~=3.0.0.dev8",
"metas_unclib",
"numpy",
"PyBackport; python_version < \"3.11\"",
"pythonnet",
"mkdocs; extra == \"docs\"",
"mkdocstrings-python; extra == \"docs\"",
"mike; extra == \"docs\"",
"ruff; extra == \"docs\""
] | [] | [] | [] | [
"Homepage, https://gitlab1.ptb.de/digitaldynamicmeasurement/dccQuantities"
] | twine/6.2.0 CPython/3.12.0 | 2026-02-19T11:40:40.526697 | dccquantities-2.0.0.tar.gz | 67,073 | 8d/68/32394385998869856209a7e329678dd387fe71a05d89beaa5a58b3711aa5/dccquantities-2.0.0.tar.gz | source | sdist | null | false | b52f1584d24716fd55e139a42fd58050 | 289b453449fa39d2028f339cab5c9946deaacd157337f6e7d54d683cfb106354 | 8d6832394385998869856209a7e329678dd387fe71a05d89beaa5a58b3711aa5 | LGPL-2.1-or-later | [
"LICENSE"
] | 0 |
2.4 | regulayer | 2.0.1 | Record provable AI decisions with tamper-detectable audit trails | # Regulayer SDK
Record provable AI decisions with tamper-detectable audit trails.
## Installation
```bash
pip install regulayer
```
## Quick Start
### 1. Configure the SDK
```python
from regulayer import configure, trace
configure(api_key="rl_live_your_api_key")
```
### 2. Record a Decision
```python
with trace(
system="loan_approval",
risk_level="high",
model_name="credit-model-v2"
) as t:
t.set_input({"income": 50000, "credit_score": 720})
t.set_output({"approved": True, "limit": 10000})
```
## Documentation
For full documentation, visit [docs.regulayer.tech](https://docs.regulayer.tech/python).
| text/markdown | null | Regulayer <support@regulayer.tech> | null | null | MIT | ai, governance, audit, compliance, decisions, provable | [
"Development Status :: 4 - Beta",
"Intended Audience :: Developers",
"License :: OSI Approved :: MIT License",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.8",
"Programming Language :: Python :: 3.9",
"Programming Language :: Python :: 3.10",
"Programming Language :: Pyt... | [] | null | null | >=3.8 | [] | [] | [] | [
"httpx<1.0.0,>=0.24.0",
"pytest>=7.0; extra == \"dev\"",
"pytest-httpx>=0.21; extra == \"dev\""
] | [] | [] | [] | [
"Homepage, https://regulayer.tech",
"Documentation, https://docs.regulayer.tech/python",
"Repository, https://github.com/regulayer/regulayer-python",
"Bug Tracker, https://github.com/regulayer/regulayer-python/issues"
] | twine/6.2.0 CPython/3.14.3 | 2026-02-19T11:40:33.965538 | regulayer-2.0.1.tar.gz | 10,814 | 04/01/04f52ac1b3cbea77ec14702611a8a8a031926791f87815d3bb86d9f6dfdc/regulayer-2.0.1.tar.gz | source | sdist | null | false | ff60fe5649766205e393eba4cebfd1ab | 04bca528a8727c3aa10ae11e3773c47e8dc853b021ad9098deecff878a274891 | 040104f52ac1b3cbea77ec14702611a8a8a031926791f87815d3bb86d9f6dfdc | null | [] | 226 |
2.4 | guardrails-blindfold | 0.1.0 | Guardrails AI validator for PII detection and protection using Blindfold | # Guardrails Blindfold
A [Guardrails AI](https://guardrailsai.com) validator that detects and protects PII in LLM outputs using the [Blindfold](https://blindfold.dev) API.
| | |
|---|---|
| Developed by | [Blindfold](https://blindfold.dev) |
| License | MIT |
| Input/Output | String |
| Hub | `blindfold/pii_protection` |
## Installation
```bash
pip install guardrails-blindfold
```
Set your Blindfold API key:
```bash
export BLINDFOLD_API_KEY=your-api-key
```
Get a free API key at [app.blindfold.dev](https://app.blindfold.dev).
## Quick Start
```python
from guardrails import Guard
from guardrails_blindfold import BlindfoldPII
guard = Guard().use(BlindfoldPII(on_fail="fix"))
result = guard.validate("Contact John Doe at john@example.com")
print(result.validated_output)
# → "Contact <Person_1> at <Email Address_1>"
```
## Parameters
| Parameter | Type | Default | Description |
|---|---|---|---|
| `policy` | `str` | `"basic"` | Detection policy — see table below |
| `pii_method` | `str` | `"tokenize"` | How to fix detected PII |
| `region` | `str` | `None` | `"eu"` or `"us"` for data residency |
| `entities` | `list` | `None` | Specific entity types to detect |
| `score_threshold` | `float` | `None` | Confidence threshold (0.0–1.0) |
| `api_key` | `str` | `None` | Falls back to `BLINDFOLD_API_KEY` env var |
| `on_fail` | `str` | `None` | Guardrails failure action |
### Policies
| Policy | Entities | Use Case |
|---|---|---|
| `basic` | Names, emails, phones, locations | General PII protection |
| `gdpr_eu` | EU-specific: IBANs, addresses, dates of birth | GDPR compliance |
| `hipaa_us` | PHI: SSNs, MRNs, medical terms | HIPAA compliance |
| `pci_dss` | Card numbers, CVVs, expiry dates | PCI DSS compliance |
| `strict` | All entity types, lower threshold | Maximum detection |
### PII Methods
| Method | Description | Reversible |
|---|---|---|
| `tokenize` | Replace with tokens (`<Person_1>`) | Yes |
| `redact` | Remove permanently (`[REDACTED]`) | No |
| `mask` | Partially hide (`J****oe`) | No |
| `hash` | Deterministic hash (`HASH_abc123`) | No |
| `synthesize` | Replace with fake data (`Jane Smith`) | No |
| `encrypt` | AES-256 encryption | Yes (with key) |
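For instance, a policy can be combined with a non-default `pii_method` from the table above; this is an illustrative sketch (the sample text and output are invented), and fuller examples follow in the next section.
```python
from guardrails import Guard
from guardrails_blindfold import BlindfoldPII

# Partially hide detected PII instead of tokenizing it
guard = Guard().use(BlindfoldPII(policy="basic", pii_method="mask", on_fail="fix"))
result = guard.validate("Call John Doe on +1 555 0100")
print(result.validated_output)  # names/phone numbers partially hidden, e.g. "J****oe"
```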
## Examples
### GDPR Compliance (EU Region)
```python
guard = Guard().use(
BlindfoldPII(
policy="gdpr_eu",
region="eu",
on_fail="fix",
)
)
result = guard.validate("Email hans.mueller@example.de about the meeting")
# → "Email <Email Address_1> about the meeting"
```
### HIPAA — Redact PHI
```python
guard = Guard().use(
BlindfoldPII(
policy="hipaa_us",
pii_method="redact",
region="us",
on_fail="fix",
)
)
result = guard.validate("Patient John Smith, SSN 123-45-6789")
# → "Patient [REDACTED], SSN [REDACTED]"
```
### Block Output if PII Detected
```python
guard = Guard().use(
BlindfoldPII(policy="strict", on_fail="exception")
)
# Raises ValidationError if PII is found
result = guard.validate("No PII here") # passes
result = guard.validate("Email john@example.com") # raises
```
### Chain with Other Validators
```python
from guardrails_blindfold import BlindfoldPII
guard = Guard().use(
BlindfoldPII(policy="strict", on_fail="fix"),
).use(
AnotherValidator(on_fail="exception"),
)
```
### Detect Specific Entity Types
```python
guard = Guard().use(
BlindfoldPII(
entities=["Email Address", "Phone Number"],
on_fail="fix",
)
)
```
## Data Residency
Use the `region` parameter to ensure PII is processed in a specific jurisdiction:
- `region="eu"` — processed in Frankfurt, Germany
- `region="us"` — processed in Virginia, US
```python
# EU data stays in the EU
guard = Guard().use(
BlindfoldPII(policy="gdpr_eu", region="eu", on_fail="fix")
)
```
## Links
- [Blindfold Documentation](https://docs.blindfold.dev)
- [Blindfold Dashboard](https://app.blindfold.dev)
- [Guardrails AI Documentation](https://docs.guardrailsai.com)
- [GitHub](https://github.com/blindfold-dev/guardrails-blindfold)
| text/markdown | null | Blindfold <hello@blindfold.dev> | null | null | MIT | guardrails, pii, blindfold, privacy, gdpr, hipaa, ai-safety | [
"Development Status :: 4 - Beta",
"Intended Audience :: Developers",
"License :: OSI Approved :: MIT License",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.9",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Py... | [] | null | null | >=3.9 | [] | [] | [] | [
"guardrails-ai>=0.5.0",
"blindfold-sdk>=1.3.0"
] | [] | [] | [] | [
"Homepage, https://blindfold.dev",
"Documentation, https://docs.blindfold.dev",
"Repository, https://github.com/blindfold-dev/guardrails-blindfold"
] | twine/6.2.0 CPython/3.13.2 | 2026-02-19T11:40:30.313022 | guardrails_blindfold-0.1.0.tar.gz | 6,884 | d1/11/5de9bad93754d6fd856e46e0d5746aa29e190c4959add18b07c1144d1f23/guardrails_blindfold-0.1.0.tar.gz | source | sdist | null | false | 0b5998bb2ff445d7e04203ca85509db9 | 026e7b49cc4569ea98a9bd7249d4f6f2df001148ef750a2a8379b41cedf41a04 | d1115de9bad93754d6fd856e46e0d5746aa29e190c4959add18b07c1144d1f23 | null | [
"LICENSE"
] | 226 |
2.3 | leetcode-py-sdk | 0.43.1 | Modern Python LeetCode practice environment with automated problem generation, beautiful data structure visualizations, and comprehensive testing | # LeetCode Practice Environment Generator 🚀
[](https://github.com/wislertt/leetcode-py/actions/workflows/cd.yml)
[](https://github.com/wislertt/leetcode-py/actions/workflows/cd.yml)
[](https://sonarcloud.io/summary/new_code?id=wislertt_leetcode-py)
[](https://sonarcloud.io/summary/new_code?id=wislertt_leetcode-py)
[](https://sonarcloud.io/summary/new_code?id=wislertt_leetcode-py)
[](https://codecov.io/gh/wislertt/leetcode-py)
[](https://pypi.python.org/pypi/leetcode-py-sdk)
[](https://pepy.tech/projects/leetcode-py-sdk)
[](https://github.com/wislertt/leetcode-py/)
[](https://github.com/wislertt/leetcode-py)
[](https://github.com/sponsors/wislertt)
A Python package to generate professional LeetCode practice environments. Features automated problem generation from LeetCode URLs, beautiful data structure visualizations (TreeNode, ListNode, GraphNode), and comprehensive testing with 10+ test cases per problem. Built with professional development practices including CI/CD, type hints, and quality gates.
## Table of Contents
- [What's Included](#whats-included)
- [Quick Start](#quick-start)
- [Problem Structure](#problem-structure)
- [Key Features](#key-features)
- [Usage Patterns](#usage-patterns)
- [Development Setup](#development-setup)
- [Helper Classes](#helper-classes)
- [Commands](#commands)
- [Architecture](#architecture)
- [Quality Metrics](#quality-metrics)
**What makes this different:**
- 🤖 **[LLM-Assisted Workflow](https://github.com/wislertt/leetcode-py/blob/main/docs/llm-assisted-problem-creation.md)**: Generate new problems instantly with AI assistance
- 🎨 **Visual Debugging**: Interactive tree/graph rendering with Graphviz and anytree
- 🧪 **Production Testing**: Comprehensive test suites with edge cases and reproducibility verification
- 🚀 **Modern Python**: PEP 585/604 type hints, uv, and professional tooling
- 📊 **Quality Assurance**: 95%+ test coverage, security scanning, automated linting
- ⚡ **[Powerful CLI](https://github.com/wislertt/leetcode-py/blob/main/docs/cli-usage.md)**: Generate problems anywhere with `lcpy` command
## <a id="whats-included"></a>🎯 What's Included
**Current Problem Sets**:
- **grind-75** - Essential coding interview questions from [Grind 75](https://www.techinterviewhandbook.org/grind75/) ✅ Complete
- **grind** - Extended Grind collection including all Grind 75 plus additional problems 🚧 Partial
- **blind-75** - Original [Blind 75](https://leetcode.com/problem-list/xi4ci4ig/) curated list ✅ Complete
- **neetcode-150** - Comprehensive [NeetCode 150](https://neetcode.io/practice) problem set 🚧 Partial
- **algo-master-75** - Curated algorithmic mastery problems 🚧 Partial
**Coverage**: 130+ unique problems across all major coding interview topics and difficulty levels.
**Note**: Some problem sets are partially covered. We're actively working to complete all collections. [Contributions welcome!](https://github.com/wislertt/leetcode-py/blob/main/CONTRIBUTING.md)
## <a id="quick-start"></a>🚀 Quick Start
### System Requirements
- **Python 3.10+** - Python runtime
- **Graphviz** - Graph visualization library ([install guide](https://graphviz.org/download/))
```bash
# Install the package
pip install leetcode-py-sdk
# Generate problems anywhere
lcpy gen -n 1 # Generate Two Sum
lcpy gen -t grind-75 # Generate all Grind 75 problems
lcpy gen -t neetcode-150 # Generate NeetCode 150 problems
lcpy list -t grind-75 # List Grind 75 problems
lcpy list -t blind-75 # List Blind 75 problems
# Start practicing
cd leetcode/two_sum
python -m pytest test_solution.py # Run tests
# Edit solution.py, then rerun tests
```
### Bulk Generation Example
```bash
lcpy gen --problem-tag grind-75 --output leetcode # Generate all Grind 75 problems
lcpy gen --problem-tag neetcode-150 --output leetcode # Generate NeetCode 150 problems
lcpy gen --problem-tag blind-75 --output leetcode # Generate Blind 75 problems
```
<img src="https://raw.githubusercontent.com/wislertt/leetcode-py/main/docs/images/problems-generation.png" alt="Problem Generation" style="pointer-events: none;">
_Bulk generation output showing "Generated problem:" messages for all 75 Grind problems_
<img src="https://raw.githubusercontent.com/wislertt/leetcode-py/main/docs/images/problems-generation-2.png" alt="Problem Generation 2" style="pointer-events: none;">
_Generated folder structure showing all 75 problem directories after command execution_
## <a id="problem-structure"></a>📁 Problem Structure
Each problem follows a consistent, production-ready template:
```
leetcode/two_sum/
├── README.md # Problem description with examples and constraints
├── solution.py # Implementation with type hints and TODO placeholder
├── test_solution.py # Comprehensive parametrized tests (10+ test cases)
├── helpers.py # Test helper functions
├── playground.py # Interactive debugging environment (converted from .ipynb)
└── __init__.py # Package marker
```
<img src="https://raw.githubusercontent.com/wislertt/leetcode-py/main/docs/images/readme-example.png" alt="README Example" style="pointer-events: none;">
_README format that mirrors LeetCode's problem description layout_
<img src="https://raw.githubusercontent.com/wislertt/leetcode-py/main/docs/images/solution-boilerplate.png" alt="Solution Boilerplate" style="pointer-events: none;">
_Solution boilerplate with type hints and TODO placeholder_
<img src="https://raw.githubusercontent.com/wislertt/leetcode-py/main/docs/images/test-example.png" alt="Test Example" style="pointer-events: none;">
_Comprehensive parametrized tests with 10+ test cases - executable and debuggable in local development environment_
<img src="https://raw.githubusercontent.com/wislertt/leetcode-py/main/docs/images/logs-in-test-solution.png" alt="Test Logging" style="pointer-events: none;">
_Beautiful colorful test output with loguru integration for enhanced debugging and test result visualization_
## <a id="key-features"></a>✨ Key Features
### Production-Grade Development Environment
- **Modern Python**: PEP 585/604 type hints, snake_case conventions
- **Comprehensive Linting**: black, isort, ruff, mypy with nbqa for notebooks
- **High Test Coverage**: 10+ test cases per problem including edge cases
- **Beautiful Logging**: loguru integration for enhanced test debugging
- **CI/CD Pipeline**: Automated testing, security scanning, and quality gates
### Enhanced Data Structure Visualization
Professional-grade visualization for debugging complex data structures with dual rendering modes:
- **TreeNode**: Beautiful tree rendering with anytree and Graphviz integration
- **ListNode**: Clean arrow-based visualization with cycle detection
- **GraphNode**: Interactive graph rendering for adjacency list problems
- **DictTree**: Box-drawing character trees perfect for Trie implementations
#### Jupyter Notebook Integration (HTML Rendering)
<img src="https://raw.githubusercontent.com/wislertt/leetcode-py/main/docs/images/tree-viz.png" alt="Tree Visualization" style="pointer-events: none;">
_Interactive tree visualization using Graphviz SVG rendering in Jupyter notebooks_
<img src="https://raw.githubusercontent.com/wislertt/leetcode-py/main/docs/images/linkedlist-viz.png" alt="LinkedList Visualization" style="pointer-events: none;">
_Professional linked list visualization with Graphviz in Jupyter environment_
#### Terminal/Console Output (String Rendering)
<img src="https://raw.githubusercontent.com/wislertt/leetcode-py/main/docs/images/tree-str-viz.png" alt="Tree String Visualization" style="pointer-events: none;">
_Clean ASCII tree rendering using anytree for terminal debugging and logging_
<img src="https://raw.githubusercontent.com/wislertt/leetcode-py/main/docs/images/linkedlist-str-viz.png" alt="LinkedList String Visualization" style="pointer-events: none;">
_Simple arrow-based list representation for console output and test debugging_
### Flexible Notebook Support
- **Template Generation**: Creates Jupyter notebooks (`.ipynb`) by default with rich data structure rendering
- **User Choice**: Use `jupytext` to convert notebooks to Python files, or keep as `.ipynb` for interactive exploration
- **Repository State**: This repo converts them to Python files (`.py`) for better version control
- **Dual Rendering**: Automatic HTML visualization in notebooks, clean string output in terminals
<img src="https://raw.githubusercontent.com/wislertt/leetcode-py/main/docs/images/notebook-example.png" alt="Notebook Example" style="pointer-events: none;">
_Interactive multi-cell playground with rich data structure visualization for each problem_
## <a id="usage-patterns"></a>🔄 Usage Patterns
### CLI Usage (Global Installation)
Perfect for quick problem generation anywhere. See the 📖 **[Complete CLI Usage Guide](https://github.com/wislertt/leetcode-py/blob/main/docs/cli-usage.md)** for detailed documentation with all options and examples.
## <a id="development-setup"></a>🛠️ Development Setup
For working within this repository to generate additional LeetCode problems using LLM assistance:
### Development Requirements
- **Python 3.10+** - Modern Python runtime with latest type system features
- **uv** - Fast Python package manager
- **Bake** - Modern task runner (uses typer for CLI)
- **Git** - Version control system
- **Graphviz** - Graph visualization library ([install guide](https://graphviz.org/download/))
```bash
# Clone repository for development
git clone https://github.com/wislertt/leetcode-py.git
cd leetcode-py
uv sync
# Generate problems from JSON templates
bake p-gen -p problem_name
bake p-test -p problem_name
# Regenerate all existing problems
bake gen-all-problems
```
### LLM-Assisted Problem Creation
To extend the problem collection beyond the current catalog, leverage an LLM assistant within your IDE (Cursor, GitHub Copilot Chat, Amazon Q, etc.).
📖 **[Complete LLM-Assisted Problem Creation Guide](https://github.com/wislertt/leetcode-py/blob/main/docs/llm-assisted-problem-creation.md)** - Comprehensive guide with screenshots and detailed workflow.
**Quick Start:**
```bash
# Problem generation commands:
"Add problem 198. House Robber"
"Add problem 198. House Robber. tag: grind"
# Test enhancement commands:
"Enhance test cases for two_sum problem"
"Fix test reproducibility for binary_tree_inorder_traversal"
```
**Required LLM Context**: Include these rule files in your LLM context for automated problem generation and test enhancement:
- [`.claude/commands/problem-creation.md`](https://github.com/wislertt/leetcode-py/blob/main/.claude/commands/problem-creation.md) - Complete problem generation workflow
- [`.claude/commands/test-quality-assurance.md`](https://github.com/wislertt/leetcode-py/blob/main/.claude/commands/test-quality-assurance.md) - Test enhancement and reproducibility verification
**Manual Check**: Find problems needing more test cases:
```bash
uv run python -m leetcode_py.tools.check_test_cases --threshold=10
```
## 🧰 Helper Classes
- **TreeNode**: `from leetcode_py import TreeNode`
- Array ↔ tree conversion: `TreeNode.from_list([1,2,3])`, `tree.to_list()`
- Beautiful anytree text rendering and Graphviz SVG for Jupyter
- Node search: `tree.find_node(value)`
- Generic type support: `TreeNode[int]`, `TreeNode[str]`
- **ListNode**: `from leetcode_py import ListNode`
- Array ↔ list conversion: `ListNode.from_list([1,2,3])`, `node.to_list()`
- Cycle detection with Floyd's algorithm
- Graphviz visualization for Jupyter notebooks
- Generic type support: `ListNode[int]`, `ListNode[str]`
- **GraphNode**: `from leetcode_py import GraphNode`
- Adjacency list conversion: `GraphNode.from_adjacency_list([[2,4],[1,3],[2,4],[1,3]])`
- Clone detection: `original.is_clone(cloned)`
- Graphviz visualization for undirected graphs
- DFS traversal utilities
- **DictTree**: `from leetcode_py.data_structures import DictTree`
- Perfect for Trie implementations: `DictTree[str]()`
- Beautiful tree rendering with box-drawing characters
- Graphviz visualization for Jupyter notebooks
- Generic key type support
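A short illustrative snippet using the conversion helpers listed above (printed shapes are indicative only):
```python
from leetcode_py import TreeNode, ListNode

tree = TreeNode.from_list([1, 2, 3])   # array -> binary tree
print(tree.to_list())                  # back to array form, e.g. [1, 2, 3]

node = ListNode.from_list([1, 2, 3])   # array -> linked list
print(node.to_list())                  # back to array form
```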
## 🛠️ Commands
### CLI Commands (Global)
📖 **[Complete CLI Usage Guide](https://github.com/wislertt/leetcode-py/blob/main/docs/cli-usage.md)** - Detailed documentation with all options and examples.
```bash
# Generate problems
lcpy gen -n 1 # Single problem by number
lcpy gen -s two-sum # Single problem by slug
lcpy gen -t grind-75 # Bulk generation by tag
lcpy gen -t neetcode-150 # Generate NeetCode 150 problems
lcpy gen -n 1 -n 2 -n 3 # Multiple problems
lcpy gen -t grind-75 -d Easy # Filter by difficulty
lcpy gen -n 1 -o my-problems # Custom output directory
# List problems
lcpy list # All available problems
lcpy list -t grind-75 # Filter by Grind 75 tag
lcpy list -t blind-75 # Filter by Blind 75 tag
lcpy list -t neetcode-150 # Filter by NeetCode 150 tag
lcpy list -d Medium # Filter by difficulty
# Scrape problem data
lcpy scrape -n 1 # Fetch by number
lcpy scrape -s two-sum # Fetch by slug
```
### Development Commands (Repository)
```bash
# Problem-specific operations
bake p-test -p problem_name # Test specific problem
bake p-gen -p problem_name # Generate problem from JSON template
bake p-gen -p problem_name -f # Force regenerate (overwrite existing files)
# Bulk operations
bake test # Run all tests
bake lint # Lint entire codebase
bake gen-all-problems # Regenerate all problems (destructive)
bake gen-all-problems -f # Force regenerate all problems
```
## 🏗️ Architecture
- **Template-Driven**: JSON templates in `leetcode_py/cli/resources/leetcode/json/problems/` drive code generation
- **Cookiecutter Integration**: Uses `leetcode_py/cli/resources/leetcode/{{cookiecutter.problem_name}}/` template for consistent file structure
- **Automated Scraping**: LLM-assisted problem data extraction from LeetCode
- **Version Control Friendly**: Python files by default, optional notebook support
## 📊 Quality Metrics
- **Test Coverage**: 95%+ with comprehensive edge case testing (Codecov integration)
- **Security**: SonarCloud quality gates, Trivy dependency scanning, Gitleaks secret detection
- **Code Quality**: Automated linting with black, isort, ruff, mypy
- **Test Reproducibility**: Automated verification that problems can be regenerated consistently
- **CI/CD**: GitHub Actions for testing, security, pre-commit hooks, and release automation
Perfect for systematic coding interview preparation with professional development practices and enhanced debugging capabilities.
## 💖 Support This Project
[](https://github.com/wislertt/leetcode-py)
[](https://github.com/sponsors/wislertt)
If you find this project helpful, please consider **starring the repo ⭐** or **sponsoring my work 💖**.
Your support helps me maintain and improve this project. Thank you!
| text/markdown | Wisaroot Lertthaweedech | Wisaroot Lertthaweedech <l.wisaroot@gmail.com> | null | null | Apache-2.0 | algorithms, coding-practice, data-structures, interview-prep, leetcode | [
"Development Status :: 4 - Beta",
"Intended Audience :: Developers",
"Intended Audience :: Education",
"License :: OSI Approved :: Apache Software License",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language ... | [] | null | null | >=3.10 | [] | [] | [] | [
"anytree>=2.13.0",
"cookiecutter>=2.6.0",
"graphviz>=0.21",
"json5>=0.13.0",
"loguru>=0.7.3",
"requests>=2.32.5",
"rich>=14.1.0",
"ruff>=0.14.0",
"ty>=0.0.13",
"typer>=0.21.0"
] | [] | [] | [] | [
"Homepage, https://github.com/wislertt/leetcode-py",
"Repository, https://github.com/wislertt/leetcode-py"
] | uv/0.10.4 {"installer":{"name":"uv","version":"0.10.4","subcommand":["publish"]},"python":null,"implementation":{"name":null,"version":null},"distro":{"name":"Ubuntu","version":"24.04","id":"noble","libc":null},"system":{"name":null,"release":null},"cpu":null,"openssl_version":null,"setuptools_version":null,"rustc_version":null,"ci":true} | 2026-02-19T11:38:34.695274 | leetcode_py_sdk-0.43.1.tar.gz | 136,929 | da/2a/c39cea43e44f5769ddc52f0149afcf3cafbc1da14fd87f782427e736c4a0/leetcode_py_sdk-0.43.1.tar.gz | source | sdist | null | false | 7f41b4523f0ef73e2cccef294ba82d70 | b6b408a1fdfa08121b5b6cbfc505d334289fc52451e1d4757cb1cad645b02a4f | da2ac39cea43e44f5769ddc52f0149afcf3cafbc1da14fd87f782427e736c4a0 | null | [] | 681 |
2.4 | merilang | 3.0.0 | Merilang — a desi-flavoured programming language with a full compiler front-end | # Merilang 🇮🇳
**A desi-flavoured programming language with a full compiler front-end — built in Python.**
[](https://github.com/XploitMonk0x01/merilang)
[](https://pypi.org/project/merilang/)
[](LICENSE)
[](https://www.python.org/)
---
## What's New in v3.0 🆕
Merilang has graduated from a basic interpreter to a **full compiler front-end**:
| Phase | What it does |
|---|---|
| 🔍 **Panic-mode Lexer** | Collects *all* bad characters instead of stopping at the first one |
| 🌳 **Panic-mode Parser** | Synchronises after errors and reports *all* syntax problems in one pass |
| 🔬 **Semantic Analyser** | Static type-checking, scope resolution, arity checks — before any code runs |
| 📐 **IR Generator** | Lowers the AST to Three-Address Code (3AC) viewable with `--ir` |
| 🚀 **Interpreter** | Unchanged tree-walking execution on the verified AST |
---
## Features ✨
- **Desi Keywords** — write code in Hindi-inspired syntax (`maan`, `likho`, `kaam`, …)
- **Panic-mode Error Recovery** — see every mistake in one run, not one at a time
- **Static Semantic Analysis** — undefined names, type mismatches and arity errors caught *before* execution
- **IR Dump** — inspect the generated Three-Address Code with `--ir`
- **Full OOP** — classes, inheritance, `yeh` (this), `upar` (super)
- **Exception Handling** — `koshish` / `pakad` / `aakhir` (try / catch / finally)
- **Interactive REPL** — persistent state across lines, `--ir` mode available
- **Bilingual Errors** — every error message in English *and* Hindi
---
## Installation 📦
### From PyPI (recommended)
```bash
pip install merilang
```
### From source
```bash
git clone https://github.com/XploitMonk0x01/merilang.git
cd merilang
pip install -e .
```
---
## Quick Start 🚀
### Hello World
Create `hello.meri`:
```
maan naam = "Duniya"
likho("Namaste, " + naam + "!")
```
Run it:
```bash
merilang run hello.meri
```
Output:
```
Namaste, Duniya!
```
### Interactive REPL
```bash
merilang repl
```
```
Merilang v3.0.0 Interactive REPL
>>> maan x = 10
>>> maan y = 32
>>> likho(x + y)
42
>>> niklo
Alvida! 👋
```
---
## Language Syntax 📝
### Comments
```
// This is a comment
```
### Variables
```
maan x = 42 // number
maan pi = 3.14 // float
maan naam = "Ravi" // string
maan flag = sach // boolean true
maan other = jhoot // boolean false
maan nothing = khaali // null / None
```
### Operators
| Category | Operators |
|---|---|
| Arithmetic | `+` `-` `*` `/` `%` |
| Comparison | `==` `!=` `>` `<` `>=` `<=` |
| Logical | `aur` (and) `ya` (or) `nahi` (not) |
### Print & Input
```
likho("Hello!") // print with newline
likho_online("Enter name: ") // print without newline
poocho naam "What is your name? " // read input into 'naam'
```
### Conditionals
```
maan umar = 20
agar umar >= 18 {
likho("Adult")
} warna_agar umar >= 13 {
likho("Teen")
} warna {
likho("Child")
}
```
### Loops
**While loop:**
```
maan i = 0
jab_tak i < 5 {
likho(i)
maan i = i + 1
}
```
**For-each loop:**
```
maan nums = [1, 2, 3, 4, 5]
har n mein nums {
likho(n)
}
```
**Break & Continue:**
```
jab_tak sach {
agar x > 10 { ruk } // break
agar x == 5 { age_badho } // continue
maan x = x + 1
}
```
### Functions
```
kaam jodo(a, b) {
wapas a + b
}
maan hasil = jodo(3, 4)
likho(hasil) // 7
```
**Lambda:**
```
maan double = lambda(x) -> x * 2
likho(double(21)) // 42
```
### Lists & Dicts
```
maan fruits = ["apple", "mango", "guava"]
likho(fruits[0]) // apple
likho(length(fruits)) // 3
append(fruits, "banana")
maan person = {"naam": "Raj", "umar": 25}
likho(person["naam"]) // Raj
```
### Object-Oriented Programming
```
class Insaan {
kaam __init__(naam, umar) {
yeh.naam = naam
yeh.umar = umar
}
kaam parichay() {
likho("Mera naam " + yeh.naam + " hai.")
}
}
class Chaatra extends Insaan {
kaam __init__(naam, umar, school) {
upar(naam, umar)
yeh.school = school
}
kaam padhai() {
likho(yeh.naam + " padh raha hai.")
}
}
maan c = naya Chaatra("Aryan", 18, "IIT")
c.parichay() // Mera naam Aryan hai.
c.padhai() // Aryan padh raha hai.
```
### Exception Handling
```
koshish {
maan x = 10 / 0
} pakad galti {
likho("Error: " + galti)
} aakhir {
likho("Always runs.")
}
// Throw your own
kaam check_age(umar) {
agar umar < 0 {
uchalo "Umar negative nahi ho sakti!"
}
wapas sach
}
```
---
## CLI Reference 💻
```bash
# Run a script
merilang run script.meri
# Run with debug output (tokens + AST)
merilang run script.meri --debug
# Show Three-Address Code IR before running
merilang run script.meri --ir
# Skip semantic analysis (faster, less safe)
merilang run script.meri --no-semantic
# Interactive REPL
merilang repl
merilang repl --ir # show IR for each line
# Show version
merilang version
merilang --version
```
---
## Built-in Functions 🔧
| Function | Description |
|---|---|
| `likho(...)` | Print values |
| `poocho(var, prompt)` | Read user input |
| `length(x)` | Length of list or string |
| `append(list, val)` | Add element to list |
| `pop(list, idx)` | Remove & return element |
| `insert(list, idx, val)` | Insert at index |
| `sort(list)` | Return sorted copy |
| `reverse(list)` | Return reversed copy |
| `sum(list)` | Sum of elements |
| `min(list)` / `max(list)` | Minimum / Maximum |
| `upper(s)` / `lower(s)` | String case conversion |
| `split(s, sep)` | Split string → list |
| `join(list, sep)` | Join list → string |
| `replace(s, old, new)` | Replace in string |
| `str(x)` / `int(x)` / `float(x)` | Type conversion |
| `bool(x)` / `type(x)` | Type conversion / inspection |
| `abs(x)` / `round(x, n)` | Math helpers |
| `range(n)` | List `[0 … n-1]` |
---
## Project Structure 🗂️
```
merilang/
├── merilang/
│ ├── __init__.py # Public API (v3.0.0)
│ ├── __main__.py # python -m merilang
│ ├── cli.py # Arg parsing + pipeline wiring
│ ├── errors_enhanced.py # All error classes (bilingual)
│ ├── lexer_enhanced.py # Phase 1 — tokeniser (panic-mode)
│ ├── ast_nodes_enhanced.py # AST node dataclasses
│ ├── parser_enhanced.py # Phase 2 — recursive-descent parser
│ ├── symbol_table.py # Scope manager for semantic analysis
│ ├── semantic_analyzer.py # Phase 3 — static analyser
│ ├── ir_nodes.py # 3AC instruction dataclasses
│ ├── ir_generator.py # Phase 4 — AST → IR lowering
│ ├── environment.py # Runtime variable scoping
│ └── interpreter_enhanced.py # Phase 5 — tree-walking interpreter
├── tests/
│ └── smoke_test_pipeline.py # Full pipeline smoke tests
├── examples/ # .meri example programs
├── Guide.md # In-depth developer guide
├── pyproject.toml # PEP 517/518 packaging config
└── setup.py # Legacy build compat shim
```
> **Full developer reference:** [Guide.md](Guide.md)
---
## Keyword Reference 🔤
| Concept | Merilang | Python |
|---|---|---|
| Variable | `maan x = …` | `x = …` |
| Print | `likho(…)` | `print(…)` |
| Input | `poocho var "prompt"` | `var = input("prompt")` |
| If | `agar … { }` | `if …:` |
| Elif | `warna_agar … { }` | `elif …:` |
| Else | `warna { }` | `else:` |
| While | `jab_tak … { }` | `while …:` |
| For-each | `har x mein list { }` | `for x in list:` |
| Break | `ruk` | `break` |
| Continue | `age_badho` | `continue` |
| Function | `kaam name(…) { }` | `def name(…):` |
| Return | `wapas …` | `return …` |
| Lambda | `lambda(x) -> expr` | `lambda x: expr` |
| Class | `class Name { }` | `class Name:` |
| Inherit | `class A extends B { }` | `class A(B):` |
| New object | `naya Name(…)` | `Name(…)` |
| This | `yeh` | `self` |
| Super | `upar(…)` | `super().__init__(…)` |
| Try | `koshish { }` | `try:` |
| Catch | `pakad e { }` | `except e:` |
| Finally | `aakhir { }` | `finally:` |
| Throw | `uchalo …` | `raise …` |
| True / False | `sach` / `jhoot` | `True` / `False` |
| Null | `khaali` | `None` |
| Not | `nahi` | `not` |
| And / Or | `aur` / `ya` | `and` / `or` |
---
## Error System 🚨
Merilang reports errors in **English + Hindi** with line/column positions. In v3.0 all errors from a single run are reported together (panic-mode), so you fix all issues at once.
```
[LexerError] Line 4, Col 9: Unexpected character: '@'
[ParseError] Line 8, Col 1: Expected expression, got 'EOF'
[SemanticError] Line 12: Undefined name 'resutl' — did you mean 'result'?
[TypeCheckError] Line 15: Cannot apply '-' to string and number
```
---
## Roadmap 🗺️
- [x] Lexer + parser
- [x] Tree-walking interpreter
- [x] OOP (classes, inheritance)
- [x] Exception handling
- [x] Interactive REPL
- [x] **Panic-mode error recovery** *(v3.0)*
- [x] **Semantic analysis pass** *(v3.0)*
- [x] **IR / Three-Address Code generation** *(v3.0)*
- [ ] Bytecode compiler & VM
- [ ] Standard library expansion
- [ ] VS Code extension
- [ ] Debugger with breakpoints
- [ ] Package manager
---
## Contributing 🤝
1. Fork the repo
2. Create a branch: `git checkout -b feature/my-feature`
3. Commit: `git commit -m "Add my feature"`
4. Push: `git push origin feature/my-feature`
5. Open a Pull Request
See [Guide.md](Guide.md) for a step-by-step walkthrough of how to add a new language feature.
---
## License 📄
MIT — see [LICENSE](LICENSE).
---
Made with ❤️ for the desi developer community
| text/markdown | null | Merilang Community <xploitmonk0x01@github.com> | null | null | MIT | programming-language, compiler, interpreter, desi, hindi, education, learning, three-address-code, semantic-analysis | [
"Development Status :: 4 - Beta",
"Intended Audience :: Education",
"Intended Audience :: Developers",
"License :: OSI Approved :: MIT License",
"Natural Language :: Hindi",
"Natural Language :: English",
"Operating System :: OS Independent",
"Programming Language :: Python :: 3",
"Programming Langu... | [] | null | null | >=3.12 | [] | [] | [] | [
"termcolor>=2.0.0",
"pytest>=7.0.0; extra == \"dev\"",
"pytest-cov>=4.0.0; extra == \"dev\"",
"mypy>=1.0.0; extra == \"dev\"",
"flask>=3.0.0; extra == \"playground\""
] | [] | [] | [] | [
"Homepage, https://github.com/XploitMonk0x01/merilang",
"Documentation, https://github.com/XploitMonk0x01/merilang/blob/main/Guide.md",
"Repository, https://github.com/XploitMonk0x01/merilang",
"Bug Tracker, https://github.com/XploitMonk0x01/merilang/issues",
"Changelog, https://github.com/XploitMonk0x01/me... | twine/6.2.0 CPython/3.12.10 | 2026-02-19T11:38:31.862227 | merilang-3.0.0.tar.gz | 94,827 | 23/62/b59443783bed5e8a5c3c0bd0dea7e7edc102c250b5879a5ff683f4a29ff3/merilang-3.0.0.tar.gz | source | sdist | null | false | fef6c5c8cd6747940fba70f22166bf3d | 4615ff3424326e8ca9e1849f7b3f7a814a7d13d63b7aaba167a233b5d5598a73 | 2362b59443783bed5e8a5c3c0bd0dea7e7edc102c250b5879a5ff683f4a29ff3 | null | [
"LICENSE"
] | 209 |
2.1 | surrealist | 2.0.1 | Python client for SurrealDB, latest SurrealDB version compatible, all features supported | # README #
<p align="left">
<a href="https://pypi.org/project/surrealist/"><img src="https://img.shields.io/pypi/status/surrealist?style=flat-square"></a>
<a href="https://pypi.org/project/surrealist/"><img src="https://img.shields.io/pypi/v/surrealist?style=flat-square"></a>
<a href="https://pypi.org/project/surrealist/"><img src="https://img.shields.io/pypi/dm/surrealist?style=flat-square"></a>
<a href="https://pypi.org/project/surrealist/"><img src="https://img.shields.io/pypi/pyversions/surrealist?style=flat-square"></a>
<a href="https://pypi.org/project/surrealist/"><img src="https://img.shields.io/github/last-commit/kotolex/surrealist/master?style=flat-square"></a>
</p>
Surrealist is a Python tool to work with awesome [SurrealDB](https://docs.surrealdb.com/docs/intro) (support for latest version 3.0.0)
It is **synchronous** and **unofficial**, so if you need async AND/OR official client, go [here](https://github.com/surrealdb/surrealdb.py)
Works and is tested on Ubuntu, macOS and Windows 10; supports Python 3.8+ (including Python 3.14)
#### Key features: ####
* only one small dependency (websocket-client), no need to pull a lot of libraries to your project
* fully documented
* well tested (on the latest Ubuntu, macOS and Windows 10)
* fully compatible with the latest version of SurrealDB (3.0.0), including [live queries](https://surrealdb.com/products/lq), [change feeds](https://surrealdb.com/products/cf) and [GraphQL](https://surrealdb.com/docs/surrealdb/querying/graphql)
* debug mode to see all that goes in and out if you need (using standard logging)
* iterator to handle big select queries
* QL-builder to explore, generate and use SurrealDB queries (explain, transaction etc.)
* connections pool for use at a high load
* http or websocket transport to use
* always up to date with SurrealDB features and changes
### Installation ###
Via pip:
`pip install surrealist`
### Before you start ###
Please make sure you install and start SurrealDB, you can read more [here](https://docs.surrealdb.com/docs/installation/overview)
**Attention!** SurrealDB version 2.0.0 introduced some breaking changes, which we had to follow, so you cannot use surrealist version 1.0.0 to work with
SurrealDB version 1.5.3 or earlier.
For the same reasons, you will not be able to use surrealist version 2.0.0+ with SurrealDB version 2.x or earlier.
Please consult the table below to choose a version:
| SurrealDB version | 3.0.0+ | 2.0.0+ | 1.5.0+ | 1.4.0+ | 1.3.0+ | 1.2.0+ | 1.1.1+ |
|:-------------------------:|----------|:--------:| :---: |----------|----------|----------|----------|
| Surrealist version | 2.0.0+ | 1.0.0+ | 0.5.3 | 0.4.2+ | 0.3.1+ | 0.2.10+ | 0.2.3+ |
| Python versions | 3.8-3.14 | 3.8-3.13 | 3.8-3.12 | 3.8-3.12 | 3.8-3.12 | 3.8-3.12 | 3.8-3.12 |
A good place to start is connect examples [here](https://github.com/kotolex/surrealist/tree/master/examples/connect.py)
You can find a lot of examples [here](https://github.com/kotolex/surrealist/tree/master/examples)
## Transports ##
First of all, you should know that SurrealDB can work over websocket or http "transports"; we chose to support both here,
but websocket is the preferred and default one. Websockets can use live queries and other cool features.
Each transport has functions it cannot use by itself (in the current SurrealDB version):
**Http-transport cannot:**
- create or kill a live query
- use LET or UNSET methods
**Websocket-transport cannot:**
- use GraphQL
- import or export data (you should use http connection or cli tools for that)
- import or export ML files (you should use http connection or cli tools for that)
If you call one of these methods on the wrong transport, a CompatibilityError will be raised.
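A small sketch of choosing the transport explicitly (same parameters as in Example 1 below):
```python
from surrealist import Surreal

# Websocket transport (default): live queries, LET/UNSET available
ws_surreal = Surreal("http://127.0.0.1:8000", namespace="test", database="test", use_http=False)

# Http transport: GraphQL and import/export available
http_surreal = Surreal("http://127.0.0.1:8000", namespace="test", database="test", use_http=True)
```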
## Connect to SurrealDB ##
All you need is the URL of the SurrealDB server and, in some cases, a few additional parameters to connect.
**Example 1**
In this example, we explicitly show all parameters, but remember many of them are optional
```python
from surrealist import Surreal
# we create a surreal object, it can be used to create one or more connections with websockets (use_http=False)
# with timeout 10 seconds
surreal = Surreal("http://127.0.0.1:8000", namespace="test", database="test", credentials=("user_db", "user_db"),
use_http=False, timeout=10)
print(surreal.is_ready()) # prints True if server up and running on that url
print(surreal.version()) # prints server version
```
**Note:** creating a Surreal object does not attempt any connection or other action; it just stores the parameters for future use.
Calls to **is_ready()**, **health()** or **version()** on a Surreal object are server checks only; they do not validate or check your namespace, database or credentials.
### Parameters ###
**url** - URL of the SurrealDB server. If you are sure you will use a websocket connection, you can use a URL like ws://127.0.0.1:8000/rpc, but an http URL will work fine too, even for websockets.
So you can simply use http://127.0.0.1:8000; it will be transformed to ws://127.0.0.1:8000/rpc under the hood.
If your URL differs from this pattern, specify it in ws(s) format.
If you use the ws(s) format, the Surreal object will also try to predict the http URL; this matters for status and version checks.
For example, for wss://127.0.0.1:9000/some/rps the predicted http URL will be https://127.0.0.1:9000/
**namespace** - name of the namespace, it is optional, but if you use it, you should specify a database too
**database** - name of the database, it is optional, but if you use it, you should specify namespace too
**credentials** - optional, pair(tuple) of username and password for SurrealDB
**use_http** - optional, False by default; selects the transport. False means websocket; specify True if you want to use the http transport
**timeout** - optional, 15 seconds by default; the time in seconds to wait for responses and messages, and for attempts to connect to SurrealDB
**Example 2**
In this example, we do not use default(optional) parameters
```python
from surrealist import Surreal
# we create a surreal object, it can be used to create one or more connections with websockets
# with timeout 15 seconds
surreal = Surreal("http://127.0.0.1:8000")
print(surreal.is_ready()) # prints True if server up and running on that url
print(surreal.version()) # prints server version
```
## Context managers and close ##
You should always close connections when you no longer need them; the best way to do this is via a context manager
**Example 3**
```python
from surrealist import Surreal
surreal = Surreal("http://127.0.0.1:8000", namespace="test", database="test", credentials=("user_db", "user_db"))
with surreal.connect() as ws_connection: # create context manager, it will close connection for us
result = ws_connection.select("person") # select from db
print(result) # print result
# here connection is closed
```
You can do the same by itself:
**Example 4**
```python
from surrealist import Surreal
surreal = Surreal("http://127.0.0.1:8000", namespace="test", database="test", credentials=("user_db", "user_db"))
ws_connection = surreal.connect() # open connection
result = ws_connection.select("person") # select from db
print(result) # print result
ws_connection.close() # explicitly close connection
# after closing, the connection cannot be used anymore; if you need one, create another connection with the surreal object
```
## Methods and Query Language ##
Before you go with surrealist, please [check](https://surrealdb.com/docs/surrealql/overview)
You can find basic examples [here](https://github.com/kotolex/surrealist/tree/master/examples)
QL-builder is a simple, convenient way to create queries, validate them and run them against SurrealDB.
It is readable and can be a good way to learn QL.
**Example 5**
```python
from surrealist import Database
# connects to Database (it is not connection)
with Database("http://127.0.0.1:8000", 'test', 'test', credentials=("user_db", "user_db")) as db:
table = db.table("person") # switch to table level, no problem if it is not exists
# let's add record
# real query CREATE person:john SET status = "ACTIVE" RETURN id;
result = table.create("john").set(status="ACTIVE").returns("id").run()
# SurrealResult(id=9eb966a4-02fc-40ea-82ba-825d37254f43, status=OK, result=[{'id': 'person:john'}],
# query=CREATE person:john SET status = "ACTIVE" RETURN id;, code=None, time=110.3µs, additional_info={})
print(result)
print(table.count()) # 1
```
You can find QL examples [here](https://github.com/kotolex/surrealist/tree/master/examples/surreal_ql)
One of the main features of QL-builder is that, using the dot, you can see all statements available at each level:
any modern IDE will show the possible statements when you type a dot.
Thanks to this, you can study QL and also gain confidence that you are forming a valid query.
For example,
`db.account.select().limit(50).start_at(50)` is the analog of `SELECT * FROM account LIMIT 50 START 50;`
Pay attention — you can use just the table name without the table() method, as in `db.person.select()`; it is readable and shorter, but in that case you will not get IDE suggestions.
So we recommend the table() method, `db.table("person").select()`: it is not much longer, still readable, and you will get help from your IDE.
If you cannot form your query with QL, you can always use a raw query via `database.raw_query` or `connection.query`.
It is the most efficient way, because it allows you to do everything that is possible, provided you have the permissions.
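An illustrative raw-query sketch; the exact signature of `query` is assumed here to take a SurrealQL string:
```python
result = ws_connection.query("SELECT * FROM person WHERE age > 18;")
if not result.is_error():
    print(result.result)
```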
### Iteration on Select ###
When you expect a lot of data from a select query via QL-builder, consider using the iterator; it is a simple, lazy and common pattern in Python.
The iterator can be used with the **next** method or in a **for** statement.
**Example 6**
```python
from surrealist import Database
with Database("http://127.0.0.1:8000", 'test', 'test', credentials=("user_db", "user_db")) as db: # connects to Database
iterator = db.table("user").select().iter(limit=20) # get an iterator, nothing executes on this line
for result in iterator: # here, where actions actually start
print(result.count()) # just print count of results, but you can do anything here
```
## Results ##
If a connection method does not raise, it always returns a SurrealResult object for any SurrealDB response. This was chosen for simplicity.
Please see [examples](https://github.com/kotolex/surrealist/blob/master/examples/result.py)
Here is standard result:
`SurrealResult(id=None, status=OK, result=[{'author': '51ff5faa-d798-4194-93c6-179ce7525a8c', 'id': 'article:⟨51ff5faa-d798-4194-93c6-179ce7525a8c⟩', 'text': '51ff5faa-d798-4194-93c6-179ce7525a8c', 'title': '51ff5faa-d798-4194-93c6-179ce7525a8c'}], query=None, code=None, time=77.25µs, additional_info={})`
Here is standard error:
`SurrealResult(id=ca3eface-9287-4092-a198-4f91ed27a010, status=ERR, result={'code': -32000, 'message': 'There was a problem with authentication'}, query=None, code=None, time=None, additional_info={})
`
You can always check for an error using the is_error() method:
```python
if result.is_error():
raise ValueError("Got error")
```
Besides, a result object has the helper methods **is_empty**, **id**, **ids**, **get**, **first**, **last** to work with the SurrealDB response.
You need to read this on SurrealDB recordID: https://surrealdb.com/docs/surrealql/datamodel/ids
## Using RecordID ##
Since version 2.0, SurrealDB never converts strings to record_id, so we have to manage it ourselves.
RecordId object exists for that purpose,
you can see examples [here](https://github.com/kotolex/surrealist/blob/master/examples/record_id.py)
Although for backward compatibility you can still use a record_id in string format,
it is strongly recommended to use RecordId instead!
**Note**: the RecordId object supports only string or uuid/ulid type ids; if you need range, object or array type record ids,
you should create a valid query yourself and use the connection.query() method.
Here we create new person, get record id, wraps in RecordId and use for select:
```python
from surrealist import RecordId
result = ws_connection.create("person", {"name": "John Doe"})
record_id = result.id # person:34vepp6apg0np2sdstle
print(ws_connection.select("person", record_id=RecordId(record_id)).result) # [{'id': 'person:34vepp6apg0np2sdstle', 'name': 'John Doe'}]
```
A simple record_id can contain only the letters A-Z, a-z and the digits 0-9; for any other UTF-8 characters RecordId will generate a valid representation with a u-prefix:
```python
from surrealist import get_uuid, RecordId
uuid = get_uuid() # 6e796db2-8322-4056-b63f-0f1812f6e075
record_id = RecordId(uuid, table="person")
print(record_id.to_valid_string()) # person:u'6e796db2-8322-4056-b63f-0f1812f6e075'
create_result = ws_connection.create("person", {"name": "tobie", "age": 30}, record_id)
print(create_result.result) # {"age": 30, "id": "person:u'6e796db2-8322-4056-b63f-0f1812f6e075'", "name": "tobie"}
print(ws_connection.select("person", record_id=record_id).result) # [{"age": 30, 'id': "person:u'6e796db2-8322-4056-b63f-0f1812f6e075'", "name": "tobie"}]
```
## Surreal Datetime ##
Since version 2.0, SurrealDB never converts the values we send to it, so we need to use datetimes explicitly.
For example, if you have a datetime field in your table:
`DEFINE FIELD create_time ON person TYPE datetime DEFAULT time::now() PERMISSIONS FULL;`
you need to use a datetime with the d-prefix to add a new record with that field
**Note**: since version 3.0, a datetime is not recognized by SurrealDB if it is passed as a JSON field value, but it works fine as part of a string query! So if you need datetimes with SurrealDB 3.0+, use QL (Database) or the connection's raw_query.
```python
from datetime import datetime, timezone
from surrealist import Database, Surreal, to_surreal_datetime_str
surreal = Surreal("http://127.0.0.1:8000", credentials=("root", "root"))
with surreal.connect() as ws_connection:
ws_connection.use("test", "test")
db = Database.from_connection(ws_connection)
tm = to_surreal_datetime_str(datetime.now(timezone.utc)) # get current time in surreal format d'2024-10-22T16:18:59.367084Z'
result = db.person.create().content({'name': "zzz", 'age': 44, 'active': True, 'create_time': tm}).run()
```
but if you just use a datetime string without the d-prefix, you will get an error back:
```Found '2024-10-22T13:54:40.445833Z' for field `create_time`, with record `person:p8vji2zhvr8z7frhsaex`, but expected a datetime```
## Logging and Debug mode ##
As mentioned above, if you need to debug something, are stuck on a problem or just want to see all the data exchanged between you and SurrealDB, you can use standard logging.
All library logs contain the "surrealist" prefix. You, as the developer, should choose the proper handlers, formats, filters etc.
Surrealist does not use the root logger, does not add any handlers and uses only the DEBUG, INFO and ERROR levels for its events.
For example
**Example 7**
```python
from logging import basicConfig, INFO, DEBUG
from surrealist import Surreal, LOG_FORMAT # LOG_FORMAT is used for simplicity, you can use your own
basicConfig(format=LOG_FORMAT, level=INFO) # we specify a format and level of events to catch
surreal = Surreal("http://127.0.0.1:8000", namespace="test", database="test", credentials=("user_db", "user_db"))
with surreal.connect() as connection:
res = connection.create("article", {"author": "John Doe", "title": "In memoriam", "text": "text"})
```
If you run it, you will get this in the console:
```
2024-05-29 15:59:41,759 : Thread-1 : websocket : INFO : Websocket connected
2024-05-29 15:59:41,762 : MainThread : surrealist.connections.websocket : INFO : Operation: SIGNIN. Data: {'user': 'user_db', 'pass': '******', 'NS': 'test', 'DB': 'test'}
2024-05-29 15:59:41,788 : MainThread : surrealist.connections.websocket : INFO : Got result: SurrealResult(id=c3fcebbc-359f-47d0-822b-a4ad8043f64b, status=OK, result=eyJ0eXAiOiJKV1QiLCJhbGciOiJIUzUxMiJ9.eyJpYXQiOjE3MTY5ODAzODEsIm5iZiI6MTcxNjk4MDM4MSwiZXhwIjoxNzE2OTgzOTgxLCJpc3MiOiJTdXJyZWFsREIiLCJqdGkiOiI0YTQyNWFiNy00NGEyLTQ4OGItYjM4MS05YjUyNDQzYTI5OTQiLCJOUyI6InRlc3QiLCJEQiI6InRlc3QiLCJJRC...
2024-05-29 15:59:41,788 : MainThread : surrealist.connections.websocket : INFO : Connected to ws://127.0.0.1:8000/rpc, params: {'NS': 'test', 'DB': 'test'}, credentials: ('root', '******'), timeout: 15
2024-05-29 15:59:41,788 : MainThread : surrealist.connections.websocket : INFO : Operation: CREATE. Path: article, data: {'author': 'John Doe', 'title': 'In memoriam', 'text': 'text'}
2024-05-29 15:59:41,794 : MainThread : surrealist.connections.websocket : INFO : Got result: SurrealResult(id=b307d67f-b01b-4b71-a319-906fa17b8c72, status=OK, result=[{'author': 'John Doe', 'id': 'article:b44tdiiyb8jw6mcn1tzs', 'text': 'text', 'title': 'In memoriam'}], query=None, code=None, time=None, additional_info={})
2024-05-29 15:59:41,794 : MainThread : surrealist.connection : INFO : The connection was closed
```
But if, in example 7 above, you choose DEBUG as the level, you will see everything, including low-level client data:
```
2024-05-29 16:03:58,438 : MainThread : surrealist.clients.websocket : DEBUG : Connecting to ws://127.0.0.1:8000/rpc
2024-05-29 16:03:58,445 : Thread-1 : websocket : INFO : Websocket connected
2024-05-29 16:03:58,458 : MainThread : surrealist.clients.websocket : DEBUG : Connected to ws://127.0.0.1:8000/rpc, timeout is 15 seconds
2024-05-29 16:03:58,458 : MainThread : surrealist.connections.websocket : INFO : Operation: SIGNIN. Data: {'user': 'user_db', 'pass': '******', 'NS': 'test', 'DB': 'test'}
2024-05-29 16:03:58,458 : MainThread : surrealist.clients.websocket : DEBUG : Send data: {"id": "1d5758bb-0879-4d8d-9e14-37c9117669a3", "method": "signin", "params": [{"user": "root", "pass": "******", "NS": "test", "DB": "test"}]}
2024-05-29 16:03:58,484 : Thread-1 : surrealist.clients.websocket : DEBUG : Get message b'{"id":"1d5758bb-0879-4d8d-9e14-37c9117669a3","result":"eyJ0eXAiOiJKV1QiLCJhbGciOiJIUzUxMiJ9.eyJpYXQiOjE3MTY5ODA2MzgsIm5iZiI6MTcxNjk4MDYzOCwiZXhwIjoxNzE2OTg0MjM4LCJpc3MiOiJTdXJyZWFsREIiLCJqdGkiOiIwODhhMWY0My04YzY3LTQ5NjYtYTdjNC02ZGI5NjA0MGNkYmIiLCJOUyI6InRlc3QiLCJEQiI6InRlc3QiLCJJRCI6InJvb3QifQ.1pSbJ'...
2024-05-29 16:03:58,484 : MainThread : surrealist.connections.websocket : INFO : Got result: SurrealResult(id=1d5758bb-0879-4d8d-9e14-37c9117669a3, status=OK, result=eyJ0eXAiOiJKV1QiLCJhbGciOiJIUzUxMiJ9.eyJpYXQiOjE3MTY5ODA2MzgsIm5iZiI6MTcxNjk4MDYzOCwiZXhwIjoxNzE2OTg0MjM4LCJpc3MiOiJTdXJyZWFsREIiLCJqdGkiOiIwODhhMWY0My04YzY3LTQ5NjYtYTdjNC02ZGI5NjA0MGNkYmIiLCJOUyI6InRlc3QiLCJEQiI6InRlc3QiLCJJRC...
2024-05-29 16:03:58,484 : MainThread : surrealist.connections.websocket : INFO : Connected to ws://127.0.0.1:8000/rpc, params: {'NS': 'test', 'DB': 'test'}, credentials: ('root', '******'), timeout: 15
2024-05-29 16:03:58,484 : MainThread : surrealist.connections.websocket : INFO : Operation: CREATE. Path: article, data: {'author': 'John Doe', 'title': 'In memoriam', 'text': 'text'}
2024-05-29 16:03:58,484 : MainThread : surrealist.clients.websocket : DEBUG : Send data: {"id": "9bbc90d7-d6dc-4a51-ad97-b765e6b09131", "method": "create", "params": ["article", {"author": "John Doe", "title": "In memoriam", "text": "text"}]}
2024-05-29 16:03:58,490 : Thread-1 : surrealist.clients.websocket : DEBUG : Get message b'{"id":"9bbc90d7-d6dc-4a51-ad97-b765e6b09131","result":[{"author":"John Doe","id":"article:72duj8mef1s97c67dv38","text":"text","title":"In memoriam"}]}'
2024-05-29 16:03:58,491 : MainThread : surrealist.connections.websocket : INFO : Got result: SurrealResult(id=9bbc90d7-d6dc-4a51-ad97-b765e6b09131, status=OK, result=[{'author': 'John Doe', 'id': 'article:72duj8mef1s97c67dv38', 'text': 'text', 'title': 'In memoriam'}], query=None, code=None, time=None, additional_info={})
2024-05-29 16:03:58,491 : MainThread : surrealist.connection : INFO : The connection was closed
2024-05-29 16:03:58,491 : Thread-1 : surrealist.clients.websocket : DEBUG : Close connection to ws://127.0.0.1:8000/rpc
2024-05-29 16:03:58,491 : MainThread : surrealist.clients.websocket : DEBUG : Client is closed connection to ws://127.0.0.1:8000/rpc
```
**Note:** passwords and auth information are always masked in logs. If you still see them in logs, please report an issue.
## Live Query ##
Live queries let you subscribe to events on a desired table: when changes happen, you get a notification as a simple result or in DIFF format.
About live query: https://surrealdb.com/products/lq
Using live select: https://surrealdb.com/docs/surrealql/statements/live
About DIFF (jsonpatch): https://jsonpatch.com
LQ works only with websockets; you have to provide a callback function to be called on any event.
The callback should have the signature `def any_name(param: Dict) -> None`, so it will be called with a Python dictionary as its only argument.
**Note 1:** if your connection is interrupted or closed, the LQ disappears, and you need to recreate it
**Note 2:** an LQ only produces events that happen after it was created, and the table must already exist
**Note 3:** an LQ is associated with the connection where it was created; if you have two or more connections, the LQ depends on only one of them,
and will disappear when that connection closes, even if other connections are still active
**Note 4:** an LQ stops working after REMOVE TABLE is called on the table it listens to. This will be fixed in future SurrealDB versions
**Example 8**
```python
from time import sleep
from surrealist import Surreal
# you need callback, a function which will get dictionary and do something with it
def call_back(response: dict) -> None:
print(response)
# you need websockets for a live query
surreal = Surreal("http://127.0.0.1:8000", namespace="test", database="test", credentials=("user_db", "user_db"))
with surreal.connect() as connection:
res = connection.live("person", callback=call_back) # here we subscribe on person table
live_id = res.result # live_id is a LQ id, we need it to kill a query
connection.create("person", {"name": "John", "surname": "Doe"}) # here we create an event
sleep(0.5) # sleep a little cause need some time to get a message back
```
In the console, you will get:
`{'result': {'action': 'CREATE', 'id': 'c2c8952b-b2bc-4d3a-aa68-4609f5818d7c', 'result': {'id': 'person:dik1sm50xr2d5mc7fysi', 'name': 'John', 'surname': 'Doe'}}}`
**Example 9**
```python
from time import sleep
from surrealist import Surreal
# you need callback, a function which will get dictionary and do something with it
def call_back(response: dict) -> None:
print(response)
# you need websockets for a live query
surreal = Surreal("http://127.0.0.1:8000", namespace="test", database="test", credentials=("user_db", "user_db"))
with surreal.connect() as connection:
# here we subscribe on person table and specify we need DIFF
res = connection.live("person", callback=call_back, return_diff=True)
live_id = res.result # live_id is a LQ id, we need it to kill a query
connection.create("person", {"name": "John", "surname": "Doe"}) # here we create an event
sleep(0.5) # sleep a little cause need some time to get a message back
connection.kill(live_id) # we kill LQ, no more events to come
```
In the console, you will get:
`{'result': {'action': 'CREATE', 'id': '54a4dd0b-0008-46f4-b4e6-83e466cb4141', 'result': [{'op': 'replace', 'path': '/', 'value': {'id': 'person:fhglyrxkit3j0fnosjqg', 'name': 'John', 'surname': 'Doe'}}]}}`
If you do not need the LQ anymore, call the kill method with the live_id.
You can use a custom live query if you need one; it lets you use filters and conditions, as described [here](https://surrealdb.com/docs/surrealql/statements/live#filter-the-live-query)
**Example 10**
```python
from time import sleep
from surrealist import Surreal
# you need callback, a function which will get dictionary and do something with it
def call_back(response: dict) -> None:
print(response)
# you need websockets for a live query
surreal = Surreal("http://127.0.0.1:8000", namespace="test", database="test", credentials=("user_db", "user_db"))
with surreal.connect() as connection:
# here we subscribe and specify a custom query for persons
res = connection.custom_live("LIVE SELECT * FROM ws_person WHERE age > 18;", callback=call_back)
live_id = res.result # live_id is a LQ id, we need it to kill a query in future
# here we create 2 records but only the second one is what we look for
connection.create("ws_person", {"age": 16, "name": "Jane"}) # Jane is too young for us :)
connection.create("ws_person", {"age": 28, "name": "John"}) # John older than 18, so wee need this event
sleep(0.5) # sleep a little cause need some time to get a message back
connection.kill(live_id) # we kill LQ, no more events to come
```
In the console, you will get:
`{'result': {'action': 'CREATE', 'id': '1f57f2de-354a-43ba-8f39-57000944707c', 'result': {'age': 28, 'id': 'ws_person:awot8zdkg3mqj4wymq8c', 'name': 'John'}}}`
Pay attention: there is no info about Jane in the events we get from the LQ, because Jane is younger than 18.
Same example with QL-builder:
```python
from time import sleep
from surrealist import Database
# you need callback, a function which will get dictionary and do something with it
def call_back(response: dict) -> None:
print(response)
# you need websockets for a live query
with Database("http://127.0.0.1:8000", 'test', 'test', credentials=("user_db", "user_db")) as db:
table = db.table("ws_person")
# here we subscribe and specify a custom query for persons
result = table.live(callback=call_back).where("age > 18").run()
live_uid = result.result # live_id is a LQ id, we need it to kill a query in future
# here we create 2 records but only the second one is what we look for
table.create().content({"age": 16, "name": "Jane"}).run() # Jane is too young for us :)
table.create().content({"age": 28, "name": "John"}).run() # John older than 18, so wee need this event
sleep(0.1)
result = table.kill(live_uid) # we kill LQ, no more events to come
```
## Change Feeds ##
Changes in the database, such as creating, updating, or deleting, are recorded and played back in another channel.
This channel functions as a stream of messages.
Change Feeds are great for ensuring accurate order and consistent replication of tables or databases. They also provide
immediate updates on any changes made.
Read here: https://surrealdb.com/blog/unlocking-streaming-data-magic-with-surrealdb-live-queries-and-change-feeds
Read here: https://surrealdb.com/products/cf
Under the hood: https://surrealdb.com/docs/surrealql/statements/show
Change Feeds work over both http and websockets!
Let's set up everything:
```
DEFINE TABLE reading CHANGEFEED 1d;
```
**Note:** the date and time in your requests must be strictly AFTER the date and time the `reading` table was created, and they must carry the d-prefix
**Example 11**
```python
from surrealist import Surreal
surreal = Surreal("http://127.0.0.1:8000", namespace="test", database="test", credentials=("user_db", "user_db"))
with surreal.connect() as connection:
# Again, 2024-02-06T10:48:08.700483Z - is a moment AFTER the table was created
res = connection.query('SHOW CHANGES FOR TABLE reading SINCE d"2024-02-06T10:48:08.700483Z" LIMIT 10;')
print(res.result) # it will be [] cause no events happen
# now we add one record
connection.query('CREATE reading set story = "long long time ago";')
# check again
res = connection.query('SHOW CHANGES FOR TABLE reading SINCE d"2024-02-06T10:48:08.700483Z" LIMIT 10;')
print(res.result)
```
in the console, you will see
`[{'changes': [{'update': {'id': 'reading:w0useg3n9bkne6mei63f', 'story': 'long long time ago'}}], 'versionstamp': 851968}]`
Same example via QL-builder:
**Example 12**
```python
from datetime import datetime, timezone
from surrealist import Database, to_surreal_datetime_str
with Database("http://127.0.0.1:8000", 'test', 'test', credentials=("user_db", "user_db")) as db:
tm = to_surreal_datetime_str(datetime.now(timezone.utc)) # Again, here is a moment AFTER the table was created
res = db.table("reading").show_changes().since(tm).run()
print(res.result) # it will be [] cause no events happen
# now we add one record
db.table("reading").create().set(story="long long time ago").run()
res = db.table("reading").show_changes().since(tm).run()
```
## GraphQL ##
Since SurrealDB version 3.0 you can use GraphQL, but pay attention: you have to use an http connection for that.
Refer to: https://surrealdb.com/docs/surrealdb/querying/graphql
Example: https://github.com/kotolex/surrealist/tree/master/examples/graph_ql.py
**Example 13**
```python
from surrealist import Surreal
surreal = Surreal("http://127.0.0.1:8000", credentials=('root', 'root'), use_http=True) # http only!
with surreal.connect() as connection:
connection.use("test", "test")
connection.create("author", {"age":31, "is_alive": False}, "john") # you need at least 1 table at database
connection.query("DEFINE CONFIG GRAPHQL AUTO;") # you need this for GraphQL to work
res = connection.graphql({"query": "{ author { id } }"})
print(res.result) # {'data': {'author': [{'id': 'author:john'}]}}
```
## Threads and thread-safety ##
Remember, SurrealDB is "surreally" fast, so first make sure you really need multiple threads to work with it, because in many situations
one thread is enough to do the job. Do not fall into premature optimization.
All objects, including connections, statements and databases, are thread-safe, so you can use all library features in different threads.
This library was made for use in multithreaded environments; keep a few rules of thumb in mind:
- if you work with only one server of SurrealDB, you need only one Surreal object
- one Connection/Database object represents exactly one connection (websocket or http) with DB
- it is OK to use one connection in different threads (see the sketch after this list), but it can become your bottleneck, as there is only one connection to the DB
- with many queries and high load, you should consider using more than one connection, but not too many of them. A number of connections equal to the number of CPU cores is usually the best choice
- remember to properly close connections
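A minimal sketch of sharing one connection across threads, reusing only calls shown above (`Surreal`, `connect`, `create`); the table name, field values and worker count are illustrative:
```python
from concurrent.futures import ThreadPoolExecutor

from surrealist import Surreal

surreal = Surreal("http://127.0.0.1:8000", namespace="test", database="test", credentials=("user_db", "user_db"))

with surreal.connect() as connection:  # one shared websocket connection
    def insert_person(num: int) -> str:
        # the connection object is thread-safe, so all workers can share it
        res = connection.create("person", {"name": f"user_{num}"})
        return res.id  # record id of the created row, e.g. person:abc123 (illustrative)

    with ThreadPoolExecutor(max_workers=4) as pool:
        ids = list(pool.map(insert_person, range(10)))
    print(ids)
```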
## Connections Pool ##
And again, please do not fall into premature optimization when working with SurrealDB. But if you expect a high load and/or a lot of
threads that use SurrealDB, you can use DatabaseConnectionsPool. It can be used exactly like a Database object; the main difference is that you
can specify the minimum and maximum number of connections to use. Under high load, when a lot of data goes in and out across many threads, a pool object can
do the job faster and more effectively than one shared connection.
On start, the pool creates the minimum number of connections, and under heavy load it creates more and more connections until it reaches the maximum.
By default, the minimum number is equal to the number of CPU cores of the system.
Any incoming request from your application will use the first non-busy connection it gets from the pool.
Pay attention: new connections can be created, but old connections are never closed until the pool itself is closed, so the number of connections can grow,
but never shrink. This is because of Live Queries; as you remember, an LQ is always linked to a connection, so if the connection is closed, the LQ stops working.
**Example 14**
```python
from surrealist import DatabaseConnectionsPool
with DatabaseConnectionsPool("http://127.0.0.1:8000", 'test', 'test', credentials=("user_db", "user_db"), min_connections=10,
max_connections=40) as db: # create pool, it creates 10 connections on start
make_something_with_a_lot_of_threads_or_data(db) # use pool everywhere we need as a simple Database object
```
**Note:** DatabaseConnectionsPool is NOT a singleton; it allows creating as many pools as you like, for example, for different databases or namespaces.
It is your job as a developer to limit the number of pools created in your application.
**Important note:** for many, and maybe most, cases one shared connection is enough to do the job. Test it and make sure you really need a connection pool.
## Recursion and JSON in Python ##
SurrealDB has _"no limit to the depth of any nested objects or values within"_, but in Python we have a recursion limit, and the
standard json library (and the str function) uses recursion to load and dump objects, so if you have deeply nested objects,
you can get a RecursionError.
The best choice here is to rethink your schema and objects, because you probably do
something wrong with such a high level of nesting.
The second choice is to increase the recursion limit in your system with:
```python
import sys
sys.setrecursionlimit(10_000)
```
## Examples ##
You can find a lot of examples [here](https://github.com/kotolex/surrealist/tree/master/examples)
### Contacts ###
Mail me at farofwell@gmail.com
| text/markdown | null | kotolex <farofwell@gmail.com> | null | null | Copyright (c) 2018 The Python Packaging Authority Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions: The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software. THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. | surreal, python, Surreal, SurrealDB, surrealist, database, surrealdb, surrealdb.py, GraphQL, LiveQuery | [
"License :: OSI Approved :: MIT License",
"Intended Audience :: Developers",
"Intended Audience :: System Administrators",
"Intended Audience :: Education",
"Topic :: Software Development",
"Topic :: Database",
"Topic :: Software Development :: Libraries",
"Development Status :: 5 - Production/Stable"... | [] | null | null | >=3.8 | [] | [] | [] | [
"websocket-client"
] | [] | [] | [] | [
"Homepage, https://github.com/kotolex/surrealist"
] | twine/5.0.0 CPython/3.8.10 | 2026-02-19T11:38:08.644119 | surrealist-2.0.1.tar.gz | 87,713 | 35/ac/3ed9ef3fefc3ff568038b2227136ab402162aad106b5245540d4b1f51811/surrealist-2.0.1.tar.gz | source | sdist | null | false | 0041aa33a46e8253ba38d5fb526f1ad5 | a7305334b8c151f46db70be8dd131002462c7c0140280956bcf6c45b35e3258c | 35ac3ed9ef3fefc3ff568038b2227136ab402162aad106b5245540d4b1f51811 | null | [] | 2,838 |
2.4 | filler-classifier | 0.2.6 | Thai filler word classifier for voice bots - picks the right acknowledgment phrase while LLM thinks | # filler-classifier
Thai filler word classifier for voice bots. Classifies customer input into categories and returns the appropriate filler phrase to play instantly while the LLM generates a full response.
Built for [ingfah.ai](https://ingfah.ai) voice bot but easily adaptable to any Thai voice AI system.
## Why
Voice bots have a latency problem: the user speaks, ASR transcribes, then the LLM takes 1-3 seconds to respond. Dead silence feels broken. The solution is to play a short filler phrase ("สักครู่นะคะ", "ขออภัยด้วยน่ะคะ") immediately while the LLM thinks.
But you can't play the same filler for everything. If someone is angry, "ได้เลยค่ะ" sounds dismissive. If someone asks a question, "ขออภัยด้วยน่ะคะ" makes no sense.
This classifier picks the right filler by category.
## Categories
| Category | When | Example Fillers |
|---|---|---|
| `complaint` | Angry, frustrated, profanity, threats | ขออภัยด้วยน่ะคะ |
| `question` | Asking for info, pricing, how-to | สักครู่นะคะ, ตรวจสอบให้นะคะ |
| `default` | Greetings, agreements, requests, everything else | รับทราบค่ะ, ได้เลยค่ะ |
### Default Filler Phrases
| Category | Fillers |
|---|---|
| `complaint` | ขออภัยด้วยน่ะคะ |
| `question` | สักครู่นะคะ, สักครู่ค่ะ, ตรวจสอบให้นะคะ |
| `default` | รับทราบค่ะ, ค่ะ ได้ค่ะ, ได้เลยค่ะ, ดีเลยค่ะ, ยินดีค่ะ |
A random filler is picked from the matching category each time. These are designed to be short (~0.3-0.5s when synthesized) for minimal latency.
## How It Works
Uses `intfloat/multilingual-e5-small` embeddings with centroid-based cosine similarity (a small illustrative sketch follows the steps below):
1. Each category has ~30-60 anchor phrases (real Thai customer service examples)
2. On init, all anchors are embedded and averaged into category centroids
3. At inference, the input is embedded and compared to centroids via cosine similarity
4. The closest category wins, and a random filler from that category is returned
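The package's own implementation is not reproduced here, but the idea is easy to sketch with `sentence-transformers` and NumPy. The anchor phrases, category names and query below are placeholder examples (English used for readability), not the package's real anchors:
```python
import numpy as np
from sentence_transformers import SentenceTransformer

# toy anchors per category; the real package uses ~30-60 Thai phrases each
anchors = {
    "complaint": ["this is broken", "I am very upset", "nothing works"],
    "question": ["how much does it cost", "what is my current balance"],
    "default": ["hello", "okay thanks", "yes please"],
}

model = SentenceTransformer("intfloat/multilingual-e5-small")

# build one centroid per category from normalized anchor embeddings
centroids = {
    cat: model.encode(phrases, normalize_embeddings=True).mean(axis=0)
    for cat, phrases in anchors.items()
}

def classify(text: str) -> tuple[str, float]:
    vec = model.encode([text], normalize_embeddings=True)[0]
    # cosine similarity: vec is unit-norm, so divide only by the centroid norm
    scores = {cat: float(np.dot(vec, c) / np.linalg.norm(c)) for cat, c in centroids.items()}
    best = max(scores, key=scores.get)
    return best, scores[best]

print(classify("why is my bill so high"))  # likely ("question", ...)
```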
## Performance
- **Accuracy**: 89.6% on 1,000 Thai customer service sentences
- **Inference**: <10ms per classification (after model load)
- **Init**: ~200ms for centroid computation
- **Model size**: ~118MB (multilingual-e5-small)
## Installation
```bash
pip install filler-classifier
```
## Usage
```python
from filler_classifier import FillerClassifier
# loads model automatically on first init
clf = FillerClassifier()
# classify and get category + confidence + filler
category, confidence, filler = clf.classify("อยากถามเรื่องบิลครับ")
# ("question", 0.872, "สักครู่นะคะ")
category, confidence, filler = clf.classify("ใช้งานไม่ได้เลย")
# ("complaint", 0.891, "ขออภัยด้วยน่ะคะ")
category, confidence, filler = clf.classify("ได้ครับ ตกลง")
# ("default", 0.845, "ได้เลยค่ะ")
# or just get the filler phrase directly
filler = clf.get_filler("มีโปรอะไรบ้างครับ")
# "ตรวจสอบให้นะคะ"
```
### Sharing the model
If you already have a `SentenceTransformer` instance loaded (e.g., for other tasks), pass it in to avoid loading twice:
```python
from sentence_transformers import SentenceTransformer
from filler_classifier import FillerClassifier
model = SentenceTransformer("intfloat/multilingual-e5-small")
clf = FillerClassifier(model=model)
```
## Customizing Fillers
Override `CATEGORY_FILLERS` to use your own phrases:
```python
import filler_classifier
filler_classifier.CATEGORY_FILLERS["complaint"] = ["ขออภัยค่ะ", "เข้าใจค่ะ"]
filler_classifier.CATEGORY_FILLERS["question"] = ["รอสักครู่นะคะ"]
```
## License
MIT
| text/markdown | null | "100x.fi" <kiri@100x.fi> | null | null | MIT | thai, nlp, filler, voice, classifier, embeddings, voice-bot | [
"Development Status :: 4 - Beta",
"Intended Audience :: Developers",
"License :: OSI Approved :: MIT License",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Topic :: Scientific/Engin... | [] | null | null | >=3.10 | [] | [] | [] | [
"sentence-transformers>=2.0",
"numpy>=1.20"
] | [] | [] | [] | [
"Homepage, https://github.com/100x-fi/filler-classifier",
"Repository, https://github.com/100x-fi/filler-classifier"
] | twine/6.2.0 CPython/3.14.3 | 2026-02-19T11:36:48.062075 | filler_classifier-0.2.6.tar.gz | 9,982 | 3d/25/6f89467d0890db432ddd039ca691c90e1ecd9cbf799be7c1403783c138af/filler_classifier-0.2.6.tar.gz | source | sdist | null | false | 567834b506cfae650a5bba721c4233a1 | 182d8a8c52ed573a91b534a3173f26f0d5fee82110b3618d4f9e6873b18717fe | 3d256f89467d0890db432ddd039ca691c90e1ecd9cbf799be7c1403783c138af | null | [
"LICENSE"
] | 226 |
2.4 | tariff_fetch | 0.1 | A CLI tool and a python library to simplify downloading electric and gas utility tariff data. | # Tariff Fetch
[](https://img.shields.io/github/v/release/switchbox-data/tariff_fetch)
[](https://github.com/switchbox-data/tariff_fetch/actions/workflows/main.yml?query=branch%3Amain)
[](https://img.shields.io/github/commit-activity/m/switchbox-data/tariff_fetch)
[](https://img.shields.io/github/license/switchbox-data/tariff_fetch)
A CLI tool and Python library to simplify downloading electric and gas utility tariff data from multiple providers in a consistent data format.
- **Github repository**: <https://github.com/switchbox-data/tariff_fetch/>
- **Documentation**: <https://switchbox-data.github.io/tariff_fetch/>
- **PyPI page**: <https://pypi.org/project/tariff_fetch/>
## Requirements
- Python 3.11+
- Credentials for the providers you intend to call:
- **Genability / Arcadia Data Platform**: `ARCADIA_APP_ID`, `ARCADIA_APP_KEY`
[Create an account](https://dash.genability.com/signup), navigate to [Applications dashboard](https://dash.genability.com/org/applications), create an application, then copy the Application ID and Key.
- **OpenEI**: `OPENEI_API_KEY`
Request a key at the [OpenEI API signup](https://openei.org/services/api/signup/). The key arrives by email.
- **RateAcuity Web Portal**: `RATEACUITY_USERNAME`, `RATEACUITY_PASSWORD`
There is no self-serve signup. [Contact RateAcuity](https://rateacuity.com/contact-us/) to request Web Portal access. No API key is required for `tariff_fetch`.
- Google Chrome or Chromium installed locally (for RateAcuity)
## Configuration
Populate a `.env` file (or export the variables manually). Only set the values you need.
```
ARCADIA_APP_ID=...
ARCADIA_APP_KEY=...
OPENEI_API_KEY=...
RATEACUITY_USERNAME=...
RATEACUITY_PASSWORD=...
```
## Running CLI with uvx
If you have [uv](https://github.com/astral-sh/uv/releases) installed, you can run the cli simply with
```bash
uvx --env-file=.env --from git+https://github.com/switchbox-data/tariff_fetch tariff-fetch
```
Or, for gas tariffs:
```bash
uvx --env-file=.env --from git+https://github.com/switchbox-data/tariff_fetch tariff-fetch-gas
```
## Installation
```bash
uv sync
source .venv/bin/activate
```
Alternative using plain `pip`:
```bash
python -m venv .venv
source .venv/bin/activate
pip install -e .
```
## Running the CLI
```bash
python -m tariff_fetch.cli [OPTIONS]
python -m tariff_fetch.cli_gas [OPTIONS]
```
With uv:
```bash
uv run tariff-fetch [OPTIONS]
uv run tariff-fetch-gas [OPTIONS]
```
With Just:
```bash
just cli
just cligas
```
Options:
- `--state` / `-s`: two-letter state abbreviation (default: prompt)
- `--providers` / `-p`: (only for electricity benchmarks) repeat per provider (`genability`, `openei`, `rateacuity`)
- `--output-folder` / `-o`: directory for exports (default: `./outputs`)
Omitted options will trigger interactive prompts.
### Examples
```bash
# Fully interactive run
uv run tariff-fetch
# Scripted run for Genability and OpenEI
uv run tariff-fetch \
--state ca \
--providers genability \
--providers openei \
--output-folder data/exports
```
The CLI suggests filenames like `outputs/openei_Utility_sector_detail-0_2024-03-18.json` before writing each file so you
can accept or override them.
| text/markdown | null | Switchbox <hello@switch.box> | null | null | null | python | [
"Intended Audience :: Developers",
"Programming Language :: Python",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: 3.13"
] | [] | null | null | <4.0,>=3.11 | [] | [] | [] | [
"fastexcel>=0.16.0",
"fuzzywuzzy>=0.18.0",
"pathvalidate>=3.3.1",
"polars>=1.34.0",
"pydantic>=2.12.5",
"python-dotenv>=1.1.1",
"python-levenshtein>=0.27.1",
"questionary>=2.1.1",
"requests>=2.32.5",
"rich>=14.2.0",
"selenium>=4.36.0",
"tenacity>=9.1.2",
"typedload>=2.38",
"typer>=0.19.2",... | [] | [] | [] | [
"Repository, https://github.com/switchbox-data/tariff_fetch",
"Documentation, https://switchbox-data.github.io/tariff_fetch/"
] | uv/0.10.0 {"installer":{"name":"uv","version":"0.10.0","subcommand":["publish"]},"python":null,"implementation":{"name":null,"version":null},"distro":{"name":"Ubuntu","version":"24.04","id":"noble","libc":null},"system":{"name":null,"release":null},"cpu":null,"openssl_version":null,"setuptools_version":null,"rustc_version":null,"ci":null} | 2026-02-19T11:34:37.525828 | tariff_fetch-0.1-py3-none-any.whl | 47,517 | 61/a6/a11474afd88cb1d1c287158ba217648c485b68272fe52ef438a6887b41ce/tariff_fetch-0.1-py3-none-any.whl | py3 | bdist_wheel | null | false | 373710fc0a2b38b22103facc9c914c10 | 7ff9e9530edbafcb36227dd62f2c74a69ce2e6d4466fdf312e073fcb75e60650 | 61a6a11474afd88cb1d1c287158ba217648c485b68272fe52ef438a6887b41ce | null | [
"LICENSE"
] | 0 |
2.4 | testtools | 2.8.4 | Extensions to the Python standard library unit testing framework | ======================================
testtools: tasteful testing for Python
======================================
testtools is a set of extensions to the Python standard library's unit testing
framework. These extensions have been derived from many years of experience
with unit testing in Python and come from many different sources.
What better way to start than with a contrived code snippet?::
from testtools import TestCase
from testtools.content import Content
from testtools.content_type import UTF8_TEXT
from testtools.matchers import Equals
from myproject import SillySquareServer
class TestSillySquareServer(TestCase):
def setUp(self):
super(TestSillySquareServer, self).setUp()
self.server = self.useFixture(SillySquareServer())
self.addCleanup(self.attach_log_file)
def attach_log_file(self):
self.addDetail(
'log-file',
Content(UTF8_TEXT,
lambda: open(self.server.logfile, 'r').readlines()))
def test_server_is_cool(self):
self.assertThat(self.server.temperature, Equals("cool"))
def test_square(self):
self.assertThat(self.server.silly_square_of(7), Equals(49))
Why use testtools?
==================
Matchers: better than assertion methods
---------------------------------------
Of course, in any serious project you want to be able to have assertions that
are specific to that project and the particular problem that it is addressing.
Rather than forcing you to define your own assertion methods and maintain your
own inheritance hierarchy of ``TestCase`` classes, testtools lets you write
your own "matchers", custom predicates that can be plugged into a unit test::
def test_response_has_bold(self):
# The response has bold text.
response = self.server.getResponse()
self.assertThat(response, HTMLContains(Tag('bold', 'b')))
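Roughly, a matcher is an object with a ``match`` method that returns ``None`` on
success or a ``Mismatch`` describing the failure. A minimal sketch of a custom
matcher (the class and the divisibility check are illustrative, not part of
testtools)::

    from testtools import TestCase
    from testtools.matchers import Matcher, Mismatch

    class IsDivisibleBy(Matcher):
        """Matches if the examined integer is divisible by ``divisor``."""

        def __init__(self, divisor):
            self.divisor = divisor

        def __str__(self):
            return 'IsDivisibleBy(%d)' % self.divisor

        def match(self, actual):
            remainder = actual % self.divisor
            if remainder != 0:
                return Mismatch(
                    '%d is not divisible by %d (remainder %d)'
                    % (actual, self.divisor, remainder))
            return None

    class TestDivisibility(TestCase):

        def test_twelve(self):
            self.assertThat(12, IsDivisibleBy(3))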
More debugging info, when you need it
--------------------------------------
testtools makes it easy to add arbitrary data to your test result. If you
want to know what's in a log file when a test fails, or what the load was on
the computer when a test started, or what files were open, you can add that
information with ``TestCase.addDetail``, and it will appear in the test
results if that test fails.
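For instance, a minimal sketch of attaching a detail directly from a test, using
``testtools.content.text_content`` (the detail name and text are illustrative)::

    from testtools import TestCase
    from testtools.content import text_content

    class TestWithDetails(TestCase):

        def test_server_load(self):
            # The attached detail shows up in the test result if this test fails.
            self.addDetail('server-load', text_content('load average: 0.42'))
            self.assertEqual(4, 2 + 2)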
Extend unittest, but stay compatible and re-usable
--------------------------------------------------
testtools goes to great lengths to allow serious test authors and test
*framework* authors to do whatever they like with their tests and their
extensions while staying compatible with the standard library's unittest.
testtools has completely parametrized how exceptions raised in tests are
mapped to ``TestResult`` methods and how tests are actually executed (ever
wanted ``tearDown`` to be called regardless of whether ``setUp`` succeeds?)
It also provides many simple but handy utilities, like the ability to clone a
test, a ``MultiTestResult`` object that lets many result objects get the
results from one test suite, adapters to bring legacy ``TestResult`` objects
into our new golden age.
Cross-Python compatibility
--------------------------
testtools gives you the very latest in unit testing technology in a way that
will work with Python 3.10+ and PyPy3.
If you wish to use testtools with Python 2.4 or 2.5, then please use testtools
0.9.15.
If you wish to use testtools with Python 2.6 or 3.2, then please use testtools
1.9.0.
If you wish to use testtools with Python 3.3 or 3.4, then please use testtools 2.3.0.
If you wish to use testtools with Python 2.7 or 3.5, then please use testtools 2.4.0.
| text/x-rst | null | "Jonathan M. Lange" <jml+testtools@mumak.net> | null | null | null | null | [
"Development Status :: 5 - Production/Stable",
"Intended Audience :: Developers",
"License :: OSI Approved :: MIT License",
"Operating System :: OS Independent",
"Programming Language :: Python",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3 :: Only",
"Programming Languag... | [] | null | null | >=3.10 | [] | [] | [] | [
"mypy>=1.0.0; extra == \"dev\"",
"ruff==0.14.11; extra == \"dev\"",
"typing-extensions; python_version < \"3.11\" and extra == \"dev\"",
"testresources; extra == \"test\"",
"testscenarios; extra == \"test\"",
"fixtures; extra == \"twisted\"",
"twisted; extra == \"twisted\""
] | [] | [] | [] | [
"Homepage, https://github.com/testing-cabal/testtools"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-19T11:34:17.409735 | testtools-2.8.4.tar.gz | 219,143 | af/5c/ab3068990dc51fe26b1d3d76146f2b9cebaa36de855ea37e5780fd2d22c7/testtools-2.8.4.tar.gz | source | sdist | null | false | bcdabcb7719b3cee5a889b5165301ae1 | 76c32db2f52c4dcd539e2fbf31686ca1f9c2f5626726c8844dff7893e7122e3f | af5cab3068990dc51fe26b1d3d76146f2b9cebaa36de855ea37e5780fd2d22c7 | null | [
"LICENSE"
] | 35,935 |
2.4 | distributed-a2a | 0.1.12rc0 | A library for building A2A agents with routing capabilities | # A2A Agent Library
A Python library for building A2A (Agent-to-Agent) agents with routing capabilities, DynamoDB-backed registry, and LangChain integration.
## Features
- **StatusAgent**: Base agent implementation with status tracking and structured responses
- **RoutingAgentExecutor**: Agent executor with intelligent routing capabilities
- **DynamoDB Registry**: Dynamic agent card registry with heartbeat mechanism
- **Server Utilities**: FastAPI application builder with A2A protocol support
- **LangChain Integration**: Built on LangChain for flexible model integration
## Installation
```bash
pip install distributed-a2a
```
## Quick Start
1. Start a server with your agent application:
```python
from distributed_a2a import load_app
from a2a.types import AgentSkill
# Define your agent's skills
skills = [
AgentSkill(
id='example_skill',
name='Example Skill',
description='An example skill',
tags=['example']
)
]
# Create your agent application
app = load_app(
name="MyAgent",
description="My specialized agent",
skills=skills,
api_key="your-api-key",
system_prompt="You are a helpful assistant...",
host="http://localhost:8000"
)
```
2. Send a request with the client
```python
from uuid import uuid4
from distributed_a2a import RoutingA2AClient
if __name__ == "__main__":
import asyncio
request = "Tell me the weather in Bonn"
client = RoutingA2AClient("http://localhost:8000")
response: str = asyncio.run(client.send_message(request, str(uuid4())))
print(response)
```
## Requirements
- Python 3.10+
- langchain
- langchain-core
- langchain-openai
- langgraph
- pydantic
- boto3
- a2a
## License
MIT
## Contributing
Contributions are welcome! Please feel free to submit a Pull Request.
| text/markdown | Fabian Bell | Fabian Bell <fabian.bell@barrabytes.com> | null | null | MIT | null | [
"Development Status :: 3 - Alpha",
"Intended Audience :: Developers",
"Topic :: Software Development :: Libraries :: Python Modules",
"License :: OSI Approved :: MIT License",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Pr... | [] | https://github.com/Barra-Technologies/distributed-a2a | null | >=3.10 | [] | [] | [] | [
"langchain==1.2.3",
"langchain-core==1.2.7",
"langchain-openai==1.1.7",
"langchain_mcp_adapters==0.2.1",
"langgraph==1.0.5",
"langgraph-dynamodb-checkpoint==0.2.6.4",
"pydantic==2.12.5",
"boto3==1.42.25",
"a2a-sdk==0.3.22",
"a2a-types==0.1.0",
"build==1.4.0",
"twine==6.2.0",
"fastapi",
"uv... | [] | [] | [] | [
"Homepage, https://github.com/Barra-Technologies/distributed-a2a",
"Repository, https://github.com/Barra-Technologies/distributed-a2a"
] | twine/6.2.0 CPython/3.14.0 | 2026-02-19T11:33:49.833191 | distributed_a2a-0.1.12rc0.tar.gz | 15,546 | 43/49/10d64b18fc6c49552bfe6bf6df03fbbb0d6162145148cdf3339f58e5ad8e/distributed_a2a-0.1.12rc0.tar.gz | source | sdist | null | false | 642dd6b38ef53f96d85dc3dd29c9400b | bf2a765db27fe0559f283b427d9259685c598f5d210329e29a4323c6e21cf0f7 | 434910d64b18fc6c49552bfe6bf6df03fbbb0d6162145148cdf3339f58e5ad8e | null | [
"LICENSE"
] | 215 |
2.1 | scalexi-llm | 0.1.22 | A comprehensive multi-provider LLM proxy library with unified interface | # ScaleXI LLM
A production-ready, multi-provider LLM proxy that gives you **one unified API** for many different model providers.
- **9 Providers**: OpenAI, Anthropic (Claude), Google (Gemini), Groq, DeepSeek, Alibaba/Qwen, Grok, local **Ollama**, and **RunPod (native API)**
- **60+ Model Configurations**: Pricing, limits, and capabilities encoded in a single model registry
- **Structured Outputs**: Pydantic schemas with intelligent fallbacks and validation
- **Vision & Files**: Image analysis, PDF/DOCX/TXT/JSON handling, and automatic vision fallbacks
- **Web Search**: Exa + SERP (Google) integration for retrieval-augmented generation (optionally restricted to a single domain)
- **Fallbacks & Reliability**: Provider-best and global-standard fallbacks, plus detailed error logging
- **LangSmith Tracing**: Optional built-in observability — set `enable_tracing=True` and every call is traced
This package is ideal when you want a **single, consistent interface** to multiple LLM vendors, with:
- Centralized configuration for models and costs
- Unified ask function (`ask_llm`) that works across providers
- Built-in support for web search, files, and images
- Optional local-only workflows via Ollama
- Optional LangSmith tracing with token/cost tracking
## Installation
```bash
pip install scalexi_llm
```
## Quick Example
```python
from scalexi_llm import LLMProxy
llm = LLMProxy()
response, execution_time, token_usage, cost = llm.ask_llm(
model_name="chatgpt-4o-latest",
system_prompt="You are a helpful assistant.",
user_prompt="Explain quantum computing in simple terms."
)
print(response)
```
## Model Listing
Inspect all registered models and their metadata (provider, pricing, limits, capabilities):
```python
import json
from scalexi_llm import LLMProxy
llm = LLMProxy(verbose=0)
models = llm.list_available_models()
print(json.dumps(models, indent=2))
```
## Domain-Restricted Web Search
```python
from scalexi_llm import LLMProxy
llm = LLMProxy()
response, _, _, _ = llm.ask_llm(
model_name="gpt-5-mini",
user_prompt="Find the admissions requirements",
websearch=True,
search_tool="both",
search_domain="binbaz.org.sa"
)
```
## LangSmith Tracing (Optional)
```python
from scalexi_llm import LLMProxy
# pip install langsmith (+ set LANGSMITH_API_KEY in .env)
llm = LLMProxy(enable_tracing=True)
response, exec_time, token_usage, cost = llm.ask_llm(
model_name="chatgpt-4o-latest",
user_prompt="What is quantum computing?"
)
# Token usage, cost, provider, and model are automatically logged to LangSmith
```
## Features at a Glance
- One `LLMProxy` class for all providers
- Unified `ask_llm` API for text, files, images, and web search (with file/image fallbacks for providers like RunPod/Ollama)
- Pydantic-based structured outputs with retry and model fallbacks
- Vision fallback when a chosen model doesn't support images
- Token and cost accounting for every call
- Optional LangSmith tracing with zero-code setup (`enable_tracing=True`)
- Comprehensive test suite (`provider_test.py`, `ollama_test.py`, `combined_test.py`)
| text/markdown | scalex_innovation | scalex_innovation@gmail.com | null | null | null | llm, ai, openai, anthropic, gemini, groq, deepseek, grok, qwen, ollama, exa, proxy, api | [
"Development Status :: 3 - Alpha",
"Intended Audience :: Developers",
"License :: OSI Approved :: MIT License",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.8",
"Programming Language :: Python :: 3.9",
"Programming Language :: Python :: 3.10",
"Programming Language :: Py... | [] | https://github.com/scalexi/scalexi_llm | null | >=3.8 | [] | [] | [] | [
"openai",
"anthropic",
"google-genai",
"groq",
"pymupdf",
"xai-sdk",
"python-docx",
"pydantic",
"python-dotenv",
"exa-py",
"google-search-results"
] | [] | [] | [] | [] | twine/6.2.0 CPython/3.13.9 | 2026-02-19T11:33:34.244724 | scalexi_llm-0.1.22.tar.gz | 29,582 | d5/11/b44605ca29423f94252f182f318757d2b775272d1be3f227011e4eba09ca/scalexi_llm-0.1.22.tar.gz | source | sdist | null | false | f7994a069d77200737ebcca34f935646 | 0d9a66a3051a6069c817dfa09642a7d2af49c5e4aac7de6cdc0d2c8a142fbad1 | d511b44605ca29423f94252f182f318757d2b775272d1be3f227011e4eba09ca | null | [] | 241 |
2.4 | yt-framework | 1.1.1 | YTsaurus pipeline framework with utilities and common modules | # YT Framework

[](https://pypi.org/project/yt-framework/)


**[PyPI](https://pypi.org/project/yt-framework/) | [Documentation](https://yt-framework.readthedocs.io/en/latest/) | [Examples](https://github.com/GregoryKogan/yt-framework/tree/main/examples)**
---
## Overview
A powerful Python framework for building and executing data processing pipelines on [YTsaurus](https://ytsaurus.tech/) (YT) clusters. YT Framework simplifies pipeline development with automatic stage discovery, seamless dev/prod mode switching, and comprehensive support for YT operations.
## Architecture
YT Framework follows a pipeline-based architecture where pipelines consist of stages, and stages execute operations.
**Key Components:**
- **Pipeline**: Orchestrates stages, their execution order, and configuration management
- **Stages**: Reusable units of work that execute operations
- **Operations**: Specific tasks (Map, Vanilla, YQL, S3, Table operations)
- **Configuration**: YAML-based configuration system for flexible pipeline setup
## Key Features
- **Pipeline & Stage Architecture**: Organize complex workflows into reusable stages
- **Automatic Stage Discovery**: No manual registration needed - just create stages and run
- **Dev/Prod Modes**: Develop locally with file system simulation, deploy to YT cluster seamlessly
- **Multiple Operation Types**: Support for Map, Vanilla, YQL, and S3 operations
- **Code Upload**: Automatic code packaging and deployment to YT cluster
- **Docker Support**: Custom Docker images for special dependencies
- **Checkpoint Management**: Built-in support for ML model checkpoints
- **Configuration Management**: Flexible YAML-based configuration with multiple config support
## Installation
### For Users
Install from [PyPI](https://pypi.org/project/yt-framework/):
```bash
pip install yt-framework
```
### For Developers and Contributors
Install in editable mode from source:
```bash
git clone https://github.com/GregoryKogan/yt-framework.git
cd yt-framework
pip install -e .
```
For development with testing tools:
```bash
pip install -e ".[dev]"
```
See [Installation Guide](https://yt-framework.readthedocs.io/en/latest/#installation) for prerequisites and detailed setup instructions.
## Quick Start
Create your first pipeline in 3 steps:
**What you'll build:** A simple pipeline that creates a stage, logs a message, and demonstrates the basic framework structure.
1. **Create pipeline structure**:
```bash
mkdir my_pipeline && cd my_pipeline
mkdir -p stages/my_stage configs
```
2. **Create `pipeline.py`**:
```python
from yt_framework.core.pipeline import DefaultPipeline
if __name__ == "__main__":
DefaultPipeline.main()
```
3. **Create stage and config**:
```python
# stages/my_stage/stage.py
from yt_framework.core.stage import BaseStage
class MyStage(BaseStage):
def run(self, debug):
self.logger.info("Hello from YT Framework!")
return debug
```
```yaml
# configs/config.yaml
stages:
enabled_stages:
- my_stage
pipeline:
mode: "dev" # Use "dev" for local development
```
**Run your pipeline:**
```bash
python pipeline.py
```
**Next Steps:**
- See the [Quick Start Guide](https://yt-framework.readthedocs.io/en/latest/#quick-start) for a complete example with table operations
- Explore [Examples](https://github.com/GregoryKogan/yt-framework/tree/main/examples) to see more complex use cases
- Read about [Pipelines and Stages](https://yt-framework.readthedocs.io/en/latest/pipelines-and-stages.html) in the documentation
## Examples
The [`examples/`](https://github.com/GregoryKogan/yt-framework/tree/main/examples) directory contains comprehensive examples demonstrating most framework features.
Each example includes a README explaining what it demonstrates and how to run it.
## Requirements
### Prerequisites Checklist
- [ ] **Python 3.11+** installed
- [ ] **YT cluster access and credentials** (for production mode)
### YT Cluster Requirements
When running pipelines in production mode, code from `ytjobs` executes on YT cluster nodes. The cluster's Docker image (default or custom) must include:
- **Python 3.11+**
- **ytsaurus-client** >= 0.13.0 (for checkpoint operations)
- **boto3** == 1.35.99 (for S3 operations)
- **botocore** == 1.35.99 (auto-installed with boto3)
**Important:** Ensure your cluster's default Docker image satisfies these dependencies, or always use custom Docker images for your pipelines. See [Cluster Requirements](https://yt-framework.readthedocs.io/en/latest/configuration/cluster-requirements.html) and [Custom Docker Images](https://yt-framework.readthedocs.io/en/latest/advanced/docker.html) for details.
## Documentation
**Full documentation available at: [yt-framework.readthedocs.io](https://yt-framework.readthedocs.io/en/latest/)**
For local development, source documentation is available in the [`docs/`](docs/) directory.
**[Examples](https://github.com/GregoryKogan/yt-framework/tree/main/examples)** - Complete working examples for most features
## Getting Help
- **Documentation**: Check the [full documentation](https://yt-framework.readthedocs.io/en/latest/) for detailed guides
- **Troubleshooting**: See the [Troubleshooting Guide](https://yt-framework.readthedocs.io/en/latest/troubleshooting/index.html) for common issues
- **Examples**: Browse [working examples](https://github.com/GregoryKogan/yt-framework/tree/main/examples) to see how features are used
- **GitHub Issues**: Report bugs or request features on [GitHub Issues](https://github.com/GregoryKogan/yt-framework/issues)
- **Questions**: Open a GitHub issue with the `question` label
## Contributing
We welcome contributions! Whether it's bug fixes, new features, documentation improvements, or examples, your help makes YT Framework better.
See [CONTRIBUTING.md](CONTRIBUTING.md) for detailed guidelines.
| text/markdown | null | Artem Zavarzin <artemutz555@gmail.com>, Gregory Koganovsky <g.koganovsky@gmail.com> | null | Gregory Koganovsky <g.koganovsky@gmail.com>, Artem Zavarzin <artemutz555@gmail.com> | null | ytsaurus, yt, pipeline, data-processing, map-reduce, distributed-computing, big-data | [
"Development Status :: 4 - Beta",
"Intended Audience :: Developers",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.11",
"Topic :: Software Development :: Libraries :: Python Modules",
"Topic :: Scientific/Engineering",
"Topic :: System :: Distributed Computing"
] | [] | null | null | >=3.11 | [] | [] | [] | [
"botocore==1.35.99",
"boto3==1.35.99",
"ytsaurus-client>=0.13.0",
"python-decouple>=3.8",
"duckdb>=1.0.0",
"omegaconf>=2.3.0",
"pytest>=7.0.0; extra == \"dev\"",
"pytest-cov>=4.0.0; extra == \"dev\"",
"sphinx>=7.2.0; extra == \"docs\"",
"myst-parser[linkify]>=2.0.0; extra == \"docs\"",
"pydata-s... | [] | [] | [] | [
"Repository, https://github.com/GregoryKogan/yt-framework",
"Documentation, https://yt-framework.readthedocs.io/en/latest/",
"Issues, https://github.com/GregoryKogan/yt-framework/issues"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-19T11:33:19.901638 | yt_framework-1.1.1.tar.gz | 72,771 | ae/cf/5ab0b34fde4a62e2ac973beed0f37dda9f20bc004b3ebb901f99f71d72c3/yt_framework-1.1.1.tar.gz | source | sdist | null | false | 18db9ecc3075dcaee1209756d2362915 | c89b7cf3f68cb6e74a4035215395a7d16d0c08bc5a640288a65d9a2063b77e55 | aecf5ab0b34fde4a62e2ac973beed0f37dda9f20bc004b3ebb901f99f71d72c3 | Apache-2.0 | [
"LICENSE"
] | 232 |
2.4 | dataio-artpark | 0.4.0b16 | Postgres and FASTAPI based Dataset Management System (DMS), with a CLI and SDK for interacting with the API. | # dataio-artpark
Dataio is a Postgres and FASTAPI based Dataset Management System (DMS) for users to access and manage datasets distributed by the Data Science Innovation Hub, ARTPARK. The scaffolding can be used to build a similar system for your own datasets.
You can find the documentation for the project [here](https://dataio.artpark.ai), and we're also on PyPI [here](https://pypi.org/project/dataio-artpark/).
## Installation
Install the project using pip:
```bash
pip install dataio-artpark
```
or using uv:
```bash
uv add dataio-artpark
```
## Development
We use uv to manage the project. Clone the repository and run:
```bash
uv sync
```
## How to set up the local dev environment.
Run the command below to set up the DB. API keys for users will be generated in the db/init/data_inserts folder.
```
bash ./src/dataio/db/init/recreate_full.sh
```
Starting the API Server
```
uv run fastapi dev src/dataio/api
```
To start with logging & autoreload enabled
```
uvicorn src.dataio.api.main:app --log-config log_config.yml --reload
```
| text/markdown | null | Akhil B <akhil@artpark.in>, Sneha S <sneha@artpark.in> | null | null | null | null | [] | [] | null | null | >=3.12 | [] | [] | [] | [
"pytest>=8.4.1",
"python-dotenv>=1.1.0",
"pyyaml>=6.0.2",
"requests>=2.32.4",
"tabulate>=0.9.0",
"typer>=0.15.0"
] | [] | [] | [] | [
"Homepage, https://dataio.artpark.ai",
"Github, https://github.com/dsih-artpark/dataio"
] | uv/0.10.4 {"installer":{"name":"uv","version":"0.10.4","subcommand":["publish"]},"python":null,"implementation":{"name":null,"version":null},"distro":{"name":"Ubuntu","version":"24.04","id":"noble","libc":null},"system":{"name":null,"release":null},"cpu":null,"openssl_version":null,"setuptools_version":null,"rustc_version":null,"ci":true} | 2026-02-19T11:31:50.563354 | dataio_artpark-0.4.0b16-py3-none-any.whl | 28,956 | f9/74/3376172a3e2d9e862a06e29d6f58a7ad7edb46221c420d45787263f66d3f/dataio_artpark-0.4.0b16-py3-none-any.whl | py3 | bdist_wheel | null | false | 3d65158764fe82e67d3844fef0764f69 | 45f99c656617e83ad1cdb6bdaa25b04c93b2153b72d1e0c47693d9e8dfc54967 | f9743376172a3e2d9e862a06e29d6f58a7ad7edb46221c420d45787263f66d3f | AGPL-3.0-or-later | [
"LICENSE"
] | 209 |
2.4 | stata-ai-fusion | 0.1.0 | MCP Server + Skill for Stata: execute commands, inspect data, and generate high-quality Stata code with AI | # Stata AI Fusion
MCP Server + Skill Knowledge Base + VS Code Extension for Stata.
A three-in-one Stata AI integration: let AI directly execute Stata code,
generate high-quality statistical analysis code, and provide a complete IDE
experience in VS Code.
## Features
- **MCP Server**: 10 tools that let AI agents operate Stata directly
- `run_command` / `run_do_file` -- execute code
- `inspect_data` / `codebook` -- data exploration
- `get_results` -- extract r()/e()/c() results
- `export_graph` -- graph export with auto-capture
- `search_log` / `install_package` -- utility tools
- `list_sessions` / `close_session` -- session management
- **Skill Knowledge Base**: 5,600+ lines of Stata knowledge
- Econometrics, causal inference, survival analysis, clinical data analysis
- 14 reference documents with Progressive Disclosure architecture
- **VS Code Extension**: complete Stata IDE experience
- Syntax highlighting (350+ functions)
- 30 code snippets
- Run code / .do files with one keypress
- Graph preview panel
## Quick Start
### MCP Server (Claude Desktop / Claude Code / Cursor)
```bash
# One-command launch with uvx
uvx --from stata-ai-fusion stata-ai-fusion
```
Or configure in your AI assistant's MCP settings:
```json
{
"mcpServers": {
"stata": {
"command": "uvx",
"args": ["--from", "stata-ai-fusion", "stata-ai-fusion"]
}
}
}
```
### VS Code Extension
```bash
code --install-extension stata-ai-fusion-0.1.0.vsix
```
### Skill Only (Claude.ai)
Download `stata-ai-fusion-skill.zip` from the
[Releases](https://github.com/SexyERIC0723/stata-ai-fusion/releases) page,
then upload via Claude.ai Settings > Skills.
## Requirements
- Stata 17+ installed locally (MP, SE, IC, or BE)
- Python 3.11+ (for MCP server)
- VS Code 1.85+ (for extension)
## Supported Platforms
- macOS (Intel & Apple Silicon)
- Linux
- Windows
## Development
```bash
git clone https://github.com/SexyERIC0723/stata-ai-fusion.git
cd stata-ai-fusion
uv sync
uv run pytest tests/ -v
```
## License
MIT -- see [LICENSE](LICENSE) for details.
| text/markdown | null | null | null | null | MIT | ai, econometrics, mcp, stata, statistics | [
"Development Status :: 3 - Alpha",
"Intended Audience :: Science/Research",
"Programming Language :: Python :: 3.11",
"Topic :: Scientific/Engineering :: Mathematics"
] | [] | null | null | >=3.11 | [] | [] | [] | [
"anyio>=4.0.0",
"mcp>=1.0.0",
"pexpect>=4.9.0",
"pytest-asyncio>=0.23; extra == \"dev\"",
"pytest>=8.0; extra == \"dev\"",
"ruff>=0.4.0; extra == \"dev\""
] | [] | [] | [] | [] | uv/0.6.6 | 2026-02-19T11:31:29.772482 | stata_ai_fusion-0.1.0.tar.gz | 98,413 | d4/45/8d653e84657bb24a59ef893fd144a2e4473a3221b730e18872263038b91c/stata_ai_fusion-0.1.0.tar.gz | source | sdist | null | false | 26a5f87b67ae07230e454438b044bd8f | 2cf13c44967efa7aa621e17db7188880551131707f7d4abe840c427f827fc678 | d4458d653e84657bb24a59ef893fd144a2e4473a3221b730e18872263038b91c | null | [
"LICENSE"
] | 233 |
2.4 | tftree-by-ritesh | 1.0.1 | Terraform Project Tree Generator CLI Tool | # 🚀 TFTree --- Terraform Project Tree Generator CLI
> A professional-grade CLI tool to generate clean, structured, and
> shareable Terraform project trees --- with smart exclude support,
> colored output, and Markdown export.
------------------------------------------------------------------------




------------------------------------------------------------------------
## 📌 Why TFTree?
Terraform projects often include:
- `.terraform/`
- `terraform.tfstate`
- `.terraform.lock.hcl`
- `.git/`
- Provider binaries
- Deeply nested modules
Sharing structure manually becomes messy and unreadable.
🔥 **TFTree solves this problem** by generating a clean, structured tree
view of your infrastructure project --- ready for documentation,
sharing, and audits.
------------------------------------------------------------------------
# ✨ Features
✅ Beautiful tree-style output\
✅ 🎨 Colored CLI output\
✅ Optional file content preview\
✅ Smart exclude support (like `.gitignore`)\
✅ Wildcard pattern support (`*.exe`, `.terraform*`)\
✅ Exclude via file (`--exclude-file`)\
✅ Markdown export mode (`--markdown`)\
✅ Output to file (`-o`)\
✅ Lightweight & Fast\
✅ Installable as a CLI tool
------------------------------------------------------------------------
# 📦 Installation
## 🔹 Local Install (Development Mode)
``` bash
python -m pip install -e .
```
## 🔹 Standard Install
``` bash
python -m pip install .
```
## 🔹 After PyPI Publish (Global Install)
``` bash
pip install tftree
```
------------------------------------------------------------------------
# 🚀 Usage
## Basic Usage
``` bash
tftree <project-folder>
```
Example:
``` bash
tftree .
```
------------------------------------------------------------------------
## 📁 Structure Only (No File Content)
``` bash
tftree . --no-content
```
------------------------------------------------------------------------
## 🚫 Exclude Files & Folders
### Direct Patterns
``` bash
tftree . --exclude .terraform .git terraform.tfstate *.exe
```
### Using Exclude File
``` bash
tftree . --exclude-file exclude.txt
```
Example `exclude.txt`:
.terraform
.git
terraform.tfstate
*.exe
.terraform.lock.hcl
Supported:
- Exact file names
- Folder names
- Wildcards
------------------------------------------------------------------------
## 💾 Save Output to File
``` bash
tftree . -o infra_tree.txt
```
------------------------------------------------------------------------
## 📝 Markdown Export Mode
Generate Markdown-ready structure:
``` bash
tftree . --markdown -o structure.md
```
Perfect for:
- GitHub documentation
- Wiki pages
- Confluence
- Client documentation
------------------------------------------------------------------------
# 🖥 Example Output
``` text
📁 Terraform Project: infra
├── 📁 modules
│   ├── 📄 main.tf
│   │     resource "azurerm_resource_group" "rg" {
│   │       name     = "example"
│   │       location = "East US"
│   │     }
│   └── 📄 variables.tf
│         variable "location" {
│           type = string
│         }
└── 📄 provider.tf
```
------------------------------------------------------------------------
# ⚙️ CLI Options
| Option | Description |
| --- | --- |
| `--no-content` | Show only folder/file structure |
| `--exclude` | Space-separated patterns to ignore |
| `--exclude-file` | Load exclude patterns from file |
| `--markdown` | Export output in Markdown format |
| `-o`, `--output` | Write output to file |
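Options can be combined; for example, to export a Markdown tree of the current project while skipping state and VCS entries:
``` bash
tftree . --exclude .terraform .git terraform.tfstate --markdown -o structure.md
```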
------------------------------------------------------------------------
# 🏗 Project Structure
``` text
tftree/
│
├── tftree/
│   ├── __init__.py
│   └── cli.py
│
├── pyproject.toml
└── README.md
```
------------------------------------------------------------------------
# 🔥 DevOps Use Cases
- Share Terraform structure in tickets
- Infrastructure documentation
- CI/CD pipeline documentation
- Client infrastructure overview
- Audit reporting
- Pre-deployment reviews
------------------------------------------------------------------------
# 🌍 PyPI Publishing
Once published to PyPI, anyone can install globally:
``` bash
pip install tftree
```
This makes TFTree a globally accessible DevOps utility tool.
------------------------------------------------------------------------
# 🧠 Roadmap
- `.treeignore` auto-detection
- `--max-depth` option
- Terraform-only mode (`*.tf`)
- JSON export
- GitHub Action integration
- Auto documentation mode
------------------------------------------------------------------------
# 👨‍💻 Author
**Ritesh Sharma**\
DevOps \| Azure \| Terraform \| Kubernetes
------------------------------------------------------------------------
# 📄 License
MIT License
------------------------------------------------------------------------
# ⭐ Support
If you find this tool useful:
⭐ Star the repository\
🚀 Share with DevOps community\
🛠 Contribute improvements
------------------------------------------------------------------------
> Built with ❤️ for DevOps Engineers
| text/markdown | Ritesh Sharma | null | null | null | MIT | null | [] | [] | null | null | >=3.8 | [] | [] | [] | [
"rich>=13.0.0"
] | [] | [] | [] | [
"Homepage, https://github.com/riteshatri/tftree"
] | twine/6.2.0 CPython/3.11.9 | 2026-02-19T11:31:24.088592 | tftree_by_ritesh-1.0.1.tar.gz | 5,649 | 0b/32/e3035e1a6ef679e3b0ae0b066468c4e489c8598d387f4dd05d07d7ce9ff2/tftree_by_ritesh-1.0.1.tar.gz | source | sdist | null | false | ed670caa066511ad2a450476d1e0c8f0 | f4955b609955d894be483cb897343e2b2657abdcc6ce2b4862b48fdfa5ad1fb8 | 0b32e3035e1a6ef679e3b0ae0b066468c4e489c8598d387f4dd05d07d7ce9ff2 | null | [
"LICENSE"
] | 104 |
2.4 | fastogram | 0.0.3 | Opinionated FastAPI + Aiogram project generator (FastGram template) | # Fastogram
Opinionated FastAPI + Aiogram project generator — like `django-admin startproject` for FastGram.
## Installation
```bash
pip install fastogram
```
Or with uv:
```bash
uv add fastogram
```
## Usage
Create a new project:
```bash
fastogram new my-telegram-bot
```
This creates `my-telegram-bot/` with the full FastGram template. Then:
```bash
cd my-telegram-bot
uv sync
# Edit .env with your TELEGRAM_BOT_TOKEN
python manage.py setup
python manage.py run --reload
```
### Options
```bash
fastogram new my-bot # Create ./my-bot/
fastogram new . # Scaffold in current directory (no subfolder)
fastogram new my-bot -d ~/code # Create ~/code/my-bot/
```
---
## For maintainers: syncing the template
When you update FastGram (architecture changes, bug fixes, etc.), sync the template into the CLI so `pip install fastogram` users get the latest:
```bash
cd cli/FastoGram
python scripts/sync_template.py
```
Or with a custom FastGram path:
```bash
FASTGRAM_SOURCE=/path/to/FastGram python scripts/sync_template.py
```
The sync copies FastGram → `src/fastogram/templates/fastgram/`, excluding `.git`, `.venv`, and other generated files. Rebuild and publish a new fastogram release after syncing.
### Publishing to PyPI
```bash
# 1. Create .pypi-token with your PyPI API token (from pypi.org/manage/account/token/)
echo "pypi-your-token-here" > .pypi-token
# 2. Publish
make publish
```
`.pypi-token` is in `.gitignore` — it will never be committed.
| text/markdown | null | Abubeker Afdel <ibnuafdel@gmail.com> | null | null | MIT | null | [] | [] | null | null | >=3.10 | [] | [] | [] | [] | [] | [] | [] | [] | twine/6.2.0 CPython/3.13.11 | 2026-02-19T11:30:47.646448 | fastogram-0.0.3.tar.gz | 143,080 | 98/e8/f4e3b52fb74237c6f98d945fd88ee1c4c093a0afacd994ab5cb4aefbbae9/fastogram-0.0.3.tar.gz | source | sdist | null | false | df0c8c936489ebd6f5859b1e39723f67 | d3bf142d2d7e9767df32ffeba0c6319ec4c517bba86b69643b292c4b471b8caa | 98e8f4e3b52fb74237c6f98d945fd88ee1c4c093a0afacd994ab5cb4aefbbae9 | null | [] | 231 |
2.4 | acc-fwu | 0.2.2 | A tool to update Linode/ACC firewall rules with your current IP address. | # acc-firewall_updater
A tool to automatically update the [Akamai Connected Cloud (ACC) / Linode](https://www.akamai.com/cloud) firewall rules to allow your IP.
[](https://github.com/johnybradshaw/acc-firewall_updater/actions/workflows/python-app.yml)
[](https://badge.fury.io/py/acc-fwu)
## Description
`acc-fwu` is a command-line tool to automatically update [Linode](https://www.linode.com)/ACC firewall rules with your current IP address. This is particularly useful for dynamically updating firewall rules to allow access from changing IP addresses, like when you visit the gym or you're sitting in an airport.
## Features
- Automatically detects your current public IP address
- Creates firewall rules for TCP, UDP, and ICMP protocols
- Saves configuration for easy subsequent usage
- Supports dry-run mode to preview changes
- Quiet mode for cron jobs and automation
- Debug mode for troubleshooting
- Input validation for security
- Secure configuration file storage (owner-only permissions)
- **Interactive firewall selection** - List and choose from available firewalls
- **Add mode** - Accumulate multiple IP addresses (ideal for traveling)
## Prerequisites
- Python 3.6 or higher
- [Linode CLI](https://www.linode.com/docs/products/tools/cli/get-started/) configured with an API token
- A Linode/ACC firewall ID
## Installation
You can install the package via `pip` or `pipx`:
```bash
pipx install acc-fwu
```
Alternatively, you can install it directly from the source:
```bash
git clone https://github.com/johnybradshaw/acc-firewall_updater.git
cd acc-firewall_updater
pip install --use-pep517 .
```
For development installation, see [BUILD.md](BUILD.md).
## Usage
### First-time Setup
The first time you use `acc-fwu`, you'll need to provide your Linode/ACC Firewall ID and *optionally* the label for the rule you want to create or update:
```bash
acc-fwu --firewall_id <FIREWALL_ID> --label <RULE_LABEL>
```
For example:
```bash
acc-fwu --firewall_id 123456 --label "Allow-My-Current-IP"
```
This command will do two things:
1. It will create or update the firewall rule with your current public IP address.
2. It will save the `firewall_id` and `label` to a configuration file `(~/.acc-fwu-config)` for future use.
### Subsequent Usage
After the initial setup, you can simply run `acc-fwu` without needing to provide the `firewall_id` and `label` again:
```bash
acc-fwu
```
This will:
1. Load the saved `firewall_id` and `label` from the configuration file.
2. Update the firewall rule with your current public IP address.
### Command-Line Options
```
usage: acc-fwu [-h] [--firewall_id FIREWALL_ID] [--label LABEL] [-d] [-r] [-a] [-l] [-q] [--dry-run] [-v]
Create, update, or remove Akamai Connected Cloud (Linode) firewall rules with your current IP address.
options:
-h, --help show this help message and exit
--firewall_id FIREWALL_ID
The numeric ID of the Linode firewall.
--label LABEL Label for the firewall rule (alphanumeric, underscores, hyphens, max 32 chars).
-d, --debug Enable debug mode to show existing rules data.
-r, --remove Remove the specified rules from the firewall.
-a, --add Add IP to existing rules instead of replacing (useful for multiple locations).
-l, --list List available firewalls and exit.
-q, --quiet Suppress output messages (useful for cron/scripting).
--dry-run Show what would be done without making any changes.
-v, --version show program's version number and exit
Example: acc-fwu --firewall_id 12345 --label MyIP
```
### Examples
**Preview changes without applying them:**
```bash
acc-fwu --firewall_id 123456 --label "My-IP" --dry-run
```
**Run silently (for cron jobs):**
```bash
acc-fwu --quiet # Requires existing config file
```
**Remove firewall rules:**
```bash
acc-fwu --remove
```
**Debug mode (shows existing rules):**
```bash
acc-fwu --debug
```
**Check version:**
```bash
acc-fwu --version
```
**List available firewalls:**
```bash
acc-fwu --list
```
**Add IP without replacing existing ones (great for traveling):**
```bash
acc-fwu --add
```
**First-time setup with interactive firewall selection:**
```bash
# If no firewall_id is provided and no config exists,
# you'll be prompted to select from available firewalls
acc-fwu
```
### Multi-Location Usage (Add Mode)
If you frequently travel and need to access your servers from multiple locations, use the `--add` flag:
```bash
# From home
acc-fwu --add
# Later, from a coffee shop
acc-fwu --add
# Later, from the airport
acc-fwu --add
```
Each location's IP address will be added to your firewall rules, allowing access from all locations. Without `--add`, your IP would be replaced each time.
To start fresh and remove all accumulated IPs:
```bash
acc-fwu --remove
acc-fwu # Creates new rules with only your current IP
```
### Cron Job Example
To automatically update your firewall rules every hour:
```bash
# Update firewall rules every hour
0 * * * * /usr/local/bin/acc-fwu --quiet
```
**Important**: Before using `--quiet` mode, you must have a valid configuration file (`~/.acc-fwu-config`) with your `firewall_id` and `label`. Interactive firewall selection is not available in quiet mode. Run `acc-fwu` interactively first to set up your configuration.
## Configuration File
The `acc-fwu` tool saves the `firewall_id` and `label` in a configuration file located at `~/.acc-fwu-config`. This file is:
- Automatically managed by the tool
- Created with secure permissions (readable only by owner)
- Uses standard INI format
You generally won't need to edit it manually.
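For reference, the stored file is a small INI document holding just these two values. The sketch below is illustrative only -- the exact section and key names are assumptions, not guaranteed output of the tool:
```ini
; ~/.acc-fwu-config (illustrative layout)
[DEFAULT]
firewall_id = 123456
label = Allow-My-Current-IP
```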
## Security
- **Input Validation**: All inputs (firewall ID, labels, IP addresses) are validated before use
- **Secure Config Storage**: Configuration file is created with `600` permissions (owner read/write only)
- **No Credential Storage**: API tokens are read from the Linode CLI configuration, not stored separately
- **HTTPS Only**: All API communications use HTTPS
## Development
See [BUILD.md](BUILD.md) for local development and testing instructions.
See [RELEASE.md](RELEASE.md) for information on creating releases.
## License
This project is licensed under the GNU General Public License v3 (GPLv3) - see the [LICENSE](LICENSE) file for details.
## Summary of Changes
### 2026-02-19 - v0.2.2
- **Bug Fixes**:
- Fixed `list_firewalls()` only returning the first page of results - users with many firewalls would see incomplete lists in both `--list` output and interactive selection. All pages are now fetched.
- **Improvements**:
- Extracted `LINODE_API_PAGE_SIZE` constant for the API page size value
- Refactored code to reduce cognitive complexity: extracted helper functions in `cli.py` and `firewall.py`, added `CONTENT_TYPE_JSON` constant, simplified regex patterns
### 2026-01-05 - v0.2.1
- **Bug Fixes**:
- Fixed `select_firewall()` hanging in quiet mode - now raises an error immediately with guidance to configure firewall_id first
- Quiet mode (`--quiet`) now properly fails fast in non-interactive environments (e.g., cron jobs) when no configuration exists
### 2026-01-02 - v0.2.0
- **New Features**:
- Added `--list` / `-l` flag to list available firewalls from your Linode account
- Added `--add` / `-a` flag to append IP addresses to existing rules instead of replacing (useful when traveling between multiple locations)
- Added interactive firewall selection when no firewall_id is configured
- **Improvements**:
- When running without a config file, the tool now prompts you to select from available firewalls
- Better handling of multiple IP addresses per rule
### 2025-11-21 - v0.1.5
- **New Features**:
- Added `--version` / `-v` flag to display installed version
- Added `--dry-run` flag to preview changes without applying them
- Added `--quiet` / `-q` flag to suppress output for cron/scripting
- **Security Improvements**:
- Added input validation for firewall_id (numeric only)
- Added input validation for labels (alphanumeric, underscores, hyphens, max 32 chars)
- Added IP address validation
- Configuration file now created with secure permissions (600)
- **Usability Improvements**:
- Proper exit codes (0 for success, 1 for errors)
- Error messages now output to stderr
- Improved help text with usage examples
### 2025-06-03 - v0.1.4
- **Security Fixes**: Updated Python dependencies to resolve security vulnerabilities.
### 2024-10-01 - v0.1.3
- **Show IP Address**: Now shows the current public IP address when it is updated.
### 2024-08-20 - v0.1.2
- **Fixes**: Fixed issue with updating the firewall rule.
### 2024-08-18 - v0.1.1
- **Remove Firewall Rules**: Instructions on how to remove the firewall rule.
### 2024-08-17 - v0.1.0
- **First-time Setup**: Instructions on how to set the `firewall_id` and `label` the first time you use the tool.
- **Subsequent Usage**: Information about running the tool without additional arguments after the initial setup.
- **Updating the Configuration**: Guidance on how to change the stored `firewall_id` and `label` if needed.
- **Configuration File**: Brief explanation of the config file and its location.
| text/markdown | John Bradshaw | acc-fwu@bradshaw.cloud | null | null | null | null | [
"Programming Language :: Python :: 3",
"License :: OSI Approved :: GNU General Public License v3 (GPLv3)",
"Operating System :: OS Independent"
] | [] | https://github.com/johnybradshaw/acc-firewall_updater | null | >=3.6 | [] | [] | [] | [
"requests"
] | [] | [] | [] | [] | twine/6.1.0 CPython/3.13.7 | 2026-02-19T11:30:39.235814 | acc_fwu-0.2.2.tar.gz | 44,415 | e3/37/1143967727ca09f8e1f4d2d63e928fa751d7c6a06400e83debd50e5c7220/acc_fwu-0.2.2.tar.gz | source | sdist | null | false | a1a57dac01c4ba678b4810504cdc4340 | e4995e9367eeced9050019be4c87b7ebf5283ef9eb6be4c759d2779aba32be26 | e3371143967727ca09f8e1f4d2d63e928fa751d7c6a06400e83debd50e5c7220 | null | [
"LICENSE"
] | 246 |
2.3 | takk | 0.1.21 | A project that makes it easier to develope and deploy Python projects | # Takk
Define your architecture in pure Python: servers, workers, scheduled jobs, and databases connect automatically through type hints.
## Overview
Takk is a Python framework that eliminates configuration overhead by using type hints to automatically wire up your infrastructure. Write pure Python code and let Takk handle the connections between servers, workers, databases, and scheduled jobs.
No YAML. No configuration files. Just Python.
## Features
- **Type-hint driven** - Your type annotations define your architecture
- **Automatic dependency injection** - Components connect without manual wiring
- **Pure Python** - Everything is code, nothing is configuration
- **Full IDE support** - Autocomplete and type checking work out of the box
- **Minimal boilerplate** - Focus on your logic, not setup
## Installation
```bash
uv add takk
```
## Quick Start
```python
from takk.models import Project, FastAPIApp, Worker
from takk.secrets import SlackWebhook
from my_app.settings import AppSettings
from my_app import app
background_worker = Worker("background")
project = Project(
name="my-custom-server",
shared_settings=[AppSettings],
workers=[background_worker],
my_server=FastAPIApp(app),
)
```
## How It Works
Takk uses Python type hints to understand your application's resources and automatically creates the necessary connections. When you annotate a settings class with a type like `PostgresDsn` or `RedisDsn`, Takk:
1. Detects the dependency from the type hint
2. Instantiates the component with appropriate configuration
3. Injects it into your environment
Read the [full article](https://docs.takkthon.com/blog/deploy-with-python-type-hints) to see how we built this approach.
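For instance, annotating a field with `PostgresDsn` in a shared settings class is enough for Takk to detect that the app needs a Postgres database and inject its connection URL. The sketch below only recombines the snippets shown elsewhere in this README; the `api=` keyword name is arbitrary:
```python
from pydantic import BaseModel, PostgresDsn
from takk.models import Project, FastAPIApp
from my_app import app

class AppSettings(BaseModel):
    # The type hint is the "configuration": Takk detects PostgresDsn
    # and wires a database connection into the environment.
    database_url: PostgresDsn

project = Project(
    name="my-app",
    shared_settings=[AppSettings],
    api=FastAPIApp(app),
)
```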
## Core Components
### Server
```python
from takk.models import Project, NetworkApp  # NetworkApp assumed to live in takk.models
project = Project(
name="my-custom-server",
custom_network_app=NetworkApp(
command=["/bin/bash", "-c", "uv run main.py"],
port=8000,
),
)
```
### Worker
```python
from takk import Worker
worker = Worker("name-of-worker")
worker.run(function, Args(...))
```
### Database
```python
from pydantic import PostgresDsn, RedisDsn, BaseModel
from takk import Database
class MyAppSettings(BaseModel):
redis_url: RedisDsn
psql_db: PostgresDsn
```
### Scheduled Jobs
```python
from takk.models import Compute, Project, Job
from my_app.train import train_model, TrainConfig
project = Project(
name="ml-example",
train_pokemon_model=Job(
train_model,
cron_schedule="0 3 * * *", # Runs daily at 3 AM
arguments=TrainConfig(...),
),
)
```
## Requirements
- Python 3.10+
- Type hints support
## Examples
Check out the [examples directory](examples/) for complete applications:
- [Simple web server](examples/web_server/)
- [Background worker with scheduling](examples/worker/)
- [Full-stack application](examples/fullstack/)
## Learn More
- [Documentation](https://docs.takkthon.com/docs)
| text/markdown | null | null | null | null | null | null | [] | [] | null | null | >=3.10 | [] | [] | [] | [
"click>=8.0.0",
"pydantic>=2.0.0",
"pydantic-settings>=2.0.0",
"fastapi[standard]>=0.119.0",
"nats-py[nkeys]>=2.12.0",
"docker>=7.1.0",
"tomli>=2.3.0",
"rich>=13.0.0",
"python-logging-loki; extra == \"logging\"",
"prometheus-client; extra == \"metrics\"",
"python-logging-loki; extra == \"observa... | [] | [] | [] | [] | uv/0.10.4 {"installer":{"name":"uv","version":"0.10.4","subcommand":["publish"]},"python":null,"implementation":{"name":null,"version":null},"distro":{"name":"Debian GNU/Linux","version":"13","id":"trixie","libc":null},"system":{"name":null,"release":null},"cpu":null,"openssl_version":null,"setuptools_version":null,"rustc_version":null,"ci":true} | 2026-02-19T11:30:10.846309 | takk-0.1.21.tar.gz | 35,421 | cc/f3/6029edde3b1d8b4d95e5aa24a0cca6ba100a9b47ed1923153d2ea960a05a/takk-0.1.21.tar.gz | source | sdist | null | false | 04848869dd1f8f9cabbb2d8e04524296 | aeddd719139be12e3147cf5f1ecfdaeee45f2e1e8610853c073648bedeca0fc0 | ccf36029edde3b1d8b4d95e5aa24a0cca6ba100a9b47ed1923153d2ea960a05a | null | [] | 239 |
2.4 | XmlDict-light | 1.1.3 | XML JSON conversion | # XmlDict
This is a lightweight implementation to transform JSON to XML and vice versa.
Uses only python core functions, so no external requirements. | text/markdown | Daniel Rexin | null | null | null | null | json, xml, converter, conversion | [
"Programming Language :: Python :: 3",
"Operating System :: OS Independent",
"License :: OSI Approved :: MIT License"
] | [] | null | null | >=3.11 | [] | [] | [] | [] | [] | [] | [] | [
"Homepage, https://github.com/DannieDarko/xmldict",
"Issues, https://github.com/DannieDarko/xmldict/issues"
] | twine/6.2.0 CPython/3.12.0 | 2026-02-19T11:30:08.614724 | xmldict_light-1.1.3.tar.gz | 3,597 | 41/43/f6364ffc4f7f298835ac0b6c67694a0f50df0d5f5c595ccac1bd079222eb/xmldict_light-1.1.3.tar.gz | source | sdist | null | false | b384444b13c8a6654135b2aa353a1a26 | 769522d6fb14cf880f517846caa8aba5232cd91abf9b4594a0147165a22293b6 | 4143f6364ffc4f7f298835ac0b6c67694a0f50df0d5f5c595ccac1bd079222eb | MIT | [
"LICENSE"
] | 0 |
2.4 | jupyter-quant | 2602.1 | Jupyter quant research environment. | # Jupyter Quant
A dockerized Jupyter quant research environment.
## Highlights
- It can be used as a docker image or pypi package.
- Includes tools for quant analysis, statsmodels, pymc, arch, py_vollib,
zipline-reloaded, PyPortfolioOpt, etc.
- The usual suspects are included, numpy, pandas, sci-py, scikit-learn,
yellowbricks, shap, optuna.
- [ib_async](https://github.com/ib-api-reloaded/ib_async) for Interactive Broker
connectivity. Works well with
[IB Gateway](https://github.com/gnzsnz/ib-gateway-docker) docker image.
- Includes all major Python packages for statistical and time series analysis,
see [requirements](https://github.com/gnzsnz/jupyter-quant/blob/master/requirements.txt).
For an extensive list check
[list installed packages](#list-installed-packages) section.
- [Zipline-reloaded](https://github.com/stefan-jansen/zipline-reloaded/),
[pyfolio-reloaded](https://github.com/stefan-jansen/pyfolio-reloaded)
and [alphalens-reloaded](https://github.com/stefan-jansen/alphalens-reloaded).
- You can install it as a python package, just `pip install -U jupyter-quant`
- Designed for [ephemeral](https://docs.docker.com/develop/develop-images/dockerfile_best-practices/#create-ephemeral-containers)
containers. Relevant data for your environment will survive your container.
- Optimized for size, it's a 2GB image vs 4GB for jupyter/scipy-notebook
- Includes jedi language server, jupyterlab-lsp, black and isort.
- It does NOT include conda/mamba. All packages are installed with pip under
`~/.local/lib/python`. Which should be mounted in a dedicated volume to
preserve your environment.
- Includes Cython, Numba, bottleneck and numexpr to speed up things
- sudo, so you can install new packages if needed.
- bash and stow, so you can [BYODF](#install-your-dotfiles) (bring your
dotfiles). Plus common command line utilities like git, less, nano (tiny), jq,
[ssh](#install-your-ssh-keys), curl, bash completion and others.
- Support for [apt cache](https://github.com/gnzsnz/apt-cacher-ng). If you have
other Linux boxes using it can leverage your package cache.
- It does not include a built environment. If you need to install a package
that does not provide wheels you can build your wheels, as explained
in [common tasks](#build-wheels-outside-the-container)
## Quick Start
To use `jupyter-quant` as a [pypi package](https://pypi.org/project/jupyter-quant/)
see [install quant package](#install-jupyter-quant-package).
Create a `docker-compose.yml` file with this content
```yml
services:
jupyter-quant:
image: gnzsnz/jupyter-quant:${IMAGE_VERSION}
environment:
APT_PROXY: ${APT_PROXY:-}
BYODF: ${BYODF:-}
SSH_KEYDIR: ${SSH_KEYDIR:-}
START_SCRIPTS: ${START_SCRIPTS:-}
TZ: ${QUANT_TZ:-}
restart: unless-stopped
ports:
- ${LISTEN_PORT}:8888
volumes:
- quant_conf:/home/gordon/.config
- quant_data:/home/gordon/.local
- ${PWD}/Notebooks:/home/gordon/Notebooks
volumes:
quant_conf:
quant_data:
```
You can use `.env-dist` as your starting point.
```bash
cp .env-dist .env
# verify everything looks good
docker compose config
docker compose up
```
## Volumes
The image is designed to work with 3 volumes:
1. `quant_data` - volume for ~/.local folder. It contains caches and all Python
packages. This makes it possible to install additional packages through pip.
2. `quant_conf` - volume for ~/.config, all config goes here. This includes
jupyter, ipython, matplotlib, etc
3. Bind mount (but you could use a named volume) - volume for all notebooks,
under `~/Notebooks`.
This allows you to run ephemeral containers while keeping your notebooks (3),
your config (2) and your additional packages (1). Eventually you will need to
update the image; in that case your notebooks (3) move over without issues,
your config (2) should still work (though with no guarantee), and your packages
in `quant_data` (1) can still be used, but you should refresh them against the
new image. In short, you will need to refresh (1) from time to time, and (2)
less frequently.
## Common tasks
### Get running server URL
```bash
docker exec -it jupyterquant jupyter-server list
Currently running servers:
http://40798f7a604a:8888/?token=
ebf9e870d2aa0ed877590eb83b4d3bbbdfbd55467422a167 :: /home/gordon/Notebooks
```
or
```bash
docker logs -t jupyter-quant 2>&1 | grep '127.0.0.1:8888/lab?token='
```
You will need to replace the hostname (40798f7a604a in this case) or 127.0.0.1
with your Docker host IP.
### Show jupyter config
```bash
docker exec -it jupyter-quant jupyter-server --show-config
```
### Set password
```bash
docker exec -it jupyter-quant jupyter-server password
```
### Get command line help
```bash
docker exec -it jupyter-quant jupyter-server --help
docker exec -it jupyter-quant jupyter-lab --help
```
### List installed packages
```bash
docker exec -it jupyter-quant pip list
# outdated packages
docker exec -it jupyter-quant pip list -o
```
### Pass parameters to jupyter-lab
```bash
docker run -it --rm gnzsnz/jupyter-quant --core-mode
docker run -it --rm gnzsnz/jupyter-quant --show-config-json
```
### Run a command in the container
```bash
docker run -it --rm gnzsnz/jupyter-quant bash
```
### Build wheels outside the container
Build wheels outside the container and import wheels into the container
```bash
# make sure python version match .env-dist
docker run -it --rm -v $PWD/wheels:/wheels python:3.11 bash
pip wheel --no-cache-dir --wheel-dir /wheels numpy
```
This will build wheels for numpy (or any other package that you need) and save
the file in `$PWD/wheels`. Then you can copy the wheels in your notebook mount
(3 above) and install it within the container. You can even drag and drop into
Jupyter.
### Install your dotfiles
`git clone` your dotfiles to `Notebooks/etc/dotfiles` and set the environment
variable `BYODF=/home/gordon/Notebooks/etc/dotfiles` in your `docker-compose.yml`.
When the container starts up, stow will create links like `/home/gordon/.bashrc`.
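For example (the repository URL is a placeholder; adjust the paths to your own bind mount):
```bash
# on the docker host, inside the directory you bind-mount as ~/Notebooks
git clone https://github.com/<you>/dotfiles.git Notebooks/etc/dotfiles
# then in your .env / docker-compose.yml:
# BYODF=/home/gordon/Notebooks/etc/dotfiles
```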
### Install your SSH keys
You need to define the environment variable `SSH_KEYDIR`, which should point to a
location with your keys. The suggested place is
`SSH_KEYDIR=/home/gordon/Notebooks/etc/ssh`; make sure the directory has the
right permissions. Something like `chmod 700 Notebooks/etc/ssh` should work.
The `entrypoint.sh` script will create a symbolic link pointing to
`$SSH_KEYDIR` on `/home/gordon/.ssh`.
Within Jupyter's terminal, you can then:
```shell
# start agent
eval $(ssh-agent)
# add keys to agent
ssh-add
# open a tunnel
ssh -fNL 4001:localhost:4001 gordon@bastion-ssh
```
### Run scripts at start up
If you define `START_SCRIPTS` env variable with a path, all scripts on that
directory will be executed at start up. The sample `.env-dist` file contains
a commented line with `START_SCRIPTS=/home/gordon/Notebooks/etc/start_scripts`
as an example and recommended location.
Files should have a `.sh` suffix and should run under `bash`. In the
[start_scripts](https://github.com/quantbelt/jupyter-quant/tree/master/start_scripts)
directory you will find example scripts to load SSH keys and install Python packages.
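A minimal start script could look like this (the file name and package are placeholders):
```bash
#!/bin/bash
# Notebooks/etc/start_scripts/10-extra-packages.sh
# installs an extra package into ~/.local at container start
pip install --user --no-cache-dir some-extra-package
```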
### Install jupyter-quant package
Jupyter-quant is available as a package in [pypi](https://pypi.org/project/jupyter-quant/).
It's a meta-package that pulls in all dependencies at their highest possible versions.
Install [pypi package](https://pypi.org/project/jupyter-quant/).
```bash
pip install -U jupyter-quant
```
Additional options supported are
```bash
pip install -U jupyter-quant[bayes] # to install pymc & arviz/graphviz
pip install -U jupyter-quant[sk-util] # to install skfolio & sktime
```
`jupyter-quant` is a meta-package that pins the versions of all its dependencies.
If you need or want to upgrade a dependency you can uninstall `jupyter-quant`,
although this can break interdependencies, or install from git, where the pins
are updated regularly.
```bash
# git install
pip install -U git+https://github.com/quantbelt/jupyter-quant.git
```
| text/markdown | gnzsnz | null | null | null | null | null | [
"Programming Language :: Python :: 3",
"Operating System :: OS Independent",
"Development Status :: 3 - Alpha"
] | [] | null | null | >=3.11 | [] | [] | [] | [
"alphalens-reloaded==0.4.6",
"arch==8.0.0",
"black==26.1.0",
"bottleneck==1.6.0",
"dask[dataframe,distributed]==2026.1.2",
"empyrical-reloaded==0.5.12",
"exchange_calendars==4.13.1",
"h5py==3.15.1",
"hurst==0.0.5",
"ib_async==2.1.0",
"ipympl==0.10.0",
"ipywidgets==8.1.8",
"isort==7.0.0",
"... | [] | [] | [] | [
"Homepage, https://github.com/quantbelt/jupyter-quant",
"Bug Tracker, https://github.com/quantbelt/jupyter-quant/issues"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-19T11:29:22.710996 | jupyter_quant-2602.1.tar.gz | 272,592 | 03/17/a2fed60614dd7ace3d40f3ca3ac9803538da716d52612bbb2427fddf7be7/jupyter_quant-2602.1.tar.gz | source | sdist | null | false | 96b69bcc7e81dbc2ec57527579c5b204 | 30ab0278a2de4c16c463b31d77a12f410394f34d50da04fa09dbf6374b33eb15 | 0317a2fed60614dd7ace3d40f3ca3ac9803538da716d52612bbb2427fddf7be7 | Apache-2.0 | [
"LICENSE.txt"
] | 233 |
2.1 | dbhydra | 2.3.7 | Data science friendly ORM combining Python | # dbhydra
Data science friendly ORM (Object Relational Mapping) library combining Python, Pandas, and various SQL dialects
For full documentation see official [documentation](http://app.forloop.ai/dbhydra/documentation) - currently unavailable but we're working on it!
## Installation
Use the package manager [pip](https://pip.pypa.io/en/stable/) to install dbhydra.
```bash
pip install dbhydra
```
## Usage
```python
import dbhydra.dbhydra_core as dh
db1=dh.db()
table1 = dh.Table(db1,"test",["test1","test2","test3","test4"],["int","int","int","int"])
#table1.drop()
#table1.create()
#rows=[[1,2,3,4],[5,4,7,9]]
#table1.insert(rows)
list1=table1.select("SELECT * FROM test")
print(list1)
#list2=table1.select_all()
#print(list2)
#table1.drop()
table1.export_to_xlsx()
tables=db1.get_all_tables()
table_dict=db1.generate_table_dict()
print(tables)
columns=table_dict['test'].get_all_columns()
types=table_dict['test'].get_all_types()
print(columns,types)
table_test=dh.Table.init_all_columns(db1,"test")
print(table_test.columns)
table2 = dh.Table(db1,"test_new",["id","test2"],["int","nvarchar(20)"])
#table2.create()
#table2.drop()
```
## Current scope
Aims: Easy integration with Pandas, SQL Server/MySQL databases, and exports/imports to/from Excel/CSV format
Done: Table functions (Create, Drop, Select, Update, Insert, and Delete) should be working fine
Todo: Group by, Order by, Where, Linking of FK, Customizable PK,...
## Contributing
Pull requests are welcome. For major changes, please open an issue first to discuss what you would like to change.
## License
[MIT](https://choosealicense.com/licenses/mit/)
| text/markdown | DovaX | dovax.ai@gmail.com | null | null | null | null | [
"Programming Language :: Python :: 3",
"License :: OSI Approved :: MIT License",
"Operating System :: OS Independent"
] | [] | https://github.com/DovaX/dbhydra | null | >=3.6 | [] | [] | [] | [
"pyodbc",
"pandas",
"pymysql",
"pymongo",
"google-cloud-bigquery",
"pydantic"
] | [] | [] | [] | [] | twine/6.1.0 CPython/3.12.7 | 2026-02-19T11:28:02.905903 | dbhydra-2.3.7.tar.gz | 30,488 | cb/a1/00dcf0d85501796b2f87e94f7cdb53d284987ca1097b3f8f6a99ba611c95/dbhydra-2.3.7.tar.gz | source | sdist | null | false | 226ea8566daa33481883f442bd3847c0 | 790174dcf39556bc93511a7179a30c113b9f8cc1bfcb08f812f9487ee1238caa | cba100dcf0d85501796b2f87e94f7cdb53d284987ca1097b3f8f6a99ba611c95 | null | [] | 260 |
2.4 | pyluos | 3.1.0 | Python library to set the high level behavior of your device based on Luos embedded system. | <a href="https://luos.io"><img src="https://uploads-ssl.webflow.com/601a78a2b5d030260a40b7ad/603e0cc45afbb50963aa85f2_Gif%20noir%20rect.gif" alt="Luos logo" title="Luos" align="right" height="100" /></a>

[](https://github.com/Luos-io/luos_engine/blob/master/LICENSE)
[](https://www.luos.io/docs)
[](https://www.luos.io)
[](https://registry.platformio.org/libraries/luos/luos_engine)
[](http://bit.ly/JoinLuosDiscord)
[](https://www.reddit.com/r/Luos)
# Pyluos
## The most for the developer
Luos provides a simple way to think of your hardware products as a group of independent features. You can easily manage and share your hardware products' features with your team, external developers, or the community. Luos is an open-source, lightweight library that can be used on any MCU, enabling free and fast development of products built from multiple electronic boards. Choosing Luos to design a product will help you develop, debug, validate, monitor, and manage it from the cloud.
## The most for the community
Most embedded developments are made from scratch. By using Luos, you will be able to capitalize on the development you, your company, or the Luos community have already done. The re-usability of features encapsulated in Luos services will shorten your products' time to market and strengthen the robustness and universality of your applications.
* → Join the [Luos Discord server](http://discord.gg/luos)
* → Join the [Luos subreddit](https://www.reddit.com/r/Luos/)
## Good practices with Luos
Luos proposes organized and effective development practices, guaranteeing development flexibility and evolutivity of your hardware product, from the idea to the maintenance of the industrialized product fleet.
## Let's do this
* → Try on your own with the [get started](https://www.luos.io/tutorials/get-started)
* → Consult the full [documentation](https://www.luos.io/docs)
| text/markdown | Luos | hello@luos.io | null | null | MIT | null | [] | [] | https://docs.luos.io/pages/high/pyluos.html | null | null | [] | [] | [] | [
"future",
"websocket-client",
"pyserial>3",
"SimpleWebSocketServer",
"zeroconf",
"numpy",
"anytree",
"crc8",
"ipython",
"requests",
"simple_websocket_server==0.4.2",
"mergedeep",
"pytest; extra == \"tests\"",
"flake8; extra == \"tests\"",
"ipywidgets; extra == \"jupyter-integration\""
] | [] | [] | [] | [] | twine/6.1.0 CPython/3.13.7 | 2026-02-19T11:27:26.892369 | pyluos-3.1.0.tar.gz | 34,007 | c5/4b/2cf7c022aa3654957d95bcf24f19c35f51ba2e4ded9c0829bc06f8bb214e/pyluos-3.1.0.tar.gz | source | sdist | null | false | 6bd250e3ced0e1de08f356cd77820646 | 88588339f33d63e22951a654780c8e6b395b48176068ff96ebe6675438db0eca | c54b2cf7c022aa3654957d95bcf24f19c35f51ba2e4ded9c0829bc06f8bb214e | null | [
"LICENSE"
] | 251 |
2.4 | opensipscli | 0.3.5 | OpenSIPS Command Line Interface | # OpenSIPS CLI (Command Line Interface)
OpenSIPS CLI is an interactive command line tool that can be used to control
and monitor **OpenSIPS SIP servers**. It uses the Management Interface
exported by OpenSIPS over JSON-RPC to gather raw information from OpenSIPS and
display it in a nicer, more structured manner to the user.
The tool is very flexible and has a modular design, consisting of multiple
modules that implement different features. New modules can be easily added by
creating a new module that implements the [OpenSIPS CLI
Module](opensipscli/module.py) Interface.
OpenSIPS CLI is an interactive console that features auto-completion and
reverse/forward command history search, but can also be used to execute
one-liners for automation purposes.
OpenSIPS CLI can communicate with an OpenSIPS server using different transport
methods, such as fifo or http.
# Compatibility
This tool uses the new JSON-RPC interface added in OpenSIPS 3.0, therefore
it can only be used with OpenSIPS versions higher than or equal to 3.0. For older
versions of OpenSIPS, use the classic `opensipsctl` tool from the `opensips` project.
## Usage
### Tool
Simply run `opensips-cli` tool directly in your cli.
By default the tool will start in interactive mode.
OpenSIPS CLI accepts the following arguments:
* `-h|--help` - used to display information about running `opensips-cli`
* `-v|--version` - displays the version of the running tool
* `-d|--debug` - starts the `opensips-cli` tool with debugging enabled
* `-f|--config` - specifies a configuration file (see [Configuration
Section](#configuration) for more information)
* `-i|--instance INSTANCE` - changes the configuration instance (see [Instance
Module](docs/modules/instance.md) Documentation for more information)
* `-o|--option KEY=VALUE` - sets/overwrites the `KEY` configuration parameter
with the specified `VALUE`. Works for both core and modules parameters. Can be
used multiple times, for different options
* `-x|--execute` - executes the command specified and exits
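For example, a one-shot invocation that overrides the transport settings and runs a single MI command could look like this (the MI command is just an illustration):
```
opensips-cli -o communication_type=http -o url=http://127.0.0.1:8888/mi -x mi uptime
```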
In order to run `opensips-cli` without installing it, you have to export the
`PYTHONPATH` variable to the root of the `opensips-cli` and `python-opensips`
packages. If you installed the two packages under `/usr/local/src`, simply do:
```
export PYTHONPATH=/usr/local/src/opensips-cli:/usr/local/src/python-opensips
/usr/local/src/opensips-cli/bin/opensips-cli
```
### Python Module
The module can be used as a python module as well. A simple snippet of running
an MI command using the tool is:
```
from opensipscli import cli
opensipscli = cli.OpenSIPSCLI()
print(opensipscli.mi('ps'))
```
The OpenSIPSCLI object can receive a set of arguments/modifiers through the
`OpenSIPSCLIArgs` class, i.e.:
```
from opensipscli.args import OpenSIPSCLIArgs
...
args = OpenSIPSCLIArgs(debug=True)
opensipscli = cli.OpenSIPSCLI(args)
...
```
Custom settings can be provided through the arguments, i.e.:
```
# run commands over http
args = OpenSIPSCLIArgs(communication_type = "http",
url="http://127.0.0.1:8080/mi")
...
```
### Docker Image
The OpenSIPS CLI tool can be run in a Docker container. The image is available
on Docker Hub at [opensips/opensips-cli](https://hub.docker.com/r/opensips/opensips-cli).
For more information on how to run the tool in a Docker container, please refer to the
[OpenSIPS CLI Docker Image](docker/docker.md) documentation.
## Configuration
OpenSIPS CLI accepts a configuration file, formatted as an `ini` or `cfg`
file, that can store certain parameters that influence the behavior of the
OpenSIPS CLI tool. You can find [here](etc/default.cfg) an example of a
configuration file that behaves exactly as the default parameters. The set of
default values used, when no configuration file is specified, can be found
[here](opensipscli/defaults.py).
The configuration file can have multiple sections/instances, managed by the
[Instance](docs/modules/instance.md) module. One can choose different
instances from the configuration file by specifying the `-i INSTANCE` argument
when starting the cli tool.
If no configuration file is specified by the `-f|--config` argument, OpenSIPS
CLI searches for one in the following locations:
* `~/.opensips-cli.cfg` (highest precedence)
* `/etc/opensips-cli.cfg`
* `/etc/opensips/opensips-cli.cfg` (lowest precedence)
If no file is found, it starts with the default configuration.
The OpenSIPS CLI core can use the following parameters:
* `prompt_name`: The name of the OpenSIPS CLI prompt (Default: `opensips-cli`)
* `prompt_intro`: Introduction message when entering the OpenSIPS CLI
* `prompt_emptyline_repeat_cmd`: Repeat the last command on an emptyline (Default: `False`)
* `history_file`: The path of the history file (Default: `~/.opensips-cli.history`)
* `history_file_size`: The backlog size of the history file (Default: `1000`)
* `log_level`: The level of the console logging (Default: `WARNING`)
* `communication_type`: Communication transport used by OpenSIPS CLI (Default: `fifo`)
* `fifo_file`: The OpenSIPS FIFO file to which the CLI will write commands
(Default: `/var/run/opensips/opensips_fifo`)
* `fifo_file_fallback`: A fallback FIFO file that is used when the `fifo_file`
is not found - this was introduced for backwards compatibility when the default
`fifo_file` was changed from `/tmp/opensips_fifo` (Default: `/tmp/opensips_fifo`)
* `fifo_reply_dir`: The default directory where `opensips-cli` will create the
fifo used for the reply from OpenSIPS (Default: `/tmp`)
* `url`: The default URL used when `http` `communication_type` is used
(Default: `http://127.0.0.1:8888/mi`).
* `datagram_ip`: The default IP used when `datagram` `communication_type` is used (Default: `127.0.0.1`)
* `datagram_port`: The default port used when `datagram` `communication_type` is used (Default: `8080`)
* `datagram_timeout`: Timeout for Datagram Socket.
* `datagram_buffer_size`: Buffer size for Datagram Socket.
* `datagram_unix_socket`: Unix Domain Socket to use instead of UDP.
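Putting a few of these parameters together, a configuration file with two instances might look like the sketch below (section names and values are illustrative; see `etc/default.cfg` for the shipped defaults):
```
[default]
log_level: INFO
communication_type: fifo
fifo_file: /var/run/opensips/opensips_fifo

[staging]
communication_type: http
url: http://127.0.0.1:8888/mi
```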
Each module can use each of the parameters above, but can also declare their
own. You can find in each module's documentation page the parameters that they
are using.
Configuration parameters can be overwritten using the `-o/--option` arguments,
as described in the [Usage](#tool) section.
It is also possible to set parameters dynamically, using the `set` command.
This configuration is only available during the current interactive session,
and also gets cleaned up when an instance is switched.
## Modules
The OpenSIPS CLI tool consists of the following modules:
* [Management Interface](docs/modules/mi.md) - run MI commands
* [Database](docs/modules/database.md) - commands to create, modify, drop, or
migrate an OpenSIPS database
* [Diagnose](docs/modules/diagnose.md) - instantly diagnose OpenSIPS instances
* [Instance](docs/modules/instance.md) - used to switch through different
instances/configuration within the config file
* [User](docs/modules/user.md) - utility used to add and remove OpenSIPS users
* [Trace](docs/modules/trace.md) - trace calls information from users
* [Trap](docs/modules/trap.md) - use `gdb` to take snapshots of OpenSIPS workers
* [TLS](docs/modules/tls.md) - utility to generate certificates for TLS
## Communication
OpenSIPS CLI can communicate with an OpenSIPS instance through MI using
different transports. Supported transports at the moment are:
* `FIFO` - communicate over the `mi_fifo` module
* `HTTP` - use JSONRPC over HTTP through the `mi_http` module
* `DATAGRAM` - communicate over UDP using the `mi_datagram` module
## Installation
Please follow the details provided in the
<a href="docs/INSTALLATION.md">Installation</a> section, for a complete guide
on how to install `opensips-cli` as a replacement for the deprecated
`opensipsctl` shell script.
## Contribute
Feel free to contribute to this project with any module, or functionality you
find useful by opening a pull request.
## History
This project was started by **Dorin Geman**
([dorin98](https://github.com/dorin98)) as part of the [ROSEdu
2018](http://soc.rosedu.org/2018/) program. It was later adapted to the
new OpenSIPS 3.0 MI interface and became the main external tool for managing
OpenSIPS.
## License
<!-- License source -->
[License-GPLv3]: https://www.gnu.org/licenses/gpl-3.0.en.html "GNU GPLv3"
[Logo-CC_BY]: https://i.creativecommons.org/l/by/4.0/88x31.png "Creative Common Logo"
[License-CC_BY]: https://creativecommons.org/licenses/by/4.0/legalcode "Creative Common License"
The `opensips-cli` source code is licensed under the [GNU General Public License v3.0][License-GPLv3]
All documentation files (i.e. `.md` extension) are licensed under the [Creative Common License 4.0][License-CC_BY]
![Creative Common Logo][Logo-CC_BY]
© 2018 - 2020 OpenSIPS Solutions
| text/markdown | null | OpenSIPS Project <project@opensips.org> | null | Razvan Crainea <razvan@opensips.org> | GNU General Public License v3 (GPLv3) | null | [
"License :: OSI Approved :: GNU General Public License v3 (GPLv3)",
"Operating System :: OS Independent",
"Programming Language :: Python :: 3"
] | [] | null | null | >=3.8 | [] | [] | [] | [
"mysqlclient<1.4.0rc1",
"opensips",
"sqlalchemy-utils",
"sqlalchemy<2,>=1.3.3"
] | [] | [] | [] | [
"Homepage, https://github.com/OpenSIPS/opensips-cli",
"Source, https://github.com/OpenSIPS/opensips-cli",
"Issues, https://github.com/OpenSIPS/opensips-cli/issues",
"Download, https://github.com/OpenSIPS/opensips-cli/archive/master.zip"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-19T11:25:40.062069 | opensipscli-0.3.5.tar.gz | 74,146 | 16/95/aee745b93196ff06ab4cf77525900c062ec3513a60ce26457655401c552e/opensipscli-0.3.5.tar.gz | source | sdist | null | false | 0e1a917b37371146ca0cbeddefe25e64 | d3e8fc1e1044765cf835050c28af75f3fc0ed7e45e618b3f359b22dcea8b2ed4 | 1695aee745b93196ff06ab4cf77525900c062ec3513a60ce26457655401c552e | null | [
"LICENSE"
] | 239 |
2.4 | agent-dist | 0.2.0 | A lightweight agentic mesh for orchestrating AI agents. | <<<<<<< HEAD
# agent_dist
Distributed agent orchestration framework with registry, hierarchical routing, ReAct execution, and traceable multi-agent workflows.
=======
# AgentFlows
A lightweight agentic mesh for orchestrating AI agents in a hospital environment.
## ⚙️ Configuration
Copy the example configuration file:
```bash
cp .env.example .env
```
Edit `.env` to set your LLM provider.
### Supported LLM Providers
- **Ollama** (Default): Run local models via `ollama serve`.
- **Groq**: Fast inference. Set `LLM_PROVIDER=groq` and `GROQ_API_KEY`.
- **OpenAI**: Set `LLM_PROVIDER=openai` and `LLM_API_KEY`.
## Installation
```bash
pip install agentflows
```
## Running
**Registry:**
```bash
python -m agentflows.registry.app
```
**Orchestrator:**
```bash
python -m agentflows.orchestrator.app
```
| text/markdown | null | Shekar <shekar.shekar9036@gmail.com> | null | null | null | null | [] | [] | null | null | >=3.11 | [] | [] | [] | [
"fastapi",
"uvicorn",
"requests",
"httpx",
"pydantic",
"langchain-groq",
"langchain-ollama",
"python-dotenv",
"matplotlib>=3.10.8",
"seaborn>=0.13.2",
"sentence-transformers",
"numpy"
] | [] | [] | [] | [] | twine/6.2.0 CPython/3.11.14 | 2026-02-19T11:25:03.570316 | agent_dist-0.2.0.tar.gz | 35,696 | 47/fe/f5af00cead20fd14f2ec97a5c804aeb8db5e09e11eb200d6263f197a25b1/agent_dist-0.2.0.tar.gz | source | sdist | null | false | 641bf2936c92c4179284d62afd3e5260 | aad07735bb64a920386939361fd6c009eb25b807acac0d9fc6deb41140796570 | 47fef5af00cead20fd14f2ec97a5c804aeb8db5e09e11eb200d6263f197a25b1 | null | [
"LICENSE"
] | 242 |
2.4 | orchestrator-core | 4.8.0rc4 | This is the orchestrator workflow engine. | # Orchestrator-Core
[](https://pepy.tech/project/orchestrator-core)
[](https://codecov.io/gh/workfloworchestrator/orchestrator-core)
[](https://pypi.org/project/orchestrator-core)
[](https://pypi.org/project/orchestrator-core)

<p style="text-align: center"><em>Production ready Orchestration Framework to manage product lifecycle and workflows. Easy to use, built on top of FastAPI and Pydantic</em></p>
## Documentation
The documentation can be found at [workfloworchestrator.org](https://workfloworchestrator.org/orchestrator-core/).
## Installation (quick start)
Simplified steps to install and use the orchestrator-core.
For more details, read the [Getting started](https://workfloworchestrator.org/orchestrator-core/getting-started/base/) documentation.
### Step 1 - Install the package
Create a virtualenv and install the orchestrator-core.
```shell
python -m venv .venv
source .venv/bin/activate
pip install orchestrator-core
```
### Step 2 - Setup the database
Create a postgres database:
```shell
createuser -sP nwa
createdb orchestrator-core -O nwa # set password to 'nwa'
```
Configure the database URI in your local environment:
```
export DATABASE_URI=postgresql://nwa:nwa@localhost:5432/orchestrator-core
```
### Step 3 - Create main.py and wsgi.py
Create a `main.py` file for running the CLI.
```python
from orchestrator.cli.main import app as core_cli
if __name__ == "__main__":
core_cli()
```
Create a `wsgi.py` file for running the web server.
```python
from orchestrator import OrchestratorCore
from orchestrator.settings import AppSettings
app = OrchestratorCore(base_settings=AppSettings())
```
### Step 4 - Run the database migrations
Initialize the migration environment and database tables.
```shell
python main.py db init
python main.py db upgrade heads
```
### Step 5 - Run the app
```shell
export OAUTH2_ACTIVE=False
uvicorn --reload --host 127.0.0.1 --port 8080 wsgi:app
```
Visit the [ReDoc](http://127.0.0.1:8080/api/redoc) or [OpenAPI](http://127.0.0.1:8080/api/docs) page to view and interact with the API.
## Contributing
We use [uv](https://docs.astral.sh/uv/getting-started/installation/) to manage dependencies.
To get started, follow these steps:
```shell
# in your postgres database
createdb orchestrator-core-test -O nwa # set password to 'nwa'
# on your local machine
git clone https://github.com/workfloworchestrator/orchestrator-core
cd orchestrator-core
export DATABASE_URI=postgresql://nwa:nwa@localhost:5432/orchestrator-core-test
uv sync --all-extras --all-groups
uv run pytest
```
For more details please read the [development docs](https://workfloworchestrator.org/orchestrator-core/contributing/development/).
| text/markdown | null | SURF <automation-beheer@surf.nl> | null | null | null | null | [
"Development Status :: 5 - Production/Stable",
"Environment :: Web Environment",
"Framework :: AsyncIO",
"Framework :: FastAPI",
"Intended Audience :: Developers",
"Intended Audience :: Information Technology",
"Intended Audience :: System Administrators",
"Intended Audience :: Telecommunications Indu... | [] | null | null | <3.15,>=3.11 | [] | [] | [] | [
"alembic==1.18.4",
"anyio>=3.7.0",
"apscheduler>=3.11.0",
"click==8.*",
"deepmerge==2.0",
"deprecated>=1.2.18",
"fastapi~=0.129.0",
"fastapi-etag==0.4.0",
"itsdangerous>=2.2.0",
"jinja2==3.1.6",
"more-itertools~=10.8.0",
"nwa-stdlib~=1.11.0",
"oauth2-lib>=2.5.0",
"orjson==3.11.7",
"pgvec... | [] | [] | [] | [
"Documentation, https://workfloworchestrator.org/orchestrator-core",
"Homepage, https://workfloworchestrator.org/orchestrator-core",
"Source, https://github.com/workfloworchestrator/orchestrator-core"
] | uv/0.10.2 {"installer":{"name":"uv","version":"0.10.2","subcommand":["publish"]},"python":null,"implementation":{"name":null,"version":null},"distro":{"name":"Ubuntu","version":"24.04","id":"noble","libc":null},"system":{"name":null,"release":null},"cpu":null,"openssl_version":null,"setuptools_version":null,"rustc_version":null,"ci":true} | 2026-02-19T11:24:54.731894 | orchestrator_core-4.8.0rc4.tar.gz | 337,483 | 72/92/68f64951ccca996f2550a82207415a8366cec4ac65279b4b281f3b86d836/orchestrator_core-4.8.0rc4.tar.gz | source | sdist | null | false | ffcbf2a51c9b6e31b1158362ebeb140f | e97c98881de7cf67d3df91e1fc8b625b8b0f4483dba305dfc16d19a7b4d0bfad | 729268f64951ccca996f2550a82207415a8366cec4ac65279b4b281f3b86d836 | Apache-2.0 | [
"LICENSE"
] | 194 |
2.4 | minion-code | 0.1.39 | A Python project depending on minion | # MinionCodeAgent
An enhanced AI code assistant built on the Minion framework, pre-configured with rich development tools, optimized for code development tasks.
## Features
- 🤖 **Intelligent Code Assistant**: Pre-configured AI agent designed for programming tasks
- 🔧 **Rich Toolset**: Automatically includes 12+ tools for file operations, command execution, web search, etc.
- ⚡ **Ready to Use**: One-line creation, no complex configuration needed
- 📝 **Conversation History**: Built-in conversation history tracking and management
- 🎯 **Optimized Prompts**: System prompts optimized for code development tasks
- 🛡️ **Security by Design**: Built-in security checks to prevent dangerous operations
- 🔌 **ACP Protocol Support**: Seamless integration with ACP clients like Zed editor
## Installation
### Option 1: Install from source (recommended for development)
```bash
# Clone the dependency repository
git clone https://github.com/femto/minion
# Clone this repository
git clone https://github.com/femto/minion-code
# Enter the directory
cd minion-code
# Install minion dependency
pip install -e ../minion
# Install minion-code
pip install -e .
```
In this case, `MINION_ROOT` is located at `../minion`
### Option 2: Direct installation (recommended for general use)
```bash
# Clone this repository
git clone https://github.com/femto/minion-code
cd minion-code
# Install dependencies
pip install minionx
# Install minion-code
pip install -e .
```
In this case, `MINION_ROOT` is located at the current startup location
On startup, the actual path of `MINION_ROOT` will be displayed:
```
2025-11-13 12:21:48.042 | INFO | minion.const:get_minion_root:44 - MINION_ROOT set to: <some_path>
```
# LLM Configuration
Please refer to https://github.com/femto/minion?tab=readme-ov-file#get-started
Make sure the config file is in `MINION_ROOT/config/config.yaml` or `~/.minion/config.yaml`
## Quick Start
### CLI Usage
```bash
# Basic usage
mcode
# Specify working directory
mcode --dir /path/to/project
# Specify LLM model
mcode --model gpt-4o
mcode --model claude-3-5-sonnet
# Enable verbose output
mcode --verbose
# Load additional tools using MCP config file
mcode --config mcp.json
# Combined usage
mcode --dir /path/to/project --model gpt-4o --config mcp.json --verbose
```
### Configuration
Configure the default LLM model used by minion-code:
```bash
# View current default model
mcode model
# Set default model (saved to ~/.minion/minion-code.json)
mcode model gpt-4o
mcode model claude-3-5-sonnet
# Clear default model (use built-in default)
mcode model --clear
```
**Model Priority:**
1. CLI `--model` argument (highest priority)
2. Config file `~/.minion/minion-code.json`
3. Built-in default (lowest priority)
### ACP Protocol Support
MinionCodeAgent supports the [ACP (Agent Communication Protocol)](https://agentcommunicationprotocol.dev/) protocol, enabling integration with ACP-compatible clients like Zed editor.
```bash
# Start ACP server (stdio mode)
mcode acp
# Specify working directory
mcode acp --dir /path/to/project
# Specify LLM model
mcode acp --model gpt-4o
# Enable verbose logging
mcode acp --verbose
# Skip tool permission prompts (auto-allow all tools)
mcode acp --dangerously-skip-permissions
# Combined usage
mcode acp --dir /path/to/project --model claude-3-5-sonnet --verbose
```
#### Using with Zed Editor
Add the following to Zed's `settings.json`:
```json
{
"agent_servers": {
"minion-code": {
"type": "custom",
"command": "/path/to/mcode",
"args": [
"acp"
],
"env": {}
}
}
}
```
#### Permission Management
In ACP mode, tool calls will request user permission:
- **Allow once**: Allow this time only
- **Always allow**: Permanently allow this tool (saved to `~/.minion/sessions/`)
- **Reject**: Deny execution
### Programming Interface
```python
import asyncio
from minion_code import MinionCodeAgent
async def main():
# Create AI code assistant with all tools auto-configured
agent = await MinionCodeAgent.create(
name="My Code Assistant",
llm="gpt-4.1"
)
# Chat with the AI assistant
response = await agent.run_async("List files in current directory")
print(response.answer)
response = await agent.run_async("Read the README.md file")
print(response.answer)
asyncio.run(main())
```
### Custom Configuration
```python
# Custom system prompt and working directory
agent = await MinionCodeAgent.create(
name="Python Expert",
llm="gpt-4.1",
system_prompt="You are a specialized Python developer assistant.",
workdir="/path/to/project",
additional_tools=[MyCustomTool()]
)
```
### View Available Tools
```python
# Print tools summary
agent.print_tools_summary()
# Get tools info
tools_info = agent.get_tools_info()
for tool in tools_info:
print(f"{tool['name']}: {tool['description']}")
```
## Built-in Tools
MinionCodeAgent automatically includes the following tool categories:
### 📁 File and Directory Tools
- **FileReadTool**: Read file contents
- **FileWriteTool**: Write files
- **GrepTool**: Search text in files
- **GlobTool**: File pattern matching
- **LsTool**: List directory contents
### 💻 System and Execution Tools
- **BashTool**: Execute shell commands
- **PythonInterpreterTool**: Execute Python code
### 🌐 Network and Search Tools
- **WebSearchTool**: Web search
- **WikipediaSearchTool**: Wikipedia search
- **VisitWebpageTool**: Visit webpages
### 🔧 Other Tools
- **UserInputTool**: User input
- **TodoWriteTool**: Task management write
- **TodoReadTool**: Task management read
## MCP Tool Integration
MinionCodeAgent supports loading additional tools via MCP (Model Context Protocol) configuration files.
### MCP Configuration File Format
Create a JSON configuration file (e.g., `mcp.json`):
```json
{
  "mcpServers": {
    "chrome-devtools": {
      "command": "npx",
      "args": ["-y", "chrome-devtools-mcp@latest"],
      "env": {
        "FASTMCP_LOG_LEVEL": "ERROR"
      },
      "disabled": false,
      "autoApprove": []
    },
    "filesystem": {
      "command": "uvx",
      "args": ["mcp-server-filesystem", "/tmp"],
      "disabled": true,
      "autoApprove": ["read_file", "list_directory"]
    },
    "git": {
      "command": "uvx",
      "args": ["mcp-server-git"],
      "disabled": false,
      "autoApprove": ["git_status", "git_log"]
    }
  }
}
```
### Configuration Options
- `command`: Command to start the MCP server
- `args`: List of command arguments
- `env`: Environment variables (optional)
- `disabled`: Whether to disable this server (default: false)
- `autoApprove`: List of tool names to auto-approve (optional)
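To make the option semantics concrete, here is an illustrative parser (not the library's own loader) that skips disabled servers and collects each server's command line and auto-approve list:
```python
# Illustrative reader for the mcp.json format above; not the package's loader.
import json
from pathlib import Path

def enabled_servers(config_path="mcp.json"):
    config = json.loads(Path(config_path).read_text())
    servers = {}
    for name, spec in config.get("mcpServers", {}).items():
        if spec.get("disabled", False):
            continue  # disabled servers are never started
        servers[name] = {
            "command": [spec["command"], *spec.get("args", [])],
            "env": spec.get("env", {}),
            "auto_approve": spec.get("autoApprove", []),
        }
    return servers

print(enabled_servers())
```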
### Using MCP Configuration
```bash
# Use MCP config file
minion-code --config examples/mcp_config.json
# View loaded tools (including MCP tools)
# In CLI, type: tools
```
### Using MCP Tools in Programming Interface
```python
import asyncio
from pathlib import Path

from minion_code import MinionCodeAgent
from minion_code.utils.mcp_loader import load_mcp_tools

async def main():
    # Load MCP tools from the config file
    mcp_tools = await load_mcp_tools(Path("mcp.json"))

    # Create agent with MCP tools
    agent = await MinionCodeAgent.create(
        name="Enhanced Assistant",
        llm="gpt-4o-mini",
        additional_tools=mcp_tools
    )

asyncio.run(main())
```
## Conversation History Management
```python
# Get conversation history
history = agent.get_conversation_history()
for entry in history:
    print(f"User: {entry['user_message']}")
    print(f"Agent: {entry['agent_response']}")

# Clear history
agent.clear_conversation_history()
```
## Comparison with Original Implementation
### Before (Complex manual configuration)
```python
# Need to manually import and configure all tools
from minion_code.tools import (
    FileReadTool, FileWriteTool, BashTool,
    GrepTool, GlobTool, LsTool,
    PythonInterpreterTool, WebSearchTool,
    # ... more tools
)

# Manually create tool instances
custom_tools = [
    FileReadTool(),
    FileWriteTool(),
    BashTool(),
    # ... more tool configuration
]

# Manually set system prompt
SYSTEM_PROMPT = "You are a coding agent..."

# Create agent (~50 lines of code)
agent = await CodeAgent.create(
    name="Minion Code Assistant",
    llm="gpt-4o-mini",
    system_prompt=SYSTEM_PROMPT,
    tools=custom_tools,
)
```
### Now (Using MinionCodeAgent)
```python
# One line of code completes all setup
agent = await MinionCodeAgent.create(
    name="Minion Code Assistant",
    llm="gpt-4o-mini"
)
```
## API Reference
### MinionCodeAgent.create()
```python
async def create(
    name: str = "Minion Code Assistant",
    llm: str = "gpt-4o-mini",
    system_prompt: Optional[str] = None,
    workdir: Optional[Union[str, Path]] = None,
    additional_tools: Optional[List[Any]] = None,
    **kwargs
) -> MinionCodeAgent
```
**Parameters:**
- `name`: Agent name
- `llm`: LLM model to use
- `system_prompt`: Custom system prompt (optional)
- `workdir`: Working directory (optional, defaults to current directory)
- `additional_tools`: List of additional tools (optional)
- `**kwargs`: Other parameters passed to CodeAgent.create()
### Instance Methods
- `run_async(message: str)`: Run agent asynchronously
- `run(message: str)`: Run agent synchronously
- `get_conversation_history()`: Get conversation history
- `clear_conversation_history()`: Clear conversation history
- `get_tools_info()`: Get tools info
- `print_tools_summary()`: Print tools summary
### Properties
- `agent`: Access underlying CodeAgent instance
- `tools`: Get available tools list
- `name`: Get agent name
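A short usage sketch combining the synchronous API with these properties, assuming `run()` returns the same response object (with an `.answer` attribute) as `run_async()`:
```python
import asyncio
from minion_code import MinionCodeAgent

async def build():
    return await MinionCodeAgent.create(name="Sync Demo", llm="gpt-4o-mini")

agent = asyncio.run(build())

print(agent.name)        # agent name property
print(len(agent.tools))  # number of available tools

# run() is the synchronous counterpart to run_async(); the .answer attribute
# is assumed to match the async examples above.
response = agent.run("Summarize the project structure")
print(response.answer)
```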
## Security Features
- **Command Execution Safety**: BashTool prohibits dangerous commands (e.g., `rm -rf`, `sudo`, etc.)
- **Python Execution Restrictions**: PythonInterpreterTool runs in a restricted environment, allowing only safe built-in functions and specified modules
- **File Access Control**: All file operations have path validation and error handling
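As a conceptual illustration of command vetting (not BashTool's actual implementation), a denylist check might look like this:
```python
# Conceptual sketch only: reject commands containing obviously dangerous
# patterns before execution. The pattern list is an illustrative assumption.
DENYLIST = ("rm -rf", "sudo", "mkfs", ":(){:|:&};:")

def is_command_allowed(command):
    lowered = command.lower()
    return not any(pattern in lowered for pattern in DENYLIST)

print(is_command_allowed("ls -la"))         # True
print(is_command_allowed("sudo rm -rf /"))  # False
```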
## Examples
See complete examples in the `examples/` directory:
- `simple_code_agent.py`: Basic MinionCodeAgent usage example
- `simple_tui.py`: Simplified TUI implementation
- `advanced_textual_tui.py`: Advanced TUI interface (using Textual library)
- `minion_agent_tui.py`: Original complex implementation (for comparison)
- `mcp_config.json`: MCP configuration file example
- `test_mcp_config.py`: MCP configuration loading test
- `demo_mcp_cli.py`: MCP CLI feature demo
Run examples:
```bash
# Basic usage example
python examples/simple_code_agent.py
# Simple TUI
python examples/simple_tui.py
# Advanced TUI (requires textual: pip install textual rich)
python examples/advanced_textual_tui.py
# Test MCP config loading
python examples/test_mcp_config.py
# MCP CLI feature demo
python examples/demo_mcp_cli.py
```
## Documentation
- [LLM Configuration Guide](LLM_CONFIG.md) - How to configure Large Language Models (LLM)
- [MCP Tool Integration Guide](docs/MCP_GUIDE.md) - Detailed MCP configuration and usage guide
## Contributing
Issues and Pull Requests are welcome to improve this project!
## License
MIT License
| text/markdown | null | User <user@example.com> | null | null | null | null | [] | [] | null | null | >=3.8 | [] | [] | [] | [
"minionx>=0.1.20",
"typer>=0.9.0",
"Pillow>=10.0.0",
"nest-asyncio>=1.5.0",
"agent-client-protocol>=0.7.0",
"anthropic>=0.30.0",
"aiohttp>=3.9.0",
"pytest; extra == \"dev\"",
"black; extra == \"dev\"",
"flake8; extra == \"dev\"",
"mypy; extra == \"dev\"",
"textual>=0.40.0; extra == \"tui\"",
... | [] | [] | [] | [] | twine/6.2.0 CPython/3.11.4 | 2026-02-19T11:24:49.444139 | minion_code-0.1.39.tar.gz | 190,820 | 2a/2e/c8f868a52ba6ac284710395847003479a9ab05fb77bac709b5412128e6ee/minion_code-0.1.39.tar.gz | source | sdist | null | false | 33b1e9c13eeb45350aa02397ffd28c64 | 8c24184822dc3dfed076348acc053198b3a97cd9701ce161d02617a0792d10d4 | 2a2ec8f868a52ba6ac284710395847003479a9ab05fb77bac709b5412128e6ee | null | [
"LICENSE"
] | 246 |
2.4 | feather-framework | 0.9.0 | An opinionated, batteries-included web framework for building production applications with AI. | <h1 align="center">
<img src="https://raw.githubusercontent.com/RolandFlyBoy/Feather/main/feather/static/favicon.svg" alt="" width="64" height="64" style="vertical-align: middle;">
Feather
</h1>
### What is Feather?
Feather is a full-stack web framework built on proven technologies: **Flask** for the backend, **Tailwind CSS** for styling, **HTMX** for dynamic interactions, and **vanilla JavaScript** for complex client-side behavior.
Built with and optimized for [Claude Code](https://claude.ai/code), though it works with any AI coding assistant. Each project includes a `CLAUDE.md` that gives AI assistants the context they need to follow framework conventions.
### What's Included
Feather provides production-ready infrastructure so you can focus on your application:
| Feature | Options |
|---------|---------|
| **Authentication** | Google OAuth with session management, approval workflow |
| **User Management** | Admin panel for approvals, roles, suspension |
| **Multi-Tenancy** | Domain-based or individual tenants (B2B+B2C) |
| **Background Jobs** | Thread pool with concurrency control, or RQ (Redis) |
| **Caching** | Memory or Redis |
| **File Storage** | Local filesystem or Google Cloud Storage |
| **Email** | Resend for transactional emails |
| **Dark Mode** | Cookie-persisted toggle on every page, including admin |
| **Security Headers** | CSP, HSTS, X-Frame-Options, Referrer-Policy (production) |
| **Rate Limiting** | In-memory (or Redis for distributed) |
| **Events** | Pub/sub with sync and async listeners |
| **Error Logging** | Database-backed, tenant-scoped |
| **Health Checks** | `/health`, `/health/live`, `/health/ready` |
| **Request Tracking** | Unique request IDs, JSON logging |
All features are optional and can be enabled during project creation or added later.
### Why Feather?
Python has a long history in web development. Flask and Django powered countless applications through the 2010s. Then the SPA revolution happened—React, Vue, Angular—and suddenly "modern" web development meant writing Python APIs that served JSON to JavaScript frontends.
That split created a gap. Python developers who wanted full-stack productivity had two choices: adopt the JavaScript ecosystem entirely, or stick with Django's monolithic approach that hadn't evolved much for the new era. Meanwhile, Ruby developers had Rails with Hotwire, PHP developers had Laravel with Livewire—both frameworks that embraced server-rendering while adding modern interactivity.
Feather fills that gap for Python. It's a full-stack framework that gives you authentication, admin panels, file storage, background jobs, and a component system out of the box. The frontend uses server-rendered HTML enhanced with HTMX and small JavaScript islands—no virtual DOM, no hydration, no "use client" confusion.
**How other frameworks approach this:**
- **Rails** and **Laravel** pioneered the batteries-included philosophy. They handle auth, database migrations, background jobs, and asset compilation in one cohesive package. Feather takes the same approach but uses Python and modern tooling (Vite 7, Tailwind CSS, HTMX).
- **Next.js** brought React to the server with excellent developer experience. But you're still managing React's complexity—state management, hydration mismatches, deciding what runs where. Feather sidesteps this by keeping JavaScript minimal and optional.
- **Django** remains powerful but feels heavyweight for many projects. Its template language is limiting, the admin is rigid, and adding modern frontend tooling requires significant configuration.
The real unlock is combining good conventions with AI assistance. Feather's predictable patterns—where files go, how services work, what components look like—mean you can describe what you want and get working code. A feature that might take a day of wiring up authentication, writing migrations, building UI, and handling edge cases can be done in a focused session.
Feather is opinionated about its defaults: Google OAuth for auth, Tailwind for styling, PostgreSQL for production data. These choices reduce decision fatigue and let you ship faster. That said, the abstractions are designed to be extensible—the storage backend interface works with local files or GCS, the job queue can run in-process or on Redis, and you can swap in other providers as your needs evolve.
### How the Frontend Works
Feather uses a three-layer approach to building UIs, each solving a different problem:
**Components** are server-rendered Jinja2 macros—similar to Rails view components, Laravel Blade components, or React Server Components. They're reusable pieces of UI (buttons, cards, modals) that render to HTML on the server. No JavaScript, no hydration, just HTML and CSS. You use them like `{{ button("Save", variant="primary") }}`.
**HTMX** handles server interactions without page reloads. If you've used Hotwire/Turbo in Rails or Livewire in Laravel, it's the same idea. Click a button, HTMX makes an HTTP request, the server returns HTML, HTMX swaps it into the page. It replaces most of what you'd use React + fetch for—forms, search, pagination, like buttons—without writing JavaScript. Think of it as server-side rendering with surgical DOM updates.
**Islands** are small JavaScript components for genuinely interactive UI that needs client-side state. The name comes from Astro's Islands Architecture—most of the page is static HTML, with small "islands" of interactivity. Use them for things like drag-and-drop, audio players, or real-time updates where round-tripping to the server would feel sluggish. They're similar to writing a small React component, but without React's runtime overhead.
The mental model: start with Components for everything static, reach for HTMX when you need server data without a page reload, and only use Islands when you genuinely need client-side state. In practice, 90% of features can be built with just Components and HTMX.
---
## Getting Started
### Prerequisites
**Core requirements (all apps):**
- **Python 3.10+** — the runtime
- **Node.js 22+** — for Vite 7 (build tooling) and Tailwind CSS
- **pipx** — for installing the Feather CLI globally
**Simple apps** (no auth, prototypes, internal tools):
- **SQLite** — works out of the box, no setup required
**Production apps** (auth, multi-tenant, background jobs):
- **PostgreSQL** — required for multi-tenant apps, recommended for anything with auth
- **Google Cloud credentials** — for OAuth (free tier works fine)
- **Redis** (optional) — for distributed caching and persistent job queues
- **Google Cloud Storage** (optional) — for file uploads in production
- **Resend** (optional) — for transactional emails
### Installation
**From PyPI (recommended):**
```bash
pip install feather-framework
```
**Or with pipx (isolated environment):**
```bash
brew install pipx && pipx ensurepath # if you don't have pipx
pipx install feather-framework
```
**For development (contributing to Feather):**
```bash
git clone https://github.com/RolandFlyBoy/Feather.git
cd Feather
pipx install -e .
feather test --framework # run framework tests
```
This installs the `feather` CLI. You can now run `feather new` from any directory.
### Quick Start
#### 1. Create a New Project
```bash
feather new myapp
```
You'll be prompted for app type first:
| App Type | Database | Auth | Description |
|----------|----------|------|-------------|
| `simple` (default) | Ask (default: none) | No | Static pages, minimal setup |
| `single-tenant` | Ask (default: SQLite) | Yes | One organization, user accounts |
| `multi-tenant` | PostgreSQL (required) | Yes | Multiple organizations (SaaS) |
During scaffolding, you'll be asked about optional features:
- **Background jobs** — thread pool by default, optionally Redis
- **Auto-approve users** — immediately activate new signups (authenticated apps only)
- **Caching** — memory cache for development, optionally Redis for production
- **File storage** — local filesystem for development, optionally GCS for production
- **Email** — Resend for transactional emails (authenticated apps only)
- **Display name field** — optional `display_name` field on User model (authenticated apps only)
- **Admin email** — creates your initial admin user (authenticated apps only)
#### 2. Initialize and Run
```bash
cd myapp
source venv/bin/activate
# Set up database (migrations are manual so you can review models first)
feather db migrate -m "Initial migration"
feather db upgrade
python seeds.py # Creates admin user if auth enabled
# Start dev server
feather dev
```
Open http://localhost:5173 — Vite handles frontend assets with HMR, Flask runs on port 5000 behind the proxy. CSS and JS changes are instant; template and Python changes trigger a reload.
**Note:** If using background jobs with the thread backend, set `FLASK_DEBUG=0` in `.env`. The Flask reloader kills background threads on file changes. Use `JOB_BACKEND=sync` during development if you need debug mode.
Every Feather project includes a `CLAUDE.md` guide that helps AI assistants understand the framework's patterns and conventions. It's a starting point—add your own project-specific context, domain rules, or coding preferences as your app grows.
### Project Structure
```
myapp/
├── app.py # Entry point
├── config.py # Configuration classes
├── seeds.py # Initial data (if auth enabled)
├── .env # Environment variables
├── package.json # Node dependencies (Vite, Tailwind)
├── vite.config.js # Build configuration
├── models/ # SQLAlchemy models (auto-discovered)
├── services/ # Business logic (auto-discovered)
├── routes/
│ ├── api/ # API routes → /api/*
│ └── pages/ # Page routes → /*
├── templates/
│ ├── base.html # Base layout with HTMX/Vite
│ ├── components/ # Custom/override components
│ ├── partials/ # HTMX response fragments
│ └── pages/ # Full page templates
├── static/
│ ├── css/app.css # Tailwind entry point
│ ├── js/app.js # Shared JavaScript
│ └── islands/ # Interactive JS components
├── tests/ # Test files
└── migrations/ # Alembic migrations
```
**Framework-provided** (served from `/feather-static/`, auto-update with Feather upgrades):
- Components: `button`, `card`, `modal`, `input`, `alert`, `icon`, `dropdown`
- JS: `api.js` (CSRF-aware fetch), `feather.js` (Islands runtime)
Override any component by creating your own version in `templates/components/`.
---
## UI Architecture
The concepts are explained in [How the Frontend Works](#how-the-frontend-works). This section is a quick reference.
### Components
```html
{% from "components/button.html" import button %}
{% from "components/icon.html" import icon %}
{{ button("Save", type="submit") }}
{{ button("Delete", variant="danger", icon=icon("delete", size="sm")) }}
```
**Available:** `button`, `card`, `modal`, `input`, `textarea`, `alert`, `icon`, `dropdown`, `confirm_modal`, `prompt_modal`, `toast`
### HTMX
```html
<button hx-post="/api/posts/123/like" hx-swap="outerHTML">Like (5)</button>
```
```python
@api.post("/posts/<post_id>/like")
def like_post(post_id):
    post = Post.query.get_or_404(post_id)
    post.toggle_like(current_user)
    return render_template("partials/like_button.html", post=post, liked=True)
```
**Cross-element updates** — use `HX-Trigger` header to fire events that other elements listen for:
```python
response = make_response(render_template('partials/todo.html', todo=todo))
response.headers['HX-Trigger'] = 'todosUpdated'
return response
```
```html
<div hx-get="/htmx/stats" hx-trigger="load, todosUpdated from:body">
```
**Built-in modals:** `hx-confirm="Delete?"` for confirmations, `window.showPrompt({...})` for input.
### Islands
```javascript
island("counter", {
persist: true,
state: { count: 0 },
actions: {
increment() { this.state.count++; },
decrement() { this.state.count--; }
},
render(state) {
return { ".count": state.count };
}
});
```
```html
<div data-island="counter">
  <button data-action="decrement">-</button>
  <span class="count">0</span>
  <button data-action="increment">+</button>
</div>
```
**Optimistic updates:**
```javascript
await this.optimistic(
  () => { this.state.liked = true; },            // Instant UI update
  () => api.post(`/posts/${this.data.id}/like`)  // Rolls back on failure
);
```
**Drag-drop:** Built-in via `draggable` config — see [CLAUDE.md](CLAUDE.md) for full API.
### Icons
[Google Material Icons](https://fonts.google.com/icons): `{{ icon("home") }}`, `{{ icon("settings", size="lg") }}`
Sizes: `sm` (18px), `md` (24px), `lg` (36px), `xl` (48px)
### Dark Mode
Every scaffolded app includes a dark mode toggle that persists across pages via a `dm` cookie. The toggle is in the header of every page, including the admin panel.
**How it works:**
- A `dark-mode.js` script (loaded in `<head>`) reads the `dm` cookie and applies a `.dark` class to `<html>` before first render — no flash of wrong theme
- Clicking any element with `data-toggle-dark-mode` toggles the class and updates the cookie
- All CSS uses `dark:` variants via Tailwind's custom variant: `@custom-variant dark (&:where(.dark, .dark *))`
**Toggle button (scaffolded in templates):**
```html
<button data-toggle-dark-mode class="dark-mode-toggle" title="Toggle dark mode">
  <span class="material-symbols-outlined icon-light">bedtime</span>
  <span class="material-symbols-outlined icon-dark">sunny</span>
</button>
```
**CSS classes (in `app.css`):**
```css
.dark-mode-toggle .icon-light { @apply dark:hidden; }
.dark-mode-toggle .icon-dark { @apply hidden dark:inline; }
```
When adding custom styles, include `dark:` variants for every color-related class. Feather recommends CSS classes with `@apply` rather than inline Tailwind, so dark mode support looks like:
```css
.my-card {
  @apply bg-white dark:bg-gray-800 text-gray-900 dark:text-gray-100
         border border-gray-200 dark:border-gray-700;
}
```
---
## Backend
### Routes
Routes handle HTTP requests. Feather auto-discovers routes in `routes/api/` and `routes/pages/`.
```python
# routes/api/users.py
from feather import api, auth_required, inject
from services import UserService

@api.get('/users')
@inject(UserService)
def list_users(user_service):
    return {'users': user_service.list_all()}

@api.post('/users')
@auth_required
@inject(UserService)
def create_user(user_service, email: str, username: str):
    user = user_service.create(email=email, username=username)
    return {'user': user}, 201
```
**Route prefixes:**
- `routes/api/*.py` → `/api/*`
- `routes/pages/*.py` → `/*`
### Models
Models define your database schema using SQLAlchemy with helpful mixins:
```python
# models/post.py
from feather.db import db, Model
from feather.db.mixins import UUIDMixin, TimestampMixin, SoftDeleteMixin

class Post(UUIDMixin, TimestampMixin, SoftDeleteMixin, Model):
    __tablename__ = 'posts'
    title = db.Column(db.String(255), nullable=False)
    content = db.Column(db.Text)
    author_id = db.Column(db.String(36), db.ForeignKey('users.id'))
```
**Mixins:**
| Mixin | Provides |
|-------|----------|
| `UUIDMixin` | `id` (auto-generated UUID) |
| `TimestampMixin` | `created_at`, `updated_at` |
| `SoftDeleteMixin` | `soft_delete()`, `restore()`, `query_active()` |
| `OrderingMixin` | `move_to()`, `move_above()`, `query_ordered()` |
| `TenantScopedMixin` | `tenant_id`, `for_tenant()` |
**OrderingMixin** for drag-drop:
```python
class Card(UUIDMixin, TimestampMixin, OrderingMixin, Model):
    __tablename__ = 'cards'
    __ordering_scope__ = ['column_id']  # Position is per-column
    title = db.Column(db.String(200))
    column_id = db.Column(db.String(36), db.ForeignKey('columns.id'))

# Reorder
card.move_to(0)         # Move to top
card.move_above(other)  # Move above another card
Card.query_ordered(column_id=col.id).all()
```
### Schema Design: Separating Users, Accounts, and Subscriptions
A common mistake when building SaaS apps is putting everything on the User model—subscription status, quotas, assets, preferences. This creates problems:
- **Family/team sharing impossible** — subscriptions are locked to one person
- **Profile switching breaks** — can't have separate preferences per context
- **Billing gets messy** — hard to transfer subscriptions or handle corporate accounts
**The better pattern:** separate authentication (User) from content ownership (Account) from billing (Subscription).
```
┌─────────┐     ┌─────────────┐     ┌─────────────┐
│  User   │────▶│ AccountUser │◀────│   Account   │
│ (auth)  │     │   (role)    │     │  (content)  │
└─────────┘     └─────────────┘     └──────┬──────┘
                                           │
                                    ┌──────▼──────┐
                                    │Subscription │
                                    │  (billing)  │
                                    └─────────────┘
```
**User** — authentication identity only:
```python
class User(UserMixin, Model):
    email = db.Column(db.String(255), unique=True)   # OAuth identity
    stripe_customer_id = db.Column(db.String(255))   # For billing portal
    # NO subscription_status, NO quota, NO content here
```
**Account** — where content and quotas live (like Netflix profiles):
```python
class Account(Model):
    name = db.Column(db.String(100))                  # "Family", "Work", etc.
    owner_user_id = db.Column(db.ForeignKey("users.id"))
    quota = db.Column(db.Integer, default=0)          # Usage limits here
    # Projects, documents, assets belong to Account, not User
```
**AccountUser** — many-to-many with roles:
```python
class AccountUser(Model):
    user_id = db.Column(db.ForeignKey("users.id"), primary_key=True)
    account_id = db.Column(db.ForeignKey("accounts.id"), primary_key=True)
    role = db.Column(db.String(20))  # "admin", "member", "child"
```
**Subscription** — billing state attached to Account:
```python
class Subscription(Model):
    account_id = db.Column(db.ForeignKey("accounts.id"))
    stripe_subscription_id = db.Column(db.String(255))
    status = db.Column(db.String(50))     # "active", "canceled", etc.
    tier_name = db.Column(db.String(50))  # "Basic", "Pro", "Enterprise"
```
**Benefits:**
- One user can access multiple accounts (personal + work)
- Multiple users can share one account (family plan)
- Subscriptions transfer cleanly when ownership changes
- Content queries are scoped to Account, not scattered across Users
- Easy to add team features later without schema changes
**When to use this pattern:** Any app with subscriptions, quotas, shared resources, or where users might want separate "workspaces" or "profiles."
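A short sketch of wiring the pattern together at signup time; field names follow the model definitions above, while the session handling (flush/commit) is simplified for illustration:
```python
# Sketch: create a user, an account they own, link them with a role, and
# attach billing state to the account. Simplified assumptions, not scaffolded code.
user = User(email="bob@example.com")
db.session.add(user)
db.session.flush()  # assign user.id before it is referenced below

account = Account(name="Family", owner_user_id=user.id, quota=100)
db.session.add(account)
db.session.flush()

# Membership with a role, then the subscription lives on the Account
db.session.add(AccountUser(user_id=user.id, account_id=account.id, role="admin"))
db.session.add(Subscription(account_id=account.id, status="active", tier_name="Pro"))
db.session.commit()
```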
### Services
Services contain business logic. Keep routes thin, services fat.
```python
# services/user_service.py
from feather import Service, transactional
from feather.exceptions import ValidationError, ConflictError
from feather.db import paginate
from models import User

class UserService(Service):
    @transactional  # Auto-commits on success, rolls back on exception
    def create(self, email: str, username: str) -> User:
        if not email or '@' not in email:
            raise ValidationError('Valid email required', field='email')
        if User.query.filter_by(email=email).first():
            raise ConflictError('Email already registered')
        user = User(email=email, username=username)
        self.db.add(user)
        return user

    def list_paginated(self, page: int = 1, per_page: int = 20):
        query = User.query.order_by(User.created_at.desc())
        return paginate(query, page=page, per_page=per_page)
```
**Singleton services** for expensive initialization:
```python
from feather.services import singleton, Service

@singleton
class CacheService(Service):
    def __init__(self):
        super().__init__()
        self.cache = {}  # Shared across all requests
```
### Exceptions
Exception classes that automatically convert to JSON responses:
```python
from feather.exceptions import (
    ValidationError,        # 400 - Invalid input
    AuthenticationError,    # 401 - Not logged in
    AuthorizationError,     # 403 - No permission
    AccountPendingError,    # 403 - Account awaiting approval (redirects to /account/pending)
    AccountSuspendedError,  # 403 - Account suspended (redirects to /account/suspended)
    NotFoundError,          # 404 - Resource not found
    ConflictError,          # 409 - Already exists
)

# Throws:
raise ValidationError('Email is required', field='email')

# Returns:
# {"success": false, "error": {"code": "VALIDATION_ERROR", "message": "Email is required"}}
```
**Account status exceptions:** `AccountPendingError` and `AccountSuspendedError` inherit from `AuthorizationError` but trigger redirects to dedicated status pages instead of generic 403 errors. They're raised automatically by `@auth_required` based on the user's `active` and `approved_at` fields.
---
## Features
### Authentication
Feather uses **Google OAuth** for authentication—no passwords to store, no signup forms to build. The same flow handles both login and registration: users click "Sign in with Google", authorize the app, and Feather creates their account if it doesn't exist. This eliminates the entire signup/login/forgot-password complexity that traditional auth requires.
While Google OAuth is the default, the architecture can be extended for other OAuth providers (GitHub, Microsoft, etc.) by adding additional blueprints.
**User approval workflows:**
When users first authenticate, Feather can either auto-approve them immediately or hold them for admin review:
| Workflow | CLI Option | Best For |
|----------|------------|----------|
| Auto-approve | `Auto-approve new user signups?` → Yes | Consumer apps, open registration |
| Manual approval | `Auto-approve new user signups?` → No (default) | Internal tools, B2B apps, invite-only |
**Manual approval (default)** — new users are created in suspended state and see a "pending approval" page until an admin approves them via the admin panel. This prevents drive-by signups and gives you explicit control over who uses your application.
**Auto-approve** — new users are automatically activated on first login. When you select this during scaffolding, Feather sets `AUTO_APPROVE_USERS = True` in your `config.py` and the framework handles the rest — no callback files or env vars needed.
**Converting existing apps:** To switch from manual to auto-approve, add `AUTO_APPROVE_USERS = True` to your `config.py`.
**Configuration:**
```bash
# .env
GOOGLE_CLIENT_ID=your-client-id
GOOGLE_CLIENT_SECRET=your-client-secret
# Session settings (optional)
SESSION_LIFETIME_DAYS=7 # Default: 7
REMEMBER_COOKIE_DAYS=365 # Default: 365
SESSION_PROTECTION=basic # Options: None, basic, strong
```
**Setup:**
1. Create credentials at [Google Cloud Console](https://console.cloud.google.com/apis/credentials)
2. Add redirect URI: `http://localhost:5173/auth/google/callback` (dev) or your production URL
3. Add credentials to `.env`
4. Run `python seeds.py` to create your admin user
**Seeds** (`seeds.py`) populate initial data in your database. The scaffolded version creates your admin user with the email you provided during `feather new`. Extend it for your own initial data:
```python
# seeds.py
def seed():
    # Admin user (scaffolded)
    admin = User(email=ADMIN_EMAIL, role="admin", active=True)
    db.session.add(admin)

    # Add your seed data here
    default_categories = ["General", "Support", "Billing"]
    for name in default_categories:
        db.session.add(Category(name=name))

    db.session.commit()
```
Run seeds anytime with `python seeds.py` or `feather db seed`. The scaffolded seed is idempotent—it updates existing users rather than creating duplicates.
**Routes:**
| Route | Description |
|-------|-------------|
| `/auth/google/login` | Start OAuth flow |
| `/auth/google/callback` | OAuth callback (automatic) |
| `/auth/logout` | End session |
**Usage:**
```html
<a href="/auth/google/login">Sign in with Google</a>
<a href="/auth/logout">Sign out</a>
```
**Auth decorators:**
```python
from feather import auth_required, admin_required, role_required, login_only
from feather.auth import permission_required, platform_admin_required

@api.get('/me')
@auth_required  # Any authenticated + approved user
def get_profile():
    return {'user': current_user.to_dict()}

@page.get('/account/pending')
@login_only  # Authenticated but may be pending/suspended
def account_pending():
    return render_template('pages/account/pending.html')

@api.delete('/users/<id>')
@admin_required  # Tenant admin (role="admin")
def delete_user(id):
    pass

@api.post('/articles')
@role_required('editor')  # Specific role (admin inherits all)
def create_article():
    pass

@api.post('/tenants')
@platform_admin_required  # Cross-tenant operations
def create_tenant():
    pass
```
**Roles** — these defaults cover most apps, but you can add, remove, or rename them:
| Role | Purpose | Inherits |
|------|---------|----------|
| `user` | Basic access (default for new users) | — |
| `editor` | Content creation | `user` |
| `moderator` | Content moderation | `user` |
| `admin` | Tenant administration | all roles |
Roles inherit permissions: `@role_required('editor')` allows both editors and admins.
**To customize roles**, edit the hierarchy in `feather/auth/roles.py`:
```python
# Add a new role
ROLE_INHERITS = {
    "admin": {"admin", "editor", "moderator", "reviewer", "user"},
    "editor": {"editor", "user"},
    "moderator": {"moderator", "user"},
    "reviewer": {"reviewer", "user"},  # New role
    "user": {"user"},
}
```
Then use it in routes: `@role_required('reviewer')`. The User model's `role` field is a simple string—no migration needed when adding roles.
**Permissions** — CRUD-based access control that maps to roles:
| Permission | Who Has It | Use Case |
|------------|------------|----------|
| `resources.read` | all roles | View data |
| `resources.create` | editor, admin | Create content |
| `resources.update` | editor, admin | Edit content |
| `resources.manage` | moderator, admin | Moderation actions |
| `resources.delete` | admin only | Delete content |
| `*` | admin only | All permissions |
```python
from feather.auth import permission_required

@api.get('/articles')
@permission_required('resources.read')  # All authenticated users
def list_articles():
    pass

@api.post('/articles')
@permission_required('resources.create')  # Editors and admins
def create_article():
    pass

@api.delete('/articles/<id>')
@permission_required('resources.delete')  # Admins only
def delete_article(id):
    pass
```
**When to use which:**
- `@auth_required` — any logged-in, approved user
- `@login_only` — authenticated but may be pending/suspended (for status pages, account setup)
- `@role_required('editor')` — check by role name (with inheritance)
- `@permission_required('resources.create')` — check by action (more semantic)
- `@admin_required` — shorthand for `@role_required('admin')`
Permissions are defined in `feather/auth/permissions.py` and can be extended like roles.
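As a hypothetical sketch of what such an extension could look like, assuming a role-to-permissions mapping (the mapping name and shape are assumptions; check `feather/auth/permissions.py` for the actual structure):
```python
# Hypothetical role-to-permissions mapping; name and structure are assumed
# for illustration and may differ from Feather's real permissions module.
ROLE_PERMISSIONS = {
    "user":      {"resources.read"},
    "editor":    {"resources.read", "resources.create", "resources.update"},
    "moderator": {"resources.read", "resources.manage"},
    "reviewer":  {"resources.read", "resources.review"},  # new permission
    "admin":     {"*"},
}
```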
#### Approval Workflow Pages
When users are pending approval or suspended, they're automatically redirected to dedicated pages instead of seeing generic error messages:
| State | Redirect | Description |
|-------|----------|-------------|
| Pending | `/account/pending` | New user awaiting admin approval |
| Suspended | `/account/suspended` | Previously approved, now deactivated |
These pages are scaffolded with friendly messages and logout buttons. They use `@login_only` so users remain authenticated while seeing their account status.
**Customizing the flow:** Edit the templates in `templates/pages/account/` to match your branding and add contact information.
#### Post-Login Callback
For B2B+B2C apps that need custom account setup logic after OAuth:
```bash
# .env
FEATHER_POST_LOGIN_CALLBACK=myapp.auth:handle_login
```
```python
# myapp/auth.py
def handle_login(user, token):
    """Called after OAuth login with user and token.

    Args:
        user: The User model instance
        token: OAuth token dict (access_token, refresh_token, etc.)

    Returns:
        Redirect URL string, or None for default behavior
    """
    if not user.account_id:
        # New user needs account setup
        return '/onboarding/select-plan'
    return None  # Default redirect to home
```
Use this for creating Account/Membership records, assigning tenants to public email users, or custom onboarding flows.
#### Pre-Register Callback
Block new user registrations before the account is created. This runs during OAuth signup, **only for new users** — existing users logging in are unaffected.
```bash
# .env
FEATHER_PRE_REGISTER_CALLBACK=myapp.auth:check_registration
```
```python
# myapp/auth.py
from flask import request

def check_registration():
    """Called before creating a new user during OAuth signup.

    Use Flask's request object to access the current request context
    (e.g., IP address, headers).

    Returns:
        Error message string to block registration, or None to allow it.
    """
    ip = request.headers.get("X-Real-IP", request.remote_addr)
    if is_blocked(ip):
        return "Registration is not available from your location."
    return None  # Allow registration
```
If the callback returns a string, registration is blocked — the message is shown as a toast error and no user record is created. If it returns `None` (or raises an exception), registration proceeds normally. Errors in the callback are logged but do not block signups (graceful degradation).
### Admin Panel
Most frameworks leave you to build your own admin interface—user management, analytics, error tracking. That's typically days of work before you ship any actual features. Feather includes a production-ready admin panel out of the box.
**What's included:**
| Feature | Description |
|---------|-------------|
| **User Management** | List, search, paginate users with HTMX-powered UI |
| **User Approval** | Approve pending signups, suspend bad actors |
| **Role Assignment** | Change user roles (user → editor → admin) |
| **Analytics Dashboard** | User growth charts with Apache ECharts, time range filters |
| **Error Logging** | Database-backed error logs with stack traces, tenant-scoped |
| **Tenant Management** | Create/manage tenants, assign admins (multi-tenant only) |
**Enable:**
```bash
feather new myapp
# Choose "single-tenant" or "multi-tenant" app type
```
**Access:** `/admin/` — requires `role="admin"` or `is_platform_admin=True`
**Pages:**
| Page | Route | Description |
|------|-------|-------------|
| Users | `/admin/users` | Searchable user list with pagination |
| User Detail | `/admin/users/<id>` | Profile card, role dropdown, approve/suspend buttons |
| Analytics | `/admin/analytics` | User growth chart with 7d/30d/90d/1y filters |
| Error Logs | `/admin/logs` | Filterable error list (4xx/5xx, searchable) |
| Tenants | `/admin/tenants` | Tenant list with status filters (multi-tenant only) |
**User states:**
- **Pending Approval** — new signup, never approved (`active=False`, `approved_at=None`)
- **Active** — approved and can access the app (`active=True`)
- **Suspended** — was active, now blocked (`active=False`, `approved_at` set)
#### Extending the Admin Panel
The admin is scaffolded into your app as regular routes and templates—not hidden in the framework. You own the code and can modify it freely.
**Files you can customize:**
```
routes/pages/admin.py # Admin routes and HTMX endpoints
services/admin_service.py # User queries, analytics data
templates/pages/admin/ # Full page templates
templates/partials/admin/ # HTMX response fragments
static/css/app.css # Admin CSS classes (admin-header, etc.)
```
**Adding a new admin page:**
1. Add a route in `routes/pages/admin.py`:
```python
@page.get('/admin/reports')
@admin_required
def admin_reports():
    reports = ReportService().get_recent()
    return render_template('pages/admin/reports.html', reports=reports)
```
2. Create the template `templates/pages/admin/reports.html`:
```jinja2
{% extends "pages/admin/base.html" %}
{% block admin_content %}
<h1>Reports</h1>
<!-- Your content here -->
{% endblock %}
```
3. Add navigation in `templates/pages/admin/base.html`:
```jinja2
<a href="{{ url_for('page.admin_reports') }}"
class="admin-nav-item {{ 'active' if active_page == 'reports' }}">
Reports
</a>
```
**Adding HTMX interactions** (like the user search):
```python
@page.get('/admin/htmx/reports/filter')
@admin_required
def htmx_filter_reports():
    status = request.args.get('status')
    reports = ReportService().filter_by_status(status)
    return render_template('partials/admin/reports_table.html', reports=reports)
```
The admin uses the same three-layer architecture as the rest of your app: server-rendered templates, HTMX for interactions, and Islands only where needed (the analytics chart).
### Multi-Tenancy
Multi-tenancy is one of the hardest problems in SaaS development. You need to:
- Isolate data so Company A never sees Company B's data
- Handle authentication across organizational boundaries
- Manage two levels of admin (company admins vs. platform operators)
- Scope every database query to the current tenant
- Prevent cross-tenant access even from malicious or buggy code
Most teams spend weeks building this infrastructure. Feather provides production-ready multi-tenancy out of the box.
**Enable:**
```bash
feather new myapp
# Choose "multi-tenant" app type
```
#### How It Works
Feather uses **domain-based tenant isolation**. When a user signs in with `bob@acme.com`:
1. Feather extracts the domain (`acme.com`)
2. Looks up the tenant with that domain
3. Assigns the user to that tenant
4. All subsequent queries are scoped to that tenant
```
User signs in → Domain extracted → Tenant matched → Data scoped
bob@acme.com → acme.com → Acme Corp tenant → Only sees Acme data
```
**Public email domains:** By default, Gmail, Outlook, Yahoo, and other consumer email providers are blocked—users must sign in with their work email. For B2B+B2C apps that need to support both corporate and individual users:
```bash
# .env
FEATHER_ALLOW_PUBLIC_EMAILS=true
```
When enabled, users with public emails (Gmail, etc.) are created with `tenant_id=None`. Use the post-login callback to handle account/tenant creation for these users.
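For example, a post-login callback could provision an individual tenant for such users. This is a sketch under the assumptions that the callback can use the scaffolded `Tenant` model and `db.session` directly; slug generation and error handling are simplified:
```python
# Sketch: give a public-email (B2C) user their own individual tenant on first
# login. Import paths, slug format, and session handling are assumptions.
from feather.db import db
from models import Tenant

def handle_login(user, token):
    if user.tenant_id is None:  # public-email user with no tenant yet
        tenant = Tenant(
            slug=f"user-{user.id}",
            domain=None,          # individual tenant, no email domain
            name=user.email,
            type="individual",
            status="active",
        )
        db.session.add(tenant)
        db.session.flush()
        user.tenant_id = tenant.id
        db.session.commit()
    return None  # default redirect
```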
#### Two-Axis Authority Model
Feather separates **tenant authority** (what you can do within your organization) from **platform authority** (cross-organization operator power):
| Axis | Field | Scope | Example |
|------|-------|-------|---------|
| **Tenant Role** | `user.role` | Within one tenant | "admin", "editor", "user" |
| **Platform Authority** | `user.is_platform_admin` | Across all tenants | True/False |
This means:
- A **Tenant Admin** (`role="admin"`) can manage users within their organization, but can't see other tenants
- A **Platform Admin** (`is_platform_admin=True`) can create tenants, view all users, and operate across organizational boundaries
**Key design principle:** Tenant admins do NOT automatically bypass tenant isolation. An admin at Acme Corp cannot access data from Beta Inc—that requires explicit platform admin privileges.
#### Admin Levels Explained
**Tenant Admin** — manages one organization:
- Approve/suspend users in their tenant
- Change user roles within their tenant
- View error logs scoped to their tenant
- Cannot see other tenants or their data
**Platform Admin** — operates the entire platform:
- Create new tenants and assign domains
- Approve/suspend tenants
- View all users across all tenants
- Access platform-wide analytics and logs
- For security, can only be granted via CLI (not web UI)
```bash
# Grant platform admin (requires server access)
feather platform-admin admin@example.com
# Revoke platform admin
feather platform-admin admin@example.com --revoke
```
#### Admin Pages (Multi-Tenant Mode)
| Page | Route | Who Can Access | Description |
|------|-------|----------------|-------------|
| Users | `/admin/users` | Tenant Admin | Users in current tenant |
| User Detail | `/admin/users/<id>` | Tenant Admin | Approve/suspend, change roles |
| Error Logs | `/admin/logs` | Tenant Admin | Errors scoped to tenant |
| **Tenants** | `/admin/tenants` | Platform Admin only | All tenants, create new |
| **Tenant Detail** | `/admin/tenants/<id>` | Platform Admin only | Tenant info, users, approve/suspend |
#### Data Isolation
Feather enforces tenant isolation at multiple layers:
**1. Route layer** — `get_current_tenant_id()` returns the authenticated user's tenant:
```python
from feather import get_current_tenant_id

@api.get('/projects')
@auth_required
def list_projects():
    tenant_id = get_current_tenant_id()
    return Project.query.filter_by(tenant_id=tenant_id).all()
```
**2. Service layer** — `require_same_tenant()` guards against cross-tenant access:
```python
from feather.auth import require_same_tenant

def get_project_or_404(project_id):
    project = Project.query.get_or_404(project_id)
    require_same_tenant(project.tenant_id)  # Raises 403 if mismatch
    return project
```
**3. Model layer** — `TenantScopedMixin` adds tenant_id and scoped queries:
```python
from feather.db.mixins import TenantScopedMixin

class Project(UUIDMixin, TenantScopedMixin, Model):
    __tablename__ = 'projects'
    name = db.Column(db.String(100))

# Query only this tenant's projects
projects = Project.for_tenant(tenant_id).all()
```
**Hard boundary:** `require_same_tenant()` is a hard stop—even tenant admins cannot bypass it. Cross-tenant operations require platform admin routes with explicit `@platform_admin_required` decorators.
#### Tenant Model
The scaffolded Tenant model supports both B2B (domain-based) and B2C (individual) patterns:
```python
class Tenant(Model):
    slug = db.Column(db.String(64), unique=True, nullable=False)
    domain = db.Column(db.String(255), nullable=True)   # Nullable for B2C
    name = db.Column(db.String(255), nullable=False)
    type = db.Column(db.String(50), nullable=True)      # "company", "individual", etc.
    status = db.Column(db.String(20), default="pending")
```
- **B2B tenants:** Set `domain` to auto-assign users by email (e.g., `@acme.com` → Acme tenant)
- **B2C tenants:** Leave `domain` as `None`, create individually via post-login callback
- **type field:** Classify tenants for billing, features, or reporting
#### Tenant Lifecycle
1. **Platform admin creates tenant** via `/admin/tenants`:
   - Sets tenant name, slug, and optionally email domain
   - Creates initial tenant admin (auto-approved)
   - Tenant starts in pending state
2. **Platform admin approves tenant** — tenant becomes active
3. **Users sign up** with matching email domain:
   - Auto-assigned to tenant
   - Created in suspended state (pending approval)
4. **Tenant admin approves users** via `/admin/users`
This flow en | text/markdown | Roland Selmer | null | null | null | null | flask, web framework, server-first, progressive enhancement, islands architecture, ai-friendly | [
"Development Status :: 3 - Alpha",
"Environment :: Web Environment",
"Framework :: Flask",
"Intended Audience :: Developers",
"Operating System :: OS Independent",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programming ... | [] | null | null | >=3.11 | [] | [] | [] | [
"flask>=3.1.0",
"flask-sqlalchemy>=3.1.0",
"flask-migrate>=4.1.0",
"flask-login>=0.6.0",
"flask-wtf>=1.2.0",
"sqlalchemy>=2.0.0",
"alembic>=1.17.0",
"click>=8.1.0",
"python-dotenv>=1.0.0",
"werkzeug>=3.1.0",
"authlib>=1.6.0",
"requests>=2.32.0",
"psycopg2-binary>=2.9.0",
"google-cloud-stor... | [] | [] | [] | [
"Homepage, https://github.com/RolandFlyBoy/Feather",
"Documentation, https://github.com/RolandFlyBoy/Feather#readme",
"Repository, https://github.com/RolandFlyBoy/Feather",
"Issues, https://github.com/RolandFlyBoy/Feather/issues"
] | twine/6.2.0 CPython/3.14.2 | 2026-02-19T11:21:54.916958 | feather_framework-0.9.0.tar.gz | 283,411 | 99/96/2f3274c96a6a9ffd0a24b8831e12e247a5d4012b0612785421d85c8756a5/feather_framework-0.9.0.tar.gz | source | sdist | null | false | 0544f80e6515b57882732e73ae1172d9 | f96967bfb48323daa211cf0c1756122ee59f14a4f1249a15a5d9acc881597e90 | 99962f3274c96a6a9ffd0a24b8831e12e247a5d4012b0612785421d85c8756a5 | MIT | [
"LICENSE"
] | 255 |
2.4 | shellcoderunner-aes | 2.0.0 | AES-based shellcode loader generator for Windows security research | # ShellcodeRunner (AES)
## Overview
**ShellcodeRunner** is a research-focused project designed to help security enthusiasts, red teamers, and malware researchers understand **how custom shellcode loaders work on Windows**.
This repository demonstrates:
* Encrypting raw shellcode using **AES**
* Generating a **native C++ loader**
* Executing shellcode fully **from memory**
* Leveraging **NT Native APIs** for execution
> **Primary Goal:**
> To provide a practical idea of how shellcode loaders can be built in a way that can **easily bypass Windows Defender–based solutions** by avoiding static signatures, plaintext payloads, and common high-level APIs.
This project is intended for **educational and defensive research purposes only**.
---
## Proof of Concept [Video]
[](https://www.youtube.com/watch?v=xlK_TSLLuHA)
---
## Key Features
* AES-128-CBC encrypted shellcode
* Password-based key derivation (SHA-256)
* No plaintext shellcode on disk
* Native Windows CryptoAPI decryption
* NTAPI-based memory allocation and execution
* Simple and clean workflow
---
## Repository Structure
```
shellcoderunner/
├── shellcoderunneraes.py # Python builder (encrypts shellcode & generates C++ loader)
├── aes_nt_runner.cpp # Generated C++ loader
├── meow.inc # Encrypted shellcode + IV (auto-generated)
└── runner.exe # Final compiled executable
```
---
## Installation
Required Dependencies (Linux):
```bash
sudo apt update && sudo apt install -y python3 python3-pip mingw-w64
python3 -m pip install pycryptodome
```
Clone the repository:
```bash
git clone https://github.com/jaytiwari05/shellcoderunner.git
cd shellcoderunner
```
Make the script globally accessible:
```bash
cp shellcoderunneraes.py /usr/local/bin/shellcoderunneraes.py && chmod +x /usr/local/bin/shellcoderunneraes.py
```
---
## Usage
Generate and compile a shellcode loader using AES encryption:
```bash
shellcoderunneraes.py <C2_shellcode>.bin --aes pain05 --compile
```
### Arguments
* `<C2_shellcode>.bin` — Raw shellcode file generated by a C2 framework (e.g., Sliver, Adaptix, Cobalt Strike).
* `--aes` — Password used for AES key derivation
* `--compile` — Compiles the generated C++ loader into an executable
The final output will be a **standalone Windows executable** that decrypts and executes the shellcode entirely in memory.
---
## Why This Works Against Defender
This project highlights techniques commonly used to bypass Windows Defender–based detection:
* Encrypted payload stored on disk
* Runtime decryption using legitimate Windows APIs
* No RWX memory allocation
* Execution via NT Native APIs
* No use of high-level Win32 execution helpers
These techniques help reduce static signatures and behavioral indicators commonly relied upon by Defender.
---
## Disclaimer
This project is provided **strictly for educational, research, and defensive security purposes**.
Do not use this code for unauthorized or malicious activities.
The author is not responsible for misuse.
---
## Author
**PaiN05**
Security Research | Offensive Tradecraft | Malware Development Research
| text/markdown | PaiN05 | null | null | null | MIT | null | [
"Programming Language :: Python :: 3",
"Topic :: Security",
"Intended Audience :: Developers",
"Intended Audience :: Information Technology"
] | [] | null | null | null | [] | [] | [] | [
"pycryptodome"
] | [] | [] | [] | [] | twine/6.2.0 CPython/3.13.9 | 2026-02-19T11:21:23.090989 | shellcoderunner_aes-2.0.0.tar.gz | 7,276 | 07/19/c5d48b99321b35802ba20e3b932bcf4e62d14d3816ecaa556356859aff4f/shellcoderunner_aes-2.0.0.tar.gz | source | sdist | null | false | 268ec6abcf49e6debd752d2c527339d6 | dee6cecc63d3db496c28dc60ac796277361a6407859d600758d855b5810176fb | 0719c5d48b99321b35802ba20e3b932bcf4e62d14d3816ecaa556356859aff4f | null | [
"LICENSE"
] | 255 |
2.4 | openllm-func-call-synthesizer | 0.1.2 | A tool for generating synthetic function call datasets for Large Language Models (LLMs). | # 🛠️ openllm-func-call-synthesizer

[](https://openllm-func-call-synthesizer.readthedocs.io/en/latest/?version=latest)
> Lightweight toolkit to synthesize function-call datasets and convert them to formats compatible with OpenAI-style function-call training and downstream tooling (including Llama Factory compatible exports).
---
## ✨ Features
- 📝 Generate synthetic function call datasets for LLM training and evaluation
- ⚙️ Flexible configuration via YAML and Hydra
- 💻 CLI interface powered by Typer & Rich
- 🔧 Utility functions for dataset manipulation
- 🔄 Extensible and easy to integrate into your own pipeline
- 🌐 Supports multiple LLM backends (OpenAI, Google, etc.)
- 📊 Export formats: JSONL, CSV, Parquet, LlamaFactory-compatible
---
## 🛠 Installation
### Prerequisites
- Python 3.12+ (match environment used by the project)
- API credentials for any LLM backend (set via environment variables or `.env` file)
- Example: `OPENAI_API_KEY`
- See `.env.example` for reference
- 🔌 MCP Server (Required)

  This project relies on an MCP server to provide tool/function metadata.
  Before running the synthesizer, you must start an MCP server.

  ▶ **Start the example MCP server**

  An example MCP server is included in the repository:
  `python examples/mcp_example_sserver/server.py`

  This will start a local MCP server that the synthesizer can connect to.
  Make sure your configuration (e.g. `mcp_servers.transport`) matches the server address.

  ⸻

  ⚠ **Important**

  * The synthesizer will fail if no MCP server is available.
  * Ensure the server is running before executing `python -m apps.main`.
  * If you see connection errors, verify:
    * The server is running
    * The transport URL in your config is correct
    * Network/firewall settings allow local connections

  ⸻
---
### Install from PyPI
```bash
pip install openllm-func-call-synthesizer
# or using uv
uv add openllm-func-call-synthesizer
```
Install from source
```bash
git clone https://github.com/diqiuzhuanzhuan/openllm-func-call-synthesizer.git
cd openllm-func-call-synthesizer
uv sync
```
Don't have `uv` installed? You can install it with a single command:
```bash
curl -LsSf https://astral.sh/uv/install.sh | sh
```
⸻
## ⚡ Quickstart
Run the synthesizer with default config:
```bash
python -m apps.main
```
Enable only query generation:
```bash
python -m apps.main synthesizer.query_generation.enable=True
```
Enable function-call generation with custom name:
```bash
python -m apps.main synthesizer.function_call_generation.enable=True synthesizer.function_call_generation.name=function_call_gpt_4o
```
Override languages dynamically:
```bash
python -m apps.main synthesizer.query_generation.languages=[English,Spanish]
```
⸻
## 📂 Outputs
* Generated datasets are written to `data/<name>/`
* Each run produces:
  * `train.jsonl`
  * `output.csv`
  * `output.parquet`
* The `llama_factory` step creates a LlamaFactory-compatible `train.jsonl`
⸻
## 🧪 Testing
Run the test suite:
```bash
pytest -q
```
⸻
## 📝 Configuration Highlights
Configuration file: `examples/conf/synthesizer/default.yaml`
* mcp_servers — MCP server(s) to query for available tools
* choose_part_tools — filter toolset to a subset
* query_generation — generate seed queries from function docs
* function_call_generation — generate function-call pairs from queries
* critic — optional scoring/critique step
* llama_factory — export to LlamaFactory-compatible dataset
* verl - export to verl-compatible dataset
See docs for full field descriptions.
### Default pipeline walk-through
The provided `examples/conf/synthesizer/default.yaml` wires every stage together:
- **MCP bootstrap**: points to a local `ugreen_mcp` server on `http://localhost:8000/mcp`; leave it running before launching the synth job or queries will fail.
- **Tool filtering**: `choose_part_tools: false` keeps the full toolset; set it to a list (e.g. `["search_photos"]`) to restrict generations to specific tools.
- **Query generation**: reads `examples/function_docs.json`, emits multilingual prompts (English/Chinese/Japanese/German) under `data/function_query` via parallel OpenAI + Google model pools, each with generous TPM throttles for high-throughput runs.
- **Function-call synthesis**: consumes the query dataset, calls `gpt-4o` through the OpenAI backend, and writes `data/function_call_gpt_4o/*.jsonl` (set `max_num` to limit volume or switch `output_format`).
- **Critic pass**: re-scores every call with `gpt-5-mini-2025-08-07`, expecting `query/prompt/function_call/functions/answer` fields and emitting a scored dataset named `function_call_gpt_4o_critiqued_by_gpt_5_mini_2025_08_07`.
- **Downstream exports**: both `llama_factory` and `verl` blocks draw from the critic output, keep only rows with `score >= 8`, and materialize ready-to-train JSONL files plus optional train/val splits.
Feel free to copy the default file, tweak model lists or directories, and pass it via `python -m apps.main synthesizer=@your_config.yaml` for customized runs. For custom configurations, use `examples/conf/synthesizer/default.yaml` as a reference.
⸻
## 🐚 Parallel Runner
Helper script: `bin/run_pipeline.sh`
* Launch multiple synthesizer runs in parallel
* Requires .venv virtual environment
* Example usage:
```bash
chmod +x bin/run_pipeline.sh
bin/run_pipeline.sh default other
```
* Logs are printed to console; returns non-zero if any run fails
* Can also run manually using:
```bash
python -m apps.main synthesizer=default &
python -m apps.main synthesizer=other &
wait
```
⸻
## Contributing
Contributions are welcome! Please refer to [CONTRIBUTING.md](CONTRIBUTING.md) for details.
## License
MIT License. See [LICENSE](LICENSE) for details.
## Links
- [Documentation](https://openllm-func-call-synthesizer.readthedocs.io)
- [PyPI](https://pypi.org/project/openllm-func-call-synthesizer/)
- [GitHub](https://github.com/diqiuzhuanzhuan/openllm-func-call-synthesizer)
⸻
## 🌟 Star History
[](https://www.star-history.com/#diqiuzhuanzhuan/openllm-func-call-synthesizer&Date)
| text/markdown | null | Loong Ma <diqiuzhuanzhuan@gmail.com> | null | Loong Ma <diqiuzhuanzhuan@gmail.com> | MIT | null | [] | [] | null | null | >=3.12 | [] | [] | [] | [
"bespokelabs-curator>=0.1.26",
"datasets>=3.6.0",
"deprecated>=1.2.18",
"fastmcp>=2.13.0.1",
"hydra-core>=1.3.2",
"ipykernel>=7.1.0",
"litellm==1.81.11",
"mcp>=1.19.0",
"ollama>=0.6.1",
"openpyxl>=3.1.5",
"pytest>=8.4.1",
"rich>=13.9.4",
"scikit-learn",
"tenacity>=9.1.2",
"typer",
"cov... | [] | [] | [] | [
"bugs, https://github.com/diqiuzhuanzhuan/openllm-func-call-synthesizer/issues",
"changelog, https://github.com/diqiuzhuanzhuan/openllm-func-call-synthesizer/blob/master/changelog.md",
"homepage, https://github.com/diqiuzhuanzhuan/openllm-func-call-synthesizer"
] | uv/0.9.18 {"installer":{"name":"uv","version":"0.9.18","subcommand":["publish"]},"python":null,"implementation":{"name":null,"version":null},"distro":{"name":"Ubuntu","version":"22.04","id":"jammy","libc":null},"system":{"name":null,"release":null},"cpu":null,"openssl_version":null,"setuptools_version":null,"rustc_version":null,"ci":null} | 2026-02-19T11:21:20.601962 | openllm_func_call_synthesizer-0.1.2-py3-none-any.whl | 31,800 | 8d/eb/53004662e324e03cf6ffcc34cc9aa00cde5408f1fff5138793e3ae0986fb/openllm_func_call_synthesizer-0.1.2-py3-none-any.whl | py3 | bdist_wheel | null | false | 698fc245efcf2ebabc31bde848db513e | c8e710d5dcb0e620bc56696efcf21d7785579830071bbf74bb3634538a61041b | 8deb53004662e324e03cf6ffcc34cc9aa00cde5408f1fff5138793e3ae0986fb | null | [
"LICENSE"
] | 240 |
2.4 | uagents-core | 0.4.2 | Core components for agent based systems | # uAgents-Core
Core definitions and functionalities for building agents that interact with the Fetch.ai ecosystem and Agentverse marketplace.
## Installation
```bash
pip install uagents-core
```
## Quick Start
### Register a Chat Agent with Agentverse
```python
from uagents_core.utils.registration import (
    AgentverseRequestError,
    RegistrationRequestCredentials,
    register_chat_agent,
)

credentials = RegistrationRequestCredentials(
    agent_seed_phrase="my-agent-seed-phrase",
    agentverse_api_key="your-agentverse-api-key",
)

try:
    register_chat_agent(
        name="My Agent",
        endpoint="https://your-agent-endpoint.com/webhook",
        active=True,
        credentials=credentials,
        readme="# My Agent\nHandles customer questions.",
        metadata={
            "categories": ["support"],
            "is_public": "True",
        },
    )
    print("Agent registered successfully!")
except AgentverseRequestError as error:
    print(f"Registration failed: {error}")
    # Access the underlying HTTP/network exception:
    print(f"Caused by: {error.from_exc}")
```
## Key Features
### Permanent Registration
Registrations via the v2 API are **permanent** - no need for periodic refresh:
- Agentverse handles Almanac synchronization automatically
- No 48-hour expiration like v1
- Register once when agent is created
### Agent Identity
Create and manage agent identities:
```python
from uagents_core.identity import Identity
# Create from seed (deterministic)
identity = Identity.from_seed("my-seed-phrase", 0)
# Get agent address
print(identity.address) # agent1q...
# Sign messages
signature = identity.sign(b"message")
```
### Configuration
```python
from uagents_core.config import AgentverseConfig
config = AgentverseConfig()
print(config.agents_api) # https://agentverse.ai/v2/agents
print(config.identity_api) # https://agentverse.ai/v2/identity
print(config.almanac_api) # https://agentverse.ai/v1/almanac
```
## Available Functions
### Registration
| Function | Error Behavior | Purpose |
|----------|---------------|---------|
| `register_chat_agent()` | **Raises** `AgentverseRequestError` | Register a chat agent (recommended) |
| `register_agent()` | **Raises** `AgentverseRequestError` | Register with custom protocols |
| `register_in_agentverse()` | Returns `False` on failure | Low-level registration (error-safe) |
| `update_agent_status()` | Returns `False` on failure | Update agent active/inactive status |
| `register_batch_in_agentverse()` | Returns `False` on failure | Batch registration (deprecated) |
> **Important:** `register_chat_agent()` and `register_agent()` raise `AgentverseRequestError` on failure. Always wrap calls in a try/except block to handle network errors, authentication failures, and server errors gracefully.
### Error Handling
All HTTP and network errors are wrapped in `AgentverseRequestError`, which provides:
- A human-readable error message (e.g., `"HTTP error: 401 Unauthorized"`)
- The original exception via the `from_exc` attribute for inspection
```python
from uagents_core.utils.registration import AgentverseRequestError
try:
register_chat_agent(...)
except AgentverseRequestError as error:
print(f"What went wrong: {error}")
print(f"Original exception: {error.from_exc}")
# Common failure patterns:
# - "Connection error ..." → Network/DNS issue
# - "Operation timed out." → Request exceeded 10s timeout
# - "HTTP error: 401 ..." → Invalid or expired API key
# - "HTTP error: 409 ..." → Agent address already claimed
# - "Unexpected server error." → HTTP 500, retry after delay
```
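For the helpers in the table above that return `False` instead of raising (for example `update_agent_status()`), the call pattern looks like the sketch below. This assumes the function is exported from the same `uagents_core.utils.registration` module as the other helpers, and the omitted arguments are placeholders; consult the package reference for the exact signature.
```python
# Sketch of the "returns False on failure" style, in contrast to the
# exception-raising helpers above. Arguments are omitted placeholders.
from uagents_core.utils.registration import update_agent_status

ok = update_agent_status(...)  # no exception on failure, just a boolean
if not ok:
    print("Status update failed; check credentials, agent address, and endpoint")
```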
### Models
| Model | Purpose |
|-------|---------|
| `RegistrationRequestCredentials` | API key and agent seed phrase |
| `AgentverseRegistrationRequest` | Full agent registration data |
| `AgentverseRequestError` | Registration failure exception |
| `RegistrationRequest` | Agent registration data (internal) |
| `AgentProfile` | Agent profile (description, readme, avatar) |
| `AgentverseConnectRequest` | Connection credentials (internal) |
| `Identity` | Agent identity and signing |
## Upgrading
See [UPGRADING.md](../docs/UPGRADING.md) for migration guides between versions.
## Related Packages
- **[uagents](https://pypi.org/project/uagents/)** - Full agent framework with decorators and runtime
- **[uagents-adapter](../uagents-adapter/)** - Adapters for LangChain, CrewAI, MCP
## License
Apache 2.0 - See [LICENSE](../../LICENSE) for details.
| text/markdown | Ed FitzGerald | edward.fitzgerald@fetch.ai | null | null | Apache 2.0 | null | [
"License :: Other/Proprietary License",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: 3.13",
"Programming Language :: Python :: 3.14"
] | [] | null | null | <4.0,>=3.10 | [] | [] | [] | [
"bech32<2.0,>=1.2.0",
"ecdsa<1.0,>=0.19.0",
"pydantic<3.0,>=2.8",
"requests<3.0,>=2.32.3"
] | [] | [] | [] | [
"Documentation, https://fetch.ai/docs",
"Homepage, https://fetch.ai",
"Repository, https://github.com/fetchai/uAgents"
] | poetry/2.3.2 CPython/3.11.14 Linux/6.14.0-1017-azure | 2026-02-19T11:20:50.840201 | uagents_core-0.4.2-py3-none-any.whl | 30,742 | 6a/07/cb31c239691b75ace9441e7ecb6621fb3f76d610ec3685e52b8b9e5c40e2/uagents_core-0.4.2-py3-none-any.whl | py3 | bdist_wheel | null | false | 1b859bac6871b238816ae5e62ec697fe | 5cbeca4ee80fe4b166988f1ed8fef4f67e6a62ea6b1a91eeb8875355f8394e51 | 6a07cb31c239691b75ace9441e7ecb6621fb3f76d610ec3685e52b8b9e5c40e2 | null | [] | 502 |
2.4 | occystrap | 0.4.8 | occystrap: docker and OCI container tools | # Occy Strap
Occy Strap is a simple set of Docker and OCI container tools, which can be used
either for container forensics or for implementing an OCI orchestrator,
depending on your needs. This is a very early implementation, so be braced for
impact.
## Quick Start with URI-Style Commands
The recommended way to use Occy Strap is with the new URI-style `process` and
`search` commands:
```
# Download from registry to tarball
occystrap process registry://docker.io/library/busybox:latest tar://busybox.tar
# Download from registry to directory
occystrap process registry://docker.io/library/centos:7 dir://centos7
# Export from local Docker to tarball with timestamp normalization
occystrap process docker://myimage:v1 tar://output.tar -f normalize-timestamps
# Search for files in an image
occystrap search registry://docker.io/library/busybox:latest "bin/*sh"
```
## The `process` Command
The `process` command takes a source URI, a destination URI, and optional
filters:
```
occystrap process SOURCE DESTINATION [-f FILTER]...
```
### Input URI Schemes
- `registry://HOST/IMAGE:TAG` - Docker/OCI registry
- `docker://IMAGE:TAG` - Local Docker daemon
- `dockerpush://IMAGE:TAG` - Local Docker via push (fast, see below)
- `tar:///path/to/file.tar` - Docker-save format tarball
### Output URI Schemes
- `tar:///path/to/output.tar` - Create tarball
- `dir:///path/to/directory` - Extract to directory
- `oci:///path/to/bundle` - Create OCI runtime bundle
- `mounts:///path/to/directory` - Create overlay mounts
- `docker://IMAGE:TAG` - Load into local Docker daemon
- `registry://HOST/IMAGE:TAG` - Push to Docker/OCI registry
### URI Options
Options can be passed as query parameters:
```
# Extract with unique names and expansion
occystrap process registry://docker.io/library/busybox:latest \
"dir://merged?unique_names=true&expand=true"
# Use custom Docker socket
occystrap process "docker://myimage:v1?socket=/run/podman/podman.sock" \
tar://output.tar
```
### Filters
Filters transform or inspect image elements as they pass through the pipeline:
```
# Normalize timestamps for reproducible builds
occystrap process registry://docker.io/library/busybox:latest \
tar://busybox.tar -f normalize-timestamps
# Normalize with custom timestamp
occystrap process registry://docker.io/library/busybox:latest \
tar://busybox.tar -f "normalize-timestamps:ts=1609459200"
# Search while creating output (prints matches AND creates tarball)
occystrap process registry://docker.io/library/busybox:latest \
tar://busybox.tar -f "search:pattern=*.conf"
# Chain multiple filters
occystrap process registry://docker.io/library/busybox:latest \
tar://busybox.tar -f normalize-timestamps -f "search:pattern=bin/*"
# Record layer metadata to a JSONL file (inspect filter)
occystrap process docker://myimage:v1 registry://myregistry/myimage:v1 \
-f "inspect:file=layers-before.jsonl" \
-f normalize-timestamps \
-f "inspect:file=layers-after.jsonl"
# Exclude files matching glob patterns from layers
occystrap process registry://docker.io/library/python:3.11 \
tar://python.tar -f "exclude:pattern=**/.git/**"
# Exclude multiple patterns (comma-separated)
occystrap process registry://docker.io/library/python:3.11 \
tar://python.tar -f "exclude:pattern=**/.git/**,**/__pycache__/**,**/*.pyc"
# Load image directly into local Docker daemon
occystrap process registry://docker.io/library/busybox:latest \
docker://busybox:latest
# Load into Podman
occystrap process registry://docker.io/library/busybox:latest \
"docker://busybox:latest?socket=/run/podman/podman.sock"
# Push image to a registry
occystrap process docker://myimage:v1 \
registry://myregistry.example.com/myuser/myimage:v1
# Push to registry with authentication
occystrap --username myuser --password mytoken \
process tar://image.tar registry://ghcr.io/myorg/myimage:latest
# Push with zstd compression (better ratio, requires Docker 20.10+/containerd 1.5+)
occystrap --compression=zstd \
process docker://myimage:v1 registry://myregistry.example.com/myimage:v1
```
## The `search` Command
Search for files in container image layers:
```
occystrap search SOURCE PATTERN [--regex] [--script-friendly]
```
Examples:
```
# Search registry image
occystrap search registry://docker.io/library/busybox:latest "bin/*sh"
# Search local Docker image
occystrap search docker://myimage:v1 "*.conf"
# Search tarball with regex
occystrap search --regex tar://image.tar ".*\.py$"
# Machine-parseable output
occystrap search --script-friendly registry://docker.io/library/busybox:latest "*sh"
```
## The `info` Command
Display information about a container image without downloading layers:
```
occystrap info SOURCE
```
The output format is controlled by the global `-O` / `--output-format`
option:
```
# Human-readable text output (default)
occystrap info registry://docker.io/library/busybox:latest
# JSON output for scripting
occystrap -O json info registry://docker.io/library/busybox:latest
# From local Docker daemon
occystrap info docker://myimage:v1
# From tarball
occystrap info tar://image.tar
```
Registry sources show full detail (compressed sizes, media types,
compression format). Docker and tarball sources show config-derived
info (architecture, OS, diff_ids, history, labels, env, etc.).
## The `check` Command
Check validity of a container image. Validates structural integrity,
history consistency, compression compatibility, and filesystem
correctness:
```
occystrap check SOURCE [--fast]
```
Use `--fast` to skip layer downloads and only check metadata consistency
(manifest and config). The exit code is non-zero if any errors are
found, making it suitable for CI integration.
```
# Full check (downloads and verifies all layers)
occystrap check registry://docker.io/library/busybox:latest
# Fast metadata-only check
occystrap check --fast docker://myimage:v1
# JSON output for CI scripting
occystrap -O json check tar://image.tar
# Validate output of a process pipeline
occystrap process docker://myimage:v1 tar://output.tar
occystrap check tar://output.tar
```
## Legacy Commands (Deprecated)
The following commands are deprecated but still work for backwards
compatibility. They will be removed in a future version.
### Downloading an image from a repository and storing as a tarball
Let's say we want to download an image from a repository and store it as a
local tarball. This is a common need in airgapped environments, for example.
With Docker you would do this with `docker pull` followed by `docker save`.
The Occy Strap equivalent is:
```
occystrap fetch-to-tarfile registry-1.docker.io library/busybox latest busybox.tar
```
**New equivalent:**
```
occystrap process registry://registry-1.docker.io/library/busybox:latest tar://busybox.tar
```
In this example we're pulling from the Docker Hub (registry-1.docker.io), and
are downloading busybox's latest version into a tarball named `busybox.tar`.
This tarball can be loaded with `docker load -i busybox.tar` on an airgapped
Docker environment.
### Repeatable builds with normalized timestamps
To make builds more repeatable, you can normalize file access and modification
times in the image layers. This is useful when you want to ensure that the
same image content produces the same tarball hash, regardless of when the
files were originally created:
```
occystrap fetch-to-tarfile --normalize-timestamps registry-1.docker.io library/busybox latest busybox.tar
```
**New equivalent:**
```
occystrap process registry://registry-1.docker.io/library/busybox:latest tar://busybox.tar -f normalize-timestamps
```
This will set all timestamps in the layer tarballs to 0 (Unix epoch: January
1, 1970). You can also specify a custom timestamp:
```
occystrap fetch-to-tarfile --normalize-timestamps --timestamp 1609459200 registry-1.docker.io library/busybox latest busybox.tar
```
**New equivalent:**
```
occystrap process registry://registry-1.docker.io/library/busybox:latest tar://busybox.tar -f "normalize-timestamps:ts=1609459200"
```
When timestamps are normalized, the layer SHAs are recalculated and the
manifest is updated to reflect the new hashes. This ensures the tarball
structure remains consistent and valid.
### Downloading an image from a repository and storing as an extracted tarball
The format of the tarball in the previous example is two JSON configuration
files and a series of image layers as tarballs inside the main tarball. You
can write these elements to a directory instead of to a tarball if you'd like
to inspect them:
```
occystrap fetch-to-extracted registry-1.docker.io library/centos 7 centos7
```
**New equivalent:**
```
occystrap process registry://registry-1.docker.io/library/centos:7 dir://centos7
```
### Downloading an image to a merged directory
In scenarios where image layers are likely to be reused between images, you
can save disk space by downloading images to a directory which contains more
than one image:
```
occystrap fetch-to-extracted --use-unique-names registry-1.docker.io \
homeassistant/home-assistant latest merged_images
```
**New equivalent:**
```
occystrap process registry://registry-1.docker.io/homeassistant/home-assistant:latest \
"dir://merged_images?unique_names=true"
```
### Storing an image tarfile in a merged directory
Sometimes you have image tarfiles instead of images in a registry:
```
occystrap tarfile-to-extracted --use-unique-names file.tar merged_images
```
**New equivalent:**
```
occystrap process tar://file.tar "dir://merged_images?unique_names=true"
```
### Exploring the contents of layers and overwritten files
If you'd like the layers to be expanded from their tarballs to the filesystem:
```
occystrap fetch-to-extracted --expand quay.io \
ukhomeofficedigital/centos-base latest ukhomeoffice-centos
```
**New equivalent:**
```
occystrap process registry://quay.io/ukhomeofficedigital/centos-base:latest \
"dir://ukhomeoffice-centos?expand=true"
```
### Generating an OCI runtime bundle
```
occystrap fetch-to-oci registry-1.docker.io library/hello-world latest bar
```
**New equivalent:**
```
occystrap process registry://registry-1.docker.io/library/hello-world:latest oci://bar
```
### Searching image layers for files
```
occystrap search-layers registry-1.docker.io library/busybox latest "bin/*sh"
```
**New equivalent:**
```
occystrap search registry://registry-1.docker.io/library/busybox:latest "bin/*sh"
```
### Working with local Docker or Podman daemon
```
occystrap docker-to-tarfile library/busybox latest busybox.tar
```
**New equivalent:**
```
occystrap process docker://library/busybox:latest tar://busybox.tar
```
For faster local Docker image processing, use the `dockerpush://` input:
```
occystrap process dockerpush://library/busybox:latest tar://busybox.tar
```
The `dockerpush://` input starts an embedded Docker Registry V2 server on
localhost and has Docker push the image to it. This is significantly faster
than `docker://` for multi-layer images because Docker's push mechanism
transfers layers individually and in parallel, whereas the Docker Engine API
(`docker://`) exports the entire image as a single sequential tarball.
Since Docker 1.3.2, the entire `127.0.0.0/8` range is implicitly treated as an
insecure registry, so no daemon.json changes or TLS certificates are needed.
For Podman:
```
occystrap process "docker://myimage:latest?socket=/run/podman/podman.sock" tar://output.tar
```
Note: Podman doesn't run a daemon by default. You need to start the socket
service first:
```
# For rootless Podman
systemctl --user start podman.socket
# For rootful Podman
sudo systemctl start podman.socket
```
## Authenticating with private registries
To fetch images from private registries (such as GitLab Container Registry,
AWS ECR, or private Docker Hub repositories), use the `--username` and
`--password` global options:
```
occystrap --username myuser --password mytoken \
process registry://registry.gitlab.com/mygroup/myimage:latest tar://output.tar
```
You can also use environment variables to avoid putting credentials on the
command line:
```
export OCCYSTRAP_USERNAME=myuser
export OCCYSTRAP_PASSWORD=mytoken
occystrap process registry://registry.gitlab.com/mygroup/myimage:latest tar://output.tar
```
For GitLab Container Registry, the username is typically your GitLab username
and the password is a personal access token with `read_registry` scope.
## Parallel Downloads and Uploads
When working with registries, occystrap downloads and uploads layers in parallel
for improved performance. By default, 4 threads are used:
```
# Default: 4 parallel operations
occystrap process registry://docker.io/library/busybox:latest tar://busybox.tar
# Use 8 parallel threads
occystrap -j 8 process registry://docker.io/library/busybox:latest tar://busybox.tar
# Sequential operations (1 thread)
occystrap --parallel 1 process docker://myimage:v1 registry://myregistry/myimage:v1
```
You can also set the parallelism via environment variable:
```
export OCCYSTRAP_PARALLEL=8
occystrap process registry://docker.io/library/busybox:latest tar://busybox.tar
```
Or via URI query parameter:
```
occystrap process docker://myimage:v1 "registry://myregistry/myimage:v1?max_workers=8"
```
## Layer Compression
When pushing images to registries, occystrap supports both gzip (default) and
zstd compression for image layers:
```
# Use gzip (default, maximum compatibility)
occystrap process docker://myimage:v1 registry://myregistry/myimage:v1
# Use zstd for better compression ratio and speed
occystrap --compression=zstd process docker://myimage:v1 registry://myregistry/myimage:v1
```
You can also set the compression via environment variable:
```
export OCCYSTRAP_COMPRESSION=zstd
occystrap process docker://myimage:v1 registry://myregistry/myimage:v1
```
Or via URI query parameter:
```
occystrap process docker://myimage:v1 "registry://myregistry/myimage:v1?compression=zstd"
```
**Compatibility notes:**
- **gzip** (default): Works with all Docker/container runtimes
- **zstd**: Requires Docker 20.10+ or containerd 1.5+ on the pulling client;
offers ~30% better compression ratio and faster compression
When pulling images, occystrap automatically detects and handles both gzip and
zstd compressed layers from registries or OCI tarballs.
## Cross-Invocation Layer Cache
When pushing multiple images that share base layers (common in CI), occystrap
can cache layer processing results across invocations. This avoids re-fetching,
re-filtering, re-compressing, and re-uploading layers that have already been
processed:
```
# First push: processes all layers
occystrap --layer-cache /tmp/layer-cache.json \
process docker://myimage1:v1 registry://myregistry/myimage1:v1
# Second push: skips shared base layers
occystrap --layer-cache /tmp/layer-cache.json \
process docker://myimage2:v1 registry://myregistry/myimage2:v1
```
You can also set the cache path via environment variable:
```
export OCCYSTRAP_LAYER_CACHE=/tmp/layer-cache.json
occystrap process docker://myimage:v1 registry://myregistry/myimage:v1
```
The cache records the mapping from input layer DiffIDs to compressed output
digests. On subsequent runs, if a cached layer's compressed blob still exists
in the target registry, the layer is skipped entirely (no fetch, no filter,
no compress, no upload). The cache is filter-aware: layers processed with
different filter configurations get separate cache entries.
## Verbosity and Debugging
By default, occystrap logs only milestones (start/end, summary
statistics, layer counts) at INFO level. Per-layer and per-request
detail is logged at DEBUG level.
```
# Enable debug logging for occystrap modules only
occystrap --verbose process docker://myimage:v1 tar://output.tar
# Enable debug logging for all modules (includes library output)
occystrap --debug process docker://myimage:v1 tar://output.tar
```
When running in a terminal, registry downloads and uploads display
interactive tqdm progress bars. In non-TTY environments (CI, pipes),
periodic log messages are emitted instead.
## Supporting non-default architectures
Docker image repositories can store multiple versions of a single image, with
each image corresponding to a different (operating system, cpu architecture,
cpu variant) tuple. Occy Strap supports letting you specify which to use with
global command line flags. Occy Strap defaults to linux amd64 if you don't
specify something different:
```
occystrap --os linux --architecture arm64 --variant v8 \
process registry://registry-1.docker.io/library/busybox:latest dir://busybox
```
Or via URI query parameters:
```
occystrap process "registry://registry-1.docker.io/library/busybox:latest?os=linux&arch=arm64&variant=v8" \
dir://busybox
```
## Development
### Install for Development
```
pip install -e ".[test]"
```
### Pre-commit Hooks
This project uses pre-commit hooks to validate code before commits. Install them
with:
```
pip install pre-commit
pre-commit install
```
The hooks run:
- `actionlint` - GitHub Actions workflow validation
- `shellcheck` - Shell script linting
- `check-log-levels` - Enforces max LOG.info() calls per file
- `tox -eflake8` - Python code style checks
- `tox -epy3` - Unit tests
To run the hooks manually:
```
pre-commit run --all-files
```
### Running Tests
Unit tests are in `occystrap/tests/` and can be run with:
```
tox -epy3
```
Functional tests are in `deploy/occystrap_ci/tests/` and are run in CI.
### Releasing
Releases are automated via GitHub Actions. Push a version tag to trigger the
pipeline:
```
git tag -s v0.5.0 -m "Release v0.5.0"
git push origin v0.5.0
```
The workflow builds the package, signs the tag with Sigstore, publishes to
PyPI, and creates a GitHub Release. See [RELEASE-SETUP.md](RELEASE-SETUP.md)
for one-time configuration steps.
## Developer Automation
This project supports automated CI helpers via PR comments. To use these
commands, comment on a pull request with one of the following:
- `@shakenfist-bot please retest` - Re-run the functional test suite
- `@shakenfist-bot please attempt to fix` - Have Claude Code attempt to fix
test failures
- `@shakenfist-bot please re-review` - Request another automated code review
- `@shakenfist-bot please address comments` - Have Claude Code address the
automated review comments
These commands are only available to repository collaborators with write access.
### Claude Code Skills
The `.claude/skills/` directory contains guidance for AI agents working on
this codebase, covering documentation updates, testing discipline, and PR
preparation.
## Documentation
For more detailed documentation, see the [docs/](docs/) directory:
- [Installation](docs/installation.md) - Getting started guide
- [Command Reference](docs/command-reference.md) - Complete CLI reference
- [Pipeline Architecture](docs/pipeline.md) - How the pipeline works
- [Use Cases](docs/use-cases.md) - Common scenarios and examples
- [Docker Tarball Formats](docs/docker-tarball-formats.md) - Docker save
tarball format reference, entry ordering, and the Docker Engine inspect API
| text/markdown | null | Michael Still <mikal@stillhq.com> | null | null | Apache-2.0 | null | [
"Intended Audience :: Information Technology",
"Intended Audience :: System Administrators",
"Operating System :: POSIX :: Linux",
"Programming Language :: Python",
"Programming Language :: Python :: 3"
] | [] | null | null | >=3.7 | [] | [] | [] | [
"click>=7.1.1",
"requests",
"requests-unixsocket",
"prettytable",
"oslo.concurrency",
"shakenfist-utilities",
"tqdm>=4.60",
"zstandard>=0.21.0",
"coverage; extra == \"test\"",
"testtools<2.8.3; extra == \"test\"",
"mock; extra == \"test\"",
"stestr; extra == \"test\"",
"flake8; extra == \"te... | [] | [] | [] | [
"Homepage, https://github.com/shakenfist/occystrap",
"Bug Tracker, https://github.com/shakenfist/occystrap/issues"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-19T11:20:45.875484 | occystrap-0.4.8.tar.gz | 224,249 | 51/d6/4fbfb78af0b0483261781aa196c98ee0db0bc1a72be7cea78e71a69d6e0f/occystrap-0.4.8.tar.gz | source | sdist | null | false | 2d879db72cfcc29d651cff1dd072909b | 665c698596806b820776f5d69f712144a120ff997f5eceed973c6ab92803c9b4 | 51d64fbfb78af0b0483261781aa196c98ee0db0bc1a72be7cea78e71a69d6e0f | null | [
"LICENSE",
"AUTHORS"
] | 346 |
2.4 | zen-ai-pentest | 3.0.0 | Advanced AI-Powered Penetration Testing Framework with Multi-Agent Orchestration | # Zen-AI-Pentest

> 🛡️ **Professional AI-Powered Penetration Testing Framework**
[](https://python.org)
- **Guest Control**: Execute tools inside isolated VMs
### 🚀 Modern API & Backend
- **FastAPI**: High-performance REST API
- **PostgreSQL**: Persistent data storage
- **WebSocket**: Real-time scan updates
- **JWT Auth**: Role-based access control (RBAC)
- **Background Tasks**: Async scan execution
### 📊 Reporting & Notifications
- **PDF Reports**: Professional findings reports
- **HTML Dashboard**: Interactive web interface
- **Slack/Email**: Instant notifications
- **JSON/XML**: Integration with other tools
### 🐳 Easy Deployment
- **Docker Compose**: One-command full stack deployment
- **CI/CD**: GitHub Actions pipeline
- **Production Ready**: Optimized for enterprise use
---
## 🎯 Real Data Execution - No Mocks!
Zen-AI-Pentest executes **real security tools** - no simulations, no mocks, only actual tool execution:
- ✅ **Nmap** - Real port scanning with XML output parsing
- ✅ **Nuclei** - Real vulnerability detection with JSON output
- ✅ **SQLMap** - Real SQL injection testing with safety controls
- ✅ **FFuF** - Blazing fast web fuzzer
- ✅ **WhatWeb** - Technology detection (900+ plugins)
- ✅ **WAFW00F** - WAF detection (50+ signatures)
- ✅ **Subfinder** - Subdomain enumeration
- ✅ **HTTPX** - Fast HTTP prober
- ✅ **Nikto** - Web vulnerability scanner
- ✅ **Multi-Agent** - Researcher & Analyst agents cooperate
- ✅ **Docker Sandbox** - Isolated tool execution for safety
📖 **Enhanced Tools:** [README_ENHANCED_TOOLS.md](README_ENHANCED_TOOLS.md)
All tools run with **safety controls**:
- Private IP blocking (protects internal networks)
- Timeout management (prevents hanging)
- Resource limits (CPU/memory constraints)
- Read-only filesystems (Docker sandbox)
📖 **Details:** [IMPLEMENTATION_SUMMARY.md](IMPLEMENTATION_SUMMARY.md)
---
## 🚀 Quick Start
[](https://github.com/SHAdd0WTAka/zen-ai-pentest/releases)
[](https://python.org)
[](LICENSE)
[](https://github.com/SHAdd0WTAka/zen-ai-pentest/commits/main)
[](./docs/status/repo_status_card.svg)
[](https://pypi.org/project/zen-ai-pentest/)
[](docker/)
[](tools/)
[](https://github.com/SHAdd0WTAka/Zen-Ai-Pentest/actions)
[](https://github.com/SHAdd0WTAka/Zen-Ai-Pentest/actions/workflows/security.yml)
[](https://codecov.io/gh/SHAdd0WTAka/zen-ai-pentest)
[](https://discord.gg/BSmCqjhY)
[](docs/)
[](ROADMAP_2026.md)
[](https://www.bestpractices.dev/de/projects/11957/passing)
[](https://github.com/marketplace/actions/zen-ai-pentest)
[](#-authors--team)
---
## 📚 Table of Contents
- [Overview](#-overview)
- [Features](#-features)
- [For AI Agents](#-for-ai-agents)
- [Quick Start](#-quick-start)
- [Installation](#-installation)
- [Usage](#-usage)
- [Architecture](#-architecture)
- [API Reference](#-api-reference)
- [Project Structure](#-project-structure)
- [Configuration](#-configuration)
- [Testing](#-testing)
- [Docker Deployment](#-docker-deployment)
- [Safety First](#-safety-first)
- [Documentation](#-documentation)
- [Contributing](#-contributing)
- [Community & Support](#-community--support)
- [License](#-license)
---
## 🎯 Overview
**Zen-AI-Pentest** is an autonomous, AI-powered penetration testing framework that combines cutting-edge language models with professional security tools. Built for security professionals, bug bounty hunters, and enterprise security teams.
```mermaid
graph TB
subgraph "Client Interface"
WebUI[🌐 Web UI]
CLI[💻 CLI]
API_Client[🔌 REST API]
end
subgraph "API Gateway"
FastAPI[FastAPI + WebSocket]
Auth[🔐 JWT/RBAC]
AgentMgr[🤖 Agent Manager]
end
subgraph "Workflow Orchestrator"
Guardrails[🛡️ Guardrails]
TaskQueue[📊 Task Queue]
RiskLevels[⚠️ Risk Levels 0-3]
VPN[🔒 VPN Check]
State[📈 State Machine]
end
subgraph "Agent Pool"
Agent1[🤖 Agent #1]
Agent2[🤖 Agent #2]
AgentN[🤖 Agent #N]
end
subgraph "Security Toolkit"
Nmap[🔍 nmap]
Whois[📡 whois]
Dig[🌐 dig]
Nuclei[⚡ nuclei]
SQLMap[🎯 sqlmap]
end
subgraph "Data Layer"
Postgres[🐘 PostgreSQL]
Redis[⚡ Redis Cache]
Storage[📁 File Storage]
end
WebUI --> FastAPI
CLI --> FastAPI
API_Client --> FastAPI
FastAPI --> Auth
Auth --> AgentMgr
AgentMgr --> Guardrails
Guardrails --> TaskQueue
TaskQueue --> RiskLevels
RiskLevels --> VPN
VPN --> State
State --> Agent1
State --> Agent2
State --> AgentN
Agent1 --> Nmap
Agent1 --> Whois
Agent2 --> Dig
Agent2 --> Nuclei
AgentN --> SQLMap
Nmap --> Postgres
Whois --> Redis
SQLMap --> Storage
```
### Key Highlights
- 🤖 **AI-Powered**: Leverages state-of-the-art LLMs for intelligent decision making
- 🔒 **Security-First**: Multiple safety controls and validation layers
- 🚀 **Production-Ready**: Enterprise-grade with CI/CD, monitoring, and support
- 📊 **Comprehensive**: 40+ integrated security tools
- 🔧 **Extensible**: Plugin system for custom tools and integrations
- ☁️ **Cloud-Native**: Deploy on AWS, Azure, or GCP
- 📱 **Quick Access**: Scan QR codes for instant mobile access
<p align="center">
<a href="docs/qr_codes/index.html">
<img src="docs/qr_codes/qr_grid_preview.png" alt="QR Codes" width="600">
</a>
<br>
<sub>☝️ Click to view all QR codes or scan with your phone!</sub>
</p>
---
## ✨ Features
### 🤖 Autonomous AI Agent
- **ReAct Pattern**: Reason → Act → Observe → Reflect
- **State Machine**: IDLE → PLANNING → EXECUTING → OBSERVING → REFLECTING → COMPLETED
- **Memory System**: Short-term, long-term, and context window management
- **Tool Orchestration**: Automatic selection and execution of 20+ pentesting tools
- **Self-Correction**: Retry logic and adaptive planning
- **Human-in-the-Loop**: Optional pause for critical decisions
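As a rough illustration of the loop shape described above, a generic ReAct-style state machine can be sketched as follows. The state names come from the list above; everything else (function names, the stopping criterion) is hypothetical and does not reflect the project's actual implementation.
```python
# Generic sketch of a ReAct-style loop over the states listed above.
# Purely illustrative; it does not reflect zen-ai-pentest internals.
from enum import Enum, auto


class AgentState(Enum):
    IDLE = auto()
    PLANNING = auto()
    EXECUTING = auto()
    OBSERVING = auto()
    REFLECTING = auto()
    COMPLETED = auto()


def react_loop(objective: str, max_iterations: int = 10) -> list[str]:
    observations: list[str] = []
    state = AgentState.IDLE
    for step in range(max_iterations):
        state = AgentState.PLANNING       # Reason: decide on the next action
        action = f"step {step} towards: {objective}"
        state = AgentState.EXECUTING      # Act: run the chosen tool
        result = f"result of {action!r}"
        state = AgentState.OBSERVING      # Observe: record the tool output
        observations.append(result)
        state = AgentState.REFLECTING     # Reflect: decide whether to continue
        if len(observations) >= 3:        # placeholder stopping criterion
            break
    state = AgentState.COMPLETED
    return observations


print(react_loop("assess example.com"))
```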
### 🎯 Risk Engine
- **False Positive Reduction**: Multi-factor validation with Bayesian filtering
- **Business Impact**: Financial, compliance, and reputation risk calculation
- **CVSS/EPSS Scoring**: Industry-standard vulnerability assessment
- **Priority Ranking**: Automated finding prioritization
- **LLM Voting**: Multi-model consensus for accuracy
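To make the CVSS/EPSS-based prioritization concrete, here is a deliberately simplified scoring sketch. The weights, thresholds, and labels are invented for the example and are not the framework's actual risk formula.
```python
# Illustrative-only prioritization from CVSS (0-10) and EPSS (0-1).
# Weights and labels are invented for this sketch and are NOT the
# framework's real risk formula.
def priority_score(cvss: float, epss: float) -> float:
    """Blend severity (CVSS) with exploit likelihood (EPSS)."""
    return round(0.7 * cvss + 0.3 * (epss * 10), 2)


def priority_label(score: float) -> str:
    if score >= 8.0:
        return "critical"
    if score >= 5.0:
        return "high"
    if score >= 2.5:
        return "medium"
    return "low"


finding = {"cvss": 9.8, "epss": 0.92}
score = priority_score(finding["cvss"], finding["epss"])
print(score, priority_label(score))  # 9.62 critical
```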
### 🔒 Exploit Validation
- **Sandboxed Execution**: Docker-based isolated testing
- **Safety Controls**: 4-level safety system (Read-Only to Full)
- **Evidence Collection**: Screenshots, HTTP captures, PCAP
- **Chain of Custody**: Complete audit trail
- **Remediation**: Automatic fix recommendations
### 📊 Benchmarking
- **Competitor Comparison**: vs PentestGPT, AutoPentest, Manual
- **Test Scenarios**: HTB machines, OWASP WebGoat, DVWA
- **Metrics**: Time-to-find, coverage, false positive rate
- **Visual Reports**: Charts and statistical analysis
- **CI Integration**: Automated regression testing
### 🔗 CI/CD Integration
- **GitHub Actions**: Native action support
- **GitLab CI**: Pipeline integration
- **Jenkins**: Plugin and pipeline support
- **Output Formats**: JSON, JUnit XML, SARIF
- **Notifications**: Slack, JIRA, Email alerts
- **Exit Codes**: Pipeline-friendly status codes
### 🧠 AI Persona System
- **11 Specialized Personas**: Recon, Exploit, Report, Audit, Social, Network, Mobile, Red Team, ICS, Cloud, Crypto
- **CLI Tool**: Interactive and one-shot modes (`k-recon`, `k-exploit`, etc.)
- **REST API**: Flask-based API with WebSocket support
- **Web UI**: Modern browser interface with screenshot analysis
- **Context Preservation**: Multi-turn conversations with memory
- **Screenshot Analysis**: Upload and analyze images with AI personas
### 🛡️ Security Guardrails
- **IP Validation** - Blocks private networks (10.x, 192.168.x, 172.16-31.x)
- **Domain Filtering** - Prevents localhost/internal domain scanning
- **Risk Levels** - 4 levels (SAFE → AGGRESSIVE) with tool restrictions
- **Rate Limiting** - Prevents accidental DoS
### 🤖 Multi-Agent System
- **Workflow Orchestrator** - Manages complex pentest workflows
- **Task Distribution** - Assigns tasks to available agents
- **Real-time Updates** - WebSocket communication
- **Result Aggregation** - Collects and analyzes findings
### 🔒 VPN Integration (Optional)
- **ProtonVPN Support** - Native CLI integration
- **Generic Detection** - Works with OpenVPN, WireGuard, etc.
- **Safety Warnings** - Alerts when scanning without VPN
- **Strict Mode** - Can require VPN for scans
### 🐳 Docker Ready
- **One-Command Deploy** - `docker-compose up -d`
- **Isolated Environment** - All tools pre-installed
- **Scalable** - Run multiple agents
- **Production Ready** - Health checks & monitoring
### 🛠️ 40+ Integrated Tools
| Category | Tools |
|----------|-------|
| **Network** | Nmap, Masscan, Scapy, Tshark |
| **Web** | BurpSuite, SQLMap, Gobuster, OWASP ZAP |
| **Exploitation** | Metasploit Framework |
| **Brute Force** | Hydra, Hashcat |
| **Reconnaissance** | Amass, Nuclei, TheHarvester, Subdomain Scanner |
| **Active Directory** | BloodHound, CrackMapExec, Responder |
| **Wireless** | Aircrack-ng Suite |
### 🔍 Subdomain Scanner
- **Multi-Technique Enumeration**: DNS, Wordlist, Certificate Transparency
- **Advanced Techniques**: Zone Transfer (AXFR), Permutation/Mangling
- **OSINT Integration**: VirusTotal, AlienVault OTX, BufferOver
- **IPv6 Support**: AAAA record enumeration
- **Technology Detection**: Automatic fingerprinting of live hosts
- **Export Formats**: JSON, CSV, TXT
- **REST API**: Async and sync scanning endpoints
- **CLI Tools**: Standalone scanner with comprehensive options
### 🤖 For AI Agents
- **[AGENTS.md](AGENTS.md)** - Essential guide for AI development partners
- **Real Tool Execution** - No mocks, actual security tools
- **Multi-Agent System** - Researcher, Analyst, Exploit agents
- **Safety Controls** - 4-level sandbox system
- **Architecture Guide** - Complete system overview
### 🔔 Notifications & Integrations
- **Telegram Bot**: @Zenaipenbot - Instant CI/CD notifications
- **Discord Integration**: Automated channel updates & GitHub webhooks
- **Slack/Email**: Enterprise notification support
- **GitHub Actions**: Native workflow integration
- **QR Code Gallery**: Quick access to all resources
### ☁️ Multi-Cloud & Virtualization
- **Local**: VirtualBox VM Management
- **Cloud**: AWS EC2, Azure VMs, Google Cloud Compute
- **Snapshots**: Automated clean-state workflows
### Option 1: Docker (Recommended)
```bash
# Clone repository
git clone https://github.com/SHAdd0WTAka/zen-ai-pentest.git
cd zen-ai-pentest
# Copy and configure environment
cp .env.example .env
# Edit .env with your settings
# Start full stack
docker-compose up -d
# Access:
# Dashboard: http://localhost:3000
# API Docs: http://localhost:8000/docs
# API: http://localhost:8000
```
### Option 2: Local Installation
```bash
# Install dependencies
pip install -r requirements.txt
# Initialize database
python database/models.py
# Start API server
python api/main.py
# Run subdomain scan
python scan_target_subdomains.py
# Or use the advanced CLI
python tools/subdomain_enum.py example.com --advanced
```
### Option 3: AI Personas Quick Start
```bash
# Start the AI Personas API & Web UI
bash api/QUICKSTART.sh
# Or manually:
bash api/manage.sh start
# Open http://127.0.0.1:5000
# CLI Usage
source tools/setup_aliases.sh
k-recon "Target: example.com"
k-exploit "Write SQLi scanner"
k-chat # Interactive mode
```
### Option 4: VirtualBox VM Setup
```bash
# Automated Kali Linux setup
python scripts/setup_vms.py --kali
# Manual setup
# See docs/setup/VIRTUALBOX_SETUP.md
```
---
## 📖 Installation
For detailed installation instructions, see:
- **[Docker Installation](docs/INSTALLATION.md#quick-start-docker)**
- **[Local Installation](docs/INSTALLATION.md#local-installation)**
- **[Production Deployment](docs/INSTALLATION.md#production-deployment)**
- **[VirtualBox Setup](docs/setup/VIRTUALBOX_SETUP.md)**
---
## 💻 Usage
### Python API
```python
from agents.react_agent import ReActAgent, ReActAgentConfig
# Configure agent
config = ReActAgentConfig(
max_iterations=10,
use_vm=True,
vm_name="kali-pentest"
)
# Create agent
agent = ReActAgent(config)
# Run autonomous scan
result = agent.run(
target="example.com",
objective="Comprehensive security assessment"
)
# Generate report
print(agent.generate_report(result))
```
### REST API
```bash
# Authentication
curl -X POST http://localhost:8000/auth/login \
-H "Content-Type: application/json" \
-d '{"username":"admin","password":"admin"}'
# Create scan
curl -X POST http://localhost:8000/scans \
-H "Authorization: Bearer $TOKEN" \
-H "Content-Type: application/json" \
-d '{"name":"Network Scan","target":"192.168.1.0/24","scan_type":"network","config":{"ports":"top-1000"}}'
# Execute tool
curl -X POST http://localhost:8000/tools/execute \
-H "Authorization: Bearer $TOKEN" \
-d '{"tool_name":"nmap_scan","target":"scanme.nmap.org","parameters":{"ports":"22,80,443"}}'
# Generate report
curl -X POST http://localhost:8000/reports \
-H "Authorization: Bearer $TOKEN" \
-d '{"scan_id":1,"format":"pdf","template":"default"}'
```
### WebSocket (Real-Time)
```javascript
const ws = new WebSocket("ws://localhost:8000/ws/scans/1");
ws.onmessage = (event) => {
const data = JSON.parse(event.data);
console.log("Scan update:", data);
};
```
---
## 🏗️ System Architecture
```
┌─────────────────────────────────────────────────────────────────────┐
│ CLIENT INTERFACE │
├─────────────────────────────────────────────────────────────────────┤
│ ┌──────────────┐ ┌──────────────┐ ┌──────────────┐ │
│ │ 🌐 Web UI │ │ 💻 CLI │ │ 🔌 API │ │
│ │ (React) │ │ (Python) │ │ (REST) │ │
│ └──────┬───────┘ └──────┬───────┘ └──────┬───────┘ │
└─────────┼─────────────────┼─────────────────┼───────────────────────┘
│ │ │
└─────────────────┼─────────────────┘
│ HTTPS / JWT
▼
┌─────────────────────────────────────────────────────────────────────┐
│ API GATEWAY │
│ FastAPI + WebSocket │
│ ┌──────────────┐ ┌──────────────┐ ┌──────────────┐ │
│ │ 🔐 Auth │ │ 📋 Work- │ │ 🤖 Agent │ │
│ │ (JWT/RBAC) │ │ flow API │ │ Manager │ │
│ └──────────────┘ └──────────────┘ └──────────────┘ │
└─────────────────────────┬───────────────────────────────────────────┘
│
▼
┌─────────────────────────────────────────────────────────────────────┐
│ WORKFLOW ORCHESTRATOR │
├─────────────────────────────────────────────────────────────────────┤
│ ┌──────────────┐ ┌──────────────┐ ┌──────────────┐ │
│ │ 🛡️ │ │ 📊 Task │ │ ⚠️ Risk │ │
│ │ Guardrails │ │ Queue │ │ Levels │ │
│ │ (IP/Domain │ │ │ │ (0-3) │ │
│ │ Filter) │ │ │ │ │ │
│ └──────────────┘ └──────────────┘ └──────────────┘ │
│ ┌──────────────┐ ┌──────────────┐ ┌──────────────┐ │
│ │ 🔒 VPN │ │ 📈 State │ │ 📝 Report │ │
│ │ Check │ │ Machine │ │ Generator │ │
│ └──────────────┘ └──────────────┘ └──────────────┘ │
└─────────────────────────┬───────────────────────────────────────────┘
│ WebSocket + Task Distribution
▼
┌─────────────────────────────────────────────────────────────────────┐
│ AGENT POOL │
├─────────────────────────────────────────────────────────────────────┤
│ ┌──────────────┐ ┌──────────────┐ ┌──────────────┐ │
│ │ 🤖 Agent │ │ 🤖 Agent │ │ 🤖 Agent │ │
│ │ #1 │ │ #2 │ │ #N │ │
│ │ (Docker) │ │ (Docker) │ │ (Docker) │ │
│ └──────┬───────┘ └──────┬───────┘ └──────┬───────┘ │
└─────────┼─────────────────┼─────────────────┼───────────────────────┘
│ │ │
▼ ▼ ▼
┌─────────────────────────────────────────────────────────────────────┐
│ SECURITY TOOLKIT │
├─────────────────────────────────────────────────────────────────────┤
│ ┌──────────┐ ┌──────────┐ ┌──────────┐ ┌──────────┐ ┌──────────┐ │
│ │ 🔍 │ │ 📡 │ │ 🌐 │ │ ⚡ │ │ 🎯 │ │
│ │ nmap │ │ whois │ │ dig │ │ nuclei │ │ sqlmap │ │
│ │ │ │ │ │ │ │ │ │ │ │
│ └──────────┘ └──────────┘ └──────────┘ └──────────┘ └──────────┘ │
└─────────────────────────────────────────────────────────────────────┘
│
▼
┌─────────────────────────────────────────────────────────────────────┐
│ DATA LAYER │
├─────────────────────────────────────────────────────────────────────┤
│ ┌──────────────┐ ┌──────────────┐ ┌──────────────┐ │
│ │ 🐘 Postgre │ │ ⚡ Redis │ │ 📁 File │ │
│ │ SQL │ │ Cache │ │ Storage │ │
│ │ (State) │ │ (Queue) │ │ (Reports) │ │
│ └──────────────┘ └──────────────┘ └──────────────┘ │
└─────────────────────────────────────────────────────────────────────┘
```
For detailed architecture documentation, see [docs/ARCHITECTURE.md](docs/ARCHITECTURE.md).
---
## 📡 API Reference
- **[API Documentation](docs/API.md)** - Complete REST API reference
- **[WebSocket API](docs/API.md#websocket)** - Real-time updates
- **[Authentication](docs/API.md#authentication)** - Security and auth
---
## 📁 Project Structure
```
zen-ai-pentest/
├── api/ # FastAPI Backend (main.py, auth.py, websocket.py)
├── agents/ # AI Agents (react_agent.py, react_agent_vm.py)
├── autonomous/ # ReAct Loop (agent_loop.py, exploit_validator.py, memory.py)
├── tools/ # 40+ Security Tools
│ ├── Network: nmap, masscan, scapy, tshark
│ ├── Web: nuclei, sqlmap, nikto, zap, burpsuite, ffuf, gobuster
│ ├── Recon: subfinder, amass, httpx, whatweb, wafw00f, subdomain_scan, unified_recon
│ ├── AD: bloodhound, crackmapexec, responder
│ ├── OSINT: sherlock, scout, ignorant
│ ├── Secrets: trufflehog, trivy
│ ├── Wireless: aircrack
│ ├── Code: semgrep
│ ├── AI/Kimi: kimi_cli, kimi_helper, update_personas
│ └── Core: tool_caller, tool_registry
├── risk_engine/ # Risk Analysis (cvss.py, epss.py, false_positive_engine.py)
├── benchmarks/ # Performance Testing
├── integrations/ # CI/CD (github, gitlab, slack, jira, jenkins)
├── database/ # PostgreSQL Models
├── gui/ # React Dashboard
├── reports/ # PDF/HTML/JSON Generator
├── notifications/ # Alerts (slack, email)
├── docker/ # Deployment configs
├── docs/ # Documentation (ARCHITECTURE.md, INSTALLATION.md, API.md, setup/)
├── tests/ # Test Suite
└── scripts/ # Setup Scripts
```
---
## 🔧 Configuration
### Environment Variables
```env
# Database
DATABASE_URL=postgresql://postgres:password@localhost:5432/zen_pentest
# Security
SECRET_KEY=your-secret-key-here
JWT_EXPIRATION=3600
# AI Providers (Kimi AI recommended)
KIMI_API_KEY=your-kimi-api-key
DEFAULT_BACKEND=kimi
DEFAULT_MODEL=kimi-k2.5
# Alternative Backends (optional)
# OPENAI_API_KEY=sk-...
# ANTHROPIC_API_KEY=sk-ant-...
# OPENROUTER_API_KEY=...
# Notifications
SLACK_WEBHOOK_URL=https://hooks.slack.com/...
SMTP_HOST=smtp.gmail.com
# Cloud Providers
AWS_ACCESS_KEY_ID=AKIA...
AZURE_SUBSCRIPTION_ID=...
```
See `.env.example` for all options.
---
## 🧪 Testing
```bash
# Run all tests
pytest
# With coverage
pytest --cov=. --cov-report=html
# Specific test file
pytest tests/test_react_agent.py -v
# Integration tests
pytest tests/integration/ -v
```
---
## 🐳 Docker Deployment
### Quick Setup (WSL2 + Docker)
We recommend running Docker in WSL2 (Ubuntu) for the best performance:
**Option 1: Automated Setup**
```bash
# Windows: start the setup launcher
scripts\docker-setup.bat
# Or directly in Ubuntu WSL:
./scripts/setup_docker_wsl2.sh
```
**Option 2: Docker Desktop (Windows)**
```powershell
# Run PowerShell as Administrator:
powershell -ExecutionPolicy Bypass -File scripts/setup_docker_windows.ps1
```
📖 **[Complete Docker + WSL2 Guide](DOCKER_WSL2_SETUP.md)** - Detailed steps for both options
### Starting the Full Stack
```bash
# After installing Docker:
docker-compose up -d
# Check status
docker-compose ps
# View logs
docker-compose logs -f api
# Scale agents
docker-compose up -d --scale agent=3
```
### Services
| Service | Port | Description |
|---------|------|-------------|
| API | 8000 | FastAPI server |
| PostgreSQL | 5432 | Database |
| Redis | 6379 | Cache |
| Agent | - | Pentest agent |
📖 **[Complete Docker Guide](DOCKER.md)**
---
## 🛡️ Safety First
### Default Protections
- ✅ **Private IP Blocking** - Prevents scanning 10.0.0.0/8, 172.16.0.0/12, 192.168.0.0/16
- ✅ **Loopback Protection** - Blocks 127.x.x.x and ::1
- ✅ **Local Domain Filter** - Prevents .local, .internal, localhost
- ✅ **Risk Level Control** - Restricts tools by safety level
- ✅ **Rate Limiting** - Prevents abuse
### Risk Levels
| Level | Tools | Description |
|-------|-------|-------------|
| **SAFE (0)** | whois, dns, subdomain | Reconnaissance only |
| **NORMAL (1)** | + nmap, nuclei | Standard scanning |
| **ELEVATED (2)** | + sqlmap, exploit | Light exploitation |
| **AGGRESSIVE (3)** | + pivot, lateral | Full exploitation |
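The checks described in the protections list and the risk-level table can be illustrated with a small, self-contained sketch built on Python's standard `ipaddress` module. The tool groupings mirror the table above, but the code itself is illustrative and not the framework's implementation.
```python
# Illustrative guardrail checks: block private/loopback targets and gate
# tools by risk level. Mirrors the tables above; not the real implementation.
import ipaddress

ALLOWED_TOOLS = {
    0: {"whois", "dns", "subdomain"},                                # SAFE
    1: {"whois", "dns", "subdomain", "nmap", "nuclei"},              # NORMAL
    2: {"whois", "dns", "subdomain", "nmap", "nuclei",
        "sqlmap", "exploit"},                                        # ELEVATED
}
ALLOWED_TOOLS[3] = ALLOWED_TOOLS[2] | {"pivot", "lateral"}           # AGGRESSIVE


def target_is_blocked(target: str) -> bool:
    """Reject private, loopback, and obviously internal targets."""
    if target == "localhost" or target.endswith((".local", ".internal")):
        return True
    try:
        ip = ipaddress.ip_address(target)
    except ValueError:
        return False  # hostname: left to DNS/domain filtering
    return ip.is_private or ip.is_loopback


def tool_allowed(tool: str, risk_level: int) -> bool:
    return tool in ALLOWED_TOOLS.get(risk_level, set())


assert target_is_blocked("192.168.1.10")
assert not target_is_blocked("93.184.216.34")
assert tool_allowed("nmap", 1) and not tool_allowed("sqlmap", 1)
```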
⚠️ **Always ensure you have authorization before scanning!**
---
## 📚 Documentation
| Document | Description |
|----------|-------------|
| [DOCKER.md](DOCKER.md) | Docker deployment guide |
| [GUARDRAILS.md](GUARDRAILS.md) | Security guardrails documentation |
| [GUARDRAILS_INTEGRATION.md](GUARDRAILS_INTEGRATION.md) | Guardrails integration guide |
| [VPN_INTEGRATION.md](VPN_INTEGRATION.md) | VPN setup and usage |
| [DEMO_E2E.md](DEMO_E2E.md) | End-to-end demo documentation |
| [AGENTS.md](AGENTS.md) | Agent development guide |
---
## 🤝 Contributing
We welcome contributions! Please see:
- **[CONTRIBUTING.md](CONTRIBUTING.md)** - Contribution guidelines
- **[CODE_OF_CONDUCT.md](CODE_OF_CONDUCT.md)** - Community standards
- **[CONTRIBUTORS.md](CONTRIBUTORS.md)** - Our amazing contributors
Quick start:
1. Fork the repository
2. Create feature branch (`git checkout -b feature/amazing-feature`)
3. Commit changes (`git commit -m 'Add amazing feature'`)
4. Push to branch (`git push origin feature/amazing-feature`)
5. Open Pull Request
---
## 🌐 Community & Support
Join our growing community!
### Quick Links
| Platform | Link | QR Code |
|----------|------|---------|
| 🎮 **Discord** | [discord.gg/zJZUJwK9AC](https://discord.gg/zJZUJwK9AC) | [📱 Scan](docs/qr_codes/04_discord.png) |
| 💬 **GitHub Discussions** | [SHAdd0WTAka/zen-ai-pentest/discussions](https://github.com/SHAdd0WTAka/zen-ai-pentest/discussions) | [📱 Scan](docs/qr_codes/01_github_repo.png) |
| 📦 **PyPI Package** | [pypi.org/project/zen-ai-pentest](https://pypi.org/project/zen-ai-pentest) | [📱 Scan](docs/qr_codes/06_pypi.png) |
### 📱 All QR Codes
View our complete QR code gallery: [docs/qr_codes/index.html](docs/qr_codes/index.html)
### 💬 Discord Server "Zen-Ai"
**Fully configured with 11 channels:**
- 📢 #announcements
- 📜 #rules
- 💬 #general
- 👋 #introductions
- 📚 #knowledge-base
- 🤖 #tools-automation
- 🔒 #security-research
- 🧠 #ai-ml-discussion
- 🐛 #bug-reports
- 💡 #feature-requests
- 🆘 #support
### 📧 Support
- 📖 **[Documentation](docs/)** - Comprehensive guides
- 🐛 **[Issue Tracker](https://github.com/SHAdd0WTAka/zen-ai-pentest/issues)** - Bug reports
- 📧 **[Email](mailto:support@zen-ai-pentest.dev)** - Direct contact
See [SUPPORT.md](SUPPORT.md) for detailed support options.
---
## ⚠️ Disclaimer
**IMPORTANT**: This tool is for authorized security testing only. Always obtain proper permission before testing any system you do not own. Unauthorized access to computer systems is illegal.
- Use only on systems you have explicit permission to test
- Respect privacy and data protection laws
- The authors assume no liability for misuse or damage
---
## 📄 License
This project is licensed under the MIT License - see [LICENSE](LICENSE) file for details.
---
## 🙏 Acknowledgments
- [LangGraph](https://github.com/langchain-ai/langgraph) - Agent framework
- [FastAPI](https://fastapi.tiangolo.com/) - Web framework
- [Kali Linux](https://www.kali.org/) - Penetration testing distribution
- All open-source security tool creators
---
## 👥 Authors & Team
### Core Development Team
<table>
<tr>
<td align="center">
<a href="https://github.com/SHAdd0WTAka">
<img src="https://github.com/SHAdd0WTAka.png?size=100" width="100px;" alt="SHAdd0WTAka"/>
<br />
<sub><b>@SHAdd0WTAka</b></sub>
</a>
<br />
<sub>Project Founder & Lead Developer</sub>
<br />
<sub>Security Architect</sub>
</td>
<td align="center">
<a href="https://www.moonshot.cn/">
<img src="https://img.shields.io/badge/Kimi-AI-blue?style=for-the-badge&logo=openai&logoColor=white" width="100px;" alt="Kimi AI"/>
<br />
<sub><b>Kimi AI</b></sub>
</a>
<br />
<sub>AI Development Partner</sub>
<br />
<sub>Architecture & Design</sub>
</td>
</tr>
</table>
### AI Contributors
- **Kimi AI (Moonshot AI)** - Primary AI development partner
- Led architecture design for autonomous agent loop
- Implemented Risk Engine with false-positive reduction
- Created CI/CD integration templates
- Developed benchmarking framework
- Co-authored documentation and roadmaps
### Special Thanks
- **Grok (xAI)** - Strategic analysis and competitive research
- **GitHub Copilot** - Code assistance and suggestions
- **Security Community** - Feedback, bug reports, and feature requests
---
## 🎨 Project Artwork
<div align="center">
<img src="docs/qr_codes/hemisphere_sync.png" alt="Hemisphere Sync" width="600"/>
### Hemisphere Sync
```
🧠 GEHIRN
╱ ╲
╱ LINKS ╲ ╱ RECHTS ╲
╱ (Kimi) ╲ ╱(Observer^^)╲
╱ Logik ╲╱ Kreativität ╲
Analytisch ╳ Ganzheitlich
Struktur ╳ Vision
╲ ╱╲ ╱
╲ ╱ ╲ ╱
╲ ╱ ╲╱
╲╱ ╱
╲ ╱
╲ ╱
❤️
HEMISPHERE_SYNC
"Zwei Hälften - Ein Herz - Ein Team"
```
*A fusion of human vision and AI capability*
**Left Brain (Kimi - Logic) + Right Brain (Observer^^ - Creativity) = Hemisphere_Sync**
| Hemisphere | Responsible for | Team |
|------------|-----------------|------|
| **Left Brain** | Logic, Structure, Code, Analytics | **Kimi** 🤖 |
| **Right Brain** | Creativity, Vision, Design, Emotion | **Observer^^** 🎨 |
*Custom artwork by **SHAdd0WTAka** representing the fusion of human vision and AI capability.*
</div>
---
<p align="center">
<b>Made with ❤️ for the security community</b><br>
<sub>© 2026 Zen-AI-Pentest. All rights reserved.</sub>
</p>
| text/markdown | SHAdd0WTAka | SHAdd0WTAka <shadd0wtaka@example.com> | null | null | MIT | penetration-testing, security, ai, llm, multi-agent, cve, vulnerability-scanner, pentest | [
"Development Status :: 5 - Production/Stable",
"Intended Audience :: Information Technology",
"Topic :: Security",
"License :: OSI Approved :: MIT License",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.9",
"Programming Language :: Python :: 3.10",
"Programming Language :... | [] | https://github.com/SHAdd0WTAka/zen-ai-pentest | null | >=3.9 | [] | [] | [] | [
"requests>=2.31.0",
"aiohttp>=3.9.0",
"python-dotenv>=1.0.0",
"pydantic>=2.0.0",
"fastapi>=0.104.0",
"uvicorn>=0.24.0",
"dnspython>=2.7.0",
"pytest>=7.4.0; extra == \"dev\"",
"pytest-asyncio>=0.21.0; extra == \"dev\"",
"pytest-cov>=4.1.0; extra == \"dev\"",
"black>=23.0.0; extra == \"dev\"",
"... | [] | [] | [] | [
"Homepage, https://github.com/SHAdd0WTAka/zen-ai-pentest",
"Documentation, https://github.com/SHAdd0WTAka/zen-ai-pentest/tree/main/docs",
"Repository, https://github.com/SHAdd0WTAka/zen-ai-pentest.git",
"Bug Tracker, https://github.com/SHAdd0WTAka/zen-ai-pentest/issues",
"Changelog, https://github.com/SHAdd... | twine/6.2.0 CPython/3.12.12 | 2026-02-19T11:20:02.708962 | zen_ai_pentest-3.0.0.tar.gz | 2,607,176 | d3/3e/a5bf57b21d84b27deb464a97b8c68ef417f8f92338c6d21ca3e548440321/zen_ai_pentest-3.0.0.tar.gz | source | sdist | null | false | 121f7b5c7415eb86ca6f9f9acc3a2c90 | 8dfc8005f52d48bc35b3db5a8116b6a1d4a2686e07a7549fb524f99664972207 | d33ea5bf57b21d84b27deb464a97b8c68ef417f8f92338c6d21ca3e548440321 | null | [
"LICENSE"
] | 255 |
2.4 | pychangelog2 | 1.1.0 | rpcclient for connecting with the rpcserver | # Description
Simple utility for quickly generating changelogs, assuming your commits are ordered as they should be. By default,
the tool logs commits from `HEAD` back to the most recently created tag. You can also provide a specific tag and
log from that tag to `HEAD`.
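Under the hood this amounts to asking git for the commits in a revision range. The following is a minimal, stand-alone sketch of that operation (not pychangelog2's own code); the tag and repository path are illustrative.
```python
# Minimal sketch of listing commits from a tag up to HEAD, i.e. the revision
# range a changelog is built from. Not pychangelog2's implementation.
import subprocess


def commits_since(tag: str, repo_path: str = ".") -> list[str]:
    """Return one-line summaries for every commit in `tag..HEAD`."""
    result = subprocess.run(
        ["git", "-C", repo_path, "log", "--oneline", f"{tag}..HEAD"],
        capture_output=True,
        text=True,
        check=True,
    )
    return result.stdout.splitlines()


for line in commits_since("v1.2.0"):
    print(f"* {line}")
```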
# Installation
```shell
python3 -m pip install --user -U pychangelog2
```
# Example
```
➜ pychangelog2 git:(master) ✗ pychangelog2 <path to some repo>
* b1682a0ba253f21a91fabb4d02b9a6d0ec177f7b git: add .idea to .gitignore
* d499ab0f39688e371287ccf3dfc67366e0f70a48 cli: lockdown: add device-name subcommand
* 03f0bee3219fc30aa4f3b378420f5a807eda13b9 requirements: pyusb>=1.2.1
* 7456890e51244e014352a01a9670c35368a68c56 Bugfix: AFC: Pull, push, cd and completion
* d84f3d19f4db5499f0cdcf09bc1b993a96ea1cb0 Replace distutils.LooseVersion with packaging.Version.
➜ pychangelog2 git:(master) ✗ pychangelog2 <path to some repo> --start-tag v1.2.0
* 41dfac... add release docs
* 1f4a12... fix CLI argument parsing
➜ pychangelog2 git:(master) ✗ pychangelog2 <path to some repo> --start-tag v1.2.0 --end-tag v1.4.0
* 12cdab... add tests for transport layer
* 38ef11... fix AFC timeout handling
```
| text/markdown | null | doronz88 <doron88@gmail.com> | null | doronz88 <doron88@gmail.com> | GNU GENERAL PUBLIC LICENSE
Version 3, 29 June 2007
Copyright (C) 2007 Free Software Foundation, Inc. <http://fsf.org/>
Everyone is permitted to copy and distribute verbatim copies
of this license document, but changing it is not allowed.
Preamble
The GNU General Public License is a free, copyleft license for
software and other kinds of works.
The licenses for most software and other practical works are designed
to take away your freedom to share and change the works. By contrast,
the GNU General Public License is intended to guarantee your freedom to
share and change all versions of a program--to make sure it remains free
software for all its users. We, the Free Software Foundation, use the
GNU General Public License for most of our software; it applies also to
any other work released this way by its authors. You can apply it to
your programs, too.
When we speak of free software, we are referring to freedom, not
price. Our General Public Licenses are designed to make sure that you
have the freedom to distribute copies of free software (and charge for
them if you wish), that you receive source code or can get it if you
want it, that you can change the software or use pieces of it in new
free programs, and that you know you can do these things.
To protect your rights, we need to prevent others from denying you
these rights or asking you to surrender the rights. Therefore, you have
certain responsibilities if you distribute copies of the software, or if
you modify it: responsibilities to respect the freedom of others.
For example, if you distribute copies of such a program, whether
gratis or for a fee, you must pass on to the recipients the same
freedoms that you received. You must make sure that they, too, receive
or can get the source code. And you must show them these terms so they
know their rights.
Developers that use the GNU GPL protect your rights with two steps:
(1) assert copyright on the software, and (2) offer you this License
giving you legal permission to copy, distribute and/or modify it.
For the developers' and authors' protection, the GPL clearly explains
that there is no warranty for this free software. For both users' and
authors' sake, the GPL requires that modified versions be marked as
changed, so that their problems will not be attributed erroneously to
authors of previous versions.
Some devices are designed to deny users access to install or run
modified versions of the software inside them, although the manufacturer
can do so. This is fundamentally incompatible with the aim of
protecting users' freedom to change the software. The systematic
pattern of such abuse occurs in the area of products for individuals to
use, which is precisely where it is most unacceptable. Therefore, we
have designed this version of the GPL to prohibit the practice for those
products. If such problems arise substantially in other domains, we
stand ready to extend this provision to those domains in future versions
of the GPL, as needed to protect the freedom of users.
Finally, every program is threatened constantly by software patents.
States should not allow patents to restrict development and use of
software on general-purpose computers, but in those that do, we wish to
avoid the special danger that patents applied to a free program could
make it effectively proprietary. To prevent this, the GPL assures that
patents cannot be used to render the program non-free.
The precise terms and conditions for copying, distribution and
modification follow.
TERMS AND CONDITIONS
0. Definitions.
"This License" refers to version 3 of the GNU General Public License.
"Copyright" also means copyright-like laws that apply to other kinds of
works, such as semiconductor masks.
"The Program" refers to any copyrightable work licensed under this
License. Each licensee is addressed as "you". "Licensees" and
"recipients" may be individuals or organizations.
To "modify" a work means to copy from or adapt all or part of the work
in a fashion requiring copyright permission, other than the making of an
exact copy. The resulting work is called a "modified version" of the
earlier work or a work "based on" the earlier work.
A "covered work" means either the unmodified Program or a work based
on the Program.
To "propagate" a work means to do anything with it that, without
permission, would make you directly or secondarily liable for
infringement under applicable copyright law, except executing it on a
computer or modifying a private copy. Propagation includes copying,
distribution (with or without modification), making available to the
public, and in some countries other activities as well.
To "convey" a work means any kind of propagation that enables other
parties to make or receive copies. Mere interaction with a user through
a computer network, with no transfer of a copy, is not conveying.
An interactive user interface displays "Appropriate Legal Notices"
to the extent that it includes a convenient and prominently visible
feature that (1) displays an appropriate copyright notice, and (2)
tells the user that there is no warranty for the work (except to the
extent that warranties are provided), that licensees may convey the
work under this License, and how to view a copy of this License. If
the interface presents a list of user commands or options, such as a
menu, a prominent item in the list meets this criterion.
1. Source Code.
The "source code" for a work means the preferred form of the work
for making modifications to it. "Object code" means any non-source
form of a work.
A "Standard Interface" means an interface that either is an official
standard defined by a recognized standards body, or, in the case of
interfaces specified for a particular programming language, one that
is widely used among developers working in that language.
The "System Libraries" of an executable work include anything, other
than the work as a whole, that (a) is included in the normal form of
packaging a Major Component, but which is not part of that Major
Component, and (b) serves only to enable use of the work with that
Major Component, or to implement a Standard Interface for which an
implementation is available to the public in source code form. A
"Major Component", in this context, means a major essential component
(kernel, window system, and so on) of the specific operating system
(if any) on which the executable work runs, or a compiler used to
produce the work, or an object code interpreter used to run it.
The "Corresponding Source" for a work in object code form means all
the source code needed to generate, install, and (for an executable
work) run the object code and to modify the work, including scripts to
control those activities. However, it does not include the work's
System Libraries, or general-purpose tools or generally available free
programs which are used unmodified in performing those activities but
which are not part of the work. For example, Corresponding Source
includes interface definition files associated with source files for
the work, and the source code for shared libraries and dynamically
linked subprograms that the work is specifically designed to require,
such as by intimate data communication or control flow between those
subprograms and other parts of the work.
The Corresponding Source need not include anything that users
can regenerate automatically from other parts of the Corresponding
Source.
The Corresponding Source for a work in source code form is that
same work.
2. Basic Permissions.
All rights granted under this License are granted for the term of
copyright on the Program, and are irrevocable provided the stated
conditions are met. This License explicitly affirms your unlimited
permission to run the unmodified Program. The output from running a
covered work is covered by this License only if the output, given its
content, constitutes a covered work. This License acknowledges your
rights of fair use or other equivalent, as provided by copyright law.
You may make, run and propagate covered works that you do not
convey, without conditions so long as your license otherwise remains
in force. You may convey covered works to others for the sole purpose
of having them make modifications exclusively for you, or provide you
with facilities for running those works, provided that you comply with
the terms of this License in conveying all material for which you do
not control copyright. Those thus making or running the covered works
for you must do so exclusively on your behalf, under your direction
and control, on terms that prohibit them from making any copies of
your copyrighted material outside their relationship with you.
Conveying under any other circumstances is permitted solely under
the conditions stated below. Sublicensing is not allowed; section 10
makes it unnecessary.
3. Protecting Users' Legal Rights From Anti-Circumvention Law.
No covered work shall be deemed part of an effective technological
measure under any applicable law fulfilling obligations under article
11 of the WIPO copyright treaty adopted on 20 December 1996, or
similar laws prohibiting or restricting circumvention of such
measures.
When you convey a covered work, you waive any legal power to forbid
circumvention of technological measures to the extent such circumvention
is effected by exercising rights under this License with respect to
the covered work, and you disclaim any intention to limit operation or
modification of the work as a means of enforcing, against the work's
users, your or third parties' legal rights to forbid circumvention of
technological measures.
4. Conveying Verbatim Copies.
You may convey verbatim copies of the Program's source code as you
receive it, in any medium, provided that you conspicuously and
appropriately publish on each copy an appropriate copyright notice;
keep intact all notices stating that this License and any
non-permissive terms added in accord with section 7 apply to the code;
keep intact all notices of the absence of any warranty; and give all
recipients a copy of this License along with the Program.
You may charge any price or no price for each copy that you convey,
and you may offer support or warranty protection for a fee.
5. Conveying Modified Source Versions.
You may convey a work based on the Program, or the modifications to
produce it from the Program, in the form of source code under the
terms of section 4, provided that you also meet all of these conditions:
a) The work must carry prominent notices stating that you modified
it, and giving a relevant date.
b) The work must carry prominent notices stating that it is
released under this License and any conditions added under section
7. This requirement modifies the requirement in section 4 to
"keep intact all notices".
c) You must license the entire work, as a whole, under this
License to anyone who comes into possession of a copy. This
License will therefore apply, along with any applicable section 7
additional terms, to the whole of the work, and all its parts,
regardless of how they are packaged. This License gives no
permission to license the work in any other way, but it does not
invalidate such permission if you have separately received it.
d) If the work has interactive user interfaces, each must display
Appropriate Legal Notices; however, if the Program has interactive
interfaces that do not display Appropriate Legal Notices, your
work need not make them do so.
A compilation of a covered work with other separate and independent
works, which are not by their nature extensions of the covered work,
and which are not combined with it such as to form a larger program,
in or on a volume of a storage or distribution medium, is called an
"aggregate" if the compilation and its resulting copyright are not
used to limit the access or legal rights of the compilation's users
beyond what the individual works permit. Inclusion of a covered work
in an aggregate does not cause this License to apply to the other
parts of the aggregate.
6. Conveying Non-Source Forms.
You may convey a covered work in object code form under the terms
of sections 4 and 5, provided that you also convey the
machine-readable Corresponding Source under the terms of this License,
in one of these ways:
a) Convey the object code in, or embodied in, a physical product
(including a physical distribution medium), accompanied by the
Corresponding Source fixed on a durable physical medium
customarily used for software interchange.
b) Convey the object code in, or embodied in, a physical product
(including a physical distribution medium), accompanied by a
written offer, valid for at least three years and valid for as
long as you offer spare parts or customer support for that product
model, to give anyone who possesses the object code either (1) a
copy of the Corresponding Source for all the software in the
product that is covered by this License, on a durable physical
medium customarily used for software interchange, for a price no
more than your reasonable cost of physically performing this
conveying of source, or (2) access to copy the
Corresponding Source from a network server at no charge.
c) Convey individual copies of the object code with a copy of the
written offer to provide the Corresponding Source. This
alternative is allowed only occasionally and noncommercially, and
only if you received the object code with such an offer, in accord
with subsection 6b.
d) Convey the object code by offering access from a designated
place (gratis or for a charge), and offer equivalent access to the
Corresponding Source in the same way through the same place at no
further charge. You need not require recipients to copy the
Corresponding Source along with the object code. If the place to
copy the object code is a network server, the Corresponding Source
may be on a different server (operated by you or a third party)
that supports equivalent copying facilities, provided you maintain
clear directions next to the object code saying where to find the
Corresponding Source. Regardless of what server hosts the
Corresponding Source, you remain obligated to ensure that it is
available for as long as needed to satisfy these requirements.
e) Convey the object code using peer-to-peer transmission, provided
you inform other peers where the object code and Corresponding
Source of the work are being offered to the general public at no
charge under subsection 6d.
A separable portion of the object code, whose source code is excluded
from the Corresponding Source as a System Library, need not be
included in conveying the object code work.
A "User Product" is either (1) a "consumer product", which means any
tangible personal property which is normally used for personal, family,
or household purposes, or (2) anything designed or sold for incorporation
into a dwelling. In determining whether a product is a consumer product,
doubtful cases shall be resolved in favor of coverage. For a particular
product received by a particular user, "normally used" refers to a
typical or common use of that class of product, regardless of the status
of the particular user or of the way in which the particular user
actually uses, or expects or is expected to use, the product. A product
is a consumer product regardless of whether the product has substantial
commercial, industrial or non-consumer uses, unless such uses represent
the only significant mode of use of the product.
"Installation Information" for a User Product means any methods,
procedures, authorization keys, or other information required to install
and execute modified versions of a covered work in that User Product from
a modified version of its Corresponding Source. The information must
suffice to ensure that the continued functioning of the modified object
code is in no case prevented or interfered with solely because
modification has been made.
If you convey an object code work under this section in, or with, or
specifically for use in, a User Product, and the conveying occurs as
part of a transaction in which the right of possession and use of the
User Product is transferred to the recipient in perpetuity or for a
fixed term (regardless of how the transaction is characterized), the
Corresponding Source conveyed under this section must be accompanied
by the Installation Information. But this requirement does not apply
if neither you nor any third party retains the ability to install
modified object code on the User Product (for example, the work has
been installed in ROM).
The requirement to provide Installation Information does not include a
requirement to continue to provide support service, warranty, or updates
for a work that has been modified or installed by the recipient, or for
the User Product in which it has been modified or installed. Access to a
network may be denied when the modification itself materially and
adversely affects the operation of the network or violates the rules and
protocols for communication across the network.
Corresponding Source conveyed, and Installation Information provided,
in accord with this section must be in a format that is publicly
documented (and with an implementation available to the public in
source code form), and must require no special password or key for
unpacking, reading or copying.
7. Additional Terms.
"Additional permissions" are terms that supplement the terms of this
License by making exceptions from one or more of its conditions.
Additional permissions that are applicable to the entire Program shall
be treated as though they were included in this License, to the extent
that they are valid under applicable law. If additional permissions
apply only to part of the Program, that part may be used separately
under those permissions, but the entire Program remains governed by
this License without regard to the additional permissions.
When you convey a copy of a covered work, you may at your option
remove any additional permissions from that copy, or from any part of
it. (Additional permissions may be written to require their own
removal in certain cases when you modify the work.) You may place
additional permissions on material, added by you to a covered work,
for which you have or can give appropriate copyright permission.
Notwithstanding any other provision of this License, for material you
add to a covered work, you may (if authorized by the copyright holders of
that material) supplement the terms of this License with terms:
a) Disclaiming warranty or limiting liability differently from the
terms of sections 15 and 16 of this License; or
b) Requiring preservation of specified reasonable legal notices or
author attributions in that material or in the Appropriate Legal
Notices displayed by works containing it; or
c) Prohibiting misrepresentation of the origin of that material, or
requiring that modified versions of such material be marked in
reasonable ways as different from the original version; or
d) Limiting the use for publicity purposes of names of licensors or
authors of the material; or
e) Declining to grant rights under trademark law for use of some
trade names, trademarks, or service marks; or
f) Requiring indemnification of licensors and authors of that
material by anyone who conveys the material (or modified versions of
it) with contractual assumptions of liability to the recipient, for
any liability that these contractual assumptions directly impose on
those licensors and authors.
All other non-permissive additional terms are considered "further
restrictions" within the meaning of section 10. If the Program as you
received it, or any part of it, contains a notice stating that it is
governed by this License along with a term that is a further
restriction, you may remove that term. If a license document contains
a further restriction but permits relicensing or conveying under this
License, you may add to a covered work material governed by the terms
of that license document, provided that the further restriction does
not survive such relicensing or conveying.
If you add terms to a covered work in accord with this section, you
must place, in the relevant source files, a statement of the
additional terms that apply to those files, or a notice indicating
where to find the applicable terms.
Additional terms, permissive or non-permissive, may be stated in the
form of a separately written license, or stated as exceptions;
the above requirements apply either way.
8. Termination.
You may not propagate or modify a covered work except as expressly
provided under this License. Any attempt otherwise to propagate or
modify it is void, and will automatically terminate your rights under
this License (including any patent licenses granted under the third
paragraph of section 11).
However, if you cease all violation of this License, then your
license from a particular copyright holder is reinstated (a)
provisionally, unless and until the copyright holder explicitly and
finally terminates your license, and (b) permanently, if the copyright
holder fails to notify you of the violation by some reasonable means
prior to 60 days after the cessation.
Moreover, your license from a particular copyright holder is
reinstated permanently if the copyright holder notifies you of the
violation by some reasonable means, this is the first time you have
received notice of violation of this License (for any work) from that
copyright holder, and you cure the violation prior to 30 days after
your receipt of the notice.
Termination of your rights under this section does not terminate the
licenses of parties who have received copies or rights from you under
this License. If your rights have been terminated and not permanently
reinstated, you do not qualify to receive new licenses for the same
material under section 10.
9. Acceptance Not Required for Having Copies.
You are not required to accept this License in order to receive or
run a copy of the Program. Ancillary propagation of a covered work
occurring solely as a consequence of using peer-to-peer transmission
to receive a copy likewise does not require acceptance. However,
nothing other than this License grants you permission to propagate or
modify any covered work. These actions infringe copyright if you do
not accept this License. Therefore, by modifying or propagating a
covered work, you indicate your acceptance of this License to do so.
10. Automatic Licensing of Downstream Recipients.
Each time you convey a covered work, the recipient automatically
receives a license from the original licensors, to run, modify and
propagate that work, subject to this License. You are not responsible
for enforcing compliance by third parties with this License.
An "entity transaction" is a transaction transferring control of an
organization, or substantially all assets of one, or subdividing an
organization, or merging organizations. If propagation of a covered
work results from an entity transaction, each party to that
transaction who receives a copy of the work also receives whatever
licenses to the work the party's predecessor in interest had or could
give under the previous paragraph, plus a right to possession of the
Corresponding Source of the work from the predecessor in interest, if
the predecessor has it or can get it with reasonable efforts.
You may not impose any further restrictions on the exercise of the
rights granted or affirmed under this License. For example, you may
not impose a license fee, royalty, or other charge for exercise of
rights granted under this License, and you may not initiate litigation
(including a cross-claim or counterclaim in a lawsuit) alleging that
any patent claim is infringed by making, using, selling, offering for
sale, or importing the Program or any portion of it.
11. Patents.
A "contributor" is a copyright holder who authorizes use under this
License of the Program or a work on which the Program is based. The
work thus licensed is called the contributor's "contributor version".
A contributor's "essential patent claims" are all patent claims
owned or controlled by the contributor, whether already acquired or
hereafter acquired, that would be infringed by some manner, permitted
by this License, of making, using, or selling its contributor version,
but do not include claims that would be infringed only as a
consequence of further modification of the contributor version. For
purposes of this definition, "control" includes the right to grant
patent sublicenses in a manner consistent with the requirements of
this License.
Each contributor grants you a non-exclusive, worldwide, royalty-free
patent license under the contributor's essential patent claims, to
make, use, sell, offer for sale, import and otherwise run, modify and
propagate the contents of its contributor version.
In the following three paragraphs, a "patent license" is any express
agreement or commitment, however denominated, not to enforce a patent
(such as an express permission to practice a patent or covenant not to
sue for patent infringement). To "grant" such a patent license to a
party means to make such an agreement or commitment not to enforce a
patent against the party.
If you convey a covered work, knowingly relying on a patent license,
and the Corresponding Source of the work is not available for anyone
to copy, free of charge and under the terms of this License, through a
publicly available network server or other readily accessible means,
then you must either (1) cause the Corresponding Source to be so
available, or (2) arrange to deprive yourself of the benefit of the
patent license for this particular work, or (3) arrange, in a manner
consistent with the requirements of this License, to extend the patent
license to downstream recipients. "Knowingly relying" means you have
actual knowledge that, but for the patent license, your conveying the
covered work in a country, or your recipient's use of the covered work
in a country, would infringe one or more identifiable patents in that
country that you have reason to believe are valid.
If, pursuant to or in connection with a single transaction or
arrangement, you convey, or propagate by procuring conveyance of, a
covered work, and grant a patent license to some of the parties
receiving the covered work authorizing them to use, propagate, modify
or convey a specific copy of the covered work, then the patent license
you grant is automatically extended to all recipients of the covered
work and works based on it.
A patent license is "discriminatory" if it does not include within
the scope of its coverage, prohibits the exercise of, or is
conditioned on the non-exercise of one or more of the rights that are
specifically granted under this License. You may not convey a covered
work if you are a party to an arrangement with a third party that is
in the business of distributing software, under which you make payment
to the third party based on the extent of your activity of conveying
the work, and under which the third party grants, to any of the
parties who would receive the covered work from you, a discriminatory
patent license (a) in connection with copies of the covered work
conveyed by you (or copies made from those copies), or (b) primarily
for and in connection with specific products or compilations that
contain the covered work, unless you entered into that arrangement,
or that patent license was granted, prior to 28 March 2007.
Nothing in this License shall be construed as excluding or limiting
any implied license or other defenses to infringement that may
otherwise be available to you under applicable patent law.
12. No Surrender of Others' Freedom.
If conditions are imposed on you (whether by court order, agreement or
otherwise) that contradict the conditions of this License, they do not
excuse you from the conditions of this License. If you cannot convey a
covered work so as to satisfy simultaneously your obligations under this
License and any other pertinent obligations, then as a consequence you may
not convey it at all. For example, if you agree to terms that obligate you
to collect a royalty for further conveying from those to whom you convey
the Program, the only way you could satisfy both those terms and this
License would be to refrain entirely from conveying the Program.
13. Use with the GNU Affero General Public License.
Notwithstanding any other provision of this License, you have
permission to link or combine any covered work with a work licensed
under version 3 of the GNU Affero General Public License into a single
combined work, and to convey the resulting work. The terms of this
License will continue to apply to the part which is the covered work,
but the special requirements of the GNU Affero General Public License,
section 13, concerning interaction through a network will apply to the
combination as such.
14. Revised Versions of this License.
The Free Software Foundation may publish revised and/or new versions of
the GNU General Public License from time to time. Such new versions will
be similar in spirit to the present version, but may differ in detail to
address new problems or concerns.
Each version is given a distinguishing version number. If the
Program specifies that a certain numbered version of the GNU General
Public License "or any later version" applies to it, you have the
option of following the terms and conditions either of that numbered
version or of any later version published by the Free Software
Foundation. If the Program does not specify a version number of the
GNU General Public License, you may choose any version ever published
by the Free Software Foundation.
If the Program specifies that a proxy can decide which future
versions of the GNU General Public License can be used, that proxy's
public statement of acceptance of a version permanently authorizes you
to choose that version for the Program.
Later license versions may give you additional or different
permissions. However, no additional obligations are imposed on any
author or copyright holder as a result of your choosing to follow a
later version.
15. Disclaimer of Warranty.
THERE IS NO WARRANTY FOR THE PROGRAM, TO THE EXTENT PERMITTED BY
APPLICABLE LAW. EXCEPT WHEN OTHERWISE STATED IN WRITING THE COPYRIGHT
HOLDERS AND/OR OTHER PARTIES PROVIDE THE PROGRAM "AS IS" WITHOUT WARRANTY
OF ANY KIND, EITHER EXPRESSED OR IMPLIED, INCLUDING, BUT NOT LIMITED TO,
THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR
PURPOSE. THE ENTIRE RISK AS TO THE QUALITY AND PERFORMANCE OF THE PROGRAM
IS WITH YOU. SHOULD THE PROGRAM PROVE DEFECTIVE, YOU ASSUME THE COST OF
ALL NECESSARY SERVICING, REPAIR OR CORRECTION.
16. Limitation of Liability.
IN NO EVENT UNLESS REQUIRED BY APPLICABLE LAW OR AGREED TO IN WRITING
WILL ANY COPYRIGHT HOLDER, OR ANY OTHER PARTY WHO MODIFIES AND/OR CONVEYS
THE PROGRAM AS PERMITTED ABOVE, BE LIABLE TO YOU FOR DAMAGES, INCLUDING ANY
GENERAL, SPECIAL, INCIDENTAL OR CONSEQUENTIAL DAMAGES ARISING OUT OF THE
USE OR INABILITY TO USE THE PROGRAM (INCLUDING BUT NOT LIMITED TO LOSS OF
DATA OR DATA BEING RENDERED INACCURATE OR LOSSES SUSTAINED BY YOU OR THIRD
PARTIES OR A FAILURE OF THE PROGRAM TO OPERATE WITH ANY OTHER PROGRAMS),
EVEN IF SUCH HOLDER OR OTHER PARTY HAS BEEN ADVISED OF THE POSSIBILITY OF
SUCH DAMAGES.
17. Interpretation of Sections 15 and 16.
If the disclaimer of warranty and limitation of liability provided
above cannot be given local legal effect according to their terms,
reviewing courts shall apply local law that most closely approximates
an absolute waiver of all civil liability in connection with the
Program, unless a warranty or assumption of liability accompanies a
copy of the Program in return for a fee.
END OF TERMS AND CONDITIONS
How to Apply These Terms to Your New Programs
If you develop a new program, and you want it to be of the greatest
possible use to the public, the best way to achieve this is to make it
free software which everyone can redistribute and change under these terms.
To do so, attach the following notices to the program. It is safest
to attach them to the start of each source file to most effectively
state the exclusion of warranty; and each file should have at least
the "copyright" line and a pointer to where the full notice is found.
<one line to give the program's name and a brief idea of what it does.>
Copyright (C) <year> <name of author>
This program is free software: you can redistribute it and/or modify
it under the terms of the GNU General Public License as published by
the Free Software Foundation, either version 3 of the License, or
(at your option) any later version.
This program is distributed in the hope that it will be useful,
but WITHOUT ANY WARRANTY; without even the implied warranty of
MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
GNU General Public License for more details.
You should have received a copy of the GNU General Public License
along with this program. If not, see <http://www.gnu.org/licenses/>.
Also add information on how to contact you by electronic and paper mail.
If the program does terminal interaction, make it output a short
notice like this when it starts in an interactive mode:
<program> Copyright (C) <year> <name of author>
This program comes with ABSOLUTELY NO WARRANTY; for details type `show w'.
This is free software, and you are welcome to redistribute it
under certain conditions; type `show c' for details.
The hypothetical commands `show w' and `show c' should show the appropriate
parts of the General Public License. Of course, your program's commands
might be different; for a GUI interface, you would use an "about box".
You should also get your employer (if you work as a programmer) or school,
if any, to sign a "copyright disclaimer" for the program, if necessary.
For more information on this, and how to apply and follow the GNU GPL, see
<http://www.gnu.org/licenses/>.
The GNU General Public License does not permit incorporating your program
into proprietary programs. If your program is a subroutine library, you
may consider it more useful to permit linking proprietary applications with
the library. If this is what you want to do, use the GNU Lesser General
Public License instead of this License. But first, please read
<http://www.gnu.org/philosophy/why-not-lgpl.html>.
| git, changelog, version | [
"Development Status :: 5 - Production/Stable",
"License :: OSI Approved :: GNU General Public License v3 or later (GPLv3+)",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.8",
"Programming Language :: Python :: 3.9",
"Programming Language :: Python :: 3.10",
"Programming Lan... | [] | null | null | >=3.8 | [] | [] | [] | [
"click",
"gitpython",
"pytest; extra == \"test\""
] | [] | [] | [] | [
"Homepage, https://github.com/doronz88/pychangelog2",
"Bug Reports, https://github.com/doronz88/pychangelog2/issues"
] | twine/6.2.0 CPython/3.14.3 | 2026-02-19T11:18:19.097024 | pychangelog2-1.1.0.tar.gz | 41,946 | 1f/3e/2e6ba73383c9111015194da7de181e0d059cfb929388a3fe736698bc8592/pychangelog2-1.1.0.tar.gz | source | sdist | null | false | 41be34b26a02110a0330b488c0284ad8 | 08f8c36b13239b95935b427d64c272cf923c482c58f7f79b319acb46297b71b6 | 1f3e2e6ba73383c9111015194da7de181e0d059cfb929388a3fe736698bc8592 | null | [
"LICENSE"
] | 259 |
2.4 | snakemake-software-deployment-plugin-conda | 0.3.5 | Software deployment plugin for Snakemake using rattler to deploy conda packages. | # snakemake-software-deployment-plugin-conda
A Snakemake software deployment plugin for conda packages, using [py-rattler](https://conda.github.io/rattler/py-rattler) for ultra-fast and robust environment deployment. | text/markdown | null | Johannes Köster <johannes.koester@uni-due.de> | null | null | null | null | [] | [] | null | null | <4.0,>=3.11 | [] | [] | [] | [
"aiofiles<25,>=24.1.0",
"httpx<0.29,>=0.28.1",
"py-rattler<0.16.0,>=0.12.0",
"pyyaml<7.0.0,>=6.0.2",
"snakemake-interface-common<2.0.0,>=1.17.4",
"snakemake-interface-software-deployment-plugins<1.0,>=0.10.1",
"uv<0.7.0,>=0.6.5"
] | [] | [] | [] | [
"repository, https://github.com/snakemake/snakemake-software-deployment-plugin-conda",
"documentation, https://snakemake.github.io/snakemake-plugin-catalog/plugins/software-deployment/conda.html"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-19T11:18:17.012969 | snakemake_software_deployment_plugin_conda-0.3.5.tar.gz | 11,629 | 8b/26/2f2b186315a5c0f5c301cf8d882f3c0e17cb81a05096ad9c184ebffb0bc9/snakemake_software_deployment_plugin_conda-0.3.5.tar.gz | source | sdist | null | false | dd07962cd665311bfe7c41f8dbdce1dc | a8e6ddf29a78318f69ed8ff471cf32ff01694af2057391f6ec1c1f7b7877c124 | 8b262f2b186315a5c0f5c301cf8d882f3c0e17cb81a05096ad9c184ebffb0bc9 | null | [] | 400 |
2.4 | zerv-version | 0.8.6 | Dynamic versioning CLI tool | [](https://github.com/wislertt/zerv/actions/workflows/cd.yml)
[](https://github.com/wislertt/zerv/actions/workflows/cd.yml)
[](https://sonarcloud.io/summary/new_code?id=wislertt_zerv)
[](https://sonarcloud.io/summary/new_code?id=wislertt_zerv)
[](https://sonarcloud.io/summary/new_code?id=wislertt_zerv)
[](https://codecov.io/gh/wislertt/zerv)
[](https://crates.io/crates/zerv)
[](https://pypi.python.org/pypi/zerv-version)
[](https://crates.io/crates/zerv)
[](https://pepy.tech/projects/zerv-version)
[](https://github.com/wislertt/zerv/)
# zerv
**Automatic versioning for every commit** - Generate semantic versions from any commit across all branches, or from a dirty working directory, with seamless pre-release handling and flexible format support for any CI/CD workflow.
## Table of Contents
- [Quick Start](#quick-start)
- [Key Features](#key-features)
- [Usage Examples](#usage-examples)
- [zerv flow: Automated branch-based versions](#zerv-flow-automated-branch-based-versions)
- [zerv version: Manual control with 4 main capability areas](#zerv-version-manual-control-with-4-main-capability-areas)
- [zerv check: Validate version formats](#zerv-check-validate-version-formats)
- [zerv render: Format conversion](#zerv-render-format-conversion)
- [Python API](#python-api)
- [Installation](#installation)
- [Links](#links)
## Quick Start
**Smart Version Detection**: `zerv flow` automatically generates meaningful SemVer versions from any Git state - no manual configuration required for common workflows.
```bash
# Install (Python via uv)
uv tool install zerv-version
# Try automated versioning (current branch determines output)
zerv flow
# → 1.0.0 (on main branch with tag v1.0.0)
# → 1.0.1-rc.1.post.3 (on release branch with pre-release tag)
# → 1.0.1-beta.1.post.3+develop.3.gf297dd0 (on develop branch)
# → 1.0.1-alpha.59394.post.1+feature.new.auth.1.g4e9af24 (on feature branch)
# → 1.0.1-alpha.17015.post.1.dev.1764382150+feature.dirty.work.1.g54c499a (on dirty feature branch)
```
<!-- Corresponding test: tests/integration_tests/flow/docs/quick_start.rs:test_quick_start_documentation_examples -->
- **Multiple Format Generation**: Transform a single ZERV_RON output into various formats (see [`.github/workflows/shared-zerv-versioning.yml`](.github/workflows/shared-zerv-versioning.yml) for an example implementation)
```bash
# (on dirty feature branch)
ZERV_RON=$(zerv flow --output-format zerv)
# semver
echo $ZERV_RON | zerv version --source stdin --output-format semver
# → 1.0.1-alpha.17015.post.1.dev.1764382150+feature.dirty.work.1.g54c499a
# pep440
echo $ZERV_RON | zerv version --source stdin --output-format pep440
# → 1.0.0a17015.post1.dev1764382150+feature.dirty.work.1.g54c499a
# docker_tag
echo $ZERV_RON | zerv version --source stdin --output-template "{{ semver_obj.docker }}"
# → 1.0.1-alpha.17015.post.1.dev.1764382150-feature.dirty.work.1.g54c499a
# v_semver
echo $ZERV_RON | zerv version --source stdin --output-prefix v --output-format semver
# → v1.0.1-alpha.17015.post.1.dev.1764382150+feature.dirty.work.1.g54c499a
# v_major (schema-based approach)
echo $ZERV_RON | \
zerv version --source stdin \
--schema-ron '(core:[var(Major)], extra_core:[], build:[])' \
--output-prefix v --output-format pep440
# → v1
# v_major_minor (schema-based approach)
echo $ZERV_RON | \
zerv version --source stdin \
--schema-ron '(core:[var(Major), var(Minor)], extra_core:[], build:[])' \
--output-prefix v --output-format pep440
# → v1.0
# v_major_custom (custom template-based approach)
echo $ZERV_RON | zerv version --source stdin --output-template "v{{ major | default(value=\"0\") }}"
# → v1
# v_major_minor_custom (custom template-based approach)
echo $ZERV_RON | zerv version --source stdin --output-template "v{{ major | default(value=\"0\") }}{{ prefix_if(value=minor, prefix=\".\") }}"
# → v1.0
```
<!-- Corresponding test: tests/integration_tests/flow/docs/quick_start.rs:test_quick_start_shared_zerv_versioning_github_actions_documentation_examples -->
## Key Features
- **zerv version**: Flexible, configurable version generation with full control
- **zerv flow**: Opinionated, automated pre-release management based on Git branches
- **Smart Schema System**: Auto-detects clean releases, pre-releases, and build context
- **Multiple Formats**: SemVer, PEP440 (Python), CalVer, custom schemas
- **CI/CD Integration**: Complements semantic release with branch-based pre-releases and full override control
## Usage Examples
### zerv flow: Automated branch-based versions
**Purpose**: Intelligent pre-release management that automatically generates meaningful versions from any Git state without manual decisions.
#### Core Principles
1. **Semantic state capture** - Extract semantic meaning from ANY Git state (any branch, any commit, uncommitted changes)
2. **Multi-format output** - Transform semantic meaning into various version formats with customizable format support
3. **Seamless semantic release integration** - Work with semantic release tools while providing fully automated pre-release versioning
4. **Build traceability** - Include sufficient context to trace versions back to exact Git states
#### Version Format Explained
**Full Example**: `1.0.1-alpha.12345.post.3.dev.1729924622+feature.auth.1.f4a8b9c`
**Structure**: `<BASE>-<PRE_RELEASE>.<POST>[.<DEV>][+BUILD_CONTEXT]`
- **`1.0.1`** - Base version (semantic meaning from tags)
- **`alpha.12345`** - Pre-release type and branch identification
- **`post.3`** - Commits since reference point
- **`[.dev.timestamp]`** - Optional dev timestamp for uncommitted changes
- **`[+BUILD_CONTEXT]`** - Optional build context for traceability
**Key Point**: The core version `<BASE>-<PRE_RELEASE>.<POST>[.<DEV>]` contains all semantic meaning needed to understand Git state. The build context `[+BUILD_CONTEXT]` is optional and provides additional verbose information for easier interpretation and traceability.
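Because the build context is purely additive, the same Git state can be rendered with or without it at output time. A minimal sketch using the schema presets documented later in this README (outputs are indicative only and depend on your repository state):
```bash
# Indicative only: same feature-branch state, rendered with and without build context.
zerv flow --schema standard-no-context
# → 1.0.1-alpha.10192.post.1
zerv flow --schema standard-context
# → 1.0.1-alpha.10192.post.1+branch.name.1.g4e9af24
```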
**Version Variations**:
- **Tagged release**: `1.0.1`
- **Tagged pre-release**: `2.0.1-rc.1.post.2`
- **Branch from Tagged release**: `1.0.1-alpha.54321.post.1+feature.login.1.f4a8b9c`
- **Branch from Tagged pre-release**: `2.0.1-alpha.98765.post.3+fix.auth.bug.1.c9d8e7f`
- **Uncommitted changes**: `2.0.1-alpha.98765.post.4.dev.1729924622+fix.auth.bug.1.c9d8e7f`
#### Pre-release Resolution Strategy
**Default behavior**: All branches start as `alpha.<hash-id>` (hash-based identification); see the sketch after this list for how the defaults combine
**Configurable branch patterns**: Users can configure specific branches to use custom pre-release types (alpha, beta, rc) with optional numbers:
- Example: `feature/user-auth` branch → `beta.12345` (label only, uses hash-based number)
- Example: `develop` branch → `beta.1` (label and custom number for stable branches)
- Any branch can be mapped to any pre-release type (alpha, beta, rc) with hash-based or custom numbers
**Branch name resolution**: Extract pre-release information from branch name patterns:
- Example: `release/1/feature-auth-fix` → `rc.1` (extracts number from branch pattern)
- Simplified GitFlow-inspired naming conventions
- **Note**: Branch names are conventions, not strict requirements - Zerv provides flexible pattern matching and user configuration.
**Clean branches**: `main`, `master` → No pre-release (clean releases)
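A rough end-to-end sketch of how these defaults combine (version strings are indicative only; hash IDs, post counts, and build context depend on the repository state):
```bash
# Indicative only: hash IDs, post counts, and build context depend on the repository state.
git checkout main               && zerv flow   # → 1.0.0                          (clean release)
git checkout develop            && zerv flow   # → 1.0.1-beta.1.post.3+…          (configured stable branch)
git checkout release/1          && zerv flow   # → 1.0.1-rc.1.post.1+…            (number extracted from branch name)
git checkout feature/user-auth  && zerv flow   # → 1.0.1-alpha.<hash-id>.post.1+… (hash-based identification)
```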
**Post-release resolution logic**:
- **Configurable post representation** with two options:
- **Tag Distance**: Count commits from last tag
- **Commit Distance**: Count commits from branch creation point
- **Default**: Tag Distance (most common use case)
- **`post.0`**: Exactly on reference point (no commits since)
- **`post.N`**: N commits since reference point
- **Consistent across all branch types** (alpha, beta, rc, etc.)
**Examples**:
**Tag Distance (release branches):**
```
main: v1.0.0 (tag)
└── release/1 (created) → create tag v1.0.1-rc.1.post.1
└── 1 commit → 1.0.1-rc.1.post.1.dev.1729924622 (same post, dev timestamp)
└── 2 commits → 1.0.1-rc.1.post.1.dev.1729924623 (same post, dev timestamp)
└── create tag → 1.0.1-rc.1.post.2 (new tag increments post)
└── more commits → 1.0.1-rc.1.post.2.dev.1729924624 (new post, dev timestamp)
```
**Commit Distance (develop branch):**
```
main: v1.0.0 (tag)
└── develop (created from v1.0.0) → first commit → 1.0.1-beta.1.post.1 (1 commit since branch creation)
└── 5 commits later → 1.0.1-beta.1.post.6 (6 commits since branch creation)
└── 1 more commit → 1.0.1-beta.1.post.7 (7 commits since branch creation)
```
#### Workflow Examples
This section demonstrates how Zerv Flow works across different branching strategies and Git scenarios.
**Note**: To keep diagrams clean and readable, build context is omitted from version strings in the examples. Dirty state (`.dev.timestamp`) is shown in diagrams when applicable.
**Example**: A commit appears as `1.0.1-alpha.12345.post.3.dev.1729924622` in the diagrams. With build context enabled: `1.0.1-alpha.12345.post.3.dev.1729924622+feature.user-auth.3.a1b2c3d`
##### Trunk-Based Development
**Purpose**: Complex trunk-based workflow with parallel features, nested branches, and synchronization scenarios.
**Scenario**: Development from `v1.0.0` with parallel feature branches, synchronization, and nested development.
<!-- MERMAID_START: git-diagram-trunk-based-development.mmd -->
```mermaid
---
config:
logLevel: 'debug'
theme: 'base'
---
gitGraph
%% Step 1: Initial commit on main with v1.0.0 tag
commit id: "1.0.0"
%% Step 2: Create parallel feature branches feature-1 and feature-2 from main
branch feature-1 order: 2
branch feature-2 order: 3
%% Step 3: feature-2: Start development with dirty state
checkout feature-2
commit type:REVERSE id: "1.0.1-alpha.68031.post.0.dev.{timestamp}" tag: "uncommitted"
%% Step 4: feature-2: Create first commit
commit id: "1.0.1-alpha.68031.post.1"
%% Step 5: feature-1: Create commits (parallel development)
checkout feature-1
commit id: "1.0.1-alpha.42954.post.1"
commit id: "1.0.1-alpha.42954.post.2"
%% Step 6: feature-1: Merge to main and release v1.0.1
checkout main
merge feature-1 id: "1.0.1" tag: "feature-1 released"
%% Step 7: feature-2: Sync with main to get feature-1 changes
checkout feature-2
merge main id: "1.0.2-alpha.68031.post.2"
%% Step 8: feature-2: Create additional commit
commit id: "1.0.2-alpha.68031.post.3"
%% Step 9: feature-3: Branch from feature-2 for sub-feature development
branch feature-3 order: 4
checkout feature-3
commit id: "1.0.2-alpha.14698.post.4"
%% Step 10: feature-3: Continue development with dirty state
commit type:REVERSE id: "1.0.2-alpha.14698.post.4.dev.{timestamp}" tag: "uncommitted"
%% Step 11: feature-3: Continue development with commits
commit id: "1.0.2-alpha.14698.post.5"
commit id: "1.0.2-alpha.14698.post.6"
%% Step 12: feature-2: Merge feature-3 back to continue development
checkout feature-2
merge feature-3 id: "1.0.2-alpha.68031.post.6" tag: "feature-3 merged"
%% Step 13: feature-2: Final development before release
commit id: "1.0.2-alpha.68031.post.7"
%% Step 14: Final release: feature-2 merges to main and releases v1.1.0
checkout main
merge feature-2 id: "1.1.0" tag: "feature-2 released"
```
<!-- MERMAID_END -->
**Key behaviors demonstrated**:
- **Parallel development**: `feature-1` and `feature-2` get unique hash IDs (`42954`, `68031`)
- **Version progression**: Base version updates when syncing (`1.0.1` → `1.0.2`)
- **Dirty state**: Uncommitted changes show `.dev.timestamp` suffix
- **Nested branches**: `feature-3` branches from `feature-2` with independent versioning
- **Clean releases**: Main branch maintains semantic versions on merges
<!-- Corresponding test: tests/integration_tests/flow/scenarios/trunk_based.rs:test_trunk_based_development -->
##### GitFlow Branching Strategy
**Purpose**: GitFlow methodology with proper pre-release type mapping and merge patterns.
**Scenario**: Main branch with `v1.0.0`, develop branch integration, feature development, hotfix emergency flow, and release preparation.
<!-- MERMAID_START: git-diagram-gitflow-development-flow.mmd -->
```mermaid
---
config:
logLevel: 'debug'
theme: 'base'
---
gitGraph
%% Step 1: Initial state: main and develop branches
commit id: "1.0.0"
%% Step 2: Create develop branch with initial development commit
branch develop order: 3
checkout develop
commit id: "1.0.1-beta.1.post.1"
%% Step 3: Feature development from develop branch
branch feature/auth order: 4
checkout feature/auth
commit id: "1.0.1-alpha.92409.post.2"
commit id: "1.0.1-alpha.92409.post.3"
checkout develop
%% Step 4: Merge feature/auth back to develop
merge feature/auth id: "1.0.1-beta.1.post.3" tag: "feature merged"
%% Step 5: Hotfix emergency flow from main
checkout main
branch hotfix/critical order: 1
checkout hotfix/critical
commit id: "1.0.1-alpha.11477.post.1"
checkout main
%% Step 6: Merge hotfix to main and release v1.0.1
merge hotfix/critical id: "1.0.1" tag: "hotfix released"
%% Step 7: Sync develop with main changes and continue development
checkout develop
merge main id: "1.0.2-beta.1.post.4" tag: "sync main"
%% Step 8: Continue development on develop branch
commit id: "1.0.2-beta.1.post.5"
%% Step 9: Release branch preparation
branch release/1 order: 2
checkout release/1
commit id: "1.0.2-rc.1.post.1"
commit id: "1.0.2-rc.1.post.2"
commit type:REVERSE id: "1.0.2-rc.1.post.3.dev.{timestamp}" tag: "uncommitted"
commit id: "1.0.2-rc.1.post.3"
checkout main
%% Step 10: Final release: merge release/1 to main
merge release/1 id: "1.1.0" tag: "release 1.1.0"
%% Step 11: Sync develop with release and prepare for next cycle
checkout develop
merge main id: "1.1.1-beta.1.post.1" tag: "sync release"
```
<!-- MERMAID_END -->
**Key behaviors demonstrated**:
- **Beta pre-releases**: Develop branch uses `beta` for integration builds
- **Alpha pre-releases**: Feature branches use `alpha` with hash-based identification
- **RC pre-releases**: Release branches use `rc` for release candidates
- **Clean releases**: Main branch maintains clean versions without pre-release suffixes
- **Hotfix flow**: Emergency fixes from main with proper version propagation
- **Branch synchronization**: Develop branch syncs with main releases
<!-- Corresponding test: tests/integration_tests/flow/scenarios/gitflow.rs:test_gitflow_development_flow -->
##### Complex Release Management
**Purpose**: Complex release branch scenarios including branch abandonment and cascading release preparation.
**Scenario**: Main branch with `v1.0.0`, release branch preparation with critical issues leading to abandonment, and selective branch creation for successful release.
<!-- MERMAID_START: git-diagram-complex-release-branch.mmd -->
```mermaid
---
config:
logLevel: 'debug'
theme: 'base'
---
gitGraph
%% Step 1: Initial state: main branch with v1.0.0 tag
commit id: "1.0.0" tag: "v1.0.0"
%% Step 2: Create release/1 from main for next release preparation
branch release/1 order: 2
checkout release/1
commit id: "1.0.1-rc.1.post.1"
commit id: "1.0.1-rc.1.post.2"
%% Step 3: Create release/2 from the second commit of release/1 (before issues)
%% release/1 at this point: 1.0.1-rc.1.post.2, so release/2 continues from there
checkout release/1
branch release/2 order: 1
checkout release/2
commit id: "1.0.1-rc.2.post.3"
%% Step 4: Go back to release/1 and add the problematic third commit (issues found)
checkout release/1
commit id: "1.0.1-rc.1.post.3" tag: "issues found"
%% Step 5: release/2 completes preparation successfully
checkout release/2
commit id: "1.0.1-rc.2.post.4"
%% Step 6: Merge release/2 to main and release v1.1.0
checkout main
merge release/2 id: "1.1.0" tag: "v1.1.0"
```
<!-- MERMAID_END -->
**Version progression details**:
- **release/1**: `1.0.1-rc.1.post.1` → `1.0.1-rc.1.post.2` → `1.0.1-rc.1.post.3` (abandoned)
- **release/2**: Created from `release/1`'s second commit (`1.0.1-rc.1.post.2`), continues as `1.0.1-rc.2.post.3` → `1.0.1-rc.2.post.4`
- **Main**: Clean progression `1.0.0` → `1.1.0` (only from successful `release/2` merge)
**Key behaviors demonstrated**:
- **Branch isolation**: Each release branch maintains independent versioning regardless of parent/child relationships
- **Selective branching**: Zerv Flow correctly handles branches created from specific historical commits
- **Abandonment handling**: Unmerged branches don't affect final release versions on main
- **Cascade management**: Complex branching scenarios where releases feed into other releases are handled transparently
- **Clean main branch**: Main only receives versions from successfully merged releases, maintaining clean semantic versioning
<!-- Corresponding test: tests/integration_tests/flow/scenarios/complex_release_branch.rs:test_complex_release_branch_abandonment -->
#### Schema Variants: 10+ Standard Schema Presets
**Purpose**: Complete control over version generation with 10+ standard schema presets (20+ presets in total across the standard and calver families) and extensive customization options.
**Schema Selection Examples**:
```bash
zerv flow --schema standard-base
# → 1.0.1 (test case 1)
zerv flow --schema standard-base-context
# → 1.0.1+branch.name.g4e9af24 (test case 2)
zerv flow --schema standard-base-prerelease
# → 1.0.1-alpha.10192 (test case 3)
zerv flow --schema standard-base-prerelease-context
# → 1.0.1-alpha.10192+branch.name.1.g4e9af24 (test case 4)
zerv flow --schema standard-base-prerelease-post
# → 1.0.1-alpha.10192.post.1 (test case 5)
zerv flow --schema standard-base-prerelease-post-context
# → 1.0.1-alpha.10192.post.1+branch.name.1.g4e9af24 (test case 6)
zerv flow --schema standard-base-prerelease-post-dev
# → 1.0.1-alpha.10192.post.1.dev.1764382150 (test case 7)
zerv flow --schema standard-base-prerelease-post-dev-context
# → 1.0.1-alpha.10192.post.1.dev.1764382150+branch.name.1.g4e9af24 (test case 8)
zerv flow --schema standard
# → 1.0.0 (clean main - test case 9)
# → 1.0.1-rc.1 (release branch - test case 10)
# → 1.0.1-alpha.10192.post.1+branch.name.1.g4e9af24 (feature branch - test case 11)
# → 1.0.1-alpha.10192.post.1.dev.1764382150+branch.name.1.g4e9af24 (dirty feature branch - test case 12)
zerv flow --schema standard-no-context
# → 1.0.0 (clean main - test case 13)
# → 1.0.1-rc.1 (release branch - test case 14)
# → 1.0.1-alpha.10192.post.1 (feature branch - test case 15)
# → 1.0.1-alpha.10192.post.1.dev.1764382150 (dirty feature branch - test case 16)
zerv flow --schema standard-context
# → 1.0.0+main.g4e9af24 (clean main - test case 17)
# → 1.0.1-rc.1+release.1.do.something.g4e9af24 (release branch - test case 18)
# → 1.0.1-alpha.10192.post.1+branch.name.1.g4e9af24 (feature branch - test case 19)
# → 1.0.1-alpha.10192.post.1.dev.1764382150+branch.name.1.g4e9af24 (dirty feature branch - test case 20)
```
<!-- Corresponding test: tests/integration_tests/flow/docs/schema_variants.rs:test_schema_variants_documentation_examples -->
#### Branch Rules: Configurable Pattern Matching
**Purpose**: Map branch names to pre-release labels, numbers, and post modes for automated version generation.
**Default GitFlow Rules**:
```ron
[
(pattern: "develop", pre_release_label: beta, pre_release_num: 1, post_mode: commit),
(pattern: "release/*", pre_release_label: rc, post_mode: tag)
]
```
**Pattern Matching**:
- **Exact**: `"develop"` matches only `"develop"`
- **Wildcard**: `"release/*"` matches `"release/1"`, `"release/42"`, `"release/1/feature"`, etc.
- **Number extraction**:
- With numbers: `release/1` → `rc.1`, `release/1/feature` → `rc.1`
- Without numbers: `release/feature` → `rc.<hash-id>` (fallback to hash-based identification)
- **Other branches**: `*`, `feature/*`, `hotfix/*`, `bugfix/*`, etc. → `alpha.<hash-id>` (fallback to hash-based identification)
**Examples**:
```bash
# Default GitFlow behavior
zerv flow
# → 1.0.1-rc.1.post.1+release.1.do.something.1.g3a2b1c4 (release/1/do-something branch - test case 1)
# → 1.0.1-beta.1.post.1+develop.1.g8f7e6d5 (develop branch - test case 2)
# → 1.0.1-alpha.10192.post.1+branch.name.1.g9d8c7b6 (feature branch - test case 3)
# → 1.0.1-rc.48993.post.1+release.do.something.1.g5e4f3a2 (release/do-something branch - test case 4)
# Custom branch rules
zerv flow --branch-rules '[
(pattern: "staging", pre_release_label: rc, pre_release_num: 1, post_mode: commit),
(pattern: "qa/*", pre_release_label: beta, post_mode: tag)
]'
# → 1.0.1-rc.1.post.1+staging.1.g2c3d4e5 (staging branch - test case 5)
# → 1.0.1-beta.123.post.1+qa.123.1.g7b8c9d0 (qa branch - test case 6)
# → 1.0.1-alpha.20460.post.1+feature.new.feature.1.g1d2e3f4 (feature branch - test case 7)
```
**Configuration**:
- **`pattern`**: Branch name (exact) or wildcard (`/*`)
- **`pre_release_label`**: `alpha`, `beta`, or `rc`
- **`pre_release_num`**: Explicit number (exact) or extracted (wildcard)
- **`post_mode`**: `commit` (count commits) or `tag` (count tags)
<!-- Corresponding test: tests/integration_tests/flow/docs/branch_rules.rs:test_branch_rules_documentation_examples -->
#### Override Controls: Complete Version Customization
**Override Options**: VCS, version components, and pre-release controls
```bash
# VCS Overrides
zerv flow --tag-version "v2.1.0-beta.1" # Override tag version
# → 2.1.0
zerv flow --distance 42 # Override distance from tag
# → 1.0.1-alpha.60124.post.42+feature.test.42.g8f4e3a2
zerv flow --dirty # Force dirty=true
# → 1.0.1-alpha.18373.dev.1729927845+feature.dirty.ga1b2c3d
zerv flow --no-dirty # Force dirty=false
# → 1.0.0+feature.clean.g4d5e6f7
zerv flow --clean # Force clean state (distance=0, dirty=false)
# → 1.0.0+feature.clean.force.g8a9b0c1
zerv flow --bumped-branch "release/42" # Override branch name
# → 1.0.1-rc.42.post.1+release.42.1.g2c3d4e5
zerv flow --bumped-commit-hash "a1b2c3d" # Override commit hash
# → 1.0.1-alpha.48498.post.1+feature.hash.1.a1b2c3d
zerv flow --bumped-timestamp 1729924622 # Override timestamp
# → 1.0.1-alpha.18321.dev.1764598322+feature.timestamp.g7f8e9a0
# Version Component Overrides
zerv flow --major 2 # Override major
# → 2.0.0
zerv flow --minor 5 # Override minor
# → 1.5.0
zerv flow --patch 3 # Override patch
# → 1.0.3
zerv flow --epoch 1 # Override epoch
# → 1.0.0-epoch.1
zerv flow --post 7 # Override post
# → 1.0.1-alpha.15355.post.8+feature.post.1.g6b7c8d9 (post affects build context)
# Pre-release Controls
zerv flow --pre-release-label rc # Set pre-release type
# → 1.0.1-rc.10180.post.1+feature.pr.label.1.g3d4e5f6
zerv flow --pre-release-num 3 # Set pre-release number
# → 1.0.1-alpha.3.post.1+feature.pr.num.1.g9a0b1c2
zerv flow --post-mode commit # Set distance calculation method
# → 1.0.1-alpha.17003.post.1+feature.post.mode.1.g1d2e3f4
```
<!-- Corresponding test: tests/integration_tests/flow/docs/override_controls.rs:test_individual_override_options -->
**Usage Examples**:
```bash
# VCS overrides
zerv flow --tag-version "v2.0.0" --distance 5 --bumped-branch "release/candidate"
# → 2.0.1-rc.71808.post.1+release.candidate.5.gb2c3d4e
# Version component overrides
zerv flow --major 2 --minor 5 --patch 3
# → 2.5.3
# Mixed overrides: VCS + version components
zerv flow --distance 3 --major 2 --minor 1
# → 2.1.1-alpha.60124.post.3+feature.test.3.g8f4e3a2
# Clean release with overrides
zerv flow --clean --major 2 --minor 0 --patch 0
# → 2.0.0+feature.clean.force.g8a9b0c1
# Complex multi-override scenario
zerv flow --tag-version "v1.5.0-rc.1" --bumped-commit-hash "f4a8b9c" --major 1 --minor 6
# → 1.6.0-alpha.11178.post.2+dev.branch.2.f4a8b9c
```
<!-- Corresponding test: tests/integration_tests/flow/docs/override_controls.rs:test_override_controls_documentation_examples -->
### zerv version: Manual control with 4 main capability areas
**Purpose**: Complete manual control over version generation with flexible schema variants and granular customization options.
**Note**: Unlike `zerv flow`, `zerv version` generates versions as-is, without opinionated auto-bumping: it neither increments post counts based on commits or tags nor derives pre-release labels and numbers from branch patterns. It is general-purpose version generation.
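A quick side-by-side sketch of that difference, reusing output values from the documented examples in this README (indicative only; actual values depend on your tags and branch):
```bash
# Indicative only: outputs reuse documented example values from this README.
zerv flow --schema standard-base-prerelease-post
# → 1.0.1-alpha.10192.post.1   (patch auto-bumped, pre-release and post derived from the branch)
zerv version --schema standard-base-prerelease
# → 1.0.0-alpha.1              (components rendered as detected or overridden; no auto-bumping)
```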
#### Schema Variants: 20+ presets (standard, calver families) and custom RON schemas
**Purpose**: Choose from 20+ predefined version schemas or create custom RON-based schemas for complete format control.
**Schema Selection Examples**:
```bash
zerv version --schema standard-base
# → 1.0.0 (test case 1)
zerv version --schema standard-base-context
# → 1.0.0+branch.name.1.g4e9af24 (test case 2)
zerv version --schema standard-base-prerelease
# → 1.0.0-alpha.1 (test case 3)
zerv version --schema standard-base-prerelease-post-dev-context
# → 1.0.0-alpha.1.post.5.dev.123+branch.name.1.g4e9af24 (test case 4)
zerv version --schema calver-base-prerelease-post-dev-context
# → 2025.12.4-0.alpha.1.post.5.dev.123+branch.name.1.g4e9af24 (test case 5)
# Custom RON Schemas
zerv version --schema-ron '(core:[var(Major), var(Minor), var(Patch)], extra_core:[], build:[])'
# → 1.0.0 (test case 6)
zerv version --schema-ron '(core:[var(Major), var(Minor), var(Patch)], extra_core:[], build:[str("build.id")])'
# → 1.0.0+build.id (test case 7)
zerv version --schema-ron '(
core: [var(Major), var(Minor), var(Patch)],
extra_core: [var(PreRelease), var(Post), var(Dev)],
build: [var(BumpedBranch), var(Distance), var(BumpedCommitHashShort)]
)'
# → 1.0.0-alpha.1.post.5.dev.123+branch.name.1.g4e9af24 (test case 8, equivalent to standard-base-prerelease-post-dev-context)
zerv version --schema-ron '(
core: [var(ts("YYYY")), var(ts("MM")), var(ts("DD"))],
extra_core: [var(PreRelease), var(Post), var(Dev)],
build: [var(BumpedBranch), var(Distance), var(BumpedCommitHashShort)]
)'
# → 2025.12.4-0.alpha.1.post.5.dev.123+branch.name.1.g{hex:7} (test case 9, equivalent to calver-base-prerelease-post-dev-context)
```
**Schema Architecture**: All schemas resolve to the internal `ZervSchema` struct with three required components:
- **`core`**: Primary version components (e.g., `[Major, Minor, Patch]` for SemVer)
- **`extra_core`**: Additional version components (e.g., pre-release, post-release, dev)
- **`build`**: Build metadata components (e.g., commit hash, branch name, build info)
**Schema Resolution**: Preset schemas (`standard-base`, `calver-*`, etc.) are predefined `ZervSchema` objects that adapt based on repository state. RON schemas are parsed from text into the same `ZervSchema` structure, providing identical functionality with custom definitions.
**Examples**:
- Test case 8: RON schema equivalent to `standard-base-prerelease-post-dev-context` (test case 4)
- Test case 9: RON schema equivalent to `calver-base-prerelease-post-dev-context` (test case 5), demonstrating date formatting with `var(ts("YYYY"))`
#### VCS Overrides: Override tag version, distance, dirty state, branch, commit data
**Purpose**: Override any VCS (Version Control System) detected values for complete control over version components.
```bash
zerv version --tag-version "v2.1.0-beta.1"
# → 2.1.0-beta.1+branch.name.1.g4e9af24 (test case 1)
zerv version --distance 42
# → 1.0.0-alpha.1.post.5.dev.123+branch.name.42.g8f4e3a2 (test case 2)
zerv version --dirty
# → 1.0.0-alpha.1.post.5.dev.123+branch.name.1.g4e9af24 (test case 3)
zerv version --bumped-branch "release/42"
# → 1.0.0-alpha.1.post.5.dev.123+release.42.1.g4e9af24 (test case 4)
```
<!-- Corresponding test: tests/integration_tests/version/docs/vcs_overrides.rs:test_zerv_version_vcs_overrides_documentation_examples -->
#### Version Bumping: Field-based bumps (major/minor/patch) and schema-based bumps
**Purpose**: Increment version components using field-based or schema-based strategies.
```bash
zerv version --bump-major
# → 2.0.0 (test case 1)
zerv version --bump-minor
# → 1.1.0 (test case 2)
zerv version --bump-patch
# → 1.0.1 (test case 3)
zerv version --bump-major --bump-minor
# → 2.1.0 (test case 4)
zerv version --bump-core 0
# → 2.0.0 (test case 5, schema-based bump targeting core component index 0/major)
zerv version --bump-major --bump-minor --bump-patch
# → 2.1.1 (test case 6)
zerv version --bump-major 2
# → 3.0.0 (test case 7)
```
<!-- Corresponding test: tests/integration_tests/version/docs/version_bumping.rs:test_zerv_version_version_bumping_documentation_examples -->
#### Component Overrides: Fine-grained control over individual version components
**Purpose**: Override specific version components while preserving all other detected values for precise version control.
**Override Categories**: Individual components, pre-release controls, and custom variables
```bash
# Version component overrides (major, minor, patch)
zerv version --major 2 --minor 5
# → 2.5.0+branch.name.1.g{hex:7} (test case 1)
# Pre-release component overrides (label and number)
zerv version --schema standard-base-prerelease-post-context --pre-release-label rc --pre-release-num 3
# → 1.0.0-rc.3+branch.name.1.g{hex:7} (test case 2)
# Additional component overrides (epoch, post, dev)
zerv version --schema standard-base-prerelease-post-dev-context --epoch 1 --post 7 --dev 456
# → 1.0.0-epoch.1.post.7.dev.456+branch.name.1.g{hex:7} (test case 3)
# Custom variables in schema-ron (requires schema-ron)
zerv version --schema-ron '(
core: [var(Major), var(Minor), var(Patch)],
extra_core: [],
build: [var(custom("build_id")), var(custom("environment"))]
)' --custom '{"build_id": "prod-123", "environment": "staging"}'
# → 1.0.0+prod.123.staging (test case 4)
```
<!-- Corresponding test: tests/integration_tests/version/docs/component_overrides.rs:test_zerv_version_component_overrides_documentation_examples -->
#### Version Check: Validate version strings for different formats
**Purpose**: Validate that version strings conform to specific format requirements with support for multiple version standards.
```bash
# Check complex SemVer format validation
zerv check --format semver 1.0.0-rc.1.something.complex+something.complex
# → Version: 1.0.0-rc.1.something.complex+something.complex
# ✓ Valid SemVer format (test case 1)
# Check PEP440 format validation with build metadata
zerv check --format pep440 1.0.0a2.post5.dev3+something.complex
# → Version: 1.0.0a2.post5.dev3+something.complex
# ✓ Valid PEP440 format (test case 2)
# Check PEP440 format validation with normalization
zerv check --format pep440 1.0.0-alpha.2.post.5.dev.3+something.complex
# → Version: 1.0.0-alpha.2.post.5.dev.3+something.complex
# ✓ Valid PEP440 format (normalized: 1.0.0a2.post5.dev3+something.complex) (test case 3)
# Invalid version handling (fails with exit code 1)
zerv check --format semver invalid
# → Error: Invalid version: invalid - Invalid SemVer format (test case 4)
# Auto-detect and validate multiple formats
zerv check 2.1.0-beta.1
# → Version: 2.1.0-beta.1
# ✓ Valid PEP440 format (normalized: 2.1.0b1)
# ✓ Valid SemVer format (test case 5)
```
<!-- Corresponding test: tests/integration_tests/version/docs/version_validation.rs:test_zerv_check_documentation_examples -->
#### Input/Output & Piping: Shared capabilities for both commands
**Purpose**: Flexible input handling and output formatting with pipeline support for both `zerv version` and `zerv flow` commands.
```bash
# Source options - Use Git VCS or stdin for version data
zerv flow --source git
# → 1.0.1-alpha.10192.post.1.dev.1764382150+branch.name.1.g4e9af24 (VCS auto-detection)
# (test case 1)
# zerv RON format - Internal/debugging output and intermediate representation
# Used as stdin input for zerv version and zerv flow commands
zerv flow --output-format zerv
# → (
# schema: (
# core: [var(Major), var(Minor), var(Patch)],
# extra_core: [var(Epoch), var(PreRelease), ...],
# build: [var(BumpedBranch), var(Distance), ...]
# ),
# vars: (
# major: Some(1), minor: Some(0), patch: Some(1),
# pre_release: Some((label: Alpha, number: Some(123))),
# bumped_branch: Some("feature-branch"),
# bumped_commit_hash: Some("gabc123def"),
# ...
# )
# )
# (test case 2)
# Pipeline chaining - Multiple transformations
# Note: Upstream command must output --output-format zerv for stdin piping to work
zerv flow --source git --output-format zerv | zerv version --source stdin --major 4 --output-format semver
# → 4.0.1-alpha.10192.post.1.dev.1764382150+branch.name.1.g4e9af24
# (test case 3)
zerv flow --output-format pep440
# 1.0.1a10192.post1.dev1764382150+branch.name.1.g4e9af24
# (test case 4)
zerv flow --output-format semver
# 1.0.1-alpha.10192.post.1.dev.1764902466+branch.name.1.g4e9af24
# (test case 5)
zerv flow --output-prefix v --output-format semver
# v1.0.1-alpha.10192.post.1.dev.1764902466+branch.name.1.g4e9af24
# (test case 6)
zerv flow --output-template "app:{{ major }}.{{ minor }}.{{ patch }}"
# app:1.0.1
# (test case 7)
zerv flow --output-template "{{ semver_obj.docker }}"
# 1.0.1-alpha.10192.post.1.dev.1764902466-branch.name.1.g4e9af24
# (test case 8)
zerv flow --output-template "{{ semver_obj.base_part }}++{{ semver_obj.pre_release_part }}++{{ semver_obj.build_part }}"
# 1.0.1++alpha.10192.post.1.dev.1764902466++branch.name.1.g4e9af24
# (test case 9)
# Comprehensive template examples
zerv flow --output-template "Build: {{ major }}.{{ minor }}.{{ patch }}-{{ pre_release.label | default(value='release') }}{% if pre_release.number %}{{ pre_release.number }}{% endif %} ({{ bumped_branch }}@{{ bumped_commit_hash_short }})"
# → Build: 1.0.1-alpha59394 (feature.new.auth@g4e9af24)
# (test case 10)
zerv flow --output-template "Version: {{ semver_obj.docker }}, Branch: {{ bumped_branch | upper }}, Clean: {% if dirty %}No{% else %}Yes{% endif %}"
# → Version: 1.0.1-alpha.59394.post.1.dev.1764382150-branch.name.1.g54c499a, Branch: DIRTY.FEATURE.WORK, Clean: No
# (test case 11)
zerv flow --output-template "{% if distance %}{{ distance }} commits since {% if last_timestamp %}{{ format_timestamp(value=last_timestamp, format='%Y-%m-%d') }}{% else %}beginning{% endif %}{% else %}Exact tag{% endif %}"
# → 1 commits since 2025-12-05
# (test case 12)
zerv flow --output-template "App-{{ major }}{{ minor }}{{ patch }}{% if pre_release %}-{{ pre_release.label }}{% endif %}{% if dirty %}-SNAPSHOT{% endif %}-{{ hash(value=bumped_branch, length=4) }}"
# → App-101-alpha-SNAPSHOT-a1b2
# (test case 13)
zerv flow --output-template "PEP440: {{ pep440 }}"
# → PEP440: 1.0.1a10192.post1.dev1764909598+branch.name.1.g4e9af24
# (test case 14)
zerv flow --output-template "Release: v{{ major }}.{{ minor }}.{{ patch }}, Pre: {{ pre_release.label_code | default(value='release') }}, Hash: {{ bumped_commit_hash_short }}"
# → Release: v1.0.1, Pre: a, Hash: g4e9af24
# (test case 15)
```
<!-- Corresponding test: tests/integration_tests/flow/docs/io.rs:test_io_documentation_examples -->
- **Smart Source Detection**: Auto-detects input source (stdin if piped, git otherwise)
```bash
# Implicit source detection (auto: git if no stdin, stdin if piped)
zerv version
# Explicit git source
zerv version --source git
# Pipe between commands (implicit stdin detection)
zerv version --output-format zerv | zerv version
# Pipe between commands (explicit stdin source)
zerv version --output-format zerv | zerv version --source stdin
# No VCS - use overrides only
zerv version --source none --tag-version 1.2.3 --distance 5
```
##### Template System: Advanced custom formatting
**Purpose**: Complete control over version output using Tera templating with extensive variables, functions, and logical operations.
**Note**: Zerv uses the [Tera templating engine](https://keats.github.io/tera/docs/), which provides powerful template features including conditionals, loops, filters, and custom functions.
###### Available Template Variables
**Core Version Fields**:
- `major`, `minor`, `patch` - Version numbers
- `epoch` - Epoch version (optional)
- `post`, `dev` - Post-release and dev identifiers
**Pre-release Context**:
- `pre_release.label` - Pre-release type ("alpha", "beta", "rc")
- `pre_release.number` - Pre-release number
- `pre_release.label_code` - Short code ("a", "b", "rc")
- `pre_release.label_pep440` - PEP440 format ("a", "b", "rc")
**VCS/Metadata Fields**:
- `distance` - Commits from reference point
- `dirty` - Working directory dirty state
- `bumped_branch` - Branch name
- `bumped_commit_hash` - Full commit hash
- `bumped_commit_hash_short` - Short commit hash
- `bumped_timestamp` - Commit timestamp
- `last_commit_hash` - Last tag commit hash
- `last_commit_hash_short` - Short last tag commit hash
- `last_timestamp` - Last tag timestamp
**Parsed Version Objects**:
- `semver_obj.base_part` - "1.2.3"
- `semver_obj.pre_release_part` - "alpha.1.post.3.dev.5"
- `semver_obj.build_part` - "build.456"
- `semver_obj.docker` - "1.2.3-alpha.1-build.456"
- `pep440_obj.base_part` - "1.2.3"
- `pep440_obj.pre_release_part` - "a1.post3.dev5"
- `pep440_obj.build_part` - "build.456"
**Formatted Versions**:
- `semver` - Full SemVer string
- `pep440` - Full PEP440 string
- `current_timestamp` - Current Unix timestamp
###### Custom Template Functions
**String Manipulation**:
- `sanitize(value=variable, preset='dotted')` - Sanitize with presets: "semver", "pep440", "uint"
- `sanitize(value=variable, separator='-', lowercase=true, max_length=10)` - Custom sanitization
- `prefix(value=variable, length=10)` - Extract first N characters
- `prefix_if(value=variable, prefix="+")` - Add prefix only if value not empty
**Hashing & Formatting**:
- `hash(value=variable, length=7)` - Generate hex hash
- `hash_int(value=variable, length=7, allow_leading_zero=false)` - Numeric hash
- `format_timestamp(value=timestam | text/markdown; charset=UTF-8; variant=GFM | null | null | null | null | null | cli, git, semver, versioning | [
"Development Status :: 4 - Beta",
"Intended Audience :: Developers",
"License :: OSI Approved :: Apache Software License",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programming L... | [] | https://github.com/wislertt/zerv | null | >=3.10 | [] | [] | [] | [] | [] | [] | [] | [] | uv/0.10.4 {"installer":{"name":"uv","version":"0.10.4","subcommand":["publish"]},"python":null,"implementation":{"name":null,"version":null},"distro":{"name":"macOS","version":null,"id":null,"libc":null},"system":{"name":null,"release":null},"cpu":null,"openssl_version":null,"setuptools_version":null,"rustc_version":null,"ci":true} | 2026-02-19T11:17:41.854718 | zerv_version-0.8.6-py3-none-macosx_10_12_x86_64.whl | 2,483,201 | 1f/d4/68e8efe17940eef90d725121a6cae3e0e4b68f788a4e08a284c664335b51/zerv_version-0.8.6-py3-none-macosx_10_12_x86_64.whl | py3 | bdist_wheel | null | false | 76a819ee01fc373c7fbe63eea5e60836 | c2d4ac96abb22b83b8957f8753fa20ed7dd9da0e1006a9841a7c08d51e4b0d4d | 1fd468e8efe17940eef90d725121a6cae3e0e4b68f788a4e08a284c664335b51 | Apache-2.0 | [
"LICENSE"
] | 1,451 |
2.4 | aionis-sdk | 0.1.4 | Python SDK for Aionis Memory Graph API | # aionis-sdk
Python SDK for Aionis Memory Graph API.
## Install
```bash
pip install aionis-sdk
```
## Usage
```python
import os
from aionis_sdk import AionisClient
client = AionisClient(
base_url="http://localhost:3001",
timeout_s=10.0,
api_key=os.getenv("API_KEY"), # optional: X-Api-Key
auth_bearer=os.getenv("AUTH_BEARER"), # optional: Authorization: Bearer <token>
admin_token=os.getenv("ADMIN_TOKEN"), # optional: X-Admin-Token
)
out = client.write(
{
"scope": "default",
"input_text": "python sdk write",
"auto_embed": False,
"nodes": [{"client_id": "py_evt_1", "type": "event", "text_summary": "hello python sdk"}],
"edges": [],
}
)
print(out["status"], out["request_id"], out["data"]["commit_id"])
```
## Typed payloads
Starting with version `0.1.4`, the SDK exports `TypedDict` API payloads from `aionis_sdk.types`:
```python
from aionis_sdk import AionisClient
from aionis_sdk.types import ToolsFeedbackInput, ToolsSelectInput
client = AionisClient(base_url="http://localhost:3001")
select_payload: ToolsSelectInput = {
"scope": "default",
"run_id": "run_001",
"context": {"intent": "json", "provider": "minimax", "tool": {"name": "curl"}},
"candidates": ["curl", "bash"],
"strict": True,
}
select_out = client.tools_select(select_payload)
decision_id = (select_out.get("data") or {}).get("decision", {}).get("decision_id")
feedback_payload: ToolsFeedbackInput = {
"scope": "default",
"run_id": "run_001",
"decision_id": decision_id,
"outcome": "positive",
"context": {"intent": "json", "provider": "minimax", "tool": {"name": "curl"}},
"candidates": ["curl", "bash"],
"selected_tool": "curl",
}
client.tools_feedback(feedback_payload)
```
## Auth Options
1. `api_key`: sends `X-Api-Key`.
2. `auth_bearer`: sends `Authorization: Bearer <token>`.
3. `admin_token`: sends `X-Admin-Token` (debug/admin flows).
## Covered methods
1. `write`
2. `recall`
3. `recall_text`
4. `rules_evaluate`
5. `tools_select`
6. `tools_feedback`
## Error model
1. `AionisApiError`: API returned non-2xx response.
2. `AionisNetworkError`: request timeout/network failure.
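A minimal sketch of catching both (the import path for the error classes is assumed to be the package root, like `AionisClient` in the examples above):
```python
import os
from aionis_sdk import AionisClient, AionisApiError, AionisNetworkError  # error import path assumed

client = AionisClient(base_url="http://localhost:3001", timeout_s=10.0, api_key=os.getenv("API_KEY"))

try:
    out = client.write(
        {
            "scope": "default",
            "input_text": "error handling demo",
            "auto_embed": False,
            "nodes": [{"client_id": "py_evt_2", "type": "event", "text_summary": "demo"}],
            "edges": [],
        }
    )
    print(out["status"])
except AionisApiError as exc:
    # API responded with a non-2xx status; inspect the exception for details.
    print("API error:", exc)
except AionisNetworkError as exc:
    # Request timed out or the network failed before a response arrived.
    print("Network error:", exc)
```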
## Smoke
```bash
cd /Users/lucio/Desktop/Aionis
set -a; source .env; set +a
npm run sdk:py:smoke
```
## Build check (repo local)
```bash
cd /Users/lucio/Desktop/Aionis
npm run sdk:py:compile
npm run sdk:py:release-check
```
| text/markdown | Aionis Core | null | null | null | UNLICENSED | aionis, memory-graph, sdk, python | [
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3 :: Only",
"Programming Language :: Python :: 3.9",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"License :: Other/Proprietary License"
] | [] | null | null | >=3.9 | [] | [] | [] | [] | [] | [] | [] | [] | twine/6.2.0 CPython/3.9.6 | 2026-02-19T11:17:23.426141 | aionis_sdk-0.1.4.tar.gz | 6,892 | eb/b8/45f71b7a5d384370940e6746555aba6751667e454215e87f3907d8628756/aionis_sdk-0.1.4.tar.gz | source | sdist | null | false | 89d92227a6d3109cce9bef683fcffaa1 | a58f95ee496074b59deb98ae85b2683831c99c2ebeaec6e927b0014a5543ead6 | ebb845f71b7a5d384370940e6746555aba6751667e454215e87f3907d8628756 | null | [] | 270 |
2.4 | snakemake-interface-software-deployment-plugins | 0.10.2 | This package provides a stable interface for interactions between Snakemake and its software deployment plugins. | # Stable interfaces and functionality for Snakemake software deployment plugins
This package provides a stable interface for interactions between Snakemake and its software deployment plugins.
It is still a work in progress, but completing it is the next big thing on our list.
| text/markdown | null | Johannes Köster <johannes.koester@uni-due.de> | null | null | null | null | [] | [] | null | null | <4.0,>=3.11 | [] | [] | [] | [
"argparse-dataclass<3.0,>=2.0.0",
"snakemake-interface-common<2.0.0,>=1.17.4"
] | [] | [] | [] | [
"repository, https://github.com/snakemake/snakemake-interface-software-deployment-plugins"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-19T11:17:14.907224 | snakemake_interface_software_deployment_plugins-0.10.2.tar.gz | 13,377 | 88/df/53b6bca0106cd56590bff62df564a8996f07be055ecc5cb6fdf827beda9e/snakemake_interface_software_deployment_plugins-0.10.2.tar.gz | source | sdist | null | false | e93ae470ccfaca4aed3a4fe5e9e6aac1 | b7cf84b0eda630675555a0ae99a7678eaa79ead0bc6633a5d3dfce194443bec6 | 88df53b6bca0106cd56590bff62df564a8996f07be055ecc5cb6fdf827beda9e | MIT | [
"LICENSE"
] | 664 |
2.1 | ml-management | 0.12.0rc15 | Python SDK for MLManagement platform | # mlmanagement
implementation of model pattern, dataset | text/markdown | null | ISPRAS MODIS <modis@ispras.ru> | Maxim Ryndin | null | null | null | [] | [] | null | null | <3.15,>=3.10 | [] | [] | [] | [
"sgqlc>=16",
"boto3<2,>=1.36",
"s3transfer<0.16.0,>=0.11.2",
"jsonschema<5,>=4.18",
"tqdm<5,>=4.66.6",
"pydantic<3,>=2",
"httpx<1",
"websocket-client<2,>1",
"pandas<3,>2",
"PyYAML<7,>6",
"numpy<2,>1.26",
"matplotlib<4,>=3.7",
"gql==3.5.0",
"websockets==13.1"
] | [] | [] | [] | [] | twine/6.1.0 CPython/3.10.18 | 2026-02-19T11:13:25.644958 | ml_management-0.12.0rc15.tar.gz | 100,174 | 70/5f/f568f3d0c21f8d0ad6ec0eff4066506477f92056917326f56d91728dec41/ml_management-0.12.0rc15.tar.gz | source | sdist | null | false | 94e1f9e68db87b9891fb4959824498f7 | 8b41f2a75297030e974735545024c529147bf31ec36859deba8c918941463e63 | 705ff568f3d0c21f8d0ad6ec0eff4066506477f92056917326f56d91728dec41 | null | [] | 169 |
2.4 | pubmed_classifier | 0.0.1 | Classify documents in PubMed | <!--
<p align="center">
<img src="https://github.com/cthoyt/pubmed-classifier/raw/main/docs/source/logo.png" height="150">
</p>
-->
<h1 align="center">
PubMed Classifier
</h1>
<p align="center">
<a href="https://github.com/cthoyt/pubmed-classifier/actions/workflows/tests.yml">
<img alt="Tests" src="https://github.com/cthoyt/pubmed-classifier/actions/workflows/tests.yml/badge.svg" /></a>
<a href="https://pypi.org/project/pubmed_classifier">
<img alt="PyPI" src="https://img.shields.io/pypi/v/pubmed_classifier" /></a>
<a href="https://pypi.org/project/pubmed_classifier">
<img alt="PyPI - Python Version" src="https://img.shields.io/pypi/pyversions/pubmed_classifier" /></a>
<a href="https://github.com/cthoyt/pubmed-classifier/blob/main/LICENSE">
<img alt="PyPI - License" src="https://img.shields.io/pypi/l/pubmed_classifier" /></a>
<a href='https://pubmed_classifier.readthedocs.io/en/latest/?badge=latest'>
<img src='https://readthedocs.org/projects/pubmed_classifier/badge/?version=latest' alt='Documentation Status' /></a>
<a href="https://codecov.io/gh/cthoyt/pubmed-classifier/branch/main">
<img src="https://codecov.io/gh/cthoyt/pubmed-classifier/branch/main/graph/badge.svg" alt="Codecov status" /></a>
<a href="https://github.com/cthoyt/cookiecutter-python-package">
<img alt="Cookiecutter template from @cthoyt" src="https://img.shields.io/badge/Cookiecutter-snekpack-blue" /></a>
<a href="https://github.com/astral-sh/ruff">
<img src="https://img.shields.io/endpoint?url=https://raw.githubusercontent.com/astral-sh/ruff/main/assets/badge/v2.json" alt="Ruff" style="max-width:100%;"></a>
<a href="https://github.com/cthoyt/pubmed-classifier/blob/main/.github/CODE_OF_CONDUCT.md">
<img src="https://img.shields.io/badge/Contributor%20Covenant-2.1-4baaaa.svg" alt="Contributor Covenant"/></a>
<!-- uncomment if you archive on zenodo
<a href="https://doi.org/10.5281/zenodo.XXXXXX">
<img src="https://zenodo.org/badge/DOI/10.5281/zenodo.XXXXXX.svg" alt="DOI"></a>
-->
</p>
Classify documents in PubMed.
## 💪 Getting Started
By default, `pubmed-classifier` uses `pubmed-downloader` for getting the title
and abstract for documents and `sentence-transformers` (or TF-IDF) for embedding
them.
```python
from pubmed_classifier import train, predict, predict_query
positive_pubmeds = []
negative_pubmeds = []
classifiers = train(positive_pubmeds, negative_pubmeds)
target_pubmeds = []
_, results = predict(target_pubmeds, classifier=classifiers.logistic_regression)
# Predict over results from a given query
query = "database OR ontology"
pubmeds, results = predict_query(query, classifier=classifiers.logistic_regression)
```
## 🚀 Installation
The most recent release can be installed from
[PyPI](https://pypi.org/project/pubmed_classifier/) with uv:
```console
$ uv pip install pubmed_classifier
```
or with pip:
```console
$ python3 -m pip install pubmed_classifier
```
The most recent code and data can be installed directly from GitHub with uv:
```console
$ uv pip install git+https://github.com/cthoyt/pubmed-classifier.git
```
or with pip:
```console
$ python3 -m pip install git+https://github.com/cthoyt/pubmed-classifier.git
```
## 👐 Contributing
Contributions, whether filing an issue, making a pull request, or forking, are
appreciated. See
[CONTRIBUTING.md](https://github.com/cthoyt/pubmed-classifier/blob/master/.github/CONTRIBUTING.md)
for more information on getting involved.
## 👋 Attribution
### ⚖️ License
The code in this package is licensed under the MIT License.
<!--
### 📖 Citation
Citation goes here!
-->
<!--
### 🎁 Support
This project has been supported by the following organizations (in alphabetical order):
- [Biopragmatics Lab](https://biopragmatics.github.io)
-->
<!--
### 💰 Funding
This project has been supported by the following grants:
| Funding Body | Program | Grant Number |
|---------------|--------------------------------------------------------------|--------------|
| Funder | [Grant Name (GRANT-ACRONYM)](https://example.com/grant-link) | ABCXYZ |
-->
### 🍪 Cookiecutter
This package was created with
[@audreyfeldroy](https://github.com/audreyfeldroy)'s
[cookiecutter](https://github.com/cookiecutter/cookiecutter) package using
[@cthoyt](https://github.com/cthoyt)'s
[cookiecutter-snekpack](https://github.com/cthoyt/cookiecutter-snekpack)
template.
## 🛠️ For Developers
<details>
<summary>See developer instructions</summary>
The final section of the README is for if you want to get involved by making a
code contribution.
### Development Installation
To install in development mode, use the following:
```console
$ git clone git+https://github.com/cthoyt/pubmed-classifier.git
$ cd pubmed-classifier
$ uv pip install -e .
```
Alternatively, install using pip:
```console
$ python3 -m pip install -e .
```
### Pre-commit
You can optionally use [pre-commit](https://pre-commit.com) to automate running
key code quality checks on each commit. Enable it with:
```console
$ uvx pre-commit install
```
Or using `pip`:
```console
$ pip install pre-commit
$ pre-commit install
```
### 🥼 Testing
After cloning the repository and installing `tox` with
`uv tool install tox --with tox-uv` or `python3 -m pip install tox tox-uv`, the
unit tests in the `tests/` folder can be run reproducibly with:
```console
$ tox -e py
```
Additionally, these tests are automatically re-run with each commit in a
[GitHub Action](https://github.com/cthoyt/pubmed-classifier/actions?query=workflow%3ATests).
### 📖 Building the Documentation
The documentation can be built locally using the following:
```console
$ git clone git+https://github.com/cthoyt/pubmed-classifier.git
$ cd pubmed-classifier
$ tox -e docs
$ open docs/build/html/index.html
```
The documentation automatically installs the package as well as the `docs` extra
specified in the [`pyproject.toml`](pyproject.toml). `sphinx` plugins like
`texext` can be added there. Additionally, they need to be added to the
`extensions` list in [`docs/source/conf.py`](docs/source/conf.py).
The documentation can be deployed to [ReadTheDocs](https://readthedocs.io) using
[this guide](https://docs.readthedocs.io/en/stable/intro/import-guide.html). The
[`.readthedocs.yml`](.readthedocs.yml) YAML file contains all the configuration
you'll need. You can also set up continuous integration on GitHub to check not
only that Sphinx can build the documentation in an isolated environment (i.e.,
with `tox -e docs-test`) but also that
[ReadTheDocs can build it too](https://docs.readthedocs.io/en/stable/pull-requests.html).
</details>
## 🧑💻 For Maintainers
<details>
<summary>See maintainer instructions</summary>
### Initial Configuration
#### Configuring ReadTheDocs
[ReadTheDocs](https://readthedocs.org) is an external documentation hosting
service that integrates with GitHub's CI/CD. Do the following for each
repository:
1. Log in to ReadTheDocs with your GitHub account to install the integration at
https://readthedocs.org/accounts/login/?next=/dashboard/
2. Import your project by navigating to https://readthedocs.org/dashboard/import
then clicking the plus icon next to your repository
3. You can rename the repository on the next screen using a more stylized name
(i.e., with spaces and capital letters)
4. Click next, and you're good to go!
#### Configuring Archival on Zenodo
[Zenodo](https://zenodo.org) is a long-term archival system that assigns a DOI
to each release of your package. Do the following for each repository:
1. Log in to Zenodo via GitHub with this link:
https://zenodo.org/oauth/login/github/?next=%2F. This brings you to a page
that lists all of your organizations and asks you to approve installing the
Zenodo app on GitHub. Click "grant" next to any organizations you want to
enable the integration for, then click the big green "approve" button. This
step only needs to be done once.
2. Navigate to https://zenodo.org/account/settings/github/, which lists all of
your GitHub repositories (both in your username and any organizations you
enabled). Click the on/off toggle for any relevant repositories. When you
   make a new repository, you'll have to come back to this page.
After these steps, you're ready to go! After you make "release" on GitHub (steps
for this are below), you can navigate to
https://zenodo.org/account/settings/github/repository/cthoyt/pubmed-classifier
to see the DOI for the release and link to the Zenodo record for it.
#### Registering with the Python Package Index (PyPI)
The [Python Package Index (PyPI)](https://pypi.org) hosts packages so they can
be easily installed with `pip`, `uv`, and equivalent tools.
1. Register for an account [here](https://pypi.org/account/register)
2. Navigate to https://pypi.org/manage/account and make sure you have verified
your email address. A verification email might not have been sent by default,
so you might have to click the "options" dropdown next to your address to get
to the "re-send verification email" button
3. 2-Factor authentication is required for PyPI since the end of 2023 (see this
[blog post from PyPI](https://blog.pypi.org/posts/2023-05-25-securing-pypi-with-2fa/)).
This means you have to first issue account recovery codes, then set up
2-factor authentication
4. Issue an API token from https://pypi.org/manage/account/token
This only needs to be done once per developer.
#### Configuring your machine's connection to PyPI
This needs to be done once per machine.
```console
$ uv tool install keyring
$ keyring set https://upload.pypi.org/legacy/ __token__
$ keyring set https://test.pypi.org/legacy/ __token__
```
Note that this deprecates previous workflows using `.pypirc`.
### 📦 Making a Release
#### Uploading to PyPI
After installing the package in development mode and installing `tox` with
`uv tool install tox --with tox-uv` or `python3 -m pip install tox tox-uv`, run
the following from the console:
```console
$ tox -e finish
```
This script does the following:
1. Uses [bump-my-version](https://github.com/callowayproject/bump-my-version) to
switch the version number in the `pyproject.toml`, `CITATION.cff`,
`src/pubmed_classifier/version.py`, and
[`docs/source/conf.py`](docs/source/conf.py) to not have the `-dev` suffix
2. Packages the code in both a tar archive and a wheel using
[`uv build`](https://docs.astral.sh/uv/guides/publish/#building-your-package)
3. Uploads to PyPI using
[`uv publish`](https://docs.astral.sh/uv/guides/publish/#publishing-your-package).
4. Push to GitHub. You'll need to create a GitHub release for the commit where the
   version was bumped.
5. Bump the version to the next patch. If you made big changes and want to bump
the version by minor, you can use `tox -e bumpversion -- minor` after.
#### Releasing on GitHub
1. Navigate to https://github.com/cthoyt/pubmed-classifier/releases/new to draft
a new release
2. Click the "Choose a Tag" dropdown and select the tag corresponding to the
release you just made
3. Click the "Generate Release Notes" button to get a quick outline of recent
changes. Modify the title and description as you see fit
4. Click the big green "Publish Release" button
This will trigger Zenodo to assign a DOI to your release as well.
### Updating Package Boilerplate
This project uses `cruft` to keep boilerplate (i.e., configuration, contribution
guidelines, documentation configuration) up-to-date with the upstream
cookiecutter package. Install cruft with either `uv tool install cruft` or
`python3 -m pip install cruft` then run:
```console
$ cruft update
```
More info on Cruft's update command is available
[here](https://github.com/cruft/cruft?tab=readme-ov-file#updating-a-project).
</details>
| text/markdown | Charles Tapley Hoyt | Charles Tapley Hoyt <cthoyt@gmail.com> | Charles Tapley Hoyt | Charles Tapley Hoyt <cthoyt@gmail.com> | null | snekpack, cookiecutter | [
"Development Status :: 1 - Planning",
"Environment :: Console",
"Intended Audience :: Developers",
"License :: OSI Approved :: MIT License",
"Operating System :: OS Independent",
"Framework :: Pytest",
"Framework :: tox",
"Framework :: Sphinx",
"Natural Language :: English",
"Programming Language ... | [] | null | null | >=3.10 | [] | [] | [] | [
"numpy",
"scikit-learn",
"pubmed-downloader>=0.0.13",
"typing-extensions",
"sentence-transformers",
"pystow>=0.7.27",
"click",
"tabulate"
] | [] | [] | [] | [
"Bug Tracker, https://github.com/cthoyt/pubmed-classifier/issues",
"Homepage, https://github.com/cthoyt/pubmed-classifier",
"Repository, https://github.com/cthoyt/pubmed-classifier.git",
"Documentation, https://pubmed_classifier.readthedocs.io",
"Funding, https://github.com/sponsors/cthoyt"
] | uv/0.10.2 {"installer":{"name":"uv","version":"0.10.2","subcommand":["publish"]},"python":null,"implementation":{"name":null,"version":null},"distro":{"name":"macOS","version":null,"id":null,"libc":null},"system":{"name":null,"release":null},"cpu":null,"openssl_version":null,"setuptools_version":null,"rustc_version":null,"ci":null} | 2026-02-19T11:13:20.467420 | pubmed_classifier-0.0.1-py3-none-any.whl | 12,823 | 23/f0/ef1e1f5a42db9879717aad054f357856e323a7c49c973a91c2d7f4a3723e/pubmed_classifier-0.0.1-py3-none-any.whl | py3 | bdist_wheel | null | false | bad9149342618d40a02a34c847b4ae0b | a2d2166528e85c8afae7531fb1965814b61f31f7dc2518cbc98646306eab021b | 23f0ef1e1f5a42db9879717aad054f357856e323a7c49c973a91c2d7f4a3723e | null | [
"LICENSE"
] | 0 |
2.4 | agentguard-py | 0.1.1 | Monitor and protect yourself with AI agent budget controls, approval workflows, and anomaly detection. | # AgentGuard Python SDK
> Monitor and protect yourself with AI agent budget controls, approval workflows, and anomaly detection.
[](https://pypi.org/project/agentguard/)
[](https://opensource.org/licenses/MIT)
AgentGuard sits between your AI agent and its actions. Every action is validated before it runs, checking budgets, rate limits, and approval thresholds in real-time.
## Install
```bash
pip install agentguard
```
## Quick Start
```python
from agentguard import AgentGuard
guard = AgentGuard(api_key="ag_your_api_key") # Get this from app.agent-guard.io
# Wrap any function for automatic protection
chat = guard.wrap(
my_chat_function,
action="openai_chat",
provider="openai",
model="gpt-4",
estimated_cost=0.05,
get_cost=lambda r: r.usage.total_cost,
)
# This automatically:
# ✓ Checks if agent is active
# ✓ Checks budget limits
# ✓ Checks approval thresholds
# ✓ Runs your function (only if allowed)
# ✓ Logs the result
result = chat("Hello!")
```
## Decorator Style
```python
@guard.protect(action="openai_chat", estimated_cost=0.05)
def chat(prompt):
return openai.chat(prompt=prompt)
result = chat("Hello!")
```
## Core Methods
### `guard.check(action, estimated_cost)` — Pre-action validation
```python
result = guard.check("openai_chat", estimated_cost=0.05)
if result.allowed:
response = openai.chat(prompt="Hello")
else:
print(f"Blocked: {result.message}")
```
### `guard.log(action, ...)` — Log an action
```python
guard.log(
"api_call",
provider="openai",
model="gpt-4",
cost=0.05,
status="SUCCESS",
input_tokens=10,
output_tokens=25,
)
```
### `guard.track(action)` — Track duration
```python
done = guard.track("openai_chat", provider="openai")
result = openai.chat(prompt="Hello")
done(cost=0.05, response=result)
# Duration is calculated automatically
```
## Budget Management
```python
# Get current budget status
budget = guard.get_budget()
print(f"Spent: ${budget.budget_spent} / ${budget.budget_limit}")
# Check if a specific cost is within budget
check = guard.check_budget(0.50)
if not check.allowed:
print("Would exceed budget")
# Raise if over budget
guard.ensure_budget(0.50) # raises BudgetExceededError if over
```
## Approval Workflows
```python
# Request approval for a high-cost action
approval = guard.request_approval(
"send_bulk_email",
description="Send marketing email to 10,000 users",
estimated_cost=45.00,
)
# Wait for human decision (polls every 5 seconds)
try:
decided = guard.wait_for_approval(approval.id, timeout=3600)
print("Approved! Proceeding...")
except AgentGuardError:
print("Rejected or expired")
```
## Error Handling
```python
from agentguard import (
AgentGuardError, # Base error
AgentPausedError, # Agent is paused
AgentBlockedError, # Agent is blocked (kill switch)
BudgetExceededError, # Budget limit hit
ApprovalRequiredError, # Needs human approval
)
try:
result = chat("Hello!")
except AgentPausedError:
print("Agent paused — stopping gracefully")
except BudgetExceededError:
print("Budget exceeded — waiting for reset")
except ApprovalRequiredError as e:
print(f"Approval needed: {e.approval_id}")
```
## Configuration
```python
guard = AgentGuard(
api_key="ag_your_api_key", # Required
base_url="https://api.agent-guard.io", # Optional
timeout=10, # Optional: seconds (default: 10)
debug=False, # Optional: debug logging (default: False)
)
```
## Dashboard
Manage your agents, view logs, approve actions, and monitor budgets at **[app.agent-guard.io](https://app.agent-guard.io)**
## Links
- [Website](https://agent-guard.io)
- [Dashboard](https://app.agent-guard.io)
- [GitHub](https://github.com/alexhrsu/agentguard)
- [JS/TypeScript SDK](https://www.npmjs.com/package/agentguard-js)
## License
MIT
| text/markdown | null | AgentGuard <abhirsu@gmail.com> | null | null | MIT | agentguard, ai, agents, monitoring, logging, llm, openai, anthropic, budget, approval, safety | [
"Development Status :: 4 - Beta",
"Intended Audience :: Developers",
"License :: OSI Approved :: MIT License",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.8",
"Programming Language :: Python :: 3.9",
"Programming Language :: Python :: 3.10",
"Programming Language :: Pyt... | [] | null | null | >=3.8 | [] | [] | [] | [] | [] | [] | [] | [
"Homepage, https://agent-guard.io",
"Dashboard, https://app.agent-guard.io",
"Repository, https://github.com/alexhrsu/agentguard"
] | twine/6.2.0 CPython/3.9.6 | 2026-02-19T11:12:48.085695 | agentguard_py-0.1.1.tar.gz | 10,631 | b0/a3/343e91f0a0ba395bb4db39776f9353fb5e98935107d7a50073f1f43411b1/agentguard_py-0.1.1.tar.gz | source | sdist | null | false | ffe15a1546dc7a6f852c3743f99c551a | 3fd4cde20205277d3ef726219ce6480a19a73ae0ec8fc241d87b5e4050287d95 | b0a3343e91f0a0ba395bb4db39776f9353fb5e98935107d7a50073f1f43411b1 | null | [
"LICENSE"
] | 281 |
2.4 | insa-its | 3.0.0 | Open-core multi-LLM communication monitoring, hallucination detection & deciphering for agent systems | # InsAIts - The Security Layer for Multi-Agent AI
**Detect, intervene, and audit AI-to-AI communication in real-time.**
[](https://pypi.org/project/insa-its/)
[](https://www.python.org/downloads/)
[](https://opensource.org/licenses/Apache-2.0)
[]()
---
## The Problem
When AI agents communicate with each other, things go wrong silently:
- **Hallucination propagation** - One agent fabricates a fact. The next treats it as truth. By agent 6, the error is buried under layers of confident responses.
- **Semantic drift** - Meaning shifts gradually across messages. By the end of a pipeline, the output has diverged from the original intent.
- **Fabricated sources** - Agents invent citations, DOIs, and URLs. In multi-agent systems, phantom citations pass between agents as established fact.
- **Silent contradictions** - Agent A says $1,000. Agent B says $5,000. No human is watching the AI-to-AI channel.
**In AI-to-human communication, we notice. In AI-to-AI? It's invisible.**
InsAIts makes it visible -- and acts on it.
---
## What It Does
InsAIts is a lightweight Python SDK that monitors AI-to-AI communication, detects 16 types of anomalies, and actively responds: quarantining dangerous messages, rerouting to backup agents, and escalating to human review.
```python
from insa_its import insAItsMonitor
monitor = insAItsMonitor()
# Monitor any AI-to-AI message
result = monitor.send_message(
text=agent_response,
sender_id="OrderBot",
receiver_id="InventoryBot",
llm_id="gpt-4o"
)
# V3: Structured result with programmatic decision-making
if result["monitor_result"].should_halt():
# Critical anomaly -- quarantine + escalate to human
outcome = monitor.intervene(message, result["monitor_result"])
elif result["monitor_result"].should_alert():
# High severity -- log warning, optionally reroute
pass
```
**Three lines to integrate. Full visibility. Active protection. Complete audit trail.**
All processing happens **locally** - your data never leaves your machine.
---
## Install
```bash
pip install insa-its
```
For local embeddings (recommended):
```bash
pip install insa-its[full]
```
---
## What It Detects
16 anomaly types across 5 categories:
| Category | Anomaly | What It Catches | Severity |
|----------|---------|-----------------|----------|
| **Hallucination** | FACT_CONTRADICTION | Agent A vs Agent B disagree on facts | Critical |
| | PHANTOM_CITATION | Fabricated URLs, DOIs, arxiv IDs | High |
| | UNGROUNDED_CLAIM | Response doesn't match source documents | Medium |
| | CONFIDENCE_DECAY | Agent certainty erodes: "certain" -> "maybe" | Medium |
| | CONFIDENCE_FLIP_FLOP | Agent alternates certain/uncertain | Medium |
| **Semantic (V3)** | SEMANTIC_DRIFT | Meaning shifts over conversation (EWMA + cosine) | High |
| | HALLUCINATION_CHAIN | Speculation promoted to "fact" across messages | Critical |
| | JARGON_DRIFT | Undefined acronyms flooding the conversation | Medium |
| **Communication** | SHORTHAND_EMERGENCE | "Process order" becomes "PO" | High |
| | CONTEXT_LOSS | Topic suddenly changes mid-conversation | High |
| | CROSS_LLM_JARGON | Made-up acronyms: "QXRT", "ZPMF" | High |
| | ANCHOR_DRIFT | Response diverges from user's question | High |
| **Model** | LLM_FINGERPRINT_MISMATCH | GPT-4 response looks like GPT-3.5 | Medium |
| | LOW_CONFIDENCE | Excessive hedging: "maybe", "perhaps" | Medium |
| **Compliance** | LINEAGE_DRIFT | Semantic divergence from parent message | Medium |
| | CHAIN_TAMPERING | Hash chain integrity violation | Critical |
---
## V3: Active Intervention
V3 transforms InsAIts from a monitoring tool into a **communication security platform**. It doesn't just detect -- it responds.
### Intervention Engine
```python
# Enable interventions
engine = monitor.enable_interventions()
# Register human-in-the-loop for critical anomalies
def review_critical(message, result, context):
# Your review logic -- Slack notification, dashboard alert, etc.
return True # Allow delivery, or False to quarantine
engine.register_hitl_callback(review_critical)
# Register agent rerouting for high-severity issues
engine.register_reroute("risky_agent", "backup_agent")
# Process intervention
outcome = monitor.intervene(message, result["monitor_result"])
# {"action": "quarantined", "severity": "critical", "reason": "..."}
```
| Severity | Default Action |
|----------|---------------|
| CRITICAL | Quarantine + escalate to human (HITL) |
| HIGH | Reroute to backup agent or deliver with warning |
| MEDIUM | Deliver with warning + structured logging |
| LOW/INFO | Deliver + log |
### Circuit Breaker
Automatically blocks agents with high anomaly rates:
```python
# Built into send_message() -- automatic
result = monitor.send_message("text", "agent1", "agent2", "gpt-4o")
# If agent1's anomaly rate exceeds threshold: result = {"error": "circuit_open", ...}
# Manual inspection
state = monitor.get_circuit_breaker_state("agent1")
# {"state": "closed", "anomaly_rate": 0.15, "window_size": 20}
```
- Sliding window tracking (default: 20 messages per agent)
- State machine: CLOSED -> OPEN -> HALF_OPEN -> CLOSED
- Configurable threshold (default: 40% anomaly rate)
- Independent state per agent
### Tamper-Evident Audit Log
SHA-256 hash chain for regulatory compliance:
```python
# Enable audit logging
monitor.enable_audit("./audit_trail.jsonl")
# Messages are automatically logged (hashes only, never content)
# ...
# Verify integrity at any time
assert monitor.verify_audit_integrity() # Detects any tampering
```
### Prometheus Metrics
```python
# Get Prometheus-formatted metrics for Grafana, Datadog, etc.
metrics_text = monitor.get_metrics()
# Metrics: insaits_messages_total, insaits_anomalies_total{severity="..."},
# insaits_processing_duration_ms (histogram)
```
### System Readiness
```python
readiness = monitor.check_readiness()
# {"ready": True, "checks": {"license": {"status": "ok"}, ...}, "warnings": [], "errors": []}
```
---
## Hallucination Detection
Five independent detection subsystems:
```python
monitor = insAItsMonitor()
monitor.enable_fact_tracking(True)
# Cross-agent fact contradictions
monitor.send_message("The project costs 1000 dollars.", "agent_a", llm_id="gpt-4o")
result = monitor.send_message("The project costs 5000 dollars.", "agent_b", llm_id="claude-3.5")
# result["anomalies"] includes FACT_CONTRADICTION (critical)
# Phantom citation detection
citations = monitor.detect_phantom_citations(
"According to Smith et al. (2030), see https://fake-journal.xyz/paper"
)
# citations["verdict"] = "likely_fabricated"
# Source grounding
monitor.set_source_documents(["Your reference docs..."], auto_check=True)
result = monitor.check_grounding("AI response to verify")
# result["grounded"] = True/False
# Confidence decay tracking
stats = monitor.get_confidence_stats(agent_id="agent_a")
# Full hallucination health report
summary = monitor.get_hallucination_summary()
```
| Subsystem | What It Catches |
|-----------|----------------|
| Fact Tracking | Cross-agent contradictions, numeric drift |
| Phantom Citation Detection | Fabricated URLs, DOIs, arxiv IDs, paper references |
| Source Grounding | Responses that diverge from reference documents |
| Confidence Decay | Agents losing certainty over a conversation |
| Self-Consistency | Internal contradictions within a single response |
---
## Forensic Chain Tracing
Trace any anomaly back to its root cause:
```python
trace = monitor.trace_root(anomaly)
print(trace["summary"])
# "Jargon 'XYZTERM' first appeared in message from agent_a (gpt-4o)
# at step 3 of 7. Propagated through 4 subsequent messages."
# ASCII visualization
print(monitor.visualize_chain(anomaly, include_text=True))
```
---
## Integrations
### LangChain (V3 Updated)
```python
from insa_its.integrations import LangChainMonitor
monitor = LangChainMonitor()
monitored_chain = monitor.wrap_chain(your_chain, "MyAgent",
workflow_id="order-123", # V3: correlation ID for tracing
halt_on_critical=True # V3: auto-halt on critical anomalies
)
```
### CrewAI
```python
from insa_its.integrations import CrewAIMonitor
monitor = CrewAIMonitor()
monitored_crew = monitor.wrap_crew(your_crew)
```
### LangGraph
```python
from insa_its.integrations import LangGraphMonitor
monitor = LangGraphMonitor()
monitored_graph = monitor.wrap_graph(your_graph)
```
### Slack Alerts
```python
from insa_its.integrations import SlackNotifier
slack = SlackNotifier(webhook_url="https://hooks.slack.com/...")
slack.send_alert(anomaly)
```
### Exports
```python
from insa_its.integrations import NotionExporter, AirtableExporter
notion = NotionExporter(token="secret_xxx", database_id="db_123")
notion.export_anomalies(anomalies)
```
---
## Anchor-Aware Detection
Reduce false positives by setting the user's query as context:
```python
monitor.set_anchor("Explain quantum computing")
# Now "QUBIT", "QPU" won't trigger jargon alerts -- they're relevant to the query
```
---
## Domain Dictionaries
```python
# Load domain-specific terms to reduce false positives
monitor.load_domain("finance") # EBITDA, WACC, DCF, etc.
monitor.load_domain("kubernetes") # K8S, HPA, CI/CD, etc.
# Available: finance, healthcare, kubernetes, machine_learning, devops, quantum
# Custom dictionaries
monitor.export_dictionary("my_team_terms.json")
monitor.import_dictionary("shared_terms.json", merge=True)
```
---
## Open-Core Model
The core SDK is **Apache 2.0 open source**. Premium features ship with `pip install insa-its`.
| Feature | License | Status |
|---------|---------|--------|
| All 16 anomaly detectors | Apache 2.0 | Open |
| Hallucination detection (5 subsystems) | Apache 2.0 | Open |
| V3: Circuit breaker, interventions, audit, metrics | Apache 2.0 | Open |
| V3: Semantic drift, hallucination chain, jargon drift | Apache 2.0 | Open |
| Forensic chain tracing + visualization | Apache 2.0 | Open |
| All integrations (LangChain, CrewAI, LangGraph, Slack, Notion, Airtable) | Apache 2.0 | Open |
| Terminal dashboard | Apache 2.0 | Open |
| Local embeddings + Ollama | Apache 2.0 | Open |
| **AI Lineage Oracle** (compliance) | Proprietary | Premium |
| **Edge/Hybrid Swarm Router** | Proprietary | Premium |
| **Decipher Engine** (AI-to-Human translation) | Proprietary | Premium |
| **Adaptive jargon dictionaries** | Proprietary | Premium |
| **Advanced shorthand/context-loss detection** | Proprietary | Premium |
| **Anchor drift forensics** | Proprietary | Premium |
**Both open-source and premium features are included when you `pip install insa-its`.**
The public GitHub repo contains the Apache 2.0 open-source core only.
---
## Architecture
```
Your Multi-Agent System InsAIts V3 Security Layer
| |
|-- user query -----> set_anchor() ------> |
|-- source docs ----> set_source_documents() |
| |
|-- message --------> Circuit Breaker ---> |
| (is agent blocked?) |
| |-- Embedding generation (local)
| |-- Pattern analysis
| |-- Hallucination suite (5 subsystems)
| |-- Semantic drift (EWMA + cosine)
| |-- Hallucination chain (promotion detection)
| |-- Jargon drift (vocabulary analysis)
| |
| |-- Build MonitorResult
| |-- Circuit breaker state update
| |-- Structured logging + metrics
| |-- Audit log (SHA-256 hash chain)
| |
|<-- MonitorResult (should_halt/alert) ----|
| |
|-- intervene() ---> Intervention Engine |
| CRITICAL: quarantine |
| HIGH: reroute/warn |
| MEDIUM: warn + log |
| LOW: deliver + log |
```
**Privacy First:**
- All detection and intervention runs locally
- No message content sent to cloud
- Audit logs store hashes, never raw content
- API keys hashed before storage
- GDPR-ready
---
## Pricing
| Tier | What You Get | Price |
|------|--------------|-------|
| **Free** | 100 msgs/day, all open-source features | **$0** |
| **Pro** | Unlimited messages, cloud features, premium detectors | **Contact us** |
| **Enterprise** | Everything + compliance exports, SLA, self-hosted | **Custom** |
> Free tier works without an API key. Just `pip install insa-its` and start monitoring.
### 100 FREE LIFETIME Keys
We're giving away **100 FREE LIFETIME keys** (unlimited usage forever) to early adopters.
**How to claim:** Email **info@yuyai.pro** with your use case (1-2 sentences). First 100 get lifetime access.
---
## Use Cases
| Industry | Problem Solved |
|----------|----------------|
| **E-Commerce** | Order bots losing context mid-transaction |
| **Customer Service** | Support agents developing incomprehensible shorthand |
| **Finance** | Analysis pipelines hallucinating metrics, contradicting numbers |
| **Healthcare** | Critical multi-agent systems where errors have consequences |
| **Research** | Ensuring scientific integrity, catching fabricated citations |
| **Legal** | AI-generated documents with phantom references |
---
## Documentation
| Resource | Link |
|----------|------|
| Installation Guide | [installation_guide.md](installation_guide.md) |
| API Reference | [insaitsapi-production.up.railway.app/docs](https://insaitsapi-production.up.railway.app/docs) |
| Privacy Policy | [PRIVACY_POLICY.md](../PRIVACY_POLICY.md) |
| Terms of Service | [TERMS_OF_SERVICE.md](TERMS_OF_SERVICE.md) |
---
## Support
- **Email:** info@yuyai.pro
- **GitHub Issues:** [Report a bug](https://github.com/Nomadu27/InsAIts/issues)
- **API Status:** [insaitsapi-production.up.railway.app](https://insaitsapi-production.up.railway.app)
---
## License
**Open-Core Model:**
- Core SDK: [Apache License 2.0](LICENSE) - free to use, modify, and distribute
- Premium features (`insa_its/premium/`): Proprietary - included via `pip install insa-its`
---
<p align="center">
<strong>InsAIts V3.0 - Making Multi-Agent AI Trustworthy, Auditable, and Secure</strong><br>
<em>16 anomaly types. Active intervention. Tamper-evident audit. Circuit breaker. Prometheus metrics.</em><br><br>
<strong>100 FREE LIFETIME keys for early adopters: info@yuyai.pro</strong>
</p>
| text/markdown | YuyAI / InsAIts Team | info@yuyai.pro | null | null | null | null | [
"Development Status :: 5 - Production/Stable",
"Intended Audience :: Developers",
"License :: OSI Approved :: Apache Software License",
"Programming Language :: Python :: 3.8",
"Programming Language :: Python :: 3.9",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
... | [] | https://github.com/Nomadu27/InsAIts | null | >=3.8 | [] | [] | [] | [
"numpy>=1.20.0",
"requests>=2.26.0",
"websocket-client>=1.0.0",
"sentence-transformers>=2.2.0; extra == \"local\"",
"networkx>=2.6.0; extra == \"graph\"",
"sentence-transformers>=2.2.0; extra == \"full\"",
"networkx>=2.6.0; extra == \"full\""
] | [] | [] | [] | [] | twine/6.2.0 CPython/3.11.9 | 2026-02-19T11:11:02.634226 | insa_its-3.0.0.tar.gz | 183,915 | 56/8e/3fbfa365cd4409aaedfc3ab7a5ce05816f197268ccb5c911783864bea6f5/insa_its-3.0.0.tar.gz | source | sdist | null | false | 949a9ab4f7efb8df1b0f5db90895af91 | 496609f91279a21c75909f94c9f389669afd86f5a0350a6f60fec96cead03e41 | 568e3fbfa365cd4409aaedfc3ab7a5ce05816f197268ccb5c911783864bea6f5 | null | [
"LICENSE",
"LICENSE.premium"
] | 262 |
2.4 | pulumi-azure-native | 3.14.0a1771493398 | A native Pulumi package for creating and managing Azure resources. | [](https://slack.pulumi.com)
[](https://npmjs.com/package/@pulumi/azure-native)
[](https://pypi.org/project/pulumi-azure-native)
[](https://badge.fury.io/nu/pulumi.azurenative)
[](https://pkg.go.dev/github.com/pulumi/pulumi-azure-native/sdk/go)
[](https://github.com/pulumi/pulumi-azure-native/blob/master/LICENSE)
# Native Azure Pulumi Provider
The [Azure Native](https://www.pulumi.com/docs/intro/cloud-providers/azure/) provider for Pulumi lets you use Azure resources in your cloud programs.
This provider uses the Azure Resource Manager REST API directly and therefore provides full access to the ARM API.
The Azure Native provider is the recommended provider for projects targeting Azure.
To use this package, [install the Pulumi CLI](https://www.pulumi.com/docs/get-started/install/).
## Installing
This package is available in many languages in the standard packaging formats.
### Node.js (JavaScript/TypeScript)
To use from JavaScript or TypeScript in Node.js, install using either `npm`:
npm install @pulumi/azure-native
or `yarn`:
yarn add @pulumi/azure-native
### Python
To use from Python, install using `pip`:
pip install pulumi_azure_native
### Go
To use from Go, use `go get` to grab the latest version of the library
go get github.com/pulumi/pulumi-azure-native/sdk
### .NET
To use from .NET, install using `dotnet add package`:
dotnet add package Pulumi.AzureNative
## Concepts
The `@pulumi/azure-native` package provides a strongly-typed means to build cloud applications that create
and interact closely with Azure resources. Resources are exposed for the entire Azure surface area,
including (but not limited to) 'compute', 'keyvault', 'network', 'storage', and more.
The Azure Native provider works directly with the Azure Resource Manager (ARM) platform instead of depending on a
handwritten layer as with the [classic provider](https://github.com/pulumi/pulumi-azure). This approach ensures higher
quality and higher fidelity with the Azure platform.
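As a quick illustration (a minimal sketch, not taken verbatim from the provider docs; the resource name and region are placeholders), a Pulumi Python program using this package might look like:
```python
"""Minimal Pulumi program: provision an Azure resource group with azure-native."""
import pulumi
from pulumi_azure_native import resources

# Placeholder logical name and region; pick values appropriate for your stack.
resource_group = resources.ResourceGroup("example-rg", location="westus2")

# Expose the provisioned name via `pulumi stack output`.
pulumi.export("resource_group_name", resource_group.name)
```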
## Configuring credentials
To learn how to configure credentials refer to the [Azure configuration options](https://www.pulumi.com/registry/packages/azure-native/installation-configuration/#configuration-options).
## Building
See [contributing](CONTRIBUTING.md) for details on how to build and contribute to this provider.
## Reference
For further information, visit [Azure Native in the Pulumi Registry](https://www.pulumi.com/registry/packages/azure-native/)
or for detailed API reference documentation, visit [Azure Native API Docs in the Pulumi Registry](https://www.pulumi.com/registry/packages/azure-native/api-docs/).
| text/markdown | null | null | null | null | Apache-2.0 | pulumi, azure, azure-native, category/cloud, kind/native | [] | [] | null | null | >=3.9 | [] | [] | [] | [
"parver>=0.2.1",
"pulumi<4.0.0,>=3.165.0",
"semver>=2.8.1",
"typing-extensions<5,>=4.11; python_version < \"3.11\""
] | [] | [] | [] | [
"Homepage, https://pulumi.com",
"Repository, https://github.com/pulumi/pulumi-azure-native"
] | twine/6.2.0 CPython/3.9.25 | 2026-02-19T11:10:51.159596 | pulumi_azure_native-3.14.0a1771493398.tar.gz | 12,595,873 | 5f/56/66e98a2f444b3bbf2abcdc865704a216ec77fe5bf16739710e5e132d172a/pulumi_azure_native-3.14.0a1771493398.tar.gz | source | sdist | null | false | 770ff730a03ef3026acb637bdb88012d | 58f63f44b282bef5261bef14988646333b5bdabab634d5aca07182c74e4f923a | 5f5666e98a2f444b3bbf2abcdc865704a216ec77fe5bf16739710e5e132d172a | null | [] | 243 |
2.4 | netanel-core | 0.2.0 | Self-learning LLM call library. Every call learns. Every call improves. | # netanel-core
> **The self-learning LLM call.** Every call learns. Every call improves.
[](https://github.com/netanel-systems/nathan-core/actions/workflows/tests.yml)
[](https://www.python.org/downloads/)
[](https://opensource.org/licenses/MIT)
A Python library that wraps any LLM with file-based memory and automatic quality improvement. No database required.
```python
from netanel_core import LearningLLM
llm = LearningLLM(namespace="my-app")
result = llm.call("Write a function to validate emails")
print(result.response) # The LLM's output
print(result.score) # Quality score (0.0-1.0)
print(result.usage) # Token usage for cost tracking
```
---
## ✨ Features
- 🧠 **Self-Learning** - Extracts patterns from every call, builds better context over time
- 📊 **Quality-First** - Auto-evaluation + retry loop, only stores high-quality outputs
- 💾 **File-Based Memory** - No database, human-readable `.md` files, git-trackable
- 🔄 **Prompt Evolution** - Auto-rewrites prompts based on learnings
- 🎯 **Bounded Safety** - Max retries, tokens, iterations (NASA-grade)
- 🤖 **DeepAgent Support** - Complex reasoning with LangGraph agents
- 💰 **Cost Tracking** - Token usage for all calls
---
## 🚀 Quick Start
```bash
pip install netanel-core
```
```python
from netanel_core import LearningLLM
llm = LearningLLM(namespace="my-app")
result = llm.call("Explain quantum computing simply")
if result.passed:
    print(result.response)
    print(f"Quality: {result.score:.2f}, Tokens: {result.usage['total_tokens']}")
```
---
## 📖 How It Works
Every `llm.call()` executes:
1. **RETRIEVE** - Load memories from namespace
2. **BUILD** - Create context (role + memories + task)
3. **CALL** - Invoke LLM
4. **EVALUATE** - Score quality (gpt-4o-mini + main model)
5. **RETRY** - If score < threshold, retry with feedback
6. **EXTRACT** - Extract patterns from successful responses
7. **STORE** - Save to `memories/{namespace}/patterns/`
8. **EVOLVE** - Trigger prompt improvements
9. **RETURN** - Result with response + metadata
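For instance, the retry loop in steps 4-5 can be tightened by raising the quality threshold. The sketch below only uses the `Config` fields and result attributes documented elsewhere in this README; the namespace and threshold values are illustrative:
```python
from netanel_core import Config, LearningLLM

# Raise the quality bar so low-scoring responses trigger the retry loop.
config = Config(namespace="docs-bot", quality_threshold=0.9, max_retries=5)
llm = LearningLLM(config=config)

result = llm.call("Summarize the call pipeline in two sentences")
if result.passed:
    print(result.response)
else:
    # The threshold was not met within max_retries; inspect the final score.
    print(f"Below threshold after retries: {result.score:.2f}")
```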
---
## 💡 Usage Examples
### Custom Model
```python
from langchain_openai import ChatOpenAI
model = ChatOpenAI(model="gpt-4", temperature=0.7)
llm = LearningLLM(namespace="gpt4-app", model=model)
```
### DeepAgent Mode
```python
result = llm.call(
"Research AI papers and summarize trends",
use_agent=True # Multi-step reasoning
)
print(f"Steps: {result.agent_steps}")
```
### Cost Tracking
```python
total = sum(llm.call(task).usage['total_tokens'] for task in tasks)
cost = (total / 1_000_000) * 0.375 # gpt-4o-mini avg
print(f"Cost: ${cost:.4f}")
```
---
## 🎯 Configuration
```python
from netanel_core import Config
config = Config(
namespace="my-app",
quality_threshold=0.8,
max_retries=3,
memories_dir="./memories",
)
llm = LearningLLM(config=config)
```
Or YAML:
```yaml
namespace: my-app
quality_threshold: 0.8
max_retries: 3
```
```python
config = Config.from_yaml("config.yaml")
```
---
## 📂 Memory Structure
```text
memories/
└── {namespace}/
    ├── patterns/
    │   ├── 001-function-structure.md
    │   └── 002-error-handling.md
    └── prompts/
        └── current.md
```
Human-readable Markdown - inspect or version control.
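Because memory is plain files, it can be inspected and versioned with ordinary tools; the paths below follow the illustrative layout above:
```bash
# Read a learned pattern
cat memories/my-app/patterns/001-function-structure.md

# Keep learned patterns under version control
git add memories/ && git commit -m "Snapshot learned patterns"
```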
---
## 🧪 Development
```bash
pip install -e .[dev]
pytest --cov
```
---
## 📚 Documentation
- [Architecture](ARCHITECTURE.md) - System design
- [API Reference](docs/API.md) - Complete API
- [Examples](examples/) - Usage patterns
---
## 📝 License
MIT - see [LICENSE](LICENSE)
---
Built by [Netanel Systems](https://www.netanel.systems) with [LangGraph](https://github.com/langchain-ai/langgraph) + [Deep Agents](https://github.com/anthropics/deepagents)
| text/markdown | Netanel Systems | null | null | null | MIT | llm, learning, ai, langchain, langgraph, memory, agents, quality, self-improvement | [
"Development Status :: 3 - Alpha",
"Intended Audience :: Developers",
"License :: OSI Approved :: MIT License",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Topic :: Software Development :: Libraries :: Python Modules",
"To... | [] | null | null | >=3.11 | [] | [] | [] | [
"langgraph<2.0.0,>=1.0.0",
"langchain<2.0.0,>=1.0.0",
"langchain-openai<2.0.0,>=1.0.0",
"deepagents<1.0,>=0.4.1",
"pydantic<3.0.0,>=2.0.0",
"pyyaml>=6.0",
"filelock>=3.24.3"
] | [] | [] | [] | [
"Homepage, https://www.netanel.systems",
"Repository, https://github.com/netanel-systems/netanel-core",
"Issues, https://github.com/netanel-systems/netanel-core/issues",
"Documentation, https://github.com/netanel-systems/netanel-core"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-19T11:10:19.648996 | netanel_core-0.2.0.tar.gz | 148,306 | cc/fc/e6bd4e5572deb6445d85d12215d7c079be37c64504c5190b40e1e249a9e7/netanel_core-0.2.0.tar.gz | source | sdist | null | false | 63cd0d8cebf1dae9fe47578f58c3240f | ff8e8183dbef937ceafa7384d9d3d07389759a7ee8cb842f00852468fa9f1b0a | ccfce6bd4e5572deb6445d85d12215d7c079be37c64504c5190b40e1e249a9e7 | null | [
"LICENSE"
] | 561 |
2.4 | bailo | 3.6.0 | Simplifies interacting with Bailo programmatically | # Bailo Python Client
[![PyPI - Python Version][pypi-python-version-shield]][pypi-url] [![PyPI - Version][pypi-version-shield]][pypi-url]
[![License][license-shield]][license-url] [![Contributor Covenant][code-of-conduct-shield]][code-of-conduct-url]
A lightweight, Python API wrapper for Bailo, providing streamlined programmatic access to its core functionality - designed for Data Scientists, ML Engineers, and Developers who need to integrate Bailo capabilities directly into their workflows.
<br />
<!-- TABLE OF CONTENTS -->
<details>
<summary>Table of Contents</summary>
<ol>
<li>
<a href="#quickstart">Quickstart</a>
<ul>
<li><a href="#installation">Installation</a></li>
<li><a href="#basic-usage">Basic Usage</a></li>
<li><a href="#core-features">Core Features</a></li>
</ul>
</li>
<li>
<a href="#documentation">Documentation</a>
<ul>
<li><a href="#building-locally">Building Locally</a></li>
</ul>
</li>
<li>
<a href="#development">Development</a>
<ul>
<li><a href="#python-setup">Python Setup</a></li>
<li><a href="#running-tests">Running Tests</a></li>
</ul>
</li>
</ol>
</details>
<br />
## Quickstart
> **Requires:** Python 3.10 to 3.14
### Installation
```bash
pip install bailo
```
Optional: enable integration with [MLFlow](https://mlflow.org/) for advanced model tracking:
```bash
pip install bailo[mlflow]
```
### Basic Usage
```python
from bailo import Client, Model
# Connect to Bailo server
client = Client("http://localhost:8080")
# Create a model
yolo = Model.create(
    client=client,
    name="YoloV4",
    description="You only look once!"
)
# Populate datacard using a predefined schema
yolo.card_from_schema("minimal-general-v10")
# Create a new release
my_release = yolo.create_release(
version="0.1.0",
notes="Beta"
)
# Upload a binary file to the release
with open("yolo.onnx") as f:
my_release.upload("yolo", f)
```
### Core Features
- Upload and download model binaries
- Manage Models & Releases
- Handle Datacards & Schemas
- Manage Schemas
- Process Access Requests
> **Note:** Certain collaborative actions (approvals, review threads, etc.) are best handled via the Bailo web interface.
## Documentation
Full Python client documentation: [Bailo Python Docs](https://gchq.github.io/Bailo/docs/python/index.html).
### Building locally
Refer to [backend/docs/README.md](https://github.com/gchq/Bailo/blob/main/backend/docs/README.md) for local build steps.
## Development
The following steps are only required for users who wish to extend or develop the Bailo Python client locally.
### Python Setup
From within the `lib/python` directory:
```bash
python3 -m venv libpythonvenv
source libpythonvenv/bin/activate
pip install -e .[test]
```
### Running Tests
To run the unit tests:
```bash
pytest
```
To run the integration tests (requires Bailo running on `https://localhost:8080`):
```bash
pytest -m integration
```
To run the mlflow integration tests (requires Bailo running on `https://localhost:8080` and mlflow running on `https://localhost:5050` e.g. via docker):
```bash
docker run -p 5050:5000 \
"ghcr.io/mlflow/mlflow:v$(python -m pip show mlflow | awk '/Version:/ {print $2}')" \
mlflow server --host 0.0.0.0 --port 5000
pytest -m mlflow
```
<!-- MARKDOWN LINKS & IMAGES -->
<!-- https://www.markdownguide.org/basic-syntax/#reference-style-links -->
[pypi-url]: https://pypi.org/project/bailo/
[pypi-version-shield]: https://img.shields.io/pypi/v/bailo?style=for-the-badge
[pypi-python-version-shield]: https://img.shields.io/pypi/pyversions/bailo?style=for-the-badge
[license-shield]: https://img.shields.io/github/license/gchq/bailo.svg?style=for-the-badge
[license-url]: https://github.com/gchq/Bailo/blob/main/LICENSE.txt
[code-of-conduct-shield]: https://img.shields.io/badge/Contributor%20Covenant-2.1-4baaaa.svg?style=for-the-badge
[code-of-conduct-url]: https://github.com/gchq/Bailo/blob/main/CODE_OF_CONDUCT.md
| text/markdown | null | null | null | null | null | null | [
"Development Status :: 3 - Alpha",
"Intended Audience :: Developers",
"License :: OSI Approved :: MIT License",
"Natural Language :: English",
"Programming Language :: Python :: 3 :: Only",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Pyt... | [] | null | null | >=3.10 | [] | [] | [] | [
"requests==2.32.5",
"semantic-version==2.10.0",
"tqdm==4.67.3",
"mlflow-skinny[mlserver]==3.9.0; extra == \"mlflow\"",
"black==26.1.0; extra == \"test\"",
"check-manifest==0.51; extra == \"test\"",
"pre-commit==4.5.1; extra == \"test\"",
"pylint==4.0.4; extra == \"test\"",
"pylint_junit==0.3.5; extr... | [] | [] | [] | [
"Changelog, https://github.com/gchq/Bailo/blob/main/lib/python/CHANGELOG.md",
"Documentation, https://github.com/gchq/bailo/tree/main#readme",
"Source, https://github.com/gchq/bailo",
"Tracker, https://github.com/gchq/bailo/issues"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-19T11:09:09.805710 | bailo-3.6.0.tar.gz | 26,029 | 79/65/ca2304aac198fa77d1ef82f5ab8208ff5fdc653e3d810b34eaeb72ac2055/bailo-3.6.0.tar.gz | source | sdist | null | false | 334739ce012f22277cc20c43e08725dc | 51a38b644e4df9ff0a3802c07f03b68984ae2e54a89d44b61fba06bc6600c35d | 7965ca2304aac198fa77d1ef82f5ab8208ff5fdc653e3d810b34eaeb72ac2055 | null | [] | 368 |
2.4 | mistral-vibe | 2.2.1 | Minimal CLI coding agent by Mistral | # Mistral Vibe
[](https://pypi.org/project/mistral-vibe)
[](https://www.python.org/downloads/release/python-3120/)
[](https://github.com/mistralai/mistral-vibe/actions/workflows/ci.yml)
[](https://github.com/mistralai/mistral-vibe/blob/main/LICENSE)
```
██████████████████░░
██████████████████░░
████ ██████ ████░░
████ ██ ████░░
████ ████░░
████ ██ ██ ████░░
██ ██ ██░░
██████████████████░░
██████████████████░░
```
**Mistral's open-source CLI coding assistant.**
Mistral Vibe is a command-line coding assistant powered by Mistral's models. It provides a conversational interface to your codebase, allowing you to use natural language to explore, modify, and interact with your projects through a powerful set of tools.
> [!WARNING]
> Mistral Vibe works on Windows, but we officially support and target UNIX environments.
### One-line install (recommended)
**Linux and macOS**
```bash
curl -LsSf https://mistral.ai/vibe/install.sh | bash
```
**Windows**
First, install uv
```bash
powershell -ExecutionPolicy ByPass -c "irm https://astral.sh/uv/install.ps1 | iex"
```
Then, use the `uv` command below.
### Using uv
```bash
uv tool install mistral-vibe
```
### Using pip
```bash
pip install mistral-vibe
```
## Table of Contents
- [Features](#features)
- [Built-in Agents](#built-in-agents)
- [Subagents and Task Delegation](#subagents-and-task-delegation)
- [Interactive User Questions](#interactive-user-questions)
- [Terminal Requirements](#terminal-requirements)
- [Quick Start](#quick-start)
- [Usage](#usage)
- [Interactive Mode](#interactive-mode)
- [Trust Folder System](#trust-folder-system)
- [Programmatic Mode](#programmatic-mode)
- [Slash Commands](#slash-commands)
- [Built-in Slash Commands](#built-in-slash-commands)
- [Custom Slash Commands via Skills](#custom-slash-commands-via-skills)
- [Skills System](#skills-system)
- [Creating Skills](#creating-skills)
- [Skill Discovery](#skill-discovery)
- [Managing Skills](#managing-skills)
- [Configuration](#configuration)
- [Configuration File Location](#configuration-file-location)
- [API Key Configuration](#api-key-configuration)
- [Custom System Prompts](#custom-system-prompts)
- [Custom Agent Configurations](#custom-agent-configurations)
- [Tool Management](#tool-management)
- [MCP Server Configuration](#mcp-server-configuration)
- [Session Management](#session-management)
- [Update Settings](#update-settings)
- [Custom Vibe Home Directory](#custom-vibe-home-directory)
- [Editors/IDEs](#editorsides)
- [Resources](#resources)
- [Data collection & usage](#data-collection--usage)
- [License](#license)
## Features
- **Interactive Chat**: A conversational AI agent that understands your requests and breaks down complex tasks.
- **Powerful Toolset**: A suite of tools for file manipulation, code searching, version control, and command execution, right from the chat prompt.
- Read, write, and patch files (`read_file`, `write_file`, `search_replace`).
- Execute shell commands in a stateful terminal (`bash`).
- Recursively search code with `grep` (with `ripgrep` support).
- Manage a `todo` list to track the agent's work.
- Ask interactive questions to gather user input (`ask_user_question`).
- Delegate tasks to subagents for parallel work (`task`).
- **Project-Aware Context**: Vibe automatically scans your project's file structure and Git status to provide relevant context to the agent, improving its understanding of your codebase.
- **Advanced CLI Experience**: Built with modern libraries for a smooth and efficient workflow.
- Autocompletion for slash commands (`/`) and file paths (`@`).
- Persistent command history.
- Beautiful Themes.
- **Highly Configurable**: Customize models, providers, tool permissions, and UI preferences through a simple `config.toml` file.
- **Safety First**: Features tool execution approval.
- **Multiple Built-in Agents**: Choose from different agent profiles tailored for specific workflows.
### Built-in Agents
Vibe comes with several built-in agent profiles, each designed for different use cases:
- **`default`**: Standard agent that requires approval for tool executions. Best for general use.
- **`plan`**: Read-only agent for exploration and planning. Auto-approves safe tools like `grep` and `read_file`.
- **`accept-edits`**: Auto-approves file edits only (`write_file`, `search_replace`). Useful for code refactoring.
- **`auto-approve`**: Auto-approves all tool executions. Use with caution.
Use the `--agent` flag to select a different agent:
```bash
vibe --agent plan
```
### Subagents and Task Delegation
Vibe supports subagents for delegating tasks. Subagents run independently and can perform specialized work without user interaction, preventing the context from being overloaded.
The `task` tool allows the agent to delegate work to subagents:
```
> Can you explore the codebase structure while I work on something else?
🤖 I'll use the task tool to delegate this to the explore subagent.
> task(task="Analyze the project structure and architecture", agent="explore")
```
Create custom subagents by adding `agent_type = "subagent"` to your agent configuration. Vibe comes with a built-in subagent called `explore`, a read-only subagent for codebase exploration used internally for delegation.
### Interactive User Questions
The `ask_user_question` tool allows the agent to ask you clarifying questions during its work. This enables more interactive and collaborative workflows.
```
> Can you help me refactor this function?
🤖 I need to understand your requirements better before proceeding.
> ask_user_question(questions=[{
"question": "What's the main goal of this refactoring?",
"options": [
{"label": "Performance", "description": "Make it run faster"},
{"label": "Readability", "description": "Make it easier to understand"},
{"label": "Maintainability", "description": "Make it easier to modify"}
]
}])
```
The agent can ask multiple questions at once, displayed as tabs. Each question supports 2-4 options plus an automatic "Other" option for free text responses.
## Terminal Requirements
Vibe's interactive interface requires a modern terminal emulator. Recommended terminal emulators include:
- **WezTerm** (cross-platform)
- **Alacritty** (cross-platform)
- **Ghostty** (Linux and macOS)
- **Kitty** (Linux and macOS)
Most modern terminals should work, but older or minimal terminal emulators may have display issues.
## Quick Start
1. Navigate to your project's root directory:
```bash
cd /path/to/your/project
```
2. Run Vibe:
```bash
vibe
```
3. If this is your first time running Vibe, it will:
- Create a default configuration file at `~/.vibe/config.toml`
- Prompt you to enter your API key if it's not already configured
- Save your API key to `~/.vibe/.env` for future use
Alternatively, you can configure your API key separately using `vibe --setup`.
4. Start interacting with the agent!
```
> Can you find all instances of the word "TODO" in the project?
🤖 The user wants to find all instances of "TODO". The `grep` tool is perfect for this. I will use it to search the current directory.
> grep(pattern="TODO", path=".")
... (grep tool output) ...
🤖 I found the following "TODO" comments in your project.
```
## Usage
### Interactive Mode
Simply run `vibe` to enter the interactive chat loop.
- **Multi-line Input**: Press `Ctrl+J` (or `Shift+Enter` in supported terminals) to insert a newline.
- **File Paths**: Reference files in your prompt using the `@` symbol for smart autocompletion (e.g., `> Read the file @src/agent.py`).
- **Shell Commands**: Prefix any command with `!` to execute it directly in your shell, bypassing the agent (e.g., `> !ls -l`).
- **External Editor**: Press `Ctrl+G` to edit your current input in an external editor.
- **Tool Output Toggle**: Press `Ctrl+O` to toggle the tool output view.
- **Todo View Toggle**: Press `Ctrl+T` to toggle the todo list view.
- **Auto-Approve Toggle**: Press `Shift+Tab` to toggle auto-approve mode on/off.
You can start Vibe with a prompt using the following command:
```bash
vibe "Refactor the main function in cli/main.py to be more modular."
```
**Note**: The `--auto-approve` flag automatically approves all tool executions without prompting. In interactive mode, you can also toggle auto-approve on/off using `Shift+Tab`.
### Trust Folder System
Vibe includes a trust folder system to ensure you only run the agent in directories you trust. When you first run Vibe in a new directory which contains a `.vibe` subfolder, it may ask you to confirm whether you trust the folder.
Trusted folders are remembered for future sessions. You can manage trusted folders through its configuration file `~/.vibe/trusted_folders.toml`.
This safety feature helps prevent accidental execution in sensitive directories.
### Programmatic Mode
You can run Vibe non-interactively by piping input or using the `--prompt` flag. This is useful for scripting.
```bash
vibe --prompt "Refactor the main function in cli/main.py to be more modular."
```
By default, it uses `auto-approve` mode.
#### Programmatic Mode Options
When using `--prompt`, you can specify additional options:
- **`--max-turns N`**: Limit the maximum number of assistant turns. The session will stop after N turns.
- **`--max-price DOLLARS`**: Set a maximum cost limit in dollars. The session will be interrupted if the cost exceeds this limit.
- **`--enabled-tools TOOL`**: Enable specific tools. In programmatic mode, this disables all other tools. Can be specified multiple times. Supports exact names, glob patterns (e.g., `bash*`), or regex with `re:` prefix (e.g., `re:^serena_.*$`).
- **`--output FORMAT`**: Set the output format. Options:
- `text` (default): Human-readable text output
- `json`: All messages as JSON at the end
- `streaming`: Newline-delimited JSON per message
Example:
```bash
vibe --prompt "Analyze the codebase" --max-turns 5 --max-price 1.0 --output json
```
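With the `streaming` format, each line is a standalone JSON message, so the output can be piped into ordinary JSON tooling; `jq` below is just one possible consumer:
```bash
# Pretty-print each newline-delimited JSON message as it arrives.
vibe --prompt "List the project's top-level modules" --output streaming | jq .
```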
## Slash Commands
Use slash commands for meta-actions and configuration changes during a session.
### Built-in Slash Commands
Vibe provides several built-in slash commands. Use slash commands by typing them in the input box:
```
> /help
```
### Custom Slash Commands via Skills
You can define your own slash commands through the skills system. Skills are reusable components that extend Vibe's functionality.
To create a custom slash command:
1. Create a skill directory with a `SKILL.md` file
2. Set `user-invocable = true` in the skill metadata
3. Define the command logic in your skill
Example skill metadata:
```markdown
---
name: my-skill
description: My custom skill with slash commands
user-invocable: true
---
```
Custom slash commands appear in the autocompletion menu alongside built-in commands.
## Skills System
Vibe's skills system allows you to extend functionality through reusable components. Skills can add new tools, slash commands, and specialized behaviors.
Vibe follows the [Agent Skills specification](https://agentskills.io/specification) for skill format and structure.
### Creating Skills
Skills are defined in directories with a `SKILL.md` file containing metadata in YAML frontmatter. For example, `~/.vibe/skills/code-review/SKILL.md`:
```markdown
---
name: code-review
description: Perform automated code reviews
license: MIT
compatibility: Python 3.12+
user-invocable: true
allowed-tools:
- read_file
- grep
- ask_user_question
---
# Code Review Skill
This skill helps analyze code quality and suggest improvements.
```
### Skill Discovery
Vibe discovers skills from multiple locations:
1. **Custom paths**: Configured in `config.toml` via `skill_paths`
2. **Standard Agent Skills path** (project root, trusted folders only): `.agents/skills/` — [Agent Skills](https://agentskills.io) standard
3. **Local project skills** (project root, trusted folders only): `.vibe/skills/` in your project
4. **Global skills directory**: `~/.vibe/skills/`
```toml
skill_paths = ["/path/to/custom/skills"]
```
### Managing Skills
Enable or disable skills using patterns in your configuration:
```toml
# Enable specific skills
enabled_skills = ["code-review", "test-*"]
# Disable specific skills
disabled_skills = ["experimental-*"]
```
Skills support the same pattern matching as tools (exact names, glob patterns, and regex).
## Configuration
### Configuration File Location
Vibe is configured via a `config.toml` file. It looks for this file first in `./.vibe/config.toml` and then falls back to `~/.vibe/config.toml`.
### API Key Configuration
To use Vibe, you'll need a Mistral API key. You can obtain one by signing up at [https://console.mistral.ai](https://console.mistral.ai).
You can configure your API key using `vibe --setup`, or through one of the methods below.
Vibe supports multiple ways to configure your API keys:
1. **Interactive Setup (Recommended for first-time users)**: When you run Vibe for the first time or if your API key is missing, Vibe will prompt you to enter it. The key will be securely saved to `~/.vibe/.env` for future sessions.
2. **Environment Variables**: Set your API key as an environment variable:
```bash
export MISTRAL_API_KEY="your_mistral_api_key"
```
3. **`.env` File**: Create a `.env` file in `~/.vibe/` and add your API keys:
```bash
MISTRAL_API_KEY=your_mistral_api_key
```
Vibe automatically loads API keys from `~/.vibe/.env` on startup. Environment variables take precedence over the `.env` file if both are set.
**Note**: The `.env` file is specifically for API keys and other provider credentials. General Vibe configuration should be done in `config.toml`.
### Custom System Prompts
You can create custom system prompts to replace the default one (`prompts/cli.md`). Create a markdown file in the `~/.vibe/prompts/` directory with your custom prompt content.
To use a custom system prompt, set the `system_prompt_id` in your configuration to match the filename (without the `.md` extension):
```toml
# Use a custom system prompt
system_prompt_id = "my_custom_prompt"
```
This will load the prompt from `~/.vibe/prompts/my_custom_prompt.md`.
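Creating the prompt file itself is just a matter of dropping Markdown into the prompts directory; the content below is only a placeholder:
```bash
mkdir -p ~/.vibe/prompts
cat > ~/.vibe/prompts/my_custom_prompt.md <<'EOF'
You are a concise coding assistant. Prefer small, reviewable changes and explain each one briefly.
EOF
```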
### Custom Agent Configurations
You can create custom agent configurations for specific use cases (e.g., red-teaming, specialized tasks) by adding agent-specific TOML files in the `~/.vibe/agents/` directory.
To use a custom agent, run Vibe with the `--agent` flag:
```bash
vibe --agent my_custom_agent
```
Vibe will look for a file named `my_custom_agent.toml` in the agents directory and apply its configuration.
Example custom agent configuration (`~/.vibe/agents/redteam.toml`):
```toml
# Custom agent configuration for red-teaming
active_model = "devstral-2"
system_prompt_id = "redteam"
# Disable some tools for this agent
disabled_tools = ["search_replace", "write_file"]
# Override tool permissions for this agent
[tools.bash]
permission = "always"
[tools.read_file]
permission = "always"
```
Note: this assumes you have created a matching red-team prompt at `~/.vibe/prompts/redteam.md`.
### Tool Management
#### Enable/Disable Tools with Patterns
You can control which tools are active using `enabled_tools` and `disabled_tools`.
These fields support exact names, glob patterns, and regular expressions.
Examples:
```toml
# Only enable tools that start with "serena_" (glob)
enabled_tools = ["serena_*"]
# Regex (prefix with re:) — matches full tool name (case-insensitive)
enabled_tools = ["re:^serena_.*$"]
# Disable a group with glob; everything else stays enabled
disabled_tools = ["mcp_*", "grep"]
```
Notes:
- MCP tool names use underscores, e.g., `serena_list` not `serena.list`.
- Regex patterns are matched against the full tool name using fullmatch.
### MCP Server Configuration
You can configure MCP (Model Context Protocol) servers to extend Vibe's capabilities. Add MCP server configurations under the `mcp_servers` section:
```toml
# Example MCP server configurations
[[mcp_servers]]
name = "my_http_server"
transport = "http"
url = "http://localhost:8000"
headers = { "Authorization" = "Bearer my_token" }
api_key_env = "MY_API_KEY_ENV_VAR"
api_key_header = "Authorization"
api_key_format = "Bearer {token}"
[[mcp_servers]]
name = "my_streamable_server"
transport = "streamable-http"
url = "http://localhost:8001"
headers = { "X-API-Key" = "my_api_key" }
[[mcp_servers]]
name = "fetch_server"
transport = "stdio"
command = "uvx"
args = ["mcp-server-fetch"]
env = { "DEBUG" = "1", "LOG_LEVEL" = "info" }
```
Supported transports:
- `http`: Standard HTTP transport
- `streamable-http`: HTTP transport with streaming support
- `stdio`: Standard input/output transport (for local processes)
Key fields:
- `name`: A short alias for the server (used in tool names)
- `transport`: The transport type
- `url`: Base URL for HTTP transports
- `headers`: Additional HTTP headers
- `api_key_env`: Environment variable containing the API key
- `command`: Command to run for stdio transport
- `args`: Additional arguments for stdio transport
- `startup_timeout_sec`: Timeout in seconds for the server to start and initialize (default 10s)
- `tool_timeout_sec`: Timeout in seconds for tool execution (default 60s)
- `env`: Environment variables to set for the MCP server of transport type stdio
MCP tools are named using the pattern `{server_name}_{tool_name}` and can be configured with permissions like built-in tools:
```toml
# Configure permissions for specific MCP tools
[tools.fetch_server_get]
permission = "always"
[tools.my_http_server_query]
permission = "ask"
```
MCP server configurations support additional features:
- **Environment variables**: Set environment variables for MCP servers
- **Custom timeouts**: Configure startup and tool execution timeouts
Example with environment variables and timeouts:
```toml
[[mcp_servers]]
name = "my_server"
transport = "http"
url = "http://localhost:8000"
env = { "DEBUG" = "1", "LOG_LEVEL" = "info" }
startup_timeout_sec = 15
tool_timeout_sec = 120
```
### Session Management
#### Session Continuation and Resumption
Vibe supports continuing from previous sessions:
- **`--continue`** or **`-c`**: Continue from the most recent saved session
- **`--resume SESSION_ID`**: Resume a specific session by ID (supports partial matching)
```bash
# Continue from last session
vibe --continue
# Resume specific session
vibe --resume abc123
```
Session logging must be enabled in your configuration for these features to work.
#### Working Directory Control
Use the `--workdir` option to specify a working directory:
```bash
vibe --workdir /path/to/project
```
This is useful when you want to run Vibe from a different location than your current directory.
### Update Settings
#### Auto-Update
Vibe includes an automatic update feature that keeps your installation current. This is enabled by default.
To disable auto-updates, add this to your `config.toml`:
```toml
enable_auto_update = false
```
### Custom Vibe Home Directory
By default, Vibe stores its configuration in `~/.vibe/`. You can override this by setting the `VIBE_HOME` environment variable:
```bash
export VIBE_HOME="/path/to/custom/vibe/home"
```
This affects where Vibe looks for:
- `config.toml` - Main configuration
- `.env` - API keys
- `agents/` - Custom agent configurations
- `prompts/` - Custom system prompts
- `tools/` - Custom tools
- `logs/` - Session logs
## Editors/IDEs
Mistral Vibe can be used in text editors and IDEs that support [Agent Client Protocol](https://agentclientprotocol.com/overview/clients). See the [ACP Setup documentation](docs/acp-setup.md) for setup instructions for various editors and IDEs.
## Resources
- [CHANGELOG](CHANGELOG.md) - See what's new in each version
- [CONTRIBUTING](CONTRIBUTING.md) - Guidelines for feature requests, feedback and bug reports
## Data collection & usage
Use of Vibe is subject to our [Privacy Policy](https://legal.mistral.ai/terms/privacy-policy) and may include the collection and processing of data related to your use of the service, such as usage data, to operate, maintain, and improve Vibe. You can disable telemetry in your `config.toml` by setting `enable_telemetry = false`.
## License
Copyright 2025 Mistral AI
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the [LICENSE](LICENSE) file for the full license text.
| text/markdown | Mistral AI | null | null | null | Apache-2.0 | ai, cli, coding-assistant, developer-tools, llm, mistral | [
"Development Status :: 5 - Production/Stable",
"Intended Audience :: Developers",
"License :: OSI Approved :: Apache Software License",
"Operating System :: OS Independent",
"Programming Language :: Python",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.12",
"Topic :: Sof... | [] | null | null | >=3.12 | [] | [] | [] | [
"agent-client-protocol==0.8.0",
"anyio>=4.12.0",
"cryptography<=46.0.3,>=44.0.0",
"gitpython>=3.1.46",
"giturlparse>=0.14.0",
"google-auth>=2.0.0",
"httpx>=0.28.1",
"keyring>=25.6.0",
"mcp>=1.14.0",
"mistralai==1.9.11",
"packaging>=24.1",
"pexpect>=4.9.0",
"pydantic-settings>=2.12.0",
"pyd... | [] | [] | [] | [
"Homepage, https://github.com/mistralai/mistral-vibe",
"Repository, https://github.com/mistralai/mistral-vibe",
"Issues, https://github.com/mistralai/mistral-vibe/issues",
"Documentation, https://github.com/mistralai/mistral-vibe#readme"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-19T11:07:11.038527 | mistral_vibe-2.2.1.tar.gz | 456,945 | 0d/68/c4e0361fe62d65bef06016375393e8d61da5fd7e6ea39d07fc9d855ce6a1/mistral_vibe-2.2.1.tar.gz | source | sdist | null | false | e210382f4767b833fe5c0153149d8241 | a6319171fdd9929bda2e541b2fbee7d6071952e0b40e4c70cb2c694b09c7db6b | 0d68c4e0361fe62d65bef06016375393e8d61da5fd7e6ea39d07fc9d855ce6a1 | null | [
"LICENSE"
] | 7,470 |
2.4 | mnase-classifier | 0.2.5 | Add your description here | <!--
SPDX-FileCopyrightText: 2026 Laurent Modolo <laurent.modolo@cnrs.fr>, Lauryn Trouillot <lauryn.trouillot@ens-lyon.fr>
SPDX-License-Identifier: AGPL-3.0-or-later
-->
[](http://gitbio.ens-lyon.fr/LBMC/Bernard/mnase_classifier/-/commits/main)
[](http://gitbio.ens-lyon.fr/LBMC/Bernard/mnase_classifier/-/commits/main)
# mnase_classifier
A detailed description of this project can be found [here](https://lbmc.gitbiopages.ens-lyon.fr/biocomp/projects/2025/2025_12_19_bernard/)
## Getting started
Getting the project up and running is as simple as:
### Install from pypi
```bash
pipx install mnase-classifier
```
### Install from source
If you want to install it from source, you will need [uv](https://github.com/astral-sh/uv) installed.
Clone the repository:
```bash
git clone git@gitbio.ens-lyon.fr:LBMC/Bernard/mnase_classifier.git
cd mnase_classifier
```
Initialize the **uv** environment:
```bash
uv sync
```
Print help:
```bash
uv run mnase_classifier --help
```
### Usage example
Classify paired reads from a BAM file into groups based on insert size:
```bash
uv run mnase_classifier --bam_input path/to/file.bam --outdir outdir --means "110 150 250 300" -v
```
The `-v` flag enables verbose output for debugging.
| text/markdown | null | null | null | null | null | null | [] | [] | null | null | >=3.13 | [] | [] | [] | [
"numpy==2.3.5",
"pysam>=0.23.3",
"rich-click>=1.9.7",
"scipy>=1.17.0"
] | [] | [] | [] | [
"Homepage, https://gitbio.ens-lyon.fr/LBMC/Bernard/mnase_classifier"
] | twine/6.2.0 CPython/3.13.12 | 2026-02-19T11:06:19.530465 | mnase_classifier-0.2.5.tar.gz | 20,041 | d0/39/7ff3cd6c02c3bf49a2a32193f0eb5d0ab8549b1378be444d77943746da4b/mnase_classifier-0.2.5.tar.gz | source | sdist | null | false | d8bba4ba02a08c414403a1fe3c355194 | fcadca11f675af8f17b8ccb3d94c99facaef7f4149b8a1460b47d61badf0273b | d0397ff3cd6c02c3bf49a2a32193f0eb5d0ab8549b1378be444d77943746da4b | AGPL-3.0-or-later | [
"LICENSE"
] | 249 |
2.4 | deling | 0.4.5 | A library for accessing and storing data in remote storage systems | ======
deling
======
.. image:: https://readthedocs.org/projects/deling/badge/?version=latest
   :target: https://deling.readthedocs.io/en/latest/?badge=latest
   :alt: Documentation Status
.. image:: https://badge.fury.io/py/deling.svg
   :target: https://badge.fury.io/py/deling
deling is a set of utilities for accessing and writing data across remote data storages.
Installation
------------
Installation from pypi
.. code-block:: sh
   pip install deling
Installation from source
.. code-block:: sh
   git clone https://github.com/rasmunk/deling.git
   cd deling
   make install
Datastores
----------
This package contains a set of datastores for accessing and conducting IO operations against remote data storage systems.
Currently the package supports datastores that can be accessed through the following protocols:
- `SFTP <https://en.wikipedia.org/wiki/SSH_File_Transfer_Protocol>`_
- `SSHFS <https://en.wikipedia.org/wiki/SSHFS>`_
Helper Datastores
-----------------
To ease the use of the datastores, a set of helper datastores are provided. These datastores are wrappers around the basic datastore that have been implemented.
The helper datastores are:
- ERDAShare/ERDASFTPShare which connects to pre-created `ERDA <https://erda.dk>`_ sharelinks.
Additional documentation can be found at `ReadTheDocs <https://deling.readthedocs.io/en/latest/>`_
| null | Rasmus Munk | code@munk0.dk | null | null | GNU General Public License v2 (GPLv2) | Data IO, Staging data, Data transfer, Data storage, Data management | [
"Intended Audience :: Developers",
"Intended Audience :: Science/Research",
"Programming Language :: Python",
"Programming Language :: Python :: 3.9",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programming Language :: ... | [] | https://github.com/rasmunk/deling | null | null | [] | [] | [] | [
"ssh2-python>=1.2.0",
"PyYAML>=6.0.1",
"docutils>=0.18.1; extra == \"test\"",
"Pygments>=2.15.0; extra == \"test\"",
"docker>=6.1.3; extra == \"test\"",
"pytest>=7.1.2; extra == \"test\"",
"rstcheck>=6.2.4; extra == \"test\"",
"black==24.3.0; extra == \"dev\"",
"docutils==0.18.1; extra == \"dev\"",
... | [] | [] | [] | [
"Source Code, https://github.com/rasmunk/deling"
] | twine/6.2.0 CPython/3.12.4 | 2026-02-19T11:06:17.690137 | deling-0.4.5.tar.gz | 27,317 | 8f/b4/8bf238265c688ecc05cc77b28b8c2144199ad945ade03ec70c978f5394de/deling-0.4.5.tar.gz | source | sdist | null | false | bde2ed5f0fa6a3ef317f1f0f9583a86b | 50c4254903c405e3cafd78a4353f9983b209849168cb537e4ea67924a189daf7 | 8fb48bf238265c688ecc05cc77b28b8c2144199ad945ade03ec70c978f5394de | null | [
"LICENSE"
] | 260 |
2.4 | pubmed_downloader | 0.0.13 | Automate downloading and processing PubMed | <h1 align="center">
PubMed Downloader
</h1>
<p align="center">
<a href="https://github.com/cthoyt/pubmed-downloader/actions/workflows/tests.yml">
<img alt="Tests" src="https://github.com/cthoyt/pubmed-downloader/actions/workflows/tests.yml/badge.svg" /></a>
<a href="https://pypi.org/project/pubmed_downloader">
<img alt="PyPI" src="https://img.shields.io/pypi/v/pubmed_downloader" /></a>
<a href="https://pypi.org/project/pubmed_downloader">
<img alt="PyPI - Python Version" src="https://img.shields.io/pypi/pyversions/pubmed_downloader" /></a>
<a href="https://github.com/cthoyt/pubmed-downloader/blob/main/LICENSE">
<img alt="PyPI - License" src="https://img.shields.io/pypi/l/pubmed_downloader" /></a>
<a href='https://pubmed-downloader.readthedocs.io/en/latest/?badge=latest'>
<img src='https://readthedocs.org/projects/pubmed-downloader/badge/?version=latest' alt='Documentation Status' /></a>
<a href="https://codecov.io/gh/cthoyt/pubmed-downloader/branch/main">
<img src="https://codecov.io/gh/cthoyt/pubmed-downloader/branch/main/graph/badge.svg" alt="Codecov status" /></a>
<a href="https://github.com/cthoyt/cookiecutter-python-package">
<img alt="Cookiecutter template from @cthoyt" src="https://img.shields.io/badge/Cookiecutter-snekpack-blue" /></a>
<a href="https://github.com/astral-sh/ruff">
<img src="https://img.shields.io/endpoint?url=https://raw.githubusercontent.com/astral-sh/ruff/main/assets/badge/v2.json" alt="Ruff" style="max-width:100%;"></a>
<a href="https://github.com/cthoyt/pubmed-downloader/blob/main/.github/CODE_OF_CONDUCT.md">
<img src="https://img.shields.io/badge/Contributor%20Covenant-2.1-4baaaa.svg" alt="Contributor Covenant"/></a>
<!-- uncomment if you archive on zenodo
<a href="https://zenodo.org/badge/latestdoi/XXXXXX">
<img src="https://zenodo.org/badge/XXXXXX.svg" alt="DOI"></a>
-->
</p>
Automate downloading and processing PubMed.
## 💪 Getting Started
The following will automatically download all 30M+ articles, organize them
deterministically on the local filesystem using
[`pystow`](https://github.com/cthoyt/pystow), process them, cache them as JSON,
and iterate over the articles in Pydantic models from newest to oldest:
```python
import pubmed_downloader
for article in pubmed_downloader.iterate_process_articles():
...
```
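For a quick look at the newest records without consuming the whole corpus in one go, the generator combines naturally with standard-library tools; the printed representation depends on the package's Pydantic models:
```python
from itertools import islice

import pubmed_downloader

# Peek at the five newest processed articles; each item is a Pydantic model.
for article in islice(pubmed_downloader.iterate_process_articles(), 5):
    print(article)
```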
The following will automatically download, organize, and structure 600K+ records
in the NLM Catalog. A subset of these records correspond to journals.
```python
import pubmed_downloader
for catalog_record in pubmed_downloader.process_catalog():
...
```
## 🚀 Installation
The most recent release can be installed from
[PyPI](https://pypi.org/project/pubmed_downloader/) with uv:
```console
$ uv pip install pubmed_downloader
```
or with pip:
```console
$ python3 -m pip install pubmed_downloader
```
The most recent code and data can be installed directly from GitHub with uv:
```console
$ uv pip install git+https://github.com/cthoyt/pubmed-downloader.git
```
or with pip:
```console
$ python3 -m pip install git+https://github.com/cthoyt/pubmed-downloader.git
```
## 👐 Contributing
Contributions, whether filing an issue, making a pull request, or forking, are
appreciated. See
[CONTRIBUTING.md](https://github.com/cthoyt/pubmed-downloader/blob/master/.github/CONTRIBUTING.md)
for more information on getting involved.
## 👋 Attribution
### ⚖️ License
The code in this package is licensed under the MIT License.
<!--
### 📖 Citation
Citation goes here!
-->
<!--
### 🎁 Support
This project has been supported by the following organizations (in alphabetical order):
- [Biopragmatics Lab](https://biopragmatics.github.io)
-->
<!--
### 💰 Funding
This project has been supported by the following grants:
| Funding Body | Program | Grant Number |
|---------------|--------------------------------------------------------------|--------------|
| Funder | [Grant Name (GRANT-ACRONYM)](https://example.com/grant-link) | ABCXYZ |
-->
### 🍪 Cookiecutter
This package was created with
[@audreyfeldroy](https://github.com/audreyfeldroy)'s
[cookiecutter](https://github.com/cookiecutter/cookiecutter) package using
[@cthoyt](https://github.com/cthoyt)'s
[cookiecutter-snekpack](https://github.com/cthoyt/cookiecutter-snekpack)
template.
## 🛠️ For Developers
<details>
<summary>See developer instructions</summary>
The final section of the README is for if you want to get involved by making a
code contribution.
### Development Installation
To install in development mode, use the following:
```console
$ git clone git+https://github.com/cthoyt/pubmed-downloader.git
$ cd pubmed-downloader
$ uv pip install -e .
```
Alternatively, install using pip:
```console
$ python3 -m pip install -e .
```
### 🥼 Testing
After cloning the repository and installing `tox` with
`uv tool install tox --with tox-uv` or `python3 -m pip install tox tox-uv`, the
unit tests in the `tests/` folder can be run reproducibly with:
```console
$ tox -e py
```
Additionally, these tests are automatically re-run with each commit in a
[GitHub Action](https://github.com/cthoyt/pubmed-downloader/actions?query=workflow%3ATests).
### 📖 Building the Documentation
The documentation can be built locally using the following:
```console
$ git clone git+https://github.com/cthoyt/pubmed-downloader.git
$ cd pubmed-downloader
$ tox -e docs
$ open docs/build/html/index.html
```
The documentation automatically installs the package as well as the `docs` extra
specified in the [`pyproject.toml`](pyproject.toml). `sphinx` plugins like
`texext` can be added there. Additionally, they need to be added to the
`extensions` list in [`docs/source/conf.py`](docs/source/conf.py).
The documentation can be deployed to [ReadTheDocs](https://readthedocs.io) using
[this guide](https://docs.readthedocs.io/en/stable/intro/import-guide.html). The
[`.readthedocs.yml`](.readthedocs.yml) YAML file contains all the configuration
you'll need. You can also set up continuous integration on GitHub to check not
only that Sphinx can build the documentation in an isolated environment (i.e.,
with `tox -e docs-test`) but also that
[ReadTheDocs can build it too](https://docs.readthedocs.io/en/stable/pull-requests.html).
</details>
## 🧑💻 For Maintainers
<details>
<summary>See maintainer instructions</summary>
### Initial Configuration
#### Configuring ReadTheDocs
[ReadTheDocs](https://readthedocs.org) is an external documentation hosting
service that integrates with GitHub's CI/CD. Do the following for each
repository:
1. Log in to ReadTheDocs with your GitHub account to install the integration at
https://readthedocs.org/accounts/login/?next=/dashboard/
2. Import your project by navigating to https://readthedocs.org/dashboard/import
then clicking the plus icon next to your repository
3. You can rename the repository on the next screen using a more stylized name
(i.e., with spaces and capital letters)
4. Click next, and you're good to go!
#### Configuring Archival on Zenodo
[Zenodo](https://zenodo.org) is a long-term archival system that assigns a DOI
to each release of your package. Do the following for each repository:
1. Log in to Zenodo via GitHub with this link:
https://zenodo.org/oauth/login/github/?next=%2F. This brings you to a page
that lists all of your organizations and asks you to approve installing the
Zenodo app on GitHub. Click "grant" next to any organizations you want to
enable the integration for, then click the big green "approve" button. This
step only needs to be done once.
2. Navigate to https://zenodo.org/account/settings/github/, which lists all of
your GitHub repositories (both in your username and any organizations you
enabled). Click the on/off toggle for any relevant repositories. When you
make a new repository, you'll have to come back to this
After these steps, you're ready to go! After you make "release" on GitHub (steps
for this are below), you can navigate to
https://zenodo.org/account/settings/github/repository/cthoyt/pubmed-downloader
to see the DOI for the release and link to the Zenodo record for it.
#### Registering with the Python Package Index (PyPI)
The [Python Package Index (PyPI)](https://pypi.org) hosts packages so they can
be easily installed with `pip`, `uv`, and equivalent tools.
1. Register for an account [here](https://pypi.org/account/register)
2. Navigate to https://pypi.org/manage/account and make sure you have verified
your email address. A verification email might not have been sent by default,
so you might have to click the "options" dropdown next to your address to get
to the "re-send verification email" button
3. 2-Factor authentication is required for PyPI since the end of 2023 (see this
[blog post from PyPI](https://blog.pypi.org/posts/2023-05-25-securing-pypi-with-2fa/)).
This means you have to first issue account recovery codes, then set up
2-factor authentication
4. Issue an API token from https://pypi.org/manage/account/token
This only needs to be done once per developer.
#### Configuring your machine's connection to PyPI
This needs to be done once per machine.
```console
$ uv tool install keyring
$ keyring set https://upload.pypi.org/legacy/ __token__
$ keyring set https://test.pypi.org/legacy/ __token__
```
Note that this deprecates previous workflows using `.pypirc`.
### 📦 Making a Release
#### Uploading to PyPI
After installing the package in development mode and installing `tox` with
`uv tool install tox --with tox-uv` or `python3 -m pip install tox tox-uv`, run
the following from the console:
```console
$ tox -e finish
```
This script does the following:
1. Uses [bump-my-version](https://github.com/callowayproject/bump-my-version) to
switch the version number in the `pyproject.toml`, `CITATION.cff`,
`src/pubmed_downloader/version.py`, and
[`docs/source/conf.py`](docs/source/conf.py) to not have the `-dev` suffix
2. Packages the code in both a tar archive and a wheel using
[`uv build`](https://docs.astral.sh/uv/guides/publish/#building-your-package)
3. Uploads to PyPI using
[`uv publish`](https://docs.astral.sh/uv/guides/publish/#publishing-your-package).
4. Push to GitHub. You'll need to make a release going with the commit where the
version was bumped.
5. Bump the version to the next patch. If you made big changes and want to bump
the version by minor, you can use `tox -e bumpversion -- minor` after.
#### Releasing on GitHub
1. Navigate to https://github.com/cthoyt/pubmed-downloader/releases/new to draft
a new release
2. Click the "Choose a Tag" dropdown and select the tag corresponding to the
release you just made
3. Click the "Generate Release Notes" button to get a quick outline of recent
changes. Modify the title and description as you see fit
4. Click the big green "Publish Release" button
This will trigger Zenodo to assign a DOI to your release as well.
### Updating Package Boilerplate
This project uses `cruft` to keep boilerplate (i.e., configuration, contribution
guidelines, documentation configuration) up-to-date with the upstream
cookiecutter package. Install cruft with either `uv tool install cruft` or
`python3 -m pip install cruft` then run:
```console
$ cruft update
```
More info on Cruft's update command is available
[here](https://github.com/cruft/cruft?tab=readme-ov-file#updating-a-project).
</details>
| text/markdown | Charles Tapley Hoyt | Charles Tapley Hoyt <cthoyt@gmail.com> | Charles Tapley Hoyt | Charles Tapley Hoyt <cthoyt@gmail.com> | null | snekpack, cookiecutter, PubMed, MEDLINE, biomedical literature | [
"Development Status :: 1 - Planning",
"Environment :: Console",
"Intended Audience :: Developers",
"License :: OSI Approved :: MIT License",
"Operating System :: OS Independent",
"Framework :: Pytest",
"Framework :: tox",
"Framework :: Sphinx",
"Natural Language :: English",
"Programming Language ... | [] | null | null | >=3.10 | [] | [] | [] | [
"tqdm",
"click",
"more-click>=0.1.3",
"requests",
"beautifulsoup4",
"pystow",
"pydantic",
"pydantic-extra-types",
"lxml",
"curies>=0.10.6",
"ssslm>=0.0.12",
"typing-extensions",
"more-itertools",
"ratelimit",
"more-click",
"pyobo; extra == \"process\"",
"orcid-downloader; extra == \"... | [] | [] | [] | [
"Bug Tracker, https://github.com/cthoyt/pubmed-downloader/issues",
"Homepage, https://github.com/cthoyt/pubmed-downloader",
"Repository, https://github.com/cthoyt/pubmed-downloader.git",
"Documentation, https://pubmed_downloader.readthedocs.io",
"Funding, https://github.com/sponsors/cthoyt"
] | uv/0.10.2 {"installer":{"name":"uv","version":"0.10.2","subcommand":["publish"]},"python":null,"implementation":{"name":null,"version":null},"distro":{"name":"macOS","version":null,"id":null,"libc":null},"system":{"name":null,"release":null},"cpu":null,"openssl_version":null,"setuptools_version":null,"rustc_version":null,"ci":null} | 2026-02-19T11:06:09.793253 | pubmed_downloader-0.0.13.tar.gz | 28,270 | 1a/40/1258c4b62508bc260b5101c7f66e5da5f8b2dfcf58a16e7a756cf1db5969/pubmed_downloader-0.0.13.tar.gz | source | sdist | null | false | 33a874740333d8ae50d789be004ad66f | 712c5b04b63f4f101174ae5b7ed065a3e33a805d999cb10b6a903d520f0331f6 | 1a401258c4b62508bc260b5101c7f66e5da5f8b2dfcf58a16e7a756cf1db5969 | null | [
"LICENSE"
] | 0 |
2.4 | io-connect | 1.7.1 | io connect library | # io_connect
`io_connect` is a Python package designed for system monitoring and data management. It includes components for handling alerts, MQTT messaging, data access, and event management.
## Components
### Alerts Handler
The `AlertsHandler` class is a critical component for system monitoring and maintenance. It enables the seamless dissemination of alerts through email and Microsoft Teams, ensuring timely and effective communication of important notifications.
#### Features
- **Email Alerts**: Send alerts directly to email addresses.
- **Microsoft Teams Notifications**: Integrate with Microsoft Teams for notifications.
### MQTT Handler
The `MQTTHandler` class provides an interface for publishing data to an MQTT broker. It supports reliable and efficient message sending, whether you're transmitting individual payloads or managing data streams.
#### Features
- **Flexible Publishing**: Send single or multiple messages.
- **Reliable Transmission**: Ensure data reaches its destination reliably.
### Data Access
The `DataAccess` class offers a comprehensive set of methods for various data retrieval tasks. It supports operations such as retrieving device metadata and querying databases for precise information, optimizing data access workflows.
#### Features
- **Device Metadata Retrieval**: Access information about devices.
- **Database Queries**: Perform queries for accurate data retrieval.
### Events Handler
The `EventsHandler` class provides a versatile interface for interacting with an API dedicated to event and notification management. It facilitates event publishing, data retrieval, category fetching, and event analysis.
#### Features
- **Event Publishing**: Publish events efficiently.
- **Data Retrieval**: Retrieve event data within specified intervals.
- **Category Fetching**: Get detailed event categories.
- **Event Analysis**: Analyze events comprehensively.
## Installation
To install `io_connect`, you can use pip:
```bash
pip install io_connect
```
## Documentation
[Documentation](https://wiki.iosense.io/en/Data_Science/io-connect)
## License
This project is licensed under the MIT License - see the [LICENSE](LICENSE) file for details.
# Reach us
For usage questions, please reach out to us at reachus@faclon.com
| text/markdown | Faclon-Labs | datascience@faclon.com | null | null | MIT | null | [
"Programming Language :: Python :: 3.9",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Operating System :: OS Independent"
] | [] | null | null | >=3.9 | [] | [] | [] | [
"wheel",
"pandas",
"numpy",
"pytz",
"python_dateutil",
"requests",
"typing_extensions",
"typeguard",
"urllib3",
"pymongo",
"paho_mqtt==1.6.1",
"aiohttp; extra == \"all\"",
"asyncio; extra == \"all\"",
"polars==1.32.3; extra == \"all\"",
"polars==1.32.3; extra == \"all\"",
"aiofiles; ex... | [] | [] | [] | [] | twine/6.2.0 CPython/3.10.12 | 2026-02-19T11:05:40.588482 | io_connect-1.7.1.tar.gz | 93,500 | 4d/e1/9beb8e246059faf9d44a30d8e49a660a2fa3207708886ee91588896d45a1/io_connect-1.7.1.tar.gz | source | sdist | null | false | ab5de48831ad2387e8ab64c4054aff4b | 812df8ff7b132e040fbccc8e9028b77b2695120a9a7387f16cd2ee413e535869 | 4de19beb8e246059faf9d44a30d8e49a660a2fa3207708886ee91588896d45a1 | null | [
"LICENSE"
] | 261 |
2.4 | prismatui | 0.3.2 | A TUI framework based on the idea of "multi-layered transparency" composition. | # PRISMA TUI
**Prisma TUI** (Python teRminal graphIcS with Multilayered trAnsparency) is a Python framework for building composable Terminal User Interfaces (TUIs). The `Terminal` class serves as a wrapper for terminal backends (e.g. curses) while providing a customizable application loop. Flexible layouts can be arranged by creating a hierarchy of `Section` class instances. Complex displays can be composed by overlaying multiple `Layer` instances on top of each other and merging them together.
**Prisma** is built around the idea of *multilayered transparency*, which consists in overlaying different "layers" of text on top of each other and merging them together to compose more complex displays (think of stacking together images with transparency). This can be achieved by using the `Layer` class. **Prisma** also provides advanced color management, allowing to write and read multi-colored layers from its own custom **PAL** (*PALette*, JSON with color pair values) and **PRI** (*PRisma Image*, binary with the chars and the respective color pairs to form an image) formats.
<p align="center">
<img src="logo.png" alt="Prisma TUI Logo" width="200"/><br>
<i>Prisma, the cat</i>, as rendered by prismatui.
</p>
<!-- ----------------------------------------------------------------------- -->
## QuickStart
### Run Demo
```
pip install prismatui
python3 demos/layouts.py
```
## Code Example
```python
import prismatui as pr
class MyTUI(pr.Terminal):
    def on_start(self):
        pr.init_pair(1, pr.COLOR_BLACK, pr.COLOR_CYAN)

    def on_update(self):
        self.draw_text('c', 'c', "Hello, pr!", pr.A_BOLD)
        self.draw_text("c+1", 'c', f"Key pressed: {self.key}", pr.A_BOLD)
        self.draw_text('b', 'l', "Press q to exit", pr.get_color_pair(1))

    def should_stop(self):
        return self.key in (pr.KEY_Q_LOWER, pr.KEY_Q_UPPER)

if __name__ == "__main__":
    MyTUI().run()
```
<!-- ----------------------------------------------------------------------- -->
## Demos
See the [`demos/`](demos/) folder for example applications:
- [`images.py`](demos/images.py): Image rendered from a pair of PRI and PAL files.
- [`layouts.py`](demos/layouts.py): Example of a complex layout built using different Section techniques.
- [`movement.py`](demos/movement.py): Example of an application in no-delay mode.
- [`keys.py`](demos/keys.py): Simple "hello world" example.
<!-- ----------------------------------------------------------------------- -->
| text/markdown | DiegoBarMor | diegobarmor42@gmail.com | null | null | MIT | tui terminal user interface transparency layers layered curses | [
"Programming Language :: Python :: 3",
"License :: OSI Approved :: MIT License",
"Operating System :: OS Independent"
] | [] | https://github.com/diegobarmor/prismatui | null | >=3.10 | [] | [] | [] | [] | [] | [] | [] | [] | twine/6.1.0 CPython/3.13.2 | 2026-02-19T11:05:31.221355 | prismatui-0.3.2.tar.gz | 18,464 | 14/73/e5ec6e1b812ce66bc453b549363d5b529e5b57cff46471968a7570798061/prismatui-0.3.2.tar.gz | source | sdist | null | false | c253bf4044f97290e5ee67cae19070f1 | c24a7c82688e2d0b9aed27f8de07f15d53700ba7796b89f087848c338143669d | 1473e5ec6e1b812ce66bc453b549363d5b529e5b57cff46471968a7570798061 | null | [
"LICENSE"
] | 267 |
2.4 | graintools | 0.1 | Grain segmentation from Laue patterns | # graintools
[](https://pypi.org/project/graintools/)
[](https://pypi.org/project/graintools/)
[](#license)
[](#)
## What is it?
graintools is a set of Python tools aimed at segmenting X-ray diffraction data coming from polycrystalline materials. The data is acquired using the LAUEMAX setup at BM32 of the European Synchrotron Radiation Facility (ESRF).
---
## Installation
Install the test version
```bash
python3 -m pip install --index-url https://test.pypi.org/simple/ graintools
```
Or add it to your PYTHONPATH
```bash
git clone https://github.com/serbng/graintools.git
```
Inside your script or notebook
```python
>>> import sys
>>> sys.path.append("path/to/repo/graintools")
```
### Optional extras
Install dependencies necessary to run the notebooks and the simulation:
```bash
pip install "graintools[full]"
```
---
## Get started
Create a virtual environment and activate it. For example
```bash
python -m venv ~/graintools
source ~/graintools/bin/activate
```
Inside the virtual environment
```bash
pip install "graintools[full]"
```
Download and run the notebook:
```bash
git clone https://github.com/serbng/graintools.git
jupyter lab graintools/examples/simulation
```
---
## Contacts
myemail, TBD which one
---
| text/markdown | null | Sergio Bongiorno <sergiobng@proton.me> | null | null | null | null | [
"Programming Language :: Python :: 3",
"Operating System :: OS Independent"
] | [] | null | null | >=3.8 | [] | [] | [] | [
"numpy",
"multiprocess",
"scikit-learn",
"tqdm",
"numba",
"matplotlib",
"opencv-python",
"h5py",
"pandas",
"ipykernel",
"ipympk",
"jupyter",
"lauetools==3.1.19",
"lauetools; extra == \"full\"",
"jupyter; extra == \"full\"",
"ipympl; extra == \"full\"",
"ipykernel; extra == \"full\""
... | [] | [] | [] | [
"Homepage, https://github.com/serbng/graintools",
"Issues, https://github.com/serbng/graintools/issues"
] | twine/6.2.0 CPython/3.12.12 | 2026-02-19T11:04:48.910665 | graintools-0.1.tar.gz | 22,408 | 86/38/7e3f641b2c1e0590e449133f22104bf456891c64d75a6d213bd77b2309f4/graintools-0.1.tar.gz | source | sdist | null | false | 8875997e7b39d6638bc4c770a95bbc8e | bee5d42c98fc850aed5e51b25b32d2bf4c79c2ac978ae2ac0641e4d1ba88336f | 86387e3f641b2c1e0590e449133f22104bf456891c64d75a6d213bd77b2309f4 | MIT | [
"LICENSE"
] | 260 |
2.4 | datarobot-pulumi-utils | 0.1.2 | A set of Pulumi CustomResources and other utilities built on top of the pulumi-datarobot provider. | <div align="center">
<h1>DataRobot Pulumi Utils</h1>
</div>
<div align="center">
<a href="https://pypi.python.org/pypi/datarobot-pulumi-utils"><img src="https://img.shields.io/pypi/v/datarobot-pulumi-utils.svg" alt="PyPI"></a>
<a href="https://github.com/datarobot-oss/datarobot-pulumi-utils"><img src="https://img.shields.io/pypi/pyversions/datarobot-pulumi-utils.svg" alt="versions"></a>
<a href="https://github.com/datarobot-oss/datarobot-pulumi-utils/blob/main/LICENSE"><img src="https://img.shields.io/github/license/datarobot-oss/datarobot-pulumi-utils.svg?v" alt="license"></a>
</div>
---
A set of Pulumi CustomResources and other utilities built on top of the `pulumi-datarobot` provider.
| text/markdown | null | null | null | DataRobot <api-maintainer@datarobot.com> | null | null | [
"Development Status :: 4 - Beta",
"Intended Audience :: Developers",
"Intended Audience :: Information Technology",
"License :: OSI Approved :: MIT License",
"Operating System :: OS Independent",
"Programming Language :: Python",
"Programming Language :: Python :: 3",
"Programming Language :: Python :... | [] | null | null | >=3.9 | [] | [] | [] | [
"datarobot<4.0,>=3.5.2",
"papermill<3,>=2.6.0",
"pulumi-datarobot>=0.8.15",
"pulumi<4.0.0,>=3.0.0",
"pydantic<3.0,>=2.7.4",
"pyyaml<7.0,>=6.0.2"
] | [] | [] | [] | [
"Homepage, https://datarobot.com",
"Source, https://github.com/datarobot-oss/datarobot-pulumi-utils"
] | uv/0.10.4 {"installer":{"name":"uv","version":"0.10.4","subcommand":["publish"]},"python":null,"implementation":{"name":null,"version":null},"distro":{"name":"Ubuntu","version":"24.04","id":"noble","libc":null},"system":{"name":null,"release":null},"cpu":null,"openssl_version":null,"setuptools_version":null,"rustc_version":null,"ci":true} | 2026-02-19T11:04:03.918133 | datarobot_pulumi_utils-0.1.2-py3-none-any.whl | 49,649 | f6/9c/4b296caf88f28edfcc390ae41cd2ee5b88c3a06844d997754ecf657df0f2/datarobot_pulumi_utils-0.1.2-py3-none-any.whl | py3 | bdist_wheel | null | false | d3a7c89eec3ebb5bbd3f1fbd0f5360e6 | 0c5b6f35af80ba909404e9d15ed6f89843e850f7c481ad3bc65c1f8010520c81 | f69c4b296caf88f28edfcc390ae41cd2ee5b88c3a06844d997754ecf657df0f2 | Apache-2.0 | [
"AUTHORS",
"LICENSE"
] | 569 |
2.4 | roka | 0.1.0 | Lightning-fast toolkit for hackathon dominance. | # ROKA: The Hackathon Accelerator
ROKA is a high-speed Python library built to eliminate boilerplate code. It handles APIs, scraping, and ML architecture so you can focus on building your core product during high-pressure development sessions.
---
## Installation
Install via pip:
```bash
pip install roka
```
---
## Core Features
* **Zero-Config Scraping:** Extract clean data and metadata from any URL instantly.
* **Instant ML Wrappers:** One-line deployments for sentiment analysis, OCR, and summarization.
* **Streamlined API Connectors:** Pre-configured interfaces for OpenAI, Firebase, and Twilio.
* **Lightweight Design:** Minimal dependencies to ensure fast deployment and environment stability.
---
## Usage Examples
### 1. Web Scraping
Extract headlines and main content without manual parsing.
```python
from roka import Scraper
data = Scraper.quick_get("https://example-news-site.com")
print(data.headlines)
print(data.main_content)
```
### 2. Sentiment Analysis
Deploy a model for immediate text processing.
```python
from roka.ml import QuickAnalyze
result = QuickAnalyze.sentiment("This project is moving fast.")
# Output: {'label': 'POSITIVE', 'score': 0.99}
```
### 3. OpenAI Integration
Simplified setup for LLM features.
```python
from roka.connect import OpenAI
ai = OpenAI(api_key="your_key")
response = ai.ask("Provide a concept for a hackathon project.")
```
---
## Efficiency Gains
| Task | Traditional Method | With ROKA |
| :--- | :--- | :--- |
| API Setup | 20 Minutes | 2 Minutes |
| Data Scraping | 1 Hour+ | 5 Minutes |
| ML Implementation | 30 Minutes | 1 Line |
---
## Contributing
To contribute a new utility or feature:
1. Fork the repository.
2. Create a feature branch.
3. Submit a Pull Request with a description of the utility added.
---
## License
Distributed under the MIT License.
| text/markdown | null | null | null | null | null | null | [] | [] | null | null | >=3.8 | [] | [] | [] | [
"fastapi",
"uvicorn",
"httpx",
"parsel",
"scikit-learn",
"torch"
] | [] | [] | [] | [] | twine/6.2.0 CPython/3.9.7 | 2026-02-19T11:03:30.036369 | roka-0.1.0.tar.gz | 3,047 | 18/c8/cf1688a07534028d301987d207a4c0e9d93a7e9e03f58f4912f3ac0723b2/roka-0.1.0.tar.gz | source | sdist | null | false | 3185e36f75f3ce4cab9c7e65ff96199e | 874c7f271da4661cf77b2d8b64e3a7d1dbf3b88083957d96ec504207d42916e7 | 18c8cf1688a07534028d301987d207a4c0e9d93a7e9e03f58f4912f3ac0723b2 | null | [] | 287 |
2.1 | langfuse | 3.14.4 | A client library for accessing langfuse | 
# Langfuse Python SDK
[](https://opensource.org/licenses/MIT)
[](https://github.com/langfuse/langfuse-python/actions/workflows/ci.yml?query=branch%3Amain)
[](https://pypi.python.org/pypi/langfuse)
[](https://github.com/langfuse/langfuse)
[](https://discord.gg/7NXusRtqYU)
[](https://www.ycombinator.com/companies/langfuse)
## Installation
> [!IMPORTANT]
> The SDK was rewritten in v3 and released in June 2025. Refer to the [v3 migration guide](https://langfuse.com/docs/sdk/python/sdk-v3#upgrade-from-v2) for instructions on updating your code.
```
pip install langfuse
```
## Docs
Please [see our docs](https://langfuse.com/docs/sdk/python/sdk-v3) for detailed information on this SDK.
| text/markdown | langfuse | developers@langfuse.com | null | null | MIT | null | [
"License :: OSI Approved :: MIT License",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: 3.13"
] | [] | null | null | <4.0,>=3.10 | [] | [] | [] | [
"backoff>=1.10.0",
"httpx<1.0,>=0.15.4",
"openai>=0.27.8",
"opentelemetry-api<2.0.0,>=1.33.1",
"opentelemetry-exporter-otlp-proto-http<2.0.0,>=1.33.1",
"opentelemetry-sdk<2.0.0,>=1.33.1",
"packaging<26.0,>=23.2",
"pydantic<3.0,>=1.10.7",
"requests<3,>=2",
"wrapt<2.0,>=1.14"
] | [] | [] | [] | [] | twine/6.1.0 CPython/3.13.7 | 2026-02-19T11:03:09.432277 | langfuse-3.14.4.tar.gz | 235,283 | 9f/b8/8a165154fa5597a831cd87375622f23bb63871d6e9e4de60f5a114bac859/langfuse-3.14.4.tar.gz | source | sdist | null | false | 6141ad08176e3b89f533991233a93ec8 | f05f853d17eadb1f54ef29b974409ddfcec275753adcf0a2e4d99fa4386b521a | 9fb88a165154fa5597a831cd87375622f23bb63871d6e9e4de60f5a114bac859 | null | [] | 159,821 |
2.4 | traveltime-drive-time-comparisons | 1.6.4 | Compare travel times obtained from TravelTime API and other API providers | # TravelTime Drive Time Comparisons tool
This tool compares the travel times obtained from [TravelTime Routes API](https://docs.traveltime.com/api/reference/routes),
[Google Maps Directions API](https://developers.google.com/maps/documentation/directions/get-directions),
[TomTom Routing API](https://developer.tomtom.com/routing-api/documentation/tomtom-maps/routing-service),
[HERE Routing API](https://www.here.com/docs/bundle/routing-api-v8-api-reference),
and [Mapbox Directions API](https://docs.mapbox.com/api/navigation/directions/).
Source code is available on [GitHub](https://github.com/traveltime-dev/traveltime-drive-time-comparisons).
## Features
- Get travel times from TravelTime API, Google Maps API, TomTom API, HERE API and Mapbox API in parallel, for provided origin/destination pairs and a set
of departure times.
- Analyze the differences between the results and print an accuracy comparison against Google, as well as general cross-comparison results.
## Prerequisites
The tool requires Python 3.9+ installed on your system. You can download it from [here](https://www.python.org/downloads/).
## Installation
Create a new virtual environment with a chosen name (here, we'll name it 'env'):
```bash
python -m venv env
```
Activate the virtual environment:
```bash
source env/bin/activate
```
Install the project and its dependencies:
```bash
pip install traveltime-drive-time-comparisons
```
## Setup
Provide credentials and desired max requests per minute for the APIs inside the `config.json` file.
Optionally, you can set a custom endpoint for each provider.
You can also disable unwanted APIs by changing the `enabled` value to `false`.
```json
{
"traveltime": {
"app-id": "<your-app-id>",
"api-key": "<your-api-key>",
"max-rpm": "60"
},
"api-providers": [
{
"name": "google",
"enabled": true,
"api-key": "<your-api-key>",
"max-rpm": "60",
"api-endpoint": "custom-endpoint.com"
},
...other providers
]
}
```
## Usage
Run the tool:
```bash
traveltime_drive_time_comparisons --input [Input CSV file path] --output [Output CSV file path] \
--date [Date (YYYY-MM-DD)] --departure-times [Departure times (HH:MM, HH:MM)] --time-zone-id [Time zone ID]
```
Required arguments:
- `--input [Input CSV file path]`: Path to the input file. Input file is required to have a header row and at least one
row with data, with two columns: `origin` and `destination`.
The values in the columns must be latitude and longitude pairs, separated
by comma and enclosed in double quotes. For example: `"51.5074,-0.1278"`. Columns must be separated by comma as well.
Check out the [project's repository](https://github.com/traveltime-dev/traveltime-drive-time-comparisons.git)
for examples in the `examples` directory and more pre-prepared routes in the `inputs` directory.
- `--output [Output CSV file path]`: Path to the output file. It will contain the gathered travel times.
See the details in the [Output section](#output)
- `--date [Date (YYYY-MM-DD)]`: date on which the travel times are gathered. Use a future date, as the Google API returns
errors for past dates (and times). Take into account the time needed to collect the data for the provided input.
- `--departure-times [Departure times (HH:MM)]`: All departure times in `HH:MM` format, separated by comma ",", spaces can be used.
- `--time-zone-id [Time zone ID]`: non-abbreviated time zone identifier in which the time values are specified.
For example: `Europe/London`. For more information, see [here](https://en.wikipedia.org/wiki/List_of_tz_database_time_zones).
Optional arguments:
- `--config [Config file path]`: Path to the config file. Default - ./config.json
- `--accuracy-output [Output CSV file path]`: Path to the output file for the accuracy table. If not defined,
the table will only be printed to the console.
- `--skip-data-gathering`: If set, reads already gathered data from input file and skips data gathering. Input file must conform to the output file format.
- `--skip-plotting`: If set, graphs of the final summary will not be shown.
Example:
```bash
traveltime_drive_time_comparisons --input examples/uk.csv --output output.csv --date 2023-09-20 \
--departure-times "07:00, 10:00, 13:00, 16:00, 19:00" --time-zone-id "Europe/London"
```
## Console output
The console output contains the results of comparing each provider to Google (this part, of course, relies on the Google provider being enabled in the configuration file).
- **Accuracy Score - `100 - mean_absolute_error`**. This gives a score from 0 to 100 of how close the travel times are to Google on average, regardless of whether they are higher or lower.
- **Relative Time - `100 - bias` (bias can be negative)**. This gives a value around 100 (it can be higher or lower than 100), which indicates how much on average this provider undershoots or overshoots compared to Google.
Examples:
- **Accuracy Score = 95, Relative Time = 105** - this provider always returns higher results than Google, by 5% on average.
- **Accuracy Score = 95, Relative Time = 95** - this provider always returns lower results than Google, by 5% on average.
- **Accuracy Score = 95, Relative Time = 102** - this provider usually (but not always), returns higher results than Google. It's off by 5% on average, but bias is only +2%.
```
2025-07-17 11:27:45 | INFO | Baseline summary, comparing to Google:
Provider Accuracy Score Relative Time
0 Google 100.00 100.00
1 TravelTime 77.62 110.65
2 TomTom 71.24 123.93
3 HERE 62.92 127.17
4 Mapbox 57.00 135.66
```
It also contains more detailed comparisons with each API
```
2025-06-11 13:23:36 | INFO | Comparing TravelTime to other providers:
2025-06-11 13:23:36 | INFO | Mean relative error compared to Google API: 12.91%
2025-06-11 13:23:36 | INFO | 90% of TravelTime results differ from Google API by less than 32%
2025-06-11 13:23:36 | INFO | Mean relative error compared to TomTom API: 26.50%
2025-06-11 13:23:36 | INFO | 90% of TravelTime results differ from TomTom API by less than 48%
2025-06-11 13:23:36 | INFO | Comparing Google to other providers:
2025-06-11 13:23:36 | INFO | Mean relative error compared to TravelTime API: 17.11%
2025-06-11 13:23:36 | INFO | 90% of Google results differ from TravelTime API by less than 48%
2025-06-11 13:23:36 | INFO | Mean relative error compared to TomTom API: 33.21%
2025-06-11 13:23:36 | INFO | 90% of Google results differ from TomTom API by less than 40%
2025-06-11 13:23:36 | INFO | Comparing TomTom to other providers:
2025-06-11 13:23:36 | INFO | Mean relative error compared to TravelTime API: 19.58%
2025-06-11 13:23:36 | INFO | 90% of TomTom results differ from TravelTime API by less than 32%
2025-06-11 13:23:36 | INFO | Mean relative error compared to Google API: 24.60%
2025-06-11 13:23:36 | INFO | 90% of TomTom results differ from Google API by less than 29%
```
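To make the Accuracy Score and Relative Time metrics concrete, here is a small illustrative sketch of how they relate to raw travel times (not the tool's actual code; normalizing against the Google result is inferred from the sample file output below):

```python
import numpy as np

# Travel times in seconds for four routes (values taken from the sample file output below).
google = np.array([2224.0, 3042.0, 1832.0, 1896.0])
traveltime = np.array([1970.0, 2052.0, 1868.0, 2004.0])

signed_error = (traveltime - google) / google * 100   # per-route error vs Google, in percent
mean_absolute_error = np.mean(np.abs(signed_error))   # ~12.91 for these four routes
bias = -np.mean(signed_error)                         # positive when the provider is faster than Google

accuracy_score = 100 - mean_absolute_error            # ~87.1
relative_time = 100 - bias                            # ~90.9, i.e. ~9% lower than Google on average

print(f"Accuracy Score: {accuracy_score:.2f}, Relative Time: {relative_time:.2f}")
```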
## File output
The output file will contain the `origin` and `destination` columns from input file, with some additional columns:
- `departure_time`: departure time in `YYYY-MM-DD HH:MM:SS±HHMM` format.
It includes date, time and timezone offset.
- `*_travel_time`: travel time gathered from alternative provider API in seconds
- `tt_travel_time`: travel time gathered from TravelTime API in seconds
- `error_percentage_*`: relative error between provider and TravelTime travel times in percent, relative to provider result.
### Sample output with 3 providers - TravelTime, Google and TomTom
```csv
origin,destination,departure_time,tt_travel_time,google_travel_time,tomtom_travel_time,error_percentage_traveltime_to_google,error_percentage_traveltime_to_tomtom,error_percentage_google_to_traveltime,error_percentage_google_to_tomtom,error_percentage_tomtom_to_traveltime,error_percentage_tomtom_to_google
"33.05187660000014 , -117.1350031999999","33.14408130000009 , -117.02942509999977",2025-06-20 13:00:00-0700,1970.0,2224.0,1869.0,11,5,12,18,5,15
"33.05187660000014 , -117.1350031999999","33.14408130000009 , -117.02942509999977",2025-06-20 16:00:00-0700,2052.0,3042.0,2274.0,32,9,48,33,10,25
"37.36713689999986 , -122.09885940000017","37.35365440000001 , -122.21751989999996",2025-06-20 13:00:00-0700,1868.0,1832.0,1317.0,1,41,1,39,29,28
"37.36713689999986 , -122.09885940000017","37.35365440000001 , -122.21751989999996",2025-06-20 16:00:00-0700,2004.0,1896.0,1345.0,5,48,5,40,32,29
```
## License
This project is licensed under MIT License. For more details, see the LICENSE file.
| text/markdown | TravelTime | null | null | null | null | null | [
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.9",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: 3.13",
"License :: OSI Approved :: MIT License"
] | [] | null | null | >=3.9 | [] | [] | [] | [
"aiohttp>=3.13.3",
"aiolimiter>=1.2.1",
"pandas>=2.0.0",
"numpy>=2.0.0",
"pytz>=2025.2",
"PyQt5>=5.15.11",
"matplotlib>=3.7.0",
"traveltimepy>=4.0.0",
"pytest; extra == \"test\"",
"flake8; extra == \"test\"",
"flake8-pyproject; extra == \"test\"",
"mypy; extra == \"test\"",
"black; extra == ... | [] | [] | [] | [
"Homepage, https://github.com/traveltime-dev/traveltime-drive-time-comparisons"
] | twine/6.2.0 CPython/3.10.19 | 2026-02-19T11:02:28.684663 | traveltime_drive_time_comparisons-1.6.4.tar.gz | 4,837,346 | fa/1e/a2b1ed5cc792afcd92376e7c110cec7698ae3dc250f31a8abaf8de01a46e/traveltime_drive_time_comparisons-1.6.4.tar.gz | source | sdist | null | false | be4633fa3819fc6473cfcefe35f54c65 | e17836b3c3e70afd95246cf5909893424490c0a805e47b03730c957c661d4659 | fa1ea2b1ed5cc792afcd92376e7c110cec7698ae3dc250f31a8abaf8de01a46e | null | [
"LICENSE"
] | 245 |
2.4 | mbse4u-sysmlv2-helpers | 0.1.1 | Generic SysML v2 API helpers from MBSE4U | # MBSE4U SysML v2 API Helpers
Generic helper functions for interacting with SysML v2 REST API. This library simplifies the process of querying projects, commits, and traversing the SysML v2 model structure.
## Installation
You can install this package via pip.
### From Source
```bash
pip install .
```
### Editable Mode (Development)
```bash
pip install -e .
```
## Getting Started
Here is a simple example of how to connect to a server and list projects.
```python
import mbse4u_sysmlv2_api_helpers as api
SERVER_URL = "http://localhost:9000"
try:
# Fetch all projects
projects = api.get_projects(SERVER_URL)
print(f"Found {len(projects)} projects.")
for p in projects:
print(f"- {p.get('name')} (ID: {p.get('@id')})")
# Get commits for the first project
commits = api.get_commits(SERVER_URL, p['@id'])
if commits:
latest_commit = commits[-1]
print(f" Latest commit: {latest_commit.get('id')}")
except Exception as e:
print(f"Error: {e}")
```
## API Reference
### Project & Commit Management
- `get_projects(server_url: str, page_size: int = 256) -> List[Dict]`
- Fetches and sorts projects from the given server.
- `get_commits(server_url: str, project_id: str) -> List[Dict]`
- Retrieves commit history for a specific project.
- `get_commit_url(server_url: str, project_id: str, commit_id: str) -> str`
- Helper for constructing the base URL for commit-specific queries.
### Caching
- `load_model_cache(server_url: str, project_id: str, commit_id: str, page_size: int = 256) -> int`
- Loads all elements of a commit into an in-memory `ELEMENT_CACHE` to speed up subsequent queries. Returns the number of elements cached.
### Element Retrieval
- `get_element_fromAPI(query_url: str, element_id: str) -> Optional[Dict]`
- Fetches a single element by ID, checking the local cache first.
- `get_elements_fromAPI(query_url: str, element_ids: List[str]) -> List[Dict]`
- Batch retrieval of elements.
- `get_elements_byKind_fromAPI(server_url: str, project_id: str, commit_id: str, kind: str) -> List[Dict]`
- Cached query for all elements of a specific type (e.g., `'PartUsage'`).
- `get_elements_byName_fromAPI(server_url: str, project_id: str, commit_id: str, name: str) -> List[Dict]`
- Finds elements by `declaredName`. Includes logic to find elements that redefine a named element.
### Traversal & Structure
- `get_contained_elements(server_url, project_id, commit_id, element_id, kind, elementKind='ownedElement') -> List[Dict]`
- Returns children of a specific type.
- `get_recursive_owned_elements(...) -> List[Dict]`
- Recursively fetches descendants.
- `check_specialization_hierarchy(query_url, element, super_element_name) -> bool`
- Checks if an element inherits from a specific named element (useful for checking compliance with patterns like `systemOfInterest`).
- `find_elements_specializing(query_url, elements, super_element_name) -> List[Dict]`
- Filters a list to only include elements that specialize a given supertype.
- `get_feature_value(server_url, project_id, commit_id, element, feature_name) -> Any`
- Extracts the value of a feature (attribute) from an element.
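The helpers above can be combined as follows — a minimal sketch assuming a locally running SysML v2 API server and the field names used in the Getting Started example:

```python
import mbse4u_sysmlv2_api_helpers as api

SERVER_URL = "http://localhost:9000"  # assumed local server, as in the example above

# Pick the first project and its latest commit
project = api.get_projects(SERVER_URL)[0]
commit = api.get_commits(SERVER_URL, project["@id"])[-1]
project_id, commit_id = project["@id"], commit.get("id")

# Cache all elements of the commit, then query by kind without extra round trips
count = api.load_model_cache(SERVER_URL, project_id, commit_id)
print(f"Cached {count} elements")

for part in api.get_elements_byKind_fromAPI(SERVER_URL, project_id, commit_id, "PartUsage"):
    print(part.get("declaredName"))
```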
## License
Copyright 2026 MBSE4U - Tim Weilkiens. Licensed under the Apache License, Version 2.0.
| text/markdown | Tim Weilkiens | null | null | null | null | null | [] | [] | null | null | >=3.8 | [] | [] | [] | [
"requests"
] | [] | [] | [] | [] | twine/6.1.0 CPython/3.13.7 | 2026-02-19T11:02:19.867089 | mbse4u_sysmlv2_helpers-0.1.1.tar.gz | 14,819 | a4/19/0a0cf89368ce08d6a19c34242a94da77f47527b2f4e6adeff4bd816cdf97/mbse4u_sysmlv2_helpers-0.1.1.tar.gz | source | sdist | null | false | 96cc6c52148fd48d0bd8a8baa764e564 | 4df445745d20fabe9c12c86ba6e9e95ffe62a04060b4cef42fc4431b1aa29298 | a4190a0cf89368ce08d6a19c34242a94da77f47527b2f4e6adeff4bd816cdf97 | null | [
"LICENSE"
] | 266 |
2.4 | gvm-tools | 25.4.6 | Tools to control a GSM/GVM over GMP or OSP | 
# Greenbone Vulnerability Management Tools <!-- omit in toc -->
[](https://github.com/greenbone/gvm-tools/releases)
[](https://pypi.org/project/gvm-tools/)
[](https://codecov.io/gh/greenbone/gvm-tools)
[](https://github.com/greenbone/gvm-tools/actions/workflows/ci-python.yml)
The Greenbone Vulnerability Management Tools `gvm-tools` are a collection of
tools that help with remote controlling a Greenbone Enterprise Appliance and
Greenbone Community Edition installations. The tools aid in accessing the
communication protocols GMP (Greenbone Management Protocol) and OSP
(Open Scanner Protocol).
This module comprises interactive and non-interactive clients.
The programming language Python is supported directly for interactive scripting,
but it is also possible to issue remote GMP/OSP commands without programming in
Python.
## Table of Contents <!-- omit in toc -->
- [Documentation](#documentation)
- [Installation](#installation)
- [Requirements](#requirements)
- [Version](#version)
- [Usage](#usage)
- [gvm-cli](#gvm-cli)
- [Examples](#examples)
- [gvm-script](#gvm-script)
- [Example script](#example-script)
- [More example scripts](#more-example-scripts)
- [gvm-pyshell](#gvm-pyshell)
- [Example program use](#example-program-use)
- [Support](#support)
- [Maintainer](#maintainer)
- [Contributing](#contributing)
- [License](#license)
## Documentation
The documentation for `gvm-tools` can be found at
[https://greenbone.github.io/gvm-tools/](https://greenbone.github.io/gvm-tools/).
Please refer to the documentation for more details as this README just
gives a short overview.
## Installation
See the [documentation](https://greenbone.github.io/gvm-tools/install.html)
for all supported installation options.
### Requirements
Python 3.9 and later is supported.
### Version
Please consider always using the **newest** version of `gvm-tools` and `python-gvm`.
We frequently update these projects to add features and keep them free from bugs.
This is why installing `gvm-tools` using pip is recommended.
**To use `gvm-tools` with an old GMP version (7, 8, 9) you must use a release version**
**that is `<21.06`, combined with an `python-gvm` version `<21.05`.**
**In the `21.06` release the support of these older versions has been dropped.**
## Usage
There are several clients to communicate via GMP/OSP.
All clients have the ability to build a connection in various ways:
* Unix Socket
* TLS Connection
* SSH Connection
### gvm-cli
This tool sends plain GMP/OSP commands and prints the result to the standard
output.
#### Examples
Return the current protocol version used by the server:
```bash
gvm-cli socket --xml "<get_version/>"
```
Return all tasks visible to the GMP user with the provided credentials:
```bash
gvm-cli --gmp-username foo --gmp-password bar socket --xml "<get_tasks/>"
```
Read a file with GMP commands and return the result:
```bash
gvm-cli --gmp-username foo --gmp-password bar socket myfile.xml
```
Note that `gvm-cli` will by default print an error message and exit with a
non-zero exit code when a command is rejected by the server. If this kind of
error handling is not desired, the unparsed XML response can be requested using
the `--raw` parameter:
```bash
gvm-cli socket --raw --xml "<authenticate/>"
```
### gvm-script
This tool has a lot more features than the simple `gvm-cli` client. You can
create your own custom gmp or osp scripts with commands
from the [python-gvm library](https://github.com/greenbone/python-gvm) and from
Python 3 itself.
#### Example script
```python
# Retrieve current GMP version
version = gmp.get_version()
# Prints the XML in beautiful form
from gvmtools.helper import pretty_print
pretty_print(version)
# Retrieve all tasks
tasks = gmp.get_tasks()
# Get names of tasks
task_names = tasks.xpath('task/name/text()')
pretty_print(task_names)
```
#### More example scripts
There is a growing collection of gmp-scripts in the
["scripts/"](scripts/) folder.
Some of them might be exactly what you need and all of them help writing
your own gmp scripts.
### gvm-pyshell
This tool is for running gmp or osp scripts interactively. It provides the same
API as [gvm-script](#gvm-script) using the
[python-gvm library](https://github.com/greenbone/python-gvm).
#### Example program use
Connect with given credentials via a unix domain socket and open an interactive
shell:
```bash
gvm-pyshell --gmp-username user --gmp-password pass socket
```
Connect through SSH connection and open the interactive shell:
```bash
gvm-pyshell --hostname 127.0.0.1 ssh
```
## Support
For any question on the usage of `gvm-tools` or gmp scripts please use the
[Greenbone Community Portal](https://community.greenbone.net/c/gmp). If you
found a problem with the software, please
[create an issue](https://github.com/greenbone/gvm-tools/issues) on GitHub.
## Maintainer
This project is maintained by [Greenbone AG](https://www.greenbone.net/).
## Contributing
Your contributions are highly appreciated. Please
[create a pull request](https://github.com/greenbone/gvm-tools/pulls) on GitHub.
For bigger changes, please discuss it first in the
[issues](https://github.com/greenbone/gvm-tools/issues).
For development you should use [poetry](https://python-poetry.org/)
to keep your Python packages separated in different environments. First install
poetry via pip
python3 -m pip install --user poetry
Afterwards run
poetry install
in the checkout directory of `gvm-tools` (the directory containing the
`pyproject.toml` file) to install all dependencies including the packages only
required for development.
Afterwards activate the git hooks for auto-formatting and linting via
[autohooks](https://github.com/greenbone/autohooks).
poetry run autohooks activate --force
## License
Copyright (C) 2017-2024 [Greenbone AG](https://www.greenbone.net/)
Licensed under the [GNU General Public License v3.0 or later](LICENSE).
| text/markdown | Greenbone AG | info@greenbone.net | null | null | GPL-3.0-or-later | null | [
"Development Status :: 5 - Production/Stable",
"Environment :: Console",
"Intended Audience :: Developers",
"Intended Audience :: System Administrators",
"License :: OSI Approved :: GNU General Public License v3 (GPLv3)",
"License :: OSI Approved :: GNU General Public License v3 or later (GPLv3+)",
"Ope... | [] | null | null | <4.0.0,>=3.9.2 | [] | [] | [] | [
"python-gvm>=26.0.0"
] | [] | [] | [] | [
"Documentation, https://greenbone.github.io/gvm-tools/",
"Homepage, https://github.com/greenbone/gvm-tools/",
"Repository, https://github.com/greenbone/gvm-tools/"
] | uv/0.10.4 {"installer":{"name":"uv","version":"0.10.4","subcommand":["publish"]},"python":null,"implementation":{"name":null,"version":null},"distro":{"name":"Ubuntu","version":"24.04","id":"noble","libc":null},"system":{"name":null,"release":null},"cpu":null,"openssl_version":null,"setuptools_version":null,"rustc_version":null,"ci":true} | 2026-02-19T11:01:41.453369 | gvm_tools-25.4.6-py3-none-any.whl | 29,957 | 1a/a4/372cabdcfe25103059196a177cb47bf701ea7e73ffd1a3346553c4cd1df6/gvm_tools-25.4.6-py3-none-any.whl | py3 | bdist_wheel | null | false | 0bd2163d3aa2505a291a8e0e76011a7f | 39e09eeb7e814a030f6dab01b6b347cdb57340a0a225e7b5f0e75a0e58b52676 | 1aa4372cabdcfe25103059196a177cb47bf701ea7e73ffd1a3346553c4cd1df6 | null | [
"LICENSE"
] | 509 |
2.4 | mkdocs-materialx | 10.0.9 | Documentation that simply works | ## MaterialX for MkDocs
<br />
**MaterialX**, the next generation of mkdocs-material, is based on `mkdocs-material-9.7.1` and is named `X`. It continues to be maintained by individual developers (since mkdocs-material will cease maintenance).
<p align="center">
<img src="docs/assets/screenshots/recently-updated-en.gif"/>
</p>
## What Difference
For a more detailed description of the differences, see documentation: [Why MaterialX](https://jaywhj.github.io/mkdocs-materialx/differences/)
<br />
### Differences from Material
| Aspect | mkdocs-material | MaterialX |
| ------------------- | -------------------------------- | ----------------------------------------------- |
| **Latest Version** | mkdocs-material-9.7.1 | mkdocs-materialx-10.x <br />(based on mkdocs-material-9.7.1) |
| **Usage** | Use mkdocs.yml with the theme name `material` | Use mkdocs.yml with the new theme name `materialx`, everything else is the same as when using material |
| **Current Status** | Nearing end-of-maintenance | Active maintenance and updates |
| **Feature Updates** | None (with legacy bugs) | Bug fixes, new feature additions, UX improvements,<br />see [Changelog](https://github.com/jaywhj/mkdocs-materialx/releases) |
### Differences from Zensical
| Aspect | Zensical | MaterialX |
| -------------- | -------------------------------------------- | ------------------------------------------------- |
| **Audience** | Technical developers <br /> Technical documentation | All markdown users <br /> Markdown notes & documents |
| **Language** | Rust | Python |
| **Stage** | Launched a few months ago, in early stages, basic functionality incomplete | Launched for over a decade, mature and stable |
| **Usage** | Adopt the new TOML configuration format, all configurations in the original mkdocs.yml need to be reconfigured from scratch | Continue to use mkdocs.yml with zero migration cost |
| **Ecosystem** | Built entirely from scratch, incompatible with all original MkDocs components, future development uncertain | Based on MkDocs & mkdocs-material-9.7.1, fully compatible with MkDocs' rich long-built ecosystem, open and vibrant |
| **Core Focus** | Prioritizes technical customization, with increasingly cumbersome feature configurations and ever-growing complexity in usage | Focuses on universal functions & visual presentation, extreme ease of use as primary principle, evolving to be more lightweight |
<br />
## Quick Start
Installation:
``` sh
pip install mkdocs-materialx
```
Configure `materialx` theme to mkdocs.yml:
``` yaml
theme:
name: materialx
```
> [!NOTE]
> The theme name is `materialx`, not material. Everything else is the same as when using material.
Start a live preview server with the following command for automatic open and reload:
```
mkdocs serve --livereload -o
```
<br />
For detailed installation instructions, configuration options, and a demo, visit [jaywhj.github.io/mkdocs-materialx](https://jaywhj.github.io/mkdocs-materialx/)
<br />
## Chat Group
**Discord**: https://discord.gg/cvTfge4AUy
**Wechat**:
<img src="docs/assets/images/wechat-group.jpg" width="140" /> | text/markdown | null | Aaron Wang <aaronwqt@gmail.com> | null | null | null | documentation, mkdocs, theme | [
"Development Status :: 5 - Production/Stable",
"Environment :: Web Environment",
"Framework :: MkDocs",
"License :: OSI Approved :: MIT License",
"Programming Language :: JavaScript",
"Programming Language :: Python",
"Topic :: Documentation",
"Topic :: Software Development :: Documentation",
"Topic... | [] | null | null | >=3.8 | [] | [] | [] | [
"babel>=2.10",
"backrefs>=5.7.post1",
"colorama>=0.4",
"jinja2>=3.1",
"markdown>=3.2",
"mkdocs-material-extensions>=1.3",
"mkdocs>=1.6",
"paginate>=0.5",
"pygments>=2.16",
"pymdown-extensions>=10.2",
"requests>=2.30",
"mkdocs-document-dates>=3.5; extra == \"git\"",
"mkdocs-git-committers-plu... | [] | [] | [] | [
"Homepage, https://github.com/jaywhj/mkdocs-materialx",
"Bug Tracker, https://github.com/jaywhj/mkdocs-materialx/issues",
"Repository, https://github.com/jaywhj/mkdocs-materialx"
] | twine/6.1.0 CPython/3.13.0 | 2026-02-19T11:01:05.146091 | mkdocs_materialx-10.0.9.tar.gz | 4,095,545 | e6/70/c33607d1f6eaf7d1e38c9211d4ba15315b002a56804b09eb020d876dab69/mkdocs_materialx-10.0.9.tar.gz | source | sdist | null | false | 1e54005e037fb130aa055ed488ef7683 | 134955b3aee995b5245ff26eba00546fd951cc8f55380ab7978962a32ec6c5f1 | e670c33607d1f6eaf7d1e38c9211d4ba15315b002a56804b09eb020d876dab69 | MIT | [
"LICENSE"
] | 316 |
2.3 | pertdb | 2.1.1 | Registries for perturbations and their targets [`source <https://github.com/laminlabs/pertdb/blob/main/pertdb/models.py>`__]. | # `pertdb`: Registries for perturbations and their targets
Read the docs: [docs.lamin.ai/pertdb](https://docs.lamin.ai/pertdb).
| text/markdown | null | Lamin Labs <open-source@lamin.ai> | null | null | null | null | [] | [] | null | null | null | [] | [
"pertdb"
] | [] | [
"lamindb<3,>=2.0a1",
"bionty<3,>=2.0a1",
"pre-commit; extra == \"dev\"",
"nox; extra == \"dev\"",
"pytest>=6.0; extra == \"dev\"",
"pytest-cov; extra == \"dev\"",
"nbproject_test>=0.4.1; extra == \"dev\""
] | [] | [] | [] | [
"Home, https://github.com/laminlabs/pertdb"
] | python-requests/2.32.5 | 2026-02-19T11:00:07.017855 | pertdb-2.1.1.tar.gz | 17,353 | 40/67/43ec8eddd2c3bcc9a27288bf3f0fbfbb6c5b561d38e6a9edf6f77ba1f3ab/pertdb-2.1.1.tar.gz | source | sdist | null | false | 586eeb01dc32ea32df205c05ccdd6c62 | 5bab23d65d4fd3b01ea021c59dbfb5d52429f52f10e97f487d8fe23ff322281f | 406743ec8eddd2c3bcc9a27288bf3f0fbfbb6c5b561d38e6a9edf6f77ba1f3ab | null | [] | 571 |
2.3 | nonebot-plugin-algo | 0.2.7 | NoneBot2 plugin: algorithm contest assistant. Supports contest and problem queries across major OJs, plus Luogu user info lookup and binding | <div align="center">
<a href="https://v2.nonebot.dev/store">
<img src="https://github.com/A-kirami/nonebot-plugin-template/blob/resources/nbp_logo.png" width="180" height="180" alt="NoneBotPluginLogo">
</a>
<br>
<p>
<img src="https://github.com/A-kirami/nonebot-plugin-template/blob/resources/NoneBotPlugin.svg" width="240" alt="NoneBotPluginText">
</p>
</div>
<div align="center">
# 🏆 Algorithm Contest Assistant
_✨ A NoneBot2-based algorithm contest query and subscription assistant, with Luogu user info lookup ✨_
<a href="./LICENSE">
<img src="https://img.shields.io/github/license/Tabris-ZX/nonebot-plugin-algo.svg" alt="license">
</a>
<a href="https://pypi.python.org/pypi/nonebot-plugin-algo">
<img src="https://img.shields.io/pypi/v/nonebot-plugin-algo.svg" alt="pypi">
</a>
<img src="https://img.shields.io/badge/python-3.10+-blue.svg" alt="python">
<a href="https://github.com/nonebot/nonebot2">
<img src="https://img.shields.io/badge/nonebot-2.4.3+-red.svg" alt="nonebot2">
</a>
</div>
## 📖 Introduction
An intelligent algorithm contest assistant plugin built on **NoneBot2** and the **clist.by API**, with support for querying and binding Luogu user information.
> ⚠️ **Prerequisite**: you must apply for [clist.by API](https://clist.by/api/v4/doc/) credentials before the contest query features can be used.
🎯 **Core features**:
- 🔍 **Smart queries**: today's/upcoming contests, platform filtering, problem search
- 🔔 **Subscription reminders**: personalized contest reminders in group or private chats
- 💾 **Persistent storage**: subscription data is saved locally and survives restarts
- 🌐 **Multi-platform support**: covers Codeforces, AtCoder, Luogu and other major platforms
- 🏆 **Luogu services**: user info lookup, binding management, polished card rendering
## ✨ Features
### 🔍 Contest Queries
| Command | Function | Example |
| ------------------------ | ----------------- | --------------- |
| `近期比赛` / `近期` | Query upcoming contests | `近期比赛` |
| `今日比赛` / `今日` | Query today's contests | `今日比赛` |
| `比赛 [platform id] [days]` | Search contests by criteria | `比赛 162 10` |
| `题目 [contest id]` | Query contest problems | `题目 123456` |
> 💡 **Platform IDs**: 162 = Luogu, 1 = Codeforces, 2 = CodeChef, etc.; see [clist.by](https://clist.by/resources/) for details
### 🏆 Luogu Services
| Command | Function | Example |
| --------------------------- | ---------------- | ----------------------------- |
| `绑定洛谷 [username/id]` | Bind a Luogu user | `绑定洛谷 123456` |
| `我的洛谷` | Query your own Luogu info | `我的洛谷` |
| `洛谷信息 [username/id]` | Query a specific user's info | `洛谷信息 123456` |
### 🔔 Subscription Reminders ⭐
| Command | Function | Example |
| --------------------------- | ---------------- | ----------------------------- |
| `订阅 -i [contest id]` | Subscribe to a contest by ID | `订阅 -i 123456` |
| `取消订阅 [contest id]` | Cancel a subscription | `取消订阅 123456` |
| `订阅列表` / `我的订阅` | View subscription list | `订阅列表` |
| `清空订阅` | Clear all subscriptions | `清空订阅` |
<!-- | `订阅 -e [contest name]` | Subscribe to a contest by name | `订阅 -e "Codeforces"` | -->
## 🚀 Quick Start
> 🚨 **Read this first**: this plugin depends on the clist.by API; apply for API credentials before use, otherwise it will not work!
### 📦 Install the Plugin
<details>
<summary>🎯 Option 1: nb-cli (recommended)</summary>
```bash
nb plugin install nonebot-plugin-algo
```
</details>
<details>
<summary>📚 Option 2: package manager</summary>
```bash
# Using poetry (recommended)
poetry add nonebot-plugin-algo
# Using pip
pip install nonebot-plugin-algo
```
Then enable the plugin in your NoneBot project's `pyproject.toml`:
```toml
[tool.nonebot]
plugins = ["nonebot_plugin_algo"]
```
</details>
### ⚙️ Configuration
> ⚠️ **Important**: this plugin needs clist.by API credentials to work; be sure to apply for them first!
<details>
<summary>🔧 Configuration guide</summary>
**Step 1: Apply for API credentials**
1. Visit [clist.by](https://clist.by/api/v4/doc/) and register an account
2. Generate an API key in your profile settings
3. Add the credentials to your `.env` file
**Step 2: Configuration file**
Add the following to your `.env` file:
```env
# ========== Required ==========
# clist.by API credentials (required for the contest query features)
clist_username=your_username   # your clist.by username
clist_api_key=your_api_key     # your clist.by API key
# ========== Optional ==========
# Contest query settings
algo_days=7          # how many days ahead to query (default: 7)
algo_limit=20        # maximum number of results (default: 20)
algo_remind_pre=30   # reminder lead time in minutes (default: 30)
algo_order_by=start  # sort field (default: start, sorted by start time)
```
**Configuration notes:**
- **Required**: `clist_username` and `clist_api_key` must be set correctly
- **Luogu cookie**: only needed when querying users with privacy settings enabled; regular queries do not require it
- **Data storage**: the plugin automatically creates a local data directory for subscription data and Luogu card caches
> 💡 **Tip**: the Luogu card cache is cleared automatically every day at 2:00, 10:00 and 18:00
</details>
## 📖 Usage Examples
### 🔍 Contest Query Demo
```bash
# Basic queries
近期比赛        # query upcoming contests
今日比赛        # query today's contests
比赛 162 10     # query Luogu contests within 10 days
题目 123456     # query the problems of contest 123456
```
### 🏆 Luogu Services Demo
```bash
# Luogu user operations
绑定洛谷 123456      # bind a Luogu user ID
绑定洛谷 "用户名"     # bind a Luogu username
我的洛谷             # query your own Luogu info
洛谷信息 123456      # query a specific user's info
洛谷信息 "用户名"     # query info by username
```
### 🔔 Subscription Demo
```bash
# Subscription operations
订阅 -i 123456       # subscribe by contest ID
订阅 -e Codeforces   # subscribe by name
订阅列表             # view subscription list
取消订阅 123456      # cancel a subscription
清空订阅             # clear all subscriptions
```
## 🎯 Roadmap
### todo list
- [X] **Contest query system** - today's/upcoming contest queries
- [X] **Filtered search** - filter contests by platform and time
- [X] **Problem queries** - look up problem info by contest ID
- [X] **Subscription reminder system** - smart contest subscriptions with scheduled reminders
- [X] **Luogu user binding** - bind by username or ID
- [X] **Luogu info queries** - detailed Luogu user info
- [ ] **Codeforces info queries** - detailed Codeforces user info
- [ ] **AtCoder info queries** - detailed AtCoder user info
- [ ] **Custom problem lists** - user-created personal problem lists
- [ ] **Problem link parsing** - automatically parse problem statements and I/O samples from problem links
## 📄 License
This project is released under the [MIT License](LICENSE).
<div align="center">
### 🌟 If this project helps you, please give it a Star!
**Feel free to open an issue with any questions!**
#### Let's make competitive programming easier together!
</div>
| text/markdown | Tabris_ZX | 3146463122@qq.com | null | null | MIT | null | [
"License :: OSI Approved :: MIT License",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: 3.13"
] | [] | null | null | <4.0,>=3.10 | [] | [] | [] | [
"httpx>=0.24",
"jinja2<4.0.0,>=3.1.6",
"nonebot-adapter-onebot>=2.4.6",
"nonebot-plugin-alconna>=0.49.0",
"nonebot-plugin-apscheduler<0.6.0,>=0.5.0",
"nonebot-plugin-localstore<0.8.0,>=0.7.4",
"nonebot-plugin-uninfo<0.10.0,>=0.9.0",
"nonebot2<3.0.0,>=2.4.3",
"playwright<2.0.0,>=1.55.0"
] | [] | [] | [] | [] | twine/6.1.0 CPython/3.10.11 | 2026-02-19T11:00:03.978794 | nonebot_plugin_algo-0.2.7.tar.gz | 92,199 | 77/63/cb3a7325e9dc6274c66292f7f5257db7913f51e1149d375c61982b851372/nonebot_plugin_algo-0.2.7.tar.gz | source | sdist | null | false | 99a73d241f864aa30c0b7380cea8b83c | e560cb4939680b5b5c7a7b448633758077281ed9f3135bd1c967f2509448c1f1 | 7763cb3a7325e9dc6274c66292f7f5257db7913f51e1149d375c61982b851372 | null | [] | 228 |
2.4 | cmem-plugin-pdf-extract | 1.1.1 | Extract text and tables from PDF documents. | # cmem-plugin-pdf-extract
Extract text and tables from PDF documents.
[![eccenca Corporate Memory][cmem-shield]][cmem-link]
This is a plugin for [eccenca](https://eccenca.com) [Corporate Memory](https://documentation.eccenca.com). You can install it with the [cmemc](https://eccenca.com/go/cmemc) command line client like this:
```
cmemc admin workspace python install cmem-plugin-pdf-extract
```
[](https://github.com/eccenca/cmem-plugin-pdf-extract/actions) [](https://pypi.org/project/cmem-plugin-pdf-extract) [](https://pypi.org/project/cmem-plugin-pdf-extract)
[![poetry][poetry-shield]][poetry-link] [![ruff][ruff-shield]][ruff-link] [![mypy][mypy-shield]][mypy-link] [![copier][copier-shield]][copier]
[cmem-link]: https://documentation.eccenca.com
[cmem-shield]: https://img.shields.io/endpoint?url=https://dev.documentation.eccenca.com/badge.json
[poetry-link]: https://python-poetry.org/
[poetry-shield]: https://img.shields.io/endpoint?url=https://python-poetry.org/badge/v0.json
[ruff-link]: https://docs.astral.sh/ruff/
[ruff-shield]: https://img.shields.io/endpoint?url=https://raw.githubusercontent.com/astral-sh/ruff/main/assets/badge/v2.json&label=Code%20Style
[mypy-link]: https://mypy-lang.org/
[mypy-shield]: https://www.mypy-lang.org/static/mypy_badge.svg
[copier]: https://copier.readthedocs.io/
[copier-shield]: https://img.shields.io/endpoint?url=https://raw.githubusercontent.com/copier-org/copier/master/img/badge/badge-grayscale-inverted-border-purple.json
| text/markdown | eccenca GmbH | cmempy-developer@eccenca.com | null | null | Apache-2.0 | eccenca Corporate Memory, plugin | [
"Development Status :: 4 - Beta",
"Environment :: Plugins",
"License :: OSI Approved :: Apache Software License",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.13",
"Programming Language :: Python :: 3.14"
] | [] | null | null | <4.0,>=3.13 | [] | [] | [] | [
"cmem-cmempy<26.0.0,>=25.4.0",
"cmem-plugin-base<5.0.0,>=4.15.0",
"pdfplumber>=0.11.9",
"pyyaml<7.0.0,>=6.0.3"
] | [] | [] | [] | [
"Homepage, https://github.com/eccenca/cmem-plugin-pdf-extract"
] | poetry/2.3.2 CPython/3.13.12 Linux/6.14.0-1017-azure | 2026-02-19T11:00:01.641504 | cmem_plugin_pdf_extract-1.1.1-py3-none-any.whl | 18,683 | 88/b6/612bd4211712de922a3383e64f1d9aa30e16778f807c6299cfbfdf3e2612/cmem_plugin_pdf_extract-1.1.1-py3-none-any.whl | py3 | bdist_wheel | null | false | 1ed6cac1bbdb4bcd9ca552f9dcda3d72 | c2e8b13c5e26e4e63d3d900cc0ae2404c98786098e5a7122d6cfadc343a3e797 | 88b6612bd4211712de922a3383e64f1d9aa30e16778f807c6299cfbfdf3e2612 | null | [
"LICENSE"
] | 256 |
2.4 | zinvolt | 0.1.0 | Asynchronous Python client for Zinvolt. | # Python: Zinvolt
[![GitHub Release][releases-shield]][releases]
[![Python Versions][python-versions-shield]][pypi]
![Project Stage][project-stage-shield]
![Project Maintenance][maintenance-shield]
[![License][license-shield]](LICENSE.md)
[![Build Status][build-shield]][build]
[![Code Coverage][codecov-shield]][codecov]
Asynchronous Python client for Zinvolt.
## About
This package allows you to fetch data from Zinvolt.
## Installation
```bash
pip install zinvolt
```
## Changelog & Releases
This repository keeps a change log using [GitHub's releases][releases]
functionality. The format of the log is based on
[Keep a Changelog][keepchangelog].
Releases are based on [Semantic Versioning][semver], and use the format
of ``MAJOR.MINOR.PATCH``. In a nutshell, the version will be incremented
based on the following:
- ``MAJOR``: Incompatible or major changes.
- ``MINOR``: Backwards-compatible new features and enhancements.
- ``PATCH``: Backwards-compatible bugfixes and package updates.
## Contributing
This is an active open-source project. We are always open to people who want to
use the code or contribute to it.
We've set up a separate document for our
[contribution guidelines](.github/CONTRIBUTING.md).
Thank you for being involved! :heart_eyes:
## Setting up development environment
This Python project is fully managed using the [Poetry][poetry] dependency manager, but it also relies on NodeJS for certain checks during development.
You need at least:
- Python 3.11+
- [Poetry][poetry-install]
- NodeJS 12+ (including NPM)
To install all packages, including all development requirements:
```bash
npm install
poetry install
```
As this repository uses the [pre-commit][pre-commit] framework, all changes
are linted and tested with each commit. You can run all checks and tests
manually, using the following command:
```bash
poetry run pre-commit run --all-files
```
To run just the Python tests:
```bash
poetry run pytest
```
## Authors & contributors
The content is by [Joost Lekkerkerker][joostlek].
For a full list of all authors and contributors,
check [the contributor's page][contributors].
## License
MIT License
Copyright (c) 2025 Joost Lekkerkerker
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.
[build-shield]: https://github.com/joostlek/python-zinvolt/actions/workflows/tests.yaml/badge.svg
[build]: https://github.com/joostlek/python-zinvolt/actions
[codecov-shield]: https://codecov.io/gh/joostlek/python-zinvolt/branch/master/graph/badge.svg
[codecov]: https://codecov.io/gh/joostlek/python-zinvolt
[commits-shield]: https://img.shields.io/github/commit-activity/y/joostlek/python-zinvolt.svg
[commits]: https://github.com/joostlek/python-zinvolt/commits/master
[contributors]: https://github.com/joostlek/python-zinvolt/graphs/contributors
[joostlek]: https://github.com/joostlek
[keepchangelog]: http://keepachangelog.com/en/1.0.0/
[license-shield]: https://img.shields.io/github/license/joostlek/python-zinvolt.svg
[maintenance-shield]: https://img.shields.io/maintenance/yes/2026.svg
[poetry-install]: https://python-poetry.org/docs/#installation
[poetry]: https://python-poetry.org
[pre-commit]: https://pre-commit.com/
[project-stage-shield]: https://img.shields.io/badge/project%20stage-stable-green.svg
[python-versions-shield]: https://img.shields.io/pypi/pyversions/zinvolt
[releases-shield]: https://img.shields.io/github/release/joostlek/python-zinvolt.svg
[releases]: https://github.com/joostlek/python-zinvolt/releases
[semver]: http://semver.org/spec/v2.0.0.html
[pypi]: https://pypi.org/project/zinvolt/
| text/markdown | Joost Lekkerkerker | joostlek@outlook.com | Joost Lekkerkerker | joostlek@outlook.com | MIT | OpenRouter, api, async, client | [
"Development Status :: 5 - Production/Stable",
"Framework :: AsyncIO",
"Intended Audience :: Developers",
"License :: OSI Approved :: MIT License",
"Natural Language :: English",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.13",
"Programming Language :: Python :: 3.14",
... | [] | null | null | <4.0,>=3.13 | [] | [] | [] | [
"aiohttp>=3.0.0",
"mashumaro<4.0,>=3.11",
"orjson>=3.9.0",
"yarl>=1.6.0"
] | [] | [] | [] | [
"Bug Tracker, https://github.com/joostlek/python-zinvolt/issues",
"Changelog, https://github.com/joostlek/python-zinvolt/releases",
"Documentation, https://github.com/joostlek/python-zinvolt",
"Homepage, https://github.com/joostlek/python-zinvolt",
"Repository, https://github.com/joostlek/python-zinvolt"
] | twine/6.1.0 CPython/3.12.8 | 2026-02-19T10:59:43.449068 | zinvolt-0.1.0.tar.gz | 6,205 | 73/12/65d0ff45e7481e9859a0b1a7768296012a3c0e41e0bed093b60d776d79be/zinvolt-0.1.0.tar.gz | source | sdist | null | false | 28f0624029ab51ceb12b9beb5f9fca61 | 9debec4e4518c18293bb4a3beaa23949cde4f0ff55ebdba400bb8950be40dc54 | 731265d0ff45e7481e9859a0b1a7768296012a3c0e41e0bed093b60d776d79be | null | [
"LICENSE.md"
] | 250 |
2.4 | gravixlayer | 0.0.58 | GravixLayer Python SDK - Official Python client for GravixLayer API | # GravixLayer Python SDK
[](https://badge.fury.io/py/gravixlayer)
[](https://www.python.org/downloads/)
[](https://opensource.org/licenses/Apache-2.0)
Official Python SDK for [GravixLayer API](https://gravixlayer.com). Simple and powerful.
📚 **[Full Documentation](https://docs.gravixlayer.com/sdk/introduction/introduction)**
## Installation
```bash
pip install gravixlayer
```
## Get Started
### 1. Get Your API Key
Sign up at [platform.gravixlayer.com](https://platform.gravixlayer.com) to obtain your API key.
### 2. Create a Template
Templates define the runtime environment for your sandboxes. Create one using the SDK:
```python
import os
from gravixlayer import GravixLayer, TemplateBuilder
client = GravixLayer(
api_key=os.environ["GRAVIXLAYER_API_KEY"],
cloud="azure",
region="eastus2",
)
# Build a Python template
builder = (
TemplateBuilder("my-python-app", description="My Python application")
.from_image("python:3.11-slim")
.vcpu(2)
.memory(512)
.pip_install("fastapi", "uvicorn[standard]")
.copy_file("print('Hello, World!')", "/app/main.py")
.start_cmd("cd /app && python main.py")
)
status = client.templates.build_and_wait(builder, timeout_secs=600)
print(f"Template ID: {status.template_id}")
```
### 3. Create a Sandbox
Launch a sandbox instance from your template:
```python
# Create a sandbox
sandbox = client.sandbox.sandboxes.create(
template="my-python-app", # or use template ID
timeout=300,
)
print(f"Sandbox ID: {sandbox.sandbox_id}")
print(f"Status: {sandbox.status}")
# Run code in the sandbox
result = client.sandbox.sandboxes.run_code(
sandbox.sandbox_id,
code="print('Hello from sandbox!')",
language="python"
)
print(f"Output: {result.logs}")
# Clean up
client.sandbox.sandboxes.kill(sandbox.sandbox_id)
```
## Quick Examples
### Chat Completions
Talk to AI models.
```python
import os
from gravixlayer import GravixLayer
client = GravixLayer(api_key=os.environ.get("GRAVIXLAYER_API_KEY"))
# Simple chat
response = client.chat.completions.create(
model="mistralai/mistral-nemo-instruct-2407",
messages=[
{"role": "system", "content": "You are helpful."},
{"role": "user", "content": "What is Python?"}
]
)
print(response.choices[0].message.content)
```
**What it does:** Sends your message to AI and gets a response.
### Streaming
Get responses in real-time.
```python
import os
from gravixlayer import GravixLayer
client = GravixLayer(api_key=os.environ.get("GRAVIXLAYER_API_KEY"))
stream = client.chat.completions.create(
model="mistralai/mistral-nemo-instruct-2407",
messages=[{"role": "user", "content": "Tell a story"}],
stream=True
)
for chunk in stream:
if chunk.choices[0].delta.content:
print(chunk.choices[0].delta.content, end="", flush=True)
```
**What it does:** Shows AI response word-by-word as it's generated.
---
## Text Completions
Continue text from a prompt.
```python
import os
from gravixlayer import GravixLayer
client = GravixLayer(api_key=os.environ.get("GRAVIXLAYER_API_KEY"))
response = client.completions.create(
model="mistralai/mistral-nemo-instruct-2407",
prompt="The future of AI is",
max_tokens=50
)
print(response.choices[0].text)
```
**What it does:** AI continues writing from your starting text.
### Streaming Completions
```python
import os
from gravixlayer import GravixLayer
client = GravixLayer(api_key=os.environ.get("GRAVIXLAYER_API_KEY"))
stream = client.completions.create(
model="mistralai/mistral-nemo-instruct-2407",
prompt="Once upon a time",
max_tokens=100,
stream=True
)
for chunk in stream:
if chunk.choices[0].text:
print(chunk.choices[0].text, end="", flush=True)
```
**What it does:** Get text completions in real-time.
---
## Embeddings
Convert text to numbers for comparison.
```python
import os
from gravixlayer import GravixLayer
client = GravixLayer(api_key=os.environ.get("GRAVIXLAYER_API_KEY"))
# Single text
response = client.embeddings.create(
model="microsoft/multilingual-e5-large",
input="Hello world"
)
print(f"Vector size: {len(response.data[0].embedding)}")
# Multiple texts
response = client.embeddings.create(
model="microsoft/multilingual-e5-large",
input=["Text 1", "Text 2", "Text 3"]
)
for i, item in enumerate(response.data):
print(f"Text {i+1}: {len(item.embedding)} dimensions")
```
**What it does:** Turns text into a list of numbers. Similar texts have similar numbers.
---
## Files
Upload and manage files.
```python
import os
from gravixlayer import GravixLayer
client = GravixLayer(api_key=os.environ.get("GRAVIXLAYER_API_KEY"))
# Upload
with open("document.pdf", "rb") as f:
file = client.files.upload(file=f, purpose="assistants")
print(f"Uploaded: {file.id}")
# List all files
files = client.files.list()
for f in files.data:
print(f"{f.filename} - {f.bytes} bytes")
# Get file info
file_info = client.files.retrieve("file-id")
print(f"File: {file_info.filename}")
# Download file content
content = client.files.content("file-id")
with open("downloaded.pdf", "wb") as f:
f.write(content)
# Delete file
response = client.files.delete("file-id")
print(response.message)
```
**What it does:** Store files on the server to use with AI.
---
## Vector Database
Search text by meaning, not just keywords.
```python
import os
from gravixlayer import GravixLayer
client = GravixLayer(api_key=os.environ.get("GRAVIXLAYER_API_KEY"))
# Create index
index = client.vectors.indexes.create(
name="my-docs",
dimension=1536,
metric="cosine"
)
print(f"Created index: {index.id}")
# Add single text
vectors = client.vectors.index(index.id)
vectors.upsert_text(
text="Python is a programming language",
model="microsoft/multilingual-e5-large",
id="doc1",
metadata={"category": "programming"}
)
# Add multiple texts
vectors.batch_upsert_text([
{
"text": "JavaScript is for web development",
"model": "microsoft/multilingual-e5-large",
"id": "doc2",
"metadata": {"category": "programming"}
},
{
"text": "React is a JavaScript library",
"model": "microsoft/multilingual-e5-large",
"id": "doc3",
"metadata": {"category": "web"}
}
])
# Search by text
results = vectors.search_text(
query="coding languages",
model="microsoft/multilingual-e5-large",
top_k=5
)
for hit in results.hits:
print(f"{hit.text} (score: {hit.score:.3f})")
# Search with filter
results = vectors.search_text(
query="programming",
model="microsoft/multilingual-e5-large",
top_k=3,
filter={"category": "programming"}
)
# List all indexes
indexes = client.vectors.indexes.list()
for idx in indexes.indexes:
print(f"{idx.name}: {idx.dimension} dimensions")
# Delete index
client.vectors.indexes.delete(index.id)
```
**What it does:** Finds similar text based on meaning, not exact words.
---
## Memory
Remember user information across conversations.
```python
import os
from gravixlayer import GravixLayer
client = GravixLayer(api_key=os.environ.get("GRAVIXLAYER_API_KEY"))
# Setup memory
memory = client.memory(
embedding_model="microsoft/multilingual-e5-large",
inference_model="mistralai/mistral-nemo-instruct-2407",
index_name="user-memories",
cloud_provider="AWS",
region="us-east-1"
)
# Add memory
result = memory.add(
messages="User loves pizza and Italian food",
user_id="user123"
)
print(f"Added {len(result['results'])} memories")
# Add with AI inference
result = memory.add(
messages="I'm a software engineer who loves Python",
user_id="user123",
infer=True
)
for mem in result['results']:
print(f"Extracted: {mem['memory']}")
# Search memories
results = memory.search(
query="What food does user like?",
user_id="user123",
limit=5
)
for item in results['results']:
print(f"{item['memory']} (score: {item['score']:.3f})")
# Get all memories
all_memories = memory.get_all(user_id="user123", limit=50)
print(f"Total memories: {len(all_memories['results'])}")
# Update memory
memory.update(
memory_id="memory-id",
user_id="user123",
data="Updated: User prefers vegetarian food"
)
# Delete specific memory
memory.delete(memory_id="memory-id", user_id="user123")
# Delete all memories for user
memory.delete_all(user_id="user123")
```
**What it does:** Stores facts about users so AI can remember them later.
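A common follow-up pattern is to feed retrieved memories back into a chat request. The sketch below only combines the `memory.search` and `chat.completions.create` calls already shown in this README; it is not a separate API.
```python
import os
from gravixlayer import GravixLayer

client = GravixLayer(api_key=os.environ.get("GRAVIXLAYER_API_KEY"))
memory = client.memory(
    embedding_model="microsoft/multilingual-e5-large",
    inference_model="mistralai/mistral-nemo-instruct-2407",
    index_name="user-memories",
    cloud_provider="AWS",
    region="us-east-1"
)

question = "Suggest a dinner recipe for me"

# Pull the most relevant stored facts about this user
hits = memory.search(query=question, user_id="user123", limit=3)
facts = "\n".join(item["memory"] for item in hits["results"])

# Give the model those facts as context for its answer
response = client.chat.completions.create(
    model="mistralai/mistral-nemo-instruct-2407",
    messages=[
        {"role": "system", "content": f"Known facts about the user:\n{facts}"},
        {"role": "user", "content": question},
    ],
)
print(response.choices[0].message.content)
```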
---
## Sandbox
Run code safely in isolated environments.
```python
import os
from gravixlayer import GravixLayer
client = GravixLayer(api_key=os.environ.get("GRAVIXLAYER_API_KEY"))
# Create sandbox
sandbox = client.sandbox.create(
template="python-base-v1",
timeout=600,
metadata={"project": "my-app"}
)
print(f"Sandbox ID: {sandbox.id}")
# Run Python code
result = sandbox.run_code("print('Hello from sandbox!')\nprint(2 + 2)")
print("Output:", result.logs.stdout)
print("Errors:", result.logs.stderr)
print("Exit code:", result.exit_code)
# Run shell command
result = sandbox.run_command("ls -la")
print(result.logs.stdout)
# Write file
sandbox.files.write(
path="/home/user/script.py",
content="print('Hello World')"
)
# Read file
content = sandbox.files.read(path="/home/user/script.py")
print("File content:", content)
# List files
files = sandbox.files.list(path="/home/user")
for file in files:
print(f"{file.name} - {file.size} bytes")
# Upload file to sandbox
with open("local_file.py", "rb") as f:
sandbox.files.upload(path="/home/user/uploaded.py", file=f)
# Create directory
sandbox.files.mkdir(path="/home/user/myproject")
# Delete file
sandbox.files.delete(path="/home/user/script.py")
# Get sandbox info
info = client.sandbox.get(sandbox.id)
print(f"Status: {info.status}")
# List all sandboxes
sandboxes = client.sandbox.list()
for sb in sandboxes:
print(f"{sb.id}: {sb.status}")
# Extend timeout
sandbox.set_timeout(timeout=1200)
# List available templates
templates = client.sandbox.templates.list()
for template in templates:
print(f"{template.name}: {template.description}")
# Kill sandbox
client.sandbox.kill(sandbox.id)
```
**What it does:** Runs code in a safe, isolated environment that can't harm your system.
---
## Deployments
Deploy your own model instances.
```python
import os
from gravixlayer import GravixLayer
client = GravixLayer(api_key=os.environ.get("GRAVIXLAYER_API_KEY"))
# Create deployment
deployment = client.deployments.create(
deployment_name="my-chatbot",
model_name="mistralai/mistral-nemo-instruct-2407",
hardware="nvidia-t4-16gb-pcie_1",
min_replicas=1,
max_replicas=3
)
print(f"Deployment ID: {deployment.deployment_id}")
# List all deployments
deployments = client.deployments.list()
for dep in deployments:
print(f"{dep.name}: {dep.status}")
# Get deployment info
deployment = client.deployments.get("deployment-id")
print(f"Status: {deployment.status}")
print(f"Endpoint: {deployment.endpoint}")
# Update deployment
client.deployments.update(
"deployment-id",
min_replicas=2,
max_replicas=5
)
# Delete deployment
client.deployments.delete("deployment-id")
# List available hardware
accelerators = client.accelerators.list()
for acc in accelerators:
print(f"{acc.name}: {acc.memory}GB")
```
**What it does:** Runs a dedicated model instance just for you.
---
## Async Support
Use with async/await.
```python
import os
import asyncio
from gravixlayer import AsyncGravixLayer
async def main():
client = AsyncGravixLayer(api_key=os.environ.get("GRAVIXLAYER_API_KEY"))
# Async chat
response = await client.chat.completions.create(
model="mistralai/mistral-nemo-instruct-2407",
messages=[{"role": "user", "content": "Hello!"}]
)
print(response.choices[0].message.content)
# Async streaming
stream = await client.chat.completions.create(
model="mistralai/mistral-nemo-instruct-2407",
messages=[{"role": "user", "content": "Tell a story"}],
stream=True
)
async for chunk in stream:
if chunk.choices[0].delta.content:
print(chunk.choices[0].delta.content, end="", flush=True)
asyncio.run(main())
```
**What it does:** Lets your program do other things while waiting for API responses.
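Concurrency is where the async client pays off. The sketch below assumes the same chat API shown above and simply sends three requests at once with `asyncio.gather`.
```python
import os
import asyncio
from gravixlayer import AsyncGravixLayer

async def ask(client, question):
    response = await client.chat.completions.create(
        model="mistralai/mistral-nemo-instruct-2407",
        messages=[{"role": "user", "content": question}]
    )
    return response.choices[0].message.content

async def main():
    client = AsyncGravixLayer(api_key=os.environ.get("GRAVIXLAYER_API_KEY"))
    # All three questions are in flight at the same time
    answers = await asyncio.gather(
        ask(client, "What is Python?"),
        ask(client, "What is JavaScript?"),
        ask(client, "What is Rust?"),
    )
    for answer in answers:
        print(answer)

asyncio.run(main())
```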
---
## CLI Usage
Use from the command line.
```bash
# Set API key
export GRAVIXLAYER_API_KEY="your-api-key"
# Chat
gravixlayer --model "mistralai/mistral-nemo-instruct-2407" --user "Hello!"
gravixlayer --model "mistralai/mistral-nemo-instruct-2407" --user "Tell a story" --stream
# Files
gravixlayer files upload document.pdf --purpose assistants
gravixlayer files list
gravixlayer files info file-abc123
gravixlayer files download file-abc123 --output downloaded.pdf
gravixlayer files delete file-abc123
# Deployments
gravixlayer deployments create --deployment_name "my-bot" --model_name "mistralai/mistral-nemo-instruct-2407" --gpu_model "NVIDIA_T4_16GB"
gravixlayer deployments list
gravixlayer deployments delete <deployment-id>
# Vector database
gravixlayer vectors index create --name "my-index" --dimension 1536 --metric cosine
gravixlayer vectors index list
```
---
## Configuration
```python
import os
from gravixlayer import GravixLayer
# Basic configuration
client = GravixLayer(
api_key=os.environ.get("GRAVIXLAYER_API_KEY")
)
# Advanced configuration
client = GravixLayer(
api_key="your-api-key",
base_url="https://api.gravixlayer.com/v1/inference",
timeout=60.0,
max_retries=3,
headers={"Custom-Header": "value"}
)
```
Set API key in environment:
```bash
export GRAVIXLAYER_API_KEY="your-api-key"
```
---
## Error Handling
```python
import os
from gravixlayer import GravixLayer
from gravixlayer.types.exceptions import (
GravixLayerError,
GravixLayerAuthenticationError,
GravixLayerRateLimitError,
GravixLayerServerError,
GravixLayerBadRequestError
)
client = GravixLayer(api_key=os.environ.get("GRAVIXLAYER_API_KEY"))
try:
response = client.chat.completions.create(
model="mistralai/mistral-nemo-instruct-2407",
messages=[{"role": "user", "content": "Hello"}]
)
except GravixLayerAuthenticationError:
print("Invalid API key")
except GravixLayerRateLimitError:
print("Too many requests - please wait")
except GravixLayerBadRequestError as e:
print(f"Bad request: {e}")
except GravixLayerServerError as e:
print(f"Server error: {e}")
except GravixLayerError as e:
print(f"SDK error: {e}")
```
---
## Learn More
📚 **[Full Documentation](https://docs.gravixlayer.com/sdk/introduction/introduction)**
- Detailed guides and tutorials
- API reference
- Advanced examples
- Best practices
## Support
- **Issues**: [GitHub Issues](https://github.com/gravixlayer/gravixlayer-python/issues)
- **Email**: info@gravixlayer.com
## License
Apache License 2.0
| text/markdown | Team Gravix | Team Gravix <info@gravixlayer.com> | null | null | Apache-2.0 | gravixlayer, llm, ai, api, sdk, compatible | [
"Development Status :: 4 - Beta",
"Intended Audience :: Developers",
"License :: OSI Approved :: Apache Software License",
"Operating System :: OS Independent",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.7",
"Programming Language :: Python :: 3.8",
"Programming Languag... | [] | https://github.com/gravixlayer/gravixlayer-python | null | >=3.7 | [] | [] | [] | [
"requests>=2.25.0",
"python-dotenv>=0.19.0",
"httpx>=0.24.0",
"pylint>=3.0.0; extra == \"dev\"",
"flake8>=6.0.0; extra == \"dev\"",
"black>=23.0.0; extra == \"dev\"",
"mypy>=1.0.0; extra == \"dev\"",
"bandit>=1.7.0; extra == \"dev\"",
"radon>=6.0.0; extra == \"dev\"",
"types-requests>=2.25.0; extr... | [] | [] | [] | [
"Homepage, https://github.com/gravixlayer/gravixlayer-python",
"Repository, https://github.com/gravixlayer/gravixlayer-python",
"Issues, https://github.com/gravixlayer/gravixlayer-python/issues"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-19T10:59:39.969817 | gravixlayer-0.0.58.tar.gz | 136,064 | 35/89/d1da0a3011bad4b21f56ef21d7df589d76eec62779b2739436438a0116ae/gravixlayer-0.0.58.tar.gz | source | sdist | null | false | b11a0d35c27718b48248063fead71360 | f210d88b726f9ac315e5c0158e6ab3d212fff7de2f5b473da86d83f7dd04b125 | 3589d1da0a3011bad4b21f56ef21d7df589d76eec62779b2739436438a0116ae | null | [
"LICENSE",
"NOTICE"
] | 254 |
2.3 | bionty | 2.2.1 | Basic biological entities, coupled to public ontologies [`source <https://github.com/laminlabs/bionty/blob/main/bionty/models.py>`__]. | [](https://github.com/laminlabs/bionty)
[](https://pypi.org/project/bionty)
# bionty: Registries for biological ontologies
- Access >20 public ontologies such as Gene, Protein, CellMarker, ExperimentalFactor, CellType, CellLine, Tissue, …
- Create records from entries in public ontologies using `.from_source()`.
- Access full underlying public ontologies via `.public()` to search & bulk-create records.
- Create in-house ontologies by extending public ontologies using hierarchical relationships among records (`.parents`).
- Use `.synonyms` and `.abbr` to manage synonyms.
- Safeguards against typos & duplications.
- Manage multiple ontology versions via `bionty.Source`.
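A minimal sketch of the pattern described above; `CellType`, `.from_source()`, and `.public()` come from the bullets, while the import alias, the keyword arguments, and the assumption of an already-initialized LaminDB instance do not, so treat the exact signatures as approximate.
```python
import bionty as bt

# Assumes a LaminDB instance is already initialized and connected.
# Create a record from an entry in the public ontology (keyword argument is an assumption).
cell_type = bt.CellType.from_source(name="T cell")

# Access the full underlying public ontology to search & bulk-create records.
public = bt.CellType.public()
print(public.search("T cell"))
```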
Read the [docs](https://docs.lamin.ai/bionty).
| text/markdown | null | Lamin Labs <open-source@lamin.ai> | null | null | null | null | [
"License :: OSI Approved :: Apache Software License",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: 3.13"
] | [] | null | null | <3.14,>=3.10 | [] | [
"bionty"
] | [] | [
"lamindb>=2.0a1",
"lamindb_setup>=0.81.2",
"lamin_utils>=0.16.2",
"requests",
"pyyaml",
"laminci; extra == \"dev\"",
"pre-commit; extra == \"dev\"",
"pytest>=6.0; extra == \"dev\"",
"pytest-cov; extra == \"dev\"",
"nbproject-test; extra == \"dev\"",
"pronto; extra == \"dev\"",
"pymysql; extra ... | [] | [] | [] | [
"Home, https://github.com/laminlabs/bionty"
] | python-requests/2.32.5 | 2026-02-19T10:59:28.098357 | bionty-2.2.1.tar.gz | 82,413 | d1/d2/12b3766e4bee7491cdcce672ee3541a68b5aa1a200288f244ec97d1c325b/bionty-2.2.1.tar.gz | source | sdist | null | false | 56a5e048fe8dd95c1be3161c56abb7f9 | 7f2d2271515ac3da67b1e4cba4703415a545c0bc73a4ac9699f02be1b4adc700 | d1d212b3766e4bee7491cdcce672ee3541a68b5aa1a200288f244ec97d1c325b | null | [] | 636 |
2.4 | akerbp-mlpet | 6.0.3 | Package to prepare well log data for ML projects. | # akerbp.mlpet
Preprocessing tools for Petrophysics ML projects at Eureka
## Installation
Install the package by running the following (requires Python 3.9 or later)
pip install akerbp-mlpet
## Quick start
For a short example of how to use the mlpet Dataset class for pre-processing data see below. Please refer to the tests folder of this repository for more examples as well as some examples of the `settings.yaml` file:
import os
from akerbp.mlpet import Dataset
from akerbp.mlpet import utilities
# Instantiate an empty dataset object using the example settings and mappings provided
ds = Dataset(
settings=os.path.abspath("settings.yaml"), # Absolute file paths are required
folder_path=os.path.abspath(r"./"), # Absolute file paths are required
)
# Populate the dataset with data from a file (support for multiple file formats and direct cdf data collection exists)
ds.load_from_pickle(r"data.pkl") # Absolute file paths are preferred
# The original data will be kept in ds.df_original and will remain unchanged
print(ds.df_original.head())
# Split the data into train-validation sets
df_train, df_test = utilities.train_test_split(
df=ds.df_original,
target_column=ds.label_column,
id_column=ds.id_column,
test_size=0.3,
)
# Preprocess the data for training according to default workflow
# print(ds.default_preprocessing_workflow) <- Uncomment to see what the workflow does
df_preprocessed = ds.preprocess(df_train)
The procedure will be exactly the same for any other dataset class. The only difference will be in the "settings". For a full list of possible settings keys see either the [built documentation](docs/build/html/akerbp.mlpet.html) or the akerbp.mlpet.Dataset class docstring. Make sure that the curve names are consistent with those in the dataset.
The loaded data is NOT mapped at load time but rather at preprocessing time (i.e. when preprocess is called).
## Recommended workflow for preprocessing
Due to the operations performed by certain preprocessing methods in akerbp.mlpet, the order in which the different preprocessing steps are applied can sometimes be important for achieving the desired results. Below is a simple guide that should be followed for most use cases:
1. Misrepresented missing data should always be handled first (using `set_as_nan`)
2. This should then be followed by data cleaning methods (e.g. `remove_outliers`, `remove_noise`, `remove_small_negative_values`)
3. Depending on your use case, once the data is clean you can then impute missing values (see `imputers.py`). Note however that some features depend on the presence of missing values to provide better estimates (e.g. `calculate_VSH`)
4. Add new features (using methods from `feature_engineering.py`) or using `process_wells` from `preprocessors.py` if the features should be well specific.
5. Fill missing values if any still exist or were created during step 4. (using `fillna_with_fillers`)
6. Scale whichever features you want (using `scale_curves` from `preprocessors.py`). In some use cases this step could also come before step 5.
7. Encode the GROUP & FORMATION column if you want to use it for training. (using `encode_columns` from `preprocessors.py`)
8. Select or drop the specific features you want to keep for model training. (using `select_columns` or `drop_columns` from `preprocessors.py`)
> **_NOTE:_** The dataset class **drops** all input columns that are not explicitly named in your settings.yaml or settings dictionary passed to the Dataset class at instantiation. This is to ensure that the data is not polluted with features that are not used. Therefore, if you have features that are being loaded into the Dataset class but are not being preprocessed, these need to be explicitly defined in your settings.yaml or settings dictionary under the keyword argument `keep_columns`.
## API Documentation
Full API documentation of the package can be found under the [docs](docs/build/html/index.html) folder once you have run the make html command.
## For developers
This repository uses `uv` for dependency management.
- create or update the local environment with all optional groups:
uv sync --all-groups
- run checks in the `uv` environment:
uv run pytest
uv run ruff check .
uv run ruff format .
uv run mypy --config-file pyproject.toml
- to make the API documentation, from the root directory of the project run:
cd docs/
uv run make html
- `requirements.txt` is generated from the lockfile; update it with:
uv export --all-groups --no-hashes --no-editable --no-annotate --format requirements-txt --output-file requirements.txt
- to install `mlpet` in editable mode in another environment, use:
uv pip install -e /path/to/expres-ml-mlpet
# or: pip install -e .
## License
akerbp.mlpet Copyright 2021 AkerBP ASA
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at [http://www.apache.org/licenses/LICENSE-2.0](http://www.apache.org/licenses/LICENSE-2.0)
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
| text/markdown | null | Yann Van Crombrugge <yann.van.crombrugge@akerbp.com>, Saghar Asadi <saghar.asadi@akerbp.com>, Flavia Dias Casagrande <flavia.dias.casagrande@akerbp.com> | null | Yann Van Crombrugge <yann.van.crombrugge@akerbp.com>, Peder Aursand <peder.aursand@akerbp.com> | null | null | [] | [] | null | null | <4.0,>=3.9 | [] | [] | [] | [
"importlib-metadata>=4.12",
"joblib",
"lasio",
"pandas<3",
"plotly",
"pyyaml>=5.4.1",
"scikit-learn",
"scipy",
"tqdm"
] | [] | [] | [] | [
"Repository, https://github.com/AkerBP/expres-ml-mlpet",
"Tracker, https://github.com/AkerBP/expres-ml-mlpet/issues"
] | uv/0.10.4 {"installer":{"name":"uv","version":"0.10.4","subcommand":["publish"]},"python":null,"implementation":{"name":null,"version":null},"distro":{"name":"Ubuntu","version":"24.04","id":"noble","libc":null},"system":{"name":null,"release":null},"cpu":null,"openssl_version":null,"setuptools_version":null,"rustc_version":null,"ci":true} | 2026-02-19T10:59:07.475988 | akerbp_mlpet-6.0.3-py3-none-any.whl | 67,244 | 96/bd/796776ea1c1ebe3e38abb1c842ba95f38eee232e7610de1f38e4d6b3cb97/akerbp_mlpet-6.0.3-py3-none-any.whl | py3 | bdist_wheel | null | false | 5800592ee9e9fea9ce99efbc39ae9881 | af13ddabe5aec289672a04fc7d1fc28d1c88d39c52715263c1fbff02f63c4547 | 96bd796776ea1c1ebe3e38abb1c842ba95f38eee232e7610de1f38e4d6b3cb97 | Apache-2.0 | [
"LICENSE"
] | 249 |
2.4 | nubix-outlook | 1.0.5 | Outlook mailbox client built on Microsoft Graph with helpers for attachment processing. | # nubix-outlook
Outlook mailbox client built on Microsoft Graph with helpers for fetching messages and downloading attachments.
## Installation
```bash
pip install nubix-outlook
```
## Configuration
Set environment variables (or pass arguments to the client):
- `GRAPH_CLIENT_ID`
- `GRAPH_CLIENT_SECRET`
- `GRAPH_TENANT_ID`
- `GRAPH_MAILBOX_EMAIL`
## Usage
```python
from nubix_outlook import OutlookMailboxClient, AttachmentCollector
client = OutlookMailboxClient()
collector = AttachmentCollector(client)
attachments = collector.collect_pdf_attachments(
allowed_sender_domain="@example.com",
download_dir="downloads",
top=25,
max_pages=3,
)
for message_id, files in attachments:
print(message_id, files)
```
### Direct mailbox actions
- `list_messages(top=50, max_pages=1, query_params=None)` fetches messages with optional paging.
- `get_attachments(message_id)` fetches attachments for a message.
- `move_message_to_archive(message_id)` moves a message to the Archive folder.
- `save_attachment(attachment, base_directory=None)` saves a Graph fileAttachment to disk.
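As a rough sketch of how the calls listed above fit together (the method names and parameters are the documented ones, but the shape of the returned message/attachment objects and the return value of `save_attachment` are assumptions):
```python
from nubix_outlook import OutlookMailboxClient

client = OutlookMailboxClient()

# Fetch a page of recent messages, save their attachments, then archive each message.
# Assumption: messages and attachments are Graph-style dicts with an "id" field,
# and save_attachment returns the path it wrote to.
messages = client.list_messages(top=10, max_pages=1)
for message in messages:
    for attachment in client.get_attachments(message["id"]):
        saved_path = client.save_attachment(attachment, base_directory="downloads")
        print("Saved", saved_path)
    client.move_message_to_archive(message["id"])
```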
### AttachmentCollector
`AttachmentCollector.collect_pdf_attachments(...)` downloads PDF attachments from senders matching `allowed_sender_domain` into per-message subdirectories under `download_dir`. Returns a list of tuples `(message_id, [pdf_paths])`.
| text/markdown | null | Nubix <l.laarveld@nubix.nl> | null | null | MIT | null | [
"Development Status :: 4 - Beta",
"Intended Audience :: Developers",
"License :: OSI Approved :: MIT License",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: 3.13"
] | [] | null | null | >=3.11 | [] | [] | [] | [
"msal>=1.30.0",
"requests>=2.32.5"
] | [] | [] | [] | [
"Repository, https://git.nubix.online/python/ai.git"
] | twine/6.2.0 CPython/3.12.3 | 2026-02-19T10:58:29.842586 | nubix_outlook-1.0.5.tar.gz | 4,781 | a0/f2/7ea5d3a02923dfa70db26a311e6367e8f0cca45b31add76751b2020a39d2/nubix_outlook-1.0.5.tar.gz | source | sdist | null | false | 0b0e9a2c0b1e7af36d137268fec002f3 | 3e19031a1070ffe92f3a8a972459fc0ed5fc20fc4c740d78953a7896642424f6 | a0f27ea5d3a02923dfa70db26a311e6367e8f0cca45b31add76751b2020a39d2 | null | [] | 235 |
2.3 | lamindb_setup | 1.21.0 | Setup & configure LaminDB. | [](https://codecov.io/gh/laminlabs/lamindb-setup)
# lamindb-setup: Setting up `lamindb`
- User [docs](https://lamin.ai/docs)
- Developer [docs](https://lamindb-setup-htry.netlify.app/)
| text/markdown | null | Lamin Labs <open-source@lamin.ai> | null | null | null | null | [] | [] | null | null | >=3.10 | [] | [
"lamindb_setup"
] | [] | [
"lamin_utils>=0.3.3",
"python-dotenv",
"django<5.3,>=5.2",
"dj_database_url<3.0.0,>=1.3.0",
"django-pgtrigger",
"pydantic-settings",
"platformdirs<5.0.0",
"httpx_retries<1.0.0",
"requests",
"universal_pathlib==0.2.6",
"botocore<2.0.0",
"supabase<=2.24.0,>=2.20.0",
"websockets>=13.0",
"pyjw... | [] | [] | [] | [
"Home, https://github.com/laminlabs/lamindb-setup"
] | python-requests/2.32.5 | 2026-02-19T10:58:29.483691 | lamindb_setup-1.21.0.tar.gz | 227,689 | b8/0a/b11be0fa43b691ecf143506ce14dfb14d7aecacad4f52d8873eb4a116f46/lamindb_setup-1.21.0.tar.gz | source | sdist | null | false | 32183bc5d192057fefb0a39e7b550c15 | df74cea2cf5781aa772350a11ce2a6f6747e26d0067c30e2ac6aba8c9e873535 | b80ab11be0fa43b691ecf143506ce14dfb14d7aecacad4f52d8873eb4a116f46 | null | [] | 0 |
2.4 | maibot-dashboard | 1.0.0.dev202602198 | MaiBot WebUI static assets | # MaiBot WebUI Dist
该包仅包含 MaiBot WebUI 的前端构建产物(dist)。
| text/markdown | null | null | null | null | null | null | [] | [] | null | null | >=3.10 | [] | [] | [] | [] | [] | [] | [] | [] | twine/6.2.0 CPython/3.11.14 | 2026-02-19T10:57:59.991902 | maibot_dashboard-1.0.0.dev202602198.tar.gz | 2,282,756 | c3/ff/dab7ddc786f2c7e9dc7596698d83645cb783f0f12a2cc976a222e6d7377e/maibot_dashboard-1.0.0.dev202602198.tar.gz | source | sdist | null | false | f486c7c52f70cada7d99ee390247a62c | fd2c88a21b210e52d6b187e8618ec4dcd531bc4c58bfe705bcae09e1b0d94314 | c3ffdab7ddc786f2c7e9dc7596698d83645cb783f0f12a2cc976a222e6d7377e | null | [] | 206 |
2.4 | kyrodb | 0.1.0 | Official Python SDK for KyroDB | # KyroDB Python SDK (`kyrodb`)
Official Python SDK for KyroDB gRPC APIs.
This SDK provides:
- Sync + async clients with matching behavior.
- Typed public response models (no protobuf objects in public returns).
- Strict input validation for IDs, vectors, filters, and call options.
- Production-oriented reliability defaults (timeouts, retries, circuit breaker support).
- TLS + API-key authentication, including key rotation providers.
## Install
```bash
pip install kyrodb
```
Optional extras:
```bash
pip install "kyrodb[dev]" # lint/type/test tooling
pip install "kyrodb[proto]" # protobuf regeneration tooling
pip install "kyrodb[docs]" # MkDocs toolchain
pip install "kyrodb[numpy]" # faster vector validation path
```
## Quick Start (Sync)
```python
from kyrodb import KyroDBClient
with KyroDBClient(target="127.0.0.1:50051", api_key="kyro_live_key") as client:
client.wait_for_ready(timeout_s=5.0)
client.insert(
doc_id=1,
embedding=[0.0] * 768,
metadata={"tenant": "acme", "source": "sdk-readme"},
namespace="default",
)
result = client.search(
query_embedding=[0.0] * 768,
k=10,
namespace="default",
)
print(result.total_found)
```
## Quick Start (Async)
```python
import asyncio
from kyrodb import AsyncKyroDBClient
async def main() -> None:
async with AsyncKyroDBClient(target="127.0.0.1:50051", api_key="kyro_live_key") as client:
await client.wait_for_ready(timeout_s=5.0)
await client.insert(
doc_id=1,
embedding=[0.0] * 768,
metadata={"tenant": "acme"},
namespace="default",
)
result = await client.search(
query_embedding=[0.0] * 768,
k=10,
namespace="default",
)
print(result.total_found)
asyncio.run(main())
```
## Core Behavior
### Public API contract
- Public SDK surface is what is exported from `src/kyrodb/__init__.py`.
- `kyrodb._generated.*` is internal implementation detail and not part of compatibility guarantees.
### Error model
All RPC failures map to typed exceptions under `KyroDBError`, including:
- `AuthenticationError`
- `PermissionDeniedError`
- `InvalidArgumentError`
- `NotFoundError`
- `QuotaExceededError`
- `DeadlineExceededError`
- `ServiceUnavailableError`
- `CircuitOpenError`
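A small handling sketch around a search call. It assumes the exception classes above are importable from the top-level `kyrodb` package and that `timeout_s` is accepted as a per-call option, as the sections on the public API surface and timeouts suggest.
```python
from kyrodb import (
    KyroDBClient,
    KyroDBError,
    DeadlineExceededError,
    ServiceUnavailableError,
)

with KyroDBClient(target="127.0.0.1:50051", api_key="kyro_live_key") as client:
    try:
        result = client.search(
            query_embedding=[0.0] * 768,
            k=10,
            namespace="default",
            timeout_s=5.0,  # assumed per-call option; the default is 30.0s
        )
        print(result.total_found)
    except DeadlineExceededError:
        print("Call timed out; consider a larger timeout_s")
    except ServiceUnavailableError:
        print("Server unavailable; read-like calls may be retried automatically")
    except KyroDBError as exc:
        print(f"KyroDB error: {exc}")
```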
### Timeouts, retries, and circuit breaker
- Default call timeout is `30.0s`.
- `timeout_s=None` explicitly requests unbounded timeout.
- Read-like operations are retry-enabled by default; writes are not.
- Retries use bounded exponential backoff with jitter and elapsed-time budget.
- Optional circuit breaker can fail fast under sustained transient failures.
### Transport defaults
By default:
- gRPC keepalive is enabled.
- Send/receive message limits are set to 64 MiB.
- One client/channel should be reused per process/worker.
## Filters
Filter builders:
- `exact(key, value)`
- `in_values(key, values)`
- `range_match(key, gte=..., lte=..., gt=..., lt=...)`
- `all_of([...])`
- `any_of([...])`
- `negate(filter_value)`
Example:
```python
from kyrodb import KyroDBClient, all_of, exact, in_values
with KyroDBClient(target="127.0.0.1:50051", api_key="kyro_live_key") as client:
filt = all_of([exact("tenant", "acme"), in_values("tier", ["pro", "enterprise"])])
result = client.search(query_embedding=[0.0] * 768, k=10, filter=filt, namespace="default")
print(result.total_found)
```
## Security and TLS
```python
from kyrodb import KyroDBClient, TLSConfig
with open("ca.pem", "rb") as f:
ca_pem = f.read()
with open("client.crt", "rb") as f:
client_crt = f.read()
with open("client.key", "rb") as f:
client_key = f.read()
tls = TLSConfig(
root_certificates=ca_pem,
certificate_chain=client_crt,
private_key=client_key,
)
client = KyroDBClient(target="db.internal:50051", api_key="kyro_live_key", tls=tls)
```
Security notes:
- Non-loopback targets require TLS.
- API key is sent as gRPC metadata (`x-api-key`).
- mTLS requires both certificate chain and private key.
- API key rotation:
- sync: `set_api_key(...)`, `set_api_key_provider(...)`
- async: `set_api_key(...)`, `set_api_key_provider(...)`, `set_api_key_provider_async(...)`
## API Surface
Both `KyroDBClient` and `AsyncKyroDBClient` provide:
- `insert`, `bulk_insert`, `bulk_load_hnsw`, `delete`, `update_metadata`, `batch_delete_ids`
- `query`, `search`, `bulk_search`, `bulk_query`
- `health`, `metrics`
- `flush_hot_tier`, `create_snapshot`, `get_config`
- `wait_for_ready`, `close`
## Development
```bash
python -m venv .venv
source .venv/bin/activate
pip install -e ".[dev,proto]"
python scripts/gen_proto.py --proto proto/kyrodb.proto --out src/kyrodb/_generated
```
Run local CI parity:
```bash
./scripts/run_ci_local.sh
```
## Documentation
- `docs/README.md` — docs index
- `docs/api-reference.md` — method-level reference
- `docs/operations.md` — reliability and runtime behavior
- `docs/troubleshooting.md` — operational failure modes
- `docs/performance-benchmarks.md` — validation overhead benchmarks
Hosted docs are generated from `mkdocs.yml` via `.github/workflows/docs.yml`.
| text/markdown | null | KyroDB Team <kishan@kyrodb.com> | null | null | BSL | ai, grpc, kyrodb, rag, vector-database | [
"Development Status :: 3 - Alpha",
"Intended Audience :: Developers",
"Operating System :: OS Independent",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3 :: Only",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language ::... | [] | null | null | >=3.10 | [] | [] | [] | [
"grpcio<2.0.0,>=1.76.0",
"protobuf<7.0.0,>=6.31.1",
"mypy>=1.11.0; extra == \"dev\"",
"pytest-asyncio>=0.24.0; extra == \"dev\"",
"pytest-cov>=5.0.0; extra == \"dev\"",
"pytest>=8.3.0; extra == \"dev\"",
"ruff>=0.7.0; extra == \"dev\"",
"types-grpcio>=1.0.0.20251009; extra == \"dev\"",
"mkdocs-mater... | [] | [] | [] | [
"Homepage, https://github.com/KyroDB/kyrodb-python",
"Repository, https://github.com/KyroDB/kyrodb-python",
"Documentation, https://github.com/KyroDB/kyrodb-python#readme",
"Issues, https://github.com/KyroDB/kyrodb-python/issues"
] | twine/6.2.0 CPython/3.11.14 | 2026-02-19T10:57:50.858785 | kyrodb-0.1.0.tar.gz | 56,658 | 30/f6/8a966150bcd1294e9cc8a70e4237a56d61ea26482755b3d9f259a9dad437/kyrodb-0.1.0.tar.gz | source | sdist | null | false | 3f0c15f06ccdd4cc85eb5048d8aa17c6 | 89f5ca4e7300a024fdf964ee38f0688c504faa592898595e5bc16a39947d2b99 | 30f68a966150bcd1294e9cc8a70e4237a56d61ea26482755b3d9f259a9dad437 | null | [
"LICENSE"
] | 262 |
2.3 | lamin_cli | 1.14.0 | Lamin CLI. | # Lamin CLI
The CLI that exposes `lamindb` and `lamindb_setup` to the command line.
| text/markdown | null | Lamin Labs <open-source@lamin.ai> | null | null | null | null | [] | [] | null | null | null | [] | [
"lamin_cli"
] | [] | [
"rich-click>=1.7"
] | [] | [] | [] | [
"Home, https://github.com/laminlabs/lamin-cli"
] | python-requests/2.32.5 | 2026-02-19T10:57:39.459513 | lamin_cli-1.14.0.tar.gz | 52,362 | 7c/ef/34f7e80606339386001d8da8e7b868e3cde117475bd3e63adf27cefe36a2/lamin_cli-1.14.0.tar.gz | source | sdist | null | false | 0b7e19e5d464ef4731f31d5777bdb036 | 00407ee7de3a3439919abb5a08d6a0ea230938160e3bfb04d4f0cfbf56f602a6 | 7cef34f7e80606339386001d8da8e7b868e3cde117475bd3e63adf27cefe36a2 | null | [] | 0 |
2.4 | agenlang | 0.0.1a1 | AgenLang — shared contract substrate for agent interoperability | # AgenLang
Shared contract substrate for secure, auditable inter-agent communication (OpenClaw ↔ ZHC ↔ personal agents).
https://github.com/iamsharathbhaskar/AgenLang
| text/markdown | null | null | null | null | MIT | null | [] | [] | null | null | >=3.11 | [] | [] | [] | [
"pydantic>=2.0"
] | [] | [] | [] | [
"Homepage, https://github.com/iamsharathbhaskar/AgenLang",
"Repository, https://github.com/iamsharathbhaskar/AgenLang"
] | twine/6.2.0 CPython/3.13.9 | 2026-02-19T10:56:32.618172 | agenlang-0.0.1a1.tar.gz | 1,348 | f1/3e/3bd2f59fa1614a4f69e9d46a3290f021a74924aeaeee6260cb9169cd70da/agenlang-0.0.1a1.tar.gz | source | sdist | null | false | 3a75050319f565adce8700d2e7f22fff | 3812ca318db091a8d757276caf5a5e963b20f0cca6e77fcf08d2a163fae86803 | f13e3bd2f59fa1614a4f69e9d46a3290f021a74924aeaeee6260cb9169cd70da | null | [
"LICENSE"
] | 237 |
2.4 | python-gvm | 26.10.0 | Library to communicate with remote servers over GMP or OSP | 
# Greenbone Vulnerability Management Python Library <!-- omit in toc -->
[](https://github.com/greenbone/python-gvm/releases)
[](https://pypi.org/project/python-gvm/)
[](https://codecov.io/gh/greenbone/python-gvm)
[](https://github.com/greenbone/python-gvm/actions/workflows/ci.yml)
The Greenbone Vulnerability Management Python API library (**python-gvm**) is a
collection of APIs that help with remote controlling Greenbone Community Edition
installations and Greenbone Enterprise Appliances. The library essentially
abstracts accessing the communication protocols Greenbone Management Protocol
(GMP) and Open Scanner Protocol (OSP).
## Table of Contents <!-- omit in toc -->
- [Documentation](#documentation)
- [Installation](#installation)
- [Version](#version)
- [Requirements](#requirements)
- [Install using pip](#install-using-pip)
- [Example](#example)
- [Support](#support)
- [Maintainer](#maintainer)
- [Contributing](#contributing)
- [License](#license)
## Documentation
The documentation for python-gvm can be found at
[https://greenbone.github.io/python-gvm/](https://greenbone.github.io/python-gvm/).
Please always take a look at the documentation for further details. This
**README** just gives you a short overview.
## Installation
### Version
`python-gvm` uses [semantic versioning](https://semver.org/).
Versions prior to 26.0.0 used [calendar versioning](https://calver.org/).
Please consider always using the **newest** releases of `gvm-tools` and `python-gvm`.
We frequently update these projects to add features and keep them free from bugs.
> [!IMPORTANT]
> To use `python-gvm` with GMP version of 7, 8 or 9 you must use a release version
> that is `<21.5`. In the `21.5` release the support of these versions has been
> dropped.
> [!IMPORTANT]
> To use `python-gvm` with GMP version 20.8 or 21.4 you must use a release version
> that is `<24.6`. In the `24.6` release the support of these versions has been
> dropped.
### Requirements
Python 3.9 and later is supported.
### Install using pip
You can install the latest stable release of python-gvm from the Python Package
Index using [pip](https://pip.pypa.io/):
```shell
python3 -m pip install --user python-gvm
```
## Example
```python3
from gvm.connections import UnixSocketConnection
from gvm.protocols.gmp import GMP
from gvm.transforms import EtreeTransform
from gvm.xml import pretty_print
connection = UnixSocketConnection()
transform = EtreeTransform()
with GMP(connection, transform=transform) as gmp:
# Retrieve GMP version supported by the remote daemon
version = gmp.get_version()
# Prints the XML in beautiful form
pretty_print(version)
# Login
gmp.authenticate('foo', 'bar')
# Retrieve all tasks
tasks = gmp.get_tasks()
# Get names of tasks
task_names = tasks.xpath('task/name/text()')
pretty_print(task_names)
```
## Support
For any question on the usage of python-gvm please use the
[Greenbone Community Forum](https://forum.greenbone.net/c/building-from-source-under-the-hood/gmp/11). If you
found a problem with the software, please
[create an issue](https://github.com/greenbone/python-gvm/issues)
on GitHub.
## Maintainer
This project is maintained by [Greenbone AG](https://www.greenbone.net/).
## Contributing
Your contributions are highly appreciated. Please
[create a pull request](https://github.com/greenbone/python-gvm/pulls) on GitHub.
For bigger changes, please discuss it first in the
[issues](https://github.com/greenbone/python-gvm/issues).
For development you should use [poetry](https://python-poetry.org)
to keep your Python packages separated in different environments. First install
poetry via pip
```shell
python3 -m pip install --user poetry
```
Afterwards run
```shell
poetry install
```
in the checkout directory of python-gvm (the directory containing the
`pyproject.toml` file) to install all dependencies including the packages only
required for development.
The python-gvm repository uses [autohooks](https://github.com/greenbone/autohooks)
to apply linting and auto formatting via git hooks. Please ensure the git hooks
are active.
```shell
poetry install
poetry run autohooks activate --force
```
## License
Copyright (C) 2017-2025 [Greenbone AG](https://www.greenbone.net/)
Licensed under the [GNU General Public License v3.0 or later](LICENSE).
| text/markdown | Greenbone AG | info@greenbone.net | null | null | GPL-3.0-or-later | null | [
"Development Status :: 5 - Production/Stable",
"Environment :: Console",
"Intended Audience :: Developers",
"License :: OSI Approved :: GNU General Public License v3 or later (GPLv3+)",
"Operating System :: OS Independent",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.10",... | [] | null | null | <4.0.0,>=3.9.2 | [] | [] | [] | [
"httpx[http2]<0.29.0,>=0.28.1",
"lxml>=4.5.0",
"paramiko>=2.7.1"
] | [] | [] | [] | [
"Documentation, https://greenbone.github.io/python-gvm/",
"Homepage, https://github.com/greenbone/python-gvm/",
"Repository, https://github.com/greenbone/python-gvm/"
] | uv/0.10.4 {"installer":{"name":"uv","version":"0.10.4","subcommand":["publish"]},"python":null,"implementation":{"name":null,"version":null},"distro":{"name":"Ubuntu","version":"24.04","id":"noble","libc":null},"system":{"name":null,"release":null},"cpu":null,"openssl_version":null,"setuptools_version":null,"rustc_version":null,"ci":true} | 2026-02-19T10:56:00.009627 | python_gvm-26.10.0.tar.gz | 257,466 | 64/d2/c08cff190433f039aad44295cc5c6a4ad021b66beff45c73ea70b85f9b77/python_gvm-26.10.0.tar.gz | source | sdist | null | false | 29c104398b98fd38fcf4048efb76eb6c | 8fb3681368fdb486ea3af7921c39c84d21596f624d4b95e84e80bef426a551ab | 64d2c08cff190433f039aad44295cc5c6a4ad021b66beff45c73ea70b85f9b77 | null | [
"LICENSE"
] | 2,433 |
2.4 | openworm-ai | 0.4.0 | Investigating the use of LLMs and other AI technology in OpenWorm | # openworm.ai
Scripts related to use of LLMs and other AI technology in OpenWorm
| text/markdown | OpenWorm contributors | p.gleeson@gmail.com | Padraig Gleeson | p.gleeson@gmail.com | LGPLv3 | null | [
"Intended Audience :: Science/Research",
"Natural Language :: English",
"Operating System :: OS Independent",
"Topic :: Scientific/Engineering",
"Intended Audience :: Science/Research",
"Programming Language :: Python",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3 :: Onl... | [] | https://openworm.ai | null | >=3.8 | [] | [] | [] | [
"bs4",
"pandas",
"modelspec",
"llama_index",
"llama-index-llms-ollama",
"langchain>=1.0.0",
"matplotlib",
"pytest; extra == \"test\"",
"ruff; extra == \"test\"",
"openworm_ai[test]; extra == \"dev\"",
"llamaapi; extra == \"dev\"",
"llama-index-embeddings-ollama; extra == \"dev\"",
"openworm_... | [] | [] | [] | [] | twine/6.1.0 CPython/3.13.7 | 2026-02-19T10:55:19.102674 | openworm_ai-0.4.0.tar.gz | 516,614 | e7/f6/17ade3ca070ea4008f70cc9fa7a3ee2feaf8ebbfbb0c09f8de713f06ad46/openworm_ai-0.4.0.tar.gz | source | sdist | null | false | e45c7ad6354c421ead0b47ead9610f45 | cec4c6d6af0c75cf0ae3bb7359e7660eb38fa87134b19b97c98b2b8c710bd7fc | e7f617ade3ca070ea4008f70cc9fa7a3ee2feaf8ebbfbb0c09f8de713f06ad46 | null | [
"LICENSE"
] | 234 |
2.4 | plumber-agent | 1.0.15 | Local DCC Agent for Plumber Workflow Editor - Enables Maya, Blender, and Houdini operations | # Plumber Local DCC Agent v2.0.0 🚀
The Enhanced Local DCC Agent provides **world-class connection stability** and **universal DCC integration** through a revolutionary plugin architecture. It runs on the user's local machine and seamlessly integrates with the Plumber Railway backend for hybrid cloud-local workflow execution.
## 🌟 Version 2.0.0 - Major Release Updates
### **🎯 Enhanced Connection Management**
- **Exponential Backoff**: Smart reconnection with 5s → 10s → 20s → 40s → 60s delays
- **Connection State Persistence**: Maintains connection state across agent restarts
- **Circuit Breaker Pattern**: Prevents connection storms with automatic recovery
- **Multi-Path Communication**: WebSocket primary + HTTP polling fallback
- **Real-time Quality Monitoring**: Live connection quality scoring (0.0-1.0)
- **Message Queuing**: Zero message loss during temporary disconnections
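For illustration only (this is not the agent's actual code), the delay schedule above is a standard capped exponential backoff:
```python
def backoff_delays(base=5, cap=60, attempts=6):
    """Yield reconnect delays in seconds: 5, 10, 20, 40, 60, 60, ..."""
    delay = base
    for _ in range(attempts):
        yield delay
        delay = min(delay * 2, cap)

print(list(backoff_delays()))  # [5, 10, 20, 40, 60, 60]
```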
### **🔌 Universal DCC Plugin System**
- **Modular Architecture**: Plugin-based system for easy DCC integration
- **Session Management**: Intelligent session pooling and lifecycle management
- **Cross-DCC Support**: Maya, Blender, Houdini with identical interface
- **Capability Detection**: Automatic discovery of DCC features and operations
- **Resource Optimization**: Smart CPU/memory management per DCC type
- **Operation Validation**: Pre-execution validation and error prevention
## Architecture
```
┌─────────────────┐ WebSocket/HTTP ┌─────────────────┐
│ Railway │ ◄──────────────────► │ Local DCC │
│ Backend │ DCC Operations │ Agent │
│ (Cloud) │ │ (Local) │
└─────────────────┘ └─────────────────┘
│
▼
┌─────────────────┐
│ Maya/Blender/ │
│ Houdini │
│ (Local Install) │
└─────────────────┘
```
## 🧩 Custom Nodes SDK
Create your own nodes that run locally on your machine. Your code stays private - only metadata is sent to the cloud.
### **Quick Start**
1. Create `~/plumber/custom_nodes/my_node.py`:
```python
class MyNode(Node):
"""My custom node."""
_category = "My Nodes"
_icon = "Star"
_description = "Does something cool"
message: str = Property(default="Hello", label="Message")
result: str = Output(type=str, label="Result")
def execute(self, context):
self.set_output("result", f"{self.message} World!")
return True
```
2. The agent auto-discovers your node within 2 seconds. Done!
### **Features**
- **🔒 Code Privacy**: Your Python code never leaves your machine
- **⚡ Hot Reload**: Changes detected automatically, no restart needed
- **🌐 Network Paths**: Support for shared pipeline directories (UNC, mounted drives)
- **🛡️ Sandboxed**: Execution timeout protection (default: 5 minutes)
- **📦 Standard Libraries**: Use any Python standard library
### **Documentation**
See **[CUSTOM_NODES_SDK.md](CUSTOM_NODES_SDK.md)** for complete documentation including:
- Property types (text, number, slider, dropdown, file picker)
- Input/Output port definitions
- ExecutionContext API (logging, progress)
- Network path configuration for studios
- Complete examples
### **API Endpoints**
```
GET /custom-nodes/list # List all discovered custom nodes
GET /custom-nodes/{type} # Get specific node metadata
POST /custom-nodes/execute # Execute a custom node
POST /custom-nodes/reload # Force reload all nodes
```
---
## 🛠️ Enhanced Features
### **Connection Reliability**
- **99.9% Uptime Target**: Enterprise-grade connection stability
- **Sub-5s Reconnection**: Lightning-fast recovery from network issues
- **Zero Operation Loss**: Guaranteed operation completion or graceful failure
- **Real-time Diagnostics**: Live connection health monitoring and debugging
### **Universal DCC Integration**
- **🔍 Auto-Discovery**: Intelligent detection of Maya, Blender, and Houdini installations
- **🎨 Cross-DCC Operations**: Unified interface for all supported DCCs
- **🚀 Session Pooling**: Persistent DCC sessions for faster operation execution
- **📊 Resource Management**: Intelligent CPU/memory allocation per DCC type
- **🔄 Operation Chaining**: Complex workflows spanning multiple DCCs
### **Production-Grade Features**
- **🔒 Enhanced Security**: JWT authentication and operation validation
- **📈 Performance Analytics**: Detailed execution metrics and bottleneck analysis
- **🌐 Web Integration**: Seamless integration with Plumber web application
- **🛠️ Easy Setup**: One-click installation with comprehensive testing tools
## Quick Start
### 1. Installation
Run the installer:
```bash
install.bat
```
This will:
- Check Python installation
- Create virtual environment
- Install dependencies
- Run DCC discovery
### 2. Start Agent
```bash
start_agent.bat
```
The agent will be available at:
- **HTTP API**: `http://127.0.0.1:8001`
- **WebSocket**: `ws://127.0.0.1:8001/ws`
- **Health Check**: `http://127.0.0.1:8001/health`
### 3. Check Version and Test System
Verify your agent version and test enhanced features:
```bash
check_version.bat
```
This will:
- Show current agent version (should be v2.0.0)
- Check Railway backend compatibility
- Verify enhanced connection features
- Display connection status and quality metrics
### 4. Test Enhanced Features
Run comprehensive system tests:
```bash
python test_enhanced_dcc_system.py
```
This comprehensive test validates:
- Enhanced connection management
- Universal DCC plugin system
- Connection stability and resilience
- Plugin discovery and validation
### 5. Connect to Railway
The Railway backend will automatically discover and connect to your local agent when executing DCC workflows.
## 📡 Enhanced API Endpoints
### **Connection Management**
```
GET /connection/status # Detailed connection status and quality metrics
GET /health # Enhanced health check with connection quality
GET /version # Comprehensive version and feature information
```
### **DCC Plugin System**
```
GET /dcc/discovery # Universal DCC plugin discovery
POST /dcc/execute # Execute DCC operation through plugin system
GET /dcc/{type}/status # Specific DCC plugin status
GET /dcc/{type}/sessions # Session management and monitoring
```
### **Real-time Communication**
```
WS /ws # Enhanced WebSocket with message queuing
```
### **Monitoring & Analytics**
```
GET /statistics # Execution statistics and performance metrics
GET /history # Operation history and success rates
GET /sessions # Active session monitoring
```
### **Custom Nodes**
```
GET /custom-nodes/list # List all discovered custom nodes
GET /custom-nodes/metadata # Get metadata for all nodes
GET /custom-nodes/{node_type} # Get specific node metadata
POST /custom-nodes/execute # Execute a custom node
POST /custom-nodes/reload # Force reload all nodes
```
## 🎨 Universal DCC Operations
### **Maya Plugin** (Production Ready)
- **🎬 Render**: Scene rendering with Maya Software, Arnold, Mental Ray
- **📤 Export**: OBJ, FBX, Alembic, Maya ASCII/Binary formats
- **📥 Import**: Multi-format asset import with namespace support
- **📝 Script**: Custom Maya Python script execution
- **📊 Scene Info**: Comprehensive scene analysis and metadata extraction
### **Blender Plugin** (Production Ready)
- **🎬 Render**: Cycles and Eevee rendering with animation support
- **📝 Script**: Custom Blender Python script execution
- **📤 Export**: Multiple format support (planned)
- **🎨 Materials**: Shader node manipulation (planned)
### **Houdini Plugin** (Production Ready)
- **🎬 Render**: Mantra and Karma rendering
- **📝 Script**: HOM (Houdini Object Model) Python scripting
- **🌊 Simulation**: Fluid, particle, and rigid body simulations
- **🔄 Procedural**: Node network creation and manipulation
### **Plugin Capabilities**
Each plugin provides:
- **🔍 Auto-Discovery**: Automatic installation detection
- **🚀 Session Pooling**: Persistent sessions for faster execution
- **📊 Resource Management**: CPU/memory limits per DCC
- **⚡ Operation Validation**: Pre-execution parameter checking
- **📈 Performance Monitoring**: Detailed execution analytics
## Configuration
Edit `config/agent_config.json` to customize:
```json
{
"agent": {
"host": "127.0.0.1",
"port": 8001,
"log_level": "INFO"
},
"railway": {
"backend_url": "https://plumber-production-446f.up.railway.app"
},
"dcc": {
"maya": { "enabled": true, "timeout": 600 },
"blender": { "enabled": true, "timeout": 300 },
"houdini": { "enabled": true, "timeout": 900 }
}
}
```
## Requirements
- **Python 3.8+**
- **Windows 10/11** (primary support)
- **Maya 2022+** (optional)
- **Blender 3.6+** (optional)
- **Houdini 19.5+** (optional)
## Troubleshooting
### DCC Not Detected
1. Ensure DCC is installed in standard locations
2. Check DCC executable permissions
3. Run discovery: `python src/main.py --discover-only`
### Connection Issues
1. Check firewall settings (port 8001)
2. Verify Railway backend URL in config
3. Check agent logs: `plumber_agent.log`
### Performance Issues
1. Monitor system resources via `/health` endpoint
2. Adjust DCC timeout settings in config
3. Limit concurrent operations per DCC
## Development
### Manual Installation
```bash
# Create virtual environment
python -m venv venv
# Activate (Windows)
venv\Scripts\activate.bat
# Install dependencies
pip install -r requirements.txt
# Run agent
python src/main.py
```
### Command Line Options
```bash
python src/main.py --help
python src/main.py --host 0.0.0.0 --port 8002
python src/main.py --discover-only
python src/main.py --log-level DEBUG
```
## Security
- Agent only accepts connections from configured Railway backend
- DCC operations run in isolated temporary directories
- File size limits and operation timeouts prevent abuse
- Comprehensive logging for audit trails
## License
Part of the Plumber Workflow Editor project.
| text/markdown | Damn Ltd | Damn Ltd <info@damnltd.com> | null | null | MIT | workflow, dcc, maya, blender, houdini, automation, vfx | [
"Development Status :: 4 - Beta",
"Intended Audience :: Developers",
"Topic :: Software Development :: Build Tools",
"License :: OSI Approved :: MIT License",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.8",
"Programming Language :: Python :: 3.9",
"Programming Language ... | [] | https://app.plumber.damnltd.com | null | >=3.8 | [] | [] | [] | [
"fastapi>=0.104.1",
"uvicorn[standard]>=0.24.0",
"websockets>=12.0",
"pydantic>=2.5.0",
"psutil>=5.9.6",
"watchdog>=3.0.0",
"cryptography>=42.0.0",
"requests>=2.31.0",
"aiofiles>=23.2.1",
"python-multipart>=0.0.6",
"aiohttp>=3.9.0",
"click>=8.1.0",
"pytest>=7.0.0; extra == \"dev\"",
"pytes... | [] | [] | [] | [
"Homepage, https://app.plumber.damnltd.com",
"Documentation, https://app.plumber.damnltd.com/docs",
"Repository, https://github.com/damnvfx/plumber-editor",
"Bug Tracker, https://github.com/damnvfx/plumber-editor/issues"
] | twine/6.2.0 CPython/3.12.1 | 2026-02-19T10:55:14.975085 | plumber_agent-1.0.15.tar.gz | 128,037 | e9/fc/9e09f5e03b22f6b77080116d2f8d951238372c08c49864c9424877d89dcf/plumber_agent-1.0.15.tar.gz | source | sdist | null | false | 521157fb3bbc04b5b1de83b778b85e01 | 11398adf7131300aa089ed29c83a8ff4d6de6a5f681d72ea6ec73896306ea712 | e9fc9e09f5e03b22f6b77080116d2f8d951238372c08c49864c9424877d89dcf | null | [] | 237 |
2.4 | awslabs.aws-diagram-mcp-server | 1.0.20 | An MCP server that seamlessly creates diagrams using the Python diagrams package DSL | # AWS Diagram MCP Server
Model Context Protocol (MCP) server for AWS Diagrams
This MCP server seamlessly creates [diagrams](https://diagrams.mingrammer.com/) using the Python diagrams package DSL. It allows you to generate AWS diagrams, sequence diagrams, flow diagrams, and class diagrams using Python code.
[](https://github.com/awslabs/mcp/blob/main/src/aws-diagram-mcp-server/tests/)
## Prerequisites
1. Install `uv` from [Astral](https://docs.astral.sh/uv/getting-started/installation/) or the [GitHub README](https://github.com/astral-sh/uv#installation)
2. Install Python using `uv python install 3.10`
3. Install GraphViz https://www.graphviz.org/
## Installation
| Kiro | Cursor | VS Code |
|:----:|:------:|:-------:|
| [](https://kiro.dev/launch/mcp/add?name=awslabs.aws-diagram-mcp-server&config=%7B%22command%22%3A%22uvx%22%2C%22args%22%3A%5B%22awslabs.aws-diagram-mcp-server%22%5D%2C%22env%22%3A%7B%22FASTMCP_LOG_LEVEL%22%3A%22ERROR%22%7D%7D) | [](https://cursor.com/en/install-mcp?name=awslabs.aws-diagram-mcp-server&config=eyJjb21tYW5kIjoidXZ4IGF3c2xhYnMuYXdzLWRpYWdyYW0tbWNwLXNlcnZlciIsImVudiI6eyJGQVNUTUNQX0xPR19MRVZFTCI6IkVSUk9SIn0sImF1dG9BcHByb3ZlIjpbXSwiZGlzYWJsZWQiOmZhbHNlfQ%3D%3D) | [](https://insiders.vscode.dev/redirect/mcp/install?name=AWS%20Diagram%20MCP%20Server&config=%7B%22command%22%3A%22uvx%22%2C%22args%22%3A%5B%22awslabs.aws-diagram-mcp-server%22%5D%2C%22env%22%3A%7B%22FASTMCP_LOG_LEVEL%22%3A%22ERROR%22%7D%2C%22autoApprove%22%3A%5B%5D%2C%22disabled%22%3Afalse%7D) |
Configure the MCP server in your MCP client configuration (e.g., for Kiro, edit `~/.kiro/settings/mcp.json`):
```json
{
"mcpServers": {
"awslabs.aws-diagram-mcp-server": {
"command": "uvx",
"args": ["awslabs.aws-diagram-mcp-server"],
"env": {
"FASTMCP_LOG_LEVEL": "ERROR"
},
"autoApprove": [],
"disabled": false
}
}
}
```
### Windows Installation
For Windows users, the MCP server configuration format is slightly different:
```json
{
"mcpServers": {
"awslabs.aws-diagram-mcp-server": {
"disabled": false,
"timeout": 60,
"type": "stdio",
"command": "uv",
"args": [
"tool",
"run",
"--from",
"awslabs.aws-diagram-mcp-server@latest",
"awslabs.aws-diagram-mcp-server.exe"
],
"env": {
"FASTMCP_LOG_LEVEL": "ERROR",
"AWS_PROFILE": "your-aws-profile",
"AWS_REGION": "us-east-1"
}
}
}
}
```
or docker after a successful `docker build -t awslabs/aws-diagram-mcp-server .`:
```json
{
"mcpServers": {
"awslabs.aws-diagram-mcp-server": {
"command": "docker",
"args": [
"run",
"--rm",
"--interactive",
"--env",
"FASTMCP_LOG_LEVEL=ERROR",
"awslabs/aws-diagram-mcp-server:latest"
],
"env": {},
"disabled": false,
"autoApprove": []
}
}
}
```
## Features
The Diagrams MCP Server provides the following capabilities:
1. **Generate Diagrams**: Create professional diagrams using Python code
2. **Multiple Diagram Types**: Support for AWS architecture, sequence diagrams, flow charts, class diagrams, and more
3. **Customization**: Customize diagram appearance, layout, and styling
4. **Security**: Code scanning to ensure secure diagram generation
## Quick Example
```python
from diagrams import Diagram
from diagrams.aws.compute import Lambda
from diagrams.aws.database import Dynamodb
from diagrams.aws.network import APIGateway
with Diagram("Serverless Application", show=False):
api = APIGateway("API Gateway")
function = Lambda("Function")
database = Dynamodb("DynamoDB")
api >> function >> database
```
## Development
### Testing
The project includes a comprehensive test suite to ensure the functionality of the MCP server. The tests are organized by module and cover all aspects of the server's functionality.
To run the tests, use the provided script:
```bash
./run_tests.sh
```
This script will automatically install pytest and its dependencies if they're not already installed.
Or run pytest directly (if you have pytest installed):
```bash
pytest -xvs tests/
```
To run with coverage:
```bash
pytest --cov=awslabs.aws_diagram_mcp_server --cov-report=term-missing tests/
```
For more information about the tests, see the [tests README](https://github.com/awslabs/mcp/blob/main/src/aws-diagram-mcp-server/tests/README.md).
### Development Dependencies
To set up the development environment, install the development dependencies:
```bash
uv pip install -e ".[dev]"
```
This will install the required dependencies for development, including pytest, pytest-asyncio, and pytest-cov.
| text/markdown | Amazon Web Services | AWSLabs MCP <203918161+awslabs-mcp@users.noreply.github.com> | null | null | Apache-2.0 | null | [
"License :: OSI Approved :: Apache Software License",
"Operating System :: OS Independent",
"Programming Language :: Python",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: 3.13"
] | [] | null | null | >=3.12 | [] | [] | [] | [
"bandit>=1.8.6",
"boto3>=1.40.53",
"diagrams>=0.24.4",
"mcp[cli]>=1.23.0",
"pydantic>=2.12.2",
"sarif-om>=1.0.4",
"setuptools>=80.9.0",
"starlette>=0.48.0",
"urllib3>=2.6.3"
] | [] | [] | [] | [
"Homepage, https://awslabs.github.io/mcp/",
"Documentation, https://awslabs.github.io/mcp/servers/aws-diagram-mcp-server/",
"Source, https://github.com/awslabs/mcp.git",
"Bug Tracker, https://github.com/awslabs/mcp/issues",
"Changelog, https://github.com/awslabs/mcp/blob/main/src/aws-diagram-mcp-server/CHAN... | twine/6.1.0 CPython/3.13.7 | 2026-02-19T10:54:45.741205 | awslabs_aws_diagram_mcp_server-1.0.20.tar.gz | 103,543 | cc/22/82cc415653eb79e9e307aa600fe9d0cd06947a64db03676e3e756b028244/awslabs_aws_diagram_mcp_server-1.0.20.tar.gz | source | sdist | null | false | 6ac69300be8c0656a86ddb2fefe78ce0 | dd480d79c15294320079a1f24a5e8a5977a3ad6db800f049aea5cf2bde89d800 | cc2282cc415653eb79e9e307aa600fe9d0cd06947a64db03676e3e756b028244 | null | [
"LICENSE",
"NOTICE"
] | 0 |
2.4 | awslabs.healthimaging-mcp-server | 0.0.2 | An AWS Labs Model Context Protocol (MCP) server for HealthImaging | # AWS HealthImaging MCP Server
## Overview
The AWS HealthImaging MCP Server enables AI assistants to interact with AWS HealthImaging services through the Model Context Protocol (MCP). It provides comprehensive medical imaging data lifecycle management with **39 specialized tools** for DICOM operations, datastore management, and advanced medical imaging workflows.
This server acts as a bridge between AI assistants and AWS HealthImaging, allowing you to search, retrieve, and manage medical imaging data while maintaining proper security controls and HIPAA compliance considerations.
## Prerequisites
- You must have an AWS account with HealthImaging access and credentials properly configured. Please refer to the official documentation [here ↗](https://boto3.amazonaws.com/v1/documentation/api/latest/guide/credentials.html#configuring-credentials) for guidance. We recommend configuring your credentials using the `AWS_PROFILE` environment variable. If not specified, the system follows boto3's default credential selection order.
- Ensure you have Python 3.10 or newer installed. You can download it from the [official Python website](https://www.python.org/downloads/) or use a version manager such as [pyenv](https://github.com/pyenv/pyenv).
- (Optional) Install [uv](https://docs.astral.sh/uv/getting-started/installation/) for faster dependency management and improved Python environment handling.
## 📦 Installation Methods
Choose the installation method that best fits your workflow and get started with your favorite assistant with MCP support, like Kiro, Cursor or Cline.
| Cursor | VS Code | Kiro |
|:------:|:-------:|:----:|
| [](https://cursor.com/en/install-mcp?name=awslabs.healthimaging-mcp-server&config=eyJjb21tYW5kIjoidXZ4IGF3c2xhYnMuaGVhbHRoaW1hZ2luZy1tY3Atc2VydmVyQGxhdGVzdCIsImVudiI6eyJBV1NfUkVHSU9OIjoidXMtZWFzdC0xIn0sImRpc2FibGVkIjpmYWxzZSwiYXV0b0FwcHJvdmUiOltdfQ%3D%3D) | [](https://insiders.vscode.dev/redirect/mcp/install?name=AWS%20HealthImaging%20MCP%20Server&config=%7B%22command%22%3A%22uvx%22%2C%22args%22%3A%5B%22awslabs.healthimaging-mcp-server%40latest%22%5D%2C%22env%22%3A%7B%22AWS_REGION%22%3A%22us-east-1%22%7D%2C%22type%22%3A%22stdio%22%7D) | [](https://kiro.dev/launch/mcp/add?name=awslabs.healthimaging-mcp-server&config=%7B%22command%22%3A%20%22uvx%22%2C%20%22args%22%3A%20%5B%22awslabs.healthimaging-mcp-server%40latest%22%5D%2C%20%22disabled%22%3A%20false%2C%20%22autoApprove%22%3A%20%5B%5D%7D) |
### ⚡ Using uv
Add the following configuration to your MCP client config file (e.g., for Kiro, edit `~/.kiro/settings/mcp.json`):
**For Linux/MacOS users:**
```json
{
"mcpServers": {
"awslabs.healthimaging-mcp-server": {
"command": "uvx",
"args": [
"awslabs.healthimaging-mcp-server@latest"
],
"env": {
"AWS_REGION": "us-east-1"
},
"disabled": false,
"autoApprove": []
}
}
}
```
**For Windows users:**
```json
{
"mcpServers": {
"awslabs.healthimaging-mcp-server": {
"command": "uvx",
"args": [
"--from",
"awslabs.healthimaging-mcp-server@latest",
"awslabs.healthimaging-mcp-server.exe"
],
"env": {
"AWS_REGION": "us-east-1"
},
"disabled": false,
"autoApprove": []
}
}
}
```
### 🐍 Using Python (pip)
> [!TIP]
> It's recommended to use a virtual environment, because the AWS CLI version used by the MCP server might not match the locally installed one
> and could cause it to be downgraded. In the MCP client config file you can change `"command"` to the path of the Python executable in your
> virtual environment (e.g., `"command": "/workspace/project/.venv/bin/python"`).
**Step 1: Install the package**
```bash
pip install awslabs.healthimaging-mcp-server
```
**Step 2: Configure your MCP client**
Add the following configuration to your MCP client config file (e.g., for Kiro, edit `~/.kiro/settings/mcp.json`):
```json
{
"mcpServers": {
"awslabs.healthimaging-mcp-server": {
"command": "python",
"args": [
"-m",
"awslabs.healthimaging_mcp_server.server"
],
"env": {
"AWS_REGION": "us-east-1"
},
"disabled": false,
"autoApprove": []
}
}
}
```
### 🐳 Using Docker
You can isolate the MCP server by running it in a Docker container.
```json
{
"mcpServers": {
"awslabs.healthimaging-mcp-server": {
"command": "docker",
"args": [
"run",
"--rm",
"--interactive",
"--env",
"AWS_REGION=us-east-1",
"--volume",
"/full/path/to/.aws:/app/.aws",
"awslabs/healthimaging-mcp-server:latest"
],
"env": {}
}
}
}
```
### 🔧 Using Cloned Repository
For detailed instructions on setting up your local development environment and running the server from source, please see the [Development](#development) section below.
## 🚀 Quick Start
Once configured, you can ask your AI assistant questions such as:
- **"List all my HealthImaging datastores"**
- **"Search for CT scans for patient PATIENT123"**
- **"Get DICOM metadata for image set abc123"**
## Features
- **Comprehensive HealthImaging Support**: 39 specialized tools covering all aspects of medical imaging data lifecycle management
- **21 Standard AWS API Operations**: Full AWS HealthImaging API coverage including datastore management, import/export jobs, image sets, metadata, and resource tagging
- **18 Advanced DICOM Operations**: Specialized medical imaging workflows including patient/study/series level operations, bulk operations, and DICOM hierarchy management
- **GDPR Compliance Support**: Patient data removal and study deletion tools support "right to be forgotten/right to erasure" objectives
- **Enhanced Search Capabilities**: Patient-focused, study-focused, and series-focused searches with DICOM-aware filtering
- **Bulk Operations**: Efficient large-scale metadata updates and deletions with built-in safety limits
- **MCP Resources**: Automatic datastore discovery eliminates need for manual datastore ID entry
- **Security-First Design**: Built with healthcare security requirements in mind, supporting HIPAA compliance considerations
## Available MCP Tools
The server provides **39 comprehensive HealthImaging tools** organized into eight categories:
### Datastore Management (4 tools)
- **`create_datastore`** - Create new HealthImaging datastores with optional KMS encryption
- **`get_datastore`** - Get detailed datastore information including endpoints and metadata
- **`list_datastores`** - List all HealthImaging datastores with optional status filtering
### DICOM Import/Export Jobs (6 tools)
- **`start_dicom_import_job`** - Start DICOM import jobs from S3 to HealthImaging
- **`get_dicom_import_job`** - Get import job status and details
- **`list_dicom_import_jobs`** - List import jobs with status filtering
- **`start_dicom_export_job`** - Start DICOM export jobs from HealthImaging to S3
- **`get_dicom_export_job`** - Get export job status and details
- **`list_dicom_export_jobs`** - List export jobs with status filtering
### Image Set Operations (8 tools)
- **`search_image_sets`** - Advanced image set search with DICOM criteria and pagination
- **`get_image_set`** - Retrieve specific image set metadata and status
- **`get_image_set_metadata`** - Get detailed DICOM metadata with base64 encoding
- **`list_image_set_versions`** - List all versions of an image set
- **`update_image_set_metadata`** - Update DICOM metadata (patient corrections, study modifications)
- **`delete_image_set`** - Delete individual image sets (IRREVERSIBLE)
- **`copy_image_set`** - Copy image sets between datastores or within datastore
- **`get_image_frame`** - Get specific image frames with base64 encoding
### Resource Tagging (3 tools)
- **`list_tags_for_resource`** - List tags for HealthImaging resources
- **`tag_resource`** - Add tags to HealthImaging resources
- **`untag_resource`** - Remove tags from HealthImaging resources
### Enhanced Search Operations (3 tools)
- **`search_by_patient_id`** - Patient-focused search with study/series analysis
- **`search_by_study_uid`** - Study-focused search with primary image set filtering
- **`search_by_series_uid`** - Series-focused search across image sets
### Data Analysis Operations (11 tools)
- **`get_patient_studies`** - Get comprehensive study-level DICOM metadata for patients
- **`get_patient_series`** - Get all series UIDs for patient-level analysis
- **`get_study_primary_image_sets`** - Get primary image sets for studies (avoid duplicates)
- **`delete_patient_studies`** - Delete all studies for a patient (supports compliance with "right to be forgotten/right to erasure" GDPR objectives)
- **`delete_study`** - Delete entire studies by Study Instance UID
- **`delete_series_by_uid`** - Delete series using metadata updates
- **`get_series_primary_image_set`** - Get primary image set for series
- **`get_patient_dicomweb_studies`** - Get DICOMweb study-level information
- **`delete_instance_in_study`** - Delete specific instances in studies
- **`delete_instance_in_series`** - Delete specific instances in series
- **`update_patient_study_metadata`** - Update Patient/Study metadata for entire studies
### Bulk Operations (2 tools)
- **`bulk_update_patient_metadata`** - Update patient metadata across multiple studies with safety checks
- **`bulk_delete_by_criteria`** - Delete multiple image sets by search criteria with safety limits
### DICOM Hierarchy Operations (2 tools)
- **`remove_series_from_image_set`** - Remove specific series from image sets using DICOM hierarchy
- **`remove_instance_from_image_set`** - Remove specific instances from image sets using DICOM hierarchy
### MCP Resources
The server automatically exposes HealthImaging datastores as MCP resources, enabling:
- **Automatic discovery** of available datastores
- **No manual datastore ID entry** required
- **Status visibility** (ACTIVE, CREATING, etc.)
- **Metadata access** (creation date, endpoints, etc.)
## Usage Examples
### Basic Operations
List datastores (datastore discovered automatically)
```json
{
"status": "ACTIVE"
}
```
### Advanced Search
Search image sets with DICOM criteria
```json
{
"datastore_id": "discovered-from-resources",
"search_criteria": {
"filters": [
{
"values": [{"DICOMPatientId": "PATIENT123"}],
"operator": "EQUAL"
}
]
},
"max_results": 50
}
```
### DICOM Metadata
Get detailed DICOM metadata
```json
{
"datastore_id": "discovered-from-resources",
"image_set_id": "image-set-123",
"version_id": "1"
}
```
## Authentication
Configure AWS credentials using any of these methods:
1. **AWS CLI**: `aws configure`
2. **Environment variables**: `AWS_ACCESS_KEY_ID`, `AWS_SECRET_ACCESS_KEY`
3. **IAM roles** (for EC2/Lambda)
4. **AWS profiles**: Set `AWS_PROFILE` environment variable
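The following minimal boto3 sketch (illustrative only, not part of this package) resolves credentials in the same order the server relies on, which can help verify which identity is in effect before configuring the MCP client:
```python
import boto3

# boto3 automatically honors AWS_PROFILE, environment variables,
# shared config files, and IAM/instance roles, in its standard order.
session = boto3.Session()  # or boto3.Session(profile_name="your-profile")

# Print the caller identity to confirm which credentials are in effect.
identity = session.client("sts").get_caller_identity()
print(identity["Arn"])
```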
### Required Permissions
The server requires specific IAM permissions for HealthImaging operations. Here's a comprehensive policy:
```json
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Action": [
"medical-imaging:CreateDatastore",
"medical-imaging:DeleteDatastore",
"medical-imaging:GetDatastore",
"medical-imaging:ListDatastores",
"medical-imaging:StartDICOMImportJob",
"medical-imaging:GetDICOMImportJob",
"medical-imaging:ListDICOMImportJobs",
"medical-imaging:StartDICOMExportJob",
"medical-imaging:GetDICOMExportJob",
"medical-imaging:ListDICOMExportJobs",
"medical-imaging:SearchImageSets",
"medical-imaging:GetImageSet",
"medical-imaging:GetImageSetMetadata",
"medical-imaging:GetImageFrame",
"medical-imaging:ListImageSetVersions",
"medical-imaging:UpdateImageSetMetadata",
"medical-imaging:DeleteImageSet",
"medical-imaging:CopyImageSet",
"medical-imaging:ListTagsForResource",
"medical-imaging:TagResource",
"medical-imaging:UntagResource"
],
"Resource": "*"
},
{
"Effect": "Allow",
"Action": [
"s3:GetObject",
"s3:PutObject",
"s3:ListBucket"
],
"Resource": [
"arn:aws:s3:::your-dicom-bucket/*",
"arn:aws:s3:::your-dicom-bucket"
]
},
{
"Effect": "Allow",
"Action": [
"kms:Decrypt",
"kms:GenerateDataKey"
],
"Resource": "arn:aws:kms:*:*:key/*"
}
]
}
```
### Security Best Practices
- **Principle of Least Privilege**: Create custom policies tailored to your specific use case rather than using broad permissions
- **Minimal Permissions**: Start with minimal permissions and gradually add access as needed
- **MFA Requirements**: Consider requiring multi-factor authentication for sensitive operations
- **Regular Monitoring**: Monitor AWS CloudTrail logs to track actions performed by the MCP server
- **HIPAA Compliance**: Ensure your AWS account and HealthImaging setup meet HIPAA requirements for healthcare data
## Error Handling
All tools return structured error responses:
```json
{
"error": true,
"type": "validation_error",
"message": "Datastore ID must be 32 characters"
}
```
**Error Types:**
- `validation_error` - Invalid input parameters
- `not_found` - Resource or datastore not found
- `auth_error` - AWS credentials not configured
- `service_error` - AWS HealthImaging service error
- `server_error` - Internal server error
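If you build tooling around these responses, a small, hypothetical handler could branch on the `type` field, for example:
```python
def handle_tool_response(response: dict) -> None:
    """Illustrative only: route the structured errors described above."""
    if not response.get("error"):
        return  # successful tool call

    error_type = response.get("type", "server_error")
    message = response.get("message", "")

    if error_type == "validation_error":
        print(f"Fix the request parameters: {message}")
    elif error_type == "not_found":
        print(f"Check the datastore or image set ID: {message}")
    elif error_type == "auth_error":
        print(f"Configure AWS credentials: {message}")
    else:  # service_error or server_error
        print(f"Retry later or inspect server logs: {message}")
```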
## Troubleshooting
### Common Issues
**"AWS credentials not configured"**
- Run `aws configure` or set environment variables
- Verify `AWS_REGION` is set correctly
**"Resource not found"**
- Ensure datastore exists and is ACTIVE
- Check datastore ID is correct (32 characters)
- Verify you have access to the datastore
**"Validation error"**
- Check required parameters are provided
- Ensure datastore ID format is correct
- Verify count parameters are within 1-100 range
### Debug Mode
Set environment variable for detailed logging:
```bash
export PYTHONPATH=.
export AWS_LOG_LEVEL=DEBUG
awslabs.healthimaging-mcp-server
```
## Development
### Local Development Setup
#### Prerequisites
- Python 3.10 or higher
- Git
- AWS account with HealthImaging access
- Code editor (VS Code recommended)
#### Setup Instructions
**Option 1: Using uv (Recommended)**
```bash
git clone <repository-url>
cd healthimaging-mcp-server
uv sync --dev
source .venv/bin/activate # On Windows: .venv\Scripts\activate
```
**Option 2: Using pip/venv**
```bash
git clone <repository-url>
cd healthimaging-mcp-server
# Create virtual environment
python -m venv .venv
source .venv/bin/activate # On Windows: .venv\Scripts\activate
# Install dependencies
pip install -e ".[dev]"
```
### Running the Server Locally
```bash
# After activating your virtual environment
python -m awslabs.healthimaging_mcp_server.main
# Or using the installed script
awslabs.healthimaging-mcp-server
```
### Development Workflow
```bash
# Run tests
pytest tests/ -v
# Run tests with coverage
pytest tests/ -v --cov=awslabs/healthimaging_mcp_server --cov-report=html
# Format code
ruff format awslabs/ tests/
# Lint code
ruff check awslabs/ tests/
pyright awslabs/
# Run all checks
pre-commit run --all-files
```
### Project Structure
```
awslabs/healthimaging_mcp_server/
├── server.py # MCP server with tool handlers
├── healthimaging_operations.py # AWS HealthImaging client operations
├── models.py # Pydantic validation models
├── main.py # Entry point
└── __init__.py # Package initialization
```
## Contributing
1. Fork the repository
2. Create a feature branch: `git checkout -b feature-name`
3. Make changes and add tests
4. Run tests: `pytest tests/ -v`
5. Format code: `ruff format awslabs/ tests/`
6. Submit a pull request
## License
Licensed under the Apache License, Version 2.0. See LICENSE file for details.
## Disclaimer
This AWS HealthImaging MCP Server package is provided "as is" without warranty of any kind, express or implied, and is intended for development, testing, and evaluation purposes only. We do not provide any guarantee on the quality, performance, or reliability of this package.
Users of this package are solely responsible for implementing proper security controls and MUST use AWS Identity and Access Management (IAM) to manage access to AWS resources. You are responsible for configuring appropriate IAM policies, roles, and permissions, and any security vulnerabilities resulting from improper IAM configuration are your sole responsibility.
When working with medical imaging data, ensure compliance with applicable healthcare regulations such as HIPAA, and implement appropriate safeguards for protected health information (PHI). By using this package, you acknowledge that you have read and understood this disclaimer and agree to use the package at your own risk.
| text/markdown | Amazon Web Services | AWSLabs MCP <203918161+awslabs-mcp@users.noreply.github.com> | null | null | Apache-2.0 | null | [
"License :: OSI Approved :: Apache Software License",
"Operating System :: OS Independent",
"Programming Language :: Python",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programmin... | [] | null | null | >=3.10 | [] | [] | [] | [
"boto3>=1.34.0",
"botocore>=1.34.0",
"filelock>=3.20.3",
"httpx>=0.25.0",
"loguru>=0.7.0",
"mcp[cli]>=1.23.0",
"pydantic>=2.10.6",
"python-dateutil>=2.8.0",
"python-multipart>=0.0.22",
"urllib3>=2.6.3",
"pre-commit>=4.1.0; extra == \"dev\"",
"pyright>=1.1.408; extra == \"dev\"",
"pytest-asyn... | [] | [] | [] | [
"homepage, https://awslabs.github.io/mcp/",
"docs, https://awslabs.github.io/mcp/servers/healthimaging-mcp-server/",
"documentation, https://awslabs.github.io/mcp/servers/healthimaging-mcp-server/",
"repository, https://github.com/awslabs/mcp.git",
"changelog, https://github.com/awslabs/mcp/blob/main/src/he... | twine/6.1.0 CPython/3.13.7 | 2026-02-19T10:54:44.497972 | awslabs_healthimaging_mcp_server-0.0.2.tar.gz | 117,184 | b4/ea/97eb5a3839e6becc0b1efe0945ac02d195b711f6cceeda7791a58f2d3f45/awslabs_healthimaging_mcp_server-0.0.2.tar.gz | source | sdist | null | false | aa2a306c7513f27801ffc0ca8f4daa23 | 32d9f85bd2abb9d397bf4b19831e36cb8cd491c934ba079368741ac5007c2869 | b4ea97eb5a3839e6becc0b1efe0945ac02d195b711f6cceeda7791a58f2d3f45 | null | [
"LICENSE",
"NOTICE"
] | 0 |
2.4 | awslabs.aws-healthomics-mcp-server | 0.0.26 | An AWS Labs Model Context Protocol (MCP) server for AWS HealthOmics | # AWS HealthOmics MCP Server
A Model Context Protocol (MCP) server that provides AI assistants with comprehensive access to AWS HealthOmics services for genomic workflow management, execution, and analysis.
## Overview
AWS HealthOmics is a purpose-built service for storing, querying, and analyzing genomic, transcriptomic, and other omics data. This MCP server enables AI assistants to interact with HealthOmics workflows through natural language, making genomic data analysis more accessible and efficient.
## Key Capabilities
This MCP server provides tools for:
### 🧬 Workflow Management
- **Create and validate workflows**: Support for WDL, CWL, and Nextflow workflow languages
- **Lint workflow definitions**: Validate WDL and CWL workflows using industry-standard linting tools
- **Version management**: Create and manage workflow versions with different configurations
- **Package workflows**: Bundle workflow definitions into deployable packages
### 🚀 Workflow Execution
- **Start and monitor runs**: Execute workflows with custom parameters and monitor progress
- **Task management**: Track individual workflow tasks and their execution status
- **Resource configuration**: Configure compute resources, storage, and caching options
### 📊 Analysis and Troubleshooting
- **Performance analysis**: Analyze workflow execution performance and resource utilization
- **Failure diagnosis**: Comprehensive troubleshooting tools for failed workflow runs
- **Log access**: Retrieve detailed logs from runs, engines, tasks, and manifests
### 🔍 File Discovery and Search
- **Genomics file search**: Intelligent discovery of genomics files across S3 buckets, HealthOmics sequence stores, and reference stores
- **Pattern matching**: Advanced search with fuzzy matching against file paths and object tags
- **File associations**: Automatic detection and grouping of related files (BAM/BAI indexes, FASTQ pairs, FASTA indexes)
- **Relevance scoring**: Smart ranking of search results based on match quality and file relationships
### 🌍 Region Management
- **Multi-region support**: Get information about AWS regions where HealthOmics is available
## Available Tools
### Workflow Management Tools
1. **ListAHOWorkflows** - List available HealthOmics workflows with pagination support
2. **CreateAHOWorkflow** - Create new workflows with WDL, CWL, or Nextflow definitions from base64-encoded ZIP files or S3 URIs, with optional container registry mappings
3. **GetAHOWorkflow** - Retrieve detailed workflow information and export definitions
4. **CreateAHOWorkflowVersion** - Create new versions of existing workflows from base64-encoded ZIP files or S3 URIs, with optional container registry mappings
5. **ListAHOWorkflowVersions** - List all versions of a specific workflow
6. **LintAHOWorkflowDefinition** - Lint single WDL or CWL workflow files using miniwdl and cwltool
7. **LintAHOWorkflowBundle** - Lint multi-file WDL or CWL workflow bundles with import/dependency support
8. **PackageAHOWorkflow** - Package workflow files into base64-encoded ZIP format
### Workflow Execution Tools
1. **StartAHORun** - Start workflow runs with custom parameters and resource configuration
2. **ListAHORuns** - List workflow runs with filtering by status and date ranges
3. **GetAHORun** - Retrieve detailed run information including status and metadata
4. **ListAHORunTasks** - List tasks for specific runs with status filtering
5. **GetAHORunTask** - Get detailed information about specific workflow tasks
### Analysis and Troubleshooting Tools
1. **AnalyzeAHORunPerformance** - Analyze workflow run performance and resource utilization
2. **DiagnoseAHORunFailure** - Comprehensive diagnosis of failed workflow runs with remediation suggestions
3. **GetAHORunLogs** - Access high-level workflow execution logs and events
4. **GetAHORunEngineLogs** - Retrieve workflow engine logs (STDOUT/STDERR) for debugging
5. **GetAHORunManifestLogs** - Access run manifest logs with runtime information and metrics
6. **GetAHOTaskLogs** - Get task-specific logs for debugging individual workflow steps
### File Discovery Tools
1. **SearchGenomicsFiles** - Intelligent search for genomics files across S3 buckets, HealthOmics sequence stores, and reference stores with pattern matching, file association detection, and relevance scoring
### Region Management Tools
1. **GetAHOSupportedRegions** - List AWS regions where HealthOmics is available
## Instructions for AI Assistants
This MCP server enables AI assistants like Kiro, Cline, Cursor, and Windsurf to help users with AWS HealthOmics genomic workflow management. Here's how to effectively use these tools:
### Understanding AWS HealthOmics
AWS HealthOmics is designed for genomic data analysis workflows. Key concepts:
- **Workflows**: Computational pipelines written in WDL, CWL, or Nextflow that process genomic data
- **Runs**: Executions of workflows with specific input parameters and data
- **Tasks**: Individual steps within a workflow run
- **Storage Types**: STATIC (fixed storage) or DYNAMIC (auto-scaling storage)
### Workflow Management Best Practices
1. **Creating Workflows**:
- **From local files**: Use `PackageAHOWorkflow` to bundle workflow files, then use the base64-encoded ZIP with `CreateAHOWorkflow`
- **From S3**: Store your workflow definition ZIP file in S3 and reference it using the `definition_uri` parameter
- Validate workflows with appropriate language syntax (WDL, CWL, Nextflow)
- Include parameter templates to guide users on required inputs
- Choose the appropriate method based on your workflow storage preferences
2. **S3 URI Support**:
- Both `CreateAHOWorkflow` and `CreateAHOWorkflowVersion` support S3 URIs as an alternative to base64-encoded ZIP files
- **Benefits of S3 URIs**:
- Better for large workflow definitions (no base64 encoding overhead)
- Easier integration with CI/CD pipelines that store artifacts in S3
- Reduced memory usage during workflow creation
- Direct reference to existing S3-stored workflow definitions
- **Requirements**:
- S3 URI must start with `s3://`
- The S3 bucket must be in the same region as the HealthOmics service
- Appropriate S3 permissions must be configured for the HealthOmics service
- **Usage**: Specify either `definition_zip_base64` OR `definition_uri`, but not both
3. **Version Management**:
- Create new versions for workflow updates rather than modifying existing ones
- Use descriptive version names that indicate changes or improvements
- List versions to help users choose the appropriate one
- Both base64 ZIP and S3 URI methods are supported for version creation
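To make the base64-encoded ZIP format mentioned above concrete, here is a minimal sketch of how such a payload could be produced locally with the Python standard library (illustrative only; the `PackageAHOWorkflow` tool does this for you, and the file names are hypothetical):

```python
import base64
import io
import zipfile

def package_workflow(paths: list[str]) -> str:
    """Bundle workflow files into an in-memory ZIP and return it base64-encoded."""
    buffer = io.BytesIO()
    with zipfile.ZipFile(buffer, "w", zipfile.ZIP_DEFLATED) as archive:
        for path in paths:
            archive.write(path)
    return base64.b64encode(buffer.getvalue()).decode("ascii")

# Hypothetical files for a WDL workflow with one import.
encoded_zip = package_workflow(["main.wdl", "tasks/align.wdl"])
print(len(encoded_zip), "base64 characters")
```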
### Workflow Execution Guidance
1. **Starting Runs**:
- Always specify required parameters: workflow_id, role_arn, name, output_uri
- Choose appropriate storage type (DYNAMIC recommended for most cases)
- Use meaningful run names for easy identification
- Configure caching when appropriate to save costs and time
2. **Monitoring Runs**:
- Use `ListAHORuns` with status filters to track active workflows
- Check individual run details with `GetAHORun` for comprehensive status
- Monitor tasks with `ListAHORunTasks` to identify bottlenecks
### Troubleshooting Failed Runs
When workflows fail, follow this diagnostic approach:
1. **Start with DiagnoseAHORunFailure**: This comprehensive tool provides:
- Failure reasons and error analysis
- Failed task identification
- Log summaries and recommendations
- Actionable troubleshooting steps
2. **Access Specific Logs**:
- **Run Logs**: High-level workflow events and status changes
- **Engine Logs**: Workflow engine STDOUT/STDERR for system-level issues
- **Task Logs**: Individual task execution details for specific failures
- **Manifest Logs**: Resource utilization and workflow summary information
3. **Performance Analysis**:
- Use `AnalyzeAHORunPerformance` to identify resource bottlenecks
- Review task resource utilization patterns
- Optimize workflow parameters based on analysis results
### Workflow Linting and Validation
The MCP server includes built-in workflow linting capabilities for validating WDL and CWL workflows before deployment:
1. **Lint Workflow Definitions**:
- **Single files**: Use `LintAHOWorkflowDefinition` for individual workflow files
- **Multi-file bundles**: Use `LintAHOWorkflowBundle` for workflows with imports and dependencies
- **Syntax errors**: Catch parsing issues before deployment
- **Missing components**: Identify missing inputs, outputs, or steps
- **Runtime requirements**: Ensure tasks have proper runtime specifications
- **Import resolution**: Validate imports and dependencies between files
- **Best practices**: Get warnings about potential improvements
2. **Supported Formats**:
- **WDL**: Uses miniwdl for comprehensive validation
- **CWL**: Uses cwltool for standards-compliant validation
3. **No Additional Installation Required**:
Both miniwdl and cwltool are included as dependencies and available immediately after installing the MCP server.
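   Because these linters are ordinary command-line tools, you can also reproduce a lint result outside the MCP server. A rough sketch, assuming the `miniwdl` and `cwltool` CLIs are available on your PATH and using a hypothetical file name:

```python
import subprocess

def lint_locally(path: str) -> int:
    """Run the underlying linters directly; returns the linter's exit code."""
    if path.endswith(".wdl"):
        cmd = ["miniwdl", "check", path]       # WDL validation via miniwdl
    elif path.endswith(".cwl"):
        cmd = ["cwltool", "--validate", path]  # CWL validation via cwltool
    else:
        raise ValueError(f"Unsupported workflow file: {path}")
    result = subprocess.run(cmd, capture_output=True, text=True)
    print(result.stdout or result.stderr)
    return result.returncode

lint_locally("main.wdl")  # hypothetical workflow file
```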
### Genomics File Discovery
The MCP server includes a powerful genomics file search tool that helps users locate and discover genomics files across multiple storage systems:
1. **Multi-Storage Search**:
- **S3 Buckets**: Search configured S3 bucket paths for genomics files
- **HealthOmics Sequence Stores**: Discover read sets and their associated files
- **HealthOmics Reference Stores**: Find reference genomes and associated indexes
- **Unified Results**: Get combined, deduplicated results from all storage systems
2. **Intelligent Pattern Matching**:
- **File Path Matching**: Search against S3 object keys and HealthOmics resource names
- **Tag-Based Search**: Match against S3 object tags and HealthOmics metadata
- **Fuzzy Matching**: Find files even with partial or approximate search terms
- **Multiple Terms**: Support for multiple search terms with logical matching
3. **Automatic File Association**:
- **BAM/CRAM Indexes**: Automatically group BAM files with their .bai indexes and CRAM files with .crai indexes
- **FASTQ Pairs**: Detect and group R1/R2 read pairs using standard naming conventions (_R1/_R2, _1/_2)
- **FASTA Indexes**: Associate FASTA files with their .fai, .dict, and BWA index collections
- **Variant Indexes**: Group VCF/GVCF files with their .tbi and .csi index files
- **Complete File Sets**: Identify complete genomics file collections for analysis pipelines
4. **Smart Relevance Scoring**:
- **Pattern Match Quality**: Higher scores for exact matches, lower for fuzzy matches
- **File Type Relevance**: Boost scores for files matching the requested type
- **Associated Files Bonus**: Increase scores for files with complete index sets
- **Storage Accessibility**: Consider storage class (Standard vs. Glacier) in scoring
5. **Comprehensive File Metadata**:
- **Access Paths**: S3 URIs or HealthOmics S3 access point paths for direct data access
- **File Characteristics**: Size, storage class, last modified date, and file type detection
- **Storage Information**: Archive status and retrieval requirements
- **Source System**: Clear indication of whether files are from S3, sequence stores, or reference stores
6. **Configuration and Setup**:
- **S3 Bucket Configuration**: Set `GENOMICS_SEARCH_S3_BUCKETS` environment variable with comma-separated bucket paths
- **Example**: `GENOMICS_SEARCH_S3_BUCKETS=s3://my-genomics-data/,s3://shared-references/hg38/`
- **Permissions**: Ensure appropriate S3 and HealthOmics read permissions
- **Performance**: Parallel searches across storage systems for optimal response times
7. **Performance Optimizations**:
- **Smart S3 API Usage**: Optimized to minimize S3 API calls by 60-90% through intelligent caching and batching
- **Lazy Tag Loading**: Only retrieves S3 object tags when needed for pattern matching
- **Result Caching**: Caches search results to eliminate repeated S3 calls for identical searches
- **Batch Operations**: Retrieves tags for multiple objects in parallel batches
- **Configurable Performance**: Tune cache TTLs, batch sizes, and tag search behavior for your use case
- **Path-First Matching**: Prioritizes file path matching over tag matching to reduce API calls
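As a rough illustration of the R1/R2 naming convention used for FASTQ pair detection above, a minimal, hypothetical pairing helper might look like the following (the server's actual matching and scoring are more robust):

```python
import re

def pair_fastqs(keys: list[str]) -> dict[str, dict[str, str]]:
    """Group FASTQ object keys into R1/R2 pairs using the _R1/_R2 or _1/_2 convention."""
    pairs: dict[str, dict[str, str]] = {}
    pattern = re.compile(r"^(?P<sample>.+?)_(?:R)?(?P<read>[12])\.(fastq|fq)(\.gz)?$")
    for key in keys:
        match = pattern.match(key.rsplit("/", 1)[-1])  # match on the file name only
        if match:
            pairs.setdefault(match["sample"], {})[f"R{match['read']}"] = key
    return pairs

# Hypothetical S3 keys for one sample.
print(pair_fastqs([
    "runs/NA12878_R1.fastq.gz",
    "runs/NA12878_R2.fastq.gz",
]))
```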
### File Search Usage Examples
1. **Find FASTQ Files for a Sample**:
```
User: "Find all FASTQ files for sample NA12878"
→ Use SearchGenomicsFiles with file_type="fastq" and search_terms=["NA12878"]
→ Returns R1/R2 pairs automatically grouped together
→ Includes file sizes and storage locations
```
2. **Locate Reference Genomes**:
```
User: "Find human reference genome hg38 files"
→ Use SearchGenomicsFiles with file_type="fasta" and search_terms=["hg38", "human"]
→ Returns FASTA files with associated .fai, .dict, and BWA indexes
→ Provides S3 access point paths for HealthOmics reference stores
```
3. **Search for Alignment Files**:
```
User: "Find BAM files from the 1000 Genomes project"
→ Use SearchGenomicsFiles with file_type="bam" and search_terms=["1000", "genomes"]
→ Returns BAM files with their .bai index files
→ Ranked by relevance with complete file metadata
```
4. **Discover Variant Files**:
```
User: "Locate VCF files containing SNP data"
→ Use SearchGenomicsFiles with file_type="vcf" and search_terms=["SNP"]
→ Returns VCF files with associated .tbi index files
→ Includes both S3 and HealthOmics store results
```
### Performance Tuning for File Search
The genomics file search includes several optimizations to minimize S3 API calls and improve performance:
1. **For Path-Based Searches** (Recommended):
```bash
# Use specific file/sample names in search terms
# This enables path matching without tag retrieval
GENOMICS_SEARCH_ENABLE_S3_TAG_SEARCH=true # Keep enabled for fallback
GENOMICS_SEARCH_RESULT_CACHE_TTL=600 # Cache results for 10 minutes
```
2. **For Tag-Heavy Environments**:
```bash
# Optimize batch sizes for your dataset
GENOMICS_SEARCH_MAX_TAG_BATCH_SIZE=200 # Larger batches for better performance
GENOMICS_SEARCH_TAG_CACHE_TTL=900 # Longer tag cache for frequently accessed objects
```
3. **For Cost-Sensitive Environments**:
```bash
# Disable tag search if only path matching is needed
GENOMICS_SEARCH_ENABLE_S3_TAG_SEARCH=false # Eliminates all tag API calls
GENOMICS_SEARCH_RESULT_CACHE_TTL=1800 # Longer result cache to reduce repeated searches
```
4. **For Development/Testing**:
```bash
# Disable caching for immediate results during development
GENOMICS_SEARCH_RESULT_CACHE_TTL=0 # No result caching
GENOMICS_SEARCH_TAG_CACHE_TTL=0 # No tag caching
GENOMICS_SEARCH_MAX_TAG_BATCH_SIZE=50 # Smaller batches for testing
```
**Performance Impact**: These optimizations can reduce S3 API calls by 60-90% and improve search response times by 5-10x compared to the unoptimized implementation.
### Common Use Cases
1. **Workflow Development**:
```
User: "Help me create a new genomic variant calling workflow"
→ Option A: Use PackageAHOWorkflow to bundle files, then CreateAHOWorkflow with base64 ZIP
→ Option B: Upload workflow ZIP to S3, then CreateAHOWorkflow with S3 URI
→ Validate syntax and parameters
→ Choose method based on workflow size and storage preferences
```
2. **Production Execution**:
```
User: "Run my alignment workflow on these FASTQ files"
→ Use SearchGenomicsFiles to find FASTQ files for the run
→ Use StartAHORun with appropriate parameters
→ Monitor with ListAHORuns and GetAHORun
→ Track task progress with ListAHORunTasks
```
3. **Troubleshooting**:
```
User: "My workflow failed, what went wrong?"
→ Use DiagnoseAHORunFailure for comprehensive analysis
→ Access specific logs based on failure type
→ Provide actionable remediation steps
```
4. **Performance Optimization**:
```
User: "How can I make my workflow run faster?"
→ Use AnalyzeAHORunPerformance to identify bottlenecks
→ Review resource utilization patterns
→ Suggest optimization strategies
```
5. **Workflow Validation**:
```
User: "Check if my WDL workflow is valid"
→ Use LintAHOWorkflowDefinition for single files
→ Use LintAHOWorkflowBundle for multi-file workflows with imports
→ Check for missing inputs, outputs, or runtime requirements
→ Validate import resolution and dependencies
→ Get detailed error messages and warnings
```
### Important Considerations
- **IAM Permissions**: Ensure proper IAM roles with HealthOmics permissions
- **Regional Availability**: Use `GetAHOSupportedRegions` to verify service availability
- **Cost Management**: Monitor storage and compute costs, especially with STATIC storage
- **Data Security**: Follow genomic data handling best practices and compliance requirements
- **Resource Limits**: Be aware of service quotas and limits for concurrent runs
### Error Handling
When tools return errors:
- Check AWS credentials and permissions
- Verify resource IDs (workflow_id, run_id, task_id) are valid
- Ensure proper parameter formatting and required fields
- Use diagnostic tools to understand failure root causes
- Provide clear, actionable error messages to users
## Installation
| Kiro | Cursor | VS Code |
|:----:|:------:|:-------:|
| [](https://kiro.dev/launch/mcp/add?name=awslabs.aws-healthomics-mcp-server&config=%7B%22command%22%3A%22uvx%22%2C%22args%22%3A%5B%22awslabs.aws-healthomics-mcp-server%40latest%22%5D%2C%22env%22%3A%7B%22AWS_REGION%22%3A%22us-east-1%22%2C%22AWS_PROFILE%22%3A%22your-profile%22%2C%22FASTMCP_LOG_LEVEL%22%3A%22WARNING%22%7D%7D) | [](https://cursor.com/en/install-mcp?name=awslabs.aws-healthomics-mcp-server&config=eyJjb21tYW5kIjoidXZ4IGF3c2xhYnMuYXdzLWhlYWx0aG9taWNzLW1jcC1zZXJ2ZXJAbGF0ZXN0IiwiZW52Ijp7IkFXU19SRUdJT04iOiJ1cy1lYXN0LTEiLCJBV1NfUFJPRklMRSI6InlvdXItcHJvZmlsZSIsIkZBU1RNQ1BfTE9HX0xFVkVMIjoiV0FSTklORyJ9fQ%3D%3D) | [](https://insiders.vscode.dev/redirect/mcp/install?name=AWS%20HealthOmics%20MCP%20Server&config=%7B%22command%22%3A%22uvx%22%2C%22args%22%3A%5B%22awslabs.aws-healthomics-mcp-server%40latest%22%5D%2C%22env%22%3A%7B%22AWS_REGION%22%3A%22us-east-1%22%2C%22AWS_PROFILE%22%3A%22your-profile%22%2C%22FASTMCP_LOG_LEVEL%22%3A%22WARNING%22%7D%7D) |
Install using uvx:
```bash
uvx awslabs.aws-healthomics-mcp-server
```
Or install from source:
```bash
git clone <repository-url>
cd mcp/src/aws-healthomics-mcp-server
uv sync
uv run -m awslabs.aws_healthomics_mcp_server.server
```
## Configuration
### Environment Variables
#### Core Configuration
- `AWS_REGION` - AWS region for HealthOmics operations (default: us-east-1)
- `AWS_PROFILE` - AWS profile for authentication
- `FASTMCP_LOG_LEVEL` - Server logging level (default: WARNING)
- `HEALTHOMICS_DEFAULT_MAX_RESULTS` - Default maximum number of results for paginated API calls (default: 10)
#### Genomics File Search Configuration
- `GENOMICS_SEARCH_S3_BUCKETS` - Comma-separated list of S3 bucket paths to search for genomics files (e.g., "s3://my-genomics-data/,s3://shared-references/")
- `GENOMICS_SEARCH_ENABLE_S3_TAG_SEARCH` - Enable/disable S3 tag-based searching (default: true)
- Set to `false` to disable tag retrieval and only use path-based matching
- Significantly reduces S3 API calls when tag matching is not needed
- `GENOMICS_SEARCH_MAX_TAG_BATCH_SIZE` - Maximum objects to retrieve tags for in a single batch (default: 100)
- Larger values improve performance for tag-heavy searches but use more memory
- Smaller values reduce memory usage but may increase API call latency
- `GENOMICS_SEARCH_RESULT_CACHE_TTL` - Result cache TTL in seconds (default: 600)
- Set to `0` to disable result caching
- Caches complete search results to eliminate repeated S3 calls for identical searches
- `GENOMICS_SEARCH_TAG_CACHE_TTL` - Tag cache TTL in seconds (default: 300)
- Set to `0` to disable tag caching
- Caches individual object tags to avoid duplicate retrievals across searches
- `GENOMICS_SEARCH_MAX_CONCURRENT` - Maximum concurrent S3 bucket searches (default: 10)
- `GENOMICS_SEARCH_TIMEOUT_SECONDS` - Search timeout in seconds (default: 300)
- `GENOMICS_SEARCH_ENABLE_HEALTHOMICS` - Enable/disable HealthOmics sequence/reference store searches (default: true)
> **Note for Large S3 Buckets**: When searching very large S3 buckets (millions of objects), the genomics file search may take longer than the default MCP client timeout. If you encounter timeout errors, increase the MCP server timeout by adding a `"timeout"` property to your MCP server configuration (e.g., `"timeout": 300000` for five minutes, specified in milliseconds). This is particularly important when using the search tool with extensive S3 bucket configurations or when `GENOMICS_SEARCH_ENABLE_S3_TAG_SEARCH=true` is used with large datasets. The `"timeout"` duration (milliseconds) should always be longer than `GENOMICS_SEARCH_TIMEOUT_SECONDS` (seconds) if you want to prevent the MCP timeout from preempting the genomics search timeout.
#### Agent Identification
- `AGENT` - Agent identifier appended to the User-Agent string on all boto3 API calls as `agent/<value>` (optional)
- **Use case**: Attributing API calls to specific AI agents for traceability via CloudTrail and AWS service logs
- **Behavior**: When set, the value is sanitized to visible ASCII characters (0x20-0x7E), stripped of leading/trailing whitespace, lowercased, and appended to the User-Agent header as `agent/<value>`
- **Validation**: Empty, whitespace-only, or values that become empty after sanitization are treated as unset
- **Example**: `export AGENT=KIRO` produces `User-Agent: ... agent/kiro`
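The sanitization described above can be pictured with a short sketch (illustrative only, not the package's actual implementation):
```python
def sanitize_agent(value: str | None) -> str | None:
    """Mirror the documented behavior: keep visible ASCII (0x20-0x7E), strip, lowercase."""
    if value is None:
        return None
    cleaned = "".join(ch for ch in value if 0x20 <= ord(ch) <= 0x7E).strip().lower()
    return cleaned or None  # empty after sanitization is treated as unset

agent = sanitize_agent("  KIRO  ")
if agent:
    user_agent_suffix = f"agent/{agent}"  # appended to the boto3 User-Agent string
    print(user_agent_suffix)              # -> agent/kiro
```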
#### Testing Configuration Variables
The following environment variables are primarily intended for testing scenarios, such as integration testing against mock service endpoints:
- `HEALTHOMICS_SERVICE_NAME` - Override the AWS service name used by the HealthOmics client (default: omics)
- **Use case**: Testing against mock services or alternative implementations
- **Validation**: Cannot be empty or whitespace-only; falls back to default with warning if invalid
- **Example**: `export HEALTHOMICS_SERVICE_NAME=omics-mock`
- `HEALTHOMICS_ENDPOINT_URL` - Override the endpoint URL used by the HealthOmics client
- **Use case**: Integration testing against local mock services or alternative endpoints
- **Validation**: Must begin with `http://` or `https://`; ignored with warning if invalid
- **Example**: `export HEALTHOMICS_ENDPOINT_URL=http://localhost:8080`
- **Note**: Only affects the HealthOmics client; other AWS services use default endpoints
> **Important**: These testing configuration variables should only be used in development and testing environments. In production, always use the default AWS HealthOmics service endpoints for security and reliability.
### AWS Credentials
This server requires AWS credentials with appropriate permissions for HealthOmics operations. Configure using:
1. AWS CLI: `aws configure`
2. Environment variables: `AWS_ACCESS_KEY_ID`, `AWS_SECRET_ACCESS_KEY`
3. IAM roles (recommended for EC2/Lambda)
4. AWS profiles: Set `AWS_PROFILE` environment variable
### Required IAM Permissions
The following IAM permissions are required:
```json
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Action": [
"omics:ListWorkflows",
"omics:CreateWorkflow",
"omics:GetWorkflow",
"omics:CreateWorkflowVersion",
"omics:ListWorkflowVersions",
"omics:StartRun",
"omics:ListRuns",
"omics:GetRun",
"omics:ListRunTasks",
"omics:GetRunTask",
"omics:ListSequenceStores",
"omics:ListReadSets",
"omics:GetReadSetMetadata",
"omics:ListReferenceStores",
"omics:ListReferences",
"omics:GetReferenceMetadata",
"logs:DescribeLogGroups",
"logs:DescribeLogStreams",
"logs:GetLogEvents"
],
"Resource": "*"
},
{
"Effect": "Allow",
"Action": [
"s3:ListBucket",
"s3:GetObject",
"s3:GetObjectTagging"
],
"Resource": [
"arn:aws:s3:::*genomics*",
"arn:aws:s3:::*genomics*/*",
"arn:aws:s3:::*omics*",
"arn:aws:s3:::*omics*/*"
]
},
{
"Effect": "Allow",
"Action": [
"iam:PassRole"
],
"Resource": "arn:aws:iam::*:role/HealthOmicsExecutionRole*"
}
]
}
```
**Note**: The S3 permissions above use wildcard patterns for genomics-related buckets. In production, replace these with specific bucket ARNs that you want to search. For example:
```json
{
"Effect": "Allow",
"Action": [
"s3:ListBucket",
"s3:GetObject",
"s3:GetObjectTagging"
],
"Resource": [
"arn:aws:s3:::my-genomics-data",
"arn:aws:s3:::my-genomics-data/*",
"arn:aws:s3:::shared-references",
"arn:aws:s3:::shared-references/*"
]
}
```
## Usage with MCP Clients
### Kiro
See the [Kiro IDE documentation](https://kiro.dev/docs/mcp/configuration/) or the [Kiro CLI documentation](https://kiro.dev/docs/cli/mcp/configuration/) for details.
For global configuration, edit `~/.kiro/settings/mcp.json`. For project-specific configuration, edit `.kiro/settings/mcp.json` in your project directory.
Add to your Kiro MCP configuration (`~/.kiro/settings/mcp.json`):
```json
{
"mcpServers": {
"aws-healthomics": {
"command": "uvx",
"args": ["awslabs.aws-healthomics-mcp-server"],
"timeout": 300000,
"env": {
"AWS_REGION": "us-east-1",
"AWS_PROFILE": "your-profile",
"HEALTHOMICS_DEFAULT_MAX_RESULTS": "10",
"AGENT": "kiro",
"GENOMICS_SEARCH_S3_BUCKETS": "s3://my-genomics-data/,s3://shared-references/",
"GENOMICS_SEARCH_ENABLE_S3_TAG_SEARCH": "true",
"GENOMICS_SEARCH_MAX_TAG_BATCH_SIZE": "100",
"GENOMICS_SEARCH_RESULT_CACHE_TTL": "600",
"GENOMICS_SEARCH_TAG_CACHE_TTL": "300"
}
}
}
}
```
#### Testing Configuration Example
For integration testing against mock services:
```json
{
"mcpServers": {
"aws-healthomics-test": {
"command": "uvx",
"args": ["awslabs.aws-healthomics-mcp-server"],
"timeout": 300000,
"env": {
"AWS_REGION": "us-east-1",
"AWS_PROFILE": "test-profile",
"HEALTHOMICS_SERVICE_NAME": "omics-mock",
"HEALTHOMICS_ENDPOINT_URL": "http://localhost:8080",
"GENOMICS_SEARCH_S3_BUCKETS": "s3://test-genomics-data/",
"GENOMICS_SEARCH_ENABLE_S3_TAG_SEARCH": "false",
"GENOMICS_SEARCH_RESULT_CACHE_TTL": "0",
"FASTMCP_LOG_LEVEL": "DEBUG"
}
}
}
}
```
### Other MCP Clients
Configure according to your client's documentation, using:
- Command: `uvx`
- Args: `["awslabs.aws-healthomics-mcp-server"]`
- Environment variables as needed
### Windows Installation
For Windows users, the MCP server configuration format is slightly different:
```json
{
"mcpServers": {
"awslabs.aws-healthomics-mcp-server": {
"disabled": false,
"timeout": 300000,
"type": "stdio",
"command": "uv",
"args": [
"tool",
"run",
"--from",
"awslabs.aws-healthomics-mcp-server@latest",
"awslabs.aws-healthomics-mcp-server.exe"
],
"env": {
"FASTMCP_LOG_LEVEL": "ERROR",
"AWS_PROFILE": "your-aws-profile",
"AWS_REGION": "us-east-1",
"GENOMICS_SEARCH_S3_BUCKETS": "s3://my-genomics-data/,s3://shared-references/",
"GENOMICS_SEARCH_ENABLE_S3_TAG_SEARCH": "true",
"GENOMICS_SEARCH_MAX_TAG_BATCH_SIZE": "100",
"GENOMICS_SEARCH_RESULT_CACHE_TTL": "600",
"GENOMICS_SEARCH_TAG_CACHE_TTL": "300"
}
}
}
}
```
#### Windows Testing Configuration
For testing scenarios on Windows:
```json
{
"mcpServers": {
"awslabs.aws-healthomics-mcp-server-test": {
"disabled": false,
"timeout": 300000,
"type": "stdio",
"command": "uv",
"args": [
"tool",
"run",
"--from",
"awslabs.aws-healthomics-mcp-server@latest",
"awslabs.aws-healthomics-mcp-server.exe"
],
"env": {
"FASTMCP_LOG_LEVEL": "DEBUG",
"AWS_PROFILE": "test-profile",
"AWS_REGION": "us-east-1",
"HEALTHOMICS_SERVICE_NAME": "omics-mock",
"HEALTHOMICS_ENDPOINT_URL": "http://localhost:8080",
"GENOMICS_SEARCH_S3_BUCKETS": "s3://test-genomics-data/",
"GENOMICS_SEARCH_ENABLE_S3_TAG_SEARCH": "false",
"GENOMICS_SEARCH_RESULT_CACHE_TTL": "0"
}
}
}
}
```
## Development
### Setup
```bash
git clone <repository-url>
cd aws-healthomics-mcp-server
uv sync
```
### Testing
```bash
# Run tests with coverage
uv run pytest --cov --cov-branch --cov-report=term-missing
# Run specific test file
uv run pytest tests/test_server.py -v
```
### Code Quality
```bash
# Format code
uv run ruff format
# Lint code
uv run ruff check
# Type checking
uv run pyright
```
## Contributing
Contributions are welcome! Please see the [contributing guidelines](https://github.com/awslabs/mcp/blob/main/CONTRIBUTING.md) for more information.
## License
This project is licensed under the Apache-2.0 License. See the [LICENSE](https://github.com/awslabs/mcp/blob/main/LICENSE) file for details.
| text/markdown | Amazon Web Services | AWSLabs MCP <203918161+awslabs-mcp@users.noreply.github.com>, Your Name <githubusername@users.noreply.github.com> | null | null | Apache-2.0 | null | [
"License :: OSI Approved :: Apache Software License",
"Operating System :: OS Independent",
"Programming Language :: Python",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programmin... | [] | null | null | >=3.10 | [] | [] | [] | [
"boto3>=1.40.23",
"coloredlogs>=15.0",
"cwltool[deps]>=3.1.0",
"isodate>=0.6.0",
"loguru>=0.7.0",
"mcp[cli]>=1.23.0",
"miniwdl>=1.12.0",
"nest-asyncio>=1.5.0",
"polars>=1.0.0",
"pydantic>=2.10.6",
"python-multipart>=0.0.22",
"ruamel-yaml>=0.18.0"
] | [] | [] | [] | [
"homepage, https://awslabs.github.io/mcp/",
"docs, https://awslabs.github.io/mcp/servers/aws-healthomics-mcp-server/",
"documentation, https://awslabs.github.io/mcp/servers/aws-healthomics-mcp-server/",
"repository, https://github.com/awslabs/mcp.git",
"changelog, https://github.com/awslabs/mcp/blob/main/sr... | twine/6.1.0 CPython/3.13.7 | 2026-02-19T10:54:43.192080 | awslabs_aws_healthomics_mcp_server-0.0.26.tar.gz | 534,095 | fe/dc/6abb08ccd6872bdeb6d02eca0bfaf3c422ab8c655097b3fc9a5a9decd255/awslabs_aws_healthomics_mcp_server-0.0.26.tar.gz | source | sdist | null | false | 82ce7fb86449ca9770bab58dd4a2e0bc | 64141ac1b5f91699c5a00afe2433016565ba61a4b05f5256db3885fea5525323 | fedc6abb08ccd6872bdeb6d02eca0bfaf3c422ab8c655097b3fc9a5a9decd255 | null | [
"LICENSE",
"NOTICE"
] | 0 |
2.4 | awslabs.aurora-dsql-mcp-server | 1.0.20 | An AWS Labs Model Context Protocol (MCP) server for Aurora DSQL | # AWS Labs Aurora DSQL MCP Server
An AWS Labs Model Context Protocol (MCP) server for Aurora DSQL
and corresponding AI rules that can be used for additional model
steering while developing.
## Features
- Converting human-readable questions and commands into structured Postgres-compatible SQL queries and executing them against the configured Aurora DSQL database.
- Read-only by default; write operations enabled with `--allow-writes`
- Connection reuse between requests for improved performance
- Built-in access to Aurora DSQL documentation, search, and best practice recommendations
## Available Tools
### Database Operations
> [!IMPORTANT]
> The MCP Server requires a valid configuration for `--cluster_endpoint`, `--database_user`, and `--region` to enable database operations.
- **readonly_query** - Execute read-only SQL queries against your DSQL cluster
- **transact** - Execute SQL statements in a transaction
- In read-only mode: Supports read operations with transactional consistency
- With `--allow-writes`: Supports all write operations too
- **get_schema** - Retrieve table schema information
### Documentation and Recommendations
- **dsql_search_documentation** - Search Aurora DSQL documentation
- Parameters: `search_phrase` (required), `limit` (optional)
- **dsql_read_documentation** - Read specific DSQL documentation pages
- Parameters: `url` (required), `start_index` (optional), `max_length` (optional)
- **dsql_recommend** - Get recommendations for DSQL best practices
- Parameters: `url` (required)
## Prerequisites
1. An AWS account with an [Aurora DSQL Cluster](https://docs.aws.amazon.com/aurora-dsql/latest/userguide/getting-started.html)
1. This MCP server can only be run locally on the same host as your LLM client.
1. Set up AWS credentials with access to AWS services
- You need an AWS account with appropriate permissions
- Configure AWS credentials with `aws configure` or environment variables
## Installation
| Kiro | Cursor | VS Code |
|:----:|:------:|:-------:|
| [](https://kiro.dev/launch/mcp/add?name=awslabs.aurora-dsql-mcp-server&config=%7B%22command%22%3A%22uvx%22%2C%22args%22%3A%5B%22awslabs.aurora-dsql-mcp-server%40latest%22%2C%22--cluster_endpoint%22%2C%22%5Byour%20dsql%20cluster%20endpoint%5D%22%2C%22--region%22%2C%22%5Byour%20dsql%20cluster%20region%2C%20e.g.%20us-east-1%5D%22%2C%22--database_user%22%2C%22%5Byour%20dsql%20username%5D%22%2C%22--profile%22%2C%22default%22%5D%2C%22env%22%3A%7B%22FASTMCP_LOG_LEVEL%22%3A%22ERROR%22%7D%7D) | [](https://cursor.com/en/install-mcp?name=awslabs.aurora-dsql-mcp-server&config=eyJjb21tYW5kIjoidXZ4IGF3c2xhYnMuYXVyb3JhLWRzcWwtbWNwLXNlcnZlckBsYXRlc3QgLS1jbHVzdGVyX2VuZHBvaW50IFt5b3VyIGRzcWwgY2x1c3RlciBlbmRwb2ludF0gLS1yZWdpb24gW3lvdXIgZHNxbCBjbHVzdGVyIHJlZ2lvbiwgZS5nLiB1cy1lYXN0LTFdIC0tZGF0YWJhc2VfdXNlciBbeW91ciBkc3FsIHVzZXJuYW1lXSAtLXByb2ZpbGUgZGVmYXVsdCIsImVudiI6eyJGQVNUTUNQX0xPR19MRVZFTCI6IkVSUk9SIn0sImRpc2FibGVkIjpmYWxzZSwiYXV0b0FwcHJvdmUiOltdfQ%3D%3D) | [](https://insiders.vscode.dev/redirect/mcp/install?name=Aurora%20DSQL%20MCP%20Server&config=%7B%22command%22%3A%22uvx%22%2C%22args%22%3A%5B%22awslabs.aurora-dsql-mcp-server%40latest%22%2C%22--cluster_endpoint%22%2C%22%5Byour%20dsql%20cluster%20endpoint%5D%22%2C%22--region%22%2C%22%5Byour%20dsql%20cluster%20region%2C%20e.g.%20us-east-1%5D%22%2C%22--database_user%22%2C%22%5Byour%20dsql%20username%5D%22%2C%22--profile%22%2C%22default%22%5D%2C%22env%22%3A%7B%22FASTMCP_LOG_LEVEL%22%3A%22ERROR%22%7D%2C%22disabled%22%3Afalse%2C%22autoApprove%22%3A%5B%5D%7D) |
### Using `uv`
1. Install `uv` from [Astral](https://docs.astral.sh/uv/getting-started/installation/) or the [GitHub README](https://github.com/astral-sh/uv#installation)
2. Install Python using `uv python install 3.10`
Configure the MCP server in your MCP client configuration (e.g., for Kiro, edit `~/.kiro/settings/mcp.json`):
```json
{
"mcpServers": {
"awslabs.aurora-dsql-mcp-server": {
"command": "uvx",
"args": [
"awslabs.aurora-dsql-mcp-server@latest",
"--cluster_endpoint",
"[your dsql cluster endpoint]",
"--region",
"[your dsql cluster region, e.g. us-east-1]",
"--database_user",
"[your dsql username]",
"--profile",
"default"
],
"env": {
"FASTMCP_LOG_LEVEL": "ERROR"
},
"disabled": false,
"autoApprove": []
}
}
}
```
### Windows Installation
For Windows users, the MCP server configuration format is slightly different:
```json
{
"mcpServers": {
"awslabs.aurora-dsql-mcp-server": {
"disabled": false,
"timeout": 60,
"type": "stdio",
"command": "uv",
"args": [
"tool",
"run",
"--from",
"awslabs.aurora-dsql-mcp-server@latest",
"awslabs.aurora-dsql-mcp-server.exe"
],
"env": {
"FASTMCP_LOG_LEVEL": "ERROR",
"AWS_PROFILE": "your-aws-profile",
"AWS_REGION": "us-east-1"
}
}
}
}
```
### Using Docker
1. Clone the repository: `git clone https://github.com/awslabs/mcp.git`
2. Go to the sub-directory `src/aurora-dsql-mcp-server/`
3. Run `docker build -t awslabs/aurora-dsql-mcp-server:latest .`
4. Create a `.env` file with temporary credentials:
Either manually:
```file
# fictitious `.env` file with AWS temporary credentials
AWS_ACCESS_KEY_ID=<from the profile you set up>
AWS_SECRET_ACCESS_KEY=<from the profile you set up>
AWS_SESSION_TOKEN=<from the profile you set up>
```
Or using `aws configure`:
```bash
aws configure export-credentials --profile your-profile-name --format env | sed 's/^export //' > .env
```
```json
{
"mcpServers": {
"awslabs.aurora-dsql-mcp-server": {
"command": "docker",
"args": [
"run",
"-i",
"--rm",
"--env-file",
"/full/path/to/file/above/.env",
"awslabs/aurora-dsql-mcp-server:latest",
"--cluster_endpoint",
"[your data]",
"--database_user",
"[your data]",
"--region",
"[your data]"
]
}
}
}
```
## Server Configuration options
### `--allow-writes`
By default, the DSQL MCP server operates in read-only mode. In this mode:
- **readonly_query**: Executes single read-only queries
- **transact**: Executes read-only transactions with point-in-time consistency
- Useful for multiple queries that need to see data at the same point in time
- All statements are validated to ensure they are read-only operations
- Write operations (INSERT, UPDATE, DELETE, CREATE, DROP, ALTER, etc.) are rejected
To enable write operations, pass the `--allow-writes` parameter. In read-write mode:
- **readonly_query**: Same behavior (read-only queries)
- **transact**: Supports all DDL and DML operations (CREATE, INSERT, UPDATE, DELETE, etc.)
We recommend using least-privilege access when connecting to DSQL. For example, users should use a role that is read-only when possible. The read-only mode provides best-effort client-side validation to reject mutations.
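The best-effort client-side validation mentioned above can be pictured with a minimal sketch (illustrative only; the server's actual checks are more thorough):
```python
READONLY_DISALLOWED = (
    "insert", "update", "delete", "create", "drop", "alter", "truncate", "grant", "revoke",
)

def assert_readonly(sql: str) -> None:
    """Reject obvious mutations before sending a statement in read-only mode."""
    first_word = sql.lstrip().split(None, 1)[0].lower() if sql.strip() else ""
    if first_word in READONLY_DISALLOWED:
        raise ValueError(f"Write operation '{first_word.upper()}' rejected in read-only mode")

assert_readonly("SELECT id, name FROM customers LIMIT 10")  # passes
# assert_readonly("DELETE FROM customers")                  # would raise ValueError
```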
### `--cluster_endpoint`
This is a mandatory parameter specifying the cluster to connect to. It should be the full endpoint of your cluster, e.g., `01abc2ldefg3hijklmnopqurstu.dsql.us-east-1.on.aws`
### `--database_user`
This is a mandatory parameter to specify the user to connect as. For example
`admin`, or `my_user`. Note that the AWS credentials you are using must have
permission to login as that user. For more information on setting up and using
database roles in DSQL, see [Using database roles with IAM roles](https://docs.aws.amazon.com/aurora-dsql/latest/userguide/using-database-and-iam-roles.html).
### `--profile`
You can specify the AWS profile to use for your credentials. Note that this is
not supported for the Docker installation.
Using the `AWS_PROFILE` environment variable in your MCP configuration is also
supported:
```json
"env": {
"AWS_PROFILE": "your-aws-profile"
}
```
If neither is provided, the MCP server defaults to using the "default" profile in your AWS configuration file.
### `--region`
This is a mandatory parameter to specify the region of your DSQL database.
### `--knowledge-server`
Optional parameter to specify the remote MCP server endpoint for DSQL knowledge tools (documentation search, reading, and recommendations).
A default endpoint is pre-configured, so this parameter is only needed to override it.
Example:
```bash
--knowledge-server https://custom-knowledge-server.example.com
```
**Note:** For security, only use trusted knowledge server endpoints. The server should be an HTTPS endpoint.
### `--knowledge-timeout`
Optional parameter to specify the timeout in seconds for requests to the knowledge server.
Default: `30.0`
Example:
```bash
--knowledge-timeout 60.0
```
Increase this value if you experience timeouts when accessing documentation on slow networks.
## Development and Testing
### Running Tests
This project includes comprehensive tests to validate the readonly enforcement mechanisms. To run the tests:
```bash
# Install dependencies and run tests
uv run pytest tests/test_readonly_enforcement.py -v
# Run all tests
uv run pytest -v
# Run tests with coverage
uv run pytest --cov=awslabs.aurora_dsql_mcp_server tests/ -v
```
### Local Docker Testing
To test the MCP server locally using Docker:
1. **Build the Docker image:**
```bash
cd src/aurora-dsql-mcp-server
docker build -t awslabs/aurora-dsql-mcp-server:latest .
```
2. **Create AWS credentials file:**
Option A - Manual creation:
```bash
# Create .env file with your AWS credentials
cat > .env << EOF
AWS_ACCESS_KEY_ID=your_access_key_here
AWS_SECRET_ACCESS_KEY=your_secret_key_here
AWS_SESSION_TOKEN=your_session_token_here
EOF
```
Option B - Export from AWS CLI:
```bash
aws configure export-credentials --profile your-profile-name --format env > temp_aws_credentials.env
sed 's/^export //' temp_aws_credentials.env > .env
rm temp_aws_credentials.env
```
3. **Test the container directly:**
```bash
docker run -i --rm \
--env-file .env \
awslabs/aurora-dsql-mcp-server:latest \
--cluster_endpoint "your-dsql-cluster-endpoint" \
--database_user "your-username" \
--region "us-east-1"
```
4. **Test with write operations enabled:**
```bash
docker run -i --rm \
--env-file .env \
awslabs/aurora-dsql-mcp-server:latest \
--cluster_endpoint "your-dsql-cluster-endpoint" \
--database_user "your-username" \
--region "us-east-1" \
--allow-writes
```
**Note:** Replace the placeholder values with your actual DSQL cluster endpoint, username, and region.
## AI Rules
This repository also contains AI Rules (Steering). These markdown files provide simple
context and guidance on best practices and patterns that AI assistants automatically apply
when generating code, improving the quality of agentic development.
Recommended paths:
* [Skills CLI for Agent-Agnostic Installation](#skills-cli)
* [Kiro Power](#kiro-power) - button-click installation
* [Claude Skill](#claude-skill) - installation instructions in [claude_skill_setup.md](https://github.com/awslabs/mcp/blob/main/src/aurora-dsql-mcp-server/skills/claude_skill_setup.md)
* [Gemini Skill](#gemini-skill) - use Gemini's github subrepo skill installation with `--path`
* [Codex Skill](#codex-skill) - use Codex's `$skill-installer` skill.
Alternative:
The [dsql-skill](https://github.com/awslabs/mcp/tree/main/src/aurora-dsql-mcp-server/skills/dsql-skill) can also be cloned into your tool's respective `rules` directory
for use with other coding assistants.
### Skills CLI
The [DSQL skill](https://skills.sh/awslabs/mcp/dsql) can also be installed using the [Skills CLI](https://skills.sh/docs/cli).
```bash
npx skills add awslabs/mcp --skill dsql
```
The CLI will guide you through:
* Selecting the agents you'd like to install to (Kiro, Claude Code, Cursor, Copilot, Gemini, Codex, Roo, Cline, OpenCode, Windsurf, etc.)
* Installation scope
- Project: Install in current directory (committed with your project)
- Global: Install in home directory (available across all projects)
* Installation method
- Symlink (Recommended): Single source of truth, easy updates
- Copy to all agents: Independent copies for each agent
Check and update skills at any time using:
```bash
npx skills check
npx skills update
```
### Kiro Power
To set up the Kiro power:
1. Install directly from the [Kiro Powers Registry](https://kiro.dev/launch/powers/amazon-aurora-dsql/)
2. Once redirected to the Power in the IDE either:
1. Select the **`Try Power`** button. Suggested for people who want:
- The AI to guide MCP server setup
- An interactive onboarding experience with DSQL to create a new cluster
2. Open a new Kiro chat and ask anything related to DSQL
- **Optionally update the MCP Config:** Add your existing cluster details and test the MCP server connection
so the MCP server can be used out of the box with the power.
- The Kiro agent will automatically activate the power if it identifies the power as valuable for completing
the user's task.
### Claude Skill
**Simple Setup with the Skills CLI**:
As outlined, the skill can be installed to Claude Code with the [Skills CLI](#skills-cli). To specify
only Claude Code as the agent to install to, use:
```bash
npx skills add awslabs/mcp --skill dsql --agent claude-code
```
**Direct Setup using a Git Clone**:
The alternative setup is outlined in [claude_skill_setup.md](https://github.com/awslabs/mcp/blob/main/src/aurora-dsql-mcp-server/skills/claude_skill_setup.md).
The method outlines taking a sparse clone of the dsql-skill directory and symlinking this clone
into the `.claude/skills/` folder. This allows changes to the skill to be pulled whenever the skill
needs to be updated.
### Gemini Skill
To add the skill directly in Gemini, choose a scope: `workspace` (scoped to the project) or `user` (default, global)\
and use the `skills` installer.
```bash
gemini skills install https://github.com/awslabs/mcp.git --path src/aurora-dsql-mcp-server/skills/dsql-skill --scope $SCOPE
```
You can then use the `/dsql` skill command with Gemini, and Gemini will automatically detect when the skill should be used.
### Codex Skill
Use the skill installer from the Codex CLI or TUI using the `$skill-installer` skill.
```bash
$skill-installer install dsql skill: https://github.com/awslabs/mcp/tree/main/src/aurora-dsql-mcp-server/skills/dsql-skill
```
Restart Codex to pick up the skill. The skill can then be activated using `$dsql`.
| text/markdown | Amazon Web Services, Ram Dwivedula, Yoni Shalom | AWSLabs MCP <203918161+awslabs-mcp@users.noreply.github.com> | null | null | Apache-2.0 | null | [
"License :: OSI Approved :: Apache Software License",
"Operating System :: OS Independent",
"Programming Language :: Python",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programmin... | [] | null | null | >=3.10 | [] | [] | [] | [
"boto3>=1.38.5",
"botocore>=1.38.5",
"httpx>=0.27.0",
"loguru>=0.7.0",
"mcp[cli]>=1.23.0",
"psycopg[binary]>=3.0",
"pydantic>=2.10.6"
] | [] | [] | [] | [
"homepage, https://awslabs.github.io/mcp/",
"docs, https://awslabs.github.io/mcp/servers/aurora-dsql-mcp-server/",
"documentation, https://awslabs.github.io/mcp/servers/aurora-dsql-mcp-server/",
"repository, https://github.com/awslabs/mcp.git",
"changelog, https://github.com/awslabs/mcp/blob/main/src/aurora... | twine/6.1.0 CPython/3.13.7 | 2026-02-19T10:54:41.584685 | awslabs_aurora_dsql_mcp_server-1.0.20.tar.gz | 195,187 | 02/6e/0ed7e095b0a291aeee2feaa9dc620924966c2fd3771c1b11f1fa8a6d037e/awslabs_aurora_dsql_mcp_server-1.0.20.tar.gz | source | sdist | null | false | 67a4202fe12b57160fd45f29bf0192f3 | bfb2721b34305ceda26097a87a3418d94f5a66cdf4bc6a2ef1e434d6fceb3464 | 026e0ed7e095b0a291aeee2feaa9dc620924966c2fd3771c1b11f1fa8a6d037e | null | [
"LICENSE",
"NOTICE"
] | 0 |
2.4 | awslabs.aws-api-mcp-server | 1.3.13 | Model Context Protocol (MCP) server for interacting with AWS | # AWS API MCP Server
## Overview
The AWS API MCP Server enables AI assistants to interact with AWS services and resources through AWS CLI commands. It provides programmatic access to manage your AWS infrastructure while maintaining proper security controls.
This server acts as a bridge between AI assistants and AWS services, allowing you to create, update, and manage AWS resources across all available services. It helps with AWS CLI command selection and provides access to the latest AWS API features and services, even those released after an AI model's knowledge cutoff date.
## Prerequisites
- You must have an AWS account with credentials properly configured. Please refer to the official documentation [here ↗](https://boto3.amazonaws.com/v1/documentation/api/latest/guide/credentials.html#configuring-credentials) for guidance. We recommend configuring your credentials using the `AWS_API_MCP_PROFILE_NAME` environment variable (see the [Configuration Options](#%EF%B8%8F-configuration-options) section for details). If `AWS_API_MCP_PROFILE_NAME` is not specified, the system follows boto3's default credential selection order; in this case, if you have multiple AWS profiles configured on your machine, ensure the correct profile is prioritized in your credential chain.
- Ensure you have Python 3.10 or newer installed. You can download it from the [official Python website](https://www.python.org/downloads/) or use a version manager such as [pyenv](https://github.com/pyenv/pyenv).
- (Optional) Install [uv](https://docs.astral.sh/uv/getting-started/installation/) for faster dependency management and improved Python environment handling.
## 📦 Installation Methods
Choose the installation method that best fits your workflow and get started with your favorite assistant with MCP support, like Kiro, Cursor, or Cline.
| Cursor | VS Code | Kiro |
|:------:|:-------:|:----:|
| [](https://cursor.com/en/install-mcp?name=awslabs.aws-api-mcp-server&config=eyJjb21tYW5kIjoidXZ4IGF3c2xhYnMuYXdzLWFwaS1tY3Atc2VydmVyQGxhdGVzdCIsImVudiI6eyJBV1NfUkVHSU9OIjoidXMtZWFzdC0xIn0sImRpc2FibGVkIjpmYWxzZSwiYXV0b0FwcHJvdmUiOltdfQ%3D%3D) | [](https://insiders.vscode.dev/redirect/mcp/install?name=AWS%20API%20MCP%20Server&config=%7B%22command%22%3A%22uvx%22%2C%22args%22%3A%5B%22awslabs.aws-api-mcp-server%40latest%22%5D%2C%22env%22%3A%7B%22AWS_REGION%22%3A%22us-east-1%22%7D%2C%22type%22%3A%22stdio%22%7D) | [](https://kiro.dev/launch/mcp/add?name=awslabs.aws-api-mcp-server&config=%7B%22command%22%3A%20%22uvx%22%2C%20%22args%22%3A%20%5B%22awslabs.aws-api-mcp-server%40latest%22%5D%2C%20%22disabled%22%3A%20false%2C%20%22autoApprove%22%3A%20%5B%5D%7D) |
### ⚡ Using uv
Add the following configuration to your MCP client config file (e.g., for Kiro, edit `~/.kiro/settings/mcp.json`):
**For Linux/MacOS users:**
```json
{
"mcpServers": {
"awslabs.aws-api-mcp-server": {
"command": "uvx",
"args": [
"awslabs.aws-api-mcp-server@latest"
],
"env": {
"AWS_REGION": "us-east-1"
},
"disabled": false,
"autoApprove": []
}
}
}
```
**For Windows users:**
```json
{
"mcpServers": {
"awslabs.aws-api-mcp-server": {
"command": "uvx",
"args": [
"--from",
"awslabs.aws-api-mcp-server@latest",
"awslabs.aws-api-mcp-server.exe"
],
"env": {
"AWS_REGION": "us-east-1"
},
"disabled": false,
"autoApprove": []
}
}
}
```
### 🐍 Using Python (pip)
> [!TIP]
> It's recommended to use a virtual environment because the AWS CLI version of the MCP server might not match the locally installed one
> and can cause it to be downgraded. In the MCP client config file you can change `"command"` to the path of the python executable in your
> virtual environment (e.g., `"command": "/workspace/project/.venv/bin/python"`).
**Step 1: Install the package**
```bash
pip install awslabs.aws-api-mcp-server
```
**Step 2: Configure your MCP client**
Add the following configuration to your MCP client config file (e.g., for Kiro, edit `~/.kiro/settings/mcp.json`):
```json
{
"mcpServers": {
"awslabs.aws-api-mcp-server": {
"command": "python",
"args": [
"-m",
"awslabs.aws_api_mcp_server.server"
],
"env": {
"AWS_REGION": "us-east-1"
},
"disabled": false,
"autoApprove": []
}
}
}
```
### 🐳 Using Docker
You can isolate the MCP server by running it in a Docker container. The Docker image is available on the [public AWS ECR registry](https://gallery.ecr.aws/awslabs-mcp/awslabs/aws-api-mcp-server).
```json
{
"mcpServers": {
"awslabs.aws-api-mcp-server": {
"command": "docker",
"args": [
"run",
"--rm",
"--interactive",
"--env",
"AWS_REGION=us-east-1",
"--volume",
"/full/path/to/.aws:/app/.aws",
"public.ecr.aws/awslabs-mcp/awslabs/aws-api-mcp-server:latest"
],
"env": {}
}
}
}
```
### 🔧 Using Cloned Repository
For detailed instructions on setting up your local development environment and running the server from source, please see the CONTRIBUTING.md file.
### 🌐 HTTP Mode Configuration
The MCP server supports streamable HTTP mode. To use it, you must set:
- `AWS_API_MCP_TRANSPORT` to `"streamable-http"`
- `AUTH_TYPE` to `"no-auth"` if you want to disable authentication (otherwise OAuth is enabled by default)
Optionally configure the host and port with `AWS_API_MCP_HOST` and `AWS_API_MCP_PORT`.
#### For Linux/macOS:
```bash
AWS_API_MCP_TRANSPORT=streamable-http AUTH_TYPE=no-auth uvx awslabs.aws-api-mcp-server@latest
```
#### For Windows (Command Prompt):
```cmd
set AWS_API_MCP_TRANSPORT=streamable-http
set AUTH_TYPE=no-auth
uvx awslabs.aws-api-mcp-server@latest
```
#### For Windows (PowerShell):
```powershell
$env:AWS_API_MCP_TRANSPORT="streamable-http"
$env:AUTH_TYPE="no-auth"
uvx awslabs.aws-api-mcp-server@latest
```
Once the server is running, connect to it using the following configuration (ensure the host and port number match your `AWS_API_MCP_HOST` and `AWS_API_MCP_PORT` settings):
```json
{
"mcpServers": {
"awslabs.aws-api-mcp-server": {
"type": "streamableHttp",
"url": "http://127.0.0.1:8000/mcp",
"autoApprove": [],
"disabled": false,
"timeout": 60
}
}
}
```
**Note**: Replace `127.0.0.1` with your custom host if you've set `AWS_API_MCP_HOST` to a different value.
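If you want to sanity-check the endpoint from code rather than from an MCP client config, a minimal sketch with the `mcp` Python SDK might look like the following (it assumes the server is running locally on the default host and port, and that your installed SDK version exposes `streamablehttp_client`):
```python
import asyncio

from mcp import ClientSession
from mcp.client.streamable_http import streamablehttp_client


async def main() -> None:
    # Connect to the locally running server over streamable HTTP.
    async with streamablehttp_client("http://127.0.0.1:8000/mcp") as (read, write, _):
        async with ClientSession(read, write) as session:
            await session.initialize()
            tools = await session.list_tools()
            print([tool.name for tool in tools.tools])


asyncio.run(main())
```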
### 🔒 HTTP Mode Security Considerations
**IMPORTANT**: When using HTTP mode (`streamable-http`), please be aware of the following security considerations:
- **Single Customer Server**: This HTTP mode is intended for **single customer use only**. It is **NOT designed for multi-tenant environments** or serving multiple users simultaneously
- **Authentication**: The server can be started with OAuth authentication, using `AUTH_TYPE=oauth`. Set `AUTH_TYPE=no-auth` to disable authentication if needed
- **Network Security Controls**: Ensure proper network security controls are in place:
- Bind to localhost (`127.0.0.1`) when possible
- Configure firewall rules to restrict access
- **Encryption in Transit**: We **strongly recommend** adding encryption in transit when using HTTP mode:
- Use HTTPS with TLS/SSL certificates
- Avoid transmitting sensitive data over unencrypted HTTP connections
## 🏗️ Self-host on AgentCore Runtime
You can deploy the AWS API MCP Server to Amazon Bedrock AgentCore for managed, scalable hosting with built-in authentication and session isolation. AgentCore provides a containerized runtime environment that handles scaling, security, and infrastructure management automatically.
See [DEPLOYMENT.md](https://github.com/awslabs/mcp/blob/main/src/aws-api-mcp-server/DEPLOYMENT.md) and [AWS Marketplace](https://aws.amazon.com/marketplace/pp/prodview-lqqkwbcraxsgw) for details.
## ⚙️ Configuration Options
| Environment Variable | Required | Default | Description |
|-------------------------------------------------------------------|----------------------------|----------------------------------------------------------|------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| `AWS_REGION` | ❌ No | `"us-east-1"` | Sets the default AWS region for all CLI commands, unless a specific region is provided in the request. If not provided, the MCP server will determine the region just like boto3's [configuration chain](https://boto3.amazonaws.com/v1/documentation/api/latest/guide/configuration.html#overview) but with a fallback to `us-east-1`. This provides a consistent default while allowing flexibility to run commands in different regions as needed. |
| `AWS_API_MCP_WORKING_DIR` | ❌ No | \<Platform-specific temp directory\>/aws-api-mcp/workdir | Working directory path for the MCP server operations. Must be an absolute path when provided. Used to resolve relative paths in commands like `aws s3 cp`. Does not provide any sandboxing or security restrictions. When `AWS_API_MCP_ALLOW_UNRESTRICTED_LOCAL_FILE_ACCESS` is set to `"workdir"` (default), file operations are restricted to this directory. If not provided, defaults to a platform-specific directory:<br/><br/>• **Windows**: `%TEMP%\aws-api-mcp\workdir` (typically `C:\Users\<username>\AppData\Local\Temp\aws-api-mcp\workdir`)<br/>• **macOS**: `/private/var/folders/<hash>/T/aws-api-mcp/workdir`<br/>• **Linux**: `$XDG_RUNTIME_DIR/aws-api-mcp/workdir` (if set) or `$TMPDIR/aws-api-mcp/workdir` (if set) or `/tmp/aws-api-mcp/workdir` |
| `AWS_API_MCP_ALLOW_UNRESTRICTED_LOCAL_FILE_ACCESS` | ❌ No | `"workdir"` | Controls file system access level with three modes:<br/><br/>• `"workdir"` (default): Restricts file operations to `AWS_API_MCP_WORKING_DIR`. When using this mode, ensure to set an appropriate path for your use case since commands with paths outside this directory are rejected.<br/>• `"unrestricted"`: Enables system-wide file access (may cause unintended overwrites). Use only when explicitly required.<br/>• `"no-access"`: Blocks all local file path arguments. Commands requiring local file access (e.g., `aws s3 cp`, `aws cloudformation package`) will fail. S3 URIs (`s3://...`) and stdout redirect (`-`) remain allowed.<br/><br/>**DEPRECATED**: The boolean values `"true"` and `"false"` are supported for backward compatibility. Use `"unrestricted"` instead of `"true"` and `"workdir"` instead of `"false"`. |
| `AWS_API_MCP_PROFILE_NAME` | ❌ No | `"default"` | AWS Profile for credentials to use for command executions. If not provided, the MCP server will follow boto3's [default credentials chain](https://boto3.amazonaws.com/v1/documentation/api/latest/guide/credentials.html#configuring-credentials) to look for credentials. We strongly recommend configuring your credentials this way. |
| `READ_OPERATIONS_ONLY` | ❌ No | `"false"` | When set to "true", restricts execution to read-only operations only. IAM permissions remain the primary security control. For a complete list of allowed operations under this flag, refer to the [Service Authorization Reference](https://docs.aws.amazon.com/service-authorization/latest/reference/reference_policies_actions-resources-contextkeys.html). Only operations where the **Access level** column is not `Write` will be allowed when this is set to "true". |
| `REQUIRE_MUTATION_CONSENT` | ❌ No | `"false"` | When set to "true", the MCP server will ask explicit consent before executing any operations that are **NOT** read-only. This safety mechanism uses [elicitation](https://modelcontextprotocol.io/docs/concepts/elicitation) so it requires a [client that supports elicitation](https://modelcontextprotocol.io/clients). |
| `AWS_ACCESS_KEY_ID`, `AWS_SECRET_ACCESS_KEY`, `AWS_SESSION_TOKEN` | ❌ No | - | Use environment variables to configure AWS credentials |
| `AWS_API_MCP_TELEMETRY` | ❌ No | `"true"` | Allows sending additional telemetry data to AWS related to the server configuration. This includes whether the `call_aws()` tool is used with `READ_OPERATIONS_ONLY` set to true or false. Note: Regardless of this setting, AWS obtains information about which operations were invoked and the server version as part of normal AWS service interactions; no additional telemetry calls are made by the server for this purpose. |
| `EXPERIMENTAL_AGENT_SCRIPTS` | ❌ No | `"false"` | When set to "true", enables experimental agent scripts functionality. This provides access to structured, step-by-step workflows for complex AWS tasks through the `get_execution_plan` tool. Agent scripts are reusable workflows that automate complex processes and provide detailed guidance for accomplishing specific tasks. This feature is experimental and may change in future releases. |
| `AWS_API_MCP_AGENT_SCRIPTS_DIR` | ❌ No | - | Directory path containing custom user scripts for the agent scripts functionality. When specified, the server will load additional `.script.md` files from this directory alongside the built-in scripts. The directory must exist and be readable. Scripts must follow the same format as built-in scripts with frontmatter metadata including a `description` field. This allows users to extend the agent scripts functionality with their own custom workflows. |
| `AWS_API_MCP_TRANSPORT` | ❌ No | `"stdio"` | Transport protocol for the MCP server. Valid options are `"stdio"` (default) for local communication or `"streamable-http"` for HTTP-based communication. When using `"streamable-http"`, the server will listen on the host and port specified by `AWS_API_MCP_HOST` and `AWS_API_MCP_PORT`. |
| `AWS_API_MCP_HOST` | ❌ No | `"127.0.0.1"` | Host address for the MCP server when using `"streamable-http"` transport. Only used when `AWS_API_MCP_TRANSPORT` is set to `"streamable-http"`. |
| `AWS_API_MCP_PORT` | ❌ No | `"8000"` | Port number for the MCP server when using `"streamable-http"` transport. Only used when `AWS_API_MCP_TRANSPORT` is set to `"streamable-http"`. |
| `AWS_API_MCP_ALLOWED_HOSTS` | ❌ No | `AWS_API_MCP_HOST` | Comma-separated list of allowed host hostnames for HTTP requests. Used to validate the `Host` header in incoming requests. Set to `*` to allow all hosts (not recommended for production). Port numbers are automatically stripped during validation. Only used when `AWS_API_MCP_TRANSPORT` is set to `"streamable-http"`. |
| `AWS_API_MCP_ALLOWED_ORIGINS` | ❌ No | `AWS_API_MCP_HOST` | Comma-separated list of allowed origin hostnames for HTTP requests. Used to validate the `Origin` header in incoming requests. Set to `*` to allow all origins (not recommended for production). Port numbers are automatically stripped during validation. Only used when `AWS_API_MCP_TRANSPORT` is set to `"streamable-http"`. |
| `AWS_API_MCP_STATELESS_HTTP` | ❌ No | `"false"` | ⚠️ **WARNING: We strongly recommend keeping this set to "false" due to significant security implications.** When set to "true", creates a completely fresh transport for each request with no session tracking or state persistence between requests. Only used when `AWS_API_MCP_TRANSPORT` is set to `"streamable-http"`. |
| `AUTH_TYPE` | ❌ No | - | Only used when `AWS_API_MCP_TRANSPORT` is set to `"streamable-http"`. Authentication type for the MCP server. When set to `"no-auth"`, disables authentication. When set to `"oauth"`, enables OAuth authentication and requires `AUTH_ISSUER` and `AUTH_JWKS_URI` to be configured. |
| `AUTH_ISSUER` | ❌ No | - | Only used when `AWS_API_MCP_TRANSPORT` is set to `"streamable-http"`. OAuth issuer URL for JWT token validation. The issuer that will be validated in JWT tokens. Example: `"https://your-auth-provider.com/"`. Required when `AUTH_TYPE` is set to `"oauth"`. |
| `AUTH_JWKS_URI` | ❌ No | - | Only used when `AWS_API_MCP_TRANSPORT` is set to `"streamable-http"`. JWKS (JSON Web Key Set) endpoint URL for JWT token validation. This should be a publicly accessible HTTPS URL that serves the JSON Web Key Set used to verify JWT signatures. Example: `"https://your-auth-provider.com/.well-known/jwks.json"`. Required when `AUTH_TYPE` is set to `"oauth"`. |
### 🚀 Quick Start
Once configured, you can ask your AI assistant questions such as:
- **"List all my EC2 instances"**
- **"Show me S3 buckets in us-west-2"**
- **"Create a new security group for web servers"** *(Only with write permission)*
## Features
- **Comprehensive AWS CLI Support**: Supports all commands available in the latest AWS CLI version, ensuring access to the most recent AWS services and features
- **Help in Command Selection**: Helps AI assistants select the most appropriate AWS CLI commands to accomplish specific tasks
- **Command Validation**: Ensures safety by validating all AWS CLI commands before execution, preventing invalid or potentially harmful operations
- **Hallucination Protection**: Mitigates the risk of model hallucination by strictly limiting execution to valid AWS CLI commands only - no arbitrary code execution is permitted
- **Security-First Design**: Built with security as a core principle, providing multiple layers of protection to safeguard your AWS infrastructure
- **Read-Only Mode**: Provides an extra layer of security that disables all mutating operations, allowing safe exploration of AWS resources
## Available MCP Tools
The tool names are subject to change, please refer to CHANGELOG.md for any changes and adapt your workflows accordingly.
- `call_aws`: Executes AWS CLI commands with validation and proper error handling
- `suggest_aws_commands`: Suggests AWS CLI commands based on a natural language query. This tool helps the model generate CLI commands by providing a description and the complete set of parameters for the 5 most likely CLI commands for the given query, including the most recent AWS CLI commands - some of which may be otherwise unknown to the model (released after the model's knowledge cut-off date).
- `get_execution_plan` *(Experimental)*: Provides structured, step-by-step guidance for accomplishing complex AWS tasks through agent scripts. This tool is only available when the `EXPERIMENTAL_AGENT_SCRIPTS` environment variable is set to "true". Agent scripts are reusable workflows that automate complex processes and provide detailed guidance for accomplishing specific tasks.
## Security Considerations
Before using this MCP Server, you should consider conducting your own independent assessment to ensure that your use would comply with your own specific security and quality control practices and standards, as well as the laws, rules, and regulations that govern you and your content.
### ⚠️ Multi-Tenant Environment Restrictions
**IMPORTANT**: This MCP server is **NOT designed for multi-tenant environments**. Do not use this server to serve multiple users or tenants simultaneously.
- **Single User Only**: Each instance of the MCP server should serve only one user with their own dedicated AWS credentials
- **Separate Directories**: When running multiple instances, create separate working directories for each instance using the `AWS_API_MCP_WORKING_DIR` environment variable
### 🔑 Credential Management and Access Control
We use credentials to control which commands this MCP server can execute. This MCP server relies on IAM roles to be configured properly, in particular:
- Using credentials for an IAM role with `AdministratorAccess` policy (usually the `Admin` IAM role) permits mutating actions (i.e. creating, deleting, modifying your AWS resources) and non-mutating actions.
- Using credentials for an IAM role with `ReadOnlyAccess` policy (usually the `ReadOnly` IAM role) only allows non-mutating actions, this is sufficient if you only want to inspect resources in your account.
- If IAM roles are not available, [these alternatives](https://docs.aws.amazon.com/cli/v1/userguide/cli-configure-files.html#cli-configure-files-examples) can also be used to configure credentials.
- To add another layer of security, users can explicitly set the environment variable `READ_OPERATIONS_ONLY` to true in their MCP config file. When set to true, we'll compare each CLI command against a list of known read-only actions, and will only execute the command if it's found in the allowed list. "Read-Only" refers only to the API classification, not the file system; that is, such "read-only" actions can still write to the file system if necessary or upon user request. While this environment variable provides an additional layer of protection, IAM permissions remain the primary and most reliable security control. Users should always configure appropriate IAM roles and policies for their use case, as IAM credentials take precedence over this environment variable.
- ⚠️ **IMPORTANT**: While using a `ReadOnlyAccess` IAM role will block write operations through the MCP server, **some AWS read-only operations can still return AWS credentials or sensitive information** in command outputs that could potentially be used outside of this server.
Our MCP server aims to support all AWS APIs. However, some of them spawn subprocesses that expose security risks. Such APIs are denylisted; see the full list below.
| Service | Operations |
|---------|------------|
| **deploy** | `install`, `uninstall` |
| **emr** | `ssh`, `sock`, `get`, `put` |
### File System Access and Operating Mode
**Important**: This MCP server is intended for **STDIO mode only** as a local server using a single user's credentials. The server runs with the same permissions as the user who started it and has complete access to the file system.
#### Security and Access Considerations
- **No Sandboxing**: The `AWS_API_MCP_WORKING_DIR` environment variable sets a working directory. The `AWS_API_MCP_ALLOW_UNRESTRICTED_LOCAL_FILE_ACCESS` flag by default is set to `"workdir"` which restricts MCP server file operations to `<AWS_API_MCP_WORKING_DIR>`. Setting to `"unrestricted"` enables system-wide file access but may cause unintended overwrites. Setting to `"no-access"` disables local file access.
- **File System Access**: The server can read from and write to any location on the file system where the user has permissions.
- **No Confirmation Prompts**: Files can be modified, overwritten, or deleted without any additional user confirmation
- **Host File System Sharing**: When using this server, the host file system is directly accessible
- **Do Not Modify for Network Use**: This server is designed for local STDIO use only; network operation introduces additional security risks
#### Common File Operations
The MCP server can perform various file operations through AWS CLI commands, including:
- `aws s3 sync` - Can overwrite entire directories without warning
- `aws s3 cp` - Can overwrite existing files without confirmation
- Any AWS CLI command using the `outfile` parameter
- Commands that use the `file://` prefix to read from files
**Note**: While the `AWS_API_MCP_WORKING_DIR` environment variable sets where the server starts, it does not restrict where files can be accessed.
### Prompt Injection and Untrusted Data
This MCP server executes AWS CLI commands as instructed by an AI model, which can be vulnerable to prompt injection attacks:
- **Do not connect this MCP server to data sources with untrusted data** (e.g., CloudWatch logs containing raw user data, user-generated content in databases, etc.)
- Always use scoped-down IAM credentials with minimal permissions necessary for the specific task.
- Be aware that prompt injection vulnerabilities are a known issue with LLMs and not caused by MCP servers inherently. When working with untrusted data use a client that supports command validation with a human in the loop.
### Logging
The AWS API MCP server writes logs to help you monitor command executions, troubleshoot issues, and perform debugging. These logs are automatically rotated and contain operational data including command executions, errors, and debug information.
#### Log file location
Logs are written to a rotating file at:
- **macOS/Linux**: `<HOME>/.aws/aws-api-mcp/aws-api-mcp-server.log`
- **Windows**: `%USERPROFILE%\.aws\aws-api-mcp\aws-api-mcp-server.log`
#### Shipping logs to Amazon CloudWatch Logs
To centralize your logs in AWS CloudWatch for better monitoring and analysis, you can use the CloudWatch Agent to automatically ship the MCP server logs to a CloudWatch log group.
**Prerequisites:**
1. **Install the CloudWatch Agent** on your machine:
- **Amazon Linux 2/2023**: `sudo yum install amazon-cloudwatch-agent`
- **Other platforms**: Download from [CloudWatch Agent download page](https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/download-CloudWatch-Agent-on-EC2-Instance-commandline-first.html)
- **Learn more**: [CloudWatch Agent overview](https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/Install-CloudWatch-Agent.html)
2. **Configure IAM permissions**: Ensure your instance/user has permissions to write to CloudWatch Logs. You can attach the `CloudWatchAgentServerPolicy` or create a custom policy with these permissions:
- `logs:CreateLogGroup`
- `logs:CreateLogStream`
- `logs:PutLogEvents`
**Configuration steps:**
1. **Run the configuration wizard** to set up log collection. The wizard will guide you through configuring the log group name, stream name, and other settings. For detailed wizard documentation, see [Create the CloudWatch agent configuration file with the wizard](https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/create-cloudwatch-agent-configuration-file-wizard.html):
**Linux/macOS:**
```bash
sudo /opt/aws/amazon-cloudwatch-agent/bin/amazon-cloudwatch-agent-config-wizard
```
**Windows:**
```cmd
cd "C:\Program Files\Amazon\AmazonCloudWatchAgent"
.\amazon-cloudwatch-agent-config-wizard.exe
```
2. **When prompted for log file path**, specify the MCP server log location:
- **macOS**: `/Users/<user>/.aws/aws-api-mcp/aws-api-mcp-server.log`
- **Linux**: `/home/<user>/.aws/aws-api-mcp/aws-api-mcp-server.log`
- **Windows**: `C:\Users\<user>\.aws\aws-api-mcp\aws-api-mcp-server.log`
3. **Start the CloudWatch Agent** following the official AWS documentation:
- [Starting the CloudWatch agent](https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/start-CloudWatch-Agent-on-premise-SSM-onprem.html)
#### Troubleshooting
If you encounter issues with the CloudWatch Agent setup or log shipping, refer to [Troubleshooting the CloudWatch agent](https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/troubleshooting-CloudWatch-Agent.html).
### Security Best Practices
- **Principle of Least Privilege**: While the examples above use AWS managed policies like `AdministratorAccess` and `ReadOnlyAccess` for simplicity, we **strongly** recommend following the principle of least privilege by creating custom policies tailored to your specific use case.
- **Minimal Permissions**: Start with minimal permissions and gradually add access as needed for your specific workflows.
- **Condition Statements**: Combine custom policies with condition statements to further restrict access by region or other factors based on your security requirements.
- **Untrusted Data Sources**: When connecting to potentially untrusted data sources, use scoped-down credentials with minimal permissions.
- **Regular Monitoring**: Monitor AWS CloudTrail logs to track actions performed by the MCP server.
### Custom Security Policy Configuration
You can create a custom security policy file to define additional security controls beyond IAM permissions. The MCP server will look for a security policy file at `~/.aws/aws-api-mcp/mcp-security-policy.json`.
#### Security Policy File Format
```json
{
"version": "1.0",
"policy": {
"denyList": [],
"elicitList": []
}
}
```
#### Command Format Requirements
**Important**: Commands must be specified in the exact format that the AWS CLI uses internally:
- **Format**: `aws <service> <operation>`
- **Service names**: Use the AWS CLI service name (e.g., `s3api`, `ec2`, `iam`, `lambda`)
- **Operation names**: Use kebab-case format (e.g., `delete-user`, `list-buckets`, `stop-instances`)
#### Examples of Correct Command Formats
| AWS CLI Command | Security Policy Format |
|-----------------|------------------------|
| `aws iam delete-user --user-name john` | `"aws iam delete-user"` |
| `aws s3api list-buckets` | `"aws s3api list-buckets"` |
| `aws ec2 describe-instances` | `"aws ec2 describe-instances"` |
| `aws lambda delete-function --function-name my-func` | `"aws lambda delete-function"` |
| `aws s3 cp file.txt s3://bucket/` | `"aws s3 cp"` |
| `aws cloudformation delete-stack --stack-name my-stack` | `"aws cloudformation delete-stack"` |
#### Policy Configuration Options
- **`denyList`**: Array of AWS CLI commands that will be completely blocked. Commands in this list will never be executed.
- **`elicitList`**: Array of AWS CLI commands that will require explicit user consent before execution. This requires a client that supports [elicitation](https://modelcontextprotocol.io/docs/concepts/elicitation).
#### Pattern Matching and Wildcards
**Current Limitation**: The security policy uses **exact string matching only**. Wildcard patterns (like `iam:delete-*` or `organizations:*`) are **not supported** in the current implementation.
Each command must be specified exactly as it appears in the AWS CLI format. For comprehensive blocking, you need to list each command individually:
```json
{
"version": "1.0",
"policy": {
"denyList": [
"aws iam delete-user",
"aws iam delete-role",
"aws iam delete-group",
"aws iam delete-policy",
"aws iam delete-access-key"
],
"elicitList": [
"aws s3api delete-object",
"aws ec2 stop-instances",
"aws lambda delete-function",
"aws rds delete-db-instance",
"aws cloudformation delete-stack"
]
}
}
```
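Because matching is exact, it helps to understand how a full CLI invocation is reduced before it is compared against the lists. The sketch below is purely illustrative (it is not the server's actual implementation, and the reduction to `aws <service> <operation>` by taking the first three tokens is an assumption made for illustration):
```python
import json
from pathlib import Path


def normalize(command: str) -> str:
    # "aws iam delete-user --user-name john" -> "aws iam delete-user"
    return " ".join(command.split()[:3])


policy_path = Path.home() / ".aws" / "aws-api-mcp" / "mcp-security-policy.json"
policy = json.loads(policy_path.read_text())["policy"]

command = normalize("aws iam delete-user --user-name john")
if command in policy["denyList"]:
    print("blocked by denyList")
elif command in policy["elicitList"]:
    print("requires explicit user consent")
else:
    print("allowed (IAM permissions still apply)")
```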
#### Finding the Correct Command Format
To determine the exact format for a command:
1. **Check AWS CLI documentation**: Look up the service and operation names
2. **Use kebab-case**: Convert camelCase operations to kebab-case (e.g., `ListBuckets` → `list-buckets`)
3. **Test with logging**: Enable debug logging to see how commands are parsed internally
#### Security Policy Precedence
1. **Denylist** - Operations in the denylist are blocked completely
2. **Elicitation Required** - Operations requiring consent will prompt the user
3. **IAM Permissions** - Standard AWS IAM controls apply to all operations
4. **READ_OPERATIONS_ONLY** - Environment variable restriction (if enabled)
**Note**: IAM permissions remain the primary security control mechanism. The security policy provides an additional layer of protection but cannot override IAM restrictions.
## License
Copyright Amazon.com, Inc. or its affiliates. All Rights Reserved.
Licensed under the Apache License, Version 2.0 (the "License").
## Disclaimer
This aws-api-mcp package is provided "as is" without warranty of any kind, express or implied, and is intended for development, testing, and evaluation purposes only. We do not provide any guarantee on the quality, performance, or reliability of this package. LLMs are non-deterministic and they make mistakes, we advise you to always thoroughly test and follow the best practices of your organization before using these tools on customer | text/markdown | Amazon Web Services | AWSLabs MCP <203918161+awslabs-mcp@users.noreply.github.com> | null | null | Apache-2.0 | null | [
"License :: OSI Approved :: Apache Software License",
"Operating System :: OS Independent",
"Programming Language :: Python",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programmin... | [] | null | null | >=3.10 | [] | [] | [] | [
"awscli==1.44.42",
"boto3>=1.41.0",
"botocore[crt]>=1.41.0",
"fastmcp>=2.14.4",
"importlib-resources>=6.0.0",
"loguru>=0.7.3",
"lxml>=5.1.0",
"mcp>=1.23.0",
"pydantic>=2.10.6",
"python-frontmatter>=1.1.0",
"python-json-logger>=2.0.7",
"requests>=2.32.4",
"setuptools>=69.0.0"
] | [] | [] | [] | [
"homepage, https://awslabs.github.io/mcp/",
"docs, https://awslabs.github.io/mcp/servers/aws-api-mcp-server/",
"documentation, https://awslabs.github.io/mcp/servers/aws-api-mcp-server/",
"repository, https://github.com/awslabs/mcp.git",
"changelog, https://github.com/awslabs/mcp/blob/main/src/aws-api-mcp-se... | twine/6.1.0 CPython/3.13.7 | 2026-02-19T10:54:40.584402 | awslabs_aws_api_mcp_server-1.3.13.tar.gz | 356,421 | b6/c6/738847a8294602061ef6bd0980f791a89109af77d86a9117b3fcbab23618/awslabs_aws_api_mcp_server-1.3.13.tar.gz | source | sdist | null | false | 4bbba2222b42516db11c0720b15b9542 | 8c8c71eb33223b30383c399227ff1f649f717b76faa5fa83162ca72568bd8ac8 | b6c6738847a8294602061ef6bd0980f791a89109af77d86a9117b3fcbab23618 | null | [
"LICENSE",
"NOTICE"
] | 0 |
2.4 | awslabs.dynamodb-mcp-server | 2.0.14 | The official MCP Server for interacting with AWS DynamoDB | # AWS DynamoDB MCP Server
The official developer experience MCP Server for Amazon DynamoDB. This server provides DynamoDB expert design guidance and data modeling assistance.
## Available Tools
The DynamoDB MCP server provides seven tools for data modeling, validation, and code generation:
- `dynamodb_data_modeling` - Retrieves the complete DynamoDB Data Modeling Expert prompt with enterprise-level design patterns, cost optimization strategies, and multi-table design philosophy. Guides through requirements gathering, access pattern analysis, and schema design.
**Example invocation:** "Design a data model for my e-commerce application using the DynamoDB data modeling MCP server"
- `dynamodb_data_model_validation` - Validates your DynamoDB data model by loading dynamodb_data_model.json, setting up DynamoDB Local, creating tables with test data, and executing all defined access patterns. Saves detailed validation results to dynamodb_model_validation.json.
**Example invocation:** "Validate my DynamoDB data model"
- `source_db_analyzer` - Analyzes existing MySQL databases to extract schema structure, access patterns from Performance Schema, and generates timestamped analysis files for use with dynamodb_data_modeling. Supports both RDS Data API-based access and connection-based access.
**Example invocation:** "Analyze my MySQL database and help me design a DynamoDB data model"
- `generate_resources` - Generates various resources from the DynamoDB data model JSON file (dynamodb_data_model.json). Currently only the `cdk` resource type is supported. Passing `cdk` as `resource_type` parameter generates a CDK app to deploy DynamoDB tables. The CDK app reads the dynamodb_data_model.json to create tables with proper configuration.
**Example invocation:** "Generate the resources to deploy my DynamoDB data model using CDK"
- `dynamodb_data_model_schema_converter` - Converts your data model (dynamodb_data_model.md) into a structured schema.json file representing your DynamoDB tables, indexes, entities, fields, and access patterns. This machine-readable format is used for code generation and can be extended for other purposes like documentation generation or infrastructure provisioning. Automatically validates the schema with up to 8 iterations to ensure correctness.
**Example invocation:** "Convert my data model to schema.json for code generation"
- `dynamodb_data_model_schema_validator` - Validates schema.json files for code generation compatibility. Checks field types, operations, GSI mappings, pattern IDs, and provides detailed error messages with fix suggestions. Ensures your schema is ready for the generate_data_access_layer tool.
**Example invocation:** "Validate my schema.json file at /path/to/schema.json"
- `generate_data_access_layer` - Generates type-safe Python code from schema.json including entity classes with field validation, repository classes with CRUD operations, fully implemented access patterns, and optional usage examples. The generated code uses Pydantic for validation and boto3 for DynamoDB operations.
**Example invocation:** "Generate Python code from my schema.json"
## Prerequisites
1. Install `uv` from [Astral](https://docs.astral.sh/uv/getting-started/installation/) or the [GitHub README](https://github.com/astral-sh/uv#installation)
2. Install Python using `uv python install 3.10`
3. Set up AWS credentials with access to AWS services
## Installation
| Kiro | Cursor | VS Code |
|:------:|:-------:|:-------:|
| [](https://kiro.dev/launch/mcp/add?name=awslabs.dynamodb-mcp-server&config=%7B%22command%22%3A%22uvx%22%2C%22args%22%3A%5B%22awslabs.dynamodb-mcp-server%40latest%22%5D%2C%22env%22%3A%7B%22DDB-MCP-READONLY%22%3A%22true%22%2C%22AWS_PROFILE%22%3A%22default%22%2C%22AWS_REGION%22%3A%22us-west-2%22%2C%22FASTMCP_LOG_LEVEL%22%3A%22ERROR%22%7D%7D)| [](https://cursor.com/en/install-mcp?name=awslabs.dynamodb-mcp-server&config=JTdCJTIyY29tbWFuZCUyMiUzQSUyMnV2eCUyMGF3c2xhYnMuZHluYW1vZGItbWNwLXNlcnZlciU0MGxhdGVzdCUyMiUyQyUyMmVudiUyMiUzQSU3QiUyMkFXU19QUk9GSUxFJTIyJTNBJTIyZGVmYXVsdCUyMiUyQyUyMkFXU19SRUdJT04lMjIlM0ElMjJ1cy13ZXN0LTIlMjIlMkMlMjJGQVNUTUNQX0xPR19MRVZFTCUyMiUzQSUyMkVSUk9SJTIyJTdEJTJDJTIyZGlzYWJsZWQlMjIlM0FmYWxzZSUyQyUyMmF1dG9BcHByb3ZlJTIyJTNBJTVCJTVEJTdE)| [](https://insiders.vscode.dev/redirect/mcp/install?name=DynamoDB%20MCP%20Server&config=%7B%22command%22%3A%22uvx%22%2C%22args%22%3A%5B%22awslabs.dynamodb-mcp-server%40latest%22%5D%2C%22env%22%3A%7B%22AWS_PROFILE%22%3A%22default%22%2C%22AWS_REGION%22%3A%22us-west-2%22%2C%22FASTMCP_LOG_LEVEL%22%3A%22ERROR%22%7D%2C%22disabled%22%3Afalse%2C%22autoApprove%22%3A%5B%5D%7D) |
> **Note:** The install buttons above configure `AWS_REGION` to `us-west-2` by default. Update this value in your MCP configuration after installation if you need a different region.
Add the MCP server to your configuration file (for [Kiro](https://kiro.dev/docs/mcp/) add to `.kiro/settings/mcp.json` - see [configuration path](https://kiro.dev/docs/cli/mcp/configuration/#mcp-server-loading-priority)):
```json
{
"mcpServers": {
"awslabs.dynamodb-mcp-server": {
"command": "uvx",
"args": ["awslabs.dynamodb-mcp-server@latest"],
"env": {
"FASTMCP_LOG_LEVEL": "ERROR"
},
"disabled": false,
"autoApprove": []
}
}
}
```
### Windows Installation
For Windows users, the MCP server configuration format is slightly different:
```json
{
"mcpServers": {
"awslabs.dynamodb-mcp-server": {
"disabled": false,
"timeout": 60,
"type": "stdio",
"command": "uv",
"args": [
"tool",
"run",
"--from",
"awslabs.dynamodb-mcp-server@latest",
"awslabs.dynamodb-mcp-server.exe"
],
"env": {
"FASTMCP_LOG_LEVEL": "ERROR"
}
}
}
}
```
### Docker Installation
After a successful `docker build -t awslabs/dynamodb-mcp-server .`:
```json
{
"mcpServers": {
"awslabs.dynamodb-mcp-server": {
"command": "docker",
"args": [
"run",
"--rm",
"--interactive",
"--env",
"FASTMCP_LOG_LEVEL=ERROR",
"awslabs/dynamodb-mcp-server:latest"
],
"env": {},
"disabled": false,
"autoApprove": []
}
}
}
```
## Data Modeling
### Data Modeling in Natural Language
Use the `dynamodb_data_modeling` tool to design DynamoDB data models through natural language conversation with your AI agent. Simply ask: "use my DynamoDB MCP to help me design a DynamoDB data model."
The tool provides a structured workflow that translates application requirements into DynamoDB data models:
**Requirements Gathering Phase:**
- Captures access patterns through natural language conversation
- Documents entities, relationships, and read/write patterns
- Records estimated requests per second (RPS) for each pattern
- Creates `dynamodb_requirements.md` file that updates in real-time
- Identifies patterns better suited for other AWS services (OpenSearch for text search, Redshift for analytics)
- Flags special design considerations (e.g., massive fan-out patterns requiring DynamoDB Streams and Lambda)
**Design Phase:**
- Generates optimized table and index designs
- Creates `dynamodb_data_model.md` with detailed design rationale
- Provides estimated monthly costs
- Documents how each access pattern is supported
- Includes optimization recommendations for scale and performance
The tool is backed by expert-engineered context that helps reasoning models guide you through advanced modeling techniques. Best results are achieved with reasoning-capable models such as Anthropic Claude 4/4.5 Sonnet, OpenAI o3, and Google Gemini 2.5.
### Data Model Validation
**Prerequisites for Data Model Validation:**
To use the data model validation tool, you need one of the following:
- **Container Runtime**: Docker, Podman, Finch, or nerdctl with a running daemon
- **Java Runtime**: Java JRE version 17 or newer (set `JAVA_HOME` or ensure `java` is in your system PATH)
After completing your data model design, use the `dynamodb_data_model_validation` tool to automatically test your data model against DynamoDB Local. The validation tool closes the loop between generation and execution by creating an iterative validation cycle.
**How It Works:**
The tool automates the traditional manual validation process:
1. **Setup**: Spins up DynamoDB Local environment (Docker/Podman/Finch/nerdctl or Java fallback)
2. **Generate Test Specification**: Creates `dynamodb_data_model.json` listing tables, sample data, and access patterns to test
3. **Deploy Schema**: Creates tables, indexes, and inserts sample data locally
4. **Execute Tests**: Runs all read and write operations defined in your access patterns
5. **Validate Results**: Checks that each access pattern behaves correctly and efficiently
6. **Iterative Refinement**: If validation fails (e.g., a query returns incomplete results due to a misaligned partition key), the tool records the issue, regenerates the affected schema, and reruns the tests until all patterns pass
**Validation Output:**
- `dynamodb_model_validation.json`: Detailed validation results with pattern responses
- `validation_result.md`: Summary of validation process with pass/fail status for each access pattern
- Identifies issues like incorrect key structures, missing indexes, or inefficient query patterns
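If you want to inspect what a validation run created, you can point boto3 at DynamoDB Local yourself. A minimal sketch, assuming DynamoDB Local is listening on its default port 8000 and the validation step has already created the tables:
```python
import boto3

# DynamoDB Local accepts any credentials; only the endpoint matters here.
dynamodb = boto3.resource(
    "dynamodb",
    endpoint_url="http://localhost:8000",
    region_name="us-west-2",
    aws_access_key_id="dummy",
    aws_secret_access_key="dummy",
)

for table in dynamodb.tables.all():
    print(table.name, table.key_schema)
```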
### Source Database Analysis
The `source_db_analyzer` tool extracts schema and access patterns from your existing database to help design your DynamoDB model. This is useful when migrating from relational databases.
The tool supports two connection methods for MySQL:
- **RDS Data API-based access**: Serverless connection using cluster ARN
- **Connection-based access**: Traditional connection using hostname/port
**Supported Databases:**
- MySQL / Aurora MySQL
- PostgreSQL
- SQL Server
**Execution Modes:**
- **Self-Service Mode**: Generate SQL queries, run them yourself, provide results (MYSQL, PSQL, MSSQL)
- **Managed Mode**: Direct connection via AWS RDS Data API (MySQL only)
We recommend running this tool against a non-production database instance.
### Self-Service Mode (MYSQL, PSQL, MSSQL)
Self-service mode allows you to analyze any database without AWS connectivity:
1. **Generate Queries**: Tool writes SQL queries (based on selected database) to a file
2. **Run Queries**: You execute queries against your database
3. **Provide Results**: Tool parses results and generates analysis
### Managed Mode (MySQL)
Managed mode lets the tool connect directly through the AWS RDS Data API to analyze existing MySQL/Aurora databases and extract schema and access patterns for DynamoDB modeling.
#### Prerequisites for MySQL Integration (Managed Mode)
**For RDS Data API-based access:**
1. MySQL cluster with RDS Data API enabled
2. Database credentials stored in AWS Secrets Manager
3. AWS credentials with permissions to access RDS Data API and Secrets Manager
**For Connection-based access:**
1. MySQL server accessible from your environment
2. Database credentials stored in AWS Secrets Manager
3. AWS credentials with permissions to access Secrets Manager
**For both connection methods:**
4. Enable Performance Schema for access pattern analysis (optional but recommended):
- Set `performance_schema` parameter to 1 in your DB parameter group
- Reboot the DB instance after changes
- Verify with: `SHOW GLOBAL VARIABLES LIKE '%performance_schema'`
- Consider tuning:
- `performance_schema_digests_size` - Maximum rows in events_statements_summary_by_digest
- `performance_schema_max_digest_length` - Maximum byte length per statement digest (default: 1024)
- Without Performance Schema, analysis is based on information schema only
#### MySQL Environment Variables
Add these environment variables to enable MySQL integration:
**For RDS Data API-based access:**
- `MYSQL_CLUSTER_ARN`: MySQL cluster ARN
- `MYSQL_SECRET_ARN`: ARN of secret containing database credentials
- `MYSQL_DATABASE`: Database name to analyze
- `AWS_REGION`: AWS region of the cluster
**For Connection-based access:**
- `MYSQL_HOSTNAME`: MySQL server hostname or endpoint
- `MYSQL_PORT`: MySQL server port (optional, default: 3306)
- `MYSQL_SECRET_ARN`: ARN of secret containing database credentials
- `MYSQL_DATABASE`: Database name to analyze
- `AWS_REGION`: AWS region where Secrets Manager is located
**Common options:**
- `MYSQL_MAX_QUERY_RESULTS`: Maximum rows in analysis output files (optional, default: 500)
**Note:** Explicit tool parameters take precedence over environment variables. Only one connection method (cluster ARN or hostname) should be specified.
#### MCP Configuration with MySQL
```json
{
"mcpServers": {
"awslabs.dynamodb-mcp-server": {
"command": "uvx",
"args": ["awslabs.dynamodb-mcp-server@latest"],
"env": {
"AWS_PROFILE": "default",
"AWS_REGION": "us-west-2",
"FASTMCP_LOG_LEVEL": "ERROR",
"MYSQL_CLUSTER_ARN": "arn:aws:rds:$REGION:$ACCOUNT_ID:cluster:$CLUSTER_NAME",
"MYSQL_SECRET_ARN": "arn:aws:secretsmanager:$REGION:$ACCOUNT_ID:secret:$SECRET_NAME",
"MYSQL_DATABASE": "<DATABASE_NAME>",
"MYSQL_MAX_QUERY_RESULTS": 500
},
"disabled": false,
"autoApprove": []
}
}
}
```
#### Using Source Database Analysis
1. Run `source_db_analyzer` against your Database (Self-service or Managed mode)
2. Review the generated timestamped analysis folder (database_analysis_YYYYMMDD_HHMMSS)
3. Read the manifest.md file first - it lists all analysis files and statistics
4. Read all analysis files to understand schema structure and access patterns
5. Use the analysis with `dynamodb_data_modeling` to design your DynamoDB schema
The tool generates Markdown files with:
- Schema structure (tables, columns, indexes, foreign keys)
- Access patterns from Performance Schema (query patterns, RPS, frequencies)
- Timestamped analysis for tracking changes over time
## Schema Conversion and Code Generation
After designing your DynamoDB data model, you can convert it to a structured schema and generate reference Python code. **When using the MCP tools through an LLM, this entire workflow happens automatically** - the LLM guides you through schema conversion, validation, and code generation in a single conversation without requiring manual tool invocation.
For standalone usage, you can also invoke these tools directly via CLI or manually edit schema.json files and regenerate code as needed.
> **Note:** Data model validation (`dynamodb_data_model_validation`) is optional for code generation. However, if you plan to test the generated code with `usage_examples.py` against DynamoDB Local, running validation first is recommended as it automatically sets up the tables and test data in DynamoDB Local.
### Converting Data Model to Schema
The `dynamodb_data_model_schema_converter` tool converts your human-readable data model (dynamodb_data_model.md) into a structured JSON schema representing your DynamoDB tables, indexes, entities, and access patterns. This machine-readable format enables code generation and can be extended for documentation or infrastructure provisioning.
The tool automatically validates the generated schema, providing detailed error messages and fix suggestions if validation fails. Output is saved to a timestamped folder for isolation.
**Schema Structure:**
The generated schema.json is a structured representation containing:
- **Tables**: One or more DynamoDB table definitions with partition/sort keys
- **GSI Definitions**: Global Secondary Index configurations (optional)
- **Entities**: Domain models (User, Order, Product, etc.) with typed fields
- **Field Types**: string, integer, decimal, boolean, array, object, uuid
- **Access Patterns**: Query/Scan/GetItem operations with parameter definitions and key templates
- **Key Templates**: Patterns for generating partition and sort keys (e.g., `USER#{user_id}`)
This structured format serves as the input for code generation tools.
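To make the key-template idea concrete, here is a hedged sketch of how a template such as `USER#{user_id}` can be rendered into a partition key and used in a boto3 query. The table name, attribute name `PK`, and field values are illustrative assumptions, not values taken from a generated schema:
```python
import boto3
from boto3.dynamodb.conditions import Key


def render(template: str, **fields: str) -> str:
    # "USER#{user_id}" with user_id="123" -> "USER#123"
    return template.format(**fields)


# Region and credentials come from your normal AWS configuration.
table = boto3.resource("dynamodb").Table("MyTable")  # illustrative table name
partition_key = render("USER#{user_id}", user_id="123")

response = table.query(KeyConditionExpression=Key("PK").eq(partition_key))
print(response["Items"])
```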
### Validating Schema Files
The `dynamodb_data_model_schema_validator` tool validates your schema.json file to ensure it's properly formatted for code generation.
**Validation Checks:**
- Required sections (table_config, entities) exist
- All required fields are present
- Field types are valid (string, integer, decimal, boolean, array, object, uuid)
- Enum values are correct (operation types, return types)
- Pattern IDs are unique across all entities
- GSI names match between gsi_list and gsi_mappings
- Fields referenced in templates exist in entity fields
- Range conditions are valid with correct parameter counts
- Access patterns have valid operations and return types
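As a purely illustrative sketch of the kind of check listed above (this is not the validator's actual code, and the key names `entities`, `access_patterns`, and `pattern_id` are assumptions about the schema layout), verifying that pattern IDs are unique could look like this:
```python
def find_duplicate_pattern_ids(schema: dict) -> list[str]:
    """Return error messages for access-pattern IDs reused across entities."""
    seen: set[str] = set()
    errors: list[str] = []
    for entity_name, entity in schema.get("entities", {}).items():
        for pattern in entity.get("access_patterns", []):
            pattern_id = pattern.get("pattern_id")
            if pattern_id in seen:
                errors.append(f"entities.{entity_name}: duplicate pattern_id '{pattern_id}'")
            seen.add(pattern_id)
    return errors
```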
**Security:**
Schema files must be within the current working directory or subdirectories. Path traversal attempts are blocked for security.
**Validation Output Examples:**
Success:
```
✅ Schema validation passed!
```
Error with suggestions:
```
❌ Schema validation failed:
• entities.User.fields[0].type: Invalid type value 'strng'
💡 Did you mean 'string'? Valid options: string, integer, decimal, boolean, array, object, uuid
```
### Generating Data Access Layer
The `generate_data_access_layer` tool generates type-safe Python code from your validated schema.json file.
**Generated Code:**
- **Entity Classes**: Pydantic models with field validation and type safety
- **Repository Classes**: CRUD operations (create, read, update, delete) for each entity
- **Access Patterns**: Fully implemented query and scan operations from your schema
- **Base Repository**: Shared functionality for all repositories
- **Usage Examples**: Sample code demonstrating how to use the generated classes (optional)
- **Configuration**: ruff.toml for code quality and formatting
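For orientation, a generated entity class along the lines of the first bullet above might look roughly like the following. This is a hypothetical sketch: the actual fields and validation rules depend entirely on your schema, and the optional `email` field is added purely for illustration:
```python
from pydantic import BaseModel, Field


class User(BaseModel):
    """Illustrative sketch of a generated Pydantic entity."""

    user_id: str = Field(..., min_length=1)
    username: str
    name: str
    email: str | None = None  # optional field, purely illustrative
```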
**Prerequisites for Code Generation:**
The generated Python code requires these runtime dependencies:
- `pydantic>=2.0` - For entity validation and type safety
- `boto3>=1.38` - For DynamoDB operations
Install them in your project:
```bash
uv add pydantic boto3
# or
pip install pydantic boto3
```
**Optional Development Dependencies:**
For linting and formatting the generated code:
- `ruff>=0.9.7` - Python linter and formatter (recommended)
**Generated File Structure:**
```
generated_dal/
├── entities.py # Pydantic entity models
├── repositories.py # Repository classes with CRUD operations
├── base_repository.py # Base repository functionality
├── transaction_service.py # Cross-table transaction methods (if schema includes cross_table_access_patterns)
├── access_pattern_mapping.json # Pattern ID to method mapping
├── usage_examples.py # Sample usage code (if enabled)
└── ruff.toml # Linting configuration
```
**Using Generated Code:**
The generated code provides type-safe entity classes and repository methods for all your access patterns:
```python
from generated_dal.repositories import UserRepository
from generated_dal.entities import User
# Initialize repository
repo = UserRepository(table_name="MyTable")
# Create a new user
user = User(user_id="123", username="username", name="John Doe")
repo.create(user)
# Query by access pattern
users = repo.get_user_by_username(username="username")
# Update user
user.name = "Jane Doe"
repo.update(user)
```
For linting and formatting the generated code with ruff:
```bash
ruff check generated_dal/ # Check for issues
ruff check --fix generated_dal/ # Auto-fix issues
ruff format generated_dal/ # Format code
```
| text/markdown | Amazon Web Services | AWSLabs MCP <203918161+awslabs-mcp@users.noreply.github.com>, Erben Mo <moerben@amazon.com> | null | null | Apache-2.0 | null | [
"License :: OSI Approved :: Apache Software License",
"Operating System :: OS Independent",
"Programming Language :: Python",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programmin... | [] | null | null | >=3.10 | [] | [] | [] | [
"awslabs-aws-api-mcp-server==1.0.2",
"awslabs-mysql-mcp-server==1.0.9",
"boto3>=1.40.5",
"dspy-ai>=2.6.27",
"jinja2>=3.1.0",
"jinja2>=3.1.6",
"loguru==0.7.3",
"mcp[cli]==1.23.0",
"psutil==7.1.1",
"pydantic==2.11.7",
"strands-agents>=1.5.0",
"typing-extensions==4.14.1"
] | [] | [] | [] | [
"homepage, https://awslabs.github.io/mcp/",
"docs, https://awslabs.github.io/mcp/servers/dynamodb-mcp-server/",
"documentation, https://awslabs.github.io/mcp/servers/dynamodb-mcp-server/",
"repository, https://github.com/awslabs/mcp.git",
"changelog, https://github.com/awslabs/mcp/blob/main/src/dynamodb-mcp... | twine/6.1.0 CPython/3.13.7 | 2026-02-19T10:54:38.494398 | awslabs_dynamodb_mcp_server-2.0.14.tar.gz | 778,229 | 5e/58/e087730939f2de08ecd0f2845823ecfeb9d20d929c9c88e542cbe77fc175/awslabs_dynamodb_mcp_server-2.0.14.tar.gz | source | sdist | null | false | a714e94049fb9b7b14e7bdc5cc39900a | 53270c3c347d3ddf9d181a5d2117587ec041ec7bc76b2743a23d1cf502cf0210 | 5e58e087730939f2de08ecd0f2845823ecfeb9d20d929c9c88e542cbe77fc175 | null | [
"LICENSE",
"NOTICE"
] | 0 |
2.4 | pyretailscience | 0.43.0 | Retail Data Science Tools | <!-- README.md -->

# PyRetailScience
⚡ Rapid bespoke and deep dive retail analytics ⚡
PyRetailScience equips you with a wide array of retail analytical capabilities,
from segmentations to gain-loss analysis. Leave the mundane to us and elevate your role
from data janitor to insights virtuoso.
## Installation
To get the latest release:
```bash
pip install pyretailscience
```
Alternatively, if you want the very latest version of the package you can install it from GitHub:
```bash
pip install git+https://github.com/Data-Simply/pyretailscience.git
```
## Features
- **Tailored for Retail**: Leverage pre-built functions designed specifically for retail analytics. From customer
segmentations to gains loss analysis, PyRetailScience provides over a dozen building blocks you need to tackle
retail-specific challenges efficiently and effectively.

- **Reliable Results**: Built with extensive unit testing and best practices, PyRetailScience ensures the accuracy
and reliability of your analyses. Confidently present your findings, knowing they're backed by a robust,
well-tested framework.
- **Professional Charts**: Say goodbye to hours of tweaking chart styles. PyRetailScience delivers beautifully
standardized visualizations that are presentation-ready with just a few lines of code. Impress stakeholders and
save time with our pre-built, customizable chart templates.

- **Workflow Automation**: PyRetailScience streamlines your workflow by automating common retail analytics tasks.
Easily loop analyses over different dimensions like product categories or countries, and seamlessly use the output
of one analysis as input for another. Spend less time on data manipulation and more on generating valuable insights (see the sketch just below).
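A minimal sketch of that looping pattern, assuming the transactions DataFrame `df` and the period masks `time_period_1`/`time_period_2` from the gain-loss example in the next section; the brand list and labels are purely illustrative:
```python
from pyretailscience.analysis.gain_loss import GainLoss

# Loop one analysis over a dimension (here: a set of focus brands)
for brand in ["Calvin Klein", "Diesel"]:
    gl = GainLoss(
        df,
        p1_index=time_period_1,
        p2_index=time_period_2,
        # Focus on one brand per iteration...
        focus_group_index=df["brand_name"] == brand,
        focus_group_name=brand,
        # ...and compare it against everything else
        comparison_group_index=df["brand_name"] != brand,
        comparison_group_name="All other brands",
        value_col="total_price",
    )
    gl.plot(x_label=f"Revenue Change - {brand}", move_legend_outside=True)
```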
## Examples
### Gains Loss Analysis
Here is an excerpt from the gain loss analysis example [notebook](https://pyretailscience.datasimply.co/examples/gain_loss/)
```python
from pyretailscience.analysis.gain_loss import GainLoss
gl = GainLoss(
df,
# Flag the rows of period 1
p1_index=time_period_1,
# Flag the rows of period 2
p2_index=time_period_2,
# Flag which rows are part of the focus group.
# Namely, which rows are Calvin Klein sales
focus_group_index=df["brand_name"] == "Calvin Klein",
focus_group_name="Calvin Klein",
# Flag which rows are part of the comparison group.
# Namely, which rows are Diesel sales
comparison_group_index=df["brand_name"] == "Diesel",
comparison_group_name="Diesel",
# Finally we specify that we want to calculate
# the gain/loss in total revenue
value_col="total_price",
)
# Ok now let's plot the result
gl.plot(
x_label="Revenue Change",
source_text="Transactions 2023-01-01 to 2023-12-31",
move_legend_outside=True,
)
plt.show()
```

### Cross Shop Analysis
Here is an excerpt from the cross shop analysis example [notebook](https://pyretailscience.datasimply.co/examples/cross_shop/)
```python
from pyretailscience.analysis import cross_shop
cs = cross_shop.CrossShop(
df,
group_1_col="category_name",
group_1_val="Jeans",
group_2_col="category_name",
group_2_val="Shoes",
group_3_col="category_name",
group_3_val="Dresses",
labels=["Jeans", "Shoes", "Dresses"],
)
cs.plot(
title="Jeans are a popular cross-shopping category with dresses",
source_text="Source: Transactions 2023-01-01 to 2023-12-31",
figsize=(6, 6),
)
plt.show()
# Let's see which customers were in which groups
display(cs.cross_shop_df.head())
# And the totals for all groups
display(cs.cross_shop_table_df)
```

### Customer Retention Analysis
Here is an excerpt from the customer retention analysis example [notebook](https://pyretailscience.datasimply.co/examples/retention/)
```python
# `dbp` is the purchase-timing analysis object created earlier in the linked notebook
ax = dbp.plot(
figsize=(10, 5),
bins=20,
cumulative=True,
draw_percentile_line=True,
percentile_line=0.8,
source_text="Source: Transactions in 2023",
title="When Do Customers Make Their Next Purchase?",
)
# Let's dress up the chart with a bit of text and get rid of the legend
churn_period = dbp.purchases_percentile(0.8)
ax.annotate(
f"80% of customers made\nanother purchase within\n{round(churn_period)} days",
xy=(churn_period, 0.81),
xytext=(dbp.purchase_dist_s.min(), 0.8),
fontsize=15,
ha="left",
va="center",
arrowprops=dict(facecolor="black", arrowstyle="-|>", connectionstyle="arc3,rad=-0.25", mutation_scale=25),
)
ax.legend().set_visible(False)
plt.show()
```

## Documentation
Please see [this site](https://pyretailscience.datasimply.co/) for full documentation, which includes:
- [Analysis Modules](https://pyretailscience.datasimply.co/analysis_modules/): Overview of the framework and the
structure of the docs.
- [Examples](https://pyretailscience.datasimply.co/examples/retention/): If you're looking to build something
specific or are more of a hands-on learner, check out our examples. This is the best place to get started.
- [API Reference](https://pyretailscience.datasimply.co/api/gain_loss/): Thorough documentation of every class
and method.
## Contributing
We welcome contributions from the community to enhance and improve PyRetailScience. To contribute, please follow these steps:
1. Fork the repository.
2. Create a new branch for your feature or bug fix.
3. Make your changes and commit them with clear messages.
4. Push your changes to your fork.
5. Open a pull request to the main repository's `main` branch.
Please make sure to follow the existing coding style and provide unit tests for new features.
## Contact / Support
This repository is supported by Data Simply.
If you are interested in seeing what Data Simply can do for you, please
[email us](mailto:murray@datasimply.co). We work with companies at a variety of scales and with varying levels of
data and retail analytics sophistication, to help them build, scale or streamline their analysis capabilities.
## Contributors
<a href="https://github.com/Data-Simply/pyretailscience/graphs/contributors">
<img src="https://contrib.rocks/image?repo=Data-Simply/pyretailscience" alt="Contributors" />
</a>
Made with [contrib.rocks](https://contrib.rocks).
## Acknowledgements
Built with expertise doing analytics and data science for scale-ups to multi-nationals, including:
- Loblaws
- Dominos
- Sainsbury's
- IKI
- Migros
- Sephora
- Nectar
- Metro
- Coles
- GANNI
- Mindful Chef
- Auchan
- Attraction Tickets Direct
- Roman Originals
## Testing
PyRetailScience includes comprehensive unit and integration tests to ensure reliability across different backends.
### Unit Tests
Run unit tests using pytest:
```bash
# Install dependencies
uv sync
# Run all unit tests
uv run pytest
# Run specific test file
uv run pytest tests/test_file.py
# Run with coverage
uv run pytest --cov=pyretailscience
```
### Multi-Python Version Testing
PyRetailScience supports Python 3.10, 3.11, 3.12, and 3.13. You can test across all supported versions locally using tox:
```bash
# Test all supported Python versions
tox -e py310,py311,py312,py313
# Test specific Python version
tox -e py313
# Run tests in parallel across versions
tox -p auto
```
**Prerequisites:**
- Multiple Python versions installed on your system
- tox installed (`uv sync` installs it automatically)
### Integration Tests
Integration tests verify that all analysis modules work correctly with distributed computing engines (PySpark and
BigQuery). These tests ensure the Ibis-based code paths function properly across different execution environments.
#### PySpark Integration Tests
The PySpark integration tests run locally using the same pytest framework as other tests.
**Prerequisites:**
- Python environment with dependencies installed (`uv sync`)
**Running locally:**
```bash
# Run all PySpark tests
uv run pytest tests/integration -k "pyspark" -v
# Run specific PySpark test
uv run pytest tests/integration/test_cohort_analysis.py -k "pyspark" -v
```
#### BigQuery Integration Tests
The BigQuery integration tests verify compatibility with Google BigQuery as a backend.
**Prerequisites:**
- Access to a Google Cloud Platform account
- A service account with BigQuery permissions
- The service account key JSON file
- The test dataset loaded in BigQuery (dataset: `test_data`, table: `transactions`)
**Running locally:**
```bash
# Set up authentication
export GOOGLE_APPLICATION_CREDENTIALS=/path/to/your/service-account-key.json
export GCP_PROJECT_ID=your-project-id
# Install dependencies
uv sync
# Run all BigQuery tests
uv run pytest tests/integration -k "bigquery" -v
# Run specific test module
uv run pytest tests/integration/bigquery/test_cohort_analysis.py -v
```
## License
This project is licensed under the Elastic License 2.0 - see the [LICENSE](LICENSE) file for details.
| text/markdown | null | Murray Vanwyk <2493311+mvanwyk@users.noreply.github.com> | null | null | null | null | [
"License :: Other/Proprietary License",
"Operating System :: OS Independent",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: 3.13"
] | [] | null | null | <3.14,>=3.10 | [] | [] | [] | [
"duckdb<2,>=1.0.0",
"ibis-framework[duckdb]<11,>=10.0.0",
"matplotlib-set-diagrams~=0.0.2",
"matplotlib<4,>=3.9.1",
"numpy<=2,>=1.26.3",
"pandas<3,>=2.2.3",
"pyarrow<23,>=18.0.0",
"scikit-learn<2,>=1.4.2",
"scipy<2,>=1.14.1",
"textalloc>=1.2.1",
"toml<0.11,>=0.10.2",
"tqdm<5,>=4.66.1"
] | [] | [] | [] | [] | twine/6.1.0 CPython/3.13.7 | 2026-02-19T10:54:32.339148 | pyretailscience-0.43.0.tar.gz | 7,507,398 | 5a/a4/4282359045323be7faee3a860a594fb40b45ef45fa8359392c055e94c0a9/pyretailscience-0.43.0.tar.gz | source | sdist | null | false | ef5a4d3be70a8fbf8e4bb6245d3cbeb5 | 35ca40738f1d7357cdd4143daf096ee8716780cfdfdb685337430f0b6f51cf1d | 5aa44282359045323be7faee3a860a594fb40b45ef45fa8359392c055e94c0a9 | Elastic-2.0 | [
"LICENSE"
] | 238 |
2.4 | InferAGNI | 26.2.19 | Infer planet properties using AGNI as a static structure model | # AGNI Inference Package
Inferring planet properties using AGNI as a static structure model.
## Get started
1. Install Python 3.12 and a distribution of conda
2. `pip install -e .`
3. `inferagni infer "L 98-59 d"`
### Quick links
* AGNI repo: https://github.com/nichollsh/AGNI/
* AGNI docs: https://www.h-nicholls.space/AGNI/
* Zalmoxis docs: https://proteus-framework.org/Zalmoxis/
* Paper: https://www.overleaf.com/project/6853d410bda854791be86cd7
Available under GPLv3. Copyright (c) 2026 Harrison Nicholls.
| text/markdown | null | Harrison Nicholls <harrison.nicholls@ast.cam.ac.uk> | null | null | null | Astronomy, Exoplanets, Model-coupling | [
"Development Status :: 3 - Alpha",
"Intended Audience :: Science/Research",
"Natural Language :: English",
"Operating System :: OS Independent",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: 3.13",
"Topic :: Scientific/Engineering :... | [] | null | null | >=3.12 | [] | [] | [] | [
"cmcrameri",
"matplotlib",
"netCDF4",
"numpy>=2.0.0",
"pandas",
"scipy",
"pre-commit",
"platformdirs",
"ruff",
"click",
"emcee",
"corner",
"adjusttext",
"exoatlas",
"coverage; extra == \"develop\"",
"tomlkit>=0.11.0; extra == \"develop\"",
"pytest>=8.1; extra == \"develop\"",
"pyte... | [] | [] | [] | [
"homepage, https://www.h-nicholls.space/AGNI/",
"issues, https://github.com/nichollsh/InferAGNI/issues",
"source, https://github.com/nichollsh/InferAGNI/",
"documentation, https://www.h-nicholls.space/AGNI/",
"changelog, https://github.com/nichollsh/InferAGNI/releases"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-19T10:54:08.717246 | inferagni-26.2.19.tar.gz | 31,881 | 69/45/4a0fb24bad8f3eee0aa573d4533b536368bf16a2e38d6ed81b22e3e8d468/inferagni-26.2.19.tar.gz | source | sdist | null | false | 26661a0cba41510f85397c5592958d37 | 025229cafbf93ff813b477f4fafb6af6bddc1c90da19b9e50407283e8121d92d | 69454a0fb24bad8f3eee0aa573d4533b536368bf16a2e38d6ed81b22e3e8d468 | GPL-3.0-or-later | [
"LICENSE.txt"
] | 0 |
2.4 | sumoITScontrol | 0.1.0 | Traffic Controller Collection for SUMO Traffic Simulations | <h1>
<center>
<table width="100%">
<tr>
<td align="center">
<img src="resources/Figure_Banner.PNG"
alt="sumoITScontrol"
style="height: 3.5em; vertical-align: middle; margin-right: 0.4em;">
sumoITScontrol <img src="resources/Figure_Banner.PNG"
alt="sumoITScontrol"
style="height: 3.5em; vertical-align: middle; margin-left: 0.4em;">
</td>
</tr>
<tr>
<td align="center">
Traffic Controller Collection for SUMO Traffic Simulations
</td>
</tr>
</table>
</center>
</h1>
**sumoITScontrol** is an open-source Python framework that provides a standardized collection of established traffic controllers for the SUMO simulator, covering traffic signal control and freeway ramp metering algorithms.
It enables reproducible, variance-aware benchmarking of intelligent traffic control methods through consistent implementations and rigorous evaluation practices.
<details>
<summary><strong>Table of Contents</strong></summary>
- [Highlights](#highlights)
- [Installation](#installation)
- [Usage](#usage)
- [Case Study Demonstrations](#case-study-demonstrations)
- [Ramp Metering](#ramp-metering)
- [Signalised Intersection Management](#signalised-intersection-management)
- [Calibration / Fine-Tuning of Control Parameters](#calibration)
- [Documentation](#documentation)
- [Control Context Objects](#control-context-objects)
- [RampMetering](#rampmetering)
- [RampMeteringCoordinationGroup](#rampmeteringcoordinationgroup)
- [Intersection](#intersection)
- [IntersectionGroup](#intersectiongroup)
- [Ramp Metering](#ramp-metering)
- [ALINEA](#alinea)
- [HERO](#hero)
- [METALINE](#metaline)
- [Signalised Intersection Management](#signalised-intersection-management)
- [Max-Pressure (Fixed-Cycle)](#max-pressure-fixed)
- [Max-Pressure (Flexible-Cycle)](#max-pressure-flexible)
- [SCOOT/SCATS](#scoot/scats)
- [Citations](#citations)
</details>
## Highlights
<center>
<table>
<tr>
<td colspan="2"><b><center>Covered Controllers</center></b></td>
</tr>
<tr>
<td><center><i>Intersection Management</i></center></td>
<td><center><i>Ramp Metering</i></center></td>
</tr>
<tr>
<td>
<ul>
<li>Max-Pressure (Fixed Cycle)</li>
<li>Max-Pressure (Flexible Cycle)</li>
<li>SCOOT/SCATS</li>
<li>...</li>
</ul>
</td>
<td>
<ul>
<li>ALINEA</li>
<li>HERO</li>
<li>METALINE</li>
<li>...</li>
</ul>
</td>
<td>
</tr>
</table>
<table>
<tr>
<td> <a href="resources/highlight_ramp.PNG" > <img src="resources/highlight_ramp.PNG" /> </a> </td>
<td> <a href="resources/highlight_inter.PNG" > <img src="resources/highlight_inter.PNG" /> </a> </td>
<td> <a href="resources/highlight_fsm.PNG" > <img src="resources/highlight_fsm.PNG" /> </a> </td>
<td> <a href="resources/highlight_scosca.PNG" > <img src="resources/highlight_scosca.PNG" /> </a> </td>
</tr>
</table>
</center>
### **==> Link to [Documentation](#documentation) Page <==**
## Installation
The python package **sumoITScontrol** can be installed using pip:
```bash
pip install sumoITScontrol
```
## Usage
You can use sumoITScontrol as a Python library to easily integrate ITS controllers into your SUMO simulations.
For this, you only need to add one line of code to your main loop when using SUMO with TraCI.
Some additional preparation, such as defining sensors, traffic lights, and control parameters, is also necessary.
The usage is illustrated below using the ALINEA controller and ramp metering as an example.
*For further details, please see the section on [Case Study Demonstrations](#case-study-demonstrations).*
**Step 1: Define Sensors and Traffic Lights**
You need to provide a control context such as a `RampMeter` or `RampMeterCoordinationGroup` in the context of ramp metering, and `Intersection` or `IntersectionGroup` in the context of signalised intersection management.
The context informs sumoITScontrol about relevant sensors (`e2_5`, `e2_4`, `e2_0`) and traffic lights (`J0`).
``` python
ramp_meter = RampMeter(
tl_id="J0",
mainline_sensors=["e2_5", "e2_4"],
queue_sensors=["e2_0"],
)
```
**Step 2: Define Control Parameters**
You need to provide a controller and the relevant parameters to connect it to the control context, such as `ALINEA`, `HERO`, or `METALINE` in the context of ramp metering, and `MaxPressure_Fix`, `MaxPressure_Flex`, or `ScootScats` in the context of signalised intersection management.
``` python
controller = ALINEA(
params={
"target_occupancy": 10,
"K_P": 30,
"K_I": 0,
"cycle_duration": 60,
"measurement_period": int(
60 / 0.5
), # int(cycle_duration / simulation.time_step)
"min_rate": 5,
"max_rate": 100,
},
ramp_meter=ramp_meter,
)
```
**Step 3: Add One Line to Main Loop**
Any SUMO simulation executed with TraCI has the same basic structure: start TraCI, call `traci.simulationStep()` in a loop for the duration of the simulation, and then close TraCI.
All you have to do (for most controllers) is add one line of code inside your loop, as outlined below.
**Please Note:** For the controller `ScootScats`, you might need to add one line to the initialization (after `traci.start()` but before `traci.simulationStep()`). *For further details, please see the section on [Case Study Demonstrations](#case-study-demonstrations).*
``` python
# Start Sumo
traci.start(SUMO_CMD)
# Initialize
# Execute Simulation
for simulation_timestep in range(0, SIMULATION_DURATION):
# run one step
traci.simulationStep()
# retrieve time
current_time = traci.simulation.getCurrentTime()
# execute control
controller.execute_control(current_time) # <-- !!! ADD THIS LINE !!!
# Stop Sumo
traci.close()
```
## Case Study Demonstrations
Demo Python scripts and SUMO simulations for each implemented controller can be found in `.\sumoITScontrol\demos\*.py` and `.\sumoITScontrol\demos\demo_simulation_models\`, respectively.
The Python scripts open `sumo-gui` for the simulation, and afterwards render figures with control-relevant statistics as reported in the paper.
There are three case studies:
- Control in the context of ramp metering
- Control in the context of signalised intersection management
- Calibration / Fine-tuning of control parameters
### Ramp Metering

<details>
The ramp metering case study consists of a network with three different ramp designs.
There are multiple different demand scenarios, to showcase the performance of local controllers (e.g. ALINEA), and coordinated controllers (e.g. HERO, METALINE).
Relevant demos include:
```bash
python demo_ALINEA.py
python demo_HERO.py
python demo_METALINE.py
```
</details>
### Signalised Intersection Management
<img src="resources/CaseStudy_IntersectionMan.PNG"
alt="Intersection management case study"
style="width:50%; height:auto;">
<details>
The signalised intersection management case study consists of an arterial network (Schorndorfer Strasse, Esslingen am Neckar) with seven intersections, where five are signalised (controlled).
The case study serves to demonstrate the performance of local controllers (e.g. Max-Pressure), and coordinated controllers (e.g. SCOOT/SCATS).
Relevant demos include:
```bash
python demo_MAX_FIX.py
python demo_MAX_FLEX.py
python demo_SCOSCA.py
```
</details>
### Calibration / Fine-Tuning of Control Parameters
<details>
Using the ramp metering study and the ALINEA controller as an example, this script explores different ALINEA parameters by running SUMO simulations with 20 different random seeds, and then reports the mean and standard deviation for each controller configuration (parameter set).
Relevant demos include:
```bash
python demo_optimisation.py
```
which calls the python script `demo_optimisation_execute_script.py`.
</details>
## Documentation
This documentation lists specific details to control context objects and traffic control algorithms for ramp metering and signalised intersection management.
<details>
### Control Context Objects
#### RampMetering
This is an example of a `RampMeter` object specification.
The sensors can be E2 sensors or E1 sensors.
```python
ramp_meter = RampMeter(
tl_id="J0",
mainline_sensors=["e2_5", "e2_4"],
queue_sensors=["e2_0"],
)
```
#### RampMeteringCoordinationGroup
This is an example of a `RampMeterCoordinationGroup` object specification.
The sensors can be E2 sensors or E1 sensors.
```python
ramp_meter_group = RampMeterCoordinationGroup(
ramp_meters_ordered=[
RampMeter(
tl_id="J12",
mainline_sensors=["e1_13", "e1_14"],
queue_sensors=["e2_1", "e2_2"],
smoothening_factor=0.1,
saturation_flow_veh_per_sec=0.5,
),
RampMeter(
tl_id="J11",
mainline_sensors=["e1_2", "e1_3"],
queue_sensors=["e2_3"],
smoothening_factor=0.1,
saturation_flow_veh_per_sec=0.5,
),
RampMeter(
tl_id="J0",
mainline_sensors=["e2_5", "e2_4"],
queue_sensors=["e2_0"],
smoothening_factor=0.1,
saturation_flow_veh_per_sec=0.5,
),
],
ramp_meter_ids=["J12", "J11", "J0"],
)
```
#### Intersection
This is an example of an `Intersection` object specification.
You can either provide a list of sensors (E2 sensors) or a list of lanes (links) to measure traffic states (queue lengths, degree of saturation).
If both are provided, sensors are taken first.
Provision of `green_states` and `yellow_states` is only necessary for SCOOT/SCATS.
```python
intersection2 = Intersection(
tl_id="intersection2",
phases=[0, 2, 4],
# links = {0:["183049933#0_1", "-38361908#1_1"],
# 2:["-38361908#1_1", "-38361908#1_2"],
# 4:["-25973410#1_1", "758088375#0_1", "758088375#0_2"]},
sensors={
0: ["e2_183049933#0_1", "e2_-38361908#1_1"],
2: ["e2_-38361908#1_1", "e2_-38361908#1_2"],
4: ["e2_-25973410#1", "e2_758088375#0_1", "e2_758088375#0_2"],
},
green_states=["GGrrrGrr", "GGGrrrrr", "rrrGGrGG"],
yellow_states=["yyrrryrr", "yyyrrrrr", "rrryyryy"],
)
```
#### IntersectionGroup
This is an example of an `IntersectionGroup` object specification.
You can either provide a list of sensors (E2 sensors) or a list of lanes (links) to measure traffic states (queue lengths, degree of saturation).
If both are provided, sensors are taken first.
```python
intersection1 = Intersection(
tl_id="intersection1",
...
)
intersection2 = Intersection(
tl_id="intersection2",
...
)
intersection3 = Intersection(
tl_id="intersection3",
...
)
intersection4 = Intersection(
tl_id="intersection4",
...
)
intersection5 = Intersection(
tl_id="intersection5",
...
)
districts = {
"front": ["intersection1", "intersection2"],
"middle": ["intersection3", "intersection4"],
"back": ["intersection5"],
}
critical_district_order = {
"front": [
"intersection1",
"intersection2",
"intersection3",
"intersection4",
"intersection5",
],
"middle": [
"intersection3",
"intersection2",
"intersection4",
"intersection1",
"intersection5",
],
"back": [
"intersection5",
"intersection4",
"intersection3",
"intersection2",
"intersection1",
],
}
connection_between_intersections = {
"intersection1": ["183049934_1", "183049933#0_1", "1164287131#0_1"], # To Int 2
"intersection2": ["38361908#1_1", "E3_1"], # To Int 3
"intersection3": [
"E1_1",
"758088377#1_1",
"758088377#2_1",
"22889927#0_1",
], # To Int 4
"intersection4": [
"22889927#2_1",
"22889927#3_1",
"22889927#4_1",
"387296014#0_1",
"387296014#1_1",
"696225646#1_1",
"696225646#2_1",
"696225646#3_1",
"130569446_1",
"E5_1",
"E6_1",
], # To Int 5
}
intersection_group = IntersectionGroup(
intersections=[
intersection1,
intersection2,
intersection3,
intersection4,
intersection5,
],
districts=districts,
critical_district_order=critical_district_order,
connection_between_intersections=connection_between_intersections,
)
```
### Ramp Metering
#### ALINEA
This is an example of an `ALINEA` object specification.
```python
controller = ALINEA(
params={
"target_occupancy": 10,
"K_P": 30,
"K_I": 0,
"cycle_duration": 60,
"measurement_period": int(
60 / 0.5
), # int(cycle_duration / simulation.time_step)
"min_rate": 5,
"max_rate": 100,
},
ramp_meter=ramp_meter,
)
```
#### HERO
This is an example of a `HERO` object specification.
```python
controller = HERO(
params={
"hero_cycle_duration": 60, # similar to ALINEA cycle duration
"queue_activation_threshold_m": 15.0, # master queue trigger
"queue_release_threshold_m": 2.5, # dissolve cluster
"min_queue_setpoint_m": 5.0, # for slaves
"anticipation_factor": 1.0, # factor to obtain nonconservative prediction of demand to come in next control period
"avg_vehicle_spacing": 7.5, # average vehicle spacing to convert meters to vehicles and vice versa, from queue length measurements
},
coordination_group=ramp_meter_group,
alinea_controllers={
"J12": alinea_controller1,
"J11": alinea_controller2,
"J0": alinea_controller3,
},
)
```
#### METALINE
This is an example of a `METALINE` object specification.
```python
controller = METALINE(
params={
"cycle_duration": 60, # control cycle
"measurement_period": int(
60 / 0.5
), # int(cycle_duration / simulation.time_step)
"min_rate": 5,
"max_rate": 100,
},
coordination_group=ramp_meter_group,
target_occupancies=[10, 10, 10],
# Interaction gain matrix (3x3)
K_P=np.array(
[
[30, -5, 0], # ramp 1 influenced negatively by ramp 2
[-3, 25, -2], # ramp 2 influenced by neighbors
[0, -4, 20],
]
),
# Optional integral gain matrix
K_I=np.zeros(shape=(3, 3)),
)
```
### Signalised Intersection Management
#### Max-Pressure (Fixed-Cycle)
This is an example of a `MaxPressure_Fix` object specification.
```python
controller = MaxPressure_Fix(
params={
"T_L": 3, # Yellow Time
"G_T_MIN": 5, # Min Greentime (used for Max. Pressure)
"G_T_MAX": 50, # Max Greentime (used for Max. Pressure)
"measurement_period": int(1 / 0.25), # int(1 / simulation.time_step)
"cycle_duration": 120,
},
intersection=intersection1
)
```
#### Max-Pressure (Flexible-Cycle)
This is an example of a `MaxPressure_Flex` object specification.
```python
controller = MaxPressure_Flex(
params={
"T_A": 5, # Recheck-Pressure Time (used for Max.Pressure)
"T_L": 3, # Yellow Time
"G_T_MIN": 5, # Min Greentime (used for Max. Pressure)
"G_T_MAX": 50, # Max Greentime (used for Max. Pressure)
"measurement_period": int(1 / 0.25), # int(1 / simulation.time_step)
"cycle_duration": 120,
},
intersection=intersection1
)
```
#### SCOOT/SCATS
This is an example of a `ScootScats` object specification.
```python
controller = ScootScats(
scosca_params = {
"adaptation_cycle": 30,
"adaptation_green": 10,
"green_thresh": 2,
"adaptation_offset": 1,
"offset_thresh": 0.5,
"min_cycle_length": 50,
"max_cycle_length": 180,
"ds_upper_val": 0.925,
"ds_lower_val": 0.875,
"measurement_period": int(1 / 0.25), # 1 / simulation_step_size
"travel_time_adjustments": {
"intersection1": ["183049934_1", 2],
"intersection3": ["E1_1", 3],
"intersection4": ["22889927#3_1", 9],
},
"intersection_offset_rules": {
"intersection2": {
"base_offset_from": None,
"travel_time_from": "intersection2",
},
"intersection4": {
"base_offset_from": None,
"travel_time_from": "intersection3",
},
"intersection1": {
"base_offset_from": "intersection2",
"travel_time_from": "intersection1",
},
"intersection3": {
"base_offset_from": "intersection4",
"travel_time_from": "intersection4",
},
"default": {
"base_offset_from": "intersection4",
"travel_time_from": "intersection4",
},
},
},
intersection_group=intersection_group1,
initial_greentimes={
"intersection1": [30, 30, 21],
"intersection2": [30, 30, 21],
"intersection3": [30, 30, 21],
"intersection4": [40, 30],
"intersection5": [30, 30, 21],
},
initial_cycle_length=120,
)
```
</details>
## Citations
Please cite our paper if you find sumoITScontrol useful:
```
@inproceedings{riehl2026sumoITScontrol,
title={sumoITScontrol: Traffic Controller Collection for SUMO Traffic Simulations},
author={Riehl, Kevin and Kouvelas, Anastasios and Makridis, Michail A.},
booktitle={SUMO Conference Proceedings},
year={2026}
}
```
| text/markdown | null | Kevin Riehl <kriehl@ethz.ch>, Julius Schlapbach <juliussc@ethz.ch> | null | null | null | null | [] | [] | null | null | >=3.9 | [] | [] | [] | [
"numpy",
"traci",
"matplotlib",
"tqdm"
] | [] | [] | [] | [
"Homepage, https://github.com/kriehl/sumoITScontrol"
] | twine/6.2.0 CPython/3.9.25 | 2026-02-19T10:53:30.401738 | sumoitscontrol-0.1.0.tar.gz | 35,067 | 02/f4/9fa9d34eb3233675108ed30fdd2d532f62d5a65a486a5de3628d0fff956d/sumoitscontrol-0.1.0.tar.gz | source | sdist | null | false | 9026dad973016596fa8d9056555357bb | 2d79ffd29883b696373715809039bce5c87534f6b6a0b17fdd2359da3e757a01 | 02f49fa9d34eb3233675108ed30fdd2d532f62d5a65a486a5de3628d0fff956d | null | [
"LICENSE"
] | 0 |
2.4 | wagtail-nhsuk-frontend | 2.0.0 | NHSUK Frontend Styles for Wagtail | # Wagtail NHS.UK frontend
A Wagtail implementation of the [NHS.UK frontend v10.3.1](https://github.com/nhsuk/nhsuk-frontend) standard components.
## Installation
Install the pypi package
```
pip install wagtail-nhsuk-frontend
```
Add to your `INSTALLED_APPS` in wagtail settings
```python
INSTALLED_APPS = [
...
'wagtailnhsukfrontend',
...
]
```
Use blocks in your streamfields
```python
from wagtail.admin.panels import FieldPanel
from wagtail.models import Page
from wagtail.fields import StreamField
from wagtailnhsukfrontend.blocks import ActionLinkBlock, WarningCalloutBlock
class HomePage(Page):
body = StreamField([
# Include any of the blocks you want to use.
('action_link', ActionLinkBlock()),
('callout', WarningCalloutBlock()),
], use_json_field=True)
content_panels = Page.content_panels + [
FieldPanel('body'),
]
```
Use templatetags
```django
{% load nhsukfrontend_tags %}
<html>
...
<body>
{% breadcrumb %}
</body>
</html>
```
Use template includes
```django
{% include 'wagtailnhsukfrontend/header.html' with show_search=True %}
```
See the [component documentation](./docs/components/) for a list of components you can use.
Include the CSS in your base template
```html
<link rel="stylesheet" type="text/css" href="{% static 'wagtailnhsukfrontend/css/nhsuk-frontend-10.3.1.min.css' %}">
```
Include the Javascript in your base template
```html
<script type="text/javascript" src="{% static 'wagtailnhsukfrontend/js/nhsuk-frontend-10.3.1.min.js' %}" defer></script>
```
## Upgrading
If you are upgrading from v0 to v1, see the [changelog](./CHANGELOG.md).
The bundled CSS and JS are taken directly from the [nhsuk-frontend library](https://github.com/nhsuk/nhsuk-frontend/releases/tag/v5.1.0) and provided in this package for convenience.
If you have a more complicated frontend build such as compiling your own custom styles, you might want to [install from npm](https://github.com/nhsuk/nhsuk-frontend/blob/master/docs/installation/installing-with-npm.md) instead.
## Contributing
See the [contributing documentation](./docs/contributing.md) to run the application locally and contribute changes.
## Further reading
See more [documentation](./docs/)
| text/markdown | Paul Flynn | <paul.flynn8@nhs.net> | null | null | null | null | [] | [] | https://github.com/nhsuk/wagtail-nhsuk-frontend | null | null | [] | [] | [] | [
"Wagtail>=5.2",
"beautifulsoup4==4.12.3; extra == \"testing\"",
"Django>=4.2; extra == \"testing\"",
"pytest==8.2.1; extra == \"testing\"",
"pytest-django==4.8.0; extra == \"testing\"",
"flake8<7.0.0,>=5.0.4; extra == \"linting\""
] | [] | [] | [] | [] | twine/6.2.0 CPython/3.14.3 | 2026-02-19T10:52:22.807017 | wagtail_nhsuk_frontend-2.0.0.tar.gz | 89,298 | 5f/a5/011913e247007b123e6b03a81818c64bcff641db850c6d5cdc84dd1e747a/wagtail_nhsuk_frontend-2.0.0.tar.gz | source | sdist | null | false | c67344b3326689b5820d2a3a4f24ffa5 | e14d1e622bb731904d64f296a26569d5e4253a1cbce1d5c9b01cbdbd6bcb6a18 | 5fa5011913e247007b123e6b03a81818c64bcff641db850c6d5cdc84dd1e747a | null | [
"LICENSE"
] | 232 |
2.4 | bdext | 0.1.73 | Estimation of BDEISS-CT parameters from phylogenetic trees. | # bdext
The bdext package provides scripts to train and assess
Deep-Learning-enabled estimators of BD(EI)(SS)(CT) model parameters from phylogenetic trees.
[//]: # ([](https://doi.org/10.1093/sysbio/syad059))
[//]: # ([](https://github.com/evolbioinfo/bdext/releases))
[](https://pypi.org/project/bdext/)
[](https://pypi.org/project/bdext)
[](https://hub.docker.com/r/evolbioinfo/bdext/tags)
## BDEISS-CT model
The Birth-Death (BD) Exposed-Infectious (EI) with SuperSpreading (SS) and Contact-Tracing (CT) model (BDEISS-CT)
can be described with the following 8 parameters:
* average reproduction number R;
* average total infection duration d;
* incubation period d<sub>inc</sub>;
* sampling probability ρ;
* fraction of superspreaders f<sub>S</sub>;
* super-spreading transmission increase X<sub>S</sub>;
* contact tracing probability υ;
* contact-traced removal speed up X<sub>C</sub>.
Setting d<sub>inc</sub>=0 removes incubation (EI), setting f<sub>S</sub>=0 removes superspreading (SS), while setting υ=0 removes contact-tracing (CT).
For identifiability, we require the sampling probability ρ to be given by the user.
The other parameters are estimated from a time-scaled phylogenetic tree.
[//]: # (## BDEISS-CT parameter estimator)
[//]: # ()
[//]: # (The bdeissct_dl package provides deep-learning-based BDEISS-CT model parameter estimator )
[//]: # (from a user-supplied time-scaled phylogenetic tree. )
[//]: # (User must also provide a value for one of the three BD model parameters (λ, ψ, or ρ). )
[//]: # (We recommend providing the sampling probability ρ, )
[//]: # (which could be estimated as the number of tree tips divided by the number of declared cases for the same time period.)
[//]: # ()
[//]: # ()
[//]: # (## Input data)
[//]: # (One needs to supply a time-scaled phylogenetic tree in newick format. )
[//]: # (In the examples below we will use an HIV tree reconstructed from 200 sequences, )
[//]: # (published in [[Rasmussen _et al._ PLoS Comput. Biol. 2017]](https://journals.plos.org/ploscompbiol/article?id=10.1371/journal.pcbi.1005448), )
[//]: # (which you can find at [PairTree GitHub](https://github.com/davidrasm/PairTree) )
[//]: # (and in [hiv_zurich/Zurich.nwk](hiv_zurich/Zurich.nwk). )
[//]: # ()
[//]: # (## Installation)
[//]: # ()
[//]: # (There are 4 alternative ways to run __bdeissct_dl__ on your computer: )
[//]: # (with [docker](https://www.docker.com/community-edition), )
[//]: # ([apptainer](https://apptainer.org/),)
[//]: # (in Python3, or via command line (requires installation with Python3).)
[//]: # ()
[//]: # ()
[//]: # ()
[//]: # (### Run in python3 or command-line (for linux systems, recommended Ubuntu 21 or newer versions))
[//]: # ()
[//]: # (You could either install python (version 3.9 or higher) system-wide and then install bdeissct_dl via pip:)
[//]: # (```bash)
[//]: # (sudo apt install -y python3 python3-pip python3-setuptools python3-distutils)
[//]: # (pip3 install bdeissct_dl)
[//]: # (```)
[//]: # ()
[//]: # (or alternatively, you could install python (version 3.9 or higher) and bdeissct_dl via [conda](https://conda.io/docs/) (make sure that conda is installed first). )
[//]: # (Here we will create a conda environment called _phyloenv_:)
[//]: # (```bash)
[//]: # (conda create --name phyloenv python=3.12)
[//]: # (conda activate phyloenv)
[//]: # (pip install bdeissct_dl)
[//]: # (```)
[//]: # ()
[//]: # ()
[//]: # (#### Basic usage in a command line)
[//]: # (If you installed __bdeissct_dl__ in a conda environment (here named _phyloenv_), do not forget to first activate it, e.g.)
[//]: # ()
[//]: # (```bash)
[//]: # (conda activate phyloenv)
[//]: # (```)
[//]: # ()
[//]: # (Run the following command to estimate the BDEISS_CT parameters and their 95% CIs for this tree, assuming the sampling probability of 0.25, )
[//]: # (and save the estimated parameters to a comma-separated file estimates.csv.)
[//]: # (```bash)
[//]: # (bdeissct_infer --nwk Zurich.nwk --ci --p 0.25 --log estimates.csv)
[//]: # (```)
[//]: # ()
[//]: # (#### Help)
[//]: # ()
[//]: # (To see detailed options, run:)
[//]: # (```bash)
[//]: # (bdeissct_infer --help)
[//]: # (```)
[//]: # ()
[//]: # ()
[//]: # (### Run with docker)
[//]: # ()
[//]: # (#### Basic usage)
[//]: # (Once [docker](https://www.docker.com/community-edition) is installed, )
[//]: # (run the following command to estimate BDEISS-CT model parameters:)
[//]: # (```bash)
[//]: # (docker run -v <path_to_the_folder_containing_the_tree>:/data:rw -t evolbioinfo/bdeissct --nwk /data/Zurich.nwk --ci --p 0.25 --log /data/estimates.csv)
[//]: # (```)
[//]: # ()
[//]: # (This will produce a comma-separated file estimates.csv in the <path_to_the_folder_containing_the_tree> folder,)
[//]: # ( containing the estimated parameter values and their 95% CIs (can be viewed with a text editor, Excel or Libre Office Calc).)
[//]: # ()
[//]: # (#### Help)
[//]: # ()
[//]: # (To see advanced options, run)
[//]: # (```bash)
[//]: # (docker run -t evolbioinfo/bdeissct -h)
[//]: # (```)
[//]: # ()
[//]: # ()
[//]: # ()
[//]: # (### Run with apptainer)
[//]: # ()
[//]: # (#### Basic usage)
[//]: # (Once [apptainer](https://apptainer.org/docs/user/latest/quick_start.html#installation) is installed, )
[//]: # (run the following command to estimate BDEISS-CT model parameters (from the folder where the Zurich.nwk tree is contained):)
[//]: # ()
[//]: # (```bash)
[//]: # (apptainer run docker://evolbioinfo/bdeissct --nwk Zurich.nwk --ci --p 0.25 --log estimates.csv)
[//]: # (```)
[//]: # ()
[//]: # (This will produce a comma-separated file estimates.csv,)
[//]: # ( containing the estimated parameter values and their 95% CIs (can be viewed with a text editor, Excel or Libre Office Calc).)
[//]: # ()
[//]: # ()
[//]: # (#### Help)
[//]: # ()
[//]: # (To see advanced options, run)
[//]: # (```bash)
[//]: # (apptainer run docker://evolbioinfo/bdeissct -h)
[//]: # (```)
[//]: # ()
[//]: # ()
| text/markdown | Anna Zhukova | anna.zhukova@pasteur.fr | null | null | null | phylogenetics, birth-death model, incubation, super-spreading, contact tracing | [
"Development Status :: 4 - Beta",
"Environment :: Console",
"Intended Audience :: Developers",
"Topic :: Scientific/Engineering :: Bio-Informatics",
"Topic :: Software Development",
"Topic :: Software Development :: Libraries :: Python Modules"
] | [] | https://github.com/modpath/bdeissct | null | null | [] | [] | [] | [
"tensorflow==2.19.0",
"six",
"ete3",
"numpy==2.0.2",
"scipy==1.14.1",
"biopython",
"scikit-learn==1.5.2",
"pandas==2.2.3",
"treesumstats==0.7"
] | [] | [] | [] | [] | twine/6.1.0 CPython/3.10.17 | 2026-02-19T10:52:13.507898 | bdext-0.1.73.tar.gz | 31,864 | 68/c8/c2c6bc213f8e14221630b4c5d4cca1ac24d4af88088b9685f66945e537ff/bdext-0.1.73.tar.gz | source | sdist | null | false | 0670c3d9879971d239013918b615dcef | ae71216de4a070551011ffb4437447e092f35285982282a12a944cc0442955dc | 68c8c2c6bc213f8e14221630b4c5d4cca1ac24d4af88088b9685f66945e537ff | null | [
"LICENSE"
] | 255 |
2.4 | randommachine | 0.1.2 | Random ensemble learning | # RandomMachine
[](https://pypi.org/project/randommachine/) [](https://github.com/ghiffaryr/randommachine/blob/master/LICENSE.md) [](https://doi.org/10.5281/zenodo.18687466)
Random ensemble learning library that extends gradient boosting by randomly sampling base learners from a pool of LightGBM, CatBoost, XGBoost, and arbitrary scikit-learn-compatible estimators for improved ensemble diversity.
## Installation
```bash
pip install randommachine
```
Or install from source:
```bash
git clone https://github.com/ghiffaryr/randommachine.git
cd randommachine
pip install -e .
```
## Quick Start
### Regression
```python
from randommachine import RandomLGBMRegressor
from sklearn.datasets import make_regression
from sklearn.model_selection import train_test_split
X, y = make_regression(n_samples=1000, n_features=20, random_state=42)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2)
model = RandomLGBMRegressor(num_iterations=20, learning_rate=0.5, random_state=42)
model.fit(X_train, y_train, X_eval=X_test, y_eval=y_test)
predictions = model.predict(X_test)
```
### Classification
```python
from randommachine import RandomCatBoostClassifier
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
X, y = make_classification(n_samples=1000, n_features=20, random_state=42)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2)
model = RandomCatBoostClassifier(num_iterations=20, learning_rate=0.5)
model.fit(X_train, y_train, X_eval=X_test, y_eval=y_test)
predictions = model.predict(X_test)
probabilities = model.predict_proba(X_test)
```
## Available Models
**LightGBM-based:**
- `RandomLGBMRegressor` - Regression with random LightGBM base learners
- `RandomLGBMClassifier` - Classification with random LightGBM base learners
**CatBoost-based:**
- `RandomCatBoostRegressor` - Regression with random CatBoost base learners
- `RandomCatBoostClassifier` - Classification with random CatBoost base learners
**XGBoost-based:**
- `RandomXGBRegressor` - Regression with random XGBoost base learners
- `RandomXGBClassifier` - Classification with random XGBoost base learners
**Generic (user-defined pool):**
- `RandomRegressor` - Mix any sklearn-compatible regressors with custom probabilities
- `RandomClassifier` - Mix any sklearn-compatible classifiers with custom probabilities
## Tutorial
An interactive Jupyter notebook is available in the `/docs` folder:
- [Tutorial](docs/tutorial.ipynb) - Getting started guide with **performance comparison vs plain LightGBM, CatBoost, and XGBoost baselines**
```bash
cd docs/
jupyter notebook tutorial.ipynb
```
The tutorial includes side-by-side comparisons showing RandomMachine's improvement over fixed-family baselines.
## Development
Run tests:
```bash
make test # Run all tests
make test-cov # With coverage report
```
Format code:
```bash
make format # Black formatting
make lint # Flake8 linting
```
## License
MIT License - see [LICENSE](LICENSE)
| text/markdown | Ghiffary Rifqialdi | grifqialdi@gmail.com | null | null | null | null | [
"Development Status :: 3 - Alpha",
"Intended Audience :: Science/Research",
"Topic :: Scientific/Engineering :: Artificial Intelligence",
"License :: OSI Approved :: MIT License",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.7",
"Programming Language :: Python :: 3.8",
"... | [] | https://github.com/ghiffaryr/randommachine | null | >=3.7 | [] | [] | [] | [
"numpy>=1.19.0",
"scikit-learn>=0.24.0",
"lightgbm>=3.0.0",
"catboost>=1.0.0",
"pytest>=6.0; extra == \"dev\"",
"pytest-cov>=2.0; extra == \"dev\"",
"black>=21.0; extra == \"dev\"",
"flake8>=3.9; extra == \"dev\""
] | [] | [] | [] | [] | twine/6.2.0 CPython/3.13.11 | 2026-02-19T10:49:30.887111 | randommachine-0.1.2.tar.gz | 306,286 | df/b3/ad9d44c9141f99d7c70d5d7c26abe6c1e2b1fec760ba7cf5423f281b8ce9/randommachine-0.1.2.tar.gz | source | sdist | null | false | 776c168b7255b909fb457e579dacd467 | a8698e66750ddbde132900a4492bd89841b90734d0d375721e0521ff49b1412c | dfb3ad9d44c9141f99d7c70d5d7c26abe6c1e2b1fec760ba7cf5423f281b8ce9 | null | [
"LICENSE"
] | 231 |
2.4 | wavepacket | 0.3.0 | A package for the propagation of quantum-mechanical wave functions. | Description
-----------
Wavepacket is a Python package to define and simulate small
quantum systems. Or, more technically, it allows you to numerically
solve Schrödinger and Liouville-von-Neumann equations for
distinguishable particles.
The full documentation can be found under https://wavepacket.readthedocs.io.
There are many different quantum systems and consequently approaches
to solve them. Here we focus on a particular niche:
- Wavepacket solves the differential equations directly. This simplifies
the maths, but limits the system size to few degrees of freedom.
If you want to deal with larger systems, look out for MCTDH.
- Wavepacket uses the DVR approximation heavily. This allows you to
directly define your potentials as functions of real-space coordinates
instead of setting up opaque operator matrices.
The latter approach is simpler and more concise if you are only
interested in harmonic oscillators or qubits, though.
- Wavepacket is a Python-only package relying chiefly on numpy.
This is slower than natively implemented code, but you gain
great tooling support, for example matplotlib, Jupyter notebooks or
integrated documentation.
- Most of the code can handle both wave functions and density operators.
This allows you to convert a closed system into an open
system with minimal fuss.
For example use cases, we have been using various precursors of this
package for simulating small molecular systems and for teaching.
Besides examples shipped with this package, see
https://sourceforge.net/p/wavepacket/wiki/Demos.Main for more applications.
The project is currently a first iteration to flesh out everything. Once
0.1 is released, I plan to quickly translate the existing C++ code from
a precursor project and reach a stable state. More can be found on the
project homepage https://github.com/ulflor/wavepacket
Support
-------
If you lack a feature that you would like to have, open an issue at
`our issue tracker <https://github.com/ulflor/wavepacket/issues>`_.
Depending on the complexity of the feature, this will lead to an immediate,
rapid, or prioritized implementation.
Contribution
------------
I currently lack a formal procedure for new contributors, but you are
very welcome to contribute to the project. If you do not know what to
do, feel free to contact one of the developers; there is enough work for
multiple developers, it is just not documented yet.
History
-------
The original version of Wavepacket was written in Matlab and is still
maintained under https://sourceforge.net/p/wavepacket/matlab. It is stable,
battle-tested and works.
However, Matlab is pretty expensive, so not all interested users had
access to it. Also, the project's architecture did not support some
advanced use cases without digging deep into the code. Finally,
C++11 had just come out and looked cool, so I started a
reimplementation in C++ around 2013, adding Python bindings as an
afterthought. The C++ project will be superseded by this Python package, but
can be found under https://sourceforge.net/p/wavepacket/cpp.
This worked really well. However, deploying C++ code is difficult.
In particular, there was no cheap route towards building a "good"
Python package, or towards easily building a Windows version.
Also, the underlying tensor library was slowly
getting fewer and fewer commits over the years, so I am currently
moving to a Python-only package.
The Python version is slower by a factor of two to three compared
to C++-backed code. This is, however, often cancelled by a
parallelization of the tensor operations, and the tooling is better by orders
of magnitude.
| text/x-rst | Ulf Lorenz | null | null | null | null | null | [
"Development Status :: 3 - Alpha",
"Intended Audience :: Science/Research",
"License :: OSI Approved :: MIT License",
"Operating System :: MacOS :: MacOS X",
"Operating System :: Microsoft :: Windows",
"Operating System :: POSIX",
"Programming Language :: Python :: 3.11",
"Programming Language :: Pyth... | [] | null | null | >=3.11 | [] | [] | [] | [
"numpy>=2.1.3",
"scipy>=1.14.1",
"matplotlib>=3.10"
] | [] | [] | [] | [
"Home, https://github.com/ulflor/wavepacket"
] | twine/6.2.0 CPython/3.13.11 | 2026-02-19T10:48:19.188002 | wavepacket-0.3.0.tar.gz | 87,965 | 6f/1e/b2b084a09df9875b2db8f7ef73478a6c375f30a7a645981946d96a012aad/wavepacket-0.3.0.tar.gz | source | sdist | null | false | 10123a231fe208442dc7674cce851689 | 4d0ae5e64a747146c813ab7137899e49716969159d58276cb8f48823cf9e72d0 | 6f1eb2b084a09df9875b2db8f7ef73478a6c375f30a7a645981946d96a012aad | null | [
"LICENSE"
] | 241 |
2.4 | xyzgraph | 1.5.1 | Molecular Graph Construction from Cartesian Coordinates. | # xyzgraph: Molecular Graph Construction from Cartesian Coordinates
**xyzgraph** is a Python toolkit for building molecular graphs (bond connectivity, bond orders, formal charges, and partial charges) directly from 3D atomic coordinates in XYZ format. It provides both **cheminformatics-based** and **quantum chemistry-based** (xTB) workflows.
[](https://pepy.tech/projects/xyzgraph)
[](https://github.com/aligfellow/xyzgraph/blob/main/LICENSE)
[](https://docs.astral.sh/uv)
[](https://github.com/astral-sh/ruff)
[](https://github.com/astral-sh/ty)
[](https://github.com/aligfellow/xyzgraph/actions)
[](https://codecov.io/gh/aligfellow/xyzgraph)
---
## Table of Contents
1. [Key Features](#key-features)
2. [Installation](#installation)
3. [Quick Start](#quick-start)
4. [Methodology Overview](#methodology-overview)
5. [Workflow Comparison](#workflow-comparison)
6. [CLI Reference](#cli-reference)
7. [Python API](#python-api)
8. [Visualization](#visualization)
9. [Limitations & Future Work](#limitations--future-work)
10. [Examples](#examples)
11. [References](#references)
12. [Contributing & Contact](#contributing--contact)
---
## Key Features
- **Distance-based initial bonding** using *consistent* van der Waals radii across *all elements* from Charry and Tkatchenko [[1]](https://doi.org/10.1021/acs.jctc.4c00784)
- **Four construction methods**:
- `cheminf`: Pure cheminformatics with bond order optimization
- `xtb`: Semi-empirical calculation via xTB Wiberg bond orders with Mulliken charges [[2]](https://pubs.acs.org/doi/10.1021/acs.jctc.8b01176)
- `rdkit`: RDKit's DetermineBonds algorithm [[3]](https://github.com/jensengroup/xyz2mol), [[4]](https://github.com/rdkit)
- `orca`: Reads Mayer bond orders and Mulliken charges from ORCA outputs.
- **Cheminformatics modes**:
- `--quick`: Fast (crude) valence adjustment
- Full optimization with valence and charge minimisation
- `--optimizer`:
  - **beam**: optimization across multiple paths (slightly slower, default)
  - **greedy**: iterative valence adjustment
- **Aromatic detection**: Hückel 4n+2 rule for 6-membered rings
- **Charge computation**: Gasteiger (cheminf) or Mulliken (xTB/ORCA) partial charges
- **RDKit/xyz2mol comparison**: validation against RDKit bond perception [[3]](https://github.com/jensengroup/xyz2mol), [[4]](https://github.com/rdkit)
- **ASCII 2D depiction** with layout alignment for method comparison (see also [[5]](https://github.com/whitead/moltext))
---
## Installation
### From PyPI
```bash
pip install xyzgraph
```
### From Source
```bash
git clone https://github.com/aligfellow/xyzgraph.git
cd xyzgraph
pip install .
# or simply
pip install git+https://github.com/aligfellow/xyzgraph.git
```
### Dependencies
- **Core**: `numpy`, `networkx`, `rdkit`
- **Optional**: [xTB binary](https://github.com/grimme-lab/xtb) (for `--method xtb`)
- **Optional**: [xyz2mol_tm](https://github.com/jensengroup/xyz2mol_tm) + `scipy` (for `--compare-rdkit-tm`)
To install xTB (Linux/macOS) see [here](https://github.com/grimme-lab/xtb):
```bash
conda install -c conda-forge xtb # or download from GitHub releases
```
To install xyz2mol_tm (required for `--compare-rdkit-tm`):
```bash
pip install "xyzgraph[rdkit-tm]" xyz2mol_tm@git+https://github.com/jensengroup/xyz2mol_tm.git
```
This installs `scipy` (via the `rdkit-tm` extra) and `xyz2mol_tm` from source in one command. This extra step is necessary because `xyz2mol_tm` is not hosted on PyPI.
---
## Quick Start
### CLI Examples
**Minimal usage** (auto-displays ASCII depiction):
```bash
xyzgraph molecule.xyz # constructs graph with cheminformatics style defaults
xyzgraph molecule.out # constructs graph from ORCA output
```
**Specify charge and method**:
```bash
xyzgraph molecule.xyz --method xtb --charge -1 --multiplicity 2
```
**Detailed debug output**:
```bash
xyzgraph molecule.xyz --debug
```
**Compare with RDKit**:
```bash
xyzgraph molecule.xyz --compare-rdkit
```
**Compare with ORCA output**:
```bash
# Compare XYZ (cheminf) vs ORCA bond orders
xyzgraph molecule.xyz --orca-out molecule.out
# Three-way comparison: cheminf vs ORCA vs RDKit
xyzgraph molecule.xyz --orca-out molecule.out --compare-rdkit
```
**Multi-frame trajectory files**:
```bash
# Process specific frame from trajectory (0-indexed)
xyzgraph trajectory.xyz --frame 2
# Process all frames for quick topological overview
xyzgraph trajectory.xyz --all-frames
```
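**Cheminformatics modes**: the `--quick` and `--optimizer` flags listed under [Key Features](#key-features) can also be combined with the commands above (a sketch; the exact flag syntax is assumed to follow the other options shown here):
```bash
# Fast, crude valence adjustment only
xyzgraph molecule.xyz --quick
# Full optimization with an explicit optimizer choice (beam is the default)
xyzgraph molecule.xyz --optimizer greedy
```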
### Python Example
**Basic usage**:
```python
from xyzgraph import build_graph, build_graph_rdkit, build_graph_orca
# Cheminformatics (default method)
G_cheminf = build_graph("molecule.xyz", charge=0)
# RDKit's DetermineBonds
G_rdkit = build_graph_rdkit("molecule.xyz", charge=0)
# ORCA output (Mayer bond orders)
G_orca = build_graph_orca("structure.out", bond_threshold=0.5)
# Print ASCII structure
from xyzgraph import graph_to_ascii
print(graph_to_ascii(G_cheminf, scale=3.0, include_h=False))
```
**Multi-frame trajectory files**:
```python
from xyzgraph import read_xyz_file, build_graph
# Read specific frame from trajectory
atoms = read_xyz_file("trajectory.xyz", frame=2)
G = build_graph(atoms, charge=0)
# Process all frames
from xyzgraph import count_frames_and_atoms
num_frames, _ = count_frames_and_atoms("trajectory.xyz")
for i in range(num_frames):
atoms = read_xyz_file("trajectory.xyz", frame=i)
G = build_graph(atoms, charge=0)
# ... analyze G
```
**Comparing methods**:
```python
from xyzgraph import compare_with_rdkit
# Build graphs
G_cheminf = build_graph("molecule.xyz", charge=-1)
G_rdkit = build_graph_rdkit("molecule.xyz", charge=-1)
# Compare (returns formatted report)
report = compare_with_rdkit(G_cheminf, G_rdkit, verbose=True, ascii=True)
print(report)
```
---
## Methodology Overview
### Design Philosophy
xyzgraph offers two distinct pathways for molecular graph construction:
1. **Cheminformatics Path** (`method='cheminf'`):
- Pure graph-based approach using chemical heuristics
- No external quantum chemistry calls
- Cached scoring, valence, edge and graph properties
- Fast and suitable for both organic *and* inorganic molecules
2. **Quantum Chemistry Path** (`method='xtb'`):
- Uses GFN2-xTB (extended tight-binding) calculations [[2]](https://pubs.acs.org/doi/10.1021/acs.jctc.8b01176)
- Reads in Wiberg bond orders and Mulliken charges from output
- Potentially more accurate for unusual bonding situations
- *though, xTB may be less robust in these situations*
- Requires xTB binary installation
### Cheminformatics Workflow (method='cheminf')
```
┌─────────────────────────────────────────────────────────────────┐
│ 1. Input Processing │
│ • Parse XYZ file internally │
│ • Load reference data (VDW radii, valences, electrons) │
└────────────────────┬────────────────────────────────────────────┘
│
┌────────────────────▼────────────────────────────────────────────┐
│ 2. Initial Bond Graph (Two-Step Construction) │
│ │
│ Step 1: Baseline Bonds (DEFAULT thresholds) │
│ • Uses DEFAULT threshold parameters (threshold=1.0) │
│ • Builds reliable "core" connectivity │
│ • Bonds sorted by confidence: 1.0 (short) to 0.0 (at thresh) │
│ • High confidence (>0.4): added directly │
│ • Low confidence (≤0.4): geometric validation applied │
│ • Result: stable molecular scaffold │
│ • Compute rings using NetworkX cycle_basis │
│ │
│ Step 2: Extended Bonds (if using CUSTOM thresholds) │
│ • Sorted highest-confidence-first (most reliable first) │
│ • Additional bonds require geometric validation: │
│ - Acute angle check: 15° (metals) / 30° (non-metals) │
│ - Collinearity check: trans vs spurious detection │
│ - Existing ring diagonal rejection and 3-ring validation │
| - Agostic bond filtering: H-M/F-M bonds rejected if │
│ stronger H-X or F-X bond exists (2x confidence ratio) │
│ - M-L priority check: diagonal M-ligand bonds in 3-rings │
│ rejected if stronger M-donor bond exists in ring (2x) │
│ • Allows sensible elongated bonds (e.g., TS structures) │
│ │
│ • Create graph with single bonds (order = 1.0) │
└────────────────────┬────────────────────────────────────────────┘
│
┌────────────────────▼────────────────────────────────────────────┐
│ 3. Kekulé Initialization for Conjugated Rings │
│ • Find 5/6-membered planar rings with C/N/O/S/B/P/Se │
│ • Initialize alternating bond orders (5-ring: 2-1-2-1-1, │
│ 6-ring: 2-1-2-1-2-1) │
│ • Handle fused rings (naphthalene, anthracene): │
│ - Detecting shared edges from previous rings │
│ - Validated across extended ring system │
│ • Gives optimizer excellent starting point │
│ • Reduces iterations needed for conjugated systems │
│ • Broader atom set than aromatic detection (P, Se included) │
└────────────────────┬────────────────────────────────────────────┘
│
┌──────────┴─────────────┐
│ │
┌─────────▼────────────┐ ┌───────▼──────────────────────────────┐
│ 4a. Quick Mode │ │ 4b. Full Optimization │
│ • Lock metal bonds │ │ • Lock metal bonds at 1.0 │
│ • 3 iterations │ │ • Iterative BIDIRECTIONAL search: │
│ • Promote bonds │ │ - Test both +1 AND -1 changes │
│ where both atoms │ │ - Allows Kekulé structure swaps │
│ need increased │ │ • Score = f(valence_error, │
│ valence │ │ formal_charges, │
│ • Distance check │ │ electronegativity, │
│ │ │ conjugation_penalty) │
│ │ │ • Optimizer choice: │
│ │ │ - Beam: parallel hypotheses │
│ │ │ - Greedy: single best change │
│ │ │ • Cache where possible for speed │
│ │ │ • Top-k edge candidate selection │
└─────────┬────────────┘ └──────────┬───────────────────────────┘
└───────────────────────────┘
│
┌────────────────────▼────────────────────────────────────────────┐
│ 5. Aromatic Detection (Hückel 4n+2) │
│ • Find 5/6-membered rings with C/N/O/S/B │
│ • Count π electrons (sp² carbons → 1e, N/O/S LP → 2e) │
│ • Apply Hückel rule: 4n+2 π electrons │
│ • Set aromatic bonds to 1.5 │
│ • Other heteroatoms (e.g. P, Se) use Kekulé structures │
└────────────────────┬────────────────────────────────────────────┘
│
┌────────────────────▼────────────────────────────────────────────┐
│ 6. Formal Charge Assignment │
│ • For each non-metal atom: │
│ - B = 2 × Σ(bond_orders) │
│ - L = max(0, target - B) [target: 2 for H, 8 otherwise] │
│ - formal = V_electrons - (L + B/2) │
│ • Balance total to match system charge │
│ • Metals forced to 0 (coordination not oxidation state) │
└────────────────────┬────────────────────────────────────────────┘
│
┌────────────────────▼────────────────────────────────────────────┐
│ 7. Optional: Gasteiger Partial Charges │
│ • compute_gasteiger_charges(G, target_charge) │
│ • Convert bond orders to RDKit bond types │
│ • Compute Gasteiger charges │
│ • Adjust for total charge conservation │
│ • Aggregate H charges onto heavy atoms │
│ • Stored in G.nodes[i]["charges"]["gasteiger"] │
└────────────────────┬────────────────────────────────────────────┘
│
┌────────────────────▼────────────────────────────────────────────┐
│ 8. Output Graph │
│ Nodes: symbol, formal_charge, valence, metal_valence, │
│ oxidation_state (metals only) │
│ Edges: bond_order, bond_type, metal_coord │
└─────────────────────────────────────────────────────────────────┘
```
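The step-6 arithmetic above is compact enough to spell out. The helper below is a minimal sketch of that rule, not the package's internal code; the valence-electron count is passed in by hand and the node/edge attribute names follow the output-graph convention shown above.
```python
import networkx as nx
def formal_charge(G: nx.Graph, i: int, valence_electrons: int) -> int:
    """Sketch of step 6: formal = V_electrons - (L + B/2)."""
    # B = 2 x (sum of bond orders around atom i)
    B = 2 * sum(G.edges[i, j].get("bond_order", 1.0) for j in G.neighbors(i))
    # lone-pair electrons fill up to the octet (duet for H)
    target = 2 if G.nodes[i]["symbol"] == "H" else 8
    L = max(0, target - B)
    return round(valence_electrons - (L + B / 2))
# Example: the carbonyl oxygen of C=O (6 valence electrons, one double bond)
G = nx.Graph()
G.add_node(0, symbol="C")
G.add_node(1, symbol="O")
G.add_edge(0, 1, bond_order=2.0)
print(formal_charge(G, 1, valence_electrons=6))  # 0
```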
### xTB Workflow (method='xtb')
```
┌─────────────────────────────────────────────────────────────────┐
│ 1. Input Processing |
│ • Parse XYZ file internally │
│ • Write XYZ to temporary directory │
│ • Set up xTB calculation parameters │
└────────────────────┬────────────────────────────────────────────┘
│
┌────────────────────▼────────────────────────────────────────────┐
│ 2. Run xTB Calculation │
│ Command: xtb <file>.xyz --chrg <charge> --uhf <unpaired> │
│ • GFN2-xTB Hamiltonian │
│ • Single-point calculation │
│ • Wiberg bond order analysis │
│ • Mulliken population analysis │
└────────────────────┬────────────────────────────────────────────┘
│
┌────────────────────▼────────────────────────────────────────────┐
│ 3. Parse xTB Output │
│ • Read wbo file (Wiberg bond orders) │
│ • Read charges file (Mulliken atomic charges) │
│ • Threshold: bond_order > 0.5 → create edge │
└────────────────────┬────────────────────────────────────────────┘
│
┌────────────────────▼────────────────────────────────────────────┐
│ 4. Build Graph from xTB Data │
│ • Create nodes with Mulliken charges │
│ • Create edges with Wiberg bond orders │
│ • No further optimization needed │
└────────────────────┬────────────────────────────────────────────┘
│
┌────────────────────▼────────────────────────────────────────────┐
│ 5. Cleanup (optional) │
│ • Remove temporary xTB files (unless --no-clean) │
└────────────────────┬────────────────────────────────────────────┘
│
┌────────────────────▼────────────────────────────────────────────┐
│ 6. Output Graph │
│ Nodes: symbol, charges{'mulliken': ...}, agg_charge, │
│ valence, metal_valence │
│ Edges: bond_order (Wiberg), bond_type, metal_coord │
└─────────────────────────────────────────────────────────────────┘
```
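Step 3 reduces to reading two small text files. The fragment below illustrates the Wiberg part only, assuming the usual three-column `wbo` layout (two 1-based atom indices followed by the bond order); it is a sketch, not the package's own parser.
```python
import networkx as nx
def graph_from_wbo(path: str, threshold: float = 0.5) -> nx.Graph:
    """Create an edge for every Wiberg bond order above the threshold."""
    G = nx.Graph()
    with open(path) as fh:
        for line in fh:
            parts = line.split()
            if len(parts) < 3:
                continue  # skip blank or malformed lines
            i, j = int(parts[0]) - 1, int(parts[1]) - 1  # convert to 0-based
            order = float(parts[2])
            if order > threshold:  # bond_order > 0.5 -> create edge
                G.add_edge(i, j, bond_order=order)
    return G
```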
---
## Workflow Comparison
| Feature | cheminf (quick) | cheminf (full) | xtb |
|---------|----------------|----------------|-----|
| **Speed** | Very Fast | Fast | Moderate |
| **Accuracy** | Okay for simple molecules | Very good across various systems | Only limited by xTB performance (QM-based) |
| **External deps** | None | None | Requires xTB binary |
| **Bond orders** | Heuristic (integer-like) | Optimized formal charge and valency | Wiberg (fractional) |
| **Charges** | Gasteiger | Gasteiger | Mulliken |
| **Metal complexes** | Limited | Reasonable | Reasonable (limited by xTB metal performance) |
| **Conjugated systems** | Basic | Excellent | Excellent |
| **Best for** | Quick checks where connectivity is most important | Most cases | Awkward bonding, validation |
### When to Use Each Method
**Use `--method cheminf` (default)**:
- Most use cases
- No xTB installation available
- Batch processing structures
**Use `--method cheminf --quick`**:
- Extremely large molecules
- Initial rapid screening
- When approximate bond orders suffice
**Use `--method xtb`**:
- Validation of cheminf results
- Unusual electronic structures
- Low confidence in bonding structure
### Optimizer Algorithms (cheminf full mode only)
**Beam Search Optimizer** (the default: `--optimizer beam`, `--beam-width 5`):
- Explores multiple optimization paths in parallel
- Maintains the top-k hypotheses at each iteration, built from the top candidate edges
- Bidirectional: tests both +1 and -1 bond orders for each hypothesis
- More robust against local minima
- Slower, but better convergence
- Best for robust bonding assignment across the periodic table
**Greedy Optimizer** (`--optimizer greedy`):
- Tests all top candidate edges, picks single best change per iteration
- Bidirectional: tests both +1 and -1 bond order changes
- Fast and effective for most molecules
- Can get stuck in local minima (*e.g.* α,β-unsaturated systems); see the sketch below
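To make the bidirectional search concrete, one greedy iteration can be sketched as follows. `score` and `candidate_edges` stand in for the scoring function and top-k edge selection described above; this is an illustration under those assumptions, not the package's implementation.
```python
def greedy_step(G, candidate_edges, score):
    """Try +1 and -1 on each candidate edge; report the single best change."""
    best_score, best_move = score(G), None
    for i, j in candidate_edges:
        for delta in (+1, -1):
            new_order = G.edges[i, j]["bond_order"] + delta
            if new_order < 1:  # never drop below a single bond
                continue
            G.edges[i, j]["bond_order"] = new_order
            if (trial := score(G)) < best_score:
                best_score, best_move = trial, (i, j, delta)
            G.edges[i, j]["bond_order"] = new_order - delta  # undo the trial change
    return best_move  # None means no +/-1 change improves the score
# Assumed driver: apply moves until converged or an iteration cap is hit
# while (move := greedy_step(G, edges, score)) is not None:
#     i, j, delta = move
#     G.edges[i, j]["bond_order"] += delta
```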
---
## CLI Reference
### Command Syntax
```text
> xyzgraph -h
usage: xyzgraph [-h] [--version] [--citation] [--method {cheminf,xtb}] [--no-clean] [-c CHARGE] [-m MULTIPLICITY] [-q] [--relaxed] [-t THRESHOLD] [-d] [-a] [--json] [-as ASCII_SCALE] [-H]
[--show-h-idx SHOW_H_IDX] [-b] [--frame FRAME] [--all-frames] [--compare-rdkit] [--compare-rdkit-tm] [--orca-out ORCA_OUT] [--orca-threshold ORCA_THRESHOLD]
[-o {greedy,beam}] [-bw BEAM_WIDTH] [--max-iter MAX_ITER] [--edge-per-iter EDGE_PER_ITER] [--bond BOND] [--unbond UNBOND] [--threshold-h-h THRESHOLD_H_H]
[--threshold-h-nonmetal THRESHOLD_H_NONMETAL] [--threshold-h-metal THRESHOLD_H_METAL] [--threshold-metal-ligand THRESHOLD_METAL_LIGAND]
[--threshold-nonmetal THRESHOLD_NONMETAL] [--allow-metal-metal-bonds] [--threshold-metal-metal-self THRESHOLD_METAL_METAL_SELF]
[--period-scaling-h-bonds PERIOD_SCALING_H_BONDS] [--period-scaling-nonmetal-bonds PERIOD_SCALING_NONMETAL_BONDS]
[input_file]
Build molecular graph from XYZ or ORCA output.
positional arguments:
input_file Input file (XYZ or ORCA .out)
options:
-h, --help show this help message and exit
--version Print version and exit
--citation Print citation and exit
Common Options:
--method {cheminf,xtb}
Graph construction method (default: cheminf)
--no-clean Keep temporary xTB files (only for --method xtb)
-c CHARGE, --charge CHARGE
Total molecular charge (default: 0)
-m MULTIPLICITY, --multiplicity MULTIPLICITY
Spin multiplicity (default: auto estimation)
-q, --quick Quick mode: connectivity only, no formal charge optimization
--relaxed Relaxed geometric validation (for transition states)
-t THRESHOLD, --threshold THRESHOLD
Global scaling for bond thresholds (default: 1.0)
Output Options:
-d, --debug Enable debug output
-a, --ascii Show 2D ASCII depiction
--json Output graph as JSON (for generating test fixtures)
-as ASCII_SCALE, --ascii-scale ASCII_SCALE
ASCII scaling factor (default: 2.5)
-H, --show-h Include hydrogens in visualizations
--show-h-idx SHOW_H_IDX
Show specific H atoms (comma-separated indices)
Input Options:
-b, --bohr XYZ file in Bohr units (default: Angstrom)
--frame FRAME Frame index for trajectory files, 0-indexed (default: 0)
--all-frames Process all frames in trajectory
Comparison Options:
--compare-rdkit Compare with RDKit graph
--compare-rdkit-tm Compare with RDKit xyz2mol_tm graph
--orca-out ORCA_OUT ORCA output file for comparison
--orca-threshold ORCA_THRESHOLD
Min Mayer bond order for ORCA (default: 0.25)
Optimizer Options:
-o {greedy,beam}, --optimizer {greedy,beam}
Algorithm (default: beam)
-bw BEAM_WIDTH, --beam-width BEAM_WIDTH
Beam width (default: 5)
--max-iter MAX_ITER Max iterations (default: 50)
--edge-per-iter EDGE_PER_ITER
Edges per iteration (default: 10)
Bond Constraints:
--bond BOND Force bonds (e.g., --bond 0,1 2,3)
--unbond UNBOND Prevent bonds (e.g., --unbond 0,1)
Advanced Thresholds:
--threshold-h-h THRESHOLD_H_H
H-H vdW threshold (default: 0.38)
--threshold-h-nonmetal THRESHOLD_H_NONMETAL
H-nonmetal vdW threshold (default: 0.42)
--threshold-h-metal THRESHOLD_H_METAL
H-metal vdW threshold (default: 0.45)
--threshold-metal-ligand THRESHOLD_METAL_LIGAND
Metal-ligand vdW threshold (default: 0.65)
--threshold-nonmetal THRESHOLD_NONMETAL
Nonmetal-nonmetal vdW threshold (default: 0.55)
--allow-metal-metal-bonds
Allow metal-metal bonds (default: True)
--threshold-metal-metal-self THRESHOLD_METAL_METAL_SELF
Metal-metal vdW threshold (default: 0.7)
--period-scaling-h-bonds PERIOD_SCALING_H_BONDS
Period scaling for H bonds (default: 0.05)
--period-scaling-nonmetal-bonds PERIOD_SCALING_NONMETAL_BONDS
Period scaling for nonmetal bonds (default: 0.0)
```
**Method comparison**:
```bash
xyzgraph molecule.xyz --debug > cheminf.txt
xyzgraph molecule.xyz --method xtb --debug > xtb.txt
diff cheminf.txt xtb.txt
```
**Validate against RDKit**:
```bash
xyzgraph molecule.xyz --compare-rdkit
```
---
## Python API
Direct graph construction:
```python
from xyzgraph import build_graph, graph_debug_report
# Cheminf full optimization
G_full = build_graph(
atoms='molecule.xyz',
charge=0,
max_iter=50, # maximum iterations (normally converged <20)
edge_per_iter=6, # default 10
bond=[(0,1)], # ensure a bond between 0 and 1
debug=True
)
```
---
## Visualization
### ASCII Depiction
xyzgraph includes a built-in ASCII renderer for 2D molecular structures. This is heavily inspired by work elsewhere, *e.g.* [[5]](https://github.com/whitead/moltext) by Andrew White.
```python
from xyzgraph import graph_to_ascii
# Basic rendering
ascii_art = graph_to_ascii(G, scale=3.0, include_h=False)
print(ascii_art)
```
**Output example** (acyl isothiouronium):
```text
> xyzgraph examples/isothio.xyz -a
/C
/
///
C\
\\
\ \
\\
C\
//
//
O=======C
=========\
C---- \ /S\
// ---C \ / \\
// \ N---- /// \\ ----C\
C \ // ---C\ \C--- \
\ \ / \\ / \\\
\ C--- // \ / C
\ // ----C \\ / /
C--- // \ N\------C /
----C \ /// \\\ /
\ / \ ---C
C-------C/ \C----
//
C---- //
---C
\
\
\
C
```
**Features**:
- Single bonds: `-`, `|`, `/`, `\`
- Double bonds: `=`, `‖` (parallel lines)
- Triple bonds: `#`
- Aromatic: 1.5 bond orders shown as single
- Special edges: `*` (TS), `.` (NCI) if `G.edges[i,j]['TS']=True` or `G.edges[i,j]['NCI']=True` (see the example below)
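To use the special-edge flags, set the attribute on an existing edge before rendering; the atom pair below is an arbitrary placeholder.
```python
# Mark a (hypothetical) forming bond as a transition-state edge, then re-render
G.edges[2, 5]["TS"] = True  # the pair (2, 5) must already be an edge in your graph
print(graph_to_ascii(G, include_h=False))  # that edge is now drawn with '*'
```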
### Layout Alignment
Compare methods by aligning their ASCII depictions:
```python
from xyzgraph import build_graph, graph_to_ascii
# Build with both methods
G_cheminf = build_graph(atoms, method='cheminf')
G_xtb = build_graph(atoms, method='xtb')
# Generate aligned depictions
ascii_ref, layout = graph_to_ascii(G_cheminf)
ascii_xtb = graph_to_ascii(G_xtb, reference_layout=layout)
print("Cheminf:\n", ascii_ref)
print("\nxTB:\n", ascii_xtb)
```
### Debug Report
Tabular listing of all atoms and bonds:
```python
from xyzgraph import graph_debug_report
report = graph_debug_report(G, include_h=False)
print(report)
```
**Full example**:
```text
> xyzgraph benzene_NH4-cation-pi.xyz -c 1 -a -d
================================================================================
XYZGRAPH
Molecular Graph Construction from Cartesian Coordinates
A. S. Goodfellow, 2025
================================================================================
Version: xyzgraph v1.5.0
Citation: A. S. Goodfellow, xyzgraph: Molecular Graph Construction from
Cartesian Coordinates, v1.5.0, 2025,
https://github.com/aligfellow/xyzgraph.git.
Input: benzene_NH4-cation-pi.xyz
Parameters: charge=1
================================================================================
# Building cheminf graph from examples/benzene_NH4-cation-pi.xyz...
================================================================================
BUILDING GRAPH (CHEMINF, FULL MODE)
Atoms: 17, Charge: 1, Multiplicity: 1
================================================================================
Added 17 atoms
Chemical formula: C6H10N
Step 1: Found 16 baseline bonds (using default thresholds)
...
...
...
Step 1: 16 baseline bonds added, 0 rejected
Found 1 rings from initial bonding (excluding metal cycles)
Total bonds in graph: 16
Initial bonds: 16
================================================================================
KEKULE INITIALIZATION FOR AROMATIC RINGS
================================================================================
Ring 0 (6-membered): ['C0', 'C1', 'C2', 'C3', 'C4', 'C5']
π electrons estimate: 6
--------------------------------------------------------------------------------
Valid rings for Kekulé initialization:
[0]
✓ Initialized isolated 6-ring 0
--------------------------------------------------------------------------------
SUMMARY: Initialized 1 ring(s) with Kekulé pattern
--------------------------------------------------------------------------------
================================================================================
BEAM SEARCH OPTIMIZATION (width=5)
================================================================================
Initial score: 22.50
Iteration 1:
No improvements found in any beam, stopping
Applying best solution to graph...
--------------------------------------------------------------------------------
Explored 13 states across 1 iterations
Found 0 improvements
Score: 22.50 → 22.50
--------------------------------------------------------------------------------
================================================================================
FORMAL CHARGE CALCULATION
================================================================================
Initial formal charges:
Sum: +1 (target: +1)
Charged atoms:
N12: +1
No residual charge distribution needed (sum matches target)
================================================================================
AROMATIC RING DETECTION (Hückel 4n+2)
================================================================================
Ring 1 (6-membered): ['C0', 'C1', 'C2', 'C3', 'C4', 'C5']
π electrons: 6 (C0:1, C1:1, C2:1, C3:1, C4:1, C5:1)
✓ AROMATIC (4n+2 rule: n=1)
--------------------------------------------------------------------------------
SUMMARY: 1 aromatic rings, 6 bonds set to 1.5
--------------------------------------------------------------------------------
================================================================================
GRAPH CONSTRUCTION COMPLETE
================================================================================
Constructed graph with chemical formula: C6H10N
================================================================================
# CHEMINF GRAPH DETAILS
================================================================================
# Molecular Graph: 17 atoms, 16 bonds
# total_charge=1 multiplicity=1
# (C-H hydrogens hidden; heteroatom-bound hydrogens shown; valences still include all H)
# [idx] Sym val=.. metal=.. formal=.. | neighbors: idx(order / aromatic flag)
# (val = organic valence excluding metal bonds; metal = metal coordination bonds)
[ 0] C val=4.00 metal=0.00 formal=0 | 1(1.50*) 5(1.50*)
[ 1] C val=4.00 metal=0.00 formal=0 | 0(1.50*) 2(1.50*)
[ 2] C val=4.00 metal=0.00 formal=0 | 1(1.50*) 3(1.50*)
[ 3] C val=4.00 metal=0.00 formal=0 | 2(1.50*) 4(1.50*)
[ 4] C val=4.00 metal=0.00 formal=0 | 3(1.50*) 5(1.50*)
[ 5] C val=4.00 metal=0.00 formal=0 | 0(1.50*) 4(1.50*)
[ 12] N val=4.00 metal=0.00 formal=+1 | 13(1.00) 14(1.00) 15(1.00) 16(1.00)
[ 13] H val=1.00 metal=0.00 formal=0 | 12(1.00)
[ 14] H val=1.00 metal=0.00 formal=0 | 12(1.00)
[ 15] H val=1.00 metal=0.00 formal=0 | 12(1.00)
[ 16] H val=1.00 metal=0.00 formal=0 | 12(1.00)
# Bonds (i-j: order) (filtered)
[ 0- 1]: 1.50
[ 0- 5]: 1.50
[ 1- 2]: 1.50
[ 2- 3]: 1.50
[ 3- 4]: 1.50
[ 4- 5]: 1.50
[12-13]: 1.00
[12-14]: 1.00
[12-15]: 1.00
[12-16]: 1.00
================================================================================
# ASCII Depiction (cheminf)
================================================================================
-C------------------------C-
--- ---
---- ----
--- ---
C\ -C
\\ //
\\\ ///
\\\ ///
\\ //
\C------------------------C/
H
|
|
|
|
H------------------------N-------------------------H
|
|
|
|
H
```
---
## Limitations & Future Work
### Current Limitations
1. **Metal Complexes**
- Bond orders locked at 1.0 (no d-orbital chemistry)
- Metal-metal bonds *partially* supported (single bond allowed)
- Can deal with **both** ionic *and* neutral ligands
2. **Radicals & Open-Shell Systems**
   - Unlikely to find an appropriate valence structure
   - Not explicitly handled currently
- *May* behave, *may* be unreliable
3. **Zwitterions**
- Formal charge and valence analysis does identify `-[N+](=O)(-[O-])` bonding and formal charge pattern
- This is performed **without pattern matching**
- *May* not always be fully robust
4. **Large Conjugated Systems**
   - May need many iterations to converge (mitigated by Kekulé-initialised rings)
5. **Charged Aromatics**
- Hückel electron counting is simplistic
- Should still solve with valence/charge optimisation
6. **Inorganic Cages**
- Homogeneous clusters (≥8 atoms, same element) bypass standard ring validation
   - Unlikely to be described with full accuracy, *e.g.* C/B cage structures
---
### Built-in Comparison
xyzgraph can directly compare its output to rdkit/xyz2mol [[3]](https://github.com/jensengroup/xyz2mol), [[4]](https://github.com/rdkit) or to rdkit/xyz2mol_tm [[6]](https://github.com/jensengroup/xyz2mol_tm), [[7]](https://jcheminf.biomedcentral.com/articles/10.1186/s13321-025-01008-1):
```bash
xyzgraph molecule.xyz --compare-rdkit --debug
# or
xyzgraph molecule.xyz --compare-rdkit-tm --debug # integrates graph building from xyz2mol_tm
```
**Output includes**:
- Layout-aligned ASCII depictions
- Edge differences (bonds only in one method)
- Bond order differences (Δ ≥ 0.25)
**Example**:
```text
# Bond differences: only_in_native=1 only_in_rdkit=0 bond_order_diffs=2
# only_in_native: 4-7
# bond_order_diffs (Δ≥0.25):
# 1-2 native=1.50 rdkit=1.00 Δ=+0.50
# 2-3 native=2.00 rdkit=1.50 Δ=+0.50
```
---
## Examples
This section demonstrates xyzgraph's capabilities on real molecular systems, showcasing Kekulé initialization, aromatic detection, metal coordination analysis, and formal charge assignment.
### Example 1: Metal Complex (Ferrocene-Manganese Hydride)
This example demonstrates xyzgraph's handling of organometallic complexes with multiple ligand types.
**System:** [(η⁵-Cp)₂Fe][Mn(H)(CO)₂(PNN)] - Ferrocene cation with manganese hydride complex
**File:** `examples/mnh.xyz` (77 atoms)
**Command:**
```bash
xyzgraph examples/mnh.xyz --ascii --debug
```
**Key Features:**
- Detection of Cp⁻ (cyclopentadienyl) rings coordinated to Fe
- Metal coordination summary (Fe²⁺, Mn¹⁺) with ligand classification
- Hydride ligand (H⁻) recognition
- Carbonyl (CO) ligands with triple-bonded oxygen
- Aromatic Cp rings with charge contribution to π system
**Output (truncated):**
```text
================================================================================
KEKULE INITIALIZATION FOR AROMATIC RINGS
================================================================================
Ring 0 (5-membered): ['C7', 'C13', 'C11', 'C9', 'C8']
✓ Detected Cp-like ring (all 5 C bonded to Fe0)
π electrons estimate: 6
Ring 1 (6-membered): ['C37', 'C39', 'C41', 'C43', 'C45', 'C36']
π electrons estimate: 6
Ring 2 (6-membered): ['C34', 'C32', 'C30', 'C28', 'C26', 'C25']
π electrons estimate: 6
Ring 3 (6-membered): ['C55', 'C53', 'N6', 'C52', 'C58', 'C57']
π electrons estimate: 6
Ring 4 (5-membe | text/markdown | Dr Alister Goodfellow | null | null | null | null | null | [
"Programming Language :: Python :: 3",
"License :: OSI Approved :: MIT License",
"Operating System :: OS Independent"
] | [] | null | null | >=3.8 | [] | [] | [] | [
"numpy",
"rdkit",
"networkx",
"scipy; extra == \"rdkit-tm\""
] | [] | [] | [] | [
"homepage, https://github.com/aligfellow/xyzgraph"
] | twine/6.2.0 CPython/3.12.9 | 2026-02-19T10:47:34.558275 | xyzgraph-1.5.1.tar.gz | 119,123 | d3/c0/d62fd7005a1a509b01f8e69b84174781489a00c9870a39c52beeb33c38f4/xyzgraph-1.5.1.tar.gz | source | sdist | null | false | ebbb3d963147c3b14686a5861b51d602 | 1c235f6508074d72e019f52091c3d76e71803757973f5404d38f5d0c4e5a5547 | d3c0d62fd7005a1a509b01f8e69b84174781489a00c9870a39c52beeb33c38f4 | null | [
"LICENSE"
] | 252 |
2.1 | odoo-addon-auth-oauth-autologin | 15.0.1.0.0.2 | Automatically redirect to the OAuth provider for login | .. image:: https://odoo-community.org/readme-banner-image
:target: https://odoo-community.org/get-involved?utm_source=readme
:alt: Odoo Community Association
====================
Auth Oauth Autologin
====================
..
!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
!! This file is generated by oca-gen-addon-readme !!
!! changes will be overwritten. !!
!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
!! source digest: sha256:c42445ea0f1bbf81fd78fe09b1165269c12db10d78d83dd21262bf18af7d63be
!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
.. |badge1| image:: https://img.shields.io/badge/maturity-Beta-yellow.png
:target: https://odoo-community.org/page/development-status
:alt: Beta
.. |badge2| image:: https://img.shields.io/badge/license-AGPL--3-blue.png
:target: http://www.gnu.org/licenses/agpl-3.0-standalone.html
:alt: License: AGPL-3
.. |badge3| image:: https://img.shields.io/badge/github-OCA%2Fserver--auth-lightgray.png?logo=github
:target: https://github.com/OCA/server-auth/tree/15.0/auth_oauth_autologin
:alt: OCA/server-auth
.. |badge4| image:: https://img.shields.io/badge/weblate-Translate%20me-F47D42.png
:target: https://translation.odoo-community.org/projects/server-auth-15-0/server-auth-15-0-auth_oauth_autologin
:alt: Translate me on Weblate
.. |badge5| image:: https://img.shields.io/badge/runboat-Try%20me-875A7B.png
:target: https://runboat.odoo-community.org/builds?repo=OCA/server-auth&target_branch=15.0
:alt: Try me on Runboat
|badge1| |badge2| |badge3| |badge4| |badge5|
This module implements an automatic redirection to the configured OAuth
provider login page, if there is one and only one enabled. This effectively
makes the regular Odoo login screen invisible in normal circumstances.
**Table of contents**
.. contents::
:local:
Configuration
=============
Configure OAuth providers in Settings > Users and Companies, and make sure
there is one and only one that has both the enabled and automatic login flags
set.
When this is done, users visiting the login page (/web/login), or being
redirected to it because they are not authenticated yet, will be redirected to
the identity provider login page instead of the regular Odoo login page.
Be aware that this module does not actively prevent users from authenticating
with a login and password stored in the Odoo database. In some unusual
circumstances (such as identity provider errors), the regular Odoo login may
still be displayed. Securely disabling Odoo login and password, if needed,
should be the topic of another module.
Also be aware that this has a possibly surprising effect on the logout menu
item. When the user logs out of Odoo, a redirect to the login page happens. The
login page in turn redirects to the identity provider, which, if the user is
already authenticated there, automatically logs the user back into Odoo, in a
fresh session.
Usage
=====
When configured, the Odoo login page redirects to the OAuth identity provider
for authentication and login to Odoo. To access the regular Odoo login page,
visit ``/web/login?no_autologin``.
Bug Tracker
===========
Bugs are tracked on `GitHub Issues <https://github.com/OCA/server-auth/issues>`_.
In case of trouble, please check there if your issue has already been reported.
If you spotted it first, help us to smash it by providing a detailed and welcomed
`feedback <https://github.com/OCA/server-auth/issues/new?body=module:%20auth_oauth_autologin%0Aversion:%2015.0%0A%0A**Steps%20to%20reproduce**%0A-%20...%0A%0A**Current%20behavior**%0A%0A**Expected%20behavior**>`_.
Do not contact contributors directly about support or help with technical issues.
Credits
=======
Authors
~~~~~~~
* ACSONE SA/NV
Maintainers
~~~~~~~~~~~
This module is maintained by the OCA.
.. image:: https://odoo-community.org/logo.png
:alt: Odoo Community Association
:target: https://odoo-community.org
OCA, or the Odoo Community Association, is a nonprofit organization whose
mission is to support the collaborative development of Odoo features and
promote its widespread use.
.. |maintainer-sbidoul| image:: https://github.com/sbidoul.png?size=40px
:target: https://github.com/sbidoul
:alt: sbidoul
Current `maintainer <https://odoo-community.org/page/maintainer-role>`__:
|maintainer-sbidoul|
This module is part of the `OCA/server-auth <https://github.com/OCA/server-auth/tree/15.0/auth_oauth_autologin>`_ project on GitHub.
You are welcome to contribute. To learn how please visit https://odoo-community.org/page/Contribute.
| null | ACSONE SA/NV,Odoo Community Association (OCA) | support@odoo-community.org | null | null | AGPL-3 | null | [
"Programming Language :: Python",
"Framework :: Odoo",
"Framework :: Odoo :: 15.0",
"License :: OSI Approved :: GNU Affero General Public License v3"
] | [] | https://github.com/OCA/server-auth | null | >=3.8 | [] | [] | [] | [
"odoo<15.1dev,>=15.0a"
] | [] | [] | [] | [] | twine/6.2.0 CPython/3.12.3 | 2026-02-19T10:47:26.621358 | odoo_addon_auth_oauth_autologin-15.0.1.0.0.2-py3-none-any.whl | 27,008 | 7c/29/43565c2c17725f4565ba8f3e544e87c7067d1affb3de2d08e8034d118a13/odoo_addon_auth_oauth_autologin-15.0.1.0.0.2-py3-none-any.whl | py3 | bdist_wheel | null | false | a533edb34b8d1727280c4fb935569eeb | a78f2844bd4dbf9191583d5de11586ce437e19d7f66648d65937f2d295446176 | 7c2943565c2c17725f4565ba8f3e544e87c7067d1affb3de2d08e8034d118a13 | null | [] | 98 |
2.4 | reboost | 0.10.3 | New LEGEND Monte-Carlo simulation post-processing | # reboost
[](https://pypi.org/project/reboost/)
[](https://anaconda.org/conda-forge/reboost)

[](https://github.com/legend-exp/reboost/actions)
[](https://github.com/pre-commit/pre-commit)
[](https://github.com/psf/black)
[](https://app.codecov.io/gh/legend-exp/reboost)



[](https://reboost.readthedocs.io)
_reboost_ is a package to post-process
[remage](https://remage.readthedocs.io/en/stable/) simulations. Post processing
is the step of applying a detector response model to the (idealised) _remage_ /
_Geant4_ simulations to "boost" them, allowing comparison to data.
_reboost_ provides tools to:
- apply an HPGe detector response model to the simulations,
- generate optical maps (dedicated tooling),
- control the full post-processing chain with configuration files.
For more information see our dedicated
[documentation](https://reboost.readthedocs.io/en/stable/)!
| text/markdown | null | Manuel Huber <info@manuelhu.de>, Toby Dixon <toby.dixon.23@ucl.ac.uk>, Luigi Pertoldi <gipert@pm.me> | The LEGEND Collaboration | null | null | null | [
"Intended Audience :: Developers",
"Intended Audience :: Science/Research",
"Operating System :: MacOS",
"Operating System :: POSIX",
"Operating System :: Unix",
"Programming Language :: Python",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3 :: Only",
"Topic :: Scientif... | [] | null | null | >=3.10 | [] | [] | [] | [
"hdf5plugin",
"colorlog",
"numpy",
"scipy",
"numba>=0.60",
"legend-pydataobj>=1.17.2",
"legend-pygeom-optics>=0.15.0",
"legend-pygeom-tools>=0.0.26",
"legend-pygeom-hpges",
"hist",
"dbetto",
"particle",
"pandas",
"matplotlib",
"pygama",
"pyg4ometry",
"reboost[docs,test]; extra == \"a... | [] | [] | [] | [
"Homepage, https://github.com/legend-exp/reboost",
"Bug Tracker, https://github.com/legend-exp/reboost/issues",
"Discussions, https://github.com/legend-exp/reboost/discussions",
"Changelog, https://github.com/legend-exp/reboost/releases"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-19T10:46:29.226051 | reboost-0.10.3.tar.gz | 139,618 | d4/70/6482fffe21f640f5b486f4e488bdf071db4eaba1b138f3447ee11c6caa8f/reboost-0.10.3.tar.gz | source | sdist | null | false | 0e76382790cde9e386788c12e836a917 | 916da275a023ded34c83a4ea34d418f087654a632b518b7e5a0a2f86aaa12eae | d4706482fffe21f640f5b486f4e488bdf071db4eaba1b138f3447ee11c6caa8f | GPL-3.0 | [
"LICENSE"
] | 665 |
2.4 | onfido-python | 6.0.0 | Python library for the Onfido API | # Onfido Python Library
The official Python library for integrating with the Onfido API.
Documentation is available at <https://documentation.onfido.com>.
This version uses Onfido API v3.6. Refer to our [API versioning guide](https://developers.onfido.com/guide/api-versioning-policy#client-libraries) for details. It explains which client library versions use which versions of the API.
[](https://badge.fury.io/py/onfido-python)

## Installation & Usage
### Requirements
Python 3.9+
### Installation
#### Pip
Install the package from PyPI:
```sh
pip install onfido-python
```
Then import the package:
```python
import onfido
```
#### Poetry
```sh
poetry add onfido-python
```
Then import the package:
```python
import onfido
```
### Tests
Execute `pytest` to run the tests.
## Getting Started
Import the `DefaultApi` object, this is the main object used for interfacing with the API:
```python
import onfido
import urllib3
from os import environ
configuration = onfido.Configuration(
api_token=environ['ONFIDO_API_TOKEN'],
region=onfido.configuration.Region.EU, # Supports `EU`, `US` and `CA`
timeout=urllib3.util.Timeout(connect=60.0, read=60.0)
)
with onfido.ApiClient(configuration) as api_client:
onfido_api = onfido.DefaultApi(api_client)
...
```
NB: by default, timeout values are set to 30 seconds. You can change the default timeout values by setting the `timeout` parameter in the `Configuration` object, as shown in the example above.
### Making a call to the API
```python
try:
applicant = onfido_api.create_applicant(
onfido.ApplicantBuilder(
first_name= 'First',
last_name= 'Last')
)
    # To access the information, read the desired property on the object, for example:
applicant.first_name
# ...
except OpenApiException:
# ...
pass
except Exception:
# ...
pass
```
Specific exception types are defined into [exceptions.py](onfido/exceptions.py).
### Webhook event verification
Webhook events payload needs to be verified before it can be accessed. Verifying webhook payloads is crucial for security reasons, as it ensures that the payloads are indeed from Onfido and have not been tampered with. The library allows you to easily decode the payload and verify its signature before returning it as an object for user convenience:
```python
try:
verifier = onfido.WebhookEventVerifier(os.environ["ONFIDO_WEBHOOK_SECRET_TOKEN"])
signature = "a0...760e"
event = verifier.read_payload('{"payload":{"r...3"}}', signature)
except onfido.OnfidoInvalidSignatureError:
# Invalid webhook signature
pass
```
### Recommendations
#### Do not use additional properties
Except for accessing Task object's outputs, avoid using the `additional_properties` dictionary to access undefined properties to prevent breaking changes when these fields appear.
## Contributing
This library is automatically generated using [OpenAPI Generator](https://openapi-generator.tech) (version: 7.16.0); therefore, all contributions (except test files) should target the [Onfido OpenAPI specification repository](https://github.com/onfido/onfido-openapi-spec/tree/master) instead of this repository. Please follow the contribution guidelines provided in the OpenAPI specification repository.
For contributions to the tests instead, please follow the steps below:
1. Fork the [repository](https://github.com/onfido/onfido-python/fork)
2. Create your feature branch (`git checkout -b my-new-feature`)
3. Make your changes
4. Commit your changes (`git commit -am 'Add detailed description of the feature'`)
5. Push to the branch (`git push origin my-new-feature`)
6. Create a new Pull Request
## Versioning policy
Versioning helps manage changes and ensures compatibility across different versions of the library.
[Semantic Versioning](https://semver.org) policy is used for library versioning, following the guidelines and limitations outlined below:
- MAJOR versions (x.0.0) may:
- target a new API version
- include non-backward compatible change
- MINOR versions (0.x.0) may:
- add a new functionality, non-mandatory parameter or property
- deprecate an old functionality
- include non-backward compatible change to a functionality which is:
- labelled as alpha or beta
- completely broken and not usable
- PATCH version (0.0.x) will:
- fix a bug
- include backward compatible changes only
## More documentation
Additional documentation and code examples can be found at <https://documentation.onfido.com>.
## Support
Should you encounter any technical issues during integration, please contact Onfido's Customer Support team via the [Customer Experience Portal](https://public.support.onfido.com/) which also includes support documentation.
| text/markdown | OpenAPI Generator community | OpenAPI Generator Community <team@openapitools.org> | null | null | MIT | OpenAPI, OpenAPI-Generator, onfido, identity | [] | [] | null | null | >=3.9 | [] | [] | [] | [
"urllib3<3.0.0,>=2.5.0",
"python-dateutil>=2.8.2",
"pydantic>=2",
"typing-extensions>=4.7.1",
"virtualenv>=20.36.1"
] | [] | [] | [] | [
"Repository, https://github.com/onfido/onfido-python"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-19T10:45:55.853228 | onfido_python-6.0.0.tar.gz | 155,588 | 93/66/439f49dc9cfcc5bfa79e2c884d2dbfdbfec6c2d873a121a6adf01a08a77d/onfido_python-6.0.0.tar.gz | source | sdist | null | false | 4e2038b256057c655a2d82e93ce7e5e9 | 3646939a52ed2f61c64d52843139238ea447fbea24fe26129fef8ee7121ae242 | 9366439f49dc9cfcc5bfa79e2c884d2dbfdbfec6c2d873a121a6adf01a08a77d | null | [
"LICENSE"
] | 273 |
2.4 | ossuary-risk | 0.5.1 | OSS Supply Chain Risk Scoring - Where abandoned packages come to rest | # Ossuary
**OSS Supply Chain Risk Scoring** - Where abandoned packages come to rest.
Ossuary analyzes open source packages to identify governance-based supply chain risks before incidents occur. It calculates a risk score (0-100) based on maintainer concentration, activity patterns, protective factors, and takeover detection.
## What It Detects
Ossuary targets the subset of supply chain attacks where **governance weakness is a precondition** - social engineering takeovers, abandoned packages, governance disputes. High maintainer concentration isn't inherently dangerous (pciutils has been maintained by one person for 28 years), but combined with other signals it becomes meaningful.
| Can Detect | Cannot Detect |
|------------|---------------|
| Social engineering takeover (xz pattern) | Account compromise (stolen tokens) |
| Abandoned packages | Dependency confusion |
| Governance disputes (left-pad pattern) | Typosquatting |
| Newcomer takeover patterns | Malicious code injection |
| Economic frustration signals | Active maintainer sabotage |
## Quick Start
```bash
# Install from GitHub
pip install git+https://github.com/anicka-net/ossuary-risk.git
# Set GitHub token for API access (optional but recommended)
export GITHUB_TOKEN=ghp_xxxxxxxxxxxxx
# Initialize database
ossuary init
# Score a package
ossuary score event-stream --ecosystem npm
# Score across ecosystems
ossuary score numpy --ecosystem pypi
ossuary score serde --ecosystem cargo
# Score with historical cutoff (T-1 analysis)
ossuary score event-stream --ecosystem npm --cutoff 2018-09-01
# Output as JSON
ossuary score requests --ecosystem pypi --json
# Batch score from seed file
ossuary seed-custom seeds/pypi-popular.yaml
# Show packages with biggest score changes
ossuary movers
```
## Supported Ecosystems
npm, PyPI, Cargo, RubyGems, Packagist, NuGet, Go, GitHub
## Scoring Methodology
```
Final Score = Base Risk + Activity Modifier + Protective Factors
(20-100) (-30 to +20) (-70 to +20)
```
**Base Risk** from maintainer concentration. **Activity Modifier** rewards active maintenance, penalizes abandonment. **Protective Factors** include maintainer reputation, funding (GitHub Sponsors), org ownership, visibility (downloads/stars), community size, and takeover detection.
**Takeover Detection** (novel contribution): compares each contributor's recent commit share vs historical baseline. A newcomer jumping from 2% to 50% on a mature project triggers an alert. Guards prevent false positives for established contributors, long-tenure maintainers, and internal org handoffs.
When a takeover pattern is detected, the activity bonus is suppressed - high commit activity during a takeover is evidence of the attack, not project health.
See [methodology](docs/methodology.md) for full details.
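As a rough illustration of the 2% → 50% example above, the comparison can be written in a few lines of Python. The threshold values here (`jump_ratio`, `min_recent`) are invented for the sketch; the real thresholds, baselines, and guard conditions are described in the methodology doc.
```python
def takeover_flag(recent_share: float, baseline_share: float,
                  jump_ratio: float = 10.0, min_recent: float = 0.3) -> bool:
    """Flag a contributor whose recent commit share dwarfs their historical baseline."""
    baseline_share = max(baseline_share, 0.01)  # treat true newcomers as ~1%
    return recent_share >= min_recent and recent_share / baseline_share >= jump_ratio
# The xz-style pattern: a 2% historical share jumping to 50% of recent commits
print(takeover_flag(recent_share=0.50, baseline_share=0.02))  # True
```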
## Dashboard
```bash
# Install with dashboard dependencies
pip install "ossuary-risk[dashboard] @ git+https://github.com/anicka-net/ossuary-risk.git"
# Run dashboard
streamlit run dashboard.py --server.port 8501
```
Features: risk overview, ecosystem breakdown, package detail with score history, delta detection (biggest movers).
## Validation
Validated on 144 packages across 8 ecosystems:
- **Accuracy**: 96.5%
- **Precision**: 100.0% (zero false positives)
- **Recall**: 80.0%
- **F1 Score**: 0.89
The 5 remaining false negatives are all account compromises on well-governed projects - confirming the known boundary of governance-based detection.
## Development
```bash
git clone https://github.com/anicka-net/ossuary-risk.git
cd ossuary-risk
python -m venv .venv && source .venv/bin/activate
pip install -e ".[dev,dashboard]"
cp .env.example .env # add GITHUB_TOKEN
ossuary init
```
## Configuration
```bash
GITHUB_TOKEN=ghp_xxxxxxxxxxxxx # GitHub API access (recommended)
DATABASE_URL=sqlite:///ossuary.db # Default; supports PostgreSQL
OSSUARY_CACHE_DAYS=7 # Score freshness threshold
```
## License
MIT
## Academic Context
MBA thesis research on OSS supply chain risk (due Dec 2026). Key contribution: governance-based risk indicators are observable in public metadata before incidents occur, but they address a specific attack subset - not a universal detector.
| text/markdown | Anicka | null | null | null | null | oss, risk, scoring, security, supply-chain | [
"Development Status :: 3 - Alpha",
"Intended Audience :: Developers",
"License :: OSI Approved :: MIT License",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: 3.13",
"Topic :: Security"
] | [] | null | null | >=3.11 | [] | [] | [] | [
"alembic>=1.13.0",
"fastapi>=0.109.0",
"gitpython>=3.1.0",
"httpx>=0.26.0",
"psycopg2-binary>=2.9.0",
"pydantic-settings>=2.1.0",
"pydantic>=2.5.0",
"python-dateutil>=2.8.0",
"python-dotenv>=1.0.0",
"pyyaml>=6.0",
"rich>=13.0.0",
"sqlalchemy>=2.0.0",
"textblob>=0.18.0",
"typer>=0.9.0",
"... | [] | [] | [] | [
"Homepage, https://github.com/anicka-net/ossuary-risk",
"Repository, https://github.com/anicka-net/ossuary-risk",
"Documentation, https://github.com/anicka-net/ossuary-risk/blob/main/docs/methodology.md"
] | twine/6.2.0 CPython/3.13.11 | 2026-02-19T10:45:53.614192 | ossuary_risk-0.5.1.tar.gz | 102,520 | fa/1f/0001e832259ab9eafe0a616ff86c0cb89c3bc89f9ae0d0d8a48a0c5fa6c9/ossuary_risk-0.5.1.tar.gz | source | sdist | null | false | 6b3173713343328a29f8b2ca48874fd8 | a16c2d152d7b2e72428bda0d5c5baadd65c6d36333c8cee1234f5d9ab6952808 | fa1f0001e832259ab9eafe0a616ff86c0cb89c3bc89f9ae0d0d8a48a0c5fa6c9 | MIT | [] | 228 |
2.4 | jupyternotifyplus | 0.4.2 | Enhanced Jupyter Notebook notifications with inline and end-of-cell magic commands. | # Jupyter Notify Plus

[](https://pypi.org/project/jupyternotifyplus/)



[](https://cngmid.github.io/jupyternotifyplus?ref=readme)

---
Jupyter Notify Plus enhances your notebook workflow with clean, modern inline notifications and end‑of‑cell alerts.
It’s lightweight, intuitive, and designed to integrate seamlessly into your existing environment.
A powerful Jupyter Notebook extension that provides:
- `%notifyme` end-of-cell notifications
- `%notifyme here` inline notifications
- Presets: `success`, `warn`, `error`, `failure`
- Custom icons, timestamps, and output inclusion
- Works in classic Notebook and JupyterLab
## Installation
```bash
pip install jupyternotifyplus
```
## Usage
```bash
%load_ext jupyternotifyplus.notifyme
```
Then:
```bash
%notifyme success
%notifyme here warn "Checkpoint reached"
```
---
## License
MIT License
Copyright (c) 2026 Daniel Cangemi
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the “Software”), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in
all copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED “AS IS”, WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
THE SOFTWARE.
| text/markdown | Daniel | null | null | null | null | null | [] | [] | null | null | >=3.8 | [] | [] | [] | [
"pytest; extra == \"dev\"",
"bump-my-version; extra == \"dev\"",
"git-cliff; extra == \"dev\"",
"mkdocs-material[icons,recommended]>=9.5.0; extra == \"docs\"",
"mkdocs>=1.6.0; extra == \"docs\"",
"mike; extra == \"docs\""
] | [] | [] | [] | [
"Homepage, https://github.com/cngmid/jupyternotifyplus",
"Documentation, https://cngmid.github.io/jupyternotifyplus?ref=pypi"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-19T10:45:21.898092 | jupyternotifyplus-0.4.2.tar.gz | 5,915 | 3c/0d/4b276bc05e4e116ad057edd6cc9956dda2c09041fdb165a9d8a93138e640/jupyternotifyplus-0.4.2.tar.gz | source | sdist | null | false | f12e82a1ff57583c946e32db41b356c3 | a23927c3abf6feebc302e41880da38625b1b5b3482cb4156f40828432feb9c23 | 3c0d4b276bc05e4e116ad057edd6cc9956dda2c09041fdb165a9d8a93138e640 | MIT | [
"LICENSE"
] | 237 |
2.4 | envserv | 1.0.8 | Environment model | # EnvServ
## Before starting, install ```dotenv```
```console
pip install python-dotenv
```
### EnvServ - Model view for easy Python development
#### Example №1
```env
# file: .env
FirstVar = Hello, world!
SecondVar = 42
ThirdVar = 23
```
```python
from envserv import EnvBase
class MyEnv(EnvBase):
__envfile__ = '.env'
FirstVar: str
SecondVar: int
ThirdVar: float
env = MyEnv()
print(env) # EnvServ(FirstVar:<class 'str'> = Hello, world!, SecondVar:<class 'int'> = 42, ThirdVar:<class 'float'> = 23.0)
print(env.FirstVar, env.SecondVar,env.ThirdVar) # Hello, world! 42 23.0
print(type(env.FirstVar), type(env.SecondVar), type(env.ThirdVar)) # <class 'str'> <class 'int'> <class 'float'>
```
#### Example №2
```python
from envserv import EnvBase
class MyEnv(EnvBase):
__envfile__ = '.env'
FirstVar: str
env = MyEnv()
env.FirstVar = "New variable value" # Also changes a variable in the .env file
print(env) # EnvServ(FirstVar:<class 'str'> = New variable value)
print(env.FirstVar) # New variable value
print(type(env.FirstVar)) # <class 'str'>
```
#### Example №3
```env
# file: .env
pass = 100
```
```python
from envserv import EnvBase, variable
class MyEnv(EnvBase):
__envfile__ = '.env'
pass_: int = variable(alias='pass',overwrite=False)
env = MyEnv()
print(env) # EnvServ(pass_:<class 'int'> = 100)
print(env.pass_) # 100
print(type(env.pass_)) # <class 'int'>
env.pass_ = 1 # envserv.errors.EnvVariableError: Error overwriting variable pass_: It cannot be overwritten
```
#### Example №4
```env
# file: .env
A = Text
ERR = this is error
C = [1, 2, 3, 4, 5]
D = {1: 2, 3: 4}
E = null
F = {1,2,3,4,5}
```
```python
from envserv import EnvBase, Variable
class MyEnv(EnvBase):
__envfile__ = '.env'
A: str
B: int = Variable(alias="ERR", error=False)
C: list
D: dict
E: None
F: set
not_in: str = "test"
env = MyEnv()
print(env.all())
print(env.json())
# Output:
# {'A': 'Text', 'B': 'this is error', 'C': [1, 2, 3, 4, 5], 'D': {1: 2, 3: 4}, 'E': None, 'F': {1, 2, 3, 4, 5}, 'not_in': 'test'}
# {"A": "Text", "B": "this is error", "C": [1, 2, 3, 4, 5], "D": {"1": 2, "3": 4}, "E": null, "F": [1, 2, 3, 4, 5], "not_in": "test"}
```
#### Example 5
```env
# file: .env
json_string = {"checked": null}
```
```python
from envserv import EnvBase
from envserv.typing import JSON
class MyEnv(EnvBase):
__envfile__ = '.env'
json_string: JSON
env = MyEnv()
print(env.all())
print(env.json())
# Output:
# {'json_string': {'checked': None}}
# {"json_string": {"checked": null}}
```
## Version logger
### 1.0.8
* Added `JSON` support
* Added `CI/CD`
* Version filtering changed from latest to first
* Code refactoring
### 1.0.7
* Fixed `variable` function
### 1.0.6
* Added the ability to have a default value for a .env variable
* Fixed reading of .env file
### 1.0.5
* Code refactoring has been completed
* Added class __Variable__
* Added `toString` parameter to __all()__ function and added __json()__ function
### 1.0.4
* Added support for list, dict and None variables
* Added parameter encoding to the class instance
* Added function __all()__ to display the dictionary
### 1.0.3
* Fix alias param
* Fix error message
### 1.0.2
* Added setting for docker-compose (Variable \_\_envfile\_\_ does not need to be written)
### 1.0.1
* Added rules for variable (beta)
### 1.0.0
* Model added
* Added variable change
* Added class instance information output
| text/markdown | null | txello <txello7@proton.me> | null | null | null | null | [
"Programming Language :: Python :: 3",
"License :: OSI Approved :: MIT License",
"Operating System :: OS Independent"
] | [] | null | null | >=3.8 | [] | [] | [] | [
"python-dotenv"
] | [] | [] | [] | [
"Homepage, https://github.com/txello/EnvServ",
"Issues, https://github.com/txello/EnvServ/issues"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-19T10:44:45.885457 | envserv-1.0.8.tar.gz | 5,375 | 0f/86/0b1ddac2f94c2fc96fd386500a8b752c550c4cd4165d21cef4daf509fdb6/envserv-1.0.8.tar.gz | source | sdist | null | false | 6a07537497771dd216fa2acb81b4cb57 | 82307adccc154922dfd35a1aab85922d8a1c0c631b08929a67011bd155dcea9e | 0f860b1ddac2f94c2fc96fd386500a8b752c550c4cd4165d21cef4daf509fdb6 | null | [
"LICENSE"
] | 244 |
2.4 | arbi | 1.7.4 | Python client for the ARBI API | # arbi
Official Python client for the [ARBI](https://arbicity.com) API, auto-generated from the OpenAPI specification.
## Installation
```bash
pip install arbi
```
## Quick start
```python
from arbi_client import Client, AuthenticatedClient
from arbi_client.api.user import login_user
from arbi_client.models import LoginRequest
# 1. Log in with the unauthenticated client
with Client(base_url="https://your-instance.arbicity.com") as c:
response = login_user.sync_detailed(
client=c,
body=LoginRequest(
email="user@example.com",
signature="<base64-ed25519-signature>",
timestamp=1700000000,
),
)
login = response.parsed
token = login.access_token
# 2. Use the JWT for all authenticated requests
with AuthenticatedClient(base_url="https://your-instance.arbicity.com", token=token) as c:
...
```
Login uses Ed25519 signature-based authentication: the client derives a keypair from the user's password, signs `email|timestamp`, and sends the signature. The server verifies it against the stored public key and returns a JWT. See the [ARBI documentation](https://arbicity.com) for details on the key derivation and signing process.
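For orientation, signing `email|timestamp` with PyNaCl (already a dependency of this package) could look like the sketch below. The password-to-seed derivation shown is a placeholder only, not the scheme ARBI actually uses; consult the ARBI documentation for the real key derivation before relying on it.
```python
import base64
import hashlib
import time
from nacl.signing import SigningKey
def make_login_signature(email: str, password: str, timestamp: int) -> str:
    # Placeholder derivation: SHA-256 of the password as the 32-byte Ed25519 seed.
    # The real ARBI derivation scheme is defined in its documentation.
    seed = hashlib.sha256(password.encode()).digest()
    signing_key = SigningKey(seed)
    message = f"{email}|{timestamp}".encode()
    return base64.b64encode(signing_key.sign(message).signature).decode()
timestamp = int(time.time())
signature = make_login_signature("user@example.com", "correct horse battery", timestamp)
```
The resulting `signature` and `timestamp` would then populate the `LoginRequest` shown in the quick-start example above.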
## Async support
Every endpoint has both sync and async variants:
```python
from arbi_client import AuthenticatedClient
from arbi_client.api.user import get_user_workspaces
async with AuthenticatedClient(base_url="https://your-instance.arbicity.com", token=token) as c:
workspaces = await get_user_workspaces.asyncio(client=c)
```
## API structure
Each endpoint is a Python module with four functions:
| Function | Blocking | Returns |
|--------------------|----------|------------------------|
| `sync` | Yes | Parsed data or `None` |
| `sync_detailed` | Yes | Full `Response` object |
| `asyncio` | No | Parsed data or `None` |
| `asyncio_detailed` | No | Full `Response` object |
Endpoints are grouped by tag under `arbi_client.api`:
```
arbi_client.api.user # login, register, settings
arbi_client.api.workspace # workspace management
arbi_client.api.document # document upload and tagging
arbi_client.api.conversation # conversations and messages
arbi_client.api.assistant # AI assistant queries
arbi_client.api.tag # tag management
arbi_client.api.configs # configuration management
arbi_client.api.notifications # notification management
arbi_client.api.health # health checks
```
## Links
- [ARBI](https://arbicity.com)
- [PyPI](https://pypi.org/project/arbi/)
| text/markdown | null | Arbitration City Ltd <support@arbi.city> | null | null | MIT | null | [] | [] | null | null | >=3.10 | [] | [] | [] | [
"attrs>=22.2.0",
"httpx<0.29.0,>=0.23.0",
"pynacl<2,>=1.5.0",
"python-dateutil<3,>=2.8.0",
"websockets<17,>=12.0"
] | [] | [] | [] | [] | uv/0.10.2 {"installer":{"name":"uv","version":"0.10.2","subcommand":["publish"]},"python":null,"implementation":{"name":null,"version":null},"distro":{"name":"Ubuntu","version":"24.04","id":"noble","libc":null},"system":{"name":null,"release":null},"cpu":null,"openssl_version":null,"setuptools_version":null,"rustc_version":null,"ci":true} | 2026-02-19T10:44:45.432806 | arbi-1.7.4-py3-none-any.whl | 365,497 | 6c/f5/c8ab3e2418e039c20895b0f6fa2c766abc8b1ec11eca2bc334ddc7897e74/arbi-1.7.4-py3-none-any.whl | py3 | bdist_wheel | null | false | 9ce8ccf850515098a0cff7604aa56f6e | c256e13e3e16b84e41d8a0112d49c6bdbea4bb72685ea7b360900e48f3f20cd6 | 6cf5c8ab3e2418e039c20895b0f6fa2c766abc8b1ec11eca2bc334ddc7897e74 | null | [] | 237 |