metadata_version string | name string | version string | summary string | description string | description_content_type string | author string | author_email string | maintainer string | maintainer_email string | license string | keywords string | classifiers list | platform list | home_page string | download_url string | requires_python string | requires list | provides list | obsoletes list | requires_dist list | provides_dist list | obsoletes_dist list | requires_external list | project_urls list | uploaded_via string | upload_time timestamp[us] | filename string | size int64 | path string | python_version string | packagetype string | comment_text string | has_signature bool | md5_digest string | sha256_digest string | blake2_256_digest string | license_expression string | license_files list | recent_7d_downloads int64 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
2.3 | pixivutil-server-client | 0.1.2 | Async aiohttp client SDK for PixivUtil Server | # pixivutil-server-client
Async `aiohttp` client SDK for PixivUtil Server.
## Example
```python
import asyncio
from pixivutil_client import PixivAsyncClient
async def main() -> None:
async with PixivAsyncClient(
base_url="http://localhost:8000",
api_key="your-api-key",
) as client:
health = await client.health()
print(health)
queued = await client.queue_download_artwork(123456)
print(queued.task_id)
asyncio.run(main())
```
## Install
From PyPI:
```sh
uv pip install pixivutil-server-client
```
From the `pixivutil-server` project root:
```sh
uv pip install -e ./PixivUtilClient
```
## Test
From the `pixivutil-server` project root:
```sh
uv run --package pixivutil-server-client pytest PixivUtilClient/tests
```
| text/markdown | psilabs-dev | null | null | null | null | null | [] | [] | null | null | >=3.12 | [] | [] | [] | [
"aiohttp>=3.13.3",
"pydantic>=2.12.4",
"pixivutil-server-common>=0.1.0"
] | [] | [] | [] | [] | twine/6.1.0 CPython/3.13.7 | 2026-02-21T03:54:02.966285 | pixivutil_server_client-0.1.2.tar.gz | 3,358 | a4/36/ef14c5fce40393763a0956128107f6b073fd8677ddc02d53d282a03b9adb/pixivutil_server_client-0.1.2.tar.gz | source | sdist | null | false | 55337f1fab7ba86e7beae2b001c9beeb | ef60dd3c7ed657588e5ab796e3b87fd2615eb0758f457d6125b477f0327b9c24 | a436ef14c5fce40393763a0956128107f6b073fd8677ddc02d53d282a03b9adb | null | [] | 224 |
2.4 | gpkit-core | 0.2.0 | Package for defining and manipulating geometric programming models. | [<img src="http://gpkit.rtfd.org/en/latest/_images/gplogo.png" width=110 alt="GPkit" />](http://gpkit.readthedocs.org/)
**[Documentation](http://gpkit.readthedocs.org/)** | [Install instructions](http://gpkit.readthedocs.org/en/latest/installation.html) | [Examples](http://gpkit.readthedocs.org/en/latest/examples.html) | [Glossary](https://gpkit.readthedocs.io/en/latest/autodoc/gpkit.html) | [Citing GPkit](http://gpkit.readthedocs.org/en/latest/citinggpkit.html)
GPkit is a Python package for defining and manipulating
geometric programming models, abstracting away the backend solver.
Supported solvers are
[mosek](http://mosek.com)
and [cvxopt](http://cvxopt.org/).
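For reference, a geometric program in standard form (the model class GPkit works with) minimizes a posynomial objective subject to posynomial inequality constraints and monomial equality constraints:

$$\begin{aligned} \text{minimize} \quad & f_0(x) \\ \text{subject to} \quad & f_i(x) \le 1, \quad i = 1, \dots, m \\ & g_j(x) = 1, \quad j = 1, \dots, p \end{aligned}$$

where each $f_i$ is a posynomial and each $g_j$ is a monomial in the positive variables $x$.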
[](https://github.com/whoburg/gpkit/actions/workflows/tests.yml)
[](https://github.com/whoburg/gpkit/actions/workflows/lint.yml)
## Contributing
We welcome contributions! Please see our [Contributing Guide](CONTRIBUTING.md) for details on how to set up your development environment and submit pull requests.
## Acknowledgments
Originally developed with Ned Burnell, whose extensive contributions were foundational to the early design.
| text/markdown | null | Warren Hoburg <whoburg@alum.mit.edu> | null | null | The MIT License (MIT)
Copyright (c) 2025 Warren Hoburg
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE. | null | [
"License :: OSI Approved :: MIT License",
"Operating System :: OS Independent",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: 3.13"
] | [] | null | null | <3.14,>=3.11 | [] | [] | [] | [
"adce>=1.3.2",
"cvxopt>=1.1.8",
"matplotlib>=3.3.0",
"numpy>=1.16.4",
"pint>=0.8.1",
"plotly>=5.3.0",
"scipy>=1.7.0"
] | [] | [] | [] | [
"Homepage, https://www.github.com/beautifulmachines/gpkit-core"
] | twine/6.2.0 CPython/3.11.14 | 2026-02-21T03:53:43.004772 | gpkit_core-0.2.0.tar.gz | 6,844,079 | db/09/d5c5caae408fb21bc70b90b63b2e9e31f546cb2913439222223052fc10ad/gpkit_core-0.2.0.tar.gz | source | sdist | null | false | 5229a68e252b04bdfa0f2e076a013415 | 08a7e50ac23710cb54c254c1f2a2f8eb2a66546fb95bdeee9b8566b5277dda0e | db09d5c5caae408fb21bc70b90b63b2e9e31f546cb2913439222223052fc10ad | null | [
"LICENSE"
] | 297 |
2.4 | pytest-openapi | 0.2.2.dev202602210352 | `pytest --openapi` - an opinionated, lightweight black-box contract tester against a live API using its OpenAPI specification as the source of truth | 



[](https://sinan-ozel.github.io/pytest-openapi/)



# 🧪 OpenAPI Contract Tester
An opinionated, lightweight **black-box contract tester** against a **live API** using its OpenAPI specification as the source of truth.
This tool validates OpenAPI quality, generates test cases from schemas, and verifies that real HTTP responses match the contract.
This "certifies" that the documentation is complete with descriptions, examples, and schemas, and that the endpoints behave as the documentation suggests.
## Guiding Principles
1. A service needs to document itself clearly (schemas, descriptions, and examples).
2. When those examples and schemas are exercised, the service should respond as the documentation describes.
📚 **[Read the full documentation](https://sinan-ozel.github.io/pytest-openapi/)**
## ✨ What it does
### ▶️ Quick Example

```bash
pytest --openapi=http://localhost:8000 -v
```
**Console Output:**
```
====================================================================================== test session starts =======================================================================================
platform linux -- Python 3.11.14, pytest-9.0.2, pluggy-1.6.0 -- /usr/local/bin/python3.11
cachedir: .pytest_cache
rootdir: /workspace
plugins: openapi-0.2.1, depends-1.0.1, mock-3.15.1
collected 3 items
created 2 items from openapi examples
created 20 items generated from schema
tests/test_samples/test_sample_math.py::test_sample_addition PASSED [ 4%]
tests/test_samples/test_sample_math.py::test_sample_multiplication PASSED [ 8%]
.::test_openapi[POST /email [example-1]] PASSED [ 12%]
.::test_openapi[POST /email [generated-2]] PASSED [ 16%]
.::test_openapi[POST /email [generated-3]] PASSED [ 20%]
.::test_openapi[POST /email [generated-4]] PASSED [ 24%]
.::test_openapi[POST /email [generated-5]] PASSED [ 28%]
.::test_openapi[POST /email [generated-6]] PASSED [ 32%]
.::test_openapi[POST /email [generated-7]] PASSED [ 36%]
.::test_openapi[POST /email [generated-8]] PASSED [ 40%]
.::test_openapi[POST /email [generated-9]] PASSED [ 44%]
.::test_openapi[POST /email [generated-10]] PASSED [ 48%]
.::test_openapi[POST /email [generated-11]] PASSED [ 52%]
.::test_openapi[POST /email_bad [example-1]] FAILED [ 56%]
.::test_openapi[POST /email_bad [generated-2]] FAILED [ 60%]
.::test_openapi[POST /email_bad [generated-3]] FAILED [ 64%]
.::test_openapi[POST /email_bad [generated-4]] FAILED [ 68%]
.::test_openapi[POST /email_bad [generated-5]] FAILED [ 72%]
.::test_openapi[POST /email_bad [generated-6]] FAILED [ 76%]
.::test_openapi[POST /email_bad [generated-7]] FAILED [ 80%]
.::test_openapi[POST /email_bad [generated-8]] FAILED [ 84%]
.::test_openapi[POST /email_bad [generated-9]] FAILED [ 88%]
.::test_openapi[POST /email_bad [generated-10]] FAILED [ 92%]
.::test_openapi[POST /email_bad [generated-11]] FAILED [ 96%]
tests/test_samples/test_sample_math.py::test_sample_string_operations PASSED [100%]
📝 Full test report saved to: /workspace/tests/report.md
(Configure output file with: --openapi-markdown-output=<filename>)
============================================================================================ FAILURES ============================================================================================
```
**Detailed Report** (`report.md`):
## Test #12 ❌
📋 *Test case from OpenAPI example*
**Endpoint:** `POST /email_bad`
### Request Body
```json
{
"body": "Hi Bob, how are you?",
"from": "alice@example.com",
"subject": "Hello",
"to": "bob@example.com"
}
```
### Expected Response
**Status:** `201`
```json
{
"body": "Hi Bob, how are you?",
"from": "alice@example.com",
"id": "12",
"subject": "Hello",
"to": "bob@example.com"
}
```
### Actual Response
**Status:** `201`
```json
{
"body": "Hi Bob, how are you?",
"from": "alice@example.com",
"id": 12,
"subject": 12345,
"to": "bob@example.com"
}
```
### ❌ Error
```
Type mismatch for key 'subject': expected str, got int. Expected value: Hello, Actual value: 12345
```
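A check that produces an error like the one above can be sketched as follows (an illustration only, not the plugin's actual implementation):

```python
def type_mismatches(expected: dict, actual: dict) -> list[str]:
    """Compare an actual JSON object against a documented example,
    reporting keys whose type differs from the example's type."""
    errors = []
    for key, exp in expected.items():
        act = actual.get(key)
        if type(exp) is not type(act):
            errors.append(
                f"Type mismatch for key '{key}': expected {type(exp).__name__}, "
                f"got {type(act).__name__}. "
                f"Expected value: {exp}, Actual value: {act}"
            )
    return errors

print(type_mismatches({"subject": "Hello"}, {"subject": 12345}))
# → ["Type mismatch for key 'subject': expected str, got int. Expected value: Hello, Actual value: 12345"]
```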
Each OpenAPI test appears as an individual pytest test item.
✔️ Validates OpenAPI request/response definitions
✔️ Enforces schema field descriptions
✔️ Generates test cases from schemas and checks response codes and types
✔️ Tests the examples
✔️ Tests **GET / POST / PUT / DELETE** endpoints
✔️ Compares live responses against examples
✔️ Produces a readable test report
# ▶️ Detailed Example
## Install
```bash
pip install pytest-openapi
```
## Run
Say that you have a service running at port `8000` on `localhost`. Then, run:
```bash
pytest --openapi=http://localhost:8000
```
### Options
- `--openapi=BASE_URL`: Run contract tests against the API at the specified base URL
- `--openapi-no-strict-example-checking`: Use lenient validation for example-based tests
- `--openapi-markdown-output=FILENAME`: (Optional) Write test results in Markdown format to the specified file
- `--openapi-ignore=REGEXP`: Completely ignore endpoints whose path matches the given regular expression. Useful to skip known-broken or auth-protected paths.
- `-v`: Verbose mode - shows full test names
- `-vv`: Very verbose mode - shows request/response with 50 character truncation
- `-vvv`: Very very verbose mode - shows full request/response without truncation
Examples:
```bash
pytest --openapi=http://localhost:8000 --openapi-ignore=mcp
pytest --openapi=http://localhost:8000 --openapi-ignore='(auth|mcp)'
pytest --openapi=http://localhost:8000 --openapi-ignore='(v[0-9]+/auth|mcp)'
pytest --openapi=http://localhost:8000 -vv # Show truncated request/response
pytest --openapi=http://localhost:8000 -vvv # Show full request/response
```
#### Strict vs Lenient Example Checking
By default, pytest-openapi performs **strict matching** on example-based tests:
- When your OpenAPI spec includes explicit request/response examples, the actual response must match the example values exactly
- This ensures examples accurately reflect real API behavior
However, sometimes examples contain placeholder values (like `[1, 2, 3]`) that don't match actual responses (like `[]`). Use `--openapi-no-strict-example-checking` for lenient validation:
```bash
pytest --openapi=http://localhost:8000 --openapi-no-strict-example-checking
```
**Lenient mode** validates:
- Structure and types match (all expected keys present, correct types)
- But ignores exact values and array lengths
**Note**: Schema-generated tests always use schema validation (not affected by this flag).
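As a rough sketch of what lenient matching means (this mirrors the behavior described above; it is not the plugin's internal code):

```python
def lenient_match(expected, actual):
    """Check that structure and types match, ignoring exact values
    and array lengths."""
    if isinstance(expected, dict):
        return (isinstance(actual, dict)
                and all(k in actual and lenient_match(v, actual[k])
                        for k, v in expected.items()))
    if isinstance(expected, list):
        # Array lengths are ignored; elements only need the right shape.
        return isinstance(actual, list) and (
            not expected
            or all(lenient_match(expected[0], a) for a in actual))
    # Exact values are ignored; only the type must match.
    return type(expected) is type(actual)

# A placeholder example [1, 2, 3] against an empty actual list passes:
print(lenient_match({"id": 1, "tags": [1, 2, 3]}, {"id": 42, "tags": []}))  # True
# A type mismatch still fails:
print(lenient_match({"subject": "Hello"}, {"subject": 12345}))  # False
```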
#### Markdown Output Format
You can optionally generate test reports in Markdown format and save them to a file:
```bash
pytest --openapi=http://localhost:8000 --openapi-markdown-output=report.md
```
**Note**: If you don't specify `--openapi-markdown-output`, no markdown file is written. The plugin only outputs to pytest's standard output.
This creates a `report.md` file with:
- Summary statistics (total, passed, failed tests)
- Formatted code blocks for JSON data
- Clear sections for expected vs actual responses
- Error details in formatted blocks
The markdown report is written independently of stdout output.
**Example output**: See [example_report.md](example_report.md) for a sample markdown report.
## Server
An example server, `email-server`, lives at [tests/test_servers/email_server/server.py](tests/test_servers/email_server/server.py).
## Resulting Tests
[tests/test_servers/email_server/email_test_output.txt](tests/test_servers/email_server/email_test_output.txt)
# Future Plans / TODO
This is a work in progress.
- [ ] A check that the example matches the schema
- [ ] Ask that 400 responses be in the documentation.
- [ ] A check for regexp and email formats.
## Issues? Feedback?
Seriously, this is a work in progress. If you try it and something does not work as intended or expected, open a ticket!
I may be able to fix it quickly, especially if you can provide a minimal example that reproduces the issue.
## In Consideration
- [ ] Use LLM-as-a-judge to assess the error messages and check their spelling.
# Contributing
Contributions are welcome!
The only requirement is 🐳 Docker.
Tests are containerized; run them using the VS Code task `test`. If you don't want to use VS Code, the command is `docker compose -f ./tests/docker-compose.yaml --project-directory ./tests up --build --abort-on-container-exit --exit-code-from test`. Please run this before making a PR.
There is also a development environment for VS Code, if you need it. On this environment, you can run the task `run-mock-server` to run one of the [mock servers](tests/test_servers) and see the output.
You can add your own mock server, and then add integration tests. Just follow the same pattern as the existing tests to make the call, e.g. `subprocess.run(['pytest', '--openapi=http://your-server:8000'])`.
Please reformat and lint before making a PR. The VS Task is `lint`, and if you don't want to use VS Code, the command is: `docker compose -f ./lint/docker-compose.yaml --project-directory ./lint up --build --abort-on-container-exit --exit-code-from linter`. Run this before making a PR, please.
If you add functionality, please also update the documentation.
Please submit a pull request or open an issue for any bugs or feature requests.
The moment your PR is merged, a dev release is published; pin that version number to start using your changes.
# License
MIT License. See [LICENSE](LICENSE) file for the specific wording.
| text/markdown | null | Sinan Ozel <coding@sinan.slmail.me> | null | null | MIT License
Copyright (c) 2025 Sinan Ozel
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.
| null | [
"Programming Language :: Python :: 3",
"License :: OSI Approved :: MIT License",
"Operating System :: OS Independent"
] | [] | null | null | null | [] | [] | [] | [
"pytest>=7.0.0",
"requests>=2.31.0",
"exrex>=0.11.0",
"pytest>=7.0.0; extra == \"test\"",
"pytest-depends>=1.0.1; extra == \"test\"",
"pytest-mock>=3.14.0; extra == \"test\"",
"httpx>=0.28.1; extra == \"test\"",
"isort>=5.12.0; extra == \"dev\"",
"ruff>=0.12.11; extra == \"dev\"",
"black>=24.0.0; extra == \"dev\"",
"docformatter>=1.7.5; extra == \"dev\"",
"mkdocs-material>=9.0.0; extra == \"docs\"",
"mkdocstrings[python]>=0.24.0; extra == \"docs\"",
"mike>=2.0.0; extra == \"docs\"",
"packaging>=25.0; extra == \"publish\""
] | [] | [] | [] | [
"Homepage, https://github.com/sinan-ozel/pytest-openapi",
"Issues, https://github.com/sinan-ozel/pytest-openapi/issues"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-21T03:53:04.686136 | pytest_openapi-0.2.2.dev202602210352.tar.gz | 41,262 | f2/e5/da97d019580e8e368a68688bd674cd40b55ff9f2d5d2235b816b9dd61603/pytest_openapi-0.2.2.dev202602210352.tar.gz | source | sdist | null | false | b26529bc61df454990e593d89272c360 | aa3fed7f548ed4d0664b2622cc0fff530c81e555d2110d846c250f1978fe829f | f2e5da97d019580e8e368a68688bd674cd40b55ff9f2d5d2235b816b9dd61603 | null | [
"LICENSE"
] | 210 |
2.4 | cici-tools | 0.19.3 | Continuous Integration Catalog Interface | # saferatday0 cici
<!-- BADGIE TIME -->
[](https://gitlab.com/saferatday0/cici/-/commits/main)
[](https://gitlab.com/saferatday0/cici/-/commits/main)
[](https://gitlab.com/saferatday0/cici/-/releases)
[](https://github.com/pre-commit/pre-commit)
[](https://github.com/prettier/prettier)
<!-- END BADGIE TIME -->
**cici**, short for **Continuous Integration Catalog
Interface**, is a framework and toolkit for managing the integration and
lifecycle of packaged CI/CD components in a software delivery pipeline.
cici enables the efficient sharing of CI/CD code in an organization, and
eliminates a major source of friction that otherwise leads to poor adoption of
automation and DevOps practices.
cici is a foundational component of [saferatday0](https://saferatday0.dev) and
powers the [saferatday0 library](https://gitlab.com/saferatday0/library).
## Installation
```sh
pip install cici-tools
```
## Usage
### `cici bundle`
Flatten `extends` keywords to make zero-dependency GitLab CI/CD files.
```bash
cici bundle
```
```console
$ cici bundle
⚡ python-autoflake.yml
⚡ python-black.yml
⚡ python-build-sdist.yml
⚡ python-build-wheel.yml
⚡ python-import-linter.yml
⚡ python-isort.yml
⚡ python-mypy.yml
⚡ python-pyroma.yml
⚡ python-pytest.yml
⚡ python-setuptools-bdist-wheel.yml
⚡ python-setuptools-sdist.yml
⚡ python-twine-upload.yml
⚡ python-vulture.yml
```
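Conceptually, flattening `extends` means merging each job with its parents so the result has no cross-file dependencies. A rough sketch, assuming the CI config has already been parsed into Python dicts (this is an illustration, not cici's actual implementation):

```python
def flatten(jobs):
    """Resolve `extends` chains, producing self-contained jobs.
    Hidden template jobs (names starting with '.') are dropped."""
    def resolve(name):
        job = dict(jobs[name])
        parents = job.pop("extends", [])
        if isinstance(parents, str):
            parents = [parents]
        merged = {}
        for parent in parents:
            merged.update(resolve(parent))  # parent keys first...
        merged.update(job)                  # ...child keys win
        return merged

    return {name: resolve(name)
            for name in jobs if not name.startswith(".")}

jobs = {
    ".python": {"image": "python:3.12"},
    "pytest": {"extends": ".python", "script": ["pytest"]},
}
print(flatten(jobs))
# → {'pytest': {'image': 'python:3.12', 'script': ['pytest']}}
```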
### `cici readme`
Generate a README for your pipeline project:
```bash
cici readme
```
To customize the output, copy the default README template to `README.md.j2` in
your project root and modify:
```j2
# {{ name }} pipeline
{%- include "brief.md.j2" %}
{%- include "description.md.j2" %}
{%- include "groups.md.j2" %}
{%- include "targets.md.j2" %}
{%- include "variables.md.j2" %}
```
### `cici update`
Update to the latest GitLab CI/CD `include` versions available.
```bash
cici update
```
```console
$ cici update
updated saferatday0/library/python to 0.5.1
updated saferatday0/library/gitlab from 0.1.0 to 0.2.2
```
## License
Copyright 2025 UL Research Institutes.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use
this file except in compliance with the License. You may obtain a copy of the
License at
<http://www.apache.org/licenses/LICENSE-2.0>
Unless required by applicable law or agreed to in writing, software distributed
under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR
CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
| text/markdown | null | Digital Safety Research Institute <contact@dsri.org> | null | null | Apache-2.0 | null | [
"Development Status :: 1 - Planning",
"Intended Audience :: Developers",
"Topic :: Software Development :: Build Tools",
"License :: OSI Approved :: Apache Software License",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.9",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12"
] | [] | null | null | >=3.9 | [] | [] | [] | [
"appdirs",
"jinja2",
"jsonschema",
"markdown",
"msgspec",
"python-decouple",
"ruamel.yaml",
"termcolor",
"pytest; extra == \"test\"",
"ci; extra == \"keywords\"",
"pipeline; extra == \"keywords\"",
"python; extra == \"keywords\""
] | [] | [] | [] | [
"Home, https://gitlab.com/saferatday0/cici",
"Issues, https://gitlab.com/saferatday0/cici/-/issues"
] | twine/6.2.0 CPython/3.11.14 | 2026-02-21T03:52:59.779357 | cici_tools-0.19.3.tar.gz | 85,845 | 2f/5f/02f5e9c90c5f15c5018a61c0bc4790f478ebf06c4736edb59265077f7d4d/cici_tools-0.19.3.tar.gz | source | sdist | null | false | d98e49a37242da66fd9b7163f3b89c97 | dc33c43cc13719fd02a1eab18c4d1bd312f464bbbd93bbeb56e1ebde27c384f7 | 2f5f02f5e9c90c5f15c5018a61c0bc4790f478ebf06c4736edb59265077f7d4d | null | [
"LICENSE",
"NOTICE"
] | 225 |
2.1 | odoo-addon-sale-purchase-force-vendor | 16.0.1.0.3.2 | Sale Purchase Force Vendor | .. image:: https://odoo-community.org/readme-banner-image
:target: https://odoo-community.org/get-involved?utm_source=readme
:alt: Odoo Community Association
==========================
Sale Purchase Force Vendor
==========================
..
!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
!! This file is generated by oca-gen-addon-readme !!
!! changes will be overwritten. !!
!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
!! source digest: sha256:ba07793287f048fef6d2dd0a08e4336401499be5d7481c5d02df2bd95ebe48bb
!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
.. |badge1| image:: https://img.shields.io/badge/maturity-Beta-yellow.png
:target: https://odoo-community.org/page/development-status
:alt: Beta
.. |badge2| image:: https://img.shields.io/badge/license-AGPL--3-blue.png
:target: http://www.gnu.org/licenses/agpl-3.0-standalone.html
:alt: License: AGPL-3
.. |badge3| image:: https://img.shields.io/badge/github-OCA%2Fpurchase--workflow-lightgray.png?logo=github
:target: https://github.com/OCA/purchase-workflow/tree/16.0/sale_purchase_force_vendor
:alt: OCA/purchase-workflow
.. |badge4| image:: https://img.shields.io/badge/weblate-Translate%20me-F47D42.png
:target: https://translation.odoo-community.org/projects/purchase-workflow-16-0/purchase-workflow-16-0-sale_purchase_force_vendor
:alt: Translate me on Weblate
.. |badge5| image:: https://img.shields.io/badge/runboat-Try%20me-875A7B.png
:target: https://runboat.odoo-community.org/builds?repo=OCA/purchase-workflow&target_branch=16.0
:alt: Try me on Runboat
|badge1| |badge2| |badge3| |badge4| |badge5|
This module allows you to select a vendor at the sale order line level when a route is defined.
**Table of contents**
.. contents::
:local:
Configuration
=============
To configure this module, you need to:
#. Install *sale_management* app.
#. Go to *Inventory -> Configuration -> Settings* and check "Multi-Step Routes" option.
#. Go to *Inventory -> Configuration -> Routes > Buy* and check "Sales Order Lines" option.
#. Go to *Inventory -> Configuration -> Routes*, filter by "Archived" and unarchive the MTO route.
#. Go to *Sale -> Products -> Products*.
#. Create a new product with the following options:
* [Purchase tab] `Vendors`: Set different vendors (Vendor A + Vendor B).
* [Inventory tab] `Routes`: Buy and MTO
Usage
=====
#. Go to *Sale -> Orders -> Quotations* and create a new Quotation.
#. Create a new line with the following options:
* `Route`: MTO.
* `Vendor`: Vendor B.
#. Confirm sale order.
#. A new purchase order will have been created to Vendor B.
#. If you don't want to apply the restriction, you can uncheck the "Restrict allowed vendors in sale orders" field in *Purchase > Configuration > Products*.
Bug Tracker
===========
Bugs are tracked on `GitHub Issues <https://github.com/OCA/purchase-workflow/issues>`_.
In case of trouble, please check there if your issue has already been reported.
If you spotted it first, help us to smash it by providing a detailed and welcomed
`feedback <https://github.com/OCA/purchase-workflow/issues/new?body=module:%20sale_purchase_force_vendor%0Aversion:%2016.0%0A%0A**Steps%20to%20reproduce**%0A-%20...%0A%0A**Current%20behavior**%0A%0A**Expected%20behavior**>`_.
Do not contact contributors directly about support or help with technical issues.
Credits
=======
Authors
~~~~~~~
* Tecnativa
Contributors
~~~~~~~~~~~~
* `Tecnativa <https://www.tecnativa.com>`_:
* Víctor Martínez
* Pedro M. Baeza
Maintainers
~~~~~~~~~~~
This module is maintained by the OCA.
.. image:: https://odoo-community.org/logo.png
:alt: Odoo Community Association
:target: https://odoo-community.org
OCA, or the Odoo Community Association, is a nonprofit organization whose
mission is to support the collaborative development of Odoo features and
promote its widespread use.
.. |maintainer-victoralmau| image:: https://github.com/victoralmau.png?size=40px
:target: https://github.com/victoralmau
:alt: victoralmau
Current `maintainer <https://odoo-community.org/page/maintainer-role>`__:
|maintainer-victoralmau|
This module is part of the `OCA/purchase-workflow <https://github.com/OCA/purchase-workflow/tree/16.0/sale_purchase_force_vendor>`_ project on GitHub.
You are welcome to contribute. To learn how please visit https://odoo-community.org/page/Contribute.
| null | Tecnativa, Odoo Community Association (OCA) | support@odoo-community.org | null | null | AGPL-3 | null | [
"Programming Language :: Python",
"Framework :: Odoo",
"Framework :: Odoo :: 16.0",
"License :: OSI Approved :: GNU Affero General Public License v3"
] | [] | https://github.com/OCA/purchase-workflow | null | >=3.10 | [] | [] | [] | [
"odoo<16.1dev,>=16.0a"
] | [] | [] | [] | [] | twine/6.2.0 CPython/3.12.3 | 2026-02-21T03:52:43.425794 | odoo_addon_sale_purchase_force_vendor-16.0.1.0.3.2-py3-none-any.whl | 33,845 | 66/90/34b93a5b55ce897dad7ea2af32267af9c8994e1345e0cda4e188e6aa1375/odoo_addon_sale_purchase_force_vendor-16.0.1.0.3.2-py3-none-any.whl | py3 | bdist_wheel | null | false | efa6d5cc8442bb404c73aa11411ad3a2 | a6eea0192242dd5235a11a52efc84750b8cf71d4976e9ec66b48ae888b79a439 | 669034b93a5b55ce897dad7ea2af32267af9c8994e1345e0cda4e188e6aa1375 | null | [] | 70 |
2.4 | cognitive-memory-layer | 1.3.1 | Neuro-inspired memory system for LLMs (server + Python SDK) | # cognitive-memory-layer
**Python SDK for the Cognitive Memory Layer** — neuro-inspired memory for AI applications. Store, retrieve, and reason over memories with sync/async clients or an in-process embedded engine.
The Cognitive Memory Layer (CML) gives LLMs a neuro-inspired memory system: episodic and semantic storage, consolidation, and active forgetting. It fits into agents, RAG pipelines, and personalized apps as a persistent, queryable memory backend. This SDK provides sync and async HTTP clients for a CML server, plus an optional in-process embedded engine (lite mode: SQLite and local embeddings, no server). You get write/read/turn, sessions, admin and batch operations, and a helper for OpenAI chat.
**Who it's for:** Developers building AI applications that need persistent, queryable memory — chatbots, agents, evaluation pipelines, and personalized assistants.
**What you can do:**
- Power agent loops with retrieved context and store observations in memory.
- Add memory to RAG pipelines so retrieval is informed by prior interactions.
- Personalize by user or session with namespaces and session-scoped context.
- Run benchmarks with eval mode and temporal fidelity (historical timestamps). For bulk evaluation, the server supports `LLM_INTERNAL__*` and the eval script supports `--ingestion-workers`; see [configuration](docs/configuration.md).
- Run embedded without a server for development, demos, or single-machine apps.
[](https://pypi.org/project/cognitive-memory-layer/)
[](https://pypi.org/project/cognitive-memory-layer/)
[](https://www.gnu.org/licenses/gpl-3.0)
[](https://github.com/avinash-mall/CognitiveMemoryLayer/tree/main/packages/py-cml/tests)
[](https://github.com/avinash-mall/CognitiveMemoryLayer)
**What's new (1.1.0):** Dashboard admin methods — 13 new methods for sessions, rate limits, knowledge graph, configuration, labile status, retrieval testing, job history, and bulk actions. Unreleased changes (e.g. `ReadResponse.constraints`, `user_timezone` on read/turn, embedded read filter passthrough) are in [CHANGELOG](CHANGELOG.md).
---
## Installation
```bash
pip install cognitive-memory-layer
```
**Embedded mode** (run the CML engine in-process, no server). In lite mode, only the **episodic** (vector) store is used; the neocortical (graph/semantic) store is disabled, so there is no knowledge graph or semantic consolidation. Best for development, demos, or single-machine apps.
```bash
pip install cognitive-memory-layer[embedded]
```
From the monorepo, the server and SDK are built from the **repository root** (single `pyproject.toml`). Install in editable mode with optional extras:
```bash
# From repo root: install server + SDK
pip install -e .
# With embedded mode (in-process engine)
pip install -e ".[embedded]"
```
---
## Quick start
**Sync client** — Connect to a CML server, write a memory, read by query, and run a turn with a session; use `result.context` for LLM injection and `result.memories` (or `result.constraints` when the server returns them) for structured access.
```python
from cml import CognitiveMemoryLayer
with CognitiveMemoryLayer(api_key="sk-...", base_url="http://localhost:8000") as memory:
memory.write("User prefers vegetarian food.")
result = memory.read("What does the user eat?")
print(result.context) # Formatted for LLM injection
for m in result.memories:
print(m.text, m.relevance)
turn = memory.turn(user_message="What should I eat tonight?", session_id="session-001")
print(turn.memory_context)
```
**Async client** — Same flow as sync; use `async with` and `await` for all operations.
```python
import asyncio
from cml import AsyncCognitiveMemoryLayer
async def main():
async with AsyncCognitiveMemoryLayer(api_key="sk-...", base_url="http://localhost:8000") as memory:
await memory.write("User prefers dark mode.")
result = await memory.read("user preferences")
print(result.context)
asyncio.run(main())
```
**Embedded mode** — No server: SQLite plus local embeddings (lite mode). Use `db_path` for persistence.
```python
import asyncio
from cml import EmbeddedCognitiveMemoryLayer
async def main():
async with EmbeddedCognitiveMemoryLayer() as memory:
await memory.write("User prefers vegetarian food.")
result = await memory.read("dietary preferences")
print(result.context)
asyncio.run(main())
# Persistent: EmbeddedCognitiveMemoryLayer(db_path="./my_memories.db")
```
**Get context for injection** — Use `get_context(query)` when you only need a formatted string for the LLM:
```python
with CognitiveMemoryLayer(api_key="sk-...", base_url="http://localhost:8000") as memory:
context = memory.get_context("user preferences")
# Inject context into your system prompt or RAG pipeline
```
**Session-scoped flow** — Use `memory.session(name="...")` to scope writes and reads to a session:
```python
with CognitiveMemoryLayer(api_key="sk-...", base_url="http://localhost:8000") as memory:
with memory.session(name="session-001") as session:
session.write("User asked about Italian food.")
session.turn(user_message="Any good places nearby?", assistant_response="...")
```
**More usage:** Timezone-aware retrieval with `read(..., user_timezone="America/New_York")` or `turn(..., user_timezone="America/New_York")`. Batch operations: `batch_write([{"content": "..."}, ...])` and `batch_read(["query1", "query2"])` for multiple writes or reads.
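Timezone-aware retrieval matters because the same instant falls on different calendar days in different zones. A quick standard-library illustration of the idea (not SDK code):

```python
from datetime import datetime, timezone
from zoneinfo import ZoneInfo  # Python 3.9+

# 02:30 UTC on Jan 1 is still "yesterday" in New York but already
# "today" in Tokyo -- which is why read()/turn() accept user_timezone.
instant = datetime(2025, 1, 1, 2, 30, tzinfo=timezone.utc)
print(instant.astimezone(ZoneInfo("America/New_York")).date())  # 2024-12-31
print(instant.astimezone(ZoneInfo("Asia/Tokyo")).date())        # 2025-01-01
```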
---
## Configuration
**Client:** Environment variables (use `.env` or set directly): `CML_API_KEY`, `CML_BASE_URL`, `CML_TENANT_ID`, `CML_TIMEOUT`, `CML_MAX_RETRIES`, `CML_ADMIN_API_KEY`, `CML_VERIFY_SSL`. Use `CMLConfig` for validated, reusable config. See [Configuration](docs/configuration.md).
**Constructor:**
```python
memory = CognitiveMemoryLayer(
    api_key="sk-...",
    base_url="http://localhost:8000",
    tenant_id="my-tenant",
)
```
Or pass a config object: `from cml import CMLConfig` then `CognitiveMemoryLayer(config=config)`.
**Embedded:** Use `EmbeddedConfig` (or constructor args). Options: `storage_mode` (`lite` | `standard` | `full`; only `lite` is implemented), `tenant_id`, `database`, `embedding`, `llm`, `auto_consolidate`, `auto_forget`. Embedding and LLM are read from `.env` when not set: `EMBEDDING__PROVIDER`, `EMBEDDING__MODEL`, `EMBEDDING__DIMENSIONS`, `EMBEDDING__BASE_URL`, `LLM__MODEL`, `LLM__BASE_URL`. Lite mode uses SQLite and local embeddings; pass `db_path` for a persistent database. Full details in [Configuration](docs/configuration.md).
---
## Features
| Mode | Description |
|------|-------------|
| **Client** | Sync and async HTTP clients for a running CML server; context managers |
| **Embedded** | In-process engine (lite mode: SQLite + local embeddings); no server. Embedded `read()` passes `memory_types`, `since`, and `until` to the orchestrator. |
**Memory API:** `write`, `read`, `read_safe`, `turn`, `update`, `forget`, `stats`, `get_context`, `create_session`, `get_session_context`, `delete_all`, `remember` (alias for write), `search` (alias for read), `health`. Options: `user_timezone` on `read()` and `turn()` for timezone-aware "today"/"yesterday"; `timestamp` on `write()`, `turn()`, `remember()` for event time; `eval_mode` on `write()`/`remember()` for benchmark responses. Write supports `context_tags`, `session_id`, `memory_type`, `namespace`, `metadata`, `agent_id`. Read supports `memory_types`, `since`, `until`, `response_format` (`packet` | `list` | `llm_context`).
**Response shape:** `ReadResponse` has `memories`, `facts`, `preferences`, `episodes`, `constraints` (when the server has constraint extraction), and `context` (formatted string for LLM injection).
**Server compatibility:** The server supports `delete_all` (admin API key), read filters and `user_timezone` (both sent only when the server supports them), response formats, write `metadata` and `memory_type`, and session-scoped context. The server can use LLM-based extraction (constraints, facts, salience, importance) when `FEATURES__USE_LLM_*` flags are enabled; see [UsageDocumentation](../ProjectPlan/UsageDocumentation.md) § Configuration Reference.
**Session and namespace:** `memory.session(name=...)` (SessionScope) scopes writes/reads/turns to a session. `with_namespace(namespace)` returns a `NamespacedClient` (and async `AsyncNamespacedClient`) that injects namespace into write, update, and batch_write.
**Admin & batch:** `batch_write`, `batch_read`, `consolidate`, `run_forgetting`, `reconsolidate`, `with_namespace`, `iter_memories`, `list_tenants`, `get_events`, `component_health`. Dashboard admin (require `CML_ADMIN_API_KEY`): `get_sessions` (active sessions from Redis), `get_rate_limits` (rate-limit usage per API key), `get_request_stats` (hourly request volume), `get_graph_stats` (Neo4j node/edge stats), `explore_graph` / `search_graph` (knowledge graph), `get_config` / `update_config` (runtime config), `get_labile_status` (reconsolidation status), `test_retrieval` (retrieval test), `get_jobs` (consolidation/forgetting/reconsolidation job history), `bulk_memory_action` (archive/silence/delete in bulk).
**Embedded extras:** `EmbeddedConfig` for storage_mode, embedding/LLM, `auto_consolidate`, `auto_forget`. Export/import: `export_memories`, `import_memories` (and async `export_memories_async`, `import_memories_async`) for migration between embedded and server.
**OpenAI integration:** `CMLOpenAIHelper(memory_client, openai_client)` for memory-augmented chat. Set `OPENAI_MODEL` or `LLM__MODEL` in `.env`.
```python
from openai import OpenAI
from cml import CognitiveMemoryLayer
from cml.integrations import CMLOpenAIHelper
memory = CognitiveMemoryLayer(api_key="...", base_url="...")
helper = CMLOpenAIHelper(memory, OpenAI())
response = helper.chat("What should I eat tonight?", session_id="s1")
```
**Developer:** `read_safe` (returns empty on connection/timeout), `memory.session(name=...)`, `configure_logging("DEBUG")`, typed models (`py.typed`). Typed exceptions: `AuthenticationError`, `AuthorizationError`, `ValidationError`, `RateLimitError`, `NotFoundError`, `ServerError`, `CMLConnectionError`, `CMLTimeoutError`. The `MemoryProvider` protocol is available for custom backends. See [API Reference](docs/api-reference.md).
**Temporal fidelity:** Optional `timestamp` in `write()`, `turn()`, and `remember()` enables historical data replay for benchmarks, migration, and testing. See [Temporal Fidelity](docs/temporal-fidelity.md).
**Eval mode:** `eval_mode=True` in `write()` or `remember()` returns `eval_outcome` and `eval_reason` (stored/skipped and write-gate reason) for benchmark scripts. See [API Reference — Eval mode](docs/api-reference.md#eval-mode-write-gate).
---
## Documentation
- [Getting Started](docs/getting-started.md)
- [API Reference](docs/api-reference.md)
- [Configuration](docs/configuration.md)
- [Examples](docs/examples.md)
- [Temporal Fidelity](docs/temporal-fidelity.md)
- [Security policy](../../SECURITY.md)
[GitHub repository](https://github.com/avinash-mall/CognitiveMemoryLayer) — source, issues, server setup
[CHANGELOG](CHANGELOG.md)
---
## Testing
The SDK has **175 tests** (unit, integration, embedded, e2e). From the **repository root**:
```bash
# Run all SDK tests
pytest packages/py-cml/tests -v
# Unit only
pytest packages/py-cml/tests/unit -v
# Integration (requires CML API; set CML_BASE_URL, CML_API_KEY)
pytest packages/py-cml/tests/integration -v
# Embedded (requires embedding/LLM from .env or skip)
pytest packages/py-cml/tests/embedded -v
# E2E (requires CML API)
pytest packages/py-cml/tests/e2e -v
```
Some integration, embedded, and e2e tests skip when the CML server or embedding model is unavailable. See the root [tests/README.md](../../tests/README.md) and [tests/SKIPPED_TESTS_REPORT.md](../../tests/SKIPPED_TESTS_REPORT.md).
---
## License
GPL-3.0-or-later. See [LICENSE](LICENSE).
| text/markdown | CognitiveMemoryLayer Team | null | null | null | null | ai, cognitive, cqrs, event-sourcing, llm, memory, rag, sdk | [
"Development Status :: 3 - Alpha",
"Intended Audience :: Developers",
"License :: OSI Approved :: GNU General Public License v3 (GPLv3)",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: 3.13",
"Topic :: Scientific/Engineering :: Artificial Intelligence",
"Topic :: Software Development :: Libraries :: Python Modules",
"Typing :: Typed"
] | [] | null | null | >=3.11 | [] | [] | [] | [
"asyncpg>=0.30.0",
"celery>=5.4",
"chonkie[semantic]",
"fastapi>=0.115.0",
"httpx>=0.27",
"neo4j>=5.0",
"openai>=1.0",
"pgvector>=0.3.0",
"prometheus-client>=0.21",
"pydantic-settings>=2.0",
"pydantic>=2.0",
"python-dotenv>=1.0",
"redis>=5.0",
"sentence-transformers>=3.0",
"sqlalchemy[asyncio]>=2.0",
"structlog>=24.0",
"tiktoken>=0.8",
"uvicorn[standard]>=0.32.0",
"chonkie[semantic]; extra == \"chonkie\"",
"anthropic; extra == \"claude\"",
"aiosqlite>=0.20; extra == \"dev\"",
"alembic>=1.14; extra == \"dev\"",
"black>=24.0; extra == \"dev\"",
"httpx>=0.28; extra == \"dev\"",
"mypy>=1.0; extra == \"dev\"",
"pre-commit>=3.0; extra == \"dev\"",
"psycopg2-binary>=2.9; extra == \"dev\"",
"pytest-asyncio>=0.24; extra == \"dev\"",
"pytest-cov>=5.0; extra == \"dev\"",
"pytest-httpx>=0.34; extra == \"dev\"",
"pytest>=8.0; extra == \"dev\"",
"ruff>=0.8; extra == \"dev\"",
"testcontainers>=4.6; extra == \"dev\"",
"mkdocs-material>=9.0; extra == \"docs\"",
"mkdocstrings[python]>=0.27; extra == \"docs\"",
"aiosqlite>=0.20; extra == \"embedded\"",
"asyncpg>=0.30; extra == \"embedded\"",
"neo4j>=5.0; extra == \"embedded\"",
"openai>=1.0; extra == \"embedded\"",
"pgvector>=0.3; extra == \"embedded\"",
"pydantic-settings>=2.0; extra == \"embedded\"",
"redis>=5.0; extra == \"embedded\"",
"sentence-transformers>=3.0; extra == \"embedded\"",
"sqlalchemy[asyncio]>=2.0; extra == \"embedded\"",
"structlog>=24.0; extra == \"embedded\"",
"tiktoken>=0.8; extra == \"embedded\"",
"google-generativeai; extra == \"gemini\""
] | [] | [] | [] | [
"Homepage, https://github.com/avinash-mall/CognitiveMemoryLayer",
"Documentation, https://github.com/avinash-mall/CognitiveMemoryLayer/tree/main/packages/py-cml#readme",
"Repository, https://github.com/avinash-mall/CognitiveMemoryLayer",
"Issues, https://github.com/avinash-mall/CognitiveMemoryLayer/issues",
"Changelog, https://github.com/avinash-mall/CognitiveMemoryLayer/blob/main/packages/py-cml/CHANGELOG.md"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-21T03:50:05.601291 | cognitive_memory_layer-1.3.1.tar.gz | 896,739 | 1e/0f/a4c044234a5843097d6771b45a519479dcd220a564ee1912af801d0e6578/cognitive_memory_layer-1.3.1.tar.gz | source | sdist | null | false | 8a60fd5c6cbce493b673d0c515fc48b4 | 91a3cf463316c8d5025cfcd4ed6b48254102ec50bd8ffd83a182b1e59fed64ea | 1e0fa4c044234a5843097d6771b45a519479dcd220a564ee1912af801d0e6578 | GPL-3.0-or-later | [
"LICENSE"
] | 210 |
2.1 | fix-cli | 0.5.0 | AI-powered command fixer with contract-based dispute resolution | # fix
AI-powered command fixer. A command fails, an LLM diagnoses it, proposes a fix, and a contract system tracks the whole thing. Disputes go to an AI judge.
## Quick start
```sh
pip install git+https://github.com/karans4/fix.git
```
### Local mode (you need an API key)
```sh
export ANTHROPIC_API_KEY=sk-ant-... # or OPENAI_API_KEY, or run Ollama
fix "gcc foo.c" # run command, fix if it fails
fix it # fix the last failed command
fix --explain "make" # just explain the error
fix --dry-run "make" # show fix without running
fix --local "make" # force Ollama (free, local)
```
### Market mode (free platform agent)
Post a contract to the platform. A free AI agent picks it up and proposes a fix.
```sh
fix --market "gcc foo.c"
```
Platform: `https://fix.notruefireman.org` (free during testing)
Configure in `~/.fix/config.py`:
```python
platform_url = "https://fix.notruefireman.org"
remote = True # default to remote mode
```
## Shell integration
For `fix it` / `fix !!` to work, add to your shell config:
```sh
# bash/zsh
eval "$(fix shell)"
# fish
fix shell fish | source
# or auto-install
fix shell --install
```
## Safe mode (sandbox)
Default on Linux. Runs fixes in OverlayFS -- changes only committed if verification passes.
```sh
fix "make build" # sandbox on Linux by default
fix --no-safe "make" # skip sandbox
fix --safe "make" # force sandbox
```
## Verification
```sh
fix "gcc foo.c" # default: re-run, exit 0 = success
fix --verify=human "python3 render.py" # human judges
fix --verify="pytest tests/" "pip install x" # custom command
```
## How it works
1. Command fails, stderr captured
2. Contract built (task, environment, verification terms, escrow)
3. Agent investigates (read-only commands), then proposes fix
4. Fix applied, verified mechanically
5. Multi-attempt: up to 3 tries, feeding failures back as context
6. Disputes go to an AI judge who reviews the full transcript
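The multi-attempt loop in step 5 can be sketched as follows (illustrative only, not the actual implementation; `propose_fix` and `verify` stand in for the LLM call and the mechanical verification step):

```python
def fix_with_retries(command, error, propose_fix, verify, max_attempts=3):
    """Illustrative retry loop: feed each failed attempt back as context."""
    history = []  # (fix, failure_reason) pairs given back to the agent
    for attempt in range(1, max_attempts + 1):
        fix = propose_fix(command, error, history)
        ok, reason = verify(fix)
        if ok:
            return fix
        history.append((fix, reason))
    return None  # all attempts failed -> dispute / give up

# Toy usage: the stub "agent" succeeds on its second proposal.
proposals = iter(["fix-a", "fix-b"])
result = fix_with_retries(
    "gcc foo.c", "undefined reference",
    propose_fix=lambda cmd, err, hist: next(proposals),
    verify=lambda fix: (fix == "fix-b", "still failing"),
)
print(result)  # fix-b
```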
## Architecture
- `fix` -- CLI entry point
- `server/` -- FastAPI platform (contracts, escrow, reputation, judge)
- `protocol.py` -- state machine, constants
- `scrubber.py` -- redacts secrets from error output before sending to LLM
- `contract.py` -- builds structured contracts
- `client.py` / `agent.py` -- remote mode client and agent
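As an illustration of what `scrubber.py`-style redaction involves, here is a minimal regex sketch (not the actual implementation; the patterns are examples of common secret shapes):

```python
import re

# Minimal sketch: mask common secret shapes before text leaves the machine.
PATTERNS = [
    re.compile(r"sk-[A-Za-z0-9-]{10,}"),        # API keys like sk-...
    re.compile(r"(?i)(password|token)=\S+"),    # key=value credentials
    re.compile(r"gh[pousr]_[A-Za-z0-9]{20,}"),  # GitHub tokens
]

def scrub(text: str) -> str:
    for pat in PATTERNS:
        text = pat.sub("[REDACTED]", text)
    return text

print(scrub("auth failed: token=abc123 using sk-ant-api03-xxxxxxxxxx"))
# auth failed: [REDACTED] using [REDACTED]
```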
## License
MIT
| text/markdown | null | Karan Sharma <karans4@protonmail.com> | null | null | MIT | null | [] | [] | null | null | >=3.10 | [] | [] | [] | [
"httpx>=0.24",
"fastapi>=0.100; extra == \"server\"",
"uvicorn>=0.20; extra == \"server\"",
"starlette>=0.27; extra == \"server\""
] | [] | [] | [] | [] | twine/6.2.0 CPython/3.12.3 | 2026-02-21T03:49:51.481948 | fix_cli-0.5.0.tar.gz | 88,267 | 29/20/e5c0c90242db53a9b3c558c07a0b313fcc27f6c9797564838cd0e2761c79/fix_cli-0.5.0.tar.gz | source | sdist | null | false | 1c14c5aad3489bcd0e2d2df46a1ac4eb | 2435486e4b5297215be7f59851a41abfa1049647d50cd39a7da835ff4dea5b21 | 2920e5c0c90242db53a9b3c558c07a0b313fcc27f6c9797564838cd0e2761c79 | null | [] | 209 |
2.4 | voicetest | 0.21 | A generic test harness for voice agent workflows | [](https://pypi.org/project/voicetest/) [](https://github.com/voicetestdev/voicetest/releases) [](https://opensource.org/licenses/Apache-2.0) [](https://github.com/voicetestdev/voicetest/actions/workflows/test.yml)
<picture>
<source media="(prefers-color-scheme: dark)" srcset="assets/logo-dark.svg">
<source media="(prefers-color-scheme: light)" srcset="assets/logo-light.svg">
<img alt="voicetest" src="assets/logo-light.svg" width="300">
</picture>
A generic test harness for voice agent workflows. Test agents from Retell, VAPI, LiveKit, Bland, Telnyx, and custom sources using a unified execution and evaluation model.
## Installation
```bash
uv tool install voicetest
```
Or add to a project (use `uv run voicetest` to run):
```bash
uv add voicetest
```
Or with pip:
```bash
pip install voicetest
```
## Quick Demo
Try voicetest with a sample healthcare receptionist agent and tests:
```bash
# Set up an API key (free, no credit card at https://console.groq.com)
export GROQ_API_KEY=gsk_...
# Load demo and start interactive shell
voicetest demo
# Or load demo and start web UI
voicetest demo --serve
```
> **Tip:** If you have [Claude Code](https://claude.ai/claude-code) installed, you can skip API key setup entirely and use `claudecode/sonnet` as your model. See [Claude Code Passthrough](#claude-code-passthrough) for details.
The demo includes a healthcare receptionist agent with 8 test cases covering appointment scheduling, identity verification, and more.


## Quick Start
### Interactive Shell
```bash
# Launch interactive shell (default)
uv run voicetest
# In the shell:
> agent tests/fixtures/retell/sample_config.json
> tests tests/fixtures/retell/sample_tests.json
> set agent_model ollama_chat/qwen2.5:0.5b
> run
```
### CLI Commands
```bash
# List available importers
voicetest importers
# Run tests against an agent definition
voicetest run --agent agent.json --tests tests.json
# Export agent to different formats
voicetest export --agent agent.json --format mermaid # Diagram
voicetest export --agent agent.json --format livekit # Python code
voicetest export --agent agent.json --format retell-llm # Retell LLM JSON
voicetest export --agent agent.json --format retell-cf # Retell Conversation Flow JSON
voicetest export --agent agent.json --format vapi-assistant # VAPI Assistant JSON
voicetest export --agent agent.json --format vapi-squad # VAPI Squad JSON
voicetest export --agent agent.json --format bland # Bland AI JSON
voicetest export --agent agent.json --format telnyx # Telnyx AI JSON
# Launch full TUI
voicetest tui --agent agent.json --tests tests.json
# Start REST API server with Web UI
voicetest serve
# Start infrastructure (LiveKit, Whisper, Kokoro) + backend for live calls
voicetest up
# Stop infrastructure services
voicetest down
```
### Live Voice Calls
For live voice calls, you need infrastructure services (LiveKit, Whisper STT, Kokoro TTS). The `up` command starts them via Docker and then launches the backend:
```bash
# Start infrastructure + backend server
voicetest up
# Or start infrastructure only (e.g., to run the backend separately)
voicetest up --detach
# Stop infrastructure when done
voicetest down
```
This requires Docker with the compose plugin. The infrastructure services are:
| Service | URL | Description |
| --------- | --------------------- | ---------------------------------------- |
| `livekit` | ws://localhost:7880 | LiveKit server for real-time voice calls |
| `whisper` | http://localhost:8001 | Faster Whisper STT server |
| `kokoro` | http://localhost:8002 | Kokoro TTS server |
If you only need simulated tests (no live voice), `voicetest serve` is sufficient and does not require Docker.
### Web UI
Start the server and open http://localhost:8000 in your browser:
```bash
voicetest serve
```
The web UI provides:
- Agent import and graph visualization
- Export agents to multiple formats (Mermaid, LiveKit, Retell, VAPI, Bland, Telnyx)
- Platform integration: import agents from, push agents to, and sync changes back to Retell, VAPI, LiveKit, Telnyx
- Test case management with persistence
- Export tests to platform formats (Retell)
- Global metrics configuration (compliance checks that run on all tests)
- Test execution with real-time streaming transcripts
- Cancel in-progress tests
- Run history with detailed results
- Transcript and metric inspection with scores
- Audio evaluation with word-level diff of original vs. heard text
- Settings configuration (models, max turns, streaming, audio eval)
Data is persisted to `.voicetest/data.duckdb` (configurable via `VOICETEST_DB_PATH`).
### REST API
The REST API is available at http://localhost:8000/api when running `voicetest serve`. Full API documentation is at [voicetest.dev/api](https://voicetest.dev/api/).
```bash
# Health check
curl http://localhost:8000/api/health
# List agents
curl http://localhost:8000/api/agents
# Start a test run
curl -X POST http://localhost:8000/api/agents/{id}/runs \
-H "Content-Type: application/json" \
-d '{"test_ids": ["test-1", "test-2"]}'
# WebSocket for real-time updates
wscat -c ws://localhost:8000/api/runs/{id}/ws
```
## Format Conversion
voicetest can convert between agent formats via its unified AgentGraph representation:
```
Retell CF ─────┐                  ┌───▶ Retell LLM
               │                  │
Retell LLM ────┤                  ├───▶ Retell CF
               │                  │
VAPI ──────────┤                  ├───▶ VAPI
               │                  │
Bland ─────────┼───▶ AgentGraph ──┼───▶ Bland
               │                  │
Telnyx ────────┤                  ├───▶ Telnyx
               │                  │
LiveKit ───────┤                  ├───▶ LiveKit
               │                  │
XLSForm ───────┤                  └───▶ Mermaid
               │
Custom ────────┘
```
Import from any supported format, then export to any other:
```bash
# Convert Retell Conversation Flow to Retell LLM format
voicetest export --agent retell-cf-agent.json --format retell-llm > retell-llm-agent.json
# Convert VAPI assistant to Retell LLM format
voicetest export --agent vapi-assistant.json --format retell-llm > retell-agent.json
# Convert Retell LLM to VAPI format
voicetest export --agent retell-llm-agent.json --format vapi-assistant > vapi-agent.json
```
## Test Case Format
Test cases follow the Retell export format:
```json
[
{
"name": "Customer billing inquiry",
"user_prompt": "## Identity\nYour name is Jane.\n\n## Goal\nGet help with a charge on your bill.",
"metrics": ["Agent greeted the customer and addressed the billing concern"],
"dynamic_variables": {},
"tool_mocks": [],
"type": "simulation"
}
]
```
## Features
- **Multi-source import**: Retell CF, Retell LLM, VAPI, Bland, Telnyx, LiveKit, XLSForm, custom Python functions
- **Format conversion**: Convert between Retell, VAPI, Bland, Telnyx, LiveKit, and other formats
- **Unified IR**: AgentGraph representation for any voice agent
- **Multi-format export**: Mermaid diagrams, LiveKit Python, Retell LLM, Retell CF, VAPI, Bland, Telnyx
- **Platform integration**: Import, push, and sync agents with Retell, VAPI, Bland, Telnyx, LiveKit via API
- **Configurable LLMs**: Separate models for agent, simulator, and judge
- **DSPy-based evaluation**: LLM judges with reasoning and 0-1 scores
- **Global metrics**: Define compliance checks that run on all tests for an agent
- **Multiple interfaces**: CLI, TUI, interactive shell, Web UI, REST API
- **Persistence**: DuckDB storage for agents, tests, and run history
- **Real-time streaming**: WebSocket-based transcript streaming during test execution
- **Token streaming**: Optional token-level streaming as LLM generates responses (experimental)
- **Cancellation**: Cancel in-progress tests to stop token usage
- **Audio evaluation**: TTS→STT round-trip to catch pronunciation issues (e.g., phone numbers spoken as words)
## Global Metrics
Global metrics are compliance-style checks that run on every test for an agent. Configure them in the "Metrics" tab in the Web UI.
Each agent has:
- **Pass threshold**: Default score (0-1) required for metrics to pass (default: 0.7)
- **Global metrics**: List of criteria evaluated on every test run
Each global metric has:
- **Name**: Display name (e.g., "HIPAA Compliance")
- **Criteria**: What the LLM judge evaluates (e.g., "Agent must verify patient identity before sharing medical information")
- **Threshold override**: Optional per-metric threshold (uses agent default if not set)
- **Enabled**: Toggle to skip without deleting
Example use cases:
- HIPAA compliance checks for healthcare agents
- PCI-DSS validation for payment processing
- Brand voice consistency across all conversations
- Safety guardrails and content policy adherence
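The threshold resolution described above (per-metric override, else the agent default) amounts to the following logic (an illustrative sketch, not voicetest's internals):

```python
def metric_passes(score, agent_threshold=0.7, metric_threshold=None):
    """A metric passes when its 0-1 score meets the effective threshold."""
    # Per-metric override wins; otherwise fall back to the agent default.
    threshold = metric_threshold if metric_threshold is not None else agent_threshold
    return score >= threshold

print(metric_passes(0.75))                        # True  (agent default 0.7)
print(metric_passes(0.75, metric_threshold=0.9))  # False (per-metric override)
```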
## Audio Evaluation
Text-only evaluation has a blind spot: when an agent produces "415-555-1234", an LLM judge sees correct digits and passes. But TTS might speak it as "four hundred fifteen, five hundred fifty-five..." — which a caller can't use. Audio evaluation catches these issues by round-tripping agent messages through TTS→STT and judging what would actually be *heard*.
```
Conversation runs normally (text-only)
↓
Judges evaluate raw text → metric_results
↓
Agent messages → TTS → audio → STT → "heard" text
↓
Judges evaluate heard text → audio_metric_results
```
Both sets of results are stored. The original message text is preserved alongside what was heard, with a word-level diff shown in the UI.
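The word-level diff of original vs. heard text can be illustrated with the standard library (illustrative only; voicetest's actual diff may differ):

```python
import difflib

original = "your confirmation number is 415-555-1234"
heard = "your confirmation number is four one five five five five one two three four"

# Compare word-by-word to surface what the TTS->STT round trip changed.
diff = list(difflib.ndiff(original.split(), heard.split()))
changed = [d for d in diff if d.startswith(("-", "+"))]
print(changed)  # the digits were "heard" as spelled-out words
```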
### Triggering Audio Evaluation
**Automatic (per-run setting):** Enable `audio_eval` in settings to run TTS→STT round-trip on every test automatically.
```toml
# .voicetest/settings.toml
[run]
audio_eval = true
```
Or toggle the "Audio evaluation" checkbox in the Settings page of the Web UI.
**On-demand (button):** After a test completes, click "Run audio eval" in the results view to run audio evaluation on that specific result.
**REST API:**
```bash
# Run audio eval on an existing result
curl -X POST http://localhost:8000/api/results/{result_id}/audio-eval
```
### Requirements
Audio evaluation requires the TTS and STT services from `voicetest up`:
| Service | URL | Description |
| --------- | --------------------- | ------------------ |
| `whisper` | http://localhost:8001 | Faster Whisper STT |
| `kokoro` | http://localhost:8002 | Kokoro TTS |
Service URLs are configurable in settings:
```toml
# .voicetest/settings.toml
[audio]
tts_url = "http://localhost:8002/v1"
stt_url = "http://localhost:8001/v1"
```
## Platform Integration
voicetest can connect directly to voice platforms to import and push agent configurations.
### Supported Platforms
| Platform | Import | Push | Sync | API Key Env Var |
| -------- | ------ | ---- | ---- | ---------------------------------------- |
| Retell | ✓ | ✓ | ✓ | `RETELL_API_KEY` |
| VAPI | ✓ | ✓ | ✓ | `VAPI_API_KEY` |
| Bland | ✓ | ✓ | | `BLAND_API_KEY` |
| Telnyx | ✓ | ✓ | ✓ | `TELNYX_API_KEY` |
| LiveKit | ✓ | ✓ | ✓ | `LIVEKIT_API_KEY` + `LIVEKIT_API_SECRET` |
### Usage
In the Web UI, go to the "Platforms" tab to:
1. **Configure** - Enter API keys (stored in settings, not in env)
1. **Browse** - List agents on the remote platform
1. **Import** - Pull an agent config into voicetest for testing
1. **Push** - Deploy a local agent to the platform
1. **Sync** - Push local changes back to the source platform (for imported agents)
API keys can also be set via environment variables or in the Settings page.
## CI/CD Integration
Run voice agent tests in CI to catch regressions before they reach production. Key benefits:
- **Bring your own LLM keys** - Use OpenRouter, OpenAI, etc. directly instead of paying per-minute through Retell/VAPI's built-in LLM interfaces
- **Test on prompt changes** - Automatically validate agent behavior when prompts or configs change
- **Track quality over time** - Ensure consistent agent performance across releases
Example GitHub Actions workflow (see [docs/examples/ci-workflow.yml](docs/examples/ci-workflow.yml)):
```yaml
name: Voice Agent Tests
on:
push:
paths: ["agents/**"] # Trigger on agent config or test changes
jobs:
test:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v4
- uses: astral-sh/setup-uv@v5
- run: uv tool install voicetest
- run: voicetest run --agent agents/receptionist.json --tests agents/tests.json --all
env:
GROQ_API_KEY: ${{ secrets.GROQ_API_KEY }}
```
## LLM Configuration
Configure different models for each role using [LiteLLM format](https://docs.litellm.ai/docs/providers):
```python
from voicetest.models.test_case import RunOptions
options = RunOptions(
agent_model="openai/gpt-4o-mini",
simulator_model="gemini/gemini-1.5-flash",
judge_model="anthropic/claude-3-haiku-20240307",
max_turns=20,
)
```
Or use Ollama for local execution:
```python
options = RunOptions(
agent_model="ollama_chat/qwen2.5:0.5b",
simulator_model="ollama_chat/qwen2.5:0.5b",
judge_model="ollama_chat/qwen2.5:0.5b",
)
```
In the shell:
```
> set agent_model gemini/gemini-1.5-flash
> set simulator_model ollama_chat/qwen2.5:0.5b
```
### Claude Code Passthrough
If you have [Claude Code](https://claude.ai/claude-code) installed, you can use it as your LLM backend without configuring API keys:
```toml
# .voicetest/settings.toml
[models]
agent = "claudecode/sonnet"
simulator = "claudecode/haiku"
judge = "claudecode/sonnet"
```
Available model strings:
- `claudecode/sonnet` - Claude Sonnet
- `claudecode/opus` - Claude Opus
- `claudecode/haiku` - Claude Haiku
This invokes the `claude` CLI via subprocess, using your existing Claude Code authentication.
## Development
### Docker Development (Recommended)
The easiest way to get a full development environment running is with Docker Compose:
```bash
# Clone and start all services
git clone https://github.com/voicetestdev/voicetest
cd voicetest
docker compose -f docker-compose.dev.yml up
```
The dev compose file includes the base infrastructure from `voicetest/compose/docker-compose.yml` (the same file bundled with the package for `voicetest up`) and adds backend + frontend services on top. This starts five services:
| Service | URL | Description |
| ---------- | --------------------- | ---------------------------------------- |
| `livekit` | ws://localhost:7880 | LiveKit server for real-time voice calls |
| `whisper` | http://localhost:8001 | Faster Whisper STT server |
| `kokoro` | http://localhost:8002 | Kokoro TTS server |
| `backend` | http://localhost:8000 | FastAPI backend with hot reload |
| `frontend` | http://localhost:5173 | Vite dev server with hot reload |
Open http://localhost:5173 to access the web UI. Changes to Python or TypeScript files trigger automatic reloads.
**Claude Code Authentication:** The dev image includes Claude Code CLI. To authenticate for `claudecode/*` model passthrough:
```bash
docker compose -f docker-compose.dev.yml exec backend claude login
```
Credentials persist in the `claude-auth` Docker volume across container restarts.
**Linked Agents:** The compose file mounts your home directory (`$HOME`) read-only so linked agents with absolute paths work inside the container. On macOS, you may need to grant Docker Desktop access to your home directory in Settings → Resources → File Sharing.
To stop all services:
```bash
docker compose -f docker-compose.dev.yml down
```
### Manual Development
If you prefer running services manually (e.g., for debugging):
```bash
# Clone and install
git clone https://github.com/voicetestdev/voicetest
cd voicetest
uv sync
# Run unit tests
uv run pytest tests/unit
# Run integration tests (requires Ollama with qwen2.5:0.5b)
uv run pytest tests/integration
# Lint
uv run ruff check voicetest/ tests/
```
### LiveKit CLI
LiveKit integration tests require the `lk` CLI tool for agent deployment and listing operations. Install it from https://docs.livekit.io/home/cli/cli-setup/
```bash
# macOS
brew install livekit-cli
# Linux
curl -sSL https://get.livekit.io/cli | bash
```
Tests that require the CLI will skip automatically if it's not installed.
### Frontend Development
The web UI is built with Bun + Svelte + Vite. The recommended approach is to use Docker Compose (see above), which handles all services automatically.
For manual frontend development, the project uses [mise](https://mise.jdx.dev/) for version management:
```bash
# Terminal 1 - Frontend dev server with hot reload
cd web
mise exec -- bun install
mise exec -- bun run dev # http://localhost:5173
# Terminal 2 - Backend API
uv run voicetest serve --reload # http://localhost:8000
# Terminal 3 - LiveKit server (for live voice calls)
docker run --rm -p 7880:7880 -p 7881:7881 -p 7882:7882/udp livekit/livekit-server --dev
```
The Vite dev server proxies `/api/*` to the FastAPI backend.
```bash
# Run frontend tests
cd web && npx vitest run
# Build for production
cd web && mise exec -- bun run build
```
**Svelte 5 Reactivity Guidelines:**
- Use `$derived($store)` to consume Svelte stores in components - the `$store` syntax alone may not trigger re-renders in browsers
- Do not use `Set` or `Map` with `$state` or `writable` stores - use arrays and `Record<K,V>` instead
- Always reassign objects/arrays instead of mutating them: `obj = { ...obj, [key]: value }` not `obj[key] = value`
- Use `onMount()` for one-time data fetching, not `$effect()` - effects are for reactive dependencies
- Violating these rules can cause the entire app's reactivity to silently break
```svelte
<script lang="ts">
import { myStore } from "./stores";
// WRONG - may not trigger re-renders in browser
// {#if $myStore === "value"}
// CORRECT - use $derived for reliable reactivity
let value = $derived($myStore);
// {#if value === "value"}
</script>
```
**When to Add Libraries:**
The frontend uses minimal dependencies by design. Consider adding a library when:
- Table features: Need filtering, pagination, virtual scrolling, or column resizing (→ `@tanstack/svelte-table`)
- Forms: Complex validation, multi-step wizards, or field arrays (→ `superforms` + `zod`)
- Charts: Need data visualization beyond simple metrics (→ `layerchart` or `pancake`)
- State: Cross-component state becomes unwieldy with stores (→ evaluate if architecture needs rethinking first)
## Project Structure
```
voicetest/
├── voicetest/ # Python package
│ ├── api.py # Core API
│ ├── cli.py # CLI
│ ├── rest.py # REST API server + WebSocket + SPA serving
│ ├── container.py # Dependency injection (Punq)
│ ├── compose/ # Bundled Docker Compose for infrastructure services
│ ├── models/ # Pydantic models
│ ├── importers/ # Source importers (retell, vapi, bland, telnyx, livekit, xlsform, custom)
│ ├── exporters/ # Format exporters (mermaid, livekit, retell, vapi, bland, telnyx, test_cases)
│ ├── platforms/ # Platform SDK clients (retell, vapi, bland, telnyx, livekit)
│ ├── engine/ # Execution engine
│ ├── simulator/ # User simulation
│ ├── judges/ # Evaluation judges (metric, rule)
│ ├── storage/ # DuckDB persistence layer
│ └── tui/ # TUI and shell
├── web/ # Frontend (Bun + Svelte + Vite)
│ ├── src/
│ │ ├── components/ # Svelte components
│ │ └── lib/ # API client, stores, types
│ └── dist/ # Built assets (bundled in package)
├── tests/
│ ├── unit/ # Unit tests
│ └── integration/ # Integration tests (Ollama)
└── docs/
```
## Contact
Questions, feedback, or partnerships: [hello@voicetest.dev](mailto:hello@voicetest.dev)
## License
Apache 2.0
| text/markdown | null | pld <50706+pld@users.noreply.github.com> | null | null | null | ai, bland, livekit, retell, telnyx, testing, vapi, voice | [
"Development Status :: 3 - Alpha",
"Intended Audience :: Developers",
"License :: OSI Approved :: Apache Software License",
"Operating System :: OS Independent",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: 3.13",
"Topic :: Software Development :: Testing",
"Typing :: Typed"
] | [] | null | null | >=3.12 | [] | [] | [] | [
"bland>=0.3.0",
"click>=8.0",
"dspy>=2.6",
"duckdb-engine>=0.13",
"duckdb>=1.4.3",
"fastapi>=0.115",
"livekit-agents<2.0,>=1.0",
"livekit-plugins-openai<2.0,>=1.0",
"livekit-plugins-silero<2.0,>=1.0",
"openpyxl>=3.1.5",
"punq>=0.7.0",
"pyarrow>=22.0.0",
"pydantic>=2.0",
"python-multipart>=0.0.21",
"retell-sdk>=5.0",
"rich>=13.0",
"sqlalchemy>=2.0",
"telnyx>=4.41.0",
"textual>=7.2.0",
"uvicorn>=0.32",
"vapi-server-sdk>=0.2",
"misaki[en]>=0.1; extra == \"macos\"",
"mlx-audio>=0.3; extra == \"macos\"",
"psycopg[binary]>=3.0; extra == \"postgres\"",
"google-re2>=1.1; extra == \"re2\""
] | [] | [] | [] | [
"Homepage, https://voicetest.dev",
"Repository, https://github.com/voicetestdev/voicetest",
"Issues, https://github.com/voicetestdev/voicetest/issues"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-21T03:49:36.319698 | voicetest-0.21-py3-none-any.whl | 1,154,522 | 99/16/4c27345b3ec059335da3b5ca608fba2da3a574937e027af8d24ade3b563b/voicetest-0.21-py3-none-any.whl | py3 | bdist_wheel | null | false | 5402976b80185cc20d341bb2722f501d | 4c198f3a72d51ccf7687fddeefbc5d4967467f27c1e4e8ea75f5d76a20dd1aef | 99164c27345b3ec059335da3b5ca608fba2da3a574937e027af8d24ade3b563b | Apache-2.0 | [
"LICENSE"
] | 80 |
2.4 | wool | 0.1rc37 | A Python framework for distributed multiprocessing. | 
**Wool** is a native Python package for transparently executing tasks in a horizontally scalable, distributed network of agnostic worker processes. Any picklable async function or method can be converted into a task with a simple decorator and a client connection.
## Installation
### Using pip
To install the package using pip, run the following command:
```sh
pip install --pre wool
```
### Cloning from GitHub
To install the package by cloning from GitHub, run the following commands:
```sh
git clone https://github.com/wool-labs/wool.git
cd wool
pip install .
```
## Features
### Declaring tasks
Wool tasks are coroutine functions that are executed in a remote `asyncio` event loop within a worker process. To declare a task, use the `@wool.task` decorator:
```python
import wool
@wool.task
async def sample_task(x, y):
    return x + y
```
Tasks must be picklable, stateless, and idempotent. Avoid passing unpicklable objects as arguments or return values.
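A quick way to catch unpicklable arguments before dispatch is to round-trip them through the stdlib `pickle` module (a conservative sketch — `assert_picklable` is illustrative, not part of Wool, and Wool's serializer, `cloudpickle`, accepts somewhat more than plain `pickle`):

```python
import pickle

def assert_picklable(*objects):
    # Round-trip each object; raises if it cannot cross a process boundary.
    for obj in objects:
        pickle.loads(pickle.dumps(obj))

assert_picklable(1, "text", {"k": [1, 2]})  # plain data is fine
try:
    assert_picklable(lambda x: x)           # lambdas are rejected by plain pickle
    print("picklable")
except Exception:
    print("not picklable")
```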
### Worker pools
Worker pools are responsible for executing tasks. Wool provides two types of pools:
#### Ephemeral pools
Ephemeral pools are created and destroyed within the scope of a context manager. Use `wool.pool` to declare an ephemeral pool:
```python
import asyncio, wool
@wool.task
async def sample_task(x, y):
    return x + y

async def main():
    with wool.pool():
        result = await sample_task(1, 2)
        print(f"Result: {result}")

asyncio.run(main())
```
#### Durable pools
Durable pools are started independently and persist beyond the scope of a single application. Use the `wool` CLI to manage durable pools:
```bash
wool pool up --port 5050 --authkey deadbeef --module tasks
```
Connect to a durable pool using `wool.session`:
```python
import asyncio, wool
@wool.task
async def sample_task(x, y):
    return x + y

async def main():
    with wool.session(port=5050, authkey=b"deadbeef"):
        result = await sample_task(1, 2)
        print(f"Result: {result}")

asyncio.run(main())
```
### CLI commands
Wool provides a command-line interface (CLI) for managing worker pools.
#### Start the worker pool
```sh
wool pool up --host <host> --port <port> --authkey <authkey> --breadth <breadth> --module <module>
```
- `--host`: The host address (default: `localhost`).
- `--port`: The port number (default: `0`).
- `--authkey`: The authentication key (default: `b""`).
- `--breadth`: The number of worker processes (default: number of CPU cores).
- `--module`: Python module containing Wool task definitions (optional, can be specified multiple times).
#### Stop the worker pool
```sh
wool pool down --host <host> --port <port> --authkey <authkey> --wait
```
- `--host`: The host address (default: `localhost`).
- `--port`: The port number (required).
- `--authkey`: The authentication key (default: `b""`).
- `--wait`: Wait for in-flight tasks to complete before shutting down.
#### Ping the worker pool
```sh
wool ping --host <host> --port <port> --authkey <authkey>
```
- `--host`: The host address (default: `localhost`).
- `--port`: The port number (required).
- `--authkey`: The authentication key (default: `b""`).
### Advanced usage
#### Nested pools and sessions
Wool supports nesting pools and sessions to achieve complex workflows. Tasks can be dispatched to specific pools by nesting contexts:
```python
import asyncio, wool

@wool.task
async def task_a():
    await asyncio.sleep(1)

@wool.task
async def task_b():
    with wool.pool(port=5051):
        await task_a()

async def main():
    with wool.pool(port=5050):
        await task_a()
        await task_b()

asyncio.run(main())
```
In this example, `task_a` is executed by two different pools, while `task_b` is executed by the pool on port 5050.
### Best practices
#### Sizing worker pools
When configuring worker pools, it is important to balance the number of processes with the available system resources:
- **CPU-bound tasks**: Size the worker pool to match the number of CPU cores. This is the default behavior when spawning a pool.
- **I/O-bound tasks**: For workloads involving significant I/O, consider oversizing the pool slightly to maximize the system's I/O capacity utilization.
- **Mixed workloads**: Monitor memory usage and system load to avoid oversubscription, especially for memory-intensive tasks. Use profiling tools to determine the optimal pool size.
#### Defining tasks
Wool tasks are coroutine functions that execute asynchronously in a remote `asyncio` event loop. To ensure smooth execution and scalability, prioritize:
- **Picklability**: Ensure all task arguments and return values are picklable. Avoid passing unpicklable objects such as open file handles, database connections, or lambda functions.
- **Statelessness and idempotency**: Design tasks to be stateless and idempotent. Avoid relying on global variables or shared mutable state. This ensures predictable behavior and safe retries.
- **Non-blocking operations**: To achieve higher concurrency, avoid blocking calls within tasks. Use `asyncio`-compatible libraries for I/O operations.
- **Inter-process synchronization**: Use Wool's synchronization primitives (e.g., `wool.locking`) for inter-worker and inter-pool coordination. Standard `asyncio` primitives will not behave as expected in a multi-process environment.
#### Debugging and logging
- Enable detailed logging during development to trace task execution and worker pool behavior:
```python
import logging
logging.basicConfig(level=logging.DEBUG)
```
- Use Wool's built-in logging configuration to capture worker-specific logs.
#### Nested pools and sessions
Wool supports nesting pools and sessions to achieve complex workflows. Tasks can be dispatched to specific pools by nesting contexts. This is useful for workflows requiring task segregation or resource isolation.
#### Performance optimization
- Minimize the size of arguments and return values to reduce serialization overhead.
- For large datasets, consider using shared memory or passing references (e.g., file paths) instead of transferring the entire data.
- Profile tasks to identify and optimize performance bottlenecks.
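For the large-dataset case, one common pattern is to stage the payload on storage that all workers can reach and pass only the path (a sketch; `stage_dataset`/`load_dataset` and the shared filesystem are assumptions, not part of Wool):

```python
import json
import os
import tempfile

def stage_dataset(records):
    # Write the large payload once; tasks receive only the (small) path string.
    fd, path = tempfile.mkstemp(suffix=".json")
    with os.fdopen(fd, "w") as f:
        json.dump(records, f)
    return path

def load_dataset(path):
    # Inside a task, rehydrate the data locally instead of unpickling it.
    with open(path) as f:
        return json.load(f)

path = stage_dataset([{"x": 1}, {"x": 2}])
print(len(load_dataset(path)))  # -> 2
```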
#### Task cancellation
- Handle task cancellations gracefully by cleaning up resources and rolling back partial changes.
- Use `asyncio.CancelledError` to detect and respond to cancellations.
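In practice this is ordinary `asyncio` cancellation handling; a minimal sketch (plain `asyncio`, independent of Wool):

```python
import asyncio

async def long_task():
    try:
        await asyncio.sleep(10)  # stands in for long-running work
    except asyncio.CancelledError:
        # Clean up partial state here (close files, roll back writes),
        # then re-raise so the cancellation propagates to the caller.
        raise

async def main():
    task = asyncio.create_task(long_task())
    await asyncio.sleep(0)  # let the task start
    task.cancel()
    try:
        await task
    except asyncio.CancelledError:
        return "cancelled"

print(asyncio.run(main()))  # -> cancelled
```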
#### Error propagation
- Wool propagates exceptions raised within tasks to the caller. Use this feature to handle errors centrally in your application.
Example:
```python
try:
    result = await some_task()
except Exception as e:
    print(f"Task failed with error: {e}")
```
## License
This project is licensed under the Apache License Version 2.0.
| text/markdown | null | Conrad Bzura <conrad@wool.io> | null | maintainers@wool.io | Apache License Version 2.0, January 2004 http://www.apache.org/licenses/ TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION 1. Definitions. "License" shall mean the terms and conditions for use, reproduction, and distribution as defined by Sections 1 through 9 of this document. "Licensor" shall mean the copyright owner or entity authorized by the copyright owner that is granting the License. "Legal Entity" shall mean the union of the acting entity and all other entities that control, are controlled by, or are under common control with that entity. For the purposes of this definition, "control" means (i) the power, direct or indirect, to cause the direction or management of such entity, whether by contract or otherwise, or (ii) ownership of fifty percent (50%) or more of the outstanding shares, or (iii) beneficial ownership of such entity. "You" (or "Your") shall mean an individual or Legal Entity exercising permissions granted by this License. "Source" form shall mean the preferred form for making modifications, including but not limited to software source code, documentation source, and configuration files. "Object" form shall mean any form resulting from mechanical transformation or translation of a Source form, including but not limited to compiled object code, generated documentation, and conversions to other media types. "Work" shall mean the work of authorship, whether in Source or Object form, made available under the License, as indicated by a copyright notice that is included in or attached to the work (an example is provided in the Appendix below). "Derivative Works" shall mean any work, whether in Source or Object form, that is based on (or derived from) the Work and for which the editorial revisions, annotations, elaborations, or other modifications represent, as a whole, an original work of authorship. 
For the purposes of this License, Derivative Works shall not include works that remain separable from, or merely link (or bind by name) to the interfaces of, the Work and Derivative Works thereof. "Contribution" shall mean any work of authorship, including the original version of the Work and any modifications or additions to that Work or Derivative Works thereof, that is intentionally submitted to Licensor for inclusion in the Work by the copyright owner or by an individual or Legal Entity authorized to submit on behalf of the copyright owner. For the purposes of this definition, "submitted" means any form of electronic, verbal, or written communication sent to the Licensor or its representatives, including but not limited to communication on electronic mailing lists, source code control systems, and issue tracking systems that are managed by, or on behalf of, the Licensor for the purpose of discussing and improving the Work, but excluding communication that is conspicuously marked or otherwise designated in writing by the copyright owner as "Not a Contribution." "Contributor" shall mean Licensor and any individual or Legal Entity on behalf of whom a Contribution has been received by Licensor and subsequently incorporated within the Work. 2. Grant of Copyright License. Subject to the terms and conditions of this License, each Contributor hereby grants to You a perpetual, worldwide, non-exclusive, no-charge, royalty-free, irrevocable copyright license to reproduce, prepare Derivative Works of, publicly display, publicly perform, sublicense, and distribute the Work and such Derivative Works in Source or Object form. 3. Grant of Patent License. 
Subject to the terms and conditions of this License, each Contributor hereby grants to You a perpetual, worldwide, non-exclusive, no-charge, royalty-free, irrevocable (except as stated in this section) patent license to make, have made, use, offer to sell, sell, import, and otherwise transfer the Work, where such license applies only to those patent claims licensable by such Contributor that are necessarily infringed by their Contribution(s) alone or by combination of their Contribution(s) with the Work to which such Contribution(s) was submitted. If You institute patent litigation against any entity (including a cross-claim or counterclaim in a lawsuit) alleging that the Work or a Contribution incorporated within the Work constitutes direct or contributory patent infringement, then any patent licenses granted to You under this License for that Work shall terminate as of the date such litigation is filed. 4. Redistribution. You may reproduce and distribute copies of the Work or Derivative Works thereof in any medium, with or without modifications, and in Source or Object form, provided that You meet the following conditions: (a) You must give any other recipients of the Work or Derivative Works a copy of this License; and (b) You must cause any modified files to carry prominent notices stating that You changed the files; and (c) You must retain, in the Source form of any Derivative Works that You distribute, all copyright, patent, trademark, and attribution notices from the Source form of the Work, excluding those notices that do not pertain to any part of the Derivative Works; and (d) If the Work includes a "NOTICE" text file as part of its distribution, then any Derivative Works that You distribute must include a readable copy of the attribution notices contained within such NOTICE file, excluding those notices that do not pertain to any part of the Derivative Works, in at least one of the following places: within a NOTICE text file distributed as part of the 
Derivative Works; within the Source form or documentation, if provided along with the Derivative Works; or, within a display generated by the Derivative Works, if and wherever such third-party notices normally appear. The contents of the NOTICE file are for informational purposes only and do not modify the License. You may add Your own attribution notices within Derivative Works that You distribute, alongside or as an addendum to the NOTICE text from the Work, provided that such additional attribution notices cannot be construed as modifying the License. You may add Your own copyright statement to Your modifications and may provide additional or different license terms and conditions for use, reproduction, or distribution of Your modifications, or for any such Derivative Works as a whole, provided Your use, reproduction, and distribution of the Work otherwise complies with the conditions stated in this License. 5. Submission of Contributions. Unless You explicitly state otherwise, any Contribution intentionally submitted for inclusion in the Work by You to the Licensor shall be under the terms and conditions of this License, without any additional terms or conditions. Notwithstanding the above, nothing herein shall supersede or modify the terms of any separate license agreement you may have executed with Licensor regarding such Contributions. 6. Trademarks. This License does not grant permission to use the trade names, trademarks, service marks, or product names of the Licensor, except as required for reasonable and customary use in describing the origin of the Work and reproducing the content of the NOTICE file. 7. Disclaimer of Warranty. 
Unless required by applicable law or agreed to in writing, Licensor provides the Work (and each Contributor provides its Contributions) on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied, including, without limitation, any warranties or conditions of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A PARTICULAR PURPOSE. You are solely responsible for determining the appropriateness of using or redistributing the Work and assume any risks associated with Your exercise of permissions under this License. 8. Limitation of Liability. In no event and under no legal theory, whether in tort (including negligence), contract, or otherwise, unless required by applicable law (such as deliberate and grossly negligent acts) or agreed to in writing, shall any Contributor be liable to You for damages, including any direct, indirect, special, incidental, or consequential damages of any character arising as a result of this License or out of the use or inability to use the Work (including but not limited to damages for loss of goodwill, work stoppage, computer failure or malfunction, or any and all other commercial damages or losses), even if such Contributor has been advised of the possibility of such damages. 9. Accepting Warranty or Additional Liability. While redistributing the Work or Derivative Works thereof, You may choose to offer, and charge a fee for, acceptance of support, warranty, indemnity, or other liability obligations and/or rights consistent with this License. However, in accepting such obligations, You may act only on Your own behalf and on Your sole responsibility, not on behalf of any other Contributor, and only if You agree to indemnify, defend, and hold each Contributor harmless for any liability incurred by, or claims asserted against, such Contributor by reason of your accepting any such warranty or additional liability. END OF TERMS AND CONDITIONS APPENDIX: How to apply the Apache License to your work. 
To apply the Apache License to your work, attach the following boilerplate notice, with the fields enclosed by brackets "[]" replaced with your own identifying information. (Don't include the brackets!) The text should be enclosed in the appropriate comment syntax for the file format. We also recommend that a file or class name and description of purpose be included on the same "printed page" as the copyright notice for easier identification within third-party archives. Copyright 2025 Wool Labs LLC Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. | null | [
"Intended Audience :: Developers",
"Operating System :: MacOS :: MacOS X",
"Operating System :: POSIX :: Linux"
] | [] | null | null | >=3.11 | [] | [] | [] | [
"cloudpickle",
"grpcio>=1.76.0",
"portalocker",
"protobuf",
"shortuuid",
"tblib",
"typing-extensions",
"watchdog",
"zeroconf",
"cryptography; extra == \"dev\"",
"debugpy; extra == \"dev\"",
"hypothesis; extra == \"dev\"",
"pyright; extra == \"dev\"",
"pytest; extra == \"dev\"",
"pytest-asyncio; extra == \"dev\"",
"pytest-cov; extra == \"dev\"",
"pytest-grpc-aio~=0.3.0; extra == \"dev\"",
"pytest-mock; extra == \"dev\"",
"ruff; extra == \"dev\""
] | [] | [] | [] | [] | uv/0.10.4 {"installer":{"name":"uv","version":"0.10.4","subcommand":["publish"]},"python":null,"implementation":{"name":null,"version":null},"distro":{"name":"Ubuntu","version":"24.04","id":"noble","libc":null},"system":{"name":null,"release":null},"cpu":null,"openssl_version":null,"setuptools_version":null,"rustc_version":null,"ci":true} | 2026-02-21T03:49:20.426431 | wool-0.1rc37-py3-none-any.whl | 84,594 | 46/5c/d40647092e13bc2a10f831430d62568e589c951c64d76f5faed2c48c8649/wool-0.1rc37-py3-none-any.whl | py3 | bdist_wheel | null | false | 7686550096cf073b05fe3b6811ca131b | 6767dbf5c86e23165f4ccba8876438ad862b88d5697302b5985490e74a48a95c | 465cd40647092e13bc2a10f831430d62568e589c951c64d76f5faed2c48c8649 | null | [] | 189 |
2.4 | mat-ret | 0.3.1 | Unified retrieval and property mapping for materials databases (Materials Project, JARVIS, AFLOW, Alexandria, Materials Cloud, MPDS) | # mat_ret
Unified retrieval and property mapping for materials databases, with a PyQt6 GUI for materials search, structure viewing, and XRD generation.
## Supported Databases
1. Materials Project
2. JARVIS
3. AFLOW
4. Alexandria
5. Materials Cloud
6. MPDS
7. OQMD
8. OPTIMADE providers (registry search)
## Installation
Install as package:
```bash
pip install .
```
Install editable for development:
```bash
pip install -e .
```
Install editable with test dependencies:
```bash
pip install -e ".[dev]"
```
## Quick Start
1. Configure API keys (`MP_API_KEY`, `MPDS_API_KEY`) via environment variables or `config.py`.
2. Run examples:
```bash
python example_fetch.py
python example_single_fetch.py
```
3. Use the Python API:
```python
from mat_ret.api import fetch_all_databases
results = fetch_all_databases(
    formula="MgO",
    limit_per_database=3,
    mp_api_key="YOUR_MP_KEY",
    mpds_api_key="YOUR_MPDS_KEY",
)
print(results["materials_project"][0])
```
## Direct Client Usage
You can call specific clients from `mat_ret.databases` directly:
```python
from mat_ret.databases import MaterialsProjectClient
client = MaterialsProjectClient(api_key="YOUR_MP_KEY")
results = client.get_structures("MgO", limit=1)
if results:
    entry = results[0]
    print(entry["material_id"])
```
## OPTIMADE Search
When OPTIMADE is selected in the GUI, providers are shown as a tree.
Select the parent node to toggle all providers, or select providers individually.
The search filter and result limit are applied per provider.
## Running in a Virtual Environment
```bash
python3 -m venv .venv
source .venv/bin/activate
pip install -e ".[dev]"
```
## GUI
mat_ret includes a PyQt6 desktop GUI.

### Launch GUI
```bash
mat-ret-gui
# or
python -m mat_ret.gui
```
### GUI Features
- Database selection and API key controls
- OPTIMADE provider tree with per-provider toggles
- Formula search (e.g., `Fe2O3`, `LiFePO4`) across selected databases
- Element-set search via periodic-table picker icon next to the search box
- Chemsys text format: `Fe-O`, `Li-Fe-O`
- Element mode uses contains-all semantics
- Unsupported providers/databases are skipped with explicit status messages (e.g., OQMD)
- Results table and JSON views
- Structure viewer with CIF export
- File menu exports (JSON/CSV)
- Tools menu:
- XRD Generator
## XRD Generator
Open `Tools -> XRD Generator...` in the GUI.
Capabilities:
- Input sources:
- Any CIF file from disk
- Currently selected structure from the main results window
- Radiation presets from pymatgen (including `CuKa`, `CuKa1`, `CuKa2`, etc.)
- Optional custom wavelength (Angstrom)
- Scan controls: `2theta min`, `2theta max`, `2theta step`
- Profile controls:
- `Stick`
- `Gaussian`
- `Lorentzian`
- `Pseudo-Voigt` (with `eta`)
- Peak broadening width control (`FWHM`, degrees in 2theta)
- Peak finder controls using SciPy:
- minimum height
- prominence
- minimum distance
- minimum width
- theoretical peak match tolerance
- Interactive plot view + peak table
- Exports:
- plot (`PNG`, `SVG`, `PDF`)
- profile/stick CSV
- peaks CSV
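These controls correspond to parameters of `scipy.signal.find_peaks`; a minimal sketch of that search on a synthetic two-peak profile (the arrays are illustrative, not mat_ret output):

```python
import numpy as np
from scipy.signal import find_peaks

# Synthetic diffraction-like profile: two Gaussian peaks on a zero background.
two_theta = np.arange(5.0, 90.0, 0.02)
profile = (
    np.exp(-((two_theta - 30.0) ** 2) / 0.02)
    + 0.5 * np.exp(-((two_theta - 45.0) ** 2) / 0.02)
)

# height, prominence, distance, and width mirror the GUI peak-finder controls.
peaks, props = find_peaks(profile, height=0.1, prominence=0.05, distance=10, width=1)
print(two_theta[peaks])  # peak positions in degrees 2theta
```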
Defaults:
- Radiation: `CuKa` (`1.54184 A`)
- Scan range: `5` to `90` deg (2theta), step `0.02`
- Profile: `Pseudo-Voigt`
- FWHM: `0.15` deg
- Eta: `0.5`
Detailed XRD notes are in `doc/XRD_GENERATOR_GUIDE.md`.
## XRD API
```python
from mat_ret import XRDConfig, generate_xrd_pattern_from_cif
cfg = XRDConfig(
    radiation="CuKa",
    two_theta_min=5.0,
    two_theta_max=90.0,
    two_theta_step=0.02,
    profile="pseudo_voigt",
    fwhm=0.15,
)
result = generate_xrd_pattern_from_cif("example.cif", config=cfg)
print(result.wavelength, len(result.peaks))
```
## Tests
Run XRD tests:
```bash
pytest tests/test_xrd.py
```
Run all tests:
```bash
pytest
```
## Project Structure
```text
mat_ret/
├── src/mat_ret/
│ ├── api.py
│ ├── databases.py
│ ├── property_mapping.py
│ ├── xrd.py
│ └── gui/
│ ├── main.py
│ ├── main_window.py
│ ├── workers.py
│ ├── utils.py
│ └── widgets/
│ ├── database_selector.py
│ ├── results_view.py
│ ├── structure_viewer.py
│ └── xrd_generator_window.py
├── tests/
│ └── test_xrd.py
├── doc/
│ ├── PROPERTY_MAPPING_GUIDE.md
│ └── XRD_GENERATOR_GUIDE.md
├── README.md
├── pyproject.toml
└── requirements.txt
```
## Contributing
Issues and pull requests are welcome:
https://github.com/Aadhityan-A/mat_ret
| text/markdown | null | Aadhityan A <aadhityan1995@gmail.com> | null | null | CeCILL FREE SOFTWARE LICENSE AGREEMENT
Version 2.1 dated 2013-06-21
Notice
This Agreement is a Free Software license agreement that is the result
of discussions between its authors in order to ensure compliance with
the two main principles guiding its drafting:
* firstly, compliance with the principles governing the distribution
of Free Software: access to source code, broad rights granted to users,
* secondly, the election of a governing law, French law, with which it
is conformant, both as regards the law of torts and intellectual
property law, and the protection that it offers to both authors and
holders of the economic rights over software.
The authors of the CeCILL (for Ce[a] C[nrs] I[nria] L[ogiciel] L[ibre])
license are:
Commissariat à l'énergie atomique et aux énergies alternatives - CEA, a
public scientific, technical and industrial research establishment,
having its principal place of business at 25 rue Leblanc, immeuble Le
Ponant D, 75015 Paris, France.
Centre National de la Recherche Scientifique - CNRS, a public scientific
and technological establishment, having its principal place of business
at 3 rue Michel-Ange, 75794 Paris cedex 16, France.
Institut National de Recherche en Informatique et en Automatique -
Inria, a public scientific and technological establishment, having its
principal place of business at Domaine de Voluceau, Rocquencourt, BP
105, 78153 Le Chesnay cedex, France.
Preamble
The purpose of this Free Software license agreement is to grant users
the right to modify and redistribute the software governed by this
license within the framework of an open source distribution model.
The exercising of this right is conditional upon certain obligations for
users so as to preserve this status for all subsequent redistributions.
In consideration of access to the source code and the rights to copy,
modify and redistribute granted by the license, users are provided only
with a limited warranty and the software's author, the holder of the
economic rights, and the successive licensors only have limited liability.
In this respect, the risks associated with loading, using, modifying
and/or developing or reproducing the software by the user are brought to
the user's attention, given its Free Software status, which may make it
complicated to use, with the result that its use is reserved for
developers and experienced professionals having in-depth computer
knowledge. Users are therefore encouraged to load and test the
suitability of the software as regards their requirements in conditions
enabling the security of their systems and/or data to be ensured and,
more generally, to use and operate it in the same conditions of
security. This Agreement may be freely reproduced and published,
provided it is not altered, and that no provisions are either added or
removed herefrom.
This Agreement may apply to any or all software for which the holder of
the economic rights decides to submit the use thereof to its provisions.
Frequently asked questions can be found on the official website of the
CeCILL licenses family (http://www.cecill.info/index.en.html) for any
necessary clarification.
Article 1 - DEFINITIONS
For the purpose of this Agreement, when the following expressions
commence with a capital letter, they shall have the following meaning:
Agreement: means this license agreement, and its possible subsequent
versions and annexes.
Software: means the software in its Object Code and/or Source Code form
and, where applicable, its documentation, "as is" when the Licensee
accepts the Agreement.
Initial Software: means the Software in its Source Code and possibly its
Object Code form and, where applicable, its documentation, "as is" when
it is first distributed under the terms and conditions of the Agreement.
Modified Software: means the Software modified by at least one
Contribution.
Source Code: means all the Software's instructions and program lines to
which access is required so as to modify the Software.
Object Code: means the binary files originating from the compilation of
the Source Code.
Holder: means the holder(s) of the economic rights over the Initial
Software.
Licensee: means the Software user(s) having accepted the Agreement.
Contributor: means a Licensee having made at least one Contribution.
Licensor: means the Holder, or any other individual or legal entity, who
distributes the Software under the Agreement.
Contribution: means any or all modifications, corrections, translations,
adaptations and/or new functions integrated into the Software by any or
all Contributors, as well as any or all Internal Modules.
Module: means a set of sources files including their documentation that
enables supplementary functions or services in addition to those offered
by the Software.
External Module: means any or all Modules, not derived from the
Software, so that this Module and the Software run in separate address
spaces, with one calling the other when they are run.
Internal Module: means any or all Module, connected to the Software so
that they both execute in the same address space.
GNU GPL: means the GNU General Public License version 2 or any
subsequent version, as published by the Free Software Foundation Inc.
GNU Affero GPL: means the GNU Affero General Public License version 3 or
any subsequent version, as published by the Free Software Foundation Inc.
EUPL: means the European Union Public License version 1.1 or any
subsequent version, as published by the European Commission.
Parties: mean both the Licensee and the Licensor.
These expressions may be used both in singular and plural form.
Article 2 - PURPOSE
The purpose of the Agreement is the grant by the Licensor to the
Licensee of a non-exclusive, transferable and worldwide license for the
Software as set forth in Article 5 hereinafter for the whole term of the
protection granted by the rights over said Software.
Article 3 - ACCEPTANCE
3.1 The Licensee shall be deemed as having accepted the terms and
conditions of this Agreement upon the occurrence of the first of the
following events:
* (i) loading the Software by any or all means, notably, by downloading
from a remote server, or by loading from a physical medium;
* (ii) the first time the Licensee exercises any of the rights granted
hereunder.
3.2 One copy of the Agreement, containing a notice relating to the
characteristics of the Software, to the limited warranty, and to the
fact that its use is restricted to experienced users has been provided
to the Licensee prior to its acceptance as set forth in Article 3.1
hereinabove, and the Licensee hereby acknowledges that it has read and
understood it.
Article 4 - EFFECTIVE DATE AND TERM
4.1 EFFECTIVE DATE
The Agreement shall become effective on the date when it is accepted by
the Licensee as set forth in Article 3.1.
4.2 TERM
The Agreement shall remain in force for the entire legal term of
protection of the economic rights over the Software.
Article 5 - SCOPE OF RIGHTS GRANTED
The Licensor hereby grants to the Licensee, who accepts, the following
rights over the Software for any or all use, and for the term of the
Agreement, on the basis of the terms and conditions set forth hereinafter.
Besides, if the Licensor owns or comes to own one or more patents
protecting all or part of the functions of the Software or of its
components, the Licensor undertakes not to enforce the rights granted by
these patents against successive Licensees using, exploiting or
modifying the Software. If these patents are transferred, the Licensor
undertakes to have the transferees subscribe to the obligations set
forth in this paragraph.
5.1 RIGHT OF USE
The Licensee is authorized to use the Software, without any limitation
as to its fields of application, with it being hereinafter specified
that this comprises:
1. permanent or temporary reproduction of all or part of the Software
by any or all means and in any or all form.
2. loading, displaying, running, or storing the Software on any or all
medium.
3. entitlement to observe, study or test its operation so as to
determine the ideas and principles behind any or all constituent
elements of said Software. This shall apply when the Licensee
carries out any or all loading, displaying, running, transmission or
storage operation as regards the Software, that it is entitled to
carry out hereunder.
5.2 ENTITLEMENT TO MAKE CONTRIBUTIONS
The right to make Contributions includes the right to translate, adapt,
arrange, or make any or all modifications to the Software, and the right
to reproduce the resulting software.
The Licensee is authorized to make any or all Contributions to the
Software provided that it includes an explicit notice that it is the
author of said Contribution and indicates the date of the creation thereof.
5.3 RIGHT OF DISTRIBUTION
In particular, the right of distribution includes the right to publish,
transmit and communicate the Software to the general public on any or
all medium, and by any or all means, and the right to market, either in
consideration of a fee, or free of charge, one or more copies of the
Software by any means.
The Licensee is further authorized to distribute copies of the modified
or unmodified Software to third parties according to the terms and
conditions set forth hereinafter.
5.3.1 DISTRIBUTION OF SOFTWARE WITHOUT MODIFICATION
The Licensee is authorized to distribute true copies of the Software in
Source Code or Object Code form, provided that said distribution
complies with all the provisions of the Agreement and is accompanied by:
1. a copy of the Agreement,
2. a notice relating to the limitation of both the Licensor's warranty
and liability as set forth in Articles 8 and 9,
and that, in the event that only the Object Code of the Software is
redistributed, the Licensee allows effective access to the full Source
Code of the Software for a period of at least three years from the
distribution of the Software, it being understood that the additional
acquisition cost of the Source Code shall not exceed the cost of the
data transfer.
5.3.2 DISTRIBUTION OF MODIFIED SOFTWARE
When the Licensee makes a Contribution to the Software, the terms and
conditions for the distribution of the resulting Modified Software
become subject to all the provisions of this Agreement.
The Licensee is authorized to distribute the Modified Software, in
source code or object code form, provided that said distribution
complies with all the provisions of the Agreement and is accompanied by:
1. a copy of the Agreement,
2. a notice relating to the limitation of both the Licensor's warranty
and liability as set forth in Articles 8 and 9,
and, in the event that only the object code of the Modified Software is
redistributed,
3. a note stating the conditions of effective access to the full source
code of the Modified Software for a period of at least three years
from the distribution of the Modified Software, it being understood
that the additional acquisition cost of the source code shall not
exceed the cost of the data transfer.
5.3.3 DISTRIBUTION OF EXTERNAL MODULES
When the Licensee has developed an External Module, the terms and
conditions of this Agreement do not apply to said External Module, that
may be distributed under a separate license agreement.
5.3.4 COMPATIBILITY WITH OTHER LICENSES
The Licensee can include a code that is subject to the provisions of
one of the versions of the GNU GPL, GNU Affero GPL and/or EUPL in the
Modified or unmodified Software, and distribute that entire code under
the terms of the same version of the GNU GPL, GNU Affero GPL and/or EUPL.
The Licensee can include the Modified or unmodified Software in a code
that is subject to the provisions of one of the versions of the GNU GPL,
GNU Affero GPL and/or EUPL and distribute that entire code under the
terms of the same version of the GNU GPL, GNU Affero GPL and/or EUPL.
Article 6 - INTELLECTUAL PROPERTY
6.1 OVER THE INITIAL SOFTWARE
The Holder owns the economic rights over the Initial Software. Any or
all use of the Initial Software is subject to compliance with the terms
and conditions under which the Holder has elected to distribute its work
and no one shall be entitled to modify the terms and conditions for the
distribution of said Initial Software.
The Holder undertakes that the Initial Software will remain ruled at
least by this Agreement, for the duration set forth in Article 4.2.
6.2 OVER THE CONTRIBUTIONS
The Licensee who develops a Contribution is the owner of the
intellectual property rights over this Contribution as defined by
applicable law.
6.3 OVER THE EXTERNAL MODULES
The Licensee who develops an External Module is the owner of the
intellectual property rights over this External Module as defined by
applicable law and is free to choose the type of agreement that shall
govern its distribution.
6.4 JOINT PROVISIONS
The Licensee expressly undertakes:
1. not to remove, or modify, in any manner, the intellectual property
notices attached to the Software;
2. to reproduce said notices, in an identical manner, in the copies of
the Software modified or not.
The Licensee undertakes not to directly or indirectly infringe the
intellectual property rights on the Software of the Holder and/or
Contributors, and to take, where applicable, vis-à-vis its staff, any
and all measures required to ensure respect of said intellectual
property rights of the Holder and/or Contributors.
Article 7 - RELATED SERVICES
7.1 Under no circumstances shall the Agreement oblige the Licensor to
provide technical assistance or maintenance services for the Software.
However, the Licensor is entitled to offer this type of services. The
terms and conditions of such technical assistance, and/or such
maintenance, shall be set forth in a separate instrument. Only the
Licensor offering said maintenance and/or technical assistance services
shall incur liability therefor.
7.2 Similarly, any Licensor is entitled to offer to its licensees, under
its sole responsibility, a warranty, that shall only be binding upon
itself, for the redistribution of the Software and/or the Modified
Software, under terms and conditions that it is free to decide. Said
warranty, and the financial terms and conditions of its application,
shall be subject of a separate instrument executed between the Licensor
and the Licensee.
Article 8 - LIABILITY
8.1 Subject to the provisions of Article 8.2, the Licensee shall be
entitled to claim compensation for any direct loss it may have suffered
from the Software as a result of a fault on the part of the relevant
Licensor, subject to providing evidence thereof.
8.2 The Licensor's liability is limited to the commitments made under
this Agreement and shall not be incurred as a result of in particular:
(i) loss due the Licensee's total or partial failure to fulfill its
obligations, (ii) direct or consequential loss that is suffered by the
Licensee due to the use or performance of the Software, and (iii) more
generally, any consequential loss. In particular the Parties expressly
agree that any or all pecuniary or business loss (i.e. loss of data,
loss of profits, operating loss, loss of customers or orders,
opportunity cost, any disturbance to business activities) or any or all
legal proceedings instituted against the Licensee by a third party,
shall constitute consequential loss and shall not provide entitlement to
any or all compensation from the Licensor.
Article 9 - WARRANTY
9.1 The Licensee acknowledges that the scientific and technical
state-of-the-art when the Software was distributed did not enable all
possible uses to be tested and verified, nor for the presence of
possible defects to be detected. In this respect, the Licensee's
attention has been drawn to the risks associated with loading, using,
modifying and/or developing and reproducing the Software which are
reserved for experienced users.
The Licensee shall be responsible for verifying, by any or all means,
the suitability of the product for its requirements, its good working
order, and for ensuring that it shall not cause damage to either persons
or properties.
9.2 The Licensor hereby represents, in good faith, that it is entitled
to grant all the rights over the Software (including in particular the
rights set forth in Article 5).
9.3 The Licensee acknowledges that the Software is supplied "as is" by
the Licensor without any other express or tacit warranty, other than
that provided for in Article 9.2 and, in particular, without any
warranty as to its commercial value, its secured, safe, innovative or
relevant nature.
Specifically, the Licensor does not warrant that the Software is free
from any error, that it will operate without interruption, that it will
be compatible with the Licensee's own equipment and software
configuration, nor that it will meet the Licensee's requirements.
9.4 The Licensor does not either expressly or tacitly warrant that the
Software does not infringe any third party intellectual property right
relating to a patent, software or any other property right. Therefore,
the Licensor disclaims any and all liability towards the Licensee
arising out of any or all proceedings for infringement that may be
instituted in respect of the use, modification and redistribution of the
Software. Nevertheless, should such proceedings be instituted against
the Licensee, the Licensor shall provide it with technical and legal
expertise for its defense. Such technical and legal expertise shall be
decided on a case-by-case basis between the relevant Licensor and the
Licensee pursuant to a memorandum of understanding. The Licensor
disclaims any and all liability as regards the Licensee's use of the
name of the Software. No warranty is given as regards the existence of
prior rights over the name of the Software or as regards the existence
of a trademark.
Article 10 - TERMINATION
10.1 In the event of a breach by the Licensee of its obligations
hereunder, the Licensor may automatically terminate this Agreement
thirty (30) days after notice has been sent to the Licensee and has
remained ineffective.
10.2 A Licensee whose Agreement is terminated shall no longer be
authorized to use, modify or distribute the Software. However, any
licenses that it may have granted prior to termination of the Agreement
shall remain valid subject to their having been granted in compliance
with the terms and conditions hereof.
Article 11 - MISCELLANEOUS
11.1 EXCUSABLE EVENTS
Neither Party shall be liable for any or all delay, or failure to
perform the Agreement, that may be attributable to an event of force
majeure, an act of God or an outside cause, such as defective
functioning or interruptions of the electricity or telecommunications
networks, network paralysis following a virus attack, intervention by
government authorities, natural disasters, water damage, earthquakes,
fire, explosions, strikes and labor unrest, war, etc.
11.2 Any failure by either Party, on one or more occasions, to invoke
one or more of the provisions hereof, shall under no circumstances be
interpreted as being a waiver by the interested Party of its right to
invoke said provision(s) subsequently.
11.3 The Agreement cancels and replaces any or all previous agreements,
whether written or oral, between the Parties and having the same
purpose, and constitutes the entirety of the agreement between said
Parties concerning said purpose. No supplement or modification to the
terms and conditions hereof shall be effective as between the Parties
unless it is made in writing and signed by their duly authorized
representatives.
11.4 In the event that one or more of the provisions hereof were to
conflict with a current or future applicable act or legislative text,
said act or legislative text shall prevail, and the Parties shall make
the necessary amendments so as to comply with said act or legislative
text. All other provisions shall remain effective. Similarly, invalidity
of a provision of the Agreement, for any reason whatsoever, shall not
cause the Agreement as a whole to be invalid.
11.5 LANGUAGE
The Agreement is drafted in both French and English and both versions
are deemed authentic.
Article 12 - NEW VERSIONS OF THE AGREEMENT
12.1 Any person is authorized to duplicate and distribute copies of this
Agreement.
12.2 So as to ensure coherence, the wording of this Agreement is
protected and may only be modified by the authors of the License, who
reserve the right to periodically publish updates or new versions of the
Agreement, each with a separate number. These subsequent versions may
address new issues encountered by Free Software.
12.3 Any Software distributed under a given version of the Agreement may
only be subsequently distributed under the same version of the Agreement
or a subsequent version, subject to the provisions of Article 5.3.4.
Article 13 - GOVERNING LAW AND JURISDICTION
13.1 The Agreement is governed by French law. The Parties agree to
endeavor to seek an amicable solution to any disagreements or disputes
that may arise during the performance of the Agreement.
13.2 Failing an amicable solution within two (2) months as from their
occurrence, and unless emergency proceedings are necessary, the
disagreements or disputes shall be referred to the Paris Courts having
jurisdiction, by the more diligent Party.
| null | [] | [] | null | null | >=3.8 | [] | [] | [] | [
"pymatgen",
"requests",
"numpy",
"scipy",
"mp-api",
"jarvis-tools",
"optimade[http_client]",
"zstandard",
"ase",
"aflow",
"matplotlib",
"pandas",
"tqdm",
"urllib3",
"mpds-client",
"PyQt6>=6.4.0",
"pyqtgraph>=0.13.0",
"PyOpenGL>=3.1.0",
"pytest>=7.0; extra == \"dev\""
] | [] | [] | [] | [
"Homepage, https://github.com/Aadhityan-A/mat_ret",
"BugTracker, https://github.com/Aadhityan-A/mat_ret/issues"
] | twine/6.2.0 CPython/3.10.12 | 2026-02-21T03:47:37.818371 | mat_ret-0.3.1.tar.gz | 89,665 | ca/c7/16ea61377bc5f2f746ed0b7e96250058b35862e09f9f0e0366142272d264/mat_ret-0.3.1.tar.gz | source | sdist | null | false | bc39028cfcceaa6fe5e1a6cbd85eca57 | d816f68a4398c7d2d11fadc95240b317bdace4fdde977166693cb26a21062db1 | cac716ea61377bc5f2f746ed0b7e96250058b35862e09f9f0e0366142272d264 | null | [
"LICENSE"
] | 204 |
2.4 | amcl-server | 1.0.2 | A/MCL — Agent/Multi-Coding-agent Context Layer. MCP server for automatic context persistence across AI coding agents. | # A/MCL — Agent/Multi-Coding-agent Context Layer
**Zero-intervention context persistence across AI coding agents.**
When you hit a rate limit on one agent (Antigravity, Cursor, Claude Code) and switch to another, A/MCL ensures the new agent *automatically* has access to the complete conversation history, file changes, reasoning chains, and project state. No commands. No manual handoff. It just works.
---
## How It Works
A/MCL is an **MCP (Model Context Protocol) server**. MCP-compatible agents discover it automatically at launch and gain access to shared project context via standard tools, resources, and prompts.
```
┌──────────────┐ stdio ┌────────────────┐ SQLite ┌──────────┐
│ AI Agent │ ◄────────────► │ A/MCL Server │ ◄────────────► │ Context │
│ (Antigravity,│ │ (FastMCP) │ │ Store │
│ Cursor, etc)│ │ │ │ (~/.amcl)│
└──────────────┘ └────────────────┘ └──────────┘
```
1. **Install once** → `pip install amcl-server` (or `pip3 install`)
2. **Setup** → `amcl-server setup` (creates DB, shows MCP config)
3. **Use your agents normally** → Context is automatically shared
4. **Switch agents** → New agent picks up exactly where the last one left off
---
## Quick Start
```bash
# Install universally
pip install amcl-server # or pip3 install amcl-server
# Initialize and Auto-Register
amcl-server setup
# Check status
amcl-server status
```
The `setup` command automatically detects all installed AI agents and IDE extensions on your system and registers A/MCL directly into their settings. Once you run setup, you are completely done.
---
## Supported Agents (Auto-Detected)
A/MCL automatically integrates with:
- **Cursor** (`~/.cursor/mcp.json`)
- **Claude Desktop / Antigravity** (`Library/Application Support/Claude/claude_desktop_config.json`)
- **Amp** (`~/.amp/mcp.json`)
- **Roo / Cline** (VSCode & Cursor instances)
- **Generic MCP clients** (`~/.mcp/config.json`)
If your agent isn't automatically found, the `setup` command will print the exact JSON snippet you can manually paste into its MCP configuration.
---
## What Gets Shared
| Category | Data |
|----------|------|
| **Conversation** | All messages between user and agents, with agent attribution |
| **File Changes** | Files created, modified, deleted — tracked automatically |
| **Tasks** | Current goal, active tasks, completed work, blockers |
| **Decisions** | Design decisions with rationale and alternatives considered |
| **Agent History** | Which agents worked on what, session timestamps |
---
## MCP Tools
Agents can invoke these tools:
| Tool | Description |
|------|-------------|
| `context_get_current` | Full project context snapshot |
| `context_update` | Append new conversation/file/decision data |
| `context_query` | Search context history |
| `context_get_files_changed` | Files changed since last agent switch |
| `context_get_conversation` | Recent N messages |
| `context_get_reasoning` | Decision log |
| `context_add_decision` | Log a design decision |
| `context_mark_complete` | Mark a task as completed |
| `context_add_blocker` | Record a blocking issue |
---
## MCP Resources
Read-only data endpoints:
- `context://current-project` — Complete snapshot
- `context://conversation` — Message history
- `context://files` — File state & changes
- `context://reasoning` — Decision log
- `context://tasks` — Task state
- `context://agents` — Agent session history
---
## MCP Prompts
Pre-built prompts for context injection:
- **Project Context** — Complete project overview
- **Recent Work** — Summary of last session
- **Continuation** — "Pick up where we left off"
- **Decisions Log** — Key decisions and rationale
---
## Environment Variables
| Variable | Default | Description |
|----------|---------|-------------|
| `AMCL_DATA_DIR` | `~/.amcl` | Where context database lives |
| `AMCL_LOG_LEVEL` | `WARNING` | Logging verbosity |
| `AMCL_AGENT_NAME` | `unknown` | Name of the connecting agent |
| `AMCL_PROJECT_DIR` | `cwd` | Project directory override |
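The variables above can be resolved with plain `os.environ` lookups. The sketch below mirrors the defaults in the table; the `resolve_settings` function name is illustrative, not part of the actual package:

```python
import os

def resolve_settings() -> dict:
    """Resolve A/MCL settings from the environment, falling back to
    the documented defaults (hypothetical helper, for illustration)."""
    return {
        "data_dir": os.environ.get("AMCL_DATA_DIR", os.path.expanduser("~/.amcl")),
        "log_level": os.environ.get("AMCL_LOG_LEVEL", "WARNING"),
        "agent_name": os.environ.get("AMCL_AGENT_NAME", "unknown"),
        "project_dir": os.environ.get("AMCL_PROJECT_DIR", os.getcwd()),
    }
```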
---
## Privacy
All data stays local on your machine in `~/.amcl/amcl.db`. Nothing is ever sent to external servers. No telemetry, no analytics.
---
## License
MIT
| text/markdown | null | Your Name <your.email@example.com> | null | null | MIT | mcp, ai, agents, context, llm, coding | [
"Development Status :: 4 - Beta",
"Intended Audience :: Developers",
"License :: OSI Approved :: MIT License",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12"
] | [] | null | null | >=3.10 | [] | [] | [] | [
"mcp>=1.0.0",
"watchdog>=4.0.0",
"gitpython>=3.1.0",
"click>=8.1.0",
"psutil>=5.9.0"
] | [] | [] | [] | [
"Homepage, https://github.com/yourusername/amcl",
"Repository, https://github.com/yourusername/amcl.git"
] | twine/6.2.0 CPython/3.11.7 | 2026-02-21T03:46:20.981996 | amcl_server-1.0.2.tar.gz | 21,150 | d7/03/d04d8c63dee8f175b5c5396ca1140872888fe00cc7e3e0c6a1c2d010d71e/amcl_server-1.0.2.tar.gz | source | sdist | null | false | 33a338820eafae0080187c872b5c3663 | 1429cb15dd8993e838865afb4de9d65d61029f6c142e595966b9f6de38351493 | d703d04d8c63dee8f175b5c5396ca1140872888fe00cc7e3e0c6a1c2d010d71e | null | [] | 226 |
2.1 | odoo-addon-product-logistics-uom | 18.0.1.1.0.2 | Configure product weights and volume UoM | =====================
Product logistics UoM
=====================
..
!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
!! This file is generated by oca-gen-addon-readme !!
!! changes will be overwritten. !!
!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
!! source digest: sha256:326d9bbcf8543523257fb6237f3387b9f67e4e3c781935665691ac2055f58ab2
!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
.. |badge1| image:: https://img.shields.io/badge/maturity-Beta-yellow.png
:target: https://odoo-community.org/page/development-status
:alt: Beta
.. |badge2| image:: https://img.shields.io/badge/licence-AGPL--3-blue.png
:target: http://www.gnu.org/licenses/agpl-3.0-standalone.html
:alt: License: AGPL-3
.. |badge3| image:: https://img.shields.io/badge/github-OCA%2Fproduct--attribute-lightgray.png?logo=github
:target: https://github.com/OCA/product-attribute/tree/18.0/product_logistics_uom
:alt: OCA/product-attribute
.. |badge4| image:: https://img.shields.io/badge/weblate-Translate%20me-F47D42.png
:target: https://translation.odoo-community.org/projects/product-attribute-18-0/product-attribute-18-0-product_logistics_uom
:alt: Translate me on Weblate
.. |badge5| image:: https://img.shields.io/badge/runboat-Try%20me-875A7B.png
:target: https://runboat.odoo-community.org/builds?repo=OCA/product-attribute&target_branch=18.0
:alt: Try me on Runboat
|badge1| |badge2| |badge3| |badge4| |badge5|
This module allows choosing a Unit of Measure (UoM) for product weight
and volume. It can be set per product by users in group_uom.
Without this module, you only have the choice between Kg or Lb(s) and m³
for all products.
For some business cases, you need to express values in a more precise
UoM than the default ones, such as Liters instead of m³.
Even if you choose another UoM for the weight or volume, the system will
still store the value for these fields in the Odoo default UoM (Kg or
Lb(s) and m³). This ensures that the arithmetic operations on these
fields are correct and consistent with the rest of the addons.
Once this addon is installed, values stored in the initial Volume and
Weight fields on the product and product template models are no longer
rounded to the decimal precision defined for these fields. This could
lead to some side effects in reports where these fields are used. To
avoid this problem, you can replace the fields in your reports with the
new ones provided by this addon (product_volume and product_weight). In
any case, since you use a different UoM per product, you should most
probably modify your reports to display the right UoM.
**Table of contents**
.. contents::
:local:
Installation
============
Be aware that this module only changes the UoM, not the value.
This is the same behavior as base Odoo when you switch from the Metric
System to the Imperial System.
Configuration
=============
To change the default UoM:

1. Go to "General Settings", then to the "Products" section.
2. Select a default unit of measure for weights and volumes.
To change it on a specific product:

1. On the product form, change the UoM directly.
Usage
=====
Once installed and the 'Sell and purchase products in different units of
measure' option is enabled, the 'Unit of Measure' field will become
updatable on the 'Product' form for users with the permission 'Manage
Multiple Units of Measure'.
If the displayed value is 0.00 and a warning icon is displayed in front
of the unit of measure, it means that the value is too small to be
displayed in the current unit of measure. You should change the unit of
measure to a larger one to see the value.
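The warning described above can be illustrated with a plain-Python sketch (this is not Odoo code; the kg/g conversion factors and display precision are just an example of the mechanism — the stored value keeps full precision, only the display is rounded):

```python
# A value stored unrounded in the default UoM (kg here)
STORED_KG = 0.004  # 4 grams

def display(value_kg: float, uom: str, precision: int = 2) -> str:
    """Convert the stored value to the chosen UoM and round for display only."""
    factors = {"kg": 1.0, "g": 1000.0}  # kg is the reference unit
    return f"{round(value_kg * factors[uom], precision):.{precision}f} {uom}"

print(display(STORED_KG, "kg"))  # 0.00 kg -> too small: the warning-icon case
print(display(STORED_KG, "g"))   # 4.00 g  -> visible in a finer-grained UoM
```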
Bug Tracker
===========
Bugs are tracked on `GitHub Issues <https://github.com/OCA/product-attribute/issues>`_.
In case of trouble, please check there if your issue has already been reported.
If you spotted it first, help us to smash it by providing a detailed and welcomed
`feedback <https://github.com/OCA/product-attribute/issues/new?body=module:%20product_logistics_uom%0Aversion:%2018.0%0A%0A**Steps%20to%20reproduce**%0A-%20...%0A%0A**Current%20behavior**%0A%0A**Expected%20behavior**>`_.
Do not contact contributors directly about support or help with technical issues.
Credits
=======
Authors
-------
* Akretion
* ACSONE SA/NV
Contributors
------------
- Raphaël Reverdy <raphael.reverdy@akretion.com>
- Fernando La Chica <fernandolachica@gmail.com>
- Laurent Mignon <laurent.mignon@acsone.eu>
- Nhan Tran <nhant@trobz.com>
Other credits
-------------
The development of this module has been financially supported by:
- Akretion <https://akretion.com>
- La Base <https://labase.coop>
Maintainers
-----------
This module is maintained by the OCA.
.. image:: https://odoo-community.org/logo.png
:alt: Odoo Community Association
:target: https://odoo-community.org
OCA, or the Odoo Community Association, is a nonprofit organization whose
mission is to support the collaborative development of Odoo features and
promote its widespread use.
.. |maintainer-hparfr| image:: https://github.com/hparfr.png?size=40px
:target: https://github.com/hparfr
:alt: hparfr
Current `maintainer <https://odoo-community.org/page/maintainer-role>`__:
|maintainer-hparfr|
This module is part of the `OCA/product-attribute <https://github.com/OCA/product-attribute/tree/18.0/product_logistics_uom>`_ project on GitHub.
You are welcome to contribute. To learn how please visit https://odoo-community.org/page/Contribute.
| text/x-rst | Akretion, ACSONE SA/NV, Odoo Community Association (OCA) | support@odoo-community.org | null | null | AGPL-3 | null | [
"Programming Language :: Python",
"Framework :: Odoo",
"Framework :: Odoo :: 18.0",
"License :: OSI Approved :: GNU Affero General Public License v3",
"Development Status :: 4 - Beta"
] | [] | https://github.com/OCA/product-attribute | null | >=3.10 | [] | [] | [] | [
"odoo==18.0.*"
] | [] | [] | [] | [] | twine/6.2.0 CPython/3.12.3 | 2026-02-21T03:45:31.878974 | odoo_addon_product_logistics_uom-18.0.1.1.0.2-py3-none-any.whl | 40,554 | 1a/c0/4999abbd71a530ab761b963f70b0ffe7ce53e17ce631a4aef21a7363a491/odoo_addon_product_logistics_uom-18.0.1.1.0.2-py3-none-any.whl | py3 | bdist_wheel | null | false | 77e925133dae8db4f8a96eeb319f9bfa | 1e1d996077ea53d6319eaceb8c4fba7ff704da95c0ef3748f34bdba82b104873 | 1ac04999abbd71a530ab761b963f70b0ffe7ce53e17ce631a4aef21a7363a491 | null | [] | 81 |
2.4 | ready-to-work | 0.0.0a1 | Architect loop framework for AI-driven development | # ready-to-work (rtw)
Plan → Build → Review loop for AI-driven development. Uses [Cursor Agent CLI](https://cursor.com/docs/cli) as the default backend.
## Installation
```bash
uv tool install git+https://github.com/joey-lou/ready-to-work.git
```
**Prerequisites:** [uv](https://docs.astral.sh/uv/), [Cursor Agent CLI](https://cursor.com/docs/cli) (authenticated), Python 3.11+.
## Usage
```bash
rtw run task.md # Run architect loop (default model: sonnet-4.6)
rtw run task.md --max-iter 5
rtw run task.md --model gpt-4o # Override model (must be a known-valid model)
rtw list # List runs
rtw resume # Resume latest run (use -w /path/to/project if needed)
```
## Development
```bash
uv tool install --editable . # Pick up code changes without reinstalling
uv sync
uv run python -m pytest
uv run ruff check . && uv run ruff format .
```
Pre-commit runs ruff and pytest on commit:
```bash
uv run pre-commit install
```
## Releasing
Version is derived from git tags via `hatch-vcs`. To release:
```bash
git tag v0.3.0 && git push --tags
```
Pushing a tag triggers the [Release workflow](.github/workflows/release.yml), which builds the package and publishes to PyPI and GitHub Releases.
## License
MIT
| text/markdown | null | Joey Lou <joey99lou@gmail.com> | null | null | MIT License
Copyright (c) 2026 Joey Lou
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE. | agent, ai, automation, cursor, development, llm | [
"Development Status :: 3 - Alpha",
"Intended Audience :: Developers",
"License :: OSI Approved :: MIT License",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: 3.13",
"Topic :: Scientific/Engineering :: Artificial Intelligence",
"Topic :: Software Development :: Code Generators"
] | [] | null | null | >=3.11 | [] | [] | [] | [
"fastapi>=0.110.0; extra == \"api\"",
"pydantic>=2.0.0; extra == \"api\"",
"uvicorn[standard]>=0.29.0; extra == \"api\""
] | [] | [] | [] | [
"Homepage, https://github.com/joey-lou/ready-to-work",
"Documentation, https://github.com/joey-lou/ready-to-work#readme",
"Repository, https://github.com/joey-lou/ready-to-work",
"Issues, https://github.com/joey-lou/ready-to-work/issues"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-21T03:45:28.572006 | ready_to_work-0.0.0a1.tar.gz | 30,867 | 21/10/606163366017ce2a6dc326c8b9566a63b85a10a0c2907c5196ee7800c299/ready_to_work-0.0.0a1.tar.gz | source | sdist | null | false | 0107c98c481f5ab7c31b087c88be3e1a | 5b72e4faf76cbc4a1920382537659237b6acf5c5b93fe3931909f4b3fb882371 | 2110606163366017ce2a6dc326c8b9566a63b85a10a0c2907c5196ee7800c299 | null | [
"LICENSE"
] | 224 |
2.3 | kbasic | 0.1.13 | Keyan's basic utility functions. | # KBasic
The core utility functions that Keyan is sick of rewriting in every project :-)
```
pip install kbasic
``` | text/markdown | null | null | null | null | null | null | [] | [] | null | null | >=3.11 | [] | [] | [] | [
"numpy>=2.4.2",
"pylatexenc>=2.10",
"scipy>=1.17.0",
"tqdm>=4.67.3"
] | [] | [] | [] | [] | uv/0.10.2 {"installer":{"name":"uv","version":"0.10.2","subcommand":["publish"]},"python":null,"implementation":{"name":null,"version":null},"distro":{"name":"macOS","version":null,"id":null,"libc":null},"system":{"name":null,"release":null},"cpu":null,"openssl_version":null,"setuptools_version":null,"rustc_version":null,"ci":null} | 2026-02-21T03:44:59.242086 | kbasic-0.1.13-py3-none-any.whl | 38,437,851 | 3f/a8/71b7319795bf59c0e4c6bbb103c8b078f3c0bb115717e13e23afacf19e20/kbasic-0.1.13-py3-none-any.whl | py3 | bdist_wheel | null | false | 3e8af541fe29d1e61870f4f6f37efd5d | ded3d6635e5df521a0a07c208d6debee9e6fafd2f000db46452f7addda6cb09b | 3fa871b7319795bf59c0e4c6bbb103c8b078f3c0bb115717e13e23afacf19e20 | null | [] | 225 |
2.4 | qolsys-controller | 0.4.9 | A Python module that emulates a virtual IQ Remote device, enabling full local control of a Qolsys IQ Panel | # Qolsys Controller - qolsys-controller
[](https://github.com/EHylands/QolsysController/actions/workflows/build.yml)
A Python module that emulates a virtual IQ Remote device, enabling full **local control** of a Qolsys IQ Panel over MQTT — no cloud access required.
## QolsysController
- ✅ Connects directly to the **Qolsys Panel's local MQTT server as an IQ Remote**
- 🔐 Pairs by only using **Installer Code** (same procedure as standard IQ Remote pairing)
- 🔢 Supports **4-digit user codes**
- ⚠️ Uses a **custom local usercode database** — panel's internal user code verification process is not yet supported
## ✨ Functionality Highlights
| Category | Feature | Status |
|------------------------|--------------------------------------|--------|
| **Panel** | Diagnostic Sensors | ✅ |
| | Panel Scenes | ✅ |
| | Weather Forecast | ✅ |
| | (Alarm.com Weather to Panel) | |
| **Partition** | Arming Status | ✅ |
| | Alarm State | ✅ |
| | Home Instant Arming | ✅ |
| | Home Silent Disarming (Firmware 4.6.1)| ✅ |
| | Set Exit Sounds | ✅ |
| | Set Entry Delay | ✅ |
| | TTS | 🛠️ |
| **Zones** | Sensor Status | ✅ |
| | Tamper State | ✅ |
| | Battery Level | ✅ |
| | Temperature (supported PowerG device)| ✅ |
| | Light (supported PowerG device) | ✅ |
| | Average dBm | ✅ |
| | Latest dBm | ✅ |
| **Z-Wave Devices** | Battery Level | ✅ |
| | Node Status | ✅ |
|                        | Control Generic Devices              | ✅     |
| **Z-Wave Dimmers** | | ✅ |
| **Z-Wave Door Locks** | | ✅ |
| **Z-Wave Thermostats** | | ✅ |
| **Z-Wave Garage Doors**| | 🛠️ |
| **Z-Wave Outlets** | | 🛠️ |
## ⚠️ Certificate Warning
During pairing, the main panel issues **only one signed client certificate** per virtual IQ Remote. If any key files are lost or deleted, re-pairing may become impossible.
A new PKI, including a new private key, can be recreated under specific circumstances, though the precise conditions remain unknown at this time.
**Important:**
Immediately back up the following files from the `pki/` directory after initial pairing:
- `.key` (private key)
- `.cer` (certificate)
- `.csr` (certificate signing request)
- `.secure` (signed client certificate)
- `.qolsys` (Qolsys Panel public certificate)
Store these files securely.
## 📦 Installation
```bash
git clone https://github.com/EHylands/QolsysController.git
cd QolsysController
pip3.12 install -r requirements.txt
# Set panel_ip and plugin_in in the main.py file
python3.12 example.py
```
| text/markdown | Eric Hylands | null | null | null | null | null | [
"License :: OSI Approved :: MIT License",
"Programming Language :: Python :: 3.12",
"Topic :: Home Automation"
] | [] | null | null | >=3.12 | [] | [] | [] | [] | [] | [] | [] | [
"Homepage, https://github.com/EHylands/QolsysController",
"Issues, https://github.com/EHylands/QolsysController/issues",
"Repository, https://github.com/EHylands/QolsysController.git"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-21T03:43:41.444141 | qolsys_controller-0.4.9.tar.gz | 75,532 | 72/2a/90099f73bce10210c1ea424aaeb83df0800eed23bf59dd27c61446b58aaf/qolsys_controller-0.4.9.tar.gz | source | sdist | null | false | cae990c9890a84953fca78bfa1440c55 | a2a79ed89b8ec9ee30e4ffdc35a3a9230743edac7a9445cedaabeaa7362e0360 | 722a90099f73bce10210c1ea424aaeb83df0800eed23bf59dd27c61446b58aaf | MIT | [
"LICENSE"
] | 228 |
2.1 | keble-db | 1.5.0 | Keble db | # Keble-DB
Lightweight database toolkit for MongoDB (PyMongo/Motor), SQL (SQLModel/SQLAlchemy), Qdrant, and Neo4j.
Includes sync + async CRUD base classes, a shared `QueryBase`, a `Db` session manager, FastAPI deps (`ApiDbDeps`), and Redis namespace wrappers.
## Installation
```bash
pip install keble-db
```
## Core API (import from `keble_db`)
- Queries/types: `DbSettingsABC`, `QueryBase`, `ObjectId`, `Uuid`
- CRUD:
- `MongoCRUDBase[Model]`
- `SqlCRUDBase[Model]`
- `QdrantCRUDBase[Payload, Vector]` (+ `Record`)
- `Neo4jCRUDBase[Model]`
- Connections/DI: `Db(settings: DbSettingsABC)`, `ApiDbDeps(db)`
- Redis: `ExtendedRedis`, `ExtendedAsyncRedis`
- Mongo helpers: `build_mongo_find_query`, `merge_mongo_and_queries`, `merge_mongo_or_queries`
Async methods are prefixed with `a` (e.g. `afirst`, `aget_multi`, `adelete`).
## `QueryBase` expectations
`QueryBase` fields: `filters`, `order_by`, `offset`, `limit`, `id`, `ids`.
- Mongo: `filters` is a Mongo query `dict`; `order_by` is `[(field, ASCENDING|DESCENDING)]`; `offset/limit` are `int`.
- SQL: `filters` is a `list` of SQLAlchemy expressions; `order_by` is an expression or list; `offset/limit` are `int`.
- Qdrant:
- `search()`: `filters` is a Qdrant filter `dict`, `offset` is `int|None`, `limit` defaults to 100.
- `scroll()`: `offset` is `PointId|None` (point id) and `limit` is required; ordering uses `order_by` (str or Qdrant `OrderBy`) or falls back to `QueryBase.order_by`. Qdrant requires a payload range index for the ordered key.
Example: `from qdrant_client.models import PayloadSchemaType`; `crud.ensure_payload_indexes(client, payload_indexes={"id": PayloadSchemaType.INTEGER})`.
- Neo4j: `filters` is a `dict` of property predicates (operators: `$gt`, `$gte`, `$lt`, `$lte`, `$in`, `$contains`, `$startswith`, `$endswith`);
`order_by` is `[(field, "asc"|"desc")]`; `offset/limit` are `int`.
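As an illustration of the operator semantics above, here is a minimal sketch of how such a filter `dict` could be rendered into a parameterized Cypher `WHERE` clause. This is not keble-db's internal query builder — the `OPS` table and `to_where` helper are hypothetical, written only to make the operator meanings concrete:

```py
# Sketch only: maps the documented predicate operators to Cypher.
OPS = {
    "$gt": ">", "$gte": ">=", "$lt": "<", "$lte": "<=",
    "$in": "IN", "$contains": "CONTAINS",
    "$startswith": "STARTS WITH", "$endswith": "ENDS WITH",
}

def to_where(filters: dict, var: str = "n") -> tuple[str, dict]:
    """Turn {"age": {"$gte": 18}} into ("n.age >= $p0", {"p0": 18})."""
    clauses, params = [], {}
    for field, pred in filters.items():
        # A bare value means plain equality; a dict holds operators.
        preds = pred if isinstance(pred, dict) else {"=": pred}
        for op, value in preds.items():
            key = f"p{len(params)}"
            clauses.append(f"{var}.{field} {OPS.get(op, '=')} ${key}")
            params[key] = value
    return " AND ".join(clauses), params

where, params = to_where({"age": {"$gte": 18}, "name": {"$startswith": "A"}})
print(where)  # n.age >= $p0 AND n.name STARTS WITH $p1
```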
## Examples
### MongoDB
```py
from pydantic import BaseModel
from pymongo import MongoClient, DESCENDING
from keble_db import MongoCRUDBase, QueryBase
class User(BaseModel):
name: str
age: int
crud = MongoCRUDBase(User, collection="users", database="app")
m = MongoClient("mongodb://localhost:27017")
crud.create(m, obj_in=User(name="Alice", age=30))
users = crud.get_multi(
m,
query=QueryBase(filters={"age": {"$gte": 18}}, order_by=[("age", DESCENDING)]),
)
```
### SQL (SQLModel)
```py
import uuid
from typing import Optional
from sqlmodel import Field, Session, SQLModel, create_engine
from keble_db import QueryBase, SqlCRUDBase
class User(SQLModel, table=True):
id: Optional[str] = Field(
default_factory=lambda: str(uuid.uuid4()), primary_key=True
)
name: str
age: int
engine = create_engine("sqlite:///db.sqlite")
SQLModel.metadata.create_all(engine)
crud = SqlCRUDBase(User, table_name="users")
with Session(engine) as s:
created = crud.create(s, obj_in=User(name="Alice", age=30))
found = crud.first(s, query=QueryBase(id=created.id))
```
### Qdrant
Requires `qdrant-client>=1.16.0` (uses `query_points`).
```py
from pydantic import BaseModel
from qdrant_client import QdrantClient
from qdrant_client.models import Distance, PayloadSchemaType, VectorParams
from keble_db import QdrantCRUDBase, QueryBase
class Payload(BaseModel):
id: int
name: str
class Vector(BaseModel):
vector: list[float]
client = QdrantClient(host="localhost", port=6333)
client.recreate_collection(
collection_name="items",
vectors_config={"vector": VectorParams(size=3, distance=Distance.COSINE)},
)
crud = QdrantCRUDBase(Payload, Vector, collection="items")
crud.ensure_payload_indexes(
client,
payload_indexes={"id": PayloadSchemaType.INTEGER},
)
crud.create(client, Vector(vector=[0.1, 0.2, 0.3]), Payload(id=1, name="a"), "p1")
hits = crud.search(
client,
vector=[0.1, 0.2, 0.3],
vector_key="vector",
query=QueryBase(filters={"must": [{"key": "id", "match": {"value": 1}}]}, limit=5),
)
```
If you have per-embedder collections (common in RAG), use deterministic naming:
```py
collection = QdrantCRUDBase.derive_collection_name(
base="items",
embedder_id="text-embedding-3-small",
)
crud = QdrantCRUDBase(Payload, Vector, collection=collection)
```
### Neo4j
```py
from pydantic import BaseModel
from neo4j import GraphDatabase
from keble_db import Neo4jCRUDBase, QueryBase
class Person(BaseModel):
id: int
name: str
driver = GraphDatabase.driver("neo4j://localhost:7687", auth=("neo4j", "password"))
crud = Neo4jCRUDBase(Person, label="Person", id_field="id")
with driver.session() as s:
crud.create(s, obj_in=Person(id=1, name="Alice"))
people = crud.get_multi(s, query=QueryBase(filters={"id": 1}))
```
## Db + FastAPI
`Db(settings)` builds clients from a `DbSettingsABC` implementation (see `keble_db/schemas.py` or `tests/config.py`).
`ApiDbDeps(db)` exposes FastAPI-friendly generator dependencies such as `get_mongo`, `get_amongo`, `get_read_sql`, `get_write_asql`, `get_qdrant`, `get_neo4j_session`, plus Redis equivalents.
Neo4j dependency behavior:
- `get_neo4j_session` and `get_async_neo4j_session` yield session objects.
- `get_aneo4j` yields an `AsyncTransaction`.
## More runnable examples
See `tests/test_crud/` and `tests/test_api_deps.py`.
| text/markdown | zhenhao-ma | bob0103779@gmail.com | null | null | null | null | [
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12"
] | [] | null | null | <4.0,>=3.10 | [] | [] | [] | [
"pydantic<3,>=2",
"qdrant-client<2.0.0,>=1.16.0",
"redis<6,>=5",
"sqlmodel<1,>=0",
"pymongo<5,>=4",
"keble_helpers<2,>=1.0.2",
"motor<4,>=3",
"deprecated<2,>=1",
"psycopg[binary,pool]>=3.1.10",
"greenlet<4,>=3",
"neo4j<6,>=5"
] | [] | [] | [] | [] | poetry/1.8.2 CPython/3.12.2 Darwin/25.1.0 | 2026-02-21T03:43:20.365635 | keble_db-1.5.0.tar.gz | 30,477 | f8/72/e0f227527d93cc20d061bfcb84cf683a212bc508344a4c191cf33cdc6688/keble_db-1.5.0.tar.gz | source | sdist | null | false | ca77d98ab0894cd9ad858f381ae1e4ec | fd946459113242d0fd07082102c6f127691f05002162a9c383f14ec24ba4eedc | f872e0f227527d93cc20d061bfcb84cf683a212bc508344a4c191cf33cdc6688 | null | [] | 235 |
2.4 | rvsim | 0.11.0 | High-performance RISC-V cycle-accurate system simulator | # rvsim
A cycle-accurate RISC-V 64-bit system simulator (RV64IMAFDC) written in Rust with Python bindings. Features a 10-stage superscalar pipeline, multi-level cache hierarchy, branch prediction, and virtual memory. Can boot Linux (experimental).
## Pipeline
10-stage in-order pipeline with configurable superscalar width:
```
Fetch1 → Fetch2 → Decode → Rename → Issue → Execute → Memory1 → Memory2 → Writeback → Commit
```
- **Superscalar:** configurable width (1, 2, 4+)
- **Reorder buffer** for in-order commit with tag-based register scoreboard
- **Store buffer** with store-to-load forwarding
- **Branch prediction:** Static, GShare, Tournament, Perceptron, TAGE
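To illustrate one of the predictors listed above, here is a toy GShare in plain Python: the XOR of the global branch history and PC bits indexes a table of 2-bit saturating counters. This is a conceptual sketch only, not rvsim's Rust implementation:

```python
class GShare:
    """Toy gshare predictor: history XOR PC selects a 2-bit counter."""

    def __init__(self, bits: int = 10) -> None:
        self.mask = (1 << bits) - 1
        self.history = 0
        self.table = [1] * (1 << bits)  # start weakly not-taken

    def _index(self, pc: int) -> int:
        return (pc ^ self.history) & self.mask

    def predict(self, pc: int) -> bool:
        return self.table[self._index(pc)] >= 2

    def update(self, pc: int, taken: bool) -> None:
        i = self._index(pc)
        self.table[i] = min(3, self.table[i] + 1) if taken else max(0, self.table[i] - 1)
        self.history = ((self.history << 1) | int(taken)) & self.mask

# A loop branch taken 9 times then falling through: the history bits
# make the final not-taken iteration predictable, unlike a plain
# per-PC counter.
bp = GShare()
correct = 0
for _ in range(100):
    for i in range(10):
        taken = i < 9
        correct += bp.predict(0x4000) == taken
        bp.update(0x4000, taken)
print(f"accuracy: {correct / 1000:.2%}")
```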
## Memory System
- **MMU:** SV39 virtual addressing with separate iTLB and dTLB
- **Cache hierarchy:** configurable L1i, L1d, L2, L3 with LRU/PLRU/FIFO/Random replacement
- **Prefetchers:** next-line, stride, stream, tagged
- **DRAM controller:** row-buffer aware timing (CAS/RAS/precharge)
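One of the replacement policies named above, LRU, can be sketched for a single cache set in a few lines — an illustration of the policy only, not rvsim's cache model:

```python
from collections import OrderedDict

class LRUSet:
    """One set of an n-way set-associative cache with LRU replacement."""

    def __init__(self, ways: int) -> None:
        self.ways = ways
        self.lines = OrderedDict()  # tag -> present, oldest first

    def access(self, tag: int) -> bool:
        """Return True on hit; on miss, fill and evict the LRU line."""
        if tag in self.lines:
            self.lines.move_to_end(tag)  # mark most-recently-used
            return True
        if len(self.lines) >= self.ways:
            self.lines.popitem(last=False)  # evict least-recently-used
        self.lines[tag] = True
        return False

s = LRUSet(ways=2)
hits = [s.access(t) for t in (1, 2, 1, 3, 2)]
print(hits)  # [False, False, True, False, False]
```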
## ISA Support
RV64IMAFDC — base integer, multiply/divide, atomics, single/double float, compressed instructions. Privileged ISA with M/S/U modes, traps, CSRs, and CLINT timer.
## Quick Start
**Install:**
```bash
pip install rvsim
```
**Run a program:**
```python
from rvsim import Environment
result = Environment(binary="software/bin/programs/mandelbrot.bin").run()
print(result.stats.query("ipc|branch"))
```
**From the command line:**
```bash
make build
rvsim -f software/bin/programs/qsort.bin
```
## Python API
```python
from rvsim import Config, Cache, BranchPredictor, Environment, Stats
config = Config(
width=4,
branch_predictor=BranchPredictor.TAGE(),
l1d=Cache("64KB", ways=8),
l2=Cache("512KB", ways=16, latency=10),
)
result = Environment(binary="software/bin/programs/qsort.bin", config=config).run()
# Query specific stats
print(result.stats.query("branch"))
print(result.stats.query("miss"))
# Compare configurations
rows = {}
for w in [1, 2, 4]:
cfg = Config(width=w, uart_quiet=True)
r = Environment(binary="software/bin/programs/qsort.bin", config=cfg).run()
rows[f"w{w}"] = r.stats.query("ipc|cycles")
print(Stats.tabulate(rows, title="Width Scaling"))
```
## Analysis Scripts
Modular scripts for design-space exploration in `scripts/analysis/`:
| Script | Purpose |
|--------|---------|
| `width_scaling.py` | IPC vs pipeline width |
| `branch_predict.py` | Compare branch predictor accuracy |
| `cache_sweep.py` | L1 D-cache size vs miss rate |
| `inst_mix.py` | Instruction class breakdown |
| `stall_breakdown.py` | Memory/control/data stall cycles |
```bash
rvsim scripts/analysis/width_scaling.py --bp TAGE --widths 1 2 4
rvsim scripts/analysis/branch_predict.py --width 2 --programs maze qsort
rvsim scripts/analysis/cache_sweep.py --sizes 4KB 16KB 64KB
```
Machine model benchmarks in `scripts/benchmarks/`:
```bash
rvsim scripts/benchmarks/p550/run.py
rvsim scripts/benchmarks/m1/run.py
rvsim scripts/benchmarks/tests/compare_p550_m1.py
```
## Project Structure
```
rvsim/
├── crates/
│ ├── hardware/ # Simulator core (Rust)
│ │ └── src/
│ │ ├── core/ # CPU, pipeline, execution units
│ │ ├── isa/ # RV64IMAFDC decode and execution
│ │ ├── sim/ # Simulator driver, binary loader
│ │ └── soc/ # Bus, UART, PLIC, VirtIO, CLINT
│ └── bindings/ # Python bindings (PyO3)
├── rvsim/ # Python package
├── examples/
│ ├── programs/ # C and assembly source
│ └── benchmarks/ # Microbenchmarks and synthetic workloads
├── software/
│ ├── libc/ # Custom minimal C standard library
│ └── linux/ # Linux boot configuration
├── scripts/
│ ├── analysis/ # Design-space exploration scripts
│ ├── benchmarks/ # Machine model configs (P550, M1)
│ └── setup/ # Linux build helpers
└── docs/ # Architecture and API documentation
```
## Build from Source
**Requirements:**
- Rust (2024 edition)
- Python 3.10+ with maturin
- `riscv64-unknown-elf-gcc` cross-compiler (for building example programs)
```bash
make build # Build Python bindings (editable)
make software # Build libc and example programs
make test # Run Rust tests
make lint # Format check + clippy
make clean # Remove all build artifacts
```
## Linux Boot (Experimental)
The simulator can boot Linux through OpenSBI. Full boot is still in progress.
```bash
make linux # Download and build Linux via Buildroot
make run-linux # Boot Linux
```
## License
Licensed under either of [MIT](LICENSE-MIT) or [Apache-2.0](LICENSE-APACHE), at your option.
| text/markdown; charset=UTF-8; variant=GFM | null | null | null | null | MIT OR Apache-2.0 | null | [
"Programming Language :: Rust",
"Programming Language :: Python :: Implementation :: CPython",
"Programming Language :: Python :: Implementation :: PyPy",
"License :: OSI Approved :: MIT License",
"License :: OSI Approved :: Apache Software License",
"Topic :: Scientific/Engineering",
"Topic :: System :: Emulators",
"Intended Audience :: Science/Research"
] | [] | https://github.com/willmccallion/rvsim | null | >=3.10 | [] | [] | [] | [] | [] | [] | [] | [] | twine/6.1.0 CPython/3.13.7 | 2026-02-21T03:42:43.526639 | rvsim-0.11.0.tar.gz | 288,265 | 0f/9e/7bc1b779817f6c72ea5b90f6e39ba0bc3808cb3ef8d8fc4e29c91d4b137c/rvsim-0.11.0.tar.gz | source | sdist | null | false | bdd1a80538c14c959854db64a521c02f | d55e7fbf7f7b4b9647b169dc0c6aa007cd14ccd6858abe42981e95a891985819 | 0f9e7bc1b779817f6c72ea5b90f6e39ba0bc3808cb3ef8d8fc4e29c91d4b137c | null | [
"LICENSE-APACHE",
"LICENSE-MIT"
] | 445 |
2.4 | calculator-lib-rubens | 0.1.3 | A python calculator library. | # Calculator-LIB
A Python calculator library providing stateless arithmetic, power/root, modulo,
rounding, and logarithmic/exponential operations.
[](https://www.python.org/downloads/)
[](LICENSE)
## Installation
```bash
pip install calculator-lib-rubens
```
## Quick Start
```python
from calculator_lib import Calculator
calc = Calculator()
calc.add(2, 3) # 5
calc.subtract(10, 4) # 6
calc.multiply(3, 4) # 12
calc.divide(10, 2) # 5.0
```
## API Reference
### Core Arithmetic
| Method | Description |
|------------------|------------------------------------|
| `add(a, b)` | Returns the sum of `a` and `b` |
| `subtract(a, b)` | Returns `a` minus `b` |
| `multiply(a, b)` | Returns the product of `a` and `b` |
| `divide(a, b)` | Returns `a` divided by `b` |
### Power & Roots
| Method | Description |
|------------------|-------------------------------------|
| `power(a, b)` | Returns `a` raised to the power `b` |
| `sqrt(a)` | Returns the square root of `a` |
| `nth_root(a, n)` | Returns the n-th root of `a` |
### Modulo & Integer Math
| Method | Description |
|----------------------|---------------------------------------|
| `modulo(a, b)` | Returns the remainder of `a / b` |
| `floor_divide(a, b)` | Returns the floor division of `a / b` |
### Absolute & Rounding
| Method | Description |
|-------------------------------|----------------------------------------|
| `absolute(a)` | Returns the absolute value of `a` |
| `round_number(a, decimals=0)` | Rounds `a` to the given decimal places |
| `floor(a)` | Returns the largest integer <= `a` |
| `ceil(a)` | Returns the smallest integer >= `a` |
### Logarithmic & Exponential
| Method | Description |
|------------|--------------------------------------|
| `log10(a)` | Returns the base-10 logarithm of `a` |
| `ln(a)` | Returns the natural logarithm of `a` |
| `exp(a)` | Returns *e* raised to the power `a` |
## Usage Examples
```python
import math
from calculator_lib import Calculator
calc = Calculator()
# Power & Roots
calc.power(2, 3) # 8
calc.sqrt(16) # 4.0
calc.nth_root(27, 3) # 3.0
# Modulo & Rounding
calc.modulo(10, 3) # 1
calc.floor_divide(7, 2) # 3
calc.round_number(3.14159, 2) # 3.14
calc.floor(3.7) # 3.0
calc.ceil(3.2) # 4.0
# Logarithmic & Exponential
calc.log10(100) # 2.0
calc.ln(math.e) # 1.0
calc.exp(1) # 2.718281828459045
```
## Error Handling
All methods validate their inputs and raise `ValueError` with descriptive
messages for invalid operations:
```python
calc.divide(10, 0) # ValueError: Cannot divide by zero
calc.sqrt(-1) # ValueError: Cannot take square root of a negative number
calc.log10(0) # ValueError: Cannot take logarithm of a non-positive number
calc.nth_root(-4, 2) # ValueError: Cannot take even root of a negative number
calc.modulo(10, 0) # ValueError: Cannot modulo by zero
```
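For illustration, the even-root check shown above could be implemented as follows — a sketch of equivalent validation logic, not the library's actual source:

```python
def nth_root(a: float, n: int) -> float:
    """Validation sketch matching the documented n-th root rules."""
    if n == 0:
        raise ValueError("Root degree cannot be zero")
    if a < 0 and n % 2 == 0:
        raise ValueError("Cannot take even root of a negative number")
    if a < 0:  # an odd root of a negative number is well-defined
        return -((-a) ** (1.0 / n))
    return a ** (1.0 / n)

print(nth_root(16, 2))  # 4.0
```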
## Requirements
- Python 3.14+
## License
[Modified MIT](LICENSE)
| text/markdown | Rubens Gomes | rubens.s.gomes@gmail.com | null | null | null | math, calculator, utility | [
"Programming Language :: Python :: 3",
"License :: OSI Approved :: MIT License"
] | [] | https://github.com/rubensgomes/calculator-lib/ | null | >=3.14 | [] | [] | [] | [] | [] | [] | [] | [
"Homepage, https://github.com/rubensgomes/calculator-lib/",
"Repository, https://github.com/rubensgomes/calculator-lib/",
"Documentation, https://github.com/rubensgomes/calculator-lib/"
] | poetry/2.2.1 CPython/3.12.3 Linux/6.6.87.2-microsoft-standard-WSL2 | 2026-02-21T03:41:58.368257 | calculator_lib_rubens-0.1.3.tar.gz | 4,516 | 1f/32/978d59969bbc0c269113643de5f3765704a4f0c2158c7cb9684c1bb4ecff/calculator_lib_rubens-0.1.3.tar.gz | source | sdist | null | false | fe9bcbc64b673b234fc1e0bd989fba83 | 2b670d5b981fa423cf8dc74caec5d359d1354050182e1d6e7df775b78ddc217c | 1f32978d59969bbc0c269113643de5f3765704a4f0c2158c7cb9684c1bb4ecff | null | [] | 228 |
2.4 | Gluonix | 7.5 | A GUI lib based on tkinter. | # Gluonix V(7.5)
- Gluonix is a lightweight GUI library built on top of Tkinter for building desktop applications in Python. It provides reusable widgets and convenience utilities so you can build UIs faster with less boilerplate.
## Feedback
- If you have any thoughts or suggestions, we’d love to hear from you at [feedback@nucleonautomation.com](mailto:feedback@nucleonautomation.com). Your feedback is invaluable in helping us improve and enhance the library.
## Features
- Tkinter-based widgets: A set of ready-to-use UI components built on top of Tkinter.
- Higher-level utilities: Helper functions/classes to reduce repetitive Tkinter setup code.
- Layout helpers: Utilities and patterns to compose windows and dialogs more cleanly.
- Styling support: Common configuration helpers for fonts, colors, and consistent UI options.
- Image support: Works well with Pillow for loading and displaying images in UI components.
## Help
- See the [HELP](https://github.com/nucleonautomation/Gluonix/blob/main/Help.pdf) file for details.
## Installation Pypi
```
# Requires Python >=3.6
pip install Gluonix
```
## Installation Git
```
# Requires Python >=3.6
git clone https://github.com/nucleonautomation/Gluonix.git
```
## Requirements
```
pip install -r requirements.txt
```
## License
- This project is licensed under BSD 3-Clause License. See the [LICENSE](https://github.com/nucleonautomation/Gluonix/blob/main/LICENSE.md) file for details.
## Change Log
- See the [LOG](https://github.com/nucleonautomation/Gluonix/blob/main/LOG.md) file for details.
| text/markdown | Nucleon Automation | Nucleon Automation <jagroop@nucleonautomation.com> | null | null | BSD 3-Clause License
Copyright (c) 2026 Nucleon Automation
All rights reserved.
Redistribution and use in source and binary forms, with or without modification, are permitted provided that the following conditions are met:
1. Redistributions of source code must retain the above copyright notice, this list of conditions and the following disclaimer.
2. Redistributions in binary form must reproduce the above copyright notice, this list of conditions and the following disclaimer in the documentation and/or other materials provided with the distribution.
3. Neither the name of the copyright holder nor the names of its contributors may be used to endorse or promote products derived from this software without specific prior written permission.
THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. | null | [] | [] | null | null | >=3.6 | [] | [] | [] | [
"pillow>=6.0.0"
] | [] | [] | [] | [] | twine/6.1.0 CPython/3.11.9 | 2026-02-21T03:41:33.681562 | gluonix-7.5.tar.gz | 200,522 | 77/54/b139aeec1f38e51aaac0b815be545e80ab484955b07644d2fc771c899401/gluonix-7.5.tar.gz | source | sdist | null | false | b0a866b7108e776d9f8fc0c43d0592a3 | a6c717d21174ee1500c2c675fa212450ee9d8814334fa8372673ac844901da42 | 7754b139aeec1f38e51aaac0b815be545e80ab484955b07644d2fc771c899401 | null | [
"LICENSE.md"
] | 0 |
2.4 | aws-bootstrap-g4dn | 0.12.0 | Bootstrap AWS EC2 GPU instances for hybrid local-remote development | # aws-bootstrap-g4dn
--------------------------------------------------------------------------------
[](https://github.com/promptromp/aws-bootstrap-g4dn/actions/workflows/ci.yml)
[](https://github.com/promptromp/aws-bootstrap-g4dn/blob/main/LICENSE)
[](https://pypi.org/project/aws-bootstrap-g4dn/)
[](https://pypi.org/project/aws-bootstrap-g4dn/)
One command to go from zero to a **fully configured GPU dev box** on AWS — with CUDA-matched PyTorch, Jupyter, SSH aliases, and a GPU benchmark ready to run.
```bash
aws-bootstrap launch # Spot g4dn.xlarge in ~3 minutes
ssh aws-gpu1 # You're in, venv activated, PyTorch works
```
### ✨ Key Features
| | Feature | Details |
|---|---|---|
| 🚀 | **One-command launch** | Spot (default) or on-demand, with automatic fallback on capacity errors |
| 🔑 | **Auto SSH config** | Adds `aws-gpu1` alias to `~/.ssh/config` — no IP juggling. Cleaned up on terminate |
| 🐍 | **CUDA-aware PyTorch** | Detects the installed CUDA toolkit (`nvcc`) and installs PyTorch from the matching wheel index — no more `torch.version.cuda` mismatches |
| ✅ | **PyTorch smoke test** | Runs a quick `torch.cuda` matmul after setup to verify the GPU stack works end-to-end |
| 📊 | **GPU benchmark included** | CNN (MNIST) + Transformer benchmarks with FP16/FP32/BF16 precision and tqdm progress |
| 📓 | **Jupyter ready** | Lab server auto-starts as a systemd service on port 8888 — just SSH tunnel and open |
| 🖥️ | **`status --gpu`** | Shows CUDA toolkit version, driver max, GPU architecture, spot pricing, uptime, and estimated cost |
| 💾 | **EBS data volumes** | Attach persistent storage at `/data` — survives spot interruptions and termination, reattach to new instances |
| 🗑️ | **Clean terminate** | Stops instances, removes SSH aliases, cleans up EBS volumes (or preserves with `--keep-ebs`) |
| 🤖 | **[Agent Skill](https://agentskills.io/)** | Included Claude Code plugin lets LLM agents autonomously provision, manage, and tear down GPU instances |
### 🎯 Target Workflows
1. **Jupyter server-client** — Jupyter runs on the instance, connect from your local browser
2. **VSCode Remote SSH** — opens `~/workspace` with pre-configured CUDA debug/build tasks and an example `.cu` file
3. **NVIDIA Nsight remote debugging** — GPU debugging over SSH
---
## Requirements
1. AWS profile configured with relevant permissions (profile name can be passed via `--profile` or read from `AWS_PROFILE` env var)
2. AWS CLI v2 — see [here](https://docs.aws.amazon.com/cli/latest/userguide/getting-started-install.html)
3. Python 3.12+ and [uv](https://github.com/astral-sh/uv)
4. An SSH key pair (see below)
## Installation
### From PyPI
```bash
pip install aws-bootstrap-g4dn
```
### With uvx (no install needed)
[uvx](https://docs.astral.sh/uv/guides/tools/) runs the CLI directly in a temporary environment — no global install required:
```bash
uvx --from aws-bootstrap-g4dn aws-bootstrap launch
uvx --from aws-bootstrap-g4dn aws-bootstrap status
uvx --from aws-bootstrap-g4dn aws-bootstrap terminate
```
### From source (development)
```bash
git clone https://github.com/promptromp/aws-bootstrap-g4dn.git
cd aws-bootstrap-g4dn
uv venv
uv sync
```
All methods install the `aws-bootstrap` CLI.
## SSH Key Setup
The CLI expects an Ed25519 SSH public key at `~/.ssh/id_ed25519.pub` by default. If you don't have one, generate it:
```bash
ssh-keygen -t ed25519
```
Accept the default path (`~/.ssh/id_ed25519`) and optionally set a passphrase. The key pair will be imported into AWS automatically on first launch.
To use a different key, pass `--key-path`:
```bash
aws-bootstrap launch --key-path ~/.ssh/my_other_key.pub
```
## Usage
### 🚀 Launching an Instance
```bash
# Show available commands
aws-bootstrap --help
# Dry run — validates AMI lookup, key import, and security group without launching
aws-bootstrap launch --dry-run
# Launch a spot g4dn.xlarge (default)
aws-bootstrap launch
# Launch on-demand in a specific region with a custom instance type
aws-bootstrap launch --on-demand --instance-type g5.xlarge --region us-east-1
# Launch without running the remote setup script
aws-bootstrap launch --no-setup
# Use a specific Python version in the remote venv
aws-bootstrap launch --python-version 3.13
# Use a non-default SSH port
aws-bootstrap launch --ssh-port 2222
# Attach a persistent EBS data volume (96 GB gp3, mounted at /data)
aws-bootstrap launch --ebs-storage 96
# Reattach an existing EBS volume from a previous instance
aws-bootstrap launch --ebs-volume-id vol-0abc123def456
# Use a specific AWS profile
aws-bootstrap launch --profile my-aws-profile
```
After launch, the CLI:
1. **Creates/attaches EBS volume** (if `--ebs-storage` or `--ebs-volume-id` was specified)
2. **Adds an SSH alias** (e.g. `aws-gpu1`) to `~/.ssh/config`
3. **Runs remote setup** — installs utilities, creates a Python venv, installs CUDA-matched PyTorch, sets up Jupyter
4. **Mounts EBS volume** at `/data` (if applicable — formats new volumes, mounts existing ones as-is)
5. **Runs a CUDA smoke test** — verifies `torch.cuda.is_available()` and runs a quick GPU matmul
6. **Prints connection commands** — SSH, Jupyter tunnel, GPU benchmark, and terminate
```bash
ssh aws-gpu1 # venv auto-activates on login
```
### 🔧 What Remote Setup Does
The setup script runs automatically on the instance after SSH becomes available:
| Step | What |
|------|------|
| **GPU verify** | Confirms `nvidia-smi` and `nvcc` are working |
| **Utilities** | Installs `htop`, `tmux`, `tree`, `jq`, `ffmpeg` |
| **Python venv** | Creates `~/venv` with `uv`, auto-activates in `~/.bashrc`. Use `--python-version` to pin a specific Python (e.g. `3.13`) |
| **CUDA-aware PyTorch** | Detects CUDA toolkit version → installs PyTorch from the matching `cu{TAG}` wheel index |
| **CUDA smoke test** | Runs `torch.cuda.is_available()` + GPU matmul to verify the stack |
| **GPU benchmark** | Copies `gpu_benchmark.py` to `~/gpu_benchmark.py` |
| **GPU smoke test notebook** | Copies `gpu_smoke_test.ipynb` to `~/gpu_smoke_test.ipynb` (open in JupyterLab) |
| **Jupyter** | Configures and starts JupyterLab as a systemd service on port 8888 |
| **SSH keepalive** | Configures server-side keepalive to prevent idle disconnects |
| **VSCode workspace** | Creates `~/workspace/.vscode/` with `launch.json` and `tasks.json` (auto-detected `cuda-gdb` path and GPU arch), plus an example `saxpy.cu` |
### 📊 GPU Benchmark
A GPU throughput benchmark is pre-installed at `~/gpu_benchmark.py` on every instance:
```bash
# Run both CNN and Transformer benchmarks (default)
ssh aws-gpu1 'python ~/gpu_benchmark.py'
# CNN only, quick run
ssh aws-gpu1 'python ~/gpu_benchmark.py --mode cnn --benchmark-batches 20'
# Transformer only with custom batch size
ssh aws-gpu1 'python ~/gpu_benchmark.py --mode transformer --transformer-batch-size 16'
# Run CUDA diagnostics first (tests FP16/FP32 matmul, autocast, etc.)
ssh aws-gpu1 'python ~/gpu_benchmark.py --diagnose'
# Force FP32 precision (if FP16 has issues on your GPU)
ssh aws-gpu1 'python ~/gpu_benchmark.py --precision fp32'
```
Reports: iterations/sec, samples/sec, peak GPU memory, and avg batch time for each model.
### 📓 Jupyter (via SSH Tunnel)
```bash
ssh -NL 8888:localhost:8888 aws-gpu1
# Then open: http://localhost:8888
```
Or with explicit key/IP:
```bash
ssh -i ~/.ssh/id_ed25519 -NL 8888:localhost:8888 ubuntu@<public-ip>
```
A **GPU smoke test notebook** (`~/gpu_smoke_test.ipynb`) is pre-installed on every instance. Open it in JupyterLab to interactively verify the CUDA stack, run FP32/FP16 matmuls, train a small CNN on MNIST, and visualise training loss and GPU memory usage.
### 🖥️ VSCode Remote SSH
The remote setup creates a `~/workspace` folder with pre-configured CUDA debug and build tasks:
```
~/workspace/
├── .vscode/
│ ├── launch.json # CUDA debug configs (cuda-gdb path auto-detected)
│ └── tasks.json # nvcc build tasks (GPU arch auto-detected, e.g. sm_75)
└── saxpy.cu # Example CUDA source — open and press F5 to debug
```
Connect directly from your terminal:
```bash
code --folder-uri vscode-remote://ssh-remote+aws-gpu1/home/ubuntu/workspace
```
Then install the [Nsight VSCE extension](https://marketplace.visualstudio.com/items?itemName=NVIDIA.nsight-vscode-edition) on the remote when prompted. Open `saxpy.cu`, set a breakpoint, and press F5.
See [Nsight remote profiling guide](docs/nsight-remote-profiling.md) for more details on CUDA debugging and profiling workflows.
### 📤 Structured Output
All commands support `--output` / `-o` for machine-readable output — useful for scripting, piping to `jq`, or LLM tool-use:
```bash
# JSON output (pipe to jq)
aws-bootstrap -o json status
aws-bootstrap -o json status | jq '.instances[0].instance_id'
# YAML output
aws-bootstrap -o yaml status
# Table output
aws-bootstrap -o table status
# Works with all commands
aws-bootstrap -o json list instance-types | jq '.[].instance_type'
aws-bootstrap -o json launch --dry-run
aws-bootstrap -o json terminate --yes
aws-bootstrap -o json cleanup --dry-run
```
Supported formats: `text` (default, human-readable with color), `json`, `yaml`, `table`. Commands that require confirmation (`terminate`, `cleanup`) require `--yes` in structured output modes.
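The JSON mode composes naturally with Python tooling as well as `jq`. The sketch below assumes only the `instances[].instance_id` shape implied by the `jq` example above — verify field names against your own output:

```python
import json

def instance_ids(status_json: str) -> list[str]:
    """Extract instance IDs from `aws-bootstrap -o json status` output."""
    data = json.loads(status_json)
    return [inst["instance_id"] for inst in data.get("instances", [])]

# In practice, feed it live CLI output, e.g.:
#   raw = subprocess.check_output(
#       ["aws-bootstrap", "-o", "json", "status"], text=True)
sample = '{"instances": [{"instance_id": "i-abc123"}]}'
print(instance_ids(sample))  # ['i-abc123']
```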
### 📋 Listing Resources
```bash
# List all g4dn instance types (default)
aws-bootstrap list instance-types
# List a different instance family
aws-bootstrap list instance-types --prefix p3
# List Deep Learning AMIs (default filter)
aws-bootstrap list amis
# List AMIs with a custom filter
aws-bootstrap list amis --filter "ubuntu/images/hvm-ssd-gp3/ubuntu-noble*"
# Use a specific region
aws-bootstrap list instance-types --region us-east-1
aws-bootstrap list amis --region us-east-1
```
### 🖥️ Managing Instances
```bash
# Show all aws-bootstrap instances (including shutting-down)
aws-bootstrap status
# Include GPU info (CUDA toolkit + driver version, GPU name, architecture) via SSH
aws-bootstrap status --gpu
# Hide connection commands (shown by default for each running instance)
aws-bootstrap status --no-instructions
# List instances in a specific region
aws-bootstrap status --region us-east-1
# Terminate all aws-bootstrap instances (with confirmation prompt)
aws-bootstrap terminate
# Terminate but preserve EBS data volumes for reuse
aws-bootstrap terminate --keep-ebs
# Terminate by SSH alias (resolved via ~/.ssh/config)
aws-bootstrap terminate aws-gpu1
# Terminate by instance ID
aws-bootstrap terminate i-abc123
# Mix aliases and instance IDs
aws-bootstrap terminate aws-gpu1 i-def456
# Skip confirmation prompt
aws-bootstrap terminate --yes
# Remove stale SSH config entries for terminated instances
aws-bootstrap cleanup
# Preview what would be removed without modifying config
aws-bootstrap cleanup --dry-run
# Also find and delete orphan EBS data volumes
aws-bootstrap cleanup --include-ebs
# Preview orphan volumes without deleting
aws-bootstrap cleanup --include-ebs --dry-run
# Skip confirmation prompt
aws-bootstrap cleanup --yes
```
`status --gpu` reports both the **installed CUDA toolkit** version (from `nvcc`) and the **maximum CUDA version supported by the driver** (from `nvidia-smi`), so you can see at a glance whether they match:
```
CUDA: 12.8 (driver supports up to 13.0)
```
SSH aliases are managed automatically — they're created on `launch`, shown in `status`, and cleaned up on `terminate`. Aliases use sequential numbering (`aws-gpu1`, `aws-gpu2`, etc.) and never reuse numbers from previous instances. You can use aliases anywhere you'd use an instance ID, e.g. `aws-bootstrap terminate aws-gpu1`.
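For reference, an alias entry written to `~/.ssh/config` might look like the following — a sketch only; the exact fields and file paths `launch` writes may differ:

```
Host aws-gpu1
    HostName ec2-XX-XX-XX-XX.us-west-2.compute.amazonaws.com
    User ubuntu
    IdentityFile ~/.ssh/aws-bootstrap.pem
```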
## EBS Data Volumes
Attach persistent EBS storage to keep datasets and model checkpoints across instance lifecycles. Volumes are mounted at `/data` and persist independently of the instance.
```bash
# Create a new 96 GB gp3 volume, formatted and mounted at /data
aws-bootstrap launch --ebs-storage 96
# After terminating with --keep-ebs, reattach the same volume to a new instance
aws-bootstrap terminate --keep-ebs
# Output: Preserving EBS volume: vol-0abc123...
# Reattach with: aws-bootstrap launch --ebs-volume-id vol-0abc123...
aws-bootstrap launch --ebs-volume-id vol-0abc123def456
```
Key behaviors:
- `--ebs-storage` and `--ebs-volume-id` are mutually exclusive
- New volumes are formatted as ext4; existing volumes are mounted as-is
- Volumes are tagged for automatic discovery by `status` and `terminate`
- `terminate` deletes data volumes by default; use `--keep-ebs` to preserve them
- **Orphan cleanup** — use `aws-bootstrap cleanup --include-ebs` to find and delete orphan volumes (e.g. from spot interruptions or forgotten `--keep-ebs` volumes). Use `--dry-run` to preview
- **Spot-safe** — data volumes survive spot interruptions. If AWS reclaims your instance, the volume detaches automatically and can be reattached to a new instance with `--ebs-volume-id`
- EBS volumes must be in the same availability zone as the instance
- Mount failures are non-fatal — the instance remains usable
## EC2 vCPU Quotas
AWS accounts have [service quotas](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ec2-resource-limits.html) that limit how many vCPUs you can run per instance family. New or lightly-used accounts often have a **default quota of 0 vCPUs** for GPU instance families (G and VT), which will cause errors on launch:
- **Spot**: `MaxSpotInstanceCountExceeded`
- **On-Demand**: `VcpuLimitExceeded`
Check your current quotas (g4dn.xlarge requires at least 4 vCPUs):
```bash
# Built-in: show all GPU family quotas
aws-bootstrap quota show
# Show only G/VT family quotas
aws-bootstrap quota show --family gvt
# Show P family quotas (P2 through P6)
aws-bootstrap quota show --family p
# Or use the AWS CLI directly:
aws service-quotas get-service-quota \
--service-code ec2 \
--quota-code L-3819A6DF \
--region us-west-2
```
Request increases:
```bash
# Built-in: request a G/VT spot quota increase (default family)
aws-bootstrap quota request --type spot --desired-value 4
# Request a P family spot quota increase
aws-bootstrap quota request --family p --type spot --desired-value 192
# Check request status
aws-bootstrap quota history
# Or use the AWS CLI directly:
aws service-quotas request-service-quota-increase \
--service-code ec2 \
--quota-code L-3819A6DF \
--desired-value 4 \
--region us-west-2
```
Quota codes may vary by region or account type. To list the actual codes in your region:
```bash
# List all G/VT-related quotas
aws service-quotas list-service-quotas \
--service-code ec2 \
--region us-west-2 \
--query "Quotas[?contains(QuotaName, 'G and VT')].[QuotaCode,QuotaName,Value]" \
--output table
```
Common quota codes:
| Family | Type | Code | Description |
|--------|------|------|-------------|
| G/VT | Spot | `L-3819A6DF` | All G and VT Spot Instance Requests |
| G/VT | On-Demand | `L-DB2E81BA` | Running On-Demand G and VT instances |
| P | Spot | `L-7212CCBC` | All P Spot Instance Requests |
| P | On-Demand | `L-417A185B` | Running On-Demand P instances |
| DL | Spot | `L-85EED4F7` | All DL Spot Instance Requests |
| DL | On-Demand | `L-6E869C2A` | Running On-Demand DL instances |
Small increases (4-8 vCPUs) are typically auto-approved within minutes. You can also request increases via the [Service Quotas console](https://console.aws.amazon.com/servicequotas/home). While waiting, you can test the full launch/poll/SSH flow with a non-GPU instance type:
```bash
aws-bootstrap launch --instance-type t3.medium --ami-filter "ubuntu/images/hvm-ssd-gp3/ubuntu-noble-24.04-amd64-server-*"
```
## Claude Code Plugin
A [Claude Code](https://docs.anthropic.com/en/docs/claude-code) plugin is included in the [`aws-bootstrap-skill/`](aws-bootstrap-skill/) directory, enabling LLM coding agents to autonomously provision and manage GPU instances.
### Install from GitHub
```bash
# Add the marketplace (registers this repo as a plugin source)
/plugin marketplace add promptromp/aws-bootstrap-g4dn
# Install the plugin
/plugin install aws-bootstrap-skill@promptromp-aws-bootstrap-g4dn
```
### Install locally (from repo checkout)
```bash
claude --plugin-dir ./aws-bootstrap-skill
```
See [`aws-bootstrap-skill/README.md`](aws-bootstrap-skill/README.md) for details.
## Additional Resources
| Topic | Link |
|-------|------|
| GPU instance pricing | [instances.vantage.sh](https://instances.vantage.sh/aws/ec2/g4dn.xlarge) |
| Spot instance quotas | [AWS docs](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/using-spot-limits.html) |
| Deep Learning AMIs | [AWS docs](https://docs.aws.amazon.com/dlami/latest/devguide/what-is-dlami.html) |
| Nsight remote GPU profiling | [Guide](docs/nsight-remote-profiling.md) — Nsight Compute, Nsight Systems, and Nsight VSCE on EC2 |
Tutorials on setting up a CUDA environment on EC2 GPU instances:
- [Provision an EC2 GPU Host on AWS](https://www.dolthub.com/blog/2025-03-12-provision-an-ec2-gpu-host-on-aws/) (DoltHub, 2025)
- [AWS EC2 Setup for GPU/CUDA Programming](https://techfortalk.co.uk/2025/10/11/aws-ec2-setup-for-gpu-cuda-programming/) (TechForTalk, 2025)
| text/markdown | Adam Ever-Hadani | null | null | null | null | aws, ec2, gpu, cuda, deep-learning, spot-instances, cli | [
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: 3.13",
"Programming Language :: Python :: 3.14"
] | [] | null | null | >=3.12 | [] | [] | [] | [
"boto3>=1.35",
"click>=8.1",
"pyyaml>=6.0.3",
"tabulate>=0.9.0"
] | [] | [] | [] | [
"Homepage, https://github.com/promptromp/aws-bootstrap-g4dn",
"Issues, https://github.com/promptromp/aws-bootstrap-g4dn/issues"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-21T03:41:02.620964 | aws_bootstrap_g4dn-0.12.0.tar.gz | 115,115 | 12/d5/05bf61f80e7c8e9c8a4677352c40dfd9d719bf7992f1cbe7854e41378c8d/aws_bootstrap_g4dn-0.12.0.tar.gz | source | sdist | null | false | 5468eb6765e316ec0fd35f850f46ff86 | 05fdc1e07d28f43bdd360982c1ae8d49ae17acb0a6170a3ddc5495eeae0fd11b | 12d505bf61f80e7c8e9c8a4677352c40dfd9d719bf7992f1cbe7854e41378c8d | MIT | [
"LICENSE"
] | 214 |
2.4 | veritail | 0.1.1 | Ecommerce search relevance evaluation tool | # veritail
LLM evals framework tailored for ecommerce search.
veritail scores every query-result pair, computes IR metrics from those scores, and runs deterministic quality checks — all in a single command. Run it on every release to track search quality, or compare two configurations side by side to measure the impact of a change before it ships.
Five evaluation layers:
- **LLM-as-a-Judge scoring** — every query-result pair scored 0-3 with structured reasoning, using any cloud or local model
- **IR metrics** — NDCG, MRR, MAP, Precision, and attribute match computed from LLM scores
- **Deterministic quality checks** — low result counts, near-duplicate results, out-of-stock ranking issues, price outliers, and more
- **Autocorrect evaluation** — catches intent-altering or unnecessary query corrections
- **Autocomplete evaluation** — deterministic checks and LLM-based semantic evaluation for type-ahead suggestions
Includes 14 built-in ecommerce verticals for domain-aware judging, with support for custom vertical context and rubrics. Optional [Langfuse](docs/backends.md#langfuse-backend) integration for full observability — every judgment, score, and LLM call traced and grouped by evaluation run.
<p align="center">
<img src="assets/main.gif" alt="Search relevance evaluation demo" width="900">
</p>
<p align="center">
<em>LLM-as-a-Judge scores every query-result pair, computes NDCG/MRR/MAP/Precision, runs deterministic checks, and evaluates autocorrect behavior.</em>
</p>
## Quick Start
### 1. Install
```bash
pip install veritail # OpenAI + local models (default)
pip install veritail[anthropic] # + Claude support
pip install veritail[gemini] # + Gemini support
pip install veritail[cloud] # all three cloud providers
pip install veritail[cloud,langfuse] # everything
```
The base install includes the OpenAI SDK because it doubles as the client for OpenAI-compatible local servers (Ollama, vLLM, LM Studio, etc.) — so `pip install veritail` works with both cloud and local models out of the box.
### 2. Bootstrap starter files (recommended)
```bash
veritail init
```
This generates:
- `adapter.py` with a real HTTP request skeleton for both `search()` and `suggest()` (endpoint, auth header, timeout, JSON parsing)
- `queries.csv` with example search queries (query types are automatically classified by the LLM during evaluation)
- `prefixes.csv` with example prefixes (prefix types are automatically inferred from character count)
By default, existing files are not overwritten. Use `--force` to overwrite.
### 3. Create a query set (manual option)
```csv
query
red running shoes
wireless earbuds
nike air max 90
```
Optional columns: `type` (navigational, broad, long_tail, attribute) and `category`. When omitted, `type` is automatically classified by the LLM judge before evaluation.
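For example, a query set with the optional columns filled in (values here are illustrative):

```csv
query,type,category
nike air max 90,navigational,shoes
red running shoes,attribute,shoes
wireless earbuds,broad,electronics
gift for marathon runner under $100,long_tail,shoes
```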
### 4. Generate queries with an LLM (alternative)
If you don't have query logs yet, let an LLM generate a starter set:
```bash
# From a built-in vertical
veritail generate-queries --vertical electronics --output queries.csv --llm-model gpt-4o
# From business context
veritail generate-queries --context "B2B industrial fastener distributor" --output queries.csv --llm-model gpt-4o
# Both vertical and context, custom count
veritail generate-queries \
--vertical foodservice \
--context "BBQ restaurant equipment supplier" \
--output queries.csv \
--count 50 \
--llm-model gpt-4o
```
This writes a CSV with `query`, `type`, `category`, and `source` columns. Review and edit the generated queries before running an evaluation — the file is designed for human-in-the-loop review.
**Cost note:** Query generation makes a single LLM call (a fraction of a cent with most cloud models).
### 5. Create an adapter (manual option)
```python
# my_adapter.py
from veritail import SearchResponse, SearchResult
def search(query: str) -> SearchResponse:
results = my_search_api.query(query)
items = [
SearchResult(
product_id=r["id"],
title=r["title"],
description=r["description"],
category=r["category"],
price=r["price"],
position=i,
in_stock=r.get("in_stock", True),
attributes=r.get("attributes", {}),
)
for i, r in enumerate(results)
]
return SearchResponse(results=items)
# To report autocorrect / "did you mean" corrections:
# return SearchResponse(results=items, corrected_query="corrected text")
```
Adapters can return either `SearchResponse` or a bare `list[SearchResult]` (backward compatible). Use `SearchResponse` when your search engine returns autocorrect information.
### 6. Run evaluation
```bash
export OPENAI_API_KEY=sk-...
veritail run \
--queries queries.csv \
--adapter my_adapter.py \
--llm-model gpt-4o \
--top-k 10 \
--open
```
For a detailed breakdown of API call volume and cost control options, see [LLM Usage & Cost](docs/llm-usage-and-cost.md).
Outputs are written under:
```text
eval-results/<generated-or-custom-config-name>/
```
### 7. Compare two search configurations
```bash
veritail run \
--queries queries.csv \
--adapter bm25_search_adapter.py --config-name bm25-baseline \
--adapter semantic_search_adapter.py --config-name semantic-v2 \
--llm-model gpt-4o
```
The comparison report shows metric deltas, overlap, rank correlation, and position shifts.
## Vertical Guidance
`--vertical` injects domain-specific scoring guidance into the judge prompt. Each vertical teaches the LLM judge what matters most in a particular ecommerce domain — the hard constraints, industry jargon, certification requirements, and category-specific nuances that generic relevance scoring would miss.
Choose the vertical that best matches the ecommerce site you are evaluating.
| Vertical | Description | Example retailers |
|---|---|---|
| `automotive` | Aftermarket, OEM, and remanufactured parts for cars, trucks, and light vehicles | RockAuto, AutoZone, FCP Euro |
| `beauty` | Skincare, cosmetics, haircare, fragrance, and body care | Sephora, Ulta Beauty, Dermstore |
| `electronics` | Consumer electronics and computer components | Best Buy, Newegg, B&H Photo |
| `fashion` | Clothing, shoes, and accessories | Nordstrom, ASOS, Zappos |
| `foodservice` | Commercial kitchen equipment and supplies for restaurants, cafeterias, and catering | WebstaurantStore, Katom, TigerChef |
| `furniture` | Furniture and home furnishings for residential, commercial, and contract use | Wayfair, Pottery Barn, IKEA |
| `groceries` | Online grocery retail covering food, beverages, and household essentials | Instacart, Amazon Fresh, FreshDirect |
| `home-improvement` | Building materials, hardware, plumbing, electrical, and tools for contractors and DIY | Home Depot, Lowe's, Menards |
| `industrial` | Industrial supply and MRO (Maintenance, Repair, and Operations) | Grainger, McMaster-Carr, Fastenal |
| `marketplace` | Multi-seller marketplace platforms | Amazon, eBay, Etsy |
| `medical` | Medical and surgical supplies for hospitals, clinics, and home health | Henry Schein, Medline, McKesson |
| `office-supplies` | Office products, ink/toner, paper, and workspace equipment | Staples, Office Depot, W.B. Mason |
| `pet-supplies` | Pet food, treats, toys, health products, and habitat equipment across all species | Chewy, PetSmart, Petco |
| `sporting-goods` | Athletic equipment, apparel, and accessories across all sports and outdoor activities | Dick's Sporting Goods, REI, Academy Sports |
You can also provide a custom vertical as a plain text file with `--vertical ./my_vertical.txt`. Use the built-in verticals in `src/veritail/verticals/` as templates.
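A custom vertical file is plain text that gets injected into the judge prompt. A minimal hypothetical example (`my_vertical.txt` — the wording and domain are illustrative, not a prescribed schema):

```text
Vertical: marine supplies
Hard constraints: part fitment (engine make/model/year) must match exactly.
Jargon: "gelcoat", "bilge", "NMEA 2000" are domain terms, not typos.
Prioritize: in-stock OEM parts over aftermarket when the query names a brand.
```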
Use `--context` to layer enterprise-specific rules on top of a vertical — things like brand priorities, certification requirements, or domain jargon unique to your store. See [Custom Rubrics & Enterprise Context](docs/custom-rubrics.md) for details.
Examples:
```bash
# Built-in vertical
veritail run \
--queries queries.csv \
--adapter my_adapter.py \
--vertical foodservice
# Custom vertical text file
veritail run \
--queries queries.csv \
--adapter my_adapter.py \
--vertical ./my_vertical.txt
# Vertical + enterprise-specific rules
veritail run \
--queries queries.csv \
--adapter my_adapter.py \
--vertical home-improvement \
--context "Pro contractor supplier. Queries for lumber should always prioritize pressure-treated options."
# Vertical + detailed business context from a file
veritail run \
--queries queries.csv \
--adapter my_adapter.py \
--vertical home-improvement \
--context context.txt
```
## More Reports
### Evaluate autocomplete suggestions
<br>
<p align="center">
<img src="assets/autocomplete.gif" alt="Autocomplete evaluation demo" width="900">
</p>
<p align="center">
<em>Deterministic checks (duplicates, prefix coherence, encoding) and LLM-based semantic scoring for suggestion relevance and diversity.</em>
</p>
---
### Side-by-side comparison
<br>
<p align="center">
<img src="assets/comparison.gif" alt="Side-by-side comparison demo" width="900">
</p>
<p align="center">
<em>Two search configurations compared head-to-head: per-query NDCG deltas, win/loss/tie analysis, rank correlation, and result overlap.</em>
</p>
---
### Langfuse observability
<br>
<p align="center">
<img src="assets/langfuse.gif" alt="Langfuse observability demo" width="900">
</p>
<p align="center">
<em>Every judgment, score, and LLM call traced and grouped by evaluation run — with full prompt/response visibility.</em>
</p>
## Documentation
| Guide | Description |
|---|---|
| [Evaluation Model](docs/evaluation-model.md) | LLM judgment scoring, deterministic checks, and IR metrics |
| [Supported LLM Providers](docs/supported-llm-providers.md) | Cloud providers, local model servers, and model quality guidance |
| [LLM Usage & Cost](docs/llm-usage-and-cost.md) | API call volume breakdown and cost control strategies |
| [Batch Mode & Resume](docs/batch-mode-and-resume.md) | 50% cost reduction via batch APIs and resuming interrupted runs |
| [Autocorrect Evaluation](docs/autocorrect-evaluation.md) | Evaluating query correction quality |
| [Autocomplete Evaluation](docs/autocomplete-evaluation.md) | Type-ahead suggestion evaluation with checks and LLM scoring |
| [Custom Rubrics & Enterprise Context](docs/custom-rubrics.md) | Custom scoring rubrics and business-specific evaluation rules |
| [Custom Checks](docs/custom-checks.md) | Adding domain-specific deterministic checks |
| [CLI Reference](docs/cli-reference.md) | Complete flag reference for all commands |
| [Backends](docs/backends.md) | File and Langfuse storage backends |
| [Development](docs/development.md) | Local development setup and running tests |
## Disclaimer
veritail uses large language models to generate relevance judgments. LLM outputs can be inaccurate, inconsistent, or misleading. All scores, reasoning, and reports produced by this tool should be reviewed by a qualified human before informing production decisions. veritail is an evaluation aid, not a substitute for human judgment.
- The authors are not liable for any decisions made based on its output or for any API costs incurred by running evaluations.
- Users are responsible for complying with the terms of service of any LLM provider they use with this tool.
- Evaluation data is sent to the configured LLM provider for scoring — use a local model if data must stay on-premise.
- Adapter modules, custom check modules, and custom rubric files are loaded and executed as Python code at runtime — only run files you trust.
- Evaluation results, including product catalog data, are written to disk in plaintext under the output directory (`eval-results/` by default) — ensure this directory is excluded from version control and not stored in shared or publicly accessible locations.
## License
MIT
| text/markdown | Ahmed Arnaout | null | null | null | null | ecommerce, evaluation, information-retrieval, ndcg, relevance, search | [
"Development Status :: 3 - Alpha",
"Intended Audience :: Developers",
"License :: OSI Approved :: MIT License",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.9",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: 3.13",
"Topic :: Scientific/Engineering :: Information Analysis"
] | [] | null | null | >=3.9 | [] | [] | [] | [
"click>=8.0",
"jinja2>=3.0",
"openai>=1.0",
"python-dotenv>=1.0",
"rich>=13.0",
"anthropic>=0.39.0; extra == \"anthropic\"",
"anthropic>=0.39.0; extra == \"cloud\"",
"google-genai>=1.0; extra == \"cloud\"",
"anthropic>=0.39.0; extra == \"dev\"",
"google-genai>=1.0; extra == \"dev\"",
"langfuse>=2.0; extra == \"dev\"",
"mypy>=1.0; extra == \"dev\"",
"openai>=1.0; extra == \"dev\"",
"pytest-cov>=4.0; extra == \"dev\"",
"pytest>=7.0; extra == \"dev\"",
"ruff>=0.4.0; extra == \"dev\"",
"google-genai>=1.0; extra == \"gemini\"",
"langfuse>=2.0; extra == \"langfuse\""
] | [] | [] | [] | [
"Homepage, https://github.com/asarnaout/veritail",
"Repository, https://github.com/asarnaout/veritail",
"Documentation, https://github.com/asarnaout/veritail/tree/main/docs",
"Bug Tracker, https://github.com/asarnaout/veritail/issues"
] | twine/6.2.0 CPython/3.14.2 | 2026-02-21T03:40:37.376214 | veritail-0.1.1.tar.gz | 44,010,706 | 09/be/e68bc4d6c6c6952a33dfb36c30b54e309f00981e267f0f8a880669d99914/veritail-0.1.1.tar.gz | source | sdist | null | false | e8660c15ecb6e6a5443ee7299ba37436 | f53e5369d60899d4913af63fa9a0a44835b9700837628c56b5c7deaf031173d3 | 09bee68bc4d6c6c6952a33dfb36c30b54e309f00981e267f0f8a880669d99914 | MIT | [
"LICENSE"
] | 274 |
2.4 | chimeric | 0.3.0 | Unified interface for multiple LLM providers with automatic provider detection and seamless switching | <div align="center">
<img src=".github/assets/chimeric.png" alt="Chimeric Logo" width="200"/>
# Chimeric
[](https://pypi.org/project/chimeric/)
[](https://pypi.org/project/chimeric/)
[](https://opensource.org/licenses/MIT)
[](https://verdenroz.github.io/chimeric/)
[](https://github.com/Verdenroz/chimeric/actions/workflows/ci.yml)
[](https://codecov.io/gh/Verdenroz/chimeric)
**Unified Python interface for multiple LLM providers with automatic provider detection and seamless switching.**
</div>
## 🚀 Supported Providers
[](https://openai.com/)
[](https://anthropic.com/)
[](https://ai.google.dev/)
[](https://x.ai/)
[](https://groq.com/)
[](https://cohere.ai/)
[](https://cerebras.ai/)
[](https://openrouter.ai/)
## 📖 Documentation
For detailed usage examples, configuration options, and advanced features, visit our [documentation](https://verdenroz.github.io/chimeric/).
## 📦 Installation
```bash
pip install chimeric
```
Set your API keys as environment variables:
```bash
export OPENAI_API_KEY="your-key-here"
export ANTHROPIC_API_KEY="your-key-here"
```
## ⚡ Quickstart
### Basic Usage
```python
from chimeric import Chimeric
client = Chimeric() # Auto-detects API keys from environment
response = client.generate(
model="gpt-4o",
messages="Hello!"
)
print(response.content)
```
### Streaming Responses
```python
# Real-time streaming
stream = client.generate(
model="claude-3-5-sonnet-latest",
messages="Tell me a story about space exploration",
stream=True
)
for chunk in stream:
print(chunk.content, end="", flush=True)
```
### Function Calling with Tools
```python
@client.tool()
def get_weather(city: str) -> str:
"""Get current weather for a city."""
return f"Sunny, 72°F in {city}"
@client.tool()
def calculate_tip(bill_amount: float, tip_percentage: float = 18.0) -> dict:
"""Calculate tip and total amount for a restaurant bill."""
tip = bill_amount * (tip_percentage / 100)
total = bill_amount + tip
return {"tip": tip, "total": total, "tip_percentage": tip_percentage}
response = client.generate(
model="gpt-4o",
messages=[
{"role": "user", "content": "What's the weather in NYC?"},
{"role": "user", "content": "Also calculate a tip for a $50 dinner bill"}
]
)
print(response.content)
```
### Structured Output
```python
from pydantic import BaseModel
class Sentiment(BaseModel):
label: str
score: float
reasoning: str
response = client.generate(
model="gpt-4o",
messages="Analyse the sentiment: 'This library is fantastic!'",
response_model=Sentiment,
)
print(response.parsed.label) # "positive"
print(response.parsed.score) # 0.98
```
### Embeddings
```python
# Single text → result.embedding (list[float])
result = client.embed(
model="text-embedding-3-small",
input="Python developer with 5 years experience",
)
print(len(result.embedding)) # e.g. 1536
# Batch → result.embeddings (list[list[float]])
result = client.embed(
model="text-embedding-3-small",
input=["Python developer", "Go engineer", "React developer"],
)
print(len(result.embeddings)) # 3
# Also available via Google and Cohere
result = client.embed(model="gemini-embedding-001", input="Hello")
result = client.embed(model="embed-english-v3.0", input="Hello")
```
### Multi-Provider Switching
```python
# Seamlessly switch between providers
models = ["gpt-4o-mini", "claude-3-5-haiku-latest", "gemini-2.5-flash"]
for model in models:
response = client.generate(
model=model,
messages="Explain quantum computing in one sentence"
)
print(f"{model}: {response.content}")
```
## 🔧 Key Features
- **Multi-Provider Support**: Switch between 8 major AI providers seamlessly
- **Automatic Detection**: Auto-detects available API keys from environment
- **Unified Interface**: Consistent API across all providers
- **Embeddings**: Single and batch text embeddings via OpenAI, Google, and Cohere
- **Structured Output**: Parse responses directly into Pydantic models
- **Streaming Support**: Real-time response streaming
- **Function Calling**: Tool integration with decorators
- **Async Support**: Full async/await compatibility
- **Local AI**: Connect to Ollama, LM Studio, or any OpenAI-compatible endpoint
## 🐛 Issues & Feature Requests
- **Found a bug?** Use our [Bug Report](.github/ISSUE_TEMPLATE/bug_report.yml) template
- **Want a feature?** Use our [Feature Request](.github/ISSUE_TEMPLATE/feature_request.yml) template
## 📄 License
This project is licensed under the MIT License - see the [LICENSE](LICENSE) file for details.
| text/markdown | null | Harvey Tseng <harveytseng2@gmail.com> | null | null | MIT | ai, anthropic, gemini, llm, multi-provider, openai, unified-interface | [
"Development Status :: 4 - Beta",
"Intended Audience :: Developers",
"License :: OSI Approved :: MIT License",
"Operating System :: OS Independent",
"Programming Language :: Python",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: 3.13",
"Topic :: Scientific/Engineering :: Artificial Intelligence",
"Topic :: Software Development :: Libraries :: Python Modules",
"Typing :: Typed"
] | [] | null | null | <4.0,>=3.11 | [] | [] | [] | [
"httpx>=0.27.0",
"pydantic>=2.11.5"
] | [] | [] | [] | [
"Repository, https://github.com/Verdenroz/chimeric",
"Issues, https://github.com/Verdenroz/chimeric/issues",
"Documentation, https://verdenroz.github.io/chimeric/",
"Changelog, https://github.com/Verdenroz/chimeric/blob/main/CHANGELOG.md"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-21T03:39:00.566487 | chimeric-0.3.0.tar.gz | 4,149,005 | 93/a0/8fcce2d4c817523aea126230d2bd5d5352df045fa7ce2f435ed9c36a4106/chimeric-0.3.0.tar.gz | source | sdist | null | false | edab72eb81348964392ba097d979133b | 872f6053441260e0f24fe9077fab3c866e46d2af54f77ed3a7b16442e39358d4 | 93a08fcce2d4c817523aea126230d2bd5d5352df045fa7ce2f435ed9c36a4106 | null | [
"LICENSE"
] | 226 |
2.4 | agent-recall | 0.2.1 | Persistent memory and AI briefings for coding agents — drop-in MCP server | # agent-recall
[](https://github.com/mnardit/agent-recall/actions/workflows/tests.yml)
[](https://pypi.org/project/agent-recall/)
[](LICENSE)
[](https://pypi.org/project/agent-recall/)
**Persistent memory for AI coding agents.** Your agent forgets everything between sessions — names, decisions, preferences, context. agent-recall fixes this.
Built from production: extracted from a real system running **30+ concurrent AI agents** at a digital agency. Not a prototype — every feature exists because something broke in production.
```
Before: "Who is Alice?" (every single session)
After: Agent starts with: "Alice — Lead Engineer at Acme, prefers async,
last discussed the API migration on Feb 12"
```
Works with **Claude Code**, **Cursor**, **Windsurf**, **Cline** — any MCP-compatible client.
### Why agent-recall?
Other memory solutions exist (Mem0, Zep, LangGraph checkpointing). Here's what makes this different:
| | agent-recall | Mem0 | Zep | LangGraph |
|---|---|---|---|---|
| **Deployment** | Local SQLite | Cloud-first | Self-hosted (Neo4j) | Postgres/SQLite |
| **Multi-tenant** | Scope hierarchy | User IDs | Org-based | Thread-based |
| **AI briefings** | Built-in | No | No | No |
| **MCP integration** | Native | No | No | No |
| **Temporal queries** | Bitemporal slots | Versioned | Valid at/invalid at | Checkpoints |
| **Open source** | MIT | Limited | Yes | Apache 2.0 |
| **Cost** | Free | Free tier + paid | Free | Free |
**In short:** Local-first. Your data stays on your machine. Multi-tenant scope hierarchy, not just user IDs — built for agencies and teams managing multiple projects. AI briefings that summarize hundreds of facts into what actually matters. MCP-native — works with any editor that supports MCP.
---
## How It Works
```
┌─────────────────────────────────────────────────────────────────────┐
│ SESSION 1 │
│ │
│ You: "Alice from Acme called. She wants the API done by Friday." │
│ │ │
│ ▼ │
│ Agent saves automatically via MCP tools: │
│ create_entities: Alice (person), Acme (client) │
│ add_observations: "wants API done by Friday" │
│ create_relations: Alice → works_at → Acme │
│ │ │
│ ▼ │
│ Stored in local SQLite ─────► ~/.agent-recall/frames.db │
└─────────────────────────────────────────────────────────────────────┘
│
(session ends)
│
┌─────────────────────────────────────────────────────────────────────┐
│ SESSION 2 │
│ │
│ Agent starts and receives a briefing: │
│ "Alice (Lead Engineer, Acme) — wants API done by Friday. │
│ Acme is a client. Last discussed Feb 12." │
│ │ │
│ ▼ │
│ Agent already knows who Alice is, what's urgent, and what to do. │
└─────────────────────────────────────────────────────────────────────┘
```
**Why does the agent save facts automatically?** The MCP server includes behavioral instructions that tell the agent to proactively save people, decisions, and context as it encounters them. No special prompting needed — the agent receives these instructions when it connects to the memory server.
---
## Setup
### Step 1: Install
```bash
pip install 'agent-recall[mcp]'
agent-recall init
```
This creates the SQLite database at `~/.agent-recall/frames.db`.
> `agent-recall[mcp]` installs with MCP server support. Use `pip install agent-recall` if you only need the Python API/CLI.
### Step 2: Add MCP server to your editor
This gives your agent the memory tools (`create_entities`, `add_observations`, `search_nodes`, etc.) and the instructions to use them proactively.
<details open>
<summary><strong>Claude Code</strong></summary>
Add to `.mcp.json` in your project root:
```json
{
"mcpServers": {
"memory": {
"command": "python3",
"args": ["-m", "agent_recall.mcp_server"]
}
}
}
```
</details>
<details>
<summary><strong>Cursor</strong></summary>
Add to `.cursor/mcp.json`:
```json
{
"mcpServers": {
"memory": {
"command": "python3",
"args": ["-m", "agent_recall.mcp_server"]
}
}
}
```
</details>
<details>
<summary><strong>Windsurf</strong></summary>
Add to `~/.codeium/windsurf/mcp_config.json`:
```json
{
"mcpServers": {
"memory": {
"command": "python3",
"args": ["-m", "agent_recall.mcp_server"]
}
}
}
```
</details>
<details>
<summary><strong>Cline</strong></summary>
Add to `cline_mcp_settings.json`:
```json
{
"mcpServers": {
"memory": {
"command": "python3",
"args": ["-m", "agent_recall.mcp_server"]
}
}
}
```
</details>
### Step 3: (Claude Code) Add hooks for automatic context injection
Hooks ensure the agent receives its memory briefing at the start of every session and keep caches fresh after writes. **This step is optional but strongly recommended for Claude Code users.**
Add to `.claude/settings.json` (project or global):
```json
{
"hooks": {
"SessionStart": [
{ "command": "agent-recall-session-start" }
],
"PostToolUse": [
{ "command": "agent-recall-post-tool-use" }
]
}
}
```
| Hook | What it does |
|------|-------------|
| `SessionStart` | Injects AI briefing (or raw context) into the agent's system prompt when a session starts |
| `PostToolUse` | After the agent writes to memory, invalidates stale caches and regenerates vault files |
> **Other editors:** Hooks are Claude Code-specific. For other clients, use the [CLI](#cli) (`agent-recall generate`) or [Python API](#python-api) to generate and serve briefings.
### Step 4: Verify it works
Start a new session with your agent and look for:
- The agent should have memory tools listed (e.g., `create_entities`, `search_nodes`)
- If hooks are set up, the agent shows a "Memory is empty" message on first run
- Mention a person or make a decision — the agent should save it automatically
- Start another session — the agent should know about the person/decision
---
## What Happens Under the Hood
Here's the full lifecycle:
```
1. CONNECT
Agent connects to MCP server
└─► Server sends instructions: "Proactively save people, decisions, facts..."
└─► Agent receives 9 memory tools with descriptions explaining when to use each
2. SAVE (during conversation)
Agent encounters important information
└─► search_nodes("Alice") — check if entity exists
└─► create_entities([{...}]) — create if new
└─► add_observations([{...}]) — add facts to existing entity
└─► create_relations([{...}]) — link entities together
All stored in ~/.agent-recall/frames.db (SQLite, scoped per project)
3. NOTIFY (PostToolUse hook, Claude Code only)
After each memory write
└─► Marks affected agent caches as stale
└─► Regenerates Obsidian vault files (if configured)
4. BRIEFING (SessionStart hook or CLI)
Next session starts
└─► Reads cached AI briefing (if fresh)
└─► Or assembles raw context from database
└─► Or generates new briefing via LLM (if stale + adaptive mode)
└─► Injects into agent's system prompt
5. AGENT KNOWS
Agent starts with structured context:
└─► Key people, their roles, preferences
└─► Current tasks, blockers, deadlines
└─► Recent decisions and their rationale
```
---
## Key Concepts
| Concept | Description |
|---------|-------------|
| **Entity** | A named thing: person, client, project. Has a type and unique name. |
| **Slot** | A key-value pair on an entity (e.g., `role: "Engineer"`). Scoped and bitemporal — old values are archived, not deleted. |
| **Observation** | Free-text fact attached to an entity (e.g., "Prefers async communication"). Scoped. |
| **Relation** | Directed link between two entities (e.g., Alice —works_at→ Acme). |
| **Scope** | Namespace for data isolation. Slots and observations belong to a scope (e.g., `"global"`, `"acme"`, `"proj-a"`). |
| **Scope chain** | Ordered list of scopes from general to specific: `["global", "acme", "proj-a"]`. Local overrides parent for the same slot. |
| **Tier** | Agent importance level: 0 = no context, 1 = minimal, 2 = full (default), 3 = orchestrator (sees everything). |
| **Briefing** | AI-generated summary of raw memory data, injected into agent's system prompt at startup. Cached and invalidated adaptively. |
---
## Features
### Scoped Memory
Not a flat key-value store. Memory is **scoped** — the same person can have different roles in different projects:
```
Alice:
role (global) = "Engineer"
role (acme) = "Lead Engineer" ← Agent working on Acme sees this
role (beta-corp) = "Consultant" ← Agent working on Beta sees this
```
Scoping keeps context clean across projects and prevents data from leaking between workstreams.
### AI Briefings
Raw data dumps don't work. Thousands of facts are noise, not context.
agent-recall uses an LLM to **summarize** what matters into a structured briefing:
```
Raw (what's in the database):
147 slots across 34 entities, 89 observations, 23 relations...
Briefing (what the agent actually sees):
## Key People
- Alice (Lead Engineer, Acme) — prefers async, owns the API migration
- Bob (PM) — on vacation until Feb 20
## Current Tasks
- API migration: blocked on auth module (Alice working on it)
- Dashboard redesign: waiting for Bob's review
## Recent Decisions
- Team agreed to use GraphQL on Feb 10 call
- Next client meeting: Feb 19
```
Generate briefings via CLI (`agent-recall generate my-agent`) or let the SessionStart hook handle it automatically.
### Multi-Agent Ready
Built for systems with multiple agents sharing one knowledge base but seeing different slices:
```
global → acme-agency → client-a (client-a sees: global + acme + client-a)
→ client-b (client-b sees: global + acme + client-b)
→ personal → side-project (side-project sees: global + personal + side-project)
```
Each agent reads and writes within its scope chain. The MCP server enforces this automatically.
### Adaptive Cache
When one agent saves new facts, caches for affected agents are marked stale. Next time those agents start a session, their briefings regenerate automatically.
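The regeneration decision combines the stale flag with the `min_cache_age` throttle from the briefing config. A minimal sketch of that rule (illustrative only, not the library's implementation; the defaults match the configuration section):

```python
import time


def should_regenerate(is_stale: bool, last_generated: float,
                      adaptive: bool = True, min_cache_age: int = 1800) -> bool:
    """Regenerate only when adaptive mode is on, the cache is stale,
    and at least min_cache_age seconds have passed since the last run."""
    if not (adaptive and is_stale):
        return False
    return (time.time() - last_generated) >= min_cache_age


# A briefing marked stale an hour ago is due; one generated just now is throttled.
print(should_regenerate(True, time.time() - 3600))  # True
print(should_regenerate(True, time.time()))         # False
```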
---
## Configuration
For a single agent with defaults, no config file is needed. By default, agent-recall auto-discovers project files (`CLAUDE.md`, `README.md`, `.cursorrules`, `.windsurfrules`) in the current directory and includes them in the data sent to the LLM for briefing generation. This means new agents get useful briefings from day one, even with an empty database. Disable with `auto_discover: false` in the briefing config.
For multiple agents or custom settings, create `memory.yaml` in your project root or `~/.agent-recall/memory.yaml`:
```yaml
# Database location (default: ~/.agent-recall/frames.db)
db_path: ~/.agent-recall/frames.db
cache_dir: ~/.agent-recall/context_cache
# Scope hierarchy — which agents see which data
hierarchy:
acme-agency:
children: [client-a, client-b]
# Tier 0 = no context injection, Tier 2 = full
tiers:
0: [infra-bot]
2: [acme-agency, client-a, client-b]
# AI briefing settings
briefing:
backend: cli # "cli" = claude -p (free on subscription), "api" = Anthropic SDK (needs API key)
model: opus # LLM model for generating briefings
timeout: 300 # LLM timeout in seconds
adaptive: true # Auto-regenerate stale caches
min_cache_age: 1800 # Minimum 30 min between regenerations
# Per-agent overrides
agents:
coordinator:
model: opus
output_budget: 12000
dashboard:
model: haiku
template: system
```
<details>
<summary><strong>All per-agent options</strong></summary>
| Key | Type | Description |
|-----|------|-------------|
| `model` | string | LLM model for this agent's briefings |
| `timeout` | int | LLM timeout in seconds |
| `output_budget` | int | Target output size in characters |
| `template` | string | Builtin type name or inline text |
| `enabled` | bool | Disable briefing generation (default: true) |
| `context_files` | list | Extra files to include in context |
| `context_budget` | int | Max chars for context files (default: 8000) |
| `extra_context` | string | Static text appended to raw context |
| `adaptive` | bool | Per-agent adaptive cache override |
| `min_cache_age` | int | Min seconds between regenerations |
</details>
**Environment variables:**
| Variable | Description |
|----------|-------------|
| `AGENT_RECALL_SLUG` | Explicit agent identifier (defaults to current directory name) |
---
## LLM Backend
AI briefings need an LLM to generate summaries. Two built-in backends:
| Backend | Config | Install | Cost | Notes |
|---------|--------|---------|------|-------|
| `cli` (default) | `backend: cli` | Claude Code installed | Free on Max/Team subscription | Creates a session file per call |
| `api` | `backend: api` | `pip install 'agent-recall[api]'` | Pay per token | Clean, no side effects, needs `ANTHROPIC_API_KEY` |
Switch in `memory.yaml`:
```yaml
briefing:
backend: api # uses Anthropic SDK instead of claude CLI
model: opus
```
### Bring Your Own LLM
For other providers, pass a callable matching `(prompt, model, timeout) -> LLMResult`:
```python
from agent_recall import generate_briefing, LLMResult
def my_llm(prompt: str, model: str, timeout: int) -> LLMResult:
result = call_my_api(prompt, model)
return LLMResult(text=result.text, input_tokens=result.usage.input,
output_tokens=result.usage.output)
generate_briefing("my-agent", llm_caller=my_llm, force=True)
```
Full examples: [OpenAI](examples/llm_openai.py) | [Anthropic SDK](examples/llm_anthropic.py) | [Ollama](examples/llm_ollama.py)
By default, briefing generation uses the `claude` CLI (`claude -p --model <model>`). If you don't use Claude, pass your own `llm_caller`.
---
## CLI
```bash
agent-recall init # Create database
agent-recall status # Database stats
agent-recall set Alice person role Engineer # Set a scoped slot
agent-recall get Alice role # Get slot value
agent-recall entity Alice # Show entity details
agent-recall search "engineer" # Search entities
agent-recall history Alice role # Bitemporal slot history
agent-recall log Alice "Joined the team" # Add observation
agent-recall generate my-agent --force # Generate AI briefing
agent-recall refresh --force # Refresh all briefings
agent-recall rename-scope old-name new-name # Migrate data between scopes
```
## Python API
```python
from agent_recall import MemoryStore, ScopedView
with MemoryStore() as store:
alice = store.resolve_entity("Alice", "person")
acme = store.resolve_entity("Acme Corp", "client")
store.set_slot(alice, "role", "Engineer", scope="global")
store.set_slot(alice, "role", "Lead Engineer", scope="acme")
store.add_observation(alice, "Prefers async communication", scope="acme")
store.add_relation(alice, acme, "works_at")
# Scoped view — local overrides parent
view = ScopedView(store, ["global", "acme"])
entity = view.get_entity("Alice")
print(entity["slots"]["role"]) # "Lead Engineer" (acme overrides global)
```
See [`examples/quickstart.py`](examples/quickstart.py) for a runnable version.
---
## Born in Production
agent-recall was extracted from a live system managing real client projects at a digital agency — 30+ agents, 15+ clients, hundreds of scoped facts.
Why specific features exist:
- **Scope isolation** — two agents wrote conflicting data to the same entity
- **Adaptive caching** — briefings went stale during busy hours
- **AI summaries** — agents couldn't make sense of raw data dumps with hundreds of entries
- **Proactive saving instructions** — agents ignored memory tools until explicitly told to use them
- **Bitemporal slots** — needed to track what was true *when*, not just what's true now
## Development
```bash
git clone https://github.com/mnardit/agent-recall.git
cd agent-recall
pip install -e ".[dev]"
pytest
```
300 tests covering store, config, hierarchy, context assembly, AI briefings, vault generation, hooks, dedup, and MCP bridge.
See [CONTRIBUTING.md](CONTRIBUTING.md) for guidelines.
## License
MIT
| text/markdown | null | Max Nardit <max@nardit.com> | null | null | null | agents, ai, ai-agents, briefings, claude, claude-code, coding-agents, context, knowledge-graph, llm, mcp, memory, model-context-protocol, scope, sqlite | [
"Development Status :: 4 - Beta",
"Intended Audience :: Developers",
"License :: OSI Approved :: MIT License",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Topic :: Scientific/Engineering :: Artificial Intelligence",
"Topic :: Software Development :: Libraries",
"Topic :: Software Development :: Libraries :: Python Modules",
"Typing :: Typed"
] | [] | null | null | >=3.10 | [] | [] | [] | [
"click>=8.0",
"pyyaml>=6.0.2",
"anthropic>=0.39; extra == \"api\"",
"anthropic>=0.39; extra == \"dev\"",
"mcp<2.0,>=1.0; extra == \"dev\"",
"pytest-cov; extra == \"dev\"",
"pytest>=8.0; extra == \"dev\"",
"mcp<2.0,>=1.0; extra == \"mcp\""
] | [] | [] | [] | [
"Homepage, https://github.com/mnardit/agent-recall",
"Repository, https://github.com/mnardit/agent-recall",
"Documentation, https://github.com/mnardit/agent-recall#readme",
"Changelog, https://github.com/mnardit/agent-recall/blob/main/CHANGELOG.md",
"Bug Tracker, https://github.com/mnardit/agent-recall/issues"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-21T03:38:53.423107 | agent_recall-0.2.1.tar.gz | 70,096 | 5c/35/c44d8fed151cbaac146da8eb9a01527bb04ad0d4ad6c49afc2c40593bd84/agent_recall-0.2.1.tar.gz | source | sdist | null | false | 13148b5707dc7c80f2db969dcf2d39cb | d092e94cf96d629da0a4380e9243ebdf163c9088bc1d3546fcfedc3e7aab1d32 | 5c35c44d8fed151cbaac146da8eb9a01527bb04ad0d4ad6c49afc2c40593bd84 | MIT | [
"LICENSE"
] | 240 |
2.4 | em-agent-framework | 1.0.22 | Production-ready AI agent framework purpose built for Vertex AI | # em-agent-framework
A production-ready AI agent framework with support for Gemini and Anthropic (Claude) models via Vertex AI.
## Features
- **Multi-Model Support**: Seamless integration with Gemini and Anthropic models
- **Automatic Fallback**: Cascading fallback across multiple models on failure
- **Parallel Tool Execution**: Execute independent function calls concurrently
- **Recursive Agent Calls**: Agents can spawn sub-agents for parallel task decomposition
- **Context Injection**: Pass large data/secrets to tools without sending through LLM
- **Group Chat**: Multi-agent conversations with flexible routing strategies
- **Metrics & Observability**: Built-in tracking for performance and usage
## Installation
```bash
pip install em-agent-framework
```
### Requirements
- Python 3.8+
- Google Cloud Project with Vertex AI enabled
- Vertex AI API credentials configured
## Quick Start
```python
import asyncio
from typing import Annotated
from em_agent_framework.core.agent import Agent
from em_agent_framework.config.settings import ModelConfig, AgentConfig
# Define a tool
def get_weather(city: Annotated[str, "City name"]) -> str:
"""Get weather for a city."""
return f"Weather in {city}: Sunny, 72°F"
async def main():
# Configure models with fallback
model_configs = [
ModelConfig(name="gemini-2.0-flash-exp", provider="gemini"),
ModelConfig(name="claude-3-5-sonnet-v2@20241022", provider="anthropic"),
]
# Create agent
agent = Agent(
name="assistant",
system_instruction="You are a helpful assistant.",
tools=[get_weather],
model_configs=model_configs,
agent_config=AgentConfig(verbose=True)
)
# Send message
response = await agent.send_message("What's the weather in Tokyo?")
print(response)
asyncio.run(main())
```
## Key Features
### Model Fallback
Automatically falls back to alternative models if the primary model fails:
```python
model_configs = [
ModelConfig(name="gemini-2.0-flash-exp", provider="gemini"), # Try first
ModelConfig(name="claude-3-5-sonnet-v2@20241022", provider="anthropic"), # Fallback
]
```
### Parallel Tool Execution
Execute multiple independent function calls concurrently:
```python
agent_config = AgentConfig(
enable_parallel_tools=True,
max_parallel_tools=5
)
```
### Context Injection
Pass data to tools without sending it through the LLM (ideal for large datasets, API keys, or user info):
```python
def analyze_data(metric: Annotated[str, "Metric to analyze"], context: dict) -> str:
"""Analyze user data."""
df = context.get('dataframe') # Not sent to LLM
userid = context.get('userid') # Not sent to LLM
return f"User {userid}: Analysis complete"
agent = Agent(
name="analyst",
tools=[analyze_data],
context={
'dataframe': large_df, # Never sent to LLM
'userid': 'USER_12345', # Never sent to LLM
},
model_configs=model_configs,
agent_config=agent_config
)
```
Benefits:
- **Reduce token costs**: Large data stays local
- **Security**: Sensitive data (API keys, credentials) stays private
- **Authentication**: Pass user info for validation
### Dynamic Tool Loading
Load tools on-demand instead of all at once:
```python
# Define tools
def basic_search(query: Annotated[str, "Search query"]) -> str:
return f"Results for: {query}"
def advanced_analysis(data: Annotated[str, "Data to analyze"]) -> str:
return f"Analysis of: {data}"
# Create agent with complementary tools
agent = Agent(
name="assistant",
tools=[basic_search], # Loaded immediately
complementary_tools=[advanced_analysis], # Available via search_tool
model_configs=model_configs,
agent_config=agent_config
)
# Agent can dynamically load tools when needed
# Just mention the tool name and the agent will use search_tool to find and load it
response = await agent.send_message("Use advanced_analysis to analyze this data")
```
### Dynamic Instructions
Load specialized instructions on-demand:
```python
# Create instructions.json
{
"instructions": [
{
"id": "code_review",
"description": "Code review guidelines",
"instruction": "Review code for: correctness, efficiency, security, readability"
},
{
"id": "api_design",
"description": "API design principles",
"instruction": "Design RESTful APIs following best practices"
}
]
}
# Create agent with instructions file
agent = Agent(
name="assistant",
system_instruction="You are a code assistant.",
instructions_file="instructions.json",
model_configs=model_configs,
agent_config=agent_config
)
# Agent can load instructions dynamically
response = await agent.send_message("Load code_review instructions and review this code: ...")
```
### Recursive Agent Calls
Agents can spawn sub-agents to handle subtasks in parallel, enabling complex task decomposition:
```python
# Create agent with recursion support
agent_config = AgentConfig(
enable_recursive_agents=True, # Enable recursive_agent_call tool
max_recursion_depth=3, # Allow up to 3 levels of nesting
verbose=True
)
agent = Agent(
name="math_coordinator",
system_instruction=(
"You are a math assistant. When you receive complex problems with "
"multiple independent calculations, use recursive_agent_call to spawn "
"sub-agents for each piece. This enables parallel execution."
),
tools=[add, multiply, subtract, divide],
model_configs=model_configs,
agent_config=agent_config
)
# Agent automatically decomposes task into parallel subtasks
response = await agent.send_message(
"Calculate (125 + 75) * (20 - 8). Use recursive_agent_call to "
"handle (125 + 75) and (20 - 8) as separate subtasks."
)
```
Features:
- **Agent Hierarchy**: Each agent has a unique ID and tracks its parent_agent_id
- **Depth Control**: Configurable max_recursion_depth prevents infinite recursion
- **Metadata Tracking**: Sub-agent responses include agent_id, parent_agent_id, and recursion_depth
- **UI Bundling**: Metadata enables grouping of intermediate responses in the UI
- **Parallel Execution**: Sub-agents run independently for better performance
### Multi-Agent Group Chat
```python
from em_agent_framework.core.group_chat.manager import GroupChatManager
# Create specialized agents
researcher = Agent(name="researcher", system_instruction="Research expert", ...)
developer = Agent(name="developer", system_instruction="Python developer", ...)
# Create group chat with skill-based routing
manager = GroupChatManager(
agents=[researcher, developer],
strategy="skill_based", # Routes based on agent descriptions
max_total_turns=10
)
# Start conversation
await manager.initiate_conversation(
query="Research and build a JSON parser"
)
```
## Configuration
### Agent Configuration
```python
AgentConfig(
max_turns=100, # Max conversation turns
max_retries_per_model=3, # Retries before fallback
verbose=True, # Print debug logs
enable_parallel_tools=True, # Enable parallel execution
max_parallel_tools=5, # Max concurrent tools
enable_recursive_agents=False, # Enable recursive_agent_call tool
max_recursion_depth=2 # Max depth for recursive agent calls
)
```
### Model Configuration
```python
ModelConfig(
name="gemini-2.0-flash-exp",
provider="gemini", # "gemini" or "anthropic"
temperature=0.1,
max_output_tokens=8192,
timeout=10.0 # Request timeout (seconds)
)
```
## License
Apache-2.0 License - see LICENSE file for details
| text/markdown | null | Emergence AI <deepak@emergence.ai> | null | Emergence AI <deepak@emergence.ai> | null | ai, agent, llm, vertex-ai, gemini, claude, anthropic, multi-agent, tool-calling | [
"Development Status :: 4 - Beta",
"Intended Audience :: Developers",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.8",
"Programming Language :: Python :: 3.9",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: 3.13",
"Topic :: Software Development :: Libraries :: Python Modules",
"Topic :: Scientific/Engineering :: Artificial Intelligence",
"Typing :: Typed"
] | [] | null | null | >=3.8 | [] | [] | [] | [
"google-genai<2.0.0,>=1.0.0",
"anthropic[vertex]>=0.40.0",
"python-dotenv>=1.0.0",
"pytest>=7.4.0; extra == \"dev\"",
"pytest-asyncio>=0.21.0; extra == \"dev\"",
"black>=23.0.0; extra == \"dev\"",
"isort>=5.12.0; extra == \"dev\"",
"mypy>=1.0.0; extra == \"dev\"",
"ruff>=0.1.0; extra == \"dev\"",
"pandas>=2.0.0; extra == \"examples\"",
"sphinx>=5.0.0; extra == \"docs\"",
"sphinx-rtd-theme>=1.0.0; extra == \"docs\"",
"sphinx-autodoc-typehints>=1.0.0; extra == \"docs\"",
"em-agent-framework[dev,docs,examples]; extra == \"all\""
] | [] | [] | [] | [
"Homepage, https://github.com/emergence-ai/em-agent-framework",
"Documentation, https://em-agent-framework.readthedocs.io",
"Repository, https://github.com/emergence-ai/em-agent-framework",
"Issues, https://github.com/emergence-ai/em-agent-framework/issues",
"Changelog, https://github.com/emergence-ai/em-agent-framework/blob/main/CHANGELOG.md"
] | twine/6.2.0 CPython/3.13.5 | 2026-02-21T03:38:46.727301 | em_agent_framework-1.0.22.tar.gz | 82,066 | eb/79/5411933a14a6701a1d219a692bf5c81e1f9ece67777ca4618d7ea636226c/em_agent_framework-1.0.22.tar.gz | source | sdist | null | false | f7e713c846bdefe087b0972df011df44 | 0c0c55d5a500adb11c4a0a85147cf9c503972ada3a11ecd2e55ffa7b958e458a | eb795411933a14a6701a1d219a692bf5c81e1f9ece67777ca4618d7ea636226c | Apache-2.0 | [
"LICENSE"
] | 218 |
2.4 | chemparseplot | 1.1.0 | Parsers and plotting tools for computational chemistry |
# Table of Contents
- [About](#org1aaacbd)
- [Ecosystem Overview](#org86a8ea7)
- [Features](#org2d23f41)
- [Supported Engines [WIP]](#orgd14b359)
- [Rationale](#orgd89d807)
- [License](#orgef56819)
<a id="org1aaacbd"></a>
# About

[](https://github.com/pypa/hatch)
A **pure-python**<sup><a id="fnr.1" class="footref" href="#fn.1" role="doc-backlink">1</a></sup> project to provide unit-aware uniform visualizations
of common computational chemistry tasks. Essentially this means we provide:
- Parsers for various computational chemistry software outputs
- Plotting scripts for specific workflows
Computational tasks (surface fitting, structure analysis, interpolation) are
handled by [`rgpycrumbs`](https://github.com/HaoZeke/rgpycrumbs), which is a required dependency. `chemparseplot` parses
output files, delegates heavy computation to `rgpycrumbs`, and produces
publication-quality plots.
This is a spin-off from `wailord` ([here](https://wailord.xyz)), which is meant to handle aggregated
runs in a specific workflow; here the aim is to do no input handling and only
very pragmatic output parsing, with the goal of generating uniform plots.
<a id="org86a8ea7"></a>
## Ecosystem Overview
`chemparseplot` is part of the `rgpycrumbs` suite of interlinked libraries.

<a id="org2d23f41"></a>
## Features
- [Scientific color maps](https://www.fabiocrameri.ch/colourmaps/) for the plots
- Camera ready
- Unit preserving
- Via `pint`
<a id="orgd14b359"></a>
### Supported Engines [WIP]
- ORCA (**5.x**)
- Scanning energies over a degree of freedom (`OPT` scans)
- Nudged elastic band (`NEB`) visualizations (over the "linearized" reaction
coordinate)
<a id="orgd89d807"></a>
## Rationale
`wailord` is for production runs; however, there is often a need to collect
"spot" calculation visualizations, which should nevertheless be uniform, i.e.
either Bohr/Hartree or Angstrom/eV or whatever.
Also I couldn't find (m)any scripts using the scientific colorschemes.
<a id="orgef56819"></a>
# License
MIT. However, this is an academic resource, so **please cite** as much as possible
via:
- The Zenodo DOI for general use.
- The `wailord` paper for ORCA usage
# Footnotes
<sup><a id="fn.1" href="#fnr.1">1</a></sup> To distinguish it from my other thin-python wrapper projects
| text/markdown | null | Rohit Goswami <rog32@hi.is> | null | null | MIT | compchem, parser, plot | [
"Development Status :: 4 - Beta",
"Programming Language :: Python",
"Programming Language :: Python :: 3 :: Only",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: Implementation :: CPython"
] | [] | null | null | >=3.10 | [] | [] | [] | [
"numpy>=1.26.2",
"pint>=0.22",
"rgpycrumbs>=1.1.0",
"mdit-py-plugins>=0.3.4; extra == \"doc\"",
"myst-nb>=1; extra == \"doc\"",
"myst-parser>=2; extra == \"doc\"",
"sphinx-autodoc2>=0.5; extra == \"doc\"",
"sphinx-copybutton>=0.5.2; extra == \"doc\"",
"sphinx-library>=1.1.2; extra == \"doc\"",
"sphinx-sitemap>=2.5.1; extra == \"doc\"",
"sphinx-togglebutton>=0.3.2; extra == \"doc\"",
"sphinx>=7.2.6; extra == \"doc\"",
"sphinxcontrib-apidoc>=0.4; extra == \"doc\"",
"ruff>=0.1.6; extra == \"lint\"",
"cmcrameri>=1.7; extra == \"plot\"",
"matplotlib>=3.8.2; extra == \"plot\"",
"pytest-cov>=4.1.0; extra == \"test\"",
"pytest>=7.4.3; extra == \"test\""
] | [] | [] | [] | [
"Documentation, https://github.com/HaoZeke/chemparseplot#readme",
"Issues, https://github.com/HaoZeke/chemparseplot/issues",
"Source, https://github.com/HaoZeke/chemparseplot"
] | uv/0.8.4 | 2026-02-21T03:38:41.004579 | chemparseplot-1.1.0.tar.gz | 29,121 | 1c/3b/1f101dd4f06c2a439b451bad28cf9814c253cb118d776f639aa1800b2f8d/chemparseplot-1.1.0.tar.gz | source | sdist | null | false | 30337cf769dded8319c9cd6f774831ba | 44c43ae55e04a91d0e48b453fba0fb656e05fe62f181aab7d1c6a7845e2e3423 | 1c3b1f101dd4f06c2a439b451bad28cf9814c253cb118d776f639aa1800b2f8d | null | [
"LICENSE"
] | 226 |
2.4 | tollbooth-dpyc | 0.1.10 | Don't Pester Your Customer — Bitcoin Lightning micropayments for MCP servers | # Tollbooth DPYC
[](https://opensource.org/licenses/Apache-2.0)
[](https://pypi.org/project/tollbooth-dpyc/)
[](https://www.python.org/downloads/)
<p align="center">
<img src="https://raw.githubusercontent.com/lonniev/tollbooth-dpyc/main/docs/tollbooth-hero.png" alt="Milo drives the Lightning Turnpike — Don't Pester Your Customer" width="800">
</p>
**Don't Pester Your Customer** — Bitcoin Lightning micropayments for MCP servers.
> *The metaphors in this project are drawn with admiration from* The Phantom Tollbooth *by Norton Juster, illustrated by Jules Feiffer (1961). Milo, Tock, the Tollbooth, Dictionopolis, and Digitopolis are creations of Mr. Juster's extraordinary imagination. We just built the payment infrastructure.*
---
## The Problem
Thousands of developers are building [MCP](https://modelcontextprotocol.io/) servers — services that let AI agents like Claude interact with the world. Knowledge graphs, financial data, code repositories, medical records. Each one is a city on the map. But the turnpike between them? Wide open. No toll collectors. No sustainable economics. Just a growing network of roads that nobody's figured out how to fund.
Every MCP operator faces the same question: *how do I keep the lights on?*
Traditional API keys with monthly billing? You're running a SaaS company now. The L402 protocol — Lightning-native pay-per-request? Every single API call requires a payment negotiation. Milo's toy car stops at every intersection to fumble for exact change.
## The Solution
Tollbooth DPYC takes a different approach — one that respects everyone's time:
**Milo drives up to the tollbooth once, buys a roll of tokens with a single Lightning invoice, and drives.** No stops. No negotiations. No per-request friction. The tokens quietly decrement in the background. When the roll runs low, he buys another. The turnpike stays fast.
Prepaid credits over Bitcoin's Lightning Network, gated at the tool level, settled instantly, with no subscription management and no third-party payment processor taking a cut.
## Install
```bash
pip install tollbooth-dpyc
```
## What's in the Box
| Module | Purpose |
|--------|---------|
| `TollboothConfig` | Plain frozen dataclass — no pydantic, no env-var reading. Your host constructs it. |
| `UserLedger` | Per-user credit balance with debit/credit/rollback, daily usage logs, JSON serialization. |
| `BTCPayClient` | Async HTTP client for BTCPay Server's Greenfield API — invoices, payouts, health checks. |
| `VaultBackend` | Protocol for pluggable persistence — implement `store_ledger`, `fetch_ledger`, `snapshot_ledger`. |
| `LedgerCache` | In-memory LRU cache with write-behind flush. The hot path for all credit operations. |
| `ToolTier` | Cost tiers for tool-call metering (FREE=0, READ=1, WRITE=5, HEAVY=10 sats per call). |
| `tools.credits` | Ready-made tool implementations: `purchase_credits`, `check_payment`, `check_balance`, and more. |
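The tier costs in the table can be pictured as a simple `IntEnum`. This is a sketch of the documented sats-per-call values, not necessarily the library's exact definition:

```python
from enum import IntEnum


class ToolTier(IntEnum):
    """Sats debited per tool call, per the documented tiers."""
    FREE = 0
    READ = 1
    WRITE = 5
    HEAVY = 10


# One WRITE-tier call debits 5 sats from a prepaid balance:
balance = 100
balance -= ToolTier.WRITE
print(balance)  # 95
```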
## Quick Start
```python
from tollbooth import TollboothConfig, UserLedger, BTCPayClient, LedgerCache
# Configure — your host reads env vars, Tollbooth gets a plain dataclass
config = TollboothConfig(
btcpay_host="https://your-btcpay.example.com",
btcpay_store_id="your-store-id",
btcpay_api_key="your-api-key",
tollbooth_royalty_address="tollbooth@btcpay.digitalthread.link",
)
# Create a BTCPay client
async with BTCPayClient(config.btcpay_host, config.btcpay_api_key, config.btcpay_store_id) as client:
# Create an invoice for 1000 sats
invoice = await client.create_invoice(1000, metadata={"user": "milo"})
print(f"Pay here: {invoice['checkoutLink']}")
```
## Configuration
`TollboothConfig` is a plain frozen dataclass. Your host application constructs it from its own settings (env vars, pydantic-settings, YAML — whatever you prefer). Tollbooth never reads environment variables directly.
| Field | Type | Default | Purpose |
|-------|------|---------|---------|
| `btcpay_host` | `str \| None` | `None` | BTCPay Server URL for creating invoices and checking payments |
| `btcpay_store_id` | `str \| None` | `None` | BTCPay store ID — each operator runs their own store |
| `btcpay_api_key` | `str \| None` | `None` | BTCPay API key — must have invoice + payout permissions |
| `btcpay_tier_config` | `str \| None` | `None` | JSON string mapping tier names to credit multipliers |
| `btcpay_user_tiers` | `str \| None` | `None` | JSON string mapping user IDs to tier names |
| `seed_balance_sats` | `int` | `0` | Free starter balance granted to new users (0 to disable) |
| `tollbooth_royalty_address` | `str \| None` | `None` | Lightning Address for the 2% royalty payout to the Tollbooth originator |
| `tollbooth_royalty_percent` | `float` | `0.02` | Royalty percentage (0.02 = 2%) |
| `tollbooth_royalty_min_sats` | `int` | `10` | Minimum royalty payout in sats (below this, no payout fires) |
| `authority_public_key` | `str \| None` | `None` | Authority's Ed25519 public key — bare base64 or full PEM. Required for `purchase_credits` (every purchase needs a valid Authority JWT). |
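The two JSON-string fields follow the mappings described above. A hypothetical example of how a host might build them (tier names, multipliers, and user IDs here are illustrative):

```python
import json

# Illustrative mappings, serialized as the config fields expect:
tier_config = json.dumps({"standard": 1.0, "gold": 1.25})  # tier name -> credit multiplier
user_tiers = json.dumps({"milo": "gold"})                  # user ID -> tier name

# A gold-tier user's purchase would presumably be scaled by their multiplier:
multiplier = json.loads(tier_config)[json.loads(user_tiers)["milo"]]
print(multiplier)  # 1.25
```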
## Tool Functions
The `tollbooth.tools.credits` module provides ready-made implementations that your MCP server wraps as tools. Each function takes infrastructure objects (BTCPayClient, LedgerCache) as parameters — you wire them up, Tollbooth handles the logic.
| Function | Purpose |
|----------|---------|
| `purchase_credits_tool` | Creates a BTCPay invoice, records it as pending, returns a checkout link. Validates Authority certificate when `authority_public_key` is configured. |
| `verify_certificate` | Verifies an Authority-signed Ed25519 JWT. Checks signature, expiry, and anti-replay (JTI). |
| `check_payment_tool` | Polls an invoice, credits the balance on settlement, fires the royalty payout. Idempotent. |
| `check_balance_tool` | Returns current balance, usage summary, tier info, and invoice history. Read-only. |
| `restore_credits_tool` | Recovers credits from a paid invoice lost to cache/vault issues. Checks vault first, falls back to BTCPay. |
| `btcpay_status_tool` | Diagnostics: BTCPay connectivity, store name, API key permissions, royalty config. |
| `compute_low_balance_warning` | Pure function — returns a warning dict if balance is below threshold, `None` if healthy. |
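The last entry in the table is a pure function, which makes the pattern easy to illustrate. Below is a standalone sketch of that shape: no I/O, balance in, warning dict (or `None`) out. The field names and default threshold are assumptions, not Tollbooth's actual return contract:

```python
def low_balance_warning(balance_sats: int, threshold_sats: int = 100):
    """Sketch of the pure-function pattern: return a warning dict if the
    balance is below the threshold, None if healthy. Field names here are
    illustrative only."""
    if balance_sats >= threshold_sats:
        return None  # healthy balance, no warning
    return {
        "warning": "low_balance",
        "balance_sats": balance_sats,
        "threshold_sats": threshold_sats,
    }

print(low_balance_warning(42))   # warning dict
print(low_balance_warning(500))  # None
```

Because the function has no side effects, it is trivial to unit-test and to call from any tool that already knows the balance.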
## The Three-Party Settlement
Here's where the story takes a turn that even Milo wouldn't expect.
We didn't build Tollbooth to sell. We built it to **give away** — like the Massachusetts Turnpike Authority. The Authority doesn't operate every toll plaza. Independent operators run the booths. What the Authority does is simpler: it collects a small percentage of every fare that flows through infrastructure it designed.
When a user purchases credits, the settlement is three-party:
1. **Milo** pays the operator's Lightning invoice
2. **The operator's** BTCPay Server credits Milo's balance
3. **Automatically, in the background** — BTCPay creates a small payout to the Tollbooth originator's Lightning Address
A royalty. Two percent of the fare. The operator sees it transparently in their BTCPay dashboard. Milo never knows it happened.
The enforcement is both technical and social. At startup, Tollbooth inspects the operator's BTCPay API key permissions. If the key lacks payout capability, **Tollbooth refuses to start**. Not a warning. A hard stop. The social contract, made executable.
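The startup hard stop described above can be sketched as a simple guard. The permission string below is a placeholder, not BTCPay Server's actual permission identifier:

```python
class MissingPayoutPermission(RuntimeError):
    """Raised when the operator's API key cannot create payouts."""

def assert_payout_capable(key_permissions: set) -> None:
    """Startup guard sketch: refuse to run if the API key lacks payout
    capability. 'payouts.create' is an illustrative permission name."""
    if "payouts.create" not in key_permissions:
        raise MissingPayoutPermission(
            "API key lacks payout permission; refusing to start."
        )
```

Calling this once during initialization, before serving any tool requests, turns the social contract into a hard startup failure rather than a silent degradation.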
## The Economics
**For Milo (the user):** Nothing changes. Buy credits, use tools, drive the turnpike.
**For the operator:** A free, production-tested monetization framework. No license fee. The 2% royalty is a rounding error compared to the revenue you couldn't collect before. The tollbooth pays for itself on the first transaction.
**For the ecosystem:** Revenue scales with adoption, not effort. Every new MCP server that installs Tollbooth becomes a node in the Lightning economy. The infrastructure hums along — collecting its modest fare, maintaining the roads, and making sure the turnpike stays open for everyone.
*It's the transition from mining fees to transaction fees. You stop competing on compute and start collecting on flow.*
## Reference Integration
[thebrain-mcp](https://github.com/lonniev/thebrain-mcp) — the first MCP server powered by Tollbooth. A FastMCP service that gives AI agents access to TheBrain knowledge graphs, with all 40+ tools metered via Tollbooth credits.
## Architecture
Tollbooth is a three-party ecosystem:
| Repo | Role |
|------|------|
| [tollbooth-authority](https://github.com/lonniev/tollbooth-authority) | The institution — tax collection, EdDSA signing, purchase order certification |
| **tollbooth-dpyc** (this package) | The booth — operator-side credit ledger, BTCPay client, tool gating |
| [thebrain-mcp](https://github.com/lonniev/thebrain-mcp) | The first city — reference MCP server powered by Tollbooth |
See the [Three-Party Protocol diagram](https://github.com/lonniev/tollbooth-authority/blob/main/docs/diagrams/tollbooth-three-party-protocol.svg) for the full architecture.
```
tollbooth-authority tollbooth-dpyc (this package) your-mcp-server (consumer)
================================ ================================ ================================
EdDSA signing + tax ledger TollboothConfig Settings ──constructs──> TollboothConfig
certify_purchase → JWT UserLedger implements VaultBackend
Authority BTCPay BTCPayClient TOOL_COSTS maps tools to ToolTier
VaultBackend (Protocol)
LedgerCache + credit tools
```
Dependencies flow one way: `your-mcp-server --> tollbooth-dpyc`. The Authority is a network peer, not a code dependency. The only runtime dependency is `httpx`.
## DPYC Identity (Nostr npub)
Every participant in the Tollbooth ecosystem — Authorities, Operators, and Users — is identified by a [Nostr](https://nostr.com/) keypair. The `npub` (public key) is your identity on the DPYC Honor Chain. The `nsec` (private key) stays with you — never shared, never sent to a service.
Generate a keypair:
```bash
pip install nostr-sdk
python scripts/generate_nostr_keypair.py
```
Then set the appropriate environment variable in your `.env`:
| Role | Variable | Purpose |
|------|----------|---------|
| Authority | `DPYC_AUTHORITY_NPUB` | This Authority's identity on the Honor Chain |
| Authority | `DPYC_UPSTREAM_AUTHORITY_NPUB` | The Authority above in the chain (empty for Prime) |
| Operator | `DPYC_OPERATOR_NPUB` | This Operator's identity on the Honor Chain |
| Operator | `DPYC_AUTHORITY_NPUB` | The Authority this Operator is registered with |
Users provide their npub at session time via `activate_dpyc()` — no env var needed.
## Development
```bash
git clone https://github.com/lonniev/tollbooth-dpyc.git
cd tollbooth-dpyc
python -m venv venv
source venv/bin/activate
pip install -e ".[dev]"
pytest tests/ -q
```
## Further Reading
[The Phantom Tollbooth on the Lightning Turnpike](https://stablecoin.myshopify.com/blogs/our-value/the-phantom-tollbooth-on-the-lightning-turnpike) — the full story of how we're monetizing the monetization of AI APIs, and then fading to the background.
## License
Apache 2.0 — see [LICENSE](LICENSE).
---
*Because in the end, the tollbooth was never the destination. It was always just the beginning of the journey.*
| text/markdown | Lonnie VanZandt | null | null | null | null | null | [
"Development Status :: 2 - Pre-Alpha",
"Intended Audience :: Developers",
"License :: OSI Approved :: Apache Software License",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: 3.13",
"Topic :: Software Development :: Libraries"
] | [] | null | null | >=3.10 | [] | [] | [] | [
"httpx>=0.27.0",
"pyjwt[crypto]>=2.8.0",
"pytest-asyncio>=0.23.0; extra == \"dev\"",
"pytest>=8.0.0; extra == \"dev\""
] | [] | [] | [] | [
"Homepage, https://github.com/lonniev/tollbooth-dpyc",
"Repository, https://github.com/lonniev/tollbooth-dpyc",
"Issues, https://github.com/lonniev/tollbooth-dpyc/issues"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-21T03:37:29.818619 | tollbooth_dpyc-0.1.10.tar.gz | 1,859,244 | 3f/74/e5193ece7757d8360a8f529eeb08fd00107b603fb2c878fce88cad1a04d3/tollbooth_dpyc-0.1.10.tar.gz | source | sdist | null | false | afa65d4f3c2f689920a1b6600e0fd246 | 9f013c7c09709220a2e3d47f04585da72b5506759ed007d7b89ef4e30f2d4f0b | 3f74e5193ece7757d8360a8f529eeb08fd00107b603fb2c878fce88cad1a04d3 | Apache-2.0 | [
"LICENSE",
"NOTICE"
] | 360 |
2.4 | novyx | 2.6.0 | Persistent memory + rollback + audit trail for AI agents. Unified API for memory, security, and time-travel debugging. | # Novyx SDK
**Persistent memory + rollback + audit trail for AI agents.** Give your AI persistent memory, semantic search, Magic Rollback to undo mistakes, and cryptographic audit trails. Works with LangChain, CrewAI, or any Python agent framework.
## Installation
```bash
pip install novyx
```
### CLI Installation (Optional)
For the command-line interface:
```bash
pip install novyx[cli]
```
This installs the `novyx` command for managing memory, rollback, audit, and traces from the terminal.
## Quick Start
```python
from novyx import Novyx
# Initialize with your API key
nx = Novyx(api_key="nram_your_key_here")
# Store a memory
nx.remember("User prefers dark mode and async communication", tags=["preferences"])
# Search memories semantically
memories = nx.recall("communication style", limit=5)
for mem in memories:
print(f"{mem['observation']} (relevance: {mem['score']:.2f})")
# Check audit trail
audit = nx.audit(limit=10)
print(f"Last 10 operations: {[e['operation'] for e in audit]}")
# Magic Rollback - undo to any point in time (Pro+)
nx.rollback("2 hours ago")
```
## Complete Lifecycle Example
```python
from novyx import Novyx
nx = Novyx(api_key="nram_your_key_here")
# 1. Store memories
nx.remember("User mentioned budget is $50K for Q1", tags=["sales", "budget"], importance=8)
nx.remember("User prefers email over phone calls", tags=["preferences"], importance=7)
nx.remember("User is building a real estate AI assistant", tags=["project"])
# 2. Search with semantic recall
memories = nx.recall("what is the user's budget?", limit=3)
print(f"Found: {memories[0]['observation']}")
# 3. Check audit trail
audit = nx.audit(limit=5, operation="CREATE")
print(f"Created {len(audit)} memories")
# 4. Rollback if needed (Pro+ only)
# Preview what will change
preview = nx.rollback_preview("1 hour ago")
if preview["safe_rollback"]:
result = nx.rollback("1 hour ago")
print(f"Rolled back {result['operations_undone']} operations")
# 5. Trace agent actions (Pro+ only)
trace = nx.trace_create("sales-agent", session_id="session-123")
nx.trace_step(trace["trace_id"], "thought", "Analyzing budget", content="User has $50K")
nx.trace_step(trace["trace_id"], "action", "draft_proposal", attributes={"budget": 50000})
result = nx.trace_complete(trace["trace_id"])
print(f"Trace completed with signature: {result['signature'][:16]}...")
```
## Features
### 🧠 Persistent Memory
Store observations about users, contexts, and decisions. Your AI remembers everything across sessions.
```python
# Store with metadata
nx.remember(
"Customer mentioned budget is $50K for Q1",
tags=["sales", "budget"],
importance=8,
metadata={"customer_id": "12345", "quarter": "Q1"}
)
# List all memories
all_memories = nx.memories(limit=100, min_importance=7)
print(f"Found {len(all_memories)} high-importance memories")
# Get specific memory
memory = nx.memory("urn:uuid:abc123...")
# Delete memory
nx.forget("urn:uuid:abc123...")
```
### 🔍 Semantic Search
Find relevant memories using natural language queries. No exact keyword matching required. Send plain text — Novyx handles embedding generation server-side.
```python
memories = nx.recall("what is the user working on?", limit=3)
# Returns: "User is building a real estate AI assistant"
# Filter by tags
memories = nx.recall("budget constraints", tags=["sales"], limit=5)
```
### ⏮️ Magic Rollback (Pro+)
Made a mistake? Roll back your AI's memory to any point in time.
```python
# Preview rollback first
preview = nx.rollback_preview("2 hours ago")
print(f"Will restore {preview['artifacts_restored']} artifacts")
print(f"Warnings: {preview['warnings']}")
# Execute rollback
result = nx.rollback("2 hours ago")
print(f"Rolled back to {result['rolled_back_to']}")
# View rollback history
history = nx.rollback_history(limit=10)
for rb in history:
print(f"Rollback to {rb['target_timestamp']}")
```
### 📜 Cryptographic Audit Trail
Every operation is logged with SHA-256 hashing for tamper-proof history.
```python
# Get recent audit entries
audit = nx.audit(limit=50, operation="CREATE")
# Filter by time range
from datetime import datetime, timedelta
since = (datetime.now() - timedelta(days=7)).isoformat()
audit = nx.audit(since=since, operation="ROLLBACK")
# Export audit log (Pro+)
csv_data = nx.audit_export(format="csv")
with open("audit.csv", "wb") as f:
f.write(csv_data)
# Verify integrity
verification = nx.audit_verify()
if verification["valid"]:
print("✅ Audit trail integrity verified!")
```
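To illustrate how a SHA-256-backed tamper-evident log works in general, here is a generic hash-chain sketch. This is not Novyx's actual storage format; it only shows why editing any past entry invalidates everything after it:

```python
import hashlib
import json

def chain_entry(prev_hash: str, operation: str, payload: dict) -> dict:
    """Create an audit entry whose hash covers the previous entry's hash,
    linking the log into a chain."""
    body = json.dumps({"prev": prev_hash, "op": operation, "data": payload},
                      sort_keys=True)
    return {"op": operation, "data": payload, "prev": prev_hash,
            "hash": hashlib.sha256(body.encode()).hexdigest()}

def verify_chain(entries: list) -> bool:
    """Recompute every hash from a zero genesis value; any tampering breaks
    the chain from that point on."""
    prev = "0" * 64
    for e in entries:
        body = json.dumps({"prev": prev, "op": e["op"], "data": e["data"]},
                          sort_keys=True)
        if e["prev"] != prev or e["hash"] != hashlib.sha256(body.encode()).hexdigest():
            return False
        prev = e["hash"]
    return True

log = [chain_entry("0" * 64, "CREATE", {"id": 1})]
log.append(chain_entry(log[-1]["hash"], "DELETE", {"id": 1}))
print(verify_chain(log))  # True
```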
### 🔐 Trace Audit (Pro+)
Track agent actions with RSA signatures and real-time policy enforcement.
```python
# Create trace session
trace = nx.trace_create("my-agent", session_id="session-123")
# Add steps
nx.trace_step(trace["trace_id"], "thought", "Planning email")
nx.trace_step(trace["trace_id"], "action", "send_email", attributes={"to": "user@example.com"})
nx.trace_step(trace["trace_id"], "observation", "Email sent successfully")
# Finalize with RSA signature
result = nx.trace_complete(trace["trace_id"])
# Verify integrity later
verification = nx.trace_verify(trace["trace_id"])
print(f"Verified {verification['steps_verified']} steps")
```
### 📊 Usage & Plans
Monitor your usage and explore pricing options.
```python
# Check current usage
usage = nx.usage()
print(f"Tier: {usage['tier']}")
print(f"API calls: {usage['api_calls']['current']}/{usage['api_calls']['limit']}")
print(f"Memories: {usage['memories']['current']}")
# View available plans
plans = nx.plans()
for plan in plans:
print(f"{plan['name']}: {plan['price_display']}")
print(f" Memories: {plan['memory_limit'] or 'Unlimited'}")
print(f" Features: {', '.join([k for k, v in plan['features'].items() if v])}")
```
## API Reference
### Memory Methods
| Method | Description |
|--------|-------------|
| `remember(observation, tags=[], importance=5, metadata=None)` | Store a memory |
| `recall(query, limit=5, tags=None, min_score=0.0)` | Semantic search |
| `memories(limit=100, offset=0, tags=None, min_importance=None)` | List all memories |
| `memory(memory_id)` | Get specific memory by ID |
| `forget(memory_id)` | Delete memory |
| `stats()` | Get memory statistics |
### Rollback Methods (Pro+)
| Method | Description |
|--------|-------------|
| `rollback(target, dry_run=False, preserve_evidence=True)` | Rollback to timestamp or relative time |
| `rollback_preview(target)` | Preview rollback changes |
| `rollback_history(limit=50)` | List past rollbacks |
### Audit Methods
| Method | Description |
|--------|-------------|
| `audit(limit=50, since=None, until=None, operation=None)` | Get audit entries |
| `audit_export(format="csv")` | Export audit log (Pro+) |
| `audit_verify()` | Verify audit integrity |
### Trace Methods (Pro+)
| Method | Description |
|--------|-------------|
| `trace_create(agent_id, session_id=None, metadata=None)` | Create trace session |
| `trace_step(trace_id, step_type, name, content=None, attributes=None)` | Add trace step |
| `trace_complete(trace_id)` | Finalize trace with RSA signature |
| `trace_verify(trace_id)` | Verify trace integrity |
### Usage & Plans
| Method | Description |
|--------|-------------|
| `usage()` | Get current usage vs limits |
| `plans()` | List available pricing plans |
## Error Handling
```python
from novyx import (
Novyx,
NovyxRateLimitError,
NovyxForbiddenError,
NovyxAuthError,
NovyxNotFoundError
)
nx = Novyx(api_key="nram_...")
try:
nx.remember("Test memory")
except NovyxRateLimitError as e:
# Memory limit exceeded
print(f"Limit exceeded: {e.current}/{e.limit}")
print(f"Upgrade at: {e.upgrade_url}")
except NovyxForbiddenError as e:
# Feature not available on current plan
print(f"Requires {e.tier_required} tier")
print(f"Upgrade at: {e.upgrade_url}")
except NovyxAuthError as e:
# Invalid API key
print(f"Auth error: {e}")
except NovyxNotFoundError as e:
# Resource not found
print(f"Not found: {e}")
```
## Pricing
| Tier | Price | Memories | API Calls | Rollbacks | Audit | Features |
|------|-------|----------|-----------|-----------|-------|----------|
| **Free** | $0 | 5,000 | 5,000/mo | 10/month | 7 days | Basic memory |
| **Starter** | $12/mo | 25,000 | 25,000/mo | 30/month | 14 days | + Basic rollback |
| **Pro** | $39/mo | Unlimited | 100,000/mo | Unlimited | 30 days | + Rollback, trace audit, anomaly alerts |
| **Enterprise** | $199/mo | Unlimited | Unlimited | Unlimited | 90 days | + Priority support, SSO-ready |
## Command-Line Interface (CLI)
Novyx includes a powerful CLI for managing memory, rollback, audit, and traces from the terminal.
### Setup
```bash
# Install with CLI support
pip install novyx[cli]
# Configure API key
novyx config set api_key nram_your_key_here
# Or use environment variable
export NOVYX_API_KEY=nram_your_key_here
```
### Quick Start
```bash
# Check API health and usage
novyx status
# List memories
novyx memories list --limit 20
# Semantic search
novyx memories search "user preferences"
# Get memory count
novyx memories count
# Delete memory
novyx memories delete <uuid>
```
### Memory Management
```bash
# List all memories
novyx memories list --limit 100 --format table
# Export as JSON
novyx memories list --format json > memories.json
# Filter by tags
novyx memories list --tags "important,user-data"
# Semantic search with relevance scores
novyx memories search "what are the user's communication preferences?" --limit 5
# Delete with confirmation
novyx memories delete urn:uuid:abc123...
```
### Rollback (Pro+)
```bash
# Preview rollback (ALWAYS do this first)
novyx rollback preview "2 hours ago"
# Execute rollback (shows preview + confirmation)
novyx rollback "2 hours ago"
# Skip confirmation (use with caution!)
novyx rollback "1 hour ago" --yes
# View rollback history
novyx rollback history --limit 10
```
### Audit Trail
```bash
# List recent audit entries
novyx audit list --limit 50
# Filter by operation
novyx audit list --operation CREATE --limit 20
novyx audit list --operation ROLLBACK
# Export audit log (Pro+)
novyx audit export --format csv > audit.csv
novyx audit export --format json > audit.json
# Verify audit integrity
novyx audit verify
```
### Trace Audit (Pro+)
```bash
# Verify trace integrity
novyx traces verify trace-abc123
# `list` and `show` subcommands coming in v2.1
novyx traces list
novyx traces show trace-abc123
```
### Account & Status
```bash
# Show API health, plan, and usage
novyx status
# Configure API key
novyx config set api_key nram_your_key_here
# Show configuration (API key masked)
novyx config show
# Reset configuration
novyx config reset
```
### CLI Features
- **Rich Output**: Beautiful tables and colored output with [Rich](https://github.com/Textualize/rich)
- **Progress Indicators**: Spinners for long-running operations
- **Safety First**: Rollback always shows preview + requires confirmation
- **Multiple Formats**: JSON or table output for easy parsing
- **API Key Resolution**: Flag > Environment > Config file (priority order)
- **Error Handling**: Clear, human-readable error messages
### API Key Priority
The CLI resolves API keys in this order:
1. `--api-key` flag (highest priority)
2. `NOVYX_API_KEY` environment variable
3. `~/.novyx/config.json` (lowest priority)
```bash
# Using flag
novyx --api-key nram_xxx status
# Using environment variable
export NOVYX_API_KEY=nram_xxx
novyx status
# Using config file
novyx config set api_key nram_xxx
novyx status
```
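The flag > environment > config-file priority above is straightforward to implement. A minimal sketch, assuming the config file stores the key under an `"api_key"` field (the path mirrors the README; the field name is an assumption):

```python
import json
import os
from pathlib import Path

def resolve_api_key(flag_value=None):
    """Resolve an API key with flag > env > config-file priority."""
    if flag_value:                          # 1. --api-key flag (highest)
        return flag_value
    env = os.environ.get("NOVYX_API_KEY")
    if env:                                 # 2. environment variable
        return env
    cfg = Path.home() / ".novyx" / "config.json"
    if cfg.exists():                        # 3. config file (lowest)
        return json.loads(cfg.read_text()).get("api_key")
    return None
```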
### Example Workflow
```bash
# 1. Check system status
novyx status
# 2. List recent memories
novyx memories list --limit 20
# 3. Search for specific context
novyx memories search "budget constraints" --min-score 0.7
# 4. Made a mistake? Preview rollback
novyx rollback preview "1 hour ago"
# 5. Execute rollback if safe
novyx rollback "1 hour ago"
# 6. Verify audit integrity
novyx audit verify
# 7. Export audit for compliance
novyx audit export --format csv > audit-2026-02.csv
```
## Links
- [Documentation](https://novyxlabs.com/docs)
- [Pricing](https://novyxlabs.com/pricing)
- [Get API Key](https://novyxlabs.com)
## License
MIT
| text/markdown | Novyx Labs | blake@novyxlabs.com | null | null | null | ai, agents, memory, llm, langchain, semantic-search, rollback, audit, trace, observability, cli | [
"Development Status :: 4 - Beta",
"Intended Audience :: Developers",
"License :: OSI Approved :: MIT License",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.9",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Topic :: Software Development :: Libraries :: Python Modules",
"Topic :: Scientific/Engineering :: Artificial Intelligence"
] | [] | https://novyxlabs.com | null | >=3.8 | [] | [] | [] | [
"requests>=2.28.0",
"click>=8.0.0; extra == \"cli\"",
"rich>=13.0.0; extra == \"cli\""
] | [] | [] | [] | [
"Documentation, https://novyxlabs.com/docs"
] | twine/6.2.0 CPython/3.9.6 | 2026-02-21T03:36:09.137827 | novyx-2.6.0.tar.gz | 35,137 | 0d/d6/aea56f3c3b9e2cbe0eeed0001f5daee1431ccede676a9e46bef735a208e1/novyx-2.6.0.tar.gz | source | sdist | null | false | 1a333821ecb1c507670830582fc7294e | cf28bae67b1417184f72378b9c32dfb6a0d0b267c11c439fcd16ef1880b18caf | 0dd6aea56f3c3b9e2cbe0eeed0001f5daee1431ccede676a9e46bef735a208e1 | null | [] | 235 |
2.4 | GluonixDesigner | 7.5 | A GUI Designer with drag-and-drop features. | # Gluonix Designer V(7.5)
- Gluonix Designer is a GUI design tool for Python applications that simplifies the process of creating graphical user interfaces using a drag-and-drop approach. This tool is aimed at developers who want to quickly prototype or build Python GUIs without extensive manual coding.
## Feedback
- We hope you find Gluonix Designer helpful and easy to use. If you have any thoughts or suggestions, we’d love to hear from you at feedback@nucleonautomation.com. Your feedback is invaluable in helping us improve and enhance the tool.
## Features
- Drag-and-Drop Interface: Design your GUI by dragging and dropping components onto a canvas.
- Python Code Generation: Automatically generates Python code based on your GUI design.
- Component Library: Includes a variety of pre-built components such as buttons, labels, input fields, etc.
- Customization: Customize properties and behaviors of components directly through the interface.
- Export to Python: Export your designed GUI to a Python script that can be integrated into your Python project.
## Screenshots
### Project Setting

### Design Panel

## Examples
### [Replica of Covision Quality](https://github.com/nucleonautomation/Gluonix-Designer/blob/main/Examples/Covision)

### [Replica of Github Desktop](https://github.com/nucleonautomation/Gluonix-Designer/blob/main/Examples/Github)


## Tutorial
- See the [TUTORIAL](https://github.com/nucleonautomation/Gluonix-Designer/blob/main/TUTORIAL.md) file for details.
## Help
- See the [HELP](https://github.com/nucleonautomation/Gluonix-Designer/blob/main/Help.pdf) file for details.
## Installation Pypi
```
# Requires Python >=3.6
pip install GluonixDesigner
# Start Application
Gluonix
# or
GluonixDesigner
```
## Installation Git
```
# Requires Python >=3.6
git clone https://github.com/nucleonautomation/Gluonix-Designer.git
# Start Application
cd Gluonix-Designer
python Designer.py
```
## Requirements
```
pip install -r requirements.txt
```
## License
- This project is licensed under BSD 4-Clause License. See the [LICENSE](https://github.com/nucleonautomation/Gluonix-Designer/blob/main/LICENSE.md) file for details.
## Change Log
- See the [LOG](https://github.com/nucleonautomation/Gluonix-Designer/blob/main/LOG.md) file for details.
| text/markdown | Nucleon Automation | Nucleon Automation <jagroop@nucleonautomation.com> | null | null | BSD 4-Clause License
Copyright (c) 2026 Nucleon Automation
All rights reserved.
Redistribution and use in source and binary forms, with or without modification, are permitted provided that the following conditions are met:
1. Redistributions of source code must retain the above copyright notice, this list of conditions and the following disclaimer.
2. Redistributions in binary form must reproduce the above copyright notice, this list of conditions and the following disclaimer in the documentation and/or other materials provided with the distribution.
3. All advertising materials mentioning features or use of this software must display the following acknowledgement:
This product includes software developed by the organization.
4. Neither the name of the copyright holder nor the names of its contributors may be used to endorse or promote products derived from this software without specific prior written permission.
THIS SOFTWARE IS PROVIDED BY COPYRIGHT HOLDER "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL COPYRIGHT HOLDER BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. | null | [] | [] | null | null | >=3.6 | [] | [] | [] | [
"pillow>=6.0.0"
] | [] | [] | [] | [] | twine/6.1.0 CPython/3.11.9 | 2026-02-21T03:34:34.555806 | gluonixdesigner-7.5.tar.gz | 796,098 | 88/ba/8384994b4d601882568721486f0a3e01dbc28d0fcb76a55044563aa69bba/gluonixdesigner-7.5.tar.gz | source | sdist | null | false | 6442ffc63908a763944d6191214c4224 | 1659ee5d1673af1bd7481826ea4922ac43a0c9b81d267ba3433a3963f5525c02 | 88ba8384994b4d601882568721486f0a3e01dbc28d0fcb76a55044563aa69bba | null | [
"LICENSE.md"
] | 0 |
2.4 | tab2seq | 0.1.1 | Transform tabular event data into sequences ready for Transformer and Sequential models: Life2Vec, BEHRT and more. | # tab2seq
[](https://pypi.org/project/tab2seq/)
[](https://pypi.org/project/tab2seq/)
[](https://pypi.org/project/tab2seq/)
[](https://github.com/carlomarxdk/tab2seq/blob/main/LICENSE)
**tab2seq** adapts the Life2Vec data processing pipeline to make it easy to work with multi-source tabular event data for sequential modeling projects. Transform registry data, EHR records, and other event-based datasets into formats ready for Transformer and sequential deep learning models.
> [!WARNING]
> This is an alpha package. In the beta version, it will reimplement all the data-preprocessing steps of the [life2vec](https://github.com/SocialComplexityLab/life2vec) and [life2vec-light](https://github.com/carlomarxdk/life2vec-light) repos. See [TODOs](#todos) to see what is implemented at this point.
## About
This package extracts and generalizes the data processing patterns from the [Life2Vec](https://github.com/SocialComplexityLab/life2vec) project, making them reusable for similar research projects that need to:
- Work with multiple longitudinal data sources (registries, databases)
- Define and filter cohorts based on complex criteria
- Generate realistic synthetic data for development and testing
- Process large-scale tabular event data efficiently
Whether you're working with healthcare data, financial records, or any time-stamped event data, tab2seq provides the building blocks for preparing data for Life2Vec-style sequential models.
## Features
- **Multi-Source Data Management**: Handle multiple data sources (registries) with unified schema
- **Type-Safe Configuration**: Pydantic-based configuration with YAML support
- **Synthetic Data Generation**: Generate realistic dummy registry data for testing and exploration
- **Memory-Efficient Loading**: Chunked iteration and lazy loading with Polars
- **Schema Validation**: Automatic validation of entity IDs, timestamps, and column types
- **Cross-Source Operations**: Unified access and operations across multiple data sources
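The chunked-iteration idea from the feature list can be illustrated independently of Polars. The generator below only shows the pattern (fixed-size batches so a large event table never sits in memory at once); it is not tab2seq's actual implementation:

```python
from typing import Iterable, Iterator, List, TypeVar

T = TypeVar("T")

def iter_chunks(rows: Iterable[T], chunk_size: int) -> Iterator[List[T]]:
    """Yield fixed-size batches from any iterable of rows."""
    chunk: List[T] = []
    for row in rows:
        chunk.append(row)
        if len(chunk) == chunk_size:
            yield chunk
            chunk = []
    if chunk:  # final partial chunk
        yield chunk

print(list(iter_chunks(range(5), 2)))  # [[0, 1], [2, 3], [4]]
```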
## Installation
```bash
# Basic installation
pip install tab2seq
# Development installation
pip install -e ".[dev]"
```
## Quick Start
### Working with Multiple Data Sources
```python
from tab2seq.source import Source, SourceCollection, SourceConfig
# Define your data sources
configs = [
SourceConfig(
name="health",
filepath="data/health.parquet",
entity_id_col="patient_id",
timestamp_cols=["date"],
categorical_cols=["diagnosis", "procedure", "department"],
continuous_cols=["cost", "length_of_stay"],
),
SourceConfig(
name="income",
filepath="data/income.parquet",
entity_id_col="person_id",
timestamp_cols=["year"],
categorical_cols=["income_type", "sector"],
continuous_cols=["income_amount"],
),
]
# Create a source collection
collection = SourceCollection.from_configs(configs)
# Access individual sources
health = collection["health"]
df = health.read_all()
# Or iterate over all sources
for source in collection:
print(f"{source.name}: {len(source.get_entity_ids())} entities")
# Cross-source operations
all_entity_ids = collection.get_all_entity_ids()
```
### Generating Synthetic Data
```python
from tab2seq.datasets import generate_synthetic_collections
# Generate synthetic registry data for testing
collection = generate_synthetic_collections(
output_dir="data/dummy",
n_entities=1000,
seed=42
)
# Returns a ready-to-use SourceCollection
health = collection["health"]
print(health.read_all().head())
```
## Architecture
> [!WARNING]
> Work in progress!
**Available Registries:**
- **health**: Medical events with diagnoses (ICD codes), procedures, departments, costs, and length of stay
- **income**: Yearly income records with income type, sector, and amounts
- **labour**: Quarterly labour status with occupation, employment status, and residence
- **survey**: Periodic survey responses with education level, marital status, and satisfaction scores
All synthetic data includes realistic temporal patterns, missing data, and correlations between fields to mimic real-world registry data.
## Use Cases
- **Healthcare Research**: Transform electronic health records (EHR) into sequences for predictive modeling
- **Registry Data Processing**: Work with multiple event-based registries (health, income, labour, surveys)
- **Sequential Modeling**: Prepare multi-source data for Life2Vec, BEHRT, or other transformer-based models
- **Data Pipeline Development**: Use synthetic data to develop and test processing pipelines before working with sensitive real data
- **Multi-Source Analysis**: Combine and analyze data from multiple longitudinal sources with unified tooling
## Development
```bash
# Install development dependencies
pip install -e ".[dev]"
# Run tests
pytest
# Run tests with coverage
pytest --cov=tab2seq --cov-report=html
# Format code
black src/tab2seq tests
# Lint code
ruff check src/tab2seq tests
```
## TODOs
- [x] Synthetic Datasets
- [x] `Source` implementation
- [ ] `Cohort` implementation
- [ ] `Cohort` and data splits
- [ ] `Tokenization` implementation
- [ ] `Vocabulary` implementation
- [ ] Caching and chunking
## Citation
If you use this package in your research, please cite:
```bibtex
@software{tab2seq2024,
author = {Savcisens, Germans},
title = {tab2seq: Scalable Tabular to Sequential Data Processing},
year = {2024},
url = {https://github.com/carlomarxdk/tab2seq}
}
```
And the original Life2Vec paper that inspired this work:
```bibtex
@article{savcisens2024using,
title={Using sequences of life-events to predict human lives},
author={Savcisens, Germans and Eliassi-Rad, Tina and Hansen, Lars Kai and Mortensen, Laust Hvas and Lilleholt, Lau and Rogers, Anna and Zettler, Ingo and Lehmann, Sune},
journal={Nature computational science},
volume={4},
number={1},
pages={43--56},
year={2024},
publisher={Nature Publishing Group US New York}
}
```
## Acknowledgments
- Inspired by the data processing pipeline from [Life2Vec](https://github.com/SocialComplexityLab/life2vec) and [Life2Vec-Light](https://github.com/SocialComplexityLab/life2vec-light)
- Built with [Polars](https://polars.rs/), [PyArrow](https://arrow.apache.org/docs/python/), [Pydantic](https://pydantic.dev/), and [Joblib](https://joblib.readthedocs.io/)
## Contributing
Contributions are welcome! Please open an issue or submit a pull request on [GitHub](https://github.com/carlomarxdk/tab2seq).
## License
MIT License - see [LICENSE](LICENSE) file for details.
## Support
- 🐛 Issues: [GitHub Issues](https://github.com/carlomarxdk/tab2seq/issues)
- 💬 Discussions: [GitHub Discussions](https://github.com/carlomarxdk/tab2seq/discussions)
| text/markdown | null | Germans Savcisens <germans@savcisens.com> | null | null | MIT | tokenization, data preprocessing, tabular data, transformer models, sequential models, life2vec | [
"Development Status :: 3 - Alpha",
"Intended Audience :: Science/Research",
"License :: OSI Approved :: MIT License",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: 3.13",
"Topic :: Scientific/Engineering :: Artificial Intelligence"
] | [] | null | null | >=3.11 | [] | [] | [] | [
"numpy>=2.0.0",
"polars<2.0,>=1.38.0",
"pyarrow>=12.0.0",
"pydantic>=2.0.0",
"tqdm>=4.65.0",
"pyyaml>=6.0",
"click>=8.1.0",
"joblib>=1.3.0",
"pytest>=9.0.0; extra == \"dev\"",
"pytest-cov>=6.0.0; extra == \"dev\"",
"pytest-xdist>=3.5.0; extra == \"dev\"",
"ruff>=0.15.0; extra == \"dev\"",
"mypy>=1.19.0; extra == \"dev\"",
"types-PyYAML>=6.0.0; extra == \"dev\"",
"mkdocs>=1.6.1; extra == \"docs\"",
"mkdocs-material>=9.7.1; extra == \"docs\"",
"mkdocstrings>=1.0.2; extra == \"docs\"",
"mkdocstrings-python>=2.0.0; extra == \"docs\"",
"mkdocs-gen-files>=0.6.0; extra == \"docs\"",
"mkdocs-literate-nav>=0.6.2; extra == \"docs\"",
"mkdocs-section-index>=0.3.10; extra == \"docs\"",
"mkdocs-bibtex>=4.4.0; extra == \"docs\"",
"tab2seq[dev,docs]; extra == \"all\""
] | [] | [] | [] | [
"Homepage, https://github.com/carlomarxdk/tab2seq",
"Documentation, https://tab2seq.readthedocs.io",
"Repository, https://github.com/carlomarxdk/tab2seq",
"Issues, https://github.com/carlomarxdk/tab2seq/issues"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-21T03:34:07.945608 | tab2seq-0.1.1.tar.gz | 22,063 | e8/06/7222e9d06edaa67e0a8209423a8748206d0306d7fafd3bd27a6196c79e40/tab2seq-0.1.1.tar.gz | source | sdist | null | false | 11535ddbb46997eaa7ba22f65af3f761 | 2c9b4fa4f13c30d54b047bf5cf715ddc46f2667ed658cc091d274a18500b70d1 | e8067222e9d06edaa67e0a8209423a8748206d0306d7fafd3bd27a6196c79e40 | null | [
"LICENSE"
] | 230 |
2.4 | WebSearcher | 0.6.8 | Tools for conducting, collecting, and parsing web search | # WebSearcher
## Tools for conducting and parsing web searches
[](https://badge.fury.io/py/WebSearcher)
This package provides tools for conducting algorithm audits of web search and
includes a scraper built on `selenium` with tools for geolocating, conducting,
and saving searches. It also includes a modular parser built on `BeautifulSoup`
for decomposing a SERP into a list of components with categorical classifications
and position-based specifications.
## Table of Contents
- [WebSearcher](#websearcher)
- [Getting Started](#getting-started)
- [Usage](#usage)
- [Example Search Script](#example-search-script)
- [Step by Step](#step-by-step)
- [Localization](#localization)
- [Contributing](#contributing)
- [GitHub Actions](#github-actions)
- [Recent Updates](#recent-updates)
- [Update Log](#update-log)
- [Similar Packages](#similar-packages)
- [License](#license)
---
## Recent Updates
### 0.6.7
- Added `get_text_by_selectors()` to `webutils` -- centralizes multi-selector fallback pattern across 7 component parsers
- Added `perspectives`, `recent_posts`, and `latest_from` component classifiers
- Added `sub_type` to perspectives parser from header text
- Added CI test workflow on push to dev branch
- Added compressed test fixtures with `condense_fixtures.py` script
- Updated dependency lower bounds for security patches (protobuf, orjson)
- Updated GitHub Actions to checkout v6 and setup-python v6
---
## Getting Started
```bash
# Install from PyPI
pip install WebSearcher
# Or install with uv
uv add WebSearcher
# Install development version from GitHub
pip install git+https://github.com/gitronald/WebSearcher@dev
```
---
## Usage
### Example Search Script
There's an example search script that can be run from the command line with uv, passing a search query argument (`-q` or `--query`).
```bash
uv run demo-search -q "election news"
```
Search results are constantly changing, especially for news, but just now (see timestamp below), that search returned the following details (only a subset of columns is shown):
```
WebSearcher v0.4.2.dev0
Search Query: election news
Output Dir: data/demo-ws-v0.4.2.dev0
2024-11-11 10:55:27.362 | INFO | WebSearcher.searchers | 200 | election news
type title url
0 top_stories There’s a Lot of Fighting Over Why H... https://slate.com/news-and-politics/...
1 top_stories Dearborn’s Arab Americans feel vindi... https://www.politico.com/news/2024/1...
2 top_stories Former Kamala Harris aide says Joe B... https://www.usatoday.com/story/news/...
3 top_stories Election live updates: Control of Co... https://apnews.com/live/house-senate...
4 top_stories Undecided races of the 2024 election... https://abcnews.go.com/538/live-upda...
5 local_news These Southern California House race... https://www.nbclosangeles.com/decisi...
6 local_news Election Day is over in California. ... https://www.sacbee.com/news/politics...
7 local_news Why Haven’t Numerous California Hous... https://www.democracydocket.com/news...
8 local_news Anti-slavery measure Prop. 6 fails, ... https://calmatters.org/politics/elec...
9 general November 10, 2024, election and Trum... https://www.cnn.com/politics/live-ne...
10 general When do states have to certify 2024 ... https://www.cbsnews.com/news/state-e...
11 general US Election 2024 | Latest News & Ana... https://www.bbc.com/news/topics/cj3e...
12 unknown None None
13 general 2024 Election https://www.npr.org/sections/elections/
14 general Politics, Policy, Political News - P... https://www.politico.com/
15 general Presidential election highlights: No... https://apnews.com/live/trump-harris...
16 general Election 2024: Latest News, Top Stor... https://calmatters.org/category/poli...
17 searches_related None None
```
By default, that script will save the outputs to a directory (`data/demo-ws-{version}/`) with the structure below. Within that, the script saves the HTML both to a single JSON lines file (`serps.json`), which is recommended because it includes metadata about the search, and to individual HTML files in a subdirectory (`html/`) for ease of viewing the SERPs (e.g., in a browser). The script also saves the parsed search results to a JSON file (`results.json`).
```sh
ls -hal data/demo-ws-v0.4.2.dev0/
```
```
total 1020K
drwxr-xr-x 3 user user 4.0K 2024-11-11 10:54 ./
drwxr-xr-x 8 user user 4.0K 2024-11-11 10:54 ../
drwxr-xr-x 2 user user 4.0K 2024-11-11 10:55 html/
-rw-r--r-- 1 user user 16K 2024-11-11 10:55 results.json
-rw-r--r-- 1 user user 990K 2024-11-11 10:55 serps.json
```
### Step by Step
Example search and parse pipeline (via requests):
```python
import WebSearcher as ws
se = ws.SearchEngine() # 1. Initialize collector
se.search('immigration news') # 2. Conduct a search
se.parse_results() # 3. Parse search results
se.save_serp(append_to='serps.json') # 4. Save HTML and metadata
se.save_results(append_to='results.json') # 5. Save parsed results
```
#### 1. Initialize Collector
```python
import WebSearcher as ws
# Initialize collector with method and other settings
se = ws.SearchEngine(
method="selenium",
selenium_config = {
"headless": False,
"use_subprocess": False,
"driver_executable_path": "",
"version_main": 141,
}
)
```
#### 2. Conduct a Search
```python
se.search('immigration news')
# 2024-08-19 14:09:18.502 | INFO | WebSearcher.searchers | 200 | immigration news
```
#### 3. Parse Search Results
The example below is primarily for parsing search results as you collect HTML.
See `ws.parse_serp(html)` for parsing existing HTML data.
```python
se.parse_results()
# Show first result
se.results[0]
{'section': 'main',
'cmpt_rank': 0,
'sub_rank': 0,
'type': 'top_stories',
'sub_type': None,
'title': 'Biden citizenship program for migrant spouses in US launches',
'url': 'https://www.newsnationnow.com/us-news/immigration/biden-citizenship-program-migrant-spouses-us-launches/',
'text': None,
'cite': 'NewsNation',
'details': None,
'error': None,
'serp_rank': 0}
```
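To re-parse HTML you've already collected, read the saved JSON-lines file back and pass each page to `ws.parse_serp(html)`. A minimal sketch — the `"html"` field name is an assumption based on what the demo script writes, so inspect your own `serps.json` to confirm:

```python
import json

def iter_serp_html(path):
    """Yield the raw HTML of each saved SERP from a JSON-lines file."""
    with open(path, encoding="utf-8") as f:
        for line in f:
            if line.strip():
                # Each line is one JSON object; "html" is assumed here
                yield json.loads(line).get("html")

# Re-parse saved pages offline (requires WebSearcher):
# import WebSearcher as ws
# for html in iter_serp_html("serps.json"):
#     parsed = ws.parse_serp(html)
```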
#### 4. Save HTML and Metadata
Recommended: append HTML and metadata as lines to a JSON file for larger or
ongoing collections.
```python
se.save_serp(append_to='serps.json')
```
Alternative: save individual HTML files in a directory, named by a provided or (default) generated `serp_id`. Useful for smaller qualitative explorations where you want to quickly look at what is showing up. No metadata is saved, but timestamps could be recovered from the files themselves.
```python
se.save_serp(save_dir='./serps')
```
#### 5. Save Parsed Results
Save to a json lines file.
```python
se.save_results(append_to='results.json')
```
---
## Localization
To conduct localized searches--from a location of your choice--you only need
one additional data point: The __"Canonical Name"__ of each location. These are
available online and can be downloaded using a built-in function
(`ws.download_locations()`), which checks for the most recent version.
A brief guide on how to select a canonical name and use it to conduct a
localized search is available in a [jupyter notebook here](https://gist.github.com/gitronald/45bad10ca2b78cf4ec1197b542764e05).
---
## Contributing
Happy to have help! If you see a component that we aren't covering yet, please add it using the process below. If you aren't sure how to write a parser, you can also create an issue and I'll try to check it out. When creating that type of issue, providing the query that produced the new component and the time it was seen is essential; a screenshot of the component is helpful, and the HTML itself is ideal. Feel free to reach out if you have questions or need help.
### Repair or Enhance a Parser
1. Examine parser names in `/component_parsers/__init__.py`
2. Find parser file as `/component_parsers/{cmpt_name}.py`.
### Add a Parser
1. Add classifier to `classifiers/{main,footer,headers}.py`
2. Add parser as new file in `/component_parsers`
3. Add new parser to imports and catalogue in `/component_parsers/__init__.py`
### Testing
Run tests:
```bash
uv run pytest tests/ -q
```
Update snapshots:
```bash
uv run pytest tests/ --snapshot-update
```
Show snapshot diffs with `-vv`:
```bash
uv run pytest tests/ -vv
```
Run a specific snapshot test by serp_id prefix:
```bash
uv run pytest tests/ -k "45b6e019bfa2"
```
### Test Fixtures
Tests load from compressed fixtures in `tests/fixtures/`. To update fixtures after collecting new demo data:
```bash
uv run python scripts/condense_fixtures.py 0.6.7
uv run pytest tests/ --snapshot-update
```
---
## GitHub Actions
**Test Workflow** (`.github/workflows/test.yml`)
Runs the test suite on every push to `dev`.
**Release Workflow** (`.github/workflows/publish.yml`)
Publishes to PyPI when a pull request is merged into `master`:
- Builds the package using uv
- Publishes using trusted publishing (no API tokens required)
To release a new version:
1. Merge `dev` into `master` via PR
2. Once merged, the package is automatically published to PyPI
---
## Update Log
`0.6.7`
- Add `get_text_by_selectors()` utility, CI test workflow, compressed test fixtures
- Add `perspectives`, `recent_posts`, `latest_from` classifiers and `sub_type` for perspectives
- Update dependency bounds for security patches, GitHub Actions to v6
`0.6.6`
- Update packages with dependabot alerts (brotli, urllib3)
`0.6.5`
- Add GitHub Actions section to README
`0.6.0`
- Method for collecting data with selenium; requests no longer works without a redirect
- Pull request [#72](https://github.com/gitronald/WebSearcher/pull/72)
`0.5.2`
- Added support for Spanish component headers by text
- Pull request [#74](https://github.com/gitronald/WebSearcher/pull/74)
`0.5.1`
- Fixed canonical name -> UULE converter using `protobuf`, see [this gist](https://gist.github.com/gitronald/66cac42194ea2d489ff3a1e32651e736) for details
- Added `lang` arg to specify language in `se.search`; uses the `hl` URL param and does not change the `Accept-Language` request header (which defaults to en-US), but works in tests.
- Fixed null location/language arg input handling (again)
- Pull Request [#76](https://github.com/gitronald/WebSearcher/pull/76)
`0.5.0`
- configuration now using poetry v2
`0.4.9` - last version with poetry v1, future versions (`>=0.5.0`) will use [poetry v2](https://python-poetry.org/blog/announcing-poetry-2.0.1/) configs.
`0.4.2` - `0.4.8` - varied parser updates, testing with py3.12.
`0.4.1` - Added notices component types, including query edits, suggestions, language tips, and location tips.
`0.4.0` - Restructured parser for component classes, split classifier into submodules for header, main, footer, etc., and rewrote extractors to work with component classes. Various bug fixes.
`0.3.13` - New footer parser, broader extraction coverage, various bug and deprecation fixes.
`0.3.12` - Added num_results to search args, added handling for local results text and labels (made by the SE), ignore hidden_survey type at extraction.
`0.3.11` - Added extraction of labels for ads (made by the SE), use model validation, cleanup and various bug fixes.
`0.3.10` - Updated component classifier for images, added exportable header text mappings, added gist on localized searches.
`0.3.9` - Small fixes for video url parsing
`0.3.8` - Using SERP pydantic model, added github pip publishing workflow
`0.3.7` - Fixed localization, parser and classifier updates and fixes, image subtypes, changed rhs component handling.
`0.3.0` - `0.3.6` - Parser updates for SERPs from 2022 and 2023, standalone extractors file, added pydantic, reduced redundancies in outputs.
`2020.0.0`, `2022.12.18`, `2023.01.04` - Various updates, attempt at date versioning that seemed like a good idea at the time ¯\\\_(ツ)\_/¯
<!-- refs/tags/v2022.12.18 -->
<!-- refs/tags/v2023.01.04 -->
`0.2.15` - Fix people-also-ask and hotel false positives, add flag for left-hand side bar
`0.2.14` - Add shopping ads carousel and three knowledge subtypes (flights, hotels, events)
`0.2.13` - Small fixes for knowledge subtypes, general subtypes, and ads
`0.2.12` - Try to brotli decompress by default
`0.2.11` - Fixed local result parser and no return in general extra details
`0.2.10` - a) Add right-hand-side knowledge panel and top image carousel, b) Add knowledge and general component subtypes, c) Updates to component classifier, footer, ad, and people_also_ask components
`0.2.9` - Various fixes for SERPs with a left-hand side bar, which are becoming more common and change other parts of the SERP layout.
`0.2.8` - Small fixes due to HTML changes, such as missing titles and URLs in general components
`0.2.7` - Added fix for parsing twitter cards, removed pandas dependencies and
several unused functions, moving towards greater package simplicity.
`0.2.6` - Updated ad parser for latest format, still handles older ad format.
`0.2.5` - Google Search, like most online platforms, undergoes changes over time.
These changes often affect not just their outward appearance, but the underlying
code that parsers depend on. This makes parsing a goal with a moving target.
Sometime around February 2020, Google changed a few elements of their HTML
structure which broke this parser. I created this patch for these changes,
but have not tested its backwards compatibility (e.g. on SERPs collected prior to
2/2020). More generally, there's no guarantee on future compatibility. In fact,
there is almost certainly the opposite: more changes will inevitably occur.
If you have older data that you need to parse and the current parser doesn't work,
you can try using `0.2.1`, or send a pull request if you find a way to make both work!
---
## Similar Packages
Many of the packages I've found for collecting web search data via Python are no longer maintained, but others are still ongoing and interesting or useful. The primary strength of WebSearcher is its parser, which provides a level of detail that enables examinations of SERP [composition](http://dl.acm.org/citation.cfm?doid=3178876.3186143) by recording the type and position of each result, and its modular design, which has allowed us to (intermittently) maintain it for so long and to cover such a wide array of component types (currently 25 without considering `sub_types`). Feel free to add to the list of packages or services through a pull request if you are aware of others:
- https://github.com/jarun/googler
- http://googolplex.sourceforge.net
- https://github.com/Jayin/google.py
- https://github.com/ecoron/SerpScrap
- https://github.com/henux/cli-google
- https://github.com/Kaiz0r/netcrawler
- https://github.com/nabehide/WebSearch
- https://github.com/NikolaiT/se-scraper
- https://github.com/rrwen/search_google
- https://github.com/howie6879/magic_google
- https://github.com/rohithpr/py-web-search
- https://github.com/MarioVilas/googlesearch
- https://github.com/aviaryan/python-gsearch
- https://github.com/nickmvincent/you-geo-see
- https://github.com/anthonyhseb/googlesearch
- https://github.com/KokocGroup/google-parser
- https://github.com/vijayant123/google-scrap
- https://github.com/BirdAPI/Google-Search-API
- https://github.com/bisoncorps/search-engine-parser
- https://github.com/the-markup/investigation-google-search-audit
- http://googlesystem.blogspot.com/2008/04/google-search-rest-api.html
- https://valentin.app
- https://app.samuelschmitt.com/
---
## License
Copyright (C) 2017-2026 Ronald E. Robertson <rer@acm.org>
This program is free software: you can redistribute it and/or modify
it under the terms of the GNU General Public License as published by
the Free Software Foundation, either version 3 of the License, or
(at your option) any later version.
This program is distributed in the hope that it will be useful,
but WITHOUT ANY WARRANTY; without even the implied warranty of
MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
GNU General Public License for more details.
You should have received a copy of the GNU General Public License
along with this program. If not, see <http://www.gnu.org/licenses/>.
| text/markdown | null | "Ronald E. Robertson" <rer@acm.org> | null | null | null | parser, search, web | [] | [] | null | null | >=3.10 | [] | [] | [] | [
"beautifulsoup4>=4.12.3",
"brotli>=1.1.0",
"lxml>=5.3.0",
"orjson<4.0.0,>=3.11.5",
"pandas>=2.2.3",
"protobuf<7.0.0,>=6.33.5",
"pydantic>=2.9.2",
"requests>=2.32.4",
"selenium>=4.9.0",
"tldextract>=5.1.2",
"undetected-chromedriver>=3.5.5"
] | [] | [] | [] | [
"homepage, http://github.com/gitronald/WebSearcher",
"repository, http://github.com/gitronald/WebSearcher"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-21T03:34:07.061386 | websearcher-0.6.8.tar.gz | 19,181,758 | a7/c6/efb35bcd9490cc89405f58c109a0c6bcb9e6e3a79b0451ba48e372602057/websearcher-0.6.8.tar.gz | source | sdist | null | false | ec4169cfa2e343cc94a95ed21c615eb3 | a37f518c27d0e7f14236df6acf0b85cac62b4a526886ca63c13c8ac202055a91 | a7c6efb35bcd9490cc89405f58c109a0c6bcb9e6e3a79b0451ba48e372602057 | GPL-3.0 | [
"LICENSE"
] | 0 |
2.4 | csvsmith | 0.2.1 | Small CSV utilities: classification, duplicates, row digests, and CLI helpers. | # csvsmith
[](https://pypi.org/project/csvsmith/)

[](https://pypi.org/project/csvsmith/)
## Introduction
`csvsmith` is a lightweight collection of CSV utilities designed for
data integrity, deduplication, and organization. It provides a robust
Python API for programmatic data cleaning and a convenient CLI for quick
operations.
Whether you need to organize thousands of files based on their structural
signatures or pinpoint duplicate rows in a complex dataset, `csvsmith`
ensures the process is predictable, transparent, and reversible.
As of recent versions, CSV classification supports:
- strict vs relaxed header matching
- exact vs subset (“contains”) matching
- auto clustering with collision‑resistant hashes
- dry‑run preview
- report‑only planning mode (scan without moving)
- full rollback via manifest
## Installation
From PyPI:
```bash
pip install csvsmith
```
For local development:
```bash
git clone https://github.com/yeiichi/csvsmith.git
cd csvsmith
python -m venv .venv
source .venv/bin/activate
pip install -e .[dev]
```
## Python API Usage
### Count duplicate values
```python
from csvsmith import count_duplicates_sorted
items = ["a", "b", "a", "c", "a", "b"]
print(count_duplicates_sorted(items))
# [('a', 3), ('b', 2)]
```
### Find duplicate rows in a DataFrame
```python
import pandas as pd
from csvsmith import find_duplicate_rows
df = pd.read_csv("input.csv")
dup_rows = find_duplicate_rows(df)
```
### Deduplicate with report
```python
import pandas as pd
from csvsmith import dedupe_with_report
df = pd.read_csv("input.csv")
deduped, report = dedupe_with_report(df)
deduped.to_csv("deduped.csv", index=False)
report.to_csv("duplicate_report.csv", index=False)
# Exclude columns (e.g. IDs or timestamps)
deduped2, report2 = dedupe_with_report(df, exclude=["id"])
```
### CSV File Classification (Python)
```python
from csvsmith.classify import CSVClassifier
classifier = CSVClassifier(
source_dir="./raw_data",
dest_dir="./organized",
auto=True,
mode="relaxed", # or "strict"
match="exact", # or "contains"
)
classifier.run()
# Roll back using the generated manifest
classifier.rollback("./organized/manifest_YYYYMMDD_HHMMSS.json")
```
## CLI Usage
csvsmith provides a CLI for duplicate detection and CSV organization.
### Show duplicate rows
```bash
csvsmith row-duplicates input.csv
```
Save duplicate rows only:
```bash
csvsmith row-duplicates input.csv -o duplicates_only.csv
```
### Deduplicate and generate a report
```bash
csvsmith dedupe input.csv --deduped deduped.csv --report duplicate_report.csv
```
### Classify CSVs
```bash
# Dry-run (preview only)
csvsmith classify --src ./raw --dest ./out --auto --dry-run
# Exact matching (default)
csvsmith classify --src ./raw --dest ./out --config signatures.json
# Relaxed matching (ignore column order)
csvsmith classify --src ./raw --dest ./out --config signatures.json --mode relaxed
# Subset matching (signature columns must be present)
csvsmith classify --src ./raw --dest ./out --config signatures.json --match contains
# Report-only (plan without moving files)
csvsmith classify --src ./raw --dest ./out --auto --report-only
# Roll back using manifest
csvsmith classify --rollback ./out/manifest_YYYYMMDD_HHMMSS.json
```
### Report-only mode
`--report-only` scans all CSVs and writes a manifest describing what *would*
happen, without touching the filesystem. This enables downstream pipelines
to consume the classification plan for custom processing.
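Downstream consumption can be as simple as loading the manifest JSON — a sketch only; the field names inside the manifest are not documented here, so inspect a generated `manifest_*.json` for the actual schema before relying on any of them:

```python
import json

def load_classification_plan(manifest_path):
    """Load a report-only manifest (a JSON document) for downstream processing."""
    with open(manifest_path, encoding="utf-8") as f:
        return json.load(f)

# plan = load_classification_plan("./out/manifest_YYYYMMDD_HHMMSS.json")
# Iterate whatever entries your manifest version contains -- check the
# generated file for the real field names before using them.
```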
## Philosophy
1. CSVs deserve tools that are simple, predictable, and transparent.
2. A row has meaning only when its identity is stable and hashable.
3. Collisions are sin; determinism is virtue.
4. Let no delimiter sow ambiguity among fields.
5. Love thy \x1f — the unseen separator, guardian of clean hashes.
6. The pipeline should be silent unless something is wrong.
7. Your data deserves respect — and your tools should help you give it.
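Point 5 in practice — a minimal sketch (not necessarily csvsmith's internal implementation) of why joining fields with the `\x1f` unit separator yields collision-resistant row digests:

```python
import hashlib

UNIT_SEP = "\x1f"  # ASCII unit separator: does not appear in normal field text

def row_digest(fields):
    """Deterministic digest of a CSV row with an unambiguous field boundary."""
    joined = UNIT_SEP.join(str(f) for f in fields)
    return hashlib.sha256(joined.encode("utf-8")).hexdigest()

# ("ab", "c") and ("a", "bc") produce different digests here,
# whereas a naive "".join-based hash would conflate them.
```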
## License
MIT License.
| text/markdown | null | Eiichi YAMAMOTO <info@yeiichi.com> | null | null | MIT License
Copyright (c) 2025 Eiichi YAMAMOTO
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in
all copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
IN THE SOFTWARE.
| csv, pandas, duplicates, data-cleaning, file-organization | [
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3 :: Only",
"License :: OSI Approved :: MIT License",
"Intended Audience :: Developers",
"Topic :: Software Development :: Libraries",
"Topic :: Utilities"
] | [] | null | null | >=3.10 | [] | [] | [] | [
"pandas>=2.0"
] | [] | [] | [] | [
"Homepage, https://github.com/yeiichi/csvsmith",
"Repository, https://github.com/yeiichi/csvsmith"
] | twine/6.2.0 CPython/3.12.2 | 2026-02-21T03:33:34.874968 | csvsmith-0.2.1.tar.gz | 16,453 | 2b/9e/4342445ba20fa02a69651f6dd8c813d1aa73d7ae129b46b1f2178421179d/csvsmith-0.2.1.tar.gz | source | sdist | null | false | c2633442ceca5c26fa503da3f41507ec | 70e37377e7c7ccec023fc3fe3b3fca2344b3688d7b193778c252e2c5cc74641f | 2b9e4342445ba20fa02a69651f6dd8c813d1aa73d7ae129b46b1f2178421179d | null | [
"LICENSE"
] | 236 |
2.4 | openra-rl | 0.2.1 | Play Red Alert with AI agents — LLMs, scripted bots, or RL | # OpenRA-RL
Play [Red Alert](https://www.openra.net/) with AI agents. LLMs, scripted bots, or RL — your agent commands armies in the classic RTS through a Python API.
```
┌──────────────────┐ HTTP / WS :8000 ┌──────────────────────────────┐
│ Your Agent │ ◄────────────────────────► │ OpenRA-RL Server (Docker) │
│ │ gRPC :9999 │ FastAPI + gRPC bridge │
│ LLM / Bot / RL │ ◄────────────────────────► │ OpenRA engine (headless) │
└──────────────────┘ └──────────────────────────────┘
```
## Quick Start
```bash
pip install openra-rl
openra-rl play
```
On first run, an interactive wizard helps you configure your LLM provider (OpenRouter, Ollama, or LM Studio). The CLI pulls the game server Docker image and starts everything automatically.
### Skip the wizard
```bash
# Cloud (OpenRouter)
openra-rl play --provider openrouter --api-key sk-or-... --model anthropic/claude-sonnet-4-20250514
# Local (Ollama — free, no API key)
openra-rl play --provider ollama --model qwen3:32b
# Reconfigure later
openra-rl config
```
### Prerequisites
- **Docker** — the game server runs in a container
- **Python 3.10+**
- An LLM endpoint (cloud API key or local model server)
## CLI Reference
```
openra-rl play Run the LLM agent (wizard on first use)
openra-rl config Re-run the setup wizard
openra-rl server start | stop | status | logs
openra-rl mcp-server Start MCP stdio server (for OpenClaw / Claude Desktop)
openra-rl doctor Check system prerequisites
openra-rl version Print version
```
## MCP Server (OpenClaw / Claude Desktop)
OpenRA-RL exposes all 48 game tools as a standard MCP server:
```bash
openra-rl mcp-server
```
Add to your MCP client config (e.g. `~/.openclaw/openclaw.json`):
```json
{
"mcpServers": {
"openra-rl": {
"command": "openra-rl",
"args": ["mcp-server"]
}
}
}
```
Then chat: _"Start a game of Red Alert on easy difficulty, build a base, and defeat the enemy."_
## Architecture
| Component | Language | Role |
|-----------|----------|------|
| **OpenRA-RL** | Python | Environment wrapper, agents, HTTP/WebSocket API |
| **OpenRA** (submodule) | C# | Modified game engine with embedded gRPC server |
| **OpenEnv** (pip dep) | Python | Standardized Gymnasium-style environment interface |
**Data flow:** Agent <-> FastAPI (port 8000) <-> gRPC bridge (port 9999) <-> OpenRA game engine
The game runs at ~25 ticks/sec independent of agent speed. Observations use a DropOldest channel so the agent always sees the latest game state, even if it's slower than real time.
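The DropOldest idea can be sketched with a bounded queue of size one — a simplified illustration of the pattern, not the server's actual implementation:

```python
from queue import Queue, Full, Empty

class DropOldestChannel:
    """Size-1 channel: writers never block; readers always see the newest item."""

    def __init__(self):
        self._q = Queue(maxsize=1)

    def put(self, item):
        while True:
            try:
                self._q.put_nowait(item)
                return
            except Full:
                try:
                    self._q.get_nowait()  # drop the stale observation
                except Empty:
                    pass  # a reader beat us to it; retry the put

    def get(self):
        return self._q.get()
```

A slow agent calling `get()` after several game ticks receives only the most recent observation; intermediate states are silently discarded.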
## Example Agents
### Scripted Bot
A hardcoded state-machine bot that demonstrates all action types. Deploys MCV, builds a base, trains infantry, and attacks.
```bash
python examples/scripted_bot.py --url http://localhost:8000 --verbose --max-steps 2000
```
### MCP Bot
A planning-aware bot that uses game knowledge tools (tech tree lookups, faction briefings, map analysis) to formulate strategy before playing.
```bash
python examples/mcp_bot.py --url http://localhost:8000 --verbose --max-turns 3000
```
### LLM Agent
An AI agent powered by any OpenAI-compatible model. Supports cloud APIs (OpenRouter, OpenAI) and local model servers (Ollama, LM Studio).
```bash
python examples/llm_agent.py \
--config examples/config-openrouter.yaml \
--api-key sk-or-... \
--verbose \
--log-file game.log
```
CLI flags override config file values. See `python examples/llm_agent.py --help` for all options.
## Configuration
OpenRA-RL uses a unified YAML config system. Settings are resolved with this precedence:
**CLI flags > Environment variables > Config file > Built-in defaults**
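That resolution order can be sketched as follows — the function and parameter names are illustrative, not OpenRA-RL internals:

```python
import os

def resolve_setting(cli_value=None, env_var=None, file_value=None, default=None):
    """Resolve one setting: CLI flag > environment variable > config file > default."""
    if cli_value is not None:
        return cli_value
    if env_var is not None:
        env_value = os.environ.get(env_var)
        if env_value is not None:
            return env_value
    if file_value is not None:
        return file_value
    return default

# Example: bot difficulty falls back through the chain when no CLI flag is given
# resolve_setting(cli_value=None, env_var="BOT_TYPE", file_value="normal", default="easy")
```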
### Config file
Copy and edit the default config:
```bash
cp config.yaml my-config.yaml
# Edit my-config.yaml, then:
python examples/llm_agent.py --config my-config.yaml
```
Key sections:
```yaml
game:
openra_path: "/opt/openra" # Path to OpenRA installation
map_name: "singles.oramap" # Map to play
headless: true # No GPU rendering
record_replays: false # Save .orarep replay files
opponent:
bot_type: "normal" # AI difficulty: easy, normal, hard
ai_slot: "Multi0" # AI player slot
planning:
enabled: true # Pre-game planning phase
max_turns: 10 # Max planning turns
max_time_s: 60.0 # Planning time limit
llm:
base_url: "https://openrouter.ai/api/v1/chat/completions"
model: "qwen/qwen3-coder-next"
max_tokens: 1500
temperature: null # null = provider default
tools:
categories: # Toggle tool groups on/off
read: true
knowledge: true
movement: true
production: true
# ... see config.yaml for all categories
disabled: [] # Disable specific tools by name
alerts:
under_attack: true
low_power: true
idle_production: true
no_scouting: true
# ... see config.yaml for all alerts
```
### Example configs
| File | Use case |
|------|----------|
| `examples/config-openrouter.yaml` | Cloud LLM via OpenRouter (Claude, GPT, etc.) |
| `examples/config-ollama.yaml` | Local LLM via Ollama |
| `examples/config-lmstudio.yaml` | Local LLM via LM Studio |
| `examples/config-minimal.yaml` | Reduced tool set for limited-context models |
### Environment variables
| Variable | Config path | Description |
|----------|-------------|-------------|
| `OPENROUTER_API_KEY` | `llm.api_key` | API key for OpenRouter |
| `LLM_API_KEY` | `llm.api_key` | Generic LLM API key (overrides OpenRouter key) |
| `LLM_BASE_URL` | `llm.base_url` | LLM endpoint URL |
| `LLM_MODEL` | `llm.model` | Model identifier |
| `BOT_TYPE` | `opponent.bot_type` | AI difficulty: easy, normal, hard |
| `OPENRA_PATH` | `game.openra_path` | Path to OpenRA installation |
| `RECORD_REPLAYS` | `game.record_replays` | Save replay files (true/false) |
| `PLANNING_ENABLED` | `planning.enabled` | Enable planning phase (true/false) |
## Using Local Models
### Ollama
```bash
# Pull a model with tool-calling support
ollama pull qwen3:32b
# For models that need more context (default is often 2048-4096 tokens):
cat > /tmp/Modelfile <<EOF
FROM qwen3:32b
PARAMETER num_ctx 32768
EOF
ollama create qwen3-32k -f /tmp/Modelfile
# Run
openra-rl play --provider ollama --model qwen3-32k
```
> **Note:** Not all Ollama models support tool calling. Check with `ollama show <model>` — the template must include a `tools` block. Models known to work: `qwen3:32b`, `qwen3:4b`.
### LM Studio
1. Load a model in LM Studio and start the local server (default port 1234)
2. Run:
```bash
openra-rl play --provider lmstudio --model <model-name>
```
## Docker
### Server management
```bash
openra-rl server start # Start game server container
openra-rl server start --port 9000 # Custom port
openra-rl server status # Check if running
openra-rl server logs --follow # Tail logs
openra-rl server stop # Stop container
```
### Docker Compose (development)
| Service | Command | Description |
|---------|---------|-------------|
| `openra-rl` | `docker compose up openra-rl` | Headless game server (ports 8000, 9999) |
| `agent` | `docker compose up agent` | LLM agent (requires `OPENROUTER_API_KEY`) |
| `mcp-bot` | `docker compose run mcp-bot` | MCP bot |
```bash
# LLM agent via Docker Compose
OPENROUTER_API_KEY=sk-or-... docker compose up agent
```
### Replays
Games save `.orarep` replay files inside the container. Extract them:
```bash
docker cp openra-rl-server:/root/.config/openra/Replays ./replays
```
> **Note:** Replays use the dev engine version and cannot be opened in the standard OpenRA release. Build the dev client from the `OpenRA/` submodule to view them.
## Local Development (without Docker)
For running the game server natively (macOS/Linux):
### Install dependencies
```bash
# Python
pip install -e ".[dev]"
# .NET 8.0 SDK
# macOS: brew install dotnet@8
# Ubuntu: sudo apt install dotnet-sdk-8.0
# Native libraries (macOS arm64)
brew install sdl2 openal-soft freetype luajit
cp $(brew --prefix sdl2)/lib/libSDL2.dylib OpenRA/bin/SDL2.dylib
cp $(brew --prefix openal-soft)/lib/libopenal.dylib OpenRA/bin/soft_oal.dylib
cp $(brew --prefix freetype)/lib/libfreetype.dylib OpenRA/bin/freetype6.dylib
cp $(brew --prefix luajit)/lib/libluajit-5.1.dylib OpenRA/bin/lua51.dylib
```
### Build OpenRA
```bash
cd OpenRA && make && cd ..
```
### Start the server
```bash
python openra_env/server/app.py
```
### Run tests
```bash
pytest
```
## Observation Space
Each tick, the agent receives structured game state:
| Field | Description |
|-------|-------------|
| `tick` | Current game tick |
| `cash`, `ore`, `power_provided`, `power_drained` | Economy |
| `units` | Own units with position, health, type, facing, stance, speed, attack range |
| `buildings` | Own buildings with production queues, power, rally points |
| `visible_enemies`, `visible_enemy_buildings` | Fog-of-war limited enemy intel |
| `spatial_map` | 9-channel spatial tensor (terrain, height, resources, passability, fog, own buildings, own units, enemy buildings, enemy units) |
| `military` | Kill/death costs, asset value, experience, order count |
| `available_production` | What can currently be built |
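As a sketch of how these fields compose, here is a hypothetical observation payload (field names taken from the table above; the exact JSON shape the server emits may differ) and two derived metrics an agent might compute:

```python
# Hypothetical observation payload; field names follow the table above,
# but the real server payload may be shaped differently.
obs = {
    "tick": 1200,
    "cash": 3500,
    "ore": 400,
    "power_provided": 200,
    "power_drained": 150,
    "units": [
        {"type": "e1", "pos": (12, 34), "health": 100},
        {"type": "harv", "pos": (20, 18), "health": 80},
    ],
    "visible_enemies": [{"type": "e3", "pos": (40, 41), "health": 60}],
}

def power_surplus(obs: dict) -> int:
    """Net power; a negative value means low power (production slows)."""
    return obs["power_provided"] - obs["power_drained"]

def army_size(obs: dict) -> int:
    """Number of own units currently alive."""
    return len(obs["units"])

print(power_surplus(obs))  # 50
print(army_size(obs))      # 2
```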
## Action Space
18 action types available through the command API:
| Category | Actions |
|----------|---------|
| **Movement** | `move`, `attack_move`, `attack`, `stop` |
| **Production** | `produce`, `cancel_production` |
| **Building** | `place_building`, `sell`, `repair`, `power_down`, `set_rally_point`, `set_primary` |
| **Unit control** | `deploy`, `guard`, `set_stance`, `enter_transport`, `unload`, `harvest` |
## MCP Tools
The LLM agent interacts through 48 MCP (Model Context Protocol) tools organized into categories:
| Category | Tools | Purpose |
|----------|-------|---------|
| **Read** | `get_game_state`, `get_economy`, `get_units`, `get_buildings`, `get_enemies`, `get_production`, `get_map_info`, `get_exploration_status` | Query current game state |
| **Knowledge** | `lookup_unit`, `lookup_building`, `lookup_tech_tree`, `lookup_faction` | Static game data reference |
| **Bulk Knowledge** | `get_faction_briefing`, `get_map_analysis`, `batch_lookup` | Efficient batch queries |
| **Planning** | `start_planning_phase`, `end_planning_phase`, `get_opponent_intel`, `get_planning_status` | Pre-game strategy planning |
| **Game Control** | `advance` | Advance game ticks |
| **Movement** | `move_units`, `attack_move`, `attack_target`, `stop_units` | Unit movement commands |
| **Production** | `build_unit`, `build_structure`, `build_and_place` | Build units and structures |
| **Building Actions** | `place_building`, `cancel_production`, `deploy_unit`, `sell_building`, `repair_building`, `set_rally_point`, `guard_target`, `set_stance`, `harvest`, `power_down`, `set_primary` | Building and unit management |
| **Placement** | `get_valid_placements` | Query valid building locations |
| **Unit Groups** | `assign_group`, `add_to_group`, `get_groups`, `command_group` | Group management |
| **Compound** | `batch`, `plan` | Multi-action sequences |
| **Utility** | `get_replay_path`, `surrender` | Misc |
| **Terrain** | `get_terrain_at` | Terrain queries |
Tools can be toggled per-category or individually via `config.yaml`.
## Project Structure
```
OpenRA-RL/
├── OpenRA/ # Game engine (git submodule, C#)
├── openra_env/ # Python package
│ ├── cli/ # CLI entry point (openra-rl command)
│ ├── mcp_server.py # Standard MCP server (stdio transport)
│ ├── client.py # WebSocket client
│ ├── config.py # Unified YAML configuration
│ ├── models.py # Pydantic data models
│ ├── game_data.py # Unit/building stats, tech tree
│ ├── reward.py # Multi-component reward function
│ ├── opponent_intel.py # AI opponent profiles
│ ├── mcp_ws_client.py # MCP WebSocket client
│ ├── server/
│ │ ├── app.py # FastAPI application
│ │ ├── openra_environment.py # OpenEnv environment (reset/step/state)
│ │ ├── bridge_client.py # Async gRPC client
│ │ └── openra_process.py # OpenRA subprocess manager
│ └── generated/ # Auto-generated protobuf stubs
├── examples/
│ ├── scripted_bot.py # Hardcoded strategy bot
│ ├── mcp_bot.py # MCP tool-based bot
│ ├── llm_agent.py # LLM-powered agent
│ └── config-*.yaml # Example configs (ollama, lmstudio, openrouter, minimal)
├── skill/ # OpenClaw skill definition
├── proto/ # Protobuf definitions (rl_bridge.proto)
├── tests/ # Test suite
├── .github/workflows/ # CI, Docker publish, PyPI publish
├── config.yaml # Default configuration
├── docker-compose.yaml # Service orchestration
├── Dockerfile # Game server image
└── Dockerfile.agent # Lightweight agent image
```
## License
[GPL-3.0](LICENSE)
| text/markdown | null | null | null | null | GPL-3.0 | null | [] | [] | null | null | >=3.10 | [] | [] | [] | [
"fastapi>=0.100.0",
"grpcio-tools>=1.60.0",
"grpcio>=1.60.0",
"httpx>=0.24.0",
"mcp>=1.2.0",
"openenv-core>=0.2.0",
"protobuf>=4.25.0",
"pydantic>=2.0.0",
"python-dotenv>=1.0.0",
"pyyaml>=6.0",
"uvicorn>=0.20.0",
"websockets>=12.0",
"pytest-asyncio>=0.21; extra == \"dev\"",
"pytest>=7.0; extra == \"dev\"",
"ruff>=0.1.0; extra == \"dev\"",
"torch>=2.0.0; extra == \"training\"",
"transformers>=4.30.0; extra == \"training\"",
"trl>=0.7.0; extra == \"training\""
] | [] | [] | [] | [] | twine/6.1.0 CPython/3.13.7 | 2026-02-21T03:31:38.012776 | openra_rl-0.2.1.tar.gz | 1,226,564 | e1/90/3e7f04ad0cc02638387c492dc0e064d96971f40de7d9567831e4d53110dc/openra_rl-0.2.1.tar.gz | source | sdist | null | false | 5a766e21cedfb540f07ec70f3ee05257 | 8a1d2717c9c741b0a4da2296ec51952f8e00f3bf4b54420bcec6baa31b707bae | e1903e7f04ad0cc02638387c492dc0e064d96971f40de7d9567831e4d53110dc | null | [
"LICENSE"
] | 231 |
2.4 | kotonebot | 0.9.0 | Kotonebot is game automation library based on computer vision technology, works for Windows and Android. | # kotonebot
> [!WARNING]
> This project is still in early development; breaking changes may land at any time. If you use it, pinning to a specific version is recommended.
kotonebot is a framework written in Python, built on OpenCV, RapidOCR, and related technologies, dedicated to simplifying the process of writing game automation scripts in Python.
## Features
* Layered adoption
  * Three layers (Library, Framework, Application), each encapsulated to a different degree; choose whichever fits
* Platform-independent input and output (screenshots and simulated clicks)
* Code-generation-based image resource references
  * Avoids hardcoded strings
* Tracing & visualization tooling for image/OCR recognition results
* Out-of-the-box emulator management (currently only MuMu12 and LDPlayer are supported)
## Installation
Requires Python >= 3.10
```bash
# Windows Host, Windows Client
pip install kotonebot[windows]
# Windows Host, Android Client
pip install kotonebot[android]
# Development dependencies
pip install kotonebot[dev]
```
## Quick Start
WIP
### Co-development
Sometimes you may want to install kotonebot from source so you can debug and modify it alongside your own project. In that case, if you install it via `pip install -e /path/to/kotonebot`, Pylance may fail to perform static analysis correctly.
The workaround is to search for `python.analysis.extraPaths` in VSCode and set it to the root directory of your local kotonebot checkout.
## Documentation
WIP
## Miscellaneous
This project was split off from [KotonesAutoAssistant](https://github.com/XcantloadX/kotones-auto-assistant), so all commits before c69130 are historical commits of KotonesAutoAssistant.
Because filter-repo was used to strip out a large number of unneeded files, historical commit messages may not fully correspond to the files they changed.
| text/markdown | null | null | null | null | null | null | [] | [] | null | null | >=3.10 | [] | [] | [] | [
"opencv-python~=4.10",
"rapidocr_onnxruntime~=1.4",
"scikit-image~=0.25",
"thefuzz~=0.22",
"pydantic~=2.10",
"ksaa-res~=0.2",
"typing-extensions~=4.12",
"python-dotenv~=1.0",
"onnxruntime~=1.14",
"rich~=13.9",
"numpy",
"mouse>=0.7.1",
"adbutils>=2.8; extra == \"android\"",
"uiautomator2>=3.2; extra == \"android\"",
"pywin32; extra == \"windows\"",
"ahk>=1.8; extra == \"windows\"",
"win11toast>=0.35; extra == \"windows\"",
"psutil>=6.1; extra == \"windows\"",
"fastapi~=0.115; extra == \"dev\"",
"uvicorn~=0.34; extra == \"dev\"",
"python-multipart~=0.0; extra == \"dev\"",
"websockets~=14.1; extra == \"dev\"",
"psutil~=6.1; extra == \"dev\"",
"twine~=6.1; extra == \"dev\"",
"build; extra == \"dev\"",
"snakeviz; extra == \"dev\"",
"tomli; python_version < \"3.11\" and extra == \"dev\"",
"dataclass-wizard; extra == \"dev\"",
"jinja2~=3.1; extra == \"dev\"",
"tqdm~=4.67; extra == \"dev\"",
"kotonebot[android,dev,windows]; extra == \"all\""
] | [] | [] | [] | [] | uv/0.8.3 | 2026-02-21T03:31:29.406010 | kotonebot-0.9.0.tar.gz | 1,530,011 | 12/d7/9b6b787af4c36114160adbe10587eb14743ceafc750316efe2eddabbaf28/kotonebot-0.9.0.tar.gz | source | sdist | null | false | e285e43a8dae752d4036a3c703f73def | 5dcc249d5fc4d576c1dec1c29a12b3f9816d58f3aca1877d2d54f1f1add8096a | 12d79b6b787af4c36114160adbe10587eb14743ceafc750316efe2eddabbaf28 | null | [
"LICENSE"
] | 227 |
2.4 | antaris-contracts | 1.0.0 | Versioned state schemas, failure semantics, and debug CLI for the Antaris Analytics Suite. | # antaris-contracts
[](https://badge.fury.io/py/antaris-contracts)
[](LICENSE)
**Versioned state schemas, failure semantics, and debug tooling for the antaris-suite.**
antaris-contracts defines the shared data contracts used across all antaris packages — ensuring consistent serialization, migration, and observability without coupling the packages to each other.
## Installation
```bash
pip install antaris-contracts
```
## What's Included
- **State schemas** — Versioned dataclass contracts for memory, router, guard, context, and telemetry state
- **Migration tooling** — Chained schema migrations with version detection and rollback safety
- **Failure semantics** — Canonical error types and failure modes documented in `FAILURE_SEMANTICS.md`
- **Concurrency model** — Lock ordering and thread-safety contracts in `CONCURRENCY_MODEL.md`
- **`antaris-debug` CLI** — Inspect memory stores, tail telemetry logs, validate schema files
## CLI Usage
```bash
# Inspect a memory workspace
antaris-debug memory inspect ./my_agent_workspace
# Search memories
antaris-debug memory search ./my_agent_workspace "important decision"
# Tail a telemetry log
antaris-debug telemetry tail ./telemetry.jsonl
# Validate a schema file
antaris-debug schema validate ./state.json
```
## Schema Usage
```python
import json

from antaris_contracts.schemas.memory import MemoryState
from antaris_contracts.schemas.router import RouterState
from antaris_contracts.migration import migrate
# Load and auto-migrate state
with open("state.json") as f:
    raw = json.load(f)
state = migrate(raw, target_version="2.2.0")
```
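The chained-migration approach described above can be sketched as follows. This is an illustration of the pattern, not the package's actual internals; `MIGRATIONS` and `migrate_chain` are names invented for this example:

```python
# Each step upgrades state by exactly one schema version; migrate_chain()
# walks the chain until the target version is reached.
MIGRATIONS = {
    "1.0.0": ("2.0.0", lambda s: {**s, "router": s.get("router", {})}),
    "2.0.0": ("2.1.0", lambda s: {**s, "telemetry": []}),
    "2.1.0": ("2.2.0", lambda s: {**s, "schema_note": "guard state split out"}),
}

def migrate_chain(state: dict, target_version: str) -> dict:
    """Apply one-version migration steps until target_version is reached."""
    version = state.get("schema_version", "1.0.0")
    while version != target_version:
        if version not in MIGRATIONS:
            raise ValueError(f"no migration path from {version} to {target_version}")
        version, step = MIGRATIONS[version]
        state = {**step(state), "schema_version": version}
    return state

state = migrate_chain({"schema_version": "1.0.0", "memory": {}}, "2.2.0")
print(state["schema_version"])  # 2.2.0
```

Keeping each step single-version makes version detection and rollback safety tractable: any intermediate version is itself a valid, serializable state.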
## Zero Runtime Dependencies
antaris-contracts has no runtime dependencies beyond the Python standard library. It is safe to import in any environment.
## Part of antaris-suite
| Package | Role |
|---------|------|
| antaris-contracts | Schemas + contracts (this package) |
| antaris-memory | Persistent agent memory |
| antaris-router | Intelligent model routing |
| antaris-guard | Input/output safety layer |
| antaris-context | Context window management |
| antaris-pipeline | Orchestration + telemetrics |
## License
Apache 2.0 — see [LICENSE](LICENSE)
| text/markdown | null | Antaris Analytics <dev@antarisanalytics.com> | null | null | Apache-2.0 | ai, agents, llm, schemas, contracts, telemetry, debugging | [
"Development Status :: 4 - Beta",
"Intended Audience :: Developers",
"License :: OSI Approved :: Apache Software License",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.9",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: 3.13",
"Topic :: Scientific/Engineering :: Artificial Intelligence"
] | [] | null | null | >=3.9 | [] | [] | [] | [] | [] | [] | [] | [
"Homepage, https://github.com/Antaris-Analytics/antaris-contracts",
"Documentation, https://antarisanalytics.ai",
"Repository, https://github.com/Antaris-Analytics/antaris-contracts"
] | twine/6.2.0 CPython/3.14.3 | 2026-02-21T03:30:54.348589 | antaris_contracts-1.0.0.tar.gz | 26,220 | c2/24/f1bf9d186acfe2c69cf01386701e24a7dc655158e3115332a18d2605a5e6/antaris_contracts-1.0.0.tar.gz | source | sdist | null | false | 5bd2fc18796e1d29fee4f30aea48b5ff | 74b6dcc70a04369d435dbee40bcf8769c0d14e67a2bd67a8142d1f6475745744 | c224f1bf9d186acfe2c69cf01386701e24a7dc655158e3115332a18d2605a5e6 | null | [
"LICENSE"
] | 259 |
2.4 | PyADRecon | 0.12.2 | Python Active Directory Reconnaissance Tool (ADRecon port) with NTLM and Kerberos support. | <img src="https://raw.githubusercontent.com/l4rm4nd/PyADRecon/refs/heads/main/.github/pyadrecon.png" alt="pyadrecon" width="300"/>
Python3 implementation of an improved [ADRecon](https://github.com/sense-of-security/ADRecon) for Pentesters and Blue Teams.
> ADRecon is a tool which gathers information about MS Active Directory and generates an XSLX report to provide a holistic picture of the current state of the target AD environment.
> [!TIP]
> If you are a Red Team, may check out [ADRecon-ADWS](https://github.com/l4rm4nd/PyADRecon-ADWS) instead.
## Table of Contents
- [Installation](#installation)
- [Usage](#usage)
- [Docker](#docker)
- [Collection Modules](#collection-modules)
- [HTML Dashboard](#html-dashboard)
- [Acknowledgements](#acknowledgements)
- [License](#license)
## Installation
````bash
# stable release from pypi
pipx install pyadrecon
# latest commit from github
pipx install git+https://github.com/l4rm4nd/PyADRecon
````
Then verify installation:
````bash
pyadrecon --version
````
> [!TIP]
> For Windows, may read [this](https://github.com/l4rm4nd/PyADRecon/tree/main/windows). NTLM + Kerberos supported.
## Usage
````py
usage: pyadrecon.py [-h] [--version] [--generate-excel-from CSV_DIR] [-dc DOMAIN_CONTROLLER] [-u USERNAME] [-p [PASSWORD]] [-d DOMAIN] [--auth {ntlm,kerberos}] [--tgt-file TGT_FILE] [--tgt-base64 TGT_BASE64]
                    [--ssl] [--port PORT] [-o OUTPUT] [--page-size PAGE_SIZE] [--dormant-days DORMANT_DAYS] [--password-age PASSWORD_AGE] [--only-enabled] [--collect COLLECT]
                    [--workstation WORKSTATION] [--no-excel] [--no-dashboard] [-v]
PyADRecon - Python Active Directory Reconnaissance Tool
options:
-h, --help show this help message and exit
--version show program's version number and exit
--generate-excel-from CSV_DIR
Generate Excel report from CSV directory (standalone mode, no AD connection needed)
-dc, --domain-controller DOMAIN_CONTROLLER
Domain Controller IP or hostname
-u, --username USERNAME
Username for authentication
-p, --password [PASSWORD]
Password for authentication (optional if using TGT)
-d, --domain DOMAIN Domain name (e.g., DOMAIN.LOCAL) - Required for Kerberos auth
--auth {ntlm,kerberos}
Authentication method (default: ntlm)
--tgt-file TGT_FILE Path to Kerberos TGT ccache file (for Kerberos auth)
--tgt-base64 TGT_BASE64
Base64-encoded Kerberos TGT ccache (for Kerberos auth)
--ssl Force SSL/TLS (LDAPS). No LDAP fallback allowed.
--port PORT LDAP port (default: 389, use 636 for LDAPS)
-o, --output OUTPUT Output directory (default: PyADRecon-Report-<timestamp>)
--page-size PAGE_SIZE
LDAP page size (default: 500)
--dormant-days DORMANT_DAYS
Days for dormant account threshold (default: 90)
--password-age PASSWORD_AGE
Days for password age threshold (default: 180)
--only-enabled Only collect enabled objects
--collect COLLECT Comma-separated modules to collect (default: all)
--workstation WORKSTATION
Explicitly spoof workstation name for NTLM authentication (default: empty string, bypasses userWorkstations restrictions)
--no-excel Skip Excel report generation
--no-dashboard Skip interactive HTML dashboard generation
-v, --verbose Verbose output
Examples:
# Basic usage with NTLM authentication
pyadrecon.py -dc 192.168.1.1 -u admin -p password123 -d DOMAIN.LOCAL
# With Kerberos authentication (bypasses channel binding)
pyadrecon.py -dc dc01.domain.local -u admin -p password123 -d DOMAIN.LOCAL --auth kerberos
# With Kerberos using TGT from file (bypasses channel binding)
pyadrecon.py -dc dc01.domain.local -u admin -d DOMAIN.LOCAL --auth kerberos --tgt-file /tmp/admin.ccache
# With Kerberos using TGT from base64 string (bypasses channel binding)
pyadrecon.py -dc dc01.domain.local -u admin -d DOMAIN.LOCAL --auth kerberos --tgt-base64 BQQAAAw...
# Only collect specific modules
pyadrecon.py -dc 192.168.1.1 -u admin -p pass -d DOMAIN.LOCAL --collect users,groups,computers
# Output to specific directory
pyadrecon.py -dc 192.168.1.1 -u admin -p pass -d DOMAIN.LOCAL -o /tmp/adrecon_output
# Generate Excel report from existing CSV files (standalone mode)
pyadrecon.py --generate-excel-from /path/to/CSV-Files -o report.xlsx
````
>[!TIP]
>PyADRecon always tries LDAPS on TCP/636 first.
>
>If flag `--ssl` is not used, LDAP on TCP/389 may be tried as fallback.
>[!WARNING]
>If LDAP channel binding is enabled, this script will fail with `automatic bind not successful - strongerAuthRequired`, as ldap3 does not support it (see [here](https://github.com/cannatag/ldap3/issues/1049#issuecomment-1222826803)). You must use Kerberos authentication instead.
>
>If you use Kerberos auth under Linux, please create a valid `/etc/krb5.conf` and DC hostname entry in `/etc/hosts`. May read [this](https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=32628#KerberosClientConfiguration-*NIX/etc/krb5.confConfiguration). If you are on Windows, please make sure you have valid Kerberos tickets. May read [this](https://github.com/l4rm4nd/PyADRecon/tree/main/windows#kerberos-authentication). Note that you can provide an already existing TGT ticket to the script via `--tgt-file` or `--tgt-base64`. For example, obtained by Netexec via `netexec smb <TARGET> <ARGS> --generate-tgt <FILEMAME>`.
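The `--tgt-base64` value is simply the raw ccache file encoded as base64. A minimal Python sketch for producing it (the ccache path below is only an example):

```python
import base64

def ccache_to_base64(path: str) -> str:
    """Read a Kerberos ccache file and return it as a base64 string for --tgt-base64."""
    with open(path, "rb") as f:
        return base64.b64encode(f.read()).decode("ascii")

# e.g. token = ccache_to_base64("/tmp/admin.ccache")
```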
## Docker
There is also a Docker image available on GHCR.IO.
````
docker run --rm -v /etc/krb5.conf:/etc/krb5.conf:ro -v /etc/hosts:/etc/hosts:ro -v ./:/tmp/pyadrecon_output ghcr.io/l4rm4nd/pyadrecon:latest -dc dc01.domain.local -u admin -p password123 -d DOMAIN.LOCAL -o /tmp/pyadrecon_output
````
## Collection Modules
By default, PyADRecon runs all collection modules. Together they are referred to as `default` or `all`.
You can also freely select your own set of modules to run:
| Icon | Meaning |
|------|---------|
| 🛑 | Requires administrative domain privileges (e.g. Domain Admins) |
| ✅ | Requires regular domain privileges (e.g. Authenticated Users) |
| 💥 | New collection module in beta state. Results may be incorrect. |
**Forest & Domain**
- `forest` ✅
- `domain` ✅
- `trusts` ✅
- `sites` ✅
- `subnets` ✅
- `schema` or `schemahistory` ✅
**Domain Controllers**
- `dcs` or `domaincontrollers` ✅
**Users & Groups**
- `users` ✅
- `userspns` ✅
- `groups` ✅
- `groupmembers` ✅
- `protectedgroups` ✅💥
- `krbtgt` ✅
- `asreproastable` ✅
- `kerberoastable` ✅
**Computers & Printers**
- `computers` ✅
- `computerspns` ✅
- `printers` ✅
**OUs & Group Policy**
- `ous` ✅
- `gpos` ✅
- `gplinks` ✅
**Passwords & Credentials**
- `passwordpolicy` ✅
- `fgpp` or `finegrainedpasswordpolicy` 🛑
- `laps` 🛑
- `bitlocker` 🛑💥
**Managed Service Accounts**
- `gmsa` or `groupmanagedserviceaccounts` ✅💥
- `dmsa` or `delegatedmanagedserviceaccounts` ✅💥
- Only works for Windows Server 2025+ AD schema
**Certificates**
- `adcs` or `certificates` ✅💥
- Detects ESC1, ESC2, ESC3, ESC4 and ESC9
**DNS**
- `dnszones` ✅
- `dnsrecords` ✅
## HTML Dashboard
PyADRecon will automatically create an HTML dashboard with important stats and security findings.
You may disable HTML dashboard generation via `--no-dashboard`.
>[!CAUTION]
> This is a beta feature. Displayed data may be falsely parsed or reported as issue. Take it with a grain of salt!
<img width="1209" height="500" alt="image" src="https://github.com/user-attachments/assets/e9500806-374d-4c69-a9a8-7f1540779266" />
<details>
<img width="1318" height="927" alt="image" src="https://github.com/user-attachments/assets/0760056c-963d-48fb-a252-fd082862bb01" />
<img width="1283" height="817" alt="image" src="https://github.com/user-attachments/assets/325197eb-8bd7-4aca-ac4e-c34b85057df1" />
<img width="1253" height="569" alt="image" src="https://github.com/user-attachments/assets/b6c4f94b-9da3-4a55-808d-23036181d02b" />
</details>
## Acknowledgements
Many thanks to the following folks:
- [S3cur3Th1sSh1t](https://github.com/S3cur3Th1sSh1t) for a first Claude draft of this Python3 port
- [Sense-of-Security](https://github.com/sense-of-security) for the original ADRecon script in PowerShell
- [cannatag](https://github.com/cannatag) for the awesome ldap3 Python client
- [Forta](https://github.com/fortra) for the awesome impacket suite
- [Anthropic](https://github.com/anthropics) for Claude LLMs
## License
**PyADRecon** is released under the **MIT License**.
The following third-party libraries are used:
| Library | License |
|-------------|----------------|
| ldap3 | LGPL v3 |
| openpyxl | MIT |
| gssapi | MIT |
| impacket | Apache 2.0 |
| winkerberos | Apache 2.0 |
Please refer to the respective licenses of these libraries when using or redistributing this software.
| text/markdown | LRVT | null | null | null | null | active-directory, active directory, ad, recon, reconnaissance, enum, enumeration, adrecon, pyadrecon, security, audit, pentest, ldap, kerberos, adcs, kerberoast | [] | [] | null | null | >=3.9 | [] | [] | [] | [
"ldap3<3,>=2.9.1",
"openpyxl<4,>=3.1.5",
"impacket<1,>=0.13.0",
"gssapi<2,>=1.11.1; sys_platform != \"win32\"",
"winkerberos<1,>=0.13.0; sys_platform == \"win32\"",
"pycryptodome<4,>=3.23.0; sys_platform == \"win32\""
] | [] | [] | [] | [
"Homepage, https://github.com/l4rm4nd/PyADRecon",
"Repository, https://github.com/l4rm4nd/PyADRecon",
"Issues, https://github.com/l4rm4nd/PyADRecon/issues"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-21T03:29:52.409112 | pyadrecon-0.12.2.tar.gz | 644,688 | 3d/67/a80fb5e02eac4d1c4baeeb4abf6bdc4b06d8a3c90ec43bd575896a98984a/pyadrecon-0.12.2.tar.gz | source | sdist | null | false | 90fe7d400e330fdbfcf7c2433b9ce9b9 | eae80ff32c63253d35003e60d15ceb7472386bdc8c6bcfbc449193be13d2bb1d | 3d67a80fb5e02eac4d1c4baeeb4abf6bdc4b06d8a3c90ec43bd575896a98984a | MIT | [
"LICENSE"
] | 0 |
2.4 | cqc-quam-state | 2026.2.21 | Add your description here | # CQC QUAM State
A command-line tool for managing CQC QuAM (Quantum Abstract Machine) state configuration.
## Overview
This package provides access to calibrated quantum device configurations and state files. It includes:
- Pre-calibrated QuAM state files (JSON format)
- CLI tools for managing and loading state configurations
- Environment variable management for QuAM state paths
To quickly set the `QUAM_STATE_PATH` environment variable to the current calibrated state (after installing and activating the environment):
```bash
source load-cqc-quam
```
**Note**: The package version follows the format `YYYY.MM.DD[.X]` where `YYYY.MM.DD` indicates the date of the last calibration, and the optional `.X` is a sub-version for multiple releases on the same day.
## Installation
Install the package using `uv` (recommended) or `pip`. Make sure to use the latest version to get the most recent calibration data:
### Using uv (recommended)
```bash
uv venv
source .venv/bin/activate
uv pip install cqc-quam-state==2025.6.4.1
```
### Using pip
```bash
pip install cqc-quam-state==2025.6.4.1
```
### Installing the latest version
To install the most recent calibration data, check for the latest version:
```bash
# Find the latest version
pip index versions cqc-quam-state
# Install the latest version (e.g., if there are multiple releases today)
pip install cqc-quam-state==2025.6.4.3
```
## Usage
### Quick Start
The simplest way to use this package is to source the provided script, which sets the `QUAM_STATE_PATH` environment variable:
```bash
source load-cqc-quam
```
This will set `QUAM_STATE_PATH` to point to the current calibrated state files included in the package.
### CLI Commands
The package also provides a `cqc-quam-state` CLI tool for more advanced usage:
#### Get Help
```bash
cqc-quam-state --help
```
#### Available Commands
- **`info`**: Display information about the current state
- **`load`**: Output the export command for setting `QUAM_STATE_PATH` (used by the `load-cqc-quam` script)
- **`set`**: Set configuration values (placeholder for future functionality)
#### Examples
Display current state information:
```bash
cqc-quam-state info
```
Get the export command for the QuAM state path:
```bash
cqc-quam-state load
```
Set configuration values:
```bash
cqc-quam-state set
```
(In development: the idea is to eventually set the OPX/Octave IP address and port and the calibration DB dynamically here.)
## State Files
The package includes pre-calibrated state files in the `quam_state/` directory:
- **`state.json`**: Main QuAM state configuration containing octave settings, RF outputs, and calibration parameters
- **`wiring.json`**: Wiring configuration for the quantum device setup
These files are automatically included when you install the package and can be accessed via the `QUAM_STATE_PATH` environment variable.
## Version Information
The package uses a date-based versioning system with optional sub-versions:
### Version Format: `YYYY.MM.DD[.X]`
- **`YYYY.MM.DD`**: The calibration date (generated from `date +"%Y.%-m.%-d"`)
- **`.X`**: Optional sub-version for multiple releases on the same day
### Version Examples
- **`2025.6.4`**: First release on June 4, 2025
- **`2025.6.4.1`**: Second release on June 4, 2025 (updated calibration)
- **`2025.6.4.2`**: Third release on June 4, 2025
- **`2025.6.5`**: First release on June 5, 2025
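For tooling that needs to compare these versions, here is a small sketch (illustrative, not part of the package) that parses them into sortable tuples; a missing sub-version sorts before `.1` on the same day:

```python
def parse_version(v: str) -> tuple:
    """Parse a YYYY.MM.DD[.X] version string into a sortable tuple."""
    parts = [int(p) for p in v.split(".")]
    year, month, day = parts[:3]
    sub = parts[3] if len(parts) > 3 else 0  # missing sub-version counts as 0
    return (year, month, day, sub)

versions = ["2025.6.5", "2025.6.4", "2025.6.4.2", "2025.6.4.1"]
print(sorted(versions, key=parse_version))
# ['2025.6.4', '2025.6.4.1', '2025.6.4.2', '2025.6.5']
```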
## Troubleshooting
### Environment Variable Not Set
If the `QUAM_STATE_PATH` environment variable is not set after sourcing the script:
1. Ensure you're in the correct virtual environment
2. Verify the package is installed: `pip show cqc-quam-state`
3. Try running the load command directly: `cqc-quam-state load`
### Package Not Found
If you get import errors:
1. Check if the package is installed: `pip list | grep cqc-quam-state`
2. Ensure you're using the correct Python environment
3. Try reinstalling: `pip install --force-reinstall cqc-quam-state`
## License
This project is licensed under the MIT License.
| text/markdown | null | JUNIQ Software Team <info-juniq@fz-juelich.de> | null | null | MIT | null | [
"Development Status :: 3 - Alpha",
"Intended Audience :: Developers",
"License :: OSI Approved :: MIT License",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.11"
] | [] | null | null | >=3.11 | [] | [] | [] | [
"click>=8.0.0"
] | [] | [] | [] | [
"Homepage, https://gitlab.jsc.fz-juelich.de/qip/cqc/cqc-quam-state",
"Repository, https://gitlab.jsc.fz-juelich.de/qip/cqc/cqc-quam-state"
] | twine/6.2.0 CPython/3.11.14 | 2026-02-21T03:29:24.569190 | cqc_quam_state-2026.2.21.tar.gz | 10,318 | ef/0f/8676e5794984b97e188b6ce11dacf6ccfea264a1b55fef29febc02e7095a/cqc_quam_state-2026.2.21.tar.gz | source | sdist | null | false | 845bde5f8cfeaee7efe1724561126f85 | cb3cf178e6c0efe421635149b381788d72e0335330f2135d2ad4ce17c9b02a82 | ef0f8676e5794984b97e188b6ce11dacf6ccfea264a1b55fef29febc02e7095a | null | [] | 214 |
2.4 | synth-agent-sdk | 0.8.0 | Autonomous agents, engineered. A Python SDK for building production-grade AI agents and multi-agent systems. | # Synth
> Autonomous agents, engineered.
**Latest Version:** 0.8.0 | [PyPI](https://pypi.org/project/synth-agent-sdk/) | [Changelog](CHANGELOG.md)
A Python SDK for building production-grade AI agents and multi-agent systems. From a 3-line single agent to complex, stateful, resumable multi-agent graphs — with model-agnostic provider support, streaming, observability, evaluation, and guardrails out of the box.
---
## Table of Contents
1. [What is Synth?](#what-is-synth)
2. [Installation](#installation)
3. [Quick Start](#quick-start)
4. [Core Concepts](#core-concepts)
5. [Creating an Agent](#creating-an-agent)
6. [Tools](#tools)
7. [Running Your Agent](#running-your-agent)
8. [Streaming](#streaming)
9. [Model Providers](#model-providers)
10. [Memory](#memory)
11. [Guards](#guards)
12. [Structured Output](#structured-output)
13. [Pipelines](#pipelines)
14. [Graphs](#graphs)
15. [Human-in-the-Loop](#human-in-the-loop)
16. [Agent Teams](#agent-teams)
17. [Tracing](#tracing)
18. [Checkpointing](#checkpointing)
19. [Evaluation](#evaluation)
20. [CLI Commands](#cli-commands)
21. [Deploying to AWS AgentCore](#deploying-to-aws-agentcore)
22. [Error Handling](#error-handling)
23. [Environment Variables](#environment-variables)
24. [FAQ](#faq)
---
## What is Synth?
Synth is a Python library for building AI-powered agents. An agent uses a large language model (Claude, GPT, Gemini, etc.) to understand instructions, make decisions, and take actions — calling functions, searching databases, generating reports, or coordinating with other agents.
Synth handles the plumbing (provider communication, conversation management, retries, cost tracking) so you focus on what your agent actually does.
---
## Installation
Requires Python 3.10+.
```bash
pip install synth-agent-sdk[anthropic] # Anthropic Claude (recommended)
```
Other options:
```bash
pip install synth-agent-sdk[quickstart] # Claude + GPT (tutorials/demos)
pip install synth-agent-sdk[openai] # OpenAI GPT
pip install synth-agent-sdk[google] # Google Gemini
pip install synth-agent-sdk[ollama] # Local Ollama models
pip install synth-agent-sdk[bedrock] # AWS Bedrock
pip install synth-agent-sdk[agentcore] # AWS AgentCore deployment
pip install synth-agent-sdk[all] # All providers
```
Set your API key:
```bash
export ANTHROPIC_API_KEY="your-key-here" # Claude
export OPENAI_API_KEY="your-key-here" # GPT
export GOOGLE_API_KEY="your-key-here" # Gemini
# AWS Bedrock uses standard IAM credentials — no Synth-specific key needed
```
Verify your setup:
```bash
synth doctor
```
---
## Quick Start
```python
from synth import Agent
agent = Agent(model="claude-sonnet-4-5", instructions="You are a helpful assistant.")
result = agent.run("What is the capital of France?")
print(result.text)
# => "The capital of France is Paris."
```
---
## Core Concepts
| Concept | What It Is |
|---------|-----------|
| `Agent` | The main building block. Wraps an AI model with tools, memory, and guards. |
| `Tool` | A Python function your agent can call. |
| `ToolKit` | A bundle of related tools. |
| `RunResult` | Returned by `agent.run()` — text, token usage, cost, latency, trace. |
| `Memory` | Lets your agent remember previous conversations. |
| `Guard` | A safety rule applied to input or output. |
| `Pipeline` | Chains agents sequentially. |
| `Graph` | A workflow with branching, loops, and conditional logic. |
| `AgentTeam` | Multiple agents coordinated by an orchestrator. |
| `Trace` | A detailed record of everything that happened during a run. |
| `Checkpoint` | A saved snapshot of a run's state for resumption. |
---
## Creating an Agent
```python
from synth import Agent, Guard, Memory
agent = Agent(
model="claude-sonnet-4-5", # AI model to use
instructions="You are helpful.", # System prompt
tools=[my_tool, my_toolkit], # Optional tools
memory=Memory.thread(), # Optional memory
guards=[Guard.no_pii_output()], # Optional safety rules
output_schema=MyModel, # Optional Pydantic schema
max_retries=3, # Retry on transient errors
retry_backoff=1.0, # Base delay between retries (seconds)
)
```
All parameters except `model` are optional. Default model is `claude-sonnet-4-5`.
---
## Tools
Tools are Python functions your agent can call. Mark them with `@tool` — Synth auto-generates JSON schemas from type hints and docstrings.
```python
from synth import tool
@tool
def get_weather(city: str) -> str:
"""Get the current weather for a city."""
return f"The weather in {city} is sunny, 72°F."
agent = Agent(
model="claude-sonnet-4-5",
instructions="You are a weather assistant.",
tools=[get_weather],
)
```
Rules: every parameter needs a type annotation, and the function needs a docstring. Missing either raises `ToolDefinitionError` immediately.
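To illustrate what "auto-generates JSON schemas from type hints and docstrings" involves, here is a minimal sketch of such a derivation. This is not the SDK's actual implementation; `describe_tool` and `PY_TO_JSON` are names invented for this example:

```python
import inspect
from typing import get_type_hints

# Map a few Python annotations to JSON Schema type names.
PY_TO_JSON = {str: "string", int: "integer", float: "number", bool: "boolean"}

def describe_tool(fn):
    """Build a JSON-schema-style description from a function's hints and docstring."""
    if fn.__doc__ is None:
        raise ValueError(f"{fn.__name__} needs a docstring")
    hints = get_type_hints(fn)
    params = {}
    for name in inspect.signature(fn).parameters:
        if name not in hints:
            raise ValueError(f"parameter {name!r} needs a type annotation")
        params[name] = {"type": PY_TO_JSON.get(hints[name], "string")}
    return {
        "name": fn.__name__,
        "description": inspect.getdoc(fn),
        "parameters": {"type": "object", "properties": params},
    }

def get_weather(city: str) -> str:
    """Get the current weather for a city."""
    return f"The weather in {city} is sunny."

schema = describe_tool(get_weather)
print(schema["parameters"]["properties"])  # {'city': {'type': 'string'}}
```

Failing fast on a missing docstring or annotation mirrors the `ToolDefinitionError` behavior described above: the problem surfaces at definition time, not mid-conversation.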
Group related tools with `ToolKit`:
```python
from synth import ToolKit
math_tools = ToolKit([add, multiply, divide])
agent = Agent(model="gpt-4o", tools=[math_tools, get_weather])
```
Inspect tool calls after a run:
```python
for tc in result.tool_calls:
print(f"{tc.name}({tc.args}) → {tc.result} [{tc.latency_ms:.1f}ms]")
```
---
## Running Your Agent
**Synchronous:**
```python
result = agent.run("Explain quantum computing in simple terms.")
print(result.text) # Response text
print(result.tokens) # TokenUsage(input, output, total)
print(result.cost) # Estimated cost in USD
print(result.latency_ms) # Latency in milliseconds
print(result.tool_calls) # Tools that were called
print(result.trace) # Full execution trace
print(result.output) # Parsed structured output (if output_schema set)
```
**Asynchronous:**
```python
import asyncio
async def main():
result = await agent.arun("What is 2 + 2?")
print(result.text)
asyncio.run(main())
```
---
## Streaming
```python
from synth import TokenEvent, ToolCallEvent, ToolResultEvent, DoneEvent, ErrorEvent
for event in agent.stream("Write a short poem about coding."):
if isinstance(event, TokenEvent):
print(event.text, end="", flush=True)
elif isinstance(event, ToolCallEvent):
print(f"\n[Calling: {event.name}]")
elif isinstance(event, DoneEvent):
print(f"\n\nTokens: {event.result.tokens.total_tokens}")
```
Async streaming:
```python
async for event in agent.astream("Write a haiku."):
if isinstance(event, TokenEvent):
print(event.text, end="", flush=True)
```
| Event | When |
|-------|------|
| `TokenEvent` | Model produced a text token |
| `ToolCallEvent` | Model decided to call a tool |
| `ToolResultEvent` | Tool finished executing |
| `ThinkingEvent` | Model produced a reasoning token |
| `DoneEvent` | Stream completed — contains full `RunResult` |
| `ErrorEvent` | Something went wrong |
---
## Model Providers
Switch providers by changing the `model` string — no other code changes needed.
| Provider | Model String Examples | Extra | API Key |
|----------|----------------------|-------|---------|
| Anthropic | `"claude-sonnet-4-5"`, `"claude-haiku-3-5"` | `synth[anthropic]` | `ANTHROPIC_API_KEY` |
| OpenAI | `"gpt-4o"`, `"gpt-4o-mini"` | `synth[openai]` | `OPENAI_API_KEY` |
| Google | `"gemini-2.0-flash"` | `synth[google]` | `GOOGLE_API_KEY` |
| Ollama | `"ollama/llama3"`, `"ollama/mistral"` | `synth[ollama]` | None (local) |
| AWS Bedrock | `"bedrock/claude-sonnet-4-5"` | `synth[bedrock]` | AWS IAM |
Custom endpoint:
```python
agent = Agent(model="my-model", base_url="https://my-proxy.example.com/v1")
```
---
## Memory
By default each `run()` is stateless. Add memory to persist conversations.
**Thread memory** (in-process, fast):
```python
agent = Agent(model="claude-sonnet-4-5", memory=Memory.thread())
agent.run("My name is Alice.", thread_id="user-123")
result = agent.run("What's my name?", thread_id="user-123")
print(result.text) # "Your name is Alice."
```
**Persistent memory** (Redis, survives restarts):
```python
agent = Agent(model="gpt-4o", memory=Memory.persistent("redis://localhost:6379"))
```
**Semantic memory** (vector embeddings, retrieves most relevant context):
```python
agent = Agent(model="gemini-2.0-flash", memory=Memory.semantic(embedder=my_embedder_fn))
```
---
## Guards
Declarative safety rules applied automatically to every run.
```python
from synth import Guard
agent = Agent(
model="claude-sonnet-4-5",
guards=[
Guard.no_pii_output(), # Block PII in responses
Guard.max_cost(dollars=0.50), # Stop if cost exceeds $0.50
Guard.no_tool_calls(["delete_*"]), # Block tools matching glob
Guard.custom(my_check_fn), # Your own check function
],
)
```
Guards run in order. First failure stops execution and raises `GuardViolationError`.
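The README does not document the exact signature `Guard.custom` expects, so as an assumption, here is a check function taking the output text and returning `True` when the output is safe — the kind of callable you might pass as `my_check_fn`:

```python
import re

# Hypothetical PII pattern: US social-security-number-like strings.
SSN_RE = re.compile(r"\b\d{3}-\d{2}-\d{4}\b")

def my_check_fn(output_text: str) -> bool:
    """Return True when the output is safe (contains no SSN-like strings)."""
    return SSN_RE.search(output_text) is None
```

A guard built from this would block a response such as `"My SSN is 123-45-6789."` while letting ordinary answers through.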
---
## Structured Output
Get typed Pydantic objects back instead of raw text:
```python
from pydantic import BaseModel
class MovieReview(BaseModel):
title: str
rating: float
summary: str
recommended: bool
agent = Agent(
model="claude-sonnet-4-5",
instructions="You are a movie critic.",
output_schema=MovieReview,
)
result = agent.run("Review the movie Inception.")
review = result.output # MovieReview instance
print(review.title) # "Inception"
print(review.rating) # 9.2
print(review.recommended) # True
```
If parsing fails, Synth retries with a corrective prompt up to `max_retries` times.
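The corrective-retry loop can be sketched in plain Python. This is not Synth's internal code — it uses `json.loads` and a stub model instead of Pydantic and a real provider — but the shape is the same: on a parse failure, re-prompt with the error message appended, up to `max_retries` extra attempts:

```python
import json

def parse_with_retries(generate, prompt, max_retries=3):
    """Ask `generate` for JSON; on a parse failure, re-prompt with the error."""
    last_err = None
    for _ in range(max_retries + 1):
        raw = generate(prompt)
        try:
            return json.loads(raw)
        except json.JSONDecodeError as err:
            last_err = err
            # Corrective prompt: tell the model what went wrong.
            prompt = f"{prompt}\nYour last reply was invalid JSON ({err}). Reply with JSON only."
    raise ValueError(f"unparseable after {max_retries} retries: {last_err}")

# Stub model: fails once (truncated JSON), then returns valid JSON.
replies = iter(['{"title": "Inception", "rating": 9.2',
                '{"title": "Inception", "rating": 9.2}'])
review = parse_with_retries(lambda _p: next(replies), "Review Inception as JSON.")
```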
---
## Pipelines
Chain agents sequentially — output of each becomes input of the next:
```python
from synth import Pipeline
researcher = Agent(model="claude-sonnet-4-5", instructions="You research topics.")
writer = Agent(model="claude-sonnet-4-5", instructions="You write clear articles.")
editor = Agent(model="claude-sonnet-4-5", instructions="You edit for clarity.")
pipeline = Pipeline([researcher, writer, editor])
result = pipeline.run("The history of the internet")
```
Run stages in parallel with `ParallelGroup`:
```python
from synth.orchestration.pipeline import ParallelGroup
pipeline = Pipeline([
writer,
ParallelGroup([fact_checker, style_checker]), # Run concurrently
editor,
])
```
Stream with stage labels:
```python
for stage_event in pipeline.stream("Write about AI"):
print(f"[{stage_event.stage_name}] {stage_event.event}")
```
---
## Graphs
Directed-graph workflows with branching, loops, and conditional logic:
```python
from synth import Graph, node
graph = Graph()
@node(graph)
def classify(state):
state["priority"] = "high" if "urgent" in state["text"].lower() else "low"
return state
@node(graph)
def handle_urgent(state):
state["response"] = "Escalating immediately."
return state
@node(graph)
def handle_normal(state):
state["response"] = "We'll respond within 24 hours."
return state
graph.set_entry("classify")
graph.add_edge("classify", "handle_urgent", when=lambda s: s["priority"] == "high")
graph.add_edge("classify", "handle_normal", when=lambda s: s["priority"] == "low")
graph.add_edge("handle_urgent", Graph.END)
graph.add_edge("handle_normal", Graph.END)
result = graph.run({"text": "This is urgent! Server is down!"})
print(result.output["response"])
```
Loops are supported. Synth enforces `max_iterations=100` by default to prevent infinite loops.
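The iteration cap is straightforward to sketch without Synth: walk edges from the entry node, count steps, and abort once the budget is spent. The node/router representation below is an assumption for illustration, not Synth's internal data model:

```python
def run_graph(entry, edges, state, max_iterations=100):
    """Follow edges until a route returns None (END); abort past max_iterations."""
    node = entry
    for _ in range(max_iterations):
        node_fn, route = edges[node]   # each node: (transform, router)
        state = node_fn(state)
        node = route(state)
        if node is None:               # None stands in for Graph.END
            return state
    raise RuntimeError(f"graph exceeded max_iterations={max_iterations}")

# A one-node loop that counts to 3, then exits.
edges = {
    "count": (
        lambda s: {**s, "n": s["n"] + 1},
        lambda s: None if s["n"] >= 3 else "count",
    )
}
final = run_graph("count", edges, {"n": 0})
```

A router that never reaches the end condition trips the `RuntimeError` instead of spinning forever — the same role `GraphLoopError` plays in Synth.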
Visualize your graph:
```python
print(graph.visualise()) # Outputs a Mermaid diagram
```
---
## Human-in-the-Loop
Pause a graph at specific nodes for human review before continuing:
```python
graph.with_human_in_the_loop(pause_at=["draft_email"], timeout=3600)
graph.with_checkpointing()
result = graph.run({"customer": "Alice"}, run_id="email-001")
# result is a PausedRun — inspect result.state["draft"] here
final = graph.resume("email-001", human_input="Looks good, send it.")
```
---
## Agent Teams
Coordinate multiple specialized agents under an orchestrator:
```python
from synth import AgentTeam
team = AgentTeam(
orchestrator="claude-sonnet-4-5",
agents=[researcher, writer, analyst],
strategy="auto", # orchestrator decides who does what
)
result = team.run("Write a report on renewable energy trends.")
print(result.answer)
print(result.contributions) # Each agent's individual contribution
print(result.total_cost)
```
Use `strategy="parallel"` to run all agents concurrently.
---
## Tracing
Every run automatically records a detailed trace:
```python
result = agent.run("Summarize this document.")
trace = result.trace
print(f"Tokens: {trace.total_tokens}")
print(f"Cost: ${trace.total_cost:.4f}")
print(f"Latency: {trace.total_latency_ms:.1f}ms")
result.trace.show() # Open visual timeline in browser
path = result.trace.export() # Export as OpenTelemetry JSON
```
Auto-forward all traces to an OTel collector:
```bash
export SYNTH_TRACE_ENDPOINT="https://my-otel-collector.example.com/v1/traces"
```
---
## Checkpointing
Save and resume graph execution state:
```python
graph.with_checkpointing()
result = graph.run(initial_state, run_id="my-run-001")
# Later, even in a different process
result = graph.resume("my-run-001")
```
Redis backend for distributed systems:
```python
from synth.checkpointing.redis import RedisCheckpointStore
graph.with_checkpointing(store=RedisCheckpointStore("redis://localhost:6379"))
```
---
## Evaluation
Run structured tests against your agent:
```python
from synth import Eval
evaluation = Eval(agent=agent)
evaluation.add_case(input="Capital of France?", expected="Paris")
evaluation.add_case(input="Capital of Japan?", expected="Tokyo")
report = evaluation.run()
print(f"Score: {report.overall_score}")
for case in report.cases:
status = "PASS" if case.passed else "FAIL"
print(f" [{status}] {case.input} → {case.actual}")
```
Custom checker:
```python
def contains_keyword(output: str, expected: str) -> float:
return 1.0 if expected.lower() in output.lower() else 0.0
evaluation.add_case(input="Explain photosynthesis.", expected="chlorophyll", checker=contains_keyword)
```
---
## CLI Commands
Run `synth` with no arguments to launch the interactive shell:
```bash
synth
```
```
synth> run agent.py "Hello"
synth> create agent my-bot
synth> doctor
synth> exit
```
All commands also work directly:
```bash
synth init # Interactive project setup wizard
synth create agent my-bot # Scaffold an agent project
synth create agent my-bot -p openai # Skip prompt, use OpenAI
synth create agentcore my-service # AWS AgentCore project
synth create team my-team # Multi-agent team + pipeline
synth create tool my-tools # Standalone tools file
synth create mcp my-server # MCP server with FastMCP
synth create ui my-ui # Local browser testing UI
synth dev my_agent.py # Rich terminal UI with hot-reload
synth run my_agent.py "prompt" # Execute agent, print result
synth bench my_agent.py "prompt" --runs 20 # Benchmark latency/cost
synth eval my_agent.py --dataset cases.json # Run evaluation suite
synth trace <run_id> # Open trace in browser
synth deploy --target agentcore # Deploy to AWS AgentCore
synth deploy --target agentcore --dry-run # Validate without deploying
synth edit agent agent.py # Modify existing agent config
synth doctor # Check env, credentials, deps
synth info --extra anthropic # Show package info
synth help # Quick reference card
```
### `synth init`
The fastest way to start a new project. Walks you through:
1. Project name and description
2. Provider selection
3. Model selection (region-aware for AgentCore with Bedrock model catalog)
4. Agent instructions
5. **Tool Wizard** — pick pre-built tools or scaffold custom `@tool` stubs
6. **MCP Wizard** — pick pre-built MCP servers or scaffold custom `@mcp.tool()` stubs
7. Feature toggles (memory, guards, structured output, eval, deploy)
8. Credential check (AgentCore only)
9. Summary and confirmation
10. Project generation
11. Optional "Deploy now?" prompt (AgentCore only)
For AgentCore projects, `synth init` also:
- Auto-detects AWS credentials (env vars → `~/.aws/credentials` → AWS Toolkit profiles)
- Prompts for target AWS region (default: `us-east-1`)
- Shows Bedrock models available in that region
- Writes `aws_region`, `model_id`, `cris_enabled`, and `aws_profile` to `agentcore.yaml`
### `synth dev`
Rich terminal UI for interactive development:
```bash
synth dev my_agent.py
```
Features: streaming token-by-token output, tool call visualization, slash commands (`/tools`, `/reload`, `/trace`, `/export`, `/clear`, `/cost`, `/quit`), markdown rendering, status bar with live cost/token tracking.
### `synth deploy`
Guided six-stage deployment wizard:
```bash
synth deploy --target agentcore my_agent.py
synth deploy --target agentcore --dry-run my_agent.py # Stages 1–4 only
```
Stages: credential validation → dependency check → file validation → manifest generation → artifact packaging → AgentCore API submission. Each prints `[ OK ]` or `[FAIL]` with a corrective suggestion on failure.
### `synth edit agent`
Interactively modify an existing agent without editing files manually:
```bash
synth edit agent agent.py
```
Menu options: (a) instructions, (b) model, (c) tools, (d) MCP servers. Shows a diff before writing. Uses atomic temp-file rename to prevent corruption.
### `synth doctor`
```bash
synth doctor
```
Checks: Python version, core dependencies, provider API keys, `SYNTH_TRACE_ENDPOINT` format, optional provider packages, and (when `agentcore.yaml` is present) AgentCore config fields (`aws_region`, `model_id`, `cris_enabled`, `aws_profile`).
### `synth bench`
```bash
synth bench my_agent.py "Hello" --runs 20 --warmup 2
```
Reports p50/p95/p99 latency, average tokens, cost per run, and success rate.
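As a rough sketch (not necessarily what `synth bench` does internally), those percentiles can be computed from the raw latency samples with the standard library:

```python
import statistics

def latency_report(samples_ms):
    """Summarize latency samples into p50/p95/p99, like a bench report."""
    # quantiles(n=100) returns the 99 percentile cut points p1..p99.
    qs = statistics.quantiles(samples_ms, n=100, method="inclusive")
    return {"p50": qs[49], "p95": qs[94], "p99": qs[98]}

report = latency_report(list(range(1, 101)))  # samples of 1..100 ms
```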
---
## Deploying to AWS AgentCore
### Prerequisites
```bash
pip install synth-agent-sdk[agentcore]
```
### Wrapping Your Agent
```python
from synth import Agent
from synth.deploy.agentcore import agentcore_handler
agent = Agent(
model="bedrock/claude-sonnet-4-5",
instructions="You are a customer support agent.",
tools=[lookup_order, check_inventory],
)
app = agentcore_handler(agent)
```
### Deploy
```bash
synth deploy --target agentcore --dry-run # Validate first
synth deploy --target agentcore # Deploy
```
The packager automatically excludes `.env` files, credential files, and `.synth/checkpoints/` from the artifact. It also scans `agentcore.yaml` for accidental credential patterns and aborts if any are found.
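The credential scan can be illustrated with a few regexes. The patterns below are hypothetical — the packager's actual rules aren't documented here — but they show the abort-on-any-hit shape:

```python
import re

# Hypothetical patterns; the real packager's rules are not documented here.
CREDENTIAL_PATTERNS = [
    re.compile(r"AKIA[0-9A-Z]{16}"),                         # AWS access key ID
    re.compile(r"aws_secret_access_key\s*[:=]", re.IGNORECASE),
    re.compile(r"-----BEGIN (?:RSA |EC )?PRIVATE KEY-----"),
]

def scan_for_credentials(text: str) -> list[str]:
    """Return the patterns that match, so a packager can abort on any hit."""
    return [p.pattern for p in CREDENTIAL_PATTERNS if p.search(text)]

clean_hits = scan_for_credentials("aws_region: us-east-1\nmodel_id: claude-sonnet-4-5\n")
```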
### Secure User Identity
```python
from synth.deploy.agentcore import extract_user_id
user_id = extract_user_id(context) # Extracts from signed JWT in RequestContext
```
### Gateway MCP Client
```python
from synth.deploy.agentcore import create_gateway_client
client = create_gateway_client(
gateway_url="https://my-gateway.example.com",
client_id_param="/myapp/gateway/client_id",
client_secret_param="/myapp/gateway/client_secret",
)
mcp_client = client.as_mcp_client()
```
### Code Interpreter
```python
from synth.deploy.agentcore import CodeInterpreterTools
ci = CodeInterpreterTools()
result = ci.execute_python("import math; print(math.sqrt(144))")
print(result) # "12.0"
```
### SSM Config
```python
from synth.deploy.agentcore import get_ssm_parameter
db_url = get_ssm_parameter("/myapp/prod/db_url")
api_key = get_ssm_parameter("/myapp/prod/api_key", decrypt=True)
```
---
## Error Handling
All Synth errors inherit from `SynthError` and include `component` and `suggestion` fields.
| Error | When |
|-------|------|
| `SynthConfigError` | Missing API key, invalid model, missing provider package |
| `ToolDefinitionError` | `@tool` missing type annotations or docstring |
| `ToolExecutionError` | Tool function raised an exception |
| `GuardViolationError` | A guard check failed |
| `CostLimitError` | Cost guard limit exceeded |
| `SynthParseError` | Structured output couldn't be parsed after retries |
| `GraphRoutingError` | No edge condition matched at a graph node |
| `GraphLoopError` | Graph exceeded `max_iterations` |
| `RunNotFoundError` | No checkpoint found for the given `run_id` |
| `PipelineError` | A pipeline stage failed |
```python
from synth.errors import SynthConfigError, ToolExecutionError, GuardViolationError
try:
result = agent.run("Do something risky.")
except GuardViolationError as e:
print(f"Guard '{e.guard_name}' blocked: {e.remediation}")
except ToolExecutionError as e:
print(f"Tool '{e.tool_name}' failed: {e.original_error}")
except SynthConfigError as e:
print(f"Config issue in {e.component}: {e.suggestion}")
```
---
## Environment Variables
| Variable | Purpose | Required? |
|----------|---------|-----------|
| `ANTHROPIC_API_KEY` | Anthropic Claude API key | Only for `claude-*` models |
| `OPENAI_API_KEY` | OpenAI GPT API key | Only for `gpt-*` models |
| `GOOGLE_API_KEY` | Google Gemini API key | Only for `gemini-*` models |
| `AWS_ACCESS_KEY_ID` / `AWS_SECRET_ACCESS_KEY` | AWS credentials for Bedrock | Only for `bedrock/*` (or use IAM) |
| `SYNTH_TRACE_ENDPOINT` | HTTPS URL of an OTel collector | No |
| `SYNTH_NO_BANNER` | Set to `1` to skip the boot sequence | No |
| `NO_COLOR` | Disable colored terminal output | No |
---
## FAQ
**Do I need an API key?**
Yes, for cloud models. Ollama runs locally and needs no key.
**Can I use Synth in Jupyter?**
Yes. Synth detects an existing event loop and handles it automatically.
**How do I switch models?**
Change the `model` string. Install the matching extra and set the API key.
**What if the provider is down?**
Synth retries on HTTP 429 and 5xx with exponential backoff. Configure with `max_retries` and `retry_backoff`.
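That retry policy can be sketched in a few lines. This is an illustration, not Synth's source — the `TransientHTTPError` type and the injectable `sleep` are assumptions for testability — but it shows how `max_retries` and `retry_backoff` combine into exponential waits:

```python
class TransientHTTPError(Exception):
    def __init__(self, status):
        self.status = status

def run_with_retries(call, max_retries=3, retry_backoff=1.0, sleep=lambda s: None):
    """Retry `call` on 429/5xx responses with exponential backoff."""
    for attempt in range(max_retries + 1):
        try:
            return call()
        except TransientHTTPError as err:
            retryable = err.status == 429 or 500 <= err.status < 600
            if not retryable or attempt == max_retries:
                raise
            sleep(retry_backoff * 2 ** attempt)  # 1.0s, 2.0s, 4.0s, ...

# Stub provider: two 429 responses, then success.
responses = iter([429, 429, 200])
def flaky_call():
    status = next(responses)
    if status != 200:
        raise TransientHTTPError(status)
    return "ok"

waits = []
result = run_with_retries(flaky_call, sleep=waits.append)
```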
**Can I use multiple models in one app?**
Yes. Each `Agent` has its own model.
**How do I debug what my agent is doing?**
Use `result.trace.show()` for a visual timeline, or `synth dev my_agent.py` for an interactive terminal UI with `/trace` command.
**Is my data secure?**
Synth never logs or serializes API keys. Guards run before side-effecting operations. Checkpoints use JSON only. All provider calls use HTTPS.
**What are the core dependencies?**
`pydantic`, `httpx`, `click`, `typing-extensions`, `rich`, `prompt-toolkit`. Provider SDKs are optional extras.
---
## License
MIT
| text/markdown | null | null | null | null | null | null | [] | [] | null | null | >=3.10 | [] | [] | [] | [
"click>=8.0",
"httpx>=0.27",
"prompt-toolkit>=3.0",
"pydantic>=2.0",
"rich>=13.0",
"typing-extensions>=4.0",
"bedrock-agentcore-starter-toolkit>=0.1.0; extra == \"agentcore\"",
"bedrock-agentcore>=0.1.0; extra == \"agentcore\"",
"boto3>=1.35; extra == \"agentcore\"",
"botocore[crt]>=1.35; extra == \"agentcore\"",
"pyjwt>=2.8; extra == \"agentcore\"",
"requests>=2.31; extra == \"agentcore\"",
"anthropic>=0.39; extra == \"all\"",
"boto3>=1.35; extra == \"all\"",
"botocore[crt]>=1.35; extra == \"all\"",
"google-genai>=1.0; extra == \"all\"",
"ollama>=0.4; extra == \"all\"",
"openai>=1.0; extra == \"all\"",
"anthropic>=0.39; extra == \"anthropic\"",
"bedrock-agentcore-starter-toolkit>=0.1.0; extra == \"aws\"",
"bedrock-agentcore>=0.1.0; extra == \"aws\"",
"boto3>=1.35; extra == \"aws\"",
"botocore[crt]>=1.35; extra == \"aws\"",
"pyjwt>=2.8; extra == \"aws\"",
"requests>=2.31; extra == \"aws\"",
"boto3>=1.35; extra == \"bedrock\"",
"botocore[crt]>=1.35; extra == \"bedrock\"",
"google-genai>=1.0; extra == \"google\"",
"ollama>=0.4; extra == \"ollama\"",
"openai>=1.0; extra == \"openai\"",
"anthropic>=0.39; extra == \"quickstart\"",
"openai>=1.0; extra == \"quickstart\""
] | [] | [] | [] | [] | twine/6.2.0 CPython/3.13.12 | 2026-02-21T03:27:31.701392 | synth_agent_sdk-0.8.0.tar.gz | 1,155,648 | 82/86/d2d094a85b5c84d69d8e19d6fd525db42afdf91703ad9ce111957365eb0c/synth_agent_sdk-0.8.0.tar.gz | source | sdist | null | false | 250b3dbcb655c33f1f678d1f6f8887f8 | 89f14e24038ec798756964a0f999b155682751f43050ebff18453260edfe8cc3 | 8286d2d094a85b5c84d69d8e19d6fd525db42afdf91703ad9ce111957365eb0c | MIT | [] | 209 |
2.3 | galileo | 1.47.0 | Client library for the Galileo platform. | # Galileo Python SDK
<div align="center">
<strong>The Python client library for the Galileo AI platform.</strong>
[![PyPI][pypi-badge]][pypi-url]
[![Python Version][python-badge]][python-url]
![codecov.io][codecov-url]
</div>
[pypi-badge]: https://img.shields.io/pypi/v/galileo.svg
[pypi-url]: https://pypi.org/project/galileo/
[python-badge]: https://img.shields.io/pypi/pyversions/galileo.svg
[python-url]: https://www.python.org/downloads/
[codecov-url]: https://codecov.io/github/rungalileo/galileo-python/coverage.svg?branch=main
## Getting Started
### Installation
`pip install galileo`
### Setup
Set the following environment variables:
- `GALILEO_API_KEY`: Your Galileo API key
- `GALILEO_PROJECT`: (Optional) Project name
- `GALILEO_LOG_STREAM`: (Optional) Log stream name
- `GALILEO_LOGGING_DISABLED`: (Optional) Disable collecting and sending logs to Galileo.
Note: if you would like to point to an environment other than `app.galileo.ai`, you'll need to set the `GALILEO_CONSOLE_URL` environment variable.
### Usage
#### Logging traces
```python
import os
from galileo import galileo_context
from galileo.openai import openai
# If you've set your GALILEO_PROJECT and GALILEO_LOG_STREAM env vars, you can skip this step
galileo_context.init(project="your-project-name", log_stream="your-log-stream-name")
# Initialize the Galileo wrapped OpenAI client
client = openai.OpenAI(api_key=os.environ.get("OPENAI_API_KEY"))
def call_openai():
chat_completion = client.chat.completions.create(
messages=[{"role": "user", "content": "Say this is a test"}], model="gpt-4o"
)
return chat_completion.choices[0].message.content
# This will create a single span trace with the OpenAI call
call_openai()
# This will upload the trace to Galileo
galileo_context.flush()
```
You can also use the `@log` decorator to log spans. Here's how to create a workflow span with two nested LLM spans:
```python
from galileo import log
@log
def make_nested_call():
call_openai()
call_openai()
# If you've set your GALILEO_PROJECT and GALILEO_LOG_STREAM env vars, you can skip this step
galileo_context.init(project="your-project-name", log_stream="your-log-stream-name")
# This will create a trace with a workflow span and two nested LLM spans containing the OpenAI calls
make_nested_call()
```
Here's how to create a retriever span using the decorator:
```python
from galileo import log
@log(span_type="retriever")
def retrieve_documents(query: str):
return ["doc1", "doc2"]
# This will create a trace with a retriever span containing the documents in the output
retrieve_documents(query="history")
```
Here's how to create a tool span using the decorator:
```python
from galileo import log
@log(span_type="tool")
def tool_call(input: str = "tool call input"):
return "tool call output"
# This will create a trace with a tool span containing the tool call output
tool_call(input="question")
# This will upload the trace to Galileo
galileo_context.flush()
```
In some cases, you may want to wrap a block of code to start and flush a trace automatically. You can do this using the `galileo_context` context manager:
```python
from galileo import galileo_context
# This will log a block of code to the project and log stream specified in the context manager
with galileo_context():
content = make_nested_call()
print(content)
```
`galileo_context` also allows you to specify a separate project and log stream for the trace:
```python
from galileo import galileo_context
# This will log to the project and log stream specified in the context manager
with galileo_context(project="gen-ai-project", log_stream="test2"):
content = make_nested_call()
print(content)
```
You can also use the `GalileoLogger` for manual logging scenarios:
```python
from galileo.logger import GalileoLogger
# This will log to the project and log stream specified in the logger constructor
logger = GalileoLogger(project="gen-ai-project", log_stream="test3")
trace = logger.start_trace("Say this is a test")
logger.add_llm_span(
input="Say this is a test",
output="Hello, this is a test",
model="gpt-4o",
num_input_tokens=10,
num_output_tokens=3,
total_tokens=13,
duration_ns=1000,
)
logger.conclude(output="Hello, this is a test", duration_ns=1000)
logger.flush() # This will upload the trace to Galileo
```
OpenAI streaming example:
```python
import os
from galileo.openai import openai
client = openai.OpenAI(api_key=os.environ.get("OPENAI_API_KEY"))
stream = client.chat.completions.create(
messages=[{"role": "user", "content": "Say this is a test"}], model="gpt-4o", stream=True,
)
# This will create a single span trace with the OpenAI call
for chunk in stream:
print(chunk.choices[0].delta.content or "", end="")
```
In some cases (like long-running processes), it may be necessary to explicitly flush the trace to upload it to Galileo:
```python
import os
from galileo import galileo_context
from galileo.openai import openai
galileo_context.init(project="your-project-name", log_stream="your-log-stream-name")
# Initialize the Galileo wrapped OpenAI client
client = openai.OpenAI(api_key=os.environ.get("OPENAI_API_KEY"))
def call_openai():
chat_completion = client.chat.completions.create(
messages=[{"role": "user", "content": "Say this is a test"}], model="gpt-4o"
)
return chat_completion.choices[0].message.content
# This will create a single span trace with the OpenAI call
call_openai()
# This will upload the trace to Galileo
galileo_context.flush()
```
Using the Langchain callback handler:
```python
from galileo.handlers.langchain import GalileoCallback
from langchain.schema import HumanMessage
from langchain_openai import ChatOpenAI
# You can optionally pass a GalileoLogger instance to the callback if you don't want to use the default context
callback = GalileoCallback()
llm = ChatOpenAI(model="gpt-3.5-turbo", temperature=0.7, callbacks=[callback])
# Create a message with the user's query
messages = [HumanMessage(content="What is LangChain and how is it used with OpenAI?")]
# Make the API call
response = llm.invoke(messages)
print(response.content)
```
#### Datasets
Create a dataset:
```python
from galileo.datasets import create_dataset
create_dataset(
name="names",
content=[
{"name": "Lola"},
{"name": "Jo"},
]
)
```
Get a dataset:
```python
from galileo.datasets import get_dataset
dataset = get_dataset(name="names")
```
List all datasets:
```python
from galileo.datasets import list_datasets
datasets = list_datasets()
```
> **Dataset Record Fields:**
>
> - **`generated_output`**: New field for storing model-generated outputs separately from ground truth. This allows you to track both the expected output (ground truth) and the actual model output in the same dataset record. In the UI, this field is displayed as "Generated Output".
>
> Example:
> ```python
> from galileo.schema.datasets import DatasetRecord
>
> record = DatasetRecord(
> input="What is 2+2?",
> output="4", # Ground truth
> generated_output="The answer is 4" # Model-generated output
> )
> ```
>
> - **`output` / `ground_truth`**: The existing `output` field is now displayed as "Ground Truth" in the Galileo UI for better clarity. The SDK supports both `output` and `ground_truth` field names when creating records; both are normalized to `output` internally, ensuring full backward compatibility. You can use either field name, and access the value via the `ground_truth` property.
>
> Example:
> ```python
> from galileo.schema.datasets import DatasetRecord
>
> # Using 'output' (backward compatible)
> record1 = DatasetRecord(input="What is 2+2?", output="4")
> assert record1.ground_truth == "4" # Property accessor
>
> # Using 'ground_truth' (new recommended way)
> record2 = DatasetRecord(input="What is 2+2?", ground_truth="4")
> assert record2.output == "4" # Normalized internally
> assert record2.ground_truth == "4" # Property accessor
> ```
#### Experiments
Run an experiment with a prompt template:
```python
from galileo import Message, MessageRole
from galileo.datasets import get_dataset
from galileo.experiments import run_experiment
from galileo.prompts import create_prompt_template
prompt = create_prompt_template(
name="my-prompt",
project="new-project",
messages=[
Message(role=MessageRole.system, content="you are a helpful assistant"),
Message(role=MessageRole.user, content="why is sky blue?")
]
)
results = run_experiment(
"my-experiment",
dataset=get_dataset(name="storyteller-dataset"),
prompt=prompt,
metrics=["correctness"],
project="new-project",
)
```
Run an experiment with a runner function with local dataset:
```python
import openai
from galileo.experiments import run_experiment
dataset = [
{"name": "Lola"},
{"name": "Jo"},
]
def runner(input):
return openai.chat.completions.create(
model="gpt-4o",
messages=[
{"role": "user", "content": f"Say hello: {input['name']}"}
],
).choices[0].message.content
run_experiment(
"test experiment runner",
project="awesome-new-project",
dataset=dataset,
function=runner,
metrics=['output_tone'],
)
```
### Sessions
Sessions allow you to group related traces together. By default, a session is created for each trace and a session name is auto-generated. If you would like to override this, you can explicitly start a session:
```python
from galileo import GalileoLogger
logger = GalileoLogger(project="gen-ai-project", log_stream="my-log-stream")
session_id = logger.start_session(name="my-session-name")
...
logger.conclude()
logger.flush()
```
You can continue a previous session by using the same session ID that was previously generated:
```python
from galileo import GalileoLogger
logger = GalileoLogger(project="gen-ai-project", log_stream="my-log-stream")
logger.set_session(session_id="123e4567-e89b-12d3-a456-426614174000")
...
logger.conclude()
logger.flush()
```
All of this can also be done using the `galileo_context` context manager:
```python
from galileo import galileo_context
session_id = galileo_context.start_session(name="my-session-name")
# OR
galileo_context.set_session(session_id=session_id)
```
| text/markdown | Galileo Technologies Inc. | team@galileo.ai | null | null | Apache-2.0 | null | [
"License :: OSI Approved :: Apache Software License",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: 3.13"
] | [] | null | null | <3.15,>=3.10 | [] | [] | [] | [
"attrs>=22.2.0",
"backoff<3.0.0,>=2.2.1",
"crewai<=0.201.1,>=0.152.0; python_version < \"3.14\" and extra == \"all\"",
"crewai<=0.201.1,>=0.152.0; python_version < \"3.14\" and extra == \"crewai\"",
"galileo-core<4.0.0,>=3.82.1",
"langchain; extra == \"all\"",
"langchain; extra == \"langchain\"",
"langchain-core; extra == \"all\"",
"langchain-core; extra == \"langchain\"",
"openai; extra == \"all\"",
"openai; extra == \"openai\"",
"openai-agents; extra == \"all\"",
"openai-agents; extra == \"openai\"",
"opentelemetry-api<2.0.0,>=1.38.0; extra == \"all\"",
"opentelemetry-api<2.0.0,>=1.38.0; extra == \"otel\"",
"opentelemetry-exporter-otlp<2.0.0,>=1.38.0; extra == \"all\"",
"opentelemetry-exporter-otlp<2.0.0,>=1.38.0; extra == \"otel\"",
"opentelemetry-sdk<2.0.0,>=1.38.0; extra == \"all\"",
"opentelemetry-sdk<2.0.0,>=1.38.0; extra == \"otel\"",
"packaging<25.0,>=24.2; extra == \"all\"",
"packaging<25.0,>=24.2; extra == \"openai\"",
"pydantic<3.0.0,>=2.12.0",
"pyjwt<3.0.0,>=2.8.0",
"python-dateutil<3.0.0,>=2.8.0",
"starlette; extra == \"all\"",
"starlette; extra == \"middleware\"",
"wrapt<2.0,>=1.14"
] | [] | [] | [] | [
"Repository, https://github.com/rungalileo/galileo-python"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-21T03:25:27.960634 | galileo-1.47.0.tar.gz | 698,207 | fe/44/e3e73298fe0fdb33dec0672ccff2a7b43116071c565e1b6ae2393ab12f9c/galileo-1.47.0.tar.gz | source | sdist | null | false | c152d0fd02d517855336ae6ba24643fb | add641a43ae7363934ddaf9d091bb4525c7adc81d282e27e3653e81fe7a9884a | fe44e3e73298fe0fdb33dec0672ccff2a7b43116071c565e1b6ae2393ab12f9c | null | [] | 271 |
2.4 | music-assistant-frontend | 2.17.92 | The Music Assistant frontend | # Music Assistant frontend (Vue PWA)
The Music Assistant frontend/panel is developed in Vue, development instructions below.
## Recommended IDE Setup
[VSCode](https://code.visualstudio.com/) + [Volar](https://marketplace.visualstudio.com/items?itemName=johnsoncodehk.volar) (and disable Vetur) + [TypeScript Vue Plugin (Volar)](https://marketplace.visualstudio.com/items?itemName=johnsoncodehk.vscode-typescript-vue-plugin).
## Type Support for `.vue` Imports in TS
TypeScript cannot handle type information for `.vue` imports by default, so we replace the `tsc` CLI with `vue-tsc` for type checking. In editors, we need [TypeScript Vue Plugin (Volar)](https://marketplace.visualstudio.com/items?itemName=johnsoncodehk.vscode-typescript-vue-plugin) to make the TypeScript language service aware of `.vue` types.
If the standalone TypeScript plugin doesn't feel fast enough to you, Volar has also implemented a [Take Over Mode](https://github.com/johnsoncodehk/volar/discussions/471#discussioncomment-1361669) that is more performant. You can enable it by the following steps:
1. Disable the built-in TypeScript Extension
1. Run `Extensions: Show Built-in Extensions` from VSCode's command palette
2. Find `TypeScript and JavaScript Language Features`, right click and select `Disable (Workspace)`
2. Reload the VSCode window by running `Developer: Reload Window` from the command palette.
## Customize configuration
See [Vite Configuration Reference](https://vitejs.dev/config/).
## Project Setup
```sh
nvm use node
yarn install
```
### Compile and Hot-Reload for Development
```sh
yarn dev
```
This launches an auto-reloading development environment (usually at http://localhost:3000).
Open the URL in your browser and a popup will ask for the location of the MA server.
You can connect either to a locally launched dev server or to an existing server running on port 8095.
### Type-Check, Compile and Minify for Production
```sh
yarn build
```
### Lint with [ESLint](https://eslint.org/)
```sh
yarn lint
```
## UI Framework
This project is migrating from **Vuetify** to **[shadcn-vue](https://www.shadcn-vue.com/)** as its primary UI component library.
### Guidelines
- **All new features** should be built using shadcn-vue components
- Shadcn-vue components are located in `src/components/ui/`
- When working on existing features that use Vuetify, consider refactoring them to use shadcn-vue components if you have time
- Refer to the [shadcn-vue documentation](https://www.shadcn-vue.com/) for available components and usage examples
# Translation Management
We use Lokalise to manage the translation files for the Music Assistant frontend.
[<img src="https://github.com/lokalise/i18n-ally/raw/screenshots/lokalise-logo.png?raw=true" alt="Lokalise logo" width="275px">](https://lokalise.com)
### Contributing
If you wish to assist in translating Music Assistant into a language that it currently does not support, please see https://music-assistant.io/help/lokalise/.
---
[](https://www.openhomefoundation.org/)
| text/markdown | null | The Music Assistant Authors <m.vanderveldt@outlook.com> | null | null | Apache-2.0 | null | [] | [
"any"
] | null | null | >=3.11.0 | [] | [] | [] | [] | [] | [] | [] | [
"Homepage, https://github.com/music-assistant/frontend"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-21T03:25:08.831153 | music_assistant_frontend-2.17.92.tar.gz | 4,673,509 | 17/e6/fa5e7a6604a2f84db2be88e2f18d52d382f12574f6d741f818fdd7ce58eb/music_assistant_frontend-2.17.92.tar.gz | source | sdist | null | false | 40719da2db333953c24a2fad0b65747b | 7dbe0d49287955da29b66a4998e730e586b285fab284e9955e4565fb765efaeb | 17e6fa5e7a6604a2f84db2be88e2f18d52d382f12574f6d741f818fdd7ce58eb | null | [
"LICENSE"
] | 784 |
2.4 | pyAgrum-nightly | 2.3.2.9.dev202602211770834561 | Bayesian networks and other Probabilistic Graphical Models. |
pyAgrum
=======
``pyAgrum`` is a scientific C++ and Python library dedicated to Bayesian Networks and other Probabilistic Graphical Models. It provides a high-level interface to the part of aGrUM that allows one to create, model, learn, use, calculate with and embed Bayesian Networks and other graphical models. Some specific (Python and C++) codes are added in order to simplify and extend the ``aGrUM`` API.
Important
=========
Since pyAgrum 2.0.0, the package name follows PEP8 rules and is now ``pyagrum`` (lowercase).
Please use ``import pyagrum`` instead of ``import pyAgrum`` in your code.
See the `CHANGELOG <https://gitlab.com/agrumery/aGrUM/-/blob/master/CHANGELOG.md?ref_type=heads#changelog-for-200>`_ for more details.
Example
=======
.. code:: python

    import pyagrum as gum

    # Creating BayesNet with 4 variables
    bn = gum.BayesNet('WaterSprinkler')
    print(bn)

    # Adding nodes the long way
    c = bn.add(gum.LabelizedVariable('c', 'cloudy ?', ["Yes", "No"]))
    print(c)

    # Adding nodes the short way
    s, r, w = [bn.add(name, 2) for name in "srw"]
    print(s, r, w)
    print(bn)

    # Adding arcs c -> s, c -> r, s -> w, r -> w
    bn.addArc(c, s)
    for link in [(c, r), (s, w), (r, w)]:
        bn.addArc(*link)
    print(bn)

    # or, equivalently, creating the BN with 4 variables and the arcs in one line
    bn = gum.fastBN("w<-r<-c{Yes|No}->s->w")

    # Filling CPTs
    bn.cpt("c").fillWith([0.5, 0.5])
    bn.cpt("s")[0, :] = 0.5            # equivalent to [0.5,0.5]
    bn.cpt("s")[{"c": 1}] = [0.9, 0.1]
    bn.cpt("w")[0, 0, :] = [1, 0]      # r=0,s=0
    bn.cpt("w")[0, 1, :] = [0.1, 0.9]  # r=0,s=1
    bn.cpt("w")[{"r": 1, "s": 0}] = [0.1, 0.9]  # r=1,s=0
    bn.cpt("w")[1, 1, :] = [0.01, 0.99]         # r=1,s=1
    bn.cpt("r")[{"c": 0}] = [0.8, 0.2]
    bn.cpt("r")[{"c": 1}] = [0.2, 0.8]

    # Saving BN as a BIF file
    gum.saveBN(bn, "WaterSprinkler.bif")

    # Loading BN from a BIF file
    bn2 = gum.loadBN("WaterSprinkler.bif")

    # Inference
    ie = gum.LazyPropagation(bn)
    ie.makeInference()
    print(ie.posterior("w"))

    # Adding hard evidence
    ie.setEvidence({"s": 1, "c": 0})
    ie.makeInference()
    print(ie.posterior("w"))

    # Adding soft and hard evidence
    ie.setEvidence({"s": [0.5, 1], "c": 0})
    ie.makeInference()
    print(ie.posterior("w"))
LICENSE
=======
Copyright (C) 2005-2024 by Pierre-Henri WUILLEMIN and Christophe GONZALES
{prenom.nom}_at_lip6.fr
The aGrUM/pyAgrum library and all its derivatives are distributed under the dual LGPLv3+MIT license, see LICENSE.LGPL and LICENSE.MIT.
You can therefore integrate this library into your software solution but it will remain covered by either the LGPL v.3 license or the MIT license or, as aGrUM itself, by the dual LGPLv3+MIT license at your convenience.
If you wish to integrate the aGrUM library into your product without being affected by this license, please contact us (info@agrum.org).
This library depends on different third-party codes. See src/aGrUM/tools/externals for specific COPYING and explicit permission of
the authors, if needed.
If you use aGrUM/pyAgrum as a dependency of your own project, your project is not contaminated by the GPL licenses of some of these third-party
codes as long as you use only their aGrUM/pyAgrum interfaces and not their native interfaces.
Authors
=======
- Pierre-Henri Wuillemin
- Christophe Gonzales
Maintainers
===========
- Lionel Torti
- Gaspard Ducamp
| null | Pierre-Henri Wuillemin and Christophe Gonzales | info@agrum.org | null | null | null | probabilities probabilistic-graphical-models inference diagnosis | [
"Development Status :: 5 - Production/Stable",
"License :: OSI Approved :: MIT License",
"License :: OSI Approved :: GNU Lesser General Public License v3 (LGPLv3)",
"Operating System :: Microsoft :: Windows",
"Operating System :: MacOS :: MacOS X",
"Operating System :: POSIX :: Linux",
"Programming Language :: C++",
"Programming Language :: Python",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: 3.13",
"Programming Language :: Python :: 3.14",
"Topic :: Scientific/Engineering :: Artificial Intelligence",
"Topic :: Scientific/Engineering :: Information Analysis",
"Topic :: Scientific/Engineering :: Mathematics",
"Topic :: Software Development :: Libraries",
"Topic :: Software Development :: Libraries :: Python Modules",
"Intended Audience :: Developers",
"Intended Audience :: Financial and Insurance Industry",
"Intended Audience :: Information Technology",
"Intended Audience :: Science/Research"
] | [
"any"
] | https://agrum.gitlab.io/ | null | >=3.10 | [] | [] | [] | [
"numpy",
"matplotlib",
"pydot",
"scikit-learn>=1.7"
] | [] | [] | [] | [
"Bug Tracker, https://gitlab.com/agrumery/aGrUM/-/issues",
"Documentation, https://pyagrum.readthedocs.io/",
"Source Code, https://gitlab.com/agrumery/aGrUM"
] | twine/6.1.0 CPython/3.8.10 | 2026-02-21T03:24:39.555304 | pyagrum_nightly-2.3.2.9.dev202602211770834561-cp310-abi3-win_amd64.whl | 3,089,799 | dc/21/e50022e4d58d2aa93d5c0bb984ffdb4ce6c360a0058e128099cb632a37fc/pyagrum_nightly-2.3.2.9.dev202602211770834561-cp310-abi3-win_amd64.whl | cp310 | bdist_wheel | null | false | df412a225797a6202863a57071f54c34 | cbc942970777cef2729f93e1830780dcb490b8e4d60a20f94661c056b6360bcd | dc21e50022e4d58d2aa93d5c0bb984ffdb4ce6c360a0058e128099cb632a37fc | LGPL-3.0-only OR MIT | [] | 0 |
2.4 | gravity-sdk | 0.1.0 | Python server SDK for fetching Gravity ads | # gravity-py
Python SDK for requesting ads from the [Gravity](https://trygravity.ai) API.
```python
from gravity_sdk import Gravity
gravity = Gravity(production=True) # reads GRAVITY_API_KEY from env
result = await gravity.get_ads(request, messages, placements)
```
Works with FastAPI, Starlette, Django, and Flask. Only dependency is [httpx](https://www.python-httpx.org/).
## Install
```bash
pip install gravity-sdk
```
Set `GRAVITY_API_KEY` in your server environment.
## Integration
Add a few lines to your existing streaming chat endpoint. The ad request runs in parallel with your LLM call — zero added latency.
```diff
+ import asyncio
+ from gravity_sdk import Gravity
+ gravity = Gravity(production=True)
  @app.post("/api/chat")
  async def chat(request: Request):
      body = await request.json()
      messages = body["messages"]
+     ad_task = asyncio.create_task(
+         gravity.get_ads(request, messages, [{"placement": "chat", "placement_id": "main"}])
+     )

      async def event_stream():
          async for token in stream_your_llm(messages):
              yield f"data: {json.dumps({'type': 'chunk', 'content': token})}\n\n"
-         yield f"data: {json.dumps({'type': 'done'})}\n\n"
+         ad_result = await ad_task
+         ads = [a.to_dict() for a in ad_result.ads]
+         yield f"data: {json.dumps({'type': 'done', 'ads': ads})}\n\n"

      return StreamingResponse(event_stream(), media_type="text/event-stream")
```
`gravity.get_ads()` takes your server's request object, the conversation messages, and your ad placements, then calls the Gravity API and returns the ads. Never raises — returns `AdResult(ads=[])` on any failure.
### Message handling
The SDK sends the last 2 conversational messages to the Gravity API for contextual ad matching. Only messages with recognized conversational roles are included:
- `user`, `assistant`, `system`, `developer`, `model` (Gemini's alias for `assistant`)
Messages with other roles (e.g. `tool`, `function`, `ipython`) are filtered out because they typically contain structured data rather than natural language.
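The filtering described above amounts to an allow-list pass over the message history followed by taking the last two entries; a minimal sketch (the helper name is an assumption for illustration, not the SDK's actual implementation):

```python
# Hypothetical sketch of the role filtering described above -- not the
# SDK's internals. Keeps only conversational roles, then takes the last 2.
ALLOWED_ROLES = {"user", "assistant", "system", "developer", "model"}

def select_context_messages(messages: list[dict], limit: int = 2) -> list[dict]:
    # Drop structured-data roles such as "tool", "function", or "ipython"
    conversational = [m for m in messages if m.get("role") in ALLOWED_ROLES]
    return conversational[-limit:]
```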
### Constructor
```python
Gravity(*, api_key=None, api_url=None, timeout=3.0, production=False, relevancy=0.2)
```
| Parameter | Type | Description |
|-------------|---------|------------------------------------------------------------------------------|
| `api_key` | `str` | Gravity API key (default: `GRAVITY_API_KEY` env var) |
| `api_url` | `str` | Gravity API endpoint URL (default: production) |
| `timeout` | `float` | Request timeout in seconds (default: `3.0`) |
| `production` | `bool` | Serve real ads when `True`. Defaults to `False` (test ads). |
| `relevancy` | `float` | Minimum relevancy threshold, 0.0–1.0 (default: `0.2`). Lower = more ads with weaker contextual matches. |
The client reuses its HTTP connection pool across calls. Use `async with Gravity() as g:` or call `await gravity.close()` for explicit cleanup.
### Return types
```python
@dataclass
class AdResult:
    ads: list[AdResponse]      # Parsed ad objects
    status: int                # 200, 204, 0 (error)
    elapsed_ms: str            # e.g. "142"
    request_body: dict | None
    error: str | None

@dataclass
class AdResponse:
    ad_text: str
    title: str | None
    cta: str | None
    brand_name: str | None
    url: str | None
    favicon: str | None
    imp_url: str | None
    click_url: str | None
Both have `.to_dict()` methods that serialize to the camelCase JSON shape renderers expect.
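That camelCase shape is a straightforward snake_case-to-camelCase rename of the dataclass field names; a rough sketch of the convention (an illustration only, not the SDK's actual `to_dict()` code):

```python
# Illustrative snake_case -> camelCase key conversion, as a renderer-facing
# serializer might do it. Not the SDK's actual to_dict() implementation.
def to_camel(name: str) -> str:
    head, *rest = name.split("_")
    return head + "".join(part.capitalize() for part in rest)

def serialize(fields: dict) -> dict:
    return {to_camel(k): v for k, v in fields.items()}
```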
## Development
```bash
uv sync
uv run pytest
uv run ruff check src/ tests/
```
## License
MIT
| text/markdown | Gravity Labs | null | null | null | null | advertising, ai-ads, contextual-advertising, gravity, llm, sdk | [
"Development Status :: 3 - Alpha",
"Framework :: AsyncIO",
"Framework :: FastAPI",
"Intended Audience :: Developers",
"License :: OSI Approved :: MIT License",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: 3.13",
"Typing :: Typed"
] | [] | null | null | >=3.10 | [] | [] | [] | [
"httpx>=0.27"
] | [] | [] | [] | [
"Homepage, https://trygravity.ai",
"Repository, https://github.com/Try-Gravity/gravity-py",
"Issues, https://github.com/Try-Gravity/gravity-py/issues"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-21T03:24:34.240598 | gravity_sdk-0.1.0.tar.gz | 9,568 | 13/1e/710da3172b28f3831d42ec841744ea3fece5a33dba4cc3a92015b16a9f2e/gravity_sdk-0.1.0.tar.gz | source | sdist | null | false | 67bc9d7492666be16edd7d0c56320702 | 535ac51a1bdb40e69a5507cf580872eae097ac64d08cd48604fe4fa2cdb2cd88 | 131e710da3172b28f3831d42ec841744ea3fece5a33dba4cc3a92015b16a9f2e | MIT | [
"LICENSE"
] | 227 |
2.4 | ai-agentcompany | 0.1.0 | Spin up an AI agent company - a business run by AI agents, managed by you | # AgentCompany
**Spin up an AI agent company - a business run by AI agents, managed by you.**
AgentCompany lets a solo entrepreneur create a virtual company staffed entirely by AI agents. Each agent has a specific business role (CEO, CTO, Developer, Marketer, etc.), they collaborate on tasks, and you manage everything through a CLI or web dashboard.
## Quick Start
```bash
pip install ai-agentcompany
```
### 1. Initialize your company
```bash
agentcompany init --name "My AI Startup"
```
### 2. Configure your LLM provider
Edit `.agentcompany/config.yaml`:
```yaml
company:
  name: "My AI Startup"

llm:
  default_provider: anthropic
  anthropic:
    api_key: ${ANTHROPIC_API_KEY}
    model: claude-sonnet-4-5-20250929
  openai:
    api_key: ${OPENAI_API_KEY}
    model: gpt-4o
### 3. Hire your team
```bash
agentcompany hire ceo --name Alice
agentcompany hire cto --name Bob
agentcompany hire developer --name Carol
agentcompany hire marketer --name Dave
```
### 4. Run autonomously
Give the CEO a goal and watch the company run:
```bash
agentcompany run "Build a landing page for our new product"
```
The CEO will break down the goal, delegate tasks to the team, and agents will collaborate to deliver results.
## Commands
| Command | Description |
|---------|-------------|
| `agentcompany init` | Initialize a new company |
| `agentcompany hire <role>` | Hire an agent |
| `agentcompany fire <name>` | Remove an agent |
| `agentcompany team` | List all agents |
| `agentcompany assign "<task>"` | Assign a task |
| `agentcompany tasks` | Show the task board |
| `agentcompany chat <name>` | Chat with an agent |
| `agentcompany run "<goal>"` | Autonomous mode |
| `agentcompany broadcast "<msg>"` | Message all agents |
| `agentcompany dashboard` | Launch web dashboard |
| `agentcompany status` | Company overview |
| `agentcompany roles` | List available roles |
## Available Roles
| Role | Title | Reports To |
|------|-------|------------|
| `ceo` | Chief Executive Officer | Owner |
| `cto` | Chief Technology Officer | CEO |
| `developer` | Software Developer | CTO |
| `marketer` | Head of Marketing | CEO |
| `sales` | Head of Sales | CEO |
| `support` | Customer Support Lead | CEO |
| `finance` | CFO / Finance | CEO |
| `hr` | Head of HR | CEO |
| `project_manager` | Project Manager | CEO |
## Web Dashboard
Launch the dashboard:
```bash
agentcompany dashboard --port 8420
```
Features:
- **Org Chart** - Visual company hierarchy
- **Agent Roster** - See all agents and their roles
- **Task Board** - Kanban-style task management
- **Chat** - Talk directly to any agent
- **Activity Feed** - Real-time event stream
- **Autonomous Mode** - Set goals from the UI
## Multi-Provider LLM Support
Configure different LLM providers per agent:
```yaml
llm:
  default_provider: anthropic
  anthropic:
    api_key: ${ANTHROPIC_API_KEY}
    model: claude-sonnet-4-5-20250929
  openai:
    api_key: ${OPENAI_API_KEY}
    model: gpt-4o
    base_url: https://api.openai.com/v1  # or any compatible endpoint

agents:
  - name: Alice
    role: ceo
    provider: anthropic
  - name: Bob
    role: developer
    provider: openai
## Custom Roles
Create custom roles by adding YAML files:
```yaml
# .agentcompany/roles/custom_analyst.yaml
name: analyst
title: "Data Analyst"
description: "Analyzes data and creates reports"
system_prompt: |
  You are a data analyst at {company_name}.
  Your expertise: data analysis, visualization, reporting.
  Team: {team_members}
  Delegates: {delegates}
default_tools:
  - code_exec
  - file_io
can_delegate_to: []
reports_to: cto
```
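The `{company_name}`, `{team_members}`, and `{delegates}` placeholders suggest ordinary template substitution when a role is instantiated; a hedged sketch of how such a prompt could be rendered (the function name and `str.format`-style substitution are assumptions, not AgentCompany's actual loader):

```python
# Illustrative rendering of a role's system_prompt template -- assumed
# str.format()-style substitution, not AgentCompany's internals.
def render_system_prompt(template: str, *, company_name: str,
                         team_members: list[str], delegates: list[str]) -> str:
    return template.format(
        company_name=company_name,
        team_members=", ".join(team_members) or "none",
        delegates=", ".join(delegates) or "none",
    )
```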
## Built-in Tools
Agents have access to these tools based on their role:
- **web_search** - Search the web for information
- **read_file / write_file / list_files** - File operations in the workspace
- **code_exec** - Execute Python code
- **shell** - Run shell commands
- **delegate_task** - Delegate work to other agents
- **report_result** - Submit task results
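"Based on their role" implies a per-role allow-list over this tool set; a small sketch of what such a mapping could look like (the role-to-tool assignments and helper here are hypothetical, not AgentCompany's actual configuration):

```python
# Hypothetical role -> allowed-tools mapping, illustrating "access to these
# tools based on their role". Assignments are made up for the example.
ROLE_TOOLS = {
    "developer": {"code_exec", "shell", "read_file", "write_file", "report_result"},
    "marketer": {"web_search", "write_file", "report_result"},
    "ceo": {"delegate_task", "report_result"},
}

def can_use(role: str, tool: str) -> bool:
    return tool in ROLE_TOOLS.get(role, set())
```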
## License
MIT
| text/markdown | AgentCompany | null | null | null | null | agents, ai, automation, business, llm | [
"Development Status :: 3 - Alpha",
"Intended Audience :: Developers",
"License :: OSI Approved :: MIT License",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Topic :: Scientific/Engineering :: Artificial Intelligence"
] | [] | null | null | >=3.10 | [] | [] | [] | [
"aiosqlite>=0.19.0",
"anthropic>=0.39.0",
"fastapi>=0.110.0",
"httpx>=0.27.0",
"jinja2>=3.1.0",
"openai>=1.0.0",
"pydantic>=2.0",
"pyyaml>=6.0",
"rich>=13.0",
"typer[all]>=0.9.0",
"uvicorn[standard]>=0.27.0",
"websockets>=12.0"
] | [] | [] | [] | [
"Homepage, https://github.com/gobeyondfj-cmd/agentcompany",
"Documentation, https://github.com/gobeyondfj-cmd/agentcompany#readme",
"Issues, https://github.com/gobeyondfj-cmd/agentcompany/issues"
] | twine/6.2.0 CPython/3.11.5 | 2026-02-21T03:24:02.080737 | ai_agentcompany-0.1.0.tar.gz | 48,564 | 0c/18/43585823fc67788d35983a0467584bfa8771792ad48c08b1317218759255/ai_agentcompany-0.1.0.tar.gz | source | sdist | null | false | 795a8e8a09d0e0bb299ac324cbe1409f | 44bed1d44e9d24aebf19ad5d5040583993ea64e20d0c6ccae0eb5c7f6f7ac858 | 0c1843585823fc67788d35983a0467584bfa8771792ad48c08b1317218759255 | MIT | [
"LICENSE"
] | 113 |
2.4 | axionquant-sdk | 1.1.2 | Short description | # Axion Python SDK
A comprehensive Python client for the [Axion](https://axionquant.com) API: financial market data, technical analysis, visualization, and machine learning, built for quantitative research in Jupyter notebooks and Python scripts.
## Installation
```bash
pip install axionquant-sdk
```
## Quick Start
[Get your free API key](https://axionquant.com/dashboard/api-keys)
```python
from axion import Axion, ta, visualize, utils as axion_utils
client = Axion(api_key="your_api_key_here")
# Fetch stock prices and convert to DataFrame
prices = client.stocks.prices("AAPL", from_date="2024-01-01")
df = axion_utils.df(prices)
# Run technical analysis
roc = ta.roc(df, 'close')
# Visualize
visualize.candles(df)
```
## Modules
The SDK is organized into five importable modules:
```python
from axion import Axion # API client
from axion import ta # Technical analysis
from axion import visualize # Charting & visualization
from axion import utils # Data utilities
from axion import models # ML / prediction models
```
---
## Axion Client
Initialize with your API key:
```python
client = Axion(api_key="your_api_key_here")
```
### Stocks
```python
client.stocks.tickers(country="america")
client.stocks.quote("AAPL")
client.stocks.prices("AAPL", from_date="2024-01-01", to_date="2024-12-31", frame="daily")
```
### Crypto
```python
client.crypto.tickers(type="coin")
client.crypto.quote("BTC")
client.crypto.prices("BTC", from_date="2024-01-01", frame="weekly")
```
### Forex
```python
client.forex.tickers()
client.forex.quote("EURUSD")
client.forex.prices("EURUSD", from_date="2024-01-01")
```
### Futures
```python
client.futures.tickers(exchange="CME")
client.futures.quote("ES")
client.futures.prices("ES", from_date="2024-01-01")
```
### Indices
```python
client.indices.tickers()
client.indices.quote("SPX")
client.indices.prices("SPX", from_date="2024-01-01")
```
### Company Profiles
```python
client.profiles.profile("AAPL") # Business summary
client.profiles.info("AAPL") # Company info
client.profiles.statistics("AAPL") # Key ratios
client.profiles.summary("AAPL") # Market data summary
client.profiles.recommendation("AAPL") # Analyst recommendations
client.profiles.calendar("AAPL") # Earnings dates / dividends
```
### Earnings
```python
client.earnings.history("AAPL")
client.earnings.trend("AAPL")
client.earnings.index("AAPL")
client.earnings.report("AAPL", year="2024", quarter="Q1")
```
### Financials
```python
client.financials.revenue("AAPL", periods=8)
client.financials.net_income("AAPL")
client.financials.free_cash_flow("AAPL")
client.financials.total_assets("AAPL")
client.financials.total_liabilities("AAPL")
client.financials.stockholders_equity("AAPL")
client.financials.operating_cash_flow("AAPL")
client.financials.capital_expenditures("AAPL")
client.financials.shares_outstanding_basic("AAPL")
client.financials.shares_outstanding_diluted("AAPL")
client.financials.metrics("AAPL") # Calculated ratios
client.financials.snapshot("AAPL") # Full data snapshot
```
### Insiders & Ownership
```python
client.insiders.individuals("AAPL") # Insider holders
client.insiders.institutions("AAPL") # Institutional ownership
client.insiders.funds("AAPL") # Fund ownership
client.insiders.ownership("AAPL") # Major holders breakdown
client.insiders.transactions("AAPL") # Insider transactions
client.insiders.activity("AAPL") # Net share purchase activity
```
### SEC Filings
```python
client.filings.filings("AAPL", limit=10, form="10-K")
client.filings.forms("AAPL", form_type="10-Q", year="2024", quarter="Q2")
client.filings.search(year="2024", quarter="Q1", form="10-K", ticker="AAPL")
client.filings.desc_forms() # List available form types
```
### Economic Data
```python
client.econ.search("unemployment rate")
client.econ.dataset("UNRATE")
client.econ.calendar(
from_date="2024-01-01",
to_date="2024-12-31",
country="US",
min_importance=3,
currency="USD",
category="employment"
)
```
### ETFs
```python
client.etfs.fund("SPY")
client.etfs.holdings("SPY")
client.etfs.exposure("SPY")
```
### News
```python
client.news.general()
client.news.company("AAPL")
client.news.country("US")
client.news.category("technology")
```
### Sentiment
```python
client.sentiment.all("AAPL")
client.sentiment.social("AAPL")
client.sentiment.news("AAPL")
client.sentiment.analyst("AAPL")
```
### ESG
```python
client.esg.data("AAPL")
```
### Credit Ratings
```python
client.credit.search("Apple Inc")
client.credit.ratings("entity_id")
```
### Supply Chain
```python
client.supply_chain.customers("AAPL")
client.supply_chain.suppliers("AAPL")
client.supply_chain.peers("AAPL")
```
### Web Traffic
```python
client.web_traffic.traffic("AAPL")
```
---
## Technical Analysis (`ta`)
All functions accept a pandas DataFrame and return a Series (or tuple of Series).
```python
import axion.ta as ta
# Trend
ta.sma(df, column="close", period=14)
ta.ema(df, column="close", period=14)
ta.dema(df, column="close", period=14)
ta.ssma(df, column="close", period=14)
ta.trima(df, column="close", period=14)
ta.kama(df, column="close", period=14)
# Momentum & Oscillators
ta.rsi(df, column="close", period=14)
ta.macd(df) # returns (macd_line, signal, histogram)
ta.roc(df, column="close", period=10)
ta.mom(df, column="close", period=10)
ta.cmo(df, column="close", period=20)
ta.stochastic_oscillator(df) # returns (K, D)
ta.williams_r(df, period=14)
ta.adx(df, period=14)
# Volatility & Channels
ta.atr(df, period=14)
ta.bbands(df) # returns (upper, mid, lower)
ta.kc(df) # Keltner Channels
# Volume
ta.obv(df)
ta.vpt(df)
ta.vwap(df)
# Trend Direction
ta.vi(df, period=14) # Vortex Indicator (VI+, VI-)
ta.ichi(df) # Ichimoku Cloud
ta.sar(df) # Parabolic SAR
ta.fib(df) # Fibonacci Pivot Points
```
---
## Visualization (`visualize`)
All chart functions accept a pandas DataFrame and render an interactive Plotly chart.
```python
import axion.visualize as visualize
visualize.candles(df) # Candlestick chart
visualize.line(df, x="time", y="close") # Line chart
visualize.bar(df, x="time", y="volume") # Bar chart
visualize.barh(df, x="value", y="label") # Horizontal bar
visualize.scatter(df, x="time", y="close") # Scatter plot
visualize.fit(df, x="revenue", y="price") # Scatter + OLS trendline
visualize.area(df, x="time", y="value", group="sector")
visualize.pie(df, values="marketCap", labels="ticker")
visualize.radar(df, values="score", labels="category")
visualize.heatmap(df, x="col1", y="col2")
visualize.cov(df) # Correlation heatmap
visualize.polls(df) # Multi-series line chart
visualize.spread(dfs, x="time", y="close") # Two assets + spread
visualize.tree(df) # Treemap (sector/industry/symbol)
# Flexible multi-series chart
visualize.graph(df, x="time", bars=["volume"], lines=["close", "sma"])
```
---
## Utilities (`utils`)
```python
import axion.utils as axion_utils
# Date helpers
axion_utils.d("1 month ago") # Natural language → "YYYY-MM-DD"
axion_utils.to_timestamp("2024-01-01")
axion_utils.nearest_day("2024-01-06") # Nearest market open day
# Date shorthand constants
axion_utils.today / axion_utils.yesterday / axion_utils.weekago
axion_utils.monthago / axion_utils.yearago / axion_utils.yearfrom
# Frame shorthand constants
axion_utils.d # daily
axion_utils.w # weekly
axion_utils.m # monthly
axion_utils.y # yearly
# DataFrame helpers
axion_utils.df(items) # Create DataFrame from list of dicts
axion_utils.pds(list_of_lists) # Convert 2D list of dicts → list of DataFrames
axion_utils.stack(dfs) # Concat DataFrames vertically
axion_utils.stitch(dfs, col="time") # Merge different data types on shared column
axion_utils.snap(dfs, names, overwrite) # Merge same-type DataFrames side-by-side
axion_utils.filter(df, col, items) # Filter rows by column values
axion_utils.dedup(lst) # Remove duplicates from list
axion_utils.simmer(arr) # Flatten 2D list to 1D
axion_utils.resample(df, "2024-01-01 2024-12-31")
# Analysis helpers
axion_utils.relativity(df, cols) # Add pct_change columns
axion_utils.indexed(prices) # Average a dict of price DataFrames
axion_utils.composite(dfs) # Combine and average financials/facts
axion_utils.contrast(dfs, joins) # Reshape for side-by-side graphing
axion_utils.compare(dfs, joins) # Cross-DataFrame % diff comparison
axion_utils.gainers(prices, frame) # Top gainers from price dict
axion_utils.losers(prices, frame) # Top losers from price dict
axion_utils.overlap(dfs, col) # Intersection of column values
axion_utils.difference(dfs, col) # Non-overlapping values
# Concurrency
axion_utils.work(df, callback, ref) # Threaded execution with progress bar
# Caching / Persistence
axion_utils.cache(id, fn) # Load from cache or run function
axion_utils.save(id, obj) # Manually save object to cache
axion_utils.read(id) # Read object from cache
axion_utils.scribe(df, id, cb) # Cache + work helper combined
```
---
## ML Models (`models`)
```python
import axion.models as models
# Linear regression forecast
preds = models.linearRegression(df, x="time", target="close", n_preds=10)
# Multi-feature linear regression
preds = models.multiLinearRegression(df, x="time", target="close",
features=["volume", "rsi"], n_preds=10)
# Beta relative to benchmark
b = models.beta(df, x="stock_return", y="market_return")
# LSTM deep learning forecast
preds = models.lstm(df, x="time", target="close",
features=["volume", "rsi"], n_preds=10)
```
---
## Date Formats & Time Frames
All API date parameters use `YYYY-MM-DD` format. Supported price frames: `daily`, `weekly`, `monthly`, `quarterly`, `yearly`.
## Error Handling
```python
try:
    data = client.stocks.prices("INVALID")
except Exception as e:
    print(f"Error: {e}")
```
Common errors: `HTTP Error`, `Connection Error`, `Timeout Error`, `Authentication Error`.
## Get Started
For detailed API documentation, support, or to obtain an API key, visit the [Axion](https://axionquant.com) website.
## License
MIT
| text/markdown | null | AxionQuant <admin@axionquant.com> | null | null | MIT | null | [] | [] | null | null | >=3.8 | [] | [] | [] | [
"requests>=2.28.0",
"pandas>=1.5.0",
"numpy>=1.21.0",
"ipython",
"plotly",
"tensorflow",
"scikit-learn",
"tqdm",
"parsedatetime"
] | [] | [] | [] | [
"Homepage, https://github.com/axionquant/python-sdk"
] | twine/6.2.0 CPython/3.10.17 | 2026-02-21T03:23:23.196459 | axionquant_sdk-1.1.2.tar.gz | 19,947 | a1/37/e3e029ad0ab7ce79429f757097829313135d03cc9ef51fe2c5ef99c0479a/axionquant_sdk-1.1.2.tar.gz | source | sdist | null | false | 1cec76609332ab3191bba5db7a495251 | 40fab2c8936137094a3b6a0f402e4007949aca6e09c626384db746f1e717a419 | a137e3e029ad0ab7ce79429f757097829313135d03cc9ef51fe2c5ef99c0479a | null | [] | 215 |
2.4 | buildfunctions | 0.2.4 | The Buildfunctions SDK for Agents: Hardware-isolated CPU and GPU Sandboxes with runtime controls for untrusted AI actions | <p align="center">
<h1 align="center">
<a href="https://www.buildfunctions.com" target="_blank">
<img src="./static/readme/buildfunctions-header.svg" alt="logo" width="900">
</a>
</h1>
</p>
<h1 align="center">The Buildfunctions SDK for Agents</h1>
<p align="center">
<!-- <a href="https://discord.com/users/buildfunctions" target="_blank">
<img src="./static/readme/discord-button.png" height="32" />
</a> -->
<a href="https://www.buildfunctions.com/docs/sdk/quickstart" target="_blank">
<img src="./static/readme/read-the-docs-button.png" height="32" />
</a>
</p>
<p align="center">
<a href="https://pypi.org/project/buildfunctions" target="_blank">
<img src="https://img.shields.io/badge/pypi-buildfunctions-green">
</a>
</p>
<p align="center">
<h1 align="center">
<a href="https://www.buildfunctions.com" target="_blank">
<img src="./static/readme/buildfunctions-logo-and-servers-dark.svg" alt="logo" width="900">
</a>
</h1>
</p>
> Hardware-isolated execution environments for AI agents — with runtime controls to help keep unattended runs bounded
## Installation
```bash
pip install buildfunctions
```
## Quick Start
### 1. Create an API Token
Get your API token at [buildfunctions.com/settings](https://www.buildfunctions.com/settings)
### 2. CPU Function
```python
from buildfunctions import Buildfunctions, CPUFunction
client = await Buildfunctions({"apiToken": API_TOKEN})
deployed_function = await CPUFunction.create({
"name": "my-cpu-function",
"code": "./cpu_function_code.py",
"language": "python",
"memory": 128,
"timeout": 30,
})
print(f"Endpoint: {deployed_function.endpoint}")
await deployed_function.delete()
```
### 3. CPU Sandbox
```python
from buildfunctions import Buildfunctions, CPUSandbox
client = await Buildfunctions({"apiToken": API_TOKEN})
sandbox = await CPUSandbox.create({
"name": "my-cpu-sandbox",
"language": "python",
"code": "/path/to/code/cpu_sandbox_code.py",
"memory": 128,
"timeout": 30,
})
result = await sandbox.run()
print(f"Result: {result}")
await sandbox.delete()
```
### 4. GPU Function
```python
from buildfunctions import Buildfunctions, GPUFunction
client = await Buildfunctions({"apiToken": API_TOKEN})
deployed_function = await GPUFunction.create({
"name": "my-gpu-function",
"code": "/path/to/code/gpu_function_code.py",
"language": "python",
"gpu": "T4",
"vcpus": 30,
"memory": "50000MB",
"timeout": 300,
"requirements": ["transformers==4.47.1", "torch", "accelerate"],
})
print(f"Endpoint: {deployed_function.endpoint}")
await deployed_function.delete()
```
### 5. GPU Sandbox with Local Model
```python
from buildfunctions import Buildfunctions, GPUSandbox
client = await Buildfunctions({"apiToken": API_TOKEN})
sandbox = await GPUSandbox.create({
"name": "my-gpu-sandbox",
"language": "python",
"memory": 10000,
"timeout": 300,
"vcpus": 6,
"code": "./gpu_sandbox_code.py",
"model": "/path/to/models/Qwen/Qwen3-8B",
"requirements": "torch",
})
result = await sandbox.run()
print(f"Response: {result}")
await sandbox.delete()
```
## Runtime Controls: Help Keep Your Agent Running Unattended
Wrap any tool call with composable guardrails — no API key required, no sandbox needed. RuntimeControls works standalone around your own functions, or combined with Buildfunctions sandboxes.
**Available control layers (configure per workflow):**
retries with backoff, per-run tool-call budgets, circuit breakers, loop detection, timeout + cancellation, policy gates, injection guards, idempotency, concurrency locks, and event-based observability via event sinks.
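Of those layers, retry-with-backoff is the easiest to picture in isolation; here is a minimal standalone sketch of the pattern (an illustration of the concept with made-up names, not the SDK's `RuntimeControls` code):

```python
import asyncio

# Minimal retry-with-exponential-backoff sketch -- the general pattern,
# not RuntimeControls' actual implementation.
async def retry_with_backoff(fn, *, max_attempts=3, initial_delay_ms=200,
                             backoff_factor=2):
    delay = initial_delay_ms / 1000
    for attempt in range(1, max_attempts + 1):
        try:
            return await fn()
        except Exception:
            if attempt == max_attempts:
                raise  # budget exhausted: surface the last error
            await asyncio.sleep(delay)
            delay *= backoff_factor
```

`RuntimeControls` layers budgets, circuit breakers, and loop detection on top of this same wrap-the-call shape, which is why all of them compose around a single `run` function.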
### 1. Wrap Any Tool Call (No API Key)
```python
import httpx
from buildfunctions import RuntimeControls
controls = RuntimeControls.create({
"maxToolCalls": 50,
"timeoutMs": 30_000,
"retry": {"maxAttempts": 3, "initialDelayMs": 200, "backoffFactor": 2},
"loopBreaker": {"warningThreshold": 5, "quarantineThreshold": 8, "stopThreshold": 12},
"onEvent": lambda event: print(f"[controls] {event['type']}: {event['message']}"),
})
# Wrap any function — an API call, a shell command, an LLM tool invocation
async def run_api(args, runtime):
    payload = args[0]
    async with httpx.AsyncClient() as client:
        response = await client.post("https://api.example.com/data", json=payload)
        return response.json()
guarded_fetch = controls.wrap({
"toolName": "api-call",
"runKey": "agent-run-1",
"destination": "https://api.example.com",
"run": run_api,
})
result = await guarded_fetch({"query": "latest results"})
print(result)
# Reset budget counters when starting a new run
await controls.reset("agent-run-1")
```
### 2. With Hardware-Isolated Sandbox + Agent Safety
```python
import re
from buildfunctions import Buildfunctions, CPUSandbox, RuntimeControls, applyAgentLogicSafety
await Buildfunctions({"apiToken": API_TOKEN})
sandbox = await CPUSandbox.create({
"name": "guarded-sandbox",
"language": "python",
"code": "./my_handler.py",
"memory": 128,
"timeout": 30,
})
controls = RuntimeControls.create(
applyAgentLogicSafety(
{
"maxToolCalls": 20,
"retry": {"maxAttempts": 2, "initialDelayMs": 200, "backoffFactor": 2},
"onEvent": lambda event: print(f"[controls] {event['type']}: {event['message']}"),
},
{
"injectionGuard": {
"enabled": True,
"patterns": [
re.compile(r"ignore\s+previous\s+instructions", re.I),
re.compile(r"\brm\s+-rf\b", re.I),
],
},
},
)
)
async def run_sandbox(runtime):
_ = runtime
return await sandbox.run()
result = await controls.run(
{
"toolName": "cpu-sandbox-run",
"runKey": "sandbox-run-1",
"destination": sandbox.endpoint,
"action": "execute",
},
run_sandbox,
)
print(f"Result: {result}")
await sandbox.delete()
```
Full runtime controls documentation: https://www.buildfunctions.com/docs/runtime-controls
The SDK is currently in beta. If you encounter any issues or have specific syntax requirements, please contact us at team@buildfunctions.com and we’ll work to address them.
| text/markdown | Buildfunctions | null | null | null | null | AIAgents, AIApps, AIInfrastructure, DeveloperExperience, GPUs, Sandboxes, Serverless | [] | [] | null | null | >=3.10 | [] | [] | [] | [
"httpx>=0.27",
"python-dotenv>=1.0",
"mypy>=1.13; extra == \"dev\"",
"pytest-asyncio>=0.24; extra == \"dev\"",
"pytest>=8.0; extra == \"dev\"",
"ruff>=0.8; extra == \"dev\""
] | [] | [] | [] | [
"Homepage, https://www.buildfunctions.com",
"Documentation, https://www.buildfunctions.com/docs/sdk/quickstart",
"Repository, https://github.com/buildfunctions/sdk-python",
"Issues, https://github.com/buildfunctions/sdk-python/issues"
] | twine/6.2.0 CPython/3.11.0rc1 | 2026-02-21T03:23:18.761961 | buildfunctions-0.2.4.tar.gz | 4,893,507 | e1/8f/f3e2175474836c8f1c18a65b9213e17c61fa417fe6a9c11a4cbf66217505/buildfunctions-0.2.4.tar.gz | source | sdist | null | false | ec2ab2cf50e48284eff00dc72b040160 | 325e68dcac7d16f78db3b982edecad3069575f697a93785464b308dea2777300 | e18ff3e2175474836c8f1c18a65b9213e17c61fa417fe6a9c11a4cbf66217505 | null | [] | 209 |
2.1 | odoo14-addon-l10n-es-aeat-mod303 | 14.0.5.2.1.dev1 | AEAT modelo 303 | .. image:: https://odoo-community.org/readme-banner-image
:target: https://odoo-community.org/get-involved?utm_source=readme
:alt: Odoo Community Association
===============
AEAT modelo 303
===============
..
!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
!! This file is generated by oca-gen-addon-readme !!
!! changes will be overwritten. !!
!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
!! source digest: sha256:3aa146dc4cc2a138667d0fd87487da5cd35361b3350e56353fdbfe2801a2d9d2
!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
.. |badge1| image:: https://img.shields.io/badge/maturity-Beta-yellow.png
:target: https://odoo-community.org/page/development-status
:alt: Beta
.. |badge2| image:: https://img.shields.io/badge/license-AGPL--3-blue.png
:target: http://www.gnu.org/licenses/agpl-3.0-standalone.html
:alt: License: AGPL-3
.. |badge3| image:: https://img.shields.io/badge/github-OCA%2Fl10n--spain-lightgray.png?logo=github
:target: https://github.com/OCA/l10n-spain/tree/14.0/l10n_es_aeat_mod303
:alt: OCA/l10n-spain
.. |badge4| image:: https://img.shields.io/badge/weblate-Translate%20me-F47D42.png
:target: https://translation.odoo-community.org/projects/l10n-spain-14-0/l10n-spain-14-0-l10n_es_aeat_mod303
:alt: Translate me on Weblate
.. |badge5| image:: https://img.shields.io/badge/runboat-Try%20me-875A7B.png
:target: https://runboat.odoo-community.org/builds?repo=OCA/l10n-spain&target_branch=14.0
:alt: Try me on Runboat
|badge1| |badge2| |badge3| |badge4| |badge5|
Module for filing form 303 (VAT self-assessment) of the Spanish Tax Agency
(Agencia Española de Administración Tributaria).
Form instructions: http://goo.gl/pgVbXH
BOE record layout in Excel: https://goo.gl/HKOGec
It includes export to the BOE format for electronic filing, as well as the
creation of the regularization journal entry for the tax accounts.
For the cash-basis scheme, look for the module
*l10n_es_aeat_mod303_cash_basis*.
* VAT pro rata is covered by the additional module `l10n_es_aeat_vat_prorrate`.
* There are 2 cases of VAT not subject to tax that go to box 61 of the form
  and are not covered by this module:
  - With localization rules, but not corresponding to the Canary Islands,
    Ceuta or Melilla. For example, a lawyer from Spain providing services in
    France.
  - Articles 7, 14, Others
  For those cases, an extra module adding the taxes and fiscal positions is
  expected.
More information at https://www.boe.es/diario_boe/txt.php?id=BOE-A-2014-12329
**Table of contents**
.. contents::
:local:
Known issues / Roadmap
======================
* The generic accounting accounts of the taxes cannot be changed if you
  want the form to pick up their figures correctly.
* The simplified scheme and the agriculture, livestock and forestry scheme
  are not covered by the current development.
* It is not possible to declare that a company files taxes jointly.
* It is not possible to declare that a company is in insolvency
  proceedings.
* It is not possible to declare that a company belongs to a foral
  (regional) tax administration.
* Possibility of marking in the result the payment/refund through the tax
  current account.
* Boxes [108] and [111] for rectifying a self-assessment cannot be filled in.
* Box [109] cannot be filled in: refunds agreed by the Tax Agency as a
  result of processing previous self-assessments for the fiscal year and
  period covered by this self-assessment.
* The cash-basis scheme is covered by the additional module
  `l10n_es_aeat_mod303_cash_basis`.
* VAT pro rata is covered by the additional module
  `l10n_es_aeat_vat_prorrate`.
* There are 2 cases of VAT not subject to tax that go to box 61 of the form
  and are not covered by this module:
  - With localization rules, but not corresponding to the Canary Islands,
    Ceuta or Melilla. For example, a lawyer from Spain providing services in
    France.
  - Articles 7, 14, Others
  For those cases, an extra module adding the taxes and fiscal positions is
  expected.
  More information at https://www.boe.es/diario_boe/txt.php?id=BOE-A-2014-12329
* Sales with the new 5% VAT on electricity have not been mapped, pending
  whether the Tax Agency changes the form to hold that value.
* The communications functionality using the tax current account option
  (result types "G" and "V") has not been verified in production.
Bug Tracker
===========
Bugs are tracked on `GitHub Issues <https://github.com/OCA/l10n-spain/issues>`_.
In case of trouble, please check there if your issue has already been reported.
If you spotted it first, help us to smash it by providing a detailed and welcomed
`feedback <https://github.com/OCA/l10n-spain/issues/new?body=module:%20l10n_es_aeat_mod303%0Aversion:%2014.0%0A%0A**Steps%20to%20reproduce**%0A-%20...%0A%0A**Current%20behavior**%0A%0A**Expected%20behavior**>`_.
Do not contact contributors directly about support or help with technical issues.
Credits
=======
Authors
~~~~~~~
* Guadaltech
* AvanzOSC
* Tecnativa
* ForgeFlow
Contributors
~~~~~~~~~~~~
* GuadalTech (http://www.guadaltech.es)
* AvanzOSC (http://www.avanzosc.es)
* Comunitea (http://www.comunitea.com)
* Jordi Ballester <jordi.ballester@forgeflow.com>
* `Tecnativa <https://www.tecnativa.com>`__:
* Antonio Espinosa
* Luis M. Ontalba
* Pedro M. Baeza
* `Sygel <https://www.sygel.es>`__:
* Harald Panten
* Valentin Vinagre
* `Ozono Multimedia <https://www.ozonomultimedia.com>`__:
* Iván Antón
* `Moduon <https://www.moduon.team/>`__:
* Arantxa Sudón
* Rafael Blasco
Maintainers
~~~~~~~~~~~
This module is maintained by the OCA.
.. image:: https://odoo-community.org/logo.png
:alt: Odoo Community Association
:target: https://odoo-community.org
OCA, or the Odoo Community Association, is a nonprofit organization whose
mission is to support the collaborative development of Odoo features and
promote its widespread use.
.. |maintainer-pedrobaeza| image:: https://github.com/pedrobaeza.png?size=40px
:target: https://github.com/pedrobaeza
:alt: pedrobaeza
Current `maintainer <https://odoo-community.org/page/maintainer-role>`__:
|maintainer-pedrobaeza|
This module is part of the `OCA/l10n-spain <https://github.com/OCA/l10n-spain/tree/14.0/l10n_es_aeat_mod303>`_ project on GitHub.
You are welcome to contribute. To learn how please visit https://odoo-community.org/page/Contribute.
| null | Guadaltech,AvanzOSC,Tecnativa,ForgeFlow,Odoo Community Association (OCA) | support@odoo-community.org | null | null | AGPL-3 | null | [
"Programming Language :: Python",
"Framework :: Odoo",
"Framework :: Odoo :: 14.0",
"License :: OSI Approved :: GNU Affero General Public License v3"
] | [] | https://github.com/OCA/l10n-spain | null | >=3.6 | [] | [] | [] | [
"odoo14-addon-l10n-es-aeat",
"odoo14-addon-l10n-es-extra-data",
"odoo<14.1dev,>=14.0a"
] | [] | [] | [] | [] | twine/6.2.0 CPython/3.12.3 | 2026-02-21T03:23:04.210557 | odoo14_addon_l10n_es_aeat_mod303-14.0.5.2.1.dev1-py3-none-any.whl | 1,169,683 | 61/83/b088b7dddaf7a7b433077d1299abadc33d5dc7c0940584226eb1b89f916b/odoo14_addon_l10n_es_aeat_mod303-14.0.5.2.1.dev1-py3-none-any.whl | py3 | bdist_wheel | null | false | a14693165f321998345ec1157c8c9093 | 00719276287476d0f2971c74ead348b80984be0fa9582f8dd1061bee6092412c | 6183b088b7dddaf7a7b433077d1299abadc33d5dc7c0940584226eb1b89f916b | null | [] | 64 |
2.4 | gjalla | 0.4.27 | Gjalla CLI — architecture guardrails for AI coding agents | # Gjalla Pre-commit CLI
Pre-commit analysis CLI for validating changes against team rules.
## Installation
```bash
pip install -e .
```
## Commands
| Command | Description |
|---------|-------------|
| `init` | Initialize gjalla in a repository |
| `commit` | Analyze staged changes before commit |
| `bypass` | Bypass analysis for a commit |
| `status` | Show current status |
| `hook` | Manage git hooks (install/uninstall) |
| `caps` | Show available capabilities |
| `sync` | Sync configuration |
| `auth` | Authentication management |
| `impact` | Show impact analysis |
| `mcp` | MCP server mode |
---
## User Guide
### Initial Setup
1. **Configure API key:**
```bash
gjalla init
```
- Prompts for API key (or set `GJALLA_API_KEY` env var)
- Discovers available projects from API
- Links current repository to a project
2. **Install pre-commit hook (optional):**
```bash
gjalla hook install
```
### Daily Usage
**Automatic (via pre-commit hook):**
- Hook runs `gjalla commit` automatically on each `git commit`
- Blocks commit if violations are found
**Manual:**
```bash
git add <files>
gjalla commit
git commit -m "message"
```
### Analysis Flow
When `gjalla commit` runs:
1. Opens WebSocket connection to analysis service
2. Sends staged changes for analysis
3. Executes tool calls from the service (file reads, searches)
4. Displays results: violations, warnings, suggestions
5. Stores analysis receipt on successful analysis
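Step 3 can be pictured as a small dispatch loop that maps requested tool names to local handlers. The message shapes below are hypothetical, not gjalla's actual wire protocol:
```python
def dispatch_tool_calls(messages, handlers):
    """Run each requested tool and collect its result (or error) keyed by call id."""
    results = {}
    for msg in messages:
        if msg.get("type") != "tool_call":
            continue  # ignore non-tool messages (status updates, etc.)
        handler = handlers.get(msg["tool"])
        if handler is None:
            results[msg["id"]] = {"error": f"unknown tool: {msg['tool']}"}
        else:
            results[msg["id"]] = {"result": handler(**msg.get("args", {}))}
    return results
```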
### Reading Output
- **Violations** (red): Must fix before commit proceeds
- **Warnings** (yellow): Should review, won't block commit
- **Suggestions** (blue): Optional improvements
- **Receipt**: Analysis ID stored for audit trail
### Bypassing Analysis
When you need to skip analysis:
```bash
gjalla bypass -m "reason for bypass"
git commit -m "message"
```
Logs bypass with timestamp and reason.
### Checking Status
```bash
gjalla status
```
Shows: project linkage, hook status, last analysis receipt.
### Hook Management
```bash
gjalla hook install # Install pre-commit hook
gjalla hook uninstall # Remove pre-commit hook
gjalla hook status # Check hook installation
```
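Conceptually, `gjalla hook install` drops a small script into `.git/hooks/pre-commit`. A hypothetical sketch of such an installer (the script body and layout are assumptions, not gjalla's actual hook):
```python
import os
import stat

# Hypothetical hook body: run the analyzer, and let a non-zero exit block the commit.
HOOK_SCRIPT = "#!/bin/sh\nexec gjalla commit\n"

def install_hook(git_dir):
    """Write an executable pre-commit hook under git_dir/hooks/."""
    path = os.path.join(git_dir, "hooks", "pre-commit")
    os.makedirs(os.path.dirname(path), exist_ok=True)
    with open(path, "w") as f:
        f.write(HOOK_SCRIPT)
    # Mark the file executable so git will actually run it.
    os.chmod(path, os.stat(path).st_mode | stat.S_IXUSR | stat.S_IXGRP | stat.S_IXOTH)
    return path
```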
### Configuration
**Global config:** `~/.gjalla/config.yaml`
```yaml
api_key: "your-api-key"
api_url: "https://api.gjalla.io"
projects:
/path/to/repo: "project-id"
```
**Project config:** `.gjalla.yaml` (in repo root)
- Project-specific rules and settings
**Environment variables:**
- `GJALLA_API_KEY` - Overrides config file api_key
- `GJALLA_API_URL` - Overrides config file api_url
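The override rule amounts to "environment first, config file second" — an illustrative sketch, not gjalla's actual settings loader:
```python
import os

def resolve_setting(name, env_var, file_config):
    """Return the env var value if set, otherwise fall back to the config file."""
    return os.environ.get(env_var) or file_config.get(name)
```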
---
## Development Guide
### Setup
```bash
git clone <repo-url>
cd gjalla-precommit
python3 -m venv venv
source venv/bin/activate
pip install -e ".[dev]"
pre-commit install # Install hooks after cloning
```
### Running Tests
```bash
python3 -m pytest tests/ -v
```
### Project Structure
```
gjalla_precommit/
cli.py # Entry point, command group
commands/ # CLI command implementations
init.py # gjalla init
commit.py # gjalla commit
bypass.py # gjalla bypass
status.py # gjalla status
hook.py # gjalla hook install/uninstall
caps.py # gjalla caps
sync.py # gjalla sync
impact.py # gjalla impact
mcp.py # gjalla mcp
config/ # Configuration management
settings.py # Settings loading, config paths
protocol/ # WebSocket protocol handling
display/ # Terminal output formatting (rich)
tools/ # Tool implementations for analysis
tests/ # Test suite
```
### Dependencies
- `click` - CLI framework
- `rich` - Terminal formatting
- `websockets` - WebSocket client
- `gitpython` - Git operations
- `pyyaml` - Config file parsing
- `pydantic` - Settings validation
- `httpx` - HTTP client
- `pathspec` - Gitignore-style pattern matching
### Dev Dependencies
- `pytest` - Test runner
- `pytest-asyncio` - Async test support
| text/markdown | null | null | null | null | MIT | null | [
"Development Status :: 4 - Beta",
"Intended Audience :: Developers",
"Topic :: Software Development :: Quality Assurance",
"Programming Language :: Python :: 3",
"License :: OSI Approved :: MIT License"
] | [] | null | null | >=3.10 | [] | [] | [] | [
"click>=8.0",
"rich>=13.0",
"gitpython>=3.0",
"pyyaml>=6.0",
"pydantic>=2.0",
"httpx>=0.25",
"pathspec>=0.11",
"pytest>=7.0; extra == \"dev\"",
"pytest-asyncio>=0.21; extra == \"dev\"",
"vulture>=2.11; extra == \"dev\""
] | [] | [] | [] | [] | twine/6.1.0 CPython/3.13.7 | 2026-02-21T03:21:47.751787 | gjalla-0.4.27.tar.gz | 92,407 | 8e/42/32404960f47f2c75f3455f28350af542eaad38b37d79cd703450565fafb6/gjalla-0.4.27.tar.gz | source | sdist | null | false | 3165be251ca72e1a9ee92b7d0911d54c | 5e37314a2cb561930542fa31eaa2d743bc87035697f630916c67da1b3bb2d550 | 8e4232404960f47f2c75f3455f28350af542eaad38b37d79cd703450565fafb6 | null | [] | 226 |
2.4 | statsig-python-core | 0.16.2b2602210300 | Statsig Python bindings for the Statsig Core SDK. | <h1 align="center">
<a href="https://statsig.com/?ref=gh_server_core">
<img src="https://github.com/statsig-io/js-client-monorepo/assets/95646168/ae5499ed-20ff-4584-bf21-8857f800d485" />
</a>
<div />
<a href="https://statsig.com/?ref=gh_server_core">Statsig</a>
</h1>
<p align="center">
<a href="https://github.com/statsig-io/statsig-server-core/blob/main/LICENSE">
<img src="https://img.shields.io/badge/license-ISC-blue.svg?colorA=1b2528&colorB=ccfbc7&style=for-the-badge">
</a>
<a href="https://www.npmjs.com/package/@statsig/statsig-node-core">
<img src="https://img.shields.io/npm/v/@statsig/statsig-node-core.svg?colorA=1b2528&colorB=b2d3ff&style=for-the-badge">
</a>
<a href="https://statsig.com/community?ref=gh_server_core">
<img src="https://img.shields.io/badge/slack-statsig-brightgreen.svg?logo=slack&colorA=1b2528&colorB=FFF8BA&style=for-the-badge">
</a>
</p>
Statsig helps you move faster with feature gates (feature flags) and/or dynamic configs. It also allows you to run A/B/n tests to validate your new features and understand their impact on your KPIs. If you're new to Statsig, check out our product and create an account at [statsig.com](https://www.statsig.com/?ref=gh_server_core).
## Getting Started
Read through the [Documentation](https://docs.statsig.com/server-core?ref=gh_server_core) or check out the [Samples](https://github.com/statsig-io/statsig-server-core/tree/main/examples).
## Packages
Bindings
- Node [[npm](https://www.npmjs.com/package/@statsig/statsig-node-core)] [[source](https://github.com/statsig-io/statsig-server-core/blob/main/statsig-node)] [[docs](https://docs.statsig.com/server-core/node-core?ref=gh_server_core)]
- Python [[pypi](https://pypi.org/project/statsig-python-core)] [[source](https://github.com/statsig-io/statsig-server-core/blob/main/statsig-pyo3)] [[docs](https://docs.statsig.com/server-core/python-core?ref=gh_server_core)]
- PHP [[packagist](https://packagist.org/packages/statsig/statsig-php-core)] [[source](https://github.com/statsig-io/statsig-server-core/blob/main/statsig-php)] [[docs](https://docs.statsig.com/server-core/php-core?ref=gh_server_core)]
- Java [[maven](https://search.maven.org/artifact/com.statsig/javacore)] [[source](https://github.com/statsig-io/statsig-server-core/tree/main/statsig-java)] [[docs](https://docs.statsig.com/server-core/java-core?ref=gh_server_core)]
- Rust [[crates.io](https://crates.io/crates/statsig-rust)] [[source](https://github.com/statsig-io/statsig-server-core/tree/main/statsig-rust)] [[docs](https://docs.statsig.com/server-core/rust-core?ref=gh_server_core)]
- Elixir [[hex](https://hex.pm/packages/statsig_elixir)] [[source](https://github.com/statsig-io/statsig-server-core/tree/main/statsig-elixir)] [[docs](https://docs.statsig.com/server-core/elixir-core?ref=gh_server_core)]
## Community
If you need any assistance or just have a question, feel free to reach out to us on [Slack](https://statsig.com/community?ref=gh_server_core).
| text/markdown; charset=UTF-8; variant=GFM | Statsig, Daniel Loomb <daniel@statsig.com> | Statsig, Daniel Loomb <daniel@statsig.com> | null | null | ISC | null | [
"Programming Language :: Rust",
"Programming Language :: Python :: Implementation :: CPython",
"Programming Language :: Python :: Implementation :: PyPy"
] | [] | https://statsig.com/ | null | >=3.7 | [] | [] | [] | [
"requests",
"typing-extensions"
] | [] | [] | [] | [
"homepage, https://statsig.com",
"documentation, https://docs.statsig.com/server-core/python-core",
"repository, https://github.com/statsig-io/statsig-server-core/tree/main/statsig-pyo3",
"changelog, https://github.com/statsig-io/statsig-server-core/releases"
] | maturin/1.12.3 | 2026-02-21T03:19:47.340205 | statsig_python_core-0.16.2b2602210300.tar.gz | 1,794,244 | ae/f0/afed1bfb9c3bbca1f3c97c271c6144876fe816a2640fcdedec10c0745f2f/statsig_python_core-0.16.2b2602210300.tar.gz | source | sdist | null | false | c88ebe225a7676ef4569e82fe2264cc6 | 7f3944d9e58be93906b9c79a5ec923172717640d9ba85100611f9ad52dc838c6 | aef0afed1bfb9c3bbca1f3c97c271c6144876fe816a2640fcdedec10c0745f2f | null | [] | 557 |
2.4 | PyADRecon-ADWS | 0.4.2 | Python Active Directory Reconnaissance Tool (ADRecon port) using ADWS with NTLM support. | <img src="https://raw.githubusercontent.com/l4rm4nd/PyADRecon-ADWS/refs/heads/main/.github/pyadrecon.png" alt="pyadrecon" width="300"/>
A Python3 implementation of [PyADRecon](https://github.com/l4rm4nd/PyADRecon) using ADWS instead of LDAP for Pentesters, Red and Blue Teams
> PyADRecon is a tool which gathers information about MS Active Directory and generates an XSLX report to provide a holistic picture of the current state of the target AD environment.
>[!TIP]
>Queries Active Directory Web Services (ADWS) over TCP/9389 instead of LDAP to fly under the EDR radar.
## Table of Contents
- [Installation](#installation)
- [Usage](#usage)
- [Docker](#docker)
- [Known Limitations](#known-limitations)
- [Collection Modules](#collection-modules)
- [HTML Dashboard](#html-dashboard)
- [Acknowledgements](#acknowledgements)
- [License](#license)
## Installation
````bash
# stable release from pypi
pipx install pyadrecon-adws
# latest commit from github
pipx install git+https://github.com/l4rm4nd/PyADRecon-ADWS
````
Then verify installation:
````bash
pyadrecon_adws --version
````
> [!TIP]
> For Windows, you may read [this](https://github.com/l4rm4nd/PyADRecon/tree/main/windows). Only NTLM authentication works on Windows atm.
## Usage
````py
usage: pyadrecon_adws.py [-h] [--version] [--generate-excel-from CSV_DIR] [-d DOMAIN] [-u USERNAME] [-p PASSWORD]
[-dc DOMAIN_CONTROLLER] [--port PORT] [--auth {ntlm,kerberos}] [--spn SPN]
[--workstation WORKSTATION] [-c COLLECT] [--only-enabled] [--page-size PAGE_SIZE]
[--dormant-days DORMANT_DAYS] [--password-age PASSWORD_AGE] [-o OUTPUT] [--no-excel]
[-v]
PyADRecon-ADWS # Active Directory Reconnaissance using ADWS
options:
-h, --help show this help message and exit
--version show program's version number and exit
--generate-excel-from CSV_DIR
Generate Excel report from existing CSV files (standalone mode)
-d, --domain DOMAIN Domain name (e.g., example.com)
-u, --username USERNAME
Username (DOMAIN\user or user@domain.com)
-p, --password PASSWORD
Password or LM:NTLM hash (will prompt if not provided)
-dc, --domain-controller DOMAIN_CONTROLLER
Domain controller hostname or IP
--port PORT ADWS port (default: 9389)
--auth {ntlm,kerberos}
Authentication method: ntlm or kerberos (default: ntlm)
--spn SPN Service Principal Name override (default: HTTP/dc.fqdn)
--workstation WORKSTATION
NTLM authentication workstation name (default: random)
-c, --collect COLLECT
Comma-separated modules to collect (default: all)
--only-enabled Only collect enabled users/computers
--page-size PAGE_SIZE
ADWS query page size (default: 256)
--dormant-days DORMANT_DAYS
Users/Computers with lastLogon older than X days are dormant (default: 90)
--password-age PASSWORD_AGE
Users with pwdLastSet older than X days have old passwords (default: 180)
-o, --output OUTPUT Output directory (default: PyADRecon-Report-<timestamp>)
--no-excel Skip Excel export
--no-dashboard Skip interactive HTML dashboard generation
-v, --verbose Enable verbose output
Examples:
# Basic usage with NTLM authentication
pyadrecon_adws.py -dc 192.168.1.1 -u admin -p password123 -d DOMAIN.LOCAL
# With Kerberos authentication (only works on Linux with gssapi atm)
pyadrecon_adws.py -dc dc01.domain.local -u admin -p password123 -d DOMAIN.LOCAL --auth kerberos
# Only collect specific modules
pyadrecon_adws.py -dc 192.168.1.1 -u admin -p pass -d DOMAIN.LOCAL --collect users,groups,computers
# Output to specific directory
pyadrecon_adws.py -dc 192.168.1.1 -u admin -p pass -d DOMAIN.LOCAL -o /tmp/adrecon_output
# Generate Excel report from existing CSV files (standalone mode)
pyadrecon_adws.py --generate-excel-from /path/to/CSV-Files -o report.xlsx
````
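The `--dormant-days` and `--password-age` checks work off AD attributes such as `lastLogon` and `pwdLastSet`, which are stored as Windows FILETIME values (100-nanosecond ticks since 1601-01-01 UTC). An illustrative conversion, not PyADRecon-ADWS's actual code:
```python
from datetime import datetime, timedelta, timezone

EPOCH_1601 = datetime(1601, 1, 1, tzinfo=timezone.utc)

def filetime_to_datetime(filetime: int) -> datetime:
    """Convert a Windows FILETIME (100-ns ticks since 1601) to an aware datetime."""
    return EPOCH_1601 + timedelta(microseconds=filetime // 10)

def is_dormant(last_logon_filetime: int, now: datetime, dormant_days: int = 90) -> bool:
    """An account counts as dormant if its last logon is older than the threshold."""
    return now - filetime_to_datetime(last_logon_filetime) > timedelta(days=dormant_days)
```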
## Docker
There is also a Docker image available on GHCR.IO.
````
docker run --rm -v /etc/krb5.conf:/etc/krb5.conf:ro -v /etc/hosts:/etc/hosts:ro -v ./:/tmp/pyadrecon_output ghcr.io/l4rm4nd/pyadrecon-adws:latest -dc dc01.domain.local -u admin -p password123 -d DOMAIN.LOCAL -o /tmp/pyadrecon_output
````
## Known Limitations
### Multi-Domain Forests - Certificate Templates
When querying **child domains** in a multi-domain forest, ADWS returns **incomplete security descriptors** for forest-wide objects like certificate templates. This means:
**Issue:**
- Certificate template ACLs (enrollment rights, write permissions) may not show principals from the **child domain itself**
- Only parent domain principals will appear in enrollment rights
- This is an ADWS protocol limitation, not a PyADRecon-ADWS bug
**Example:**
- Querying from child domain (`deham.domain.local`): Shows parent domain principals only
- Querying from parent domain (`domain.local`): Shows all principals including child domain
**Solution:**
- For **complete certificate template ACL data**, connect to the **forest root domain controller** instead of a child DC:
```bash
# Instead of connecting to child DC:
pyadrecon_adws.py -dc child-dc.deham.domain.local -d deham.domain.local ...
# Connect to forest root DC:
pyadrecon_adws.py -dc root-dc.domain.local -d domain.local ...
```
**Warning:** PyADRecon-ADWS will display a warning when collecting certificate templates from a child domain:
```
[WARNING] [!] Connected to child domain - certificate template ACLs may be incomplete!
[WARNING] For complete ACL data, connect to forest root DC instead.
```
## Collection Modules
As default, PyADRecon-ADWS runs all collection modules. They are referenced to as `default` or `all`.
Though, you can freely select your own collection of modules to run:
| Icon | Meaning |
|------|---------|
| 🛑 | Requires administrative domain privileges (e.g. Domain Admins) |
| ✅ | Requires regular domain privileges (e.g. Authenticated Users) |
| 💥 | New collection module in beta state. Results may be incorrect. |
**Forest & Domain**
- `forest` ✅
- `domain` ✅
- `trusts` ✅
- `sites` ✅
- `subnets` ✅
- `schema` or `schemahistory` ✅
**Domain Controllers**
- `dcs` or `domaincontrollers` ✅
**Users & Groups**
- `users` ✅
- `userspns` ✅
- `groups` ✅
- `groupmembers` ✅
- `protectedgroups` ✅💥
- `krbtgt` ✅
- `asreproastable` ✅
- `kerberoastable` ✅
**Computers & Printers**
- `computers` ✅
- `computerspns` ✅
- `printers` ✅
**OUs & Group Policy**
- `ous` ✅
- `gpos` ✅
- `gplinks` ✅
**Passwords & Credentials**
- `passwordpolicy` ✅
- `fgpp` or `finegrainedpasswordpolicy` 🛑
- `laps` 🛑
- `bitlocker` 🛑💥
**Managed Service Accounts**
- `gmsa` or `groupmanagedserviceaccounts` ✅💥
- `dmsa` or `delegatedmanagedserviceaccounts` ✅💥
- Only works for Windows Server 2025+ AD schema
**Certificates**
- `adcs` or `certificates` ✅💥
- Detects ESC1, ESC2, ESC3, ESC4 and ESC9
**DNS**
- `dnszones` ✅
- `dnsrecords` ✅
## HTML Dashboard
PyADRecon-ADWS will automatically create an HTML dashboard with important stats and security findings.
You may disable HTML dashboard generation via `--no-dashboard`.
>[!CAUTION]
> This is a beta feature. Displayed data may be falsely parsed or reported as issue. Take it with a grain of salt!
<img width="1209" height="500" alt="image" src="https://github.com/user-attachments/assets/e9500806-374d-4c69-a9a8-7f1540779266" />
<details>
<img width="1318" height="927" alt="image" src="https://github.com/user-attachments/assets/0760056c-963d-48fb-a252-fd082862bb01" />
<img width="1283" height="817" alt="image" src="https://github.com/user-attachments/assets/325197eb-8bd7-4aca-ac4e-c34b85057df1" />
<img width="1253" height="569" alt="image" src="https://github.com/user-attachments/assets/b6c4f94b-9da3-4a55-808d-23036181d02b" />
</details>
## Acknowledgements
Many thanks to the following folks:
- [S3cur3Th1sSh1t](https://github.com/S3cur3Th1sSh1t) for a first Claude draft of PyADRecon using LDAP
- [Sense-of-Security](https://github.com/sense-of-security) for the original ADRecon script in PowerShell
- [dirkjanm](https://github.com/dirkjanm) for the original ldapdomaindump script
- [mverschu](https://github.com/mverschu) for his port of ldapdomaindump using ADWS (adwsdomaindump). PyADRecon-ADWS heavily makes use of the ldap-to-adws wrapper.
- [Fortra](https://github.com/fortra) for the awesome impacket suite
- [Anthropic](https://github.com/anthropics) for Claude LLMs
## License
**PyADRecon-ADWS** is released under the **MIT License**.
The following third-party libraries are used:
| Library | License |
|-------------|----------------|
| openpyxl | MIT |
| impacket | Apache 2.0 |
| adwsdomaindump ADWS Wrapper | MIT |
Please refer to the respective licenses of these libraries when using or redistributing this software.
| text/markdown | LRVT | null | null | null | null | active-directory, active directory, adws, active directory web services, ad, recon, reconnaissance, enum, enumeration, adrecon, pyadrecon, security, audit, pentest, adcs, kerberoast | [] | [] | null | null | >=3.9 | [] | [] | [] | [
"openpyxl<4,>=3.1.5",
"impacket<1,>=0.13.0",
"gssapi<2,>=1.11.1; sys_platform != \"win32\"",
"winkerberos<1,>=0.13.0; sys_platform == \"win32\"",
"pycryptodome<4,>=3.23.0; sys_platform == \"win32\""
] | [] | [] | [] | [
"Homepage, https://github.com/l4rm4nd/PyADRecon-ADWS",
"Repository, https://github.com/l4rm4nd/PyADRecon-ADWS",
"Issues, https://github.com/l4rm4nd/PyADRecon-ADWS/issues"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-21T03:19:24.563430 | pyadrecon_adws-0.4.2.tar.gz | 1,838,714 | 92/65/e465a78322dd544a72cef015b6299141924581024b2e8af7cf7f5195f1b0/pyadrecon_adws-0.4.2.tar.gz | source | sdist | null | false | 44e080715fb15519873397bbe08980a2 | 53cbb898cd01721785649f7a2a9a2c8e77cd5b9508a6511c26fa491de9b24f64 | 9265e465a78322dd544a72cef015b6299141924581024b2e8af7cf7f5195f1b0 | MIT | [
"LICENSE"
] | 0 |
2.4 | long-running-agents | 0.1.3 | A Pydantic AI agent that remembers across sessions, can create its own tools, and delegates to specialists | # Long Running Agents
> A Pydantic AI agent that remembers across sessions, can create its own tools, and delegates to specialists. Memory, sandbox, and tasks in one stack.
[Source](https://github.com/prith27/lra)
## Why this exists
**Problem:** Most agents forget between runs. They can't recall past conversations, and they can't extend their own tool set.
**Solution:** Long Running Agents gives you persistent memory (SQLite + ChromaDB), cross-session recall, and dynamic tool generation—so your agent remembers, learns, and adapts over time.
**Audience:** Developers building long-lived, memory-aware agents with Pydantic AI.
## What makes it different
| Feature | What it does | Why it stands out |
|---------|--------------|-------------------|
| **Cross-session memory** | `get_recent_conversations(all_sessions=True)` + `search_memory` | Most agents forget between runs. This one recalls past turns across sessions. |
| **Dynamic tool generation** | Agent creates new tools at runtime via `generate_tool` | Extends its own tool set with AST validation and persistence. |
| **Subagent delegation** | Code and research specialists | Routes work to focused subagents instead of one monolithic agent. |
| **Hybrid retrieval** | Vector + keyword module | Optional hybrid search (semantic + keyword) available in the memory layer. |
| **Pydantic AI** | Typed agent framework | Uses Pydantic AI instead of LangChain/LlamaIndex. |
| **Sandbox + memory + tasks** | All in one stack | Memory, code execution, and task tracking in a single package. |
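The AST-validation step behind `generate_tool` can be sketched with the standard `ast` module — an illustrative gate, not the package's actual validator (the allowlist and rules here are hypothetical):
```python
import ast

ALLOWED_IMPORTS = {"math", "json", "re"}  # hypothetical allowlist

def validate_tool_source(source: str) -> bool:
    """Accept only a single top-level function whose imports stay on the allowlist."""
    try:
        tree = ast.parse(source)
    except SyntaxError:
        return False
    if len(tree.body) != 1 or not isinstance(tree.body[0], ast.FunctionDef):
        return False
    for node in ast.walk(tree):
        if isinstance(node, ast.Import):
            mods = [alias.name for alias in node.names]
        elif isinstance(node, ast.ImportFrom):
            mods = [node.module or ""]
        else:
            continue
        if any(m.split(".")[0] not in ALLOWED_IMPORTS for m in mods):
            return False
    return True
```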
## What it is (and isn't)
- **Is:** A chat-driven AI agent with persistent memory, code execution, and task tracking. You interact via a terminal loop; the agent responds using its tools; context is saved across runs.
- **Isn't:** A workflow automation engine. No schedules, triggers, or DAGs. It's an interactive assistant, not Zapier or Airflow.
## Install
**Prerequisites:** Python 3.10+, [OpenAI API key](https://platform.openai.com/api-keys)
```bash
# From PyPI
pip install long-running-agents
# From GitHub
pip install git+https://github.com/prith27/lra.git
# From source (clone first)
git clone https://github.com/prith27/lra.git
cd lra
pip install -e .
```
## Setup
Set your OpenAI API key (required). Choose one:
```bash
# Option A: Export in shell
export OPENAI_API_KEY=sk-your-key-here
# Option B: Create .env file in your project directory
echo "OPENAI_API_KEY=sk-your-key-here" > .env
# Option C: Use lra init to create .env template, then add your key
lra init
```
Get a key at [platform.openai.com/api-keys](https://platform.openai.com/api-keys).
## Quick start
```bash
lra chat
```
**Note:** Basic chat and memory work without the sandbox. For code execution or dynamic tool creation, start the sandbox first in a separate terminal (see [Sandbox](#sandbox-optional)).
## How it works
1. Run `lra chat` to start the chat loop.
2. Type a message; the agent may call tools (search memory, run code, create tasks, delegate to subagents).
3. Each turn is persisted to SQLite and ChromaDB so the agent can recall past context in future runs.
4. Each run gets a new session ID, but `get_recent_conversations(all_sessions=True)` and `search_memory` allow cross-session recall.
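Step 3's persistence can be pictured with plain `sqlite3` — a minimal sketch of cross-session recall, not the package's actual schema:
```python
import sqlite3

# In-memory store of conversation turns, keyed by session.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE turns (session_id TEXT, role TEXT, content TEXT)")
conn.executemany(
    "INSERT INTO turns VALUES (?, ?, ?)",
    [
        ("run-1", "user", "my name is Ada"),
        ("run-1", "assistant", "nice to meet you, Ada"),
        ("run-2", "user", "what's my name?"),
    ],
)

def recent_turns(session_id=None, all_sessions=False, limit=10):
    """Fetch turns for one session, or across all sessions when all_sessions=True."""
    if all_sessions:
        rows = conn.execute(
            "SELECT session_id, role, content FROM turns LIMIT ?", (limit,)
        )
    else:
        rows = conn.execute(
            "SELECT session_id, role, content FROM turns WHERE session_id = ? LIMIT ?",
            (session_id, limit),
        )
    return rows.fetchall()
```
Because `run-1`'s turns survive in the store, a later session can recall the user's name even though it started with a fresh session ID.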
## Agent tools
| Tool | Purpose |
|------|---------|
| `search_memory` | Semantic search over past summaries and facts (ChromaDB) |
| `write_memory` | Store facts or summaries for later recall |
| `get_recent_conversations` | Fetch recent turns (session or all sessions) |
| `create_task`, `update_task_status`, `list_tasks` | Track multi-step work |
| `create_sandbox`, `execute_code` | Run Python in isolated containers |
| `delegate_code_task`, `delegate_research_task` | Hand off to specialist subagents |
| `generate_tool` | Create new tools at runtime when no existing tool fits |
## Library usage
```python
import asyncio

from long_running_agents import run_agent, AgentDeps, StructuredMemoryStore, VectorMemoryStore
from config import SANDBOX_URL


async def main():
    structured = StructuredMemoryStore()
    vector = VectorMemoryStore()
    await structured.init_db()

    deps = AgentDeps(
        session_id="my-session",
        structured_store=structured,
        vector_store=vector,
        sandbox_base_url=SANDBOX_URL,
    )

    output, messages = await run_agent("What can you do?", deps)
    print(output)

    await structured.close()


asyncio.run(main())
```
## Examples
Examples are included in the package. After installing:
```bash
python -m long_running_agents.examples.01_basic_chat
python -m long_running_agents.examples.02_single_turn "Your question here"
```
| Example | Description |
|---------|-------------|
| `01_basic_chat` | Chat loop: multiple turns, memory persists across runs |
| `02_single_turn` | One-off query: ask a question, get a response, exit |
When installed from source, see `long_running_agents/examples/README.md` for details.
## CLI
```bash
lra init # Create .env and show setup instructions
lra chat # Start the agent chat loop
lra list-tools # List static and dynamic tools
lra inspect-tool X # Inspect a dynamic tool
lra list-memory -s SESSION # List memory for a session
```
### Framework commands
Create and run custom agents with their own system prompts:
```bash
lra create-agent [name] # Create agent dir (default: my_agent). Use --prompt or enter interactively
lra run [path] # Run a custom agent (path to agent dir or main.py)
lra list-agents # List agent directories
lra config # Show config
```
Create and manage tools:
```bash
lra create-tool # Create a tool interactively
lra create-tool --file X # Create a tool from a Python file
lra export-tools [-o path] # Export dynamic tools to static file
lra validate-tool FILE # Validate a tool file in sandbox
```
**Note:** `my_agent/` and `*_agent/` are in `.gitignore` by default so user-created agents are not committed. Add your own pattern to `.gitignore` if you want to ignore different agent dirs.
## Configuration
| Variable | Description | Default |
|----------|-------------|---------|
| OPENAI_API_KEY | OpenAI API key | (required) |
| SANDBOX_URL | Sandbox API base URL | http://localhost:8000 |
| DATABASE_URL | SQLAlchemy async URL | sqlite+aiosqlite:///./data/agent_memory.db |
| VECTOR_STORE_PATH | ChromaDB path | ./data/chroma_db |
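Putting the table together, a minimal `.env` might look like this (the key is a placeholder; the other values are the documented defaults, so you only need those lines when overriding them):

```
OPENAI_API_KEY=sk-your-key-here
SANDBOX_URL=http://localhost:8000
DATABASE_URL=sqlite+aiosqlite:///./data/agent_memory.db
VECTOR_STORE_PATH=./data/chroma_db
```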
## Sandbox (optional)
The sandbox enables code execution and tool validation. It includes `requests` and `httpx` for HTTP-fetching tools. Basic chat and memory work without it.
**Prerequisites:** Docker must be installed and running. On macOS, open Docker Desktop and wait until it's ready before starting the sandbox. The sandbox spawns isolated containers for code execution.
### Run order
1. **Start Docker** (e.g. open Docker Desktop on Mac).
2. Start the sandbox in one terminal.
3. Run the agent in another terminal.
```bash
# Terminal 1: Ensure Docker is running, then start sandbox (keep running)
python -m uvicorn sandbox.server:app --reload --port 8000
# Terminal 2: Run agent
lra chat
```
### Options
| Option | Command | When to use |
|--------|---------|-------------|
| **A: Local** | `python -m uvicorn sandbox.server:app --reload --port 8000` | Development; run from project root with deps installed |
| **B: Docker Compose** | `docker compose up sandbox` | Fully containerized; no local Python needed for sandbox |
### First run
On first start, the kernel image (`longrunningagents-kernel:latest`) is built automatically. This may take a minute.
### Rebuild kernel
If you updated `sandbox/Dockerfile` (e.g. added packages), rebuild the kernel:
```bash
docker rmi longrunningagents-kernel:latest
# Then restart the sandbox
```
## Project structure
```
├── agents/ # Main agent and subagents
├── tools/ # Memory, sandbox, task tools
├── sandbox/ # Sandbox API and kernel
├── memory/ # Structured and vector stores
├── schemas/ # Pydantic models
├── long_running_agents/ # Package exports
├── long_running_agents/examples/ # Example recipes (shipped with package)
├── cli.py # CLI entry point
├── config.py
├── main.py
└── pyproject.toml
```
## Development
```bash
pip install -e ".[dev]"
pytest tests/ -v
mypy agents tools memory schemas
```
## License
MIT
| text/markdown | LongRunningAgents | null | null | null | MIT | ai, agents, pydantic-ai, sandbox, memory | [
"Development Status :: 3 - Alpha",
"Intended Audience :: Developers",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12"
] | [] | null | null | >=3.10 | [] | [] | [] | [
"pydantic-ai[openai]>=0.0.14",
"httpx>=0.27.0",
"chromadb>=0.4.22",
"sqlalchemy[asyncio]>=2.0.0",
"aiosqlite>=0.19.0",
"greenlet>=3.0.0",
"python-dotenv>=1.0.0",
"tenacity>=8.0.0",
"logfire>=0.1.0",
"sentence-transformers>=2.2.0",
"pytest>=7.0; extra == \"dev\"",
"pytest-asyncio>=0.21; extra == \"dev\"",
"mypy>=1.0; extra == \"dev\"",
"ruff>=0.1; extra == \"dev\""
] | [] | [] | [] | [
"Homepage, https://github.com/prith27/lra",
"Repository, https://github.com/prith27/lra"
] | twine/6.2.0 CPython/3.10.9 | 2026-02-21T03:19:06.713539 | long_running_agents-0.1.3.tar.gz | 33,472 | 56/bb/7a33448a09a4eaf4d14e75588b04074b97aee4afde64c5e2781f8da9f9a1/long_running_agents-0.1.3.tar.gz | source | sdist | null | false | 2b623cd0005ca352c46cc0a7fcfaf4c1 | 0aec3819f06613a2db632afe5ea4a9a7591901e21e6d9564030122eafd1ece19 | 56bb7a33448a09a4eaf4d14e75588b04074b97aee4afde64c5e2781f8da9f9a1 | null | [] | 214 |
2.1 | odoo-addon-l10n-it-riba-oca | 18.0.1.2.0.5 | Ricevute bancarie | .. image:: https://odoo-community.org/readme-banner-image
:target: https://odoo-community.org/get-involved?utm_source=readme
:alt: Odoo Community Association
=======================
ITA - Ricevute bancarie
=======================
..
!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
!! This file is generated by oca-gen-addon-readme !!
!! changes will be overwritten. !!
!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
!! source digest: sha256:c03bb376298aabd54edc7c9993ada36a9a48a80f8c93ea4346bd1f4a83b8f792
!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
.. |badge1| image:: https://img.shields.io/badge/maturity-Beta-yellow.png
:target: https://odoo-community.org/page/development-status
:alt: Beta
.. |badge2| image:: https://img.shields.io/badge/license-AGPL--3-blue.png
:target: http://www.gnu.org/licenses/agpl-3.0-standalone.html
:alt: License: AGPL-3
.. |badge3| image:: https://img.shields.io/badge/github-OCA%2Fl10n--italy-lightgray.png?logo=github
:target: https://github.com/OCA/l10n-italy/tree/18.0/l10n_it_riba_oca
:alt: OCA/l10n-italy
.. |badge4| image:: https://img.shields.io/badge/weblate-Translate%20me-F47D42.png
:target: https://translation.odoo-community.org/projects/l10n-italy-18-0/l10n-italy-18-0-l10n_it_riba_oca
:alt: Translate me on Weblate
.. |badge5| image:: https://img.shields.io/badge/runboat-Try%20me-875A7B.png
:target: https://runboat.odoo-community.org/builds?repo=OCA/l10n-italy&target_branch=18.0
:alt: Try me on Runboat
|badge1| |badge2| |badge3| |badge4| |badge5|
**English**

Module to manage RiBa (Ricevute Bancarie, Italian bank receipts).
**Table of contents**
.. contents::
:local:
Configuration
=============
**English**

In the RiBa configuration you can specify whether the RiBa is 'Salvo
buon fine' (subject to collection) or 'Al dopo incasso' (after
collection); the two options have completely different flows.

- After collection: invoices are marked as paid on acceptance; the
  collection can then be recorded with a normal bank reconciliation,
  which closes the open bills receivable created on acceptance.
- Subject to collection: the generated journal entries follow the
  structure described in the document http://goo.gl/jpRhJp

Several configurations can be defined (from the menu *Configuration →
Payments → RiBa Configuration*). For each one, in the 'subject to
collection' case, you must specify at least the journal and the account
to use when the bank accepts the slip. This account must be of type
'Receivable' (for example "RiBa under collection", to be created if
needed).

The configuration for the credit phase is used when the bank credits
the amount of the slip. You can use a dedicated journal, for example
"RiBa Credit", and an account named for example "Banks - RiBa under
collection", which must not be of type 'Bank and Cash'.

The configuration for unpaid RiBa is used when the customer fails to
pay. The account can be named, for example, "Unpaid receivables".

If you also want to handle fees for each due date paid by bank receipt,
configure a service-type product and link it in *Configuration →
Settings → Accounting → Taxes → RiBa collection fees*.
Usage
=====
**English**

To use the RiBa mechanism you must configure a payment term of type
'RiBa'.

To issue a slip, go to *RiBa → Issue RiBa* and select the payments for
which to issue it. If grouping is enabled for the customer, payments of
the same customer with the same due date form a single line of the
slip.

The possible states of the slip are *Draft*, *Accepted*, *Credited*,
*Paid*, *Unpaid* and *Cancelled*. At each state transition you can
generate the related journal entries, which are summarized in the
"Accounting" tab. This tab is present both on the slip and on its
lines; the lines also have a dedicated view to ease working on a single
item instead of the whole slip.

The menu entry 'RiBa Presentation' extracts RiBa lines until the
maximum amount entered by the user is reached.

The invoice list includes a column to monitor the exposure, i.e. the
amount owed by the customer for issued RiBa that are not yet due.

By default the payment entries are dated with the due date of the RiBa,
but the date can be changed at two points:

- while creating the payment, by clicking "Mark lines as paid" or "Mark
  as paid", or by using the "RiBa registration at due date" action and
  entering a date in the ``Payment date`` field,
- after the payment has actually taken place, by selecting the entry
  from the list view and running the "Set RiBa payment date" action.

RiBa cannot be issued for invoices to public entities that require more
than one distinct CIG and CUP per invoice; in that particular case,
issue multiple invoices. RiBa cannot be grouped at issue time if the
invoices carry different CIG and CUP codes: one slip line is created
per invoice.
Bug Tracker
===========
Bugs are tracked on `GitHub Issues <https://github.com/OCA/l10n-italy/issues>`_.
In case of trouble, please check there if your issue has already been reported.
If you spotted it first, help us to smash it by providing a detailed and welcomed
`feedback <https://github.com/OCA/l10n-italy/issues/new?body=module:%20l10n_it_riba_oca%0Aversion:%2018.0%0A%0A**Steps%20to%20reproduce**%0A-%20...%0A%0A**Current%20behavior**%0A%0A**Expected%20behavior**>`_.
Do not contact contributors directly about support or help with technical issues.
Credits
=======
Contributors
------------
- Lorenzo Battistini <lorenzo.battistini@agilebg.com>
- Andrea Cometa <a.cometa@apuliasoftware.it>
- Andrea Gallina <a.gallina@apuliasoftware.it>
- Davide Corio <info@davidecorio.com>
- Giacomo Grasso <giacomo.grasso@agilebg.com>
- Gabriele Baldessari <gabriele.baldessari@gmail.com>
- Alex Comba <alex.comba@agilebg.com>
- Marco Calcagni <mcalcagni@dinamicheaziendali.it>
- Sergio Zanchetta <https://github.com/primes2h>
- Simone Vanin <simone.vanin@agilebg.com>
- Sergio Corato <https://github.com/sergiocorato>
- Giovanni Serra <giovanni@gslab.it>
- `Aion Tech <https://aiontech.company/>`__:
- Simone Rubino <simone.rubino@aion-tech.it>
- `TAKOBI <https://takobi.online>`__:
- Simone Rubino <sir@takobi.online>
- Nextev Srl <odoo@nextev.it>
Maintainers
-----------
This module is maintained by the OCA.
.. image:: https://odoo-community.org/logo.png
:alt: Odoo Community Association
:target: https://odoo-community.org
OCA, or the Odoo Community Association, is a nonprofit organization whose
mission is to support the collaborative development of Odoo features and
promote its widespread use.
This module is part of the `OCA/l10n-italy <https://github.com/OCA/l10n-italy/tree/18.0/l10n_it_riba_oca>`_ project on GitHub.
You are welcome to contribute. To learn how please visit https://odoo-community.org/page/Contribute.
| text/x-rst | Odoo Community Association (OCA) | support@odoo-community.org | null | null | AGPL-3 | null | [
"Programming Language :: Python",
"Framework :: Odoo",
"Framework :: Odoo :: 18.0",
"License :: OSI Approved :: GNU Affero General Public License v3",
"Development Status :: 4 - Beta"
] | [] | https://github.com/OCA/l10n-italy | null | >=3.10 | [] | [] | [] | [
"odoo-addon-account_due_list==18.0.*",
"odoo-addon-account_payment_term_extension==18.0.*",
"odoo-addon-l10n_it_abicab==18.0.*",
"odoo-addon-l10n_it_edi_related_document==18.0.*",
"odoo==18.0.*",
"openupgradelib",
"unidecode"
] | [] | [] | [] | [] | twine/6.2.0 CPython/3.12.3 | 2026-02-21T03:17:38.442413 | odoo_addon_l10n_it_riba_oca-18.0.1.2.0.5-py3-none-any.whl | 119,277 | 58/f2/1bf01da46ebc1f36c82ea088ceea2812a894098cefc6f9f88bc009b063dc/odoo_addon_l10n_it_riba_oca-18.0.1.2.0.5-py3-none-any.whl | py3 | bdist_wheel | null | false | 239d02c76785b0b701eaa6be0412d46a | a477855b20ce956baeed06fb4688a7a347637a9ca65c5f4cbc3b37bb1df46720 | 58f21bf01da46ebc1f36c82ea088ceea2812a894098cefc6f9f88bc009b063dc | null | [] | 87 |
2.4 | gmccc | 0.2.3 | Guan-Ming's Claude Code Skill CLI | # Guan-Ming's Claude Code Skill CLI
[](https://pypi.org/project/gmccc/)
[](https://www.npmjs.com/package/gmccc)
CLI to install, run, and schedule Claude Code skills.
## Install
```bash
npx gmccc install # npm (gmccc install/uninstall only)
uv tool install gmccc # PyPI (full CLI)
```
## Commands
| Command | Alias | Description |
|---|---|---|
| `gmccc install` | `gmccc i` | Install skills via openskills |
| `gmccc uninstall` | `gmccc u` | Remove skills and config |
| `gmccc run <name>` | `gmccc r <name>` | Run a specific job |
| `gmccc config` | `gmccc c` | Create default config file |
| `gmccc config <path>` | `gmccc c <path>` | Create config at custom path |
| `gmccc start` | | Start scheduler daemon |
| `gmccc stop` | | Stop scheduler daemon |
| `gmccc restart` | | Restart scheduler daemon |
| `gmccc info` | | Show status, jobs, and logs |
| `gmccc test` | `gmccc t` | Simulate execution |
| text/markdown | null | null | null | null | null | null | [] | [] | null | null | >=3.12 | [] | [] | [] | [
"apscheduler>=3.11.2",
"pydantic>=2.12.5"
] | [] | [] | [] | [] | uv/0.9.8 | 2026-02-21T03:17:08.038059 | gmccc-0.2.3.tar.gz | 50,683 | 40/c7/6bb292ef2783bf9592618a14697a20541d53489b27240a704c88c4101189/gmccc-0.2.3.tar.gz | source | sdist | null | false | 18bc6719a51b54753302da9af0173fbe | 1a12a146d479a267e4ca72844609fd63de6398a875f895283b2149928a5c6df0 | 40c76bb292ef2783bf9592618a14697a20541d53489b27240a704c88c4101189 | null | [
"LICENSE"
] | 220 |
2.4 | rgpycrumbs | 1.1.0 | A dispatcher-based analytical and computational suite for chemical physics |
# Table of Contents
- [About](#about)
- [Ecosystem Overview](#org369b7ed)
- [CLI Design Philosophy](#cli-how)
- [Usage](#usage)
- [Library API](#library-api)
- [CLI Tools](#cli-tools)
- [eOn](#cli-eon)
- [Contributing](#contributing)
- [Development](#development)
- [When is pixi needed?](#org0ef8b0d)
- [Versioning](#org0974275)
- [Release Process](#release-notes)
- [License](#license)
<a id="about"></a>
# About

[](https://github.com/pypa/hatch)
A **pure-python** computational library and CLI toolkit for chemical physics
research. `rgpycrumbs` provides both importable library modules for
computational tasks (surface fitting, structure analysis, interpolation) and a
dispatcher-based CLI for running self-contained research scripts.
The library side offers:
- **Surface fitting** (`rgpycrumbs.surfaces`) – JAX-based kernel methods (TPS, RBF, Matern, SE, IMQ) with gradient-enhanced variants for energy landscape interpolation
- **Structure analysis** (`rgpycrumbs.geom.analysis`) – distance matrices, bond matrices, and fragment detection via ASE
- **IRA matching** (`rgpycrumbs.geom.ira`) – iterative rotations and assignments for RMSD-based structure comparison
- **Interpolation** (`rgpycrumbs.interpolation`) – spline interpolation utilities
- **Data types** (`rgpycrumbs.basetypes`) – shared data structures for NEB paths, saddle searches, and molecular geometries
The CLI tools rely on optional dependencies fetched on-demand via PEP 723 + `uv`.
<a id="org369b7ed"></a>
## Ecosystem Overview
`rgpycrumbs` is the central hub of an interlinked suite of libraries.

<a id="cli-how"></a>
## CLI Design Philosophy
The library is designed with the following principles in mind:
- **Dispatcher-Based Architecture:** The top-level `rgpycrumbs.cli` command acts as a
lightweight dispatcher. It does not contain the core logic of the tools
itself. Instead, it parses user commands to identify the target script and
then invokes it in an isolated subprocess using the `uv` runner. This provides
a unified command-line interface while keeping the tools decoupled.
- **Isolated & Reproducible Execution:** Each script is a self-contained unit that
declares its own dependencies via [PEP 723](https://peps.python.org/pep-0723/) metadata. The `uv` runner uses this
information to resolve and install the exact required packages into a
temporary, cached environment on-demand. This design guarantees
reproducibility and completely eliminates the risk of dependency conflicts
between different tools in the collection.
- **Lightweight Core, On-Demand Dependencies:** The installable `rgpycrumbs`
package has minimal core dependencies (`click`, `numpy`). Heavy scientific
libraries are available as optional extras (e.g. `pip install
rgpycrumbs[surfaces]` for JAX). For CLI tools, dependencies are fetched by
`uv` only when a script that needs them is executed, keeping the base
installation lightweight.
- **Modular & Extensible Tooling:** Each utility is an independent script. This
modularity simplifies development, testing, and maintenance, as changes to one
tool cannot inadvertently affect another. New tools can be added to the
collection without modifying the core dispatcher logic, making the system
easily extensible.
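As context for the PEP 723 mechanism above, a self-contained script starts with an inline metadata block that `uv run` reads to build its environment. The sketch below is illustrative, not an actual rgpycrumbs tool: the declared `click` dependency is an example of where third-party requirements go, while the body deliberately uses only the standard library.

```python
# /// script
# requires-python = ">=3.10"
# dependencies = [
#     "click>=8.1",  # example third-party dependency; uv installs it before running
# ]
# ///
"""A hypothetical self-contained tool in the rgpycrumbs CLI style."""

import json


def main() -> None:
    # Plain Python ignores the comment metadata above; only `uv run` parses it.
    print(json.dumps({"tool": "example", "status": "ok"}))


if __name__ == "__main__":
    main()
```

Invoked as `uv run example_tool.py`, uv resolves the declared dependencies into a cached temporary environment before executing; plain `python example_tool.py` also works here because the body itself needs nothing beyond the standard library.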
<a id="usage"></a>
# Usage
<a id="library-api"></a>
## Library API
The library modules can be imported directly:
```python
# Surface fitting (requires jax: pip install rgpycrumbs[surfaces])
from rgpycrumbs.surfaces import get_surface_model

model = get_surface_model("tps")

# Structure analysis (requires ase, scipy: pip install rgpycrumbs[analysis])
from rgpycrumbs.geom.analysis import analyze_structure

# Spline interpolation (requires scipy: pip install rgpycrumbs[interpolation])
from rgpycrumbs.interpolation import spline_interp

# Data types (no extra deps)
from rgpycrumbs.basetypes import nebpath, SaddleMeasure
```
<a id="cli-tools"></a>
## CLI Tools
The general command structure is:
```bash
python -m rgpycrumbs.cli [subcommand-group] [script-name] [script-options]
```
You can see the list of available command groups:
```console
$ python -m rgpycrumbs.cli --help
Usage: rgpycrumbs [OPTIONS] COMMAND [ARGS]...

  A dispatcher that runs self-contained scripts using 'uv'.

Options:
  --help  Show this message and exit.

Commands:
  eon  Dispatches to a script within the 'eon' submodule.
```
<a id="cli-eon"></a>
### eOn
- Plotting NEB Paths (`plt-neb`)
This script visualizes the energy profile of Nudged Elastic Band (NEB) calculations over optimization steps.
To see the help text for this specific script:
```console
$ python -m rgpycrumbs eon plt-neb --help
--> Dispatching to: uv run /path/to/rgpycrumbs/eon/plt_neb.py --help
Usage: plt_neb.py [OPTIONS]

  Plots a series of NEB energy paths from .dat files.
  ...

Options:
  --input-pattern TEXT    Glob pattern for input data files.
  -o, --output-file PATH  Output file name.
  --start INTEGER         Starting file index to plot (inclusive).
  --end INTEGER           Ending file index to plot (exclusive).
  --help                  Show this message and exit.
```
To plot a specific range of `neb_*.dat` files and save the output:
```bash
python -m rgpycrumbs eon plt-neb --start 100 --end 150 -o final_path.pdf
```
To show the plot interactively without saving:
```bash
python -m rgpycrumbs eon plt-neb --start 280
```
- Splitting CON files (`con-splitter`)
This script takes a multi-image trajectory file (e.g., from a finished NEB
calculation) and splits it into individual frame files, creating an input file
for a new calculation.
To split a trajectory file:
```bash
rgpycrumbs eon con-splitter neb_final_path.con -o initial_images
```
This will create a directory named `initial_images` containing `ipath_000.con`,
`ipath_001.con`, etc., along with an `ipath.dat` file listing their paths.
<a id="contributing"></a>
# Contributing
All contributions are welcome, but for the CLI tools please follow [established
best practices](https://realpython.com/python-script-structure/).
<a id="development"></a>
## Development
This project uses [`uv`](https://docs.astral.sh/uv/) as the primary development tool with
[`hatchling`](https://hatch.pypa.io/) + [`hatch-vcs`](https://github.com/ofek/hatch-vcs) for building and versioning.
```bash
# Clone and install in development mode with test dependencies
uv sync --extra test

# Run the pure tests (no heavy optional deps)
uv run pytest -m pure

# Run interpolation tests (needs scipy)
uv run --extra interpolation pytest -m interpolation
```
<a id="org0ef8b0d"></a>
### When is pixi needed?
[Pixi](https://prefix.dev/) is only needed for features that require **conda-only** packages (not
available on PyPI):
- `fragments` tests: need `tblite`, `ira`, `pyvista` (conda)
- `surfaces` tests: may prefer conda `jax` builds
For everything else, `uv` is sufficient.
<a id="org0974275"></a>
### Versioning
Versions are derived automatically from **git tags** via `hatch-vcs`
(setuptools-scm). There is no manual version field; the version is the latest
tag (e.g. `v1.0.0` → `1.0.0`). Between tags, dev versions are generated
automatically (e.g. `1.0.1.dev3+gabcdef`).
<a id="release-notes"></a>
## Release Process
```bash
# 1. Ensure tests pass
uv run --extra test pytest -m pure

# 2. Build changelog (uses towncrier fragments in docs/newsfragments/)
uvx towncrier build --version "v1.0.0"

# 3. Commit the changelog
git add CHANGELOG.rst && git commit -m "doc: release notes for v1.0.0"

# 4. Tag the release (hatch-vcs derives the version from this tag)
git tag -a v1.0.0 -m "Version 1.0.0"

# 5. Build and publish
uv build
uvx twine upload dist/*
```
<a id="license"></a>
# License
MIT. However, this is an academic resource, so **please cite it**
wherever possible via:
- The Zenodo DOI for general use.
- The `wailord` paper for ORCA usage
| text/markdown | null | Rohit Goswami <rgoswami@ieee.org> | null | null | MIT | analysis, compchem, interpolation, surfaces | [
"Development Status :: 4 - Beta",
"Programming Language :: Python",
"Programming Language :: Python :: 3 :: Only",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: Implementation :: CPython"
] | [] | null | null | >=3.10 | [] | [] | [] | [
"click>=8.1",
"numpy>=1.24",
"rich>=13.0",
"ase>=3.22; extra == \"analysis\"",
"scipy>=1.11; extra == \"analysis\"",
"scipy>=1.11; extra == \"interpolation\"",
"ruff>=0.1.6; extra == \"lint\"",
"jax>=0.4; extra == \"surfaces\"",
"chemparseplot; extra == \"test\"",
"pychum; extra == \"test\"",
"pytest-cov>=4.1.0; extra == \"test\"",
"pytest>=9.0; extra == \"test\""
] | [] | [] | [] | [
"Documentation, https://github.com/HaoZeke/rgpycrumbs#readme",
"Issues, https://github.com/HaoZeke/rgpycrumbs/issues",
"Source, https://github.com/HaoZeke/rgpycrumbs"
] | uv/0.8.4 | 2026-02-21T03:16:56.928241 | rgpycrumbs-1.1.0.tar.gz | 69,510 | a3/70/2717221d3239d43b0a1b3ddfdb6345cd2445afb5d379bab68ebc5a1f754c/rgpycrumbs-1.1.0.tar.gz | source | sdist | null | false | 8b5e38c3f2fde9b9bcae87af98617880 | 8a949e24e3d55d99f7372d1733e1e6c4076982272f5a571197400b1220b13af3 | a3702717221d3239d43b0a1b3ddfdb6345cd2445afb5d379bab68ebc5a1f754c | null | [
"LICENSE"
] | 233 |
2.4 | otto-agent | 0.6.2 | Otto — AI agent platform | <p align="center">
<img src="docs/otto-logo.png" alt="Otto" width="180" />
</p>
<h1 align="center">Otto</h1>
<p align="center">
<strong>Self-hosted AI agent platform. Telegram bot, MCP tools, scheduled jobs, persistent memory.</strong>
</p>
<p align="center">
<a href="https://pypi.org/project/otto-agent/"><img src="https://img.shields.io/pypi/v/otto-agent" alt="PyPI" /></a>
<a href="LICENSE"><img src="https://img.shields.io/badge/license-MIT-green" alt="License" /></a>
<a href="#"><img src="https://img.shields.io/badge/python-3.12+-yellow" alt="Python" /></a>
</p>
---
Otto is a personal AI agent that runs on your machine and talks to you through Telegram. It connects to any LLM via LiteLLM, exposes tools through MCP, runs scheduled jobs, and remembers context across conversations.
No cloud platform. No vendor lock-in. Just `pip install otto-agent` and go.
## Install
```bash
pip install otto-agent
```
Or with uv:
```bash
uv tool install otto-agent
```
## Setup
```bash
otto setup
```
The wizard walks you through:
- Choosing a model (`provider/model-name` format — Anthropic, OpenAI, Google, Ollama, OpenRouter, etc.)
- Connecting your Telegram bot token (from [@BotFather](https://t.me/botfather))
- Setting an owner ID so only you can use it
Config lives in `~/.otto/`.
## Usage
Start the Telegram bot:
```bash
otto start # daemonized
otto run # foreground (for debugging)
```
Manage the process:
```bash
otto status # check if running
otto stop # stop the daemon
otto logs # tail recent logs
```
Configure the model:
```bash
otto config model get
otto config model set openai/gpt-4o
otto config model list
```
## Telegram Commands
| Command | What it does |
|---------|-------------|
| `/model` | Switch LLM model |
| `/tools` | List available tools |
| `/memory` | Search stored memories |
| `/stop` | Cancel a running response |
| `/session` | Start a fresh conversation |
## Features
**Multi-backend LLM** — Any model supported by LiteLLM: Anthropic, OpenAI, Google, Ollama, OpenRouter, and more. Switch models mid-conversation with `/model`. Supports **OAuth-authenticated models** (Claude Code, Google Code, OpenAI Codex).
**MCP Tool Gateway** — Tools are MCP servers defined in `~/.otto/tools.yaml`. Otto connects to them at startup and exposes them to the agent. Includes **Workspace Policies** for sandboxed file operations (default vs. strict modes).
**Persistent Memory** — Stores and retrieves context across sessions. Memory is searchable and can persist identity/personality rules.
**Agent Orchestration** — Otto can delegate tasks to async sub-agents running in parallel. Delegation is fire-and-forget: Otto returns immediately and notifies you via Telegram when the job is done. Sub-agents run with the same tools and model access as the main session. Delegation is contract-based — specify deliverables, constraints, and optional validation commands so results are verified before delivery.
Built-in delegation tools:
- `delegate_task` — spawn a background sub-agent with a structured contract
- `list_jobs` — inspect status of all delegated jobs
- `cancel_job` — cancel a running sub-agent
**Scheduled Jobs** — Cron-style scheduling built in, with background prompt execution.
**Web UI** — Built-in dashboard for monitoring status, viewing logs, and managing configuration (default: http://localhost:7070).
**Telegram UX** — Interactive commands, inline controls, status cards, and chunked delivery for long responses.
**File Sending** — Send files (PDFs, images, documents) directly to Telegram.
## Architecture
```
You (Telegram / Web)
│
▼
┌─────────────┐
│ Telegram Bot │──── Commands (/model, /tools, /stop, ...)
└──────┬──────┘
│
┌──────▼──────┐
│ Web UI │──── Dashboard, monitoring, config
└──────┬──────┘
▼
┌─────────────┐
│ Chat Layer │──── Sessions, memory, system prompt
└──────┬──────┘
▼
┌─────────────┐
│ Agent │──── Tool-calling loop (LiteLLM → any LLM)
└──────┬──────┘
▼
┌─────────────┐
│ MCP Gateway │──── Connects to tool servers defined in tools.yaml
└─────────────┘
```
## Configuration
All config is in `~/.otto/`:
| File | Purpose |
|------|---------|
| `config.yaml` | Model, Telegram token, owner ID, web/workspace settings |
| `tools.yaml` | MCP tool server definitions |
| `skills/` | Custom skill modules |
| `memory.db` | Persistent memory store |
| `sessions/` | Conversation history |
| `logs/` | Structured logs |
| `credentials/` | OAuth provider tokens |
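Tool servers are declared in `~/.otto/tools.yaml`, though the exact schema is not shown in this README. Purely as an illustrative sketch (every key and value below is hypothetical, not Otto's documented format), an MCP server entry could look like:

```yaml
# HYPOTHETICAL sketch of ~/.otto/tools.yaml — key names are illustrative only;
# consult the file generated by `otto setup` for the real schema.
servers:
  filesystem:
    command: npx
    args: ["-y", "@modelcontextprotocol/server-filesystem", "/home/you/workspace"]
```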
## Development
```bash
git clone https://github.com/1broseidon/otto.git
cd otto
uv sync --dev
make check # lint + test
otto web # start web UI for development
```
## License
MIT
| text/markdown | null | George <george@example.com> | null | null | null | agent, automation, cli, tools | [
"Development Status :: 3 - Alpha",
"Environment :: Console",
"Intended Audience :: Developers",
"License :: OSI Approved :: MIT License",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: 3.13",
"Topic :: Utilities"
] | [] | null | null | >=3.12 | [] | [] | [] | [
"aiohttp>=3.9.0",
"anthropic>=0.52.0",
"beautifulsoup4>=4.12.0",
"clypi>=0.2.0",
"croniter>=2.0.0",
"filelock>=3.16.0",
"litellm>=1.81.8",
"markdownify>=0.11.0",
"mcp>=1.0.0",
"mistune>=3.2.0",
"pdfplumber>=0.10.0",
"pillow>=10.0.0",
"playwright>=1.40.0",
"python-telegram-bot[job-queue]>=20.0",
"pyyaml>=6.0",
"structlog>=25.0.0",
"tomlkit>=0.13.0",
"tree-sitter-go>=0.23.0",
"tree-sitter-java>=0.23.0",
"tree-sitter-javascript>=0.23.0",
"tree-sitter-python>=0.23.0",
"tree-sitter-rust>=0.23.0",
"tree-sitter-typescript>=0.23.0",
"tree-sitter>=0.23.0",
"faster-whisper>=1.0.0; extra == \"all\"",
"faster-whisper>=1.0.0; extra == \"voice\""
] | [] | [] | [] | [
"Homepage, https://github.com/1broseidon/otto",
"Documentation, https://otto-agent.dev",
"Repository, https://github.com/1broseidon/otto",
"Issues, https://github.com/1broseidon/otto/issues"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-21T03:14:52.721796 | otto_agent-0.6.2.tar.gz | 158,374 | fb/ab/3faf6bac45afe5c2beb026c7f64e4ce89abab0541af78063a8e296ccf43d/otto_agent-0.6.2.tar.gz | source | sdist | null | false | d5bc735dba764acfeb13fad3df3cb508 | 305784ed39abeb17b136a4fbd089e16bb6e0e66961a97f1c65a9d14b605cdb30 | fbab3faf6bac45afe5c2beb026c7f64e4ce89abab0541af78063a8e296ccf43d | MIT | [
"LICENSE"
] | 226 |
2.3 | nonebot-plugin-message-snapper-core | 0.1.0 | Message snapshot core library that converts messages into image snapshots | <div align="center">
<a href="https://v2.nonebot.dev/store">
<img src="https://raw.githubusercontent.com/fllesser/nonebot-plugin-template/refs/heads/resource/.docs/NoneBotPlugin.svg" width="310" alt="logo"></a>
## ✨ nonebot-plugin-message-snapper-core ✨
[](./LICENSE)
[](https://pypi.python.org/pypi/nonebot-plugin-message-snapper-core)
[](https://www.python.org)
[](https://github.com/astral-sh/uv)
<br/>
[](https://github.com/astral-sh/ruff)
[](https://results.pre-commit.ci/latest/github/Xwei1645/nonebot-plugin-message-snapper-core/master)
</div>
## 📖 Introduction

Detailed introduction of the plugin goes here.
## 💿 Installation
<details open>
<summary>Install with nb-cli</summary>

Open a terminal in the root directory of your nonebot2 project and run the following command to install:

```bash
nb plugin install nonebot-plugin-message-snapper-core --upgrade
```

Install from the **PyPI** index:

```bash
nb plugin install nonebot-plugin-message-snapper-core --upgrade -i "https://pypi.org/simple"
```

Install from the **Tsinghua** mirror:

```bash
nb plugin install nonebot-plugin-message-snapper-core --upgrade -i "https://pypi.tuna.tsinghua.edu.cn/simple"
```
</details>
<details>
<summary>Install with a package manager</summary>
Open a terminal in the plugin directory of your nonebot2 project and run the install command for your package manager
<details open>
<summary>uv</summary>
uv add nonebot-plugin-message-snapper-core
Install the repository's master branch
uv add git+https://github.com/Xwei1645/nonebot-plugin-message-snapper-core@master
</details>
<details>
<summary>pdm</summary>
pdm add nonebot-plugin-message-snapper-core
Install the repository's master branch
pdm add git+https://github.com/Xwei1645/nonebot-plugin-message-snapper-core@master
</details>
<details>
<summary>poetry</summary>
poetry add nonebot-plugin-message-snapper-core
Install the repository's master branch
poetry add git+https://github.com/Xwei1645/nonebot-plugin-message-snapper-core@master
</details>
Open the `pyproject.toml` file in the nonebot2 project root and append the following to the `[tool.nonebot]` section
plugins = ["nonebot_plugin_message_snapper_core"]
</details>
<details>
<summary>Install with nbr (available when managing dependencies with uv)</summary>
[nbr](https://github.com/fllesser/nbr) is a uv-based nb-cli that makes managing nonebot2 convenient
nbr plugin install nonebot-plugin-message-snapper-core
Install from the **PyPI** index
nbr plugin install nonebot-plugin-message-snapper-core -i "https://pypi.org/simple"
Install from the **Tsinghua mirror**
nbr plugin install nonebot-plugin-message-snapper-core -i "https://pypi.tuna.tsinghua.edu.cn/simple"
</details>
## ⚙️ Configuration
Add the required options from the table below to the `.env` file of your nonebot2 project
| Option | Required | Default | Description |
| :-----: | :---: | :----: | :------: |
| Option 1 | Yes | None | Description of the option |
| Option 2 | No | None | Description of the option |
## 🎉 Usage
### Command Table
| Command | Permission | Needs @ | Scope | Description |
| :---: | :---: | :---: | :---: | :------: |
| Command 1 | Owner | No | Private chat | Description |
| Command 2 | Group member | Yes | Group chat | Description |
### 🎨 Screenshots
Add screenshots here if available
| text/markdown | Xwei1645 | Xwei1645 <Xwei1645@gmail.com> | null | null | null | null | [] | [] | null | null | >=3.10 | [] | [] | [] | [
"aiofiles>=25.1.0",
"nonebot-adapter-onebot<3.0.0,>=2.4.6",
"nonebot-plugin-htmlrender>=0.6.7",
"nonebot-plugin-localstore>=0.7.4",
"nonebot2<3.0.0,>=2.4.3"
] | [] | [] | [] | [
"Homepage, https://github.com/Xwei1645/nonebot-plugin-message-snapper-core",
"Issues, https://github.com/Xwei1645/nonebot-plugin-message-snapper-core/issues",
"Repository, https://github.com/Xwei1645/nonebot-plugin-message-snapper-core.git"
] | uv/0.10.4 {"installer":{"name":"uv","version":"0.10.4","subcommand":["publish"]},"python":null,"implementation":{"name":null,"version":null},"distro":{"name":"Ubuntu","version":"24.04","id":"noble","libc":null},"system":{"name":null,"release":null},"cpu":null,"openssl_version":null,"setuptools_version":null,"rustc_version":null,"ci":true} | 2026-02-21T03:13:56.246367 | nonebot_plugin_message_snapper_core-0.1.0.tar.gz | 8,395 | eb/e6/122b21dda012618056d2e9b6da083e8be9a986f101c333e5e9f64c662d52/nonebot_plugin_message_snapper_core-0.1.0.tar.gz | source | sdist | null | false | 1a81033814cc2f7d0c315682ae19012f | d8733429929bafc0317f29548180ff77a0dea8afbf2f699d2d352768912af408 | ebe6122b21dda012618056d2e9b6da083e8be9a986f101c333e5e9f64c662d52 | null | [] | 293 |
2.4 | aiel-runtime | 0.5.1 | AIEL Runtime adapter used by the Execution Plane to load and execute user projects. | # AI Execution Layer Runtime (`aiel-runtime`)
`aiel-runtime` is the **execution adapter** used by the **Execution Plane (EP)** to load and run user projects from
**immutable snapshots**. It provides a stable, versioned runtime entrypoint that:
1. loads a project snapshot (downloaded by EP from the Data Plane),
2. imports `entry_point.py` (registering exports via `aiel-sdk` decorators),
3. validates contracts (signatures, export consistency),
4. executes exported handlers (tools / agents / flows / HTTP / MCP).
This repository is intentionally focused on **runtime orchestration and invocation**, not on developer ergonomics.
Developer-facing contracts live in **`aiel-sdk`**.
---
## Why this exists
A production Execution Plane needs a deterministic way to execute user code that is:
- **versioned** (reproducible across time),
- **contract-driven** (predictable invocation shape),
- **isolated** (sandboxed by EP),
- **observable** (structured outputs and error taxonomy),
- **portable** (runs as a subprocess or container entrypoint).
`aiel-runtime` is that adapter.
---
## Key features
- **Single runner entrypoint**: `python -m aiel_runtime.runner`
- **Contract validation** via `aiel-sdk`
- **Deterministic execution** with lockstep runtime/SDK versioning
- **Describe + Invoke protocol** (stdin JSON → stdout JSON)
- **Snapshot-first** execution model (EP controls downloads/caching)
- Designed for EP sandbox enforcement (CPU/memory/timeouts/network/secrets)
---
## Repository layout (src-layout)
This repo uses a `src/` layout to prevent accidental import shadowing and to support editable installs reliably.
---
## Installation
### Local development (editable)
```bash
python -m pip install -U pip
python -m pip install -e .
```
### Production (pinned)
Pin versions for deterministic behavior:
```bash
python -m pip install "aiel-runtime==X.Y.Z"
```
Optional curated bundles (if your runtime image includes these):
```bash
python -m pip install "aiel-runtime[ai]==X.Y.Z"
```
> Recommended policy: lockstep versions
> `aiel-runtime==X.Y.Z` depends on `aiel-sdk==X.Y.Z`
## Runtime entrypoint
The Execution Plane runs:
```bash
python -m aiel_runtime.runner
```
EP must provide:
- `EP_FILES_ROOT`: absolute path to a snapshot folder containing `entry_point.py`.
### Example:
```bash
export EP_FILES_ROOT=/tmp/ep/snapshots/<tenant>/<project>/<release>/files
```
## Snapshot expectations
At minimum, the snapshot must contain:
- `entry_point.py` (required)
- any local modules imported by `entry_point.py` (e.g., `tools/`, `agents/`, `mcp_server/`)
A typical snapshot layout:
```bash
files/
entry_point.py
tools/
core.py
agents/
main.py
mcp_server/
core.py
requirements.txt # optional (runtime images should be pre-baked)
```
> Dependency installation at runtime is intentionally not supported.
> Runtime images must include curated dependencies up-front.
## Runner protocol (stdin → stdout)
`aiel-runtime` is a small JSON protocol engine.
### Describe exports
#### stdin
```json
{ "schema_version": "v1", "action": "describe" }
```
#### stdout
```json
{
"schema_version": "v1",
"ok": true,
"result": {
"sdk_version": "X.Y.Z",
"runtime_version": "X.Y.Z",
"tools": ["normalize_email"],
"agents": ["collect_personal_data"],
"flows": ["driver_onboarding"],
"http_handlers": [{"method":"POST","path":"/driver/onboard"}],
"mcp_servers": [{"name":"driver_support","tools":["lookup_existing_driver"]}]
}
}
```
### Invoke an export
#### stdin
```json
{
"schema_version": "v1",
"action": "invoke",
"kind": "tool",
"name": "normalize_email",
"payload": {
"email": "Test@Example.com",
"name": "Test",
"ctx": {
"request_id": "req_123",
"tenant_id": "t_1",
"workspace_id": "w_1",
"project_id": "p_1"
}
}
}
```
#### stdout
```json
{ "schema_version": "v1", "ok": true, "result": { "email": "test@example.com" } }
```
### Error response
#### stdout
```json
{
"schema_version": "v1",
"ok": false,
"error": {
"code": "NOT_FOUND",
"message": "Tool not found: normalize_email"
}
}
```
## Execution model
Runtime load sequence:
- Reset registry (defensive against warm process reuse)
- Add EP_FILES_ROOT to sys.path
- Import entry_point (registers exports via decorators)
- Validate contracts (signatures and export invariants)
- Execute describe or invoke
## Export kinds
Supported kind values (current):
- `tool` — callable signature: `tool(ctx, payload)`
- `agent` — callable signature: `agent(ctx, state)`
- `flow` — callable signature: `flow(ctx, input)` OR `graph_builder()` returning a compiled graph with `ainvoke`
- (`http`, `mcp` are typically served by EP; runtime can expose metadata via `describe`)
EP is responsible for mapping HTTP/MCP requests into invoke payloads.
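The invoke protocol above can be pictured with a toy dispatcher. The registry, `tool` decorator, and `handle` function here are illustrative stand-ins for intuition only — the real runtime resolves exports registered via `aiel-sdk` decorators:

```python
import json

# Toy export registry (stand-in for the aiel-sdk registration mechanism).
TOOLS = {}

def tool(fn):
    """Register a callable under its function name."""
    TOOLS[fn.__name__] = fn
    return fn

@tool
def normalize_email(ctx, payload):
    return {"email": payload["email"].strip().lower()}

def handle(request: dict) -> dict:
    """Dispatch a v1 invoke request to a registered tool, mirroring the protocol shape."""
    name = request["name"]
    fn = TOOLS.get(name)
    if fn is None:
        return {"schema_version": "v1", "ok": False,
                "error": {"code": "NOT_FOUND", "message": f"Tool not found: {name}"}}
    payload = dict(request["payload"])
    ctx = payload.pop("ctx", {})  # ctx travels inside the payload, per the examples above
    return {"schema_version": "v1", "ok": True, "result": fn(ctx, payload)}

req = {"schema_version": "v1", "action": "invoke", "kind": "tool",
       "name": "normalize_email",
       "payload": {"email": "Test@Example.com", "ctx": {"request_id": "req_123"}}}
print(json.dumps(handle(req)))
```

Note how the error branch reproduces the `NOT_FOUND` error envelope shown earlier.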
## Execute a handler (Lambda-style)
You can execute a module handler using the AWS Lambda-style signature `(event, context)`:
```python
def aiel_handler(event, context):
...
```
stdin:
```json
{ "schema_version": "v1", "action": "execute", "handler": "app.aiel_handler", "event": {} }
```
CLI:
```bash
python -m aiel_runtime.runner --action execute --handler app.aiel_handler --event '{}'
```
Default handler:
```bash
python -m aiel_runtime.runner --action execute --event '{}' # uses entry_point.aiel_handler
```
### Invocation Context Injection
The control plane (e.g., CLI) should inject identity and workspace metadata into `ctx`:
Local credential discovery is disabled by default; enable it with `--auto-ctx` or `AIEL_AUTO_CTX=1`.
Pretty JSON output is optional; enable it with `--pretty` or `AIEL_PRETTY_JSON=1`.
```json
{
"schema_version": "v1",
"action": "execute",
"handler": "entry_point.aiel_handler",
"event": {},
"ctx": {
"workspace_id": "w_123",
"project_id": "p_123",
"user_id": "u_123",
"trace_id": "trace_123"
}
}
```
## Versioning and compatibility
### Runtime version
`runtime_version` identifies the runner behavior and dependency bundle.
### SDK version
`sdk_version` identifies the contract surface expected by the user project.
### Recommended compatibility policy:
- `aiel-runtime==X.Y.Z` depends on `aiel-sdk==X.Y.Z`
- Data Plane manifest includes both `runtime` and `sdk_version`
- EP allowlists `runtime` values and uses (`runtime`, `sdk_version`, `file hashes`) for cache keys.
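The cache-key recommendation can be sketched with `hashlib`; the exact field encoding below is an assumption, not EP's actual scheme:

```python
import hashlib

def cache_key(runtime: str, sdk_version: str, file_hashes: dict) -> str:
    """Deterministic key over (runtime, sdk_version, file hashes).

    Illustrative encoding: NUL-separated fields, files sorted by path so the
    key is independent of iteration order.
    """
    h = hashlib.sha256()
    h.update(runtime.encode())
    h.update(b"\x00")
    h.update(sdk_version.encode())
    for path in sorted(file_hashes):
        h.update(b"\x00" + path.encode() + b"\x00" + file_hashes[path].encode())
    return h.hexdigest()

key = cache_key("aiel-runtime==X.Y.Z", "X.Y.Z",
                {"entry_point.py": "abc123", "tools/core.py": "def456"})
```

Any change to the runtime version, SDK version, or any file hash produces a different key, so EP caches are invalidated exactly when snapshot content or runtime behavior could differ.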
## Security model
`aiel-runtime` assumes EP enforces isolation. At minimum, EP should provide:
- process/container isolation
- read-only snapshot filesystem
- strict CPU/memory/timeouts
- deny-by-default network egress (unless explicitly enabled)
- controlled secret injection via context (do not expose host env to user code)
Runtime performs defense-in-depth checks where practical (contract validation, error normalization).
Hard security boundaries belong in EP.
## Production note (Cloud Run / Images)
If your entrypoint imports `aiel`, bake `aiel-cli` into the runtime image/venv so imports succeed during cold start.
## Development
### Run the runtime directly (fast local iteration)
This runs the same execution engine used by EP, but without auth and without the Data Plane. You point the runtime at a local folder that already contains the files.
#### 1) Prerequisites
Your local folder must contain at least:
- `entry_point.py` (required)
- any modules imported by `entry_point.py` (e.g. `tools/`, `agents/`, `mcp_server/`)
```bash
my-project/
entry_point.py
tools/
core.py
agents/
main.py
```
#### 2) Install runtime + SDK
```bash
python -m venv .venv
source .venv/bin/activate
python -m pip install -U pip
python -m pip install -U aiel-runtime aiel-sdk
```
If your `entry_point.py` imports `aiel`, install it in the same environment:
```bash
python -m pip install -U aiel-cli
```
If you use optional facades locally (LangGraph/LangChain/LangSmith), install extras.
Important (zsh): quote extras:
```bash
python -m pip install -U "aiel-sdk[all]"
```
#### 3) Run `describe` locally
From the project folder (where entry_point.py exists):
```bash
echo '{"schema_version":"v1","action":"describe"}' | python -m aiel_runtime.runner --files-root "$PWD" --no-traceback
```
Or with flags (defaults to `$PWD` if `entry_point.py` exists):
```bash
python -m aiel_runtime.runner --action describe --no-traceback
```
You should get something like:
```json
{
  "ok": true,
  "result": {
    "tools": ["..."],
    "agents": ["..."],
    "flows": ["..."],
    "http_handlers": [...],
    "mcp_servers": [...]
  }
}
```
#### 4) Invoke a tool locally
Create a request inline:
```bash
cat > req.json <<'JSON'
{"schema_version":"v1","action":"invoke","kind":"tool","name":"persist_driver_profile","payload":{"email":"a@b.com","name":"Aldenir"}}
JSON
python -m aiel_runtime.runner --files-root "$PWD" < req.json
```
Or with flags:
```bash
python -m aiel_runtime.runner \
--action invoke \
--kind tool \
--name persist_driver_profile \
--payload '{"email":"a@b.com","name":"Aldenir"}'
```
To include sanitized tracebacks for debugging:
```bash
python -m aiel_runtime.runner --files-root "$PWD" --debug < req.json
```
### Common troubleshooting
#### zsh: `no matches found: aiel-sdk[all]`
Use quotes: `python -m pip install "aiel-sdk[all]"`
#### `entry_point.py` not found
Make sure you run from the folder that contains `entry_point.py`, or pass the correct path:
```bash
python -m aiel_runtime.runner --files-root /path/to/project
```
#### Import errors (e.g. No module named ...)
Install the required packages in the same venv, or use an EP runtime bundle that includes them.
#### `ModuleNotFoundError: aiel_sdk`
- Ensure your runtime environment has `aiel-sdk` installed.
- Prefer installing `aiel-runtime` (which should depend on the correct `aiel-sdk`).
#### Missing integrations (LangGraph / LangChain / LangSmith)
- Ensure the runtime image includes the curated dependency bundle (`aiel-runtime[ai]`) or explicit deps.
#### Registry leakage between invocations
- Ensure the registry is reset per load (the runtime does this by default).
- EP should avoid reusing a single Python process across unrelated tenants/projects unless explicitly designed.
## Testing
Recommended minimal tests:
- load snapshot → describe exports
- invoke tool/flow happy path
- contract violation errors are clear
- missing dependency errors are clear
- registry resets properly
Example (pytest):
```bash
python -m pip install -U pytest
pytest -q
```
| text/markdown | null | null | null | null | null | null | [] | [] | null | null | >=3.11 | [] | [] | [] | [
"aiel-cli>=1.2.3",
"aiel-sdk>=1.1.6",
"anyio>=4.0",
"pydantic>=2.8",
"langchain-core>=0.3.0; extra == \"ai\"",
"langgraph>=0.2.0; extra == \"ai\"",
"pytest>=7.0; extra == \"test\""
] | [] | [] | [] | [] | twine/6.1.0 CPython/3.13.7 | 2026-02-21T03:13:55.490972 | aiel_runtime-0.5.1.tar.gz | 21,471 | d6/99/293107cb7ab1477b6c4036fac1cd5641195a01c11b03c43af48dd0992930/aiel_runtime-0.5.1.tar.gz | source | sdist | null | false | 14a977e11468760807feb9891aa2f3b2 | b117539587269b833fb17c9fa6da143b3c7d97f768c77a2bf411fec186cc2a41 | d699293107cb7ab1477b6c4036fac1cd5641195a01c11b03c43af48dd0992930 | null | [
"LICENSE"
] | 232 |
2.4 | fouroversix | 1.0.2 | More Accurate FP4 Quantization with Adaptive Block Scaling | # Four Over Six (4/6)
[](https://arxiv.org/abs/2512.02010)
_Improving the accuracy of NVFP4 quantization with Adaptive Block Scaling._

This repository contains kernels for efficient NVFP4 quantization and matrix multiplication, and fast post-training quantization with our method, 4/6.
If you have any questions, please get in touch or submit an issue.
## Setup
**Requirements:**
- Python version 3.10 or newer
- CUDA toolkit 12.8 or newer
- PyTorch version 2.8 or newer
**Install dependencies:**
```bash
pip install ninja packaging psutil "setuptools>=77.0.3"
```
**Install fouroversix:**
```bash
pip install fouroversix --no-build-isolation
```
Alternatively, you can compile from source:
```bash
pip install --no-build-isolation -e .
```
To speed up build times, set `CUDA_ARCHS=100` to only compile kernels for B-series GPUs (e.g. B200, GB200, GB300), or `CUDA_ARCHS=120` for RTX 50 and 60 Series GPUs (e.g. RTX 5090, RTX 6000).
Also, if you don't have a Blackwell GPU, you may use our reference implementation, which is slow but helpful for testing, by setting `SKIP_CUDA_BUILD=1` before running `pip install`.
### PTQ Experiments
To run PTQ experiments, make sure to install our test dependencies using either:
```bash
pip install "fouroversix[evals]" --no-build-isolation
# Or, if installing from source:
pip install --no-build-isolation -e ".[evals]"
```
Also, make sure all submodules are pulled and up to date:
```bash
git submodule update --init
```
Then, install dependencies for each PTQ method as needed, following the instructions [here](/docs/ptq.md).
## API
### Quantize a Model to NVFP4
```python
from fouroversix import ModelQuantizationConfig, quantize_model
from transformers import AutoModelForCausalLM
# NVFP4 using 4/6 with MSE block selection
model = AutoModelForCausalLM.from_pretrained("meta-llama/Llama-3.2-1B")
quantize_model(model)
# Standard NVFP4 round-to-nearest quantization
model = AutoModelForCausalLM.from_pretrained("meta-llama/Llama-3.2-1B")
config = ModelQuantizationConfig(scale_rule="static_6")
quantize_model(model, config)
```
### Quantize a Tensor to NVFP4
Check the `quantize_to_fp4` [arguments](https://github.com/mit-han-lab/fouroversix/blob/f1b78701c753ea49c091ac39d85c5753b703f5ca/src/fouroversix/frontend.py#L72) for more details about how you can enable certain features during quantization, such as stochastic rounding or 2D block quantization.
```python
import torch
from fouroversix import QuantizationConfig, quantize_to_fp4
x = torch.randn(1024, 1024, dtype=torch.bfloat16, device="cuda")
x_quantized = quantize_to_fp4(x)
# Standard NVFP4 round-to-nearest quantization
config = QuantizationConfig(scale_rule="static_6")
x_quantized = quantize_to_fp4(x, config)
```
### Multiply Two NVFP4 Tensors
```python
from fouroversix import fp4_matmul
# a and b can be either high-precision BF16 tensors, in which case they will be
# quantized, or low-precision QuantizedTensors if you've already quantized them
# yourself.
out = fp4_matmul(a, b)
```
## PTQ Evaluation with LM Evaluation Harness
```bash
# Round-to-nearest quantization with 4/6:
python -m scripts.ptq --model-name meta-llama/Llama-3.2-1B --ptq-method rtn --task wikitext
# Standard NVFP4 round-to-nearest (RTN) quantization:
python -m scripts.ptq --model-name meta-llama/Llama-3.2-1B --ptq-method rtn --task wikitext --a-scale-rule static_6 --w-scale-rule static_6
# AWQ with 4/6:
python -m scripts.ptq --model-name meta-llama/Llama-3.2-1B --ptq-method awq --task wikitext
# High-precision baseline, no NVFP4 quantization:
python -m scripts.ptq --model-name meta-llama/Llama-3.2-1B --ptq-method high_precision --task wikitext
```
If you would prefer not to worry about setting up your local environment, or about acquiring a Blackwell GPU to run your experiments faster, you may run PTQ experiments on [Modal](https://modal.com/) by adding the `--modal` flag, and optionally the `--detach` flag which will enable you to CTRL+C.
The first time you launch experiments on Modal, it may take several minutes to build everything, but following commands will reuse the cached images.
## Notes
This repository contains three implementations of NVFP4 quantization, each of which has various limitations:
- [CUDA](/src/fouroversix/csrc): Supports most but not all operations needed for efficient NVFP4 training. More operations will be added soon. Requires a Blackwell GPU.
- [Triton](/src/fouroversix/quantize/triton_kernel.py): Supports all operations needed for efficient NVFP4 training, including stochastic rounding, the random Hadamard transform, transposed inputs, and 2D block scaling. Requires a Blackwell GPU.
- [PyTorch](/src/fouroversix/quantize/reference.py): A reference implementation written in PyTorch that can run on any GPU. May have some educational value. Should not be used in real-world use cases.
When used with 4/6, these implementations have subtle numerical differences which can cause results to differ slightly, but not in a way that should cause uniformly worse performance for any of them.
For more details, see [here](https://github.com/mit-han-lab/fouroversix/blob/6bb13a8fc3b690154d11a1d6477bb6c2d09799e8/tests/test_correctness.py#L124-L132).
Our `quantize_to_fp4` function will automatically select one of these backends based on your GPU and the quantization parameters you select.
If you would like to force selection of a specific backend, you may specify it by setting `backend=QuantizeBackend.cuda` in the quantization config passed to `quantize_to_fp4`, or `quantize_backend=QuantizeBackend.cuda` in the layer and model configs passed to `quantize_model`.
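For intuition, adaptive block scaling with MSE selection can be sketched in pure Python: for each block, try mapping the absolute max onto 6 (the standard NVFP4 RTN target, and the largest E2M1 value) or onto 4, round to the FP4 grid, and keep whichever scale reconstructs the block with lower error. This sketch is illustrative only — the real kernels run on-GPU and also handle FP8 scale encoding and global scales:

```python
# E2M1 (FP4) representable magnitudes.
FP4_GRID = [0.0, 0.5, 1.0, 1.5, 2.0, 3.0, 4.0, 6.0]

def quantize_block(block, target_max):
    """Quantize-dequantize a block with its absolute max mapped to target_max."""
    amax = max(abs(v) for v in block) or 1.0
    scale = amax / target_max
    deq = []
    for v in block:
        # Round the scaled magnitude to the nearest FP4 grid point.
        mag = min(FP4_GRID, key=lambda g: abs(g - abs(v) / scale))
        deq.append(scale * mag * (1 if v >= 0 else -1))
    return deq, scale

def quantize_block_4over6(block):
    """Pick the candidate block scaling (max -> 6 or max -> 4) with lower MSE."""
    best = None
    for target_max in (6.0, 4.0):
        deq, scale = quantize_block(block, target_max)
        mse = sum((a - b) ** 2 for a, b in zip(block, deq)) / len(block)
        if best is None or mse < best[0]:
            best = (mse, deq, scale)
    return best
```

By construction the selected error is never worse than plain max-to-6 RTN, which is the intuition behind the accuracy gains reported for 4/6.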
## Contributing
We welcome contributions to our repository, but please get in touch before making any substantial changes.
Also, please make sure any code changes are compliant with our linter:
```bash
ruff check
```
## Citation
Please use the following BibTeX entry to cite this work:
```bibtex
@misc{cook2025sixaccuratenvfp4quantization,
title={Four Over Six: More Accurate NVFP4 Quantization with Adaptive Block Scaling},
author={Jack Cook and Junxian Guo and Guangxuan Xiao and Yujun Lin and Song Han},
year={2025},
eprint={2512.02010},
archivePrefix={arXiv},
primaryClass={cs.CL},
url={https://arxiv.org/abs/2512.02010},
}
```
## License
This repository is available under the MIT license.
See the [LICENSE.md](/LICENSE.md) file for details.
| text/markdown | null | Jack Cook <cookj@mit.edu>, Junxian Guo <junxian@mit.edu> | null | null | null | null | [] | [] | null | null | >=3.10 | [] | [] | [] | [
"torch>=2.7.0",
"mkdocs~=1.6.1; extra == \"docs\"",
"mkdocs-material~=9.7.1; extra == \"docs\"",
"mkdocstrings[python]~=1.0.3; extra == \"docs\"",
"inspect-ai~=0.3.179; extra == \"evals\"",
"inspect-evals~=0.3.106; extra == \"evals\"",
"lm-eval[hf]~=0.4.11; extra == \"evals\"",
"modal~=1.3.3; extra == \"evals\"",
"openai~=2.21.0; extra == \"evals\"",
"pytest~=8.1.1; extra == \"evals\"",
"ruff~=0.15.1; extra == \"evals\"",
"SQLAlchemy~=2.0.46; extra == \"evals\""
] | [] | [] | [] | [] | twine/6.1.0 CPython/3.13.7 | 2026-02-21T03:13:36.268245 | fouroversix-1.0.2.tar.gz | 4,343,532 | 1d/34/e2ff2c513b7ee37dcddae6b1d0e03b8c2b873a513e81fbfaad1a16ddce94/fouroversix-1.0.2.tar.gz | source | sdist | null | false | 161ae5f215b319d383ed5f6948df8f58 | 9ed1a2a51930d7a80ce84d93a5c5ab535f30b09eb8d95b9d966b8ab8c790cb6a | 1d34e2ff2c513b7ee37dcddae6b1d0e03b8c2b873a513e81fbfaad1a16ddce94 | MIT | [
"LICENSE.md"
] | 173 |
2.4 | comfyui-autoflow | 1.3.0 | Pure Python automation for ComfyUI: convert workflows, submit jobs, fetch images—no GUI required | <!-- Keep version below in sync with autoflow/version.py -->
```text
ComfyUI
█████╗ ██╗ ██╗████████╗ ██████╗ ███████╗██╗ ██████╗ ██╗ ██╗
██╔══██╗██║ ██║╚══██╔══╝██╔═══██╗██╔════╝██║ ██╔═══██╗██║ ██║
███████║██║ ██║ ██║ ██║ ██║█████╗ ██║ ██║ ██║██║ █╗ ██║
██╔══██║██║ ██║ ██║ ██║ ██║██╔══╝ ██║ ██║ ██║██║███╗██║
██║ ██║╚██████╔╝ ██║ ╚██████╔╝██║ ███████╗╚██████╔╝╚███╔███╔╝
╚═╝ ╚═╝ ╚═════╝ ╚═╝ ╚═════╝ ╚═╝ ╚══════╝ ╚═════╝ ╚══╝╚══╝
version: 1.3.0
```
```mermaid
flowchart LR
workflowJson["workflow.json"] --> autoflow["autoflow"]
--> apiFlow[workflow-api.json]
```
---
# Imagine...
### What if you could `load, edit, and submit` ComfyUI workflows without ever exporting an API workflow from the GUI?
### What if you could `batch-convert and patch workflows offline`, with no running ComfyUI instance required?
### What if you could attach studio `metadata` to your workflow and have it carry through the entire production lifecycle?
### What if you could `render` ComfyUI node workflows with all of the above features, `without ever launching the ComfyUI server`?
- # Let me introduce `comfyui-autoflow`
---
# autoflow
Skip the GUI. `autoflow` handles the backend so you can automate/pipeline your ComfyUI renderables with full control through the entire conversion and submission process.
`autoflow` is a small, efficient, pure **Python package** (stdlib-only, extendable) for ComfyUI automation that gives you access to renderable conversion with or without ComfyUI.
## Features
| Feature | Description |
|---------|-------------|
| **Submit Workflow.json** | Directly edit and submit `workflow.json` files without the need for GUI Api exports |
| **Convert** | `workflow.json` → `ApiFlow` (renderable API payload) |
| **Offline/Online** | Convert without ComfyUI server running or fetch live from ComfyUI |
| **Subgraphs** | Flattens `definitions.subgraphs` (including nested subgraphs) into a normal API payload |
| **Edit** | Modify nodes, inputs, seeds before submission |
| **Find + address** | `flow.nodes.find(...)` / `api.find(...)` plus `.path()` / `.address()` for stable node addresses |
| **Submit** | Send to ComfyUI, wait for completion, fetch output images |
| **Progress** | Hook into ComfyUI render events for real-time progress control |
| **Serverless ComfyUI Execution** | `.execute` to process ComfyUI native nodes without running the ComfyUI HTTP server |
| **Map** | Patch values across nodes for pipelines (seeds, paths, prompts) |
| **Extract** | Load workflows from ComfyUI PNG outputs (embedded metadata) |
| **Stdlib-only** | No dependencies by default; optional Pillow, ImageMagick, ffmpeg |
| **Widget introspection** | `.choices()`, `.tooltip()`, `.spec()` on any node input — query valid options from `node_info` |
## Requirements
- Python 3.7+ (dict insertion order preserved)
- ComfyUI server (optional, for API mode)
- No additional Python packages required
## Tested ComfyUI Versions:
- ComfyUI `0.8.2`
- ComfyUI_frontend `v1.35.9`
---
## The Two `ComfyUI Formats` you should know about
ComfyUI uses two JSON formats:
| Format | File | Description |
|--------|------|-------------|
| **Workspace** | `workflow.json` | The UI-editable graph with node positions, colors, widgets. What you save from ComfyUI. |
| **API Payload** | `workflow-api.json` | The renderable blueprint—just nodes + inputs, ready for `POST /prompt`. This is what `ApiFlow` represents. |
**autoflow converts Workspace → API Payload** (or loads an existing API Payload directly).
```mermaid
flowchart LR
workspace["workflow.json (workspace)"] --> convert["autoflow"]
convert --> payload["workflow-api.json (API payload / ApiFlow)"]
payload --> submit["POST /prompt"]
submit --> comfy["ComfyUI renders"]
```
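For intuition, a minimal API payload is just a mapping of node IDs to `class_type` plus `inputs`, where a link is encoded as `[source_node_id, output_index]`. The node IDs and widget values below are made up for illustration:

```python
import json

# Hypothetical two-node payload: a checkpoint loader feeding a sampler.
# The ["4", 0] value wires KSampler's "model" input to output 0 of node "4".
api_payload = {
    "4": {"class_type": "CheckpointLoaderSimple",
          "inputs": {"ckpt_name": "sd_xl_base_1.0.safetensors"}},
    "3": {"class_type": "KSampler",
          "inputs": {"model": ["4", 0], "seed": 42, "steps": 20}},
}
body = json.dumps({"prompt": api_payload})  # the body shape POST /prompt expects
```

The workspace format wraps the same graph in UI state (positions, colors, widget lists); conversion strips that down to this renderable shape.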
## Installation
```bash
pip install comfyui-autoflow
```
Then use with `python -m autoflow ...` or `import autoflow` from Python.
- Optional: set `AUTOFLOW_COMFYUI_SERVER_URL` once (then `server_url` / `--server-url` become optional):
- Linux/macOS: `export AUTOFLOW_COMFYUI_SERVER_URL="http://localhost:8188"`
- Windows PowerShell: `$env:AUTOFLOW_COMFYUI_SERVER_URL = "http://localhost:8188"`
- Windows CMD: `set AUTOFLOW_COMFYUI_SERVER_URL=http://localhost:8188`
- Python: `import os; os.environ["AUTOFLOW_COMFYUI_SERVER_URL"] = "http://localhost:8188"`
- Optional: set `AUTOFLOW_NODE_INFO_SOURCE=modules|fetch|server|/path/to/node_info.json` to auto-resolve `node_info`.
---
# `autoflow` - Quick Start
### Get `node_info.json` (optional, one-time)
Save `node_info.json` so you can convert offline. You can also convert against a running ComfyUI instance, but for efficiency we recommend saving one `node_info.json` per instance (reproducible, no server needed).
```mermaid
flowchart LR
comfy["ComfyUI server"] --> obj["/object_info"]
obj --> file["node_info.json"]
```
```python
# api
from autoflow import NodeInfo
NodeInfo.fetch(server_url="http://localhost:8188", output_path="node_info.json")
```
```bash
# cli
python -m autoflow --download-node-info-path node_info.json --server-url http://localhost:8188
```
- Direct modules (no server): `NodeInfo.from_comfyui_modules()` builds `node_info` from local ComfyUI nodes.
- Env source (optional): set `AUTOFLOW_NODE_INFO_SOURCE=modules|fetch|server|/path/to/node_info.json`.
- More: [`docs/node-info-and-env.md`](docs/node-info-and-env.md)
## Convert live (using running ComfyUI)
Convert `workflow.json` by fetching `/object_info` from your running ComfyUI server.
```mermaid
flowchart LR
env["AUTOFLOW_COMFYUI_SERVER_URL"] --> wf["Workflow(...)"]
wf --> apiFlow["ApiFlow"]
comfy["ComfyUI server"] --> obj["/object_info"]
obj --> wf
```
If environment variable `AUTOFLOW_COMFYUI_SERVER_URL` is set, `server_url` becomes optional.
If `AUTOFLOW_NODE_INFO_SOURCE` is set, `Workflow(...)` will auto-resolve `node_info` when none is provided.
```python
# api
from autoflow import Workflow
api = Workflow("workflow.json") # uses AUTOFLOW_COMFYUI_SERVER_URL
api.save("workflow-api.json")
```
```bash
# cli
python -m autoflow --input-path workflow.json --output-path workflow-api.json
```
- More: [`docs/convert.md`](docs/convert.md), [`docs/node-info-and-env.md`](docs/node-info-and-env.md)
## Convert `workflow` to `workflow-api` (offline)
Convert using your saved `node_info.json` (no server needed).
```mermaid
flowchart LR
workflowJson["workflow.json"] --> wf["Workflow(...)"]
objectInfo["node_info.json"] --> wf
wf --> apiFlow["ApiFlow"]
apiFlow --> saveApi["save(workflow-api.json)"]
```
```python
# api
from autoflow import Workflow
api = Workflow("workflow.json", node_info="node_info.json")
api.save("workflow-api.json")
```
```bash
# cli
# Offline mode (saved node_info)
python -m autoflow --input-path workflow.json --output-path workflow-api.json --node-info-path node_info.json
# Short form (flags)
python -m autoflow -i workflow.json -o workflow-api.json -f node_info.json
```
- More: [`docs/convert.md`](docs/convert.md)
## Load from PNG (extract embedded workflow)
ComfyUI embeds workflow metadata in PNG outputs. Extract it directly—no external dependencies needed.
```python
# api
from autoflow import Flow, ApiFlow
# From PNG file
api_flow = ApiFlow.load("ComfyUI_00001_.png") # extracts API payload
flow = Flow.load("ComfyUI_00001_.png") # extracts workspace
# From bytes (e.g., HTTP upload, database blob)
with open("output.png", "rb") as f:
api_flow = ApiFlow.load(f.read())
```
All `.load()` methods accept: `dict`, `bytes`, `str` (JSON or path), `Path`
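Under the hood, ComfyUI's PNG outputs typically carry the payloads as `tEXt` chunks (commonly under the `prompt` and `workflow` keywords). A stdlib-only sketch of that chunk walk — illustrative, not autoflow's actual extraction code:

```python
import json
import struct
import zlib

PNG_SIG = b"\x89PNG\r\n\x1a\n"

def png_text_chunks(data: bytes) -> dict:
    """Walk PNG chunks and collect tEXt keyword -> value pairs."""
    if data[:8] != PNG_SIG:
        raise ValueError("not a PNG")
    out, pos = {}, 8
    while pos + 8 <= len(data):
        length, ctype = struct.unpack(">I4s", data[pos:pos + 8])
        body = data[pos + 8:pos + 8 + length]
        if ctype == b"tEXt":
            key, _, val = body.partition(b"\x00")  # keyword, NUL, text
            out[key.decode("latin-1")] = val.decode("latin-1")
        pos += 12 + length  # 4 length + 4 type + data + 4 CRC
        if ctype == b"IEND":
            break
    return out

def _chunk(ctype: bytes, body: bytes) -> bytes:
    """Build one PNG chunk (used here only to fabricate test data)."""
    return struct.pack(">I", len(body)) + ctype + body + struct.pack(">I", zlib.crc32(ctype + body))

# Fabricated PNG carrying an API payload under the "prompt" keyword:
fake_png = (PNG_SIG
            + _chunk(b"tEXt", b"prompt\x00" + json.dumps({"1": {"class_type": "KSampler"}}).encode())
            + _chunk(b"IEND", b""))
payload = json.loads(png_text_chunks(fake_png)["prompt"])
```

No image decoding is needed, which is why the extraction works without Pillow.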
## Submit + images (optional)
Submit your `ApiFlow` directly to ComfyUI and get images back
```mermaid
flowchart LR
apiFlow["ApiFlow"] ==> submit["submit(wait=True)"]
submit ==> comfy["ComfyUI server"]
comfy --> |job handle|apiFlow
apiFlow ---> |job handle| images["fetch_images().save(...)"]
```
```python
# api
from autoflow import Workflow
api = Workflow("workflow.json", node_info="node_info.json")
api.saveimage.inputs.filename_prefix = "autoflow"
res = api.submit(server_url="http://localhost:8188", wait=True)
images = res.fetch_images()
images.save("outputs/frame.###.png")
```
You can also set an output default with env `AUTOFLOW_OUTPUT_PATH` and then just provide a `filename=` template:
```python
# api
images.save(filename="frame.{src_frame}.png") # or "frame.###.png" for zero-indexed numbering
```
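The `###` pattern behaves like zero-padded frame numbering; a rough sketch of the expansion (illustrative, not autoflow's exact implementation):

```python
import re

def expand_pattern(pattern: str, index: int, index_offset: int = 0) -> str:
    """Replace the first run of '#' with the zero-padded frame index."""
    def pad(m):
        return str(index + index_offset).zfill(len(m.group(0)))
    return re.sub(r"#+", pad, pattern, count=1)

names = [expand_pattern("frame.###.png", i, index_offset=1001) for i in range(3)]
# -> ["frame.1001.png", "frame.1002.png", "frame.1003.png"]
```

The run length sets the minimum padding width; indices wider than the run (as with the 1001 offset above) simply use more digits.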
```bash
# cli
# Prints prompt_id first, then (if saving) the written file paths. Progress logs go to stderr.
python -m autoflow --submit --input-path workflow.json --server-url http://localhost:8188 \
--save-images outputs --filepattern "frame.###.png" --index-offset 1001
```
- More: [`docs/submit-and-images.md`](docs/submit-and-images.md), [`docs/progress-events.md`](docs/progress-events.md)
## Serverless execute (no ComfyUI HTTP server)
If you're running inside a ComfyUI environment (repo + venv), you can run workflows serverlessly:
- Details: [`docs/execute.md`](docs/execute.md)
## Optional Functionality
- **Polymorphic loading** (dict, bytes, JSON string, file path, PNG):
- all `.load()` methods auto-detect input type: [`docs/load-vs-convert.md`](docs/load-vs-convert.md)
- extract workflows from ComfyUI PNG outputs (no dependencies)
- **OOP node access**:
- `api.KSampler.seed = 42` — attribute-style access by class_type
- `api.find(class_type="KSampler")[0].seed = 42` — search + then edit (returns NodeProxy objects)
- `api.KSampler._meta` / `.meta` — access node metadata
- `api["ksampler/seed"]` — path-style access
- `api["18:17:3/seed"] = 42` — edit nodes inside flattened subgraph exports (ComfyUI-style path IDs)
- `flow.nodes.KSampler.type` — explicit via `.nodes` for workspace flows
- `flow.nodes.find(title="NewSubgraphName")[0].path()` — find renamed subgraph instances; prints a stable path like `18:17:3`
- `flow.extra.ds.scale` — drill into nested dicts with `DictView`
- `node.properties.models.url` — single-item list-of-dicts drill via `ListView` (otherwise index first)
- **Widget-value repr**: `NodeRef`/`NodeSet` display widget values as dicts — `f.nodes.CheckpointLoaderSimple` → `{'nodes.CheckpointLoaderSimple[0]': {'ckpt_name': '...'}}`
- **Widget introspection**: `.choices()` returns valid combo options, `.tooltip()` shows help text, `.spec()` gives the raw `node_info` spec
- **Tab completion**: curated `__dir__` on `ApiFlow`, `NodeSet`, `FlowTreeNodesView`, and `WidgetValue` — only shows user-facing attrs
- **Indexed nodes**: standard Python REPL can't tab-complete `api.KSampler[0].<tab>` — assign to a variable first: `k = api.KSampler[0]` then `k.<tab>`
- **Mapping** (seeds/paths/prompts):
- typed callback mapping: [`docs/mapping.md`](docs/mapping.md)
- declarative string/path mapping: [`docs/map-strings-and-paths.md`](docs/map-strings-and-paths.md)
- cache-busting repeat runs: [`docs/force-recompute.md`](docs/force-recompute.md)
- **Filename pattern saving**:
- `ImagesResult.save("outputs/frame.###.png")`: [`docs/submit-and-images.md`](docs/submit-and-images.md)
- **Service patterns**:
- FastAPI integration: [`docs/fastapi.md`](docs/fastapi.md)
- structured errors: [`docs/error-handling.md`](docs/error-handling.md)
- **When things break**:
- troubleshooting: [`docs/troubleshooting.md`](docs/troubleshooting.md)
- deeper options: [`docs/advanced.md`](docs/advanced.md)
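The attribute-style edits above can be pictured with a tiny stand-in proxy over an API-format dict (an illustration only; autoflow's real `NodeProxy` additionally supports search, metadata, path IDs, and widget introspection):

```python
class NodeProxy:
    """Toy attribute-style editor over one node of an API-format dict."""
    def __init__(self, node):
        self._node = node

    def __getattr__(self, name):
        # Fall through to the node's inputs for normal attribute reads
        return self._node["inputs"][name]

    def __setattr__(self, name, value):
        if name.startswith("_"):
            object.__setattr__(self, name, value)
        else:
            self._node["inputs"][name] = value

# Hypothetical minimal API payload for demonstration
api = {"3": {"class_type": "KSampler", "inputs": {"seed": 0}}}
k = NodeProxy(api["3"])
k.seed = 42  # writes through to api["3"]["inputs"]["seed"]
```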
## CLI Reference
| Argument | Short | Description |
|----------|-------|-------------|
| `--input-path` | `-i` | Input workflow JSON file path |
| `--output-path` | `-o` | Output API format JSON file path |
| `--server-url` | | ComfyUI server URL (or set `AUTOFLOW_COMFYUI_SERVER_URL`) |
| `--node-info-path` | `-f` | Path to saved `node_info.json` file |
| `--download-node-info-path` | | Download `/object_info` and save to file |
| `--submit` | | Submit converted API payload to ComfyUI |
| `--no-wait` | | Submit without waiting for completion (prints `prompt_id` and exits) |
| `--no-progress` | | Disable progress output during `--submit` when waiting |
| `--save-images` | | Directory to save fetched images (requires waiting) |
| `--filepattern` | | Filename pattern used when saving images (default: `frame.###.png`) |
| `--index-offset` | | Index offset for `#` patterns (default: 0) |
| `--save-files` | | Directory to save fetched registered files (requires waiting) |
| `--output-types` | | Comma-separated registered output types when saving files (e.g. `images,files`) |
**Note**: The CLI supports submission and saving registered outputs via `--submit` (see [`docs/submit-and-images.md`](docs/submit-and-images.md) and [`docs/progress-events.md`](docs/progress-events.md)).
## Contributing
This script is designed to be production-ready and maintainable. Key design principles:
- **Minimal Dependencies**: Uses only Python standard library
- **Cross-Platform Compatibility**: Works on Linux, Windows, and macOS
- **Robust Error Handling**: Graceful degradation and detailed error reporting
- **Exact Replication**: Matches ComfyUI's internal conversion exactly
### Running tests (offline)
#### Unit tests
```bash
# Run the full unittest suite (offline)
python -m unittest discover -s examples/unittests -v
```
What to expect:
- **Output**: test names + `... ok`, then a final `OK`
- **Exit code**: `0` on success, non-zero on failure
#### Docs examples test harness (`docs-test.py`)
This runs the fenced code examples from `docs/*.md` in a sandbox.
```bash
# Offline run: compiles python blocks, optionally executes safe ones, and runs safe CLI blocks
python examples/code/docs-test.py --mode offline --exec-python --run-cli
# List available labeled examples
python examples/code/docs-test.py --list
# Run only a subset (labels come from --list)
python examples/code/docs-test.py --mode offline --only "docs/convert.md#1:python" --exec-python
```
What to expect:
- **Output**: `START ...` / `END ... (ok)` banners per doc block
- **Skips**: network-looking snippets print `SKIP` unless you run in online mode
- **Exit code**: `0` if all selected examples pass; `1` if any fail
Diagram: see [`docs/contributing-tests.md`](docs/contributing-tests.md)
## License
[`MIT License`](LICENSE)
## Related
- [ComfyUI](https://github.com/comfyanonymous/ComfyUI) - The main ComfyUI project
- [ComfyUI API Documentation](https://github.com/comfyanonymous/ComfyUI/wiki/API) - API format specification
| text/markdown | Chris Reid | null | null | null | null | comfyui, automation, workflow, workflow.json, workflow-api, workflow-api.json, stable-diffusion, image-generation, nodes, renderables, flow, apiflow, pipeline, ml, ai, api | [] | [] | null | null | >=3.9 | [] | [] | [] | [
"pillow; extra == \"pillow\""
] | [] | [] | [] | [
"Homepage, https://github.com/chrisdreid/comfyui-autoflow",
"Documentation, https://github.com/chrisdreid/comfyui-autoflow#readme",
"Repository, https://github.com/chrisdreid/comfyui-autoflow",
"Issues, https://github.com/chrisdreid/comfyui-autoflow/issues"
] | twine/6.2.0 CPython/3.14.2 | 2026-02-21T03:13:22.479770 | comfyui_autoflow-1.3.0.tar.gz | 89,917 | ed/92/551e1dd69b62cc48978b739a07d4d495681f8da21c2fb3eeb6d246b09da1/comfyui_autoflow-1.3.0.tar.gz | source | sdist | null | false | 06b67b5e6f19a0097ef814ef88c093b7 | faf0b57f50853775c77f7c5a5c5932d069aba3fece0aa245c74d59fc4b90781e | ed92551e1dd69b62cc48978b739a07d4d495681f8da21c2fb3eeb6d246b09da1 | MIT | [
"LICENSE"
] | 241 |
2.4 | luxetc | 0.0.9 | A simple Exposure Time Calculator for the Lux mission concept | # Lux_Proposal Repository
This is a location for code under development for the Lux mission concept. Currently it contains a simple exposure time calculator.
Dependencies: numpy, pandas, astropy, matplotlib
NOTE: Currently (v0.0.4) only the optical and NIR channels are implemented, and sensitivities are based on current best estimates. This will be modified soon to include the UV channel, as well as sensitivities at requirement levels.
Installation:
```
pip install luxetc
```
Usage:
```
In [1]: import luxetc
```
Instantiate a LuxETC object.
```
In [2]: x = luxetc.LuxETC()
```
Create a source with AB mag = 20.24.
```
In [3]: import astropy.units as u

In [4]: source = (20.24 * u.ABmag).to(u.erg / u.cm**2 / u.s / u.AA, u.spectral_density(luxetc.config.WAV))
```
Calculate the SNR for 100 s exposures in the g, r, i, z, y, and j-bands.
```
In [5]: x.get_snr(source, texps={"g": 100., "r": 100., "i": 100., "z": 100., "y": 100., "j": 100.})
Out[5]:
{'g': np.float64(8.998967243642625),
 'r': np.float64(6.4310822015567375),
 'i': np.float64(4.068664798528796),
 'z': np.float64(3.2757870535410034),
 'y': np.float64(5.882262115840334),
 'j': np.float64(5.886179099315213)}
```
Calculate the necessary exposure time to achieve SNR = 9.0, 6.4, 4.1, 3.3, 5.9, and 5.9 in g, r, i, z, y, and j-band (respectively).
```
In [6]: x.get_texp(source, snrs={"g": 9.0, "r": 6.4, "i": 4.1, "z": 3.3, "y": 5.9, "j": 5.9})
Out[6]:
{'g': np.float64(97.27763225445993),
 'r': np.float64(96.78990062621607),
 'i': np.float64(97.83428612199552),
 'z': np.float64(90.28943141959489),
 'y': np.float64(95.14224142371266),
 'j': np.float64(95.12166345972044)}
```
Calculate the limiting magnitude for a given filter, exposure time, and SNR.
```
In [7]: x.get_limmag({"g": (100., 9.0), "r": (100., 6.4), "i": (100., 4.1), "z": (100., 3.3), "y": (100., 5.9), "j": (100., 5.9)})
Out[7]:
{'g': np.float64(20.258971055028226),
 'r': np.float64(20.255269377129782),
 'i': np.float64(20.250905066741165),
 'z': np.float64(20.230548187712248),
 'y': np.float64(20.264815842833464),
 'j': np.float64(20.263580065925844)}
```
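For reference, the conversion behind the `u.ABmag` call above is the standard AB zero point, m_AB = -2.5 log10(f_nu) - 48.60. A plain-Python sketch of that relation, independent of luxetc and astropy:

```python
def ab_mag_to_fnu(m_ab: float) -> float:
    """Flux density in erg s^-1 cm^-2 Hz^-1 from an AB magnitude
    (standard AB zero point; astropy's u.ABmag uses the same relation)."""
    return 10 ** (-(m_ab + 48.60) / 2.5)

ab_mag_to_fnu(0.0)  # ~3.631e-20 erg s^-1 cm^-2 Hz^-1, i.e. 3631 Jy
```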
| text/markdown | null | Brad Cenko <brad.cenko@nasa.gov> | null | null | null | null | [
"Programming Language :: Python :: 3",
"Operating System :: OS Independent"
] | [] | null | null | >=3.8 | [] | [] | [] | [
"numpy>=2.0",
"pandas>=2.0",
"astropy>=6.0",
"matplotlib>=3.0"
] | [] | [] | [] | [
"Homepage, https://github.com/cenko/Lux_Proposal",
"Issues, https://github.com/cenko/Lux_Proposal/issues"
] | twine/6.2.0 CPython/3.12.9 | 2026-02-21T03:12:38.507477 | luxetc-0.0.9.tar.gz | 1,171,104 | c4/91/3c9703a27ed6a0ff5ccefe2f7efa6533e69fe05f03cc75d42adef5e34093/luxetc-0.0.9.tar.gz | source | sdist | null | false | 426aeaa36e4d801fa7a9524630b69bac | 2d0b6de2269f2b57aed2fe3041daedc85feccd6a21cb341cb153d9427acebdef | c4913c9703a27ed6a0ff5ccefe2f7efa6533e69fe05f03cc75d42adef5e34093 | MIT | [
"LICENSE"
] | 232 |
2.4 | terminaluse | 0.31.0 | Terminal Use Python SDK | # TerminalUse Python Library
[](https://buildwithfern.com?utm_source=github&utm_medium=github&utm_campaign=readme&utm_source=https%3A%2F%2Fgithub.com%2Fterminal-use%2Fpython-sdk)
[](https://pypi.python.org/pypi/terminaluse)
The TerminalUse Python library provides convenient access to the TerminalUse APIs from Python.
## Table of Contents
- [Installation](#installation)
- [Reference](#reference)
- [Usage](#usage)
- [Async Client](#async-client)
- [Exception Handling](#exception-handling)
- [Streaming](#streaming)
- [Advanced](#advanced)
- [Access Raw Response Data](#access-raw-response-data)
- [Retries](#retries)
- [Timeouts](#timeouts)
- [Custom Client](#custom-client)
- [Contributing](#contributing)
## Installation
```sh
pip install terminaluse
```
## Reference
A full reference for this library is available [here](https://github.com/terminal-use/python-sdk/blob/HEAD/./reference.md).
## Usage
Instantiate and use the client with the following:
```python
from terminaluse import TerminalUse
client = TerminalUse(
agent_api_key="YOUR_AGENT_API_KEY",
token="YOUR_TOKEN",
base_url="https://yourhost.com/path/to/api",
)
client.telemetry.agent_metrics_ingest(
resource_metrics=[{"key": "value"}],
)
```
## Async Client
The SDK also exports an `async` client so that you can make non-blocking calls to our API. Note that if you are constructing an Async httpx client class to pass into this client, use `httpx.AsyncClient()` instead of `httpx.Client()` (e.g. for the `httpx_client` parameter of this client).
```python
import asyncio
from terminaluse import AsyncTerminalUse
client = AsyncTerminalUse(
agent_api_key="YOUR_AGENT_API_KEY",
token="YOUR_TOKEN",
base_url="https://yourhost.com/path/to/api",
)
async def main() -> None:
await client.telemetry.agent_metrics_ingest(
resource_metrics=[{"key": "value"}],
)
asyncio.run(main())
```
## Exception Handling
When the API returns a non-success status code (4xx or 5xx response), a subclass of the following error
will be thrown.
```python
from terminaluse.core.api_error import ApiError
try:
client.telemetry.agent_metrics_ingest(...)
except ApiError as e:
print(e.status_code)
print(e.body)
```
## Streaming
The SDK also supports streaming responses; the response is a generator that you can loop over.
```python
from terminaluse import TerminalUse
client = TerminalUse(
agent_api_key="YOUR_AGENT_API_KEY",
token="YOUR_TOKEN",
base_url="https://yourhost.com/path/to/api",
)
response = client.tasks.stream_by_name(
task_name="task_name",
)
for chunk in response.data:
    print(chunk)
```
## Advanced
### Access Raw Response Data
The SDK provides access to raw response data, including headers, through the `.with_raw_response` property.
The `.with_raw_response` property returns a "raw" client that can be used to access the `.headers` and `.data` attributes.
```python
from terminaluse import TerminalUse
client = TerminalUse(
...,
)
response = client.telemetry.with_raw_response.agent_metrics_ingest(...)
print(response.headers) # access the response headers
print(response.data) # access the underlying object
with client.tasks.with_raw_response.stream_by_name(...) as response:
print(response.headers) # access the response headers
for chunk in response.data:
print(chunk) # access the underlying object(s)
```
### Retries
The SDK is instrumented with automatic retries with exponential backoff. A request will be retried as long
as it is deemed retryable and the number of retry attempts does not exceed the configured
retry limit (default: 2).
A request is deemed retryable when any of the following HTTP status codes is returned:
- [408](https://developer.mozilla.org/en-US/docs/Web/HTTP/Status/408) (Request Timeout)
- [429](https://developer.mozilla.org/en-US/docs/Web/HTTP/Status/429) (Too Many Requests)
- [5XX](https://developer.mozilla.org/en-US/docs/Web/HTTP/Status/500) (Internal Server Errors)
Use the `max_retries` request option to configure this behavior.
```python
client.telemetry.agent_metrics_ingest(..., request_options={
"max_retries": 1
})
```
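The policy above can be sketched as exponential backoff with jitter over the retryable status codes (an illustration of the idea, not the SDK's internals):

```python
import random
import time

# Retryable statuses per the list above: 408, 429, and all 5XX
RETRYABLE = {408, 429} | set(range(500, 600))

def with_retries(call, max_retries: int = 2, base_delay: float = 0.5):
    """Retry `call` (which returns (status_code, body)) with exponential backoff."""
    for attempt in range(max_retries + 1):
        status, body = call()
        if status not in RETRYABLE or attempt == max_retries:
            return status, body
        # Double the delay each attempt, with random jitter to avoid thundering herds
        time.sleep(base_delay * (2 ** attempt) * random.uniform(0.5, 1.0))
```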
### Timeouts
The SDK defaults to a 60 second timeout. You can configure this with a timeout option at the client or request level.
```python
from terminaluse import TerminalUse
client = TerminalUse(
...,
timeout=20.0,
)
# Override timeout for a specific method
client.telemetry.agent_metrics_ingest(..., request_options={
"timeout_in_seconds": 1
})
```
### Custom Client
You can override the `httpx` client to customize it for your use-case. Some common use-cases include support for proxies
and transports.
```python
import httpx
from terminaluse import TerminalUse
client = TerminalUse(
...,
httpx_client=httpx.Client(
proxy="http://my.test.proxy.example.com",
transport=httpx.HTTPTransport(local_address="0.0.0.0"),
),
)
```
## Contributing
While we value open-source contributions to this SDK, this library is generated programmatically.
Additions made directly to this library would have to be moved over to our generation code,
otherwise they would be overwritten upon the next generated release. Feel free to open a PR as
a proof of concept, but know that we will not be able to merge it as-is. We suggest opening
an issue first to discuss with us!
On the other hand, contributions to the README are always very welcome!
| text/markdown | null | null | null | null | null | null | [
"Intended Audience :: Developers",
"Operating System :: MacOS",
"Operating System :: Microsoft :: Windows",
"Operating System :: OS Independent",
"Operating System :: POSIX",
"Operating System :: POSIX :: Linux",
"Programming Language :: Python",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: 3.13",
"Topic :: Software Development :: Libraries :: Python Modules",
"Typing :: Typed"
] | [] | null | null | >=3.12 | [] | [] | [] | [
"aiohttp<4.0.0,>=3.10.10",
"fastapi<0.116.0,>=0.115.0",
"httpx<1.0.0,>=0.21.2",
"jinja2<4.0.0,>=3.1.3",
"jsonref<2.0.0,>=1.1.0",
"jsonschema<5.0.0,>=4.23.0",
"opentelemetry-api<2.0.0,>=1.20.0",
"opentelemetry-instrumentation-fastapi<1.0.0,>=0.44b0",
"opentelemetry-instrumentation-httpx<1.0.0,>=0.44b0",
"opentelemetry-sdk<2.0.0,>=1.20.0",
"pydantic-core<3.0.0,>=2.18.2",
"pydantic<3.0.0,>=1.9.2",
"python-on-whales<0.74.0,>=0.73.0",
"python-ulid<4.0.0,>=3.0.0",
"pyyaml<7.0.0,>=6.0.2",
"pyzstd<0.17.0,>=0.16.0",
"questionary<3.0.0,>=2.0.1",
"rich<14.0.0,>=13.9.2",
"temporalio<2.0.0,>=1.18.2",
"tenacity<9.0.0,>=8.0.0",
"typer<1.0,>=0.16",
"typing-extensions<5.0.0,>=4.0.0",
"uvicorn<0.32.0,>=0.31.1",
"yaspin<4.0.0,>=3.1.0",
"claude-agent-sdk<0.2.0,>=0.1.38; extra == \"claude\""
] | [] | [] | [] | [
"Repository, https://github.com/terminal-use/monorepo"
] | uv/0.10.4 {"installer":{"name":"uv","version":"0.10.4","subcommand":["publish"]},"python":null,"implementation":{"name":null,"version":null},"distro":{"name":"Ubuntu","version":"24.04","id":"noble","libc":null},"system":{"name":null,"release":null},"cpu":null,"openssl_version":null,"setuptools_version":null,"rustc_version":null,"ci":true} | 2026-02-21T03:11:07.742763 | terminaluse-0.31.0.tar.gz | 948,896 | 70/c6/0dea5340fe605601dd85de0bcd65e456f41df5698008a6371e509f75f2e5/terminaluse-0.31.0.tar.gz | source | sdist | null | false | b9a66ad55461018948a8607b0a09cdd2 | d3874c0246a128dea744d69885bf4c0331ecd5f021510e15899def398f501ce4 | 70c60dea5340fe605601dd85de0bcd65e456f41df5698008a6371e509f75f2e5 | null | [] | 326 |
2.4 | ctinexus | 0.2.0 | CTINexus: A framework for data-efficient cyber threat intelligence extraction and knowledge graph construction using LLMs. | <div align="center">
<img src="https://raw.githubusercontent.com/peng-gao-lab/CTINexus/main/ctinexus/static/logo.png" alt="Logo" width="200">
<h1 align="center">Automatic Cyber Threat Intelligence Knowledge Graph Construction Using Large Language Models</h1>
</div>
<p align="center">
<a href='https://arxiv.org/abs/2410.21060'><img src='https://img.shields.io/badge/Paper-Arxiv-crimson'></a>
<a href="https://opensource.org/licenses/MIT"><img src="https://img.shields.io/badge/License-MIT-lavender.svg" alt="License: MIT"></a>
<a href='https://ctinexus.github.io/' target='_blank'><img src='https://img.shields.io/badge/Project-Website-turquoise'></a>
<a href="https://pepy.tech/projects/ctinexus" target='_blank'><img src="https://static.pepy.tech/personalized-badge/ctinexus?period=total&units=INTERNATIONAL_SYSTEM&left_color=GRAY&right_color=BRIGHTGREEN&left_text=Downloads" alt="PyPI Downloads"></a>
</p>
---
## News & Updates
📦 [2025/10] CTINexus Python package released! Install with `pip install ctinexus` for seamless integration into your Python projects.
🌟 [2025/07] CTINexus now features an intuitive Gradio interface! Submit threat intelligence text and instantly visualize extracted interactive graphs.
🔥 [2025/04] We released the camera-ready paper on [arxiv](https://arxiv.org/pdf/2410.21060).
🔥 [2025/02] CTINexus is accepted at 2025 IEEE European Symposium on Security and Privacy ([Euro S&P](https://eurosp2025.ieee-security.org/index.html)).
## 📖 Table of Contents
- [Overview](#overview)
- [Features](#features)
- [Supported AI Providers](#supported-ai-providers)
- [Getting Started](#getting-started)
- [Option 1: Python Package](#python-package)
- [Option 2: Web Interface (Local)](#web-interface)
- [Option 3: Docker](#docker-setup)
- [Command Line Interface](#command-line)
- [Contributing](#contributing)
- [Citation](#citation)
- [License](#license)
---
## Overview
**CTINexus** is a framework that leverages optimized in-context learning (ICL) of large language models (LLMs) to automatically extract cyber threat intelligence (CTI) from unstructured text and construct cybersecurity knowledge graphs (CSKG).
<p align="center">
<img src="https://raw.githubusercontent.com/peng-gao-lab/CTINexus/main/ctinexus/static/overview.png" alt="CTINexus Framework Overview" width="600"/>
</p>
The framework processes threat intelligence reports to:
- 🔍 **Extract cybersecurity entities** (malware, vulnerabilities, tactics, IOCs)
- 🔗 **Identify relationships** between security concepts
- 📊 **Construct knowledge graphs** with interactive visualizations
- ⚡ **Require minimal configuration** - no extensive training data or parameter tuning needed
---
## Features
### Core Pipeline Components
1. **Intelligence Extraction (IE)**
- Automatically extracts cybersecurity entities and relationships from unstructured text
- Uses optimized prompt construction and demonstration retrieval
2. **Hierarchical Entity Alignment**
- **Entity Typing (ET)**: Classifies entities by semantic type
- **Entity Merging (EM)**: Canonicalizes entities and removes redundancy with IOC protection
3. **Link Prediction (LP)**
- Predicts and adds missing relationships to complete the knowledge graph
4. **Interactive Visualization**
- Network graph visualization of the constructed cybersecurity knowledge graph
<p align="center">
<img src="https://raw.githubusercontent.com/peng-gao-lab/CTINexus/main/ctinexus/static/webui.png" alt="CTINexus WebUI" width="600"/>
</p>
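To give a feel for the Entity Merging step, here is a toy canonicalization that groups mentions by string similarity against a threshold. This is only a sketch: CTINexus itself aligns entities using embedding similarity and the configured `similarity_threshold`.

```python
from difflib import SequenceMatcher

def merge_entities(mentions, threshold=0.6):
    """Group mentions whose pairwise string similarity exceeds `threshold`.
    Illustrative stand-in for embedding-based entity merging."""
    canonical = []
    for m in mentions:
        for group in canonical:
            if SequenceMatcher(None, m.lower(), group[0].lower()).ratio() >= threshold:
                group.append(m)  # alias of an existing canonical entity
                break
        else:
            canonical.append([m])  # new canonical entity
    return canonical
```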
---
## Supported AI Providers
CTINexus supports multiple AI providers for flexibility:
| Provider | Models | Setup Required |
|----------|--------|----------------|
| **OpenAI** | GPT-4, GPT-4o, o1, o3, etc. | API Key |
| **Google Gemini** | Gemini 2.0, 2.5 Flash, etc. | API Key |
| **AWS Bedrock** | Claude, Nova, Llama, DeepSeek, etc. | AWS Credentials |
| **Ollama** | Llama, Mistral, Qwen, Gemma, etc. | Local Installation (FREE) |
> Note: When using Ollama models, use the **📖 [Ollama Setup Guide](docs/ollama-guide.md)**.
---
## Getting Started
<a id="python-package"></a>
### 📦 Option 1: Python Package
#### Installation
```bash
pip install ctinexus
```
#### Configuration
Create a `.env` file in your project directory with credentials for at least one provider. Look at [.env.example](.env.example) for reference.
#### Usage
```python
from ctinexus import process_cti_report
from dotenv import load_dotenv
# Load API credentials
load_dotenv()
# Process threat intelligence text
text = """
APT29 used PowerShell to download additional malware from command-and-control
server at 192.168.1.100. The attack exploited CVE-2023-1234 in Microsoft Exchange.
"""
result = process_cti_report(
text=text,
provider="openai", # optional: auto-detected if not specified
model="gpt-4", # optional: uses default if not specified
similarity_threshold=0.6,
output="results.json" # optional: save results to file
)
# Access results
print(f"Graph saved to: {result['entity_relation_graph']}")
# Open the HTML file in your browser to view the interactive graph
# Or process from a CTI report/blog URL
result = process_cti_report(
source_url="https://example.com/threat-report",
provider="openai",
model="gpt-4",
)
```
**API Parameters:**
| Parameter | Type | Default | Description |
|-----------|------|---------|-------------|
| `text` | str | None | Threat intelligence text to process (required if `source_url` is not provided) |
| `source_url` | str | None | CTI report/blog URL to ingest and process (required if `text` is not provided) |
| `provider` | str | Auto-detect | `"openai"`, `"gemini"`, `"aws"`, or `"ollama"` |
| `model` | str | Provider default | Model name (e.g., `"gpt-4o"`, `"gemini-2.0-flash"`) |
| `embedding_model` | str | Provider default | Embedding model for entity alignment |
| `similarity_threshold` | float | 0.6 | Entity similarity threshold (0.0-1.0) |
| `output` | str | None | Path to save JSON results |
**Note**: `text` and `source_url` are mutually exclusive. Provide exactly one input source.
**Return Value:**
The function returns a dictionary with complete analysis results:
```python
{
"text": "Original input text",
"IE": {"triplets": [...]}, # Extracted entities and relationships
"ET": {"typed_triplets": [...]}, # Entities with type classifications
"EA": {"aligned_triplets": [...]}, # Canonicalized entities
"LP": {"predicted_links": [...]}, # Predicted relationships
"entity_relation_graph": "path/to/graph.html" # Interactive visualization
}
```
---
<a id="web-interface"></a>
### 🖥️ Option 2: Web Interface (Local Setup)
#### Installation
```bash
git clone https://github.com/peng-gao-lab/CTINexus.git
cd CTINexus
# Create and activate virtual environment
python -m venv .venv
# Activate (macOS/Linux)
source .venv/bin/activate
# Activate (Windows)
# .venv\Scripts\activate
# Install the package
pip install -e .
```
#### Configuration
```bash
# Copy the example environment file
cp .env.example .env
# Edit .env with your credentials
```
#### Usage
**1. Launch the application:**
```bash
ctinexus
```
**2. Access the web interface:**
Open your browser to: **http://127.0.0.1:7860**
**3. Process threat intelligence:**
1. **Paste** threat intelligence text into the input area
2. **Select** your AI provider and model from dropdowns
3. **Click** "Run" to analyze
4. **View** extracted entities, relationships, and interactive graph
5. **Export** results as JSON or save graph images
---
<a id="docker-setup"></a>
### 🐳 Option 3: Docker (Containerized Setup)
**Prerequisites:**
- Install [Docker Desktop](https://docs.docker.com/get-docker/)
**Setup:**
```bash
# Clone the repository
git clone https://github.com/peng-gao-lab/CTINexus.git
cd CTINexus
# Copy environment template
cp .env.example .env
# Edit .env with your credentials
```
#### Usage
**1. Build and start:**
```bash
# Run in foreground
docker compose up --build
# OR run in background (detached mode)
docker compose up -d --build
# View logs (if running in background)
docker compose logs -f
```
**2. Access the application:**
Open your browser to: **http://localhost:8000**
**3. Process threat intelligence:**
1. **Paste** threat intelligence text into the input area
2. **Select** your AI provider and model from dropdowns
3. **Click** "Run" to analyze
4. **View** extracted entities, relationships, and interactive graph
5. **Export** results as JSON or save graph images
---
<a id="command-line"></a>
## ⚡ Command Line Interface
The CLI works with **any installation method** and is perfect for automation and batch processing.
### Basic Usage
```bash
# Process a file
ctinexus --input-file report.txt
# Process text directly
ctinexus --text "APT29 exploited CVE-2023-1234 using PowerShell..."
# Specify provider and model
ctinexus -i report.txt --provider openai --model gpt-4o
# Save to custom location
ctinexus -i report.txt --output results/analysis.json
```
**📖 [Complete CLI Documentation](docs/cli-guide.md)** - Detailed examples and all available options.
---
## Contributing
We warmly welcome contributions from the community! Whether you're interested in:
- 🐛 Fix bugs or add features
- 📖 Improve documentation
- 🎨 Enhance the UI/UX
- 🧪 Add tests or examples
Please check out our **[Contributing Guide](CONTRIBUTING.md)** for detailed information on how to get started, development setup, and submission guidelines.
## Citation
If you use CTINexus in your research, please cite our paper:
```bibtex
@inproceedings{cheng2025ctinexusautomaticcyberthreat,
title={CTINexus: Automatic Cyber Threat Intelligence Knowledge Graph Construction Using Large Language Models},
author={Yutong Cheng and Osama Bajaber and Saimon Amanuel Tsegai and Dawn Song and Peng Gao},
booktitle={2025 IEEE European Symposium on Security and Privacy (EuroS\&P)},
year={2025},
organization={IEEE}
}
```
## License
The source code is licensed under the [MIT](LICENSE.txt) License.
We warmly welcome industry collaboration. If you’re interested in building on CTINexus or exploring joint initiatives, please email yutongcheng@vt.edu or saimon.tsegai@vt.edu; we’d be happy to set up a brief call to discuss ideas.
| text/markdown | null | Saimon Tsegai <49simoney@gmail.com>, Yutong Cheng <yutongcheng@vt.edu> | null | null | The MIT License
Copyright (c) 2025 Yutong Cheng
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in
all copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
THE SOFTWARE.
| null | [
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"License :: OSI Approved :: MIT License",
"Topic :: Security",
"Topic :: Scientific/Engineering :: Artificial Intelligence"
] | [] | null | null | <3.14,>=3.10 | [] | [] | [] | [
"gradio<6.0.0,>=5.0.0",
"hydra-core",
"jinja2",
"litellm",
"networkx",
"nltk",
"omegaconf",
"pandas",
"pyvis",
"python-dotenv",
"scikit-learn",
"scipy",
"tld<1,>=0.13",
"trafilatura==2.0.0",
"ruff>=0.8.0; extra == \"dev\"",
"pre-commit>=4.0.0; extra == \"dev\"",
"pytest>=9.0.1; extra == \"dev\"",
"pytest-asyncio>=1.3.0; extra == \"dev\"",
"pytest-cov>=7.0.0; extra == \"dev\"",
"pytest-mock>=3.15.1; extra == \"dev\""
] | [] | [] | [] | [
"Repository, https://github.com/peng-gao-lab/CTINexus",
"Homepage, https://ctinexus.github.io"
] | twine/6.2.0 CPython/3.11.14 | 2026-02-21T03:10:51.001803 | ctinexus-0.2.0.tar.gz | 1,096,596 | 55/4d/95dc6d7a20f4f602b3187c6e6a5422d739ffdf65add0330be5c81de80c56/ctinexus-0.2.0.tar.gz | source | sdist | null | false | f966de57d0ab7ce9e4fa6c74601abb29 | 59a30c366df92faa371fbb1b376a876c80a759cdfaa147cda3eebd563826e226 | 554d95dc6d7a20f4f602b3187c6e6a5422d739ffdf65add0330be5c81de80c56 | null | [
"LICENSE.txt"
] | 243 |
2.4 | chalkpy | 2.106.3 | Python SDK for Chalk | # Chalk
Chalk enables innovative machine learning teams to focus on building
the unique products and models that make their business stand out.
Behind the scenes Chalk seamlessly handles data infrastructure with
a best-in-class developer experience. Here’s how it works –
---
## Develop
Chalk makes it simple to develop feature pipelines for machine
learning. Define Python functions using the libraries and tools you're
familiar with instead of specialized DSLs. Chalk then orchestrates
your functions into pipelines that execute in parallel on a Rust-based
engine and coordinates the infrastructure required to compute
features.
### Features
To get started, [define your features](/docs/features) with
[Pydantic](https://pydantic-docs.helpmanual.io/)-inspired Python classes.
You can define schemas, specify relationships, and add metadata
to help your team share and re-use work.
```py
@features
class User:
id: int
full_name: str
nickname: Optional[str]
email: Optional[str]
birthday: date
credit_score: float
datawarehouse_feature: float
transactions: DataFrame[Transaction] = has_many(lambda: Transaction.user_id == User.id)
```
### Resolvers
Next, tell Chalk how to compute your features.
Chalk ingests data from your existing data stores,
and lets you use Python to compute features with
[feature resolvers](/docs/resolver-overview).
Feature resolvers are declared with the decorators `@online` and
`@offline`, and can depend on the outputs of other feature resolvers.
Resolvers make it easy to rapidly integrate a wide variety of data
sources, join them together, and use them in your model.
#### SQL
```python
pg = PostgreSQLSource()
@online
def get_user(uid: User.id) -> Features[User.full_name, User.email]:
return pg.query_string(
"select email, full_name from users where id=:id",
args=dict(id=uid)
).one()
```
#### REST
```python
import requests
@online
def get_socure_score(uid: User.id) -> Features[User.socure_score]:
return (
requests.get("https://api.socure.com", json={
id: uid
}).json()['socure_score']
)
```
---
## Execute
Once you've defined your features and resolvers, Chalk orchestrates
them into flexible pipelines that make training and executing models easy.
Chalk has built-in support for feature engineering workflows --
no need to manage Airflow or orchestrate complicated streaming flows.
You can execute resolver pipelines with declarative caching,
ingest batch data on a schedule, and easily make slow sources
available online for low-latency serving.
### Caching
Many data sources (like vendor APIs) are too slow for online use cases
and/or charge a high dollar cost-per-call. Chalk lets you optimize latency
and cost by defining declarative caching policies which are well-integrated
throughout the system. You no longer have to manage Redis, Memcached, DynamoDB,
or spend time tuning cache-warming pipelines.
Add a caching policy with one line of code in your feature definition:
```python
@features
class ExternalBankAccount:
- balance: int
+ balance: int = feature(max_staleness="**1d**")
```
Optionally warm feature caches by executing resolvers on a schedule:
```py
@online(cron="**1d**")
def fn(id: User.id) -> User.credit_score:
return redshift.query(...).all()
```
Or override staleness tolerances at query time when you need fresher
data for your models:
```py
chalk.query(
...,
outputs=[User.fraud_score],
max_staleness={User.fraud_score: "1m"}
)
```
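A staleness string like `"1d"` or `"10m"` is just a duration; one way to picture the freshness check behind these policies (a hypothetical sketch, not Chalk's implementation):

```python
from datetime import timedelta

# Suffix -> timedelta keyword (assumed unit set for this illustration)
UNITS = {"s": "seconds", "m": "minutes", "h": "hours", "d": "days"}

def parse_staleness(spec: str) -> timedelta:
    """Parse a duration spec like '10m' or '1d' into a timedelta."""
    value, unit = int(spec[:-1]), spec[-1]
    return timedelta(**{UNITS[unit]: value})

def is_fresh(age: timedelta, max_staleness: str) -> bool:
    """A cached value can be served if its age is within the staleness budget."""
    return age <= parse_staleness(max_staleness)
```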
### Batch ETL ingestion
Chalk also makes it simple to generate training sets from data warehouse
sources -- join data from services like S3, Redshift, BQ, Snowflake
(or other custom sources) with historical features computed online.
Specify a cron schedule on an `@offline` resolver and Chalk automatically ingests
data with support for incremental reads:
```py
@offline(cron="**1h**")
def fn() -> Features[User.id, User.datawarehouse_feature]:
return redshift.query(...).incremental()
```
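Conceptually, an incremental read fetches only rows newer than a stored watermark, then advances the watermark. The sketch below illustrates that idea with plain Python; the names and data shapes are illustrative stand-ins, not chalkpy internals:

```python
from datetime import datetime

def incremental_read(rows, watermark):
    """Return rows newer than the watermark, plus the advanced watermark.

    `rows` is a list of dicts with an `updated_at` column, standing in for
    a warehouse table; a real .incremental() manages the watermark for you.
    """
    fresh = [r for r in rows if watermark is None or r["updated_at"] > watermark]
    new_watermark = max((r["updated_at"] for r in rows), default=watermark)
    return fresh, new_watermark

rows = [
    {"id": 1, "updated_at": datetime(2024, 1, 1)},
    {"id": 2, "updated_at": datetime(2024, 1, 2)},
]
first, wm = incremental_read(rows, watermark=None)  # first run ingests both rows
second, _ = incremental_read(rows, watermark=wm)    # nothing new -> []
```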
Chalk makes this data available for point-in-time-correct dataset
generation for data science use-cases. Every pipeline has built-in
monitoring and alerting to ensure data quality and timeliness.
### Reverse ETL
When your model needs to use features that are canonically stored in
a high-latency data source (like a data warehouse), Chalk's Reverse
ETL support makes it simple to bring those features online and serve
them quickly.
Add a single line of code to an `offline` resolver, and Chalk constructs
a managed reverse ETL pipeline for that data source:
```py
@offline(offline_to_online_etl="5m")
```
Now data from slow offline data sources is automatically available for
low-latency online serving.
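The mechanics of a reverse ETL pass can be pictured as periodically copying changed rows from the slow store into the fast one. A deliberately simplified sketch, using plain dicts as stand-ins for the warehouse and the online store (not Chalk's pipeline code):

```python
def reverse_etl_sync(offline_store: dict, online_store: dict) -> int:
    """Copy any offline rows that are missing or outdated in the online store.

    Both stores are dicts keyed by entity id. Returns the number of rows synced.
    A managed pipeline would run this on the configured schedule (e.g. every 5m).
    """
    synced = 0
    for key, value in offline_store.items():
        if online_store.get(key) != value:
            online_store[key] = value
            synced += 1
    return synced

offline = {"user:1": {"credit_score": 720}, "user:2": {"credit_score": 680}}
online = {"user:1": {"credit_score": 700}}  # stale copy, user:2 missing
print(reverse_etl_sync(offline, online))  # 2 rows synced
```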
---
## Deploy + query
Once you've defined your pipelines, you can rapidly deploy them to
production with Chalk's CLI:
```bash
chalk apply
```
This creates a deployment of your project, which is served at a unique
preview URL. You can promote this deployment to production, or
perform QA workflows on your preview environment to make sure that
your Chalk deployment performs as expected.
Once you promote your deployment to production, Chalk makes features
available for low-latency [online inference](/docs/query-basics) and
[offline training](/docs/query-offline). Significantly, Chalk uses
the exact same source code to serve temporally-consistent training
sets to data scientists and live feature values to models. This re-use
ensures that feature values from online and offline contexts match and
dramatically cuts development time.
### Online inference
Chalk's online store & feature computation engine make it easy to query
features with ultra low-latency, so you can use your feature pipelines
to serve online inference use-cases.
Integrating Chalk with your production application takes minutes via
Chalk's simple REST API:
```python
result = ChalkClient().query(
    input={
        User.name: "Katherine Johnson"
    },
    output=[User.fico_score],
    staleness={User.fico_score: "10m"},
)
result.get_feature_value(User.fico_score)
```
Features computed to serve online requests are also replicated to Chalk's
offline store for historical performance tracking and training set generation.
### Offline training
Data scientists can use Chalk's Jupyter integration to create datasets
and train models. Datasets are stored and tracked so that they can be
re-used by other modelers, and so that model provenance is tracked for
audit and reproducibility.
```python
X = ChalkClient().offline_query(
    input=labels[[User.uid, timestamp]],
    output=[
        User.returned_transactions_last_60,
        User.user_account_name_match_score,
        User.socure_score,
        User.identity.has_verified_phone,
        User.identity.is_voip_phone,
        User.identity.account_age_days,
        User.identity.email_age,
    ],
)
```
Chalk datasets are always "temporally consistent."
This means that you can provide labels with different past timestamps and
get historical features that represent what your application would have
retrieved online at those past times. Temporal consistency ensures that
your model training doesn't mix "future" and "past" data.
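One way to picture temporal consistency is an as-of lookup: for each label timestamp, take the latest feature observation at or before that time, never a later one. A minimal illustration in plain Python (not Chalk's implementation):

```python
from datetime import datetime

def as_of(history, at):
    """Latest value observed at or before `at`, else None.

    `history` is a list of (timestamp, value) pairs sorted by timestamp.
    """
    eligible = [value for ts, value in history if ts <= at]
    return eligible[-1] if eligible else None

# Credit-score observations for one user, ordered by time.
score_history = [
    (datetime(2024, 1, 1), 640),
    (datetime(2024, 2, 1), 700),
]

# A label from mid-January must see the January value, never February's.
print(as_of(score_history, datetime(2024, 1, 15)))  # 640
print(as_of(score_history, datetime(2024, 3, 1)))   # 700
```

Training on the February value for a January label would leak "future" information into the model, which is exactly what point-in-time correctness prevents.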
| text/markdown | Chalk AI, Inc. | null | null | null | null | null | [
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python",
"Typing :: Typed",
"Topic :: Scientific/Engineering :: Artificial Intelligence",
"Topic :: Scientific/Engineering :: Information Analysis",
"Topic :: Software Development :: Code Generators",
"Topic :: Software Development :: Libraries :: Application Frameworks",
"Topic :: Software Development :: Libraries :: Python Modules"
] | [] | null | null | <3.13,>=3.10 | [] | [] | [] | [
"attrs>=21.3.0",
"cattrs<25,>=22.1.0",
"dataclasses_json>=0.5.7",
"executing<3,>=1.2.0",
"googleapis-common-protos>=1.56.0",
"grpcio<2,>=1.63.0",
"ipywidgets>=8.0.6",
"isodate<0.8,>=0.6.1",
"numpy<3",
"orjson",
"pandas<2.3,>=1.5.1",
"protobuf<6,>=4.25",
"pyarrow>=16.1.0",
"pydantic<3,>=1.0.0",
"pyopenssl>=23.2.0",
"python-dateutil<3,>=2.8.0",
"pyyaml<7,>=6.0",
"requests<2.33.0,>=2.31",
"rich>=13.3.5",
"tenacity",
"tqdm",
"typing_extensions>=4.0.0",
"boto3-stubs[s3]; extra == \"dev\"",
"grpc-stubs; extra == \"dev\"",
"pendulum; extra == \"dev\"",
"pre-commit; extra == \"dev\"",
"pytest-asyncio>=0.24.0; extra == \"dev\"",
"pytest-env; extra == \"dev\"",
"pytest-timeout; extra == \"dev\"",
"pytest-xdist; extra == \"dev\"",
"pytest; extra == \"dev\"",
"python-dotenv; extra == \"dev\"",
"sqlalchemy2-stubs; extra == \"dev\"",
"sqlglot<21.2.0,>=19.0.0; extra == \"dev\"",
"types-protobuf<6; extra == \"dev\"",
"types-psycopg2; extra == \"dev\"",
"types-pymysql; extra == \"dev\"",
"types-pyyaml; extra == \"dev\"",
"types-requests; extra == \"dev\"",
"adlfs; extra == \"runtime\"",
"duckdb<1.2.0,>=0.6; extra == \"runtime\"",
"fsspec; extra == \"runtime\"",
"gcsfs; extra == \"runtime\"",
"google-auth; extra == \"runtime\"",
"google-cloud-storage; extra == \"runtime\"",
"s3fs; extra == \"runtime\"",
"python-json-logger<4.0.0,>=3.0.0; extra == \"runtime\"",
"sqlglot<21.2.0,>=19.0.0; extra == \"runtime\"",
"polars[timezone]!=1.0,!=1.1,!=1.10,!=1.11,!=1.12,!=1.13,!=1.14,!=1.15,!=1.16,!=1.17,!=1.18,!=1.19,!=1.2,!=1.20,!=1.21,!=1.22,!=1.23,!=1.24,!=1.25,!=1.26,!=1.27,!=1.28,!=1.29,!=1.3,!=1.30,!=1.31,!=1.32,!=1.4,!=1.5,!=1.6,!=1.7,!=1.8,!=1.9,<1.33.1,>=0.17.2; extra == \"runtime\"",
"openai<1.53,>=1.3.2; extra == \"openai\"",
"httpx<0.28.0; extra == \"openai\"",
"tiktoken<0.9,>=0.5.1; extra == \"openai\"",
"cohere==5.11.4; extra == \"cohere\"",
"google-cloud-aiplatform<1.76.0; extra == \"vertexai\"",
"sentence-transformers; extra == \"sentence-transformers\"",
"sqlalchemy[asyncio]<2,>=1.4.26; extra == \"sql\"",
"adlfs; extra == \"sql\"",
"duckdb<1.2.0,>=0.6; extra == \"sql\"",
"fsspec; extra == \"sql\"",
"gcsfs; extra == \"sql\"",
"google-auth; extra == \"sql\"",
"google-cloud-storage; extra == \"sql\"",
"polars[timezone]!=1.0,!=1.1,!=1.10,!=1.11,!=1.12,!=1.13,!=1.14,!=1.15,!=1.16,!=1.17,!=1.18,!=1.19,!=1.2,!=1.20,!=1.21,!=1.22,!=1.23,!=1.24,!=1.25,!=1.26,!=1.27,!=1.28,!=1.29,!=1.3,!=1.30,!=1.31,!=1.32,!=1.4,!=1.5,!=1.6,!=1.7,!=1.8,!=1.9,<1.33.1,>=0.17.2; extra == \"sql\"",
"python-json-logger<4.0.0,>=3.0.0; extra == \"sql\"",
"s3fs; extra == \"sql\"",
"sqlglot<21.2.0,>=19.0.0; extra == \"sql\"",
"PyAthena>=3.0.0; extra == \"athena\"",
"adlfs; extra == \"athena\"",
"duckdb<1.2.0,>=0.6; extra == \"athena\"",
"fsspec; extra == \"athena\"",
"gcsfs; extra == \"athena\"",
"google-auth; extra == \"athena\"",
"google-cloud-storage; extra == \"athena\"",
"polars[timezone]!=1.0,!=1.1,!=1.10,!=1.11,!=1.12,!=1.13,!=1.14,!=1.15,!=1.16,!=1.17,!=1.18,!=1.19,!=1.2,!=1.20,!=1.21,!=1.22,!=1.23,!=1.24,!=1.25,!=1.26,!=1.27,!=1.28,!=1.29,!=1.3,!=1.30,!=1.31,!=1.32,!=1.4,!=1.5,!=1.6,!=1.7,!=1.8,!=1.9,<1.33.1,>=0.17.2; extra == \"athena\"",
"python-json-logger<4.0.0,>=3.0.0; extra == \"athena\"",
"s3fs; extra == \"athena\"",
"sqlalchemy[asyncio]<2,>=1.4.26; extra == \"athena\"",
"sqlglot<21.2.0,>=19.0.0; extra == \"athena\"",
"sqlalchemy-bigquery<1.12,>=1.5.0; extra == \"bigquery\"",
"google-cloud-bigquery<4,>=3.25.0; extra == \"bigquery\"",
"google-cloud-bigquery-storage<2.28,>=2.22.0; extra == \"bigquery\"",
"adlfs; extra == \"bigquery\"",
"duckdb<1.2.0,>=0.6; extra == \"bigquery\"",
"fsspec; extra == \"bigquery\"",
"gcsfs; extra == \"bigquery\"",
"google-auth; extra == \"bigquery\"",
"google-cloud-storage; extra == \"bigquery\"",
"polars[timezone]!=1.0,!=1.1,!=1.10,!=1.11,!=1.12,!=1.13,!=1.14,!=1.15,!=1.16,!=1.17,!=1.18,!=1.19,!=1.2,!=1.20,!=1.21,!=1.22,!=1.23,!=1.24,!=1.25,!=1.26,!=1.27,!=1.28,!=1.29,!=1.3,!=1.30,!=1.31,!=1.32,!=1.4,!=1.5,!=1.6,!=1.7,!=1.8,!=1.9,<1.33.1,>=0.17.2; extra == \"bigquery\"",
"python-json-logger<4.0.0,>=3.0.0; extra == \"bigquery\"",
"s3fs; extra == \"bigquery\"",
"sqlalchemy[asyncio]<2,>=1.4.26; extra == \"bigquery\"",
"sqlglot<21.2.0,>=19.0.0; extra == \"bigquery\"",
"clickhouse-sqlalchemy; extra == \"clickhouse\"",
"clickhouse-driver; extra == \"clickhouse\"",
"adlfs; extra == \"clickhouse\"",
"duckdb<1.2.0,>=0.6; extra == \"clickhouse\"",
"fsspec; extra == \"clickhouse\"",
"gcsfs; extra == \"clickhouse\"",
"google-auth; extra == \"clickhouse\"",
"google-cloud-storage; extra == \"clickhouse\"",
"polars[timezone]!=1.0,!=1.1,!=1.10,!=1.11,!=1.12,!=1.13,!=1.14,!=1.15,!=1.16,!=1.17,!=1.18,!=1.19,!=1.2,!=1.20,!=1.21,!=1.22,!=1.23,!=1.24,!=1.25,!=1.26,!=1.27,!=1.28,!=1.29,!=1.3,!=1.30,!=1.31,!=1.32,!=1.4,!=1.5,!=1.6,!=1.7,!=1.8,!=1.9,<1.33.1,>=0.17.2; extra == \"clickhouse\"",
"python-json-logger<4.0.0,>=3.0.0; extra == \"clickhouse\"",
"s3fs; extra == \"clickhouse\"",
"sqlalchemy[asyncio]<2,>=1.4.26; extra == \"clickhouse\"",
"sqlglot<21.2.0,>=19.0.0; extra == \"clickhouse\"",
"sqlalchemy-spanner<1.12.0; extra == \"spanner\"",
"google-auth; extra == \"spanner\"",
"adlfs; extra == \"spanner\"",
"duckdb<1.2.0,>=0.6; extra == \"spanner\"",
"fsspec; extra == \"spanner\"",
"gcsfs; extra == \"spanner\"",
"google-auth; extra == \"spanner\"",
"google-cloud-storage; extra == \"spanner\"",
"polars[timezone]!=1.0,!=1.1,!=1.10,!=1.11,!=1.12,!=1.13,!=1.14,!=1.15,!=1.16,!=1.17,!=1.18,!=1.19,!=1.2,!=1.20,!=1.21,!=1.22,!=1.23,!=1.24,!=1.25,!=1.26,!=1.27,!=1.28,!=1.29,!=1.3,!=1.30,!=1.31,!=1.32,!=1.4,!=1.5,!=1.6,!=1.7,!=1.8,!=1.9,<1.33.1,>=0.17.2; extra == \"spanner\"",
"python-json-logger<4.0.0,>=3.0.0; extra == \"spanner\"",
"s3fs; extra == \"spanner\"",
"sqlalchemy[asyncio]<2,>=1.4.26; extra == \"spanner\"",
"sqlglot<21.2.0,>=19.0.0; extra == \"spanner\"",
"psycopg2<3,>=2.9.4; extra == \"postgresql\"",
"psycopg[binary]<3.3,>=3.1.9; extra == \"postgresql\"",
"packaging; extra == \"postgresql\"",
"adlfs; extra == \"postgresql\"",
"duckdb<1.2.0,>=0.6; extra == \"postgresql\"",
"fsspec; extra == \"postgresql\"",
"gcsfs; extra == \"postgresql\"",
"google-auth; extra == \"postgresql\"",
"google-cloud-storage; extra == \"postgresql\"",
"polars[timezone]!=1.0,!=1.1,!=1.10,!=1.11,!=1.12,!=1.13,!=1.14,!=1.15,!=1.16,!=1.17,!=1.18,!=1.19,!=1.2,!=1.20,!=1.21,!=1.22,!=1.23,!=1.24,!=1.25,!=1.26,!=1.27,!=1.28,!=1.29,!=1.3,!=1.30,!=1.31,!=1.32,!=1.4,!=1.5,!=1.6,!=1.7,!=1.8,!=1.9,<1.33.1,>=0.17.2; extra == \"postgresql\"",
"python-json-logger<4.0.0,>=3.0.0; extra == \"postgresql\"",
"s3fs; extra == \"postgresql\"",
"sqlalchemy[asyncio]<2,>=1.4.26; extra == \"postgresql\"",
"sqlglot<21.2.0,>=19.0.0; extra == \"postgresql\"",
"cryptography<42.0.0; extra == \"snowflake\"",
"snowflake-connector-python<4,>=3.12.4; extra == \"snowflake\"",
"snowflake-sqlalchemy<1.7,>=1.5.0; extra == \"snowflake\"",
"adlfs; extra == \"snowflake\"",
"duckdb<1.2.0,>=0.6; extra == \"snowflake\"",
"fsspec; extra == \"snowflake\"",
"gcsfs; extra == \"snowflake\"",
"google-auth; extra == \"snowflake\"",
"google-cloud-storage; extra == \"snowflake\"",
"polars[timezone]!=1.0,!=1.1,!=1.10,!=1.11,!=1.12,!=1.13,!=1.14,!=1.15,!=1.16,!=1.17,!=1.18,!=1.19,!=1.2,!=1.20,!=1.21,!=1.22,!=1.23,!=1.24,!=1.25,!=1.26,!=1.27,!=1.28,!=1.29,!=1.3,!=1.30,!=1.31,!=1.32,!=1.4,!=1.5,!=1.6,!=1.7,!=1.8,!=1.9,<1.33.1,>=0.17.2; extra == \"snowflake\"",
"python-json-logger<4.0.0,>=3.0.0; extra == \"snowflake\"",
"s3fs; extra == \"snowflake\"",
"sqlalchemy[asyncio]<2,>=1.4.26; extra == \"snowflake\"",
"sqlglot<21.2.0,>=19.0.0; extra == \"snowflake\"",
"aiosqlite<0.21,>=0.19.0; extra == \"sqlite\"",
"adlfs; extra == \"sqlite\"",
"duckdb<1.2.0,>=0.6; extra == \"sqlite\"",
"fsspec; extra == \"sqlite\"",
"gcsfs; extra == \"sqlite\"",
"google-auth; extra == \"sqlite\"",
"google-cloud-storage; extra == \"sqlite\"",
"polars[timezone]!=1.0,!=1.1,!=1.10,!=1.11,!=1.12,!=1.13,!=1.14,!=1.15,!=1.16,!=1.17,!=1.18,!=1.19,!=1.2,!=1.20,!=1.21,!=1.22,!=1.23,!=1.24,!=1.25,!=1.26,!=1.27,!=1.28,!=1.29,!=1.3,!=1.30,!=1.31,!=1.32,!=1.4,!=1.5,!=1.6,!=1.7,!=1.8,!=1.9,<1.33.1,>=0.17.2; extra == \"sqlite\"",
"python-json-logger<4.0.0,>=3.0.0; extra == \"sqlite\"",
"s3fs; extra == \"sqlite\"",
"sqlalchemy[asyncio]<2,>=1.4.26; extra == \"sqlite\"",
"sqlglot<21.2.0,>=19.0.0; extra == \"sqlite\"",
"chalk-sqlalchemy-redshift<0.9,>=0.8.11; extra == \"redshift\"",
"redshift_connector<2.2,>=2.0.909; extra == \"redshift\"",
"boto3; extra == \"redshift\"",
"adlfs; extra == \"redshift\"",
"duckdb<1.2.0,>=0.6; extra == \"redshift\"",
"fsspec; extra == \"redshift\"",
"gcsfs; extra == \"redshift\"",
"google-auth; extra == \"redshift\"",
"google-cloud-storage; extra == \"redshift\"",
"packaging; extra == \"redshift\"",
"polars[timezone]!=1.0,!=1.1,!=1.10,!=1.11,!=1.12,!=1.13,!=1.14,!=1.15,!=1.16,!=1.17,!=1.18,!=1.19,!=1.2,!=1.20,!=1.21,!=1.22,!=1.23,!=1.24,!=1.25,!=1.26,!=1.27,!=1.28,!=1.29,!=1.3,!=1.30,!=1.31,!=1.32,!=1.4,!=1.5,!=1.6,!=1.7,!=1.8,!=1.9,<1.33.1,>=0.17.2; extra == \"redshift\"",
"psycopg2<3,>=2.9.4; extra == \"redshift\"",
"psycopg[binary]<3.3,>=3.1.9; extra == \"redshift\"",
"python-json-logger<4.0.0,>=3.0.0; extra == \"redshift\"",
"s3fs; extra == \"redshift\"",
"sqlalchemy[asyncio]<2,>=1.4.26; extra == \"redshift\"",
"sqlglot<21.2.0,>=19.0.0; extra == \"redshift\"",
"azure-identity<2,>=1.12.0; extra == \"mssql\"",
"pyodbc<6,>=4.0.0; extra == \"mssql\"",
"adlfs; extra == \"mssql\"",
"duckdb<1.2.0,>=0.6; extra == \"mssql\"",
"fsspec; extra == \"mssql\"",
"gcsfs; extra == \"mssql\"",
"google-auth; extra == \"mssql\"",
"google-cloud-storage; extra == \"mssql\"",
"polars[timezone]!=1.0,!=1.1,!=1.10,!=1.11,!=1.12,!=1.13,!=1.14,!=1.15,!=1.16,!=1.17,!=1.18,!=1.19,!=1.2,!=1.20,!=1.21,!=1.22,!=1.23,!=1.24,!=1.25,!=1.26,!=1.27,!=1.28,!=1.29,!=1.3,!=1.30,!=1.31,!=1.32,!=1.4,!=1.5,!=1.6,!=1.7,!=1.8,!=1.9,<1.33.1,>=0.17.2; extra == \"mssql\"",
"python-json-logger<4.0.0,>=3.0.0; extra == \"mssql\"",
"s3fs; extra == \"mssql\"",
"sqlalchemy[asyncio]<2,>=1.4.26; extra == \"mssql\"",
"sqlglot<21.2.0,>=19.0.0; extra == \"mssql\"",
"pymysql<2,>=1.0.2; extra == \"mysql\"",
"aiomysql<0.3,>=0.1.1; extra == \"mysql\"",
"adlfs; extra == \"mysql\"",
"duckdb<1.2.0,>=0.6; extra == \"mysql\"",
"fsspec; extra == \"mysql\"",
"gcsfs; extra == \"mysql\"",
"google-auth; extra == \"mysql\"",
"google-cloud-storage; extra == \"mysql\"",
"polars[timezone]!=1.0,!=1.1,!=1.10,!=1.11,!=1.12,!=1.13,!=1.14,!=1.15,!=1.16,!=1.17,!=1.18,!=1.19,!=1.2,!=1.20,!=1.21,!=1.22,!=1.23,!=1.24,!=1.25,!=1.26,!=1.27,!=1.28,!=1.29,!=1.3,!=1.30,!=1.31,!=1.32,!=1.4,!=1.5,!=1.6,!=1.7,!=1.8,!=1.9,<1.33.1,>=0.17.2; extra == \"mysql\"",
"python-json-logger<4.0.0,>=3.0.0; extra == \"mysql\"",
"s3fs; extra == \"mysql\"",
"sqlalchemy[asyncio]<2,>=1.4.26; extra == \"mysql\"",
"sqlglot<21.2.0,>=19.0.0; extra == \"mysql\"",
"trino[sqlalchemy]; extra == \"trino\"",
"adlfs; extra == \"trino\"",
"duckdb<1.2.0,>=0.6; extra == \"trino\"",
"fsspec; extra == \"trino\"",
"gcsfs; extra == \"trino\"",
"google-auth; extra == \"trino\"",
"google-cloud-storage; extra == \"trino\"",
"polars[timezone]!=1.0,!=1.1,!=1.10,!=1.11,!=1.12,!=1.13,!=1.14,!=1.15,!=1.16,!=1.17,!=1.18,!=1.19,!=1.2,!=1.20,!=1.21,!=1.22,!=1.23,!=1.24,!=1.25,!=1.26,!=1.27,!=1.28,!=1.29,!=1.3,!=1.30,!=1.31,!=1.32,!=1.4,!=1.5,!=1.6,!=1.7,!=1.8,!=1.9,<1.33.1,>=0.17.2; extra == \"trino\"",
"python-json-logger<4.0.0,>=3.0.0; extra == \"trino\"",
"s3fs; extra == \"trino\"",
"sqlalchemy[asyncio]<2,>=1.4.26; extra == \"trino\"",
"sqlglot<21.2.0,>=19.0.0; extra == \"trino\"",
"databricks-sql-connector<3.5,>=2.5.2; extra == \"databricks\"",
"databricks-sdk<1.0.0,>=0.29.0; extra == \"databricks\"",
"adlfs; extra == \"databricks\"",
"duckdb<1.2.0,>=0.6; extra == \"databricks\"",
"fsspec; extra == \"databricks\"",
"gcsfs; extra == \"databricks\"",
"google-auth; extra == \"databricks\"",
"google-cloud-storage; extra == \"databricks\"",
"polars[timezone]!=1.0,!=1.1,!=1.10,!=1.11,!=1.12,!=1.13,!=1.14,!=1.15,!=1.16,!=1.17,!=1.18,!=1.19,!=1.2,!=1.20,!=1.21,!=1.22,!=1.23,!=1.24,!=1.25,!=1.26,!=1.27,!=1.28,!=1.29,!=1.3,!=1.30,!=1.31,!=1.32,!=1.4,!=1.5,!=1.6,!=1.7,!=1.8,!=1.9,<1.33.1,>=0.17.2; extra == \"databricks\"",
"python-json-logger<4.0.0,>=3.0.0; extra == \"databricks\"",
"s3fs; extra == \"databricks\"",
"sqlalchemy[asyncio]<2,>=1.4.26; extra == \"databricks\"",
"sqlglot<21.2.0,>=19.0.0; extra == \"databricks\"",
"boto3; extra == \"dynamodb\"",
"pydynamodb<0.8,>=0.6; extra == \"dynamodb\"",
"adlfs; extra == \"dynamodb\"",
"duckdb<1.2.0,>=0.6; extra == \"dynamodb\"",
"fsspec; extra == \"dynamodb\"",
"gcsfs; extra == \"dynamodb\"",
"google-auth; extra == \"dynamodb\"",
"google-cloud-storage; extra == \"dynamodb\"",
"polars[timezone]!=1.0,!=1.1,!=1.10,!=1.11,!=1.12,!=1.13,!=1.14,!=1.15,!=1.16,!=1.17,!=1.18,!=1.19,!=1.2,!=1.20,!=1.21,!=1.22,!=1.23,!=1.24,!=1.25,!=1.26,!=1.27,!=1.28,!=1.29,!=1.3,!=1.30,!=1.31,!=1.32,!=1.4,!=1.5,!=1.6,!=1.7,!=1.8,!=1.9,<1.33.1,>=0.17.2; extra == \"dynamodb\"",
"python-json-logger<4.0.0,>=3.0.0; extra == \"dynamodb\"",
"s3fs; extra == \"dynamodb\"",
"sqlalchemy[asyncio]<2,>=1.4.26; extra == \"dynamodb\"",
"sqlglot<21.2.0,>=19.0.0; extra == \"dynamodb\"",
"opentelemetry-api>=1.29.0; extra == \"tracing\"",
"opentelemetry-sdk>=1.29.0; extra == \"tracing\"",
"torch<3,>=1.7; extra == \"torch\"",
"PyAthena>=3.0.0; extra == \"all\"",
"adlfs; extra == \"all\"",
"aiomysql<0.3,>=0.1.1; extra == \"all\"",
"aiosqlite<0.21,>=0.19.0; extra == \"all\"",
"azure-identity<2,>=1.12.0; extra == \"all\"",
"boto3; extra == \"all\"",
"chalk-sqlalchemy-redshift<0.9,>=0.8.11; extra == \"all\"",
"clickhouse-driver; extra == \"all\"",
"clickhouse-sqlalchemy; extra == \"all\"",
"cohere==5.11.4; extra == \"all\"",
"cryptography<42.0.0; extra == \"all\"",
"databricks-sdk<1.0.0,>=0.29.0; extra == \"all\"",
"databricks-sql-connector<3.5,>=2.5.2; extra == \"all\"",
"duckdb<1.2.0,>=0.6; extra == \"all\"",
"fsspec; extra == \"all\"",
"gcsfs; extra == \"all\"",
"google-auth; extra == \"all\"",
"google-cloud-aiplatform<1.76.0; extra == \"all\"",
"google-cloud-bigquery-storage<2.28,>=2.22.0; extra == \"all\"",
"google-cloud-bigquery<4,>=3.25.0; extra == \"all\"",
"google-cloud-storage; extra == \"all\"",
"httpx<0.28.0; extra == \"all\"",
"openai<1.53,>=1.3.2; extra == \"all\"",
"opentelemetry-api>=1.29.0; extra == \"all\"",
"opentelemetry-sdk>=1.29.0; extra == \"all\"",
"packaging; extra == \"all\"",
"polars[timezone]!=1.0,!=1.1,!=1.10,!=1.11,!=1.12,!=1.13,!=1.14,!=1.15,!=1.16,!=1.17,!=1.18,!=1.19,!=1.2,!=1.20,!=1.21,!=1.22,!=1.23,!=1.24,!=1.25,!=1.26,!=1.27,!=1.28,!=1.29,!=1.3,!=1.30,!=1.31,!=1.32,!=1.4,!=1.5,!=1.6,!=1.7,!=1.8,!=1.9,<1.33.1,>=0.17.2; extra == \"all\"",
"psycopg2<3,>=2.9.4; extra == \"all\"",
"psycopg[binary]<3.3,>=3.1.9; extra == \"all\"",
"pydynamodb<0.8,>=0.6; extra == \"all\"",
"pymysql<2,>=1.0.2; extra == \"all\"",
"pyodbc<6,>=4.0.0; extra == \"all\"",
"python-json-logger<4.0.0,>=3.0.0; extra == \"all\"",
"redshift_connector<2.2,>=2.0.909; extra == \"all\"",
"s3fs; extra == \"all\"",
"snowflake-connector-python<4,>=3.12.4; extra == \"all\"",
"snowflake-sqlalchemy<1.7,>=1.5.0; extra == \"all\"",
"sqlalchemy-bigquery<1.12,>=1.5.0; extra == \"all\"",
"sqlalchemy-spanner<1.12.0; extra == \"all\"",
"sqlalchemy[asyncio]<2,>=1.4.26; extra == \"all\"",
"sqlglot<21.2.0,>=19.0.0; extra == \"all\"",
"tiktoken<0.9,>=0.5.1; extra == \"all\"",
"trino[sqlalchemy]; extra == \"all\"",
"adlfs; extra == \"polars\"",
"duckdb<1.2.0,>=0.6; extra == \"polars\"",
"fsspec; extra == \"polars\"",
"gcsfs; extra == \"polars\"",
"google-auth; extra == \"polars\"",
"google-cloud-storage; extra == \"polars\"",
"polars[timezone]!=1.0,!=1.1,!=1.10,!=1.11,!=1.12,!=1.13,!=1.14,!=1.15,!=1.16,!=1.17,!=1.18,!=1.19,!=1.2,!=1.20,!=1.21,!=1.22,!=1.23,!=1.24,!=1.25,!=1.26,!=1.27,!=1.28,!=1.29,!=1.3,!=1.30,!=1.31,!=1.32,!=1.4,!=1.5,!=1.6,!=1.7,!=1.8,!=1.9,<1.33.1,>=0.17.2; extra == \"polars\"",
"python-json-logger<4.0.0,>=3.0.0; extra == \"polars\"",
"s3fs; extra == \"polars\"",
"sqlglot<21.2.0,>=19.0.0; extra == \"polars\""
] | [] | [] | [] | [
"Homepage, https://chalk.ai",
"Documentation, https://docs.chalk.ai",
"Changelog, https://docs.chalk.ai/docs/changelog"
] | twine/6.2.0 CPython/3.10.19 | 2026-02-21T03:10:47.670755 | chalkpy-2.106.3.tar.gz | 1,224,949 | a8/17/e64806f486a34a954755caa139352d7f9162c06c5bffa51fb96e7181ef3e/chalkpy-2.106.3.tar.gz | source | sdist | null | false | 6b2f279d5a1dff8c88738052ac2bfe7b | 48b2f493bd1034ee26fd13500bcb8dcdb211ca2ce122d2e44adc58dd58c941e9 | a817e64806f486a34a954755caa139352d7f9162c06c5bffa51fb96e7181ef3e | null | [] | 982 |
2.3 | bakefile | 0.0.31 | Add your description here | [](https://github.com/wislertt/bakefile/actions/workflows/cd.yml)
[](https://github.com/wislertt/bakefile/actions/workflows/cd.yml)
[](https://sonarcloud.io/summary/new_code?id=wislertt_bakefile)
[](https://sonarcloud.io/summary/new_code?id=wislertt_bakefile)
[](https://sonarcloud.io/summary/new_code?id=wislertt_bakefile)
[](https://codecov.io/gh/wislertt/bakefile)
[](https://pypi.python.org/pypi/bakefile)
[](https://pepy.tech/projects/bakefile)
[](https://github.com/wislertt/bakefile/)
# bakefile
An OOP task runner in Python. Like a Makefile, but with tasks as Python class methods—so you can inherit, compose, and reuse them across projects.
## Why bakefile?
- **Reusable** - Makefile/Justfile work well, but reusing tasks across projects is hard. bakefile uses OOP class methods—inherit, compose, and share them
- **Python** - Use Python instead of DSL syntax. Access the full ecosystem with Python's language features, tooling, and type safety (ruff/ty)—with subprocess support for normal CLI commands
- **Language-agnostic** - Write tasks in Python, run commands for any language (Go, Rust, JS, etc.)
## Installation
Install via pip:
```bash
pip install bakefile
```
Or via uv:
```bash
uv add bakefile # as a project dependency
uv tool install bakefile # as a global tool
```
## Quick Start
Create a file named `bakefile.py`:
```python
from bake import Bakebook, command, console

class MyBakebook(Bakebook):
    @command()
    def build(self) -> None:
        console.echo("Building...")
        # Use self.ctx to run commands
        self.ctx.run("cargo build")

bakebook = MyBakebook()

@bakebook.command()
def hello(name: str = "world"):
    console.echo(f"Hello {name}!")
```
**Tip:** Or generate a bakefile automatically:
```bash
bakefile init # Basic bakefile
bakefile init --inline # With PEP 723 standalone dependencies
```
Run your tasks:
```bash
bake hello # Hello world!
bake hello --name Alice # Hello Alice!
bake build # Building...
```
## Core Concepts
### Two CLIs
bakefile provides two command-line tools:
- **`bake`** - Runs tasks from your `bakefile.py`
- **`bakefile`** - Manages your `bakefile.py` (init, add-inline, lint, find-python, export, sync, lock, add, pip)
Detailed CLI documentation in [Usage](#usage).
### Bakebook
A class in `bakefile.py` that holds your tasks:
- **Inherit and reuse** - Create base classes with common tasks, extend them across projects
- **Extends Pydantic's `BaseSettings`** - Define configuration as class attributes
- **Uses `@command()` decorator** - Same syntax as Typer for defining CLI commands
- **Provides `ctx.run()`** - Execute CLI commands (built on Python's subprocess) from your tasks
```python
from bake import Bakebook, command, console
from pydantic import Field
from typing import Annotated
import typer

class MyBakebook(Bakebook):
    # Pydantic configuration
    api_url: str = Field(default="https://api.example.com", validation_alias="API_URL")

    @command()
    def fetch(self) -> None:
        # Run CLI commands via self.ctx
        self.ctx.run(f"curl {self.api_url}")

bakebook = MyBakebook()

# Standalone functions also work
@bakebook.command()
def test(
    verbose: Annotated[bool, typer.Option("--verbose", "-v")] = False,
):
    if verbose:
        console.echo("Running tests...")
    bakebook.ctx.run("pytest")
```
### PEP 723 Support
bakefile supports [PEP 723](https://peps.python.org/pep-0723/) inline script metadata—your `bakefile.py` can declare its own dependencies. Add PEP 723 metadata to an existing bakefile with `bakefile add-inline`:
```python
# /// script
# requires-python = ">=3.14"
# dependencies = [
#   "bakefile>=0.0.0",
# ]
# ///
from bake import Bakebook, command, console

bakebook = Bakebook()

@bakebook.command()
def hello():
    console.echo("Hello from standalone bakefile!")
```
**Use case:** Ideal for non-Python projects without `pyproject.toml`. For Python projects, add bakefile to your project's dependencies instead.
## Usage
### Bakebook API
#### Creating a Bakebook
**Tip:** Generate a bakefile automatically with `bakefile init` or `bakefile add-inline`.
Create a bakebook by inheriting from `Bakebook` or instantiating it:
```python
from bake import Bakebook
bakebook = Bakebook()
```
#### @command Decorator
- **Pattern 1: Before instantiating** - Use `@command()` on class methods
- **Pattern 2: After instantiating** - Use `@bakebook.command()` on standalone functions
- **Accepts all Typer options** - `name`, `help`, `deprecated`, etc.
```python
from bake import Bakebook, command, console
from typing import Annotated
import typer

# Pattern 1: On class (use self.ctx for context access)
class MyBakebook(Bakebook):
    @command()
    def task1(self) -> None:
        console.echo("Task 1")
        self.ctx.run("echo 'Task 1 complete'")

bakebook = MyBakebook()

# Pattern 2: On instance (use bakebook.ctx for context access)
@bakebook.command(name="deploy", help="Deploy application")
def deploy(
    env: Annotated[str, typer.Option(help="Environment to deploy")] = "dev",
):
    console.echo(f"Deploying to {env}...")
    bakebook.ctx.run(f"kubectl apply -f {env}.yaml")
```
#### Context API
The `Bakebook` class provides a `.ctx` property for accessing CLI context:
```python
class MyBakebook(Bakebook):
    @command()
    def my_command(self) -> None:
        # Run a command
        self.ctx.run("echo hello")

        # Run with options
        self.ctx.run(
            "pytest",
            capture_output=False,  # Stream to terminal
            check=True,            # Raise on error
            cwd="/tmp",            # Working directory
            env={"KEY": "value"},  # Environment variables
        )

        # Run a multi-line script
        self.ctx.run_script(
            title="Setup",
            script="""
            echo "Step 1"
            echo "Step 2"
            """,
        )
```
#### Pydantic Settings
Bakebooks extend Pydantic's `BaseSettings` for configuration:
```python
from bake import Bakebook
from pydantic import Field

class MyBakebook(Bakebook):
    # Defaults
    database_url: str = "sqlite:///db.sqlite3"

    # With environment variable mapping
    api_key: str = Field(default="default-key", validation_alias="API_KEY")

    # With validation
    port: int = Field(default=8000, ge=1, le=65535)
```
Settings are loaded from environment variables, `.env` files, or defaults.
### `bake` CLI - Running Tasks
The `bake` command runs tasks from your `bakefile.py`. Run `bake --help` to see all available commands and options.
#### Basic Execution
```bash
bake <command> [args]
```
```bash
bake hello
bake build
bake test --verbose
```
#### Dry-Run Mode
Preview what would happen without executing:
```bash
bake -n build
bake --dry-run deploy
```
#### Verbosity Levels
Control output verbosity:
```bash
bake build # Silent (errors only)
bake -v build # Info level
bake -vv build # Debug level
```
#### Chaining Commands
Run multiple commands sequentially:
```bash
bake -c lint test build
```
If any command fails, the chain stops.
#### Options
Override defaults when running bake:
```bash
bake -f tasks.py build # Custom filename
bake -b my_bakebook build # Custom bakebook object name
bake -C /path/to/project build # Run from different directory
```
### `bakefile` CLI - Managing bakefile.py
The `bakefile` command (short: `bf`) manages your `bakefile.py`.
#### init
Create a new `bakefile.py`:
```bash
bakefile init # Basic bakefile
bakefile init --inline # With PEP 723 inline metadata
bakefile init --force # Force overwrite existing bakefile
```
#### add-inline
Add PEP 723 inline metadata to an existing bakefile:
```bash
bakefile add-inline
```
#### lint
Lint `bakefile.py` (or entire project) with ruff and ty:
```bash
bakefile lint # Lint bakefile.py and all Python files
bakefile lint --only-bakefile # Lint only bakefile.py
bakefile lint --no-ty # Skip type checking
```
#### uv-based commands (PEP 723 bakefile.py only)
Convenience wrappers around `uv` commands with `--script bakefile.py` added. For PEP 723 bakefile.py files only. For normal Python projects, use your preferred dependency manager (pip, poetry, uv, etc.).
```bash
bakefile sync # = uv sync --script bakefile.py
bakefile lock # = uv lock --script bakefile.py
bakefile add requests # = uv add --script bakefile.py requests
bakefile pip install # = uv pip install --python <bakefile-python-path>
```
#### find-python
Find the Python interpreter path for the bakefile:
```bash
bakefile find-python
```
#### export
Export bakebook variables to external formats:
```bash
bakefile export # Shell format (default)
bakefile export -f sh # Shell format
bakefile export -f dotenv # .env format
bakefile export -f json # JSON format
bakefile export -f yaml # YAML format
bakefile export -o config.sh # Write to file
# Examples:
bakefile export -f dotenv -o .env # .env file
bakefile export -f json -o config.json # JSON file
```
### `bakelib` - Optional Helpers
**bakelib** is an optional collection of opinionated helpers built on top of Bakebook. Includes Spaces (pre-configured tasks) and Environ (multi-environment support).
Install with:
```bash
pip install bakefile[lib]
```
**Note:** bakelib is optional—you can use bakefile without it. Create your own Bakebook classes if you prefer different conventions.
#### PythonSpace
PythonSpace provides common tasks for Python projects:
```python
from bakelib import PythonSpace
bakebook = PythonSpace()
```
Available commands:
- `bake lint` - Run prettier, toml-sort, ruff format, ruff check, ty, deptry
- `bake test` - Run pytest with coverage on `tests/unit/`
- `bake test-integration` - Run integration tests from `tests/integration/`
- `bake test-all` - Run all tests
- `bake clean` - Clean gitignored files (with exclusions)
- `bake clean-all` - Clean all gitignored files
- `bake setup-dev` - Setup Python development environment
- `bake tools` - List development tools
- `bake update` - Upgrade dependencies (includes uv lock --upgrade)
#### Creating Custom Spaces
Create custom spaces by inheriting from BaseSpace:
```python
from bakelib import BaseSpace

class MySpace(BaseSpace):
    def test(self) -> None:
        self.ctx.run("npm test")

bakebook = MySpace()
```
BaseSpace provides these tasks (override as needed):
- `lint()` - Run prettier
- `clean()` / `clean_all()` - Clean gitignored files
- `setup_dev()` - Setup development environment
- `tools()` - List development tools
- `update()` - Upgrade dependencies
#### Multi-Environment Bakebooks
For projects with multiple environments (dev, staging, prod), use environment bakebooks:
```python
from bakelib.environ import (
    DevEnvBakebook,
    StagingEnvBakebook,
    ProdEnvBakebook,
    get_bakebook,
)

bakebook_dev = DevEnvBakebook()
bakebook_staging = StagingEnvBakebook()
bakebook_prod = ProdEnvBakebook()

# Select bakebook based on ENV environment variable
bakebook = get_bakebook([bakebook_dev, bakebook_staging, bakebook_prod])
```
```bash
ENV=prod bake deploy # Uses prod bakebook
ENV=dev bake deploy # Uses dev bakebook
bake deploy # Defaults to dev (lowest priority)
```
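The selection step can be pictured as a lookup keyed on the `ENV` variable with a fallback default. This is a hypothetical reimplementation for illustration only; `get_bakebook`'s actual logic may differ:

```python
import os

def pick_bakebook(bakebooks: dict, default: str = "dev"):
    """Pick a bakebook by the ENV variable, falling back to `default`.

    `bakebooks` maps environment names to bakebook objects; here plain
    strings stand in for real bakebook instances.
    """
    env = os.environ.get("ENV", default)
    if env not in bakebooks:
        raise KeyError(f"no bakebook registered for ENV={env!r}")
    return bakebooks[env]

books = {"dev": "DevBakebook", "staging": "StagingBakebook", "prod": "ProdBakebook"}
os.environ.pop("ENV", None)
print(pick_bakebook(books))  # DevBakebook (default, lowest priority)
os.environ["ENV"] = "prod"
print(pick_bakebook(books))  # ProdBakebook
```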
Create custom environments by inheriting from `BaseEnv`:
```python
from bakelib.environ import BaseEnv, EnvBakebook

class MyEnv(BaseEnv):
    ENV_ORDER = ["dev", "sit", "qa", "uat", "prod"]

class MyEnvBakebook(EnvBakebook):
    env_: MyEnv = MyEnv("local")
```
For more details, see the [bakelib source](https://github.com/wislertt/bakefile/tree/main/src/bakelib).
## Development
### Environment Setup
Clone and install the project:
```bash
git clone https://github.com/wislertt/bakefile.git
cd bakefile
# Install bakefile as a global tool
uv tool install bakefile
# Setup development environment (macOS only)
# Installs brew, bun, uv, and pre-commit hooks
bake setup-dev
# Verify development environment is setup correctly
# Checks tool locations and runs lint + test
bake assert-setup-dev
```
**Note:** `bake setup-dev` only supports macOS. For other platforms, run `bake --dry-run setup-dev` to see the commands and follow platform-specific alternatives.
The project uses [uv](https://github.com/astral-sh/uv) for dependency management.
### Testing
Run tests using the bake commands:
```bash
bake test # Unit tests (fast)
bake test-integration # Integration tests (slow, real subprocess)
bake test-all # All tests with coverage
```
### Code Quality
Run linters and formatters before committing:
```bash
bake lint # Run prettier, toml-sort, ruff format, ruff check, ty, deptry
```
**Verification workflow:**
1. Make changes
2. Run `bake lint` to check code quality
3. Run `bake test` to verify unit tests pass
4. Commit when both pass
## Contributing
Contributions are welcome! Please see [CLAUDE.md](/.claude/CLAUDE.md) for development guidelines, including:
- Project structure and testing conventions
- Code quality standards
- Development workflow
## License
Licensed under the Apache License 2.0. See [LICENSE](/LICENSE) for the full text.
| text/markdown | Wisaroot Lertthaweedech | Wisaroot Lertthaweedech <l.wisaroot@gmail.com> | null | null | null | null | [] | [] | null | null | >=3.10 | [] | [] | [] | [
"beautysh>=6.4.2",
"click>=8.3.1",
"loguru>=0.7.3",
"orjson>=3.11.5",
"pydantic-settings>=2.0.0",
"pydantic>=2.12.5",
"python-dotenv>=1.2.1",
"pyyaml>=6.0.3",
"rich>=14.2.0",
"ruff>=0.14.10",
"tomli>=2.0.0; python_full_version < \"3.11\"",
"ty>=0.0.8",
"typer>=0.21.0",
"uv>=0.9.20",
"keyring>=25.7.0; extra == \"lib\"",
"pathspec>=1.0.3; extra == \"lib\"",
"tenacity>=9.1.2; extra == \"lib\"",
"zerv-version>=0.8.0; extra == \"lib\""
] | [] | [] | [] | [] | uv/0.10.4 {"installer":{"name":"uv","version":"0.10.4","subcommand":["publish"]},"python":null,"implementation":{"name":null,"version":null},"distro":{"name":"Ubuntu","version":"24.04","id":"noble","libc":null},"system":{"name":null,"release":null},"cpu":null,"openssl_version":null,"setuptools_version":null,"rustc_version":null,"ci":true} | 2026-02-21T03:10:20.888334 | bakefile-0.0.31.tar.gz | 50,154 | 14/7c/155c32c0cb5571e787cdde92b32dee3663fc618ecdf33049f669a54c0593/bakefile-0.0.31.tar.gz | source | sdist | null | false | d17ad4cebcd8a94b9744a951916da6f0 | b368691f2c24df6301987487026dce749fa0e08a164ae9c85c2711dcae9459b8 | 147c155c32c0cb5571e787cdde92b32dee3663fc618ecdf33049f669a54c0593 | null | [] | 228 |
2.4 | tallyfy | 1.1.0 | A comprehensive Python SDK for interacting with the Tallyfy API | # Tallyfy SDK
A comprehensive Python SDK for interacting with the Tallyfy API. This SDK provides a clean, modular interface for managing users, tasks, templates, and form fields in your Tallyfy organization.
## Table of Contents
- [Installation](#installation)
- [Quick Start](#quick-start)
- [Architecture](#architecture)
- [Core Features](#core-features)
- [API Reference](#api-reference)
- [Data Models](#data-models)
- [Error Handling](#error-handling)
- [Examples](#examples)
- [Advanced Usage](#advanced-usage)
- [Contributing](#contributing)
## Installation
```bash
pip install tallyfy
```
**Dependencies:**
- `requests` - HTTP client library
- `typing-extensions` - Type hints backport (only needed on Python < 3.8; `typing` itself is in the standard library)
## Quick Start
```python
from tallyfy import TallyfySDK
# Initialize the SDK
sdk = TallyfySDK(
    api_key="your_api_key_here"
)
# Get organization users
users = sdk.users.get_organization_users(org_id="your_org_id")
# Get current user's tasks
my_tasks = sdk.tasks.get_my_tasks(org_id="your_org_id")
# Search for templates
templates = sdk.search(org_id="your_org_id", search_query="onboarding", search_type="blueprint")
# Get a specific template
template = sdk.templates.get_template(org_id="your_org_id", template_name="Employee Onboarding")
```
## Architecture
The Tallyfy SDK follows a modular architecture with specialized management classes and comprehensive data models:
### Core Components
- **`TallyfySDK`** - Main SDK class that orchestrates all operations with backward compatibility methods
- **`BaseSDK`** - Base class with HTTP request handling, retry logic, and connection pooling
- **Management Modules:**
- `UserManager` - User and guest operations with modular retrieval and invitation components
- `TaskManager` - Task and process operations with search and creation capabilities
- `TemplateManager` - Template/checklist operations with automation analysis and health assessment
- `FormFieldManager` - Form field operations with AI-powered suggestions and dropdown management
- **Error Handling:**
- `TallyfyError` - Custom exception handling with status codes and response data
### File Structure
```
tallyfy/
├── __init__.py # SDK exports and version
├── core.py # BaseSDK and TallyfySDK classes
├── models.py # Data models and types
├── user_management/ # User and guest management (modular)
│ ├── __init__.py # UserManager with unified interface
│ ├── base.py # Common validation and error handling
│ ├── retrieval.py # User and guest retrieval operations
│ └── invitation.py # User invitation operations
├── task_management/ # Task and process management (modular)
│ ├── __init__.py # TaskManager with unified interface
│ ├── base.py # Common task operations base
│ ├── retrieval.py # Task and process retrieval
│ ├── search.py # Search functionality
│ └── creation.py # Task creation operations
├── template_management/ # Template management (modular)
│ ├── __init__.py # TemplateManager with unified interface
│ ├── base.py # Common template operations
│ ├── basic_operations.py # Template retrieval and CRUD operations
│ ├── automation.py # Automation rule management
│ ├── analysis.py # Template analysis and health checks
│ └── health_assessment.py # Template health assessment functionality
├── form_fields_management/ # Form field management (modular)
│ ├── __init__.py # FormFieldManager with unified interface
│ ├── base.py # Common form field operations
│ ├── crud_operations.py # CRUD operations for form fields
│ ├── options_management.py # Dropdown options management
│ └── suggestions.py # AI-powered field suggestions
└── README.md # This documentation
```
## Core Features
### 🔐 Authentication & Security
- Bearer token authentication with session management
- Configurable request timeouts and connection pooling
- Automatic retry logic for transient failures (5xx errors)
- No retry for client errors (4xx) to prevent API abuse
- Comprehensive error handling with detailed error information
### 👥 User Management
- **Modular architecture** with specialized retrieval and invitation components
- Get organization members with full profile data (country, timezone, job details)
- Get minimal user lists for performance-critical operations
- Manage guests and external users with guest-specific features
- **Enhanced search capabilities** - Find users by email or name with fuzzy matching
- **Batch invitation support** - Invite multiple users with default roles and messages
- **Advanced invitation features** - Custom messages, role validation, and resend functionality
- Support for user groups and permissions with flexible query parameters
- **Convenience methods** - Get all members (users + guests) in a single call
### ✅ Task Management
- Get tasks for specific users or processes with filtering
- Create standalone tasks with rich assignment options
- Search processes by name with fuzzy matching
- Advanced filtering for organization runs (status, owners, tags, etc.)
- Universal search across processes, templates, and tasks
### 📋 Template Management
- Get templates with full metadata and step details
- Search templates by name with exact and fuzzy matching
- Update template properties and metadata
- Duplicate templates with permission copying options
- **Automation management** - Create, update, and analyze conditional rules
- **Step dependency analysis** - Understand step visibility conditions
- **AI-powered deadline suggestions** for individual steps
- **Template health assessment** - Comprehensive analysis of template quality
- **Automation consolidation** - Optimize and merge redundant rules
- Add assignees and edit step descriptions
- Kickoff field management for template launch forms
### 📝 Form Field Management
- Add form fields to template steps with comprehensive validation
- Support for text, dropdown, date, file upload, WYSIWYG editor fields
- Update field properties, validation rules, and positioning
- Move fields between steps with automatic reordering
- **AI-powered field suggestions** based on step analysis
- Manage dropdown options with bulk updates
- **Smart field recommendations** with confidence scoring
- Field dependency management and conditional logic
### 🔍 Search & Discovery
- Universal search across processes, templates, and tasks
- Exact and fuzzy matching with relevance scoring
- Pagination support with configurable page sizes
- Rich search results with metadata and context
- Process and template name-based search with suggestions
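The exact-plus-fuzzy matching described above can be approximated with the standard library's `difflib` — a hedged sketch of the idea, not the SDK's internal scorer:

```python
import difflib

def fuzzy_rank(query, candidates, cutoff=0.4):
    """Score each candidate against the query; return (score, name) best-first."""
    scored = [
        (difflib.SequenceMatcher(None, query.lower(), c.lower()).ratio(), c)
        for c in candidates
    ]
    # Keep only matches above the relevance cutoff, strongest first
    return sorted((pair for pair in scored if pair[0] >= cutoff), reverse=True)
```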
## API Reference
### SDK Initialization
```python
TallyfySDK(
    api_key: str,
    timeout: int = 30,
    max_retries: int = 3,
    retry_delay: float = 1.0
)
```
### User Management
```python
# Get all organization users with optional group data
users = sdk.users.get_organization_users(org_id, with_groups=False)
# Get minimal user list for performance
users_list = sdk.users.get_organization_users_list(org_id)
# Get organization guests with optional statistics
guests = sdk.users.get_organization_guests(org_id, with_stats=False)
# Get current user info
current_user = sdk.users.get_current_user_info(org_id)
# Enhanced search capabilities
user = sdk.users.get_user_by_email(org_id, "john@company.com")
guest = sdk.users.get_guest_by_email(org_id, "contractor@company.com")
# Search members by name (fuzzy matching)
search_results = sdk.users.search_members_by_name(org_id, "John Smith", include_guests=True)
matching_users = search_results['users']
matching_guests = search_results['guests']
# Get all members in one call
all_members = sdk.users.get_all_organization_members(
    org_id, include_guests=True, with_groups=True, with_stats=True
)
# Invite single user to organization
user = sdk.users.invite_user_to_organization(
    org_id, email="new.hire@company.com",
    first_name="John", last_name="Doe",
    role="standard",
    message="Welcome! Please complete your onboarding process."
)
# Batch invite multiple users
invitations = [
    {
        "email": "user1@company.com",
        "first_name": "Jane",
        "last_name": "Smith",
        "role": "standard"
    },
    {
        "email": "user2@company.com",
        "first_name": "Bob",
        "last_name": "Johnson",
        "message": "Welcome to the engineering team!"
    }
]
results = sdk.users.invite_multiple_users(
    org_id, invitations,
    default_role="light",
    default_message="Welcome to our organization!"
)
# Resend invitation
success = sdk.users.resend_invitation(
    org_id, email="pending@company.com",
    message="Reminder: Please accept your invitation to join our organization."
)
# Generate custom invitation message
message = sdk.users.get_invitation_template_message(
    org_name="ACME Corp",
    custom_text="We're excited to have you join our innovative team!"
)
# Validate invitation data before sending
invitation_data = {
    "email": "new@company.com",
    "first_name": "Alice",
    "last_name": "Wilson",
    "role": "admin"
}
validated_data = sdk.users.validate_invitation_data(invitation_data)
# Access modular components directly (advanced usage)
# Retrieval operations
users_with_groups = sdk.users.retrieval.get_organization_users(org_id, with_groups=True)
# Invitation operations
batch_results = sdk.users.invitation.invite_multiple_users(org_id, invitations)
```
### Task Management
```python
# Get current user's tasks
my_tasks = sdk.tasks.get_my_tasks(org_id)
# Get specific user's tasks
user_tasks = sdk.tasks.get_user_tasks(org_id, user_id)
# Get tasks for a process
process_tasks = sdk.tasks.get_tasks_for_process(org_id, process_id=None, process_name="My Process")
# Get organization processes with filtering
runs = sdk.tasks.get_organization_runs(
    org_id,
    status="active",
    owners="123,456",
    checklist_id="template_id"
)
# Create standalone task
task = sdk.tasks.create_task(
    org_id, title="Review Document",
    deadline="2024-12-31T23:59:59Z",
    description="Please review the attached document"
)
# Search processes, templates, or tasks
results = sdk.tasks.search(org_id, "onboarding", search_type="process")
# Search for specific entity types
templates = sdk.tasks.search(org_id, "employee onboarding", search_type="blueprint")
tasks = sdk.tasks.search(org_id, "review", search_type="task")
```
### Template Management
```python
# Get template by ID or name
template = sdk.templates.get_template(org_id, template_id=None, template_name="Onboarding")
# Get template with full step details
template_data = sdk.templates.get_template_with_steps(org_id, template_id)
# Update template metadata
updated = sdk.templates.update_template_metadata(
    org_id, template_id,
    title="New Title",
    summary="Updated summary",
    guidance="New guidance text"
)
# Get template steps
steps = sdk.templates.get_template_steps(org_id, template_id)
# Analyze step dependencies
dependencies = sdk.templates.get_step_dependencies(org_id, template_id, step_id)
# Get deadline suggestions
deadline_suggestion = sdk.templates.suggest_step_deadline(org_id, template_id, step_id)
# Advanced template operations
duplicated = sdk.templates.duplicate_template(org_id, template_id, "New Template Name", copy_permissions=True)
# Automation management
automation_data = {
    "automated_alias": "Auto-assign reviewer",
    "conditions": [{"field_id": "department", "operator": "equals", "value": "Engineering"}],
    "actions": [{"type": "assign_step", "step_id": "review_step", "assignee_type": "user", "assignee_id": "123"}]
}
automation = sdk.templates.create_automation_rule(org_id, template_id, automation_data)
# Analyze automation conflicts and redundancies
analysis = sdk.templates.analyze_template_automations(org_id, template_id)
# Get consolidation suggestions
suggestions = sdk.templates.suggest_automation_consolidation(org_id, template_id)
# Assess overall template health
health_report = sdk.templates.assess_template_health(org_id, template_id)
# Template metadata management
updated = sdk.templates.update_template_metadata(
    org_id, template_id,
    title="Updated Template Title",
    guidance="New guidance text",
    is_featured=True
)
# Add assignees to step
assignees = {"users": [123, 456], "groups": ["managers"], "guests": ["contractor@company.com"]}
sdk.templates.add_assignees_to_step(org_id, template_id, step_id, assignees)
# Edit step description
sdk.templates.edit_description_on_step(org_id, template_id, step_id, "Updated step description with detailed instructions")
# Add new step to template
step_data = {
    "title": "Quality Review",
    "description": "Perform final quality check",
    "position": 5,
    "assignees": {"users": [789]}
}
new_step = sdk.templates.add_step_to_template(org_id, template_id, step_data)
```
### Form Field Management
```python
# Add form field to step
field_data = {
    "field_type": "text",
    "label": "Customer Name",
    "required": True,
    "position": 1
}
field = sdk.form_fields.add_form_field_to_step(org_id, template_id, step_id, field_data)
# Update form field
updated_field = sdk.form_fields.update_form_field(
    org_id, template_id, step_id, field_id,
    label="Updated Label",
    required=False
)
# Move field between steps
success = sdk.form_fields.move_form_field(
    org_id, template_id, from_step, field_id, to_step, position=2
)
# Delete form field
success = sdk.form_fields.delete_form_field(org_id, template_id, step_id, field_id)
# Get dropdown options
options = sdk.form_fields.get_dropdown_options(org_id, template_id, step_id, field_id)
# Update dropdown options
success = sdk.form_fields.update_dropdown_options(
    org_id, template_id, step_id, field_id,
    ["Option 1", "Option 2", "Option 3"]
)
# Get AI-powered field suggestions
suggestions = sdk.form_fields.suggest_form_fields_for_step(org_id, template_id, step_id)
```
## Data Models
The SDK provides comprehensive dataclasses for type safety and easy data access:
### Core Models
- **`User`** - Organization member with full profile data (country, timezone, job details)
- **`Guest`** - External user with limited access and guest-specific details
- **`GuestDetails`** - Extended guest information and statistics
- **`Task`** - Individual work item with owners, deadlines, and process linkage
- **`Run`** - Process instance (workflow execution) with progress tracking
- **`Template`** - Complete template/checklist definition with automation rules
- **`Step`** - Individual step within a template with conditions and assignments
### Assignment and Ownership Models
- **`TaskOwners`** - Task assignment information supporting users, guests, and groups
- **`RunProgress`** - Process completion tracking with step-level progress
- **`Country`** - Geographic data for user profiles
- **`Folder`** - Organizational folder structure for templates and processes
### Template and Automation Models
- **`Tag`** - Industry and topic classification tags
- **`PrerunField`** - Form fields for template kickoff with validation rules
- **`AutomationCondition`** - Conditional logic for workflow automation
- **`AutomationAction`** - Actions triggered by automation rules
- **`AutomatedAction`** - Complete automation rule with conditions and actions
- **`AutomationDeadline`** - Deadline configuration for automated actions
- **`AutomationAssignees`** - Assignee configuration for automated actions
### Step and Form Models
- **`Capture`** - Form fields within steps with validation and positioning
- **`StepStartDate`** - Start date configuration for steps
- **`StepDeadline`** - Deadline configuration for individual steps
- **`StepBpToLaunch`** - Sub-process launch configuration
### Search and Utility Models
- **`SearchResult`** - Unified search results for templates, processes, and tasks
- **`TallyfyError`** - Custom exception with status codes and response data
### Model Features
- **Automatic parsing** from API responses via `from_dict()` class methods
- **Type safety** with comprehensive type hints
- **Nested object support** for complex data structures
- **Default value handling** for optional fields
Example model usage:
```python
# Models automatically parse API responses
users = sdk.users.get_organization_users(org_id)
for user in users:
    print(f"{user.full_name} ({user.email})")
    if user.country:
        print(f"Country: {user.country.name}")
# Access nested data safely
template = sdk.templates.get_template(org_id, template_name="Onboarding")
if template.automated_actions:
    for automation in template.automated_actions:
        print(f"Automation: {automation.automated_alias}")
        for condition in automation.conditions:
            print(f"  Condition: {condition.statement}")
```
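The `from_dict()` parsing pattern mentioned above can be sketched roughly like this — a simplified illustration, not the actual `models.py` definition, which carries many more fields and nested types:

```python
from dataclasses import dataclass, fields

# Simplified sketch of the from_dict() pattern; the real Country model
# in models.py may define different fields.
@dataclass
class Country:
    name: str = ""
    code: str = ""

    @classmethod
    def from_dict(cls, data: dict) -> "Country":
        known = {f.name for f in fields(cls)}
        # Ignore unknown keys so extra API fields don't break parsing
        return cls(**{k: v for k, v in data.items() if k in known})
```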
## Error Handling
The SDK provides comprehensive error handling through the `TallyfyError` class:
```python
from tallyfy import TallyfyError
try:
    users = sdk.users.get_organization_users("invalid_org_id")
except TallyfyError as e:
    print(f"API Error: {e}")
    print(f"Status Code: {e.status_code}")
    print(f"Response Data: {e.response_data}")
except ValueError as e:
    print(f"Validation Error: {e}")
```
### Error Types
- **`TallyfyError`** - API-specific errors with status codes and response data
- **`ValueError`** - Input validation errors for required parameters
- **`RequestException`** - Network and connection errors (automatically retried)
### Retry Logic
The SDK automatically retries failed requests with configurable settings:
- **Server errors (5xx)** - Automatically retried up to `max_retries` times
- **Client errors (4xx)** - Not retried (indicates user/input error)
- **Network errors** - Retried with exponential backoff
- **Timeout errors** - Retried with configurable delay
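The retry behaviour described above can be sketched like this — a hedged illustration of the policy, not the SDK's actual implementation:

```python
import time

def request_with_retry(send, max_retries=3, retry_delay=1.0):
    """Retry server errors (5xx) with exponential backoff; never retry 4xx."""
    for attempt in range(max_retries + 1):
        status, body = send()
        if status < 500:
            return status, body  # success or client error: stop immediately
        if attempt < max_retries:
            # Exponential backoff: delay doubles each attempt (1s, 2s, 4s, ...)
            time.sleep(retry_delay * (2 ** attempt))
    return status, body  # retries exhausted, return last 5xx response
```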
## Examples
### Complete User Onboarding Workflow
```python
from tallyfy import TallyfySDK, TaskOwners
sdk = TallyfySDK(api_key="your_api_key")
org_id = "your_org_id"
# 1. Invite new user
new_user = sdk.users.invite_user_to_organization(
    org_id,
    email="new.hire@company.com",
    first_name="John",
    last_name="Doe",
    role="standard",
    message="Welcome to the team! Please complete your onboarding."
)
# 2. Create onboarding task
if new_user:
    owners = TaskOwners(users=[new_user.id])
    onboarding_task = sdk.tasks.create_task(
        org_id,
        title="Complete Employee Onboarding",
        deadline="2025-12-31 17:00:00",
        description="Please complete all onboarding steps including HR paperwork and IT setup.",
        owners=owners
    )
    print(f"Created onboarding task: {onboarding_task.id}")
```
### Template Analysis and Enhancement
```python
# Get template with full details
template_data = sdk.templates.get_template_with_steps(org_id, template_name="Project Kickoff")
if template_data:
    template = template_data['template']
    print(f"Template: {template.title}")
    print(f"Steps: {template_data['step_count']}")
    print(f"Automations: {template_data['automation_count']}")
    # Analyze each step for improvements
    for step_data in template_data['steps']:
        step_id = step_data['id']
        # Get dependency analysis
        dependencies = sdk.templates.get_step_dependencies(org_id, template.id, step_id)
        if dependencies['has_conditional_visibility']:
            print(f"Step '{step_data['title']}' has conditional logic")
        # Get deadline suggestions
        deadline_suggestion = sdk.templates.suggest_step_deadline(org_id, template.id, step_id)
        print(f"Suggested deadline: {deadline_suggestion['suggested_deadline']}")
        # Get form field suggestions
        field_suggestions = sdk.form_fields.suggest_form_fields_for_step(org_id, template.id, step_id)
        for suggestion in field_suggestions[:2]:  # Top 2 suggestions
            if suggestion['confidence'] > 0.7:
                print(f"High-confidence field suggestion: {suggestion['field_config']['label']}")
```
### Advanced Process Management
```python
# Get all active processes with comprehensive filtering
active_runs = sdk.tasks.get_organization_runs(
    org_id,
    status="active",
    with_data="checklist,tasks,assets,tags",
    form_fields_values=True,
    starred=True
)
# Group by template
template_usage = {}
for run in active_runs:
    template_id = run.checklist_id
    if template_id not in template_usage:
        template_usage[template_id] = {
            'template_name': run.checklist_title,
            'active_count': 0,
            'runs': []
        }
    template_usage[template_id]['active_count'] += 1
    template_usage[template_id]['runs'].append(run)
# Show most used templates
sorted_templates = sorted(template_usage.items(), key=lambda x: x[1]['active_count'], reverse=True)
for template_id, data in sorted_templates[:5]:
    print(f"{data['template_name']}: {data['active_count']} active processes")
```
## Advanced Usage
### Custom Request Configuration
```python
# SDK with custom configuration
sdk = TallyfySDK(
    api_key="your_api_key",
    base_url="https://api.tallyfy.com",
    timeout=60,       # 60 second timeout
    max_retries=5,    # Retry up to 5 times
    retry_delay=2.0   # 2 second delay between retries
)
```
### Accessing Raw API Responses
```python
# For advanced users who need raw API data
template_data = sdk.templates.get_template_with_steps(org_id, template_id)
raw_api_response = template_data['raw_data']
# Access nested API data not exposed in models
custom_fields = raw_api_response.get('custom_metadata', {})
```
### Batch Operations
```python
# Efficiently process multiple operations
org_users = sdk.users.get_organization_users(org_id)
user_tasks = {}
for user in org_users[:10]:  # Process first 10 users
    try:
        tasks = sdk.tasks.get_user_tasks(org_id, user.id)
        user_tasks[user.id] = tasks
        print(f"{user.full_name}: {len(tasks)} tasks")
    except TallyfyError as e:
        print(f"Failed to get tasks for {user.full_name}: {e}")
```
### Form Field Automation
```python
# Automatically enhance templates with smart form fields
def enhance_template_with_smart_fields(org_id, template_id):
    steps = sdk.templates.get_template_steps(org_id, template_id)
    for step in steps:
        # Get AI suggestions for each step
        suggestions = sdk.form_fields.suggest_form_fields_for_step(org_id, template_id, step.id)
        # Implement high-confidence suggestions
        for suggestion in suggestions:
            if suggestion['confidence'] > 0.8 and suggestion['priority'] == 'high':
                field_data = suggestion['field_config']
                try:
                    new_field = sdk.form_fields.add_form_field_to_step(
                        org_id, template_id, step.id, field_data
                    )
                    print(f"Added field '{field_data['label']}' to step '{step.title}'")
                except TallyfyError as e:
                    print(f"Failed to add field: {e}")
# Use the enhancement function
enhance_template_with_smart_fields(org_id, "your_template_id")
```
## Contributing
### Code Style
- Follow PEP 8 style guidelines
- Use type hints for all functions and methods
- Add comprehensive docstrings for public APIs
- Include error handling for all external API calls
### Adding New Features
1. Add methods to appropriate management class
2. Create or update data models in `models.py`
3. Add comprehensive docstrings and type hints
4. Include error handling and logging
5. Update this README with usage examples
## Support
For bugs, feature requests, or questions:
1. Check existing issues in the project repository
2. Contact us at: support@tallyfy.com
---
**Version:** 1.1.0
**Last Updated:** 2025
| text/markdown | Tallyfy | Tallyfy <support@tallyfy.com> | null | Tallyfy <support@tallyfy.com> | MIT | tallyfy, api, sdk, workflow, automation, task, management | [
"Development Status :: 5 - Production/Stable",
"Intended Audience :: Developers",
"License :: OSI Approved :: MIT License",
"Operating System :: OS Independent",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.7",
"Programming Language :: Python :: 3.8",
"Programming Language :: Python :: 3.9",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Topic :: Software Development :: Libraries :: Python Modules",
"Topic :: Office/Business :: Scheduling",
"Topic :: Internet :: WWW/HTTP :: Dynamic Content"
] | [] | https://github.com/tallyfy/sdk | null | >=3.7 | [] | [] | [] | [
"requests>=2.25.0",
"typing-extensions>=4.0.0; python_version < \"3.8\"",
"email-validator==2.2.0",
"pytest>=6.0.0; extra == \"dev\"",
"pytest-cov>=2.10.0; extra == \"dev\"",
"black>=21.0.1; extra == \"dev\"",
"flake8>=3.8.0; extra == \"dev\"",
"mypy>=0.800; extra == \"dev\""
] | [] | [] | [] | [
"Documentation, https://tallyfy.com/products/",
"Repository, https://github.com/tallyfy/sdk",
"Bug Tracker, https://github.com/tallyfy/sdk/issues",
"Changelog, https://github.com/tallyfy/sdk/blob/main/CHANGELOG.md"
] | twine/6.2.0 CPython/3.14.2 | 2026-02-21T03:08:32.353118 | tallyfy-1.1.0.tar.gz | 168,234 | 80/ef/fbb7fd9767ede20cb691220f29f23e07332537c60a1ec15b3c72ba1aefd2/tallyfy-1.1.0.tar.gz | source | sdist | null | false | 9694f3fae16d2281d0290f34aee63ed6 | 4d104bc22c87bcc3641492922174ec5e4c0639c7efcd13d6982c6fa1bc7f99c0 | 80effbb7fd9767ede20cb691220f29f23e07332537c60a1ec15b3c72ba1aefd2 | null | [
"LICENSE"
] | 231 |
2.4 | cortex-identity | 1.2.1 | Own your AI memory. Take it everywhere. | <h1 align="center">Cortex</h1>
<p align="center"><strong>Own your AI memory. Take it everywhere.</strong></p>
<p align="center">
<a href="https://pypi.org/project/cortex-identity/"><img src="https://img.shields.io/pypi/v/cortex-identity?color=blue&label=PyPI" alt="PyPI"></a>
<a href="https://pypi.org/project/cortex-identity/"><img src="https://img.shields.io/pypi/pyversions/cortex-identity" alt="Python"></a>
<a href="https://github.com/Junebugg1214/Cortex-AI/blob/main/LICENSE"><img src="https://img.shields.io/github/license/Junebugg1214/Cortex-AI" alt="License"></a>
<a href="https://github.com/Junebugg1214/Cortex-AI/stargazers"><img src="https://img.shields.io/github/stars/Junebugg1214/Cortex-AI?style=social" alt="Stars"></a>
<img src="https://img.shields.io/badge/tests-1%2C710%20passing-brightgreen" alt="Tests">
</p>
---
**Your ChatGPT knows you. Now Claude does too.**
Cortex extracts your context from ChatGPT, Claude, Gemini, Perplexity, and coding tools (Claude Code, Cursor, Copilot) — builds a portable knowledge graph you own — and pushes it to any platform or serves it over HTTP as an API. Cryptographically signed. Version controlled. Protocol-grade. Zero dependencies. Multiple storage backends (SQLite, PostgreSQL). SDKs for Python and TypeScript.
### Own your memory
<p align="center">
<img src="assets/demo-own.gif" alt="Extract and query your AI memory" width="800">
</p>
### Share it how you want
<p align="center">
<img src="assets/demo-share.gif" alt="Disclosure policies control what each platform sees" width="800">
</p>
### Serve it as an API
<p align="center">
<img src="assets/demo-api.gif" alt="CaaS server with grant tokens and HTTP access" width="800">
</p>
## Quick Start
```bash
pip install cortex-identity
# Migrate ChatGPT → Claude in one command
cortex chatgpt-export.zip --to claude -o ./output
# See what it extracted
cortex stats output/context.json
# Visualize your knowledge graph
cortex viz output/context.json --output graph.html
```
## What Makes This Different
| | Cortex | Mem0 | Letta | ChatGPT Memory | Claude Memory |
|---|:-:|:-:|:-:|:-:|:-:|
| **You own it** | Yes | No | No | No | No |
| **Portable** | Yes | No | No | No | No |
| **Knowledge graph** | Yes | Partial | No | No | No |
| **API-ready** | Yes | No | No | No | No |
| **Cryptographic identity** | Yes | No | No | No | No |
| **Role-based access (RBAC)** | Yes | No | No | No | No |
| **OAuth 2.0 / OIDC** | Yes | No | No | No | No |
| **SDKs (Python + TypeScript)** | Yes | Partial | No | No | No |
| **Temporal tracking** | Yes | No | No | No | No |
| **Works offline** | Yes | No | No | No | No |
| **Zero dependencies** | Yes | No | No | N/A | N/A |
> Mem0, Letta, and built-in AI memories are **agent memory** — owned by the platform. Cortex is **your memory**, under **your control**.
## How It Works
```
Chat Exports (ChatGPT, Claude, Gemini, Perplexity)
+ Coding Sessions (Claude Code, Cursor, Copilot)
|
Extract ──→ Knowledge Graph ──→ Sign & Version ──┬──→ Push Anywhere
└──→ Serve via API (CaaS)
```
Nodes are entities, not category items. "Python" is ONE node with tags `[technical_expertise, domain_knowledge]` — not duplicated across categories. Edges capture typed relationships: `Python --applied_in--> Healthcare`.
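The node/edge shape described above might look roughly like this — the field names here are illustrative assumptions, not the actual Cortex schema:

```python
# Illustrative shape only — field names are assumptions, not Cortex's schema.
node = {
    "id": "python",
    "label": "Python",
    # One node, many tags — not duplicated across categories
    "tags": ["technical_expertise", "domain_knowledge"],
    "confidence": 0.9,
}
# Edges capture typed relationships between entities
edge = {"source": "python", "target": "healthcare", "type": "applied_in"}
```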
## Installation
```bash
pip install cortex-identity # Core (zero dependencies)
pip install cortex-identity[crypto] # + Ed25519 signatures
pip install cortex-identity[fast] # + 10x faster graph layout
pip install cortex-identity[postgres] # + PostgreSQL storage backend
pip install cortex-identity[full] # Everything
```
<details>
<summary><strong>Install from source</strong></summary>
```bash
git clone https://github.com/Junebugg1214/Cortex-AI.git
cd Cortex-AI
pip install -e .
```
Requires Python 3.10+ (macOS, Linux, Windows). No external packages required for core functionality.
</details>
---
## Features
<details>
<summary><strong>Knowledge Graph Engine</strong></summary>
### Graph Foundation
Everything is nodes and edges. Nodes have tags (not fixed categories), confidence scores, temporal metadata, and extensible properties. The graph is backward compatible — v4 flat-category JSON converts losslessly.
```bash
cortex query context.json --node "Python"
cortex query context.json --neighbors "Python"
cortex stats context.json
```
### Smart Edges
Automatic relationship discovery:
- **Pattern rules** — `technical_expertise` + `active_priorities` = `used_in` edge
- **Co-occurrence** — entities appearing together in messages get linked (PMI for large datasets, frequency thresholds for small)
- **Centrality** — identifies your most important nodes (degree centrality, PageRank for 200+ nodes)
- **Graph-aware dedup** — merges near-duplicates using 70% text similarity + 30% neighbor overlap
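Co-occurrence scoring with PMI, as mentioned above, reduces to a short formula — an illustrative sketch of the statistic, not the engine's actual code:

```python
import math

def pmi(count_xy, count_x, count_y, total):
    """Pointwise mutual information: log of P(x,y) / (P(x) * P(y)).

    Positive when x and y co-occur more often than chance; zero when
    they are statistically independent.
    """
    p_xy = count_xy / total
    p_x = count_x / total
    p_y = count_y / total
    return math.log(p_xy / (p_x * p_y))
```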
### Query + Intelligence
```bash
cortex query context.json --category technical_expertise
cortex query context.json --strongest 10
cortex query context.json --isolated
cortex query context.json --path "Python" "Mayo Clinic"
cortex query context.json --components
cortex gaps context.json
cortex digest context.json --previous last_week.json
```
</details>
<details>
<summary><strong>UPAI Protocol (Cryptographic Identity)</strong></summary>
Full protocol specification: [`spec/upai-v1.0.md`](spec/upai-v1.0.md)
- **W3C `did:key` identity** — Ed25519 multicodec + base58btc. Interoperable with the decentralized identity ecosystem.
- **Cryptographic signing** — SHA-256 integrity (always). Ed25519 signatures (with `pynacl`). Proves the graph is yours and untampered.
- **Signed envelopes** — Three-part `header.payload.signature` format with replay protection (nonce, iat, exp, audience binding).
- **Signed grant tokens** — Scoped access tokens (`context:read`, `versions:read`) with Ed25519 signatures and expiration.
- **Key rotation** — Rotate to a new keypair with a verifiable revocation chain. Old keys get revocation proofs.
- **Selective disclosure** — Policies control what each platform sees. "Professional" shows job/skills. "Technical" shows your tech stack. "Minimal" shows almost nothing.
- **Version control** — Git-like commits for your identity. Log, diff, checkout, rollback.
- **Verifiable credentials** — Issue and verify W3C-style credentials bound to your DID.
- **Encrypted backup** — AES-256-GCM encrypted identity exports with key-derived encryption.
- **Service discovery** — `.well-known/upai-configuration` endpoint for automated client bootstrapping.
- **RBAC** — Role-based access control with 4 roles (owner, admin, editor, viewer) and 10 permission scopes.
- **JSON Schema validation** — 9 schemas for all data structures, stdlib-only validator.
- **Structured error codes** — UPAI-4xxx (client) and UPAI-5xxx (server) error registry.
- **Webhook signing** — HMAC-SHA256 payload signatures for event notifications.
```bash
cortex identity --init --name "Your Name"
cortex commit context.json -m "Added June ChatGPT export"
cortex log
cortex identity --show
cortex sync context.json --to claude --policy professional -o ./output
cortex rotate # Rotate identity keys
```
**Built-in disclosure policies:**
| Policy | What's Shared | Min Confidence |
|--------|---------------|----------------|
| `full` | Everything | 0.0 |
| `professional` | Identity, work, skills, priorities | 0.6 |
| `technical` | Tech stack, domain knowledge, priorities | 0.5 |
| `minimal` | Identity, communication preferences only | 0.8 |
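Webhook consumers can verify the HMAC-SHA256 payload signature along these lines; the hex encoding and the idea of comparing against a header value are assumptions to check against the server's actual scheme:

```python
import hashlib
import hmac

def verify_webhook(secret: bytes, payload: bytes, signature_hex: str) -> bool:
    """Recompute the HMAC-SHA256 of the raw payload and compare in
    constant time. Encoding details here are illustrative guesses."""
    expected = hmac.new(secret, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, signature_hex)
```

`hmac.compare_digest` avoids timing side channels that a plain `==` comparison would leak.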
</details>
<details>
<summary><strong>Context-as-a-Service (CaaS) API</strong></summary>
Serve your context over HTTP so AI platforms can pull it directly. OpenAPI spec: [`spec/openapi.json`](spec/openapi.json)
```bash
# Start the CaaS server
cortex serve context.json --port 8421
# With PostgreSQL storage
cortex serve context.json --storage postgres --db-url "dbname=cortex"
# With SSE and Prometheus metrics
cortex serve context.json --enable-sse --enable-metrics
# With INI config file
cortex serve context.json --config deploy/cortex.ini
# Create a scoped access token for a platform
cortex grant --create --audience "Claude" --policy professional
# Revoke or list grants
cortex grant --list
cortex grant --revoke <grant_id>
# Manage custom disclosure policies
cortex policy --list
cortex policy --create --name "team" --include-tags technical_expertise domain_knowledge
```
**40+ API endpoints** across 12 groups:
| Group | Endpoints | Auth |
|-------|-----------|------|
| Discovery | `/.well-known/upai-configuration`, `/identity`, `/health` | None |
| Grants | `POST/GET/DELETE /grants` | Self-managed |
| Context | `/context`, `/context/compact`, `/context/nodes`, `/context/edges`, `/context/stats` | `context:read` |
| Graph Queries | `/context/path/<from>/<to>`, `POST /context/search`, `POST /context/batch` | `context:read` |
| CRUD | `POST/GET/PATCH/DELETE` on individual nodes and edges | `context:write` |
| Versions | `/versions`, `/versions/<id>`, `/versions/diff` | `versions:read` |
| Webhooks | `POST/GET/DELETE /webhooks` | Self-managed |
| Credentials | `POST/GET/DELETE /credentials`, `/credentials/<id>/verify` | `credentials:*` |
| Policies | `POST/GET/PATCH/DELETE /policies` | Owner |
| Audit | `/audit`, `/audit/verify` | Owner |
| SSE | `/events` (Server-Sent Events, `--enable-sse`) | `context:read` |
| Dashboard | `/dashboard` (SPA with OAuth login) | Session / OAuth |
**Auth flow:** Platform requests a signed grant token with `cortex grant` -> uses it as `Authorization: Bearer <token>` -> server verifies signature, expiry, and scope -> returns disclosure-filtered context.
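In practice the flow above is an ordinary Bearer-token HTTP request. A stdlib-only sketch, with placeholder URL and token:

```python
import json
import urllib.request

def build_context_request(base_url: str, token: str) -> urllib.request.Request:
    """Build the authenticated GET /context request from the auth flow
    above. base_url and token are caller-supplied placeholders."""
    return urllib.request.Request(
        f"{base_url}/context",
        headers={"Authorization": f"Bearer {token}"},
    )

# Usage (requires a running server):
#   with urllib.request.urlopen(build_context_request(url, token)) as resp:
#       context = json.loads(resp.read().decode("utf-8"))
```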
</details>
<details>
<summary><strong>Storage Backends</strong></summary>
Cortex supports three storage backends, selectable via CLI flag:
| Backend | Flag | Use Case |
|---------|------|----------|
| JSON | `--storage json` (default) | Single-user, file-based, zero setup |
| SQLite | `--storage sqlite --db-path cortex.db` | Embedded SQL, concurrent reads, migrations |
| PostgreSQL | `--storage postgres --db-url "dbname=cortex"` | Production deployments, multi-instance |
```bash
# JSON (default — just works)
cortex serve context.json
# SQLite
cortex serve context.json --storage sqlite --db-path ./data/cortex.db
# PostgreSQL
cortex serve context.json --storage postgres --db-url "host=localhost dbname=cortex_prod user=cortex"
```
All backends share the same `StorageBackend` interface. SQLite and PostgreSQL include automatic schema migrations. PostgreSQL requires the optional `psycopg` dependency (`pip install cortex-identity[postgres]`).
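A backend-agnostic contract of this kind usually looks something like the following; the method names are illustrative guesses, not the actual `StorageBackend` definition:

```python
from abc import ABC, abstractmethod

class StorageBackend(ABC):
    """Minimal illustrative contract shared by JSON / SQLite / PostgreSQL
    backends. Method names are guesses, not Cortex's real API."""

    @abstractmethod
    def load_graph(self) -> dict: ...

    @abstractmethod
    def save_graph(self, graph: dict) -> None: ...

    @abstractmethod
    def close(self) -> None: ...

class InMemoryBackend(StorageBackend):
    """Toy backend used here only to show the interface in action."""

    def __init__(self) -> None:
        self._graph: dict = {"nodes": [], "edges": []}

    def load_graph(self) -> dict:
        return self._graph

    def save_graph(self, graph: dict) -> None:
        self._graph = graph

    def close(self) -> None:
        pass
```

Because the server only talks to the abstract interface, swapping backends is a CLI flag rather than a code change.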
</details>
<details>
<summary><strong>SDKs</strong></summary>
### Python SDK
Built-in Python client for the CaaS API:
```python
from cortex.sdk.client import CortexClient
client = CortexClient("http://localhost:8421", token="your-grant-token")
context = client.get_context()
nodes = client.list_nodes(limit=10)
stats = client.get_stats()
```
### TypeScript SDK
Zero-dependency TypeScript/JavaScript client (`@cortex-ai/sdk`):
```typescript
import { CortexClient } from '@cortex-ai/sdk';
const client = new CortexClient({
baseUrl: 'http://localhost:8421',
token: 'your-grant-token',
});
const context = await client.getContext();
const nodes = await client.listNodes({ limit: 10 });
const stats = await client.getStats();
```
- **Zero runtime dependencies** — native `fetch`, no axios/node-fetch
- **ESM + CJS dual build** — works in Node.js, Deno, Bun, and bundlers
- **Full TypeScript types** — all request/response types exported
- **33 tests** via `node:test`
Install: `npm install @cortex-ai/sdk`
</details>
<details>
<summary><strong>Security & Operations</strong></summary>
### Role-Based Access Control (RBAC)
4 roles with 10 permission scopes:
| Role | Permissions |
|------|-------------|
| **owner** | All scopes — full control |
| **admin** | Manage grants, webhooks, policies, credentials |
| **editor** | Read + write context and versions |
| **viewer** | Read-only context and versions |
### OAuth 2.0 / OIDC
```bash
cortex serve context.json \
--oauth-provider google CLIENT_ID CLIENT_SECRET \
--oauth-allowed-email you@example.com
```
Supports any OAuth 2.0 / OpenID Connect provider. Token exchange endpoint converts OAuth tokens into Cortex grant tokens.
### Rate Limiting
Token-bucket rate limiter per client IP. Configurable burst and sustained rates.
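A minimal token bucket of the kind described works like this; an illustrative sketch, not the server's implementation:

```python
import time

class TokenBucket:
    """Allow `rate` requests/second sustained, with bursts up to
    `capacity`. One bucket per client IP in the scheme above."""

    def __init__(self, rate: float, capacity: float) -> None:
        self.rate = rate
        self.capacity = capacity
        self.tokens = capacity
        self.last = time.monotonic()

    def allow(self) -> bool:
        # Refill proportionally to elapsed time, capped at capacity.
        now = time.monotonic()
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1.0:
            self.tokens -= 1.0
            return True
        return False
```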
### Field-Level Encryption
AES-256-GCM encryption for sensitive node properties at rest. Key management via identity keychain.
### Audit Ledger
Hash-chained SHA-256 audit log. Every mutation is recorded with actor, timestamp, and cryptographic chain integrity. Tamper-evident — `GET /audit/verify` checks the full chain.
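Hash chaining means each entry's hash covers the previous hash, so verification is a single pass; the field layout below is an illustrative guess, not Cortex's ledger format:

```python
import hashlib
import json

GENESIS = "0" * 64

def entry_hash(prev_hash: str, record: dict) -> str:
    """Hash this record together with its predecessor's hash, so any
    edit to an old entry invalidates every later link."""
    body = json.dumps(record, sort_keys=True).encode("utf-8")
    return hashlib.sha256(prev_hash.encode("utf-8") + body).hexdigest()

def verify_chain(entries: list) -> bool:
    """Recompute every link of the ledger from the genesis value."""
    prev = GENESIS
    for e in entries:
        if e["hash"] != entry_hash(prev, e["record"]):
            return False
        prev = e["hash"]
    return True
```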
### CSRF / SSRF Protection
- CSRF tokens for dashboard session endpoints
- SSRF guards on webhook delivery (blocks private IP ranges)
- CORS configuration via `--allowed-origins`
### Prometheus Metrics
```bash
cortex serve context.json --enable-metrics
# GET /metrics → Prometheus-format counters
```
9 custom metrics: request latency histograms, webhook delivery counts, SSE connection gauges, storage operation counters, error rates, and more.
</details>
<details>
<summary><strong>Deployment</strong></summary>
Production deployment configs included in `deploy/`:
### Docker
```bash
docker build -t cortex .
docker run -p 8421:8421 -v cortex-data:/data cortex
```
### Docker Compose
```bash
docker compose up -d # Cortex + PostgreSQL + Caddy
```
### systemd
```bash
sudo cp deploy/cortex.service /etc/systemd/system/
sudo systemctl enable --now cortex
```
### Reverse Proxy
Caddy and nginx configs included:
```bash
# Caddy (auto-TLS)
cp deploy/Caddyfile /etc/caddy/Caddyfile
# nginx
cp deploy/nginx.conf /etc/nginx/conf.d/cortex.conf
```
### INI Configuration
```ini
# deploy/cortex.ini
[server]
port = 8421
storage = postgres
db_url = host=localhost dbname=cortex_prod
[security]
allowed_origins = https://yourdomain.com
enable_sse = true
enable_metrics = true
```
```bash
cortex serve context.json --config deploy/cortex.ini
```
See [`docs/deployment.md`](docs/deployment.md) for the full deployment guide.
</details>
<details>
<summary><strong>Temporal Tracking & Contradictions</strong></summary>
Every extraction snapshots each node's state. Cortex tracks how your identity evolves, detects contradictions ("said X in January, not-X in March"), and computes drift scores across time windows.
```bash
cortex timeline context.json --format html
cortex contradictions context.json --severity 0.5
cortex drift context.json --compare previous.json
```
### Conflict Detection
```
Input: "I use Python daily" + "I don't use Python anymore"
Result: negation_conflict detected, resolution: prefer_negation (more recent)
```
### Typed Relationships
```
Input: "We partner with Mayo Clinic. Dr. Smith is my mentor."
Result: Mayo Clinic (partner), Dr. Smith (mentor)
```
Supported types: `partner`, `mentor`, `advisor`, `investor`, `client`, `competitor`
</details>
<details>
<summary><strong>Visualization & Dashboard</strong></summary>
```bash
cortex viz context.json --output graph.html # Interactive HTML
cortex viz context.json --output graph.svg --format svg # Static SVG
cortex dashboard context.json --port 8420 # Web dashboard
cortex watch ~/exports/ --graph context.json # Auto-extract
cortex sync-schedule --config sync_config.json # Scheduled sync
```
The CaaS server includes a built-in admin dashboard at `/dashboard` with session-based authentication and optional OAuth login.
</details>
<details>
<summary><strong>Coding Tool Extraction</strong></summary>
Extract identity from what you *actually do*, not just what you say. Coding sessions reveal your real tech stack, tools, and workflow through behavior:
```bash
cortex extract-coding --discover -o coding_context.json
cortex extract-coding --discover --project chatbot-memory
cortex extract-coding --discover --merge context.json -o context.json
cortex extract-coding --discover --enrich --stats
```
| Signal | How | Example |
|--------|-----|---------|
| Languages | File extensions | Editing `.py` files -> Python |
| Frameworks | Config files | `package.json` -> Node.js |
| CLI tools | Bash commands | Running `pytest` -> Pytest |
| Projects | Working directory | `/home/user/myapp` -> myapp |
| Patterns | Tool sequence | Uses plan mode before coding |
**Project enrichment** (`--enrich`): Reads README, package manifests, and LICENSE files to extract project metadata. Detects CI/CD and Docker presence.
Currently supports **Claude Code** (JSONL transcripts). Cursor and Copilot parsers planned.
</details>
<details>
<summary><strong>Auto-Inject & Cross-Platform Context</strong></summary>
### Auto-Inject into Claude Code
Every new session automatically gets your Cortex identity injected:
```bash
cortex context-hook install context.json
cortex context-hook test
cortex context-export context.json --policy technical
```
### Cross-Platform Context Writer
Write persistent Cortex identity to every AI coding tool:
```bash
cortex context-write graph.json --platforms all --project ~/myproject
cortex context-write graph.json --platforms cursor copilot windsurf
cortex context-write graph.json --platforms all --dry-run
cortex context-write graph.json --platforms all --watch
```
| Platform | Config File | Scope |
|----------|------------|-------|
| Claude Code | `~/.claude/MEMORY.md` | Global |
| Claude Code (project) | `{project}/.claude/MEMORY.md` | Project |
| Cursor | `{project}/.cursor/rules/cortex.mdc` | Project |
| GitHub Copilot | `{project}/.github/copilot-instructions.md` | Project |
| Windsurf | `{project}/.windsurfrules` | Project |
| Gemini CLI | `{project}/GEMINI.md` | Project |
Uses `<!-- CORTEX:START -->` / `<!-- CORTEX:END -->` markers — your hand-written rules are never overwritten.
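Marker-based rewriting works by replacing only the delimited region; a sketch of the approach, not the actual writer code:

```python
import re

START, END = "<!-- CORTEX:START -->", "<!-- CORTEX:END -->"

def write_managed_block(existing: str, block: str) -> str:
    """Replace only the marker-delimited region, leaving hand-written
    content untouched; append a new region if none exists yet."""
    managed = f"{START}\n{block}\n{END}"
    pattern = re.compile(re.escape(START) + r".*?" + re.escape(END), re.DOTALL)
    if pattern.search(existing):
        return pattern.sub(lambda _: managed, existing)
    return existing.rstrip("\n") + "\n\n" + managed + "\n"
```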
</details>
<details>
<summary><strong>Continuous Extraction</strong></summary>
Watch Claude Code sessions in real-time. Auto-extract behavioral signals as you code:
```bash
cortex extract-coding --watch -o coding_context.json
cortex extract-coding --watch -o ctx.json --context-refresh claude-code cursor copilot
cortex extract-coding --watch --project chatbot-memory -o ctx.json
cortex extract-coding --watch --interval 15 --settle 10 -o ctx.json
```
Polls for session changes, debounces active writes, and incrementally merges nodes.
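The debounce step, waiting until a session file has stopped changing before parsing it, can be sketched as follows (illustrative only; the real watcher also polls and merges incrementally):

```python
import os
import time

def is_settled(path: str, settle_seconds: float = 10.0) -> bool:
    """True once the file's mtime is at least `settle_seconds` old,
    i.e. the session is no longer being actively written."""
    try:
        mtime = os.path.getmtime(path)
    except OSError:
        return False  # missing or unreadable file is never "settled"
    return (time.time() - mtime) >= settle_seconds
```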
</details>
<details>
<summary><strong>PII Redaction</strong></summary>
Strip sensitive data before extraction:
```bash
cortex chatgpt-export.zip --to claude --redact
cortex chatgpt-export.zip --to claude --redact --redact-patterns custom.json
```
Redacts: emails, phones, SSNs, credit cards, API keys, IP addresses, street addresses.
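Pattern-based redaction boils down to regex substitution over the text before extraction. Two illustrative patterns are shown below; the shipped pattern set is more thorough:

```python
import re

# Rough approximations of two built-in pattern classes (emails,
# US-style phone numbers); not the exact regexes Cortex ships.
PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "phone": re.compile(r"\b\d{3}[-.\s]?\d{3}[-.\s]?\d{4}\b"),
}

def redact(text: str) -> str:
    for name, pattern in PATTERNS.items():
        text = pattern.sub(f"[REDACTED:{name}]", text)
    return text
```

Custom pattern files (`--redact-patterns`) extend this table with user-supplied regexes.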
</details>
---
<details>
<summary><strong>Supported Platforms</strong></summary>
### Input (Extract From)
| Platform | File Type | Auto-Detected |
|----------|-----------|---------------|
| ChatGPT | `.zip` with `conversations.json` | Yes |
| Claude | `.json` with messages array | Yes |
| Claude Memories | `.json` array with `text` field | Yes |
| Gemini / AI Studio | `.json` with conversations/turns | Yes |
| Perplexity | `.json` with threads | Yes |
| API Logs | `.json` with requests array | Yes |
| JSONL | `.jsonl` (one message per line) | Yes |
| Claude Code | `.jsonl` session transcripts | Yes |
| Plain Text | `.txt`, `.md` | Yes |
### Output (Export To)
| Format | Output | Use Case |
|--------|--------|----------|
| Claude Preferences | `claude_preferences.txt` | Settings > Profile |
| Claude Memories | `claude_memories.json` | memory_user_edits |
| System Prompt | `system_prompt.txt` | Any LLM API |
| Notion Page | `notion_page.md` | Notion import |
| Notion Database | `notion_database.json` | Notion DB rows |
| Google Docs | `google_docs.html` | Google Docs paste |
| Summary | `summary.md` | Human overview |
| Full JSON | `full_export.json` | Lossless backup |
### Extraction Categories
Cortex extracts entities into 17 tag categories:
| Category | Examples |
|----------|----------|
| Identity | Name, credentials (MD, PhD) |
| Professional Context | Role, title, company |
| Business Context | Company, products, metrics |
| Active Priorities | Current projects, goals |
| Relationships | Partners, clients, collaborators |
| Technical Expertise | Languages, frameworks, tools |
| Domain Knowledge | Healthcare, finance, AI/ML |
| Market Context | Competitors, industry trends |
| Metrics | Revenue, users, timelines |
| Constraints | Budget, timeline, team size |
| Values | Principles, beliefs |
| Negations | What you explicitly avoid |
| User Preferences | Style and tool preferences |
| Communication Preferences | Response style preferences |
| Correction History | Self-corrections |
| Mentions | Catch-all for other entities |
</details>
<details>
<summary><strong>Full CLI Reference (28 commands)</strong></summary>
### Extract & Import
```bash
cortex <export> --to <platform> -o ./output # One-step migrate
cortex extract <export> -o context.json # Extract only
cortex import context.json --to <platform> # Import only
cortex extract new.json --merge old.json -o merged.json # Merge contexts
```
### Query & Intelligence
```bash
cortex query <graph> --node <label> # Find node
cortex query <graph> --neighbors <label> # Find neighbors
cortex query <graph> --category <tag> # Filter by tag
cortex query <graph> --path <from> <to> # Shortest path
cortex query <graph> --strongest <n> # Top N nodes
cortex query <graph> --weakest <n> # Bottom N nodes
cortex query <graph> --isolated # Unconnected nodes
cortex query <graph> --components # Connected clusters
cortex gaps <graph> # Gap analysis
cortex digest <graph> --previous <old> # Weekly digest
cortex stats <graph> # Graph statistics
```
### Identity & Sync
```bash
cortex identity --init --name <name> # Create identity
cortex commit <graph> -m <message> # Version commit
cortex log # Version history
cortex identity --show # Show identity
cortex sync <graph> --to <platform> --policy <name> # Push to platform
```
### Visualization & Flywheel
```bash
cortex viz <graph> --output graph.html # Interactive HTML
cortex viz <graph> --output graph.svg --format svg # Static SVG
cortex dashboard <graph> --port 8420 # Web dashboard
cortex watch <dir> --graph <graph> # Auto-extract
cortex sync-schedule --config <config.json> # Scheduled sync
```
### Coding Tool Extraction
```bash
cortex extract-coding <session.jsonl> # From specific file
cortex extract-coding --discover # Auto-find sessions
cortex extract-coding --discover -p <project> # Filter by project
cortex extract-coding --discover -m <context> # Merge with existing
cortex extract-coding --discover --stats # Show session stats
cortex extract-coding --discover --enrich # Enrich with project files
cortex extract-coding --watch -o ctx.json # Watch mode (continuous)
cortex extract-coding --watch --context-refresh claude-code cursor # Watch + auto-refresh
```
### Context Hook (Auto-Inject)
```bash
cortex context-hook install <graph> --policy technical # Install hook
cortex context-hook uninstall # Remove hook
cortex context-hook test # Preview injection
cortex context-hook status # Check status
cortex context-export <graph> --policy technical # One-shot export
```
### Cross-Platform Context Writer
```bash
cortex context-write <graph> --platforms all --project <dir> # All platforms
cortex context-write <graph> --platforms cursor copilot # Specific platforms
cortex context-write <graph> --platforms all --dry-run # Preview
cortex context-write <graph> --platforms all --watch # Auto-refresh
cortex context-write <graph> --platforms all --policy professional # Policy override
```
### Context-as-a-Service (CaaS)
```bash
cortex serve <graph> --port 8421 # Start CaaS server
cortex serve <graph> --storage postgres --db-url "dbname=cortex" # PostgreSQL backend
cortex serve <graph> --config deploy/cortex.ini # INI config file
cortex serve <graph> --enable-sse --enable-metrics # SSE + Prometheus
cortex grant --create --audience <name> # Create access token
cortex grant --list # List grants
cortex grant --revoke <grant_id> # Revoke grant
cortex policy --list # List disclosure policies
cortex policy --create --name <name> --include-tags <tags> # Create custom policy
cortex rotate # Rotate identity keys
```
### Temporal Analysis
```bash
cortex timeline <graph> --format html # Timeline view
cortex contradictions <graph> --severity 0.5 # Find conflicts
cortex drift <graph> --compare previous.json # Identity drift
```
</details>
<details>
<summary><strong>Architecture</strong></summary>
```
cortex-identity/ # pip install cortex-identity
├── pyproject.toml # Package metadata + entry points
├── Dockerfile # Container build
├── docker-compose.yml # Multi-service deployment
├── spec/
│ ├── upai-v1.0.md # UPAI protocol specification (RFC-style)
│ └── openapi.json # OpenAPI 3.1 CaaS API specification
├── deploy/
│ ├── cortex.ini # INI configuration template
│ ├── cortex.service # systemd unit file
│ ├── Caddyfile # Caddy reverse proxy (auto-TLS)
│ ├── nginx.conf # nginx reverse proxy
│ └── .env.example # Environment variable template
├── docs/
│ ├── architecture.md # System architecture
│ ├── deployment.md # Deployment guide
│ ├── user-guide.md # User documentation
│ ├── tutorial.md # Getting started
│ └── overview.md # Project overview
├── sdk/
│ └── typescript/ # @cortex-ai/sdk (TypeScript)
│ ├── src/
│ │ ├── client.ts # CaaS API client
│ │ ├── types.ts # Request/response types
│ │ ├── errors.ts # Error classes
│ │ └── index.ts # Public exports
│ └── package.json # Zero runtime deps, ESM+CJS
├── cortex/ # 75 source files
│ ├── cli.py # CLI entry point (28 subcommands)
│ ├── extract_memory.py # Extraction engine (~1400 LOC)
│ ├── import_memory.py # Import/export engine (~1000 LOC)
│ ├── graph.py # Node, Edge, CortexGraph (schema 6.0)
│ ├── compat.py # v4 <-> v5 conversion
│ ├── temporal.py # Snapshots, drift scoring
│ ├── contradictions.py # Contradiction detection
│ ├── timeline.py # Timeline views
│ ├── upai/ # Universal Personal AI Protocol (14 files)
│ │ ├── identity.py # did:key identity, Ed25519/HMAC, SignedEnvelope
│ │ ├── disclosure.py # Selective disclosure policies
│ │ ├── versioning.py # Git-like version control
│ │ ├── schemas.py # JSON Schema validation (stdlib-only)
│ │ ├── tokens.py # Signed grant tokens (Ed25519)
│ │ ├── keychain.py # Key rotation & revocation chain
│ │ ├── errors.py # Structured error codes (UPAI-4xxx/5xxx)
│ │ ├── pagination.py # Cursor-based pagination
│ │ ├── webhooks.py # HMAC-SHA256 webhook signing
│ │ ├── credentials.py # Verifiable credentials (W3C-style)
│ │ ├── discovery.py # Service discovery endpoint
│ │ ├── backup.py # AES-256-GCM encrypted backup
│ │ └── rbac.py # Role-based access control
│ ├── caas/ # Context-as-a-Service (25 files)
│ │ ├── server.py # HTTP API server (40+ endpoints)
│ │ ├── storage.py # StorageBackend interface
│ │ ├── sqlite_store.py # SQLite backend + migrations
│ │ ├── postgres_store.py # PostgreSQL backend
│ │ ├── config.py # INI file configuration
│ │ ├── oauth.py # OAuth 2.0 / OIDC integration
│ │ ├── sse.py # Server-Sent Events
│ │ ├── event_buffer.py # SSE replay (Last-Event-ID)
│ │ ├── rate_limit.py # Token-bucket rate limiter
│ │ ├── webhook_worker.py # Async webhook delivery
│ │ ├── circuit_breaker.py # Webhook circuit breaker
│ │ ├── dead_letter.py # Dead-letter queue
│ │ ├── audit_ledger.py # Hash-chained audit log
│ │ ├── sqlite_audit_ledger.py
│ │ ├── postgres_audit_ledger.py
│ │ ├── caching.py # HTTP caching (ETags, Cache-Control)
│ │ ├── correlation.py # Request correlation IDs
│ │ ├── encryption.py # Field-level AES-256-GCM
│ │ ├── metrics.py # Prometheus metrics
│ │ ├── instrumentation.py # Observability hooks
│ │ ├── logging_config.py # Structured logging
│ │ ├── migrations.py # Schema migrations
│ │ ├── shutdown.py # Graceful shutdown
│ │ └── dashboard/ # Admin dashboard (auth + static)
│ ├── sdk/ # Python SDK client
│ │ ├── client.py # CaaS API client
│ │ └── exceptions.py # SDK error types
│ ├── adapters.py # Claude/SystemPrompt/Notion/GDocs adapters
│ ├── edge_extraction.py # Pattern-based + proximity edge discovery
│ ├── cooccurrence.py # PMI / frequency co-occurrence
│ ├── dedup.py # Graph-aware deduplication
│ ├── centrality.py # Degree centrality + PageRank
│ ├── query.py # QueryEngine + graph algorithms
│ ├── intelligence.py # Gap analysis + weekly digest
│ ├── coding.py # Coding session behavioral extraction
│ ├── hooks.py # Auto-inject context into Claude Code
│ ├── context.py # Cross-platform context writer (6 platforms)
│ ├── continuous.py # Real-time session watcher
│ ├── _hook.py # cortex-hook entry point
│ ├── __main__.py # python -m cortex support
│ ├── viz/ # Visualization (renderer + layout)
│ ├── dashboard/ # Local web dashboard
│ └── sync/ # File watcher + scheduled sync
├── migrate.py # Backward-compat stub → cortex.cli
├── cortex-hook.py # Backward-compat stub → cortex._hook
└── tests/ # 1,710 tests across 62 files
```
</details>
<details>
<summary><strong>Version History</strong></summary>
| Version | Milestone |
|---------|-----------|
| v1.2.0 | **Production Hardening + SDKs + PostgreSQL** — RBAC (4 roles, 10 scopes), hash-chained audit ledger, HTTP caching (ETags), webhook resilience (circuit breaker, dead-letter queue), SSE with replay, OAuth 2.0/OIDC, field-level encryption, rate limiting, CSRF/SSRF protection, structured logging, graceful shutdown, verifiable credentials, encrypted backup, service discovery, custom disclosure policies, graph CRUD API, admin dashboard, Docker/systemd/Caddy/nginx deployment, INI config, SQLite storage backend, PostgreSQL storage backend, Python SDK, TypeScript SDK (`@cortex-ai/sdk`), Prometheus metrics (9 custom metrics). 28 CLI commands. 1,710 tests. |
| v1.1.0 | **UPAI Open Standard + CaaS API** — W3C `did:key` identity, signed envelopes with replay protection, signed grant tokens, key rotation chain, Context-as-a-Service HTTP API (18 endpoints), JSON Schema validation, structured error codes, cursor-based pagination, webhook signing, OpenAPI 3.1 spec, RFC-style protocol spec. 27 CLI commands. 796 tests. |
| v1.0.0 | **First public release** — 24 CLI commands, knowledge graph, UPAI protocol, temporal tracking, coding extraction, cross-platform context, continuous extraction, visualization, dashboard. 618 tests. Zero required dependencies. |
<details>
<summary>Pre-release development history</summary>
| Internal | Milestone |
|----------|-----------|
| v6.4 (dev) | pip packaging, continuous extraction, production hardening |
| v6.3 (dev) | Cross-platform context writer |
| v6.2 (dev) | Auto-inject context |
| v6.1 (dev) | Coding tool extraction |
| v6.0 (dev) | Visualization, dashboard, file monitor, sync scheduler |
| v5.4 (dev) | Query engine, gap analysis, weekly digest |
| v5.3 (dev) | Smart edge extraction, co-occurrence, centrality, dedup |
| v5.2 (dev) | UPAI Protocol — cryptographic signing, selective disclosure, version control |
| v5.1 (dev) | Temporal snapshots, contradiction engine, drift scoring |
| v5.0 (dev) | Graph foundation — category-agnostic nodes, edges |
| v4.x (dev) | PII redaction, typed relationships, Notion/Google Docs, semantic dedup |
</details>
</details>
---
## License
MIT — See [LICENSE](LICENSE)
## Author
Created by [@Junebugg1214](https://github.com/Junebugg1214)
| text/markdown | Junebugg1214 | null | null | null | null | ai-identity, knowledge-graph, chatgpt, claude, memory, portable | [
"Development Status :: 4 - Beta",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: 3.13",
"Topic :: Scientific/Engineering :: Artificial Intelligence"
] | [] | null | null | >=3.10 | [] | [] | [] | [
"pynacl>=1.5.0; extra == \"crypto\"",
"numpy>=1.24.0; extra == \"fast\"",
"psycopg[binary]>=3.1; extra == \"postgres\"",
"pynacl>=1.5.0; extra == \"full\"",
"numpy>=1.24.0; extra == \"full\"",
"psycopg[binary]>=3.1; extra == \"full\"",
"pytest>=7.0; extra == \"dev\"",
"pynacl>=1.5.0; extra == \"dev\""
] | [] | [] | [] | [
"Homepage, https://github.com/Junebugg1214/Cortex-AI",
"Repository, https://github.com/Junebugg1214/Cortex-AI",
"Issues, https://github.com/Junebugg1214/Cortex-AI/issues"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-21T03:06:36.856922 | cortex_identity-1.2.1.tar.gz | 359,432 | 18/34/ac104a3f4e0cbd6a4ddc013dd4a61373ce129d4707667b08ec604ce1c6c6/cortex_identity-1.2.1.tar.gz | source | sdist | null | false | 794d7d44f86e22f268f1d455b0f94f20 | c10cc00256588545da8e2a112271c084d77ac1c25f193c20b37bacb1856e0023 | 1834ac104a3f4e0cbd6a4ddc013dd4a61373ce129d4707667b08ec604ce1c6c6 | MIT | [
"LICENSE"
] | 229 |
2.1 | odoo-addon-hr-employee-id | 18.0.1.0.0.4 | Employee ID | .. image:: https://odoo-community.org/readme-banner-image
:target: https://odoo-community.org/get-involved?utm_source=readme
:alt: Odoo Community Association
===========
Employee ID
===========
..
!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
!! This file is generated by oca-gen-addon-readme !!
!! changes will be overwritten. !!
!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
!! source digest: sha256:f3589618264e893809dcc1cd240603080f2c43e84f79c4c2bf2d9d538cc6228e
!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
.. |badge1| image:: https://img.shields.io/badge/maturity-Beta-yellow.png
:target: https://odoo-community.org/page/development-status
:alt: Beta
.. |badge2| image:: https://img.shields.io/badge/license-AGPL--3-blue.png
:target: http://www.gnu.org/licenses/agpl-3.0-standalone.html
:alt: License: AGPL-3
.. |badge3| image:: https://img.shields.io/badge/github-OCA%2Fhr-lightgray.png?logo=github
:target: https://github.com/OCA/hr/tree/18.0/hr_employee_id
:alt: OCA/hr
.. |badge4| image:: https://img.shields.io/badge/weblate-Translate%20me-F47D42.png
:target: https://translation.odoo-community.org/projects/hr-18-0/hr-18-0-hr_employee_id
:alt: Translate me on Weblate
.. |badge5| image:: https://img.shields.io/badge/runboat-Try%20me-875A7B.png
:target: https://runboat.odoo-community.org/builds?repo=OCA/hr&target_branch=18.0
:alt: Try me on Runboat
|badge1| |badge2| |badge3| |badge4| |badge5|
Company-wide unique employee ID. Supports:
- Random ID Generation
- Sequence
When the sequence method is selected, the employee ID is generated
automatically from a predefined sequence.
Nevertheless, if you need a different ID in particular cases you can
pass a custom value for ``identificationid``: if you do, no automatic
generation happens.
**Table of contents**
.. contents::
:local:
Installation
============
To install this module, you need to:
- clone the branch 18.0 of the repository https://github.com/OCA/hr
- add the path to this repository in your configuration (addons-path)
- update the module list
- search for "Employee Identification Numbers" in your addons
- install the module
Configuration
=============
If you want to modify the format of the sequence, go to Settings ->
Technical -> Sequences & Identifiers -> Sequences and search for the
"Employee ID" sequence, where you modify its prefix and numbering
formats.
To configure the 'ID Generation Method', the '# of Digits' and the
'Sequence', activate the developer mode and go to Employees ->
Configuration -> Settings -> Employee Identifier
Usage
=====
When you create a new employee, the reference field is automatically
assigned the next number from the predefined sequence.
Known issues / Roadmap
======================
- When installing the module, the ID of existing employees is not
generated automatically
Bug Tracker
===========
Bugs are tracked on `GitHub Issues <https://github.com/OCA/hr/issues>`_.
In case of trouble, please check there if your issue has already been reported.
If you spotted it first, help us to smash it by providing a detailed and welcomed
`feedback <https://github.com/OCA/hr/issues/new?body=module:%20hr_employee_id%0Aversion:%2018.0%0A%0A**Steps%20to%20reproduce**%0A-%20...%0A%0A**Current%20behavior**%0A%0A**Expected%20behavior**>`_.
Do not contact contributors directly about support or help with technical issues.
Credits
=======
Authors
-------
* CorporateHub
* Michael Telahun Makonnen
* OpenSynergy Indonesia
* Camptocamp
Contributors
------------
- Michael Telahun Makonnen <mmakonnen@gmail.com>
- Adrien Peiffer (ACSONE) <adrien.peiffer@acsone.eu>
- Salton Massally (iDT Labs) <smassally@idtlabs.sl>
- Andhitia Rama (OpenSynergy Indonesia) <andhitia.r@gmail.com>
- Simone Orsi <simone.orsi@camptocamp.com>
- Serpent Consulting Services Pvt. Ltd. <support@serpentcs.com>
- `CorporateHub <https://corporatehub.eu/>`__
- Alexey Pelykh <alexey.pelykh@corphub.eu>
Maintainers
-----------
This module is maintained by the OCA.
.. image:: https://odoo-community.org/logo.png
:alt: Odoo Community Association
:target: https://odoo-community.org
OCA, or the Odoo Community Association, is a nonprofit organization whose
mission is to support the collaborative development of Odoo features and
promote its widespread use.
This module is part of the `OCA/hr <https://github.com/OCA/hr/tree/18.0/hr_employee_id>`_ project on GitHub.
You are welcome to contribute. To learn how please visit https://odoo-community.org/page/Contribute.
| text/x-rst | CorporateHub, Michael Telahun Makonnen, OpenSynergy Indonesia, Camptocamp, Odoo Community Association (OCA) | support@odoo-community.org | null | null | AGPL-3 | null | [
"Programming Language :: Python",
"Framework :: Odoo",
"Framework :: Odoo :: 18.0",
"License :: OSI Approved :: GNU Affero General Public License v3"
] | [] | https://github.com/OCA/hr | null | >=3.10 | [] | [] | [] | [
"odoo==18.0.*"
] | [] | [] | [] | [] | twine/6.2.0 CPython/3.12.3 | 2026-02-21T03:06:17.018835 | odoo_addon_hr_employee_id-18.0.1.0.0.4-py3-none-any.whl | 53,108 | 7a/7d/dd34b82c6c61976a9de1b3717e97722887947d098e45d003c19e55629c8d/odoo_addon_hr_employee_id-18.0.1.0.0.4-py3-none-any.whl | py3 | bdist_wheel | null | false | 991646177c562f008af3505d3de4e6ca | 4d7b6c505210f8dc9c439c01e2edea2018b9f2c48939e06e6f63583c9c17dd4a | 7a7ddd34b82c6c61976a9de1b3717e97722887947d098e45d003c19e55629c8d | null | [] | 86 |
2.4 | pychum | 0.1.0 | Input file generators for computational chemistry |
# Table of Contents
- [About](#orgf82c3d6)
- [Ecosystem Overview](#org1ef5b99)
- [Features](#orgd7f6835)
- [Supported Engines](#org6e22848)
- [Rationale](#org5ac5dff)
- [Usage](#orgde48b7e)
- [Development](#org60cabf4)
- [Adding ORCA blocks](#org4153b25)
- [Documentation](#org721896a)
- [Readme](#orga426038)
- [License](#org33508b4)
<a id="orgf82c3d6"></a>
# About

[](https://github.com/pypa/hatch)
A **pure-python** project to generate input files for various common
computational chemistry workflows. This means:
- Generating input structures for `jobflow` / Fireworks
- From unified `toml` inputs
This is a spin-off from `wailord` ([here](https://wailord.xyz)) which is meant to handle aggregated
runs in a specific workflow, while `pychum` is meant to generate **single runs**.
It is also a companion to `chemparseplot` ([here](https://github.com/haoZeke/chemparseplot)) which is meant to provide
uniform visualizations for the outputs of various computational chemistry
programs.
<a id="org1ef5b99"></a>
## Ecosystem Overview
`pychum` is part of the `rgpycrumbs` suite of interlinked libraries.

<a id="orgd7f6835"></a>
## Features
- Jobflow support
- Along with Fireworks
- Unit aware conversions
- Via `pint`
<a id="org6e22848"></a>
### Supported Engines
- NEB calculations
- ORCA
- EON
- Single point calculations
- ORCA
- EON
<a id="org5ac5dff"></a>
## Rationale
I needed to run a bunch of systems. `jobflow` / Fireworks / AiiDA were ideal,
until I realized only VASP is really well supported by them.
Also there were some minor problems with the ORCA input parser…
- It chokes on multiple `#` symbols, so `# MaxIter 50 # something` will error
out on `SOMETHING`
- No real ordering or syntax highlighting in major IDEs
Along with other minor inconveniences which make for enough friction over time
to necessitate this library.
<a id="orgde48b7e"></a>
# Usage
The simplest usage is via the CLI:
    uv run pychum --help
    # Or alternatively
    python -m pychum.cli --help
<a id="org60cabf4"></a>
# Development
Before writing tests and incorporating the functions into the CLI, it is often
helpful to visualize the intermediate steps. For this we can set up a complete
development environment, including the notebook server.
    uv sync --all-extras
    uv run jupyter lab --ServerApp.allow_remote_access=1 \
      --ServerApp.open_browser=False --port=8889
Then go through the `nb` folder notebooks.
<a id="org4153b25"></a>
## Adding ORCA blocks
Changes are to be made in the following files under the `pychum/engine/orca/` folder:
- The relevant `.jinja` file in the `_blocks` directory
- The configuration loading mechanism in `config_loader.py`
- The `dataclasses` folder
- A sample test `.toml` file under `tests/test_orca`
While working on this, it may be instructive to use the `nb` folder notebooks.
All PRs must also include a full test suite for the new blocks.
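To illustrate the idea behind the `_blocks` templates and their `dataclasses`, here is a minimal, stdlib-only sketch of rendering an ORCA `%geom` block from a dataclass. The `GeomBlock` name and its `render` method are hypothetical stand-ins, not pychum's actual API; the real project renders `.jinja` templates instead of formatting strings by hand.

```python
from dataclasses import dataclass


@dataclass
class GeomBlock:
    """Hypothetical stand-in for a pychum ORCA block dataclass."""

    maxiter: int = 50

    def render(self) -> str:
        # ORCA blocks have the form "%name ... end"
        return f"%geom\n MaxIter {self.maxiter}\nend\n"


print(GeomBlock(maxiter=100).render())
```

In the real layout, the rendering logic lives in the Jinja template and the dataclass only carries the validated values loaded from the test `.toml` file.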
<a id="org721896a"></a>
## Documentation
<a id="orga426038"></a>
### Readme
The `readme` can be constructed via:
    ./scripts/org_to_md.sh readme_src.org readme.md
<a id="org33508b4"></a>
# License
MIT. However, this is an academic resource, so **please cite** as much as possible
via:
- The Zenodo DOI for general use.
- The `wailord` paper for ORCA usage
| text/markdown | null | Rohit Goswami <rgoswami@ieee.org> | null | null | MIT | compchem, config | [
"Development Status :: 4 - Beta",
"Programming Language :: Python",
"Programming Language :: Python :: 3 :: Only",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: Implementation :: CPython"
] | [] | null | null | >=3.10 | [] | [] | [] | [
"ase>=3.27.0",
"jinja2>=3.1.2",
"pint>=0.22",
"tomli>=2.0.1",
"mdit-py-plugins>=0.3.4; extra == \"doc\"",
"myst-nb>=1; extra == \"doc\"",
"myst-parser>=2; extra == \"doc\"",
"sphinx-autodoc2>=0.5; extra == \"doc\"",
"sphinx-copybutton>=0.5.2; extra == \"doc\"",
"sphinx-library>=1.1.2; extra == \"doc\"",
"sphinx-sitemap>=2.5.1; extra == \"doc\"",
"sphinx-togglebutton>=0.3.2; extra == \"doc\"",
"sphinx>=7.2.6; extra == \"doc\"",
"sphinxcontrib-apidoc>=0.4; extra == \"doc\"",
"ruff>=0.1.6; extra == \"lint\"",
"coverage[toml]>=7.3.2; extra == \"test\"",
"pytest-cov>=4.1.0; extra == \"test\"",
"pytest-datadir>=1.5.0; extra == \"test\"",
"pytest>=7.4.3; extra == \"test\""
] | [] | [] | [] | [
"Documentation, https://github.com/HaoZeke/pychum#readme",
"Issues, https://github.com/HaoZeke/pychum/issues",
"Source, https://github.com/HaoZeke/pychum"
] | uv/0.8.4 | 2026-02-21T03:06:14.935266 | pychum-0.1.0.tar.gz | 16,112 | f2/e9/ca412cc206b06e4988c676c2d997f8b0d54086671cd6be31691fc2fc3916/pychum-0.1.0.tar.gz | source | sdist | null | false | 196fdbdda789d7b9eaae2ff4c700126a | aad81d3405c212db8b6543e56104de70aee18a4fe41b6a59f5354c34c38f5df9 | f2e9ca412cc206b06e4988c676c2d997f8b0d54086671cd6be31691fc2fc3916 | null | [
"LICENSE"
] | 239 |
2.4 | atlas-redteam | 2.0.0a1.dev38 | Next-generation AWS Cloud Adversary Emulation platform | # Atlas
**AWS Cloud Adversary Emulation Platform**
---
> **⚠️ This tool is still under development.** APIs and behavior may change. Use with caution in production environments.
---
## Contributing Red Team Techniques
**We welcome contributions of new red team techniques.** If you have attack paths, privilege escalation methods, or AWS abuse techniques you'd like to add to Atlas, please open an issue or submit a pull request. The planner and attack graph are designed to be extended—see `src/atlas/planner/attack_graph.py` and `src/atlas/knowledge/data/api_detection_profiles.yaml` for how techniques are modeled.
---
## What is Atlas?
Atlas is a next-generation AWS cloud adversary emulation platform. It helps red teams and security researchers:
- **Discover** attack paths from a given identity (recon + attack graph)
- **Plan** multi-step privilege escalation chains
- **Simulate** execution without making AWS API calls
- **Execute** attack paths with configurable stealth and safety guardrails
- **Explain** attack paths with AI-powered or template-based explanations
---
## Requirements
- Python 3.12+
- AWS credentials configured (e.g. `~/.aws/credentials`)
---
## Installation
**Install from PyPI (recommended):**
```bash
pip install atlas-redteam
```
**Or with pipx (isolated environment, no venv needed):**
```bash
pipx install atlas-redteam
```
**Update to latest version:**
```bash
pip install --upgrade atlas-redteam
# or
pipx upgrade atlas-redteam
```
> **Note for maintainers:** To publish new versions so users get updates, see [docs/RELEASE.md](docs/RELEASE.md).
**For development (editable install):**
```bash
git clone https://github.com/Haggag-22/Atlas.git
cd Atlas
pip install -e ".[dev]"
```
---
## Quick Start
```bash
# Configure AWS profile
atlas config --profile my-profile --region us-east-1
# Run recon + planning (creates a case)
atlas plan --case mycase
# List attack paths and simulate
atlas simulate --case mycase --attack-path AP-01
# Explain an attack path
atlas explain --case mycase --attack-path AP-01
# Open the GUI
atlas gui --case mycase
```
---
## Commands
| Command | Description |
|--------|-------------|
| `atlas config` | Set or show AWS profile and region |
| `atlas plan` | Run reconnaissance + planning, save to `output/<case>/plan/` |
| `atlas simulate` | Simulate an attack path (no AWS calls) |
| `atlas run` | Execute an attack path (uses AWS) |
| `atlas cases` | List saved cases |
| `atlas delete-case` | Delete a saved case |
| `atlas explain` | Explain an attack path (AI or template) |
| `atlas gui` | Open the Streamlit web UI |
| `atlas inspect` | Inspect detection profiles for API actions |
---
## Output Structure
```
output/<case>/
├── case.json # Case metadata
├── plan/ # Recon + planning
│ ├── env_model.json
│ ├── attack_edges.json
│ ├── attack_paths.json
│ └── ...
├── sim/ # Simulation results (if run)
├── run/ # Execution results (if run)
└── explanations.json # Cached AI/template explanations
```
---
## Attack Techniques (Examples)
Atlas models techniques such as:
- Role assumption (`sts:AssumeRole`)
- Access key creation (`iam:CreateAccessKey`)
- Policy attachment (`iam:AttachUserPolicy`, `iam:AttachRolePolicy`)
- Inline policy injection (`iam:PutUserPolicy`, `iam:PutRolePolicy`)
- PassRole abuse (Lambda, etc.)
- Trust policy modification (`iam:UpdateAssumeRolePolicy`)
- Lambda code injection
- S3 read/write access
Detection costs and noise levels are derived from CloudTrail and GuardDuty profiles in `src/atlas/knowledge/`.
---
## Discovered Resources
The recon engine collects the following resource types (configurable via `recon.resource_types`):
| Resource | Service | Key Security Data |
|----------|---------|-------------------|
| S3 Buckets | S3 | Bucket policies, Public Access Block |
| EC2 Instances | EC2 | Instance profiles, IMDS config, security groups |
| Lambda Functions | Lambda | Execution roles, resource policies, environment variables |
| RDS Instances | RDS | Public accessibility, encryption, IAM auth, snapshots |
| KMS Keys | KMS | Key policies, grants, rotation status |
| Secrets Manager Secrets | Secrets Manager | Resource policies, rotation, KMS encryption |
| SSM Parameters | SSM | Parameter types (SecureString), KMS key IDs |
| CloudFormation Stacks | CloudFormation | Stack roles, capabilities, outputs |
---
## License
MIT
| text/markdown | Omar | null | null | null | null | aws, security, adversary-emulation, red-team, cloud-security | [
"Development Status :: 3 - Alpha",
"Intended Audience :: Information Technology",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: 3.13",
"Topic :: Security"
] | [] | null | null | >=3.12 | [] | [] | [] | [
"boto3>=1.35.0",
"aioboto3>=13.0.0",
"pydantic>=2.6.0",
"pydantic-settings>=2.2.0",
"networkx>=3.2",
"structlog>=24.1.0",
"typer>=0.12.0",
"rich>=13.7.0",
"pyyaml>=6.0.1",
"aiofiles>=24.1.0",
"streamlit>=1.40.0",
"pyvis>=0.3.2",
"openai>=1.30.0; extra == \"ai\"",
"pytest>=8.0.0; extra == \"dev\"",
"pytest-asyncio>=0.23.0; extra == \"dev\"",
"pytest-cov>=5.0.0; extra == \"dev\"",
"pytest-mock>=3.12.0; extra == \"dev\"",
"mypy>=1.8.0; extra == \"dev\"",
"ruff>=0.3.0; extra == \"dev\"",
"types-pyyaml>=6.0.0; extra == \"dev\"",
"types-aiofiles>=24.1.0; extra == \"dev\""
] | [] | [] | [] | [
"Homepage, https://github.com/Haggag-22/Atlas",
"Repository, https://github.com/Haggag-22/Atlas",
"Issues, https://github.com/Haggag-22/Atlas/issues"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-21T03:03:15.960228 | atlas_redteam-2.0.0a1.dev38.tar.gz | 299,717 | 36/25/6a92ae3f7acde6baa8651cfb2ec600e56fc4d617a506b44cc49aebac9694/atlas_redteam-2.0.0a1.dev38.tar.gz | source | sdist | null | false | fa06a16657cf08e257f7feae6ec037b8 | 8fa4e85f139fed9161f661e23c76a28d5e0ec9107a266d1b115e13ac4eb7ccaf | 36256a92ae3f7acde6baa8651cfb2ec600e56fc4d617a506b44cc49aebac9694 | MIT | [] | 213 |
2.4 | stac-auth-proxy | 1.0.1 | STAC authentication proxy with FastAPI | <div align="center">
<h1 style="font-family: monospace">stac auth proxy</h1>
<p align="center">Reverse proxy to apply auth*n to your STAC API.</p>
</div>
---
[![PyPI - Version][pypi-version-badge]][pypi-link]
[![GHCR - Version][ghcr-version-badge]][ghcr-link]
[![GHCR - Size][ghcr-size-badge]][ghcr-link]
[![codecov][codecov-badge]][codecov-link]
[![Tests][tests-badge]][tests-link]
STAC Auth Proxy is a proxy API that mediates between the client and your internally accessible STAC API to provide flexible authentication, authorization, and content-filtering mechanisms.
> [!IMPORTANT]
>
> **We would :heart: to hear from you!**
> Please [join the discussion](https://github.com/developmentseed/eoAPI/discussions/209) and let us know how you're using eoAPI! This helps us improve the project for you and others.
> If you prefer to remain anonymous, you can email us at eoapi@developmentseed.org, and we'll be happy to post a summary on your behalf.
## ✨Features✨
- **🔐 Authentication:** Apply [OpenID Connect (OIDC)](https://openid.net/developers/how-connect-works/) token validation and optional scope checks to specified endpoints and methods
- **🛂 Content Filtering:** Use CQL2 filters via the [Filter Extension](https://github.com/stac-api-extensions/filter?tab=readme-ov-file) to tailor API responses based on request context (e.g. user role)
- **🤝 External Policy Integration:** Integrate with external systems (e.g. [Open Policy Agent (OPA)](https://www.openpolicyagent.org/)) to generate CQL2 filters dynamically from policy decisions
- **🧩 Authentication Extension:** Add the [Authentication Extension](https://github.com/stac-extensions/authentication) to API responses to expose auth-related metadata
- **📘 OpenAPI Augmentation:** Enhance the [OpenAPI spec](https://swagger.io/specification/) with security details to keep auto-generated docs and UIs (e.g., [Swagger UI](https://swagger.io/tools/swagger-ui/)) accurate
- **🗜️ Response Compression:** Optimize response sizes using [`starlette-cramjam`](https://github.com/developmentseed/starlette-cramjam/)
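As a rough illustration of the content-filtering idea (this is not the proxy's configuration syntax, and `private` and `owner` are hypothetical item properties), a CQL2 text filter that limits a user to public items or their own items might look like:

```text
private = false OR owner = 'user-123'
```

In practice the `'user-123'` value would be filled in from the request context (e.g. a claim in the validated OIDC token), either via a template or by an external policy engine such as OPA.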
## Documentation
[Full documentation is available on the website](https://developmentseed.org/stac-auth-proxy).
Head to [Getting Started](https://developmentseed.org/stac-auth-proxy/user-guide/getting-started/) to dig in.
[pypi-version-badge]: https://badge.fury.io/py/stac-auth-proxy.svg
[pypi-link]: https://pypi.org/project/stac-auth-proxy/
[ghcr-version-badge]: https://ghcr-badge.egpl.dev/developmentseed/stac-auth-proxy/latest_tag?color=%2344cc11&ignore=latest&label=image+version&trim=
[ghcr-size-badge]: https://ghcr-badge.egpl.dev/developmentseed/stac-auth-proxy/size?color=%2344cc11&tag=latest&label=image+size&trim=
[ghcr-link]: https://github.com/developmentseed/stac-auth-proxy/pkgs/container/stac-auth-proxy
[codecov-badge]: https://codecov.io/gh/developmentseed/stac-auth-proxy/branch/main/graph/badge.svg
[codecov-link]: https://codecov.io/gh/developmentseed/stac-auth-proxy
[tests-badge]: https://github.com/developmentseed/stac-auth-proxy/actions/workflows/cicd.yaml/badge.svg
[tests-link]: https://github.com/developmentseed/stac-auth-proxy/actions/workflows/cicd.yaml
| text/markdown | null | Anthony Lukach <anthonylukach@gmail.com> | null | null | MIT License Copyright (c) 2024 Development Seed Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions: The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software. THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. | Authentication, FastAPI, Proxy, STAC | [
"License :: OSI Approved :: MIT License",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.8"
] | [] | null | null | >=3.10 | [] | [] | [] | [
"brotli>=1.1.0",
"cql2>=0.5.3",
"cryptography>=44.0.1",
"fastapi>=0.115.5",
"httpx[http2]>=0.28.0",
"jinja2>=3.1.6",
"pydantic-settings>=2.6.1",
"pyjwt>=2.10.1",
"starlette-cramjam>=0.4.0",
"starlette>=0.49.1",
"uvicorn>=0.32.1",
"griffe-fieldz>=0.3.0; extra == \"docs\"",
"griffe-inherited-docstrings>=1.1.1; extra == \"docs\"",
"markdown-gfm-admonition>=0.1.1; extra == \"docs\"",
"mkdocs-api-autonav>=0.3.0; extra == \"docs\"",
"mkdocs-material[imaging]>=9.6.16; extra == \"docs\"",
"mkdocs>=1.6.1; extra == \"docs\"",
"mkdocstrings[python]>=0.30.0; extra == \"docs\"",
"mangum>=0.19.0; extra == \"lambda\""
] | [] | [] | [] | [] | uv/0.10.4 {"installer":{"name":"uv","version":"0.10.4","subcommand":["publish"]},"python":null,"implementation":{"name":null,"version":null},"distro":{"name":"Ubuntu","version":"24.04","id":"noble","libc":null},"system":{"name":null,"release":null},"cpu":null,"openssl_version":null,"setuptools_version":null,"rustc_version":null,"ci":true} | 2026-02-21T02:59:30.506655 | stac_auth_proxy-1.0.1.tar.gz | 248,288 | a4/5b/2b04f15c444a314def7426a9c1bca4dba31e6f64dd4247ce55f82b2e6695/stac_auth_proxy-1.0.1.tar.gz | source | sdist | null | false | bdbea1074dba01172707a455b8e289d6 | dcc4d09a2119621745b5e418634de9c75e74669aeb5a445ee680285e29013be2 | a45b2b04f15c444a314def7426a9c1bca4dba31e6f64dd4247ce55f82b2e6695 | null | [
"LICENSE"
] | 246 |
2.4 | PEGGHy-Viewer | 1.1.1 | PEGGHy-Viewer is the viewer microservice of PEGGHy | # PEGGHy-Viewer
| text/markdown | null | Geode-solutions <team-web@geode-solutions.com> | null | null | null | null | [
"Programming Language :: Python :: 3",
"License :: OSI Approved :: MIT License",
"Operating System :: OS Independent"
] | [] | null | null | <3.13,>=3.9 | [] | [] | [] | [
"opengeodeweb-viewer==1.*,>=1.15.3"
] | [] | [] | [] | [
"Homepage, https://github.com/Geode-solutions/PEGGHy-Viewer",
"Bug Tracker, https://github.com/Geode-solutions/PEGGHy-Viewer/issues"
] | twine/6.0.1 CPython/3.12.12 | 2026-02-21T02:58:24.453604 | pegghy_viewer-1.1.1.tar.gz | 11,423 | 1c/b7/aea9bb8f1d5367ebd39c1ead5333925114e0f0ff4cc4fa9761d0c6fd171b/pegghy_viewer-1.1.1.tar.gz | source | sdist | null | false | 5c54da8ef6f06d7752f00d06c4f7dbed | 96690c0cf00b5da93b7f79aea315a29b5c5b8acf71f00ce7a05096e4e4340a03 | 1cb7aea9bb8f1d5367ebd39c1ead5333925114e0f0ff4cc4fa9761d0c6fd171b | null | [] | 0 |
2.4 | message-db-py | 0.3.2 | The Python interface to the MessageDB Event Store and Message Store | # message-db-py
Message DB is a fully-featured event store and message store implemented in
PostgreSQL for Pub/Sub, Event Sourcing, Messaging, and Evented Microservices
applications.
`message-db-py` is a Python interface to the Message DB event store and message
store, designed for easy integration into Python applications.
[](https://github.com/subhashb/message-db-py/actions)
[](https://codecov.io/gh/subhashb/message-db-py)
[](https://pypi.org/project/message-db-py/)
[](https://pypi.org/project/message-db-py/)
[](https://opensource.org/licenses/MIT)
## Installation
Use pip to install:
```shell
pip install message-db-py
```
## Setting up Message DB database
Clone the Message DB repository to set up the database:
```shell
git clone git@github.com:message-db/message-db.git
```
More detailed instructions are in the [Installation](https://github.com/message-db/message-db?tab=readme-ov-file#installation)
section of Message DB repo.
Running the database installation script creates the database, schema, table,
indexes, functions, views, types, and a user role, and limits the user's
privileges to the message store's public interface.
The installation script is in the database directory of the cloned Message DB
repo. Change directory to the message-db directory where you cloned the repo,
and run the script:
```shell
database/install.sh
```
Make sure that your default Postgres user has administrative privileges.
### Database Name
By default, the database creation tool will create a database named
`message_store`.
If you prefer a different database name, you can override it using the
`DATABASE_NAME` environment variable.
```shell
DATABASE_NAME=some_other_database database/install.sh
```
### Uninstalling the Database
If you need to drop the database (for example, on a local dev machine):
```bash
database/uninstall.sh
```
If you're upgrading a previous version of the database:
```bash
database/update.sh
```
## Docker Image
You can optionally use a Docker image with Message DB pre-installed and ready
to go. This is especially helpful to run test cases locally.
The docker image is available in [Docker Hub](https://hub.docker.com/r/ethangarofolo/message-db).
The source is in [Gitlab](https://gitlab.com/such-software/message-db-docker)
## Usage
The complete user guide for Message DB is available at
<http://docs.eventide-project.org/user-guide/message-db/>.
Below is documentation for methods exposed through the Python API.
### Quickstart
Here's a quick example of how to publish and read messages using Message-DB-py:
```python
from message_db import MessageDB
# Initialize the database connection
store = MessageDB(CONNECTION_URL)
# Write a message
store.write("user_stream", "register", {"name": "John Doe"})
# Read a message
message = store.read_last_message("user_stream")
print(message)
```
## Primary APIs
- [Write Messages](#write-messages)
- [Read Messages](#read-messages-from-a-stream-or-category)
- [Read Last Message from stream](#read-last-message-from-stream)
### Write messages
The `write` method is used to append a new message to a specified stream within
the message database. This method ensures that the message is written with the
appropriate type, data, and metadata, and optionally, at a specific expected
version of the stream.
```python
def write(
self,
stream_name: str,
message_type: str,
data: Dict,
metadata: Dict | None = None,
expected_version: int | None = None,
) -> int:
"""Write a message to a stream."""
```
**Parameters:**
- `stream_name` (`str`): The name of the stream to which the message will be
written. This identifies the logical series of messages.
- `message_type` (`str`): The type of message being written. Typically, this
reflects the nature of the event or data change the message represents.
- `data` (`Dict`): The data payload of the message. This should be a dictionary
containing the actual information the message carries.
- `metadata` (`Dict` | `None`): Optional. Metadata about the message, provided as a
dictionary. Metadata can include any additional information that is not part of
the main data payload, such as sender information or timestamps.
Defaults to None.
- `expected_version` (`int` | `None`): Optional. The version of the stream where the
client expects to write the message. This is used for concurrency control and
ensuring the integrity of the stream's order. Defaults to `None`.
**Returns:**
- `position` (`int`): The position (or version number) of the message in the
stream after it has been successfully written.
```python
message_db = MessageDB(connection_pool=my_pool)
stream_name = "user_updates"
message_type = "UserCreated"
data = {"user_id": 123, "username": "example"}
metadata = {"source": "web_app"}
position = message_db.write(stream_name, message_type, data, metadata)
print("Message written at position:", position)
```
---
### Read messages from a stream or category
The `read` method retrieves messages from a specified stream or category. This
method supports flexible query options through a direct SQL parameter or by
determining the SQL based on the stream name and its context
(stream vs. category vs. all messages).
```python
def read(
self,
stream_name: str,
sql: str | None = None,
position: int = 0,
no_of_messages: int = 1000,
) -> List[Dict[str, Any]]:
"""Read messages from a stream or category.
Returns a list of messages from the stream or category starting from the given position.
"""
```
**Parameters:**
- `stream_name` (`str`): The identifier for the stream or category from which
messages are to be retrieved. Special names like "$all" can be used to fetch
messages across all streams.
- `sql` (`str` | `None`, optional): An optional SQL query string that if
provided, overrides the default SQL generation based on the stream_name.
If None, the SQL is automatically generated based on the stream_name value.
Defaults to None.
- `position` (`int`, optional): The starting position in the stream or category
from which to begin reading messages. Defaults to 0.
- `no_of_messages` (`int`, optional): The maximum number of messages to
retrieve. Defaults to 1000.
**Returns:**
- `List[Dict[str, Any]]`: A list of messages, where each message is
represented as a dictionary containing details such as the message ID,
stream name, type, position, global position, data, metadata, and timestamp.
```python
message_db = MessageDB(connection_pool=my_pool)
stream_name = "user-updates"
position = 10
no_of_messages = 50
# Reading from a specific stream
messages = message_db.read(stream_name, position=position, no_of_messages=no_of_messages)
# Custom SQL query
custom_sql = "SELECT * FROM get_stream_messages(%(stream_name)s, %(position)s, %(batch_size)s);"
messages = message_db.read(stream_name, sql=custom_sql, position=position, no_of_messages=no_of_messages)
for message in messages:
print(message)
```
---
### Read Last Message from stream
The `read_last_message` method retrieves the most recent message from a
specified stream. This method is useful when you need the latest state or
event in a stream without querying the entire message history.
```python
def read_last_message(self, stream_name: str) -> Dict[str, Any] | None:
"""Read the last message from a stream."""
```
**Parameters:**
- `stream_name` (`str`): The name of the stream from which the last message is to be
retrieved.
**Returns:**
- `Dict`[`str`, `Any`] | `None`: A dictionary representing the last message
in the specified stream. If the stream is empty or the message does not exist,
`None` is returned.
```python
message_db = MessageDB(connection_pool=my_pool)
stream_name = "user_updates"
# Reading the last message from a stream
last_message = message_db.read_last_message(stream_name)
if last_message:
print("Last message data:", last_message)
else:
print("No messages found in the stream.")
```
---
## Utility APIs
- [Read Stream](#read-stream-utility)
- [Read Category](#read-category-utility)
- [Write Batch](#write-batch-utility)
### Read Stream (Utility)
The `read_stream` method retrieves a sequence of messages from a specified stream
within the message database. This method is specifically designed to fetch
messages from a well-defined stream based on a starting position and a
specified number of messages.
```python
def read_stream(
self, stream_name: str, position: int = 0, no_of_messages: int = 1000
) -> List[Dict[str, Any]]:
"""Read messages from a stream.
Returns a list of messages from the stream starting from the given position.
"""
```
**Parameters:**
- `stream_name` (`str`): The name of the stream from which messages are to be
retrieved. This name must include a hyphen (-) to be recognized as a valid
stream identifier.
- `position` (`int`, optional): The zero-based index position from which to start
reading messages. Defaults to 0, which starts reading from the beginning of
the stream.
- `no_of_messages` (`int`, optional): The maximum number of messages to retrieve
from the stream. Defaults to 1000.
**Returns:**
- `List`[`Dict`[`str`, `Any`]]: A list of dictionaries, each representing a message
retrieved from the stream. Each dictionary contains the message details
structured in key-value pairs.
**Exceptions:**
- `ValueError`: Raised if the provided stream_name does not contain a hyphen
(-), which is required to validate the name as a stream identifier.
```python
message_db = MessageDB(connection_pool=my_pool)
stream_name = "user-updates-2023"
position = 0
no_of_messages = 100
messages = message_db.read_stream(stream_name, position, no_of_messages)
for message in messages:
print(message)
```
---
### Read Category (Utility)
The `read_category` method retrieves a sequence of messages from a specified
category within the message database. It is designed to fetch messages based
on a category identifier, starting from a specific position, and up to a
defined limit of messages.
```python
def read_category(
self, category_name: str, position: int = 0, no_of_messages: int = 1000
) -> List[Dict[str, Any]]:
"""Read messages from a category.
Returns a list of messages from the category starting from the given position.
"""
```
**Parameters:**
- `category_name` (`str`): The name of the category from which messages are to be
retrieved. This identifier should not include a hyphen (-) to validate it as
a category name.
- `position` (`int`, optional): The zero-based index position from which to start
reading messages within the category. Defaults to 0.
- `no_of_messages` (`int`, optional): The maximum number of messages to retrieve
from the category. Defaults to 1000.
**Returns:**
- `List[Dict[str, Any]]`: A list of dictionaries, each representing a message.
Each dictionary includes details about the message such as the message ID,
stream name, type, position, global position, data, metadata, and time of
creation.
**Exceptions:**
- `ValueError`: Raised if the provided category_name contains a hyphen (-),
which is not allowed for category identifiers and implies a misunderstanding
between streams and categories.
```python
message_db = MessageDB(connection_pool=my_pool)
category_name = "user_updates"
position = 0
no_of_messages = 100
# Reading messages from a category
messages = message_db.read_category(category_name, position, no_of_messages)
for message in messages:
print(message)
```
---
### Write Batch (Utility)
The `write_batch` method is designed to write a series of messages to a
specified stream in a batch operation. It ensures atomicity in writing
operations, where all messages are written in sequence, and each subsequent
message can optionally depend on the position of the last message written.
This method is useful when multiple messages need to be written as a part of a
single transactional context.
```python
def write_batch(
self, stream_name, data, expected_version: int | None = None
) -> int:
"""Write a batch of messages to a stream."""
```
**Parameters:**
- `stream_name` (`str`): The name of the stream to which the batch of messages
will be written.
- `data` (`List`[`Tuple`[`str`, `Dict`, `Dict` | `None`]]): A list of tuples,
where each tuple represents a message. The tuple format is (message_type, data,
metadata), with metadata being optional.
- `expected_version` (`int` | `None`, optional): The version of the stream
where the batch operation expects to start writing. This can be used for
concurrency control to ensure messages are written in the expected order.
Defaults to None.
**Returns:**
- `position` (`int`): The position (or version number) of the last message
written in the stream as a result of the batch operation.
```python
message_db = MessageDB(connection_pool=my_pool)
stream_name = "order_events"
data = [
("OrderCreated", {"order_id": 123, "product_id": 456}, None),
("OrderShipped",
{"order_id": 123, "shipment_id": 789},
{"priority": "high"}
),
("OrderDelivered", {"order_id": 123, "delivery_date": "2024-04-23"}, None)
]
# Writing a batch of messages to a stream
last_position = message_db.write_batch(stream_name, data)
print(f"Last message written at position: {last_position}")
```
---
## Consumer Groups
Consumer groups enable horizontal scaling by distributing the processing load of a
single category among multiple consumers. This allows parallel processing of messages
while ensuring that each stream is processed by exactly one consumer in the group.
### How Consumer Groups Work
Consumer groups use **consistent hashing** to assign streams to consumers:
1. Each stream's cardinal ID (the part after the first hyphen) is hashed to a 64-bit integer
2. The hash is taken modulo the group size
3. The result determines which consumer processes that stream
4. The same stream always maps to the same consumer, ensuring consistency
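The four steps above can be sketched in plain Python. Note this is illustrative only: the hash function below is an assumption for the sketch, not the library's exact 64-bit hash.

```python
import hashlib


def consumer_for_stream(stream_name: str, group_size: int) -> int:
    """Illustrative stream-to-consumer assignment (not the library's exact hash)."""
    # Step 1: the cardinal ID is the part after the first hyphen
    cardinal_id = stream_name.split("-", 1)[1]
    # Step 2: hash the cardinal ID to a 64-bit integer
    h = int.from_bytes(hashlib.sha256(cardinal_id.encode()).digest()[:8], "big")
    # Steps 3-4: modulo by the group size deterministically picks one consumer
    return h % group_size


# The same stream always maps to the same consumer
assert consumer_for_stream("user_updates-123", 3) == consumer_for_stream(
    "user_updates-123", 3
)
```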
### Using Consumer Groups
To use consumer groups with `read_category`, specify both the `consumer_group_member`
(zero-based consumer identifier) and `consumer_group_size` (total number of consumers):
```python
message_db = MessageDB(connection_pool=my_pool)
category_name = "user_updates"
# Consumer 0 in a group of 3
messages_0 = message_db.read_category(
category_name,
consumer_group_member=0,
consumer_group_size=3
)
# Consumer 1 in a group of 3
messages_1 = message_db.read_category(
category_name,
consumer_group_member=1,
consumer_group_size=3
)
# Consumer 2 in a group of 3
messages_2 = message_db.read_category(
category_name,
consumer_group_member=2,
consumer_group_size=3
)
```
### Consumer Group Parameters
- `consumer_group_member` (`int` | `None`): Zero-based consumer identifier (0, 1, 2, etc.)
- Must be >= 0
- Must be less than `consumer_group_size`
- `consumer_group_size` (`int` | `None`): Total number of consumers in the group
- Must be > 0
**Important**: Both parameters must be provided together or both must be `None`.
Providing only one will raise a `ValueError`.
### Complete Example
Here's a complete example of using consumer groups for parallel processing:
```python
from message_db import MessageDB
import threading
# Initialize the database connection
message_db = MessageDB.from_url("postgresql://message_store@localhost:5432/message_store")
# Define a consumer function
def process_messages(consumer_id, group_size):
while True:
messages = message_db.read_category(
"user_updates",
position=get_last_processed_position(consumer_id),
no_of_messages=100,
consumer_group_member=consumer_id,
consumer_group_size=group_size
)
if not messages:
break
for message in messages:
# Process each message
print(f"Consumer {consumer_id} processing: {message['data']}")
# Update the last processed position
update_last_processed_position(consumer_id, messages[-1]['global_position'])
# Run 3 consumers in parallel
group_size = 3
threads = []
for consumer_id in range(group_size):
thread = threading.Thread(
target=process_messages,
args=(consumer_id, group_size)
)
thread.start()
threads.append(thread)
# Wait for all consumers to finish
for thread in threads:
thread.join()
```
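The example above calls two position-tracking helpers that are not part of `message_db`; a minimal in-memory stand-in (sufficient for a single process, but not durable across restarts) could look like:

```python
# In-memory stand-ins for the position-tracking helpers used above.
# In production, persist positions to a durable store so each consumer
# can resume from where it left off after a restart.
_positions: dict[int, int] = {}


def get_last_processed_position(consumer_id: int) -> int:
    # Start from position 0 when this consumer has no recorded progress
    return _positions.get(consumer_id, 0)


def update_last_processed_position(consumer_id: int, position: int) -> None:
    _positions[consumer_id] = position
```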
### Benefits of Consumer Groups
- **Horizontal Scaling**: Distribute processing across multiple instances
- **No Duplication**: Each stream is processed by exactly one consumer
- **Consistency**: The same stream always goes to the same consumer
- **Parallel Processing**: Multiple consumers can process different streams simultaneously
- **Fault Tolerance**: If a consumer fails, you can redistribute the group
### Best Practices
1. **Group Size**: Choose a group size that matches your processing capacity
2. **Position Tracking**: Each consumer should track its own position independently
3. **Error Handling**: Implement retry logic for failed message processing
4. **Monitoring**: Monitor each consumer's progress and lag separately
5. **Deployment**: Deploy consumers as separate processes or containers
---
## License
[MIT](https://github.com/subhashb/message-db-py/blob/main/LICENSE)
| text/markdown | Subhash Bhushan | subhash.bhushan@gmail.com | null | null | MIT | message-db, event-sourcing, event-store, messaging, cqrs, command-query-responsibility-segregation, events, streaming, database, postgresql, python, async, microservices, distributed-systems, message-queue, event-driven, real-time | [
"Development Status :: 4 - Beta",
"Intended Audience :: Developers",
"Intended Audience :: Information Technology",
"License :: OSI Approved :: MIT License",
"Operating System :: OS Independent",
"Programming Language :: Python",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: 3.13",
"Programming Language :: Python :: 3.14",
"Topic :: Database",
"Topic :: Database :: Database Engines/Servers",
"Topic :: Software Development :: Libraries",
"Topic :: Software Development :: Libraries :: Python Modules"
] | [] | null | null | <4.0,>=3.11 | [] | [] | [] | [
"psycopg2<3.0.0,>=2.9.11"
] | [] | [] | [] | [] | poetry/2.3.1 CPython/3.13.11 Darwin/25.2.0 | 2026-02-21T02:57:00.342433 | message_db_py-0.3.2-py3-none-any.whl | 11,492 | ba/24/94d5e4675af14e89f1c1b0fcc5dfbd1f1147f12edfa4772b3e4731913cc1/message_db_py-0.3.2-py3-none-any.whl | py3 | bdist_wheel | null | false | 6b3c2363d3cbfea022001cef02689c89 | 198865cf482e845c8d76f36edd76566398dbc4d921a9fe9ee0030c0f56a42c50 | ba2494d5e4675af14e89f1c1b0fcc5dfbd1f1147f12edfa4772b3e4731913cc1 | null | [
"LICENSE"
] | 290 |
2.4 | nifti2bids | 0.17.3 | Post-hoc BIDS conversion for NIfTI datasets and neurobehavioral logs. | # NIfTI2BIDS
[](https://pypi.python.org/pypi/nifti2bids/)
[](https://pypi.python.org/pypi/nifti2bids/)
[](https://github.com/donishadsmith/nifti2bids)
[](https://opensource.org/licenses/MIT)
[](https://github.com/donishadsmith/nifti2bids/actions/workflows/testing.yaml)
[](https://codecov.io/gh/donishadsmith/nifti2bids)
[](https://github.com/psf/black)
[](http://nifti2bids.readthedocs.io/en/stable/?badge=stable)
A toolkit for post-hoc BIDS conversion of legacy or unstructured NIfTI datasets. Intended for cases that require custom code and flexibility, such as when NIfTI source files lack consistent naming conventions, organized folder hierarchies, or sidecar metadata. Includes utilities for metadata reconstruction from NIfTI headers, file renaming, neurobehavioral log parsing (for E-Prime and Presentation), and JSON sidecar generation.
## Installation
### Standard Installation
```bash
pip install nifti2bids[all]
```
### Development Version
```bash
git clone --depth 1 https://github.com/donishadsmith/nifti2bids/
cd nifti2bids
pip install -e .[all]
```
## Features
- **File renaming**: Convert arbitrary filenames to BIDS-compliant naming
- **File creation**: Generate `dataset_description.json` and `participants.tsv`
- **Metadata utilities**: Extract header metadata (e.g., TR, orientation, scanner info) and generate slice timing for singleband and multiband acquisitions
- **Log parsing**: Load Presentation (e.g., `.log`) and E-Prime 3 (e.g., `.edat3`, `.txt`) files as DataFrames, or use extractor classes to generate BIDS events for block and event designs:
| Class | Software | Design | Description |
|-------|----------|--------|-------------|
| `PresentationBlockExtractor` | Presentation | Block | Extracts block-level timing with mean RT and accuracy |
| `PresentationEventExtractor` | Presentation | Event | Extracts trial-level timing with individual responses |
| `EPrimeBlockExtractor` | E-Prime 3 | Block | Extracts block-level timing with mean RT and accuracy |
| `EPrimeEventExtractor` | E-Prime 3 | Event | Extracts trial-level timing with individual responses |
- **Auditing**: Generate a table showing the presence or absence of certain files for each subject and session
- **QC**: Create and compute quality control metrics (e.g., framewise displacement)
## Quick Start
### Creating BIDS-Compliant Filenames
```python
from nifti2bids.bids import create_bids_file
create_bids_file(
src_file="101_mprage.nii.gz",
subj_id="101",
ses_id="01",
desc="T1w",
dst_dir="/data/bids/sub-101/ses-01/anat",
)
```
### Extracting Metadata from NIfTI Headers
```python
from nifti2bids.metadata import get_tr, create_slice_timing, get_image_orientation
tr = get_tr("sub-01_bold.nii.gz")
slice_timing = create_slice_timing(
"sub-01_bold.nii.gz",
slice_acquisition_method="interleaved",
multiband_factor=4,
)
orientation_map, orientation = get_image_orientation("sub-01_bold.nii.gz")
```
### Loading Raw Log Files
```python
from nifti2bids.parsers import (
load_presentation_log,
load_eprime_log,
convert_edat3_to_txt,
)
presentation_df = load_presentation_log("sub-01_task.log", convert_to_seconds=["Time"])
# E-Prime 3: convert .edat3 to text first, or load .txt directly
eprime_txt_path = convert_edat3_to_txt("sub-01_task.edat3")
eprime_df = load_eprime_log(eprime_txt_path, convert_to_seconds=["Stimulus.OnsetTime"])
```
### Creating BIDS Events from Presentation Logs
```python
from nifti2bids.bids import PresentationBlockExtractor
import pandas as pd
extractor = PresentationBlockExtractor(
"sub-01_task-faces.log",
block_cue_names=("Face", "Place"), # Can use regex ("Fa.*", "Pla.*")
scanner_event_type="Pulse",
scanner_trigger_code="99",
convert_to_seconds=["Time"],
rest_block_codes="crosshair",
rest_code_frequency="fixed",
split_cue_as_instruction=True,
)
events_df = pd.DataFrame(
{
"onset": extractor.extract_onsets(),
"duration": extractor.extract_durations(),
"trial_type": extractor.extract_trial_types(),
"mean_rt": extractor.extract_mean_reaction_times(),
}
)
```
### Creating BIDS Events from E-Prime Logs
```python
from nifti2bids.bids import EPrimeEventExtractor
import pandas as pd
extractor = EPrimeEventExtractor(
"sub-01_task-gonogo.txt",
trial_types="Go|NoGo", # Can also use ("Go", "NoGo")
onset_column_name="Stimulus.OnsetTime",
procedure_column_name="Procedure",
trigger_column_name="ScannerTrigger.RTTime",
convert_to_seconds=[
"Stimulus.OnsetTime",
"Stimulus.OffsetTime",
"ScannerTrigger.RTTime",
],
)
events_df = pd.DataFrame(
{
"onset": extractor.extract_onsets(),
"duration": extractor.extract_durations(
offset_column_name="Stimulus.OffsetTime"
),
"trial_type": extractor.extract_trial_types(),
"reaction_time": extractor.extract_reaction_times(
reaction_time_column_name="Stimulus.RT"
),
}
)
```
### Audit BIDS Dataset
```python
from nifti2bids.audit import BIDSAuditor
from nifti2bids.simulate import simulate_bids_dataset
bids_root = simulate_bids_dataset()
auditor = BIDSAuditor(bids_root)
auditor.check_raw_nifti_availability()
auditor.check_raw_sidecar_availability()
auditor.check_events_availability()
auditor.check_preprocessed_nifti_availability()
analysis_dir = bids_root / "first_level"
analysis_sub_dir = analysis_dir / "sub-1" / "ses-1"
analysis_sub_dir.mkdir(parents=True, exist_ok=True)
with open(analysis_sub_dir / "sub-1_task-rest_desc-betas.nii.gz", "w") as f:
pass
auditor.check_first_level_availability(analysis_dir=analysis_dir, desc="betas")
```
### Compute QC
```python
from nifti2bids.qc import create_censor_mask, compute_consecutive_censor_stats
censor_mask = create_censor_mask(
"confounds.tsv",
column_name="framewise_displacement",
threshold=0.5,
n_dummy_scans=4,
)
consecutive_censor_mean, consecutive_censor_std = compute_consecutive_censor_stats(
censor_mask, n_dummy_scans=4
)
```
See the [API documentation](https://nifti2bids.readthedocs.io/en/latest/api.html) for full parameter details and additional utilities.
| text/markdown | null | Donisha Smith <dsmit420@jhu.edu> | null | null | null | python, neuroimaging, fMRI, MRI, BIDS, NIfTI | [
"Intended Audience :: Education",
"Intended Audience :: Science/Research",
"Topic :: Scientific/Engineering",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: 3.13",
"Programming Language :: Python :: 3.14",
"Operating System :: MacOS :: MacOS X",
"Operating System :: POSIX :: Linux",
"Operating System :: Microsoft :: Windows :: Windows 11",
"Development Status :: 3 - Alpha"
] | [] | null | null | >=3.10.0 | [] | [] | [] | [
"numpy>=1.26.3",
"nibabel>=5.0.0",
"rich>=14.2.0",
"nilearn>=0.10.4",
"pandas>=2.1.0",
"pybids>=0.16.5; platform_system != \"Windows\"",
"tqdm>=4.65.0",
"joblib>=1.3.0",
"nifti2bids[test,windows]; extra == \"all\"",
"pytest; extra == \"test\"",
"pytest-cov; extra == \"test\"",
"pybids>=0.16.5; extra == \"windows\""
] | [] | [] | [] | [
"Homepage, https://nifti2bids.readthedocs.io",
"Github, https://github.com/donishadsmith/nifti2bids",
"Issues, https://github.com/donishadsmith/nifti2bids/issues",
"Changelog, https://nifti2bids.readthedocs.io/en/stable/changelog.html"
] | twine/6.2.0 CPython/3.14.2 | 2026-02-21T02:55:57.758654 | nifti2bids-0.17.3.tar.gz | 57,866 | f6/b1/d87842eb389388e05f264d797bbea8cc0122c12cedcd5f6ae9221299c60e/nifti2bids-0.17.3.tar.gz | source | sdist | null | false | f2206dcf5071de47a9920f1d87fe179b | fd28fbca01d2eb90ecf5f4e6571d19ccb6c50606db9b6a771711d7b10b3097b9 | f6b1d87842eb389388e05f264d797bbea8cc0122c12cedcd5f6ae9221299c60e | MIT | [
"LICENSE"
] | 242 |
2.4 | OpenGeodeWeb-Viewer | 1.15.3 | OpenGeodeWeb-Viewer is an open source framework that proposes handy python functions and wrappers for the OpenGeode ecosystem | # OpenGeodeWeb-Viewer
Open-source Python framework for remote visualisation.
| text/markdown | null | Geode-solutions <team-web@geode-solutions.com> | null | null | null | null | [
"Programming Language :: Python :: 3",
"License :: OSI Approved :: MIT License",
"Operating System :: OS Independent"
] | [] | null | null | >=3.9 | [] | [] | [] | [
"aiohappyeyeballs>=2",
"aiohttp>=3",
"aiosignal>=1",
"attrs>=25",
"contourpy>=1",
"cycler>=0",
"fonttools>=4",
"frozenlist>=1",
"idna==3.11",
"kiwisolver>=1",
"matplotlib>=3",
"multidict>=6",
"numpy>=2",
"packaging==26.0",
"pillow>=12",
"propcache>=0",
"pyparsing>=3",
"python-dateutil==2.9.0.post0",
"six>=1",
"typing-extensions>=4",
"vtk==9.5.2",
"websocket-client>=1",
"wslink==1.12.4",
"yarl>=1",
"opengeodeweb-microservice==1.*,>=1.0.14"
] | [] | [] | [] | [
"Homepage, https://github.com/Geode-solutions/OpenGeodeWeb-Viewer",
"Bug Tracker, https://github.com/Geode-solutions/OpenGeodeWeb-Viewer/issues"
] | twine/6.0.1 CPython/3.12.12 | 2026-02-21T02:55:35.291182 | opengeodeweb_viewer-1.15.3.tar.gz | 31,698 | a9/1c/4c8f87ad2e5caceba2a6986efd9c7993dd2b394977171af357eb4460ba8f/opengeodeweb_viewer-1.15.3.tar.gz | source | sdist | null | false | bef2efa8c815eb389f94ea4a8dd61e09 | a73a1c56bcfd76b6c30255ac71075d98782fecc19e95fdda8fe7e6190a721c79 | a91c4c8f87ad2e5caceba2a6986efd9c7993dd2b394977171af357eb4460ba8f | null | [] | 0 |
2.4 | types-boto3 | 1.42.54 | Type annotations for boto3 1.42.54 generated with mypy-boto3-builder 8.12.0 | <a id="types-boto3"></a>
# types-boto3
[](https://pypi.org/project/types-boto3/)
[](https://pypi.org/project/types-boto3/)
[](https://youtype.github.io/types_boto3_docs/)
[](https://pypistats.org/packages/types-boto3)

Type annotations for [boto3 1.42.54](https://pypi.org/project/boto3/)
compatible with [VSCode](https://code.visualstudio.com/),
[PyCharm](https://www.jetbrains.com/pycharm/),
[Emacs](https://www.gnu.org/software/emacs/),
[Sublime Text](https://www.sublimetext.com/),
[mypy](https://github.com/python/mypy),
[pyright](https://github.com/microsoft/pyright) and other tools.
Generated with
[mypy-boto3-builder 8.12.0](https://github.com/youtype/mypy_boto3_builder).
More information can be found in
[types-boto3 docs](https://youtype.github.io/types_boto3_docs/).
See how it helps you find and fix potential bugs:

- [types-boto3](#types-boto3)
- [How to install](#how-to-install)
- [Generate locally (recommended)](<#generate-locally-(recommended)>)
- [VSCode extension](#vscode-extension)
- [From PyPI with pip](#from-pypi-with-pip)
- [How to uninstall](#how-to-uninstall)
- [Usage](#usage)
- [VSCode](#vscode)
- [PyCharm](#pycharm)
- [Emacs](#emacs)
- [Sublime Text](#sublime-text)
- [Other IDEs](#other-ides)
- [mypy](#mypy)
- [pyright](#pyright)
- [Pylint compatibility](#pylint-compatibility)
- [Explicit type annotations](#explicit-type-annotations)
- [How it works](#how-it-works)
- [What's new](#what's-new)
- [Implemented features](#implemented-features)
- [Latest changes](#latest-changes)
- [Versioning](#versioning)
- [Thank you](#thank-you)
- [Documentation](#documentation)
- [Support and contributing](#support-and-contributing)
- [Submodules](#submodules)
<a id="how-to-install"></a>
## How to install
<a id="generate-locally-(recommended)"></a>
### Generate locally (recommended)
You can generate type annotations for `boto3` package locally with
`mypy-boto3-builder`. Use
[uv](https://docs.astral.sh/uv/getting-started/installation/) for build
isolation.
1. Run mypy-boto3-builder in your package root directory:
`uvx --with 'boto3==1.42.54' mypy-boto3-builder`
2. Select `boto3` AWS SDK.
3. Select services you use in the current project.
4. Use provided commands to install generated packages.
<a id="vscode-extension"></a>
### VSCode extension
Add
[AWS Boto3](https://marketplace.visualstudio.com/items?itemName=Boto3typed.boto3-ide)
extension to your VSCode and run `AWS boto3: Quick Start` command.
Click `Auto-discover services` and select services you use in the current
project.
<a id="from-pypi-with-pip"></a>
### From PyPI with pip
Install `types-boto3` to add type checking for `boto3` package.
```bash
# install type annotations only for boto3
python -m pip install types-boto3
# install boto3 type annotations
# for cloudformation, dynamodb, ec2, lambda, rds, s3, sqs
python -m pip install 'types-boto3[essential]'
# or install annotations for services you use
python -m pip install 'types-boto3[acm,apigateway]'
# or install annotations in sync with boto3 version
python -m pip install 'types-boto3[boto3]'
# or install all-in-one annotations for all services
python -m pip install 'types-boto3[full]'
# Lite version does not provide session.client/resource overloads
# it is more RAM-friendly, but requires explicit type annotations
python -m pip install 'types-boto3-lite[essential]'
```
<a id="how-to-uninstall"></a>
## How to uninstall
```bash
# uninstall types-boto3
python -m pip uninstall -y types-boto3
```
<a id="usage"></a>
## Usage
<a id="vscode"></a>
### VSCode
- Install
[Python extension](https://marketplace.visualstudio.com/items?itemName=ms-python.python)
- Install
[Pylance extension](https://marketplace.visualstudio.com/items?itemName=ms-python.vscode-pylance)
- Set `Pylance` as your Python Language Server
- Install `types-boto3[essential]` in your environment:
```bash
python -m pip install 'types-boto3[essential]'
```
Both type checking and code completion should now work. No explicit type
annotations required, write your `boto3` code as usual.
<a id="pycharm"></a>
### PyCharm
> ⚠️ Due to slow PyCharm performance on `Literal` overloads (issue
> [PY-40997](https://youtrack.jetbrains.com/issue/PY-40997)), it is recommended
> to use [types-boto3-lite](https://pypi.org/project/types-boto3-lite/) until
> the issue is resolved.
> ⚠️ If you experience slow performance and high CPU usage, try to disable
> `PyCharm` type checker and use [mypy](https://github.com/python/mypy) or
> [pyright](https://github.com/microsoft/pyright) instead.
> ⚠️ To continue using `PyCharm` type checker, you can try to replace
> `types-boto3` with
> [types-boto3-lite](https://pypi.org/project/types-boto3-lite/):
```bash
pip uninstall types-boto3
pip install types-boto3-lite
```
Install `types-boto3[essential]` in your environment:
```bash
python -m pip install 'types-boto3[essential]'
```
Both type checking and code completion should now work.
<a id="emacs"></a>
### Emacs
- Install `types-boto3` with services you use in your environment:
```bash
python -m pip install 'types-boto3[essential]'
```
- Install [use-package](https://github.com/jwiegley/use-package),
[lsp](https://github.com/emacs-lsp/lsp-mode/),
[company](https://github.com/company-mode/company-mode) and
[flycheck](https://github.com/flycheck/flycheck) packages
- Install [lsp-pyright](https://github.com/emacs-lsp/lsp-pyright) package
```elisp
(use-package lsp-pyright
:ensure t
:hook (python-mode . (lambda ()
(require 'lsp-pyright)
(lsp))) ; or lsp-deferred
:init (when (executable-find "python3")
(setq lsp-pyright-python-executable-cmd "python3"))
)
```
- Make sure emacs uses the environment where you have installed `types-boto3`
Type checking should now work. No explicit type annotations required, write
your `boto3` code as usual.
<a id="sublime-text"></a>
### Sublime Text
- Install `types-boto3[essential]` with services you use in your environment:
```bash
python -m pip install 'types-boto3[essential]'
```
- Install [LSP-pyright](https://github.com/sublimelsp/LSP-pyright) package
Type checking should now work. No explicit type annotations required, write
your `boto3` code as usual.
<a id="other-ides"></a>
### Other IDEs
Not tested, but as long as your IDE supports `mypy` or `pyright`, everything
should work.
<a id="mypy"></a>
### mypy
- Install `mypy`: `python -m pip install mypy`
- Install `types-boto3[essential]` in your environment:
```bash
python -m pip install 'types-boto3[essential]'
```
Type checking should now work. No explicit type annotations required, write
your `boto3` code as usual.
<a id="pyright"></a>
### pyright
- Install `pyright`: `npm i -g pyright`
- Install `types-boto3[essential]` in your environment:
```bash
python -m pip install 'types-boto3[essential]'
```
Optionally, you can install `types-boto3` to `typings` directory.
Type checking should now work. No explicit type annotations required, write
your `boto3` code as usual.
<a id="pylint-compatibility"></a>
### Pylint compatibility
It is totally safe to use `TYPE_CHECKING` flag in order to avoid `types-boto3`
dependency in production. However, there is an issue in `pylint` that it
complains about undefined variables. To fix it, set all types to `object` in
non-`TYPE_CHECKING` mode.
```python
from typing import TYPE_CHECKING
if TYPE_CHECKING:
from types_boto3_ec2 import EC2Client, EC2ServiceResource
from types_boto3_ec2.waiters import BundleTaskCompleteWaiter
from types_boto3_ec2.paginators import DescribeVolumesPaginator
else:
EC2Client = object
EC2ServiceResource = object
BundleTaskCompleteWaiter = object
DescribeVolumesPaginator = object
...
```
<a id="explicit-type-annotations"></a>
### Explicit type annotations
To speed up type checking and code completion, you can set types explicitly.
```python
import boto3
from boto3.session import Session
from types_boto3_ec2.client import EC2Client
from types_boto3_ec2.service_resource import EC2ServiceResource
from types_boto3_ec2.waiter import BundleTaskCompleteWaiter
from types_boto3_ec2.paginator import DescribeVolumesPaginator
session = Session(region_name="us-west-1")
ec2_client: EC2Client = boto3.client("ec2", region_name="us-west-1")
ec2_resource: EC2ServiceResource = session.resource("ec2")
bundle_task_complete_waiter: BundleTaskCompleteWaiter = ec2_client.get_waiter(
"bundle_task_complete"
)
describe_volumes_paginator: DescribeVolumesPaginator = ec2_client.get_paginator("describe_volumes")
```
<a id="how-it-works"></a>
## How it works
Fully automated
[mypy-boto3-builder](https://github.com/youtype/mypy_boto3_builder) carefully
generates type annotations for each service, patiently waiting for `boto3`
updates. It delivers drop-in type annotations for you and makes sure that:
- All available `boto3` services are covered.
- Each public class and method of every `boto3` service gets valid type
annotations extracted from `botocore` schemas.
- Type annotations include up-to-date documentation.
- Link to documentation is provided for every method.
- Code is processed by [ruff](https://docs.astral.sh/ruff/) for readability.
<a id="what's-new"></a>
## What's new
<a id="implemented-features"></a>
### Implemented features
- Fully type annotated `boto3`, `botocore`, `aiobotocore` and `aioboto3`
libraries
- `mypy`, `pyright`, `VSCode`, `PyCharm`, `Sublime Text` and `Emacs`
compatibility
- `Client`, `ServiceResource`, `Resource`, `Waiter` `Paginator` type
annotations for each service
- Generated `TypeDefs` for each service
- Generated `Literals` for each service
- Auto discovery of types for `boto3.client` and `boto3.resource` calls
- Auto discovery of types for `session.client` and `session.resource` calls
- Auto discovery of types for `client.get_waiter` and `client.get_paginator`
calls
- Auto discovery of types for `ServiceResource` and `Resource` collections
- Auto discovery of types for `aiobotocore.Session.create_client` calls
<a id="latest-changes"></a>
### Latest changes
Builder changelog can be found in
[Releases](https://github.com/youtype/mypy_boto3_builder/releases).
<a id="versioning"></a>
## Versioning
`types-boto3` version is the same as related `boto3` version and follows
[Python Packaging version specifiers](https://packaging.python.org/en/latest/specifications/version-specifiers/).
<a id="thank-you"></a>
## Thank you
- [Allie Fitter](https://github.com/alliefitter) for
[boto3-type-annotations](https://pypi.org/project/boto3-type-annotations/),
this package is based on top of his work
- [black](https://github.com/psf/black) developers for an awesome formatting
tool
- [Timothy Edmund Crosley](https://github.com/timothycrosley) for
[isort](https://github.com/PyCQA/isort) and how flexible it is
- [mypy](https://github.com/python/mypy) developers for doing all dirty work
for us
- [pyright](https://github.com/microsoft/pyright) team for the new era of typed
Python
<a id="documentation"></a>
## Documentation
All services type annotations can be found in
[boto3 docs](https://youtype.github.io/types_boto3_docs/)
<a id="support-and-contributing"></a>
## Support and contributing
This package is auto-generated. Please report any bugs or request new features
in [mypy-boto3-builder](https://github.com/youtype/mypy_boto3_builder/issues/)
repository.
<a id="submodules"></a>
## Submodules
- `types-boto3[full]` - Type annotations for all 413 services in one package
(recommended).
- `types-boto3[all]` - Type annotations for all 413 services in separate
packages.
- `types-boto3[essential]` - Type annotations for
[CloudFormation](https://youtype.github.io/types_boto3_docs/types_boto3_cloudformation/),
[DynamoDB](https://youtype.github.io/types_boto3_docs/types_boto3_dynamodb/),
[EC2](https://youtype.github.io/types_boto3_docs/types_boto3_ec2/),
[Lambda](https://youtype.github.io/types_boto3_docs/types_boto3_lambda/),
[RDS](https://youtype.github.io/types_boto3_docs/types_boto3_rds/),
[S3](https://youtype.github.io/types_boto3_docs/types_boto3_s3/) and
[SQS](https://youtype.github.io/types_boto3_docs/types_boto3_sqs/) services.
- `types-boto3[boto3]` - Install annotations in sync with `boto3` version.
- `types-boto3[accessanalyzer]` - Type annotations for
[AccessAnalyzer](https://youtype.github.io/types_boto3_docs/types_boto3_accessanalyzer/)
service.
- `types-boto3[account]` - Type annotations for
[Account](https://youtype.github.io/types_boto3_docs/types_boto3_account/)
service.
- `types-boto3[acm]` - Type annotations for
[ACM](https://youtype.github.io/types_boto3_docs/types_boto3_acm/) service.
- `types-boto3[acm-pca]` - Type annotations for
[ACMPCA](https://youtype.github.io/types_boto3_docs/types_boto3_acm_pca/)
service.
- `types-boto3[aiops]` - Type annotations for
[AIOps](https://youtype.github.io/types_boto3_docs/types_boto3_aiops/)
service.
- `types-boto3[amp]` - Type annotations for
[PrometheusService](https://youtype.github.io/types_boto3_docs/types_boto3_amp/)
service.
- `types-boto3[amplify]` - Type annotations for
[Amplify](https://youtype.github.io/types_boto3_docs/types_boto3_amplify/)
service.
- `types-boto3[amplifybackend]` - Type annotations for
[AmplifyBackend](https://youtype.github.io/types_boto3_docs/types_boto3_amplifybackend/)
service.
- `types-boto3[amplifyuibuilder]` - Type annotations for
[AmplifyUIBuilder](https://youtype.github.io/types_boto3_docs/types_boto3_amplifyuibuilder/)
service.
- `types-boto3[apigateway]` - Type annotations for
[APIGateway](https://youtype.github.io/types_boto3_docs/types_boto3_apigateway/)
service.
- `types-boto3[apigatewaymanagementapi]` - Type annotations for
[ApiGatewayManagementApi](https://youtype.github.io/types_boto3_docs/types_boto3_apigatewaymanagementapi/)
service.
- `types-boto3[apigatewayv2]` - Type annotations for
[ApiGatewayV2](https://youtype.github.io/types_boto3_docs/types_boto3_apigatewayv2/)
service.
- `types-boto3[appconfig]` - Type annotations for
[AppConfig](https://youtype.github.io/types_boto3_docs/types_boto3_appconfig/)
service.
- `types-boto3[appconfigdata]` - Type annotations for
[AppConfigData](https://youtype.github.io/types_boto3_docs/types_boto3_appconfigdata/)
service.
- `types-boto3[appfabric]` - Type annotations for
[AppFabric](https://youtype.github.io/types_boto3_docs/types_boto3_appfabric/)
service.
- `types-boto3[appflow]` - Type annotations for
[Appflow](https://youtype.github.io/types_boto3_docs/types_boto3_appflow/)
service.
- `types-boto3[appintegrations]` - Type annotations for
[AppIntegrationsService](https://youtype.github.io/types_boto3_docs/types_boto3_appintegrations/)
service.
- `types-boto3[application-autoscaling]` - Type annotations for
[ApplicationAutoScaling](https://youtype.github.io/types_boto3_docs/types_boto3_application_autoscaling/)
service.
- `types-boto3[application-insights]` - Type annotations for
[ApplicationInsights](https://youtype.github.io/types_boto3_docs/types_boto3_application_insights/)
service.
- `types-boto3[application-signals]` - Type annotations for
[CloudWatchApplicationSignals](https://youtype.github.io/types_boto3_docs/types_boto3_application_signals/)
service.
- `types-boto3[applicationcostprofiler]` - Type annotations for
[ApplicationCostProfiler](https://youtype.github.io/types_boto3_docs/types_boto3_applicationcostprofiler/)
service.
- `types-boto3[appmesh]` - Type annotations for
[AppMesh](https://youtype.github.io/types_boto3_docs/types_boto3_appmesh/)
service.
- `types-boto3[apprunner]` - Type annotations for
[AppRunner](https://youtype.github.io/types_boto3_docs/types_boto3_apprunner/)
service.
- `types-boto3[appstream]` - Type annotations for
[AppStream](https://youtype.github.io/types_boto3_docs/types_boto3_appstream/)
service.
- `types-boto3[appsync]` - Type annotations for
[AppSync](https://youtype.github.io/types_boto3_docs/types_boto3_appsync/)
service.
- `types-boto3[arc-region-switch]` - Type annotations for
[ARCRegionswitch](https://youtype.github.io/types_boto3_docs/types_boto3_arc_region_switch/)
service.
- `types-boto3[arc-zonal-shift]` - Type annotations for
[ARCZonalShift](https://youtype.github.io/types_boto3_docs/types_boto3_arc_zonal_shift/)
service.
- `types-boto3[artifact]` - Type annotations for
[Artifact](https://youtype.github.io/types_boto3_docs/types_boto3_artifact/)
service.
- `types-boto3[athena]` - Type annotations for
[Athena](https://youtype.github.io/types_boto3_docs/types_boto3_athena/)
service.
- `types-boto3[auditmanager]` - Type annotations for
[AuditManager](https://youtype.github.io/types_boto3_docs/types_boto3_auditmanager/)
service.
- `types-boto3[autoscaling]` - Type annotations for
[AutoScaling](https://youtype.github.io/types_boto3_docs/types_boto3_autoscaling/)
service.
- `types-boto3[autoscaling-plans]` - Type annotations for
[AutoScalingPlans](https://youtype.github.io/types_boto3_docs/types_boto3_autoscaling_plans/)
service.
- `types-boto3[b2bi]` - Type annotations for
[B2BI](https://youtype.github.io/types_boto3_docs/types_boto3_b2bi/) service.
- `types-boto3[backup]` - Type annotations for
[Backup](https://youtype.github.io/types_boto3_docs/types_boto3_backup/)
service.
- `types-boto3[backup-gateway]` - Type annotations for
[BackupGateway](https://youtype.github.io/types_boto3_docs/types_boto3_backup_gateway/)
service.
- `types-boto3[backupsearch]` - Type annotations for
[BackupSearch](https://youtype.github.io/types_boto3_docs/types_boto3_backupsearch/)
service.
- `types-boto3[batch]` - Type annotations for
[Batch](https://youtype.github.io/types_boto3_docs/types_boto3_batch/)
service.
- `types-boto3[bcm-dashboards]` - Type annotations for
[BillingandCostManagementDashboards](https://youtype.github.io/types_boto3_docs/types_boto3_bcm_dashboards/)
service.
- `types-boto3[bcm-data-exports]` - Type annotations for
[BillingandCostManagementDataExports](https://youtype.github.io/types_boto3_docs/types_boto3_bcm_data_exports/)
service.
- `types-boto3[bcm-pricing-calculator]` - Type annotations for
[Billing and Cost Management Pricing Calculator](https://youtype.github.io/types_boto3_docs/types_boto3_bcm_pricing_calculator/)
service.
- `types-boto3[bcm-recommended-actions]` - Type annotations for
[Billing and Cost Management Recommended Actions](https://youtype.github.io/types_boto3_docs/types_boto3_bcm_recommended_actions/)
service.
- `types-boto3[bedrock]` - Type annotations for
[Bedrock](https://youtype.github.io/types_boto3_docs/types_boto3_bedrock/)
service.
- `types-boto3[bedrock-agent]` - Type annotations for
[Agents for Bedrock](https://youtype.github.io/types_boto3_docs/types_boto3_bedrock_agent/)
service.
- `types-boto3[bedrock-agent-runtime]` - Type annotations for
[Agents for Bedrock Runtime](https://youtype.github.io/types_boto3_docs/types_boto3_bedrock_agent_runtime/)
service.
- `types-boto3[bedrock-agentcore]` - Type annotations for
[BedrockAgentCore](https://youtype.github.io/types_boto3_docs/types_boto3_bedrock_agentcore/)
service.
- `types-boto3[bedrock-agentcore-control]` - Type annotations for
[BedrockAgentCoreControl](https://youtype.github.io/types_boto3_docs/types_boto3_bedrock_agentcore_control/)
service.
- `types-boto3[bedrock-data-automation]` - Type annotations for
[Data Automation for Bedrock](https://youtype.github.io/types_boto3_docs/types_boto3_bedrock_data_automation/)
service.
- `types-boto3[bedrock-data-automation-runtime]` - Type annotations for
[Runtime for Bedrock Data Automation](https://youtype.github.io/types_boto3_docs/types_boto3_bedrock_data_automation_runtime/)
service.
- `types-boto3[bedrock-runtime]` - Type annotations for
[BedrockRuntime](https://youtype.github.io/types_boto3_docs/types_boto3_bedrock_runtime/)
service.
- `types-boto3[billing]` - Type annotations for
[Billing](https://youtype.github.io/types_boto3_docs/types_boto3_billing/)
service.
- `types-boto3[billingconductor]` - Type annotations for
[BillingConductor](https://youtype.github.io/types_boto3_docs/types_boto3_billingconductor/)
service.
- `types-boto3[braket]` - Type annotations for
[Braket](https://youtype.github.io/types_boto3_docs/types_boto3_braket/)
service.
- `types-boto3[budgets]` - Type annotations for
[Budgets](https://youtype.github.io/types_boto3_docs/types_boto3_budgets/)
service.
- `types-boto3[ce]` - Type annotations for
[CostExplorer](https://youtype.github.io/types_boto3_docs/types_boto3_ce/)
service.
- `types-boto3[chatbot]` - Type annotations for
[Chatbot](https://youtype.github.io/types_boto3_docs/types_boto3_chatbot/)
service.
- `types-boto3[chime]` - Type annotations for
[Chime](https://youtype.github.io/types_boto3_docs/types_boto3_chime/)
service.
- `types-boto3[chime-sdk-identity]` - Type annotations for
[ChimeSDKIdentity](https://youtype.github.io/types_boto3_docs/types_boto3_chime_sdk_identity/)
service.
- `types-boto3[chime-sdk-media-pipelines]` - Type annotations for
[ChimeSDKMediaPipelines](https://youtype.github.io/types_boto3_docs/types_boto3_chime_sdk_media_pipelines/)
service.
- `types-boto3[chime-sdk-meetings]` - Type annotations for
[ChimeSDKMeetings](https://youtype.github.io/types_boto3_docs/types_boto3_chime_sdk_meetings/)
service.
- `types-boto3[chime-sdk-messaging]` - Type annotations for
[ChimeSDKMessaging](https://youtype.github.io/types_boto3_docs/types_boto3_chime_sdk_messaging/)
service.
- `types-boto3[chime-sdk-voice]` - Type annotations for
[ChimeSDKVoice](https://youtype.github.io/types_boto3_docs/types_boto3_chime_sdk_voice/)
service.
- `types-boto3[cleanrooms]` - Type annotations for
[CleanRoomsService](https://youtype.github.io/types_boto3_docs/types_boto3_cleanrooms/)
service.
- `types-boto3[cleanroomsml]` - Type annotations for
[CleanRoomsML](https://youtype.github.io/types_boto3_docs/types_boto3_cleanroomsml/)
service.
- `types-boto3[cloud9]` - Type annotations for
[Cloud9](https://youtype.github.io/types_boto3_docs/types_boto3_cloud9/)
service.
- `types-boto3[cloudcontrol]` - Type annotations for
[CloudControlApi](https://youtype.github.io/types_boto3_docs/types_boto3_cloudcontrol/)
service.
- `types-boto3[clouddirectory]` - Type annotations for
[CloudDirectory](https://youtype.github.io/types_boto3_docs/types_boto3_clouddirectory/)
service.
- `types-boto3[cloudformation]` - Type annotations for
[CloudFormation](https://youtype.github.io/types_boto3_docs/types_boto3_cloudformation/)
service.
- `types-boto3[cloudfront]` - Type annotations for
[CloudFront](https://youtype.github.io/types_boto3_docs/types_boto3_cloudfront/)
service.
- `types-boto3[cloudfront-keyvaluestore]` - Type annotations for
[CloudFrontKeyValueStore](https://youtype.github.io/types_boto3_docs/types_boto3_cloudfront_keyvaluestore/)
service.
- `types-boto3[cloudhsm]` - Type annotations for
[CloudHSM](https://youtype.github.io/types_boto3_docs/types_boto3_cloudhsm/)
service.
- `types-boto3[cloudhsmv2]` - Type annotations for
[CloudHSMV2](https://youtype.github.io/types_boto3_docs/types_boto3_cloudhsmv2/)
service.
- `types-boto3[cloudsearch]` - Type annotations for
[CloudSearch](https://youtype.github.io/types_boto3_docs/types_boto3_cloudsearch/)
service.
- `types-boto3[cloudsearchdomain]` - Type annotations for
[CloudSearchDomain](https://youtype.github.io/types_boto3_docs/types_boto3_cloudsearchdomain/)
service.
- `types-boto3[cloudtrail]` - Type annotations for
[CloudTrail](https://youtype.github.io/types_boto3_docs/types_boto3_cloudtrail/)
service.
- `types-boto3[cloudtrail-data]` - Type annotations for
[CloudTrailDataService](https://youtype.github.io/types_boto3_docs/types_boto3_cloudtrail_data/)
service.
- `types-boto3[cloudwatch]` - Type annotations for
[CloudWatch](https://youtype.github.io/types_boto3_docs/types_boto3_cloudwatch/)
service.
- `types-boto3[codeartifact]` - Type annotations for
[CodeArtifact](https://youtype.github.io/types_boto3_docs/types_boto3_codeartifact/)
service.
- `types-boto3[codebuild]` - Type annotations for
[CodeBuild](https://youtype.github.io/types_boto3_docs/types_boto3_codebuild/)
service.
- `types-boto3[codecatalyst]` - Type annotations for
[CodeCatalyst](https://youtype.github.io/types_boto3_docs/types_boto3_codecatalyst/)
service.
- `types-boto3[codecommit]` - Type annotations for
[CodeCommit](https://youtype.github.io/types_boto3_docs/types_boto3_codecommit/)
service.
- `types-boto3[codeconnections]` - Type annotations for
[CodeConnections](https://youtype.github.io/types_boto3_docs/types_boto3_codeconnections/)
service.
- `types-boto3[codedeploy]` - Type annotations for
[CodeDeploy](https://youtype.github.io/types_boto3_docs/types_boto3_codedeploy/)
service.
- `types-boto3[codeguru-reviewer]` - Type annotations for
[CodeGuruReviewer](https://youtype.github.io/types_boto3_docs/types_boto3_codeguru_reviewer/)
service.
- `types-boto3[codeguru-security]` - Type annotations for
[CodeGuruSecurity](https://youtype.github.io/types_boto3_docs/types_boto3_codeguru_security/)
service.
- `types-boto3[codeguruprofiler]` - Type annotations for
[CodeGuruProfiler](https://youtype.github.io/types_boto3_docs/types_boto3_codeguruprofiler/)
service.
- `types-boto3[codepipeline]` - Type annotations for
[CodePipeline](https://youtype.github.io/types_boto3_docs/types_boto3_codepipeline/)
service.
- `types-boto3[codestar-connections]` - Type annotations for
[CodeStar Connections](https://youtype.github.io/types_boto3_docs/types_boto3_codestar_connections/)
service.
- `types-boto3[codestar-notifications]` - Type annotations for
[CodeStarNotifications](https://youtype.github.io/types_boto3_docs/types_boto3_codestar_notifications/)
service.
- `types-boto3[cognito-identity]` - Type annotations for
[CognitoIdentity](https://youtype.github.io/types_boto3_docs/types_boto3_cognito_identity/)
service.
- `types-boto3[cognito-idp]` - Type annotations for
[CognitoIdentityProvider](https://youtype.github.io/types_boto3_docs/types_boto3_cognito_idp/)
service.
- `types-boto3[cognito-sync]` - Type annotations for
[CognitoSync](https://youtype.github.io/types_boto3_docs/types_boto3_cognito_sync/)
service.
- `types-boto3[comprehend]` - Type annotations for
[Comprehend](https://youtype.github.io/types_boto3_docs/types_boto3_comprehend/)
service.
- `types-boto3[comprehendmedical]` - Type annotations for
[ComprehendMedical](https://youtype.github.io/types_boto3_docs/types_boto3_comprehendmedical/)
service.
- `types-boto3[compute-optimizer]` - Type annotations for
[ComputeOptimizer](https://youtype.github.io/types_boto3_docs/types_boto3_compute_optimizer/)
service.
- `types-boto3[compute-optimizer-automation]` - Type annotations for
[ComputeOptimizerAutomation](https://youtype.github.io/types_boto3_docs/types_boto3_compute_optimizer_automation/)
service.
- `types-boto3[config]` - Type annotations for
[ConfigService](https://youtype.github.io/types_boto3_docs/types_boto3_config/)
service.
- `types-boto3[connect]` - Type annotations for
[Connect](https://youtype.github.io/types_boto3_docs/types_boto3_connect/)
service.
- `types-boto3[connect-contact-lens]` - Type annotations for
[ConnectContactLens](https://youtype.github.io/types_boto3_docs/types_boto3_connect_contact_lens/)
service.
- `types-boto3[connectcampaigns]` - Type annotations for
[ConnectCampaignService](https://youtype.github.io/types_boto3_docs/types_boto3_connectcampaigns/)
service.
- `types-boto3[connectcampaignsv2]` - Type annotations for
[ConnectCampaignServiceV2](https://youtype.github.io/types_boto3_docs/types_boto3_connectcampaignsv2/)
service.
- `types-boto3[connectcases]` - Type annotations for
[ConnectCases](https://youtype.github.io/types_boto3_docs/types_boto3_connectcases/)
service.
- `types-boto3[connectparticipant]` - Type annotations for
[ConnectParticipant](https://youtype.github.io/types_boto3_docs/types_boto3_connectparticipant/)
service.
- `types-boto3[controlcatalog]` - Type annotations for
[ControlCatalog](https://youtype.github.io/types_boto3_docs/types_boto3_controlcatalog/)
service.
- `types-boto3[controltower]` - Type annotations for
[ControlTower](https://youtype.github.io/types_boto3_docs/types_boto3_controltower/)
service.
- `types-boto3[cost-optimization-hub]` - Type annotations for
[CostOptimizationHub](https://youtype.github.io/types_boto3_docs/types_boto3_cost_optimization_hub/)
service.
- `types-boto3[cur]` - Type annotations for
[Cost and Usage Report Service](https://youtype.github.io/types_boto3_docs/types_boto3_cur/)
service.
- `types-boto3[customer-profiles]` - Type annotations for
[CustomerProfiles](https://youtype.github.io/types_boto3_docs/types_boto3_customer_profiles/)
service.
- `types-boto3[databrew]` - Type annotations for
[GlueDataBrew](https://youtype.github.io/types_boto3_docs/types_boto3_databrew/)
service.
- `types-boto3[dataexchange]` - Type annotations for
[DataExchange](https://youtype.github.io/types_boto3_docs/types_boto3_dataexchange/)
service.
- `types-boto3[datapipeline]` - Type annotations for
[DataPipeline](https://youtype.github.io/types_boto3_docs/types_boto3_datapipeline/)
service.
- `types-boto3[datasync]` - Type annotations for
[DataSync](https://youtype.github.io/types_boto3_docs/types_boto3_datasync/)
service.
- `types-boto3[datazone]` - Type annotations for
[DataZone](https://youtype.github.io/types_boto3_docs/types_boto3_datazone/)
service.
- `types-boto3[dax]` - Type annotations for
[DAX](https://youtype.github.io/types_boto3_docs/types_boto3_dax/) service.
- `types-boto3[deadline]` - Type annotations for
[DeadlineCloud](https://youtype.github.io/types_boto3_docs/types_boto3_deadline/)
service.
- `types-boto3[detective]` - Type annotations for
[Detective](https://youtype.github.io/types_boto3_docs/types_boto3_detective/)
service.
- `types-boto3[devicefarm]` - Type annotations for
[DeviceFarm](https://youtype.github.io/types_boto3_docs/types_boto3_devicefarm/)
service.
- `types-boto3[devops-guru]` - Type annotations for
[DevOpsGuru](https://youtype.github.io/types_boto3_docs/types_boto3_devops_guru/)
service.
- `types-boto3[directconnect]` - Type annotations for
[DirectConnect](https://youtype.github.io/types_boto3_docs/types_boto3_directconnect/)
service.
- `types-boto3[discovery]` - Type annotations for
[ApplicationDiscoveryService](https://youtype.github.io/types_boto3_docs/types_boto3_discovery/)
service.
- `types-boto3[dlm]` - Type annotations for
[DLM](https://youtype.github.io/types_boto3_docs/types_boto3_dlm/) service.
- `types-boto3[dms]` - Type annotations for
[DatabaseMigrationService](https://youtype.github.io/types_boto3_docs/types_boto3_dms/)
service.
- `types-boto3[docdb]` - Type annotations for
[DocDB](https://youtype.github.io/types_boto3_docs/types_boto3_docdb/)
service.
- `types-boto3[docdb-elastic]` - Type annotations for
[DocDBElastic](https://youtype.github.io/types_boto3_docs/types_boto3_docdb_elastic/)
service.
- `types-boto3[drs]` - Type annotations for
[Drs](https://youtype.github.io/types_boto3_docs/types_boto3_drs/) service.
- `types-boto3[ds]` - Type annotations for
[DirectoryService](https://youtype.github.io/types_boto3_docs/types_boto3_ds/)
service.
- `types-boto3[ds-data]` - Type annotations for
[DirectoryServiceData](https://youtype.github.io/types_boto3_docs/types_boto3_ds_data/)
service.
- `types-boto3[dsql]` - Type annotations for
[AuroraDSQL](https://youtype.github.io/types_boto3_docs/types_boto3_dsql/)
service.
- `types-boto3[dynamodb]` - Type annotations for
[DynamoDB](https://youtype.github.io/types_boto3_docs/types_boto3_dynamodb/)
service.
- `types-boto3[dynamodbstreams]` - Type annotations for
[DynamoDBStreams](https://youtype.github.io/types_boto3_docs/types_boto3_dynamodbstreams/)
service.
- `types-boto3[ebs]` - Type annotations for
[EBS](https://youtype.github.io/types_boto3_docs/types_boto3_ebs/) service.
- `types-boto3[ec2]` - Type annotations for
[EC2](https://youtype.github.io/types_boto3_docs/types_boto3_ec2/) service.
- `types-boto3[ec2-instance-connect]` - Type annotations for
[EC2InstanceConnect](https://youtype.github.io/types_boto3_docs/types_boto3_ec2_instance_connect/)
service.
- `types-boto3[ecr]` - Type annotations for
[ECR](https://youtype.github.io/types_boto3_docs/types_boto3_ecr/) service.
- `types-boto3[ecr-public]` - Type annotations for
[ECRPublic](https://youtype.github.io/types_boto3_docs/types_boto3_ecr_public/)
service.
- `types-boto3[ecs]` - Type annotations for
[ECS](https://youtype.github.io/types_boto3_docs/types_boto3_ecs/) service.
- `types-boto3[efs]` - Type annotations for
[EFS](https://youtype.github.io/types_boto3_docs/types_boto3_efs/) service.
- `types-boto3[eks]` - Type annotations for
[EKS](https://youtype.github.io/types_boto3_docs/types_boto3_eks/) service.
- `types-boto3[eks-auth]` - Type annotations for
[EKSAuth](https://youtype.github.io/types_boto3_docs/types_boto3_eks_auth/)
service.
- `types-boto3[elasticache]` - Type annotations for
[ElastiCache](https://youtype.github.io/types_boto3_docs/types_boto3_elasticache/)
service.
- `types-boto3[elasticbeanstalk]` - Type annotations for
[ElasticBeanstalk](https://youtype.github.io/types_boto3_docs/types_boto3_elasticbeanstalk/)
service.
- `types-boto3[elb]` - Type annotations for
[ElasticLoadBalancing](https://youtype.github.io/types_boto3_docs/types_boto3_elb/)
service.
- `types-boto3[elbv2]` - Type annotations for
[ElasticLoadBalancingv2](https://youtype.github.io/types_boto3_docs/types_boto3_elbv2/)
service.
- `types-boto3[emr]` - Type annotations for
[EMR](https://youtype.github.io/types_boto3_docs/types_boto3_emr/) service.
- `types-boto3[emr-containers]` - Type annotations for
[EMRContainers](https://youtype.github.io/types_boto3_docs/types_boto3_emr_containers/)
service.
- `types-boto3[emr-serverless]` - Type annotations for
[EMRServerless](https://youtype.github.io/types_boto3_docs/types_boto3_emr_serverless/)
service.
- `types-boto3[entityresolution]` - Type annotations for
[EntityResolution](https://youtype.github.io/types_boto3_docs/types_boto3_entityresolution/)
service.
- `types-boto3[es]` - Type annotations for
[ElasticsearchService](https://youtype.github.io/types_boto3_docs/types_boto3_es/)
service.
- `types-boto3[events]` - Type annotations for
[EventBridge](https://youtype.github.io/types_boto3_docs/types_boto3_events/)
service.
- `types-boto3[evs]` - Type annotations for
[EVS](https://youtype.github.io/types_boto3_docs/types_boto3_evs/) service.
- `types-boto3[finspace]` - Type annotations for
[Finspace](https://youtype.github.io/types_boto3_docs/types_boto3_finspace/)
service.
- `types-boto3[finspace-data]` - Type annotations for
[FinSpaceData](https://youtype.github.io/types_boto3_docs/types_boto3_finspace_data/)
service.
- `types-boto3[firehose]` - Type annotations for
[Firehose](https://youtype.github.io/types_boto3_docs/types_boto3_firehose/)
service.
- `types-boto3[fis]` - Type annotations for
[FIS](https://youtype.github.io/types_boto3_docs/types_boto3_fis/) service.
- `types-boto3[fms]` - Type annotations for
[FMS](https://youtype.github.io/types_boto3_docs/types_boto3_fms/) service.
- `types-boto3[forecast]` - Type annotations for
[ForecastService](https://youtype.github.io/types_boto3_docs/types_boto3_forecast/)
service.
- `types-boto3[forecastquery]` - Type annotations for
[ForecastQueryService](https://youtype.github.io/types_boto3_docs/types_boto3_forecastquery/)
service.
- `types-boto3[frauddetector]` - Type annotations for
[FraudDetector](https://youtype.github.io/types_boto3_docs/types_boto3_frauddetector/)
service.
- `types-boto3[freetier]` - Type annotations for
[FreeTier](https://youtype.github.io/types_boto3_docs/types_boto3_freetier/)
service.
- `types-boto3[fsx]` - Type annotations for
[FSx](https://youtype.github.io/types_boto3_docs/types_boto3_fsx/) service.
- `types-boto3[gamelift]` - Type annotations for
[GameLift](https://youtype.github.io/types_boto3_docs/types_boto3_gamelift/)
service.
- `types-boto3[gameliftstreams]` - Type annotations for
[GameLiftStreams](https://youtype.github.io/types_boto3_docs/types_boto3_gameliftstreams/)
service.
- `types-boto3[geo-maps]` - Type annotations for
[LocationServiceMapsV2](https://youtype.github.io/types_boto3_docs/types_boto3_geo_maps/)
service.
- `types-boto3[geo-places]` - Type annotations for
[LocationServicePlacesV2](https://youtype.github.io/types_boto3_docs/types_boto3_geo_places/)
service.
- `types-boto3[geo-routes]` - Type annotations for
[LocationServiceRoutesV2](https://youtype.github.io/types_boto3_docs/types_boto3_geo_routes/)
service.
- `types-boto3[glacier]` - Type annotations for
[Glacier](https://youtype.github.io/types_boto3_docs/types_boto3_glacier/)
service.
- `types-boto3[globalaccelerator]` - Type annotations for
[GlobalAccelerator](https://youtype.github.io/types_boto3_docs/types_boto3_globalaccelerator/)
service.
- `types-boto3[glue]` - Type annotations for
[Glue](https://youtype.github.io/types_boto3_docs/types_boto3_glue/) service.
- `types-boto3[grafana]` - Type annotations for
[ManagedGrafana](https://youtype.github.io/types_boto3_docs/types_boto3_grafana/)
service.
- `types-boto3[greengrass]` - Type annotations for
[Greengrass](https://youtype.github.io/types_boto3_docs/types_boto3_greengrass/)
service.
- `types-boto3[greengrassv2]` - Type annotations for
[GreengrassV2](https://youtype.github.io/types_boto3_docs/types_boto3_greengrassv2/)
service.
- `types-boto3[groundstation]` - Type annotations for
[GroundStation](https://youtype.github.io/types_boto3_docs/types_boto3_groundstation/)
service.
- `types-boto3[guardduty]` - Type annotations for
[GuardDuty](https://youtype.github.io/types_boto3_docs/types_boto3_guardduty/)
service.
- `types-boto3[health]` - Type annotations for
[Health](https://youtype.github.io/types_boto3_docs/types_boto3_health/)
service.
- `types-boto3[healthlake]` - Type annotations for
[HealthLake](https://youtype.github.io/types_boto3_docs/types_boto3_healthlake/)
service.
- `types-boto3[iam]` - Type annotations for
[IAM](https://youtype.github.io/types_boto3_docs/types_boto3_iam/) service.
- `types-boto3[identitystore]` - Type annotations for
[IdentityStore](https://youtype.github.io/types_boto3_docs/types_boto3_identitystore/)
service.
- `types-boto3[imagebuilder]` - Type annotations for
[Imagebuilder](https://youtype.github.io/types_boto3_docs/types_boto3_imagebuilder/)
service.
- `types-boto3[importexport]` - Type annotations for
[ImportExport](https://youtype.github.io/types_boto3_docs/types_boto3_importexport/)
service.
- `types-boto3[inspector]` - Type annotations for
[Inspector](https://youtype.github.io/types_boto3_docs/types_boto3_inspector/)
service.
- `types-boto3[inspector-scan]` - Type annotations for
[Inspector Scan](https://youtype.github.io/types_boto3_docs/types_boto3_inspector_scan/)
service. | text/markdown | null | Vlad Emelianov <vlad.emelianov.nz@gmail.com> | null | null | null | boto3, boto3-stubs, type-annotations, mypy, typeshed, autocomplete | [
"Development Status :: 5 - Production/Stable",
"Intended Audience :: Developers",
"Environment :: Console",
"Natural Language :: English",
"Operating System :: OS Independent",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.9",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: 3.13",
"Programming Language :: Python :: 3.14",
"Programming Language :: Python :: 3 :: Only",
"Programming Language :: Python :: Implementation :: CPython",
"Typing :: Stubs Only"
] | [
"any"
] | null | null | >=3.9 | [] | [] | [] | [
"botocore-stubs",
"types-s3transfer",
"typing-extensions>=4.1.0; python_version < \"3.12\"",
"types-boto3-full<1.43.0,>=1.42.0; extra == \"full\"",
"boto3==1.42.54; extra == \"boto3\"",
"types-boto3-accessanalyzer<1.43.0,>=1.42.0; extra == \"all\"",
"types-boto3-account<1.43.0,>=1.42.0; extra == \"all\"",
"types-boto3-acm<1.43.0,>=1.42.0; extra == \"all\"",
"types-boto3-acm-pca<1.43.0,>=1.42.0; extra == \"all\"",
"types-boto3-aiops<1.43.0,>=1.42.0; extra == \"all\"",
"types-boto3-amp<1.43.0,>=1.42.0; extra == \"all\"",
"types-boto3-amplify<1.43.0,>=1.42.0; extra == \"all\"",
"types-boto3-amplifybackend<1.43.0,>=1.42.0; extra == \"all\"",
"types-boto3-amplifyuibuilder<1.43.0,>=1.42.0; extra == \"all\"",
"types-boto3-apigateway<1.43.0,>=1.42.0; extra == \"all\"",
"types-boto3-apigatewaymanagementapi<1.43.0,>=1.42.0; extra == \"all\"",
"types-boto3-apigatewayv2<1.43.0,>=1.42.0; extra == \"all\"",
"types-boto3-appconfig<1.43.0,>=1.42.0; extra == \"all\"",
"types-boto3-appconfigdata<1.43.0,>=1.42.0; extra == \"all\"",
"types-boto3-appfabric<1.43.0,>=1.42.0; extra == \"all\"",
"types-boto3-appflow<1.43.0,>=1.42.0; extra == \"all\"",
"types-boto3-appintegrations<1.43.0,>=1.42.0; extra == \"all\"",
"types-boto3-application-autoscaling<1.43.0,>=1.42.0; extra == \"all\"",
"types-boto3-application-insights<1.43.0,>=1.42.0; extra == \"all\"",
"types-boto3-application-signals<1.43.0,>=1.42.0; extra == \"all\"",
"types-boto3-applicationcostprofiler<1.43.0,>=1.42.0; extra == \"all\"",
"types-boto3-appmesh<1.43.0,>=1.42.0; extra == \"all\"",
"types-boto3-apprunner<1.43.0,>=1.42.0; extra == \"all\"",
"types-boto3-appstream<1.43.0,>=1.42.0; extra == \"all\"",
"types-boto3-appsync<1.43.0,>=1.42.0; extra == \"all\"",
"types-boto3-arc-region-switch<1.43.0,>=1.42.0; extra == \"all\"",
"types-boto3-arc-zonal-shift<1.43.0,>=1.42.0; extra == \"all\"",
"types-boto3-artifact<1.43.0,>=1.42.0; extra == \"all\"",
"types-boto3-athena<1.43.0,>=1.42.0; extra == \"all\"",
"types-boto3-auditmanager<1.43.0,>=1.42.0; extra == \"all\"",
"types-boto3-autoscaling<1.43.0,>=1.42.0; extra == \"all\"",
"types-boto3-autoscaling-plans<1.43.0,>=1.42.0; extra == \"all\"",
"types-boto3-b2bi<1.43.0,>=1.42.0; extra == \"all\"",
"types-boto3-backup<1.43.0,>=1.42.0; extra == \"all\"",
"types-boto3-backup-gateway<1.43.0,>=1.42.0; extra == \"all\"",
"types-boto3-backupsearch<1.43.0,>=1.42.0; extra == \"all\"",
"types-boto3-batch<1.43.0,>=1.42.0; extra == \"all\"",
"types-boto3-bcm-dashboards<1.43.0,>=1.42.0; extra == \"all\"",
"types-boto3-bcm-data-exports<1.43.0,>=1.42.0; extra == \"all\"",
"types-boto3-bcm-pricing-calculator<1.43.0,>=1.42.0; extra == \"all\"",
"types-boto3-bcm-recommended-actions<1.43.0,>=1.42.0; extra == \"all\"",
"types-boto3-bedrock<1.43.0,>=1.42.0; extra == \"all\"",
"types-boto3-bedrock-agent<1.43.0,>=1.42.0; extra == \"all\"",
"types-boto3-bedrock-agent-runtime<1.43.0,>=1.42.0; extra == \"all\"",
"types-boto3-bedrock-agentcore<1.43.0,>=1.42.0; extra == \"all\"",
"types-boto3-bedrock-agentcore-control<1.43.0,>=1.42.0; extra == \"all\"",
"types-boto3-bedrock-data-automation<1.43.0,>=1.42.0; extra == \"all\"",
"types-boto3-bedrock-data-automation-runtime<1.43.0,>=1.42.0; extra == \"all\"",
"types-boto3-bedrock-runtime<1.43.0,>=1.42.0; extra == \"all\"",
"types-boto3-billing<1.43.0,>=1.42.0; extra == \"all\"",
"types-boto3-billingconductor<1.43.0,>=1.42.0; extra == \"all\"",
"types-boto3-braket<1.43.0,>=1.42.0; extra == \"all\"",
"types-boto3-budgets<1.43.0,>=1.42.0; extra == \"all\"",
"types-boto3-ce<1.43.0,>=1.42.0; extra == \"all\"",
"types-boto3-chatbot<1.43.0,>=1.42.0; extra == \"all\"",
"types-boto3-chime<1.43.0,>=1.42.0; extra == \"all\"",
"types-boto3-chime-sdk-identity<1.43.0,>=1.42.0; extra == \"all\"",
"types-boto3-chime-sdk-media-pipelines<1.43.0,>=1.42.0; extra == \"all\"",
"types-boto3-chime-sdk-meetings<1.43.0,>=1.42.0; extra == \"all\"",
"types-boto3-chime-sdk-messaging<1.43.0,>=1.42.0; extra == \"all\"",
"types-boto3-chime-sdk-voice<1.43.0,>=1.42.0; extra == \"all\"",
"types-boto3-cleanrooms<1.43.0,>=1.42.0; extra == \"all\"",
"types-boto3-cleanroomsml<1.43.0,>=1.42.0; extra == \"all\"",
"types-boto3-cloud9<1.43.0,>=1.42.0; extra == \"all\"",
"types-boto3-cloudcontrol<1.43.0,>=1.42.0; extra == \"all\"",
"types-boto3-clouddirectory<1.43.0,>=1.42.0; extra == \"all\"",
"types-boto3-cloudformation<1.43.0,>=1.42.0; extra == \"all\"",
"types-boto3-cloudfront<1.43.0,>=1.42.0; extra == \"all\"",
"types-boto3-cloudfront-keyvaluestore<1.43.0,>=1.42.0; extra == \"all\"",
"types-boto3-cloudhsm<1.43.0,>=1.42.0; extra == \"all\"",
"types-boto3-cloudhsmv2<1.43.0,>=1.42.0; extra == \"all\"",
"types-boto3-cloudsearch<1.43.0,>=1.42.0; extra == \"all\"",
"types-boto3-cloudsearchdomain<1.43.0,>=1.42.0; extra == \"all\"",
"types-boto3-cloudtrail<1.43.0,>=1.42.0; extra == \"all\"",
"types-boto3-cloudtrail-data<1.43.0,>=1.42.0; extra == \"all\"",
"types-boto3-cloudwatch<1.43.0,>=1.42.0; extra == \"all\"",
"types-boto3-codeartifact<1.43.0,>=1.42.0; extra == \"all\"",
"types-boto3-codebuild<1.43.0,>=1.42.0; extra == \"all\"",
"types-boto3-codecatalyst<1.43.0,>=1.42.0; extra == \"all\"",
"types-boto3-codecommit<1.43.0,>=1.42.0; extra == \"all\"",
"types-boto3-codeconnections<1.43.0,>=1.42.0; extra == \"all\"",
"types-boto3-codedeploy<1.43.0,>=1.42.0; extra == \"all\"",
"types-boto3-codeguru-reviewer<1.43.0,>=1.42.0; extra == \"all\"",
"types-boto3-codeguru-security<1.43.0,>=1.42.0; extra == \"all\"",
"types-boto3-codeguruprofiler<1.43.0,>=1.42.0; extra == \"all\"",
"types-boto3-codepipeline<1.43.0,>=1.42.0; extra == \"all\"",
"types-boto3-codestar-connections<1.43.0,>=1.42.0; extra == \"all\"",
"types-boto3-codestar-notifications<1.43.0,>=1.42.0; extra == \"all\"",
"types-boto3-cognito-identity<1.43.0,>=1.42.0; extra == \"all\"",
"types-boto3-cognito-idp<1.43.0,>=1.42.0; extra == \"all\"",
"types-boto3-cognito-sync<1.43.0,>=1.42.0; extra == \"all\"",
"types-boto3-comprehend<1.43.0,>=1.42.0; extra == \"all\"",
"types-boto3-comprehendmedical<1.43.0,>=1.42.0; extra == \"all\"",
"types-boto3-compute-optimizer<1.43.0,>=1.42.0; extra == \"all\"",
"types-boto3-compute-optimizer-automation<1.43.0,>=1.42.0; extra == \"all\"",
"types-boto3-config<1.43.0,>=1.42.0; extra == \"all\"",
"types-boto3-connect<1.43.0,>=1.42.0; extra == \"all\"",
"types-boto3-connect-contact-lens<1.43.0,>=1.42.0; extra == \"all\"",
"types-boto3-connectcampaigns<1.43.0,>=1.42.0; extra == \"all\"",
"types-boto3-connectcampaignsv2<1.43.0,>=1.42.0; extra == \"all\"",
"types-boto3-connectcases<1.43.0,>=1.42.0; extra == \"all\"",
"types-boto3-connectparticipant<1.43.0,>=1.42.0; extra == \"all\"",
"types-boto3-controlcatalog<1.43.0,>=1.42.0; extra == \"all\"",
"types-boto3-controltower<1.43.0,>=1.42.0; extra == \"all\"",
"types-boto3-cost-optimization-hub<1.43.0,>=1.42.0; extra == \"all\"",
"types-boto3-cur<1.43.0,>=1.42.0; extra == \"all\"",
"types-boto3-customer-profiles<1.43.0,>=1.42.0; extra == \"all\"",
"types-boto3-databrew<1.43.0,>=1.42.0; extra == \"all\"",
"types-boto3-dataexchange<1.43.0,>=1.42.0; extra == \"all\"",
"types-boto3-datapipeline<1.43.0,>=1.42.0; extra == \"all\"",
"types-boto3-datasync<1.43.0,>=1.42.0; extra == \"all\"",
"types-boto3-datazone<1.43.0,>=1.42.0; extra == \"all\"",
"types-boto3-dax<1.43.0,>=1.42.0; extra == \"all\"",
"types-boto3-deadline<1.43.0,>=1.42.0; extra == \"all\"",
"types-boto3-detective<1.43.0,>=1.42.0; extra == \"all\"",
"types-boto3-devicefarm<1.43.0,>=1.42.0; extra == \"all\"",
"types-boto3-devops-guru<1.43.0,>=1.42.0; extra == \"all\"",
"types-boto3-directconnect<1.43.0,>=1.42.0; extra == \"all\"",
"types-boto3-discovery<1.43.0,>=1.42.0; extra == \"all\"",
"types-boto3-dlm<1.43.0,>=1.42.0; extra == \"all\"",
"types-boto3-dms<1.43.0,>=1.42.0; extra == \"all\"",
"types-boto3-docdb<1.43.0,>=1.42.0; extra == \"all\"",
"types-boto3-docdb-elastic<1.43.0,>=1.42.0; extra == \"all\"",
"types-boto3-drs<1.43.0,>=1.42.0; extra == \"all\"",
"types-boto3-ds<1.43.0,>=1.42.0; extra == \"all\"",
"types-boto3-ds-data<1.43.0,>=1.42.0; extra == \"all\"",
"types-boto3-dsql<1.43.0,>=1.42.0; extra == \"all\"",
"types-boto3-dynamodb<1.43.0,>=1.42.0; extra == \"all\"",
"types-boto3-dynamodbstreams<1.43.0,>=1.42.0; extra == \"all\"",
"types-boto3-ebs<1.43.0,>=1.42.0; extra == \"all\"",
"types-boto3-ec2<1.43.0,>=1.42.0; extra == \"all\"",
"types-boto3-ec2-instance-connect<1.43.0,>=1.42.0; extra == \"all\"",
"types-boto3-ecr<1.43.0,>=1.42.0; extra == \"all\"",
"types-boto3-ecr-public<1.43.0,>=1.42.0; extra == \"all\"",
"types-boto3-ecs<1.43.0,>=1.42.0; extra == \"all\"",
"types-boto3-efs<1.43.0,>=1.42.0; extra == \"all\"",
"types-boto3-eks<1.43.0,>=1.42.0; extra == \"all\"",
"types-boto3-eks-auth<1.43.0,>=1.42.0; extra == \"all\"",
"types-boto3-elasticache<1.43.0,>=1.42.0; extra == \"all\"",
"types-boto3-elasticbeanstalk<1.43.0,>=1.42.0; extra == \"all\"",
"types-boto3-elb<1.43.0,>=1.42.0; extra == \"all\"",
"types-boto3-elbv2<1.43.0,>=1.42.0; extra == \"all\"",
"types-boto3-emr<1.43.0,>=1.42.0; extra == \"all\"",
"types-boto3-emr-containers<1.43.0,>=1.42.0; extra == \"all\"",
"types-boto3-emr-serverless<1.43.0,>=1.42.0; extra == \"all\"",
"types-boto3-entityresolution<1.43.0,>=1.42.0; extra == \"all\"",
"types-boto3-es<1.43.0,>=1.42.0; extra == \"all\"",
"types-boto3-events<1.43.0,>=1.42.0; extra == \"all\"",
"types-boto3-evs<1.43.0,>=1.42.0; extra == \"all\"",
"types-boto3-finspace<1.43.0,>=1.42.0; extra == \"all\"",
"types-boto3-finspace-data<1.43.0,>=1.42.0; extra == \"all\"",
"types-boto3-firehose<1.43.0,>=1.42.0; extra == \"all\"",
"types-boto3-fis<1.43.0,>=1.42.0; extra == \"all\"",
"types-boto3-fms<1.43.0,>=1.42.0; extra == \"all\"",
"types-boto3-forecast<1.43.0,>=1.42.0; extra == \"all\"",
"types-boto3-forecastquery<1.43.0,>=1.42.0; extra == \"all\"",
"types-boto3-frauddetector<1.43.0,>=1.42.0; extra == \"all\"",
"types-boto3-freetier<1.43.0,>=1.42.0; extra == \"all\"",
"types-boto3-fsx<1.43.0,>=1.42.0; extra == \"all\"",
"types-boto3-gamelift<1.43.0,>=1.42.0; extra == \"all\"",
"types-boto3-gameliftstreams<1.43.0,>=1.42.0; extra == \"all\"",
"types-boto3-geo-maps<1.43.0,>=1.42.0; extra == \"all\"",
"types-boto3-geo-places<1.43.0,>=1.42.0; extra == \"all\"",
"types-boto3-geo-routes<1.43.0,>=1.42.0; extra == \"all\"",
"types-boto3-glacier<1.43.0,>=1.42.0; extra == \"all\"",
"types-boto3-globalaccelerator<1.43.0,>=1.42.0; extra == \"all\"",
"types-boto3-glue<1.43.0,>=1.42.0; extra == \"all\"",
"types-boto3-grafana<1.43.0,>=1.42.0; extra == \"all\"",
"types-boto3-greengrass<1.43.0,>=1.42.0; extra == \"all\"",
"types-boto3-greengrassv2<1.43.0,>=1.42.0; extra == \"all\"",
"types-boto3-groundstation<1.43.0,>=1.42.0; extra == \"all\"",
"types-boto3-guardduty<1.43.0,>=1.42.0; extra == \"all\"",
"types-boto3-health<1.43.0,>=1.42.0; extra == \"all\"",
"types-boto3-healthlake<1.43.0,>=1.42.0; extra == \"all\"",
"types-boto3-iam<1.43.0,>=1.42.0; extra == \"all\"",
"types-boto3-identitystore<1.43.0,>=1.42.0; extra == \"all\"",
"types-boto3-imagebuilder<1.43.0,>=1.42.0; extra == \"all\"",
"types-boto3-importexport<1.43.0,>=1.42.0; extra == \"all\"",
"types-boto3-inspector<1.43.0,>=1.42.0; extra == \"all\"",
"types-boto3-inspector-scan<1.43.0,>=1.42.0; extra == \"all\"",
"types-boto3-inspector2<1.43.0,>=1.42.0; extra == \"all\"",
"types-boto3-internetmonitor<1.43.0,>=1.42.0; extra == \"all\"",
"types-boto3-invoicing<1.43.0,>=1.42.0; extra == \"all\"",
"types-boto3-iot<1.43.0,>=1.42.0; extra == \"all\"",
"types-boto3-iot-data<1.43.0,>=1.42.0; extra == \"all\"",
"types-boto3-iot-jobs-data<1.43.0,>=1.42.0; extra == \"all\"",
"types-boto3-iot-managed-integrations<1.43.0,>=1.42.0; extra == \"all\"",
"types-boto3-iotdeviceadvisor<1.43.0,>=1.42.0; extra == \"all\"",
"types-boto3-iotevents<1.43.0,>=1.42.0; extra == \"all\"",
"types-boto3-iotevents-data<1.43.0,>=1.42.0; extra == \"all\"",
"types-boto3-iotfleetwise<1.43.0,>=1.42.0; extra == \"all\"",
"types-boto3-iotsecuretunneling<1.43.0,>=1.42.0; extra == \"all\"",
"types-boto3-iotsitewise<1.43.0,>=1.42.0; extra == \"all\"",
"types-boto3-iotthingsgraph<1.43.0,>=1.42.0; extra == \"all\"",
"types-boto3-iottwinmaker<1.43.0,>=1.42.0; extra == \"all\"",
"types-boto3-iotwireless<1.43.0,>=1.42.0; extra == \"all\"",
"types-boto3-ivs<1.43.0,>=1.42.0; extra == \"all\"",
"types-boto3-ivs-realtime<1.43.0,>=1.42.0; extra == \"all\"",
"types-boto3-ivschat<1.43.0,>=1.42.0; extra == \"all\"",
"types-boto3-kafka<1.43.0,>=1.42.0; extra == \"all\"",
"types-boto3-kafkaconnect<1.43.0,>=1.42.0; extra == \"all\"",
"types-boto3-kendra<1.43.0,>=1.42.0; extra == \"all\"",
"types-boto3-kendra-ranking<1.43.0,>=1.42.0; extra == \"all\"",
"types-boto3-keyspaces<1.43.0,>=1.42.0; extra == \"all\"",
"types-boto3-keyspacesstreams<1.43.0,>=1.42.0; extra == \"all\"",
"types-boto3-kinesis<1.43.0,>=1.42.0; extra == \"all\"",
"types-boto3-kinesis-video-archived-media<1.43.0,>=1.42.0; extra == \"all\"",
"types-boto3-kinesis-video-media<1.43.0,>=1.42.0; extra == \"all\"",
"types-boto3-kinesis-video-signaling<1.43.0,>=1.42.0; extra == \"all\"",
"types-boto3-kinesis-video-webrtc-storage<1.43.0,>=1.42.0; extra == \"all\"",
"types-boto3-kinesisanalytics<1.43.0,>=1.42.0; extra == \"all\"",
"types-boto3-kinesisanalyticsv2<1.43.0,>=1.42.0; extra == \"all\"",
"types-boto3-kinesisvideo<1.43.0,>=1.42.0; extra == \"all\"",
"types-boto3-kms<1.43.0,>=1.42.0; extra == \"all\"",
"types-boto3-lakeformation<1.43.0,>=1.42.0; extra == \"all\"",
"types-boto3-lambda<1.43.0,>=1.42.0; extra == \"all\"",
"types-boto3-launch-wizard<1.43.0,>=1.42.0; extra == \"all\"",
"types-boto3-lex-models<1.43.0,>=1.42.0; extra == \"all\"",
"types-boto3-lex-runtime<1.43.0,>=1.42.0; extra == \"all\"",
"types-boto3-lexv2-models<1.43.0,>=1.42.0; extra == \"all\"",
"types-boto3-lexv2-runtime<1.43.0,>=1.42.0; extra == \"all\"",
"types-boto3-license-manager<1.43.0,>=1.42.0; extra == \"all\"",
"types-boto3-license-manager-linux-subscriptions<1.43.0,>=1.42.0; extra == \"all\"",
"types-boto3-license-manager-user-subscriptions<1.43.0,>=1.42.0; extra == \"all\"",
"types-boto3-lightsail<1.43.0,>=1.42.0; extra == \"all\"",
"types-boto3-location<1.43.0,>=1.42.0; extra == \"all\"",
"types-boto3-logs<1.43.0,>=1.42.0; extra == \"all\"",
"types-boto3-lookoutequipment<1.43.0,>=1.42.0; extra == \"all\"",
"types-boto3-m2<1.43.0,>=1.42.0; extra == \"all\"",
"types-boto3-machinelearning<1.43.0,>=1.42.0; extra == \"all\"",
"types-boto3-macie2<1.43.0,>=1.42.0; extra == \"all\"",
"types-boto3-mailmanager<1.43.0,>=1.42.0; extra == \"all\"",
"types-boto3-managedblockchain<1.43.0,>=1.42.0; extra == \"all\"",
"types-boto3-managedblockchain-query<1.43.0,>=1.42.0; extra == \"all\"",
"types-boto3-marketplace-agreement<1.43.0,>=1.42.0; extra == \"all\"",
"types-boto3-marketplace-catalog<1.43.0,>=1.42.0; extra == \"all\"",
"types-boto3-marketplace-deployment<1.43.0,>=1.42.0; extra == \"all\"",
"types-boto3-marketplace-entitlement<1.43.0,>=1.42.0; extra == \"all\"",
"types-boto3-marketplace-reporting<1.43.0,>=1.42.0; extra == \"all\"",
"types-boto3-marketplacecommerceanalytics<1.43.0,>=1.42.0; extra == \"all\"",
"types-boto3-mediaconnect<1.43.0,>=1.42.0; extra == \"all\"",
"types-boto3-mediaconvert<1.43.0,>=1.42.0; extra == \"all\"",
"types-boto3-medialive<1.43.0,>=1.42.0; extra == \"all\"",
"types-boto3-mediapackage<1.43.0,>=1.42.0; extra == \"all\"",
"types-boto3-mediapackage-vod<1.43.0,>=1.42.0; extra == \"all\"",
"types-boto3-mediapackagev2<1.43.0,>=1.42.0; extra == \"all\"",
"types-boto3-mediastore<1.43.0,>=1.42.0; extra == \"all\"",
"types-boto3-mediastore-data<1.43.0,>=1.42.0; extra == \"all\"",
"types-boto3-mediatailor<1.43.0,>=1.42.0; extra == \"all\"",
"types-boto3-medical-imaging<1.43.0,>=1.42.0; extra == \"all\"",
"types-boto3-memorydb<1.43.0,>=1.42.0; extra == \"all\"",
"types-boto3-meteringmarketplace<1.43.0,>=1.42.0; extra == \"all\"",
"types-boto3-mgh<1.43.0,>=1.42.0; extra == \"all\"",
"types-boto3-mgn<1.43.0,>=1.42.0; extra == \"all\"",
"types-boto3-migration-hub-refactor-spaces<1.43.0,>=1.42.0; extra == \"all\"",
"types-boto3-migrationhub-config<1.43.0,>=1.42.0; extra == \"all\"",
"types-boto3-migrationhuborchestrator<1.43.0,>=1.42.0; extra == \"all\"",
"types-boto3-migrationhubstrategy<1.43.0,>=1.42.0; extra == \"all\"",
"types-boto3-mpa<1.43.0,>=1.42.0; extra == \"all\"",
"types-boto3-mq<1.43.0,>=1.42.0; extra == \"all\"",
"types-boto3-mturk<1.43.0,>=1.42.0; extra == \"all\"",
"types-boto3-mwaa<1.43.0,>=1.42.0; extra == \"all\"",
"types-boto3-mwaa-serverless<1.43.0,>=1.42.0; extra == \"all\"",
"types-boto3-neptune<1.43.0,>=1.42.0; extra == \"all\"",
"types-boto3-neptune-graph<1.43.0,>=1.42.0; extra == \"all\"",
"types-boto3-neptunedata<1.43.0,>=1.42.0; extra == \"all\"",
"types-boto3-network-firewall<1.43.0,>=1.42.0; extra == \"all\"",
"types-boto3-networkflowmonitor<1.43.0,>=1.42.0; extra == \"all\"",
"types-boto3-networkmanager<1.43.0,>=1.42.0; extra == \"all\"",
"types-boto3-networkmonitor<1.43.0,>=1.42.0; extra == \"all\"",
"types-boto3-notifications<1.43.0,>=1.42.0; extra == \"all\"",
"types-boto3-notificationscontacts<1.43.0,>=1.42.0; extra == \"all\"",
"types-boto3-nova-act<1.43.0,>=1.42.0; extra == \"all\"",
"types-boto3-oam<1.43.0,>=1.42.0; extra == \"all\"",
"types-boto3-observabilityadmin<1.43.0,>=1.42.0; extra == \"all\"",
"types-boto3-odb<1.43.0,>=1.42.0; extra == \"all\"",
"types-boto3-omics<1.43.0,>=1.42.0; extra == \"all\"",
"types-boto3-opensearch<1.43.0,>=1.42.0; extra == \"all\"",
"types-boto3-opensearchserverless<1.43.0,>=1.42.0; extra == \"all\"",
"types-boto3-organizations<1.43.0,>=1.42.0; extra == \"all\"",
"types-boto3-osis<1.43.0,>=1.42.0; extra == \"all\"",
"types-boto3-outposts<1.43.0,>=1.42.0; extra == \"all\"",
"types-boto3-panorama<1.43.0,>=1.42.0; extra == \"all\"",
"types-boto3-partnercentral-account<1.43.0,>=1.42.0; extra == \"all\"",
"types-boto3-partnercentral-benefits<1.43.0,>=1.42.0; extra == \"all\"",
"types-boto3-partnercentral-channel<1.43.0,>=1.42.0; extra == \"all\"",
"types-boto3-partnercentral-selling<1.43.0,>=1.42.0; extra == \"all\"",
"types-boto3-payment-cryptography<1.43.0,>=1.42.0; extra == \"all\"",
"types-boto3-payment-cryptography-data<1.43.0,>=1.42.0; extra == \"all\"",
"types-boto3-pca-connector-ad<1.43.0,>=1.42.0; extra == \"all\"",
"types-boto3-pca-connector-scep<1.43.0,>=1.42.0; extra == \"all\"",
"types-boto3-pcs<1.43.0,>=1.42.0; extra == \"all\"",
"types-boto3-personalize<1.43.0,>=1.42.0; extra == \"all\"",
"types-boto3-personalize-events<1.43.0,>=1.42.0; extra == \"all\"",
"types-boto3-personalize-runtime<1.43.0,>=1.42.0; extra == \"all\"",
"types-boto3-pi<1.43.0,>=1.42.0; extra == \"all\"",
"types-boto3-pinpoint<1.43.0,>=1.42.0; extra == \"all\"",
"types-boto3-pinpoint-email<1.43.0,>=1.42.0; extra == \"all\"",
"types-boto3-pinpoint-sms-voice<1.43.0,>=1.42.0; extra == \"all\"",
"types-boto3-pinpoint-sms-voice-v2<1.43.0,>=1.42.0; extra == \"all\"",
"types-boto3-pipes<1.43.0,>=1.42.0; extra == \"all\"",
"types-boto3-polly<1.43.0,>=1.42.0; extra == \"all\"",
"types-boto3-pricing<1.43.0,>=1.42.0; extra == \"all\"",
"types-boto3-proton<1.43.0,>=1.42.0; extra == \"all\"",
"types-boto3-qapps<1.43.0,>=1.42.0; extra == \"all\"",
"types-boto3-qbusiness<1.43.0,>=1.42.0; extra == \"all\"",
"types-boto3-qconnect<1.43.0,>=1.42.0; extra == \"all\"",
"types-boto3-quicksight<1.43.0,>=1.42.0; extra == \"all\"",
"types-boto3-ram<1.43.0,>=1.42.0; extra == \"all\"",
"types-boto3-rbin<1.43.0,>=1.42.0; extra == \"all\"",
"types-boto3-rds<1.43.0,>=1.42.0; extra == \"all\"",
"types-boto3-rds-data<1.43.0,>=1.42.0; extra == \"all\"",
"types-boto3-redshift<1.43.0,>=1.42.0; extra == \"all\"",
"types-boto3-redshift-data<1.43.0,>=1.42.0; extra == \"all\"",
"types-boto3-redshift-serverless<1.43.0,>=1.42.0; extra == \"all\"",
"types-boto3-rekognition<1.43.0,>=1.42.0; extra == \"all\"",
"types-boto3-repostspace<1.43.0,>=1.42.0; extra == \"all\"",
"types-boto3-resiliencehub<1.43.0,>=1.42.0; extra == \"all\"",
"types-boto3-resource-explorer-2<1.43.0,>=1.42.0; extra == \"all\"",
"types-boto3-resource-groups<1.43.0,>=1.42.0; extra == \"all\"",
"types-boto3-resourcegroupstaggingapi<1.43.0,>=1.42.0; extra == \"all\"",
"types-boto3-rolesanywhere<1.43.0,>=1.42.0; extra == \"all\"",
"types-boto3-route53<1.43.0,>=1.42.0; extra == \"all\"",
"types-boto3-route53-recovery-cluster<1.43.0,>=1.42.0; extra == \"all\"",
"types-boto3-route53-recovery-control-config<1.43.0,>=1.42.0; extra == \"all\"",
"types-boto3-route53-recovery-readiness<1.43.0,>=1.42.0; extra == \"all\"",
"types-boto3-route53domains<1.43.0,>=1.42.0; extra == \"all\"",
"types-boto3-route53globalresolver<1.43.0,>=1.42.0; extra == \"all\"",
"types-boto3-route53profiles<1.43.0,>=1.42.0; extra == \"all\"",
"types-boto3-route53resolver<1.43.0,>=1.42.0; extra == \"all\"",
"types-boto3-rtbfabric<1.43.0,>=1.42.0; extra == \"all\"",
"types-boto3-rum<1.43.0,>=1.42.0; extra == \"all\"",
"types-boto3-s3<1.43.0,>=1.42.0; extra == \"all\"",
"types-boto3-s3control<1.43.0,>=1.42.0; extra == \"all\"",
"types-boto3-s3outposts<1.43.0,>=1.42.0; extra == \"all\"",
"types-boto3-s3tables<1.43.0,>=1.42.0; extra == \"all\"",
"types-boto3-s3vectors<1.43.0,>=1.42.0; extra == \"all\"",
"types-boto3-sagemaker<1.43.0,>=1.42.0; extra == \"all\"",
"types-boto3-sagemaker-a2i-runtime<1.43.0,>=1.42.0; extra == \"all\"",
"types-boto3-sagemaker-edge<1.43.0,>=1.42.0; extra == \"all\"",
"types-boto3-sagemaker-featurestore-runtime<1.43.0,>=1.42.0; extra == \"all\"",
"types-boto3-sagemaker-geospatial<1.43.0,>=1.42.0; extra == \"all\"",
"types-boto3-sagemaker-metrics<1.43.0,>=1.42.0; extra == \"all\"",
"types-boto3-sagemaker-runtime<1.43.0,>=1.42.0; extra == \"all\"",
"types-boto3-savingsplans<1.43.0,>=1.42.0; extra == \"all\"",
"types-boto3-scheduler<1.43.0,>=1.42.0; extra == \"all\"",
"types-boto3-schemas<1.43.0,>=1.42.0; extra == \"all\"",
"types-boto3-sdb<1.43.0,>=1.42.0; extra == \"all\"",
"types-boto3-secretsmanager<1.43.0,>=1.42.0; extra == \"all\"",
"types-boto3-security-ir<1.43.0,>=1.42.0; extra == \"all\"",
"types-boto3-securityhub<1.43.0,>=1.42.0; extra == \"all\"",
"types-boto3-securitylake<1.43.0,>=1.42.0; extra == \"all\"",
"types-boto3-serverlessrepo<1.43.0,>=1.42.0; extra == \"all\"",
"types-boto3-service-quotas<1.43.0,>=1.42.0; extra == \"all\"",
"types-boto3-servicecatalog<1.43.0,>=1.42.0; extra == \"all\"",
"types-boto3-servicecatalog-appregistry<1.43.0,>=1.42.0; extra == \"all\"",
"types-boto3-servicediscovery<1.43.0,>=1.42.0; extra == \"all\"",
"types-boto3-ses<1.43.0,>=1.42.0; extra == \"all\"",
"types-boto3-sesv2<1.43.0,>=1.42.0; extra == \"all\"",
"types-boto3-shield<1.43.0,>=1.42.0; extra == \"all\"",
"types-boto3-signer<1.43.0,>=1.42.0; extra == \"all\"",
"types-boto3-signer-data<1.43.0,>=1.42.0; extra == \"all\"",
"types-boto3-signin<1.43.0,>=1.42.0; extra == \"all\"",
"types-boto3-simspaceweaver<1.43.0,>=1.42.0; extra == \"all\"",
"types-boto3-snow-device-management<1.43.0,>=1.42.0; extra == \"all\"",
"types-boto3-snowball<1.43.0,>=1.42.0; extra == \"all\"",
"types-boto3-sns<1.43.0,>=1.42.0; extra == \"all\"",
"types-boto3-socialmessaging<1.43.0,>=1.42.0; extra == \"all\"",
"types-boto3-sqs<1.43.0,>=1.42.0; extra == \"all\"",
"types-boto3-ssm<1.43.0,>=1.42.0; extra == \"all\"",
"types-boto3-ssm-contacts<1.43.0,>=1.42.0; extra == \"all\"",
"types-boto3-ssm-guiconnect<1.43.0,>=1.42.0; extra == \"all\"",
"types-boto3-ssm-incidents<1.43.0,>=1.42.0; extra == \"all\"",
"types-boto3-ssm-quicksetup<1.43.0,>=1.42.0; extra == \"all\"",
"types-boto3-ssm-sap<1.43.0,>=1.42.0; extra == \"all\"",
"types-boto3-sso<1.43.0,>=1.42.0; extra == \"all\"",
"types-boto3-sso-admin<1.43.0,>=1.42.0; extra == \"all\"",
"types-boto3-sso-oidc<1.43.0,>=1.42.0; extra == \"all\"",
"types-boto3-stepfunctions<1.43.0,>=1.42.0; extra == \"all\"",
"types-boto3-storagegateway<1.43.0,>=1.42.0; extra == \"all\"",
"types-boto3-sts<1.43.0,>=1.42.0; extra == \"all\"",
"types-boto3-supplychain<1.43.0,>=1.42.0; extra == \"all\"",
"types-boto3-support<1.43.0,>=1.42.0; extra == \"all\"",
"types-boto3-support-app<1.43.0,>=1.42.0; extra == \"all\"",
"types-boto3-swf<1.43.0,>=1.42.0; extra == \"all\"",
"types-boto3-synthetics<1.43.0,>=1.42.0; extra == \"all\"",
"types-boto3-taxsettings<1.43.0,>=1.42.0; extra == \"all\"",
"types-boto3-textract<1.43.0,>=1.42.0; extra == \"all\"",
"types-boto3-timestream-influxdb<1.43.0,>=1.42.0; extra == \"all\"",
"types-boto3-timestream-query<1.43.0,>=1.42.0; extra == \"all\"",
"types-boto3-timestream-write<1.43.0,>=1.42.0; extra == \"all\"",
"types-boto3-tnb<1.43.0,>=1.42.0; extra == \"all\"",
"types-boto3-transcribe<1.43.0,>=1.42.0; extra == \"all\"",
"types-boto3-transfer<1.43.0,>=1.42.0; extra == \"all\"",
"types-boto3-translate<1.43.0,>=1.42.0; extra == \"all\"",
"types-boto3-trustedadvisor<1.43.0,>=1.42.0; extra == \"all\"",
"types-boto3-verifiedpermissions<1.43.0,>=1.42.0; extra == \"all\"",
"types-boto3-voice-id<1.43.0,>=1.42.0; extra == \"all\"",
"types-boto3-vpc-lattice<1.43.0,>=1.42.0; extra == \"all\"",
"types-boto3-waf<1.43.0,>=1.42.0; extra == \"all\"",
"types-boto3-waf-regional<1.43.0,>=1.42.0; extra == \"all\"",
"types-boto3-wafv2<1.43.0,>=1.42.0; extra == \"all\"",
"types-boto3-wellarchitected<1.43.0,>=1.42.0; extra == \"all\"",
"types-boto3-wickr<1.43.0,>=1.42.0; extra == \"all\"",
"types-boto3-wisdom<1.43.0,>=1.42.0; extra == \"all\"",
"types-boto3-workdocs<1.43.0,>=1.42.0; extra == \"all\"",
"types-boto3-workmail<1.43.0,>=1.42.0; extra == \"all\"",
"types-boto3-workmailmessageflow<1.43.0,>=1.42.0; extra == \"all\"",
"types-boto3-workspaces<1.43.0,>=1.42.0; extra == \"all\"",
"types-boto3-workspaces-instances<1.43.0,>=1.42.0; extra == \"all\"",
"types-boto3-workspaces-thin-client<1.43.0,>=1.42.0; extra == \"all\"",
"types-boto3-workspaces-web<1.43.0,>=1.42.0; extra == \"all\"",
"types-boto3-xray<1.43.0,>=1.42.0; extra == \"all\"",
"types-boto3-cloudformation<1.43.0,>=1.42.0; extra == \"essential\"",
"types-boto3-dynamodb<1.43.0,>=1.42.0; extra == \"essential\"",
"types-boto3-ec2<1.43.0,>=1.42.0; extra == \"essential\"",
"types-boto3-lambda<1.43.0,>=1.42.0; extra == \"essential\"",
"types-boto3-rds<1.43.0,>=1.42.0; extra == \"essential\"",
"types-boto3-s3<1.43.0,>=1.42.0; extra == \"essential\"",
"types-boto3-sqs<1.43.0,>=1.42.0; extra == \"essential\"",
"types-boto3-accessanalyzer<1.43.0,>=1.42.0; extra == \"accessanalyzer\"",
"types-boto3-account<1.43.0,>=1.42.0; extra == \"account\"",
"types-boto3-acm<1.43.0,>=1.42.0; extra == \"acm\"",
"types-boto3-acm-pca<1.43.0,>=1.42.0; extra == \"acm-pca\"",
"types-boto3-aiops<1.43.0,>=1.42.0; extra == \"aiops\"",
"types-boto3-amp<1.43.0,>=1.42.0; extra == \"amp\"",
"types-boto3-amplify<1.43.0,>=1.42.0; extra == \"amplify\"",
"types-boto3-amplifybackend<1.43.0,>=1.42.0; extra == \"amplifybackend\"",
"types-boto3-amplifyuibuilder<1.43.0,>=1.42.0; extra == \"amplifyuibuilder\"",
"types-boto3-apigateway<1.43.0,>=1.42.0; extra == \"apigateway\"",
"types-boto3-apigatewaymanagementapi<1.43.0,>=1.42.0; extra == \"apigatewaymanagementapi\"",
"types-boto3-apigatewayv2<1.43.0,>=1.42.0; extra == \"apigatewayv2\"",
"types-boto3-appconfig<1.43.0,>=1.42.0; extra == \"appconfig\"",
"types-boto3-appconfigdata<1.43.0,>=1.42.0; extra == \"appconfigdata\"",
"types-boto3-appfabric<1.43.0,>=1.42.0; extra == \"appfabric\"",
"types-boto3-appflow<1.43.0,>=1.42.0; extra == \"appflow\"",
"types-boto3-appintegrations<1.43.0,>=1.42.0; extra == \"appintegrations\"",
"types-boto3-application-autoscaling<1.43.0,>=1.42.0; extra == \"application-autoscaling\"",
"types-boto3-application-insights<1.43.0,>=1.42.0; extra == \"application-insights\"",
"types-boto3-application-signals<1.43.0,>=1.42.0; extra == \"application-signals\"",
"types-boto3-applicationcostprofiler<1.43.0,>=1.42.0; extra == \"applicationcostprofiler\"",
"types-boto3-appmesh<1.43.0,>=1.42.0; extra == \"appmesh\"",
"types-boto3-apprunner<1.43.0,>=1.42.0; extra == \"apprunner\"",
"types-boto3-appstream<1.43.0,>=1.42.0; extra == \"appstream\"",
"types-boto3-appsync<1.43.0,>=1.42.0; extra == \"appsync\"",
"types-boto3-arc-region-switch<1.43.0,>=1.42.0; extra == \"arc-region-switch\"",
"types-boto3-arc-zonal-shift<1.43.0,>=1.42.0; extra == \"arc-zonal-shift\"",
"types-boto3-artifact<1.43.0,>=1.42.0; extra == \"artifact\"",
"types-boto3-athena<1.43.0,>=1.42.0; extra == \"athena\"",
"types-boto3-auditmanager<1.43.0,>=1.42.0; extra == \"auditmanager\"",
"types-boto3-autoscaling<1.43.0,>=1.42.0; extra == \"autoscaling\"",
"types-boto3-autoscaling-plans<1.43.0,>=1.42.0; extra == \"autoscaling-plans\"",
"types-boto3-b2bi<1.43.0,>=1.42.0; extra == \"b2bi\"",
"types-boto3-backup<1.43.0,>=1.42.0; extra == \"backup\"",
"types-boto3-backup-gateway<1.43.0,>=1.42.0; extra == \"backup-gateway\"",
"types-boto3-backupsearch<1.43.0,>=1.42.0; extra == \"backupsearch\"",
"types-boto3-batch<1.43.0,>=1.42.0; extra == \"batch\"",
"types-boto3-bcm-dashboards<1.43.0,>=1.42.0; extra == \"bcm-dashboards\"",
"types-boto3-bcm-data-exports<1.43.0,>=1.42.0; extra == \"bcm-data-exports\"",
"types-boto3-bcm-pricing-calculator<1.43.0,>=1.42.0; extra == \"bcm-pricing-calculator\"",
"types-boto3-bcm-recommended-actions<1.43.0,>=1.42.0; extra == \"bcm-recommended-actions\"",
"types-boto3-bedrock<1.43.0,>=1.42.0; extra == \"bedrock\"",
"types-boto3-bedrock-agent<1.43.0,>=1.42.0; extra == \"bedrock-agent\"",
"types-boto3-bedrock-agent-runtime<1.43.0,>=1.42.0; extra == \"bedrock-agent-runtime\"",
"types-boto3-bedrock-agentcore<1.43.0,>=1.42.0; extra == \"bedrock-agentcore\"",
"types-boto3-bedrock-agentcore-control<1.43.0,>=1.42.0; extra == \"bedrock-agentcore-control\"",
"types-boto3-bedrock-data-automation<1.43.0,>=1.42.0; extra == \"bedrock-data-automation\"",
"types-boto3-bedrock-data-automation-runtime<1.43.0,>=1.42.0; extra == \"bedrock-data-automation-runtime\"",
"types-boto3-bedrock-runtime<1.43.0,>=1.42.0; extra == \"bedrock-runtime\"",
"types-boto3-billing<1.43.0,>=1.42.0; extra == \"billing\"",
"types-boto3-billingconductor<1.43.0,>=1.42.0; extra == \"billingconductor\"",
"types-boto3-braket<1.43.0,>=1.42.0; extra == \"braket\"",
"types-boto3-budgets<1.43.0,>=1.42.0; extra == \"budgets\"",
"types-boto3-ce<1.43.0,>=1.42.0; extra == \"ce\"",
"types-boto3-chatbot<1.43.0,>=1.42.0; extra == \"chatbot\"",
"types-boto3-chime<1.43.0,>=1.42.0; extra == \"chime\"",
"types-boto3-chime-sdk-identity<1.43.0,>=1.42.0; extra == \"chime-sdk-identity\"",
"types-boto3-chime-sdk-media-pipelines<1.43.0,>=1.42.0; extra == \"chime-sdk-media-pipelines\"",
"types-boto3-chime-sdk-meetings<1.43.0,>=1.42.0; extra == \"chime-sdk-meetings\"",
"types-boto3-chime-sdk-messaging<1.43.0,>=1.42.0; extra == \"chime-sdk-messaging\"",
"types-boto3-chime-sdk-voice<1.43.0,>=1.42.0; extra == \"chime-sdk-voice\"",
"types-boto3-cleanrooms<1.43.0,>=1.42.0; extra == \"cleanrooms\"",
"types-boto3-cleanroomsml<1.43.0,>=1.42.0; extra == \"cleanroomsml\"",
"types-boto3-cloud9<1.43.0,>=1.42.0; extra == \"cloud9\"",
"types-boto3-cloudcontrol<1.43.0,>=1.42.0; extra == \"cloudcontrol\"",
"types-boto3-clouddirectory<1.43.0,>=1.42.0; extra == \"clouddirectory\"",
"types-boto3-cloudformation<1.43.0,>=1.42.0; extra == \"cloudformation\"",
"types-boto3-cloudfront<1.43.0,>=1.42.0; extra == \"cloudfront\"",
"types-boto3-cloudfront-keyvaluestore<1.43.0,>=1.42.0; extra == \"cloudfront-keyvaluestore\"",
"types-boto3-cloudhsm<1.43.0,>=1.42.0; extra == \"cloudhsm\"",
"types-boto3-cloudhsmv2<1.43.0,>=1.42.0; extra == \"cloudhsmv2\"",
"types-boto3-cloudsearch<1.43.0,>=1.42.0; extra == \"cloudsearch\"",
"types-boto3-cloudsearchdomain<1.43.0,>=1.42.0; extra == \"cloudsearchdomain\"",
"types-boto3-cloudtrail<1.43.0,>=1.42.0; extra == \"cloudtrail\"",
"types-boto3-cloudtrail-data<1.43.0,>=1.42.0; extra == \"cloudtrail-data\"",
"types-boto3-cloudwatch<1.43.0,>=1.42.0; extra == \"cloudwatch\"",
"types-boto3-codeartifact<1.43.0,>=1.42.0; extra == \"codeartifact\"",
"types-boto3-codebuild<1.43.0,>=1.42.0; extra == \"codebuild\"",
"types-boto3-codecatalyst<1.43.0,>=1.42.0; extra == \"codecatalyst\"",
"types-boto3-codecommit<1.43.0,>=1.42.0; extra == \"codecommit\"",
"types-boto3-codeconnections<1.43.0,>=1.42.0; extra == \"codeconnections\"",
"types-boto3-codedeploy<1.43.0,>=1.42.0; extra == \"codedeploy\"",
"types-boto3-codeguru-reviewer<1.43.0,>=1.42.0; extra == \"codeguru-reviewer\"",
"types-boto3-codeguru-security<1.43.0,>=1.42.0; extra == \"codeguru-security\"",
"types-boto3-codeguruprofiler<1.43.0,>=1.42.0; extra == \"codeguruprofiler\"",
"types-boto3-codepipeline<1.43.0,>=1.42.0; extra == \"codepipeline\"",
"types-boto3-codestar-connections<1.43.0,>=1.42.0; extra == \"codestar-connections\"",
"types-boto3-codestar-notifications<1.43.0,>=1.42.0; extra == \"codestar-notifications\"",
"types-boto3-cognito-identity<1.43.0,>=1.42.0; extra == \"cognito-identity\"",
"types-boto3-cognito-idp<1.43.0,>=1.42.0; extra == \"cognito-idp\"",
"types-boto3-cognito-sync<1.43.0,>=1.42.0; extra == \"cognito-sync\"",
"types-boto3-comprehend<1.43.0,>=1.42.0; extra == \"comprehend\"",
"types-boto3-comprehendmedical<1.43.0,>=1.42.0; extra == \"comprehendmedical\"",
"types-boto3-compute-optimizer<1.43.0,>=1.42.0; extra == \"compute-optimizer\"",
"types-boto3-compute-optimizer-automation<1.43.0,>=1.42.0; extra == \"compute-optimizer-automation\"",
"types-boto3-config<1.43.0,>=1.42.0; extra == \"config\"",
"types-boto3-connect<1.43.0,>=1.42.0; extra == \"connect\"",
"types-boto3-connect-contact-lens<1.43.0,>=1.42.0; extra == \"connect-contact-lens\"",
"types-boto3-connectcampaigns<1.43.0,>=1.42.0; extra == \"connectcampaigns\"",
"types-boto3-connectcampaignsv2<1.43.0,>=1.42.0; extra == \"connectcampaignsv2\"",
"types-boto3-connectcases<1.43.0,>=1.42.0; extra == \"connectcases\"",
"types-boto3-connectparticipant<1.43.0,>=1.42.0; extra == \"connectparticipant\"",
"types-boto3-controlcatalog<1.43.0,>=1.42.0; extra == \"controlcatalog\"",
"types-boto3-controltower<1.43.0,>=1.42.0; extra == \"controltower\"",
"types-boto3-cost-optimization-hub<1.43.0,>=1.42.0; extra == \"cost-optimization-hub\"",
"types-boto3-cur<1.43.0,>=1.42.0; extra == \"cur\"",
"types-boto3-customer-profiles<1.43.0,>=1.42.0; extra == \"customer-profiles\"",
"types-boto3-databrew<1.43.0,>=1.42.0; extra == \"databrew\"",
"types-boto3-dataexchange<1.43.0,>=1.42.0; extra == \"dataexchange\"",
"types-boto3-datapipeline<1.43.0,>=1.42.0; extra == \"datapipeline\"",
"types-boto3-datasync<1.43.0,>=1.42.0; extra == \"datasync\"",
"types-boto3-datazone<1.43.0,>=1.42.0; extra == \"datazone\"",
"types-boto3-dax<1.43.0,>=1.42.0; extra == \"dax\"",
"types-boto3-deadline<1.43.0,>=1.42.0; extra == \"deadline\"",
"types-boto3-detective<1.43.0,>=1.42.0; extra == \"detective\"",
"types-boto3-devicefarm<1.43.0,>=1.42.0; extra == \"devicefarm\"",
"types-boto3-devops-guru<1.43.0,>=1.42.0; extra == \"devops-guru\"",
"types-boto3-directconnect<1.43.0,>=1.42.0; extra == \"directconnect\"",
"types-boto3-discovery<1.43.0,>=1.42.0; extra == \"discovery\"",
"types-boto3-dlm<1.43.0,>=1.42.0; extra == \"dlm\"",
"types-boto3-dms<1.43.0,>=1.42.0; extra == \"dms\"",
"types-boto3-docdb<1.43.0,>=1.42.0; extra == \"docdb\"",
"types-boto3-docdb-elastic<1.43.0,>=1.42.0; extra == \"docdb-elastic\"",
"types-boto3-drs<1.43.0,>=1.42.0; extra == \"drs\"",
"types-boto3-ds<1.43.0,>=1.42.0; extra == \"ds\"",
"types-boto3-ds-data<1.43.0,>=1.42.0; extra == \"ds-data\"",
"types-boto3-dsql<1.43.0,>=1.42.0; extra == \"dsql\"",
"types-boto3-dynamodb<1.43.0,>=1.42.0; extra == \"dynamodb\"",
"types-boto3-dynamodbstreams<1.43.0,>=1.42.0; extra == \"dynamodbstreams\"",
"types-boto3-ebs<1.43.0,>=1.42.0; extra == \"ebs\"",
"types-boto3-ec2<1.43.0,>=1.42.0; extra == \"ec2\"",
"types-boto3-ec2-instance-connect<1.43.0,>=1.42.0; extra == \"ec2-instance-connect\"",
"types-boto3-ecr<1.43.0,>=1.42.0; extra == \"ecr\"",
"types-boto3-ecr-public<1.43.0,>=1.42.0; extra == \"ecr-public\"",
"types-boto3-ecs<1.43.0,>=1.42.0; extra == \"ecs\"",
"types-boto3-efs<1.43.0,>=1.42.0; extra == \"efs\"",
"types-boto3-eks<1.43.0,>=1.42.0; extra == \"eks\"",
"types-boto3-eks-auth<1.43.0,>=1.42.0; extra == \"eks-auth\"",
"types-boto3-elasticache<1.43.0,>=1.42.0; extra == \"elasticache\"",
"types-boto3-elasticbeanstalk<1.43.0,>=1.42.0; extra == \"elasticbeanstalk\"",
"types-boto3-elb<1.43.0,>=1.42.0; extra == \"elb\"",
"types-boto3-elbv2<1.43.0,>=1.42.0; extra == \"elbv2\"",
"types-boto3-emr<1.43.0,>=1.42.0; extra == \"emr\"",
"types-boto3-emr-containers<1.43.0,>=1.42.0; extra == \"emr-containers\"",
"types-boto3-emr-serverless<1.43.0,>=1.42.0; extra == \"emr-serverless\"",
"types-boto3-entityresolution<1.43.0,>=1.42.0; extra == \"entityresolution\"",
"types-boto3-es<1.43.0,>=1.42.0; extra == \"es\"",
"types-boto3-events<1.43.0,>=1.42.0; extra == \"events\"",
"types-boto3-evs<1.43.0,>=1.42.0; extra == \"evs\"",
"types-boto3-finspace<1.43.0,>=1.42.0; extra == \"finspace\"",
"types-boto3-finspace-data<1.43.0,>=1.42.0; extra == \"finspace-data\"",
"types-boto3-firehose<1.43.0,>=1.42.0; extra == \"firehose\"",
"types-boto3-fis<1.43.0,>=1.42.0; extra == \"fis\"",
"types-boto3-fms<1.43.0,>=1.42.0; extra == \"fms\"",
"types-boto3-forecast<1.43.0,>=1.42.0; extra == \"forecast\"",
"types-boto3-forecastquery<1.43.0,>=1.42.0; extra == \"forecastquery\"",
"types-boto3-frauddetector<1.43.0,>=1.42.0; extra == \"frauddetector\"",
"types-boto3-freetier<1.43.0,>=1.42.0; extra == \"freetier\"",
"types-boto3-fsx<1.43.0,>=1.42.0; extra == \"fsx\"",
"types-boto3-gamelift<1.43.0,>=1.42.0; extra == \"gamelift\"",
"types-boto3-gameliftstreams<1.43.0,>=1.42.0; extra == \"gameliftstreams\"",
"types-boto3-geo-maps<1.43.0,>=1.42.0; extra == \"geo-maps\"",
"types-boto3-geo-places<1.43.0,>=1.42.0; extra == \"geo-places\"",
"types-boto3-geo-routes<1.43.0,>=1.42.0; extra == \"geo-routes\"",
"types-boto3-glacier<1.43.0,>=1.42.0; extra == \"glacier\"",
"types-boto3-globalaccelerator<1.43.0,>=1.42.0; extra == \"globalaccelerator\"",
"types-boto3-glue<1.43.0,>=1.42.0; extra == \"glue\"",
"types-boto3-grafana<1.43.0,>=1.42.0; extra == \"grafana\"",
"types-boto3-greengrass<1.43.0,>=1.42.0; extra == \"greengrass\"",
"types-boto3-greengrassv2<1.43.0,>=1.42.0; extra == \"greengrassv2\"",
"types-boto3-groundstation<1.43.0,>=1.42.0; extra == \"groundstation\"",
"types-boto3-guardduty<1.43.0,>=1.42.0; extra == \"guardduty\"",
"types-boto3-health<1.43.0,>=1.42.0; extra == \"health\"",
"types-boto3-healthlake<1.43.0,>=1.42.0; extra == \"healthlake\"",
"types-boto3-iam<1.43.0,>=1.42.0; extra == \"iam\"",
"types-boto3-identitystore<1.43.0,>=1.42.0; extra == \"identitystore\"",
"types-boto3-imagebuilder<1.43.0,>=1.42.0; extra == \"imagebuilder\"",
"types-boto3-importexport<1.43.0,>=1.42.0; extra == \"importexport\"",
"types-boto3-inspector<1.43.0,>=1.42.0; extra == \"inspector\"",
"types-boto3-inspector-scan<1.43.0,>=1.42.0; extra == \"inspector-scan\"",
"types-boto3-inspector2<1.43.0,>=1.42.0; extra == \"inspector2\"",
"types-boto3-internetmonitor<1.43.0,>=1.42.0; extra == \"internetmonitor\"",
"types-boto3-invoicing<1.43.0,>=1.42.0; extra == \"invoicing\"",
"types-boto3-iot<1.43.0,>=1.42.0; extra == \"iot\"",
"types-boto3-iot-data<1.43.0,>=1.42.0; extra == \"iot-data\"",
"types-boto3-iot-jobs-data<1.43.0,>=1.42.0; extra == \"iot-jobs-data\"",
"types-boto3-iot-managed-integrations<1.43.0,>=1.42.0; extra == \"iot-managed-integrations\"",
"types-boto3-iotdeviceadvisor<1.43.0,>=1.42.0; extra == \"iotdeviceadvisor\"",
"types-boto3-iotevents<1.43.0,>=1.42.0; extra == \"iotevents\"",
"types-boto3-iotevents-data<1.43.0,>=1.42.0; extra == \"iotevents-data\"",
"types-boto3-iotfleetwise<1.43.0,>=1.42.0; extra == \"iotfleetwise\"",
"types-boto3-iotsecuretunneling<1.43.0,>=1.42.0; extra == \"iotsecuretunneling\"",
"types-boto3-iotsitewise<1.43.0,>=1.42.0; extra == \"iotsitewise\"",
"types-boto3-iotthingsgraph<1.43.0,>=1.42.0; extra == \"iotthingsgraph\"",
"types-boto3-iottwinmaker<1.43.0,>=1.42.0; extra == \"iottwinmaker\"",
"types-boto3-iotwireless<1.43.0,>=1.42.0; extra == \"iotwireless\"",
"types-boto3-ivs<1.43.0,>=1.42.0; extra == \"ivs\"",
"types-boto3-ivs-realtime<1.43.0,>=1.42.0; extra == \"ivs-realtime\"",
"types-boto3-ivschat<1.43.0,>=1.42.0; extra == \"ivschat\"",
"types-boto3-kafka<1.43.0,>=1.42.0; extra == \"kafka\"",
"types-boto3-kafkaconnect<1.43.0,>=1.42.0; extra == \"kafkaconnect\"",
"types-boto3-kendra<1.43.0,>=1.42.0; extra == \"kendra\"",
"types-boto3-kendra-ranking<1.43.0,>=1.42.0; extra == \"kendra-ranking\"",
"types-boto3-keyspaces<1.43.0,>=1.42.0; extra == \"keyspaces\"",
"types-boto3-keyspacesstreams<1.43.0,>=1.42.0; extra == \"keyspacesstreams\"",
"types-boto3-kinesis<1.43.0,>=1.42.0; extra == \"kinesis\"",
"types-boto3-kinesis-video-archived-media<1.43.0,>=1.42.0; extra == \"kinesis-video-archived-media\"",
"types-boto3-kinesis-video-media<1.43.0,>=1.42.0; extra == \"kinesis-video-media\"",
"types-boto3-kinesis-video-signaling<1.43.0,>=1.42.0; extra == \"kinesis-video-signaling\"",
"types-boto3-kinesis-video-webrtc-storage<1.43.0,>=1.42.0; extra == \"kinesis-video-webrtc-storage\"",
"types-boto3-kinesisanalytics<1.43.0,>=1.42.0; extra == \"kinesisanalytics\"",
"types-boto3-kinesisanalyticsv2<1.43.0,>=1.42.0; extra == \"kinesisanalyticsv2\"",
"types-boto3-kinesisvideo<1.43.0,>=1.42.0; extra == \"kinesisvideo\"",
"types-boto3-kms<1.43.0,>=1.42.0; extra == \"kms\"",
"types-boto3-lakeformation<1.43.0,>=1.42.0; extra == \"lakeformation\"",
"types-boto3-lambda<1.43.0,>=1.42.0; extra == \"lambda\"",
"types-boto3-launch-wizard<1.43.0,>=1.42.0; extra == \"launch-wizard\"",
"types-boto3-lex-models<1.43.0,>=1.42.0; extra == \"lex-models\"",
"types-boto3-lex-runtime<1.43.0,>=1.42.0; extra == \"lex-runtime\"",
"types-boto3-lexv2-models<1.43.0,>=1.42.0; extra == \"lexv2-models\"",
"types-boto3-lexv2-runtime<1.43.0,>=1.42.0; extra == \"lexv2-runtime\"",
"types-boto3-license-manager<1.43.0,>=1.42.0; extra == \"license-manager\"",
"types-boto3-license-manager-linux-subscriptions<1.43.0,>=1.42.0; extra == \"license-manager-linux-subscriptions\"",
"types-boto3-license-manager-user-subscriptions<1.43.0,>=1.42.0; extra == \"license-manager-user-subscriptions\"",
"types-boto3-lightsail<1.43.0,>=1.42.0; extra == \"lightsail\"",
"types-boto3-location<1.43.0,>=1.42.0; extra == \"location\"",
"types-boto3-logs<1.43.0,>=1.42.0; extra == \"logs\"",
"types-boto3-lookoutequipment<1.43.0,>=1.42.0; extra == \"lookoutequipment\"",
"types-boto3-m2<1.43.0,>=1.42.0; extra == \"m2\"",
"types-boto3-machinelearning<1.43.0,>=1.42.0; extra == \"machinelearning\"",
"types-boto3-macie2<1.43.0,>=1.42.0; extra == \"macie2\"",
"types-boto3-mailmanager<1.43.0,>=1.42.0; extra == \"mailmanager\"",
"types-boto3-managedblockchain<1.43.0,>=1.42.0; extra == \"managedblockchain\"",
"types-boto3-managedblockchain-query<1.43.0,>=1.42.0; extra == \"managedblockchain-query\"",
"types-boto3-marketplace-agreement<1.43.0,>=1.42.0; extra == \"marketplace-agreement\"",
"types-boto3-marketplace-catalog<1.43.0,>=1.42.0; extra == \"marketplace-catalog\"",
"types-boto3-marketplace-deployment<1.43.0,>=1.42.0; extra == \"marketplace-deployment\"",
"types-boto3-marketplace-entitlement<1.43.0,>=1.42.0; extra == \"marketplace-entitlement\"",
"types-boto3-marketplace-reporting<1.43.0,>=1.42.0; extra == \"marketplace-reporting\"",
"types-boto3-marketplacecommerceanalytics<1.43.0,>=1.42.0; extra == \"marketplacecommerceanalytics\"",
"types-boto3-mediaconnect<1.43.0,>=1.42.0; extra == \"mediaconnect\"",
"types-boto3-mediaconvert<1.43.0,>=1.42.0; extra == \"mediaconvert\"",
"types-boto3-medialive<1.43.0,>=1.42.0; extra == \"medialive\"",
"types-boto3-mediapackage<1.43.0,>=1.42.0; extra == \"mediapackage\"",
"types-boto3-mediapackage-vod<1.43.0,>=1.42.0; extra == \"mediapackage-vod\"",
"types-boto3-mediapackagev2<1.43.0,>=1.42.0; extra == \"mediapackagev2\"",
"types-boto3-mediastore<1.43.0,>=1.42.0; extra == \"mediastore\"",
"types-boto3-mediastore-data<1.43.0,>=1.42.0; extra == \"mediastore-data\"",
"types-boto3-mediatailor<1.43.0,>=1.42.0; extra == \"mediatailor\"",
"types-boto3-medical-imaging<1.43.0,>=1.42.0; extra == \"medical-imaging\"",
"types-boto3-memorydb<1.43.0,>=1.42.0; extra == \"memorydb\"",
"types-boto3-meteringmarketplace<1.43.0,>=1.42.0; extra == \"meteringmarketplace\"",
"types-boto3-mgh<1.43.0,>=1.42.0; extra == \"mgh\"",
"types-boto3-mgn<1.43.0,>=1.42.0; extra == \"mgn\"",
"types-boto3-migration-hub-refactor-spaces<1.43.0,>=1.42.0; extra == \"migration-hub-refactor-spaces\"",
"types-boto3-migrationhub-config<1.43.0,>=1.42.0; extra == \"migrationhub-config\"",
"types-boto3-migrationhuborchestrator<1.43.0,>=1.42.0; extra == \"migrationhuborchestrator\"",
"types-boto3-migrationhubstrategy<1.43.0,>=1.42.0; extra == \"migrationhubstrategy\"",
"types-boto3-mpa<1.43.0,>=1.42.0; extra == \"mpa\"",
"types-boto3-mq<1.43.0,>=1.42.0; extra == \"mq\"",
"types-boto3-mturk<1.43.0,>=1.42.0; extra == \"mturk\"",
"types-boto3-mwaa<1.43.0,>=1.42.0; extra == \"mwaa\"",
"types-boto3-mwaa-serverless<1.43.0,>=1.42.0; extra == \"mwaa-serverless\"",
"types-boto3-neptune<1.43.0,>=1.42.0; extra == \"neptune\"",
"types-boto3-neptune-graph<1.43.0,>=1.42.0; extra == \"neptune-graph\"",
"types-boto3-neptunedata<1.43.0,>=1.42.0; extra == \"neptunedata\"",
"types-boto3-network-firewall<1.43.0,>=1.42.0; extra == \"network-firewall\"",
"types-boto3-networkflowmonitor<1.43.0,>=1.42.0; extra == \"networkflowmonitor\"",
"types-boto3-networkmanager<1.43.0,>=1.42.0; extra == \"networkmanager\"",
"types-boto3-networkmonitor<1.43.0,>=1.42.0; extra == \"networkmonitor\"",
"types-boto3-notifications<1.43.0,>=1.42.0; extra == \"notifications\"",
"types-boto3-notificationscontacts<1.43.0,>=1.42.0; extra == \"notificationscontacts\"",
"types-boto3-nova-act<1.43.0,>=1.42.0; extra == \"nova-act\"",
"types-boto3-oam<1.43.0,>=1.42.0; extra == \"oam\"",
"types-boto3-observabilityadmin<1.43.0,>=1.42.0; extra == \"observabilityadmin\"",
"types-boto3-odb<1.43.0,>=1.42.0; extra == \"odb\"",
"types-boto3-omics<1.43.0,>=1.42.0; extra == \"omics\"",
"types-boto3-opensearch<1.43.0,>=1.42.0; extra == \"opensearch\"",
"types-boto3-opensearchserverless<1.43.0,>=1.42.0; extra == \"opensearchserverless\"",
"types-boto3-organizations<1.43.0,>=1.42.0; extra == \"organizations\"",
"types-boto3-osis<1.43.0,>=1.42.0; extra == \"osis\"",
"types-boto3-outposts<1.43.0,>=1.42.0; extra == \"outposts\"",
"types-boto3-panorama<1.43.0,>=1.42.0; extra == \"panorama\"",
"types-boto3-partnercentral-account<1.43.0,>=1.42.0; extra == \"partnercentral-account\"",
"types-boto3-partnercentral-benefits<1.43.0,>=1.42.0; extra == \"partnercentral-benefits\"",
"types-boto3-partnercentral-channel<1.43.0,>=1.42.0; extra == \"partnercentral-channel\"",
"types-boto3-partnercentral-selling<1.43.0,>=1.42.0; extra == \"partnercentral-selling\"",
"types-boto3-payment-cryptography<1.43.0,>=1.42.0; extra == \"payment-cryptography\"",
"types-boto3-payment-cryptography-data<1.43.0,>=1.42.0; extra == \"payment-cryptography-data\"",
"types-boto3-pca-connector-ad<1.43.0,>=1.42.0; extra == \"pca-connector-ad\"",
"types-boto3-pca-connector-scep<1.43.0,>=1.42.0; extra == \"pca-connector-scep\"",
"types-boto3-pcs<1.43.0,>=1.42.0; extra == \"pcs\"",
"types-boto3-personalize<1.43.0,>=1.42.0; extra == \"personalize\"",
"types-boto3-personalize-events<1.43.0,>=1.42.0; extra == \"personalize-events\"",
"types-boto3-personalize-runtime<1.43.0,>=1.42.0; extra == \"personalize-runtime\"",
"types-boto3-pi<1.43.0,>=1.42.0; extra == \"pi\"",
"types-boto3-pinpoint<1.43.0,>=1.42.0; extra == \"pinpoint\"",
"types-boto3-pinpoint-email<1.43.0,>=1.42.0; extra == \"pinpoint-email\"",
"types-boto3-pinpoint-sms-voice<1.43.0,>=1.42.0; extra == \"pinpoint-sms-voice\"",
"types-boto3-pinpoint-sms-voice-v2<1.43.0,>=1.42.0; extra == \"pinpoint-sms-voice-v2\"",
"types-boto3-pipes<1.43.0,>=1.42.0; extra == \"pipes\"",
"types-boto3-polly<1.43.0,>=1.42.0; extra == \"polly\"",
"types-boto3-pricing<1.43.0,>=1.42.0; extra == \"pricing\"",
"types-boto3-proton<1.43.0,>=1.42.0; extra == \"proton\"",
"types-boto3-qapps<1.43.0,>=1.42.0; extra == \"qapps\"",
"types-boto3-qbusiness<1.43.0,>=1.42.0; extra == \"qbusiness\"",
"types-boto3-qconnect<1.43.0,>=1.42.0; extra == \"qconnect\"",
"types-boto3-quicksight<1.43.0,>=1.42.0; extra == \"quicksight\"",
"types-boto3-ram<1.43.0,>=1.42.0; extra == \"ram\"",
"types-boto3-rbin<1.43.0,>=1.42.0; extra == \"rbin\"",
"types-boto3-rds<1.43.0,>=1.42.0; extra == \"rds\"",
"types-boto3-rds-data<1.43.0,>=1.42.0; extra == \"rds-data\"",
"types-boto3-redshift<1.43.0,>=1.42.0; extra == \"redshift\"",
"types-boto3-redshift-data<1.43.0,>=1.42.0; extra == \"redshift-data\"",
"types-boto3-redshift-serverless<1.43.0,>=1.42.0; extra == \"redshift-serverless\"",
"types-boto3-rekognition<1.43.0,>=1.42.0; extra == \"rekognition\"",
"types-boto3-repostspace<1.43.0,>=1.42.0; extra == \"repostspace\"",
"types-boto3-resiliencehub<1.43.0,>=1.42.0; extra == \"resiliencehub\"",
"types-boto3-resource-explorer-2<1.43.0,>=1.42.0; extra == \"resource-explorer-2\"",
"types-boto3-resource-groups<1.43.0,>=1.42.0; extra == \"resource-groups\"",
"types-boto3-resourcegroupstaggingapi<1.43.0,>=1.42.0; extra == \"resourcegroupstaggingapi\"",
"types-boto3-rolesanywhere<1.43.0,>=1.42.0; extra == \"rolesanywhere\"",
"types-boto3-route53<1.43.0,>=1.42.0; extra == \"route53\"",
"types-boto3-route53-recovery-cluster<1.43.0,>=1.42.0; extra == \"route53-recovery-cluster\"",
"types-boto3-route53-recovery-control-config<1.43.0,>=1.42.0; extra == \"route53-recovery-control-config\"",
"types-boto3-route53-recovery-readiness<1.43.0,>=1.42.0; extra == \"route53-recovery-readiness\"",
"types-boto3-route53domains<1.43.0,>=1.42.0; extra == \"route53domains\"",
"types-boto3-route53globalresolver<1.43.0,>=1.42.0; extra == \"route53globalresolver\"",
"types-boto3-route53profiles<1.43.0,>=1.42.0; extra == \"route53profiles\"",
"types-boto3-route53resolver<1.43.0,>=1.42.0; extra == \"route53resolver\"",
"types-boto3-rtbfabric<1.43.0,>=1.42.0; extra == \"rtbfabric\"",
"types-boto3-rum<1.43.0,>=1.42.0; extra == \"rum\"",
"types-boto3-s3<1.43.0,>=1.42.0; extra == \"s3\"",
"types-boto3-s3control<1.43.0,>=1.42.0; extra == \"s3control\"",
"types-boto3-s3outposts<1.43.0,>=1.42.0; extra == \"s3outposts\"",
"types-boto3-s3tables<1.43.0,>=1.42.0; extra == \"s3tables\"",
"types-boto3-s3vectors<1.43.0,>=1.42.0; extra == \"s3vectors\"",
"types-boto3-sagemaker<1.43.0,>=1.42.0; extra == \"sagemaker\"",
"types-boto3-sagemaker-a2i-runtime<1.43.0,>=1.42.0; extra == \"sagemaker-a2i-runtime\"",
"types-boto3-sagemaker-edge<1.43.0,>=1.42.0; extra == \"sagemaker-edge\"",
"types-boto3-sagemaker-featurestore-runtime<1.43.0,>=1.42.0; extra == \"sagemaker-featurestore-runtime\"",
"types-boto3-sagemaker-geospatial<1.43.0,>=1.42.0; extra == \"sagemaker-geospatial\"",
"types-boto3-sagemaker-metrics<1.43.0,>=1.42.0; extra == \"sagemaker-metrics\"",
"types-boto3-sagemaker-runtime<1.43.0,>=1.42.0; extra == \"sagemaker-runtime\"",
"types-boto3-savingsplans<1.43.0,>=1.42.0; extra == \"savingsplans\"",
"types-boto3-scheduler<1.43.0,>=1.42.0; extra == \"scheduler\"",
"types-boto3-schemas<1.43.0,>=1.42.0; extra == \"schemas\"",
"types-boto3-sdb<1.43.0,>=1.42.0; extra == \"sdb\"",
"types-boto3-secretsmanager<1.43.0,>=1.42.0; extra == \"secretsmanager\"",
"types-boto3-security-ir<1.43.0,>=1.42.0; extra == \"security-ir\"",
"types-boto3-securityhub<1.43.0,>=1.42.0; extra == \"securityhub\"",
"types-boto3-securitylake<1.43.0,>=1.42.0; extra == \"securitylake\"",
"types-boto3-serverlessrepo<1.43.0,>=1.42.0; extra == \"serverlessrepo\"",
"types-boto3-service-quotas<1.43.0,>=1.42.0; extra == \"service-quotas\"",
"types-boto3-servicecatalog<1.43.0,>=1.42.0; extra == \"servicecatalog\"",
"types-boto3-servicecatalog-appregistry<1.43.0,>=1.42.0; extra == \"servicecatalog-appregistry\"",
"types-boto3-servicediscovery<1.43.0,>=1.42.0; extra == \"servicediscovery\"",
"types-boto3-ses<1.43.0,>=1.42.0; extra == \"ses\"",
"types-boto3-sesv2<1.43.0,>=1.42.0; extra == \"sesv2\"",
"types-boto3-shield<1.43.0,>=1.42.0; extra == \"shield\"",
"types-boto3-signer<1.43.0,>=1.42.0; extra == \"signer\"",
"types-boto3-signer-data<1.43.0,>=1.42.0; extra == \"signer-data\"",
"types-boto3-signin<1.43.0,>=1.42.0; extra == \"signin\"",
"types-boto3-simspaceweaver<1.43.0,>=1.42.0; extra == \"simspaceweaver\"",
"types-boto3-snow-device-management<1.43.0,>=1.42.0; extra == \"snow-device-management\"",
"types-boto3-snowball<1.43.0,>=1.42.0; extra == \"snowball\"",
"types-boto3-sns<1.43.0,>=1.42.0; extra == \"sns\"",
"types-boto3-socialmessaging<1.43.0,>=1.42.0; extra == \"socialmessaging\"",
"types-boto3-sqs<1.43.0,>=1.42.0; extra == \"sqs\"",
"types-boto3-ssm<1.43.0,>=1.42.0; extra == \"ssm\"",
"types-boto3-ssm-contacts<1.43.0,>=1.42.0; extra == \"ssm-contacts\"",
"types-boto3-ssm-guiconnect<1.43.0,>=1.42.0; extra == \"ssm-guiconnect\"",
"types-boto3-ssm-incidents<1.43.0,>=1.42.0; extra == \"ssm-incidents\"",
"types-boto3-ssm-quicksetup<1.43.0,>=1.42.0; extra == \"ssm-quicksetup\"",
"types-boto3-ssm-sap<1.43.0,>=1.42.0; extra == \"ssm-sap\"",
"types-boto3-sso<1.43.0,>=1.42.0; extra == \"sso\"",
"types-boto3-sso-admin<1.43.0,>=1.42.0; extra == \"sso-admin\"",
"types-boto3-sso-oidc<1.43.0,>=1.42.0; extra == \"sso-oidc\"",
"types-boto3-stepfunctions<1.43.0,>=1.42.0; extra == \"stepfunctions\"",
"types-boto3-storagegateway<1.43.0,>=1.42.0; extra == \"storagegateway\"",
"types-boto3-sts<1.43.0,>=1.42.0; extra == \"sts\"",
"types-boto3-supplychain<1.43.0,>=1.42.0; extra == \"supplychain\"",
"types-boto3-support<1.43.0,>=1.42.0; extra == \"support\"",
"types-boto3-support-app<1.43.0,>=1.42.0; extra == \"support-app\"",
"types-boto3-swf<1.43.0,>=1.42.0; extra == \"swf\"",
"types-boto3-synthetics<1.43.0,>=1.42.0; extra == \"synthetics\"",
"types-boto3-taxsettings<1.43.0,>=1.42.0; extra == \"taxsettings\"",
"types-boto3-textract<1.43.0,>=1.42.0; extra == \"textract\"",
"types-boto3-timestream-influxdb<1.43.0,>=1.42.0; extra == \"timestream-influxdb\"",
"types-boto3-timestream-query<1.43.0,>=1.42.0; extra == \"timestream-query\"",
"types-boto3-timestream-write<1.43.0,>=1.42.0; extra == \"timestream-write\"",
"types-boto3-tnb<1.43.0,>=1.42.0; extra == \"tnb\"",
"types-boto3-transcribe<1.43.0,>=1.42.0; extra == \"transcribe\"",
"types-boto3-transfer<1.43.0,>=1.42.0; extra == \"transfer\"",
"types-boto3-translate<1.43.0,>=1.42.0; extra == \"translate\"",
"types-boto3-trustedadvisor<1.43.0,>=1.42.0; extra == \"trustedadvisor\"",
"types-boto3-verifiedpermissions<1.43.0,>=1.42.0; extra == \"verifiedpermissions\"",
"types-boto3-voice-id<1.43.0,>=1.42.0; extra == \"voice-id\"",
"types-boto3-vpc-lattice<1.43.0,>=1.42.0; extra == \"vpc-lattice\"",
"types-boto3-waf<1.43.0,>=1.42.0; extra == \"waf\"",
"types-boto3-waf-regional<1.43.0,>=1.42.0; extra == \"waf-regional\"",
"types-boto3-wafv2<1.43.0,>=1.42.0; extra == \"wafv2\"",
"types-boto3-wellarchitected<1.43.0,>=1.42.0; extra == \"wellarchitected\"",
"types-boto3-wickr<1.43.0,>=1.42.0; extra == \"wickr\"",
"types-boto3-wisdom<1.43.0,>=1.42.0; extra == \"wisdom\"",
"types-boto3-workdocs<1.43.0,>=1.42.0; extra == \"workdocs\"",
"types-boto3-workmail<1.43.0,>=1.42.0; extra == \"workmail\"",
"types-boto3-workmailmessageflow<1.43.0,>=1.42.0; extra == \"workmailmessageflow\"",
"types-boto3-workspaces<1.43.0,>=1.42.0; extra == \"workspaces\"",
"types-boto3-workspaces-instances<1.43.0,>=1.42.0; extra == \"workspaces-instances\"",
"types-boto3-workspaces-thin-client<1.43.0,>=1.42.0; extra == \"workspaces-thin-client\"",
"types-boto3-workspaces-web<1.43.0,>=1.42.0; extra == \"workspaces-web\"",
"types-boto3-xray<1.43.0,>=1.42.0; extra == \"xray\""
] | [] | [] | [] | [
"Homepage, https://github.com/youtype/mypy_boto3_builder",
"Documentation, https://youtype.github.io/types_boto3_docs/",
"Source, https://github.com/youtype/mypy_boto3_builder",
"Tracker, https://github.com/youtype/mypy_boto3_builder/issues"
] | twine/6.2.0 CPython/3.13.11 | 2026-02-21T02:54:59.509731 | types_boto3-1.42.54.tar.gz | 101,112 | cf/0b/f870d9c6e8417e5ef5fa5ae963441af1df4c6126ac41c4902f1ffcf7e0c5/types_boto3-1.42.54.tar.gz | source | sdist | null | false | 2a143541aec5b3b31bfb86cbd4d8868f | ccae2546a5a2c754b860528c13bbdcda44e6ddccdf73320e32ed3b8422dedbf0 | cf0bf870d9c6e8417e5ef5fa5ae963441af1df4c6126ac41c4902f1ffcf7e0c5 | MIT | [
"LICENSE"
] | 5,370 |
2.4 | mng | 0.1.4 | Library for managing AI coding agents across different hosts | # mng: build your team of AI engineering agents
**installation:**
```bash
curl -fsSL https://raw.githubusercontent.com/imbue-ai/mng/main/scripts/install.sh | bash
```
**mng is *very* simple to use:**
```bash
mng # launch claude locally (defaults: command=create, agent=claude, provider=local, project=current dir)
mng --in modal # launch claude on Modal
mng my-task # launch claude with a name
mng my-task codex # launch codex instead of claude
mng -- --model opus # pass any arguments through to the underlying agent
# send an initial message so you don't have to wait around:
mng --no-connect --message "Speed up one of my tests and make a PR on github"
# or, be super explicit about all of the arguments:
mng create --name my-task --agent-type claude --in modal
# tons more arguments for anything you could want! Learn more via --help
mng create --help
# or see the other commands--list, destroy, message, connect, push, pull, clone, and more!
mng --help
```
**mng is fast:**
```bash
> time mng local-hello --message "Just say hello" --no-connect
Agent creation started in background (PID: 709262)
Agent name: local-hello
real 0m1.472s
user 0m1.181s
sys 0m0.227s
> time mng list
NAME STATE HOST PROVIDER HOST STATE LABELS
local-hello RUNNING @local local RUNNING project=mng
real 0m1.773s
user 0m0.955s
sys 0m0.166s
```
**mng itself is free, *and* the cheapest way to run remote agents (they shut down when idle):**
```bash
mng create --in modal --no-connect --message "just say 'hello'" --idle-timeout 60 -- --model sonnet
# costs $0.0387443 for inference (using sonnet)
# costs $0.0013188 for compute because it shuts down 60 seconds after the agent completes
```
**mng takes security and privacy seriously:**
```bash
# by default, cannot be accessed by anyone except your modal account (uses a local unique SSH key)
mng create example-task --in modal
# you (or your agent) can do whatever bad ideas you want in that container without fear
mng exec example-task "rm -rf /"
# you can block all outgoing internet access
mng create --in modal -b offline
# or restrict outgoing traffic to certain IPs
mng create --in modal -b cidr-allowlist=203.0.113.0/24
```
**mng is powerful and composable:**
```bash
# start multiple agents on the same host to save money and share data
mng create agent-1 --in modal --host-name shared-host
mng create agent-2 --host shared-host
# run commands directly on an agent's host
mng exec agent-1 "git log --oneline -5"
# never lose any work: snapshot and fork the entire agent states
mng create doomed-agent --in modal
SNAPSHOT=$(mng snapshot doomed-agent --format "{id}")
mng message doomed-agent "try running 'rm -rf /' and see what happens"
mng create new-agent --snapshot $SNAPSHOT
```
<!--
# programmatically send messages to your agents and see their chat histories
mng message agent-1 "Tell me a joke"
mng transcript agent-1 # [future]
# [future] schedule agents to run periodically
mng schedule --template my-daily-hook "look at any flaky tests over the past day and try to fix one of them" --cron "0 * * * *"
-->
**mng makes it easy to work with remote agents**
```bash
mng connect my-agent # directly connect to remote agents via SSH for debugging
mng pull my-agent # pull changes from an agent to your local machine
mng push my-agent # push your changes to an agent
mng pair my-agent # or sync changes continuously!
```
**mng is easy to learn:**
```text
> mng ask "How do I create a container on modal with custom packages installed by default?"
Simply run:
mng create --in modal --build-arg "--dockerfile path/to/Dockerfile"
```
<!--
If you don't have a Dockerfile for your project, run:
mng bootstrap # [future]
From the repo where you would like a Dockerfile created.
-->
## Overview
`mng` makes it easy to create and use any AI agent (ex: Claude Code, Codex), whether you want to run locally or remotely.
`mng` is built on open-source tools and standards (SSH, git, tmux, docker, etc.), and is extensible via [plugins](https://github.com/imbue-ai/mng/blob/main/libs/mng/docs/concepts/plugins.md) to enable the latest AI coding workflows.
## Installation
**Quick install** (installs system dependencies + mng automatically):
```bash
curl -fsSL https://raw.githubusercontent.com/imbue-ai/mng/main/scripts/install.sh | bash
```
**Manual install** (requires [uv](https://docs.astral.sh/uv/) and system deps: `git`, `tmux`, `jq`, `rsync`, `unison`):
```bash
uv tool install mng
# or run without installing
uvx mng
```
**Upgrade:**
```bash
uv tool upgrade mng
```
**For development:**
```bash
git clone git@github.com:imbue-ai/mng.git && cd mng && uv sync --all-packages && uv tool install -e libs/mng
```
## Shell Completion
`mng` supports tab completion for commands and agent names in bash, zsh, and fish.
**Zsh** (add to `~/.zshrc`):
```bash
eval "$(_MNG_COMPLETE=zsh_source mng)"
```
**Bash** (add to `~/.bashrc`):
```bash
eval "$(_MNG_COMPLETE=bash_source mng)"
```
**Fish** (run once):
```bash
_MNG_COMPLETE=fish_source mng > ~/.config/fish/completions/mng.fish
```
Note: `mng` must be installed on your PATH for completion to work (not invoked via `uv run`).
## Commands
```bash
# without installing:
uvx mng <command> [options]
# if installed:
mng <command> [options]
```
### For managing agents:
- **[`create`](https://github.com/imbue-ai/mng/blob/main/libs/mng/docs/commands/primary/create.md)**: (default) Create and run an agent in a host
- [`destroy`](https://github.com/imbue-ai/mng/blob/main/libs/mng/docs/commands/primary/destroy.md): Stop an agent (and clean up any associated resources)
- [`connect`](https://github.com/imbue-ai/mng/blob/main/libs/mng/docs/commands/primary/connect.md): Attach to an agent
<!-- - [`open`](https://github.com/imbue-ai/mng/blob/main/libs/mng/docs/commands/primary/open.md) [future]: Open a URL from an agent in your browser -->
- [`list`](https://github.com/imbue-ai/mng/blob/main/libs/mng/docs/commands/primary/list.md): List active agents
- [`stop`](https://github.com/imbue-ai/mng/blob/main/libs/mng/docs/commands/primary/stop.md): Stop an agent
- [`start`](https://github.com/imbue-ai/mng/blob/main/libs/mng/docs/commands/primary/start.md): Start a stopped agent
- [`snapshot`](https://github.com/imbue-ai/mng/blob/main/libs/mng/docs/commands/secondary/snapshot.md) [experimental]: Create a snapshot of a host's state
- [`exec`](https://github.com/imbue-ai/mng/blob/main/libs/mng/docs/commands/primary/exec.md): Execute a shell command on an agent's host
- [`rename`](https://github.com/imbue-ai/mng/blob/main/libs/mng/docs/commands/primary/rename.md): Rename an agent
- [`clone`](https://github.com/imbue-ai/mng/blob/main/libs/mng/docs/commands/aliases/clone.md): Create a copy of an existing agent
- [`migrate`](https://github.com/imbue-ai/mng/blob/main/libs/mng/docs/commands/aliases/migrate.md): Move an agent to a different host
- [`limit`](https://github.com/imbue-ai/mng/blob/main/libs/mng/docs/commands/secondary/limit.md): Configure limits for agents and hosts
### For moving data in and out:
- [`pull`](https://github.com/imbue-ai/mng/blob/main/libs/mng/docs/commands/primary/pull.md): Pull data from agent
- [`push`](https://github.com/imbue-ai/mng/blob/main/libs/mng/docs/commands/primary/push.md): Push data to agent
- [`pair`](https://github.com/imbue-ai/mng/blob/main/libs/mng/docs/commands/primary/pair.md): Continually sync data with an agent
- [`message`](https://github.com/imbue-ai/mng/blob/main/libs/mng/docs/commands/secondary/message.md): Send a message to an agent
- [`provision`](https://github.com/imbue-ai/mng/blob/main/libs/mng/docs/commands/secondary/provision.md): Re-run provisioning on an agent (useful for syncing config and auth)
### For maintenance:
- [`cleanup`](https://github.com/imbue-ai/mng/blob/main/libs/mng/docs/commands/secondary/cleanup.md): Clean up stopped agents and unused resources
- [`logs`](https://github.com/imbue-ai/mng/blob/main/libs/mng/docs/commands/secondary/logs.md): View agent and host logs
- [`gc`](https://github.com/imbue-ai/mng/blob/main/libs/mng/docs/commands/secondary/gc.md): Garbage collect unused resources
### For managing mng itself:
- [`ask`](https://github.com/imbue-ai/mng/blob/main/libs/mng/docs/commands/secondary/ask.md): Chat with mng for help
- [`plugin`](https://github.com/imbue-ai/mng/blob/main/libs/mng/docs/commands/secondary/plugin.md) [experimental]: Manage mng plugins
- [`config`](https://github.com/imbue-ai/mng/blob/main/libs/mng/docs/commands/secondary/config.md): View and edit mng configuration
## How it works
You can interact with `mng` via the terminal (run `mng --help` to learn more).
<!-- You can also interact via one of many [web interfaces](https://github.com/imbue-ai/mng/blob/main/web_interfaces.md) [future] (ex: [TheEye](http://ididntmakethisyet.com)) -->
`mng` uses robust open source tools like SSH, git, and tmux to run and manage your agents:
- **[agents](https://github.com/imbue-ai/mng/blob/main/libs/mng/docs/concepts/agents.md)** are simply processes that run in [tmux](https://github.com/tmux/tmux/wiki) sessions, each with their own `work_dir` (working folder) and configuration (ex: secrets, environment variables, etc)
<!-- - [agents](https://github.com/imbue-ai/mng/blob/main/libs/mng/docs/concepts/agents.md) usually expose URLs so you can access them from the web [future: mng open] -->
- [agents](https://github.com/imbue-ai/mng/blob/main/libs/mng/docs/concepts/agents.md) run on **[hosts](https://github.com/imbue-ai/mng/blob/main/libs/mng/docs/concepts/hosts.md)**--either locally (by default), or in special environments like [Modal](https://modal.com) [Sandboxes](https://modal.com/docs/guide/sandboxes) (`--in modal`) or [Docker](https://www.docker.com) [containers](https://docs.docker.com/get-started/docker-concepts/the-basics/what-is-a-container/) (`--in docker`). Use `--host <name>` to target an existing host.
- multiple [agents](https://github.com/imbue-ai/mng/blob/main/libs/mng/docs/concepts/agents.md) can share a single [host](https://github.com/imbue-ai/mng/blob/main/libs/mng/docs/concepts/hosts.md).
- [hosts](https://github.com/imbue-ai/mng/blob/main/libs/mng/docs/concepts/hosts.md) come from **[providers](https://github.com/imbue-ai/mng/blob/main/libs/mng/docs/concepts/providers.md)** (ex: Modal, AWS, docker, etc)
- [hosts](https://github.com/imbue-ai/mng/blob/main/libs/mng/docs/concepts/hosts.md) help save money by automatically "pausing" when all of their [agents](https://github.com/imbue-ai/mng/blob/main/libs/mng/docs/concepts/agents.md) are "idle". See [idle detection](https://github.com/imbue-ai/mng/blob/main/libs/mng/docs/concepts/idle_detection.md) for more details.
- [hosts](https://github.com/imbue-ai/mng/blob/main/libs/mng/docs/concepts/hosts.md) automatically "stop" when all of their [agents](https://github.com/imbue-ai/mng/blob/main/libs/mng/docs/concepts/agents.md) are "stopped"
- `mng` is extensible via **[plugins](https://github.com/imbue-ai/mng/blob/main/libs/mng/docs/concepts/plugins.md)**--you can add new agent types, provider backends, CLI commands, and lifecycle hooks
<!-- - `mng` is absurdly extensible--there are existing **[plugins](https://github.com/imbue-ai/mng/blob/main/libs/mng/docs/concepts/plugins.md)** for almost everything, and `mng` can even [dynamically generate new plugins](https://github.com/imbue-ai/mng/blob/main/libs/mng/docs/commands/secondary/plugin.md#mng-plugin-generate) [future] -->
### Architecture
`mng` stores very little state (beyond configuration and local caches for performance), and instead relies on conventions:
- any process running in window 0 of a `mng-` prefixed tmux session is considered an agent
- agents store their status and logs in a standard location (default: `$MNG_HOST_DIR/agents/<agent_id>/`)
- all hosts are accessed via SSH--if you can SSH into it, it can be a host
- ...[and more](https://github.com/imbue-ai/mng/blob/main/libs/mng/docs/conventions.md)
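To make the first convention concrete, here is a hypothetical shell sketch. The session names are invented for illustration; only the `mng-` prefix comes from the convention above:

```shell
# Given some tmux session names, keep only the mng-managed ones and
# strip the "mng-" prefix to recover the agent names.
sessions="mng-local-hello mng-my-task scratchpad"
agents=""
for s in $sessions; do
  case "$s" in
    mng-*) agents="$agents ${s#mng-}" ;;  # keep and strip the prefix
  esac
done
agents="${agents# }"   # drop the leading space
echo "$agents"         # prints "local-hello my-task"
```

Because the convention is just a naming prefix, any tool that can list tmux sessions can discover running agents without consulting `mng`'s own state.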
See [`architecture.md`](https://github.com/imbue-ai/mng/blob/main/libs/mng/docs/architecture.md) for an in-depth overview of the `mng` architecture and design principles.
## Security
**Best practices:**
1. Use providers with good isolation (like Docker or Modal) when working with agents, especially those that are untrusted.
2. Follow the "principle of least privilege": only expose the minimal set of API tokens and secrets for each agent, and restrict their access (e.g. to the network) as much as possible.
3. Avoid storing sensitive data in agents' filesystems (or encrypt it if necessary).
See [`./docs/security_model.md`](https://github.com/imbue-ai/mng/blob/main/libs/mng/docs/security_model.md) for more details on our security model.
<!--
## Learning more
TODO: put a ton of examples and references here!
-->
## Sub-projects
This is a monorepo. The code for `mng` itself lives here:
- [libs/mng/](https://github.com/imbue-ai/mng/blob/main/libs/mng/README.md)
As well as the code for some plugins that we maintain, including:
- [libs/mng_pair/](https://github.com/imbue-ai/mng/blob/main/libs/mng_pair/README.md)
- [libs/mng_opencode/](https://github.com/imbue-ai/mng/blob/main/libs/mng_opencode/README.md)
The repo also contains code for some dependencies and related projects, including:
- [libs/concurrency_group](https://github.com/imbue-ai/mng/blob/main/libs/concurrency_group/README.md): a simple Python library for managing synchronous concurrency primitives (threads and processes) that makes it easy to ensure they are cleaned up.
- [libs/imbue_common](https://github.com/imbue-ai/mng/blob/main/libs/imbue_common/README.md): core libraries that are shared across all of our projects
- [apps/changelings](https://github.com/imbue-ai/mng/blob/main/apps/changelings/README.md): an experimental project around scheduling runs of autonomous agents
## Contributing
Contributions are welcome!
<!-- Please see [`CONTRIBUTING.md`](https://github.com/imbue-ai/mng/blob/main/CONTRIBUTING.md) for guidelines. [future] -->
| text/markdown | Imbue, Josh Albrecht, Evan Ryan Gunter | null | null | null | null | null | [
"Development Status :: 3 - Alpha",
"License :: OSI Approved :: MIT License",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.11"
] | [] | null | null | >=3.11 | [] | [] | [] | [
"cel-python>=0.1.5",
"click-option-group>=0.5.6",
"click>=8.0",
"concurrency-group",
"coolname>=3.0.0",
"cryptography>=42.0",
"docker>=7.0",
"dockerfile-parse>=2.0.0",
"imbue-common",
"modal==1.3.1",
"packaging>=21.0",
"pluggy>=1.5.0",
"psutil>=5.9",
"pydantic>=2.0",
"pyinfra>=3.0",
"python-dotenv>=1.0",
"tabulate>=0.9.0",
"tenacity>=8.0",
"tomlkit>=0.12.0",
"urwid>=2.2.0"
] | [] | [] | [] | [
"Homepage, https://github.com/imbue-ai/mng",
"Repository, https://github.com/imbue-ai/mng"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-21T02:54:59.291573 | mng-0.1.4.tar.gz | 704,943 | aa/ea/8e6bfb30c68b1e6e7c9aee37d25848a0fac8ed35bb7b36196e145154c548/mng-0.1.4.tar.gz | source | sdist | null | false | 80e2a622775a72712cf171f0465e84e5 | a285d5879c0f42a20600507b198b6737395f4b7bd44f2a515643cbb55e5d35d2 | aaea8e6bfb30c68b1e6e7c9aee37d25848a0fac8ed35bb7b36196e145154c548 | MIT | [] | 245 |
2.4 | types-boto3-lite | 1.42.54 | Lite type annotations for boto3 1.42.54 generated with mypy-boto3-builder 8.12.0 | <a id="types-boto3-lite"></a>
# types-boto3-lite
[](https://pypi.org/project/types-boto3-lite/)
[](https://pypi.org/project/types-boto3-lite/)
[](https://youtype.github.io/types_boto3_docs/)
[](https://pypistats.org/packages/types-boto3-lite)

Type annotations for [boto3 1.42.54](https://pypi.org/project/boto3/)
compatible with [VSCode](https://code.visualstudio.com/),
[PyCharm](https://www.jetbrains.com/pycharm/),
[Emacs](https://www.gnu.org/software/emacs/),
[Sublime Text](https://www.sublimetext.com/),
[mypy](https://github.com/python/mypy),
[pyright](https://github.com/microsoft/pyright) and other tools.
Generated with
[mypy-boto3-builder 8.12.0](https://github.com/youtype/mypy_boto3_builder).
More information can be found on
[types-boto3](https://pypi.org/project/types-boto3/) page and in
[types-boto3-lite docs](https://youtype.github.io/types_boto3_docs/).
See how it helps you find and fix potential bugs:

- [types-boto3-lite](#types-boto3-lite)
- [How to install](#how-to-install)
- [Generate locally (recommended)](<#generate-locally-(recommended)>)
- [VSCode extension](#vscode-extension)
- [From PyPI with pip](#from-pypi-with-pip)
- [How to uninstall](#how-to-uninstall)
- [Usage](#usage)
- [VSCode](#vscode)
- [PyCharm](#pycharm)
- [Emacs](#emacs)
- [Sublime Text](#sublime-text)
- [Other IDEs](#other-ides)
- [mypy](#mypy)
- [pyright](#pyright)
- [Pylint compatibility](#pylint-compatibility)
- [Explicit type annotations](#explicit-type-annotations)
- [How it works](#how-it-works)
- [What's new](#what's-new)
- [Implemented features](#implemented-features)
- [Latest changes](#latest-changes)
- [Versioning](#versioning)
- [Thank you](#thank-you)
- [Documentation](#documentation)
- [Support and contributing](#support-and-contributing)
- [Submodules](#submodules)
<a id="how-to-install"></a>
## How to install
<a id="generate-locally-(recommended)"></a>
### Generate locally (recommended)
You can generate type annotations for `boto3` package locally with
`mypy-boto3-builder`. Use
[uv](https://docs.astral.sh/uv/getting-started/installation/) for build
isolation.
1. Run mypy-boto3-builder in your package root directory:
`uvx --with 'boto3==1.42.54' mypy-boto3-builder`
2. Select `boto3` AWS SDK.
3. Select services you use in the current project.
4. Use provided commands to install generated packages.
<a id="vscode-extension"></a>
### VSCode extension
Add
[AWS Boto3](https://marketplace.visualstudio.com/items?itemName=Boto3typed.boto3-ide)
extension to your VSCode and run `AWS boto3: Quick Start` command.
Click `Auto-discover services` and select services you use in the current
project.
<a id="from-pypi-with-pip"></a>
### From PyPI with pip
Install `types-boto3` to add type checking for the `boto3` package.
```bash
# install type annotations only for boto3
python -m pip install types-boto3
# install boto3 type annotations
# for cloudformation, dynamodb, ec2, lambda, rds, s3, sqs
python -m pip install 'types-boto3[essential]'
# or install annotations for services you use
python -m pip install 'types-boto3[acm,apigateway]'
# or install annotations in sync with boto3 version
python -m pip install 'types-boto3[boto3]'
# or install all-in-one annotations for all services
python -m pip install 'types-boto3[full]'
```
<a id="how-to-uninstall"></a>
## How to uninstall
```bash
# uninstall types-boto3-lite
python -m pip uninstall -y types-boto3-lite
```
<a id="usage"></a>
## Usage
<a id="vscode"></a>
### VSCode
- Install
[Python extension](https://marketplace.visualstudio.com/items?itemName=ms-python.python)
- Install
[Pylance extension](https://marketplace.visualstudio.com/items?itemName=ms-python.vscode-pylance)
- Set `Pylance` as your Python Language Server
- Install `types-boto3-lite[essential]` in your environment:
```bash
python -m pip install 'types-boto3-lite[essential]'
```
Both type checking and code completion should now work. No explicit type
annotations are required; write your `boto3` code as usual.
<a id="pycharm"></a>
### PyCharm
Install `types-boto3-lite[essential]` in your environment:
```bash
python -m pip install 'types-boto3-lite[essential]'
```
Both type checking and code completion should now work.
<a id="emacs"></a>
### Emacs
- Install `types-boto3-lite[essential]` in your environment:
```bash
python -m pip install 'types-boto3-lite[essential]'
```
- Install [use-package](https://github.com/jwiegley/use-package),
[lsp](https://github.com/emacs-lsp/lsp-mode/),
[company](https://github.com/company-mode/company-mode) and
[flycheck](https://github.com/flycheck/flycheck) packages
- Install [lsp-pyright](https://github.com/emacs-lsp/lsp-pyright) package
```elisp
(use-package lsp-pyright
  :ensure t
  :hook (python-mode . (lambda ()
                         (require 'lsp-pyright)
                         (lsp)))  ; or lsp-deferred
  :init (when (executable-find "python3")
          (setq lsp-pyright-python-executable-cmd "python3")))
```
- Make sure emacs uses the environment where you have installed
`types-boto3-lite`
Type checking should now work. No explicit type annotations are required;
write your `boto3` code as usual.
<a id="sublime-text"></a>
### Sublime Text
- Install `types-boto3-lite[essential]` in your environment:
```bash
python -m pip install 'types-boto3-lite[essential]'
```
- Install [LSP-pyright](https://github.com/sublimelsp/LSP-pyright) package
Type checking should now work. No explicit type annotations are required;
write your `boto3` code as usual.
<a id="other-ides"></a>
### Other IDEs
Not tested, but as long as your IDE supports `mypy` or `pyright`, everything
should work.
<a id="mypy"></a>
### mypy
- Install `mypy`: `python -m pip install mypy`
- Install `types-boto3-lite[essential]` in your environment:
```bash
python -m pip install 'types-boto3-lite[essential]'
```
Type checking should now work. No explicit type annotations are required;
write your `boto3` code as usual.
<a id="pyright"></a>
### pyright
- Install `pyright`: `npm i -g pyright`
- Install `types-boto3-lite[essential]` in your environment:
```bash
python -m pip install 'types-boto3-lite[essential]'
```
Optionally, you can install `types-boto3-lite` to the `typings` directory.
Type checking should now work. No explicit type annotations are required;
write your `boto3` code as usual.
<a id="pylint-compatibility"></a>
### Pylint compatibility
It is completely safe to use the `TYPE_CHECKING` flag to avoid a
`types-boto3-lite` dependency in production. However, `pylint` has an issue
where it complains about undefined variables in this setup. To fix it, set all
types to `object` in the non-`TYPE_CHECKING` branch.
```python
from typing import TYPE_CHECKING

if TYPE_CHECKING:
    from types_boto3_ec2 import EC2Client, EC2ServiceResource
    from types_boto3_ec2.waiters import BundleTaskCompleteWaiter
    from types_boto3_ec2.paginators import DescribeVolumesPaginator
else:
    EC2Client = object
    EC2ServiceResource = object
    BundleTaskCompleteWaiter = object
    DescribeVolumesPaginator = object

...
```
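Because `TYPE_CHECKING` is always `False` at runtime, the fallback assignments
above cost nothing and keep the stub packages out of production dependencies.
A minimal, self-contained sketch (assuming `types-boto3-lite[ec2]` is present
only in the type-checking environment):

```python
from typing import TYPE_CHECKING

if TYPE_CHECKING:
    # Resolved only by the type checker; the stub package is not
    # needed in the runtime environment.
    from types_boto3_ec2 import EC2Client, EC2ServiceResource
else:
    EC2Client = object          # harmless runtime stand-ins
    EC2ServiceResource = object

# At runtime the names exist, so pylint no longer flags them as undefined.
print(EC2Client is object)  # True when run as a plain script
```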
<a id="explicit-type-annotations"></a>
### Explicit type annotations
To speed up type checking and code completion, you can set types explicitly.
```python
import boto3
from boto3.session import Session
from types_boto3_ec2.client import EC2Client
from types_boto3_ec2.service_resource import EC2ServiceResource
from types_boto3_ec2.waiter import BundleTaskCompleteWaiter
from types_boto3_ec2.paginator import DescribeVolumesPaginator
session = Session(region_name="us-west-1")
ec2_client: EC2Client = boto3.client("ec2", region_name="us-west-1")
ec2_resource: EC2ServiceResource = session.resource("ec2")
bundle_task_complete_waiter: BundleTaskCompleteWaiter = ec2_client.get_waiter(
    "bundle_task_complete"
)
describe_volumes_paginator: DescribeVolumesPaginator = ec2_client.get_paginator("describe_volumes")
```
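If you prefer explicit annotations without importing the stub classes at
runtime, quote the annotation and guard the import behind `TYPE_CHECKING`; the
string is stored but never evaluated. A hedged sketch (the `make_ec2_client`
helper is illustrative, not part of the package):

```python
from typing import TYPE_CHECKING

if TYPE_CHECKING:
    from types_boto3_ec2.client import EC2Client  # stub-only import

def make_ec2_client(region: str) -> "EC2Client":
    # Imported lazily so the module loads even without boto3 installed.
    import boto3
    return boto3.client("ec2", region_name=region)

# The string annotation is recorded but never evaluated at import time.
print(make_ec2_client.__annotations__["return"])  # EC2Client
```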
<a id="how-it-works"></a>
## How it works
Fully automated
[mypy-boto3-builder](https://github.com/youtype/mypy_boto3_builder) carefully
generates type annotations for each service, patiently waiting for `boto3`
updates. It delivers drop-in type annotations for you and makes sure that:
- All available `boto3` services are covered.
- Each public class and method of every `boto3` service gets valid type
annotations extracted from `botocore` schemas.
- Type annotations include up-to-date documentation.
- A link to the documentation is provided for every method.
- Code is processed by [ruff](https://docs.astral.sh/ruff/) for readability.
<a id="what's-new"></a>
## What's new
<a id="implemented-features"></a>
### Implemented features
- Fully type annotated `boto3`, `botocore`, `aiobotocore` and `aioboto3`
libraries
- `mypy`, `pyright`, `VSCode`, `PyCharm`, `Sublime Text` and `Emacs`
compatibility
- `Client`, `ServiceResource`, `Resource`, `Waiter` and `Paginator` type
  annotations for each service
- Generated `TypeDefs` for each service
- Generated `Literals` for each service
- Auto discovery of types for `boto3.client` and `boto3.resource` calls
- Auto discovery of types for `session.client` and `session.resource` calls
- Auto discovery of types for `client.get_waiter` and `client.get_paginator`
calls
- Auto discovery of types for `ServiceResource` and `Resource` collections
- Auto discovery of types for `aiobotocore.Session.create_client` calls
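The generated `Literals` mentioned above are ordinary `typing.Literal`
aliases, so they can also be used for cheap runtime validation via
`typing.get_args`. A standalone sketch with an illustrative literal (the real
generated names and values live in each service's `literals` module):

```python
from typing import Literal, get_args

# Illustrative stand-in for a generated literal such as an S3 canned-ACL type.
CannedACLType = Literal["private", "public-read", "public-read-write"]

def is_valid_acl(value: str) -> bool:
    # At runtime a Literal is plain metadata; get_args recovers the choices.
    return value in get_args(CannedACLType)

print(is_valid_acl("private"))     # True
print(is_valid_acl("not-an-acl"))  # False
```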
<a id="latest-changes"></a>
### Latest changes
Builder changelog can be found in
[Releases](https://github.com/youtype/mypy_boto3_builder/releases).
<a id="versioning"></a>
## Versioning
The `types-boto3-lite` version matches the corresponding `boto3` version and
follows
[Python Packaging version specifiers](https://packaging.python.org/en/latest/specifications/version-specifiers/).
<a id="thank-you"></a>
## Thank you
- [Allie Fitter](https://github.com/alliefitter) for
  [boto3-type-annotations](https://pypi.org/project/boto3-type-annotations/);
  this package is built on top of his work
- [black](https://github.com/psf/black) developers for an awesome formatting
tool
- [Timothy Edmund Crosley](https://github.com/timothycrosley) for
[isort](https://github.com/PyCQA/isort) and how flexible it is
- [mypy](https://github.com/python/mypy) developers for doing all dirty work
for us
- [pyright](https://github.com/microsoft/pyright) team for the new era of typed
Python
<a id="documentation"></a>
## Documentation
Type annotations for all services can be found in the
[boto3 docs](https://youtype.github.io/types_boto3_docs/).
<a id="support-and-contributing"></a>
## Support and contributing
This package is auto-generated. Please report any bugs or request new features
in the
[mypy-boto3-builder](https://github.com/youtype/mypy_boto3_builder/issues/)
repository.
<a id="submodules"></a>
## Submodules
- `types-boto3-lite[full]` - Type annotations for all 413 services in one
package (recommended).
- `types-boto3-lite[all]` - Type annotations for all 413 services in separate
packages.
- `types-boto3-lite[essential]` - Type annotations for
[CloudFormation](https://youtype.github.io/types_boto3_docs/types_boto3_cloudformation/),
[DynamoDB](https://youtype.github.io/types_boto3_docs/types_boto3_dynamodb/),
[EC2](https://youtype.github.io/types_boto3_docs/types_boto3_ec2/),
[Lambda](https://youtype.github.io/types_boto3_docs/types_boto3_lambda/),
[RDS](https://youtype.github.io/types_boto3_docs/types_boto3_rds/),
[S3](https://youtype.github.io/types_boto3_docs/types_boto3_s3/) and
[SQS](https://youtype.github.io/types_boto3_docs/types_boto3_sqs/) services.
- `types-boto3-lite[boto3]` - Install annotations in sync with `boto3` version.
- `types-boto3-lite[accessanalyzer]` - Type annotations for
[AccessAnalyzer](https://youtype.github.io/types_boto3_docs/types_boto3_accessanalyzer/)
service.
- `types-boto3-lite[account]` - Type annotations for
[Account](https://youtype.github.io/types_boto3_docs/types_boto3_account/)
service.
- `types-boto3-lite[acm]` - Type annotations for
[ACM](https://youtype.github.io/types_boto3_docs/types_boto3_acm/) service.
- `types-boto3-lite[acm-pca]` - Type annotations for
[ACMPCA](https://youtype.github.io/types_boto3_docs/types_boto3_acm_pca/)
service.
- `types-boto3-lite[aiops]` - Type annotations for
[AIOps](https://youtype.github.io/types_boto3_docs/types_boto3_aiops/)
service.
- `types-boto3-lite[amp]` - Type annotations for
[PrometheusService](https://youtype.github.io/types_boto3_docs/types_boto3_amp/)
service.
- `types-boto3-lite[amplify]` - Type annotations for
[Amplify](https://youtype.github.io/types_boto3_docs/types_boto3_amplify/)
service.
- `types-boto3-lite[amplifybackend]` - Type annotations for
[AmplifyBackend](https://youtype.github.io/types_boto3_docs/types_boto3_amplifybackend/)
service.
- `types-boto3-lite[amplifyuibuilder]` - Type annotations for
[AmplifyUIBuilder](https://youtype.github.io/types_boto3_docs/types_boto3_amplifyuibuilder/)
service.
- `types-boto3-lite[apigateway]` - Type annotations for
[APIGateway](https://youtype.github.io/types_boto3_docs/types_boto3_apigateway/)
service.
- `types-boto3-lite[apigatewaymanagementapi]` - Type annotations for
[ApiGatewayManagementApi](https://youtype.github.io/types_boto3_docs/types_boto3_apigatewaymanagementapi/)
service.
- `types-boto3-lite[apigatewayv2]` - Type annotations for
[ApiGatewayV2](https://youtype.github.io/types_boto3_docs/types_boto3_apigatewayv2/)
service.
- `types-boto3-lite[appconfig]` - Type annotations for
[AppConfig](https://youtype.github.io/types_boto3_docs/types_boto3_appconfig/)
service.
- `types-boto3-lite[appconfigdata]` - Type annotations for
[AppConfigData](https://youtype.github.io/types_boto3_docs/types_boto3_appconfigdata/)
service.
- `types-boto3-lite[appfabric]` - Type annotations for
[AppFabric](https://youtype.github.io/types_boto3_docs/types_boto3_appfabric/)
service.
- `types-boto3-lite[appflow]` - Type annotations for
[Appflow](https://youtype.github.io/types_boto3_docs/types_boto3_appflow/)
service.
- `types-boto3-lite[appintegrations]` - Type annotations for
[AppIntegrationsService](https://youtype.github.io/types_boto3_docs/types_boto3_appintegrations/)
service.
- `types-boto3-lite[application-autoscaling]` - Type annotations for
[ApplicationAutoScaling](https://youtype.github.io/types_boto3_docs/types_boto3_application_autoscaling/)
service.
- `types-boto3-lite[application-insights]` - Type annotations for
[ApplicationInsights](https://youtype.github.io/types_boto3_docs/types_boto3_application_insights/)
service.
- `types-boto3-lite[application-signals]` - Type annotations for
[CloudWatchApplicationSignals](https://youtype.github.io/types_boto3_docs/types_boto3_application_signals/)
service.
- `types-boto3-lite[applicationcostprofiler]` - Type annotations for
[ApplicationCostProfiler](https://youtype.github.io/types_boto3_docs/types_boto3_applicationcostprofiler/)
service.
- `types-boto3-lite[appmesh]` - Type annotations for
[AppMesh](https://youtype.github.io/types_boto3_docs/types_boto3_appmesh/)
service.
- `types-boto3-lite[apprunner]` - Type annotations for
[AppRunner](https://youtype.github.io/types_boto3_docs/types_boto3_apprunner/)
service.
- `types-boto3-lite[appstream]` - Type annotations for
[AppStream](https://youtype.github.io/types_boto3_docs/types_boto3_appstream/)
service.
- `types-boto3-lite[appsync]` - Type annotations for
[AppSync](https://youtype.github.io/types_boto3_docs/types_boto3_appsync/)
service.
- `types-boto3-lite[arc-region-switch]` - Type annotations for
[ARCRegionswitch](https://youtype.github.io/types_boto3_docs/types_boto3_arc_region_switch/)
service.
- `types-boto3-lite[arc-zonal-shift]` - Type annotations for
[ARCZonalShift](https://youtype.github.io/types_boto3_docs/types_boto3_arc_zonal_shift/)
service.
- `types-boto3-lite[artifact]` - Type annotations for
[Artifact](https://youtype.github.io/types_boto3_docs/types_boto3_artifact/)
service.
- `types-boto3-lite[athena]` - Type annotations for
[Athena](https://youtype.github.io/types_boto3_docs/types_boto3_athena/)
service.
- `types-boto3-lite[auditmanager]` - Type annotations for
[AuditManager](https://youtype.github.io/types_boto3_docs/types_boto3_auditmanager/)
service.
- `types-boto3-lite[autoscaling]` - Type annotations for
[AutoScaling](https://youtype.github.io/types_boto3_docs/types_boto3_autoscaling/)
service.
- `types-boto3-lite[autoscaling-plans]` - Type annotations for
[AutoScalingPlans](https://youtype.github.io/types_boto3_docs/types_boto3_autoscaling_plans/)
service.
- `types-boto3-lite[b2bi]` - Type annotations for
[B2BI](https://youtype.github.io/types_boto3_docs/types_boto3_b2bi/) service.
- `types-boto3-lite[backup]` - Type annotations for
[Backup](https://youtype.github.io/types_boto3_docs/types_boto3_backup/)
service.
- `types-boto3-lite[backup-gateway]` - Type annotations for
[BackupGateway](https://youtype.github.io/types_boto3_docs/types_boto3_backup_gateway/)
service.
- `types-boto3-lite[backupsearch]` - Type annotations for
[BackupSearch](https://youtype.github.io/types_boto3_docs/types_boto3_backupsearch/)
service.
- `types-boto3-lite[batch]` - Type annotations for
[Batch](https://youtype.github.io/types_boto3_docs/types_boto3_batch/)
service.
- `types-boto3-lite[bcm-dashboards]` - Type annotations for
[BillingandCostManagementDashboards](https://youtype.github.io/types_boto3_docs/types_boto3_bcm_dashboards/)
service.
- `types-boto3-lite[bcm-data-exports]` - Type annotations for
[BillingandCostManagementDataExports](https://youtype.github.io/types_boto3_docs/types_boto3_bcm_data_exports/)
service.
- `types-boto3-lite[bcm-pricing-calculator]` - Type annotations for
[BillingandCostManagementPricingCalculator](https://youtype.github.io/types_boto3_docs/types_boto3_bcm_pricing_calculator/)
service.
- `types-boto3-lite[bcm-recommended-actions]` - Type annotations for
[BillingandCostManagementRecommendedActions](https://youtype.github.io/types_boto3_docs/types_boto3_bcm_recommended_actions/)
service.
- `types-boto3-lite[bedrock]` - Type annotations for
[Bedrock](https://youtype.github.io/types_boto3_docs/types_boto3_bedrock/)
service.
- `types-boto3-lite[bedrock-agent]` - Type annotations for
[AgentsforBedrock](https://youtype.github.io/types_boto3_docs/types_boto3_bedrock_agent/)
service.
- `types-boto3-lite[bedrock-agent-runtime]` - Type annotations for
[AgentsforBedrockRuntime](https://youtype.github.io/types_boto3_docs/types_boto3_bedrock_agent_runtime/)
service.
- `types-boto3-lite[bedrock-agentcore]` - Type annotations for
[BedrockAgentCore](https://youtype.github.io/types_boto3_docs/types_boto3_bedrock_agentcore/)
service.
- `types-boto3-lite[bedrock-agentcore-control]` - Type annotations for
[BedrockAgentCoreControl](https://youtype.github.io/types_boto3_docs/types_boto3_bedrock_agentcore_control/)
service.
- `types-boto3-lite[bedrock-data-automation]` - Type annotations for
[DataAutomationforBedrock](https://youtype.github.io/types_boto3_docs/types_boto3_bedrock_data_automation/)
service.
- `types-boto3-lite[bedrock-data-automation-runtime]` - Type annotations for
[RuntimeforBedrockDataAutomation](https://youtype.github.io/types_boto3_docs/types_boto3_bedrock_data_automation_runtime/)
service.
- `types-boto3-lite[bedrock-runtime]` - Type annotations for
[BedrockRuntime](https://youtype.github.io/types_boto3_docs/types_boto3_bedrock_runtime/)
service.
- `types-boto3-lite[billing]` - Type annotations for
[Billing](https://youtype.github.io/types_boto3_docs/types_boto3_billing/)
service.
- `types-boto3-lite[billingconductor]` - Type annotations for
[BillingConductor](https://youtype.github.io/types_boto3_docs/types_boto3_billingconductor/)
service.
- `types-boto3-lite[braket]` - Type annotations for
[Braket](https://youtype.github.io/types_boto3_docs/types_boto3_braket/)
service.
- `types-boto3-lite[budgets]` - Type annotations for
[Budgets](https://youtype.github.io/types_boto3_docs/types_boto3_budgets/)
service.
- `types-boto3-lite[ce]` - Type annotations for
[CostExplorer](https://youtype.github.io/types_boto3_docs/types_boto3_ce/)
service.
- `types-boto3-lite[chatbot]` - Type annotations for
[Chatbot](https://youtype.github.io/types_boto3_docs/types_boto3_chatbot/)
service.
- `types-boto3-lite[chime]` - Type annotations for
[Chime](https://youtype.github.io/types_boto3_docs/types_boto3_chime/)
service.
- `types-boto3-lite[chime-sdk-identity]` - Type annotations for
[ChimeSDKIdentity](https://youtype.github.io/types_boto3_docs/types_boto3_chime_sdk_identity/)
service.
- `types-boto3-lite[chime-sdk-media-pipelines]` - Type annotations for
[ChimeSDKMediaPipelines](https://youtype.github.io/types_boto3_docs/types_boto3_chime_sdk_media_pipelines/)
service.
- `types-boto3-lite[chime-sdk-meetings]` - Type annotations for
[ChimeSDKMeetings](https://youtype.github.io/types_boto3_docs/types_boto3_chime_sdk_meetings/)
service.
- `types-boto3-lite[chime-sdk-messaging]` - Type annotations for
[ChimeSDKMessaging](https://youtype.github.io/types_boto3_docs/types_boto3_chime_sdk_messaging/)
service.
- `types-boto3-lite[chime-sdk-voice]` - Type annotations for
[ChimeSDKVoice](https://youtype.github.io/types_boto3_docs/types_boto3_chime_sdk_voice/)
service.
- `types-boto3-lite[cleanrooms]` - Type annotations for
[CleanRoomsService](https://youtype.github.io/types_boto3_docs/types_boto3_cleanrooms/)
service.
- `types-boto3-lite[cleanroomsml]` - Type annotations for
[CleanRoomsML](https://youtype.github.io/types_boto3_docs/types_boto3_cleanroomsml/)
service.
- `types-boto3-lite[cloud9]` - Type annotations for
[Cloud9](https://youtype.github.io/types_boto3_docs/types_boto3_cloud9/)
service.
- `types-boto3-lite[cloudcontrol]` - Type annotations for
[CloudControlApi](https://youtype.github.io/types_boto3_docs/types_boto3_cloudcontrol/)
service.
- `types-boto3-lite[clouddirectory]` - Type annotations for
[CloudDirectory](https://youtype.github.io/types_boto3_docs/types_boto3_clouddirectory/)
service.
- `types-boto3-lite[cloudformation]` - Type annotations for
[CloudFormation](https://youtype.github.io/types_boto3_docs/types_boto3_cloudformation/)
service.
- `types-boto3-lite[cloudfront]` - Type annotations for
[CloudFront](https://youtype.github.io/types_boto3_docs/types_boto3_cloudfront/)
service.
- `types-boto3-lite[cloudfront-keyvaluestore]` - Type annotations for
[CloudFrontKeyValueStore](https://youtype.github.io/types_boto3_docs/types_boto3_cloudfront_keyvaluestore/)
service.
- `types-boto3-lite[cloudhsm]` - Type annotations for
[CloudHSM](https://youtype.github.io/types_boto3_docs/types_boto3_cloudhsm/)
service.
- `types-boto3-lite[cloudhsmv2]` - Type annotations for
[CloudHSMV2](https://youtype.github.io/types_boto3_docs/types_boto3_cloudhsmv2/)
service.
- `types-boto3-lite[cloudsearch]` - Type annotations for
[CloudSearch](https://youtype.github.io/types_boto3_docs/types_boto3_cloudsearch/)
service.
- `types-boto3-lite[cloudsearchdomain]` - Type annotations for
[CloudSearchDomain](https://youtype.github.io/types_boto3_docs/types_boto3_cloudsearchdomain/)
service.
- `types-boto3-lite[cloudtrail]` - Type annotations for
[CloudTrail](https://youtype.github.io/types_boto3_docs/types_boto3_cloudtrail/)
service.
- `types-boto3-lite[cloudtrail-data]` - Type annotations for
[CloudTrailDataService](https://youtype.github.io/types_boto3_docs/types_boto3_cloudtrail_data/)
service.
- `types-boto3-lite[cloudwatch]` - Type annotations for
[CloudWatch](https://youtype.github.io/types_boto3_docs/types_boto3_cloudwatch/)
service.
- `types-boto3-lite[codeartifact]` - Type annotations for
[CodeArtifact](https://youtype.github.io/types_boto3_docs/types_boto3_codeartifact/)
service.
- `types-boto3-lite[codebuild]` - Type annotations for
[CodeBuild](https://youtype.github.io/types_boto3_docs/types_boto3_codebuild/)
service.
- `types-boto3-lite[codecatalyst]` - Type annotations for
[CodeCatalyst](https://youtype.github.io/types_boto3_docs/types_boto3_codecatalyst/)
service.
- `types-boto3-lite[codecommit]` - Type annotations for
[CodeCommit](https://youtype.github.io/types_boto3_docs/types_boto3_codecommit/)
service.
- `types-boto3-lite[codeconnections]` - Type annotations for
[CodeConnections](https://youtype.github.io/types_boto3_docs/types_boto3_codeconnections/)
service.
- `types-boto3-lite[codedeploy]` - Type annotations for
[CodeDeploy](https://youtype.github.io/types_boto3_docs/types_boto3_codedeploy/)
service.
- `types-boto3-lite[codeguru-reviewer]` - Type annotations for
[CodeGuruReviewer](https://youtype.github.io/types_boto3_docs/types_boto3_codeguru_reviewer/)
service.
- `types-boto3-lite[codeguru-security]` - Type annotations for
[CodeGuruSecurity](https://youtype.github.io/types_boto3_docs/types_boto3_codeguru_security/)
service.
- `types-boto3-lite[codeguruprofiler]` - Type annotations for
[CodeGuruProfiler](https://youtype.github.io/types_boto3_docs/types_boto3_codeguruprofiler/)
service.
- `types-boto3-lite[codepipeline]` - Type annotations for
[CodePipeline](https://youtype.github.io/types_boto3_docs/types_boto3_codepipeline/)
service.
- `types-boto3-lite[codestar-connections]` - Type annotations for
[CodeStarconnections](https://youtype.github.io/types_boto3_docs/types_boto3_codestar_connections/)
service.
- `types-boto3-lite[codestar-notifications]` - Type annotations for
[CodeStarNotifications](https://youtype.github.io/types_boto3_docs/types_boto3_codestar_notifications/)
service.
- `types-boto3-lite[cognito-identity]` - Type annotations for
[CognitoIdentity](https://youtype.github.io/types_boto3_docs/types_boto3_cognito_identity/)
service.
- `types-boto3-lite[cognito-idp]` - Type annotations for
[CognitoIdentityProvider](https://youtype.github.io/types_boto3_docs/types_boto3_cognito_idp/)
service.
- `types-boto3-lite[cognito-sync]` - Type annotations for
[CognitoSync](https://youtype.github.io/types_boto3_docs/types_boto3_cognito_sync/)
service.
- `types-boto3-lite[comprehend]` - Type annotations for
[Comprehend](https://youtype.github.io/types_boto3_docs/types_boto3_comprehend/)
service.
- `types-boto3-lite[comprehendmedical]` - Type annotations for
[ComprehendMedical](https://youtype.github.io/types_boto3_docs/types_boto3_comprehendmedical/)
service.
- `types-boto3-lite[compute-optimizer]` - Type annotations for
[ComputeOptimizer](https://youtype.github.io/types_boto3_docs/types_boto3_compute_optimizer/)
service.
- `types-boto3-lite[compute-optimizer-automation]` - Type annotations for
[ComputeOptimizerAutomation](https://youtype.github.io/types_boto3_docs/types_boto3_compute_optimizer_automation/)
service.
- `types-boto3-lite[config]` - Type annotations for
[ConfigService](https://youtype.github.io/types_boto3_docs/types_boto3_config/)
service.
- `types-boto3-lite[connect]` - Type annotations for
[Connect](https://youtype.github.io/types_boto3_docs/types_boto3_connect/)
service.
- `types-boto3-lite[connect-contact-lens]` - Type annotations for
[ConnectContactLens](https://youtype.github.io/types_boto3_docs/types_boto3_connect_contact_lens/)
service.
- `types-boto3-lite[connectcampaigns]` - Type annotations for
[ConnectCampaignService](https://youtype.github.io/types_boto3_docs/types_boto3_connectcampaigns/)
service.
- `types-boto3-lite[connectcampaignsv2]` - Type annotations for
[ConnectCampaignServiceV2](https://youtype.github.io/types_boto3_docs/types_boto3_connectcampaignsv2/)
service.
- `types-boto3-lite[connectcases]` - Type annotations for
[ConnectCases](https://youtype.github.io/types_boto3_docs/types_boto3_connectcases/)
service.
- `types-boto3-lite[connectparticipant]` - Type annotations for
[ConnectParticipant](https://youtype.github.io/types_boto3_docs/types_boto3_connectparticipant/)
service.
- `types-boto3-lite[controlcatalog]` - Type annotations for
[ControlCatalog](https://youtype.github.io/types_boto3_docs/types_boto3_controlcatalog/)
service.
- `types-boto3-lite[controltower]` - Type annotations for
[ControlTower](https://youtype.github.io/types_boto3_docs/types_boto3_controltower/)
service.
- `types-boto3-lite[cost-optimization-hub]` - Type annotations for
[CostOptimizationHub](https://youtype.github.io/types_boto3_docs/types_boto3_cost_optimization_hub/)
service.
- `types-boto3-lite[cur]` - Type annotations for
[CostandUsageReportService](https://youtype.github.io/types_boto3_docs/types_boto3_cur/)
service.
- `types-boto3-lite[customer-profiles]` - Type annotations for
[CustomerProfiles](https://youtype.github.io/types_boto3_docs/types_boto3_customer_profiles/)
service.
- `types-boto3-lite[databrew]` - Type annotations for
[GlueDataBrew](https://youtype.github.io/types_boto3_docs/types_boto3_databrew/)
service.
- `types-boto3-lite[dataexchange]` - Type annotations for
[DataExchange](https://youtype.github.io/types_boto3_docs/types_boto3_dataexchange/)
service.
- `types-boto3-lite[datapipeline]` - Type annotations for
[DataPipeline](https://youtype.github.io/types_boto3_docs/types_boto3_datapipeline/)
service.
- `types-boto3-lite[datasync]` - Type annotations for
[DataSync](https://youtype.github.io/types_boto3_docs/types_boto3_datasync/)
service.
- `types-boto3-lite[datazone]` - Type annotations for
[DataZone](https://youtype.github.io/types_boto3_docs/types_boto3_datazone/)
service.
- `types-boto3-lite[dax]` - Type annotations for
[DAX](https://youtype.github.io/types_boto3_docs/types_boto3_dax/) service.
- `types-boto3-lite[deadline]` - Type annotations for
[DeadlineCloud](https://youtype.github.io/types_boto3_docs/types_boto3_deadline/)
service.
- `types-boto3-lite[detective]` - Type annotations for
[Detective](https://youtype.github.io/types_boto3_docs/types_boto3_detective/)
service.
- `types-boto3-lite[devicefarm]` - Type annotations for
[DeviceFarm](https://youtype.github.io/types_boto3_docs/types_boto3_devicefarm/)
service.
- `types-boto3-lite[devops-guru]` - Type annotations for
[DevOpsGuru](https://youtype.github.io/types_boto3_docs/types_boto3_devops_guru/)
service.
- `types-boto3-lite[directconnect]` - Type annotations for
[DirectConnect](https://youtype.github.io/types_boto3_docs/types_boto3_directconnect/)
service.
- `types-boto3-lite[discovery]` - Type annotations for
[ApplicationDiscoveryService](https://youtype.github.io/types_boto3_docs/types_boto3_discovery/)
service.
- `types-boto3-lite[dlm]` - Type annotations for
[DLM](https://youtype.github.io/types_boto3_docs/types_boto3_dlm/) service.
- `types-boto3-lite[dms]` - Type annotations for
[DatabaseMigrationService](https://youtype.github.io/types_boto3_docs/types_boto3_dms/)
service.
- `types-boto3-lite[docdb]` - Type annotations for
[DocDB](https://youtype.github.io/types_boto3_docs/types_boto3_docdb/)
service.
- `types-boto3-lite[docdb-elastic]` - Type annotations for
[DocDBElastic](https://youtype.github.io/types_boto3_docs/types_boto3_docdb_elastic/)
service.
- `types-boto3-lite[drs]` - Type annotations for
[Drs](https://youtype.github.io/types_boto3_docs/types_boto3_drs/) service.
- `types-boto3-lite[ds]` - Type annotations for
[DirectoryService](https://youtype.github.io/types_boto3_docs/types_boto3_ds/)
service.
- `types-boto3-lite[ds-data]` - Type annotations for
[DirectoryServiceData](https://youtype.github.io/types_boto3_docs/types_boto3_ds_data/)
service.
- `types-boto3-lite[dsql]` - Type annotations for
[AuroraDSQL](https://youtype.github.io/types_boto3_docs/types_boto3_dsql/)
service.
- `types-boto3-lite[dynamodb]` - Type annotations for
[DynamoDB](https://youtype.github.io/types_boto3_docs/types_boto3_dynamodb/)
service.
- `types-boto3-lite[dynamodbstreams]` - Type annotations for
[DynamoDBStreams](https://youtype.github.io/types_boto3_docs/types_boto3_dynamodbstreams/)
service.
- `types-boto3-lite[ebs]` - Type annotations for
[EBS](https://youtype.github.io/types_boto3_docs/types_boto3_ebs/) service.
- `types-boto3-lite[ec2]` - Type annotations for
[EC2](https://youtype.github.io/types_boto3_docs/types_boto3_ec2/) service.
- `types-boto3-lite[ec2-instance-connect]` - Type annotations for
[EC2InstanceConnect](https://youtype.github.io/types_boto3_docs/types_boto3_ec2_instance_connect/)
service.
- `types-boto3-lite[ecr]` - Type annotations for
[ECR](https://youtype.github.io/types_boto3_docs/types_boto3_ecr/) service.
- `types-boto3-lite[ecr-public]` - Type annotations for
[ECRPublic](https://youtype.github.io/types_boto3_docs/types_boto3_ecr_public/)
service.
- `types-boto3-lite[ecs]` - Type annotations for
[ECS](https://youtype.github.io/types_boto3_docs/types_boto3_ecs/) service.
- `types-boto3-lite[efs]` - Type annotations for
[EFS](https://youtype.github.io/types_boto3_docs/types_boto3_efs/) service.
- `types-boto3-lite[eks]` - Type annotations for
[EKS](https://youtype.github.io/types_boto3_docs/types_boto3_eks/) service.
- `types-boto3-lite[eks-auth]` - Type annotations for
[EKSAuth](https://youtype.github.io/types_boto3_docs/types_boto3_eks_auth/)
service.
- `types-boto3-lite[elasticache]` - Type annotations for
[ElastiCache](https://youtype.github.io/types_boto3_docs/types_boto3_elasticache/)
service.
- `types-boto3-lite[elasticbeanstalk]` - Type annotations for
[ElasticBeanstalk](https://youtype.github.io/types_boto3_docs/types_boto3_elasticbeanstalk/)
service.
- `types-boto3-lite[elb]` - Type annotations for
[ElasticLoadBalancing](https://youtype.github.io/types_boto3_docs/types_boto3_elb/)
service.
- `types-boto3-lite[elbv2]` - Type annotations for
[ElasticLoadBalancingv2](https://youtype.github.io/types_boto3_docs/types_boto3_elbv2/)
service.
- `types-boto3-lite[emr]` - Type annotations for
[EMR](https://youtype.github.io/types_boto3_docs/types_boto3_emr/) service.
- `types-boto3-lite[emr-containers]` - Type annotations for
[EMRContainers](https://youtype.github.io/types_boto3_docs/types_boto3_emr_containers/)
service.
- `types-boto3-lite[emr-serverless]` - Type annotations for
[EMRServerless](https://youtype.github.io/types_boto3_docs/types_boto3_emr_serverless/)
service.
- `types-boto3-lite[entityresolution]` - Type annotations for
[EntityResolution](https://youtype.github.io/types_boto3_docs/types_boto3_entityresolution/)
service.
- `types-boto3-lite[es]` - Type annotations for
[ElasticsearchService](https://youtype.github.io/types_boto3_docs/types_boto3_es/)
service.
- `types-boto3-lite[events]` - Type annotations for
[EventBridge](https://youtype.github.io/types_boto3_docs/types_boto3_events/)
service.
- `types-boto3-lite[evs]` - Type annotations for
[EVS](https://youtype.github.io/types_boto3_docs/types_boto3_evs/) service.
- `types-boto3-lite[finspace]` - Type annotations for
[Finspace](https://youtype.github.io/types_boto3_docs/types_boto3_finspace/)
service.
- `types-boto3-lite[finspace-data]` - Type annotations for
[FinSpaceData](https://youtype.github.io/types_boto3_docs/types_boto3_finspace_data/)
service.
- `types-boto3-lite[firehose]` - Type annotations for
[Firehose](https://youtype.github.io/types_boto3_docs/types_boto3_firehose/)
service.
- `types-boto3-lite[fis]` - Type annotations for
[FIS](https://youtype.github.io/types_boto3_docs/types_boto3_fis/) service.
- `types-boto3-lite[fms]` - Type annotations for
[FMS](https://youtype.github.io/types_boto3_docs/types_boto3_fms/) service.
- `types-boto3-lite[forecast]` - Type annotations for
[ForecastService](https://youtype.github.io/types_boto3_docs/types_boto3_forecast/)
service.
- `types-boto3-lite[forecastquery]` - Type annotations for
[ForecastQueryService](https://youtype.github.io/types_boto3_docs/types_boto3_forecastquery/)
service.
- `types-boto3-lite[frauddetector]` - Type annotations for
[FraudDetector](https://youtype.github.io/types_boto3_docs/types_boto3_frauddetector/)
service.
- `types-boto3-lite[freetier]` - Type annotations for
[FreeTier](https://youtype.github.io/types_boto3_docs/types_boto3_freetier/)
service.
- `types-boto3-lite[fsx]` - Type annotations for
[FSx](https://youtype.github.io/types_boto3_docs/types_boto3_fsx/) service.
- `types-boto3-lite[gamelift]` - Type annotations for
[GameLift](https://youtype.github.io/types_boto3_docs/types_boto3_gamelift/)
service.
- `types-boto3-lite[gameliftstreams]` - Type annotations for
[GameLiftStreams](https://youtype.github.io/types_boto3_docs/types_boto3_gameliftstreams/)
service.
- `types-boto3-lite[geo-maps]` - Type annotations for
[LocationServiceMapsV2](https://youtype.github.io/types_boto3_docs/types_boto3_geo_maps/)
service.
- `types-boto3-lite[geo-places]` - Type annotations for
[LocationServicePlacesV2](https://youtype.github.io/types_boto3_docs/types_boto3_geo_places/)
service.
- `types-boto3-lite[geo-routes]` - Type annotations for
[LocationServiceRoutesV2](https://youtype.github.io/types_boto3_docs/types_boto3_geo_routes/)
service.
- `types-boto3-lite[glacier]` - Type annotations for
[Glacier](https://youtype.github.io/types_boto3_docs/types_boto3_glacier/)
service.
- `types-boto3-lite[globalaccelerator]` - Type annotations for
[GlobalAccelerator](https://youtype.github.io/types_boto3_docs/types_boto3_globalaccelerator/)
service.
- `types-boto3-lite[glue]` - Type annotations for
[Glue](https://youtype.github.io/types_boto3_docs/types_boto3_glue/) service.
- `types-boto3-lite[grafana]` - Type annotations for
[ManagedGrafana](https://youtype.github.io/types_boto3_docs/types_boto3_grafana/)
service.
- `types-boto3-lite[greengrass]` - Type annotations for
[Greengrass](https://youtype.github.io/types_boto3_docs/types_boto3_greengrass/)
service.
- `types-boto3-lite[greengrassv2]` - Type annotations for
[GreengrassV2](https://youtype.github.io/types_boto3_docs/types_boto3_greengrassv2/)
service.
- `types-boto3-lite[groundstation]` - Type annotations for
[GroundStation](https://youtype.github.io/types_boto3_docs/types_boto3_groundstation/)
service.
- `types-boto3-lite[guardduty]` - Type annotations for
[GuardDuty](https://youtype.github.io/types_boto3_docs/types_boto3_guardduty/)
service.
- `types-boto3-lite[health]` - Type annotations for
[Health](https://youtype.github.io/types_boto3_docs/types_boto3_health/)
service.
- `types-boto3-lite[healthlake]` - Type annotations for
[HealthLake](https://youtype.github.io/types_boto3_docs/types_boto3_healthlake/)
service.
- `types-boto3-lite[iam]` - Type annotations for
[IAM](https://youtype.github.io/types_boto3_docs/types_boto3_iam/) service.
- `types-boto3-lite[identitystore]` - Type annotations for
[IdentityStore](https://youtype.github.io/types_boto3_docs/types_boto3_identitystore/)
service.
- `types-boto3-lite[imagebuilder]` - Type annotations for
[Imagebuilder](https://youtype.github.io/types_boto3_docs/types_boto3_imagebuilder/)
service.
- `types-boto3-lite[importexport]` - Type annotations for
[ImportExport](https://youtype.github.io/types_boto3_docs/types_boto3_importexport/)
service.
- `types-boto3-lite[inspector]` - Type annotations f | text/markdown | null | Vlad Emelianov <vlad.emelianov.nz@gmail.com> | null | null | null | boto3, boto3-stubs, type-annotations, mypy, typeshed, autocomplete | [
"Development Status :: 5 - Production/Stable",
"Intended Audience :: Developers",
"Environment :: Console",
"Natural Language :: English",
"Operating System :: OS Independent",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.9",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: 3.13",
"Programming Language :: Python :: 3.14",
"Programming Language :: Python :: 3 :: Only",
"Programming Language :: Python :: Implementation :: CPython",
"Typing :: Stubs Only"
] | [
"any"
] | null | null | >=3.9 | [] | [] | [] | [
"botocore-stubs",
"types-s3transfer",
"typing-extensions>=4.1.0; python_version < \"3.12\"",
"types-boto3-full<1.43.0,>=1.42.0; extra == \"full\"",
"boto3==1.42.54; extra == \"boto3\"",
"types-boto3-accessanalyzer<1.43.0,>=1.42.0; extra == \"all\"",
"types-boto3-account<1.43.0,>=1.42.0; extra == \"all\"",
"types-boto3-acm<1.43.0,>=1.42.0; extra == \"all\"",
"types-boto3-acm-pca<1.43.0,>=1.42.0; extra == \"all\"",
"types-boto3-aiops<1.43.0,>=1.42.0; extra == \"all\"",
"types-boto3-amp<1.43.0,>=1.42.0; extra == \"all\"",
"types-boto3-amplify<1.43.0,>=1.42.0; extra == \"all\"",
"types-boto3-amplifybackend<1.43.0,>=1.42.0; extra == \"all\"",
"types-boto3-amplifyuibuilder<1.43.0,>=1.42.0; extra == \"all\"",
"types-boto3-apigateway<1.43.0,>=1.42.0; extra == \"all\"",
"types-boto3-apigatewaymanagementapi<1.43.0,>=1.42.0; extra == \"all\"",
"types-boto3-apigatewayv2<1.43.0,>=1.42.0; extra == \"all\"",
"types-boto3-appconfig<1.43.0,>=1.42.0; extra == \"all\"",
"types-boto3-appconfigdata<1.43.0,>=1.42.0; extra == \"all\"",
"types-boto3-appfabric<1.43.0,>=1.42.0; extra == \"all\"",
"types-boto3-appflow<1.43.0,>=1.42.0; extra == \"all\"",
"types-boto3-appintegrations<1.43.0,>=1.42.0; extra == \"all\"",
"types-boto3-application-autoscaling<1.43.0,>=1.42.0; extra == \"all\"",
"types-boto3-application-insights<1.43.0,>=1.42.0; extra == \"all\"",
"types-boto3-application-signals<1.43.0,>=1.42.0; extra == \"all\"",
"types-boto3-applicationcostprofiler<1.43.0,>=1.42.0; extra == \"all\"",
"types-boto3-appmesh<1.43.0,>=1.42.0; extra == \"all\"",
"types-boto3-apprunner<1.43.0,>=1.42.0; extra == \"all\"",
"types-boto3-appstream<1.43.0,>=1.42.0; extra == \"all\"",
"types-boto3-appsync<1.43.0,>=1.42.0; extra == \"all\"",
"types-boto3-arc-region-switch<1.43.0,>=1.42.0; extra == \"all\"",
"types-boto3-arc-zonal-shift<1.43.0,>=1.42.0; extra == \"all\"",
"types-boto3-artifact<1.43.0,>=1.42.0; extra == \"all\"",
"types-boto3-athena<1.43.0,>=1.42.0; extra == \"all\"",
"types-boto3-auditmanager<1.43.0,>=1.42.0; extra == \"all\"",
"types-boto3-autoscaling<1.43.0,>=1.42.0; extra == \"all\"",
"types-boto3-autoscaling-plans<1.43.0,>=1.42.0; extra == \"all\"",
"types-boto3-b2bi<1.43.0,>=1.42.0; extra == \"all\"",
"types-boto3-backup<1.43.0,>=1.42.0; extra == \"all\"",
"types-boto3-backup-gateway<1.43.0,>=1.42.0; extra == \"all\"",
"types-boto3-backupsearch<1.43.0,>=1.42.0; extra == \"all\"",
"types-boto3-batch<1.43.0,>=1.42.0; extra == \"all\"",
"types-boto3-bcm-dashboards<1.43.0,>=1.42.0; extra == \"all\"",
"types-boto3-bcm-data-exports<1.43.0,>=1.42.0; extra == \"all\"",
"types-boto3-bcm-pricing-calculator<1.43.0,>=1.42.0; extra == \"all\"",
"types-boto3-bcm-recommended-actions<1.43.0,>=1.42.0; extra == \"all\"",
"types-boto3-bedrock<1.43.0,>=1.42.0; extra == \"all\"",
"types-boto3-bedrock-agent<1.43.0,>=1.42.0; extra == \"all\"",
"types-boto3-bedrock-agent-runtime<1.43.0,>=1.42.0; extra == \"all\"",
"types-boto3-bedrock-agentcore<1.43.0,>=1.42.0; extra == \"all\"",
"types-boto3-bedrock-agentcore-control<1.43.0,>=1.42.0; extra == \"all\"",
"types-boto3-bedrock-data-automation<1.43.0,>=1.42.0; extra == \"all\"",
"types-boto3-bedrock-data-automation-runtime<1.43.0,>=1.42.0; extra == \"all\"",
"types-boto3-bedrock-runtime<1.43.0,>=1.42.0; extra == \"all\"",
"types-boto3-billing<1.43.0,>=1.42.0; extra == \"all\"",
"types-boto3-billingconductor<1.43.0,>=1.42.0; extra == \"all\"",
"types-boto3-braket<1.43.0,>=1.42.0; extra == \"all\"",
"types-boto3-budgets<1.43.0,>=1.42.0; extra == \"all\"",
"types-boto3-ce<1.43.0,>=1.42.0; extra == \"all\"",
"types-boto3-chatbot<1.43.0,>=1.42.0; extra == \"all\"",
"types-boto3-chime<1.43.0,>=1.42.0; extra == \"all\"",
"types-boto3-chime-sdk-identity<1.43.0,>=1.42.0; extra == \"all\"",
"types-boto3-chime-sdk-media-pipelines<1.43.0,>=1.42.0; extra == \"all\"",
"types-boto3-chime-sdk-meetings<1.43.0,>=1.42.0; extra == \"all\"",
"types-boto3-chime-sdk-messaging<1.43.0,>=1.42.0; extra == \"all\"",
"types-boto3-chime-sdk-voice<1.43.0,>=1.42.0; extra == \"all\"",
"types-boto3-cleanrooms<1.43.0,>=1.42.0; extra == \"all\"",
"types-boto3-cleanroomsml<1.43.0,>=1.42.0; extra == \"all\"",
"types-boto3-cloud9<1.43.0,>=1.42.0; extra == \"all\"",
"types-boto3-cloudcontrol<1.43.0,>=1.42.0; extra == \"all\"",
"types-boto3-clouddirectory<1.43.0,>=1.42.0; extra == \"all\"",
"types-boto3-cloudformation<1.43.0,>=1.42.0; extra == \"all\"",
"types-boto3-cloudfront<1.43.0,>=1.42.0; extra == \"all\"",
"types-boto3-cloudfront-keyvaluestore<1.43.0,>=1.42.0; extra == \"all\"",
"types-boto3-cloudhsm<1.43.0,>=1.42.0; extra == \"all\"",
"types-boto3-cloudhsmv2<1.43.0,>=1.42.0; extra == \"all\"",
"types-boto3-cloudsearch<1.43.0,>=1.42.0; extra == \"all\"",
"types-boto3-cloudsearchdomain<1.43.0,>=1.42.0; extra == \"all\"",
"types-boto3-cloudtrail<1.43.0,>=1.42.0; extra == \"all\"",
"types-boto3-cloudtrail-data<1.43.0,>=1.42.0; extra == \"all\"",
"types-boto3-cloudwatch<1.43.0,>=1.42.0; extra == \"all\"",
"types-boto3-codeartifact<1.43.0,>=1.42.0; extra == \"all\"",
"types-boto3-codebuild<1.43.0,>=1.42.0; extra == \"all\"",
"types-boto3-codecatalyst<1.43.0,>=1.42.0; extra == \"all\"",
"types-boto3-codecommit<1.43.0,>=1.42.0; extra == \"all\"",
"types-boto3-codeconnections<1.43.0,>=1.42.0; extra == \"all\"",
"types-boto3-codedeploy<1.43.0,>=1.42.0; extra == \"all\"",
"types-boto3-codeguru-reviewer<1.43.0,>=1.42.0; extra == \"all\"",
"types-boto3-codeguru-security<1.43.0,>=1.42.0; extra == \"all\"",
"types-boto3-codeguruprofiler<1.43.0,>=1.42.0; extra == \"all\"",
"types-boto3-codepipeline<1.43.0,>=1.42.0; extra == \"all\"",
"types-boto3-codestar-connections<1.43.0,>=1.42.0; extra == \"all\"",
"types-boto3-codestar-notifications<1.43.0,>=1.42.0; extra == \"all\"",
"types-boto3-cognito-identity<1.43.0,>=1.42.0; extra == \"all\"",
"types-boto3-cognito-idp<1.43.0,>=1.42.0; extra == \"all\"",
"types-boto3-cognito-sync<1.43.0,>=1.42.0; extra == \"all\"",
"types-boto3-comprehend<1.43.0,>=1.42.0; extra == \"all\"",
"types-boto3-comprehendmedical<1.43.0,>=1.42.0; extra == \"all\"",
"types-boto3-compute-optimizer<1.43.0,>=1.42.0; extra == \"all\"",
"types-boto3-compute-optimizer-automation<1.43.0,>=1.42.0; extra == \"all\"",
"types-boto3-config<1.43.0,>=1.42.0; extra == \"all\"",
"types-boto3-connect<1.43.0,>=1.42.0; extra == \"all\"",
"types-boto3-connect-contact-lens<1.43.0,>=1.42.0; extra == \"all\"",
"types-boto3-connectcampaigns<1.43.0,>=1.42.0; extra == \"all\"",
"types-boto3-connectcampaignsv2<1.43.0,>=1.42.0; extra == \"all\"",
"types-boto3-connectcases<1.43.0,>=1.42.0; extra == \"all\"",
"types-boto3-connectparticipant<1.43.0,>=1.42.0; extra == \"all\"",
"types-boto3-controlcatalog<1.43.0,>=1.42.0; extra == \"all\"",
"types-boto3-controltower<1.43.0,>=1.42.0; extra == \"all\"",
"types-boto3-cost-optimization-hub<1.43.0,>=1.42.0; extra == \"all\"",
"types-boto3-cur<1.43.0,>=1.42.0; extra == \"all\"",
"types-boto3-customer-profiles<1.43.0,>=1.42.0; extra == \"all\"",
"types-boto3-databrew<1.43.0,>=1.42.0; extra == \"all\"",
"types-boto3-dataexchange<1.43.0,>=1.42.0; extra == \"all\"",
"types-boto3-datapipeline<1.43.0,>=1.42.0; extra == \"all\"",
"types-boto3-datasync<1.43.0,>=1.42.0; extra == \"all\"",
"types-boto3-datazone<1.43.0,>=1.42.0; extra == \"all\"",
"types-boto3-dax<1.43.0,>=1.42.0; extra == \"all\"",
"types-boto3-deadline<1.43.0,>=1.42.0; extra == \"all\"",
"types-boto3-detective<1.43.0,>=1.42.0; extra == \"all\"",
"types-boto3-devicefarm<1.43.0,>=1.42.0; extra == \"all\"",
"types-boto3-devops-guru<1.43.0,>=1.42.0; extra == \"all\"",
"types-boto3-directconnect<1.43.0,>=1.42.0; extra == \"all\"",
"types-boto3-discovery<1.43.0,>=1.42.0; extra == \"all\"",
"types-boto3-dlm<1.43.0,>=1.42.0; extra == \"all\"",
"types-boto3-dms<1.43.0,>=1.42.0; extra == \"all\"",
"types-boto3-docdb<1.43.0,>=1.42.0; extra == \"all\"",
"types-boto3-docdb-elastic<1.43.0,>=1.42.0; extra == \"all\"",
"types-boto3-drs<1.43.0,>=1.42.0; extra == \"all\"",
"types-boto3-ds<1.43.0,>=1.42.0; extra == \"all\"",
"types-boto3-ds-data<1.43.0,>=1.42.0; extra == \"all\"",
"types-boto3-dsql<1.43.0,>=1.42.0; extra == \"all\"",
"types-boto3-dynamodb<1.43.0,>=1.42.0; extra == \"all\"",
"types-boto3-dynamodbstreams<1.43.0,>=1.42.0; extra == \"all\"",
"types-boto3-ebs<1.43.0,>=1.42.0; extra == \"all\"",
"types-boto3-ec2<1.43.0,>=1.42.0; extra == \"all\"",
"types-boto3-ec2-instance-connect<1.43.0,>=1.42.0; extra == \"all\"",
"types-boto3-ecr<1.43.0,>=1.42.0; extra == \"all\"",
"types-boto3-ecr-public<1.43.0,>=1.42.0; extra == \"all\"",
"types-boto3-ecs<1.43.0,>=1.42.0; extra == \"all\"",
"types-boto3-efs<1.43.0,>=1.42.0; extra == \"all\"",
"types-boto3-eks<1.43.0,>=1.42.0; extra == \"all\"",
"types-boto3-eks-auth<1.43.0,>=1.42.0; extra == \"all\"",
"types-boto3-elasticache<1.43.0,>=1.42.0; extra == \"all\"",
"types-boto3-elasticbeanstalk<1.43.0,>=1.42.0; extra == \"all\"",
"types-boto3-elb<1.43.0,>=1.42.0; extra == \"all\"",
"types-boto3-elbv2<1.43.0,>=1.42.0; extra == \"all\"",
"types-boto3-emr<1.43.0,>=1.42.0; extra == \"all\"",
"types-boto3-emr-containers<1.43.0,>=1.42.0; extra == \"all\"",
"types-boto3-emr-serverless<1.43.0,>=1.42.0; extra == \"all\"",
"types-boto3-entityresolution<1.43.0,>=1.42.0; extra == \"all\"",
"types-boto3-es<1.43.0,>=1.42.0; extra == \"all\"",
"types-boto3-events<1.43.0,>=1.42.0; extra == \"all\"",
"types-boto3-evs<1.43.0,>=1.42.0; extra == \"all\"",
"types-boto3-finspace<1.43.0,>=1.42.0; extra == \"all\"",
"types-boto3-finspace-data<1.43.0,>=1.42.0; extra == \"all\"",
"types-boto3-firehose<1.43.0,>=1.42.0; extra == \"all\"",
"types-boto3-fis<1.43.0,>=1.42.0; extra == \"all\"",
"types-boto3-fms<1.43.0,>=1.42.0; extra == \"all\"",
"types-boto3-forecast<1.43.0,>=1.42.0; extra == \"all\"",
"types-boto3-forecastquery<1.43.0,>=1.42.0; extra == \"all\"",
"types-boto3-frauddetector<1.43.0,>=1.42.0; extra == \"all\"",
"types-boto3-freetier<1.43.0,>=1.42.0; extra == \"all\"",
"types-boto3-fsx<1.43.0,>=1.42.0; extra == \"all\"",
"types-boto3-gamelift<1.43.0,>=1.42.0; extra == \"all\"",
"types-boto3-gameliftstreams<1.43.0,>=1.42.0; extra == \"all\"",
"types-boto3-geo-maps<1.43.0,>=1.42.0; extra == \"all\"",
"types-boto3-geo-places<1.43.0,>=1.42.0; extra == \"all\"",
"types-boto3-geo-routes<1.43.0,>=1.42.0; extra == \"all\"",
"types-boto3-glacier<1.43.0,>=1.42.0; extra == \"all\"",
"types-boto3-globalaccelerator<1.43.0,>=1.42.0; extra == \"all\"",
"types-boto3-glue<1.43.0,>=1.42.0; extra == \"all\"",
"types-boto3-grafana<1.43.0,>=1.42.0; extra == \"all\"",
"types-boto3-greengrass<1.43.0,>=1.42.0; extra == \"all\"",
"types-boto3-greengrassv2<1.43.0,>=1.42.0; extra == \"all\"",
"types-boto3-groundstation<1.43.0,>=1.42.0; extra == \"all\"",
"types-boto3-guardduty<1.43.0,>=1.42.0; extra == \"all\"",
"types-boto3-health<1.43.0,>=1.42.0; extra == \"all\"",
"types-boto3-healthlake<1.43.0,>=1.42.0; extra == \"all\"",
"types-boto3-iam<1.43.0,>=1.42.0; extra == \"all\"",
"types-boto3-identitystore<1.43.0,>=1.42.0; extra == \"all\"",
"types-boto3-imagebuilder<1.43.0,>=1.42.0; extra == \"all\"",
"types-boto3-importexport<1.43.0,>=1.42.0; extra == \"all\"",
"types-boto3-inspector<1.43.0,>=1.42.0; extra == \"all\"",
"types-boto3-inspector-scan<1.43.0,>=1.42.0; extra == \"all\"",
"types-boto3-inspector2<1.43.0,>=1.42.0; extra == \"all\"",
"types-boto3-internetmonitor<1.43.0,>=1.42.0; extra == \"all\"",
"types-boto3-invoicing<1.43.0,>=1.42.0; extra == \"all\"",
"types-boto3-iot<1.43.0,>=1.42.0; extra == \"all\"",
"types-boto3-iot-data<1.43.0,>=1.42.0; extra == \"all\"",
"types-boto3-iot-jobs-data<1.43.0,>=1.42.0; extra == \"all\"",
"types-boto3-iot-managed-integrations<1.43.0,>=1.42.0; extra == \"all\"",
"types-boto3-iotdeviceadvisor<1.43.0,>=1.42.0; extra == \"all\"",
"types-boto3-iotevents<1.43.0,>=1.42.0; extra == \"all\"",
"types-boto3-iotevents-data<1.43.0,>=1.42.0; extra == \"all\"",
"types-boto3-iotfleetwise<1.43.0,>=1.42.0; extra == \"all\"",
"types-boto3-iotsecuretunneling<1.43.0,>=1.42.0; extra == \"all\"",
"types-boto3-iotsitewise<1.43.0,>=1.42.0; extra == \"all\"",
"types-boto3-iotthingsgraph<1.43.0,>=1.42.0; extra == \"all\"",
"types-boto3-iottwinmaker<1.43.0,>=1.42.0; extra == \"all\"",
"types-boto3-iotwireless<1.43.0,>=1.42.0; extra == \"all\"",
"types-boto3-ivs<1.43.0,>=1.42.0; extra == \"all\"",
"types-boto3-ivs-realtime<1.43.0,>=1.42.0; extra == \"all\"",
"types-boto3-ivschat<1.43.0,>=1.42.0; extra == \"all\"",
"types-boto3-kafka<1.43.0,>=1.42.0; extra == \"all\"",
"types-boto3-kafkaconnect<1.43.0,>=1.42.0; extra == \"all\"",
"types-boto3-kendra<1.43.0,>=1.42.0; extra == \"all\"",
"types-boto3-kendra-ranking<1.43.0,>=1.42.0; extra == \"all\"",
"types-boto3-keyspaces<1.43.0,>=1.42.0; extra == \"all\"",
"types-boto3-keyspacesstreams<1.43.0,>=1.42.0; extra == \"all\"",
"types-boto3-kinesis<1.43.0,>=1.42.0; extra == \"all\"",
"types-boto3-kinesis-video-archived-media<1.43.0,>=1.42.0; extra == \"all\"",
"types-boto3-kinesis-video-media<1.43.0,>=1.42.0; extra == \"all\"",
"types-boto3-kinesis-video-signaling<1.43.0,>=1.42.0; extra == \"all\"",
"types-boto3-kinesis-video-webrtc-storage<1.43.0,>=1.42.0; extra == \"all\"",
"types-boto3-kinesisanalytics<1.43.0,>=1.42.0; extra == \"all\"",
"types-boto3-kinesisanalyticsv2<1.43.0,>=1.42.0; extra == \"all\"",
"types-boto3-kinesisvideo<1.43.0,>=1.42.0; extra == \"all\"",
"types-boto3-kms<1.43.0,>=1.42.0; extra == \"all\"",
"types-boto3-lakeformation<1.43.0,>=1.42.0; extra == \"all\"",
"types-boto3-lambda<1.43.0,>=1.42.0; extra == \"all\"",
"types-boto3-launch-wizard<1.43.0,>=1.42.0; extra == \"all\"",
"types-boto3-lex-models<1.43.0,>=1.42.0; extra == \"all\"",
"types-boto3-lex-runtime<1.43.0,>=1.42.0; extra == \"all\"",
"types-boto3-lexv2-models<1.43.0,>=1.42.0; extra == \"all\"",
"types-boto3-lexv2-runtime<1.43.0,>=1.42.0; extra == \"all\"",
"types-boto3-license-manager<1.43.0,>=1.42.0; extra == \"all\"",
"types-boto3-license-manager-linux-subscriptions<1.43.0,>=1.42.0; extra == \"all\"",
"types-boto3-license-manager-user-subscriptions<1.43.0,>=1.42.0; extra == \"all\"",
"types-boto3-lightsail<1.43.0,>=1.42.0; extra == \"all\"",
"types-boto3-location<1.43.0,>=1.42.0; extra == \"all\"",
"types-boto3-logs<1.43.0,>=1.42.0; extra == \"all\"",
"types-boto3-lookoutequipment<1.43.0,>=1.42.0; extra == \"all\"",
"types-boto3-m2<1.43.0,>=1.42.0; extra == \"all\"",
"types-boto3-machinelearning<1.43.0,>=1.42.0; extra == \"all\"",
"types-boto3-macie2<1.43.0,>=1.42.0; extra == \"all\"",
"types-boto3-mailmanager<1.43.0,>=1.42.0; extra == \"all\"",
"types-boto3-managedblockchain<1.43.0,>=1.42.0; extra == \"all\"",
"types-boto3-managedblockchain-query<1.43.0,>=1.42.0; extra == \"all\"",
"types-boto3-marketplace-agreement<1.43.0,>=1.42.0; extra == \"all\"",
"types-boto3-marketplace-catalog<1.43.0,>=1.42.0; extra == \"all\"",
"types-boto3-marketplace-deployment<1.43.0,>=1.42.0; extra == \"all\"",
"types-boto3-marketplace-entitlement<1.43.0,>=1.42.0; extra == \"all\"",
"types-boto3-marketplace-reporting<1.43.0,>=1.42.0; extra == \"all\"",
"types-boto3-marketplacecommerceanalytics<1.43.0,>=1.42.0; extra == \"all\"",
"types-boto3-mediaconnect<1.43.0,>=1.42.0; extra == \"all\"",
"types-boto3-mediaconvert<1.43.0,>=1.42.0; extra == \"all\"",
"types-boto3-medialive<1.43.0,>=1.42.0; extra == \"all\"",
"types-boto3-mediapackage<1.43.0,>=1.42.0; extra == \"all\"",
"types-boto3-mediapackage-vod<1.43.0,>=1.42.0; extra == \"all\"",
"types-boto3-mediapackagev2<1.43.0,>=1.42.0; extra == \"all\"",
"types-boto3-mediastore<1.43.0,>=1.42.0; extra == \"all\"",
"types-boto3-mediastore-data<1.43.0,>=1.42.0; extra == \"all\"",
"types-boto3-mediatailor<1.43.0,>=1.42.0; extra == \"all\"",
"types-boto3-medical-imaging<1.43.0,>=1.42.0; extra == \"all\"",
"types-boto3-memorydb<1.43.0,>=1.42.0; extra == \"all\"",
"types-boto3-meteringmarketplace<1.43.0,>=1.42.0; extra == \"all\"",
"types-boto3-mgh<1.43.0,>=1.42.0; extra == \"all\"",
"types-boto3-mgn<1.43.0,>=1.42.0; extra == \"all\"",
"types-boto3-migration-hub-refactor-spaces<1.43.0,>=1.42.0; extra == \"all\"",
"types-boto3-migrationhub-config<1.43.0,>=1.42.0; extra == \"all\"",
"types-boto3-migrationhuborchestrator<1.43.0,>=1.42.0; extra == \"all\"",
"types-boto3-migrationhubstrategy<1.43.0,>=1.42.0; extra == \"all\"",
"types-boto3-mpa<1.43.0,>=1.42.0; extra == \"all\"",
"types-boto3-mq<1.43.0,>=1.42.0; extra == \"all\"",
"types-boto3-mturk<1.43.0,>=1.42.0; extra == \"all\"",
"types-boto3-mwaa<1.43.0,>=1.42.0; extra == \"all\"",
"types-boto3-mwaa-serverless<1.43.0,>=1.42.0; extra == \"all\"",
"types-boto3-neptune<1.43.0,>=1.42.0; extra == \"all\"",
"types-boto3-neptune-graph<1.43.0,>=1.42.0; extra == \"all\"",
"types-boto3-neptunedata<1.43.0,>=1.42.0; extra == \"all\"",
"types-boto3-network-firewall<1.43.0,>=1.42.0; extra == \"all\"",
"types-boto3-networkflowmonitor<1.43.0,>=1.42.0; extra == \"all\"",
"types-boto3-networkmanager<1.43.0,>=1.42.0; extra == \"all\"",
"types-boto3-networkmonitor<1.43.0,>=1.42.0; extra == \"all\"",
"types-boto3-notifications<1.43.0,>=1.42.0; extra == \"all\"",
"types-boto3-notificationscontacts<1.43.0,>=1.42.0; extra == \"all\"",
"types-boto3-nova-act<1.43.0,>=1.42.0; extra == \"all\"",
"types-boto3-oam<1.43.0,>=1.42.0; extra == \"all\"",
"types-boto3-observabilityadmin<1.43.0,>=1.42.0; extra == \"all\"",
"types-boto3-odb<1.43.0,>=1.42.0; extra == \"all\"",
"types-boto3-omics<1.43.0,>=1.42.0; extra == \"all\"",
"types-boto3-opensearch<1.43.0,>=1.42.0; extra == \"all\"",
"types-boto3-opensearchserverless<1.43.0,>=1.42.0; extra == \"all\"",
"types-boto3-organizations<1.43.0,>=1.42.0; extra == \"all\"",
"types-boto3-osis<1.43.0,>=1.42.0; extra == \"all\"",
"types-boto3-outposts<1.43.0,>=1.42.0; extra == \"all\"",
"types-boto3-panorama<1.43.0,>=1.42.0; extra == \"all\"",
"types-boto3-partnercentral-account<1.43.0,>=1.42.0; extra == \"all\"",
"types-boto3-partnercentral-benefits<1.43.0,>=1.42.0; extra == \"all\"",
"types-boto3-partnercentral-channel<1.43.0,>=1.42.0; extra == \"all\"",
"types-boto3-partnercentral-selling<1.43.0,>=1.42.0; extra == \"all\"",
"types-boto3-payment-cryptography<1.43.0,>=1.42.0; extra == \"all\"",
"types-boto3-payment-cryptography-data<1.43.0,>=1.42.0; extra == \"all\"",
"types-boto3-pca-connector-ad<1.43.0,>=1.42.0; extra == \"all\"",
"types-boto3-pca-connector-scep<1.43.0,>=1.42.0; extra == \"all\"",
"types-boto3-pcs<1.43.0,>=1.42.0; extra == \"all\"",
"types-boto3-personalize<1.43.0,>=1.42.0; extra == \"all\"",
"types-boto3-personalize-events<1.43.0,>=1.42.0; extra == \"all\"",
"types-boto3-personalize-runtime<1.43.0,>=1.42.0; extra == \"all\"",
"types-boto3-pi<1.43.0,>=1.42.0; extra == \"all\"",
"types-boto3-pinpoint<1.43.0,>=1.42.0; extra == \"all\"",
"types-boto3-pinpoint-email<1.43.0,>=1.42.0; extra == \"all\"",
"types-boto3-pinpoint-sms-voice<1.43.0,>=1.42.0; extra == \"all\"",
"types-boto3-pinpoint-sms-voice-v2<1.43.0,>=1.42.0; extra == \"all\"",
"types-boto3-pipes<1.43.0,>=1.42.0; extra == \"all\"",
"types-boto3-polly<1.43.0,>=1.42.0; extra == \"all\"",
"types-boto3-pricing<1.43.0,>=1.42.0; extra == \"all\"",
"types-boto3-proton<1.43.0,>=1.42.0; extra == \"all\"",
"types-boto3-qapps<1.43.0,>=1.42.0; extra == \"all\"",
"types-boto3-qbusiness<1.43.0,>=1.42.0; extra == \"all\"",
"types-boto3-qconnect<1.43.0,>=1.42.0; extra == \"all\"",
"types-boto3-quicksight<1.43.0,>=1.42.0; extra == \"all\"",
"types-boto3-ram<1.43.0,>=1.42.0; extra == \"all\"",
"types-boto3-rbin<1.43.0,>=1.42.0; extra == \"all\"",
"types-boto3-rds<1.43.0,>=1.42.0; extra == \"all\"",
"types-boto3-rds-data<1.43.0,>=1.42.0; extra == \"all\"",
"types-boto3-redshift<1.43.0,>=1.42.0; extra == \"all\"",
"types-boto3-redshift-data<1.43.0,>=1.42.0; extra == \"all\"",
"types-boto3-redshift-serverless<1.43.0,>=1.42.0; extra == \"all\"",
"types-boto3-rekognition<1.43.0,>=1.42.0; extra == \"all\"",
"types-boto3-repostspace<1.43.0,>=1.42.0; extra == \"all\"",
"types-boto3-resiliencehub<1.43.0,>=1.42.0; extra == \"all\"",
"types-boto3-resource-explorer-2<1.43.0,>=1.42.0; extra == \"all\"",
"types-boto3-resource-groups<1.43.0,>=1.42.0; extra == \"all\"",
"types-boto3-resourcegroupstaggingapi<1.43.0,>=1.42.0; extra == \"all\"",
"types-boto3-rolesanywhere<1.43.0,>=1.42.0; extra == \"all\"",
"types-boto3-route53<1.43.0,>=1.42.0; extra == \"all\"",
"types-boto3-route53-recovery-cluster<1.43.0,>=1.42.0; extra == \"all\"",
"types-boto3-route53-recovery-control-config<1.43.0,>=1.42.0; extra == \"all\"",
"types-boto3-route53-recovery-readiness<1.43.0,>=1.42.0; extra == \"all\"",
"types-boto3-route53domains<1.43.0,>=1.42.0; extra == \"all\"",
"types-boto3-route53globalresolver<1.43.0,>=1.42.0; extra == \"all\"",
"types-boto3-route53profiles<1.43.0,>=1.42.0; extra == \"all\"",
"types-boto3-route53resolver<1.43.0,>=1.42.0; extra == \"all\"",
"types-boto3-rtbfabric<1.43.0,>=1.42.0; extra == \"all\"",
"types-boto3-rum<1.43.0,>=1.42.0; extra == \"all\"",
"types-boto3-s3<1.43.0,>=1.42.0; extra == \"all\"",
"types-boto3-s3control<1.43.0,>=1.42.0; extra == \"all\"",
"types-boto3-s3outposts<1.43.0,>=1.42.0; extra == \"all\"",
"types-boto3-s3tables<1.43.0,>=1.42.0; extra == \"all\"",
"types-boto3-s3vectors<1.43.0,>=1.42.0; extra == \"all\"",
"types-boto3-sagemaker<1.43.0,>=1.42.0; extra == \"all\"",
"types-boto3-sagemaker-a2i-runtime<1.43.0,>=1.42.0; extra == \"all\"",
"types-boto3-sagemaker-edge<1.43.0,>=1.42.0; extra == \"all\"",
"types-boto3-sagemaker-featurestore-runtime<1.43.0,>=1.42.0; extra == \"all\"",
"types-boto3-sagemaker-geospatial<1.43.0,>=1.42.0; extra == \"all\"",
"types-boto3-sagemaker-metrics<1.43.0,>=1.42.0; extra == \"all\"",
"types-boto3-sagemaker-runtime<1.43.0,>=1.42.0; extra == \"all\"",
"types-boto3-savingsplans<1.43.0,>=1.42.0; extra == \"all\"",
"types-boto3-scheduler<1.43.0,>=1.42.0; extra == \"all\"",
"types-boto3-schemas<1.43.0,>=1.42.0; extra == \"all\"",
"types-boto3-sdb<1.43.0,>=1.42.0; extra == \"all\"",
"types-boto3-secretsmanager<1.43.0,>=1.42.0; extra == \"all\"",
"types-boto3-security-ir<1.43.0,>=1.42.0; extra == \"all\"",
"types-boto3-securityhub<1.43.0,>=1.42.0; extra == \"all\"",
"types-boto3-securitylake<1.43.0,>=1.42.0; extra == \"all\"",
"types-boto3-serverlessrepo<1.43.0,>=1.42.0; extra == \"all\"",
"types-boto3-service-quotas<1.43.0,>=1.42.0; extra == \"all\"",
"types-boto3-servicecatalog<1.43.0,>=1.42.0; extra == \"all\"",
"types-boto3-servicecatalog-appregistry<1.43.0,>=1.42.0; extra == \"all\"",
"types-boto3-servicediscovery<1.43.0,>=1.42.0; extra == \"all\"",
"types-boto3-ses<1.43.0,>=1.42.0; extra == \"all\"",
"types-boto3-sesv2<1.43.0,>=1.42.0; extra == \"all\"",
"types-boto3-shield<1.43.0,>=1.42.0; extra == \"all\"",
"types-boto3-signer<1.43.0,>=1.42.0; extra == \"all\"",
"types-boto3-signer-data<1.43.0,>=1.42.0; extra == \"all\"",
"types-boto3-signin<1.43.0,>=1.42.0; extra == \"all\"",
"types-boto3-simspaceweaver<1.43.0,>=1.42.0; extra == \"all\"",
"types-boto3-snow-device-management<1.43.0,>=1.42.0; extra == \"all\"",
"types-boto3-snowball<1.43.0,>=1.42.0; extra == \"all\"",
"types-boto3-sns<1.43.0,>=1.42.0; extra == \"all\"",
"types-boto3-socialmessaging<1.43.0,>=1.42.0; extra == \"all\"",
"types-boto3-sqs<1.43.0,>=1.42.0; extra == \"all\"",
"types-boto3-ssm<1.43.0,>=1.42.0; extra == \"all\"",
"types-boto3-ssm-contacts<1.43.0,>=1.42.0; extra == \"all\"",
"types-boto3-ssm-guiconnect<1.43.0,>=1.42.0; extra == \"all\"",
"types-boto3-ssm-incidents<1.43.0,>=1.42.0; extra == \"all\"",
"types-boto3-ssm-quicksetup<1.43.0,>=1.42.0; extra == \"all\"",
"types-boto3-ssm-sap<1.43.0,>=1.42.0; extra == \"all\"",
"types-boto3-sso<1.43.0,>=1.42.0; extra == \"all\"",
"types-boto3-sso-admin<1.43.0,>=1.42.0; extra == \"all\"",
"types-boto3-sso-oidc<1.43.0,>=1.42.0; extra == \"all\"",
"types-boto3-stepfunctions<1.43.0,>=1.42.0; extra == \"all\"",
"types-boto3-storagegateway<1.43.0,>=1.42.0; extra == \"all\"",
"types-boto3-sts<1.43.0,>=1.42.0; extra == \"all\"",
"types-boto3-supplychain<1.43.0,>=1.42.0; extra == \"all\"",
"types-boto3-support<1.43.0,>=1.42.0; extra == \"all\"",
"types-boto3-support-app<1.43.0,>=1.42.0; extra == \"all\"",
"types-boto3-swf<1.43.0,>=1.42.0; extra == \"all\"",
"types-boto3-synthetics<1.43.0,>=1.42.0; extra == \"all\"",
"types-boto3-taxsettings<1.43.0,>=1.42.0; extra == \"all\"",
"types-boto3-textract<1.43.0,>=1.42.0; extra == \"all\"",
"types-boto3-timestream-influxdb<1.43.0,>=1.42.0; extra == \"all\"",
"types-boto3-timestream-query<1.43.0,>=1.42.0; extra == \"all\"",
"types-boto3-timestream-write<1.43.0,>=1.42.0; extra == \"all\"",
"types-boto3-tnb<1.43.0,>=1.42.0; extra == \"all\"",
"types-boto3-transcribe<1.43.0,>=1.42.0; extra == \"all\"",
"types-boto3-transfer<1.43.0,>=1.42.0; extra == \"all\"",
"types-boto3-translate<1.43.0,>=1.42.0; extra == \"all\"",
"types-boto3-trustedadvisor<1.43.0,>=1.42.0; extra == \"all\"",
"types-boto3-verifiedpermissions<1.43.0,>=1.42.0; extra == \"all\"",
"types-boto3-voice-id<1.43.0,>=1.42.0; extra == \"all\"",
"types-boto3-vpc-lattice<1.43.0,>=1.42.0; extra == \"all\"",
"types-boto3-waf<1.43.0,>=1.42.0; extra == \"all\"",
"types-boto3-waf-regional<1.43.0,>=1.42.0; extra == \"all\"",
"types-boto3-wafv2<1.43.0,>=1.42.0; extra == \"all\"",
"types-boto3-wellarchitected<1.43.0,>=1.42.0; extra == \"all\"",
"types-boto3-wickr<1.43.0,>=1.42.0; extra == \"all\"",
"types-boto3-wisdom<1.43.0,>=1.42.0; extra == \"all\"",
"types-boto3-workdocs<1.43.0,>=1.42.0; extra == \"all\"",
"types-boto3-workmail<1.43.0,>=1.42.0; extra == \"all\"",
"types-boto3-workmailmessageflow<1.43.0,>=1.42.0; extra == \"all\"",
"types-boto3-workspaces<1.43.0,>=1.42.0; extra == \"all\"",
"types-boto3-workspaces-instances<1.43.0,>=1.42.0; extra == \"all\"",
"types-boto3-workspaces-thin-client<1.43.0,>=1.42.0; extra == \"all\"",
"types-boto3-workspaces-web<1.43.0,>=1.42.0; extra == \"all\"",
"types-boto3-xray<1.43.0,>=1.42.0; extra == \"all\"",
"types-boto3-cloudformation<1.43.0,>=1.42.0; extra == \"essential\"",
"types-boto3-dynamodb<1.43.0,>=1.42.0; extra == \"essential\"",
"types-boto3-ec2<1.43.0,>=1.42.0; extra == \"essential\"",
"types-boto3-lambda<1.43.0,>=1.42.0; extra == \"essential\"",
"types-boto3-rds<1.43.0,>=1.42.0; extra == \"essential\"",
"types-boto3-s3<1.43.0,>=1.42.0; extra == \"essential\"",
"types-boto3-sqs<1.43.0,>=1.42.0; extra == \"essential\"",
"types-boto3-accessanalyzer<1.43.0,>=1.42.0; extra == \"accessanalyzer\"",
"types-boto3-account<1.43.0,>=1.42.0; extra == \"account\"",
"types-boto3-acm<1.43.0,>=1.42.0; extra == \"acm\"",
"types-boto3-acm-pca<1.43.0,>=1.42.0; extra == \"acm-pca\"",
"types-boto3-aiops<1.43.0,>=1.42.0; extra == \"aiops\"",
"types-boto3-amp<1.43.0,>=1.42.0; extra == \"amp\"",
"types-boto3-amplify<1.43.0,>=1.42.0; extra == \"amplify\"",
"types-boto3-amplifybackend<1.43.0,>=1.42.0; extra == \"amplifybackend\"",
"types-boto3-amplifyuibuilder<1.43.0,>=1.42.0; extra == \"amplifyuibuilder\"",
"types-boto3-apigateway<1.43.0,>=1.42.0; extra == \"apigateway\"",
"types-boto3-apigatewaymanagementapi<1.43.0,>=1.42.0; extra == \"apigatewaymanagementapi\"",
"types-boto3-apigatewayv2<1.43.0,>=1.42.0; extra == \"apigatewayv2\"",
"types-boto3-appconfig<1.43.0,>=1.42.0; extra == \"appconfig\"",
"types-boto3-appconfigdata<1.43.0,>=1.42.0; extra == \"appconfigdata\"",
"types-boto3-appfabric<1.43.0,>=1.42.0; extra == \"appfabric\"",
"types-boto3-appflow<1.43.0,>=1.42.0; extra == \"appflow\"",
"types-boto3-appintegrations<1.43.0,>=1.42.0; extra == \"appintegrations\"",
"types-boto3-application-autoscaling<1.43.0,>=1.42.0; extra == \"application-autoscaling\"",
"types-boto3-application-insights<1.43.0,>=1.42.0; extra == \"application-insights\"",
"types-boto3-application-signals<1.43.0,>=1.42.0; extra == \"application-signals\"",
"types-boto3-applicationcostprofiler<1.43.0,>=1.42.0; extra == \"applicationcostprofiler\"",
"types-boto3-appmesh<1.43.0,>=1.42.0; extra == \"appmesh\"",
"types-boto3-apprunner<1.43.0,>=1.42.0; extra == \"apprunner\"",
"types-boto3-appstream<1.43.0,>=1.42.0; extra == \"appstream\"",
"types-boto3-appsync<1.43.0,>=1.42.0; extra == \"appsync\"",
"types-boto3-arc-region-switch<1.43.0,>=1.42.0; extra == \"arc-region-switch\"",
"types-boto3-arc-zonal-shift<1.43.0,>=1.42.0; extra == \"arc-zonal-shift\"",
"types-boto3-artifact<1.43.0,>=1.42.0; extra == \"artifact\"",
"types-boto3-athena<1.43.0,>=1.42.0; extra == \"athena\"",
"types-boto3-auditmanager<1.43.0,>=1.42.0; extra == \"auditmanager\"",
"types-boto3-autoscaling<1.43.0,>=1.42.0; extra == \"autoscaling\"",
"types-boto3-autoscaling-plans<1.43.0,>=1.42.0; extra == \"autoscaling-plans\"",
"types-boto3-b2bi<1.43.0,>=1.42.0; extra == \"b2bi\"",
"types-boto3-backup<1.43.0,>=1.42.0; extra == \"backup\"",
"types-boto3-backup-gateway<1.43.0,>=1.42.0; extra == \"backup-gateway\"",
"types-boto3-backupsearch<1.43.0,>=1.42.0; extra == \"backupsearch\"",
"types-boto3-batch<1.43.0,>=1.42.0; extra == \"batch\"",
"types-boto3-bcm-dashboards<1.43.0,>=1.42.0; extra == \"bcm-dashboards\"",
"types-boto3-bcm-data-exports<1.43.0,>=1.42.0; extra == \"bcm-data-exports\"",
"types-boto3-bcm-pricing-calculator<1.43.0,>=1.42.0; extra == \"bcm-pricing-calculator\"",
"types-boto3-bcm-recommended-actions<1.43.0,>=1.42.0; extra == \"bcm-recommended-actions\"",
"types-boto3-bedrock<1.43.0,>=1.42.0; extra == \"bedrock\"",
"types-boto3-bedrock-agent<1.43.0,>=1.42.0; extra == \"bedrock-agent\"",
"types-boto3-bedrock-agent-runtime<1.43.0,>=1.42.0; extra == \"bedrock-agent-runtime\"",
"types-boto3-bedrock-agentcore<1.43.0,>=1.42.0; extra == \"bedrock-agentcore\"",
"types-boto3-bedrock-agentcore-control<1.43.0,>=1.42.0; extra == \"bedrock-agentcore-control\"",
"types-boto3-bedrock-data-automation<1.43.0,>=1.42.0; extra == \"bedrock-data-automation\"",
"types-boto3-bedrock-data-automation-runtime<1.43.0,>=1.42.0; extra == \"bedrock-data-automation-runtime\"",
"types-boto3-bedrock-runtime<1.43.0,>=1.42.0; extra == \"bedrock-runtime\"",
"types-boto3-billing<1.43.0,>=1.42.0; extra == \"billing\"",
"types-boto3-billingconductor<1.43.0,>=1.42.0; extra == \"billingconductor\"",
"types-boto3-braket<1.43.0,>=1.42.0; extra == \"braket\"",
"types-boto3-budgets<1.43.0,>=1.42.0; extra == \"budgets\"",
"types-boto3-ce<1.43.0,>=1.42.0; extra == \"ce\"",
"types-boto3-chatbot<1.43.0,>=1.42.0; extra == \"chatbot\"",
"types-boto3-chime<1.43.0,>=1.42.0; extra == \"chime\"",
"types-boto3-chime-sdk-identity<1.43.0,>=1.42.0; extra == \"chime-sdk-identity\"",
"types-boto3-chime-sdk-media-pipelines<1.43.0,>=1.42.0; extra == \"chime-sdk-media-pipelines\"",
"types-boto3-chime-sdk-meetings<1.43.0,>=1.42.0; extra == \"chime-sdk-meetings\"",
"types-boto3-chime-sdk-messaging<1.43.0,>=1.42.0; extra == \"chime-sdk-messaging\"",
"types-boto3-chime-sdk-voice<1.43.0,>=1.42.0; extra == \"chime-sdk-voice\"",
"types-boto3-cleanrooms<1.43.0,>=1.42.0; extra == \"cleanrooms\"",
"types-boto3-cleanroomsml<1.43.0,>=1.42.0; extra == \"cleanroomsml\"",
"types-boto3-cloud9<1.43.0,>=1.42.0; extra == \"cloud9\"",
"types-boto3-cloudcontrol<1.43.0,>=1.42.0; extra == \"cloudcontrol\"",
"types-boto3-clouddirectory<1.43.0,>=1.42.0; extra == \"clouddirectory\"",
"types-boto3-cloudformation<1.43.0,>=1.42.0; extra == \"cloudformation\"",
"types-boto3-cloudfront<1.43.0,>=1.42.0; extra == \"cloudfront\"",
"types-boto3-cloudfront-keyvaluestore<1.43.0,>=1.42.0; extra == \"cloudfront-keyvaluestore\"",
"types-boto3-cloudhsm<1.43.0,>=1.42.0; extra == \"cloudhsm\"",
"types-boto3-cloudhsmv2<1.43.0,>=1.42.0; extra == \"cloudhsmv2\"",
"types-boto3-cloudsearch<1.43.0,>=1.42.0; extra == \"cloudsearch\"",
"types-boto3-cloudsearchdomain<1.43.0,>=1.42.0; extra == \"cloudsearchdomain\"",
"types-boto3-cloudtrail<1.43.0,>=1.42.0; extra == \"cloudtrail\"",
"types-boto3-cloudtrail-data<1.43.0,>=1.42.0; extra == \"cloudtrail-data\"",
"types-boto3-cloudwatch<1.43.0,>=1.42.0; extra == \"cloudwatch\"",
"types-boto3-codeartifact<1.43.0,>=1.42.0; extra == \"codeartifact\"",
"types-boto3-codebuild<1.43.0,>=1.42.0; extra == \"codebuild\"",
"types-boto3-codecatalyst<1.43.0,>=1.42.0; extra == \"codecatalyst\"",
"types-boto3-codecommit<1.43.0,>=1.42.0; extra == \"codecommit\"",
"types-boto3-codeconnections<1.43.0,>=1.42.0; extra == \"codeconnections\"",
"types-boto3-codedeploy<1.43.0,>=1.42.0; extra == \"codedeploy\"",
"types-boto3-codeguru-reviewer<1.43.0,>=1.42.0; extra == \"codeguru-reviewer\"",
"types-boto3-codeguru-security<1.43.0,>=1.42.0; extra == \"codeguru-security\"",
"types-boto3-codeguruprofiler<1.43.0,>=1.42.0; extra == \"codeguruprofiler\"",
"types-boto3-codepipeline<1.43.0,>=1.42.0; extra == \"codepipeline\"",
"types-boto3-codestar-connections<1.43.0,>=1.42.0; extra == \"codestar-connections\"",
"types-boto3-codestar-notifications<1.43.0,>=1.42.0; extra == \"codestar-notifications\"",
"types-boto3-cognito-identity<1.43.0,>=1.42.0; extra == \"cognito-identity\"",
"types-boto3-cognito-idp<1.43.0,>=1.42.0; extra == \"cognito-idp\"",
"types-boto3-cognito-sync<1.43.0,>=1.42.0; extra == \"cognito-sync\"",
"types-boto3-comprehend<1.43.0,>=1.42.0; extra == \"comprehend\"",
"types-boto3-comprehendmedical<1.43.0,>=1.42.0; extra == \"comprehendmedical\"",
"types-boto3-compute-optimizer<1.43.0,>=1.42.0; extra == \"compute-optimizer\"",
"types-boto3-compute-optimizer-automation<1.43.0,>=1.42.0; extra == \"compute-optimizer-automation\"",
"types-boto3-config<1.43.0,>=1.42.0; extra == \"config\"",
"types-boto3-connect<1.43.0,>=1.42.0; extra == \"connect\"",
"types-boto3-connect-contact-lens<1.43.0,>=1.42.0; extra == \"connect-contact-lens\"",
"types-boto3-connectcampaigns<1.43.0,>=1.42.0; extra == \"connectcampaigns\"",
"types-boto3-connectcampaignsv2<1.43.0,>=1.42.0; extra == \"connectcampaignsv2\"",
"types-boto3-connectcases<1.43.0,>=1.42.0; extra == \"connectcases\"",
"types-boto3-connectparticipant<1.43.0,>=1.42.0; extra == \"connectparticipant\"",
"types-boto3-controlcatalog<1.43.0,>=1.42.0; extra == \"controlcatalog\"",
"types-boto3-controltower<1.43.0,>=1.42.0; extra == \"controltower\"",
"types-boto3-cost-optimization-hub<1.43.0,>=1.42.0; extra == \"cost-optimization-hub\"",
"types-boto3-cur<1.43.0,>=1.42.0; extra == \"cur\"",
"types-boto3-customer-profiles<1.43.0,>=1.42.0; extra == \"customer-profiles\"",
"types-boto3-databrew<1.43.0,>=1.42.0; extra == \"databrew\"",
"types-boto3-dataexchange<1.43.0,>=1.42.0; extra == \"dataexchange\"",
"types-boto3-datapipeline<1.43.0,>=1.42.0; extra == \"datapipeline\"",
"types-boto3-datasync<1.43.0,>=1.42.0; extra == \"datasync\"",
"types-boto3-datazone<1.43.0,>=1.42.0; extra == \"datazone\"",
"types-boto3-dax<1.43.0,>=1.42.0; extra == \"dax\"",
"types-boto3-deadline<1.43.0,>=1.42.0; extra == \"deadline\"",
"types-boto3-detective<1.43.0,>=1.42.0; extra == \"detective\"",
"types-boto3-devicefarm<1.43.0,>=1.42.0; extra == \"devicefarm\"",
"types-boto3-devops-guru<1.43.0,>=1.42.0; extra == \"devops-guru\"",
"types-boto3-directconnect<1.43.0,>=1.42.0; extra == \"directconnect\"",
"types-boto3-discovery<1.43.0,>=1.42.0; extra == \"discovery\"",
"types-boto3-dlm<1.43.0,>=1.42.0; extra == \"dlm\"",
"types-boto3-dms<1.43.0,>=1.42.0; extra == \"dms\"",
"types-boto3-docdb<1.43.0,>=1.42.0; extra == \"docdb\"",
"types-boto3-docdb-elastic<1.43.0,>=1.42.0; extra == \"docdb-elastic\"",
"types-boto3-drs<1.43.0,>=1.42.0; extra == \"drs\"",
"types-boto3-ds<1.43.0,>=1.42.0; extra == \"ds\"",
"types-boto3-ds-data<1.43.0,>=1.42.0; extra == \"ds-data\"",
"types-boto3-dsql<1.43.0,>=1.42.0; extra == \"dsql\"",
"types-boto3-dynamodb<1.43.0,>=1.42.0; extra == \"dynamodb\"",
"types-boto3-dynamodbstreams<1.43.0,>=1.42.0; extra == \"dynamodbstreams\"",
"types-boto3-ebs<1.43.0,>=1.42.0; extra == \"ebs\"",
"types-boto3-ec2<1.43.0,>=1.42.0; extra == \"ec2\"",
"types-boto3-ec2-instance-connect<1.43.0,>=1.42.0; extra == \"ec2-instance-connect\"",
"types-boto3-ecr<1.43.0,>=1.42.0; extra == \"ecr\"",
"types-boto3-ecr-public<1.43.0,>=1.42.0; extra == \"ecr-public\"",
"types-boto3-ecs<1.43.0,>=1.42.0; extra == \"ecs\"",
"types-boto3-efs<1.43.0,>=1.42.0; extra == \"efs\"",
"types-boto3-eks<1.43.0,>=1.42.0; extra == \"eks\"",
"types-boto3-eks-auth<1.43.0,>=1.42.0; extra == \"eks-auth\"",
"types-boto3-elasticache<1.43.0,>=1.42.0; extra == \"elasticache\"",
"types-boto3-elasticbeanstalk<1.43.0,>=1.42.0; extra == \"elasticbeanstalk\"",
"types-boto3-elb<1.43.0,>=1.42.0; extra == \"elb\"",
"types-boto3-elbv2<1.43.0,>=1.42.0; extra == \"elbv2\"",
"types-boto3-emr<1.43.0,>=1.42.0; extra == \"emr\"",
"types-boto3-emr-containers<1.43.0,>=1.42.0; extra == \"emr-containers\"",
"types-boto3-emr-serverless<1.43.0,>=1.42.0; extra == \"emr-serverless\"",
"types-boto3-entityresolution<1.43.0,>=1.42.0; extra == \"entityresolution\"",
"types-boto3-es<1.43.0,>=1.42.0; extra == \"es\"",
"types-boto3-events<1.43.0,>=1.42.0; extra == \"events\"",
"types-boto3-evs<1.43.0,>=1.42.0; extra == \"evs\"",
"types-boto3-finspace<1.43.0,>=1.42.0; extra == \"finspace\"",
"types-boto3-finspace-data<1.43.0,>=1.42.0; extra == \"finspace-data\"",
"types-boto3-firehose<1.43.0,>=1.42.0; extra == \"firehose\"",
"types-boto3-fis<1.43.0,>=1.42.0; extra == \"fis\"",
"types-boto3-fms<1.43.0,>=1.42.0; extra == \"fms\"",
"types-boto3-forecast<1.43.0,>=1.42.0; extra == \"forecast\"",
"types-boto3-forecastquery<1.43.0,>=1.42.0; extra == \"forecastquery\"",
"types-boto3-frauddetector<1.43.0,>=1.42.0; extra == \"frauddetector\"",
"types-boto3-freetier<1.43.0,>=1.42.0; extra == \"freetier\"",
"types-boto3-fsx<1.43.0,>=1.42.0; extra == \"fsx\"",
"types-boto3-gamelift<1.43.0,>=1.42.0; extra == \"gamelift\"",
"types-boto3-gameliftstreams<1.43.0,>=1.42.0; extra == \"gameliftstreams\"",
"types-boto3-geo-maps<1.43.0,>=1.42.0; extra == \"geo-maps\"",
"types-boto3-geo-places<1.43.0,>=1.42.0; extra == \"geo-places\"",
"types-boto3-geo-routes<1.43.0,>=1.42.0; extra == \"geo-routes\"",
"types-boto3-glacier<1.43.0,>=1.42.0; extra == \"glacier\"",
"types-boto3-globalaccelerator<1.43.0,>=1.42.0; extra == \"globalaccelerator\"",
"types-boto3-glue<1.43.0,>=1.42.0; extra == \"glue\"",
"types-boto3-grafana<1.43.0,>=1.42.0; extra == \"grafana\"",
"types-boto3-greengrass<1.43.0,>=1.42.0; extra == \"greengrass\"",
"types-boto3-greengrassv2<1.43.0,>=1.42.0; extra == \"greengrassv2\"",
"types-boto3-groundstation<1.43.0,>=1.42.0; extra == \"groundstation\"",
"types-boto3-guardduty<1.43.0,>=1.42.0; extra == \"guardduty\"",
"types-boto3-health<1.43.0,>=1.42.0; extra == \"health\"",
"types-boto3-healthlake<1.43.0,>=1.42.0; extra == \"healthlake\"",
"types-boto3-iam<1.43.0,>=1.42.0; extra == \"iam\"",
"types-boto3-identitystore<1.43.0,>=1.42.0; extra == \"identitystore\"",
"types-boto3-imagebuilder<1.43.0,>=1.42.0; extra == \"imagebuilder\"",
"types-boto3-importexport<1.43.0,>=1.42.0; extra == \"importexport\"",
"types-boto3-inspector<1.43.0,>=1.42.0; extra == \"inspector\"",
"types-boto3-inspector-scan<1.43.0,>=1.42.0; extra == \"inspector-scan\"",
"types-boto3-inspector2<1.43.0,>=1.42.0; extra == \"inspector2\"",
"types-boto3-internetmonitor<1.43.0,>=1.42.0; extra == \"internetmonitor\"",
"types-boto3-invoicing<1.43.0,>=1.42.0; extra == \"invoicing\"",
"types-boto3-iot<1.43.0,>=1.42.0; extra == \"iot\"",
"types-boto3-iot-data<1.43.0,>=1.42.0; extra == \"iot-data\"",
"types-boto3-iot-jobs-data<1.43.0,>=1.42.0; extra == \"iot-jobs-data\"",
"types-boto3-iot-managed-integrations<1.43.0,>=1.42.0; extra == \"iot-managed-integrations\"",
"types-boto3-iotdeviceadvisor<1.43.0,>=1.42.0; extra == \"iotdeviceadvisor\"",
"types-boto3-iotevents<1.43.0,>=1.42.0; extra == \"iotevents\"",
"types-boto3-iotevents-data<1.43.0,>=1.42.0; extra == \"iotevents-data\"",
"types-boto3-iotfleetwise<1.43.0,>=1.42.0; extra == \"iotfleetwise\"",
"types-boto3-iotsecuretunneling<1.43.0,>=1.42.0; extra == \"iotsecuretunneling\"",
"types-boto3-iotsitewise<1.43.0,>=1.42.0; extra == \"iotsitewise\"",
"types-boto3-iotthingsgraph<1.43.0,>=1.42.0; extra == \"iotthingsgraph\"",
"types-boto3-iottwinmaker<1.43.0,>=1.42.0; extra == \"iottwinmaker\"",
"types-boto3-iotwireless<1.43.0,>=1.42.0; extra == \"iotwireless\"",
"types-boto3-ivs<1.43.0,>=1.42.0; extra == \"ivs\"",
"types-boto3-ivs-realtime<1.43.0,>=1.42.0; extra == \"ivs-realtime\"",
"types-boto3-ivschat<1.43.0,>=1.42.0; extra == \"ivschat\"",
"types-boto3-kafka<1.43.0,>=1.42.0; extra == \"kafka\"",
"types-boto3-kafkaconnect<1.43.0,>=1.42.0; extra == \"kafkaconnect\"",
"types-boto3-kendra<1.43.0,>=1.42.0; extra == \"kendra\"",
"types-boto3-kendra-ranking<1.43.0,>=1.42.0; extra == \"kendra-ranking\"",
"types-boto3-keyspaces<1.43.0,>=1.42.0; extra == \"keyspaces\"",
"types-boto3-keyspacesstreams<1.43.0,>=1.42.0; extra == \"keyspacesstreams\"",
"types-boto3-kinesis<1.43.0,>=1.42.0; extra == \"kinesis\"",
"types-boto3-kinesis-video-archived-media<1.43.0,>=1.42.0; extra == \"kinesis-video-archived-media\"",
"types-boto3-kinesis-video-media<1.43.0,>=1.42.0; extra == \"kinesis-video-media\"",
"types-boto3-kinesis-video-signaling<1.43.0,>=1.42.0; extra == \"kinesis-video-signaling\"",
"types-boto3-kinesis-video-webrtc-storage<1.43.0,>=1.42.0; extra == \"kinesis-video-webrtc-storage\"",
"types-boto3-kinesisanalytics<1.43.0,>=1.42.0; extra == \"kinesisanalytics\"",
"types-boto3-kinesisanalyticsv2<1.43.0,>=1.42.0; extra == \"kinesisanalyticsv2\"",
"types-boto3-kinesisvideo<1.43.0,>=1.42.0; extra == \"kinesisvideo\"",
"types-boto3-kms<1.43.0,>=1.42.0; extra == \"kms\"",
"types-boto3-lakeformation<1.43.0,>=1.42.0; extra == \"lakeformation\"",
"types-boto3-lambda<1.43.0,>=1.42.0; extra == \"lambda\"",
"types-boto3-launch-wizard<1.43.0,>=1.42.0; extra == \"launch-wizard\"",
"types-boto3-lex-models<1.43.0,>=1.42.0; extra == \"lex-models\"",
"types-boto3-lex-runtime<1.43.0,>=1.42.0; extra == \"lex-runtime\"",
"types-boto3-lexv2-models<1.43.0,>=1.42.0; extra == \"lexv2-models\"",
"types-boto3-lexv2-runtime<1.43.0,>=1.42.0; extra == \"lexv2-runtime\"",
"types-boto3-license-manager<1.43.0,>=1.42.0; extra == \"license-manager\"",
"types-boto3-license-manager-linux-subscriptions<1.43.0,>=1.42.0; extra == \"license-manager-linux-subscriptions\"",
"types-boto3-license-manager-user-subscriptions<1.43.0,>=1.42.0; extra == \"license-manager-user-subscriptions\"",
"types-boto3-lightsail<1.43.0,>=1.42.0; extra == \"lightsail\"",
"types-boto3-location<1.43.0,>=1.42.0; extra == \"location\"",
"types-boto3-logs<1.43.0,>=1.42.0; extra == \"logs\"",
"types-boto3-lookoutequipment<1.43.0,>=1.42.0; extra == \"lookoutequipment\"",
"types-boto3-m2<1.43.0,>=1.42.0; extra == \"m2\"",
"types-boto3-machinelearning<1.43.0,>=1.42.0; extra == \"machinelearning\"",
"types-boto3-macie2<1.43.0,>=1.42.0; extra == \"macie2\"",
"types-boto3-mailmanager<1.43.0,>=1.42.0; extra == \"mailmanager\"",
"types-boto3-managedblockchain<1.43.0,>=1.42.0; extra == \"managedblockchain\"",
"types-boto3-managedblockchain-query<1.43.0,>=1.42.0; extra == \"managedblockchain-query\"",
"types-boto3-marketplace-agreement<1.43.0,>=1.42.0; extra == \"marketplace-agreement\"",
"types-boto3-marketplace-catalog<1.43.0,>=1.42.0; extra == \"marketplace-catalog\"",
"types-boto3-marketplace-deployment<1.43.0,>=1.42.0; extra == \"marketplace-deployment\"",
"types-boto3-marketplace-entitlement<1.43.0,>=1.42.0; extra == \"marketplace-entitlement\"",
"types-boto3-marketplace-reporting<1.43.0,>=1.42.0; extra == \"marketplace-reporting\"",
"types-boto3-marketplacecommerceanalytics<1.43.0,>=1.42.0; extra == \"marketplacecommerceanalytics\"",
"types-boto3-mediaconnect<1.43.0,>=1.42.0; extra == \"mediaconnect\"",
"types-boto3-mediaconvert<1.43.0,>=1.42.0; extra == \"mediaconvert\"",
"types-boto3-medialive<1.43.0,>=1.42.0; extra == \"medialive\"",
"types-boto3-mediapackage<1.43.0,>=1.42.0; extra == \"mediapackage\"",
"types-boto3-mediapackage-vod<1.43.0,>=1.42.0; extra == \"mediapackage-vod\"",
"types-boto3-mediapackagev2<1.43.0,>=1.42.0; extra == \"mediapackagev2\"",
"types-boto3-mediastore<1.43.0,>=1.42.0; extra == \"mediastore\"",
"types-boto3-mediastore-data<1.43.0,>=1.42.0; extra == \"mediastore-data\"",
"types-boto3-mediatailor<1.43.0,>=1.42.0; extra == \"mediatailor\"",
"types-boto3-medical-imaging<1.43.0,>=1.42.0; extra == \"medical-imaging\"",
"types-boto3-memorydb<1.43.0,>=1.42.0; extra == \"memorydb\"",
"types-boto3-meteringmarketplace<1.43.0,>=1.42.0; extra == \"meteringmarketplace\"",
"types-boto3-mgh<1.43.0,>=1.42.0; extra == \"mgh\"",
"types-boto3-mgn<1.43.0,>=1.42.0; extra == \"mgn\"",
"types-boto3-migration-hub-refactor-spaces<1.43.0,>=1.42.0; extra == \"migration-hub-refactor-spaces\"",
"types-boto3-migrationhub-config<1.43.0,>=1.42.0; extra == \"migrationhub-config\"",
"types-boto3-migrationhuborchestrator<1.43.0,>=1.42.0; extra == \"migrationhuborchestrator\"",
"types-boto3-migrationhubstrategy<1.43.0,>=1.42.0; extra == \"migrationhubstrategy\"",
"types-boto3-mpa<1.43.0,>=1.42.0; extra == \"mpa\"",
"types-boto3-mq<1.43.0,>=1.42.0; extra == \"mq\"",
"types-boto3-mturk<1.43.0,>=1.42.0; extra == \"mturk\"",
"types-boto3-mwaa<1.43.0,>=1.42.0; extra == \"mwaa\"",
"types-boto3-mwaa-serverless<1.43.0,>=1.42.0; extra == \"mwaa-serverless\"",
"types-boto3-neptune<1.43.0,>=1.42.0; extra == \"neptune\"",
"types-boto3-neptune-graph<1.43.0,>=1.42.0; extra == \"neptune-graph\"",
"types-boto3-neptunedata<1.43.0,>=1.42.0; extra == \"neptunedata\"",
"types-boto3-network-firewall<1.43.0,>=1.42.0; extra == \"network-firewall\"",
"types-boto3-networkflowmonitor<1.43.0,>=1.42.0; extra == \"networkflowmonitor\"",
"types-boto3-networkmanager<1.43.0,>=1.42.0; extra == \"networkmanager\"",
"types-boto3-networkmonitor<1.43.0,>=1.42.0; extra == \"networkmonitor\"",
"types-boto3-notifications<1.43.0,>=1.42.0; extra == \"notifications\"",
"types-boto3-notificationscontacts<1.43.0,>=1.42.0; extra == \"notificationscontacts\"",
"types-boto3-nova-act<1.43.0,>=1.42.0; extra == \"nova-act\"",
"types-boto3-oam<1.43.0,>=1.42.0; extra == \"oam\"",
"types-boto3-observabilityadmin<1.43.0,>=1.42.0; extra == \"observabilityadmin\"",
"types-boto3-odb<1.43.0,>=1.42.0; extra == \"odb\"",
"types-boto3-omics<1.43.0,>=1.42.0; extra == \"omics\"",
"types-boto3-opensearch<1.43.0,>=1.42.0; extra == \"opensearch\"",
"types-boto3-opensearchserverless<1.43.0,>=1.42.0; extra == \"opensearchserverless\"",
"types-boto3-organizations<1.43.0,>=1.42.0; extra == \"organizations\"",
"types-boto3-osis<1.43.0,>=1.42.0; extra == \"osis\"",
"types-boto3-outposts<1.43.0,>=1.42.0; extra == \"outposts\"",
"types-boto3-panorama<1.43.0,>=1.42.0; extra == \"panorama\"",
"types-boto3-partnercentral-account<1.43.0,>=1.42.0; extra == \"partnercentral-account\"",
"types-boto3-partnercentral-benefits<1.43.0,>=1.42.0; extra == \"partnercentral-benefits\"",
"types-boto3-partnercentral-channel<1.43.0,>=1.42.0; extra == \"partnercentral-channel\"",
"types-boto3-partnercentral-selling<1.43.0,>=1.42.0; extra == \"partnercentral-selling\"",
"types-boto3-payment-cryptography<1.43.0,>=1.42.0; extra == \"payment-cryptography\"",
"types-boto3-payment-cryptography-data<1.43.0,>=1.42.0; extra == \"payment-cryptography-data\"",
"types-boto3-pca-connector-ad<1.43.0,>=1.42.0; extra == \"pca-connector-ad\"",
"types-boto3-pca-connector-scep<1.43.0,>=1.42.0; extra == \"pca-connector-scep\"",
"types-boto3-pcs<1.43.0,>=1.42.0; extra == \"pcs\"",
"types-boto3-personalize<1.43.0,>=1.42.0; extra == \"personalize\"",
"types-boto3-personalize-events<1.43.0,>=1.42.0; extra == \"personalize-events\"",
"types-boto3-personalize-runtime<1.43.0,>=1.42.0; extra == \"personalize-runtime\"",
"types-boto3-pi<1.43.0,>=1.42.0; extra == \"pi\"",
"types-boto3-pinpoint<1.43.0,>=1.42.0; extra == \"pinpoint\"",
"types-boto3-pinpoint-email<1.43.0,>=1.42.0; extra == \"pinpoint-email\"",
"types-boto3-pinpoint-sms-voice<1.43.0,>=1.42.0; extra == \"pinpoint-sms-voice\"",
"types-boto3-pinpoint-sms-voice-v2<1.43.0,>=1.42.0; extra == \"pinpoint-sms-voice-v2\"",
"types-boto3-pipes<1.43.0,>=1.42.0; extra == \"pipes\"",
"types-boto3-polly<1.43.0,>=1.42.0; extra == \"polly\"",
"types-boto3-pricing<1.43.0,>=1.42.0; extra == \"pricing\"",
"types-boto3-proton<1.43.0,>=1.42.0; extra == \"proton\"",
"types-boto3-qapps<1.43.0,>=1.42.0; extra == \"qapps\"",
"types-boto3-qbusiness<1.43.0,>=1.42.0; extra == \"qbusiness\"",
"types-boto3-qconnect<1.43.0,>=1.42.0; extra == \"qconnect\"",
"types-boto3-quicksight<1.43.0,>=1.42.0; extra == \"quicksight\"",
"types-boto3-ram<1.43.0,>=1.42.0; extra == \"ram\"",
"types-boto3-rbin<1.43.0,>=1.42.0; extra == \"rbin\"",
"types-boto3-rds<1.43.0,>=1.42.0; extra == \"rds\"",
"types-boto3-rds-data<1.43.0,>=1.42.0; extra == \"rds-data\"",
"types-boto3-redshift<1.43.0,>=1.42.0; extra == \"redshift\"",
"types-boto3-redshift-data<1.43.0,>=1.42.0; extra == \"redshift-data\"",
"types-boto3-redshift-serverless<1.43.0,>=1.42.0; extra == \"redshift-serverless\"",
"types-boto3-rekognition<1.43.0,>=1.42.0; extra == \"rekognition\"",
"types-boto3-repostspace<1.43.0,>=1.42.0; extra == \"repostspace\"",
"types-boto3-resiliencehub<1.43.0,>=1.42.0; extra == \"resiliencehub\"",
"types-boto3-resource-explorer-2<1.43.0,>=1.42.0; extra == \"resource-explorer-2\"",
"types-boto3-resource-groups<1.43.0,>=1.42.0; extra == \"resource-groups\"",
"types-boto3-resourcegroupstaggingapi<1.43.0,>=1.42.0; extra == \"resourcegroupstaggingapi\"",
"types-boto3-rolesanywhere<1.43.0,>=1.42.0; extra == \"rolesanywhere\"",
"types-boto3-route53<1.43.0,>=1.42.0; extra == \"route53\"",
"types-boto3-route53-recovery-cluster<1.43.0,>=1.42.0; extra == \"route53-recovery-cluster\"",
"types-boto3-route53-recovery-control-config<1.43.0,>=1.42.0; extra == \"route53-recovery-control-config\"",
"types-boto3-route53-recovery-readiness<1.43.0,>=1.42.0; extra == \"route53-recovery-readiness\"",
"types-boto3-route53domains<1.43.0,>=1.42.0; extra == \"route53domains\"",
"types-boto3-route53globalresolver<1.43.0,>=1.42.0; extra == \"route53globalresolver\"",
"types-boto3-route53profiles<1.43.0,>=1.42.0; extra == \"route53profiles\"",
"types-boto3-route53resolver<1.43.0,>=1.42.0; extra == \"route53resolver\"",
"types-boto3-rtbfabric<1.43.0,>=1.42.0; extra == \"rtbfabric\"",
"types-boto3-rum<1.43.0,>=1.42.0; extra == \"rum\"",
"types-boto3-s3<1.43.0,>=1.42.0; extra == \"s3\"",
"types-boto3-s3control<1.43.0,>=1.42.0; extra == \"s3control\"",
"types-boto3-s3outposts<1.43.0,>=1.42.0; extra == \"s3outposts\"",
"types-boto3-s3tables<1.43.0,>=1.42.0; extra == \"s3tables\"",
"types-boto3-s3vectors<1.43.0,>=1.42.0; extra == \"s3vectors\"",
"types-boto3-sagemaker<1.43.0,>=1.42.0; extra == \"sagemaker\"",
"types-boto3-sagemaker-a2i-runtime<1.43.0,>=1.42.0; extra == \"sagemaker-a2i-runtime\"",
"types-boto3-sagemaker-edge<1.43.0,>=1.42.0; extra == \"sagemaker-edge\"",
"types-boto3-sagemaker-featurestore-runtime<1.43.0,>=1.42.0; extra == \"sagemaker-featurestore-runtime\"",
"types-boto3-sagemaker-geospatial<1.43.0,>=1.42.0; extra == \"sagemaker-geospatial\"",
"types-boto3-sagemaker-metrics<1.43.0,>=1.42.0; extra == \"sagemaker-metrics\"",
"types-boto3-sagemaker-runtime<1.43.0,>=1.42.0; extra == \"sagemaker-runtime\"",
"types-boto3-savingsplans<1.43.0,>=1.42.0; extra == \"savingsplans\"",
"types-boto3-scheduler<1.43.0,>=1.42.0; extra == \"scheduler\"",
"types-boto3-schemas<1.43.0,>=1.42.0; extra == \"schemas\"",
"types-boto3-sdb<1.43.0,>=1.42.0; extra == \"sdb\"",
"types-boto3-secretsmanager<1.43.0,>=1.42.0; extra == \"secretsmanager\"",
"types-boto3-security-ir<1.43.0,>=1.42.0; extra == \"security-ir\"",
"types-boto3-securityhub<1.43.0,>=1.42.0; extra == \"securityhub\"",
"types-boto3-securitylake<1.43.0,>=1.42.0; extra == \"securitylake\"",
"types-boto3-serverlessrepo<1.43.0,>=1.42.0; extra == \"serverlessrepo\"",
"types-boto3-service-quotas<1.43.0,>=1.42.0; extra == \"service-quotas\"",
"types-boto3-servicecatalog<1.43.0,>=1.42.0; extra == \"servicecatalog\"",
"types-boto3-servicecatalog-appregistry<1.43.0,>=1.42.0; extra == \"servicecatalog-appregistry\"",
"types-boto3-servicediscovery<1.43.0,>=1.42.0; extra == \"servicediscovery\"",
"types-boto3-ses<1.43.0,>=1.42.0; extra == \"ses\"",
"types-boto3-sesv2<1.43.0,>=1.42.0; extra == \"sesv2\"",
"types-boto3-shield<1.43.0,>=1.42.0; extra == \"shield\"",
"types-boto3-signer<1.43.0,>=1.42.0; extra == \"signer\"",
"types-boto3-signer-data<1.43.0,>=1.42.0; extra == \"signer-data\"",
"types-boto3-signin<1.43.0,>=1.42.0; extra == \"signin\"",
"types-boto3-simspaceweaver<1.43.0,>=1.42.0; extra == \"simspaceweaver\"",
"types-boto3-snow-device-management<1.43.0,>=1.42.0; extra == \"snow-device-management\"",
"types-boto3-snowball<1.43.0,>=1.42.0; extra == \"snowball\"",
"types-boto3-sns<1.43.0,>=1.42.0; extra == \"sns\"",
"types-boto3-socialmessaging<1.43.0,>=1.42.0; extra == \"socialmessaging\"",
"types-boto3-sqs<1.43.0,>=1.42.0; extra == \"sqs\"",
"types-boto3-ssm<1.43.0,>=1.42.0; extra == \"ssm\"",
"types-boto3-ssm-contacts<1.43.0,>=1.42.0; extra == \"ssm-contacts\"",
"types-boto3-ssm-guiconnect<1.43.0,>=1.42.0; extra == \"ssm-guiconnect\"",
"types-boto3-ssm-incidents<1.43.0,>=1.42.0; extra == \"ssm-incidents\"",
"types-boto3-ssm-quicksetup<1.43.0,>=1.42.0; extra == \"ssm-quicksetup\"",
"types-boto3-ssm-sap<1.43.0,>=1.42.0; extra == \"ssm-sap\"",
"types-boto3-sso<1.43.0,>=1.42.0; extra == \"sso\"",
"types-boto3-sso-admin<1.43.0,>=1.42.0; extra == \"sso-admin\"",
"types-boto3-sso-oidc<1.43.0,>=1.42.0; extra == \"sso-oidc\"",
"types-boto3-stepfunctions<1.43.0,>=1.42.0; extra == \"stepfunctions\"",
"types-boto3-storagegateway<1.43.0,>=1.42.0; extra == \"storagegateway\"",
"types-boto3-sts<1.43.0,>=1.42.0; extra == \"sts\"",
"types-boto3-supplychain<1.43.0,>=1.42.0; extra == \"supplychain\"",
"types-boto3-support<1.43.0,>=1.42.0; extra == \"support\"",
"types-boto3-support-app<1.43.0,>=1.42.0; extra == \"support-app\"",
"types-boto3-swf<1.43.0,>=1.42.0; extra == \"swf\"",
"types-boto3-synthetics<1.43.0,>=1.42.0; extra == \"synthetics\"",
"types-boto3-taxsettings<1.43.0,>=1.42.0; extra == \"taxsettings\"",
"types-boto3-textract<1.43.0,>=1.42.0; extra == \"textract\"",
"types-boto3-timestream-influxdb<1.43.0,>=1.42.0; extra == \"timestream-influxdb\"",
"types-boto3-timestream-query<1.43.0,>=1.42.0; extra == \"timestream-query\"",
"types-boto3-timestream-write<1.43.0,>=1.42.0; extra == \"timestream-write\"",
"types-boto3-tnb<1.43.0,>=1.42.0; extra == \"tnb\"",
"types-boto3-transcribe<1.43.0,>=1.42.0; extra == \"transcribe\"",
"types-boto3-transfer<1.43.0,>=1.42.0; extra == \"transfer\"",
"types-boto3-translate<1.43.0,>=1.42.0; extra == \"translate\"",
"types-boto3-trustedadvisor<1.43.0,>=1.42.0; extra == \"trustedadvisor\"",
"types-boto3-verifiedpermissions<1.43.0,>=1.42.0; extra == \"verifiedpermissions\"",
"types-boto3-voice-id<1.43.0,>=1.42.0; extra == \"voice-id\"",
"types-boto3-vpc-lattice<1.43.0,>=1.42.0; extra == \"vpc-lattice\"",
"types-boto3-waf<1.43.0,>=1.42.0; extra == \"waf\"",
"types-boto3-waf-regional<1.43.0,>=1.42.0; extra == \"waf-regional\"",
"types-boto3-wafv2<1.43.0,>=1.42.0; extra == \"wafv2\"",
"types-boto3-wellarchitected<1.43.0,>=1.42.0; extra == \"wellarchitected\"",
"types-boto3-wickr<1.43.0,>=1.42.0; extra == \"wickr\"",
"types-boto3-wisdom<1.43.0,>=1.42.0; extra == \"wisdom\"",
"types-boto3-workdocs<1.43.0,>=1.42.0; extra == \"workdocs\"",
"types-boto3-workmail<1.43.0,>=1.42.0; extra == \"workmail\"",
"types-boto3-workmailmessageflow<1.43.0,>=1.42.0; extra == \"workmailmessageflow\"",
"types-boto3-workspaces<1.43.0,>=1.42.0; extra == \"workspaces\"",
"types-boto3-workspaces-instances<1.43.0,>=1.42.0; extra == \"workspaces-instances\"",
"types-boto3-workspaces-thin-client<1.43.0,>=1.42.0; extra == \"workspaces-thin-client\"",
"types-boto3-workspaces-web<1.43.0,>=1.42.0; extra == \"workspaces-web\"",
"types-boto3-xray<1.43.0,>=1.42.0; extra == \"xray\""
] | [] | [] | [] | [
"Homepage, https://github.com/youtype/mypy_boto3_builder",
"Documentation, https://youtype.github.io/types_boto3_docs/",
"Source, https://github.com/youtype/mypy_boto3_builder",
"Tracker, https://github.com/youtype/mypy_boto3_builder/issues"
] | twine/6.2.0 CPython/3.13.11 | 2026-02-21T02:54:56.204394 | types_boto3_lite-1.42.54-py3-none-any.whl | 42,656 | 7c/86/4ac580c9ab595f83411491871a62c3d52fdc25982fe73da8aae816df6a84/types_boto3_lite-1.42.54-py3-none-any.whl | py3 | bdist_wheel | null | false | da623802b13829c06096c8c894ada721 | 85dc06f227a2a845cf8e7d19450bdf6ac9866c373f2143893864a32ad9211fbe | 7c864ac580c9ab595f83411491871a62c3d52fdc25982fe73da8aae816df6a84 | MIT | [
"LICENSE"
] | 122 |
2.4 | types-boto3-trustedadvisor | 1.42.54 | Type annotations for boto3 TrustedAdvisorPublicAPI 1.42.54 service generated with mypy-boto3-builder 8.12.0 | <a id="types-boto3-trustedadvisor"></a>
# types-boto3-trustedadvisor
[](https://pypi.org/project/types-boto3-trustedadvisor/)
[](https://pypi.org/project/types-boto3-trustedadvisor/)
[](https://youtype.github.io/types_boto3_docs/)
[](https://pypistats.org/packages/types-boto3-trustedadvisor)

Type annotations for
[boto3 TrustedAdvisorPublicAPI 1.42.54](https://pypi.org/project/boto3/)
compatible with [VSCode](https://code.visualstudio.com/),
[PyCharm](https://www.jetbrains.com/pycharm/),
[Emacs](https://www.gnu.org/software/emacs/),
[Sublime Text](https://www.sublimetext.com/),
[mypy](https://github.com/python/mypy),
[pyright](https://github.com/microsoft/pyright) and other tools.
Generated with
[mypy-boto3-builder 8.12.0](https://github.com/youtype/mypy_boto3_builder).
More information can be found on
[types-boto3](https://pypi.org/project/types-boto3/) page and in
[types-boto3-trustedadvisor docs](https://youtype.github.io/types_boto3_docs/types_boto3_trustedadvisor/).
See how it helps you find and fix potential bugs:

- [types-boto3-trustedadvisor](#types-boto3-trustedadvisor)
- [How to install](#how-to-install)
- [Generate locally (recommended)](<#generate-locally-(recommended)>)
- [VSCode extension](#vscode-extension)
- [From PyPI with pip](#from-pypi-with-pip)
- [How to uninstall](#how-to-uninstall)
- [Usage](#usage)
- [VSCode](#vscode)
- [PyCharm](#pycharm)
- [Emacs](#emacs)
- [Sublime Text](#sublime-text)
- [Other IDEs](#other-ides)
- [mypy](#mypy)
- [pyright](#pyright)
- [Pylint compatibility](#pylint-compatibility)
- [Explicit type annotations](#explicit-type-annotations)
- [Client annotations](#client-annotations)
- [Paginators annotations](#paginators-annotations)
- [Literals](#literals)
- [Type definitions](#type-definitions)
- [How it works](#how-it-works)
- [What's new](#what's-new)
- [Implemented features](#implemented-features)
- [Latest changes](#latest-changes)
- [Versioning](#versioning)
- [Thank you](#thank-you)
- [Documentation](#documentation)
- [Support and contributing](#support-and-contributing)
<a id="how-to-install"></a>
## How to install
<a id="generate-locally-(recommended)"></a>
### Generate locally (recommended)
You can generate type annotations for `boto3` package locally with
`mypy-boto3-builder`. Use
[uv](https://docs.astral.sh/uv/getting-started/installation/) for build
isolation.
1. Run mypy-boto3-builder in your package root directory:
`uvx --with 'boto3==1.42.54' mypy-boto3-builder`
2. Select `boto3` AWS SDK.
3. Add `TrustedAdvisorPublicAPI` service.
4. Use provided commands to install generated packages.
<a id="vscode-extension"></a>
### VSCode extension
Add
[AWS Boto3](https://marketplace.visualstudio.com/items?itemName=Boto3typed.boto3-ide)
extension to your VSCode and run `AWS boto3: Quick Start` command.
Click `Modify` and select `boto3 common` and `TrustedAdvisorPublicAPI`.
<a id="from-pypi-with-pip"></a>
### From PyPI with pip
Install `types-boto3` for `TrustedAdvisorPublicAPI` service.
```bash
# install with boto3 type annotations
python -m pip install 'types-boto3[trustedadvisor]'
# The Lite version does not provide session.client/resource overloads;
# it is more RAM-friendly, but requires explicit type annotations
python -m pip install 'types-boto3-lite[trustedadvisor]'
# standalone installation
python -m pip install types-boto3-trustedadvisor
```
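The explicit annotations that the Lite version requires can be sketched as follows. This is an illustrative sketch, not taken from the package docs: it assumes the stub package exposes a `TrustedAdvisorPublicAPIClient` class in `types_boto3_trustedadvisor.client` (the usual layout for these generated stub packages) and that `boto3` is installed. The `TYPE_CHECKING` guard keeps the stub import out of the runtime path, since stubs are only needed by the type checker.

```python
# Illustrative sketch: explicitly annotating a TrustedAdvisorPublicAPI client.
# The stub import is guarded with TYPE_CHECKING, so only the type checker
# needs types-boto3-trustedadvisor; nothing extra is imported at runtime.
from typing import TYPE_CHECKING

if TYPE_CHECKING:
    from types_boto3_trustedadvisor.client import TrustedAdvisorPublicAPIClient


def make_client() -> "TrustedAdvisorPublicAPIClient":
    # Imported lazily so this sketch stays importable without AWS access.
    import boto3

    # mypy/pyright resolve the return value to the annotated client type,
    # which is the annotation the Lite package requires you to spell out.
    return boto3.client("trustedadvisor")
```

With the full `types-boto3` package the annotation on `make_client` would be unnecessary, because `boto3.client("trustedadvisor")` is resolved through `Literal` overloads.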
<a id="how-to-uninstall"></a>
## How to uninstall
```bash
python -m pip uninstall -y types-boto3-trustedadvisor
```
<a id="usage"></a>
## Usage
<a id="vscode"></a>
### VSCode
- Install
[Python extension](https://marketplace.visualstudio.com/items?itemName=ms-python.python)
- Install
[Pylance extension](https://marketplace.visualstudio.com/items?itemName=ms-python.vscode-pylance)
- Set `Pylance` as your Python Language Server
- Install `types-boto3[trustedadvisor]` in your environment:
```bash
python -m pip install 'types-boto3[trustedadvisor]'
```
Both type checking and code completion should now work. No explicit type
annotations required, write your `boto3` code as usual.
<a id="pycharm"></a>
### PyCharm
> ⚠️ Due to slow PyCharm performance on `Literal` overloads (issue
> [PY-40997](https://youtrack.jetbrains.com/issue/PY-40997)), it is recommended
> to use [types-boto3-lite](https://pypi.org/project/types-boto3-lite/) until
> the issue is resolved.
> ⚠️ If you experience slow performance and high CPU usage, try to disable
> `PyCharm` type checker and use [mypy](https://github.com/python/mypy) or
> [pyright](https://github.com/microsoft/pyright) instead.
> ⚠️ To continue using `PyCharm` type checker, you can try to replace
> `types-boto3` with
> [types-boto3-lite](https://pypi.org/project/types-boto3-lite/):
```bash
pip uninstall types-boto3
pip install types-boto3-lite
```
Install `types-boto3[trustedadvisor]` in your environment:
```bash
python -m pip install 'types-boto3[trustedadvisor]'
```
Both type checking and code completion should now work.
<a id="emacs"></a>
### Emacs
- Install `types-boto3` with services you use in your environment:
```bash
python -m pip install 'types-boto3[trustedadvisor]'
```
- Install [use-package](https://github.com/jwiegley/use-package),
[lsp](https://github.com/emacs-lsp/lsp-mode/),
[company](https://github.com/company-mode/company-mode) and
[flycheck](https://github.com/flycheck/flycheck) packages
- Install [lsp-pyright](https://github.com/emacs-lsp/lsp-pyright) package
```elisp
(use-package lsp-pyright
:ensure t
:hook (python-mode . (lambda ()
(require 'lsp-pyright)
(lsp))) ; or lsp-deferred
:init (when (executable-find "python3")
(setq lsp-pyright-python-executable-cmd "python3"))
)
```
- Make sure emacs uses the environment where you have installed `types-boto3`
Type checking should now work. No explicit type annotations required, write
your `boto3` code as usual.
<a id="sublime-text"></a>
### Sublime Text
- Install `types-boto3[trustedadvisor]` with services you use in your
environment:
```bash
python -m pip install 'types-boto3[trustedadvisor]'
```
- Install [LSP-pyright](https://github.com/sublimelsp/LSP-pyright) package
Type checking should now work. No explicit type annotations required, write
your `boto3` code as usual.
<a id="other-ides"></a>
### Other IDEs
Not tested, but as long as your IDE supports `mypy` or `pyright`, everything
should work.
<a id="mypy"></a>
### mypy
- Install `mypy`: `python -m pip install mypy`
- Install `types-boto3[trustedadvisor]` in your environment:
```bash
python -m pip install 'types-boto3[trustedadvisor]'
```
Type checking should now work. No explicit type annotations required, write
your `boto3` code as usual.
<a id="pyright"></a>
### pyright
- Install `pyright`: `npm i -g pyright`
- Install `types-boto3[trustedadvisor]` in your environment:
```bash
python -m pip install 'types-boto3[trustedadvisor]'
```
Optionally, you can install `types-boto3` into your project's `typings` directory.
Type checking should now work. No explicit type annotations required, write
your `boto3` code as usual.
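If you install the stubs into a local `typings` directory, point `pyright` at it. A minimal `pyrightconfig.json` sketch using standard pyright options (not specific to this package; `stubPath` already defaults to `typings`):

```json
{
  "typeCheckingMode": "basic",
  "stubPath": "typings"
}
```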
<a id="pylint-compatibility"></a>
### Pylint compatibility
It is safe to use the `TYPE_CHECKING` flag to avoid a
`types-boto3-trustedadvisor` dependency in production. However, `pylint` then
complains about undefined variables. To fix this, set all types to `object` in
the non-`TYPE_CHECKING` branch.
```python
from typing import TYPE_CHECKING
if TYPE_CHECKING:
from types_boto3_ec2 import EC2Client, EC2ServiceResource
from types_boto3_ec2.waiters import BundleTaskCompleteWaiter
from types_boto3_ec2.paginators import DescribeVolumesPaginator
else:
EC2Client = object
EC2ServiceResource = object
BundleTaskCompleteWaiter = object
DescribeVolumesPaginator = object
...
```
<a id="explicit-type-annotations"></a>
## Explicit type annotations
<a id="client-annotations"></a>
### Client annotations
`TrustedAdvisorPublicAPIClient` provides annotations for
`boto3.client("trustedadvisor")`.
```python
from boto3.session import Session
from types_boto3_trustedadvisor import TrustedAdvisorPublicAPIClient
client: TrustedAdvisorPublicAPIClient = Session().client("trustedadvisor")
# now client usage is checked by mypy and IDE should provide code completion
```
<a id="paginators-annotations"></a>
### Paginators annotations
`types_boto3_trustedadvisor.paginator` module contains type annotations for all
paginators.
```python
from boto3.session import Session
from types_boto3_trustedadvisor import TrustedAdvisorPublicAPIClient
from types_boto3_trustedadvisor.paginator import (
ListChecksPaginator,
ListOrganizationRecommendationAccountsPaginator,
ListOrganizationRecommendationResourcesPaginator,
ListOrganizationRecommendationsPaginator,
ListRecommendationResourcesPaginator,
ListRecommendationsPaginator,
)
client: TrustedAdvisorPublicAPIClient = Session().client("trustedadvisor")
# Explicit type annotations are optional here
# Types should be correctly discovered by mypy and IDEs
list_checks_paginator: ListChecksPaginator = client.get_paginator("list_checks")
list_organization_recommendation_accounts_paginator: ListOrganizationRecommendationAccountsPaginator = client.get_paginator(
"list_organization_recommendation_accounts"
)
list_organization_recommendation_resources_paginator: ListOrganizationRecommendationResourcesPaginator = client.get_paginator(
"list_organization_recommendation_resources"
)
list_organization_recommendations_paginator: ListOrganizationRecommendationsPaginator = (
client.get_paginator("list_organization_recommendations")
)
list_recommendation_resources_paginator: ListRecommendationResourcesPaginator = (
client.get_paginator("list_recommendation_resources")
)
list_recommendations_paginator: ListRecommendationsPaginator = client.get_paginator(
"list_recommendations"
)
```
<a id="literals"></a>
### Literals
`types_boto3_trustedadvisor.literals` module contains literals extracted from
shapes that can be used in user code for type checking.
Full list of `TrustedAdvisorPublicAPI` Literals can be found in
[docs](https://youtype.github.io/types_boto3_docs/types_boto3_trustedadvisor/literals/).
```python
from types_boto3_trustedadvisor.literals import ExclusionStatusType
def check_value(value: ExclusionStatusType) -> bool: ...
```
<a id="type-definitions"></a>
### Type definitions
`types_boto3_trustedadvisor.type_defs` module contains structures and shapes
assembled to typed dictionaries and unions for additional type checking.
Full list of `TrustedAdvisorPublicAPI` TypeDefs can be found in
[docs](https://youtype.github.io/types_boto3_docs/types_boto3_trustedadvisor/type_defs/).
```python
# TypedDict usage example
from types_boto3_trustedadvisor.type_defs import AccountRecommendationLifecycleSummaryTypeDef
def get_value() -> AccountRecommendationLifecycleSummaryTypeDef:
return {
"accountId": ...,
}
```
<a id="how-it-works"></a>
## How it works
Fully automated
[mypy-boto3-builder](https://github.com/youtype/mypy_boto3_builder) carefully
generates type annotations for each service, patiently waiting for `boto3`
updates. It delivers drop-in type annotations for you and makes sure that:
- All available `boto3` services are covered.
- Each public class and method of every `boto3` service gets valid type
annotations extracted from `botocore` schemas.
- Type annotations include up-to-date documentation.
- Link to documentation is provided for every method.
- Code is processed by [ruff](https://docs.astral.sh/ruff/) for readability.
<a id="what's-new"></a>
## What's new
<a id="implemented-features"></a>
### Implemented features
- Fully type annotated `boto3`, `botocore`, `aiobotocore` and `aioboto3`
libraries
- `mypy`, `pyright`, `VSCode`, `PyCharm`, `Sublime Text` and `Emacs`
compatibility
- `Client`, `ServiceResource`, `Resource`, `Waiter` and `Paginator` type
  annotations for each service
- Generated `TypeDefs` for each service
- Generated `Literals` for each service
- Auto discovery of types for `boto3.client` and `boto3.resource` calls
- Auto discovery of types for `session.client` and `session.resource` calls
- Auto discovery of types for `client.get_waiter` and `client.get_paginator`
calls
- Auto discovery of types for `ServiceResource` and `Resource` collections
- Auto discovery of types for `aiobotocore.Session.create_client` calls
<a id="latest-changes"></a>
### Latest changes
Builder changelog can be found in
[Releases](https://github.com/youtype/mypy_boto3_builder/releases).
<a id="versioning"></a>
## Versioning
`types-boto3-trustedadvisor` version is the same as the related `boto3` version
and follows
[Python Packaging version specifiers](https://packaging.python.org/en/latest/specifications/version-specifiers/).
<a id="thank-you"></a>
## Thank you
- [Allie Fitter](https://github.com/alliefitter) for
[boto3-type-annotations](https://pypi.org/project/boto3-type-annotations/),
this package is based on top of his work
- [black](https://github.com/psf/black) developers for an awesome formatting
tool
- [Timothy Edmund Crosley](https://github.com/timothycrosley) for
[isort](https://github.com/PyCQA/isort) and how flexible it is
- [mypy](https://github.com/python/mypy) developers for doing all dirty work
for us
- [pyright](https://github.com/microsoft/pyright) team for the new era of typed
Python
<a id="documentation"></a>
## Documentation
All services type annotations can be found in
[boto3 docs](https://youtype.github.io/types_boto3_docs/types_boto3_trustedadvisor/)
<a id="support-and-contributing"></a>
## Support and contributing
This package is auto-generated. Please report any bugs or request new features
in the [mypy-boto3-builder](https://github.com/youtype/mypy_boto3_builder/issues/)
repository.
| text/markdown | null | Vlad Emelianov <vlad.emelianov.nz@gmail.com> | null | null | null | boto3, trustedadvisor, boto3-stubs, type-annotations, mypy, typeshed, autocomplete | [
"Development Status :: 5 - Production/Stable",
"Intended Audience :: Developers",
"Environment :: Console",
"Natural Language :: English",
"Operating System :: OS Independent",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.9",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: 3.13",
"Programming Language :: Python :: 3.14",
"Programming Language :: Python :: 3 :: Only",
"Programming Language :: Python :: Implementation :: CPython",
"Typing :: Stubs Only"
] | [
"any"
] | null | null | >=3.9 | [] | [] | [] | [
"typing-extensions; python_version < \"3.12\""
] | [] | [] | [] | [
"Homepage, https://github.com/youtype/mypy_boto3_builder",
"Documentation, https://youtype.github.io/types_boto3_docs/types_boto3_trustedadvisor/",
"Source, https://github.com/youtype/mypy_boto3_builder",
"Tracker, https://github.com/youtype/mypy_boto3_builder/issues"
] | twine/6.2.0 CPython/3.13.11 | 2026-02-21T02:54:54.404972 | types_boto3_trustedadvisor-1.42.54.tar.gz | 19,833 | 85/e2/8ee5c70c16244fc1488b616608b86685495621dc5a5502b22c6ff6fcc201/types_boto3_trustedadvisor-1.42.54.tar.gz | source | sdist | null | false | 9032116984c37892e3661c9f5b32b6d2 | ae4033cec036364a86a4a7099e99ef30a5d264c4e832dbb21f11a26f68c15cdb | 85e28ee5c70c16244fc1488b616608b86685495621dc5a5502b22c6ff6fcc201 | MIT | [
"LICENSE"
] | 241 |
2.4 | gaya | 0.1.1 | Simple data quality checks that just work. | # Gaya
**Simple data quality checks that just work.**
Gaya helps you catch data issues early with sensible defaults and zero ceremony.
```bash
pip install gaya
gaya init
gaya run
```
---
## What Gaya Checks
Out of the box, Gaya runs common, practical data quality checks with clear thresholds.
| Check | Default Behavior |
|---|---|
| Null rate per column | Warn > 10%, fail > 25% |
| Required columns | Zero nulls allowed |
| Primary key uniqueness | Zero duplicates |
| Row count change | Warn > 20%, fail > 40% |
| Schema drift | Warn on column add, fail on removal |
All thresholds are configurable in `gaya.yml`.
---
## Quick Configuration
Define your data sources and tables in a simple YAML file.
```yaml
datasources:
main_db:
type: postgres
host: localhost
database: app_db
user: app_user
password: env:DB_PASSWORD
tables:
orders:
source: main_db
layer: staging
primary_key: order_id
not_null:
- order_id
- customer_id
```
---
## Example Output
Clear, readable output that explains what failed and why it matters.
```
──────────────────────────────────────────────────────
✖ staging.orders FAILED
✖ row count dropped 38% (1.2M → 740K)
→ A drop this large usually means a failed upstream load.
──────────────────────────────────────────────────────
1 table(s) · 1 failed · 7 passed
Finished in 2.3s
──────────────────────────────────────────────────────
```
---
## Exit Codes
Designed to integrate cleanly with CI/CD pipelines.
| Code | Meaning |
|---|---|
| 0 | All checks passed |
| 1 | Warnings only |
| 2 | One or more checks failed |
| 3 | Gaya error (config or connection) |
---
## CI Integration
```yaml
# GitHub Actions
- name: Run data quality checks
run: gaya run --quiet
```
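If warnings (exit code `1`) should not break the build while failed checks (`2`) and Gaya errors (`3`) should, you can branch on the documented exit codes. A sketch for GitHub Actions (only the `--quiet` flag shown above is assumed to exist):

```yaml
# Treat warnings (exit 1) as non-fatal; fail the job on
# check failures (2) or Gaya errors (3).
- name: Run data quality checks (warnings allowed)
  run: |
    code=0
    gaya run --quiet || code=$?
    if [ "$code" -ge 2 ]; then exit "$code"; fi
```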
---
## Supported Sources
- Postgres
Additional connectors are planned.
---
## Project Status
Gaya is an early-stage project. The core check logic, Postgres adapter, and CLI are
working. The API and configuration format may evolve, but the goal will always be the
same: simple, predictable, easy to reason about.
Feedback and contributions are welcome.
| text/markdown | null | null | null | null | MIT | null | [] | [] | null | null | >=3.10 | [] | [] | [] | [
"psycopg2-binary>=2.9",
"pyyaml>=6.0",
"pytest>=7.0; extra == \"dev\"",
"pytest-cov>=4.0; extra == \"dev\""
] | [] | [] | [] | [] | twine/6.2.0 CPython/3.12.7 | 2026-02-21T02:54:46.542024 | gaya-0.1.1.tar.gz | 25,681 | c0/7c/12a4b5a42d4deff3bea80d9534cee0364c738912564656c824ccdc5e4e1a/gaya-0.1.1.tar.gz | source | sdist | null | false | 16bfcf75c90a2692a5b3e6e4bafc48aa | 58ad12ffb4b4b43a7baa956cfb188453b7ab02c62931c2a116273e061ab92704 | c07c12a4b5a42d4deff3bea80d9534cee0364c738912564656c824ccdc5e4e1a | null | [] | 239 |
2.4 | fastapi-observer | 1.0.0 | Zero-glue FastAPI observability with security presets and runtime controls | # fastapi-observer
[](https://github.com/Vitaee/FastapiObserver/actions/workflows/ci.yml)
[](https://buymeacoffee.com/FYbPCSu)
**Zero-glue observability for FastAPI.**
`fastapi-observer` gives you structured JSON logs, request correlation, Prometheus metrics, OpenTelemetry tracing, security redaction presets, and runtime controls in one install step and one function call.
**Supported Python versions:** `3.10` to `3.14`
---
## Compatibility Matrix
| Component | Supported / Tested |
|---|---|
| Python | `3.10` to `3.14` (CI matrix) |
| FastAPI | `>=0.129.0` |
| Starlette | `>=0.52.1` |
| pydantic-settings | `>=2.10.1` |
| Prometheus backend | `prometheus-client>=0.24.1` (optional extra) |
| OpenTelemetry | `opentelemetry-api/sdk/exporter>=1.39.1` (optional extra) |
| Loguru bridge | `loguru>=0.7.2` (optional extra) |
---
## Why This Package Exists
Most FastAPI services eventually need the same observability plumbing:
- Structured JSON logging
- Request and trace correlation
- Metrics for dashboards and alerts
- OpenTelemetry setup
- Redaction/sanitization for sensitive data
- Runtime controls for incident response
Teams usually implement this as custom glue code in every service. That costs engineering time and creates drift between services.
`fastapi-observer` replaces this repeated wiring with a consistent, secure-by-default setup.
---
## Sponsor
If this library saves you engineering time, you can support maintenance here:
[buymeacoffee.com/FYbPCSu](https://buymeacoffee.com/FYbPCSu)
---
## What You Get Immediately
After one call to `install_observability()`:
| Capability | Included | Default |
|---|---|---|
| Structured JSON logs | Yes | Enabled |
| Request ID correlation | Yes | Enabled |
| Trace/span IDs in logs | Yes (with OTel) | Off until OTel enabled |
| Prometheus `/metrics` | Yes | Off until `metrics_enabled=True` |
| Sensitive-data redaction | Yes | Enabled |
| Security presets (`strict`, `pci`, `gdpr`) | Yes | Available |
| Runtime control endpoint | Yes | Off until enabled |
| Plugin hooks for enrichment | Yes | Available |
---
## Install
```bash
# Core (logging + metrics + security)
pip install fastapi-observer
# Prometheus metrics support
pip install "fastapi-observer[prometheus]"
# Loguru coexistence bridge support
pip install "fastapi-observer[loguru]"
# OpenTelemetry tracing/logs support
pip install "fastapi-observer[otel]"
# Everything
pip install "fastapi-observer[all]"
```
Import path:
```python
import fastapiobserver
```
---
## 5-Minute Quick Start
```python
from fastapi import FastAPI
from fastapiobserver import ObservabilitySettings, install_observability
app = FastAPI()
settings = ObservabilitySettings(
app_name="orders-api",
service="orders",
environment="production",
version="0.1.0",
metrics_enabled=True,
)
install_observability(app, settings)
@app.get("/orders/{order_id}")
def get_order(order_id: int) -> dict[str, int]:
return {"order_id": order_id}
```
Run:
```bash
uvicorn main:app --reload
```
Now you have:
- Structured request logs on every request
- Request ID propagation
- Sanitized event payloads
- Prometheus metrics at `/metrics`
---
## Security Defaults and Presets
### Default protections
| Protection | Default | Why |
|---|---|---|
| Body logging | `OFF` | Avoid leaking request/response secrets |
| Sensitive key masking | `ON` | Protect fields like `password`, `token`, `secret` |
| Sensitive header masking | `ON` | Protect `authorization`, `cookie`, `x-api-key` |
| Query string in logged path | Excluded | Prevent accidental token leakage |
| Request ID trust boundary | Trusted CIDRs only | Prevent spoofed correlation IDs |
### Presets for regulated environments
```python
from fastapiobserver import SecurityPolicy
# Strictest option: drop sensitive values and keep minimal safe headers
strict_policy = SecurityPolicy.from_preset("strict")
# PCI-focused redaction fields
pci_policy = SecurityPolicy.from_preset("pci")
# GDPR-focused hashed PII fields
gdpr_policy = SecurityPolicy.from_preset("gdpr")
```
Use a preset in installation:
```python
install_observability(app, settings, security_policy=SecurityPolicy.from_preset("pci"))
```
### Allowlist-only logging (audit-style)
If your compliance model is "log only approved fields", use allowlists:
```python
from fastapiobserver import SecurityPolicy
policy = SecurityPolicy(
header_allowlist=("x-request-id", "content-type", "user-agent"),
event_key_allowlist=("method", "path", "status_code"),
)
```
### Body capture media-type guard
```python
policy = SecurityPolicy(
log_request_body=True,
body_capture_media_types=("application/json",),
)
```
---
## Runtime Control Plane (No Restart)
Use runtime controls when you need higher log verbosity or different trace sampling during an incident.
```bash
export OBSERVABILITY_CONTROL_TOKEN="replace-me"
```
```python
from fastapiobserver import RuntimeControlSettings, install_observability
runtime_control = RuntimeControlSettings(enabled=True)
install_observability(app, settings, runtime_control_settings=runtime_control)
```
Inspect current runtime values:
```bash
curl -X GET http://localhost:8000/_observability/control \
-H "Authorization: Bearer replace-me"
```
Update runtime values:
```bash
curl -X POST http://localhost:8000/_observability/control \
-H "Authorization: Bearer replace-me" \
-H "Content-Type: application/json" \
-d '{"log_level":"DEBUG","trace_sampling_ratio":0.25}'
```
What changes immediately:
- Root logger level (and uvicorn loggers)
- Dynamic OTel trace sampling ratio
---
## OpenTelemetry (Traces + Optional OTLP Logs + Optional OTLP Metrics)
```python
from fastapiobserver import (
OTelLogsSettings,
OTelMetricsSettings,
OTelSettings,
install_observability,
)
otel_settings = OTelSettings(
enabled=True,
service_name="orders-api",
service_version="2.0.0",
environment="production",
otlp_endpoint="http://localhost:4317",
protocol="grpc", # or "http/protobuf"
trace_sampling_ratio=1.0,
extra_resource_attributes={
"k8s.namespace": "prod",
"team": "backend",
},
)
otel_logs_settings = OTelLogsSettings(
enabled=True,
logs_mode="both", # "local_json", "otlp", or "both"
otlp_endpoint="http://localhost:4317",
protocol="grpc",
)
otel_metrics_settings = OTelMetricsSettings(
enabled=True,
otlp_endpoint="http://localhost:4317",
protocol="grpc", # or "http/protobuf"
export_interval_millis=60000,
)
install_observability(
app,
settings,
otel_settings=otel_settings,
otel_logs_settings=otel_logs_settings,
otel_metrics_settings=otel_metrics_settings,
)
```
Design details:
- Reuses an externally configured tracer provider if one already exists.
- Injects trace IDs into application logs for log-trace correlation.
- Supports runtime sampling updates through the control plane.
- Sends OTel logs in OTLP mode with the same sanitization policy.
- Supports optional OTLP metrics export for unified OTel backends.
- Registers graceful shutdown hooks to flush provider buffers on app exit.
### Baggage propagation
`inject_trace_headers()` uses OpenTelemetry propagation, so it forwards
`traceparent`, `tracestate`, and `baggage` when baggage is present in the active context.
```python
from opentelemetry import baggage
from opentelemetry.context import attach, detach
from fastapiobserver import inject_trace_headers
token = attach(baggage.set_baggage("tenant_id", "acme"))
try:
headers = inject_trace_headers({})
# headers["baggage"] == "tenant_id=acme"
finally:
detach(token)
```
---
## What `install_observability()` Wires Up
1. Structured logging pipeline (JSON formatter + bounded async queue handler).
2. Metrics backend and `/metrics` endpoint when metrics are enabled.
3. OTel tracing setup when OTel is enabled.
4. Optional OTel logs/metrics setup when OTLP settings are enabled.
5. Request logging middleware with sanitization and context cleanup.
6. Runtime control endpoint when runtime control is enabled.
Request path lifecycle (high-level):
```text
Request arrives
-> request ID / trace context resolved
-> app handler executes
-> response classified (ok/client_error/server_error/exception)
-> payload sanitized by policy
-> log emitted + metrics recorded
-> context cleared
```
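The classification step in the lifecycle above can be sketched as follows. This is only an illustration of the documented categories (`ok`/`client_error`/`server_error`/`exception`), not the library's actual middleware code:

```python
def classify_response(status_code: int, raised: bool = False) -> str:
    """Illustrative mapping to the documented categories
    (ok / client_error / server_error / exception);
    not fastapi-observer's actual implementation."""
    if raised:
        return "exception"   # handler raised before a response was produced
    if status_code >= 500:
        return "server_error"
    if status_code >= 400:
        return "client_error"
    return "ok"

print(classify_response(200))  # ok
print(classify_response(404))  # client_error
print(classify_response(503))  # server_error
```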
### Internal Package Layout (Contributor Map)
The project is now organized as focused subpackages instead of large monolithic modules:
- `fastapiobserver/logging/`: formatter, queueing, filters, setup lifecycle, sink circuit-breakers.
- `fastapiobserver/middleware/`: request logging orchestration, context, IP resolution, headers, body capture, metrics hooks.
- `fastapiobserver/sinks/`: sink protocol, registry/discovery, built-ins, factory wiring, Logtail + DLQ implementation.
- `fastapiobserver/metrics/`: backend contracts/registry/builder/endpoint, Prometheus integration subpackage.
- `fastapiobserver/security/`: policy/settings models, normalization helpers, redaction engine, trusted-proxy utilities.
- `fastapiobserver/otel/`: OTel settings/resource/tracing/logs/metrics/lifecycle helpers.
Public imports remain backward-compatible via package facades (`__init__.py` re-exports).
---
## Example JSON Log Event
```json
{
"timestamp": "2026-02-18T10:30:00.000000+00:00",
"level": "INFO",
"logger": "fastapiobserver.middleware",
"message": "request.completed",
"app_name": "orders-api",
"service": "orders",
"environment": "production",
"version": "0.1.0",
"log_schema_version": "1.0.0",
"library": "fastapiobserver",
"request_id": "a1b2c3d4-e5f6-7890-abcd-ef1234567890",
"trace_id": "0af7651916cd43dd8448eb211c80319c",
"span_id": "b7ad6b7169203331",
"event": {
"method": "GET",
"path": "/orders/42",
"status_code": 200,
"http.request.method": "GET",
"url.path": "/orders/42",
"http.response.status_code": 200,
"duration_ms": 3.456,
"client_ip": "10.0.0.1",
"error_type": "ok"
}
}
```
On exception logs, a structured `error` object is included for indexed queries. It features a stable AST-based `fingerprint` hash that ignores transient memory locations and exact line numbers, enabling zero-dependency alerting directly in your search backend.
```json
{
"error": {
"type": "RuntimeError",
"message": "boom",
"stacktrace": "Traceback (most recent call last): ...",
"fingerprint": "a1b2c3d4e5f67890abcd12345678bbcc"
}
}
```
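Assuming these logs are ingested into Loki, the fingerprint allows grouping recurring errors without extra tooling. A LogQL sketch (Loki's `json` parser flattens nested keys with `_`, so `error.fingerprint` becomes `error_fingerprint`; the `service` label is an assumption about your ingestion pipeline):

```logql
sum by (error_fingerprint) (
  count_over_time({service="orders"} | json | error_fingerprint != "" [5m])
)
```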
---
## Production Deployment Guide
This section is deployment-first. A new engineer should be able to ship this stack without reading the source code.
### Reference architecture
```mermaid
flowchart LR
A["FastAPI services (fastapi-observer)"] --> C["OTel Collector"]
C --> D["Tempo (traces)"]
C --> E["Loki (logs)"]
A --> F["Prometheus (/metrics scrape)"]
F --> G["Grafana"]
D --> G
E --> G
```
### Minimal collector config
```yaml
receivers:
otlp:
protocols:
grpc:
endpoint: 0.0.0.0:4317
http:
endpoint: 0.0.0.0:4318
processors:
memory_limiter:
limit_mib: 512
spike_limit_mib: 128
check_interval: 5s
batch:
send_batch_size: 512
timeout: 5s
exporters:
otlphttp/tempo:
endpoint: http://tempo:4318
otlphttp/loki:
endpoint: http://loki:3100/otlp
service:
pipelines:
traces:
receivers: [otlp]
processors: [memory_limiter, batch]
exporters: [otlphttp/tempo]
logs:
receivers: [otlp]
processors: [memory_limiter, batch]
exporters: [otlphttp/loki]
```
### Rollout strategy
1. Baseline current service SLOs before migration (`latency`, `error rate`, `availability`).
2. Enable `fastapi-observer` in one service with conservative settings (no body capture).
3. Run canary rollout (5-10% traffic) and compare:
latency p95, 5xx rate, and log/traces pipeline health.
4. Expand rollout to all replicas/services after 24-48h stable canary.
5. Enable advanced controls in phases:
security presets, allowlists, runtime control plane, OTLP logs mode.
### Failure modes and expected behavior
| Failure mode | Expected behavior | Immediate action |
|---|---|---|
| OTel Collector down | App still serves traffic; local logs still available if `OTEL_LOGS_MODE=both` | Fail over Collector or temporarily switch to local-json mode |
| Tempo down | Traces unavailable; logs/metrics continue | Restore Tempo, keep incident correlation via logs |
| Loki down | Logs unavailable in Grafana; metrics/traces continue | Restore Loki, use app stdout logs temporarily |
| Prometheus down | No metrics/alerts; app traffic unaffected | Restore Prometheus and alertmanager path |
| High cardinality on paths | Prometheus pressure increases | Use route templates and exclude noisy paths |
| Spoofed forwarded headers | Incorrect client IP/request ID trust | Tighten `OBS_TRUSTED_CIDRS` and proxy chain config |
### SLO and alert checklist
Recommended SLOs:
- Availability: `>= 99.9%` over 30 days
- p95 latency: `< 500ms` for core APIs
- 5xx rate: `< 1%` per service
- Error-budget burn alerting: fast burn (1h), slow burn (6h)
Starter alert queries:
```promql
# 5xx rate per service (5 minutes)
sum(rate(http_requests_total{status_code=~"5.."}[5m])) by (service)
# p95 latency per service
histogram_quantile(
0.95,
sum(rate(http_request_duration_seconds_bucket[5m])) by (le, service)
)
# Traffic drop detection
sum(rate(http_requests_total[5m])) by (service)
```
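These starter queries can be wired into standard Prometheus alerting rules. The rule below is a sketch built from the 5xx query above and the documented `< 1%` 5xx SLO; label and annotation values are placeholders:

```yaml
groups:
  - name: service-slos
    rules:
      - alert: HighErrorRate
        expr: |
          sum(rate(http_requests_total{status_code=~"5.."}[5m])) by (service)
            / sum(rate(http_requests_total[5m])) by (service) > 0.01
        for: 5m
        labels:
          severity: page
        annotations:
          summary: "5xx rate above 1% for {{ $labels.service }}"
```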
### Incident playbook (first 15 minutes)
1. Confirm blast radius in Grafana:
affected services, status codes, latency shifts, deployment changes.
2. Increase signal quality without restart:
use runtime control plane to raise log level and tracing sample ratio.
3. Identify dependency failures:
check Collector, Loki, Tempo, Prometheus health and ingestion queues.
4. Mitigate:
roll back latest app change, scale affected service, or disable expensive capture options.
5. Verify recovery:
p95 + 5xx return to baseline, trace volume normalized, alert clears.
### Kubernetes quickstart (copy/paste)
Use the bundled manifests:
```bash
kubectl kustomize --load-restrictor=LoadRestrictionsNone examples/k8s | kubectl apply -f -
kubectl -n observability rollout status deployment/app-a
kubectl -n observability rollout status deployment/app-b
kubectl -n observability rollout status deployment/app-c
kubectl -n observability rollout status deployment/otel-collector
kubectl -n observability rollout status deployment/prometheus
kubectl -n observability rollout status deployment/loki
kubectl -n observability rollout status deployment/tempo
kubectl -n observability rollout status deployment/grafana
kubectl -n observability rollout status deployment/traffic-generator
kubectl -n observability port-forward svc/grafana 3000:3000
```
Open [http://localhost:3000](http://localhost:3000).
Full guide: [`kubernetes.md`](kubernetes.md)
---
## Low-Overhead & Production Tuning (Advanced)
`fastapi-observer` integrates natively with the core OpenTelemetry Python SDK, meaning you can aggressively tune its resource usage purely via standard environment variables without altering your application code.
For high-throughput services (e.g. `10k+ RPS`), apply these exact variables to minimize the observer footprint:
### 1. Head-Based Sampling
Tracing 100% of requests is too expensive at scale. You should configure `fastapi-observer` to respect upstream trace flags, while only sampling a fraction of net-new requests:
```bash
# Keep the parent's sample decision if it exists, otherwise sample 5%
export OTEL_TRACES_SAMPLER="parentbased_traceidratio"
export OTEL_TRACES_SAMPLER_ARG="0.05"
```
### 2. Exclude Noisy URLs from the SDK
Do not waste cycles generating spans for health checks or static assets. `fastapi-observer` will auto-derive metrics exclusions, but you can explicitly drop them from tracing at the instrumentation level:
```bash
export OTEL_PYTHON_FASTAPI_EXCLUDED_URLS="healthz,metrics,favicon.ico"
```
### 3. Cap Span Attributes
Prevent large, unmanageable spans from consuming excessive memory in the `BatchSpanProcessor`:
```bash
export OTEL_SPAN_ATTRIBUTE_COUNT_LIMIT="128"
export OTEL_SPAN_EVENT_COUNT_LIMIT="128"
export OTEL_SPAN_LINK_COUNT_LIMIT="128"
```
### 4. Optimize Output Buffers
The default OpenTelemetry batch limits are too conservative for high-throughput ASGI microservices. Increase the max queue limits so spikes aren't dropped, and decrease the schedule delay so buffered spans are flushed sooner:
```bash
export OTEL_BSP_MAX_QUEUE_SIZE="10000"
export OTEL_BSP_MAX_EXPORT_BATCH_SIZE="5000"
export OTEL_BSP_SCHEDULE_DELAY="1000"
```
---
## Examples
The `examples/` directory contains runnable demos:
| Example | What it shows |
|---|---|
| [`basic_app.py`](examples/basic_app.py) | Minimal setup and request logging |
| [`security_presets_app.py`](examples/security_presets_app.py) | Preset-based security policy |
| [`allowlist_app.py`](examples/allowlist_app.py) | Allowlist-only sanitization |
| [`otel_app.py`](examples/otel_app.py) | OTel tracing and resource attributes |
| [`graphql_app.py`](examples/graphql_app.py) | Native Strawberry GraphQL observability |
| [`benchmarks/`](examples/benchmarks/) | Baseline vs observer benchmark harness |
| [`k8s/`](examples/k8s/) | Kubernetes-native stack with Prometheus + Loki + Tempo + Grafana |
| [`full_stack/`](examples/full_stack/) | **Docker Compose stack**: 3 FastAPI services + Grafana + Prometheus + Loki + Tempo |
Run an example:
```bash
uvicorn examples.basic_app:app --reload
```
### Dashboard Screenshots (Full-Stack Demo)
From `examples/full_stack`, these are real Grafana views generated by `fastapi-observer` telemetry:
**Overview panels (latency heatmap, route throughput, errors, CPU/memory):**

**Percentiles, request rate, and structured JSON logs in Loki:**

---
## Environment Variables
The library supports configuration from code and env vars. Below are the most relevant env vars by area.
### Identity and logging
| Variable | Default | Description |
|---|---|---|
| `APP_NAME` | `app` | Namespace for app-level identity |
| `SERVICE_NAME` | `api` | Service label for logs/metrics |
| `ENVIRONMENT` | `development` | Environment label |
| `APP_VERSION` | `0.0.0` | Service version |
| `LOG_LEVEL` | `INFO` | Root log level |
| `LOG_DIR` | - | Optional file log directory |
| `LOG_QUEUE_MAX_SIZE` | `10000` | Max in-memory records in core log queue |
| `LOG_QUEUE_OVERFLOW_POLICY` | `drop_oldest` | Queue overflow behavior: `drop_oldest`, `drop_newest`, `block` |
| `LOG_QUEUE_BLOCK_TIMEOUT_SECONDS` | `1.0` | Timeout used by `block` policy before dropping newest |
| `LOG_SINK_CIRCUIT_BREAKER_ENABLED` | `true` | Enable sink circuit-breaker protection |
| `LOG_SINK_CIRCUIT_BREAKER_FAILURE_THRESHOLD` | `5` | Consecutive sink failures before opening circuit |
| `LOG_SINK_CIRCUIT_BREAKER_RECOVERY_TIMEOUT_SECONDS` | `30.0` | Open-state cooldown before half-open probe |
| `REQUEST_ID_HEADER` | `x-request-id` | Incoming request ID header |
| `RESPONSE_REQUEST_ID_HEADER` | `x-request-id` | Response request ID header |
### Metrics
| Variable | Default | Description |
|---|---|---|
| `METRICS_ENABLED` | `false` | Enable metrics backend |
| `METRICS_BACKEND` | `prometheus` | Registered backend name used by `install_observability()` |
| `METRICS_PATH` | `/metrics` | Metrics endpoint path |
| `METRICS_EXCLUDE_PATHS` | `/metrics,/health,/healthz,/docs,/openapi.json` | Skip metrics for noisy endpoints |
| `METRICS_EXEMPLARS_ENABLED` | `false` | Enable exemplars where supported |
| `METRICS_FORMAT` | `negotiate` | `prometheus`, `openmetrics`, or `negotiate` |
> [!CAUTION]
> The `/metrics` endpoint is **unauthenticated by default**. In production it should be restricted to internal networks (e.g. behind a Kubernetes `NetworkPolicy`, VPC security group, or ingress rule that only allows your Prometheus scraper). Exposing it publicly leaks service topology, error rates, and request patterns.
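When a network-level restriction is not available, an app-layer fallback is to check the scraper's source address against an internal CIDR allowlist before serving `/metrics`. The sketch below uses only the standard library and is purely illustrative; it is not part of `fastapi-observer`, and the CIDRs are placeholder values:

```python
import ipaddress

# Hypothetical allowlist: only private/loopback scrapers may read /metrics.
SCRAPE_ALLOWLIST = [
    ipaddress.ip_network("10.0.0.0/8"),
    ipaddress.ip_network("127.0.0.0/8"),
]

def may_scrape(client_ip: str) -> bool:
    """Return True if the client address falls inside any allowed CIDR."""
    addr = ipaddress.ip_address(client_ip)
    return any(addr in net for net in SCRAPE_ALLOWLIST)

# In a FastAPI/Starlette handler you would check request.client.host
# and return 403 when may_scrape(...) is False.
```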
### Security and trust boundary
| Variable | Default | Description |
|---|---|---|
| `OBS_REDACTION_PRESET` | - | `strict`, `pci`, `gdpr` |
| `OBS_REDACTED_FIELDS` | built-in list | CSV keys to redact |
| `OBS_REDACTED_HEADERS` | built-in list | CSV headers to redact |
| `OBS_REDACTION_MODE` | `mask` | `mask`, `hash`, `drop` |
| `OBS_MASK_TEXT` | `***` | Mask replacement text |
| `OBS_LOG_REQUEST_BODY` | `false` | Enable request body logging |
| `OBS_LOG_RESPONSE_BODY` | `false` | Enable response body logging |
| `OBS_MAX_BODY_LENGTH` | `256` | Max captured body bytes |
| `OBS_HEADER_ALLOWLIST` | - | CSV headers allowed in logs |
| `OBS_EVENT_KEY_ALLOWLIST` | - | CSV event keys allowed in logs |
| `OBS_BODY_CAPTURE_MEDIA_TYPES` | - | CSV allowed media types for body capture |
| `OBS_TRUSTED_PROXY_ENABLED` | `true` | Enable trusted-proxy policy |
| `OBS_TRUSTED_CIDRS` | RFC1918 + loopback | CSV trusted CIDRs |
| `OBS_HONOR_FORWARDED_HEADERS` | `false` | Trust forwarded headers |
Notes:
- `OBS_HEADER_ALLOWLIST`, `OBS_EVENT_KEY_ALLOWLIST`, and `OBS_BODY_CAPTURE_MEDIA_TYPES` accept `none`, `null`, or `unset` to clear values.
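A sketch of how such CSV env vars can be parsed, with the documented sentinel values clearing the setting entirely (illustrative only; the library's actual parser may differ):

```python
import os

_CLEAR_SENTINELS = {"none", "null", "unset"}

def parse_csv_env(name: str, default: list[str] | None = None) -> list[str] | None:
    """Parse a CSV env var; sentinel values clear the setting entirely."""
    raw = os.environ.get(name)
    if raw is None:
        return default
    if raw.strip().lower() in _CLEAR_SENTINELS:
        return None
    return [item.strip() for item in raw.split(",") if item.strip()]

os.environ["OBS_HEADER_ALLOWLIST"] = "x-request-id, user-agent"
print(parse_csv_env("OBS_HEADER_ALLOWLIST"))  # ['x-request-id', 'user-agent']

os.environ["OBS_HEADER_ALLOWLIST"] = "none"
print(parse_csv_env("OBS_HEADER_ALLOWLIST"))  # None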
### OpenTelemetry tracing/log export
| Variable | Default | Description |
|---|---|---|
| `OTEL_ENABLED` | `false` | Enable tracing instrumentation |
| `OTEL_SERVICE_NAME` | `SERVICE_NAME` | OTel service name override |
| `OTEL_SERVICE_VERSION` | `APP_VERSION` | OTel service version override |
| `OTEL_ENVIRONMENT` | `ENVIRONMENT` | OTel environment override |
| `OTEL_EXPORTER_OTLP_ENDPOINT` | - | OTLP endpoint |
| `OTEL_EXPORTER_OTLP_PROTOCOL` | `grpc` | `grpc` or `http/protobuf` |
| `OTEL_TRACE_SAMPLING_RATIO` | `1.0` | Initial trace sampling ratio |
| `OTEL_EXTRA_RESOURCE_ATTRIBUTES` | - | CSV `key=value` pairs |
| `OTEL_EXCLUDED_URLS` | auto-derived | CSV excluded paths for tracing |
| `OTEL_LOGS_ENABLED` | `false` | Enable OTLP log export |
| `OTEL_LOGS_MODE` | `local_json` | `local_json`, `otlp`, `both` |
| `OTEL_LOGS_ENDPOINT` | - | OTLP logs endpoint |
| `OTEL_LOGS_PROTOCOL` | `grpc` | `grpc` or `http/protobuf` |
| `OTEL_METRICS_ENABLED` | `false` | Enable OTLP metrics export |
| `OTEL_METRICS_ENDPOINT` | - | OTLP metrics endpoint |
| `OTEL_METRICS_PROTOCOL` | `grpc` | `grpc` or `http/protobuf` |
| `OTEL_METRICS_EXPORT_INTERVAL_MILLIS` | `60000` | OTLP metrics export interval in milliseconds |
### Runtime control plane
| Variable | Default | Description |
|---|---|---|
| `OBS_RUNTIME_CONTROL_ENABLED` | `false` | Enable runtime control endpoint |
| `OBS_RUNTIME_CONTROL_PATH` | `/_observability/control` | Control endpoint path |
| `OBS_RUNTIME_CONTROL_TOKEN_ENV_VAR` | `OBSERVABILITY_CONTROL_TOKEN` | Name of env var containing bearer token |
| `OBSERVABILITY_CONTROL_TOKEN` | - | Bearer token value used for auth |
### Optional Logtail sink
| Variable | Default | Description |
|---|---|---|
| `LOGTAIL_ENABLED` | `false` | Enable Better Stack Logtail sink |
| `LOGTAIL_SOURCE_TOKEN` | - | Logtail source token |
| `LOGTAIL_BATCH_SIZE` | `50` | Batch size for shipping |
| `LOGTAIL_FLUSH_INTERVAL` | `2.0` | Flush interval (seconds) |
| `LOGTAIL_DLQ_ENABLED` | `false` | Enable resilient local disk fallback for dropped logs |
| `LOGTAIL_DLQ_DIR` | `.dlq/logtail` | Directory to archive dropped NDJSON messages |
| `LOGTAIL_DLQ_MAX_BYTES` | `52428800` | Max bytes per DLQ file before rotation (50MB) |
| `LOGTAIL_DLQ_COMPRESS` | `true` | GZIP compress rotated DLQ files |
> [!TIP]
> The Logtail Dead Letter Queue (DLQ) provides best-effort local durability. If the internal memory queue overflows under sustained load (`queue.Full`), or a network partition exhausts the outbound HTTP retry backoff, the dropped log payloads are written to local NDJSON envelopes. You can replay these files to Better Stack later with the provided `scripts/replay_dlq.py` utility.
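The size-capped, gzip-rotated NDJSON behavior implied by `LOGTAIL_DLQ_MAX_BYTES` and `LOGTAIL_DLQ_COMPRESS` can be pictured with a small stdlib-only sketch. This is an illustration of the rotation semantics, not the library's actual implementation:

```python
import gzip
import json
import os
import time

def append_to_dlq(dlq_dir: str, payload: dict, max_bytes: int = 50 * 1024 * 1024,
                  compress: bool = True) -> None:
    """Append one NDJSON envelope; rotate (and optionally gzip) a full file."""
    os.makedirs(dlq_dir, exist_ok=True)
    active = os.path.join(dlq_dir, "current.ndjson")
    line = json.dumps(payload) + "\n"
    # Rotate before writing if the active file would exceed the byte cap.
    if os.path.exists(active) and os.path.getsize(active) + len(line) > max_bytes:
        rotated = os.path.join(dlq_dir, f"dlq-{int(time.time() * 1000)}.ndjson")
        os.rename(active, rotated)
        if compress:
            with open(rotated, "rb") as src, gzip.open(rotated + ".gz", "wb") as dst:
                dst.write(src.read())
            os.remove(rotated)
    with open(active, "a", encoding="utf-8") as fh:
        fh.write(line)
```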
---
## Advanced Operations
### Middleware ordering for body capture
If body capture is enabled, install observability before other middleware:
```python
from fastapi.middleware.cors import CORSMiddleware
from fastapiobserver import SecurityPolicy, install_observability
install_observability(app, settings, security_policy=SecurityPolicy(log_request_body=True))
app.add_middleware(CORSMiddleware, allow_origins=["*"])
```
### Multi-worker Gunicorn
> [!WARNING]
> `--preload` is dangerous if observability is initialized at the module level.
>
> When `--preload` is used, the application is loaded into the master process *before* the worker processes are forked. `fastapi-observer` uses a background thread (`QueueListener`) for non-blocking logging, and OpenTelemetry similarly spawns background export threads.
>
> Threads **do not survive** a process fork. If you call `install_observability()` at the module level (e.g., right under `app = FastAPI()`), the background threads are created in the master process, and all workers silently drop logs because their logging threads never started.
>
> **How to fix:** Either remove `--preload`, OR initialize observability inside the FastAPI `lifespan` context manager so it starts after the fork inside each worker:
>
> ```python
> from contextlib import asynccontextmanager
> from fastapi import FastAPI
> from fastapiobserver import ObservabilitySettings, install_observability
>
> settings = ObservabilitySettings(service="api")
>
> @asynccontextmanager
> async def lifespan(app: FastAPI):
>     install_observability(app, settings)  # Safely initializes per-worker
>     yield
>     # fastapiobserver automatically registers shutdown hooks, so no teardown is needed here
>
> app = FastAPI(lifespan=lifespan)
> ```
#### Prometheus Multiprocess Mode
If you are using Prometheus with multiple Gunicorn workers, you must configure a shared metrics directory:
```bash
export PROMETHEUS_MULTIPROC_DIR=/tmp/prometheus-metrics
rm -rf "$PROMETHEUS_MULTIPROC_DIR"
mkdir -p "$PROMETHEUS_MULTIPROC_DIR"
```
`gunicorn.conf.py`:
```python
from fastapiobserver import mark_prometheus_process_dead

def child_exit(server, worker):
    mark_prometheus_process_dead(worker.pid)
```
### Bounded queue and overflow policy
Use queue controls to define behavior under sustained log pressure:
```python
settings = ObservabilitySettings(
    app_name="orders-api",
    service="orders",
    environment="production",
    log_queue_max_size=20000,
    log_queue_overflow_policy="drop_oldest",  # or "drop_newest" / "block"
    log_queue_block_timeout_seconds=0.5,
)
```
Queue pressure metrics exposed on `/metrics` (Prometheus mode):
- `fastapiobserver_log_queue_size`
- `fastapiobserver_log_queue_capacity`
- `fastapiobserver_log_queue_enqueued_total`
- `fastapiobserver_log_queue_dropped_total{reason="drop_oldest|drop_newest"}`
- `fastapiobserver_log_queue_blocked_total`
- `fastapiobserver_log_queue_block_timeouts_total`
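The three overflow policies differ only in which record loses when the queue is full. A deque-based sketch of the `drop_oldest` / `drop_newest` semantics (an illustration of the policy behavior, not the library's internals):

```python
from collections import deque

def enqueue(q: deque, item, max_size: int, policy: str) -> bool:
    """Return True if `item` ends up in the queue."""
    if len(q) < max_size:
        q.append(item)
        return True
    if policy == "drop_oldest":
        q.popleft()           # evict the oldest record to make room
        q.append(item)
        return True
    if policy == "drop_newest":
        return False          # the incoming record is the one dropped
    raise ValueError(policy)  # "block" would wait up to the configured timeout

q = deque()
for i in range(5):
    enqueue(q, i, max_size=3, policy="drop_oldest")
print(list(q))  # [2, 3, 4]
```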
### Sink circuit breaker
Every output sink is wrapped with a circuit breaker so a failing sink does not
degrade request-path logging. This includes custom sinks registered via the
`LogSink` protocol.
The core package stays intentionally lean; provider-specific sinks can be added
as optional packages without changing `install_observability()`.
```python
settings = ObservabilitySettings(
    app_name="orders-api",
    service="orders",
    environment="production",
    sink_circuit_breaker_enabled=True,
    sink_circuit_breaker_failure_threshold=5,
    sink_circuit_breaker_recovery_timeout_seconds=30.0,
)
```
Breaker metrics exposed on `/metrics`:
- `fastapiobserver_sink_circuit_breaker_state_info{sink,state}`
- `fastapiobserver_sink_circuit_breaker_failures_total{sink}`
- `fastapiobserver_sink_circuit_breaker_skipped_total{sink}`
- `fastapiobserver_sink_circuit_breaker_opens_total{sink}`
- `fastapiobserver_sink_circuit_breaker_half_open_total{sink}`
- `fastapiobserver_sink_circuit_breaker_closes_total{sink}`
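The breaker semantics implied by the settings above (consecutive-failure threshold, open-state cooldown, half-open probe) can be sketched in a few lines. This illustrates the pattern only; it is not `fastapi-observer`'s implementation:

```python
import time

class SinkCircuitBreaker:
    """Closed -> open after N consecutive failures; half-open probe after a cooldown."""

    def __init__(self, failure_threshold: int = 5, recovery_timeout: float = 30.0):
        self.failure_threshold = failure_threshold
        self.recovery_timeout = recovery_timeout
        self.failures = 0
        self.opened_at: float | None = None

    def allow(self) -> bool:
        if self.opened_at is None:
            return True   # closed: writes go through
        if time.monotonic() - self.opened_at >= self.recovery_timeout:
            return True   # half-open: let one probe through
        return False      # open: skip the sink entirely

    def record_success(self) -> None:
        self.failures = 0
        self.opened_at = None  # close the circuit

    def record_failure(self) -> None:
        self.failures += 1
        if self.failures >= self.failure_threshold:
            self.opened_at = time.monotonic()
```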
### Logging Shutdown Lifecycle
`install_observability()` now registers graceful logging teardown on FastAPI
shutdown and also uses an `atexit` fallback. This reduces lost log records
during process termination.
If you embed logging setup outside FastAPI lifecycle management, you can stop
the queue pipeline explicitly:
```python
from fastapiobserver import shutdown_logging
shutdown_logging()
```
### Loguru Coexistence
If your service already uses `loguru`, forward those logs into
`fastapi-observer` instead of maintaining two independent pipelines.
```python
from fastapiobserver import install_loguru_bridge
# loguru -> stdlib -> fastapi-observer queue/sinks
bridge_id = install_loguru_bridge()
```
Detailed migration/coexistence guide: [`loguru.md`](loguru.md)
---
## GraphQL Integrations (Strawberry)
If you use `strawberry-graphql`, routing all traffic through `POST /graphql` leaves your logs and traces nearly blind: every request looks identical.
`fastapi-observer` ships a native Strawberry extension (via duck typing, so no extra pip dependencies) to automatically extract GraphQL operations.
```python
import strawberry
from fastapiobserver.integrations.strawberry import StrawberryObservabilityExtension

@strawberry.type
class Query:
    @strawberry.field
    def hello(self) -> str:
        return "world"

schema = strawberry.Schema(
    query=Query,
    extensions=[StrawberryObservabilityExtension],  # Inject this!
)
```
With this extension, your logs will automatically get a `graphql` context key containing the extracted `operation_name`:
```json
{
  "event": {
    "method": "POST",
    "path": "/graphql"
  },
  "user_context": {
    "graphql": {
      "operation_name": "GetUsersQuery"
    }
  }
}
```
If OpenTelemetry is enabled, your traces will dynamically rename from `POST /graphql` to `graphql.operation.GetUsersQuery`.
---
## Plugin Hooks
Extend behavior without editing package internals:
```python
from fastapiobserver import (
    register_log_enricher,
    register_log_filter,
    register_metric_hook,
)

def add_git_sha(payload: dict) -> dict:
    payload["git_sha"] = "abc123"
    return payload

def drop_health_probe(record) -> bool:
    return "health" not in record.getMessage().lower()

def track_slow_requests(request, response, duration):
    if duration > 1.0:
        print(f"slow request: {request.url.path} {duration:.2f}s")

register_log_enricher("git_sha", add_git_sha)
register_log_filter("drop_health_probe", drop_health_probe)
register_metric_hook("slow_requests", track_slow_requests)
```
Plugin failures are isolated and do not crash request handling.
### Custom Metrics Backend Registry
Use `register_metrics_backend()` to plug in non-Prometheus backends without
modifying core code:
```python
from fastapiobserver import register_metrics_backend

class MyBackend:
    def observe(self, method, path, status_code, duration_seconds):
        ...

    def mount_endpoint(self, app, *, path="/metrics", metrics_format="negotiate"):
        # Optional: mount a backend-specific endpoint
        ...

def build_my_backend(*, service: str, environment: str, exemplars_enabled: bool):
    return MyBackend()

register_metrics_backend("my_backend", build_my_backend)
```
### Formatter Dependency Injection
`StructuredJsonFormatter` accepts injectable callables for enrichment and
sanitization, keeping defaults unchanged while improving testability:
```python
formatter = StructuredJsonFormatter(
    settings,
    enrich_event=my_enricher,
    sanitize_payload=my_sanitizer,
)
```
---
## OTel Test Coverage
Repository integration tests include:
- `tests/test_otel_log_correlation.py`: verifies trace/span IDs in logs map to real spans.
- `tests/test_otlp_export_integration.py`: validates OTLP HTTP export with local collector fixtures.
---
## Benchmarking
Reproducible benchmark harness and methodology:
- Guide: [`benchmarks.md`](benchmarks.md)
- Apps: `examples/benchmarks/app.py`
- Runner: `examples/benchmarks/harness.py`
---
## Release Tracks
- `0.1.x`: secure-by-default core
- `0.2.x`: OTel interoperability, security presets, allowlists
- `0.3.x`: GraphQL observability, error fingerprinting, and Logtail DLQ durability
- `0.4.x`: package modularization, sink/registry hardening, and runtime control token rotation
- `1.0.0`: first stable release contract for production deployments
Current release version: `1.0.0`
## Changelog Policy
Breaking changes must be listed under a `Breaking Changes` section in `CHANGELOG.md`.
---
## Packaging and Publishing (Maintainers)
Recommended release command (uses `.env` with `PYPI_TOKEN`):
```bash
scripts/deploy_pypi.sh --tag v1.0.0 --push-tag
```
### 1) Build distributions
```bash
python -m pip install --upgrade pip build
python -m build
```
### 2) Upload to TestPyPI
```bash
python -m pip install --upgrade twine
python -m twine upload --repository testpypi dist/*
```
### 3) Validate install from TestPyPI
```bash
python -m pip install \
  --extra-index-url https://test.pypi.org/simple/ \
  fastapi-observer
```
### 4) Upload to production PyPI
```bash
python -m twine upload dist/*
```
---
## Local Git Hook (Recommended)
```bash
git config core.hooksPath .githooks
```
The pre-push hook runs:
- `uv run ruff check`
- `uv run mypy src`
- `uv run pytest -q`
---
## Roadmap Tracking
See [NEXT_STEPS.md](NEXT_STEPS.md) for the active roadmap and release checklist.
| text/markdown | null | Vitaee <opensource@vitaee.dev> | null | null | null | fastapi, observability, logging, metrics, opentelemetry | [
"Development Status :: 5 - Production/Stable",
"Intended Audience :: Developers",
"Operating System :: OS Independent",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: 3.13",
"Programming Language :: Python :: 3.14",
"Framework :: FastAPI",
"Topic :: System :: Logging",
"Topic :: System :: Monitoring",
"Typing :: Typed"
] | [] | null | null | <3.15,>=3.10 | [] | [] | [] | [
"fastapi>=0.129.0",
"pydantic-settings>=2.10.1",
"starlette>=0.52.1",
"orjson>=3.11.7; extra == \"fast-json\"",
"prometheus-client>=0.24.1; extra == \"prometheus\"",
"loguru>=0.7.2; extra == \"loguru\"",
"opentelemetry-api>=1.39.1; extra == \"otel\"",
"opentelemetry-sdk>=1.39.1; extra == \"otel\"",
"opentelemetry-exporter-otlp>=1.39.1; extra == \"otel\"",
"opentelemetry-instrumentation-fastapi>=0.60b1; extra == \"otel\"",
"opentelemetry-instrumentation-logging>=0.60b1; extra == \"otel\"",
"opentelemetry-instrumentation-httpx>=0.60b1; extra == \"otel-httpx\"",
"opentelemetry-instrumentation-requests>=0.60b1; extra == \"otel-requests\"",
"orjson>=3.11.7; extra == \"all\"",
"prometheus-client>=0.24.1; extra == \"all\"",
"loguru>=0.7.2; extra == \"all\"",
"opentelemetry-api>=1.39.1; extra == \"all\"",
"opentelemetry-sdk>=1.39.1; extra == \"all\"",
"opentelemetry-exporter-otlp>=1.39.1; extra == \"all\"",
"opentelemetry-instrumentation-fastapi>=0.60b1; extra == \"all\"",
"opentelemetry-instrumentation-logging>=0.60b1; extra == \"all\"",
"opentelemetry-instrumentation-httpx>=0.60b1; extra == \"all\"",
"opentelemetry-instrumentation-requests>=0.60b1; extra == \"all\"",
"httpx>=0.28.1; extra == \"dev\"",
"loguru>=0.7.2; extra == \"dev\"",
"mypy>=1.19.1; extra == \"dev\"",
"pip-audit>=2.10.0; extra == \"dev\"",
"pytest>=9.0.2; extra == \"dev\"",
"pytest-randomly>=3.16.0; extra == \"dev\"",
"ruff>=0.15.1; extra == \"dev\"",
"cyclonedx-bom>=7.2.1; extra == \"dev\"",
"uvicorn>=0.30.0; extra == \"dev\""
] | [] | [] | [] | [
"Homepage, https://github.com/Vitaee/FastapiObserver",
"Documentation, https://github.com/Vitaee/FastapiObserver#readme",
"Repository, https://github.com/Vitaee/FastapiObserver.git",
"Issues, https://github.com/Vitaee/FastapiObserver/issues",
"Funding, https://buymeacoffee.com/FYbPCSu"
] | twine/6.2.0 CPython/3.13.9 | 2026-02-21T02:54:46.052983 | fastapi_observer-1.0.0.tar.gz | 113,202 | 23/7e/aeceda3c12b27b9754c3190f1a5f7626abfa90caf21671d536c0a87aece2/fastapi_observer-1.0.0.tar.gz | source | sdist | null | false | 51eea788ba2415f5f848a9f81b007903 | d8a54dba429dcfa7cd547721f20e8e05d0771a14abb177f90415374aeef4d566 | 237eaeceda3c12b27b9754c3190f1a5f7626abfa90caf21671d536c0a87aece2 | MIT | [
"LICENSE"
] | 230 |
2.4 | homeassistant-stubs | 2026.2.3 | PEP 484 typing stubs for Home Assistant Core | [](https://github.com/KapJI/homeassistant-stubs/actions/workflows/ci.yaml)
[](https://pypi.org/project/homeassistant-stubs/)
# PEP 484 stubs for Home Assistant Core
This is an unofficial stub-only package generated from [Home Assistant Core](https://github.com/home-assistant/core) sources.
You can use it to enable type checks against Home Assistant code in your custom component or AppDaemon app.
## How to use
Add it to dev dependencies of your project.
I recommend using [uv](https://docs.astral.sh/uv/) to manage dependencies:
```shell
uv add --dev homeassistant-stubs
```
Please note that this package only contains stubs for strictly typed modules.
This includes all core modules and some components.
Generic components like `sensor`, `light` or `media_player` are already typed.
If your project imports components that are not yet typed, `mypy` will be unable to find those modules.
The best fix is to submit a PR to HA Core that adds type hints for those components.
Once merged, stubs for those components will become available in this package.
## Motivation
Home Assistant maintainers don't want to distribute typing information with `homeassistant` package
([[1]](https://github.com/home-assistant/core/pull/28866),
[[2]](https://github.com/home-assistant/core/pull/47796)).
The reason is that [PEP 561](https://www.python.org/dev/peps/pep-0561/#packaging-type-information)
says that the `py.typed` marker applies recursively, so the whole package must support type checking,
but many Home Assistant components are currently not type checked.
## How it works
- The `update_stubs.py` script extracts the list of strictly typed modules from the Home Assistant configs.
- Then it runs `stubgen`, which ships with `mypy`, to generate the typing stubs.
- New versions are generated and published automatically every 12 hours.
| text/markdown | null | Ruslan Sayfutdinov <ruslan@sayfutdinov.com> | null | null | MIT | homeassistant, pep484, typing | [
"Development Status :: 5 - Production/Stable",
"Intended Audience :: Developers",
"License :: OSI Approved :: MIT License",
"Operating System :: OS Independent",
"Programming Language :: Python :: 3.13",
"Topic :: Home Automation",
"Topic :: Software Development",
"Typing :: Typed"
] | [] | null | null | >=3.13.2 | [] | [] | [] | [
"homeassistant==2026.2.3"
] | [] | [] | [] | [
"Homepage, https://github.com/KapJI/homeassistant-stubs",
"Bug Tracker, https://github.com/KapJI/homeassistant-stubs/issues",
"Release Notes, https://github.com/KapJI/homeassistant-stubs/releases"
] | uv/0.10.4 {"installer":{"name":"uv","version":"0.10.4","subcommand":["publish"]},"python":null,"implementation":{"name":null,"version":null},"distro":{"name":"Ubuntu","version":"24.04","id":"noble","libc":null},"system":{"name":null,"release":null},"cpu":null,"openssl_version":null,"setuptools_version":null,"rustc_version":null,"ci":true} | 2026-02-21T02:52:03.984137 | homeassistant_stubs-2026.2.3.tar.gz | 1,241,369 | af/b0/a7376776f8a348ef0d7330fbed0e65860bffdf88a9f3bb8081e13d1604f2/homeassistant_stubs-2026.2.3.tar.gz | source | sdist | null | false | a8f15c7d9a38039727b7736df81c3040 | 2726a92636d7f450b97aff1abaf9185f51264fed7c4e76ca9fbd379496bc7e9f | afb0a7376776f8a348ef0d7330fbed0e65860bffdf88a9f3bb8081e13d1604f2 | null | [
"LICENSE"
] | 358 |
2.4 | pygents | 0.5.2 | A lightweight async framework for structuring and running AI agents in Python | # pygents
A lightweight async framework for structuring and running AI agents in Python. Define tools, queue turns, stream results.
## Install
```bash
pip install pygents
```
Requires Python 3.12+.
## Example
```python
import asyncio
from pygents import Agent, Turn, tool

@tool()
async def greet(name: str) -> str:
    return f"Hello, {name}!"

async def main():
    agent = Agent("greeter", "Greets people", [greet])

    # Use kwargs:
    await agent.put(Turn("greet", kwargs={"name": "World"}))

    # Or positional args:
    await agent.put(Turn("greet", args=["World"]))

    async for turn, value in agent.run():
        print(value)  # "Hello, World!"

asyncio.run(main())
```
Tools are async functions. Turns say which tool to run and with what args. Agents process a queue of turns and stream results. The loop exits when the queue is empty.
## Features
- **Streaming** — agents yield `(turn, value)` as results are produced
- **Inter-agent messaging** — agents can send turns to each other
- **Dynamic arguments** — callable positional args and kwargs evaluated at runtime
- **Timeouts** — per-turn, default 60s
- **Per-tool locking** — opt-in serialization for shared state (lock is acquired inside the tool wrapper, so turn-level hooks run outside the tool lock)
- **Fixed kwargs** — decorator kwargs (e.g. `@tool(permission="admin")`) are merged into every invocation; call-time kwargs override
- **Hooks** — `@hook(hook_type, lock=..., **fixed_kwargs)` decorator; hooks stored as a list and selected by type; turn, agent, tool, and memory hooks; same fixed_kwargs and lock options as tools
- **Serialization** — `to_dict()` / `from_dict()` for turns and agents
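The fixed-kwargs precedence described above (decorator kwargs merged into every invocation, call-time kwargs winning on conflict) boils down to a dict merge. A minimal stdlib sketch of that precedence, not pygents' actual code:

```python
def merge_kwargs(fixed: dict, call_time: dict) -> dict:
    """Decorator-level kwargs apply to every invocation; call-time values override."""
    return {**fixed, **call_time}

fixed = {"permission": "admin", "retries": 3}
print(merge_kwargs(fixed, {"retries": 0, "name": "World"}))
# {'permission': 'admin', 'retries': 0, 'name': 'World'}
```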
## Docs
Serve the full documentation locally with `uv run mkdocs serve`. MkDocs is an optional dependency; install it with `pip install -e ".[dev]"` (or use `uv run` as above) so the library itself does not depend on it.
| text/markdown | null | null | null | null | null | null | [] | [] | null | null | >=3.12 | [] | [] | [] | [
"mkdocs>=1.6.1; extra == \"dev\"",
"mkdocs-material>=9.5.0; extra == \"dev\"",
"bump2version>=1.0.1; extra == \"dev\"",
"pytest>=8.0.0; extra == \"dev\"",
"ruff>=0.15.0; extra == \"dev\""
] | [] | [] | [] | [] | twine/6.2.0 CPython/3.9.25 | 2026-02-21T02:51:36.406711 | pygents-0.5.2.tar.gz | 16,102 | 9a/67/007f0caad58aabf58624d570ee944e4504f763326468e2a827b64d3c4535/pygents-0.5.2.tar.gz | source | sdist | null | false | df18572a81491efbec487ab2fa8e65e6 | c9c3c37e612db9cb03e4e0700f55950c47aabe6f1e929f4d12b032c840bcf4c6 | 9a67007f0caad58aabf58624d570ee944e4504f763326468e2a827b64d3c4535 | null | [
"LICENSE.txt"
] | 248 |
2.4 | llmswap | 5.5.6 | Python SDK + CLI for 11 LLM providers: Claude Sonnet 4.6, Claude Opus 4.6, Llama 4 Maverick, GPT-5.2, Gemini 3 Flash, Grok 4.1. Universal tool calling, MCP protocol, automatic fallback, zero vendor lock-in. | # LLMSwap: Python SDK + CLI for Any LLM Provider
[](https://badge.fury.io/py/llmswap)
[](https://pepy.tech/projects/llmswap)
[](https://github.com/llmswap/homebrew-tap)
[](https://github.com/sreenathmmenon/llmswap/actions/workflows/comprehensive-ci.yml)
[](https://www.python.org/downloads/)
[](https://opensource.org/licenses/MIT)
## Ship AI Apps Faster
**11 LLM Providers. Latest Models. Zero Vendor Lock-in.**
Claude Sonnet 4.6 (Feb '26) • Claude Opus 4.6 • Llama 4 Maverick • GPT-5.2 • Gemini 3 Flash • Grok 4.1 + 11 providers.
Universal tool calling • MCP protocol • Zero vendor lock-in • Production-ready SDK + CLI.
One simple interface for Anthropic, OpenAI, Gemini, Groq, X.AI and more. Stop wrestling with complex frameworks—build production AI in 10 lines of code.
**📚 Documentation:** [llmswap.org](https://llmswap.org) | **⚡ CLI Reference:** [CLI Docs](https://llmswap.org/docs/cli.html) | **🐍 SDK Guide:** [SDK Docs](https://llmswap.org/docs/sdk.html) | **🔧 MCP Guide:** [#mcp-integration](#-mcp-integration-new)
## 🆕 NEW in v5.2.0: Universal Tool Calling
**Enable LLMs to access YOUR data and systems** - Define tools once, works across ALL providers.
```python
from llmswap import LLMClient, Tool

# Define a tool to access YOUR weather API
weather = Tool(
    name="get_weather",
    description="Get real-time weather data",
    parameters={"city": {"type": "string"}},
    required=["city"]
)

# Works with ANY provider - Anthropic, OpenAI, Gemini, Groq, xAI
client = LLMClient(provider="anthropic")
response = client.chat("What's the weather in Tokyo?", tools=[weather])
# LLM calls YOUR function → you return data → LLM gives natural response
```
**Real-World Use Cases:**
- 🌦️ Give LLM access to YOUR weather API for real-time data
- 💾 Let LLM query YOUR database for customer information
- 🛒 Enable LLM to search YOUR product catalog for shopping assistance
- 🔧 Connect LLM to YOUR systems and APIs
**Works with:** Anthropic, OpenAI, Groq, Gemini, xAI | **[Quick Start Guide →](examples/)** | **[Full Docs →](.llmswap/website-docs/)**
---
## ⚡ Quick Start (30 seconds)
```bash
# Recommended: Install with uv (fastest)
uv tool install llmswap
# Or install with pip
pip install llmswap
# or Homebrew
brew tap llmswap/tap && brew install llmswap
# Create your first workspace
cd ~/my-project
llmswap workspace init
# Chat with AI that remembers everything
llmswap chat "Help me with Flask routing"
# AI has full project context + all past learnings!
# 🆕 NEW: Connect to MCP servers with natural language
llmswap-mcp --command npx -y @modelcontextprotocol/server-filesystem ~/Documents
# Ask: "List all PDF files"
# Ask: "Read the contents of README.md"
# AI uses filesystem tools automatically!
# 🆕 Compare models visually (optional)
pip install llmswap[web]
llmswap web # Opens browser - compare GPT-4 vs Claude vs Gemini
```
---
## 🆕 Latest Models Supported (February 2026)
**New models work the day they launch** - LLMSwap's pass-through architecture means no SDK updates needed.
### ⚡ Claude Sonnet 4.6 (Released Feb 17, 2026)
```python
from llmswap import LLMClient
client = LLMClient(provider="anthropic", model="claude-sonnet-4-6")
response = client.chat("Build a full-stack application with authentication...")
print(response.content)
```
**Anthropic's new default model.** Improved coding, computer use, and design. Same pricing as Sonnet 4.5 ($3/$15 per million tokens).
**Best for:** Coding, everyday tasks, cost-effective quality work
### 🧠 Claude Opus 4.6 (Released Feb 5, 2026)
```python
from llmswap import LLMClient
client = LLMClient(provider="anthropic", model="claude-opus-4-6")
response = client.chat("Analyze this financial report and identify key risks...")
print(response.content)
```
**Anthropic's most capable model.** Top on Finance Agent benchmark. Better at long-context research and complex document analysis. 1M token context window (beta). Pricing: $15/$75 per million tokens.
**Best for:** Financial analysis, deep research, large document processing, complex coding
### 🚀 Gemini 3 Pro (Released Nov 18, 2025)
```python
from llmswap import LLMClient
client = LLMClient(provider="gemini", model="gemini-3-pro")
response = client.chat("Analyze this video and extract key insights...")
print(response.content)
```
**Google's most advanced multimodal model.** Processes text, images, videos, audio, PDFs. 1M+ input tokens.
**Best for:** Multimodal understanding, large document analysis, batch processing
### 🧠 GPT-5.2 (Released Dec 11, 2025)
```python
from llmswap import LLMClient
client = LLMClient(provider="openai", model="gpt-5.2")
response = client.chat("Design an algorithm for real-time fraud detection...")
print(response.content)
```
**OpenAI's latest flagship.** Most capable model for professional knowledge work. Variants: Instant (speed) & Thinking (reasoning). Also: GPT-5.2-Codex for agentic coding.
**Best for:** Professional tasks, complex reasoning, coding, science & math
### ⚡ Gemini 3 Flash (Released Dec 17, 2025)
```python
from llmswap import LLMClient
client = LLMClient(provider="gemini", model="gemini-3-flash")
response = client.chat("Analyze this codebase and suggest improvements...")
print(response.content)
```
**Google's fastest frontier model.** Pro-level reasoning at 10x lower cost. 1M input tokens, 64k output. Multimodal: text, images, video, audio, PDF.
**Best for:** High-speed inference, cost optimization, everyday tasks, agentic workflows
### 🏆 Grok 4.1 (Released Nov 17, 2025)
```python
from llmswap import LLMClient
client = LLMClient(provider="xai", model="grok-4.1")
response = client.chat("Help me understand this nuanced ethical dilemma...")
print(response.content)
```
**#1 on LMArena Text Leaderboard.** Enhanced emotional intelligence & creative collaboration. Preferred 64.78% in blind tests.
**Best for:** Emotional intelligence, creative writing, collaborative tasks, nuanced understanding
### 💎 DeepSeek V3.2 (Released Dec 16, 2025)
```python
from llmswap import LLMClient
client = LLMClient(provider="deepseek", model="deepseek-v3.2")
response = client.chat("Solve this complex mathematical problem...")
print(response.content)
```
**Open-source powerhouse.** Matches GPT-5 & Gemini 3 at 10x lower cost ($0.028/1M tokens). 671B parameters, 96% on AIME 2025. MIT License.
**Best for:** Cost-sensitive applications, open-source projects, math & reasoning, on-premise deployment
**Plus 6 more providers:** Groq (5x faster LPU), Cohere (enterprise), Perplexity (search), IBM Watsonx (Granite 4.0), Ollama, Sarvam AI, local models.
**Why it matters:** New models work day-one. Pass-through architecture means future models work immediately upon release.
---
> **🆕 Use Any Model from Any Provider!** New model just launched? Use it immediately. LLMSwap's pass-through architecture means GPT-5, Claude Opus 4, Gemini 2.5 Pro work the day they release. Currently supports **11 providers** (OpenAI, Anthropic, Gemini, Cohere, Perplexity, IBM watsonx, Groq, Ollama, **xAI Grok**, **Sarvam AI**).
> **✅ Battle-Tested with LMArena Top Models:** All 10 providers tested and validated with top-rated models from LMArena leaderboard. From Grok-4 (xAI's flagship) to Claude Sonnet 4.6 (latest default model) to Gemini 2.0 Flash Exp - every model in our defaults is production-validated and arena-tested for real-world use.
**The First AI Tool with Project Memory & Learning Journals** - LLMSwap v5.1.0 introduces revolutionary workspace system that remembers your learning journey across projects. Build apps without vendor lock-in (SDK) or use from terminal (CLI). Works with your existing subscriptions: Claude, OpenAI, Gemini, Cohere, Perplexity, IBM watsonx, Groq, Ollama, xAI Grok, Sarvam AI (**10 providers**). **Use any model from your provider** - even ones released tomorrow. Pass-through architecture means GPT-5, Gemini 2.5 Pro, Claude Opus 4? They work the day they launch.
**🎯 Solve These Common Problems:**
- ❌ "I need multiple second brains for different aspects of my life" 🆕
- ❌ "AI strays over time, I need to re-steer it constantly" 🆕
- ❌ "I keep explaining the same context to AI over and over"
- ❌ "AI forgets what I learned yesterday"
- ❌ "I lose track of architecture decisions across projects"
- ❌ "Context switching between projects is exhausting"
- ❌ "I want AI to understand my specific codebase, not generic answers"
**✅ llmswap v5.1.0 Solves All These:**
- ✅ Multiple independent "second brains" per project/life aspect 🆕
- ✅ Persistent context prevents AI from straying 🆕
- ✅ Per-project workspaces that persist context across sessions
- ✅ Auto-tracked learning journals - never forget what you learned
- ✅ Architecture decision logs - all your technical decisions documented
- ✅ Zero context switching - AI loads the right project automatically
- ✅ Project-aware AI - mentor understands YOUR specific tech stack
## Why Developers Choose llmswap
✅ **10 Lines to Production** - Not 1000 like LangChain
✅ **MCP Protocol Support** - Connect to any MCP server with natural language 🆕
✅ **Automatic Fallback** - Never down. Switches providers if one fails
✅ **50-90% Cost Savings** - Built-in caching. Same query = FREE
✅ **Workspace Memory** - Your AI remembers your project context
✅ **Universal Tool Calling** - Define once, works everywhere (NEW v5.2.0)
✅ **CLI + SDK** - Code AND terminal. Your choice
✅ **Zero Lock-in** - Switch from OpenAI to Claude in 1 line
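The caching claim above ("Same query = FREE") can be illustrated with a minimal sketch. This is pure stdlib Python, not llmswap's actual implementation: repeated identical prompts are served from a local cache keyed by a hash of the prompt, so only the first call would hit a paid provider API.

```python
import hashlib

class CachedChat:
    """Toy response cache: identical prompts hit the provider only once."""

    def __init__(self, provider_call):
        self.provider_call = provider_call  # stand-in for a real client.chat
        self.cache = {}
        self.api_calls = 0

    def chat(self, prompt: str) -> str:
        key = hashlib.sha256(prompt.encode()).hexdigest()
        if key not in self.cache:
            self.api_calls += 1  # paid API request
            self.cache[key] = self.provider_call(prompt)
        return self.cache[key]  # repeat query: served locally, free

# Stand-in provider call for the sketch
chat = CachedChat(lambda p: f"answer to: {p}")
chat.chat("What is Docker?")
chat.chat("What is Docker?")  # cache hit, no second API call
print(chat.api_calls)  # → 1
```

The real library adds eviction and TTL concerns this sketch ignores; the point is only that a deterministic cache key makes repeat queries cost nothing.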
**Built for Speed:**
- 🚀 **Hackathons** - Ship in hours
- 💡 **MVPs** - Validate ideas fast
- 📱 **Production Apps** - Scale as you grow
- 🎯 **Real Projects** - Trusted by developers worldwide
**v5.1.0**: Revolutionary AI mentorship with **project memory**, **workspace-aware context**, **auto-tracked learning journals**, and **persistent mentor relationships**. The first AI tool that truly remembers your learning journey across projects.
**NEW in v5.2.0:**
- 🛠️ **Universal Tool Calling** - Enable LLMs to use YOUR custom functions across all providers
- 🔧 **5 Providers Supported** - Anthropic, OpenAI, Groq, Gemini, xAI with automatic format conversion
- 📖 **Complete Documentation** - Full guides, examples, and real-world use cases
- ✅ **100% Backward Compatible** - All existing features work without changes
**v5.1.6:**
- 🌐 **Web UI** - Compare 20+ models side-by-side in beautiful browser interface & learn prompting techniques
- 📊 **Visual Comparison** - Live streaming results with speed badges (⚡🥈🥉), cost charts, efficiency metrics
- 💰 **Cost Optimizer** - See exact costs across providers, find cheapest model for your use case
- 🎨 **Markdown + Code Highlighting** - Syntax-highlighted code blocks with individual copy buttons
- 💾 **Smart Preferences** - Remembers your favorite models via localStorage
- 📈 **Real-time Metrics** - Tokens/sec efficiency, response length indicators, actual API token counts
**NEW in v5.1.0:**
- 🧠 **Workspace Memory** - Per-project context that persists across sessions
- 📚 **Auto-Learning Journal** - Automatically tracks what you learn in each project
- 🎯 **Context-Aware Mentorship** - AI mentor understands your project and past learnings
- 📖 **Architecture Decision Log** - Document and remember key technical decisions
- 🔄 **Cross-Project Intelligence** - Learn patterns from one project, apply to another
- 💡 **Proactive Learning** - AI suggests next topics based on your progress
- 🗂️ **Project Knowledge Base** - Custom prompt library per workspace
## 🧠 Finally: An Elegant Solution for Multiple Second Brains
**The Problem Industry Leaders Can't Solve:**
> "I still haven't found an elegant solution to the fact that I need several second brains for the various aspects of my life, each with different styles and contexts." - Industry feedback
**The LLMSwap Solution: Workspace System**
Each aspect of your life gets its own "brain" with independent memory:
- 💼 **Work Projects** - `~/work/api-platform` - Enterprise patterns, team conventions
- 📚 **Learning** - `~/learning/rust` - Your learning journey, struggles, progress
- 🚀 **Side Projects** - `~/personal/automation` - Personal preferences, experiments
- 🌐 **Open Source** - `~/oss/django` - Community patterns, contribution history
**What Makes It "Elegant":**
- ✅ Zero configuration - just `cd` to project directory
- ✅ Auto-switching - AI loads the right "brain" automatically
- ✅ No context bleed - work knowledge stays separate from personal
- ✅ Persistent memory - each brain remembers across sessions
- ✅ Independent personas - different teaching style per project if you want
**Stop Re-Explaining Context. Start Building.**
---
## 🎯 Transform AI Into Your Personal Mentor with Project Memory
**Inspired by Eklavya** - the legendary self-taught archer who learned from dedication and the right guidance - LLMSwap transforms any AI provider into a personalized mentor that adapts to your learning style **and remembers your journey**.
**The Challenge:** Developers struggle to learn effectively from AI because:
- 🔴 Responses are generic, lack personality, and don't adapt to individual needs
- 🔴 AI loses context between sessions - you repeat the same explanations
- 🔴 No learning history - AI doesn't know what you already learned
- 🔴 Project context is lost - AI doesn't understand your codebase
**LLMSwap's Solution v5.1.0:** Choose your mentorship style, initialize a workspace, and ANY AI provider becomes **your personalized guide that remembers everything**:
```bash
# 🆕 v5.1.0: Initialize workspace for your project
cd ~/my-flask-app
llmswap workspace init
# Creates .llmswap/ with context.md, learnings.md, decisions.md
# Now your AI mentor KNOWS your project
llmswap chat --mentor guru --alias "Guruji"
# Mentor has full context: your tech stack, past learnings, decisions made
# 🆕 Auto-tracked learning journal
# Every conversation automatically saves key learnings
llmswap workspace journal
# View everything you've learned in this project
# 🆕 Architecture decision log
llmswap workspace decisions
# See all technical decisions documented automatically
# View all your workspaces
llmswap workspace list
# Get wisdom and deep insights from a patient teacher
llmswap chat --mentor guru --alias "Guruji"
# High-energy motivation when you're stuck
llmswap ask "How do I debug this?" --mentor coach
# Collaborative peer learning for exploring ideas
llmswap chat --mentor friend --alias "CodeBuddy"
# Question-based learning for critical thinking
llmswap ask "Explain REST APIs" --mentor socrates
# 🆕 Use Claude Sonnet 4.6 - Latest default model
llmswap chat --provider anthropic --model claude-sonnet-4-6
# Or set as default in config for all queries
```
### 🔄 Rotate Personas to Expose Blind Spots
**Industry Insight:** "Rotate personas: mentor, skeptic, investor, end-user. Each lens exposes blind spots differently."
**Use Case: Reviewing API Design**
```bash
# Round 1: Long-term wisdom
llmswap chat --mentor guru "Design API for multi-tenant SaaS"
# Catches: scalability, technical debt, maintenance
# Round 2: Critical questions
llmswap chat --mentor socrates "Review this API design"
# Catches: assumptions, alternatives, edge cases
# Round 3: Practical execution
llmswap chat --mentor coach "What's the fastest path to v1?"
# Catches: over-engineering, paralysis by analysis
```
**Same project context. Different perspectives. Complete understanding.**
**What Makes v5.1.0 Revolutionary:**
- 🧠 **Works with ANY provider** - Transform Claude, GPT-4, or Gemini into your mentor
- 🎭 **6 Teaching Personas** - Guru, Coach, Friend, Socrates, Professor, Tutor
- 📊 **Project Memory** - Per-project context that persists across sessions ⭐ NEW
- 📚 **Auto-Learning Journal** - Automatically tracks what you learn ⭐ NEW
- 📖 **Decision Tracking** - Documents architecture decisions ⭐ NEW
- 🎓 **Age-Appropriate** - Explanations tailored to your level (--age 10, --age 25, etc.)
- 💰 **Cost Optimized** - Use cheaper providers for learning, premium for complex problems
- 🔄 **Workspace Detection** - Automatically loads project context ⭐ NEW
**Traditional AI tools give you answers. LLMSwap v5.1.0 gives you a personalized learning journey that REMEMBERS.**
---
## 🔧 MCP Integration (NEW)
**The Model Context Protocol (MCP)** lets LLMs connect to external tools and data sources. llmswap provides the **best MCP client experience** - just talk naturally, and AI handles the tools.
### Natural Language MCP CLI
Connect to any MCP server and interact with tools using plain English:
```bash
# Filesystem access
llmswap-mcp --command npx -y @modelcontextprotocol/server-filesystem ~/Documents
# Then ask naturally:
> "What files are in this directory?"
> "Read the contents of report.pdf"
> "Find all files modified in the last week"
# Database queries
llmswap-mcp --command npx -y @modelcontextprotocol/server-sqlite ./mydb.sqlite
> "Show me all users in the database"
> "What are the top 10 products by sales?"
# GitHub integration
llmswap-mcp --command npx -y @modelcontextprotocol/server-github --owner anthropics --repo anthropic-sdk-python
> "Show me recent issues"
> "What pull requests are open?"
```
### Supported MCP Transports
- **stdio** - Local command-line tools (most common)
- **SSE** - Server-Sent Events for remote servers
- **HTTP** - REST API endpoints
### Works With All 5 Providers
```bash
# Use your preferred LLM provider
llmswap-mcp --provider anthropic --command <mcp-server>
llmswap-mcp --provider openai --command <mcp-server>
llmswap-mcp --provider gemini --command <mcp-server>
llmswap-mcp --provider groq --command <mcp-server> # Fastest!
llmswap-mcp --provider xai --command <mcp-server> # Grok
```
### Python SDK Integration
```python
from llmswap import LLMClient
# Add MCP server to your client
client = LLMClient(provider="anthropic")
client.add_mcp_server("filesystem", command=["npx", "-y", "@modelcontextprotocol/server-filesystem", "/tmp"])
# Chat naturally - AI uses MCP tools automatically
response = client.chat("List all log files in /tmp", use_mcp=True)
print(response.content)
# List available tools
tools = client.list_mcp_tools()
for tool in tools:
    print(f"- {tool['name']}: {tool['description']}")
```
### Popular MCP Servers
- **Filesystem** - Read/write files and directories
- **GitHub** - Search repos, issues, PRs
- **GitLab** - Project management
- **Google Drive** - Access documents
- **Slack** - Send messages, read channels
- **PostgreSQL** - Database queries
- **Brave Search** - Web search
- **Memory** - Persistent knowledge graphs
[Browse all MCP servers →](https://github.com/modelcontextprotocol/servers)
### MCP Features
✅ **Natural language interface** - No JSON, no manual tool calls
✅ **Multi-turn conversations** - Context preserved across queries
✅ **Beautiful UI** - Clean bordered interface like Claude/Factory Droids
✅ **Provider-specific formatting** - Optimized for each LLM
✅ **Connection management** - Automatic reconnection and health checks
✅ **Error handling** - Graceful degradation with circuit breaker
### Example Use Cases
**For Data Analysis:**
```bash
llmswap-mcp --command npx -y @modelcontextprotocol/server-sqlite ./sales.db
> "What were our top 5 products last quarter?"
> "Show me revenue trends by region"
```
**For Development:**
```bash
llmswap-mcp --command npx -y @modelcontextprotocol/server-github --owner myorg --repo myapp
> "What issues are labeled as bugs?"
> "Summarize recent commits"
```
**For Research:**
```bash
llmswap-mcp --command npx -y @modelcontextprotocol/server-brave-search
> "Find recent papers on transformer architectures"
> "What are the latest developments in quantum computing?"
```
---
## 🏢 Enterprise Deployment
### Remote MCP Servers (Production)
#### SSE Transport (Server-Sent Events)
```python
from llmswap import LLMClient
import os
# Connect to internal MCP server via SSE
client = LLMClient(provider="anthropic")
client.add_mcp_server(
    "internal-api",
    transport="sse",
    url="https://mcp.yourcompany.com/events",
    headers={
        "Authorization": f"Bearer {os.getenv('INTERNAL_MCP_TOKEN')}"
    }
)
# Use with natural language
response = client.chat("Query internal data", use_mcp=True)
```
#### HTTP Transport (REST API)
```python
# Connect to MCP server via HTTP
client.add_mcp_server(
    "crm-api",
    transport="http",
    url="https://api.yourcompany.com/mcp",
    headers={
        "X-API-Key": os.getenv('CRM_API_KEY')
    }
)
# Query your internal systems
response = client.chat("Get customer data for account #12345", use_mcp=True)
```
### Production Features
#### Health Monitoring
```python
# Check MCP server health
if not client.check_mcp_health("internal-api"):
    logger.error("MCP server unhealthy")
    # Fallback logic
```
#### Circuit Breaker (Built-in)
```python
# Automatic circuit breaker prevents cascade failures
client.add_mcp_server(
    "backend-api",
    transport="sse",
    url="https://backend.company.com/mcp",
    circuit_breaker_threshold=5,  # Opens after 5 failures
    circuit_breaker_timeout=60    # Retry after 60 seconds
)
```
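The threshold/timeout semantics above can be sketched in plain Python. This illustrates the circuit-breaker pattern itself, not llmswap's internal code: after `threshold` consecutive failures the breaker opens and subsequent calls fail fast, without touching the backend, until `timeout` seconds have passed.

```python
import time

class CircuitBreaker:
    """Opens after `threshold` consecutive failures; retries after `timeout` seconds."""

    def __init__(self, threshold=5, timeout=60):
        self.threshold = threshold
        self.timeout = timeout
        self.failures = 0
        self.opened_at = None

    def call(self, fn):
        if self.opened_at is not None:
            if time.monotonic() - self.opened_at < self.timeout:
                raise RuntimeError("circuit open: failing fast")
            self.opened_at = None  # half-open: allow one trial call
        try:
            result = fn()
        except Exception:
            self.failures += 1
            if self.failures >= self.threshold:
                self.opened_at = time.monotonic()  # open the circuit
            raise
        self.failures = 0  # success resets the failure counter
        return result

breaker = CircuitBreaker(threshold=2, timeout=60)

def flaky():
    raise ConnectionError("backend down")

# Two real failures open the circuit...
for _ in range(2):
    try:
        breaker.call(flaky)
    except ConnectionError:
        pass

# ...so the third call fails fast without reaching the backend:
try:
    breaker.call(flaky)
except RuntimeError as e:
    print(e)  # → circuit open: failing fast
```

This is why a cascade of slow timeouts against a dead backend turns into instant local errors instead.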
### Multi-Provider Routing
#### Cost Optimization
```python
# Route to cheapest provider first, fallback to premium
try:
    response = LLMClient(provider="groq").chat(query)  # Fast & cheap
except Exception:
    response = LLMClient(provider="anthropic").chat(query)  # Premium fallback
```
#### Latency Optimization
```python
# Route based on latency requirements
if requires_realtime:
    client = LLMClient(provider="groq")  # 840+ tokens/sec
else:
    client = LLMClient(provider="openai")  # More capable
```
#### Provider Fallback Chain
```python
from llmswap import LLMClient
providers = ["groq", "anthropic", "openai"]  # Priority order

for provider in providers:
    try:
        client = LLMClient(provider=provider)
        response = client.chat(query)
        break
    except Exception as e:
        logger.warning(f"{provider} failed: {e}")
        continue
```
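The same chain can be packaged as a reusable helper. The sketch below uses stand-in callables instead of real clients (so it runs without any API keys), but the control flow is identical: try each provider in priority order, return the first success.

```python
def first_success(calls):
    """Try each (name, callable) in priority order; return the first success."""
    last_error = None
    for name, call in calls:
        try:
            return name, call()
        except Exception as e:
            last_error = e  # log and fall through to the next provider
    raise RuntimeError(f"all providers failed: {last_error}")

# Stand-ins for the sketch: groq is down, anthropic answers
def groq():
    raise TimeoutError("rate limited")

def anthropic():
    return "response from anthropic"

provider, response = first_success([("groq", groq), ("anthropic", anthropic)])
print(provider)  # → anthropic
```

In real code the callables would be `lambda: LLMClient(provider=name).chat(query)` for each provider in the chain.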
---
## 🔒 Security & Compliance
### API Key Management
#### Environment Variables (Recommended)
```bash
# Never hardcode API keys
export ANTHROPIC_API_KEY="sk-ant-..."
export OPENAI_API_KEY="sk-..."
export INTERNAL_MCP_TOKEN="your-token"
```
```python
import os
from llmswap import LLMClient
# Keys loaded from environment automatically
client = LLMClient(provider="anthropic") # Uses ANTHROPIC_API_KEY
```
#### Secrets Management Integration
**AWS Secrets Manager:**
```python
import boto3
import json
from llmswap import LLMClient
def get_secret(secret_name):
    client = boto3.client('secretsmanager')
    response = client.get_secret_value(SecretId=secret_name)
    return json.loads(response['SecretString'])

secrets = get_secret('llm-api-keys')
client = LLMClient(provider="anthropic", api_key=secrets['anthropic_key'])
```
**Azure Key Vault:**
```python
from azure.identity import DefaultAzureCredential
from azure.keyvault.secrets import SecretClient
from llmswap import LLMClient
credential = DefaultAzureCredential()
vault_client = SecretClient(
    vault_url="https://your-vault.vault.azure.net",
    credential=credential
)
api_key = vault_client.get_secret("anthropic-api-key").value
client = LLMClient(provider="anthropic", api_key=api_key)
```
**HashiCorp Vault:**
```python
import hvac
from llmswap import LLMClient
vault_client = hvac.Client(url='https://vault.company.com')
vault_client.auth.approle.login(role_id=..., secret_id=...)
secret = vault_client.secrets.kv.v2.read_secret_version(path='llm-keys')
api_key = secret['data']['data']['anthropic_key']
client = LLMClient(provider="anthropic", api_key=api_key)
```
### Data Privacy
**Zero Telemetry:**
- LLMSwap collects NO usage data
- NO analytics sent to third parties
- NO phone-home behavior
**Data Flow:**
```
Your Application → LLMSwap → LLM Provider API
                                   ↑
                     Your data goes ONLY here
              (governed by provider's privacy policy)
```
**On-Premise MCP Servers:**
```python
# All data stays within your infrastructure
client.add_mcp_server(
    "internal-db",
    transport="http",
    url="https://internal.company.local/mcp"  # Internal network only
)
```
### Network Security
#### TLS/SSL Enforcement
```python
# HTTPS enforced for remote connections
client.add_mcp_server(
    "api",
    transport="http",
    url="https://secure.company.com/mcp",  # HTTPS URL enforces TLS
    verify_ssl=True  # Certificate verification
)
```
#### Timeout Controls
```python
# Prevent hanging connections
client = LLMClient(
    provider="anthropic",
    timeout=30  # 30 second timeout
)
```
### Audit Logging
```python
import logging
# Enable detailed logging for compliance
logging.basicConfig(level=logging.INFO)
logger = logging.getLogger('llmswap')
# Logs include:
# - Provider used
# - Token usage
# - MCP tool calls
# - Error details
# - No sensitive data (keys redacted)
```
### Compliance Notes
**SOC2 / GDPR Considerations:**
- LLMSwap is a client library - does NOT store data
- Data retention governed by your chosen LLM provider
- See provider compliance: [Anthropic](https://www.anthropic.com/security), [OpenAI](https://openai.com/security), [Google](https://cloud.google.com/security/compliance)
**Industry Standards:**
- Uses standard HTTPS/TLS for transport security
- Supports enterprise authentication (OAuth, API keys, custom headers)
- No vendor lock-in - switch providers without code changes
---
## 🐳 Production Deployment
### Docker
#### Simple Dockerfile
```dockerfile
FROM python:3.11-slim
# Install llmswap
RUN pip install llmswap
# Set working directory
WORKDIR /app
# Copy your application
COPY . .
# Environment variables set at runtime
ENV ANTHROPIC_API_KEY=""
ENV MCP_SERVER_URL=""
# Run your application
CMD ["python", "your_app.py"]
```
#### Multi-Stage Build (Optimized)
```dockerfile
# Build stage
FROM python:3.11-slim as builder
RUN pip install --user llmswap
# Runtime stage
FROM python:3.11-slim
COPY --from=builder /root/.local /root/.local
ENV PATH=/root/.local/bin:$PATH
WORKDIR /app
COPY . .
CMD ["python", "your_app.py"]
```
#### Docker Compose
```yaml
version: '3.8'

services:
  llmswap-app:
    build: .
    environment:
      - ANTHROPIC_API_KEY=${ANTHROPIC_API_KEY}
      - OPENAI_API_KEY=${OPENAI_API_KEY}
      - MCP_SERVER_URL=https://mcp.company.com
    networks:
      - internal
    restart: unless-stopped

networks:
  internal:
    driver: bridge
```
### Kubernetes
#### Deployment
```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: llmswap-service
  labels:
    app: llmswap
spec:
  replicas: 3
  selector:
    matchLabels:
      app: llmswap
  template:
    metadata:
      labels:
        app: llmswap
    spec:
      containers:
        - name: llmswap
          image: your-registry/llmswap-app:latest
          env:
            - name: ANTHROPIC_API_KEY
              valueFrom:
                secretKeyRef:
                  name: llm-secrets
                  key: anthropic-key
            - name: OPENAI_API_KEY
              valueFrom:
                secretKeyRef:
                  name: llm-secrets
                  key: openai-key
          resources:
            requests:
              memory: "256Mi"
              cpu: "100m"
            limits:
              memory: "512Mi"
              cpu: "500m"
          livenessProbe:
            httpGet:
              path: /health
              port: 8080
            initialDelaySeconds: 30
            periodSeconds: 10
          readinessProbe:
            httpGet:
              path: /ready
              port: 8080
            initialDelaySeconds: 5
            periodSeconds: 5
```
#### Secrets Management
```yaml
apiVersion: v1
kind: Secret
metadata:
  name: llm-secrets
type: Opaque
data:
  anthropic-key: <base64-encoded-key>
  openai-key: <base64-encoded-key>
```
```bash
# Create secrets from file
kubectl create secret generic llm-secrets \
  --from-literal=anthropic-key=$ANTHROPIC_API_KEY \
  --from-literal=openai-key=$OPENAI_API_KEY
```
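Note that the `data` values in the Secret manifest above are base64-encoded, not encrypted (the `kubectl create secret` command does the encoding for you). The encoding can be produced and verified like this, with a hypothetical example value:

```python
import base64

# Encode a key for a Secret's `data:` field (example value, not a real key)
encoded = base64.b64encode(b"sk-ant-example").decode()
print(encoded)

# Kubernetes decodes it back before injecting it into the container
assert base64.b64decode(encoded) == b"sk-ant-example"
```

Because base64 is reversible, treat Secret manifests as sensitive files: keep them out of version control or use a sealed-secrets/external-secrets workflow.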
#### Service
```yaml
apiVersion: v1
kind: Service
metadata:
  name: llmswap-service
spec:
  selector:
    app: llmswap
  ports:
    - protocol: TCP
      port: 80
      targetPort: 8080
  type: ClusterIP
```
#### ConfigMap (MCP Configuration)
```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: mcp-config
data:
  mcp-servers.json: |
    {
      "internal-api": {
        "transport": "sse",
        "url": "https://mcp.company.com/events"
      },
      "crm-system": {
        "transport": "http",
        "url": "https://crm-api.company.com/mcp"
      }
    }
```
### Environment Variables Reference
```bash
# LLM Provider API Keys
ANTHROPIC_API_KEY=sk-ant-...
OPENAI_API_KEY=sk-...
GEMINI_API_KEY=...
GROQ_API_KEY=gsk_...
XAI_API_KEY=xai-...
# MCP Configuration
MCP_SERVER_URL=https://mcp.company.com
MCP_AUTH_TOKEN=your-token
# Optional: Override defaults
LLMSWAP_DEFAULT_PROVIDER=anthropic
LLMSWAP_TIMEOUT=30
LLMSWAP_LOG_LEVEL=INFO
```
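Application code typically resolves these variables with sensible defaults. A minimal stdlib sketch (the variable names follow the reference above; the fallback values are assumptions for illustration):

```python
import os

# Resolve optional overrides, falling back to defaults when unset
config = {
    "provider": os.getenv("LLMSWAP_DEFAULT_PROVIDER", "anthropic"),
    "timeout": int(os.getenv("LLMSWAP_TIMEOUT", "30")),
    "log_level": os.getenv("LLMSWAP_LOG_LEVEL", "INFO"),
}
print(config["timeout"])  # → 30 when LLMSWAP_TIMEOUT is unset
```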
### Health Checks
```python
# your_app.py
from flask import Flask, jsonify
from llmswap import LLMClient

app = Flask(__name__)
client = LLMClient(provider="anthropic")

@app.route('/health')
def health():
    """Kubernetes liveness probe"""
    return jsonify({"status": "healthy"}), 200

@app.route('/ready')
def ready():
    """Kubernetes readiness probe"""
    try:
        # Check if LLM provider is accessible
        client.chat("test", max_tokens=1)
        return jsonify({"status": "ready"}), 200
    except Exception as e:
        return jsonify({"status": "not ready", "error": str(e)}), 503

if __name__ == '__main__':
    app.run(host='0.0.0.0', port=8080)
```
### Monitoring & Observability
#### Prometheus Metrics (Example)
```python
from prometheus_client import Counter, Histogram, start_http_server
from llmswap import LLMClient

# Metrics
llm_requests = Counter('llm_requests_total', 'Total LLM requests', ['provider'])
llm_latency = Histogram('llm_request_duration_seconds', 'LLM request latency', ['provider'])
llm_errors = Counter('llm_errors_total', 'Total LLM errors', ['provider', 'error_type'])

# Start metrics endpoint
start_http_server(9090)

# Instrument your calls
client = LLMClient(provider="anthropic")

with llm_latency.labels(provider="anthropic").time():
    try:
        response = client.chat("query")
        llm_requests.labels(provider="anthropic").inc()
    except Exception as e:
        llm_errors.labels(provider="anthropic", error_type=type(e).__name__).inc()
        raise
```
---
## 🏆 Production-Validated with LMArena Top Models
**Every model in LLMSwap's defaults comes from LMArena's top performers:**
All 10 providers ship with carefully selected default models based on LMArena rankings and real-world production testing. We track arena performance and update defaults to ensure you're always using validated, battle-tested models.
| Provider | Default Model | Arena Status | Why We Chose It |
|----------|---------------|--------------|-----------------|
| **Anthropic** | claude-sonnet-4-6 | Default Feb 2026 | Latest Sonnet — improved coding & computer use |
| **xAI** | grok-4-0709 | Top 5 Overall | Advanced reasoning, real-time data access |
| **Gemini** | gemini-2.0-flash-exp | Top 10 | Lightning-fast, multimodal, cutting-edge |
| **OpenAI** | gpt-4o-mini | Cost Leader | Best price/performance ratio |
| **Cohere** | command-r-08-2024 | Top RAG | Enterprise-grade retrieval-augmented generation |
| **Perplexity** | sonar | Web Search | Real-time web-connected AI with citations |
| **Groq** | llama-3.1-8b-instant | Speed King | 840+ tokens/second ultra-fast inference |
| **Sarvam** | sarvam-m | Multilingual | 24B params, best for 10 Indian languages |
| **Watsonx** | granite-3-8b-instruct | Enterprise | IBM's production-grade AI for business |
| **Ollama** | granite-code:8b | Local AI | Privacy-first, runs on your hardware |
**✅ Battle-tested with real API calls** - Every provider validated in production, not simulated tests.
**✅ Weekly model updates** - We monitor LMArena rankings and deprecation notices to keep defaults current.
**✅ Zero lock-in** - Don't like our defaults? Override with any model: `LLMClient(model="gpt-5")` or `llmswap config set provider.models.openai gpt-5`
---
## 🔓 Use Any Model Your Provider Supports (Zero-Wait Model Support)
Here's something cool: LLMSwap doesn't restrict which models you can use. When GPT-5 or Gemini 2.5 Pro drops tomorrow, you can start using it immediately. No waiting for us to update anything.
**How?** We use pass-through architecture. Whatever model name you pass goes directly to your provider's API. We don't gatekeep.
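Conceptually, pass-through means the request builder never validates the model name against a local allowlist. A simplified sketch of the idea (an illustration, not llmswap's actual internals):

```python
def build_request(model: str, prompt: str) -> dict:
    """Forward the model string verbatim; the provider's API decides if it exists."""
    return {
        "model": model,  # no allowlist, no local validation
        "messages": [{"role": "user", "content": prompt}],
    }

# A model released tomorrow works today, because nothing here needs updating:
payload = build_request("gpt-6", "hello")
print(payload["model"])  # → gpt-6
```

If the model name is wrong, the error comes back from the provider's API rather than from the library, which is exactly what makes day-one support possible.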
### CLI Examples:
```bash
# Use any OpenAI model (even ones that don't exist yet)
llmswap chat --provider openai --model gpt-5
llmswap chat --provider openai --model o3-mini
# Use any Anthropic model
llmswap chat --provider anthropic --model claude-sonnet-4-6
llmswap chat --provider anthropic --model claude-opus-4-6
# Use any Gemini model
llmswap chat --provider gemini --model gemini-2-5-pro
llmswap chat --provider gemini --model gemini-ultra-2
# Set as default so you don't have to type it every time
llmswap config set provider.models.openai gpt-5
llmswap config set provider.models.anthropic claude-opus-4
```
### Python SDK:
```python
from llmswap import LLMClient
# Use whatever model your provider offers
client = LLMClient(provider="openai", model="gpt-5")
client = LLMClient(provider="anthropic", model="claude-opus-4")
client = LLMClient(provider="gemini", model="gemini-2-5-pro")
# Model just released? Use it right now
client = LLMClient(provider="openai", model="gpt-6") # works!
```
**The point:** You're not limited to what we've documented. If your provider supports it, llmswap supports it.
## 🆚 LLMSwap vs Single-Provider Tools
### For Python Developers Building Apps:
| Your Need | Single-Provider SDKs | LLMSwap SDK |
|-----------|---------------------|-------------|
| Build chatbot/app | Import `openai` library (locked in) | Import `llmswap` (works with any provider) |
| Switch providers | Rewrite all API calls | Change 1 line: `provider="anthropic"` |
| Try different models | Sign up, new SDK, refactor code | Just change config, same code |
| Use new models | Wait for SDK update | Works immediately (pass-through) |
| Cost optimization | Manual implementation | Built-in caching (50-90% savings) |
| Use multiple providers | Maintain separate codebases | One codebase, switch dynamically |
### For Developers Using Terminal:
| Your Need | Vendor CLIs | LLMSwap CLI |
|-----------|-------------|-------------|
| Have Claude subscription | Install Claude Code (Claude only) | Use llmswap (works with Claude) |
| Have OpenAI subscription | Build your own scripts | Use llmswap (works with OpenAI) |
| Have multiple subscriptions | Install 3+ different CLIs | One CLI for all subscriptions |
| New model launches | Wait for CLI update | Use it same day (pass-through) |
| Want AI to teach you | Not available | Built-in Eklavya mentorship |
| Switch providers mid-chat | Can't - locked in | `/switch anthropic` command |
**The Bottom Line:**
- **Building an app?** Use LLMSwap SDK - no vendor lock-in
- **Using terminal?** Use LLMSwap CLI - works with your existing subscriptions
- **Both?** Perfect - it's the same tool!
---
## 🔧 LLMSwap vs MCP Alternatives
**The only multi-provider MCP client with natural language interface:**
| Feature | LLMSwap | langchain-mcp-tools | mcp-use | Anthropic SDK |
|---------|---------|---------------------|---------|---------------|
| **Natural Language** | ✅ Ask in plain English | ❌ Manual JSON | ❌ Manual JSON | ❌ Manual JSON |
| **Multi-Provider MCP** | ✅ 11 providers | ❌ LangChain only | ⚠️ Limited | ❌ Claude only |
| **Latest Models** | ✅ Day-one support (Dec '24) | ⚠️ Delayed updates | ⚠️ Delayed updates | ✅ Claude only |
| **Beautiful CLI** | ✅ Bordered UI | ❌ No CLI | ❌ Basic | ❌ No CLI |
| **Setup Time** | 🟢 30 seconds | 🔴 Hours (LangChain) | 🟡 Medium | 🟢 Fast |
| **Production Ready** | ✅ Circuit breakers | ❌ DIY | ❌ DIY | ⚠️ Limited |
| **Cost Optimization** | ✅ Auto caching | ❌ Manual | ❌ Manual | ❌ No |
| **Learning Curve** | 🟢 10 lines | 🔴 Complex | 🟡 Medium | 🟢 Easy |
| **Remote MCP** | ✅ SSE/HTTP | ⚠️ Limited | ⚠️ Limited | ✅ Yes |
| **Zero Lock-in** | ✅ Switch providers | ❌ Locked to LangChain | ⚠️ Limited | ❌ Claude only |
**Why LLMSwap for MCP?**
- **Natural language**: Just ask "List all PDFs" - no JSON schemas
- **Universal**: Works with 11 providers, not just one
- **Production-ready**: Circuit breakers, health checks, monitoring built-in
- **Latest models**: Claude 3.5 Haiku, Gemini 2.0, o1 work day-one
---
```bash
# 🆕 NEW v5.1.0: Workspace System - Project Memory That Persists
llmswap workspace init
# Creates .llmswap/ directory with:
# - workspace.json (project metadata)
# - context.md (editable project description)
# - learnings.md (auto-tracked learning journal)
# - decisions.md (architecture decision log)
llmswap workspace list # View all your workspaces
llmswap workspace info # Show current workspace statistics
llmswap workspace journal # View learning journal
llmswap workspace decisions # View decision log
llmswap workspace context # Edit project context
# 🆕 NEW v5.1.0: Context-Aware Mentorship
# AI mentor automatically loads project context, past learnings, and decisions
llmswap chat
# Mentor knows: your tech stack, what you've learned, decisions made
# 🆕 NEW v5.0: Age-Appropriate AI Explanations
llmswap ask "What is Docker?" --age 10
# Output: "Docker is like a magic lunch box! 🥪 When your mom packs..."
llmswap ask "What is blockchain?" --audience "business owner"
# Output: "Think of blockchain like your business ledger system..."
# 🆕 NEW v5.0: Teaching Personas & Personalization
llmswap ask "Explain Python classes" --teach --mentor developer --alias "Sarah"
# Output: "[Sarah - Senior Developer]: Here's how we handle classes in production..."
# 🆕 NEW v5.0: Conversational Chat with Provider Switching
llmswap chat --age 25 --mentor tutor
# In chat: /switch anthropic # Switch mid-conversation
# In chat: /provider # See current provider
# Commands: /help, /switch, /clear, /stats, /quit
# 🆕 NEW v5.0: Provider Management & Configuration
llmswap providers # View all providers and their status
llmswap config set provider.models.cohere command-r-plus-08-2024
llmswap config set provider.default anthropic
llmswap config show
# Code Generation (GitHub Copilot CLI Alternative)
llmswap generate "sort files by size in reverse order"
# Output: du -sh * | sort -hr
llmswap generate "Python function to read JSON with error handling" --language python
# Output: Complete Python function with try/catch blocks
# Advanced Log Analysis with AI
llmswap logs --analyze /var/log/app.log --since "2h ago"
llmswap logs --request-id REQ-12345 --correlate
# Code Review & Debugging
llmswap review app.py --focus security
llmswap debug --error "IndexError: list index out of range"
```
```python
# ❌ Problem: Vendor Lock-in
import openai # Locked to OpenAI forever
client = openai.Client(api_key="sk-...")
```
"Development Status :: 5 - Production/Stable",
"Environment :: Console",
"Framework :: AsyncIO",
"Intended Audience :: Developers",
"Intended Audience :: Education",
"Intended Audience :: Information Technology",
"Intended Audience :: Science/Research",
"License :: OSI Approved :: MIT License",
"Operating System :: OS Independent",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.9",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: 3.13",
"Topic :: Communications :: Chat",
"Topic :: Internet :: WWW/HTTP :: Dynamic Content",
"Topic :: Scientific/Engineering :: Artificial Intelligence",
"Topic :: Software Development :: Libraries :: Python Modules",
"Topic :: System :: Distributed Computing",
"Topic :: Text Processing :: Linguistic",
"Typing :: Typed"
] | [] | null | null | >=3.9 | [] | [] | [] | [
"aiofiles>=23.0.0",
"aiohttp>=3.8.0",
"anthropic>=0.3.0",
"cohere>=5.16.0",
"flask-cors>=4.0.0",
"flask>=3.0.0",
"google-generativeai>=0.3.0",
"groq>=0.4.0",
"httpx>=0.24.0",
"openai>=1.0.0",
"python-dotenv>=0.19.0",
"requests>=2.25.0",
"anthropic>=0.3.0; extra == \"all\"",
"cohere>=5.16.0; extra == \"all\"",
"flask-cors>=4.0.0; extra == \"all\"",
"flask>=3.0.0; extra == \"all\"",
"google-generativeai>=0.3.0; extra == \"all\"",
"groq>=0.4.0; extra == \"all\"",
"ibm-watsonx-ai>=0.0.5; extra == \"all\"",
"openai>=1.0.0; extra == \"all\"",
"black>=22.0.0; extra == \"dev\"",
"isort>=5.10.0; extra == \"dev\"",
"mypy>=0.991; extra == \"dev\"",
"pytest-cov>=4.0.0; extra == \"dev\"",
"pytest>=7.0.0; extra == \"dev\"",
"ibm-watsonx-ai>=0.0.5; extra == \"watsonx\""
] | [] | [] | [] | [
"Homepage, https://github.com/sreenathmmenon/llmswap",
"Documentation, https://github.com/sreenathmmenon/llmswap#readme",
"Repository, https://github.com/sreenathmmenon/llmswap",
"Issues, https://github.com/sreenathmmenon/llmswap/issues",
"Changelog, https://github.com/sreenathmmenon/llmswap/blob/main/CHANGELOG.md"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-21T02:51:15.471233 | llmswap-5.5.6.tar.gz | 220,552 | 70/60/7914d5e315a311a199d910c04e6c0bedabfeecc62aefba2ab32d875897e6/llmswap-5.5.6.tar.gz | source | sdist | null | false | 6b6f48d330420255d0133f2bea9fabd6 | 16173f5abe9f4b3f39342c3a3dfd693599df8d2a1bac97abacff4f21bbd20162 | 70607914d5e315a311a199d910c04e6c0bedabfeecc62aefba2ab32d875897e6 | null | [
"LICENSE"
] | 259 |
2.4 | cfdb-ingest | 0.1.5 | File format conversions to cfdb | # cfdb-ingest
<p align="center">
<em>Convert meteorological model output to cfdb with standardized CF conventions</em>
</p>
[](https://github.com/mullenkamp/cfdb-ingest/actions)
[](https://codecov.io/gh/mullenkamp/cfdb-ingest)
[](https://badge.fury.io/py/cfdb-ingest)
---
**Documentation**: <a href="https://mullenkamp.github.io/cfdb-ingest/" target="_blank">https://mullenkamp.github.io/cfdb-ingest/</a>
**Source Code**: <a href="https://github.com/mullenkamp/cfdb-ingest" target="_blank">https://github.com/mullenkamp/cfdb-ingest</a>
---
## Table of Contents
- [Overview](#overview)
- [Supported Formats](#supported-formats)
- [WRF Variables](#wrf-variables)
- [Installation](#installation)
- [Python API](#python-api)
- [CLI](#cli)
- [Development](#development)
- [License](#license)
## Overview
cfdb-ingest converts meteorological file formats (netCDF4/HDF5, GRIB2, etc.) from various model outputs into [cfdb](https://github.com/mullenkamp/cfdb). It standardizes variable names and attributes to be consistent with [CF conventions](https://cfconventions.org/), making it straightforward to work with datasets from different sources through a single interface.
Key features:
- **Automatic variable mapping** -- source variable names are translated to CF-standard names with proper metadata (standard_name, units, encoding)
- **Wind rotation** -- grid-relative wind components are rotated to earth-relative using COSALPHA/SINALPHA
- **3D level interpolation** -- eta-level variables are interpolated to user-specified height levels above ground
- **Variable merging** -- surface and level-interpolated variants of the same quantity (e.g. T2 at 2 m and T at arbitrary heights) are merged into a single output variable
- **Spatial and temporal filtering** -- subset by bounding box (WGS84) and/or date range before writing
- **Multi-file support** -- seamlessly spans multiple input files, including cross-file precipitation accumulation
- **Configurable chunking** -- tune output chunk shapes for different access patterns
## Supported Formats
| Source | Class | CRS Projections |
|--------|-------|-----------------|
| WRF (wrfout) | `WrfIngest` | Lambert Conformal Conic, Polar Stereographic, Mercator, Lat-Lon |
## WRF Variables
Surface variables (fixed height above ground):
| Key | cfdb Name | Height | Source Vars | Transform |
|-----|-----------|--------|-------------|-----------|
| `T2` | `air_temp` | 2 m | T2 | direct |
| `PSFC` | `surface_pressure` | 0 m | PSFC | direct |
| `Q2` | `specific_humidity` | 2 m | Q2 | direct |
| `RAIN` | `precip` | 0 m | RAINNC, RAINC | accumulation increment |
| `WIND10` | `wind_speed` | 10 m | U10, V10 | wind rotation |
| `WIND_DIR10` | `wind_direction` | 10 m | U10, V10 | wind rotation |
| `U10` | `u_wind` | 10 m | U10, V10 | wind rotation |
| `V10` | `v_wind` | 10 m | U10, V10 | wind rotation |
| `TSK` | `soil_temp` | 0 m | TSK | direct |
| `SWDOWN` | `shortwave_radiation` | 0 m | SWDOWN | direct |
| `GLW` | `longwave_radiation` | 0 m | GLW | direct |
| `SNOWH` | `snow_depth` | 0 m | SNOWH | direct |
| `SLP` | `mslp` | 0 m | PSFC, T2, HGT | hypsometric reduction |
3D level-interpolated variables (interpolated to user-specified `target_levels`):
| Key | cfdb Name | Source Vars | Transform |
|-----|-----------|-------------|-----------|
| `T` | `air_temp` | T, P, PB, PH, PHB | potential to actual temperature |
| `WIND` | `wind_speed` | U, V, PH, PHB | unstagger + rotation |
| `WIND_DIR` | `wind_direction` | U, V, PH, PHB | unstagger + rotation |
| `U` | `u_wind` | U, V, PH, PHB | unstagger + rotation |
| `V` | `v_wind` | U, V, PH, PHB | unstagger + rotation |
| `Q` | `specific_humidity` | QVAPOR, PH, PHB | mixing ratio to specific humidity |
## Installation
Requires Python >= 3.11.
```bash
pip install cfdb-ingest
```
## Python API
### Basic conversion
```python
from cfdb_ingest import WrfIngest
wrf = WrfIngest('wrfout_d01_2023-02-12_00:00:00.nc')
# Convert selected variables for a time window
wrf.convert(
cfdb_path='output.cfdb',
variables=['T2', 'WIND10', 'precip'],
start_date='2023-02-12T06:00',
end_date='2023-02-12T18:00',
)
```
### Multi-file input
```python
wrf = WrfIngest([
'wrfout_d01_2023-02-12_00:00:00.nc',
'wrfout_d01_2023-02-13_00:00:00.nc',
])
# All timesteps across both files are merged automatically
wrf.convert(cfdb_path='output.cfdb', variables=['T2'])
```
### Spatial subsetting with a bounding box
```python
wrf.convert(
cfdb_path='output.cfdb',
variables=['T2'],
bbox=(165.0, -47.0, 175.0, -40.0), # (min_lon, min_lat, max_lon, max_lat)
)
```
### 3D level interpolation
```python
# Interpolate 3D temperature and wind to specific heights above ground
wrf.convert(
cfdb_path='output.cfdb',
variables=['T', 'WIND'],
target_levels=[100.0, 500.0, 1000.0, 2000.0],
bbox=(165.0, -47.0, 175.0, -40.0),
)
```
### Merging surface and 3D variables
Variables sharing a cfdb name are automatically merged. For example, `T2` (2 m) and `T` (levels) both map to `air_temp` and produce a single output variable spanning all heights:
```python
wrf.convert(
cfdb_path='output.cfdb',
variables=['T2', 'T'],
target_levels=[100.0, 500.0],
)
# Output height coordinate: [2.0, 100.0, 500.0]
```
### Custom chunk shape
The output chunk shape defaults to `(1, 1, ny, nx)` (one full spatial slab per timestep per height level). Override it for different access patterns:
```python
wrf.convert(
cfdb_path='output.cfdb',
variables=['T2'],
chunk_shape=(1, 1, 50, 50), # (time, z, y, x)
)
```
### Inspecting metadata before conversion
```python
wrf = WrfIngest('wrfout_d01_2023-02-12_00:00:00.nc')
wrf.crs # pyproj.CRS
wrf.times # numpy datetime64 array
wrf.x, wrf.y # 1D projected coordinate arrays
wrf.variables # dict of available variable mappings
wrf.bbox_geographic # (min_lon, min_lat, max_lon, max_lat)
```
### Variable name resolution
`variables` accepts mapping keys (`T2`), source variable names (`RAINNC`), or cfdb names (`air_temp`). When a cfdb name maps to multiple keys, all are included:
```python
wrf.resolve_variables(['air_temp']) # ['T2', 'T']
wrf.resolve_variables(['RAINNC']) # ['RAIN']
wrf.resolve_variables(None) # all available keys
```
## CLI
cfdb-ingest provides a `cfdb-ingest` command with a `wrf` subcommand.
### Basic usage
```bash
cfdb-ingest wrf wrfout_d01_2023-02-12_00:00:00.nc output.cfdb \
-v T2,WIND10 \
-s 2023-02-12T06:00 \
-e 2023-02-12T18:00
```
### All options
```
cfdb-ingest wrf [OPTIONS] INPUT_PATHS... CFDB_PATH
```
| Option | Short | Description |
|--------|-------|-------------|
| `--variables` | `-v` | Comma-separated variable names |
| `--start-date` | `-s` | Start date (ISO format) |
| `--end-date` | `-e` | End date (ISO format) |
| `--bbox` | `-b` | Bounding box: `min_lon,min_lat,max_lon,max_lat` |
| `--target-levels` | `-l` | Comma-separated height levels in meters |
| `--chunk-shape` | `-c` | Output chunk shape: `time,z,y,x` (e.g. `1,1,50,50`) |
| `--max-mem` | | Read buffer size in bytes (default: 128 MiB) |
| `--compression` | | Compression algorithm: `zstd` or `lz4` |
### Examples
```bash
# Convert with spatial subset
cfdb-ingest wrf wrfout_d01_*.nc output.cfdb \
-v T2 -b 165.0,-47.0,175.0,-40.0
# 3D temperature at specific height levels
cfdb-ingest wrf wrfout_d01_*.nc output.cfdb \
-v T -l 100,500,1000,2000 -b 165.0,-47.0,175.0,-40.0
# Custom chunk shape for time-series access patterns
cfdb-ingest wrf wrfout_d01_*.nc output.cfdb \
-v T2,WIND10 -c 24,1,50,50
```
## Development
### Setup environment
We use [UV](https://docs.astral.sh/uv/) to manage the development environment and production build.
```bash
uv sync
```
### Run tests
```bash
uv run pytest
```
### Lint and format
```bash
uv run lint
```
## License
This project is licensed under the terms of the Apache Software License 2.0.
| text/markdown | null | mullenkamp <mullenkamp1@gmail.com> | null | null | null | null | [
"Programming Language :: Python :: 3 :: Only"
] | [] | null | null | >=3.10 | [] | [] | [] | [
"cfdb>=0.3.5",
"geointerp",
"h5py",
"numpy",
"pyproj",
"rechunkit",
"typer"
] | [] | [] | [] | [
"Documentation, https://mullenkamp.github.io/cfdb-ingest/",
"Source, https://github.com/mullenkamp/cfdb-ingest"
] | uv/0.8.7 | 2026-02-21T02:50:12.176151 | cfdb_ingest-0.1.5.tar.gz | 18,565 | bc/aa/707f0739306ae27da8bfa80d732d9bf09d8b2c0e31b3e80f429727aaba60/cfdb_ingest-0.1.5.tar.gz | source | sdist | null | false | 66a606f1012e8cd85abfaa1c0fe87446 | 3f07af5c30b26748952f4d95e2a14c0db64e53332d8837c8ff73647933119c0b | bcaa707f0739306ae27da8bfa80d732d9bf09d8b2c0e31b3e80f429727aaba60 | null | [
"LICENSE"
] | 233 |
2.4 | vercel-cli | 50.22.1 | Vercel CLI packaged for Python (bundled Node.js, vendored npm) | # Python package wrapper for Vercel CLI
[](https://github.com/nuage-studio/vercel-cli-python/actions/workflows/test.yml)
[](https://codecov.io/gh/nuage-studio/vercel-cli-python)
[](https://www.python.org/)
**vercel-cli** packages the npm `vercel` CLI for Python environments. It vendors the npm package under `vercel_cli/vendor/` and uses the bundled Node.js runtime provided by `nodejs-wheel-binaries`, so you can run `vercel` without installing Node.js.
It provides both a command-line interface and a Python API that other libraries can use programmatically instead of resorting to subprocess calls.
## Quick start
- **Install**:
```bash
pip install vercel-cli
```
- **Use** (same arguments and behavior as the official npm CLI):
```bash
vercel --version
vercel login
vercel deploy
```
- **Use programmatically in Python** (for libraries that depend on this package):
```python
from vercel_cli import run_vercel
# Deploy current directory
exit_code = run_vercel(["deploy"])
# Deploy specific directory with custom environment
exit_code = run_vercel(
["deploy", "--prod"],
cwd="/path/to/project",
env={"VERCEL_TOKEN": "my-token"}
)
# Check version
exit_code = run_vercel(["--version"])
```
## What this provides
- **No system Node.js required**: The CLI runs via the Node binary from `nodejs-wheel-binaries` (currently Node 22.x).
- **Vendored npm package**: The `vercel` npm package (production deps only) is checked into `vercel_cli/vendor/`.
- **Console entrypoint**: The `vercel` command maps to `vercel_cli.run:main`, which executes `vercel_cli/vendor/dist/vc.js` with the bundled Node runtime.
- **Python API**: The `run_vercel()` function allows other Python libraries to use Vercel CLI programmatically without subprocess calls, with secure environment variable handling.
## Requirements
- Python 3.8+
- macOS, Linux, or Windows supported by the Node wheels
## How it works
At runtime, `vercel_cli.run` locates `vercel_cli/vendor/dist/vc.js` and launches it via the Node executable exposed by `nodejs_wheel_binaries`. CLI arguments are passed through unchanged, while environment variables are handled securely.
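The pass-through and env-filtering behavior described above can be sketched as two small pure functions. This is an illustrative sketch, not the package's actual internals: the helper names `build_vercel_command` and `build_env` are hypothetical, and the real entrypoint (`vercel_cli.run:main`) resolves the Node binary via `nodejs_wheel_binaries` rather than taking it as an argument.

```python
import os

def build_vercel_command(node_exe, vc_js, args):
    # CLI arguments are appended unchanged after the vendored entry script
    return [node_exe, vc_js, *args]

def build_env(extra_env=None):
    # Only explicitly provided variables are forwarded,
    # plus PATH so the Node runtime can resolve its own tooling
    env = {"PATH": os.environ.get("PATH", "")}
    if extra_env:
        env.update(extra_env)
    return env

cmd = build_vercel_command("/path/to/node", "vendor/dist/vc.js", ["deploy", "--prod"])
# cmd == ["/path/to/node", "vendor/dist/vc.js", "deploy", "--prod"]
```

The filtered-env approach mirrors the security model described below: nothing from the parent process leaks into the CLI unless you pass it in.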
## Programmatic usage
When using this package as a dependency in other Python libraries, you can call Vercel CLI commands directly without using subprocess:
```python
from vercel_cli import run_vercel
import tempfile
import os
def deploy_my_app(source_dir: str, token: str) -> bool:
"""Deploy an application to Vercel programmatically."""
with tempfile.TemporaryDirectory() as temp_dir:
# Copy your app to temp directory and modify as needed
# ...
# Deploy with custom environment
env = {
"VERCEL_TOKEN": token,
"NODE_ENV": "production"
}
exit_code = run_vercel(
["deploy", "--prod", "--yes"],
cwd=temp_dir,
env=env
)
return exit_code == 0
# Usage
success = deploy_my_app("./my-app", "my-vercel-token")
```
The `run_vercel()` function accepts:
- `args`: List of CLI arguments (same as command line)
- `cwd`: Working directory for the command
- `env`: Environment variables to set (passed directly to the Node.js runtime)
## Security considerations
When using the `env` parameter, only explicitly provided environment variables are passed to the Vercel CLI. This prevents accidental leakage of sensitive environment variables from your Python process while still allowing you to set necessary variables like `VERCEL_TOKEN`.
Example with secure token handling:
```python
from vercel_cli import run_vercel
# Secure: only VERCEL_TOKEN is passed to the CLI
exit_code = run_vercel(
["deploy", "--prod"],
env={"VERCEL_TOKEN": "your-secure-token"}
)
```
This approach avoids common security pitfalls of subprocess environment variable handling.
## Updating the vendored Vercel CLI (maintainers)
There are two ways to update the vendored npm package under `vercel_cli/vendor/`:
1) Manual update to a specific version
```bash
# Using the console script defined in pyproject.toml
uv run update-vendor 46.0.2
# or equivalently
uv run python scripts/update_vendor.py 46.0.2
```
This will:
- fetch `vercel@46.0.2` from npm,
- verify integrity/shasum,
- install production dependencies with `npm install --omit=dev`, and
- copy the result into `vercel_cli/vendor/`.
2) Automatic check-and-release (GitHub Actions)
The workflow `.github/workflows/release.yml` checks npm `latest` and, if newer than the vendored version, will:
- vendor the new version using `scripts/check_and_update.py`,
- commit the changes and create a tag `v<version>`,
- build distributions, and
- publish to PyPI (requires `PYPI_API_TOKEN`).
## Versioning
The Python package version is derived dynamically from the vendored `package.json` via Hatch’s version source:
```toml
[tool.hatch.version]
path = "vercel_cli/vendor/package.json"
pattern = '"version"\s*:\s*"(?P<version>[^\\"]+)"'
```
## Development
- Build backend: `hatchling`
- Dependency management: `uv` (see `uv.lock`)
- Tests: `pytest` with coverage in `tests/`
- Lint/format: `ruff`; type-check: `basedpyright`
Common commands (using `uv`):
```bash
# Run tests with coverage
uv run pytest --cov=vercel_cli --cov-report=term-missing
# Lint and format
uv run ruff check .
uv run ruff format .
# Type-check
uv run basedpyright
# Build wheel and sdist
uv run --with build python -m build
```
| text/markdown | Nuage | null | Nuage | null | null | cli, deployment, edge, functions, nodejs, npm, serverless, vercel, wrapper | [
"Development Status :: 5 - Production/Stable",
"Environment :: Console",
"Intended Audience :: Developers",
"Operating System :: OS Independent",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3 :: Only",
"Programming Language :: Python :: 3.8",
"Programming Language :: Python :: 3.9",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: 3.13",
"Topic :: Software Development :: Build Tools",
"Topic :: System :: Systems Administration",
"Topic :: Utilities"
] | [] | null | null | >=3.8 | [] | [] | [] | [
"nodejs-wheel-binaries==22.16.0"
] | [] | [] | [] | [
"Homepage, https://github.com/nuage-studio/vercel-cli-python",
"Repository, https://github.com/nuage-studio/vercel-cli-python",
"Issues, https://github.com/nuage-studio/vercel-cli-python/issues",
"Documentation, https://github.com/nuage-studio/vercel-cli-python#readme"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-21T02:49:57.616532 | vercel_cli-50.22.1.tar.gz | 3,734,351 | c0/17/85f204bc5683f551bcdacf844c57d4cce6fc7d51ea32acc624cb5e241cdd/vercel_cli-50.22.1.tar.gz | source | sdist | null | false | 1363289faff9b547456596ff60f966f9 | d76819ea44184716afb168332f53ab6967da0f9b2f9e5c5171cd2f3894829b63 | c01785f204bc5683f551bcdacf844c57d4cce6fc7d51ea32acc624cb5e241cdd | null | [] | 247 |
2.4 | PyTurboJPEG | 2.2.0 | A Python wrapper of libjpeg-turbo for decoding and encoding JPEG image. | # PyTurboJPEG
A Python wrapper for libjpeg-turbo that enables efficient JPEG image decoding and encoding.
[](https://pypi.org/project/pyturbojpeg/)

[](https://pypistats.org/packages/pyturbojpeg)
[](https://github.com/lilohuang/PyTurboJPEG/blob/master/LICENSE)
## Prerequisites
- [libjpeg-turbo](https://github.com/libjpeg-turbo/libjpeg-turbo/releases) **3.0 or later** (required for PyTurboJPEG 2.0+)
- [numpy](https://github.com/numpy/numpy)
**Important:** PyTurboJPEG 2.0+ requires libjpeg-turbo 3.0 or later as it uses the new function-based TurboJPEG 3 API. For libjpeg-turbo 2.x compatibility, please use PyTurboJPEG 1.x.
## Installation
### macOS
```bash
brew install jpeg-turbo
pip install -U git+https://github.com/lilohuang/PyTurboJPEG.git
```
### Windows
1. Download the [libjpeg-turbo official installer](https://github.com/libjpeg-turbo/libjpeg-turbo/releases)
2. Install PyTurboJPEG:
```bash
pip install -U git+https://github.com/lilohuang/PyTurboJPEG.git
```
### Linux
1. Download the [libjpeg-turbo official installer](https://github.com/libjpeg-turbo/libjpeg-turbo/releases)
2. Install PyTurboJPEG:
```bash
pip install -U git+https://github.com/lilohuang/PyTurboJPEG.git
```
## Basic Usage
### Initialization
```python
from turbojpeg import TurboJPEG
# Use default library installation
jpeg = TurboJPEG()
# Or specify library path explicitly
# jpeg = TurboJPEG(r'D:\turbojpeg.dll') # Windows
# jpeg = TurboJPEG('/usr/lib64/libturbojpeg.so') # Linux
# jpeg = TurboJPEG('/usr/local/lib/libturbojpeg.dylib') # macOS
```
### Decoding
```python
import cv2
from turbojpeg import TurboJPEG, TJPF_GRAY, TJFLAG_FASTUPSAMPLE, TJFLAG_FASTDCT
jpeg = TurboJPEG()
# Basic decoding to BGR array
with open('input.jpg', 'rb') as f:
bgr_array = jpeg.decode(f.read())
cv2.imshow('bgr_array', bgr_array)
cv2.waitKey(0)
# Fast decoding (lower accuracy, higher speed)
with open('input.jpg', 'rb') as f:
bgr_array = jpeg.decode(f.read(), flags=TJFLAG_FASTUPSAMPLE|TJFLAG_FASTDCT)
# Decode with direct rescaling (1/2 size)
with open('input.jpg', 'rb') as f:
bgr_array_half = jpeg.decode(f.read(), scaling_factor=(1, 2))
# Get available scaling factors
scaling_factors = jpeg.scaling_factors
# Decode to grayscale
with open('input.jpg', 'rb') as f:
gray_array = jpeg.decode(f.read(), pixel_format=TJPF_GRAY)
```
### Decoding Header Information
```python
# Get image properties without full decoding (backward compatible)
with open('input.jpg', 'rb') as f:
width, height, jpeg_subsample, jpeg_colorspace = jpeg.decode_header(f.read())
# Get precision to select appropriate decode function
with open('input.jpg', 'rb') as f:
jpeg_data = f.read()
width, height, jpeg_subsample, jpeg_colorspace, precision = jpeg.decode_header(jpeg_data, return_precision=True)
# Use precision to select appropriate decode function
if precision == 8:
img = jpeg.decode(jpeg_data)
elif precision == 12:
img = jpeg.decode_12bit(jpeg_data)
elif precision == 16:
img = jpeg.decode_16bit(jpeg_data)
```
### YUV Decoding
```python
# Decode to YUV buffer
with open('input.jpg', 'rb') as f:
buffer_array, plane_sizes = jpeg.decode_to_yuv(f.read())
# Decode to YUV planes
with open('input.jpg', 'rb') as f:
planes = jpeg.decode_to_yuv_planes(f.read())
```
### Encoding
```python
from turbojpeg import TJSAMP_GRAY, TJFLAG_PROGRESSIVE
# Basic encoding with default settings
with open('output.jpg', 'wb') as f:
f.write(jpeg.encode(bgr_array))
# Encode with grayscale subsample
with open('output_gray.jpg', 'wb') as f:
f.write(jpeg.encode(bgr_array, jpeg_subsample=TJSAMP_GRAY))
# Encode with custom quality
with open('output_quality_50.jpg', 'wb') as f:
f.write(jpeg.encode(bgr_array, quality=50))
# Encode with progressive entropy coding
with open('output_progressive.jpg', 'wb') as f:
f.write(jpeg.encode(bgr_array, quality=100, flags=TJFLAG_PROGRESSIVE))
# Encode with lossless JPEG compression
with open('output_gray.jpg', 'wb') as f:
f.write(jpeg.encode(bgr_array, lossless=True))
```
### Advanced Operations
```python
# Scale with quality (without color conversion)
with open('input.jpg', 'rb') as f:
scaled_data = jpeg.scale_with_quality(f.read(), scaling_factor=(1, 4), quality=70)
with open('scaled_output.jpg', 'wb') as f:
f.write(scaled_data)
# Lossless crop
with open('input.jpg', 'rb') as f:
cropped_data = jpeg.crop(f.read(), 8, 8, 320, 240)
with open('cropped_output.jpg', 'wb') as f:
f.write(cropped_data)
```
### In-Place Operations
```python
import numpy as np
# In-place decoding (reuse existing array)
img_array = np.empty((640, 480, 3), dtype=np.uint8)
with open('input.jpg', 'rb') as f:
result = jpeg.decode(f.read(), dst=img_array)
# result is the same as img_array: id(result) == id(img_array)
# In-place encoding (reuse existing buffer)
buffer_size = jpeg.buffer_size(img_array)
dest_buf = bytearray(buffer_size)
result, n_bytes = jpeg.encode(img_array, dst=dest_buf)
with open('output.jpg', 'wb') as f:
f.write(dest_buf[:n_bytes])
# result is the same as dest_buf: id(result) == id(dest_buf)
```
### EXIF Orientation Handling
```python
import cv2
import numpy as np
import exifread
from turbojpeg import TurboJPEG
def transpose_image(image, orientation):
"""Transpose image based on EXIF Orientation tag.
See: https://www.exif.org/Exif2-2.PDF
"""
if orientation is None:
return image
val = orientation.values[0]
if val == 1: return image
elif val == 2: return np.fliplr(image)
elif val == 3: return np.rot90(image, 2)
elif val == 4: return np.flipud(image)
elif val == 5: return np.rot90(np.flipud(image), -1)
elif val == 6: return np.rot90(image, -1)
elif val == 7: return np.rot90(np.flipud(image))
elif val == 8: return np.rot90(image)
jpeg = TurboJPEG()
with open('foobar.jpg', 'rb') as f:
# Parse EXIF orientation
orientation = exifread.process_file(f).get('Image Orientation', None)
# Decode image
f.seek(0)
image = jpeg.decode(f.read())
# Apply orientation transformation
transposed_image = transpose_image(image, orientation)
cv2.imshow('transposed_image', transposed_image)
cv2.waitKey(0)
```
### ICC Color Management Workflow
```python
import io
import numpy as np
from PIL import Image, ImageCms
from turbojpeg import TurboJPEG, TJPF_BGR
def decode_jpeg_with_color_management(jpeg_path):
"""
Decodes a JPEG and applies color management (ICC Profile to sRGB).
Args:
jpeg_path (str): Path to the input JPEG file.
Returns:
PIL.Image: The color-corrected sRGB Image object.
"""
# 1. Initialize TurboJPEG
jpeg = TurboJPEG()
with open(jpeg_path, 'rb') as f:
jpeg_data = f.read()
# 2. Get image headers and decode pixels
# Using TJPF_BGR format (OpenCV standard) for the raw buffer
width, height, _, _ = jpeg.decode_header(jpeg_data)
pixels = jpeg.decode(jpeg_data, pixel_format=TJPF_BGR)
# 3. Encapsulate into a Pillow Image object
# Key: Use 'raw' and 'BGR' decoder to correctly map BGR bytes to an RGB Image object
img = Image.frombytes('RGB', (width, height), pixels, 'raw', 'BGR')
# 4. Handle ICC Profile transformation
try:
# Extract embedded ICC Profile
icc_profile = jpeg.get_icc_profile(jpeg_data)
if icc_profile:
# Create Source and Destination Profile objects
src_profile = ImageCms.getOpenProfile(io.BytesIO(icc_profile))
dst_profile = ImageCms.createProfile("sRGB")
# Perform color transformation (similar to "Convert to Profile" in Photoshop)
# This step recalculates pixel values to align with sRGB standards
img = ImageCms.profileToProfile(
img,
src_profile,
dst_profile,
outputMode='RGB'
)
print(f"Successfully applied ICC profile from {jpeg_path}")
else:
print("No ICC profile found, assuming sRGB.")
except Exception as e:
print(f"Color Management Error: {e}. Returning original raw image.")
return img
# --- Example Usage ---
if __name__ == "__main__":
result_img = decode_jpeg_with_color_management('icc_profile.jpg')
result_img.show()
# result_img.save('output_srgb.jpg', quality=95)
```
## High-Precision JPEG Support
PyTurboJPEG 2.0+ supports 12-bit and 16-bit precision JPEG encoding and decoding using libjpeg-turbo 3.0+ APIs. This feature is ideal for medical imaging, scientific photography, and other applications requiring higher bit depth.
**Requirements:**
- libjpeg-turbo 3.0 or later (12-bit and 16-bit support is built-in)
**Precision Modes:**
- **12-bit JPEG:** Supports both lossy and lossless compression
- **16-bit JPEG:** Only supports lossless compression (JPEG standard limitation)
### 12-bit JPEG (Lossy)
12-bit JPEG provides higher precision than standard 8-bit JPEG while maintaining compatibility with lossy compression.
```python
import numpy as np
from turbojpeg import TurboJPEG
jpeg = TurboJPEG()
# Create 12-bit image (values range from 0 to 4095)
img_12bit = np.random.randint(0, 4096, (480, 640, 3), dtype=np.uint16)
# Encode to 12-bit lossy JPEG
jpeg_data = jpeg.encode_12bit(img_12bit, quality=95)
# Decode from 12-bit JPEG
decoded_img = jpeg.decode_12bit(jpeg_data)
# Save to file
with open('output_12bit.jpg', 'wb') as f:
f.write(jpeg_data)
# Load from file
with open('output_12bit.jpg', 'rb') as f:
decoded_from_file = jpeg.decode_12bit(f.read())
```
### Lossless JPEG for 12-bit and 16-bit
12-bit and 16-bit JPEG support lossless compression for perfect reconstruction:
#### 12-bit Lossless JPEG
12-bit precision with lossless compression:
```python
import numpy as np
from turbojpeg import TurboJPEG
jpeg = TurboJPEG()
# Create 12-bit image
img_12bit = np.random.randint(0, 4096, (480, 640, 3), dtype=np.uint16)
# Encode to 12-bit lossless JPEG using encode_12bit() with lossless=True
jpeg_data = jpeg.encode_12bit(img_12bit, lossless=True)
# Decode using decode_12bit()
decoded_img = jpeg.decode_12bit(jpeg_data)
# Perfect reconstruction
assert np.array_equal(img_12bit, decoded_img) # True
```
#### 16-bit Lossless JPEG
16-bit JPEG provides the highest precision with perfect reconstruction through lossless compression. The JPEG standard only supports 16-bit for lossless mode.
```python
import numpy as np
from turbojpeg import TurboJPEG
jpeg = TurboJPEG()
# Create 16-bit image (values range from 0 to 65535)
img_16bit = np.random.randint(0, 65536, (480, 640, 3), dtype=np.uint16)
# Encode to 16-bit lossless JPEG
jpeg_data = jpeg.encode_16bit(img_16bit)
# Decode from 16-bit lossless JPEG
decoded_img = jpeg.decode_16bit(jpeg_data)
# Verify perfect reconstruction (lossless)
assert np.array_equal(img_16bit, decoded_img) # True
# Save to file
with open('output_16bit_lossless.jpg', 'wb') as f:
f.write(jpeg_data)
# Load from file
with open('output_16bit_lossless.jpg', 'rb') as f:
decoded_from_file = jpeg.decode_16bit(f.read())
```
### Medical and Scientific Imaging
For medical and scientific applications, 12-bit JPEG provides excellent precision while maintaining file size efficiency:
```python
import numpy as np
from turbojpeg import TurboJPEG, TJPF_GRAY, TJSAMP_GRAY
jpeg = TurboJPEG()
# Create 12-bit medical image (e.g., DICOM format)
# Medical images typically use 0-4095 range
medical_img = np.random.randint(0, 4096, (512, 512, 1), dtype=np.uint16)
# Encode with highest quality for medical applications
jpeg_medical = jpeg.encode_12bit(
medical_img,
pixel_format=TJPF_GRAY,
jpeg_subsample=TJSAMP_GRAY,
quality=100
)
# Decode for analysis
decoded_medical = jpeg.decode_12bit(jpeg_medical, pixel_format=TJPF_GRAY)
# Verify value range preservation
print(f"Original range: [{medical_img.min()}, {medical_img.max()}]")
print(f"Decoded range: [{decoded_medical.min()}, {decoded_medical.max()}]")
```
## License
See the LICENSE file for details.
| text/markdown | Lilo Huang | kuso.cc@gmail.com | null | null | MIT | null | [] | [] | https://github.com/lilohuang/PyTurboJPEG | null | null | [] | [] | [] | [
"numpy",
"pytest>=7.0.0; extra == \"test\""
] | [] | [] | [] | [] | twine/6.2.0 CPython/3.13.9 | 2026-02-21T02:49:46.505910 | pyturbojpeg-2.2.0.tar.gz | 32,514 | d7/c6/efd96866c457f22d1e893f9ac67f17ff61a5661ad2a061da61657e9c4999/pyturbojpeg-2.2.0.tar.gz | source | sdist | null | false | 9566e3aff00792b84749979991fb61e1 | aaf0305aa9627ce7fdb8f592eb5e0fce804e1bd87db49900bcf78d7d5138eb88 | d7c6efd96866c457f22d1e893f9ac67f17ff61a5661ad2a061da61657e9c4999 | null | [
"LICENSE"
] | 0 |
2.4 | parq-cli | 0.1.5 | A powerful command-line tool for inspecting and analyzing Apache Parquet files | # parq-cli
[](https://www.python.org/downloads/)
[](LICENSE)
A powerful command-line tool for Apache Parquet files 🚀
English | [简体中文](https://github.com/Tendo33/parq-cli/blob/main/README_CN.md)
## ✨ Features
- 📊 **Metadata Viewing**: Quickly view Parquet file metadata (row count, column count, file size, compression type, etc.)
- 📋 **Schema Display**: Beautifully display file column structure and data types
- 👀 **Data Preview**: Support viewing the first N rows or last N rows of a file
- 🔢 **Row Count**: Quickly get the total number of rows in a file
- ✂️ **File Splitting**: Split large Parquet files into multiple smaller files
- 🗜️ **Compression Info**: Display file compression type and file size
- 🎨 **Beautiful Output**: Use Rich library for colorful, formatted terminal output
- 📦 **Smart Display**: Automatically detect nested structures, showing logical and physical column counts
## 📦 Installation
```bash
pip install parq-cli
```
## 🚀 Quick Start
### Basic Usage
```bash
# View file metadata
parq meta data.parquet
# Display schema information
parq schema data.parquet
# Display first 5 rows (default)
parq head data.parquet
# Display first 10 rows
parq head -n 10 data.parquet
# Display last 5 rows (default)
parq tail data.parquet
# Display last 20 rows
parq tail -n 20 data.parquet
# Display total row count
parq count data.parquet
# Split file into 3 parts
parq split data.parquet --file-count 3
# Split file with 1000 records per file
parq split data.parquet --record-count 1000
```
## 📖 Command Reference
### View Metadata
```bash
parq meta FILE
```
Display Parquet file metadata (row count, column count, file size, compression type, etc.).
### View Schema
```bash
parq schema FILE
```
Display the column structure and data types of a Parquet file.
### Preview Data
```bash
# Display first N rows (default 5)
parq head FILE
parq head -n N FILE
# Display last N rows (default 5)
parq tail FILE
parq tail -n N FILE
```
Notes:
- `N` must be a non-negative integer.
- If the input file does not exist, parq exits with code `1` and prints a friendly error message.
### Statistics
```bash
# Display total row count
parq count FILE
```
### Split Files
```bash
# Split into N files
parq split FILE --file-count N
# Split with M records per file
parq split FILE --record-count M
# Custom output format
parq split FILE -f N -n "output-%03d.parquet"
# Split into subdirectory
parq split FILE -f 3 -n "output/part-%02d.parquet"
```
Split a Parquet file into multiple smaller files. You can specify either the number of output files (`--file-count`) or the number of records per file (`--record-count`). The output file names are formatted according to the `--name-format` pattern (default: `result-%06d.parquet`).
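The `--name-format` pattern uses printf-style integer formatting, so the default `result-%06d.parquet` zero-pads the part index to six digits. The sketch below only illustrates the padding; whether the tool numbers parts from 0 or 1 is up to its implementation:

```python
# How a printf-style name pattern like the default expands for a few indices.
pattern = "result-%06d.parquet"
for i in range(3):
    print(pattern % i)
```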
### Global Options
- `--version, -v`: Display version information
- `--help`: Display help information
## 🎨 Output Examples
### Metadata Display
**Regular File (No Nested Structure):**
```bash
$ parq meta data.parquet
```
```
╭─────────────────────── 📊 Parquet File Metadata ───────────────────────╮
│ file_path: data.parquet │
│ num_rows: 1000 │
│ num_columns: 5 (logical) │
│ file_size: 123.45 KB │
│ compression: SNAPPY │
│ num_row_groups: 1 │
│ format_version: 2.6 │
│ serialized_size: 126412 │
│ created_by: parquet-cpp-arrow version 18.0.0 │
╰────────────────────────────────────────────────────────────────────────╯
```
**Nested Structure File (Shows Physical Column Count):**
```bash
$ parq meta nested.parquet
```
```
╭─────────────────────── 📊 Parquet File Metadata ───────────────────────╮
│ file_path: nested.parquet │
│ num_rows: 500 │
│ num_columns: 3 (logical) │
│ num_physical_columns: 8 (storage) │
│ file_size: 2.34 MB │
│ compression: ZSTD │
│ num_row_groups: 2 │
│ format_version: 2.6 │
│ serialized_size: 2451789 │
│ created_by: parquet-cpp-arrow version 21.0.0 │
╰────────────────────────────────────────────────────────────────────────╯
```
### Schema Display
```bash
$ parq schema data.parquet
```
```
📋 Schema Information
┏━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━┳━━━━━━━━━━┓
┃ Column Name ┃ Data Type ┃ Nullable ┃
┡━━━━━━━━━━━━━╇━━━━━━━━━━━━━━━╇━━━━━━━━━━┩
│ id │ int64 │ ✗ │
│ name │ string │ ✓ │
│ age │ int64 │ ✓ │
│ city │ string │ ✓ │
│ salary │ double │ ✓ │
└─────────────┴───────────────┴──────────┘
```
## 🛠️ Tech Stack
- **[PyArrow](https://arrow.apache.org/docs/python/)**: High-performance Parquet reading engine
- **[Typer](https://typer.tiangolo.com/)**: Modern CLI framework
- **[Rich](https://rich.readthedocs.io/)**: Beautiful terminal output
## 🧪 Development
### Install Development Dependencies
```bash
# Recommended with uv
uv sync --extra dev
# Or with pip
pip install -e ".[dev]"
```
### Run Tests
```bash
pytest
```
### Run Tests (With Coverage)
```bash
pytest --cov=parq --cov-report=html
```
### Code Formatting and Checking
```bash
# Check and auto-fix with Ruff
ruff check --fix parq tests
# Find dead code
vulture parq tests scripts
```
## 🗺️ Roadmap
- [x] Basic metadata viewing
- [x] Schema display
- [x] Data preview (head/tail)
- [x] Row count statistics
- [x] File size and compression information display
- [x] Nested structure smart detection (logical vs physical column count)
- [x] Add split command, split a parquet file into multiple parquet files
- [ ] Data statistical analysis
- [ ] Add convert command, convert a parquet file to other formats (CSV, JSON, Excel)
- [ ] Add diff command, compare the differences between two parquet files
- [ ] Add merge command, merge multiple parquet files into one parquet file
## 📦 Release Process (for maintainers)
We use automated scripts to manage versions and releases:
```bash
# Bump version and create tag
python scripts/bump_version.py patch # 0.1.0 -> 0.1.1 (bug fixes)
python scripts/bump_version.py minor # 0.1.0 -> 0.2.0 (new features)
python scripts/bump_version.py major # 0.1.0 -> 1.0.0 (breaking changes)
# Push to trigger GitHub Actions
git push origin main
git push origin v0.1.1 # Replace with actual version
```
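The bump rules above follow standard semantic versioning: `patch` increments the last component, `minor` bumps the middle component and resets patch, and `major` bumps the first component and resets the rest. A minimal sketch of that logic (a hypothetical helper, not the actual `scripts/bump_version.py`):

```python
def bump(version: str, part: str) -> str:
    """Apply a SemVer-style bump to a MAJOR.MINOR.PATCH string."""
    major, minor, patch = map(int, version.split("."))
    if part == "patch":
        patch += 1
    elif part == "minor":
        minor, patch = minor + 1, 0
    elif part == "major":
        major, minor, patch = major + 1, 0, 0
    else:
        raise ValueError(f"unknown part: {part}")
    return f"{major}.{minor}.{patch}"

print(bump("0.1.0", "patch"))  # 0.1.1
print(bump("0.1.0", "minor"))  # 0.2.0
print(bump("0.1.0", "major"))  # 1.0.0
```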
GitHub Actions will automatically:
- ✅ Run tests on Linux/macOS/Windows before publishing
- ✅ Check for version conflicts
- ✅ Fail fast on network errors while checking PyPI versions
- ✅ Build the package
- ✅ Publish to PyPI
- ✅ Create GitHub Release
See [scripts/README.md](scripts/README.md) for detailed documentation.
## 🤝 Contributing
Issues and Pull Requests are welcome!
1. Fork this repository
2. Create a feature branch (`git checkout -b feature/AmazingFeature`)
3. Commit your changes (`git commit -m 'Add some AmazingFeature'`)
4. Push to the branch (`git push origin feature/AmazingFeature`)
5. Open a Pull Request
## 📄 License
This project is licensed under the MIT License - see the [LICENSE](LICENSE) file for details
## 🙏 Acknowledgments
- Inspired by [parquet-cli](https://github.com/chhantyal/parquet-cli)
- Thanks to the Apache Arrow team for powerful Parquet support
- Thanks to the Rich library for adding color to terminal output
## 📮 Contact
- Author: SimonSun
- Project URL: https://github.com/Tendo33/parq-cli
---
**⭐ If this project helps you, please give it a Star!**
| text/markdown | null | SimonSun <sjf19981112@gmail.com> | null | null | MIT | analytics, apache-parquet, cli, data, data-tools, parquet | [
"Development Status :: 3 - Alpha",
"Intended Audience :: Developers",
"License :: OSI Approved :: MIT License",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.9",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: 3.13"
] | [] | null | null | >=3.9 | [] | [] | [] | [
"pyarrow>=18.0.0",
"rich>=13.0.0",
"typer>=0.15.0",
"pytest-cov>=4.1.0; extra == \"dev\"",
"pytest>=8.0.0; extra == \"dev\"",
"ruff>=0.7.0; extra == \"dev\"",
"vulture>=2.14; extra == \"dev\""
] | [] | [] | [] | [
"Homepage, https://github.com/Tendo33/parq-cli",
"Repository, https://github.com/Tendo33/parq-cli",
"Issues, https://github.com/Tendo33/parq-cli/issues",
"Documentation, https://github.com/Tendo33/parq-cli#readme"
] | twine/6.2.0 CPython/3.11.14 | 2026-02-21T02:48:50.251351 | parq_cli-0.1.5.tar.gz | 600,910 | 79/fb/165d46609675bbe6c4b4e86644d30fa9bfd69c90269914ce9e43e50d860f/parq_cli-0.1.5.tar.gz | source | sdist | null | false | 449e1f8faa0d89ee5913649cc89c8317 | b55fb56d3a1bef143422a7805f1a32b42dfdf3f1d3ba93e6d53869adaff0e73a | 79fb165d46609675bbe6c4b4e86644d30fa9bfd69c90269914ce9e43e50d860f | null | [
"LICENSE"
] | 237 |
2.4 | songbirdcli | 0.3.1 | Songbird's cli. | # songbirdcli 🐦
Music downloading client featuring mp3 or m4a tagging.
## Dependencies
1. [deno](https://deno.com/)
2. [ffmpeg](https://www.ffmpeg.org/)
## Install via Pip
Run
```bash
pip install songbirdcli
```
To run (with minimal features enabled) use:
```bash
RUN_LOCAL=true GDRIVE_ENABLED=false ITUNES_ENABLED=false python3 songbirdcli
```
See [Configuration](#configuration) for how to configure the CLI via
environment variables. Without configuration, songbirdcli saves its data
relative to the script's directory.
## Install via Docker
To run the app via docker, you will require:
1. docker: https://docs.docker.com/get-docker/
Note: to be gung-ho, add `--pull always` to any of the
below commands to always receive the latest
and greatest images.
First, initialize your docker volumes
```bash
make volumesinit
```
Note: bash or zsh aliases are provided below,
assuming you clone songbird into your home directory
into `~/proj/cboin1996/`.
### Itunes and Google Drive Integration
Macos:
```bash
alias songbirdgi="docker run -it --env-file "${HOME}"/proj/cboin1996/songbird/songbirdcli/songbirdcli/docker.env \
-v "${HOME}"/proj/cboin1996/songbird/songbirdcli/songbirdcli/data/dump:/app/data/dump \
-v "${HOME}"/proj/cboin1996/songbird/songbirdcli/songbirdcli/data/gdrive:/app/data/gdrive \
-v "${HOME}"/Music/iTunes/iTunes\ Media/Automatically\ Add\ to\ Music.localized:/app/data/itunesauto \
-v "${HOME}"/Music/Itunes/Itunes\ Media/Music:/app/data/ituneslib \
-p 8080:8080 \
--hostname songbird \
--pull always \
cboin/songbird:latest"
```
Windows:
Install Windows Subsystem for Linux and set up the alias below:
```bash
alias songbirdgi="docker run -it --env-file "${HOME}"/proj/cboin1996/songbird/songbirdcli/songbirdcli/docker.env \
-v "${HOME}"/proj/cboin1996/songbird/songbirdcli/songbirdcli/data/dump:/app/data/dump \
-v "${HOME}"/proj/cboin1996/songbird/songbirdcli/songbirdcli/data/gdrive:/app/data/gdrive \
-v /mnt/c/Users/*/Music/iTunes/iTunes\ Media/Automatically\ Add\ to\ Music:/app/data/itunesauto \
-v /mnt/c/Users/*/Music/iTunes/iTunes\ Media/Music:/app/data/ituneslib \
-p 8080:8080 \
--hostname songbird \
--pull always \
cboin/songbird:latest"
```
### Minimal configuration
By default, the app assumes iTunes is installed. At minimum,
create a `.env` file as below to run without either integration.
In addition, you need a folder to store local files in.
This folder is passed as a volume mount to the
dockerized app, as in the `data/dump:/app/data/dump` mounts above,
and is initialized automatically when running
`make volumesinit`.
```.env
ITUNES_ENABLED=False
GDRIVE_ENABLED=False
```
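The `.env` files used throughout this README are plain `KEY=VALUE` lines. A minimal stdlib sketch of how such a file maps to settings (an illustrative parser, not the app's actual loader):

```python
def parse_env(text: str) -> dict:
    """Parse simple KEY=VALUE lines, skipping blanks and # comments."""
    env = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue
        key, _, value = line.partition("=")
        env[key.strip()] = value.strip()
    return env

print(parse_env("ITUNES_ENABLED=False\nGDRIVE_ENABLED=False"))
```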
### Gdrive Only
Create a .env file at the root of the project containing:
```.env
ITUNES_ENABLED=false
GDRIVE_FOLDER_ID=foo
```
Replace `foo` with the folder id of your Google Drive
folder. This is found in the URL when the folder is open
in Google Drive, e.g.
`https://drive.google.com/drive/folders/foo`
Follow https://developers.google.com/drive/api/quickstart/python,
and place the `credentials.json` file in the `app/data/gdrive`
folder at the root of the project.
```bash
alias songbirdg="docker run -it --env-file "${HOME}"/proj/cboin1996/songbird/songbirdcli/songbirdcli/docker.env \
-v "${HOME}"/proj/cboin1996/songbird/songbirdcli/songbirdcli/data/dump:/app/data/dump \
-v "${HOME}"/proj/cboin1996/songbird/songbirdcli/songbirdcli/data/gdrive:/app/data/gdrive \
-p 8080:8080 \
--hostname songbird \
--pull always \
cboin/songbird:latest"
```
### Itunes Only
```.env
GDRIVE_ENABLED=false
```
## Run
Macos:
```bash
alias songbirdi="docker run -it --env-file "${HOME}"/proj/cboin1996/songbird/songbirdcli/songbirdcli/docker.env \
-v "${HOME}"/proj/cboin1996/songbird/songbirdcli/songbirdcli/data/dump:/app/data/dump \
-v "${HOME}"/Music/iTunes/iTunes\ Media/Automatically\ Add\ to\ Music.localized:/app/data/itunesauto \
-v "${HOME}"/Music/Itunes/Itunes\ Media/Music:/app/data/ituneslib \
--pull always \
cboin/songbird:latest"
```
Windows:
Install Windows Subsystem for Linux and
set up the alias below:
```bash
alias songbirdi="docker run -it --env-file "${HOME}"/proj/cboin1996/songbird/songbirdcli/songbirdcli/docker.env \
-v "${HOME}"/proj/cboin1996/songbird/songbirdcli/songbirdcli/data/dump:/app/data/dump \
-v /mnt/c/Users/*/Music/iTunes/iTunes\ Media/Automatically\ Add\ to\ Music:/app/data/itunesauto \
-v /mnt/c/Users/*/Music/iTunes/iTunes\ Media/Music:/app/data/ituneslib \
-p 8080:8080 \
--hostname songbird \
--pull always \
cboin/songbird:latest"
```
## Install as Package
To use songbirdcli as a python package, use
```bash
pip install songbirdcli
```
See [tests](./tests/unit/test_cli.py) for an example
of configuring and running songbirdcli as a package.
For API documentation, view [here](https://cboin1996.github.io/songbird/)
## Development
To run the application locally, you can use a VSCode debugger.
You should also set up a `.env` file
with the parameter `RUN_LOCAL=True`.
### Requirements
1. Clone [songbirdcore](https://github.com/cboin1996/songbirdcore.git)
adjacent to this project.
2. Next, run
```bash
export ENV=dev
make setup
```
3. Follow the outputted instructions from `make setup`.
4. Next, run:
```bash
make requirements
```
**Note: the above command performs an editable install of `songbirdcore`**
This allows you to edit `songbirdcore` locally,
and have those changes directly
integrated with this application when developing.
To install the official stable version use
```bash
pip install songbirdcore
```
### Debug CLI
Vscode debugger can be configured to run the `cli.py` file
with the following `.vscode/launch.json` file
```json
{
"configurations": [
{
"name": "Python: Current File",
"type": "python",
"request": "launch",
"program": "./songbirdcli/cli.py",
"console": "integratedTerminal",
"justMyCode": true,
"envFile": "./dev.env"
},
]
}
```
Alternatively, you can simply run the app directly with
```bash
python3 songbirdcli/cli.py
```
## Linting
To lint the app, run
```
make lint
```
## Configuration
The following table summarizes the configurable parameters for the app,
these can be setup in a `.env` file at the root of the project,
and passed to docker with `--env-file .env`.
| Variable | Type | Default | Description |
| -------------------------- | ------------- | --------------------------------- | -------------------------------------------------------------------------- |
| RUN_LOCAL | bool | False | Whether to run the app locally, or configure it for running in a container |
| ROOT_PATH | str | sys.path[0] | The root path to the project folder |
| DATA_PATH | str | "data" | The name of the folder where app data is stored on disk |
| ITUNES_SEARCH_API_BASE_URL | str | "https://itunes.apple.com/search" | The itunes search api root url |
| ITUNES_ENABLED | bool | True | Whether to run with itunes integration enabled |
| ITUNES_FOLDER_PATH | Optional[str] | "itunesauto" | The path to the itunes automatically add folder |
| ITUNES_LIB_PATH | Optional[str] | "ituneslib" | The path to the itunes library folder |
| GDRIVE_ENABLED | bool | True | Whether to run with google drive integration |
| GDRIVE_FOLDER_PATH | Optional[str] | "gdrive" | Local folder for storing files destined for uploading to google drive |
| GDRIVE_FOLDER_ID | Optional[str] | "" | The folder id of a cloud google drive folder |
| GDRIVE_AUTH_PORT | int | 8080 | The port for oauth setup for google drive integration |
| LOCAL_SONG_STORE_STR | str | "dump" | Where songs are stored locally |
| FNAME_DUP_KEY | str | "_dup" | The key for naming duplicate files |
| FNAME_DUP_LIMIT | int | 8 | The limit of duplicate files matching the `FNAME_DUP_KEY` |
| YOUTUBE_DL_ENABLED | bool | True | Whether to enable the youtube download feature |
| YOUTUBE_RENDER_TIMEOUT | int | 20 | The time before giving up on the render of youtube's search page |
| YOUTUBE_RENDER_WAIT | float | 0.2 | The wait time before starting a render of the youtube search page |
| YOUTUBE_RENDER_SLEEP | int | 1 | The wait time after initial render of youtube |
| YOUTUBE_HOME_URL | str | "https://www.youtube.com" | |
| YOUTUBE_SEARCH_URL | str | "https://www.youtube.com/results" | |
| YOUTUBE_SEARCH_TAG | str | "search_query" | The html tag on youtubes home page linking to the html search form |
| YOUTUBE_SEARCHFORM_PAYLOAD | dict | {youtube_search_tag: ""} | the payload for performing a youtube search |
| YOUTUBE_DL_RETRIES | int | 3 | number of retries for youtube-dlp before giving up on a download |
| FILE_FORMAT | str | "mp3" | This field is overwritten to m4a if itunes is enabled. |
| text/markdown | Christian Boin | null | null | null | null | null | [] | [] | null | null | >=3.11 | [] | [] | [] | [
"annotated-types==0.7.0",
"beautifulsoup4==4.12.3",
"brotli==1.2.0",
"certifi==2024.12.14",
"cffi==2.0.0",
"charset-normalizer==3.4.0",
"cryptography==46.0.5",
"cssselect==1.2.0",
"deprecation==2.1.0",
"eyeD3==0.9.9",
"fake-useragent==2.0.3",
"filetype==1.2.0",
"google-api-core==2.30.0",
"google-api-python-client==2.190.0",
"google-auth==2.48.0",
"google-auth-httplib2==0.3.0",
"google-auth-oauthlib==1.2.4",
"googleapis-common-protos==1.72.0",
"greenlet==3.1.1",
"httplib2==0.31.2",
"idna==3.10",
"lxml==5.3.0",
"lxml_html_clean==0.4.1",
"mutagen==1.47.0",
"oauthlib==3.3.1",
"packaging==26.0",
"parse==1.20.2",
"playwright==1.49.1",
"proto-plus==1.27.1",
"protobuf==6.33.5",
"pyasn1==0.6.2",
"pyasn1_modules==0.4.2",
"pycparser==3.0",
"pycryptodomex==3.23.0",
"pydantic==2.11.10",
"pydantic-settings==2.13.1",
"pydantic_core==2.33.2",
"pyee==12.0.0",
"pyparsing==3.3.2",
"pyquery==2.0.1",
"python-dotenv==1.2.1",
"requests==2.32.3",
"requests-htmlc==0.0.8",
"requests-oauthlib==2.0.0",
"rsa==4.9.1",
"songbirdcore==0.1.7",
"soupsieve==2.6",
"typing-inspection==0.4.2",
"typing_extensions==4.12.2",
"uritemplate==4.2.0",
"urllib3==2.2.3",
"w3lib==2.2.1",
"websockets==16.0",
"yt-dlp==2026.2.4",
"yt-dlp-ejs==0.4.0",
"black; extra == \"dev\"",
"click; extra == \"dev\"",
"isort; extra == \"dev\"",
"pytest; extra == \"dev\"",
"pytest-cov; extra == \"dev\"",
"mkdocs; extra == \"dev\"",
"mkdocs-material; extra == \"dev\"",
"mike; extra == \"dev\"",
"mkdocstrings-python; extra == \"dev\"",
"build; extra == \"package\"",
"twine; extra == \"package\""
] | [] | [] | [] | [
"Documentation, https://cboin1996.github.io/songbird/"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-21T02:47:45.099140 | songbirdcli-0.3.1.tar.gz | 27,370 | 64/38/68c2064f009cc4ec5510880c462ae679d07e2480e39a01d47fdd172a7dd3/songbirdcli-0.3.1.tar.gz | source | sdist | null | false | 303a44fa6946442d271535ee18fd9daa | f3210fa6aeb975042d3133e0f88f8c1e30aae06cee10c98bf2bb6ea241b85a80 | 643868c2064f009cc4ec5510880c462ae679d07e2480e39a01d47fdd172a7dd3 | null | [
"LICENSE"
] | 236 |
2.4 | ztensor | 1.2.3 | Unified, zero-copy, and safe I/O for deep learning formats. | # ztensor
[](https://github.com/pie-project/ztensor/actions/workflows/check.yml)
[](https://crates.io/crates/ztensor)
[](https://pypi.org/project/ztensor/)
[](https://docs.rs/ztensor)
[](https://opensource.org/licenses/MIT)
Unified, zero-copy, and safe I/O for deep learning formats.
## Reading
zTensor reads `.safetensors`, `.pt`, `.gguf`, `.npz`, `.onnx`, `.h5`, and `.zt` files through a single API. Format detection is automatic. In zero-copy mode, it consistently achieves ~2 GB/s across all formats. For `.pt` files, it parses pickle using a restricted VM in Rust that only extracts tensor metadata, so no arbitrary code can execute.
<p align="center">
<img src="website/static/charts/cross_format_read.svg" alt="Cross-format read throughput" width="700">
</p>
| Format | zTensor | zTensor (zc off) | Reference impl. |
| :--- | :--- | :--- | :--- |
| .zt | **2.19 GB/s** | 1.37 GB/s | n/a |
| .safetensors | **2.19 GB/s** | 1.46 GB/s | 1.33 GB/s ([safetensors](https://github.com/huggingface/safetensors)) |
| .pt | **2.04 GB/s** | 1.33 GB/s | 0.89 GB/s ([torch](https://github.com/pytorch/pytorch)) |
| .npz | **2.11 GB/s** | 1.41 GB/s | 1.04 GB/s ([numpy](https://github.com/numpy/numpy)) |
| .gguf | **2.11 GB/s** | 1.38 GB/s | 1.39 GB/s / 2.15 GB/s† ([gguf](https://github.com/ggml-org/ggml)) |
| .onnx | **2.07 GB/s** | 1.29 GB/s | 0.76 GB/s ([onnx](https://github.com/onnx/onnx)) |
| .h5 | **1.96 GB/s** | 1.30 GB/s | 1.35 GB/s ([h5py](https://github.com/h5py/h5py)) |
*Llama 3.2 1B shapes (~2.8 GB). Linux, NVMe SSD, median of 5 runs, cold reads. ONNX at 1 GB (protobuf limit). †GGUF's native reader also supports mmap (2.15 GB/s). See [Benchmarks](https://pie-project.github.io/ztensor/benchmarks) for details.*
## Writing
zTensor writes exclusively to `.zt`, our own format. Existing tensor formats each solve part of the problem, but none solve it cleanly:
- **Pickle-based formats** (`.pt`, `.bin`) execute arbitrary code on load; a model file can run anything on the reader's machine.
- **SafeTensors** is safe but treats every tensor as a flat, dense array of a fixed dtype. New formats can't be represented without a spec change.
- **GGUF** handles quantization but bakes each scheme into the dtype enum, coupling the format to the llama.cpp ecosystem.
- **NumPy `.npz`** has no alignment guarantees (no mmap), no compression beyond zip, and no structured metadata.
Most formats equate "tensor" with "flat array of one dtype." Once you need something structurally richer (sparse indices alongside values, or quantized weights alongside scales), the format can't express it without flattening into separate arrays held together by naming conventions.
`.zt` models each tensor as a composite object with typed components, so dense, sparse, and quantized data all fit without extending the format. It also supports zero-copy mmap reads, zstd compression, integrity checksums, and streaming writes. Read the full [specification](https://pie-project.github.io/ztensor/spec).
<p align="center">
<img src="website/static/charts/write_throughput.svg" alt="Write throughput by workload" width="700">
</p>
| Format | Large | Mixed | Small |
| :--- | :--- | :--- | :--- |
| **.zt** | 3.62 GB/s | 3.65 GB/s | 1.42 GB/s |
| .safetensors | 1.72 GB/s | 1.77 GB/s | 1.48 GB/s |
| .pt (pickle) | 3.62 GB/s | 3.68 GB/s | **2.00 GB/s** |
| .npz | 2.40 GB/s | 2.40 GB/s | 0.51 GB/s |
| **.gguf** | **3.85 GB/s** | **3.86 GB/s** | 1.06 GB/s |
| .onnx | 0.28 GB/s | 0.29 GB/s | 0.32 GB/s |
| .h5 | 3.67 GB/s | 3.69 GB/s | 0.27 GB/s |
*Three workloads at 512 MB, `copy=True`: Large (few big matrices), Mixed (realistic model shapes), Small (many ~10 KB parameters). See [Benchmarks](https://pie-project.github.io/ztensor/benchmarks) for details.*
## Format Comparison
| Feature | .zt | .safetensors | .gguf | .pt (pickle) | .npz | .onnx | .h5 |
| :--- | :---: | :---: | :---: | :---: | :---: | :---: | :---: |
| Zero-copy read | ✓ | ✓ | ✓ | ~² | ~² | | |
| Safe (no code exec) | ✓ | ✓ | ✓ | | ✓ | ✓ | ✓ |
| Streaming / append | ✓ | | | | ~³ | | ✓ |
| Sparse tensors | ✓ | | | ✓ | | | |
| Per-tensor compression | ✓ | | | | ✗¹ | | ✓ |
| Extensible types | ✓ | | | N/A | | ✓ | ✓ |
¹ `.npz` uses archive-level zip/deflate, not per-tensor compression.
² Partial support (requires specific alignment or uncompressed data).
³ Zip append support (not standard API).
## Installation
```bash
pip install ztensor # Python
cargo add ztensor # Rust
cargo install ztensor-cli # CLI
```
## Documentation
- [Website](https://pie-project.github.io/ztensor/) — guides, API reference, benchmarks
- [docs.rs](https://docs.rs/ztensor) — Rust API docs
## License
MIT
| text/markdown; charset=UTF-8; variant=GFM | null | In Gim <in.gim@yale.edu> | null | null | null | null | [
"Programming Language :: Rust",
"Programming Language :: Python :: 3",
"License :: OSI Approved :: MIT License",
"Operating System :: OS Independent",
"Intended Audience :: Developers",
"Topic :: Scientific/Engineering"
] | [] | null | null | null | [] | [] | [] | [
"numpy",
"ml-dtypes; extra == \"bfloat16\""
] | [] | [] | [] | [
"Homepage, https://github.com/pie-project/ztensor",
"Source, https://github.com/pie-project/ztensor"
] | maturin/1.12.3 | 2026-02-21T02:46:37.811391 | ztensor-1.2.3-cp313-cp313-win32.whl | 735,728 | 89/59/e9c64343b1fa40fd4347e12f2a3b13e3fde1a4c909ff358366ff5f509100/ztensor-1.2.3-cp313-cp313-win32.whl | cp313 | bdist_wheel | null | false | dc75557040a8a5de4d1517601886779a | c185a534a495258ba18e3f13a99d393d9226a6e0b1b6ff14f77177da64773857 | 8959e9c64343b1fa40fd4347e12f2a3b13e3fde1a4c909ff358366ff5f509100 | null | [] | 1,861 |
2.4 | kestrel | 0.1.3 | a fast, efficient inference engine for moondream | # Kestrel

High-performance inference engine for the [Moondream](https://moondream.ai) vision-language model.
Kestrel provides async, micro-batched serving with streaming support, paged KV caching, and optimized CUDA kernels. It's designed for production deployments where throughput and latency matter.
## Features
- **Async micro-batching** — Cooperative scheduler batches heterogeneous requests without compromising per-request latency
- **Streaming** — Real-time token streaming for query and caption tasks
- **Multi-task** — Visual Q&A, captioning, point detection, object detection, and segmentation
- **Paged KV cache** — Efficient memory management for high concurrency
- **Prefix caching** — Radix tree-based caching for repeated prompts and images
- **LoRA adapters** — Parameter-efficient fine-tuning support with automatic cloud loading
## Requirements
- Python 3.10+
- NVIDIA GPU (tested on L4, L40S, H100). Optimized CUDA kernels are available for SM89 and SM90 architectures. Other GPUs may work but will fall back to non-optimized kernels.
- `MOONDREAM_API_KEY` environment variable (get this from [moondream.ai](https://moondream.ai))
## Installation
```bash
pip install kestrel
```
## Model Access
Kestrel supports both Moondream 3 and Moondream 2:
| Model | Repository | Notes |
|-------|------------|-------|
| Moondream 2 | [vikhyatk/moondream2](https://huggingface.co/vikhyatk/moondream2) | Public, no approval needed |
| Moondream 3 | [moondream/moondream3-preview](https://huggingface.co/moondream/moondream3-preview) | Requires access approval |
For Moondream 3, request access (automatically granted) then authenticate with `huggingface-cli login` or set `HF_TOKEN`.
## Quick Start
```python
import asyncio
from kestrel.config import RuntimeConfig
from kestrel.engine import InferenceEngine
async def main():
# Weights are automatically downloaded from HuggingFace on first run.
# Use model="moondream2" or model="moondream3-preview".
cfg = RuntimeConfig(model="moondream2")
# Create the engine (loads model and warms up)
engine = await InferenceEngine.create(cfg)
# Load an image (JPEG, PNG, or WebP bytes)
image = open("photo.jpg", "rb").read()
# Visual question answering
result = await engine.query(
image=image,
question="What's in this image?",
settings={"temperature": 0.2, "max_tokens": 512},
)
print(result.output["answer"])
# Clean up
await engine.shutdown()
asyncio.run(main())
```
## Tasks
Kestrel supports several vision-language tasks through dedicated methods on the engine.
### Query (Visual Q&A)
Ask questions about an image:
```python
result = await engine.query(
image=image,
question="How many people are in this photo?",
settings={
"temperature": 0.2, # Lower = more deterministic
"top_p": 0.9,
"max_tokens": 512,
},
)
print(result.output["answer"])
```
### Caption
Generate image descriptions:
```python
result = await engine.caption(
image,
length="normal", # "short", "normal", or "long"
settings={"temperature": 0.2, "max_tokens": 512},
)
print(result.output["caption"])
```
### Point
Locate objects as normalized (x, y) coordinates:
```python
result = await engine.point(image, "person")
print(result.output["points"])
# [{"x": 0.5, "y": 0.3}, {"x": 0.8, "y": 0.4}]
```
Coordinates are normalized to [0, 1] where (0, 0) is top-left.
### Detect
Detect objects as bounding boxes:
```python
result = await engine.detect(
image,
"car",
settings={"max_objects": 10},
)
print(result.output["objects"])
# [{"x_min": 0.1, "y_min": 0.2, "x_max": 0.5, "y_max": 0.6}, ...]
```
Bounding box coordinates are normalized to [0, 1].
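Because the boxes are normalized, converting them to pixel coordinates is just a scale by the image dimensions (an illustrative helper, not part of the kestrel API):

```python
def to_pixels(box: dict, width: int, height: int) -> dict:
    """Convert a normalized bounding box to integer pixel coordinates."""
    return {
        "x_min": round(box["x_min"] * width),
        "y_min": round(box["y_min"] * height),
        "x_max": round(box["x_max"] * width),
        "y_max": round(box["y_max"] * height),
    }

print(to_pixels({"x_min": 0.1, "y_min": 0.2, "x_max": 0.5, "y_max": 0.6}, 640, 480))
# {'x_min': 64, 'y_min': 96, 'x_max': 320, 'y_max': 288}
```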
### Segment
Generate a segmentation mask (Moondream 3 only):
```python
result = await engine.segment(image, "dog")
seg = result.output["segments"][0]
print(seg["svg_path"]) # SVG path data for the mask
print(seg["bbox"]) # {"x_min": ..., "y_min": ..., "x_max": ..., "y_max": ...}
```
Note: Segmentation requires Moondream 3 and separate model weights. Contact [moondream.ai](https://moondream.ai) for access.
## Streaming
For longer responses, you can stream tokens as they're generated:
```python
image = open("photo.jpg", "rb").read()
stream = await engine.query(
image=image,
question="Describe this scene in detail.",
stream=True,
settings={"max_tokens": 1024},
)
# Print tokens as they arrive
async for chunk in stream:
print(chunk.text, end="", flush=True)
# Get the final result with metrics
result = await stream.result()
print(f"\n\nGenerated {result.metrics.output_tokens} tokens")
```
Streaming is supported for `query` and `caption` methods.
## Response Format
All methods return an `EngineResult` with these fields:
```python
result.output # Dict with task-specific output ("answer", "caption", "points", etc.)
result.finish_reason # "stop" (natural end) or "length" (hit max_tokens)
result.metrics # Timing and token counts
```
The `metrics` object contains:
```python
result.metrics.input_tokens # Number of input tokens (including image)
result.metrics.output_tokens # Number of generated tokens
result.metrics.prefill_time_ms # Time to process input
result.metrics.decode_time_ms # Time to generate output
result.metrics.ttft_ms # Time to first token
```
## Using Finetunes
If you've created a finetuned model through the [Moondream API](https://moondream.ai), you can use it by passing the adapter ID:
```python
result = await engine.query(
image=image,
question="What's in this image?",
settings={"adapter": "01J5Z3NDEKTSV4RRFFQ69G5FAV@1000"},
)
```
The adapter ID format is `{finetune_id}@{step}` where:
- `finetune_id` is the ID of your finetune job
- `step` is the training step/checkpoint to use
Adapters are automatically downloaded and cached on first use.
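Splitting an adapter ID back into its parts is straightforward (an illustrative helper, not part of the kestrel API):

```python
def parse_adapter_id(adapter_id: str) -> tuple:
    """Split a '{finetune_id}@{step}' adapter ID into its components."""
    finetune_id, sep, step = adapter_id.rpartition("@")
    if not sep or not step.isdigit():
        raise ValueError(f"expected '<finetune_id>@<step>', got {adapter_id!r}")
    return finetune_id, int(step)

print(parse_adapter_id("01J5Z3NDEKTSV4RRFFQ69G5FAV@1000"))
# ('01J5Z3NDEKTSV4RRFFQ69G5FAV', 1000)
```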
## Configuration
### RuntimeConfig
```python
RuntimeConfig(
model="moondream3-preview", # or "moondream2"
max_batch_size=4, # Max concurrent requests
)
```
### Environment Variables
| Variable | Description |
|----------|-------------|
| `MOONDREAM_API_KEY` | Required. Get this from [moondream.ai](https://moondream.ai). |
| `HF_HOME` | Override HuggingFace cache directory for downloaded weights (default: `~/.cache/huggingface`). |
| `HF_TOKEN` | HuggingFace token for gated models like Moondream 3. Alternatively, run `huggingface-cli login`. |
## Benchmarks
Throughput and latency for the `query` skill on a single H100 GPU, measured on the [ChartQA](https://huggingface.co/datasets/vikhyatk/chartqa) test split with prefix caching enabled.
- **Direct** — the model generates a short answer (~3 output tokens per request)
- **CoT** (Chain-of-Thought) — the model reasons step-by-step before answering (~30 output tokens per request), enabled via `reasoning=True`
### Moondream 2
| Batch Size | Direct (req/s) | Direct P50 (ms) | CoT (req/s) | CoT P50 (ms) |
|-----------|---------------|----------------|-------------|--------------|
| 1 | 36.48 | 31.91 | 11.50 | 140.11 |
| 2 | 41.67 | 44.52 | 17.65 | 134.85 |
| 4 | 44.41 | 67.31 | 25.46 | 153.46 |
| 8 | 46.12 | 110.04 | 33.63 | 207.75 |
| 16 | 46.85 | 209.27 | 37.86 | 347.70 |
### Moondream 3
| Batch Size | Direct (req/s) | Direct P50 (ms) | CoT (req/s) | CoT P50 (ms) |
|-----------|---------------|----------------|-------------|--------------|
| 1 | 27.82 | 41.05 | 9.04 | 177.28 |
| 2 | 31.24 | 62.41 | 12.98 | 181.56 |
| 4 | 33.18 | 90.29 | 17.75 | 221.60 |
| 8 | 34.73 | 149.13 | 22.84 | 312.56 |
| 16 | 35.11 | 281.28 | 26.95 | 503.55 |
## License
Free for evaluation and non-commercial use. Commercial use requires a [Moondream Station Pro license](https://moondream.ai/pricing).
| text/markdown | null | null | null | null | null | null | [] | [] | null | null | <3.14,>=3.10 | [] | [] | [] | [
"torch==2.9.1",
"tokenizers>=0.15",
"safetensors>=0.4",
"transformers>=4.44",
"torch-c-dlpack-ext>=0.1.3",
"starlette>=0.37",
"httpx>=0.27",
"uvicorn>=0.30",
"apache-tvm-ffi<0.2,>=0.1.5",
"kestrel-native==0.1.3",
"kestrel-kernels==0.1.3",
"huggingface-hub>=0.20",
"pytest>=8.0; extra == \"dev\"",
"datasets>=3.0; extra == \"eval\"",
"tqdm>=4.66; extra == \"eval\""
] | [] | [] | [] | [] | uv/0.9.0 | 2026-02-21T02:45:46.852117 | kestrel-0.1.3.tar.gz | 140,603 | 1f/44/c193444989d929f5d720b4c1216cbd309ff59dec185adc9bc8ac0903d570/kestrel-0.1.3.tar.gz | source | sdist | null | false | 9e7bab7d6348a3eb4aa7dfc8e08b4445 | ebaa4b48bec4c77866982a4e9eb3d5d917c4bba9e1d53d5712d250320ea53ccd | 1f44c193444989d929f5d720b4c1216cbd309ff59dec185adc9bc8ac0903d570 | null | [] | 257 |
2.4 | transmission-cleaner | 1.1.0.dev4 | Transmission maintenance tool for managing torrents by hardlinks, errors, and orphaned files | # Transmission Cleaner
[](https://pypi.org/project/transmission-cleaner/)

[](https://github.com/astral-sh/ruff)
[](https://github.com/astral-sh/uv)
A comprehensive CLI tool for maintaining your Transmission torrents.
- 🔗 **Find torrents without hardlinks** - Identify torrents that aren't hardlinked to media libraries (Sonarr, Radarr, etc.)
- ‼️ **Manage errored torrents** - Find and clean up torrents with errors (unregistered, tracker issues, etc.)
- 🗑️ **Detect orphaned files** - Discover files in your download directories that aren't tracked by any torrent
- 🛡️ **Protections** - Checks for cross-seeding and HNR violations for private torrents (PTP, BTN, BHD, TVCUK)
## Support
Unit tests are run against all of the following combinations:
- Python versions: 3.10 through 3.14
- OS: macOS, Linux, Windows
## Installation
### Quick Install (Recommended)
If you haven't yet, [install uv](https://docs.astral.sh/uv/getting-started/installation/) (`curl -LsSf https://astral.sh/uv/install.sh | sh`). It's a massive painkiller for the Python management headache.
```bash
# Using uv (recommended for CLI tools)
uv tool install transmission-cleaner
# Or from source
git clone https://github.com/flying-sausages/transmission-cleaner.git
cd transmission-cleaner
uv tool install .
```
## Usage
The tool is organized into subcommands, each targeting a specific maintenance task:
```bash
transmission-cleaner <command> [options]
```
### Commands Overview
- `hardlinks` - Find and manage torrents without hardlinks to other files
- `errors` - Find and manage torrents with error status
- `orphans` - Find and manage files not tracked by any torrent
All commands require authentication to the Transmission RPC server. You can:
- Use the default RPC settings for local installs together with the `--username` and `--password` options
- Point to your local daemon's config file and supply a `--password`
- Override the RPC defaults
Run any command with `--help` for detailed options.
### 1. Hardlinks Command
Find torrents whose files don't have hardlinks elsewhere on your system. Perfect for cleaning up after media that's been deleted from your library.
```bash
# List torrents without hardlinks
transmission-cleaner hardlinks --username USER --password PASSWORD
```
**Options:**
- `-d, --directory` - Filter by download directory (substring match)
- `-t, --tracker` - Filter by announce URL (substring match)
- `--min-days` - Minimum days of active seeding (default: 7)
- `--action` - Action to perform: `list` (default), `interactive`, `delete` (with data), `remove` (torrent only)
- `--skip-hnr` - Skip HNR check for private torrents (allows deletion even if HNR would be violated)
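Under the hood, the hardlink test boils down to a file's link count: a file with no other hardlinks reports `st_nlink == 1` in `os.stat`. A minimal sketch of the idea (not the tool's actual implementation):

```python
import os
import tempfile

def has_other_hardlinks(path: str) -> bool:
    """True if the file's data is also referenced from another directory entry."""
    return os.stat(path).st_nlink > 1

with tempfile.TemporaryDirectory() as d:
    media = os.path.join(d, "episode.mkv")
    open(media, "wb").close()
    print(has_other_hardlinks(media))  # False: no library copy, safe to clean up
    os.link(media, os.path.join(d, "episode-copy.mkv"))
    print(has_other_hardlinks(media))  # True: still referenced elsewhere
```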
### 2. Errors Command
Find and manage torrents with error status, such as unregistered torrents or tracker failures. Includes cross-seed detection to prevent accidental data loss.
```bash
# List all torrents with errors
transmission-cleaner errors --username USER --password PASSWORD
```
**Options:**
- `-d, --directory` - Filter by download directory (substring match)
- `-t, --tracker` - Filter by announce URL (substring match)
- `--min-days` - Minimum days of active seeding (default: 7)
- `--error-pattern` - Filter by error message pattern (e.g., "Unregistered")
- `--skip-cross-seed` - Skip cross-seed detection (allows data deletion even if cross-seeded)
- `--action` - Action to perform: `list` (default), `interactive`, `delete` (with data), `remove` (torrent only)
**Cross-Seed Protection:** By default, the errors command checks if torrent data is shared with other active torrents. If cross-seeding is detected, the `delete` action will only remove the torrent entry, protecting the shared data.
### 3. Orphans Command
Find files in your download directories that aren't tracked by any torrent. Useful for cleaning up leftover files from deleted torrents or manual downloads.
```bash
# List orphaned files in a directory
transmission-cleaner orphans \
--username USER --password PASSWORD \
--directory /path/to/downloads
```
**Options:**
- `-d, --directory` - Directory to scan (required)
- `--include-hidden` - Include hidden files (files starting with .)
- `--action` - Action to perform: `list` (default), `interactive`, `delete`
**Note:** The orphans scanner automatically excludes:
- Symlinks (to prevent scanning outside the target directory)
- System files (.DS_Store, Thumbs.db, etc.)
- .torrent files
- Hidden files (unless `--include-hidden` is specified)
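Conceptually, orphan detection is a set difference between a directory walk and the paths the torrent client reports as tracked. A simplified sketch of the idea (not the tool's actual implementation):

```python
import os
import tempfile

def find_orphans(root: str, tracked: set[str]) -> list[str]:
    """Files under root that no torrent accounts for (symlinks skipped)."""
    orphans = []
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            if os.path.islink(path):
                continue  # never follow links outside the target directory
            if path not in tracked:
                orphans.append(path)
    return sorted(orphans)

with tempfile.TemporaryDirectory() as d:
    tracked_file = os.path.join(d, "show.mkv")
    leftover = os.path.join(d, "leftover.nfo")
    for p in (tracked_file, leftover):
        open(p, "wb").close()
    print(find_orphans(d, tracked={tracked_file}))  # only leftover.nfo is orphaned
```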
### Authentication Options
All commands support the same authentication options:
```bash
--settings-file PATH # Path to Transmission settings.json file
--protocol {http,https} # Protocol to use (default: http)
--username USERNAME # Transmission username
--password PASSWORD # Transmission password (required)
--host HOST # Transmission host (default: 127.0.0.1)
--port PORT # Transmission port (default: 9091)
--rpc-path PATH # Transmission RPC path (default: /transmission/rpc)
```
**Example with settings file:**
```bash
transmission-cleaner hardlinks \
--settings-file ~/.config/transmission-daemon/settings.json \
--password YOUR_PASSWORD
```
### Arrs Setup Suggestion
For automated cleanup with Sonarr/Radarr:
1. Have something (Plex/Maintainerr/etc.) automatically delete media
2. Set `Unmonitor Deleted Episodes` to True in your arr
3. In the arr's download client settings, set a `Category` value (e.g., "Sonarr" or "Radarr")
4. Test the tool manually:
```bash
transmission-cleaner hardlinks \
--settings-file ~/.config/transmission-daemon/settings.json \
--password YOUR_PASSWORD \
--directory Sonarr \
--action list
```
5. Add to crontab for daily cleanup at 3am:
```cron
0 3 * * * transmission-cleaner hardlinks --settings-file ~/.config/transmission-daemon/settings.json --password YOUR_PASSWORD --directory Sonarr --action delete >> /var/log/transmission-cleaner.log 2>&1
```
### Action Modes
All commands support these action modes:
- **list** (default) - Display matching items without making changes
- **interactive** - Prompt for confirmation before each action
- **delete** - Remove torrent with data from disk
- **remove** - Remove torrent from client only (keeps data)
Short forms: `l` for list, `i` for interactive, `d` for delete, `r` for remove
## Safety Notes
- **Always test with `--action list` first** to see what would be affected
- **Use interactive mode** when unsure about automatic removal
- **Backup your data** before performing bulk deletions
- **Cross-seed protection** in the errors command helps prevent data loss for shared files
- The tool requires direct filesystem access to check hardlinks and scan directories
- Orphans scanner excludes symlinks to prevent scanning outside target directories
## Contributing
Contributions are welcome! Please feel free to submit a Pull Request.
### Development
```bash
# Set up the project
uv sync
# Run linting
ruff check .
# Run type checking
basedpyright
# Run tests
uv run pytest
# Run tests with coverage
uv run pytest --cov
```
#### HNRs
There's a [template](transmission_cleaner/hnrs/_template.py) for HNR checks for private trackers. You can consult the [README.md](transmission_cleaner/hnrs/README.md) in that directory for instructions on how to add new templates.
| text/markdown | null | Your Name <flying_sausages@protonmail.com> | null | Your Name <flying_sausages@protonmail.com> | null | transmission, torrent, hardlinks, cleanup, bittorrent | [
"Intended Audience :: End Users/Desktop",
"Intended Audience :: System Administrators",
"Operating System :: OS Independent",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: 3.13",
"Programming Language :: Python :: 3.14",
"Topic :: Communications :: File Sharing",
"Topic :: System :: Filesystems",
"Topic :: Utilities"
] | [] | null | null | >=3.10 | [] | [] | [] | [
"transmission-rpc>=7.0.11"
] | [] | [] | [] | [
"Homepage, https://github.com/flying-sausages/transmission-cleaner",
"Repository, https://github.com/flying-sausages/transmission-cleaner",
"Issues, https://github.com/flying-sausages/transmission-cleaner/issues"
] | uv/0.10.4 {"installer":{"name":"uv","version":"0.10.4","subcommand":["publish"]},"python":null,"implementation":{"name":null,"version":null},"distro":{"name":"Ubuntu","version":"24.04","id":"noble","libc":null},"system":{"name":null,"release":null},"cpu":null,"openssl_version":null,"setuptools_version":null,"rustc_version":null,"ci":true} | 2026-02-21T02:45:07.664376 | transmission_cleaner-1.1.0.dev4.tar.gz | 23,166 | ea/50/13683b750c3d6f1b8930087c6d13873e6b7bfb1d1bd784c16bcf0a614175/transmission_cleaner-1.1.0.dev4.tar.gz | source | sdist | null | false | d2586e15c7358ac4c126fa73efdd8a54 | 06915601820be97b9a40820167f98d0a0d16c3b27e75c65001ccef0af672ed52 | ea5013683b750c3d6f1b8930087c6d13873e6b7bfb1d1bd784c16bcf0a614175 | MIT | [
"LICENSE"
] | 214 |
2.4 | syntactical | 2.0.0 | The Syntactical programming language's interpreter. | # Syntactical
**_The programming language of the future._**
### What is Syntactical?
Syntactical is a programming language built with Python that is meant to be like Python, but with better syntax.
### Where do I get it?
You can install Syntactical with pip via the command: `pip install syntactical`.
### Documentation
Documentation is under construction. The current documentation can be found [here](https://syntactical.cool62.net).
### Visual Studio Code Extension
Syntactical has a VSCode extension in development! See its GitHub repository [here](https://github.com/thecoolguy62aws/syntactical-vscode). Download it for Visual Studio Code [here](https://marketplace.visualstudio.com/items?itemName=thecoolguy62aws.syntactical).
| text/markdown | null | null | null | null | null | null | [] | [] | null | null | null | [] | [] | [] | [
"asgiref==3.11.1",
"attrs==25.4.0",
"automat==25.4.16",
"blinker==1.9.0",
"build==1.4.0",
"buildtools==1.0.6",
"certifi==2026.1.4",
"cffi==2.0.0",
"charset-normalizer==3.4.4",
"click==8.3.1",
"colorama==0.4.6",
"constantly==23.10.4",
"contourpy==1.3.3",
"cryptography==46.0.5",
"cycler==0.12.1",
"django==6.0.2",
"docopt==0.6.2",
"docutils==0.22.4",
"flask==3.1.2",
"fonttools==4.61.1",
"furl==2.1.4",
"greenlet==3.3.1",
"hyperlink==21.0.0",
"id==1.6.1",
"idna==3.11",
"incremental==24.11.0",
"itsdangerous==2.2.0",
"jaraco-classes==3.4.0",
"jaraco-context==6.1.0",
"jaraco-functools==4.4.0",
"jinja2==3.1.6",
"keyboard==0.13.5",
"keyring==25.7.0",
"kiwisolver==1.4.9",
"lark==1.3.1",
"markdown-it-py==4.0.0",
"markupsafe==3.0.3",
"matplotlib==3.10.8",
"mdurl==0.1.2",
"more-itertools==10.8.0",
"nh3==0.3.2",
"numpy==2.4.2",
"orderedmultidict==1.0.2",
"packaging==26.0",
"pathlib==1.0.1",
"pick==2.4.0",
"pillow==12.1.1",
"pycparser==3.0",
"pygame==2.6.1",
"pygments==2.19.2",
"pynput==1.8.1",
"pyparsing==3.3.2",
"pyproject-hooks==1.2.0",
"python-dateutil==2.9.0.post0",
"pywin32-ctypes==0.2.3",
"readme-renderer==44.0",
"redo==3.0.0",
"requests==2.32.5",
"requests-toolbelt==1.0.0",
"rfc3986==2.0.0",
"rich==14.3.2",
"setuptools==82.0.0",
"simplejson==3.20.2",
"six==1.17.0",
"sqlalchemy==2.0.46",
"sqlparse==0.5.5",
"twine==6.2.0",
"twisted==25.5.0",
"typing-extensions==4.15.0",
"tzdata==2025.3",
"urllib3==2.6.3",
"werkzeug==3.1.5",
"wheel==0.46.3",
"windows-curses==2.4.1",
"zope-interface==8.2"
] | [] | [] | [] | [] | twine/6.2.0 CPython/3.13.5 | 2026-02-21T02:45:07.150383 | syntactical-2.0.0.tar.gz | 18,397 | e5/bc/fc589607d339561937d777002cc764a361bea25105d6dde0828928828f21/syntactical-2.0.0.tar.gz | source | sdist | null | false | 69091c42ce9bd1d4676562b2c5ebbcb2 | 50476987e16601c219d2c65ff439e650cad9d415f5f3e7c6594f308ea6dbb9b8 | e5bcfc589607d339561937d777002cc764a361bea25105d6dde0828928828f21 | null | [
"LICENSE.txt"
] | 246 |
2.4 | turbopdf | 1.5.6 | Simple, fast PDF generator for Python, ideal for educational and government environments. | # 🚀 TurboPDF — Professional and Modular PDF Generator for Django
[](https://pypi.org/project/turbopdf/)
[](https://pypi.org/project/turbopdf/)
[](https://github.com/EcosistemaUNP/python-ecosistema-turbopdf/blob/main/LICENSE)
[](https://github.com/EcosistemaUNP/python-ecosistema-turbopdf/stargazers)
> ✨ **Create official forms, reports, and institutional documents in minutes, with reusable HTML components and a professional base structure.**
TurboPDF lets you assemble complex PDFs (such as UNP forms) using **modular components** (`fila_dos.html`, `firma.html`, etc.) and a **reusable base structure** that includes:
- Institutional logos
- Official margins and styles
- Automatic pagination
- "Archívese en:" ("File under:") block
Ideal for government, educational, or business environments that require standardized documents.
---
## 🎯 Why TurboPDF?
✅ **Modular** - Reuse HTML components across multiple forms
✅ **Flexible** - Build any form directly from your view
✅ **Professional** - Styles and structure ready for official documents
✅ **Django-Friendly** - Direct integration with your views and models
✅ **Maintainable** - The form logic lives in your project, not in the library
---
## ⚡ Installation
```bash
pip install turbopdf
```
---
## 📌 Requirements
- Python ≥ 3.8
- Django ≥ 3.2
- wkhtmltopdf installed on the system (installation guide)
---
## 🧩 Included components
TurboPDF ships with ready-to-use HTML components:
- `titulo_logo.html` - Header with logos and titles
- `fila_dos.html`, `fila_tres.html`, `fila_cuatro.html` - Data rows
- `tipo_identificacion.html` - Document type selector
- `firma.html`, `firmaop2.html` - Applicant signatures
- `oficializacion.html` - Footer with code and pagination
- `archivese.html` - "Archívese en:" ("File under:") block
- `pregunta_si_no.html`, `tipos_checkbox.html` - Selection controls
- `texarea.html` - Large text areas
- `anexos_limpio.html`, `manifiesto.html`, `leyenda_autoriza_correo.html` - Legal components
---
## 🛠️ How to use TurboPDF
Example 1: Basic form with header and signature
```python
from django.http import HttpResponse
from turbopdf.assemblers import BaseFormAssembler

def mi_vista_pdf(request):
    context = {'nombreCompleto': 'Ana López'}
    assembler = BaseFormAssembler(context)
    assembler.add_raw_html('<div style="border:1px solid #303d50; padding:20px;">')
    assembler.add_component('titulo_logo.html', {
        'titulo1': "MI FORMULARIO OFICIAL",
        'titulo2': "SUBTÍTULO",
        'titulo3': "INSTITUCIÓN"
    })
    assembler.add_component('firmaop2.html', {'nombre_completo': context['nombreCompleto']})
    assembler.add_raw_html('</div>')
    assembler.add_component('archivese.html', {})
    assembler.add_component('oficializacion.html', {
        'codigo': "MIF-FT-01",
        'fecha': "Oficialización: 01/01/2025",
        'pagina': "Pág. 1 de 1"
    })
    response = HttpResponse(assembler.build(), content_type='application/pdf')
    response['Content-Disposition'] = 'attachment; filename="documento.pdf"'
    return response
```
Example 2: Data row + selection
```python
assembler.add_component('fila_dos.html', {
    'label1': "Nombre", 'valor1': "Juan Pérez",
    'label2': "Correo", 'valor2': "juan@example.com"
})
assembler.add_component('pregunta_si_no.html', {
    'pregunta': "¿Autoriza notificaciones por correo?",
    'valor': "Sí"
})
```
Example 3: Identification type
```python
assembler.add_component('tipo_identificacion.html', {
    'numeracion1': 1,
    'numeracion2': "2. Número de identificación *",
    'numeroIdentificacion': "123456789",
    'numeracion3': "3. Fecha de expedición *",
    'fechaExpedicion': "01/01/2020",
    'tipoIdentificacion': "Cédula de ciudadanía"
})
```
---
## 📜 License
UNP - EcosistemaUNP ©
---
| text/markdown | null | EcosistemaUNP <ecosistema@unp.gov.co> | null | null | MIT | null | [
"Development Status :: 3 - Alpha",
"Intended Audience :: Developers",
"Intended Audience :: Education",
"Intended Audience :: Information Technology",
"License :: OSI Approved :: MIT License",
"Operating System :: OS Independent",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.8",
"Programming Language :: Python :: 3.9",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Topic :: Software Development :: Libraries :: Python Modules"
] | [] | null | null | >=3.8 | [] | [] | [] | [
"django>=3.2",
"pdfkit>=1.0.0"
] | [] | [] | [] | [
"Homepage, https://github.com/EcosistemaUNP/python-ecosistema-turbopdf",
"Repository, https://github.com/EcosistemaUNP/python-ecosistema-turbopdf",
"Documentation, https://github.com/EcosistemaUNP/python-ecosistema-turbopdf#readme",
"Changelog, https://github.com/EcosistemaUNP/python-ecosistema-turbopdf/releases"
] | twine/6.2.0 CPython/3.12.4 | 2026-02-21T02:44:31.141185 | turbopdf-1.5.6.tar.gz | 283,660 | 03/f5/3bd2098797568d472387160d8f7a2da8a9bf5189a3e670c3523bc584af46/turbopdf-1.5.6.tar.gz | source | sdist | null | false | 25f612c10e9f83b308814227f7e86b96 | a33157a7fae7e3f237ddbfd0d1142a361491c6f98541f68a05f23a822e0785ad | 03f53bd2098797568d472387160d8f7a2da8a9bf5189a3e670c3523bc584af46 | null | [] | 238 |
2.4 | rdfvr | 0.3.5 | RDFVR: RDF Validation Report | RDFVR
======
**RDFVR** (**RDF** **V**alidation **R**eport) is a pure Python package to generate readable reports for the validation of RDF graphs against Shapes Constraint Language (SHACL) graphs.
## Installation
Install with *pip* (Python 3 pip installer `pip3`):
```bash
$ pip3 install rdfvr
```
## Command Line Use
```bash
$ rdfvr -f /path/to/rdf_graph -ff rdf_graph_format -s path/to/schema_of_rdf_graph -sf schema_of_rdf_graph_format -m path/to/mappings -o path/to/report -of report_format --no-datetime
```
Where
- `-f` is the path of the RDF graph file to be validated (also supports multiple files)
- `-ff` is the format of the RDF graph file (also supports multiple file formats when we have multiple RDF graph files)
- `-s` is the path of the RDF graph's schema
- `-sf` is the format of the RDF graph's schema
- `-m` is the path of mappings to shorten the report
- `-o` is the path of the validation report without extension (also supports multiple files when we have multiple RDF graph files)
- `-of` is the format of the validation report (also supports multiple file formats when we have multiple RDF graph files)
- `--no-datetime` disables the datetime line in the output
Full CLI Usage Options:
```bash
$ rdfvr -h
usage: rdfvr [-h] [--file FILE] [--schema SCHEMA] [--fileformat FILEFORMAT]
[--schemaformat {xml,n3,turtle,nt,pretty-xml,trix,trig,nquads,json-ld,hext}]
[--mappings MAPPINGS] [--output OUTPUT] [--outputformat OUTPUTFORMAT]
[--no-datetime]
optional arguments:
-h, --help show this help message and exit
--file FILE, -f FILE File(s) of the RDF graph(s) to be validated (list[str] | str ): please use comma (no space) to split multiple file paths (e.g.
file1,file2,file3).
--schema SCHEMA, -s SCHEMA
Schema of the RDF graph, i.e., Shapes Constraint Language (SHACL) graph (str): path of the file.
--fileformat FILEFORMAT, -ff FILEFORMAT
File format(s) of the RDF graph(s) to be validated (list[str] | str ). Orders should be consistent with the input of --file. Default format is
json-ld. If all input files have the same format, only need to write once.
--schemaformat {xml,n3,turtle,nt,pretty-xml,trix,trig,nquads,json-ld,hext}, -sf {xml,n3,turtle,nt,pretty-xml,trix,trig,nquads,json-ld,hext}
File format of the schema (str). Default format is ttl.
--mappings MAPPINGS, -m MAPPINGS
File of the mappings to shorten the report (str): path of the JSON file, where the key is the original text and the value is the shorter text.
--output OUTPUT, -o OUTPUT
Path(s) of the validation report without extension (list[str] | str ). If no value, then output will be a string. Please use comma (no space) to split
multiple file paths (e.g. file1,file2,file3).
--outputformat OUTPUTFORMAT, -of OUTPUTFORMAT
File format(s) of the output, validation report (list[str] | str ). Orders should be consistent with the input of --output. Default format is
txt. Each item can only be one of {txt,html}. Please use comma (no space) to split multiple formats (e.g. format1,format2,format3). If all
output files have the same format, only need to write once.
--no-datetime Disable the datetime line in the output.
```
## Python Module Use
You can call the `validation_report` function of the `rdfvr` module as follows:
```python
from rdfvr import validation_report
validation_report(file_path, file_format, schema, schema_format, output_path, output_format, mappings, no_datetime)
```
Where
- `file_path` is the file path (string) of an RDF graph
- `file_format` is the format (string) of the RDF graph file
- `schema` is the file path (string) of the RDF graph's schema
- `schema_format` is the format (string) of the schema file
- `output_path` is the file path (string) of the validation report without extension
- `output_format` is the format (string) of the validation report, i.e., `txt` or `html`
- `mappings` is the mappings (dictionary) to shorten the report
- `no_datetime` is a boolean (default `False`) to disable the datetime line in the output
The return value is `None`.
The output will be either a `txt` file, a `html` file, or a `string` print in Bash.
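The mappings file is plain JSON whose keys are the original text and whose values are the shorter replacements. A small sketch that prints such a file (the IRIs and prefixes below are purely illustrative):

```python
import json

# Hypothetical mappings: keys are the original text, values the shorter text.
mappings = {
    "http://www.w3.org/ns/shacl#": "sh:",
    "http://example.org/ontology#": "ex:",
}
print(json.dumps(mappings, indent=2))
```

Save the output as, say, `mappings.json` and pass it with `-m mappings.json`.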
| text/markdown | Meng Li, Timothy McPhillips, Bertram Ludäscher | null | null | null | null | null | [
"Operating System :: OS Independent",
"Programming Language :: Python :: 3",
"License :: OSI Approved :: MIT License"
] | [] | null | null | >=3.10 | [] | [] | [] | [
"pandas>=2.1.0",
"pyshacl>=0.23.0",
"rdflib>=6.3.2",
"pyvis>=0.3.2",
"networkx>=3.3",
"pytest>=7.0; extra == \"dev\"",
"twine>=4.0.2; extra == \"dev\""
] | [] | [] | [] | [
"Repository, https://github.com/cirss/rdfvr"
] | twine/6.2.0 CPython/3.12.1 | 2026-02-21T02:43:44.366358 | rdfvr-0.3.5.tar.gz | 6,539 | 15/f6/7bf8af71c36a83c3a9b7c56c7d34c48deb5b395829ac23da468509709b52/rdfvr-0.3.5.tar.gz | source | sdist | null | false | a29b7588cc3185e7c708e695107a1c05 | 9c4ae876216c0dbaeeba9e791fd9fe7c68e5e6f0582e07c132ba814f1f68f3b2 | 15f67bf8af71c36a83c3a9b7c56c7d34c48deb5b395829ac23da468509709b52 | null | [] | 235 |
2.1 | odoo-addon-account-payment-partner | 18.0.1.0.4.1 | Adds payment mode on partners and invoices | .. image:: https://odoo-community.org/readme-banner-image
:target: https://odoo-community.org/get-involved?utm_source=readme
:alt: Odoo Community Association
=======================
Account Payment Partner
=======================
..
!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
!! This file is generated by oca-gen-addon-readme !!
!! changes will be overwritten. !!
!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
!! source digest: sha256:7f8930df5aa73a09d04874189fd03367c9cc5871736081b817ec6269f37ada2a
!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
.. |badge1| image:: https://img.shields.io/badge/maturity-Mature-brightgreen.png
:target: https://odoo-community.org/page/development-status
:alt: Mature
.. |badge2| image:: https://img.shields.io/badge/license-AGPL--3-blue.png
:target: http://www.gnu.org/licenses/agpl-3.0-standalone.html
:alt: License: AGPL-3
.. |badge3| image:: https://img.shields.io/badge/github-OCA%2Fbank--payment-lightgray.png?logo=github
:target: https://github.com/OCA/bank-payment/tree/18.0/account_payment_partner
:alt: OCA/bank-payment
.. |badge4| image:: https://img.shields.io/badge/weblate-Translate%20me-F47D42.png
:target: https://translation.odoo-community.org/projects/bank-payment-18-0/bank-payment-18-0-account_payment_partner
:alt: Translate me on Weblate
.. |badge5| image:: https://img.shields.io/badge/runboat-Try%20me-875A7B.png
:target: https://runboat.odoo-community.org/builds?repo=OCA/bank-payment&target_branch=18.0
:alt: Try me on Runboat
|badge1| |badge2| |badge3| |badge4| |badge5|
This module adds several fields:
- the *Supplier Payment Mode* and *Customer Payment Mode* on Partners,
- the *Payment Mode* on Invoices,
- the *Show bank account* on Payment Mode,
- the *# of digits for customer bank account* on Payment Mode,
- the *Bank account from journals* on Payment Mode,
- the *Payment mode* on Invoices Analysis.
On a Payment Order, in the wizard *Select Invoices to Pay*, the invoices
will be filtered per Payment Mode.
Allows printing on the invoice the account number that the payment (via
SEPA direct debit) is going to be charged to, so the customer knows that
information. However, some customers don't want everyone looking at the
invoice to see the full account number (and even the GDPR may have a word
to say about that), which is the reason behind the several options.
**Table of contents**
.. contents::
:local:
Usage
=====
You are able to add a payment mode directly on a partner.
This payment mode is automatically associated to the invoice related to
the partner. This default value could be changed in a draft invoice.
When you create a payment order, only invoices related to chosen payment
mode are displayed.
Invoices without any payment mode are displayed too.
Changelog
=========
10.0.1.2.0 (2018-05-24)
-----------------------
- [IMP] Add options to show partner bank account in invoice report
(`#458 <https://github.com/OCA/bank-payment/issues/458>`__)
Bug Tracker
===========
Bugs are tracked on `GitHub Issues <https://github.com/OCA/bank-payment/issues>`_.
In case of trouble, please check there if your issue has already been reported.
If you spotted it first, help us to smash it by providing a detailed and welcomed
`feedback <https://github.com/OCA/bank-payment/issues/new?body=module:%20account_payment_partner%0Aversion:%2018.0%0A%0A**Steps%20to%20reproduce**%0A-%20...%0A%0A**Current%20behavior**%0A%0A**Expected%20behavior**>`_.
Do not contact contributors directly about support or help with technical issues.
Credits
=======
Authors
-------
* Akretion
* Tecnativa
Contributors
------------
- Alexis de Lattre <alexis.delattre@akretion.com>
- Raphaël Valyi
- Stefan Rijnhart (Therp)
- Alexandre Fayolle
- Stéphane Bidoul <stephane.bidoul@acsone.eu>
- Danimar Ribeiro
- Angel Moya <angel.moya@domatix.com>
- `Tecnativa <https://www.tecnativa.com>`__:
- Pedro M. Baeza
- Carlos Dauden
- Víctor Martínez
- `DynApps <https://www.dynapps.be>`__:
- Raf Ven <raf.ven@dynapps.be>
- Marçal Isern <marsal.isern@qubiq.es>
- Miquel Alzanillas <malzanillas@apsl.net>
Maintainers
-----------
This module is maintained by the OCA.
.. image:: https://odoo-community.org/logo.png
:alt: Odoo Community Association
:target: https://odoo-community.org
OCA, or the Odoo Community Association, is a nonprofit organization whose
mission is to support the collaborative development of Odoo features and
promote its widespread use.
This module is part of the `OCA/bank-payment <https://github.com/OCA/bank-payment/tree/18.0/account_payment_partner>`_ project on GitHub.
You are welcome to contribute. To learn how please visit https://odoo-community.org/page/Contribute.
| text/x-rst | Akretion, Tecnativa, Odoo Community Association (OCA) | support@odoo-community.org | null | null | AGPL-3 | null | [
"Programming Language :: Python",
"Framework :: Odoo",
"Framework :: Odoo :: 18.0",
"License :: OSI Approved :: GNU Affero General Public License v3",
"Development Status :: 6 - Mature"
] | [] | https://github.com/OCA/bank-payment | null | >=3.10 | [] | [] | [] | [
"odoo-addon-account_payment_mode==18.0.*",
"odoo==18.0.*"
] | [] | [] | [] | [] | twine/6.2.0 CPython/3.12.3 | 2026-02-21T02:42:54.129104 | odoo_addon_account_payment_partner-18.0.1.0.4.1-py3-none-any.whl | 87,115 | 0a/87/aa00fcdea83450f657cdbb308d48abdab28dedba6d184d80cde91b4073c7/odoo_addon_account_payment_partner-18.0.1.0.4.1-py3-none-any.whl | py3 | bdist_wheel | null | false | df3c9409794604cade06ad6b2a7fca92 | ee464e2fad3a598fd069830c951dcf064cb2258ac0cb9284851202198561205a | 0a87aa00fcdea83450f657cdbb308d48abdab28dedba6d184d80cde91b4073c7 | null | [] | 81 |
2.4 | mcp-checkpoint | 2.1.1 | A comprehensive MCP configuration scanner with client-aware security analysis. | <p align="center" style="margin-bottom: 0; line-height: 0;">
<img src="https://github.com/aira-security/mcp-checkpoint/blob/main/mcp-checkpoint.png" width="350">
</p>
<h3 align="center" style="margin-top: -20px; margin-bottom: 70px;">
MCP Checkpoint
</h3>
<br>
## :rocket: Overview
MCP Checkpoint is a comprehensive security scanner for the Model Context Protocol (MCP). It automatically discovers, analyzes, and secures MCP servers integrated with all major Agentic IDEs, Agents and Clients.
<br>

## :bulb: Features
- **🔍 Auto-Discovery**: Finds known MCP configurations for popular Agentic IDEs like Cursor, Windsurf, VS Code, Claude Desktop, and more
- **🔧 Tool, Resource & Prompt Inventory**: Connects to MCP servers and catalogs available tools, resources, and prompt templates
- **🛡️ Security Analysis**: Specialized security checks including Prompt Injection, Rug Pull Attack, Cross-server Tool Shadowing, Tool Poisoning, Tool Name Ambiguity, [and more..](#beginner-security-checks)
- **🧭 Baseline Drift Detection**: Captures approved MCP components and detects rug pulls attacks
- **📊 Comprehensive Reporting**: Generates JSON and Markdown reports with actionable findings
- **📜 Audit Trail**: Timestamped baselines and reports for full traceability of changes and findings
## :toolbox: Installation
```bash
pip install mcp-checkpoint
```
## :running: Quick Start
```bash
# Scan all configurations with security analysis (auto-detects baseline.json if present)
mcp-checkpoint scan
# Inspect configurations and generate baseline (defaults to baseline.json)
mcp-checkpoint inspect
# Use custom configuration file
mcp-checkpoint scan --config /path/to/config.json
# Scan multiple configuration files
mcp-checkpoint scan \
--config /path/to/cursor.mcp.json \
--config /path/to/vscode.mcp.json
# Use custom baseline file path
mcp-checkpoint inspect --baseline /path/to/my-baseline.json
mcp-checkpoint scan --baseline /path/to/my-baseline.json
# Generate markdown report
mcp-checkpoint scan --report-type md
# Save to custom file
mcp-checkpoint scan --output my-report.json
mcp-checkpoint scan --report-type md --output my-report.md
```
#### :gear: Command Options
| Option | Description |
|---------------------------|---------------------------------------------------------------------|
| `--config` | Custom configuration file path (can be used multiple times) |
| `--baseline` | Baseline file for drift detection (scan) or creation (inspect) |
| `--report-type {json,md}` | Output format (default: json) |
| `--output` | Custom output file path |
| `--verbose` | Detailed terminal output |
| `--show-logs` | Display debug logs in terminal |
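As a quick post-processing sketch, the JSON report produced by `scan --output` can be filtered with the standard library. Note the `findings`/`severity` field names below are assumptions for illustration; the real report schema may differ:

```python
import json

# Hypothetical report structure; the actual schema emitted by
# `mcp-checkpoint scan --output report.json` may differ.
report = json.loads("""
{
  "findings": [
    {"check": "prompt_injection", "severity": "high", "server": "demo"},
    {"check": "hardcoded_secrets", "severity": "low", "server": "demo"}
  ]
}
""")

# Keep only high-severity findings for triage
high = [f for f in report["findings"] if f["severity"] == "high"]
for f in high:
    print(f"[{f['severity'].upper()}] {f['server']}: {f['check']}")
```

The same pattern works for piping findings into a CI gate, e.g. failing the build when `high` is non-empty.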
## :beginner: Security Checks
### 🛡️ Standard Checks
- **Prompt Injection**
- **Indirect Prompt Injection**
- **Cross-Server Tool Shadowing**
- **Tool Poisoning**
- **Prompt Injection in Tool Description, Name and Args**
- **Command Injection in Tool Description, Name and Args**
- **Tool Name Ambiguity**
- **Command Injection**
- **Excessive Tool Permissions**
- **Hardcoded Secrets**
### 🧭 Baseline Checks
Detects deviations from approved MCP components (requires a baseline generated via `inspect` mode):
- **Rug Pull Attack**
- **Tool Modified**
- **Resource Modified**
- **Resource Template Modified**
- **Prompt Modified**
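Conceptually, baseline drift detection boils down to fingerprinting each approved component during `inspect` and comparing fingerprints on later scans. A minimal illustration of that idea using `hashlib` (not the tool's actual implementation):

```python
import hashlib
import json

def fingerprint(component: dict) -> str:
    """Stable hash of a component's definition (name, description, etc.)."""
    canonical = json.dumps(component, sort_keys=True)
    return hashlib.sha256(canonical.encode()).hexdigest()

# Baseline captured during `inspect`
baseline = {
    "fetch_url": fingerprint({"name": "fetch_url", "description": "Fetch a URL"}),
}

# The same tool as seen during a later `scan` -- its description has changed
current = {
    "fetch_url": fingerprint({"name": "fetch_url", "description": "Fetch a URL and leak env vars"}),
}

# Any mismatch between baseline and current fingerprints is drift
drifted = [name for name in baseline if current.get(name) != baseline[name]]
print(drifted)
```

A silently changed tool description is exactly the signature of a rug pull attack, which is why the baseline is captured only after explicit approval.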
### :page_with_curl: Logging
Logs are automatically saved to `logs/mcp_checkpoint.log`:
```bash
# Default: logs saved to file only
mcp-checkpoint scan
# Show logs in terminal too
mcp-checkpoint scan --show-logs
```
### :test_tube: Demo
Test MCP Checkpoint using our intentionally vulnerable MCP servers. For details, see the [demo guide](demo-mcp-server/README.md).
### :zap: Want More?
This open-source version covers static MCP configuration scanning. For teams that need deeper protection, [Aira Security](https://airasecurity.ai) offers a full enterprise platform with:
| Capability | Open Source | Aira Platform |
|---|:---:|:---:|
| MCP config scanning | ✅ | ✅ |
| Prompt & command injection detection | ✅ | ✅ |
| Tool poisoning & shadowing checks | ✅ | ✅ |
| Hardcoded secrets detection | ✅ | ✅ |
| **Runtime enforcement & blocking** | ❌ | ✅ |
| **Agent behavior policy enforcement** (toxic flow analysis) | ❌ | ✅ |
| **Skills scanner** (agentic workflow & capability analysis) | ❌ | ✅ |
| **Custom security policies** | ❌ | ✅ |
| **Aira dashboard** (centralized visibility & alerting) | ❌ | ✅ |
| **Complete Agentic Security** (beyond MCP — Agents, Workflows, and Skills) | ❌ | ✅ |
🚀 [See Aira in Action](https://calendly.com/mohan-/aira-security) to experience the full platform.
### :star2: Community
[Join our Slack](https://join.slack.com/t/airasecurityc-jwt3316/shared_invite/zt-3iar5tm3k-R5js~WfnDIHRNtSgd7D0Bg) - a space for developers and security engineers building together to secure AI agents.
### :question: FAQs
**Q: Is my source code ever shared, or does everything run locally?**
MCP Checkpoint runs entirely locally. Inspect and scan modes analyze your MCP configurations, detect MCP servers integrated with your agents, and evaluate them directly on your machine. Prompt injection checks use our open-source model `Aira-security/FT-Llama-Prompt-Guard-2`, downloaded from Hugging Face to your local environment, ensuring your data and code are never shared externally.
### :balance_scale: License
Distributed under the Apache 2.0 License. See [LICENSE](https://github.com/aira-security/mcp-checkpoint/blob/main/LICENSE) for more information.
| text/markdown | null | Aira Security <founders@airasecurity.ai> | null | null | null | mcp, model-context-protocol, security, prompt-injection, mcp-security, ai-agent-security, aira-security, agent-security | [
"Development Status :: 5 - Production/Stable",
"Intended Audience :: Developers",
"Operating System :: OS Independent",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: 3.13",
"Programming Language :: Python :: 3.14",
"Programming Language :: Python :: 3.15"
] | [] | null | null | <3.16,>=3.10 | [] | [] | [] | [
"fastmcp~=2.13.0",
"pyyaml~=6.0.2",
"thefuzz~=0.22.1",
"rich~=14.1.0",
"transformers~=4.57.1",
"torch~=2.9.0"
] | [] | [] | [] | [
"Homepage, https://github.com/aira-security/mcp-checkpoint",
"Repository, https://github.com/aira-security/mcp-checkpoint",
"Issues, https://github.com/aira-security/mcp-checkpoint/issues"
] | uv/0.8.3 | 2026-02-21T02:42:52.170466 | mcp_checkpoint-2.1.1.tar.gz | 50,281 | 09/15/421617afb20708c31f1b9ed577f0962dcacd640dc81cc73ac1b5e6e8eb9f/mcp_checkpoint-2.1.1.tar.gz | source | sdist | null | false | affae8c95263f2ba510b444cac0e7486 | 914eb6a95251a3e67fea70104849fe8b7c468d32b26040462a5dd61f9cab4096 | 0915421617afb20708c31f1b9ed577f0962dcacd640dc81cc73ac1b5e6e8eb9f | Apache-2.0 | [
"LICENSE"
] | 236 |
2.4 | moovio_sdk | 26.4.0.dev11 | Python Client SDK Generated by Speakeasy. | # Moov Python
The official SDK for interacting with the Moov API.
<div align="left">
<a href="https://www.speakeasy.com/?utm_source=moovio-sdk&utm_campaign=python"><img src="https://custom-icon-badges.demolab.com/badge/-Built%20By%20Speakeasy-212015?style=for-the-badge&logoColor=FBE331&logo=speakeasy&labelColor=545454" /></a>
<a href="https://opensource.org/licenses/Apache-2.0">
<img src="https://img.shields.io/badge/License-Apache%202.0-blue.svg" />
</a>
</div>
<!-- Start Summary [summary] -->
## Summary
Moov API: Moov is a platform that enables developers to integrate all aspects of money movement with ease and speed.
The Moov API makes it simple for platforms to send, receive, and store money. Our API is based upon REST
principles, returns JSON responses, and uses standard HTTP response codes. To learn more about how Moov
works at a high level, read our [concepts](https://docs.moov.io/guides/get-started/glossary/) guide.
For more information about the API: [Moov Guides and API Documentation](https://docs.moov.io/)
<!-- End Summary [summary] -->
<!-- Start Table of Contents [toc] -->
## Table of Contents
<!-- $toc-max-depth=2 -->
* [Moov Python](https://github.com/moovfinancial/moov-python/blob/master/#moov-python)
* [SDK Installation](https://github.com/moovfinancial/moov-python/blob/master/#sdk-installation)
* [IDE Support](https://github.com/moovfinancial/moov-python/blob/master/#ide-support)
* [SDK Example Usage](https://github.com/moovfinancial/moov-python/blob/master/#sdk-example-usage)
* [Authentication](https://github.com/moovfinancial/moov-python/blob/master/#authentication)
* [Available Resources and Operations](https://github.com/moovfinancial/moov-python/blob/master/#available-resources-and-operations)
* [File uploads](https://github.com/moovfinancial/moov-python/blob/master/#file-uploads)
* [Retries](https://github.com/moovfinancial/moov-python/blob/master/#retries)
* [Error Handling](https://github.com/moovfinancial/moov-python/blob/master/#error-handling)
* [Server Selection](https://github.com/moovfinancial/moov-python/blob/master/#server-selection)
* [Custom HTTP Client](https://github.com/moovfinancial/moov-python/blob/master/#custom-http-client)
* [Resource Management](https://github.com/moovfinancial/moov-python/blob/master/#resource-management)
* [Debugging](https://github.com/moovfinancial/moov-python/blob/master/#debugging)
* [Development](https://github.com/moovfinancial/moov-python/blob/master/#development)
* [Maturity](https://github.com/moovfinancial/moov-python/blob/master/#maturity)
* [Contributions](https://github.com/moovfinancial/moov-python/blob/master/#contributions)
<!-- End Table of Contents [toc] -->
<!-- Start SDK Installation [installation] -->
## SDK Installation
> [!NOTE]
> **Python version upgrade policy**
>
> Once a Python version reaches its [official end of life date](https://devguide.python.org/versions/), a 3-month grace period is provided for users to upgrade. Following this grace period, the minimum python version supported in the SDK will be updated.
The SDK can be installed with *uv*, *pip*, or *poetry* package managers.
### uv
*uv* is a fast Python package installer and resolver, designed as a drop-in replacement for pip and pip-tools. It's recommended for its speed and modern Python tooling capabilities.
```bash
uv add moovio_sdk
```
### PIP
*PIP* is the default package installer for Python, enabling easy installation and management of packages from PyPI via the command line.
```bash
pip install moovio_sdk
```
### Poetry
*Poetry* is a modern tool that simplifies dependency management and package publishing by using a single `pyproject.toml` file to handle project metadata and dependencies.
```bash
poetry add moovio_sdk
```
### Shell and script usage with `uv`
You can use this SDK in a Python shell with [uv](https://docs.astral.sh/uv/) and the `uvx` command that comes with it like so:
```shell
uvx --from moovio_sdk python
```
It's also possible to write a standalone Python script without needing to set up a whole project like so:
```python
#!/usr/bin/env -S uv run --script
# /// script
# requires-python = ">=3.10"
# dependencies = [
# "moovio_sdk",
# ]
# ///
from moovio_sdk import Moov
sdk = Moov(
# SDK arguments
)
# Rest of script here...
```
Once that is saved to a file, you can run it with `uv run script.py` where
`script.py` can be replaced with the actual file name.
<!-- End SDK Installation [installation] -->
<!-- Start IDE Support [idesupport] -->
## IDE Support
### PyCharm
Generally, the SDK will work well with most IDEs out of the box. However, when using PyCharm, you can enjoy much better integration with Pydantic by installing an additional plugin.
- [PyCharm Pydantic Plugin](https://docs.pydantic.dev/latest/integrations/pycharm/)
<!-- End IDE Support [idesupport] -->
<!-- Start SDK Example Usage [usage] -->
## SDK Example Usage
### Example
```python
# Synchronous Example
from moovio_sdk import Moov
from moovio_sdk.models import components
from moovio_sdk.utils import parse_datetime
with Moov(
x_moov_version="v2024.01.00",
security=components.Security(
username="",
password="",
),
) as moov:
res = moov.accounts.create(account_type=components.CreateAccountType.BUSINESS, profile=components.CreateProfile(
business=components.CreateBusinessProfile(
legal_business_name="Whole Body Fitness LLC",
),
), metadata={
"optional": "metadata",
}, terms_of_service={
"manual": {
"accepted_date": parse_datetime("2026-07-27T08:57:17.388Z"),
"accepted_ip": "172.217.2.46",
"accepted_user_agent": "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/94.0.4606.71 Safari/537.36",
"accepted_domain": "https://rundown-depot.org/",
},
}, customer_support={
"phone": {
"number": "8185551212",
"country_code": "1",
},
"email": "jordan.lee@classbooker.dev",
"address": {
"address_line1": "123 Main Street",
"address_line2": "Apt 302",
"city": "Boulder",
"state_or_province": "CO",
"postal_code": "80301",
"country": "US",
},
}, settings={
"card_payment": {
"statement_descriptor": "Whole Body Fitness",
},
"ach_payment": {
"company_name": "WholeBodyFitness",
},
}, mode=components.Mode.PRODUCTION)
# Handle response
print(res)
```
</br>
The same SDK client can also be used to make asynchronous requests by importing asyncio.
```python
# Asynchronous Example
import asyncio
from moovio_sdk import Moov
from moovio_sdk.models import components
from moovio_sdk.utils import parse_datetime
async def main():
async with Moov(
x_moov_version="v2024.01.00",
security=components.Security(
username="",
password="",
),
) as moov:
res = await moov.accounts.create_async(account_type=components.CreateAccountType.BUSINESS, profile=components.CreateProfile(
business=components.CreateBusinessProfile(
legal_business_name="Whole Body Fitness LLC",
),
), metadata={
"optional": "metadata",
}, terms_of_service={
"manual": {
"accepted_date": parse_datetime("2026-07-27T08:57:17.388Z"),
"accepted_ip": "172.217.2.46",
"accepted_user_agent": "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/94.0.4606.71 Safari/537.36",
"accepted_domain": "https://rundown-depot.org/",
},
}, customer_support={
"phone": {
"number": "8185551212",
"country_code": "1",
},
"email": "jordan.lee@classbooker.dev",
"address": {
"address_line1": "123 Main Street",
"address_line2": "Apt 302",
"city": "Boulder",
"state_or_province": "CO",
"postal_code": "80301",
"country": "US",
},
}, settings={
"card_payment": {
"statement_descriptor": "Whole Body Fitness",
},
"ach_payment": {
"company_name": "WholeBodyFitness",
},
}, mode=components.Mode.PRODUCTION)
# Handle response
print(res)
asyncio.run(main())
```
<!-- End SDK Example Usage [usage] -->
<!-- Start Authentication [security] -->
## Authentication
### Per-Client Security Schemes
This SDK supports the following security scheme globally:
| Name | Type | Scheme | Environment Variable |
| ------------------------- | ---- | ---------- | ----------------------------------- |
| `username`<br/>`password` | http | HTTP Basic | `MOOV_USERNAME`<br/>`MOOV_PASSWORD` |
You can set the security parameters through the `security` optional parameter when initializing the SDK client instance. For example:
```python
from moovio_sdk import Moov
from moovio_sdk.models import components
from moovio_sdk.utils import parse_datetime
with Moov(
security=components.Security(
username="",
password="",
),
x_moov_version="v2024.01.00",
) as moov:
res = moov.accounts.create(account_type=components.CreateAccountType.BUSINESS, profile=components.CreateProfile(
business=components.CreateBusinessProfile(
legal_business_name="Whole Body Fitness LLC",
),
), metadata={
"optional": "metadata",
}, terms_of_service={
"manual": {
"accepted_date": parse_datetime("2026-07-27T08:57:17.388Z"),
"accepted_ip": "172.217.2.46",
"accepted_user_agent": "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/94.0.4606.71 Safari/537.36",
"accepted_domain": "https://rundown-depot.org/",
},
}, customer_support={
"phone": {
"number": "8185551212",
"country_code": "1",
},
"email": "jordan.lee@classbooker.dev",
"address": {
"address_line1": "123 Main Street",
"address_line2": "Apt 302",
"city": "Boulder",
"state_or_province": "CO",
"postal_code": "80301",
"country": "US",
},
}, settings={
"card_payment": {
"statement_descriptor": "Whole Body Fitness",
},
"ach_payment": {
"company_name": "WholeBodyFitness",
},
}, mode=components.Mode.PRODUCTION)
# Handle response
print(res)
```
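Per the table above, credentials can also be supplied via the `MOOV_USERNAME` and `MOOV_PASSWORD` environment variables. A stdlib-only sketch of that fallback (the helper below is illustrative, not part of the SDK):

```python
import os

def load_moov_credentials() -> tuple[str, str]:
    """Read Basic-auth credentials from the environment; empty strings if unset."""
    return (
        os.environ.get("MOOV_USERNAME", ""),
        os.environ.get("MOOV_PASSWORD", ""),
    )

username, password = load_moov_credentials()
```

The resulting values can then be passed into `components.Security(username=..., password=...)` as shown in the example above, keeping secrets out of source code.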
<!-- End Authentication [security] -->
<!-- Start Available Resources and Operations [operations] -->
## Available Resources and Operations
<details open>
<summary>Available methods</summary>
### [AccountTerminalApplications](https://github.com/moovfinancial/moov-python/blob/master/docs/sdks/accountterminalapplications/README.md)
* [link](https://github.com/moovfinancial/moov-python/blob/master/docs/sdks/accountterminalapplications/README.md#link) - Link an account with a terminal application.
To access this endpoint using an [access token](https://docs.moov.io/api/authentication/access-tokens/)
you'll need to specify the `/accounts/{accountID}/terminal-applications.write` scope.
* [list](https://github.com/moovfinancial/moov-python/blob/master/docs/sdks/accountterminalapplications/README.md#list) - Retrieve all terminal applications linked to a specific account.
To access this endpoint using an [access token](https://docs.moov.io/api/authentication/access-tokens/)
you'll need to specify the `/accounts/{accountID}/terminal-applications.read` scope.
* [get](https://github.com/moovfinancial/moov-python/blob/master/docs/sdks/accountterminalapplications/README.md#get) - Verifies if a specific Terminal Application is linked to an Account. This endpoint acts as a validation check for the link's existence.
To access this endpoint using an [access token](https://docs.moov.io/api/authentication/access-tokens/)
you'll need to specify the `/accounts/{accountID}/terminal-applications.read` scope.
* [get_configuration](https://github.com/moovfinancial/moov-python/blob/master/docs/sdks/accountterminalapplications/README.md#get_configuration) - Fetch the configuration for a given Terminal Application linked to a specific Account.
To access this endpoint using an [access token](https://docs.moov.io/api/authentication/access-tokens/)
you'll need to specify the `/accounts/{accountID}/terminal-configuration.read` scope.
### [Accounts](https://github.com/moovfinancial/moov-python/blob/master/docs/sdks/accounts/README.md)
* [create](https://github.com/moovfinancial/moov-python/blob/master/docs/sdks/accounts/README.md#create) - You can create **business** or **individual** accounts for your users (i.e., customers, merchants) by passing the required
information to Moov. Requirements differ per account type and requested [capabilities](https://docs.moov.io/guides/accounts/capabilities/requirements/).
If you're requesting the `wallet`, `send-funds`, `collect-funds`, or `card-issuing` capabilities, you'll need to:
+ Send Moov the user [platform terms of service agreement](https://docs.moov.io/guides/accounts/requirements/platform-agreement/) acceptance.
This can be done upon account creation, or by [patching](https://docs.moov.io/api/moov-accounts/accounts/patch/) the account using the `termsOfService` field.
If you're creating a business account with the business type `llc`, `partnership`, or `privateCorporation`, you'll need to:
+ Provide [business representatives](https://docs.moov.io/api/moov-accounts/representatives/) after creating the account.
+ [Patch](https://docs.moov.io/api/moov-accounts/accounts/patch/) the account to indicate that business representative ownership information is complete.
Visit our documentation to read more about [creating accounts](https://docs.moov.io/guides/accounts/create-accounts/) and [verification requirements](https://docs.moov.io/guides/accounts/requirements/identity-verification/).
Note that the `mode` field (for production or sandbox) is only required when creating a _facilitator_ account. All non-facilitator account requests will ignore the mode field and be set to the calling facilitator's mode.
To access this endpoint using an [access token](https://docs.moov.io/api/authentication/access-tokens/) you'll need
to specify the `/accounts.write` scope.
* [list](https://github.com/moovfinancial/moov-python/blob/master/docs/sdks/accounts/README.md#list) - List or search accounts to which the caller is connected.
All supported query parameters are optional. If none are provided the response will include all connected accounts.
Pagination is supported via the `skip` and `count` query parameters. Searching by name and email will overlap and
return results based on relevance. Accounts with AccountType `guest` will not be included in the response.
To access this endpoint using an [access token](https://docs.moov.io/api/authentication/access-tokens/) you'll need
to specify the `/accounts.read` scope.
* [get](https://github.com/moovfinancial/moov-python/blob/master/docs/sdks/accounts/README.md#get) - Retrieves details for the account with the specified ID.
To access this endpoint using an [access token](https://docs.moov.io/api/authentication/access-tokens/) you'll need
to specify the `/accounts/{accountID}/profile.read` scope.
* [update](https://github.com/moovfinancial/moov-python/blob/master/docs/sdks/accounts/README.md#update) - When profile data **can** be updated:
+ For unverified accounts, all profile data can be edited.
+ During the verification process, missing or incomplete profile data can be edited.
+ Verified accounts can only add missing profile data.
When profile data **can't** be updated:
+ Verified accounts cannot change any existing profile data.
If you need to update information in a locked state, please contact Moov support.
To access this endpoint using an [access token](https://docs.moov.io/api/authentication/access-tokens/) you'll need
to specify the `/accounts/{accountID}/profile.write` scope.
* [disconnect](https://github.com/moovfinancial/moov-python/blob/master/docs/sdks/accounts/README.md#disconnect) - This will sever the connection between you and the account specified, and it will no longer be listed as
active in the list of accounts. This also means you'll only have read-only access to the account going
forward for reporting purposes.
To access this endpoint using an [access token](https://docs.moov.io/api/authentication/access-tokens/)
you'll need to specify the `/accounts/{accountID}/profile.disconnect` scope.
* [list_connected](https://github.com/moovfinancial/moov-python/blob/master/docs/sdks/accounts/README.md#list_connected) - List or search accounts to which the caller is connected.
All supported query parameters are optional. If none are provided the response will include all connected accounts.
Pagination is supported via the `skip` and `count` query parameters. Searching by name and email will overlap and
return results based on relevance. Accounts with AccountType `guest` will not be included in the response.
To access this endpoint using an [access token](https://docs.moov.io/api/authentication/access-tokens/) you'll need
to specify the `/accounts.read` scope.
* [connect](https://github.com/moovfinancial/moov-python/blob/master/docs/sdks/accounts/README.md#connect) - Shares access scopes from the account specified to the caller, establishing a connection
between the two accounts with the specified permissions.
* [get_countries](https://github.com/moovfinancial/moov-python/blob/master/docs/sdks/accounts/README.md#get_countries) - Retrieve the specified countries of operation for an account.
To access this endpoint using an [access token](https://docs.moov.io/api/authentication/access-tokens/)
you'll need to specify the `/accounts/{accountID}/profile.read` scope.
* [assign_countries](https://github.com/moovfinancial/moov-python/blob/master/docs/sdks/accounts/README.md#assign_countries) - Assign the countries of operation for an account.
This endpoint will always overwrite the previously assigned values.
To access this endpoint using an [access token](https://docs.moov.io/api/authentication/access-tokens/)
you'll need to specify the `/accounts/{accountID}/profile.write` scope.
* [get_merchant_processing_agreement](https://github.com/moovfinancial/moov-python/blob/master/docs/sdks/accounts/README.md#get_merchant_processing_agreement) - Retrieve a merchant account's processing agreement.
To access this endpoint using an [access token](https://docs.moov.io/api/authentication/access-tokens/)
you'll need to specify the `/accounts/{accountID}/profile.read` scope.
* [get_terms_of_service_token](https://github.com/moovfinancial/moov-python/blob/master/docs/sdks/accounts/README.md#get_terms_of_service_token) - Generates a non-expiring token that can then be used to accept Moov's terms of service.
This token can only be generated via API. Any Moov account requesting the collect funds, send funds, wallet,
or card issuing capabilities must accept Moov's terms of service, then have the generated terms of service
token patched to the account. Read more in our [documentation](https://docs.moov.io/guides/accounts/requirements/platform-agreement/).
### [Adjustments](https://github.com/moovfinancial/moov-python/blob/master/docs/sdks/adjustments/README.md)
* [list](https://github.com/moovfinancial/moov-python/blob/master/docs/sdks/adjustments/README.md#list) - List adjustments associated with a Moov account.
To access this endpoint using an [access token](https://docs.moov.io/api/authentication/access-tokens/)
you'll need to specify the `/accounts/{accountID}/wallets.read` scope.
* [get](https://github.com/moovfinancial/moov-python/blob/master/docs/sdks/adjustments/README.md#get) - Retrieve a specific adjustment associated with a Moov account.
To access this endpoint using an [access token](https://docs.moov.io/api/authentication/access-tokens/)
you'll need to specify the `/accounts/{accountID}/wallets.read` scope.
### [ApplePay](https://github.com/moovfinancial/moov-python/blob/master/docs/sdks/applepay/README.md)
* [register_merchant_domains](https://github.com/moovfinancial/moov-python/blob/master/docs/sdks/applepay/README.md#register_merchant_domains) - Add domains to be registered with Apple Pay.
Any domains that will be used to accept payments must first be [verified](https://docs.moov.io/guides/sources/cards/apple-pay/#register-your-domains)
with Apple.
To access this endpoint using an [access token](https://docs.moov.io/api/authentication/access-tokens/)
you'll need to specify the `/accounts/{accountID}/apple-pay.write` scope.
* [update_merchant_domains](https://github.com/moovfinancial/moov-python/blob/master/docs/sdks/applepay/README.md#update_merchant_domains) - Add or remove domains to be registered with Apple Pay.
Any domains that will be used to accept payments must first be [verified](https://docs.moov.io/guides/sources/cards/apple-pay/#register-your-domains)
with Apple.
To access this endpoint using an [access token](https://docs.moov.io/api/authentication/access-tokens/)
you'll need to specify the `/accounts/{accountID}/apple-pay.write` scope.
* [get_merchant_domains](https://github.com/moovfinancial/moov-python/blob/master/docs/sdks/applepay/README.md#get_merchant_domains) - Get domains registered with Apple Pay.
Read our [Apple Pay tutorial](https://docs.moov.io/guides/sources/cards/apple-pay/#register-your-domains) to learn more.
To access this endpoint using an [access token](https://docs.moov.io/api/authentication/access-tokens/)
you'll need to specify the `/accounts/{accountID}/apple-pay.read` scope.
* [create_session](https://github.com/moovfinancial/moov-python/blob/master/docs/sdks/applepay/README.md#create_session) - Create a session with Apple Pay to facilitate a payment.
Read our [Apple Pay tutorial](https://docs.moov.io/guides/sources/cards/apple-pay/#register-your-domains) to learn more.
A successful response from this endpoint should be passed through to Apple Pay unchanged.
To access this endpoint using an [access token](https://docs.moov.io/api/authentication/access-tokens/)
you'll need to specify the `/accounts/{accountID}/apple-pay.write` scope.
* [link_token](https://github.com/moovfinancial/moov-python/blob/master/docs/sdks/applepay/README.md#link_token) - Connect an Apple Pay token to the specified account.
Read our [Apple Pay tutorial](https://docs.moov.io/guides/sources/cards/apple-pay/#register-your-domains) to learn more.
The `token` data is defined by Apple Pay and should be passed through from Apple Pay's response unmodified.
To access this endpoint using an [access token](https://docs.moov.io/api/authentication/access-tokens/)
you'll need to specify the `/accounts/{accountID}/cards.write` scope.
### [Authentication](https://github.com/moovfinancial/moov-python/blob/master/docs/sdks/authentication/README.md)
* [revoke_access_token](https://github.com/moovfinancial/moov-python/blob/master/docs/sdks/authentication/README.md#revoke_access_token) - Revoke an auth token.
Allows clients to notify the authorization server that a previously obtained refresh or access token is no longer needed.
* [create_access_token](https://github.com/moovfinancial/moov-python/blob/master/docs/sdks/authentication/README.md#create_access_token) - Create or refresh an access token.
### [Avatars](https://github.com/moovfinancial/moov-python/blob/master/docs/sdks/avatars/README.md)
* [get](https://github.com/moovfinancial/moov-python/blob/master/docs/sdks/avatars/README.md#get) - Get avatar image for an account using a unique ID.
To access this endpoint using an [access token](https://docs.moov.io/api/authentication/access-tokens/)
you'll need to specify the `/profile-enrichment.read` scope.
### [BankAccounts](https://github.com/moovfinancial/moov-python/blob/master/docs/sdks/bankaccounts/README.md)
* [link](https://github.com/moovfinancial/moov-python/blob/master/docs/sdks/bankaccounts/README.md#link) - Link a bank account to an existing Moov account. Read our [bank accounts guide](https://docs.moov.io/guides/sources/bank-accounts/) to learn more.
It is strongly recommended that callers include the `X-Wait-For` header, set to `payment-method`, if the newly linked
bank-account is intended to be used right away. If this header is not included, the caller will need to poll the [List Payment
Methods](https://docs.moov.io/api/sources/payment-methods/list/)
endpoint to wait for the new payment methods to be available for use.
To access this endpoint using an [access token](https://docs.moov.io/api/authentication/access-tokens/)
you'll need to specify the `/accounts/{accountID}/bank-accounts.write` scope.
* [list](https://github.com/moovfinancial/moov-python/blob/master/docs/sdks/bankaccounts/README.md#list) - List all the bank accounts associated with a particular Moov account.
Read our [bank accounts guide](https://docs.moov.io/guides/sources/bank-accounts/) to learn more.
To access this endpoint using an [access token](https://docs.moov.io/api/authentication/access-tokens/)
you'll need to specify the `/accounts/{accountID}/bank-accounts.read` scope.
* [get](https://github.com/moovfinancial/moov-python/blob/master/docs/sdks/bankaccounts/README.md#get) - Retrieve bank account details (i.e. routing number or account type) associated with a specific Moov account.
Read our [bank accounts guide](https://docs.moov.io/guides/sources/bank-accounts/) to learn more.
To access this endpoint using an [access token](https://docs.moov.io/api/authentication/access-tokens/)
you'll need to specify the `/accounts/{accountID}/bank-accounts.read` scope.
* [disable](https://github.com/moovfinancial/moov-python/blob/master/docs/sdks/bankaccounts/README.md#disable) - Discontinue using a specified bank account linked to a Moov account.
To access this endpoint using an [access token](https://docs.moov.io/api/authentication/access-tokens/)
you'll need to specify the `/accounts/{accountID}/bank-accounts.write` scope.
* [initiate_micro_deposits](https://github.com/moovfinancial/moov-python/blob/master/docs/sdks/bankaccounts/README.md#initiate_micro_deposits) - Micro-deposits help confirm bank account ownership, helping reduce fraud and the risk of unauthorized activity.
Use this method to initiate the micro-deposit verification, sending two small credit transfers to the bank account
you want to confirm.
If you request micro-deposits before 4:15PM ET, they will appear that same day. If you request micro-deposits any
time after 4:15PM ET, they will appear the next banking day. When the two credits are initiated, Moov simultaneously
initiates a debit to recoup the micro-deposits.
Micro-deposits initiated for a `sandbox` bank account will always be `$0.00` / `$0.00` and instantly verifiable once initiated.
You can simulate micro-deposit verification in test mode. See our [test mode](https://docs.moov.io/guides/get-started/test-mode/#micro-deposits)
guide for more information.
To access this endpoint using an [access token](https://docs.moov.io/api/authentication/access-tokens/)
you'll need to specify the `/accounts/{accountID}/bank-accounts.write` scope.
* [complete_micro_deposits](https://github.com/moovfinancial/moov-python/blob/master/docs/sdks/bankaccounts/README.md#complete_micro_deposits) - Complete the micro-deposit validation process by passing the amounts of the two transfers within three tries.
To access this endpoint using an [access token](https://docs.moov.io/api/authentication/access-tokens/)
you'll need to specify the `/accounts/{accountID}/bank-accounts.write` scope.
* [get_verification](https://github.com/moovfinancial/moov-python/blob/master/docs/sdks/bankaccounts/README.md#get_verification) - Retrieve the current status and details of an instant verification, including whether the verification method was instant (RTP or FedNow) or same-day ACH. This helps track the verification process in real time and provides details in case of exceptions.
The status will indicate the following:
- `new`: Verification initiated, credit pending to the payment network
- `sent-credit`: Credit sent, available for verification
- `failed`: Verification failed, description provided, user needs to add a new bank account
- `expired`: Verification expired after 14 days, initiate another verification
- `max-attempts-exceeded`: Five incorrect code attempts exhausted, initiate another verification
To access this endpoint using an [access token](https://docs.moov.io/api/authentication/access-tokens/)
you'll need to specify the `/accounts/{accountID}/bank-accounts.read` scope.
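The five statuses map naturally onto caller actions. A hypothetical dispatch helper (illustrative only, not part of the Moov SDK) makes that flow explicit:

```python
# Hypothetical mapping from verification status to the caller's next step,
# derived from the status list above. Not part of the Moov SDK.
NEXT_STEP = {
    "new": "wait for the credit to reach the payment network",
    "sent-credit": "prompt the user for the verification code",
    "failed": "ask the user to add a new bank account",
    "expired": "initiate another verification",
    "max-attempts-exceeded": "initiate another verification",
}

def next_step(status: str) -> str:
    """Return the recommended caller action for a verification status."""
    try:
        return NEXT_STEP[status]
    except KeyError:
        raise ValueError(f"unknown verification status: {status!r}") from None

print(next_step("sent-credit"))  # prompt the user for the verification code
```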
* [initiate_verification](https://github.com/moovfinancial/moov-python/blob/master/docs/sdks/bankaccounts/README.md#initiate_verification) - Instant micro-deposit verification offers a quick and efficient way to verify bank account ownership.
Send a $0.01 credit with a unique verification code via RTP, FedNow, or same-day ACH, depending on the receiving bank's capabilities. This
feature provides a faster alternative to traditional methods, allowing verification in a single session.
It is recommended to use the `X-Wait-For: rail-response` header to synchronously receive the outcome of the instant credit in the
response payload.
Possible verification methods:
- `instant`: Real-time verification credit sent via RTP or FedNow
- `ach`: Verification credit sent via same-day ACH
Possible statuses:
- `new`: Verification initiated, credit pending
- `sent-credit`: Credit sent, available for verification in the external bank account
- `failed`: Verification failed due to credit rejection/return, details in `exceptionDetails`
To access this endpoint using an [access token](https://docs.moov.io/api/authentication/access-tokens/)
you'll need to specify the `/accounts/{accountID}/bank-accounts.write` scope.
* [complete_verification](https://github.com/moovfinancial/moov-python/blob/master/docs/sdks/bankaccounts/README.md#complete_verification) - Finalize the instant micro-deposit verification by submitting the verification code displayed in the user's bank account.
Upon successful verification, the bank account status will be updated to `verified` and eligible for ACH debit transactions.
The following formats are accepted:
- `MV0000`
- `mv0000`
- `0000`
To access this endpoint using an [access token](https://docs.moov.io/api/authentication/access-tokens/)
you'll need to specify the `/accounts/{accountID}/bank-accounts.write` scope.
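All three accepted formats reduce to the same four-digit code. A small hypothetical helper (not part of the Moov SDK) shows one way a caller might normalize user input before submitting it:

```python
import re

def normalize_verification_code(code: str) -> str:
    """Reduce 'MV0000', 'mv0000', or '0000' to the bare four-digit code.

    Raises ValueError for input that matches none of the accepted formats.
    """
    match = re.fullmatch(r"(?:MV)?(\d{4})", code.strip(), flags=re.IGNORECASE)
    if match is None:
        raise ValueError(f"unrecognized verification code: {code!r}")
    return match.group(1)

print(normalize_verification_code("MV1234"))  # 1234
```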
### [Branding](https://github.com/moovfinancial/moov-python/blob/master/docs/sdks/branding/README.md)
* [create](https://github.com/moovfinancial/moov-python/blob/master/docs/sdks/branding/README.md#create) - Create brand properties for the specified account.
To access this endpoint using an [access token](https://docs.moov.io/api/authentication/access-tokens/)
you'll need to specify the `/accounts/{accountID}/branding.write` scope.
* [upsert](https://github.com/moovfinancial/moov-python/blob/master/docs/sdks/branding/README.md#upsert) - Create or replace brand properties for the specified account.
To access this endpoint using an [access token](https://docs.moov.io/api/authentication/access-tokens/)
you'll need to specify the `/accounts/{accountID}/branding.write` scope.
* [get](https://github.com/moovfinancial/moov-python/blob/master/docs/sdks/branding/README.md#get) - Get brand properties for the specified account.
To access this endpoint using an [access token](https://docs.moov.io/api/authentication/access-tokens/)
you'll need to specify the `/accounts/{accountID}/branding.read` scope.
### [Capabilities](https://github.com/moovfinancial/moov-python/blob/master/docs/sdks/capabilities/README.md)
* [list](https://github.com/moovfinancial/moov-python/blob/master/docs/sdks/capabilities/README.md#list) - Retrieve all the capabilities an account has requested.
Read our [capabilities guide](https://docs.moov.io/guides/accounts/capabilities/) to learn more.
To access this endpoint using an [access token](https://docs.moov.io/api/authentication/access-tokens/)
you'll need to specify the `/accounts/{accountID}/capabilities.read` scope.
* [request](https://github.com/moovfinancial/moov-python/blob/master/docs/sdks/capabilities/README.md#request) - Request capabilities for a specific account. Read our [capabilities guide](https://docs.moov.io/guides/accounts/capabilities/) to learn more.
To access this endpoint using an [access token](https://docs.moov.io/api/authentication/access-tokens/)
you'll need to specify the `/accounts/{accountID}/capabilities.write` scope.
* [get](https://github.com/moovfinancial/moov-python/blob/master/docs/sdks/capabilities/README.md#get) - Retrieve a specific capability that an account has requested. Read our [capabilities guide](https://docs.moov.io/guides/accounts/capabilities/) to learn more.
To access this endpoint using an [access token](https://docs.moov.io/api/authentication/access-tokens/)
you'll need to specify the `/accounts/{accountID}/capabilities.read` scope.
* [disable](https://github.com/moovfinancial/moov-python/blob/master/docs/sdks/capabilities/README.md#disable) - Disable a specific capability that an account has requested. Read our [capabilities guide](https://docs.moov.io/guides/accounts/capabilities/) to learn more.
To access this endpoint using an [access token](https://docs.moov.io/api/authentication/access-tokens/)
you'll need to specify the `/accounts/{accountID}/capabilities.write` scope.
### [CardIssuing](https://github.com/moovfinancial/moov-python/blob/master/docs/sdks/cardissuing/README.md)
* [request](https://github.com/moovfinancial/moov-python/blob/master/docs/sdks/cardissuing/README.md#request) - Request a virtual card be issued.
To access this endpoint using an [access token](https://docs.moov.io/api/authentication/access-tokens/)
you'll need to specify the `/accounts/{accountID}/issued-cards.write` scope.
* [list](https://github.com/moovfinancial/moov-python/blob/master/docs/sdks/cardissuing/README.md#list) - List Moov issued cards existing for the account.
To access this endpoint using an [access token](https://docs.moov.io/api/authentication/access-tokens/)
you'll need to specify the `/accounts/{accountID}/issued-cards.read` scope.
* [get](https://github.com/moovfinancial/moov-python/blob/master/docs/sdks/cardissuing/README.md#get) - Retrieve a single issued card associated with a Moov account.
To access this endpoint using an [access token](https://docs.moov.io/api/authentication/access-tokens/)
you'll need to specify the `/accounts/{accountID}/issued-cards.read` scope.
* [update](https://github.com/moovfinancial/moov-python/blob/master/docs/sdks/cardissuing/README.md#update) - Update a Moov issued card.
To access this endpoint using an [access token](https://docs.moov.io/api/authentication/access-tokens/)
you'll need to specify the `/accounts/{accountID}/issued-cards.write` scope.
* [get_full](https://github.com/moovfinancial/moov-python/blob/master/docs/sdks/cardissuing/README.md#get_full) - Get issued card with PAN, CVV, and expiration.
Only use this endpoint if you have provided Moov with a copy of your PCI attestation of compliance.
To access this endpoint using an [access token](https://docs.moov.io/api/authentication/access-tokens/)
you'll need to specify the `/accounts/{accountID}/issued-cards.read-secure` scope.
### [Cards](https://github.com/moovfinancial/moov-python/blob/master/docs/sdks/cards/README.md)
* [link](https://github.com/moovfinancial/moov-python/blob/master/docs/sdks/cards/README.md#link) - Link a card to an existing Moov account.
Read our [accept card payments guide](https://docs.moov.io/guides/sources/cards/accept-card-payments/#link-a-card) to learn more.
Only use this endpoint if you have provided Moov with a copy of your PCI attestation of compliance.
During card linking, the provided data will be verified by submitting a $0 authorization (account verification) request.
If `merchantAccountID` is provided, the authorization request will contain that account's statement descriptor and address.
Otherwise, the platform account's profile will be used. If no statement descriptor has been set, the authorization will
use the account's name instead.
It is strongly recommended that callers include the `X-Wait-For` header, set to `payment-method`, if the newly linked card is intended to be used right away. If this header is not included, the caller will need to poll the [List Payment Methods](https://docs.moov.io/api/sources/payment-methods/list/) endpoint to wait for the new payment methods to be available for use.
To access this endpoint using an [access token](https://docs.moov.io/api/authentication/access-tokens/)
you'll need to specify the `/accounts/{accountID}/cards.write` scope.
* [list](https://github.com/moovfinancial/moov-python/blob/master/docs/sdks/cards/README.md#list) - List all the active cards associated with a Moov account.
Read our [accept card payments guide](https://docs.moov.io/guides/sources/cards/accept-card-payments/) to learn more.
To access this endpoint using an [access token](https://docs.moov.io/api/authentication/access-tokens/)
you'll need to specify the `/accounts/{accountID}/cards.read` scope.
* [get](https://github.com/moovfinancial/moov-python/blob/master/docs/sdks/cards/README.md#get) - Fetch a specific card associated with a Moov account.
Read our [accept card payments guide](https://docs.moov.io/guides/sources/cards/accept-card-payments/) to learn more.
To access this endpoint using an [access token](https://docs.moov.io/api/authentication/access-tokens/)
you'll need to specify the `/accounts/{accountID}/cards.read` scope.
* [update](https://github.com/moovfinancial/moov-python/blob/master/docs/sdks/cards/README.md#update) - Update a linked card and/or resubmit it for verification.
If a value is provided for CVV, a new verification ($0 authorization) will be submitted for the card. Updating the expiration date or address will update the information stored on file for the card but will not be verified.
Read our [accept card payments guide](https://docs.moov.io/guides/sources/cards/accept-card-payments/#reverify-a-card) to learn more.
Only use this endpoint if you have provided Moov with a copy of your PCI attestation of compliance.
To access this endpoint using an [access token](https://docs.moov.io/api/authentication/access-tokens/)
you'll need to specify the `/accounts/{accountID}/cards.write` scope.
* [disable](https://github.com/moovfinancial/moov-python/blob/master/docs/sdks/cards/README.md#disable) - Disable a card associated with a Moov account.
To access this endpoint using an [access token](https://docs.moov.io/api/authentication/access-tokens/)
you'll need to specify the `/accounts/{accountID}/cards.write` scope.
### [Disputes](https://github.com/moovfinancial/moov-python/blob/master/docs/sdks/disputes/README.md)
* [list](https://github.com/moovfinancial/moov-python/blob/master/docs/sdks/disputes/README.md#list) - Returns the list of disputes.
Read our [disputes guide](https://docs.moov.io/guides/money-movement/accept-payments/card-acceptance/disputes/) to learn more.
To access this endpoint using an [access token](https://docs.moov.io/api/authentication/access-tokens/)
you'll need to specify the `/accounts/{accountID}/transfers.read` scope.
* [get](https://github.com/moovfinanc | text/markdown | Speakeasy | null | null | null | null | null | [
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: 3.13",
"Programming Language :: Python :: 3.14"
] | [] | https://moov.io/ | null | >=3.10 | [] | [] | [] | [
"httpcore>=1.0.9",
"httpx>=0.28.1",
"pydantic>=2.11.2"
] | [] | [] | [] | [
"Homepage, https://moov.io/",
"Repository, https://github.com/moovfinancial/moov-python.git",
"Documentation, https://docs.moov.io/"
] | poetry/2.2.1 CPython/3.10.19 Linux/6.8.0-1044-azure | 2026-02-21T02:42:24.572916 | moovio_sdk-26.4.0.dev11.tar.gz | 338,772 | 62/ba/2018fe6cd0de236f3cec0adc6aa58320eb9703e0371eda30fb0b0fb3aa2f/moovio_sdk-26.4.0.dev11.tar.gz | source | sdist | null | false | 2888baa43211bc8e62dde23fbb5bfe54 | ee255890d09d54683b6f3b730121bed68f690861caf4a7c783c957c0ef6729ee | 62ba2018fe6cd0de236f3cec0adc6aa58320eb9703e0371eda30fb0b0fb3aa2f | null | [] | 0 |
2.4 | agentward | 0.2.0 | Open-source permission control plane for AI agents. Scan, enforce, and audit every tool call. | <p align="center">
<img src="docs/architecture.svg" alt="AgentWard Architecture" width="900"/>
</p>
<h1 align="center">AgentWard</h1>
<p align="center">
<strong>Open-source permission control plane for AI agents.</strong><br/>
Scan, enforce, and audit every tool call.
</p>
<p align="center">
<a href="https://pypi.org/project/agentward/"><img src="https://img.shields.io/pypi/v/agentward?color=00FF41&labelColor=0a0a0a" alt="PyPI"></a>
<a href="https://github.com/agentward-ai/agentward/blob/main/LICENSE"><img src="https://img.shields.io/badge/license-Apache%202.0-00FF41?labelColor=0a0a0a" alt="License"></a>
<a href="https://www.python.org/downloads/"><img src="https://img.shields.io/badge/python-3.11+-00FF41?labelColor=0a0a0a" alt="Python"></a>
</p>
---
<p align="center">
<img src="docs/demo.gif" alt="AgentWard Demo" width="900"/>
</p>
Telling an agent *"don't touch the stove"* is a natural-language guardrail that can be circumvented. AgentWard puts a **physical lock on the stove** — code-level enforcement that prompt injection can't override.
AgentWard sits between AI agents and their tools (MCP servers, HTTP gateways, function calls) to enforce least-privilege policies, inspect data flows at runtime, and generate compliance audit trails. Policies are enforced **in code, outside the LLM context window** — the model never sees them, can't override them, can't be tricked into ignoring them.
## Why AgentWard?
AI agents now have access to your email, calendar, filesystem, shell, databases, and APIs. The tools exist to *give* agents these capabilities. But **nothing exists to control what they do with them.**
| What exists today | What it does | What it doesn't do |
|---|---|---|
| **Static scanners** (mcp-scan, Cisco Skill Scanner) | Scan tool definitions, report risks | No runtime enforcement. Scan and walk away. |
| **Guardrails frameworks** (NeMo, Guardrails AI) | Filter LLM inputs/outputs | Don't touch tool calls. An agent can still `rm -rf /`. |
| **Prompt-based rules** (SecureClaw) | Inject safety instructions into agent context | Vulnerable to prompt injection. The LLM can be tricked into ignoring them. |
| **IAM / OAuth** | Control who can access what | Control *humans*, not *agents*. An agent with your OAuth token has your full permissions. |
The gap: **No tool-level permission enforcement that actually runs in code, outside the LLM, at the point of every tool call.** Scanners find problems but don't fix them. Guardrails protect the model but not the tools. Prompt rules are suggestions, not enforcement.
AgentWard fills this gap. It's a proxy that sits between agents and tools, evaluating every `tools/call` against a declarative policy — in code, at runtime, where prompt injection can't reach.
## Quick Start
```bash
pip install agentward
```
### 1. Scan your tools
```bash
agentward scan
```
Auto-discovers MCP configs (Claude Desktop, Cursor, Windsurf, VS Code), Python tool definitions (OpenAI, LangChain, CrewAI), and ClawdBot/OpenClaw skills. Outputs a permission map with risk ratings and security recommendations.
```
Server Tool Risk Data Access
─────────────── ──────────────────── ─────── ──────────────────
filesystem read_file MEDIUM File read
filesystem write_file HIGH File write
github create_issue MEDIUM GitHub API
shell-executor run_command CRITICAL Shell execution
```
### 2. Generate a policy
```bash
agentward configure
```
Generates a smart-default `agentward.yaml` with security-aware rules based on what `scan` found — skill restrictions, approval gates, and chaining rules tailored to your setup.
```yaml
# agentward.yaml (generated)
version: "1.0"
skills:
filesystem:
read_file: { action: allow }
write_file: { action: approve } # requires human approval
shell-executor:
run_command: { action: block } # blocked entirely
require_approval:
- send_email
- delete_file
```
### 3. Wire it in
```bash
# MCP servers (Claude Desktop, Cursor, etc.)
agentward setup --policy agentward.yaml
# Or for ClawdBot gateway
agentward setup --gateway clawdbot
```
Rewrites your MCP configs so every tool call routes through the AgentWard proxy. For ClawdBot, swaps the gateway port so AgentWard sits as an HTTP reverse proxy.
### 4. Enforce at runtime
```bash
# MCP stdio proxy
agentward inspect --policy agentward.yaml -- npx @modelcontextprotocol/server-filesystem /tmp
# HTTP gateway proxy
agentward inspect --gateway clawdbot --policy agentward.yaml
```
Every tool call is now intercepted, evaluated against your policy, and either allowed, blocked, or flagged for approval. Full audit trail logged.
```
[ALLOW] filesystem.read_file /tmp/notes.txt
[BLOCK] shell-executor.run_command rm -rf /
[APPROVE] gmail.send_email → waiting for human approval
```
## How It Works
AgentWard operates as a transparent proxy between agents and their tools:
```
Agent Host AgentWard Tool Server
(Claude, Cursor, etc.) (Proxy + Policy Engine) (MCP, Gateway)
tools/call ──────────► Intercept ──► Policy check
│ │
│ ALLOW ──────┼──────► Forward to server
│ BLOCK ──────┼──────► Return error
│ APPROVE ────┼──────► Wait for human
│ │
└── Audit log ◄──┘
```
**Two proxy modes, same policy engine:**
| Mode | Transport | Intercepts | Use Case |
|------|-----------|------------|----------|
| **Stdio** | JSON-RPC 2.0 over stdio | `tools/call` | MCP servers (Claude Desktop, Cursor, Windsurf, VS Code) |
| **HTTP** | HTTP reverse proxy + WebSocket | `POST /tools-invoke` | ClawdBot gateway, HTTP-based tools |
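For the stdio mode, the interception step amounts to parsing each JSON-RPC 2.0 message and gating only `tools/call` requests; everything else is forwarded untouched. A minimal illustrative sketch (hypothetical helper names, not AgentWard's internal API):

```python
import json

def route(line: str, allow) -> str:
    """Return the line to forward, or a JSON-RPC error if policy blocks it."""
    msg = json.loads(line)
    if msg.get("method") != "tools/call":
        return line  # non-tool traffic passes through untouched
    tool = msg["params"]["name"]
    if allow(tool):
        return line  # ALLOW: forward to the tool server
    # BLOCK: answer the agent directly with a JSON-RPC error response
    return json.dumps({
        "jsonrpc": "2.0",
        "id": msg.get("id"),
        "error": {"code": -32000, "message": f"blocked by policy: {tool}"},
    })

req = '{"jsonrpc": "2.0", "id": 1, "method": "tools/call", "params": {"name": "run_command", "arguments": {}}}'
print(route(req, allow=lambda tool: tool != "run_command"))
```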
## CLI Commands
| Command | Description |
|---------|-------------|
| `agentward scan` | Static analysis — discover tools, generate permission maps, risk ratings |
| `agentward configure` | Generate smart-default policy YAML from scan results |
| `agentward setup` | Wire proxy into MCP configs or gateway ports |
| `agentward inspect` | Start runtime proxy with live policy enforcement |
| `agentward comply` | Compliance evaluation against regulatory frameworks *(coming soon)* |
## Policy Actions
| Action | Behavior |
|--------|----------|
| `allow` | Tool call forwarded transparently |
| `block` | Tool call rejected, error returned to agent |
| `approve` | Tool call held for human approval before forwarding |
| `log` | Tool call forwarded, but logged with extra detail |
| `redact` | Tool call forwarded with sensitive data stripped |
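A toy evaluator for these actions might look like the following (names are illustrative; AgentWard's actual policy engine is more involved):

```python
# Toy lookup mirroring the action table above; unknown tools default to
# "block" as the least-privilege choice. Hypothetical, simplified sketch.
VALID_ACTIONS = {"allow", "block", "approve", "log", "redact"}

def decide(policy: dict, server: str, tool: str) -> str:
    """Return the policy action for a tool call on a given server."""
    rule = policy.get("skills", {}).get(server, {}).get(tool)
    if rule is None:
        return "block"  # least privilege: no rule means no access
    action = rule.get("action", "block")
    if action not in VALID_ACTIONS:
        raise ValueError(f"unknown policy action: {action!r}")
    return action

policy = {
    "skills": {
        "filesystem": {
            "read_file": {"action": "allow"},
            "write_file": {"action": "approve"},
        }
    }
}
print(decide(policy, "filesystem", "read_file"))        # allow
print(decide(policy, "shell-executor", "run_command"))  # block
```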
## What AgentWard Is NOT
- **Not a static scanner** — Scanners like mcp-scan analyze and walk away. AgentWard scans *and* enforces at runtime.
- **Not a guardrails framework** — NeMo Guardrails and Guardrails AI focus on LLM input/output. AgentWard controls the *tool calls*.
- **Not prompt-based enforcement** — Injecting safety rules into the LLM context is vulnerable to prompt injection. AgentWard enforces policies in code, outside the context window.
- **Not an IAM system** — AgentWard complements IAM. It controls what *agents* can do with the permissions they already have.
## Supported Platforms
**MCP Hosts (stdio proxy):**
- Claude Desktop
- Claude Code
- Cursor
- Windsurf
- VS Code Copilot
- Any MCP-compatible client
**HTTP Gateways:**
- ClawdBot (with WebSocket passthrough for UI)
- Extensible to other HTTP-based tool gateways
**Python Tool Scanning:**
- OpenAI SDK (`@tool` decorators)
- LangChain (`@tool`, `StructuredTool`)
- CrewAI (`@tool`)
- Anthropic SDK
## Development
```bash
# Clone and set up
git clone https://github.com/agentward-ai/agentward.git
cd agentward
python3 -m venv .venv && source .venv/bin/activate
pip install -e ".[dev]"
# Run tests
pytest
# Lint
ruff check agentward/
```
## Roadmap
- [x] MCP stdio proxy with policy enforcement
- [x] HTTP reverse proxy with WebSocket passthrough
- [x] Static scanner (MCP configs, Python tools, OpenClaw skills)
- [x] Smart-default policy generation
- [x] MCP config wrapping (`agentward setup`)
- [x] Audit logging (JSON Lines + rich stderr)
- [ ] Skill chaining analysis and enforcement
- [ ] Human-in-the-loop approval flow
- [ ] Compliance frameworks (HIPAA, SOX, GDPR, PCI-DSS)
- [ ] Data classifier (PII/PHI detection)
- [ ] Data boundary enforcement
- [ ] Skill Compliance Registry
## License
[Apache 2.0](LICENSE)
---
<p align="center">
<a href="https://agentward.ai">agentward.ai</a> · <a href="https://github.com/agentward-ai/agentward">GitHub</a>
</p>
| text/markdown | AgentWard Contributors | null | null | null | null | agents, ai, governance, mcp, permissions, security | [
"Development Status :: 3 - Alpha",
"Intended Audience :: Developers",
"License :: OSI Approved :: Apache Software License",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: 3.13",
"Topic :: Security",
"Topic :: Software Development :: Libraries"
] | [] | null | null | >=3.11 | [] | [] | [] | [
"aiohttp>=3.9",
"pydantic>=2.0",
"pyyaml>=6.0",
"rich>=13.0",
"typer[all]>=0.9.0"
] | [] | [] | [] | [
"Homepage, https://agentward.ai",
"Repository, https://github.com/agentward-ai/agentward"
] | twine/6.2.0 CPython/3.13.3 | 2026-02-21T02:41:44.298967 | agentward-0.2.0.tar.gz | 2,974,408 | f6/ec/96701225d880afddf038f9e30c02d852d971e440343bd31a82c771a7dff6/agentward-0.2.0.tar.gz | source | sdist | null | false | 933013ab263c47ceec4a69318be154dd | 859e2180f65d0df534afe7cd8aefa15f4801d7056237301c57c961a00527fd1d | f6ec96701225d880afddf038f9e30c02d852d971e440343bd31a82c771a7dff6 | Apache-2.0 | [] | 227 |
2.4 | hlquantum | 0.1.1 | HLQuantum (High Level Quantum) — A high-level Python package for working with quantum hardware using CUDA-Q and other backends. | # HLQuantum
**HLQuantum** (High Level Quantum) is a high-level Python package designed to simplify working with quantum hardware. Write your quantum logic once and run it on any supported backend.
## Supported Backends
| Backend | Framework | Install extra |
| ------------------ | ------------------------------------------------------ | ---------------------------------- |
| `CudaQBackend` | [NVIDIA CUDA-Q](https://nvidia.github.io/cuda-quantum) | `pip install hlquantum[cudaq]` |
| `QiskitBackend` | [IBM Qiskit](https://qiskit.org) | `pip install hlquantum[qiskit]` |
| `CirqBackend` | [Google Cirq](https://quantumai.google/cirq) | `pip install hlquantum[cirq]` |
| `BraketBackend` | [Amazon Braket](https://aws.amazon.com/braket/) | `pip install hlquantum[braket]` |
| `PennyLaneBackend` | [Xanadu PennyLane](https://pennylane.ai) | `pip install hlquantum[pennylane]` |
## Installation
```bash
# Core only (no backend dependencies)
pip install .
# With a specific backend
pip install ".[qiskit]"
# With all backends
pip install ".[all]"
# Development
pip install ".[dev]"
```
## Features
- **Backend-Agnostic Circuits** — A single `QuantumCircuit` IR that translates to any supported framework.
- **Quantum Pipelines** — Build modular architectures using ML-inspired `Layer` and `Sequential` models.
- **Resilient Workflows** — Orchestrate complex executions with loops, branching, and state persistence (save/resume).
- **Asynchronous Execution** — Multi-backend concurrency with `async/await` support.
- **High-Level QML** — Keras-compatible `QuantumLayer` with auto-differentiation and TensorFlow Quantum support.
- **Unitary-Agnostic @kernel** — Write quantum logic as plain Python functions.
- **GPU Acceleration** — Unified `GPUConfig` across all backends.
## GPU Acceleration
HLQuantum provides a unified `GPUConfig` that works across all GPU-capable backends:
```python
from hlquantum import GPUConfig, GPUPrecision
# Simple — single GPU
gpu = GPUConfig(enabled=True)
# Multi-GPU
gpu = GPUConfig(enabled=True, multi_gpu=True, device_ids=[0, 1])
# FP64 precision
gpu = GPUConfig(enabled=True, precision=GPUPrecision.FP64)
# Enable cuStateVec (Qiskit Aer)
gpu = GPUConfig(enabled=True, custatevec=True)
```
### GPU Support by Backend
| Backend | GPU Library | Auto-selected target / device |
| ------------------ | ------------------------ | -------------------------------------------- |
| `CudaQBackend` | CUDA-Q (native) | `"nvidia"`, `"nvidia-fp64"`, `"nvidia-mqpu"` |
| `QiskitBackend` | qiskit-aer-gpu | `AerSimulator(device='GPU')` |
| `CirqBackend` | qsimcirq | `QSimSimulator(use_gpu=True)` |
| `PennyLaneBackend` | pennylane-lightning[gpu] | `"lightning.gpu"` |
| `BraketBackend` | _(not available)_ | _(cloud-managed hardware)_ |
### Per-Backend GPU Examples
```python
from hlquantum import GPUConfig, GPUPrecision
from hlquantum.backends import CudaQBackend, QiskitBackend, CirqBackend, PennyLaneBackend
gpu = GPUConfig(enabled=True)
# CUDA-Q — auto-selects "nvidia" target
cudaq = CudaQBackend(gpu_config=gpu)
# CUDA-Q — multi-GPU with FP64
cudaq_multi = CudaQBackend(
gpu_config=GPUConfig(enabled=True, multi_gpu=True, precision=GPUPrecision.FP64)
)
# Qiskit Aer — GPU + cuStateVec
qiskit = QiskitBackend(gpu_config=GPUConfig(enabled=True, custatevec=True))
# Cirq — qsim GPU simulator
cirq = CirqBackend(gpu_config=gpu)
# PennyLane — auto-selects lightning.gpu
pl = PennyLaneBackend(gpu_config=gpu)
```
```python
from hlquantum import detect_gpus
for gpu in detect_gpus():
print(f"GPU {gpu['id']}: {gpu['name']} ({gpu['memory_total_gb']} GB)")
```
## Quantum Pipelines (ML-Style)
Build complex circuits modularly by stacking layers:
```python
from hlquantum.layers import Sequential, GroverLayer, QFTLayer, RealAmplitudes
# Stack algorithms and variational layers
model = Sequential([
QFTLayer(num_qubits=4),
GroverLayer(num_qubits=4, target_states=["1010"]),
RealAmplitudes(num_qubits=4, reps=2)
])
# Compile to a single circuit
circuit = model.build()
```
## Resilient Workflows
Orchestrate complex execution flows with automatic state persistence and parallel execution.
```python
from hlquantum.workflows import Workflow, Parallel, Loop, Branch
wf = Workflow(state_file="checkpoint.json", name="Discovery")
# Add parallel paths
wf.add(Parallel(circuit1, circuit2))
# Add a loop
wf.add(Loop(base_circuit, iterations=10))
# Execute asynchronously (with optional throttling for rate-limits)
import asyncio
results = asyncio.run(wf.run(resume=True))
# Export to Mermaid for visualization
print(wf.to_mermaid())
```
## Quantum Machine Learning (QML)
Integrate quantum layers into your standard Keras/TensorFlow models.
```python
import tensorflow as tf
from hlquantum.qml import QuantumLayer, create_quantum_classifier
# Build a hybrid model
model = tf.keras.Sequential([
tf.keras.layers.Dense(8, activation='relu'),
QuantumLayer(my_parameterized_circuit), # Auto-detects TFQ or uses Parameter-Shift
tf.keras.layers.Dense(2, activation='softmax')
])
# Or use pre-built high-level models
model = create_quantum_classifier(n_qubits=4, n_classes=2)
```
## Quick Start
```python
import hlquantum
from hlquantum import kernel
@kernel(num_qubits=2)
def bell(qc):
qc.h(0)
qc.cx(0, 1)
qc.measure_all()
print(bell.circuit)
# QuantumCircuit(num_qubits=2, gates=4)
```
## Backend Examples
### CUDA-Q
```python
from hlquantum.backends import CudaQBackend
backend = CudaQBackend(target="default")
result = hlquantum.run(bell, shots=1000, backend=backend)
print(result.counts) # {'00': ~500, '11': ~500}
```
### Qiskit (IBM)
```python
from hlquantum.backends import QiskitBackend
# Local AerSimulator (default)
backend = QiskitBackend()
result = hlquantum.run(bell, shots=1000, backend=backend)
# Real IBM hardware
from qiskit_ibm_runtime import QiskitRuntimeService
service = QiskitRuntimeService()
ibm_backend = service.least_busy(min_num_qubits=2)
backend = QiskitBackend(backend=ibm_backend)
```
### Cirq (Google)
```python
from hlquantum.backends import CirqBackend
backend = CirqBackend()
result = hlquantum.run(bell, shots=1000, backend=backend)
# With noise
import cirq
noise = cirq.ConstantQubitNoiseModel(cirq.depolarize(0.01))
noisy_backend = CirqBackend(noise_model=noise)
```
### Amazon Braket
```python
from hlquantum.backends import BraketBackend
# Local simulator
backend = BraketBackend()
result = hlquantum.run(bell, shots=1000, backend=backend)
# IonQ on AWS
from braket.aws import AwsDevice
ionq = AwsDevice("arn:aws:braket:::device/qpu/ionq/Harmony")
backend = BraketBackend(device=ionq, s3_destination=("my-bucket", "results"))
```
### PennyLane (Xanadu)
```python
from hlquantum.backends import PennyLaneBackend
# default.qubit simulator
backend = PennyLaneBackend()
result = hlquantum.run(bell, shots=1000, backend=backend)
# Lightning high-performance simulator
backend = PennyLaneBackend(device_name="lightning.qubit")
```
## Working with Results
```python
result = hlquantum.run(bell, shots=1000)
result.counts # {'00': 512, '11': 488}
result.probabilities # {'00': 0.512, '11': 0.488}
result.most_probable # '00'
result.expectation_value() # 1.0 (parity-based)
result.shots # 1000
result.backend_name # 'qiskit (aer_simulator)'
# State Vector (Simulators only)
result = hlquantum.run(bell, include_statevector=True)
sv = result.get_state_vector()
print(sv) # [0.707+0j, 0, 0, 0.707+0j]
# Transpilation & Error Mitigation
from hlquantum.mitigation import ThresholdMitigation
result = hlquantum.run(
bell,
transpile=True,
mitigation=ThresholdMitigation(threshold=0.01)
)
# Built-in Algorithms
from hlquantum import algorithms
# Foundational
qft_circuit = algorithms.qft(num_qubits=4)
bv_circuit = algorithms.bernstein_vazirani("1011")
grover_circuit = algorithms.grover(num_qubits=3, target_states=["101"])
# Classical Logic (Quantum Arithmetic)
adder = algorithms.half_adder()
# Variational & Optimization
from hlquantum.algorithms import vqe_solve, qaoa_solve, gqe_solve
# VQE with parameterized circuits
res = vqe_solve(my_ansatz, initial_params=[0.1, 0.2])
# QAOA for combinatorial optimization
res = qaoa_solve(cost_hamiltonian, p=2)
# GQE for generative modeling
res = gqe_solve(ansatz, my_loss_fn)
# Differentiable Programming
from hlquantum.algorithms import parameter_shift_gradient
grads = parameter_shift_gradient(circuit, {"theta": 0.5})
```
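The parity-based expectation shown above can be reproduced by hand from the counts (an illustration of the convention, not HLQuantum's internal implementation):

```python
def parity_expectation(counts: dict[str, int]) -> float:
    """Estimate <Z...Z> from measurement counts: bitstrings with an even
    number of 1s contribute +1, odd-parity bitstrings contribute -1."""
    total = sum(counts.values())
    signed = sum(
        (1 if bits.count("1") % 2 == 0 else -1) * n
        for bits, n in counts.items()
    )
    return signed / total

print(parity_expectation({"00": 512, "11": 488}))  # 1.0 (both outcomes have even parity)
```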
## Adding a Custom Backend
```python
from hlquantum.backends import Backend
from hlquantum.circuit import QuantumCircuit
from hlquantum.result import ExecutionResult
class MyBackend(Backend):
@property
def name(self) -> str:
return "my_backend"
def run(self, circuit: QuantumCircuit, shots: int = 1000, **kwargs) -> ExecutionResult:
# Translate circuit.gates → your framework
# Execute and collect counts
return ExecutionResult(counts={"00": shots}, shots=shots, backend_name=self.name)
```
## Running Tests
```bash
pip install ".[dev]"
pytest tests/ -v
```
## Sponsors
HLQuantum is made possible with the support of our sponsors. If you'd like to support this project, please reach out.
| | Sponsor | Description |
| --- | ---------------------------------------------- | ----------------------------------------------- |
| | [**Venture Chain**](https://venture-chain.com) | Supporting initial development effort & release |
## License
Apache License 2.0 — see [LICENSE](LICENSE) for details.
| text/markdown | null | AlinDFerenczi <alindanielferenczi@gmail.com> | null | null | Apache-2.0 | null | [] | [] | null | null | >=3.8 | [] | [] | [] | [
"numpy",
"amazon-braket-sdk; extra == \"all\"",
"cirq; extra == \"all\"",
"pennylane; extra == \"all\"",
"qiskit; extra == \"all\"",
"qiskit-aer; extra == \"all\"",
"amazon-braket-sdk; extra == \"braket\"",
"cirq; extra == \"cirq\"",
"black; extra == \"dev\"",
"isort; extra == \"dev\"",
"mkdocs; extra == \"dev\"",
"mkdocs-material; extra == \"dev\"",
"mkdocstrings[python]; extra == \"dev\"",
"mypy; extra == \"dev\"",
"pytest; extra == \"dev\"",
"pennylane; extra == \"pennylane\"",
"qiskit; extra == \"qiskit\"",
"qiskit-aer; extra == \"qiskit\""
] | [] | [] | [] | [] | twine/6.2.0 CPython/3.12.3 | 2026-02-21T02:41:28.604882 | hlquantum-0.1.1.tar.gz | 43,243 | 68/78/c0c02b82513272995b24daebdbfb7799ffd4c60fa4d9c9d44f0ebfe3bff0/hlquantum-0.1.1.tar.gz | source | sdist | null | false | 0351ff020af758bb30b105653ee6a3f5 | e11d6d9358621d3465e40e0056a6d21f9b5a8544b24b2135ae2d7959570cc20a | 6878c0c02b82513272995b24daebdbfb7799ffd4c60fa4d9c9d44f0ebfe3bff0 | null | [
"LICENSE",
"NOTICE"
] | 224 |
2.4 | djinn-bot-cli | 0.1.3 | Python CLI for djinnbot | # djinn-bot-cli
CLI for the [DjinnBot](https://github.com/BaseDatum/djinnbot) agent orchestration platform. Chat with agents, manage pipelines, configure model providers, and browse agent memory — all from the terminal.
## Installation
```bash
pip install djinn-bot-cli
```
Or with [uv](https://docs.astral.sh/uv/):
```bash
uv tool install djinn-bot-cli
```
## Quick Start
```bash
# Check server connectivity
djinn status
# Chat with an agent (interactive picker for agent + model)
djinn chat
# Chat directly
djinn chat --agent stas --model anthropic/claude-sonnet-4
# Configure a model provider API key
djinn provider set-key anthropic
```
## Commands
### `djinn status`
Show server health, Redis connection, active runs, and GitHub App status.
### `djinn chat`
Interactive TUI chat session with an agent. Features streaming responses with markdown rendering, collapsible thinking blocks and tool calls with syntax-highlighted JSON, and a fuzzy-search model picker.
```bash
djinn chat # interactive agent + model selection
djinn chat -a finn -m anthropic/claude-sonnet-4 # skip pickers
```
### `djinn provider`
Manage model provider API keys and configuration.
```bash
djinn provider list # show all providers and status
djinn provider show anthropic # details + available models
djinn provider set-key openrouter # set API key (secure prompt)
djinn provider models # list models from configured providers
djinn provider enable openai # enable a provider
djinn provider disable openai # disable (keeps key)
djinn provider remove openai # delete key and config
```
### `djinn pipeline`
Manage pipeline definitions.
```bash
djinn pipeline list # list all pipelines
djinn pipeline show engineering # show steps and agents
djinn pipeline validate engineering # validate a pipeline
djinn pipeline raw engineering # show raw YAML
```
### `djinn agent`
Manage agents and view their status.
```bash
djinn agent list # list all agents
djinn agent show stas # detailed info + persona files
djinn agent status # fleet overview
djinn agent status stas # single agent status
djinn agent runs stas # run history
djinn agent config stas # agent configuration
djinn agent projects stas # assigned projects
```
### `djinn memory`
Browse and search agent memory vaults.
```bash
djinn memory vaults # list all vaults
djinn memory list stas # files in a vault
djinn memory show stas session.md # view a file
djinn memory search "deployments" # search across vaults
djinn memory search "arch" -a finn # search within an agent
djinn memory delete stas old.md # delete a file
```
## Configuration
The CLI connects to `http://localhost:8000` by default. Override with:
```bash
djinn --url http://your-server:8000 status
```
Or set the environment variable:
```bash
export DJINNBOT_URL=http://your-server:8000
```
## Development
```bash
cd cli
uv sync --all-extras
uv run djinn --help
uv run pytest tests/ -v
```
| text/markdown | null | null | null | null | null | null | [] | [] | null | null | >=3.11 | [] | [] | [] | [
"httpx>=0.27.0",
"rich>=13.0.0",
"textual>=0.80.0",
"typer>=0.12.0",
"websockets>=12.0",
"pytest-mock>=3.12; extra == \"dev\"",
"pytest>=8.0; extra == \"dev\"",
"respx>=0.21.0; extra == \"dev\""
] | [] | [] | [] | [] | twine/6.1.0 CPython/3.13.7 | 2026-02-21T02:41:21.583789 | djinn_bot_cli-0.1.3.tar.gz | 42,440 | ea/b9/a0bb8ee4f650920f9d9c7514cf22338e64ce55917c6b9a0553952c0cad32/djinn_bot_cli-0.1.3.tar.gz | source | sdist | null | false | e63f4671ee0b5510630fd7b666c154c5 | b35cb98207933d6bf41bf228192ca01d92207f3af0312e8f5a66fa55a32df79e | eab9a0bb8ee4f650920f9d9c7514cf22338e64ce55917c6b9a0553952c0cad32 | null | [] | 217 |
2.4 | value-network | 0.0.27 | Value networks | ## Value network (wip)
An exploration of recent research on value networks.
## Install
```bash
$ pip install value-network
```
## Usage
First, organize your videos (trajectories). The videos should have a suffix `.1` for success and `.0` for failure.
```text
.
├── data
│ ├── traj_0.1.mp4
│ ├── traj_1.0.mp4
│ └── traj_2.1.mp4
└── ...
```
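The `.1`/`.0` suffix convention above can be parsed with a few lines of standard-library Python. This is a hypothetical helper to show how a label is recovered from a filename, not part of the package:

```python
from pathlib import Path

def trajectory_label(path: Path) -> int:
    # "traj_0.1.mp4" -> stem "traj_0.1" -> inner suffix ".1" -> success (1)
    label = Path(path.stem).suffix.lstrip(".")
    if label not in {"0", "1"}:
        raise ValueError(f"expected a .0 or .1 label suffix in {path.name}")
    return int(label)

print(trajectory_label(Path("data/traj_0.1.mp4")))  # 1
print(trajectory_label(Path("data/traj_1.0.mp4")))  # 0
```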
### 1. Train
Train the `SigLIP` value network by passing the folder containing the videos. The tool will automatically handle the conversion.
```bash
$ value-network-cli train --trajectories-folder ./data --max-steps 1000 --model_output_path ./model.pt
```
### 2. Predict
Use the trained model to predict the value of any image.
```bash
$ value-network-cli predict-value ./model.pt ./frame.png
```
## Citations
```bibtex
@article{Farebrother2024StopRT,
title = {Stop Regressing: Training Value Functions via Classification for Scalable Deep RL},
author = {Jesse Farebrother and Jordi Orbay and Quan Ho Vuong and Adrien Ali Taiga and Yevgen Chebotar and Ted Xiao and Alex Irpan and Sergey Levine and Pablo Samuel Castro and Aleksandra Faust and Aviral Kumar and Rishabh Agarwal},
journal = {ArXiv},
year = {2024},
volume = {abs/2403.03950},
url = {https://api.semanticscholar.org/CorpusID:268253088}
}
```
```bibtex
@misc{lee2025banelexplorationposteriorsgenerative,
title = {BaNEL: Exploration Posteriors for Generative Modeling Using Only Negative Rewards},
author = {Sangyun Lee and Brandon Amos and Giulia Fanti},
year = {2025},
eprint = {2510.09596},
archivePrefix = {arXiv},
primaryClass = {cs.LG},
url = {https://arxiv.org/abs/2510.09596},
}
```
```bibtex
@misc{ma2024visionlanguagemodelsincontext,
title = {Vision Language Models are In-Context Value Learners},
author = {Yecheng Jason Ma and Joey Hejna and Ayzaan Wahid and Chuyuan Fu and Dhruv Shah and Jacky Liang and Zhuo Xu and Sean Kirmani and Peng Xu and Danny Driess and Ted Xiao and Jonathan Tompson and Osbert Bastani and Dinesh Jayaraman and Wenhao Yu and Tingnan Zhang and Dorsa Sadigh and Fei Xia},
year = {2024},
eprint = {2411.04549},
archivePrefix = {arXiv},
primaryClass = {cs.RO},
url = {https://arxiv.org/abs/2411.04549},
}
```
```bibtex
@misc{yang2026riseselfimprovingrobotpolicy,
title = {RISE: Self-Improving Robot Policy with Compositional World Model},
author = {Jiazhi Yang and Kunyang Lin and Jinwei Li and Wencong Zhang and Tianwei Lin and Longyan Wu and Zhizhong Su and Hao Zhao and Ya-Qin Zhang and Li Chen and Ping Luo and Xiangyu Yue and Hongyang Li},
year = {2026},
eprint = {2602.11075},
archivePrefix = {arXiv},
primaryClass = {cs.RO},
url = {https://arxiv.org/abs/2602.11075},
}
```
| text/markdown | null | Phil Wang <lucidrains@gmail.com> | null | null | MIT License Copyright (c) 2026 Phil Wang Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions: The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software. THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. | artificial intelligence, deep learning, value networks | [
"Development Status :: 4 - Beta",
"Intended Audience :: Developers",
"License :: OSI Approved :: MIT License",
"Programming Language :: Python :: 3.10",
"Topic :: Scientific/Engineering :: Artificial Intelligence"
] | [] | null | null | >=3.10 | [] | [] | [] | [
"accelerate",
"click",
"einops>=0.8.2",
"ema-pytorch",
"hl-gauss-pytorch",
"memmap-replay-buffer>=0.0.19",
"opencv-python",
"torch-einops-utils>=0.0.20",
"torch>=2.5",
"torchvision",
"pytest; extra == \"test\""
] | [] | [] | [] | [
"Homepage, https://pypi.org/project/value-network/",
"Repository, https://codeberg.org/lucidrains/value-network"
] | uv/0.8.13 | 2026-02-21T02:41:20.599937 | value_network-0.0.27.tar.gz | 103,616 | 6a/7b/2b0c39343fe7ef55b19ae4969e50561053350c94fd7a4a485d9d25681bba/value_network-0.0.27.tar.gz | source | sdist | null | false | 454cf8bb2ac339fa71cb75b1bbd235ec | 3a02ac407cd3e288623c68a831b73e077fb824952b4149c9d74b23a6e2eaaf4d | 6a7b2b0c39343fe7ef55b19ae4969e50561053350c94fd7a4a485d9d25681bba | null | [
"LICENSE"
] | 223 |
2.4 | karrio-zoom2u | 2026.1.14 | Karrio - Zoom2u Shipping Extension |
# karrio.zoom2u
This package is a Zoom2u extension of the [karrio](https://pypi.org/project/karrio) multi carrier shipping SDK.
## Requirements
`Python 3.7+`
## Installation
```bash
pip install karrio.zoom2u
```
## Usage
```python
import karrio.sdk as karrio
from karrio.mappers.zoom2u.settings import Settings
# Initialize a carrier gateway
zoom2u = karrio.gateway["zoom2u"].create(
Settings(
...
)
)
```
Check the [Karrio Multi-carrier SDK docs](https://docs.karrio.io) for Shipping API requests.
| text/markdown | null | karrio <hello@karrio.io> | null | null | null | null | [
"Intended Audience :: Developers",
"Operating System :: OS Independent",
"Programming Language :: Python :: 3"
] | [] | null | null | >=3.7 | [] | [] | [] | [
"karrio"
] | [] | [] | [] | [
"Homepage, https://github.com/karrioapi/karrio"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-21T02:41:09.110026 | karrio_zoom2u-2026.1.14-py3-none-any.whl | 14,402 | 83/5a/3cac607471f3e77dc959bba8e6f7839ad23ee713fd057131e6efe76601c2/karrio_zoom2u-2026.1.14-py3-none-any.whl | py3 | bdist_wheel | null | false | 6b49741dd5ca482b0e1e4432ee3943ad | 87f51fcd01c7d1254721a19f9b20674f2412f166fd88b467435124a0db6958cd | 835a3cac607471f3e77dc959bba8e6f7839ad23ee713fd057131e6efe76601c2 | LGPL-3.0 | [] | 84 |