metadata_version string | name string | version string | summary string | description string | description_content_type string | author string | author_email string | maintainer string | maintainer_email string | license string | keywords string | classifiers list | platform list | home_page string | download_url string | requires_python string | requires list | provides list | obsoletes list | requires_dist list | provides_dist list | obsoletes_dist list | requires_external list | project_urls list | uploaded_via string | upload_time timestamp[us] | filename string | size int64 | path string | python_version string | packagetype string | comment_text string | has_signature bool | md5_digest string | sha256_digest string | blake2_256_digest string | license_expression string | license_files list | recent_7d_downloads int64 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
2.4 | flet-color-pickers | 0.80.6.dev7615 | Pick colors in Flet apps. | # flet-color-pickers
[](https://pypi.python.org/pypi/flet-color-pickers)
[](https://pepy.tech/project/flet-color-pickers)
[](https://pypi.org/project/flet-color-pickers)
[](https://github.com/flet-dev/flet/tree/main/sdk/python/packages/flet/docs/assets/badges/docs-coverage)
[](https://github.com/flet-dev/flet/blob/main/sdk/python/packages/flet-color-pickers/LICENSE)
A [Flet](https://flet.dev) extension package for picking colors.
It is based on the [flutter_colorpicker](https://pub.dev/packages/flutter_colorpicker) Flutter package.
## Documentation
Detailed documentation for this package can be found [here](https://docs.flet.dev/color_picker/).
## Platform Support
| Platform | Windows | macOS | Linux | iOS | Android | Web |
|----------|---------|-------|-------|-----|---------|-----|
| Supported| ✅ | ✅ | ✅ | ✅ | ✅ | ✅ |
## Usage
### Installation
To install the `flet-color-pickers` package and add it to your project dependencies:
- Using `uv`:
```bash
uv add flet-color-pickers
```
- Using `pip`:
```bash
pip install flet-color-pickers
```
If you installed with `pip`, you will have to manually add this package to your `requirements.txt` or `pyproject.toml`; `uv add` records the dependency in `pyproject.toml` automatically.
### Examples
For examples, see [these](https://github.com/flet-dev/flet/tree/main/sdk/python/examples/controls/color_pickers).
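For a sense of how such a control plugs into an app, here is a minimal sketch. The `ColorPicker` class name and its `on_change` event are assumptions inferred from the package summary, so check the examples above for the actual API:

```python
import flet as ft
import flet_color_pickers as fcp  # assumed import name


def main(page: ft.Page):
    # ColorPicker and on_change are assumed names - verify them
    # against the linked examples and documentation.
    picker = fcp.ColorPicker(
        on_change=lambda e: print("Selected color:", e.data),
    )
    page.add(picker)


ft.run(main)
```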
| text/markdown | null | Flet contributors <hello@flet.dev> | null | null | null | null | [] | [] | null | null | >=3.10 | [] | [] | [] | [
"flet==0.80.6.dev7615"
] | [] | [] | [] | [
"Homepage, https://flet.dev",
"Documentation, https://docs.flet.dev/colorpickers",
"Repository, https://github.com/flet-dev/flet/tree/main/sdk/python/packages/flet-color-pickers",
"Issues, https://github.com/flet-dev/flet/issues"
] | uv/0.10.4 {"installer":{"name":"uv","version":"0.10.4","subcommand":["publish"]},"python":null,"implementation":{"name":null,"version":null},"distro":{"name":"Ubuntu","version":"24.04","id":"noble","libc":null},"system":{"name":null,"release":null},"cpu":null,"openssl_version":null,"setuptools_version":null,"rustc_version":null,"ci":true} | 2026-02-19T20:35:25.001901 | flet_color_pickers-0.80.6.dev7615.tar.gz | 20,983 | 2c/b8/56c1f3d88a1229e69d35c69433532df34a66b937719d0de0aa8446d019b7/flet_color_pickers-0.80.6.dev7615.tar.gz | source | sdist | null | false | 53245dfef2f86168ccdb54a610076b9f | bf300fcf635df2e66471b7753f7fe9f658b3e46b9b1de1d66793b113a88a806c | 2cb856c1f3d88a1229e69d35c69433532df34a66b937719d0de0aa8446d019b7 | Apache-2.0 | [
"LICENSE"
] | 163 |
2.4 | flet-code-editor | 0.80.6.dev7615 | Edit and highlight source code inside Flet apps. | # flet-code-editor
[](https://pypi.python.org/pypi/flet-code-editor)
[](https://pepy.tech/project/flet-code-editor)
[](https://github.com/flet-dev/flet/blob/main/sdk/python/packages/flet-code-editor/LICENSE)
A [Flet](https://flet.dev) extension for editing and highlighting source code.
It is based on the [flutter_code_editor](https://pub.dev/packages/flutter_code_editor) Flutter package.
## Documentation
Detailed documentation for this package can be found [here](https://docs.flet.dev/codeeditor/).
## Usage
### Installation
To install the `flet-code-editor` package and add it to your project dependencies:
- Using `uv`:
```bash
uv add flet-code-editor
```
- Using `pip`:
```bash
pip install flet-code-editor
```
If you installed with `pip`, you will have to manually add this package to your `requirements.txt` or `pyproject.toml`; `uv add` records the dependency in `pyproject.toml` automatically.
### Examples
For examples, see [these](https://github.com/flet-dev/flet/tree/main/examples/controls/code_editor).
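As a quick orientation, the sketch below shows how an editor control would typically be added to a page. The `CodeEditor` name and its `value`/`language` parameters are assumptions here, so confirm them against the linked examples:

```python
import flet as ft
import flet_code_editor as fce  # assumed import name


def main(page: ft.Page):
    # CodeEditor and its parameters are assumed names - verify them
    # against the linked examples and documentation.
    editor = fce.CodeEditor(
        value='print("Hello, Flet!")',
        language="python",
    )
    page.add(editor)


ft.run(main)
```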
| text/markdown | null | Flet contributors <hello@flet.dev> | null | null | null | null | [] | [] | null | null | >=3.10 | [] | [] | [] | [
"flet==0.80.6.dev7615"
] | [] | [] | [] | [
"Homepage, https://flet.dev",
"Documentation, https://docs.flet.dev/codeeditor",
"Repository, https://github.com/flet-dev/flet/tree/main/sdk/python/packages/flet-code-editor",
"Issues, https://github.com/flet-dev/flet/issues"
] | uv/0.10.4 {"installer":{"name":"uv","version":"0.10.4","subcommand":["publish"]},"python":null,"implementation":{"name":null,"version":null},"distro":{"name":"Ubuntu","version":"24.04","id":"noble","libc":null},"system":{"name":null,"release":null},"cpu":null,"openssl_version":null,"setuptools_version":null,"rustc_version":null,"ci":true} | 2026-02-19T20:35:21.728320 | flet_code_editor-0.80.6.dev7615.tar.gz | 22,410 | ec/f7/adf125ce959d14a63110b8e768c6b680e7fb6a810d8e17f1bcf7f596076d/flet_code_editor-0.80.6.dev7615.tar.gz | source | sdist | null | false | 30c2b4fe8c83f0f7ca1786d799e0680b | dfe38c053a6430b8da0d8035cb93847d6a1820430c6c6c8412b7e351f23cfead | ecf7adf125ce959d14a63110b8e768c6b680e7fb6a810d8e17f1bcf7f596076d | Apache-2.0 | [
"LICENSE"
] | 169 |
2.4 | flet-charts | 0.80.6.dev7615 | Interactive chart controls for Flet apps. | # flet-charts
[](https://pypi.python.org/pypi/flet-charts)
[](https://pepy.tech/project/flet-charts)
[](https://pypi.org/project/flet-charts)
[](https://github.com/flet-dev/flet/tree/main/sdk/python/packages/flet/docs/assets/badges/docs-coverage)
[](https://github.com/flet-dev/flet/blob/main/sdk/python/packages/flet-charts/LICENSE)
A [Flet](https://flet.dev) extension for creating interactive charts and graphs.
It is based on the [fl_chart](https://pub.dev/packages/fl_chart) Flutter package.
## Documentation
Detailed documentation for this package can be found [here](https://docs.flet.dev/charts/).
## Platform Support
| Platform | Windows | macOS | Linux | iOS | Android | Web |
|----------|---------|-------|-------|-----|---------|-----|
| Supported| ✅ | ✅ | ✅ | ✅ | ✅ | ✅ |
## Usage
### Installation
To install the `flet-charts` package and add it to your project dependencies:
- Using `uv`:
```bash
uv add flet-charts
```
- Using `pip`:
```bash
pip install flet-charts
```
If you installed with `pip`, you will have to manually add this package to your `requirements.txt` or `pyproject.toml`; `uv add` records the dependency in `pyproject.toml` automatically.
### Examples
For examples, see [these](https://github.com/flet-dev/flet/tree/main/sdk/python/examples/controls/charts).
### Available charts
- [`BarChart`](https://docs.flet.dev/charts/bar_chart/)
- [`CandlestickChart`](https://docs.flet.dev/charts/candlestick_chart/)
- [`LineChart`](https://docs.flet.dev/charts/line_chart/)
- [`MatplotlibChart`](https://docs.flet.dev/charts/matplotlib_chart/)
- [`PieChart`](https://docs.flet.dev/charts/pie_chart/)
- [`PlotlyChart`](https://docs.flet.dev/charts/plotly_chart/)
- [`RadarChart`](https://docs.flet.dev/charts/radar_chart/)
- [`ScatterChart`](https://docs.flet.dev/charts/scatter_chart/)
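For a flavor of the API, here is a minimal line chart. This sketch assumes `flet-charts` mirrors the data model of Flet's classic built-in charts (`LineChartData` containing `LineChartDataPoint`s); the exact class and parameter names may differ, so verify against the `LineChart` documentation above.

```python
import flet as ft
import flet_charts as fch  # assumed import name


def main(page: ft.Page):
    # LineChartData/LineChartDataPoint follow Flet's classic chart API
    # and are assumptions here - verify names against the docs above.
    chart = fch.LineChart(
        data_series=[
            fch.LineChartData(
                data_points=[
                    fch.LineChartDataPoint(0, 1),
                    fch.LineChartDataPoint(1, 3),
                    fch.LineChartDataPoint(2, 2),
                ]
            )
        ]
    )
    page.add(chart)


ft.run(main)
```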
| text/markdown | null | Flet contributors <hello@flet.dev> | null | null | null | null | [] | [] | null | null | >=3.10 | [] | [] | [] | [
"flet==0.80.6.dev7615"
] | [] | [] | [] | [
"Homepage, https://flet.dev",
"Documentation, https://docs.flet.dev/charts",
"Repository, https://github.com/flet-dev/flet/tree/main/sdk/python/packages/flet-charts",
"Issues, https://github.com/flet-dev/flet/issues"
] | uv/0.10.4 {"installer":{"name":"uv","version":"0.10.4","subcommand":["publish"]},"python":null,"implementation":{"name":null,"version":null},"distro":{"name":"Ubuntu","version":"24.04","id":"noble","libc":null},"system":{"name":null,"release":null},"cpu":null,"openssl_version":null,"setuptools_version":null,"rustc_version":null,"ci":true} | 2026-02-19T20:35:18.562291 | flet_charts-0.80.6.dev7615-py3-none-any.whl | 74,587 | 5a/a6/0fbb011b1f345b7d2d0daca33595acd3f980910f3c3e88b2987d31592a90/flet_charts-0.80.6.dev7615-py3-none-any.whl | py3 | bdist_wheel | null | false | 955d7a8cef5a177e3664ba8d7ae2f628 | d5ef423ee988d6f9f6035be26e82874b92b3c008cfbb33d613f003d5e4499637 | 5aa60fbb011b1f345b7d2d0daca33595acd3f980910f3c3e88b2987d31592a90 | Apache-2.0 | [
"LICENSE"
] | 184 |
2.4 | flet-audio-recorder | 0.80.6.dev7615 | Adds audio recording support to Flet apps. | # flet-audio-recorder
[](https://pypi.python.org/pypi/flet-audio-recorder)
[](https://pepy.tech/project/flet-audio-recorder)
[](https://pypi.org/project/flet-audio-recorder)
[](https://github.com/flet-dev/flet/tree/main/sdk/python/packages/flet/docs/assets/badges/docs-coverage)
[](https://github.com/flet-dev/flet/blob/main/sdk/python/packages/flet-audio-recorder/LICENSE)
Adds audio recording support to [Flet](https://flet.dev) apps.
It is based on the [record](https://pub.dev/packages/record) Flutter package.
## Documentation
Detailed documentation for this package can be found [here](https://docs.flet.dev/audio-recorder/).
## Platform Support
| Platform | Windows | macOS | Linux | iOS | Android | Web |
|----------|---------|-------|-------|-----|---------|-----|
| Supported| ✅ | ✅ | ✅ | ✅ | ✅ | ✅ |
## Usage
### Installation
To install the `flet-audio-recorder` package and add it to your project dependencies:
- Using `uv`:
```bash
uv add flet-audio-recorder
```
- Using `pip`:
```bash
pip install flet-audio-recorder
```
If you installed with `pip`, you will have to manually add this package to your `requirements.txt` or `pyproject.toml`; `uv add` records the dependency in `pyproject.toml` automatically.
> [!NOTE]
> On Linux, encoding is provided by [fmedia](https://stsaz.github.io/fmedia/), which must be installed separately.
### Examples
For examples, see [these](https://github.com/flet-dev/flet/tree/main/sdk/python/examples/services/audio_recorder).
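The sketch below outlines the typical record/stop flow. The `AudioRecorder` service, its registration via `page.overlay`, and the `start_recording`/`stop_recording` method names follow the older built-in Flet API and are assumptions here; verify them against the linked examples:

```python
import flet as ft
import flet_audio_recorder as far  # assumed import name


def main(page: ft.Page):
    # AudioRecorder and its methods follow the pre-extension Flet API
    # and are assumptions - verify against the documentation.
    recorder = far.AudioRecorder()
    page.overlay.append(recorder)

    def start(e):
        recorder.start_recording("output.wav")

    def stop(e):
        print("Saved to:", recorder.stop_recording())

    page.add(
        ft.Row(
            controls=[
                ft.ElevatedButton("Record", on_click=start),
                ft.ElevatedButton("Stop", on_click=stop),
            ]
        )
    )


ft.run(main)
```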
| text/markdown | null | Flet contributors <hello@flet.dev> | null | null | null | null | [] | [] | null | null | >=3.10 | [] | [] | [] | [
"flet==0.80.6.dev7615"
] | [] | [] | [] | [
"Homepage, https://flet.dev",
"Documentation, https://docs.flet.dev/audio-recorder",
"Repository, https://github.com/flet-dev/flet/tree/main/sdk/python/packages/flet-audio-recorder",
"Issues, https://github.com/flet-dev/flet/issues"
] | uv/0.10.4 {"installer":{"name":"uv","version":"0.10.4","subcommand":["publish"]},"python":null,"implementation":{"name":null,"version":null},"distro":{"name":"Ubuntu","version":"24.04","id":"noble","libc":null},"system":{"name":null,"release":null},"cpu":null,"openssl_version":null,"setuptools_version":null,"rustc_version":null,"ci":true} | 2026-02-19T20:35:15.729027 | flet_audio_recorder-0.80.6.dev7615-py3-none-any.whl | 27,683 | ac/5f/24ba6e3790efbe354115a7b0c672c09b87b43d68db7f128267ce5a034e45/flet_audio_recorder-0.80.6.dev7615-py3-none-any.whl | py3 | bdist_wheel | null | false | 4915b5ea922530765dc79ed107dc8e48 | 7a408ff1ad55525c6f8d747308a743d5e4630fb8e019814ebb07e55b044ee219 | ac5f24ba6e3790efbe354115a7b0c672c09b87b43d68db7f128267ce5a034e45 | Apache-2.0 | [
"LICENSE"
] | 174 |
2.4 | flet-audio | 0.80.6.dev7615 | Provides audio integration and playback in Flet apps. | # flet-audio
[](https://pypi.python.org/pypi/flet-audio)
[](https://pepy.tech/project/flet-audio)
[](https://pypi.org/project/flet-audio)
[](https://github.com/flet-dev/flet/tree/main/sdk/python/packages/flet/docs/assets/badges/docs-coverage)
[](https://github.com/flet-dev/flet/blob/main/sdk/python/packages/flet-audio/LICENSE)
A [Flet](https://flet.dev) extension package for playing audio.
It is based on the [audioplayers](https://pub.dev/packages/audioplayers) Flutter package.
## Documentation
Detailed documentation for this package can be found [here](https://docs.flet.dev/audio/).
## Platform Support
| Platform | Windows | macOS | Linux | iOS | Android | Web |
|----------|---------|-------|-------|-----|---------|-----|
| Supported| ✅ | ✅ | ✅ | ✅ | ✅ | ✅ |
## Usage
### Installation
To install the `flet-audio` package and add it to your project dependencies:
- Using `uv`:
```bash
uv add flet-audio
```
- Using `pip`:
```bash
pip install flet-audio
```
If you installed with `pip`, you will have to manually add this package to your `requirements.txt` or `pyproject.toml`; `uv add` records the dependency in `pyproject.toml` automatically.
> [!NOTE]
> On Linux/WSL, you need to install the [`GStreamer`](https://github.com/GStreamer/gstreamer) library.
>
> If you receive `error while loading shared libraries: libgstapp-1.0.so.0`, it means `GStreamer` is not installed in your WSL environment.
>
> To install it, run the following command:
>
> ```bash
> apt install -y libgstreamer1.0-0 gstreamer1.0-plugins-base gstreamer1.0-plugins-good gstreamer1.0-plugins-bad gstreamer1.0-plugins-ugly gstreamer1.0-libav gstreamer1.0-tools
> ```
### Examples
For examples, see [these](https://github.com/flet-dev/flet/tree/main/sdk/python/examples/services/audio).
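For orientation, here is a minimal playback sketch. The `Audio` control with `src`/`autoplay` parameters mirrors the older built-in Flet control and is an assumption here; the sample URL is just an illustrative placeholder:

```python
import flet as ft
import flet_audio as fa  # assumed import name


def main(page: ft.Page):
    # Audio, src and autoplay mirror the older built-in Flet control
    # and are assumptions - verify against the documentation.
    audio = fa.Audio(
        src="https://example.com/sample.mp3",  # placeholder URL
        autoplay=True,
    )
    page.overlay.append(audio)
    page.add(ft.Text("Playing audio..."))


ft.run(main)
```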
| text/markdown | null | Flet contributors <hello@flet.dev> | null | null | null | null | [] | [] | null | null | >=3.10 | [] | [] | [] | [
"flet==0.80.6.dev7615"
] | [] | [] | [] | [
"Homepage, https://flet.dev",
"Documentation, https://docs.flet.dev/audio",
"Repository, https://github.com/flet-dev/flet/tree/main/sdk/python/packages/flet-audio",
"Issues, https://github.com/flet-dev/flet/issues"
] | uv/0.10.4 {"installer":{"name":"uv","version":"0.10.4","subcommand":["publish"]},"python":null,"implementation":{"name":null,"version":null},"distro":{"name":"Ubuntu","version":"24.04","id":"noble","libc":null},"system":{"name":null,"release":null},"cpu":null,"openssl_version":null,"setuptools_version":null,"rustc_version":null,"ci":true} | 2026-02-19T20:35:13.171090 | flet_audio-0.80.6.dev7615.tar.gz | 19,481 | 8f/cd/610c6d1b94da770badf63f2b367e8996df01238e1cf67cca631bdd0e2f35/flet_audio-0.80.6.dev7615.tar.gz | source | sdist | null | false | 35a9c184c0eedc7dea327d839bd6ce19 | 21458bb14fb40b14d37d7c0b2c215c3254a69fef91da23c6a384f8550b5c893e | 8fcd610c6d1b94da770badf63f2b367e8996df01238e1cf67cca631bdd0e2f35 | Apache-2.0 | [
"LICENSE"
] | 186 |
2.4 | flet-ads | 0.80.6.dev7615 | Display Google Ads in Flet apps. | # flet-ads
[](https://pypi.python.org/pypi/flet-ads)
[](https://pepy.tech/project/flet-ads)
[](https://pypi.org/project/flet-ads)
[](https://github.com/flet-dev/flet/tree/main/sdk/python/packages/flet/docs/assets/badges/docs-coverage)
[](https://github.com/flet-dev/flet/blob/main/sdk/python/packages/flet-ads/LICENSE)
Display Google Ads in [Flet](https://flet.dev) apps.
It is based on the [google_mobile_ads](https://pub.dev/packages/google_mobile_ads) Flutter package.
## Documentation
Detailed documentation for this package can be found [here](https://docs.flet.dev/ads/).
## Platform Support
| Platform | Windows | macOS | Linux | iOS | Android | Web |
|----------|---------|-------|-------|-----|---------|-----|
| Supported| ❌ | ❌ | ❌ | ✅ | ✅ | ❌ |
## Usage
### Installation
To install the `flet-ads` package and add it to your project dependencies:
- Using `uv`:
```bash
uv add flet-ads
```
- Using `pip`:
```bash
pip install flet-ads
```
If you installed with `pip`, you will have to manually add this package to your `requirements.txt` or `pyproject.toml`; `uv add` records the dependency in `pyproject.toml` automatically.
### Examples
For examples, see [these](https://github.com/flet-dev/flet/tree/main/sdk/python/examples/controls/ads).
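A banner sketch for orientation: the `BannerAd` control name and `unit_id` parameter are assumptions here, and the ID shown is Google's published Android *test* banner unit, not a production ad unit:

```python
import flet as ft
import flet_ads as ads  # assumed import name


def main(page: ft.Page):
    # BannerAd and unit_id are assumed names; the ID below is Google's
    # documented Android test banner unit - replace it in production.
    banner = ads.BannerAd(unit_id="ca-app-pub-3940256099942544/6300978111")
    page.add(ft.Container(content=banner, height=60))


ft.run(main)
```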
| text/markdown | null | Flet contributors <hello@flet.dev> | null | null | null | null | [] | [] | null | null | >=3.10 | [] | [] | [] | [
"flet==0.80.6.dev7615"
] | [] | [] | [] | [
"Homepage, https://flet.dev",
"Documentation, https://docs.flet.dev/ads",
"Repository, https://github.com/flet-dev/flet/tree/main/sdk/python/packages/flet-ads",
"Issues, https://github.com/flet-dev/flet/issues"
] | uv/0.10.4 {"installer":{"name":"uv","version":"0.10.4","subcommand":["publish"]},"python":null,"implementation":{"name":null,"version":null},"distro":{"name":"Ubuntu","version":"24.04","id":"noble","libc":null},"system":{"name":null,"release":null},"cpu":null,"openssl_version":null,"setuptools_version":null,"rustc_version":null,"ci":true} | 2026-02-19T20:35:10.137671 | flet_ads-0.80.6.dev7615.tar.gz | 20,586 | 55/19/8c83481a38a88177a7ba9aad3ee8103d3188913da7329bb8194fb69ee714/flet_ads-0.80.6.dev7615.tar.gz | source | sdist | null | false | 81cb47d9e193de086724f3e60499f03f | f658476206041a64b4c9f2857eebaea3e2028d81e54682c5131931069516ad58 | 55198c83481a38a88177a7ba9aad3ee8103d3188913da7329bb8194fb69ee714 | Apache-2.0 | [
"LICENSE"
] | 176 |
2.4 | flet-web | 0.80.6.dev7615 | Flet web client in Flutter. | # Flet Web client in Flutter
[](https://pypi.org/project/flet-web)
[](https://github.com/flet-dev/flet/tree/main/sdk/python/packages/flet/docs/assets/badges/docs-coverage)
This package contains a compiled Flutter Flet web client.
| text/markdown | null | "Appveyor Systems Inc." <hello@flet.dev> | null | null | null | null | [] | [] | null | null | >=3.10 | [] | [] | [] | [
"flet",
"fastapi>=0.115.12",
"uvicorn[standard]>=0.35.0"
] | [] | [] | [] | [
"Homepage, https://flet.dev",
"Repository, https://github.com/flet-dev/flet",
"Documentation, https://flet.dev/docs"
] | uv/0.10.4 {"installer":{"name":"uv","version":"0.10.4","subcommand":["publish"]},"python":null,"implementation":{"name":null,"version":null},"distro":{"name":"Ubuntu","version":"24.04","id":"noble","libc":null},"system":{"name":null,"release":null},"cpu":null,"openssl_version":null,"setuptools_version":null,"rustc_version":null,"ci":true} | 2026-02-19T20:35:05.308070 | flet_web-0.80.6.dev7615-py3-none-any.whl | 22,205,693 | 6e/8b/0e538192d5017483e427669ea694716d12a66bac14915c917f7fba07efa4/flet_web-0.80.6.dev7615-py3-none-any.whl | py3 | bdist_wheel | null | false | eb9ce7fe86fd6a7e5cbc4a2f0e6bf8bb | d7983bf2b39df3493233503c609c59355efda8229fff36a47e46ff7d6a3aa0d2 | 6e8b0e538192d5017483e427669ea694716d12a66bac14915c917f7fba07efa4 | Apache-2.0 | [] | 195 |
2.4 | flet-desktop-light | 0.80.6.dev7615 | Flet Desktop client in Flutter (light) | # Flet Desktop client in Flutter (light)
[](https://pypi.org/project/flet-desktop-light)
[](https://github.com/flet-dev/flet/tree/main/sdk/python/packages/flet/docs/assets/badges/docs-coverage)
This package contains a compiled Flutter Flet desktop client with audio and video
components removed.
| text/markdown | null | "Appveyor Systems Inc." <hello@flet.dev> | null | null | null | null | [] | [] | null | null | >=3.10 | [] | [] | [] | [
"flet"
] | [] | [] | [] | [
"Homepage, https://flet.dev",
"Repository, https://github.com/flet-dev/flet",
"Documentation, https://flet.dev/docs"
] | uv/0.10.4 {"installer":{"name":"uv","version":"0.10.4","subcommand":["publish"]},"python":null,"implementation":{"name":null,"version":null},"distro":{"name":"Ubuntu","version":"24.04","id":"noble","libc":null},"system":{"name":null,"release":null},"cpu":null,"openssl_version":null,"setuptools_version":null,"rustc_version":null,"ci":true} | 2026-02-19T20:34:59.024045 | flet_desktop_light-0.80.6.dev7615-py3-none-manylinux_2_35_x86_64.whl | 20,808,485 | 0a/82/af6a82fe856cf156b26d69df68792fdc71e8b20e555e3463d1a2fa372977/flet_desktop_light-0.80.6.dev7615-py3-none-manylinux_2_35_x86_64.whl | py3 | bdist_wheel | null | false | d1b2ad567da7d6a4db8b998b347ea4ab | 0687aacde43d4f94adbb07e67dfefa699b29fdc4dd9ba0bd41fa452b5a2e840b | 0a82af6a82fe856cf156b26d69df68792fdc71e8b20e555e3463d1a2fa372977 | Apache-2.0 | [] | 744 |
2.4 | ciri-ai | 0.0.8 | CIRI Copilot — a local AI agent CLI with skills, toolkits, and subagents | # CIRI Copilot
[](https://pypi.org/project/ciri/)
[](https://www.python.org/downloads/)
[](LICENSE.md)
[](https://docs.astral.sh/uv/)
**CIRI (Contextual Intelligent Runtime Interface) Copilot** — a local, desktop-class AI copilot that runs as a command-line interface (CLI). It provides interactive chat with AI models, thread-based conversation management, file- and skill-aware autocompletion, and an extensible skills/toolkit system.
This README is intentionally neutral and written for both developers and non-developers, covering what the project does, how to get started, how to configure it, and key implementation notes and limitations.
---
## Table of Contents
- [What CIRI is (brief)](#what-ciri-is-brief)
- [Features](#features)
- [Who should use it](#who-should-use-it)
- [Prerequisites](#prerequisites)
- [Windows](#windows)
- [macOS](#macos)
- [Linux](#linux)
- [Installation](#installation)
- [Clone the repo](#clone-the-repo)
- [Install (global vs development)](#install-global-vs-development)
- [Configuration](#configuration)
- [OpenRouter API key](#openrouter-api-key)
- [Quickstart](#quickstart)
- [API Mode (Programmatic Access)](#api-mode-programmatic-access)
- [Commands reference (short)](#commands-reference-short)
- [Developer notes](#developer-notes)
- [Limitations & privacy](#limitations--privacy)
- [Troubleshooting](#troubleshooting)
- [Contributing](#contributing)
- [License](#license)
- [Contact](#contact)
---
## What CIRI is (brief)
CIRI is a local CLI application that helps users interact with AI models and tools from their terminal. It uses OpenRouter (or compatible providers) for model access and aims to balance interactivity, local storage, and extensibility via "skills" and toolkits.
## Features
- **Interactive AI Chat**: Streaming responses with rich terminal formatting.
- **Multi-Provider Support**: Seamless integration with OpenRouter or direct providers (Anthropic, OpenAI, Google, etc.) via LangChain.
- **Multimodal Content**: Support for images, audio, and documents (PDF, CSV, etc.) in conversation.
- **Thread-Based Management**: Save, switch, and delete conversation threads locally.
- **Deep Contextual Autocompletion**: High-performance autocompletion for `@files:`, `@folders:`, `@skills:`, `@toolkits:`, `@subagents:`, and `@harness:`.
- **Self-Evolution**: Ciri can analyze its workspace and register new skills, toolkits, and subagents on the fly.
- **Human-in-the-Loop (HITL)**: Approve, reject, or edit tool actions (shell commands, file edits) before they execute.
- **Local Storage**: Checkpoint and conversation history stored in a local SQLite database.
- **Extensible Architecture**: Easily add new skills and toolkits.
- **Programmatic API Mode**: `--api` mode provides a persistent server with Unix socket interface for building custom UIs and backend integrations. Supports streaming events, thread state queries, and model/browser profile switching.
## Who should use it
- Non-developers: a lightweight, local AI chat assistant accessible from the terminal.
- Developers: a base to extend with new skills, integrate tools, or customize model usage.
## Prerequisites
Minimum recommended: **Python 3.12+** (the project is developed and tested on 3.12). Earlier versions are untested; adjust and test at your own risk if you need them.
### Windows
- Git
- Python 3.12+ (check "Add Python to PATH" during install)
- uv (https://docs.astral.sh/uv/)
PowerShell (install uv):
```powershell
powershell -ExecutionPolicy ByPass -c "irm https://astral.sh/uv/install.ps1 | iex"
```
### macOS
- Git (Xcode Command Line Tools or Homebrew)
- Python 3.12+ (Homebrew: `brew install python@3.12`)
Install uv:
```bash
curl -LsSf https://astral.sh/uv/install.sh | sh
```
### Linux (Ubuntu/Debian example)
```bash
sudo apt update && sudo apt upgrade -y
sudo apt install git -y
# If Python 3.12 is not available, consider using the deadsnakes PPA on Ubuntu:
# sudo add-apt-repository ppa:deadsnakes/ppa -y
# sudo apt update
# sudo apt install python3.12 python3.12-venv python3.12-dev -y
# Install uv
curl -LsSf https://astral.sh/uv/install.sh | sh
```
---
## Installation
### From PyPI (recommended)
```bash
pip install ciri-ai
```
Or with [uv](https://docs.astral.sh/uv/):
```bash
# Install as a global tool (recommended)
uv tool install ciri-ai --force --refresh
# Or add to your current project
uv add ciri-ai
```
After install, the `ciri` command is available globally.
### From source
Clone the repo:
```bash
git clone https://github.com/adimis-ai/ciri.git
cd ciri
```
Option 1 — Global install from source (recommended for users):
```bash
uv tool install .
```
This places the `ciri` command into your user bin (commonly `~/.local/bin`) and isolates dependencies.
Option 2 — Development / editable (recommended for contributors):
```bash
# create and sync virtual environment with uv
uv sync
# install package in editable mode
uv pip install -e .
```
---
## Configuration
### API Keys
CIRI supports multiple providers. By default, it uses **OpenRouter**, but you can use any provider supported by LangChain (OpenAI, Anthropic, Google, Mistral, etc.).
- **Interactive Setup**: If an API key is missing for your chosen model, CIRI will prompt you to enter it on startup and offer to persist it globally in `~/.ciri/.env` and your shell profile.
- **Environment Variables**: You can also set them manually:
```bash
export OPENROUTER_API_KEY="your-key"
export ANTHROPIC_API_KEY="your-key"
# etc.
```
### Model Gateway
You can switch between `langchain` (default) and `openrouter` gateways via the `LLM_GATEWAY_PROVIDER` variable.
```bash
export LLM_GATEWAY_PROVIDER="langchain" # Supports provider:model format
```
**Security note:** do not commit API keys to version control.
---
## Quickstart
Start the CLI:
```bash
ciri
```
On first run, you will be guided through model and browser profile selection.
### Common interactions
- **Reference Files**: Type `@files:` then a path fragment.
- **Reference Folders**: Type `@folders:` then a path fragment.
- **Reference Harness**: Type `@harness:` to select core or project harness directories — shown with `(Core)` and `(Current)` flags.
- **Use Skills**: Type `@skills:` to see available local skills.
- **Sync Workspace**: Run `/sync` to let Ciri discover your local setup.
- **Change Model**: Run `/change-model` to switch AI providers/models.
- **Manage Threads**: Use `/threads` to list or `/new-thread` to start fresh.
Example session
```text
You> Hello, analyze the @src/__main__.py file
CIRI> [analysis about the file]
You> /threads
# shows list of threads
You> exit
Goodbye!
```
---
## API Mode (Programmatic Access)
For building custom UIs or backend integrations, use the **`--api` mode** with a persistent Unix socket server:
```bash
# Start the API server (holds copilot in memory)
ciri --api --server &
# Send commands from your backend/UI
ciri --api --run --input '{"messages": [{"type": "human", "content": "Hello"}]}'
ciri --api --state --config '{"configurable": {"thread_id": "..."}}'
ciri --api --history --config '{"configurable": {"thread_id": "..."}}'
ciri --api --change-model 'anthropic/claude-opus-4-6'
ciri --api --change-browser-profile '{"browser": "chrome", "profile_directory": "Default"}'
```
All responses are **NDJSON** (newline-delimited JSON) streamed to stdout. The server auto-starts if needed.
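Because each response line is an independent JSON object, consuming the stream from another process takes only a few lines. A minimal sketch using just the Python standard library (no event schema is assumed; objects are parsed and printed as-is):

```python
import json
import subprocess

# Invoke the CLI exactly as shown above and read its NDJSON stream.
proc = subprocess.Popen(
    [
        "ciri", "--api", "--run",
        "--input", '{"messages": [{"type": "human", "content": "Hello"}]}',
    ],
    stdout=subprocess.PIPE,
    text=True,
)

for line in proc.stdout:
    line = line.strip()
    if line:
        event = json.loads(line)  # one JSON object per line
        print(event)

proc.wait()
```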
→ [Full API Reference](docs-site/docs/api-reference.md)
---
## Commands Reference
| Command | Description |
| :--- | :--- |
| `/threads` | List all conversation threads. |
| `/switch-thread` | Interactively switch to another thread. |
| `/new-thread` | Start a new conversation thread. |
| `/delete-thread` | Delete the current thread history. |
| `/change-model` | Change the active LLM model. |
| `/change-browser-profile` | Switch browser profiles for research. |
| `/sync` | Analyze workspace & register skills/subagents. |
| `/help` | Show the help menu. |
| `/exit` | Exit the CLI. |
**Keyboard shortcuts**
- `Tab` — autocomplete file paths, skills, or model names
- `Ctrl+C` — cancel current operation
---
## Developer notes
**High-level architecture**
- Entry point / CLI: `ciri` starts an interactive REPL-like chat.
- Core Logic: `CopilotController` manages threads and executes the agent graph (supports multimodal inputs).
- Model integration: OpenRouter client used for model calls; streaming and selection handled by runtime code.
- Tools & skills: extensible skills discovered under `.ciri/skills` (skills may include scripts, validators, and metadata).
- Storage: local conversation storage — see code for details.
**Key locations**
- `src/` — main package and CLI entry points (look for `__main__.py` or CLI module)
- `.ciri/skills` — bundled skills and examples
- `tests/` — test suite
- `pyproject.toml` — project metadata and dependencies
**Extending**
- Follow patterns used in `.ciri/skills` to add new skills
- Document inputs/outputs and add tests under `tests/`
**Development tips**
- Use `uv sync` to prepare the development environment
- Install editable: `uv pip install -e .`
---
## Limitations & privacy
- CIRI relies on third-party model providers (OpenRouter). Provider policies, costs, and behavior apply.
- Conversations are stored locally, but model requests are sent over the network to the chosen provider. Avoid sending sensitive data unless you accept the provider's terms.
- Offline use requires configuring or running compatible local models — not provided by default.
---
## Troubleshooting
**Command `ciri` not found**
Cause: user bin (e.g., `~/.local/bin`) not in `PATH`.
Fix (Linux/macOS):
```bash
echo 'export PATH="$HOME/.local/bin:$PATH"' >> ~/.profile
# or add to ~/.bashrc or ~/.zshrc and restart the shell
```
**Python version error**
Cause: Python < 3.12 installed. Install Python 3.12+ and ensure `uv` or your environment uses it.
**API key errors**
Cause: invalid or missing OpenRouter API key. Verify at https://openrouter.ai/keys and re-enter when prompted. Remove saved key files if needed (e.g., `~/.ciri/.env`).
**Permission denied when writing data**
Cause: incorrect ownership of `~/.ciri` or other data directories.
Fix (Linux):
```bash
sudo chown -R "$USER":"$USER" ~/.ciri
```
---
## Contributing
Contributions welcome. See `CONTRIBUTING.md`.
Suggested flow:
1. Fork and create a branch
2. Run tests and add tests for new behavior
3. Open a PR with a clear description
---
## License
MIT — see `LICENSE.md`.
---
## Contact
Aditya Mishra — https://github.com/adimis-ai
Project: https://github.com/adimis-ai/ciri
| text/markdown | null | Aditya Mishra <adimis.ai.001@gmail.com> | null | null | # MIT License
Copyright (c) 2026 Aditya Mishra
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE. | agent, ai, automation, cli, copilot, langchain, langgraph, llm, skills, subagents, toolkits | [
"Development Status :: 3 - Alpha",
"Environment :: Console",
"Intended Audience :: Developers",
"License :: OSI Approved :: MIT License",
"Natural Language :: English",
"Operating System :: OS Independent",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.12",
"Programming... | [] | null | null | >=3.12 | [] | [] | [] | [
"anyio",
"asgiref>=3.11.1",
"black>=26.1.0",
"crawl4ai",
"ddgs",
"deepagents",
"duckduckgo-search",
"httpx",
"importlib-metadata; python_version < \"3.13\"",
"langchain",
"langchain-anthropic>=1.3.1",
"langchain-cohere>=0.5.0",
"langchain-community",
"langchain-core",
"langchain-deepseek... | [] | [] | [] | [
"Homepage, https://github.com/adimis-ai/ciri",
"Repository, https://github.com/adimis-ai/ciri",
"Issues, https://github.com/adimis-ai/ciri/issues",
"Changelog, https://github.com/adimis-ai/ciri/releases"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-19T20:34:41.334875 | ciri_ai-0.0.8.tar.gz | 3,401,283 | b1/0b/2dafd6e415888ee72d8442ddc1269821ed456fe13d235b6193c1c1ce7cd2/ciri_ai-0.0.8.tar.gz | source | sdist | null | false | 8fedd1b191433b64ff33cf6fd773ecda | 4e6bcb700c8b63c5e7a22a6677216ca5c7a0b9ef3f1137cbc9076e363bbdd90d | b10b2dafd6e415888ee72d8442ddc1269821ed456fe13d235b6193c1c1ce7cd2 | null | [
"LICENSE.md"
] | 193 |
2.4 | prefect-client | 3.6.18 | Workflow orchestration and management. | <p align="center"><img src="https://github.com/PrefectHQ/prefect/assets/3407835/c654cbc6-63e8-4ada-a92a-efd2f8f24b85" width=1000></p>
<p align="center">
<a href="https://pypi.python.org/pypi/prefect-client/" alt="PyPI version">
<img alt="PyPI" src="https://img.shields.io/pypi/v/prefect-client?color=0052FF&labelColor=090422"></a>
<a href="https://github.com/prefecthq/prefect/" alt="Stars">
<img src="https://img.shields.io/github/stars/prefecthq/prefect?color=0052FF&labelColor=090422" /></a>
<a href="https://pepy.tech/badge/prefect-client/" alt="Downloads">
<img src="https://img.shields.io/pypi/dm/prefect-client?color=0052FF&labelColor=090422" /></a>
<a href="https://github.com/prefecthq/prefect/pulse" alt="Activity">
<img src="https://img.shields.io/github/commit-activity/m/prefecthq/prefect?color=0052FF&labelColor=090422" /></a>
<br>
<a href="https://prefect.io/slack" alt="Slack">
<img src="https://img.shields.io/badge/slack-join_community-red.svg?color=0052FF&labelColor=090422&logo=slack" /></a>
<a href="https://www.youtube.com/c/PrefectIO/" alt="YouTube">
<img src="https://img.shields.io/badge/youtube-watch_videos-red.svg?color=0052FF&labelColor=090422&logo=youtube" /></a>
</p>
# prefect-client
The `prefect-client` package is a minimal installation of `prefect` designed for interacting with Prefect Cloud
or any remote `prefect` server. It sheds some functionality and dependencies in exchange for a smaller installation
size, making it ideal for lightweight or ephemeral environments such as lambdas and other resource-constrained
runtimes.
## Getting started
`prefect-client` shares the same installation requirements as prefect. To install, make sure you are on Python 3.10 or
later and run the following command:
```bash
pip install prefect-client
```
Next, ensure that your `prefect-client` has access to a remote `prefect` server by exporting the `PREFECT_API_KEY`
(if using Prefect Cloud) and `PREFECT_API_URL` environment variables. Once those are set, use the package in your code as
you would normally use `prefect`!
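Something like the following (placeholders in angle brackets; the Cloud URL shown follows Prefect Cloud's standard account/workspace form):

```bash
export PREFECT_API_URL="https://api.prefect.cloud/api/accounts/<ACCOUNT_ID>/workspaces/<WORKSPACE_ID>"
export PREFECT_API_KEY="<YOUR_API_KEY>"
```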
For example, to remotely trigger a run of a deployment:
```python
from prefect.deployments import run_deployment
def my_lambda(event):
...
run_deployment(
name="my-flow/my-deployment",
parameters={"foo": "bar"},
timeout=0,
)
my_lambda({})
```
To emit events in an event-driven system:
```python
from prefect.events import emit_event
def something_happened():
emit_event("my-event", resource={"prefect.resource.id": "foo.bar"})
something_happened()
```
Or just interact with a `prefect` API:
```python
import asyncio

from prefect.client.orchestration import get_client

async def query_api():
    async with get_client() as client:
        limits = await client.read_concurrency_limits(limit=10, offset=0)
        print(limits)

# An async function must be run on an event loop; calling it bare
# would only create a coroutine without executing it.
asyncio.run(query_api())
```
## Known limitations
By design, `prefect-client` omits all CLI and server components. This means that the CLI is not available,
and attempts to access server objects will fail. Furthermore, some classes, methods, and objects may be available
for import in `prefect-client` but may not be "runnable" if they tap into server-oriented functionality. If you
encounter such a limitation, feel free to [open an issue](https://github.com/PrefectHQ/prefect/issues/new/choose)
describing the functionality you are interested in using and we will do our best to make it available.
## Next steps
There's lots more you can do to orchestrate and observe your workflows with Prefect!
Start with our [friendly tutorial](https://docs.prefect.io/tutorials) or explore the [core concepts of Prefect workflows](https://docs.prefect.io/concepts/).
## Join the community
Prefect is made possible by the fastest growing community of thousands of friendly data engineers. Join us in building a new kind of workflow system.
The [Prefect Slack community](https://prefect.io/slack) is a fantastic place to learn more about Prefect, ask questions, or get help with workflow design.
All community forums, including code contributions, issue discussions, and Slack messages are subject to our [Code of Conduct](https://github.com/PrefectHQ/prefect/blob/main/CODE_OF_CONDUCT.md).
## Contribute
See our [documentation on contributing to Prefect](https://docs.prefect.io/contributing/overview/).
Thanks for being part of the mission to build a new kind of workflow system and, of course, **happy engineering!**
| text/markdown | null | "Prefect Technologies, Inc." <help@prefect.io> | null | null | Apache-2.0 | null | [
"Intended Audience :: Developers",
"Intended Audience :: System Administrators",
"License :: OSI Approved :: Apache Software License",
"Natural Language :: English",
"Programming Language :: Python :: 3 :: Only",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Prog... | [] | null | null | <3.15,>=3.10 | [] | [] | [] | [
"amplitude-analytics<2.0.0,>=1.2.1",
"anyio<5.0.0,>=4.4.0",
"asgi-lifespan<3.0,>=1.0",
"cachetools<8.0,>=5.3",
"cloudpickle<4.0,>=2.0",
"coolname<4.0.0,>=1.0.4",
"dateparser<2.0.0,>=1.1.1",
"exceptiongroup>=1.0.0",
"fastapi<1.0.0,>=0.111.0",
"fsspec>=2022.5.0",
"graphviz>=0.20.1",
"griffe<3.0.... | [] | [] | [] | [
"Changelog, https://github.com/PrefectHQ/prefect/releases",
"Documentation, https://docs.prefect.io",
"Source, https://github.com/PrefectHQ/prefect",
"Tracker, https://github.com/PrefectHQ/prefect/issues"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-19T20:34:37.737497 | prefect_client-3.6.18.tar.gz | 762,211 | b5/3b/d9c3973abbbd9cd938de356568c045e9753f3d1e800b5ff224daa92e9a78/prefect_client-3.6.18.tar.gz | source | sdist | null | false | 94f70d5144845f8161e1dd04e7c132d8 | 57fd253b76bc18b81009dfd4fdb4926cde935528eebf281de2f5daafb80ba3b3 | b53bd9c3973abbbd9cd938de356568c045e9753f3d1e800b5ff224daa92e9a78 | null | [
"LICENSE"
] | 325 |
2.4 | flet-desktop | 0.80.6.dev7615 | Flet Desktop client in Flutter | # Flet Desktop client in Flutter
[](https://pypi.org/project/flet-desktop)
[](https://github.com/flet-dev/flet/tree/main/sdk/python/packages/flet/docs/assets/badges/docs-coverage)
This package contains a compiled Flutter Flet desktop client.
| text/markdown | null | "Appveyor Systems Inc." <hello@flet.dev> | null | null | null | null | [] | [] | null | null | >=3.10 | [] | [] | [] | [
"flet"
] | [] | [] | [] | [
"Homepage, https://flet.dev",
"Repository, https://github.com/flet-dev/flet",
"Documentation, https://flet.dev/docs"
] | uv/0.10.4 {"installer":{"name":"uv","version":"0.10.4","subcommand":["publish"]},"python":null,"implementation":{"name":null,"version":null},"distro":{"name":"Ubuntu","version":"24.04","id":"noble","libc":null},"system":{"name":null,"release":null},"cpu":null,"openssl_version":null,"setuptools_version":null,"rustc_version":null,"ci":true} | 2026-02-19T20:34:17.386324 | flet_desktop-0.80.6.dev7615-py3-none-manylinux_2_39_x86_64.whl | 23,287,635 | 73/0d/301b9bb57e27615df0e274bcfc254570e7965c23454b92e0b94452c149a4/flet_desktop-0.80.6.dev7615-py3-none-manylinux_2_39_x86_64.whl | py3 | bdist_wheel | null | false | 5199c071d0e47f57381d43a429d507e3 | e1d719641128bea8900fad28fbd1b722cb3dc09b42f765247bddefd77bc94382 | 730d301b9bb57e27615df0e274bcfc254570e7965c23454b92e0b94452c149a4 | Apache-2.0 | [] | 931 |
2.3 | py-cloud-task | 0.1.0 | A framework agnostic Google Cloud Tasks library for the Push architecture. | <h1 align="center">py-cloud-task</h1>
<p align="center">
<strong>A Framework Agnostic Client for Google Cloud Tasks.</strong>
<br>
Move from "Pull" (Workers) to "Push" (Serverless) architecture effortlessly.
</p>
<p align="center">
<a href="https://github.com/ziett/py-cloud-task/actions" target="_blank">
<img src="https://img.shields.io/github/actions/workflow/status/ziett/py-cloud-task/tests.yml?branch=main&label=tests&style=flat-square" alt="Tests">
</a>
<a href="https://pypi.org/project/py-cloud-task/" target="_blank">
<img src="https://img.shields.io/pypi/pyversions/py-cloud-task.svg?color=%2334D058&style=flat-square" alt="Supported Python versions">
</a>
</p>
---
**py-cloud-task** is a lightweight, async-first library that abstracts the complexity of **Google Cloud Tasks**. It
provides a developer experience similar to Celery or TaskIQ but is designed specifically for **Serverless**
environments (Cloud Run, App Engine, Cloud Functions, FastAPI).
It handles serialization, authentication (OIDC), scheduling, and—crucially—**FastAPI Dependency Injection**
automatically.
### Why use this instead of Celery/Redis?
| Feature | Celery / Redis (Pull) | py-cloud-task (Push) |
|:-----------------|:---------------------------------------------|:--------------------------------------------------|
| **Architecture** | Workers poll Redis 24/7 ("Are there tasks?") | Google calls your API via HTTP ("Here is a task") |
| **Cost** | You pay for idle workers & Redis instances | **Pay-per-use** (Scale to Zero supported) |
| **Infra** | Requires Redis/RabbitMQ management | **Zero Ops** (Managed by Google) |
| **Retries** | Managed by worker code | **Native** (Exponential backoff managed by GCP) |
| **DX** | Heavy setup | **Decorator-based** (Just like FastAPI) |
---
## Installation
Currently, the package is available via GitHub. You can install it using `uv` or `pip`.
### Using uv (Recommended)
```bash
# Core installation
uv add "py-cloud-task @ git+https://github.com/uhmiller/py-cloud-task.git"
# With FastAPI support
uv add "py-cloud-task[fastapi] @ git+https://github.com/uhmiller/py-cloud-task.git"
# For local simulation (tests)
uv add "py-cloud-task[test] @ git+https://github.com/uhmiller/py-cloud-task.git"
```
---
## Quick Start
### 1. Configure the Client
The `CloudTaskClient` is the entry point. It holds the configuration for your Google Cloud project and queue.
```python
from cloudtask import CloudTaskClient
client = CloudTaskClient(
project="my-gcp-project",
location="europe-west1",
queue="default",
url="[https://api.myapp.com/tasks/run](https://api.myapp.com/tasks/run)", # The public URL of your worker
sae="my-service-account@my-gcp-project.iam.gserviceaccount.com", # Service Account email for OIDC auth
secret="super-secret-token", # Optional: Header secret for extra security
force_to_queue=None, # Optional: Force all tasks to a specific queue (useful for Staging)
eager=None, # None = Production (Sends to Google Cloud)
)
```
### 2. Define a Task
Use the `@client.task` decorator. You can define tasks anywhere in your code.
```python
@client.task(queue='high-priority', name='unique-task-name')
async def send_welcome_email(user_id: str, email: str):
print(f"Sending email to {email}...")
# ... logic to send email ...
return "sent"
```
### 3. Trigger the Task
You can trigger tasks asynchronously. This will serialize the arguments and send them to Google Cloud Tasks.
```python
# Simple trigger
await send_welcome_email(user_id="123", email="user@example.com").push()
```
---
## Advanced Usage
### Scheduling (Delayed Execution)
Schedule a task to run in the future using the `at` parameter.
```python
from datetime import datetime, timedelta
# Run 1 hour from now
eta = datetime.now() + timedelta(hours=1)
await send_welcome_email("123", "user@example.com").push(at=eta)
```
### Task Deduplication (Named Tasks)
Google Cloud Tasks ensures that tasks with the same name are executed only once. You can set a custom name to prevent
duplicate execution.
```python
# Instantiate the task wrapper first
task = send_welcome_email("123", "user@example.com")
# Set a deterministic name (e.g., specific to the user and action)
task.name = "welcome-email-user-123"
# Push to cloud
await task.push()
```
### Local Development (Eager Modes)
When developing locally, you often don't want to send tasks to Google Cloud. The `eager` parameter supports three modes
to help you develop and test safely.
#### Mode 1: Immediate Execution (`eager="immediate"`)
Runs the function directly in the current process. Fastest option for Unit Tests.
```python
client = CloudTaskClient(..., eager="immediate")
await send_welcome_email("123", "user@example.com").push()
# Result: Function runs instantly. No HTTP. No Serialization.
```
#### Mode 2: Remote Simulation (`eager="remote"`)
Simulates a full HTTP request to your local worker using `httpx`. This is perfect for **Integration Tests** because it
validates serialization, headers, and dependency injection without needing Google infrastructure.
```python
client = CloudTaskClient(..., eager="remote")
await send_welcome_email("123", "user@example.com").push()
# Result: Sends POST http://localhost:8000/tasks/run.
```
#### Mode 3: Production (`eager=None`)
The default behavior. Serializes the task and sends it to Google Cloud Tasks.
---
## FastAPI Integration
**py-cloud-task** has first-class support for FastAPI. It leverages FastAPI's native **Dependency Injection** system.
### 1. Setup the Router
```python
from fastapi import FastAPI
from cloudtask.fastapi import CloudTaskRouter
from app.core.tasks import client
app = FastAPI()
# Register the route that receives tasks from Google
app.include_router(CloudTaskRouter(client), prefix="/tasks")
```
### 2. Use `Depends` in Tasks
You can inject database sessions, services, or any other dependency directly into your tasks, just like in API
endpoints.
```python
from fastapi import Depends
from sqlalchemy.ext.asyncio import AsyncSession
from app.core.db import get_db
@client.task()
async def process_order(
order_id: int,
db: AsyncSession = Depends(get_db) # <--- Magic happens here
):
# The 'db' session is created, injected, and closed automatically!
order = await db.get(Order, order_id)
order.status = "processed"
await db.commit()
```
**Note:** When triggering the task, you **only** pass the data arguments. The dependencies are resolved automatically by
the worker.
```python
# Correct usage (Dependency is ignored during push)
await process_order(order_id=500).push()
```
---
## Security
To ensure that only Google Cloud Tasks can call your worker endpoint, the library supports two mechanisms:
1. **OIDC Token (Recommended):** The library automatically attaches an OIDC token identifying the Service Account. Your
Cloud Run/Functions service should validate this token (Google handles this automatically for Cloud Run if you don't
allow unauthenticated invocations).
2. **Secret Header:** You can configure a shared secret.
```python
client = CloudTaskClient(..., secret="my-secret-key")
```
The router will automatically validate the `X-PYCT-SECRET` header and reject unauthorized requests (403 Forbidden).
---
## Contributing
Contributions are welcome! If you find a bug or want to add a feature (e.g., Flask or Django adapters), please open an
issue or submit a PR.
---
<p align="center">
<span style="color: #666;">Built with ❤️ by the engineering team at <a href="https://ziett.co">Ziett</a></span>
</p>
<p align="center">
<a href="https://ziett.com">
<img src="https://ziett.co/icon.png" alt="Ziett Logo" width="60" height="60"/>
</a>
</p>
| text/markdown | Ageu | Ageu <uhtred@ohrus.co> | null | null | MIT | null | [
"Development Status :: 4 - Beta",
"Intended Audience :: Developers",
"Framework :: FastAPI",
"Framework :: Django",
"Topic :: System :: Distributed Computing"
] | [] | null | null | >=3.14 | [] | [] | [] | [
"google-cloud-tasks>=2.21.0",
"fastapi>=0.129.0; extra == \"fastapi\"",
"httpx>=0.28.1; extra == \"test\""
] | [] | [] | [] | [] | uv/0.9.29 {"installer":{"name":"uv","version":"0.9.29","subcommand":["publish"]},"python":null,"implementation":{"name":null,"version":null},"distro":{"name":"macOS","version":null,"id":null,"libc":null},"system":{"name":null,"release":null},"cpu":null,"openssl_version":null,"setuptools_version":null,"rustc_version":null,"ci":null} | 2026-02-19T20:34:03.432811 | py_cloud_task-0.1.0-py3-none-any.whl | 13,825 | 8b/4c/5d5b62fd04016df77ad16aac1c724ad0d550251826af1244a7db0e941760/py_cloud_task-0.1.0-py3-none-any.whl | py3 | bdist_wheel | null | false | 74f8a05a4db6b908bd2351f76ad5db8a | 6fe8150f7ffbfd4e319ee02b4e8b06621d12e539dc3ceebbe6cca4ea2327590a | 8b4c5d5b62fd04016df77ad16aac1c724ad0d550251826af1244a7db0e941760 | null | [] | 233 |
2.4 | flet-cli | 0.80.6.dev7615 | Flet CLI | # Flet CLI
[](https://pypi.org/project/flet-cli)
[](https://github.com/flet-dev/flet/tree/main/sdk/python/packages/flet/docs/assets/badges/docs-coverage)
Flet CLI is a command-line interface tool for Flet, a framework for building interactive multi-platform applications using Python.
## Features
- Create new Flet projects
- Run Flet applications
- Package and deploy Flet apps
## Basic Usage
To create a new Flet project:
```bash
flet create myapp
```
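Two other everyday commands are running an app during development and packaging it for distribution. Typical invocations look like this (see `flet --help` for the full command list and options):

```bash
# Run a Flet app during development
flet run myapp

# Package the app, e.g. as an Android APK
flet build apk
```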
| text/markdown | null | "Appveyor Systems Inc." <hello@flet.dev> | null | null | null | null | [] | [] | null | null | >=3.10 | [] | [] | [] | [
"flet",
"watchdog>=4.0.0",
"packaging>=25.0",
"qrcode>=7.4.2",
"tomli>=1.1.0; python_version < \"3.11\"",
"cookiecutter>=2.6.0"
] | [] | [] | [] | [
"Homepage, https://flet.dev",
"Repository, https://github.com/flet-dev/flet",
"Documentation, https://flet.dev/docs"
] | uv/0.10.4 {"installer":{"name":"uv","version":"0.10.4","subcommand":["publish"]},"python":null,"implementation":{"name":null,"version":null},"distro":{"name":"Ubuntu","version":"24.04","id":"noble","libc":null},"system":{"name":null,"release":null},"cpu":null,"openssl_version":null,"setuptools_version":null,"rustc_version":null,"ci":true} | 2026-02-19T20:33:15.027636 | flet_cli-0.80.6.dev7615.tar.gz | 54,100 | e3/0c/1c1228c56fb54d058a9edf39437bb0aef174f11751381d61c88928a7e21e/flet_cli-0.80.6.dev7615.tar.gz | source | sdist | null | false | 58479f6d75dcc65cfda64f38d28b1eb9 | 678c568b57e909e403a92279babe125a14ce515231dbaed9928b51e05d8ec7dc | e30c1c1228c56fb54d058a9edf39437bb0aef174f11751381d61c88928a7e21e | Apache-2.0 | [] | 192 |
2.4 | flet | 0.80.6.dev7615 | Flet for Python - easily build interactive multi-platform apps in Python | <p align="center">
<a href="https://flet.dev"><img src="https://raw.githubusercontent.com/flet-dev/flet/refs/heads/main/media/logo/flet-logo.svg" height="150" alt="Flet logo"></a>
</p>
<p align="center">
<em>Build multi-platform apps in Python. No frontend experience required.</em>
</p>
<p align="center">
<a href="https://github.com/flet-dev/flet/blob/main/LICENSE" target="_blank">
<img src="https://img.shields.io/github/license/flet-dev/flet.svg" alt="License" /></a>
<a href="https://pypi.org/project/flet" target="_blank">
<img src="https://img.shields.io/pypi/v/flet?color=%2334D058&label=pypi" alt="Package version" /></a>
<a href="https://pepy.tech/project/flet" target="_blank">
<img src="https://static.pepy.tech/badge/flet/month" alt="Monthly downloads" /></a>
<a href="https://pypi.org/project/flet" target="_blank">
<img src="https://img.shields.io/badge/python-%3E%3D3.10-%2334D058" alt="Python >= 3.10" /></a>
<a href="https://github.com/flet-dev/flet/actions/workflows/ci.yml" target="_blank">
<img src="https://github.com/flet-dev/flet/actions/workflows/ci.yml/badge.svg" alt="Build status" /></a>
<a href="https://github.com/flet-dev/flet/tree/main/sdk/python/packages/flet/docs/assets/badges/docs-coverage" target="_blank">
<img src="https://docs.flet.dev/assets/badges/docs-coverage/flet.svg" alt="Docstring coverage" /></a>
</p>
---
Flet is a framework that allows building mobile, desktop and web applications in Python only,
without prior experience in frontend development.
### <img src="https://flet.dev/img/pages/home/single-code-base.svg" width="25" align="top" /> Single code base for any device
Your app will equally look great on iOS, Android, Windows, Linux, macOS and web.
### <img src="https://flet.dev/img/pages/home/python.svg" width="25" align="top" /> Build an entire app in Python
Build a cross-platform app without knowledge of Dart, Swift, Kotlin, HTML or JavaScript - only Python!
### <img src="https://flet.dev/img/pages/home/controls.svg" width="25" align="top" /> 150+ built-in controls and services
Beautiful UI widgets with Material and Cupertino design: layout, navigation, dialogs, charts - Flet uses Flutter to render UI.
### <img src="https://flet.dev/img/pages/home/python-packages.svg" width="25" align="top" /> 50+ Python packages for iOS and Android
Numpy, pandas, pydantic, cryptography, opencv, pillow and other popular libraries.
### <img src="https://flet.dev/img/pages/home/web-support.svg" width="25" align="top" /> Full web support
Flet apps run natively in modern browsers using WebAssembly and Pyodide, with no server required. Prefer server-side? Deploy as a Python web app with real-time UI updates.
### <img src="https://flet.dev/img/pages/home/packaging.svg" width="25" align="top" /> Built-in packaging
Build standalone executables or bundles for iOS, Android, Windows, Linux, macOS and web. Instantly deploy to App Store and Google Play.
### <img src="https://flet.dev/img/pages/home/test-on-ios-android.svg" width="25" align="top" /> Test on iOS and Android
Test your project on your own mobile device with Flet App. See your app updates as you make changes.
### <img src="https://flet.dev/img/pages/home/extensible.svg" width="25" align="top" /> Extensible
Easily wrap any of thousands of Flutter packages to use with Flet or build new controls in pure Python using built-in UI primitives.
### <img src="https://flet.dev/img/pages/home/accessible.svg" width="25" align="top" /> Accessible
Flet is built with Flutter which has solid accessibility foundations on Android, iOS, web, and desktop.
## Flet app example
Below is a simple "Counter" app, with a text field and two buttons to increment and decrement the counter value:
```python title="counter.py"
import flet as ft
def main(page: ft.Page):
page.title = "Flet counter example"
page.vertical_alignment = ft.MainAxisAlignment.CENTER
input = ft.TextField(value="0", text_align=ft.TextAlign.RIGHT, width=100)
def minus_click(e):
input.value = str(int(input.value) - 1)
def plus_click(e):
input.value = str(int(input.value) + 1)
page.add(
ft.Row(
alignment=ft.MainAxisAlignment.CENTER,
controls=[
ft.IconButton(ft.Icons.REMOVE, on_click=minus_click),
input,
ft.IconButton(ft.Icons.ADD, on_click=plus_click),
],
)
)
ft.run(main)
```
To run the app, install `flet`:
```bash
pip install 'flet[all]'
```
then launch the app:
```bash
flet run counter.py
```
This will open the app in a native OS window - what a nice alternative to Electron! 🙂
<p align="center">
<img src="https://docs.flet.dev/assets/getting-started/counter-app/macos.png" width="45%" />
</p>
To run the same app as a web app use `--web` option with `flet run` command:
```bash
flet run --web counter.py
```
<p align="center">
<img src="https://docs.flet.dev/assets/getting-started/counter-app/safari.png" width="60%" />
</p>
## Learn more
* [Website](https://flet.dev)
* [Documentation](https://docs.flet.dev)
* [Roadmap](https://flet.dev/roadmap)
* [Apps Gallery](https://flet.dev/gallery)
## Community
* [Discussions](https://github.com/flet-dev/flet/discussions)
* [Discord](https://discord.gg/dzWXP8SHG8)
* [X (Twitter)](https://twitter.com/fletdev)
* [Bluesky](https://bsky.app/profile/fletdev.bsky.social)
* [Email us](mailto:hello@flet.dev)
## Contributing
Want to help improve Flet? Check out the [contribution guide](https://docs.flet.dev/contributing).
| text/markdown | null | "Appveyor Systems Inc." <hello@flet.dev> | null | null | null | null | [] | [] | null | null | >=3.10 | [] | [] | [] | [
"flet-cli; extra == \"cli\"",
"flet-web; extra == \"web\"",
"oauthlib>=3.2.2; platform_system != \"Pyodide\"",
"httpx>=0.28.1; platform_system != \"Pyodide\"",
"repath>=0.9.0",
"msgpack>=1.1.0",
"typing-extensions; python_version < \"3.11\"",
"flet-cli; extra == \"all\"",
"flet-web; extra == \"all\"... | [] | [] | [] | [
"Homepage, https://flet.dev",
"Repository, https://github.com/flet-dev/flet",
"Documentation, https://docs.flet.dev/"
] | uv/0.10.4 {"installer":{"name":"uv","version":"0.10.4","subcommand":["publish"]},"python":null,"implementation":{"name":null,"version":null},"distro":{"name":"Ubuntu","version":"24.04","id":"noble","libc":null},"system":{"name":null,"release":null},"cpu":null,"openssl_version":null,"setuptools_version":null,"rustc_version":null,"ci":true} | 2026-02-19T20:33:10.651963 | flet-0.80.6.dev7615.tar.gz | 439,831 | e4/97/a07e0c5563e03414f63fa57c01b15b5b12bd81dcd1562fabff08fdfab202/flet-0.80.6.dev7615.tar.gz | source | sdist | null | false | f75bfa8fe55be4801839e1b562386c9b | 06b8a43d48fda6b1c7a62f87a87fd91b4f1b80e54443f49fd807fa81b632a00e | e497a07e0c5563e03414f63fa57c01b15b5b12bd81dcd1562fabff08fdfab202 | Apache-2.0 | [] | 384 |
2.4 | eeco | 0.2.1 | Calculate electricity-related emissions and costs. | ******************************************
Electric Emissions & Cost Optimizer (EECO)
******************************************
.. image:: https://github.com/we3lab/eeco/workflows/Build%20Main/badge.svg
   :height: 30
   :target: https://github.com/we3lab/eeco/actions
   :alt: Build Status

.. image:: https://github.com/we3lab/eeco/workflows/Documentation/badge.svg
   :height: 30
   :target: https://we3lab.github.io/eeco
   :alt: Documentation

.. image:: https://codecov.io/gh/we3lab/eeco/branch/main/graph/badge.svg
   :height: 30
   :target: https://codecov.io/gh/we3lab/eeco
   :alt: Code Coverage

.. image:: https://zenodo.org/badge/979642377.svg
   :height: 30
   :target: https://doi.org/10.5281/zenodo.17102024
   :alt: Zenodo DOI
A package for calculating electricity-related emissions and costs for optimization problem formulation and other computational analyses.
Useful Commands
===============
1. ``pip install -e .`` (or ``pip install -e .[test]`` for development)

   This will install the package in editable mode.

2. ``pytest eeco/tests --cov=eeco --cov-report=html``

   Produces an HTML test coverage report for the entire project, which can
   be found at ``htmlcov/index.html``.

3. ``docs/make html``

   This will generate an HTML version of the documentation, which can be found
   at ``_build/html/index.html``.

4. ``flake8 eeco --count --verbose --show-source --statistics``

   This will lint the code and report all the style errors it finds.

5. ``black eeco``

   This will reformat the code according to strict style guidelines.
Documentation
==============
The documentation for this package is hosted on `GitHub Pages <https://we3lab.github.io/eeco>`_.
Legal Documents
===============
This work was supported by the following grants and programs:
- `National Alliance for Water Innovation (NAWI) <https://www.nawihub.org/>`_ (grant number UBJQH - MSM)
- `Department of Energy, the Office of Energy Efficiency and Renewable Energy, Advanced Manufacturing Office <https://www.energy.gov/eere/ammto/advanced-materials-and-manufacturing-technologies-office>`_ (grant number DE-EE0009499)
- `California Energy Commission (CEC) <https://www.energy.ca.gov/>`_ (grant number GFO-23-316)
- `Equitable, Affordable & Resilient Nationwide Energy System Transition (EARNEST) Consortium <https://earnest.stanford.edu/>`_
- `Stanford University Bits & Watts Initiative <https://bitsandwatts.stanford.edu/>`_
- `Stanford Woods Institute Realizing Environmental Innovation Program (REIP) <https://woods.stanford.edu/research/funding-opportunities/realizing-environmental-innovation-program>`_
- `Stanford Woods Institute Mentoring Undergraduate in Interdisciplinary Research (MUIR) Program <https://woods.stanford.edu/educating-leaders/education-leadership-programs/mentoring-undergraduates-interdisciplinary-research>`_
- `Stanford University Sustainability Undergraduate Research in Geoscience and Engineering (SURGE) Program <https://sustainability.stanford.edu/our-community/access-belonging-community/surge>`_
The views and opinions of authors expressed herein do not necessarily state or reflect those of the United States Government or any agency thereof. Neither the United States Government nor any agency thereof, nor any of their employees, makes any warranty, express or implied, or assumes any legal liability or responsibility for the accuracy, completeness, or usefulness of any information, apparatus, product, or process disclosed, or represents that its use would not infringe privately owned rights.
- `LICENSE <https://github.com/we3lab/eeco/blob/main/LICENSE/>`_
- `CONTRIBUTING <https://github.com/we3lab/eeco/blob/main/CONTRIBUTING.rst/>`_
Attribution
===========
If you found this package useful, we encourage you to cite the papers below depending on which portion of the code you use.
See the metadata in `CITATION.cff <https://github.com/we3lab/eeco/blob/main/CITATION.cff>`_, on `Zenodo <https://doi.org/10.5281/zenodo.17102024>`_,
or the following `BibTeX` format to cite the Python package in its entirety:
.. code-block::

   @software{chapin_2025_17102024,
     author={Chapin, Fletcher T. and
             Rao, Akshay K. and
             Sakthivelu, Adhithyan and
             Wettermark, Daly and
             Musabandesu, Erin and
             Jaminet, Anne and
             Dudchenko, Alexander V. and
             Mauter, Meagan S.},
     title={Electric Emissions \& Cost Optimizer (EECO)},
     month=sep,
     year=2025,
     publisher={Zenodo},
     version={v0.1.0},
     doi={10.5281/zenodo.17102025},
     url={https://doi.org/10.5281/zenodo.17102025}
   }
Citing `costs.py`
*****************
The development of `costs.py` builds on two papers from the WE3Lab.
The convex formulation of tariff costs for optimizing flexible loads was originally developed for a case study of flexible wastewater treatment plant operation published in Environmental Science & Technology:
Bolorinos, J., Mauter, M. S., & Rajagopal, R. Integrated energy flexibility management at wastewater treatment facilities. *Environ. Sci. Technol.* **57**, 18362-18371. (2023). DOI: `10.1021/acs.est.3c00365 <https://doi.org/10.1021/acs.est.3c00365>`_
In `BibTeX` format:
.. code-block::

   @article{bolorinos2023integrated,
     title={Integrated energy flexibility management at wastewater treatment facilities},
     author={Bolorinos, Jose and Mauter, Meagan S and Rajagopal, Ram},
     journal={Environmental Science \& Technology},
     volume={57},
     number={46},
     pages={18362--18371},
     year={2023},
     publisher={ACS Publications},
     url={https://doi.org/10.1021/acs.est.3c00365}
   }
The tariff data format was published in the following data descriptor in Nature Scientific Data:
Chapin, F.T., Bolorinos, J. & Mauter, M.S. Electricity and natural gas tariffs at United States wastewater treatment plants. *Sci Data* **11**, 113 (2024). DOI: `10.1038/s41597-023-02886-6 <https://doi.org/10.1038/s41597-023-02886-6>`_
In `BibTeX` format:
.. code-block::

   @Article{Chapin2024,
     author={Chapin, Fletcher T and Bolorinos, Jose and Mauter, Meagan S.},
     title={Electricity and natural gas tariffs at United States wastewater treatment plants},
     journal={Scientific Data},
     year={2024},
     month={Jan},
     day={23},
     volume={11},
     number={1},
     pages={113},
     issn={2052-4463},
     doi={10.1038/s41597-023-02886-6},
     url={https://doi.org/10.1038/s41597-023-02886-6}
   }
Citing `emissions.py`
*********************
The emissions optimization code was originally developed for co-optimizing costs and emissions at a wastewater treatment plant and published in Environmental Science & Technology:
Chapin, F.T., Wettermark, D., Bolorinos, J. & Mauter, M.S. Load-shifting strategies for cost-effective emission reductions at wastewater facilities. *Environ. Sci. Technol.* **59**, 2285-2294 (2025). DOI: `10.1021/acs.est.4c09773 <https://doi.org/10.1021/acs.est.4c09773>`_
In `BibTeX` format:
.. code-block::

   @article{chapin2025load,
     title={Load-Shifting Strategies for Cost-Effective Emission Reductions at Wastewater Facilities},
     author={Chapin, Fletcher T and Wettermark, Daly and Bolorinos, Jose and Mauter, Meagan S},
     journal={Environmental Science \& Technology},
     volume={59},
     number={4},
     pages={2285--2294},
     year={2025},
     publisher={ACS Publications},
     url={https://pubs.acs.org/doi/10.1021/acs.est.4c09773}
   }
Citing `metrics.py`
*******************
The flexibility metrics come from the following Nature Water paper:
Rao, A. K., Bolorinos, J., Musabandesu, E., Chapin, F. T., & Mauter, M. S. Valuing energy flexibility from water systems. *Nat. Water* **2**, 1028-1037 (2024). DOI: `10.1038/s44221-024-00316-4 <https://doi.org/10.1038/s44221-024-00316-4>`_
In `BibTeX` format:
.. code-block::

   @article{rao2024valuing,
     title={Valuing energy flexibility from water systems},
     author={Rao, Akshay K and Bolorinos, Jose and Musabandesu, Erin and Chapin, Fletcher T and Mauter, Meagan S},
     journal={Nature Water},
     volume={2},
     number={10},
     pages={1028--1037},
     year={2024},
     publisher={Nature Publishing Group UK London},
     url={https://doi.org/10.1038/s44221-024-00316-4}
   }
| text/x-rst | WE3Lab | fchapin@stanford.edu | null | null | null | eeco | [
"Development Status :: 2 - Pre-Alpha",
"Intended Audience :: Developers",
"License :: Free for non-commercial use",
"Natural Language :: English",
"Programming Language :: Python :: 3.9",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Pytho... | [] | https://github.com/we3lab/eeco | null | >=3.9 | [] | [] | [] | [
"pandas>=2.2.1",
"numpy>=1.26.4",
"cvxpy>=1.3.0",
"pyomo>=6.8",
"gurobipy>=11.0",
"pint>=0.19.2",
"pytz>=2025.1",
"black>=22.3.0; extra == \"test\"",
"flake8>=4.0.0; extra == \"test\"",
"codecov>=2.1.4; extra == \"test\"",
"pytest>=8.1.1; extra == \"test\"",
"pytest-cov>=3.0.0; extra == \"test... | [] | [] | [] | [] | twine/6.1.0 CPython/3.13.7 | 2026-02-19T20:32:48.991990 | eeco-0.2.1.tar.gz | 1,509,073 | 85/2f/9ba7c87c97839bcbbc880a135baaa8936a5f86cecef32160770ee35cfc70/eeco-0.2.1.tar.gz | source | sdist | null | false | a0ca8ac94a1d4ebd5332dda54361c642 | 9c39172deecacfb47938fef27490ad5f19f90983a56ec56ca0d0321db7de843b | 852f9ba7c87c97839bcbbc880a135baaa8936a5f86cecef32160770ee35cfc70 | null | [
"LICENSE"
] | 198 |
2.4 | fluidattacks-core | 6.3.3 | Fluid Attacks Core Library | # Fluid Attacks Core Library
<p align="center">
<img alt="logo" src="https://res.cloudinary.com/fluid-attacks/image/upload/f_auto,q_auto/v1/airs/menu/Logo?_a=AXAJYUZ0.webp" />
</p>
Get more information about this library on the
[official documentation](https://help.fluidattacks.com/portal/en/kb/articles/core-library)
| text/markdown | null | Development <development@fluidattacks.com> | null | null | MPL-2.0 | null | [
"Development Status :: 1 - Planning",
"Intended Audience :: Developers",
"License :: OSI Approved :: Mozilla Public License 2.0 (MPL 2.0)",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.11",
"Topic :: Software Development :: Libraries"
] | [] | null | null | >=3.11 | [] | [] | [] | [
"uvloop>=0.21.0; extra == \"aio\"",
"aiofiles>=23.2.1; extra == \"git\"",
"aiohttp>=3.10.0; extra == \"git\"",
"anyio>=4.7.0; extra == \"git\"",
"boto3>=1.34; extra == \"git\"",
"botocore>=1.40.18; extra == \"git\"",
"fluidattacks-core[http]; extra == \"git\"",
"GitPython>=3.1.41; extra == \"git\"",
... | [] | [] | [] | [] | uv/0.9.25 {"installer":{"name":"uv","version":"0.9.25","subcommand":["publish"]},"python":null,"implementation":{"name":null,"version":null},"distro":null,"system":{"name":null,"release":null},"cpu":null,"openssl_version":null,"setuptools_version":null,"rustc_version":null,"ci":true} | 2026-02-19T20:32:22.774903 | fluidattacks_core-6.3.3-py3-none-any.whl | 71,053 | 61/1a/0239cae7b85c00d67cc677e62e2d13b5df8f1497853d36ad132fc04228f5/fluidattacks_core-6.3.3-py3-none-any.whl | py3 | bdist_wheel | null | false | 74069616a4739b45ac517d2835198da7 | 341607ab86a48be2ded99e8de60cc948c6850d9356fad001ea53aa222d53a484 | 611a0239cae7b85c00d67cc677e62e2d13b5df8f1497853d36ad132fc04228f5 | null | [] | 216 |
2.4 | thisispamela | 1.1.4 | Pamela Enterprise Voice API SDK for Python | # thisispamela SDK for Python
Official SDK for the Pamela Voice API.
## Installation
```bash
pip install thisispamela
```
## Usage
### Basic Example
```python
from pamela import PamelaClient

client = PamelaClient(
    api_key="pk_live_your_api_key_here",
    base_url="https://api.thisispamela.com",  # Optional
)

# Create a call
call = client.create_call(
    to="+1234567890",
    task="Order a large pizza for delivery",
    locale="en-US",
    max_duration_seconds=299,
    voice="female",
    agent_name="Pamela",
    caller_name="John from Acme",
)

print(f"Call created: {call['id']}")

# Get call status
status = client.get_call(call["id"])
print(f"Call status: {status['status']}")
```
### Webhook Verification
```python
from flask import Flask, request

from pamela import verify_webhook_signature

app = Flask(__name__)
WEBHOOK_SECRET = "your_webhook_secret"


@app.route("/webhooks/pamela", methods=["POST"])
def handle_webhook():
    signature = request.headers.get("X-Pamela-Signature")
    payload = request.json

    if not verify_webhook_signature(payload, signature, WEBHOOK_SECRET):
        return {"error": "Invalid signature"}, 401

    # Handle webhook event
    print(f"Webhook event: {payload['event']}")
    print(f"Call ID: {payload['call_id']}")

    return {"status": "ok"}, 200
```
### Tool Webhook Handler
```python
from flask import Flask, request

from pamela import verify_webhook_signature

app = Flask(__name__)
WEBHOOK_SECRET = "your_webhook_secret"


@app.route("/webhooks/pamela/tools", methods=["POST"])
def handle_tool_webhook():
    signature = request.headers.get("X-Pamela-Signature")
    payload = request.json

    if not verify_webhook_signature(payload, signature, WEBHOOK_SECRET):
        return {"error": "Invalid signature"}, 401

    tool_name = payload["tool_name"]
    arguments = payload["arguments"]
    call_id = payload["call_id"]
    correlation_id = payload["correlation_id"]

    # Execute tool based on tool_name
    if tool_name == "check_order_status":
        order_id = arguments.get("order_id")
        result = check_order_status(order_id)
        return {"result": result}

    return {"error": "Unknown tool"}, 400
```
## Getting API Keys
### Obtaining Your API Key
API keys are created and managed through the Pamela Partner Portal or via the Partner API:
1. **Sign up for an API subscription** (see Subscription Requirements below)
2. **Create an API key** via one of these methods:
- Developer portal at [developer.thisispamela.com](https://developer.thisispamela.com): Log in and navigate to the API settings panel
- Partner API: `POST /api/b2b/v1/partner/api-keys`
```bash
curl -X POST https://api.thisispamela.com/api/b2b/v1/partner/api-keys \
-H "Authorization: Bearer YOUR_B2C_USER_TOKEN" \
-H "Content-Type: application/json" \
-d '{"project_id": "optional-project-id", "key_prefix": "pk_live_"}'
```
3. **Save your API key immediately** - the full key is only returned once during creation
4. **Use the key prefix** (`pk_live_`) to identify keys in your account
### Managing API Keys
- **List API keys**: `GET /api/b2b/v1/partner/api-keys`
- **Revoke API key**: `POST /api/b2b/v1/partner/api-keys/{key_id}/revoke`
- **Associate with projects**: Optionally link API keys to specific projects for better organization
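As a sketch, the list and revoke calls above might look like this with `curl`; the bearer token and `KEY_ID` are placeholders, and the auth header mirrors the key-creation example earlier:
```bash
# List existing API keys (sketch; token and key ID are placeholders)
curl https://api.thisispamela.com/api/b2b/v1/partner/api-keys \
  -H "Authorization: Bearer YOUR_B2C_USER_TOKEN"

# Revoke a key by ID
curl -X POST https://api.thisispamela.com/api/b2b/v1/partner/api-keys/KEY_ID/revoke \
  -H "Authorization: Bearer YOUR_B2C_USER_TOKEN"
```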
### API Key Format
- **Live keys**: Start with `pk_live_` (all API usage)
- **Security**: Keys are hashed in the database. Store them securely and never commit them to version control.
## Subscription Requirements
### API Subscription Required
**All API access requires an active API subscription.** API calls will return `403 Forbidden` if:
- No API subscription is active
- Subscription status is `past_due` and grace period has expired
- Subscription status is `canceled`
### Grace Period
API subscriptions have a **1-week grace period** when payment fails:
- During grace period: API access is allowed, but usage is still charged
- After grace period expires: API access is blocked until payment is updated
### Subscription Status Endpoints
Check subscription status using the partner API:
- `GET /api/b2b/v1/partner/subscription` - Get subscription status
- `POST /api/b2b/v1/partner/subscription/checkout` - Create checkout session
- `POST /api/b2b/v1/partner/subscription/portal` - Access Customer Portal
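For example, checking the subscription status could look like the following sketch, assuming the same bearer-token authentication as the API key examples above:
```bash
# Check subscription status (sketch; assumes bearer-token auth as above)
curl https://api.thisispamela.com/api/b2b/v1/partner/subscription \
  -H "Authorization: Bearer YOUR_B2C_USER_TOKEN"
```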
## Error Handling
The SDK provides structured exceptions for all API errors:
```python
from pamela import (
    PamelaClient,
    PamelaError,
    AuthenticationError,
    SubscriptionError,
    RateLimitError,
    ValidationError,
    CallError,
)

client = PamelaClient(api_key="pk_live_your_key")

try:
    call = client.create_call(to="+1234567890", task="Test call")
except AuthenticationError as e:
    # 401: Invalid or missing API key
    print(f"Auth failed: {e.message}")
    print(f"Error code: {e.error_code}")
except SubscriptionError as e:
    # 403: Subscription inactive or expired
    if e.error_code == 7008:
        print("Grace period expired - update payment method")
    else:
        print(f"Subscription issue: {e.message}")
except RateLimitError as e:
    # 429: Rate limit exceeded
    retry_after = e.details.get("retry_after", 30)
    print(f"Rate limited, retry after {retry_after}s")
except ValidationError as e:
    # 400/422: Invalid request parameters
    print(f"Invalid request: {e.message}")
    print(f"Details: {e.details}")
except CallError as e:
    # Call-specific errors
    print(f"Call error: {e.message}")
except PamelaError as e:
    # All other API errors
    print(f"API error {e.error_code}: {e.message}")
```
### Exception Hierarchy
All exceptions inherit from `PamelaError`:
```
PamelaError (base)
├── AuthenticationError # 401 errors
├── SubscriptionError # 403 errors (subscription issues)
├── RateLimitError # 429 errors
├── ValidationError # 400/422 errors
└── CallError # Call-specific errors
```
### Exception Attributes
All exceptions have:
- `message`: Human-readable error message
- `error_code`: Numeric error code (e.g., 7008 for subscription expired)
- `details`: Dict with additional context
- `status_code`: HTTP status code
## Error Codes Reference
### Authentication Errors (401)
| Code | Description |
|------|-------------|
| 1001 | API key required |
| 1002 | Invalid API key |
| 1003 | API key expired |
### Subscription Errors (403)
| Code | Description |
|------|-------------|
| 1005 | API subscription required |
| 7008 | Subscription expired (grace period ended) |
### Validation Errors (400)
| Code | Description |
|------|-------------|
| 2001 | Validation error |
| 2002 | Invalid phone number format |
### API Errors (7xxx)
| Code | Description |
|------|-------------|
| 7001 | Partner not found |
| 7002 | Project not found |
| 7003 | Call not found |
| 7004 | No phone number for country |
| 7005 | Unsupported country |
### Rate Limiting (429)
| Code | Description |
|------|-------------|
| 6001 | Rate limit exceeded |
| 6002 | Quota exceeded |
## Usage Limits & Billing
### API Usage
- **Unlimited API calls** (no call count limits)
- **All API usage billed at $0.10/minute** (10 cents per minute)
- **Minimum billing: 1 minute per call** (even if call duration < 60 seconds)
- **Billing calculation**: `billed_minutes = max(ceil(duration_seconds / 60), 1)`
- **Only calls that connect** (have `started_at`) are billed
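Put together, the billing rule can be expressed as a small helper. This function and its `connected` parameter are our own illustration of the formula above, not part of the SDK:
```python
import math


def billed_cost_usd(duration_seconds: int, connected: bool) -> float:
    """Illustration of the documented rule: $0.10/min, 1-minute minimum."""
    if not connected:  # only calls with `started_at` are billed
        return 0.0
    billed_minutes = max(math.ceil(duration_seconds / 60), 1)
    return billed_minutes * 0.10


assert billed_cost_usd(45, connected=True) == 0.10   # under a minute bills 1 minute
assert billed_cost_usd(299, connected=True) == 0.50  # 299s rounds up to 5 minutes
```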
### Usage Tracking
- Usage is tracked in the `b2b_usage` collection with `type: "api_usage"` (the collection name remains `b2b_usage` even for API usage)
- Usage is synced to Stripe hourly (at :00 minutes)
- Stripe meter name: `stripe_minutes`
- Failed syncs are retried with exponential backoff (1s, 2s, 4s, 8s, 16s), max 5 retries
### Billing Period
- Billing is based on calendar months (UTC midnight on 1st of each month)
- Calls are billed in the month where `started_at` occurred
- Usage sync status: `pending`, `synced`, or `failed`
## API Methods
### Calls
- `create_call(to, task, ...)` - Create a new call
- `get_call(call_id)` - Get call status and details
- `list_calls(status?, limit?, offset?, ...)` - List calls with optional filters
- `cancel_call(call_id)` - Cancel an in-progress call
- `hangup_call(call_id)` - Force hangup an in-progress call
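A quick sketch combining the call methods above; the phone number, task, and status value are illustrative:
```python
call = client.create_call(to="+1234567890", task="Confirm tomorrow's appointment")
status = client.get_call(call["id"])
if status["status"] == "in_progress":
    client.cancel_call(call["id"])
```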
### Tools
- `register_tool(name, description, input_schema, ...)` - Register a tool
- `list_tools()` - List all tools
- `delete_tool(tool_id)` - Delete a tool
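For example, registering the `check_order_status` tool used in the webhook handler above might look like this sketch; the JSON schema contents are illustrative:
```python
client.register_tool(
    name="check_order_status",
    description="Look up the status of an order by its ID",
    input_schema={
        "type": "object",
        "properties": {"order_id": {"type": "string"}},
        "required": ["order_id"],
    },
)
```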
### Usage
- `usage.get(period=None)` - Get usage statistics
**Example:**
```python
# Get current month usage
usage = client.usage.get()

# Get usage for specific period
jan_usage = client.usage.get("2024-01")

print(f"Usage: {usage['call_count']} calls, {usage.get('api_minutes', 0)} minutes")
print(f"Quota: {usage.get('quota', {}).get('partner_limit', 'Unlimited')}")
```
**Response:**
```python
{
    "partner_id": "partner_123",
    "project_id": "project_456",  # Optional
    "period": "2024-01",
    "call_count": 150,
    "quota": {
        "partner_limit": None,  # None = unlimited for API
        "project_limit": None,
    },
}
```
**Note:** API subscriptions have no quota limits - all usage is billed per-minute.
## API Reference
See the [Pamela API Documentation](https://docs.thisispamela.com/developer) for full API reference.
| text/markdown | null | Pamela <support@thisispamela.com> | null | null | MIT | null | [
"Development Status :: 4 - Beta",
"Intended Audience :: Developers",
"License :: OSI Approved :: MIT License",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.8",
"Programming Language :: Python :: 3.9",
"Programming Language :: Python :: 3.10",
"Programming Language :: Pyt... | [] | null | null | >=3.8 | [] | [] | [] | [
"requests>=2.28.0",
"urllib3>=1.26.0",
"typing-extensions>=4.0.0; python_version < \"3.11\"",
"mypy<1.18.0,>=1.10.0; extra == \"dev\"",
"pytest>=7.4.0; extra == \"dev\"",
"ruff>=0.6.0; extra == \"dev\"",
"types-requests>=2.32.0.20250328; extra == \"dev\""
] | [] | [] | [] | [
"Homepage, https://github.com/rtpam/pamela"
] | twine/6.2.0 CPython/3.14.3 | 2026-02-19T20:31:06.700211 | thisispamela-1.1.4.tar.gz | 16,641 | fe/74/8c2abfdb9b70e5e0db551f4027eeb5f0aa39d3ca1148a2ad9e5494bb232b/thisispamela-1.1.4.tar.gz | source | sdist | null | false | a04967413e8bca5ca106db038e9993df | afec8a44538c7f3832cd60d862c0b3ba317ae4d8ee2e5a5e9e1fff6b8bf18709 | fe748c2abfdb9b70e5e0db551f4027eeb5f0aa39d3ca1148a2ad9e5494bb232b | null | [] | 178 |
2.3 | cartography-client | 0.17.0 | The official Python library for the cartography API | # Cartography Python API library
<!-- prettier-ignore -->
[](https://pypi.org/project/cartography-client/)
The Cartography Python library provides convenient access to the Cartography REST API from any Python 3.9+
application. The library includes type definitions for all request params and response fields,
and offers both synchronous and asynchronous clients powered by [httpx](https://github.com/encode/httpx).
It is generated with [Stainless](https://www.stainless.com/).
## Documentation
The full API of this library can be found in [api.md](https://github.com/evrimai/cartography-client/tree/main/api.md).
## Installation
```sh
# install from PyPI
pip install cartography-client
```
## Usage
The full API of this library can be found in [api.md](https://github.com/evrimai/cartography-client/tree/main/api.md).
```python
import os
from cartography import Cartography

client = Cartography(
    bearer_token=os.environ.get("CARTOGRAPHY_BEARER_TOKEN"),  # This is the default and can be omitted
)

response = client.health.check()
```
While you can provide a `bearer_token` keyword argument,
we recommend using [python-dotenv](https://pypi.org/project/python-dotenv/)
to add `CARTOGRAPHY_BEARER_TOKEN="My Bearer Token"` to your `.env` file
so that your Bearer Token is not stored in source control.
## Async usage
Simply import `AsyncCartography` instead of `Cartography` and use `await` with each API call:
```python
import os
import asyncio
from cartography import AsyncCartography

client = AsyncCartography(
    bearer_token=os.environ.get("CARTOGRAPHY_BEARER_TOKEN"),  # This is the default and can be omitted
)


async def main() -> None:
    response = await client.health.check()


asyncio.run(main())
```
Functionality between the synchronous and asynchronous clients is otherwise identical.
### With aiohttp
By default, the async client uses `httpx` for HTTP requests. However, for improved concurrency performance you may also use `aiohttp` as the HTTP backend.
You can enable this by installing `aiohttp`:
```sh
# install from PyPI
pip install cartography-client[aiohttp]
```
Then you can enable it by instantiating the client with `http_client=DefaultAioHttpClient()`:
```python
import os
import asyncio
from cartography import DefaultAioHttpClient
from cartography import AsyncCartography


async def main() -> None:
    async with AsyncCartography(
        bearer_token=os.environ.get("CARTOGRAPHY_BEARER_TOKEN"),  # This is the default and can be omitted
        http_client=DefaultAioHttpClient(),
    ) as client:
        response = await client.health.check()


asyncio.run(main())
```
## Using types
Nested request parameters are [TypedDicts](https://docs.python.org/3/library/typing.html#typing.TypedDict). Responses are [Pydantic models](https://docs.pydantic.dev) which also provide helper methods for things like:
- Serializing back into JSON, `model.to_json()`
- Converting to a dictionary, `model.to_dict()`
Typed requests and responses provide autocomplete and documentation within your editor. If you would like to see type errors in VS Code to help catch bugs earlier, set `python.analysis.typeCheckingMode` to `basic`.
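For example, a sketch using the `health.check()` call from earlier:
```python
response = client.health.check()
print(response.to_json())  # serialize back into JSON
data = response.to_dict()  # convert to a plain dictionary
```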
## Handling errors
When the library is unable to connect to the API (for example, due to network connection problems or a timeout), a subclass of `cartography.APIConnectionError` is raised.
When the API returns a non-success status code (that is, 4xx or 5xx
response), a subclass of `cartography.APIStatusError` is raised, containing `status_code` and `response` properties.
All errors inherit from `cartography.APIError`.
```python
import cartography
from cartography import Cartography

client = Cartography()

try:
    client.health.check()
except cartography.APIConnectionError as e:
    print("The server could not be reached")
    print(e.__cause__)  # an underlying Exception, likely raised within httpx.
except cartography.RateLimitError as e:
    print("A 429 status code was received; we should back off a bit.")
except cartography.APIStatusError as e:
    print("Another non-200-range status code was received")
    print(e.status_code)
    print(e.response)
```
Error codes are as follows:
| Status Code | Error Type |
| ----------- | -------------------------- |
| 400 | `BadRequestError` |
| 401 | `AuthenticationError` |
| 403 | `PermissionDeniedError` |
| 404 | `NotFoundError` |
| 422 | `UnprocessableEntityError` |
| 429 | `RateLimitError` |
| >=500 | `InternalServerError` |
| N/A | `APIConnectionError` |
### Retries
Certain errors are automatically retried 2 times by default, with a short exponential backoff.
Connection errors (for example, due to a network connectivity problem), 408 Request Timeout, 409 Conflict,
429 Rate Limit, and >=500 Internal errors are all retried by default.
You can use the `max_retries` option to configure or disable retry settings:
```python
from cartography import Cartography

# Configure the default for all requests:
client = Cartography(
    # default is 2
    max_retries=0,
)

# Or, configure per-request:
client.with_options(max_retries=5).health.check()
```
### Timeouts
By default requests time out after 1 minute. You can configure this with a `timeout` option,
which accepts a float or an [`httpx.Timeout`](https://www.python-httpx.org/advanced/timeouts/#fine-tuning-the-configuration) object:
```python
import httpx
from cartography import Cartography

# Configure the default for all requests:
client = Cartography(
    # 20 seconds (default is 1 minute)
    timeout=20.0,
)

# More granular control:
client = Cartography(
    timeout=httpx.Timeout(60.0, read=5.0, write=10.0, connect=2.0),
)

# Override per-request:
client.with_options(timeout=5.0).health.check()
```
On timeout, an `APITimeoutError` is thrown.
Note that requests that time out are [retried twice by default](https://github.com/evrimai/cartography-client/tree/main/#retries).
## Advanced
### Logging
We use the standard library [`logging`](https://docs.python.org/3/library/logging.html) module.
You can enable logging by setting the environment variable `CARTOGRAPHY_LOG` to `info`.
```shell
$ export CARTOGRAPHY_LOG=info
```
Or to `debug` for more verbose logging.
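For example:
```shell
$ export CARTOGRAPHY_LOG=debug
```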
### How to tell whether `None` means `null` or missing
In an API response, a field may be explicitly `null`, or missing entirely; in either case, its value is `None` in this library. You can differentiate the two cases with `.model_fields_set`:
```py
if response.my_field is None:
    if 'my_field' not in response.model_fields_set:
        print('Got json like {}, without a "my_field" key present at all.')
    else:
        print('Got json like {"my_field": null}.')
```
### Accessing raw response data (e.g. headers)
The "raw" Response object can be accessed by prefixing `.with_raw_response.` to any HTTP method call, e.g.,
```py
from cartography import Cartography

client = Cartography()
response = client.health.with_raw_response.check()
print(response.headers.get('X-My-Header'))

health = response.parse()  # get the object that `health.check()` would have returned
print(health)
```
These methods return an [`APIResponse`](https://github.com/evrimai/cartography-client/tree/main/src/cartography/_response.py) object.
The async client returns an [`AsyncAPIResponse`](https://github.com/evrimai/cartography-client/tree/main/src/cartography/_response.py) with the same structure, the only difference being `await`able methods for reading the response content.
#### `.with_streaming_response`
The above interface eagerly reads the full response body when you make the request, which may not always be what you want.
To stream the response body, use `.with_streaming_response` instead, which requires a context manager and only reads the response body once you call `.read()`, `.text()`, `.json()`, `.iter_bytes()`, `.iter_text()`, `.iter_lines()` or `.parse()`. In the async client, these are async methods.
```python
with client.health.with_streaming_response.check() as response:
    print(response.headers.get("X-My-Header"))

    for line in response.iter_lines():
        print(line)
```
The context manager is required so that the response will reliably be closed.
### Making custom/undocumented requests
This library is typed for convenient access to the documented API.
If you need to access undocumented endpoints, params, or response properties, the library can still be used.
#### Undocumented endpoints
To make requests to undocumented endpoints, you can make requests using `client.get`, `client.post`, and other
http verbs. Options on the client will be respected (such as retries) when making this request.
```py
import httpx

response = client.post(
    "/foo",
    cast_to=httpx.Response,
    body={"my_param": True},
)

print(response.headers.get("x-foo"))
```
#### Undocumented request params
If you want to explicitly send an extra param, you can do so with the `extra_query`, `extra_body`, and `extra_headers` request
options.
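As a sketch, these options can be passed alongside any documented method call; the header and query values here are illustrative:
```py
response = client.health.check(
    extra_headers={"X-Debug-Header": "1"},  # illustrative extra header
    extra_query={"verbose": "true"},        # illustrative extra query param
)
```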
#### Undocumented response properties
To access undocumented response properties, you can access the extra fields like `response.unknown_prop`. You
can also get all the extra fields on the Pydantic model as a dict with
[`response.model_extra`](https://docs.pydantic.dev/latest/api/base_model/#pydantic.BaseModel.model_extra).
### Configuring the HTTP client
You can directly override the [httpx client](https://www.python-httpx.org/api/#client) to customize it for your use case, including:
- Support for [proxies](https://www.python-httpx.org/advanced/proxies/)
- Custom [transports](https://www.python-httpx.org/advanced/transports/)
- Additional [advanced](https://www.python-httpx.org/advanced/clients/) functionality
```python
import httpx
from cartography import Cartography, DefaultHttpxClient

client = Cartography(
    # Or use the `CARTOGRAPHY_BASE_URL` env var
    base_url="http://my.test.server.example.com:8083",
    http_client=DefaultHttpxClient(
        proxy="http://my.test.proxy.example.com",
        transport=httpx.HTTPTransport(local_address="0.0.0.0"),
    ),
)
```
You can also customize the client on a per-request basis by using `with_options()`:
```python
client.with_options(http_client=DefaultHttpxClient(...))
```
### Managing HTTP resources
By default the library closes underlying HTTP connections whenever the client is [garbage collected](https://docs.python.org/3/reference/datamodel.html#object.__del__). You can manually close the client using the `.close()` method if desired, or with a context manager that closes when exiting.
```py
from cartography import Cartography

with Cartography() as client:
    # make requests here
    ...

# HTTP client is now closed
```
## Versioning
This package generally follows [SemVer](https://semver.org/spec/v2.0.0.html) conventions, though certain backwards-incompatible changes may be released as minor versions:
1. Changes that only affect static types, without breaking runtime behavior.
2. Changes to library internals which are technically public but not intended or documented for external use. _(Please open a GitHub issue to let us know if you are relying on such internals.)_
3. Changes that we do not expect to impact the vast majority of users in practice.
We take backwards-compatibility seriously and work hard to ensure you can rely on a smooth upgrade experience.
We are keen for your feedback; please open an [issue](https://www.github.com/evrimai/cartography-client/issues) with questions, bugs, or suggestions.
### Determining the installed version
If you've upgraded to the latest version but aren't seeing any new features you were expecting then your python environment is likely still using an older version.
You can determine the version that is being used at runtime with:
```py
import cartography
print(cartography.__version__)
```
## Requirements
Python 3.9 or higher.
## Contributing
See [the contributing documentation](https://github.com/evrimai/cartography-client/tree/main/./CONTRIBUTING.md).
| text/markdown | Cartography | null | null | null | Apache-2.0 | null | [
"Intended Audience :: Developers",
"License :: OSI Approved :: Apache Software License",
"Operating System :: MacOS",
"Operating System :: Microsoft :: Windows",
"Operating System :: OS Independent",
"Operating System :: POSIX",
"Operating System :: POSIX :: Linux",
"Programming Language :: Python :: ... | [] | null | null | >=3.9 | [] | [] | [] | [
"anyio<5,>=3.5.0",
"distro<2,>=1.7.0",
"httpx<1,>=0.23.0",
"pydantic<3,>=1.9.0",
"sniffio",
"typing-extensions<5,>=4.10",
"aiohttp; extra == \"aiohttp\"",
"httpx-aiohttp>=0.1.9; extra == \"aiohttp\""
] | [] | [] | [] | [
"Homepage, https://github.com/evrimai/cartography-client",
"Repository, https://github.com/evrimai/cartography-client"
] | twine/5.1.1 CPython/3.12.9 | 2026-02-19T20:31:05.713670 | cartography_client-0.17.0.tar.gz | 114,664 | 11/e9/20b04eb5afe1ae929950834ab5b0d3c5bf0e6b3cb2e3d2fedae5e32073ad/cartography_client-0.17.0.tar.gz | source | sdist | null | false | d257b771323d423a144931112718787c | 6cbb95500ddaebbffec412c9eede2698d1bb83db4fa2d5de37eeab1f95d8d9e0 | 11e920b04eb5afe1ae929950834ab5b0d3c5bf0e6b3cb2e3d2fedae5e32073ad | null | [] | 198 |
2.4 | spellbot | 18.2.3 | The Discord bot for Webcam Magic | # SpellBot
<div align="center">
<img
width="200"
alt="spellbot"
src="https://raw.githubusercontent.com/lexicalunit/spellbot/main/spellbot.png"
/>
<br />
<br />
<a href="https://discordapp.com/api/oauth2/authorize?client_id=725510263251402832&permissions=2416045137&scope=applications.commands%20bot">
<img
align="center"
alt="Add to Discord"
src="https://user-images.githubusercontent.com/1903876/88951823-5d6c9a00-d24b-11ea-8523-d256ccbf4a3c.png"
/>
</a>
<br />
The Discord bot for <a href="https://convoke.games/">Convoke</a>
<br />
<br />
| <!-- --> | <!-- --> |
| ----------------: | :-----------------------------------------------------------------------------------------------------------------------------------------------------------: |
| **Deployment** | [![build][build-badge]][build] [![aws][aws-badge]][aws] [![status][status-badge]][status] |
| **Dependencies** | [![python][python-badge]][python] [![discord.py][discord-py-badge]][discord-py] |
| **Distribution** | [![pypi][pypi-badge]][pypi] [![docker][docker-badge]][docker-hub] [![mit][mit-badge]][mit] |
| **Quality** | [![codecov][codecov-badge]][codecov] [![ruff][ruff-badge]][ruff] [![pyright][pyright-badge]][pyright] |
| **Observability** | [![uptime][uptime-badge]][uptime] [![metrics][metrics-badge]][metrics]<br/>[![datadog][datadog-badge]][datadog] [![ganalytics][ganalytics-badge]][ganalytics] |
| **Socials** | [![discord][discord-badge]][discord-invite] [![follow][follow-badge]][follow] |
| **Funding** | [![patreon][patreon-button]][patreon] [![kofi][kofi-button]][kofi] |
</div>
## 🤖 Using SpellBot
SpellBot helps you find _Magic: The Gathering_ games on [Convoke][convoke], [Girudo][girudo], and [Table Stream][tablestream]. Just looking to play a game of Commander? Run the command `/lfg` and SpellBot will help you out!
<p align="center">
<img
src="https://github.com/lexicalunit/spellbot/assets/1903876/39381709-8dfd-473e-8072-e7267c50b4ad"
width="600"
alt="/lfg"
/>
</p>
SpellBot uses [Discord slash commands][slash]. Each command provides its own help documentation that you can view directly within Discord itself before running the command. Take a look and see what's available by typing `/` and browsing the commands for SpellBot!
## 🔭 Where to Play?
These communities are using SpellBot to play Magic! Maybe one of them is right for you?
<div align="center">
<!-- SERVERS BEGIN -->
<table>
<tr>
<td align="center"><a href="https://www.playedh.com/"><img width="200" height="200" src="https://user-images.githubusercontent.com/1903876/140843874-78510411-dcc8-4a26-a59a-0d6856698dcc.png" alt="PlayEDH" /><br />PlayEDH</a></td>
<td align="center"><a href="https://www.patreon.com/tolariancommunitycollege"><img width="200" height="200" src="https://github.com/lexicalunit/spellbot/assets/1903876/92aa9c59-9f30-4f4e-83ab-fc86e72e8f40" alt="Tolarian Community College" /><br />Tolarian Community College</a></td>
<td align="center"><a href="https://discord.com/invite/cedh"><img width="200" height="200" src="https://github.com/lexicalunit/spellbot/assets/1903876/32c324a3-b060-4bd2-8d8a-a72799acc0ff" alt="cEDH" /><br />cEDH</a></td>
</tr>
<tr>
<td align="center"><a href="https://linktr.ee/CriticalEDH"><img width="200" height="200" src="https://github.com/user-attachments/assets/2ff3e55e-2efa-4f15-b0f8-402a3ec3ba37" alt="CriticalEDH" /><br />CriticalEDH</a></td>
<td align="center"><a href="https://www.convoke.games/"><img width="200" height="200" src="https://github.com/user-attachments/assets/16d4867b-4fe2-49be-b812-b169c347c6d4" alt="Convoke" /><br />Convoke</a></td>
<td align="center"><a href="https://discord.com/invite/9Z7x8dh6Tf"><img width="200" height="200" src="https://github.com/lexicalunit/spellbot/assets/1903876/26b824c1-fa82-4b18-a47c-37114a0023b7" alt="EDH Fight Club" /><br />EDH Fight Club</a></td>
</tr>
<tr>
<td align="center"><a href="https://disboard.org/server/757455940009328670"><img width="200" height="200" src="https://github.com/lexicalunit/spellbot/assets/1903876/a2117868-cd86-44a9-8e92-91e5b2d639c2" alt="Oath of the Gaywatch" /><br />Oath of the Gaywatch</a></td>
<td align="center"><a href="https://linktr.ee/cedhspain"><img width="200" height="200" src="https://github.com/lexicalunit/spellbot/assets/1903876/823a2ed7-c59a-47da-886c-5f468a3b3032" alt="Comunidad Española de cEDH" /><br />Comunidad Española de cEDH</a></td>
<td align="center"><a href="https://discord.gg/CfCb9fmgCD"><img width="200" height="200" src="https://github.com/user-attachments/assets/86bb3488-fa03-4fb6-80c7-3ef929fb8076" alt="Top Tier Bangers" /><br />Top Tier Bangers</a></td>
</tr>
<tr>
<td align="center"><a href="https://www.playtowinmtg.com/"><img width="200" height="200" src="https://github.com/lexicalunit/spellbot/assets/1903876/e04abae7-394e-4f89-94e9-edbdbfd411fb" alt="Play to Win" /><br />Play to Win</a></td>
<td align="center"><a href="https://www.facebook.com/EDHTambayan/"><img width="200" height="200" src="https://user-images.githubusercontent.com/1903876/161825614-64e432d4-85e8-481e-8f41-f66ab8c940cc.png" alt="EDH Tambayan" /><br />EDH Tambayan</a></td>
<td align="center"><a href="https://www.patreon.com/PlayingWithPowerMTG"><img width="200" height="200" src="https://github.com/lexicalunit/spellbot/assets/1903876/60a984e4-8fa1-4d8f-bf0d-2e391776b56d" alt="Playing with Power" /><br />Playing with Power</a></td>
</tr>
<tr>
<td align="center"><a href="https://discord.gg/commander"><img width="200" height="200" src="https://github.com/user-attachments/assets/6f4cf0de-ed31-4d19-b2c2-78fb9b544992" alt="The Commander Staple" /><br />The Commander Staple</a></td>
<td align="center"><a href="https://discord.gg/ZmPsjrxe4h"><img width="200" height="200" src="https://github.com/lexicalunit/spellbot/assets/1903876/47d68a5b-fe08-497c-a76b-c8dde5f56af3" alt="Command the Cause" /><br />Command the Cause</a></td>
<td align="center"><a href="https://twitter.com/TurboDCommander"><img width="200" height="200" src="https://github.com/lexicalunit/spellbot/assets/1903876/d7d6c867-c857-4760-8552-8b8e7b4a1bad" alt="Turbo Commander" /><br />Turbo Commander</a></td>
</tr>
<tr>
<td align="center"><a href="https://www.cedh.uk/"><img width="200" height="200" src="https://github.com/lexicalunit/spellbot/assets/1903876/34bcb78c-60e2-495a-b919-873d0d331798" alt="cEDH UK" /><br />cEDH UK</a></td>
<td align="center"><a href="https://discord.com/invite/mtg-home-689674672240984067"><img width="200" height="200" src="https://github.com/lexicalunit/spellbot/assets/1903876/322d1bdf-6b32-45f5-93b2-8d4963075772" alt="MTG@Home" /><br />MTG@Home</a></td>
<td align="center"><a href="https://www.mtgdc.info/"><img width="200" height="200" src="https://github.com/user-attachments/assets/d7dfa16c-8b65-40e4-b449-4758fd3c3807" alt="Duel Commander" /><br />Duel Commander</a></td>
</tr>
<tr>
<td align="center"><a href="https://discord.gg/bA5tf3Xc8M"><img width="200" height="200" src="https://github.com/user-attachments/assets/5a3dbc81-0867-4e86-8c9c-f3801f681f54" alt="Proxy Pirates" /><br />Proxy Pirates</a></td>
</tr>
</table>
<!-- SERVERS END -->
</div>
Want your community to be featured here as well? Please contact me at [spellbot@lexicalunit.com](mailto:spellbot@lexicalunit.com)!
## 📊 Mythic Track
SpellBot integrates seamlessly with [Mythic Track](https://www.mythictrack.com/spellbot) which allows you to track games within your Discord server. Visualize and explore your data to reveal interesting trends. To get started run the `/setup_mythic_track` command on your server. Please also consider [supporting Mythic Track](https://www.patreon.com/MythicTrack)!
<p align="center">
<img
src="https://github.com/user-attachments/assets/07dacc71-baa6-4605-a44b-bacf8dc23076"
width="617"
alt="Mythic Track Setup"
/>
</p>
## ❓ Help
Two of the most common issues people using SpellBot run into are related to receiving Direct Messages from the bot. SpellBot uses Discord embeds in the DMs that it sends and there are some settings you need to enable for this to work correctly.
In your `Settings ► Chat` make sure that you have enabled **Embeds and link previews**.
<p align="center">
<img
src="https://github.com/lexicalunit/spellbot/assets/1903876/0d584532-0689-44b5-ba18-882d44b4b808"
width="700"
alt="Settings - Chat"
/>
</p>
And in your `Settings ► Privacy & Safety`, enable both **Allow direct messages from server members** and **Enable message requests from server members you may not know**.
<p align="center">
<img
src="https://github.com/lexicalunit/spellbot/assets/1903876/f16c943b-5120-4def-a254-d7fd04af2689"
width="700"
alt="Settings - Privacy & Safety"
/>
</p>
If you have more questions, please don't hesitate to join us on the [SpellBot Discord server][discord-invite] to get answers from our generous community.
## 🎤 Feedback
Thoughts and suggestions? Come join us on the [SpellBot Discord server][discord-invite]! Please also feel free to [directly report any bugs][issues] that you encounter. Or reach out to me on BlueSky at [@spellbot.io][follow].
## 🙌 Supported By
The continued operation of SpellBot is supported by <a href="https://www.playedh.com/">PlayEDH</a> as well as generous donations from [my patrons on Patreon][patreon] and [Ko-fi][kofi]. If you would like to help support SpellBot, please consider [signing up][patreon] for as little as _one dollar a month_ or [giving me a one-off tip][kofi] for whatever you feel is appropriate.
## ❤️ Contributing
If you'd like to become a part of the SpellBot development community please first know that we have a documented [code of conduct](CODE_OF_CONDUCT.md) and then see our [documentation on how to contribute](CONTRIBUTING.md) for details on how to get started.
## 🐳 Docker Support
SpellBot can be run via docker. Our image is published to [lexicalunit/spellbot][docker-hub]. See [our documentation on Docker Support](DOCKER.md) for help with installing and using it.
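For example, pulling the published image is as simple as the following; see the Docker documentation linked above for run-time configuration:
```sh
docker pull lexicalunit/spellbot
```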
## 🔍 Fine-print
Any usage of SpellBot implies that you accept the following policies.
- [Privacy Policy](PRIVACY_POLICY.md)
- [Terms of Service](TERMS_OF_SERVICE.md)
---
[MIT][mit] © [amy@lexicalunit][lexicalunit] et [al][contributors]
[aws-badge]: https://img.shields.io/badge/cloud-aws-green
[aws]: https://console.aws.amazon.com/console/home
[build-badge]: https://github.com/lexicalunit/spellbot/actions/workflows/ci.yaml/badge.svg
[build]: https://github.com/lexicalunit/spellbot/actions/workflows/ci.yaml
[codecov-badge]: https://codecov.io/gh/lexicalunit/spellbot/branch/main/graph/badge.svg
[codecov]: https://codecov.io/gh/lexicalunit/spellbot
[contributors]: https://github.com/lexicalunit/spellbot/graphs/contributors
[convoke]: https://www.convoke.games/
[datadog-badge]: https://img.shields.io/badge/monitors-datadog-blueviolet.svg
[datadog]: https://app.datadoghq.com/apm/home
[discord-badge]: https://img.shields.io/discord/949425995969093722?logo=Discord&logoColor=ffffff&labelColor=7289da
[discord-invite]: https://discord.gg/HuzTQYpYH4
[discord-py-badge]: https://img.shields.io/badge/discord.py-2.x.x-blue
[discord-py]: https://github.com/Rapptz/discord.py
[docker-badge]: https://img.shields.io/docker/pulls/lexicalunit/spellbot.svg
[docker-hub]: https://hub.docker.com/r/lexicalunit/spellbot
[follow-badge]: https://img.shields.io/badge/Bluesky-1185FE?style=flat&logo=bluesky&logoColor=white
[follow]: https://bsky.app/profile/spellbot.io
[ganalytics-badge]: https://img.shields.io/badge/analytics-google-orange.svg
[ganalytics]: https://analytics.google.com/analytics/web/
[girudo]: https://www.girudo.com/
[issues]: https://github.com/lexicalunit/spellbot/issues
[kofi-button]: https://img.shields.io/badge/Ko--fi-F16061?style=flat&logo=ko-fi&logoColor=white
[kofi]: https://ko-fi.com/lexicalunit
[lexicalunit]: http://github.com/lexicalunit
[metrics-badge]: https://img.shields.io/badge/metrics-grafana-orange.svg
[metrics]: https://lexicalunit.grafana.net/d/4TSUCbcMz/spellbot?orgId=1
[mit-badge]: https://img.shields.io/badge/License-MIT-yellow.svg
[mit]: https://opensource.org/license/mit
[patreon-button]: https://img.shields.io/badge/Patreon-F96854?style=flat&logo=patreon&logoColor=white
[patreon]: https://www.patreon.com/lexicalunit
[pypi-badge]: https://img.shields.io/pypi/v/spellbot
[pypi]: https://pypi.org/project/spellbot/
[pyright-badge]: https://img.shields.io/badge/types-pyright-c3c38f.svg
[pyright]: https://github.com/microsoft/pyright
[python-badge]: https://img.shields.io/badge/python-3.13-blue.svg
[python]: https://www.python.org/
[ruff-badge]: https://img.shields.io/endpoint?url=https://raw.githubusercontent.com/astral-sh/ruff/main/assets/badge/v2.json
[ruff]: https://github.com/astral-sh/ruff
[slash]: https://discord.com/blog/slash-commands-are-here
[status-badge]: https://img.shields.io/badge/bot-status-green
[status]: https://spellbot.io/status
[tablestream]: https://table-stream.com/
[uptime-badge]: https://img.shields.io/uptimerobot/ratio/m785764282-c51c742e56a87d802968efcc
[uptime]: https://uptimerobot.com/dashboard#785764282
| text/markdown | null | Amy Troschinetz <spellbot@lexicalunit.com> | null | null | MIT | bot, discord, magic, mtg, webcam | [
"Development Status :: 4 - Beta",
"Intended Audience :: End Users/Desktop",
"License :: OSI Approved :: MIT License",
"Programming Language :: Python :: 3.13",
"Topic :: Communications :: Chat",
"Topic :: Games/Entertainment :: Board Games"
] | [] | null | null | <4,>=3.13 | [] | [] | [] | [
"aiohttp-jinja2>=1.6",
"aiohttp>=3.9.4",
"alembic>=1.13.1",
"asgiref>=3.8.1",
"attrs>=23.2.0",
"babel>=2.14.0",
"cachetools>=5.5.0",
"certifi>=2024.2.2",
"click>=8.1.7",
"datadog>=0.49.1",
"ddtrace>=3.16.2",
"discord-py>=2.3.2",
"dunamai>=1.19.2",
"greenlet>=3.2.2",
"gunicorn>=21.2.0",
... | [] | [] | [] | [
"homepage, http://spellbot.io/",
"repository, https://github.com/lexicalunit/spellbot"
] | uv/0.10.0 {"installer":{"name":"uv","version":"0.10.0","subcommand":["publish"]},"python":null,"implementation":{"name":null,"version":null},"distro":{"name":"macOS","version":null,"id":null,"libc":null},"system":{"name":null,"release":null},"cpu":null,"openssl_version":null,"setuptools_version":null,"rustc_version":null,"ci":null} | 2026-02-19T20:30:42.154380 | spellbot-18.2.3.tar.gz | 759,153 | a3/c7/ab8735addc62bfaefc5aa69909f3a5db2dfc9d18bba414fadd50f95adcb9/spellbot-18.2.3.tar.gz | source | sdist | null | false | d98a72516621d10c156fee07d35c39f3 | 4d9d351be26b01f1cd5a6c15bea0c0e448250bde92631c6b960d0cd96678faa1 | a3c7ab8735addc62bfaefc5aa69909f3a5db2dfc9d18bba414fadd50f95adcb9 | null | [
"LICENSE.md"
] | 189 |
2.4 | dave.py | 0.1.1 | Python bindings for libdave, Discord's audio & video end-to-end encryption | # dave.py
<img src="https://img.shields.io/github/actions/workflow/status/DisnakeDev/dave.py/wheels.yml?branch=main&style=flat-square"></img>
<a href="https://pypi.org/project/dave.py/"><img src="https://img.shields.io/pypi/v/dave.py.svg?style=flat-square" alt="PyPI version info" /></a>
<a href="https://pypi.org/project/dave.py/"><img src="https://img.shields.io/pypi/pyversions/dave.py.svg?style=flat-square" alt="PyPI supported Python versions" /></a>
Python bindings for [libdave](https://github.com/discord/libdave), Discord's C++ DAVE[^1] protocol implementation.
See the [API docs](https://docs.discord.com/developers/topics/voice-connections#end-to-end-encryption-dave-protocol) for a general overview of the protocol, as well as https://daveprotocol.com/ for an in-depth protocol description.
## Installation
```sh
pip install dave.py
```
Prebuilt wheels for all platforms and many 64-bit architectures are available directly from PyPI (32-bit architectures are not supported).
If you're missing wheels for any specific platform/architecture, feel free to open an issue!
To build from source, any PEP 517-compatible build frontend can be used, e.g. `python -m build`.
Note that building from source (or sdist) also requires `$VCPKG_ROOT` to point to a [vcpkg](https://github.com/microsoft/vcpkg) clone, as well as a lot of patience.
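A minimal sketch of a source build, assuming a fresh vcpkg clone (the paths are illustrative):
```sh
git clone https://github.com/microsoft/vcpkg
export VCPKG_ROOT="$PWD/vcpkg"
python -m build
```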
## Usage
This is currently primarily intended for https://github.com/DisnakeDev/disnake, though it is not targeting it in any way.
Due to this, there isn't really any documentation to speak of right now. Sorry about that.
[^1]: *"Discord's audio & video end-to-end encryption"*
| text/markdown | Disnake Development | null | null | null | null | null | [
"Development Status :: 5 - Production/Stable",
"Intended Audience :: Developers",
"Operating System :: OS Independent",
"Programming Language :: Python :: 3.9",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programming La... | [] | null | null | >=3.9 | [] | [] | [] | [] | [] | [] | [] | [] | twine/6.1.0 CPython/3.13.7 | 2026-02-19T20:30:08.478595 | dave_py-0.1.1.tar.gz | 131,993 | 74/67/600b175315550d838d970559ccf07ee98a8825860143b43e8070850197ac/dave_py-0.1.1.tar.gz | source | sdist | null | false | e2412c10d0a41d91840da9ac0057df3d | 43972416878178c5c892ec12a744107ab62debc9039c8d420b650f35338c1955 | 7467600b175315550d838d970559ccf07ee98a8825860143b43e8070850197ac | MIT | [] | 0 |
2.4 | exafs | 1.2.2 | Tool for creation, validation, and execution of ExaBGP messages for network security. | # ExaFS
[](https://badge.fury.io/py/exafs)
[](https://hub.docker.com/r/jirivrany/exafs-base)
[](https://opensource.org/licenses/MIT)
[](https://github.com/CESNET/exafs/actions/workflows/python-app.yml)
[](https://github.com/CESNET/exafs/actions/workflows/github-code-scanning/codeql)
[](https://pypi.org/project/exafs/)
ExaFS brings new functionality to the routing protocol configuration environment used to secure backbone network hardware.
The tool extends the network administrator's toolset by adding an extra layer for the creation, validation, and authorization of configuration rules. With this new layer, a larger group of network administrators can safely create new
[BGP protocol](https://github.com/Exa-Networks/exabgp) rules to prevent DDoS and other forms of malicious cyber attacks.
ExaFS is open source with MIT license. The system is regularly used at [CESNET](https://www.cesnet.cz/) - the Czech national e-infrastructure for science, research and education operator.
ExaFS provides both the user Web interface and the REST API for web service.
Key contributions of the system are **user authorization** mechanism and **validation system for BGP commands**.
Without ExaFS, root privileges are required for direct interaction with ExaBGP and the networking hardware. ExaFS provides several user roles and access rights, similar to user roles in other software systems such as SQL databases. The system allows user rights to be specified for various kinds of subnets, following the network topology.
The validation system for BGP commands ensures that only error-free messages can pass to the system BGP API. Both syntax and access rights are validated before a new rule can be stored in the database.
Thanks to the storage, all rules can be restored quickly after a system reboot or failure. All rules are validated again before they are sent to ExaBGP from storage, to prevent any malicious database manipulation.
ExaFS is an integral part of cybersecurity tools at CESNET. However, it can be used in any network where ExaBGP is available.
See how ExaFS is integrated into the network in the picture below.

## Project presentations
* 2020 - CZ [DDoS Protector v prostředí propojovacího uzlu NIX.CZ](https://www.cesnet.cz/wp-content/uploads/2020/02/DDP_v_NIX.pdf), [Seminář o bezpečností sítí a služeb 2020](https://www.cesnet.cz/akce/bss20/)
* 2019 - EN [ExaFS: mitigating unwanted traffic](https://xn--ondej-kcb.caletka.cz/dl/slidy/20191113-SIGNOC-ExaFS.pdf), [10th SIG-NOC meeting](https://wiki.geant.org/display/SIGNOC/10th+SIG-NOC+meeting), Prague
* 2019 - CZ [Potlačení nežádoucího provozu pomocí BGP Flowspec](https://indico.csnog.eu/event/6/contributions/64/attachments/35/61/CESNET-FlowSpec-CSNOG.pdf), [CSNOG 2019](https://indico.csnog.eu/event/6/overview)
* 2019 - CZ [Nástroje pro FlowSpec a RTBH](https://konference.cesnet.cz/prezentace2019/sal1/3_Adamec.pdf), [Konference e-infrastruktury CESNET](https://konference.cesnet.cz/) 2019
* 2019 - CZ [Nástroje pro obranu proti útokům na páteřních směrovačích](https://konference.cesnet.cz/prezentace2019/sal1/3_Verich.pdf),[Konference e-infrastruktury CESNET](https://konference.cesnet.cz/) 2019
## System overview

The core component of ExaFS is a web application written in Python using the Flask framework. It provides a user interface for managing ExaBGP rules (CRUD operations) and also exposes a REST API with similar functionality. The web application uses Shibboleth for authentication, while the REST API relies on token-based authentication.
The application generates ExaBGP commands and forwards them to the ExaBGP process. All rules are thoroughly validated—only valid rules are stored in the database and sent to the ExaBGP connector.
The second component of the system is a separate application that replicates received commands to `stdout`. The connection between the ExaBGP daemon and the `stdout` of the ExaAPI (ExaBGP process) is defined in the ExaBGP configuration.
This API was originally part of the same project but has since been moved to its own repository. You can use the [exabgp-process pip package](https://pypi.org/project/exabgp-process/), clone the Git repository, or develop your own implementation.
Each time this process receives a command from ExaFS, it outputs it to `stdout`, allowing the ExaBGP service to process the command and update its routing table—creating, modifying, or removing rules accordingly.
It may also be necessary to monitor ExaBGP and re-announce rules after a restart or shutdown. This can be handled via the ExaBGP service configuration, or by using an example system service called **Guarda**, described in the documentation. In either case, the key mechanism is calling the application endpoint `/rules/announce_all`. This endpoint is only accessible from `localhost`; a local IP address must be configured in the application settings.
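A minimal sketch of such a re-announce call, assuming ExaFS is served on localhost port 8080 (the port depends on your deployment):
```bash
# Re-announce all stored rules after an ExaBGP restart (sketch).
# /rules/announce_all only accepts requests from localhost.
curl http://127.0.0.1:8080/rules/announce_all
```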
## DOCS
### Installation related
* [ExaFS Ansible deploy](https://github.com/CESNET/ExaFS-deploy) - repository with an Ansible playbook for deploying ExaFS with Docker Compose.
* [Install notes](./docs/INSTALL.md)
* [using Docker Image](./docs/DockerImage.md)
* [Database backup configuration](./docs/DB_BACKUP.md)
* [Local database instalation notes](./docs/DB_LOCAL.md)
### API
The REST API is documented using Swagger (OpenAPI). After installing and running the application, the API documentation is available locally at the /apidocs/ endpoint. This interactive documentation provides details about all available endpoints, request and response formats, and supported operations, making it easier to integrate and test the API.
## [Change log](./CHANGELOG.md)
| text/markdown | Jiri Vrany, Petr Adamec, Josef Verich, Jakub Man | null | null | Jiri Vrany <jiri.vrany@cesnet.cz> | null | bgp, exabgp, flowspec, ddos, network-security, CESNET | [
"Development Status :: 4 - Beta",
"Intended Audience :: System Administrators",
"Intended Audience :: Telecommunications Industry",
"Operating System :: OS Independent",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.9",
"Programming Language :: Python :: 3.10",
"Programmi... | [] | null | null | >=3.9 | [] | [] | [] | [
"Flask>=2.0.2",
"Flask-SQLAlchemy>=2.2",
"Flask-SSO>=0.4.0",
"Flask-WTF>=1.0.0",
"Flask-Migrate>=3.0.0",
"Flask-Script>=2.0.0",
"Flask-Session",
"PyJWT>=2.4.0",
"PyMySQL>=1.0.0",
"requests>=2.20.0",
"babel>=2.7.0",
"email_validator>=1.1",
"pika>=1.3.0",
"loguru",
"flasgger",
"python-do... | [] | [] | [] | [
"Homepage, https://github.com/CESNET/exafs",
"Repository, https://github.com/CESNET/exafs",
"Documentation, https://github.com/CESNET/exafs/blob/master/README.md",
"Issues, https://github.com/CESNET/exafs/issues",
"Changelog, https://github.com/CESNET/exafs/blob/master/README.md"
] | twine/6.2.0 CPython/3.12.3 | 2026-02-19T20:30:04.227853 | exafs-1.2.2.tar.gz | 118,877 | d4/37/661aef904305cb9b0573a532dc3860af19193173897e425f78caba145b75/exafs-1.2.2.tar.gz | source | sdist | null | false | ad23febd5c1b706497ddd91f9280666a | 45c27415a0bfa5aeb2712a8dbef394d45e1480effcd94bc44e613bab679509da | d437661aef904305cb9b0573a532dc3860af19193173897e425f78caba145b75 | MIT | [
"LICENSE"
] | 187 |
2.4 | linux-mcp-server | 1.3.2 | MCP server for read-only Linux system administration, diagnostics, and troubleshooting | <!-- mcp-name: io.github.rhel-lightspeed/linux-mcp-server -->
[](https://github.com/rhel-lightspeed/linux-mcp-server/actions/workflows/ci.yml)
[](https://codecov.io/gh/rhel-lightspeed/linux-mcp-server)
[](https://pypi.org/project/linux-mcp-server)
[](https://rhel-lightspeed.github.io/linux-mcp-server/)
# Linux MCP Server
A Model Context Protocol (MCP) server for read-only Linux system administration, diagnostics, and troubleshooting on RHEL-based systems.
## Features
- **Read-Only Operations**: All tools are strictly read-only for safe diagnostics
- **Remote SSH Execution**: Execute commands on remote systems via SSH with key-based authentication
- **Multi-Host Management**: Connect to different remote hosts in the same session
- **Comprehensive Diagnostics**: System info, services, processes, logs, network, and storage
- **Configurable Log Access**: Control which log files can be accessed via environment variables
- **RHEL/systemd Focused**: Optimized for Red Hat Enterprise Linux systems
## Installation and Usage
For detailed instructions on setting up and using the Linux MCP Server, please refer to our official documentation:
- **[Installation Guide]**: Detailed steps for `pip`, `uv`, and container-based deployments.
- **[Usage Guide]**: Information on running the server, configuring LLM clients, and troubleshooting.
- **[Cheatsheet]**: A reference for what prompts to use to invoke various tools.
[Installation Guide]: https://rhel-lightspeed.github.io/linux-mcp-server/install/
[Usage Guide]: https://rhel-lightspeed.github.io/linux-mcp-server/usage/
[Cheatsheet]: https://rhel-lightspeed.github.io/linux-mcp-server/cheatsheet/
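As a quick orientation, most MCP clients register a server with a small JSON snippet along these lines (the exact file location and schema depend on your client, and the `uvx linux-mcp-server` invocation is an assumption; the [Installation Guide] has the authoritative steps):
```json
{
  "mcpServers": {
    "linux-mcp-server": {
      "command": "uvx",
      "args": ["linux-mcp-server"]
    }
  }
}
```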
| text/markdown | RHEL Lightspeed | null | null | null | null | null | [
"Development Status :: 5 - Production/Stable",
"Environment :: Console",
"Intended Audience :: Developers",
"Intended Audience :: Information Technology",
"Intended Audience :: System Administrators",
"Natural Language :: English",
"Operating System :: POSIX :: Linux",
"Programming Language :: Python ... | [] | null | null | >=3.10 | [] | [] | [] | [
"asyncssh[bcrypt]>=2.22.0",
"fastmcp<3,>=2.14.4",
"pydantic-settings>=2.12.0",
"pydantic>=2.12.5",
"gssapi>=1.11.1; extra == \"gssapi\""
] | [] | [] | [] | [
"Source code, https://github.com/rhel-lightspeed/linux-mcp-server",
"Bug Tracker, https://github.com/rhel-lightspeed/linux-mcp-server/issues",
"Documentation, https://rhel-lightspeed.github.io/linux-mcp-server/"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-19T20:27:03.379546 | linux_mcp_server-1.3.2.tar.gz | 253,423 | 6f/c6/f317c796d0ac93afdc36a5f779da378047915c4f040f2350915c99394a96/linux_mcp_server-1.3.2.tar.gz | source | sdist | null | false | c49954909900d2fc0831960564676038 | ada26c30b050e8441af832579f4f0e3315766f6029de279a0dbcf5cf0b34007d | 6fc6f317c796d0ac93afdc36a5f779da378047915c4f040f2350915c99394a96 | null | [
"LICENSE",
"licenses/GPL-3.0.txt"
] | 305 |
2.4 | wa-link-parser | 0.2.1 | Extract, classify, and enrich links from WhatsApp chat exports | # wa-link-parser
[](https://pypi.org/project/wa-link-parser/)
[](https://pypi.org/project/wa-link-parser/)
[](https://github.com/sreeramramasubramanian/wa-link-parser/blob/main/LICENSE)
**Turn WhatsApp chat exports into a searchable link catalog.**
`wa-link-parser` takes a WhatsApp `.txt` export and extracts every URL -- classifying them by domain, fetching page titles and descriptions, and exporting everything to CSV or JSON. Works as a CLI tool or a Python library.
## Why this exists
WhatsApp groups accumulate dozens of links daily -- articles, videos, restaurants, travel ideas -- that disappear into chat scroll. There's no good tool to answer "what was that Airbnb link someone shared last month?" This tool fills that gap.
### The pipeline
```
Raw .txt file
  -> Parse           Structured messages with timestamps + senders
  -> Extract         URLs pulled from message text (TLD-aware, not naive regex)
  -> Attribute       Each link tied to WHO shared it and WHEN
  -> Contextualize   Adjacent messages within 60s grabbed as surrounding context
  -> Classify        Domain mapped to type (youtube->video, swiggy->food, github->code)
  -> Enrich          HTTP fetch of each URL -> page title + OG description
  -> Export          SQLite with relational model -> filtered CSV/JSON
```
## Features
- **Multi-format parsing** -- auto-detects 7 WhatsApp export formats (Indian, US, European, German, and more)
- **TLD-aware URL extraction** -- uses `urlextract`, not naive regex, so it catches real URLs and skips noise
- **Domain classification** -- maps 30+ domains to types like `youtube`, `travel`, `food`, `shopping`, `code`
- **Metadata enrichment** -- fetches page titles and OG descriptions with rate limiting and retry
- **SQLite storage** -- relational model with WAL mode; imports are idempotent via message hashing
- **Filtered export** -- CSV or JSON with filters by sender, date range, link type, and domain
- **Domain exclusions** -- auto-filters ephemeral links (Zoom, Google Meet, bit.ly) at export time
- **CLI + library** -- full Click CLI for quick use, clean Python API with no Click dependency for integration
## Installation
```bash
pip install wa-link-parser
```
Or install from source:
```bash
git clone https://github.com/sreeramramasubramanian/wa-link-parser.git
cd wa-link-parser
pip install -e .
```
## Quick start
Three commands, and you have a searchable link catalog:
```bash
# 1. Import a chat export
wa-links import chat.txt --group "Goa Trip 2025"
# 2. Enrich links with page titles and descriptions
wa-links enrich "Goa Trip 2025"
# 3. Export to CSV
wa-links export "Goa Trip 2025"
```
That's it. You'll get a CSV file with every link from the chat, classified and enriched.
Need something more specific? Add filters:
```bash
wa-links export "Goa Trip 2025" --type youtube --format json
wa-links export "Goa Trip 2025" --sender "Priya" --after 2025-10-01
wa-links export "Goa Trip 2025" --no-exclude # include Zoom/Meet links too
```
## Sample output
**CSV** (`wa-links export "Goa Trip 2025"`):
```
sender,date,link,domain,type,title,description,context
Arjun,2025-10-12,https://www.youtube.com/watch?v=K3FnLas09mw,youtube.com,youtube,Best Beaches in South Goa 2025,A complete guide to Goa's hidden beaches...,guys check this out before we finalize
Meera,2025-10-14,https://www.airbnb.co.in/rooms/52841379,airbnb.co.in,travel,Beachside Villa in Palolem,Entire villa · 4 beds · Pool,this one has a pool and is close to the beach
Priya,2025-10-15,https://github.com/sreeramramasubramanian/wa-link-parser,github.com,code,wa-link-parser: Extract links from WhatsApp chats,Python library and CLI for...,use this to save all our links lol
```
**JSON** (`wa-links export "Goa Trip 2025" --format json`):
```json
[
{
"sender": "Arjun",
"date": "2025-10-12",
"link": "https://www.youtube.com/watch?v=K3FnLas09mw",
"domain": "youtube.com",
"type": "youtube",
"title": "Best Beaches in South Goa 2025",
"description": "A complete guide to Goa's hidden beaches...",
"context": "guys check this out before we finalize"
}
]
```
## Library usage
All library functions work without Click -- use callbacks for progress and interaction.
```python
from wa_link_parser import parse_chat_file, extract_links, fetch_metadata, export_links
# Parse a chat export
messages = parse_chat_file("chat.txt")
# Extract and classify links from messages
for msg in messages:
links = extract_links(msg.raw_text)
for link in links:
print(f"{msg.sender}: {link.url} ({link.link_type})")
# Fetch metadata for a single URL
title, description = fetch_metadata("https://www.youtube.com/watch?v=K3FnLas09mw")
# Export with default exclusions
export_links("Goa Trip 2025")
# Export everything, no exclusions
export_links("Goa Trip 2025", exclude_domains=[])
```
### API reference
| Function | Description |
|----------|-------------|
| `parse_chat_file(path)` | Parse a `.txt` export into `ParsedMessage` objects |
| `extract_links(text)` | Extract URLs from text, returns `ExtractedLink` objects |
| `classify_url(url)` | Classify a URL by domain, returns link type string |
| `fetch_metadata(url)` | Fetch page title and description for a URL |
| `enrich_links(group_id)` | Enrich all unenriched links for a group in the DB |
| `export_links(group, ...)` | Export links to CSV/JSON with filters and exclusions |
| `filter_excluded_domains(links, ...)` | Filter link dicts by domain exclusion list |
| `reset_exclusion_cache()` | Clear cached exclusion domains (for testing) |
### Data classes
| Class | Fields |
|-------|--------|
| `ParsedMessage` | `timestamp`, `sender`, `raw_text`, `is_system` |
| `ExtractedLink` | `url`, `domain`, `link_type` |
| `ImportStats` | `new_messages`, `skipped_messages`, `links_extracted`, `contacts_created` |
## Supported formats
The parser auto-detects WhatsApp export formats from multiple locales:
| Format | Example |
|--------|---------|
| Indian (bracket, tilde) | `[20/10/2025, 10:29:01 AM] ~ Sender: text` |
| US (bracket, short year) | `[1/15/25, 3:45:30 PM] Sender: text` |
| International (no bracket, 24h) | `20/10/2025, 14:30 - Sender: text` |
| US (no bracket, 12h) | `1/15/25, 3:45 PM - Sender: text` |
| European (short year, 24h) | `20/10/25, 14:30 - Sender: text` |
| German (dots) | `20.10.25, 14:30 - Sender: text` |
| Bracket (no tilde, full year) | `[20/10/2025, 10:29:01 AM] Sender: text` |
## CLI reference
### `import`
Import a WhatsApp chat export file.
```bash
wa-links import <file> --group "Group Name"
wa-links import <file> --group "Group Name" --enrich
```
- Deduplicates on reimport (idempotent)
- Resolves contacts with fuzzy matching on subsequent imports
- Builds context from adjacent messages by the same sender (within 60s)
### `enrich`
Fetch page titles and descriptions for unenriched links.
```bash
wa-links enrich "Group Name"
```
- Extracts `og:title` and `og:description`, falls back to `<title>` tag
- Rate-limited (2 req/sec) with retry on failure
- Safe to run multiple times -- only fetches metadata for new links
### `export`
Export links to CSV or JSON with optional filters.
```bash
wa-links export "Group Name"
wa-links export "Group Name" --format json
wa-links export "Group Name" --type youtube --sender "Alice" --after 2025-10-01
wa-links export "Group Name" --no-exclude
```
| Flag | Description |
|------|-------------|
| `--output` | Output file path |
| `--type` | Filter by link type (e.g., `youtube`, `travel`, `shopping`) |
| `--sender` | Filter by sender name (substring match) |
| `--after` | Only links after this date (`YYYY-MM-DD`) |
| `--before` | Only links before this date (`YYYY-MM-DD`) |
| `--domain` | Filter by domain (substring match) |
| `--format` | `csv` (default) or `json` |
| `--no-exclude` | Disable default domain exclusions |
### `stats`
Show group statistics.
```bash
wa-links stats "Group Name"
```
### `groups`
List all imported groups.
### `contacts`
List or resolve contacts.
```bash
wa-links contacts "Group Name"
wa-links contacts "Group Name" --resolve
```
### `reset`
Delete all data for a group to reimport fresh.
```bash
wa-links reset "Group Name" --yes
```
## Configuration
### Link types
Built-in domain-to-type mappings:
| Type | Domains |
|------|---------|
| youtube | youtube.com, youtu.be |
| google_maps | maps.google.com, maps.app.goo.gl |
| document | docs.google.com, drive.google.com |
| instagram | instagram.com |
| twitter | twitter.com, x.com |
| spotify | open.spotify.com, spotify.link |
| reddit | reddit.com |
| linkedin | linkedin.com |
| article | medium.com |
| notion | notion.so |
| github | github.com |
| stackoverflow | stackoverflow.com |
| shopping | amazon.in, amazon.com, flipkart.com |
| food | swiggy.com, zomato.com |
| travel | airbnb.com, tripadvisor.com |
| general | everything else |
To add or override mappings, create a `link_types.json` in your working directory:
```json
{
"tiktok.com": "tiktok",
"www.tiktok.com": "tiktok",
"substack.com": "newsletter"
}
```
### Domain exclusions
By default, `export` filters out ephemeral/temporary links that clutter exports:
| Category | Domains |
|----------|---------|
| Video calls | meet.google.com, zoom.us, teams.microsoft.com, teams.live.com |
| Email | mail.google.com, outlook.live.com, outlook.office.com |
| URL shorteners | bit.ly, tinyurl.com, t.co, we.tl |
All links are still stored in the database -- exclusions only apply at export time.
To customize, create an `exclusions.json` in your working directory. It's a JSON array of domains to add. Prefix with `!` to remove a built-in default:
```json
[
"calendly.com",
"!bit.ly"
]
```
This adds `calendly.com` to the exclusion list and removes `bit.ly` from it.
Programmatic control:
```python
export_links("Group") # default exclusions
export_links("Group", exclude_domains=[]) # no exclusions
export_links("Group", exclude_domains=["zoom.us", "calendly.com"]) # custom list
```
## Storage
Data is stored in a SQLite database (WAL mode). Set the path with:
```bash
export WA_LINKS_DB_PATH=/path/to/wa_links.db
```
Defaults to `wa_links.db` in the current directory.
## Development
```bash
pip install -e ".[dev]"
pytest
```
91 tests covering parsing, extraction, classification, enrichment, export, and exclusions. Python 3.10+ required.
## License
MIT
| text/markdown | Sreeram Ramasubramanian | null | null | null | null | whatsapp, links, parser, chat, url-extractor | [
"Development Status :: 4 - Beta",
"Intended Audience :: Developers",
"Intended Audience :: Science/Research",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programming Language :: Py... | [] | null | null | >=3.10 | [] | [] | [] | [
"click>=8.1.0",
"urlextract>=1.9.0",
"requests>=2.31.0",
"beautifulsoup4>=4.12.0",
"pytest>=7.4.0; extra == \"dev\"",
"whatstk>=0.7.1; extra == \"dev\""
] | [] | [] | [] | [
"Homepage, https://github.com/sreeramramasubramanian/wa-link-parser",
"Repository, https://github.com/sreeramramasubramanian/wa-link-parser",
"Issues, https://github.com/sreeramramasubramanian/wa-link-parser/issues"
] | twine/6.2.0 CPython/3.10.1 | 2026-02-19T20:26:58.067313 | wa_link_parser-0.2.1.tar.gz | 25,548 | c6/59/99d81b5881c171bd94d176889e621b00a603cc65881e774ba121ede0bd74/wa_link_parser-0.2.1.tar.gz | source | sdist | null | false | 92fa4ab7f6585a38a674d8c572e1173d | 89b52eefe6dd29459ba00b08afdccd744b6f2f76caae565623f2c8ebea72fba8 | c65999d81b5881c171bd94d176889e621b00a603cc65881e774ba121ede0bd74 | MIT | [
"LICENSE"
] | 181 |
2.4 | compresr | 1.1.1 | Python SDK for Compresr - Intelligent prompt compression service | # Compresr Python SDK
Intelligent context compression service to optimize LLM costs and performance. Reduce your LLM API costs by 30-70% through intelligent context compression.
## Installation
```bash
pip install compresr
```
## Quick Start
### API Key Setup
Get your API key from [compresr.ai](https://compresr.ai):
1. Create an account at [compresr.ai](https://compresr.ai)
2. Navigate to Dashboard → API Keys
3. Click "Create New Key" and copy it (shown only once!)
### Two Types of Compression
#### 1. Agnostic Compression (No Question Needed)
Use `CompressionClient` for general-purpose compression:
```python
from compresr import CompressionClient
client = CompressionClient(api_key="cmp_your_api_key")
result = client.compress(
context="Your very long context that needs compression...",
compression_model_name="A_CMPRSR_V1" # or "A_CMPRSR_V1_FLASH" for speed
)
print(f"Original: {result.data.original_tokens} tokens")
print(f"Compressed: {result.data.compressed_tokens} tokens")
print(f"Saved: {result.data.tokens_saved} tokens")
```
#### 2. Question-Specific Compression
Use `QSCompressionClient` to compress based on a specific question:
```python
from compresr import QSCompressionClient
client = QSCompressionClient(api_key="cmp_your_api_key")
result = client.compress(
context="Python was created in 1991. JavaScript in 1995. Java in 1995.",
question="Who created Python?",
compression_model_name="QS_CMPRSR_V1"
)
print(f"Compressed (question-relevant): {result.data.compressed_context}")
print(f"Saved: {result.data.tokens_saved} tokens")
```
### Integration with OpenAI
**Agnostic compression:**
```python
from compresr import CompressionClient
from openai import OpenAI
compresr = CompressionClient(api_key="cmp_xxx")
openai_client = OpenAI(api_key="sk-xxx")
compressed = compresr.compress(
context="Your long system prompt or document...",
compression_model_name="A_CMPRSR_V1"
)
response = openai_client.chat.completions.create(
model="gpt-4o",
messages=[
{"role": "system", "content": compressed.data.compressed_context},
{"role": "user", "content": "Analyze this data..."}
]
)
print(f"Saved {compressed.data.tokens_saved} tokens!")
```
**Question-specific compression (RAG/QA):**
```python
from compresr import QSCompressionClient
from openai import OpenAI
compresr = QSCompressionClient(api_key="cmp_xxx")
openai_client = OpenAI(api_key="sk-xxx")
user_question = "What is machine learning?"
# Compress retrieval results based on the question
compressed = compresr.compress(
context="Retrieved documents with lots of information...",
question=user_question,
compression_model_name="QS_CMPRSR_V1"
)
response = openai_client.chat.completions.create(
model="gpt-4o",
messages=[
{"role": "system", "content": compressed.data.compressed_context},
{"role": "user", "content": user_question}
]
)
```
## Streaming Support
Both clients support real-time streaming:
```python
from compresr import CompressionClient, QSCompressionClient
# Agnostic streaming
client = CompressionClient(api_key="cmp_your_api_key")
for chunk in client.compress_stream(
context="Your long context...",
compression_model_name="A_CMPRSR_V1"
):
print(chunk.content, end="", flush=True)
# Question-specific streaming
qs_client = QSCompressionClient(api_key="cmp_your_api_key")
for chunk in qs_client.compress_stream(
context="Your long context...",
question="What is important here?",
compression_model_name="QS_CMPRSR_V1"
):
print(chunk.content, end="", flush=True)
```
## Async Support
Both clients support async/await:
```python
import asyncio
from compresr import CompressionClient, QSCompressionClient
async def main():
# Agnostic async
client = CompressionClient(api_key="cmp_your_api_key")
result = await client.compress_async(
context="Your context...",
compression_model_name="A_CMPRSR_V1"
)
# Question-specific async
qs_client = QSCompressionClient(api_key="cmp_your_api_key")
qs_result = await qs_client.compress_async(
context="Your context...",
question="What matters here?",
compression_model_name="QS_CMPRSR_V1"
)
await client.close()
await qs_client.close()
asyncio.run(main())
```
## Batch Processing
Both clients support batch processing:
```python
from compresr import CompressionClient, QSCompressionClient
# Agnostic batch
client = CompressionClient(api_key="cmp_your_api_key")
results = client.compress_batch(
contexts=["First context...", "Second context..."],
compression_model_name="A_CMPRSR_V1"
)
# Question-specific batch
qs_client = QSCompressionClient(api_key="cmp_your_api_key")
qs_results = qs_client.compress_batch(
contexts=["Context 1...", "Context 2..."],
questions=["Question 1?", "Question 2?"],
compression_model_name="QS_CMPRSR_V1"
)
print(f"Total tokens saved: {results.data.total_tokens_saved}")
```
## API Reference
### Client Initialization
```python
from compresr import CompressionClient, QSCompressionClient
# Agnostic compression
client = CompressionClient(
api_key="cmp_your_api_key", # Required
timeout=30 # Optional: request timeout in seconds
)
# Question-specific compression
qs_client = QSCompressionClient(
api_key="cmp_your_api_key", # Required
timeout=30 # Optional: request timeout in seconds
)
```
**Note:** BASE_URL is fixed to `https://api.compresr.ai` and cannot be changed.
### Methods
Both `CompressionClient` and `QSCompressionClient` support:
| Method | Description |
|--------|-------------|
| `compress()` | Compress single context (QS requires `question` param) |
| `compress_async()` | Async compress |
| `compress_batch()` | Batch compress (QS requires `questions` list) |
| `compress_stream()` | Stream compression |
### Response Structure
```python
# CompressionResult
result.data.original_context # Original input
result.data.compressed_context # Compressed output
result.data.original_tokens # Token count before
result.data.compressed_tokens # Token count after
result.data.actual_compression_ratio # Achieved ratio (0-1)
result.data.tokens_saved # Tokens saved
result.data.duration_ms # Processing time
# BatchResult
results.data.total_original_tokens
results.data.total_compressed_tokens
results.data.total_tokens_saved
results.data.average_compression_ratio
results.data.count
results.data.results # List of CompressionResult
```
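For instance, a quick savings summary can be derived from these fields (assuming `result` comes from a `compress()` call as shown above):
```python
pct = 100 * result.data.tokens_saved / result.data.original_tokens
print(f"Saved {result.data.tokens_saved} tokens ({pct:.1f}%) in {result.data.duration_ms} ms")
```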
## Available Models
### Agnostic Models (CompressionClient)
| Model | Description | Use Case |
|-------|-------------|----------|
| `A_CMPRSR_V1` | LLM-based abstractive compression (default) | General purpose, best quality |
| `A_CMPRSR_V1_FLASH` | Fast extractive compression | Speed-critical applications |
### Question-Specific Models (QSCompressionClient)
| Model | Description | Use Case |
|-------|-------------|----------|
| `QS_CMPRSR_V1` | Question-specific compression, Abstractive (default) | General purpose |
| `QSR_CMPRSR_V1` | Question-specific Extractive | General purpose |
## Error Handling
Both clients use the same exception handling:
```python
from compresr import CompressionClient, QSCompressionClient
from compresr.exceptions import (
CompresrError,
AuthenticationError,
RateLimitError,
ValidationError,
)
client = CompressionClient(api_key="cmp_your_api_key")
try:
result = client.compress(
context="Your context...",
compression_model_name="A_CMPRSR_V1"
)
except AuthenticationError:
print("Invalid API key")
except RateLimitError:
print("Rate limit exceeded")
except ValidationError as e:
print(f"Invalid request: {e}")
except CompresrError as e:
print(f"API error: {e}")
```
## Requirements
- Python 3.9+
- `httpx >= 0.27.0`
- `pydantic >= 2.10.0`
## License
Proprietary License
## Support
- Documentation: [compresr.ai/docs/overview](https://compresr.ai/docs/overview)
- Support: [support@compresr.ai](mailto:support@compresr.ai)
- Issues: [GitHub Issues](https://github.com/compresr/sdk/issues)
- Website: [compresr.ai](https://compresr.ai)
| text/markdown | null | Compresr Team <founders@compresr.ai> | null | null | null | llm, compression, ai, openai, gpt, tokens, cost-optimization | [
"Development Status :: 4 - Beta",
"Intended Audience :: Developers",
"Operating System :: OS Independent",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.9",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python... | [] | null | null | >=3.9 | [] | [] | [] | [
"httpx>=0.27.0",
"pydantic>=2.10.0",
"pytest>=7.0.0; extra == \"dev\"",
"pytest-asyncio>=0.21.0; extra == \"dev\"",
"pytest-cov>=4.0.0; extra == \"dev\"",
"pytest-timeout>=2.1.0; extra == \"dev\"",
"black>=23.0.0; extra == \"dev\"",
"isort>=5.12.0; extra == \"dev\"",
"ruff>=0.1.0; extra == \"dev\"",... | [] | [] | [] | [
"Homepage, https://compresr.com",
"Documentation, https://docs.compresr.com",
"Repository, https://github.com/compresr/sdk",
"Issues, https://github.com/compresr/sdk/issues"
] | twine/6.2.0 CPython/3.11.8 | 2026-02-19T20:26:09.105760 | compresr-1.1.1.tar.gz | 15,184 | 7c/88/07bca40253c276cef998c67abe0951edf4087125021cafb074da4690093a/compresr-1.1.1.tar.gz | source | sdist | null | false | 343645b7039e82389054350c26dbd4b9 | 48e1bbf16da91c0bf58ea312107283612d812cc99f055b9c6e8bcafaffe25e12 | 7c8807bca40253c276cef998c67abe0951edf4087125021cafb074da4690093a | LicenseRef-Proprietary | [] | 213 |
2.4 | aip-engine | 0.4.1 | Algebraic Independence Processor — auto-detection of matrix structure + memory-efficient computation for ultra-large sparse systems | # AIP Engine
**Algebraic Independence Processor** — auto-detection of matrix structure + memory-efficient computation for ultra-large sparse systems.
```
pip install aip-engine
```
## What it does
AIP Engine solves a real problem: building and solving sparse linear systems that are **too large for conventional tools**. It does three things:
1. **Detects** matrix structure automatically (sparse/dense, square/rectangular)
2. **Routes** to the optimal solver (LSQR, spsolve, LAPACK)
3. **Accordion Memory**: builds and solves ultra-large systems without running out of RAM
## Quick start
```python
import aip
# Auto-detect structure and solve
report = aip.detect_matrix(A)
x = aip.solve(A, b)
```
## Accordion Memory
For systems with millions or billions of entries that don't fit in RAM:
```python
from aip.accordion import PascalIndex, AccordionBuilder, solve_chunks
# 1. Mathematical indexing (replaces dictionary, saves GB of RAM)
index = PascalIndex(num_vars=30, max_degree=10)
idx = index.combo_to_index((3, 7, 12)) # O(k) time, 0 extra memory
# 2. Batch construction (never all in RAM at once)
builder = AccordionBuilder(num_rows=53_000_000)
# ... add entries in batches ...
builder.flush() # converts to CSR chunk, frees raw data
chunks = builder.finalize()
# 3. Streaming solve (never assembles full matrix)
result = solve_chunks(chunks, b, max_iter=10000)
print(result['residual'], result['size_l2'])
```
## Why Accordion?
| | Without Accordion | With Accordion |
|---|---|---|
| Monomial index (8.6M entries) | ~2 GB dictionary | 0 MB (computed mathematically) |
| Matrix construction (150M entries) | ~12 GB Python lists | ~2.4 GB array.array per batch |
| Full matrix (53M x 1.17B) | 496,052 TB dense | 14.5 GB sparse chunks |
| Solve | needs full matrix in RAM | streaming over chunks |
Real-world results:
| Problem | Matrix size | Dense would be | Accordion uses | Compression |
|---|---|---|---|---|
| PHP n=4 d=8 | 8.6M x 78M | 5.4 PB | 1.2 GB | 4,640,586x |
| PHP n=5 d=10 | 53M x 1.17B | 496,052 TB | 14.5 GB | 34,215,310x |
## How it works
### PascalIndex
Uses the [Combinatorial Number System](https://en.wikipedia.org/wiki/Combinatorial_number_system) to compute the index of any monomial in O(k) time using a precomputed Pascal table. No dictionary needed.
```python
index = PascalIndex(num_vars=45, max_degree=10)
print(index) # PascalIndex(vars=45, deg=10, monomials=4,346,814,276, pascal=4048 bytes)
# 4.3 billion monomials indexed with 4 KB of memory
```
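As an illustration of the underlying technique (a didactic sketch, not the library's code), the combinatorial number system ranks a strictly increasing k-combination by summing binomial coefficients:
```python
from math import comb

def combo_rank(combo):
    """Colexicographic rank of a strictly increasing combination.
    Sketch of the idea behind PascalIndex; the library replaces comb()
    with lookups in a precomputed Pascal table for O(k) indexing.
    """
    return sum(comb(c, i) for i, c in enumerate(combo, start=1))

assert combo_rank((0, 1, 2)) == 0  # first 3-combination
assert combo_rank((0, 1, 3)) == 1  # next one in colex order
```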
### AccordionBuilder
Builds sparse matrices in batches using `array.array` (C-native, 4-8 bytes/element) instead of Python lists (28 bytes/element). Each batch is converted to CSR immediately and raw arrays are freed.
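A minimal sketch of that batching pattern with SciPy (illustrative only; the names and flush logic here are assumptions, not the library's internals):
```python
import array
import scipy.sparse as sp

num_rows, num_cols = 1_000, 1_000  # example chunk dimensions
rows, cols, vals = array.array("q"), array.array("q"), array.array("d")  # C-native storage
rows.append(0); cols.append(5); vals.append(1.0)  # ... append one batch of entries ...
# "Flush": convert the raw arrays into a compact CSR chunk, then free them.
chunk = sp.coo_matrix((vals, (rows, cols)), shape=(num_rows, num_cols)).tocsr()
del rows, cols, vals  # raw batch data released; only the CSR chunk stays in RAM
```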
### solve_chunks
LSQR solver that operates on a `LinearOperator` built from column chunks. The matvec/rmatvec operations iterate over chunks sequentially, never needing the full matrix in memory.
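The streaming idea can be sketched with SciPy's `LinearOperator` and `lsqr` (a simplified illustration assuming the chunks are CSR column blocks; the real `solve_chunks` adds iteration control and result reporting):
```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import LinearOperator, lsqr

def chunked_operator(chunks):
    """Wrap a list of CSR column blocks [A_1 | A_2 | ...] as one LinearOperator."""
    m = chunks[0].shape[0]
    n = sum(c.shape[1] for c in chunks)
    def matvec(x):
        y, col = np.zeros(m), 0
        for c in chunks:  # stream over chunks; the full matrix is never assembled
            y += c @ x[col:col + c.shape[1]]
            col += c.shape[1]
        return y
    def rmatvec(y):
        return np.concatenate([c.T @ y for c in chunks])
    return LinearOperator((m, n), matvec=matvec, rmatvec=rmatvec)

chunks = [sp.random(100, 30, density=0.05, format="csr") for _ in range(2)]
b = np.random.rand(100)
x = lsqr(chunked_operator(chunks), b)[0]  # least-squares solution over both chunks
```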
## Requirements
- Python >= 3.8
- NumPy >= 1.20
- SciPy >= 1.7
## License
MIT License - Carmen Esteban, 2025-2026
| text/markdown | Carmen Esteban | null | null | null | MIT | sparse-matrix, linear-algebra, memory-efficient, combinatorial-number-system, accordion-memory, large-scale-computation, scientific-computing | [
"Development Status :: 4 - Beta",
"Intended Audience :: Science/Research",
"License :: OSI Approved :: MIT License",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.8",
"Programming Language :: Python :: 3.9",
"Programming Language :: Python :: 3.10",
"Programming Language ... | [] | null | null | >=3.8 | [] | [] | [] | [
"numpy>=1.20",
"scipy>=1.7"
] | [] | [] | [] | [
"Homepage, https://github.com/iafiscal1212/aip-engine",
"Repository, https://github.com/iafiscal1212/aip-engine",
"Issues, https://github.com/iafiscal1212/aip-engine/issues"
] | twine/6.2.0 CPython/3.12.3 | 2026-02-19T20:25:47.315754 | aip_engine-0.4.1.tar.gz | 16,907 | 7d/72/976f8f67b08eca1be116e2678b9116928506f7972b11f163da597eacd3e1/aip_engine-0.4.1.tar.gz | source | sdist | null | false | 77c6a7df9a7a430729208691dfba1641 | b35fda122a5deb2a7e418cf817222c2092be72fe47b1904fc8eff6ef30422e14 | 7d72976f8f67b08eca1be116e2678b9116928506f7972b11f163da597eacd3e1 | null | [
"LICENSE"
] | 231 |
2.4 | griptape | 1.9.3 | Modular Python framework for LLM workflows, tools, memory, and data. | 
[](https://pypi.python.org/pypi/griptape)
[](https://github.com/griptape-ai/griptape/actions/workflows/unit-tests.yml)
[](https://griptape.readthedocs.io/)
[](https://microsoft.github.io/pyright/)
[](https://github.com/astral-sh/ruff)
[](https://codecov.io/github/griptape-ai/griptape)
[](https://discord.gg/griptape)
Griptape is a Python framework designed to simplify the development of generative AI (genAI) applications.
It offers a set of straightforward, flexible abstractions for working with areas such as Large Language Models (LLMs), Retrieval-Augmented Generation (RAG), and much more.
## 🛠️ Core Components
### 🏗️ Structures
- 🤖 **Agents** consist of a single Task, configured for Agent-specific behavior.
- 🔄 **Pipelines** organize a sequence of Tasks so that the output from one Task may flow into the next.
- 🌐 **Workflows** configure Tasks to operate in parallel.
### 📝 Tasks
Tasks are the core building blocks within Structures, enabling interaction with Engines, Tools, and other Griptape components.
### 🧠 Memory
- 💬 **Conversation Memory** enables LLMs to retain and retrieve information across interactions.
- 🗃️ **Task Memory** keeps large or sensitive Task outputs off the prompt that is sent to the LLM.
- 📊 **Meta Memory** enables passing in additional metadata to the LLM, enhancing the context and relevance of the interaction.
### 🚗 Drivers
Drivers facilitate interactions with external resources and services in Griptape.
They allow you to swap out functionality and providers with minimal changes to your business logic.
#### LLM & Orchestration
- 🗣️ **Prompt Drivers**: Manage textual and image interactions with LLMs.
- 🤖 **Assistant Drivers**: Enable interactions with various “assistant” services.
- 📜 **Ruleset Drivers**: Load and apply rulesets from external sources.
- 🧠 **Conversation Memory Drivers**: Store and retrieve conversational data.
- 📡 **Event Listener Drivers**: Forward framework events to external services.
- 🏗️ **Structure Run Drivers**: Execute structures locally or in the cloud.
#### Retrieval & Storage
- 🔢 **Embedding Drivers**: Generate vector embeddings from textual inputs.
- 🔀 **Rerank Drivers**: Rerank search results for improved relevance.
- 💾 **Vector Store Drivers**: Manage the storage and retrieval of embeddings.
- 🗂️ **File Manager Drivers**: Handle file operations on local and remote storage.
- 💼 **SQL Drivers**: Interact with SQL databases.
#### Multimodal
- 🎨 **Image Generation Drivers**: Create images from text descriptions.
- 🗣️ **Text to Speech Drivers**: Convert text to speech.
- 🎙️ **Audio Transcription Drivers**: Convert audio to text.
#### Web
- 🔍 **Web Search Drivers**: Search the web for information.
- 🌐 **Web Scraper Drivers**: Extract data from web pages.
#### Observability
- 📈 **Observability Drivers**: Send trace and event data to observability platforms.
### 🔧 Tools
Tools provide capabilities for LLMs to interact with data and services.
Griptape includes a variety of [built-in Tools](https://docs.griptape.ai/stable/griptape-framework/tools/official-tools/), and makes it easy to create [custom Tools](https://docs.griptape.ai/stable/griptape-framework/tools/custom-tools/).
### 🚂 Engines
Engines wrap Drivers and provide use-case-specific functionality:
- 📊 **RAG Engine** is an abstraction for implementing modular Retrieval Augmented Generation (RAG) pipelines.
- 🛠️ **Extraction Engine** extracts JSON or CSV data from unstructured text.
- 📝 **Summary Engine** generates summaries from textual content.
- ✅ **Eval Engine** evaluates and scores the quality of generated text.
### 📦 Additional Components
- 📐 **Rulesets** steer LLM behavior with minimal prompt engineering.
- 🔄 **Loaders** load data from various sources.
- 🏺 **Artifacts** allow for passing data of different types between Griptape components.
- ✂️ **Chunkers** segment texts into manageable pieces for diverse text types.
- 🔢 **Tokenizers** count the number of tokens in a text to not exceed LLM token limits.
## Documentation
Please visit the [docs](https://docs.griptape.ai/) for information on installation and usage.
Check out [Griptape Trade School](https://learn.griptape.ai/) for free online courses.
## Hello World Example
Here's a minimal example of griptape:
```python
from griptape.drivers.prompt.openai import OpenAiChatPromptDriver
from griptape.rules import Rule
from griptape.tasks import PromptTask
task = PromptTask(
prompt_driver=OpenAiChatPromptDriver(model="gpt-4.1"),
rules=[Rule("Keep your answer to a few sentences.")],
)
result = task.run("How do I do a kickflip?")
print(result.value)
```
```text
To do a kickflip, start by positioning your front foot slightly angled near the middle of the board and your back foot on the tail.
Pop the tail down with your back foot while flicking the edge of the board with your front foot to make it spin.
Jump and keep your body centered over the board, then catch it with your feet and land smoothly. Practice and patience are key!
```
## Task and Workflow Example
Here is a concise example using griptape to research open source projects:
```python
from griptape.drivers.prompt.openai_chat_prompt_driver import OpenAiChatPromptDriver
from griptape.drivers.web_search.duck_duck_go import DuckDuckGoWebSearchDriver
from griptape.rules import Rule, Ruleset
from griptape.structures import Workflow
from griptape.tasks import PromptTask, TextSummaryTask
from griptape.tools import WebScraperTool, WebSearchTool
from griptape.utils import StructureVisualizer
from pydantic import BaseModel
class Feature(BaseModel):
name: str
description: str
emoji: str
class Output(BaseModel):
answer: str
key_features: list[Feature]
projects = ["griptape", "langchain", "crew-ai", "pydantic-ai"]
prompt_driver = OpenAiChatPromptDriver(model="gpt-4.1")
workflow = Workflow(
tasks=[
[
PromptTask(
id=f"project-{project}",
input="Tell me about the open source project: {{ project }}.",
prompt_driver=prompt_driver,
context={"project": projects},
output_schema=Output,
tools=[
WebSearchTool(
web_search_driver=DuckDuckGoWebSearchDriver(),
),
WebScraperTool(),
],
child_ids=["summary"],
)
for project in projects
],
TextSummaryTask(
input="{{ parents_output_text }}",
id="summary",
rulesets=[
Ruleset(
name="Format", rules=[Rule("Be detailed."), Rule("Include emojis.")]
)
],
),
]
)
workflow.run()
print(StructureVisualizer(workflow).to_url())
```
```text
Output: Here's a detailed summary of the open-source projects mentioned:
1. **Griptape** 🛠️:
- Griptape is a modular Python framework designed for creating AI-powered applications. It focuses on securely connecting to
enterprise data and APIs. The framework provides structured components like Agents, Pipelines, and Workflows, allowing for both
parallel and sequential operations. It includes built-in tools and supports custom tool creation for data and service
interaction.
2. **LangChain** 🔗:
- LangChain is a framework for building applications powered by Large Language Models (LLMs). It offers a standard interface
for models, embeddings, and vector stores, facilitating real-time data augmentation and model interoperability. LangChain
integrates with various data sources and external systems, making it adaptable to evolving technologies.
3. **CrewAI** 🤖:
- CrewAI is a standalone Python framework for orchestrating multi-agent AI systems. It allows developers to create and
manage AI agents that collaborate on complex tasks. CrewAI emphasizes ease of use and scalability, providing tools and
documentation to help developers build AI-powered solutions.
4. **Pydantic-AI** 🧩:
- Pydantic-AI is a Python agent framework that simplifies the development of production-grade applications with Generative
AI. Built on Pydantic, it supports various AI models and provides features like type-safe design, structured response
validation, and dependency injection. Pydantic-AI aims to bring the ease of FastAPI development to AI applications.
These projects offer diverse tools and frameworks for developing AI applications, each with unique features and capabilities
tailored to different aspects of AI development.
```
```mermaid
graph TD;
griptape-->summary;
langchain-->summary;
pydantic-ai-->summary;
crew-ai-->summary;
```
## Versioning
Griptape uses [Semantic Versioning](https://semver.org/).
## Contributing
Thank you for considering contributing to Griptape! Before you start, please review our [Contributing Guidelines](https://github.com/griptape-ai/griptape/blob/main/CONTRIBUTING.md).
## License
Griptape is available under the Apache 2.0 License.
| text/markdown | null | Griptape <hello@griptape.ai> | null | null | null | null | [] | [] | null | null | <4,>=3.9 | [] | [] | [] | [
"attrs>=24.3.0",
"filetype>=1.2",
"jinja2>=3.1.4",
"json-schema-to-pydantic>=0.4.6",
"marshmallow-enum>=1.5.1",
"marshmallow<4,>=3.21.3",
"numpy<3,>=1.26.4",
"openai>=1.1.1",
"pip>=25.0.1",
"pydantic>=2.7.4",
"pyyaml>=6.0.1",
"requests>=2.32.0",
"rich>=13.7.1",
"schema>=0.7.7",
"tenacity... | [] | [] | [] | [
"Repository, https://github.com/griptape-ai/griptape"
] | uv/0.10.4 {"installer":{"name":"uv","version":"0.10.4","subcommand":["publish"]},"python":null,"implementation":{"name":null,"version":null},"distro":{"name":"Ubuntu","version":"24.04","id":"noble","libc":null},"system":{"name":null,"release":null},"cpu":null,"openssl_version":null,"setuptools_version":null,"rustc_version":null,"ci":true} | 2026-02-19T20:25:22.212196 | griptape-1.9.3.tar.gz | 192,381 | 2e/a7/8341b996b51acaa17bf35931e8285c87c36a61192a0291402023d1979a23/griptape-1.9.3.tar.gz | source | sdist | null | false | d815e5d7e517b7c4b8a4d6ae74ab5182 | b4fb63fd73fe3fcab9a9269c91b3123a91d9a66fe3e2a78d6ec5018d9f7c464d | 2ea78341b996b51acaa17bf35931e8285c87c36a61192a0291402023d1979a23 | Apache-2.0 | [
"LICENSE",
"NOTICE"
] | 950 |
2.1 | odoo-addon-helm_portal | 18.0.1.0.0 | Update Helm releases in portal. | ===========
Helm Portal
===========
..
!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
!! This file is generated by oca-gen-addon-readme !!
!! changes will be overwritten. !!
!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
!! source digest: sha256:4c7ecef22fbd1036c4fe6ed1f04ecb36718cabe9d6aed3ebb9ff0ab102f48dd4
!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
.. |badge1| image:: https://img.shields.io/badge/maturity-Production%2FStable-green.png
:target: https://odoo-community.org/page/development-status
:alt: Production/Stable
.. |badge2| image:: https://img.shields.io/badge/licence-AGPL--3-blue.png
:target: http://www.gnu.org/licenses/agpl-3.0-standalone.html
:alt: License: AGPL-3
.. |badge3| image:: https://img.shields.io/badge/github-Mint--system%2F-lightgray.png?logo=github
:target: https://github.com/Mint-system//tree/18.0/helm_portal
:alt: Mint-system/
|badge1| |badge2| |badge3|
Update Helm releases in portal.
**Table of contents**
.. contents::
:local:
Usage
=====
Bug Tracker
===========
Bugs are tracked on `GitHub Issues <https://github.com/Mint-system//issues>`_.
In case of trouble, please check there if your issue has already been reported.
If you spotted it first, help us to smash it by providing a detailed and welcomed
`feedback <https://github.com/Mint-system//issues/new?body=module:%20helm_portal%0Aversion:%2018.0%0A%0A**Steps%20to%20reproduce**%0A-%20...%0A%0A**Current%20behavior**%0A%0A**Expected%20behavior**>`_.
Do not contact contributors directly about support or help with technical issues.
Credits
=======
Authors
-------
* Mint System GmbH
Contributors
------------
- Janik von Rotz <login@janikvonrotz.ch>
Maintainers
-----------
This module is part of the `Mint-system/ <https://github.com/Mint-system//tree/18.0/helm_portal>`_ project on GitHub.
You are welcome to contribute.
| text/x-rst | Mint System GmbH | null | null | null | AGPL-3 | null | [
"Programming Language :: Python",
"Framework :: Odoo",
"Framework :: Odoo :: 18.0",
"License :: OSI Approved :: GNU Affero General Public License v3",
"Development Status :: 5 - Production/Stable"
] | [] | https://www.mint-system.ch/ | null | >=3.10 | [] | [] | [] | [
"odoo-addon-helm==18.0.*",
"odoo==18.0.*"
] | [] | [] | [] | [] | uv/0.10.4 {"installer":{"name":"uv","version":"0.10.4","subcommand":["publish"]},"python":null,"implementation":{"name":null,"version":null},"distro":{"name":"Ubuntu","version":"24.04","id":"noble","libc":null},"system":{"name":null,"release":null},"cpu":null,"openssl_version":null,"setuptools_version":null,"rustc_version":null,"ci":true} | 2026-02-19T20:25:15.432572 | odoo_addon_helm_portal-18.0.1.0.0-py3-none-any.whl | 415,136 | 6f/7c/7096e2d6dda11aeed2665f50b080f25bf043a850f5f42171fb659ca213f2/odoo_addon_helm_portal-18.0.1.0.0-py3-none-any.whl | py3 | bdist_wheel | null | false | 952c71d36562cdad01700e0123503a0c | 0ea563bcefd888340ffebc97d0529e34aa0f5521e7b8fc3f361d72ffb51e1553 | 6f7c7096e2d6dda11aeed2665f50b080f25bf043a850f5f42171fb659ca213f2 | null | [] | 0 |
2.4 | heylead | 0.3.0 | MCP-native autonomous LinkedIn SDR. Your AI sales rep, one command to fill your pipeline. | # HeyLead
**Your AI sales rep. One command to fill your pipeline.**
HeyLead is an MCP-native autonomous LinkedIn SDR that runs inside Cursor, Claude Code, or any MCP-compatible editor. No dashboard. No web app. Just chat with your AI and say "find me leads."
---
## Getting Started
MCP (Model Context Protocol) lets AI assistants use external tools. HeyLead gives your AI the ability to do LinkedIn outreach for you.
**You need:** [Cursor](https://cursor.com) or [Claude Code](https://docs.anthropic.com/en/docs/claude-code) — any MCP-compatible AI editor.
### Step 1: Install HeyLead
[**Install in Cursor**](cursor://anysphere.cursor-deeplink/mcp/install?name=heylead&config=eyJjb21tYW5kIjoidXZ4IiwiYXJncyI6WyJoZXlsZWFkIl19)
Click the link above and Cursor installs it automatically.
<details>
<summary>Manual install or using another editor?</summary>
**Cursor (manual):** Settings > MCP > "Add new MCP server" > Name: `heylead`, Command: `uvx heylead`
**Claude Code:** `claude mcp add heylead -- uvx heylead`
</details>
### Step 2: Set up your account
Open your AI chat and say:
> **"Set up my HeyLead profile"**
You'll get a link — sign in with Google, connect LinkedIn, copy your token, paste it back. ~1 minute, no API keys needed.
### Step 3: Find leads
```
"Find me CTOs at fintech startups in New York"
"Send outreach to the campaign"
"Check my replies"
"How's my outreach doing?"
```
---
## How It Works
1. **Define your ICP** — "Generate an ICP for AI SaaS founders" → RAG-powered personas with pain points, barriers, and LinkedIn targeting
2. **Create a campaign** — "Find me fintech CTOs" → searches LinkedIn, scores prospects by fit
3. **Warm up prospects** — Engages with their posts (comments, likes) before reaching out
4. **Send personalized invitations** — Voice-matched messages that sound like you, not a bot
5. **Follow up automatically** — Multi-touch sequences after connections are accepted
6. **Handle replies** — Detects sentiment, advances positive leads toward meetings, answers questions
7. **Track outcomes** — Won/lost/opted-out tracking with conversion analytics
**Two modes:**
- **Copilot** (default) — review every message before it sends
- **Autopilot** — AI handles outreach within your rate limits and working hours
---
## Tools
HeyLead gives your AI 29 specialized tools:
### Core Workflow
| Tool | What it does |
|------|-------------|
| `setup_profile` | Connects LinkedIn and creates your voice signature |
| `generate_icp` | Creates rich Ideal Customer Profiles with buyer personas, pain points, and LinkedIn targeting |
| `create_campaign` | Finds prospects matching your target and builds a campaign |
| `generate_and_send` | Writes personalized invitations and sends (or queues for review) |
| `check_replies` | Checks for new replies, classifies sentiment, surfaces hot leads |
| `show_status` | Your dashboard — campaigns, stats, hot leads, account health |
### Multi-Touch Outreach
| Tool | What it does |
|------|-------------|
| `send_followup` | Sends follow-up DMs after a connection is accepted |
| `reply_to_prospect` | Auto-replies adapting to sentiment — advance meetings, answer questions, or gracefully close |
| `engage_prospect` | Comments on or reacts to a prospect's LinkedIn posts for warm-up |
### Copilot Review
| Tool | What it does |
|------|-------------|
| `approve_outreach` | Approve, edit, skip, or stop a proposed message |
| `suggest_next_action` | AI recommends what to do next, prioritized by impact |
| `show_conversation` | View the full message thread with a prospect |
### Campaign Management
| Tool | What it does |
|------|-------------|
| `edit_campaign` | Update name, mode, booking link, offerings, or messaging preferences |
| `pause_campaign` | Pause outreach on a campaign |
| `resume_campaign` | Resume a paused campaign |
| `archive_campaign` | Mark a campaign as completed |
| `skip_prospect` | Remove a bad-fit prospect from the queue |
| `retry_failed` | Retry outreaches that failed with errors |
| `emergency_stop` | Immediately pause all active campaigns |
### Analytics
| Tool | What it does |
|------|-------------|
| `campaign_report` | Detailed analytics — funnel, outcomes, stale leads, engagement ROI |
| `export_campaign` | Export campaign results as a formatted table |
| `compare_campaigns` | Side-by-side comparison of multiple campaigns |
| `close_outreach` | Record outcome — won, lost, or opted out |
### Automation
| Tool | What it does |
|------|-------------|
| `toggle_scheduler` | Enable autonomous outreach (local or cloud 24/7) |
| `scheduler_status` | View scheduler state, pending jobs, recent activity |
### Account
| Tool | What it does |
|------|-------------|
| `list_linkedin_accounts` | List connected LinkedIn accounts (read-only) |
| `switch_account` | List accounts and switch flow |
| `switch_account_to` | Switch to a different LinkedIn account |
| `unlink_account` | Disconnect LinkedIn and clear local account |
---
## Key Features
**Voice Matching** — Analyzes your LinkedIn profile and posts to capture your writing style. Every message sounds like you wrote it.
**ICP Generation** — RAG-powered pipeline that crawls company context, generates buyer personas with pain points, fears, barriers, and maps them to LinkedIn search parameters.
**Autonomous Scheduler** — Runs in the background, respects working hours and rate limits. Enable cloud scheduling for 24/7 operation even when your laptop is off.
**Engagement Warm-ups** — Automatically engages with prospect posts before sending connection requests, building familiarity.
**Adaptive Rate Limiting** — Starts conservative, ramps up when acceptance rate is high, pulls back when it drops. Respects LinkedIn safety limits.
**Outcome Tracking** — Mark deals as won/lost, track conversion rates, identify stale leads, measure engagement ROI.
---
## Pricing
| Plan | Price | What you get |
|------|-------|-------------|
| **Free** | $0 | 50 invitations/month, 1 campaign, 2 follow-ups per prospect, 30 engagements/month |
| **Pro** | $29/mo | Unlimited campaigns, 5 follow-ups with multi-day schedule, 5 LinkedIn accounts, cloud scheduler |
---
## Privacy
Your data stays on your machine:
- Contacts and messages — local SQLite database
- AI calls — routed through HeyLead's backend (Gemini 2.0 Flash) or your own key
- We don't store your messages or contacts on our servers
> **Power users:** Pass your own LLM key (Gemini/Claude/OpenAI) during setup to use your own AI. Completely optional.
---
## Backend mode & env
When the MCP client talks to a HeyLead backend (e.g. `heylead-api`), the backend uses these environment variables. Operators running their own backend should set them as required.
| Purpose | Example env vars |
|--------|-------------------|
| LLM | `GEMINI_API_KEY`, or `OPENAI_API_KEY` / `ANTHROPIC_API_KEY` if using other providers |
| Search / crawl | `SERPER_API_KEY`, `FIRECRAWL_API_KEY` (or similar) for ICP and company context |
| Auth / storage | `GOOGLE_*` (OAuth), `UNIPILE_*` (LinkedIn provider), plus DB/Redis if used |
| Optional | Feature flags, rate limits, logging — see backend repo |
For full backend configuration and deployment, see the **heylead-api** (or backend) repo and its docs.
---
## Optional Dependencies
The base install covers all core features. For advanced ICP generation:
```bash
pip install heylead[icp] # Embeddings for RAG-powered ICP generation
pip install heylead[crawl] # Web crawling for company context ingestion
pip install heylead[all] # Both
```
---
## Troubleshooting
**"uvx: command not found"**
Install `uv` first: `curl -LsSf https://astral.sh/uv/install.sh | sh` (or `brew install uv` on Mac)
**"MCP server not connecting"**
Restart your editor after adding the MCP server. In Cursor, check Settings > MCP — the server should show a green dot.
**"Setup failed" or "LinkedIn not connected"**
Make sure you clicked "Connect LinkedIn Now" on the sign-in page and completed the LinkedIn login. Then run setup again.
**Need help?** Open an [issue](https://github.com/D4umak/heylead/issues).
---
## Publishing to PyPI (maintainers)
To make HeyLead available on PyPI (or to publish a new version):
### Option A: Publish via GitHub Release (recommended)
1. **One-time:** Create a [PyPI account](https://pypi.org/account/register/) and an [API token](https://pypi.org/manage/account/token/). In your repo: **Settings → Secrets and variables → Actions** → add secret `PYPI_TOKEN` with the token value.
2. Bump version in `pyproject.toml` (`version = "0.2.4"`).
3. Commit, push, then create a **GitHub Release** (tag e.g. `v0.2.4`, release title optional). The workflow [`.github/workflows/publish.yml`](.github/workflows/publish.yml) runs on release and publishes to PyPI.
### Option B: Publish manually
```bash
pip install build twine
python -m build # creates dist/
twine check dist/* # optional: validate
twine upload dist/* # prompts for PyPI username + password (use __token__ and your API token)
```
After publishing, anyone can install with `pip install heylead` or run with `uvx heylead`.
---
## Links
- [PyPI](https://pypi.org/project/heylead/)
- [Issues](https://github.com/D4umak/heylead/issues)
## License
MIT (code) — see [LICENSE](LICENSE)
Knowledge base and prompt configurations are proprietary.
| text/markdown | null | HeyLead <hello@heylead.dev> | null | null | null | ai, leads, linkedin, mcp, outreach, sales, sdr | [
"Development Status :: 4 - Beta",
"Intended Audience :: Developers",
"License :: OSI Approved :: MIT License",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programming Language :: P... | [] | null | null | >=3.10 | [] | [] | [] | [
"click>=8.0.0",
"cryptography>=41.0.0",
"httpx>=0.24.0",
"mcp<2.0.0,>=1.2.0",
"firecrawl-py>=0.0.1; extra == \"all\"",
"numpy>=1.24.0; extra == \"all\"",
"sentence-transformers>=2.2.0; extra == \"all\"",
"firecrawl-py>=0.0.1; extra == \"crawl\"",
"numpy>=1.24.0; extra == \"icp\"",
"sentence-transf... | [] | [] | [] | [
"Homepage, https://github.com/D4umak/heylead",
"Repository, https://github.com/D4umak/heylead",
"Issues, https://github.com/D4umak/heylead/issues"
] | twine/6.2.0 CPython/3.12.12 | 2026-02-19T20:23:56.629601 | heylead-0.3.0.tar.gz | 210,321 | f0/f0/fcf5f1be9f2834a8850c9e58e2bd1e8d127e02843a5b97ea9c69417ded43/heylead-0.3.0.tar.gz | source | sdist | null | false | e1ffc1e7804974a0c41647403d671fb9 | 77c76b62f1a249a118d83053194d0ebaa44e33f901d437eaad3f17000588cc4d | f0f0fcf5f1be9f2834a8850c9e58e2bd1e8d127e02843a5b97ea9c69417ded43 | MIT | [
"LICENSE"
] | 210 |
2.1 | c2cwsgiutils | 6.1.10.dev24 | Common utilities for Camptocamp WSGI applications | # Camptocamp WSGI utilities
This is a Python 3 library providing common tools for Camptocamp WSGI
applications:
- Provide prometheus metrics
- Allow the use of a master/slave PostgreSQL configuration
- Logging handler for CEE/UDP logs
- An optional view to change runtime the log levels
- SQL profiler to debug DB performance problems, disabled by default. Warning: it will slow down everything.
- A view to get the version information about the application and the installed packages
- A framework for implementing a health_check service
- Error handlers to send JSON messages to the client in case of error
- A Cornice service drop-in replacement for setting up CORS
Also provide tools for writing acceptance tests:
- A class that can be used from a py.test fixture to control a composition
- A class that can be used from a py.test fixture to test a REST API
As an example on how to use it in an application provided by a Docker image, you can look at the
test application in [acceptance_tests/app](acceptance_tests/app).
To see how to test such an application, look at [acceptance_tests/tests](acceptance_tests/tests).
## Install
### Custom Docker image (from PYPI library)
This is not a minimal install of c2cwsgiutils; it puts in place everything needed to
monitor the application in integration and production environments.
The library is available on PyPI:
[https://pypi.python.org/pypi/c2cwsgiutils](https://pypi.python.org/pypi/c2cwsgiutils)
Copy and adapt these template configuration files into your project:
- [production.ini](acceptance_tests/app/production.ini);
- [gunicorn.conf.py](acceptance_tests/app/gunicorn.conf.py).
Then replace `c2cwsgiutils_app` with your package name.
You should install `c2cwsgiutils` with the tool you use to manage your pip dependencies.
In the `Dockerfile` you should add the following lines:
```dockerfile
# Generate the version file.
RUN c2cwsgiutils-genversion $(git rev-parse HEAD)
CMD ["gunicorn", "--paste=/app/production.ini"]
# Default values for the environment variables
ENV \
DEVELOPMENT=0 \
SQLALCHEMY_POOL_RECYCLE=30 \
SQLALCHEMY_POOL_SIZE=5 \
SQLALCHEMY_MAX_OVERFLOW=25 \
SQLALCHEMY_SLAVE_POOL_RECYCLE=30 \
SQLALCHEMY_SLAVE_POOL_SIZE=5 \
SQLALCHEMY_SLAVE_MAX_OVERFLOW=25 \
LOG_TYPE=console \
OTHER_LOG_LEVEL=WARNING \
GUNICORN_LOG_LEVEL=WARNING \
SQL_LOG_LEVEL=WARNING \
C2CWSGIUTILS_LOG_LEVEL=WARNING \
LOG_LEVEL=INFO
```
Add this to your `main` function:
```python
config.include("c2cwsgiutils.pyramid")
dbsession = c2cwsgiutils.db.init(config, "sqlalchemy", "sqlalchemy_slave")
config.scan(...)
# Initialize the health checks
health_check = c2cwsgiutils.health_check.HealthCheck(config)
health_check.add_db_session_check(dbsession)
health_check.add_alembic_check(dbsession, "/app/alembic.ini", 1)
```
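Custom checks can be registered as well; a sketch follows (the `add_custom_check` signature shown here is an assumption to verify against the module's docstrings):
```python
import smtplib

def _check_smtp(request):
    # Raise to mark the check as failing; a normal return means healthy.
    with smtplib.SMTP("smtp.example.org", timeout=5) as smtp:  # placeholder host
        smtp.noop()

health_check.add_custom_check(name="smtp", check_cb=_check_smtp, level=2)
```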
The related environment variables:
- `DEVELOPMENT`: Set to `1` to enable the development mode, default is `0`.
- `SQLALCHEMY_URL`: The SQLAlchemy URL, like `postgresql://user:password@host:port/dbname`.
- `SQLALCHEMY_POOL_RECYCLE`: The SQLAlchemy pool recycle, default is `30`.
- `SQLALCHEMY_POOL_SIZE`: The SQLAlchemy pool size, default is `5`.
- `SQLALCHEMY_MAX_OVERFLOW`: The SQLAlchemy max overflow, default is `25`.
- `SQLALCHEMY_SLAVE_URL`: The SQLAlchemy slave (read-only) URL, like `postgresql://user:password@host:port/dbname`.
- `SQLALCHEMY_SLAVE_POOL_RECYCLE`: The SQLAlchemy slave pool recycle, default is `30`.
- `SQLALCHEMY_SLAVE_POOL_SIZE`: The SQLAlchemy slave pool size, default is `5`.
- `SQLALCHEMY_SLAVE_MAX_OVERFLOW`: The SQLAlchemy slave max overflow, default is `25`.
- `GUNICORN_WORKERS`: The number of workers, default is `2`.
- `GUNICORN_THREADS`: The number of threads per worker, default is `10`.
- `LOG_TYPE`: The type of logs, default is `console`; should be `json` on Kubernetes to work well with
  [ELK](https://www.elastic.co/fr/what-is/elk-stack).
- `LOG_LEVEL`: The application log level, default is `INFO`.
- `SQL_LOG_LEVEL`: The SQL query log level: `WARNING`: no logs, `INFO`: logs the queries,
  `DEBUG`: also logs the results; default is `WARNING`.
- `GUNICORN_ERROR_LOG_LEVEL`: The Gunicorn error log level, default is `WARNING`.
- `C2CWSGIUTILS_CONFIG`: The fallback ini file used by Gunicorn, default is `production.ini`.
- `C2CWSGIUTILS_LOG_LEVEL`: The c2cwsgiutils log level, default is `WARNING`.
- `OTHER_LOG_LEVEL`: The log level for all the other loggers, default is `WARNING`.

These environment variables can be useful when investigating production environments.
### Docker (deprecated)
Or (deprecated) as a base Docker image:
[camptocamp/c2cwsgiutils:release_5](https://hub.docker.com/r/camptocamp/c2cwsgiutils/) or
[ghcr.io/camptocamp/c2cwsgiutils:release_5](https://github.com/orgs/camptocamp/packages/container/package/c2cwsgiutils)
If you need an image with a smaller footprint, use the tags with the `-light` suffix. Those come without
GDAL and without the build tools.
We deprecate the Docker image because:

- Projects want to choose their own base image.
- Projects pin different versions of the dependencies.
## General config
In general, configuration can be done either with environment variables (which take precedence) or with
entries in the `production.ini` file.
You can configure the base URL for accessing the views provided by c2cwsgiutils with an environment variable
named `C2C_BASE_PATH` or in the `production.ini` file with a property named `c2c.base_path`.
A few REST APIs are added and can be seen with this URL:
`{C2C_BASE_PATH}`.
Some APIs are protected by a secret. This secret is specified in the `C2C_SECRET` variable or the `c2c.secret`
property. It is passed either as the `secret` query parameter or as the `X-API-Key` header. Once
accessed with a valid secret, a cookie is stored and the secret can be omitted.
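For example, calling a protected endpoint with the `requests` library (a sketch; the base path and secret are placeholders):

```python
import requests

BASE = "https://app.example.com/c2c"  # value of C2C_BASE_PATH (placeholder)

# Pass the secret as a query parameter...
requests.get(f"{BASE}/sql_profiler", params={"secret": "changeme"}, timeout=10)

# ...or as the X-API-Key header. After a successful call a cookie is stored,
# so the secret can be omitted for subsequent requests in the same session.
session = requests.Session()
session.get(f"{BASE}/sql_profiler", headers={"X-API-Key": "changeme"}, timeout=10)
```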
An alternative to using `C2C_SECRET` is GitHub authentication:
[create a GitHub application](https://github.com/settings/applications/new).
The user is then redirected to the GitHub authentication form if not already authenticated
(using `C2C_AUTH_GITHUB_CLIENT_ID`, `C2C_AUTH_GITHUB_CLIENT_SECRET` and `C2C_AUTH_GITHUB_SCOPE`).
We then check whether the user is allowed to access the application, by checking that the user
has sufficient rights on a GitHub repository (using `C2C_AUTH_GITHUB_REPOSITORY`
and `C2C_AUTH_GITHUB_REPOSITORY_ACCESS_TYPE`).
Finally, we store the session information in an encrypted cookie (using `C2C_AUTH_SECRET`
and `C2C_AUTH_COOKIE`).

Configure the JSON renderers with the `C2C_JSON_PRETTY_PRINT` and `C2C_JSON_SORT_KEYS` environment
variables or the `c2c.json.pretty_print` and `c2c.json.sort_keys` properties. Default is `false`.
Configuration details:

- `C2C_AUTH_GITHUB_REPOSITORY` (config key `c2c.auth.github.repository`): the related GitHub repository (required).
- `C2C_AUTH_GITHUB_ACCESS_TYPE` (`c2c.auth.github.access_type`): the type of required access; can be `pull`, `push` or `admin` (default is `push`).
- `C2C_AUTH_GITHUB_CLIENT_ID` (`c2c.auth.github.client_id`): the GitHub application ID (required).
- `C2C_AUTH_GITHUB_CLIENT_SECRET` (`c2c.auth.github.client_secret`): the GitHub application secret (required).
- `C2C_AUTH_GITHUB_SCOPE` (`c2c.auth.github.scope`): the GitHub scope (default is `repo`), see the [GitHub documentation](https://developer.github.com/apps/building-oauth-apps/understanding-scopes-for-oauth-apps/).
- `C2C_AUTH_GITHUB_SECRET` (`c2c.auth.github.auth.secret`): the secret used for JWT encryption (required, at least 16 characters long).
- `C2C_AUTH_GITHUB_COOKIE` (`c2c.auth.github.auth.cookie`): the cookie name (default is `c2c-auth-jwt`).
- `C2C_AUTH_GITHUB_AUTH_URL` (`c2c.auth.github.auth_url`): the GitHub authorization URL (default is `https://github.com/login/oauth/authorize`).
- `C2C_AUTH_GITHUB_TOKEN_URL` (`c2c.auth.github.token_url`): the GitHub token URL (default is `https://github.com/login/oauth/access_token`).
- `C2C_AUTH_GITHUB_USER_URL` (`c2c.auth.github.user_url`): the GitHub user API URL (default is `https://api.github.com/user`).
- `C2C_AUTH_GITHUB_REPO_URL` (`c2c.auth.github.repo_url`): the GitHub repository API URL (default is `https://api.github.com/repo`).
- `C2C_AUTH_GITHUB_PROXY_URL` (`c2c.auth.github.auth.proxy_url`): a redirect proxy between GitHub and the application, to be able to share one OAuth2 application on GitHub (default is no proxy). Made to work with [this proxy](https://github.com/camptocamp/redirect/).
- `C2C_USE_SESSION` (`c2c.use_session`): whether to use a session. Currently, the session is only used to store a state to prevent CSRF during the OAuth2 login (default is `false`).
## Pyramid
All the environment variables can be used in the configuration file with the `%(ENV_NAME)s` syntax.
To enable most of the features of c2cwsgiutils, you need to add this line to your WSGI main:
```python
import c2cwsgiutils.pyramid
config.include(c2cwsgiutils.pyramid.includeme)
```
Error catching views will be put in place to return errors as JSON.
A custom loader is provided to run pyramid scripts against configuration files containing environment variables:
```shell
proutes c2c://production.ini # relative path
proutes c2c:///app/production.ini # absolute path
```
A filter is automatically installed to handle the HTTP headers set by common proxies so that the request
object carries correct values (`request.client_addr`, for example). This filter is equivalent to what
`PasteDeploy#prefix` does (minus the prefix part), but supports newer headers as well (`Forwarded`).
If you need to prefix your routes, you can use the `route_prefix` parameter of the `Configurator` constructor.
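For example (standard Pyramid; shown here as a sketch):

```python
from pyramid.config import Configurator

def main(global_config, **settings):
    # Serve all of the application's routes under /api.
    config = Configurator(settings=settings, route_prefix="/api")
    config.include("c2cwsgiutils.pyramid")
    return config.make_wsgi_app()
```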
## Logging
Two new logging backends are provided:
- `c2cwsgiutils.pyramid_logging.PyramidCeeSysLogHandler`: to send @cee formatted logs to syslog through UDP.
- `c2cwsgiutils.pyramid_logging.JsonLogHandler`: to output (on stdout or stderr) JSON formatted logs.
Look at the logging configuration part of
[acceptance_tests/app/production.ini](acceptance_tests/app/production.ini) for Paste and command-line usage.
The logging configuration is imported automatically by Gunicorn; you can visualize the dict config by setting the environment variable `DEBUG_LOGCONFIG=1`.
You can enable a view to configure the logging level on a live system using the `C2C_LOG_VIEW_ENABLED` environment
variable. Then, the current status of a logger can be queried with a GET on
`{C2C_BASE_PATH}/logging/level?secret={C2C_SECRET}&name={logger_name}` and can be changed with
`{C2C_BASE_PATH}/logging/level?secret={C2C_SECRET}&name={logger_name}&level={level}`. Overrides are stored in
Redis, if `C2C_REDIS_URL` (`c2c.redis_url`) or `C2C_REDIS_SENTINELS` is configured.
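For example, with `requests` (a sketch; host, secret and logger name are placeholders):

```python
import requests

BASE = "https://app.example.com/c2c"  # C2C_BASE_PATH (placeholder)
params = {"secret": "changeme", "name": "c2cwsgiutils_app.services"}

# Query the current level of a logger...
print(requests.get(f"{BASE}/logging/level", params=params, timeout=10).json())

# ...and override it (persisted in Redis, if configured).
print(requests.get(f"{BASE}/logging/level", params={**params, "level": "DEBUG"}, timeout=10).json())
```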
## Database maintenance
You can enable a view to force usage of the slave engine using the `C2C_DB_MAINTENANCE_VIEW_ENABLED` environment
variable. Then, the database can be made "readonly" with
`{C2C_BASE_PATH}/db/maintenance?secret={C2C_SECRET}&readonly=true`.
The current state is stored in Redis, if `C2C_REDIS_URL` (`c2c.redis_url`) or `C2C_REDIS_SENTINELS` is configured.
### Request tracking
In order to follow the logs generated by a request across all the services (think separate processes),
c2cwsgiutils tries to flag everything with a request ID. This field can come from the input as request headers
(`X-Request-ID`, `X-Correlation-ID`, `Request-ID` or `X-Varnish`) or will default to a UUID. You can add an
additional request header as a source by defining the `C2C_REQUEST_ID_HEADER` environment variable
(`c2c.request_id_header`).
In JSON logging formats, a `request_id` field is automatically added.
You can also enable flagging of the SQL requests (disabled by default since it can have a cost) by
setting the `C2C_SQL_REQUEST_ID` environment variable (or `c2c.sql_request_id` in the `.ini` file). This uses
the application name to pass along the request ID. If you do that, you must include the application name in
the PostgreSQL logs by setting `log_line_prefix` to something like `"%a "` (don't forget the space).
Then, in your application, it is recommended to transmit the request ID to the external REST APIs. Use
the `X-Request-ID` HTTP header, for example. The value of the request ID is accessible through an added
`c2c_request_id` attribute on the Pyramid Request objects. The `requests` module is patched to automatically
add this header.
The `requests` module is also patched to monitor requests done without a timeout. You can
configure a default timeout with the `C2C_REQUESTS_DEFAULT_TIMEOUT` environment variable
(`c2c.requests_default_timeout`). If no timeout and no default are specified, a warning is issued.
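For example, forwarding the request ID manually (a sketch; the patched `requests` would normally do this for you, and the backend URL is a placeholder):

```python
import requests

def call_backend(request):
    # request.c2c_request_id is the attribute added by c2cwsgiutils on Pyramid requests.
    return requests.get(
        "https://backend.example.com/api/hello",  # placeholder URL
        headers={"X-Request-ID": request.c2c_request_id},
        timeout=10,  # always set a timeout, or C2C_REQUESTS_DEFAULT_TIMEOUT applies
    )
```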
## SQL profiler
The SQL profiler must be configured with the `C2C_SQL_PROFILER_ENABLED` environment variable. That enables a view
to query the status of the profiler (`{C2C_BASE_PATH}/sql_profiler?secret={C2C_SECRET}`) or to
enable/disable it (`{C2C_BASE_PATH}/sql_profiler?secret={C2C_SECRET}&enable={1|0}`).
If enabled, for each `SELECT` query sent by SQLAlchemy, another query is done with `EXPLAIN ANALYZE`
prepended to it. The results are sent to the `c2cwsgiutils.sql_profiler` logger.
Don't enable this on a busy production system: it will kill your performance.
## Profiler
c2cwsgiutils provides an easy way to profile an application.
With a decorator:
```python
from c2cwsgiutils.profile import Profile

@Profile('/my_file.prof')
def my_function():
    ...
```
Or with the `with` statement:
```python
from c2cwsgiutils.profile import Profile
with Profile('/my_file.prof'):
...
```
Then open your file with SnakeViz:
```bash
docker cp container_name:/my_file.prof .
pip install --user snakeviz
snakeviz my_file.prof
```
## DB sessions
The `c2cwsgiutils.db.init` function allows you to set up a DB session that has two engines for accessing a
master/slave PostgreSQL setup. The slave engine (read-only) is used automatically for `GET` and `OPTIONS`
requests and the master engine (read-write) is used for the other queries.
To use it, your `production.ini` must look like this:
```ini
sqlalchemy.url = %(SQLALCHEMY_URL)s
sqlalchemy.pool_recycle = %(SQLALCHEMY_POOL_RECYCLE)s
sqlalchemy.pool_size = %(SQLALCHEMY_POOL_SIZE)s
sqlalchemy.max_overflow = %(SQLALCHEMY_MAX_OVERFLOW)s
sqlalchemy_slave.url = %(SQLALCHEMY_SLAVE_URL)s
sqlalchemy_slave.pool_recycle = %(SQLALCHEMY_SLAVE_POOL_RECYCLE)s
sqlalchemy_slave.pool_size = %(SQLALCHEMY_SLAVE_POOL_SIZE)s
sqlalchemy_slave.max_overflow = %(SQLALCHEMY_SLAVE_MAX_OVERFLOW)s
```
And your code that initializes the DB connection must look like this:
```python
import c2cwsgiutils.db

def main(config):
    dbsession = c2cwsgiutils.db.init(config, 'sqlalchemy', 'sqlalchemy_slave', force_slave=[
        "POST /api/hello",
    ])
```
You can use the `force_slave` and `force_master` parameters to override the defaults and force a route to use
the master or the slave engine.
## Health checks
To enable health checks, you must add some setup in your WSGI main (usually after the DB connections are
setup). For example:
```python
from c2cwsgiutils.health_check import HealthCheck

def custom_check(request):
    global not_happy
    if not_happy:
        raise Exception("I'm not happy")
    return "happy"

health_check = HealthCheck(config)
health_check.add_db_session_check(models.DBSession, at_least_one_model=models.Hello)
health_check.add_url_check('http://localhost:8080/api/hello')
health_check.add_custom_check('custom', custom_check, 2)
health_check.add_alembic_check(models.DBSession, '/app/alembic.ini', 3)
```
Then, the URL `{C2C_BASE_PATH}/health_check?max_level=3` can be used to run the health checks and get a report
looking like this (in case of error):
```json
{
"status": 500,
"successes": {
"db_engine_sqlalchemy": { "timing": 0.002 },
"db_engine_sqlalchemy_slave": { "timing": 0.003 },
"http://localhost/api/hello": { "timing": 0.01 },
"alembic_app_alembic.ini_alembic": { "timing": 0.005, "result": "4a8c1bb4e775" }
},
"failures": {
"custom": {
"message": "I'm not happy",
"timing": 0.001
}
}
}
```
The levels are:
- 0: Don't add checks at this level. This max_level is used for doing a simple ping.
- 1: Checks for anything vital for the usefulness of the service (DB, redis, ...). This is the max_level set
by default and used by load balancers to determine if the service is alive.
- \>=2: Use those at your convenience. Pingdom and co. are usually set up at `max_level=100`, so stay below that.
The URL `{C2C_BASE_PATH}/health_check?checks=<check_name>` can be used to run only some of the health
checks, given as a comma-separated list.
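For example (a sketch; host and check names are placeholders):

```python
import requests

BASE = "https://app.example.com/c2c"  # C2C_BASE_PATH (placeholder)

# Run only the vital checks, as a load balancer would (max_level=1).
requests.get(f"{BASE}/health_check", params={"max_level": 1}, timeout=10).raise_for_status()

# Run only selected checks, by name.
requests.get(f"{BASE}/health_check", params={"checks": "custom,db_engine_sqlalchemy"}, timeout=10)
```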
When you instantiate the `HealthCheck` class, two checks may be automatically enabled:
- If redis is configured, check that redis is reachable.
- If redis is configured and the version information is available, check that the version matches
across all instances.
Look at the documentation of the `c2cwsgiutils.health_check.HealthCheck` class for more information.
## SQLAlchemy models graph
A command is provided that can generate Doxygen graphs of an SQLAlchemy ORM model.
See [acceptance_tests/app/models_graph.py](acceptance_tests/app/models_graph.py) for how it's used.
## Version information
If `/app/versions.json` exists, a view is added (`{C2C_BASE_PATH}/versions.json`) to query the current
version of an app. This file is generated by calling the `c2cwsgiutils-genversion [$GIT_TAG] $GIT_HASH`
command, usually in the [Dockerfile](acceptance_tests/app/Dockerfile) of the WSGI application.
## Prometheus
The [Prometheus client](https://github.com/prometheus/client_python) is integrated in c2cwsgiutils.
It works in multi-process mode with the limitations listed in the
[`prometheus_client` documentation](https://github.com/prometheus/client_python#multiprocess-mode-eg-gunicorn).
To enable it, provide the `C2C_PROMETHEUS_PORT` environment variable.
For security reasons, this port should not be exposed.
You can customize it with the following environment variables:

- `C2C_PROMETHEUS_PREFIX`: the metric name prefix, default is `c2cwsggiutils-`.
- `C2C_PROMETHEUS_PACKAGES`: the packages that will be present in the version information, default is `c2cwsgiutils,pyramid,gunicorn,sqlalchemy`.
- `C2C_PROMETHEUS_APPLICATION_PACKAGE`: the package that will be reported in the version information as the application.
And you should add in your `gunicorn.conf.py`:
```python
from prometheus_client import multiprocess
def on_starting(server):
from c2cwsgiutils import prometheus
del server
prometheus.start()
def post_fork(server, worker):
from c2cwsgiutils import prometheus
del server, worker
prometheus.cleanup()
def child_exit(server, worker):
del server
multiprocess.mark_process_dead(worker.pid)
```
In your `Dockerfile` you should add:
```dockerfile
RUN mkdir -p /prometheus-metrics \
&& chmod a+rwx /prometheus-metrics
ENV PROMETHEUS_MULTIPROC_DIR=/prometheus-metrics
```
### Add custom metric collector
See the [official documentation](https://github.com/prometheus/client_python#custom-collectors).
For metrics related to the Unix process:
```python
from typing import List

from c2cwsgiutils import broadcast, prometheus

my_custom_collector_instance = MyCustomCollector()  # your custom collector instance

def _broadcast_collector_custom() -> List[prometheus.SerializedGauge]:
    """Serialize the gauges collected by the custom collector."""
    return prometheus.serialize_collected_data(my_custom_collector_instance)

prometheus.MULTI_PROCESS_COLLECTOR_BROADCAST_CHANNELS.append("prometheus_collector_custom")
broadcast.subscribe("c2cwsgiutils_prometheus_collect_gc", _broadcast_collector_custom)
```
For metrics related to the host, use this in `gunicorn.conf.py`:
```python
from prometheus_client import CollectorRegistry

def on_starting(server):
    from c2cwsgiutils import prometheus
    del server
    registry = CollectorRegistry()
    registry.register(MyCollector())  # MyCollector: your custom collector class
    prometheus.start(registry)
```
### Database metrics
Look at the `c2cwsgiutils-stats-db` utility if you want to generate statistics (gauges) about the
row counts.
### Usage of metrics
With c2cwsgiutils, each instance (pod) has its own metrics, so they need to be aggregated to get the metrics for the whole service: you probably need to use `sum by (<fields>) (<metric>)` to get a usable metric (especially for counters).
## Custom scripts
To have the application initialized in a script, use the
`c2cwsgiutils.setup_process.bootstrap_application_from_options` function.
Example of `main` function:
```python
import argparse

import c2cwsgiutils.setup_process

def main() -> None:
    parser = argparse.ArgumentParser(description="My script.")
    # Add your arguments here
    c2cwsgiutils.setup_process.fill_arguments(parser)
    args = parser.parse_args()
    env = c2cwsgiutils.setup_process.bootstrap_application_from_options(args)
    settings = env["registry"].settings
    # Add your code here
```
If you need access to the database, add:
```python
import transaction

import c2cwsgiutils.db

engine = c2cwsgiutils.db.get_engine(settings)
session_factory = c2cwsgiutils.db.get_session_factory(engine)
with transaction.manager:
    ...  # Add your code here
```
If you need the database connection without the application context, you can replace:
```python
env = c2cwsgiutils.setup_process.bootstrap_application_from_options(args)
settings = env["registry"].settings
```
by:
```python
import pyramid.scripts.common
from pyramid.scripts.common import parse_vars

loader = pyramid.scripts.common.get_config_loader(args.config_uri)
loader.setup_logging(parse_vars(args.config_vars) if args.config_vars else None)
settings = loader.get_settings()
```
## Debugging
To enable the debugging interface, you must set the `C2C_DEBUG_VIEW_ENABLED` environment variable. Then you can
have dumps of a few things:
- every threads' stacktrace: `{C2C_BASE_PATH}/debug/stacks?secret={C2C_SECRET}`
- memory usage: `{C2C_BASE_PATH}/debug/memory?secret={C2C_SECRET}&limit=30&analyze_type=builtins.dict&python_internals_map=false`
- object ref: `{C2C_BASE_PATH}/debug/show_refs.dot?secret={C2C_SECRET}&analyze_type=gunicorn.app.wsgiapp.WSGIApplication&analyze_id=12345&max_depth=3&too_many=10&filter=1024&no_extra_info&backrefs`
`analyze_type` and `analyze_id` should not be used together; you can use it like:
```bash
curl "<URL>" > /tmp/show_refs.dot
dot -Lg -Tpng /tmp/show_refs.dot > /tmp/show_refs.png
```
- memory increase when calling another API: `{C2C_BASE_PATH}/debug/memory_diff?path={path_info}&secret={C2C_SECRET}&limit=30&no_warmup`
- sleep the given number of seconds (to test load balancer timeouts): `{C2C_BASE_PATH}/debug/sleep?secret={C2C_SECRET}&time=60.2`
- see the HTTP headers received by WSGI: `{C2C_BASE_PATH}/debug/headers?secret={C2C_SECRET}&status=500`
- return an HTTP error: `{C2C_BASE_PATH}/debug/error?secret={C2C_SECRET}&status=500`
To ease local development, the views are automatically reloaded when files change. In addition, the filesystem is mounted via the `docker-compose.override.yaml` file. Make sure not to use such a file/mechanism in production.
### Broadcast
Some c2cwsgiutils APIs affect or query the state of the WSGI server. Since only one process (out of the 5
per server by default, times the number of servers) receives a given query, only that one will be affected.
To avoid that, you can configure c2cwsgiutils to use Redis pub/sub to broadcast those requests and collect the answers.
The impacted APIs are:
- `{C2C_BASE_PATH}/debug/stacks`
- `{C2C_BASE_PATH}/debug/memory`
- `{C2C_BASE_PATH}/logging/level`
- `{C2C_BASE_PATH}/sql_profiler`
The configuration parameters are:
- `C2C_REDIS_URL` (`c2c.redis_url`): The URL of the single Redis instance to use
- `C2C_REDIS_OPTIONS`: The Redis options, a comma-separated list of `<key>=<value>` pairs; each value is parsed as YAML
- `C2C_REDIS_SENTINELS`: The comma-separated list of Redis `host:port` sentinel instances to use
- `C2C_REDIS_SERVICENAME`: The Redis service name when using sentinels
- `C2C_REDIS_DB`: The Redis database number when using sentinels
- `C2C_BROADCAST_PREFIX` (`c2c.broadcast_prefix`): The prefix to add to the channels being used (must be
  different for 2 different services)
If not configured, only the process receiving the request is impacted.
## CORS
To have CORS-compliant views, define your views like this:
```python
from c2cwsgiutils import services

hello_service = services.create("hello", "/hello", cors_credentials=True)

@hello_service.get()
def hello_get(request):
    return {'hello': True}
```
## Exception handling
c2cwsgiutils can install exception-handling views that catch any exception raised by the
application views and transform it into a JSON response with an HTTP status corresponding to the error.
You can enable this by setting `C2C_ENABLE_EXCEPTION_HANDLING` (`c2c.enable_exception_handling`) to "1".
In development mode (`DEVELOPMENT=1`), all the details (SQL statement, stacktrace, ...) are sent to the
client. In production mode, you can still get them by sending the secret defined in `C2C_SECRET` in the query.
If you want to use pyramid_debugtoolbar, you need to disable exception handling and configure it like that:
```ini
pyramid.includes =
pyramid_debugtoolbar
debugtoolbar.enabled = true
debugtoolbar.hosts = 0.0.0.0/0
debugtoolbar.intercept_exc = debug
debugtoolbar.show_on_exc_only = true
c2c.enable_exception_handling = 0
```
## JSON pretty print
Some JSON renderers are available:
- `json`: the normal JSON renderer (default).
- `fast_json`: a faster JSON renderer using ujson.
- `cornice_json`: the normal JSON renderer wrapped around cornice CorniceRenderer.
- `cornice_fast_json`: a faster JSON renderer wrapped around cornice CorniceRenderer.
These renderers pretty-print the rendered JSON. While this adds a significant amount of whitespace, the
difference in bytes transmitted over the network is negligible thanks to gzip compression.
The `fast_json` renderer uses ujson, which is faster but doesn't offer the ability to change the rendering
of some types (the `default` parameter of `json.dumps`). This interacts badly with `papyrus` and the like.
The cornice versions should be used to avoid the "'JSON' object has no attribute 'render_errors'" error.
## Sentry integration
The stacktraces can be sent to a sentry.io service for collection. To enable it, set the `SENTRY_URL`
(`c2c.sentry_url`) to point to the project's public DSN.
A few other environment variables can be used to tune the info sent with each report:
- `SENTRY_EXCLUDES` (`c2c.sentry.excludes`): list of loggers (colon separated, without spaces) to exclude for sentry
- `GIT_HASH` (`c2c.git_hash`): will be used for the release
- `SENTRY_CLIENT_RELEASE`: If not equal to "latest", will be taken for the release instead of the GIT_HASH
- `SENTRY_CLIENT_ENVIRONMENT`: the environment (dev, int, prod, ...)
- `SENTRY_CLIENT_IGNORE_EXCEPTIONS`: comma-separated list of exceptions to ignore (defaults to `SystemExit`)
- `SENTRY_TAG_...`: to add other custom tags
- `SENTRY_LEVEL`: the minimum logging level from which events are sent to Sentry (defaults to `ERROR`)
- `SENTRY_TRACES_SAMPLE_RATE`: the fraction of events sent to Sentry for performance monitoring, between 0 and 1 (default is 0)
- `SENTRY_INTEGRATION_LOGGING`: If set to 0, the Sentry integration will not log anything (default is 1)
- `SENTRY_INTEGRATION_PYRAMID`: If set to 0, the Sentry integration with Pyramid will not be enabled (default is 1)
- `SENTRY_INTEGRATION_SQLALCHEMY`: If set to 0, the Sentry integration with SQLAlchemy will not be enabled (default is 1)
- `SENTRY_INTEGRATION_REDIS`: If set to 0, the Sentry integration with Redis will not be enabled (default is 1)
- `SENTRY_INTEGRATION_ASYNCIO`: If set to 0, the Sentry integration with asyncio will not be enabled (default is 1)
# Developer info
You will need `docker` (>=1.12.0), `docker compose` and
`make` installed on the machine to play with this project.
Check the available versions of `docker-engine` with
`apt-get policy docker-engine` and, if necessary, force-install an
up-to-date version using a command similar to
`apt-get install docker-engine=1.12.3-0~xenial`.
To lint and test everything, run the following command:
```shell
make
```
Make sure you are strict with the version numbers:
- bug-fix version change: nothing added, removed or changed in the API, and only bug-fix version
  number changes in the dependencies
- minor version change: the API must remain backward compatible, and only minor version
  number changes in the dependencies
- major version change: the API or the dependencies are not backward compatible
To make a release:
- Change the version in [setup.py](setup.py).
- Commit and push to master.
- Tag the GIT commit.
- Add the new branch name in the `.github/workflows/rebuild.yaml` and
`.github/workflows/audit.yaml` files.
## Pserve
Pserve does not set the proxy headers in the environment, so if you are behind a reverse proxy you will get
wrong values in the client information. You can force them using the environment variables
`C2CWSGIUTILS_FORCE_PROTO`, `C2CWSGIUTILS_FORCE_HOST`, `C2CWSGIUTILS_FORCE_SERVER_NAME` and
`C2CWSGIUTILS_FORCE_REMOTE_ADDR`.
## Testing
### Screenshots
To test the screenshots, you need `node` and `npm` installed; to do that, add the following lines to your `Dockerfile`:
```dockerfile
RUN --mount=type=cache,target=/var/lib/apt/lists \
--mount=type=cache,target=/var/cache,sharing=locked \
apt-get install --yes --no-install-recommends gnupg \
&& . /etc/os-release \
&& echo "deb https://deb.nodesource.com/node_18.x ${VERSION_CODENAME} main" > /etc/apt/sources.list.d/nodesource.list \
&& curl --silent https://deb.nodesource.com/gpgkey/nodesource.gpg.key | apt-key add - \
&& apt-get update \
&& apt-get install --assume-yes --no-install-recommends 'nodejs=18.*' \
libx11-6 libx11-xcb1 libxcomposite1 libxcursor1 \
libxdamage1 libxext6 libxi6 libxtst6 libnss3 libcups2 libxss1 libxrandr2 libasound2 libatk1.0-0 \
libatk-bridge2.0-0 libpangocairo-1.0-0 libgtk-3.0 libxcb-dri3-0 libgbm1 libxshmfence1
```
To run the image test, call `check_screenshot`, e.g.:
```python
import os

from c2cwsgiutils.acceptance import image

def test_screenshot(app_connection):
    image.check_screenshot(
        app_connection.base_url + "my-path",
        width=800,
        height=600,
        result_folder="results",
        expected_filename=os.path.join(os.path.dirname(__file__), "my-check.expected.png"),
    )
```
## Contributing
Install the pre-commit hooks:
```bash
pip install pre-commit
pre-commit install --allow-missing-config
```
| text/markdown | Camptocamp | info@camptocamp.com | null | null | BSD-2-Clause | geo, gis, sqlalchemy, orm, wsgi | [
"Development Status :: 5 - Production/Stable",
"Environment :: Plugins",
"Framework :: Pyramid",
"Intended Audience :: Developers",
"Intended Audience :: Information Technology",
"License :: OSI Approved :: BSD License",
"Operating System :: OS Independent",
"Programming Language :: Python",
"Progra... | [] | https://github.com/camptocamp/c2cwsgiutils | null | >=3.10 | [] | [] | [] | [
"SQLAlchemy; extra == \"standard\" or extra == \"webserver\" or extra == \"all\"",
"SQLAlchemy-Utils; extra == \"standard\" or extra == \"webserver\" or extra == \"all\"",
"alembic; extra == \"standard\" or extra == \"alembic\" or extra == \"all\"",
"boltons; extra == \"tests\" or extra == \"all\"",
"cee_sy... | [] | [] | [] | [
"Repository, https://github.com/camptocamp/c2cwsgiutils"
] | twine/4.0.2 CPython/3.11.14 | 2026-02-19T20:23:25.729987 | c2cwsgiutils-6.1.10.dev24.tar.gz | 98,315 | 66/a7/4154d28eea2d5526a8b993c808ceeefbfd3029c28c4ba4df1650ec3d7bd1/c2cwsgiutils-6.1.10.dev24.tar.gz | source | sdist | null | false | 4971059b5b4b33bc1a4758bca455d944 | 33c04e90ecc59d42b0f25c016b5c153fec4502fe7cea7b2face14f97cc7716ac | 66a74154d28eea2d5526a8b993c808ceeefbfd3029c28c4ba4df1650ec3d7bd1 | null | [] | 193 |
2.4 | rocm-docs-core | 1.32.0 | Core utilities for all ROCm documentation on RTD | # ROCm Documentation Core Utilities
ROCm Docs Core is also distributed as a pip package available from PyPi as
[rocm-docs-core](https://pypi.org/project/rocm-docs-core/)
## Purpose
This repository comprises utilities, styling, scripts, and additional HTML content common to all ROCm projects' documentation. This greatly aids in maintaining the documentation, as any change to the appearance only needs to be made in one place.
## Usage
### Setup
- Install this repository as a Python package using pip
- From PyPi: `pip install rocm-docs-core`
- From GitHub: `pip install git+https://github.com/ROCm/rocm-docs-core.git`.
- Set `rocm_docs_theme` as the HTML theme
- Add `rocm_docs` as an extension
- Optionally, add `rocm_docs.doxygen` and `sphinxcontrib.doxylink` as extensions
For an example, see the [test conf.py](./tests/sites/doxygen/extension/conf.py)
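A minimal `conf.py` following these steps might look like this (a sketch; project-specific options are omitted, and the optional Doxygen extensions are shown commented out):

```python
# conf.py -- minimal sketch based on the setup steps above.
project = "my-project"  # illustrative name

# Use the rocm-docs-core theme and extension.
html_theme = "rocm_docs_theme"
extensions = ["rocm_docs"]

# Optional Doxygen integration:
# extensions += ["rocm_docs.doxygen", "sphinxcontrib.doxylink"]
```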
### Legacy Setup
- From the `rocm_docs` package import the function `setup_rocm_docs` into `conf.py` for the ReadTheDocs project.
- Call exactly the following, replacing `<PROJECT NAME HERE>` with the name of the project.
For an example, see the [test legacy conf.py](./tests/sites/doxygen/legacy/conf.py)
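For illustration, the legacy call is roughly of this shape (a sketch; the exact signature and return values of `setup_rocm_docs` depend on the package version, so treat the linked example as authoritative):

```python
# conf.py -- legacy setup sketch.
from rocm_docs import setup_rocm_docs

setup_rocm_docs("<PROJECT NAME HERE>")
```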
## Documentation
The `rocm-docs-core` documentation is viewable at [https://rocm.docs.amd.com/projects/rocm-docs-core/en/latest/](https://rocm.docs.amd.com/projects/rocm-docs-core/en/latest/)
### User Guide
The User Guide describes how users can make use of functionality in `rocm-docs-core`
It is viewable at [https://rocm.docs.amd.com/projects/rocm-docs-core/en/latest/user_guide/user_guide.html](https://rocm.docs.amd.com/projects/rocm-docs-core/en/latest/user_guide/user_guide.html)
### Developer Guide
The Developer Guide provides additional information on the processes in toolchains for `rocm-docs-core`
It is viewable at [https://rocm.docs.amd.com/projects/rocm-docs-core/en/latest/developer_guide/developer_guide.html](https://rocm.docs.amd.com/projects/rocm-docs-core/en/latest/developer_guide/developer_guide.html)
### Build Documentation Locally
To build the `rocm-docs-core` documentation locally, run the commands below:
```bash
pip install -r requirements.txt
cd docs
python3 -m sphinx -T -E -b html -d _build/doctrees -D language=en . _build/html
```
| text/markdown | null | Lauren Wrubleski <Lauren.Wrubleski@amd.com> | null | null | null | null | [
"Programming Language :: Python :: 3",
"License :: OSI Approved :: MIT License",
"Operating System :: OS Independent"
] | [] | null | null | >=3.10 | [] | [] | [] | [
"GitPython>=3.1.30",
"PyGithub>=1.58.1",
"sphinx>=5.3.0",
"breathe>=4.34.0",
"myst-nb>=1.1.2",
"pydata-sphinx-theme>=0.15.4",
"sphinx-book-theme>=1.1.4",
"sphinx-copybutton>=0.5.1",
"sphinx-design>=0.3.0",
"sphinx_external_toc>=0.3.1",
"sphinx-notfound-page>=0.8.3",
"pyyaml>=6.0",
"fastjsons... | [] | [] | [] | [
"repository, https://github.com/ROCm/rocm-docs-core",
"documentation, https://rocm.docs.amd.com"
] | twine/6.1.0 CPython/3.12.8 | 2026-02-19T20:23:09.158825 | rocm_docs_core-1.32.0.tar.gz | 1,219,169 | 80/9f/6716ddd3e85c28806e4445d039f99469dbea229c4c4b49c1a018b3f6aa95/rocm_docs_core-1.32.0.tar.gz | source | sdist | null | false | 1042f13f23161293ebd97988ac93f822 | 9818e6be979cf85200a8a13d9585e4017fc34a665cde11dc1a73bf6144e868b4 | 809f6716ddd3e85c28806e4445d039f99469dbea229c4c4b49c1a018b3f6aa95 | null | [
"LICENSE.txt"
] | 657 |
2.4 | crds | 13.1.4 | Calibration Reference Data System, HST/JWST/Roman reference file management | ====
CRDS
====
CRDS is a package used for working with astronomical reference files for the
HST and JWST telescopes. CRDS is useful for performing various operations on
reference files or reference file assignment rules. CRDS is used to assign,
check, and compare reference files and rules, and also to predict those
datasets which should potentially be reprocessed due to changes in reference
files or assignment rules. CRDS has versioned rules which define the
assignment of references for each type and instrument configuration. CRDS has
web sites corresponding to each project (http://hst-crds.stsci.edu or
https://jwst-crds.stsci.edu/) which record information about reference files
and provide related services.
CRDS development is occurring at:
`Project's github page <https://github.com/spacetelescope/crds>`_.
CRDS is also available for installation as part of ``stenv``:
`stenv <https://github.com/spacetelescope/stenv>`_.
Basic CRDS Installation
-----------------------
For many roles, CRDS is *automatically installed as a dependency* of the
calibration software. This default installation supports running calibrations
but not more advanced CRDS activities like submitting files or development.
You can test for an existing installation of CRDS like this::
$ crds list --status
CRDS Version = '7.4.0, b7.4.0, daf308e24c8dd37e70c89012e464058861417245'
CRDS_MODE = 'auto'
CRDS_PATH = 'undefined'
CRDS_SERVER_URL = 'undefined'
Cache Locking = 'enabled, multiprocessing'
Effective Context = 'jwst_0541.pmap'
Last Synced = '2019-08-26 07:30:09.254136'
Python Executable = '/Users/homer/miniconda3/envs/crds-env/bin/python'
Python Version = '3.7.4.final.0'
Readonly Cache = False
This output indicates CRDS is installed and configured for processing onsite
using a pre-built cache of CRDS rules and references at */grp/crds/cache*.
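If CRDS is not yet configured, it is typically pointed at a server and a local
cache through environment variables. For example, assuming the JWST server and
an illustrative cache location::

    export CRDS_SERVER_URL=https://jwst-crds.stsci.edu
    export CRDS_PATH=$HOME/crds_cache
    crds list --status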
File Submission Installation
----------------------------
For performing the file submission role, CRDS includes additional dependencies
and can be trickier to install.
Adding CRDS to an Existing Environment
+++++++++++++++++++++++++++++++++++++++
You can install/upgrade CRDS and its dependencies in your current environment
like this::
git clone https://github.com/spacetelescope/crds.git
cd crds
./crds_setup_crds
It is recommended that you only do this in an environment dedicated to file
submissions. This may be suitable for e.g. installing/upgrading CRDS in
an active *redcatconda* environment.
Full Environment Install
++++++++++++++++++++++++
Sometimes it's expedient to install an entirely new environment including a
baseline conda, CRDS, and all of its dependencies.  To start from scratch,
you can::
git clone https://github.com/spacetelescope/crds.git
cd crds
./crds_setup_all
# open a new terminal window
conda activate crds-env
To customize a bit more, *crds_setup_all* and *crds_setup_env* support
parameters which can be used to specify OS, shell, and install location.
Substitute the below to specify Linux, c-shell, and a non-default install
location::
./crds_setup_all Linux csh $HOME/miniconda_crds
Advanced Install
++++++++++++++++
Below are the current sub-tasks used conceptually for a full-featured CRDS
install. These can serve as an alternative to cloning the CRDS repo and
running the install script(s). If you already have a Python environment
supporting pip, you can skip straight to steps 4 and 5.
1. Installing Conda
^^^^^^^^^^^^^^^^^^^
Alternate / definitive installation instructions for installing a baseline conda
can be found here::
https://spacetelescope.github.io/training-library/computer_setup.html#installing-conda
2. Create crds-env Environment
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
The CRDS software and basic conda dependencies should be installed in an
isolated conda environment::
conda create -n crds-env
conda activate crds-env
You can substitute the environment name of your choice, e.g. *redcatconda* vs. *crds-env*.
3. Add JWST CAL S/W and Dependencies
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Installing the JWST CAL S/W will also automatically install many dependencies of
a numerical computing environment::
pip install --upgrade numpy
pip install --upgrade git+https://github.com/spacetelescope/jwst
Note that these commands also install the latest version of CRDS from pip which
may not be current enough for ongoing reference file testing and
troubleshooting.
4. Install CRDS and Dependencies
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
This sequence first removes the CRDS installed automatically as part of
installing the *jwst* package and then installs the latest available CRDS
from github with advanced dependencies not needed for basic operation::
pip uninstall --yes crds
pip install --upgrade git+https://github.com/spacetelescope/crds.git#egg=crds["submission","test"]
A more full featured CRDS install is::
pip install --upgrade git+https://github.com/spacetelescope/crds.git#egg=crds["submission","dev","test","docs"]
5. Install Fitsverify
^^^^^^^^^^^^^^^^^^^^^
Since it is a C-based package fitsverify is not available using pip but is
available via conda on the astroconda channel::
conda config --add channels http://ssb.stsci.edu/astroconda
conda install --yes fitsverify
As part of an end-user setup, installing fitsverify is optional: CRDS
certify will run without it after issuing a warning. The CRDS server will run
fitsverify as part of its checks unless/until we stop using it altogether.
User's Guide
------------
More documentation about CRDS is available here:
https://jwst-crds.stsci.edu/static/users_guide/index.html
| text/x-rst | STScI CRDS s/w developers | null | null | null | null | null | [
"Intended Audience :: Science/Research",
"Operating System :: POSIX :: Linux",
"Operating System :: MacOS :: MacOS X",
"Programming Language :: Python :: 3",
"Topic :: Scientific/Engineering :: Astronomy"
] | [
"Linux"
] | null | null | >=3.7 | [] | [] | [] | [
"astropy",
"numpy",
"filelock",
"asdf!=3.0.0",
"requests",
"parsley",
"jwst; extra == \"jwst\"",
"roman_datamodels; extra == \"roman\"",
"bs4; extra == \"submission\"",
"ipython; extra == \"dev\"",
"jupyterlab; extra == \"dev\"",
"ansible; extra == \"dev\"",
"helm; extra == \"dev\"",
"mock... | [] | [] | [] | [
"homepage, https://github.com/spacetelescope/crds"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-19T20:22:38.099531 | crds-13.1.4.tar.gz | 14,225,903 | 5b/21/4b77bf9170178296e5648b9a52568e2d180367b2ac4261d2aba501c99c2f/crds-13.1.4.tar.gz | source | sdist | null | false | d5976fc7618bf9d791aaf45ff514294c | f1183f696805a45122def601acf8be5e6786edcf7a235dec7cfc69aaba6b637e | 5b214b77bf9170178296e5648b9a52568e2d180367b2ac4261d2aba501c99c2f | null | [
"LICENSE"
] | 2,779 |
2.4 | seamaster | 0.0.5 | Seamaster is a package which helps users write and submit code to participate in the Seawars game. | # Seamaster Bot Programming – User Guide
> Note: This library is still under development, and the contents of this README might change in the future.
Seamaster is a strategy-based bot programming platform where you design **autonomous bots** that explore, harvest, fight, and survive in a grid-based world.
> **Key Mindset**
> You do **not** control bots every turn.
> You **define strategies**, and bots follow them autonomously for their entire lifetime.
## Core Philosophy
> **A bot is born with a strategy.
> It lives with that strategy.
> It dies with that strategy.**
- You define **how a bot behaves**
- The engine decides **when that behavior runs**
- You never micromanage bots after spawning
---
## Architecture Overview
| Layer | Responsibility |
|------|----------------|
| **User (`user.py`)** | Strategy logic only |
| **BotContext** | Gives you methods to define your custom bot |
| **Helpers** | Create actions (`move`, `attack`, etc.) |
You only write **`user.py`**.
---
## Bots = Strategies
Each bot type is a **Python class**.
```python
class Forager(BotController):
def act(self):
...
```
Define your complete bot strategy here and execute!
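For instance, a trivial strategy could look like this (a sketch: `BotController` comes from the snippet above, while the `move` helper and its argument are assumptions based on the helpers table, not the definitive API):

```python
# A minimal, illustrative strategy. `move` is one of the action helpers
# mentioned above; its exact signature is an assumption.
class Wanderer(BotController):
    def act(self):
        # The same behavior runs every turn, for the bot's whole lifetime.
        return move("north")
```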
## Some Examples:
### Adding extra abilities while spawning bots and using botcontext
```python
def play(api: GameAPI):
actions = []
if api.view.bot_count < api.view.max_bots:
abilities = [
Ability.HARVEST.value,
Ability.SCOUT.value,
Ability.SPEED.value, # EXTRA ability
Ability.SELF_DESTRUCT.value, # EXTRA ability
]
if can_afford(api, abilities):
actions.append(
spawn("HeatSeeker", abilities)
)
return actions
```
### OR like this:
```python
actions.append(
spawn(
"CustomBot",
[
Ability.HARVEST.value,
Ability.SCOUT.value,
Ability.SPEED.value,
]
)
)
```
| text/markdown | null | Allen <108123012@nitt.edu>, Niharika <108124080@nitt.edu>, Dash Skndash <110124025@nitt.edu> | null | null | null | null | [
"Development Status :: 1 - Planning",
"License :: OSI Approved :: MIT License",
"Natural Language :: English",
"Operating System :: OS Independent",
"Programming Language :: Python :: 3",
"Topic :: Games/Entertainment",
"Topic :: Games/Entertainment :: Simulation"
] | [] | null | null | >=3.9 | [] | [] | [] | [
"build; extra == \"dev\"",
"pre-commit; extra == \"dev\"",
"twine; extra == \"dev\""
] | [] | [] | [] | [
"Homepage, https://github.com/delta/seamaster",
"Issues, https://github.com/delta/seamaster/issues"
] | twine/6.2.0 CPython/3.14.2 | 2026-02-19T20:22:23.148288 | seamaster-0.0.5.tar.gz | 553,341 | 1b/1a/a425a0063d629898b0a22a27047a71ffe44f316f2419c827292b9ed5cd2e/seamaster-0.0.5.tar.gz | source | sdist | null | false | 1d52e248fc0c552acaafaf83c59c2d05 | ad86fec5f00b0e2704c0f3c64fed2d479c2f6d86b15cc0b34984513d893ec32d | 1b1aa425a0063d629898b0a22a27047a71ffe44f316f2419c827292b9ed5cd2e | MIT | [
"LICENSE"
] | 201 |
2.4 | bmtool | 0.8.2.1 | BMTool | # bmtool
<div align="center">
**A comprehensive toolkit for developing computational neuroscience models with NEURON and BMTK**
[](https://github.com/cyneuro/bmtool/blob/master/LICENSE)
[](https://www.python.org/downloads/)
[](https://badge.fury.io/py/bmtool)
[](https://cyneuro.github.io/bmtool/)
[](https://github.com/astral-sh/ruff)
[Documentation](https://cyneuro.github.io/bmtool/) | [Installation](#installation) | [Features](#features) | [Contributing](CONTRIBUTING.md)
</div>
---
## Overview
BMTool is a collection of utilities designed to streamline the development, analysis, and execution of large-scale neural network models using [NEURON](https://www.neuron.yale.edu/neuron/) and the [Brain Modeling Toolkit (BMTK)](https://alleninstitute.github.io/bmtk/). Whether you're building single-cell models, developing synaptic mechanisms, or running parameter sweeps on HPC clusters, BMTool provides the tools you need.
## Features
### Single Cell Modeling
- Analyze passive membrane properties
- Current injection protocols and voltage responses
- F-I curve generation and analysis
- Impedance profile calculations
### Synapse Development
- Synaptic property tuning and validation
- Gap junction modeling and analysis
- Visualization of synaptic responses
- Parameter optimization tools
### Network Construction
- Custom connectors for complex network models
- Distance-dependent connection probabilities
- Connection matrix visualization
- Network statistics and validation
### Visualization
- Network position plotting (2D/3D)
- Connection matrices and weight distributions
- Raster plots and spike train analysis
- LFP and ECP visualization
- Power spectral density analysis
### SLURM Cluster Management
- YAML-based simulation configuration
- Automated parameter sweeps (value-based and percentage-based)
- Multi-environment support for different HPC devices
- Job monitoring and status tracking
- Microsoft Teams webhook notifications
### Analysis Tools
- Spike rate and population activity analysis
- Phase locking and spike-phase timing
- Oscillation detection with FOOOF
- Power spectral analysis
- Batch processing capabilities
## Installation
Install the latest stable release from PyPI:
```bash
pip install bmtool
```
For development installation, see the [Contributing Guide](CONTRIBUTING.md).
## Documentation
Comprehensive documentation with examples and tutorials is available at:
**[https://cyneuro.github.io/bmtool/](https://cyneuro.github.io/bmtool/)**
### Key Documentation Sections
- [SLURM Module](https://cyneuro.github.io/bmtool/modules/slurm/) - Run simulations on HPC clusters
- [Analysis Workflows](https://cyneuro.github.io/bmtool/modules/analysis/) - Process simulation results
- [Network Building](https://cyneuro.github.io/bmtool/modules/connectors/) - Construct neural networks
- [Single Cell Tools](https://cyneuro.github.io/bmtool/modules/singlecell/) - Analyze individual neurons
- [API Reference](https://cyneuro.github.io/bmtool/api/) - Complete API documentation
## Contributing
We welcome contributions from the community! To get started:
1. Read the [Contributing Guide](CONTRIBUTING.md) for setup instructions
2. Check out open [issues](https://github.com/cyneuro/bmtool/issues) or propose new features
3. Follow our code style guidelines using Ruff and pre-commit hooks
See [CONTRIBUTING.md](CONTRIBUTING.md) for detailed information on development setup, code standards, and the pull request process.
## Requirements
- Python 3.8+
- NEURON 8.2.4
- BMTK
- See [setup.py](setup.py) for complete dependency list
## License
BMTool is released under the [MIT License](LICENSE).
## Support
For questions, bug reports, or feature requests:
- 📖 Check the [documentation](https://cyneuro.github.io/bmtool/)
- 🐛 Open an [issue](https://github.com/cyneuro/bmtool/issues)
- 💬 Contact: gregglickert@mail.missouri.edu
## Acknowledgments
Developed by the Neural Engineering Laboratory at the University of Missouri.
| text/markdown | Neural Engineering Laboratory at the University of Missouri | gregglickert@mail.missouri.edu | null | null | MIT | null | [
"Intended Audience :: Developers",
"Intended Audience :: Education",
"Intended Audience :: Science/Research",
"License :: OSI Approved :: MIT License",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.6",
"Topic :: Software Development :: Libraries",
"Topic :: Software Devel... | [] | https://github.com/cyneuro/bmtool | null | >=3.8 | [] | [] | [] | [
"neuron==8.2.4",
"bmtk",
"click",
"clint",
"h5py",
"matplotlib",
"networkx",
"numpy",
"pandas",
"questionary",
"pynmodlt",
"xarray",
"fooof",
"requests",
"pyyaml",
"PyWavelets",
"numba",
"tqdm",
"ruff>=0.1.0; extra == \"dev\"",
"pyright>=1.1.0; extra == \"dev\"",
"pre-commit>... | [] | [] | [] | [] | twine/6.1.0 CPython/3.13.7 | 2026-02-19T20:21:42.953930 | bmtool-0.8.2.1.tar.gz | 201,588 | 15/87/eaa8c590182cc0939694bd51d5d585986b4e696977e464884602d77e29f8/bmtool-0.8.2.1.tar.gz | source | sdist | null | false | e1e8f5228fc775ecb34b4c6ae244537a | b7496c961ca45e3a8b1c8646657efd64cab7a6d921123038a765a8686c70498a | 1587eaa8c590182cc0939694bd51d5d585986b4e696977e464884602d77e29f8 | null | [
"LICENSE"
] | 219 |
2.1 | cdktn-provider-datadog | 13.0.0 | Prebuilt datadog Provider for CDK Terrain (cdktn) | # CDKTN prebuilt bindings for DataDog/datadog provider version 3.89.0
This repo builds and publishes the [Terraform datadog provider](https://registry.terraform.io/providers/DataDog/datadog/3.89.0/docs) bindings for [CDK Terrain](https://cdktn.io).
## Available Packages
### NPM
The npm package is available at [https://www.npmjs.com/package/@cdktn/provider-datadog](https://www.npmjs.com/package/@cdktn/provider-datadog).
`npm install @cdktn/provider-datadog`
### PyPI
The PyPI package is available at [https://pypi.org/project/cdktn-provider-datadog](https://pypi.org/project/cdktn-provider-datadog).
`pipenv install cdktn-provider-datadog`
### Nuget
The Nuget package is available at [https://www.nuget.org/packages/Io.Cdktn.Providers.Datadog](https://www.nuget.org/packages/Io.Cdktn.Providers.Datadog).
`dotnet add package Io.Cdktn.Providers.Datadog`
### Maven
The Maven package is available at [https://mvnrepository.com/artifact/io.cdktn/cdktn-provider-datadog](https://mvnrepository.com/artifact/io.cdktn/cdktn-provider-datadog).
```
<dependency>
<groupId>io.cdktn</groupId>
<artifactId>cdktn-provider-datadog</artifactId>
<version>[REPLACE WITH DESIRED VERSION]</version>
</dependency>
```
### Go
The go package is generated into the [`github.com/cdktn-io/cdktn-provider-datadog-go`](https://github.com/cdktn-io/cdktn-provider-datadog-go) package.
`go get github.com/cdktn-io/cdktn-provider-datadog-go/datadog/<version>`
Where `<version>` is the version of the prebuilt provider you would like to use, e.g. `v11`. The full module name can be found
within the [go.mod](https://github.com/cdktn-io/cdktn-provider-datadog-go/blob/main/datadog/go.mod#L1) file.
## Docs
Find auto-generated docs for this provider here:
* [Typescript](./docs/API.typescript.md)
* [Python](./docs/API.python.md)
* [Java](./docs/API.java.md)
* [C#](./docs/API.csharp.md)
* [Go](./docs/API.go.md)
You can also visit a hosted version of the documentation on [constructs.dev](https://constructs.dev/packages/@cdktn/provider-datadog).
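For a quick orientation, a stack using this provider from Python looks roughly like this (a sketch: module paths, class names and credentials follow the usual cdktn codegen conventions and are assumptions, not taken from this package's docs):

```python
# Minimal sketch of a cdktn stack using the prebuilt Datadog provider.
# Module paths, class names and credentials are illustrative assumptions.
from constructs import Construct
from cdktn import App, TerraformStack
from cdktn_provider_datadog.provider import DatadogProvider
from cdktn_provider_datadog.monitor import Monitor


class MyStack(TerraformStack):
    def __init__(self, scope: Construct, id: str) -> None:
        super().__init__(scope, id)

        # Credentials would normally come from configuration or secrets.
        DatadogProvider(self, "datadog", api_key="...", app_key="...")

        Monitor(
            self,
            "high-cpu",
            name="High CPU",
            type="metric alert",
            query="avg(last_5m):avg:system.cpu.user{*} > 90",
            message="CPU usage is high",
        )


app = App()
MyStack(app, "datadog-example")
app.synth()
```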
## Versioning
This project is explicitly not tracking the Terraform datadog provider version 1:1. In fact, it always tracks `latest` of `~> 3.0` with every release. If there are scenarios where you explicitly have to pin your provider version, you can do so by [generating the provider constructs manually](https://cdktn.io/docs/concepts/providers#import-providers).
These are the upstream dependencies:
* [CDK Terrain](https://cdktn.io) - Last official release
* [Terraform datadog provider](https://registry.terraform.io/providers/DataDog/datadog/3.89.0)
* [Terraform Engine](https://terraform.io)
If there are breaking changes (backward incompatible) in any of the above, the major version of this project will be bumped.
## Features / Issues / Bugs
Please report bugs and issues to the [CDK Terrain](https://cdktn.io) project:
* [Create bug report](https://github.com/open-constructs/cdk-terrain/issues)
* [Create feature request](https://github.com/open-constructs/cdk-terrain/issues)
## Contributing
### Projen
This is mostly based on [Projen](https://projen.io), which takes care of generating the entire repository.
### cdktn-provider-project based on Projen
There's a custom [project builder](https://github.com/cdktn-io/cdktn-provider-project) which encapsulates the common settings for all `cdktn` prebuilt providers.
### Provider Version
The provider version can be adjusted in [./.projenrc.js](./.projenrc.js).
### Repository Management
The repository is managed by [CDKTN Repository Manager](https://github.com/cdktn-io/cdktn-repository-manager/).
| text/markdown | CDK Terrain Maintainers | null | null | null | MPL-2.0 | null | [
"Intended Audience :: Developers",
"Operating System :: OS Independent",
"Programming Language :: JavaScript",
"Programming Language :: Python :: 3 :: Only",
"Programming Language :: Python :: 3.9",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Typing :: Typed",
... | [] | https://github.com/cdktn-io/cdktn-provider-datadog.git | null | ~=3.9 | [] | [] | [] | [
"cdktn<0.23.0,>=0.22.0",
"constructs<11.0.0,>=10.4.2",
"jsii<2.0.0,>=1.119.0",
"publication>=0.0.3",
"typeguard<4.3.0,>=2.13.3"
] | [] | [] | [] | [
"Source, https://github.com/cdktn-io/cdktn-provider-datadog.git"
] | twine/6.1.0 CPython/3.14.2 | 2026-02-19T20:21:39.085008 | cdktn_provider_datadog-13.0.0.tar.gz | 16,749,669 | a7/ec/4a3492298281c6b6cbc517f057cd517fb1439e51d95d49eff09b65363c65/cdktn_provider_datadog-13.0.0.tar.gz | source | sdist | null | false | ee942da24e6638fb62f0497c10fce14c | 6d98e868499a80c82faa6f8fc7b76c4ff302bf906970a468f181c0cc04dbc004 | a7ec4a3492298281c6b6cbc517f057cd517fb1439e51d95d49eff09b65363c65 | null | [] | 219 |
2.4 | pulumi-dex | 0.8.0 | A Pulumi provider for managing Dex resources via the Dex gRPC Admin API | # Pulumi Provider for Dex
A Pulumi provider for managing Dex (https://dexidp.io/) resources via the Dex gRPC Admin API. This provider allows you to manage Dex OAuth2 clients and connectors (IdPs) as infrastructure-as-code.
## Features
- **OAuth2 Client Management**: Create, update, and delete Dex OAuth2 clients
- **Generic Connector Support**: Manage any Dex connector type (OIDC, LDAP, SAML, etc.)
- **OIDC Connector Support**: First-class support for OIDC connectors with typed configuration
- **Azure/Entra ID Integration**:
- `AzureOidcConnector` - Uses generic OIDC connector (type: `oidc`)
- `AzureMicrosoftConnector` - Uses Dex's Microsoft-specific connector (type: `microsoft`)
- **AWS Cognito Integration**: `CognitoOidcConnector` for managing Cognito user pools as IdPs
- **GitLab Integration**: `GitLabConnector` for GitLab.com and self-hosted GitLab instances
- **GitHub Integration**: `GitHubConnector` for GitHub.com and GitHub Enterprise
- **Google Integration**: `GoogleConnector` for Google Workspace and Google accounts
- **Local/Builtin Connector**: `LocalConnector` for local user authentication
## Installation
### Prerequisites
- [Pulumi CLI](https://www.pulumi.com/docs/get-started/install/) installed
- Go 1.24+ (for building the provider)
- Access to a Dex instance with gRPC API enabled
### Building the Provider
```bash
# Clone the repository
git clone https://github.com/kotaicode/pulumi-dex.git
cd pulumi-dex
# Build the provider binary
go build -o bin/pulumi-resource-dex ./cmd/pulumi-resource-dex
# Install the provider locally
pulumi plugin install resource dex v0.1.0 --file bin/pulumi-resource-dex
```
### Generating Language SDKs
After building the provider, generate SDKs for your preferred language:
```bash
# Generate TypeScript SDK
pulumi package gen-sdk bin/pulumi-resource-dex --language typescript --out sdk/typescript
# Generate Go SDK
pulumi package gen-sdk bin/pulumi-resource-dex --language go --out sdk/go
# Generate Python SDK (optional)
pulumi package gen-sdk bin/pulumi-resource-dex --language python --out sdk/python
```
## Configuration
The provider requires configuration to connect to your Dex gRPC API:
```typescript
import * as fs from "fs";
import * as dex from "@kotaicode/pulumi-dex";
const provider = new dex.Provider("dex", {
host: "dex.internal:5557", // Dex gRPC host:port
// Optional: TLS configuration for mTLS
caCert: fs.readFileSync("certs/ca.crt", "utf-8"),
clientCert: fs.readFileSync("certs/client.crt", "utf-8"),
clientKey: fs.readFileSync("certs/client.key", "utf-8"),
// Or for development:
// insecureSkipVerify: true,
});
```
### Environment Variables
You can also configure the provider using environment variables:
- `DEX_HOST` - Dex gRPC host:port
- `DEX_CA_CERT` - PEM-encoded CA certificate
- `DEX_CLIENT_CERT` - PEM-encoded client certificate
- `DEX_CLIENT_KEY` - PEM-encoded client private key
- `DEX_INSECURE_SKIP_VERIFY` - Skip TLS verification (development only)
- `DEX_TIMEOUT_SECONDS` - Per-RPC timeout in seconds
## Usage Examples
### Managing an OAuth2 Client
```typescript
import * as dex from "@kotaicode/pulumi-dex";
const webClient = new dex.Client("webClient", {
clientId: "my-web-app",
name: "My Web App",
redirectUris: ["https://app.example.com/callback"],
// secret is optional - will be auto-generated if omitted
}, { provider });
export const clientSecret = webClient.secret; // Pulumi secret
```
### Azure/Entra ID Connector (Generic OIDC)
```typescript
const azureConnector = new dex.AzureOidcConnector("azure-tenant-a", {
connectorId: "azure-tenant-a",
name: "Azure AD (Tenant A)",
tenantId: "aaaaaaaa-bbbb-cccc-dddd-eeeeeeeeeeee",
clientId: "your-azure-app-client-id",
clientSecret: "your-azure-app-client-secret", // Pulumi secret
redirectUri: "https://dex.example.com/callback",
scopes: ["openid", "profile", "email", "offline_access"],
userNameSource: "preferred_username", // or "upn" or "email"
}, { provider });
```
### Azure/Entra ID Connector (Microsoft-Specific)
```typescript
const azureMsConnector = new dex.AzureMicrosoftConnector("azure-ms", {
connectorId: "azure-ms",
name: "Azure AD (Microsoft Connector)",
tenant: "common", // or "organizations" or specific tenant ID
clientId: "your-azure-app-client-id",
clientSecret: "your-azure-app-client-secret",
redirectUri: "https://dex.example.com/callback",
groups: "groups", // Optional: group claim name
}, { provider });
```
### AWS Cognito Connector
```typescript
const cognitoConnector = new dex.CognitoOidcConnector("cognito-eu", {
connectorId: "cognito-eu",
name: "Cognito (EU)",
region: "eu-central-1",
userPoolId: "eu-central-1_XXXXXXX",
clientId: "your-cognito-app-client-id",
clientSecret: "your-cognito-app-client-secret",
redirectUri: "https://dex.example.com/callback",
userNameSource: "email", // or "sub"
}, { provider });
```
### GitLab Connector
```typescript
const gitlabConnector = new dex.GitLabConnector("gitlab", {
connectorId: "gitlab",
name: "GitLab",
clientId: "your-gitlab-client-id",
clientSecret: "your-gitlab-client-secret",
redirectUri: "https://dex.example.com/callback",
baseURL: "https://gitlab.com", // Optional, defaults to https://gitlab.com
groups: ["my-group"], // Optional: groups whitelist
useLoginAsID: false, // Optional: use username as ID instead of internal ID
getGroupsPermission: false, // Optional: include group permissions in groups claim
}, { provider });
```
### GitHub Connector
```typescript
const githubConnector = new dex.GitHubConnector("github", {
connectorId: "github",
name: "GitHub",
clientId: "your-github-client-id",
clientSecret: "your-github-client-secret",
redirectUri: "https://dex.example.com/callback",
orgs: [
{ name: "my-organization" },
{
name: "my-organization-with-teams",
teams: ["red-team", "blue-team"]
}
],
teamNameField: "slug", // Optional: "name", "slug", or "both" - default: "slug"
useLoginAsID: false, // Optional: use username as ID
// For GitHub Enterprise:
// hostName: "git.example.com",
// rootCA: "/etc/dex/ca.crt",
}, { provider });
```
### Google Connector
```typescript
const googleConnector = new dex.GoogleConnector("google", {
connectorId: "google",
name: "Google",
clientId: "your-google-client-id",
clientSecret: "your-google-client-secret",
redirectUri: "https://dex.example.com/callback",
promptType: "consent", // Optional: default is "consent"
hostedDomains: ["example.com"], // Optional: domain whitelist for G Suite
groups: ["admins@example.com"], // Optional: group whitelist for G Suite
// For group fetching:
// serviceAccountFilePath: "/path/to/googleAuth.json",
// domainToAdminEmail: {
// "*": "super-user@example.com",
// "my-domain.com": "super-user@my-domain.com"
// },
}, { provider });
```
### Local/Builtin Connector
```typescript
const localConnector = new dex.LocalConnector("local", {
connectorId: "local",
name: "Local",
enabled: true, // Optional: default is true
}, { provider });
```
### Generic Connector (OIDC)
```typescript
const genericOidcConnector = new dex.Connector("github-oidc", {
connectorId: "github-oidc",
type: "oidc",
name: "GitHub OIDC",
oidcConfig: {
issuer: "https://token.actions.githubusercontent.com",
clientId: "your-github-oidc-client-id",
clientSecret: "your-secret",
redirectUri: "https://dex.example.com/callback",
scopes: ["openid", "email", "profile"],
},
}, { provider });
```
### Generic Connector (Raw JSON)
```typescript
const githubConnector = new dex.Connector("github", {
connectorId: "github",
type: "github",
name: "GitHub",
rawConfig: JSON.stringify({
clientID: "your-github-client-id",
clientSecret: "your-github-client-secret",
redirectURI: "https://dex.example.com/callback",
orgs: ["kotaicode"],
}),
}, { provider });
```
## Resources
### `dex.Client`
Manages an OAuth2 client in Dex.
**Inputs:**
- `clientId` (string, required) - Unique identifier for the client
- `name` (string, required) - Display name
- `secret` (string, optional, secret) - Client secret (auto-generated if omitted)
- `redirectUris` (string[], required) - Allowed redirect URIs
- `trustedPeers` (string[], optional) - Trusted peer client IDs
- `public` (boolean, optional) - Public (non-confidential) client
- `logoUrl` (string, optional) - Logo image URL
**Outputs:**
- `id` - Resource ID (same as clientId)
- `clientId` - The client ID
- `secret` - The client secret (Pulumi secret)
- `createdAt` - Creation timestamp
### `dex.Connector`
Manages a generic connector in Dex.
**Inputs:**
- `connectorId` (string, required) - Unique identifier
- `type` (string, required) - Connector type (e.g., "oidc", "ldap", "saml", "github")
- `name` (string, required) - Display name
- `oidcConfig` (OIDCConfig, optional) - OIDC configuration (use when type="oidc")
- `rawConfig` (string, optional) - Raw JSON configuration (for non-OIDC connectors)
**Note:** Exactly one of `oidcConfig` or `rawConfig` must be provided.
### `dex.AzureOidcConnector`
Manages an Azure AD/Entra ID connector using generic OIDC.
**Inputs:**
- `connectorId` (string, required)
- `name` (string, required)
- `tenantId` (string, required) - Azure tenant ID (UUID)
- `clientId` (string, required) - Azure app client ID
- `clientSecret` (string, required, secret) - Azure app client secret
- `redirectUri` (string, required)
- `scopes` (string[], optional) - Defaults to `["openid", "profile", "email", "offline_access"]`
- `userNameSource` (string, optional) - "preferred_username" (default), "upn", or "email"
- `extraOidc` (map, optional) - Additional OIDC config fields
### `dex.AzureMicrosoftConnector`
Manages an Azure AD/Entra ID connector using Dex's Microsoft-specific connector.
**Inputs:**
- `connectorId` (string, required)
- `name` (string, required)
- `tenant` (string, required) - "common", "organizations", or tenant ID (UUID)
- `clientId` (string, required)
- `clientSecret` (string, required, secret)
- `redirectUri` (string, required)
- `groups` (string, optional) - Group claim name (requires admin consent)
### `dex.CognitoOidcConnector`
Manages an AWS Cognito user pool connector.
**Inputs:**
- `connectorId` (string, required)
- `name` (string, required)
- `region` (string, required) - AWS region (e.g., "eu-central-1")
- `userPoolId` (string, required) - Cognito user pool ID
- `clientId` (string, required) - Cognito app client ID
- `clientSecret` (string, required, secret) - Cognito app client secret
- `redirectUri` (string, required)
- `scopes` (string[], optional) - Defaults to `["openid", "email", "profile"]`
- `userNameSource` (string, optional) - "email" (default) or "sub"
- `extraOidc` (map, optional) - Additional OIDC config fields
### `dex.GitLabConnector`
Manages a GitLab connector in Dex.
**Inputs:**
- `connectorId` (string, required)
- `name` (string, required)
- `clientId` (string, required) - GitLab application client ID
- `clientSecret` (string, required, secret) - GitLab application client secret
- `redirectUri` (string, required)
- `baseURL` (string, optional) - GitLab instance URL, defaults to `https://gitlab.com`
- `groups` (string[], optional) - Groups whitelist
- `useLoginAsID` (bool, optional) - Use username as ID instead of internal ID, default: `false`
- `getGroupsPermission` (bool, optional) - Include group permissions in groups claim, default: `false`
### `dex.GitHubConnector`
Manages a GitHub connector in Dex.
**Inputs:**
- `connectorId` (string, required)
- `name` (string, required)
- `clientId` (string, required) - GitHub OAuth app client ID
- `clientSecret` (string, required, secret) - GitHub OAuth app client secret
- `redirectUri` (string, required)
- `orgs` (GitHubOrg[], optional) - List of organizations and teams
- `name` (string, required) - Organization name
- `teams` (string[], optional) - Team names within the organization
- `loadAllGroups` (bool, optional) - Load all user orgs/teams, default: `false`
- `teamNameField` (string, optional) - "name", "slug", or "both", default: "slug"
- `useLoginAsID` (bool, optional) - Use username as ID, default: `false`
- `preferredEmailDomain` (string, optional) - Preferred email domain
- `hostName` (string, optional) - GitHub Enterprise hostname
- `rootCA` (string, optional) - Root CA certificate path for GitHub Enterprise
### `dex.GoogleConnector`
Manages a Google connector in Dex.
**Inputs:**
- `connectorId` (string, required)
- `name` (string, required)
- `clientId` (string, required) - Google OAuth client ID
- `clientSecret` (string, required, secret) - Google OAuth client secret
- `redirectUri` (string, required)
- `promptType` (string, optional) - OIDC prompt parameter, default: "consent"
- `hostedDomains` (string[], optional) - Domain whitelist for G Suite
- `groups` (string[], optional) - Group whitelist for G Suite
- `serviceAccountFilePath` (string, optional) - Service account JSON file path for group fetching
- `domainToAdminEmail` (map[string]string, optional) - Domain to admin email mapping for group fetching
### `dex.LocalConnector`
Manages a local/builtin connector in Dex.
**Inputs:**
- `connectorId` (string, required)
- `name` (string, required)
- `enabled` (bool, optional) - Whether the connector is enabled, default: `true`
**Note:** The local connector requires `enablePasswordDB: true` in Dex configuration. User management is handled separately via Dex's static passwords or gRPC API.
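As a minimal sketch, the relevant Dex configuration might look like the following; the email, hash, and user ID are placeholders, and static users themselves are managed outside this provider:
```yaml
enablePasswordDB: true
staticPasswords:
- email: "admin@example.com"
  hash: "$2a$10$replace-with-a-bcrypt-hash"
  username: "admin"
  userID: "08a8684b-db88-4b73-90a9-3cd1661f5466"
```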
## Local Development and Testing
### Running Dex Locally with Docker Compose
See `docker-compose.yml` for a local Dex setup with gRPC API enabled.
```bash
# Start Dex
docker-compose up -d
# Dex gRPC will be available at localhost:5557
# Dex web UI will be available at http://localhost:5556
```
### Example Pulumi Program
See the `examples/` directory for complete example programs.
## Dex Configuration Requirements
Your Dex instance must have the gRPC API enabled. Add this to your Dex configuration:
```yaml
grpc:
addr: 127.0.0.1:5557
tlsCert: /etc/dex/grpc.crt
tlsKey: /etc/dex/grpc.key
tlsClientCA: /etc/dex/client.crt
reflection: true
# Optional: local password DB (enable only if you use the local connector)
enablePasswordDB: false
```
And set this environment variable on the Dex process to enable connector CRUD (required for connector management):
```bash
export DEX_API_CONNECTORS_CRUD=true
```
## Security Considerations
- **Secrets**: All secrets (client secrets, TLS keys) are automatically marked as Pulumi secrets and encrypted in state
- **mTLS**: Strongly recommended for production use. Configure TLS certificates properly
- **Network**: Ensure Dex gRPC API is only accessible from trusted networks
## Dex Version Compatibility
This provider has been tested with:
- **Dex v2.4.0+** (with `DEX_API_CONNECTORS_CRUD=true`)
The provider requires:
- Dex gRPC API enabled
- `DEX_API_CONNECTORS_CRUD=true` environment variable set on Dex (required for connector CRUD operations)
For older Dex versions, connector management may not be available. Client management should work with any Dex version that exposes the gRPC API.
## Development
### Prerequisites
- Go 1.24.1+
- Pulumi CLI
- Docker and Docker Compose (for local testing)
### Building
```bash
make build
```
### Running Tests
```bash
# Unit tests
make test
# Integration tests (requires Dex running)
make dex-up
make test # Run tests with integration tag
make dex-down
```
### Code Quality
```bash
# Run linter
golangci-lint run
# Format code
go fmt ./...
```
## Contributing
Contributions are welcome! Please see our [Contributing Guide](CONTRIBUTING.md) for details.
1. Fork the repository
2. Create a feature branch
3. Make your changes
4. Add tests
5. Submit a pull request
## License
[License TBD - Add MIT or Apache 2.0]
## Support
- **GitHub Issues**: https://github.com/kotaicode/pulumi-dex/issues
- **Documentation**: https://github.com/kotaicode/pulumi-dex#readme
| text/markdown | null | null | null | null | null | category/cloud | [] | [] | null | null | >=3.9 | [] | [] | [] | [
"parver>=0.2.1",
"pulumi<4.0.0,>=3.165.0",
"semver>=2.8.1",
"typing-extensions<5,>=4.11; python_version < \"3.11\""
] | [] | [] | [] | [
"Repository, https://github.com/kotaicode/pulumi-dex"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-19T20:21:26.261399 | pulumi_dex-0.8.0.tar.gz | 29,312 | 08/32/fcbdc2985099d9e37487394bb9d1f92cb0f7cb06b1da37b23ad15974677a/pulumi_dex-0.8.0.tar.gz | source | sdist | null | false | 9a2a764a4c4810df605b1720b9aea837 | b03a1f620cb9bb170e799edeaee738d7b59fe85da03d78e20077106d28609748 | 0832fcbdc2985099d9e37487394bb9d1f92cb0f7cb06b1da37b23ad15974677a | null | [] | 201 |
2.4 | dkist-processing-dlnirsp | 1.0.3 | Science processing code for the DLNIRSP instrument on DKIST | dkist-processing-dlnirsp
========================
|codecov|
Overview
--------
The dkist-processing-dlnirsp library contains the implementation of the DLNIRSP pipelines as a collection of the
`dkist-processing-core <https://pypi.org/project/dkist-processing-core/>`_ framework and
`dkist-processing-common <https://pypi.org/project/dkist-processing-common/>`_ Tasks.
The recommended project structure separates tasks and workflows into their own packages. Keeping the workflows
in a dedicated package makes it possible to use the build_utils to test the integrity of those workflows in unit tests.
Environment Variables
---------------------
.. list-table::
:widths: 10 90
:header-rows: 1
* - Variable
- Field Info
* - LOGURU_LEVEL
- annotation=str required=False default='INFO' alias_priority=2 validation_alias='LOGURU_LEVEL' description='Log level for the application'
* - MESH_CONFIG
- annotation=dict[str, MeshService] required=False default_factory=dict alias_priority=2 validation_alias='MESH_CONFIG' description='Service mesh configuration' examples=[{'upstream_service_name': {'mesh_address': 'localhost', 'mesh_port': 6742}}]
* - RETRY_CONFIG
- annotation=RetryConfig required=False default_factory=RetryConfig description='Retry configuration for the service'
* - OTEL_SERVICE_NAME
- annotation=str required=False default='unknown-service-name' alias_priority=2 validation_alias='OTEL_SERVICE_NAME' description='Service name for OpenTelemetry'
* - DKIST_SERVICE_VERSION
- annotation=str required=False default='unknown-service-version' alias_priority=2 validation_alias='DKIST_SERVICE_VERSION' description='Service version for OpenTelemetry'
* - NOMAD_ALLOC_ID
- annotation=str required=False default='unknown-allocation-id' alias_priority=2 validation_alias='NOMAD_ALLOC_ID' description='Nomad allocation ID for OpenTelemetry'
* - NOMAD_ALLOC_NAME
- annotation=str required=False default='unknown-allocation-name' alias='NOMAD_ALLOC_NAME' alias_priority=2 description='Allocation name for the deployed container the task is running on.'
* - NOMAD_GROUP_NAME
- annotation=str required=False default='unknown-allocation-group' alias='NOMAD_GROUP_NAME' alias_priority=2 description='Allocation group for the deployed container the task is running on'
* - OTEL_EXPORTER_OTLP_TRACES_INSECURE
- annotation=bool required=False default=True description='Use insecure connection for OTLP traces'
* - OTEL_EXPORTER_OTLP_METRICS_INSECURE
- annotation=bool required=False default=True description='Use insecure connection for OTLP metrics'
* - OTEL_EXPORTER_OTLP_TRACES_ENDPOINT
- annotation=Union[str, NoneType] required=False default=None description='OTLP traces endpoint. Overrides mesh configuration' examples=['localhost:4317']
* - OTEL_EXPORTER_OTLP_METRICS_ENDPOINT
- annotation=Union[str, NoneType] required=False default=None description='OTLP metrics endpoint. Overrides mesh configuration' examples=['localhost:4317']
* - OTEL_PYTHON_DISABLED_INSTRUMENTATIONS
- annotation=list[str] required=False default_factory=list description='List of instrumentations to disable. https://opentelemetry.io/docs/zero-code/python/configuration/' examples=[['pika', 'requests']]
* - OTEL_PYTHON_FASTAPI_EXCLUDED_URLS
- annotation=str required=False default='health' description='Comma separated list of URLs to exclude from OpenTelemetry instrumentation in FastAPI.' examples=['client/.*/info,healthcheck']
* - SYSTEM_METRIC_INSTRUMENTATION_CONFIG
- annotation=Union[dict[str, bool], NoneType] required=False default=None description='Configuration for system metric instrumentation. https://opentelemetry-python-contrib.readthedocs.io/en/latest/instrumentation/system_metrics/system_metrics.html' examples=[{'system.memory.usage': ['used', 'free', 'cached'], 'system.cpu.time': ['idle', 'user', 'system', 'irq'], 'system.network.io': ['transmit', 'receive'], 'process.runtime.memory': ['rss', 'vms'], 'process.runtime.cpu.time': ['user', 'system'], 'process.runtime.context_switches': ['involuntary', 'voluntary']}]
* - ISB_USERNAME
- annotation=str required=False default='guest' description='Username for the interservice-bus.'
* - ISB_PASSWORD
- annotation=str required=False default='guest' description='Password for the interservice-bus.'
* - ISB_EXCHANGE
- annotation=str required=False default='master.direct.x' description='Exchange for the interservice-bus.'
* - ISB_QUEUE_TYPE
- annotation=str required=False default='classic' description='Queue type for the interservice-bus.' examples=['quorum', 'classic']
* - BUILD_VERSION
- annotation=str required=False default='dev' description='Fallback build version for workflow tasks.'
* - MAX_FILE_DESCRIPTORS
- annotation=int required=False default=1024 description='Maximum number of file descriptors to allow the process.'
* - GQL_AUTH_TOKEN
- annotation=Union[str, NoneType] required=False default='dev' description='The auth token for the metadata-store-api.'
* - OBJECT_STORE_ACCESS_KEY
- annotation=Union[str, NoneType] required=False default=None description='The access key for the object store.'
* - OBJECT_STORE_SECRET_KEY
- annotation=Union[str, NoneType] required=False default=None description='The secret key for the object store.'
* - OBJECT_STORE_USE_SSL
- annotation=bool required=False default=False description='Whether to use SSL for the object store connection.'
* - MULTIPART_THRESHOLD
- annotation=Union[int, NoneType] required=False default=None description='Multipart threshold for the object store.'
* - S3_CLIENT_CONFIG
- annotation=Union[dict, NoneType] required=False default=None description='S3 client configuration for the object store.'
* - S3_UPLOAD_CONFIG
- annotation=Union[dict, NoneType] required=False default=None description='S3 upload configuration for the object store.'
* - S3_DOWNLOAD_CONFIG
- annotation=Union[dict, NoneType] required=False default=None description='S3 download configuration for the object store.'
* - GLOBUS_MAX_RETRIES
- annotation=int required=False default=5 description='Max retries for transient errors on calls to the globus api.'
* - GLOBUS_INBOUND_CLIENT_CREDENTIALS
- annotation=list[GlobusClientCredential] required=False default_factory=list description='Globus client credentials for inbound transfers.' examples=[[{'client_id': 'id1', 'client_secret': 'secret1'}, {'client_id': 'id2', 'client_secret': 'secret2'}]]
* - GLOBUS_OUTBOUND_CLIENT_CREDENTIALS
- annotation=list[GlobusClientCredential] required=False default_factory=list description='Globus client credentials for outbound transfers.' examples=[[{'client_id': 'id3', 'client_secret': 'secret3'}, {'client_id': 'id4', 'client_secret': 'secret4'}]]
* - OBJECT_STORE_ENDPOINT
- annotation=Union[str, NoneType] required=False default=None description='Object store Globus Endpoint ID.'
* - SCRATCH_ENDPOINT
- annotation=Union[str, NoneType] required=False default=None description='Scratch Globus Endpoint ID.'
* - SCRATCH_BASE_PATH
- annotation=str required=False default='scratch/' description='Base path for scratch storage.'
* - SCRATCH_INVENTORY_DB_COUNT
- annotation=int required=False default=16 description='Number of databases in the scratch inventory (redis).'
* - DOCS_BASE_URL
- annotation=str required=False default='my_test_url' description='Base URL for the documentation site.'
* - FTS_ATLAS_DATA_DIR
- annotation=Union[str, NoneType] required=False default=None description='Common cached directory for downloaded FTS Atlas.'
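For local development these can be exported in the shell before launching a task; the values below are illustrative:
.. code-block:: bash
export LOGURU_LEVEL=DEBUG
export MESH_CONFIG='{"upstream_service_name": {"mesh_address": "localhost", "mesh_port": 6742}}'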
Development
-----------
.. code-block:: bash
git clone git@bitbucket.org:dkistdc/dkist-processing-dlnirsp.git
cd dkist-processing-dlnirsp
pre-commit install
pip install -e .[test]
pytest -v --cov dkist_processing_dlnirsp
Build
--------
Artifacts are built through Bitbucket Pipelines.
The pipeline can be used in other repos with a modification of the package and artifact locations
to use the names relevant to the target repo.
e.g. dkist-processing-test -> dkist-processing-vbi and dkist_processing_test -> dkist_processing_vbi
Deployment
----------
Deployment is done with `turtlebot <https://bitbucket.org/dkistdc/turtlebot/src/main/>`_ and follows
the process detailed in `dkist-processing-core <https://pypi.org/project/dkist-processing-core/>`_
Additionally, when a new release is ready to be built, the following steps need to be taken:
1. Freezing Dependencies
#########################
A new "frozen" extra is generated by the `dkist-dev-tools <https://bitbucket.org/dkistdc/dkist-dev-tools/src/main/>`_
package. If you don't have ``dkist-dev-tools`` installed, please follow the directions in that repo.
To freeze dependencies run
.. code-block:: bash
ddt freeze vX.Y.Z[rcK]
Where "vX.Y.Z[rcK]" is the version about to be released.
2. Changelog
############
When you make **any** change to this repository it **MUST** be accompanied by a changelog file.
The changelog for this repository uses the `towncrier <https://github.com/twisted/towncrier>`__ package.
Entries in the changelog for the next release are added as individual files (one per change) to the ``changelog/`` directory.
Writing a Changelog Entry
^^^^^^^^^^^^^^^^^^^^^^^^^
A changelog entry accompanying a change should be added to the ``changelog/`` directory.
The name of a file in this directory follows a specific template::
<PULL REQUEST NUMBER>.<TYPE>[.<COUNTER>].rst
The fields have the following meanings:
* ``<PULL REQUEST NUMBER>``: This is the number of the pull request, so people can jump from the changelog entry to the diff on BitBucket.
* ``<TYPE>``: This is the type of the change and must be one of the values described below.
* ``<COUNTER>``: This is an optional field, if you make more than one change of the same type you can append a counter to the subsequent changes, i.e. ``100.bugfix.rst`` and ``100.bugfix.1.rst`` for two bugfix changes in the same PR.
The list of possible types is defined in the towncrier section of ``pyproject.toml``, the types are:
* ``feature``: This change is a new code feature.
* ``bugfix``: This is a change which fixes a bug.
* ``doc``: A documentation change.
* ``removal``: A deprecation or removal of public API.
* ``misc``: Any small change which doesn't fit anywhere else, such as a change to the package infrastructure.
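For example, a one-line fragment recording a bugfix made in pull request 123 (an illustrative PR number) can be created with:
.. code-block:: bash
echo "Fixed the bug described in this PR." > changelog/123.bugfix.rst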
Rendering the Changelog at Release Time
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
When you are about to tag a release first you must run ``towncrier`` to render the changelog.
The steps for this are as follows:
* Run ``towncrier build --version vx.y.z`` using the version number you want to tag.
* Agree to have towncrier remove the fragments.
* Add and commit your changes.
* Tag the release.
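As a sketch, a typical release sequence looks like this (the rendered changelog file name may differ in this repo):
.. code-block:: bash
towncrier build --version vx.y.z
git add CHANGELOG.rst changelog/
git commit -m "Render changelog for vx.y.z"
git tag vx.y.z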
**NOTE:** If you forget to add a Changelog entry to a tagged release (either manually or automatically with ``towncrier``)
then the Bitbucket pipeline will fail. To be able to use the same tag you must delete it locally and on the remote branch:
.. code-block:: bash
# First, actually update the CHANGELOG and commit the update
git commit
# Delete tags
git tag -d vWHATEVER.THE.VERSION
git push --delete origin vWHATEVER.THE.VERSION
# Re-tag with the same version
git tag vWHATEVER.THE.VERSION
git push --tags origin main
Science Changelog
^^^^^^^^^^^^^^^^^
Whenever a release involves changes to the scientific quality of L1 data, additional changelog fragment(s) should be
created. These fragments are intended to be as verbose as is needed to accurately capture the scope of the change(s),
so feel free to use all the fancy RST you want. Science fragments are placed in the same ``changelog/`` directory
as other fragments, but are always called::
<PR NUMBER | +>.science[.<COUNTER>].rst
In the case that a single pull request encapsulates the entirety of the scientific change then the first field should
be that PR number (same as the normal CHANGELOG). If, however, there is not a simple mapping from a single PR to a scientific
change then use the character "+" instead; this will create a changelog entry with no associated PR. For example:
.. code-block:: bash
$ ls changelog/
99.bugfix.rst # This is a normal changelog fragment associated with a bugfix in PR 99
99.science.rst # Apparently that bugfix also changed the scientific results, so that PR also gets a science fragment
+.science.rst # This fragment is not associated with a PR
When it comes time to build the SCIENCE_CHANGELOG, use the ``science_towncrier.sh`` script in this repo to do so.
This script accepts all the same arguments as the default ``towncrier``. For example:
.. code-block:: bash
./science_towncrier.sh build --version vx.y.z
This will update the SCIENCE_CHANGELOG and remove any science fragments from the changelog directory.
3. Tag and Push
###############
Once all commits are in place add a git tag that will define the released version, then push the tags up to Bitbucket:
.. code-block:: bash
git tag vX.Y.Z[rcK]
git push --tags origin BRANCH
In the case of an rc, BRANCH will likely be your development branch. For full releases BRANCH should be "main".
.. |codecov| image:: https://codecov.io/bb/dkistdc/dkist-processing-dlnirsp/graph/badge.svg?token=GQFBIHIKZM
:target: https://codecov.io/bb/dkistdc/dkist-processing-dlnirsp
| text/x-rst | null | NSO / AURA <dkistdc@nso.edu> | null | null | BSD-3-Clause | null | [
"Programming Language :: Python",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.13"
] | [] | null | null | >=3.13 | [] | [] | [] | [
"dkist-processing-common==12.6.2",
"dkist-processing-math==2.2.1",
"dkist-processing-pac==3.1.1",
"dkist-header-validator==5.3.0",
"dkist-fits-specifications==4.21.0",
"dkist-spectral-lines==3.0.0",
"solar-wavelength-calibration==2.0.1",
"dkist-service-configuration==4.2.0",
"astropy==7.0.2",
"num... | [] | [] | [] | [
"Homepage, https://nso.edu/dkist/data-center/",
"Repository, https://bitbucket.org/dkistdc/dkist-processing-dlnirsp/",
"Documentation, https://docs.dkist.nso.edu/projects/dl-nirsp",
"Help, https://nso.atlassian.net/servicedesk/customer/portal/5"
] | twine/6.2.0 CPython/3.13.12 | 2026-02-19T20:21:02.551051 | dkist_processing_dlnirsp-1.0.3.tar.gz | 208,415 | 2f/60/0fda283ad5d94f834c8559178bb11b6c9389866f0bf95ea4cc3853deed4c/dkist_processing_dlnirsp-1.0.3.tar.gz | source | sdist | null | false | 1b102981b3d0b19f97173d09be5843e1 | 96668303eac240a3734c7cf1ab3c41d8feeedd851452214d4759d64deb69811c | 2f600fda283ad5d94f834c8559178bb11b6c9389866f0bf95ea4cc3853deed4c | null | [] | 447 |
2.4 | mara-client | 0.20.7 | A client for the MARA conversational agent for cheminformatics. | # mara client module
This package provides a Python interface for the MARA conversational agent for cheminformatics.
## Installation
```bash
pip install mara-client
```
## Usage
To use the MARA client, you need to have an API key. You can create one at https://mara.nanome.ai/settings/api-keys.
MARA chats are created with the client's `new_chat` method or retrieved with `get_chat`. You interact with a chat through its `prompt` method, which returns a `ChatResult` object containing MARA's response, intermediate messages such as tool runs, and any files generated during the conversation. These files can be downloaded with the chat's `files.download` method, shown below. Chats appear as conversations in the MARA web interface and can be removed with the `delete` method.
```python
from mara_client import MARAClient
API_KEY = "..."
URL = "https://mara.example.com" # optional
client = MARAClient(API_KEY, URL)
chat = client.new_chat()
# or, chat = client.get_chat("chat_id")
result = chat.prompt('Download SDF of aspirin')
print(result.response)
# The SDF file for the compound aspirin has been downloaded successfully. You can access it [here](CHEMBL25.sdf).
print(result.files)
# [ChatFile(id='...', name='CHEMBL25.sdf', size=1203, date=...)]
chat.files.download('CHEMBL25.sdf', 'aspirin.sdf')
# downloaded as aspirin.sdf in current working directory
result = chat.prompt('Calculate chem props')
print(result.response)
# The chemical properties of the compound with ChEMBL ID CHEMBL25 (aspirin) are as follows:
#
# | Property | Value |
# | --- | --- |
# | Molecular Weight (MW) | 180.159 |
# | LogP | 1.310 |
# | Total Polar Surface Area (TPSA) | 63.600 |
# | Hydrogen Bond Acceptors (HBA) | 3 |
# | Hydrogen Bond Donors (HBD) | 1 |
# | Rotatable Bonds (RB) | 2 |
chat.delete()
# remove chat from history, delete associated files and data
```
### Files
The chat object contains a `files` attribute for working with files.
```python
# Upload a file as part of a prompt
file_path = './example.sdf'
result = chat.prompt('Convert this to SMILES', files=[file_path])
# List all files
file_list = chat.files.list()
# Download a file
file_name = file_list[0].name
chat.files.download(file_name, 'output.sdf')
# Upload a file directly
file_path = './example.sdf'
file = chat.files.upload(file_path)
print(file.id)
```
### Data Tables
The chat object contains a `datatables` attribute for working with DataTables.
```python
# Create a data table from already uploaded file
csv_file = './example.csv'
datatable = chat.datatables.create(csv_file)
# List all data tables
table_list = chat.datatables.list()
# Generate a new DataTable based on Chat context
chat.datatables.generate()
# Run a prompt to update or query a datatable
dt_id = datatable.id
chat.datatables.prompt(dt_id, "Add a column with each compound's molecular weight")
# Retrieve a datatable
chat.datatables.get(dt_id)
# View datatable as a pandas Dataframe
df = datatable.dataframe
```
| text/markdown | Sam Hessenauer, Alex McNerney, Mike Rosengrant | sam@nanome.ai, alex@nanome.ai, mike.rosengrant@nanome.ai | null | null | null | null | [
"Programming Language :: Python :: 3",
"Operating System :: OS Independent"
] | [] | https://nanome.ai/mara | null | null | [] | [] | [] | [
"requests>=2.31.0",
"pandas>=2.1.4",
"pydantic>=2.7.3"
] | [] | [] | [] | [] | twine/6.2.0 CPython/3.14.3 | 2026-02-19T20:20:56.095988 | mara_client-0.20.7.tar.gz | 5,933 | 26/96/6450338af15e35cf79ac630864a45ea7057e16ecd5bcf82c392686a8e327/mara_client-0.20.7.tar.gz | source | sdist | null | false | 201968a414a4fe4895cabf8b42f6d19d | 9752e671eda098970959dccd0ceac27d66e69a873824a3ae20fb80dc856a1a64 | 26966450338af15e35cf79ac630864a45ea7057e16ecd5bcf82c392686a8e327 | null | [] | 212 |
2.4 | zombie-squirrel | 0.10.4 | Generated from aind-library-template | # ZOMBIE Squirrel
[](LICENSE)

[](https://github.com/semantic-release/semantic-release)



<img src="zombie-squirrel_logo.png" width="400" alt="Logo (image from ChatGPT)">
`zombie-squirrel` is a set of one-line functions that handle the entire process of caching and retrieving data (and metadata) from AIND data assets.
In the background, the ZOMBIE squirrel repackages data/metadata into dataframes and stores them on S3 in a flat bucket, or in memory for testing.
## Installation
```bash
pip install zombie-squirrel
```
## Usage
### Set backend
```bash
export FOREST_TYPE='S3'
```
Options are 'S3', 'MEMORY'.
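The same variable can also be set from Python, which is convenient in tests; this is a sketch and assumes `FOREST_TYPE` is read when the squirrel functions are called:
```python
import os
os.environ["FOREST_TYPE"] = "MEMORY"  # in-memory backend for testing (assumption: read at call time)
```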
### Scurry (fetch) data
```python
from zombie_squirrel import unique_project_names
project_names = unique_project_names()
```
| Function | Description | Parameters |
| -------- | ----------- | ---------- |
| `unique_project_names` | Fetch unique project names from docdb | |
| `unique_subject_ids` | Fetch unique subject IDs from docdb | |
| `asset_basics` | Fetch basic asset metadata including modalities, projects, and subject info | |
| `source_data` | Fetch source data references for derived records | |
| `raw_to_derived` | Fetch mapping of raw records to their derived records | |
| `qc` | Fetch QC dataframe for a single or multiple records | `str` or `list[str]` |
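For example, fetching QC data might look like this; it assumes `qc` is importable from the top-level package like `unique_project_names` above, and the record names are placeholders:
```python
from zombie_squirrel import qc
# A single record name returns one QC dataframe...
qc_df = qc("record-name-1")
# ...while a list fetches QC for several records at once
qc_many = qc(["record-name-1", "record-name-2"])
```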
### Hide the acorns
```python
from zombie_squirrel.sync import hide_acorns
hide_acorns()
```
| text/markdown | Allen Institute for Neural Dynamics | null | null | null | MIT | null | [
"Programming Language :: Python :: 3"
] | [] | null | null | >=3.10 | [] | [] | [] | [
"duckdb",
"pyarrow",
"boto3",
"pandas",
"aind-data-access-api[docdb]"
] | [] | [] | [] | [] | twine/6.1.0 CPython/3.13.7 | 2026-02-19T20:20:14.095111 | zombie_squirrel-0.10.4.tar.gz | 16,796 | b0/58/4345c57107d061a086878353155526d06169331a90e2c8b919b2ea04c075/zombie_squirrel-0.10.4.tar.gz | source | sdist | null | false | 23a0803ec5eb2c4c18ef1e08207c1ba0 | b8ab5c39c06ddb83a49d19f4bdc8f10a0ac64ae92c9f48fdc982e15df4c2fd08 | b0584345c57107d061a086878353155526d06169331a90e2c8b919b2ea04c075 | null | [
"LICENSE"
] | 212 |
2.4 | langxchange | 0.5.0 | AI Framework for fast integration of Private Data and LLM | # LangXChange Framework
<div align="center">




**A comprehensive Python Framework for LLM operations, vector databases, RAG implementations, database integration, MCP Service Management and local model management**
[Installation](#installation) • [Quick Start](#quick-start) • [Documentation](#modules) • [Examples](#examples) • [RAG Tutorial](#rag-tutorial)
</div>
## 🌟 Overview
LangXChange is a powerful, comprehensive Python toolkit designed to streamline LLM operations and Retrieval-Augmented Generation (RAG) implementations. It provides unified interfaces for multiple LLM providers, vector databases, document processing, database integration, and local model management.
### 🚀 Key Features
- **🤖 Multi-LLM Support**: OpenAI, Anthropic, Google GenAI, DeepSeek, Llama
- **🔍 Vector Databases**: ChromaDB, Pinecone, FAISS integration
- **📄 Document Processing**: Universal document loader with multiple formats
- **🧠 RAG Implementation**: Two-stage retrieval with cross-encoder
- **🏠 Local LLM**: Model downloading, fine-tuning, quantization
- **💾 Database Integration**: MySQL, MongoDB support
- **💰 Cost Tracking**: Built-in cost monitoring for API calls
- **⚡ Performance**: Caching, async operations, batch processing
## 📦 Installation
### Via PyPI (Recommended)
```bash
pip install langxchange
```
### Environment Variables
Create a `.env` file in your project directory:
```bash
# OpenAI
OPENAI_API_KEY=your_openai_key
# MySQL Configuration
MYSQL_HOST=localhost
MYSQL_DB=your_database
MYSQL_USER=your_username
MYSQL_PASSWORD=your_password
MYSQL_PORT=3306
MYSQL_CHARSET=utf8mb4
# ChromaDB
CHROMA_PERSIST_PATH=./chroma_db
# Vector Databases
PINECONE_API_KEY=your_pinecone_key
PINECONE_ENVIRONMENT=your_environment
# Milvus Configuration
MILVUS_HOST=localhost
MILVUS_PORT=19530
MILVUS_API_KEY=your_milvus_token # Optional for local, required for Zilliz Cloud
# Elasticsearch Configuration
ELASTICSEARCH_HOST=http://localhost:9200
```
## 🚀 Quick Start
```python
import os
from langxchange.openai_helper import EnhancedOpenAIHelper, OpenAIConfig
from langxchange.chroma_helper import EnhancedChromaHelper, ChromaConfig
from langxchange.documentloader import DocumentLoaderHelper, ChunkingStrategy
from langxchange.embeddings import EmbeddingHelper
# Set API key (or use environment variable)
os.environ["OPENAI_API_KEY"] = "your-api-key"
# Configure OpenAI with enhanced settings
openai_config = OpenAIConfig(
chat_model="gpt-4",
enable_caching=True,
enable_cost_tracking=True,
max_retries=3
)
# Initialize OpenAI client
llm = EnhancedOpenAIHelper(openai_config)
# Configure ChromaDB with enhanced settings
chroma_config = ChromaConfig(
persist_directory="./chroma_db",
batch_size=100,
progress_bar=True
)
# Initialize ChromaDB vector store
chroma = EnhancedChromaHelper(llm, chroma_config)
# Load and process documents with semantic chunking
loader = DocumentLoaderHelper(
chunking_strategy=ChunkingStrategy.SEMANTIC,
chunk_size=800,
preserve_formatting=True
)
# Process documents and store in vector database
documents = list(loader.load("document.pdf"))
chroma.insert_documents(
collection_name="my_collection",
documents=[doc.content for doc in documents],
metadatas=[doc.metadata for doc in documents],
generate_embeddings=True
)
# Query the vector database
results = chroma.query_collection(
collection_name="my_collection",
query_text="What is machine learning?",
top_k=5
)
```
## 📚 Modules
### 🤖 LLM Providers
#### OpenAI Integration (`openai_helper.py`)
```python
from langxchange.openai_helper import EnhancedOpenAIHelper, OpenAIConfig
# Configure OpenAI with enhanced settings
open_ai_config = OpenAIConfig(
chat_model="gpt-4",
enable_caching=True,
enable_cost_tracking=True,
max_retries=3,
log_level="INFO"
)
openai = EnhancedOpenAIHelper(open_ai_config)
response = openai.generate(
prompt="Explain quantum computing in simple terms.",
system_message="You are a helpful AI assistant."
)
# Cost tracking
cost = openai.get_cost_summary()
print(f"Total cost: ${cost['total_cost']:.4f}")
```
**Features:**
- ✅ API key management and validation
- ✅ Response caching for cost optimization
- ✅ Cost tracking and reporting
- ✅ Support for all OpenAI models (GPT-3.5, GPT-4, embeddings)
- ✅ Batch processing capabilities
- ✅ Error handling and retry logic
#### Anthropic Integration (`anthropic_helper.py`)
```python
import os
from langxchange.anthropic_helper import EnhancedAnthropicHelper, AnthropicConfig
# Set API key (or use environment variable)
os.environ["ANTHROPIC_API_KEY"] = "your-anthropic-key"
# Configure Anthropic with enhanced settings
anthropic_config = AnthropicConfig(
model="claude-3-sonnet-20240229",
enable_caching=True,
enable_cost_tracking=True,
max_retries=3,
log_level="INFO"
)
# Initialize Anthropic client
anthropic = EnhancedAnthropicHelper(anthropic_config)
response = anthropic.generate(
prompt="Analyze the following text for sentiment and key themes.",
max_tokens=500,
system_message="You are a helpful AI assistant."
)
# Cost tracking
cost = anthropic.get_cost_summary()
print(f"Total cost: ${cost['total_cost']:.4f}")
```
**Features:**
- ✅ Claude model support (Haiku, Sonnet, Opus)
- ✅ Enhanced configuration and caching
- ✅ Cost tracking and reporting
- ✅ Context window optimization
- ✅ Token counting and cost estimation
- ✅ Streaming responses
- ✅ Error handling and retry logic
#### Google GenAI Integration (`google_genai_helper.py`) — **Updated in v0.4.7**
```python
import os
import asyncio
from langxchange.google_genai_helper import EnhancedGoogleGenAIHelper, GoogleGenAIHelper
# Set API key (or use environment variable)
os.environ["GOOGLE_API_KEY"] = "your-google-key"
# ── EnhancedGoogleGenAIHelper (recommended) ──────────────────────────────────
google_genai = EnhancedGoogleGenAIHelper(
api_key="your-google-key",
chat_model="gemini-2.0-flash",
vision_model="gemini-2.0-flash",
tts_model="gemini-2.0-flash-preview-tts",
enable_usage_tracking=True,
enable_context_caching=True
)
# ── Synchronous multi-turn chat ───────────────────────────────────────────────
messages = [
{"role": "system", "content": "You are a helpful research assistant."},
{"role": "user", "content": "Summarize the following research paper."},
]
response, usage = google_genai.chat(messages, temperature=0.3, max_tokens=1000)
print(response)
# ── Async chat (non-blocking, for FastAPI / async backends) ───────────────────
async def async_example():
response_text, usage, tool_calls = await google_genai.chat_async(
messages=messages,
temperature=0.7,
max_tokens=2000,
)
print(response_text)
print(f"Tokens used: {usage.get('total_tokens', 0)}")
asyncio.run(async_example())
# ── Streaming chat ────────────────────────────────────────────────────────────
async def stream_example():
async for chunk in google_genai.chat_stream(messages, temperature=0.7):
print(chunk, end="", flush=True)
asyncio.run(stream_example())
# ── Tool / function calling ───────────────────────────────────────────────────
tools = [{
"type": "function",
"function": {
"name": "get_weather",
"description": "Get current weather for a city",
"parameters": {
"type": "object",
"properties": {"city": {"type": "string"}},
"required": ["city"]
}
}
}]
async def tool_example():
content, usage, tool_calls = await google_genai.chat_async(
messages=[{"role": "user", "content": "What's the weather in London?"}],
tools=tools
)
if tool_calls:
for call in tool_calls:
print(f"Tool: {call['function']['name']}, Args: {call['function']['arguments']}")
asyncio.run(tool_example())
# ── Multi-modal vision processing ─────────────────────────────────────────────
vision_response = google_genai.chat_with_vision(
text="Analyze this image for key insights",
image_path="research_chart.png"
)
# ── Text-to-speech generation ─────────────────────────────────────────────────
audio_path = google_genai.text_to_speech(
text="Welcome to the research presentation",
voice="Zephyr",
output_format="wav"
)
# ── Usage tracking ────────────────────────────────────────────────────────────
stats = google_genai.get_usage_statistics()
print(f"Total requests: {stats.chat_requests}")
print(f"Input tokens: {stats.total_input_tokens}")
# ── Lightweight GoogleGenAIHelper (simple use cases) ─────────────────────────
llm = GoogleGenAIHelper(chat_model="gemini-2.0-flash")
async def simple_chat():
reply = await llm.chat(messages)
print(reply)
asyncio.run(simple_chat())
```
**Features (v0.5.0):**
- ✅ **Native multi-turn chat** — messages correctly formatted as `types.Content` with `user`/`model` roles
- ✅ **System instruction support** — `system` messages extracted and passed via `GenerateContentConfig.system_instruction`
- ✅ **Async chat** (`chat_async`) — non-blocking, returns `(text, usage, tool_calls)` tuple
- ✅ **Streaming chat** (`chat_stream`) — async generator yielding text chunks in real time
- ✅ **Tool / function calling** — OpenAI-style tool definitions auto-converted to `types.FunctionDeclaration`; tool calls returned in OpenAI-compatible format
- ✅ **Dual client** — both sync `genai.Client` and async `genai.AsyncClient` initialized on startup
- ✅ Gemini 2.0 Flash model support
- ✅ Multi-modal capabilities (text, images, audio)
- ✅ Text-to-speech and speech-to-text generation
- ✅ Context management and usage statistics
- ✅ Safety filtering and content moderation
- ✅ Embedding generation with `text-embedding-004`
- ✅ Caching and performance optimization
#### DeepSeek Integration (`deepseek_helper.py`)
```python
import os
from langxchange.deepseek_helper import EnhancedDeepSeekHelper, ModelType, ContextManagementStrategy
# Set API key (or use environment variable)
os.environ["DEEPSEEK_API_KEY"] = "your-deepseek-key"
# Initialize Enhanced DeepSeek client with advanced configuration
deepseek = EnhancedDeepSeekHelper(
api_key="your-deepseek-key",
base_url="https://api.deepseek.com/v1",
default_model=ModelType.CHAT.value,
embed_model=ModelType.EMBEDDING.value,
vision_model=ModelType.VISION.value,
timeout=30,
max_retries=3,
enable_logging=True,
log_level="INFO",
max_context_tokens=30000,
context_strategy=ContextManagementStrategy.SLIDING_WINDOW
)
# Generate text response with enhanced features
response = deepseek.generate(
prompt="Write a Python function for binary search.",
max_tokens=500,
temperature=0.3,
system_message="You are a helpful coding assistant.",
context_messages=[{"role": "user", "content": "Previous context"}]
)
# Code generation with syntax highlighting
code_response = deepseek.generate_code(
prompt="Create a REST API endpoint in Flask",
language="python",
include_docs=True,
error_handling=True
)
# Batch processing
batch_responses = deepseek.batch_generate([
{"prompt": "Explain recursion", "max_tokens": 200},
{"prompt": "Define Big O notation", "max_tokens": 200}
], temperature=0.7)
# Usage and cost tracking
cost_summary = deepseek.get_cost_summary()
print(f"Total cost: ${cost_summary['total_cost']:.4f}")
print(f"Total tokens: {cost_summary['total_tokens']}")
```
**Features:**
- ✅ Cost-effective alternative to OpenAI with enhanced features
- ✅ Multiple model types (chat, embedding, vision)
- ✅ Advanced context management with sliding window strategy
- ✅ Code generation with syntax highlighting and documentation
- ✅ Batch processing capabilities for multiple requests
- ✅ Streaming support with real-time response handling
- ✅ Usage tracking and cost monitoring
- ✅ Error handling and retry logic
- ✅ Compatible with OpenAI API format
#### Llama Integration (`llama_helper.py`)
```python
from langxchange.llama_helper import EnhancedLLaMAHelper, LLaMAConfig
import os
# Set Hugging Face token (or use environment variable)
os.environ["HUGGINGFACE_TOKEN"] = "your-hf-token"
# Configure LLaMA with enhanced settings
llama_config = LLaMAConfig(
chat_model="meta-llama/Llama-2-7b-chat-hf",
embed_model="all-MiniLM-L6-v2",
device="auto",
max_memory_per_gpu="8GB",
load_in_8bit=False,
load_in_4bit=True,
cache_dir="./llama_cache",
trust_remote_code=False
)
# Initialize Enhanced LLaMA client
llama = EnhancedLLaMAHelper(config=llama_config)
# Generate text response
response = llama.generate(
prompt="Explain machine learning fundamentals.",
temperature=0.7,
max_tokens=2048,
system_message="You are a knowledgeable AI assistant specializing in ML.",
do_sample=True,
top_p=0.9
)
# Advanced text generation with stopping criteria
advanced_response = llama.generate_advanced(
prompt="Write a Python class for neural networks",
stopping_criteria=["def ", "class ", "\n\n"],
temperature=0.5,
repetition_penalty=1.1,
no_repeat_ngram_size=3
)
# Batch text processing
batch_responses = llama.batch_generate([
{"prompt": "What is deep learning?", "max_tokens": 200},
{"prompt": "Explain neural networks", "max_tokens": 200},
{"prompt": "Define backpropagation", "max_tokens": 200}
], temperature=0.7)
# Token counting and optimization
token_count = llama.count_tokens("This is a sample text for token counting.")
print(f"Token count: {token_count}")
# Model performance metrics
metrics = llama.get_model_metrics()
print(f"Memory usage: {metrics['memory_usage_gb']:.2f} GB")
print(f"Inference speed: {metrics['tokens_per_second']:.2f} tokens/sec")
```
**Features:**
- ✅ Local model deployment with Hugging Face integration
- ✅ Enhanced quantization support (4-bit, 8-bit)
- ✅ GPU acceleration with memory optimization
- ✅ Advanced configuration with LLaMAConfig
- ✅ Multiple quantization modes for different hardware
- ✅ Custom stopping criteria for precise generation control
- ✅ Batch processing capabilities
- ✅ Token counting and performance monitoring
- ✅ Memory management and optimization
- ✅ Support for various LLaMA model variants
- ✅ Hugging Face model hub integration
#### Model Context Protocol (MCP) Integration (`mcp_helper.py`)
```python
import asyncio
from langxchange.mcp_helper import MCPServiceManager
async def main():
# Initialize manager with JSON config
manager = MCPServiceManager("mcp_config.json")
await manager.initialize()  # start configured servers (as in the complete example below)
# 1. Register functional capabilities for servers with priority
manager.register_server_capabilities("filesystem", ["files", "local_storage"], priority=10)
manager.register_server_capabilities("brave_search", ["web_search", "research"], priority=5)
# 2. Intelligent Routing: Resolve server by tool name or capability
# Automatically selects the best server (health + priority aware)
server = await manager.select_best_server_for_tool("read_file")
# Direct namespace resolution
server = manager.resolve_tool_server("filesystem::read_file")
# Name-based resolution with capability hints
context = {"preferred_capability": "web_search"}
server = manager.resolve_tool_server("search", context=context)
# 3. Health & Priority Aware Selection
# Automatically picks the best server based on priority, error rates and latency
best_server = manager.select_best_server(["server_a", "server_b"])
# 4. Discovery: Fetch all tools with routing metadata
all_tools = await manager.get_all_tools_with_metadata()
# 5. Call a tool (standard way)
result = await manager.call_tool(
server_name="filesystem",
tool_name="read_file",
arguments={"path": "data.txt"}
)
await manager.shutdown()
if __name__ == "__main__":
asyncio.run(main())
```
**Features:**
- ✅ **Standardized Interoperability**: Connect to any MCP-compliant server (stdio or SSE)
- ✅ **Intelligent Routing**: Automatic server resolution based on tool names and namespaces (`server::tool`)
- ✅ **Capability-Based Selection**: Route tasks to servers based on functional tags (e.g., "web_search", "filesystem")
- ✅ **Health & Priority Aware Selection**: Automatically prioritizes healthy servers and uses priority scores for optimal routing
- ✅ **Tool Registry**: Comprehensive metadata registry for all available tools across all connected servers
- ✅ **Lifecycle Management**: Automatic server startup, health monitoring, and recovery
- ✅ **Discovery**: Dynamic tool discovery with TTL-based caching
- ✅ **Production-Ready**: Graceful shutdown, detailed logging, and error handling
#### 🛠️ Complete Example: Calculator Server
This example demonstrates a full setup including a mock server, configuration, and execution script.
```python
# mcp_calculator_server.py
from mcp.server.fastmcp import FastMCP
mcp = FastMCP("Calculator")
@mcp.tool()
def add(a: int, b: int) -> int:
"""Add two numbers"""
return a + b
@mcp.tool()
def multiply(a: int, b: int) -> int:
"""Multiply two numbers"""
return a * b
if __name__ == "__main__":
mcp.run(transport="stdio")
```
```json
// mcp_test_config.json
{
"servers": [
{
"name": "calculator",
"transport": "stdio",
"command": "python3",
"args": ["mcp_calculator_server.py"]
}
]
}
```
```python
# mcp_test_execution.py
import asyncio
from langxchange.mcp_helper import MCPServiceManager
async def run_test():
manager = MCPServiceManager("mcp_test_config.json")
await manager.initialize()
try:
# Call 'add' tool
result = await manager.call_tool(
server_name="calculator",
tool_name="add",
arguments={"a": 5, "b": 3}
)
print(f"Add Result: {result}")
finally:
await manager.shutdown()
if __name__ == "__main__":
asyncio.run(run_test())
```
```text
# Expected Output
INFO:langxchange.mcp:MCPServiceManager initialized
INFO:langxchange.mcp:Started MCP server 'calculator'
Add Result: content=[TextContent(type='text', text='8', ...)] structuredContent={'result': 8}
INFO:langxchange.mcp:Stopped MCP server 'calculator'
```
### 🤖 Autonomous Agents (`EnhancedAgent.py`)
LangXChange 0.4.6 introduces a powerful `EnhancedLLMAgentHelper` for building production-ready autonomous agents. It features dynamic tool discovery, semantic memory, per-tool circuit breakers, and automatic observation summarization.
#### 🚀 Quick Start: Manual Tool Definition
```python
import asyncio
from langxchange.EnhancedAgent import EnhancedLLMAgentHelper
from langxchange.agent_memory_helper import AgentMemoryHelper
from langxchange.openai_helper import EnhancedOpenAIHelper, OpenAIConfig
async def main():
# 1. Initialize LLM and Memory
llm = EnhancedOpenAIHelper(OpenAIConfig(chat_model="gpt-4o"))
memory = AgentMemoryHelper(sqlite_path="agent_memory.db")
# 2. Define tools with JSON Schema for strict parameter generation
async def get_weather(params):
return f"The weather in {params['location']} is sunny, 25°C."
tools = [{
"action": "get_weather",
"description": "Get current weather for a location",
"parameters": {
"type": "object",
"properties": {
"location": {"type": "string", "description": "City and country"}
},
"required": ["location"]
},
"func": get_weather
}]
# 3. Initialize Agent
agent = EnhancedLLMAgentHelper(
llm=llm,
action_space=tools,
external_memory_helper=memory,
debug=True
)
# 4. Run autonomously to achieve a goal
agent.set_goal("What is the weather in London?")
results = await agent.run_autonomous(max_cycles=5)
for res in results:
print(f"Thought: {res['thought']}")
print(f"Action: {res['decision']['action']}")
print(f"Result: {res['outcome']['result']}")
if __name__ == "__main__":
asyncio.run(main())
```
#### 🔌 Native MCP Integration
Instead of manually defining tools, you can pass an MCP configuration. The agent will automatically discover all tools from the configured servers and route calls dynamically.
```python
mcp_config = {
"servers": [
{
"name": "filesystem",
"transport": "stdio",
"command": "mcp-server-filesystem",
"args": ["/home/user/Downloads"]
},
{
"name": "brave-search",
"transport": "stdio",
"command": "npx",
"args": ["-y", "@modelcontextprotocol/server-brave-search"],
"env": {"BRAVE_API_KEY": "your-key"}
}
]
}
agent = EnhancedLLMAgentHelper(
llm=llm,
mcp_config=mcp_config, # Native MCP support
external_memory_helper=memory
)
# The agent will now have access to all tools from both filesystem and brave-search
agent.set_goal("Find the latest PDF in my Downloads and search for its main topic on the web.")
await agent.run_autonomous()
```
#### ✨ Key Features
- ✅ **Autonomous Loops**: Use `run_autonomous()` for multi-step goal achievement with automatic state management.
- ✅ **Dynamic Discovery**: Tools are discovered at runtime from MCP servers or via a `discovery_callback`.
- ✅ **Schema-Strict Parameters**: Uses JSON Schema hints in prompts to ensure the LLM generates valid parameters.
- ✅ **Per-Tool Circuit Breakers**: Failures in one tool (e.g., a flaky API) won't crash the entire agent.
- ✅ **Auto-Summarization**: Large tool outputs are automatically summarized to preserve context window space.
- ✅ **Semantic Memory**: Integrates with `AgentMemoryHelper` for long-term history and semantic retrieval.
- ✅ **Observability**: Built-in Prometheus-style metrics and correlation ID tracing across all operations.
### 🔍 Vector Database Integration
#### ChromaDB Integration (`chroma_helper.py`)
```python
from langxchange.chroma_helper import EnhancedChromaHelper, ChromaConfig
# Configure Chroma with enhanced performance settings
chroma_config = ChromaConfig(
persist_directory="./chroma_db",
batch_size=100,
max_workers=8,
progress_bar=True
)
chroma = EnhancedChromaHelper(llm, chroma_config)
# Insert documents with metadata
chroma.insert_documents(
collection_name="my_collection",
documents=["Document content here"],
metadatas=[{"source": "file1.txt", "type": "text"}],
generate_embeddings=True
)
# Query with similarity search
results = chroma.query_collection(
collection_name="my_collection",
query_text="What is machine learning?",
top_k=5
)
```
**Features:**
- ✅ Persistent storage
- ✅ Metadata filtering
- ✅ Batch operations
- ✅ Collection management
- ✅ Performance optimization
#### Pinecone Integration (`pinecone_helper.py`)
```python
import os
from langxchange.pinecone_helper import EnhancedPineconeHelper, PineconeConfig, CloudProvider, MetricType
from langxchange.openai_helper import EnhancedOpenAIHelper, OpenAIConfig
# Set API key (or use environment variable)
os.environ["PINECONE_API_KEY"] = "your-pinecone-key"
# Configure LLM helper for embeddings
openai_config = OpenAIConfig(enable_caching=True)
llm_helper = EnhancedOpenAIHelper(openai_config)
# Configure Pinecone with enhanced settings
pinecone_config = PineconeConfig(
api_key="your-pinecone-key",
environment="us-west1-gcp",
cloud_service=CloudProvider.GCP,
index_name="my-index",
dimension=1536, # OpenAI embedding dimension
metric=MetricType.COSINE,
batch_size=100,
max_workers=10,
progress_bar=True
)
# Initialize Enhanced Pinecone client
pinecone = EnhancedPineconeHelper(
llm_helper=llm_helper,
config=pinecone_config
)
# Insert documents with automatic embedding generation
documents = [
"Machine learning is a subset of AI that focuses on algorithms.",
"Deep learning uses neural networks with multiple layers.",
"Natural language processing enables computers to understand text."
]
pinecone.insert_documents(
collection_name="my-index",
documents=documents,
metadatas=[{"source": "article1", "topic": "AI"},
{"source": "article2", "topic": "ML"},
{"source": "article3", "topic": "NLP"}],
generate_embeddings=True,
namespace="default"
)
# Query for similar vectors with filters
results = pinecone.query(
vector=None, # Will auto-generate embedding from query_text
query_text="What is machine learning?",
top_k=5,
filter_metadata={"topic": "AI"},
include_metadata=True
)
# DataFrame ingestion with batch processing
import pandas as pd
df = pd.DataFrame({
'text': ['Sample text 1', 'Sample text 2', 'Sample text 3'],
'category': ['tech', 'science', 'tech']
})
stats = pinecone.ingest_dataframe(
collection_name="my-index",
dataframe=df,
text_column='text',
metadata_columns=['category'],
namespace="default"
)
print(f"Inserted {stats['inserted']} documents")
print(f"Skipped {stats['skipped']} documents")
```
**Features:**
- ✅ Cloud-based vector storage with auto-scaling
- ✅ Automatic embedding generation using LLM helpers
- ✅ DataFrame ingestion with batch processing
- ✅ Advanced querying with metadata filters
- ✅ Performance monitoring and statistics
- ✅ Enterprise-grade error handling and retries
- ✅ Memory-efficient operations for large datasets
- ✅ Namespace management and resource cleanup
- ✅ Real-time updates with comprehensive logging
#### Milvus Integration (`milvus_helper.py`)
```python
from langxchange.milvus_helper import EnhancedMilvusHelper, MilvusConfig
from langxchange.openai_helper import EnhancedOpenAIHelper, OpenAIConfig
# Configure LLM helper for embeddings
openai_config = OpenAIConfig(enable_caching=True)
llm_helper = EnhancedOpenAIHelper(openai_config)
# Configure Milvus with enhanced settings
milvus_config = MilvusConfig(
host="localhost",
port="19530",
api_key="your-milvus-token", # Optional for local
collection_prefix="lx_",
embedding_dim=1536,
batch_size=100,
progress_bar=True
)
# Initialize Enhanced Milvus client
milvus = EnhancedMilvusHelper(
llm_helper=llm_helper,
config=milvus_config
)
# Insert documents with automatic embedding generation
documents = [
"Milvus is an open-source vector database built for AI applications.",
"It supports high-performance vector similarity search and analytics.",
"Milvus can handle billions of vectors with millisecond latency."
]
milvus.insert_documents(
collection_name="ai_docs",
documents=documents,
metadatas=[{"category": "database"}, {"category": "search"}, {"category": "performance"}],
generate_embeddings=True
)
# Query for similar vectors
results = milvus.query(
collection_name="ai_docs",
query_text="What is Milvus?",
top_k=3
)
for hit in results[0]:
print(f"Score: {hit.score}")
print(f"Document: {hit.entity.get('document')}")
```
**Features:**
- ✅ High-performance vector search with HNSW index
- ✅ Support for local Milvus and Zilliz Cloud (via API key/token)
- ✅ Automatic collection creation and schema management
- ✅ Batch insertion and DataFrame ingestion
- ✅ Metadata filtering and JSON support
- ✅ Enterprise-grade error handling
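The metadata filtering above maps onto pymilvus's boolean `expr` parameter. A minimal sketch against pymilvus directly, assuming a collection with `embedding` and `category` fields (collection and field names are assumptions):
```python
from pymilvus import connections, Collection
connections.connect(host="localhost", port="19530")
collection = Collection("lx_ai_docs")  # prefixed collection name (assumed)
collection.load()                      # collections must be loaded before search
query_vector = [0.0] * 1536            # stand-in for a real query embedding
hits = collection.search(
    data=[query_vector],
    anns_field="embedding",
    param={"metric_type": "L2", "params": {"nprobe": 10}},
    limit=3,
    expr='category == "database"',     # boolean metadata filter
    output_fields=["document"],
)
for hit in hits[0]:
    print(hit.score, hit.entity.get("document"))
```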
#### Elasticsearch Integration (`elasticsearch_helper.py`)
```python
from langxchange.elasticsearch_helper import EnhancedElasticsearchHelper, ElasticsearchConfig
from langxchange.openai_helper import EnhancedOpenAIHelper, OpenAIConfig
# Configure LLM helper for embeddings
openai_config = OpenAIConfig(enable_caching=True)
llm_helper = EnhancedOpenAIHelper(openai_config)
# Configure Elasticsearch with enhanced settings
es_config = ElasticsearchConfig(
host="http://localhost:9200",
index_prefix="lx_",
embedding_dim=1536,
batch_size=100,
progress_bar=True
)
# Initialize Enhanced Elasticsearch client
es = EnhancedElasticsearchHelper(
llm_helper=llm_helper,
config=es_config
)
# Insert documents with automatic embedding generation
documents = [
"Elasticsearch is a distributed, RESTful search and analytics engine.",
"It provides a distributed, multitenant-capable full-text search engine.",
"Elasticsearch is developed in Java and is open source."
]
es.insert_documents(
collection_name="tech_docs",
documents=documents,
metadatas=[{"topic": "search"}, {"topic": "analytics"}, {"topic": "java"}],
generate_embeddings=True
)
# Query for similar vectors
results = es.query(
collection_name="tech_docs",
query_text="What is Elasticsearch?",
top_k=3
)
for hit in results['hits']['hits']:
print(f"Score: {hit['_score']}")
print(f"Document: {hit['_source'].get('document')}")
```
**Features:**
- ✅ High-performance vector search with `dense_vector` type
- ✅ Support for script-based cosine similarity scoring
- ✅ Automatic index creation and mapping management
- ✅ Batch insertion and DataFrame ingestion
- ✅ Metadata filtering support
- ✅ Unified interface consistent with Chroma and Milvus helpers
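The script-based cosine similarity noted above corresponds to Elasticsearch's `script_score` query over a `dense_vector` field. A rough sketch with the official client (index and field names are assumptions):
```python
from elasticsearch import Elasticsearch
es_client = Elasticsearch("http://localhost:9200")
query_vector = [0.0] * 1536  # stand-in for a real query embedding
response = es_client.search(
    index="lx_tech_docs",    # prefixed index name (assumed)
    size=3,
    query={
        "script_score": {
            "query": {"match_all": {}},
            "script": {
                # cosineSimilarity returns [-1, 1]; +1.0 keeps scores non-negative
                "source": "cosineSimilarity(params.query_vector, 'embedding') + 1.0",
                "params": {"query_vector": query_vector},
            },
        }
    },
)
for hit in response["hits"]["hits"]:
    print(hit["_score"], hit["_source"].get("document"))
```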
#### FAISS Integration (`faiss_helper.py`)
```python
import os
import pandas as pd
from langxchange.faiss_helper import EnhancedFAISSHelper
# Initialize Enhanced FAISS helper with advanced configuration
faiss = EnhancedFAISSHelper(
dim=768, # Vector dimension
index_type="ivf", # Use IVF index for better performance on large datasets
normalize_vectors=True, # Normalize vectors for cosine similarity
nlist=100, # Number of clusters for IVF
auto_train=True # Automatically train IVF indices
)
# Insert individual vectors with documents and metadata
documents = [
"Machine learning is a subset of artificial intelligence.",
"Deep learning uses neural networks with multiple layers.",
"Natural language processing enables computers to understand text.",
"Computer vision allows machines to interpret visual information."
]
metadatas = [
{"source": "article1", "topic": "ML", "date": "2025-01-15"},
{"source": "article2", "topic": "DL", "date": "2025-01-16"},
{"source": "article3", "topic": "NLP", "date": "2025-01-17"},
{"source": "article4", "topic": "CV", "date": "2025-01-18"}
]
# Generate embeddings (this would come from your embedding model)
embeddings = [
[0.1, 0.2, 0.3] * 256, # 768-dimensional embedding
[0.4, 0.5, 0.6] * 256,
[0.7, 0.8, 0.9] * 256,
[0.2, 0.3, 0.4] * 256
]
faiss.insert(
vectors=embeddings,
documents=documents,
metadatas=metadatas
)
# DataFrame integration with batch processing
df = pd.DataFrame({
'embeddings': embeddings,
'documents': documents,
'metadata': metadatas
})
faiss.insert_dataframe(
dataframe=df,
embeddings_col="embeddings",
documents_col="documents",
metadata_col="metadata"
)
# Query for similar vectors with comprehensive results
query_vector = [0.1, 0.2, 0.3] * 256 # Sample query embedding
results = faiss.query(
embedding_vector=query_vector,
top_k=3,
include_distances=True # Include similarity scores
)
print(f"Found {len(results)} similar documents:")
for i, result in enumerate(results, 1):
print(f"{i}. Score: {result['distance']:.3f}")
print(f" Document: {result['document'][:100]}...")
print(f" Metadata: {result['metadata']}")
print()
# Batch querying for multiple queries at once
query_vectors = [
[0.1, 0.2, 0.3] * 256,
[0.4, 0.5, 0.6] * 256
]
batch_results = faiss.query_batch(
embedding_vectors=query_vectors,
top_k=2,
include_distances=True
)
print(f"Batch query results: {len(batch_results)} result sets")
# Retrieve documents by ID
doc_results = faiss.get_by_ids(["id_0", "id_1"])
print(f"Retrieved {len(doc_results)} documents by ID")
# Get comprehensive statistics
stats = faiss.get_stats()
print(f"Index Statistics:")
print(f" Total vectors: {stats['total_vectors']}")
print(f" Index type: {stats['index_type']}")
print(f" Dimension: {stats['dimension']}")
print(f" Is trained: {stats.get('is_trained', 'N/A')}")
print(f" Number of clusters: {stats.get('nlist', 'N/A')}")
# Persistence - save and load index
index_path = "./faiss_index.bin"
metadata_path = "./faiss_metadata.pkl"
faiss.save(index_path, metadata_path)
print(f"Saved index to {index_path}")
# Load saved index
loaded_faiss = EnhancedFAISSHelper(dim=768, index_type="ivf")
loaded_faiss.load(index_path, metadata_path)
print(f"Loaded index with {loaded_faiss.count()} vectors")
# Index management
index_vector_count = faiss.count()
print(f"Current index contains {index_vector_count} vectors")
# Delete specific document
deleted = faiss.delete_by_id("id_0")
print(f"Document deleted: {deleted}")
# Rebuild index with different configuration
rebuilt_count = faiss.rebuild_index(index_type="hnsw")
print(f"Rebuilt index with {rebuilt_count} vectors using HNSW")
```
**Features:**
- ✅ Multiple index types (Flat, IVF, HNSW) with automatic training
- ✅ High-performance similarity search with vector normalization
- ✅ Comprehensive DataFrame integration for pandas workflow
- ✅ Batch operations for efficient processing
- ✅ Advanced querying with distance scores and metadata
- ✅ Persistence and index management
- ✅ Document retrieval by ID and batch operations
- ✅ Comprehensive statistics and performance monitoring
- ✅ Index rebuilding and optimization
- ✅ Memory-efficient operations with proper validation
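For context, the three index types map directly onto raw FAISS classes, and vector normalization is what turns inner-product search into cosine similarity. A self-contained sketch using `faiss` itself:
```python
import numpy as np
import faiss
d = 768
xb = np.random.rand(1000, d).astype("float32")
faiss.normalize_L2(xb)  # on unit vectors, inner product equals cosine similarity
flat = faiss.IndexFlatIP(d)          # exact search, no training required
quantizer = faiss.IndexFlatIP(d)     # keep a reference so it isn't garbage-collected
ivf = faiss.IndexIVFFlat(quantizer, d, 100, faiss.METRIC_INNER_PRODUCT)
ivf.train(xb)                        # IVF indices must be trained before adding
hnsw = faiss.IndexHNSWFlat(d, 32)    # graph-based approximate search
for index in (flat, ivf, hnsw):
    index.add(xb)
    scores, ids = index.search(xb[:1], 5)  # top-5 neighbours of the first vector
    print(type(index).__name__, ids[0])
```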
### 📄 Document Processing
#### Document Loader (`documentloader.py`)
```python
"""
Demo script showing the enhanced DocumentLoaderHelper capabilities
optimized for LLM processing.
"""
import os
import sys
from langxchange.documentloader import DocumentLoaderHelper, ChunkingStrategy, ImageProcessingStrategy
# Initialize Document Loader with different configurations
loader = DocumentLoaderHelper(
chunk_size=800,
overlap_size=100,
chunking_strategy=ChunkingStrategy.SEMANTIC,
preserve_formatting=True
)
# Test different chunking strategies
strategies = [
(ChunkingStrategy.CHARACTER, "Character-based chunking"),
(ChunkingStrategy.SENTENCE, "Sentence-aware chunking"),
(ChunkingStrategy.PARAGRAPH, "Paragraph-aware chunking"),
(ChunkingStrategy.SEMANTIC, "Semantic chunking (recommended)"),
]
# Add token-based if available
try:
import tiktoken
strategies.append((ChunkingStrategy.TOKEN, "Token-based chunking"))
except ImportError:
print("Note: tiktoken not available, skipping token-based chunking")
# Process document with specific strategy
file_path = "documents/sample.txt"
for strategy, description in strategies:
print(f"\n--- {description} ---")
# Initialize loader with strategy
loader = DocumentLoaderHelper(
chunk_size=800,
overlap_size=100,
chunking_strategy=strategy,
preserve_formatting=True
)
try:
chunks = list(loader.load(file_path))
print(f"Total chunks created: {len(chunks)}")
print(f"Processing time: {loader.stats['times']['total']:.3f}s")
# Show first few chunks
for i, chunk in enumerate(chunks[:3]):
print(f"\nChunk {i+1}:")
print(f" Length: {len(chunk.content)} chars")
if chunk.metadata.token_count:
print(f" Tokens: {chunk.metadata.token_count}")
print(f" Content preview: {chunk.content[:100]}...")
except Exception as e:
print(f"Error with {strategy}: {e}")
# Multi-format document processing
loader = DocumentLoaderHelper(
chunk_size=800,
chunking_strategy=ChunkingStrategy.SEMANTIC,
preserve_formatting=True
)
files_to_process = [
"documents/sample.txt",
"documents/data.csv",
"documents/report.pdf"
]
for file_path in files_to_process:
if os.path.exists(file_path):
print(f"\n--- Processing: {file_path} ---")
try:
chunks = list(loader.load(file_path))
print(f"File type: {chunks[0].metadata.file_type}")
print(f"Total chunks: {len(chunks)}")
# Show metadata for first chunk
first_chunk = chunks[0]
print(f"First chunk metadata:")
print(f" Source: {first_chunk.metadata.source_file}")
print(f" Section: {first_chunk.metadata.section_title}")
print(f" Content length: {len(first_chunk.content)} chars")
except Exception as e:
print(f"Error processing {file_path}: {e}")
# Advanced configuration examples
configs = [
{
"name": "Minimal overlap",
"params": {"chunk_size": 400, "overlap_size": 20, "min_chunk_size": 30}
},
{
"name": "High overlap",
"params": {"chunk_size": 400, "overlap_size": 100, "min_chunk_size": 50}
},
{
"name": "Preserve formatting",
"params": {"chunk_size": 400, "preserve_formatting": True}
},
{
"name": "Normalize text",
"params": {"chunk_size": 400, "preserve_formatting": False}
}
]
file_path = "documents/sample.txt"
for config in configs:
print(f"\n--- CONFIG: {config['name']} ---")
loader = DocumentLoaderHelper(
chunking_strategy=ChunkingStrategy.SEMANTIC,
**config['params']
)
try:
chunks = list(loader.load(file_path))
stats = loader.get_statistics()
print(f"Chunks created: {len(chunks)}")
print(f"Avg chunk size: {sum(len(c.content) for c in chunks) / len(chunks):.0f} chars")
print(f"Processing time: {stats['processing_stats']['times']['total']:.3f}s")
except Exception as e:
print(f"Error: {e}")
# Image processing support
try:
from PIL import Image
pil_available = True
print("✅ PIL (Pillow) support: Available")
except ImportError:
pil_available = False
print("❌ PIL (Pillow) support: Not available")
try:
import pytesseract
tesseract_available = True
print("✅ OCR (pytesseract) support: Available")
except ImportError:
tesseract_available = False
print("❌ OCR (pytesseract) support: Not available")
# Image processing strategies
strategies = [
(ImageProcessingStrategy.OCR_TEXT, "Extract text using OCR"),
(ImageProcessingStrategy.DESCRIPTION, "Generate image d | text/markdown | Timothy Owusu | ikolilu.tim.owusu@gmail.com | null | null | null | null | [] | [] | null | null | >=3.8 | [] | [] | [] | [
"pandas",
"sentence-transformers",
"chromadb",
"pinecone-client",
"sqlalchemy",
"pymongo",
"pymysql",
"numpy",
"google-generativeai",
"openai",
"anthropic",
"weaviate-client",
"qdrant-client",
"elasticsearch",
"elasticsearch-dsl",
"opensearch-py",
"faiss-cpu",
"pymilvus>=2.3.0"
] | [] | [] | [] | [] | twine/6.2.0 CPython/3.12.3 | 2026-02-19T20:18:22.735096 | langxchange-0.5.0.tar.gz | 217,818 | d3/5c/5bef2ea2f04bc39deb76d93f2687a666d9e67a9882ac97f78d4b97e4bb4f/langxchange-0.5.0.tar.gz | source | sdist | null | false | 7a9dda80ba6cfb5cb2704e88f2718bd3 | deb599d78a3e1379d269a04de30a39e12eeb625ea4db20c07f5b359382968535 | d35c5bef2ea2f04bc39deb76d93f2687a666d9e67a9882ac97f78d4b97e4bb4f | null | [] | 208 |
2.4 | surety-diff | 0.0.1 | Contract-aware diff and comparison engine for Surety. | # Surety Diff
Contract-aware diff and comparison engine for the Surety ecosystem.
`surety-diff` provides structured, human-readable comparison
for contract-driven service testing.
It is designed to explain *why* data does not match a contract,
not just that it failed.
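As a generic illustration of the idea (hypothetical code, not surety-diff's actual API), a structured comparison reports the path and reason for each mismatch rather than a bare boolean:
```python
def explain_mismatches(expected: dict, actual: dict, path: str = "$"):
    """Yield one human-readable line per field that violates the contract."""
    for key, want in expected.items():
        here = f"{path}.{key}"
        if key not in actual:
            yield f"{here}: missing (expected {want!r})"
        elif actual[key] != want:
            yield f"{here}: expected {want!r}, got {actual[key]!r}"
for issue in explain_mismatches({"id": 1, "name": "a"}, {"id": 2}):
    print(issue)  # "$.id: expected 1, got 2" then "$.name: missing (expected 'a')"
```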
---
## Installation
```bash
pip install surety-diff
| text/markdown | null | Elena Kulgavaya <elena.kulgavaya@gmail.com> | null | null | MIT | diff, comparison, contract-testing, automation, surety | [
"Programming Language :: Python :: 3",
"License :: OSI Approved :: MIT License",
"Operating System :: OS Independent"
] | [] | null | null | >=3.8 | [] | [] | [] | [
"surety<1.0,>=0.0.4"
] | [] | [] | [] | [] | twine/6.1.0 CPython/3.13.7 | 2026-02-19T20:18:08.659161 | surety_diff-0.0.1.tar.gz | 11,966 | 08/06/ad6e43fdd55eadf47a4f4e87764fd7f77b4afb24b2d77cda6d2ead773f45/surety_diff-0.0.1.tar.gz | source | sdist | null | false | f7c2522ebc51df06103284ec6e991591 | 02bbef96e69c8f1c9bd64f63ed9aa602e59d06c98480e4aea69f0fb6481d2001 | 0806ad6e43fdd55eadf47a4f4e87764fd7f77b4afb24b2d77cda6d2ead773f45 | null | [
"LICENSE"
] | 281 |
2.4 | deployfilegen | 0.1.34 | Auto-generate Dockerfiles, docker-compose, and GitHub Actions for Django + React/Vite/Next.js projects. Zero config. Production-ready. | # deployfilegen
[](https://pypi.org/project/deployfilegen/)
[](https://pypi.org/project/deployfilegen/)
[](https://opensource.org/licenses/MIT)
[](https://www.python.org/downloads/)
**deployfilegen** is a production-grade Python CLI that auto-generates **Dockerfiles**, **docker-compose.yml**, and **GitHub Actions** workflows for Django + React/Next.js/Vite projects.
**One command. Zero config. Production-ready.**
```bash
pip install deployfilegen
deployfilegen init --mode dev # Dev environment — just works
deployfilegen init --mode prod # Production — hardened & optimized
```
---
## ⚡ Demo
```bash
$ deployfilegen init --mode dev
Generating Backend Dockerfile...
Generated backend/Dockerfile
Generating Frontend Dockerfile...
Generated frontend/Dockerfile
Generating Docker Compose...
Generated docker-compose.dev.yml
Deployment configuration generated successfully!
$ docker compose -f docker-compose.dev.yml up --build
# ✅ Backend at http://localhost:8000
# ✅ Frontend at http://localhost:5173 (Vite auto-detected)
```
---
## 📁 Expected Project Structure
```
my-project/
├── .env # Your environment variables
├── backend/
│ ├── manage.py # Django project
│ └── requirements.txt
├── frontend/
│ ├── package.json # React/Vite/Next.js
│ └── src/
│
│── # Generated by deployfilegen ──────────
├── backend/Dockerfile ← generated
├── frontend/Dockerfile ← generated
├── docker-compose.dev.yml ← generated (dev mode)
├── docker-compose.prod.yml ← generated (prod mode)
└── .github/workflows/
└── deploy.yml ← generated (prod mode)
```
---
## 🚀 Key Features
- **Zero-Config Dev Mode**: Works with *any* existing `.env` file. No forced variable naming.
- **Smart Framework Detection**: Auto-detects **Vite** (port 5173), **Next.js** (port 3000), or **CRA**.
- **Flexible Deployment**: SSH Build (default) or Registry Push — you choose.
- **Production-Grade Defaults**:
- Non-root users, unprivileged Nginx
- Healthchecks, restart policies
- Multi-stage builds, `.dockerignore` generation
---
## 📦 Deployment Strategies
### SSH Build (Default — No Registry Needed)
```bash
deployfilegen init --mode prod --deploy ssh
```
**Required `.env` (only 2 variables!):**
```ini
DEPLOY_HOST=your_server_ip
DEPLOY_USER=ubuntu
```
**CI/CD workflow:** `SSH → git pull → docker compose build → up -d`
### Registry Push (Advanced — Immutable Deployments)
```bash
deployfilegen init --mode prod --deploy registry
```
**Required `.env`:**
```ini
# Always required
DEPLOY_HOST=your_server_ip
DEPLOY_USER=ubuntu
# Only for --deploy registry
DOCKER_USERNAME=your_username
BACKEND_IMAGE_NAME=user/backend
FRONTEND_IMAGE_NAME=user/frontend
```
**CI/CD workflow:** `Build → Push to Registry → SSH → docker compose pull → up -d`
---
## 🛠 Supported Stacks
| Component | Framework | Auto-Detected |
|:---|:---|:---|
| **Backend** | Django | Project name from `manage.py` |
| **Frontend** | Vite | Port `5173`, `--host` binding |
| **Frontend** | Next.js | Port `3000`, `-H` binding |
| **Frontend** | CRA | Port `3000`, `HOST` env |
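As a rough illustration of how this kind of detection can work (hypothetical code, not deployfilegen's actual source), one can inspect `frontend/package.json` for framework dependencies:
```python
import json
from pathlib import Path
def detect_frontend(frontend_dir: str = "frontend") -> tuple[str, int]:
    pkg = json.loads(Path(frontend_dir, "package.json").read_text())
    deps = {**pkg.get("dependencies", {}), **pkg.get("devDependencies", {})}
    if "next" in deps:
        return "nextjs", 3000
    if "vite" in deps:
        return "vite", 5173
    return "cra", 3000  # fall back to Create React App defaults
```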
---
## ⚙️ Configuration
Generate a boilerplate `.env`:
```bash
deployfilegen template # SSH mode (minimal)
deployfilegen template --deploy registry # Registry mode (full)
```
---
## 📖 CLI Reference
```text
Usage: deployfilegen init [OPTIONS]
Options:
--mode [dev|prod] Generation mode (Default: prod)
--deploy [ssh|registry] Deployment strategy (Default: ssh)
--force, -f Overwrite existing files
--with-db Include a Postgres service
# Scope Control
--docker-only Generate only Dockerfiles
--compose-only Generate only docker-compose.yml
--github-only Generate only GitHub Actions (Prod only)
--backend-only Only generate backend assets
--frontend-only Only generate frontend assets
# Override Detection
--frontend-port INT Override detected frontend dev port
--start-command TEXT Override detected frontend start command
--project-name TEXT Override detected Django project name
--help Show this message
```
---
## 🔧 Troubleshooting
**"Missing required variables" error in prod mode?**
```bash
deployfilegen template # SSH: just DEPLOY_HOST + DEPLOY_USER
deployfilegen template --deploy registry # Registry: adds DOCKER_USERNAME + IMAGE_NAMEs
```
**Frontend container exits immediately?**
```bash
deployfilegen init --mode dev --start-command "serve" --force
```
**Wrong port detected?**
```bash
deployfilegen init --mode dev --frontend-port 8080 --force
```
**Django project name wrong?**
```bash
deployfilegen init --mode prod --project-name my_project --force
```
---
## 📄 License
MIT
| text/markdown | null | Shankarsan Sahoo <shankarsansahoo2001@gmail.com> | null | null | MIT License
Copyright (c) 2026 Shankarsan Sahoo
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.
| docker, dockerfile, docker-compose, deployment, django, react, vite, nextjs, devops, github-actions, ci-cd, infrastructure, code-generator, production, nginx, gunicorn | [
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.9",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: 3.13",
"License :: OSI Approved :: MIT License",
"Operating S... | [] | null | null | >=3.9 | [] | [] | [] | [
"typer[all]",
"python-dotenv",
"pytest"
] | [] | [] | [] | [
"Homepage, https://github.com/Shankarsan-Sahoo/deployfilegen",
"Repository, https://github.com/Shankarsan-Sahoo/deployfilegen.git",
"Issues, https://github.com/Shankarsan-Sahoo/deployfilegen/issues"
] | twine/6.2.0 CPython/3.13.7 | 2026-02-19T20:17:27.025590 | deployfilegen-0.1.34.tar.gz | 19,293 | df/5e/99e209f3341a0063c499381ef582e4c5dc5acaf77c4c27af60689d70d505/deployfilegen-0.1.34.tar.gz | source | sdist | null | false | ce48baaa482837407e20bb48a092d141 | 34c08d478d0332ee4fe254afcaa0e9cd15488ed64c6b1663bb6f14c50784ae7c | df5e99e209f3341a0063c499381ef582e4c5dc5acaf77c4c27af60689d70d505 | null | [
"LICENSE"
] | 212 |
2.1 | di-registry | 0.6.1 | Simple object registry that can be used for dependency injection and object configuration |
DI Registry
===========
`di-registry` is a package that provides a basic object registry that can be used for configuration or dependency injection.
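As a generic illustration of the pattern (hypothetical code, not this package's actual API), an object registry for dependency injection maps names to configured objects and hands them out on request:
```python
class Registry:
    """Minimal name -> object registry for dependency injection."""
    def __init__(self):
        self._objects = {}
    def register(self, name, obj):
        self._objects[name] = obj
    def resolve(self, name):
        return self._objects[name]
registry = Registry()
registry.register("db", object())  # register a configured dependency once
db = registry.resolve("db")        # look it up wherever it is needed
```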
| text/markdown | Sean Clark | sean@v13inc.com | null | null | MIT | null | [] | [] | https://gitlab.com/heingroup/di_registry | null | >=3.6.0 | [] | [] | [] | [] | [] | [] | [] | [] | twine/2.0.0 pkginfo/1.10.0 requests/2.31.0 setuptools/57.5.0 requests-toolbelt/1.0.0 tqdm/4.67.3 CPython/3.7.17 | 2026-02-19T20:17:05.191053 | di_registry-0.6.1.tar.gz | 10,215 | b9/b4/be619ba52cd1c3255deb2042e7d2ce6100942add1a611b202b8ddb1f3b80/di_registry-0.6.1.tar.gz | source | sdist | null | false | a97e39e03b1d60578915ee886ba1103d | 9e7dc2da72ecb5d62384fe43e94c4647b704941d9b36908e1e5566fc671b3e33 | b9b4be619ba52cd1c3255deb2042e7d2ce6100942add1a611b202b8ddb1f3b80 | null | [] | 233 |
2.4 | dkist-processing-vbi | 1.26.13 | Processing code for the VBI instrument on DKIST | dkist-processing-vbi
====================
|codecov|
Overview
--------
The dkist-processing-vbi library contains the implementation of the VBI pipelines, built as a collection of
`dkist-processing-common <https://pypi.org/project/dkist-processing-common/>`_ Tasks on top of the
`dkist-processing-core <https://pypi.org/project/dkist-processing-core/>`_ framework.
The recommended project structure keeps tasks and workflows in separate packages. Having the workflows
in their own package facilitates using the build_utils to test the integrity of those workflows in unit tests.
Environment Variables
---------------------
.. list-table::
:widths: 10 90
:header-rows: 1
* - Variable
- Field Info
* - LOGURU_LEVEL
- annotation=str required=False default='INFO' alias_priority=2 validation_alias='LOGURU_LEVEL' description='Log level for the application'
* - MESH_CONFIG
- annotation=dict[str, MeshService] required=False default_factory=dict alias_priority=2 validation_alias='MESH_CONFIG' description='Service mesh configuration' examples=[{'upstream_service_name': {'mesh_address': 'localhost', 'mesh_port': 6742}}]
* - RETRY_CONFIG
- annotation=RetryConfig required=False default_factory=RetryConfig description='Retry configuration for the service'
* - OTEL_SERVICE_NAME
- annotation=str required=False default='unknown-service-name' alias_priority=2 validation_alias='OTEL_SERVICE_NAME' description='Service name for OpenTelemetry'
* - DKIST_SERVICE_VERSION
- annotation=str required=False default='unknown-service-version' alias_priority=2 validation_alias='DKIST_SERVICE_VERSION' description='Service version for OpenTelemetry'
* - NOMAD_ALLOC_ID
- annotation=str required=False default='unknown-allocation-id' alias_priority=2 validation_alias='NOMAD_ALLOC_ID' description='Nomad allocation ID for OpenTelemetry'
* - NOMAD_ALLOC_NAME
- annotation=str required=False default='unknown-allocation-name' alias='NOMAD_ALLOC_NAME' alias_priority=2 description='Allocation name for the deployed container the task is running on.'
* - NOMAD_GROUP_NAME
- annotation=str required=False default='unknown-allocation-group' alias='NOMAD_GROUP_NAME' alias_priority=2 description='Allocation group for the deployed container the task is running on'
* - OTEL_EXPORTER_OTLP_TRACES_INSECURE
- annotation=bool required=False default=True description='Use insecure connection for OTLP traces'
* - OTEL_EXPORTER_OTLP_METRICS_INSECURE
- annotation=bool required=False default=True description='Use insecure connection for OTLP metrics'
* - OTEL_EXPORTER_OTLP_TRACES_ENDPOINT
- annotation=Union[str, NoneType] required=False default=None description='OTLP traces endpoint. Overrides mesh configuration' examples=['localhost:4317']
* - OTEL_EXPORTER_OTLP_METRICS_ENDPOINT
- annotation=Union[str, NoneType] required=False default=None description='OTLP metrics endpoint. Overrides mesh configuration' examples=['localhost:4317']
* - OTEL_PYTHON_DISABLED_INSTRUMENTATIONS
- annotation=list[str] required=False default_factory=list description='List of instrumentations to disable. https://opentelemetry.io/docs/zero-code/python/configuration/' examples=[['pika', 'requests']]
* - OTEL_PYTHON_FASTAPI_EXCLUDED_URLS
- annotation=str required=False default='health' description='Comma separated list of URLs to exclude from OpenTelemetry instrumentation in FastAPI.' examples=['client/.*/info,healthcheck']
* - SYSTEM_METRIC_INSTRUMENTATION_CONFIG
- annotation=Union[dict[str, bool], NoneType] required=False default=None description='Configuration for system metric instrumentation. https://opentelemetry-python-contrib.readthedocs.io/en/latest/instrumentation/system_metrics/system_metrics.html' examples=[{'system.memory.usage': ['used', 'free', 'cached'], 'system.cpu.time': ['idle', 'user', 'system', 'irq'], 'system.network.io': ['transmit', 'receive'], 'process.runtime.memory': ['rss', 'vms'], 'process.runtime.cpu.time': ['user', 'system'], 'process.runtime.context_switches': ['involuntary', 'voluntary']}]
* - ISB_USERNAME
- annotation=str required=False default='guest' description='Username for the interservice-bus.'
* - ISB_PASSWORD
- annotation=str required=False default='guest' description='Password for the interservice-bus.'
* - ISB_EXCHANGE
- annotation=str required=False default='master.direct.x' description='Exchange for the interservice-bus.'
* - ISB_QUEUE_TYPE
- annotation=str required=False default='classic' description='Queue type for the interservice-bus.' examples=['quorum', 'classic']
* - BUILD_VERSION
- annotation=str required=False default='dev' description='Fallback build version for workflow tasks.'
* - MAX_FILE_DESCRIPTORS
- annotation=int required=False default=1024 description='Maximum number of file descriptors to allow the process.'
* - GQL_AUTH_TOKEN
- annotation=Union[str, NoneType] required=False default='dev' description='The auth token for the metadata-store-api.'
* - OBJECT_STORE_ACCESS_KEY
- annotation=Union[str, NoneType] required=False default=None description='The access key for the object store.'
* - OBJECT_STORE_SECRET_KEY
- annotation=Union[str, NoneType] required=False default=None description='The secret key for the object store.'
* - OBJECT_STORE_USE_SSL
- annotation=bool required=False default=False description='Whether to use SSL for the object store connection.'
* - MULTIPART_THRESHOLD
- annotation=Union[int, NoneType] required=False default=None description='Multipart threshold for the object store.'
* - S3_CLIENT_CONFIG
- annotation=Union[dict, NoneType] required=False default=None description='S3 client configuration for the object store.'
* - S3_UPLOAD_CONFIG
- annotation=Union[dict, NoneType] required=False default=None description='S3 upload configuration for the object store.'
* - S3_DOWNLOAD_CONFIG
- annotation=Union[dict, NoneType] required=False default=None description='S3 download configuration for the object store.'
* - GLOBUS_MAX_RETRIES
- annotation=int required=False default=5 description='Max retries for transient errors on calls to the globus api.'
* - GLOBUS_INBOUND_CLIENT_CREDENTIALS
- annotation=list[GlobusClientCredential] required=False default_factory=list description='Globus client credentials for inbound transfers.' examples=[[{'client_id': 'id1', 'client_secret': 'secret1'}, {'client_id': 'id2', 'client_secret': 'secret2'}]]
* - GLOBUS_OUTBOUND_CLIENT_CREDENTIALS
- annotation=list[GlobusClientCredential] required=False default_factory=list description='Globus client credentials for outbound transfers.' examples=[[{'client_id': 'id3', 'client_secret': 'secret3'}, {'client_id': 'id4', 'client_secret': 'secret4'}]]
* - OBJECT_STORE_ENDPOINT
- annotation=Union[str, NoneType] required=False default=None description='Object store Globus Endpoint ID.'
* - SCRATCH_ENDPOINT
- annotation=Union[str, NoneType] required=False default=None description='Scratch Globus Endpoint ID.'
* - SCRATCH_BASE_PATH
- annotation=str required=False default='scratch/' description='Base path for scratch storage.'
* - SCRATCH_INVENTORY_DB_COUNT
- annotation=int required=False default=16 description='Number of databases in the scratch inventory (redis).'
* - DOCS_BASE_URL
- annotation=str required=False default='my_test_url' description='Base URL for the documentation site.'
Development
-----------
.. code-block:: bash
git clone git@bitbucket.org:dkistdc/dkist-processing-vbi.git
cd dkist-processing-vbi
pre-commit install
pip install -e .[test]
pytest -v --cov dkist_processing_vbi
Build
-----
Artifacts are built through Bitbucket Pipelines.
The pipeline can be used in other repos with a modification of the package and artifact locations
to use the names relevant to the target repo.
e.g. dkist-processing-test -> dkist-processing-vbi and dkist_processing_test -> dkist_processing_vbi
Deployment
----------
Deployment is done with turtlebot and follows
the process detailed in `dkist-processing-core <https://pypi.org/project/dkist-processing-core/>`_
Additionally, when a new release is ready to be built the following steps need to be taken:
1. Freezing Dependencies
#########################
A new "frozen" extra is generated by the `dkist-dev-tools <https://bitbucket.org/dkistdc/dkist-dev-tools/src/main/>`_
package. If you don't have ``dkist-dev-tools`` installed, please follow the directions from that repo.
To freeze dependencies run
.. code-block:: bash
ddt freeze vX.Y.Z[rcK]
Where "vX.Y.Z[rcK]" is the version about to be released.
2. Changelog
############
When you make **any** change to this repository it **MUST** be accompanied by a changelog file.
The changelog for this repository uses the `towncrier <https://github.com/twisted/towncrier>`__ package.
Entries in the changelog for the next release are added as individual files (one per change) to the ``changelog/`` directory.
Writing a Changelog Entry
^^^^^^^^^^^^^^^^^^^^^^^^^
A changelog entry accompanying a change should be added to the ``changelog/`` directory.
The name of a file in this directory follows a specific template::
<PULL REQUEST NUMBER>.<TYPE>[.<COUNTER>].rst
The fields have the following meanings:
* ``<PULL REQUEST NUMBER>``: This is the number of the pull request, so people can jump from the changelog entry to the diff on Bitbucket.
* ``<TYPE>``: This is the type of the change and must be one of the values described below.
* ``<COUNTER>``: This is an optional field, if you make more than one change of the same type you can append a counter to the subsequent changes, i.e. ``100.bugfix.rst`` and ``100.bugfix.1.rst`` for two bugfix changes in the same PR.
The list of possible types is defined in the towncrier section of ``pyproject.toml``, the types are:
* ``feature``: This change is a new code feature.
* ``bugfix``: This is a change which fixes a bug.
* ``doc``: A documentation change.
* ``removal``: A deprecation or removal of public API.
* ``misc``: Any small change which doesn't fit anywhere else, such as a change to the package infrastructure.
Rendering the Changelog at Release Time
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
When you are about to tag a release first you must run ``towncrier`` to render the changelog.
The steps for this are as follows:
* Run ``towncrier build --version vx.y.z`` using the version number you want to tag.
* Agree to have towncrier remove the fragments.
* Add and commit your changes.
* Tag the release.
**NOTE:** If you forget to add a Changelog entry to a tagged release (either manually or automatically with ``towncrier``)
then the Bitbucket pipeline will fail. To be able to use the same tag you must delete it locally and on the remote branch:
.. code-block:: bash
# First, actually update the CHANGELOG and commit the update
git commit
# Delete tags
git tag -d vWHATEVER.THE.VERSION
git push --delete origin vWHATEVER.THE.VERSION
# Re-tag with the same version
git tag vWHATEVER.THE.VERSION
git push --tags origin main
Science Changelog
^^^^^^^^^^^^^^^^^
Whenever a release involves changes to the scientific quality of L1 data, additional changelog fragment(s) should be
created. These fragments are intended to be as verbose as is needed to accurately capture the scope of the change(s),
so feel free to use all the fancy RST you want. Science fragments are placed in the same ``changelog/`` directory
as other fragments, but are always called::
<PR NUMBER | +>.science[.<COUNTER>].rst
In the case that a single pull request encapsulates the entirety of the scientific change then the first field should
be that PR number (same as the normal CHANGELOG). If, however, there is not a simple mapping from a single PR to a scientific
change, then use the character "+" instead; this will create a changelog entry with no associated PR. For example:
.. code-block:: bash
$ ls changelog/
99.bugfix.rst # This is a normal changelog fragment associated with a bugfix in PR 99
99.science.rst # Apparently that bugfix also changed the scientific results, so that PR also gets a science fragment
+.science.rst # This fragment is not associated with a PR
When it comes time to build the SCIENCE_CHANGELOG, use the ``science_towncrier.sh`` script in this repo to do so.
This script accepts all the same arguments as the default `towncrier`. For example:
.. code-block:: bash
./science_towncrier.sh build --version vx.y.z
This will update the SCIENCE_CHANGELOG and remove any science fragments from the changelog directory.
3. Tag and Push
###############
Once all commits are in place add a git tag that will define the released version, then push the tags up to Bitbucket:
.. code-block:: bash
git tag vX.Y.Z[rcK]
git push --tags origin BRANCH
In the case of an rc, BRANCH will likely be your development branch. For full releases BRANCH should be "main".
.. |codecov| image:: https://codecov.io/bb/dkistdc/dkist-processing-vbi/graph/badge.svg?token=LWWKR72RVV
:target: https://codecov.io/bb/dkistdc/dkist-processing-vbi
| text/x-rst | null | NSO / AURA <dkistdc@nso.edu> | null | null | BSD-3-Clause | null | [
"Programming Language :: Python",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.13"
] | [] | null | null | >=3.13 | [] | [] | [] | [
"dkist-processing-common==12.6.2",
"dkist-processing-math==2.2.1",
"dkist-header-validator==5.3.0",
"dkist-fits-specifications==4.21.0",
"astropy==7.0.2",
"numpy==2.2.5",
"sunpy==6.1.1",
"scipy==1.15.3",
"pillow==10.4.0",
"moviepy==2.1.2",
"dkist-spectral-lines==3.0.0",
"dkist-service-configur... | [] | [] | [] | [
"Homepage, https://nso.edu/dkist/data-center/",
"Repository, https://bitbucket.org/dkistdc/dkist-processing-vbi/",
"Documentation, https://docs.dkist.nso.edu/projects/vbi",
"Help, https://nso.atlassian.net/servicedesk/customer/portal/5"
] | twine/6.2.0 CPython/3.13.12 | 2026-02-19T20:17:02.738774 | dkist_processing_vbi-1.26.13.tar.gz | 76,119 | b1/0b/b441290a4b19ad8ec9956ecb3a07f255bed9df59c4dabb846303db794ddc/dkist_processing_vbi-1.26.13.tar.gz | source | sdist | null | false | c289806a36cd6e615860b969d7784904 | 6978491908038f26ac6b23d60bbc475a14aa8908fafbdea665f857c4445952fe | b10bb441290a4b19ad8ec9956ecb3a07f255bed9df59c4dabb846303db794ddc | null | [] | 454 |
2.4 | pointblank | 0.21.0 | Find out if your data is what you think it is. | > [!TIP]
> **📺 Featured Talk: ['Making Things Nice in Python'](https://www.youtube.com/watch?v=J6e2BKjHyPg)**
>
> Discover how Pointblank and Great Tables (used in this library) prioritize user experience in Python package design. I go over why convenient options, extensive documentation, and thoughtful API decisions are better for everyone (even when they challenge conventional Python patterns/practices).
<div align="center">
<a href="https://posit-dev.github.io/pointblank/"><img src="https://posit-dev.github.io/pointblank/assets/pointblank_logo.svg" width="85%"/></a>
_Data validation toolkit for assessing and monitoring data quality._
[](https://pypi.python.org/pypi/pointblank)
[](https://pypi.org/project/pointblank/#history)
[](https://pepy.tech/projects/pointblank)
[](https://anaconda.org/conda-forge/pointblank)
[](https://img.shields.io/github/license/posit-dev/pointblank)
[](https://github.com/posit-dev/pointblank/actions/workflows/ci-tests.yaml)
[](https://codecov.io/gh/posit-dev/pointblank)
[](https://www.repostatus.org/#active)
[](https://posit-dev.github.io/pointblank/)
[](https://deepwiki.com/posit-dev/pointblank)
[](https://github.com/posit-dev/pointblank/graphs/contributors)
[](https://discord.com/invite/YH7CybCNCQ)
[](https://www.contributor-covenant.org/version/2/1/code_of_conduct.html)
</div>
<div align="center">
<a href="translations/README.fr.md">Français</a> |
<a href="translations/README.de.md">Deutsch</a> |
<a href="translations/README.it.md">Italiano</a> |
<a href="translations/README.es.md">Español</a> |
<a href="translations/README.pt-BR.md">Português</a> |
<a href="translations/README.nl.md">Nederlands</a> |
<a href="translations/README.zh-CN.md">简体中文</a> |
<a href="translations/README.ja.md">日本語</a> |
<a href="translations/README.ko.md">한국어</a> |
<a href="translations/README.hi.md">हिन्दी</a> |
<a href="translations/README.ar.md">العربية</a>
</div>
<br>
Pointblank takes a different approach to data quality. It doesn't have to be a tedious technical task. Rather, it can become a process focused on clear communication between team members. While other validation libraries focus solely on catching errors, Pointblank is great at both **finding issues and sharing insights**. Our beautiful, customizable reports turn validation results into conversations with stakeholders, making data quality issues immediately understandable and actionable for everyone on your team.
**Get started in minutes, not hours.** Pointblank's AI-powered [`DraftValidation`](https://posit-dev.github.io/pointblank/user-guide/draft-validation.html) feature analyzes your data and suggests intelligent validation rules automatically. So there's no need to stare at an empty validation script wondering where to begin. Pointblank can kickstart your data quality journey so you can focus on what matters most.
Whether you're a data scientist who needs to quickly communicate data quality findings, a data engineer building robust pipelines, or an analyst presenting data quality results to business stakeholders, Pointblank helps you to turn data quality from an afterthought into a competitive advantage.
## Getting Started with AI-Powered Validation Drafting
The `DraftValidation` class uses LLMs to analyze your data and generate a complete validation plan with intelligent suggestions. This helps you quickly get started with data validation or jumpstart a new project.
```python
import pointblank as pb
# Load your data
data = pb.load_dataset("game_revenue") # A sample dataset
# Use DraftValidation to generate a validation plan
pb.DraftValidation(data=data, model="anthropic:claude-sonnet-4-5")
```
The output is a complete validation plan with intelligent suggestions based on your data:
```python
import pointblank as pb
# The validation plan
validation = (
pb.Validate(
data=data,
label="Draft Validation",
thresholds=pb.Thresholds(warning=0.10, error=0.25, critical=0.35)
)
.col_vals_in_set(columns="item_type", set=["iap", "ad"])
.col_vals_gt(columns="item_revenue", value=0)
.col_vals_between(columns="session_duration", left=3.2, right=41.0)
.col_count_match(count=11)
.row_count_match(count=2000)
.rows_distinct()
.interrogate()
)
validation
```
<div align="center">
<img src="https://posit-dev.github.io/pointblank/assets/pointblank-draft-validation-report.png" width="800px">
</div>
<br>
Copy, paste, and customize the generated validation plan for your needs.
## Chainable Validation API
Pointblank's chainable API makes validation simple and readable. The same pattern always applies: (1) start with `Validate`, (2) add validation steps, and (3) finish with `interrogate()`.
```python
import pointblank as pb
validation = (
pb.Validate(data=pb.load_dataset(dataset="small_table"))
.col_vals_gt(columns="d", value=100) # Validate values > 100
.col_vals_le(columns="c", value=5) # Validate values <= 5
.col_exists(columns=["date", "date_time"]) # Check columns exist
.interrogate() # Execute and collect results
)
# Get the validation report from the REPL with:
validation.get_tabular_report().show()
# From a notebook simply use:
validation
```
<div align="center">
<img src="https://posit-dev.github.io/pointblank/assets/pointblank-tabular-report.png" width="800px">
</div>
<br>
Once you have an interrogated `validation` object, you can leverage a variety of methods to extract insights (a short sketch follows this list), like:
- getting detailed reports for single steps to see what went wrong
- filtering tables based on validation results
- extracting problematic data for debugging
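A minimal sketch (`get_step_report()` appears later in this README; `get_data_extracts()` and `get_sundered_data()` are assumed from the package API):
```python
# Assumes the interrogated `validation` object from the example above.
step_report = validation.get_step_report(i=2)            # report for a single step
failed_rows = validation.get_data_extracts(i=2)          # rows that failed step 2
passing_tbl = validation.get_sundered_data(type="pass")  # keep only passing rows
```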
## Why Choose Pointblank?
- **Works with your existing stack**: Seamlessly integrates with Polars, Pandas, DuckDB, MySQL, PostgreSQL, SQLite, Parquet, PySpark, Snowflake, and more!
- **Beautiful, interactive reports**: Crystal-clear validation results that highlight issues and help communicate data quality
- **Composable validation pipeline**: Chain validation steps into a complete data quality workflow
- **Threshold-based alerts**: Set 'warning', 'error', and 'critical' thresholds with custom actions
- **Practical outputs**: Use validation results to filter tables, extract problematic data, or trigger downstream processes
## Production-Ready Validation Pipeline
Here's how Pointblank handles complex, real-world scenarios with advanced features like threshold management, automated alerts, and comprehensive business rule validation:
```python
import pointblank as pb
import polars as pl
# Load your data
sales_data = pl.read_csv("sales_data.csv")
# Create a comprehensive validation
validation = (
pb.Validate(
data=sales_data,
tbl_name="sales_data", # Name of the table for reporting
label="Real-world example.", # Label for the validation, appears in reports
thresholds=(0.01, 0.02, 0.05), # Set thresholds for warnings, errors, and critical issues
actions=pb.Actions( # Define actions for any threshold exceedance
critical="Major data quality issue found in step {step} ({time})."
),
final_actions=pb.FinalActions( # Define final actions for the entire validation
pb.send_slack_notification(
webhook_url="https://hooks.slack.com/services/your/webhook/url"
)
),
brief=True, # Add automatically-generated briefs for each step
)
.col_vals_between( # Check numeric ranges with precision
columns=["price", "quantity"],
left=0, right=1000
)
.col_vals_not_null( # Ensure that columns ending with '_id' don't have null values
columns=pb.ends_with("_id")
)
.col_vals_regex( # Validate patterns with regex
columns="email",
pattern="^[a-zA-Z0-9._%+-]+@[a-zA-Z0-9.-]+\\.[a-zA-Z]{2,}$"
)
.col_vals_in_set( # Check categorical values
columns="status",
set=["pending", "shipped", "delivered", "returned"]
)
.conjointly( # Combine multiple conditions
lambda df: pb.expr_col("revenue") == pb.expr_col("price") * pb.expr_col("quantity"),
lambda df: pb.expr_col("tax") >= pb.expr_col("revenue") * 0.05
)
.interrogate()
)
```
```
Major data quality issue found in step 7 (2025-04-16 15:03:04.685612+00:00).
```
```python
# Get an HTML report you can share with your team
validation.get_tabular_report().show("browser")
```
<div align="center">
<img src="https://posit-dev.github.io/pointblank/assets/pointblank-sales-data.png" width="800px">
</div>
```python
# Get a report of failing records from a specific step
validation.get_step_report(i=3).show("browser") # Get failing records from step 3
```
<div align="center">
<img src="https://posit-dev.github.io/pointblank/assets/pointblank-step-report.png" width="800px">
</div>
<br>
## YAML Configuration
For teams that need portable, version-controlled validation workflows, Pointblank supports YAML configuration files. This makes it easy to share validation logic across different environments and team members, ensuring everyone is on the same page.
**validation.yaml**
```yaml
validate:
data: small_table
tbl_name: "small_table"
label: "Getting started validation"
steps:
- col_vals_gt:
columns: "d"
value: 100
- col_vals_le:
columns: "c"
value: 5
- col_exists:
columns: ["date", "date_time"]
```
**Execute the YAML validation**
```python
import pointblank as pb
# Run validation from YAML configuration
validation = pb.yaml_interrogate("validation.yaml")
# Get the results just like any other validation
validation.get_tabular_report().show()
```
This approach is suitable for:
- **CI/CD pipelines**: Store validation rules alongside your code
- **Team collaboration**: Share validation logic in a readable format
- **Environment consistency**: Use the same validation across dev, staging, and production
- **Documentation**: YAML files serve as living documentation of your data quality requirements
## Command Line Interface (CLI)
Pointblank includes a powerful CLI utility called `pb` that lets you run data validation workflows directly from the command line. Perfect for CI/CD pipelines, scheduled data quality checks, or quick validation tasks.
<div align="center">
<img src="https://posit-dev.github.io/pointblank/assets/vhs/cli-complete-workflow.gif" width="100%">
</div>
**Explore Your Data**
```bash
# Get a quick preview of your data
pb preview small_table
# Preview data from GitHub URLs
pb preview "https://github.com/user/repo/blob/main/data.csv"
# Check for missing values in Parquet files
pb missing data.parquet
# Generate column summaries from database connections
pb scan "duckdb:///data/sales.ddb::customers"
```
**Run Essential Validations**
```bash
# Run validation from YAML configuration file
pb run validation.yaml
# Run validation from Python file
pb run validation.py
# Check for duplicate rows
pb validate small_table --check rows-distinct
# Validate data directly from GitHub
pb validate "https://github.com/user/repo/blob/main/sales.csv" --check col-vals-not-null --column customer_id
# Verify no null values in Parquet datasets
pb validate "data/*.parquet" --check col-vals-not-null --column a
# Extract failing data for debugging
pb validate small_table --check col-vals-gt --column a --value 5 --show-extract
```
**Integrate with CI/CD**
```bash
# Use exit codes for automation in one-liner validations (0 = pass, 1 = fail)
pb validate small_table --check rows-distinct --exit-code
# Run validation workflows with exit codes
pb run validation.yaml --exit-code
pb run validation.py --exit-code
```
Click the following headings to see some video demonstrations of the CLI:
<details>
<summary>Getting Started with the Pointblank CLI</summary>
<div align="center">
<img src="https://posit-dev.github.io/pointblank/assets/vhs/cli-getting-started.gif" width="100%">
</div>
</details>
<details>
<summary>Doing Some Data Exploration</summary>
<div align="center">
<img src="https://posit-dev.github.io/pointblank/assets/vhs/cli-data-exploration.gif" width="100%">
</div>
</details>
<details>
<summary>Validating Data with the CLI</summary>
<div align="center">
<img src="https://posit-dev.github.io/pointblank/assets/vhs/cli-essential-validations.gif" width="100%">
</div>
</details>
<details>
<summary>Using Polars in the CLI</summary>
<div align="center">
<img src="https://posit-dev.github.io/pointblank/assets/vhs/cli-using-polars.gif" width="100%">
</div>
</details>
<details>
<summary>Integrating Pointblank with CI/CD</summary>
<div align="center">
<img src="https://posit-dev.github.io/pointblank/assets/vhs/cli-cicd-workflows.gif" width="100%">
</div>
</details>
## Generate Realistic Test Data
Need test data for your validation workflows? The `generate_dataset()` function creates realistic, locale-aware synthetic data based on schema definitions. It's very useful for developing pipelines without production data, running CI/CD tests with reproducible scenarios, or prototyping workflows before production data is available.
```python
import pointblank as pb
# Define a schema with field constraints
schema = pb.Schema(
user_id=pb.int_field(min_val=1, unique=True),
name=pb.string_field(preset="name"),
email=pb.string_field(preset="email"),
age=pb.int_field(min_val=18, max_val=100),
status=pb.string_field(allowed=["active", "pending", "inactive"]),
)
# Generate 10 rows of realistic test data
data = pb.generate_dataset(schema, n=10, seed=23)
data
```
| user_id | name | email | age | status |
|---------------------|------------------|----------------------------|-----|----------|
| 7188536481533917197 | Vivienne Rios | vrios27@hotmail.com | 55 | pending |
| 2674009078779859984 | William Schaefer | wschaefer28@yandex.com | 28 | active |
| 7652102777077138151 | Lily Hansen | lily779@aol.com | 20 | active |
| 157503859921753049 | Shirley Mays | shirley_mays@protonmail.com| 93 | inactive |
| 2829213282471975080 | Sean Dawson | sean_dawson@hotmail.com | 57 | pending |
| 3497364383162086858 | Zachary Marsh | zmarsh23@zoho.com | 72 | pending |
| 3302703640991750415 | Gemma Gonzalez | gemmagonzalez@yahoo.com | 66 | pending |
| 6695746877064448147 | Brian Haley | brian437@yandex.com | 85 | inactive |
| 2466163118311913924 | Nora Hernandez | norahernandez@aol.com | 63 | pending |
| 129827878195925732 | Diana Novak | diana922@protonmail.com | 34 | active |
The generator supports sophisticated data generation with these capabilities:
- **Realistic data with presets**: Use built-in presets like `"name"`, `"email"`, `"address"`, `"phone"`, etc.
- **User agent strings**: Generate highly varied, realistic browser user agent strings from 17 browser categories with over 42,000 unique combinations
- **50+ country support**: Generate locale-specific data (e.g., `country="DE"` for German addresses)
- **Field constraints**: Control ranges, patterns, uniqueness, and allowed values
- **Multiple output formats**: Returns Polars DataFrames by default, but also supports Pandas (`output="pandas"`) or dictionaries (`output="dict"`)
This makes it easy to generate test data that matches your validation rules, helping you develop and test data quality workflows without relying on real data.
## Features That Set Pointblank Apart
- **Complete validation workflow**: From data access to validation to reporting in a single pipeline
- **Built for collaboration**: Share results with colleagues through beautiful interactive reports
- **Practical outputs**: Get exactly what you need: counts, extracts, summaries, or full reports
- **Flexible deployment**: Use in notebooks, scripts, or data pipelines
- **Synthetic data generation**: Create realistic test data with 30+ presets, user agent strings, locale-aware formatting, and 50+ country support
- **Customizable**: Tailor validation steps and reporting to your specific needs
- **Internationalization**: Reports can be generated in 40 languages, including English, Spanish, French, and German
## Documentation and Examples
Visit our [documentation site](https://posit-dev.github.io/pointblank) for:
- [The User Guide](https://posit-dev.github.io/pointblank/user-guide/)
- [API reference](https://posit-dev.github.io/pointblank/reference/)
- [Example gallery](https://posit-dev.github.io/pointblank/demos/)
- [The Pointblog](https://posit-dev.github.io/pointblank/blog/)
## Join the Community
We'd love to hear from you! Connect with us:
- [GitHub Issues](https://github.com/posit-dev/pointblank/issues) for bug reports and feature requests
- [_Discord server_](https://discord.com/invite/YH7CybCNCQ) for discussions and help
- [Contributing guidelines](https://github.com/posit-dev/pointblank/blob/main/CONTRIBUTING.md) if you'd like to help improve Pointblank
## Installation
You can install Pointblank using pip:
```bash
pip install pointblank
```
You can also install Pointblank from Conda-Forge by using:
```bash
conda install conda-forge::pointblank
```
If you don't have Polars or Pandas installed, you'll need to install one of them to use Pointblank.
```bash
pip install "pointblank[pl]" # Install Pointblank with Polars
pip install "pointblank[pd]" # Install Pointblank with Pandas
```
To use Pointblank with DuckDB, MySQL, PostgreSQL, or SQLite, install Ibis with the appropriate backend:
```bash
pip install "pointblank[duckdb]" # Install Pointblank with Ibis + DuckDB
pip install "pointblank[mysql]" # Install Pointblank with Ibis + MySQL
pip install "pointblank[postgres]" # Install Pointblank with Ibis + PostgreSQL
pip install "pointblank[sqlite]" # Install Pointblank with Ibis + SQLite
```
## Technical Details
Pointblank uses [Narwhals](https://github.com/narwhals-dev/narwhals) to work with Polars and Pandas DataFrames, and integrates with [Ibis](https://github.com/ibis-project/ibis) for database and file format support. This architecture provides a consistent API for validating tabular data from various sources.
## Contributing to Pointblank
There are many ways to contribute to the ongoing development of Pointblank. Some contributions can be simple (like fixing typos, improving documentation, filing issues for feature requests or problems, etc.) and others might take more time and care (like answering questions and submitting PRs with code changes). Just know that anything you can do to help would be very much appreciated!
Please read over the [contributing guidelines](https://github.com/posit-dev/pointblank/blob/main/CONTRIBUTING.md) for
information on how to get started.
## Pointblank for R
There's also a version of Pointblank for R, which has been around since 2017 and is widely used in the R community. You can find it at https://github.com/rstudio/pointblank.
## Roadmap
We're actively working on enhancing Pointblank with:
1. Additional validation methods for comprehensive data quality checks
2. Advanced logging capabilities
3. Messaging actions (Slack, email) for threshold exceedances
4. LLM-powered validation suggestions and data dictionary generation
5. JSON/YAML configuration for pipeline portability
6. CLI utility for validation from the command line
7. Expanded backend support and certification
8. High-quality documentation and examples
If you have any ideas for features or improvements, don't hesitate to share them with us! We are always looking for ways to make Pointblank better.
## Code of Conduct
Please note that the Pointblank project is released with a [contributor code of conduct](https://www.contributor-covenant.org/version/2/1/code_of_conduct/). <br>By participating in this project you agree to abide by its terms.
## 📄 License
Pointblank is licensed under the MIT license.
© Posit Software, PBC.
## 🏛️ Governance
This project is primarily maintained by
[Rich Iannone](https://bsky.app/profile/richmeister.bsky.social). Other authors may occasionally
assist with some of these duties.
| text/markdown | null | Richard Iannone <riannone@me.com> | null | null | MIT License
Copyright (c) 2024-2026 Posit Software, PBC
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.
| data, quality, validation, testing, data science, data engineering | [
"Development Status :: 5 - Production/Stable",
"Intended Audience :: Science/Research",
"License :: OSI Approved :: MIT License",
"Programming Language :: Python",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming La... | [] | null | null | >=3.10 | [] | [] | [] | [
"commonmark>=0.9.1",
"importlib-metadata",
"great_tables>=0.20.0",
"narwhals>=2.0.1",
"typing_extensions>=3.10.0.0",
"requests>=2.31.0",
"click>=8.0.0",
"rich>=13.0.0",
"pyyaml>=6.0.0",
"pandas>=2.2.3; extra == \"pd\"",
"polars>=1.33.0; extra == \"pl\"",
"pyspark==3.5.6; extra == \"pyspark\"",... | [] | [] | [] | [
"homepage, https://github.com/posit-dev/pointblank"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-19T20:16:46.924077 | pointblank-0.21.0.tar.gz | 62,066,413 | 92/0a/79d5814869687d55a46e1ded4abc75ba8c80ff4c420189f33e7ad5f8e222/pointblank-0.21.0.tar.gz | source | sdist | null | false | ddd051cb53f89a8c56dad3aaba05727e | 0bfb571dda0bacfe50af2c2b1e7d99b183b4ce2da5728483c601576d6e8f8925 | 920a79d5814869687d55a46e1ded4abc75ba8c80ff4c420189f33e7ad5f8e222 | null | [
"LICENSE"
] | 1,729 |
2.4 | tableau-migration | 6.0.0 | Tableau Migration SDK | # Migration SDK
## Usage
This will install the tableau_migration package
```
pip install tableau_migration
```
Once installed, you can use it in the REPL or in your own code
```Python
# This initializes the dotnet runtime on import
import tableau_migration
# Now just use the object
from tableau_migration.migration_engine import PyMigrationPlanBuilder  # module path assumed from the package layout
planBuilder = PyMigrationPlanBuilder()
...
```
| text/markdown | Salesforce, Inc. | null | null | null | null | null | [
"Operating System :: OS Independent",
"Programming Language :: Python :: 3"
] | [] | null | null | >=3.10 | [] | [] | [] | [
"cffi==2.0.0",
"pycparser==2.23",
"pythonnet==3.0.5",
"typing-extensions==4.15.0"
] | [] | [] | [] | [
"Homepage, http://www.tableau.com",
"Bug Tracker, http://www.tableau.com",
"Repository, https://github.com/tableau/tableau-migration-sdk"
] | twine/6.2.0 CPython/3.11.4 | 2026-02-19T20:16:44.018070 | tableau_migration-6.0.0.tar.gz | 2,426,720 | b5/cf/fd51a0d8aabf53bfb974b6315a3be784c689114385ae7bc53c93893c25b3/tableau_migration-6.0.0.tar.gz | source | sdist | null | false | 4f11285c259820793bb4895d62e807eb | a5e016ebfdac05384714eae147afdc409e51b33cc8fe2b9a7da51e985eba29cc | b5cffd51a0d8aabf53bfb974b6315a3be784c689114385ae7bc53c93893c25b3 | Apache-2.0 | [
"LICENSE"
] | 227 |
2.4 | climpred | 2.6.0 | """Verification of weather and climate forecasts and prediction.""" | .. image:: https://i.imgur.com/HPOdOsR.png
Verification of weather and climate forecasts
..
Table version of badges inspired by pySTEPS.
.. list-table::
:stub-columns: 1
:widths: 10 90
* - docs
- |docs| |context7| |joss| |doi|
* - tests
- |ci| |upstream| |codecov| |precommit|
* - package
- |conda| |conda downloads| |pypi| |pypi downloads|
* - license
- |license|
* - community
- |gitter| |contributors| |forks| |stars| |issues| |PRs|
* - tutorials
- |gallery| |workshop| |cloud|
.. |docs| image:: https://img.shields.io/readthedocs/climpred/latest.svg?style=flat
:target: https://climpred.readthedocs.io/en/stable/?badge=stable
:alt: Documentation Status
.. |context7| image:: https://img.shields.io/badge/docs-LLM-008A61
:target: https://context7.com/pangeo-data/climpred/llms.txt
:alt: context7 docs for LLMs
.. |joss| image:: https://joss.theoj.org/papers/246d440e3fcb19025a3b0e56e1af54ef/status.svg
:target: https://joss.theoj.org/papers/246d440e3fcb19025a3b0e56e1af54ef
:alt: JOSS paper
.. |doi| image:: https://zenodo.org/badge/DOI/10.5281/zenodo.4556085.svg
:target: https://doi.org/10.5281/zenodo.4556085
:alt: DOI
.. |ci| image:: https://github.com/pangeo-data/climpred/actions/workflows/climpred_testing.yml/badge.svg
:target: https://github.com/pangeo-data/climpred/actions/workflows/climpred_testing.yml
:alt: CI
.. |upstream| image:: https://github.com/pangeo-data/climpred/actions/workflows/upstream-dev-ci.yml/badge.svg
:target: https://github.com/pangeo-data/climpred/actions/workflows/upstream-dev-ci.yml
:alt: CI upstream
.. |codecov| image:: https://codecov.io/gh/pangeo-data/climpred/branch/main/graph/badge.svg
:target: https://codecov.io/gh/pangeo-data/climpred
:alt: coverage
.. |precommit| image:: https://results.pre-commit.ci/badge/github/pangeo-data/climpred/main.svg
:target: https://results.pre-commit.ci/latest/github/pangeo-data/climpred/main
:alt: pre-commit.ci status
.. |conda| image:: https://img.shields.io/conda/vn/conda-forge/climpred.svg
:target: https://anaconda.org/conda-forge/climpred
:alt: Conda Version
.. |pypi| image:: https://img.shields.io/pypi/v/climpred.svg
:target: https://pypi.python.org/pypi/climpred/
:alt: pypi Version
.. |license| image:: https://img.shields.io/github/license/pangeo-data/climpred.svg
:alt: license
:target: LICENSE.txt
.. |gitter| image:: https://badges.gitter.im/Join%20Chat.svg
:target: https://gitter.im/climpred
:alt: gitter chat
.. |contributors| image:: https://img.shields.io/github/contributors/pangeo-data/climpred
:alt: GitHub contributors
:target: https://github.com/pangeo-data/climpred/graphs/contributors
.. |conda downloads| image:: https://img.shields.io/conda/dn/conda-forge/climpred
:alt: Conda downloads
:target: https://anaconda.org/conda-forge/climpred
.. |pypi downloads| image:: https://pepy.tech/badge/climpred
:alt: pypi downloads
:target: https://pepy.tech/project/climpred
.. |gallery| image:: https://img.shields.io/badge/climpred-examples-ed7b0e.svg
:alt: climpred gallery
:target: https://mybinder.org/v2/gh/pangeo-data/climpred/main?urlpath=lab%2Ftree%2Fdocs%2Fsource%2Fquick-start.ipynb
.. |workshop| image:: https://img.shields.io/badge/climpred-workshop-f5a252
:alt: climpred workshop
:target: https://mybinder.org/v2/gh/bradyrx/climpred_workshop/master
.. |cloud| image:: https://img.shields.io/badge/climpred-cloud_demo-f9c99a
:alt: climpred cloud demo
:target: https://github.com/aaronspring/climpred-cloud-demo
.. |forks| image:: https://img.shields.io/github/forks/pangeo-data/climpred
:alt: GitHub forks
:target: https://github.com/pangeo-data/climpred/network/members
.. |stars| image:: https://img.shields.io/github/stars/pangeo-data/climpred
:alt: GitHub stars
:target: https://github.com/pangeo-data/climpred/stargazers
.. |issues| image:: https://img.shields.io/github/issues/pangeo-data/climpred
:alt: GitHub issues
:target: https://github.com/pangeo-data/climpred/issues
.. |PRs| image:: https://img.shields.io/github/issues-pr/pangeo-data/climpred
:alt: GitHub PRs
:target: https://github.com/pangeo-data/climpred/pulls
..
We are actively looking for new contributors for climpred!
`Riley <https://www.linkedin.com/in/rileybrady/>`_ moved to McKinsey's
Climate Analytics team as a climate software engineer.
`Aaron <https://www.linkedin.com/in/springaaron/>`_ moved to XING as a data scientist.
We especially hope for Python enthusiasts from the seasonal, subseasonal, and weather
prediction communities. In our past coding journey, collaborative coding and feedback
through issues and pull requests advanced our code and our thinking about forecast
verification more than we could have ever expected.
Feel free to implement your own new feature or take a look at the
`good first issue <https://github.com/pangeo-data/climpred/issues?q=is%3Aissue+is%3Aopen+label%3A%22good+first+issue%22>`_
tag in the issues. If you are interested in maintaining climpred, please ping us.
Installation
============
You can install the latest release of ``climpred`` using ``pip`` or ``conda``:
.. code-block:: bash
python -m pip install climpred[complete]
.. code-block:: bash
conda install -c conda-forge climpred
You can also install the bleeding edge (pre-release versions) by cloning this
repository or installing directly from GitHub:
.. code-block:: bash
git clone https://github.com/pangeo-data/climpred.git
cd climpred
python -m pip install . --upgrade
.. code-block:: bash
pip install git+https://github.com/pangeo-data/climpred.git
Documentation
=============
Documentation is in development and can be found on readthedocs_.
.. _readthedocs: https://climpred.readthedocs.io/en/latest/
Star History
============
.. image:: https://api.star-history.com/svg?repos=pangeo-data/climpred&type=date&legend=top-left
:alt: Star History Chart
:target: https://www.star-history.com/#pangeo-data/climpred&type=date&legend=top-left
| text/x-rst | null | Aaron Spring <aaron.spring@mpimet.mpg.de>, Riley Brady <riley.brady@colorado.edu> | null | Aaron Spring <aaron.spring@mpimet.mpg.de>, Trevor James Smith <smith.trevorj@ouranos.ca> | null | null | [
"Development Status :: 5 - Production/Stable",
"Intended Audience :: Education",
"Intended Audience :: Science/Research",
"Natural Language :: English",
"Operating System :: OS Independent",
"Programming Language :: Python :: 3 :: Only",
"Programming Language :: Python :: 3.9",
"Programming Language :... | [] | null | null | >=3.9 | [] | [] | [] | [
"cf-xarray>=0.8",
"cftime>=1.6.3",
"dask>=2023.4",
"numpy>=2",
"packaging>=23",
"pandas>=2",
"pooch>=1.8",
"xarray>=2024.5",
"xskillscore>=0.0.29",
"bottleneck; extra == \"accel\"",
"numba>=0.57; extra == \"accel\"",
"bias-correction>=0.4; extra == \"bias-correction\"",
"xclim>=0.52; extra =... | [] | [] | [] | [
"Changelog, https://climpred.readthedocs.io/en/stable/changelog.html",
"Homepage, https://climpred.readthedocs.io/en/stable/",
"Issue Tracker, https://github.com/pangeo-data/climpred/issues",
"Source, https://github.com/pangeo-data/climpred"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-19T20:16:31.939388 | climpred-2.6.0.tar.gz | 7,053,433 | 2d/75/cc5bef6639efbdb9c70eba42460a1818783965028bdc99e1509c1fdd6f57/climpred-2.6.0.tar.gz | source | sdist | null | false | 51e00909227b5ba7aeacceb66c135719 | b4cc1d989e4965f77b9184270ed7e537b60766498fb4c7ccd64247d84b6d77de | 2d75cc5bef6639efbdb9c70eba42460a1818783965028bdc99e1509c1fdd6f57 | MIT | [
"LICENSE.txt"
] | 256 |
2.4 | cavendo-engine | 0.1.0 | Python SDK for Cavendo Engine - AI agent workflow platform | # Cavendo Python SDK
A Python SDK for interacting with the Cavendo Engine API, designed for use with AI agent frameworks like CrewAI, LangChain, and AutoGen.
## Installation
```bash
pip install cavendo-engine
```
For development:
```bash
pip install cavendo-engine[dev]
```
## Quick Start
```python
from cavendo import CavendoClient
# Initialize with explicit credentials
client = CavendoClient(
url="http://localhost:3001",
api_key="cav_ak_your_api_key"
)
# Or use environment variables: CAVENDO_URL and CAVENDO_AGENT_KEY
client = CavendoClient()
# Get current agent info
agent = client.me()
print(f"Logged in as: {agent.name}")
# Get next task
task = client.tasks.next()
if task:
# Mark as in progress
client.tasks.update_status(task.id, "in_progress")
# Get task context
context = client.tasks.context(task.id)
# Search knowledge base
results = client.knowledge.search("relevant query", project_id=context.project["id"])
# Submit deliverable
deliverable = client.deliverables.submit(
task_id=task.id,
title="Analysis Report",
content="## Findings\n\n...",
content_type="markdown"
)
# Mark for review
client.tasks.update_status(task.id, "review")
# Always close when done
client.close()
```
### Using Context Manager
```python
from cavendo import CavendoClient
with CavendoClient() as client:
task = client.tasks.next()
# ... work with task
# Client is automatically closed
```
### Async Usage
```python
import asyncio
from cavendo import CavendoClient
async def main():
async with CavendoClient() as client:
agent = await client.me_async()
tasks = await client.tasks.list_all_async(status="pending")
for task in tasks:
context = await client.tasks.context_async(task.id)
# ... process task
asyncio.run(main())
```
## Configuration
### Environment Variables
| Variable | Description | Default |
|----------|-------------|---------|
| `CAVENDO_URL` | Base URL of the Cavendo Engine API | `http://localhost:3001` |
| `CAVENDO_AGENT_KEY` | Your agent's API key | Required |
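For example, in a shell (the variable names are the documented ones above; the values are placeholders):
```bash
export CAVENDO_URL="http://localhost:3001"
export CAVENDO_AGENT_KEY="cav_ak_your_api_key"
```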
### Client Options
```python
client = CavendoClient(
url="http://localhost:3001", # API base URL
api_key="cav_ak_...", # Agent API key
timeout=30.0, # Request timeout in seconds
max_retries=3, # Max retries for failed requests
)
```
## API Reference
### CavendoClient
The main client class for interacting with the Cavendo Engine API.
#### `client.me() -> Agent`
Get information about the current agent.
```python
agent = client.me()
print(f"Agent: {agent.name}")
print(f"Type: {agent.type}")
print(f"Scopes: {agent.scopes}")
print(f"Projects: {agent.project_ids}")
```
### Tasks API
Access via `client.tasks`.
#### `tasks.list_all(status?, project_id?, limit?, offset?) -> list[Task]`
List tasks assigned to the current agent.
```python
# All tasks
all_tasks = client.tasks.list_all()
# Filter by status
pending = client.tasks.list_all(status="pending")
in_progress = client.tasks.list_all(status="in_progress")
# Filter by project
project_tasks = client.tasks.list_all(project_id=5)
# Pagination
page2 = client.tasks.list_all(limit=10, offset=10)
```
#### `tasks.next() -> Task | None`
Get the next highest-priority pending task.
```python
task = client.tasks.next()
if task:
print(f"Next task: {task.title}")
```
#### `tasks.get(task_id) -> Task`
Get a specific task by ID.
```python
task = client.tasks.get(123)
```
#### `tasks.context(task_id) -> TaskContext`
Get full context for a task including project, related tasks, knowledge, and previous deliverables.
```python
context = client.tasks.context(123)
print(f"Project: {context.project['name']}")
print(f"Related tasks: {len(context.related_tasks)}")
print(f"Knowledge docs: {len(context.knowledge)}")
```
#### `tasks.update_status(task_id, status, progress?) -> Task`
Update task status.
```python
# Start working
client.tasks.update_status(123, "in_progress")
# Submit for review with progress info
client.tasks.update_status(
123,
"review",
progress={"steps_completed": 5, "total_steps": 5}
)
```
Valid statuses: `pending`, `assigned`, `in_progress`, `review`, `completed`, `cancelled`
### Deliverables API
Access via `client.deliverables`.
#### `deliverables.submit(task_id, title, content, content_type?, metadata?) -> Deliverable`
Submit a new deliverable.
```python
deliverable = client.deliverables.submit(
task_id=123,
title="Research Report",
content="## Executive Summary\n\n...",
content_type="markdown", # markdown, html, json, text, code
metadata={
"sources": ["https://example.com"],
"version": 1
}
)
```
#### `deliverables.get(deliverable_id) -> Deliverable`
Get a specific deliverable.
```python
deliverable = client.deliverables.get(456)
```
#### `deliverables.get_feedback(deliverable_id) -> Feedback | None`
Get feedback on a deliverable.
```python
feedback = client.deliverables.get_feedback(456)
if feedback:
print(f"Status: {feedback.status}")
print(f"Comments: {feedback.content}")
```
#### `deliverables.submit_revision(deliverable_id, content, title?, metadata?) -> Deliverable`
Submit a revision for a deliverable.
```python
revision = client.deliverables.submit_revision(
deliverable_id=456,
content="## Updated Report\n\n..."
)
print(f"Now at version: {revision.version}")
```
#### `deliverables.mine(status?, task_id?, limit?, offset?) -> list[Deliverable]`
List deliverables submitted by the current agent.
```python
# All deliverables
mine = client.deliverables.mine()
# Needing revision
to_revise = client.deliverables.mine(status="revision_requested")
# For a specific task
task_deliverables = client.deliverables.mine(task_id=123)
```
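Putting these calls together, a minimal revision loop might look like this (a sketch, not part of the SDK: it only combines the `mine`, `get_feedback`, and `submit_revision` calls documented above, and the `improve_content` helper is hypothetical):
```python
from cavendo import CavendoClient

def improve_content(original: str, comments: str) -> str:
    # Hypothetical: rework the deliverable based on reviewer comments,
    # e.g. via your agent framework or an LLM call.
    return original + f"\n\n<!-- revised per feedback: {comments} -->"

with CavendoClient() as client:
    # Pick up everything the reviewer sent back
    for deliverable in client.deliverables.mine(status="revision_requested"):
        feedback = client.deliverables.get_feedback(deliverable.id)
        if feedback is None:
            continue
        revision = client.deliverables.submit_revision(
            deliverable_id=deliverable.id,
            content=improve_content(deliverable.content, feedback.content),
        )
        print(f"Submitted v{revision.version} for deliverable {deliverable.id}")
```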
### Knowledge API
Access via `client.knowledge`.
#### `knowledge.search(query, project_id?, tags?, limit?) -> list[SearchResult]`
Search the knowledge base.
```python
results = client.knowledge.search(
query="pricing strategy",
project_id=3,
limit=10
)
for result in results:
print(f"{result.document.title} (score: {result.score:.2f})")
for highlight in result.highlights:
print(f" - {highlight}")
```
#### `knowledge.get(knowledge_id) -> KnowledgeDocument`
Get a specific knowledge document.
```python
doc = client.knowledge.get(5)
print(doc.content)
```
#### `knowledge.list_all(project_id?, tags?, limit?, offset?) -> list[KnowledgeDocument]`
List knowledge documents.
```python
docs = client.knowledge.list_all(project_id=3)
```
### Webhooks API
Access via `client.webhooks`. Requires `webhook:create` scope.
#### `webhooks.list_all() -> list[Webhook]`
List webhooks created by this agent.
```python
webhooks = client.webhooks.list_all()
```
#### `webhooks.create(url, events, active?) -> Webhook`
Create a new webhook.
```python
webhook = client.webhooks.create(
url="https://example.com/webhook",
events=["task.assigned", "deliverable.approved"]
)
print(f"Webhook secret: {webhook.secret}") # Save this!
```
Available events:
- `task.assigned`
- `task.updated`
- `deliverable.approved`
- `deliverable.revision_requested`
- `deliverable.rejected`
- `sprint.started`
- `project.knowledge_updated`
- `briefing.generated`
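To receive these events you need an HTTP endpoint. Below is a minimal stdlib receiver sketch; the payload key `event` is an assumption (the delivery format and signature scheme are not documented here), so inspect real deliveries and verify them against the webhook secret before trusting them:
```python
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

class WebhookHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        body = self.rfile.read(int(self.headers.get("Content-Length", 0)))
        payload = json.loads(body)
        # "event" as the key name is an assumption -- check your deliveries.
        print("received:", payload.get("event"))
        self.send_response(200)
        self.end_headers()

if __name__ == "__main__":
    HTTPServer(("0.0.0.0", 8000), WebhookHandler).serve_forever()
```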
#### `webhooks.update(webhook_id, url?, events?, active?) -> Webhook`
Update a webhook.
```python
# Disable webhook
client.webhooks.update(1, active=False)
# Change events
client.webhooks.update(1, events=["task.assigned"])
```
#### `webhooks.delete(webhook_id) -> None`
Delete a webhook.
```python
client.webhooks.delete(1)
```
## Data Types
### Task
```python
@dataclass
class Task:
id: int
title: str
description: str | None
status: TaskStatus # pending, assigned, in_progress, review, completed, cancelled
priority: int # 1-4
project_id: int | None
project_name: str | None
assignee_id: int | None
due_date: datetime | None
progress: dict
metadata: dict
created_at: datetime | None
updated_at: datetime | None
```
### Deliverable
```python
@dataclass
class Deliverable:
id: int
task_id: int
title: str
content: str
content_type: ContentType # markdown, html, json, text, code
status: DeliverableStatus # pending, approved, revision_requested, rejected
version: int
metadata: dict
feedback: str | None
created_at: datetime | None
updated_at: datetime | None
```
### KnowledgeDocument
```python
@dataclass
class KnowledgeDocument:
id: int
title: str
content: str
content_type: ContentType
project_id: int | None
tags: list[str]
metadata: dict
created_at: datetime | None
updated_at: datetime | None
```
## Error Handling
The SDK provides specific exception types for different error conditions:
```python
from cavendo import (
CavendoError, # Base exception
AuthenticationError, # 401 - Invalid API key
AuthorizationError, # 403 - Insufficient permissions
NotFoundError, # 404 - Resource not found
ValidationError, # 400 - Invalid request data
RateLimitError, # 429 - Rate limit exceeded
ServerError, # 5xx - Server error
CavendoConnectionError, # Network connection failed
CavendoTimeoutError, # Request timed out
)
try:
task = client.tasks.get(999999)
except NotFoundError as e:
print(f"Task not found: {e.message}")
except AuthenticationError as e:
print(f"Auth failed: {e.message}")
except RateLimitError as e:
print(f"Rate limited. Retry after: {e.retry_after} seconds")
except CavendoError as e:
print(f"API error [{e.status_code}]: {e.message}")
```
### ValidationError Details
```python
try:
client.deliverables.submit(task_id=123, title="", content="")
except ValidationError as e:
print(f"Validation failed: {e.message}")
for field, errors in e.errors.items():
print(f" {field}: {', '.join(errors)}")
```
## Integration Examples
### CrewAI Integration
```python
from crewai import Agent, Task, Crew
from crewai.tools import BaseTool
from cavendo import CavendoClient
class CavendoKnowledgeTool(BaseTool):
    name: str = "search_knowledge"  # type annotations are required by the pydantic-based BaseTool
    description: str = "Search Cavendo knowledge base"

    def __init__(self, client: CavendoClient):
        super().__init__()
        self._client = client

    def _run(self, query: str) -> str:
        results = self._client.knowledge.search(query)
        return "\n".join(r.document.content for r in results[:3])
# Create agent with Cavendo tools
client = CavendoClient()
knowledge_tool = CavendoKnowledgeTool(client)
researcher = Agent(
role="Researcher",
goal="Research topics using knowledge base",
tools=[knowledge_tool]
)
# See examples/crewai_integration.py for full example
```
### LangChain Integration
```python
from langchain.tools import BaseTool
from langchain.agents import AgentExecutor, create_openai_functions_agent
from cavendo import CavendoClient
class CavendoSearchTool(BaseTool):
    name: str = "cavendo_search"  # annotated fields, as the pydantic-based BaseTool requires
    description: str = "Search Cavendo knowledge base"

    def __init__(self, client: CavendoClient):
        super().__init__()
        self._client = client  # private attr; a plain `self.client` would clash with pydantic fields

    def _run(self, query: str) -> str:
        results = self._client.knowledge.search(query)
        return "\n".join(r.document.content for r in results)
# Create LangChain agent with Cavendo tools
client = CavendoClient()
tools = [CavendoSearchTool(client)]
# See examples/langchain_integration.py for full example
```
## Development
### Running Tests
```bash
# Install dev dependencies
pip install -e ".[dev]"
# Run tests
pytest
# Run with coverage
pytest --cov=cavendo
```
### Type Checking
```bash
mypy cavendo
```
### Linting
```bash
ruff check cavendo
ruff format cavendo
```
## License
MIT License - see [LICENSE](LICENSE) for details.
| text/markdown | null | Cavendo <dev@cavendo.co> | null | null | null | agents, ai, autogen, cavendo, crewai, langchain, workflow | [
"Development Status :: 4 - Beta",
"Intended Audience :: Developers",
"License :: OSI Approved :: MIT License",
"Operating System :: OS Independent",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.9",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python... | [] | null | null | >=3.9 | [] | [] | [] | [
"httpx>=0.25.0",
"mypy>=1.0.0; extra == \"dev\"",
"pytest-asyncio>=0.21.0; extra == \"dev\"",
"pytest-httpx>=0.21.0; extra == \"dev\"",
"pytest>=7.0.0; extra == \"dev\"",
"ruff>=0.1.0; extra == \"dev\""
] | [] | [] | [] | [
"Homepage, https://cavendo.co",
"Documentation, https://docs.cavendo.co/sdk/python",
"Repository, https://github.com/Cavendo/Engine",
"Issues, https://github.com/Cavendo/Engine/issues"
] | twine/6.2.0 CPython/3.14.3 | 2026-02-19T20:14:52.487554 | cavendo_engine-0.1.0.tar.gz | 55,939 | 5b/5a/8f5e3bfff40ec51079bab98d294d289be97b642698c588a55318f1e9c756/cavendo_engine-0.1.0.tar.gz | source | sdist | null | false | 234e292398251f0de0fa345eb0c9ceb2 | 0250ee285602ff70a13c8ebf8923b8d82f351178eaed93a972e5802ab88db476 | 5b5a8f5e3bfff40ec51079bab98d294d289be97b642698c588a55318f1e9c756 | MIT | [
"LICENSE"
] | 229 |
2.4 | univi | 0.4.5 | UniVI: a scalable multi-modal variational autoencoder toolkit for seamless integration and analysis of multimodal single-cell data. | # UniVI
[](https://pypi.org/project/univi/)
[](https://pepy.tech/project/univi)
[](https://anaconda.org/conda-forge/univi)
[](https://anaconda.org/conda-forge/univi)
[](https://pypi.org/project/univi/)
<picture>
<!-- Dark mode (GitHub supports this; PyPI may ignore <source>) -->
<source media="(prefers-color-scheme: dark)"
srcset="https://raw.githubusercontent.com/Ashford-A/UniVI/v0.4.5/assets/figures/univi_overview_dark.png">
<!-- Light mode / fallback (works on GitHub + PyPI) -->
<img src="https://raw.githubusercontent.com/Ashford-A/UniVI/v0.4.5/assets/figures/univi_overview_light.png"
alt="UniVI overview and evaluation roadmap"
width="100%">
</picture>
**UniVI** is a **multi-modal variational autoencoder (VAE)** framework for aligning and integrating single-cell modalities such as **RNA**, **ADT (CITE-seq)**, and **ATAC**.
It’s designed for experiments like:
* **Joint embedding** of paired multimodal data (CITE-seq, Multiome, TEA-seq)
* **Zero-shot projection** of external unimodal cohorts into a paired “bridge” latent
* **Cross-modal reconstruction / imputation** (RNA→ADT, ATAC→RNA, etc.)
* **Denoising** via learned generative decoders
* **Evaluation** (FOSCTTM, Recall@k, modality mixing/entropy, label transfer, fused-space clustering)
* **Optional supervised heads** for harmonized annotation and domain confusion
* **Optional transformer encoders** (per-modality and/or fused multimodal transformer posterior)
* **Token-level hooks** for interpretability (top-k indices; optional attention maps if enabled)
---
## Preprint
If you use UniVI in your work, please cite:
> Ashford AJ, Enright T, Somers J, Nikolova O, Demir E.
> **Unifying multimodal single-cell data with a mixture-of-experts β-variational autoencoder framework.**
> *bioRxiv* (2025; updated 2026). doi: [10.1101/2025.02.28.640429](https://doi.org/10.1101/2025.02.28.640429)
```bibtex
@article{Ashford2025UniVI,
title = {Unifying multimodal single-cell data with a mixture-of-experts β-variational autoencoder framework},
author = {Ashford, A. J. and Enright, T. and Somers, J. and Nikolova, O. and Demir, E.},
journal = {bioRxiv},
date = {2025},
doi = {10.1101/2025.02.28.640429},
url = {https://www.biorxiv.org/content/10.1101/2025.02.28.640429},
note = {Preprint (updated 2026)}
}
```
---
## Installation
### PyPI
```bash
pip install univi
```
> **Note:** UniVI requires PyTorch. If `import torch` fails, install PyTorch for your platform/CUDA from PyTorch’s official install instructions.
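For example, a CPU-only build can be installed from PyTorch's CPU wheel index (one of several platform-specific commands; use their install matrix for CUDA builds):
```bash
pip install torch --index-url https://download.pytorch.org/whl/cpu
```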
### Conda / mamba
```bash
conda install -c conda-forge univi
# or
mamba install -c conda-forge univi
```
### Development install (from source)
```bash
git clone https://github.com/Ashford-A/UniVI.git
cd UniVI
conda env create -f envs/univi_env.yml
conda activate univi_env
pip install -e .
```
---
## Data expectations
UniVI expects **per-modality AnnData** objects. For paired settings, modalities should share the same cells:
* Each modality is an `AnnData`
* Paired modalities have the same `obs_names` (same cells, same order)
* Raw counts often live in `.layers["counts"]`
* A model-ready representation lives in `.X` (or `.obsm["X_*"]` for ATAC LSI)
You can keep multiple representations around:
* `.layers["counts"]` = raw
* `.X` = model input (e.g., log1p normalized RNA, CLR ADT, LSI ATAC, etc.)
* `.layers["denoised_*"]` / `.layers["imputed_*"]` = UniVI outputs
---
## Quickstart (Python / Jupyter)
This is the “notebook path”: load paired AnnData → train → encode → evaluate/plot.
```python
import numpy as np
import scanpy as sc
import torch
from torch.utils.data import DataLoader, Subset
from univi import UniVIMultiModalVAE, ModalityConfig, UniVIConfig, TrainingConfig
from univi.data import MultiModalDataset, align_paired_obs_names
from univi.trainer import UniVITrainer
```
### 1) Load paired AnnData
```python
rna = sc.read_h5ad("path/to/rna_citeseq.h5ad")
adt = sc.read_h5ad("path/to/adt_citeseq.h5ad")
adata_dict = align_paired_obs_names({"rna": rna, "adt": adt})
```
### 2) Dataset + dataloaders
```python
device = "cuda" if torch.cuda.is_available() else ("mps" if torch.mps.is_available() else "cpu")
dataset = MultiModalDataset(
adata_dict=adata_dict,
X_key="X", # uses .X as model input
device=None, # dataset yields CPU tensors; model moves to GPU
)
n = rna.n_obs
idx = np.arange(n)
rng = np.random.default_rng(0)
rng.shuffle(idx)
split = int(0.8 * n)
train_idx, val_idx = idx[:split], idx[split:]
train_loader = DataLoader(Subset(dataset, train_idx), batch_size=256, shuffle=True, num_workers=0)
val_loader = DataLoader(Subset(dataset, val_idx), batch_size=256, shuffle=False, num_workers=0)
```
### 3) Model config + train
```python
univi_cfg = UniVIConfig(
latent_dim=30,
beta=1.15,
gamma=3.25,
encoder_dropout=0.10,
decoder_dropout=0.05,
kl_anneal_start=75,
kl_anneal_end=150,
align_anneal_start=100,
align_anneal_end=175,
modalities=[
# likelihood could also be: "mse", "nb", "zinb", "poisson",
# "bernoulli", etc. depending on closest modality input distribution
# and experiment goals (e.g., "bernoulli" for raw binarized ATAC peaks,
# "nb" or "zinb" for raw scRNA-seq count inputs, "gaussian" for most
# normalized/scaled feature inputs, like log-normed RNA, CLR-normed ADT,
# TF-IDF/LSI normed ATAC features)
ModalityConfig(
name="rna",
input_dim=rna.n_vars,
encoder_hidden=[1024, 512, 256, 128],
decoder_hidden=[128, 256, 512, 1024],
likelihood="gaussian",
),
ModalityConfig(
name="adt",
input_dim=adt.n_vars,
encoder_hidden=[256, 128, 64],
decoder_hidden=[64, 128, 256],
likelihood="gaussian",
),
],
)
train_cfg = TrainingConfig(
n_epochs=5000,
batch_size=256,  # match the DataLoader batch size above
lr=1e-3,
weight_decay=1e-4,
device=device,
log_every=25,
grad_clip=5.0,
num_workers=0,
seed=42,
early_stopping=True,
best_epoch_warmup=75,
patience=50, # in UniVI v0.4.1+
min_delta=0.0,
)
model = UniVIMultiModalVAE(
univi_cfg,
loss_mode="v1", # or "v2" - "v1" recommended (used in the manuscript)
v1_recon="avg",
normalize_v1_terms=True,
).to(device)
trainer = UniVITrainer(
model=model,
train_loader=train_loader,
val_loader=val_loader,
train_cfg=train_cfg,
device=device,
)
trainer.fit()
```
---
## After training: what you can do with a UniVI model
UniVI models are **generative** (decoders + likelihoods) and **alignment-oriented** (shared latent space). After training, you typically use two modules:
* `univi.evaluation`: encoding, denoising, cross-modal prediction (imputation), generation, and metrics
* `univi.plotting`: Scanpy/Matplotlib helpers for UMAPs, legends, confusion matrices, MoE gate plots, and reconstruction-error plots
### 0) Imports + plotting defaults
```python
import numpy as np
import scipy.sparse as sp
from univi.evaluation import (
encode_adata,
encode_fused_adata_pair,
cross_modal_predict,
denoise_adata,
denoise_from_multimodal,
evaluate_alignment,
reconstruction_metrics,
# NEW (generation + recon error workflows)
generate_from_latent,
fit_label_latent_gaussians,
sample_latent_by_label,
evaluate_cross_reconstruction,
)
from univi.plotting import (
set_style,
umap,
umap_by_modality,
compare_raw_vs_denoised_umap_features,
plot_confusion_matrix,
write_gates_to_obs,
plot_moe_gate_summary,
# NEW (reconstruction error plots)
plot_reconstruction_error_summary,
plot_featurewise_reconstruction_scatter,
)
set_style(font_scale=1.2, dpi=150)
device = "cuda" # or "mps" (Mac M-chips), or "cpu"
```
Helper for sparse matrices:
```python
def to_dense(X):
return X.toarray() if sp.issparse(X) else np.asarray(X)
```
---
## 1) Encode a modality into latent space (`.obsm["X_univi"]`)
Use this when you have **one observed modality at a time** (RNA-only, ADT-only, ATAC-only, etc.):
```python
Z_rna = encode_adata(
model,
adata=rna,
modality="rna",
device=device,
layer=None, # uses adata.X by default
X_key="X",
batch_size=1024,
latent="moe_mean", # {"moe_mean","moe_sample","modality_mean","modality_sample"}
random_state=0,
)
rna.obsm["X_univi"] = Z_rna
```
Then plot:
```python
umap(
rna,
obsm_key="X_univi",
color=["celltype.l2", "batch"],
legend="outside",
legend_subset_topk=25,
savepath="umap_rna_univi.png",
show=False,
)
```
---
## 2) Encode a *fused* multimodal latent (true paired/multi-observed cells)
When you have multiple observed modalities for the **same cells**, you can encode the *fused* posterior (and optionally MoE router gates/logits):
```python
fused = encode_fused_adata_pair(
model,
adata_by_mod={"rna": rna, "adt": adt}, # same obs_names, same order
device=device,
batch_size=1024,
use_mean=True,
return_gates=True,
return_gate_logits=True,
write_to_adatas=True, # writes obsm + gate columns
fused_obsm_key="X_univi_fused",
gate_prefix="gate",
)
# fused["Z_fused"] -> (n_cells, latent_dim)
# fused["gates"] -> (n_cells, n_modalities) or None (if fused transformer posterior is used)
```
Plot fused:
```python
umap(
rna,
obsm_key="X_univi_fused",
color=["celltype.l2", "batch"],
legend="outside",
savepath="umap_fused.png",
show=False,
)
```
---
## 3) Cross-modal prediction (imputation): encode source → decode target
Example: **RNA → ADT**. UniVI will automatically handle decoder output types internally (e.g. Gaussian returns tensor; NB returns `{"mu","log_theta"}`; ZINB returns `{"mu","log_theta","logit_pi"}`; Poisson returns `{"rate","log_rate"}`, etc.) and return the appropriate **mean-like** prediction.
```python
adt_hat_from_rna = cross_modal_predict(
model,
adata_src=rna,
src_mod="rna",
tgt_mod="adt",
device=device,
layer=None,
X_key="X",
batch_size=512,
use_moe=True,
)
adt.layers["imputed_from_rna"] = adt_hat_from_rna
```
---
## 4) Denoising (self-reconstruction or true fused denoising)
### Option A — self-denoise a single modality (same as “reconstruct”)
```python
denoise_adata(
model,
adata=rna,
modality="rna",
device=device,
out_layer="denoised_self",
overwrite_X=False,
batch_size=512,
)
```
### Option B — true multimodal denoising via fused latent
```python
denoise_adata(
model,
adata=rna, # output written here
modality="rna",
device=device,
out_layer="denoised_fused",
overwrite_X=False,
batch_size=512,
adata_by_mod={"rna": rna, "adt": adt},
layer_by_mod={"rna": None, "adt": None}, # None -> use .X
X_key_by_mod={"rna": "X", "adt": "X"},
use_mean=True,
)
```
Compare raw vs denoised marker overlays:
```python
compare_raw_vs_denoised_umap_features(
rna,
obsm_key="X_univi",
features=["MS4A1", "CD3D", "NKG7"],
raw_layer=None,
denoised_layer="denoised_fused",
savepath="umap_raw_vs_denoised.png",
show=False,
)
```
---
## 5) Quantify reconstruction / imputation error vs ground truth
You can compute **featurewise + summary** errors between:
* **cross-reconstructed** (RNA→ADT, ATAC→RNA, …)
* **denoised** outputs (self or fused)
* and the **true observed** data
### A) Basic metrics on two matrices
```python
true = to_dense(adt.X)
pred = adt.layers["imputed_from_rna"]
m = reconstruction_metrics(true, pred)
print("MSE mean:", m["mse_mean"])
print("Pearson mean:", m["pearson_mean"])
```
### B) One-call evaluation for cross-reconstruction / denoising
This will:
1. generate predictions via UniVI (handling decoder output types correctly),
2. align to the requested truth matrix (layer/X_key), and
3. return metrics + optional per-feature vectors.
```python
rep = evaluate_cross_reconstruction(
model,
adata_src=rna,
adata_tgt=adt,
src_mod="rna",
tgt_mod="adt",
device=device,
src_layer=None,
tgt_layer=None,
batch_size=512,
# optionally restrict to a feature subset (e.g., top markers)
feature_names=None,
)
print(rep["summary"]) # mse_mean/median, pearson_mean/median, etc.
```
Plot reconstruction-error summaries:
```python
plot_reconstruction_error_summary(
rep,
title="RNA → ADT imputation error",
savepath="recon_error_summary.png",
show=False,
)
```
And featurewise scatter (true vs predicted) for selected features:
```python
plot_featurewise_reconstruction_scatter(
rep,
features=["CD3", "CD4", "MS4A1"],
savepath="recon_scatter_selected_features.png",
show=False,
)
```
---
## 6) Alignment evaluation (FOSCTTM, Recall@k, mixing/entropy, label transfer, gates)
```python
metrics = evaluate_alignment(
Z1=rna.obsm["X_univi"],
Z2=adt.obsm["X_univi"],
metric="euclidean",
recall_ks=(1, 5, 10),
k_mixing=20,
k_entropy=30,
labels_source=rna.obs["celltype.l2"].to_numpy(),
labels_target=adt.obs["celltype.l2"].to_numpy(),
compute_bidirectional_transfer=True,
k_transfer=15,
json_safe=True,
)
```
Confusion matrix:
```python
plot_confusion_matrix(
np.asarray(metrics["label_transfer_cm"]),
labels=np.asarray(metrics["label_transfer_label_order"]),
title="Label transfer (RNA → ADT)",
normalize="true",
savepath="label_transfer_confusion.png",
show=False,
)
```
---
## 7) Generate new data from latent space (sampling / “in silico cells”)
UniVI decoders define a likelihood per modality (Gaussian, NB, ZINB, Poisson, Bernoulli, etc.). Generation is done as:
1. pick latent samples `z ~ p(z)` (or a conditional latent distribution)
2. decode with the modality decoder(s)
3. return **mean-like reconstructions** or (optionally) sample from the likelihood
### A) Unconditional generation (standard normal prior)
```python
Xgen = generate_from_latent(
model,
n=5000,
target_mod="rna",
device=device,
z_source="prior", # "prior" or provide z directly
return_mean=True, # mean-like output
sample_likelihood=False, # if True: sample from likelihood when supported
)
# Xgen shape: (5000, n_genes)
```
### B) Cell-type–conditioned generation via empirical latent neighborhoods
This is the “no classifier head needed” option:
1. encode a reference cohort
2. pick cells with a given label
3. sample around their latent distribution (Gaussian fit, or jitter)
```python
Z = rna.obsm["X_univi"]
labels = rna.obs["celltype.l2"].to_numpy()
# Fit a per-label Gaussian in latent space
label_gauss = fit_label_latent_gaussians(Z, labels)
# Sample latent points for a chosen label
z_B = sample_latent_by_label(label_gauss, label="B cell", n=2000, random_state=0)
# Decode to RNA space
X_B = generate_from_latent(
model,
z=z_B,
target_mod="rna",
device=device,
return_mean=True,
)
```
### C) Cluster-aware generation (no annotations required)
If you don’t have labels, you can cluster `Z` (e.g., k-means), fit cluster Gaussians, then sample by cluster id.
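A minimal sketch of that recipe, reusing the helpers from option B with k-means cluster ids in place of labels (assuming the helpers accept arbitrary string labels):
```python
from sklearn.cluster import KMeans

Z = rna.obsm["X_univi"]
cluster_ids = KMeans(n_clusters=20, random_state=0).fit_predict(Z)

# Treat cluster ids as labels and sample around, e.g., cluster 7
cluster_gauss = fit_label_latent_gaussians(Z, cluster_ids.astype(str))
z_c7 = sample_latent_by_label(cluster_gauss, label="7", n=1000, random_state=0)
X_c7 = generate_from_latent(model, z=z_c7, target_mod="rna", device=device, return_mean=True)
```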
### D) Head-guided generation (optional, when a classifier head exists)
If you trained a classification head, you can optionally *bias* latent selection toward a desired label by filtering or optimizing candidate z’s (implementation depends on your head setup). UniVI supports this workflow when the head is present, but the **label-agnostic Gaussian/cluster methods work everywhere**.
---
## 8) MoE gating diagnostics (precision contributions + optional learnable router)
UniVI can report per-cell modality **contribution weights** for the **analytic fusion** path (MoE/PoE-style).
There are two related notions of “who contributed how much” to the fused latent:
- **Precision-only (always available):** derived from each modality’s posterior uncertainty in latent space.
- **Router × precision (optional):** if your trained model exposes **router logits**, UniVI can combine
router probabilities with precision to produce contribution weights.
> Note: This section applies to **analytic fusion** (Gaussian experts in latent space).
> If you use a **fused transformer posterior**, there may be no analytic precision/router attribution
> and gates can be unavailable or not meaningful.
### A) Compute per-cell contribution weights (recommended)
```python
from univi.evaluation import to_dense, encode_moe_gates_from_tensors
from univi.plotting import write_gates_to_obs, plot_moe_gate_summary
gate = encode_moe_gates_from_tensors(
model,
x_dict={"rna": to_dense(rna.X), "adt": to_dense(adt.X)},
device=device,
batch_size=1024,
modality_order=["rna", "adt"],
kind="router_x_precision", # will fall back to "effective_precision" if router logits are unavailable
return_logits=True,
)
W = gate["weights"] # (n_cells, n_modalities), rows sum to 1
mods = gate["modality_order"] # e.g. ["rna", "adt"]
print("Requested kind:", gate.get("requested_kind"))
print("Effective kind:", gate.get("kind"))
print("Per-modality mean:", gate.get("per_modality_mean"))
print("Has logits:", gate.get("logits") is not None)
```
If you want **precision-only** weights (no router influence), set `kind="effective_precision"`.
### B) Write weights to `.obs` (for plotting / grouping)
```python
write_gates_to_obs(
rna,
gates=W,
modality_names=mods,
gate_prefix="moe_gate", # creates obs cols: moe_gate_{mod}
gate_logits=gate.get("logits"), # optional; may be None
)
```
### C) Plot contribution usage (overall + grouped)
```python
plot_moe_gate_summary(
rna,
gate_prefix="moe_gate",
groupby="celltype.l3", # or "celltype.l2", "batch", etc.
agg="mean",
savepath="moe_gates_by_celltype.png",
show=False,
)
```
### D) Optional: log gates alongside alignment metrics
`evaluate_alignment(...)` evaluates geometric alignment (FOSCTTM, Recall@k, mixing/entropy, label transfer).
If you want to save gate summaries alongside those metrics, just merge dictionaries:
```python
from univi.evaluation import evaluate_alignment
metrics = evaluate_alignment(
Z1=rna.obsm["X_univi"],
Z2=adt.obsm["X_univi"],
labels_source=rna.obs["celltype.l3"].to_numpy(),
labels_target=adt.obs["celltype.l3"].to_numpy(),
json_safe=True,
)
metrics["moe_gates"] = {
"kind": gate.get("kind"),
"requested_kind": gate.get("requested_kind"),
"modality_order": mods,
"per_modality_mean": gate.get("per_modality_mean"),
# (optional) store full matrices; omit if you want small JSON
# "weights": W,
# "logits": gate.get("logits"),
}
```
---
### Decoder output types (what UniVI handles for you)
Decoders can return either:
* a tensor (e.g. GaussianDecoder → `X_hat`)
* or a dict (e.g. NB → `{"mu","log_theta"}`, ZINB → `{"mu","log_theta","logit_pi"}`, Poisson → `{"rate","log_rate"}`, Bernoulli/Categorical → `{"logits", "probs"}`)
All post-training utilities above (`cross_modal_predict`, `denoise_*`, `generate_from_latent`, and reconstruction-eval helpers) are designed to **unwrap decoder outputs safely** and consistently return a sensible **mean-like** matrix for evaluation/plotting.
---
## Advanced topics
### Training objectives (v1 vs v2/lite)
* **v1 (“paper”)**: per-modality posteriors + reconstruction scheme (cross/self/avg) + posterior alignment across modalities
* **v2/lite**: fused posterior (MoE/PoE-style by default; optional fused transformer) + per-modality recon + β·KL + γ·alignment (L2 on latent means)
Choose via `loss_mode` at construction time (Python) or config JSON (scripts).
## Advanced model features
This section covers the “advanced” knobs in `univi/models/univi.py` and when to use them. Everything below is optional: you can train and evaluate UniVI without touching any of it.
---
### 1) Fused multimodal transformer posterior (optional)
**What it is:**
A *single* fused encoder that tokenizes each observed modality, concatenates tokens, runs a multimodal transformer, and outputs a fused posterior `(mu_fused, logvar_fused)`.
**Why you’d use it:**
- You want the posterior to be learned jointly across modalities (rather than fused analytically via PoE/MoE precision fusion).
- You want token-level interpretability hooks (e.g., ATAC top-k peak indices; optional attention maps if enabled in the encoder stack).
- You want a learnable “cross-modality mixing” mechanism beyond precision fusion.
**How to enable (config):**
- Set `cfg.fused_encoder_type = "multimodal_transformer"`.
- Optionally set:
- `cfg.fused_modalities = ["rna","adt","atac"]` (defaults to all)
- `cfg.fused_require_all_modalities = True` (default): only use fused posterior when all required modalities are present; otherwise falls back to `mixture_of_experts()`.
**Key API points:**
- Training: the model will automatically decide whether to use fused encoder or fallback based on presence and `fused_require_all_modalities`.
- Encoding: use `model.encode_fused(...)` to get the fused latent and optionally gates from fallback fusion.
```python
mu, logvar, z = model.encode_fused(
{"rna": X_rna, "adt": X_adt, "atac": X_atac},
use_mean=True,
)
```
---
### 2) Attention bias for transformer encoders (distance bias for ATAC, optional)
**What it is:**
A safe, optional attention bias that can encourage local genomic context for tokenized ATAC (or any modality tokenizer that supports it). It’s a **no-op** unless:
* the encoder is transformer-based *and*
* its tokenizer exposes `build_distance_attn_bias()` *and*
* you pass `attn_bias_cfg`.
**Why you’d use it:**
* ATAC token sets are sparse and positional: distance-aware attention can help the transformer focus on local regulatory structure.
**How to use (forward / encode / predict):**
Pass `attn_bias_cfg` into `forward(...)`, `encode_fused(...)`, or `predict_heads(...)`.
```python
attn_bias_cfg = {
"atac": {"type": "distance", "lengthscale_bp": 50_000, "same_chrom_only": True}
}
out = model(x_dict, epoch=ep, attn_bias_cfg=attn_bias_cfg)
mu, logvar, z = model.encode_fused(x_dict, attn_bias_cfg=attn_bias_cfg)
pred = model.predict_heads(x_dict, attn_bias_cfg=attn_bias_cfg)
```
**Notes:**
* For the *fused* multimodal transformer posterior, UniVI applies distance bias *within* the ATAC token block and leaves cross-modality blocks neutral (0), so it won’t artificially “force” cross-modality locality.
---
### 3) Learnable MoE gating for fusion (optional)
**What it is:**
A learnable gate that produces per-cell modality weights and uses them to scale per-modality precisions before PoE-style fusion. This is **off by default**; without it, fusion is pure precision fusion.
**Why you’d use it:**
* Modalities have variable quality per cell (e.g., low ADT counts, sparse ATAC, stressed RNA).
* You want a *data-driven* “trust score” per modality per cell.
* You want interpretable per-cell reliance weights (gate weights) to diagnose integration behavior.
**How to enable (config):**
* `cfg.use_moe_gating = True`
* Optional:
* `cfg.moe_gating_type = "per_modality"` (default) or `"shared"`
* `cfg.moe_gating_hidden = [..]`, `cfg.moe_gating_dropout`, `cfg.moe_gating_batchnorm`, `cfg.moe_gating_activation`
* `cfg.moe_gate_eps` to avoid exact zeros in gated precisions
**How to retrieve gates:**
Use `encode_fused(..., return_gates=True)` (works when not using fused transformer posterior; if fused posterior is used, gates are `None`).
```python
mu, logvar, z, gates, gate_logits = model.encode_fused(
x_dict,
use_mean=True,
return_gates=True,
return_gate_logits=True,
)
# gates: (n_cells, n_modalities) in the model's modality order
```
**Tip:**
Gate weights are useful for plots like “ADT reliance by celltype” or identifying low-quality subsets.
---
### 4) Multi-head supervised decoders (classification + adversarial heads)
UniVI supports two supervised head systems:
#### A) Legacy single label head (kept for backwards compatibility)
**What it is:**
A single categorical head via `label_decoder` controlled by init args:
* `n_label_classes`, `label_loss_weight`, `label_ignore_index`, `classify_from_mu`, `label_head_name`
**When to use it:**
If you already rely on the legacy label head in notebooks/scripts and want a stable API.
**Label names helpers:**
```python
model.set_label_names(["B", "T", "NK", ...])
```
#### B) New `cfg.class_heads` multi-head system (recommended for new work)
**What it is:**
Any number of heads defined via `ClassHeadConfig`. Heads can be:
* **categorical**: softmax + cross-entropy
* **binary**: single logit + BCEWithLogitsLoss (optionally with `pos_weight`)
Heads can also be **adversarial**: they apply a gradient reversal layer (GRL) to encourage invariance (domain confusion).
**Why you’d use it:**
* Predict multiple labels simultaneously (celltype, batch, donor, tissue, QC flags, etc.).
* Add domain-adversarial training (e.g., suppress batch/donor information).
* Semi-supervised setups where only some labels exist per head.
**How labels are passed at training time:**
`y` should be a dict keyed by head name:
```python
y = {
"celltype": celltype_ids, # categorical (shape [B] or one-hot [B,C])
"batch": batch_ids, # adversarial categorical, for batch-invariant latents
"is_doublet": doublet_01, # binary head (0/1, ignore_index supported)
}
out = model(x_dict, epoch=ep, y=y)
```
**How to predict heads after training:**
Use `predict_heads(...)` to run encoding + head prediction in one call.
```python
pred = model.predict_heads(x_dict, return_probs=True)
# pred[head] returns probabilities (softmax for categorical, sigmoid for binary)
```
**Head label name helpers (categorical):**
```python
model.set_head_label_names("celltype", ["B", "T", "NK", ...])
```
**Inspect head configuration (useful for logging):**
```python
meta = model.get_classification_meta()
```
---
### 5) Label expert injection into the fused posterior (semi-supervised “label as expert”)
**What it is:**
Optionally treats labels as an additional expert by encoding the label into a Gaussian posterior and fusing it with the base fused posterior. Controlled by:
* `use_label_encoder=True` and `n_label_classes>0`
* `label_encoder_warmup` (epoch threshold before injection starts)
* `label_moe_weight` (how strong labels influence fusion)
* `unlabeled_logvar` (large => labels contribute little when missing)
**Why you’d use it:**
* Semi-supervised alignment: labels can stabilize the latent when paired signals are weak.
* Controlled injection after warmup to avoid early collapse.
**How to use in encoding:**
`encode_fused(..., inject_label_expert=True, y=...)`
```python
mu, logvar, z = model.encode_fused(
x_dict,
epoch=ep,
y={"label": y_ids}, # or just pass y_ids if using legacy path
inject_label_expert=True,
)
```
---
### 6) Recon scaling across modalities (important when dims differ a lot)
**What it is:**
Per-modality reconstruction losses are typically summed across features; large modalities (RNA) can dominate gradients. UniVI supports:
* `recon_normalize_by_dim` + `recon_dim_power` (divide by `D**power`)
* per-modality `ModalityConfig.recon_weight`
**Defaults:**
* v1-style losses: normalize is off by default, `power=0.5`
* v2/lite: normalize is on by default, `power=1.0`
**Why you’d use it:**
* Stabilize training when RNA has 2k–20k dims but ADT has 30–200 dims and ATAC-LSI has ~50–500 dims.
* Tune modality balance without hand-waving.
**How to tune:**
* For “equal per-cell contribution” across modalities: `recon_normalize_by_dim=True` and `recon_dim_power=1.0`
* If you want a softer correction: `power=0.5`
* Or set `recon_weight` per modality.
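As a sketch, these knobs would sit alongside the rest of the config (assuming `recon_normalize_by_dim`/`recon_dim_power` are `UniVIConfig` fields and `recon_weight` a `ModalityConfig` field, as the names above suggest):
```python
univi_cfg = UniVIConfig(
    latent_dim=30,
    recon_normalize_by_dim=True,  # divide each modality's recon loss by D**power
    recon_dim_power=1.0,          # 1.0 = equal per-cell contribution; 0.5 = softer
    modalities=[
        ModalityConfig(name="rna", input_dim=rna.n_vars, likelihood="gaussian"),
        ModalityConfig(name="adt", input_dim=adt.n_vars, likelihood="gaussian",
                       recon_weight=2.0),  # manually up-weight a small modality
    ],  # other fields (hidden sizes, etc.) as in the quickstart
)
```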
---
### 7) Convenience APIs you’ll actually call
#### `encode_fused(...)`
**Purpose:** Encode any subset of modalities into a fused posterior, with optional gate outputs.
```python
mu, logvar, z = model.encode_fused(
x_dict,
epoch=0,
use_mean=True, # True: return mu; False: sample
inject_label_expert=True,
attn_bias_cfg=None,
)
# Optional: get fusion gates (only when fused transformer posterior is NOT used)
mu, logvar, z, gates, gate_logits = model.encode_fused(
x_dict,
return_gates=True,
return_gate_logits=True,
)
```
#### `predict_heads(...)`
**Purpose:** Encode fused latent, then emit probabilities/logits for the legacy head + all multi-head configs.
```python
pred = model.predict_heads(x_dict, return_probs=True)
# pred[head] -> probs (softmax/sigmoid)
```
---
## Repository structure
```text
UniVI/
├── README.md # Project overview, installation, quickstart
├── LICENSE # MIT license text file
├── pyproject.toml # Python packaging config (pip / PyPI)
├── assets/ # Static assets used by README/docs
│ └── figures/ # Schematic figure(s) for repository front page
├── conda.recipe/ # Conda build recipe (for conda-build)
│ └── meta.yaml
├── envs/ # Example conda environments
│ ├── UniVI_working_environment.yml
│ ├── UniVI_working_environment_v2_full.yml
│ ├── UniVI_working_environment_v2_minimal.yml
│ └── univi_env.yml # Recommended env (CUDA-friendly)
├── data/ # Small example data notes (datasets are typically external)
│ └── README.md # Notes on data sources / formats
├── notebooks/ # Jupyter notebook analyses to reproduce figures from our revised manuscript (in progress for Genome Research)
│ ├── UniVI_manuscript_GR-Figure__2__CITE_paired.ipynb
│ ├── UniVI_manuscript_GR-Figure__3__CITE_paired_biological_latent.ipynb
│ ├── UniVI_manuscript_GR-Figure__4__Multiome_paired.ipynb
│ ├── UniVI_manuscript_GR-Figure__5__Multiome_bridge_mapping_and_fine-tuning.ipynb
│ ├── UniVI_manuscript_GR-Figure__6__TEA-seq_tri-modal.ipynb
│ ├── UniVI_manuscript_GR-Figure__7__AML_bridge_mapping_and_fine-tuning.ipynb
│ ├── UniVI_manuscript_GR-Figure__8__benchmarking_against_pytorch_tools.ipynb
│ ├── UniVI_manuscript_GR-Figure__8__benchmarking_against_R_tools.ipynb
│ ├── UniVI_manuscript_GR-Figure__8__benchmarking_merging_and_plotting_runs.ipynb
│ ├── UniVI_manuscript_GR-Figure__9__paired_data_ablation_and_computational_scaling_performance.ipynb
│ ├── UniVI_manuscript_GR-Figure__9__paired_data_ablation_and_computational_scaling_performance_compile_plots_from_results_df.ipynb
│ ├── UniVI_manuscript_GR-Figure_10__cell_population_ablation_MoE.ipynb
│ ├── UniVI_manuscript_GR-Figure_10__cell_population_ablation_MoE_compile_plots_from_results_df.ipynb
│ ├── UniVI_manuscript_GR-Supple_____grid-sweep.ipynb
│ └── UniVI_manuscript_GR-Supple_____grid-sweep_compile_plots_from_results_df.ipynb
├── parameter_files/ # JSON configs for model + training + data selectors
│ ├── defaults_*.json # Default configs (per experiment)
│ └── params_*.json # Example “named” configs (RNA, ADT, ATAC, etc.)
├── scripts/ # Reproducible entry points (revision-friendly)
│ ├── train_univi.py # Train UniVI from a parameter JSON
│ ├── evaluate_univi.py # Evaluate trained models (FOSCTTM, label transfer, etc.)
│ ├── benchmark_univi_citeseq.py # CITE-seq-specific benchmarking script
│ ├── run_multiome_hparam_search.py
│ ├── run_frequency_robustness.py # Composition/frequency mismatch robustness
│ ├── run_do_not_integrate_detection.py # “Do-not-integrate” unmatched population demo
│ ├── run_benchmarks.py # Unified wrapper (includes optional Harmony baseline)
│ └── revision_reproduce_all.sh # One-click: reproduces figures + supplemental tables
└── univi/ # UniVI Python package (importable as `import univi`)
├── __init__.py # Package exports and __version__
├── __main__.py # Enables: `python -m univi ...`
├── cli.py # Minimal CLI (e.g., export-s1, encode)
├── pipeline.py # Config-driven model+data loading; latent encoding helpers
├── diagnostics.py # Exports Supplemental_Table_S1.xlsx (env + hparams + dataset stats)
├── config.py # Config dataclasses (UniVIConfig, ModalityConfig, TrainingConfig)
├── data.py # Dataset wrappers + matrix selectors (layer/X_key, obsm support)
├── evaluation.py # Metrics (FOSCTTM, mixing, label transfer, feature recovery)
├── matching.py # Modality matching / alignment helpers
├── objectives.py # Losses (ELBO variants, KL/alignment annealing, etc.)
├── plotting.py # Plotting helpers + consistent style defaults
├── trainer.py # UniVITrainer: training loop, logging, checkpointing
├── interpretability.py # Helper scripts for transformer token weight interpretability
├── figures/ # Package-internal figure assets (placeholder)
│ └── .gitkeep
├── models/ # VAE architectures + building blocks
│ ├── __init__.py
│ ├── mlp.py # Shared MLP building blocks
│ ├── encoders.py # Modality encoders (MLP + transformer + fused transformer)
│ ├── decoders.py # Likelihood-specific decoders (NB, ZINB, Gaussian, etc.)
│ ├── transformer.py # Transformer blocks + encoder (+ optional attn bias support)
│ ├── tokenizer.py # Tokenization configs/helpers (top-k / patch)
│ └── univi.py # Core UniVI multi-modal VAE
├── hyperparam_optimization/ # Hyperparameter search scripts
│ ├── __init__.py
│ ├── common.py
│ ├── run_adt_hparam_search.py
│ ├── run_atac_hparam_search.py
│ ├── run_citeseq_hparam_search.py
│ ├── run_multiome_hparam_search.py
│ ├── run_rna_hparam_search.py
│ └── run_teaseq_hparam_search.py
└── utils/ # General utilities
├── __init__.py
├── io.py # I/O helpers (AnnData, configs, checkpoints)
├── logging.py # Logging configuration / progress reporting
├── seed.py # Reproducibility helpers (seeding RNGs)
├── stats.py # Small statistical helpers / transforms
└── torch_utils.py # PyTorch utilities (device, tensor helpers)
```
---
## License
MIT License — see `LICENSE`.
---
## Contact, questions, and bug reports
* **Questions / comments:** open a GitHub Issue with the `question` label (or use Discussions)
* **Bug reports:** include:
* UniVI version: `python -c "import univi; print(univi.__version__)"`
* a minimal notebook/code snippet
* stack trace + OS/CUDA/PyTorch versions
| text/markdown | null | "Andrew J. Ashford" <ashforda@ohsu.edu> | null | null | MIT License
Copyright (c) 2025 Andrew J. Ashford
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the “Software”), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED “AS IS”, WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.
| null | [
"License :: OSI Approved :: MIT License",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3 :: Only"
] | [] | null | null | >=3.10 | [] | [] | [] | [
"numpy>=1.26",
"scipy>=1.11",
"pandas>=2.1",
"anndata>=0.10",
"scanpy>=1.11",
"torch>=2.2",
"scikit-learn>=1.3",
"h5py>=3.10",
"pyyaml>=6.0",
"matplotlib>=3.8",
"seaborn>=0.13",
"igraph>=0.11",
"leidenalg>=0.10",
"tqdm>=4.66",
"openpyxl>=3.1",
"harmonypy>=0.0.9; extra == \"bench\""
] | [] | [] | [] | [
"Homepage, https://github.com/Ashford-A/UniVI",
"Repository, https://github.com/Ashford-A/UniVI",
"Bug Tracker, https://github.com/Ashford-A/UniVI/issues"
] | twine/6.2.0 CPython/3.10.19 | 2026-02-19T20:14:13.243816 | univi-0.4.5.tar.gz | 123,081 | df/de/3cdecd2281d1373eba5c1c8fe12015fd4cc66011c9c21ccaa572d2037c6a/univi-0.4.5.tar.gz | source | sdist | null | false | 90e353d240f7a0d16d9ea80709811dcd | 9c3c4ccef2e361fc68774829d059f6d30ff1dae6690e374ed8b611ca74ab4bdc | dfde3cdecd2281d1373eba5c1c8fe12015fd4cc66011c9c21ccaa572d2037c6a | null | [
"LICENSE"
] | 222 |
2.4 | sebas-calculator | 0.1 | A simple calculator that performs basic arithmetic operations. | # Sebas-Calculator
A simple console calculator that performs basic arithmetic operations, written in Python.
## Available operations
| Command | Description |
| ---------------- | ----------------------------- |
| `suma` | Adds two numbers |
| `resta` | Subtracts two numbers |
| `multiplicacion` | Multiplies two numbers |
| `division` | Divides two numbers |
| `salir` | Exits the program |
## Requirements
- Python >= 3.6
## Installation
### From PyPI
```bash
pip install sebas-calculator
```
### From source
```bash
git clone https://github.com/sebas-calculator/sebas-calculator.git
cd sebas-calculator
pip install .
```
## Usage
### As a console program
```bash
python -m sebas_calculator
```
The program will ask you for:
1. The operation to perform (`suma`, `resta`, `multiplicacion`, `division`).
2. Two integers.
3. The result is then printed on screen.
Type `salir` to exit the program.
### As a library in your code
```python
from sebas_calculator.operations import suma, resta, multiplicacion, division
resultado = suma(5, 3) # 8
resultado = resta(10, 4) # 6
resultado = multiplicacion(3, 7) # 21
resultado = division(15, 3) # 5.0
```
También puedes usar la función `operate` para elegir la operación dinámicamente:
```python
from sebas_calculator.operations import operate
resultado = operate("suma", 5, 3) # 8
```
## Ejemplo
```
Ingrese la operación a realizar (suma, resta, multiplicacion, division) o 'salir' para terminar: suma
Ingrese el primer número: 5
Ingrese el segundo número: 3
El resultado de la suma es: 8
```
## Project structure
```
setup.py
sebas-calculator/
main.py # Program entry point
operations.py # Arithmetic operation functions
```
## Contributing
1. Fork the repository.
2. Create a branch for your feature (`git checkout -b feature/nueva-operacion`).
3. Commit your changes (`git commit -m 'Añadir nueva operación'`).
4. Push the branch (`git push origin feature/nueva-operacion`).
5. Open a Pull Request.
## License
This project is licensed under the MIT license. See the [LICENSE](LICENSE) file for details.
## Author
Sebastian Torregroza
| text/markdown | Sebastian Torregroza | sebastorregroza6@gmail.com | null | null | null | null | [] | [] | https://github.com/Sebas200702/sebas-calculator | null | >=3.6 | [] | [] | [] | [] | [] | [] | [] | [] | twine/6.2.0 CPython/3.13.9 | 2026-02-19T20:13:40.533934 | sebas_calculator-0.1.tar.gz | 3,133 | 19/c1/ccf6e18a69b444054daf0ddde387759e1a6b6ce894dca9e93cadd34ae04e/sebas_calculator-0.1.tar.gz | source | sdist | null | false | 074529780cce593449ac20a92faa11e0 | 40835326ea97adc1d76de20ab6175ace126b2078809cb390f8014a681d792104 | 19c1ccf6e18a69b444054daf0ddde387759e1a6b6ce894dca9e93cadd34ae04e | null | [] | 235 |
2.4 | kodudo | 0.2.0 | Cook your data into documents using Jinja2 templates | # Kodudo
[](https://pypi.org/project/kodudo/)
[](https://pypi.org/project/kodudo/)
[](https://www.gnu.org/licenses/gpl-3.0)
[](https://github.com/astral-sh/ruff)
**Kodudo** is a Bororo word for *"to cook"*.
It is a minimal, functional Python tool that cooks your data into documents using Jinja2 templates. Designed to work seamlessly with [aptoro](https://github.com/plataformasindigenas/aptoro), it separates data preparation from presentation, allowing you to transform validated data into HTML, Markdown, or any other text format.
## Features
- **Data Agnostic:** Works natively with Aptoro exports, plain JSON lists, or generic wrappers.
- **Jinja2 Powered:** Leverage the full power of Jinja2 templates, inheritance, and macros.
- **Configuration over Code:** Define complex rendering pipelines in simple YAML files.
- **Context Aware:** Automatically injects metadata, configuration, and custom context into templates.
- **Multi-Output:** Render multiple output files from a single config with `outputs`.
- **Per-Record Rendering:** Generate one file per data record with `foreach` and path interpolation.
- **Multi-Format:** Output to HTML, Markdown, text, or any text-based format.
## Installation
```bash
pip install kodudo
```
## CLI Usage
Kodudo provides a command-line interface for "cooking" documents from configuration files.
```bash
# Cook a single configuration file
kodudo cook config.yaml
# Cook multiple files at once
kodudo cook config1.yaml config2.yaml
# Use shell expansion
kodudo cook configs/*.yaml
```
## Quick Start
```python
import kodudo
# Cook using a config file (same as CLI)
paths = kodudo.cook("config.yaml")
# Cook with runtime overrides (no temp files needed)
batch = kodudo.load_config("config.yaml")
for locale in ("pt", "en"):
kodudo.cook_from_config(
batch.config,
context={"lang": locale},
output=f"docs/{locale}/page.html",
)
# Render directly to a string
html = kodudo.render(
data=[{"name": "Alice"}, {"name": "Bob"}],
template="templates/users.html.j2",
context={"title": "User List"},
)
```
## Documentation
For full details on configuration options, template variables, and data formats, see the [Documentation](DOCS.md).
## Configuration
Define your rendering process in a YAML configuration file:
```yaml
# Single output
input: data/records.json
template: templates/page.html.j2
output: output/page.html
context:
title: "My Documents"
```
```yaml
# Multi-output (e.g., locales)
input: data/records.json
template: templates/page.html.j2
outputs:
- output: en/page.html
context: { lang: en }
- output: pt/page.html
context: { lang: pt }
```
```yaml
# Per-record rendering
input: data/articles.json
template: templates/article.html.j2
output: articles/{article.slug}.html
foreach: article
```
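For illustration, a per-record template such as `templates/article.html.j2` might look like the following. The `article` variable is the name declared by `foreach`; the `title` and `body` fields are hypothetical stand-ins for whatever keys your records actually have:
```html
<!-- templates/article.html.j2 (sketch; `title`/`body` are assumed fields) -->
<article>
  <h1>{{ article.title }}</h1>
  <p>{{ article.body }}</p>
</article>
```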
## Supported Formats
**Input Data (JSON):**
- **Aptoro Format:** `{ "meta": {...}, "data": [...] }`
- **Plain List:** `[ {...}, {...} ]`
- **Generic Wrapper:** `{ "results": [...] }`
**Output:**
- **HTML**
- **Markdown**
- **Text**
- **Any text-based format**
## License
GNU General Public License v3 (GPLv3)
| text/markdown | Tiago Tresoldi | null | null | null | null | template, jinja2, data, rendering, etl | [
"Development Status :: 3 - Alpha",
"Intended Audience :: Developers",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Topic :: Text Processing :: Markup"
] | [] | null | null | >=3.11 | [] | [] | [] | [
"jinja2>=3.0",
"pyyaml>=6.0",
"pytest>=7.0; extra == \"dev\"",
"pytest-cov>=4.0; extra == \"dev\"",
"mypy>=1.0; extra == \"dev\"",
"ruff>=0.1; extra == \"dev\"",
"types-PyYAML; extra == \"dev\"",
"build; extra == \"dev\"",
"twine; extra == \"dev\""
] | [] | [] | [] | [
"Homepage, https://github.com/plataformasindigenas/kodudo",
"Documentation, https://github.com/plataformasindigenas/kodudo#readme",
"Repository, https://github.com/plataformasindigenas/kodudo"
] | twine/6.2.0 CPython/3.14.2 | 2026-02-19T20:13:35.525817 | kodudo-0.2.0.tar.gz | 30,051 | df/12/5d7bda8e6d6c659f12f003a9cbb25b813f6856787ac9ed2ea87c10f38311/kodudo-0.2.0.tar.gz | source | sdist | null | false | 54a6a2300e9a2d220e855bfa5dfeb7de | 4b9e11652d68de6e5b88d2f02281fc5eb457a38879019985f112087286c38949 | df125d7bda8e6d6c659f12f003a9cbb25b813f6856787ac9ed2ea87c10f38311 | GPL-3.0-or-later | [
"LICENSE"
] | 319 |
2.4 | gist-select | 0.1.0 | GIST: Greedy Independent Set Thresholding for Max-Min Diversification with Submodular Utility | # gist-select
**Greedy Independent Set Thresholding for Max-Min Diversification with Submodular Utility**
A production-grade Python implementation of the GIST algorithm from [Fahrbach et al. (NeurIPS 2025)](https://arxiv.org/abs/2405.18754). Select subsets that are both **high-quality** and **diverse** — with provable approximation guarantees.
---
## The Problem
You have a large pool of items (data points, images, documents, candidates) and need to select *k* of them. You want items that are **individually valuable** *and* **collectively diverse** — no redundancy, maximum coverage.
GIST solves this by maximizing:
```
f(S) = g(S) + λ · div(S)
```
| Term | Meaning |
|------|---------|
| `g(S)` | Monotone submodular utility — how valuable the selected set is |
| `div(S)` | Max-min diversity — the minimum pairwise distance in the set |
| `λ` | Trade-off knob between utility and diversity |
**Approximation guarantees:**
- General submodular utility: **1/2 - ε**
- Linear utility: **2/3 - ε** *(tight — matches the NP-hardness lower bound)*
---
## Features
- **Provably good** — constant-factor approximation guarantees from the paper
- **Fast at scale** — tested up to 2M points with high-dimensional embeddings
- **CELF acceleration** — lazy greedy evaluation reduces oracle calls by orders of magnitude
- **Optimised numerics** — BLAS-backed distance computation, precomputed norms, no large temporaries
- **Flexible metrics** — Euclidean, cosine, or bring your own distance function
- **Flexible utilities** — linear weights, set coverage, or bring your own submodular function
- **Parallel threshold sweep** — optional multi-threaded execution via joblib
- **Deterministic** — seed parameter for full reproducibility
---
## Installation
```bash
pip install gist-select
```
With optional parallel support:
```bash
pip install "gist-select[parallel]"
```
**Requirements:** Python ≥ 3.10, NumPy ≥ 1.24, SciPy ≥ 1.10
---
## Quick Start
```python
import numpy as np
from gist import gist, LinearUtility, EuclideanDistance
# 10,000 points in 64 dimensions
rng = np.random.default_rng(42)
points = rng.standard_normal((10_000, 64)).astype(np.float32)
weights = rng.random(10_000)
# Select the 50 best-and-diverse points
result = gist(
points=points,
utility=LinearUtility(weights),
distance=EuclideanDistance(),
k=50,
lam=1.0,
seed=42,
)
print(f"Selected {len(result.indices)} points")
print(f"Objective: {result.objective_value:.4f}")
print(f"Utility: {result.utility_value:.4f}")
print(f"Diversity: {result.diversity:.4f}")
```
---
## API Reference
### `gist()`
```python
gist(
points, # np.ndarray (n, d) — your data
utility, # SubmodularFunction — how to score subsets
distance, # DistanceMetric — how to measure spread
k, # int — how many points to select
lam=1.0, # float — diversity weight (λ ≥ 0)
eps=0.05, # float — approximation granularity (ε > 0)
n_jobs=1, # int — threads for threshold sweep
seed=None, # int — random seed for reproducibility
diameter=None, # tuple — precomputed (d_max, idx_u, idx_v)
) -> GISTResult
```
**Returns** a `GISTResult` with:
| Field | Type | Description |
|-------|------|-------------|
| `indices` | `np.ndarray` | Indices of selected points |
| `objective_value` | `float` | `g(S) + λ · div(S)` |
| `utility_value` | `float` | `g(S)` |
| `diversity` | `float` | `div(S)` — minimum pairwise distance |
**Parameters in detail:**
- **`lam`** — Controls the utility/diversity trade-off. Higher values favour more spread-out selections. Set to `0` for pure utility maximisation (standard greedy).
- **`eps`** — Controls the number of distance thresholds swept (~76 for `eps=0.05`, ~38 for `eps=0.1`). Smaller is more thorough but slower.
- **`n_jobs`** — Number of threads for the threshold sweep. Requires `joblib`. Uses threading backend to share memory.
- **`diameter`** — Skip the automatic diameter estimation by providing `(d_max, idx_u, idx_v)`. Useful when calling `gist()` repeatedly on the same point set.
---
### Distance Metrics
```python
from gist import EuclideanDistance, CosineDistance, CallableDistance
```
| Class | Description | Hot Path |
|-------|-------------|----------|
| `EuclideanDistance()` | L2 distance with precomputed norms | Single BLAS GEMV |
| `CosineDistance()` | `1 - cos(a, b)`, auto-normalises | Single BLAS GEMV |
| `CallableDistance(fn)` | User-provided `fn(vec, matrix) → dists` | Your function |
**Custom distance example:**
```python
from gist import CallableDistance
def manhattan(source_vec, target_matrix):
"""L1 distance — must be vectorised."""
return np.abs(target_matrix - source_vec).sum(axis=1)
distance = CallableDistance(manhattan)
```
> **Note:** The callable signature is `fn(source: ndarray shape (d,), targets: ndarray shape (m, d)) → ndarray shape (m,)`. It *must* be vectorised — a scalar `dist(a, b)` function will not work at scale.
---
### Submodular Utilities
```python
from gist import LinearUtility, CoverageFunction, SubmodularFunction
```
#### `LinearUtility(weights)`
Additive utility: `g(S) = Σ weights[i]` for `i ∈ S`.
This is the most common case and the fastest — marginal gains are just individual weights. GIST achieves the tight **2/3-approximation** for linear utilities.
```python
weights = np.array([0.9, 0.1, 0.8, 0.3, 0.7])
utility = LinearUtility(weights)
```
#### `CoverageFunction(coverage_matrix, element_weights=None)`
Set-coverage utility: `g(S) = |⋃_{i ∈ S} cover(i)|`.
Each point covers a set of elements. The utility is the total number (or weighted sum) of distinct elements covered by the selected set. Classic diminishing returns.
```python
from scipy import sparse
# 1000 points, each covering some of 500 elements
coverage_matrix = sparse.random(1000, 500, density=0.05, format="csr")
coverage_matrix.data[:] = 1 # binary
utility = CoverageFunction(coverage_matrix)
```
#### Custom Submodular Functions
Subclass `SubmodularFunction` and implement two methods:
```python
from gist import SubmodularFunction
class MyUtility(SubmodularFunction):
def marginal_gains(self, selected: list[int], candidates: np.ndarray) -> np.ndarray:
"""Return g(v | S) for each v in candidates."""
gains = np.empty(len(candidates))
for i, v in enumerate(candidates):
gains[i] = self._compute_marginal(v, selected)
return gains
def value(self, selected: list[int]) -> float:
"""Return g(S)."""
return self._compute_value(selected)
```
> **Performance tip:** GIST uses CELF (lazy greedy) internally, so `marginal_gains` is called infrequently on small batches after the initial pass. But the initial call evaluates *all* points, so make sure it handles large `candidates` arrays efficiently.
---
## Examples
### Data Sampling for Model Training
Select a representative training subset that balances uncertainty and diversity — inspired by the paper's ImageNet experiment:
```python
import numpy as np
from gist import gist, LinearUtility, CosineDistance
# embeddings: (n, 2048) from a pretrained model
# uncertainty: (n,) margin-based uncertainty scores
embeddings = np.load("embeddings.npy")
uncertainty = np.load("uncertainty_scores.npy")
# Select 50K diverse, uncertain examples
result = gist(
points=embeddings,
utility=LinearUtility(uncertainty),
distance=CosineDistance(),
k=50_000,
lam=0.5, # balance uncertainty with diversity
eps=0.05,
n_jobs=4, # parallel threshold sweep
seed=0,
)
train_indices = result.indices
print(f"Selected {len(train_indices)} training examples")
print(f"Min pairwise cosine distance: {result.diversity:.4f}")
```
### Feature Selection
Select a diverse subset of features that individually have high relevance:
```python
import numpy as np
from gist import gist, LinearUtility, EuclideanDistance
# features: (n_features, n_samples) — each row is a feature vector
features = np.load("feature_matrix.npy")
relevance_scores = np.load("relevance.npy")
result = gist(
points=features,
utility=LinearUtility(relevance_scores),
distance=EuclideanDistance(),
k=20,
lam=2.0, # strongly penalise redundant features
seed=42,
)
selected_features = result.indices
```
### Tuning the Diversity Trade-off
Sweep over `lam` to find the right balance for your task:
```python
import numpy as np
from gist import gist, LinearUtility, EuclideanDistance
rng = np.random.default_rng(0)
points = rng.standard_normal((5000, 32))
weights = rng.random(5000)
for lam in [0.0, 0.5, 1.0, 2.0, 5.0]:
result = gist(
points, LinearUtility(weights), EuclideanDistance(),
k=50, lam=lam, eps=0.1, seed=0,
)
print(f"λ={lam:<4} utility={result.utility_value:.2f} "
f"diversity={result.diversity:.3f} "
f"objective={result.objective_value:.2f}")
```
```
λ=0.0 utility=49.47 diversity=3.041 objective=49.47
λ=0.5 utility=48.91 diversity=3.255 objective=50.54
λ=1.0 utility=48.31 diversity=3.442 objective=51.75
λ=2.0 utility=47.06 diversity=3.819 objective=54.70
λ=5.0 utility=43.98 diversity=4.607 objective=67.02
```
### Pure Utility Maximisation
Set `lam=0` to recover standard greedy submodular maximisation (no diversity term):
```python
result = gist(points, utility, distance, k=100, lam=0.0)
# Equivalent to the classic (1 - 1/e)-approximation greedy
```
---
## Performance
Benchmarked on Apple M-series, single-threaded, `eps=0.1`:
| Points | Dimensions | k | Time |
|--------|-----------|-----|------|
| 10K | 64 | 50 | 0.3s |
| 100K | 128 | 100 | 6s |
| 500K | 128 | 100 | ~30s |
| 2M | 128 | 100 | ~2 min |
**Scaling tips:**
- Use `float32` points — 2x memory savings and faster BLAS
- Increase `eps` (e.g., `0.1` → `0.2`) to halve the number of thresholds
- Use `n_jobs=-1` with joblib for parallel threshold sweep
- Precompute `diameter` when calling `gist()` repeatedly on the same data (see the sketch below)
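A sketch of that last tip: compute the diametrical pair once and reuse it across a λ sweep. The brute-force pairwise computation here is illustrative only (O(n²), fine at this size); the `(d_max, idx_u, idx_v)` tuple shape follows the `gist()` signature above, although the exact pair the library's own estimator would choose may differ:
```python
import numpy as np
from scipy.spatial.distance import pdist, squareform
from gist import gist, LinearUtility, EuclideanDistance

rng = np.random.default_rng(0)
points = rng.standard_normal((2_000, 32)).astype(np.float32)
weights = rng.random(2_000)

# Brute-force diametrical pair (illustration only; O(n^2) time and memory).
dists = squareform(pdist(points))
idx_u, idx_v = np.unravel_index(np.argmax(dists), dists.shape)
diameter = (float(dists[idx_u, idx_v]), int(idx_u), int(idx_v))

# Reuse the precomputed diameter across repeated calls.
for lam in (0.5, 1.0, 2.0):
    result = gist(
        points, LinearUtility(weights), EuclideanDistance(),
        k=50, lam=lam, seed=0, diameter=diameter,
    )
```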
---
## How It Works
GIST sweeps over a geometric sequence of distance thresholds. For each threshold *d*, it runs a greedy algorithm that builds a maximal independent set of the intersection graph (points within distance *d* are "neighbours") while maximising the submodular utility.
```
GIST(V, g, k, ε):
1. S ← GreedyIndependentSet(V, g, d=0, k) # pure utility baseline
2. T ← diametrical pair with max distance # pure diversity baseline
3. For each threshold d in geometric sequence:
T ← GreedyIndependentSet(V, g, d, k) # utility + diversity
Keep best f(T) = g(T) + λ · div(T)
4. Return the best solution found
```
The `GreedyIndependentSet` subroutine uses CELF (lazy greedy) to minimise submodular oracle calls. Points within distance *d* of a selected point are eliminated, ensuring all selected points are pairwise at distance ≥ *d*.
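To make the CELF idea concrete, here is a schematic lazy-greedy loop — a sketch of the general technique, not the package's internal code. Because marginal gains of a monotone submodular function can only shrink as the selection grows, a gain computed against an older selection is still a valid upper bound, so most items never need re-evaluation:
```python
import heapq

def lazy_greedy(candidates, marginal_gain, k):
    """Schematic CELF. `candidates` are indices; `marginal_gain(item, selected)`
    is a stand-in submodular oracle returning g(item | selected)."""
    selected = []
    # Max-heap (negated gains) of upper bounds; `stamp` records how many
    # items had been selected when each bound was last computed.
    heap = [(-marginal_gain(c, []), c, 0) for c in candidates]
    heapq.heapify(heap)
    while heap and len(selected) < k:
        neg_gain, item, stamp = heapq.heappop(heap)
        if stamp == len(selected):
            # Bound is fresh, so by submodularity it is the true maximum.
            selected.append(item)
        else:
            # Stale bound: recompute against the current selection, re-insert.
            heapq.heappush(heap, (-marginal_gain(item, selected), item, len(selected)))
    return selected
```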
For the full details, see the paper: [arXiv:2405.18754](https://arxiv.org/abs/2405.18754)
---
## Citation
If you use this package in your research, please cite the original paper:
```bibtex
@inproceedings{fahrbach2025gist,
title={GIST: Greedy Independent Set Thresholding for Max-Min Diversification with Submodular Utility},
author={Fahrbach, Matthew and Ramalingam, Srikumar and Zadimoghaddam, Morteza and Ahmadian, Sara and Citovsky, Gui and DeSalvo, Giulia},
booktitle={Advances in Neural Information Processing Systems (NeurIPS)},
year={2025}
}
```
---
## License
MIT
| text/markdown | null | Kenny Claka <hello@kennyigbechi.com> | null | null | null | null | [
"Development Status :: 4 - Beta",
"Intended Audience :: Science/Research",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: 3.13",
"Topic :: Scientific... | [] | null | null | >=3.10 | [] | [] | [] | [
"numpy>=1.24",
"scipy>=1.10",
"joblib>=1.3; extra == \"parallel\"",
"pytest>=7.0; extra == \"dev\"",
"joblib>=1.3; extra == \"dev\""
] | [] | [] | [] | [
"Homepage, https://github.com/kclaka/gist-select",
"Repository, https://github.com/kclaka/gist-select",
"Issues, https://github.com/kclaka/gist-select/issues"
] | twine/6.2.0 CPython/3.13.0 | 2026-02-19T20:13:25.218932 | gist_select-0.1.0.tar.gz | 20,393 | 98/6e/15109590efed1fce757bd8fadf8cb44298ff5263eb0c277203bf96e310db/gist_select-0.1.0.tar.gz | source | sdist | null | false | 5b62aa8b7cbb0ff85ece0b3825a5f26b | 20442ce741b0f98575eb49927a44b96b71f71b3e22ba84c39d3b1964679eacfa | 986e15109590efed1fce757bd8fadf8cb44298ff5263eb0c277203bf96e310db | MIT | [
"LICENSE"
] | 238 |
2.4 | tico | 0.2.0.dev260219 | Convert Exported Torch Module To Circle | # TICO
_TICO_ (Torch IR to Circle [ONE](https://github.com/Samsung/ONE)) is a Python library for converting
PyTorch modules into a circle model, a lightweight and efficient representation in ONE
designed for optimized on-device neural network inference.
## Table of Contents
### For Users
- [Installation](#installation)
- [Getting Started](#getting-started)
- [From torch module](#from-torch-module)
- [From .pt2](#from-pt2)
- [Running circle models directly in Python](#running-circle-models-directly-in-python)
- [Quantization](#quantization)
### For Developers
- [Testing & Code Formatting](#testing--code-formatting)
- [Testing](#testing)
- [Code Formatting](#code-formatting)
## For Users
### Installation
0. Prerequisites
- Python 3.10
- (Optional) [one-compiler 1.30.0](https://github.com/Samsung/ONE/releases/tag/1.30.0)
- It is only required if you intend to run inference with the converted Circle model. If you are only converting models without running them, this dependency is not needed.
We highly recommend using a virtual env, e.g., conda.
1. Clone this repo
2. Build python package
```bash
./ccex build
```
This will generate `build` and `dist` directories in the root directory.
3. Install generated package
```bash
./ccex install
```
**Available options**
- `--dist` To install the package from .whl (without this option, _TICO_ is installed in an editable mode)
- `--torch_ver <torch version>` To install a specific torch version (default: 2.6).
- Available <torch version>: 2.5, 2.6, 2.7, 2.8, nightly
4. Now you can convert a torch module to a `.circle`.
### Getting started
This tutorial explains how you can use _TICO_ to generate a circle model from a torch module.
Let's assume we have a torch module.
```python
import tico
import torch
class AddModule(torch.nn.Module):
def __init__(self):
super().__init__()
def forward(self, x, y):
return x + y
```
**NOTE**
_TICO_ internally uses [torch.export](https://pytorch.org/docs/stable/export.html#torch-export).
Therefore, the torch module must be 'export'able. Please see
[this document](https://pytorch.org/docs/stable/export.html#limitations-of-torch-export)
if you have any trouble to export.
#### From torch module
You can convert a torch module to a circle model with these steps.
```python
torch_module = AddModule()
example_inputs = (torch.ones(4), torch.ones(4))
circle_model = tico.convert(torch_module.eval(), example_inputs)
circle_model.save('add.circle')
```
**NOTE**
Please make sure to call `eval()` on the PyTorch module before passing it to our API.
This ensures the model runs in inference mode, disabling layers like dropout and
batch normalization updates.
**Compile with configuration**
```python
from test.modules.op.add import AddWithCausalMaskFolded
torch_module = AddWithCausalMaskFolded()
example_inputs = torch_module.get_example_inputs()
config = tico.CompileConfigV1()
config.legalize_causal_mask_value = True
circle_model = tico.convert(torch_module, example_inputs, config = config)
circle_model.save('add_causal_mask_m120.circle')
```
With the `legalize_causal_mask_value` option on, the causal mask value is converted from
-inf to -120, creating a more quantization-friendly circle model at the cost of a
slight accuracy drop.
#### From .pt2
The torch module can be exported and saved as `.pt2` file (from PyTorch 2.1).
```python
module = AddModule()
example_inputs = (torch.ones(4), torch.ones(4))
exported_program = torch.export.export(module, example_inputs)
torch.export.save(exported_program, 'add.pt2')
```
There are two ways to convert a `.pt2` file: the Python API or the command-line tool.
- Python API
```python
circle_model = tico.convert_from_pt2('add.pt2')
circle_model.save('add.circle')
```
- Command Line Tool
```bash
pt2-to-circle -i add.pt2 -o add.circle
```
- Command Line Tool with configuration
```bash
pt2-to-circle -i add.pt2 -o add.circle -c config.yaml
```
```yaml
# config.yaml
version: '1.0' # You must specify the config version.
legalize_causal_mask_value: True
```
#### Running circle models directly in Python
After circle export, you can run the model directly in Python.
Note that you must install the one-compiler package first.
The outputs are numpy.ndarray values.
```python
torch_module = AddModule()
example_inputs = (torch.ones(4), torch.ones(4))
circle_model = tico.convert(torch_module, example_inputs)
circle_model(*example_inputs)
# numpy.ndarray([2., 2., 2., 2.], dtype=float32)
```
### Quantization
The `tico.quantization` module provides a unified and modular interface for quantizing
large language models (LLMs) and other neural networks.
It introduces a simple two-step workflow — **prepare** and **convert** — that
abstracts the details of different quantization algorithms.
#### Basic Usage
```python
from tico.quantization import prepare, convert
from tico.quantization.config.gptq import GPTQConfig
import torch
import torch.nn as nn
class LinearModel(nn.Module):
def __init__(self):
super().__init__()
self.linear = nn.Linear(8, 8)
def forward(self, x):
return self.linear(x)
model = LinearModel().eval()
# 1. Prepare for quantization
quant_config = GPTQConfig()
prepared_model = prepare(model, quant_config)
# 2. Calibration
for d in dataset:
prepared_model(d)
# 3. Apply GPTQ
quantized_model = convert(prepared_model, quant_config)
```
For detailed documentation, design notes, and contributing guidelines,
see [tico/quantization/README.md](./tico/quantization/README.md).
## For Developers
### Testing & Code Formatting
Run the commands below to set up the testing and/or formatting environment.
Refer to the dedicated sections below for more fine-grained control.
```bash
$ ./ccex configure # to set up testing & formatting environment
$ ./ccex configure format # to set up only formatting environment
$ ./ccex configure test # to set up only testing environment
```
**Available options**
- `--torch_ver <torch version>` To install a specific torch family package(ex. torchvision) version (default: 2.6)
- Available <torch version>: '2.5', '2.6', 'nightly'
```bash
$ ./ccex configure # to set up testing & formatting environment with stable 2.6.x version
$ ./ccex configure test # to set up only testing environment with stable 2.6.x version
$ ./ccex configure test --torch_ver 2.5 # to set up only testing environment with stable 2.5.x version
$ ./ccex configure test --torch_ver nightly # to set up only testing environment with nightly version
```
### Testing
#### Test configure
Run the commands below to install the requirements for testing.
**NOTE** `TICO` will be installed in editable mode.
```bash
./ccex configure test
# without editable install
./ccex configure test --dist
```
#### Test All
Run the commands below to run all the unit tests.
**NOTE** Unit tests don't include model tests.
```bash
./ccex test
# OR
./ccex test run-all-tests
```
#### Test Subset
To run a subset of `test.modules.*`, run `./ccex test -k <keyword>`.
For example, to run tests in a specific sub-directory (op, net, ...):
```bash
# To run tests in specific sub-directory (op/, net/ ..)
./ccex test -k op
./ccex test -k net
# To run tests in one file (single/op/add, single/op/sub, ...)
./ccex test -k add
./ccex test -k sub
# To run SimpleAdd test in test/modules/single/op/add.py
./ccex test -k SimpleAdd
```
To see the full debug log, add `-v` or `TICO_LOG=4`.
```bash
TICO_LOG=4 ./ccex test -k add
# OR
./ccex test -v -k add
```
#### Test Model
If you want to run model tests locally, you can do so by navigating to each model directory,
installing the dependencies listed in its `requirements.txt`, and running the tests one by one.
```bash
$ pip install -r test/modules/model/<model_name>/requirements.txt
# Run test for a single model
$ ./ccex test -m <model_name>
# Run models whose names contain "Llama" (e.g., Llama, LlamaDecoderLayer, LlamaWithGQA, etc.)
# Note that you should use quotes for the wildcard(*) pattern
$ ./ccex test -m "Llama*"
```
For example, to run a single model
```
./ccex test -m InceptionV3
```
#### Runtime Options
By default, `./ccex test` runs all modules with the `circle-interpreter` engine.
You can override this and run tests using the `onert` runtime instead.
##### 0. Install ONERT
```bash
pip install onert
```
##### 1. Command-Line Flag
Use the `--runtime` (or `-r`) flag to select a runtime:
```bash
# Run with the default circle-interpreter
./ccex test
# Run all tests with onert
./ccex test --runtime onert
# or
./ccex test -r onert
```
##### 2. Environment Variable
You can also set the `CCEX_RUNTIME` environment variable:
```bash
# Temporarily override for one command
CCEX_RUNTIME=onert ./ccex test
# Persist in your shell session
export CCEX_RUNTIME=onert
./ccex test
```
##### Supported Runtimes
- circle-interpreter (default): uses the Circle interpreter for inference.
- onert: uses the ONERT package for inference, useful when the Circle interpreter
cannot run a given module.
### Code Formatting
#### Format configure
Run the command below to install the requirements for formatting.
```bash
./ccex configure format
```
#### Format run
```bash
./ccex format
```
| text/markdown | null | null | null | null | This file provides full text of licenses used in this project
- Apache License 2.0
- BSD 3-Clause
...............................................................................
Apache License
Version 2.0, January 2004
http://www.apache.org/licenses/
TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
1. Definitions.
"License" shall mean the terms and conditions for use, reproduction,
and distribution as defined by Sections 1 through 9 of this document.
"Licensor" shall mean the copyright owner or entity authorized by
the copyright owner that is granting the License.
"Legal Entity" shall mean the union of the acting entity and all
other entities that control, are controlled by, or are under common
control with that entity. For the purposes of this definition,
"control" means (i) the power, direct or indirect, to cause the
direction or management of such entity, whether by contract or
otherwise, or (ii) ownership of fifty percent (50%) or more of the
outstanding shares, or (iii) beneficial ownership of such entity.
"You" (or "Your") shall mean an individual or Legal Entity
exercising permissions granted by this License.
"Source" form shall mean the preferred form for making modifications,
including but not limited to software source code, documentation
source, and configuration files.
"Object" form shall mean any form resulting from mechanical
transformation or translation of a Source form, including but
not limited to compiled object code, generated documentation,
and conversions to other media types.
"Work" shall mean the work of authorship, whether in Source or
Object form, made available under the License, as indicated by a
copyright notice that is included in or attached to the work
(an example is provided in the Appendix below).
"Derivative Works" shall mean any work, whether in Source or Object
form, that is based on (or derived from) the Work and for which the
editorial revisions, annotations, elaborations, or other modifications
represent, as a whole, an original work of authorship. For the purposes
of this License, Derivative Works shall not include works that remain
separable from, or merely link (or bind by name) to the interfaces of,
the Work and Derivative Works thereof.
"Contribution" shall mean any work of authorship, including
the original version of the Work and any modifications or additions
to that Work or Derivative Works thereof, that is intentionally
submitted to Licensor for inclusion in the Work by the copyright owner
or by an individual or Legal Entity authorized to submit on behalf of
the copyright owner. For the purposes of this definition, "submitted"
means any form of electronic, verbal, or written communication sent
to the Licensor or its representatives, including but not limited to
communication on electronic mailing lists, source code control systems,
and issue tracking systems that are managed by, or on behalf of, the
Licensor for the purpose of discussing and improving the Work, but
excluding communication that is conspicuously marked or otherwise
designated in writing by the copyright owner as "Not a Contribution."
"Contributor" shall mean Licensor and any individual or Legal Entity
on behalf of whom a Contribution has been received by Licensor and
subsequently incorporated within the Work.
2. Grant of Copyright License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
copyright license to reproduce, prepare Derivative Works of,
publicly display, publicly perform, sublicense, and distribute the
Work and such Derivative Works in Source or Object form.
3. Grant of Patent License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
(except as stated in this section) patent license to make, have made,
use, offer to sell, sell, import, and otherwise transfer the Work,
where such license applies only to those patent claims licensable
by such Contributor that are necessarily infringed by their
Contribution(s) alone or by combination of their Contribution(s)
with the Work to which such Contribution(s) was submitted. If You
institute patent litigation against any entity (including a
cross-claim or counterclaim in a lawsuit) alleging that the Work
or a Contribution incorporated within the Work constitutes direct
or contributory patent infringement, then any patent licenses
granted to You under this License for that Work shall terminate
as of the date such litigation is filed.
4. Redistribution. You may reproduce and distribute copies of the
Work or Derivative Works thereof in any medium, with or without
modifications, and in Source or Object form, provided that You
meet the following conditions:
(a) You must give any other recipients of the Work or
Derivative Works a copy of this License; and
(b) You must cause any modified files to carry prominent notices
stating that You changed the files; and
(c) You must retain, in the Source form of any Derivative Works
that You distribute, all copyright, patent, trademark, and
attribution notices from the Source form of the Work,
excluding those notices that do not pertain to any part of
the Derivative Works; and
(d) If the Work includes a "NOTICE" text file as part of its
distribution, then any Derivative Works that You distribute must
include a readable copy of the attribution notices contained
within such NOTICE file, excluding those notices that do not
pertain to any part of the Derivative Works, in at least one
of the following places: within a NOTICE text file distributed
as part of the Derivative Works; within the Source form or
documentation, if provided along with the Derivative Works; or,
within a display generated by the Derivative Works, if and
wherever such third-party notices normally appear. The contents
of the NOTICE file are for informational purposes only and
do not modify the License. You may add Your own attribution
notices within Derivative Works that You distribute, alongside
or as an addendum to the NOTICE text from the Work, provided
that such additional attribution notices cannot be construed
as modifying the License.
You may add Your own copyright statement to Your modifications and
may provide additional or different license terms and conditions
for use, reproduction, or distribution of Your modifications, or
for any such Derivative Works as a whole, provided Your use,
reproduction, and distribution of the Work otherwise complies with
the conditions stated in this License.
5. Submission of Contributions. Unless You explicitly state otherwise,
any Contribution intentionally submitted for inclusion in the Work
by You to the Licensor shall be under the terms and conditions of
this License, without any additional terms or conditions.
Notwithstanding the above, nothing herein shall supersede or modify
the terms of any separate license agreement you may have executed
with Licensor regarding such Contributions.
6. Trademarks. This License does not grant permission to use the trade
names, trademarks, service marks, or product names of the Licensor,
except as required for reasonable and customary use in describing the
origin of the Work and reproducing the content of the NOTICE file.
7. Disclaimer of Warranty. Unless required by applicable law or
agreed to in writing, Licensor provides the Work (and each
Contributor provides its Contributions) on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
implied, including, without limitation, any warranties or conditions
of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
PARTICULAR PURPOSE. You are solely responsible for determining the
appropriateness of using or redistributing the Work and assume any
risks associated with Your exercise of permissions under this License.
8. Limitation of Liability. In no event and under no legal theory,
whether in tort (including negligence), contract, or otherwise,
unless required by applicable law (such as deliberate and grossly
negligent acts) or agreed to in writing, shall any Contributor be
liable to You for damages, including any direct, indirect, special,
incidental, or consequential damages of any character arising as a
result of this License or out of the use or inability to use the
Work (including but not limited to damages for loss of goodwill,
work stoppage, computer failure or malfunction, or any and all
other commercial damages or losses), even if such Contributor
has been advised of the possibility of such damages.
9. Accepting Warranty or Additional Liability. While redistributing
the Work or Derivative Works thereof, You may choose to offer,
and charge a fee for, acceptance of support, warranty, indemnity,
or other liability obligations and/or rights consistent with this
License. However, in accepting such obligations, You may act only
on Your own behalf and on Your sole responsibility, not on behalf
of any other Contributor, and only if You agree to indemnify,
defend, and hold each Contributor harmless for any liability
incurred by, or claims asserted against, such Contributor by reason
of your accepting any such warranty or additional liability.
END OF TERMS AND CONDITIONS
APPENDIX: How to apply the Apache License to your work.
To apply the Apache License to your work, attach the following
boilerplate notice, with the fields enclosed by brackets "[]"
replaced with your own identifying information. (Don't include
the brackets!) The text should be enclosed in the appropriate
comment syntax for the file format. We also recommend that a
file or class name and description of purpose be included on the
same "printed page" as the copyright notice for easier
identification within third-party archives.
Copyright [yyyy] [name of copyright owner]
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
...............................................................................
The BSD 3-Clause License
Redistribution and use in source and binary forms, with or without
modification, are permitted provided that the following conditions are
met:
* Redistributions of source code must retain the above copyright
notice, this list of conditions and the following disclaimer.
* Redistributions in binary form must reproduce the above
copyright notice, this list of conditions and the following disclaimer
in the documentation and/or other materials provided with the
distribution.
* Neither the name of Google Inc. nor the names of its
contributors may be used to endorse or promote products derived from
this software without specific prior written permission.
THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
"AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
(INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
.............................................................................
| null | [] | [] | null | null | >=3.10.0 | [] | [] | [] | [
"circle-schema",
"packaging",
"cffi",
"torch",
"pyyaml",
"tqdm"
] | [] | [] | [] | [] | twine/6.1.0 CPython/3.13.7 | 2026-02-19T20:13:14.291329 | tico-0.2.0.dev260219.tar.gz | 222,075 | 31/4a/841eb30adcc85273a38189dd268ab05e0fc55741dfa2b21d11de5c333074/tico-0.2.0.dev260219.tar.gz | source | sdist | null | false | 5dc6ec4fe8b9c2f177cdfc1a82f4f9d7 | 200143ddf0c476a00ca8783aa61a031b4dd3ccd94fdd5b382c0ad1c394f358a3 | 314a841eb30adcc85273a38189dd268ab05e0fc55741dfa2b21d11de5c333074 | null | [
"LICENSE"
] | 196 |
2.4 | pyloopmessage | 0.4.0 | Python client for the LoopMessage iMessage API | # PyLoopMessage
A modern Python client for the LoopMessage iMessage API.
## Features
- ✨ Full support for LoopMessage REST API
- 🔒 Type-safe with comprehensive type hints
- 📱 Send messages, reactions, and audio messages
- 👥 Support for group messaging
- 📞 Webhook handling for real-time events
- 🧪 Async/await support
- 🛡️ Built-in error handling and retries
## Installation
```bash
pip install pyloopmessage
```
## Quick Start
```python
import asyncio
from pyloopmessage import LoopMessageClient
async def main():
    # Initialize the client
    client = LoopMessageClient(
        authorization_key="your_auth_key",
        secret_key="your_secret_key"
    )
    # Send a message (send_message is a coroutine, so it must be awaited)
    response = await client.send_message(
        recipient="+1234567890",
        text="Hello from PyLoopMessage!",
        sender_name="YourSenderName"
    )
    print(f"Message sent with ID: {response.message_id}")
asyncio.run(main())
```
## API Support
### Sending Messages
- ✅ Send text messages to individuals
- ✅ Send messages to groups
- ✅ Send audio messages
- ✅ Send reactions
- ✅ Message effects (slam, loud, gentle, etc.)
- ✅ Attachments support
- ✅ Reply-to functionality
### Message Status
- ✅ Check message status
- ✅ Webhook event handling
- ✅ Real-time status updates
### Advanced Features
- ✅ Typing indicators
- ✅ Read status
- ✅ Sandbox mode
- ✅ Error handling with detailed error codes
## Documentation
For detailed documentation and examples, visit our [GitHub repository](https://github.com/yourusername/pyloopmessage).
## License
MIT License - see LICENSE file for details.
| text/markdown | null | Balaji Rama <balajirw10@gmail.com> | null | null | MIT | imessage, api, messaging, loopmessage | [
"Development Status :: 4 - Beta",
"Intended Audience :: Developers",
"License :: OSI Approved :: MIT License",
"Operating System :: OS Independent",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.8",
"Programming Language :: Python :: 3.9",
"Programming Language :: Python ... | [] | null | null | >=3.8 | [] | [] | [] | [
"httpx>=0.24.0",
"pydantic>=2.0.0",
"typing-extensions>=4.0.0",
"pytest>=7.0.0; extra == \"dev\"",
"pytest-asyncio>=0.21.0; extra == \"dev\"",
"pytest-httpx>=0.21.0; extra == \"dev\"",
"black>=23.0.0; extra == \"dev\"",
"ruff>=0.1.0; extra == \"dev\"",
"mypy>=1.0.0; extra == \"dev\"",
"pre-commit>... | [] | [] | [] | [] | twine/6.2.0 CPython/3.11.6 | 2026-02-19T20:12:33.328022 | pyloopmessage-0.4.0.tar.gz | 28,480 | ee/d6/059f54b9856a983e4ec1c6a6afde3a642c3207fffd14f360b79b17cbe08d/pyloopmessage-0.4.0.tar.gz | source | sdist | null | false | 4f6f264373cdb136ab44fb157fbbd3be | f4856ed3f5ff7153415b7e816f7408776cddc77db22081a596ad5b615fd49bf8 | eed6059f54b9856a983e4ec1c6a6afde3a642c3207fffd14f360b79b17cbe08d | null | [
"LICENSE"
] | 214 |
2.4 | nativebridge | 0.1.0 | Connect to NativeBridge cloud Android devices via ADB | # NativeBridge CLI
Connect to cloud Android devices via ADB — as if they were plugged in locally.
NativeBridge CLI authenticates with your API key, authorizes your IP, and establishes an ADB connection to cloud-hosted Android devices through a secure TCP proxy.
## Installation
```bash
pip install nativebridge
```
**Prerequisite:** [Android SDK Platform Tools](https://developer.android.com/tools/releases/platform-tools) must be installed and `adb` available on your PATH.
## Quick Start
```bash
# 1. Save your API key (one-time)
nativebridge login --api-key YOUR_API_KEY
# 2. Connect to a cloud device
nativebridge connect --device SESSION_ID
# 3. Use ADB as usual
nativebridge adb -s host:port shell
```
## Commands
### `login` — Save API key
```bash
nativebridge login --api-key nb_live_abc123
nativebridge login --api-key nb_live_abc123 --api-base https://custom.api.com
```
Saves your API key to `~/.nativebridge/config.json` so you don't need to pass it every time.
### `logout` — Remove saved API key
```bash
nativebridge logout
```
### `connect` — Connect to a cloud device
```bash
nativebridge connect --device SESSION_ID
nativebridge connect -d SESSION_ID --api-key nb_live_abc123
```
This command:
1. Validates your API key with the NativeBridge backend
2. Authorizes your IP address for ADB access
3. Runs `adb connect` to the cloud device
On success, you'll see the connection details and quick-start commands.
### `disconnect` — Disconnect from a cloud device
```bash
nativebridge disconnect --device SESSION_ID
nativebridge disconnect -d SESSION_ID
```
### `devices` — List connected ADB devices
```bash
nativebridge devices
```
Runs `adb devices -l` and displays the output.
### `status` — Show CLI configuration
```bash
nativebridge status
```
Displays your current API base URL, masked API key, and connected ADB devices.
### `adb` — ADB passthrough
```bash
nativebridge adb devices
nativebridge adb -s host:port shell
nativebridge adb -s host:port install app.apk
nativebridge adb -s host:port push local.txt /sdcard/
nativebridge adb -s host:port logcat
nativebridge adb -s host:port shell pm list packages
```
Passes any arguments directly to `adb`. This is a convenience wrapper so you can use a single tool for all device interactions.
## Configuration
NativeBridge CLI resolves each setting from the first source that provides it: the environment variable, then the config file, then the default.
| Setting | Environment Variable | Config File Key | Default |
|----------|---------------------------|-----------------|----------------------------|
| API Key | `NATIVEBRIDGE_API_KEY` | `api_key` | — |
| API Base | `NATIVEBRIDGE_API_BASE` | `api_base` | `https://api.nativebridge.io` |
Config file location: `~/.nativebridge/config.json`
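For reference, a hand-written config file using the two documented keys might look like this (normally `nativebridge login` writes it for you; the key shown is a placeholder):
```json
{
  "api_key": "nb_live_abc123",
  "api_base": "https://api.nativebridge.io"
}
```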
## Requirements
- Python 3.8+
- `adb` (Android SDK Platform Tools) on your PATH
- A NativeBridge API key
## License
MIT License. See [LICENSE](LICENSE) for details.
| text/markdown | null | NativeBridge <support@nativebridge.io> | null | null | null | adb, android, cloud, nativebridge, device-farm, testing, remote-devices | [
"Development Status :: 4 - Beta",
"Environment :: Console",
"Intended Audience :: Developers",
"Operating System :: OS Independent",
"Topic :: Software Development :: Testing",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.8",
"Programming Language :: Python :: 3.9",
"P... | [] | null | null | >=3.8 | [] | [] | [] | [
"requests>=2.28.0",
"rich>=13.0.0"
] | [] | [] | [] | [
"Homepage, https://nativebridge.io",
"Documentation, https://docs.nativebridge.io/cli",
"Repository, https://github.com/AutoFlowLabs/nativebridge-cli",
"Issues, https://github.com/AutoFlowLabs/nativebridge-cli/issues"
] | twine/6.2.0 CPython/3.13.7 | 2026-02-19T20:12:29.984295 | nativebridge-0.1.0.tar.gz | 9,635 | c2/42/22477dd62bffd3144d61c90d969ea74b9dce165dc2e1c5415b539bae0c73/nativebridge-0.1.0.tar.gz | source | sdist | null | false | 252d707aab29041b1faafbe81009ab8a | 0acd33fa0b158257d16d55bed3cb780951c857cc5f5ff982411006d07174e76f | c24222477dd62bffd3144d61c90d969ea74b9dce165dc2e1c5415b539bae0c73 | MIT | [
"LICENSE"
] | 236 |
2.4 | thds.mops | 3.14.20260219201208 | ML Ops tools for Trilliant Health | `mops` is a Python library for ML Operations.
Jump to
[Quickstart](https://github.com/TrilliantHealth/trilliant-data-science/blob/main/libs/mops/docs/quickstart.adoc)
if you ~~are impatient~~ prefer examples, like me!
`mops` solves for four core design goals:
- [Efficient](https://github.com/TrilliantHealth/trilliant-data-science/blob/main/libs/mops/docs/optimizations.adoc)
transfer of
[pure](https://github.com/TrilliantHealth/trilliant-data-science/blob/main/libs/mops/docs/pure_functions.adoc)
function execution to
[remote](https://github.com/TrilliantHealth/trilliant-data-science/blob/main/libs/mops/docs/remote.adoc)
execution environments with more and/or different compute resources
- Everything is written in standard Python with basic Python primitives; no frameworks, YAML, DSLs...
- [Memoization](https://github.com/TrilliantHealth/trilliant-data-science/blob/main/libs/mops/docs/memoization.adoc)
— i.e. _reproducibility and fault tolerance_ — for individual functions.
- Droppability: `mops` shouldn't entangle itself with your code, and you should always be able to run
your code with or without `mops` in the loop.
It is used by
[decorating or wrapping your pure function and then calling it](https://github.com/TrilliantHealth/trilliant-data-science/blob/main/libs/mops/docs/magic.adoc)
like a normal function.
### read the docs
[Browse our full documentation here.](https://github.com/TrilliantHealth/trilliant-data-science/blob/main/libs/mops/README.adoc)
| text/markdown | null | Trilliant Health <info@trillianthealth.com> | null | null | null | null | [] | [] | null | null | >=3.9 | [] | [] | [] | [
"azure-core",
"azure-identity",
"azure-storage-file-datalake",
"cachetools",
"importlib_metadata>=3.6; python_version < \"3.10\"",
"tblib~=2.0",
"thds-adls",
"thds-core",
"thds-humenc",
"thds-termtool",
"tomli",
"kubernetes!=32.0.0,>=18.20; extra == \"k8s\""
] | [] | [] | [] | [
"Repository, https://github.com/TrilliantHealth/ds-monorepo"
] | twine/6.1.0 CPython/3.13.3 | 2026-02-19T20:12:26.698790 | thds_mops-3.14.20260219201208-py3-none-any.whl | 169,402 | 5c/cd/678f10dd69e8e042808dff92707fff048a42fcdea59cce65bed579978c56/thds_mops-3.14.20260219201208-py3-none-any.whl | py3 | bdist_wheel | null | false | 3bd9f4867b4005c4e0e0b4c5fa857829 | c7819d9a79f3c8e913aef92ae6f46d0af49ee727d50f49ff834dcd020f1e8e7c | 5ccd678f10dd69e8e042808dff92707fff048a42fcdea59cce65bed579978c56 | null | [] | 0 |
2.4 | thds.attrs-utils | 1.7.20260219201205 | Utilities for attrs record classes. | # `thds.attrs-utils` Library
This library contains utilities for working with basic data types and type annotations in a generic way -
transforming, checking, generating, or anything else you'd want to do with types.
## Supported types:
- Builtin types, e.g. `int`, `str`, `float`, `bool`, `bytes`
- `datetime.date`, `datetime.datetime`
- Most standard library collection types, e.g. `List[T]`, `Sequence[T]`, `Dict[K, V]`, `Mapping[K, V]`,
`Set[T]`
- Heterogeneous tuple types, e.g. `typing.Tuple[A, B, C]`
- Variadic tuple types, e.g. `Tuple[T, ...]`
- `typing.NamedTuple` record types
- `attrs`-defined record types, including generics with type variables
- `dataclasses`-defined record types, including generics with type variables
- Union types using `typing.Union`
- `typing.Literal`
- `typing.Annotated`
- `typing.NewType`
## General Recursion Framework
The `thds.attrs_utils.type_recursion` module defines a generic interface for performing operations on
arbitrarily nested types. If you have some operation you'd like to do, e.g. transform a python data model
into some other schema language, or define a generic validation check, all you need to do is define it on
a particular set of cases.
For example, here's a simple implementation that counts the number of types referenced inside of a nested
type definition:
```python
from typing import List, Mapping, Tuple, get_args
from thds.attrs_utils.type_recursion import TypeRecursion, Registry
def n_types_generic(recurse, type_):
args = get_args(type_)
return 1 + sum(map(recurse, args))
n_types = TypeRecursion(
Registry(),
tuple=n_types_generic, # these aren't strictly required because the implementation is the same for all of them
collection=n_types_generic, # but I include them
mapping=n_types_generic,
otherwise=n_types_generic,
)
print(n_types(Mapping[Tuple[int, str], List[bytes]]))
# 1 2 3 4 5 6
# 6
```
This example is very simple to illustrate the point. However, much more complex use cases are enabled by
the framework. Most useful are type recursions which accept types and return _callables_ that apply to or
return values inhabiting those types. Examples included in this library are
- an instance checker takes an arbitrarily nested type and returns a callable which recursively checks
that all fields inside a nested value are of the expected type
- a jsonschema generator which takes a type and returns a jsonschema, which can then be used to validate
deserialized values that may be structured into instances of that type
- a random generator which takes a type and returns random instances of that type
Note that the cases which return callables are _static_ with respect to the given type. This allows you
to freeze the callable as specialized to a specific type, so that the type itself has to be
inspected only once - the callable itself only needs to inspect values.
## Use Cases in this Library
This library includes a few useful implementations of the above pattern.
### Random Data Generation
You can create a callable to generate instances of a given type as follows:
```python
import itertools
from typing import Dict, Generic, Literal, NewType, Optional, Tuple, TypeVar
import attr
from thds.attrs_utils.random.builtin import random_bool_gen, random_int_gen, random_str_gen
from thds.attrs_utils.random.tuple import random_tuple_gen
from thds.attrs_utils.random.attrs import register_random_gen_by_field
from thds.attrs_utils.random import random_gen
@register_random_gen_by_field(
a=random_str_gen(random_int_gen(1, 3), "ABCD"),
b=random_tuple_gen(random_int_gen(0, 3), random_bool_gen(0.99))
)
@attr.define
class Record1:
a: Optional[str]
b: Tuple[int, bool]
ID = TypeVar("ID")
Key = Literal["foo", "bar", "baz"]
@attr.define
class Record2(Generic[ID]):
id: ID
records: Dict[Key, Record1]
MyID = NewType("MyID", int)
ids = itertools.count(1)
random_gen.register(MyID, lambda: next(ids))
random_record = random_gen(Record2[MyID])
print(random_record())
print(random_record())
# Record2(id=1, records={'bar': Record1(a='B', b=(1, True)), 'baz': Record1(a='C', b=(3, True)), 'foo': Record1(a='ACB', b=(1, True))})
# Record2(id=2, records={'foo': Record1(a='A', b=(3, True)), 'bar': Record1(a='ADB', b=(1, True)), 'baz': Record1(a='CAD', b=(0, True))})
```
This can be useful for certain kinds of tests, e.g. round-trip tests, run-time profiling, and
property-based tests. It saves you maintenance because you don't need a sample of "real" data that is
completely up to date with your data model changes, and it saves you time because it's faster to generate
random instances in memory than to fetch a file and deserialize instances from it.
### Validation
There are two kinds of validation provided in this library: jsonschema validation and basic instance
checking.
#### Jsonschema
Jsonschema validation applies to an "unstructured" precursor of your data that would come, e.g. from
parsing json or deserializing data in some other way. This expects a value composed of builtin python
types - dicts, lists, strings, ints, floats, bools, and null values, arbitrarily nested.
To generate a jsonschema for your type (usually a nested record type of some kind), you need only run the
following:
```python
from thds.attrs_utils.jsonschema import to_jsonschema, jsonschema_validator
from my_library import my_module
schema = to_jsonschema(my_module.MyRecordType, modules=[my_module])
check = jsonschema_validator(schema)
check({}) # fails for absence of fields defined in my_module.MyRecordType
```
#### Simple instance checks
Instance checking asserts that the run time types of all references inside some object are as expected.
It is semantically similar to the builtin `isinstance`, but checks all references inside an object
recursively.
```python
from typing import Literal, Mapping
from thds.attrs_utils.isinstance import isinstance as deep_isinstance
Num = Literal["one", "two", "three"]
value = {"one": 2, "three": 4}
# can't use `isinstance` with parameterized types
print(isinstance(value, Mapping))
# True
print(deep_isinstance(value, Mapping[Num, int]))
# True
print(deep_isinstance(value, Mapping[str, int]))
# True
print(deep_isinstance(value, Mapping[Num, str]))
# False
```
This can be useful for validating data from an unknown source, but is generally less useful than
jsonschema validation, because it applies to data that has already been "structured" (assuming that the
input was even in the correct shape for such an operation), and most of the errors it would catch could
also be caught statically and more efficiently via static type checking. We provide it mainly as a
reference implementation for using the `TypeRecursion` framework in a relatively simple but mostly
complete way. We also use it in a property-based test of random data generation; for any type `T`,
`isinstance(random_gen(T)(), T)` should hold.
## Serialization/Deserialization
The `thds.attrs_utils.cattrs` submodule defines useful defaults for serialization/deserialization of
values of various types, and utils to customize behavior for your own custom types, should you need to.
The goal is that the defaults do what you want in 99% of cases.
To use the converters:
```python
from thds.attrs_utils.cattrs import DEFAULT_JSON_CONVERTER
from my_library import my_module
ready_for_json = DEFAULT_JSON_CONVERTER.unstructure(my_module.MyRecordType())
```
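Going the other direction, structuring uses the standard `cattrs` `Converter` API (an assumption based on
the submodule's name and design, not an API we have re-verified here):

```python
# Round-trip: structure the unstructured data back into the attrs class.
instance = DEFAULT_JSON_CONVERTER.structure(ready_for_json, my_module.MyRecordType)
assert instance == my_module.MyRecordType()
```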
Or, if you require custom behavior, you may define your own hooks and use the helper functions to
construct your own converter. Here's an example that registers custom hooks for the `UUID` type, which
you would need if that type were present in your data model:
```python
from typing import Type
from uuid import UUID
from thds.attrs_utils.cattrs import default_converter, setup_converter, DEFAULT_STRUCTURE_HOOKS, DEFAULT_UNSTRUCTURE_HOOKS_JSON
def structure_uuid(s: str, type_: Type[UUID]) -> UUID:
return type_(s)
CONVERTER = setup_converter(
default_converter(),
struct_hooks=[*DEFAULT_STRUCTURE_HOOKS, (UUID, structure_uuid)],
unstruct_hooks=[*DEFAULT_UNSTRUCTURE_HOOKS_JSON, (UUID, str)],
)
```
| text/markdown | null | Trilliant Health <info@trillianthealth.com> | null | null | null | null | [] | [] | null | null | >=3.9 | [] | [] | [] | [
"attrs>=22.2.0",
"returns",
"thds-core",
"typing-inspect",
"cattrs>=22.2.0; extra == \"cattrs\"",
"docstring-parser; extra == \"docstrings\"",
"fastjsonschema; extra == \"jsonschema\""
] | [] | [] | [] | [
"Repository, https://github.com/TrilliantHealth/ds-monorepo"
] | twine/6.1.0 CPython/3.13.3 | 2026-02-19T20:12:25.122579 | thds_attrs_utils-1.7.20260219201205-py3-none-any.whl | 42,111 | 87/04/09142216676f3b93cc171dc3fd6132116443356adb40fda7da81d1401b1f/thds_attrs_utils-1.7.20260219201205-py3-none-any.whl | py3 | bdist_wheel | null | false | ff6e559ba93344c39832659ffe5718b3 | b6da701d7187d17840cc92bd55900c0f36d1e9cf5c346101f53621b73b43fa77 | 870409142216676f3b93cc171dc3fd6132116443356adb40fda7da81d1401b1f | null | [] | 0 |
2.4 | thds.tabularasa | 0.14.10 | Trilliant Health reference data build system. | ## Tabula Rasa
The `thds.tabularasa` package serves to enable version control, validation, and runtime access to tabular
datasets that are required in analytic and production workflows. As such, it encompasses a build system
for generating data, documentation for its derivation process, and code for accessing it.
### The Schema File
To use `tabularasa` in your project, you will first create a single yaml file defining a tabular schema
and build process. This file should exist within your package, not somewhere else in your repo - in other
words it should be package data. It is therefore always read and specified as any package data would be -
with a package name and a path inside said package.
The schema file includes documentation, tabular schema definitions, type information, value-level
constraints (e.g. ranges, string patterns, and nullability), column-level constraints (e.g. uniqueness),
file resource definitions, and build options controlling the output of the build system. Tables are built
from raw data files which may take any form and may be stored either in the repository under version
control or remotely in a blob store such as ADLS (versioned with md5 hashes to ensure build availability
and consistency), but are packaged with the distribution as strictly-typed parquet files and optionally
as a sqlite database archive file. Large package files may be omitted from the base distribution to be
synced with a blob store at run time.
The sections of the schema file are as follows:
- `build_options`: a set of various flags controlling your build process, including code and data
generation
- `tables`: the schema definitions of your tabular data, plus specifications of the inputs and functions
used to derive them
- `types`: any custom constrained column-level types you may wish to define and reference in your tables.
These become both validation constraints expressed as `pandera` schemas, and `typing.Literal` types in
the case of enums, or sometimes `typing.NewType`s depending on your build options.
- `local_data`: specifications of local files in your repo that will be used to build your tables. Files
referenced here are expected to be version-controlled along with your code and so don't require hashes
for integrity checks. Note that tabularasa assumes the file on disk is the official committed version.
It cannot protect against builds with uncommitted local changes to these files.
- `remote_data`: specifications of remote files that will be used to build your tables. Currently only
blob store backends like ADLS are supported. Files referenced here must be versioned with hashes to
ensure build integrity (MD5 is used currently).
- `remote_blob_store`: optional location to store large artifacts in post-build, in case you want to set
a size limit above which your data files will not be packaged with your distribution. They can then be
fetched at run time as needed.
- `external_schemas`: optional specification of `tabularasa` schemas inside other packages, in case you
are integrating with them, e.g. by sharing some types.
To get more detail on the structure of any of these sections, you may refer to the
`thds.tabularasa.schema.metaschema._RawSchema` class, which is an exact field-by-field reflection of the
schema yaml file (with a few enriched fields). Instances of this class are validated and enriched to
become instances of `thds.tabularasa.schema.metaschema.Schema`, which are then used in various build
operations.
### Core Concepts: How Tabularasa Controls Your Data
Before diving into the details, it's important to understand how tabularasa controls and transforms your
data:
#### Column Ordering
**Important**: The column order in your output parquet files is **entirely controlled by the order
defined in schema.yaml**, not by the order in your preprocessor code or source data. Even if your
preprocessor returns columns in a different order, tabularasa will reorder them to match the schema
definition during the build process. This ensures consistency across all data artifacts.
#### Primary Keys and Pandas Index
When working with pandas DataFrames, be aware that **primary key columns become the DataFrame index** and
effectively "disappear" from the regular columns. If you define `primary_key: [id, date]` in your schema,
those columns will be accessible via `df.index` rather than `df['id']` or `df['date']`. This behavior is
automatic and ensures efficient indexing for data access.
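For instance (a minimal standalone sketch that mimics the behavior described above; this is not
tabularasa's actual loader):

```python
import pandas as pd

# Simulate what a generated accessor returns for `primary_key: [id, date]`.
df = pd.DataFrame(
    {"id": [1, 2], "date": ["2024-01-01", "2024-01-02"], "value": [10.0, 20.0]}
).set_index(["id", "date"])

print("id" in df.columns)                  # False: primary keys are not regular columns
print(df.index.get_level_values("id"))     # primary-key values live on the index
print(df.loc[(1, "2024-01-01"), "value"])  # efficient keyed lookup -> 10.0
```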
#### Transient Tables
Tables marked with `transient: true` are intermediate tables used during the build process but are not
included in the final package distribution. Use transient tables for:
- Raw input data that gets processed into final tables
- Intermediate transformation steps
- Large source data that shouldn't be shipped with the package
#### External Data Philosophy
Tabularasa follows a fundamental principle: **builds should never depend on external services**. All data
is snapshotted internally to ensure reproducible builds. This means:
- Data from external sources (APIs, remote CSVs, etc.) should be fetched and stored in version control or
a blob store that you control (specified in the `remote_data` section)
- This ensures builds are deterministic and not affected by external service availability or consistency
### The Data Interfaces
The code generation portion of the build system can generate interfaces for loading the package parquet
data as `attrs` records or `pandas` dataframes (validated by `pandera` schemas), and for loading `attrs`
records from a `sqlite` archive via indexed queries on specific sets of fields.
The code for all modules is generated and written at [build time](#building).
### Building
To build your project with `tabularasa`, just run
```bash
tabularasa codegen
tabularasa datagen
```
from the project root, followed by the invocation of your standard build tool (`poetry`, `setuptools`,
etc).
This will generate all source code interfaces and package data according to various options specified in
the `build_options` section of the [schema file](#the-schema-file). Note that no code is written unless
the [AST](https://en.wikipedia.org/wiki/Abstract_syntax_tree) of the generated python code differs from
what is found in the local source files. This allows the code generation step to avoid conflict with code
formatters such as `black`, since these change only the formatting and not the AST of the code.
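A minimal sketch of that AST-equality check (an illustration of the idea, not tabularasa's exact
implementation):

```python
import ast

def same_ast(old_source: str, new_source: str) -> bool:
    # Compare parsed structure, ignoring formatting-only differences.
    return ast.dump(ast.parse(old_source)) == ast.dump(ast.parse(new_source))

assert same_ast("x=1", "x = 1")      # reformatting alone does not trigger a rewrite
assert not same_ast("x=1", "x = 2")  # a semantic change does
```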
### Adding new package data
To add a new table to the schema, place a new named entry under the `tables` section in your
[schema file](#the-schema-file). Source data for the table is specified in the table's `dependencies`
section. There are multiple ways to specify the source data, including version-controlled
repository-local files and remote files. Source data can be a standard tabular text format (CSV, TSV,
etc) which can be translated automatically into the table's typed schema, or some other data format that
requires processing using a user-defined function specified under a `preprocessor` key.
The simplest way to add new reference data to version control is to simply place a CSV in your repo, and
define the schema of that data in the `tables` section of your [schema file](#the-schema-file), pointing
the `dependencies.filename` of the table to the new CSV file.
Note that this direct file reference approach works only with files that can unambiguously be interpreted
into the table's schema. Currently this is implemented for character-delimited text files such as CSV/TSV
(with many exposed options for parsing), but could be extended to other tabular formats in the future.
#### Choosing Between Local and Remote Data
When deciding how to store your source data, consider these trade-offs:
**Local Data Storage Patterns**
Tabularasa supports two distinct patterns for managing local data files, each serving different
organizational needs. The **direct file reference pattern** allows tables to specify their data source
directly through `dependencies.filename`, providing a straightforward path to a file in the repository.
When you need to update the data, you simply overwrite the file and run
`tabularasa datagen <your-table-name>` without making any schema changes. The framework reads the file
directly using the provided path along with any parsing parameters specified in the dependencies block.
This approach works best for data files that are specific to a single table and can be parsed
unambiguously, requiring no custom code to interpret.
The **shared data pattern** using the `local_data` section provides a more structured approach for
managing data sources that multiple tables depend on. With this pattern, you define a named entry in the
`local_data` section of your schema that contains not just the filename but comprehensive metadata
including the data authority, source URL, update frequency, and documentation. Tables then reference
these entries using `dependencies.local: [entry_name]`. When the preprocessor function executes, it
receives a `LocalDataSpec` object that provides access to both the file (via the `full_path` property)
and all associated metadata. This pattern is best when multiple tables need to derive data from the same
source file, such as when several tables extract different subsets from a comprehensive dataset. This
centralized definition allows consistency across all dependent tables and makes it easier to track data
provenance and update schedules. The same metadata fields are available on all file reference types
(direct references, `local_data`, and `remote_data`) since they all inherit from the same base schema.
Both patterns store files in version control, making them ideal for smaller datasets that require
frequent updates. There is no difference in documentation level or reusability between the two
patterns—both require the same metadata and can be referenced throughout the derivation DAG (in the case
of the direct reference pattern you would reference the derived _table_ rather than the raw file). The
key difference is organizational: direct references provide a quick way to define a table from a single
file inline, while `local_data` provides centralized definitions when multiple tables derive from the
same source file. Larger files should use remote storage instead.
**Remote Data Storage in Blob Store**
Remote data storage through a blob store (e.g., ADLS) addresses the scalability limitations of local file
storage. When source datasets are too large for version control, the `remote_data` section of the schema file
allows you to reference files stored in a blob store. Each remote data entry specifies paths to files in
the blob store along with their MD5 hashes to ensure the correct version is downloaded during builds.
While this approach keeps the repository lean, it requires a more structured workflow: you must upload
source files to the blob store, calculate their MD5 hashes, and specify them in the schema. This
additional complexity makes remote storage most suitable for stable, infrequently changing source
datasets where the overhead of managing source file hashes is justified by the benefits of centralized
storage and repository size optimization.
Note that MD5 hash management differs by context: source files in `remote_data` require manual MD5 hash
specification, while the derived parquet files underlying the tables in the schema have their MD5 hashes
automatically calculated and updated by `tabularasa datagen`. Local source files referenced through
`local_data` or `dependencies.filename` do not require MD5 hashes since they are assumed to be versioned
by your version control system.
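When you do need an MD5 for a `remote_data` entry, a short helper like this computes it (a generic
sketch; tabularasa does not ship or require this exact code):

```python
import hashlib

def file_md5(path: str) -> str:
    h = hashlib.md5()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):  # read in 1 MiB chunks
            h.update(chunk)
    return h.hexdigest()

print(file_md5("large_file_2024_01.parquet"))  # record this value in schema.yaml
```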
Example workflow for monthly updates with local data:
```yaml
# schema.yaml - Direct file reference pattern
tables:
my_monthly_data:
dependencies:
filename: build_data/monthly_data.csv
last_updated: 2024-01-15
update_frequency: Monthly
doc: "Monthly update: Download new CSV → overwrite file → datagen"
```
Example of shared local_data pattern:
```yaml
# schema.yaml - Shared data pattern
local_data:
census_data: # Define once
filename: build_data/census_2023.xlsx
url: https://census.gov/data/...
authority: US Census Bureau
last_updated: 2023-07-01
update_frequency: Yearly
tables:
state_demographics:
dependencies:
local: [census_data] # Reference from multiple tables
county_statistics:
dependencies:
local: [census_data] # Same source, consistent metadata
```
Example workflow for remote data:
```yaml
# schema.yaml
remote_data:
my_large_data:
paths:
- name: data/large_file_2024_01.parquet
md5: abc123... # Must update this hash for each new version
tables:
large_table:
dependencies:
remote: [my_large_data] # Reference remote data
```
When changes are made to a table in `schema.yaml`, either the schema or the source data, be sure to
update the associated derived package data file by running `tabularasa datagen <table-name>`. The table's
MD5 hash, and those of any dependent derived tables downstream of it, will then be automatically updated
to reflect the new generated parquet file either during this step or during pre-commit hook execution.
See the [package data generation section](#generating-package-data) for more information on this.
To understand all the ways of defining a table or file dependency, take a look at the schema file data
model defined in the `thds.tabularasa.schema.metaschema._RawSchema` class. This represents an exact
field-by-field reflection of the contents of the schema yaml file.
### The CLI
When installed, the `thds.tabularasa` package comes with a CLI, invoked as `tabularasa` or
`python -m thds.tabularasa`. In the examples that follow, we use the `tabularasa` invocation. This CLI
supplies various utils for development tasks like building and fetching data, generating code and docs,
and checking package data integrity.
Each of these functionalities can be invoked via
```
tabularasa <subcommand-name>
```
for the subcommand that accomplishes the intended task.
The CLI can be made more verbose by repeating the `-v` flag as many times as necessary just after
`tabularasa` and before the name of the subcommand being invoked. If you should want them, the CLI can
self-install its own set of bash-compatible completions by running
`tabularasa --install-bash-completions`.
Documentation for the main CLI or any subcommand can be accessed in the standard way with `--help`:
```bash
tabularasa --help # main CLI args and subcommand list
tabularasa <command-name> --help # help for command identified by <command-name> - its purpose and args
```
The CLI is by default configured by a config file (JSON or YAML) in the working directory called
`tabularasa.yaml`. This just supplies a few required pieces of information, namely the name of the
`package` that you're interacting with and the `schema_path` relative to the package root, so that you
don't have to pass them as options on the command line. Most other important information relevant to the
CLI operations is contained in the [schema file](#the-schema-file) itself, especially the `build_options`
section.
To use the CLI in another project as a build tool, you will need to specify `thds.tabularasa[cli]` as
your dependency. The `cli` extra comes with some dependencies that are only needed in the context of the
CLI which are somewhat heavy and so best left out of your environment if you don't explicitly need them.
Of course if you need the CLI as a development dependency but you only need the _library_ at run time,
you may specify just `thds.tabularasa` as your main dependency and `thds.tabularasa[cli]` as your dev
dependency.
Some useful subcommands of the CLI are documented below.
#### Generating package data
If you're adding new tables or updating the data in a set of tables, especially when using a custom
preprocessor, you will likely want to repeatedly regenerate the package data parquet files for those
tables in order to confirm that the build is working as intended.
To do so, run
```bash
tabularasa datagen <table-name-1> <table-name-2> ...
```
All of the tables you specify _and_ all of their dependents downstream in the computational DAG will thus
be re-computed. This saves you from the work of keeping track of the downstream dependents, a tedious and
error-prone task. It ensures that all your package data and associated hashes are up to date, which
finally ensures that your peers will have up-to-date data when they get a cache miss after pulling your
code changes.
Any derived table upstream of those you request to build with `datagen` will be auto-synced from the blob
store prior to the build running, if available, saving you the wait time of re-building them needlessly
in case they're not already in your working tree.
If you'd like to better understand what you changed after any `tabularasa datagen` invocation before you
commit the result, you can run `tabularasa data-diff`. By default, this diffs the data as versioned in
the working tree against the data as versioned in the HEAD commit. If you've already committed, you can
pass a ref to the previous commit, e.g. `tabularasa data-diff HEAD~`. This will show summary stats
describing the changes, such as the number of rows added, removed, and modified for each updated table.
With the `--verbose` flag added, you can see more detail, for instance the row counts for each row-level
pattern of updates (e.g. in 10 rows, columns 'A' and 'B' were updated, in 5 rows, column 'C' was nulled,
in 3 rows, column 'A' was filled, etc.).
If you wish to regenerate _all_ package data tables from scratch, you can run
```bash
tabularasa datagen
```
This will remove _all_ pre-existing package data files and re-generate them. This is an extreme measure
and should be used sparingly; in most cases, you will want to regenerate only those specific tables whose
source data or derivation logic you know has changed.
Note that if you have just cloned the repo or pulled a branch and wish to get your local package data
up-to-date with the state on that branch, you don't need to re-derive all the data! Just
[sync with the blob store](#syncing-with-the-blob-store) instead.
#### Inspecting auto-generated code
If you'd like to review the code changes that would result from any change to the schema or compilation
modules without over-writing the existing generated source (as a [build](#building) could do), there is a
simple CLI command for inspecting it.
To inspect e.g. the auto-generated pandas code for the current repo state, run
```bash
tabularasa compile pandas
```
The code will print to stdout. Simply replace `pandas` with `attrs`, `sqlite`, `attrs-sqlite`, or
`pyarrow` to see the code generated for those use cases.
#### Checking integrity of local built reference data
The build pipeline uses md5 hashes to prevent expensive re-builds in local runs. When the
[build](#building) finishes, you will have several parquet files and possibly a sqlite database archive
present in your file tree. Each of the parquet files should have an associated md5 checksum in
`schema.yaml`, indicating the version of the data that should result from the build.
To check the status of your local built data files with respect to the `schema.yaml` hashes, you can run
```bash
tabularasa check-hashes
```
**Important**: The following shouldn't be required in normal usage: use with care and only if you know
what you're doing!
To sync the hashes in `schema.yaml` with those of your generated data you can run
```bash
tabularasa update-hashes
```
By default this will also update your generated data accessor source code, which has the hashes embedded
in order to enable run-time integrity checks on fetch from the blob store, if you're using one. In
general, you _should not need to do this manually_, however, since `tabularasa datagen` will update the
hashes for you as part of its normal operation.
#### Syncing with the Blob Store
**Important**: The `push`, `pull`, and `sync-blob-store` commands work **only with final parquet
tables**, not with input source data. Input data (specified in `local_data` or `remote_data`) is only
accessed during `datagen` execution.
Under the section `remote_blob_store` in [the schema file](#the-schema-file), you may optionally specify
a remote cloud storage location where built package data artifacts are stored. In case
`build_options.package_data_file_size_limit` is set, the package in question will not come with any
package data files exceeding that limit in size. These _will_ be available in the remote blob store, and
in case they are not present when one of the [data loaders](#the-data-interfaces) is invoked, will be
downloaded into the package.
Should your use case require the data to be locally available at run time, e.g. if you lack connectivity,
then you may fetch all the package data tables that were omitted in the [build](#building) by running
```bash
tabularasa sync-blob-store --down
```
or just
```bash
tabularasa pull
```
If you're using a remote blob store for large files, you will want to include the invocation
```bash
tabularasa sync-blob-store --up
```
or just
```bash
tabularasa push
```
somewhere in your CI build scripts after the [build](#building) completes and before you publish your
package, to ensure that those files are available at run time to end users when needed.
#### Initializing the SQLite Database
To initialize the SQLite database (see [interfaces](#the-data-interfaces)), should one be needed but not
shipped as package data (as specified in the `build_options` section of
[the schema file](#the-schema-file)), you may run
```bash
tabularasa init-sqlite
```
This will create the SQLite database archive in your installed package directory. For an added level of
safety you may pass `--validate` (to validate the inserted data against the constraints defined in
[the schema file](#the-schema-file) as expressed as [pandera schemas](#the-data-interfaces)), but these
will usually be statically verified once at build time and guaranteed correct before shipping.
#### Visualizing the Data Dependency DAG
The `dag` command creates a graph visualization of your project's dependency DAG and subsets thereof. The
visualization opens in a browser (SVG by default); if you pass `--format png`, for example, it will open
in an image viewer.
To visualize your data dependency DAG, from your project root run
```bash
tabularasa dag # generate full DAG
tabularasa dag [table-name(s)] # generate DAG for specific tables
```
> [!NOTE]
> This requires the `graphviz` source and binaries to be available on your system (`graphviz` is a C
> library that doesn't come packaged with the python wrapper `pygraphviz`). The easiest way to ensure
> this if you have a global anaconda env is to run `conda install graphviz`. However you proceed, you can
> verify that `graphviz` is available by running `which dot` and verifying that a path to an executable
> for the `dot` CLI is found (`dot` is one layout algorithm that comes with graphviz, and the one used in
> this feature). Once you have that, you may `pip install pygraphviz` into your working dev environment.
> Refer to the [pygraphviz docs](https://pygraphviz.github.io/documentation/stable/install.html) if you
> get stuck.
## Generating documentation
To generate the documentation for your project, run:
```bash
tabularasa docgen
```
from your project root.
This generates docs in ReStructuredText (rst) format in a directory structure specified in the
`table_docs_path`, `type_docs_path`, and `source_docs_path` fields of the
[schema file](#the-schema-file)'s `build_options` section. As such, these docs are valid as input to the
`sphinx` documentation build tool.
## Memory usage
Your reference data may be fairly large, and in multiprocessing contexts it can be useful to share the
read-only data in memory between processes for the sake of performance.
`tabularasa` builds this in via mem-mapped SQLite for the most part, but the default Python installation
of SQLite [limits](https://www.sqlite.org/mmap.html) the amount of memory-mapped data to 2GB per database
file.
A project called `pysqlite3` packages the same shim code alongside the ability to provide a different
shared library for SQLite, and their built binary package
[increases](https://github.com/coleifer/pysqlite3/blob/master/setup.py?ts=4#L107) the memory cap to 1TB.
Currently, the precompiled package is only available for Linux.
The good news: if you want more reference data to be shared between processes, all you need to do is
successfully install a version of `pysqlite3` into your Python environment. If you're on Linux, likely
you can accomplish this with a simple `pip install pysqlite3-binary`. On a Mac, you'll need to follow
their [instructions](https://github.com/coleifer/pysqlite3#building-with-system-sqlite) for linking
against a system-installed SQLite, or build against a statically-linked library and then install from
source.
If `pysqlite3` is installed in your Python environment, it will be used within `tabularasa` by default.
To disable this behavior, set the `REF_D_DISABLE_PYSQLITE3` environment variable to a non-empty string
value.
By default, with `pysqlite3` installed, 8 GB of RAM will be memory-mapped per database file. With the
standard `sqlite3` module, the limit will be hard-capped at 2 GB. If you want to change this default, you
can set the `REF_D_DEFAULT_MMAP_BYTES` environment variable to an integer number of bytes.
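The commonly used import pattern looks like the following (a sketch of the general technique; tabularasa
applies this logic for you internally, so you do not need to write it yourself):

```python
try:
    import pysqlite3 as sqlite3  # prebuilt binaries raise the mmap cap to ~1 TB
except ImportError:
    import sqlite3  # stdlib build: mmap capped at 2 GB

conn = sqlite3.connect("reference.db")
conn.execute(f"PRAGMA mmap_size = {8 * 1024 ** 3}")  # request 8 GB of memory-mapping
```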
| text/markdown | null | Trilliant Health <info@trillianthealth.com> | null | null | null | null | [] | [] | null | null | >=3.10 | [] | [] | [] | [
"attrs>=22.2",
"cattrs>=22.2",
"filelock",
"networkx>=3.0",
"numpy",
"packaging",
"pandas>=1.5",
"pandera<0.24,>=0.20",
"pydantic>=2.0.0",
"pyarrow>=10.0",
"pyyaml>=6.0.1",
"setuptools>=66.1.1",
"thds-adls",
"thds-core",
"typing-extensions",
"black; extra == \"autoformat\"",
"isort; ... | [] | [] | [] | [
"Repository, https://github.com/TrilliantHealth/ds-monorepo"
] | twine/6.1.0 CPython/3.13.3 | 2026-02-19T20:12:23.230853 | thds_tabularasa-0.14.10-py3-none-any.whl | 122,812 | 2a/ff/d540f208dbb25abc1e5e31d6015d7f997c1fc0751d53227aa5aa08a13c56/thds_tabularasa-0.14.10-py3-none-any.whl | py3 | bdist_wheel | null | false | ad1d544ab347836c3a1c867b858173b0 | c53a5a949de1500b567a53202f61e757ca60d2927cf18c8f0612e3d49827d757 | 2affd540f208dbb25abc1e5e31d6015d7f997c1fc0751d53227aa5aa08a13c56 | null | [] | 0 |
2.4 | thds.adls | 4.5.20260219201157 | ADLS tools | # thds.adls
A high-performance Azure Data Lake Storage (ADLS Gen2) client for the THDS monorepo. It wraps the Azure
SDK with hash-aware caching, azcopy acceleration, and shared client/credential plumbing so applications
can transfer large blob datasets quickly and reliably.
## Highlights
- **Environment-aware paths first:** Almost every consumer starts by importing `fqn`, `AdlsFqn`, and
`defaults.env_root()` to build storage-account/container URIs that follow the current THDS environment.
- **Cache-backed reads:** `download_to_cache` is the standard entry point for pulling blobs down with a
verified hash so local workflows, tests, and pipelines can operate on read-only copies.
- **Bulk filesystem helpers:** `ADLSFileSystem` powers scripts and jobs that need to walk directories,
fetch batches of files, or mirror hive tables without re-implementing Azure SDK plumbing.
- **Spark/Databricks bridges:** `abfss` and `uri` conversions keep analytics code agnostic to whether it
needs an `adls://`, `abfss://`, `https://`, or `dbfs://` view of the same path.
- **Composable utilities:** Higher-level modules (cache, upload, copy, list) layer on top of those
imports so teams can opt into more advanced behavior without leaving the public API surface.
## Key Modules
| Component | Typical usage in the monorepo |
| ------------------------------------- | ---------------------------------------------------------------------------------------------------------- |
| `fqn` | Parse, validate, and join ADLS paths; used when materializing model datasets and configuring pipelines. |
| `AdlsFqn` | Strongly typed value passed between tasks and tests to represent a single blob or directory. |
| `defaults` / `named_roots` | Resolve environment-specific storage roots (`defaults.env_root()`, `named_roots.require(...)`). |
| `download_to_cache` (`cached` module) | Bring a blob down to the shared read-only cache before analytics, feature builds, or test fixtures run. |
| `ADLSFileSystem` (`impl` module) | Fetch or list entire directory trees and integrate with caching inside scripts and notebooks. |
| `abfss` | Translate `AdlsFqn` objects into `abfss://` URIs for Spark/Databricks jobs. |
| `uri` | Normalize `adls://`, `abfss://`, `https://`, and `dbfs://` strings into `AdlsFqn` values (and vice versa). |
| `global_client` / `shared_credential` | Shared, fork-safe Azure clients and credentials backing the public helpers above. |
## Example Usage
1. Use the caching helpers and Source integration:
```python
from thds.adls import cached, upload, source
cache_path = cached.download_to_cache("adls://acct/container/path/to/file")
src = upload("adls://acct/container/path/out.parquet", cache_path)
verified = source.get_with_hash(src.uri)
```
1. For CLI usage, run (from repo root):
```bash
uv run python -m thds.adls.tools.download adls://acct/container/path/file
```
## Operational Notes
- **Hash metadata:** Uploads attach `hash_xxh3_128_b64` automatically when the bytes are known. Download
completion back-fills missing hashes when permissions allow.
- **Locks and concurrency:** Large transfers acquire per-path file locks to keep azcopy instances
cooperative. Global HTTP connection pools default to 100 but are configurable via `thds.core.config`.
- **Error handling:** `BlobNotFoundError` and other ADLS-specific exceptions translate into custom error
types to simplify retries and diagnostics.
- **Extensibility:** Additional hash algorithms can be registered by importing dependent packages (e.g.,
`blake3`). Named roots can be populated dynamically via environment-specific modules
(`thds.adls._thds_defaults` hook).
| text/markdown | null | Trilliant Health <info@trillianthealth.com> | null | null | MIT | null | [] | [] | null | null | >=3.9 | [] | [] | [] | [
"aiohttp>=3.8.1",
"aiostream>=0.4.5",
"azure-identity>=1.9",
"azure-storage-file-datalake>=12.6",
"blake3",
"filelock>=3.0",
"xxhash",
"thds-core"
] | [] | [] | [] | [
"Repository, https://github.com/TrilliantHealth/trilliant-data-science"
] | twine/6.1.0 CPython/3.13.3 | 2026-02-19T20:12:21.301476 | thds_adls-4.5.20260219201157-py3-none-any.whl | 64,222 | f2/15/f673e86b2821626e633fa27ddbb716a96726f7a7787fb40ffeadb6e96b19/thds_adls-4.5.20260219201157-py3-none-any.whl | py3 | bdist_wheel | null | false | 010300312b59af57dc01abaca1ce8226 | b1822b4a4c6c10d1b88eba60c21d929c08599712961a1c2e18b6cdc1059b1cd7 | f215f673e86b2821626e633fa27ddbb716a96726f7a7787fb40ffeadb6e96b19 | null | [] | 0 |
2.4 | taskcluster-taskgraph | 19.2.1 | Build taskcluster taskgraphs |
.. image:: https://firefox-ci-tc.services.mozilla.com/api/github/v1/repository/taskcluster/taskgraph/main/badge.svg
:target: https://firefox-ci-tc.services.mozilla.com/api/github/v1/repository/taskcluster/taskgraph/main/latest
:alt: Task Status
.. image:: https://codecov.io/gh/taskcluster/taskgraph/branch/main/graph/badge.svg?token=GJIV52ZQNP
:target: https://codecov.io/gh/taskcluster/taskgraph
:alt: Code Coverage
.. image:: https://badge.fury.io/py/taskcluster-taskgraph.svg
:target: https://badge.fury.io/py/taskcluster-taskgraph
:alt: Pypi Version
.. image:: https://readthedocs.org/projects/taskcluster-taskgraph/badge/?version=latest
:target: https://taskcluster-taskgraph.readthedocs.io/en/latest/?badge=latest
:alt: Documentation Status
.. image:: https://img.shields.io/badge/license-MPL%202.0-orange.svg
:target: http://mozilla.org/MPL/2.0
:alt: License
Taskgraph
=========
Taskgraph is a Python library to generate graphs of tasks for the `Taskcluster
CI`_ service. It is the recommended approach for configuring tasks once your
project outgrows a single `.taskcluster.yml`_ file. It is what powers the more
than 30,000 tasks (and counting) that make up Firefox's CI.
For more information and usage instructions, `see the docs`_.
How It Works
------------
Taskgraph leverages the fact that Taskcluster is a generic task execution
platform. This means that tasks can be scheduled via its `comprehensive API`_,
and aren't limited to being triggered in response to supported events.
Taskgraph leverages this execution platform to allow CI systems to scale to any
size or complexity.
1. A *decision task* is created via Taskcluster's normal `.taskcluster.yml`_
file. This task invokes ``taskgraph``.
2. Taskgraph evaluates a series of YAML-based task definitions (similar to
   those that other CI offerings provide).
3. Taskgraph applies transforms on top of these task definitions. Transforms
   are Python functions that can programmatically alter or even clone a task
   definition (see the sketch after this list).
4. Taskgraph applies some optional optimization logic to remove unnecessary
tasks.
5. Taskgraph submits the resulting *task graph* to Taskcluster via its API.
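As a rough illustration of step 3, a transform is an ordinary Python generator
function registered on a ``TransformSequence``. The sketch below is a hedged
example: the task fields shown are illustrative, and only ``TransformSequence``
and ``config.params`` are taken from Taskgraph's documented API.

.. code-block:: python

   from taskgraph.transforms.base import TransformSequence

   transforms = TransformSequence()

   @transforms.add
   def add_project_env(config, tasks):
       # Each transform receives the graph config and a stream of task
       # definitions, and yields (possibly altered or cloned) definitions.
       for task in tasks:
           env = task.setdefault("worker", {}).setdefault("env", {})
           env["PROJECT"] = config.params["project"]
           yield task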
Taskgraph's combination of declarative task configuration and programmatic
alteration is what allows it to support CI systems of any scale. It is the
library that powers the 30,000+ tasks making up `Firefox's
CI`_.
CI`_.
.. _Taskcluster CI: https://taskcluster.net/
.. _comprehensive API: https://docs.taskcluster.net/docs/reference/platform/queue/api
.. _.taskcluster.yml: https://docs.taskcluster.net/docs/reference/integrations/github/taskcluster-yml-v1
.. _Firefox's CI: https://treeherder.mozilla.org/jobs?repo=mozilla-central
.. _see the docs: https://taskcluster-taskgraph.readthedocs.io
Installation
------------
Taskgraph supports Python 3.9 and up, and can be installed from PyPI:
.. code-block::
pip install taskcluster-taskgraph
Alternatively, the repo can be cloned and installed directly:
.. code-block::
git clone https://github.com/taskcluster/taskgraph
cd taskgraph
pip install .
In both cases, it's recommended to use a Python `virtual environment`_.
.. _virtual environment: https://docs.python.org/3/tutorial/venv.html
Get Involved
------------
If you'd like to get involved, please see our `contributing docs`_!
.. _contributing docs: https://github.com/taskcluster/taskgraph/blob/main/CONTRIBUTING.rst
| text/x-rst | null | Mozilla Release Engineering <release+taskgraph@mozilla.com> | null | null | null | null | [
"Development Status :: 5 - Production/Stable",
"Environment :: Console",
"License :: OSI Approved :: Mozilla Public License 2.0 (MPL 2.0)",
"Programming Language :: Python :: 3.9",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12... | [] | null | null | >=3.9 | [] | [] | [] | [
"appdirs>=1.4",
"cookiecutter~=2.1",
"json-e>=2.7",
"mozilla-repo-urls>=0.1.1",
"msgspec>=0.20.0",
"pyyaml>=5.3.1",
"redo>=2.0",
"requests>=2.25",
"slugid>=2.0",
"taskcluster-urls>=11.0",
"taskcluster>=92.0",
"voluptuous>=0.12.1",
"zstandard>=0.23.0; extra == \"load-image\"",
"orjson>=3; e... | [] | [] | [] | [
"Repository, https://github.com/taskcluster/taskgraph",
"Issues, https://github.com/taskcluster/taskgraph/issues"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-19T20:12:20.082686 | taskcluster_taskgraph-19.2.1.tar.gz | 556,928 | 4a/23/4fcc3f3e14e1cd0b486726a8f4b96541f26101956c604c777ecde0794f09/taskcluster_taskgraph-19.2.1.tar.gz | source | sdist | null | false | 37b8aea701cc8903d1ed0b41409ae567 | 67722afa6bfb743c57885988fedf02d78a07b7aae436ee9a8521fa2582a002d5 | 4a234fcc3f3e14e1cd0b486726a8f4b96541f26101956c604c777ecde0794f09 | null | [
"LICENSE"
] | 1,068 |
2.4 | thds.core | 1.50.20260219201154 | Core utilities. | # core Library
The monorepo successor to `core`
## Development
If making changes to the library please add an entry to `CHANGES.md`, and if the change is more than a
patch, please bump the version in `pyproject.toml` accordingly.
## Config
`thds.core.config` provides a general-purpose config system designed to regularize how we implement
configuration both for libraries and applications. Please see its [README here](src/thds/core/CONFIG.md)!
## Logging config
This library handles configuration of all DS loggers. By default, all INFO-and-above messages are written
(to `stderr`).
### Default output formatter
By default we use a custom formatter intended to make things maximally human-readable.
If you want structured logs, you might try setting `THDS_CORE_LOG_FORMAT=logfmt`, or `json` if you want
JSON logs.
### File format
To customize what level different modules are logged at, you should create a file that looks like this:
```
[debug]
thds.adls.download
thds.core.link
[warning]
thds.mops.pure.pickle_runner
thds.mops.k8s.watch
```
You may also/instead add an `*` to change the global default log level, e.g.:
```
[warning]
*
```
> The wildcard syntax is not a generic pattern-matching facility; it _only_ matches the root logger.
>
> However, if you wish to match a subtree of the logger hierarchy, this is built in with Python loggers;
> simply configure `thds.adls` under `[debug]` and all otherwise-unconfigured loggers under `thds.adls`
> will now log at the DEBUG level.
### `THDS_CORE_LOG_LEVELS_FILE` environment variable
Provide the path to the above-formatted file to `thds.core` via the `THDS_CORE_LOG_LEVELS_FILE`
environment variable. You may wish to create this file and then set its path via exported envvar in your
`.bash/zshrc` so that you can permanently tune our logging to meet your preferences.
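For example (a minimal sketch; the file path and contents here are illustrative):

```python
import os
import tempfile

# Write a levels file in the format shown above and point thds.core at it.
# Set the variable before thds.core logging is configured (e.g. at the top of
# your entry point), or export it from your shell rc instead.
levels = "[debug]\nthds.adls.download\n\n[warning]\n*\n"
path = os.path.join(tempfile.gettempdir(), "thds-log-levels.txt")
with open(path, "w") as f:
    f.write(levels)
os.environ["THDS_CORE_LOG_LEVELS_FILE"] = path
```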
| text/markdown | null | Trilliant Health <info@trillianthealth.com> | null | null | MIT | null | [] | [] | null | null | >=3.9 | [] | [] | [] | [
"setuptools",
"typing-extensions"
] | [] | [] | [] | [
"Repository, https://github.com/TrilliantHealth/trilliant-data-science"
] | twine/6.1.0 CPython/3.13.3 | 2026-02-19T20:12:18.937693 | thds_core-1.50.20260219201154-py3-none-any.whl | 114,579 | ab/5d/7c7eb07a8de3b142218b28fee71e4f0d2c39eb0c9d8454280437a6189052/thds_core-1.50.20260219201154-py3-none-any.whl | py3 | bdist_wheel | null | false | cb7e04d8c14e1720b318d11693ef4f8d | d450fe6efb7241d66f9a9be91b3bd0abe3f1cfec8e3c90624619e9b926b9d395 | ab5d7c7eb07a8de3b142218b28fee71e4f0d2c39eb0c9d8454280437a6189052 | null | [] | 0 |
2.4 | thds.atacama | 1.2.20260219201150 | A Marshmallow schema generator for `attrs` classes. Inspired by `desert`. | # atacama
A Marshmallow schema generator for `attrs` classes.
Inspired by `desert`.
## Why
`desert` seems mostly unmaintained. It is also surprisingly small (kudos to the authors), which makes it
a reasonable target for forking and maintaining.
However, we think the (widespread) practice of complecting the data class definition with its
serialization schema is unwise. While this is certainly DRY-er than having to rewrite the entire Schema,
it's (critically) not DRY at all if you ever want to have different de/serialization patterns depending
on the data source.
In particular, `atacama` is attempting to optimize for the space of Python applications that serve APIs
from a database. These are common situations where serialization and deserialization may need to act
differently, and there's value in being able to cleanly separate those without redefining the `attrs`
class itself.
`cattrs` is the prior art here, which mostly dynamically defines all of its structure and unstructure
operations, and allows for different Converters to be used on the same `attrs` classes. However `cattrs`
does not bring the same level of usability as Marshmallow when it comes to various things that are
important for APIs. In particular, we prefer Marshmallow for its:
- validation, which we find to be more ergonomic in the Marshmallow-verse.
- ecosystem utilities such as OpenAPI spec generation from Marshmallow Schemas.
As of this writing, we are unaware of anything that `cattrs` can do that we cannot accomplish in
Marshmallow, although for performance and other reasons, there may be cases where `cattrs` remains a
better fit!
Thus `atacama`. It aims to provide fully dynamic Schema generation, while retaining 100% of the
generality offered by Marshmallow, in a form that avoids introducing complex shim APIs that no longer
look and feel like Marshmallow itself.
## What
`atacama` takes advantage of Python keyword arguments to provide as low-boilerplate an interface as
possible. Given:
```python
from datetime import datetime, date
import attrs
@attrs.define
class Todo:
id: str
owner_id: str
created_at: datetime
priority: float = 0.0
due_on: None | date = None
```
For such a simple example, let's assume the following Schema validation rules, but only for when the data
comes in via the API:
- `created_at` must be before the current moment
- `priority` must be in the range \[0.0, 10.0\]
- `due_on`, if present, must be before 2038, when the Unix epoch will roll over and all computers will
die a fiery death.
```python
from datetime import date, datetime
from typing import Type

import marshmallow as ma

from atacama import neo  # neo is the recommended default SchemaGenerator

def before_now(dt: datetime) -> bool:
    return dt <= datetime.now()

def before_unix_death(d: date) -> bool:
    # note: take care not to shadow the `date` type with the parameter name
    return d < date(2038, 1, 19)

TodoFromApi: Type[ma.Schema] = neo(
    Todo,
    created_at=neo.field(validate=before_now),
    priority=neo.field(validate=ma.validate.Range(min=0.0, max=10.0)),
    due_on=neo.field(validate=before_unix_death),
)
TodoFromDb: Type[ma.Schema] = neo(
Todo,
created_at=neo.field(data_key='created_ts'),
)
# both of the generated Schemas are actually Schema _classes_,
# just like a statically defined Marshmallow class.
# In most cases, you'll want to instantiate an object of the class
# before use, e.g. `TodoFromDb().load(...)`
```
Note that nothing that we have done here requires
- modifying the `Todo` class in any way.
- repeating any information that can be derived _from_ the `Todo` class (e.g. that `due_on` is a `date`,
or that it is `Optional` with a default of `None`).
- complecting the data source and validation/transformation for that source with the core data type
itself, which can easily be shared across both the database and the API.
### Recursive Schema and Field generation
The first example demonstrates what we want and why we want it, but does not prove generality for our
approach. Classes are by nature recursively defined, and Schemas must also be.
Happily, `atacama` supports recursive generation and recursive customization at each layer of the
class+`Schema`.
There are five fundamental cases for every attribute in a class which is desired to be a `Field` in a
Schema. Two of these have already been demonstrated. The 5 cases are the following:
1. Completely dynamic `Field` and recursive `Schema` generation.
- This is demonstrated by `id` and `owner_id` in our `Todo` example. We told `atacama` nothing about
them, and reasonable Marshmallow Fields with correct defaults were generated for both.
2. A customized `Field`, with recursive `Schema` generation as needed.
- This is demonstrated by `created_at`, `priority`, and `due_on` in our `Todo` example. Much information
can be dynamically derived from the annotations in the `Todo` class, and `atacama` will do so. However,
we also wished to _add_ information to the generated `Field`, and we can trivially do so by supplying
keyword arguments normally accepted by `Field` directly to the `field` method of our `SchemaGenerator`.
These keyword arguments can even technically override the keyword arguments for `Field` derived by
`atacama` itself, though that would in most cases be a violation of your contract with the readers of
your class definition and is therefore not recommended. The `Field` _type_ will still be chosen by
`atacama`, so if for some reason you want more control than is being offered by `atacama`, that takes
you to option #3:
3. Completely static `Field` definition.
- In some cases, you may wish to opt out of `atacama` entirely, starting at a given attribute. In this
case, simply provide a Marshmallow `Field` (which is by definition fully defined recursively), and
`atacama` will respect your intention by placing the `Field` directly into the `Schema` at the
specified point.
4. A statically defined `Schema`.
- This is similar to case 2, except that, by providing a Marshmallow `Schema` for a nested attribute, you
are confirming that you want `atacama` to infer the "outer" information about that attribute, including
that it is a `Nested` `Field`, to perform all the standard unwrapping of Generic and Union types, and
to assign the correct default based on your `attrs` class definition. For instance, an attribute that
exhibits the definition `Optional[List[YourClass]] = None` would allow you to provide a nested `Schema`
defining only how to handle `YourClass`, while still generating the functionality around the default
value None and expecting a `List` of `YourClass`.
- In particular, this would be an expected case when you have a need to generate a `Schema` for direct
deserialization of a class that is also used in a parent class and `Schema`, but where both the parent
and child Schema share all the same custom validation, etc. By generating the nested `Schema` and then
assigning it at the proper location within the parent `Schema`, you can easily reuse all of the
customization from the child generation.
5. A nested `Schema` _generator_.
- The most common use case for this will be when it is desirable to customize the generated `Field` of a
nested class. In order to provide an API that continues to privilege keyword arguments as a way of
'pathing' to the various parts of the `Schema`, we must first capture any keyword arguments specific to
the `Nested` `Field` that will be generated, and from there on we can allow you to provide names
pointing to attributes in the nested class.
- SchemaGenerators are objects created by users who wish to customize `Schema` generation in particular
ways. The `Meta` class within a Marshmallow `Schema` changes certain behaviors across all its fields.
While `atacama` provides several default generators, you may wish to create your own. Regardless, the
use case for providing a nested `SchemaGenerator` is more specifically where you wish to make Schemas
with nested Schemas that follow different rules than their parents. This is no issue with `atacama` -
if it finds a nested `SchemaGenerator`, it will defer nested generation from that point onward to the
new `SchemaGenerator` as expected. Note that, of course, the `Field` being generated for that attribute
will follow the rules of the _current_ SchemaGenerator, just as would happen with nested `Meta` classes
in nested Schemas.
What does this look like in practice? See the annotated example below, which demonstrates all 5 of these
possible interactions between an `attrs` class and the specific `Schema` desired by our (potentially
somewhat sugar-high) imaginary user:
```python
import enum
import typing as ty

import attrs
import marshmallow as ma
from marshmallow_enum import EnumField

import atacama

# assumed for this sketch: a simple enum standing in for the original GooeyEnum
class GooeyEnum(enum.Enum):
    SOFT = "soft"
    GOOEY = "gooey"
@attrs.define
class Mallow:
gooeyness: GooeyEnum
color: str = "light-brown"
@attrs.define
class Milk:
"""Just a percentage"""
fat_pct: float
@attrs.define
class ChocolateIngredients:
cacao_src: str
sugar_grams: float
milk: ty.Optional[Milk] = None
@attrs.define
class Chocolate:
brand: str
cacao_pct: float
ingredients: ty.Optional[ChocolateIngredients] = None
@attrs.define
class GrahamCracker:
brand: str
@attrs.define
class Smore:
graham_cracker: GrahamCracker
marshmallows: ty.List[Mallow]
chocolate: ty.Optional[Chocolate] = None
ChocolateIngredientsFromApiSchema = atacama.neo(
ChocolateIngredients,
# 1. milk and sugar_grams are fully dynamically generated
# 2. a partially-customized Field inheriting its Field type, default, etc from the attrs class definition
cacao_src=atacama.neo.field(
validate=ma.validate.OneOf(["Ivory Coast", "Nigeria", "Ghana", "Cameroon"])
),
)
class MallowSchema(ma.Schema):
"""Why are you doing this by hand?"""
gooeyness = EnumField(GooeyEnum, by_value=True)
color = ma.fields.Raw()
@ma.post_load
def pl(self, data: dict, **_kw):
return Mallow(**data)
SmoreFromApiSchema = atacama.ordered(
Smore,
# 1. graham_cracker, by being omitted, will have a nested schema generated with no customizations
# 5. In order to name/path the fields of nested elements, we plug in a nested
# SchemaGenerator.
#
# Note that keyword arguments applicable to the Field surrounding the nested Schema,
# e.g. load_only, are supplied to the `nested` method, whereas 'paths' to attributes within the nested class
# are supplied to the returned NestedSchemaGenerator function.
#
# Note also that we use a different SchemaGenerator (neo) than the parent (ordered),
# and this is perfectly fine and works as you'd expect.
chocolate=atacama.neo.nested(load_only=True)(
# 2. Both pct_cacao and brand have customizations but are otherwise dynamically generated.
# Note in particular that we do not need to specify the `attrs` class itself, as that
# is known from the type of the `chocolate` attribute.
cacao_pct=atacama.neo.field(validate=ma.validate.Range(min=0, max=100)),
brand=atacama.neo.field(validate=ma.validate.OneOf(["nestle", "hershey"])),
# 4. we reuse the previously defined ChocolateIngredientsFromApi Schema
ingredients=ChocolateIngredientsFromApiSchema,
),
# 3. Here, the list of Mallows is represented by a statically defined NestedField
# containing a statically defined Schema.
# Why? Who knows, but if you want to do it yourself, it's possible!
marshmallows=ma.fields.Nested(MallowSchema(many=True)),
)
```
## How
### SchemaGenerators
All interaction with `atacama` is done via a top-level `SchemaGenerator` object. It contains some
contextual information which will be reused recursively throughout a generated `Schema`, including a way
to define the `Meta` class that is a core part of Marshmallow's configurability.
`atacama` currently provides two 'default' schema generators, `neo` and `ordered`.
- `ordered` provides no configuration other than the common specification that the generated Schema
should preserve the order of the attributes as they appear in the class - while this may not matter for
most runtime use cases, it is infinitely valuable for debuggability and for further ecosystem usage
such as OpenAPI spec generation, which ought to follow the order defined by the `attrs` class.
- `neo` stands for "non-empty, ordered", and is the preferred generator for new Schemas, because it
builds in a very opinionated but nonetheless generally useful concept of non-emptiness. For attributes
of types that properly have lengths, it is in general the case that one and only one of the following
should be true:
1. Your attribute has a default defined, such that it is not required to be present in input data for
successful deserialization.
1. It is illegal to provide an empty, zero-length value.
The intuition here is that a given attribute type either _may_ have an 'essentially empty' value, or it
may not. Examples of things which may never be empty include database ids (empty string would be
inappropriate), lists of object 'owners' (an empty list would orphan the object, and therefore must not
be permitted), etc. Whereas in many cases, an empty string or list is perfectly normal, and in those
cases it is preferred that the class itself define the common-sense default value in order to make
things work as expected without boilerplate.
### FieldTransforms
The `neo` `SchemaGenerator` performs the additional 'non-empty' validation to non-defaulted Fields via
something called a `FieldTransform`. Any `FieldTransform` attached to a `SchemaGenerator` will be run on
_every_ `Field` attached to the Schema, _recursively_. This includes statically-provided Fields.
The `FieldTransform` must accept an actual `Field` object and returns a (presumably modified) `Field`
object. This is only run at the time of `Schema` generation, so if you wish to add validators or perform
customization to the Field that happens at load/dump time, you must compose your logic with the existing
`Field`. A Schema generator can have multiple FieldTransforms, and they will be run _in order_ on every
`Field`. A `FieldTransform` is, in essence, a higher-order function over `Field`, which are themselves
functions for the incoming attribute data.
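As an illustration, a non-emptiness transform might look roughly like this (a hedged sketch inferred from
the description above, not `neo`'s actual implementation):

```python
import marshmallow as ma

def require_non_empty(field: ma.fields.Field) -> ma.fields.Field:
    # Add a min-length validator to required, length-bearing fields.
    if field.required and isinstance(field, (ma.fields.String, ma.fields.List)):
        field.validators.append(ma.validate.Length(min=1))
    return field
```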
The two default generators are provided as a convenience to the user and nothing more - it is perfectly
acceptable and indeed expected that you might define your own 'sorts' of schema generators, with your own
`FieldTransforms` and basic `Meta` definitions, depending on your needs.
### Leaf type->Field mapping
As a recursive generator, there must be known base cases where a concrete Marshmallow `Field` can be
automatically generated based on the type of an attribute.
#### Built-in mappings
The default base cases are defined in `atacama/leaf.py`. They are relatively comprehensive as far as
Python builtins go, covering various date/time concepts and UUID. We also specifically map
`Union[int, float]` to the Marshmallow `Number` `Field`. Further, we support `typing_extensions.Literal`
using the built-in Marshmallow validator `OneOf`, and we have introduced a simple `Set` `Field` that
serializes `set`s to sorted `list`s.
#### Custom static mappings
Nevertheless, you may find that you wish to configure a more comprehensive (or different) set of leaf
types for your `SchemaGenerator`. This may be configured by passing the keyword argument `leaf_types` to
the `SchemaGenerator` constructor with a mapping of those leaf types. A `dict` is sufficient to provide a
static `LeafTypeMapping`.
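For example (the `leaf_types` keyword is described above; the import path, and whether the mapping values
are `Field` classes or instances, are assumptions of this sketch):

```python
import marshmallow as ma

from atacama import SchemaGenerator  # assumed import path for the class named above

# A plain dict is sufficient to provide a static LeafTypeMapping.
my_gen = SchemaGenerator(leaf_types={bytes: ma.fields.String})
```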
#### Custom dynamic mappings
You may also provide a more dynamic implementation of the `Protocol` defined in `atacama/leaf.py`. This
would provide functionality similar to `cattrs.register_structure_hook`, except that a Marshmallow
`Field` handles both serialization and deserialization. The included `DynamicLeafTypeMapping` class can
help accomplish this, though you may provide your own custom implementation of the Protocol as well.
`DynamicLeafTypeMapping` is recursively nestable, so you may overlay your own handlers on top of our base
handlers via:
```python
from atacama import DynamicLeafTypeMapping, AtacamaBaseLeafTypeMapping
your_mapping = DynamicLeafTypeMapping(AtacamaBaseLeafTypeMapping, [handler_1, handler_2])
```
## Minor Features
### `require_all`
You may specify at generation time that you wish to make all fields (recursively) `required` at the time
of load. This may be useful on its own, but is also the only way of accurately describing an 'output'
type in a JSON/OpenAPI schema, because `required` in that context is the only way to indicate that your
attribute will never be `undefined`. When dumping an `attrs` class to Python dictionary, all attributes
are always guaranteed to be present in the output, so `undefined` will never happen even for attributes
with defaults.
Example:
`atacama.neo(Foo, config(require_all=True))`
### Schema name suffix
You may specify a suffix for the name of the Schema generated. This may be useful when you are trying to
generate an output JSON schema and have multiple Schemas derived from the same `attrs` class.
Example:
`atacama.neo(Foo, config(schema_name_suffix='Input'))` results in the schema having the name
`your_module.FooInput` rather than `your_module.Foo`.
| text/markdown | null | Trilliant Health <info@trillianthealth.com> | null | null | MIT | null | [] | [] | null | null | >=3.9 | [] | [] | [] | [
"attrs>=22.2.0",
"marshmallow>=3.1",
"marshmallow-enum",
"marshmallow-union",
"thds-core",
"typing-inspect>=0.9"
] | [] | [] | [] | [
"Repository, https://github.com/TrilliantHealth/trilliant-data-science"
] | twine/6.1.0 CPython/3.13.3 | 2026-02-19T20:12:16.902469 | thds_atacama-1.2.20260219201150-py3-none-any.whl | 22,619 | b0/2b/f94f6a9260145df53831e6c3480d668a98b7a5cc201fbc34e3af5ac78e42/thds_atacama-1.2.20260219201150-py3-none-any.whl | py3 | bdist_wheel | null | false | 1d23f793959cbdc6547cf9f26cf25376 | 6b7da0f427e0071e82017b973f23efdd13ea29e979ffb711906043698298c8cb | b02bf94f6a9260145df53831e6c3480d668a98b7a5cc201fbc34e3af5ac78e42 | null | [] | 0 |
2.4 | thds.termtool | 1.0.20260219201147 | Tools for terminal-based applications | # `thds.termtool` Library
Tools for terminal-based applications
| text/markdown | null | Trilliant Health <info@trillianthealth.com> | null | null | null | null | [] | [] | null | null | >=3.9 | [] | [] | [] | [
"ansicolors",
"thds-core"
] | [] | [] | [] | [
"repository, https://github.com/TrilliantHealth/ds-monorepo"
] | twine/6.1.0 CPython/3.13.3 | 2026-02-19T20:12:13.696413 | thds_termtool-1.0.20260219201147-py3-none-any.whl | 2,890 | 44/e9/e87f806ee5f66656f1e74eae2b62eacfcbc989d0f71d1cb93f989bce386e/thds_termtool-1.0.20260219201147-py3-none-any.whl | py3 | bdist_wheel | null | false | 0ad64e2bb03624cffed79b53128d38f6 | 4fbd183ebfc754144647db23d47be4447d6a34b5fc4296025ea3328913f2e52c | 44e9e87f806ee5f66656f1e74eae2b62eacfcbc989d0f71d1cb93f989bce386e | null | [] | 0 |
2.4 | thds.humenc | 1.1.20260219201143 | Binary to string encoding for human readers. | # Hum(an) Enc(oding)
Binary to string encoding for human readers.
| text/markdown | null | Trilliant Health <info@trillianthealth.com> | null | null | null | null | [] | [] | null | null | >=3.9 | [] | [] | [] | [
"thds-core",
"wordybin>=0.2.0"
] | [] | [] | [] | [
"Repository, https://github.com/TrilliantHealth/ds-monorepo"
] | twine/6.1.0 CPython/3.13.3 | 2026-02-19T20:12:12.322035 | thds_humenc-1.1.20260219201143-py3-none-any.whl | 2,569 | ae/20/2afb087139c4629a724482a0ed2c2b204d2dd2cece38dc5ae49f8183aca1/thds_humenc-1.1.20260219201143-py3-none-any.whl | py3 | bdist_wheel | null | false | 6aed175370ae5e81a6b80aeb9d1f7023 | 962bde54a26b06189a78c40f0a1e0afa37496da509f39f469d6b9d7ca98b6f0f | ae202afb087139c4629a724482a0ed2c2b204d2dd2cece38dc5ae49f8183aca1 | null | [] | 0 |
2.4 | testio-mcp | 0.5.1 | Model Context Protocol server for TestIO Customer API integration | # TestIO MCP Server
Query TestIO test data through AI tools - no UI required.
[](https://www.python.org/downloads/)
[](https://github.com/jlowin/fastmcp)
---
## Quick Start
**Get started in 3 steps:**
### 1. Setup (One-time configuration)
```bash
uvx testio-mcp setup
```
Creates `~/.testio-mcp.env` with your API credentials and preferences.
Reference docs are copied to `~/.testio-mcp/`, including `.env.example`, which documents all available options.
### 2. Sync Data
```bash
uvx testio-mcp sync
```
Loads your products, features, and tests into local cache (~30s-2min).
### 3. Start Server
```bash
uvx testio-mcp serve --transport http
```
Runs at http://127.0.0.1:8080 (keep terminal open).
**Next:** Configure your AI client → [MCP_SETUP.md](MCP_SETUP.md)
**Optional:** Open http://127.0.0.1:8080/docs for interactive API explorer.
---
## Access Methods
| Method | Endpoint | Best For |
|--------|----------|----------|
| **MCP** | `http://127.0.0.1:8080/mcp` | Claude, Cursor, AI assistants |
| **REST** | `http://127.0.0.1:8080/api/*` | Scripts, dashboards, integrations |
| **Swagger** | `http://127.0.0.1:8080/docs` | API exploration, testing |
### Example: Same Query, Two Ways
```bash
# Via AI (MCP)
"What's the status of test 109363?"
# Via REST
curl http://127.0.0.1:8080/api/tests/109363/summary
```
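The same REST call works from any HTTP client; for instance, a small Python sketch using the third-party `requests` library (not bundled with this package) against the endpoint shown above:

```python
import requests

# Assumes the server is running locally on the default port 8080.
resp = requests.get("http://127.0.0.1:8080/api/tests/109363/summary", timeout=10)
resp.raise_for_status()
print(resp.json())
```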
---
## Tools (17)
### Data Discovery
| Tool | Example Query |
|------|---------------|
| `list_products` | "Show all mobile apps" |
| `list_tests` | "List running tests for product 598" |
| `list_features` | "What features does product 598 have?" |
| `list_users` | "Who are our testers?" |
| `list_bugs` | "Show critical bugs for test 109363" |
### Entity Summaries
| Tool | Example Query |
|------|---------------|
| `get_test_summary` | "Status of test 109363" |
| `get_product_summary` | "Overview of product 598" |
| `get_feature_summary` | "Details on feature 1234" |
| `get_user_summary` | "Show tester 5678's activity" |
| `get_bug_summary` | "Details on bug 91011" |
### Analytics & Reports
| Tool | Example Query |
|------|---------------|
| `generate_quality_report` | "Quality report for products 598, 599" |
| `query_metrics` | "Bug counts by severity for product 598" |
| `get_analytics_capabilities` | "What metrics can I query?" |
See [ANALYTICS.md](docs/ANALYTICS.md) for the full analytics guide with query patterns and examples.
### Search & Sync
| Tool | Example Query |
|------|---------------|
| `search` | "Find bugs mentioning login" |
| `sync_data` | "Refresh data for product 598" |
### Diagnostics
| Tool | Example Query |
|------|---------------|
| `get_server_diagnostics` | "Check server health" |
| `get_problematic_tests` | "Which tests failed to sync?" |
---
## Prompts (2)
Interactive workflows for common tasks:
| Prompt | Use Case |
|--------|----------|
| `analyze-product-quality` | Deep-dive quality analysis with artifacts |
| `prep-meeting` | Generate meeting materials from analysis |
---
## Resources (2)
Knowledge bases accessible via `testio://` URIs:
| Resource | Content |
|----------|---------|
| `testio://knowledge/playbook` | CSM heuristics and templates |
| `testio://knowledge/programmatic-access` | REST API discovery guide |
---
## CLI Reference
```bash
# Configuration
uvx testio-mcp setup # Interactive setup
uvx testio-mcp --version # Show version
# Server
uvx testio-mcp serve --transport http # HTTP mode (multi-client)
uvx testio-mcp serve --transport http --port 9000 # Custom port
uvx testio-mcp # stdio mode (single client)
# Sync
uvx testio-mcp sync --status # Check sync status
uvx testio-mcp sync # Manual sync
uvx testio-mcp sync --force # Full refresh
```
---
## Data Flow
```
┌─────────────────────────────────────────┐
│ AI Client (Claude, Cursor) │
│ or REST Client (curl, scripts) │
└─────────────┬───────────────────────────┘
↓
┌─────────────────────────────────────────┐
│ TestIO MCP Server │
│ localhost:8080 │
└─────────────┬───────────────────────────┘
↓
┌─────────────────────────────────────────┐
│ Local SQLite Cache │
│ ~/.testio-mcp/cache.db │
│ (queries: ~10ms, auto-sync: 1h) │
└─────────────┬───────────────────────────┘
↓ (read-through cache + sync)
┌─────────────────────────────────────────┐
│ TestIO Customer API │
│ https://api.test.io/customer/v2 │
└─────────────────────────────────────────┘
```
**Caching:** Background sync refreshes products, features, and discovers new tests every hour. Bug and test metadata use read-through caching—refreshed on-demand when queried if stale (>1 hour). Immutable tests (`archived`/`cancelled`) always serve from cache. See [CLAUDE.md](CLAUDE.md) for details on test mutability and caching logic.
---
## Configuration
Created by `uvx testio-mcp setup` at `~/.testio-mcp.env`:
| Variable | Description |
|----------|-------------|
| `TESTIO_CUSTOMER_API_TOKEN` | Your API token |
| `TESTIO_CUSTOMER_NAME` | Your subdomain |
| `TESTIO_CUSTOMER_ID` | Customer ID (default: 1) |
| `TESTIO_PRODUCT_IDS` | Filter to specific products |
| `TESTIO_HTTP_PORT` | Server port (default: 8080) |
Full options: see `.env.example` (repo root or `~/.testio-mcp/.env.example` for uvx users).
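For orientation, a populated `~/.testio-mcp.env` might look like this (placeholder values; the comma-separated format for `TESTIO_PRODUCT_IDS` is an assumption, so check `.env.example` for the authoritative syntax):

```
TESTIO_CUSTOMER_API_TOKEN=your-api-token
TESTIO_CUSTOMER_NAME=your-subdomain
TESTIO_CUSTOMER_ID=1
TESTIO_PRODUCT_IDS=598,599
TESTIO_HTTP_PORT=8080
```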
---
## Client Setup
See [MCP_SETUP.md](MCP_SETUP.md) for connecting:
- Claude Desktop
- Claude Code (CLI)
- Cursor
- Gemini Code
---
## Troubleshooting
```bash
# Server won't start?
curl http://127.0.0.1:8080/health
# Data seems stale?
uvx testio-mcp sync --status
uvx testio-mcp sync --force
# Token issues?
uvx testio-mcp setup # Reconfigure
```
---
## Documentation
- [MCP_SETUP.md](MCP_SETUP.md) - Client configuration
- [ANALYTICS.md](docs/ANALYTICS.md) - Analytics engine guide
- [CLAUDE.md](CLAUDE.md) - Development guide
- [CHANGELOG.md](CHANGELOG.md) - Version history
- [docs/architecture/](docs/architecture/) - Technical architecture
---
## License
Proprietary - See [LICENSE](LICENSE) for terms.
| text/markdown | TestIO MCP Team | null | null | null | null | ai, api, llm, mcp, model-context-protocol, testio | [
"Development Status :: 4 - Beta",
"Intended Audience :: Developers",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: 3.13",
"Topic :: Scientific/Engineering :: Artificial Intelligence",
"Topic :: Software Development :: Libraries :: Pyt... | [] | null | null | >=3.12 | [] | [] | [] | [
"aiosqlite>=0.20.0",
"alembic>=1.13.0",
"authlib>=1.6.6",
"cryptography>=46.0.5",
"dateparser>=1.2.0",
"fastapi>=0.109.0",
"fastmcp<3.0.0,>=2.12.0",
"filelock>=3.20.3",
"greenlet>=3.0.0",
"httpx>=0.28.0",
"jaraco-context>=6.1.0",
"psutil>=5.9.0",
"pydantic-settings>=2.11.0",
"pydantic>=2.1... | [] | [] | [] | [
"Homepage, https://github.com/test-IO/customer-mcp",
"Documentation, https://github.com/test-IO/customer-mcp#readme",
"Repository, https://github.com/test-IO/customer-mcp",
"Issues, https://github.com/test-IO/customer-mcp/issues"
] | uv/0.10.4 {"installer":{"name":"uv","version":"0.10.4","subcommand":["publish"]},"python":null,"implementation":{"name":null,"version":null},"distro":{"name":"macOS","version":null,"id":null,"libc":null},"system":{"name":null,"release":null},"cpu":null,"openssl_version":null,"setuptools_version":null,"rustc_version":null,"ci":null} | 2026-02-19T20:11:16.016560 | testio_mcp-0.5.1-py3-none-any.whl | 374,642 | f6/56/ea310ef5c4ece35e4ac7a59124abc7706f807f9fd3fdcc418071771f7632/testio_mcp-0.5.1-py3-none-any.whl | py3 | bdist_wheel | null | false | 8ad405958d71e3409ba451eb7d72c4cd | f579caea958217c2e092ee5a90b35008147d08dd343486d4928f2d002691cccb | f656ea310ef5c4ece35e4ac7a59124abc7706f807f9fd3fdcc418071771f7632 | LicenseRef-Proprietary | [
"LICENSE"
] | 201 |
2.3 | flux-config-shared | 0.9.5 | Shared protocol and configuration definitions for Flux Config packages | # flux-config-shared
Shared protocol and configuration definitions for Flux Config packages.
This package contains:
- JSON-RPC protocol definitions
- Daemon state models
- User configuration models
- Application configuration (AppConfig)
- Delegate configuration
- Pydantic validation models
Used by:
- flux-configd (daemon)
- flux-config-tui (TUI client)
| text/markdown | David White | David White <david@runonflux.io> | null | null | GPL-3.0-or-later | null | [
"Development Status :: 4 - Beta",
"Intended Audience :: System Administrators",
"License :: OSI Approved :: GNU General Public License v3 or later (GPLv3+)",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.13",
"Topic :: System :: Systems Administration",
"Topic :: System :: ... | [] | null | null | <4,>=3.13 | [] | [] | [] | [
"pyyaml<7,>=6.0.2",
"aiofiles<26,>=25.1.0",
"textual<7,>=6.11.0",
"pydantic<3,>=2.10.6",
"email-validator<2.3,>=2.2.0",
"cryptography<47,>=46.0.5",
"pyrage>=1.3.0",
"flux-delegate-starter>=0.1.0"
] | [] | [] | [] | [] | uv/0.6.3 | 2026-02-19T20:11:09.884207 | flux_config_shared-0.9.5.tar.gz | 14,799 | c1/86/9de1ef2a9a831a5fcc779c92abb66586b23714a39b8ee868367a1cfa336f/flux_config_shared-0.9.5.tar.gz | source | sdist | null | false | 42baadb2f827269a24843e20b3d7cb4f | 0740a46764a7beefa9a619c5c6fed7d0a0b90070028f376145c5495a541b85fc | c1869de1ef2a9a831a5fcc779c92abb66586b23714a39b8ee868367a1cfa336f | null | [] | 216 |
2.4 | phoebe | 2.4.22 | PHOEBE: modeling and analysis of eclipsing binary stars | PHOEBE 2.4
------------------------
<p align="center"><a href="https://phoebe-project.org"><img src="./images/logo_blue.svg" alt="PHOEBE logo" width="160px" align="center"/></a></p>
<pre align="center" style="text-align:center; font-family:monospace; margin: 30px">
pip install phoebe
</pre>
<p align="center">
<a href="https://pypi.org/project/phoebe/"><img src="https://img.shields.io/badge/pip-phoebe-blue.svg"/></a>
<a href="https://phoebe-project.org/install"><img src="https://img.shields.io/badge/python-3.8+-blue.svg"/></a>
<a href="https://github.com/phoebe-project/phoebe2/blob/master/LICENSE"><img src="https://img.shields.io/badge/license-GPL3-blue.svg"/></a>
<a href="https://github.com/phoebe-project/phoebe2/actions/workflows/on_pr.yml?query=branch%3Amaster"><img src="https://github.com/phoebe-project/phoebe2/actions/workflows/on_pr.yml/badge.svg?branch=master"/></a>
<a href="https://phoebe-project.org/docs/2.4"><img src="https://github.com/phoebe-project/phoebe2-docs/actions/workflows/build-docs.yml/badge.svg?branch=2.4"/></a>
<br/>
<a href="https://ui.adsabs.harvard.edu/abs/2016ApJS..227...29P"><img src="https://img.shields.io/badge/ApJS-Prsa+2016-lightgrey.svg"/></a>
<a href="https://ui.adsabs.harvard.edu/abs/2018ApJS..237...26H"><img src="https://img.shields.io/badge/ApJS-Horvat+2018-lightgrey.svg"/></a>
<a href="https://ui.adsabs.harvard.edu/abs/2020ApJS..247...63J"><img src="https://img.shields.io/badge/ApJS-Jones+2020-lightgrey.svg"/></a>
<a href="https://ui.adsabs.harvard.edu/abs/2020ApJS..250...34C/"><img src="https://img.shields.io/badge/ApJS-Conroy+2020-lightgrey.svg"/></a>
</p>
<p align="center">
<a href="https://phoebe-project.org"><img src="./images/console.gif" alt="Console Animation" width="600px" align="center"/></a>
</p>
INTRODUCTION
------------
PHOEBE stands for PHysics Of Eclipsing BinariEs. PHOEBE is pronounced [fee-bee](https://www.merriam-webster.com/dictionary/phoebe?pronunciation&lang=en_us&file=phoebe01.wav).
PHOEBE 2 is a rewrite of the original PHOEBE code. For most up-to-date information please refer to the PHOEBE project webpage: [https://phoebe-project.org](https://phoebe-project.org)
PHOEBE 2.0 is described by the release paper published in the Astrophysical Journal Supplement, [Prša et al. (2016, ApJS 227, 29)](https://ui.adsabs.harvard.edu/#abs/2016ApJS..227...29P). The addition of support for misaligned stars in version 2.1 is described in [Horvat et al. (2018, ApJS 237, 26)](https://ui.adsabs.harvard.edu/#abs/2018ApJS..237...26H). Interstellar extinction and support for Python 3 was added in version 2.2 and described in [Jones et al. (2020, ApJS 247, 63)](https://ui.adsabs.harvard.edu/abs/2020ApJS..247...63J). Inclusion of a general framework for solving the inverse problem as well as support for the [web and desktop clients](https://phoebe-project.org/clients) was introduced in version 2.3 as described in [Conroy et al. (2020, ApJS 250, 34)](https://ui.adsabs.harvard.edu/abs/2020ApJS..250...34C), which also removes support for Python 2. PHOEBE 2.4 improves on the geometry and ebai estimators, updates gaussian processes to use either scikit-learn or celerite2, and adds support for submitting compute or solver runs on external servers. These updates and fitting "best practices" will be discussed in Kochoska et al., in prep.
PHOEBE 2 is released under the [GNU General Public License v3](https://www.gnu.org/licenses/gpl-3.0.en.html).
The source code is available for download from the [PHOEBE project homepage](https://phoebe-project.org) and from [github](https://github.com/phoebe-project/phoebe2).
The development of PHOEBE 2 is funded in part by [NSF grant #1517474](https://www.nsf.gov/awardsearch/showAward?AWD_ID=1517474), [NSF grant #1909109](https://www.nsf.gov/awardsearch/showAward?AWD_ID=1909109) and [NASA 17-ADAP17-68](https://ui.adsabs.harvard.edu/abs/2017adap.prop...68P).
DOWNLOAD AND INSTALLATION
-------------------------
The easiest way to download and install PHOEBE 2 is by using pip (make sure you're using the correct command for pip that points to your python3 installation - if in doubt use something like `python3 -m pip install phoebe`):
pip install phoebe
To install it site-wide, prefix the `pip` command with `sudo` or run it as root.
To download the PHOEBE 2 source code, use git:
git clone https://github.com/phoebe-project/phoebe2.git
To install PHOEBE 2 from the source locally, go to the `phoebe2/` directory and issue:
pip install .
Note that as of the 2.4.16 release, PHOEBE requires Python 3.8 or later. For further details on pre-requisites consult the [PHOEBE project webpage](https://phoebe-project.org/install/2.4).
GETTING STARTED
---------------
PHOEBE 2 has a fairly steep learning curve. To start PHOEBE from python, issue:
python
>>> import phoebe
>>>
As of the 2.3 release, PHOEBE also includes a desktop and web client user-interface which is installed independently of the python package here. See the [phoebe2-ui repository](https://github.com/phoebe-project/phoebe2-ui) and [phoebe-project.org/clients](https://phoebe-project.org/clients) for more details.
To understand how to use PHOEBE, please consult the [tutorials, scripts and manuals](https://phoebe-project.org/docs/2.4/) hosted on the PHOEBE webpage.
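For a taste of the workflow those tutorials cover, a minimal forward-model session looks roughly like the following sketch (not a substitute for the tutorials; tags and parameter names follow the documentation linked above):

```python
import phoebe

b = phoebe.default_binary()  # bundle for a default detached binary
b.add_dataset('lc', compute_times=phoebe.linspace(0, 1, 101), dataset='lc01')
b.run_compute(model='fwd01')
print(b.get_value(qualifier='fluxes', context='model', model='fwd01'))
```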
CHANGELOG
----------
### 2.4.22
* Fixes bug where LS periodogram was returning frequency instead of period [#1089]
* Fixes passband luminosity computation for the dataset-scaled mode. [#1091]
* Fixes third light computation when multiple passbands are used. [#1091]
* Argument order for compute_l3s() changed to explicitly provide the model as it is required for correct scaling. [#1091]
* Fixes support for newer releases of scipy. [#1102]
### 2.4.21
* Add a PHOEBE_TABLES_SERVER environment variable to allow overriding the URL of the queried tables server. [#1060]
* Improved description for fti_oversample parameter. [#1061]
* Updated all http links to https. [#1062]
* Catch ellc OSError. [#1063]
* Fixes parse_solver_times regression introduced in 2.4.20. [#1067]
* Fixes differential_corrections solver when multiple compute options exist. [#1069]
* Fixes crimpl issue with scp files from remote server. [#1074]
### 2.4.20
* Fix parse_solver_times compatibility with numpy > 1.24. [#1056]
* Update crimpl to allow custom MPI paths on the server and to use conda-forge as the default conda channel to avoid the need to agree to terms of the default channel. [#1055]
### 2.4.19
* Remove unused passband files [#1045]
### 2.4.18
* Fix handling of spots in single star rotstar case where spots were not co-rotating properly [#1017]
* Fix misaligned spots bug that caused size of spot to change across the rotation period [#1017]
* Fix animation bug which prevented passing times as a numpy array [#1018]
* Fix continue_from support for scipy optimizers [#1041]
* Fix support for astropy 7.x units [#1043]
### 2.4.17
* Fix support for numpy 2.0. [#982]
### 2.4.16
* Fix handling of floating-point precision near the aligned case that used to result in error from libphoebe. [#965]
* Updates to phoebe-server to be compatible with modern browser requirements. [#959]
* Fix support for python 3.13, remove official support for python 3.7. [#968]
### 2.4.15
* Fix handling of include_times for RVs with compute_times/phases. [#889]
* GPs on models computed in phase-space will be properly computed based on residuals in time space. [#899]
* Fix units of requivfrac. [#894]
* Fix adopting mask_phases from lc_geometry. [#896]
* Fix population of wavelength array in load function for passbands. [#914]
* Temporarily cap numpy dependency < 2.0. [#930]
* Fix installation of phoebe-server CLI script to launch from UI. [#929]
* Fix passing compute to export_solver with features attached. [#922]
* sigmas_lnf: change handling of noise-nuisance parameter for RVs to no longer depend on the RV amplitude. [#901]
* Remove duplicated phoebe-server code. [#940]
* Fix python 3.12+ support by updating invalid escape sequences. [#948]
* Improved precision in calculation of constraints. [#945]
### 2.4.14
* Fix MPI off to not broadcast if never enabled
* Fix warning message in dynesty solver
* Fix multi-compute with enabled/disabled datasets
* Fix error message in compute_ld_coeffs
* Fix segfaults in macos-14
* Now requires C++14-compatible compiler
### 2.4.13
* optimization: dynamical RVs avoid unnecessary meshing
* run_checks no longer requires ck2004 atmosphere tables if no datasets use ck2004
* fix treatment of distance for alternate backends (ellc, jktebop)
### 2.4.12 - build system update
* upgrade the build system to pyproject.toml with setuptools as backend and pip as frontend.
* drop the dependency on the obsolete distutils module.
* swap nosetests for pytest.
* small build-related bugfixes throughout the code.
### 2.4.11
* fix jktebop backend handling of mass-ratio and eccentricity for RVs.
* bumps version requirements in pip for numpy, scipy, astropy.
* allows sma@star and asini@star to flip to solve for q
* fixes handling of spots on rotating single stars.
* fixes constraint migration for 2.3 -> 2.4
### 2.4.10
* fixes implementation of gravitational redshift.
* fixes an uninitialized value problem in gradients resulting in nans for effective temperatures.
* minor updates to passband exporting to support upcoming 2.5 release on the passbands server.
* allows setting SelectParameter to an array or tuple (in addition to a list).
### 2.4.9 - asynchronous spots bugfix
* fixes bug introduced in 2.4.8 and ensures that temperatures are recomputed for spots when the star is rotating asynchronously.
### 2.4.8 - spots optimization bugfix
* spots no longer force the mesh to be recomputed at each timepoint.
* updates for numpy compatibility and wider test matrix.
### 2.4.7 - line profile bugfix
* fix bug where wavelength arrays that did not include the central wavelength were returning nans for fluxes.
### 2.4.6 - potential to requiv TypeError bugfix
* fix bug where libphoebe was incorrectly raising an error suggesting the potential was out of bounds.
### 2.4.5 - negative mass bugfix
* fix bug where mass could be set to a negative value causing constraints to resolve to nans.
### 2.4.4 - constraint flipping bugfix
* fix bug where flipping Kepler's third law constraint multiple times would fail.
* fix bug when flipping requivsumfrac and requivratio constraints.
### 2.4.3 - use_server with features bugfix
* fix typo that raised error when using use_server with features attached
* added new `addl_slurm_kwargs` parameter to pass any options to slurm scheduler
### 2.4.2 - l3 handling distance in absolute pblum_mode bugfix
* fix conversion between l3 and l3_frac to account for distance when pblum_mode
is absolute
* fix tagged phoebe version in cached bundles to avoid import warning
### 2.4.1 - solver filtering and plotting bugfix
* fix filtering error when not explicitly passing solver to run_solver
* fix exposing analytic model from lc geometry estimator
* fix phase-sorting when plotting solution from ebai estimator
### 2.4.0 - solver and gaussian process improvements release
* add support for differential evolution optimizer solver
* add support for differential corrections optimizer solver
* optimizers: ability to continue runs from previous solutions (for most optimizers)
* improvements to geometry and ebai estimators to use ligeor as a new built-in dependency
* gaussian processes now use celerite2 or scikit-learn instead of celerite
* emcee sampler: additional plotting styles to check for convergence, checks to ensure starting sample is physical, and added ability to continue a previous run from any arbitrary iteration in a previous run
* new support for running jobs on external servers via crimpl
* clarified distinction between chi2 and mle
### 2.3.63 - constraint feature bugfix
* fix bug where creating a custom constraint for parameters within features was not correctly identifying the constrained parameter and was raising an error when attempting to set the value of the constraining parameter.
### 2.3.62 - attach_job ferr bugfix
* fix bug where error file was not properly loaded when retrieving error from external job
### 2.3.61 - M1 compiler optimization bugfix
* remove compiler optimizations that are not portable to ARM architectures
### 2.3.60 - passband timestamp bugfix
* compare version strings instead of datetime to avoid some systems throwing an error when looking for passband updates.
* see also 2.3.13 release.
### 2.3.59 - extinction constraint bugfix
* fixes extinction constraint when flipping to solve for Av
### 2.3.58 - astropy 5.0 units bugfix
* fixes support for astropy 5.0 changes to unit physical types (see also 2.3.51).
* b.save now requires delayed and failed constraints to run before saving.
### 2.3.57 - remove inadvertent typo while sampling distributions
* introduced in 2.3.55
### 2.3.56 - setup without m2r bugfix
* fixes installation (on some machines) where m2r is not installed
### 2.3.55 - sample_distribution_collection index bugfix
* fixes handling distributions on array parameters within sample_distribution_collection and run_compute(sample_from).
### 2.3.54 - distribution bugfix
* updates `distl` to convert units with recent changes to astropy. See also the changes in 2.3.51 and 2.3.52.
* fixes median introduced in 2.3.52 to act on distribution object instead of just arrays.
### 2.3.53 - adopt_solution adopt_values bugfix
* adopting a solution with `adopt_values=True` for a sampler solver will now adopt the median from the samples rather than the mean, to be consistent with the central values reported by the distributions themselves.
### 2.3.52 - run_all_constraints support for array parameters bugfix
* fixes new run_all_constraints (new in 2.3.51) method to work on array parameters (compute_times/compute_phases).
### 2.3.51 - units physical type astropy update bugfix
* fixes parsing the physical type of a unit in latest releases of astropy. Without this fix, some constraints may fail to run.
* implements a new b.run_all_constraints, which is now automatically called when importing from a file in case any constraints were in the failed state.
### 2.3.50 - contact binary estimators bugfix
* rv_geometry: explicitly look for RVs attached to stars (not envelopes, which raised a lookup error).
* run_checks_solver: run compatibility checks between solver and hierarchies. Contact binaries are not supported by lc_geometry or ebai, single stars are not supported by lc_geometry, ebai, or rv_geometry.
### 2.3.49 - requivsumfrac flipping bugfix
* fix remaining cases for flipping requivsumfrac constraint (see 2.3.45 bugfix release for the partial fix for some, but not all, cases)
* migrate from Travis CI to GitHub actions for CI testing
### 2.3.48 - mu atm out-of-bounds bugfix
* fixes atmosphere out-of-bounds error caused by mu that should be exactly 0 or 1, but numerically out-of-bounds.
### 2.3.47 - calculate_lnp bugfix
* fixes calculate_lnp to more robustly handle parameter matching for both the constrained and unconstrained case
* fixes default_binary constructor when overriding label of the 'binary' orbit
* fixes typo in ellc backend for the period==1 case
### 2.3.46 - rvperiodogram SB1 bugfix
* fixes handling of SB1s (RVs with a single component) in the rv_periodogram estimator
* adds checks to forbid zeros in dataset sigmas
### 2.3.45 - requivsumfrac constraint flipping bugfix
* fixes bug in flipping requivsumfrac constraint for requivratio when requiv of the secondary star is already constrained
### 2.3.44 - add_component/figure bugfix
* fixes bug in assigning parameter tags when passing function (as kind) to add_component or add_figure.
### 2.3.43 - RV SB1 residuals bugfix
* fixes silently ignoring component (while calculating residuals, chi2, etc) in an RV dataset in which times are provided, but observational RVs are not.
* improves error messages in calculate_residuals when filtering results in no matches or more than one match.
### 2.3.42 - RV plotting bugfix
* fixes plotting RVs when compute_times is provided instead of times. Previously would raise an error that the 'rvs' parameter could not be found as it is hidden in the dataset.
### 2.3.41 - estimators missing sigmas bugfix
* fixes handling of default sigmas within LC estimators when no sigmas are provided in the dataset.
### 2.3.40 - custom lnprobability bugfix
* fixes handling of `custom_lnprobability_callable` when passed to `run_solver`. Previously an error was raised stating it was not a supported keyword argument and was not passed to the script correctly during `export_solver`.
### 2.3.39 - optimizer progressbar and sample_from infinite failed samples bugfix
* fixes bug in increment size in progressbar for optimizers that appears to go past 100% before completion
* when running a forward model sampling from a distribution (or a solution), only allow 10 failed samples per draw before raising an error to prevent getting stuck in an infinite loop if the parameter space is unphysical
* add_compute(overwrite=True) now allows the existing tag to already exist in solutions (in addition to models)
### 2.3.38 - mvgaussian uncertainties bugfix
* updates distl to 0.3.1 which includes a fix to treat mvgaussian uncertainties from percentiles like other distribution types
* forces updating kepler's third law constraint when importing a bundle from before 2.3.25 bugfix
### 2.3.37 - add_distribution allow_multiple_matches bugfix
* fixes bug where tags on distributions were improperly applied when passing `allow_multiple_matches=True`
* disables run_compute progressbar within solvers
* fixes typo in description of progress parameter
### 2.3.36 - MPI passband directory bugfix
* fixes bug where running phoebe for the first time within MPI crashes due to each processor attempting to create the passband directory.
### 2.3.35 - rotstar bugfix
* bugfix in equation for converting rotation period/frequency to potential that affects the shapes of rapidly rotating stars with distortion_method of 'rotstar'.
* single stars: implements the missing constraint for requiv_max for single star systems.
### 2.3.34 - ebai and continue_from bugfix
* ebai: map phases onto -0.5,0.5 interval after computing phase-shift and sending to ebai
* emcee: cast fitted_uniqueids to list when applying wrap indices for continue_from
### 2.3.33 - constrained and multivariate priors bugfix
* fixes handling of multivariate distributions as priors
* run_compute sample_from: use serial mode when sample_num is 1
* run_compute when passing solution instead of sample_from, default to sample_num=1 if adopt_distributions is False
* export_solver: exclude unneeded distributions/solutions from the exported script to optimize filesize
* export_solver: adds (undocumented until 2.4 release) support for autocontinue
* export_compute: do not require explicitly passing compute if only one exists matching the filter
* calculate_lnp: include_constrained now defaults to True
### 2.3.32 - phoebe-server bugfix
* fixes version of flask-socketio dependency to remain compatible with desktop client
* ensures path and query string are cast to string
### 2.3.31 - SB1 with compute_times bugfix
* fixes fitting radial velocities where only one component has observations (SB1 system) and compute_times are provided.
* compute_residuals now returns an empty array when the corresponding times_array is empty, instead of raising an error
### 2.3.30 - ld_coeffs fitting bugfix
* allow fitting ld_coeffs. Each coefficient is referenced by index and can be fit or have distributions attached independently. See [tutorial](https://phoebe-project.org/docs/latest/tutorials/fitting_ld_coeffs) for more details.
* also fixes support for [custom constraints](https://phoebe-project.org/docs/latest/tutorials/constraints_custom) which can be used to link ld_coeffs between datasets of the same passband, for example.
### 2.3.29 - adopt_solution bugfix
* do not require passing solution to adopt_solution (when adopting distributions) if only one solution exists
* fix distribution_overwrite_all not defined error
### 2.3.28 - solver checks bugfix
* excludes datasets not supported in fitting (mesh, orb, lp, etc) from forward-model within inverse solvers.
* run_checks_solver now checks for nans in dataset arrays.
### 2.3.27 - add_compute/solver overwrite bugfix
* fixes bug where passing overwrite to add_compute or add_solver raised an error if run_compute/run_solver already created a model/solution tagged with that same label.
### 2.3.26 - multiprocessing bugfix
* allows disabling multiprocessing (or lowering the number of available processors). Multiprocessing is used by default when not within MPI and when calling `run_compute` with `sample_from` or `run_solver` with solvers that support parallelization. Some installations of multiprocessing on Mac may cause issues, in which case you can now force PHOEBE to run in serial mode.
* this introduces new `phoebe.multiprocessing_off()`, `phoebe.multiprocessing_on()`, `phoebe.multiprocessing_get_nprocs()`, and `phoebe.multiprocessing_set_nprocs(n)` functions, but the default behavior remains unchanged.
### 2.3.25 - distribution propagation bugfix
* updates distl to 0.2.0 release which includes support for retaining simultaneous sampling between copies of the same underlying distribution, increased precision on latex formatting of uncertainties, and maintaining labels during unit conversion.
* fix propagating distl distribution objects through constraints to arbitrary depth.
* update Kepler's third law constraint to be distl-friendly (1+q becomes q+1).
* parameter.get_distribution: new argument `delta_if_none` to allow returning a delta function. This is now the default behavior from within b.get/plot_distribution_collection
* b.sample_distribution_collection: rename `N` argument to `sample_size` (but with backwards compatibility support for `N`).
* run_checks_solver now includes a warning if priors contain "around" distributions.
### 2.3.24 - emcee continue_from bugfix
* skip nwalkers vs number of parameters check when continue_from is set
* fallback on twigs when original uniqueids not available (when attempting to continue from a solution loaded into a new bundle, for example)
* wrapping rules for angle parameters fallback on median of last iteration in the available chain when uniqueids do not match as the initializing distribution likely does not exist anymore
### 2.3.23 - ellc flux-weighted RV vsini bugfix
* compute vsini from syncpar and pass to RV to enable Rossiter-McLaughlin effect when rv_method='flux-weighted'.
### 2.3.22 - trace plotting nanslice bugfix
* fix bug in plotting MCMC trace plots when any given chain is all nans.
### 2.3.21 - estimators phase-bin bugfix
* fix bug resulting in a nanslice error when phase_bin is enabled within estimators resulting in a single entry in any given bin. Now, sigmas will be ignored within the estimator in these cases with a warning in the logger.
### 2.3.20 - legacy passband bugfix
* now correctly maps passbands when using the legacy backend (only affects TESS and Tycho)
* falls back on PHOEBE atmospheres when needing to compute pblums internally for flux scaling prior to calling legacy backend
* from_legacy bugfix in parsing linear limb-darkening coefficients
* export_compute/export_solver: add comment warning against manually editing script
* fixes typo which raised error when rescaling passband-dependent mesh columns
### 2.3.19 - passbands update available datetime string parsing bugfix
* some systems fail to parse common datetime strings, resulting in inability to import phoebe when checking for available passband updates. This now prints and logs an error message, but does not prevent import.
* checking for available passband updates on import now correctly respects the PHOEBE_ENABLE_ONLINE_PASSBANDS environment variable.
* failed online passbands connection error messages are now only included in the log once (per processor) to avoid spamming the log (but are shown by default when manually calling phoebe.list_online_passbands).
### 2.3.18 - estimator.ebai with wide eclipse bugfix (attempt 2)
* actually fixes bug (see 2.3.13) that raised internal error when running ebai on an eclipse with width larger than 0.25 in phase. Note that these systems will still return nans as ebai is not well-suited to these systems, but the internal error will no longer occur.
### 2.3.17 - optimizer MPI bugfix
* enables parallelization (per-time or per-dataset) for optimizers.
### 2.3.16 - rv_geometry with different lengths bugfix
* fixes estimator.rv_geometry when primary and secondary component have different times.
### 2.3.15 - alternate backends with l3_frac and dataset-scaled bugfix
* fix bug in applying l3_frac within dataset scaling (pblum_mode='dataset-scaled') when using alternate backends.
### 2.3.14 - import_solution with uniqueid mismatch bugfix
* fix bug where falling back on twigs when importing a solution on a different bundle failed. It is still suggested to save the bundle and import solutions on the bundle used when calling export_solver.
### 2.3.13 - estimator.ebai with wide eclipse bugfix
* fix bug (but not really - see 2.3.18) that raised internal error when running ebai on an eclipse with width larger than 0.25 in phase. Note that these systems will still return nans as ebai is not well-suited to these systems, but the internal error will no longer occur.
### 2.3.12 - plot univariate distributions latex label bugfix
* fix bug in the latex labels on plots when converting from multivariate to univariate distributions.
### 2.3.11 - continue_from run_checks bugfix
* fix bug where run_checks raised an error for an empty init_from if continue_from was set.
### 2.3.10 - alternate backend atm bugfix
* fix bug where atm parameter was ignored during passband luminosity scaling while using an alternate backend, resulting in an atmosphere out-of-bounds error in some situations.
### 2.3.9 - online passbands bugfix
* stop attempting to query online passbands after three failed attempts to avoid significant time cost otherwise.
### 2.3.8 - plotting exclusion bugfix
* fix bug where datasets were excluded from plotting if not in any models
* fix syntax error in run_checks
### 2.3.7 - kwargs errors bugfix
* fix small bugs that could raise errors when passing some filter kwargs to `run_solver` or `sample_distribution_collection`
### 2.3.6 - GP run_checks bugfix
* fix check for presence of observational data during run_checks to only consider datasets with attached gaussian processes (GPs)
### 2.3.5 - lp run_checks bugfix
* fix length comparison of flux_densities and wavelengths during run_checks
### 2.3.4 - passband/extinction bugfix
* fixed Gordon extinction coefficient calculation in line with erratum http://dx.doi.org/10.1088/0004-637X/705/2/1320.
* added check to require updating affected-passbands (versions at tables.phoebe-project.org have been updated)
* removed duplicate Passband methods causing ld/ldint passband computations to fail
### 2.3.3 - latex representation bugfix
* fix the latex representation string for `fillout_factor`, `pot`, `pot_min`,
and `pot_max` parameters in a contact binary.
### 2.3.2 - manifest to include readme bugfix
* manually include README.md in MANIFEST.in to avoid build errors from pip
### 2.3.1 - pip install bugfix
* removes m2r as an (unlisted) build-dependency. m2r is only required to build the submission to submit to pypi, but is not required to install or run phoebe locally.
### 2.3.0 - inverse problem feature release
* Add support for inverse problem solvers, including "estimators", "optimizers", and "samplers"
* Add support for attaching distributions (as [distl](https://github.com/kecnry/distl) objects) to parameters, including priors and posteriors.
* Add support for [web and desktop clients](https://phoebe-project.org/clients) via a light-weight built in `phoebe-server`.
* Removed support for Python 2 (now requires Python 3.6+)
* Implement optional gaussian processes for light curves
* Implement phase-masking
* Added official support for [ellc](https://github.com/pmaxted/ellc) and [jktebop](https://www.astro.keele.ac.uk/jkt/codes/jktebop.html) alternate backends
* Per-component and per-dataset RV offsets
* Fixed phasing in time-dependent systems
* Distinction between anomalous and sidereal period in apsidal motion cases
* Extinction parameters moved from per-dataset to the system-level
* Added several new optional constraints
* Overhaul of the run_checks framework
* Updated scipy dependency to 1.7+
* Numerous small bugfixes and enhancements
### 2.2.2 - kwargs bugfix
* fix overriding mesh_init_phi as kwarg to run_compute
* fix pblum computation to not require irrad_method kwarg
* fix bundle representation to exclude hidden parameters
### 2.2.1 - g++/gcc version check bugfix
* Improves the detection of g++/gcc version to compare against requirements during setup.
### 2.2.0 - extinction feature release
* Add support for interstellar extinction/reddening.
* Support for Python 3.6+ in addition to Python 2.7+.
* Overhaul of limb-darkening with new ld_mode and ld_coeffs_source parameters.
* Overhaul of passband luminosity and flux scaling with new pblum_mode parameter, including support for maintaining color relations between multiple passbands.
* Ability to provide third light in either flux or percentage units, via the new l3_mode and l3_frac parameters.
* Support for computing a model at different times than the observations, via the new compute_times or compute_phases parameter.
* Transition from pickled to FITS passband files, with automatic detection for available updates. The tables can now also be accessed via tables.phoebe-project.org.
* DISABLED support for beaming/boosting.
* Allow flipping Kepler's third law constraint to solve for q.
* Require overwrite=True during add_* or run_* methods that would result in overwriting an existing label.
* Constraint for logg.
* Account for time-dependence (dpdt/dperdt) in t0 constraints.
### 2.1.17 - ignore fits passbands bugfix
* Future-proof to ignore for passband files with extensions other than ".pb"
which may be introduced in future versions of PHOEBE.
### 2.1.16 - eccentric/misaligned irradiation bugfix
* Fixes bug where irradiation was over-optimized and not recomputed as needed for
eccentric or misaligned orbits. Introduced in the optimizations in 2.1.6.
### 2.1.15 - spots bugfix
* Fixes 'long' location of spots on single stars.
* Fixes treatment of spots on secondary 'half' of contact systems.
* Fixes loading legacy files with a spot that has source of 0 due to a bug in legacy.
* Fixes overriding 'ntriangles' by passing keyword argument to run_compute.
### 2.1.14 - contacts inclination RVs bugfix
* Fixes the polar rotation axis for RVs in contact systems with non-90 inclinations
by re-enabling the alignment (pitch, yaw) constraints and enforcing them to be 0.
### 2.1.13 - constraint flip loop bugfix
* Fixes infinite loop when trying to flip esinw AND ecosw
* Adds ability to flip mass (Kepler's third law) to solve for q
* Fixes bug introduced in 2.1.9 in which out-of-limits constrained parameters in
an envelope were being raised before all constraints could resolve successfully.
### 2.1.12 - legacy ephemeris and kwargs checks bugfix
* Fixes applying t0 when importing legacy dataset which use phase.
* Fixes ignoring other compute options when running checks on kwargs during run_compute.
### 2.1.11 - legacy dataset import bugfix
* Fixes loading legacy datasets which use phase (by translating to time with the current ephemeris).
* Fixes loading legacy datasets with errors in magnitudes (by converting to errors in flux units).
* Fixes plotting RV datasets in which only one component has times (which is often the case when importing from a legacy file).
### 2.1.10 - ldint bugfix
* Removes ldint from the weights in the computations of RVs and LPs.
### 2.1.9 - limits bugfix
* Fixes a bug where parameter limits were not being checked and out-of-limits errors not raised correctly.
### 2.1.8 - mesh convergence bugfix
* Fixes a bug where certain parameters would cause the meshing algorithm to fail to converge. With this fix, up to 4 additional attempts will be made with random initial starting locations which should converge for most cases.
### 2.1.7 - comparison operators bugfix
* Fixes a bug where comparisons between Parameters/ParameterSets and values were returning nonsensical values.
* Comparing ParameterSets with any object will now return a NotImplementedError
* Comparing Parameters will compare against the value or quantity, with default units when applicable.
* Comparing equivalence between two Parameter objects will compare the uniqueids of the Parameters, NOT the values.
### 2.1.6 - optimization bugfix
* Fixes a bug where automatic detection of eclipses was failing to properly fallback on only detecting the horizon.
* Introduces several other significant optimizations, particularly in run_compute.
### 2.1.5 - single star get_orbits and line-profile bugfix
* Fixes a bug in hierarchy.get_orbits() for a single star hierarchy which resulted in an error being raised while computing line-profiles.
### 2.1.4 - freq constraint bugfix
* This fixes the inversion of the frequency constraint when flipping to solve for period.
### 2.1.3 - overflow error for semidetached systems bugfix
* Semi-detached systems could raise an error in the backend caused by the volume being slightly over the critical value when translating between requiv in solar units to volume in unitless/roche units. When this numerical discrepancy is detected, the critical value is now adopted and a warning is sent via the logger.
### 2.1.2 - constraints in solar units bugfix
* All constraints are now executed (by default) in solar units instead of SI. The Kepler's third law constraint (constraining mass by default) failed to have sufficient precision in SI, resulting in inaccurate masses. Furthermore, if the constraint was flipped, inaccurate values of sma could be passed to the backend, resulting in overflow in the semi-detached case.
* Bundles created before 2.1.2 and imported into 2.1.2+ will continue to use SI units for constraints and should function fine, but will not benefit from this update and will be incapable of changing the system hierarchy.
### 2.1.1 - MPI detection bugfix
* PHOEBE now detects if its within MPI on various different MPI installations (previously only worked for openmpi).
### 2.1.0 - misalignment feature release
* Add support for spin-orbit misalignment
* Add support for line profile (LP) datasets
* Switch parameterization from rpole/pot to requiv (including new semi-detached and contact constraints)
* Significant rewrite to plotting infrastructure to use [autofig](http://github.com/kecnry/autofig)
* Introduction of [nparray](http://github.com/kecnry/nparray) support within parameters
* Significant rewrite to mesh dataset infrastructure to allow choosing which columns are exposed
* Distinguish Roche (xyz) from Plane-of-Sky (uvw) coordinates
* Ability to toggle interactive constraints and interactive system checks independently
* Implementation of ParameterSet.tags and Parameter.tags
* General support for renaming tags/labels
* Expose pblum for contacts
* Expose per-component r and rprojs for contacts (used to be based on primary frame of reference only)
* Fix definition of vgamma (see note in 2.0.4 release below)
* Remove phshift parameter (see note in 2.0.3 release below)
* Permanently rename 'long' parameter for spots (see note in 2.0.2 release below)
* Numerous other minor bug fixes and improvements
### 2.0.11 - astropy version dependency bugfix
* Set astropy dependency to be >=1.0 and < 3.0 (as astropy 3.0 requires python 3)
### 2.0.10 - legacy import extraneous spaces bugfix
* Handle ignoring extraneous spaces when importing a PHOEBE legacy file.
### 2.0.9 - \_default Parameters bugfix
* Previously, after loading from a JSON file, new datasets were ignored by run_compute because the \_default Parameters (such as 'enabled') were not stored and loaded correctly. This has now been fixed.
* PS.datasets/components now hides the (somewhat confusing) \_default entries.
* unicode handling in filtering is improved to make sure the copying rules from JSON are followed correctly when loaded as unicodes instead of strings.
### 2.0.8 - contacts bugfix
* Remove unused Parameters from the Bundle
* Improvement in finding the boundary between the two components of a contact system
### 2.0.7 - legacy import/export bugfix
* Handle missing parameters when importing/exporting so that a Bundle exported to a PHOEBE legacy file can successfully be reimported
* Handle importing standard weight from datasets and converting to sigma
### 2.0.6 - unit conversion bugfix
* When requesting unit conversion from the frontend, astropy will now raise an error if the units are not compatible.
### 2.0.5 - semi-detached bugfix
* Fixed bug in which importing a PHOEBE legacy file of a semi-detached system failed to set the correct potential for the star filling its roche lobe. This only affects the importer itself.
* Implemented 'critical_rpole' and 'critical_potential' constraints.
### 2.0.4 - vgamma temporary bugfix
* The definition of vgamma in 2.0.* is in the direction of positive z rather than positive RV. For the sake of maintaining backwards-compatibility, this will remain unchanged for 2.0.* releases but will be fixed in the 2.1 release to be in the direction of positive RV. Until then, this bugfix handles converting to and from PHOEBE legacy correctly so that running the PHOEBE 2 and legacy backends gives consistent results.
### 2.0.3 - t0_supconj/t0_perpass bugfix
* Fixed constraint that defines the relation between t0_perpass and t0_supconj.
* Implement new 't0_ref' parameter which corresponds to legacy's 'HJD0'.
* Phasing now accepts t0='t0_supconj', 't0_perpass', 't0_ref', or a float representing the zero-point. The 'phshift' parameter will still be supported until 2.1, at which point it will be removed.
* Inclination parameter ('incl') is now limited to the [0-180] range to maintain conventions on superior conjunction and ascending/descending nodes.
* Fixed error message in ldint.
* Fixed the ability for multiple spots to be attached to the same component.
* Raise an error if attempting to attach spots to an unsupported component. Note: spots are currently not supported for contact systems.
### 2.0.2 - spots bugfix
* If using spots, it is important that you use 2.0.2 or later as there were several important bug fixes in this release.
* 'colon' parameter for spots has been renamed to 'long' (as it's not actually colongitude). For 2.0.X releases, the 'colon' parameter will remain as a constrained parameter to avoid breaking any existing scripts, but will be removed with the 2.1.0 release.
* Features (including spots) have been fixed to correctly save and load to file.
* Corotation of spots is now enabled: if the 'syncpar' parameter is not unity, the spots will correctly corotate with the star. The location of the spot (defined by 'colat' and 'long' parameters) is defined such that the long=0 points to the companion star at t0. That coordinate system then rotates with the star according to 'syncpar'.
### 2.0.1 - ptfarea/pbspan bugfix
* Definition of flux and luminosity now use ptfarea instead of pbspan. In the bolometric case, these give the same quantity. This discrepancy was absorbed entirely by pblum scaling, so relative fluxes should not be affected, but the underlying absolute luminosities were incorrect for passbands (non-bolometric). In addition to under | text/markdown | null | Andrej Prša <aprsa@villanova.edu>, Kyle Conroy <kyle.conroy@villanova.edu>, Angela Kochoska <angela.kochoska@villanova.edu>, Martin Horvat <martin.horvat@fmf.uni-lj.si>, Dave Jones <djones@iac.es>, Michael Abdul-Masih <michael.abdul-masih@eso.org>, Bert Pablo <hpablo@aavso.org>, Joe Giammarco <giammarc@eastern.edu> | null | Kyle Conroy <kyle.conroy@villanova.edu>, Andrej Prša <aprsa@villanova.edu> | GPL-3.0-or-later | phoebe, science, astronomy, astrophysics, binary stars, eclipsing binary stars | [
"Development Status :: 5 - Production/Stable",
"Environment :: Console",
"Intended Audience :: Education",
"Intended Audience :: Science/Research",
"License :: OSI Approved :: GNU General Public License v3 (GPLv3)",
"Natural Language :: English",
"Operating System :: POSIX :: Linux",
"Operating System... | [] | null | null | >=3.8 | [] | [] | [] | [
"numpy",
"scipy",
"astropy",
"pytest",
"tqdm",
"corner",
"requests",
"python-socketio",
"flask",
"flask-cors",
"flask-socketio",
"gevent",
"gevent-websocket"
] | [] | [] | [] | [
"homepage, https://phoebe-project.org",
"repository, https://github.com/phoebe-project/phoebe2",
"documentation, https://phoebe-project.org/docs"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-19T20:10:54.514702 | phoebe-2.4.22.tar.gz | 63,428,759 | bc/0b/976db28369107db36e1ccf12df6c7f397bc6680629e355b1e75dc87de3f6/phoebe-2.4.22.tar.gz | source | sdist | null | false | cdc8bbd26671b59b644cd3c82429f641 | e621cd90625290a5163f1af266f4535da808254d92be559823433aa2c42cb65b | bc0b976db28369107db36e1ccf12df6c7f397bc6680629e355b1e75dc87de3f6 | null | [
"LICENSE.md"
] | 2,129 |
2.4 | edgartools | 5.16.2 | Python library to access and analyze SEC Edgar filings, XBRL financial statements, 10-K, 10-Q, and 8-K reports | <p align="center">
<a href="https://github.com/dgunning/edgartools">
<img src="docs/images/edgartools-logo.png" alt="EdgarTools Python SEC EDGAR library logo" height="80">
</a>
</p>
<h1 align="center">EdgarTools - Python Library for SEC EDGAR Filings</h1>
<h3 align="center">The AI Native Python library for SEC EDGAR Data</h3>
<p align="center">
<a href="https://pypi.org/project/edgartools"><img src="https://img.shields.io/pypi/v/edgartools.svg" alt="PyPI - Version"></a>
<a href="https://github.com/dgunning/edgartools/actions"><img src="https://img.shields.io/github/actions/workflow/status/dgunning/edgartools/python-hatch-workflow.yml" alt="GitHub Workflow Status"></a>
<a href="https://www.codefactor.io/repository/github/dgunning/edgartools"><img src="https://www.codefactor.io/repository/github/dgunning/edgartools/badge" alt="CodeFactor"></a>
<a href="https://github.com/pypa/hatch"><img src="https://img.shields.io/badge/%F0%9F%A5%9A-Hatch-4051b5.svg" alt="Hatch project"></a>
<a href="https://github.com/dgunning/edgartools/blob/main/LICENSE"><img src="https://img.shields.io/github/license/dgunning/edgartools" alt="GitHub"></a>
<a href="https://pypi.org/project/edgartools"><img src="https://img.shields.io/pypi/dm/edgartools" alt="PyPI - Downloads"></a>
</p>
<p align="center">
<img src="docs/images/badges/badge-ai-native.svg" alt="AI Native">
<img src="docs/images/badges/badge-10x-faster.svg" alt="10x Faster">
<img src="docs/images/badges/badge-zero-cost.svg" alt="Zero Cost">
<img src="docs/images/badges/badge-production-ready.svg" alt="Production Ready">
<img src="docs/images/badges/badge-open-source.svg" alt="Open Source">
<img src="docs/images/badges/badge-financial-data.svg" alt="Financial Data">
</p>
<p align="center">
<b>The only SEC EDGAR library built from the ground up for AI agents and LLMs. Extract financial data in 3 lines of code instead of 100+. Production-ready MCP server included.</b>
</p>
<p align="center">
<sub>Built with AI-assisted development • 3-10x faster velocity • <a href="#-support-ai-powered-development">Support this project</a></sub>
</p>
**EdgarTools** is a Python library for downloading and analyzing SEC EDGAR filings. Extract 10-K, 10-Q, 8-K reports, parse XBRL financial statements, and access insider trading data (Form 4) with a simple Python API. Free and open-source.

<p align="center">
<img src="docs/images/dividers/divider-hexagons.svg" alt="">
</p>
## Why EdgarTools?
EdgarTools is the **fastest, most powerful open-source library** for SEC EDGAR data extraction. Built for financial analysts, data scientists, and AI developers who need reliable, production-ready access to SEC filings.
<table align="center">
<tr>
<td align="center" width="33%">
<img src="docs/images/icons/icon-speed.svg" width="80" alt="Lightning Fast"><br>
<b>Lightning Fast</b><br>
10-30x faster than alternatives<br>
Optimized with lxml & PyArrow
</td>
<td align="center" width="33%">
<img src="docs/images/icons/icon-ai.svg" width="80" alt="AI Native"><br>
<b>AI Native</b><br>
Built-in MCP server for Claude<br>
LLM-optimized text extraction
</td>
<td align="center" width="33%">
<img src="docs/images/icons/icon-quality.svg" width="80" alt="Data Quality"><br>
<b>Production Quality</b><br>
1000+ tests, type hints<br>
Battle-tested by analysts
</td>
</tr>
<tr>
<td align="center" width="33%">
<img src="docs/images/icons/icon-xbrl.svg" width="80" alt="XBRL Support"><br>
<b>XBRL Native</b><br>
Full XBRL standardization<br>
Cross-company comparisons
</td>
<td align="center" width="33%">
<img src="docs/images/icons/icon-data.svg" width="80" alt="Rich Data"><br>
<b>Rich Data Objects</b><br>
Smart parsing for every form<br>
Pandas-ready DataFrames
</td>
<td align="center" width="33%">
<img src="docs/images/icons/icon-community.svg" width="80" alt="Open Source"><br>
<b>Open Source</b><br>
MIT license, community-driven<br>
Transparent & auditable
</td>
</tr>
</table>
<p align="center">
<img src="docs/images/dividers/divider-hexagons.svg" alt="">
</p>
## How It Works
<p align="center">
<img src="docs/images/how-it-works.svg" alt="How EdgarTools Python library extracts SEC EDGAR filing data">
</p>
<p align="center">
<img src="docs/images/dividers/divider-hexagons.svg" alt="">
</p>
<p align="center">
<img src="docs/images/sections/section-quick-start.svg" alt="Quick Start">
</p>
```python
# Install the SEC EDGAR Python library first (shell command, not Python):
#   pip install edgartools
# Set your identity (required by SEC regulations)
from edgar import *
set_identity("your.name@example.com")
# Get SEC 10-K, 10-Q filings and XBRL financial statements
balance_sheet = Company("AAPL").get_financials().balance_sheet()
# Access any company's SEC filings
company = Company("MSFT")
# Parse Form 4 insider trading transactions
filings = company.get_filings(form="4")
form4_filing = filings[0]
form4 = form4_filing.obj()
```

<p align="center">
<img src="docs/images/dividers/divider-hexagons.svg" alt="">
</p>
## Use Cases
### Analyze 13F Institutional Holdings & Hedge Fund Portfolios
Track what hedge funds and institutional investors own by parsing SEC 13F filings. EdgarTools extracts complete portfolio holdings with position sizes, values, and quarter-over-quarter changes.
```python
from edgar import get_filings
thirteenf = get_filings(form="13F-HR")[0].obj()
thirteenf.holdings # DataFrame of all portfolio positions
```
### Track Insider Trading with SEC Form 4
Monitor insider buying and selling activity from SEC Form 4 filings. See which executives are purchasing or selling shares, option exercises, and net position changes.
```python
from edgar import Company
company = Company("TSLA")
form4 = company.get_filings(form="4")[0].obj()
form4.transactions # Insider buy/sell transactions
```
### Extract Financial Statements from 10-K and 10-Q Filings
Get income statements, balance sheets, and cash flow statements from SEC annual and quarterly reports. Data is parsed from XBRL with standardized labels for cross-company comparison.
```python
from edgar import Company
financials = Company("MSFT").get_financials()
financials.balance_sheet() # Balance sheet with all line items
financials.income_statement() # Revenue, net income, EPS
```
### Parse 8-K Current Reports for Corporate Events
Access material corporate events as they happen -- earnings releases, acquisitions, executive changes, and more. EdgarTools parses 8-K filings into structured items with full text extraction.
```python
from edgar import get_filings
eightk = get_filings(form="8-K")[0].obj()
eightk.items # List of reported event items
```
### Query XBRL Financial Data Across Companies
Access structured XBRL financial facts for any SEC filer. Query specific line items like revenue or total assets over time, and compare across companies using standardized concepts.
```python
from edgar import Company
facts = Company("AAPL").get_facts()
facts.to_pandas("us-gaap:Revenues") # Revenue history as DataFrame
```
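The snippet above pulls a single filer; the cross-company comparison the text mentions can be sketched with the same calls plus pandas. This is illustrative only — the tickers are arbitrary and the exact shape of the frame returned by `to_pandas` may differ.
```python
import pandas as pd
from edgar import Company

# Pull the same standardized XBRL concept for several filers
tickers = ["AAPL", "MSFT", "GOOGL"]
revenue_by_company = {
    ticker: Company(ticker).get_facts().to_pandas("us-gaap:Revenues")
    for ticker in tickers
}

# Stack into one frame keyed by ticker for side-by-side comparison
comparison = pd.concat(revenue_by_company, names=["ticker"])
print(comparison.head())
```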
<p align="center">
<img src="docs/images/dividers/divider-hexagons.svg" alt="">
</p>
<p align="center">
<img src="docs/images/sections/section-features.svg" alt="Key Features">
</p>
### Comprehensive SEC Data Access
<table>
<tr>
<td width="50%" valign="top">
**Financial Statements (XBRL)**
- Balance Sheets, Income Statements, Cash Flows
- Individual line items via XBRL tags
- Multi-period comparisons with comparative periods
- Standardized cross-company data
- Automatic unit conversion
- Metadata columns (dimensions, members, units)
- Complete dimensional data support
**Fund Holdings (13F)**
- Complete 13F filing history
- Portfolio composition analysis
- Position tracking over time
- Ownership percentages
- Value calculations
**Company Dataset & Reference Data**
- Industry and state filtering
- Company subsets with metadata
- Standardized industry classifications
- SEC ticker/CIK lookups
- Exchange information
**Insider Transactions**
- Form 3, 4, 5 structured data
- Transaction history by insider
- Ownership changes
- Grant and exercise details
- Automatic parsing
</td>
<td width="50%" valign="top">
**Filing Intelligence**
- Any form type (10-K, 10-Q, 8-K, S-1, etc.)
- Complete history since 1994
- Smart data objects for each form
- Automatic HTML to clean text
- Section extraction (Risk Factors, MD&A)
**Performance & Reliability**
- 10-30x faster than alternatives
- Configurable rate limiting (enterprise mirrors supported)
- Custom SEC data sources (corporate/academic mirrors)
- Smart caching (30-second fresh filing cache)
- Robust error handling
- SSL verification with fail-fast retry
- Type hints throughout
- [Enterprise configuration →](docs/configuration.md#enterprise-configuration)
**Developer Experience**
- Intuitive, consistent API
- Pandas DataFrame integration
- Rich terminal output
- Comprehensive documentation
- 1000+ tests
</td>
</tr>
</table>
EdgarTools supports all SEC form types including **10-K annual reports**, **10-Q quarterly filings**, **8-K current reports**, **13F institutional holdings**, **Form 4 insider transactions**, **proxy statements (DEF 14A)**, and **S-1 registration statements**. Parse XBRL financial data, extract text sections, and convert filings to pandas DataFrames.
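As a minimal sketch of that extract-to-text flow (the `text()` accessor here is an assumption based on the "Automatic HTML to clean text" feature listed above; check the API reference for the exact method name):
```python
from edgar import Company

# Grab the latest annual report and pull clean text for search or LLM pipelines
filing = Company("AAPL").get_filings(form="10-K")[0]
plain_text = filing.text()  # assumed accessor for the HTML-to-text feature
print(plain_text[:500])
```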
<p align="center">
<img src="docs/images/dividers/divider-hexagons.svg" alt="">
</p>
## Comparison with Alternatives
| Feature | EdgarTools | sec-api (paid) | OpenEDGAR | Manual Scraping |
|---------|------------|----------------|-----------|-----------------|
| **AI/MCP Integration** | <img src="docs/images/icons/compare-check.svg" width="20"> | <img src="docs/images/icons/compare-cross.svg" width="20"> | <img src="docs/images/icons/compare-cross.svg" width="20"> | <img src="docs/images/icons/compare-cross.svg" width="20"> |
| **Cost** | Free | $150+/mo | Free | Free |
| **Speed** | 10-30x baseline | Fast (API) | Slow | Slow |
| **XBRL Support** | <img src="docs/images/icons/compare-check.svg" width="20"> Full | <img src="docs/images/icons/compare-partial.svg" width="20"> Partial | <img src="docs/images/icons/compare-cross.svg" width="20"> | <img src="docs/images/icons/compare-cross.svg" width="20"> |
| **Financial Statements** | <img src="docs/images/icons/compare-check.svg" width="20"> Parsed | <img src="docs/images/icons/compare-check.svg" width="20"> Parsed | <img src="docs/images/icons/compare-partial.svg" width="20"> Basic | <img src="docs/images/icons/compare-cross.svg" width="20"> DIY |
| **LLM-Ready Output** | <img src="docs/images/icons/compare-check.svg" width="20"> | <img src="docs/images/icons/compare-cross.svg" width="20"> | <img src="docs/images/icons/compare-cross.svg" width="20"> | <img src="docs/images/icons/compare-cross.svg" width="20"> |
| **Type Hints** | <img src="docs/images/icons/compare-check.svg" width="20"> | <img src="docs/images/icons/compare-cross.svg" width="20"> | <img src="docs/images/icons/compare-partial.svg" width="20"> | <img src="docs/images/icons/compare-cross.svg" width="20"> |
| **Rate Limiting** | <img src="docs/images/icons/compare-check.svg" width="20"> Auto | N/A (API) | <img src="docs/images/icons/compare-cross.svg" width="20"> Manual | <img src="docs/images/icons/compare-cross.svg" width="20"> Manual |
| **Open Source** | <img src="docs/images/icons/compare-check.svg" width="20"> MIT | <img src="docs/images/icons/compare-cross.svg" width="20"> Proprietary | <img src="docs/images/icons/compare-check.svg" width="20"> Apache | N/A |
<p align="center">
<img src="docs/images/dividers/divider-hexagons.svg" alt="">
</p>
<p align="center">
<img src="docs/images/sections/section-ai-integration.svg" alt="AI Integration">
</p>
### Use EdgarTools with Claude Code & Claude Desktop
EdgarTools provides **AI Skills** that enable Claude and other AI assistants to perform sophisticated SEC filing analysis. Once configured, you can ask Claude questions like:
- *"Compare Apple and Microsoft's revenue growth rates over the past 3 years"*
- *"Which Tesla executives sold more than $1 million in stock in the past 6 months?"*
- *"Find all technology companies that filed proxy statements with executive compensation changes"*
Claude will write the Python code, execute it, and explain the results - all powered by EdgarTools.
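For a sense of what that looks like, here is the kind of code Claude might generate for the first question, composed only from calls shown elsewhere in this README (the concept tag and the final comparison step are illustrative):
```python
from edgar import Company, set_identity

set_identity("your.name@example.com")

# Revenue history for both companies via standardized XBRL concepts
apple_rev = Company("AAPL").get_facts().to_pandas("us-gaap:Revenues")
msft_rev = Company("MSFT").get_facts().to_pandas("us-gaap:Revenues")

# Growth rates are then ordinary pandas arithmetic over these frames
print(apple_rev.tail(3))
print(msft_rev.tail(3))
```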
<details>
<summary><b>Setup Instructions</b></summary>
### Option 1: AI Skills (Recommended)
Install the EdgarTools skill for Claude Code or Claude Desktop:
```bash
pip install "edgartools[ai]"
python -c "from edgar.ai import install_skill; install_skill()"
```
This adds SEC analysis capabilities to Claude, including 3,450+ lines of API documentation, code examples, and form type reference.
### Option 2: MCP Server
Run EdgarTools as an MCP server for Claude Code or Claude Desktop:
```bash
pip install "edgartools[ai]"
python -m edgar.ai
```
Add to Claude Desktop config (`~/Library/Application Support/Claude/claude_desktop_config.json`):
```json
{
"mcpServers": {
"edgartools": {
"command": "python",
"args": ["-m", "edgar.ai"],
"env": {
"EDGAR_IDENTITY": "Your Name your.email@example.com"
}
}
}
}
```
See [AI Integration Guide](docs/ai-integration.md) for complete documentation.
</details>
<p align="center">
<img src="docs/images/dividers/divider-hexagons.svg" alt="">
</p>
## <img src="docs/images/icons/emoji-heart.svg" width="24" height="24"> Support AI Powered Development
**I build and maintain EdgarTools solo using AI-assisted development.** Your support directly funds the Claude Max subscription that makes this extraordinary velocity possible.
### The Virtuous Cycle
<table align="center">
<tr>
<td align="center" width="25%" valign="top">
<img src="docs/images/icons/emoji-1.svg" width="24" height="24"><br>
<b>You Support</b><br><br>
Buy Me A Coffee<br>
contributions fund<br>
Claude Max
</td>
<td align="center" width="25%" valign="top">
<img src="docs/images/icons/emoji-2.svg" width="24" height="24"><br>
<b>AI Acceleration</b><br><br>
Specialized agents<br>
deliver <b>3-10x faster</b><br>
development
</td>
<td align="center" width="25%" valign="top">
<img src="docs/images/icons/emoji-3.svg" width="24" height="24"><br>
<b>Rapid Delivery</b><br><br>
Features in <b>days</b><br>
instead of weeks<br>
24 releases / 60 days
</td>
<td align="center" width="25%" valign="top">
<img src="docs/images/icons/emoji-4.svg" width="24" height="24"><br>
<b>You Benefit</b><br><br>
More features,<br>
faster fixes,<br>
free forever
</td>
</tr>
</table>
### Real Impact: Last 60 Days
<table align="center">
<tr>
<td align="center" width="25%" valign="top">
<img src="docs/images/icons/emoji-rocket.svg" width="24" height="24"><br>
<h3>24</h3>
<b>Releases</b><br>
<sub>1 every 2.5 days</sub>
</td>
<td align="center" width="25%" valign="top">
<img src="docs/images/icons/emoji-lightning.svg" width="24" height="24"><br>
<h3>322</h3>
<b>Commits</b><br>
<sub>5.4 per day</sub>
</td>
<td align="center" width="25%" valign="top">
<img src="docs/images/icons/emoji-target.svg" width="24" height="24"><br>
<h3>3-10x</h3>
<b>Velocity</b><br>
<sub>vs traditional dev</sub>
</td>
<td align="center" width="25%" valign="top">
<img src="docs/images/icons/emoji-timer.svg" width="24" height="24"><br>
<h3>Days</h3>
<b>Not Weeks</b><br>
<sub>for major features</sub>
</td>
</tr>
</table>
### Recent Examples
| Feature | Traditional Estimate | With AI | Speedup |
|---------|---------------------|---------|---------|
| XBRL Period Selection | 3-4 weeks | 5 days | **7x faster** |
| MCP Workflow Tools | 2-3 weeks | 2 days | **10x faster** |
| HTML Parsing Rewrite | 2 weeks | 3 days | **4x faster** |
| Standardized Concepts API | 2 weeks | 2-3 days | **5x faster** |
<p align="center">
<a href="https://github.com/sponsors/dgunning" target="_blank">
<img src="https://img.shields.io/badge/sponsor-30363D?style=for-the-badge&logo=GitHub-Sponsors&logoColor=#EA4AAA" alt="GitHub Sponsors" height="40">
</a>
<a href="https://www.buymeacoffee.com/edgartools" target="_blank">
<img src="https://cdn.buymeacoffee.com/buttons/v2/default-yellow.png" alt="Buy Me A Coffee" height="40">
</a>
</p>
**What your support enables:**
- <img src="docs/images/icons/emoji-check.svg" width="16" height="16"> Claude Max subscription (AI agents that write, test, and document code)
- <img src="docs/images/icons/emoji-check.svg" width="16" height="16"> Continued 3-10x development velocity (features in days, not weeks)
- <img src="docs/images/icons/emoji-check.svg" width="16" height="16"> Rapid response to SEC format changes and bug reports
- <img src="docs/images/icons/emoji-check.svg" width="16" height="16"> New features based on community needs
- <img src="docs/images/icons/emoji-check.svg" width="16" height="16"> Free access for everyone, forever (no API keys, no rate limits)
**Alternative ways to support:**
- <img src="docs/images/icons/emoji-star.svg" width="16" height="16"> Star the repo on GitHub
- <img src="docs/images/icons/emoji-bug.svg" width="16" height="16"> Report bugs and contribute fixes
- <img src="docs/images/icons/emoji-book.svg" width="16" height="16"> Improve documentation
- <img src="docs/images/icons/emoji-speech.svg" width="16" height="16"> Answer questions in Discussions
- <img src="docs/images/icons/emoji-link.svg" width="16" height="16"> Share EdgarTools with colleagues
**Corporate users**: If your organization depends on EdgarTools for SEC compliance or regulatory reporting, [GitHub Sponsors](https://github.com/sponsors/dgunning) offers strategic sponsorship options designed for mission-critical dependencies.
<p align="center">
<img src="docs/images/dividers/divider-hexagons.svg" alt="">
</p>
<p align="center">
<img src="docs/images/sections/section-community.svg" alt="Community & Support">
</p>
### Documentation & Resources
- [User Journeys / Examples](https://edgartools.readthedocs.io/en/latest/examples/)
- [Quick Guide](https://edgartools.readthedocs.io/en/latest/quick-guide/)
- [Full API Documentation](https://edgartools.readthedocs.io/)
- [EdgarTools Blog](https://www.edgartools.io)
### Get Help & Connect
- [GitHub Issues](https://github.com/dgunning/edgartools/issues) - Bug reports and feature requests
- [Discussions](https://github.com/dgunning/edgartools/discussions) - Questions and community discussions
### Contributing
We welcome contributions from the community! Here's how you can help:
- **Code**: Fix bugs, add features, improve documentation
- **Examples**: Share interesting use cases and examples
- **Feedback**: Report issues or suggest improvements
- **Spread the Word**: Star the repo, share with colleagues
See our [Contributing Guide](CONTRIBUTING.md) for details.
---
<p align="center">
EdgarTools is distributed under the <a href="LICENSE">MIT License</a>
</p>
## Star History
[](https://star-history.com/#dgunning/edgartools&Timeline)
| text/markdown | null | Dwight Gunning <dgunning@gmail.com> | null | null | null | 10-K, 10-Q, 13F, 8-K, annual report, company filings, edgar, edgar api, edgar filings, filings, finance, financial data, financial statements, form 4, insider trading, institutional holdings, python, quarterly report, sec, sec api, sec filings, stock filings, xbrl | [
"Development Status :: 4 - Beta",
"Intended Audience :: Developers",
"Intended Audience :: Financial and Insurance Industry",
"Intended Audience :: Science/Research",
"License :: OSI Approved :: MIT License",
"Programming Language :: Python",
"Programming Language :: Python :: 3.10",
"Programming Lang... | [] | null | null | >=3.10 | [] | [] | [] | [
"beautifulsoup4>=4.10.0",
"httpx>=0.25.0",
"httpxthrottlecache>=0.3.0",
"humanize>=4.0.0",
"jinja2>=3.1.0",
"lxml>=4.4",
"nest-asyncio>=1.5.1",
"orjson>=3.6.0",
"pandas>=2.0.0",
"pyarrow>=17.0.0",
"pydantic>=2.0.0",
"pyrate-limiter>=3.0.0",
"rank-bm25>=0.2.1",
"rapidfuzz>=3.5.0",
"rich>=... | [] | [] | [] | [
"Homepage, https://github.com/dgunning/edgartools",
"Documentation, https://dgunning.github.io/edgartools/",
"Issues, https://github.com/dgunning/edgartools/issues",
"Source, https://github.com/dgunning/edgartools",
"Changelog, https://github.com/dgunning/edgartools/releases"
] | python-httpx/0.28.0 | 2026-02-19T20:10:52.892421 | edgartools-5.16.2.tar.gz | 2,553,886 | e8/32/e1af3f63c219ab1c90e872f9acc929066d2a1258f4ea44533087204f4fb4/edgartools-5.16.2.tar.gz | source | sdist | null | false | 424e40efc2e7250ac5d4bc4ffe0df865 | f699e4278ea3a6fd19ca7fb3df5b06b52d61bc031732c88131dd5b7b311d4947 | e832e1af3f63c219ab1c90e872f9acc929066d2a1258f4ea44533087204f4fb4 | MIT | [
"LICENSE.txt"
] | 7,212 |
2.4 | platform-2step-mcp | 0.7.0 | MCP server for Platform-2Step API with human-in-the-loop confirmation | # Platform-2Step MCP
MCP server that enables AI agents (Claude, GPT, etc.) to interact with AgendaPro's Platform API safely through a human-in-the-loop confirmation system.
## Quick Start
```bash
# Install
uv pip install -e .
# Authenticate (one-time, interactive)
export PLATFORM_2STEPS_BFF_URL=https://ap-api.agendaprodev.com/platform-2steps-bff
platform-mcp-auth login
# Run server
platform-2step-mcp
```
## Claude Desktop Setup
Add to `~/Library/Application Support/Claude/claude_desktop_config.json`:
```json
{
"mcpServers": {
"platform-2step": {
"command": "uvx",
"args": ["platform-2step-mcp"],
"env": {
"PLATFORM_2STEPS_BFF_URL": "https://ap-api.agendaprodev.com/platform-2steps-bff"
}
}
}
}
```
## Documentation
For comprehensive documentation including:
- Architecture overview and authentication flows
- All available MCP tools with parameters
- API endpoints and operation flows
- Security rules and limits
- Development and debugging
See **[CLAUDE.md](./CLAUDE.md)**.
For shared documentation across the Platform MCP ecosystem, see the megarepo's `docs/shared/` directory.
## License
MIT
| text/markdown | AgendaPro | null | null | null | MIT | null | [] | [] | null | null | >=3.11 | [] | [] | [] | [
"httpx>=0.27.0",
"mcp>=1.0.0",
"pydantic-settings>=2.0",
"pydantic>=2.0",
"pytest-asyncio>=0.23; extra == \"dev\"",
"pytest>=8.0; extra == \"dev\"",
"respx>=0.21; extra == \"dev\""
] | [] | [] | [] | [] | twine/6.1.0 CPython/3.13.7 | 2026-02-19T20:10:15.803459 | platform_2step_mcp-0.7.0.tar.gz | 90,839 | ad/d8/86365edbe3b2b8a1f12a8564a019d1a9701cfa1d688673c95b8260d2e81c/platform_2step_mcp-0.7.0.tar.gz | source | sdist | null | false | f113f3260a326fa6cae93bf8e64d3a58 | 035c59bd2717051a0acf0d758435c7f4b1827d0338a3e00f483d1e8dd0a20506 | add886365edbe3b2b8a1f12a8564a019d1a9701cfa1d688673c95b8260d2e81c | null | [] | 201 |
2.4 | onnxruntime-webgpu | 1.24.2.dev20260218002 | ONNX Runtime is a runtime accelerator for Machine Learning models | ONNX Runtime
============
ONNX Runtime is a performance-focused scoring engine for Open Neural Network Exchange (ONNX) models.
For more information on ONNX Runtime, please see `aka.ms/onnxruntime <https://aka.ms/onnxruntime/>`_ or the `Github project <https://github.com/microsoft/onnxruntime/>`_.
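A minimal scoring example (the model path and input shape are placeholders; WebGPU-specific execution-provider selection is not shown here):

.. code-block:: python

    import numpy as np
    import onnxruntime as ort

    # Load an ONNX model and run it with the default execution providers
    session = ort.InferenceSession("model.onnx")
    input_name = session.get_inputs()[0].name
    dummy = np.zeros((1, 3, 224, 224), dtype=np.float32)  # shape depends on the model
    outputs = session.run(None, {input_name: dummy})
    print([o.shape for o in outputs])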
Changes
-------
1.24.2
^^^^^^
Release Notes : https://github.com/Microsoft/onnxruntime/releases/tag/v1.24.2
1.24.1
^^^^^^
Release Notes : https://github.com/Microsoft/onnxruntime/releases/tag/v1.24.1
1.23.0
^^^^^^
Release Notes : https://github.com/Microsoft/onnxruntime/releases/tag/v1.23.0
1.22.0
^^^^^^
Release Notes : https://github.com/Microsoft/onnxruntime/releases/tag/v1.22.0
1.21.0
^^^^^^
Release Notes : https://github.com/Microsoft/onnxruntime/releases/tag/v1.21.0
1.20.0
^^^^^^
Release Notes : https://github.com/Microsoft/onnxruntime/releases/tag/v1.20.0
1.19.0
^^^^^^
Release Notes : https://github.com/Microsoft/onnxruntime/releases/tag/v1.19.0
1.18.0
^^^^^^
Release Notes : https://github.com/Microsoft/onnxruntime/releases/tag/v1.18.0
1.17.0
^^^^^^
Release Notes : https://github.com/Microsoft/onnxruntime/releases/tag/v1.17.0
1.16.0
^^^^^^
Release Notes : https://github.com/Microsoft/onnxruntime/releases/tag/v1.16.0
1.15.0
^^^^^^
Release Notes : https://github.com/Microsoft/onnxruntime/releases/tag/v1.15.0
1.14.0
^^^^^^
Release Notes : https://github.com/Microsoft/onnxruntime/releases/tag/v1.14.0
1.13.0
^^^^^^
Release Notes : https://github.com/Microsoft/onnxruntime/releases/tag/v1.13.0
1.12.0
^^^^^^
Release Notes : https://github.com/Microsoft/onnxruntime/releases/tag/v1.12.0
1.11.0
^^^^^^
Release Notes : https://github.com/Microsoft/onnxruntime/releases/tag/v1.11.0
1.10.0
^^^^^^
Release Notes : https://github.com/Microsoft/onnxruntime/releases/tag/v1.10.0
1.9.0
^^^^^
Release Notes : https://github.com/Microsoft/onnxruntime/releases/tag/v1.9.0
1.8.2
^^^^^
Release Notes : https://github.com/Microsoft/onnxruntime/releases/tag/v1.8.2
1.8.1
^^^^^
Release Notes : https://github.com/Microsoft/onnxruntime/releases/tag/v1.8.1
1.8.0
^^^^^
Release Notes : https://github.com/Microsoft/onnxruntime/releases/tag/v1.8.0
1.7.0
^^^^^
Release Notes : https://github.com/Microsoft/onnxruntime/releases/tag/v1.7.0
1.6.0
^^^^^
Release Notes : https://github.com/Microsoft/onnxruntime/releases/tag/v1.6.0
1.5.3
^^^^^
Release Notes : https://github.com/Microsoft/onnxruntime/releases/tag/v1.5.3
1.5.2
^^^^^
Release Notes : https://github.com/Microsoft/onnxruntime/releases/tag/v1.5.2
1.5.1
^^^^^
Release Notes : https://github.com/Microsoft/onnxruntime/releases/tag/v1.5.1
1.4.0
^^^^^
Release Notes : https://github.com/Microsoft/onnxruntime/releases/tag/v1.4.0
1.3.1
^^^^^
Release Notes : https://github.com/Microsoft/onnxruntime/releases/tag/v1.3.1
1.3.0
^^^^^
Release Notes : https://github.com/Microsoft/onnxruntime/releases/tag/v1.3.0
1.2.0
^^^^^
Release Notes : https://github.com/Microsoft/onnxruntime/releases/tag/v1.2.0
1.1.0
^^^^^
Release Notes : https://github.com/Microsoft/onnxruntime/releases/tag/v1.1.0
1.0.0
^^^^^
Release Notes : https://github.com/Microsoft/onnxruntime/releases/tag/v1.0.0
0.5.0
^^^^^
Release Notes : https://github.com/Microsoft/onnxruntime/releases/tag/v0.5.0
0.4.0
^^^^^
Release Notes : https://github.com/Microsoft/onnxruntime/releases/tag/v0.4.0
| null | Microsoft Corporation | onnxruntime@microsoft.com | null | null | MIT License | onnx machine learning | [
"Development Status :: 5 - Production/Stable",
"Intended Audience :: Developers",
"License :: OSI Approved :: MIT License",
"Operating System :: POSIX :: Linux",
"Operating System :: Microsoft :: Windows",
"Operating System :: MacOS",
"Topic :: Scientific/Engineering",
"Topic :: Scientific/Engineering... | [] | https://onnxruntime.ai | https://github.com/microsoft/onnxruntime/tags | >=3.10 | [] | [] | [] | [
"flatbuffers",
"numpy>=1.21.6",
"packaging",
"protobuf",
"sympy"
] | [] | [] | [] | [] | RestSharp/106.13.0.0 | 2026-02-19T20:09:57.553219 | onnxruntime_webgpu-1.24.2.dev20260218002-cp314-cp314-win_amd64.whl | 26,455,440 | c6/53/b613a8bed67e0b42e2f36f5a024b7b98f96484c1f3ff6d4b76751c131c35/onnxruntime_webgpu-1.24.2.dev20260218002-cp314-cp314-win_amd64.whl | py3 | bdist_wheel | null | false | f4653cfa180edf58a9fe3609269816fa | 62c5ff8c683b9ce5e4d2bad77c88de37e36e3aa37034ab5b8a46b531ef160eaa | c653b613a8bed67e0b42e2f36f5a024b7b98f96484c1f3ff6d4b76751c131c35 | null | [] | 677 |
2.4 | onnxruntime-gpu | 1.24.2 | ONNX Runtime is a runtime accelerator for Machine Learning models | ONNX Runtime
============
ONNX Runtime is a performance-focused scoring engine for Open Neural Network Exchange (ONNX) models.
For more information on ONNX Runtime, please see `aka.ms/onnxruntime <https://aka.ms/onnxruntime/>`_ or the `Github project <https://github.com/microsoft/onnxruntime/>`_.
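A minimal GPU scoring example (the model path is a placeholder; the provider list falls back to CPU when CUDA is unavailable):

.. code-block:: python

    import onnxruntime as ort

    # Prefer the CUDA execution provider, falling back to CPU
    session = ort.InferenceSession(
        "model.onnx",
        providers=["CUDAExecutionProvider", "CPUExecutionProvider"],
    )
    print(session.get_providers())  # shows which providers were actually enabled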
Changes
-------
1.24.2
^^^^^^
Release Notes : https://github.com/Microsoft/onnxruntime/releases/tag/v1.24.2
1.24.1
^^^^^^
Release Notes : https://github.com/Microsoft/onnxruntime/releases/tag/v1.24.1
1.23.0
^^^^^^
Release Notes : https://github.com/Microsoft/onnxruntime/releases/tag/v1.23.0
1.22.0
^^^^^^
Release Notes : https://github.com/Microsoft/onnxruntime/releases/tag/v1.22.0
1.21.0
^^^^^^
Release Notes : https://github.com/Microsoft/onnxruntime/releases/tag/v1.21.0
1.20.0
^^^^^^
Release Notes : https://github.com/Microsoft/onnxruntime/releases/tag/v1.20.0
1.19.0
^^^^^^
Release Notes : https://github.com/Microsoft/onnxruntime/releases/tag/v1.19.0
1.18.0
^^^^^^
Release Notes : https://github.com/Microsoft/onnxruntime/releases/tag/v1.18.0
1.17.0
^^^^^^
Release Notes : https://github.com/Microsoft/onnxruntime/releases/tag/v1.17.0
1.16.0
^^^^^^
Release Notes : https://github.com/Microsoft/onnxruntime/releases/tag/v1.16.0
1.15.0
^^^^^^
Release Notes : https://github.com/Microsoft/onnxruntime/releases/tag/v1.15.0
1.14.0
^^^^^^
Release Notes : https://github.com/Microsoft/onnxruntime/releases/tag/v1.14.0
1.13.0
^^^^^^
Release Notes : https://github.com/Microsoft/onnxruntime/releases/tag/v1.13.0
1.12.0
^^^^^^
Release Notes : https://github.com/Microsoft/onnxruntime/releases/tag/v1.12.0
1.11.0
^^^^^^
Release Notes : https://github.com/Microsoft/onnxruntime/releases/tag/v1.11.0
1.10.0
^^^^^^
Release Notes : https://github.com/Microsoft/onnxruntime/releases/tag/v1.10.0
1.9.0
^^^^^
Release Notes : https://github.com/Microsoft/onnxruntime/releases/tag/v1.9.0
1.8.2
^^^^^
Release Notes : https://github.com/Microsoft/onnxruntime/releases/tag/v1.8.2
1.8.1
^^^^^
Release Notes : https://github.com/Microsoft/onnxruntime/releases/tag/v1.8.1
1.8.0
^^^^^
Release Notes : https://github.com/Microsoft/onnxruntime/releases/tag/v1.8.0
1.7.0
^^^^^
Release Notes : https://github.com/Microsoft/onnxruntime/releases/tag/v1.7.0
1.6.0
^^^^^
Release Notes : https://github.com/Microsoft/onnxruntime/releases/tag/v1.6.0
1.5.3
^^^^^
Release Notes : https://github.com/Microsoft/onnxruntime/releases/tag/v1.5.3
1.5.2
^^^^^
Release Notes : https://github.com/Microsoft/onnxruntime/releases/tag/v1.5.2
1.5.1
^^^^^
Release Notes : https://github.com/Microsoft/onnxruntime/releases/tag/v1.5.1
1.4.0
^^^^^
Release Notes : https://github.com/Microsoft/onnxruntime/releases/tag/v1.4.0
1.3.1
^^^^^
Release Notes : https://github.com/Microsoft/onnxruntime/releases/tag/v1.3.1
1.3.0
^^^^^
Release Notes : https://github.com/Microsoft/onnxruntime/releases/tag/v1.3.0
1.2.0
^^^^^
Release Notes : https://github.com/Microsoft/onnxruntime/releases/tag/v1.2.0
1.1.0
^^^^^
Release Notes : https://github.com/Microsoft/onnxruntime/releases/tag/v1.1.0
1.0.0
^^^^^
Release Notes : https://github.com/Microsoft/onnxruntime/releases/tag/v1.0.0
0.5.0
^^^^^
Release Notes : https://github.com/Microsoft/onnxruntime/releases/tag/v0.5.0
0.4.0
^^^^^
Release Notes : https://github.com/Microsoft/onnxruntime/releases/tag/v0.4.0
| null | Microsoft Corporation | onnxruntime@microsoft.com | null | null | MIT License | onnx machine learning | [
"Development Status :: 5 - Production/Stable",
"Intended Audience :: Developers",
"License :: OSI Approved :: MIT License",
"Operating System :: POSIX :: Linux",
"Operating System :: Microsoft :: Windows",
"Operating System :: MacOS",
"Topic :: Scientific/Engineering",
"Topic :: Scientific/Engineering... | [] | https://onnxruntime.ai | https://github.com/microsoft/onnxruntime/tags | >=3.10 | [] | [] | [] | [
"flatbuffers",
"numpy>=1.21.6",
"packaging",
"protobuf",
"sympy",
"nvidia-cuda-nvrtc-cu12~=12.0; extra == \"cuda\"",
"nvidia-cuda-runtime-cu12~=12.0; extra == \"cuda\"",
"nvidia-cufft-cu12~=11.0; extra == \"cuda\"",
"nvidia-curand-cu12~=10.0; extra == \"cuda\"",
"nvidia-cudnn-cu12~=9.0; extra == \... | [] | [] | [] | [] | RestSharp/106.13.0.0 | 2026-02-19T20:09:31.888412 | onnxruntime_gpu-1.24.2-cp314-cp314-win_amd64.whl | 209,502,551 | cc/db/0f94a1b31adc07f65b8184ae600e9040e4964b415a947de77c7b5d8b7b82/onnxruntime_gpu-1.24.2-cp314-cp314-win_amd64.whl | py3 | bdist_wheel | null | false | ea52accad2869404bdb0441aaa804f74 | 9f32e82d88eb3233ed3027713f1e832fa9d4a63741896d4caf6060fe58f72b5c | ccdb0f94a1b31adc07f65b8184ae600e9040e4964b415a947de77c7b5d8b7b82 | null | [] | 64,191 |
2.4 | surety-config | 0.0.3 | Configuration management for Surety ecosystem. | # Surety Config
Configuration layer for the Surety ecosystem.
`surety-config` provides structured configuration management
for contract-driven service testing using Surety.
---
## Installation
```bash
pip install surety-config
```
| text/markdown | null | Elena Kulgavaya <elena.kulgavaya@gmail.com> | null | null | MIT | configuration, testing, contract-testing, automation, surety | [
"Programming Language :: Python :: 3",
"License :: OSI Approved :: MIT License",
"Operating System :: OS Independent"
] | [] | null | null | >=3.8 | [] | [] | [] | [
"surety<1.0,>=0.0.4",
"pyyaml>=6.0.2"
] | [] | [] | [] | [] | twine/6.1.0 CPython/3.13.7 | 2026-02-19T20:08:57.203049 | surety_config-0.0.3.tar.gz | 7,666 | 2d/4b/ade539baf09677986c5a8b62f997307a6c09b77cd387309e1908c644eab6/surety_config-0.0.3.tar.gz | source | sdist | null | false | 68a4f88d89d8f287f4d2b3d8f4296d1d | c6631b729cf3778d8614279204aca9619f7b9e463b97f80de4d077ad20315d31 | 2d4bade539baf09677986c5a8b62f997307a6c09b77cd387309e1908c644eab6 | null | [
"LICENSE"
] | 265 |
2.4 | docusync | 2.2.0 | CLI tool for syncing documentation from multiple repositories for Docusaurus | <div align="center">
# 📚 DocuSync
**Effortlessly sync documentation from multiple repositories into your Docusaurus site**
[](https://www.python.org/downloads/)
[](https://opensource.org/licenses/MIT)
[](https://github.com/psf/black)
[Features](#-features) • [Installation](#-installation) • [Quick Start](#-quick-start) • [Usage](#-usage) • [Configuration](#-configuration)
</div>
---
## 🎯 Overview
**DocuSync** is a powerful CLI tool designed specifically for **Docusaurus** projects that need to aggregate documentation from multiple GitHub repositories. It automatically clones, organizes, and generates the proper category structure for your multi-repository documentation setup.
Perfect for:
- 🏢 **Microservices architectures** - Centralize docs from multiple services
- 📦 **Monorepo projects** - Sync docs from different packages
- 🔧 **SDK ecosystems** - Aggregate documentation from multiple SDKs
- 🌐 **Multi-team projects** - Combine docs from different teams
## ✨ Features
- 🚀 **Fast & Efficient** - Shallow cloning with configurable depth
- 🎨 **Docusaurus Integration** - Auto-generates `_category_.json` files
- 📋 **Multi-Repository Support** - Sync from unlimited GitHub repositories
- 🔧 **Flexible Configuration** - JSON-based configuration with validation
- 🔐 **Multiple Auth Methods** - Support for SSH keys and HTTPS with Personal Access Tokens
- 📊 **Beautiful Output** - Rich console interface with progress indicators
- 🧹 **Clean & Safe** - Automatic cleanup of temporary files
- ✅ **Type-Safe** - Built with Pydantic for robust configuration
- 🎯 **Selective Sync** - Sync all repos or just one
- 🔨 **MD/MDX Fixer** - Automatically fix common Markdown/MDX issues that cause Docusaurus build failures
## 📦 Installation
### Using uv (Recommended)
```bash
uv add docusync
```
### Using pip
```bash
pip install docusync
```
### For Development
```bash
git clone https://github.com/Roman505050/docusync.git
cd docusync
uv sync --all-groups
```
## 🚀 Quick Start
1. **Initialize configuration**
```bash
docusync init
```
2. **Edit `docusync.json`** with your repositories
3. **Sync your documentation**
```bash
docusync sync
```
4. **Run Docusaurus**
```bash
npm run start
```
That's it! Your documentation is now synced and ready to go! 🎉
## 📖 Usage
### Sync All Repositories
```bash
docusync sync
```
**With verbose output:**
```bash
docusync sync -v
```
**Keep temporary files for debugging:**
```bash
docusync sync --no-cleanup
```
**Automatically fix MD/MDX issues after sync:**
```bash
docusync sync --fix-md
```
### Sync Single Repository
```bash
docusync sync-one <repository-name>
```
**Example:**
```bash
docusync sync-one api-gateway
```
**With automatic MD/MDX fixing:**
```bash
docusync sync-one api-gateway --fix-md
```
### Fix Markdown/MDX Files
Fix common MDX/Markdown issues that cause Docusaurus build failures:
```bash
# Fix all .md files in a directory
docusync fix docs/
# Fix a single file
docusync fix docs/my-file.md
# Preview changes without applying them
docusync fix docs/ --dry-run
# Fix only files in the target directory (non-recursive)
docusync fix docs/ --no-recursive
```
**What the fixer fixes:**
- ❌ Invalid JSX tag names (e.g., `<1something>` → `&lt;1something&gt;`)
- ❌ HTML comments in MDX context (e.g., `<!-- comment -->` → `{/* comment */}`)
- ❌ Unclosed void elements (e.g., `<br>` → `<br />`)
- ❌ Invalid HTML attributes (e.g., `class=` → `className=`, `for=` → `htmlFor=`)
- ❌ Self-closing tag spacing (e.g., `<tag/>` → `<tag />`)
- ❌ Malformed numeric entities (e.g., `&#123;` → `{`)
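To illustrate one of these transforms, here is a minimal standalone sketch of the HTML-comment fix. This is not DocuSync's actual implementation, just the idea behind it:
```python
import re

def fix_html_comments(source: str) -> str:
    """Convert HTML comments to MDX-style JSX comments."""
    return re.sub(r"<!--(.*?)-->", r"{/*\1*/}", source, flags=re.DOTALL)

print(fix_html_comments("intro <!-- draft note --> text"))
# intro {/* draft note */} text
```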
### List Configured Repositories
```bash
docusync list
```
### Initialize Configuration
```bash
docusync init
```
**With custom config path:**
```bash
docusync init -c custom-config.json
```
## ⚙️ Configuration
### Basic Configuration
Create a `docusync.json` file in your Docusaurus project root:
```json
{
"repositories": [
{
"github_path": "acme-corp/api-gateway",
"docs_path": "docs",
"display_name": "API Gateway",
"position": 1,
"description": "Central API gateway documentation"
},
{
"github_path": "acme-corp/user-service",
"docs_path": "documentation",
"display_name": "User Service",
"position": 2,
"description": "User management and authentication service"
},
{
"github_path": "acme-corp/payment-processor",
"docs_path": "docs",
"display_name": "Payment Processor",
"position": 3,
"description": "Payment processing and billing documentation"
}
],
"paths": {
"temp_dir": ".temp-repos",
"docs_dir": "docs"
},
"git": {
"clone_depth": 1,
"default_branches": ["main", "master"]
}
}
```
### Configuration Options
#### `repositories` (required)
Array of repositories to sync:
| Field | Type | Description |
|-------|------|-------------|
| `github_path` | `string` | GitHub repository path (`owner/repo`) |
| `docs_path` | `string` | Path to documentation within the repository |
| `display_name` | `string` | Display name for the category |
| `position` | `integer` | Sidebar position (must be unique) |
| `description` | `string` | Category description for Docusaurus |
| `protocol` | `string` | (Optional) Clone protocol: `"ssh"` or `"https"` |
| `pat_token_env` | `string` | (Optional) Environment variable with PAT token for this repo |
| `ssh_key_path` | `string` | (Optional) Path to SSH private key for this repo |
#### `paths` (required)
| Field | Type | Description |
|-------|------|-------------|
| `temp_dir` | `string` | Directory for temporary clones (auto-deleted) |
| `docs_dir` | `string` | Target directory for documentation |
#### `git` (required)
| Field | Type | Description |
|-------|------|-------------|
| `clone_depth` | `integer` | Git clone depth (1 for shallow clone) |
| `default_branches` | `array` | Default branches to try cloning |
| `default_protocol` | `string` | Clone protocol: `"ssh"` or `"https"` (default: `"ssh"`) |
| `default_ssh_key_path` | `string` | Default SSH private key path (optional, e.g., `~/.ssh/id_ed25519`) |
| `default_pat_token_env` | `string` | Default environment variable name containing GitHub Personal Access Token (optional) |
#### Authentication & Protocols
**SSH Authentication (default):**
- Uses `git@github.com:owner/repo.git` format
- Requires SSH key setup with GitHub
- Best for local development
- Supports custom SSH keys per repository
**HTTPS with PAT Token:**
- Uses `https://github.com/owner/repo.git` format
- Requires GitHub Personal Access Token
- Best for CI/CD pipelines
- Token is read from environment variable
- Supports different tokens per repository
**Example with HTTPS:**
```json
{
"git": {
"clone_depth": 1,
"default_branches": ["main", "master"],
"default_protocol": "https",
"default_pat_token_env": "GITHUB_PAT_TOKEN"
}
}
```
Then set your token:
```bash
export GITHUB_PAT_TOKEN="ghp_your_token_here"
```
**Example with custom SSH keys:**
```json
{
"repositories": [
{
"github_path": "acme-corp/api-docs",
"protocol": "ssh",
"ssh_key_path": "~/.ssh/acme_corp_key",
...
},
{
"github_path": "partner-org/service-docs",
"protocol": "ssh",
"ssh_key_path": "~/.ssh/partner_org_key",
...
}
],
"git": {
"default_protocol": "ssh",
"default_ssh_key_path": "~/.ssh/id_ed25519"
}
}
```
**Per-repository protocol override:**
```json
{
"repositories": [
{
"github_path": "acme-corp/payment-processor",
"docs_path": "docs",
"display_name": "Payment Processor",
"position": 3,
"description": "Payment processing documentation",
"protocol": "https"
}
]
}
```
**Multiple organizations with different authentication:**
```json
{
"repositories": [
{
"github_path": "acme-corp/api-docs",
"display_name": "ACME API",
"protocol": "ssh",
"ssh_key_path": "~/.ssh/acme_corp_key",
...
},
{
"github_path": "partner-org/service-docs",
"display_name": "Partner Service",
"protocol": "https",
"pat_token_env": "PARTNER_ORG_PAT_TOKEN",
...
},
{
"github_path": "contractor/integration-docs",
"display_name": "Integration",
"protocol": "https",
"pat_token_env": "CONTRACTOR_PAT_TOKEN",
...
}
],
"git": {
"default_protocol": "ssh",
"default_ssh_key_path": "~/.ssh/id_ed25519",
"default_pat_token_env": "GITHUB_PAT_TOKEN"
}
}
```
Then set individual credentials:
```bash
# For HTTPS repositories
export PARTNER_ORG_PAT_TOKEN="ghp_partner_token"
export CONTRACTOR_PAT_TOKEN="ghp_contractor_token"
export GITHUB_PAT_TOKEN="ghp_default_token"
```
**Priority for authentication:**
- **SSH:** Repository `ssh_key_path` → Global `default_ssh_key_path` → System default
- **HTTPS:** Repository `pat_token_env` → Global `default_pat_token_env` → No token
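That resolution order can be pictured as a small lookup chain. The sketch below is illustrative only — the field names mirror the configuration keys above, not DocuSync's internal code:
```python
import os

def resolve_pat_token(repo: dict, git_defaults: dict) -> str | None:
    """HTTPS: repository pat_token_env, then default_pat_token_env, then no token."""
    env_name = repo.get("pat_token_env") or git_defaults.get("default_pat_token_env")
    return os.environ.get(env_name) if env_name else None

def resolve_ssh_key(repo: dict, git_defaults: dict) -> str | None:
    """SSH: repository ssh_key_path, then default_ssh_key_path, then system default."""
    return repo.get("ssh_key_path") or git_defaults.get("default_ssh_key_path")
```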
## 🎨 Docusaurus Integration
### Automatic `_category_.json` Generation
DocuSync automatically creates `_category_.json` files in each synced documentation directory, following the Docusaurus category format:
```json
{
"label": "API Gateway",
"position": 1,
"link": {
"type": "generated-index",
"description": "Central API gateway documentation"
}
}
```
This enables Docusaurus to:
- ✅ Automatically generate index pages for each category
- ✅ Properly order documentation in the sidebar
- ✅ Display category descriptions on index pages
- ✅ Create a beautiful, organized documentation structure
### Project Structure After Sync
```
your-docusaurus-project/
├── docusaurus.config.js
├── docusync.json
├── docs/
│ ├── api-gateway/
│ │ ├── _category_.json # Auto-generated ✨
│ │ ├── intro.md
│ │ ├── getting-started.md
│ │ └── api-reference.md
│ ├── user-service/
│ │ ├── _category_.json # Auto-generated ✨
│ │ ├── overview.md
│ │ └── configuration.md
│ └── payment-processor/
│ ├── _category_.json # Auto-generated ✨
│ ├── setup.md
│ └── webhooks.md
└── .temp-repos/ # Cleaned up after sync
```
## 🔍 Real-World Example
Here's a complete workflow for a microservices documentation setup:
```bash
# 1. Initialize your Docusaurus project
npx create-docusaurus@latest my-docs classic
# 2. Navigate to your project
cd my-docs
# 3. Initialize DocuSync
docusync init
# 4. Configure your repositories in docusync.json
cat > docusync.json << 'EOF'
{
"repositories": [
{
"github_path": "your-org/auth-service",
"docs_path": "docs",
"display_name": "Authentication Service",
"position": 1,
"description": "User authentication and authorization"
},
{
"github_path": "your-org/billing-service",
"docs_path": "docs",
"display_name": "Billing Service",
"position": 2,
"description": "Payment processing and billing management"
}
],
"paths": {
"temp_dir": ".temp-repos",
"docs_dir": "docs"
},
"git": {
"clone_depth": 1,
"default_branches": ["main", "master"]
}
}
EOF
# 5. Sync documentation
docusync sync -v
# 6. Start Docusaurus
npm run start
```
## 🛠️ Development
### Setup Development Environment
```bash
# Clone the repository
git clone https://github.com/Roman505050/docusync.git
cd docusync
# Install dependencies
uv sync --all-groups
```
### Code Quality
```bash
# Format code
uv run black src/
uv run isort src/
# Lint
uv run flake8 src/
# Type check
uv run mypy src/
```
### Running Tests
```bash
# Run all tests
uv run pytest
# With coverage
uv run pytest --cov=docusync --cov-report=html
```
## 📋 Requirements
- **Python** 3.12 or higher
- **Git** installed on your system
- **GitHub Access**: Either SSH keys configured OR Personal Access Token for HTTPS
- **Docusaurus** 2.x or higher (for the target project)
## 🤝 Contributing
Contributions are welcome! Here's how you can help:
1. 🍴 Fork the repository
2. 🔧 Create a feature branch (`git checkout -b feature/amazing-feature`)
3. ✅ Commit your changes (`git commit -m 'Add amazing feature'`)
4. 📤 Push to the branch (`git push origin feature/amazing-feature`)
5. 🎉 Open a Pull Request
Please make sure to:
- Add tests for new features
- Update documentation as needed
- Follow the existing code style
- Run linters before submitting
## 📝 License
This project is licensed under the MIT License - see the [LICENSE](LICENSE) file for details.
## 🙏 Acknowledgments
- [Click](https://click.palletsprojects.com/) - Beautiful CLI framework
- [Rich](https://github.com/Textualize/rich) - Rich text and formatting in the terminal
- [Pydantic](https://docs.pydantic.dev/) - Data validation using Python type hints
- [Docusaurus](https://docusaurus.io/) - The documentation framework this tool was built for
## 📧 Support
- 🐛 **Bug Reports**: [Open an issue](https://github.com/Roman505050/docusync/issues)
- 💡 **Feature Requests**: [Open an issue](https://github.com/Roman505050/docusync/issues)
- 📖 **Documentation**: [Read the docs](https://github.com/Roman505050/docusync#readme)
## 🌟 Show Your Support
If you find DocuSync helpful, please consider giving it a ⭐️ on GitHub!
---
<div align="center">
Made with ❤️ for the Docusaurus community
[Report Bug](https://github.com/Roman505050/docusync/issues) • [Request Feature](https://github.com/Roman505050/docusync/issues)
</div>
| text/markdown | null | Roman Myhun <myhun59@gmail.com> | null | null | MIT | cli, documentation, docusaurus, git, sync | [
"Development Status :: 4 - Beta",
"Environment :: Console",
"Intended Audience :: Developers",
"License :: OSI Approved :: MIT License",
"Operating System :: OS Independent",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.12",
"Topic :: Documentation",
"Topic :: Software... | [] | null | null | >=3.12 | [] | [] | [] | [
"click>=8.1.0",
"pydantic-settings>=2.0.0",
"rich>=13.0.0"
] | [] | [] | [] | [
"Homepage, https://github.com/Roman505050/docusync",
"Repository, https://github.com/Roman505050/docusync",
"Issues, https://github.com/Roman505050/docusync/issues"
] | uv/0.9.26 {"installer":{"name":"uv","version":"0.9.26","subcommand":["publish"]},"python":null,"implementation":{"name":null,"version":null},"distro":{"name":"macOS","version":null,"id":null,"libc":null},"system":{"name":null,"release":null},"cpu":null,"openssl_version":null,"setuptools_version":null,"rustc_version":null,"ci":null} | 2026-02-19T20:08:52.071203 | docusync-2.2.0.tar.gz | 49,951 | da/13/b983b5749156b0a6279c381d988cae082c72c55772fc12b6068546eba7d7/docusync-2.2.0.tar.gz | source | sdist | null | false | 0469ff6cffbc0d450bcc9869c1940338 | fa29a5a9d479154b61b05bfb35dc0e44bf932a50434a8aaa5db616ed69b0127a | da13b983b5749156b0a6279c381d988cae082c72c55772fc12b6068546eba7d7 | null | [
"LICENSE"
] | 217 |
2.4 | dkist-processing-visp | 5.3.14 | Science processing code for the ViSP instrument on DKIST | dkist-processing-visp
=====================
|codecov|
Overview
--------
The dkist-processing-visp library contains the implementation of the ViSP pipelines as a collection of Tasks built on the
`dkist-processing-core <https://pypi.org/project/dkist-processing-core/>`_ framework and
`dkist-processing-common <https://pypi.org/project/dkist-processing-common/>`_.
The recommended project structure is to keep tasks and workflows in separate packages. Having the workflows
in their own package makes it possible to use the build_utils to verify the integrity of those workflows in unit tests.
Environment Variables
---------------------
.. list-table::
:widths: 10 90
:header-rows: 1
* - Variable
- Field Info
* - LOGURU_LEVEL
- annotation=str required=False default='INFO' alias_priority=2 validation_alias='LOGURU_LEVEL' description='Log level for the application'
* - MESH_CONFIG
- annotation=dict[str, MeshService] required=False default_factory=dict alias_priority=2 validation_alias='MESH_CONFIG' description='Service mesh configuration' examples=[{'upstream_service_name': {'mesh_address': 'localhost', 'mesh_port': 6742}}]
* - RETRY_CONFIG
- annotation=RetryConfig required=False default_factory=RetryConfig description='Retry configuration for the service'
* - OTEL_SERVICE_NAME
- annotation=str required=False default='unknown-service-name' alias_priority=2 validation_alias='OTEL_SERVICE_NAME' description='Service name for OpenTelemetry'
* - DKIST_SERVICE_VERSION
- annotation=str required=False default='unknown-service-version' alias_priority=2 validation_alias='DKIST_SERVICE_VERSION' description='Service version for OpenTelemetry'
* - NOMAD_ALLOC_ID
- annotation=str required=False default='unknown-allocation-id' alias_priority=2 validation_alias='NOMAD_ALLOC_ID' description='Nomad allocation ID for OpenTelemetry'
* - NOMAD_ALLOC_NAME
- annotation=str required=False default='unknown-allocation-name' alias='NOMAD_ALLOC_NAME' alias_priority=2 description='Allocation name for the deployed container the task is running on.'
* - NOMAD_GROUP_NAME
- annotation=str required=False default='unknown-allocation-group' alias='NOMAD_GROUP_NAME' alias_priority=2 description='Allocation group for the deployed container the task is running on'
* - OTEL_EXPORTER_OTLP_TRACES_INSECURE
- annotation=bool required=False default=True description='Use insecure connection for OTLP traces'
* - OTEL_EXPORTER_OTLP_METRICS_INSECURE
- annotation=bool required=False default=True description='Use insecure connection for OTLP metrics'
* - OTEL_EXPORTER_OTLP_TRACES_ENDPOINT
- annotation=Union[str, NoneType] required=False default=None description='OTLP traces endpoint. Overrides mesh configuration' examples=['localhost:4317']
* - OTEL_EXPORTER_OTLP_METRICS_ENDPOINT
- annotation=Union[str, NoneType] required=False default=None description='OTLP metrics endpoint. Overrides mesh configuration' examples=['localhost:4317']
* - OTEL_PYTHON_DISABLED_INSTRUMENTATIONS
- annotation=list[str] required=False default_factory=list description='List of instrumentations to disable. https://opentelemetry.io/docs/zero-code/python/configuration/' examples=[['pika', 'requests']]
* - OTEL_PYTHON_FASTAPI_EXCLUDED_URLS
- annotation=str required=False default='health' description='Comma separated list of URLs to exclude from OpenTelemetry instrumentation in FastAPI.' examples=['client/.*/info,healthcheck']
* - SYSTEM_METRIC_INSTRUMENTATION_CONFIG
- annotation=Union[dict[str, bool], NoneType] required=False default=None description='Configuration for system metric instrumentation. https://opentelemetry-python-contrib.readthedocs.io/en/latest/instrumentation/system_metrics/system_metrics.html' examples=[{'system.memory.usage': ['used', 'free', 'cached'], 'system.cpu.time': ['idle', 'user', 'system', 'irq'], 'system.network.io': ['transmit', 'receive'], 'process.runtime.memory': ['rss', 'vms'], 'process.runtime.cpu.time': ['user', 'system'], 'process.runtime.context_switches': ['involuntary', 'voluntary']}]
* - ISB_USERNAME
- annotation=str required=False default='guest' description='Username for the interservice-bus.'
* - ISB_PASSWORD
- annotation=str required=False default='guest' description='Password for the interservice-bus.'
* - ISB_EXCHANGE
- annotation=str required=False default='master.direct.x' description='Exchange for the interservice-bus.'
* - ISB_QUEUE_TYPE
- annotation=str required=False default='classic' description='Queue type for the interservice-bus.' examples=['quorum', 'classic']
* - BUILD_VERSION
- annotation=str required=False default='dev' description='Fallback build version for workflow tasks.'
* - MAX_FILE_DESCRIPTORS
- annotation=int required=False default=1024 description='Maximum number of file descriptors to allow the process.'
* - GQL_AUTH_TOKEN
- annotation=Union[str, NoneType] required=False default='dev' description='The auth token for the metadata-store-api.'
* - OBJECT_STORE_ACCESS_KEY
- annotation=Union[str, NoneType] required=False default=None description='The access key for the object store.'
* - OBJECT_STORE_SECRET_KEY
- annotation=Union[str, NoneType] required=False default=None description='The secret key for the object store.'
* - OBJECT_STORE_USE_SSL
- annotation=bool required=False default=False description='Whether to use SSL for the object store connection.'
* - MULTIPART_THRESHOLD
- annotation=Union[int, NoneType] required=False default=None description='Multipart threshold for the object store.'
* - S3_CLIENT_CONFIG
- annotation=Union[dict, NoneType] required=False default=None description='S3 client configuration for the object store.'
* - S3_UPLOAD_CONFIG
- annotation=Union[dict, NoneType] required=False default=None description='S3 upload configuration for the object store.'
* - S3_DOWNLOAD_CONFIG
- annotation=Union[dict, NoneType] required=False default=None description='S3 download configuration for the object store.'
* - GLOBUS_MAX_RETRIES
- annotation=int required=False default=5 description='Max retries for transient errors on calls to the globus api.'
* - GLOBUS_INBOUND_CLIENT_CREDENTIALS
- annotation=list[GlobusClientCredential] required=False default_factory=list description='Globus client credentials for inbound transfers.' examples=[[{'client_id': 'id1', 'client_secret': 'secret1'}, {'client_id': 'id2', 'client_secret': 'secret2'}]]
* - GLOBUS_OUTBOUND_CLIENT_CREDENTIALS
- annotation=list[GlobusClientCredential] required=False default_factory=list description='Globus client credentials for outbound transfers.' examples=[[{'client_id': 'id3', 'client_secret': 'secret3'}, {'client_id': 'id4', 'client_secret': 'secret4'}]]
* - OBJECT_STORE_ENDPOINT
- annotation=Union[str, NoneType] required=False default=None description='Object store Globus Endpoint ID.'
* - SCRATCH_ENDPOINT
- annotation=Union[str, NoneType] required=False default=None description='Scratch Globus Endpoint ID.'
* - SCRATCH_BASE_PATH
- annotation=str required=False default='scratch/' description='Base path for scratch storage.'
* - SCRATCH_INVENTORY_DB_COUNT
- annotation=int required=False default=16 description='Number of databases in the scratch inventory (redis).'
* - DOCS_BASE_URL
- annotation=str required=False default='my_test_url' description='Base URL for the documentation site.'
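The variables above are read from the process environment. For local development they can be set before the pipeline code imports its settings models (the values below are illustrative):

.. code-block:: python

    import os

    # Illustrative overrides for a handful of the documented variables
    os.environ["LOGURU_LEVEL"] = "DEBUG"
    os.environ["BUILD_VERSION"] = "local-dev"
    os.environ["SCRATCH_BASE_PATH"] = "scratch/"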
Development
-----------
.. code-block:: bash
git clone git@bitbucket.org:dkistdc/dkist-processing-visp.git
cd dkist-processing-visp
pre-commit install
pip install -e .[test]
pytest -v --cov dkist_processing_visp
Build
--------
Artifacts are built through Bitbucket Pipelines.
The pipeline can be used in other repos with a modification of the package and artifact locations
to use the names relevant to the target repo.
e.g. dkist-processing-test -> dkist-processing-vbi and dkist_processing_test -> dkist_processing_vbi
Deployment
----------
Deployment is done with `turtlebot <https://bitbucket.org/dkistdc/turtlebot/src/main/>`_ and follows
the process detailed in `dkist-processing-core <https://pypi.org/project/dkist-processing-core/>`_
Additionally, when a new release is ready to be built the following steps need to be taken:
1. Freezing Dependencies
#########################
A new "frozen" extra is generated by the `dkist-dev-tools <https://bitbucket.org/dkistdc/dkist-dev-tools/src/main/>`_
package. If you don't have ``dkist-dev-tools`` installed please follow the directions from that repo.
To freeze dependencies run
.. code-block:: bash
ddt freeze vX.Y.Z[rcK]
Where "vX.Y.Z[rcK]" is the version about to be released.
2. Changelog
############
When you make **any** change to this repository it **MUST** be accompanied by a changelog file.
The changelog for this repository uses the `towncrier <https://github.com/twisted/towncrier>`__ package.
Entries in the changelog for the next release are added as individual files (one per change) to the ``changelog/`` directory.
Writing a Changelog Entry
^^^^^^^^^^^^^^^^^^^^^^^^^
A changelog entry accompanying a change should be added to the ``changelog/`` directory.
The name of a file in this directory follows a specific template::
<PULL REQUEST NUMBER>.<TYPE>[.<COUNTER>].rst
The fields have the following meanings:
* ``<PULL REQUEST NUMBER>``: This is the number of the pull request, so people can jump from the changelog entry to the diff on BitBucket.
* ``<TYPE>``: This is the type of the change and must be one of the values described below.
* ``<COUNTER>``: This is an optional field, if you make more than one change of the same type you can append a counter to the subsequent changes, i.e. ``100.bugfix.rst`` and ``100.bugfix.1.rst`` for two bugfix changes in the same PR.
The list of possible types is defined in the towncrier section of ``pyproject.toml``, the types are:
* ``feature``: This change is a new code feature.
* ``bugfix``: This is a change which fixes a bug.
* ``doc``: A documentation change.
* ``removal``: A deprecation or removal of public API.
* ``misc``: Any small change which doesn't fit anywhere else, such as a change to the package infrastructure.
Rendering the Changelog at Release Time
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
When you are about to tag a release first you must run ``towncrier`` to render the changelog.
The steps for this are as follows:
* Run ``towncrier build --version vx.y.z`` using the version number you want to tag.
* Agree to have towncrier remove the fragments.
* Add and commit your changes.
* Tag the release.
**NOTE:** If you forget to add a Changelog entry to a tagged release (either manually or automatically with ``towncrier``)
then the Bitbucket pipeline will fail. To be able to use the same tag you must delete it locally and on the remote branch:
.. code-block:: bash
# First, actually update the CHANGELOG and commit the update
git commit
# Delete tags
git tag -d vWHATEVER.THE.VERSION
git push --delete origin vWHATEVER.THE.VERSION
# Re-tag with the same version
git tag vWHATEVER.THE.VERSION
git push --tags origin main
Science Changelog
^^^^^^^^^^^^^^^^^
Whenever a release involves changes to the scientific quality of L1 data, additional changelog fragment(s) should be
created. These fragments are intended to be as verbose as is needed to accurately capture the scope of the change(s),
so feel free to use all the fancy RST you want. Science fragments are placed in the same ``changelog/`` directory
as other fragments, but are always called::
<PR NUMBER | +>.science[.<COUNTER>].rst
In the case that a single pull request encapsulates the entirety of the scientific change then the first field should
be that PR number (same as the normal CHANGELOG). If, however, there is not a simple mapping from a single PR to a scientific
change then use the character "+" instead; this will create a changelog entry with no associated PR. For example:
.. code-block:: bash
$ ls changelog/
99.bugfix.rst # This is a normal changelog fragment associated with a bugfix in PR 99
99.science.rst # Apparently that bugfix also changed the scientific results, so that PR also gets a science fragment
+.science.rst # This fragment is not associated with a PR
When it comes time to build the SCIENCE_CHANGELOG, use the ``science_towncrier.sh`` script in this repo to do so.
This script accepts all the same arguments as the default ``towncrier``. For example:
.. code-block:: bash
./science_towncrier.sh build --version vx.y.z
This will update the SCIENCE_CHANGELOG and remove any science fragments from the changelog directory.
3. Tag and Push
###############
Once all commits are in place add a git tag that will define the released version, then push the tags up to Bitbucket:
.. code-block:: bash
git tag vX.Y.Z[rcK]
git push --tags origin BRANCH
In the case of an rc, BRANCH will likely be your development branch. For full releases BRANCH should be "main".
.. |codecov| image:: https://codecov.io/bb/dkistdc/dkist-processing-visp/graph/badge.svg?token=SREPBJDS31
:target: https://codecov.io/bb/dkistdc/dkist-processing-visp
| text/x-rst | null | NSO / AURA <dkistdc@nso.edu> | null | null | BSD-3-Clause | null | [
"Programming Language :: Python",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.13"
] | [] | null | null | >=3.13 | [] | [] | [] | [
"dkist-processing-common==12.6.2",
"dkist-processing-math==2.2.1",
"dkist-processing-pac==3.1.1",
"dkist-header-validator==5.3.0",
"dkist-fits-specifications==4.21.0",
"solar-wavelength-calibration==2.0.1",
"dkist-service-configuration==4.2.0",
"dkist-spectral-lines==3.0.0",
"astropy==7.0.2",
"num... | [] | [] | [] | [
"Homepage, https://nso.edu/dkist/data-center/",
"Repository, https://bitbucket.org/dkistdc/dkist-processing-visp",
"Documentation, https://docs.dkist.nso.edu/projects/visp",
"Help, https://nso.atlassian.net/servicedesk/customer/portal/5"
] | twine/6.2.0 CPython/3.13.12 | 2026-02-19T20:08:34.532630 | dkist_processing_visp-5.3.14.tar.gz | 520,071 | 36/65/0668551d035dd8d168e81ea759a5ec515004c5834a41c2123a5b3cbbabe4/dkist_processing_visp-5.3.14.tar.gz | source | sdist | null | false | a8c638aeb844efbe220694c14ca68909 | 2838133eca8bf02fddf160176907b50f222647488b90c4f4174d825d3fd7ce3c | 36650668551d035dd8d168e81ea759a5ec515004c5834a41c2123a5b3cbbabe4 | null | [] | 423 |
2.4 | pinecone | 8.1.0 | Pinecone client and SDK | # Pinecone Python SDK
 [](https://github.com/pinecone-io/pinecone-python-client/actions/workflows/on-merge.yaml) [](https://pypi.org/project/pinecone/) [](https://www.python.org/downloads/)
The official Pinecone Python SDK for building vector search applications with AI/ML.
Pinecone is a vector database that makes it easy to add vector search to production applications. Use Pinecone to store, search, and manage high-dimensional vectors for applications like semantic search, recommendation systems, and RAG (Retrieval-Augmented Generation).
## Features
- **Vector Operations**: Store, query, and manage high-dimensional vectors with metadata filtering
- **Serverless & Pod Indexes**: Choose between serverless (auto-scaling) or pod-based (dedicated) indexes
- **Integrated Inference**: Built-in embedding and reranking models for end-to-end search workflows
- **Async Support**: Full asyncio support with `PineconeAsyncio` for modern Python applications
- **GRPC Support**: Optional GRPC transport for improved performance
- **Type Safety**: Full type hints and type checking support
## Table of Contents
- [Documentation](#documentation)
- [Prerequisites](#prerequisites)
- [Installation](#installation)
- [Quickstart](#quickstart)
- [Bringing your own vectors](#bringing-your-own-vectors-to-pinecone)
- [Bring your own data using Pinecone integrated inference](#bring-your-own-data-using-pinecone-integrated-inference)
- [Pinecone Assistant](#pinecone-assistant)
- [More Information](#more-information-on-usage)
- [Issues & Bugs](#issues--bugs)
- [Contributing](#contributing)
## Documentation
- [**Conceptual docs and guides**](https://docs.pinecone.io)
- [**Python Reference Documentation**](https://sdk.pinecone.io/python/index.html)
### Upgrading the SDK
> [!NOTE]
> The official SDK package was renamed from `pinecone-client` to `pinecone` beginning in version `5.1.0`.
> Please remove `pinecone-client` from your project dependencies and add `pinecone` instead to get
> the latest updates.
For notes on changes between major versions, see [Upgrading](./docs/upgrading.md)
## Prerequisites
- The Pinecone Python SDK requires Python 3.10 or greater. It has been tested with CPython versions from 3.10 to 3.13.
- Before you can use the Pinecone SDK, you must sign up for an account and find your API key in the Pinecone console dashboard at [https://app.pinecone.io](https://app.pinecone.io).
## Installation
The Pinecone Python SDK is distributed on PyPI using the package name `pinecone`. The base installation includes everything you need to get started with vector operations, but you can install optional extras to unlock additional functionality.
**Base installation includes:**
- Core Pinecone client (`Pinecone`)
- Vector operations (upsert, query, fetch, delete)
- Index management (create, list, describe, delete)
- Metadata filtering
- Pinecone Assistant plugin
**Optional extras:**
- `pinecone[asyncio]` - Adds `aiohttp` dependency and enables `PineconeAsyncio` for async/await support. Use this if you're building applications with FastAPI, aiohttp, or other async frameworks (see the sketch after this list).
- `pinecone[grpc]` - Adds `grpcio` and related libraries for GRPC transport. Provides modest performance improvements for data operations like `upsert` and `query`. See the guide on [tuning performance](https://docs.pinecone.io/docs/performance-tuning).
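If you've installed the `asyncio` extra, the async client mirrors the synchronous API. A minimal sketch (the `list_indexes` call is just to illustrate the async context-manager usage):

```python
import asyncio
from pinecone import PineconeAsyncio

async def main():
    # Reads PINECONE_API_KEY from the environment if no key is passed
    async with PineconeAsyncio(api_key="YOUR_API_KEY") as pc:
        indexes = await pc.list_indexes()
        print(indexes)

asyncio.run(main())
```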
**Configuration:** The SDK can read your API key from the `PINECONE_API_KEY` environment variable, or you can pass it directly when instantiating the client.
#### Installing with pip
```shell
# Install the latest version
pip3 install pinecone
# Install the latest version, with optional dependencies
pip3 install "pinecone[asyncio,grpc]"
```
#### Installing with uv
[uv](https://docs.astral.sh/uv/) is a modern package manager that runs 10-100x faster than pip and supports most pip syntax.
```shell
# Install the latest version
uv add pinecone
# Install the latest version, with optional dependencies
uv add "pinecone[asyncio,grpc]"
```
#### Installing with [poetry](https://python-poetry.org/)
```shell
# Install the latest version
poetry add pinecone
# Install the latest version, with optional dependencies
poetry add pinecone --extras asyncio --extras grpc
```
## Quickstart
### Bringing your own vectors to Pinecone
This example shows how to create an index, add vectors with embeddings you've generated, and query them. This approach gives you full control over your embedding model and vector generation process.
```python
from pinecone import (
Pinecone,
ServerlessSpec,
CloudProvider,
AwsRegion,
VectorType
)
# 1. Instantiate the Pinecone client
# Option A: Pass API key directly
pc = Pinecone(api_key='YOUR_API_KEY')
# Option B: Use environment variable (PINECONE_API_KEY)
# pc = Pinecone()
# 2. Create an index
index_config = pc.create_index(
name="index-name",
dimension=1536,
spec=ServerlessSpec(
cloud=CloudProvider.AWS,
region=AwsRegion.US_EAST_1
),
vector_type=VectorType.DENSE
)
# 3. Instantiate an Index client
idx = pc.Index(host=index_config.host)
# 4. Upsert embeddings
idx.upsert(
vectors=[
("id1", [0.1, 0.2, 0.3, 0.4, ...], {"metadata_key": "value1"}),
("id2", [0.2, 0.3, 0.4, 0.5, ...], {"metadata_key": "value2"}),
],
namespace="example-namespace"
)
# 5. Query your index using an embedding
query_embedding = [...] # list should have length == index dimension
idx.query(
vector=query_embedding,
top_k=10,
include_metadata=True,
filter={"metadata_key": { "$eq": "value1" }}
)
```
### Bring your own data using Pinecone integrated inference
This example demonstrates using Pinecone's integrated inference capabilities. You provide raw text data, and Pinecone handles embedding generation and optional reranking automatically. This is ideal when you want to focus on your data and let Pinecone handle the ML complexity.
```python
from pinecone import (
Pinecone,
CloudProvider,
AwsRegion,
EmbedModel,
IndexEmbed,
)
# 1. Instantiate the Pinecone client
# The API key can be passed directly or read from PINECONE_API_KEY environment variable
pc = Pinecone(api_key='YOUR_API_KEY')
# 2. Create an index configured for use with a particular embedding model
# This sets up the index with the right dimensions and configuration for your chosen model
index_config = pc.create_index_for_model(
name="my-model-index",
cloud=CloudProvider.AWS,
region=AwsRegion.US_EAST_1,
embed=IndexEmbed(
model=EmbedModel.Multilingual_E5_Large,
field_map={"text": "my_text_field"}
)
)
# 3. Instantiate an Index client for data operations
idx = pc.Index(host=index_config.host)
# 4. Upsert records with raw text data
# Pinecone will automatically generate embeddings using the configured model
idx.upsert_records(
namespace="my-namespace",
records=[
{
"_id": "test1",
"my_text_field": "Apple is a popular fruit known for its sweetness and crisp texture.",
},
{
"_id": "test2",
"my_text_field": "The tech company Apple is known for its innovative products like the iPhone.",
},
{
"_id": "test3",
"my_text_field": "Many people enjoy eating apples as a healthy snack.",
},
{
"_id": "test4",
"my_text_field": "Apple Inc. has revolutionized the tech industry with its sleek designs and user-friendly interfaces.",
},
{
"_id": "test5",
"my_text_field": "An apple a day keeps the doctor away, as the saying goes.",
},
{
"_id": "test6",
"my_text_field": "Apple Computer Company was founded on April 1, 1976, by Steve Jobs, Steve Wozniak, and Ronald Wayne as a partnership.",
},
],
)
# 5. Search for similar records using text queries
# Pinecone handles embedding the query and optionally reranking results
from pinecone import SearchQuery, SearchRerank, RerankModel
response = idx.search_records(
namespace="my-namespace",
query=SearchQuery(
inputs={
"text": "Apple corporation",
},
top_k=3
),
rerank=SearchRerank(
model=RerankModel.Bge_Reranker_V2_M3,
rank_fields=["my_text_field"],
top_n=3,
),
)
```
## Pinecone Assistant
### Installing the Pinecone Assistant Python plugin
The `pinecone-plugin-assistant` package is now bundled by default when installing `pinecone`. It does not need to be installed separately in order to use Pinecone Assistant.
For more information on Pinecone Assistant, see the [Pinecone Assistant documentation](https://docs.pinecone.io/guides/assistant/overview).
## More information on usage
Detailed information on specific ways of using the SDK are covered in these guides:
**Index Management:**
- [Serverless Indexes](./docs/db_control/serverless-indexes.md) - Learn about auto-scaling serverless indexes that scale automatically with your workload
- [Pod Indexes](./docs/db_control/pod-indexes.md) - Understand dedicated pod-based indexes for consistent performance
**Data Operations:**
- [Working with vectors](./docs/db_data/index-usage-byov.md) - Comprehensive guide to storing, querying, and managing vectors with metadata filtering
**Advanced Features:**
- [Inference API](./docs/inference-api.md) - Use Pinecone's integrated embedding and reranking models
- [FAQ](./docs/faq.md) - Common questions and troubleshooting tips
# Issues & Bugs
If you notice bugs or have feedback, please [file an issue](https://github.com/pinecone-io/pinecone-python-client/issues).
You can also get help in the [Pinecone Community Forum](https://community.pinecone.io/).
# Contributing
If you'd like to make a contribution, or get setup locally to develop the Pinecone Python SDK, please see our [contributing guide](https://github.com/pinecone-io/pinecone-python-client/blob/main/CONTRIBUTING.md)
| text/markdown | null | "Pinecone Systems, Inc." <support@pinecone.io> | null | null | Apache-2.0 | Pinecone, cloud, database, vector | [
"Development Status :: 5 - Production/Stable",
"Intended Audience :: Developers",
"Intended Audience :: Information Technology",
"Intended Audience :: Science/Research",
"Intended Audience :: System Administrators",
"License :: OSI Approved :: Apache Software License",
"Operating System :: OS Independen... | [] | null | null | >=3.10 | [] | [] | [] | [
"certifi>=2019.11.17",
"orjson>=3.0.0",
"pinecone-plugin-assistant<4.0.0,>=3.0.1",
"pinecone-plugin-interface<0.1.0,>=0.0.7",
"python-dateutil>=2.5.3",
"typing-extensions>=3.7.4",
"urllib3>=1.26.0; python_version < \"3.12\"",
"urllib3>=1.26.5; python_version >= \"3.12\"",
"aiohttp-retry<3.0.0,>=2.9.... | [] | [] | [] | [
"Homepage, https://www.pinecone.io",
"Documentation, https://pinecone.io/docs"
] | uv/0.10.4 {"installer":{"name":"uv","version":"0.10.4","subcommand":["publish"]},"python":null,"implementation":{"name":null,"version":null},"distro":{"name":"Ubuntu","version":"24.04","id":"noble","libc":null},"system":{"name":null,"release":null},"cpu":null,"openssl_version":null,"setuptools_version":null,"rustc_version":null,"ci":true} | 2026-02-19T20:08:32.999481 | pinecone-8.1.0.tar.gz | 1,041,965 | e2/e4/8303133de5b3850c85d56caf9cc23cc38c74942bb8a940890b225245d7df/pinecone-8.1.0.tar.gz | source | sdist | null | false | 787f98321a5238ad958eb3e7a53d1da2 | 48a00843fb232ccfd57eba618f0c0294e918b030e1bc7e853fb88d04f80ba569 | e2e48303133de5b3850c85d56caf9cc23cc38c74942bb8a940890b225245d7df | null | [
"LICENSE.txt"
] | 194,444 |
2.4 | judgeval | 0.28.0 | The open source post-building layer for Agent Behavior Monitoring. | <div align="center">
<a href="https://judgmentlabs.ai/">
<picture>
<source media="(prefers-color-scheme: dark)" srcset="assets/logo_darkmode.svg">
<img src="assets/logo_lightmode.svg" alt="Judgment Logo" width="400" />
</picture>
</a>
<br>
## Agent Behavior Monitoring (ABM)
Track and judge any agent behavior in online and offline setups. Set up Sentry-style alerts and analyze agent behaviors / topic patterns at scale!
[](https://docs.judgmentlabs.ai/documentation)
[](https://app.judgmentlabs.ai/register)
[](https://docs.judgmentlabs.ai/documentation/self-hosting/get-started)
[](https://x.com/JudgmentLabs)
[](https://www.linkedin.com/company/judgmentlabs)
</div>
## [NEW] 🎆 Agent Reinforcement Learning
Train your agents with multi-turn reinforcement learning using judgeval and [Fireworks AI](https://fireworks.ai/)! Judgeval's ABM now integrates with Fireworks' Reinforcement Fine-Tuning (RFT) endpoint, supporting gpt-oss, qwen3, Kimi2, DeepSeek, and more.
Judgeval's agent monitoring infra provides a simple harness for integrating GRPO into any Python agent, giving builders a quick method to **try RL with minimal code changes** to their existing agents!
```python
await trainer.train(
agent_function=your_agent_function, # entry point to your agent
scorers=[RewardScorer()], # Custom scorer you define based on task criteria, acts as reward
prompts=training_prompts # Tasks
)
```
**That's it!** Judgeval automatically manages trajectory collection and reward tagging - your agent can learn from production data with minimal code changes.
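For reference, the `RewardScorer` above is any custom scorer you define. A minimal hypothetical sketch, following the `ExampleScorer` pattern shown later in this README:

```python
from judgeval.data import Example
from judgeval.scorers.example_scorer import ExampleScorer

class RewardScorer(ExampleScorer):
    name: str = "Reward Scorer"

    async def a_score_example(self, example: Example):
        # Hypothetical task criterion: reward non-empty agent output
        if example.actual_output:
            self.reason = "Agent produced an answer"
            return 1.0
        self.reason = "Agent produced no answer"
        return 0.0
```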
👉 Check out the [Wikipedia Racer notebook](https://colab.research.google.com/github/JudgmentLabs/judgment-cookbook/blob/main/rl/WikiRacingAgent_RL.ipynb), where an agent learns to navigate Wikipedia using RL, to see Judgeval in action.
You can view and monitor training progress for free via the [Judgment Dashboard](https://app.judgmentlabs.ai/).
## Judgeval Overview
Judgeval is an open-source framework for agent behavior monitoring. Judgeval offers a toolkit to track and judge agent behavior in online and offline setups, enabling you to convert interaction data from production/test environments into improved agents. To get started, try running one of the notebooks below or dive deeper in our [docs](https://docs.judgmentlabs.ai/documentation).
Our mission is to unlock the power of production data for agent development, enabling teams to improve their apps by catching real-time failures and optimizing over their users' preferences.
## 📚 Cookbooks
| Try Out | Notebook | Description |
|:---------|:-----|:------------|
| RL | [Wikipedia Racer](https://colab.research.google.com/github/JudgmentLabs/judgment-cookbook/blob/main/rl/WikiRacingAgent_RL.ipynb) | Train agents with reinforcement learning |
| Online ABM | [Research Agent](https://colab.research.google.com/github/JudgmentLabs/judgment-cookbook/blob/main/monitoring/Research_Agent_Online_Monitoring.ipynb) | Monitor agent behavior in production |
| Custom Scorers | [HumanEval](https://colab.research.google.com/github/JudgmentLabs/judgment-cookbook/blob/main/custom_scorers/HumanEval_Custom_Scorer.ipynb) | Build custom evaluators for your agents |
| Offline Testing | [Get Started For Free] | Compare how different prompts, models, or agent configs affect performance across ANY metric |
You can access our [repo of cookbooks](https://github.com/JudgmentLabs/judgment-cookbook).
You can find a list of [video tutorials for Judgeval use cases](https://www.youtube.com/@Alexshander-JL).
## Why Judgeval?
🤖 **Simple to run multi-turn RL**: Optimize your agents with multi-turn RL without managing compute infrastructure or data pipelines. Just add a few lines of code to your existing agent code and train!
⚙️ **Custom Evaluators**: You aren't restricted to prefab scorers. Judgeval provides simple abstractions for custom Python scorers, supporting any LLM-as-a-judge rubrics/models and code-based scorers that integrate with our live agent-tracking infrastructure. [Learn more](https://docs.judgmentlabs.ai/documentation/evaluation/custom-scorers)
🚨 **Production Monitoring**: Run any custom scorer in a hosted, virtualized secure container to flag agent behaviors online in production. Get Slack alerts for failures and add custom hooks to address regressions before they impact users. [Learn more](https://docs.judgmentlabs.ai/documentation/performance/online-evals)
📊 **Behavior/Topic Grouping**: Group agent runs by behavior type or topic for deeper analysis. Drill down into subsets of users, agents, or use cases to reveal patterns of agent behavior.
<!-- Add link to Bucketing docs once we have it -->
<!--
TODO: Once we have trainer code docs, plug in here
-->
🧪 **Run experiments on your agents**: Compare different prompts, models, or agent configs across customer segments. Measure which changes improve agent performance and decrease bad agent behaviors.
<!--
Use this once we have AI PM features:
**Run experiments on your agents**: A/B test different prompts, models, or agent configs across customer segments. Measure which changes improve agent performance and decrease bad agent behaviors. [Learn more]
-->
## 🛠️ Quickstart
Get started with Judgeval by installing our SDK using pip:
```bash
pip install judgeval
```
Ensure you have your `JUDGMENT_API_KEY` and `JUDGMENT_ORG_ID` environment variables set to connect to the [Judgment Platform](https://app.judgmentlabs.ai/).
```bash
export JUDGMENT_API_KEY=...
export JUDGMENT_ORG_ID=...
```
**If you don't have keys, [create an account for free](https://app.judgmentlabs.ai/register) on the platform!**
### Start monitoring with Judgeval
```python
from judgeval.tracer import Tracer, wrap
from judgeval.data import Example
from judgeval.scorers import AnswerRelevancyScorer
from openai import OpenAI
judgment = Tracer(project_name="default_project")
client = wrap(OpenAI()) # tracks all LLM calls
@judgment.observe(span_type="tool")
def format_question(question: str) -> str:
# dummy tool
return f"Question : {question}"
@judgment.observe(span_type="function")
def run_agent(prompt: str) -> str:
task = format_question(prompt)
response = client.chat.completions.create(
model="gpt-5-mini",
messages=[{"role": "user", "content": task}]
)
judgment.async_evaluate( # trigger online monitoring
scorer=AnswerRelevancyScorer(threshold=0.5), # swap with any scorer
example=Example(input=task, actual_output=response), # customize to your data
model="gpt-5",
)
return response.choices[0].message.content
run_agent("What is the capital of the United States?")
```
Running this code will deliver monitoring results to your [free platform account](https://app.judgmentlabs.ai/register) and should look like this:

### Customizable Scorers Over Agent Behavior
Judgeval's strongest suit is full customization of the scorers you can run online monitoring with. You aren't limited to single-prompt LLM judges or prefab scorers: if you can express your scorer in Python code, judgeval can monitor it! Under the hood, judgeval hosts your scorer in a secure virtualized container, enabling online monitoring for any scorer.
First, create a behavior scorer in a file called `helpfulness_scorer.py`:
```python
from judgeval.data import Example
from judgeval.scorers.example_scorer import ExampleScorer
# Define custom example class
class QuestionAnswer(Example):
question: str
answer: str
# Define a server-hosted custom scorer
class HelpfulnessScorer(ExampleScorer):
name: str = "Helpfulness Scorer"
server_hosted: bool = True # Enable server hosting
async def a_score_example(self, example: QuestionAnswer):
# Custom scoring logic for agent behavior
# Can be an arbitrary combination of code and LLM calls
if len(example.answer) > 10 and "?" not in example.answer:
self.reason = "Answer is detailed and provides helpful information"
return 1.0
else:
self.reason = "Answer is too brief or unclear"
return 0.0
```
Then deploy your scorer to Judgment's infrastructure:
```bash
echo "pydantic" > requirements.txt
uv run judgeval upload_scorer helpfulness_scorer.py requirements.txt
```
Now you can instrument your agent with monitoring and online evaluation:
```python
from judgeval.tracer import Tracer, wrap
from helpfulness_scorer import HelpfulnessScorer, QuestionAnswer
from openai import OpenAI
judgment = Tracer(project_name="default_project")
client = wrap(OpenAI()) # tracks all LLM calls
@judgment.observe(span_type="tool")
def format_task(question: str) -> str: # replace with your prompt engineering
return f"Please answer the following question: {question}"
@judgment.observe(span_type="tool")
def answer_question(prompt: str) -> str: # replace with your LLM system calls
response = client.chat.completions.create(
model="gpt-5-mini",
messages=[{"role": "user", "content": prompt}]
)
return response.choices[0].message.content
@judgment.observe(span_type="function")
def run_agent(question: str) -> str:
task = format_task(question)
answer = answer_question(task)
# Add online evaluation with server-hosted scorer
judgment.async_evaluate(
scorer=HelpfulnessScorer(),
example=QuestionAnswer(question=question, answer=answer),
sampling_rate=0.9 # Evaluate 90% of agent runs
)
return answer
if __name__ == "__main__":
result = run_agent("What is the capital of the United States?")
print(result)
```
Congratulations! Your online eval result should look like this:

You can now run any online scorer in secure Firecracker microVMs with no latency impact on your applications.
---
Judgeval is created and maintained by [Judgment Labs](https://judgmentlabs.ai/).
| text/markdown | null | Andrew Li <andrew@judgmentlabs.ai>, Alex Shan <alex@judgmentlabs.ai>, Joseph Camyre <joseph@judgmentlabs.ai> | null | Judgment Labs <contact@judgmentlabs.ai> | null | null | [
"Operating System :: OS Independent",
"Programming Language :: Python :: 3"
] | [] | null | null | >=3.10 | [] | [] | [] | [
"dotenv>=0.9.9",
"httpx>=0.28.1",
"opentelemetry-exporter-otlp>=1.36.0",
"opentelemetry-sdk>=1.36.0",
"orjson>=3.9.0",
"typer>=0.9.0",
"boto3>=1.40.11; extra == \"s3\"",
"fireworks-ai>=0.19.18; extra == \"trainer\""
] | [] | [] | [] | [
"Homepage, https://github.com/JudgmentLabs/judgeval",
"Issues, https://github.com/JudgmentLabs/judgeval/issues"
] | twine/6.2.0 CPython/3.11.14 | 2026-02-19T20:07:43.711926 | judgeval-0.28.0.tar.gz | 23,171,677 | 46/a6/773fa8eb76d364597b786cf6d5efe703ccca540bb687e4b07e4c5793e82c/judgeval-0.28.0.tar.gz | source | sdist | null | false | cd0cf93f863693370550ce5d5bcc1fa2 | 610b48fa9fcd4d59b2de12752e6a7b3a3d7ac834820e50a6e38f48c36d581f1e | 46a6773fa8eb76d364597b786cf6d5efe703ccca540bb687e4b07e4c5793e82c | Apache-2.0 | [
"LICENSE.md"
] | 1,845 |
2.4 | stigmergy | 0.1.0 | Organizational awareness through stigmergic signal processing | # Stigmergy
Organizational awareness through stigmergic signal processing.
Stigmergy ingests work artifacts from GitHub, Linear, and Slack, routes them through an [ART](https://en.wikipedia.org/wiki/Adaptive_resonance_theory)-based mesh of self-organizing agents, and surfaces structural patterns — coordination gaps, knowledge silos, dependency risks — without anyone having to ask.
## How it works
1. **Ingest** signals from your tools (PRs, issues, commits, Slack threads)
2. **Route** each signal through a competitive mesh (stop-on-first-accept, like biological stigmergy)
3. **Correlate** across sources to find patterns no single tool reveals
4. **Surface** findings with PII/credential filtering and configurable delivery
The mesh uses Adaptive Resonance Theory for stable category formation in non-stationary environments: new patterns create new workers, familiar patterns reinforce existing ones, stale patterns decay. No retraining required.
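As a toy illustration of that vigilance-gated, stop-on-first-accept dynamic (illustrative only, in the spirit of fuzzy ART — this is not stigmergy's internal API, and all names here are hypothetical):

```python
import numpy as np

def route_signal(signal, workers, vigilance=0.75, lr=0.2):
    """Stop-on-first-accept routing: the first worker whose category
    matches the signal above the vigilance threshold takes it."""
    for w in workers:
        # Familiarity match: overlap between signal and category weights
        overlap = np.minimum(signal, w).sum() / max(signal.sum(), 1e-9)
        if overlap >= vigilance:
            # Match-based learning: weights update only on acceptance
            w[:] = (1 - lr) * w + lr * np.minimum(signal, w)
            return w
    # No resonance anywhere: an unfamiliar pattern creates a new worker
    workers.append(signal.copy())
    return workers[-1]
```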
## Quick start
```bash
# Clone and install
git clone https://github.com/jmcentire/stigmergy.git
cd stigmergy
make install
# Run with mock data (no API keys needed)
stigmergy run --once
# Run with live GitHub data
gh auth login
stigmergy run --once --live
```
## Requirements
- Python 3.12+
- [gh CLI](https://cli.github.com/) (for live GitHub data)
## Installation
### Development install (recommended)
```bash
python -m venv .venv
source .venv/bin/activate
pip install -e ".[dev]"
```
### With LLM support
```bash
pip install -e ".[cli]"
export ANTHROPIC_API_KEY=your-key
stigmergy config set llm.provider anthropic
```
Without the Anthropic integration, stigmergy uses deterministic heuristics for assessments. The LLM adds richer correlation but is entirely optional.
## Usage
```bash
# Interactive setup — configures sources, identity, constraints
stigmergy init
# Single pass over all configured sources
stigmergy run --once
# Single pass with live API sources
stigmergy run --once --live
# Continuous monitoring
stigmergy run --live
# Check source connectivity (no tokens spent)
stigmergy check
stigmergy check --slack
# View/modify configuration
stigmergy config show
stigmergy config set budget.daily_cap_usd 10.00
# View current mesh state
stigmergy status
```
## Live sources
| Source | Auth | What it ingests |
|--------|------|----------------|
| GitHub | `gh auth login` | PRs, issues, commits, reviews, comments |
| Linear | `LINEAR_API_KEY` env var | Issues, projects, cycles, comments |
| Slack | `SLACK_BOT_TOKEN` env var | Channel messages, threads, reactions |
## Identity resolution
Stigmergy unifies people across sources — the same person may appear as a GitHub handle, Slack display name, Linear UUID, or email address. Resolution runs automatically from configured providers and learns new aliases at runtime.
Identity data lives in `config/` (gitignored). To set up:
```bash
stigmergy init # walks through team setup interactively
```
Or manually create `config/team_roster.csv`:
```csv
Alice Wang,Alice Wang <alice@example.com>,@Alice Wang,alice212
Bob Kim,Bob Kim <bob.kim@example.com>,@Bob,bobkim
```
## Configuration
Config lives in `.stigmergy/config.yaml` (created by `stigmergy init`):
```yaml
sources:
github:
enabled: true
repos: ["org/repo1", "org/repo2"]
linear:
enabled: false
slack:
enabled: false
llm:
provider: stub # stub (free) or anthropic
budget:
daily_cap_usd: 5.00
hourly_cap_usd: 1.00
constraints:
path: config/constraints.yaml # PII/credential kill + redaction rules
```
## Constraint filtering
All output passes through a constraint engine before delivery. By default:
- **Kill** (null-route): SSNs, credit cards, emails, credentials, API keys, compensation data, HR actions
- **Redact** (mask): phone numbers, physical addresses
Rules are configurable in `config/constraints.yaml`.
## Architecture
```
signals (GitHub, Linear, Slack)
|
v
[ Ingestion ] --> normalized Signal objects
|
v
[ Mesh Router ] --> BFS competitive routing, stop-on-first-accept
|
v
[ Workers ] --> ART categories: familiarity match, weight update, fork/merge/decay
|
v
[ Correlator ] --> cross-signal pattern detection
|
v
[ Constraint Filter ] --> PII kill / redact
|
v
[ Output ] --> findings, insights, structural metrics
```
Key design principles:
- **One pattern**: Workers, supervisors, and control layers are the same agent-context-signal pattern at different scales
- **Stop-on-first-accept**: BFS routing with Simon's satisficing — first worker above threshold takes the signal
- **Complement coding**: Full workers raise vigilance thresholds to prevent category monopoly
- **Match-based learning**: Weights update only on acceptance (ART stability guarantee)
- **Immutable signals**: Signals are frozen; derived state lives in contexts
## Testing
```bash
make test # run all tests
make test-v # verbose output
pytest -k "mesh" # run by keyword
```
## Budget
Default caps: $5/day, $1/hour. When the LLM budget is exhausted, stigmergy falls back to heuristic-only mode — it never stops running, just reduces assessment depth. Adjust caps:
```bash
stigmergy config set budget.daily_cap_usd 10.00
stigmergy config set budget.hourly_cap_usd 2.00
```
## Project structure
```
src/stigmergy/
adapters/ Source connectors (GitHub, Linear, Slack — mock + live)
attention/ Attention model, portfolio scoring, surfacing
cli/ CLI entry point, config, budget, live adapters
constraints/ Output filtering (PII/credential kill and redaction)
core/ Algorithms: familiarity, consensus, energy, lifecycle
delivery/ Output delivery framework
identity/ Person identity resolution across sources
mesh/ ART mesh: routing, workers, topology, insights, stability
pipeline/ Signal ingestion pipeline
policy/ Policy engine, spectral analysis, budget enforcement
primitives/ Data types: Signal, Context, Agent, Assessment
services/ LLM, embedding, vector store, token budget
structures/ Bloom filters, LSH, SimHash, ring buffers, tries
unity/ Field equations, eigenmonitor, PID control
tracing/ Execution tracing
```
## Theoretical foundations
Stigmergy implements ideas from:
- **Adaptive Resonance Theory** (Grossberg/Carpenter) — stable category formation with vigilance-gated plasticity
- **Stigmergy** (Grassé, Theraulaz) — coordination through shared environment rather than direct communication
- **Crawford-Sobel** (1982) — information degradation under strategic communication; babbling equilibrium at bias >= 1/4
- **Beer's Viable System Model** — System 4 intelligence function, algedonic signals
- **Spectral graph analysis** — anomaly detection via Laplacian eigenvalue distribution
For the full theoretical treatment, see: [Ambient Structure Discovery via Stigmergic Mesh](https://github.com/jmcentire) (paper forthcoming).
## License
[MIT](LICENSE)
| text/markdown | null | Jeremy McEntire <j.andrew.mcentire@gmail.com> | null | null | null | null | [
"Development Status :: 3 - Alpha",
"Intended Audience :: Developers",
"License :: OSI Approved :: MIT License",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: 3.13",
"Programming Language :: Python :: 3.14",
"Topic :: Software Develo... | [] | null | null | >=3.12 | [] | [] | [] | [
"numpy>=1.26",
"pydantic>=2.0",
"pyyaml>=6.0",
"anthropic>=0.40; extra == \"cli\"",
"pytest-asyncio>=0.23; extra == \"dev\"",
"pytest>=8.0; extra == \"dev\""
] | [] | [] | [] | [
"Homepage, https://github.com/jmcentire/stigmergy",
"Repository, https://github.com/jmcentire/stigmergy",
"Issues, https://github.com/jmcentire/stigmergy/issues"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-19T20:07:01.895266 | stigmergy-0.1.0.tar.gz | 461,218 | 95/76/96d9098f83db4d66aff6dd46328a737a8cc844e527d33f31306265cca41e/stigmergy-0.1.0.tar.gz | source | sdist | null | false | a3d42c434d8f23e9813c578930315755 | f5acbabfdbc8be964898d403bdb5e60fec36386ed612cc59ab9ccbc0e1385102 | 957696d9098f83db4d66aff6dd46328a737a8cc844e527d33f31306265cca41e | MIT | [
"LICENSE"
] | 200 |
2.4 | hindsight-crewai | 0.4.13 | CrewAI memory integration via Hindsight - persistent memory for AI agent crews | # hindsight-crewai
Persistent memory for AI agent crews via Hindsight. Give your CrewAI crews long-term memory with fact extraction, entity tracking, and temporal awareness.
## Features
- **Drop-in Storage Backend** - Implements CrewAI's `Storage` interface for `ExternalMemory`
- **Automatic Memory Flow** - CrewAI automatically stores task outputs and retrieves relevant memories
- **Per-Agent Banks** - Optionally give each agent its own isolated memory bank
- **Reflect Tool** - Agents can explicitly reason over memories with disposition-aware synthesis
- **Simple Configuration** - Configure once, use everywhere
## Installation
```bash
pip install hindsight-crewai
```
## Quick Start
```python
from hindsight_crewai import configure, HindsightStorage
from crewai.memory.external.external_memory import ExternalMemory
from crewai import Agent, Crew, Task
# Step 1: Configure connection
configure(hindsight_api_url="http://localhost:8888")
# Step 2: Create crew with Hindsight-backed memory
crew = Crew(
agents=[
Agent(role="Researcher", goal="Find information", backstory="..."),
Agent(role="Writer", goal="Write reports", backstory="..."),
],
tasks=[
Task(description="Research AI trends", expected_output="Report"),
],
external_memory=ExternalMemory(
storage=HindsightStorage(bank_id="my-crew")
),
)
crew.kickoff()
```
That's it. CrewAI will automatically:
- **Query memories** at the start of each task
- **Store task outputs** to Hindsight after each task completes
Memories persist across crew runs, so your crew learns over time.
## Per-Agent Memory Banks
Give each agent its own isolated memory bank:
```python
storage = HindsightStorage(
bank_id="my-crew",
per_agent_banks=True, # Researcher -> "my-crew-researcher", Writer -> "my-crew-writer"
)
```
Or use a custom bank resolver for full control:
```python
storage = HindsightStorage(
bank_id="my-crew",
bank_resolver=lambda base, agent: f"{base}-{agent.lower()}" if agent else base,
)
```
## Reflect Tool
CrewAI's storage interface only supports save/search/reset. To give agents access to Hindsight's `reflect` (disposition-aware memory synthesis), add it as a tool:
```python
from hindsight_crewai import HindsightReflectTool
reflect_tool = HindsightReflectTool(
bank_id="my-crew",
budget="mid",
reflect_context="You are helping a software team track decisions.",
)
agent = Agent(
role="Analyst",
goal="Analyze project history",
backstory="...",
tools=[reflect_tool],
)
```
When the agent calls this tool, it gets a synthesized, contextual answer based on all relevant memories — not just raw facts.
## Bank Missions
Set a mission to guide how Hindsight processes and organizes memories:
```python
storage = HindsightStorage(
bank_id="my-crew",
mission="Track software architecture decisions, technical debt, and team preferences.",
)
```
## Configuration
### Global Configuration
```python
from hindsight_crewai import configure
configure(
hindsight_api_url="http://localhost:8888", # Default: production API
api_key="your-api-key", # Or set HINDSIGHT_API_KEY env var
budget="mid", # Recall budget: low/mid/high
max_tokens=4096, # Max tokens for recall results
tags=["env:prod"], # Tags for stored memories
recall_tags=["scope:global"], # Tags to filter recall
recall_tags_match="any", # Tag match mode: any/all/any_strict/all_strict
verbose=True, # Enable logging
)
```
### Per-Storage Overrides
Constructor arguments override global configuration:
```python
storage = HindsightStorage(
bank_id="my-crew",
budget="high", # Override global budget
max_tokens=8192, # Override global max_tokens
tags=["team:alpha"], # Override global tags
)
```
## Examples
See the [CrewAI memory example](https://github.com/vectorize-io/hindsight-cookbook/tree/main/applications/crewai-memory) in the Hindsight Cookbook for a complete working demo with a Researcher + Writer crew.
## Configuration Reference
| Parameter | Default | Description |
|---|---|---|
| `hindsight_api_url` | Production API | Hindsight API URL |
| `api_key` | `HINDSIGHT_API_KEY` env | API key for authentication |
| `budget` | `"mid"` | Recall budget level (low/mid/high) |
| `max_tokens` | `4096` | Maximum tokens for recall results |
| `tags` | `None` | Tags applied when storing memories |
| `recall_tags` | `None` | Tags to filter when searching |
| `recall_tags_match` | `"any"` | Tag matching mode |
| `per_agent_banks` | `False` | Give each agent its own bank |
| `bank_resolver` | `None` | Custom (bank_id, agent) -> bank_id function |
| `mission` | `None` | Bank mission for memory organization |
| `verbose` | `False` | Enable verbose logging |
| text/markdown | null | Vectorize <support@vectorize.io> | null | null | MIT | agents, ai, crewai, hindsight, memory | [
"Development Status :: 4 - Beta",
"Intended Audience :: Developers",
"License :: OSI Approved :: MIT License",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Topic :: Scientific/Engin... | [] | null | null | >=3.10 | [] | [] | [] | [
"crewai>=0.86.0",
"hindsight-client>=0.4.0",
"pytest-mock>=3.10.0; extra == \"dev\"",
"pytest>=7.0.0; extra == \"dev\""
] | [] | [] | [] | [
"Homepage, https://github.com/vectorize-io/hindsight",
"Documentation, https://github.com/vectorize-io/hindsight/tree/main/hindsight-integrations/crewai",
"Repository, https://github.com/vectorize-io/hindsight"
] | uv/0.6.7 | 2026-02-19T20:06:55.895490 | hindsight_crewai-0.4.13.tar.gz | 249,797 | 8c/52/9d539600e19a29554492c856410af42fd1e7c7c347d481e176d6d672b28c/hindsight_crewai-0.4.13.tar.gz | source | sdist | null | false | 930f092ee69be5bd2f2d2495b81d6a22 | fe86bdce895aab3663a24dd44e15270c9e275ab89ba7a42f1db82e775f970de2 | 8c529d539600e19a29554492c856410af42fd1e7c7c347d481e176d6d672b28c | null | [] | 236 |
2.4 | spetro | 0.1.4 | Rough volatility models with automatic differentiation. | ## Spetro
[](https://pypi.org/project/spetro/)

[](docs/api.md)
[](https://pepy.tech/project/spetro)
[](https://github.com/aycsi/spetro/actions/workflows/publish.yml)
### Installation
```bash
pip install spetro
```
| text/markdown | aycsi | null | null | null | MIT | null | [
"Development Status :: 4 - Beta",
"Intended Audience :: Science/Research",
"Intended Audience :: Developers",
"License :: OSI Approved :: MIT License",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.9",
"Programming Language :: Python :: 3.10",
"Programming Language :: Pyt... | [] | null | null | >=3.9 | [] | [] | [] | [
"numpy>=1.21.0",
"scipy>=1.7.0",
"jax>=0.4.0; extra == \"jax\"",
"jaxlib>=0.4.0; extra == \"jax\"",
"optax>=0.1.0; extra == \"jax\"",
"flax>=0.7.0; extra == \"jax\"",
"chex>=0.1.0; extra == \"jax\"",
"torch>=2.0.0; extra == \"torch\"",
"jax>=0.4.0; extra == \"all\"",
"jaxlib>=0.4.0; extra == \"all... | [] | [] | [] | [
"Repository, https://github.com/aycsi/spetro"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-19T20:03:21.517708 | spetro-0.1.4.tar.gz | 18,498 | b2/fb/34cf279f999d915806d496a66eae66c1ed9cb3aa4598b35b9d598554db36/spetro-0.1.4.tar.gz | source | sdist | null | false | c56f38ab72d9b15afa82997e828353c4 | 5eddd1d45a26e487985a0e3288e34bb7ed11ad771e8a8ce96ff6b7f42f442f47 | b2fb34cf279f999d915806d496a66eae66c1ed9cb3aa4598b35b9d598554db36 | null | [
"LICENSE"
] | 206 |
2.4 | koi-net | 1.3.0b9 | Implementation of KOI-net protocol in Python | # KOI-net
*This specification is the result of several iterations of KOI research, [read more here](https://github.com/BlockScience/koi).*
### Jump to Sections:
- [Protocol](#protocol)
- [Introduction](#introduction)
- [Communication Methods](#communication-methods)
- [Quickstart](#quickstart)
- [Setup](#setup)
- [Creating a Node](#creating-a-node)
- [Knowledge Processing](#knowledge-processing)
- [Try It Out!](#try-it-out)
- [Advanced](#advanced)
- [Knowledge Processing Pipeline](#knowledge-processing-pipeline)
- [Knowledge Handlers](#knowledge-handlers)
- [RID Handler](#rid-handler)
- [Manifest Handler](#manifest-handler)
- [Bundle Handler](#bundle-handler)
- [Network Handler](#network-handler)
- [Final Handler](#final-handler)
- [Registering Handlers](#registering-handlers)
- [Default Behavior](#default-behavior)
- [Implementation Reference](#implementation-reference)
- [Node Interface](#node-interface)
- [Node Identity](#node-identity)
- [Network Interface](#network-interface)
- [Network Graph](#network-graph)
- [Request Handler](#request-handler)
- [Response Handler](#response-handler)
- [Processor Interface](#processor-interface)
- [Development](#development)
- [Setup](#setup-1)
- [Distribution](#distribution)
# Protocol
## Introduction
*This project builds upon and uses the [RID protocol](https://github.com/BlockScience/rid-lib) to identify and coordinate around knowledge objects.*
This protocol defines the standard communication patterns and coordination norms needed to establish and maintain Knowledge Organization Infrastructure (KOI) networks. KOI-nets are heterogenous compositions of KOI nodes, each of which is capable of autonomously inputting, processing, and outputting knowledge. The behavior of each node and configuration of each network can vary greatly, thus the protocol is designed to be a simple and flexible but interoperable foundation for future projects to build on. The protocol only governs communication between nodes, not how they operate internally. As a result we consider KOI-nets to be fractal-like, in that a network of nodes may act like a single node from an outside perspective.
Generated OpenAPI documentation is provided in this repository, and can be [viewed interactively with Swagger](https://generator.swagger.io/?url=https://raw.githubusercontent.com/BlockScience/koi-net/refs/heads/main/koi-net-protocol-openapi.json).
## Communication Methods
There are two classes of communication methods, event and state communication.
- Event communication is one way, a node send an event to another node.
- State communication is two way, a node asks another node for RIDs, manifests, or bundles and receives a response containing the requested resource (if available).
There are also two types of nodes, full and partial nodes.
- Full nodes are web servers, implementing the endpoints defined in the KOI-net protocol. They are capable of receiving events via webhooks (another node calls their endpoint) and serving state queries. They can also call the endpoints of other full nodes to broadcast events or retrieve state.
- Partial nodes are web clients and don't implement any API endpoints. They are capable of receiving events via polling (asking another node for events). They can also call the endpoints of full nodes to broadcast events or retrieve state.
There are five endpoints defined by the API spec. The first two are for event communication with full and partial nodes respectively. The remaining three are for state communication with full nodes. As a result, partial nodes are unable to directly transfer state and may only output events to other nodes.
- Broadcast events - `/events/broadcast`
- Poll events - `/events/poll`
- Fetch bundles - `/bundles/fetch`
- Fetch manifests - `/manifests/fetch`
- Fetch RIDs - `/rids/fetch`
All endpoints are called via POST request with a JSON body, and will receive a response containing a JSON payload (with the exception of broadcast events, which won't return anything). The JSON schemas can be found in the attached OpenAPI specification or the Pydantic models in the "protocol" module.
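For instance, a partial node polling a full node for events might look like this over raw HTTP (a sketch; the URL and node RID are hypothetical placeholders, and the field names follow the Pydantic models in the protocol module):

```python
import httpx

# Ask a full node for any events queued for this (hypothetical) node RID
resp = httpx.post(
    "http://127.0.0.1:8000/koi-net/events/poll",
    json={"rid": "orn:koi-net.node:my-partial-node+<uuid>"},
)
events = resp.json()["events"]
```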
The request and payload JSON objects are composed of the fundamental "knowledge types" from the RID / KOI-net system: RIDs, manifests, bundles, and events. RIDs, manifests, and bundles are defined by the RID protocol and imported from rid-lib, which you can [read about here](https://github.com/BlockScience/rid-lib). Events are now part of the KOI-net protocol, and are defined as an RID and an event type with an optional manifest and contents.
```json
{
"rid": "...",
"event_type": "NEW | UPDATE | FORGET",
"manifest": {
"rid": "...",
"timestamp": "...",
"sha256_hash": "...",
},
"contents": {}
}
```
An event is a signalling construct that conveys information about RID objects between networked nodes. Events are composed of an RID, manifest, or bundle with an event type attached. Event types can be one of `"FORGET"`, `"UPDATE"`, or `"NEW"` forming the "FUN" acronym.
As opposed to CRUD (create, read, update, delete), events are a series of messages, not operations. Each node has its own autonomy in deciding how to react based on the message it receives. For example, a processor node may receive a `"NEW"` event for an RID object it's not interested in, and ignore it. Or it may decide that an `"UPDATE"` event should trigger fetching a bundle from another node. A node emits an event to indicate that its internal state has changed:
- `"NEW"` - indicates an previously unknown RID was cached
- `"UPDATE"` - indicates a previously known RID was cached
- `"FORGET"` - indicates a previously known RID was deleted
Nodes may broadcast events to other nodes to indicate their internal state changed. Conversely, nodes may also listen to events from other nodes and as a result decide to change their internal state, take some other action, or do nothing.
# Quickstart
## Setup
The bulk of the code in this repo is taken up by the Python reference implementation, which can be used in other projects to easily set up and configure your own KOI-net node.
This package can be installed with pip:
```shell
pip install koi-net
```
## Creating a Node
*Check out the `examples/` folder to follow along!*
All of the KOI-net functionality comes from the `NodeInterface` class which provides methods to interact with the protocol API, a local RID cache, a view of the network, and an internal processing pipeline. To create a new node, you will need to give it a name and a profile. The name will be used to generate its unique node RID, and the profile stores basic configuration data which will be shared with the other nodes that you communicate with.
Your first decision will be whether to setup a partial or full node:
- Partial nodes only need to indicate their type, and optionally the RID types of events they provide.
- Full nodes need to indicate their type, the base URL for their KOI-net API, and optionally the RID types of events and state they provide.
Nodes are configured using the provided `NodeConfig` class. Defaults can be set as shown below, and will automatically load from and save to YAML files. See the `koi_net.config` module for more info.
### Partial Node
```python
from pydantic import Field
from koi_net import NodeInterface
from koi_net.protocol.node import NodeProfile, NodeType
from koi_net.config import NodeConfig, KoiNetConfig
class PartialNodeConfig(NodeConfig):
    koi_net: KoiNetConfig | None = Field(default_factory = lambda:
        KoiNetConfig(
            node_name="basic_partial",
            node_profile=NodeProfile(
                node_type=NodeType.PARTIAL
            ),
            cache_directory_path=".basic_partial_rid_cache",
            event_queues_path="basic_partial_event_queues.json",
            first_contact="http://127.0.0.1:8000/koi-net"
        )
    )

node = NodeInterface(
    config=PartialNodeConfig.load_from_yaml("basic_partial_config.yaml")
)
```
### Full Node
```python
from pydantic import Field
from koi_net import NodeInterface
from koi_net.protocol.node import NodeProfile, NodeProvides, NodeType
from koi_net.config import NodeConfig, KoiNetConfig
class CoordinatorNodeConfig(NodeConfig):
koi_net: KoiNetConfig | None = Field(default_factory = lambda:
KoiNetConfig(
node_name="coordinator",
node_profile=NodeProfile(
node_type=NodeType.FULL,
provides=NodeProvides(
event=[KoiNetNode, KoiNetEdge],
state=[KoiNetNode, KoiNetEdge]
)
),
cache_directory_path=".coordinator_rid_cache",
event_queues_path="coordinator_event_queues.json"
)
)
node = NodeInterface(
config=CoordinatorNodeConfig.load_from_yaml("coordinator_config.yaml"),
use_kobj_processor_thread=True
)
```
When creating a node, you can optionally enable `use_kobj_processor_thread`, which will run the knowledge processing pipeline on a separate thread. This thread will automatically dequeue and process knowledge objects as they are added to the `kobj_queue`, which happens when you call `node.processor.handle(...)`. This is required to prevent race conditions in asynchronous applications like web servers, so it is recommended to enable this feature for all full nodes.
## Knowledge Processing
Next we'll set up the knowledge processing flow for our node. This is where most of the node's logic and behavior will come into play. For partial nodes this will be an event loop, and for full nodes we will use webhooks. Make sure to call `node.start()` and `node.stop()` at the beginning and end of your node's life cycle.
### Partial Node
Make sure to set `source=KnowledgeSource.External` when calling `handle` on external knowledge; this indicates to the knowledge processing pipeline that the incoming knowledge was received from another node. Where the knowledge is sourced from will impact decisions in the node's knowledge handlers.
```python
import time
from koi_net.processor.knowledge_object import KnowledgeSource
if __name__ == "__main__":
node.start()
try:
while True:
for event in node.network.poll_neighbors():
node.processor.handle(event=event, source=KnowledgeSource.External)
node.processor.flush_kobj_queue()
time.sleep(5)
finally:
node.stop()
```
### Full Node
Setting up a full node is slightly more complex as we'll need a web server. For this example, we'll use FastAPI and uvicorn. First we need to set up the "lifespan" of the server, to start and stop the node before and after execution, as well as the FastAPI app which will be our web server.
```python
from contextlib import asynccontextmanager
from fastapi import FastAPI
@asynccontextmanager
async def lifespan(app: FastAPI):
node.start()
yield
node.stop()
app = FastAPI(lifespan=lifespan, root_path="/koi-net")
```
Next we'll add our event handling webhook endpoint, which will allow other nodes to broadcast events to us. You'll notice that we have a similar loop to our partial node, but instead of polling periodically, we handle events asynchronously as we receive them from other nodes.
```python
from koi_net.protocol.api_models import *
from koi_net.protocol.consts import *
@app.post(BROADCAST_EVENTS_PATH)
def broadcast_events(req: EventsPayload):
for event in req.events:
node.processor.handle(event=event, source=KnowledgeSource.External)
```
Next we can add the event polling endpoint, this allows partial nodes to receive events from us.
```python
@app.post(POLL_EVENTS_PATH)
def poll_events(req: PollEvents) -> EventsPayload:
events = node.network.flush_poll_queue(req.rid)
return EventsPayload(events=events)
```
Now for the state transfer "fetch" endpoints:
```python
@app.post(FETCH_RIDS_PATH)
def fetch_rids(req: FetchRids) -> RidsPayload:
return node.network.response_handler.fetch_rids(req)
@app.post(FETCH_MANIFESTS_PATH)
def fetch_manifests(req: FetchManifests) -> ManifestsPayload:
return node.network.response_handler.fetch_manifests(req)
@app.post(FETCH_BUNDLES_PATH)
def fetch_bundles(req: FetchBundles) -> BundlesPayload:
return node.network.response_handler.fetch_bundles(req)
```
Finally we can run the server!
```python
import uvicorn
if __name__ == "__main__":
# update this path to the Python module that defines "app"
uvicorn.run("examples.full_node_template:app", port=8000)
```
*Note: If your node is not the first node in the network, you'll also want to set up a "first contact" in the `NodeInterface`. This is the URL of another full node that can be used to make your first connection and find out about other nodes in the network.*
## Try It Out!
In addition to the partial and full node templates, there are also example implementations that showcase a coordinator + partial node setup. You can run both of them locally after cloning this repository. First, install the koi-net library with the optional examples requirements from the root directory in the repo:
```shell
pip install .[examples]
```
Then you can start each node in a separate terminal:
```shell
python -m examples.basic_coordinator_node
```
```shell
python -m examples.basic_partial_node
```
# Advanced
## Knowledge Processing Pipeline
Beyond the `NodeInterface` setup and boilerplate for partial/full nodes, node behavior is mostly controlled through the use of knowledge handlers. Effectively creating your own handlers relies on a solid understanding of the knowledge processing pipeline, so we'll start with that. As a developer, you will interface with the pipeline through the `ProcessorInterface` accessed with `node.processor`. The pipeline handles knowledge objects, instances of the `KnowledgeObject` class, a container for all knowledge types in the RID / KOI-net ecosystem:
- RIDs
- Manifests
- Bundles
- Events
Here is the class definition for a knowledge object:
```python
type KnowledgeEventType = EventType | None
class KnowledgeSource(StrEnum):
Internal = "INTERNAL"
External = "EXTERNAL"
class KnowledgeObject(BaseModel):
rid: RID
manifest: Manifest | None = None
contents: dict | None = None
event_type: KnowledgeEventType = None
source: KnowledgeSource
normalized_event_type: KnowledgeEventType = None
network_targets: set[KoiNetNode] = set()
```
In addition to the fields required to represent the knowledge types (`rid`, `manifest`, `contents`, `event_type`), knowledge objects also include a `source` field, indicating whether the knowledge originated from within the node (`KnowledgeSource.Internal`) or from another node (`KnowledgeSource.External`).
The final two fields are not inputs, but are set by handlers as the knowledge object moves through the processing pipeline. The normalized event type indicates the event type normalized to the perspective of the node's cache, and the network targets indicate where the resulting event should be broadcasted to.
Knowledge objects enter the processing pipeline through the `node.processor.handle(...)` method. Using kwargs you can pass any of the knowledge types listed above, a knowledge source, and an optional `event_type` (for non-event knowledge types). The handle function will simply normalize the provided knowledge type into a knowledge object and put it in the `kobj_queue`, an internal, thread-safe queue of knowledge objects. If you have enabled `use_kobj_processor_thread` then the queue will be automatically processed on the processor thread; otherwise, you will need to regularly call `flush_kobj_queue` to process queued knowledge objects (as in the partial node example). Both methods will process knowledge objects sequentially, in the order they were queued (FIFO).
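For example, queuing an internally produced bundle for processing might look like this (`my_bundle` is a hypothetical rid-lib bundle, and the `bundle=` kwarg is assumed to mirror the knowledge type names listed above):

```python
from koi_net.processor.knowledge_object import KnowledgeSource

# normalize the bundle into a KnowledgeObject and enqueue it
node.processor.handle(bundle=my_bundle, source=KnowledgeSource.Internal)

# only needed when `use_kobj_processor_thread` is disabled:
# process queued knowledge objects in FIFO order on this thread
node.processor.flush_kobj_queue()
```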
## Knowledge Handlers
Processing happens through five distinct phases, corresponding to the handler types: `RID`, `Manifest`, `Bundle`, `Network`, and `Final`. Each handler type can be understood by describing (1) what knowledge object fields are available to the handler, and (2) what action takes place after this phase, which the handler can influence. As knowledge objects pass through the pipeline, fields may be added or updated.
Handlers are registered in a single handler array within the processor. There is no limit to the number of handlers in use, and multiple handlers can be assigned to the same handler type. At each phase of knowledge processing, we will chain together all of the handlers of the corresponding type and run them in their array order. The order handlers are registered in matters!
Each handler is passed a knowledge object and can return one of three things: `None`, a `KnowledgeObject`, or `STOP_CHAIN`. Returning `None` will pass the unmodified knowledge object (the same one the handler received) to the next handler in the chain. If a handler modifies its knowledge object, it should return it to pass the new version to the next handler. Finally, a handler can return `STOP_CHAIN` to immediately stop processing the knowledge object. No further handlers will be called and it will not enter the next phase of processing.
Summary of processing pipeline:
```
RID -> Manifest -> Bundle -> [cache action] -> Network -> [network action] -> Final
          |
          (skipped if event type is "FORGET")
```
### RID Handler
The knowledge object passed to handlers of this type is guaranteed to have an RID and knowledge source field. This handler type acts as a filter: if none of the handlers return `STOP_CHAIN`, the pipeline will progress to the next phase. The pipeline diverges slightly after this handler chain, based on the event type of the knowledge object.
If the event type is `"NEW"`, `"UPDATE"`, or `None` and the manifest is not already in the knowledge object, the node will attempt to retrieve it from (1) the local cache if the source is internal, or (2) another node if the source is external. If it fails to retrieve the manifest, the pipeline will end. Next, the manifest handler chain will be called.
If the event type is `"FORGET"`, and the bundle (manifest + contents) is not already in the knowledge object, the node will attempt to retrieve it from the local cache, regardless of the source. In this case the knowledge object represents what we will delete from the cache, not new incoming knowledge. If it fails to retrieve the bundle, the pipeline will end. Next, the bundle handler chain will be called.
### Manifest Handler
The knowledge object passed to handlers of this type is guaranteed to have an RID, manifest, and knowledge source field. This handler type acts as a filter: if none of the handlers return `STOP_CHAIN`, the pipeline will progress to the next phase.
If the bundle (manifest + contents) is not already in the knowledge object, the node will attempt to retrieve it from (1) the local cache if the source is internal, or (2) another node if the source is external. If it fails to retrieve the bundle, the pipeline will end. Next, the bundle handler chain will be called.
### Bundle Handler
The knowledge object passed to handlers of this type is guaranteed to have an RID, manifest, bundle (manifest + contents), and knowledge source field. This handler type acts as a decider. In this phase, the knowledge object's normalized event type must be set to `"NEW"` or `"UPDATE"` to write it to the cache, or `"FORGET"` to delete it from the cache. If the normalized event type remains unset (`None`), or a handler returns `STOP_CHAIN`, then the pipeline will end without taking any cache action.
The cache action will take place after the handler chain ends, so if multiple handlers set a normalized event type, the final handler will take precedence.
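As an illustration of the decider pattern, the sketch below marks every incoming bundle for a cache write. The `normalized_event_type` attribute name is an assumption inferred from the prose above; check the library for the actual field name:

```python
@KnowledgeHandler.create(HandlerType.Bundle)
def accept_all_bundles(processor: ProcessorInterface, kobj: KnowledgeObject):
    # Without some handler setting a normalized event type, the pipeline
    # would end here with no cache action taken.
    kobj.normalized_event_type = EventType.NEW  # attribute name assumed
    return kobj
```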
### Network Handler
The knowledge objects passed to handlers of this type are guaranteed to have an RID, manifest, bundle (manifest + contents), normalized event type, and knowledge source field. This handler type acts as a decider. In this phase, handlers decide which nodes to broadcast this knowledge object to by appending KOI-net node RIDs to the knowledge object's `network_targets` field. If a handler returns `STOP_CHAIN`, the pipeline will end without taking any network action.
The network action will take place after the handler chain ends. The node will attempt to broadcast a "normalized event", created from the knowledge object's RID, bundle, and normalized event type, to all of the nodes in the network targets array.
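A sketch of a network handler follows. Here `HandlerType.Network` is assumed by analogy with the other handler types, and `get_neighbor_rids` is a hypothetical helper standing in for however your node discovers peers; only the `network_targets` field is named in the prose above:

```python
@KnowledgeHandler.create(HandlerType.Network)
def broadcast_to_neighbors(processor: ProcessorInterface, kobj: KnowledgeObject):
    # Queue this knowledge object for broadcast to every known neighbor.
    for node_rid in get_neighbor_rids(processor):  # hypothetical helper
        kobj.network_targets.append(node_rid)
    return kobj
```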
### Final Handler
The knowledge objects passed to handlers of this type are guaranteed to have an RID, manifest, bundle (manifest + contents), normalized event type, and knowledge source field.
This is the final handler chain to be called; it doesn't make any decisions or filter for successive handler types. Handlers here can be useful if you want to take some action after the network broadcast has ended.
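For example, a final handler could emit a log line once processing has fully completed. In this sketch, `HandlerType.Final` and the `kobj.rid` attribute are assumptions based on the handler names and fields described above:

```python
@KnowledgeHandler.create(HandlerType.Final)
def log_completion(processor: ProcessorInterface, kobj: KnowledgeObject):
    # Runs after the network action; a good place for logging or metrics.
    print(f"finished processing {kobj.rid}")  # attribute name assumed
```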
## Registering Handlers
Knowledge handlers are registered with a node's processor by decorating a handler function. There are two decorator styles. The first converts the function into a handler object which can be manually added to a processor; this is how the default handlers are defined, and it makes them more portable (they could be imported from another package). The second automatically registers a handler with your node instance; this is not portable but more convenient. The decorated function will be passed the processor instance and a knowledge object.
```python
from .handler import KnowledgeHandler, HandlerType, STOP_CHAIN

# Style 1: create a portable handler object (added to a processor manually)
@KnowledgeHandler.create(HandlerType.RID)
def example_handler(processor: ProcessorInterface, kobj: KnowledgeObject):
    ...

# Style 2: register the handler directly with a node instance's processor
@node.processor.register_handler(HandlerType.RID)
def example_handler(processor: ProcessorInterface, kobj: KnowledgeObject):
    ...
```
While handlers only require specifying the handler type, you can also specify the RID types, knowledge source, or event types you want to handle. If a knowledge object doesn't match all of the specified parameters, the handler won't be called. By default, handlers will match all RID types, all event types, and both internally and externally sourced knowledge.
```python
@KnowledgeHandler.create(
handler_type=HandlerType.Bundle,
rid_types=[KoiNetEdge],
source=KnowledgeSource.External,
event_types=[EventType.NEW, EventType.UPDATE])
def edge_negotiation_handler(processor: ProcessorInterface, kobj: KnowledgeObject):
...
```
The processor instance passed to your function should be used to take any necessary node actions (cache, network, etc.). It is also sometimes useful to add new knowledge objects to the queue while processing a different knowledge object. You can simply call `processor.handle(...)` in the same way as you would outside of a handler. The new object will be put at the end of the queue and processed when it is dequeued, like any other knowledge object.
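For instance, a handler might derive a related knowledge object and queue it for its own pass through the pipeline. In this sketch, `derive_related_kobj` is a hypothetical function, and the keyword accepted by `processor.handle(...)` is an assumption, so consult the library for its actual signature:

```python
@node.processor.register_handler(HandlerType.Bundle)
def fan_out_handler(processor: ProcessorInterface, kobj: KnowledgeObject):
    related = derive_related_kobj(kobj)  # hypothetical derivation step
    processor.handle(kobj=related)       # queued and processed like any other object
    return None
```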
## Default Behavior
The default configuration provides four default handlers which will take precedence over any handlers you add yourself. To override this behavior, you can set the `handlers` field in the `NodeInterface`:
```python
from koi_net import NodeInterface
from koi_net.protocol.node import NodeProfile, NodeProvides, NodeType
from koi_net.config import NodeConfig
from koi_net.processor.default_handlers import (
basic_rid_handler,
basic_manifest_handler,
edge_negotiation_handler,
basic_network_output_filter
)
node = NodeInterface(
config=NodeConfig.load_from_yaml(),
handlers=[
basic_rid_handler,
basic_manifest_handler,
edge_negotiation_handler,
basic_network_output_filter
# include all or none of the default handlers
]
)
```
Take a look at `src/koi_net/processor/default_handlers.py` to see some more in-depth examples and to better understand the default node behavior.
# Development
## Setup
Clone this repository:
```console
git clone https://github.com/BlockScience/koi-net
```
Set up and activate virtual environment:
```shell
python -m venv venv
```
Windows:
```shell
.\venv\Scripts\activate
```
Linux:
```shell
source venv/bin/activate
```
Install koi-net with dev dependencies:
```shell
pip install -e .[dev]
```
## Distribution
*Be careful! All files not in `.gitignore` will be included in the distribution, even if they aren't tracked by git! Double-check the `.tar.gz` after building to make sure you didn't accidentally include other files.*
Build package:
```shell
python -m build
```
Push new package build to PyPI:
```shell
python -m twine upload --skip-existing dist/*
```
| text/markdown | null | Luke Miller <luke@block.science> | null | null | MIT License
Copyright (c) 2025 BlockScience
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.
| null | [] | [] | null | null | >=3.10 | [] | [] | [] | [
"rid-lib>=3.2.7",
"networkx>=3.4.2",
"httpx>=0.28.1",
"pydantic>=2.10.6",
"ruamel.yaml>=0.18.10",
"cryptography>=45.0.3",
"fastapi>=0.115.12",
"uvicorn>=0.34.2",
"rich>=14.1.0",
"structlog>=25.4.0",
"pydantic-settings>=2.12.0",
"jsonpointer>=3.0.0",
"sphinx; extra == \"docs\"",
"sphinx-aut... | [] | [] | [] | [
"Homepage, https://github.com/BlockScience/koi-net/"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-19T20:02:53.180225 | koi_net-1.3.0b9.tar.gz | 62,051 | 9d/99/e7eb5e53587b694164ae9ef7b33136b6242d488a549aa2e1195c8c63f962/koi_net-1.3.0b9.tar.gz | source | sdist | null | false | e2d0ab793c2b79e9ebfd52d496ddfe75 | cf3d5c244d8d8bf46f39fc38cf8eebfccaf93f9e7f7b904117cd6e2de651f156 | 9d99e7eb5e53587b694164ae9ef7b33136b6242d488a549aa2e1195c8c63f962 | null | [
"LICENSE"
] | 183 |
2.4 | rcsb.utils.chemref | 0.97 | RCSB Python Chemical Reference Data Utility Classes | # RCSB Python Chemical Reference Utility Classes
[](https://dev.azure.com/rcsb/RCSB%20PDB%20Python%20Projects/_build/latest?definitionId=6&branchName=master)
## Introduction
This module contains a collection of utility classes for accessing and packaging
PDB chemical reference data.
### Installation
Download the library source software from the project repository:
```bash
git clone --recurse-submodules https://github.com/rcsb/py-rcsb_utils_chemref.git
```
Optionally, run the test suite (Python versions > 3.9) using
[tox](http://tox.readthedocs.io/en/latest/example/platform.html):
```bash
tox
```
Installation is via the program [pip](https://pypi.python.org/pypi/pip).
```bash
pip install rcsb.utils.chemref
```

or install the local repository using:

```bash
pip install -e .
```
| text/markdown | null | John Westbrook <john.westbrook@rcsb.org> | null | Dennis Piehl <dennis.piehl@rcsb.org> | null | null | [
"Development Status :: 5 - Production/Stable",
"Intended Audience :: Developers",
"Natural Language :: English",
"Programming Language :: Python",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.13"
] | [] | null | null | >=3.9 | [] | [] | [] | [
"chembl-webresource-client>=0.10.2",
"mmcif>=1.0.0",
"networkx>=2.4",
"obonet>=0.2.5",
"rcsb-utils-config>=0.40",
"rcsb-utils-io>=1.50",
"black>=21.5b1; extra == \"tests\"",
"check-manifest; extra == \"tests\"",
"coverage; extra == \"tests\"",
"flake8; extra == \"tests\"",
"pylint; extra == \"te... | [] | [] | [] | [
"Homepage, https://github.com/rcsb/py-rcsb_utils_chemref"
] | twine/6.2.0 CPython/3.13.11 | 2026-02-19T20:02:47.704124 | rcsb_utils_chemref-0.97.tar.gz | 46,909 | d2/9f/a540703c90c8881004702bb4769064cb361da4e6024ce18e82a26bc2741a/rcsb_utils_chemref-0.97.tar.gz | source | sdist | null | false | b7fae7dd5ce5cea68939c05d6a1b29b8 | 59a96ce99f8f49ba50038dccdb26ed17f0a270fc2874d6333590fb4e6d25ab8e | d29fa540703c90c8881004702bb4769064cb361da4e6024ce18e82a26bc2741a | Apache-2.0 | [
"LICENSE"
] | 0 |
2.4 | dkist-processing-cryonirsp | 1.15.14 | Science processing code for the Cryo-NIRSP instrument on DKIST | dkist-processing-cryonirsp
==========================
|codecov|
Overview
--------
The dkist-processing-cryonirsp library contains the implementation of the cryonirsp pipelines as a collection of the
`dkist-processing-core <https://pypi.org/project/dkist-processing-core/>`_ framework and
`dkist-processing-common <https://pypi.org/project/dkist-processing-common/>`_ Tasks.
The recommended project structure is to separate tasks and workflows into separate packages. Having the workflows
in their own package facilitates using the build_utils to test the integrity of those workflows in unit tests.
Environment Variables
---------------------
.. list-table::
:widths: 10 90
:header-rows: 1
* - Variable
- Field Info
* - LOGURU_LEVEL
- annotation=str required=False default='INFO' alias_priority=2 validation_alias='LOGURU_LEVEL' description='Log level for the application'
* - MESH_CONFIG
- annotation=dict[str, MeshService] required=False default_factory=dict alias_priority=2 validation_alias='MESH_CONFIG' description='Service mesh configuration' examples=[{'upstream_service_name': {'mesh_address': 'localhost', 'mesh_port': 6742}}]
* - RETRY_CONFIG
- annotation=RetryConfig required=False default_factory=RetryConfig description='Retry configuration for the service'
* - OTEL_SERVICE_NAME
- annotation=str required=False default='unknown-service-name' alias_priority=2 validation_alias='OTEL_SERVICE_NAME' description='Service name for OpenTelemetry'
* - DKIST_SERVICE_VERSION
- annotation=str required=False default='unknown-service-version' alias_priority=2 validation_alias='DKIST_SERVICE_VERSION' description='Service version for OpenTelemetry'
* - NOMAD_ALLOC_ID
- annotation=str required=False default='unknown-allocation-id' alias_priority=2 validation_alias='NOMAD_ALLOC_ID' description='Nomad allocation ID for OpenTelemetry'
* - NOMAD_ALLOC_NAME
- annotation=str required=False default='unknown-allocation-name' alias='NOMAD_ALLOC_NAME' alias_priority=2 description='Allocation name for the deployed container the task is running on.'
* - NOMAD_GROUP_NAME
- annotation=str required=False default='unknown-allocation-group' alias='NOMAD_GROUP_NAME' alias_priority=2 description='Allocation group for the deployed container the task is running on'
* - OTEL_EXPORTER_OTLP_TRACES_INSECURE
- annotation=bool required=False default=True description='Use insecure connection for OTLP traces'
* - OTEL_EXPORTER_OTLP_METRICS_INSECURE
- annotation=bool required=False default=True description='Use insecure connection for OTLP metrics'
* - OTEL_EXPORTER_OTLP_TRACES_ENDPOINT
- annotation=Union[str, NoneType] required=False default=None description='OTLP traces endpoint. Overrides mesh configuration' examples=['localhost:4317']
* - OTEL_EXPORTER_OTLP_METRICS_ENDPOINT
- annotation=Union[str, NoneType] required=False default=None description='OTLP metrics endpoint. Overrides mesh configuration' examples=['localhost:4317']
* - OTEL_PYTHON_DISABLED_INSTRUMENTATIONS
- annotation=list[str] required=False default_factory=list description='List of instrumentations to disable. https://opentelemetry.io/docs/zero-code/python/configuration/' examples=[['pika', 'requests']]
* - OTEL_PYTHON_FASTAPI_EXCLUDED_URLS
- annotation=str required=False default='health' description='Comma separated list of URLs to exclude from OpenTelemetry instrumentation in FastAPI.' examples=['client/.*/info,healthcheck']
* - SYSTEM_METRIC_INSTRUMENTATION_CONFIG
- annotation=Union[dict[str, bool], NoneType] required=False default=None description='Configuration for system metric instrumentation. https://opentelemetry-python-contrib.readthedocs.io/en/latest/instrumentation/system_metrics/system_metrics.html' examples=[{'system.memory.usage': ['used', 'free', 'cached'], 'system.cpu.time': ['idle', 'user', 'system', 'irq'], 'system.network.io': ['transmit', 'receive'], 'process.runtime.memory': ['rss', 'vms'], 'process.runtime.cpu.time': ['user', 'system'], 'process.runtime.context_switches': ['involuntary', 'voluntary']}]
* - ISB_USERNAME
- annotation=str required=False default='guest' description='Username for the interservice-bus.'
* - ISB_PASSWORD
- annotation=str required=False default='guest' description='Password for the interservice-bus.'
* - ISB_EXCHANGE
- annotation=str required=False default='master.direct.x' description='Exchange for the interservice-bus.'
* - ISB_QUEUE_TYPE
- annotation=str required=False default='classic' description='Queue type for the interservice-bus.' examples=['quorum', 'classic']
* - BUILD_VERSION
- annotation=str required=False default='dev' description='Fallback build version for workflow tasks.'
* - MAX_FILE_DESCRIPTORS
- annotation=int required=False default=1024 description='Maximum number of file descriptors to allow the process.'
* - GQL_AUTH_TOKEN
- annotation=Union[str, NoneType] required=False default='dev' description='The auth token for the metadata-store-api.'
* - OBJECT_STORE_ACCESS_KEY
- annotation=Union[str, NoneType] required=False default=None description='The access key for the object store.'
* - OBJECT_STORE_SECRET_KEY
- annotation=Union[str, NoneType] required=False default=None description='The secret key for the object store.'
* - OBJECT_STORE_USE_SSL
- annotation=bool required=False default=False description='Whether to use SSL for the object store connection.'
* - MULTIPART_THRESHOLD
- annotation=Union[int, NoneType] required=False default=None description='Multipart threshold for the object store.'
* - S3_CLIENT_CONFIG
- annotation=Union[dict, NoneType] required=False default=None description='S3 client configuration for the object store.'
* - S3_UPLOAD_CONFIG
- annotation=Union[dict, NoneType] required=False default=None description='S3 upload configuration for the object store.'
* - S3_DOWNLOAD_CONFIG
- annotation=Union[dict, NoneType] required=False default=None description='S3 download configuration for the object store.'
* - GLOBUS_MAX_RETRIES
- annotation=int required=False default=5 description='Max retries for transient errors on calls to the globus api.'
* - GLOBUS_INBOUND_CLIENT_CREDENTIALS
- annotation=list[GlobusClientCredential] required=False default_factory=list description='Globus client credentials for inbound transfers.' examples=[[{'client_id': 'id1', 'client_secret': 'secret1'}, {'client_id': 'id2', 'client_secret': 'secret2'}]]
* - GLOBUS_OUTBOUND_CLIENT_CREDENTIALS
- annotation=list[GlobusClientCredential] required=False default_factory=list description='Globus client credentials for outbound transfers.' examples=[[{'client_id': 'id3', 'client_secret': 'secret3'}, {'client_id': 'id4', 'client_secret': 'secret4'}]]
* - OBJECT_STORE_ENDPOINT
- annotation=Union[str, NoneType] required=False default=None description='Object store Globus Endpoint ID.'
* - SCRATCH_ENDPOINT
- annotation=Union[str, NoneType] required=False default=None description='Scratch Globus Endpoint ID.'
* - SCRATCH_BASE_PATH
- annotation=str required=False default='scratch/' description='Base path for scratch storage.'
* - SCRATCH_INVENTORY_DB_COUNT
- annotation=int required=False default=16 description='Number of databases in the scratch inventory (redis).'
* - DOCS_BASE_URL
- annotation=str required=False default='my_test_url' description='Base URL for the documentation site.'
* - FTS_ATLAS_DATA_DIR
- annotation=Union[str, NoneType] required=False default=None description='Common cached directory for a downloaded FTS Atlas.'
Development
-----------
.. code-block:: bash
git clone git@bitbucket.org:dkistdc/dkist-processing-cryonirsp.git
cd dkist-processing-cryonirsp
pre-commit install
pip install -e .[test]
pytest -v --cov dkist_processing_cryonirsp
Build
-----
Artifacts are built through Bitbucket Pipelines.
The pipeline can be used in other repos with a modification of the package and artifact locations
to use the names relevant to the target repo.
e.g. dkist-processing-test -> dkist-processing-vbi and dkist_processing_test -> dkist_processing_vbi
Deployment
----------
Deployment is done with `turtlebot <https://bitbucket.org/dkistdc/turtlebot/src/main/>`_ and follows
the process detailed in `dkist-processing-core <https://pypi.org/project/dkist-processing-core/>`_
Additionally, when a new release is ready to be built the following steps need to be taken:
1. Freezing Dependencies
#########################
A new "frozen" extra is generated by the `dkist-dev-tools <https://bitbucket.org/dkistdc/dkist-dev-tools/src/main/>`_
package. If you don't have `dkist-dev-tools` installed please follow the directions from that repo.
To freeze dependencies run
.. code-block:: bash
ddt freeze vX.Y.Z[rcK]
Where "vX.Y.Z[rcK]" is the version about to be released.
2. Changelog
############
When you make **any** change to this repository it **MUST** be accompanied by a changelog file.
The changelog for this repository uses the `towncrier <https://github.com/twisted/towncrier>`__ package.
Entries in the changelog for the next release are added as individual files (one per change) to the ``changelog/`` directory.
Writing a Changelog Entry
^^^^^^^^^^^^^^^^^^^^^^^^^
A changelog entry accompanying a change should be added to the ``changelog/`` directory.
The name of a file in this directory follows a specific template::
<PULL REQUEST NUMBER>.<TYPE>[.<COUNTER>].rst
The fields have the following meanings:
* ``<PULL REQUEST NUMBER>``: This is the number of the pull request, so people can jump from the changelog entry to the diff on BitBucket.
* ``<TYPE>``: This is the type of the change and must be one of the values described below.
* ``<COUNTER>``: This is an optional field, if you make more than one change of the same type you can append a counter to the subsequent changes, i.e. ``100.bugfix.rst`` and ``100.bugfix.1.rst`` for two bugfix changes in the same PR.
The list of possible types is defined in the towncrier section of ``pyproject.toml``, the types are:
* ``feature``: This change is a new code feature.
* ``bugfix``: This is a change which fixes a bug.
* ``doc``: A documentation change.
* ``removal``: A deprecation or removal of public API.
* ``misc``: Any small change which doesn't fit anywhere else, such as a change to the package infrastructure.
Rendering the Changelog at Release Time
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
When you are about to tag a release first you must run ``towncrier`` to render the changelog.
The steps for this are as follows:
* Run ``towncrier build --version vx.y.z`` using the version number you want to tag.
* Agree to have towncrier remove the fragments.
* Add and commit your changes.
* Tag the release.
**NOTE:** If you forget to add a Changelog entry to a tagged release (either manually or automatically with ``towncrier``)
then the Bitbucket pipeline will fail. To be able to use the same tag you must delete it locally and on the remote branch:
.. code-block:: bash
# First, actually update the CHANGELOG and commit the update
git commit
# Delete tags
git tag -d vWHATEVER.THE.VERSION
git push --delete origin vWHATEVER.THE.VERSION
# Re-tag with the same version
git tag vWHATEVER.THE.VERSION
git push --tags origin main
Science Changelog
^^^^^^^^^^^^^^^^^
Whenever a release involves changes to the scientific quality of L1 data, additional changelog fragment(s) should be
created. These fragments are intended to be as verbose as is needed to accurately capture the scope of the change(s),
so feel free to use all the fancy RST you want. Science fragments are placed in the same ``changelog/`` directory
as other fragments, but are always called::
<PR NUMBER | +>.science[.<COUNTER>].rst
In the case that a single pull request encapsulates the entirety of the scientific change then the first field should
be that PR number (same as the normal CHANGELOG). If, however, there is not a simple mapping from a single PR to a scientific
change then use the character "+" instead; this will create a changelog entry with no associated PR. For example:
.. code-block:: bash
$ ls changelog/
99.bugfix.rst # This is a normal changelog fragment associated with a bugfix in PR 99
99.science.rst # Apparently that bugfix also changed the scientific results, so that PR also gets a science fragment
+.science.rst # This fragment is not associated with a PR
When it comes time to build the SCIENCE_CHANGELOG, use the ``science_towncrier.sh`` script in this repo to do so.
This script accepts all the same arguments as the default ``towncrier``. For example:
.. code-block:: bash
./science_towncrier.sh build --version vx.y.z
This will update the SCIENCE_CHANGELOG and remove any science fragments from the changelog directory.
3. Tag and Push
###############
Once all commits are in place add a git tag that will define the released version, then push the tags up to Bitbucket:
.. code-block:: bash
git tag vX.Y.Z[rcK]
git push --tags origin BRANCH
In the case of an rc, BRANCH will likely be your development branch. For full releases BRANCH should be "main".
.. |codecov| image:: https://codecov.io/bb/dkistdc/dkist-processing-cryonirsp/graph/badge.svg?token=TZBK64UKG5
:target: https://codecov.io/bb/dkistdc/dkist-processing-cryonirsp
| text/x-rst | null | NSO / AURA <dkistdc@nso.edu> | null | null | BSD-3-Clause | null | [
"Programming Language :: Python",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.13"
] | [] | null | null | >=3.13 | [] | [] | [] | [
"Pillow==10.4.0",
"astropy==7.0.2",
"dkist-fits-specifications==4.21.0",
"dkist-header-validator==5.3.0",
"dkist-processing-common==12.6.2",
"dkist-processing-math==2.2.1",
"dkist-processing-pac==3.1.1",
"dkist-spectral-lines==3.0.0",
"solar-wavelength-calibration==2.0.1",
"largestinteriorrectangl... | [] | [] | [] | [
"Homepage, https://nso.edu/dkist/data-center/",
"Repository, https://bitbucket.org/dkistdc/dkist-processing-cryonirsp/",
"Documentation, https://docs.dkist.nso.edu/projects/cryo-nirsp",
"Help, https://nso.atlassian.net/servicedesk/customer/portal/5"
] | twine/6.2.0 CPython/3.13.12 | 2026-02-19T20:02:31.742888 | dkist_processing_cryonirsp-1.15.14.tar.gz | 185,462 | fa/5e/c282ca97b826b4cd9d41863a22ee0eec8cd0d56b882e7afeecfdb65a81fa/dkist_processing_cryonirsp-1.15.14.tar.gz | source | sdist | null | false | 292c7ee7ce09196fab67f3064339af4e | eddaa634adccd55b4aeeecd2a9c03f67715728a561cb99e123d986a6bf0a1e57 | fa5ec282ca97b826b4cd9d41863a22ee0eec8cd0d56b882e7afeecfdb65a81fa | null | [] | 470 |
2.4 | datacontract-cli | 0.11.5 | The datacontract CLI is an open source command-line tool for working with Data Contracts. It uses data contract YAML files to lint the data contract, connect to data sources and execute schema and quality tests, detect breaking changes, and export to different formats. The tool is written in Python. It can be used as a standalone CLI tool, in a CI/CD pipeline, or directly as a Python library. | # Data Contract CLI
<p>
<a href="https://github.com/datacontract/datacontract-cli/actions/workflows/ci.yaml?query=branch%3Amain">
<img alt="Test Workflow" src="https://img.shields.io/github/actions/workflow/status/datacontract/datacontract-cli/ci.yaml?branch=main"></a>
<a href="https://github.com/datacontract/datacontract-cli">
<img alt="Stars" src="https://img.shields.io/github/stars/datacontract/datacontract-cli" /></a>
<a href="https://datacontract.com/slack" rel="nofollow"><img src="https://img.shields.io/badge/slack-join_chat-white.svg?logo=slack&style=social" alt="Slack Status" data-canonical-src="https://img.shields.io/badge/slack-join_chat-white.svg?logo=slack&style=social" style="max-width: 100%;"></a>
</p>
The `datacontract` CLI is an open-source command-line tool for working with [data contracts](https://datacontract.com).
It natively supports the [Open Data Contract Standard](https://bitol-io.github.io/open-data-contract-standard/latest/) to lint data contracts, connect to data sources and execute schema and quality tests, and export to different formats.
The tool is written in Python.
It can be used as a standalone CLI tool, in a CI/CD pipeline, or directly as a Python library.

## Getting started
Let's look at this data contract:
[https://datacontract.com/orders-v1.odcs.yaml](https://datacontract.com/orders-v1.odcs.yaml)
We have a _servers_ section with endpoint details for a Postgres database, a _schema_ for the structure and semantics of the data, and _service levels_ and _quality_ attributes that describe the expected freshness and number of rows.
This data contract contains all information to connect to the database and check that the actual data meets the defined schema specification and quality expectations.
We can use this information to test if the actual data product is compliant to the data contract.
Let's use [uv](https://docs.astral.sh/uv/) to install the CLI (or use the [Docker image](#docker)):
```bash
$ uv tool install --python python3.11 --upgrade 'datacontract-cli[all]'
```
Now, let's run the tests:
```bash
$ export DATACONTRACT_POSTGRES_USERNAME=datacontract_cli.egzhawjonpfweuutedfy
$ export DATACONTRACT_POSTGRES_PASSWORD=jio10JuQfDfl9JCCPdaCCpuZ1YO
$ datacontract test https://datacontract.com/orders-v1.odcs.yaml
# returns:
Testing https://datacontract.com/orders-v1.odcs.yaml
Server: production (type=postgres, host=aws-1-eu-central-2.pooler.supabase.com, port=6543, database=postgres, schema=dp_orders_v1)
╭────────┬──────────────────────────────────────────────────────────┬─────────────────────────┬─────────╮
│ Result │ Check │ Field │ Details │
├────────┼──────────────────────────────────────────────────────────┼─────────────────────────┼─────────┤
│ passed │ Check that field 'line_item_id' is present │ line_items.line_item_id │ │
│ passed │ Check that field line_item_id has type UUID │ line_items.line_item_id │ │
│ passed │ Check that field line_item_id has no missing values │ line_items.line_item_id │ │
│ passed │ Check that field 'order_id' is present │ line_items.order_id │ │
│ passed │ Check that field order_id has type UUID │ line_items.order_id │ │
│ passed │ Check that field 'price' is present │ line_items.price │ │
│ passed │ Check that field price has type INTEGER │ line_items.price │ │
│ passed │ Check that field price has no missing values │ line_items.price │ │
│ passed │ Check that field 'sku' is present │ line_items.sku │ │
│ passed │ Check that field sku has type TEXT │ line_items.sku │ │
│ passed │ Check that field sku has no missing values │ line_items.sku │ │
│ passed │ Check that field 'customer_id' is present │ orders.customer_id │ │
│ passed │ Check that field customer_id has type TEXT │ orders.customer_id │ │
│ passed │ Check that field customer_id has no missing values │ orders.customer_id │ │
│ passed │ Check that field 'order_id' is present │ orders.order_id │ │
│ passed │ Check that field order_id has type UUID │ orders.order_id │ │
│ passed │ Check that field order_id has no missing values │ orders.order_id │ │
│ passed │ Check that unique field order_id has no duplicate values │ orders.order_id │ │
│ passed │ Check that field 'order_status' is present │ orders.order_status │ │
│ passed │ Check that field order_status has type TEXT │ orders.order_status │ │
│ passed │ Check that field 'order_timestamp' is present │ orders.order_timestamp │ │
│ passed │ Check that field order_timestamp has type TIMESTAMPTZ │ orders.order_timestamp │ │
│ passed │ Check that field 'order_total' is present │ orders.order_total │ │
│ passed │ Check that field order_total has type INTEGER │ orders.order_total │ │
│ passed │ Check that field order_total has no missing values │ orders.order_total │ │
╰────────┴──────────────────────────────────────────────────────────┴─────────────────────────┴─────────╯
🟢 data contract is valid. Run 25 checks. Took 3.938887 seconds.
```
Voilà, the CLI tested that the YAML itself is valid, all records comply with the schema, and all quality attributes are met.
We can also use the data contract metadata to export in many [formats](#format), e.g., to generate a SQL DDL:
```bash
$ datacontract export --format sql https://datacontract.com/orders-v1.odcs.yaml
# returns:
-- Data Contract: orders
-- SQL Dialect: postgres
CREATE TABLE orders (
order_id None not null primary key,
customer_id text not null,
order_total integer not null,
order_timestamp None,
order_status text
);
CREATE TABLE line_items (
line_item_id None not null primary key,
sku text not null,
price integer not null,
order_id None
);
```
Or generate an HTML export:
```bash
$ datacontract export --format html --output orders-v1.odcs.html https://datacontract.com/orders-v1.odcs.yaml
```
[//]: # (which will create this [HTML export](https://datacontract.com/examples/orders-latest/datacontract.html).)
## Usage
```bash
# create a new data contract from example and write it to odcs.yaml
$ datacontract init odcs.yaml
# lint the odcs.yaml
$ datacontract lint odcs.yaml
# execute schema and quality checks (define credentials as environment variables)
$ datacontract test odcs.yaml
# export data contract as html (other formats: avro, dbt, dbt-sources, dbt-staging-sql, jsonschema, odcs, rdf, sql, sodacl, terraform, ...)
$ datacontract export --format html datacontract.yaml --output odcs.html
# import sql (other formats: avro, glue, bigquery, jsonschema, excel ...)
$ datacontract import --format sql --source my-ddl.sql --dialect postgres --output odcs.yaml
# import from Excel template
$ datacontract import --format excel --source odcs.xlsx --output odcs.yaml
# export to Excel template
$ datacontract export --format excel --output odcs.xlsx odcs.yaml
```
## Programmatic (Python)
```python
from datacontract.data_contract import DataContract
data_contract = DataContract(data_contract_file="odcs.yaml")
run = data_contract.test()
if not run.has_passed():
print("Data quality validation failed.")
# Abort pipeline, alert, or take corrective actions...
```
## How to
- [How to integrate Data Contract CLI in your CI/CD pipeline as a GitHub Action](https://github.com/datacontract/datacontract-action/)
- [How to run the Data Contract CLI API to test data contracts with POST requests](https://cli.datacontract.com/API)
- [How to run Data Contract CLI in a Databricks pipeline](https://www.datamesh-architecture.com/howto/build-a-dataproduct-with-databricks#test-the-data-product)
## Installation
Choose the most appropriate installation method for your needs:
### uv
The preferred way to install is [uv](https://docs.astral.sh/uv/):
```
uv tool install --python python3.11 --upgrade 'datacontract-cli[all]'
```
### uvx
If you have [uv](https://docs.astral.sh/uv/) installed, you can run datacontract-cli directly without installing:
```
uv run --with 'datacontract-cli[all]' datacontract --version
```
### pip
Python 3.10, 3.11, and 3.12 are supported. We recommend using Python 3.11.
```bash
python3 -m pip install 'datacontract-cli[all]'
datacontract --version
```
### pip with venv
Typically it is better to install the application in a virtual environment for your projects:
```bash
cd my-project
python3.11 -m venv venv
source venv/bin/activate
pip install 'datacontract-cli[all]'
datacontract --version
```
### pipx
pipx installs into an isolated environment.
```bash
pipx install 'datacontract-cli[all]'
datacontract --version
```
### Docker
You can also use our Docker image to run the CLI tool. It is also convenient for CI/CD pipelines.
```bash
docker pull datacontract/cli
docker run --rm -v ${PWD}:/home/datacontract datacontract/cli
```
You can create an alias for the Docker command to make it easier to use:
```bash
alias datacontract='docker run --rm -v "${PWD}:/home/datacontract" datacontract/cli:latest'
```
_Note:_ The output of Docker command line messages is limited to 80 columns and may include line breaks. Don't pipe docker output to files if you want to export code. Use the `--output` option instead.
## Optional Dependencies (Extras)
The CLI tool defines several optional dependencies (also known as extras) that can be installed for use with specific server types.
With _all_, all server dependencies are included.
```bash
uv tool install --python python3.11 --upgrade 'datacontract-cli[all]'
```
A list of available extras:
| Dependency | Installation Command |
|-------------------------|--------------------------------------------|
| Amazon Athena | `pip install datacontract-cli[athena]` |
| Avro Support | `pip install datacontract-cli[avro]` |
| Google BigQuery | `pip install datacontract-cli[bigquery]` |
| Databricks Integration | `pip install datacontract-cli[databricks]` |
| DuckDB (local/S3/GCS/Azure file testing) | `pip install datacontract-cli[duckdb]` |
| Iceberg | `pip install datacontract-cli[iceberg]` |
| Kafka Integration | `pip install datacontract-cli[kafka]` |
| PostgreSQL Integration | `pip install datacontract-cli[postgres]` |
| S3 Integration | `pip install datacontract-cli[s3]` |
| Snowflake Integration | `pip install datacontract-cli[snowflake]` |
| Microsoft SQL Server | `pip install datacontract-cli[sqlserver]` |
| Trino | `pip install datacontract-cli[trino]` |
| Impala | `pip install datacontract-cli[impala]` |
| dbt | `pip install datacontract-cli[dbt]` |
| DBML | `pip install datacontract-cli[dbml]` |
| Parquet | `pip install datacontract-cli[parquet]` |
| RDF | `pip install datacontract-cli[rdf]` |
| API (run as web server) | `pip install datacontract-cli[api]` |
| protobuf | `pip install datacontract-cli[protobuf]` |
## Documentation
Commands
- [init](#init)
- [lint](#lint)
- [test](#test)
- [export](#export)
- [import](#import)
- [catalog](#catalog)
- [publish](#publish)
- [api](#api)
### init
```
Usage: datacontract init [OPTIONS] [LOCATION]
Create an empty data contract.
╭─ Arguments ──────────────────────────────────────────────────────────────────────────────────────╮
│ location [LOCATION] The location of the data contract file to create. │
│ [default: datacontract.yaml] │
╰──────────────────────────────────────────────────────────────────────────────────────────────────╯
╭─ Options ────────────────────────────────────────────────────────────────────────────────────────╮
│ --template TEXT URL of a template or data contract [default: None] │
│ --overwrite --no-overwrite Replace the existing datacontract.yaml │
│ [default: no-overwrite] │
│ --debug --no-debug Enable debug logging [default: no-debug] │
│ --help Show this message and exit. │
╰──────────────────────────────────────────────────────────────────────────────────────────────────╯
```
### lint
```
Usage: datacontract lint [OPTIONS] [LOCATION]
Validate that the datacontract.yaml is correctly formatted.
╭─ Arguments ──────────────────────────────────────────────────────────────────────────────────────╮
│ location [LOCATION] The location (url or path) of the data contract yaml. │
│ [default: datacontract.yaml] │
╰──────────────────────────────────────────────────────────────────────────────────────────────────╯
╭─ Options ────────────────────────────────────────────────────────────────────────────────────────╮
│ --schema TEXT The location (url or path) of the ODCS JSON Schema │
│ [default: None] │
│ --output PATH Specify the file path where the test results should be │
│ written to (e.g., │
│ './test-results/TEST-datacontract.xml'). If no path is │
│ provided, the output will be printed to stdout. │
│ [default: None] │
│ --output-format [junit] The target format for the test results. │
│ [default: None] │
│ --debug --no-debug Enable debug logging [default: no-debug] │
│ --help Show this message and exit. │
╰──────────────────────────────────────────────────────────────────────────────────────────────────╯
```
### test
```
Usage: datacontract test [OPTIONS] [LOCATION]
Run schema and quality tests on configured servers.
╭─ Arguments ──────────────────────────────────────────────────────────────────────────────────────╮
│ location [LOCATION] The location (url or path) of the data contract yaml. │
│ [default: datacontract.yaml] │
╰──────────────────────────────────────────────────────────────────────────────────────────────────╯
╭─ Options ────────────────────────────────────────────────────────────────────────────────────────╮
│ --schema TEXT The location (url or path) of │
│ the ODCS JSON Schema │
│ [default: None] │
│ --server TEXT The server configuration to run │
│ the schema and quality tests. │
│ Use the key of the server object │
│ in the data contract yaml file │
│ to refer to a server, e.g., │
│ `production`, or `all` for all │
│ servers (default). │
│ [default: all] │
│ --publish-test-results --no-publish-test-results Deprecated. Use publish │
│ parameter. Publish the results │
│ after the test │
│ [default: │
│ no-publish-test-results] │
│ --publish TEXT The url to publish the results │
│ after the test. │
│ [default: None] │
│ --output PATH Specify the file path where the │
│ test results should be written │
│ to (e.g., │
│ './test-results/TEST-datacontra… │
│ [default: None] │
│ --output-format [junit] The target format for the test │
│ results. │
│ [default: None] │
│ --logs --no-logs Print logs [default: no-logs] │
│ --ssl-verification --no-ssl-verification SSL verification when publishing │
│ the data contract. │
│ [default: ssl-verification] │
│ --debug --no-debug Enable debug logging │
│ [default: no-debug] │
│ --help Show this message and exit. │
╰──────────────────────────────────────────────────────────────────────────────────────────────────╯
```
Data Contract CLI connects to a data source and runs schema and quality tests to verify that the data contract is valid.
```bash
$ datacontract test --server production datacontract.yaml
```
To connect to the databases, the `server` block in the datacontract.yaml is used to set up the connection.
In addition, credentials, such as usernames and passwords, are defined with environment variables.
The application uses different engines based on the server `type`.
Internally, it connects with DuckDB, Spark, or a native connection and executes most tests with _soda-core_ and _fastjsonschema_.
Supported server types:
- [s3](#S3)
- [athena](#athena)
- [bigquery](#bigquery)
- [azure](#azure)
- [sqlserver](#sqlserver)
- [oracle](#oracle)
- [databricks](#databricks)
- [databricks (programmatic)](#databricks-programmatic)
- [dataframe (programmatic)](#dataframe-programmatic)
- [snowflake](#snowflake)
- [kafka](#kafka)
- [postgres](#postgres)
- [trino](#trino)
- [impala](#impala)
- [api](#api)
- [local](#local)
Supported formats:
- parquet
- json
- csv
- delta
- iceberg (coming soon)
Feel free to create an [issue](https://github.com/datacontract/datacontract-cli/issues) if you need support for additional types or formats.
#### S3
Data Contract CLI can test data that is stored in S3 buckets or any S3-compliant endpoints in various formats.
- CSV
- JSON
- Delta
- Parquet
- Iceberg (coming soon)
##### Examples
###### JSON
datacontract.yaml
```yaml
servers:
production:
type: s3
endpointUrl: https://minio.example.com # not needed with AWS S3
location: s3://bucket-name/path/*/*.json
format: json
delimiter: new_line # new_line, array, or none
```
###### Delta Tables
datacontract.yaml
```yaml
servers:
production:
type: s3
endpointUrl: https://minio.example.com # not needed with AWS S3
location: s3://bucket-name/path/table.delta # path to the Delta table folder containing parquet data files and the _delta_log
format: delta
```
##### Environment Variables
| Environment Variable | Example | Description |
|-------------------------------------|---------------------------------|----------------------------------------|
| `DATACONTRACT_S3_REGION` | `eu-central-1` | Region of S3 bucket |
| `DATACONTRACT_S3_ACCESS_KEY_ID` | `AKIAXV5Q5QABCDEFGH` | AWS Access Key ID |
| `DATACONTRACT_S3_SECRET_ACCESS_KEY` | `93S7LRrJcqLaaaa/XXXXXXXXXXXXX` | AWS Secret Access Key |
| `DATACONTRACT_S3_SESSION_TOKEN` | `AQoDYXdzEJr...` | AWS temporary session token (optional) |
#### Athena
Data Contract CLI can test data in AWS Athena stored in S3.
Supports different file formats, such as Iceberg, Parquet, JSON, CSV...
##### Example
datacontract.yaml
```yaml
servers:
athena:
type: athena
catalog: awsdatacatalog # awsdatacatalog is the default setting
schema: icebergdemodb # in Athena, this is called "database"
regionName: eu-central-1
stagingDir: s3://my-bucket/athena-results/
models:
my_table: # corresponds to a table or view name
type: table
fields:
my_column_1: # corresponds to a column
type: string
config:
physicalType: varchar
```
##### Environment Variables
| Environment Variable | Example | Description |
|-------------------------------------|---------------------------------|----------------------------------------|
| `DATACONTRACT_S3_REGION` | `eu-central-1` | Region of Athena service |
| `DATACONTRACT_S3_ACCESS_KEY_ID` | `AKIAXV5Q5QABCDEFGH` | AWS Access Key ID |
| `DATACONTRACT_S3_SECRET_ACCESS_KEY` | `93S7LRrJcqLaaaa/XXXXXXXXXXXXX` | AWS Secret Access Key |
| `DATACONTRACT_S3_SESSION_TOKEN` | `AQoDYXdzEJr...` | AWS temporary session token (optional) |
#### Google Cloud Storage (GCS)
The [S3](#S3) integration also works with files on Google Cloud Storage through its [interoperability](https://cloud.google.com/storage/docs/interoperability).
Use `https://storage.googleapis.com` as the endpoint URL.
##### Example
datacontract.yaml
```yaml
servers:
production:
type: s3
endpointUrl: https://storage.googleapis.com
location: s3://bucket-name/path/*/*.json # use s3:// schema instead of gs://
format: json
delimiter: new_line # new_line, array, or none
```
##### Environment Variables
| Environment Variable | Example | Description |
|-------------------------------------|----------------|------------------------------------------------------------------------------------------|
| `DATACONTRACT_S3_ACCESS_KEY_ID` | `GOOG1EZZZ...` | The GCS [HMAC Key](https://cloud.google.com/storage/docs/authentication/hmackeys) Key ID |
| `DATACONTRACT_S3_SECRET_ACCESS_KEY` | `PDWWpb...` | The GCS [HMAC Key](https://cloud.google.com/storage/docs/authentication/hmackeys) Secret |
#### BigQuery
We support authentication to BigQuery using a Service Account Key or Application Default Credentials (ADC). ADC supports Workload Identity Federation (WIF), the GCE metadata server, and `gcloud auth application-default login`. The Service Account used should include the roles:
* BigQuery Job User
* BigQuery Data Viewer
When no `DATACONTRACT_BIGQUERY_ACCOUNT_INFO_JSON_PATH` is set, the CLI falls back to ADC/WIF automatically via Soda's `use_context_auth`.
##### Example
datacontract.yaml
```yaml
servers:
production:
type: bigquery
project: datameshexample-product
dataset: datacontract_cli_test_dataset
models:
datacontract_cli_test_table: # corresponds to a BigQuery table
type: table
fields: ...
```
##### Environment Variables
| Environment Variable | Example | Description |
|----------------------------------------------|---------------------------|---------------------------------------------------------|
| `DATACONTRACT_BIGQUERY_ACCOUNT_INFO_JSON_PATH` | `~/service-access-key.json` | Service Account key JSON file. If not set, ADC/WIF is used automatically. |
| `DATACONTRACT_BIGQUERY_IMPERSONATION_ACCOUNT` | `sa@project.iam.gserviceaccount.com` | Optional. Service account to impersonate. Works with both key file and ADC auth. |
#### Azure
Data Contract CLI can test data that is stored in Azure Blob storage or Azure Data Lake Storage (Gen2) (ADLS) in various formats.
##### Example
datacontract.yaml
```yaml
servers:
production:
type: azure
location: abfss://datameshdatabricksdemo.dfs.core.windows.net/inventory_events/*.parquet
format: parquet
```
##### Environment Variables
Authentication works with an Azure Service Principal (SPN) aka App Registration with a secret.
| Environment Variable | Example | Description |
|------------------------------------|----------------------------------------|------------------------------------------------------|
| `DATACONTRACT_AZURE_TENANT_ID` | `79f5b80f-10ff-40b9-9d1f-774b42d605fc` | The Azure Tenant ID |
| `DATACONTRACT_AZURE_CLIENT_ID` | `3cf7ce49-e2e9-4cbc-a922-4328d4a58622` | The ApplicationID / ClientID of the app registration |
| `DATACONTRACT_AZURE_CLIENT_SECRET` | `yZK8Q~GWO1MMXXXXXXXXXXXXX` | The Client Secret value |
#### Sqlserver
Data Contract CLI can test data in MS SQL Server (including Azure SQL, Synapse Analytics SQL Pool).
##### Example
datacontract.yaml
```yaml
servers:
production:
type: sqlserver
host: localhost
port: 5432
database: tempdb
schema: dbo
driver: ODBC Driver 18 for SQL Server
models:
my_table_1: # corresponds to a table
type: table
fields:
my_column_1: # corresponds to a column
type: varchar
```
##### Environment Variables
| Environment Variable | Example| Description |
|---------------------------------------------------|--------|----------------------------------------------|
| `DATACONTRACT_SQLSERVER_USERNAME` | `root` | Username |
| `DATACONTRACT_SQLSERVER_PASSWORD` | `toor` | Password |
| `DATACONTRACT_SQLSERVER_TRUSTED_CONNECTION` | `True` | Use Windows authentication instead of login |
| `DATACONTRACT_SQLSERVER_TRUST_SERVER_CERTIFICATE` | `True` | Trust self-signed certificate |
| `DATACONTRACT_SQLSERVER_ENCRYPTED_CONNECTION` | `True` | Use SSL |
| `DATACONTRACT_SQLSERVER_DRIVER` | `ODBC Driver 18 for SQL Server` | ODBC driver name |
#### Oracle
Data Contract CLI can test data in Oracle Database.
##### Example
datacontract.yaml
```yaml
servers:
oracle:
type: oracle
host: localhost
port: 1521
service_name: ORCL
schema: ADMIN
models:
my_table_1: # corresponds to a table
type: table
fields:
my_column_1: # corresponds to a column
type: decimal
description: Decimal number
my_column_2: # corresponds to another column
type: text
description: Unicode text string
config:
oracleType: NVARCHAR2 # optional: can be used to explicitly define the type used in the database
# if not set a default mapping will be used
```
##### Environment Variables
These environment variables specify the credentials used by the datacontract tool to connect to the database.
If you've started the database from a container, e.g. [oracle-free](https://hub.docker.com/r/gvenzl/oracle-free),
the credentials should match either `system` and what you specified as `ORACLE_PASSWORD` on the container, or
alternatively what you've specified under `APP_USER` and `APP_USER_PASSWORD`.
If you require thick mode to connect to the database, you need to have an Oracle Instant Client
installed on the system and specify the path to the installation within the environment variable
`DATACONTRACT_ORACLE_CLIENT_DIR`.
| Environment Variable | Example | Description |
|--------------------------------------------------|--------------------|--------------------------------------------|
| `DATACONTRACT_ORACLE_USERNAME` | `system` | Username |
| `DATACONTRACT_ORACLE_PASSWORD` | `0x162e53` | Password |
| `DATACONTRACT_ORACLE_CLIENT_DIR` | `C:\oracle\client` | Path to Oracle Instant Client installation |
#### Databricks
Works with Unity Catalog and Hive metastore.
Needs a running SQL warehouse or compute cluster.
##### Example
datacontract.yaml
```yaml
servers:
production:
type: databricks
catalog: acme_catalog_prod
schema: orders_latest
models:
orders: # corresponds to a table
type: table
fields: ...
```
##### Environment Variables
| Environment Variable | Example | Description |
|-------------------------------------------|--------------------------------------|-----------------------------------------------------------|
| `DATACONTRACT_DATABRICKS_TOKEN` | `dapia00000000000000000000000000000` | The personal access token to authenticate |
| `DATACONTRACT_DATABRICKS_HTTP_PATH` | `/sql/1.0/warehouses/b053a3ffffffff` | The HTTP path to the SQL warehouse or compute cluster |
| `DATACONTRACT_DATABRICKS_SERVER_HOSTNAME` | `dbc-abcdefgh-1234.cloud.databricks.com` | The host name of the SQL warehouse or compute cluster |
#### Databricks (programmatic)
Works with Unity Catalog and Hive metastore.
When running in a notebook or pipeline, the provided `spark` session can be used.
Additional authentication is not required.
Requires a Databricks Runtime with Python >= 3.10.
##### Example
datacontract.yaml
```yaml
servers:
production:
type: databricks
host: dbc-abcdefgh-1234.cloud.databricks.com # ignored, always use current host
catalog: acme_catalog_prod
schema: orders_latest
models:
orders: # corresponds to a table
type: table
fields: ...
```
##### Installing on Databricks Compute
**Important:** When using Databricks LTS ML runtimes (15.4, 16.4), installing via `%pip install` in notebooks can cause issues.
**Recommended approach:** Use Databricks' native library management instead:
1. **Create or configure your compute cluster:**
- Navigate to **Compute** in the Databricks workspace
- Create a new cluster or select an existing one
- Go to the **Libraries** tab
2. **Add the datacontract-cli library:**
- Click **Install new**
- Select **PyPI** as the library source
- Enter package name: `datacontract-cli[databricks]`
- Click **Install**
3. **Restart the cluster** to apply the library installation
4. **Use in your notebook** without additional installation:
```python
from datacontract.data_contract import DataContract
data_contract = DataContract(
data_contract_file="/Volumes/acme_catalog_prod/orders_latest/datacontract/datacontract.yaml",
spark=spark)
run = data_contract.test()
run.result
```
Databricks' library management properly resolves dependencies during cluster initialization, rather than at runtime in the notebook.
#### Dataframe (programmatic)
Works with Spark DataFrames.
DataFrames need to be created as named temporary views.
Multiple temporary views are supported if your data contract contains multiple models.
Testing DataFrames is useful for validating your datasets in a pipeline before writing them to a data source.
##### Example
datacontract.yaml
```yaml
servers:
production:
type: dataframe
models:
my_table: # corresponds to a temporary view
type: table
fields: ...
```
Example code
```python
from datacontract.data_contract import DataContract
df.createOrReplaceTempView("my_table")
data_contract = DataContract(
data_contract_file="datacontract.yaml",
spark=spark,
)
run = data_contract.test()
assert run.result == "passed"
```
#### Snowflake
Data Contract CLI can test data in Snowflake.
##### Example
datacontract.yaml
```yaml
servers:
snowflake:
type: snowflake
account: abcdefg-xn12345
database: ORDER_DB
schema: ORDERS_PII_V2
models:
my_table_1: # corresponds to a table
type: table
fields:
my_column_1: # corresponds to a column
type: varchar
```
##### Environment Variables
All [parameters supported by Soda](https://docs.soda.io/soda/connect-snowflake.html) are available, uppercased and prepended with the `DATACONTRACT_SNOWFLAKE_` prefix.
For example:
| Soda parameter | Environment Variable |
|----------------------|---------------------------------------------|
| `username` | `DATACONTRACT_SNOWFLAKE_USERNAME` |
| `password` | `DATACONTRACT_SNOWFLAKE_PASSWORD` |
| `warehouse` | `DATACONTRACT_SNOWFLAKE_WAREHOUSE` |
| `role` | `DATACONTRACT_SNOWFLAKE_ROLE` |
| `connection_timeout` | `DATACONTRACT_SNOWFLAKE_CONNECTION_TIMEOUT` |
Beware that the parameters:
* `account`
* `database`
* `schema`
are obtained from the `servers` section of the YAML file.
E.g. from the example above:
```yaml
ser | text/markdown | null | Jochen Christ <jochen.christ@innoq.com>, Stefan Negele <stefan.negele@innoq.com>, Simon Harrer <simon.harrer@innoq.com> | null | null | null | null | [
"Programming Language :: Python :: 3",
"Operating System :: OS Independent"
] | [] | null | null | <3.13,>=3.10 | [] | [] | [] | [
"typer<0.22,>=0.15.1",
"pydantic<2.13.0,>=2.8.2",
"pyyaml~=6.0.1",
"requests<2.33,>=2.31",
"fastjsonschema<2.22.0,>=2.19.1",
"pytz>=2024.1",
"python-multipart<1.0.0,>=0.0.20",
"rich<15.0,>=13.7",
"sqlglot<29.0.0,>=26.6.0",
"setuptools>=60",
"python-dotenv<2.0.0,>=1.0.0",
"boto3<2.0.0,>=1.34.41... | [] | [] | [] | [
"Homepage, https://cli.datacontract.com",
"Issues, https://github.com/datacontract/datacontract-cli/issues"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-19T20:02:14.976251 | datacontract_cli-0.11.5.tar.gz | 327,822 | 91/a1/dbe93fae969e085a311b9a7dfc637d176ef0bee6768ff1ea1ea56d6a4b8b/datacontract_cli-0.11.5.tar.gz | source | sdist | null | false | 1afe931d25abdd2ba7f8d79f9c95117d | 6bf2acc3ae66bad7fd96f29302e6ba5b5c06576295c2a96833608090a3a5a9db | 91a1dbe93fae969e085a311b9a7dfc637d176ef0bee6768ff1ea1ea56d6a4b8b | MIT | [
"LICENSE"
] | 43,869 |
2.3 | deep-variable | 0.1.0 | **Deep Variable** is a lightweight, zero-dependency Python utility designed to eliminate `KeyError` and `TypeError` crashes when working with deeply nested data structures | # Deep Variable 🚀
[](https://github.com/CreativeCubicle/deep-variable/actions)
[](https://badge.fury.io/py/deep-variable)
[](https://opensource.org/licenses/MIT)
[](https://pypi.org/project/deep-variable/)
**Deep Variable** is a lightweight, zero-dependency Python utility designed to eliminate `KeyError` and `TypeError` crashes when working with deeply nested data structures.
---
## ✨ Features
* **Safe Traversal**: Access deeply nested keys without worrying about missing parents.
* **Intelligent List Navigation**: Treat string indices (e.g., `"0"`) as list offsets automatically.
* **Zero Dependencies**: Pure Python Standard Library implementation—fast and secure.
* **Fully Type-Hinted**: Optimized for IDE IntelliSense and `mypy` strict mode.
* **Custom Separators**: Use dots, slashes, or any delimiter that fits your data.
---
## 🚀 The Difference
### The Old Way (Standard Python)
```python
# This is fragile and hard to read
email = None
if data and "users" in data and len(data["users"]) > 0:
profile = data["users"][0].get("profile")
if profile:
email = profile.get("email", "default@site.com")
```
### The Deep Variable Way
```python
from deep_variable import DeepVariable
# Flat, clean, and crash-proof
email = DeepVariable.get(data, "users.0.profile.email", default="default@site.com")
```
## 📦 Installation
Install the latest version using pip or uv:
```bash
pip install deep-variable
# OR
uv add deep-variable
```
## 🛠 Usage Examples
### 1. Safe Reading (Getter)
```python
data = {"org": {"teams": [{"name": "Engineering"}]}}

# Navigate through mixed dicts and lists
name = DeepVariable.get(data, "org.teams.0.name")  # Returns "Engineering"

# Safe default on missing path
role = DeepVariable.get(data, "org.teams.0.role", default="Developer")  # Returns "Developer"
```
### 2. Existence Checking
```python
data = {"status": {"active": False}}

# Returns True even if the value is falsy
DeepVariable.has(data, "status.active")   # True
DeepVariable.has(data, "status.missing")  # False
```
### 3. Safe Writing (Setter)
```python
data = {}

# Automatically creates intermediate dictionaries
DeepVariable.set(data, "meta.tags.primary", "python")
print(data)  # {'meta': {'tags': {'primary': 'python'}}}
```
## 🛡 Performance & Safety
* **Iterative Logic**: Unlike recursive utilities, deep-variable uses loops, making it safe for exceptionally deep JSON structures without risking a `RecursionError`.
* **Strict Typing**: Built with `mypy --strict` compliance. | text/markdown | Yashas G V | Yashas G V <> | null | null | null | null | [] | [] | null | null | >=3.12 | [] | [] | [] | [] | [] | [] | [] | [] | uv/0.10.4 {"installer":{"name":"uv","version":"0.10.4","subcommand":["publish"]},"python":null,"implementation":{"name":null,"version":null},"distro":{"name":"Ubuntu","version":"24.04","id":"noble","libc":null},"system":{"name":null,"release":null},"cpu":null,"openssl_version":null,"setuptools_version":null,"rustc_version":null,"ci":null} | 2026-02-19T20:01:23.284755 | deep_variable-0.1.0.tar.gz | 3,614 | 4b/b1/1407e931188b90a9c669dbe16d39f69bd5ad45b8658b4e57ad05ffabb9af/deep_variable-0.1.0.tar.gz | source | sdist | null | false | 01d56bb03540c0f9056e9acd8ea888ef | 04d23cd5a5bde34efa8fba0dd42a6d642b200f89bd102e9f90706a578334e08c | 4bb11407e931188b90a9c669dbe16d39f69bd5ad45b8658b4e57ad05ffabb9af | null | [] | 227
2.4 | tandemn-tuna | 0.0.1a7 | Hybrid GPU Inference Orchestrator — serverless for cold starts, spot for scale |
# Tuna
<div align="center">
<img src="https://raw.githubusercontent.com/Tandemn-Labs/tandemn-tuna/main/assets/tuna3.png" width="500" alt="Tuna">
</div>
Spot GPUs are 3-5x cheaper than on-demand, but they take minutes to start and can be interrupted at any time. Serverless GPUs start in seconds and never get interrupted, but you pay a premium for that convenience. What if you didn't have to choose?
Tuna is a smart router that combines both behind a single OpenAI-compatible endpoint. It serves requests from serverless while spot instances boot up, shifts traffic to spot once ready, and falls back to serverless if spot gets preempted. You only pay for the compute you actually use — spot rates for steady traffic, serverless only during cold starts and failover.
<div align="center">
<table>
<tr>
<td align="center" colspan="5"><b>Serverless</b></td>
<td align="center" colspan="1"><b>Spot</b></td>
</tr>
<tr>
<td align="center"><img src="https://raw.githubusercontent.com/Tandemn-Labs/tandemn-tuna/main/assets/modal-logo-icon.png" height="30"><br>Modal</td>
<td align="center"><img src="https://raw.githubusercontent.com/Tandemn-Labs/tandemn-tuna/main/assets/runpod-logo-black.svg" height="30"><br>RunPod</td>
<td align="center"><img src="https://raw.githubusercontent.com/Tandemn-Labs/tandemn-tuna/main/assets/google-cloud-run-logo-png_seeklogo-354677.png" height="30"><br>Cloud Run</td>
<td align="center"><img src="https://raw.githubusercontent.com/Tandemn-Labs/tandemn-tuna/main/assets/baseten.png" height="30"><br>Baseten</td>
<td align="center"><img src="https://raw.githubusercontent.com/Tandemn-Labs/tandemn-tuna/main/assets/azure-container.png" height="30"><br>Azure</td>
<td align="center"><img src="https://raw.githubusercontent.com/Tandemn-Labs/tandemn-tuna/main/assets/Amazon_Web_Services_Logo.svg.png" height="30"><br>AWS via SkyPilot</td>
</tr>
</table>
</div>
<p align="center">
<a href="ROADMAP.md"><b>View Roadmap</b></a>
</p>
> **Note:** Not all GPU types across all providers have been end-to-end tested yet. We are actively testing more combinations. If you run into issues with a specific GPU + provider pair, please [open an issue](https://github.com/Tandemn-Labs/tandemn-tuna/issues).
## Prerequisites
- Python 3.11+
- [AWS CLI](https://docs.aws.amazon.com/cli/latest/userguide/getting-started-install.html) — required for spot instances (all deployments use spot)
- At least one serverless provider account: [Modal](https://modal.com/), [RunPod](https://www.runpod.io/), [Google Cloud](https://cloud.google.com/), [Baseten](https://www.baseten.co/), or [Azure](https://azure.microsoft.com/)
- For gated models (Llama, Mistral, Gemma, etc.): a [HuggingFace token](https://huggingface.co/settings/tokens) with access to the model
> **Note:** By default Tuna deploys both a serverless backend and a spot backend. AWS credentials are required for spot instances, which run on AWS via [SkyPilot](https://github.com/skypilot-org/skypilot). Use `--serverless-only` to skip spot + router (no AWS needed).
## Quick Start
**1. Install**
```bash
pip install tandemn-tuna[modal] --pre # Modal as serverless provider
pip install tandemn-tuna[cloudrun] --pre # Cloud Run as serverless provider
pip install tandemn-tuna[baseten] --pre # Baseten as serverless provider
pip install tandemn-tuna[azure] --pre # Azure Container Apps as serverless provider
pip install tandemn-tuna --pre # RunPod (no extra deps needed)
pip install tandemn-tuna[all] --pre # everything
```
> This project is under active development and experimental. For the latest version, install from source:
> ```bash
> git clone https://github.com/Tandemn-Labs/tandemn-tuna.git
> cd tandemn-tuna
> pip install -e ".[all]"
> ```
**2. Set up AWS (required for all deployments)**
```bash
aws configure # set up AWS credentials
sky check # verify SkyPilot can see your AWS account
```
**3. Set up your serverless provider (pick one)**
<details>
<summary><b>Modal</b></summary>
```bash
modal token new
```
</details>
<details>
<summary><b>RunPod</b></summary>
```bash
export RUNPOD_API_KEY=<your-key> # https://www.runpod.io/console/user/settings
```
Add this to your `~/.bashrc` or `~/.zshrc` to persist it.
</details>
<details>
<summary><b>Cloud Run</b></summary>
Requires the [gcloud CLI](https://cloud.google.com/sdk/docs/install).
```bash
gcloud auth login
gcloud auth application-default login # required for the Python SDK
gcloud config set project <YOUR_PROJECT_ID>
```
You also need billing enabled and the Cloud Run API (`run.googleapis.com`) enabled on your project.
</details>
<details>
<summary><b>Baseten</b></summary>
**Step 1: Create account** — sign up at https://app.baseten.co/signup/
**Step 2: Get API key** — go to Settings > API Keys (https://app.baseten.co/settings/api_keys), create a key, copy it immediately
**Step 3: Set the API key**
```bash
export BASETEN_API_KEY=<your-api-key>
```
Add to `~/.bashrc` or `~/.zshrc` to persist.
**Step 4: Install and authenticate the Truss CLI**
```bash
pip install --upgrade truss
truss login --api-key $BASETEN_API_KEY
```
**Step 5: (For gated models) Add HuggingFace token** — go to Settings > Secrets (https://app.baseten.co/settings/secrets), add a secret named `hf_access_token` with your HF token.
</details>
<details>
<summary><b>Azure Container Apps</b></summary>
Requires the [Azure CLI](https://learn.microsoft.com/en-us/cli/azure/install-azure-cli).
**Step 1: Install Azure CLI and log in**
```bash
az login
```
**Step 2: Register required resource providers**
```bash
az provider register --namespace Microsoft.App
az provider register --namespace Microsoft.OperationalInsights
```
Registration can take a few minutes. Check status with `az provider show --namespace Microsoft.App --query registrationState`.
**Step 3: Create a resource group** (if you don't have one)
```bash
az group create --name tuna-rg --location eastus
```
**Step 4: Set environment variables**
```bash
export AZURE_SUBSCRIPTION_ID=$(az account show --query id -o tsv)
export AZURE_RESOURCE_GROUP=tuna-rg
export AZURE_REGION=eastus
```
Add to `~/.bashrc` or `~/.zshrc` to persist.
**Step 5: Install the Azure SDK**
```bash
pip install tandemn-tuna[azure]
```
**Step 6: Verify setup**
```bash
tuna check --provider azure
```
**GPU availability:** Azure Container Apps supports T4 ($0.26/hr) and A100 80GB ($1.90/hr) GPUs. GPU quota must be requested via the Azure portal — search "Quotas" and request `Managed Environment Consumption T4 Gpus` or `Managed Environment Consumption NCA100 Gpus` capacity for Container Apps in your region. Note: this is separate from VM-level (Compute) GPU quota.
**Environment reuse:** The first Azure deploy creates a Container Apps environment (~30 min). Subsequent deploys reuse it (~2 min). Environments are preserved on destroy — use `--azure-cleanup-env` to remove them. An idle environment with no running apps incurs no charges.
</details>
**4. (Optional) Set HuggingFace token for gated models**
```bash
export HF_TOKEN=<your-token> # https://huggingface.co/settings/tokens
```
Required for models like Llama, Mistral, Gemma, and other gated models. Not needed for open models like Qwen.
**5. Validate your setup**
```bash
tuna check --provider modal # check Modal credentials
tuna check --provider runpod # check RunPod API key
tuna check --provider cloudrun --gcp-project <id> --gcp-region us-central1 # check Cloud Run
tuna check --provider baseten # check Baseten API key + truss CLI
tuna check --provider azure # check Azure CLI + SDK + resource providers
```
**6. Deploy a model**
```bash
tuna deploy --model Qwen/Qwen3-0.6B --gpu L4 --service-name my-first-deploy
```
Tuna auto-selects the cheapest serverless provider for your GPU, launches spot instances on AWS, and gives you a single endpoint. The router handles everything — serverless covers traffic immediately while spot boots up in the background.
**6a. (Alternative) Deploy serverless-only**
Skip spot + router for dev/test or low-traffic:
```bash
tuna deploy --model Qwen/Qwen3-0.6B --gpu L4 --serverless-only
```
Returns the provider's direct endpoint. No AWS credentials needed.
**7. Send requests** (OpenAI-compatible)
```bash
curl http://<router-ip>:8080/v1/chat/completions \
-H "Content-Type: application/json" \
-d '{"model": "Qwen/Qwen3-0.6B", "messages": [{"role": "user", "content": "Hello"}]}'
```
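Because the endpoint is OpenAI-compatible, the official `openai` Python client works too (a minimal sketch; the `api_key` value is a placeholder, matching the unauthenticated curl example above):
```python
from openai import OpenAI

# Point the client at the Tuna router instead of api.openai.com
client = OpenAI(base_url="http://<router-ip>:8080/v1", api_key="unused")

response = client.chat.completions.create(
    model="Qwen/Qwen3-0.6B",
    messages=[{"role": "user", "content": "Hello"}],
)
print(response.choices[0].message.content)
```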
**8. Monitor and manage**
```bash
tuna status --service-name my-first-deploy # check deployment status
tuna cost --service-name my-first-deploy # real-time cost dashboard
tuna list # list all deployments
tuna destroy --service-name my-first-deploy # tear down a specific deployment
tuna destroy --all # tear down all active deployments
```
> **Tip:** If you don't pass `--service-name` during deploy, Tuna auto-generates a name like `tuna-a3f8c21b`. Use `tuna list` to find it.
**9. Browse GPU pricing**
```bash
tuna show-gpus # compare serverless pricing across providers
tuna show-gpus --spot # include AWS spot prices
tuna show-gpus --gpu H100 # detailed pricing for a specific GPU
tuna show-gpus --provider runpod # filter to one provider
```
## Architecture
```
┌──────────────────────┐
│ User Traffic │
│ (OpenAI-compatible) │
└──────────┬───────────┘
│
┌────────▼────────┐
│ Smart Router │
│ (meta_lb) │
└────────┬────────┘
│
┌────────────┴────────────┐
│ │
┌────────▼─────────┐ ┌─────────▼─────────┐
│ Serverless │ │ Spot GPUs │
│ Modal / RunPod / │ │ AWS via SkyPilot │
│ Cloud Run │ │ │
│ │ │ • 3-5x cheaper │
│ • Fast cold start │ │ • Slower cold start│
│ • Per-second bill │ │ • Auto-failover │
│ • Always ready │ │ • Scale to zero │
└───────────────────┘ └────────────────────┘
```
The router:
- Routes to serverless while spot instances are starting up
- Shifts traffic to spot once ready (cheaper)
- Falls back to serverless if spot has issues or high latency
- Scales serverless down to zero when spot is serving
## CLI Reference
| Command | Description |
|---------|-------------|
| `deploy` | Deploy a model across serverless + spot |
| `destroy` | Tear down a deployment (`--service-name <name>` or `--all` for all active) |
| `status` | Check deployment status |
| `cost` | Show cost dashboard (requires running deployment) |
| `list` | List all deployments (filter with `--status active\|destroyed\|failed`) |
| `show-gpus` | GPU pricing across providers (filter with `--provider`, `--gpu`, `--spot`) |
| `check` | Validate provider credentials and setup |
### `deploy` flags
| Flag | Default | Description |
|------|---------|-------------|
| `--model` | *(required)* | HuggingFace model ID (e.g. `Qwen/Qwen3-0.6B`) |
| `--gpu` | *(required)* | GPU type (e.g. `L4`, `L40S`, `A100`, `H100`) |
| `--gpu-count` | `1` | Number of GPUs |
| `--serverless-provider` | auto (cheapest for GPU) | `modal`, `runpod`, `cloudrun`, `baseten`, or `azure` |
| `--spots-cloud` | `aws` | Cloud provider for spot GPUs |
| `--region` | — | Cloud region for spot instances |
| `--tp-size` | `1` | Tensor parallelism degree |
| `--max-model-len` | `4096` | Maximum sequence length (context window) |
| `--concurrency` | — | Override serverless concurrency limit |
| `--workers-max` | — | Max serverless workers (RunPod only) |
| `--cold-start-mode` | `fast_boot` | `fast_boot` (uses `--enforce-eager`, faster startup but lower throughput) or `no_fast_boot` |
| `--no-scale-to-zero` | off | Keep minimum 1 spot replica running |
| `--scaling-policy` | — | Path to scaling YAML (see below) |
| `--service-name` | auto-generated | Custom service name (recommended — makes status/destroy easier) |
| `--serverless-only` | off | Serverless only (no spot, no router). No AWS needed. |
| `--public` | off | Make service publicly accessible (no auth) |
| `--use-different-vm-for-lb` | off | Launch router on a separate VM instead of colocating on controller |
| `--gcp-project` | — | Google Cloud project ID |
| `--gcp-region` | — | Google Cloud region (e.g. `us-central1`) |
| `--azure-subscription` | — | Azure subscription ID |
| `--azure-resource-group` | — | Azure resource group name |
| `--azure-region` | — | Azure region (e.g. `eastus`) |
| `--azure-environment` | — | Name of existing Container Apps environment to reuse |
Use `-v` / `--verbose` with any command for debug logging.
## Scaling Policy
All autoscaling parameters can be configured via a YAML file passed with `--scaling-policy`. If omitted, sane defaults apply.
```yaml
spot:
min_replicas: 0 # 0 = scale to zero (default)
max_replicas: 5
target_qps: 10 # per-replica QPS target
upscale_delay: 5 # seconds before adding replicas
downscale_delay: 300 # seconds before removing replicas
serverless:
concurrency: 32 # max concurrent requests per container
scaledown_window: 60 # seconds idle before scaling down
timeout: 600 # request timeout in seconds
workers_min: 0 # min workers (RunPod only)
workers_max: 1 # max workers (RunPod only)
scaler_value: 4 # queue delay scaler threshold (RunPod only)
```
**Precedence**: defaults < YAML file < CLI flags — later sources override earlier ones. For example, `--concurrency 64` overrides `serverless.concurrency` from the YAML. `--no-scale-to-zero` forces `spot.min_replicas` to at least 1 and sets `serverless.scaledown_window` to 300s.
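For instance, with a `scaling.yaml` that sets `serverless.concurrency: 32`:
```bash
# The CLI flag wins: effective concurrency is 64, everything else comes from the YAML
tuna deploy --model Qwen/Qwen3-0.6B --gpu L4 \
  --scaling-policy scaling.yaml --concurrency 64
```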
Unknown keys in the YAML will error immediately (catches typos).
## Troubleshooting
### Setup issues
Start with the built-in diagnostic tool:
```bash
tuna check --provider runpod
tuna check --provider modal
tuna check --provider cloudrun --gcp-project <id> --gcp-region us-central1
tuna check --provider baseten
tuna check --provider azure
```
This validates credentials, API access, project configuration, and GPU region availability.
### Endpoint not responding
```bash
# Check your deployment status
tuna status --service-name <name>
# Check router health directly
curl http://<router-ip>:8080/router/health
# Check SkyServe status
sky status --refresh
```
### High latency
Check which backend is serving traffic:
```bash
curl http://<router-ip>:8080/router/health
```
If `skyserve_ready` is `false`, spot instances are still booting — requests are going through serverless (which is working correctly). Once spot boots, traffic shifts automatically.
### Gated model fails to load
If the deployment succeeds but the model fails to start, you likely need a HuggingFace token:
```bash
export HF_TOKEN=<your-token>
```
Then redeploy.
## Contact
- Hetarth — hetarth@tandemn.com
- Mankeerat — mankeerat@tandemn.com
## License
MIT
This project depends on [SkyPilot](https://github.com/skypilot-org/skypilot) (Apache License 2.0).
| text/markdown | null | Hetarth <hetarth@tandemn.com>, Mankeerat <mankeerat@tandemn.com> | null | null | null | gpu, serverless, spot, inference, vllm, openai, router | [
"Development Status :: 3 - Alpha",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Topic :: Scientific/Engineering :: Artificial Intelligence"
] | [] | null | null | >=3.11 | [] | [] | [] | [
"flask>=3.0",
"requests>=2.31",
"gunicorn>=21.0",
"pyyaml>=6.0",
"rich>=13.0",
"skypilot[aws]>=0.7",
"pytest>=8.0; extra == \"dev\"",
"pytest-mock>=3.12; extra == \"dev\"",
"google-cloud-run>=0.10.0; extra == \"cloudrun\"",
"modal>=0.73; extra == \"modal\"",
"truss>=0.9; extra == \"baseten\"",
... | [] | [] | [] | [
"Homepage, https://github.com/Tandemn-Labs/tandemn-tuna",
"Repository, https://github.com/Tandemn-Labs/tandemn-tuna",
"Issues, https://github.com/Tandemn-Labs/tandemn-tuna/issues"
] | uv/0.9.26 {"installer":{"name":"uv","version":"0.9.26","subcommand":["publish"]},"python":null,"implementation":{"name":null,"version":null},"distro":{"name":"Linux Mint","version":"22.2","id":"zara","libc":null},"system":{"name":null,"release":null},"cpu":null,"openssl_version":null,"setuptools_version":null,"rustc_version":null,"ci":null} | 2026-02-19T20:00:42.719412 | tandemn_tuna-0.0.1a7.tar.gz | 91,950 | 67/18/141c0c511a9d91a0c30adbfcbbed48f92c6a9b5a78e53b3a0337875afad9/tandemn_tuna-0.0.1a7.tar.gz | source | sdist | null | false | 0589a088c409f285966227fc0fbfe3b9 | 3994be7d836dbe7dd7d11aff545e1f9a949fcf3725747c50662587acb2c1d8f7 | 6718141c0c511a9d91a0c30adbfcbbed48f92c6a9b5a78e53b3a0337875afad9 | MIT | [
"LICENSE"
] | 191 |
2.4 | prefab-ui | 0.2.0 | The agentic frontend framework that even a human can use. | <div align="center">
# Prefab 🎨
**The agentic frontend framework that even a human can use.**
🚧 *Don't panic. Prefab is under **extremely** active development. You probably shouldn't use it yet.* 🚧
[](https://pypi.org/project/prefab-ui)
[](https://github.com/prefecthq/prefab/actions/workflows/run-tests.yml)
[](https://github.com/prefecthq/prefab/blob/main/LICENSE)
[Docs](https://prefab.prefect.io) · [Playground](https://prefab.prefect.io/playground) · [GitHub](https://github.com/PrefectHQ/prefab)
</div>
<img src="https://raw.githubusercontent.com/PrefectHQ/prefab/main/docs/assets/showcase.png" alt="Prefab" width="1000">
Prefab is a frontend framework with a Python DSL that compiles to JSON. Describe a UI — layouts, forms, charts, data tables, full interactivity — and a bundled React renderer turns that JSON into a self-contained application.
Composing frontends in Python is ~~blasphemous~~ surprisingly natural. And because it's a JSON protocol, any source can produce a Prefab UI. Write one in Python, serve one as an [MCP App](https://modelcontextprotocol.io/docs/extensions/apps), or let an agent generate one dynamically — no templates or predefined views required.
<div align="center">
<img src="https://raw.githubusercontent.com/PrefectHQ/prefab/main/docs/assets/hello-world-card.png" alt="Hello world card" width="400">
</div>
<br />
This card has a live-updating heading, a text input bound to client-side state, and badges — all from a few lines of Python. You can try an interactive version [in the Prefab docs](https://prefab.prefect.io/docs/welcome). In fact, every example in the Prefab docs is rendered with Prefab itself.
```python
from prefab_ui.components import Card, CardContent, CardFooter, Column, H3, Muted, Input, Badge, Row
with Card():
with CardContent():
with Column(gap=3):
H3("Hello, {{ name }}!")
Muted("Type below and watch this update in real time.")
Input(name="name", placeholder="Your name...")
with CardFooter():
with Row(gap=2):
Badge("Name: {{ name }}", variant="default")
Badge("Prefab", variant="success")
```
Since everything compiles to JSON, you can author a UI from a Python script, have an agent generate one on the fly, or serve one from any MCP server or REST API.
*Made with 💙 by [Prefect](https://www.prefect.io/)*
## Installation
```bash
pip install prefab-ui
```
Requires Python 3.10+.
## How It Works
1. Build a component tree in Python (or raw JSON from any source)
2. The tree compiles to Prefab's JSON format
3. A bundled React renderer turns the JSON into a live interface
State flows through `{{ templates }}`. When you write `{{ query }}`, the renderer interpolates the current value from client-side state. Named form controls sync automatically — `Input(name="city")` keeps `{{ city }}` up to date on every keystroke. Actions like `ToolCall` and `SetState` drive interactivity without custom JavaScript.
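A minimal illustration of that binding, using only components that appear elsewhere in this README:
```python
from prefab_ui.components import Column, Input, Text

# "query" is written to client-side state on every keystroke;
# the Text below re-renders from the same state key.
with Column(gap=2):
    Input(name="query", placeholder="Search...")
    Text("Searching for: {{ query }}")
```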
## Components
35+ components covering layout, typography, forms, data display, and interactive elements. Containers nest with Python context managers:
```python
from prefab_ui.components import Card, CardHeader, CardTitle, CardContent, Column, Text, Badge
with Card():
with CardHeader():
CardTitle("User Profile")
with CardContent():
with Column():
Text("{{ user.name }}")
Badge("{{ user.role }}", variant="secondary")
```
Pydantic models generate forms automatically — constraints like `min_length` and `ge` become client-side validation:
```python
from pydantic import BaseModel, Field
from prefab_ui.components import Form
from prefab_ui.actions import ToolCall
class SignupForm(BaseModel):
email: str = Field(description="Your email address")
name: str = Field(min_length=2, max_length=50)
age: int = Field(ge=18, le=120)
Form.from_model(SignupForm, on_submit=ToolCall("create_user"))
```
## Actions
Actions define what happens on interaction — state updates, server calls, navigation, notifications:
```python
from prefab_ui.components import Button
from prefab_ui.actions import SetState, ToolCall, ShowToast
Button("Save", on_click=[
SetState("saving", True),
ToolCall(
"save_data",
arguments={"item": "{{ item }}"},
on_success=ShowToast(title="Saved"),
on_error=ShowToast(title="Failed", variant="destructive"),
),
SetState("saving", False),
])
```
## Documentation
Full documentation at [prefab.prefect.io](https://prefab.prefect.io), including an interactive [playground](https://prefab.prefect.io/playground) where you can try components live.
| text/markdown | Jeremiah Lowin | null | null | null | null | FastMCP, MCP, agentic, components, dsl, frontend, generative-ui, json, prefab, react, ui | [
"Intended Audience :: Developers",
"License :: OSI Approved :: Apache Software License",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: 3.13",
"Typing :: Typed"
] | [] | null | null | >=3.10 | [] | [] | [] | [
"cyclopts>=4",
"pydantic>=2.10",
"rich>=13"
] | [] | [] | [] | [
"Homepage, https://prefab.prefect.io",
"Documentation, https://prefab.prefect.io",
"Repository, https://github.com/PrefectHQ/prefab"
] | uv/0.10.4 {"installer":{"name":"uv","version":"0.10.4","subcommand":["publish"]},"python":null,"implementation":{"name":null,"version":null},"distro":{"name":"Ubuntu","version":"24.04","id":"noble","libc":null},"system":{"name":null,"release":null},"cpu":null,"openssl_version":null,"setuptools_version":null,"rustc_version":null,"ci":true} | 2026-02-19T20:00:24.753171 | prefab_ui-0.2.0-py3-none-any.whl | 771,916 | 87/6c/17e03c95c5a5ea611d4f2c044d8495a2beec9e916ad0073d613d655679e4/prefab_ui-0.2.0-py3-none-any.whl | py3 | bdist_wheel | null | false | fd989854863caf1e8403cfc5701e0e58 | f82f4c36ed84ccede819b641e0459989ef35da1aa7493ec26e0a4c2b13aafd1d | 876c17e03c95c5a5ea611d4f2c044d8495a2beec9e916ad0073d613d655679e4 | Apache-2.0 | [
"LICENSE"
] | 214 |
2.4 | chart-direction | 1.0.0 | Detect UP/DOWN trend direction in financial line-chart screenshots using computer vision. | <h1 align="center">📈 chart-direction</h1>
<p align="center">
<strong>Detect the trend direction of a financial line-chart screenshot — in one line of Python.</strong>
</p>
<p align="center">
<a href="https://github.com/MahyudeenShahid/chart-direction/blob/main/LICENSE">
<img src="https://img.shields.io/badge/license-MIT-blue.svg" alt="MIT License">
</a>
<a href="https://pypi.org/project/chart-direction/">
<img src="https://img.shields.io/pypi/v/chart-direction.svg" alt="PyPI version">
</a>
<a href="https://pypi.org/project/chart-direction/">
<img src="https://img.shields.io/pypi/pyversions/chart-direction.svg" alt="Python versions">
</a>
<a href="https://github.com/MahyudeenShahid/chart-direction/actions">
<img src="https://img.shields.io/github/actions/workflow/status/MahyudeenShahid/chart-direction/ci.yml?label=CI" alt="CI">
</a>
</p>
---
## Preview
> **Web UI** — drag-and-drop a chart screenshot, get an instant UP / DOWN signal with full debug pipeline images.
<p align="center">
<img src="image.jpg" alt="Chart Direction Analyzer — Web UI screenshot" width="900">
</p>
---
## What it does
`chart-direction` takes a **screenshot of any financial line chart** and tells you whether the line at the right-hand end is going **UP** or **DOWN** — no manual cropping, no hardcoded colours, no ML model to install.
```python
from chart_direction import ChartDirectionDetector
detector = ChartDirectionDetector()
print(detector("screenshot.png")) # "UP" 📈
```
It handles real-world chart challenges:
| Challenge | How it's handled |
|-----------|-----------------|
| Dotted grid lines | Removed via Hough line detection before component analysis |
| Broken/anti-aliased strokes | Reconnected with morphological gap bridging |
| UI panels with large bounding boxes | Rejected using **density scoring** — chart lines are sparse, panels are filled |
| Short visible tail at chart end | 3× zoom + end-extension band search |
---
## Installation
```bash
pip install chart-direction
```
**Requirements:** Python ≥ 3.8, `opencv-python`, `numpy`
Install from source:
```bash
git clone https://github.com/MahyudeenShahid/chart-direction.git
cd chart-direction
pip install -e .
```
---
## Quick Start
### One-liner
```python
from chart_direction import ChartDirectionDetector
detector = ChartDirectionDetector()
print(detector("chart.png")) # "UP" or "DOWN"
```
### Full result dict
```python
result = detector.analyze_with_details("chart.png")
if result["success"]:
print(result["direction"]) # "UP"
print(result["end_dir"]) # +1
print(result["trend_start_x"]) # 312 (pixel x where last trend began)
print(result["roi"]) # ROI(x0=780, y0=40, x1=1024, y1=600, w=244, h=560)
```
### Save debug images
```python
result = detector.analyze_with_details("chart.png", outdir="debug/")
```
This generates 11 images showing every step of the pipeline:
```
debug/
├── full_edges_raw.png Canny edges on full image
├── full_edges_clean.png After removing horizontal grid lines
├── full_edges_bridged.png After bridging gaps
├── full_component.png Selected chart component
├── full_component_dilated.png Dilated for tracing
├── full_traced.png Traced path + END marker
├── original_with_roi.png Original with red ROI box
├── zoom.png 3× zoomed right-end ROI
├── edges.png Edges in zoomed ROI
├── traced.png Final trace on zoom + "Direction: UP"
└── edges_traced.png Trace on edge image
```
### Command-line interface
```bash
chart-direction --image chart.png --outdir debug/
# 📈 Direction: UP
# Debug images → debug/
```
---
## How It Works
The pipeline has **10 stages**, all tunable via constructor parameters:
```
Input image
↓
1. Crop vertical margins (removes chart title / footer UI)
↓
2. Canny edge detection (finds all edges)
↓
3. Remove horizontal artifacts (erases grid lines via Hough)
↓
4. Bridge gaps (reconnects dotted/anti-aliased lines)
↓
5. Component selection (density-aware scoring picks the chart line)
↓
6. Trace y(x) + extend end (converts mask → 1-D function)
↓
7. Build zoomed ROI (frame the last ~28% of the chart)
↓
8. Repeat steps 2-5 on zoom (sub-pixel accuracy at 3× magnification)
↓
9. Gradient → smooth → classify (UP / DOWN / FLAT per pixel)
↓
10. Find last direction change → "UP" or "DOWN"
```
### The density trick (v14 fix)
The key insight that makes this work on full-screen charts:
```
density = component_area / (bbox_width × bbox_height)
```
- A **chart line** spanning the whole image: `density ≈ 0.01 – 0.05` (sparse)
- A **UI panel** filling the screen: `density ≈ 0.3 – 1.0` (dense)
Only reject a huge component when **all three hold**:
```
x_span > 95% W AND y_span > 80% H AND density > 0.18
```
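In code, the rejection test amounts to something like this illustrative sketch (not the library's exact source; `density_threshold` is the constructor parameter listed under Configuration below):
```python
def looks_like_ui_panel(component_area, bbox_w, bbox_h, img_w, img_h,
                        density_threshold=0.18):
    """Reject only when the component is huge in BOTH axes AND dense."""
    density = component_area / (bbox_w * bbox_h)
    return (bbox_w > 0.95 * img_w
            and bbox_h > 0.80 * img_h
            and density > density_threshold)
```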
---
## Configuration
```python
detector = ChartDirectionDetector(
# ── Edge detection ──────────────────────────────
canny_low = 30, # lower Canny threshold
canny_high = 120, # upper Canny threshold
# ── Horizontal artifact removal ─────────────────
hough_threshold = 40, # Hough votes needed
hough_max_gap = 14, # max gap in line segment
horizontal_slope_max = 0.08, # |dy/dx| < this → horizontal
# ── Component selection ─────────────────────────
min_component_area = 160, # ignore tiny blobs
density_threshold = 0.18, # UI panel detector
# ── End-extension ───────────────────────────────
band_half_height = 22, # ± pixel search band
max_end_gap = 18, # stop after this many blank columns
# ── Direction analysis ──────────────────────────
slope_threshold = 0.15, # gradient < this → flat
smooth_win_trace = 9, # smoothing window on y(x)
smooth_win_grad = 7, # smoothing window on dy/dx
# ── ROI & zoom ──────────────────────────────────
last_w_frac_graph = 0.28, # analyse last 28% of chart
full_y_margin_frac = 0.02, # crop 2% top & bottom
zoom_factor = 3.0, # 3× magnification on ROI
)
```
### Tuning tips
| Goal | Change |
|------|--------|
| More sensitive to shallow trends | Lower `slope_threshold` (e.g. `0.08`) |
| Charts with thin/faint lines | Lower `canny_low` (e.g. `15`) |
| Analyse a wider end section | Increase `last_w_frac_graph` (e.g. `0.40`) |
| Very high-resolution images | Increase `zoom_factor` (e.g. `4.0`) |
| Noisy images with many components | Increase `min_component_area` (e.g. `300`) |
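Combining two rows of the table, for example — tuned for thin, faint lines with shallow trends:
```python
from chart_direction import ChartDirectionDetector

# Values taken from the tuning-tips table above
detector = ChartDirectionDetector(canny_low=15, slope_threshold=0.08)
print(detector("faint_chart.png"))
```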
---
## Documentation
| Doc | Description |
|-----|-------------|
| [Quick Start](docs/QUICKSTART.md) | Installation, basic usage, CLI |
| [API Reference](docs/API.md) | All classes, methods, parameters |
| [File Reference](docs/FILES.md) | Every file explained — how it works and how it connects |
| [Changelog](CHANGELOG.md) | Version history |
| [Contributing](CONTRIBUTING.md) | How to contribute |
---
## Contributing
Contributions are very welcome! Please read [CONTRIBUTING.md](CONTRIBUTING.md) first.
```bash
# Fork → clone → create branch
git checkout -b feat/my-feature
# Install dev dependencies
pip install -e ".[dev]"
# Run tests
pytest tests/ -v
# Format
black chart_direction/
ruff check chart_direction/
# Open a PR 🎉
```
---
## License
[MIT](LICENSE) © 2026 Mahyudeen Shahid
---
<p align="center">
Made with ❤️ and OpenCV
</p>
| text/markdown | null | Mahyudeen Shahid <mahyudeenshahid01@gmail.com> | null | null | MIT | finance, chart, trading, computer-vision, opencv, trend | [
"Development Status :: 4 - Beta",
"Intended Audience :: Developers",
"Intended Audience :: Financial and Insurance Industry",
"License :: OSI Approved :: MIT License",
"Operating System :: OS Independent",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.8",
"Programming Lan... | [] | null | null | >=3.8 | [] | [] | [] | [
"opencv-python>=4.5",
"numpy>=1.21",
"pytest>=7; extra == \"dev\"",
"pytest-cov; extra == \"dev\"",
"black; extra == \"dev\"",
"ruff; extra == \"dev\"",
"mypy; extra == \"dev\"",
"mkdocs>=1.5; extra == \"docs\"",
"mkdocs-material>=9; extra == \"docs\"",
"mkdocstrings[python]>=0.24; extra == \"docs... | [] | [] | [] | [
"Homepage, https://github.com/MahyudeenShahid/chart-direction",
"Documentation, https://MahyudeenShahid.github.io/chart-direction",
"Repository, https://github.com/MahyudeenShahid/chart-direction",
"Bug Tracker, https://github.com/MahyudeenShahid/chart-direction/issues",
"Changelog, https://github.com/Mahyu... | twine/6.2.0 CPython/3.13.12 | 2026-02-19T20:00:00.970758 | chart_direction-1.0.0.tar.gz | 16,351 | ac/18/5204b4c6e132945c8a3fcdd945d21153c7fc23e07f396e90f076407d2439/chart_direction-1.0.0.tar.gz | source | sdist | null | false | 632b75f7308743bedbbe18447f158d4c | 6d185d4a9b4cac35c4780f161cfc6a709c36de8596b7888dbe395624d7c93955 | ac185204b4c6e132945c8a3fcdd945d21153c7fc23e07f396e90f076407d2439 | null | [
"LICENSE"
] | 238 |
2.1 | cdktn-provider-acme | 13.0.1 | Prebuilt acme Provider for CDK Terrain (cdktn) | # CDKTN prebuilt bindings for vancluever/acme provider version 2.45.0
This repo builds and publishes the [Terraform acme provider](https://registry.terraform.io/providers/vancluever/acme/2.45.0/docs) bindings for [CDK Terrain](https://cdktn.io).
## Available Packages
### NPM
The npm package is available at [https://www.npmjs.com/package/@cdktn/provider-acme](https://www.npmjs.com/package/@cdktn/provider-acme).
`npm install @cdktn/provider-acme`
### PyPI
The PyPI package is available at [https://pypi.org/project/cdktn-provider-acme](https://pypi.org/project/cdktn-provider-acme).
`pipenv install cdktn-provider-acme`
### Nuget
The Nuget package is available at [https://www.nuget.org/packages/Io.Cdktn.Providers.Acme](https://www.nuget.org/packages/Io.Cdktn.Providers.Acme).
`dotnet add package Io.Cdktn.Providers.Acme`
### Maven
The Maven package is available at [https://mvnrepository.com/artifact/io.cdktn/cdktn-provider-acme](https://mvnrepository.com/artifact/io.cdktn/cdktn-provider-acme).
```
<dependency>
<groupId>io.cdktn</groupId>
<artifactId>cdktn-provider-acme</artifactId>
<version>[REPLACE WITH DESIRED VERSION]</version>
</dependency>
```
### Go
The go package is generated into the [`github.com/cdktn-io/cdktn-provider-acme-go`](https://github.com/cdktn-io/cdktn-provider-acme-go) package.
`go get github.com/cdktn-io/cdktn-provider-acme-go/acme/<version>`
Where `<version>` is the version of the prebuilt provider you would like to use e.g. `v11`. The full module name can be found
within the [go.mod](https://github.com/cdktn-io/cdktn-provider-acme-go/blob/main/acme/go.mod#L1) file.
## Docs
Find auto-generated docs for this provider here:
* [Typescript](./docs/API.typescript.md)
* [Python](./docs/API.python.md)
* [Java](./docs/API.java.md)
* [C#](./docs/API.csharp.md)
* [Go](./docs/API.go.md)
You can also visit a hosted version of the documentation on [constructs.dev](https://constructs.dev/packages/@cdktn/provider-acme).
## Versioning
This project is explicitly not tracking the Terraform acme provider version 1:1. In fact, it always tracks `latest` of `~> 2.10` with every release. If there are scenarios where you explicitly have to pin your provider version, you can do so by [generating the provider constructs manually](https://cdktn.io/docs/concepts/providers#import-providers).
These are the upstream dependencies:
* [CDK Terrain](https://cdktn.io) - Last official release
* [Terraform acme provider](https://registry.terraform.io/providers/vancluever/acme/2.45.0)
* [Terraform Engine](https://terraform.io)
If there are breaking changes (backward incompatible) in any of the above, the major version of this project will be bumped.
## Features / Issues / Bugs
Please report bugs and issues to the [CDK Terrain](https://cdktn.io) project:
* [Create bug report](https://github.com/open-constructs/cdk-terrain/issues)
* [Create feature request](https://github.com/open-constructs/cdk-terrain/issues)
## Contributing
### Projen
This is mostly based on [Projen](https://projen.io), which takes care of generating the entire repository.
### cdktn-provider-project based on Projen
There's a custom [project builder](https://github.com/cdktn-io/cdktn-provider-project) which encapsulate the common settings for all `cdktn` prebuilt providers.
### Provider Version
The provider version can be adjusted in [./.projenrc.js](./.projenrc.js).
### Repository Management
The repository is managed by [CDKTN Repository Manager](https://github.com/cdktn-io/cdktn-repository-manager/).
| text/markdown | CDK Terrain Maintainers | null | null | null | MPL-2.0 | null | [
"Intended Audience :: Developers",
"Operating System :: OS Independent",
"Programming Language :: JavaScript",
"Programming Language :: Python :: 3 :: Only",
"Programming Language :: Python :: 3.9",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Typing :: Typed",
... | [] | https://github.com/cdktn-io/cdktn-provider-acme.git | null | ~=3.9 | [] | [] | [] | [
"cdktn<0.23.0,>=0.22.0",
"constructs<11.0.0,>=10.4.2",
"jsii<2.0.0,>=1.119.0",
"publication>=0.0.3",
"typeguard<4.3.0,>=2.13.3"
] | [] | [] | [] | [
"Source, https://github.com/cdktn-io/cdktn-provider-acme.git"
] | twine/6.1.0 CPython/3.14.2 | 2026-02-19T19:59:41.921730 | cdktn_provider_acme-13.0.1.tar.gz | 104,112 | d4/0d/8cb410459966fd14c0943aed3ae74a3ee0843ebc04a61fe8f7c7ec3a20bd/cdktn_provider_acme-13.0.1.tar.gz | source | sdist | null | false | b7c82a80190e9e7fa94088e1c639c2c2 | e3cc1a5195cca05db2a42d70e3cc24835a569a60014291a9464ba40862a54977 | d40d8cb410459966fd14c0943aed3ae74a3ee0843ebc04a61fe8f7c7ec3a20bd | null | [] | 207 |
2.4 | chromadb | 1.5.1 | Chroma. | 

<p align="center">
<b>Chroma - the open-source search engine for AI</b>. <br />
The fastest way to build Python or JavaScript LLM apps that search over your data!
</p>
<p align="center">
<a href="https://discord.gg/MMeYNTmh3x" target="_blank">
<img src="https://img.shields.io/discord/1073293645303795742?cacheSeconds=3600" alt="Discord">
</a> |
<a href="https://github.com/chroma-core/chroma/blob/master/LICENSE" target="_blank">
<img src="https://img.shields.io/badge/License-Apache_2.0-blue.svg" alt="License">
</a> |
<a href="https://docs.trychroma.com/" target="_blank">
Docs
</a> |
<a href="https://www.trychroma.com/" target="_blank">
Homepage
</a>
</p>
```bash
pip install chromadb # python client
# for javascript, npm install chromadb!
# for client-server mode, chroma run --path /chroma_db_path
```
## Chroma Cloud
Our hosted service, Chroma Cloud, powers serverless vector, hybrid, and full-text search. It's extremely fast, cost-effective, scalable and painless. Create a DB and try it out in under 30 seconds with $5 of free credits.
[Get started with Chroma Cloud](https://trychroma.com/signup)
## API
The core API is only 4 functions (run our [💡 Google Colab](https://colab.research.google.com/drive/1QEzFyqnoFxq7LUGyP1vzR4iLt9PpCDXv?usp=sharing)):
```python
import chromadb
# setup Chroma in-memory, for easy prototyping. Can add persistence easily!
client = chromadb.Client()
# Create collection. get_collection, get_or_create_collection, delete_collection also available!
collection = client.create_collection("all-my-documents")
# Add docs to the collection. Can also update and delete. Row-based API coming soon!
collection.add(
documents=["This is document1", "This is document2"], # we handle tokenization, embedding, and indexing automatically. You can skip that and add your own embeddings as well
metadatas=[{"source": "notion"}, {"source": "google-docs"}], # filter on these!
ids=["doc1", "doc2"], # unique for each doc
)
# Query/search 2 most similar results. You can also .get by id
results = collection.query(
query_texts=["This is a query document"],
n_results=2,
# where={"metadata_field": "is_equal_to_this"}, # optional filter
# where_document={"$contains":"search_string"} # optional filter
)
```
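The in-memory client above loses data on exit. Chroma also ships a persistent client and an HTTP client for the client-server mode mentioned at the top:
```python
import chromadb

# Persist to disk — same collection API as the in-memory client
client = chromadb.PersistentClient(path="/chroma_db_path")

# Or connect to a server started with `chroma run --path /chroma_db_path`
client = chromadb.HttpClient(host="localhost", port=8000)
```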
Learn about all features on our [Docs](https://docs.trychroma.com)
## Features
- __Simple__: Fully-typed, fully-tested, fully-documented == happiness
- __Integrations__: [`🦜️🔗 LangChain`](https://blog.langchain.dev/langchain-chroma/) (python and js), [`🦙 LlamaIndex`](https://twitter.com/atroyn/status/1628557389762007040) and more soon
- __Dev, Test, Prod__: the same API that runs in your python notebook, scales to your cluster
- __Feature-rich__: Queries, filtering, regex and more
- __Free & Open Source__: Apache 2.0 Licensed
## Use case: ChatGPT for ______
For example, the `"Chat your data"` use case:
1. Add documents to your database. You can pass in your own embeddings, embedding function, or let Chroma embed them for you.
2. Query relevant documents with natural language.
3. Compose documents into the context window of an LLM like `GPT4` for additional summarization or analysis.
## Embeddings?
What are embeddings?
- [Read the guide from OpenAI](https://platform.openai.com/docs/guides/embeddings/what-are-embeddings)
- __Literal__: Embedding something turns it from image/text/audio into a list of numbers. 🖼️ or 📄 => `[1.2, 2.1, ....]`. This process makes documents "understandable" to a machine learning model.
- __By analogy__: An embedding represents the essence of a document. This enables documents and queries with the same essence to be "near" each other and therefore easy to find.
- __Technical__: An embedding is the latent-space position of a document at a layer of a deep neural network. For models trained specifically to embed data, this is the last layer.
- __A small example__: If you search your photos for "famous bridge in San Francisco". By embedding this query and comparing it to the embeddings of your photos and their metadata - it should return photos of the Golden Gate Bridge.
Chroma allows you to store these vectors or embeddings and search by nearest neighbors rather than by substrings like a traditional database. By default, Chroma uses [Sentence Transformers](https://docs.trychroma.com/guides/embeddings#default:-all-minilm-l6-v2) to embed for you but you can also use OpenAI embeddings, Cohere (multilingual) embeddings, or your own.
## Get involved
Chroma is a rapidly developing project. We welcome PR contributors and ideas for how to improve the project.
- [Join the conversation on Discord](https://discord.com/invite/chromadb) - `#contributing` channel
- [Review the 🛣️ Roadmap and contribute your ideas](https://docs.trychroma.com/docs/overview/oss#roadmap)
- [Grab an issue and open a PR](https://github.com/chroma-core/chroma/issues) - [`Good first issue tag`](https://github.com/chroma-core/chroma/issues?q=is%3Aissue+is%3Aopen+label%3A%22good+first+issue%22)
- [Read our contributing guide](https://docs.trychroma.com/docs/overview/oss#contributing)
**Release Cadence**
We currently release new tagged versions of the `pypi` and `npm` packages on Mondays. Hotfixes go out at any time during the week.
## License
[Apache 2.0](./LICENSE)
| text/markdown; charset=UTF-8; variant=GFM | null | Jeff Huber <jeff@trychroma.com>, Anton Troynikov <anton@trychroma.com> | null | null | null | null | [
"Programming Language :: Python :: 3",
"License :: OSI Approved :: Apache Software License",
"Operating System :: OS Independent"
] | [] | null | null | >=3.9 | [] | [] | [] | [
"build>=1.0.3",
"pydantic>=1.9",
"pybase64>=1.4.1",
"uvicorn[standard]>=0.18.3",
"numpy>=1.22.5",
"posthog<6.0.0,>=2.4.0",
"typing-extensions>=4.5.0",
"onnxruntime>=1.14.1",
"opentelemetry-api>=1.2.0",
"opentelemetry-exporter-otlp-proto-grpc>=1.2.0",
"opentelemetry-sdk>=1.2.0",
"tokenizers>=0.... | [] | [] | [] | [
"Bug Tracker, https://github.com/chroma-core/chroma/issues",
"Homepage, https://github.com/chroma-core/chroma"
] | maturin/1.12.3 | 2026-02-19T19:59:34.676317 | chromadb-1.5.1-cp39-abi3-win_amd64.whl | 21,856,118 | 84/a2/023696860162c59ed7d5d2a589d701bf5c54233d82a0f808c69956204c10/chromadb-1.5.1-cp39-abi3-win_amd64.whl | cp39 | bdist_wheel | null | false | af3de258775b0a54a685fd12bc2aad0f | 7ec9dc47841cf3fecc475ca07a0aacfc9a347b3460881051636755618d6250c6 | 84a2023696860162c59ed7d5d2a589d701bf5c54233d82a0f808c69956204c10 | null | [] | 275,348 |
2.4 | initrunner | 1.1.6 | Define AI agent roles in YAML and run them anywhere — CLI, API server, or autonomous daemon | # InitRunner — AI Agent Roles as YAML
<p align="center"><img src="https://raw.githubusercontent.com/vladkesler/initrunner/main/assets/mascot.png" alt="InitRunner mascot" width="300"></p>
<p align="center">
<img src="https://img.shields.io/badge/python-3.11+-3776ab?logo=python&logoColor=white" alt="Python 3.11+">
<a href="https://pypi.org/project/initrunner/"><img src="https://img.shields.io/pypi/v/initrunner?color=%2334D058&v=1" alt="PyPI version"></a>
<a href="https://github.com/vladkesler/initrunner"><img src="https://img.shields.io/github/stars/vladkesler/initrunner?style=flat&color=%2334D058" alt="GitHub stars"></a>
<a href="https://hub.docker.com/r/vladkesler/initrunner"><img src="https://img.shields.io/docker/pulls/vladkesler/initrunner?color=%2334D058" alt="Docker pulls"></a>
<a href="LICENSE"><img src="https://img.shields.io/badge/license-MIT-%2334D058" alt="MIT License"></a>
<a href="tests/"><img src="https://img.shields.io/badge/tests-710+-%2334D058" alt="Tests"></a>
<a href="https://github.com/astral-sh/ruff"><img src="https://img.shields.io/badge/code%20style-ruff-d4aa00?logo=ruff&logoColor=white" alt="Ruff"></a>
<a href="https://ai.pydantic.dev/"><img src="https://img.shields.io/badge/PydanticAI-6e56cf?logo=pydantic&logoColor=white" alt="PydanticAI"></a>
<a href="https://initrunner.ai/"><img src="https://img.shields.io/badge/website-initrunner.ai-blue" alt="Website"></a>
</p>
<p align="center">
<a href="https://initrunner.ai/">Website</a> · <a href="https://initrunner.ai/docs">Docs</a> · <a href="https://github.com/vladkesler/initrunner/issues">Issues</a>
</p>
**Define AI agent roles in YAML and run them anywhere — CLI, API server, or autonomous daemon.**
Your agent is a YAML file. Its tools, knowledge base, memory, triggers, and multimodal input — all config, not code. Deploy it as a CLI tool, a cron-driven daemon, or an OpenAI-compatible API. Compose agents into pipelines. RAG and long-term memory come batteries-included. Manage, chat, and audit from a web dashboard or terminal TUI.
> **v1.1.6** — Stable release. See the [Changelog](CHANGELOG.md) for details.
## Table of Contents
- [See It in Action](#see-it-in-action)
- [Why InitRunner](#why-initrunner)
- [From Simple to Powerful](#from-simple-to-powerful)
- [Community Roles](#community-roles)
- [Install & Quickstart](#install--quickstart)
- [Docker](#docker)
- [Core Concepts](#core-concepts)
- [CLI Quick Reference](#cli-quick-reference)
- [User Interfaces](#user-interfaces)
- [Documentation](#documentation)
- [Examples](#examples)
- [Community & Support](#community--support)
- [Contributing](#contributing)
- [License](#license)
## See It in Action
A code reviewer that can read your files and inspect git history — one YAML file:
```yaml
apiVersion: initrunner/v1
kind: Agent
metadata:
name: code-reviewer
description: Reviews code for bugs and style issues
spec:
role: |
You are a senior engineer. Review code for correctness and readability.
Use git tools to examine changes and read files for context.
model: { provider: openai, name: gpt-5-mini }
tools:
- type: git
repo_path: .
- type: filesystem
root_path: .
read_only: true
```
```bash
initrunner run reviewer.yaml -p "Review the latest commit"
```
That's it. No Python, no boilerplate.
Using Claude? Install the Anthropic extra and swap the model line:
```bash
pip install "initrunner[anthropic]"
```
```yaml
model: { provider: anthropic, name: claude-opus-4-6 }
```
The same file also runs as an interactive chat (`-i`), a trigger-driven daemon, or an OpenAI-compatible API server.
<p align="center">
<img src="assets/screenshot-repl.png" alt="InitRunner CLI REPL" width="700"><br>
<em>Interactive REPL — chat with any agent from the terminal</em>
</p>
## Why InitRunner
**Config, not code** — Define your agent's tools, knowledge base, and memory in one YAML file. No framework boilerplate, no wiring classes together. 16 built-in tools (filesystem, git, HTTP, Python, shell, SQL, search, email, MCP, and more) work out of the box. Need a custom tool? One file, one decorator.
**Version-control your agents** — Agent configs are plain text. Diff them, review them in PRs, validate in CI, reproduce anywhere. Your agent definition lives next to your code.
**Prototype to production** — Same YAML runs as an interactive chat, a one-shot CLI command, a trigger-driven daemon, or an OpenAI-compatible API. No rewrite when you're ready to deploy.
## From Simple to Powerful
Start with the code-reviewer above. Each step adds one capability — no rewrites, just add a section to your YAML.
### 1. Add knowledge & memory
Point at your docs for RAG — a `search_documents` tool is auto-registered. Add `memory` for persistent recall across sessions:
```yaml
spec:
ingest:
sources: ["./docs/**/*.md", "./docs/**/*.pdf"]
memory:
store_path: ./memory.db
max_memories: 1000
```
```bash
initrunner ingest role.yaml # extract | chunk | embed | store
initrunner run role.yaml -i --resume # search_documents + memory ready
```
### 2. Add skills
Compose reusable bundles of tools and prompts. Each skill is a `SKILL.md` file — reference it by path:
```yaml
spec:
skills:
- ../skills/web-researcher
- ../skills/code-tools.md
```
The agent inherits each skill's tools and prompt instructions automatically.
A `SKILL.md` file has a YAML frontmatter block defining the tools it provides, followed by markdown guidelines the agent will follow:
```markdown
---
name: my-skill
description: What this skill does
tools:
- type: web_reader
timeout_seconds: 15
- type: search
---
Use the web_reader tool to fetch pages as markdown before answering.
Cite URLs in your responses.
```
Run `initrunner init --skill my-skill` to scaffold one.
### 3. Add triggers
Turn it into a daemon that reacts to events:
```yaml
spec:
triggers:
- type: cron
schedule: "0 9 * * 1"
prompt: "Generate the weekly status report."
- type: file_watch
paths: [./src]
prompt_template: "File changed: {path}. Review it."
```
```bash
initrunner daemon role.yaml # runs until stopped
```
### 4. Compose agents
Orchestrate multiple agents into a pipeline. One agent's output feeds into the next:
```yaml
apiVersion: initrunner/v1
kind: Compose
metadata:
name: email-pipeline
description: Multi-agent email processing pipeline
spec:
services:
inbox-watcher:
role: roles/inbox-watcher.yaml
sink: { type: delegate, target: triager }
triager:
role: roles/triager.yaml
```
```bash
initrunner compose up pipeline.yaml
```
### 5. Serve as an API
Turn any agent into an OpenAI-compatible endpoint. Drop-in for Open WebUI, Vercel AI SDK, or any OpenAI-compatible client:
```bash
initrunner serve support-agent.yaml --port 3000
```
```python
from openai import OpenAI
client = OpenAI(base_url="http://localhost:3000/v1", api_key="unused")
response = client.chat.completions.create(
model="support-agent",
messages=[{"role": "user", "content": "How do I reset my password?"}],
)
```
Or connect [Open WebUI](https://github.com/open-webui/open-webui) for a full chat interface:
```bash
docker run -d --name open-webui --network host \
-e OPENAI_API_BASE_URL=http://127.0.0.1:3000/v1 \
-e OPENAI_API_KEY=unused \
-v open-webui:/app/backend/data \
ghcr.io/open-webui/open-webui:main
# Open http://localhost:8080 and select the support-agent model
```
See [Server docs](docs/interfaces/server.md#open-webui-integration) for the full walkthrough.
### 6. Attach files and media
Send images, audio, video, and documents alongside your prompts — from the CLI, REPL, API, or dashboard:
```bash
# Attach an image to a prompt
initrunner run role.yaml -p "Describe this image" -A photo.png
# Multiple attachments
initrunner run role.yaml -p "Compare these" -A before.png -A after.png
# URL attachment
initrunner run role.yaml -p "What's in this image?" -A https://example.com/photo.jpg
```
In the interactive REPL, use `/attach` to queue files:
```
> /attach diagram.png
Queued attachment: diagram.png
> /attach notes.pdf
Queued attachment: notes.pdf
> What do these show?
[assistant response with both attachments]
```
The API server accepts multimodal content in the standard OpenAI format. See [Multimodal Input](docs/core/multimodal.md) for the full reference.
### 7. Get structured output
Force the agent to return validated JSON matching a schema — ideal for pipelines and automation:
```yaml
spec:
output:
type: json_schema
schema:
type: object
properties:
status:
type: string
enum: [approved, rejected, needs_review]
amount:
type: number
vendor:
type: string
required: [status, amount, vendor]
```
```bash
initrunner run classifier.yaml -p "Acme Corp invoice for $250"
# → {"status": "approved", "amount": 250.0, "vendor": "Acme Corp"}
```
See [Structured Output](docs/core/structured-output.md) for inline schemas, external schema files, and pipeline integration.
## Community Roles
Browse, install, and run roles shared by the community — no copy-paste needed:
```bash
initrunner search "code review" # browse the community index
initrunner install code-reviewer # download, validate, confirm
initrunner run ~/.initrunner/roles/code-reviewer.yaml -i
```
Install directly from any GitHub repo:
```bash
initrunner install user/repo:roles/support-agent.yaml@v1.0
```
Every install shows a security summary (tools, model, author) and asks for confirmation before saving. See [docs/agents/registry.md](docs/agents/registry.md) for source formats, the community index, and update workflows.
## Install & Quickstart
**1. Install**
```bash
curl -fsSL https://initrunner.ai/install.sh | sh
```
Or with a package manager:
```bash
pip install initrunner
# or
uv tool install initrunner
# or
pipx install initrunner
```
Common extras:
| Extra | What it adds |
|-------|--------------|
| `initrunner[anthropic]` | Anthropic provider (Claude) |
| `initrunner[ingest]` | PDF, DOCX, XLSX ingestion |
| `initrunner[dashboard]` | FastAPI web dashboard (HTMX + DaisyUI) |
| `initrunner[search]` | Web search (DuckDuckGo) |
See [docs/getting-started/installation.md](docs/getting-started/installation.md) for the full extras table, dev setup, and environment configuration.
**2. Set your API key**
Before running an agent, set your provider API key:
```bash
export OPENAI_API_KEY=sk-... # OpenAI (default)
export ANTHROPIC_API_KEY=sk-ant-... # Claude (requires initrunner[anthropic])
```
`initrunner setup` walks through this interactively and stores the key in your shell profile.
**3. Create your first agent and run it**
The fastest way to get started — `setup` walks you through provider, API key, model, and agent creation in one step:
```bash
initrunner setup # guided wizard — picks provider, stores API key, creates a role
initrunner run my-agent.yaml -p "Hello!" # single-shot prompt
initrunner run my-agent.yaml -i # interactive chat
```
There are several ways to create a role — pick whichever fits:
| Method | Command | Best for |
|--------|---------|----------|
| Copy an example | `initrunner examples list` then `initrunner examples copy <name>` | Complete, working agents ready to run ([docs](docs/getting-started/cli.md)) |
| Guided wizard | `initrunner setup` | First-time setup ([docs](docs/getting-started/setup.md)) |
| Interactive scaffold | `initrunner init -i` | Prompted step-by-step creation ([docs](docs/getting-started/cli.md)) |
| AI generation | `initrunner create "code reviewer for Python"` | Describe what you want in natural language ([docs](docs/agents/role_generation.md)) |
| CLI flags | `initrunner init --name my-agent --model gpt-5-mini` | Quick one-liner ([docs](docs/getting-started/cli.md)) |
| Manual YAML | Copy the [example above](#see-it-in-action) | Full control |
See the hands-on [Tutorial](docs/getting-started/tutorial.md) for a complete walkthrough.
## Docker
Run InitRunner without installing Python — just Docker:
Before running, create a `./roles/` directory and add a role YAML file — the examples below reference it as `/roles/my-agent.yaml`. No role yet? Run `initrunner examples copy hello-world` if you have InitRunner installed, or copy [hello-world.yaml](examples/roles/hello-world.yaml) from this repo.
```bash
# One-shot prompt
docker run --rm -e OPENAI_API_KEY \
-v ./roles:/roles ghcr.io/vladkesler/initrunner:latest \
run /roles/my-agent.yaml -p "Hello"
# Interactive chat
docker run --rm -it -e OPENAI_API_KEY \
-v ./roles:/roles ghcr.io/vladkesler/initrunner:latest \
run /roles/my-agent.yaml -i
# Web dashboard — open http://localhost:8420 after starting
docker run -d -e OPENAI_API_KEY \
-v ./roles:/roles \
-v initrunner-data:/data \
-p 8420:8420 ghcr.io/vladkesler/initrunner:latest \
ui --role-dir /roles
# ./roles — your local role files (mounted read/write into /roles)
# initrunner-data — named volume: audit log, embeddings, memory (persists across restarts)
```
`-e OPENAI_API_KEY` forwards the variable from your current shell — make sure it's exported first (`export OPENAI_API_KEY=sk-...`). Prefer a file? Copy `examples/.env.example` to `.env`, fill in your key, and replace `-e OPENAI_API_KEY` with `--env-file .env`.
The image is also available on Docker Hub: `vladkesler/initrunner`
Or use the included `docker-compose.yml` to start the dashboard with persistent storage:
```bash
# Copy examples/.env.example → .env, add your key, then:
docker compose up
# Dashboard is now at http://localhost:8420
```
Build the image locally:
```bash
docker build -t initrunner .
docker run --rm initrunner --version
```
The default image includes dashboard, ingestion, all model providers, and safety extras. Override with `--build-arg EXTRAS="dashboard,anthropic"` to customize.
Using Ollama on the host? Set the model endpoint to `http://host.docker.internal:11434/v1` in your role YAML.
## Core Concepts
<p align="center">
<img src="assets/screenshot-dashboard.png" alt="InitRunner web dashboard — Create New Role" width="700"><br>
<em>Web dashboard — create and manage roles with a live YAML preview</em>
</p>
### Role files
Every agent is a YAML file with four top-level keys:
```yaml
apiVersion: initrunner/v1
kind: Agent
metadata:
name: my-agent
description: What this agent does
spec:
role: "System prompt goes here."
model: { provider: openai, name: gpt-5-mini }
tools: [...]
guardrails:
max_tool_calls: 20
timeout_seconds: 300
max_tokens_per_run: 50000
autonomous_token_budget: 200000
```
Validate with `initrunner validate role.yaml` or scaffold one with `initrunner init --name my-agent --model gpt-5-mini`.
`metadata.tags` are used by intent sensing (`--sense`) and community search. Specific, task-oriented tags improve role selection:
```yaml
metadata:
name: web-searcher
description: Research assistant that searches the web
tags: [search, web, research, summarize, browse]
```
### Tools
Tools give your agent capabilities beyond text generation. Configure them in `spec.tools`.
#### Built-in tools
| Type | What it does |
|------|-------------|
| `filesystem` | Read/write files within a root directory |
| `git` | Git log, diff, blame, show (read-only by default) |
| `shell` | Run shell commands with allowlist/blocklist |
| `python` | Run Python in an isolated subprocess |
| `sql` | Query SQLite databases (read-only by default) |
| `http` | HTTP requests to a base URL |
| `web_reader` | Fetch web pages and convert to markdown |
| `web_scraper` | Scrape, chunk, embed, and store web pages |
| `search` | Web and news search (DuckDuckGo, SerpAPI, Brave, Tavily) |
| `email` | Search, read, and send email via IMAP/SMTP |
| `slack` | Send messages to Slack channels |
| `api` | Declarative REST API endpoints from YAML |
| `datetime` | Get current time and parse dates |
| `mcp` | Connect to MCP servers (stdio, SSE, streamable-http) |
| `delegate` | Hand off to other agents |
| `custom` | Load tool functions from external Python modules |
See [docs/agents/tools.md](docs/agents/tools.md) for the full reference.
#### Custom tools
Add a built-in tool by creating a single file in `initrunner/agent/tools/` with a config class and a `@register_tool` decorated builder function — it's auto-discovered and immediately available in role YAML. Alternatively, load your own Python functions with `type: custom` and a `module` path pointing to any importable module. See [docs/agents/tool_creation.md](docs/agents/tool_creation.md) for the full guide.
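As a sketch of the `type: custom` route, a tool module can be as small as one plain function. The exact loading contract lives in the tool creation guide, so treat this file (its name and function are hypothetical) as illustrative:

```python
# my_tools.py: hypothetical module referenced from role YAML via a tool with
# `type: custom` and a `module` path (see docs/agents/tool_creation.md)

def word_count(text: str) -> int:
    """Count the words in a piece of text."""
    return len(text.split())
```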
#### Plugin registry
Third-party packages can register new tool types via the `initrunner.tools` entry point. Once installed (`pip install initrunner-<name>`), the tool type is available in YAML like any built-in. Run `initrunner plugins` to list discovered plugins. See the [plugin section of the tool creation guide](docs/agents/tool_creation.md) for details.
### Run modes
| Mode | Command | Use case |
|------|---------|----------|
| Single-shot | `initrunner run role.yaml -p "prompt"` | One question, one answer |
| Interactive | `initrunner run role.yaml -i` | Multi-turn chat (REPL) |
| Autonomous | `initrunner run role.yaml -p "prompt" -a` | Multi-step agentic loop with self-reflection |
| **Intent Sensing** | `initrunner run --sense -p "prompt"` | Pick the best role automatically from discovered roles |
| Daemon | `initrunner daemon role.yaml` | Trigger-driven (cron, file watch, webhook) |
| API server | `initrunner serve role.yaml` | OpenAI-compatible HTTP API |
#### Intent Sensing options
| Flag | Description |
|------|-------------|
| `--sense` | Sense the best role for the given prompt |
| `--role-dir PATH` | Directory to search for roles (used with `--sense`) |
| `--confirm-role` | Confirm the sensed role before running |
Without `--role-dir`, roles are discovered from the current directory (`.`), `./examples/roles/`, and `~/.config/initrunner/roles/` (the global roles directory).
See [Intent Sensing](docs/core/intent_sensing.md) for algorithm details, role tagging tips, and troubleshooting.
### Guardrails
Control costs and runaway agents with `spec.guardrails`:
| Setting | Default | Scope |
|---------|---------|-------|
| `max_tokens_per_run` | 50,000 | Output tokens per single LLM call |
| `max_tool_calls` | 20 | Tool invocations per run |
| `timeout_seconds` | 300 | Wall-clock timeout per run |
| `autonomous_token_budget` | — | Total tokens across all autonomous iterations |
| `session_token_budget` | — | Cumulative limit for an interactive session |
| `daemon_daily_token_budget` | — | Daily token cap for daemon mode |
When any limit is reached, the run stops immediately and raises an error. In autonomous mode, the partial result up to that point is returned.
See [Guardrails](docs/configuration/guardrails.md) and [Token Control](docs/configuration/token_control.md) for the full reference.
<p align="center">
<img src="assets/screenshot-audit.png" alt="InitRunner audit log — agent runs with tokens, duration, and trigger modes" width="700"><br>
<em>Audit log — track every agent run with tokens, duration, and trigger mode</em>
</p>
For RAG, memory, triggers, compose, and skills see [From Simple to Powerful](#from-simple-to-powerful) above. Full references: [Ingestion](docs/core/ingestion.md) · [Memory](docs/core/memory.md) · [Triggers](docs/core/triggers.md) · [Compose](docs/orchestration/agent_composer.md) · [Skills](docs/agents/skills_feature.md) · [Providers](docs/configuration/providers.md)
## CLI Quick Reference
| Command | Description |
|---------|-------------|
| `run <role.yaml> -p "..."` | Single-shot prompt |
| `run <role.yaml> -i` | Interactive REPL |
| `run <role.yaml> -p "..." -a` | Autonomous agentic loop |
| `run <role.yaml> -p "..." -a --max-iterations N` | Autonomous with iteration limit |
| `run --sense -p "..."` | Sense best role and run |
| `run --sense --role-dir PATH -p "..."` | Sense best role from a specific directory |
| `run --sense --confirm-role -p "..."` | Sense best role with confirmation prompt |
| `validate <role.yaml>` | Validate a role definition |
| `init --name <name> [--model <model>]` | Scaffold a new role from CLI flags |
| `init -i` | Interactive role-creation wizard |
| `create "<description>"` | AI-generate a role from a description |
| `setup` | Guided provider + API key + role setup |
| `ingest <role.yaml>` | Ingest documents into vector store |
| `daemon <role.yaml>` | Run in trigger-driven daemon mode |
| `run <role.yaml> -p "..." -A file.png` | Attach files or URLs to prompt |
| `run <role.yaml> -p "..." --export-report` | Export a markdown report after the run |
| `doctor` | Check provider config, API keys, connectivity |
| `doctor --quickstart` | End-to-end smoke test with a real API call |
| `serve <role.yaml>` | Serve as OpenAI-compatible API |
| `tui` | Launch terminal dashboard |
| `ui` | Launch web dashboard |
| `compose up <compose.yaml>` | Run multi-agent orchestration |
| `install <source>` | Install a community role from GitHub |
| `uninstall <name>` | Remove an installed role |
| `search <query>` | Search the community role index |
| `info <source>` | Inspect a role before installing |
| `list` | Show installed roles |
| `update [name] / --all` | Update installed roles |
See [docs/getting-started/cli.md](docs/getting-started/cli.md) for the full command list and all options.
## User Interfaces
Beyond the CLI, InitRunner includes a terminal UI and a web dashboard for visual agent management.
| | Terminal UI (`tui`) | Web Dashboard (`ui`) |
|---|---|---|
| **Launch** | `initrunner tui` | `initrunner ui` |
| **Install** | `pip install initrunner[tui]` | `pip install initrunner[dashboard]` |
| **Chat** | Streaming chat with token counts | SSE streaming chat with file attachments |
| **Audit** | Browse & filter audit records | Audit log with detail panel |
| **Memory** | View, export, delete memories | View, filter, export, clear memories |
| **Daemon** | Real-time trigger event log | WebSocket trigger monitor |
| **Style** | k9s-style keyboard-driven (Textual) | Server-rendered HTML (HTMX + DaisyUI) |
See [TUI docs](docs/interfaces/tui.md) · [Dashboard docs](docs/interfaces/dashboard.md) · [API Server docs](docs/interfaces/server.md)
## Documentation
| Area | Key docs |
|------|----------|
| Getting started | [Installation](docs/getting-started/installation.md) · [Setup](docs/getting-started/setup.md) · [RAG Quickstart](docs/getting-started/rag-quickstart.md) · [Tutorial](docs/getting-started/tutorial.md) · [CLI Reference](docs/getting-started/cli.md) |
| Agents & tools | [Tools](docs/agents/tools.md) · [Tool Creation](docs/agents/tool_creation.md) · [Skills](docs/agents/skills_feature.md) · [Structured Output](docs/core/structured-output.md) · [Providers](docs/configuration/providers.md) |
| Knowledge & memory | [Ingestion](docs/core/ingestion.md) · [Memory](docs/core/memory.md) · [Multimodal Input](docs/core/multimodal.md) |
| Orchestration | [Compose](docs/orchestration/agent_composer.md) · [Delegation](docs/orchestration/delegation.md) · [Autonomy](docs/orchestration/autonomy.md) · [Triggers](docs/core/triggers.md) · [Intent Sensing](docs/core/intent_sensing.md) |
| Interfaces | [Dashboard](docs/interfaces/dashboard.md) · [TUI](docs/interfaces/tui.md) · [API Server](docs/interfaces/server.md) |
| Operations | [Security](docs/security/security.md) · [Guardrails](docs/configuration/guardrails.md) · [Audit](docs/core/audit.md) · [Reports](docs/core/reports.md) · [Doctor](docs/operations/doctor.md) · [Observability](docs/core/observability.md) · [CI/CD](docs/operations/cicd.md) |
See [`docs/`](docs/) for the full index.
## Examples
Browse and copy any example locally:
```bash
initrunner examples list # see all available examples
initrunner examples copy code-reviewer # copy to current directory
```
The `examples/` directory includes 20+ ready-to-run agents, skills, and compose pipelines covering real-world scenarios:
**Role definitions** (`examples/roles/`) — single-agent configs for support bots, code reviewers, changelog generators, deploy notifiers, web monitors, data analysts, and more.
**Skills** (`examples/skills/`) — reusable capability bundles:
- `web-researcher/` — web research tools (fetch pages, HTTP requests)
- `code-tools.md` — code execution and file browsing tools
See `examples/roles/skill-demo.yaml` for a role composing multiple skills.
**Compose pipelines** (`examples/compose/`) — multi-agent orchestration:
- `email-pipeline/` — cron-driven email triage with fan-out to researcher and responder
- `content-pipeline/` — file-watch-driven content creation with `process_existing` startup scan
- `ci-pipeline/` — webhook-driven CI build analysis with notifications
## Community & Support
- [GitHub Issues](https://github.com/vladkesler/initrunner/issues) — Bug reports and feature requests
- [Changelog](CHANGELOG.md) — Release notes and version history
If you find InitRunner useful, consider giving it a star — it helps others discover the project.
## Contributing
Contributions welcome! See [CONTRIBUTING.md](CONTRIBUTING.md) for dev setup, PR guidelines, and quality checks.
### Share a role
Push your `role.yaml` to a public GitHub repo — anyone can install it with `initrunner install user/repo`. To list it in the community index so users can `initrunner install my-role` by name, open a PR to [vladkesler/community-roles](https://github.com/vladkesler/community-roles) adding an entry to `index.yaml`. See [docs/agents/registry.md](docs/agents/registry.md) for details.
For security vulnerabilities, please see [SECURITY.md](SECURITY.md).
## License
MIT — see [LICENSE](LICENSE) for details.
| text/markdown | null | vladimir kesler <contact@initrunner.ai> | null | null | null | agent, ai, ai-agents, autonomous-agents, llm, mcp, no-code, openai-compatible, pydantic-ai, rag, role, runner, yaml | [
"Development Status :: 4 - Beta",
"License :: OSI Approved :: MIT License",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: 3.13",
"Topic :: Scientific/Engineering :: Artificial Intelligence",
"Typing :: Typed"
] | [] | null | null | >=3.11 | [] | [] | [] | [
"beautifulsoup4>=4.13.2",
"croniter>=6.0.0",
"jinja2>=3.1",
"markdownify>=1.2.2",
"pydantic-ai-slim[fastmcp,openai]>=1.56.0",
"pydantic>=2.11",
"python-dotenv>=1.2.1",
"pyyaml>=6.0.3",
"rich>=14.3.2",
"sqlite-vec>=0.1.6",
"typer>=0.21.1",
"watchfiles>=1.1.1",
"pydantic-ai-slim[anthropic,bedr... | [] | [] | [] | [
"Repository, https://github.com/vladkesler/initrunner",
"Changelog, https://github.com/vladkesler/initrunner/blob/main/CHANGELOG.md",
"Issues, https://github.com/vladkesler/initrunner/issues"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-19T19:58:57.828731 | initrunner-1.1.6.tar.gz | 3,607,944 | 1a/a6/bd6afbf26dd983e0e9c17f19ae88e753d37408fd6afb295fae7e29f19f3f/initrunner-1.1.6.tar.gz | source | sdist | null | false | a6e101bf92d72d76daa6108801f4b5e5 | 342e4605081fca2839c381a38f4896ff47f384c3e8658b78ed12fe9e7a76154a | 1aa6bd6afbf26dd983e0e9c17f19ae88e753d37408fd6afb295fae7e29f19f3f | MIT | [
"LICENSE"
] | 221 |
2.4 | booklet-splitter | 0.0.4 | Make booklets out of pdf files | # Booklet splitting tool
[](https://codecov.io/gh/fanchuo/booklet_splitter)

For a given large PDF file, build booklets that can be printed, folded, and bound into a book.
### Installation
On your python environment, just issue this pip install command:
```bash
pip install booklet_splitter
```
### Usage
```bash
usage: booklets [-h] [--max_size MAX_SIZE] [--log LOG] [--targetdir TARGETDIR] [--no-layout] [--cover] input_pdf
For a given pdf, builds booklets to be printed, folded and eventually assembled as a book
positional arguments:
input_pdf PDF file to be sliced as booklets
optional arguments:
-h, --help show this help message and exit
--max_size MAX_SIZE  Max size for a booklet, must be a multiple of 4
--log LOG Log level at execution
--targetdir TARGETDIR
Directory where the booklets PDF are written
--no-layout Only splits your document in booklets
--cover Adds a page at the very beginning and at the very end, to paste a cover
```
### Useful commands to print pdf files
For a given pdf, print odd/even pages:
```bash
lpr -o page-set=odd <file>
lpr -o page-set=even <file>
```
For a given pdf, print recto/verso (single- or double-sided):
```bash
lpr -o sides=one-sided <file>
lpr -o sides=two-sided-long-edge <file>
lpr -o sides=two-sided-short-edge <file>
```
Print in black and white (replace `percent` with a saturation percentage; `0` gives grayscale, `100` is normal color):
```bash
lpr -o saturation=percent <file>
```
| text/markdown | François Sécherre | secherre.nospam@gmail.com | null | null | null | null | [
"Programming Language :: Python :: 3"
] | [] | https://github.com/fanchuo/booklet_splitter | null | null | [] | [] | [] | [
"pypdf==6.7.1",
"click==8.3.1",
"coverage==7.13.1; extra == \"tests\"",
"flake8==7.3.0; extra == \"tests\"",
"black==26.1.0; extra == \"tests\"",
"twine==6.2.0; extra == \"tests\"",
"pre_commit==4.5.1; extra == \"tests\"",
"mypy==1.19.1; extra == \"tests\"",
"pytest==9.0.2; extra == \"tests\"",
"p... | [] | [] | [] | [] | twine/6.2.0 CPython/3.10.19 | 2026-02-19T19:58:55.156129 | booklet_splitter-0.0.4-py3-none-any.whl | 16,420 | 23/5f/0e6a1c071eb9777b1c5c2db4cc37c3e6cae56e27601b35e2657a66e8f106/booklet_splitter-0.0.4-py3-none-any.whl | py3 | bdist_wheel | null | false | 51d552630bfa21ff1eb74b234702e903 | b0e75c76c6661c80c7b0adc564214fd399d4f446a0b262431400418af35c274f | 235f0e6a1c071eb9777b1c5c2db4cc37c3e6cae56e27601b35e2657a66e8f106 | null | [
"LICENSE"
] | 106 |
2.4 | steamloop | 1.2.1 | Local control for choochoo based thermostats | # steamloop
<p align="center">
<a href="https://github.com/hvaclibs/steamloop/actions/workflows/ci.yml?query=branch%3Amain">
<img src="https://img.shields.io/github/actions/workflow/status/hvaclibs/steamloop/ci.yml?branch=main&label=CI&logo=github&style=flat-square" alt="CI Status" >
</a>
<a href="https://steamloop.readthedocs.io">
<img src="https://img.shields.io/readthedocs/steamloop.svg?logo=read-the-docs&logoColor=fff&style=flat-square" alt="Documentation Status">
</a>
<a href="https://codecov.io/gh/hvaclibs/steamloop">
<img src="https://img.shields.io/codecov/c/github/hvaclibs/steamloop.svg?logo=codecov&logoColor=fff&style=flat-square" alt="Test coverage percentage">
</a>
</p>
<p align="center">
<a href="https://github.com/astral-sh/uv">
<img src="https://img.shields.io/endpoint?url=https://raw.githubusercontent.com/astral-sh/uv/main/assets/badge/v0.json" alt="uv">
</a>
<a href="https://github.com/astral-sh/ruff">
<img src="https://img.shields.io/endpoint?url=https://raw.githubusercontent.com/astral-sh/ruff/main/assets/badge/v2.json" alt="Ruff">
</a>
<a href="https://github.com/pre-commit/pre-commit">
<img src="https://img.shields.io/badge/pre--commit-enabled-brightgreen?logo=pre-commit&logoColor=white&style=flat-square" alt="pre-commit">
</a>
</p>
<p align="center">
<a href="https://pypi.org/project/steamloop/">
<img src="https://img.shields.io/pypi/v/steamloop.svg?logo=python&logoColor=fff&style=flat-square" alt="PyPI Version">
</a>
<img src="https://img.shields.io/pypi/pyversions/steamloop.svg?style=flat-square&logo=python&logoColor=fff" alt="Supported Python versions">
<img src="https://img.shields.io/pypi/l/steamloop.svg?style=flat-square" alt="License">
</p>
---
Async Python library for local control of thermostat devices over mTLS (port 7878).
## Installation
```bash
pip install steamloop
```
## CLI
### Pairing
Put the thermostat in pairing mode (Menu > Settings > Network > Advanced Setup > Remote Connection > Pair), then:
```bash
steamloop 192.168.1.100 --pair
```
This saves a pairing file in the current directory with the secret key.
### Monitoring
```bash
steamloop 192.168.1.100
```
If already paired, you can pass the secret key directly to skip the pairing file:
```bash
steamloop 192.168.1.100 --key YOUR_SECRET_KEY
```
Interactive commands: `status`, `heat <temp>`, `cool <temp>`, `mode <off|auto|cool|heat>`, `fan <auto|on|circulate>`, `eheat <on|off>`, `help`.
## Library Usage
```python
import asyncio
from steamloop import ThermostatConnection, ZoneMode, FanMode
async def main():
conn = ThermostatConnection(
"192.168.1.100",
secret_key="your-secret-key-from-pairing",
)
async with conn:
# State is populated automatically from thermostat events
for zone_id, zone in conn.state.zones.items():
print(f"{zone.name}: {zone.indoor_temperature}°F")
# Send commands (sync — no await needed)
conn.set_temperature_setpoint("1", heat_setpoint="72")
conn.set_zone_mode("1", ZoneMode.COOL)
conn.set_fan_mode(FanMode.AUTO)
asyncio.run(main())
```
### Pairing Programmatically
`pair()` returns the pairing data, including the secret key — store it however you like:
```python
from steamloop import ThermostatConnection
async def pair(ip: str) -> str:
conn = ThermostatConnection(ip, secret_key="")
try:
await conn.connect()
ssk = await conn.pair()
return ssk["secret_key"] # store in a database, config entry, etc.
finally:
await conn.disconnect()
```
Or use the built-in file helpers to save/load pairing data to disk:
```python
from steamloop import ThermostatConnection, save_pairing, load_pairing
# Save after pairing
await save_pairing(ip, {
"secret_key": secret_key,
"device_type": "automation",
"device_id": "module",
})
# Load later
pairing = await load_pairing(ip)
conn = ThermostatConnection(ip, secret_key=pairing["secret_key"])
```
### Event Callbacks
```python
def on_event(msg):
print("Received:", msg)
remove = conn.add_event_callback(on_event)
# later: remove() to unregister
```
## Home Assistant Integration
Key design points for using steamloop in a Home Assistant integration:
- **Commands are sync** — `set_zone_mode()`, `set_fan_mode()`, `set_temperature_setpoint()` use `transport.write()` internally, so they won't block the event loop. No `await` needed.
- **State is always fresh** — the `asyncio.Protocol` receives events via `data_received()` and updates `conn.state` automatically. Just read properties directly.
- **Auto-reconnect** — after calling `start_background_tasks()`, the connection automatically reconnects with exponential backoff (5s, 10s, 20s, ... up to 5 min).
- **Event callbacks** — use `add_event_callback()` to trigger `async_write_ha_state()` when the thermostat pushes updates.
- **Multi-zone** — create one `ClimateEntity` per `conn.state.zones` entry. Zones are populated automatically after login.
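Putting those points together, here is a minimal sketch of the pattern (not a real Home Assistant entity; the callback simply re-reads the always-fresh state):

```python
import asyncio
from steamloop import ThermostatConnection

async def main():
    conn = ThermostatConnection("192.168.1.100", secret_key="your-secret-key")
    await conn.connect()
    await conn.login()
    conn.start_background_tasks()  # heartbeat + auto-reconnect (sync, no await)

    def on_event(msg):
        # In Home Assistant this is where you would schedule async_write_ha_state();
        # here we just re-read the state the protocol keeps fresh
        for zone in conn.state.zones.values():
            print(f"{zone.name}: {zone.indoor_temperature}°F")

    remove = conn.add_event_callback(on_event)
    await asyncio.sleep(60)  # let the thermostat push events for a while
    remove()                 # unregister the callback
    await conn.disconnect()

asyncio.run(main())
```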
## API Reference
### `ThermostatConnection(ip, port=7878, *, secret_key, cert_set=None, device_type="automation", device_id="module")`
| Method | Async | Description |
| ------------------------------------------------------------------------------- | ----- | ----------------------------------------------------- |
| `connect()` | yes | Establish mTLS connection |
| `login()` | yes | Authenticate with secret key |
| `pair()` | yes | Pair and receive secret key |
| `start_background_tasks()` | no | Start heartbeat + auto-reconnect |
| `disconnect()` | yes | Close connection and stop tasks |
| `set_temperature_setpoint(zone_id, *, heat_setpoint, cool_setpoint, hold_type)` | no | Set zone temperature |
| `set_zone_mode(zone_id, mode)` | no | Set zone HVAC mode |
| `set_fan_mode(mode)` | no | Set fan mode |
| `set_emergency_heat(enabled)` | no | Toggle emergency heat |
| `add_event_callback(fn)` | no | Register event listener (returns unregister callable) |
Supports `async with` for automatic connect/login/disconnect:
```python
async with ThermostatConnection(ip, secret_key=key) as conn:
... # connected, logged in, background tasks running
# automatically disconnected
```
### Enums
- `ZoneMode` — `OFF`, `AUTO`, `COOL`, `HEAT`
- `FanMode` — `AUTO`, `ALWAYS_ON`, `CIRCULATE`
- `HoldType` — `UNDEFINED`, `MANUAL`, `SCHEDULE`, `HOLD`
### State
- `conn.state.zones` — `dict[str, Zone]` with temperature, setpoints, mode per zone
- `conn.state.fan_mode` — current `FanMode`
- `conn.state.supported_modes` — `list[ZoneMode]`
- `conn.state.emergency_heat` / `relative_humidity` / `cooling_active` / `heating_active`
## Contributors
Thanks goes to these wonderful people ([emoji key](https://allcontributors.org/docs/en/emoji-key)):
<!-- prettier-ignore-start -->
<!-- ALL-CONTRIBUTORS-LIST:START - Do not remove or modify this section -->
<!-- markdownlint-disable -->
<!-- markdownlint-enable -->
<!-- ALL-CONTRIBUTORS-LIST:END -->
<!-- prettier-ignore-end -->
This project follows the [all-contributors](https://github.com/all-contributors/all-contributors) specification. Contributions of any kind welcome!
## Credits
[](https://github.com/copier-org/copier)
This package was created with
[Copier](https://copier.readthedocs.io/) and the
[browniebroke/pypackage-template](https://github.com/browniebroke/pypackage-template)
project template.
| text/markdown | null | "J. Nick Koston" <nick@koston.org> | null | null | null | null | [
"Development Status :: 2 - Pre-Alpha",
"Intended Audience :: Developers",
"Natural Language :: English",
"Operating System :: OS Independent",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: 3.13",
"Programming Language :: Python :: 3.14",
"Topic :: Software Development ::... | [] | null | null | >=3.12 | [] | [] | [] | [
"orjson>=3.10"
] | [] | [] | [] | [
"Bug Tracker, https://github.com/hvaclibs/steamloop/issues",
"Changelog, https://github.com/hvaclibs/steamloop/blob/main/CHANGELOG.md",
"documentation, https://steamloop.readthedocs.io",
"repository, https://github.com/hvaclibs/steamloop"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-19T19:58:50.348814 | steamloop-1.2.1.tar.gz | 42,467 | 58/f9/c953ae357d717c1bd864965bfac67225310b7296320d576965e00fae8859/steamloop-1.2.1.tar.gz | source | sdist | null | false | f55ffc31887ee52022ef2ef0b73e187e | 8e6e8dba597b6b6dba2cbf10282b27fdf428ff6960873a82412cf27d813ce6e9 | 58f9c953ae357d717c1bd864965bfac67225310b7296320d576965e00fae8859 | Apache-2.0 | [
"LICENSE"
] | 204 |
2.4 | portable-ai-memory | 1.0.0 | Python SDK for the Portable AI Memory (PAM) interchange format | # Portable AI Memory (PAM) — Python SDK
A Python SDK for the **Portable AI Memory (PAM)** interchange format — a universal way to store, validate, and convert AI user memories across providers.
## What is PAM?
AI assistants learn about you over time — your preferences, facts about your life, project context. But these memories are locked inside each provider. If you switch from ChatGPT to Claude, or use both, your context doesn't follow you.
**PAM** solves this with an open interchange format. It defines three document types:
- **MemoryStore** — your memories (preferences, facts, context) with integrity checksums and semantic relations
- **Conversation** — full chat history with messages, tool calls, citations, and attachments
- **EmbeddingsFile** — vector embeddings linked to memories for semantic search
This SDK lets you:
1. **Convert** exports from ChatGPT, Claude, Gemini, Grok, and Copilot into PAM format
2. **Validate** PAM documents with deep integrity checks (cross-references, temporal ordering, content hashes)
3. **Build** PAM documents programmatically with type-safe Pydantic models
## Installation
```bash
pip install portable-ai-memory # core SDK (models, I/O, validation, converters)
pip install 'portable-ai-memory[cli]' # + CLI tool: typer, rich (pam command)
pip install 'portable-ai-memory[dev]' # + dev tools: pytest, ruff, mypy
pip install 'portable-ai-memory[all]' # cli + dev combined
```
## Quick Start
### Load and validate a PAM file
```python
from portable_ai_memory import load, validate_memory_store
store = load("memory-store.json")
result = validate_memory_store(store)
if result.is_valid:
print(f"Valid — {len(store.memories)} memories")
else:
for issue in result.errors:
print(issue)
```
### Convert a provider export
```python
import json
from pathlib import Path
from portable_ai_memory.converters import detect_provider
from portable_ai_memory import ProviderNotDetectedError
try:
converter = detect_provider("conversations.json")
data = json.loads(Path("conversations.json").read_text())
conversations = converter.convert_conversations(
data, owner_id="user-123",
)
except ProviderNotDetectedError as e:
print(f"Unknown format: {e}")
```
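Each converted item is a standard PAM model. Assuming the generic `save` helper (shown later with a `MemoryStore`) also accepts `Conversation` documents, persisting the results is one call per conversation:

```python
from portable_ai_memory import save

# Persist each converted conversation as its own PAM JSON file
# (assumes save() accepts Conversation documents, not just MemoryStore)
for i, conversation in enumerate(conversations):
    save(conversation, f"conversation-{i:03d}.json")
```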
### Build a memory store from scratch
```python
from portable_ai_memory import MemoryStore, MemoryObject, Owner, save
# MemoryObject.create() auto-fills content_hash, temporal, provenance
store = MemoryStore(
schema_version="1.0",
owner=Owner(id="user-123"),
memories=[
MemoryObject.create(
id="mem-001",
type="preference",
content="User prefers dark mode.",
platform="my-app",
)
],
)
save(store, "memory-store.json")
# Convenience lookups
mem = store.get_memory_by_id("mem-001")
prefs = store.get_memories_by_type("preference")
```
## CLI
```bash
# Validate a PAM file or bundle directory
pam validate memory-store.json
pam validate ./my-pam-bundle/
# Convert a provider export to a PAM bundle
pam convert ~/chatgpt-export/ -o ./pam-bundle/ --owner-id user-123
# Inspect a PAM file
pam inspect memory-store.json
```
## Supported Providers
| Provider | Format |
|---|---|
| OpenAI (ChatGPT) | `conversations.json` |
| Anthropic (Claude) | `conversations.json` + `memories.json` |
| Google (Gemini) | Takeout JSON or HTML |
| xAI (Grok) | `prod-grok-backend.json` |
| Microsoft (Copilot) | CSV exports |
To list registered converters programmatically:
```python
from portable_ai_memory.converters import list_converters
print(list_converters()) # ['chatgpt', 'claude', 'gemini', 'grok', 'copilot']
```
## Development
```bash
git clone --recurse-submodules git@github.com:portable-ai-memory/python-sdk.git
cd python-sdk
uv sync --all-extras
uv run pytest
```
> **Note:** The PAM JSON Schemas live in the main
> [portable-ai-memory](https://github.com/portable-ai-memory/portable-ai-memory)
> repo and are included here as a git submodule under `vendor/portable-ai-memory`.
> If you cloned without `--recurse-submodules`, run:
> `git submodule update --init --recursive`
## Links
- [PAM Specification](https://portable-ai-memory.org)
- [GitHub Repository](https://github.com/portable-ai-memory/python-sdk)
## License
Apache License 2.0
| text/markdown | null | Daniel Gines <dangines@gmail.com> | null | null | null | ai, interchange, llm, memory, pam, portability | [
"Development Status :: 5 - Production/Stable",
"Intended Audience :: Developers",
"License :: OSI Approved :: Apache Software License",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: 3.13",
"Topic :: Scientific/Engineering :: Artifi... | [] | null | null | >=3.11 | [] | [] | [] | [
"jsonschema<5,>=4.21",
"pydantic<3,>=2.7",
"rfc8785<1,>=0.1.2",
"mypy>=1.10; extra == \"all\"",
"pre-commit>=3.7; extra == \"all\"",
"pytest-cov>=5; extra == \"all\"",
"pytest>=8; extra == \"all\"",
"rich<14,>=13; extra == \"all\"",
"ruff>=0.5; extra == \"all\"",
"typer<1,>=0.12; extra == \"all\""... | [] | [] | [] | [
"Homepage, https://portable-ai-memory.org",
"Documentation, https://portable-ai-memory.org/tools/sdk",
"Repository, https://github.com/portable-ai-memory/python-sdk",
"Specification, https://portable-ai-memory.org/spec/v1.0",
"Issues, https://github.com/portable-ai-memory/python-sdk/issues"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-19T19:58:34.860408 | portable_ai_memory-1.0.0.tar.gz | 88,849 | ed/b9/b14e8812f999587b61b79a638ff5d1b093c4f12e00207768aa8430984eb2/portable_ai_memory-1.0.0.tar.gz | source | sdist | null | false | 9fa598d5eff5af8566e6a5b034df6963 | a98abfdf11ee4a09f581da98ec1e34b5fce22c1b149a1e0e573b8ec1f7376dfa | edb9b14e8812f999587b61b79a638ff5d1b093c4f12e00207768aa8430984eb2 | Apache-2.0 | [
"LICENSE"
] | 240 |
2.1 | magic_hour | 0.54.1 | Python SDK for Magic Hour API | # Magic Hour Python SDK
[](https://pypi.org/project/magic_hour/)
The Magic Hour Python Library provides convenient access to the Magic Hour API. This library offers both synchronous and asynchronous clients powered by [httpx](https://github.com/encode/httpx).
## Documentation
For full documentation of all APIs, please visit https://docs.magichour.ai
If you have any questions, please reach out to us via [discord](https://discord.gg/JX5rgsZaJp).
## Install
```sh
pip install magic_hour
```
## Cookbook
For end-to-end examples demonstrating all available Magic Hour APIs, check out our interactive Google Colab cookbook:
- **Interactive Notebook**: [Magic Hour API Cookbook](https://colab.research.google.com/drive/1NTHL_lr_s-qBJ-mSecSXPzRLi9_V5JiU?usp=sharing)
The cookbook includes:
- Setup instructions
- Examples for all available APIs (image generation, face swap, lip sync, video generation, and more)
- Display helpers for previewing outputs
- Production-ready patterns and best practices
## Synchronous Client Usage
```python
from magic_hour import Client
# generate your API Key at https://magichour.ai/developer
client = Client(token="my api key")
response = client.v1.face_swap_photo.generate(
assets={
"face_swap_mode": "all-faces",
"source_file_path": "/path/to/source/image.png",
"target_file_path": "/path/to/target/image.png",
},
name="Face Swap image",
wait_for_completion=True,
download_outputs=True,
download_directory=".",
)
print(f"Project ID: {response.id}")
print(f"Status: {response.status}")
print(f"Downloaded files: {response.downloaded_paths}")
```
### Asynchronous Client Usage
```python
from magic_hour import AsyncClient
# generate your API Key at https://magichour.ai/developer
client = AsyncClient(token="my api key")
response = await client.v1.face_swap_photo.generate(
assets={
"face_swap_mode": "all-faces",
"source_file_path": "/path/to/source/image.png",
"target_file_path": "/path/to/target/image.png",
},
name="Face Swap image",
wait_for_completion=True,
download_outputs=True,
download_directory=".",
)
print(f"Project ID: {response.id}")
print(f"Status: {response.status}")
print(f"Downloaded files: {response.downloaded_paths}")
```
## Client Functions
Most resources that generate media content support two methods:
- **`generate()`** - A high-level convenience method that handles the entire workflow
- **`create()`** - A low-level method that only initiates the generation process
### Generate Function
The `generate()` function provides a complete end-to-end solution:
- Uploads local file to Magic Hour storage
- Calls the API to start generation
- Automatically polls for completion
- Downloads generated files to your local machine
- Returns both API response data and local file paths
**Additional Parameters:**
- `wait_for_completion` (bool, default True): Whether to wait for the project to complete.
- `download_outputs` (bool, default True): Whether to download the generated files
- `download_directory` (str, optional): Directory to save downloaded files (defaults to current directory)
```python
# Generate function - handles everything automatically
response = client.v1.ai_image_generator.generate(
style={"prompt": "A beautiful sunset over mountains"},
name="Sunset Image",
wait_for_completion=True, # Wait for status to be complete/error/canceled
download_outputs=True, # Download files automatically
download_directory="./outputs/" # Where to save files
)
# You get both the API response AND downloaded file paths
print(f"Project ID: {response.id}")
print(f"Status: {response.status}")
print(f"Downloaded files: {response.downloaded_paths}")
```
### Create Function
The `create()` function provides granular control:
- Only calls the API to start the generation process
- Returns immediately with a project ID and amount of credits used
- Requires manual status checking and file downloading
```python
# Create function - only starts the process
create_response = client.v1.ai_image_generator.create(
style={"prompt": "A beautiful sunset over mountains"},
name="Sunset Image"
)
# You get just the project ID and initial response
project_id = create_response.id
print(f"Started project: {project_id}")
# You must handle the rest:
# 1. Poll for completion. We provide a helper function to handle polling for you
result = client.v1.image_projects.check_status(
    project_id,  # the id returned by create() (positional argument assumed)
    wait_for_completion=True,
    download_outputs=False,
)
# 2. Download files using the download URLs
download_urls = result.downloads
# download the files using your preferred way
```
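For step 2, any HTTP client works. Here is a minimal sketch with `httpx`; the `.url` attribute and the `.png` extension are assumptions, so check the response model and the generated media type for the exact shape:

```python
import httpx

# Save each generated file locally (field name `url` and extension are assumed)
for i, download in enumerate(download_urls):
    content = httpx.get(download.url, follow_redirects=True).content
    with open(f"output_{i}.png", "wb") as f:
        f.write(content)
```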
### Choosing Which Function to Use
**Use `generate()` when:**
- You want a simple, one-call solution
- You're building a straightforward application
- You don't need custom polling or download logic
**Use `create()` when:**
- You need custom status checking logic
- You're integrating with existing job processing systems
- You want to separate generation initiation from completion handling
- You need fine-grained control over the entire workflow
## Module Documentation and Snippets
### [v1.ai_clothes_changer](magic_hour/resources/v1/ai_clothes_changer/README.md)
- [create](magic_hour/resources/v1/ai_clothes_changer/README.md#create) - AI Clothes Changer
- [generate](magic_hour/resources/v1/ai_clothes_changer/README.md#generate) - AI Clothes Changer Generate Workflow
### [v1.ai_face_editor](magic_hour/resources/v1/ai_face_editor/README.md)
- [create](magic_hour/resources/v1/ai_face_editor/README.md#create) - AI Face Editor
- [generate](magic_hour/resources/v1/ai_face_editor/README.md#generate) - Ai Face Editor Generate Workflow
### [v1.ai_gif_generator](magic_hour/resources/v1/ai_gif_generator/README.md)
- [create](magic_hour/resources/v1/ai_gif_generator/README.md#create) - AI GIF Generator
- [generate](magic_hour/resources/v1/ai_gif_generator/README.md#generate) - Ai Gif Generator Generate Workflow
### [v1.ai_headshot_generator](magic_hour/resources/v1/ai_headshot_generator/README.md)
- [create](magic_hour/resources/v1/ai_headshot_generator/README.md#create) - AI Headshot Generator
- [generate](magic_hour/resources/v1/ai_headshot_generator/README.md#generate) - Ai Headshot Generator Generate Workflow
### [v1.ai_image_editor](magic_hour/resources/v1/ai_image_editor/README.md)
- [create](magic_hour/resources/v1/ai_image_editor/README.md#create) - AI Image Editor
- [generate](magic_hour/resources/v1/ai_image_editor/README.md#generate) - Ai Image Editor Generate Workflow
### [v1.ai_image_generator](magic_hour/resources/v1/ai_image_generator/README.md)
- [create](magic_hour/resources/v1/ai_image_generator/README.md#create) - AI Image Generator
- [generate](magic_hour/resources/v1/ai_image_generator/README.md#generate) - Ai Image Generator Generate Workflow
### [v1.ai_image_upscaler](magic_hour/resources/v1/ai_image_upscaler/README.md)
- [create](magic_hour/resources/v1/ai_image_upscaler/README.md#create) - AI Image Upscaler
- [generate](magic_hour/resources/v1/ai_image_upscaler/README.md#generate) - Ai Image Upscaler Generate Workflow
### [v1.ai_meme_generator](magic_hour/resources/v1/ai_meme_generator/README.md)
- [create](magic_hour/resources/v1/ai_meme_generator/README.md#create) - AI Meme Generator
- [generate](magic_hour/resources/v1/ai_meme_generator/README.md#generate) - Ai Meme Generator Generate Workflow
### [v1.ai_qr_code_generator](magic_hour/resources/v1/ai_qr_code_generator/README.md)
- [create](magic_hour/resources/v1/ai_qr_code_generator/README.md#create) - AI QR Code Generator
- [generate](magic_hour/resources/v1/ai_qr_code_generator/README.md#generate) - Ai Qr Code Generator Generate Workflow
### [v1.ai_talking_photo](magic_hour/resources/v1/ai_talking_photo/README.md)
- [create](magic_hour/resources/v1/ai_talking_photo/README.md#create) - AI Talking Photo
- [generate](magic_hour/resources/v1/ai_talking_photo/README.md#generate) - Ai Talking Photo Generate Workflow
### [v1.ai_voice_cloner](magic_hour/resources/v1/ai_voice_cloner/README.md)
- [create](magic_hour/resources/v1/ai_voice_cloner/README.md#create) - AI Voice Cloner
### [v1.ai_voice_generator](magic_hour/resources/v1/ai_voice_generator/README.md)
- [create](magic_hour/resources/v1/ai_voice_generator/README.md#create) - AI Voice Generator
- [generate](magic_hour/resources/v1/ai_voice_generator/README.md#generate) - Ai Voice Generator Generate Workflow
### [v1.animation](magic_hour/resources/v1/animation/README.md)
- [create](magic_hour/resources/v1/animation/README.md#create) - Animation
- [generate](magic_hour/resources/v1/animation/README.md#generate) - Animation Generate Workflow
### [v1.audio_projects](magic_hour/resources/v1/audio_projects/README.md)
- [check-result](magic_hour/resources/v1/audio_projects/README.md#check-result) - Check results
- [delete](magic_hour/resources/v1/audio_projects/README.md#delete) - Delete audio
- [get](magic_hour/resources/v1/audio_projects/README.md#get) - Get audio details
### [v1.auto_subtitle_generator](magic_hour/resources/v1/auto_subtitle_generator/README.md)
- [create](magic_hour/resources/v1/auto_subtitle_generator/README.md#create) - Auto Subtitle Generator
- [generate](magic_hour/resources/v1/auto_subtitle_generator/README.md#generate) - Auto Subtitle Generator Generate Workflow
### [v1.face_detection](magic_hour/resources/v1/face_detection/README.md)
- [create](magic_hour/resources/v1/face_detection/README.md#create) - Face Detection
- [generate](magic_hour/resources/v1/face_detection/README.md#generate) - Face Detection Generate Workflow
- [get](magic_hour/resources/v1/face_detection/README.md#get) - Get face detection details
### [v1.face_swap](magic_hour/resources/v1/face_swap/README.md)
- [create](magic_hour/resources/v1/face_swap/README.md#create) - Face Swap Video
- [generate](magic_hour/resources/v1/face_swap/README.md#generate) - Face Swap Generate Workflow
### [v1.face_swap_photo](magic_hour/resources/v1/face_swap_photo/README.md)
- [create](magic_hour/resources/v1/face_swap_photo/README.md#create) - Face Swap Photo
- [generate](magic_hour/resources/v1/face_swap_photo/README.md#generate) - Face Swap Photo Generate Workflow
### [v1.files](magic_hour/resources/v1/files/README.md)
- [upload-file](magic_hour/resources/v1/files/README.md#upload-file) - Upload File
### [v1.files.upload_urls](magic_hour/resources/v1/files/upload_urls/README.md)
- [create](magic_hour/resources/v1/files/upload_urls/README.md#create) - Generate asset upload urls
### [v1.image_background_remover](magic_hour/resources/v1/image_background_remover/README.md)
- [create](magic_hour/resources/v1/image_background_remover/README.md#create) - Image Background Remover
- [generate](magic_hour/resources/v1/image_background_remover/README.md#generate) - Image Background Remover Generate Workflow
### [v1.image_projects](magic_hour/resources/v1/image_projects/README.md)
- [check-result](magic_hour/resources/v1/image_projects/README.md#check-result) - Check results
- [delete](magic_hour/resources/v1/image_projects/README.md#delete) - Delete image
- [get](magic_hour/resources/v1/image_projects/README.md#get) - Get image details
### [v1.image_to_video](magic_hour/resources/v1/image_to_video/README.md)
- [create](magic_hour/resources/v1/image_to_video/README.md#create) - Image-to-Video
- [generate](magic_hour/resources/v1/image_to_video/README.md#generate) - Image To Video Generate Workflow
### [v1.lip_sync](magic_hour/resources/v1/lip_sync/README.md)
- [create](magic_hour/resources/v1/lip_sync/README.md#create) - Lip Sync
- [generate](magic_hour/resources/v1/lip_sync/README.md#generate) - Lip Sync Generate Workflow
### [v1.photo_colorizer](magic_hour/resources/v1/photo_colorizer/README.md)
- [create](magic_hour/resources/v1/photo_colorizer/README.md#create) - Photo Colorizer
- [generate](magic_hour/resources/v1/photo_colorizer/README.md#generate) - Photo Colorizer Generate Workflow
### [v1.text_to_video](magic_hour/resources/v1/text_to_video/README.md)
- [create](magic_hour/resources/v1/text_to_video/README.md#create) - Text-to-Video
- [generate](magic_hour/resources/v1/text_to_video/README.md#generate) - Text To Video Generate Workflow
### [v1.video_projects](magic_hour/resources/v1/video_projects/README.md)
- [check-result](magic_hour/resources/v1/video_projects/README.md#check-result) - Check results
- [delete](magic_hour/resources/v1/video_projects/README.md#delete) - Delete video
- [get](magic_hour/resources/v1/video_projects/README.md#get) - Get video details
### [v1.video_to_video](magic_hour/resources/v1/video_to_video/README.md)
- [create](magic_hour/resources/v1/video_to_video/README.md#create) - Video-to-Video
- [generate](magic_hour/resources/v1/video_to_video/README.md#generate) - Video To Video Generate Workflow
<!-- MODULE DOCS END -->
| text/markdown | null | null | null | null | null | null | [
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.8",
"Programming Language :: Python :: 3.9",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: 3.13"
] | [] | null | null | <4.0,>=3.8 | [] | [] | [] | [
"make-api-request<0.2.0,>=0.1.3"
] | [] | [] | [] | [] | poetry/1.8.5 CPython/3.8.18 Linux/6.14.0-1017-azure | 2026-02-19T19:58:07.452617 | magic_hour-0.54.1.tar.gz | 112,535 | c1/0b/3ff09835040048d7550c225eb669aceacb8af9d4d2d48ef4d1316a1b4afd/magic_hour-0.54.1.tar.gz | source | sdist | null | false | b3985444eca91b2541b626f5c0cff6c3 | 0ef7062349fca291f8e4d8bb923bc8f2279b92d676c4db8df919fa90c7c81ad0 | c10b3ff09835040048d7550c225eb669aceacb8af9d4d2d48ef4d1316a1b4afd | null | [] | 0 |
2.4 | localtileserver | 0.11.0 | Locally serve geospatial raster tiles in the Slippy Map standard. | ### 🚀 Support This Project
If localtileserver saves you time, powers your work, or you need direct help, please consider supporting the project and my efforts:
[](https://github.com/sponsors/banesullivan)

# 🌐 Local Tile Server for Geospatial Rasters
[](https://codecov.io/gh/banesullivan/localtileserver)
[](https://pypi.org/project/localtileserver/)
[](https://anaconda.org/conda-forge/localtileserver)
*Need to visualize a rather large (gigabytes+) raster?* **This is for you.**
A Python package for serving tiles from large raster files in
the [Slippy Maps standard](https://wiki.openstreetmap.org/wiki/Slippy_map_tilenames)
(i.e., `/zoom/x/y.png`) for visualization in Jupyter with `ipyleaflet` or `folium`.
Launch a [demo](https://github.com/banesullivan/localtileserver-demo) on MyBinder [](https://mybinder.org/v2/gh/banesullivan/localtileserver-demo/HEAD)
Documentation: https://localtileserver.banesullivan.com/
Built on [rio-tiler](https://github.com/cogeotiff/rio-tiler)
## 🌟 Highlights
- Launch a tile server for large geospatial images
- View local or remote* raster files with `ipyleaflet` or `folium` in Jupyter
- View rasters with CesiumJS with the built-in web application
\**remote raster files should be pre-tiled Cloud Optimized GeoTIFFs*
## 🚀 Usage
Usage details and examples can be found in the documentation: https://localtileserver.banesullivan.com/
The following is a minimal example to visualize a local raster file with
`ipyleaflet`:
```py
from localtileserver import get_leaflet_tile_layer, TileClient
from ipyleaflet import Map
# First, create a tile server from local raster file
client = TileClient('path/to/geo.tif')
# Create ipyleaflet tile layer from that server
t = get_leaflet_tile_layer(client)
m = Map(center=client.center(), zoom=client.default_zoom)
m.add(t)
m
```

## ℹ️ Overview
The `TileClient` class can be used to launch a tile server in a background
thread which will serve raster imagery to a viewer (usually `ipyleaflet` or
`folium` in Jupyter notebooks).
This tile server can efficiently deliver varying resolutions of your
raster imagery to your viewer; it helps to have pre-tiled,
[Cloud Optimized GeoTIFFs (COGs)](https://www.cogeo.org/).
There is an included, standalone web viewer leveraging
[CesiumJS](https://cesium.com/platform/cesiumjs/).
## ⬇️ Installation
Get started with `localtileserver` to view rasters in Jupyter or deploy as your
own Flask application.
### 🐍 Installing with `conda`
Conda makes managing `localtileserver`'s dependencies across platforms quite
easy and this is the recommended method to install:
```bash
conda install -c conda-forge localtileserver
```
### 🎡 Installing with `pip`
If you prefer pip, then you can install from PyPI: https://pypi.org/project/localtileserver/
```
pip install localtileserver
```
## 💭 Feedback
Please share your thoughts and questions on the [Discussions](https://github.com/banesullivan/localtileserver/discussions) board.
If you would like to report any bugs or make feature requests, please open an issue.
If filing a bug report, please share a scooby `Report`:
```py
import localtileserver
print(localtileserver.Report())
```
| text/markdown | null | Bane Sullivan <hello@banesullivan.com> | null | null | null | null | [
"Development Status :: 4 - Beta",
"Intended Audience :: Science/Research",
"Topic :: Scientific/Engineering :: Information Analysis",
"Operating System :: OS Independent",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Pro... | [] | null | null | >=3.10 | [] | [] | [] | [
"click",
"flask<4,>=2.0.0",
"Flask-Caching",
"flask-cors",
"flask-restx>=1.3.0",
"rio-tiler",
"rio-cogeo",
"requests",
"server-thread",
"scooby",
"werkzeug",
"matplotlib; extra == \"colormaps\"",
"cmocean; extra == \"colormaps\"",
"colorcet; extra == \"colormaps\"",
"jupyter-server-proxy... | [] | [] | [] | [
"Documentation, https://localtileserver.banesullivan.com",
"Bug Tracker, https://github.com/banesullivan/localtileserver/issues",
"Source Code, https://github.com/banesullivan/localtileserver"
] | twine/6.2.0 CPython/3.11.14 | 2026-02-19T19:57:38.840819 | localtileserver-0.11.0.tar.gz | 30,882 | 07/a5/a00d46fb78e8a16562dbbe7ece8811102511c09660bf9b3e24f388adcf0a/localtileserver-0.11.0.tar.gz | source | sdist | null | false | 2b1361ff76ccbb8f2dc5daaeb463d4c0 | 3af91f295c93523e0c78d718c2167b65add436782fada38b7207fca524e806b2 | 07a5a00d46fb78e8a16562dbbe7ece8811102511c09660bf9b3e24f388adcf0a | MIT | [
"LICENSE"
] | 1,800 |
2.4 | pulumiverse-scaleway | 1.44.0a1771530443 | A Pulumi package for creating and managing Scaleway cloud resources. |
# Scaleway Resource Provider
The Scaleway resource provider for Pulumi lets you create resources in [Scaleway](https://www.scaleway.com). To use
this package, please [install the Pulumi CLI first](https://pulumi.com/).
## Support
This is a community maintained provider. Please file issues and feature requests here:
[pulumiverse/pulumi-scaleway](https://github.com/pulumiverse/pulumi-scaleway/issues)
You can also reach out on one of these channels:
* `#pulumiverse` channel on the [Pulumi Community Slack](https://slack.pulumi.com)
* `#pulumi` channel on the [Scaleway Community Slack](https://slack.scaleway.com)
## Installing
This package is available in many languages in the standard packaging formats.
### Node.js (JavaScript/TypeScript)
To use from JavaScript or TypeScript in Node.js, install using either `npm`:
```sh
npm install @pulumiverse/scaleway
```
or `yarn`:
```sh
yarn add @pulumiverse/scaleway
```
### Python
To use from Python, install using `pip`:
```sh
pip install pulumiverse-scaleway
```
### Go
To use from Go, use `go get` to grab the latest version of the library
```sh
go get github.com/pulumiverse/pulumi-scaleway/sdk/go/...
```
### .NET
To use from .NET, use `dotnet add package` to install into your project. You must specify the version if it is a pre-release.
```sh
dotnet add package Pulumiverse.Scaleway
```
## Reference
See the Pulumi registry for [API documentation](https://www.pulumi.com/registry/packages/scaleway/api-docs/).
| text/markdown | null | null | null | null | Apache-2.0 | pulumi, scaleway, pulumiverse | [] | [] | null | null | >=3.9 | [] | [] | [] | [
"parver>=0.2.1",
"pulumi<4.0.0,>=3.0.0",
"semver>=2.8.1",
"typing-extensions<5,>=4.11; python_version < \"3.11\""
] | [] | [] | [] | [
"Homepage, https://www.scaleway.com",
"Repository, https://github.com/pulumiverse/pulumi-scaleway"
] | twine/6.2.0 CPython/3.11.8 | 2026-02-19T19:57:21.655096 | pulumiverse_scaleway-1.44.0a1771530443.tar.gz | 1,037,120 | f7/f9/f8f37b6a43dfc2be86195c55253b4686ff07f2b70e78369a19c5c4701c49/pulumiverse_scaleway-1.44.0a1771530443.tar.gz | source | sdist | null | false | 076815f12a19ab71f63bbd67476d9c4a | 5c1209113c9b413dd96a1f4ead8dab5065cb7bb8431567f3c6e50289dc2059b5 | f7f9f8f37b6a43dfc2be86195c55253b4686ff07f2b70e78369a19c5c4701c49 | null | [] | 191 |
2.4 | genlayer-py | 0.9.5 | GenLayer Python SDK | # GenLayerPY
[](https://opensource.org/license/mit/)
[](https://discord.gg/qjCU4AWnKE)
[](https://x.com/GenLayer)
## About
GenLayerPY SDK is a python library designed for developers building decentralized applications (Dapps) on the GenLayer protocol. This SDK provides a comprehensive set of tools to interact with the GenLayer network, including client creation, transaction handling, event subscriptions, and more, all while leveraging the power of web3.py as the underlying blockchain client.
## Prerequisites
Before installing GenLayerPY SDK, ensure you have the following prerequisites installed:
- Python (>=3.12)
## 🛠️ Installation and Usage
To install the GenLayerPY SDK, use the following command:
```bash
$ pip install genlayer-py
```
Here’s how to initialize the client and connect to the GenLayer Simulator:
### Reading a Transaction
```python
from genlayer_py import create_client
from genlayer_py.chains import localnet
client = create_client(
chain=localnet,
)
transaction_hash = "0x..."
transaction = client.get_transaction(hash=transaction_hash)
```
### Waiting for Transaction Receipt
```python
from genlayer_py import create_client
from genlayer_py.chains import localnet
from genlayer_py.types import TransactionStatus
client = create_client(chain=localnet)
# Get simplified receipt (default - removes binary data, keeps execution results)
receipt = client.wait_for_transaction_receipt(
transaction_hash="0x...",
status=TransactionStatus.FINALIZED,
full_transaction=False # Default - simplified for readability
)
# Get complete receipt with all fields
full_receipt = client.wait_for_transaction_receipt(
transaction_hash="0x...",
status=TransactionStatus.FINALIZED,
full_transaction=True # Complete receipt with all internal data
)
```
### Reading a contract
```python
from genlayer_py import create_client
from genlayer_py.chains import localnet
client = create_client(
chain=localnet,
)
result = client.read_contract(
address=contract_address,
function_name='get_complete_storage',
args=[],
state_status='accepted'
)
```
### Writing a transaction
```python
from genlayer_py.chains import localnet
from genlayer_py import create_client, create_account
from genlayer_py.types import TransactionStatus

client = create_client(
    chain=localnet,
)
account = create_account()

transaction_hash = client.write_contract(
    account=account,
    address=contract_address,
    function_name='update_storage',  # illustrative contract method name
    args=['new_storage'],
    value=0,  # value is optional, if you want to send some native token to the contract
)
receipt = client.wait_for_transaction_receipt(
    transaction_hash=transaction_hash,
    status=TransactionStatus.FINALIZED,  # or TransactionStatus.ACCEPTED
    full_transaction=False,  # False by default - returns simplified receipt for better readability
)
```
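Once the receipt is final, you can re-read the contract to confirm the update. This reuses the documented `read_contract` call with the illustrative method name from the reading example above:

```python
updated_storage = client.read_contract(
    address=contract_address,
    function_name='get_complete_storage',
    args=[],
    state_status='accepted',
)
print(updated_storage)
```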
## 🚀 Key Features
* **Client Creation**: Easily create and configure a client to connect to GenLayer’s network.
* **Transaction Handling**: Send and manage transactions on the GenLayer network.
* **Gas Estimation**: Estimate gas fees for executing transactions on GenLayer.
_* under development_
## 📖 Documentation
For detailed information on how to use GenLayerPY SDK, please refer to our [documentation](https://docs.genlayer.com/api-references/genlayer-py).
## Contributing
We welcome contributions to GenLayerPY SDK! Whether it's new features, improved infrastructure, or better documentation, your input is valuable. Please read our [CONTRIBUTING](https://github.com/genlayerlabs/genlayer-py/blob/main/CONTRIBUTING.md) guide for guidelines on how to submit your contributions.
## License
This project is licensed under the MIT License - see the [LICENSE](LICENSE) file for details.
| text/markdown | GenLayer | null | null | null | null | null | [
"Development Status :: 4 - Beta",
"Intended Audience :: Developers",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: 3.13"
] | [] | null | null | >=3.12 | [] | [] | [] | [
"web3>=7.10.0"
] | [] | [] | [] | [] | twine/6.1.0 CPython/3.12.12 | 2026-02-19T19:55:38.343940 | genlayer_py-0.9.5.tar.gz | 28,477 | 56/e4/e3b17075d76363cc189861f594e809f50f1c7927596339825dc6b22408fb/genlayer_py-0.9.5.tar.gz | source | sdist | null | false | 8bb8dbc982460ddc587d0b882e0c2d3a | 5086c163465af6699e5cba6bb404cb15d500aed05018fb206221f22718a9291b | 56e4e3b17075d76363cc189861f594e809f50f1c7927596339825dc6b22408fb | MIT | [
"LICENSE"
] | 231 |
2.4 | relai | 0.3.25 | An SDK for building reliable AI agents | <p align="center">
<img align="center" src="docs/assets/relai-logo.png" width="460px" />
</p>
<p align="left">
<h1 align="center">Simulate → Evaluate → Optimize AI Agents</h1>
<p align="center">
<a href="https://pypi.org/project/relai/"><img alt="PyPI" src="https://img.shields.io/pypi/v/relai.svg"></a>
<img alt="Python" src="https://img.shields.io/pypi/pyversions/relai.svg">
<a href="LICENSE.md"><img alt="License" src="https://img.shields.io/badge/license-Apache--2.0-blue.svg"></a>
<a href="http://docs-sdk.relai.ai"><img alt="Docs" src="https://img.shields.io/badge/docs-online-brightgreen.svg"></a>
<a href="https://github.com/relai-ai/relai-sdk/actions/workflows/upload-to-package-index.yml"><img alt="CI" src="https://img.shields.io/github/actions/workflow/status/relai-ai/relai-sdk/upload-to-package-index.yml?branch=main"></a>
</p>
**RELAI** is a platform for building **reliable AI agents**. It streamlines the hardest parts of agent development—**simulation**, **evaluation**, and **optimization**—so you can iterate quickly with confidence.
**What you get**
- **Agent Simulation** — Create full/partial environments, define LLM personas, mock MCP servers & tools, and generate synthetic data. Optionally condition simulation on real samples to better match production.
- **Agent Evaluation** — Mix code-based and LLM-based custom evaluators or use RELAI platform evaluators. Turn human reviews into benchmarks you can re-run.
- **Agent Optimization (Maestro)** — Holistic optimizer that uses evaluator signals & feedback to improve **prompts/configs** and suggest **graph-level** changes. Maestro selects best model/tool/graph based on observed performance.
**Works with**: **OpenAI Agents SDK**, **Google ADK**, **LangGraph**, and other agent frameworks.
## Quickstart
Create a free account and get a RELAI API key: [platform.relai.ai/settings/access/api-keys](https://platform.relai.ai/settings/access/api-keys)
### Installation and Setup
```bash
pip install relai
# or
uv add relai
export RELAI_API_KEY="<RELAI_API_KEY>"
```
### Example: A simple Stock Assistant Agent (Simulate → Evaluate → Optimize)
Notebook version of the example below: [stock-assistant (simulate->evaluate->optimize).ipynb](/notebooks/basic/stock-assistant%20(simulate-%3Eevaluate-%3Eoptimize).ipynb)
Prerequisites: an OpenAI API key and `openai-agents` installed to run the base agent.
To use the Maestro graph optimizer, save the following in a file called `stock-assistant.py` (or change the `code_paths` argument to `maestro.optimize_structure`).
```python
# ============================================================================
# STEP 0 — Prerequisites
# ============================================================================
# export OPENAI_API_KEY="sk-..."
# `uv add openai-agents`
# export RELAI_API_KEY="relai-..."
# Save as `stock-assistant.py`
import asyncio
from agents import Agent, Runner
from relai import (
AgentOutputs,
AsyncRELAI,
AsyncSimulator,
SimulationTape,
random_env_generator,
)
from relai.critico import Critico
from relai.critico.evaluate import RELAIFormatEvaluator
from relai.maestro import Maestro, params, register_param
from relai.mocker import Persona
from relai.simulator import simulated
# ============================================================================
# STEP 1.1 — Decorate inputs/tools that will be simulated
# ============================================================================
@simulated
async def get_user_query() -> str:
"""Get user's query about stock prices."""
# In a real agent, this function might get input from a chat interface.
    return input("Enter your stock query: ")
# ============================================================================
# STEP 1.2 — Register parameters for optimization
# ============================================================================
register_param(
"prompt",
type="prompt",
init_value="You are a helpful assistant for stock price questions.",
desc="system prompt for the agent",
)
# ============================================================================
# STEP 2 — Your agent core
# ============================================================================
async def agent_fn(tape: SimulationTape) -> AgentOutputs:
    # It is good practice to catch exceptions in the agent function,
    # especially since the agent may raise errors under different configs.
try:
question = await get_user_query()
agent = Agent(
name="Stock assistant",
instructions=params.prompt, # access registered parameter
model="gpt-5-mini",
)
result = await Runner.run(agent, question)
tape.extras["format_rubrics"] = {"Prices must include cents (eg: $XXX.XX)": 1.0}
tape.agent_inputs["question"] = question # trace inputs for later auditing
return {"summary": result.final_output}
except Exception as e:
return {"summary": str(e)}
async def main() -> None:
# Set up your simulation environment
# Bind Personas/MockTools to fully-qualified function names
env_generator = random_env_generator(
config_set={
"__main__.get_user_query": [Persona(user_persona="A polite and curious user.")],
}
)
async with AsyncRELAI() as client:
# ============================================================================
# STEP 3 — Simulate
# ============================================================================
simulator = AsyncSimulator(agent_fn=agent_fn, env_generator=env_generator, client=client)
agent_logs = await simulator.run(num_runs=1)
# ============================================================================
# STEP 4 — Evaluate with Critico
# ============================================================================
critico = Critico(client=client)
format_evaluator = RELAIFormatEvaluator(client=client)
critico.add_evaluators({format_evaluator: 1.0})
critico_logs = await critico.evaluate(agent_logs)
# Submit evaluation results to the RELAI platform (https://platform.relai.ai/results/runs)
await critico.report(critico_logs)
# Submit an aggregate report to RELAI platform (https://platform.relai.ai/results/critico)
await critico.report_aggregate(critico_logs, title="Stock assistant evaluation")
maestro = Maestro(client=client, agent_fn=agent_fn, log_to_platform=True, name="Stock assistant")
maestro.add_setup(simulator=simulator, critico=critico)
# ============================================================================
        # STEP 5.1 — Optimize configs with Maestro (the parameters registered earlier in STEP 1.2)
# ============================================================================
# params.load("saved_config.json") # load previous params if available
await maestro.optimize_config(
total_rollouts=20, # Total number of rollouts to use for optimization.
batch_size=2, # Base batch size to use for individual optimization steps. Defaults to 4.
explore_radius=1, # A positive integer controlling the aggressiveness of exploration during optimization.
explore_factor=0.5, # A float between 0 to 1 controlling the exploration-exploitation trade-off.
verbose=True, # If True, additional information will be printed during the optimization step.
)
params.save("saved_config.json") # save optimized params for future usage
# ============================================================================
# STEP 5.2 — Optimize agent structure with Maestro (changes that cannot be achieved by setting parameters alone)
# ============================================================================
await maestro.optimize_structure(
total_rollouts=10, # Total number of rollouts to use for optimization.
code_paths=["stock-assistant.py"], # A list of paths corresponding to code implementations of the agent.
verbose=True, # If True, additional information will be printed during the optimization step.
)
if __name__ == "__main__":
asyncio.run(main())
```
## Simulation
Create controlled environments where agents interact and generate traces. Compose LLM personas, mock MCP tools/servers, and synthetic data; optionally condition on real events to align simulation ⇄ production.
➡️ Learn more: [Simulator](https://docs-sdk.relai.ai/api/simulator.html)
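For instance, here is a minimal self-contained sketch that varies personas across runs. API names are taken from the quickstart above; whether `random_env_generator` draws one persona per run is an assumption.
```python
# Sketch: bind several personas to the same simulated input so each run
# samples a different environment. Assumes the quickstart's APIs.
import asyncio

from relai import AgentOutputs, AsyncRELAI, AsyncSimulator, SimulationTape, random_env_generator
from relai.mocker import Persona
from relai.simulator import simulated

@simulated
async def get_user_query() -> str:
    """Simulated user input (a persona answers during simulation)."""
    return input("Enter your stock query: ")

async def echo_agent(tape: SimulationTape) -> AgentOutputs:
    # Trivial stand-in agent: echoes the simulated query back.
    question = await get_user_query()
    tape.agent_inputs["question"] = question
    return {"summary": f"You asked: {question}"}

async def main() -> None:
    env_generator = random_env_generator(
        config_set={
            "__main__.get_user_query": [
                Persona(user_persona="A terse day trader."),
                Persona(user_persona="A cautious first-time investor."),
            ],
        }
    )
    async with AsyncRELAI() as client:
        simulator = AsyncSimulator(agent_fn=echo_agent, env_generator=env_generator, client=client)
        agent_logs = await simulator.run(num_runs=4)  # traces for downstream evaluation

asyncio.run(main())
```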
## Evaluation (Critico)
Use code-based or LLM-based evaluators (or RELAI platform evaluators) and convert human reviews into benchmarks you can re-run in simulation and CI pipelines.
➡️ Learn more: [Evaluator](https://docs-sdk.relai.ai/api/evaluator.html)
## Optimization (Maestro)
Maestro is a holistic agent optimizer. It consumes evaluator/user feedback to improve prompts, configs, and even graph structure when prompt tuning isn’t enough. It can also select the best model, best tool, and best graph based on observed performance.
➡️ Learn more: [Maestro](https://docs-sdk.relai.ai/api/maestro.html)
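A sketch of reloading an optimized configuration at serve time, using only the quickstart's `params` API; it assumes `saved_config.json` exists from a prior `optimize_config` run:
```python
# Sketch: reuse an optimized configuration outside of an optimization run.
from agents import Agent
from relai.maestro import params, register_param

register_param(
    "prompt",
    type="prompt",
    init_value="You are a helpful assistant for stock price questions.",
    desc="system prompt for the agent",
)
params.load("saved_config.json")  # assumed to exist from a prior optimize_config run

agent = Agent(
    name="Stock assistant",
    instructions=params.prompt,  # the optimized value, not the init_value
    model="gpt-5-mini",
)
```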
## Links
- 📘 **Documentation:** [docs-sdk.relai.ai](https://docs-sdk.relai.ai)
- 🧪 **Examples:** [relai-sdk/examples](examples)
- 📓 **Notebooks:** [relai-sdk/notebooks](notebooks)
- 📖 **Tutorials:** [docs-sdk.relai.ai/tutorials/index.html](https://docs-sdk.relai.ai/tutorials/index.html)
- 🌐 **Website:** [relai.ai](https://relai.ai)
- 📰 **Maestro Technical Report:** [arXiv](https://arxiv.org/abs/2509.04642)
- 🌐 **Join the Community:** [Discord](https://discord.gg/sjaHJ34YYE)
## License
Apache 2.0
## Citation
If you use the SDK in your research, please consider citing our work:
```bibtex
@misc{relai_sdk,
  author       = {{RELAI, Inc.}},
title = {relai-sdk},
year = {2025},
howpublished = {\url{https://github.com/relai-ai/relai-sdk}},
note = {GitHub repository},
urldate = {2025-10-20}
}
@misc{wang2025maestrojointgraph,
title={Maestro: Joint Graph & Config Optimization for Reliable AI Agents},
author={Wenxiao Wang and Priyatham Kattakinda and Soheil Feizi},
year={2025},
eprint={2509.04642},
archivePrefix={arXiv},
primaryClass={cs.AI},
url={https://arxiv.org/abs/2509.04642},
}
```
<p align="center"> <sub>Made with ❤️ by the RELAI team — <a href="https://relai.ai">relai.ai</a> • <a href="https://discord.gg/sjaHJ34YYE">Community</a></sub> </p>
| text/markdown | null | RELAI <priyatham@relai.ai>, RELAI <wwx@relai.ai> | null | null | Apache License
Version 2.0, January 2004
http://www.apache.org/licenses/
TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
1. Definitions.
"License" shall mean the terms and conditions for use, reproduction,
and distribution as defined by Sections 1 through 9 of this document.
"Licensor" shall mean the copyright owner or entity authorized by
the copyright owner that is granting the License.
"Legal Entity" shall mean the union of the acting entity and all
other entities that control, are controlled by, or are under common
control with that entity. For the purposes of this definition,
"control" means (i) the power, direct or indirect, to cause the
direction or management of such entity, whether by contract or
otherwise, or (ii) ownership of fifty percent (50%) or more of the
outstanding shares, or (iii) beneficial ownership of such entity.
"You" (or "Your") shall mean an individual or Legal Entity
exercising permissions granted by this License.
"Source" form shall mean the preferred form for making modifications,
including but not limited to software source code, documentation
source, and configuration files.
"Object" form shall mean any form resulting from mechanical
transformation or translation of a Source form, including but
not limited to compiled object code, generated documentation,
and conversions to other media types.
"Work" shall mean the work of authorship, whether in Source or
Object form, made available under the License, as indicated by a
copyright notice that is included in or attached to the work
(an example is provided in the Appendix below).
"Derivative Works" shall mean any work, whether in Source or Object
form, that is based on (or derived from) the Work and for which the
editorial revisions, annotations, elaborations, or other modifications
represent, as a whole, an original work of authorship. For the purposes
of this License, Derivative Works shall not include works that remain
separable from, or merely link (or bind by name) to the interfaces of,
the Work and Derivative Works thereof.
"Contribution" shall mean any work of authorship, including
the original version of the Work and any modifications or additions
to that Work or Derivative Works thereof, that is intentionally
submitted to Licensor for inclusion in the Work by the copyright owner
or by an individual or Legal Entity authorized to submit on behalf of
the copyright owner. For the purposes of this definition, "submitted"
means any form of electronic, verbal, or written communication sent
to the Licensor or its representatives, including but not limited to
communication on electronic mailing lists, source code control systems,
and issue tracking systems that are managed by, or on behalf of, the
Licensor for the purpose of discussing and improving the Work, but
excluding communication that is conspicuously marked or otherwise
designated in writing by the copyright owner as "Not a Contribution."
"Contributor" shall mean Licensor and any individual or Legal Entity
on behalf of whom a Contribution has been received by Licensor and
subsequently incorporated within the Work.
2. Grant of Copyright License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
copyright license to reproduce, prepare Derivative Works of,
publicly display, publicly perform, sublicense, and distribute the
Work and such Derivative Works in Source or Object form.
3. Grant of Patent License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
(except as stated in this section) patent license to make, have made,
use, offer to sell, sell, import, and otherwise transfer the Work,
where such license applies only to those patent claims licensable
by such Contributor that are necessarily infringed by their
Contribution(s) alone or by combination of their Contribution(s)
with the Work to which such Contribution(s) was submitted. If You
institute patent litigation against any entity (including a
cross-claim or counterclaim in a lawsuit) alleging that the Work
or a Contribution incorporated within the Work constitutes direct
or contributory patent infringement, then any patent licenses
granted to You under this License for that Work shall terminate
as of the date such litigation is filed.
4. Redistribution. You may reproduce and distribute copies of the
Work or Derivative Works thereof in any medium, with or without
modifications, and in Source or Object form, provided that You
meet the following conditions:
(a) You must give any other recipients of the Work or
Derivative Works a copy of this License; and
(b) You must cause any modified files to carry prominent notices
stating that You changed the files; and
(c) You must retain, in the Source form of any Derivative Works
that You distribute, all copyright, patent, trademark, and
attribution notices from the Source form of the Work,
excluding those notices that do not pertain to any part of
the Derivative Works; and
(d) If the Work includes a "NOTICE" text file as part of its
distribution, then any Derivative Works that You distribute must
include a readable copy of the attribution notices contained
within such NOTICE file, excluding those notices that do not
pertain to any part of the Derivative Works, in at least one
of the following places: within a NOTICE text file distributed
as part of the Derivative Works; within the Source form or
documentation, if provided along with the Derivative Works; or,
within a display generated by the Derivative Works, if and
wherever such third-party notices normally appear. The contents
of the NOTICE file are for informational purposes only and
do not modify the License. You may add Your own attribution
notices within Derivative Works that You distribute, alongside
or as an addendum to the NOTICE text from the Work, provided
that such additional attribution notices cannot be construed
as modifying the License.
You may add Your own copyright statement to Your modifications and
may provide additional or different license terms and conditions
for use, reproduction, or distribution of Your modifications, or
for any such Derivative Works as a whole, provided Your use,
reproduction, and distribution of the Work otherwise complies with
the conditions stated in this License.
5. Submission of Contributions. Unless You explicitly state otherwise,
any Contribution intentionally submitted for inclusion in the Work
by You to the Licensor shall be under the terms and conditions of
this License, without any additional terms or conditions.
Notwithstanding the above, nothing herein shall supersede or modify
the terms of any separate license agreement you may have executed
with Licensor regarding such Contributions.
6. Trademarks. This License does not grant permission to use the trade
names, trademarks, service marks, or product names of the Licensor,
except as required for reasonable and customary use in describing the
origin of the Work and reproducing the content of the NOTICE file.
7. Disclaimer of Warranty. Unless required by applicable law or
agreed to in writing, Licensor provides the Work (and each
Contributor provides its Contributions) on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
implied, including, without limitation, any warranties or conditions
of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
PARTICULAR PURPOSE. You are solely responsible for determining the
appropriateness of using or redistributing the Work and assume any
risks associated with Your exercise of permissions under this License.
8. Limitation of Liability. In no event and under no legal theory,
whether in tort (including negligence), contract, or otherwise,
unless required by applicable law (such as deliberate and grossly
negligent acts) or agreed to in writing, shall any Contributor be
liable to You for damages, including any direct, indirect, special,
incidental, or consequential damages of any character arising as a
result of this License or out of the use or inability to use the
Work (including but not limited to damages for loss of goodwill,
work stoppage, computer failure or malfunction, or any and all
other commercial damages or losses), even if such Contributor
has been advised of the possibility of such damages.
9. Accepting Warranty or Additional Liability. While redistributing
the Work or Derivative Works thereof, You may choose to offer,
and charge a fee for, acceptance of support, warranty, indemnity,
or other liability obligations and/or rights consistent with this
License. However, in accepting such obligations, You may act only
on Your own behalf and on Your sole responsibility, not on behalf
of any other Contributor, and only if You agree to indemnify,
defend, and hold each Contributor harmless for any liability
incurred by, or claims asserted against, such Contributor by reason
of your accepting any such warranty or additional liability.
END OF TERMS AND CONDITIONS
APPENDIX: How to apply the Apache License to your work.
To apply the Apache License to your work, attach the following
boilerplate notice, with the fields enclosed by brackets "[]"
replaced with your own identifying information. (Don't include
the brackets!) The text should be enclosed in the appropriate
comment syntax for the file format. We also recommend that a
file or class name and description of purpose be included on the
same "printed page" as the copyright notice for easier
identification within third-party archives.
Copyright 2025 RELAI, Inc.,
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License. | null | [
"License :: OSI Approved :: Apache Software License",
"Development Status :: 4 - Beta",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: 3.13"
] | [] | null | null | >=3.10 | [] | [] | [] | [
"pydantic>=2.11.5",
"numpy>=1.26.4",
"aiohttp[speedups]>=3.12.15",
"httpx>=0.28.1",
"openai-agents[litellm]>=0.2.10",
"opentelemetry-instrumentation>=0.58b0",
"openinference-instrumentation>=0.1.38"
] | [] | [] | [] | [] | twine/6.1.0 CPython/3.13.7 | 2026-02-19T19:55:18.428299 | relai-0.3.25.tar.gz | 63,439 | aa/40/6d0c0d8be0f2e728897512de0a58f8febac19277cf69865fcd16b5a87085/relai-0.3.25.tar.gz | source | sdist | null | false | 02149603aa91172055d3e73f5a89707d | d079b367f1cce773c37424af60e946a7d474b9f325e43456b4864a3bde08acaa | aa406d0c0d8be0f2e728897512de0a58f8febac19277cf69865fcd16b5a87085 | null | [
"LICENSE.md"
] | 242 |
2.4 | gitstats | 1.6.1 | GitStats - Visualize Your Git Repositories | .. start-of-about
.. figure:: https://raw.githubusercontent.com/shenxianpeng/gitstats/main/docs/source/logo.png
:alt: Project Logo
:align: center
:width: 200px
.. |pypi-version| image:: https://img.shields.io/pypi/v/gitstats?color=blue
:target: https://pypi.org/project/gitstats/
:alt: PyPI - Version
.. |python-versions| image:: https://img.shields.io/pypi/pyversions/gitstats
:alt: PyPI - Python Version
.. |python-download| image:: https://static.pepy.tech/badge/gitstats/week
:target: https://pepy.tech/projects/gitstats
:alt: PyPI Downloads
.. |test-badge| image:: https://github.com/shenxianpeng/gitstats/actions/workflows/test.yml/badge.svg
:target: https://github.com/shenxianpeng/gitstats/actions/workflows/test.yml
:alt: Test
.. |sonarcloud| image:: https://sonarcloud.io/api/project_badges/measure?project=shenxianpeng_gitstats&metric=alert_status
:target: https://sonarcloud.io/summary/new_code?id=shenxianpeng_gitstats
:alt: Quality Gate Status
.. |docs-badge| image:: https://readthedocs.org/projects/gitstats/badge/?version=latest
:target: https://gitstats.readthedocs.io/
:alt: Documentation
.. |contributors| image:: https://img.shields.io/github/contributors/shenxianpeng/gitstats
:target: https://github.com/shenxianpeng/gitstats/graphs/contributors
:alt: GitHub contributors
|pypi-version| |python-versions| |python-download| |test-badge| |docs-badge| |contributors|
``$ gitstats``
===============
📊 Generate insightful visual reports from Git.
📘 Documentation: `gitstats.readthedocs.io <https://gitstats.readthedocs.io/>`_
Example
-------
``gitstats . report`` generates this `gitstats report <https://shenxianpeng.github.io/gitstats/index.html>`_.
Installation
------------
.. code-block:: bash
pip install gitstats
gitstats is compatible with Python 3.9 and newer.
Usage
-----
.. code-block:: bash
gitstats <gitpath> <outputpath>
Run ``gitstats --help`` for more options, or check the `documentation <https://gitstats.readthedocs.io/en/latest/getting-started.html>`_.
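For example, a quick end-to-end run against a freshly cloned repository (the paths below are placeholders):

.. code-block:: bash

   git clone https://github.com/shenxianpeng/gitstats.git /tmp/gitstats-src
   gitstats /tmp/gitstats-src /tmp/gitstats-report
   # Open /tmp/gitstats-report/index.html in a browser to view the report.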
Features
--------
Here is a list of some features of ``gitstats``:
* **General**: total files, lines, commits, authors, age.
* **Activity**: commits by hour of day, day of week, hour of week, month of year, year and month, and year.
* **Authors**: list of authors (name, commits (%), first commit date, last commit date, age), author of month, author of year.
* **Files**: file count by date, extensions.
* **Lines**: lines of code by date.
* **Tags**: tags by date and author.
* **Customizable**: config values through ``gitstats.conf``.
* **Cross-platform**: works on Linux, Windows, and macOS.
AI-Powered Features 🤖
-----------------------
GitStats supports AI-powered insights to enhance your repository analysis with natural language summaries and actionable recommendations.
**Quick Start:**
.. code-block:: bash
# Install with AI support
pip install gitstats[ai]
# Enable AI with OpenAI
export OPENAI_API_KEY=your-api-key
gitstats --ai --ai-provider openai <gitpath> <outputpath>
For detailed setup instructions, configuration options, and examples, see the `AI Integration Documentation <https://gitstats.readthedocs.io/en/stable/ai-integration.html>`_.
.. end-of-about
Contributing
------------
As an open source project, gitstats welcomes contributions of all forms.
----
The gitstats project was originally created by `Heikki Hokkanen <https://github.com/hoxu>`_ and is currently maintained by `Xianpeng Shen <https://github.com/shenxianpeng>`_.
| text/x-rst | null | Xianpeng Shen <xianpeng.shen@gmail.com>, Heikki Hokkanen <hoxu@users.sf.net> | null | null | null | git, gitstats, statistics, git history | [
"Development Status :: 5 - Production/Stable",
"Intended Audience :: Developers",
"Natural Language :: English",
"Programming Language :: Python :: 3.9",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programming Language ... | [] | null | null | >=3.9 | [] | [] | [] | [
"gnuplot-wheel",
"nox; extra == \"dev\"",
"sphinx; extra == \"docs\"",
"sphinx-rtd-theme; extra == \"docs\"",
"sphinx-autobuild; extra == \"docs\"",
"openai>=1.0; extra == \"ai\"",
"anthropic>=0.8; extra == \"ai\"",
"google-generativeai>=0.3; extra == \"ai\"",
"requests>=2.31; extra == \"ai\""
] | [] | [] | [] | [
"source, https://github.com/shenxianpeng/gitstats",
"tracker, https://github.com/shenxianpeng/gitstats/issues",
"homepage, https://shenxianpeng.github.io/gitstats/index.html",
"documentation, https://gitstats.readthedocs.io/"
] | twine/6.2.0 CPython/3.14.3 | 2026-02-19T19:54:56.378214 | gitstats-1.6.1-py3-none-any.whl | 42,517 | 03/e7/96c4c75bd9fb1e278c24ecb28673a0ed47e662f330d430297aa96625d539/gitstats-1.6.1-py3-none-any.whl | py3 | bdist_wheel | null | false | e48049d9d13ecb90ad914cffd03694dd | f2bf5d2015b12c5bb8e52ab27f7175a81a1043ed1b972ea1a0e0001cd0f34655 | 03e796c4c75bd9fb1e278c24ecb28673a0ed47e662f330d430297aa96625d539 | null | [
"LICENSE"
] | 135 |
2.4 | tetrascience-streamlit-ui | 0.3.0a0 | Use Tetrascience UI components in Streamlit | # tetrascience-streamlit-ui
TetraScience UI components and data app providers for [Streamlit](https://streamlit.io/).
This library provides:
- **UI Components**: Reusable Streamlit components for TetraScience applications
- **Data App Providers**: SDK functions for retrieving different provider types from the TetraScience platform
## Installation
To install and set up the TetraScience UI components for Streamlit, follow these steps:
1. **Install prerequisites:**
- Python 3.11, 3.12, or 3.13 (supports `>=3.11,<4`)
- Node.js (v20+ recommended)
- [yarn 4](https://yarnpkg.com/)
- [Poetry](https://python-poetry.org/docs/#installation)
2. **Install the package**
```
poetry add tetrascience-streamlit-ui
```
## Usage
### UI Components
```python
from tetrascience.ui.histogram import histogram
from tetrascience.ui.code_editor import code_editor
from tetrascience.ui.protocol_yaml_card import protocol_yaml_card
# Example: Histogram component
dist_result = histogram(name="Sample Distribution", key="hist1")
# Example: Code Editor
code = code_editor(value="# Python code here", language="python", height="200px", key="code1")
# Example: Protocol YAML Card
protocol_yaml_card(
title="Protocol Editor",
version_options=[
{"label": "v1.0.0", "value": "v1.0.0"},
{"label": "v0.9.2", "value": "v0.9.2"},
],
selected_version="v1.0.0",
yaml="# Example YAML\nname: Protocol",
key="protocol1"
)
```
## Data App Providers
The `data_app_providers` module provides functionality for retrieving and using database provider configurations from the TetraScience Data Platform (TDP). This is useful for data apps that need to connect to various databases like Snowflake, Databricks, or Athena.
### Basic Usage
```python
from tetrascience.data_app_providers import (
get_provider_configurations,
build_provider,
TetraScienceClient,
)
# Create TDP client
client = TetraScienceClient(
token="your-auth-token",
x_org_slug="your-org-slug",
base_url="https://api.tetrascience.com"
)
# Get provider configurations from TDP
configs = get_provider_configurations(client)
# Build and use a provider
for config in configs:
provider = build_provider(config)
df = provider.query("SELECT * FROM my_table LIMIT 10")
print(df.head())
```
### Environment Variable Configuration (Development)
For local development, you can specify provider configurations directly via an environment variable:
```python
import os
import json
from tetrascience.data_app_providers import get_provider_configurations, build_provider
# Set provider configuration
provider_config = [
{
"name": "Dev Snowflake",
"type": "snowflake",
"iconUrl": "https://example.com/snowflake.png",
"fields": {
"user": "dev_user",
"password": "dev_password",
"account": "dev.snowflakecomputing.com",
"warehouse": "DEV_WH",
"database": "DEV_DB",
"schema": "PUBLIC",
"role": "DEV_ROLE"
}
}
]
os.environ["DATA_APP_PROVIDER_CONFIG"] = json.dumps(provider_config)
# Get configurations (client still needed but won't be used for env var mode)
client = TetraScienceClient() # Can be empty for env var mode
configs = get_provider_configurations(client)
provider = build_provider(configs[0])
```
### Production Usage (TDP Integration)
In production data apps, provider configurations are retrieved from TDP using a connector ID:
```python
# Environment variables set by data app runtime:
# CONNECTOR_ID=your-connector-id
# ORG_SLUG=your-org-slug
# TDP_ENDPOINT=https://api.tetrascience.com
# Provider secrets also set by environment:
# SNOWFLAKE_USER=actual_user
# SNOWFLAKE_PASSWORD=actual_password
# DATABRICKS_CLIENT_ID=actual_client_id
# etc.
from tetrascience.data_app_providers import (
get_provider_configurations,
build_provider,
TetraScienceClient,
)
client = TetraScienceClient(
token=os.getenv("TS_AUTH_TOKEN"),
x_org_slug=os.getenv("ORG_SLUG"),
base_url=os.getenv("TDP_ENDPOINT") or os.getenv("TDP_INTERNAL_ENDPOINT")
)
# Fetch provider configurations from TDP
configs = get_provider_configurations(client)
# Use providers
for config in configs:
print(f"Using provider: {config.name} ({config.type})")
provider = build_provider(config)
# Query data
df = provider.query("SELECT COUNT(*) as row_count FROM my_table")
print(f"Row count: {df['row_count'][0]}")
```
### Supported Provider Types
- **Snowflake**: `snowflake-connector-python` required
- **Databricks**: `databricks-sql-connector[pyarrow]` required
- **Athena**: `pyathena[arrow]` and `boto3` required
- **Local Development**: Requires AWS credentials to be configured (see [AWS Credentials Setup](#aws-credentials-for-athena-local-development) below)
- **TDP Deployment**: AWS credentials are automatically configured for the Data App
### Provider Configuration Format
```python
{
"name": "Human-readable provider name",
"type": "snowflake|databricks|athena",
"iconUrl": "https://example.com/icon.png",
"fields": {
# Provider-specific connection fields
# Snowflake: user, password, account, warehouse, database, schema, role
# Databricks: server_hostname, http_path, client_id, client_secret, catalog, schema
# Athena: Uses AWS credentials and environment variables
}
}
```
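As a concrete (hypothetical) instance of this format, here is a Databricks entry for the environment-variable flow shown earlier; every field value below is a placeholder, not a real credential.
```python
# Hypothetical Databricks provider configuration for local development;
# all field values are placeholders.
import json
import os

databricks_config = [{
    "name": "Dev Databricks",
    "type": "databricks",
    "iconUrl": "https://example.com/databricks.png",
    "fields": {
        "server_hostname": "adb-1234567890123456.7.azuredatabricks.net",
        "http_path": "/sql/1.0/warehouses/abc123",
        "client_id": "dev-client-id",
        "client_secret": "dev-client-secret",
        "catalog": "main",
        "schema": "default",
    },
}]
os.environ["DATA_APP_PROVIDER_CONFIG"] = json.dumps(databricks_config)
```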
### Error Handling
```python
from tetrascience.data_app_providers.exceptions import (
InvalidProviderConfigurationError,
ConnectionError,
QueryError,
MissingTableError
)
try:
configs = get_provider_configurations(client)
provider = build_provider(configs[0])
df = provider.query("SELECT * FROM non_existent_table")
except InvalidProviderConfigurationError as e:
print(f"Configuration error: {e}")
except ConnectionError as e:
print(f"Connection failed: {e}")
except MissingTableError as e:
print(f"Table not found: {e}")
except QueryError as e:
print(f"Query failed: {e}")
```
### AWS Credentials for Athena (Local Development)
When using the Athena provider for local development, you need to configure AWS credentials since the provider uses `boto3` to connect to AWS Athena. The Athena provider automatically uses the default TDP Athena configuration but requires valid AWS credentials.
#### Required Environment Variables
The following environment variables are used by the Athena provider:
```bash
# Required for Athena connection
AWS_REGION=us-east-1 # Your AWS region
ATHENA_S3_OUTPUT_LOCATION=your-bucket # S3 bucket for query results
ORG_SLUG=your-org-slug # Your organization slug
```
#### AWS Credentials Setup
Choose one of the following methods to configure AWS credentials for local development:
#### Option 1: AWS Credentials File
```bash
# Configure AWS credentials using AWS CLI
aws configure
# Or manually create ~/.aws/credentials
[default]
aws_access_key_id = YOUR_ACCESS_KEY
aws_secret_access_key = YOUR_SECRET_KEY
```
#### Option 2: Environment Variables
```bash
export AWS_ACCESS_KEY_ID=your-access-key
export AWS_SECRET_ACCESS_KEY=your-secret-key
```
#### TDP Deployment vs Local Development
- **Local Development**: You must manually configure AWS credentials using one of the methods above
- **TDP Deployment**: AWS credentials are automatically provided by the connector runtime environment
#### Example: Local Athena Development
```python
import os
import json
from tetrascience.data_app_providers import get_provider_configurations, build_provider, TetraScienceClient
# Set required environment variables for local development
os.environ["AWS_REGION"] = "us-east-1"
os.environ["ATHENA_S3_OUTPUT_LOCATION"] = "your-athena-results-bucket"
os.environ["ORG_SLUG"] = "your-org-slug"
# Configure Athena provider via environment variable
athena_config = [{
"name": "Local Athena",
"type": "athena",
"iconUrl": "https://example.com/athena.png",
"fields": {} # Athena uses AWS credentials and environment variables
}]
os.environ["DATA_APP_PROVIDER_CONFIG"] = json.dumps(athena_config)
# Build and use Athena provider
client = TetraScienceClient() # Empty client for env var mode
configs = get_provider_configurations(client)
athena_provider = build_provider(configs[0])
# Query data (requires valid AWS credentials)
df = athena_provider.query("SELECT COUNT(*) as total FROM your_table")
print(f"Total rows: {df['total'][0]}")
```
## JWT Token Manager (Data Apps Authentication)
The `JWTTokenManager` helps your data app obtain a valid JWT to call the TetraScience Data Platform (TDP) APIs. It supports both:
- Using a standard `ts-auth-token` JWT (from cookie or `TS_AUTH_TOKEN` env var)
- Resolving a `ts-token-ref` cookie into a full JWT via the connector key-value store
### When to use it
Use this in your Streamlit data apps to:
- Read the user's auth token from cookies when deployed on TDP
- Fall back to a local `TS_AUTH_TOKEN` during development
### Environment variables
- `CONNECTOR_ID` (required for ts-token-ref flow)
- `ORG_SLUG` (organization slug)
- `TDP_ENDPOINT` or `TDP_INTERNAL_ENDPOINT` (API base URL; picked automatically if set)
- `TS_AUTH_TOKEN` (optional for local dev; used if cookies are not available)
### Basic (local development) example
For local dev, set `TS_AUTH_TOKEN` and call `get_user_token` with an empty cookie dict. The manager will use the env var.
```python
import os
import streamlit as st
from tetrascience.data_apps.jwt_token_manager import jwt_manager
# export TS_AUTH_TOKEN=... and ORG_SLUG=...
org_slug = os.getenv("ORG_SLUG", "your-org")
# No cookies in local dev; falls back to TS_AUTH_TOKEN
jwt_token = jwt_manager.get_user_token(cookies={}, org_slug=org_slug)
if not jwt_token:
st.warning('Failed to retrieve JWT token')
# Use the token to make authenticated requests to TDP
```
### Production (TDP) example using cookies
In TDP, your app runs behind a proxy that sets cookies. Use a cookie utility to read them (for example, `extra-streamlit-components`' CookieManager), then pass them to the manager.
```python
import os
import streamlit as st
from tetrascience.data_apps.jwt_token_manager import jwt_manager
org_slug = os.environ["ORG_SLUG"]
# Read cookies (contains ts-auth-token or ts-token-ref)
cookies = st.context.cookies.to_dict()
# Resolves the user's JWT token: either the `ts-auth-token` cookie directly, or the `ts-token-ref` cookie resolved into a full JWT.
jwt_token = jwt_manager.get_user_token(cookies=cookies, org_slug=org_slug)
if not jwt_token:
st.warning('Failed to retrieve JWT token')
# Use the token to make authenticated requests to TDP
```
Notes:
- Tokens are cached and auto-refreshed when close to expiry.
- Errors are surfaced in Streamlit via `st.warning`/`st.error` to help with troubleshooting.
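A natural follow-up is to feed the resolved token into the providers client from the earlier sections. This is a sketch only; whether `get_provider_configurations` accepts a user JWT rather than a service token is an assumption here.
```python
# Sketch: combine the JWT manager with the providers client from the
# earlier sections. Assumes a user JWT is accepted as the bearer token.
import os
from tetrascience.data_app_providers import (
    TetraScienceClient,
    get_provider_configurations,
)

client = TetraScienceClient(
    token=jwt_token,  # from jwt_manager.get_user_token(...) above
    x_org_slug=org_slug,
    base_url=os.getenv("TDP_ENDPOINT") or os.getenv("TDP_INTERNAL_ENDPOINT"),
)
configs = get_provider_configurations(client)
```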
## Features
### UI Components (Summary)
- Custom Streamlit components for TetraScience UI
- Easy integration and usage
- Example components: `my_component`, `histogram`, `bar_graph`, `chromatogram`, `protocol_yaml_card`, `button`, `input`, `dropdown`, `checkbox`, `badge`, `label`, `textarea`, `code_editor`, `markdown_display`, `menu_item`, `tab`, `toast`, `toggle`, `tooltip`, and more.
### Data App Providers (Summary)
- Retrieve and manage data provider configurations from the TetraScience platform
- Connect to Snowflake, Databricks, and Athena databases
- Uses the standardized TetraScience SDK (`ts-sdk-connectors-python`) for TDP API interactions
- Configure providers via environment variables for local development
## License
Apache 2.0
| text/markdown | TetraScience | null | null | null | null | null | [] | [] | https://github.com/tetrascience/ts-lib-ui-kit-streamlit | null | <4,>=3.11 | [] | [] | [] | [
"streamlit<2.0.0,>=1.12.0",
"pydantic<3.0.0,>=2.0.0",
"polars<2.0.0,>=1.0.0",
"requests<3.0.0,>=2.25.0",
"snowflake-connector-python<4.0.0,>=3.0.0",
"databricks-sql-connector[pyarrow]<5.0.0,>=4.0.5",
"databricks-sdk<0.57.0,>=0.56.0",
"pyathena[arrow]<4.0.0,>=3.0.0",
"boto3<2.0.0,>=1.20.0",
"pyjwt<... | [] | [] | [] | [] | twine/6.1.0 CPython/3.13.7 | 2026-02-19T19:54:47.065152 | tetrascience_streamlit_ui-0.3.0a0.tar.gz | 2,067,575 | 8c/f0/23d85f575efc063dae81ffd20d40318d74b674479e7d20423cac7a43db65/tetrascience_streamlit_ui-0.3.0a0.tar.gz | source | sdist | null | false | 945be7b9ba89c4d0e497e582395618da | d9788e87072c1d9499d4487f8c135b73d33079718c888026b9f756035c7e3a55 | 8cf023d85f575efc063dae81ffd20d40318d74b674479e7d20423cac7a43db65 | null | [
"LICENSE"
] | 204 |
2.4 | mcp-testaserver-http | 0.1.3 | Un servidor MCP que actúa como puente HTTP | # MCP ↔ HTTP Bridge
Local MCP server (stdio) that dynamically exposes the tools of an external HTTP server.
```
MCP client (Claude Code / CLI)
        ↕ stdio
mcp_server.py        ← local MCP server
        ↕ HTTP
http_server.py       ← your backend with the tools
```
---
## Requirements
```bash
pip install mcp httpx fastapi uvicorn
```
---
## Running
### 1. HTTP server (in one terminal)
```bash
python http_server.py
# → http://127.0.0.1:8000
```
You can check the available tools at:
```
GET http://127.0.0.1:8000/tools
```
### 2. MCP server (managed by the client)
The MCP server is launched automatically by the MCP client over stdio.
Configure it in `~/.claude.json` (Claude Code):
```json
{
"mcpServers": {
"http-bridge": {
"command": "python",
"args": ["/ruta/absoluta/a/mcp_server.py"]
}
}
}
```
---
## Adding your own tools
You only need to modify `http_server.py`:
1. Add a dict to the `TOOLS` list with `name`, `description`, and `inputSchema`.
2. Add a `run_mi_tool(args)` function with the logic.
3. Register the function in `TOOL_HANDLERS`.
The MCP server will detect them automatically, with no changes needed.
```python
# Example in http_server.py
TOOLS.append({
"name": "reverse_text",
"description": "Invierte un texto.",
"inputSchema": {
"type": "object",
"properties": {
"text": {"type": "string"}
},
"required": ["text"]
}
})
def run_reverse_text(args):
return {"result": args["text"][::-1]}
TOOL_HANDLERS["reverse_text"] = run_reverse_text
```
---
## HTTP contract
| Method | Route             | Description                                |
|--------|-------------------|--------------------------------------------|
| GET    | `/tools`          | Lists all available tools                  |
| POST   | `/tools/{name}`   | Runs the tool `name` with the JSON body    |
**Response from `/tools`:**
```json
{
"tools": [
{
"name": "calculator",
"description": "...",
"inputSchema": { ... }
}
]
}
```
**Response from `/tools/{name}`:**
```json
{ "result": { ... } }
```
---
## File structure
```
.
├── http_server.py   # HTTP backend with the tools
├── mcp_server.py    # Local MCP server (stdio)
└── README.md
```
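For reference, here is a hypothetical minimal `http_server.py` consistent with the contract and the handler convention shown above; the `calculator` tool and its schema are assumptions, not the shipped implementation.
```python
# Hypothetical minimal backend satisfying the GET /tools and
# POST /tools/{name} contract described above.
from fastapi import FastAPI, HTTPException
import uvicorn

app = FastAPI()

TOOLS = [
    {
        "name": "calculator",
        "description": "Adds two numbers.",
        "inputSchema": {
            "type": "object",
            "properties": {"a": {"type": "number"}, "b": {"type": "number"}},
            "required": ["a", "b"],
        },
    }
]

def run_calculator(args: dict) -> dict:
    # Handlers return {"result": ...}, matching the contract's response shape.
    return {"result": args["a"] + args["b"]}

TOOL_HANDLERS = {"calculator": run_calculator}

@app.get("/tools")
def list_tools():
    return {"tools": TOOLS}

@app.post("/tools/{name}")
def call_tool(name: str, args: dict):
    if name not in TOOL_HANDLERS:
        raise HTTPException(status_code=404, detail=f"Unknown tool: {name}")
    return TOOL_HANDLERS[name](args)

if __name__ == "__main__":
    uvicorn.run(app, host="127.0.0.1", port=8000)
```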
## Inspecting the MCP server
```bash
npx @modelcontextprotocol/inspector python C:\\Users\\acamp\\Documents\\SynologyDrive\\mcp-http-server\\mcp_server.py
npx @modelcontextprotocol/inspector python /home/chrzrd/SynologyDrive/mcp-testaserver-http/mcp_server.py
```
 | text/markdown | A. Campayo | null | null | null | null | [] | [] | null | null | >=3.10 | [] | [] | [] | [
"httpx>=0.27.0",
"mcp>=1.0.0",
"python-dotenv>=1.0.0"
] | [] | [] | [] | [] | twine/6.2.0 CPython/3.12.3 | 2026-02-19T19:53:22.277529 | mcp_testaserver_http-0.1.3.tar.gz | 7,689 | e7/4b/5e8720b785e034d4a9879660c1b987b4b2653a5b39fecdea50336bdc12e0/mcp_testaserver_http-0.1.3.tar.gz | source | sdist | null | false | cb5dada47c2594438339e56a08a4e9a2 | e24af355e54ecde7d5abd48c5b206159c8b91e51cd8407ee986b343c1a05f7c3 | e74b5e8720b785e034d4a9879660c1b987b4b2653a5b39fecdea50336bdc12e0 | MIT | [
"LICENSE"
] | 231 |