metadata_version string | name string | version string | summary string | description string | description_content_type string | author string | author_email string | maintainer string | maintainer_email string | license string | keywords string | classifiers list | platform list | home_page string | download_url string | requires_python string | requires list | provides list | obsoletes list | requires_dist list | provides_dist list | obsoletes_dist list | requires_external list | project_urls list | uploaded_via string | upload_time timestamp[us] | filename string | size int64 | path string | python_version string | packagetype string | comment_text string | has_signature bool | md5_digest string | sha256_digest string | blake2_256_digest string | license_expression string | license_files list | recent_7d_downloads int64 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
2.4 | swarmauri_tool_jupyterexporthtml | 0.8.3.dev5 | A tool that exports a Jupyter Notebook to HTML format using nbconvert’s HTMLExporter, enabling web-based presentation of notebooks. | 
<p align="center">
<a href="https://pypi.org/project/swarmauri_tool_jupyterexporthtml/">
<img src="https://img.shields.io/pypi/dm/swarmauri_tool_jupyterexporthtml" alt="PyPI - Downloads"/></a>
<a href="https://hits.sh/github.com/swarmauri/swarmauri-sdk/tree/master/pkgs/community/swarmauri_tool_jupyterexporthtml/">
<img alt="Hits" src="https://hits.sh/github.com/swarmauri/swarmauri-sdk/tree/master/pkgs/community/swarmauri_tool_jupyterexporthtml.svg"/></a>
<a href="https://pypi.org/project/swarmauri_tool_jupyterexporthtml/">
<img src="https://img.shields.io/pypi/pyversions/swarmauri_tool_jupyterexporthtml" alt="PyPI - Python Version"/></a>
<a href="https://pypi.org/project/swarmauri_tool_jupyterexporthtml/">
<img src="https://img.shields.io/pypi/l/swarmauri_tool_jupyterexporthtml" alt="PyPI - License"/></a>
<a href="https://pypi.org/project/swarmauri_tool_jupyterexporthtml/">
<img src="https://img.shields.io/pypi/v/swarmauri_tool_jupyterexporthtml?label=swarmauri_tool_jupyterexporthtml&color=green" alt="PyPI - swarmauri_tool_jupyterexporthtml"/></a>
</p>
---
# Swarmauri Tool Jupyter Export HTML
Converts a Jupyter notebook (passed in as JSON) to HTML using nbconvert’s `HTMLExporter` with optional custom templates, CSS, and JavaScript.
## Features
- Accepts notebook data as a JSON string (e.g., from `json.dumps(nbformat.read(...))`).
- Supports optional template, inline CSS, and inline JS injection.
- Returns a dict containing `exported_html` or `error` when conversion fails.
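The JSON string the tool accepts is simply a serialized nbformat v4 document. A minimal hand-built sketch (field names follow the nbformat v4 schema; in practice you would read a real notebook with `nbformat` as in the Quickstart below):

```python
import json

# Minimal nbformat v4 notebook built by hand; normally this structure
# comes from nbformat.read(...) and is serialized with json.dumps.
notebook = {
    "nbformat": 4,
    "nbformat_minor": 5,
    "metadata": {},
    "cells": [
        {
            "cell_type": "markdown",
            "metadata": {},
            "source": "# Example notebook",
        }
    ],
}

notebook_json = json.dumps(notebook)  # the string form the tool expects
print(json.loads(notebook_json)["nbformat"])  # → 4
```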
## Prerequisites
- Python 3.10 or newer.
- `nbconvert`, `nbformat`, and Swarmauri base/core packages (installed automatically).
## Installation
```bash
# pip
pip install swarmauri_tool_jupyterexporthtml
# poetry
poetry add swarmauri_tool_jupyterexporthtml
# uv (pyproject-based projects)
uv add swarmauri_tool_jupyterexporthtml
```
## Quickstart
```python
import json
import nbformat
from pathlib import Path
from swarmauri_tool_jupyterexporthtml import JupyterExportHTMLTool
notebook = nbformat.read("notebooks/example.ipynb", as_version=4)
notebook_json = json.dumps(notebook)
exporter = JupyterExportHTMLTool()
response = exporter(
notebook_json=notebook_json,
template_file=None,
extra_css="body { font-family: Arial; }",
extra_js="console.log('Export complete');",
)
if "exported_html" in response:
Path("notebooks/example.html").write_text(response["exported_html"], encoding="utf-8")
else:
print("Error:", response["error"])
```
## Tips
- nbconvert templates let you customize the layout; pass a custom template file to `template_file`.
- Keep `extra_css`/`extra_js` lightweight to avoid bloating the HTML output.
- Combine with notebook execution tools (execute → export → publish) for end-to-end pipelines.
## Want to help?
If you want to contribute to swarmauri-sdk, read up on our [guidelines for contributing](https://github.com/swarmauri/swarmauri-sdk/blob/master/contributing.md) that will help you get started.
| text/markdown | Jacob Stewart | jacob@swarmauri.com | null | null | null | swarmauri, tool, jupyterexporthtml, exports, jupyter, notebook, html, format, nbconvert, htmlexporter, enabling, web, based, presentation, notebooks | [
"License :: OSI Approved :: Apache Software License",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: 3.13",
"Natural Language :: English",
"Development Status :: 3 - Alpha",
"Intended Audi... | [] | null | null | <3.13,>=3.10 | [] | [] | [] | [
"nbconvert>=7.16.6",
"swarmauri_base",
"swarmauri_core",
"swarmauri_standard"
] | [] | [] | [] | [] | uv/0.10.4 {"installer":{"name":"uv","version":"0.10.4","subcommand":["publish"]},"python":null,"implementation":{"name":null,"version":null},"distro":{"name":"Ubuntu","version":"24.04","id":"noble","libc":null},"system":{"name":null,"release":null},"cpu":null,"openssl_version":null,"setuptools_version":null,"rustc_version":null,"ci":true} | 2026-02-18T09:50:24.191565 | swarmauri_tool_jupyterexporthtml-0.8.3.dev5-py3-none-any.whl | 9,479 | 96/21/bdff41c0ffdafec4ed39574779290e4d814a8f41fa633d4a1fd23b560b45/swarmauri_tool_jupyterexporthtml-0.8.3.dev5-py3-none-any.whl | py3 | bdist_wheel | null | false | c546ad2fe7579132c09de806f88359e8 | 45f15b2bbfdf32d72e60009dd41b82008f28cac3ae98ddb956af197ee6939e43 | 9621bdff41c0ffdafec4ed39574779290e4d814a8f41fa633d4a1fd23b560b45 | Apache-2.0 | [
"LICENSE"
] | 0 |
2.4 | swarmauri_tool_jupyterexecutenotebookwithparameters | 1.3.3.dev5 | A tool designed to execute parameterized notebooks using papermill, allowing dynamic input and output capture for automated workflows. | 
<p align="center">
<a href="https://pypi.org/project/swarmauri_tool_jupyterexecutenotebookwithparameters/">
<img src="https://img.shields.io/pypi/dm/swarmauri_tool_jupyterexecutenotebookwithparameters" alt="PyPI - Downloads"/></a>
<a href="https://hits.sh/github.com/swarmauri/swarmauri-sdk/tree/master/pkgs/community/swarmauri_tool_jupyterexecutenotebookwithparameters/">
<img alt="Hits" src="https://hits.sh/github.com/swarmauri/swarmauri-sdk/tree/master/pkgs/community/swarmauri_tool_jupyterexecutenotebookwithparameters.svg"/></a>
<a href="https://pypi.org/project/swarmauri_tool_jupyterexecutenotebookwithparameters/">
<img src="https://img.shields.io/pypi/pyversions/swarmauri_tool_jupyterexecutenotebookwithparameters" alt="PyPI - Python Version"/></a>
<a href="https://pypi.org/project/swarmauri_tool_jupyterexecutenotebookwithparameters/">
<img src="https://img.shields.io/pypi/l/swarmauri_tool_jupyterexecutenotebookwithparameters" alt="PyPI - License"/></a>
<a href="https://pypi.org/project/swarmauri_tool_jupyterexecutenotebookwithparameters/">
<img src="https://img.shields.io/pypi/v/swarmauri_tool_jupyterexecutenotebookwithparameters?label=swarmauri_tool_jupyterexecutenotebookwithparameters&color=green" alt="PyPI - swarmauri_tool_jupyterexecutenotebookwithparameters"/></a>
</p>
---
# Swarmauri Tool Jupyter Execute Notebook With Parameters
Runs a Jupyter notebook with injected parameters using Papermill-style execution.
## Features
- Parameterizes notebooks via Papermill and executes them end-to-end.
- Saves the executed notebook to an output path of your choice.
- Returns a dict containing `executed_notebook` on success or `error`/`message` when execution fails.
## Prerequisites
- Python 3.10 or newer.
- Papermill, nbformat, nbconvert, swarmauri base/core packages (installed automatically).
- Notebook dependencies must be available in the runtime environment.
## Installation
```bash
# pip
pip install swarmauri_tool_jupyterexecutenotebookwithparameters
# poetry
poetry add swarmauri_tool_jupyterexecutenotebookwithparameters
# uv (pyproject-based projects)
uv add swarmauri_tool_jupyterexecutenotebookwithparameters
```
## Quickstart
```python
from swarmauri_tool_jupyterexecutenotebookwithparameters import JupyterExecuteNotebookWithParametersTool
executor = JupyterExecuteNotebookWithParametersTool()
response = executor(
notebook_path="templates/report.ipynb",
output_notebook_path="outputs/report-filled.ipynb",
params={
"input_data_path": "data/input.csv",
"run_mode": "production",
},
timeout=600,
)
if "executed_notebook" in response:
print("Notebook executed:", response["executed_notebook"])
else:
print("Error:", response["error"], response.get("message"))
```
## Tips
- Parameters can be any JSON-serializable values used inside the notebook (strings, numbers, dictionaries, etc.).
- Increase `timeout` for notebooks with lengthy cells.
- Combine with Swarmauri notebook cleaning/conversion tools for full pipelines (execute → clear outputs → convert to PDF/HTML).
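Because parameters must be JSON-serializable, a quick pre-flight check can catch bad values before launching a long execution. A stdlib-only sketch (the helper name is illustrative, not part of the package):

```python
import json

def check_params(params: dict) -> None:
    """Raise early if any parameter would fail JSON-based injection."""
    for key, value in params.items():
        try:
            json.dumps(value)
        except TypeError as exc:
            raise TypeError(f"parameter {key!r} is not JSON-serializable") from exc

# Strings, numbers, and plain dicts all pass.
check_params({"input_data_path": "data/input.csv", "run_mode": "production"})
```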
## Want to help?
If you want to contribute to swarmauri-sdk, read up on our [guidelines for contributing](https://github.com/swarmauri/swarmauri-sdk/blob/master/contributing.md) that will help you get started.
| text/markdown | Jacob Stewart | jacob@swarmauri.com | null | null | null | swarmauri, tool, jupyterexecutenotebookwithparameters, designed, execute, parameterized, notebooks, papermill, allowing, dynamic, input, output, capture, automated, workflows | [
"License :: OSI Approved :: Apache Software License",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: 3.13",
"Natural Language :: English",
"Development Status :: 3 - Alpha",
"Intended Audi... | [] | null | null | <3.13,>=3.10 | [] | [] | [] | [
"papermill>=2.6.0",
"swarmauri_base",
"swarmauri_core",
"swarmauri_standard"
] | [] | [] | [] | [] | uv/0.10.4 {"installer":{"name":"uv","version":"0.10.4","subcommand":["publish"]},"python":null,"implementation":{"name":null,"version":null},"distro":{"name":"Ubuntu","version":"24.04","id":"noble","libc":null},"system":{"name":null,"release":null},"cpu":null,"openssl_version":null,"setuptools_version":null,"rustc_version":null,"ci":true} | 2026-02-18T09:50:23.181789 | swarmauri_tool_jupyterexecutenotebookwithparameters-1.3.3.dev5-py3-none-any.whl | 9,646 | 8a/56/7833948383322257a990b0167330a37c4fa44f560c396d74ebbcf4d577f5/swarmauri_tool_jupyterexecutenotebookwithparameters-1.3.3.dev5-py3-none-any.whl | py3 | bdist_wheel | null | false | 9e43cd53708631b613f9cf6d77d6e1e3 | 6f7b1d230ae31185814313c8d6778b0774184ac34447c93b7820c4c394639afe | 8a567833948383322257a990b0167330a37c4fa44f560c396d74ebbcf4d577f5 | Apache-2.0 | [
"LICENSE"
] | 0 |
2.4 | swarmauri_tool_jupyterexecuteandconvert | 0.9.3.dev5 | A tool that programmatically executes and converts a Jupyter Notebook using nbconvert's CLI functionality, enabling automated notebook execution and format conversion. | 
<p align="center">
<a href="https://pypi.org/project/swarmauri_tool_jupyterexecuteandconvert/">
<img src="https://img.shields.io/pypi/dm/swarmauri_tool_jupyterexecuteandconvert" alt="PyPI - Downloads"/></a>
<a href="https://hits.sh/github.com/swarmauri/swarmauri-sdk/tree/master/pkgs/community/swarmauri_tool_jupyterexecuteandconvert/">
<img alt="Hits" src="https://hits.sh/github.com/swarmauri/swarmauri-sdk/tree/master/pkgs/community/swarmauri_tool_jupyterexecuteandconvert.svg"/></a>
<a href="https://pypi.org/project/swarmauri_tool_jupyterexecuteandconvert/">
<img src="https://img.shields.io/pypi/pyversions/swarmauri_tool_jupyterexecuteandconvert" alt="PyPI - Python Version"/></a>
<a href="https://pypi.org/project/swarmauri_tool_jupyterexecuteandconvert/">
<img src="https://img.shields.io/pypi/l/swarmauri_tool_jupyterexecuteandconvert" alt="PyPI - License"/></a>
<a href="https://pypi.org/project/swarmauri_tool_jupyterexecuteandconvert/">
<img src="https://img.shields.io/pypi/v/swarmauri_tool_jupyterexecuteandconvert?label=swarmauri_tool_jupyterexecuteandconvert&color=green" alt="PyPI - swarmauri_tool_jupyterexecuteandconvert"/></a>
</p>
---
# Swarmauri Tool Jupyter Execute & Convert
Executes a Jupyter notebook and converts the output to HTML or PDF using nbconvert—packaged as a Swarmauri tool.
## Features
- Runs notebooks with configurable execution timeout.
- Converts executed notebooks to `html` or `pdf` via nbconvert.
- Returns a status dictionary with the converted file path or error details.
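The status dictionary can be unwrapped with a small guard so pipelines fail loudly on conversion errors. A sketch using the keys shown in the Quickstart below (the helper name is illustrative):

```python
def handle_conversion(response: dict) -> str:
    """Return the converted file path, or raise with the tool's error details."""
    if response.get("status") == "success":
        return response["converted_file"]
    raise RuntimeError(f"{response.get('error')}: {response.get('message')}")

# Success path: the converted file path comes straight back.
print(handle_conversion({"status": "success", "converted_file": "out.pdf"}))  # → out.pdf
```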
## Prerequisites
- Python 3.10 or newer.
- `nbconvert`, `nbformat`, and Jupyter runtime (installed automatically).
- Notebook dependencies must be available in the execution environment.
## Installation
```bash
# pip
pip install swarmauri_tool_jupyterexecuteandconvert
# poetry
poetry add swarmauri_tool_jupyterexecuteandconvert
# uv (pyproject-based projects)
uv add swarmauri_tool_jupyterexecuteandconvert
```
## Quickstart
```python
from swarmauri_tool_jupyterexecuteandconvert import JupyterExecuteAndConvertTool
tool = JupyterExecuteAndConvertTool()
response = tool(
notebook_path="notebooks/analysis.ipynb",
output_format="pdf",
execution_timeout=600,
)
if response.get("status") == "success":
print("Converted file:", response["converted_file"])
else:
print("Error:", response.get("error"))
print("Message:", response.get("message"))
```
## Tips
- Set `execution_timeout` high enough for long-running notebooks; the Quickstart above uses 600 seconds.
- Ensure notebooks run headlessly: avoid widgets or interactive inputs that pause execution.
- Install LaTeX (`tectonic`, `texlive`) if exporting to PDF on systems where nbconvert requires it.
- Combine with `JupyterClearOutputTool` to strip outputs after conversion if you want clean notebooks and rich artifacts.
## Want to help?
If you want to contribute to swarmauri-sdk, read up on our [guidelines for contributing](https://github.com/swarmauri/swarmauri-sdk/blob/master/contributing.md) that will help you get started.
| text/markdown | Jacob Stewart | jacob@swarmauri.com | null | null | null | swarmauri, tool, jupyterexecuteandconvert, programmatically, executes, converts, jupyter, notebook, nbconvert, cli, functionality, enabling, automated, execution, format, conversion | [
"License :: OSI Approved :: Apache Software License",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: 3.13",
"Natural Language :: English",
"Development Status :: 3 - Alpha",
"Intended Audi... | [] | null | null | <3.13,>=3.10 | [] | [] | [] | [
"nbconvert>=7.16.6",
"swarmauri_base",
"swarmauri_core",
"swarmauri_standard"
] | [] | [] | [] | [] | uv/0.10.4 {"installer":{"name":"uv","version":"0.10.4","subcommand":["publish"]},"python":null,"implementation":{"name":null,"version":null},"distro":{"name":"Ubuntu","version":"24.04","id":"noble","libc":null},"system":{"name":null,"release":null},"cpu":null,"openssl_version":null,"setuptools_version":null,"rustc_version":null,"ci":true} | 2026-02-18T09:50:07.028277 | swarmauri_tool_jupyterexecuteandconvert-0.9.3.dev5.tar.gz | 8,762 | d9/55/f1a11e6a1102cf02bee3b77adfe178de051591fb4bd3482dd77e146b5a06/swarmauri_tool_jupyterexecuteandconvert-0.9.3.dev5.tar.gz | source | sdist | null | false | 51358774be8c0541b5e56ca5a8fd1eda | f5a386716e437e6fe89ec2c6ce592a805d792ff91b2b226bcb8a15963eb03ad1 | d955f1a11e6a1102cf02bee3b77adfe178de051591fb4bd3482dd77e146b5a06 | Apache-2.0 | [
"LICENSE"
] | 0 |
2.4 | swarmauri_tool_jupyterexecutenotebook | 0.9.3.dev5 | A tool designed to execute all cells in a Jupyter Notebook using nbconvert’s ExecutePreprocessor, capturing outputs for testing and reporting. | 
<p align="center">
<a href="https://pypi.org/project/swarmauri_tool_jupyterexecutenotebook/">
<img src="https://img.shields.io/pypi/dm/swarmauri_tool_jupyterexecutenotebook" alt="PyPI - Downloads"/></a>
<a href="https://hits.sh/github.com/swarmauri/swarmauri-sdk/tree/master/pkgs/community/swarmauri_tool_jupyterexecutenotebook/">
<img alt="Hits" src="https://hits.sh/github.com/swarmauri/swarmauri-sdk/tree/master/pkgs/community/swarmauri_tool_jupyterexecutenotebook.svg"/></a>
<a href="https://pypi.org/project/swarmauri_tool_jupyterexecutenotebook/">
<img src="https://img.shields.io/pypi/pyversions/swarmauri_tool_jupyterexecutenotebook" alt="PyPI - Python Version"/></a>
<a href="https://pypi.org/project/swarmauri_tool_jupyterexecutenotebook/">
<img src="https://img.shields.io/pypi/l/swarmauri_tool_jupyterexecutenotebook" alt="PyPI - License"/></a>
<a href="https://pypi.org/project/swarmauri_tool_jupyterexecutenotebook/">
<img src="https://img.shields.io/pypi/v/swarmauri_tool_jupyterexecutenotebook?label=swarmauri_tool_jupyterexecutenotebook&color=green" alt="PyPI - swarmauri_tool_jupyterexecutenotebook"/></a>
</p>
---
# Swarmauri Tool Jupyter Execute Notebook
Executes all cells of a Jupyter notebook using nbclient and returns the executed `NotebookNode` with captured outputs.
## Features
- Runs notebooks programmatically via the Swarmauri tool interface.
- Accepts optional per-cell timeout (default 30 seconds) and continues on cell errors.
- Returns the executed notebook object so downstream tools can inspect outputs or save it.
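Since a `NotebookNode` is a dict subclass, downstream code can walk its cells and outputs directly. A sketch using a plain dict stand-in with the same nbformat v4 layout:

```python
# Plain-dict stand-in for an executed NotebookNode (same v4 layout).
executed_nb = {
    "cells": [
        {"cell_type": "code",
         "outputs": [{"output_type": "stream", "name": "stdout", "text": "42\n"}]},
        {"cell_type": "markdown"},
    ]
}

def collect_stdout(nb: dict) -> list:
    """Gather stdout stream text from every code cell's outputs."""
    texts = []
    for cell in nb["cells"]:
        if cell.get("cell_type") != "code":
            continue
        for out in cell.get("outputs", []):
            if out.get("output_type") == "stream" and out.get("name") == "stdout":
                texts.append(out["text"])
    return texts

print(collect_stdout(executed_nb))  # → ['42\n']
```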
## Prerequisites
- Python 3.10 or newer.
- Jupyter/nbconvert stack available (`nbclient`, `nbformat`, `ipykernel`, etc.—installed automatically).
- Notebook dependencies must be installed in the environment where the tool runs.
## Installation
```bash
# pip
pip install swarmauri_tool_jupyterexecutenotebook
# poetry
poetry add swarmauri_tool_jupyterexecutenotebook
# uv (pyproject-based projects)
uv add swarmauri_tool_jupyterexecutenotebook
```
## Quickstart
```python
from swarmauri_tool_jupyterexecutenotebook import JupyterExecuteNotebookTool
executor = JupyterExecuteNotebookTool()
executed_nb = executor(
notebook_path="notebooks/example.ipynb",
timeout=120,
)
# Save the executed notebook
import nbformat
from pathlib import Path
Path("notebooks/example-executed.ipynb").write_text(
nbformat.writes(executed_nb),
encoding="utf-8",
)
```
## Tips
- Increase `timeout` for notebooks with long-running cells to avoid `CellTimeoutError`.
- Set `allow_errors=True` (default in the tool) so execution continues after a failing cell while error traces are still recorded.
- Combine with `JupyterClearOutputTool` or conversion tools to build end-to-end notebook pipelines.
## Want to help?
If you want to contribute to swarmauri-sdk, read up on our [guidelines for contributing](https://github.com/swarmauri/swarmauri-sdk/blob/master/contributing.md) that will help you get started.
| text/markdown | Jacob Stewart | jacob@swarmauri.com | null | null | null | swarmauri, tool, jupyterexecutenotebook, designed, execute, all, cells, jupyter, notebook, nbconvert, executepreprocessor, capturing, outputs, testing, reporting | [
"License :: OSI Approved :: Apache Software License",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: 3.13",
"Natural Language :: English",
"Development Status :: 3 - Alpha",
"Intended Audi... | [] | null | null | <3.13,>=3.10 | [] | [] | [] | [
"ipykernel>=6.29.5",
"nbconvert>=7.16.6",
"nbformat>=5.10.4",
"swarmauri_base",
"swarmauri_core",
"swarmauri_standard"
] | [] | [] | [] | [] | uv/0.10.4 {"installer":{"name":"uv","version":"0.10.4","subcommand":["publish"]},"python":null,"implementation":{"name":null,"version":null},"distro":{"name":"Ubuntu","version":"24.04","id":"noble","libc":null},"system":{"name":null,"release":null},"cpu":null,"openssl_version":null,"setuptools_version":null,"rustc_version":null,"ci":true} | 2026-02-18T09:50:06.442899 | swarmauri_tool_jupyterexecutenotebook-0.9.3.dev5-py3-none-any.whl | 9,606 | 5b/4f/613187924338247247f27da1f93dff84f53acf2818c87899b4fb704eda91/swarmauri_tool_jupyterexecutenotebook-0.9.3.dev5-py3-none-any.whl | py3 | bdist_wheel | null | false | e54920a6ef7e939eafa32e9026a0571f | 8d06d4ebbf2f2c91d0a8434d8683bf9944f033cebdac2ceadf3d3dc306234b2f | 5b4f613187924338247247f27da1f93dff84f53acf2818c87899b4fb704eda91 | Apache-2.0 | [
"LICENSE"
] | 0 |
2.4 | swarmauri_tool_jupyterexecutecell | 0.9.3.dev5 | A tool designed to execute a single code cell in a running Jupyter kernel using jupyter_client, capturing its output and errors. | 
<p align="center">
<a href="https://pypi.org/project/swarmauri_tool_jupyterexecutecell/">
<img src="https://img.shields.io/pypi/dm/swarmauri_tool_jupyterexecutecell" alt="PyPI - Downloads"/></a>
<a href="https://hits.sh/github.com/swarmauri/swarmauri-sdk/tree/master/pkgs/community/swarmauri_tool_jupyterexecutecell/">
<img alt="Hits" src="https://hits.sh/github.com/swarmauri/swarmauri-sdk/tree/master/pkgs/community/swarmauri_tool_jupyterexecutecell.svg"/></a>
<a href="https://pypi.org/project/swarmauri_tool_jupyterexecutecell/">
<img src="https://img.shields.io/pypi/pyversions/swarmauri_tool_jupyterexecutecell" alt="PyPI - Python Version"/></a>
<a href="https://pypi.org/project/swarmauri_tool_jupyterexecutecell/">
<img src="https://img.shields.io/pypi/l/swarmauri_tool_jupyterexecutecell" alt="PyPI - License"/></a>
<a href="https://pypi.org/project/swarmauri_tool_jupyterexecutecell/">
<img src="https://img.shields.io/pypi/v/swarmauri_tool_jupyterexecutecell?label=swarmauri_tool_jupyterexecutecell&color=green" alt="PyPI - swarmauri_tool_jupyterexecutecell"/></a>
</p>
---
# Swarmauri Tool Jupyter Execute Cell
Executes Python code in the active Jupyter kernel and captures stdout/stderr/errors for downstream tooling.
## Features
- Accepts raw code strings and optionally a timeout.
- Returns a dict with `stdout`, `stderr`, and `error` keys.
- Built on `jupyter_client` to talk to the running kernel.
## Prerequisites
- Python 3.10 or newer.
- Jupyter kernel running in the environment (IPython/Jupyter installed).
## Installation
```bash
# pip
pip install swarmauri_tool_jupyterexecutecell
# poetry
poetry add swarmauri_tool_jupyterexecutecell
# uv (pyproject-based projects)
uv add swarmauri_tool_jupyterexecutecell
```
## Quickstart
```python
from swarmauri_tool_jupyterexecutecell import JupyterExecuteCellTool
code = "print('Hello from Swarmauri!')"
result = JupyterExecuteCellTool()(code, timeout=60)
print("stdout:", result["stdout"])
print("stderr:", result["stderr"])
print("error:", result["error"])
```
## Tips
- Increase `timeout` for cells that perform long-running tasks.
- The tool executes in the current kernel—make sure dependencies are already imported/installed in that environment.
- Handle errors gracefully by checking the `error` field before using results.
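That last check can be wrapped in a small guard so callers never consume results from a failed cell. The keys match the Quickstart above; the helper name is illustrative:

```python
def unwrap(result: dict) -> str:
    """Return stdout only when the cell ran cleanly; raise otherwise."""
    if result.get("error"):
        raise RuntimeError(result["error"])
    return result["stdout"]

ok = {"stdout": "Hello from Swarmauri!\n", "stderr": "", "error": None}
print(unwrap(ok))
```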
## Want to help?
If you want to contribute to swarmauri-sdk, read up on our [guidelines for contributing](https://github.com/swarmauri/swarmauri-sdk/blob/master/contributing.md) that will help you get started.
| text/markdown | Jacob Stewart | jacob@swarmauri.com | null | null | null | swarmauri, tool, jupyterexecutecell, designed, execute, single, code, cell, running, jupyter, kernel, client, capturing, its, output, errors | [
"License :: OSI Approved :: Apache Software License",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: 3.13",
"Natural Language :: English",
"Development Status :: 3 - Alpha",
"Intended Audi... | [] | null | null | <3.13,>=3.10 | [] | [] | [] | [
"IPython>=8.32.0",
"ipykernel==6.29.5",
"jupyter_client>=8.6.3",
"swarmauri_base",
"swarmauri_core",
"swarmauri_standard"
] | [] | [] | [] | [] | uv/0.10.4 {"installer":{"name":"uv","version":"0.10.4","subcommand":["publish"]},"python":null,"implementation":{"name":null,"version":null},"distro":{"name":"Ubuntu","version":"24.04","id":"noble","libc":null},"system":{"name":null,"release":null},"cpu":null,"openssl_version":null,"setuptools_version":null,"rustc_version":null,"ci":true} | 2026-02-18T09:50:03.997903 | swarmauri_tool_jupyterexecutecell-0.9.3.dev5-py3-none-any.whl | 9,963 | 22/75/549119febb9696edc7f49cf0d09cd0d2c85d25f370830f2d13b6b7833bc9/swarmauri_tool_jupyterexecutecell-0.9.3.dev5-py3-none-any.whl | py3 | bdist_wheel | null | false | 83fd2bb2c4bd7abbcc71b4249ad0dea9 | d99aa0ba61176b22f284d4e261596a3177901d17e605aac92b0403c4550ac4c5 | 2275549119febb9696edc7f49cf0d09cd0d2c85d25f370830f2d13b6b7833bc9 | Apache-2.0 | [
"LICENSE"
] | 0 |
2.4 | swarmauri_tool_jupyterdisplayhtml | 0.9.3.dev5 | A tool designed to render HTML content within a Jupyter Notebook using IPython's HTML display method. | 
<p align="center">
<a href="https://pypi.org/project/swarmauri_tool_jupyterdisplayhtml/">
<img src="https://img.shields.io/pypi/dm/swarmauri_tool_jupyterdisplayhtml" alt="PyPI - Downloads"/></a>
<a href="https://hits.sh/github.com/swarmauri/swarmauri-sdk/tree/master/pkgs/community/swarmauri_tool_jupyterdisplayhtml/">
<img alt="Hits" src="https://hits.sh/github.com/swarmauri/swarmauri-sdk/tree/master/pkgs/community/swarmauri_tool_jupyterdisplayhtml.svg"/></a>
<a href="https://pypi.org/project/swarmauri_tool_jupyterdisplayhtml/">
<img src="https://img.shields.io/pypi/pyversions/swarmauri_tool_jupyterdisplayhtml" alt="PyPI - Python Version"/></a>
<a href="https://pypi.org/project/swarmauri_tool_jupyterdisplayhtml/">
<img src="https://img.shields.io/pypi/l/swarmauri_tool_jupyterdisplayhtml" alt="PyPI - License"/></a>
<a href="https://pypi.org/project/swarmauri_tool_jupyterdisplayhtml/">
<img src="https://img.shields.io/pypi/v/swarmauri_tool_jupyterdisplayhtml?label=swarmauri_tool_jupyterdisplayhtml&color=green" alt="PyPI - swarmauri_tool_jupyterdisplayhtml"/></a>
</p>
---
# Swarmauri Tool Jupyter Display HTML
Specialized wrapper for displaying HTML snippets in Jupyter notebooks via IPython's `HTML` display helper.
## Features
- Accepts raw HTML strings and renders them inline in Jupyter.
- Returns status information (`success`/`error`) for integration with larger tool flows.
- Subclass of `ToolBase`, so it plugs into Swarmauri toolchains seamlessly.
## Prerequisites
- Python 3.10 or newer.
- Jupyter/IPython environment with display capabilities.
## Installation
```bash
# pip
pip install swarmauri_tool_jupyterdisplayhtml
# poetry
poetry add swarmauri_tool_jupyterdisplayhtml
# uv (pyproject-based projects)
uv add swarmauri_tool_jupyterdisplayhtml
```
## Quickstart
```python
from swarmauri_tool_jupyterdisplayhtml import JupyterDisplayHTMLTool
tool = JupyterDisplayHTMLTool()
result = tool("""
<h2>Swarmauri</h2>
<p>This HTML was rendered by JupyterDisplayHTMLTool.</p>
""")
print(result)
```
## Tips
- Wrap the call in Swarmauri agents to surface generated HTML reports or tables.
- Validate user-provided HTML before rendering to avoid XSS issues in shared notebooks.
- Combine with other tools that produce HTML (e.g., Folium maps) to display results inline.
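For the validation tip above, untrusted fragments can at minimum be escaped with the stdlib `html` module before being embedded in markup you control:

```python
import html

# Escape the untrusted part, then interpolate it into trusted markup.
user_input = "<script>alert('xss')</script>"
safe = html.escape(user_input)

snippet = f"<p>User said: {safe}</p>"
print(snippet)
```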
## Want to help?
If you want to contribute to swarmauri-sdk, read up on our [guidelines for contributing](https://github.com/swarmauri/swarmauri-sdk/blob/master/contributing.md) that will help you get started.
| text/markdown | Jacob Stewart | jacob@swarmauri.com | null | null | null | swarmauri, tool, jupyterdisplayhtml, designed, render, html, content, within, jupyter, notebook, ipython, display, method | [
"License :: OSI Approved :: Apache Software License",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: 3.13",
"Natural Language :: English",
"Development Status :: 3 - Alpha",
"Intended Audi... | [] | null | null | <3.13,>=3.10 | [] | [] | [] | [
"IPython>=8.32.0",
"swarmauri_base",
"swarmauri_core",
"swarmauri_standard"
] | [] | [] | [] | [] | uv/0.10.4 {"installer":{"name":"uv","version":"0.10.4","subcommand":["publish"]},"python":null,"implementation":{"name":null,"version":null},"distro":{"name":"Ubuntu","version":"24.04","id":"noble","libc":null},"system":{"name":null,"release":null},"cpu":null,"openssl_version":null,"setuptools_version":null,"rustc_version":null,"ci":true} | 2026-02-18T09:50:00.425731 | swarmauri_tool_jupyterdisplayhtml-0.9.3.dev5-py3-none-any.whl | 8,878 | 7c/f4/1e79a8494a2f9560c9d7ec13745b2d28dc8cb69f79a883e513289f96756f/swarmauri_tool_jupyterdisplayhtml-0.9.3.dev5-py3-none-any.whl | py3 | bdist_wheel | null | false | 9e2c189b35c3934f5502f797aeaa7672 | d0181fb202a1e8623f3659b825243e416e7aa6aaba3c6ee7dfd6ef9e180e29a1 | 7cf41e79a8494a2f9560c9d7ec13745b2d28dc8cb69f79a883e513289f96756f | Apache-2.0 | [
"LICENSE"
] | 0 |
2.4 | experiment-utils-pd | 0.1.11 | Set of utility functions for analyzing experimental and observational data | [](https://github.com/sdaza/experiment-utils-pd/actions/workflows/ci.yaml)
[](https://pypi.org/project/experiment-utils-pd/)
# Experiment Utils
A comprehensive Python package for designing, analyzing, and validating experiments with advanced causal inference capabilities.
## Features
- **Experiment Analysis**: Estimate treatment effects with multiple adjustment methods (covariate balancing, regression, IV, AIPW)
- **Multiple Outcome Models**: OLS, logistic, Poisson, negative binomial, and Cox proportional hazards
- **Doubly Robust Estimation**: Augmented IPW (AIPW) for OLS, logistic, Poisson, and negative binomial models
- **Survival Analysis**: Cox proportional hazards with IPW and regression adjustment
- **Covariate Balance**: Check and visualize balance between treatment groups
- **Marginal Effects**: Average marginal effects for GLMs (probability change, count change)
- **Bootstrap Inference**: Robust confidence intervals and p-values via bootstrap resampling
- **Multiple Comparison Correction**: Family-wise error rate control (Bonferroni, Holm, Sidak, FDR)
- **Power Analysis**: Calculate statistical power and find optimal sample sizes
- **Retrodesign Analysis**: Assess reliability of study designs (Type S/M errors)
- **Random Assignment**: Generate balanced treatment assignments with stratification
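The family-wise corrections listed above are simple to sketch in isolation; Bonferroni, for instance, multiplies each p-value by the number of comparisons and caps at 1. A pure-Python illustration (not the package's API):

```python
def bonferroni(pvalues: list) -> list:
    """Bonferroni-adjusted p-values: min(1, p * m) for m comparisons."""
    m = len(pvalues)
    return [min(1.0, p * m) for p in pvalues]

adjusted = bonferroni([0.01, 0.04, 0.40])
print(adjusted)  # roughly [0.03, 0.12, 1.0]
```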
## Table of Contents
- [Experiment Utils](#experiment-utils)
- [Features](#features)
- [Table of Contents](#table-of-contents)
- [Installation](#installation)
- [From PyPI (Recommended)](#from-pypi-recommended)
- [From GitHub (Latest Development Version)](#from-github-latest-development-version)
- [Quick Start](#quick-start)
- [User Guide](#user-guide)
- [Basic Experiment Analysis](#basic-experiment-analysis)
- [Checking Covariate Balance](#checking-covariate-balance)
- [Covariate Adjustment Methods](#covariate-adjustment-methods)
- [Outcome Models](#outcome-models)
- [Survival Analysis (Cox Models)](#survival-analysis-cox-models)
- [Bootstrap Inference](#bootstrap-inference)
- [Multiple Experiments](#multiple-experiments)
- [Categorical Treatment Variables](#categorical-treatment-variables)
- [Instrumental Variables (IV)](#instrumental-variables-iv)
- [Multiple Comparison Adjustments](#multiple-comparison-adjustments)
- [Non-Inferiority Testing](#non-inferiority-testing)
- [Combining Effects (Meta-Analysis)](#combining-effects-meta-analysis)
- [Retrodesign Analysis](#retrodesign-analysis)
- [Power Analysis](#power-analysis)
- [Calculate Power](#calculate-power)
- [Power from Real Data](#power-from-real-data)
- [Grid Power Simulation](#grid-power-simulation)
- [Find Sample Size](#find-sample-size)
- [Simulate Retrodesign](#simulate-retrodesign)
- [Utilities](#utilities)
- [Balanced Random Assignment](#balanced-random-assignment)
- [Standalone Balance Checker](#standalone-balance-checker)
- [Advanced Topics](#advanced-topics)
- [Covariate Adjustment Methods](#covariate-adjustment-methods)
- [Outcome Models](#outcome-models)
- [Survival Analysis (Cox Models)](#survival-analysis-cox-models)
- [When to Use Different Adjustment Methods](#when-to-use-different-adjustment-methods)
- [Non-Collapsibility of Hazard and Odds Ratios](#non-collapsibility-of-hazard-and-odds-ratios)
- [Handling Missing Data](#handling-missing-data)
- [Best Practices](#best-practices)
- [Common Workflows](#common-workflows)
- [Contributing](#contributing)
- [License](#license)
- [Citation](#citation)
## Installation
### From PyPI (Recommended)
```bash
pip install experiment-utils-pd
```
### From GitHub (Latest Development Version)
```bash
pip install git+https://github.com/sdaza/experiment-utils-pd.git
```
## Quick Start
Here's a complete example analyzing an A/B test with covariate adjustment:
```python
import pandas as pd
import numpy as np
from experiment_utils.experiment_analyzer import ExperimentAnalyzer
# Create sample experiment data
np.random.seed(42)
df = pd.DataFrame({
"user_id": range(1000),
"treatment": np.random.choice([0, 1], 1000),
"conversion": np.random.binomial(1, 0.15, 1000),
"revenue": np.random.normal(50, 20, 1000),
"age": np.random.normal(35, 10, 1000),
"is_member": np.random.choice([0, 1], 1000),
})
# Initialize analyzer
analyzer = ExperimentAnalyzer(
data=df,
treatment_col="treatment",
outcomes=["conversion", "revenue"],
covariates=["age", "is_member"],
adjustment="balance", # Adjust for covariates
balance_method="ps-logistic",
)
# Estimate treatment effects
analyzer.get_effects()
# View results
results = analyzer.results
print(results[["outcome", "absolute_effect", "relative_effect",
"pvalue", "stat_significance"]])
# Balance is automatically calculated when covariates are provided
balance = analyzer.balance
print(f"\nBalance: {balance['balance_flag'].mean():.1%} of covariates balanced")
```
Output:
```
outcome absolute_effect relative_effect pvalue stat_significance
0 conversion 0.0234 0.1623 0.0456 1
1 revenue 2.1450 0.0429 0.1234 0
Balance: 100.0% of covariates balanced
```
## User Guide
### Basic Experiment Analysis
Analyze a simple A/B test without covariate adjustment:
```python
from experiment_utils.experiment_analyzer import ExperimentAnalyzer
# Simple analysis (no covariates)
analyzer = ExperimentAnalyzer(
data=df,
treatment_col="treatment",
outcomes=["conversion"],
)
analyzer.get_effects()
print(analyzer.results)
```
**Key columns in results:**
- `outcome`: Outcome variable name
- `absolute_effect`: Treatment effect (treatment - control mean)
- `relative_effect`: Lift (absolute_effect / control_mean)
- `standard_error`: Standard error of the effect
- `pvalue`: P-value for hypothesis test
- `stat_significance`: 1 if significant at alpha level, 0 otherwise
- `abs_effect_lower/upper`: Confidence interval bounds (absolute)
- `rel_effect_lower/upper`: Confidence interval bounds (relative)
### Checking Covariate Balance
**Balance is automatically calculated** when you provide covariates and run `get_effects()`:
```python
analyzer = ExperimentAnalyzer(
data=df,
treatment_col="treatment",
outcomes=["conversion"],
covariates=["age", "income", "region"], # Can include categorical
)
analyzer.get_effects()
# Balance is automatically available
balance = analyzer.balance
print(balance[["covariate", "smd", "balance_flag"]])
print(f"\nBalanced: {balance['balance_flag'].mean():.1%}")
# Identify imbalanced covariates
imbalanced = balance[balance["balance_flag"] == 0]
if not imbalanced.empty:
    print(f"Imbalanced: {imbalanced['covariate'].tolist()}")
```
**Check balance independently** (optional, before running `get_effects()` or with custom parameters):
```python
# Check balance with different threshold
balance_strict = analyzer.check_balance(threshold=0.05)
```
**Balance metrics explained:**
- `smd`: Standardized Mean Difference (|SMD| < 0.1 indicates good balance)
- `balance_flag`: 1 if balanced, 0 if imbalanced
- `mean_treated/control`: Group means for the covariate
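For reference, the SMD has a simple closed form. The sketch below uses one common definition (pooled standard deviation); it is illustrative and may differ in detail from the package's internal calculation:

```python
import numpy as np

def smd(treated: np.ndarray, control: np.ndarray) -> float:
    """Standardized mean difference using the pooled standard deviation."""
    pooled_sd = np.sqrt((treated.var(ddof=1) + control.var(ddof=1)) / 2)
    return (treated.mean() - control.mean()) / pooled_sd

rng = np.random.default_rng(0)
age_treated = rng.normal(36, 10, 500)  # treated group slightly older
age_control = rng.normal(35, 10, 500)
print(f"SMD: {smd(age_treated, age_control):.3f}")
```

An |SMD| below 0.1 is the conventional threshold for calling a covariate balanced.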
### Covariate Adjustment Methods
When treatment and control groups differ on covariates, use covariate adjustment to reduce the resulting bias:
**Option 1: Propensity Score Weighting (Recommended)**
```python
analyzer = ExperimentAnalyzer(
data=df,
treatment_col="treatment",
outcomes=["conversion", "revenue"],
covariates=["age", "income", "is_member"],
adjustment="balance",
balance_method="ps-logistic", # Logistic regression for propensity scores
target_effect="ATT", # Average Treatment Effect on Treated
)
analyzer.get_effects()
# Check post-adjustment balance
print(analyzer.adjusted_balance)
# Retrieve weights for transparency
weights_df = analyzer.weights
print(weights_df.head())
```
**Available methods:**
- `ps-logistic`: Propensity score via logistic regression (fast, interpretable)
- `ps-xgboost`: Propensity score via XGBoost (flexible, non-linear)
- `entropy`: Entropy balancing (exact moment matching)
**Target effects:**
- `ATT`: Average Treatment Effect on Treated (most common)
- `ATE`: Average Treatment Effect (entire population)
- `ATC`: Average Treatment Effect on Control
**Option 2: Regression Adjustment**
```python
analyzer = ExperimentAnalyzer(
data=df,
treatment_col="treatment",
outcomes=["conversion"],
regression_covariates=["age", "income"], # Use regression_covariates
adjustment=None, # No weighting, just regression
)
analyzer.get_effects()
```
**Option 3: IPW + Regression (Combined)**
Use both propensity score weighting and regression covariates for extra robustness:
```python
analyzer = ExperimentAnalyzer(
data=df,
treatment_col="treatment",
outcomes=["conversion", "revenue"],
covariates=["age", "income", "is_member"],
adjustment="balance",
regression_covariates=["age", "income"], # Also include in regression
target_effect="ATE",
)
analyzer.get_effects()
```
**Option 4: Doubly Robust / AIPW**
Augmented Inverse Probability Weighting is consistent if either the propensity score model or the outcome model is correctly specified. Available for OLS, logistic, Poisson, and negative binomial models:
```python
analyzer = ExperimentAnalyzer(
data=df,
treatment_col="treatment",
outcomes=["revenue"],
covariates=["age", "income", "is_member"],
adjustment="aipw",
target_effect="ATE",
)
analyzer.get_effects()
# AIPW results include influence-function based standard errors
print(analyzer.results[["outcome", "absolute_effect", "standard_error", "pvalue"]])
```
AIPW works by fitting separate outcome models for treated and control groups, predicting potential outcomes for all units, and combining them with IPW via the augmented influence function. Standard errors are derived from the influence function, making them robust without requiring bootstrap.
> **Note**: AIPW is not supported for Cox survival models due to the complexity of survival-specific doubly robust methods. For Cox models, use IPW + Regression instead.
### Outcome Models
By default, all outcomes are analyzed with OLS. Use `outcome_models` to specify different model types:
**Logistic regression (binary outcomes)**
```python
analyzer = ExperimentAnalyzer(
data=df,
treatment_col="treatment",
outcomes=["converted", "churned"],
outcome_models="logistic", # Apply to all outcomes
covariates=["age", "tenure"],
)
analyzer.get_effects()
# By default, results report marginal effects (probability change in percentage points)
# Use compute_marginal_effects=False for odds ratios instead
```
**Poisson / Negative binomial (count outcomes)**
```python
analyzer = ExperimentAnalyzer(
data=df,
treatment_col="treatment",
outcomes=["orders", "page_views"],
outcome_models="poisson", # or "negative_binomial" for overdispersed counts
covariates=["age", "tenure"],
)
analyzer.get_effects()
# Results report change in expected count (marginal effects) by default
# Use compute_marginal_effects=False for rate ratios
```
**Mixed models per outcome**
```python
analyzer = ExperimentAnalyzer(
data=df,
treatment_col="treatment",
outcomes=["revenue", "converted", "orders"],
outcome_models={
"revenue": "ols",
"converted": "logistic",
"orders": ["poisson", "negative_binomial"], # Compare both
},
covariates=["age"],
)
analyzer.get_effects()
# Results include model_type column to distinguish
print(analyzer.results[["outcome", "model_type", "absolute_effect", "pvalue"]])
```
**Marginal effects options**
```python
# Average Marginal Effect (default) - recommended
analyzer = ExperimentAnalyzer(..., compute_marginal_effects="overall")
# Marginal Effect at the Mean
analyzer = ExperimentAnalyzer(..., compute_marginal_effects="mean")
# Odds ratios / rate ratios instead of marginal effects
analyzer = ExperimentAnalyzer(..., compute_marginal_effects=False)
```
| `compute_marginal_effects` | Logistic output | Poisson/NB output |
|---|---|---|
| `"overall"` (default) | Probability change (pp) | Change in expected count |
| `"mean"` | Probability change at mean | Count change at mean |
| `False` | Odds ratio | Rate ratio |
### Survival Analysis (Cox Models)
Analyze time-to-event outcomes using Cox proportional hazards:
```python
from experiment_utils.experiment_analyzer import ExperimentAnalyzer
# Specify Cox outcomes as tuples: (time_col, event_col)
analyzer = ExperimentAnalyzer(
data=df,
treatment_col="treatment",
outcomes=[("time_to_event", "event_occurred")],
outcome_models="cox",
covariates=["age", "income"],
)
analyzer.get_effects()
# Results report log(HR) as absolute_effect and HR as relative_effect
print(analyzer.results[["outcome", "absolute_effect", "relative_effect", "pvalue"]])
```
**Cox with regression adjustment**
```python
analyzer = ExperimentAnalyzer(
data=df,
treatment_col="treatment",
outcomes=[("survival_time", "died")],
outcome_models="cox",
covariates=["age", "comorbidity_score"],
regression_covariates=["age", "comorbidity_score"],
)
analyzer.get_effects()
```
**Cox with IPW + Regression (recommended for confounded data)**
```python
analyzer = ExperimentAnalyzer(
data=df,
treatment_col="treatment",
outcomes=[("survival_time", "died")],
outcome_models="cox",
covariates=["age", "comorbidity_score"],
adjustment="balance",
regression_covariates=["age", "comorbidity_score"], # Include both
target_effect="ATE",
)
analyzer.get_effects()
```
> **Note**: IPW alone for Cox models estimates the marginal hazard ratio, which differs from the conditional HR due to non-collapsibility. The package will warn you if you use IPW without regression covariates. See [Non-Collapsibility](#non-collapsibility-of-hazard-and-odds-ratios) for details.
**Alternative: separate event_col parameter**
```python
# Equivalent to tuple notation
analyzer = ExperimentAnalyzer(
data=df,
treatment_col="treatment",
outcomes=["survival_time"],
outcome_models="cox",
event_col="died", # Applies to all outcomes
)
```
**Bootstrap for survival models**
Bootstrap can be slow for Cox models with low event rates. Use `skip_bootstrap_for_survival` to fall back to robust standard errors:
```python
analyzer = ExperimentAnalyzer(
data=df,
treatment_col="treatment",
outcomes=[("survival_time", "died")],
outcome_models="cox",
bootstrap=True,
skip_bootstrap_for_survival=True, # Use Cox robust SEs instead
)
```
### Bootstrap Inference
Get robust confidence intervals and p-values via bootstrapping:
```python
analyzer = ExperimentAnalyzer(
data=df,
treatment_col="treatment",
outcomes=["conversion"],
covariates=["age", "income"],
adjustment="balance",
bootstrap=True,
bootstrap_iterations=2000,
bootstrap_ci_method="percentile",
bootstrap_seed=42, # For reproducibility
)
analyzer.get_effects()
# Bootstrap results include robust CIs
results = analyzer.results
print(results[["outcome", "absolute_effect", "abs_effect_lower",
"abs_effect_upper", "inference_method"]])
```
**When to use bootstrap:**
- Small sample sizes
- Non-normal distributions
- Skepticism about asymptotic assumptions
- Want robust, distribution-free inference
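For intuition, the percentile method itself is simple: resample the data with replacement, recompute the effect each time, and take empirical quantiles. A standalone sketch (not the analyzer's internals), using deliberately skewed outcomes:

```python
import numpy as np

rng = np.random.default_rng(42)
y_treat = rng.exponential(1.2, 300)  # skewed outcomes, modest sample
y_ctrl = rng.exponential(1.0, 300)

# resample each group with replacement and recompute the mean difference
boot = np.array([
    rng.choice(y_treat, y_treat.size).mean() - rng.choice(y_ctrl, y_ctrl.size).mean()
    for _ in range(2000)
])
lo, hi = np.percentile(boot, [2.5, 97.5])
print(f"Effect: {y_treat.mean() - y_ctrl.mean():.3f}, 95% CI: [{lo:.3f}, {hi:.3f}]")
```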
### Multiple Experiments
Analyze multiple experiments simultaneously:
```python
# Data with multiple experiments
df = pd.DataFrame({
"experiment": ["exp_A", "exp_A", "exp_B", "exp_B"] * 100,
"treatment": [0, 1, 0, 1] * 100,
"outcome": np.random.randn(400),
"age": np.random.normal(35, 10, 400),
})
analyzer = ExperimentAnalyzer(
data=df,
treatment_col="treatment",
outcomes=["outcome"],
experiment_identifier="experiment", # Group by experiment
covariates=["age"],
)
analyzer.get_effects()
# Results include experiment column
results = analyzer.results
print(results.groupby("experiment")[["absolute_effect", "pvalue"]].first())
# Balance per experiment (automatically calculated)
balance = analyzer.balance
print(balance.groupby("experiment")["balance_flag"].mean())
```
### Categorical Treatment Variables
Compare multiple treatment variants:
```python
df = pd.DataFrame({
"treatment": np.random.choice(["control", "variant_A", "variant_B"], 1000),
"outcome": np.random.randn(1000),
})
analyzer = ExperimentAnalyzer(
data=df,
treatment_col="treatment",
outcomes=["outcome"],
)
analyzer.get_effects()
# Results show all pairwise comparisons
results = analyzer.results
print(results[["treatment_group", "control_group", "absolute_effect", "pvalue"]])
```
### Instrumental Variables (IV)
When treatment assignment is confounded (e.g., non-compliance in an experiment), use an instrument -- a variable that affects treatment receipt but only affects the outcome through treatment:
```python
import numpy as np
import pandas as pd
from experiment_utils.experiment_analyzer import ExperimentAnalyzer
# Simulate encouragement design with non-compliance
np.random.seed(42)
n = 5000
Z = np.random.binomial(1, 0.5, n) # Random encouragement (instrument)
U = np.random.normal(0, 1, n) # Unobserved confounder
D = np.random.binomial(1, 1 / (1 + np.exp(-(-1 + 0.5 * U + 2.5 * Z)))) # Actual treatment (confounded)
Y = 2.0 * D + 1.0 * U + np.random.normal(0, 1, n) # Outcome (true LATE = 2.0)
df = pd.DataFrame({"encouragement": Z, "treatment": D, "outcome": Y})
# IV estimation using encouragement as instrument for treatment
analyzer = ExperimentAnalyzer(
data=df,
treatment_col="treatment",
outcomes=["outcome"],
instrument_col="encouragement",
adjustment="IV",
)
analyzer.get_effects()
print(analyzer.results[["outcome", "absolute_effect", "standard_error", "pvalue"]])
```
**IV with covariates:**
```python
analyzer = ExperimentAnalyzer(
data=df,
treatment_col="treatment",
outcomes=["outcome"],
instrument_col="encouragement",
adjustment="IV",
covariates=["age", "region"], # Balance checked on instrument
)
analyzer.get_effects()
```
**Key assumptions for valid IV estimation:**
- **Relevance**: The instrument must be correlated with treatment (check first-stage F-statistic)
- **Exclusion restriction**: The instrument affects the outcome *only* through treatment
- **Independence**: The instrument is independent of unobserved confounders (holds by design in randomized encouragement)
> **Note**: IV estimation is only supported for OLS outcome models. For other model types (logistic, Cox, etc.), the analyzer will fall back to unadjusted estimation with a warning.
### Multiple Comparison Adjustments
Control family-wise error rate when testing multiple hypotheses:
```python
analyzer = ExperimentAnalyzer(
data=df,
treatment_col="treatment",
outcomes=["conversion", "revenue", "retention", "engagement"],
)
analyzer.get_effects()
# Apply Bonferroni correction
analyzer.adjust_pvalues(method="bonferroni")
results = analyzer.results
print(results[["outcome", "pvalue", "pvalue_mcp", "stat_significance_mcp"]])
```
**Available methods:**
- `bonferroni`: Most conservative, controls FWER
- `holm`: Less conservative than Bonferroni, still controls FWER
- `sidak`: Similar to Bonferroni, assumes independence
- `fdr_bh`: Benjamini-Hochberg FDR control (less conservative)
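To see how the methods differ, the same corrections can be previewed on raw p-values with statsmodels' `multipletests` (shown as a standalone illustration, independent of the analyzer):

```python
from statsmodels.stats.multitest import multipletests

pvals = [0.010, 0.020, 0.030, 0.400]
for method in ("bonferroni", "holm", "fdr_bh"):
    reject, adjusted, _, _ = multipletests(pvals, alpha=0.05, method=method)
    # fdr_bh typically rejects more hypotheses than bonferroni/holm
    print(f"{method:12s} adjusted={adjusted.round(3)} rejected={reject.sum()}")
```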
### Non-Inferiority Testing
Test if a new treatment is "not worse" than control:
```python
analyzer = ExperimentAnalyzer(
data=df,
treatment_col="treatment",
outcomes=["conversion"],
)
analyzer.get_effects()
# Test if treatment is within 10% of control
analyzer.test_non_inferiority(relative_margin=0.10)
results = analyzer.results
print(results[["outcome", "relative_effect", "is_non_inferior",
"non_inferiority_margin"]])
```
### Combining Effects (Meta-Analysis)
When you have multiple experiments or segments, pool results using fixed-effects meta-analysis or weighted averaging.
**Fixed-effects meta-analysis (`combine_effects`)**
Combines effect estimates using inverse-variance weighting, producing a pooled effect with proper standard errors:
```python
from experiment_utils.experiment_analyzer import ExperimentAnalyzer
# Analyze multiple experiments
analyzer = ExperimentAnalyzer(
data=df,
treatment_col="treatment",
outcomes=["conversion"],
experiment_identifier="experiment",
covariates=["age"],
)
analyzer.get_effects()
# Pool results across experiments using fixed-effects meta-analysis
pooled = analyzer.combine_effects(grouping_cols=["outcome"])
print(pooled[["outcome", "experiments", "absolute_effect", "standard_error", "pvalue"]])
```
**Custom grouping:**
```python
# Pool by outcome and region (e.g., combine experiments within each region)
pooled_by_region = analyzer.combine_effects(grouping_cols=["region", "outcome"])
print(pooled_by_region)
```
**Weighted average aggregation (`aggregate_effects`)**
A simpler alternative that weights by treatment group size (useful for quick summaries, but `combine_effects` provides better standard error estimates):
```python
aggregated = analyzer.aggregate_effects(grouping_cols=["outcome"])
print(aggregated[["outcome", "experiments", "absolute_effect", "pvalue"]])
```
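The arithmetic behind fixed-effects pooling is inverse-variance weighting: each effect is weighted by the reciprocal of its squared standard error. A minimal sketch with illustrative numbers (not a call into the package):

```python
import numpy as np
from scipy import stats

effects = np.array([0.021, 0.015, 0.030])  # per-experiment effect estimates
ses = np.array([0.008, 0.010, 0.012])      # their standard errors

weights = 1 / ses**2                       # inverse-variance weights
pooled = (weights * effects).sum() / weights.sum()
pooled_se = np.sqrt(1 / weights.sum())
z = pooled / pooled_se
pvalue = 2 * stats.norm.sf(abs(z))
print(f"pooled={pooled:.4f} se={pooled_se:.4f} p={pvalue:.2e}")
```

Note that the pooled standard error is always smaller than any individual experiment's standard error, which is the point of pooling.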
### Retrodesign Analysis
Assess reliability of significant results (post-hoc power analysis):
```python
analyzer = ExperimentAnalyzer(
data=df,
treatment_col="treatment",
outcomes=["conversion"],
)
analyzer.get_effects()
# Calculate Type S and Type M errors assuming true effect is 0.02
retro = analyzer.calculate_retrodesign(true_effect=0.02)
print(retro[["outcome", "power", "type_s_error", "type_m_error", "relative_bias"]])
```
**Metrics explained:**
- `power`: Probability of detecting the assumed true effect
- `type_s_error`: Probability of wrong sign when significant (if underpowered)
- `type_m_error`: Expected exaggeration ratio (mean |observed|/|true|)
- `relative_bias`: Expected bias ratio preserving signs (mean observed/true); typically lower than `type_m_error` because wrong-sign estimates partially offset overestimates
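Power and Type S error have closed forms under a normal sampling distribution, and Type M error is easy to simulate (following Gelman & Carlin, 2014). A standalone sketch, not the package's implementation:

```python
import numpy as np
from scipy import stats

def retrodesign(true_effect, se, alpha=0.05, nsim=100_000, seed=0):
    """Closed-form power/Type S; simulated Type M (exaggeration ratio)."""
    z = stats.norm.isf(alpha / 2)
    # P(|estimate| > z*se) when estimate ~ N(true_effect, se)
    power = stats.norm.sf(z - true_effect / se) + stats.norm.cdf(-z - true_effect / se)
    # P(wrong sign | significant)
    type_s = stats.norm.cdf(-z - true_effect / se) / power
    est = np.random.default_rng(seed).normal(true_effect, se, nsim)
    sig = np.abs(est) > z * se
    type_m = np.abs(est[sig]).mean() / abs(true_effect)
    return power, type_s, type_m

power, type_s, type_m = retrodesign(true_effect=0.02, se=0.015)
print(f"power={power:.2f} type_s={type_s:.4f} type_m={type_m:.2f}x")
```

With power around 25%, a significant estimate is expected to exaggerate the true effect by nearly a factor of two.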
## Power Analysis
Design well-powered experiments using simulation-based power analysis.
### Calculate Power
Estimate statistical power for a given sample size:
```python
from experiment_utils.power_sim import PowerSim
# Initialize power simulator for proportion metric
power_sim = PowerSim(
metric="proportion", # or "average" for continuous outcomes
relative_effect=False, # False = absolute effect, True = relative
variants=1, # Number of treatment variants
nsim=1000, # Number of simulations
alpha=0.05, # Significance level
alternative="two-tailed" # or "one-tailed"
)
# Calculate power
power_result = power_sim.get_power(
baseline=[0.10], # Control conversion rate
effect=[0.02], # Absolute effect size (2pp lift)
sample_size=[5000] # Total sample size
)
print(f"Power: {power_result['power'].iloc[0]:.2%}")
```
**Example: Multiple variants**
```python
# Compare 2 treatments vs control
power_sim = PowerSim(metric="proportion", variants=2, nsim=1000)
power_result = power_sim.get_power(
baseline=0.10,
effect=[0.02, 0.03], # Different effects for each variant
sample_size=6000
)
print(power_result[["comparison", "power"]])
```
### Power from Real Data
When your data doesn't follow standard parametric assumptions, estimate power by bootstrapping directly from observed data using `get_power_from_data()`. Instead of generating synthetic data from a distribution, it repeatedly samples from your actual dataset and injects the specified effect:
```python
from experiment_utils.power_sim import PowerSim
import pandas as pd
# Use real data for power estimation
power_sim = PowerSim(metric="average", variants=1, nsim=1000)
power_result = power_sim.get_power_from_data(
df=historical_data, # Your actual dataset
metric_col="revenue", # Column to test
sample_size=5000, # Sample size per group
effect=3.0, # Effect to inject (absolute)
)
print(f"Power: {power_result['power'].iloc[0]:.2%}")
```
**When to use `get_power_from_data` vs `get_power`:**
- Use `get_power_from_data` when your metric has a non-standard distribution (heavy tails, skewed, zero-inflated)
- Use `get_power` for standard parametric scenarios (proportions, means, counts)
**With compliance:**
```python
# Account for 80% compliance
power_result = power_sim.get_power_from_data(
df=historical_data,
metric_col="revenue",
sample_size=5000,
effect=3.0,
compliance=0.80,
)
```
### Grid Power Simulation
Explore power across a grid of parameter combinations using `grid_sim_power()`. This is useful for understanding how power varies with sample size, effect size, and baseline rates:
```python
from experiment_utils.power_sim import PowerSim
power_sim = PowerSim(metric="proportion", variants=1, nsim=1000)
# Simulate power across a grid of scenarios
grid_results = power_sim.grid_sim_power(
baseline_rates=[0.05, 0.10, 0.15],
effects=[0.02, 0.03, 0.05],
sample_sizes=[1000, 2000, 5000, 10000],
plot=True, # Generate power curves
)
print(grid_results.head())
```
**With multiple variants and custom compliance:**
```python
power_sim = PowerSim(metric="average", variants=2, nsim=1000)
grid_results = power_sim.grid_sim_power(
baseline_rates=[50.0],
effects=[2.0, 5.0],
sample_sizes=[500, 1000, 2000, 5000],
standard_deviations=[[20.0]],
compliances=[[0.8]],
threads=4, # Parallelize across scenarios
plot=True,
)
```
The output DataFrame includes all input parameters alongside the estimated power for each comparison, making it easy to filter and compare scenarios.
### Find Sample Size
Find the minimum sample size needed to achieve target power:
```python
from experiment_utils.power_sim import PowerSim
power_sim = PowerSim(metric="proportion", variants=1, nsim=1000)
# Find sample size for 80% power
sample_result = power_sim.find_sample_size(
target_power=0.80,
baseline=0.10,
effect=0.02
)
print(f"Required sample size: {sample_result['total_sample_size'].iloc[0]:,.0f}")
print(f"Achieved power: {sample_result['achieved_power_by_comparison'].iloc[0]:.2%}")
```
**Different power targets per comparison:**
```python
# Primary outcome needs 90%, secondary needs 80%
power_sim = PowerSim(metric="proportion", variants=2, nsim=1000)
sample_result = power_sim.find_sample_size(
target_power={(0,1): 0.90, (0,2): 0.80},
baseline=0.10,
effect=[0.05, 0.03]
)
print(sample_result[["comparison", "sample_size_by_group", "achieved_power"]])
```
**Optimize allocation ratio:**
```python
# Find optimal allocation to minimize total sample size
sample_result = power_sim.find_sample_size(
target_power=0.80,
baseline=0.10,
effect=0.05,
optimize_allocation=True
)
print(f"Optimal allocation: {sample_result['allocation_ratio'].iloc[0]}")
print(f"Total sample size: {sample_result['total_sample_size'].iloc[0]:,.0f}")
```
**Custom allocation:**
```python
# 30% control, 70% treatment
sample_result = power_sim.find_sample_size(
target_power=0.80,
baseline=0.10,
effect=0.02,
allocation_ratio=[0.3, 0.7]
)
```
### Simulate Retrodesign
Prospective analysis of Type S (sign) and Type M (magnitude) errors:
```python
from experiment_utils.power_sim import PowerSim
power_sim = PowerSim(metric="proportion", variants=1, nsim=5000)
# Simulate underpowered study
retro = power_sim.simulate_retrodesign(
true_effect=0.02,
sample_size=500,
baseline=0.10
)
print(f"Power: {retro['power'].iloc[0]:.2%}")
print(f"Type S Error: {retro['type_s_error'].iloc[0]:.2%}")
print(f"Exaggeration Ratio: {retro['exaggeration_ratio'].iloc[0]:.2f}x")
print(f"Relative Bias: {retro['relative_bias'].iloc[0]:.2f}x")
```
**Understanding retrodesign metrics:**
| Metric | Description |
|--------|-------------|
| `power` | Probability of detecting the true effect |
| `type_s_error` | Probability of getting wrong sign when significant |
| `exaggeration_ratio` | Expected overestimation (mean \|observed\|/\|true\|) |
| `relative_bias` | Expected bias preserving signs (mean observed/true) <br> Lower than exaggeration_ratio because Type S errors partially cancel out overestimates |
| `median_significant_effect` | Median effect among significant results |
| `prop_overestimate` | % of significant results that overestimate |
**Compare power scenarios:**
```python
# Low power scenario
retro_low = power_sim.simulate_retrodesign(
true_effect=0.02, sample_size=500, baseline=0.10
)
# High power scenario
retro_high = power_sim.simulate_retrodesign(
true_effect=0.02, sample_size=5000, baseline=0.10
)
print(f"Low power - Exaggeration: {retro_low['exaggeration_ratio'].iloc[0]:.2f}x, "
f"Relative bias: {retro_low['relative_bias'].iloc[0]:.2f}x")
print(f"High power - Exaggeration: {retro_high['exaggeration_ratio'].iloc[0]:.2f}x, "
f"Relative bias: {retro_high['relative_bias'].iloc[0]:.2f}x")
```
**Multiple variants:**
```python
power_sim = PowerSim(metric="proportion", variants=3, nsim=5000)
retro = power_sim.simulate_retrodesign(
true_effect=[0.02, 0.03, 0.04], # Different effects per variant
sample_size=1000,
baseline=0.10,
target_comparisons=[(0, 1), (0, 2)]
)
print(retro[["comparison", "power", "type_s_error", "exaggeration_ratio", "relative_bias"]])
```
## Utilities
### Balanced Random Assignment
Generate balanced treatment assignments with optional stratification:
```python
from experiment_utils.utils import balanced_random_assignment
import pandas as pd
import numpy as np
# Create sample data
np.random.seed(42)
users = pd.DataFrame({
"user_id": range(1000),
"age_group": np.random.choice(["18-25", "26-35", "36-45", "46+"], 1000),
"region": np.random.choice(["North", "South", "East", "West"], 1000),
})
# Simple 50/50 split
users["treatment"] = balanced_random_assignment(
users,
allocation_ratio=0.5,
seed=42
)
print(users["treatment"].value_counts())
# Output: control: 500, test: 500
```
**Stratified assignment (ensure balance within subgroups):**
```python
# Balance within age_group and region strata
users["treatment_stratified"] = balanced_random_assignment(
users,
allocation_ratio=0.5,
balance_covariates=["age_group", "region"],
check_balance=True, # Print balance diagnostics
seed=42
)
```
**Multiple variants:**
```python
# Three variants with equal allocation
users["assignment"] = balanced_random_assignment(
users,
variants=["control", "variant_A", "variant_B"]
)
# Custom allocation ratios
users["assignment_custom"] = balanced_random_assignment(
users,
variants=["control", "variant_A", "variant_B"],
allocation_ratio={"control": 0.5, "variant_A": 0.3, "variant_B": 0.2},
balance_covariates=["age_group"]
)
```
**Parameters:**
- `allocation_ratio`: Float (for binary) or dict (for multiple variants)
- `balance_covariates`: List of columns to stratify by
- `check_balance`: If True, prints balance diagnostics
- `smd_threshold`: Threshold for balance flag (default 0.1)
- `seed`: Random seed for reproducibility
### Standalone Balance Checker
Check covariate balance on any dataset without using ExperimentAnalyzer:
```python
from experiment_utils.utils import check_covariate_balance
import pandas as pd
import numpy as np
# Create sample data with imbalance
np.random.seed(42)
n_treatment = 300
n_control = 200
df = pd.concat([
pd.DataFrame({
"treatment": [1] * n_treatment,
"age": np.random.normal(40, 10, n_treatment), # Older in treatment
"income": np.random.normal(60000, 15000, n_treatment), # Higher income
}),
pd.DataFrame({
"treatment": [0] * n_control,
"age": np.random.normal(30, 10, n_control), # Younger in control
"income": np.random.normal(45000, 15000, n_control), # Lower income
})
])
# Check balance
balance = check_covariate_balance(
data=df,
treatment_col="treatment",
covariates=["age", "income"],
threshold=0.1 # SMD threshold
)
print(balance)
```
Output:
```
covariate mean_treated mean_control smd balance_flag
0 age 40.23 30.15 1.012345 0
1 income 59823.45 45234.12 0.923456 0
```
**With categorical variables:**
```python
df["region"] = np.random.choice(["North", "South", "East", "West"], len(df))
balance = check_covariate_balance(
data=df,
treatment_col="treatment",
covariates=["age", "income", "region"], # Automatic categorical detection
threshold=0.1
)
# Region will be expanded to dummy variables
print(balance[balance["covariate"].str.contains("region")])
```
**Use cases:**
- Pre-experiment: Check if randomization worked
- Post-assignment: Validate treatment assignment quality
- Observational data: Assess comparability before adjustment
- Research: Standalone balance analysis for publications
## Advanced Topics
### When to Use Different Adjustment Methods
| Method | `adjustment` | `regression_covariates` | Best for |
|---|---|---|---|
| No adjustment | `None` | `None` | Well-randomized experiments |
| Regression | `None` | `["x1", "x2"]` | Variance reduction, simple confounding |
| IPW | `"balance"` | `None` | Many covariates, non-linear confounding |
| IPW + Regression | `"balance"` | `["x1", "x2"]` | Extra robustness, survival models |
| AIPW (doubly robust) | `"aipw"` | (automatic) | Best protection against misspecification |
| IV | `"IV"` | `None` or `["x1"]` | Non-compliance, endogenous treatment (requires `instrument_col`) |
**Choosing a balance method:**
- `ps-logistic`: Default, fast, interpretable
- `ps-xgboost`: Non-linear relationships, complex interactions
- `entropy`: Exact moment matching, but can be unstable with many covariates
**Choosing an outcome model:**
| Outcome type | Model | `outcome_models` |
|---|---|---|
| Continuous (revenue, time) | OLS | `"ols"` (default) |
| Binary (converted, churned) | Logistic | `"logistic"` |
| Count (orders, clicks) | Poisson | `"poisson"` |
| Overdispersed count | Negative binomial | `"negative_binomial"` |
| Time-to-event | Cox PH | `"cox"` |
### Non-Collapsibility of Hazard and Odds Ratios
When using IPW without regression covariates for Cox or logistic models, the estimated effect may differ from the conditional effect even with perfect covariate balancing. This is not a bug -- it reflects a fundamental property called **non-collapsibility**.
**What happens**: IPW creates a pseudo-population where treatment is independent of covariates, then fits a model without covariates. This estimates the **marginal** effect (population-average). For non-collapsible measures like hazard ratios and odds ratios, the marginal effect differs from the conditional effect.
**When it matters**: The gap increases with stronger covariate effects on the outcome. For Cox models the effect is typically larger than for logistic models.
**Recommendations**:
- For Cox models: use **regression adjustment** or **IPW + Regression** to recover the conditional HR
- For logistic models: the default marginal effects output (probability change) is collapsible, so this mainly affects odds ratios (`compute_marginal_effects=False`)
- For OLS: no issue (mean differences are collapsible)
- AIPW estimates are on the marginal scale but are doubly robust
The package warns when IPW is used without regression covariates for Cox models.
### Handling Missing Data
The package handles missing data automatically:
- **Treatment variable**: Rows with missing treatment are dropped (logged as warning)
- **Categorical covariates**: Missing values become explicit "Missing" category
- **Numeric covariates**: Mean imputation
- **Binary covariates**: Mode imputation
```python
analyzer = ExperimentAnalyzer(
data=df, # Can contain missing values
treatment_col="treatment",
outcomes=["conversion"],
covariates=["age", "region"],
)
# Missing data is handled automatically
analyzer.get_effects()
```
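The default rules correspond to the following pandas operations (an illustrative sketch of the stated behavior, not the package's internal code):

```python
import pandas as pd

df = pd.DataFrame({
    "age": [25.0, None, 40.0, 31.0],               # numeric -> mean imputation
    "region": ["North", None, "South", "North"],   # categorical -> explicit "Missing"
    "is_member": [1.0, None, 0.0, 1.0],            # binary -> mode imputation
})
df["age"] = df["age"].fillna(df["age"].mean())
df["region"] = df["region"].fillna("Missing")
df["is_member"] = df["is_member"].fillna(df["is_member"].mode().iloc[0])
print(df)
```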
### Best Practices
**1. Always check balance:**
```python
analyzer = ExperimentAnalyzer(data=df, treatment_col="treatment",
outcomes=["conversion"], covariates=["age", "income"])
analyzer.get_effects()
# Check balance from results
balance = analyzer.balance
if balance["balance_flag"].mean() < 0.8:  # <80% balanced
    print("Consider rerunning with covariate adjustment")
```
**2. Use bootstrap for small samples:**
```python
if len(df) < 500:
    analyzer = ExperimentAnalyzer(..., bootstrap=True, bootstrap_iterations=2000)
```
**3. Apply multiple comparison correction:**
```python
# Always correct when testing multiple outcomes/experiments
analyzer.get_effects()
analyzer.adjust_pvalues(method="holm") # Less conservative than Bonferroni
```
**4. Report both absolute and relative effects:**
```python
results = analyzer.results
print(results[["outcome", "absolute_effect", "relative_effect",
"abs_effect_lower", "abs_effect_upper"]])
```
**5. Check sensitivity with retrodesign:**
```python
# After finding significant result, check reliability
retro = analyzer.calculate_retrodesign(true_effect=0.01)
if retro["type_m_error"].iloc[0] > 2:
    print("Warning: Results may be exaggerated")
```
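The retrodesign calculation can be reproduced by simulation, assuming a normally distributed estimator with known standard error (a sketch in the spirit of Gelman and Carlin; `type_m_error` here is an illustrative standalone function, not the package API):

```python
import random

def type_m_error(true_effect, se, sims=20000, z_crit=1.96, seed=7):
    """Expected exaggeration ratio among statistically significant estimates."""
    rng = random.Random(seed)
    significant = [
        est for est in (rng.gauss(true_effect, se) for _ in range(sims))
        if abs(est / se) > z_crit
    ]
    return sum(abs(e) for e in significant) / len(significant) / abs(true_effect)

# Underpowered design: the true effect equals one standard error
ratio = type_m_error(true_effect=0.01, se=0.01)
print(f"Type M (exaggeration) ratio: {ratio:.1f}")
```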
### Common Workflows
**Pre-experiment: Sample size calculation**
```python
from experiment_utils.power_sim import PowerSim
# Determine required sample size
power_sim = PowerSim(metric="proportion", variants=1, nsim=1000)
result = power_sim.find_sample_size(
    target_power=0.80,
    baseline=0.10,
    effect=0.02
)
print(f"Need {result['total_sample_size'].iloc[0]:,.0f} users")
```
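As a sanity check on the simulated answer, the classic normal-approximation formula for two proportions gives a comparable number (standalone sketch; `n_per_group` is an illustrative helper):

```python
from math import sqrt, ceil

def n_per_group(p1, p2):
    """Normal-approximation sample size per arm, two-sided alpha=0.05, power=0.80."""
    z_alpha, z_beta = 1.96, 0.8416   # standard normal quantiles
    p_bar = (p1 + p2) / 2
    num = (z_alpha * sqrt(2 * p_bar * (1 - p_bar))
           + z_beta * sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2
    return ceil(num / (p1 - p2) ** 2)

n = n_per_group(0.10, 0.12)  # baseline 10%, +2pp absolute lift
print(f"≈ {n:,} users per arm ({2 * n:,} total)")
```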
**During experiment: Balance check**
```python
from experiment_utils.utils import check_covariate_balance
# Check if randomization worked
balance = check_covariate_balance(
    data=experiment_df,
    treatment_col="treatment",
    covariates=["age", "region", "tenure"]
)
print(f"Balance: {balance['balance_flag'].mean():.1%}")
```
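Balance checks of this kind are typically based on standardized mean differences; a self-contained sketch of the common |SMD| < 0.1 rule (illustrative helper, not the package's implementation):

```python
from statistics import mean, variance
from math import sqrt

def smd(treated, control):
    """Standardized mean difference with the pooled-variance denominator."""
    pooled_sd = sqrt((variance(treated) + variance(control)) / 2)
    return (mean(treated) - mean(control)) / pooled_sd

age_t = [34, 29, 41, 38, 30]
age_c = [33, 31, 40, 36, 29]
d = smd(age_t, age_c)
print(f"SMD: {d:.3f} -> {'balanced' if abs(d) < 0.1 else 'imbalanced'}")
```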
**Post-experiment: Analysis**
```python
from experiment_utils.experiment_analyzer import ExperimentAnalyzer
# Full analysis pipeline
analyzer = ExperimentAnalyzer(
    data=df,
    treatment_col="treatment",
    outcomes=["primary_metric", "secondary_metric"],
    covariates=["age", "region"],
    adjustment="balance",
    bootstrap=True,
)
analyzer.get_effects()
analyzer.adjust_pvalues(method="holm")
# Report
results = analyzer.results
print(results[["outcome", "absolute_effect", "relative_effect",
               "pvalue_mcp", "stat_significance_mcp"]])
```
## Contributing
Contributions are welcome! Please feel free to submit a Pull Request.
## License
This project is licensed under the MIT License.
## Citation
If you use this package in your research, please cite:
```bibtex
@software{experiment_utils_pd,
  title  = {Experiment Utils PD: A Python Package for Experiment Analysis},
  author = {Sebastian Daza},
  year   = {2026},
  url    = {https://github.com/sdaza/experiment-utils-pd}
}
```
| text/markdown | null | Sebastian Daza <sebastian.daza@gmail.com> | null | null | MIT License
Copyright (c) 2025 Sebastian Daza
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.
| null | [
"Programming Language :: Python :: 3",
"License :: OSI Approved :: MIT License",
"Operating System :: OS Independent",
"Intended Audience :: Science/Research",
"Topic :: Scientific/Engineering :: Information Analysis",
"Development Status :: 3 - Alpha"
] | [] | null | null | >=3.11 | [] | [] | [] | [
"numpy>=1.21.5",
"pandas>=2.3.1",
"matplotlib>=3.5.2",
"seaborn>=0.11.2",
"multiprocess>=0.70.14",
"threadpoolctl>=3.5.0",
"statsmodels>=0.13.2",
"scipy>=1.9.1",
"linearmodels>=6.1",
"scikit-learn==1.5.2",
"xgboost>=2.1.3",
"pytest>=8.3.5",
"ruff>=0.11.7",
"ipykernel>=6.29.5",
"build>=1.... | [] | [] | [] | [] | twine/6.2.0 CPython/3.10.19 | 2026-02-18T09:49:59.776086 | experiment_utils_pd-0.1.11.tar.gz | 119,566 | d0/91/880a2126f88b5d5d601ed24bd4babfc9aceb54b72a3956fd46b4f6442e13/experiment_utils_pd-0.1.11.tar.gz | source | sdist | null | false | f5f6fa1ea6d8ae8af661a088b7197cd7 | 7710c9e4d53d4e7ed91763c00524aca98cbae96f22531f646a77d25b0d348faf | d091880a2126f88b5d5d601ed24bd4babfc9aceb54b72a3956fd46b4f6442e13 | null | [
"LICENSE"
] | 224 |
2.4 | swarmauri_tool_jupyterdisplay | 0.8.3.dev5 | A tool designed to display rich media and object representations in a Jupyter Notebook using IPython's display functionality. | 
<p align="center">
<a href="https://pypi.org/project/swarmauri_tool_jupyterdisplay/">
<img src="https://img.shields.io/pypi/dm/swarmauri_tool_jupyterdisplay" alt="PyPI - Downloads"/></a>
<a href="https://hits.sh/github.com/swarmauri/swarmauri-sdk/tree/master/pkgs/community/swarmauri_tool_jupyterdisplay/">
<img alt="Hits" src="https://hits.sh/github.com/swarmauri/swarmauri-sdk/tree/master/pkgs/community/swarmauri_tool_jupyterdisplay.svg"/></a>
<a href="https://pypi.org/project/swarmauri_tool_jupyterdisplay/">
<img src="https://img.shields.io/pypi/pyversions/swarmauri_tool_jupyterdisplay" alt="PyPI - Python Version"/></a>
<a href="https://pypi.org/project/swarmauri_tool_jupyterdisplay/">
<img src="https://img.shields.io/pypi/l/swarmauri_tool_jupyterdisplay" alt="PyPI - License"/></a>
<a href="https://pypi.org/project/swarmauri_tool_jupyterdisplay/">
<img src="https://img.shields.io/pypi/v/swarmauri_tool_jupyterdisplay?label=swarmauri_tool_jupyterdisplay&color=green" alt="PyPI - swarmauri_tool_jupyterdisplay"/></a>
</p>
---
# Swarmauri Tool Jupyter Display
Tool for displaying text, HTML, images, or LaTeX in a Jupyter notebook using IPython's rich display helpers.
## Features
- Accepts `data` and optional `data_format` (`auto`, `text`, `html`, `image`, `latex`).
- Uses `IPython.display` to render the appropriate representation.
- Returns a status dictionary indicating success or failure.
## Prerequisites
- Python 3.10 or newer.
- Running inside a Jupyter notebook or environment that supports IPython display.
- `IPython` installed (pulled in automatically).
## Installation
```bash
# pip
pip install swarmauri_tool_jupyterdisplay
# poetry
poetry add swarmauri_tool_jupyterdisplay
# uv (pyproject-based projects)
uv add swarmauri_tool_jupyterdisplay
```
## Quickstart
```python
from swarmauri_tool_jupyterdisplay import JupyterDisplayTool
display_tool = JupyterDisplayTool()
print(display_tool("<b>Hello, world!</b>", data_format="html"))
```
## Displaying Images
```python
from swarmauri_tool_jupyterdisplay import JupyterDisplayTool
image_path = "plots/chart.png"
JupyterDisplayTool()(image_path, data_format="image")
```
## Tips
- Use `data_format="auto"` (default) to treat the data as Markdown text.
- Provide absolute or notebook-relative paths for images when using `data_format="image"`.
- Wrap calls in Swarmauri tool chains to render results (e.g., charts, HTML reports) inline during agent runs.
## Want to help?
If you want to contribute to swarmauri-sdk, read up on our [guidelines for contributing](https://github.com/swarmauri/swarmauri-sdk/blob/master/contributing.md) that will help you get started.
| text/markdown | Jacob Stewart | jacob@swarmauri.com | null | null | null | swarmauri, tool, jupyterdisplay, designed, display, rich, media, object, representations, jupyter, notebook, ipython, functionality | [
"License :: OSI Approved :: Apache Software License",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: 3.13",
"Natural Language :: English",
"Development Status :: 3 - Alpha",
"Intended Audi... | [] | null | null | <3.13,>=3.10 | [] | [] | [] | [
"IPython>=8.32.0",
"swarmauri_base",
"swarmauri_core",
"swarmauri_standard"
] | [] | [] | [] | [] | uv/0.10.4 {"installer":{"name":"uv","version":"0.10.4","subcommand":["publish"]},"python":null,"implementation":{"name":null,"version":null},"distro":{"name":"Ubuntu","version":"24.04","id":"noble","libc":null},"system":{"name":null,"release":null},"cpu":null,"openssl_version":null,"setuptools_version":null,"rustc_version":null,"ci":true} | 2026-02-18T09:49:59.078249 | swarmauri_tool_jupyterdisplay-0.8.3.dev5-py3-none-any.whl | 8,997 | 85/2f/48d860d63b5150c6a58113c5679596bc03443503d4308135fe7766735198/swarmauri_tool_jupyterdisplay-0.8.3.dev5-py3-none-any.whl | py3 | bdist_wheel | null | false | 2bdef7eadaacdeb3fd1957ab491767cb | d5a6feabaa348385f1b67234b60b1a5dbffa96d2d94eff7a8a81978250b02e98 | 852f48d860d63b5150c6a58113c5679596bc03443503d4308135fe7766735198 | Apache-2.0 | [
"LICENSE"
] | 0 |
2.4 | swarmauri_tool_jupyterclearoutput | 0.9.3.dev5 | A tool designed to clear all outputs from a Jupyter Notebook using nbconvert’s ClearOutputPreprocessor, preparing the notebook for sharing or version control. | 
<p align="center">
<a href="https://pypi.org/project/swarmauri_tool_jupyterclearoutput/">
<img src="https://img.shields.io/pypi/dm/swarmauri_tool_jupyterclearoutput" alt="PyPI - Downloads"/></a>
<a href="https://hits.sh/github.com/swarmauri/swarmauri-sdk/tree/master/pkgs/community/swarmauri_tool_jupyterclearoutput/">
<img alt="Hits" src="https://hits.sh/github.com/swarmauri/swarmauri-sdk/tree/master/pkgs/community/swarmauri_tool_jupyterclearoutput.svg"/></a>
<a href="https://pypi.org/project/swarmauri_tool_jupyterclearoutput/">
<img src="https://img.shields.io/pypi/pyversions/swarmauri_tool_jupyterclearoutput" alt="PyPI - Python Version"/></a>
<a href="https://pypi.org/project/swarmauri_tool_jupyterclearoutput/">
<img src="https://img.shields.io/pypi/l/swarmauri_tool_jupyterclearoutput" alt="PyPI - License"/></a>
<a href="https://pypi.org/project/swarmauri_tool_jupyterclearoutput/">
<img src="https://img.shields.io/pypi/v/swarmauri_tool_jupyterclearoutput?label=swarmauri_tool_jupyterclearoutput&color=green" alt="PyPI - swarmauri_tool_jupyterclearoutput"/></a>
</p>
---
# Swarmauri Tool Jupyter Clear Output
Removes outputs and execution counts from Jupyter notebooks using a Swarmauri tool wrapper. Ideal for cleaning notebooks before publishing or committing to version control.
## Features
- Clears output arrays from all code cells and resets `execution_count` to `None`.
- Leaves markdown and raw cells untouched.
- Works with notebooks already loaded into memory (dict/JSON structure).
## Prerequisites
- Python 3.10 or newer.
- Dependencies: `nbconvert`, `swarmauri_base`, `swarmauri_standard` (installed automatically).
## Installation
```bash
# pip
pip install swarmauri_tool_jupyterclearoutput
# poetry
poetry add swarmauri_tool_jupyterclearoutput
# uv (pyproject-based projects)
uv add swarmauri_tool_jupyterclearoutput
```
## Quickstart
```python
import json
from pathlib import Path
from swarmauri_tool_jupyterclearoutput import JupyterClearOutputTool
notebook_data = json.loads(Path("notebooks/example.ipynb").read_text())
cleaner = JupyterClearOutputTool()
clean_notebook = cleaner(notebook_data)
Path("notebooks/example-clean.ipynb").write_text(json.dumps(clean_notebook, indent=2))
```
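The clearing step itself reduces to a small transformation on the notebook dict; a hedged sketch of what happens to each code cell (the tool delegates the real work to nbconvert's ClearOutputPreprocessor):

```python
def clear_outputs(notebook: dict) -> dict:
    """Blank outputs and execution counts on code cells; leave other cells alone."""
    for cell in notebook.get("cells", []):
        if cell.get("cell_type") == "code":
            cell["outputs"] = []
            cell["execution_count"] = None
    return notebook

nb = {"cells": [
    {"cell_type": "code", "source": "1 + 1", "outputs": [{"text": "2"}], "execution_count": 3},
    {"cell_type": "markdown", "source": "# Title"},
]}
clear_outputs(nb)
print(nb["cells"][0]["outputs"], nb["cells"][0]["execution_count"])  # → [] None
```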
## Tips
- Run this tool before committing notebooks to keep diffs small and avoid leaking secrets in output cells.
- Combine with Swarmauri pipelines that regenerate notebooks (e.g., parameterized runs) to ensure clean artifacts.
- For large notebooks, consider streaming to disk rather than loading entirely into memory before clearing.
## Want to help?
If you want to contribute to swarmauri-sdk, read up on our [guidelines for contributing](https://github.com/swarmauri/swarmauri-sdk/blob/master/contributing.md) that will help you get started.
| text/markdown | Jacob Stewart | jacob@swarmauri.com | null | null | null | swarmauri, tool, jupyterclearoutput, designed, clear, all, outputs, jupyter, notebook, nbconvert, clearoutputpreprocessor, preparing, sharing, version, control | [
"License :: OSI Approved :: Apache Software License",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: 3.13",
"Natural Language :: English",
"Development Status :: 3 - Alpha",
"Intended Audi... | [] | null | null | <3.13,>=3.10 | [] | [] | [] | [
"nbconvert>=7.16.6",
"swarmauri_base",
"swarmauri_core",
"swarmauri_standard"
] | [] | [] | [] | [] | uv/0.10.4 {"installer":{"name":"uv","version":"0.10.4","subcommand":["publish"]},"python":null,"implementation":{"name":null,"version":null},"distro":{"name":"Ubuntu","version":"24.04","id":"noble","libc":null},"system":{"name":null,"release":null},"cpu":null,"openssl_version":null,"setuptools_version":null,"rustc_version":null,"ci":true} | 2026-02-18T09:49:41.824512 | swarmauri_tool_jupyterclearoutput-0.9.3.dev5.tar.gz | 7,874 | 86/1d/f3422b6e72c06ac2e15938fba9b30ecdac9e3f971bf1c809af1487a66a09/swarmauri_tool_jupyterclearoutput-0.9.3.dev5.tar.gz | source | sdist | null | false | 6af0730b99cc89814681882f0d5bd5a1 | 9f019fdebe7e3f2c2eda5ecb506dd1693535bb27b3777c19d776f002d6f20e93 | 861df3422b6e72c06ac2e15938fba9b30ecdac9e3f971bf1c809af1487a66a09 | Apache-2.0 | [
"LICENSE"
] | 0 |
2.4 | swarmauri_tool_entityrecognition | 0.8.2.dev4 | Swarmauri Community Entity Recognition Tool | 
<p align="center">
<a href="https://pypi.org/project/swarmauri_tool_entityrecognition/">
<img src="https://img.shields.io/pypi/dm/swarmauri_tool_entityrecognition" alt="PyPI - Downloads"/></a>
<a href="https://hits.sh/github.com/swarmauri/swarmauri-sdk/tree/master/pkgs/community/swarmauri_tool_entityrecognition/">
<img alt="Hits" src="https://hits.sh/github.com/swarmauri/swarmauri-sdk/tree/master/pkgs/community/swarmauri_tool_entityrecognition.svg"/></a>
<a href="https://pypi.org/project/swarmauri_tool_entityrecognition/">
<img src="https://img.shields.io/pypi/pyversions/swarmauri_tool_entityrecognition" alt="PyPI - Python Version"/></a>
<a href="https://pypi.org/project/swarmauri_tool_entityrecognition/">
<img src="https://img.shields.io/pypi/l/swarmauri_tool_entityrecognition" alt="PyPI - License"/></a>
<a href="https://pypi.org/project/swarmauri_tool_entityrecognition/">
<img src="https://img.shields.io/pypi/v/swarmauri_tool_entityrecognition?label=swarmauri_tool_entityrecognition&color=green" alt="PyPI - swarmauri_tool_entityrecognition"/></a>
</p>
---
# Swarmauri Tool Entity Recognition
Named-entity recognition tool for Swarmauri based on Hugging Face transformers. Uses the default `pipeline("ner")` model to detect tokens labeled as PERSON, ORG, LOC, etc., and returns a JSON-encoded dictionary of entities grouped by label.
## Features
- Wraps the transformers NER pipeline in a Swarmauri `ToolBase` component.
- Auto-downloads the default model on first run (usually `dslim/bert-base-NER`).
- Aggregates entity tokens by label and returns them as a JSON string in the `entities` key.
## Prerequisites
- Python 3.10 or newer.
- `transformers`, `torch`, and associated dependencies (installed automatically). Ensure GPU/CPU compatibility for PyTorch according to your environment.
- Internet access on first run to download model weights.
## Installation
```bash
# pip
pip install swarmauri_tool_entityrecognition
# poetry
poetry add swarmauri_tool_entityrecognition
# uv (pyproject-based projects)
uv add swarmauri_tool_entityrecognition
```
## Quickstart
```python
import json
from swarmauri_tool_entityrecognition import EntityRecognitionTool
text = "Apple Inc. is an American multinational technology company."
tool = EntityRecognitionTool()
result = tool(text=text)
entities = json.loads(result["entities"])
print(entities)
```
Example output:
```
{"B-ORG": ["Apple", "Inc."], "B-MISC": ["American"], "I-MISC": ["multinational"], ...}
```
## Tips
- The default pipeline tokenizes into subwords; reconstruct phrases by joining consecutive tokens with the same label when needed.
- Specify a different model by subclassing and passing `pipeline("ner", model="<model>")` if you require language-specific NER.
- Cache Hugging Face models (`HF_HOME`) in CI or container builds to avoid repeated downloads.
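The first tip's grouping pass can be sketched directly over token/label pairs; this assumes the usual transformers output shape with `entity` and `word` keys and WordPiece `##` continuation markers:

```python
def group_entities(tokens):
    """Merge consecutive tokens whose labels share a type (e.g. B-ORG then I-ORG)."""
    groups = []
    for tok in tokens:
        label = tok["entity"].split("-")[-1]      # "B-ORG" -> "ORG"
        word = tok["word"]
        if word.startswith("##") and groups:      # WordPiece continuation: glue on
            groups[-1] = (groups[-1][0], groups[-1][1] + word[2:])
        elif groups and groups[-1][0] == label and tok["entity"].startswith("I-"):
            groups[-1] = (label, groups[-1][1] + " " + word)
        else:
            groups.append((label, word))
    return groups

tokens = [
    {"entity": "B-ORG", "word": "Apple"},
    {"entity": "I-ORG", "word": "Inc"},
    {"entity": "B-MISC", "word": "American"},
]
print(group_entities(tokens))  # → [('ORG', 'Apple Inc'), ('MISC', 'American')]
```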
## Want to help?
If you want to contribute to swarmauri-sdk, read up on our [guidelines for contributing](https://github.com/swarmauri/swarmauri-sdk/blob/master/contributing.md) that will help you get started.
| text/markdown | Jacob Stewart | jacob@swarmauri.com | null | null | null | swarmauri, tool, entityrecognition, community, entity, recognition | [
"License :: OSI Approved :: Apache Software License",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Natural Language :: English",
"Development Status :: 3 - Alpha",
"Intended Audience :: Developers",
"Topic :: Software De... | [] | null | null | <3.13,>=3.10 | [] | [] | [] | [
"swarmauri_base",
"swarmauri_core",
"swarmauri_standard",
"tensorflow>=2.16.1",
"tf-keras==2.16.0",
"torch>=2.6.0",
"transformers>=4.45.0"
] | [] | [] | [] | [] | uv/0.10.4 {"installer":{"name":"uv","version":"0.10.4","subcommand":["publish"]},"python":null,"implementation":{"name":null,"version":null},"distro":{"name":"Ubuntu","version":"24.04","id":"noble","libc":null},"system":{"name":null,"release":null},"cpu":null,"openssl_version":null,"setuptools_version":null,"rustc_version":null,"ci":true} | 2026-02-18T09:49:41.391468 | swarmauri_tool_entityrecognition-0.8.2.dev4-py3-none-any.whl | 8,479 | bf/96/6bcb98751f290ccaf46aa1934e30f376ac6d282b12910b2d213493f25796/swarmauri_tool_entityrecognition-0.8.2.dev4-py3-none-any.whl | py3 | bdist_wheel | null | false | c2529eb153b5418619e81054289fb6e5 | 8f50c8e746cbca588806df66cfcf75f3bc66392db2ff2cc79f69efc5810d5654 | bf966bcb98751f290ccaf46aa1934e30f376ac6d282b12910b2d213493f25796 | Apache-2.0 | [
"LICENSE"
] | 0 |
2.4 | swarmauri_tool_gmail | 0.9.3.dev5 | example community package | 
<p align="center">
<a href="https://pypi.org/project/swarmauri_tool_gmail/">
<img src="https://img.shields.io/pypi/dm/swarmauri_tool_gmail" alt="PyPI - Downloads"/></a>
<a href="https://hits.sh/github.com/swarmauri/swarmauri-sdk/tree/master/pkgs/community/swarmauri_tool_gmail/">
<img alt="Hits" src="https://hits.sh/github.com/swarmauri/swarmauri-sdk/tree/master/pkgs/community/swarmauri_tool_gmail.svg"/></a>
<a href="https://pypi.org/project/swarmauri_tool_gmail/">
<img src="https://img.shields.io/pypi/pyversions/swarmauri_tool_gmail" alt="PyPI - Python Version"/></a>
<a href="https://pypi.org/project/swarmauri_tool_gmail/">
<img src="https://img.shields.io/pypi/l/swarmauri_tool_gmail" alt="PyPI - License"/></a>
<a href="https://pypi.org/project/swarmauri_tool_gmail/">
<img src="https://img.shields.io/pypi/v/swarmauri_tool_gmail?label=swarmauri_tool_gmail&color=green" alt="PyPI - swarmauri_tool_gmail"/></a>
</p>
---
# Swarmauri Tool Gmail
Tools for sending and reading Gmail messages via Google Workspace service-account delegation. Provides `GmailSendTool` and `GmailReadTool` wrappers around the Gmail REST API.
## Features
- `GmailSendTool` sends HTML emails to one or more recipients.
- `GmailReadTool` fetches messages matching a Gmail search query and formats key headers.
- Both tools authenticate with `googleapiclient` using a service account JSON file and delegated user email.
## Prerequisites
- Python 3.10 or newer.
- Google Cloud service account with Gmail API enabled and domain-wide delegation to the target user.
- Credentials JSON file with the `https://www.googleapis.com/auth/gmail.send` and/or `https://www.googleapis.com/auth/gmail.readonly` scopes.
- Install `google-api-python-client` and `google-auth` (pulled in automatically).
## Installation
```bash
# pip
pip install swarmauri_tool_gmail
# poetry
poetry add swarmauri_tool_gmail
# uv (pyproject-based projects)
uv add swarmauri_tool_gmail
```
## Sending Email
```python
from swarmauri_tool_gmail import GmailSendTool
send_tool = GmailSendTool(
    credentials_path="service-account.json",
    sender_email="user@yourdomain.com",
)
result = send_tool(
    recipients="recipient@example.com",
    subject="Test Email",
    htmlMsg="<p>Hello, this is a test email!</p>",
)
print(result)
```
## Reading Email
```python
from swarmauri_tool_gmail import GmailReadTool
read_tool = GmailReadTool(
    credentials_path="service-account.json",
    sender_email="user@yourdomain.com",
)
result = read_tool(query="is:unread", max_results=5)
print(result["gmail_messages"])
```
## Tips
- Ensure the service account has been granted domain-wide delegation and that the Gmail API is enabled in Google Cloud console.
- Store credentials securely (Secrets Manager, Vault) and inject the file path via environment variables.
- When sending to multiple recipients, supply a comma-separated string (Gmail handles the formatting).
- For message bodies beyond simple HTML, extend the tool to add attachments or alternative MIME parts.
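Extending the tool with attachments, as the last tip suggests, comes down to building a multipart MIME message and encoding it the way the Gmail API's `users.messages.send` expects (URL-safe base64 in the `raw` field); a standard-library sketch with illustrative names:

```python
import base64
from email.mime.multipart import MIMEMultipart
from email.mime.text import MIMEText
from email.mime.application import MIMEApplication

def build_raw_message(sender, to, subject, html, attachment: bytes, filename):
    """Assemble an HTML email with one attachment as a Gmail API request body."""
    msg = MIMEMultipart()
    msg["From"], msg["To"], msg["Subject"] = sender, to, subject
    msg.attach(MIMEText(html, "html"))
    part = MIMEApplication(attachment, Name=filename)
    part["Content-Disposition"] = f'attachment; filename="{filename}"'
    msg.attach(part)
    # Gmail expects the full RFC 2822 message, URL-safe base64-encoded, under "raw"
    return {"raw": base64.urlsafe_b64encode(msg.as_bytes()).decode()}

body = build_raw_message("me@yourdomain.com", "you@example.com",
                         "Report", "<p>See attached.</p>", b"%PDF-1.4 ...", "report.pdf")
print(body["raw"][:40], "...")
```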
## Want to help?
If you want to contribute to swarmauri-sdk, read up on our [guidelines for contributing](https://github.com/swarmauri/swarmauri-sdk/blob/master/contributing.md) that will help you get started.
| text/markdown | Jacob Stewart | jacob@swarmauri.com | null | null | null | swarmauri, tool, gmail, example, community, package | [
"License :: OSI Approved :: Apache Software License",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Natural Language :: English",
"Development Status :: 3 - Alpha",
"Intended Audience :: Developers",
"Topic :: Software De... | [] | null | null | <3.13,>=3.10 | [] | [] | [] | [
"google-api-python-client>=2.157.0",
"swarmauri_base",
"swarmauri_core",
"swarmauri_standard"
] | [] | [] | [] | [] | uv/0.10.4 {"installer":{"name":"uv","version":"0.10.4","subcommand":["publish"]},"python":null,"implementation":{"name":null,"version":null},"distro":{"name":"Ubuntu","version":"24.04","id":"noble","libc":null},"system":{"name":null,"release":null},"cpu":null,"openssl_version":null,"setuptools_version":null,"rustc_version":null,"ci":true} | 2026-02-18T09:49:40.437823 | swarmauri_tool_gmail-0.9.3.dev5.tar.gz | 8,677 | 3d/dd/f387e9113cb67095f0cbad17a07e3390fce0ead0a47efb442b16bc058c73/swarmauri_tool_gmail-0.9.3.dev5.tar.gz | source | sdist | null | false | 9011527bdea537fd1aefb644108a8383 | 3dff3317439dc542293b6695a629f9a9ac46cc4d85f96b3b490d01e6d7fedf18 | 3dddf387e9113cb67095f0cbad17a07e3390fce0ead0a47efb442b16bc058c73 | Apache-2.0 | [
"LICENSE"
] | 0 |
2.4 | swarmauri_tool_downloadpdf | 0.9.3.dev5 | Swarmauri Community Download PDF Tool | 
<p align="center">
<a href="https://pypi.org/project/swarmauri_tool_downloadpdf/">
<img src="https://img.shields.io/pypi/dm/swarmauri_tool_downloadpdf" alt="PyPI - Downloads"/></a>
<a href="https://hits.sh/github.com/swarmauri/swarmauri-sdk/tree/master/pkgs/community/swarmauri_tool_downloadpdf/">
<img alt="Hits" src="https://hits.sh/github.com/swarmauri/swarmauri-sdk/tree/master/pkgs/community/swarmauri_tool_downloadpdf.svg"/></a>
<a href="https://pypi.org/project/swarmauri_tool_downloadpdf/">
<img src="https://img.shields.io/pypi/pyversions/swarmauri_tool_downloadpdf" alt="PyPI - Python Version"/></a>
<a href="https://pypi.org/project/swarmauri_tool_downloadpdf/">
<img src="https://img.shields.io/pypi/l/swarmauri_tool_downloadpdf" alt="PyPI - License"/></a>
<a href="https://pypi.org/project/swarmauri_tool_downloadpdf/">
<img src="https://img.shields.io/pypi/v/swarmauri_tool_downloadpdf?label=swarmauri_tool_downloadpdf&color=green" alt="PyPI - swarmauri_tool_downloadpdf"/></a>
</p>
---
# Swarmauri Tool Download PDF
Tool that downloads a PDF by URL and returns the file contents as a base64-encoded string, enabling downstream storage or inline embedding without touching disk.
## Features
- Uses `requests` to stream PDF bytes.
- Encodes the result into base64 (`content` field) and includes a success message.
- Returns an `error` key when download or processing fails.
## Prerequisites
- Python 3.10 or newer.
- `requests` (installed automatically).
- Network access to the target PDF URL.
## Installation
```bash
# pip
pip install swarmauri_tool_downloadpdf
# poetry
poetry add swarmauri_tool_downloadpdf
# uv (pyproject-based projects)
uv add swarmauri_tool_downloadpdf
```
## Quickstart
```python
import base64
from pathlib import Path
from swarmauri_tool_downloadpdf import DownloadPDFTool
tool = DownloadPDFTool()
result = tool("https://www.w3.org/WAI/ER/tests/xhtml/testfiles/resources/pdf/dummy.pdf")
if "error" in result:
    print(result["error"])
else:
    pdf_bytes = base64.b64decode(result["content"])
    Path("downloaded.pdf").write_bytes(pdf_bytes)
```
## Tips
- Always check for the `error` key before assuming the download succeeded.
- Large PDFs load into memory—consider chunking or alternative tools if you need to stream huge files directly to disk.
- Validate URLs to avoid downloading untrusted content when wiring this tool into automated pipelines.
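One cheap validation in the spirit of the last tip is to check the PDF magic bytes before trusting the decoded payload (a sketch; the tool itself performs no such check):

```python
import base64

def looks_like_pdf(b64_content: str) -> bool:
    """True when the decoded payload starts with the %PDF- magic marker."""
    try:
        return base64.b64decode(b64_content)[:5] == b"%PDF-"
    except (ValueError, TypeError):
        return False

good = base64.b64encode(b"%PDF-1.7 fake body").decode()
print(looks_like_pdf(good), looks_like_pdf("bm90IGEgcGRm"))  # → True False
```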
## Want to help?
If you want to contribute to swarmauri-sdk, read up on our [guidelines for contributing](https://github.com/swarmauri/swarmauri-sdk/blob/master/contributing.md) that will help you get started.
| text/markdown | Jacob Stewart | jacob@swarmauri.com | null | null | null | swarmauri, tool, downloadpdf, community, download, pdf | [
"License :: OSI Approved :: Apache Software License",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Natural Language :: English",
"Development Status :: 3 - Alpha",
"Intended Audience :: Developers",
"Topic :: Software De... | [] | null | null | <3.13,>=3.10 | [] | [] | [] | [
"swarmauri_base",
"swarmauri_core",
"swarmauri_standard"
] | [] | [] | [] | [] | uv/0.10.4 {"installer":{"name":"uv","version":"0.10.4","subcommand":["publish"]},"python":null,"implementation":{"name":null,"version":null},"distro":{"name":"Ubuntu","version":"24.04","id":"noble","libc":null},"system":{"name":null,"release":null},"cpu":null,"openssl_version":null,"setuptools_version":null,"rustc_version":null,"ci":true} | 2026-02-18T09:49:38.822492 | swarmauri_tool_downloadpdf-0.9.3.dev5.tar.gz | 7,357 | bb/87/1d68af719a03e88cc81cb441531f5ea91da1d9eafb406676e24319369a3c/swarmauri_tool_downloadpdf-0.9.3.dev5.tar.gz | source | sdist | null | false | 8610951f7a8b96f7965dd3581f9a6683 | d7bdbfa347bad971455e1ab33b0a1e1898b2eaf53f4530eac6e45cc23c61e214 | bb871d68af719a03e88cc81cb441531f5ea91da1d9eafb406676e24319369a3c | Apache-2.0 | [
"LICENSE"
] | 0 |
2.4 | swarmauri_tool_folium | 0.9.3.dev5 | This repository includes an example of a First Class Swarmauri Example. | 
<p align="center">
<a href="https://pypi.org/project/swarmauri_tool_folium/">
<img src="https://img.shields.io/pypi/dm/swarmauri_tool_folium" alt="PyPI - Downloads"/></a>
<a href="https://hits.sh/github.com/swarmauri/swarmauri-sdk/tree/master/pkgs/community/swarmauri_tool_folium/">
<img alt="Hits" src="https://hits.sh/github.com/swarmauri/swarmauri-sdk/tree/master/pkgs/community/swarmauri_tool_folium.svg"/></a>
<a href="https://pypi.org/project/swarmauri_tool_folium/">
<img src="https://img.shields.io/pypi/pyversions/swarmauri_tool_folium" alt="PyPI - Python Version"/></a>
<a href="https://pypi.org/project/swarmauri_tool_folium/">
<img src="https://img.shields.io/pypi/l/swarmauri_tool_folium" alt="PyPI - License"/></a>
<a href="https://pypi.org/project/swarmauri_tool_folium/">
<img src="https://img.shields.io/pypi/v/swarmauri_tool_folium?label=swarmauri_tool_folium&color=green" alt="PyPI - swarmauri_tool_folium"/></a>
</p>
---
# Swarmauri Tool Folium
Generates an interactive Folium map with optional markers and returns the HTML as a base64-encoded string. Designed for embedding maps in Swarmauri workflows or downstream UIs.
## Features
- Accepts a map center `(lat, lon)` plus an optional list of markers `(lat, lon, popup)`.
- Creates a Folium map (HTML) and returns `{"image_b64": <base64-html>}`.
- Easy to extend with additional Folium layers/tiles by subclassing.
## Prerequisites
- Python 3.10 or newer.
- [`folium`](https://python-visualization.github.io/folium/) (installed automatically).
- Network access if Folium needs to load tiles from external providers at render time.
## Installation
```bash
# pip
pip install swarmauri_tool_folium
# poetry
poetry add swarmauri_tool_folium
# uv (pyproject-based projects)
uv add swarmauri_tool_folium
```
## Quickstart
```python
import base64
from pathlib import Path
from swarmauri_tool_folium import FoliumTool
map_center = (40.7128, -74.0060)
markers = [(40.7128, -74.0060, "Marker 1"), (40.7328, -74.0010, "Marker 2")]
result = FoliumTool()(map_center, markers)
html_bytes = base64.b64decode(result["image_b64"])
Path("map.html").write_bytes(html_bytes)
```
Open `map.html` in a browser to interact with the generated map.
## Tips
- Customize map appearance by subclassing and adjusting `folium.Map` parameters (`tiles`, `zoom_start`, etc.).
- Add other Folium layers (heatmaps, choropleths) before saving to build richer visualizations.
- When serving maps via APIs, return the base64 string directly or write to a temporary HTML file and send its path.
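For serving without a temporary file, the base64 HTML can be wrapped in a `data:` URI (a sketch; browsers cap very long URIs, so prefer files for large maps):

```python
import base64

def html_b64_to_data_uri(image_b64: str) -> str:
    """Wrap the tool's base64 HTML payload in a data: URI for inline embedding."""
    return f"data:text/html;base64,{image_b64}"

payload = base64.b64encode(b"<html><body>map</body></html>").decode()
uri = html_b64_to_data_uri(payload)
print(uri[:30], "...")
```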
## Want to help?
If you want to contribute to swarmauri-sdk, read up on our [guidelines for contributing](https://github.com/swarmauri/swarmauri-sdk/blob/master/contributing.md) that will help you get started.
| text/markdown | Jacob Stewart | jacob@swarmauri.com | null | null | null | swarmauri, tool, folium, repository, includes, example, first, class | [
"License :: OSI Approved :: Apache Software License",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Natural Language :: English",
"Development Status :: 3 - Alpha",
"Intended Audience :: Developers",
"Topic :: Software De... | [] | null | null | <3.13,>=3.10 | [] | [] | [] | [
"folium>=0.18.0",
"swarmauri_base",
"swarmauri_core",
"swarmauri_standard"
] | [] | [] | [] | [] | uv/0.10.4 {"installer":{"name":"uv","version":"0.10.4","subcommand":["publish"]},"python":null,"implementation":{"name":null,"version":null},"distro":{"name":"Ubuntu","version":"24.04","id":"noble","libc":null},"system":{"name":null,"release":null},"cpu":null,"openssl_version":null,"setuptools_version":null,"rustc_version":null,"ci":true} | 2026-02-18T09:49:37.982680 | swarmauri_tool_folium-0.9.3.dev5.tar.gz | 7,592 | 4c/87/8765eca312bafc621b78370bbc84e3754591d75b9fb40acc6c1b2217c3dd/swarmauri_tool_folium-0.9.3.dev5.tar.gz | source | sdist | null | false | 9f25d126f45bc3806d2be2d497d88559 | b302beec78f37673cab7a94b6004ccf45a4ef531003684dcbee97127bd231492 | 4c878765eca312bafc621b78370bbc84e3754591d75b9fb40acc6c1b2217c3dd | Apache-2.0 | [
"LICENSE"
] | 0 |
2.4 | swarmauri_tool_dalechallreadability | 0.9.3.dev5 | Swarmauri Community Dale-Chall Readability Tool | 
<p align="center">
<a href="https://pypi.org/project/swarmauri_tool_dalechallreadability/">
<img src="https://img.shields.io/pypi/dm/swarmauri_tool_dalechallreadability" alt="PyPI - Downloads"/></a>
<a href="https://hits.sh/github.com/swarmauri/swarmauri-sdk/tree/master/pkgs/community/swarmauri_tool_dalechallreadability/">
<img alt="Hits" src="https://hits.sh/github.com/swarmauri/swarmauri-sdk/tree/master/pkgs/community/swarmauri_tool_dalechallreadability.svg"/></a>
<a href="https://pypi.org/project/swarmauri_tool_dalechallreadability/">
<img src="https://img.shields.io/pypi/pyversions/swarmauri_tool_dalechallreadability" alt="PyPI - Python Version"/></a>
<a href="https://pypi.org/project/swarmauri_tool_dalechallreadability/">
<img src="https://img.shields.io/pypi/l/swarmauri_tool_dalechallreadability" alt="PyPI - License"/></a>
<a href="https://pypi.org/project/swarmauri_tool_dalechallreadability/">
<img src="https://img.shields.io/pypi/v/swarmauri_tool_dalechallreadability?label=swarmauri_tool_dalechallreadability&color=green" alt="PyPI - swarmauri_tool_dalechallreadability"/></a>
</p>
---
# Swarmauri Tool Dale-Chall Readability
Tool wrapper around [`textstat`](https://pypi.org/project/textstat/) to compute the Dale–Chall readability score for a block of text via the Swarmauri tool interface.
## Features
- Accepts an `input_text` parameter and returns `{"dale_chall_score": <float>}`.
- Uses `textstat.dale_chall_readability_score` under the hood.
- Input validation ensures the required parameter is present before calculation.
## Prerequisites
- Python 3.10 or newer.
- `textstat` and `pyphen` dictionaries (installed automatically). Some textstat metrics may download additional word lists on first use.
## Installation
```bash
# pip
pip install swarmauri_tool_dalechallreadability
# poetry
poetry add swarmauri_tool_dalechallreadability
# uv (pyproject-based projects)
uv add swarmauri_tool_dalechallreadability
```
## Quickstart
```python
from swarmauri_tool_dalechallreadability import DaleChallReadabilityTool
text = "This is a simple sentence for testing purposes."
tool = DaleChallReadabilityTool()
result = tool({"input_text": text})
print(result)
```
## Usage in Tool Chains
```python
from swarmauri_tool_dalechallreadability import DaleChallReadabilityTool
def grade_paragraph(paragraph: str) -> float:
    tool = DaleChallReadabilityTool()
    score = tool({"input_text": paragraph})["dale_chall_score"]
    return score
```
## Tips
- Dale–Chall scores roughly map to U.S. grade levels; lower scores indicate easier reading.
- Pre-clean text (remove markup, normalize whitespace) for consistent scoring.
- Combine with Swarmauri measurements or parsers to evaluate readability across multiple documents.
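The pre-cleaning tip above can be sketched with the standard library (`normalize` is a hypothetical helper, not part of the package):

```python
import re

def normalize(text: str) -> str:
    # crude markup stripping plus whitespace collapsing before scoring
    text = re.sub(r"<[^>]+>", " ", text)
    return re.sub(r"\s+", " ", text).strip()

normalize("<p>This  is\n a test.</p>")  # 'This is a test.'
```

Feeding normalized text to the tool keeps scores comparable across documents with different markup and formatting.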
## Want to help?
If you want to contribute to swarmauri-sdk, read up on our [guidelines for contributing](https://github.com/swarmauri/swarmauri-sdk/blob/master/contributing.md) that will help you get started.
| text/markdown | Jacob Stewart | jacob@swarmauri.com | null | null | null | swarmauri, tool, dalechallreadability, community, dale, chall, readability | [
"License :: OSI Approved :: Apache Software License",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Natural Language :: English",
"Development Status :: 3 - Alpha",
"Intended Audience :: Developers",
"Topic :: Software De... | [] | null | null | <3.13,>=3.10 | [] | [] | [] | [
"swarmauri_base",
"swarmauri_core",
"swarmauri_standard",
"textstat>=0.7.4"
] | [] | [] | [] | [] | uv/0.10.4 {"installer":{"name":"uv","version":"0.10.4","subcommand":["publish"]},"python":null,"implementation":{"name":null,"version":null},"distro":{"name":"Ubuntu","version":"24.04","id":"noble","libc":null},"system":{"name":null,"release":null},"cpu":null,"openssl_version":null,"setuptools_version":null,"rustc_version":null,"ci":true} | 2026-02-18T09:49:21.035133 | swarmauri_tool_dalechallreadability-0.9.3.dev5.tar.gz | 7,336 | 26/b3/26dc612d7afb31aebceaedb52047a5e001dad65d25976b97f4f6ab4e30cd/swarmauri_tool_dalechallreadability-0.9.3.dev5.tar.gz | source | sdist | null | false | 585b6481589d33a7ceb87cf5dc3405df | 400e6ce3a771466c2c821dc0bc7698a382dcb1af416604c7525c900d46e3ae62 | 26b326dc612d7afb31aebceaedb52047a5e001dad65d25976b97f4f6ab4e30cd | Apache-2.0 | [
"LICENSE"
] | 0 |
2.4 | swarmauri_state_clipboard | 0.8.3.dev5 | Swarmauri Community Clipboard State | 
<p align="center">
<a href="https://pypi.org/project/swarmauri_state_clipboard/">
<img src="https://img.shields.io/pypi/dm/swarmauri_state_clipboard" alt="PyPI - Downloads"/></a>
<a href="https://hits.sh/github.com/swarmauri/swarmauri-sdk/tree/master/pkgs/community/swarmauri_state_clipboard/">
<img alt="Hits" src="https://hits.sh/github.com/swarmauri/swarmauri-sdk/tree/master/pkgs/community/swarmauri_state_clipboard.svg"/></a>
<a href="https://pypi.org/project/swarmauri_state_clipboard/">
<img src="https://img.shields.io/pypi/pyversions/swarmauri_state_clipboard" alt="PyPI - Python Version"/></a>
<a href="https://pypi.org/project/swarmauri_state_clipboard/">
<img src="https://img.shields.io/pypi/l/swarmauri_state_clipboard" alt="PyPI - License"/></a>
<a href="https://pypi.org/project/swarmauri_state_clipboard/">
<img src="https://img.shields.io/pypi/v/swarmauri_state_clipboard?label=swarmauri_state_clipboard&color=green" alt="PyPI - swarmauri_state_clipboard"/></a>
</p>
---
# Swarmauri State Clipboard
`ClipboardState` implements Swarmauri's `StateBase` interface using the system clipboard as storage. Useful for quick demos or sharing state between desktop tools without running an external datastore.
## Features
- Reads/writes clipboard contents via platform commands (`clip`, `pbcopy`/`pbpaste`, `xclip`).
- Stores JSON-like dictionaries as string representations; uses `ast.literal_eval` when reading.
- Provides `write`, `read`, `update`, `reset`, and `deep_copy` helpers.
## Prerequisites
- Python 3.10 or newer.
- OS clipboard utilities available:
- Windows: `clip` (built-in) and PowerShell `Get-Clipboard`.
- macOS: `pbcopy`/`pbpaste` (built-in).
- Linux: `xclip` installed (`apt install xclip` or equivalent).
## Installation
```bash
# pip
pip install swarmauri_state_clipboard
# poetry
poetry add swarmauri_state_clipboard
# uv (pyproject-based projects)
uv add swarmauri_state_clipboard
```
## Quickstart
```python
from swarmauri_state_clipboard import ClipboardState
state = ClipboardState()
state.write({"key1": "value1", "counter": 42})
print(state.read())
state.update({"counter": 43})
print(state.read())
state.reset()
print(state.read()) # {}
```
## Deep Copy
```python
state = ClipboardState()
state.write({"session": "abc123"})
clone = state.deep_copy()
clone.update({"session": "xyz789"})
print(state.read()) # {'session': 'abc123'}
print(clone.read()) # {'session': 'xyz789'}
```
## Tips
- Clipboard overwrites are global; avoid using this state provider in multi-user or production environments where clipboard privacy matters.
- Contents are stored as Python literal strings—avoid writing untrusted data to the clipboard to prevent evaluation issues (though `ast.literal_eval` mitigates code execution risks).
- Ensure required system commands exist before running in CI or containers (install `xclip` for Linux builds).
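The CI tip above can be checked programmatically before relying on the state provider (`clipboard_available` is a hypothetical preflight helper, not part of the package):

```python
import platform
import shutil

def clipboard_available() -> bool:
    # confirm the clipboard utilities ClipboardState shells out to exist
    system = platform.system()
    if system == "Windows":
        return shutil.which("clip") is not None
    if system == "Darwin":
        return shutil.which("pbcopy") is not None and shutil.which("pbpaste") is not None
    # assume a Linux-like system needing xclip
    return shutil.which("xclip") is not None
```

Running this at startup lets a build fail fast with a clear message instead of erroring mid-run when the clipboard command is missing.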
## Want to help?
If you want to contribute to swarmauri-sdk, read up on our [guidelines for contributing](https://github.com/swarmauri/swarmauri-sdk/blob/master/contributing.md) that will help you get started.
| text/markdown | Jacob Stewart | jacob@swarmauri.com | null | null | null | swarmauri, state, clipboard, community | [
"License :: OSI Approved :: Apache Software License",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Natural Language :: English",
"Development Status :: 3 - Alpha",
"Intended Audience :: Developers",
"Topic :: Software De... | [] | null | null | <3.13,>=3.10 | [] | [] | [] | [
"swarmauri_base",
"swarmauri_core"
] | [] | [] | [] | [] | uv/0.10.4 {"installer":{"name":"uv","version":"0.10.4","subcommand":["publish"]},"python":null,"implementation":{"name":null,"version":null},"distro":{"name":"Ubuntu","version":"24.04","id":"noble","libc":null},"system":{"name":null,"release":null},"cpu":null,"openssl_version":null,"setuptools_version":null,"rustc_version":null,"ci":true} | 2026-02-18T09:49:18.478073 | swarmauri_state_clipboard-0.8.3.dev5-py3-none-any.whl | 9,319 | 5c/d8/52aca3779241459b8ce83426a2bbd487f96f19e57cbe7cc0220c0e1a3788/swarmauri_state_clipboard-0.8.3.dev5-py3-none-any.whl | py3 | bdist_wheel | null | false | 079af2d7309abbf6466d873bda7d0acb | 5253dfb720b3f9b11ed80ac0f20c6ee8eb7cfb15218c1111ab0e4c86da1b4861 | 5cd852aca3779241459b8ce83426a2bbd487f96f19e57cbe7cc0220c0e1a3788 | Apache-2.0 | [
"LICENSE"
] | 0 |
2.4 | swarmauri_signing_dsse | 0.1.3.dev5 | Composable DSSE envelope adapter that layers Pre-Authentication Encoding onto Swarmauri signing providers | 
<p align="center">
<a href="https://pypi.org/project/swarmauri_signing_dsse/">
<img src="https://img.shields.io/pypi/dm/swarmauri_signing_dsse" alt="PyPI - Downloads"/></a>
<a href="https://hits.sh/github.com/swarmauri/swarmauri-sdk/tree/master/pkgs/community/swarmauri_signing_dsse/">
<img alt="Hits" src="https://hits.sh/github.com/swarmauri/swarmauri-sdk/tree/master/pkgs/community/swarmauri_signing_dsse.svg"/></a>
<a href="https://pypi.org/project/swarmauri_signing_dsse/">
<img src="https://img.shields.io/pypi/pyversions/swarmauri_signing_dsse" alt="PyPI - Python Version"/></a>
<a href="https://pypi.org/project/swarmauri_signing_dsse/">
<img src="https://img.shields.io/pypi/l/swarmauri_signing_dsse" alt="PyPI - License"/></a>
<a href="https://pypi.org/project/swarmauri_signing_dsse/">
<img src="https://img.shields.io/pypi/v/swarmauri_signing_dsse?label=swarmauri_signing_dsse&color=green" alt="PyPI - swarmauri_signing_dsse"/></a>
</p>
---
# Swarmauri Signing DSSE
`DSSESigner` layers the [in-toto DSSE](https://github.com/secure-systems-lab/dsse) envelope format on top of any existing Swarmauri
signer. It computes the Pre-Authentication Encoding (PAE) defined by the spec, delegates raw signing and verification to the
wrapped provider, and exposes helpers for serializing envelopes in the DSSE JSON representation.
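For reference, the PAE string is defined by the DSSE spec as `"DSSEv1" SP LEN(type) SP type SP LEN(body) SP body`, with lengths as decimal ASCII. A standalone sketch of that encoding (not the package's internal implementation):

```python
def pae(payload_type: str, payload: bytes) -> bytes:
    # Pre-Authentication Encoding per the DSSE v1 spec:
    # "DSSEv1" SP LEN(type) SP type SP LEN(payload) SP payload
    type_b = payload_type.encode("utf-8")
    return b" ".join([
        b"DSSEv1",
        str(len(type_b)).encode("ascii"), type_b,
        str(len(payload)).encode("ascii"), payload,
    ])

pae("text/plain", b"hi")  # b'DSSEv1 10 text/plain 2 hi'
```

Because signatures are computed over this string rather than the raw payload, the payload type is cryptographically bound to the payload.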
## Features
- Adds the `dsse-pae` canonicalization surface to any `SigningBase` provider.
- Supports detached signature workflows for bytes, digests, streams, and envelopes.
- Includes a strict JSON codec with typed helpers for building and inspecting DSSE envelopes.
- Maintains the inner signer's capability matrix while declaring DSSE-specific features (`detached_only`, `multi`).
## Installation
Install the package with your preferred Python packaging tool:
### Using `uv`
```bash
uv add swarmauri_signing_dsse
```
### Using `pip`
```bash
pip install swarmauri_signing_dsse
```
## Usage
```python
import asyncio
import base64

from swarmauri_signing_dsse import DSSESigner, DSSEEnvelope
from swarmauri_signing_ed25519 import Ed25519EnvelopeSigner


async def main() -> None:
    # Wrap an existing Swarmauri signer.
    inner_signer = Ed25519EnvelopeSigner()
    dsse_signer = DSSESigner(inner_signer)

    # Prepare a DSSE envelope.
    payload = b"example payload"
    payload_b64 = base64.urlsafe_b64encode(payload).decode("ascii").rstrip("=")
    envelope = DSSEEnvelope(payload_type="text/plain", payload_b64=payload_b64)

    # Sign and verify using DSSE PAE over the payload.
    key_ref = {"kind": "raw_ed25519_sk", "bytes": b"\x01" * 32}
    signatures = await dsse_signer.sign_envelope(key_ref, envelope)
    assert await dsse_signer.verify_envelope(envelope, signatures)


asyncio.run(main())
```
`DSSESigner` accepts existing DSSE JSON mappings or bytes anywhere an envelope is expected. The helper methods
`encode_envelope()` and `decode_envelope()` let you round-trip envelopes without reimplementing JSON handling.
## License
This project is licensed under the [Apache License 2.0](LICENSE).
| text/markdown | Jacob Stewart | jacob@swarmauri.com | null | null | null | swarmauri, signing, dsse, sigstore, supply-chain, pae, composable, envelope, adapter, layers, pre, authentication, encoding, onto, providers | [
"Development Status :: 1 - Planning",
"License :: OSI Approved :: Apache Software License",
"Natural Language :: English",
"Programming Language :: Python",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language ... | [] | null | null | <3.13,>=3.10 | [] | [] | [] | [
"swarmauri_base",
"swarmauri_core"
] | [] | [] | [] | [
"Documentation, https://github.com/swarmauri/swarmauri-sdk/tree/master/pkgs/community/swarmauri_signing_dsse",
"Repository, https://github.com/swarmauri/swarmauri-sdk"
] | uv/0.10.4 {"installer":{"name":"uv","version":"0.10.4","subcommand":["publish"]},"python":null,"implementation":{"name":null,"version":null},"distro":{"name":"Ubuntu","version":"24.04","id":"noble","libc":null},"system":{"name":null,"release":null},"cpu":null,"openssl_version":null,"setuptools_version":null,"rustc_version":null,"ci":true} | 2026-02-18T09:49:16.229630 | swarmauri_signing_dsse-0.1.3.dev5-py3-none-any.whl | 11,529 | 95/fc/9aecbd55abcf241ef67f53f0433ec0e3f3886f650c915517c49c91feef86/swarmauri_signing_dsse-0.1.3.dev5-py3-none-any.whl | py3 | bdist_wheel | null | false | 4c43f6271715ee01cf3a9d274c12bd80 | c5229a06b3e8e277a9acef719c2162b21b5f19ee321685db5d733b9856c177e6 | 95fc9aecbd55abcf241ef67f53f0433ec0e3f3886f650c915517c49c91feef86 | Apache-2.0 | [
"LICENSE"
] | 0 |
2.4 | swarmauri_tool_captchagenerator | 0.9.3.dev5 | Swarmauri Community Captcha Generator Tool | 
<p align="center">
<a href="https://pypi.org/project/swarmauri_tool_captchagenerator/">
<img src="https://img.shields.io/pypi/dm/swarmauri_tool_captchagenerator" alt="PyPI - Downloads"/></a>
<a href="https://hits.sh/github.com/swarmauri/swarmauri-sdk/tree/master/pkgs/community/swarmauri_tool_captchagenerator/">
<img alt="Hits" src="https://hits.sh/github.com/swarmauri/swarmauri-sdk/tree/master/pkgs/community/swarmauri_tool_captchagenerator.svg"/></a>
<a href="https://pypi.org/project/swarmauri_tool_captchagenerator/">
<img src="https://img.shields.io/pypi/pyversions/swarmauri_tool_captchagenerator" alt="PyPI - Python Version"/></a>
<a href="https://pypi.org/project/swarmauri_tool_captchagenerator/">
<img src="https://img.shields.io/pypi/l/swarmauri_tool_captchagenerator" alt="PyPI - License"/></a>
<a href="https://pypi.org/project/swarmauri_tool_captchagenerator/">
<img src="https://img.shields.io/pypi/v/swarmauri_tool_captchagenerator?label=swarmauri_tool_captchagenerator&color=green" alt="PyPI - swarmauri_tool_captchagenerator"/></a>
</p>
---
# Swarmauri Tool Captcha Generator
Tool that generates CAPTCHA images from text using the [`captcha`](https://pypi.org/project/captcha/) library. Returns the rendered PNG as a base64-encoded string so it can be stored or embedded inline.
## Features
- Accepts a single parameter (`text`) and renders it with the library's `ImageCaptcha` generator.
- Returns a dictionary with `image_b64` containing the PNG bytes encoded as base64.
- Can be wired into larger Swarmauri toolchains to produce human challenges dynamically.
## Prerequisites
- Python 3.10 or newer.
- Pillow and captcha dependencies (installed automatically with this package).
## Installation
```bash
# pip
pip install swarmauri_tool_captchagenerator
# poetry
poetry add swarmauri_tool_captchagenerator
# uv (pyproject-based projects)
uv add swarmauri_tool_captchagenerator
```
## Quickstart
```python
import base64
from pathlib import Path
from swarmauri_tool_captchagenerator import CaptchaGeneratorTool
captcha_tool = CaptchaGeneratorTool()
result = captcha_tool("Verify42")
image_b64 = result["image_b64"]
image_bytes = base64.b64decode(image_b64)
Path("captcha.png").write_bytes(image_bytes)
```
Displaying inline (e.g., in an HTML response):
```python
html = f"<img src='data:image/png;base64,{image_b64}' alt='captcha' />"
```
## Tips
- Customize CAPTCHA rendering (fonts, size) by subclassing and configuring `ImageCaptcha` (e.g., `ImageCaptcha(width=280, height=90)`).
- Store the generated solution (`Verify42` in the example) securely server-side to validate client responses later.
- For high throughput, reuse the tool instance rather than instantiating per request to avoid repeated font loading overhead.
## Want to help?
If you want to contribute to swarmauri-sdk, read up on our [guidelines for contributing](https://github.com/swarmauri/swarmauri-sdk/blob/master/contributing.md) that will help you get started.
| text/markdown | Jacob Stewart | jacob@swarmauri.com | null | null | null | swarmauri, tool, captchagenerator, community, captcha, generator | [
"License :: OSI Approved :: Apache Software License",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Natural Language :: English",
"Development Status :: 3 - Alpha",
"Intended Audience :: Developers",
"Topic :: Software De... | [] | null | null | <3.13,>=3.10 | [] | [] | [] | [
"captcha>=0.6.0",
"swarmauri_base",
"swarmauri_core",
"swarmauri_standard"
] | [] | [] | [] | [] | uv/0.10.4 {"installer":{"name":"uv","version":"0.10.4","subcommand":["publish"]},"python":null,"implementation":{"name":null,"version":null},"distro":{"name":"Ubuntu","version":"24.04","id":"noble","libc":null},"system":{"name":null,"release":null},"cpu":null,"openssl_version":null,"setuptools_version":null,"rustc_version":null,"ci":true} | 2026-02-18T09:49:15.378714 | swarmauri_tool_captchagenerator-0.9.3.dev5-py3-none-any.whl | 8,337 | 25/ba/d7a13c5f63780d3c73598ccaa90024804d558bbc917d72030de08d4efc77/swarmauri_tool_captchagenerator-0.9.3.dev5-py3-none-any.whl | py3 | bdist_wheel | null | false | 9c6a3c18a918c3593edf7db83e4c15f6 | 6e4fe4921ed40e0711f424f190b7bfc0af181d9964a0bf4374db64a8d76af079 | 25bad7a13c5f63780d3c73598ccaa90024804d558bbc917d72030de08d4efc77 | Apache-2.0 | [
"LICENSE"
] | 0 |
2.4 | swarmauri_tests_griffe | 0.1.4.dev5 | Pytest plugin that turns Griffe inspection warnings into failing quality gates for Swarmauri packages. | 
<p align="center">
<a href="https://pypi.org/project/swarmauri-tests-griffe/">
<img src="https://img.shields.io/pypi/dm/swarmauri-tests-griffe" alt="PyPI - Downloads"/></a>
<a href="https://hits.sh/github.com/swarmauri/swarmauri-sdk/tree/master/pkgs/community/swarmauri_tests_griffe/">
<img src="https://hits.sh/github.com/swarmauri/swarmauri-sdk/tree/master/pkgs/community/swarmauri_tests_griffe.svg" alt="Repository Hits"/></a>
<a href="https://pypi.org/project/swarmauri-tests-griffe/">
<img src="https://img.shields.io/pypi/pyversions/swarmauri-tests-griffe" alt="PyPI - Supported Python Versions"/></a>
<a href="https://pypi.org/project/swarmauri-tests-griffe/">
<img src="https://img.shields.io/pypi/l/swarmauri-tests-griffe" alt="PyPI - License"/></a>
<a href="https://pypi.org/project/swarmauri-tests-griffe/">
<img src="https://img.shields.io/pypi/v/swarmauri-tests-griffe?label=swarmauri_tests_griffe&color=green" alt="PyPI - Latest Release"/></a>
</p>
---
# swarmauri_tests_griffe
`swarmauri_tests_griffe` is a [pytest](https://pytest.org) plugin that loads your
package metadata with [Griffe](https://github.com/mkdocstrings/griffe). The
plugin converts any warnings generated during inspection into failing tests so
that documentation, annotations, and runtime signatures stay in sync across the
Swarmauri ecosystem.
## Features
- **Python 3.10–3.12 coverage** – verified across the supported Swarmauri
runtime range so you can keep consistent quality gates on every maintained
interpreter.
- **Warning-to-test enforcement** – automatically escalates Griffe warnings to
failing pytest checks to stop documentation drift before it ships.
- **Zero-config discovery** – the plugin registers as a pytest entry point and
loads without additional setup once installed.
- **Flexible targeting** – tune the inspection scope with command-line flags or
persistent `pyproject.toml` settings.
## Installation
Choose the installer that best fits your workflow:
### Using `uv`
```bash
uv add swarmauri-tests-griffe
```
### Using `pip`
```bash
pip install swarmauri-tests-griffe
```
`uv add` records the plugin as a project dependency, while `pip install` installs it
into the active environment. Because the plugin registers a pytest entry point, it is
discovered automatically the next time your test suite runs—no manual configuration required.
> **Supported Python versions:** The plugin is tested and published for Python
> 3.10, 3.11, and 3.12 across the Swarmauri platform.
## Usage
After installation, execute your test suite as normal and a dynamic Griffe check
is injected for each configured package. By default, the package defined in
`pyproject.toml` is inspected. You can target additional packages or limit the
scope with command-line options:
```bash
pytest --griffe-package your_package --griffe-package another_package
```
Each `--griffe-package` argument adds a module to the inspection list. If
Griffe produces warnings while processing any module, the corresponding dynamic
test fails and the collected warnings are rendered in the pytest output, making
it easy to pinpoint the files that need attention.
### Configuring defaults
For larger projects, keep the configuration in `pyproject.toml` to avoid
repeating command-line flags:
```toml
[tool.pytest.ini_options]
addopts = "--griffe-package swarmauri_core --griffe-package swarmauri_tests_griffe"
```
With the options saved, every pytest run enforces the same quality gates across
your codebase without extra setup.
| text/markdown | Jacob Stewart | jacob@swarmauri.com | null | null | Apache-2.0 | swarmauri, pytest, griffe, static analysis, developer experience, quality gates, tests, plugin, turns, inspection, warnings, failing, quality, gates, packages | [
"Development Status :: 1 - Planning",
"Framework :: Pytest",
"Intended Audience :: Developers",
"Topic :: Software Development :: Quality Assurance",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Pyth... | [] | null | null | >=3.10 | [] | [] | [] | [
"griffe>=0.48",
"pytest>=8.0",
"tomli; python_version < \"3.11\""
] | [] | [] | [] | [] | uv/0.10.4 {"installer":{"name":"uv","version":"0.10.4","subcommand":["publish"]},"python":null,"implementation":{"name":null,"version":null},"distro":{"name":"Ubuntu","version":"24.04","id":"noble","libc":null},"system":{"name":null,"release":null},"cpu":null,"openssl_version":null,"setuptools_version":null,"rustc_version":null,"ci":true} | 2026-02-18T09:49:13.958574 | swarmauri_tests_griffe-0.1.4.dev5.tar.gz | 8,484 | a3/8a/29a4b72506c01d56c39f060c723288a631d44e12a6fd8b8324f285b9b9c7/swarmauri_tests_griffe-0.1.4.dev5.tar.gz | source | sdist | null | false | d51634fad77832fda69d7545cc1368d2 | 0da1b27e4b0fb2208b4fb0dbf47e96f203dfd0f9f3f5427c021c0fad68a9d309 | a38a29a4b72506c01d56c39f060c723288a631d44e12a6fd8b8324f285b9b9c7 | null | [
"LICENSE"
] | 0 |
2.4 | swarmauri_parser_textblob | 0.9.3.dev5 | TextBlob Parser for Swarmauri. | 
<p align="center">
<a href="https://pypi.org/project/swarmauri_parser_textblob/">
<img src="https://img.shields.io/pypi/dm/swarmauri_parser_textblob" alt="PyPI - Downloads"/></a>
<a href="https://hits.sh/github.com/swarmauri/swarmauri-sdk/tree/master/pkgs/community/swarmauri_parser_textblob/">
<img alt="Hits" src="https://hits.sh/github.com/swarmauri/swarmauri-sdk/tree/master/pkgs/community/swarmauri_parser_textblob.svg"/></a>
<a href="https://pypi.org/project/swarmauri_parser_textblob/">
<img src="https://img.shields.io/pypi/pyversions/swarmauri_parser_textblob" alt="PyPI - Python Version"/></a>
<a href="https://pypi.org/project/swarmauri_parser_textblob/">
<img src="https://img.shields.io/pypi/l/swarmauri_parser_textblob" alt="PyPI - License"/></a>
<a href="https://pypi.org/project/swarmauri_parser_textblob/">
<img src="https://img.shields.io/pypi/v/swarmauri_parser_textblob?label=swarmauri_parser_textblob&color=green" alt="PyPI - swarmauri_parser_textblob"/></a>
</p>
---
# Swarmauri Parser TextBlob
TextBlob-backed parsers for Swarmauri that split text into sentences or extract noun phrases. Ships two components: `TextBlobSentenceParser` and `TextBlobNounParser`.
## Features
- Sentence parser returns a `Document` per sentence with metadata identifying the parser.
- Noun phrase parser returns the original text plus `metadata['noun_phrases']` containing the phrases discovered by TextBlob.
- Auto-downloads required NLTK corpora (`punkt_tab`) during initialization.
## Prerequisites
- Python 3.10 or newer.
- [TextBlob](https://textblob.readthedocs.io/) and its NLTK dependencies (installed automatically).
- Internet access on first run so NLTK can download tokenizer data (or pre-download via `python -m textblob.download_corpora`).
## Installation
```bash
# pip
pip install swarmauri_parser_textblob
# poetry
poetry add swarmauri_parser_textblob
# uv (pyproject-based projects)
uv add swarmauri_parser_textblob
```
## Sentence Parsing
```python
from swarmauri_parser_textblob import TextBlobSentenceParser
parser = TextBlobSentenceParser()
text = "One more large chapula please. It should be extra spicy!"
sentences = parser.parse(text)
for doc in sentences:
    print(doc.content)
```
## Noun Phrase Extraction
```python
from swarmauri_parser_textblob import TextBlobNounParser
parser = TextBlobNounParser()
docs = parser.parse("One more large chapula please.")
for doc in docs:
    print(doc.content)
    print(doc.metadata["noun_phrases"])
```
## Tips
- TextBlob uses simple heuristics—it works well for general English text but may struggle with domain-specific jargon.
- Download corpora once in CI/CD or container builds (`python -m textblob.download_corpora`) to avoid runtime downloads.
- Combine sentence and noun parsers to build structured representations of documents before vectorization or downstream NLP tasks.
## Want to help?
If you want to contribute to swarmauri-sdk, read up on our [guidelines for contributing](https://github.com/swarmauri/swarmauri-sdk/blob/master/contributing.md) that will help you get started.
| text/markdown | Jacob Stewart | jacob@swarmauri.com | null | null | null | swarmauri, parser, textblob | [
"License :: OSI Approved :: Apache Software License",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Natural Language :: English",
"Development Status :: 3 - Alpha",
"Intended Audience :: Developers",
"Topic :: Software De... | [] | null | null | <3.13,>=3.10 | [] | [] | [] | [
"nltk>=3.9.1",
"swarmauri_base",
"swarmauri_core",
"swarmauri_standard",
"textblob>=0.18.0"
] | [] | [] | [] | [] | uv/0.10.4 {"installer":{"name":"uv","version":"0.10.4","subcommand":["publish"]},"python":null,"implementation":{"name":null,"version":null},"distro":{"name":"Ubuntu","version":"24.04","id":"noble","libc":null},"system":{"name":null,"release":null},"cpu":null,"openssl_version":null,"setuptools_version":null,"rustc_version":null,"ci":true} | 2026-02-18T09:48:58.749583 | swarmauri_parser_textblob-0.9.3.dev5.tar.gz | 7,877 | 54/76/ea9be0e71a2e89c17276ead13aa9da49a37bccdd5298023ffad783afd62c/swarmauri_parser_textblob-0.9.3.dev5.tar.gz | source | sdist | null | false | 9e950ade1ad956ec8eed6255acd75b3c | 3ce45016146cb1d41d5f06855d3363996f494ec5e9d864981da5047ac795c29f | 5476ea9be0e71a2e89c17276ead13aa9da49a37bccdd5298023ffad783afd62c | Apache-2.0 | [
"LICENSE"
] | 0 |
2.4 | swarmauri_parser_slate | 0.2.3.dev5 | A parser for extracting text from PDFs using Slate. | 
<p align="center">
<a href="https://pypi.org/project/swarmauri_parser_slate/">
<img src="https://img.shields.io/pypi/dm/swarmauri_parser_slate" alt="PyPI - Downloads"/></a>
<a href="https://hits.sh/github.com/swarmauri/swarmauri-sdk/tree/master/pkgs/community/swarmauri_parser_slate/">
<img alt="Hits" src="https://hits.sh/github.com/swarmauri/swarmauri-sdk/tree/master/pkgs/community/swarmauri_parser_slate.svg"/></a>
<a href="https://pypi.org/project/swarmauri_parser_slate/">
<img src="https://img.shields.io/pypi/pyversions/swarmauri_parser_slate" alt="PyPI - Python Version"/></a>
<a href="https://pypi.org/project/swarmauri_parser_slate/">
<img src="https://img.shields.io/pypi/l/swarmauri_parser_slate" alt="PyPI - License"/></a>
<a href="https://pypi.org/project/swarmauri_parser_slate/">
<img src="https://img.shields.io/pypi/v/swarmauri_parser_slate?label=swarmauri_parser_slate&color=green" alt="PyPI - swarmauri_parser_slate"/></a>
</p>
---
# Swarmauri Parser Slate
PDF text parser for Swarmauri using [Slate3k](https://pypi.org/project/slate3k/) (a lightweight PDFMiner wrapper). Extracts text from each PDF page and returns `Document` instances with page metadata.
## Features
- Opens PDFs with Slate3k and returns a `Document` per page (`content` = text, `metadata` includes `page_number` and `source`).
- Accepts a file path as a string and raises a `TypeError` for any other input type to prevent silent failures.
- Returns an empty list if Slate encounters parsing errors, logging the exception to stdout.
## Prerequisites
- Python 3.10 or newer.
- Slate3k depends on `pdfminer.six`; make sure operating-system libraries required by PDFMiner (e.g., `libxml2`, `libxslt` on Linux) are installed.
- Read access to the PDF path you pass in.
## Installation
```bash
# pip
pip install swarmauri_parser_slate
# poetry
poetry add swarmauri_parser_slate
# uv (pyproject-based projects)
uv add swarmauri_parser_slate
```
## Quickstart
```python
from swarmauri_parser_slate import SlateParser
parser = SlateParser()
documents = parser.parse("pdfs/handbook.pdf")
for doc in documents:
    print(doc.metadata["page_number"], doc.content[:120])
```
## Handling Errors
```python
parser = SlateParser()
try:
    docs = parser.parse("missing.pdf")
    if not docs:
        print("No pages parsed or Slate returned no text.")
except TypeError as exc:
    print(f"Bad input: {exc}")
```
## Tips
- Slate3k works best on text-based PDFs. For scanned/bitmap PDFs, run OCR first (e.g., `swarmauri_ocr_pytesseract`).
- Large PDFs can consume memory; consider chunking results or streaming pages to downstream processors.
- Combine with token counting or summarization measurements in Swarmauri to further process the extracted content.
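The chunking tip above can be sketched generically over a page's extracted text (`chunk_text` is a hypothetical helper, not part of the package):

```python
from typing import Iterator

def chunk_text(text: str, size: int = 1000, overlap: int = 100) -> Iterator[str]:
    # yield fixed-size windows with a small overlap so sentence context
    # is not lost at chunk boundaries
    step = size - overlap
    for start in range(0, max(len(text), 1), step):
        yield text[start:start + size]
```

Streaming chunks to downstream processors this way avoids holding a full multi-hundred-page extraction in memory at once.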
## Want to help?
If you want to contribute to swarmauri-sdk, read up on our [guidelines for contributing](https://github.com/swarmauri/swarmauri-sdk/blob/master/contributing.md) that will help you get started.
| text/markdown | Jacob Stewart | jacob@swarmauri.com | null | null | null | swarmauri, parser, slate, extracting, text, pdfs | [
"License :: OSI Approved :: Apache Software License",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Natural Language :: English",
"Development Status :: 3 - Alpha",
"Intended Audience :: Developers",
"Topic :: Software De... | [] | null | null | <3.13,>=3.10 | [] | [] | [] | [
"slate3k>=0.5",
"swarmauri_base",
"swarmauri_core",
"swarmauri_standard"
] | [] | [] | [] | [] | uv/0.10.4 {"installer":{"name":"uv","version":"0.10.4","subcommand":["publish"]},"python":null,"implementation":{"name":null,"version":null},"distro":{"name":"Ubuntu","version":"24.04","id":"noble","libc":null},"system":{"name":null,"release":null},"cpu":null,"openssl_version":null,"setuptools_version":null,"rustc_version":null,"ci":true} | 2026-02-18T09:48:54.690212 | swarmauri_parser_slate-0.2.3.dev5.tar.gz | 7,278 | 68/64/6e799697044add69c21690f0d23ea65ed1d2192acb84ea0919d5e8ce7d18/swarmauri_parser_slate-0.2.3.dev5.tar.gz | source | sdist | null | false | 24adad4e1e06808b6f9225edf15273d1 | 805326e017b209a528c42ab18ce5ea0fe0f25f82b2cffca2707de8cd2b9dd685 | 68646e799697044add69c21690f0d23ea65ed1d2192acb84ea0919d5e8ce7d18 | Apache-2.0 | [
"LICENSE"
] | 0 |
2.4 | swarmauri_parser_pypdftk | 0.8.3.dev5 | A parser for extracting text from PDFs using PyPDFTK. | 
<p align="center">
<a href="https://pypi.org/project/swarmauri_parser_pypdftk/">
<img src="https://img.shields.io/pypi/dm/swarmauri_parser_pypdftk" alt="PyPI - Downloads"/></a>
<a href="https://hits.sh/github.com/swarmauri/swarmauri-sdk/tree/master/pkgs/community/swarmauri_parser_pypdftk/">
<img alt="Hits" src="https://hits.sh/github.com/swarmauri/swarmauri-sdk/tree/master/pkgs/community/swarmauri_parser_pypdftk.svg"/></a>
<a href="https://pypi.org/project/swarmauri_parser_pypdftk/">
<img src="https://img.shields.io/pypi/pyversions/swarmauri_parser_pypdftk" alt="PyPI - Python Version"/></a>
<a href="https://pypi.org/project/swarmauri_parser_pypdftk/">
<img src="https://img.shields.io/pypi/l/swarmauri_parser_pypdftk" alt="PyPI - License"/></a>
<a href="https://pypi.org/project/swarmauri_parser_pypdftk/">
<img src="https://img.shields.io/pypi/v/swarmauri_parser_pypdftk?label=swarmauri_parser_pypdftk&color=green" alt="PyPI - swarmauri_parser_pypdftk"/></a>
</p>
---
# Swarmauri Parser PyPDFTK
Form-field parser for Swarmauri built on [PyPDFTK](https://pypi.org/project/pypdftk/). Extracts PDF AcroForm field metadata and returns it as Swarmauri `Document` content.
## Features
- Calls `pypdftk.dump_data_fields` to extract field key/value pairs.
- Emits a single `Document` with newline-delimited `key: value` text and `metadata['source']` set to the PDF path.
- Returns an empty list when no form fields exist or when parsing fails (logs the error).
## Prerequisites
- Python 3.10 or newer.
- PyPDFTK plus the `pdftk`/`pdftk-java` binary available on the system path. Install operating-system packages: e.g., `apt install pdftk-java` or download `pdftk` for macOS/Windows.
- Read access to the PDF file path you provide.
## Installation
```bash
# pip
pip install swarmauri_parser_pypdftk
# poetry
poetry add swarmauri_parser_pypdftk
# uv (pyproject-based projects)
uv add swarmauri_parser_pypdftk
```
## Quickstart
```python
from swarmauri_parser_pypdftk import PyPDFTKParser
parser = PyPDFTKParser()
documents = parser.parse("forms/enrollment.pdf")
for doc in documents:
    print(doc.metadata["source"])
    print(doc.content)
```
Example output:
```
source: forms/enrollment.pdf
GivenName: John
FamilyName: Doe
BirthDate: 1990-01-01
```
## Handling Missing Fields
```python
parser = PyPDFTKParser()
docs = parser.parse("forms/plain.pdf")
if not docs:
    print("No form fields detected or parsing failed.")
```
## Tips
- Ensure `pdftk` is installed and available on `PATH`; PyPDFTK delegates to the binary.
- For encrypted PDFs, remove or provide the password before parsing; `pdftk` cannot dump fields from password-protected documents without credentials.
- Combine with other Swarmauri parsers to extract both structured form data (`PyPDFTKParser`) and free-form text (`PyPDF2Parser` or `FitzPdfParser`).
## Want to help?
If you want to contribute to swarmauri-sdk, read up on our [guidelines for contributing](https://github.com/swarmauri/swarmauri-sdk/blob/master/contributing.md) that will help you get started.
| text/markdown | Jacob Stewart | jacob@swarmauri.com | null | null | null | swarmauri, parser, pypdftk, extracting, text, pdfs | [
"License :: OSI Approved :: Apache Software License",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Natural Language :: English",
"Development Status :: 3 - Alpha",
"Intended Audience :: Developers",
"Topic :: Software De... | [] | null | null | <3.13,>=3.10 | [] | [] | [] | [
"pypdftk>=0.5",
"swarmauri_base",
"swarmauri_core",
"swarmauri_standard"
] | [] | [] | [] | [] | uv/0.10.4 {"installer":{"name":"uv","version":"0.10.4","subcommand":["publish"]},"python":null,"implementation":{"name":null,"version":null},"distro":{"name":"Ubuntu","version":"24.04","id":"noble","libc":null},"system":{"name":null,"release":null},"cpu":null,"openssl_version":null,"setuptools_version":null,"rustc_version":null,"ci":true} | 2026-02-18T09:48:51.855790 | swarmauri_parser_pypdftk-0.8.3.dev5-py3-none-any.whl | 8,263 | 1e/8e/08f1d7fbf65b4c09f2fbba21b56b530add8191f52ba3fae53e2062f31e1c/swarmauri_parser_pypdftk-0.8.3.dev5-py3-none-any.whl | py3 | bdist_wheel | null | false | 5478379e395786d85248c2238e67c55e | 492b7e39a8c79dd8a1576db37391f3d2d096846f70a2e56b6fe628a5d9d85711 | 1e8e08f1d7fbf65b4c09f2fbba21b56b530add8191f52ba3fae53e2062f31e1c | Apache-2.0 | [
"LICENSE"
] | 0 |
2.4 | swarmauri_parser_pypdf2 | 0.8.3.dev5 | PyPDF2 Parser for Swarmauri. | 
<p align="center">
<a href="https://pypi.org/project/swarmauri_parser_pypdf2/">
<img src="https://img.shields.io/pypi/dm/swarmauri_parser_pypdf2" alt="PyPI - Downloads"/></a>
<a href="https://hits.sh/github.com/swarmauri/swarmauri-sdk/tree/master/pkgs/community/swarmauri_parser_pypdf2/">
<img alt="Hits" src="https://hits.sh/github.com/swarmauri/swarmauri-sdk/tree/master/pkgs/community/swarmauri_parser_pypdf2.svg"/></a>
<a href="https://pypi.org/project/swarmauri_parser_pypdf2/">
<img src="https://img.shields.io/pypi/pyversions/swarmauri_parser_pypdf2" alt="PyPI - Python Version"/></a>
<a href="https://pypi.org/project/swarmauri_parser_pypdf2/">
<img src="https://img.shields.io/pypi/l/swarmauri_parser_pypdf2" alt="PyPI - License"/></a>
<a href="https://pypi.org/project/swarmauri_parser_pypdf2/">
<img src="https://img.shields.io/pypi/v/swarmauri_parser_pypdf2?label=swarmauri_parser_pypdf2&color=green" alt="PyPI - swarmauri_parser_pypdf2"/></a>
</p>
---
# Swarmauri Parser PyPDF2
Lightweight PDF parser for Swarmauri that uses [PyPDF2](https://pypdf2.readthedocs.io/) to extract text from each page. Returns a `Document` per page with metadata describing the source file and page number.
## Features
- Handles PDF input from file paths or raw bytes.
- Produces one `Document` per page, storing text in `content` and metadata fields (`page_number`, `source`).
- Gracefully returns an empty list if PyPDF2 cannot extract text from a page (e.g., scanned PDFs without OCR).
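Because the parser emits one `Document` per page, the pages are easy to stitch back together. `join_pages` below is a hypothetical helper assuming each `Document` exposes `.content` and `.metadata["page_number"]` as listed above:

```python
from types import SimpleNamespace

def join_pages(documents) -> str:
    """Concatenate per-page Document text in ascending page order."""
    ordered = sorted(documents, key=lambda d: d.metadata["page_number"])
    return "\n\n".join(d.content for d in ordered)

# Stand-in objects mimicking the Document shape described above.
pages = [
    SimpleNamespace(content="second", metadata={"page_number": 2}),
    SimpleNamespace(content="first", metadata={"page_number": 1}),
]
print(join_pages(pages))
```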
## Prerequisites
- Python 3.10 or newer.
- PyPDF2 (installed automatically). For encrypted PDFs, ensure you provide access credentials before parsing.
## Installation
```bash
# pip
pip install swarmauri_parser_pypdf2
# poetry
poetry add swarmauri_parser_pypdf2
# uv (pyproject-based projects)
uv add swarmauri_parser_pypdf2
```
## Quickstart
```python
from swarmauri_parser_pypdf2 import PyPDF2Parser
parser = PyPDF2Parser()
documents = parser.parse("manuals/device.pdf")
for doc in documents:
    print(doc.metadata["page_number"], doc.content[:120])
```
## Parsing PDF Bytes
```python
from swarmauri_parser_pypdf2 import PyPDF2Parser
with open("statements/bank.pdf", "rb") as f:
    pdf_bytes = f.read()
parser = PyPDF2Parser()
pages = parser.parse(pdf_bytes)
print(len(pages), "pages parsed from bytes")
```
## Tips
- PyPDF2 extracts text only when the PDF contains accessible text objects. For scanned documents, run OCR first (e.g., with `swarmauri_ocr_pytesseract`).
- Remove or handle password protection before parsing; PyPDF2 cannot decrypt files without the password.
- Combine this parser with Swarmauri chunkers or summarizers to process large documents efficiently.
## Want to help?
If you want to contribute to swarmauri-sdk, read up on our [guidelines for contributing](https://github.com/swarmauri/swarmauri-sdk/blob/master/contributing.md) that will help you get started.
| text/markdown | Jacob Stewart | jacob@swarmauri.com | null | null | null | swarmauri, parser, pypdf2 | [
"License :: OSI Approved :: Apache Software License",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Natural Language :: English",
"Development Status :: 3 - Alpha",
"Intended Audience :: Developers",
"Topic :: Software De... | [] | null | null | <3.13,>=3.10 | [] | [] | [] | [
"PyPDF2>=3.0.1",
"swarmauri_base",
"swarmauri_core",
"swarmauri_standard"
] | [] | [] | [] | [] | uv/0.10.4 {"installer":{"name":"uv","version":"0.10.4","subcommand":["publish"]},"python":null,"implementation":{"name":null,"version":null},"distro":{"name":"Ubuntu","version":"24.04","id":"noble","libc":null},"system":{"name":null,"release":null},"cpu":null,"openssl_version":null,"setuptools_version":null,"rustc_version":null,"ci":true} | 2026-02-18T09:48:51.047840 | swarmauri_parser_pypdf2-0.8.3.dev5-py3-none-any.whl | 8,305 | e8/63/4e79b3d129e49ff61f31cecd499b9f640754dea0160bc38721566a8ebec3/swarmauri_parser_pypdf2-0.8.3.dev5-py3-none-any.whl | py3 | bdist_wheel | null | false | 35a55b8ba5f7eae8a920ad4db4fd36e4 | 0b27af29b208f727a508a9bae5d2b472120eb645b39eb65930f81dc72e8e0fa3 | e8634e79b3d129e49ff61f31cecd499b9f640754dea0160bc38721566a8ebec3 | Apache-2.0 | [
"LICENSE"
] | 0 |
2.4 | swarmauri_parser_fitzpdf | 0.8.3.dev5 | Fitz PDF Parser for Swarmauri. | 
<p align="center">
<a href="https://pypi.org/project/swarmauri_parser_fitzpdf/">
<img src="https://img.shields.io/pypi/dm/swarmauri_parser_fitzpdf" alt="PyPI - Downloads"/></a>
<a href="https://hits.sh/github.com/swarmauri/swarmauri-sdk/tree/master/pkgs/community/swarmauri_parser_fitzpdf/">
<img alt="Hits" src="https://hits.sh/github.com/swarmauri/swarmauri-sdk/tree/master/pkgs/community/swarmauri_parser_fitzpdf.svg"/></a>
<a href="https://pypi.org/project/swarmauri_parser_fitzpdf/">
<img src="https://img.shields.io/pypi/pyversions/swarmauri_parser_fitzpdf" alt="PyPI - Python Version"/></a>
<a href="https://pypi.org/project/swarmauri_parser_fitzpdf/">
<img src="https://img.shields.io/pypi/l/swarmauri_parser_fitzpdf" alt="PyPI - License"/></a>
<a href="https://pypi.org/project/swarmauri_parser_fitzpdf/">
<img src="https://img.shields.io/pypi/v/swarmauri_parser_fitzpdf?label=swarmauri_parser_fitzpdf&color=green" alt="PyPI - swarmauri_parser_fitzpdf"/></a>
</p>
---
# Swarmauri Parser Fitz PDF
PDF-to-text parser for Swarmauri built on [PyMuPDF (`pymupdf`)](https://pymupdf.readthedocs.io/). Extracts text from every page of a PDF and returns a `Document` object with the aggregated content and source metadata.
## Features
- Opens PDFs via PyMuPDF and collects text per page.
- Emits a single `Document` with `content` containing the combined text and `metadata['source']` holding the file path.
- Raises a clear error if the input is not a file path string; returns an empty list if PyMuPDF encounters parsing failures.
## Prerequisites
- Python 3.10 or newer.
- PyMuPDF (`pymupdf`) along with system dependencies (X11 libraries on Linux, poppler on some distros). Install OS packages listed in [PyMuPDF docs](https://pymupdf.readthedocs.io/en/latest/installation.html) before pip installing if needed.
- Read access to the PDF files you plan to parse.
## Installation
```bash
# pip
pip install swarmauri_parser_fitzpdf
# poetry
poetry add swarmauri_parser_fitzpdf
# uv (pyproject-based projects)
uv add swarmauri_parser_fitzpdf
```
## Quickstart
```python
from swarmauri_parser_fitzpdf import FitzPdfParser
parser = FitzPdfParser()
documents = parser.parse("reports/quarterly.pdf")
for doc in documents:
    print(doc.metadata["source"])
    print(doc.content[:500])
```
## Handling Errors
```python
from swarmauri_parser_fitzpdf import FitzPdfParser
parser = FitzPdfParser()
try:
    docs = parser.parse("missing.pdf")
    if not docs:
        print("Parsing failed or returned no content.")
except ValueError as exc:
    print(f"Bad input: {exc}")
```
## Tips
- Pre-process PDFs (deskew, OCR) before parsing if they contain scanned pages without embedded text; PyMuPDF only extracts existing text objects.
- For multi-document pipelines, pair this parser with Swarmauri token-count measurements or summarizers to chunk large PDFs.
- Cache parsed output if the same PDF is accessed frequently—parsing large documents repeatedly is expensive.
## Want to help?
If you want to contribute to swarmauri-sdk, read up on our [guidelines for contributing](https://github.com/swarmauri/swarmauri-sdk/blob/master/contributing.md) that will help you get started.
| text/markdown | Jacob Stewart | jacob@swarmauri.com | null | null | null | swarmauri, parser, fitzpdf, fitz, pdf | [
"License :: OSI Approved :: Apache Software License",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Natural Language :: English",
"Development Status :: 3 - Alpha",
"Intended Audience :: Developers",
"Topic :: Software De... | [] | null | null | <3.13,>=3.10 | [] | [] | [] | [
"PyMuPDF>=1.24.12",
"swarmauri_base",
"swarmauri_core",
"swarmauri_standard"
] | [] | [] | [] | [] | uv/0.10.4 {"installer":{"name":"uv","version":"0.10.4","subcommand":["publish"]},"python":null,"implementation":{"name":null,"version":null},"distro":{"name":"Ubuntu","version":"24.04","id":"noble","libc":null},"system":{"name":null,"release":null},"cpu":null,"openssl_version":null,"setuptools_version":null,"rustc_version":null,"ci":true} | 2026-02-18T09:48:49.927435 | swarmauri_parser_fitzpdf-0.8.3.dev5.tar.gz | 7,358 | 9b/9d/b22cba13acf979a4f1f474e3f9094f0b957395e74e8465e70f97039923b4/swarmauri_parser_fitzpdf-0.8.3.dev5.tar.gz | source | sdist | null | false | 9bd39b8d326b3b26da9f3a47c29043cd | 57a41f4f8ba996613fdf679dba1100b8e5cbcb8a3c8e7d186139681ae16699af | 9b9db22cba13acf979a4f1f474e3f9094f0b957395e74e8465e70f97039923b4 | Apache-2.0 | [
"LICENSE"
] | 0 |
2.4 | swarmauri_parser_entityrecognition | 0.8.3.dev5 | Entity Recognition Parser for Swarmauri. | 
<p align="center">
<a href="https://pypi.org/project/swarmauri_parser_entityrecognition/">
<img src="https://img.shields.io/pypi/dm/swarmauri_parser_entityrecognition" alt="PyPI - Downloads"/></a>
<a href="https://hits.sh/github.com/swarmauri/swarmauri-sdk/tree/master/pkgs/community/swarmauri_parser_entityrecognition/">
<img alt="Hits" src="https://hits.sh/github.com/swarmauri/swarmauri-sdk/tree/master/pkgs/community/swarmauri_parser_entityrecognition.svg"/></a>
<a href="https://pypi.org/project/swarmauri_parser_entityrecognition/">
<img src="https://img.shields.io/pypi/pyversions/swarmauri_parser_entityrecognition" alt="PyPI - Python Version"/></a>
<a href="https://pypi.org/project/swarmauri_parser_entityrecognition/">
<img src="https://img.shields.io/pypi/l/swarmauri_parser_entityrecognition" alt="PyPI - License"/></a>
<a href="https://pypi.org/project/swarmauri_parser_entityrecognition/">
<img src="https://img.shields.io/pypi/v/swarmauri_parser_entityrecognition?label=swarmauri_parser_entityrecognition&color=green" alt="PyPI - swarmauri_parser_entityrecognition"/></a>
</p>
---
# Swarmauri Parser Entity Recognition
Named-entity recognition (NER) parser for Swarmauri built on spaCy. Extracts entities (PERSON, ORG, GPE, etc.) from unstructured text and returns `Document` objects with entity metadata.
## Features
- Uses spaCy's `en_core_web_sm` model by default (downloads automatically if missing).
- Falls back to a blank English pipeline with minimal regex-based tagging when the full model is unavailable (best-effort mode).
- Emits `Document` instances containing the entity text and metadata (`entity_type`, `entity_id`).
## Prerequisites
- Python 3.10 or newer.
- [spaCy](https://spacy.io) and its English model. The parser attempts to download `en_core_web_sm` if missing; set `SPACY_HOME` or pre-install the model in production deployments.
- If running without internet access, install the model ahead of time: `python -m spacy download en_core_web_sm`.
## Installation
```bash
# pip
pip install swarmauri_parser_entityrecognition
# poetry
poetry add swarmauri_parser_entityrecognition
# uv (pyproject-based projects)
uv add swarmauri_parser_entityrecognition
```
## Quickstart
```python
from swarmauri_parser_entityrecognition import EntityRecognitionParser
text = "Barack Obama was born in Hawaii and served as President of the United States."
parser = EntityRecognitionParser()
entities = parser.parse(text)
for entity_doc in entities:
    print(entity_doc.content, entity_doc.metadata["entity_type"])
```
## Batch Processing
```python
texts = [
    "Apple Inc. unveiled new MacBooks in California.",
    "Tim Cook met investors in New York City.",
]
parser = EntityRecognitionParser()
results = [parser.parse(t) for t in texts]
for doc_set in results:
    for doc in doc_set:
        print(doc.content, doc.metadata["entity_type"])
```
## Handling Fallback Mode
When spaCy's English model is unavailable, the parser performs best-effort matching using a blank pipeline and simple regex patterns. Check for `entity_type` values and the `entity_id` metadata to understand which mode produced the result.
```python
parser = EntityRecognitionParser()
entities = parser.parse("Tim Cook announced new products in New York City for Apple Inc.")
print([d.metadata for d in entities])
```
Install spaCy models before production use to avoid fallback accuracy losses.
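The fallback's best-effort behavior can be approximated with a plain regex. `naive_entities` below is a rough, hypothetical sketch (capitalized word runs become candidate entities) and is far cruder than spaCy's statistical NER or the parser's actual fallback logic:

```python
import re

# Hypothetical approximation: treat runs of capitalized words as
# candidate entities. Sentence-initial words produce false positives,
# which is why the package prefers the full spaCy model.
PATTERN = re.compile(r"\b[A-Z][a-zA-Z.]*(?:\s+[A-Z][a-zA-Z.]*)*\b")

def naive_entities(text: str) -> list[str]:
    """Return capitalized spans a blank pipeline might flag."""
    return PATTERN.findall(text)

print(naive_entities("Tim Cook visited New York City for Apple Inc."))
```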
## Tips
- For languages beyond English, load a different spaCy model by changing the initialization logic (e.g., subclass the parser and load `es_core_news_sm`).
- Preprocess text to remove noise (HTML tags, markup) before parsing to improve NER accuracy.
- Combine with Swarmauri middleware or pipelines to fuse entity data with downstream tasks (e.g., knowledge graph enrichment, anonymization).
## Want to help?
If you want to contribute to swarmauri-sdk, read up on our [guidelines for contributing](https://github.com/swarmauri/swarmauri-sdk/blob/master/contributing.md) that will help you get started.
| text/markdown | Jacob Stewart | jacob@swarmauri.com | null | null | null | swarmauri, parser, entityrecognition, entity, recognition | [
"License :: OSI Approved :: Apache Software License",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Natural Language :: English",
"Development Status :: 3 - Alpha",
"Intended Audience :: Developers",
"Topic :: Software De... | [] | null | null | <3.13,>=3.10 | [] | [] | [] | [
"spacy<3.8.0,>=3.7.0",
"spacy-lookups-data>=1.0.5",
"swarmauri_base",
"swarmauri_core",
"swarmauri_standard"
] | [] | [] | [] | [] | uv/0.10.4 {"installer":{"name":"uv","version":"0.10.4","subcommand":["publish"]},"python":null,"implementation":{"name":null,"version":null},"distro":{"name":"Ubuntu","version":"24.04","id":"noble","libc":null},"system":{"name":null,"release":null},"cpu":null,"openssl_version":null,"setuptools_version":null,"rustc_version":null,"ci":true} | 2026-02-18T09:48:37.968561 | swarmauri_parser_entityrecognition-0.8.3.dev5-py3-none-any.whl | 9,848 | b9/58/da88374d539bd2662f98ed8fd19a792abd3a053b2346d53c8009dd9c7a9b/swarmauri_parser_entityrecognition-0.8.3.dev5-py3-none-any.whl | py3 | bdist_wheel | null | false | 1bad2322c3d2e2a0c578ce69df98f166 | 8162c1cce01186fa5cebb32a67444e6f525e8ed437cf9e0e5fd4a2f5715ac55d | b958da88374d539bd2662f98ed8fd19a792abd3a053b2346d53c8009dd9c7a9b | Apache-2.0 | [
"LICENSE"
] | 0 |
2.4 | swarmauri_parser_bertembedding | 0.8.3.dev5 | Swarmauri Bert Embedding Parser | 
<p align="center">
<a href="https://pypi.org/project/swarmauri_parser_bertembedding/">
<img src="https://img.shields.io/pypi/dm/swarmauri_parser_bertembedding" alt="PyPI - Downloads"/></a>
<a href="https://hits.sh/github.com/swarmauri/swarmauri-sdk/tree/master/pkgs/community/swarmauri_parser_bertembedding/">
<img alt="Hits" src="https://hits.sh/github.com/swarmauri/swarmauri-sdk/tree/master/pkgs/community/swarmauri_parser_bertembedding.svg"/></a>
<a href="https://pypi.org/project/swarmauri_parser_bertembedding/">
<img src="https://img.shields.io/pypi/pyversions/swarmauri_parser_bertembedding" alt="PyPI - Python Version"/></a>
<a href="https://pypi.org/project/swarmauri_parser_bertembedding/">
<img src="https://img.shields.io/pypi/l/swarmauri_parser_bertembedding" alt="PyPI - License"/></a>
<a href="https://pypi.org/project/swarmauri_parser_bertembedding/">
<img src="https://img.shields.io/pypi/v/swarmauri_parser_bertembedding?label=swarmauri_parser_bertembedding&color=green" alt="PyPI - swarmauri_parser_bertembedding"/></a>
</p>
---
# Swarmauri Parser Bert Embedding
Parser that converts text into embeddings using a Hugging Face BERT encoder. Produces `Document` objects whose metadata carries the averaged token embedding so downstream Swarmauri pipelines can work with dense vectors.
## Features
- Uses `transformers.BertModel` + `BertTokenizer` (default `bert-base-uncased`).
- Accepts single strings or lists of strings and emits `Document` instances with original text and embedding metadata.
- Runs in inference (`eval`) mode with automatic `torch.no_grad()` handling.
- Works on CPU by default; configure PyTorch device settings to leverage GPU.
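The "averaged token embedding" stored in the metadata amounts to mean pooling over token vectors. `mean_pool` is a hypothetical sketch over hand-written lists; the parser averages `BertModel`'s real hidden states:

```python
def mean_pool(token_vectors: list[list[float]]) -> list[float]:
    """Average per-token embeddings into one fixed-size vector.

    Sketch of the averaging step described above; real inputs would be
    rows of BertModel's last_hidden_state, not hand-written floats.
    """
    n = len(token_vectors)
    dim = len(token_vectors[0])
    return [sum(vec[i] for vec in token_vectors) / n for i in range(dim)]


print(mean_pool([[1.0, 2.0], [3.0, 4.0]]))  # averages each dimension
```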
## Prerequisites
- Python 3.10 or newer.
- PyTorch compatible with your hardware (installed automatically via `transformers` if not present; install CUDA-enabled wheels manually when needed).
- Internet access on first run so Hugging Face downloads tokenizer/model weights (or warm the cache ahead of time).
## Installation
```bash
# pip
pip install swarmauri_parser_bertembedding
# poetry
poetry add swarmauri_parser_bertembedding
# uv (pyproject-based projects)
uv add swarmauri_parser_bertembedding
```
## Quickstart
```python
from swarmauri_parser_bertembedding import BERTEmbeddingParser
parser = BERTEmbeddingParser(parser_model_name="bert-base-uncased")
documents = parser.parse([
    "Swarmauri agents cooperate over shared memory.",
    "Dense embeddings power semantic search.",
])
for doc in documents:
    vector = doc.metadata["embedding"]
    print(doc.content)
    print(len(vector), vector[:5])
```
## Custom Models & Devices
```python
import torch
from swarmauri_parser_bertembedding import BERTEmbeddingParser
from transformers import BertModel
class GPUParser(BERTEmbeddingParser):
    def __init__(self, **kwargs):
        super().__init__(**kwargs)
        self._model = BertModel.from_pretrained(self.parser_model_name).to("cuda")

parser = GPUParser(parser_model_name="bert-base-multilingual-cased")
parser._model.eval()
```
## Batch Embeddings at Scale
```python
from tqdm import tqdm
from swarmauri_parser_bertembedding import BERTEmbeddingParser
texts = [f"Paragraph {i}" for i in range(1000)]
parser = BERTEmbeddingParser()
batched_docs = []
batch_size = 32
for start in tqdm(range(0, len(texts), batch_size)):
    batch = texts[start:start + batch_size]
    batched_docs.extend(parser.parse(batch))
```
Persist the resulting vectors into Swarmauri vector stores (Redis, Qdrant, etc.) via the metadata field.
## Tips
- Preprocess text to match model expectations (lowercase for uncased BERT, language-specific cleanup for multilingual models).
- For extremely long documents, consider chunking before calling `parse` to respect the 512 token limit.
- Use PyTorch's `to("cuda")` or `to("mps")` to execute on GPUs or Apple silicon accelerators.
- Cache Hugging Face weights in CI/CD environments (`HF_HOME=/cache/hf`) to avoid repeated downloads.
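The chunking tip can be sketched with a simple word-count splitter. `chunk_words` is a hypothetical helper; the 350-word default is only a rough margin under BERT's 512-token limit, since subword tokenization expands text by a tokenizer- and language-dependent factor:

```python
def chunk_words(text: str, max_words: int = 350) -> list[str]:
    """Split text into word-count chunks before embedding.

    Assumption: ~350 words usually stays under 512 BERT tokens after
    subword tokenization; tune the margin for your tokenizer.
    """
    words = text.split()
    return [
        " ".join(words[i:i + max_words])
        for i in range(0, len(words), max_words)
    ]
```

Each chunk can then be passed to `parser.parse` individually and the resulting vectors stored per chunk.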
## Want to help?
If you want to contribute to swarmauri-sdk, read up on our [guidelines for contributing](https://github.com/swarmauri/swarmauri-sdk/blob/master/contributing.md) that will help you get started.
| text/markdown | Jacob Stewart | jacob@swarmauri.com | null | null | null | swarmauri, parser, bertembedding, bert, embedding | [
"License :: OSI Approved :: Apache Software License",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Natural Language :: English",
"Development Status :: 3 - Alpha",
"Intended Audience :: Developers",
"Topic :: Software De... | [] | null | null | <3.13,>=3.10 | [] | [] | [] | [
"swarmauri_base",
"swarmauri_core",
"swarmauri_standard",
"torch",
"transformers>=4.45.0"
] | [] | [] | [] | [] | uv/0.10.4 {"installer":{"name":"uv","version":"0.10.4","subcommand":["publish"]},"python":null,"implementation":{"name":null,"version":null},"distro":{"name":"Ubuntu","version":"24.04","id":"noble","libc":null},"system":{"name":null,"release":null},"cpu":null,"openssl_version":null,"setuptools_version":null,"rustc_version":null,"ci":true} | 2026-02-18T09:48:33.091105 | swarmauri_parser_bertembedding-0.8.3.dev5-py3-none-any.whl | 9,381 | bd/78/c01b30c933658e104bd5db352f33c692d347599824630d5c69723ffbbf41/swarmauri_parser_bertembedding-0.8.3.dev5-py3-none-any.whl | py3 | bdist_wheel | null | false | e2fb79ca1ecb0f5322d495a5be31afa0 | ec12da41908fdc198873f3480eab78603c671af3214797e6638635b1511f78f7 | bd78c01b30c933658e104bd5db352f33c692d347599824630d5c69723ffbbf41 | Apache-2.0 | [
"LICENSE"
] | 0 |
2.4 | swarmauri_ocr_pytesseract | 0.9.3.dev5 | Swarmauri Tesseract Image to Text Model | 
<p align="center">
<a href="https://pypi.org/project/swarmauri_ocr_pytesseract/">
<img src="https://img.shields.io/pypi/dm/swarmauri_ocr_pytesseract" alt="PyPI - Downloads"/></a>
<a href="https://hits.sh/github.com/swarmauri/swarmauri-sdk/tree/master/pkgs/community/swarmauri_ocr_pytesseract/">
<img alt="Hits" src="https://hits.sh/github.com/swarmauri/swarmauri-sdk/tree/master/pkgs/community/swarmauri_ocr_pytesseract.svg"/></a>
<a href="https://pypi.org/project/swarmauri_ocr_pytesseract/">
<img src="https://img.shields.io/pypi/pyversions/swarmauri_ocr_pytesseract" alt="PyPI - Python Version"/></a>
<a href="https://pypi.org/project/swarmauri_ocr_pytesseract/">
<img src="https://img.shields.io/pypi/l/swarmauri_ocr_pytesseract" alt="PyPI - License"/></a>
<a href="https://pypi.org/project/swarmauri_ocr_pytesseract/">
<img src="https://img.shields.io/pypi/v/swarmauri_ocr_pytesseract?label=swarmauri_ocr_pytesseract&color=green" alt="PyPI - swarmauri_ocr_pytesseract"/></a>
</p>
---
# Swarmauri OCR Pytesseract
OCR adapter for Swarmauri built on top of [PyTesseract](https://pypi.org/project/pytesseract/). Accepts paths, bytes, or PIL images, and exposes synchronous, async, and batch APIs for extracting text.
## Features
- Wraps Tesseract OCR via PyTesseract behind Swarmauri's `OCRBase` interface.
- Supports multiple languages (`language` parameter) and custom Tesseract configs.
- Handles individual, async, and batched OCR calls with optional concurrency limits.
- Helper to list installed Tesseract languages through `get_supported_languages()`.
## Prerequisites
- Python 3.10 or newer.
- [Tesseract OCR](https://github.com/tesseract-ocr/tesseract) installed on the host (`tesseract` binary reachable on `PATH` or via `TESSERACT_CMD`).
- `pytesseract`, `Pillow`, and related dependencies (installed automatically with this package).
## Installation
```bash
# pip
pip install swarmauri_ocr_pytesseract
# poetry
poetry add swarmauri_ocr_pytesseract
# uv (pyproject-based projects)
uv add swarmauri_ocr_pytesseract
```
## Quickstart
```python
from swarmauri_ocr_pytesseract import PytesseractOCR
ocr = PytesseractOCR(language="eng")
text = ocr.extract_text("docs/invoice.png")
print(text)
```
## Processing Image Bytes
```python
from pathlib import Path
from swarmauri_ocr_pytesseract import PytesseractOCR
png_bytes = Path("receipts/ticket.png").read_bytes()
ocr = PytesseractOCR(language="eng", config="--psm 6")
print(ocr.extract_text(png_bytes))
```
## Async and Batch APIs
```python
import asyncio
from swarmauri_ocr_pytesseract import PytesseractOCR
ocr = PytesseractOCR(language="fra")
async def run_async():
    text = await ocr.aextract_text("scans/document_fr.png")
    print(text)
    texts = await ocr.abatch([
        "scans/page1.png",
        "scans/page2.png",
    ], max_concurrent=2)
    for page, content in enumerate(texts, start=1):
        print(f"Page {page}: {content[:80]}")

# asyncio.run(run_async())
```
## List Available Languages
```python
from swarmauri_ocr_pytesseract import PytesseractOCR
ocr = PytesseractOCR()
print(ocr.get_supported_languages())
```
## Tips
- Set `TESSERACT_CMD` if the binary lives outside standard locations (e.g., Windows installs).
- Use appropriate page segmentation modes (`--psm`) and OCR engine modes (`--oem`) through the `config` parameter to improve quality.
- Pre-process images (grayscale, thresholding) before passing them to the OCR for better accuracy.
- When running in containers, ensure Tesseract language packs (`.traineddata`) are installed for the languages you plan to use.
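The thresholding step from the pre-processing tip can be sketched independently of any imaging library. `binarize` is a hypothetical helper over a nested-list grayscale image; with Pillow, `image.convert("L").point(lambda p: 255 if p > 128 else 0)` achieves the same effect before handing the image to the OCR:

```python
def binarize(gray: list[list[int]], threshold: int = 128) -> list[list[int]]:
    """Threshold a grayscale image (pixel values 0-255) to black/white.

    Library-agnostic sketch: pixels above the threshold become white
    (255), the rest black (0), which often sharpens OCR input.
    """
    return [[255 if px > threshold else 0 for px in row] for row in gray]
```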
## Want to help?
If you want to contribute to swarmauri-sdk, read up on our [guidelines for contributing](https://github.com/swarmauri/swarmauri-sdk/blob/master/contributing.md) that will help you get started.
| text/markdown | Jacob Stewart | jacob@swarmauri.com | null | null | null | swarmauri, ocr, pytesseract, tesseract, image, text, model | [
"License :: OSI Approved :: Apache Software License",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Natural Language :: English",
"Development Status :: 3 - Alpha",
"Intended Audience :: Developers",
"Topic :: Software De... | [] | null | null | <3.13,>=3.10 | [] | [] | [] | [
"pytesseract>=0.3.13",
"swarmauri_base",
"swarmauri_core",
"swarmauri_standard"
] | [] | [] | [] | [] | uv/0.10.4 {"installer":{"name":"uv","version":"0.10.4","subcommand":["publish"]},"python":null,"implementation":{"name":null,"version":null},"distro":{"name":"Ubuntu","version":"24.04","id":"noble","libc":null},"system":{"name":null,"release":null},"cpu":null,"openssl_version":null,"setuptools_version":null,"rustc_version":null,"ci":true} | 2026-02-18T09:48:29.834913 | swarmauri_ocr_pytesseract-0.9.3.dev5-py3-none-any.whl | 9,640 | 3a/2c/a5a1011d8911e20bbd6ff0032a7f98ef20ffca1a6c0b9823b15574dd04a2/swarmauri_ocr_pytesseract-0.9.3.dev5-py3-none-any.whl | py3 | bdist_wheel | null | false | bf8f70530a408ed3324086599605006a | 876b8a00ac8da89f18dcce6f99eb076405b08cb7b52cab22cc10fbc9a909ebcc | 3a2ca5a1011d8911e20bbd6ff0032a7f98ef20ffca1a6c0b9823b15574dd04a2 | Apache-2.0 | [
"LICENSE"
] | 0 |
2.4 | elluminate | 1.0.1 | The Official Elluminate SDK | # elluminate SDK
The elluminate SDK provides a convenient way to interact with the elluminate platform programmatically. It enables developers to evaluate and optimize prompts, manage experiments, and integrate elluminate's evaluation capabilities directly into their applications.
## Installation
Install the elluminate SDK using pip:
```bash
pip install elluminate
```
## 📚 Full Documentation
The full documentation of elluminate including the SDK can be found at: <https://docs.elluminate.de/>
## Quick Start
### Prerequisites
Before you begin, you'll need to set up your API key:
1. Visit your project's "Keys" dashboard to create a new API key
2. Export your API key and service address as environment variables:
```bash
export ELLUMINATE_API_KEY=<your_api_key>
export ELLUMINATE_BASE_URL=<your_elluminate_service_address>
```
Never commit your API key to version control. For detailed information about API key management and security best practices, see our [API Key Management Guide](https://docs.elluminate.de/get_started/api_keys/).
### Basic Usage
Here's a simple example to evaluate your first prompt:
```python
from elluminate import Client
# Initialize the client
client = Client()
# Create a prompt template
template, _ = client.get_or_create_prompt_template(
    name="Concept Explanation",
    messages=[{"role": "user", "content": "Explain the concept of {{concept}} in simple terms."}],
)

# Generate evaluation criteria for the template
template.get_or_generate_criteria()

# Create a collection with test cases
collection, _ = client.get_or_create_collection(
    name="Concept Variables",
    defaults={
        "description": "Template variables for concept explanations",
        "variables": [{"concept": "recursion"}],
    },
)

# Run a complete experiment (generates responses + rates them)
experiment = client.run_experiment(
    name="Concept Evaluation Test",
    prompt_template=template,
    collection=collection,
    description="Evaluating concept explanation responses",
)

# Print results
for response in experiment.responses():
    print(f"Response: {response.response_str}")
    for rating in response.ratings:
        print(f"  Criterion: {rating.criterion.criterion_str}")
        print(f"  Rating: {rating.rating}")
```
### Alternative Client Initialization
You can also initialize the client by directly passing the API key and/or base URL:
```python
client = Client(api_key="your-api-key", base_url="your-base-url")
```
## Advanced Features
### Batch Evaluation with Experiments
For evaluating prompts across multiple test cases:
```python
from elluminate import Client
from elluminate.schemas import RatingMode
client = Client()
# Create a prompt template
template, _ = client.get_or_create_prompt_template(
    name="Math Teaching Prompt",
    messages=[{"role": "user", "content": "Explain {{math_concept}} to a {{grade_level}} student using simple examples."}],
)

# Generate evaluation criteria
template.get_or_generate_criteria()

# Create a collection with multiple test cases
collection, _ = client.get_or_create_collection(
    name="Math Teaching Test Cases",
    defaults={"description": "Various math concepts and grade levels"},
)

# Add test cases in batch
collection.add_many(
    variables=[
        {"math_concept": "fractions", "grade_level": "5th grade"},
        {"math_concept": "algebra", "grade_level": "8th grade"},
        {"math_concept": "geometry", "grade_level": "6th grade"},
    ]
)

# Run the experiment (handles all response generation and rating)
experiment = client.run_experiment(
    name="Math Teaching Evaluation",
    prompt_template=template,
    collection=collection,
    description="Evaluating math explanations across different concepts and grade levels",
    rating_mode=RatingMode.DETAILED,  # Get reasoning with ratings
)

# Print results for each response
for response in experiment.responses():
    variables = response.prompt.template_variables.input_values
    print(f"\nConcept: {variables['math_concept']}, Grade: {variables['grade_level']}")
    print(f"Response: {response.response_str[:100]}...")
    for rating in response.ratings:
        print(f"  • {rating.criterion.criterion_str}: {rating.rating}")
        if rating.reasoning:
            print(f"    Reasoning: {rating.reasoning}")
```
### Evaluating External Agents
To evaluate responses from external systems (LangChain agents, OpenAI Assistants, custom APIs):
```python
from elluminate import Client
from elluminate.schemas import RatingValue
client = Client()
# Set up template and collection
template, _ = client.get_or_create_prompt_template(
name="Agent Evaluation",
messages=[{"role": "user", "content": "Answer: {{question}}"}],
)
template.get_or_generate_criteria()
collection, _ = client.get_or_create_collection(
name="Agent Test Cases",
defaults={"variables": [{"question": "What is Python?"}]},
)
# Create experiment WITHOUT auto-generation
experiment = client.create_experiment(
name="External Agent Eval",
prompt_template=template,
collection=collection,
)
# Get responses from your external agent
external_responses = ["Python is a high-level programming language..."]
template_vars = list(collection.items())
# Upload responses and rate them
experiment.add_responses(responses=external_responses, template_variables=template_vars)
experiment.rate_responses()
# Analyze results
for response in experiment.responses():
passed = sum(1 for r in response.ratings if r.rating == RatingValue.YES)
print(f"Pass rate: {passed}/{len(response.ratings)}")
```
## Additional Resources
- [General Documentation](https://docs.elluminate.de/)
- [Key Concepts Guide](https://docs.elluminate.de/guides/the_basics/)
- [API Documentation](https://docs.elluminate.de/elluminate/client/)
| text/markdown | ellamind GmbH | null | null | null | null | null | [] | [] | null | null | <3.15,>=3.10 | [] | [] | [] | [
"httpx-sse==0.4.1",
"loguru>=0.7.3",
"openai>=1.63.2",
"pydantic>=2.10.6",
"requests>=2.32.3",
"tenacity>=9.0.0",
"tqdm>=4.67.1",
"beautifulsoup4>=4.13.3; extra == \"examples\"",
"langchain-community>=0.3.18; extra == \"examples\"",
"langchain-core>=0.3.37; extra == \"examples\"",
"langchain-ope... | [] | [] | [] | [
"Documentation, https://docs.elluminate.de"
] | uv/0.10.4 {"installer":{"name":"uv","version":"0.10.4","subcommand":["publish"]},"python":null,"implementation":{"name":null,"version":null},"distro":{"name":"Ubuntu","version":"24.04","id":"noble","libc":null},"system":{"name":null,"release":null},"cpu":null,"openssl_version":null,"setuptools_version":null,"rustc_version":null,"ci":true} | 2026-02-18T09:48:28.468121 | elluminate-1.0.1-py3-none-any.whl | 86,144 | fc/54/d76b13ff30c925e397fe7be80aca6bf19b1e577b0cab41d3318950592f2a/elluminate-1.0.1-py3-none-any.whl | py3 | bdist_wheel | null | false | 3993270a87624693e7e784c65e49ac5b | 0e18345ef5c8c405cc85a06421c2edcc7b8c4c6ce03166975ba0dc59e9ed7857 | fc54d76b13ff30c925e397fe7be80aca6bf19b1e577b0cab41d3318950592f2a | null | [] | 332 |
2.4 | swarmauri_middleware_ratepolicy | 0.8.3.dev5 | A middleware implementing rate policy and retry logic for Swarmauri | 
<p align="center">
<a href="https://pypi.org/project/swarmauri_middleware_ratepolicy/">
<img src="https://img.shields.io/pypi/dm/swarmauri_middleware_ratepolicy" alt="PyPI - Downloads"/></a>
<a href="https://hits.sh/github.com/swarmauri/swarmauri-sdk/tree/master/pkgs/community/swarmauri_middleware_ratepolicy/">
<img alt="Hits" src="https://hits.sh/github.com/swarmauri/swarmauri-sdk/tree/master/pkgs/community/swarmauri_middleware_ratepolicy.svg"/></a>
<a href="https://pypi.org/project/swarmauri_middleware_ratepolicy/">
<img src="https://img.shields.io/pypi/pyversions/swarmauri_middleware_ratepolicy" alt="PyPI - Python Version"/></a>
<a href="https://pypi.org/project/swarmauri_middleware_ratepolicy/">
<img src="https://img.shields.io/pypi/l/swarmauri_middleware_ratepolicy" alt="PyPI - License"/></a>
<a href="https://pypi.org/project/swarmauri_middleware_ratepolicy/">
<img src="https://img.shields.io/pypi/v/swarmauri_middleware_ratepolicy?label=swarmauri_middleware_ratepolicy&color=green" alt="PyPI - swarmauri_middleware_ratepolicy"/></a>
</p>
---
# Swarmauri Middleware Ratepolicy
Retry-policy middleware for Swarmauri services. Provides exponential backoff with configurable retry attempts and wait intervals so unreliable upstream calls can be retried transparently.
## Features
- Implements Swarmauri's `MiddlewareBase` contract; wrap any callable sequence (FastAPI routes, job runners, etc.).
- Configurable `max_retries` and `initial_wait` seconds. Wait time doubles on each retry (`initial_wait * 2**attempt`).
- Emits structured logs on retry attempts and successes for observability.
- Simple synchronous dispatch; wrap async callables by providing a sync shim that executes the coroutine (see example below).
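The backoff formula from the feature list can be sketched as plain arithmetic; this is an illustration of the formula, not the middleware's internal code:

```python
# Exponential backoff schedule implied by initial_wait * 2**attempt.
def backoff_schedule(initial_wait: float, max_retries: int) -> list[float]:
    """Wait time (seconds) before each retry attempt 0, 1, ..."""
    return [initial_wait * 2**attempt for attempt in range(max_retries)]

print(backoff_schedule(0.5, 3))  # [0.5, 1.0, 2.0]
```

So with `max_retries=3` and `initial_wait=0.5`, a persistently failing call waits 0.5s, 1s, then 2s between attempts.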
## Prerequisites
- Python 3.10 or newer.
- A Swarmauri application or FastAPI project that supports middleware registration.
## Installation
```bash
# pip
pip install swarmauri_middleware_ratepolicy
# poetry
poetry add swarmauri_middleware_ratepolicy
# uv (pyproject-based projects)
uv add swarmauri_middleware_ratepolicy
```
## Quickstart
```python
import logging
from swarmauri_middleware_ratepolicy import RetryPolicyMiddleware
logging.basicConfig(level=logging.INFO)
retry_middleware = RetryPolicyMiddleware(max_retries=3, initial_wait=0.5)
class RequestEnvelope:
def __init__(self, payload: str):
self.payload = payload
request = RequestEnvelope("work-item-123")
def call_next(req: RequestEnvelope):
raise RuntimeError("Simulated upstream failure")
retry_middleware.dispatch(request, call_next)
```
With Swarmauri's middleware stack (or FastAPI), register it just like other Swarmauri middleware:
```python
from swarmauri_app.middleware import middleware_stack
from swarmauri_middleware_ratepolicy import RetryPolicyMiddleware
middleware_stack.add_middleware(
RetryPolicyMiddleware,
max_retries=4,
initial_wait=0.25,
)
```
## Example: Wrapping an External API Call
```python
import logging
import requests
from swarmauri_middleware_ratepolicy import RetryPolicyMiddleware
logging.basicConfig(level=logging.INFO)
retry = RetryPolicyMiddleware(max_retries=4, initial_wait=0.25)
class RequestWrapper:
def __init__(self, url: str):
self.url = url
wrapper = RequestWrapper("https://api.example.com/data")
response = retry.dispatch(
wrapper,
lambda req: requests.get(req.url, timeout=5),
)
print(response.status_code)
```
## Tips
- Keep `max_retries` small for user-facing endpoints to avoid long wait chains; rely on background queues for bulk retries.
- Combine with the circuit breaker middleware for layered resilience (circuit breaker opens when repeated retries fail).
- When wrapping async callables, convert them to sync functions using `asyncio.run` or `anyio.from_thread` to fit the middleware signature.
- Capture logs at INFO level to trace retry attempts in production.
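A minimal sync shim for the async tip above might look like this; the coroutine, its name, and the `Request` class are illustrative stand-ins, not part of this package:

```python
import asyncio


async def fetch_upstream(url: str) -> str:
    # Stand-in for a real async upstream call.
    await asyncio.sleep(0)
    return f"payload from {url}"


class Request:
    def __init__(self, url: str):
        self.url = url


def sync_call_next(req: Request) -> str:
    # Sync wrapper matching the middleware's call_next shape;
    # runs the coroutine to completion on a fresh event loop.
    return asyncio.run(fetch_upstream(req.url))


print(sync_call_next(Request("https://api.example.com/data")))
```

`sync_call_next` can then be passed to `dispatch` wherever a synchronous callable is expected.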
## Want to help?
If you want to contribute to swarmauri-sdk, read up on our [guidelines for contributing](https://github.com/swarmauri/swarmauri-sdk/blob/master/contributing.md) that will help you get started.
| text/markdown | Jacob Stewart | jacob@swarmauri.com | null | null | null | swarmauri, middleware, ratepolicy, retry, tenacity, async, api, framework, implementing, rate, policy, logic | [
"License :: OSI Approved :: Apache Software License",
"Natural Language :: English",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: 3.13",
"Development Status :: 3 - Alpha",
"Intended Audi... | [] | null | null | <3.13,>=3.10 | [] | [] | [] | [
"swarmauri_base",
"swarmauri_core",
"swarmauri_standard",
"tenacity"
] | [] | [] | [] | [] | uv/0.10.4 {"installer":{"name":"uv","version":"0.10.4","subcommand":["publish"]},"python":null,"implementation":{"name":null,"version":null},"distro":{"name":"Ubuntu","version":"24.04","id":"noble","libc":null},"system":{"name":null,"release":null},"cpu":null,"openssl_version":null,"setuptools_version":null,"rustc_version":null,"ci":true} | 2026-02-18T09:48:27.164999 | swarmauri_middleware_ratepolicy-0.8.3.dev5.tar.gz | 7,538 | 8b/38/a806b7231fd95374f26d00f687d6391dd724007811af85563d9f451adb79/swarmauri_middleware_ratepolicy-0.8.3.dev5.tar.gz | source | sdist | null | false | 7739c2c07e76a5ec666f37ab52589d5e | 08a733a80b7a5a5d541edf927753ded003d1f776f82e59d75964e20f1b5c5c8b | 8b38a806b7231fd95374f26d00f687d6391dd724007811af85563d9f451adb79 | Apache-2.0 | [
"LICENSE"
] | 0 |
2.4 | swarmauri_middleware_circuitbreaker | 0.7.3.dev5 | Circuit Breaker Middleware for Swarmauri | 
<p align="center">
<a href="https://pypi.org/project/swarmauri_middleware_circuitbreaker/">
<img src="https://img.shields.io/pypi/dm/swarmauri_middleware_circuitbreaker" alt="PyPI - Downloads"/></a>
<a href="https://hits.sh/github.com/swarmauri/swarmauri-sdk/tree/master/pkgs/community/swarmauri_middleware_circuitbreaker/">
<img alt="Hits" src="https://hits.sh/github.com/swarmauri/swarmauri-sdk/tree/master/pkgs/community/swarmauri_middleware_circuitbreaker.svg"/></a>
<a href="https://pypi.org/project/swarmauri_middleware_circuitbreaker/">
<img src="https://img.shields.io/pypi/pyversions/swarmauri_middleware_circuitbreaker" alt="PyPI - Python Version"/></a>
<a href="https://pypi.org/project/swarmauri_middleware_circuitbreaker/">
<img src="https://img.shields.io/pypi/l/swarmauri_middleware_circuitbreaker" alt="PyPI - License"/></a>
<a href="https://pypi.org/project/swarmauri_middleware_circuitbreaker/">
<img src="https://img.shields.io/pypi/v/swarmauri_middleware_circuitbreaker?label=swarmauri_middleware_circuitbreaker&color=green" alt="PyPI - swarmauri_middleware_circuitbreaker"/></a>
</p>
---
# Swarmauri Middleware Circuitbreaker
FastAPI middleware that adds a configurable circuit breaker (powered by `pybreaker`) to Swarmauri services. Automatically blocks requests after repeated failures and recovers via a half-open probation window.
## Features
- Integrates with Swarmauri's `MiddlewareBase` interface—drop in via `app.add_middleware`.
- Configurable thresholds: `fail_max`, `reset_timeout`, and `half_open_wait_time`.
- Supports async FastAPI request handling and returns HTTP 429 responses when the circuit is open.
- Logs state transitions (closed ➜ open ➜ half-open) for observability.
## Prerequisites
- Python 3.10 or newer.
- FastAPI application (ASGI) using Swarmauri's middleware system.
- `pybreaker` installed (included as a dependency of this package).
## Installation
```bash
# pip
pip install swarmauri_middleware_circuitbreaker
# poetry
poetry add swarmauri_middleware_circuitbreaker
# uv (pyproject-based projects)
uv add swarmauri_middleware_circuitbreaker
```
## Quickstart
```python
from fastapi import FastAPI
from swarmauri_middleware_circuitbreaker import CircuitBreakerMiddleware
app = FastAPI()
app.add_middleware(
CircuitBreakerMiddleware,
fail_max=5,
reset_timeout=30,
half_open_wait_time=10,
)
@app.get("/unstable")
async def unstable_endpoint():
raise RuntimeError("Simulated failure")
```
- After 5 failures (`fail_max=5`), the circuit opens and subsequent calls receive HTTP 429.
- After `reset_timeout` seconds, a single test request is allowed in the half-open state; success closes the circuit, failure keeps it open.
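The closed ➜ open ➜ half-open cycle described above can be illustrated with a toy state machine. This is a simplified sketch for intuition, not `pybreaker`'s implementation (which additionally handles thread safety, storage backends, and listeners):

```python
import time


class ToyBreaker:
    """Minimal illustration of circuit breaker state transitions."""

    def __init__(self, fail_max: int, reset_timeout: float):
        self.fail_max = fail_max
        self.reset_timeout = reset_timeout
        self.failures = 0
        self.state = "closed"
        self.opened_at = 0.0

    def call(self, fn):
        if self.state == "open":
            if time.monotonic() - self.opened_at >= self.reset_timeout:
                self.state = "half-open"  # allow a single probe request
            else:
                raise RuntimeError("circuit open: request blocked")
        try:
            result = fn()
        except Exception:
            self.failures += 1
            # A failed probe, or hitting fail_max, (re)opens the circuit.
            if self.state == "half-open" or self.failures >= self.fail_max:
                self.state = "open"
                self.opened_at = time.monotonic()
            raise
        self.failures = 0
        self.state = "closed"  # success closes the circuit
        return result
```

With `fail_max=2`, two consecutive failures open the circuit, and further calls are rejected until `reset_timeout` elapses and a probe succeeds.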
## Observing the Circuit
```python
import logging
logging.basicConfig(level=logging.INFO)
# Logs include:
# "Circuit half-open: Waiting for test request to determine health"
# "Circuit opened: Excessive failures detected"
# "Circuit closed: Service is healthy again"
```
Integrate with your logging/monitoring stack to alert on circuit state changes.
## Tips
- Use targeted middleware stacks; wrap only the routes that call upstream services prone to failure.
- Tune `fail_max` and `reset_timeout` for each dependency—critical paths may require conservative thresholds.
- Pair with retry logic or queueing to degrade gracefully while the circuit is open.
- When testing locally, trigger failures intentionally to ensure your observability tracks circuit transitions correctly.
## Want to help?
If you want to contribute to swarmauri-sdk, read up on our [guidelines for contributing](https://github.com/swarmauri/swarmauri-sdk/blob/master/contributing.md) that will help you get started.
| text/markdown | Jacob Stewart | jacob@swarmauri.com | null | null | null | swarmauri, middleware, circuitbreaker, circuit, breaker | [
"License :: OSI Approved :: Apache Software License",
"Natural Language :: English",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: 3.13",
"Development Status :: 3 - Alpha",
"Intended Audi... | [] | null | null | <3.13,>=3.10 | [] | [] | [] | [
"fastapi",
"pybreaker",
"swarmauri_base",
"swarmauri_core",
"swarmauri_standard"
] | [] | [] | [] | [] | uv/0.10.4 {"installer":{"name":"uv","version":"0.10.4","subcommand":["publish"]},"python":null,"implementation":{"name":null,"version":null},"distro":{"name":"Ubuntu","version":"24.04","id":"noble","libc":null},"system":{"name":null,"release":null},"cpu":null,"openssl_version":null,"setuptools_version":null,"rustc_version":null,"ci":true} | 2026-02-18T09:48:26.484533 | swarmauri_middleware_circuitbreaker-0.7.3.dev5-py3-none-any.whl | 8,918 | 81/3b/709abc68a7c6407b8826c2a0242cff2d1d118a314dfcf84d35efb82e7559/swarmauri_middleware_circuitbreaker-0.7.3.dev5-py3-none-any.whl | py3 | bdist_wheel | null | false | 0c48d093e9cfcfc4b4c8a21cb31246d2 | 5dd7d040ec5e499dd32fd6e701209555c9f32bc09e2b7c8cccb6164dc100ab4e | 813b709abc68a7c6407b8826c2a0242cff2d1d118a314dfcf84d35efb82e7559 | Apache-2.0 | [
"LICENSE"
] | 0 |
2.4 | eodash_catalog | 0.4.2 | This package is intended to help create a compatible STAC catalog for the eodash dashboard client. It supports configuration of multiple endpoint types for information extraction. | # eodash_catalog
[](https://pypi.org/project/eodash_catalog)
[](https://pypi.org/project/eodash_catalog)
---
**Table of Contents**
- [Installation](#installation)
- [License](#license)
## Installation
```console
pip install eodash_catalog
```
## Testing
The project uses pytest, which runs as part of CI:
```bash
python -m pytest
```
## Linting
The project uses Ruff to check code style and formatting:
```bash
ruff check .
```
## Versioning and branches
eodash_catalog adheres to [Semantic Versioning](https://semver.org/) and follows these rules:
Given a version number `MAJOR.MINOR.PATCH`, we increment the:
- `MAJOR` version when we make incompatible API changes
- `MINOR` version when we add functionality in a backward compatible manner
- `PATCH` version when we make backward compatible bug fixes
Active development happens on the `main` branch.
New features or maintenance commits should target this branch via a merge request from a feature branch.
## Tagging
This repository uses bump2version for managing tags. To bump a version use
```bash
bump2version <major|minor|patch> # or bump2version --new-version <new_version>
git push && git push --tags
```
Pushing a tag in the repository automatically creates:
- a versioned package on PyPI
## License
`eodash_catalog` is distributed under the terms of the [MIT](https://spdx.org/licenses/MIT.html) license.
| text/markdown | null | Daniel Santillan <daniel.santillan@eox.at> | null | null | null | null | [
"Development Status :: 4 - Beta",
"Programming Language :: Python",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: Implementation :: CPython",
"Programming Language :: Python :: Implementati... | [] | null | null | >=3.10 | [] | [] | [] | [
"click",
"click<9",
"oauthlib<3.3",
"owslib",
"pystac-client<1",
"pystac[validation]<2",
"python-dateutil<3",
"python-dotenv<1.1.0",
"pyyaml<7",
"requests-oauthlib<1.3.2",
"requests<3",
"setuptools<71",
"spdx-lookup<=0.3.3",
"stac-geoparquet<=0.7.0",
"structlog<22.0"
] | [] | [] | [] | [
"Documentation, https://github.com/eodash/eodash_catalog#readme",
"Issues, https://github.com/eodash/eodash_catalog/issues",
"Source, https://github.com/eodash/eodash_catalog"
] | Hatch/1.16.3 cpython/3.12.12 HTTPX/0.28.1 | 2026-02-18T09:48:21.854925 | eodash_catalog-0.4.2-py3-none-any.whl | 40,609 | 6d/d5/42f7f823a78e07d5a741c493033efb6684f08e5d5558489399a6d167f1b9/eodash_catalog-0.4.2-py3-none-any.whl | py3 | bdist_wheel | null | false | d3a121b432f407cebb06325ae57d81fe | 0427ced95b9bd22239a1163f9f56fd3006fce8f57b2009d13f89a30f101b380b | 6dd542f7f823a78e07d5a741c493033efb6684f08e5d5558489399a6d167f1b9 | MIT | [
"LICENSE.txt"
] | 0 |
2.4 | kestrapy | 1.0.9 | Kestra EE | # @kestra-io/kestrapy
All API operations, except for Superadmin-only endpoints, require a tenant identifier in the HTTP path.<br/>
Endpoints designated as Superadmin-only are not tenant-scoped.
This Python package is automatically generated by the [OpenAPI Generator](https://openapi-generator.tech) project:
- API version: 1.2.5
- Package version: 1.0.9
- Generator version: 7.19.0
- Build package: org.openapitools.codegen.languages.PythonClientCodegen
## Requirements
Python 3.9+
## Installation & Usage
### pip install
If the python package is hosted on a repository, you can install directly using:
```sh
pip install git+https://github.com/GIT_USER_ID/GIT_REPO_ID.git
```
(you may need to run `pip` with root permission: `sudo pip install git+https://github.com/GIT_USER_ID/GIT_REPO_ID.git`)
Then import the package:
```python
import kestrapy
```
### Setuptools
Install via [Setuptools](http://pypi.python.org/pypi/setuptools).
```sh
python setup.py install --user
```
(or `sudo python setup.py install` to install the package for all users)
Then import the package:
```python
import kestrapy
```
### Tests
Execute `pytest` to run the tests.
## Getting Started
Please follow the [installation procedure](#installation--usage) and then run the following:
```python
import os
import kestrapy
from kestrapy.rest import ApiException
from pprint import pprint
# Defining the host is optional and defaults to http://localhost
# See configuration.py for a list of all supported configuration parameters.
configuration = kestrapy.Configuration(
host = "http://localhost"
)
# The client must configure the authentication and authorization parameters
# in accordance with the API server security policy.
# Examples for each auth method are provided below, use the example that
# satisfies your auth use case.
# Configure HTTP basic authorization: basicAuth
configuration = kestrapy.Configuration(
username = os.environ["USERNAME"],
password = os.environ["PASSWORD"]
)
# Configure Bearer authorization (Bearer): bearerAuth
configuration = kestrapy.Configuration(
access_token = os.environ["BEARER_TOKEN"]
)
# Enter a context with an instance of the API client
with kestrapy.ApiClient(configuration) as api_client:
# Create an instance of the API class
api_instance = kestrapy.ExecutionsApi(api_client)
namespace = 'namespace_example' # str | The flow namespace
id = 'id_example' # str | The flow id
wait = False # bool | If the server will wait the end of the execution (default to False)
tenant = 'tenant_example' # str |
labels = ['labels_example'] # List[str] | The labels as a list of 'key:value' (optional)
revision = 56 # int | The flow revision or latest if null (optional)
schedule_date = '2013-10-20T19:20:30+01:00' # datetime | Schedule the flow on a specific date (optional)
breakpoints = 'breakpoints_example' # str | Set a list of breakpoints at specific tasks 'id.value', separated by a comma. (optional)
kind = kestrapy.ExecutionKind() # ExecutionKind | Specific execution kind (optional)
try:
# Create a new execution for a flow
api_response = api_instance.create_execution(namespace, id, wait, tenant, labels=labels, revision=revision, schedule_date=schedule_date, breakpoints=breakpoints, kind=kind)
print("The response of ExecutionsApi->create_execution:\n")
pprint(api_response)
except ApiException as e:
print("Exception when calling ExecutionsApi->create_execution: %s\n" % e)
```
## Documentation for API Endpoints
All URIs are relative to *http://localhost*
Class | Method | HTTP request | Description
------------ | ------------- | ------------- | -------------
*ExecutionsApi* | [**create_execution**](docs/ExecutionsApi.md#create_execution) | **POST** /api/v1/{tenant}/executions/{namespace}/{id} | Create a new execution for a flow
*ExecutionsApi* | [**delete_execution**](docs/ExecutionsApi.md#delete_execution) | **DELETE** /api/v1/{tenant}/executions/{executionId} | Delete an execution
*ExecutionsApi* | [**delete_executions_by_ids**](docs/ExecutionsApi.md#delete_executions_by_ids) | **DELETE** /api/v1/{tenant}/executions/by-ids | Delete a list of executions
*ExecutionsApi* | [**delete_executions_by_query**](docs/ExecutionsApi.md#delete_executions_by_query) | **DELETE** /api/v1/{tenant}/executions/by-query | Delete executions filtered by query parameters
*ExecutionsApi* | [**download_file_from_execution**](docs/ExecutionsApi.md#download_file_from_execution) | **GET** /api/v1/{tenant}/executions/{executionId}/file | Download file for an execution
*ExecutionsApi* | [**execution**](docs/ExecutionsApi.md#execution) | **GET** /api/v1/{tenant}/executions/{executionId} | Get an execution
*ExecutionsApi* | [**execution_flow_graph**](docs/ExecutionsApi.md#execution_flow_graph) | **GET** /api/v1/{tenant}/executions/{executionId}/graph | Generate a graph for an execution
*ExecutionsApi* | [**file_metadatas_from_execution**](docs/ExecutionsApi.md#file_metadatas_from_execution) | **GET** /api/v1/{tenant}/executions/{executionId}/file/metas | Get file meta information for an execution
*ExecutionsApi* | [**flow_from_execution**](docs/ExecutionsApi.md#flow_from_execution) | **GET** /api/v1/{tenant}/executions/flows/{namespace}/{flowId} | Get flow information for an execution
*ExecutionsApi* | [**flow_from_execution_by_id**](docs/ExecutionsApi.md#flow_from_execution_by_id) | **GET** /api/v1/{tenant}/executions/{executionId}/flow | Get flow information for an execution
*ExecutionsApi* | [**follow_dependencies_executions**](docs/ExecutionsApi.md#follow_dependencies_executions) | **GET** /api/v1/{tenant}/executions/{executionId}/follow-dependencies | Follow all execution dependencies executions
*ExecutionsApi* | [**follow_execution**](docs/ExecutionsApi.md#follow_execution) | **GET** /api/v1/{tenant}/executions/{executionId}/follow | Follow an execution
*ExecutionsApi* | [**force_run_by_ids**](docs/ExecutionsApi.md#force_run_by_ids) | **POST** /api/v1/{tenant}/executions/force-run/by-ids | Force run a list of executions
*ExecutionsApi* | [**force_run_execution**](docs/ExecutionsApi.md#force_run_execution) | **POST** /api/v1/{tenant}/executions/{executionId}/force-run | Force run an execution
*ExecutionsApi* | [**force_run_executions_by_query**](docs/ExecutionsApi.md#force_run_executions_by_query) | **POST** /api/v1/{tenant}/executions/force-run/by-query | Force run executions filtered by query parameters
*ExecutionsApi* | [**kill_execution**](docs/ExecutionsApi.md#kill_execution) | **DELETE** /api/v1/{tenant}/executions/{executionId}/kill | Kill an execution
*ExecutionsApi* | [**kill_executions_by_ids**](docs/ExecutionsApi.md#kill_executions_by_ids) | **DELETE** /api/v1/{tenant}/executions/kill/by-ids | Kill a list of executions
*ExecutionsApi* | [**kill_executions_by_query**](docs/ExecutionsApi.md#kill_executions_by_query) | **DELETE** /api/v1/{tenant}/executions/kill/by-query | Kill executions filtered by query parameters
*ExecutionsApi* | [**latest_executions**](docs/ExecutionsApi.md#latest_executions) | **POST** /api/v1/{tenant}/executions/latest | Get the latest execution for given flows
*ExecutionsApi* | [**pause_execution**](docs/ExecutionsApi.md#pause_execution) | **POST** /api/v1/{tenant}/executions/{executionId}/pause | Pause a running execution.
*ExecutionsApi* | [**pause_executions_by_ids**](docs/ExecutionsApi.md#pause_executions_by_ids) | **POST** /api/v1/{tenant}/executions/pause/by-ids | Pause a list of running executions
*ExecutionsApi* | [**pause_executions_by_query**](docs/ExecutionsApi.md#pause_executions_by_query) | **POST** /api/v1/{tenant}/executions/pause/by-query | Pause executions filtered by query parameters
*ExecutionsApi* | [**replay_execution**](docs/ExecutionsApi.md#replay_execution) | **POST** /api/v1/{tenant}/executions/{executionId}/replay | Create a new execution from an old one and start it from a specified task run id
*ExecutionsApi* | [**replay_execution_withinputs**](docs/ExecutionsApi.md#replay_execution_withinputs) | **POST** /api/v1/{tenant}/executions/{executionId}/replay-with-inputs | Create a new execution from an old one and start it from a specified task run id
*ExecutionsApi* | [**replay_executions_by_ids**](docs/ExecutionsApi.md#replay_executions_by_ids) | **POST** /api/v1/{tenant}/executions/replay/by-ids | Create new executions from old ones. Keep the flow revision
*ExecutionsApi* | [**replay_executions_by_query**](docs/ExecutionsApi.md#replay_executions_by_query) | **POST** /api/v1/{tenant}/executions/replay/by-query | Create new executions from old ones filtered by query parameters. Keep the flow revision
*ExecutionsApi* | [**restart_execution**](docs/ExecutionsApi.md#restart_execution) | **POST** /api/v1/{tenant}/executions/{executionId}/restart | Restart a new execution from an old one
*ExecutionsApi* | [**restart_executions_by_ids**](docs/ExecutionsApi.md#restart_executions_by_ids) | **POST** /api/v1/{tenant}/executions/restart/by-ids | Restart a list of executions
*ExecutionsApi* | [**restart_executions_by_query**](docs/ExecutionsApi.md#restart_executions_by_query) | **POST** /api/v1/{tenant}/executions/restart/by-query | Restart executions filtered by query parameters
*ExecutionsApi* | [**resume_execution**](docs/ExecutionsApi.md#resume_execution) | **POST** /api/v1/{tenant}/executions/{executionId}/resume | Resume a paused execution.
*ExecutionsApi* | [**resume_executions_by_ids**](docs/ExecutionsApi.md#resume_executions_by_ids) | **POST** /api/v1/{tenant}/executions/resume/by-ids | Resume a list of paused executions
*ExecutionsApi* | [**resume_executions_by_query**](docs/ExecutionsApi.md#resume_executions_by_query) | **POST** /api/v1/{tenant}/executions/resume/by-query | Resume executions filtered by query parameters
*ExecutionsApi* | [**search_executions**](docs/ExecutionsApi.md#search_executions) | **GET** /api/v1/{tenant}/executions/search | Search for executions
*ExecutionsApi* | [**search_executions_by_flow_id**](docs/ExecutionsApi.md#search_executions_by_flow_id) | **GET** /api/v1/{tenant}/executions | Search for executions for a flow
*ExecutionsApi* | [**set_labels_on_terminated_execution**](docs/ExecutionsApi.md#set_labels_on_terminated_execution) | **POST** /api/v1/{tenant}/executions/{executionId}/labels | Add or update labels of a terminated execution
*ExecutionsApi* | [**set_labels_on_terminated_executions_by_ids**](docs/ExecutionsApi.md#set_labels_on_terminated_executions_by_ids) | **POST** /api/v1/{tenant}/executions/labels/by-ids | Set labels on a list of executions
*ExecutionsApi* | [**set_labels_on_terminated_executions_by_query**](docs/ExecutionsApi.md#set_labels_on_terminated_executions_by_query) | **POST** /api/v1/{tenant}/executions/labels/by-query | Set labels on executions filtered by query parameters
*ExecutionsApi* | [**trigger_execution_by_get_webhook**](docs/ExecutionsApi.md#trigger_execution_by_get_webhook) | **GET** /api/v1/{tenant}/executions/webhook/{namespace}/{id}/{key} | Trigger a new execution by GET webhook trigger
*ExecutionsApi* | [**unqueue_execution**](docs/ExecutionsApi.md#unqueue_execution) | **POST** /api/v1/{tenant}/executions/{executionId}/unqueue | Unqueue an execution
*ExecutionsApi* | [**unqueue_executions_by_ids**](docs/ExecutionsApi.md#unqueue_executions_by_ids) | **POST** /api/v1/{tenant}/executions/unqueue/by-ids | Unqueue a list of executions
*ExecutionsApi* | [**unqueue_executions_by_query**](docs/ExecutionsApi.md#unqueue_executions_by_query) | **POST** /api/v1/{tenant}/executions/unqueue/by-query | Unqueue executions filtered by query parameters
*ExecutionsApi* | [**update_execution_status**](docs/ExecutionsApi.md#update_execution_status) | **POST** /api/v1/{tenant}/executions/{executionId}/change-status | Change the state of an execution
*ExecutionsApi* | [**update_executions_status_by_ids**](docs/ExecutionsApi.md#update_executions_status_by_ids) | **POST** /api/v1/{tenant}/executions/change-status/by-ids | Change executions state by id
*ExecutionsApi* | [**update_executions_status_by_query**](docs/ExecutionsApi.md#update_executions_status_by_query) | **POST** /api/v1/{tenant}/executions/change-status/by-query | Change executions state by query parameters
*ExecutionsApi* | [**update_task_run_state**](docs/ExecutionsApi.md#update_task_run_state) | **POST** /api/v1/{tenant}/executions/{executionId}/state | Change state for a taskrun in an execution
*FlowsApi* | [**bulk_update_flows**](docs/FlowsApi.md#bulk_update_flows) | **POST** /api/v1/{tenant}/flows/bulk | Update from multiples yaml sources
*FlowsApi* | [**create_flow**](docs/FlowsApi.md#create_flow) | **POST** /api/v1/{tenant}/flows | Create a flow from yaml source
*FlowsApi* | [**delete_flow**](docs/FlowsApi.md#delete_flow) | **DELETE** /api/v1/{tenant}/flows/{namespace}/{id} | Delete a flow
*FlowsApi* | [**delete_flows_by_ids**](docs/FlowsApi.md#delete_flows_by_ids) | **DELETE** /api/v1/{tenant}/flows/delete/by-ids | Delete flows by their IDs.
*FlowsApi* | [**delete_flows_by_query**](docs/FlowsApi.md#delete_flows_by_query) | **DELETE** /api/v1/{tenant}/flows/delete/by-query | Delete flows returned by the query parameters.
*FlowsApi* | [**delete_revisions**](docs/FlowsApi.md#delete_revisions) | **DELETE** /api/v1/{tenant}/flows/{namespace}/{id}/revisions | Delete revisions for a flow
*FlowsApi* | [**disable_flows_by_ids**](docs/FlowsApi.md#disable_flows_by_ids) | **POST** /api/v1/{tenant}/flows/disable/by-ids | Disable flows by their IDs.
*FlowsApi* | [**disable_flows_by_query**](docs/FlowsApi.md#disable_flows_by_query) | **POST** /api/v1/{tenant}/flows/disable/by-query | Disable flows returned by the query parameters.
*FlowsApi* | [**enable_flows_by_ids**](docs/FlowsApi.md#enable_flows_by_ids) | **POST** /api/v1/{tenant}/flows/enable/by-ids | Enable flows by their IDs.
*FlowsApi* | [**enable_flows_by_query**](docs/FlowsApi.md#enable_flows_by_query) | **POST** /api/v1/{tenant}/flows/enable/by-query | Enable flows returned by the query parameters.
*FlowsApi* | [**export_flows_by_ids**](docs/FlowsApi.md#export_flows_by_ids) | **POST** /api/v1/{tenant}/flows/export/by-ids | Export flows as a ZIP archive of yaml sources.
*FlowsApi* | [**export_flows_by_query**](docs/FlowsApi.md#export_flows_by_query) | **GET** /api/v1/{tenant}/flows/export/by-query | Export flows as a ZIP archive of yaml sources.
*FlowsApi* | [**flow**](docs/FlowsApi.md#flow) | **GET** /api/v1/{tenant}/flows/{namespace}/{id} | Get a flow
*FlowsApi* | [**flow_dependencies**](docs/FlowsApi.md#flow_dependencies) | **GET** /api/v1/{tenant}/flows/{namespace}/{id}/dependencies | Get flow dependencies
*FlowsApi* | [**flow_dependencies_from_namespace**](docs/FlowsApi.md#flow_dependencies_from_namespace) | **GET** /api/v1/{tenant}/namespaces/{namespace}/dependencies | Retrieve flow dependencies
*FlowsApi* | [**generate_flow_graph**](docs/FlowsApi.md#generate_flow_graph) | **GET** /api/v1/{tenant}/flows/{namespace}/{id}/graph | Generate a graph for a flow
*FlowsApi* | [**generate_flow_graph_from_source**](docs/FlowsApi.md#generate_flow_graph_from_source) | **POST** /api/v1/{tenant}/flows/graph | Generate a graph for a flow source
*FlowsApi* | [**import_flows**](docs/FlowsApi.md#import_flows) | **POST** /api/v1/{tenant}/flows/import | Import flows as a ZIP archive of yaml sources or a multi-object YAML file. When sending a YAML file that contains one or more flows, a list of indexes is returned. When sending a ZIP archive, a list of files that couldn't be imported is returned.
*FlowsApi* | [**list_distinct_namespaces**](docs/FlowsApi.md#list_distinct_namespaces) | **GET** /api/v1/{tenant}/flows/distinct-namespaces | List all distinct namespaces
*FlowsApi* | [**list_flow_revisions**](docs/FlowsApi.md#list_flow_revisions) | **GET** /api/v1/{tenant}/flows/{namespace}/{id}/revisions | Get revisions for a flow
*FlowsApi* | [**list_flows_by_namespace**](docs/FlowsApi.md#list_flows_by_namespace) | **GET** /api/v1/{tenant}/flows/{namespace} | Retrieve all flows from a given namespace
*FlowsApi* | [**search_concurrency_limits**](docs/FlowsApi.md#search_concurrency_limits) | **GET** /api/v1/{tenant}/concurrency-limit/search | Search for flow concurrency limits
*FlowsApi* | [**search_flows**](docs/FlowsApi.md#search_flows) | **GET** /api/v1/{tenant}/flows/search | Search for flows
*FlowsApi* | [**search_flows_by_source_code**](docs/FlowsApi.md#search_flows_by_source_code) | **GET** /api/v1/{tenant}/flows/source | Search for flows source code
*FlowsApi* | [**task_from_flow**](docs/FlowsApi.md#task_from_flow) | **GET** /api/v1/{tenant}/flows/{namespace}/{id}/tasks/{taskId} | Get a flow task
*FlowsApi* | [**update_concurrency_limit**](docs/FlowsApi.md#update_concurrency_limit) | **PUT** /api/v1/{tenant}/concurrency-limit/{namespace}/{flowId} | Update a flow concurrency limit
*FlowsApi* | [**update_flow**](docs/FlowsApi.md#update_flow) | **PUT** /api/v1/{tenant}/flows/{namespace}/{id} | Update a flow
*FlowsApi* | [**update_flows_in_namespace**](docs/FlowsApi.md#update_flows_in_namespace) | **POST** /api/v1/{tenant}/flows/{namespace} | Update a complete namespace from yaml source
*FlowsApi* | [**update_task**](docs/FlowsApi.md#update_task) | **PATCH** /api/v1/{tenant}/flows/{namespace}/{id}/{taskId} | Update a single task on a flow
*FlowsApi* | [**validate_flows**](docs/FlowsApi.md#validate_flows) | **POST** /api/v1/{tenant}/flows/validate | Validate a list of flows
*FlowsApi* | [**validate_task**](docs/FlowsApi.md#validate_task) | **POST** /api/v1/{tenant}/flows/validate/task | Validate a task
*FlowsApi* | [**validate_trigger**](docs/FlowsApi.md#validate_trigger) | **POST** /api/v1/{tenant}/flows/validate/trigger | Validate trigger
*GroupsApi* | [**add_user_to_group**](docs/GroupsApi.md#add_user_to_group) | **PUT** /api/v1/{tenant}/groups/{id}/members/{userId} | Add a user to a group
*GroupsApi* | [**autocomplete_groups**](docs/GroupsApi.md#autocomplete_groups) | **POST** /api/v1/{tenant}/groups/autocomplete | List groups for autocomplete
*GroupsApi* | [**create_group**](docs/GroupsApi.md#create_group) | **POST** /api/v1/{tenant}/groups | Create a group
*GroupsApi* | [**delete_group**](docs/GroupsApi.md#delete_group) | **DELETE** /api/v1/{tenant}/groups/{id} | Delete a group
*GroupsApi* | [**delete_user_from_group**](docs/GroupsApi.md#delete_user_from_group) | **DELETE** /api/v1/{tenant}/groups/{id}/members/{userId} | Remove a user from a group
*GroupsApi* | [**group**](docs/GroupsApi.md#group) | **GET** /api/v1/{tenant}/groups/{id} | Retrieve a group
*GroupsApi* | [**list_group_ids**](docs/GroupsApi.md#list_group_ids) | **POST** /api/v1/{tenant}/groups/ids | List groups by ids
*GroupsApi* | [**search_group_members**](docs/GroupsApi.md#search_group_members) | **GET** /api/v1/{tenant}/groups/{id}/members | Search for users in a group
*GroupsApi* | [**search_groups**](docs/GroupsApi.md#search_groups) | **GET** /api/v1/{tenant}/groups/search | Search for groups
*GroupsApi* | [**set_user_membership_for_group**](docs/GroupsApi.md#set_user_membership_for_group) | **PUT** /api/v1/{tenant}/groups/{id}/members/membership/{userId} | Update a user's membership type in a group
*GroupsApi* | [**update_group**](docs/GroupsApi.md#update_group) | **PUT** /api/v1/{tenant}/groups/{id} | Update a group
*KVApi* | [**delete_key_value**](docs/KVApi.md#delete_key_value) | **DELETE** /api/v1/{tenant}/namespaces/{namespace}/kv/{key} | Delete a key-value pair
*KVApi* | [**delete_key_values**](docs/KVApi.md#delete_key_values) | **DELETE** /api/v1/{tenant}/namespaces/{namespace}/kv | Bulk-delete multiple key/value pairs from the given namespace.
*KVApi* | [**key_value**](docs/KVApi.md#key_value) | **GET** /api/v1/{tenant}/namespaces/{namespace}/kv/{key} | Get value for a key
*KVApi* | [**list_all_keys**](docs/KVApi.md#list_all_keys) | **GET** /api/v1/{tenant}/kv | List all keys
*KVApi* | [**list_keys**](docs/KVApi.md#list_keys) | **GET** /api/v1/{tenant}/namespaces/{namespace}/kv | List all keys for a namespace
*KVApi* | [**list_keys_with_inheritence**](docs/KVApi.md#list_keys_with_inheritence) | **GET** /api/v1/{tenant}/namespaces/{namespace}/kv/inheritance | List all keys for inherited namespaces
*KVApi* | [**set_key_value**](docs/KVApi.md#set_key_value) | **PUT** /api/v1/{tenant}/namespaces/{namespace}/kv/{key} | Set a key-value pair in the store
*NamespacesApi* | [**autocomplete_namespaces**](docs/NamespacesApi.md#autocomplete_namespaces) | **POST** /api/v1/{tenant}/namespaces/autocomplete | List namespaces for autocomplete
*NamespacesApi* | [**create_namespace**](docs/NamespacesApi.md#create_namespace) | **POST** /api/v1/{tenant}/namespaces | Create a namespace
*NamespacesApi* | [**delete_namespace**](docs/NamespacesApi.md#delete_namespace) | **DELETE** /api/v1/{tenant}/namespaces/{id} | Delete a namespace
*NamespacesApi* | [**delete_secret**](docs/NamespacesApi.md#delete_secret) | **DELETE** /api/v1/{tenant}/namespaces/{namespace}/secrets/{key} | Delete a secret for a namespace
*NamespacesApi* | [**inherited_plugin_defaults**](docs/NamespacesApi.md#inherited_plugin_defaults) | **GET** /api/v1/{tenant}/namespaces/{id}/inherited-plugindefaults | List inherited plugin defaults
*NamespacesApi* | [**inherited_secrets**](docs/NamespacesApi.md#inherited_secrets) | **GET** /api/v1/{tenant}/namespaces/{namespace}/inherited-secrets | List inherited secrets
*NamespacesApi* | [**inherited_variables**](docs/NamespacesApi.md#inherited_variables) | **GET** /api/v1/{tenant}/namespaces/{id}/inherited-variables | List inherited variables
*NamespacesApi* | [**list_namespace_secrets**](docs/NamespacesApi.md#list_namespace_secrets) | **GET** /api/v1/{tenant}/namespaces/{namespace}/secrets | Get secrets for a namespace
*NamespacesApi* | [**namespace**](docs/NamespacesApi.md#namespace) | **GET** /api/v1/{tenant}/namespaces/{id} | Get a namespace
*NamespacesApi* | [**patch_secret**](docs/NamespacesApi.md#patch_secret) | **PATCH** /api/v1/{tenant}/namespaces/{namespace}/secrets/{key} | Patch a secret metadata for a namespace
*NamespacesApi* | [**put_secrets**](docs/NamespacesApi.md#put_secrets) | **PUT** /api/v1/{tenant}/namespaces/{namespace}/secrets | Update secrets for a namespace
*NamespacesApi* | [**search_namespaces**](docs/NamespacesApi.md#search_namespaces) | **GET** /api/v1/{tenant}/namespaces/search | Search for namespaces
*NamespacesApi* | [**update_namespace**](docs/NamespacesApi.md#update_namespace) | **PUT** /api/v1/{tenant}/namespaces/{id} | Update a namespace
*RolesApi* | [**autocomplete_roles**](docs/RolesApi.md#autocomplete_roles) | **POST** /api/v1/{tenant}/roles/autocomplete | List roles for autocomplete
*RolesApi* | [**create_role**](docs/RolesApi.md#create_role) | **POST** /api/v1/{tenant}/roles | Create a role
*RolesApi* | [**delete_role**](docs/RolesApi.md#delete_role) | **DELETE** /api/v1/{tenant}/roles/{id} | Delete a role
*RolesApi* | [**list_roles_from_given_ids**](docs/RolesApi.md#list_roles_from_given_ids) | **POST** /api/v1/{tenant}/roles/ids | List roles by ids
*RolesApi* | [**role**](docs/RolesApi.md#role) | **GET** /api/v1/{tenant}/roles/{id} | Retrieve a role
*RolesApi* | [**search_roles**](docs/RolesApi.md#search_roles) | **GET** /api/v1/{tenant}/roles/search | Search for roles
*RolesApi* | [**update_role**](docs/RolesApi.md#update_role) | **PUT** /api/v1/{tenant}/roles/{id} | Update a role
*ServiceAccountApi* | [**create_api_tokens_for_service_account**](docs/ServiceAccountApi.md#create_api_tokens_for_service_account) | **POST** /api/v1/service-accounts/{id}/api-tokens | Create new API Token for a specific service account
*ServiceAccountApi* | [**create_api_tokens_for_service_account_with_tenant**](docs/ServiceAccountApi.md#create_api_tokens_for_service_account_with_tenant) | **POST** /api/v1/{tenant}/service-accounts/{id}/api-tokens | Create new API Token for a specific service account
*ServiceAccountApi* | [**create_service_account**](docs/ServiceAccountApi.md#create_service_account) | **POST** /api/v1/service-accounts | Create a service account
*ServiceAccountApi* | [**create_service_account_for_tenant**](docs/ServiceAccountApi.md#create_service_account_for_tenant) | **POST** /api/v1/{tenant}/service-accounts | Create a service account for the given tenant
*ServiceAccountApi* | [**delete_api_token_for_service_account**](docs/ServiceAccountApi.md#delete_api_token_for_service_account) | **DELETE** /api/v1/service-accounts/{id}/api-tokens/{tokenId} | Delete an API Token for specific service account and token id
*ServiceAccountApi* | [**delete_api_token_for_service_account_with_tenant**](docs/ServiceAccountApi.md#delete_api_token_for_service_account_with_tenant) | **DELETE** /api/v1/{tenant}/service-accounts/{id}/api-tokens/{tokenId} | Delete an API Token for specific service account and token id
*ServiceAccountApi* | [**delete_service_account**](docs/ServiceAccountApi.md#delete_service_account) | **DELETE** /api/v1/service-accounts/{id} | Delete a service account
*ServiceAccountApi* | [**delete_service_account_for_tenant**](docs/ServiceAccountApi.md#delete_service_account_for_tenant) | **DELETE** /api/v1/{tenant}/service-accounts/{id} | Delete a service account
*ServiceAccountApi* | [**list_api_tokens_for_service_account**](docs/ServiceAccountApi.md#list_api_tokens_for_service_account) | **GET** /api/v1/service-accounts/{id}/api-tokens | List API tokens for a specific service account
*ServiceAccountApi* | [**list_api_tokens_for_service_account_with_tenant**](docs/ServiceAccountApi.md#list_api_tokens_for_service_account_with_tenant) | **GET** /api/v1/{tenant}/service-accounts/{id}/api-tokens | List API tokens for a specific service account
*ServiceAccountApi* | [**list_service_accounts**](docs/ServiceAccountApi.md#list_service_accounts) | **GET** /api/v1/service-accounts | List service accounts. Superadmin-only.
*ServiceAccountApi* | [**patch_service_account_details**](docs/ServiceAccountApi.md#patch_service_account_details) | **PATCH** /api/v1/service-accounts/{id} | Update service account details
*ServiceAccountApi* | [**patch_service_account_super_admin**](docs/ServiceAccountApi.md#patch_service_account_super_admin) | **PATCH** /api/v1/service-accounts/{id}/superadmin | Update service account superadmin privileges
*ServiceAccountApi* | [**service_account**](docs/ServiceAccountApi.md#service_account) | **GET** /api/v1/service-accounts/{id} | Get a service account
*ServiceAccountApi* | [**service_account_for_tenant**](docs/ServiceAccountApi.md#service_account_for_tenant) | **GET** /api/v1/{tenant}/service-accounts/{id} | Retrieve a service account
*ServiceAccountApi* | [**update_service_account**](docs/ServiceAccountApi.md#update_service_account) | **PUT** /api/v1/{tenant}/service-accounts/{id} | Update a user service account
*TriggersApi* | [**delete_backfill**](docs/TriggersApi.md#delete_backfill) | **POST** /api/v1/{tenant}/triggers/backfill/delete | Delete a backfill
*TriggersApi* | [**delete_backfill_by_ids**](docs/TriggersApi.md#delete_backfill_by_ids) | **POST** /api/v1/{tenant}/triggers/backfill/delete/by-triggers | Delete backfill for given triggers
*TriggersApi* | [**delete_backfill_by_query**](docs/TriggersApi.md#delete_backfill_by_query) | **POST** /api/v1/{tenant}/triggers/backfill/delete/by-query | Delete backfill for triggers matching the query parameters
*TriggersApi* | [**delete_trigger**](docs/TriggersApi.md#delete_trigger) | **DELETE** /api/v1/{tenant}/triggers/{namespace}/{flowId}/{triggerId} | Delete a trigger
*TriggersApi* | [**delete_triggers_by_ids**](docs/TriggersApi.md#delete_triggers_by_ids) | **DELETE** /api/v1/{tenant}/triggers/delete/by-triggers | Delete given triggers
*TriggersApi* | [**delete_triggers_by_query**](docs/TriggersApi.md#delete_triggers_by_query) | **DELETE** /api/v1/{tenant}/triggers/delete/by-query | Delete triggers by query parameters
*TriggersApi* | [**disabled_triggers_by_ids**](docs/TriggersApi.md#disabled_triggers_by_ids) | **POST** /api/v1/{tenant}/triggers/set-disabled/by-triggers | Disable/enable given triggers
*TriggersApi* | [**disabled_triggers_by_query**](docs/TriggersApi.md#disabled_triggers_by_query) | **POST** /api/v1/{tenant}/triggers/set-disabled/by-query | Disable/enable triggers by query parameters
*TriggersApi* | [**export_triggers**](docs/TriggersApi.md#export_triggers) | **GET** /api/v1/{tenant}/triggers/export/by-query/csv | Export all triggers as a streamed CSV file
*TriggersApi* | [**pause_backfill**](docs/TriggersApi.md#pause_backfill) | **PUT** /api/v1/{tenant}/triggers/backfill/pause | Pause a backfill
*TriggersApi* | [**pause_backfill_by_ids**](docs/TriggersApi.md#pause_backfill_by_ids) | **POST** /api/v1/{tenant}/triggers/backfill/pause/by-triggers | Pause backfill for given triggers
*TriggersApi* | [**pause_backfill_by_query**](docs/TriggersApi.md#pause_backfill_by_query) | **POST** /api/v1/{tenant}/triggers/backfill/pause/by-query | Pause backfill for triggers matching the query parameters
*TriggersApi* | [**restart_trigger**](docs/TriggersApi.md#restart_trigger) | **POST** /api/v1/{tenant}/triggers/{namespace}/{flowId}/{triggerId}/restart | Restart a trigger
*TriggersApi* | [**search_triggers**](docs/TriggersApi.md#search_triggers) | **GET** /api/v1/{tenant}/triggers/search | Search for triggers
*TriggersApi* | [**search_triggers_for_flow**](docs/TriggersApi.md#search_triggers_for_flow) | **GET** /api/v1/{tenant}/triggers/{namespace}/{flowId} | Get all triggers for a flow
*TriggersApi* | [**unlock_trigger**](docs/TriggersApi.md#unlock_trigger) | **POST** /api/v1/{tenant}/triggers/{namespace}/{flowId}/{triggerId}/unlock | Unlock a trigger
*TriggersApi* | [**unlock_triggers_by_ids**](docs/TriggersApi.md#unlock_triggers_by_ids) | **POST** /api/v1/{tenant}/triggers/unlock/by-triggers | Unlock given triggers
*TriggersApi* | [**unlock_triggers_by_query**](docs/TriggersApi.md#unlock_triggers_by_query) | **POST** /api/v1/{tenant}/triggers/unlock/by-query | Unlock triggers by query parameters
*TriggersApi* | [**unpause_backfill**](docs/TriggersApi.md#unpause_backfill) | **PUT** /api/v1/{tenant}/triggers/backfill/unpause | Unpause a backfill
*TriggersApi* | [**unpause_backfill_by_ids**](docs/TriggersApi.md#unpause_backfill_by_ids) | **POST** /api/v1/{tenant}/triggers/backfill/unpause/by-triggers | Unpause backfill for given triggers
*TriggersApi* | [**unpause_backfill_by_query**](docs/TriggersApi.md#unpause_backfill_by_query) | **POST** /api/v1/{tenant}/triggers/backfill/unpause/by-query | Unpause backfill for triggers matching the query parameters
*TriggersApi* | [**update_trigger**](docs/TriggersApi.md#update_trigger) | **PUT** /api/v1/{tenant}/triggers | Update a trigger
*UsersApi* | [**autocomplete_users**](docs/UsersApi.md#autocomplete_users) | **POST** /api/v1/{tenant}/tenant-access/autocomplete | List users for autocomplete
*UsersApi* | [**create_api_tokens_for_user**](docs/UsersApi.md#create_api_tokens_for_user) | **POST** /api/v1/users/{id}/api-tokens | Create new API Token for a specific user
*UsersApi* | [**create_user**](docs/UsersApi.md#create_user) | **POST** /api/v1/users | Create a new user account
*UsersApi* | [**delete_api_token_for_user**](docs/UsersApi.md#delete_api_token_for_user) | **DELETE** /api/v1/users/{id}/api-tokens/{tokenId} | Delete an API Token for specific user and token id
*UsersApi* | [**delete_refresh_token**](docs/UsersApi.md#delete_refresh_token) | **DELETE** /api/v1/users/{id}/refresh-token | Delete a user refresh token
*UsersApi* | [**delete_user**](docs/UsersApi.md#delete_user) | **DELETE** /api/v1/users/{id} | Delete a user
*UsersApi* | [**delete_user_auth_method**](docs/UsersApi.md#delete_user_auth_method) | **DELETE** /api/v1/users/{id}/auths/{auth} | Delete a user authentication method
*UsersApi* | [**impersonate**](docs/UsersApi.md#impersonate) | **POST** /api/v1/users/{id}/impersonate | Impersonate a user
*UsersApi* | [**list_api_tokens_for_user**](docs/UsersApi.md#list_api_tokens_for_user) | **GET** /api/v1/users/{id}/api-tokens | List API tokens for a specific user
*UsersApi* | [**list_users**](docs/UsersApi.md#list_users) | **GET** /api/v1/users | Retrieve users
*UsersApi* | [**patch_user**](docs/UsersApi.md#patch_user) | **PATCH** /api/v1/users/{id} | Update user details
*UsersApi* | [**patch_user_demo**](docs/UsersApi.md#patch_user_demo) | **PATCH** /api/v1/users/{id}/restricted | Update user demo
*UsersApi* | [**patch_user_password**](docs/UsersApi.md#patch_user_password) | **PATCH** /api/v1/users/{id}/password | Update user password
*UsersApi* | [**patch_user_super_admin**](docs/UsersApi.md#patch_user_super_admin) | **PATCH** /api/v1/users/{id}/superadmin | Update user superadmin privileges
*UsersApi* | [**update_current_user_password**](docs/UsersApi.md#update_current_user_password) | **PUT** /api/v1/me/password | Update authenticated user password
*UsersApi* | [**update_user**](docs/UsersApi.md#update_user) | **PUT** /api/v1/users/{id} | Update a user account
*UsersApi* | [**update_user_groups**](docs/UsersApi.md#update_user_groups) | **PUT** /api/v1/{tenant}/users/{id}/groups | Update the list of groups a user belongs to for the given tenant
*UsersApi* | [**user**](docs/UsersApi.md#user) | **GET** /api/v1/users/{id} | Get a user
## Documentation For Models
- [AbstractFlow](docs/AbstractFlow.md)
- [AbstractGraph](docs/AbstractGraph.md)
- [AbstractGraphBranchType](docs/AbstractGraphBranchType.md)
- [AbstractTrigger](docs/AbstractTrigger.md)
- [AbstractTriggerForExecution](docs/AbstractTriggerForExecution.md)
- [AbstractUser](docs/AbstractUser.md)
- [AbstractUserTenantIdentityProvider](docs/AbstractUserTenantIdentityProvider.md)
- [Action](docs/Action.md)
- [ApiAuth](docs/ApiAuth.md)
- [ApiAutocomplete](docs/ApiAutocomplete.md)
- [ApiGroupSummary](docs/ApiGroupSummary.md)
- [ApiIds](docs/ApiIds.md)
- [ApiPatchSuperAdminRequest](docs/ApiPatchSuperAdminRequest.md)
- [ApiRoleSummary](docs/ApiRoleSummary.md)
- [ApiSecretListResponseApiSecretMeta](docs/ApiSecretListResponseApiSecretMeta.md)
- [ApiSecretMeta](docs/ApiSecretMeta.md)
- [ApiSecretMetaEE](docs/ApiSecretMetaEE.md)
- [ApiSecretTag](docs/ApiSecretTag.md)
- [ApiSecretValue](docs/ApiSecretValue.md)
- [ApiTenant](docs/ApiTenant.md)
- [ApiTenantSummary](docs/ApiTenantSummary.md)
- [ApiToken](docs/ApiToken.md)
- [ApiTokenList](docs/ApiTokenList.md)
- [ApiUser](docs/ApiUser.md)
- [AppsControllerApiApp](docs/AppsControllerApiApp.md)
- [AppsControllerApiAppCatalogItem](docs/AppsControllerApiAppCatalogItem.md)
- [AppsControllerApiAppSource](docs/AppsControllerApiAppSource.md)
- [AppsControllerApiAppTags](docs/AppsControllerApiAppTags.md)
- [AppsControllerApiBulkImportResponse](docs/AppsControllerApiBulkImportResponse.md)
- [AppsControllerApiBulkImportResponseError](docs/AppsControllerApiBulkImportResponseError.md)
- [AppsControllerApiBulkOperationRequest](docs/AppsControllerApiBulkOperationRequest.md)
- [Assertion](docs/Assertion.md)
- [AssertionResult](docs/AssertionResult.md)
- [AssertionRunError](docs/AssertionRunError.md)
- [Asset](docs/Asset.md)
- [AssetIdentifier](docs/AssetIdentifier.md)
- [AssetTopologyGraph](docs/AssetTopologyGraph.md)
- [AssetTopologyGraphEdge](docs/AssetTopologyGraphEdge.md)
- [AssetTopologyGraphNode](docs/AssetTopologyGraphNode.md)
- [AssetTopologyGraphNodeNodeType](docs/AssetTopologyGraphNodeNodeType.md)
- [AssetsControllerApiAsset](docs/AssetsControllerApiAsset.md)
- [AssetsControllerApiAssetUsage](docs/AssetsControllerApiAssetUsage.md)
- [AssetsDeclaration](docs/AssetsDeclaration.md)
- [AssetsInOut](docs/AssetsInOut.md)
- [AttributeReference](docs/AttributeReference.md)
- [AuditLog](docs/AuditLog.md)
- [AuditLogControllerApiAuditLogItem](docs/AuditLogControllerApiAuditLogItem.md)
- [AuditLogControllerAuditLogDiff](docs/AuditLogControllerAuditLogDiff.md)
- [AuditLogControllerAuditLogOption](docs/AuditLogControllerAuditLogOption.md)
- [AuditLogControllerFindRequest](docs/AuditLogControllerFindRequest.md)
- [AuditLogDetail](docs/AuditLogDetail.md)
- [AuthControllerAuth](docs/AuthControllerAuth.md)
- [AuthControllerInvitationUserRequest](docs/AuthControllerInvitationUserRequest.md)
- [AuthControllerResetPasswordRequest](docs/AuthControllerResetPasswordRequest.md)
- [Backfill](docs/Backfill.md)
- [Banner](docs/Banner.md)
- [BannerType](docs/BannerType.md)
- [BaseAuditLog](docs/BaseAuditLog.md)
- [BaseResourcePatchRequest](docs/BaseResourcePatchRequest.md)
- [BaseResourceScimResource](docs/BaseResourceScimResource.md)
- [BaseResourceSearchRequest](docs/BaseResourceSearchRequest.md)
- [Binding](docs/Binding.md)
- [BindingType](docs/BindingType.md)
- [Blueprint](docs/Blueprint.md)
- [BlueprintControllerApiBlueprintItem](docs/BlueprintControllerApiBlueprintItem.md)
- [BlueprintControllerApiBlueprintItemWithSource](docs/BlueprintControllerApiBlueprintItemWithSource.md)
- [BlueprintControllerApiBlueprintTagItem](docs/BlueprintControllerApiBlueprintTagItem.md)
- [BlueprintControllerApiFlowBlueprint](docs/BlueprintControllerApiFlowBlueprint.md)
- [BlueprintControllerFlowBlueprintCreateOrUpdate](docs/BlueprintControllerFlowBlueprintCreateOrUpdate.md)
- [BlueprintControllerKind](docs/BlueprintControllerKind.md)
- [BlueprintControllerUseBlueprintTemplateRequest](docs/BlueprintControllerUseBlueprintTemplateRequest.md)
- [BlueprintControllerUseBlueprintTemplateResponse](docs/BlueprintControllerUseBlueprintTemplateResponse.md)
- [BlueprintTemplate](docs/BlueprintTemplate.md)
- [BlueprintWithFlowEntity](docs/BlueprintWithFlowEntity.md)
- [Breakpoint](docs/Breakpoint.md)
- [BulkErrorResponse](docs/BulkErrorResponse.md)
- [BulkImportAppsRequest](docs/BulkImportAppsRequest.md)
- [BulkResponse](docs/BulkResponse.md)
- [Cache](docs/Cache.md)
- [ChartChartOption](docs/ChartChartOption.md)
- [ChartFiltersOverrides](docs/ChartFiltersOverrides.md)
- [Check](docs/Check.md)
- [CheckBehavior](docs/CheckBehavior.md)
- [CheckStyle](docs/CheckStyle.md)
- [Concurrency](docs/Concurrency.md)
- [ConcurrencyBehavior](docs/ConcurrencyBehavior.md)
- [ConcurrencyLimit](docs/ConcurrencyLimit.md)
- [Condition](docs/Condition.md)
- [CreateApiTokenRequest](docs/CreateApiTokenRequest.md)
- [CreateApiTokenResponse](docs/CreateApiTokenResponse.md)
- [CreateNamespaceFileRequest](docs/CreateNamespaceFileRequest.md)
- [CreateSecurityIntegrationRequest](docs/CreateSecurityIntegrationRequest.md)
- [CrudEventType](docs/CrudEventType.md)
- [Dashboard](docs/Dashboard.md)
- [DashboardControllerPreviewRequest](docs/DashboardControllerPreviewRequest.md)
- [DeleteTriggersByQueryRequest](docs/DeleteTriggersByQueryRequest.md)
- [DeletedInterface](docs/DeletedInterface.md)
- [DependsOn](docs/DependsOn.md)
- [DocumentationWithSchema](docs/DocumentationWithSchema.md)
- [EditionProviderEdition](docs/EditionProviderEdition.md)
- [Email](docs/Email.md)
- [EventExecution](docs/EventExecution.md)
- [EventExecutionStatusEvent](docs/EventExecutionStatusEvent.md)
- [ExecutableTaskSubflowId](docs/ExecutableTaskSubflowId.md)
- [Execution](docs/Execution.md)
- [ExecutionControllerExecutionResponse](docs/ExecutionControllerExecutionResponse.md)
- [ExecutionControllerLastExecutionResponse](docs/ExecutionControllerLastExecutionResponse.md)
- [ExecutionControllerSetLabelsByIdsRequest](docs/ExecutionControllerSetLabelsByIdsRequest.md)
- [ExecutionControllerStateRequest](docs/ExecutionControllerStateRequest.md)
- [ExecutionControllerWebhookResponse](docs/ExecutionControllerWebhookResponse.md)
- [ExecutionKind](docs/ExecutionKind.md)
- [ExecutionMetadata](docs/ExecutionMetadata.md)
- [ExecutionRepositoryInterfaceFlowFilter](docs/ExecutionRepositoryInterfaceFlowFilter.md)
- [ExecutionStatusEvent](docs/ExecutionStatusEvent.md)
- [ExecutionTrigger](docs/ExecutionTrigger.md)
- [FileAttributes](docs/FileAttributes.md)
- [FileAttributesFileType](docs/FileAttributesFileType.md)
- [FileMetas](docs/FileMetas.md)
- [Filter](docs/Filter.md)
- [Fixtures](docs/Fixtures.md)
- [Flow](docs/Flow.md)
- [FlowControllerTaskValidati | text/markdown | OpenAPI Generator community | OpenAPI Generator Community <team@openapitools.org> | null | null | null | OpenAPI, OpenAPI-Generator, Kestra EE | [] | [] | null | null | >=3.9 | [] | [] | [] | [
"requests>=2.32.5",
"sseclient-py>=1.8.0",
"urllib3<3.0.0,>=2.1.0",
"python-dateutil>=2.8.2",
"pydantic>=2",
"typing-extensions>=4.7.1"
] | [] | [] | [] | [
"Repository, https://github.com/GIT_USER_ID/GIT_REPO_ID"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-18T09:48:20.930702 | kestrapy-1.0.9.tar.gz | 205,959 | b9/3b/84845dbb19c3e3f74776eb5df22f829d6b72f3d2c2efd0edda2f671b1602/kestrapy-1.0.9.tar.gz | source | sdist | null | false | b2db6b83930b701cab87aa8243ef78cb | d347aad0c5e9658e3cca6f3158e38746f3868bc200db1340cb25afe3cafd6db3 | b93b84845dbb19c3e3f74776eb5df22f829d6b72f3d2c2efd0edda2f671b1602 | null | [] | 238 |
2.4 | swarmauri_measurement_tokencountestimator | 0.9.3.dev5 | This repository includes an example of a First Class Swarmauri Example. | 
<p align="center">
<a href="https://pypi.org/project/swarmauri_measurement_tokencountestimator/">
<img src="https://img.shields.io/pypi/dm/swarmauri_measurement_tokencountestimator" alt="PyPI - Downloads"/></a>
<a href="https://hits.sh/github.com/swarmauri/swarmauri-sdk/tree/master/pkgs/community/swarmauri_measurement_tokencountestimator/">
<img alt="Hits" src="https://hits.sh/github.com/swarmauri/swarmauri-sdk/tree/master/pkgs/community/swarmauri_measurement_tokencountestimator.svg"/></a>
<a href="https://pypi.org/project/swarmauri_measurement_tokencountestimator/">
<img src="https://img.shields.io/pypi/pyversions/swarmauri_measurement_tokencountestimator" alt="PyPI - Python Version"/></a>
<a href="https://pypi.org/project/swarmauri_measurement_tokencountestimator/">
<img src="https://img.shields.io/pypi/l/swarmauri_measurement_tokencountestimator" alt="PyPI - License"/></a>
<a href="https://pypi.org/project/swarmauri_measurement_tokencountestimator/">
<img src="https://img.shields.io/pypi/v/swarmauri_measurement_tokencountestimator?label=swarmauri_measurement_tokencountestimator&color=green" alt="PyPI - swarmauri_measurement_tokencountestimator"/></a>
</p>
---
# Swarmauri Measurement Token Count Estimator
Token-count measurement plugin for Swarmauri pipelines. Uses OpenAI's `tiktoken` library to estimate how many tokens a piece of text will consume given a specific tokenizer (default `cl100k_base`).
## Features
- Implements the `MeasurementBase` API to slot into Swarmauri observability flows.
- Wraps `tiktoken` encoders for quick token-count estimation ahead of LLM calls.
- Configurable tokenizer name so you can match counts to different model families.
## Prerequisites
- Python 3.10, 3.11, or 3.12.
- `tiktoken` installed (pulled in automatically as a dependency).
- If you use custom encodings, ensure they are registered with `tiktoken` before invoking the measurement.
## Installation
```bash
# pip
pip install swarmauri_measurement_tokencountestimator
# poetry
poetry add swarmauri_measurement_tokencountestimator
# uv (pyproject-based projects)
uv add swarmauri_measurement_tokencountestimator
```
## Quickstart
```python
from swarmauri_measurement_tokencountestimator import TokenCountEstimatorMeasurement
measurement = TokenCountEstimatorMeasurement()
text = "Lorem ipsum odor amet, consectetuer adipiscing elit."
count = measurement.calculate(text)
print(f"Tokens (cl100k_base): {count}")
```
## Switching Encodings
```python
from swarmauri_measurement_tokencountestimator import TokenCountEstimatorMeasurement
text = "Swarmauri agents coordinate over shared memory"
measurement = TokenCountEstimatorMeasurement()
for encoding in ["cl100k_base", "o200k_base", "p50k_base"]:
    print(encoding, measurement.calculate(text, encoding=encoding))
```
Use this to check token budgets across different model families before dispatching a request.
## Handling Unknown Encodings
```python
from swarmauri_measurement_tokencountestimator import TokenCountEstimatorMeasurement
measurement = TokenCountEstimatorMeasurement()
invalid_count = measurement.calculate("Hello", encoding="not-real")
print(invalid_count) # Returns None and prints an error message
```
Wrap the call if you prefer structured error handling:
```python
try:
    count = measurement.calculate("Hello", encoding="not-real")
    if count is None:
        raise ValueError("Unsupported encoding")
except ValueError:
    # fallback logic
    pass
```
## Tips
- Token counts can change as tokenizers evolve; pin `tiktoken` to a known version for stable measurements.
- Normalize whitespace if your prompt assembly adds or strips spaces—tokenizers are sensitive to exact byte sequences.
- For batch estimation, combine this measurement with Pandas or list comprehensions to preprocess entire prompt sets before sending them to an LLM.
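The batch-estimation tip can be sketched with a plain list comprehension. To keep the sketch self-contained, `estimate` below is a naive whitespace-split stand-in for `measurement.calculate` — real token counts from `tiktoken` will differ:

```python
# Naive whitespace-based stand-in for measurement.calculate, so this
# sketch runs without the package installed; tiktoken counts differ.
def estimate(text: str) -> int:
    return len(text.split())

prompts = [
    "Summarize the quarterly report.",
    "Translate this sentence into French.",
]

# Estimate every prompt up front, then check the total budget
# before dispatching anything to an LLM.
counts = [estimate(p) for p in prompts]
print(counts, sum(counts))
```

Swap `estimate` for `measurement.calculate` to get real tokenizer-backed counts.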
## Want to help?
If you want to contribute to swarmauri-sdk, read up on our [guidelines for contributing](https://github.com/swarmauri/swarmauri-sdk/blob/master/contributing.md) that will help you get started.
| text/markdown | Jacob Stewart | jacob@swarmauri.com | null | null | null | swarmauri, measurement, tokencountestimator, repository, includes, example, first, class | [
"License :: OSI Approved :: Apache Software License",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Natural Language :: English",
"Development Status :: 3 - Alpha",
"Intended Audience :: Developers",
"Topic :: Software De... | [] | null | null | <3.13,>=3.10 | [] | [] | [] | [
"swarmauri_base",
"swarmauri_core",
"tiktoken>=0.8.0"
] | [] | [] | [] | [] | uv/0.10.4 {"installer":{"name":"uv","version":"0.10.4","subcommand":["publish"]},"python":null,"implementation":{"name":null,"version":null},"distro":{"name":"Ubuntu","version":"24.04","id":"noble","libc":null},"system":{"name":null,"release":null},"cpu":null,"openssl_version":null,"setuptools_version":null,"rustc_version":null,"ci":true} | 2026-02-18T09:48:14.876852 | swarmauri_measurement_tokencountestimator-0.9.3.dev5-py3-none-any.whl | 8,692 | d6/c3/3fb8aa6faf6ca9c197423decfea57d5eda66b92a9244012dff13dd322574/swarmauri_measurement_tokencountestimator-0.9.3.dev5-py3-none-any.whl | py3 | bdist_wheel | null | false | fc068790ceb1e9dac6d64592b60df538 | 71578a825f6dc1f9f180b99a2f6b12317effa6e852c3762974df80154987d6b8 | d6c33fb8aa6faf6ca9c197423decfea57d5eda66b92a9244012dff13dd322574 | Apache-2.0 | [
"LICENSE"
] | 0 |
2.4 | swarmauri_metric_hamming | 0.9.2.dev5 | Swarmauri Hamming distance metric community package. | 
<p align="center">
<a href="https://pypi.org/project/swarmauri_metric_hamming/">
<img src="https://img.shields.io/pypi/dm/swarmauri_metric_hamming" alt="PyPI - Downloads"/></a>
<a href="https://hits.sh/github.com/swarmauri/swarmauri-sdk/tree/master/pkgs/community/swarmauri_metric_hamming/">
<img alt="Hits" src="https://hits.sh/github.com/swarmauri/swarmauri-sdk/tree/master/pkgs/community/swarmauri_metric_hamming.svg"/></a>
<a href="https://pypi.org/project/swarmauri_metric_hamming/">
<img src="https://img.shields.io/pypi/pyversions/swarmauri_metric_hamming" alt="PyPI - Python Version"/></a>
<a href="https://pypi.org/project/swarmauri_metric_hamming/">
<img src="https://img.shields.io/pypi/l/swarmauri_metric_hamming" alt="PyPI - License"/></a>
<a href="https://pypi.org/project/swarmauri_metric_hamming/">
<img src="https://img.shields.io/pypi/v/swarmauri_metric_hamming?label=swarmauri_metric_hamming&color=green" alt="PyPI - swarmauri_metric_hamming"/></a>
</p>
---
# Swarmauri Metric Hamming
The `swarmauri_metric_hamming` package delivers a production-ready implementation of the Hamming distance that integrates seamlessly with the Swarmauri metric ecosystem. It extends `MetricBase` with binary vector validation, pairwise distance calculations, and axiom verification utilities designed for error-correcting code workflows.
## Features
- ✅ Fully compliant `MetricBase` implementation for binary sequences.
- 🔁 Pairwise and batched distance calculations for matrices, vectors, and iterables.
- 🧪 Built-in validation helpers for metric axioms and input compatibility checks.
- 🧰 Utility conversion helpers for strings, dictionaries, NumPy arrays, and Swarmauri matrices/vectors.
## Installation
### Using `uv`
```bash
uv pip install swarmauri_metric_hamming
```
### Using `pip`
```bash
pip install swarmauri_metric_hamming
```
## Usage
```python
from swarmauri_metric_hamming import HammingMetric
metric = HammingMetric()
codeword = [1, 0, 1, 1, 0, 0, 1]
received = [1, 1, 1, 1, 0, 0, 1]
# Compute the Hamming distance between two binary vectors
print(metric.distance(codeword, received)) # 1.0
# Verify the metric axioms for the provided inputs
assert metric.check_symmetry(codeword, received)
assert metric.check_triangle_inequality(codeword, received, codeword)
```
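Under the hood, a Hamming distance is simply a count of mismatched positions between two equal-length sequences. A minimal plain-Python sketch of the idea (independent of the package's API):
```python
def hamming_distance(a, b):
    """Count positions where two equal-length sequences differ."""
    if len(a) != len(b):
        raise ValueError("sequences must have equal length")
    return float(sum(x != y for x, y in zip(a, b)))

print(hamming_distance([1, 0, 1, 1, 0, 0, 1], [1, 1, 1, 1, 0, 0, 1]))  # 1.0
```
The same counting works for strings or any comparable element type, which is why the package can offer conversion helpers for multiple input formats.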
## Support
- Python 3.10, 3.11, and 3.12
- Licensed under the Apache-2.0 license
## Contributing
We welcome contributions! Please submit issues and pull requests through the [Swarmauri SDK GitHub repository](https://github.com/swarmauri/swarmauri-sdk).
| text/markdown | Jacob Stewart | jacob@swarmauri.com | null | null | null | swarmauri, metric, hamming, distance, community, package | [
"License :: OSI Approved :: Apache Software License",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Natural Language :: English",
"Development Status :: 3 - Alpha",
"Intended Audience :: Developers",
"Topic :: Software De... | [] | null | null | <3.13,>=3.10 | [] | [] | [] | [
"numpy>=1.26",
"swarmauri_base",
"swarmauri_core"
] | [] | [] | [] | [] | uv/0.10.4 {"installer":{"name":"uv","version":"0.10.4","subcommand":["publish"]},"python":null,"implementation":{"name":null,"version":null},"distro":{"name":"Ubuntu","version":"24.04","id":"noble","libc":null},"system":{"name":null,"release":null},"cpu":null,"openssl_version":null,"setuptools_version":null,"rustc_version":null,"ci":true} | 2026-02-18T09:48:13.172613 | swarmauri_metric_hamming-0.9.2.dev5.tar.gz | 8,069 | b8/02/3f5b0089635c631a231627734a0c3ca4649802abe4e60056423d27047d76/swarmauri_metric_hamming-0.9.2.dev5.tar.gz | source | sdist | null | false | aad6661e06fd853e09a62754999245a2 | 481bfa0c0ffd9916d372f368f0aa71a190da60c070655eb8ec5ae46f1b4f2a75 | b8023f5b0089635c631a231627734a0c3ca4649802abe4e60056423d27047d76 | Apache-2.0 | [
"LICENSE"
] | 0 |
2.4 | swarmauri_measurement_mutualinformation | 0.9.3.dev5 | Swarmauri Mutual Information Measurement Community Package. | 
<p align="center">
<a href="https://pypi.org/project/swarmauri_measurement_mutualinformation/">
<img src="https://img.shields.io/pypi/dm/swarmauri_measurement_mutualinformation" alt="PyPI - Downloads"/></a>
<a href="https://hits.sh/github.com/swarmauri/swarmauri-sdk/tree/master/pkgs/community/swarmauri_measurement_mutualinformation/">
<img alt="Hits" src="https://hits.sh/github.com/swarmauri/swarmauri-sdk/tree/master/pkgs/community/swarmauri_measurement_mutualinformation.svg"/></a>
<a href="https://pypi.org/project/swarmauri_measurement_mutualinformation/">
<img src="https://img.shields.io/pypi/pyversions/swarmauri_measurement_mutualinformation" alt="PyPI - Python Version"/></a>
<a href="https://pypi.org/project/swarmauri_measurement_mutualinformation/">
<img src="https://img.shields.io/pypi/l/swarmauri_measurement_mutualinformation" alt="PyPI - License"/></a>
<a href="https://pypi.org/project/swarmauri_measurement_mutualinformation/">
<img src="https://img.shields.io/pypi/v/swarmauri_measurement_mutualinformation?label=swarmauri_measurement_mutualinformation&color=green" alt="PyPI - swarmauri_measurement_mutualinformation"/></a>
</p>
---
# Swarmauri Measurement Mutual Information
Mutual-information measurement plugin for Swarmauri pipelines. Computes the average mutual information (in bits) between every feature column and a target column, letting you rank signal strength before training models.
## Features
- Wraps `sklearn.feature_selection.mutual_info_classif` behind the standard `MeasurementBase` API.
- Supports Pandas DataFrame inputs; automatically excludes the target column from the feature set.
- Returns the average mutual information across all features (in bits) for quick screening.
## Prerequisites
- Python 3.10 or newer.
- `scikit-learn` and `pandas` installed (pulled in as dependencies of this package).
- Clean, pre-processed data: encode non-numeric columns before calling, since `mutual_info_classif` expects numerical inputs.
## Installation
```bash
# pip
pip install swarmauri_measurement_mutualinformation
# poetry
poetry add swarmauri_measurement_mutualinformation
# uv (pyproject-based projects)
uv add swarmauri_measurement_mutualinformation
```
## Quickstart
```python
import pandas as pd
from swarmauri_measurement_mutualinformation import MutualInformationMeasurement
# Example dataset
frame = pd.DataFrame(
{
"feature_a": [0, 1, 1, 0, 1, 0],
"feature_b": [5.1, 5.0, 4.9, 5.2, 5.1, 5.0],
"target": [0, 1, 1, 0, 1, 0],
}
)
mi = MutualInformationMeasurement()
avg_mi = mi.calculate(frame, target_column="target")
print(f"Average mutual information: {avg_mi:.4f} bits")
```
## Per-Feature Scores
If you need the individual MI score per feature, compute it directly and inspect the array:
```python
import pandas as pd
from sklearn.feature_selection import mutual_info_classif
frame = pd.DataFrame(
{
"feat1": [0, 1, 1, 0, 1, 0],
"feat2": [5.1, 5.0, 4.9, 5.2, 5.1, 5.0],
"target": [0, 1, 1, 0, 1, 0],
}
)
scores = mutual_info_classif(frame[["feat1", "feat2"]], frame["target"])
for column, score in zip(["feat1", "feat2"], scores):
print(column, score)
```
Use the per-feature scores to filter low-signal columns before passing the DataFrame back through Swarmauri.
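As a sketch, columns can be filtered against a cutoff once per-feature scores are in hand (the scores and threshold below are hypothetical, not produced by this package):
```python
import pandas as pd

# Hypothetical per-feature MI scores, e.g. collected from mutual_info_classif
scores = {"feat1": 0.62, "feat2": 0.03}
threshold = 0.1  # illustrative cutoff

frame = pd.DataFrame(
    {"feat1": [0, 1, 1], "feat2": [5.1, 5.0, 4.9], "target": [0, 1, 1]}
)
# Keep only the features whose score clears the threshold, plus the target
keep = [column for column, score in scores.items() if score >= threshold]
filtered = frame[keep + ["target"]]
print(list(filtered.columns))  # ['feat1', 'target']
```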
## Tips
- Normalize or discretize continuous features when comparing very different scales; mutual information is sensitive to distribution assumptions.
- Handle missing values before calling `calculate`; `mutual_info_classif` does not accept NaNs.
- Binary targets work out of the box; for multi-class targets, ensure `target_column` contains integer encodings.
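For the missing-value point above, a minimal pandas sketch (column names are illustrative):
```python
import pandas as pd

frame = pd.DataFrame({"feat": [0.0, None, 1.0], "target": [0, 1, 0]})
# Drop rows with NaNs before measuring; imputing (e.g. fillna) also works
clean = frame.dropna()
print(len(clean))  # 2
```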
## Want to help?
If you want to contribute to swarmauri-sdk, read our [contributing guidelines](https://github.com/swarmauri/swarmauri-sdk/blob/master/contributing.md) to get started.
| text/markdown | Jacob Stewart | jacob@swarmauri.com | null | null | null | swarmauri, measurement, mutualinformation, mutual, information, community, package | [
"License :: OSI Approved :: Apache Software License",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Natural Language :: English",
"Development Status :: 3 - Alpha",
"Intended Audience :: Developers",
"Topic :: Software De... | [] | null | null | <3.13,>=3.10 | [] | [] | [] | [
"pandas>=2.2.3",
"scikit-learn>=1.5.2",
"swarmauri_base",
"swarmauri_core"
] | [] | [] | [] | [] | uv/0.10.4 {"installer":{"name":"uv","version":"0.10.4","subcommand":["publish"]},"python":null,"implementation":{"name":null,"version":null},"distro":{"name":"Ubuntu","version":"24.04","id":"noble","libc":null},"system":{"name":null,"release":null},"cpu":null,"openssl_version":null,"setuptools_version":null,"rustc_version":null,"ci":true} | 2026-02-18T09:48:09.228096 | swarmauri_measurement_mutualinformation-0.9.3.dev5-py3-none-any.whl | 8,727 | de/55/13acee4a47d40b99b1d960839335035c26b70c81597c954c88a487dd223f/swarmauri_measurement_mutualinformation-0.9.3.dev5-py3-none-any.whl | py3 | bdist_wheel | null | false | bf499966c05a03f301b6847ad1812736 | cfc97b8991afcacaa05b173cbf6c52bc72aa15daab484213d4e1b2c158abe465 | de5513acee4a47d40b99b1d960839335035c26b70c81597c954c88a487dd223f | Apache-2.0 | [
"LICENSE"
] | 0 |
2.4 | swarmauri_matrix_hamming74 | 0.9.3.dev5 | Swarmauri Hamming (7,4) matrix community package. | 
<p align="center">
<a href="https://pypi.org/project/swarmauri_matrix_hamming74/">
<img src="https://img.shields.io/pypi/dm/swarmauri_matrix_hamming74" alt="PyPI - Downloads"/></a>
<a href="https://hits.sh/github.com/swarmauri/swarmauri-sdk/tree/master/pkgs/community/swarmauri_matrix_hamming74/">
<img alt="Hits" src="https://hits.sh/github.com/swarmauri/swarmauri-sdk/tree/master/pkgs/community/swarmauri_matrix_hamming74.svg"/></a>
<a href="https://pypi.org/project/swarmauri_matrix_hamming74/">
<img src="https://img.shields.io/pypi/pyversions/swarmauri_matrix_hamming74" alt="PyPI - Python Version"/></a>
<a href="https://pypi.org/project/swarmauri_matrix_hamming74/">
<img src="https://img.shields.io/pypi/l/swarmauri_matrix_hamming74" alt="PyPI - License"/></a>
<a href="https://pypi.org/project/swarmauri_matrix_hamming74/">
<img src="https://img.shields.io/pypi/v/swarmauri_matrix_hamming74?label=swarmauri_matrix_hamming74&color=green" alt="PyPI - swarmauri_matrix_hamming74"/></a>
</p>
---
# Swarmauri Matrix Hamming(7,4)
The `swarmauri_matrix_hamming74` package provides a binary Hamming (7,4) code matrix that extends `MatrixBase`. It includes generator and parity-check representations, encoding and decoding helpers, and tight integration with the `swarmauri_metric_hamming` package for error detection and correction workflows.
## Features
- 🧮 Binary matrix implementation built on `MatrixBase` with full indexing and arithmetic support.
- 🧷 Generator and parity-check matrix accessors for classic Hamming (7,4) coding schemes.
- 📨 Encoding, syndrome calculation, and nearest-codeword decoding helpers powered by the Hamming metric.
- 🔁 Binary-safe matrix operations (addition, subtraction, multiplication, and matrix multiplication modulo 2).
## Installation
### Using `uv`
```bash
uv pip install swarmauri_matrix_hamming74
```
### Using `pip`
```bash
pip install swarmauri_matrix_hamming74
```
## Usage
```python
from swarmauri_matrix_hamming74 import Hamming74Matrix
matrix = Hamming74Matrix()
message = [1, 0, 1, 1]
codeword = matrix.encode(message)
# Introduce a single-bit error
received = codeword.copy()
received[3] ^= 1
# Decode using syndrome lookup and Hamming distance search
nearest = matrix.nearest_codeword(received)
print(nearest) # Recovers the original codeword
```
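For reference, the classic systematic Hamming (7,4) scheme behind such a matrix can be sketched in plain Python; this mirrors the textbook math (G = [I₄ | P], H = [Pᵀ | I₃], arithmetic mod 2), not the package's internal API:
```python
# Parity submatrix P of the systematic Hamming (7,4) code
P = [[1, 1, 0], [1, 0, 1], [0, 1, 1], [1, 1, 1]]

def encode(msg):
    # Codeword = the four message bits followed by three parity bits
    parity = [sum(msg[i] * P[i][j] for i in range(4)) % 2 for j in range(3)]
    return list(msg) + parity

def syndrome(word):
    # Each parity check covers its P column plus its own parity bit
    return [(sum(word[i] * P[i][j] for i in range(4)) + word[4 + j]) % 2
            for j in range(3)]

def correct(word):
    s = syndrome(word)
    if any(s):
        # A nonzero syndrome equals the H column of the flipped position
        columns = [[P[i][j] for j in range(3)] for i in range(4)]
        columns += [[1, 0, 0], [0, 1, 0], [0, 0, 1]]
        word = list(word)
        word[columns.index(s)] ^= 1
    return word

codeword = encode([1, 0, 1, 1])  # systematic: message bits come first
```
A single flipped bit in a received word produces a syndrome that points directly at the errored position, which is what makes (7,4) single-error correction cheap.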
## Support
- Python 3.10, 3.11, and 3.12
- Licensed under the Apache-2.0 license
## Contributing
We welcome contributions! Please submit issues and pull requests through the [Swarmauri SDK GitHub repository](https://github.com/swarmauri/swarmauri-sdk).
| text/markdown | Jacob Stewart | jacob@swarmauri.com | null | null | null | swarmauri, matrix, hamming, error-correcting, community, package | [
"License :: OSI Approved :: Apache Software License",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Natural Language :: English",
"Development Status :: 3 - Alpha",
"Intended Audience :: Developers",
"Topic :: Software De... | [] | null | null | <3.13,>=3.10 | [] | [] | [] | [
"numpy>=1.26",
"swarmauri_base",
"swarmauri_core",
"swarmauri_metric_hamming"
] | [] | [] | [] | [] | uv/0.10.4 {"installer":{"name":"uv","version":"0.10.4","subcommand":["publish"]},"python":null,"implementation":{"name":null,"version":null},"distro":{"name":"Ubuntu","version":"24.04","id":"noble","libc":null},"system":{"name":null,"release":null},"cpu":null,"openssl_version":null,"setuptools_version":null,"rustc_version":null,"ci":true} | 2026-02-18T09:48:06.968678 | swarmauri_matrix_hamming74-0.9.3.dev5.tar.gz | 8,123 | b0/b3/c84ef057a4461fb1df5a28ba82a36b325af3ebfbd1cf2ce48d5c352ec3d4/swarmauri_matrix_hamming74-0.9.3.dev5.tar.gz | source | sdist | null | false | 1a83c698c46c2e34ac6959d5ee69d6ee | 9798370b75176b16c4ad4904731fd3e51280f6e7c3a9e2481be9db9f768ec637 | b0b3c84ef057a4461fb1df5a28ba82a36b325af3ebfbd1cf2ce48d5c352ec3d4 | Apache-2.0 | [
"LICENSE"
] | 0 |
2.4 | swarmauri_llm_leptonai | 0.9.3.dev5 | Swarmauri Lepton AI Model | 
<p align="center">
<a href="https://pypi.org/project/swarmauri_llm_leptonai/">
<img src="https://img.shields.io/pypi/dm/swarmauri_llm_leptonai" alt="PyPI - Downloads"/></a>
<a href="https://hits.sh/github.com/swarmauri/swarmauri-sdk/tree/master/pkgs/community/swarmauri_llm_leptonai/">
<img alt="Hits" src="https://hits.sh/github.com/swarmauri/swarmauri-sdk/tree/master/pkgs/community/swarmauri_llm_leptonai.svg"/></a>
<a href="https://pypi.org/project/swarmauri_llm_leptonai/">
<img src="https://img.shields.io/pypi/pyversions/swarmauri_llm_leptonai" alt="PyPI - Python Version"/></a>
<a href="https://pypi.org/project/swarmauri_llm_leptonai/">
<img src="https://img.shields.io/pypi/l/swarmauri_llm_leptonai" alt="PyPI - License"/></a>
<a href="https://pypi.org/project/swarmauri_llm_leptonai/">
<img src="https://img.shields.io/pypi/v/swarmauri_llm_leptonai?label=swarmauri_llm_leptonai&color=green" alt="PyPI - swarmauri_llm_leptonai"/></a>
</p>
---
# Swarmauri LLM LeptonAI
Integration package for calling Lepton AI's hosted language and image generation models from Swarmauri agents. Ships LLM and image-gen adapters with synchronous, streaming, and asynchronous workflows that match Swarmauri conventions.
## Features
- Chat completion support for Lepton AI models (e.g., `llama3-8b`, `mixtral-8x7b`) with automatic usage tracking.
- Streaming and async token generation for latency-sensitive experiences.
- SDXL-based image generation with convenience helpers to save or display returned bytes.
- Single configuration surface for model name, base URL, and API key; reuse the same credential for both text and image endpoints.
## Prerequisites
- Python 3.10 or newer.
- A Lepton AI API key stored outside source control (environment variables or secret stores recommended).
- Network access to `*.lepton.run` endpoints; the `openai` Python client is installed automatically as a dependency.
## Installation
```bash
# pip
pip install swarmauri_llm_leptonai
# poetry
poetry add swarmauri_llm_leptonai
# uv (pyproject-based projects)
uv add swarmauri_llm_leptonai
```
## Quickstart: Chat Completions
```python
import os
from swarmauri_llm_leptonai import LeptonAIModel
from swarmauri_standard.conversations.Conversation import Conversation
from swarmauri_standard.messages.HumanMessage import HumanMessage
api_key = os.environ["LEPTON_API_KEY"]
conversation = Conversation()
conversation.add_message(HumanMessage(content="Summarize Swarmauri in two sentences."))
model = LeptonAIModel(api_key=api_key, name="llama3-8b")
response = model.predict(conversation=conversation)
print(response.get_last().content)
print("Tokens used", response.get_last().usage.total_tokens)
```
### Async and Streaming
```python
import asyncio
import os
from swarmauri_llm_leptonai import LeptonAIModel
from swarmauri_standard.conversations.Conversation import Conversation
from swarmauri_standard.messages.HumanMessage import HumanMessage
async def ask_async(prompt: str) -> None:
convo = Conversation()
convo.add_message(HumanMessage(content=prompt))
model = LeptonAIModel(api_key=os.environ["LEPTON_API_KEY"], name="mixtral-8x7b")
result = await model.apredict(conversation=convo)
print(result.get_last().content)
def stream_story(prompt: str) -> None:
convo = Conversation()
convo.add_message(HumanMessage(content=prompt))
model = LeptonAIModel(api_key=os.environ["LEPTON_API_KEY"])
for token in model.stream(conversation=convo):
print(token, end="", flush=True)
# asyncio.run(ask_async("Draft a product announcement."))
# stream_story("Write a haiku about distributed agents.")
```
## Generate Images with SDXL
```python
import os
from pathlib import Path
from swarmauri_llm_leptonai import LeptonAIImgGenModel
img_model = LeptonAIImgGenModel(api_key=os.environ["LEPTON_API_KEY"], model_name="sdxl")
prompt = "A cyberpunk skyline at blue hour in watercolor style"
image_bytes = img_model.generate_image(prompt=prompt, width=768, height=512)
output = Path("leptonai_cyberpunk.png")
img_model.save_image(image_bytes, output.as_posix())
# Display in a notebook or desktop environment
# img_model.display_image(image_bytes)
```
## Operational Tips
- Models are invoked via `https://<model>.lepton.run/api/v1/`; updating `name` on `LeptonAIModel` switches endpoints without altering the client setup.
- Streaming responses emit usage data at stream completion; consume the generator fully before inspecting `conversation.get_last().usage`.
- Respect Lepton AI rate limits—add retries with exponential backoff or queue requests during traffic spikes.
- Store API keys securely and rotate them regularly; avoid hard-coding credentials in notebooks or scripts.
- Large image generations may take longer and consume more credits; adjust `width`, `height`, `steps`, and `guidance_scale` to balance quality versus latency.
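The retry tip above can be sketched with a generic helper; the names and defaults are illustrative, not part of this package:
```python
import random
import time

def with_backoff(call, max_retries=5, base_delay=1.0):
    """Invoke `call`, retrying on exception with exponential backoff plus jitter."""
    for attempt in range(max_retries):
        try:
            return call()
        except Exception:
            if attempt == max_retries - 1:
                raise  # out of retries: surface the last error
            time.sleep(base_delay * 2 ** attempt + random.uniform(0, base_delay))

# e.g. with_backoff(lambda: model.predict(conversation=conversation))
```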
## Want to help?
If you want to contribute to swarmauri-sdk, read our [contributing guidelines](https://github.com/swarmauri/swarmauri-sdk/blob/master/contributing.md) to get started.
| text/markdown | Jacob Stewart | jacob@swarmauri.com | null | null | null | swarmauri, llm, leptonai, lepton, model | [
"License :: OSI Approved :: Apache Software License",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Natural Language :: English",
"Development Status :: 3 - Alpha",
"Intended Audience :: Developers",
"Topic :: Software De... | [] | null | null | <3.13,>=3.10 | [] | [] | [] | [
"openai>=1.62.0",
"swarmauri_base",
"swarmauri_core",
"swarmauri_standard"
] | [] | [] | [] | [] | uv/0.10.4 {"installer":{"name":"uv","version":"0.10.4","subcommand":["publish"]},"python":null,"implementation":{"name":null,"version":null},"distro":{"name":"Ubuntu","version":"24.04","id":"noble","libc":null},"system":{"name":null,"release":null},"cpu":null,"openssl_version":null,"setuptools_version":null,"rustc_version":null,"ci":true} | 2026-02-18T09:48:03.315507 | swarmauri_llm_leptonai-0.9.3.dev5.tar.gz | 10,168 | 7b/40/3c78a286512b843682ebe48e5ad852c64be7488c6e28cdc47666794d1ade/swarmauri_llm_leptonai-0.9.3.dev5.tar.gz | source | sdist | null | false | f90d7e48cab375e6fcdb460694f993a5 | 13e9d5dbd1252210924a71d843cb75af9bfc7fb30baa3bbc34fcf7522efe66ef | 7b403c78a286512b843682ebe48e5ad852c64be7488c6e28cdc47666794d1ade | Apache-2.0 | [
"LICENSE"
] | 0 |
2.4 | swarmauri_keyprovider_vaulttransit | 0.9.3.dev5 | Swarmauri Vault Transit Key Provider | 
<p align="center">
<a href="https://pypi.org/project/swarmauri_keyprovider_vaulttransit/">
<img src="https://img.shields.io/pypi/dm/swarmauri_keyprovider_vaulttransit" alt="PyPI - Downloads"/></a>
<a href="https://hits.sh/github.com/swarmauri/swarmauri-sdk/tree/master/pkgs/community/swarmauri_keyprovider_vaulttransit/">
<img alt="Hits" src="https://hits.sh/github.com/swarmauri/swarmauri-sdk/tree/master/pkgs/community/swarmauri_keyprovider_vaulttransit.svg"/></a>
<a href="https://pypi.org/project/swarmauri_keyprovider_vaulttransit/">
<img src="https://img.shields.io/pypi/pyversions/swarmauri_keyprovider_vaulttransit" alt="PyPI - Python Version"/></a>
<a href="https://pypi.org/project/swarmauri_keyprovider_vaulttransit/">
<img src="https://img.shields.io/pypi/l/swarmauri_keyprovider_vaulttransit" alt="PyPI - License"/></a>
<a href="https://pypi.org/project/swarmauri_keyprovider_vaulttransit/">
<img src="https://img.shields.io/pypi/v/swarmauri_keyprovider_vaulttransit?label=swarmauri_keyprovider_vaulttransit&color=green" alt="PyPI - swarmauri_keyprovider_vaulttransit"/></a>
</p>
---
# Swarmauri Vault Transit Key Provider
HashiCorp Vault Transit engine integration for the Swarmauri key provider interface. Manage hardware-protected keys through Vault, expose public JWK(S) material, rotate versions, and consume Vault RNG and HKDF services without leaving Swarmauri.
## Features
- Create and rotate symmetric (`aes256-gcm96`) and asymmetric (`rsa-3072`, `ecdsa-p256`, `ed25519`) keys via Vault Transit.
- Export public keys in JWK/JWKS form using the built-in `get_public_jwk`/`jwks` helpers.
- Perform signing, verification, encryption, decryption, wrapping, and unwrapping through Vault's REST API.
- Generate cryptographically secure random bytes either from Vault's RNG or local entropy (configurable with `prefer_vault_rng`).
- Run HKDF derivations with SHA-256 to support envelope encryption or key diversification workflows.
## Prerequisites
- Python 3.10 or newer.
- Running HashiCorp Vault instance with the Transit secrets engine enabled and a mount path you can access (default `transit`).
- Vault token with capabilities such as `transit/keys/*` for `read`, `create`, `update`, `delete`, and `transit/random/*` if you plan to use Vault RNG.
- The [`hvac`](https://pypi.org/project/hvac/) client library (installed automatically with this package) unless you inject a custom Vault client.
## Installation
```bash
# pip
pip install swarmauri_keyprovider_vaulttransit
# poetry
poetry add swarmauri_keyprovider_vaulttransit
# uv (pyproject-based projects)
uv add swarmauri_keyprovider_vaulttransit
```
## Quickstart: Create and Rotate a Signing Key
```python
import asyncio
from swarmauri_core.key_providers.types import KeyAlg, KeySpec, ExportPolicy
from swarmauri_keyprovider_vaulttransit import VaultTransitKeyProvider
async def main() -> None:
provider = VaultTransitKeyProvider(
url="http://localhost:8200",
token="swarmauri-dev-token",
mount="transit",
verify=False,
)
spec = KeySpec(
alg=KeyAlg.ED25519,
export_policy=ExportPolicy.never_export_secret,
label="agents-signing",
)
key_ref = await provider.create_key(spec)
print("Created key", key_ref.kid, "version", key_ref.version)
jwk = await provider.get_public_jwk(key_ref.kid, key_ref.version)
print("Public JWK", jwk)
rotated = await provider.rotate_key(key_ref.kid)
print("Rotated to version", rotated.version)
jwks_payload = await provider.jwks()
print("JWKS contains", [entry["kid"] for entry in jwks_payload["keys"]])
if __name__ == "__main__":
asyncio.run(main())
```
## Encrypt, Wrap, and Derive Keys
```python
import asyncio
from swarmauri_keyprovider_vaulttransit import VaultTransitKeyProvider
async def encrypt_and_wrap() -> None:
provider = VaultTransitKeyProvider(
url="http://localhost:8200",
token="swarmauri-dev-token",
prefer_vault_rng=True,
)
plaintext = b"vault keeps my secrets"
aad = b"tenant::demo"
ciphertext = await provider.encrypt("aes-encryption", plaintext, associated_data=aad)
decrypted = await provider.decrypt("aes-encryption", ciphertext, associated_data=aad)
assert decrypted == plaintext
dek = await provider.random_bytes(32)
wrapped = await provider.wrap("rsa-wrap-key", dek)
unwrapped = await provider.unwrap("rsa-wrap-key", wrapped)
assert unwrapped == dek
derived = await provider.hkdf(
ikm=dek,
salt=b"vault-salt",
info=b"swarmauri/derivation",
length=32,
)
print("Derived key length", len(derived))
# asyncio.run(encrypt_and_wrap())
```
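For reference, HKDF with SHA-256 follows RFC 5869: extract a pseudorandom key from the input keying material, then expand it to the requested length. A self-contained stdlib sketch of that derivation (not the provider's actual implementation, which may delegate to Vault):
```python
import hashlib
import hmac

def hkdf_sha256(ikm: bytes, salt: bytes, info: bytes, length: int) -> bytes:
    """RFC 5869 HKDF-SHA256: extract a PRK, then expand to `length` bytes."""
    # Extract: PRK = HMAC-SHA256(salt, IKM); empty salt defaults to zeros
    prk = hmac.new(salt or b"\x00" * 32, ikm, hashlib.sha256).digest()
    # Expand: T(i) = HMAC-SHA256(PRK, T(i-1) | info | i)
    okm, block, counter = b"", b"", 1
    while len(okm) < length:
        block = hmac.new(prk, block + info + bytes([counter]), hashlib.sha256).digest()
        okm += block
        counter += 1
    return okm[:length]

# RFC 5869 test case 1 inputs
okm = hkdf_sha256(
    b"\x0b" * 22,
    bytes.fromhex("000102030405060708090a0b0c"),
    bytes.fromhex("f0f1f2f3f4f5f6f7f8f9"),
    42,
)
```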
## Configuration Reference
- `url` – Vault server address (e.g., `https://vault.example.com:8200`).
- `token` – Vault token or wrapped token with permissions for the Transit mount.
- `mount` – Transit engine mount path; defaults to `transit`.
- `namespace` – Optional Vault Enterprise namespace header.
- `verify` – TLS verification flag or CA bundle path.
- `prefer_vault_rng` – When `True`, `random_bytes` uses Vault's RNG; otherwise falls back to `os.urandom`.
- `client` – Provide a pre-configured `hvac.Client` if you manage authentication externally.
## Want to help?
If you want to contribute to swarmauri-sdk, read up on our [guidelines for contributing](https://github.com/swarmauri/swarmauri-sdk/blob/master/contributing.md).
| text/markdown | Jacob Stewart | jacob@swarmauri.com | null | null | null | swarmauri, keyprovider, vaulttransit, vault, transit, key, provider | [
"License :: OSI Approved :: Apache Software License",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Natural Language :: English",
"Development Status :: 3 - Alpha",
"Intended Audience :: Developers",
"Topic :: Software De... | [] | null | null | <3.13,>=3.10 | [] | [] | [] | [
"cryptography>=41.0.0",
"cryptography>=41.0.0; extra == \"crypto\"",
"hvac>=2.1.0",
"hvac>=2.1.0; extra == \"vault\"",
"swarmauri_base",
"swarmauri_core",
"swarmauri_standard"
] | [] | [] | [] | [] | uv/0.10.4 {"installer":{"name":"uv","version":"0.10.4","subcommand":["publish"]},"python":null,"implementation":{"name":null,"version":null},"distro":{"name":"Ubuntu","version":"24.04","id":"noble","libc":null},"system":{"name":null,"release":null},"cpu":null,"openssl_version":null,"setuptools_version":null,"rustc_version":null,"ci":true} | 2026-02-18T09:47:50.738251 | swarmauri_keyprovider_vaulttransit-0.9.3.dev5-py3-none-any.whl | 12,868 | 64/be/cc42b595ef43cec0d88bf1957ddeaf810e77dcb9b985fa18e5f7253c830f/swarmauri_keyprovider_vaulttransit-0.9.3.dev5-py3-none-any.whl | py3 | bdist_wheel | null | false | 0a5fee557ddd13a4faa9746f4b60ee71 | 390471b0a32165c84b0a7cbc942b2fc40fc24669007702d8b889c27a79d25cb5 | 64becc42b595ef43cec0d88bf1957ddeaf810e77dcb9b985fa18e5f7253c830f | Apache-2.0 | [
"LICENSE"
] | 0 |
2.4 | swarmauri_keyprovider_gcpkms | 0.9.3.dev5 | Google Cloud KMS Key Provider for Swarmauri | 
<p align="center">
<a href="https://pypi.org/project/swarmauri_keyprovider_gcpkms/">
<img src="https://img.shields.io/pypi/dm/swarmauri_keyprovider_gcpkms" alt="PyPI - Downloads"/></a>
<a href="https://hits.sh/github.com/swarmauri/swarmauri-sdk/tree/master/pkgs/community/swarmauri_keyprovider_gcpkms/">
<img alt="Hits" src="https://hits.sh/github.com/swarmauri/swarmauri-sdk/tree/master/pkgs/community/swarmauri_keyprovider_gcpkms.svg"/></a>
<a href="https://pypi.org/project/swarmauri_keyprovider_gcpkms/">
<img src="https://img.shields.io/pypi/pyversions/swarmauri_keyprovider_gcpkms" alt="PyPI - Python Version"/></a>
<a href="https://pypi.org/project/swarmauri_keyprovider_gcpkms/">
<img src="https://img.shields.io/pypi/l/swarmauri_keyprovider_gcpkms" alt="PyPI - License"/></a>
<a href="https://pypi.org/project/swarmauri_keyprovider_gcpkms/">
<img src="https://img.shields.io/pypi/v/swarmauri_keyprovider_gcpkms?label=swarmauri_keyprovider_gcpkms&color=green" alt="PyPI - swarmauri_keyprovider_gcpkms"/></a>
</p>
---
# Swarmauri GCP KMS Key Provider
Google Cloud KMS-backed key provider for the Swarmauri framework. It exposes Cloud KMS asymmetric and symmetric keys through the common `IKeyProvider` interface so agents can sign, verify, encrypt, decrypt, wrap, and unwrap data without leaving Swarmauri.
## Optional Canonicalization Extras
- `cbor` – installs `cbor2` to enable canonical CBOR utilities where workflows require deterministic binary encoding.
## Features
- Use Cloud KMS asymmetric keys for RSA/EC signing and verification while receiving RFC 7517 JWKS payloads for downstream services.
- Perform RSA-OAEP wrapping/unwrapping of data encryption keys and AES-256 encryption/decryption with hardware-backed material.
- Publish JWKS documents from Cloud KMS public keys, including caching and TTL-based refresh to minimize API calls.
- Generate random bytes and derive key material via HKDF with SHA-256 for envelope encryption scenarios.
- Destroy individual key versions via the Cloud KMS REST API when performing decommissioning workflows.
## Prerequisites
- Python 3.10 or newer.
- `google-auth` and `requests` (installed automatically) plus network access to Google Cloud KMS endpoints.
- A Google Cloud project with the KMS API enabled, along with a key ring (`key_ring_id`) in your chosen location (`location_id`).
- Service account or workload identity with permissions such as `cloudkms.cryptoKeys.get`, `cloudkms.cryptoKeyVersions.useToSign`, `cloudkms.cryptoKeyVersions.useToDecrypt`, `cloudkms.cryptoKeys.list`, and `cloudkms.keyRings.get`.
- Application Default Credentials available to the runtime (e.g., `GOOGLE_APPLICATION_CREDENTIALS`, workload identity, or Cloud Run default service account).
## Installation
```bash
# pip
pip install swarmauri_keyprovider_gcpkms
# poetry
poetry add swarmauri_keyprovider_gcpkms
# uv (pyproject-based projects)
uv add swarmauri_keyprovider_gcpkms
# Extras for CBOR canonicalization
pip install "swarmauri_keyprovider_gcpkms[cbor]"
```
## Quickstart: Sign and Verify with Cloud KMS
```python
import asyncio
from swarmauri_keyprovider_gcpkms import GcpKmsKeyProvider
from swarmauri_core.crypto.types import JWAAlg  # assumed import path for the JWAAlg enum used below
async def main() -> None:
provider = GcpKmsKeyProvider(
project_id="my-project",
location_id="us-central1",
key_ring_id="swarmauri",
)
key_ref = await provider.get_key(
kid="projects/my-project/locations/us-central1/keyRings/swarmauri/cryptoKeys/jwt-key",
version=None,
)
message = b"payload to sign"
signature = await provider.sign(key_ref.kid, message, alg=JWAAlg.RS256)
await provider.verify(key_ref.kid, message, signature, alg=JWAAlg.RS256)
jwk = await provider.get_public_jwk(key_ref.kid, key_ref.version)
print("Public JWK", jwk)
if __name__ == "__main__":
asyncio.run(main())
```
## Encrypt and Wrap Data Keys
```python
import asyncio
from swarmauri_keyprovider_gcpkms import GcpKmsKeyProvider
async def encrypt_documents() -> None:
provider = GcpKmsKeyProvider(
project_id="my-project",
location_id="us-east1",
key_ring_id="data-protection",
)
dek = await provider.random_bytes(32)
aad = b"swarmauri::tenant-a"
ciphertext = await provider.encrypt(
kid="projects/my-project/locations/us-east1/keyRings/data-protection/cryptoKeys/primary",
plaintext=b"secret payload",
associated_data=aad,
)
wrapped = await provider.wrap(
kid="projects/my-project/locations/us-east1/keyRings/data-protection/cryptoKeys/wrapping",
plaintext=dek,
)
unwrapped = await provider.unwrap(
kid="projects/my-project/locations/us-east1/keyRings/data-protection/cryptoKeys/wrapping",
ciphertext=wrapped,
)
assert unwrapped == dek
# asyncio.run(encrypt_documents())
```
## Operational Tips
- The provider caches public keys (`_pub_cache`) for 5 minutes; call `get_public_jwk(..., force=True)` if you rotate Cloud KMS key versions and need instant propagation.
- Use explicit key version names when destroying or disabling keys: `projects/.../cryptoKeys/<name>/cryptoKeyVersions/<n>`.
- Cloud KMS rotation is controlled outside the provider (per key configuration). Combine the provider with IAM rotation settings to enforce regular key versioning.
- For auditability, inspect the `tags` field on returned `KeyRef` objects—they include algorithm purpose and key type hints derived from Cloud KMS metadata.
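The TTL-refresh behaviour described above can be approximated by a small cache; this is a generic sketch, not the provider's actual `_pub_cache` implementation:
```python
import time

class TTLCache:
    """Cache values per key and refetch once a time-to-live has elapsed."""

    def __init__(self, ttl_seconds: float = 300.0):  # 5 minutes, as described above
        self.ttl = ttl_seconds
        self._store = {}

    def get(self, key, fetch):
        now = time.monotonic()
        entry = self._store.get(key)
        if entry is None or now - entry[1] >= self.ttl:
            self._store[key] = (fetch(), now)  # miss or stale: refetch
        return self._store[key][0]
```
A `force=True` style refresh corresponds to simply evicting the key before calling `get`.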
| text/markdown | Jacob Stewart | jacob@swarmauri.com | null | null | null | swarmauri, keyprovider, gcpkms, google, cloud, kms, key, provider | [
"License :: OSI Approved :: Apache Software License",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Natural Language :: English",
"Development Status :: 3 - Alpha",
"Intended Audience :: Developers",
"Topic :: Software De... | [] | null | null | <3.13,>=3.10 | [] | [] | [] | [
"cbor2; extra == \"cbor\"",
"cryptography",
"google-auth",
"requests",
"swarmauri_base",
"swarmauri_core",
"swarmauri_standard"
] | [] | [] | [] | [] | uv/0.10.4 {"installer":{"name":"uv","version":"0.10.4","subcommand":["publish"]},"python":null,"implementation":{"name":null,"version":null},"distro":{"name":"Ubuntu","version":"24.04","id":"noble","libc":null},"system":{"name":null,"release":null},"cpu":null,"openssl_version":null,"setuptools_version":null,"rustc_version":null,"ci":true} | 2026-02-18T09:47:48.144211 | swarmauri_keyprovider_gcpkms-0.9.3.dev5-py3-none-any.whl | 13,007 | 8a/d8/47c0410c573f8e1aac7c3860b7aec3690d6ec0e602bf3599bec181b46c34/swarmauri_keyprovider_gcpkms-0.9.3.dev5-py3-none-any.whl | py3 | bdist_wheel | null | false | fd59a8ab2ee2f5f7dcf7c108d27d23fc | 070d6a707648b65b4333e7924b2268d37d3bc531e2cd88792902a7d90f4ad2ce | 8ad847c0410c573f8e1aac7c3860b7aec3690d6ec0e602bf3599bec181b46c34 | Apache-2.0 | [
"LICENSE"
] | 0 |
2.4 | swarmauri_keyprovider_aws_kms | 0.3.3.dev5 | AWS KMS KeyProvider for Swarmauri | 
<p align="center">
<a href="https://pypi.org/project/swarmauri_keyprovider_aws_kms/">
<img src="https://img.shields.io/pypi/dm/swarmauri_keyprovider_aws_kms" alt="PyPI - Downloads"/></a>
<a href="https://hits.sh/github.com/swarmauri/swarmauri-sdk/tree/master/pkgs/community/swarmauri_keyprovider_aws_kms/">
<img alt="Hits" src="https://hits.sh/github.com/swarmauri/swarmauri-sdk/tree/master/pkgs/community/swarmauri_keyprovider_aws_kms.svg"/></a>
<a href="https://pypi.org/project/swarmauri_keyprovider_aws_kms/">
<img src="https://img.shields.io/pypi/pyversions/swarmauri_keyprovider_aws_kms" alt="PyPI - Python Version"/></a>
<a href="https://pypi.org/project/swarmauri_keyprovider_aws_kms/">
<img src="https://img.shields.io/pypi/l/swarmauri_keyprovider_aws_kms" alt="PyPI - License"/></a>
<a href="https://pypi.org/project/swarmauri_keyprovider_aws_kms/">
<img src="https://img.shields.io/pypi/v/swarmauri_keyprovider_aws_kms?label=swarmauri_keyprovider_aws_kms&color=green" alt="PyPI - swarmauri_keyprovider_aws_kms"/></a>
</p>
---
# Swarmauri AWS KMS Key Provider
Community plugin providing an AWS Key Management Service (KMS) backed `KeyProvider` for Swarmauri. It manages non-exportable customer managed keys (CMKs), exposes JWKS for downstream services, and handles key rotation workflows aligned with AWS best practices.
## Features
- Create RSA, ECC, and AES-256 keys in AWS KMS with deterministic aliasing per `kid` and version.
- Rotate keys by minting new KMS key versions and updating aliases, while preserving previous versions for auditing or staged cutovers.
- Describe keys through `KeyRef` objects, including public PEM material when the key spec allows export, and RFC 7517-compliant JWKs via `get_public_jwk`/`jwks`.
- Generate cryptographically secure random bytes and perform HKDF expansion with SHA-256 to support envelope encryption and symmetric derivation flows.
- Destroy keys by scheduling deletion through the KMS API, maintaining Swarmauri tagging metadata for traceability.
## Prerequisites
- Python 3.10 or newer.
- `boto3` (installed automatically with this package) and network access to the target AWS region.
- AWS credentials with permissions such as `kms:CreateKey`, `kms:CreateAlias`, `kms:UpdateAlias`, `kms:DescribeKey`, `kms:GetPublicKey`, `kms:ListAliases`, `kms:ListResourceTags`, and `kms:ScheduleKeyDeletion`.
- Optional: a custom key policy if you need to delegate key administration to non-root principals; pass it through the `key_policy` constructor argument.
## Installation
```bash
# pip
pip install swarmauri_keyprovider_aws_kms
# poetry
poetry add swarmauri_keyprovider_aws_kms
# uv (pyproject-based projects)
uv add swarmauri_keyprovider_aws_kms
```
## Quickstart: Create, Rotate, and Publish Keys
```python
import asyncio
from swarmauri_keyprovider_aws_kms import AwsKmsKeyProvider
from swarmauri_core.key_providers.types import KeyAlg, KeyClass, KeySpec, ExportPolicy
async def main() -> None:
    provider = AwsKmsKeyProvider(region="us-east-1", alias_prefix="swarmauri-demo")

    rsa_spec = KeySpec(
        klass=KeyClass.asymmetric,
        alg=KeyAlg.RSA_PSS_SHA256,
        size_bits=3072,
        export_policy=ExportPolicy.never_export_secret,
        label="api-signing",
    )

    # Create the initial version (aliases: alias/swarmauri-demo/<kid> and .../v1)
    key_ref = await provider.create_key(rsa_spec)
    print("KID", key_ref.kid, "version", key_ref.version)

    # Surface the public JWK for JWT signing or JWKS endpoints
    jwk = await provider.get_public_jwk(key_ref.kid)
    print("Public JWK", jwk)

    # Rotate the key: new CMK in KMS, version alias bump, old alias retained
    rotated = await provider.rotate_key(key_ref.kid)
    print("Rotated to version", rotated.version)

    # Publish the aggregate JWKS (includes the latest version per kid)
    jwks_payload = await provider.jwks()
    print("JWKS keys", [k["kid"] for k in jwks_payload["keys"]])


if __name__ == "__main__":
    asyncio.run(main())
```
## Symmetric Utilities: Random Bytes and HKDF
```python
import asyncio
from swarmauri_keyprovider_aws_kms import AwsKmsKeyProvider
async def derive_data_key() -> bytes:
    provider = AwsKmsKeyProvider(region="us-east-1")
    master_salt = await provider.random_bytes(32)
    info = b"swarmauri/example"
    pseudo_random_key = await provider.random_bytes(32)
    derived = await provider.hkdf(
        pseudo_random_key,
        salt=master_salt,
        info=info,
        length=32,
    )
    return derived
# asyncio.run(derive_data_key())
```
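The `hkdf` helper is described as HKDF expansion with SHA-256, i.e. the RFC 5869 extract-then-expand construction. If you want to sanity-check derived material outside the provider, the construction can be reproduced with the standard library alone. This is a reference sketch, not the provider's internal code, and the provider's exact parameterization may differ:

```python
import hashlib
import hmac


def hkdf_sha256(ikm: bytes, salt: bytes, info: bytes, length: int) -> bytes:
    """RFC 5869 HKDF with HMAC-SHA-256: extract a PRK, then expand it."""
    # Extract: PRK = HMAC(salt, IKM); an empty salt defaults to a zero block.
    prk = hmac.new(salt or b"\x00" * 32, ikm, hashlib.sha256).digest()
    # Expand: T(i) = HMAC(PRK, T(i-1) || info || i), concatenated until `length`.
    okm, block, counter = b"", b"", 1
    while len(okm) < length:
        block = hmac.new(prk, block + info + bytes([counter]), hashlib.sha256).digest()
        okm += block
        counter += 1
    return okm[:length]


derived = hkdf_sha256(b"\x0b" * 32, salt=b"\x00" * 32, info=b"swarmauri/example", length=32)
print(derived.hex())
```

Because HKDF is deterministic, deriving with the same inputs twice yields identical output, which is useful when verifying envelope-encryption keys across services.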
## Operational Tips
- `list_versions(kid)` inspects versioned aliases (`alias/<prefix>/<kid>/vN`); use it before destructive actions to ensure you capture all active CMKs.
- Destroying a key schedules deletion for 7 days. Plan rotations ahead of time so dependent systems can migrate to the new version before you call `destroy_key`.
- Tag metadata persisted by the provider (`saur:kid`, `saur:version`, `saur:alg`, optional `saur:label`) enables inventory checks—query them from the AWS console or CLI when auditing.
- For high-throughput signing, ensure your IAM policies, KMS quotas, and region placement match latency expectations; consider caching public JWKs from `jwks()` in your verifier services.
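The caching suggested in the last tip can be a simple TTL wrapper around whatever fetch function your verifier uses. In this hypothetical sketch, `fetch_jwks` stands in for a call that ultimately returns the payload of `provider.jwks()`:

```python
import time
from typing import Any, Callable, Dict, Optional


class JwksCache:
    """Cache a JWKS payload for `ttl_s` seconds before re-fetching."""

    def __init__(self, fetch_jwks: Callable[[], Dict[str, Any]], ttl_s: float = 300.0):
        self._fetch = fetch_jwks
        self._ttl_s = ttl_s
        self._payload: Optional[Dict[str, Any]] = None
        self._fetched_at = 0.0

    def get(self, force: bool = False) -> Dict[str, Any]:
        # Re-fetch when forced, when nothing is cached yet, or when the TTL expired.
        stale = time.monotonic() - self._fetched_at >= self._ttl_s
        if force or stale or self._payload is None:
            self._payload = self._fetch()
            self._fetched_at = time.monotonic()
        return self._payload
```

Calling `get(force=True)` right after a rotation lets verifiers pick up the new version immediately instead of waiting out the TTL.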
| text/markdown | Jacob Stewart | jacob@swarmauri.com | null | null | null | swarmauri, keyprovider, aws, kms | [
"License :: OSI Approved :: Apache Software License",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Natural Language :: English",
"Development Status :: 3 - Alpha",
"Intended Audience :: Developers",
"Topic :: Software De... | [] | null | null | <3.13,>=3.10 | [] | [] | [] | [
"boto3>=1.28.0",
"cryptography",
"pytest-benchmark>=4.0.0; extra == \"perf\"",
"swarmauri_base",
"swarmauri_core",
"swarmauri_standard"
] | [] | [] | [] | [] | uv/0.10.4 {"installer":{"name":"uv","version":"0.10.4","subcommand":["publish"]},"python":null,"implementation":{"name":null,"version":null},"distro":{"name":"Ubuntu","version":"24.04","id":"noble","libc":null},"system":{"name":null,"release":null},"cpu":null,"openssl_version":null,"setuptools_version":null,"rustc_version":null,"ci":true} | 2026-02-18T09:47:47.553691 | swarmauri_keyprovider_aws_kms-0.3.3.dev5.tar.gz | 11,590 | 1d/99/6d49cf583e76a991d754cd148dc575a80ccecec4e5d404c1996e40ee52e0/swarmauri_keyprovider_aws_kms-0.3.3.dev5.tar.gz | source | sdist | null | false | fe28f250ba16d3ecc1692ea49e0ad8d5 | f8a37bc20f69a91742ba6ea09bd72f1c4e978e7ec18622bac398201afa4b0f9b | 1d996d49cf583e76a991d754cd148dc575a80ccecec4e5d404c1996e40ee52e0 | Apache-2.0 | [
"LICENSE"
] | 0 |
2.4 | swarmauri_embedding_mlm | 0.8.2.dev5 | example community package |

<p align="center">
<a href="https://pypi.org/project/swarmauri_embedding_mlm/">
<img src="https://img.shields.io/pypi/dm/swarmauri_embedding_mlm" alt="PyPI - Downloads"/></a>
<a href="https://hits.sh/github.com/swarmauri/swarmauri-sdk/tree/master/pkgs/community/swarmauri_embedding_mlm/">
<img alt="Hits" src="https://hits.sh/github.com/swarmauri/swarmauri-sdk/tree/master/pkgs/community/swarmauri_embedding_mlm.svg"/></a>
<a href="https://pypi.org/project/swarmauri_embedding_mlm/">
<img src="https://img.shields.io/pypi/pyversions/swarmauri_embedding_mlm" alt="PyPI - Python Version"/></a>
<a href="https://pypi.org/project/swarmauri_embedding_mlm/">
<img src="https://img.shields.io/pypi/l/swarmauri_embedding_mlm" alt="PyPI - License"/></a>
<a href="https://pypi.org/project/swarmauri_embedding_mlm/">
<img src="https://img.shields.io/pypi/v/swarmauri_embedding_mlm?label=swarmauri_embedding_mlm&color=green" alt="PyPI - swarmauri_embedding_mlm"/></a>
</p>
---
# Swarmauri Embedding MLM
Trainable embedding provider that fine-tunes a Hugging Face masked language model (MLM) end-to-end so Swarmauri agents can produce contextual document vectors without leaving the framework.
## Features
- Wraps any Hugging Face masked language model (`embedding_name`) behind the Swarmauri `EmbeddingBase` interface.
- Supports optional vocabulary expansion via `add_new_tokens` before fine-tuning to capture domain-specific terminology.
- Handles end-to-end fine-tuning with masking, AdamW optimization, and GPU/CPU selection based on availability.
- Exposes pooling utilities (`transform`, `infer_vector`) that average the last hidden state to yield dense vectors ready for downstream retrieval or clustering.
- Provides `save_model`/`load_model` helpers so trained weights and tokenizers can be persisted and reloaded across workers.
## Prerequisites
- Python 3.10 or newer.
- PyTorch with CUDA support if you plan to train on GPU (the class falls back to CPU automatically).
- Access to the Hugging Face model hub for downloading `embedding_name`. Set `HF_HOME`, proxies, or tokens if your environment requires authentication.
- Enough disk space to cache the chosen MLM (e.g., `bert-base-uncased` ~420 MB).
## Installation
```bash
# pip
pip install swarmauri_embedding_mlm
# poetry
poetry add swarmauri_embedding_mlm
# uv (pyproject-based projects)
uv add swarmauri_embedding_mlm
```
## Quickstart: Fine-tune and Embed Documents
```python
from swarmauri_embedding_mlm import MlmEmbedding
docs = [
    "Swarmauri SDK ships modular agents.",
    "Masked language models produce contextual embeddings.",
]

embedder = MlmEmbedding(
    embedding_name="distilbert-base-uncased",
    batch_size=16,
    learning_rate=3e-5,
)

# One epoch of MLM fine-tuning on your corpus
embedder.fit(docs)

# Generate vectors for downstream tasks
vectors = embedder.transform([
    "Agents coordinate through shared memory",
    "Fine-tuning improves domain recall",
])
for v in vectors:
    print(len(v.value), v.value[:4])  # dimension and preview

# Single-text inference helper
query_vector = embedder.infer_vector("How do masked models compute embeddings?")
```
## Expanding the Vocabulary
Set `add_new_tokens=True` to capture domain-specific terms before training. New tokens are identified via simple whitespace tokenization and appended to the tokenizer before the first epoch.
```python
from swarmauri_embedding_mlm import MlmEmbedding
domain_docs = [
    "Neo4j graph embeddings power fraud detection",
    "Qdrant supports hybrid sparse-dense search",
]
embedder = MlmEmbedding(add_new_tokens=True)
embedder.fit(domain_docs)
# Inspect the tokenizer to confirm additions
print(f"Vocabulary size: {len(embedder.extract_features())}")
```
## Persisting and Reloading Models
```python
from pathlib import Path
from swarmauri_embedding_mlm import MlmEmbedding
save_dir = Path("models/mlm-distilbert")
embedder = MlmEmbedding()
embedder.fit(["short corpus", "to warm up the model"])
embedder.save_model(save_dir.as_posix())
# Later or on another machine
restored = MlmEmbedding()
restored.load_model(save_dir.as_posix())
embedding = restored.infer_vector("Reuse the trained weights instantly")
```
## Operational Tips
- Batch and sequence length drive GPU memory usage; reduce `batch_size` or `max_length` in tokenizer calls when running on constrained hardware.
- `fit_transform` runs a full fine-tuning pass and immediately returns embeddings—useful for one-off adaptation jobs.
- When training on large corpora, stream documents from a generator, chunk them, or wrap the `.fit` call in your own epoch loop.
- Run `extract_features()` to audit the tokenizer vocabulary (helpful when debugging domain token coverage).
- Combine the generated vectors with Swarmauri vector stores (Redis, Qdrant, etc.) to build end-to-end retrieval pipelines.
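For the large-corpus tip above, a small chunking generator keeps memory flat while still feeding `fit` list-sized batches. This is a sketch that assumes `embedder.fit` accepts any list of strings; `stream_documents` is a hypothetical source of your corpus:

```python
from typing import Iterable, Iterator, List


def chunked(docs: Iterable[str], size: int) -> Iterator[List[str]]:
    """Yield successive lists of at most `size` documents from any iterable."""
    batch: List[str] = []
    for doc in docs:
        batch.append(doc)
        if len(batch) == size:
            yield batch
            batch = []
    if batch:  # flush the final partial batch
        yield batch


# Hypothetical epoch loop over a streamed corpus:
# for epoch in range(3):
#     for batch in chunked(stream_documents(), size=512):
#         embedder.fit(batch)
```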
## Want to help?
If you want to contribute to swarmauri-sdk, read up on our [guidelines for contributing](https://github.com/swarmauri/swarmauri-sdk/blob/master/contributing.md) that will help you get started.
| text/markdown | Jacob Stewart | jacob@swarmauri.com | null | null | null | swarmauri, embedding, mlm, example, community, package | [
"License :: OSI Approved :: Apache Software License",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Natural Language :: English",
"Development Status :: 3 - Alpha",
"Intended Audience :: Developers",
"Topic :: Software De... | [] | null | null | <3.13,>=3.10 | [] | [] | [] | [
"swarmauri_base",
"swarmauri_core",
"swarmauri_standard",
"torch>=2.6.0",
"transformers>=4.49.0"
] | [] | [] | [] | [] | uv/0.10.4 {"installer":{"name":"uv","version":"0.10.4","subcommand":["publish"]},"python":null,"implementation":{"name":null,"version":null},"distro":{"name":"Ubuntu","version":"24.04","id":"noble","libc":null},"system":{"name":null,"release":null},"cpu":null,"openssl_version":null,"setuptools_version":null,"rustc_version":null,"ci":true} | 2026-02-18T09:47:44.061076 | swarmauri_embedding_mlm-0.8.2.dev5.tar.gz | 9,950 | 59/29/265a0ddc865eadac67256b007fac407113c8b69baef7206f5d30c854109e/swarmauri_embedding_mlm-0.8.2.dev5.tar.gz | source | sdist | null | false | dc0b09144f937bd4ca28076a92e3265a | ef3f18224343c375a5438b39f9946f8455cb982a50aa72052416c78dc59a55b3 | 5929265a0ddc865eadac67256b007fac407113c8b69baef7206f5d30c854109e | Apache-2.0 | [
"LICENSE"
] | 0 |
2.4 | silik-kernel | 1.5.1 | Multi-kernel Manager | # Silik Kernel
This is a Jupyter kernel that lets you interface with multiple kernels. You can:
- start, stop and restart kernels,
- switch between kernels,
- list available kernels.
As a Jupyter kernel, it takes text as input, transfers it to the appropriate sub-kernel, and returns the result as cell output. The Silik kernel also forwards TAB completion from sub-kernels and supports multiline cells.
> **Any Jupyter kernel can be plugged into Silik.**

> But doesn't managing multiple kernels sound like a nightmare?
**Not with agents and LLMs.** To let users manage multiple kernels easily, we present a way to access AI agents through Jupyter kernels. To do so, we provide a [wrapper of a pydantic-ai agent in a kernel](https://github.com/mariusgarenaux/pydantic-ai-kernel). This makes it easy to interact with these agents, through IPython for example, and to let them manage the output of cells.
It also makes agents easy to share (via **PyPI**, for example), because they can be shipped in a Python module. We cleanly separate the agent from the framework used to interact with it, by reusing the one provided by Jupyter kernels.
## Getting started
```bash
pip install silik-kernel
```
The kernel is then installed in the current Python venv.
Any Jupyter frontend should be able to access the kernel, for example:
- **Notebook** (you may need to restart the IDE): select 'silik' at the top right of the notebook.
- **CLI**: install jupyter-console (`pip install jupyter-console`) and run `jupyter console --kernel silik`.
- **Silik Signal Messaging**: access the kernel through the Signal messaging application.
To use diverse kernels through Silik, you can install some example kernels: [https://github.com/Tariqve/jupyter-kernels](https://github.com/Tariqve/jupyter-kernels). You can also create new agent-based kernels by subclassing the [pydantic-ai base kernel](https://github.com/mariusgarenaux/pydantic-ai-kernel).
> You can list the available kernels by running `jupyter kernelspec list` in a terminal.
## Usage
### Tutorial
Start by running `mkdir <kernel_type> --label=my-kernel`, with `<kernel_type>` one of the installed kernel types (send `kernels` to see which ones are available).
Then you can run `cd my-kernel` and `run <code>` to run one-shot code in this kernel.
You can also run `/cnct` to avoid typing `run` before each cell. `/cmd` takes you back to command mode (navigation and creation of kernels) at any time.
### Commands
Here is a quick reminder of the available commands:
- `cd <path>`: move the selected kernel in the kernel tree
- `ls` | `tree`: display the kernel tree
- `mkdir <kernel_type> --label=<kernel_label>`: start a kernel (see the `kernels` command)
- `run <code>` | `r <code>`: run one-shot code on the selected kernel
- `restart`: restart the selected kernel
- `branch <kernel_label>`: route the output of the selected kernel into the input of one of its children; the child's output then becomes the overall output (In -> Parent Kernel -> Child Kernel -> Out)
- `detach`: detach the branch starting from the selected kernel
- `history`: display the cell input history for this kernel
- `kernels`: display the list of available kernel types
- `/cnct`: connect directly to the selected kernel: cells are executed on it directly, unless a cell's content is `/cmd`
- `/cmd`: switch back to command mode (the default), exiting `/cnct` mode
## Recursive
You can start a Silik kernel from within a Silik kernel. However, you can only control the child Silik with `run <code>`, not directly with `/cmd` or `/cnct` (these two are caught first by the outer Silik). Here is an example:

> You can therefore implement your own subclass of the Silik kernel and add any strategy for spreading Silik input across sub-kernels and merging sub-kernel outputs into Silik's output.
| text/markdown | null | Marius Garénaux-Gruau <marius.garenaux-gruau@irisa.fr> | null | null | Apache License
Version 2.0, January 2004
http://www.apache.org/licenses/
TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
1. Definitions.
"License" shall mean the terms and conditions for use, reproduction,
and distribution as defined by Sections 1 through 9 of this document.
"Licensor" shall mean the copyright owner or entity authorized by
the copyright owner that is granting the License.
"Legal Entity" shall mean the union of the acting entity and all
other entities that control, are controlled by, or are under common
control with that entity. For the purposes of this definition,
"control" means (i) the power, direct or indirect, to cause the
direction or management of such entity, whether by contract or
otherwise, or (ii) ownership of fifty percent (50%) or more of the
outstanding shares, or (iii) beneficial ownership of such entity.
"You" (or "Your") shall mean an individual or Legal Entity
exercising permissions granted by this License.
"Source" form shall mean the preferred form for making modifications,
including but not limited to software source code, documentation
source, and configuration files.
"Object" form shall mean any form resulting from mechanical
transformation or translation of a Source form, including but
not limited to compiled object code, generated documentation,
and conversions to other media types.
"Work" shall mean the work of authorship, whether in Source or
Object form, made available under the License, as indicated by a
copyright notice that is included in or attached to the work
(an example is provided in the Appendix below).
"Derivative Works" shall mean any work, whether in Source or Object
form, that is based on (or derived from) the Work and for which the
editorial revisions, annotations, elaborations, or other modifications
represent, as a whole, an original work of authorship. For the purposes
of this License, Derivative Works shall not include works that remain
separable from, or merely link (or bind by name) to the interfaces of,
the Work and Derivative Works thereof.
"Contribution" shall mean any work of authorship, including
the original version of the Work and any modifications or additions
to that Work or Derivative Works thereof, that is intentionally
submitted to Licensor for inclusion in the Work by the copyright owner
or by an individual or Legal Entity authorized to submit on behalf of
the copyright owner. For the purposes of this definition, "submitted"
means any form of electronic, verbal, or written communication sent
to the Licensor or its representatives, including but not limited to
communication on electronic mailing lists, source code control systems,
and issue tracking systems that are managed by, or on behalf of, the
Licensor for the purpose of discussing and improving the Work, but
excluding communication that is conspicuously marked or otherwise
designated in writing by the copyright owner as "Not a Contribution."
"Contributor" shall mean Licensor and any individual or Legal Entity
on behalf of whom a Contribution has been received by Licensor and
subsequently incorporated within the Work.
2. Grant of Copyright License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
copyright license to reproduce, prepare Derivative Works of,
publicly display, publicly perform, sublicense, and distribute the
Work and such Derivative Works in Source or Object form.
3. Grant of Patent License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
(except as stated in this section) patent license to make, have made,
use, offer to sell, sell, import, and otherwise transfer the Work,
where such license applies only to those patent claims licensable
by such Contributor that are necessarily infringed by their
Contribution(s) alone or by combination of their Contribution(s)
with the Work to which such Contribution(s) was submitted. If You
institute patent litigation against any entity (including a
cross-claim or counterclaim in a lawsuit) alleging that the Work
or a Contribution incorporated within the Work constitutes direct
or contributory patent infringement, then any patent licenses
granted to You under this License for that Work shall terminate
as of the date such litigation is filed.
4. Redistribution. You may reproduce and distribute copies of the
Work or Derivative Works thereof in any medium, with or without
modifications, and in Source or Object form, provided that You
meet the following conditions:
(a) You must give any other recipients of the Work or
Derivative Works a copy of this License; and
(b) You must cause any modified files to carry prominent notices
stating that You changed the files; and
(c) You must retain, in the Source form of any Derivative Works
that You distribute, all copyright, patent, trademark, and
attribution notices from the Source form of the Work,
excluding those notices that do not pertain to any part of
the Derivative Works; and
(d) If the Work includes a "NOTICE" text file as part of its
distribution, then any Derivative Works that You distribute must
include a readable copy of the attribution notices contained
within such NOTICE file, excluding those notices that do not
pertain to any part of the Derivative Works, in at least one
of the following places: within a NOTICE text file distributed
as part of the Derivative Works; within the Source form or
documentation, if provided along with the Derivative Works; or,
within a display generated by the Derivative Works, if and
wherever such third-party notices normally appear. The contents
of the NOTICE file are for informational purposes only and
do not modify the License. You may add Your own attribution
notices within Derivative Works that You distribute, alongside
or as an addendum to the NOTICE text from the Work, provided
that such additional attribution notices cannot be construed
as modifying the License.
You may add Your own copyright statement to Your modifications and
may provide additional or different license terms and conditions
for use, reproduction, or distribution of Your modifications, or
for any such Derivative Works as a whole, provided Your use,
reproduction, and distribution of the Work otherwise complies with
the conditions stated in this License.
5. Submission of Contributions. Unless You explicitly state otherwise,
any Contribution intentionally submitted for inclusion in the Work
by You to the Licensor shall be under the terms and conditions of
this License, without any additional terms or conditions.
Notwithstanding the above, nothing herein shall supersede or modify
the terms of any separate license agreement you may have executed
with Licensor regarding such Contributions.
6. Trademarks. This License does not grant permission to use the trade
names, trademarks, service marks, or product names of the Licensor,
except as required for reasonable and customary use in describing the
origin of the Work and reproducing the content of the NOTICE file.
7. Disclaimer of Warranty. Unless required by applicable law or
agreed to in writing, Licensor provides the Work (and each
Contributor provides its Contributions) on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
implied, including, without limitation, any warranties or conditions
of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
PARTICULAR PURPOSE. You are solely responsible for determining the
appropriateness of using or redistributing the Work and assume any
risks associated with Your exercise of permissions under this License.
8. Limitation of Liability. In no event and under no legal theory,
whether in tort (including negligence), contract, or otherwise,
unless required by applicable law (such as deliberate and grossly
negligent acts) or agreed to in writing, shall any Contributor be
liable to You for damages, including any direct, indirect, special,
incidental, or consequential damages of any character arising as a
result of this License or out of the use or inability to use the
Work (including but not limited to damages for loss of goodwill,
work stoppage, computer failure or malfunction, or any and all
other commercial damages or losses), even if such Contributor
has been advised of the possibility of such damages.
9. Accepting Warranty or Additional Liability. While redistributing
the Work or Derivative Works thereof, You may choose to offer,
and charge a fee for, acceptance of support, warranty, indemnity,
or other liability obligations and/or rights consistent with this
License. However, in accepting such obligations, You may act only
on Your own behalf and on Your sole responsibility, not on behalf
of any other Contributor, and only if You agree to indemnify,
defend, and hold each Contributor harmless for any liability
incurred by, or claims asserted against, such Contributor by reason
of your accepting any such warranty or additional liability.
END OF TERMS AND CONDITIONS
APPENDIX: How to apply the Apache License to your work.
To apply the Apache License to your work, attach the following
boilerplate notice, with the fields enclosed by brackets "[]"
replaced with your own identifying information. (Don't include
the brackets!) The text should be enclosed in the appropriate
comment syntax for the file format. We also recommend that a
file or class name and description of purpose be included on the
same "printed page" as the copyright notice for easier
identification within third-party archives.
Copyright [yyyy] [name of copyright owner]
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License. | null | [
"Programming Language :: Python :: 3"
] | [] | null | null | >=3.12 | [] | [] | [] | [
"ipykernel>=7.1.0",
"jupyter-client>=8.6.3",
"statikomand>=0.0.3"
] | [] | [] | [] | [
"Repository, https://github.com/mariusgarenaux/silik-kernel"
] | twine/6.2.0 CPython/3.12.12 | 2026-02-18T09:47:40.083118 | silik_kernel-1.5.1.tar.gz | 23,489 | ab/0f/5de0f500e16ecde1e29e6bdd0eb767c9cb84f792f929f8c276b613110910/silik_kernel-1.5.1.tar.gz | source | sdist | null | false | 21d475b2585200673bacd63f4ca975f7 | 2c3374ca60880e54eb0420fba3f4eef670efc78bff76a161cd0c9c3648611ec2 | ab0f5de0f500e16ecde1e29e6bdd0eb767c9cb84f792f929f8c276b613110910 | null | [
"LICENSE-2.0.txt"
] | 126 |
2.4 | swarmauri_certservice_stepca | 0.3.3.dev5 | Step-ca backed certificate service for Swarmauri | 
<p align="center">
<a href="https://pypi.org/project/swarmauri_certservice_stepca/">
<img src="https://img.shields.io/pypi/dm/swarmauri_certservice_stepca" alt="PyPI - Downloads"/></a>
<a href="https://hits.sh/github.com/swarmauri/swarmauri-sdk/tree/master/pkgs/community/swarmauri_certservice_stepca/">
<img alt="Hits" src="https://hits.sh/github.com/swarmauri/swarmauri-sdk/tree/master/pkgs/community/swarmauri_certservice_stepca.svg"/></a>
<a href="https://pypi.org/project/swarmauri_certservice_stepca/">
<img src="https://img.shields.io/pypi/pyversions/swarmauri_certservice_stepca" alt="PyPI - Python Version"/></a>
<a href="https://pypi.org/project/swarmauri_certservice_stepca/">
<img src="https://img.shields.io/pypi/l/swarmauri_certservice_stepca" alt="PyPI - License"/></a>
<a href="https://pypi.org/project/swarmauri_certservice_stepca/">
<img src="https://img.shields.io/pypi/v/swarmauri_certservice_stepca?label=swarmauri_certservice_stepca&color=green" alt="PyPI - swarmauri_certservice_stepca"/></a>
</p>
---
# Swarmauri Step-ca Certificate Service
Community plugin providing a step-ca backed certificate service for Swarmauri. It turns the generic `ICertService` workflows into calls against the [smallstep step-ca](https://smallstep.com/docs/step-ca) REST API so agents can request certificates without hand-crafting HTTP payloads.
## Features
- Generate RFC 2986-compliant PKCS#10 certificate signing requests with SANs, challenge passwords, and custom extensions.
- Exchange CSRs for signed certificates through the step-ca `/1.0/sign` endpoint, including provisioner selection and template data.
- Asynchronous HTTP client with configurable TLS verification, timeouts, and one-time token (OTT) acquisition via a pluggable `token_provider`.
- Structured capability introspection through `supports()` so orchestrators can negotiate key algorithms and features.
## Prerequisites
- Python 3.10 or newer.
- Reachable step-ca instance (hosted or self-managed) exposing the `/1.0/sign` API.
- Provisioner configured in step-ca with one-time token authentication. Either supply OTTs directly at request time or provide an async `token_provider` function that returns them when asked.
- Local private key material for each CSR you plan to submit; embed it in `KeyRef.material` or wire in your own key management layer.
## Installation
```bash
# pip
pip install swarmauri_certservice_stepca
# poetry
poetry add swarmauri_certservice_stepca
# uv (pyproject-based projects)
uv add swarmauri_certservice_stepca
```
## Quickstart: Issue a Certificate via step-ca
```python
import asyncio
from pathlib import Path
from swarmauri_certservice_stepca import StepCaCertService
from swarmauri_core.certs.ICertService import SubjectSpec
from swarmauri_core.crypto.types import ExportPolicy, KeyRef, KeyType, KeyUse
async def enroll() -> None:
    async def fetch_ott(claims):
        # Look up the device-specific token issued by step-ca (KV store, API call, etc).
        device_id = claims["sub"] or "default"
        return Path(f"otts/{device_id}.txt").read_text().strip()

    service = StepCaCertService(
        ca_url="https://ca.example",
        provisioner="devices",
        token_provider=fetch_ott,
        timeout_s=10.0,
    )

    key_bytes = Path("device.key.pem").read_bytes()
    key_ref = KeyRef(
        kid="device-key",
        version=1,
        type=KeyType.RSA,
        uses=(KeyUse.SIGN,),
        export_policy=ExportPolicy.SECRET_WHEN_ALLOWED,
        material=key_bytes,
    )

    subject: SubjectSpec = {
        "C": "US",
        "O": "Example Corp",
        "CN": "device-001.example.com",
    }

    csr_pem = await service.create_csr(
        key=key_ref,
        subject=subject,
        san={"dns": ["device-001.example.com", "device-001"]},
        extensions={
            "extended_key_usage": {"oids": ["serverAuth", "clientAuth"]},
        },
    )

    cert_pem = await service.sign_cert(csr_pem, ca_key=key_ref)
    Path("device.pem").write_bytes(cert_pem)
    await service.aclose()
    print("Certificate written to device.pem")


if __name__ == "__main__":
    asyncio.run(enroll())
```
The service extracts the subject name from the CSR and passes it to the `token_provider`, making it easy to map devices to their OTTs. If you already possess the token, skip the provider and pass it in via `opts={"ott": "..."}` when calling `sign_cert`.
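A `token_provider` is simply an async callable that receives the CSR claims and returns an OTT string. The following minimal in-memory sketch is hypothetical (a real deployment would query a vault or step-ca's token machinery rather than a dict), but it illustrates the contract:

```python
import asyncio
from typing import Dict


def make_token_provider(ott_store: Dict[str, str]):
    """Build an async token_provider that looks tokens up by the CSR subject."""

    async def token_provider(claims: dict) -> str:
        # Mirror the quickstart: fall back to "default" when no subject is set.
        subject = claims.get("sub") or "default"
        try:
            return ott_store[subject]
        except KeyError:
            raise LookupError(f"no one-time token staged for {subject!r}")

    return token_provider


# Hypothetical wiring:
# provider = make_token_provider({"device-001.example.com": staged_ott})
# service = StepCaCertService("https://ca.example", token_provider=provider)
```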
## Control Validity Windows and Template Data
```python
import asyncio
from datetime import datetime, timedelta, timezone
from pathlib import Path

from swarmauri_certservice_stepca import StepCaCertService
from swarmauri_core.crypto.types import ExportPolicy, KeyRef, KeyType, KeyUse


async def request_short_lived_cert(ott: str) -> bytes:
    service = StepCaCertService("https://ca.example", verify_tls="/etc/ssl/ca.pem")
    key_ref = KeyRef(
        kid="build-runner",
        version=1,
        type=KeyType.EC,
        uses=(KeyUse.SIGN,),
        export_policy=ExportPolicy.SECRET_WHEN_ALLOWED,
        material=Path("runner-key.pem").read_bytes(),
    )
    csr = await service.create_csr(
        key_ref,
        {"CN": "ci-runner", "O": "Example Corp"},
    )
    now = datetime.now(timezone.utc)
    cert = await service.sign_cert(
        csr,
        key_ref,
        not_before=int(now.timestamp()),
        not_after=int((now + timedelta(hours=8)).timestamp()),
        opts={
            "ott": ott,
            "template_data": {"env": "ci", "workload": "runner"},
        },
    )
    await service.aclose()
    return cert


# Usage:
# asyncio.run(request_short_lived_cert(os.environ["STEPCA_OTT"]))
```
`sign_cert` automatically normalizes DER and PEM inputs, propagates custom template data, and honors explicit validity windows when your provisioner allows overrides.
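The same DER/PEM normalization is easy to reproduce in client code when you need to persist whatever the CA hands back. A stdlib-only sketch (a hypothetical helper, not part of this package) that wraps DER bytes in a PEM envelope and passes PEM input through unchanged:

```python
import base64
import textwrap


def to_pem(data: bytes, label: str = "CERTIFICATE") -> bytes:
    """Wrap DER bytes in a PEM envelope; return PEM input unchanged."""
    if data.lstrip().startswith(b"-----BEGIN"):
        return data
    # PEM bodies are base64 wrapped at 64 characters per line.
    body = "\n".join(textwrap.wrap(base64.b64encode(data).decode("ascii"), 64))
    return f"-----BEGIN {label}-----\n{body}\n-----END {label}-----\n".encode("ascii")
```

This keeps downstream tooling (which usually expects PEM on disk) indifferent to which encoding the responder chose.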
## Operational Tips
- Close the underlying HTTP client with `aclose()` once you are done issuing certificates to release sockets.
- Provisioners can restrict key algorithms and SAN contents—call `supports()` to check compatibility before presenting the service to end users.
- Store OTTs in a secure vault or request them just-in-time from step-ca’s ACME/JWT machinery; never hard-code long-lived tokens in scripts.
- Combine this service with Swarmauri verification agents (CRL/OCSP) or your preferred PKI lints to track certificate health after issuance.
| text/markdown | Jacob Stewart | jacob@swarmauri.com | null | null | null | swarmauri, certservice, stepca, step, backed, certificate, service | [
"License :: OSI Approved :: Apache Software License",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Natural Language :: English",
"Development Status :: 3 - Alpha",
"Intended Audience :: Developers",
"Topic :: Software De... | [] | null | null | <3.13,>=3.10 | [] | [] | [] | [
"cryptography",
"httpx>=0.27.0",
"pytest-benchmark>=4.0.0; extra == \"perf\"",
"swarmauri_base",
"swarmauri_core",
"swarmauri_standard"
] | [] | [] | [] | [] | uv/0.10.4 {"installer":{"name":"uv","version":"0.10.4","subcommand":["publish"]},"python":null,"implementation":{"name":null,"version":null},"distro":{"name":"Ubuntu","version":"24.04","id":"noble","libc":null},"system":{"name":null,"release":null},"cpu":null,"openssl_version":null,"setuptools_version":null,"rustc_version":null,"ci":true} | 2026-02-18T09:47:39.140769 | swarmauri_certservice_stepca-0.3.3.dev5.tar.gz | 11,425 | 87/a4/d40a9063b8e3e8fbd3863dbcc54e0828a9770ad8b68c63a1bf2c94c4ef71/swarmauri_certservice_stepca-0.3.3.dev5.tar.gz | source | sdist | null | false | 388762a58ef6130a25d9a091a6e7cf95 | 85bb2ae07feb79b8ed701758da446c6e7e9bf8413ceb9b48e62c70cc0995937c | 87a4d40a9063b8e3e8fbd3863dbcc54e0828a9770ad8b68c63a1bf2c94c4ef71 | Apache-2.0 | [
"LICENSE"
] | 0 |
2.4 | swarmauri_certservice_scep | 0.8.3.dev5 | Swarmauri SCEP Certificate Service | 
<p align="center">
<a href="https://pypi.org/project/swarmauri_certservice_scep/">
<img src="https://img.shields.io/pypi/dm/swarmauri_certservice_scep" alt="PyPI - Downloads"/></a>
<a href="https://hits.sh/github.com/swarmauri/swarmauri-sdk/tree/master/pkgs/community/swarmauri_certservice_scep/">
<img alt="Hits" src="https://hits.sh/github.com/swarmauri/swarmauri-sdk/tree/master/pkgs/community/swarmauri_certservice_scep.svg"/></a>
<a href="https://pypi.org/project/swarmauri_certservice_scep/">
<img src="https://img.shields.io/pypi/pyversions/swarmauri_certservice_scep" alt="PyPI - Python Version"/></a>
<a href="https://pypi.org/project/swarmauri_certservice_scep/">
<img src="https://img.shields.io/pypi/l/swarmauri_certservice_scep" alt="PyPI - License"/></a>
<a href="https://pypi.org/project/swarmauri_certservice_scep/">
<img src="https://img.shields.io/pypi/v/swarmauri_certservice_scep?label=swarmauri_certservice_scep&color=green" alt="PyPI - swarmauri_certservice_scep"/></a>
</p>
---
# Swarmauri Certservice SCEP
`ScepCertService` implements certificate enrollment using the [Simple Certificate Enrollment Protocol (SCEP)](https://datatracker.ietf.org/doc/html/rfc8894). It maps the generic `ICertService` flows onto SCEP operations so applications can request, receive, and validate X.509 certificates without dealing with protocol details.
## Features
- Generate RFC 2986-compliant PKCS#10 certificate signing requests with challenge passwords and subject alternative names.
- Submit CSRs to SCEP responders via `PKCSReq` and retrieve issued certificates.
- Download issuer CA certificates and validate issued leaf certificates for time window, issuer, and CA flags.
- Parse returned certificates into structured dictionaries for downstream automation.
## Prerequisites
- Python 3.10 or newer.
- An accessible SCEP server URL (for example, `https://mdm.example.com/scep`).
- Private key material for each device or service enrolling via SCEP. Software keys can be embedded in the `KeyRef.material` field.
- Optional: RA challenge password if your SCEP service requires one for enrollment.
## Installation
```bash
# pip
pip install swarmauri_certservice_scep
# poetry
poetry add swarmauri_certservice_scep
# uv (pyproject-based projects)
uv add swarmauri_certservice_scep
```
## Quickstart: Enroll a Device Certificate
```python
import asyncio
from pathlib import Path

from cryptography.hazmat.primitives import serialization
from swarmauri_certservice_scep import ScepCertService
from swarmauri_core.certs.ICertService import SubjectSpec
from swarmauri_core.crypto.types import ExportPolicy, KeyRef, KeyType, KeyUse


async def enroll() -> None:
    service = ScepCertService(
        "https://scep.example.test",
        challenge_password="enroll-secret",
    )
    key_bytes = Path("device.key.pem").read_bytes()
    key_ref = KeyRef(
        kid="device-key",
        version=1,
        type=KeyType.RSA,
        uses=(KeyUse.SIGN,),
        export_policy=ExportPolicy.SECRET_WHEN_ALLOWED,
        material=key_bytes,
    )
    subject: SubjectSpec = {
        "C": "US",
        "O": "Example Corp",
        "CN": "device-001.example.com",
    }
    csr_pem = await service.create_csr(
        key=key_ref,
        subject=subject,
        san={"dns": ["device-001.example.com", "device-001"]},
    )
    fullchain = await service.sign_cert(csr_pem, ca_key=key_ref)
    Path("device.pem").write_bytes(fullchain)
    print("Enrollment complete → device.pem")


if __name__ == "__main__":
    asyncio.run(enroll())
```
`sign_cert` returns the DER content provided by the SCEP server. Depending on your responder, the payload may be a single certificate or a PKCS#7 chain; decode accordingly before storing.
## Verify Certificates from SCEP
```python
import asyncio
from pathlib import Path

from swarmauri_certservice_scep import ScepCertService


async def verify() -> None:
    service = ScepCertService("https://scep.example.test")
    device_cert = Path("device.pem").read_bytes()
    result = await service.verify_cert(device_cert)
    if result["valid"]:
        print("Issuer:", result["issuer"])
        print("Valid until:", result["not_after"])
    else:
        print("Certificate failed validation:", result["reason"])
    details = await service.parse_cert(device_cert)
    print("Serial:", details["serial"])
    print("Subject alternative names:", details.get("san"))


if __name__ == "__main__":
    asyncio.run(verify())
```
`verify_cert` evaluates SCEP-issued certificates for validity windows and CA constraints, while `parse_cert` extracts SAN, EKU, and key usage metadata for logging or policy engines.
## Operational Tips
- Generate distinct key pairs per device or workload, and store them securely—`KeyRef` can reference HSM-backed keys instead of raw PEM material.
- Capture challenge passwords and sensitive enrollment secrets from a secure vault or environment variables rather than hard-coding them in scripts.
- If your SCEP responder returns PKCS#7 payloads, feed the response into `cryptography.hazmat.primitives.serialization.pkcs7` to extract certificate chains before deployment.
- Pair SCEP enrollment with Swarmauri revocation check services (`swarmauri_certs_ocspverify`, `swarmauri_certs_crlverifyservice`) to maintain lifecycle hygiene.
## Want to help?
If you want to contribute to swarmauri-sdk, read up on our [guidelines for contributing](https://github.com/swarmauri/swarmauri-sdk/blob/master/contributing.md) that will help you get started.
| text/markdown | Jacob Stewart | jacob@swarmauri.com | null | null | null | swarmauri, certservice, scep, certificate, service | [
"License :: OSI Approved :: Apache Software License",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Natural Language :: English",
"Development Status :: 3 - Alpha",
"Intended Audience :: Developers",
"Topic :: Software De... | [] | null | null | <3.13,>=3.10 | [] | [] | [] | [
"cryptography>=41.0.0",
"httpx>=0.27.0; extra == \"httpx\"",
"requests>=2.32.3",
"swarmauri_base",
"swarmauri_core"
] | [] | [] | [] | [] | uv/0.10.4 {"installer":{"name":"uv","version":"0.10.4","subcommand":["publish"]},"python":null,"implementation":{"name":null,"version":null},"distro":{"name":"Ubuntu","version":"24.04","id":"noble","libc":null},"system":{"name":null,"release":null},"cpu":null,"openssl_version":null,"setuptools_version":null,"rustc_version":null,"ci":true} | 2026-02-18T09:47:30.296930 | swarmauri_certservice_scep-0.8.3.dev5-py3-none-any.whl | 11,120 | b3/1a/04cd6cf1615b9f655494c61913ecb0757f25f6b4bd58c18f0caec0eff469/swarmauri_certservice_scep-0.8.3.dev5-py3-none-any.whl | py3 | bdist_wheel | null | false | a9b81b4719d6c4439684236186c5b5cb | 4a2a5b3104dd2689986086a7d01b98783360aaaf3875d0e343ace76ce8685367 | b31a04cd6cf1615b9f655494c61913ecb0757f25f6b4bd58c18f0caec0eff469 | Apache-2.0 | [
"LICENSE"
] | 0 |
2.4 | swarmauri_certservice_ms_adcs | 0.2.3.dev5 | Microsoft AD CS certificate service client for Swarmauri | 
<p align="center">
<a href="https://pypi.org/project/swarmauri_certservice_ms_adcs/">
<img src="https://img.shields.io/pypi/dm/swarmauri_certservice_ms_adcs" alt="PyPI - Downloads"/></a>
<a href="https://hits.sh/github.com/swarmauri/swarmauri-sdk/tree/master/pkgs/community/swarmauri_certservice_ms_adcs/">
<img alt="Hits" src="https://hits.sh/github.com/swarmauri/swarmauri-sdk/tree/master/pkgs/community/swarmauri_certservice_ms_adcs.svg"/></a>
<a href="https://pypi.org/project/swarmauri_certservice_ms_adcs/">
<img src="https://img.shields.io/pypi/pyversions/swarmauri_certservice_ms_adcs" alt="PyPI - Python Version"/></a>
<a href="https://pypi.org/project/swarmauri_certservice_ms_adcs/">
<img src="https://img.shields.io/pypi/l/swarmauri_certservice_ms_adcs" alt="PyPI - License"/></a>
<a href="https://pypi.org/project/swarmauri_certservice_ms_adcs/">
<img src="https://img.shields.io/pypi/v/swarmauri_certservice_ms_adcs?label=swarmauri_certservice_ms_adcs&color=green" alt="PyPI - swarmauri_certservice_ms_adcs"/></a>
</p>
---
# swarmauri_certservice_ms_adcs
Community plugin providing a certificate service client for Microsoft Active Directory Certificate Services (AD CS).
## Features
- Generate RFC 2986-compliant PKCS#10 CSRs with rich subject, subject alternative name, and extension options.
- Parse and validate X.509 certificates per RFC 5280, including issuer matching and signature verification.
- Ready-to-use authentication helpers for NTLM, Kerberos, and HTTP basic auth while preserving TLS configuration.
- Typed `supports()` metadata describing templates, key algorithms, and capabilities advertised to Swarmauri agents.
## Prerequisites
- Python 3.10 or newer.
- Network access to an AD CS Web Enrollment endpoint (typically `https://<ca>/certsrv`).
- A private key for each CSR you plan to submit; software keys can be read from PEM while HSM-backed keys can be referenced via `KeyRef` metadata.
- Optional authentication libraries: install `requests-ntlm` for NTLM flows and `requests-kerberos` for Kerberos/SPNEGO delegation.
## Installation
Install the core package or include extras for the auth helpers your environment requires:
```bash
# pip
pip install "swarmauri_certservice_ms_adcs[ntlm,kerberos]"
# poetry
poetry add swarmauri_certservice_ms_adcs -E ntlm -E kerberos
# uv (pyproject-based projects)
uv add "swarmauri_certservice_ms_adcs[ntlm,kerberos]"
```
You can drop the extras if your AD CS deployment only needs anonymous access or HTTP basic authentication.
## Quickstart: Build a CSR for AD CS
```python
import asyncio
from pathlib import Path

from swarmauri_certservice_ms_adcs import MsAdcsCertService, _AuthCfg
from swarmauri_core.certs.ICertService import SubjectSpec
from swarmauri_core.crypto.types import ExportPolicy, KeyRef, KeyType, KeyUse


async def main() -> None:
    service = MsAdcsCertService(
        base_url="https://ca.example.com/certsrv",
        default_template="WebServer",
        auth=_AuthCfg(
            mode="ntlm",
            username="EXAMPLE\\svc-adcs",
            password="s3cr3t!",
            verify_tls=True,
        ),
    )
    key_bytes = Path("webserver.key.pem").read_bytes()
    key_ref = KeyRef(
        kid="webserver-key",
        version=1,
        type=KeyType.RSA,
        uses=(KeyUse.SIGN,),
        export_policy=ExportPolicy.PUBLIC_ONLY,
        material=key_bytes,
    )
    subject: SubjectSpec = {
        "C": "US",
        "ST": "Texas",
        "L": "Austin",
        "O": "Example Corp",
        "CN": "app.example.com",
    }
    csr_pem = await service.create_csr(
        key=key_ref,
        subject=subject,
        san={"dns": ["app.example.com", "www.example.com"]},
    )
    Path("app.csr").write_bytes(csr_pem)
    print("CSR saved to app.csr")


if __name__ == "__main__":
    asyncio.run(main())
```
Submit `app.csr` through your AD CS Web Enrollment UI, automation, or a downstream Swarmauri agent responsible for certificate issuance.
## Validate Issued Certificates
After AD CS returns a certificate, use the same service instance to confirm the chain and inspect metadata:
```python
import asyncio
from pathlib import Path

from swarmauri_certservice_ms_adcs import MsAdcsCertService, _AuthCfg


async def verify_certificate() -> None:
    service = MsAdcsCertService(
        base_url="https://ca.example.com/certsrv",
        auth=_AuthCfg(mode="none"),
    )
    issued_cert = Path("app.pem").read_bytes()
    issuing_ca = Path("issuing-ca.pem").read_bytes()
    verification = await service.verify_cert(
        cert=issued_cert,
        trust_roots=[issuing_ca],
    )
    if verification["valid"]:
        print("Certificate is valid until", verification["not_after"])
    else:
        print("Validation failed:", verification["reason"])
    parsed = await service.parse_cert(issued_cert)
    print("Subject:", parsed["subject"])
    print("Subject Alternative Names:", parsed.get("san"))


if __name__ == "__main__":
    asyncio.run(verify_certificate())
```
`verify_cert` performs structural checks and signature validation when an issuer certificate is supplied, while `parse_cert` surfaces extension data for auditing or observability pipelines.
## Authentication Modes
- **NTLM** – enable by installing `requests-ntlm` and providing domain credentials via `_AuthCfg(mode="ntlm", username="DOMAIN\\user", password="..." )`.
- **Kerberos/SPNEGO** – install `requests-kerberos` and set `_AuthCfg(mode="kerberos", spnego_delegate=True)` when delegation is required.
- **HTTP Basic** – provide `_AuthCfg(mode="basic", username=..., password=...)` for AD CS deployments fronted by basic auth proxies.
- **Anonymous** – set `_AuthCfg(mode="none")` for environments that rely on IP allow lists or mutual TLS.
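Since NTLM and Kerberos support live behind optional extras, a configuration loader can fail fast when the chosen mode's dependency is missing. A stdlib-only sketch (the helper and its mapping are ours; the importable module names follow the packages listed above):

```python
import importlib.util

# Optional dependency required for each authentication mode (importable module names).
_MODE_REQUIRES = {
    "ntlm": "requests_ntlm",
    "kerberos": "requests_kerberos",
    "basic": None,
    "none": None,
}


def check_auth_mode(mode: str) -> None:
    """Raise early if the auth mode needs an extra that is not installed."""
    try:
        module = _MODE_REQUIRES[mode]
    except KeyError:
        raise ValueError(f"unknown auth mode: {mode!r}") from None
    if module and importlib.util.find_spec(module) is None:
        raise RuntimeError(f"auth mode {mode!r} requires the {module} package")
```

Calling this at startup surfaces a clear error instead of a deep import failure on the first enrollment request.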
## Best Practices
- Store AD CS credentials in a secure secrets manager and inject them via environment variables rather than hard-coding passwords.
- Capture issued certificates, verification results, and parsed metadata in your logging system so you can trace enrollment activity.
- Rotate key pairs and certificates regularly; regenerate CSRs ahead of expiry to leave time for manual approvals.
- Combine this plugin with Swarmauri certificate verification agents (CRL/OCSP) to maintain revocation visibility across the lifecycle.
| text/markdown | Jacob Stewart | jacob@swarmauri.com | null | null | null | swarmauri, certservice, ms, adcs, microsoft, certificate, service, client | [
"License :: OSI Approved :: Apache Software License",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Natural Language :: English",
"Development Status :: 3 - Alpha",
"Intended Audience :: Developers",
"Topic :: Software De... | [] | null | null | <3.13,>=3.10 | [] | [] | [] | [
"cryptography",
"pytest-benchmark>=4.0.0; extra == \"perf\"",
"requests>=2.32.3",
"requests-kerberos; extra == \"kerberos\"",
"requests-ntlm; extra == \"ntlm\"",
"swarmauri_base",
"swarmauri_core",
"swarmauri_standard"
] | [] | [] | [] | [] | uv/0.10.4 {"installer":{"name":"uv","version":"0.10.4","subcommand":["publish"]},"python":null,"implementation":{"name":null,"version":null},"distro":{"name":"Ubuntu","version":"24.04","id":"noble","libc":null},"system":{"name":null,"release":null},"cpu":null,"openssl_version":null,"setuptools_version":null,"rustc_version":null,"ci":true} | 2026-02-18T09:47:28.074237 | swarmauri_certservice_ms_adcs-0.2.3.dev5-py3-none-any.whl | 13,321 | 88/e2/fb243093bbb2440ca2df5b6b9de51692fd37e5048c879ef96e6184244a96/swarmauri_certservice_ms_adcs-0.2.3.dev5-py3-none-any.whl | py3 | bdist_wheel | null | false | 5296544f942ac6cca6da8d07d4a993c1 | 4c52421f0358a8c88c08ee611e40f6c62280dd96fb75f713f09b81e77cd86148 | 88e2fb243093bbb2440ca2df5b6b9de51692fd37e5048c879ef96e6184244a96 | Apache-2.0 | [
"LICENSE"
] | 0 |
2.4 | swarmauri_certservice_gcpkms | 0.2.3.dev5 | Google Cloud KMS Certificate Service for Swarmauri | 
<p align="center">
<a href="https://pypi.org/project/swarmauri_certservice_gcpkms/">
<img src="https://img.shields.io/pypi/dm/swarmauri_certservice_gcpkms" alt="PyPI - Downloads"/></a>
<a href="https://hits.sh/github.com/swarmauri/swarmauri-sdk/tree/master/pkgs/community/swarmauri_certservice_gcpkms/">
<img alt="Hits" src="https://hits.sh/github.com/swarmauri/swarmauri-sdk/tree/master/pkgs/community/swarmauri_certservice_gcpkms.svg"/></a>
<a href="https://pypi.org/project/swarmauri_certservice_gcpkms/">
<img src="https://img.shields.io/pypi/pyversions/swarmauri_certservice_gcpkms" alt="PyPI - Python Version"/></a>
<a href="https://pypi.org/project/swarmauri_certservice_gcpkms/">
<img src="https://img.shields.io/pypi/l/swarmauri_certservice_gcpkms" alt="PyPI - License"/></a>
<a href="https://pypi.org/project/swarmauri_certservice_gcpkms/">
<img src="https://img.shields.io/pypi/v/swarmauri_certservice_gcpkms?label=swarmauri_certservice_gcpkms&color=green" alt="PyPI - swarmauri_certservice_gcpkms"/></a>
</p>
---
# swarmauri_certservice_gcpkms
Google Cloud KMS backed certificate service for Swarmauri.
This package exposes a `GcpKmsCertService` component implementing
`CertServiceBase`. It can create CSRs, generate self-signed certificates,
issue certificates from CSRs, verify certificates and parse their
metadata while using keys stored in Google Cloud KMS.
## Features
- Create certificate signing requests using keys stored in KMS
- Issue self-signed or CA-signed certificates
- Verify signatures and validity windows
- Parse certificate metadata including extensions
## Prerequisites
- A Google Cloud project with the Cloud KMS API enabled
- Credentials available to the application (for example via the
`GOOGLE_APPLICATION_CREDENTIALS` environment variable)
- Keys provisioned in Cloud KMS with the `AsymmetricSign` capability (RSA 2048, EC P-256, or Ed25519).
- Python 3.10 or newer and the `google-cloud-kms` dependency (installed via the extras shown below).
- Network access to the Google Cloud KMS endpoint for the target location.
## Installation
```bash
# pip
pip install "swarmauri_certservice_gcpkms[gcp]"
# poetry
poetry add swarmauri_certservice_gcpkms -E gcp
# uv (pyproject-based projects)
uv add "swarmauri_certservice_gcpkms[gcp]"
```
The optional `gcp` extra installs the `google-cloud-kms` dependency.
## Usage
### Issue a Certificate from a CSR
```python
import asyncio
from datetime import datetime, timedelta, timezone
from pathlib import Path

from swarmauri_certservice_gcpkms import GcpKmsCertService
from swarmauri_core.crypto.types import KeyRef


async def issue_certificate() -> None:
    service = GcpKmsCertService()
    csr_bytes = Path("leaf.csr").read_bytes()
    kms_ca_key = KeyRef(
        kid="projects/my-project/locations/us-central1/keyRings/pki/cryptoKeys/issuing-ca/cryptoKeyVersions/1"
    )
    certificate_pem = await service.sign_cert(
        csr=csr_bytes,
        ca_key=kms_ca_key,
        issuer={"CN": "Example GCP Issuing CA", "O": "Example Corp"},
        not_after=int((datetime.now(timezone.utc) + timedelta(days=365)).timestamp()),
    )
    Path("leaf.pem").write_bytes(certificate_pem)
    print("Issued certificate saved to leaf.pem")


if __name__ == "__main__":
    asyncio.run(issue_certificate())
```
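`not_after` (and `not_before`, where a flow accepts it) are plain Unix epoch seconds. A small stdlib helper (hypothetical, not part of the package) keeps the window arithmetic in one place and backdates `not_before` slightly to absorb clock skew between issuer and clients:

```python
from datetime import datetime, timedelta, timezone


def validity_window(days: int, skew_minutes: int = 5) -> tuple[int, int]:
    """Return (not_before, not_after) epoch seconds for a certificate validity window."""
    now = datetime.now(timezone.utc)
    # Backdate not_before so freshly issued certs validate on slightly slow clocks.
    not_before = int((now - timedelta(minutes=skew_minutes)).timestamp())
    not_after = int((now + timedelta(days=days)).timestamp())
    return not_before, not_after
```

For example, `validity_window(365)` yields a one-year window that starts five minutes in the past.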
### Create CSRs and Self-Signed Roots
```python
import asyncio
from datetime import datetime, timedelta, timezone
from pathlib import Path

from swarmauri_certservice_gcpkms import GcpKmsCertService
from swarmauri_core.crypto.types import KeyRef


async def bootstrap_pki() -> None:
    service = GcpKmsCertService()

    # Generate a CSR using an exportable private key.
    local_key = KeyRef(material=Path("intermediate-key.pem").read_bytes())
    csr_pem = await service.create_csr(
        key=local_key,
        subject={"CN": "Intermediate CA", "O": "Example Corp"},
        san={"dns": ["intermediate.example.com"]},
    )
    Path("intermediate.csr").write_bytes(csr_pem)

    # Create a self-signed root using Cloud KMS.
    root_key = KeyRef(
        kid="projects/my-project/locations/us-central1/keyRings/pki/cryptoKeys/root-ca/cryptoKeyVersions/1"
    )
    root_pem = await service.create_self_signed(
        key=root_key,
        subject={"CN": "Example Root CA", "O": "Example Corp"},
        not_after=int((datetime.now(timezone.utc) + timedelta(days=3650)).timestamp()),
    )
    Path("root-ca.pem").write_bytes(root_pem)


if __name__ == "__main__":
    asyncio.run(bootstrap_pki())
```
### Verification and Parsing
```python
import asyncio
from pathlib import Path

from swarmauri_certservice_gcpkms import GcpKmsCertService


async def inspect() -> None:
    service = GcpKmsCertService()
    cert_bytes = Path("leaf.pem").read_bytes()
    root_bytes = Path("root-ca.pem").read_bytes()
    verification = await service.verify_cert(
        cert=cert_bytes,
        trust_roots=[root_bytes],
    )
    print("Valid:", verification["valid"], "Issuer:", verification.get("issuer"))
    metadata = await service.parse_cert(cert_bytes)
    print("Subject:", metadata["subject"])
    print("Not after:", metadata["not_after"])


if __name__ == "__main__":
    asyncio.run(inspect())
```
## License
Apache-2.0
| text/markdown | Jacob Stewart | jacob@swarmauri.com | null | null | null | swarmauri, certservice, gcpkms, google, cloud, kms, certificate, service | [
"License :: OSI Approved :: Apache Software License",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Natural Language :: English",
"Development Status :: 3 - Alpha",
"Intended Audience :: Developers",
"Topic :: Software De... | [] | null | null | <3.13,>=3.10 | [] | [] | [] | [
"cryptography",
"google-cloud-kms; extra == \"gcp\"",
"swarmauri_base",
"swarmauri_core",
"swarmauri_standard"
] | [] | [] | [] | [] | uv/0.10.4 {"installer":{"name":"uv","version":"0.10.4","subcommand":["publish"]},"python":null,"implementation":{"name":null,"version":null},"distro":{"name":"Ubuntu","version":"24.04","id":"noble","libc":null},"system":{"name":null,"release":null},"cpu":null,"openssl_version":null,"setuptools_version":null,"rustc_version":null,"ci":true} | 2026-02-18T09:47:23.549822 | swarmauri_certservice_gcpkms-0.2.3.dev5-py3-none-any.whl | 12,405 | ad/c6/ccc1c3fc41d37d39ec923f31a848cbc5d206846f4851e94e05a41e22ce39/swarmauri_certservice_gcpkms-0.2.3.dev5-py3-none-any.whl | py3 | bdist_wheel | null | false | de6b06744075a2ca3cba7ec4fa1ac128 | 853581c8bdf7e56795ab176cd630e63e81213a3e32768dfd04700fc4584464a8 | adc6ccc1c3fc41d37d39ec923f31a848cbc5d206846f4851e94e05a41e22ce39 | Apache-2.0 | [
"LICENSE"
] | 0 |
2.4 | swarmauri_certservice_aws_kms | 0.3.3.dev5 | AWS KMS backed CertService for Swarmauri | 
<p align="center">
<a href="https://pypi.org/project/swarmauri_certservice_aws_kms/">
<img src="https://img.shields.io/pypi/dm/swarmauri_certservice_aws_kms" alt="PyPI - Downloads"/></a>
<a href="https://hits.sh/github.com/swarmauri/swarmauri-sdk/tree/master/pkgs/community/swarmauri_certservice_aws_kms/">
<img alt="Hits" src="https://hits.sh/github.com/swarmauri/swarmauri-sdk/tree/master/pkgs/community/swarmauri_certservice_aws_kms.svg"/></a>
<a href="https://pypi.org/project/swarmauri_certservice_aws_kms/">
<img src="https://img.shields.io/pypi/pyversions/swarmauri_certservice_aws_kms" alt="PyPI - Python Version"/></a>
<a href="https://pypi.org/project/swarmauri_certservice_aws_kms/">
<img src="https://img.shields.io/pypi/l/swarmauri_certservice_aws_kms" alt="PyPI - License"/></a>
<a href="https://pypi.org/project/swarmauri_certservice_aws_kms/">
<img src="https://img.shields.io/pypi/v/swarmauri_certservice_aws_kms?label=swarmauri_certservice_aws_kms&color=green" alt="PyPI - swarmauri_certservice_aws_kms"/></a>
</p>
---
# swarmauri_certservice_aws_kms
AWS KMS backed certificate service for Swarmauri.
This package provides an implementation of `CertServiceBase` that signs and verifies X.509 certificates using AWS Key Management Service.
## Features
- Create CSRs from exportable key material.
- Issue certificates using AWS KMS `Sign` API.
- Create self-signed certificates.
- Verify and parse certificates with RFC 5280 compliance.
## Prerequisites
- Python 3.10 or newer.
- AWS account with KMS keys that allow the `Sign` operation (`RSA` or `ECC_NIST_P256`).
- `AWS_ACCESS_KEY_ID`/`AWS_SECRET_ACCESS_KEY` (or an IAM role/instance profile) granting `kms:GetPublicKey` and `kms:Sign` permissions.
- `boto3` installed (automatically pulled in via this package) and network access to the target AWS region.
- For certificate signing: an issuer subject template and optional CA certificate bytes to embed in verification metadata.
## Extras
- `docs`: documentation helpers.
- `perf`: benchmarking support.
## Installation
```bash
# pip
pip install swarmauri_certservice_aws_kms
# poetry
poetry add swarmauri_certservice_aws_kms
# uv (pyproject-based projects)
uv add swarmauri_certservice_aws_kms
```
## Testing
Run unit, functional and performance tests in isolation from the repository root:
```bash
uv run --package swarmauri_certservice_aws_kms --directory community/swarmauri_certservice_aws_kms pytest
```
## Quickstart: Issue a Certificate with AWS KMS
The snippet below signs an incoming CSR using a customer-managed KMS key. Attach the key ARN to the `KeyRef` via `kid` or `tags` (`aws_kms_key_id`).
```python
import asyncio
from datetime import datetime, timedelta, timezone
from pathlib import Path

from swarmauri_certservice_aws_kms import AwsKmsCertService
from swarmauri_core.crypto.types import KeyRef


async def main() -> None:
    service = AwsKmsCertService(region_name="us-east-1")
    csr_bytes = Path("tenant.csr").read_bytes()
    ca_cert = Path("ca.pem").read_bytes()
    kms_key = KeyRef(kid="arn:aws:kms:us-east-1:123456789012:key/abcd-1234")
    certificate_pem = await service.sign_cert(
        csr=csr_bytes,
        ca_key=kms_key,
        issuer={"CN": "Example KMS Issuing CA", "O": "Example Corp"},
        ca_cert=ca_cert,
        not_after=int((datetime.now(timezone.utc) + timedelta(days=365)).timestamp()),
    )
    Path("tenant.pem").write_bytes(certificate_pem)
    print("Issued certificate saved to tenant.pem")


if __name__ == "__main__":
    asyncio.run(main())
```
## Generating CSRs and Self-Signed Roots
`AwsKmsCertService` can build CSRs from exportable key material and mint a self-signed certificate using the same KMS key.
```python
import asyncio
from datetime import datetime, timedelta, timezone
from pathlib import Path

from swarmauri_certservice_aws_kms import AwsKmsCertService
from swarmauri_core.crypto.types import KeyRef


async def bootstrap_ca() -> None:
    service = AwsKmsCertService(region_name="us-east-1")

    # Generate a CSR from a local private key.
    key_ref = KeyRef(material=Path("intermediate-key.pem").read_bytes())
    csr_pem = await service.create_csr(
        key=key_ref,
        subject={"CN": "Example Intermediate CA", "O": "Example Corp"},
        san={"dns": ["intermediate.example.com"]},
    )
    Path("intermediate.csr").write_bytes(csr_pem)

    # Issue a self-signed root using a KMS key.
    kms_key = KeyRef(kid="arn:aws:kms:us-east-1:123456789012:key/root-ca-key")
    root_pem = await service.create_self_signed(
        key=kms_key,
        subject={"CN": "Example Root CA", "O": "Example Corp"},
        not_after=int((datetime.now(timezone.utc) + timedelta(days=3650)).timestamp()),
    )
    Path("root-ca.pem").write_bytes(root_pem)


if __name__ == "__main__":
    asyncio.run(bootstrap_ca())
```
## Best Practices
- Grant the KMS key limited permissions: `kms:GetPublicKey`, `kms:DescribeKey`, `kms:Sign`. Avoid broad grants (e.g., wildcard actions).
- Store KMS key ARNs in `KeyRef.tags["aws_kms_key_id"]` or `KeyRef.kid` for clarity and to avoid hard-coding ARNs throughout application logic.
- Coordinate certificate validity with KMS key rotation—renew certificates before rotating customer-managed keys.
- Cache returned certificates and metadata to minimize repeated calls to KMS and reduce signing latency.
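When ARNs come from configuration or `KeyRef` tags, a small validator catches malformed references before they reach KMS. A stdlib-only sketch (the helper is hypothetical; it assumes the standard `arn:aws:kms:<region>:<account>:key/<id>` shape and rejects alias ARNs):

```python
def parse_kms_arn(arn: str) -> dict:
    """Split a KMS key ARN into its components; raise ValueError on malformed input."""
    parts = arn.split(":", 5)
    if len(parts) != 6 or parts[:3] != ["arn", "aws", "kms"]:
        raise ValueError(f"not a KMS ARN: {arn!r}")
    region, account, resource = parts[3], parts[4], parts[5]
    if not resource.startswith("key/"):
        raise ValueError(f"expected key/<id> resource, got: {resource!r}")
    return {"region": region, "account": account, "key_id": resource[4:]}
```

Validating at load time turns a confusing KMS `NotFoundException` at signing time into an immediate, readable configuration error.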
| text/markdown | Jacob Stewart | jacob@swarmauri.com | null | null | null | swarmauri, certservice, aws, kms, backed | [
"License :: OSI Approved :: Apache Software License",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Natural Language :: English",
"Development Status :: 3 - Alpha",
"Intended Audience :: Developers",
"Topic :: Software De... | [] | null | null | <3.13,>=3.10 | [] | [] | [] | [
"asn1crypto",
"boto3>=1.28.0",
"cryptography",
"pytest-benchmark>=4.0.0; extra == \"perf\"",
"swarmauri_base",
"swarmauri_core",
"swarmauri_standard"
] | [] | [] | [] | [] | uv/0.10.4 {"installer":{"name":"uv","version":"0.10.4","subcommand":["publish"]},"python":null,"implementation":{"name":null,"version":null},"distro":{"name":"Ubuntu","version":"24.04","id":"noble","libc":null},"system":{"name":null,"release":null},"cpu":null,"openssl_version":null,"setuptools_version":null,"rustc_version":null,"ci":true} | 2026-02-18T09:47:21.161677 | swarmauri_certservice_aws_kms-0.3.3.dev5-py3-none-any.whl | 14,394 | ac/53/fe29672375c10164442bde5fcee88f63d0bb1a20a95ba8fb034817d65f86/swarmauri_certservice_aws_kms-0.3.3.dev5-py3-none-any.whl | py3 | bdist_wheel | null | false | 5160b04bd2508bd4173e118b4d7b4f3a | 83b252b46c74f4b0183375ce79e384e220edcfb99c1a62268bac1dbbe150404a | ac53fe29672375c10164442bde5fcee88f63d0bb1a20a95ba8fb034817d65f86 | Apache-2.0 | [
"LICENSE"
] | 0 |
2.4 | sqlmesh | 0.230.2.dev1 | Next-generation data transformation framework | <p align="center">
<img src="docs/readme/sqlmesh.png" alt="SQLMesh logo" width="50%" height="50%">
</p>
SQLMesh is a next-generation data transformation framework designed to ship data quickly, efficiently, and without error. Data teams can run and deploy data transformations written in SQL or Python with visibility and control at any size.
It is more than just a [dbt alternative](https://tobikodata.com/reduce_costs_with_cron_and_partitions.html).
<p align="center">
<img src="docs/readme/architecture_diagram.png" alt="Architecture Diagram" width="100%" height="100%">
</p>
## Core Features
<img src="https://github.com/TobikoData/sqlmesh-public-assets/blob/main/vscode.gif?raw=true" alt="SQLMesh Plan Mode">
> Get instant SQL impact and context of your changes, both in the CLI and in the [SQLMesh VSCode Extension](https://sqlmesh.readthedocs.io/en/latest/guides/vscode/?h=vs+cod)
<details>
<summary><b>Virtual Data Environments</b></summary>
* See a full diagram of how [Virtual Data Environments](https://whimsical.com/virtual-data-environments-MCT8ngSxFHict4wiL48ymz) work
* [Watch this video to learn more](https://www.youtube.com/watch?v=weJH3eM0rzc)
</details>
* Create isolated development environments without data warehouse costs
* Plan / Apply workflow like [Terraform](https://www.terraform.io/) to understand potential impact of changes
* Easy to use [CI/CD bot](https://sqlmesh.readthedocs.io/en/stable/integrations/github/) for true blue-green deployments
<details>
<summary><b>Efficiency and Testing</b></summary>
Running this command will generate a unit test file, `test_stg_payments.yaml`, in the `tests/` folder. It runs a live query to capture the expected output of the model:
```bash
sqlmesh create_test tcloud_demo.stg_payments --query tcloud_demo.seed_raw_payments "select * from tcloud_demo.seed_raw_payments limit 5"
# run the unit test
sqlmesh test
```
```sql
MODEL (
name tcloud_demo.stg_payments,
cron '@daily',
grain payment_id,
audits (UNIQUE_VALUES(columns = (
payment_id
)), NOT_NULL(columns = (
payment_id
)))
);
SELECT
id AS payment_id,
order_id,
payment_method,
amount / 100 AS amount, /* `amount` is currently stored in cents, so we convert it to dollars */
'new_column' AS new_column, /* non-breaking change example */
FROM tcloud_demo.seed_raw_payments
```
```yaml
test_stg_payments:
model: tcloud_demo.stg_payments
inputs:
tcloud_demo.seed_raw_payments:
- id: 66
order_id: 58
payment_method: coupon
amount: 1800
- id: 27
order_id: 24
payment_method: coupon
amount: 2600
- id: 30
order_id: 25
payment_method: coupon
amount: 1600
- id: 109
order_id: 95
payment_method: coupon
amount: 2400
- id: 3
order_id: 3
payment_method: coupon
amount: 100
outputs:
query:
- payment_id: 66
order_id: 58
payment_method: coupon
amount: 18.0
new_column: new_column
- payment_id: 27
order_id: 24
payment_method: coupon
amount: 26.0
new_column: new_column
- payment_id: 30
order_id: 25
payment_method: coupon
amount: 16.0
new_column: new_column
- payment_id: 109
order_id: 95
payment_method: coupon
amount: 24.0
new_column: new_column
- payment_id: 3
order_id: 3
payment_method: coupon
amount: 1.0
new_column: new_column
```
</details>
* Never build a table [more than once](https://tobikodata.com/simplicity-or-efficiency-how-dbt-makes-you-choose.html)
* Track what data’s been modified and run only the necessary transformations for [incremental models](https://tobikodata.com/correctly-loading-incremental-data-at-scale.html)
* Run [unit tests](https://tobikodata.com/we-need-even-greater-expectations.html) for free and configure automated audits
* Run [table diffs](https://sqlmesh.readthedocs.io/en/stable/examples/sqlmesh_cli_crash_course/?h=crash#run-data-diff-against-prod) between prod and dev based on tables/views impacted by a change
<details>
<summary><b>Level Up Your SQL</b></summary>
Write SQL in any dialect and SQLMesh will transpile it to your target SQL dialect on the fly before sending it to the warehouse.
<img src="https://github.com/TobikoData/sqlmesh/blob/main/docs/readme/transpile_example.png?raw=true" alt="Transpile Example">
</details>
* Debug transformation errors *before* you run them in your warehouse in [10+ different SQL dialects](https://sqlmesh.readthedocs.io/en/stable/integrations/overview/#execution-engines)
* Define models using [simple SQL](https://sqlmesh.readthedocs.io/en/stable/concepts/models/sql_models/#sql-based-definition) (no need for redundant and confusing `Jinja` + `YAML`)
* See impact of changes before you run them in your warehouse with column-level lineage
For more information, check out the [website](https://www.tobikodata.com/sqlmesh) and [documentation](https://sqlmesh.readthedocs.io/en/stable/).
## Getting Started
Install SQLMesh through [pypi](https://pypi.org/project/sqlmesh/) by running:
<details>
<summary><b>Mac / Linux Installation</b></summary>
```bash
mkdir sqlmesh-example
cd sqlmesh-example
python -m venv .venv
source .venv/bin/activate
pip install 'sqlmesh[lsp]' # install the sqlmesh package with extensions to work with VSCode
source .venv/bin/activate # reactivate the venv to ensure you're using the right installation
sqlmesh init # follow the prompts to get started (choose DuckDB)
```
</details>
> Note: You may need to run `python3` or `pip3` instead of `python` or `pip`, depending on your Python installation.
<details>
<summary><b>Windows Installation</b></summary>
```bash
mkdir sqlmesh-example
cd sqlmesh-example
python -m venv .venv
.\.venv\Scripts\Activate.ps1
pip install 'sqlmesh[lsp]' # install the sqlmesh package with extensions to work with VSCode
.\.venv\Scripts\Activate.ps1 # reactivate the venv to ensure you're using the right installation
sqlmesh init # follow the prompts to get started (choose DuckDB)
```
</details>
Follow the [quickstart guide](https://sqlmesh.readthedocs.io/en/stable/quickstart/cli/) to learn how to use SQLMesh. You already have a head start!
Follow the [crash course](https://sqlmesh.readthedocs.io/en/stable/examples/sqlmesh_cli_crash_course/) to learn the core movesets and use the easy-to-reference cheat sheet.
Follow this [example](https://sqlmesh.readthedocs.io/en/stable/examples/incremental_time_full_walkthrough/) to learn how to use SQLMesh in a full walkthrough.
## Join Our Community
Together, we want to build data transformation without the waste. Connect with us in the following ways:
* Join the [Tobiko Slack Community](https://tobikodata.com/slack) to ask questions, or just to say hi!
* File an issue on our [GitHub](https://github.com/TobikoData/sqlmesh/issues/new)
* Send us an email at [hello@tobikodata.com](mailto:hello@tobikodata.com) with your questions or feedback
* Read our [blog](https://tobikodata.com/blog)
## Contribution
Contributions in the form of issues or pull requests (from forks) are greatly appreciated.
[Read more](https://sqlmesh.readthedocs.io/en/stable/development/) on how to contribute to SQLMesh open source.
[Watch this video walkthrough](https://www.loom.com/share/2abd0d661c12459693fa155490633126?sid=b65c1c0f-8ef7-4036-ad19-3f85a3b87ff2) to see how our team contributes a feature to SQLMesh.
| text/markdown | null | "TobikoData Inc." <engineering@tobikodata.com> | null | null | Apache License
Version 2.0, January 2004
http://www.apache.org/licenses/
TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
1. Definitions.
"License" shall mean the terms and conditions for use, reproduction,
and distribution as defined by Sections 1 through 9 of this document.
"Licensor" shall mean the copyright owner or entity authorized by
the copyright owner that is granting the License.
"Legal Entity" shall mean the union of the acting entity and all
other entities that control, are controlled by, or are under common
control with that entity. For the purposes of this definition,
"control" means (i) the power, direct or indirect, to cause the
direction or management of such entity, whether by contract or
otherwise, or (ii) ownership of fifty percent (50%) or more of the
outstanding shares, or (iii) beneficial ownership of such entity.
"You" (or "Your") shall mean an individual or Legal Entity
exercising permissions granted by this License.
"Source" form shall mean the preferred form for making modifications,
including but not limited to software source code, documentation
source, and configuration files.
"Object" form shall mean any form resulting from mechanical
transformation or translation of a Source form, including but
not limited to compiled object code, generated documentation,
and conversions to other media types.
"Work" shall mean the work of authorship, whether in Source or
Object form, made available under the License, as indicated by a
copyright notice that is included in or attached to the work
(an example is provided in the Appendix below).
"Derivative Works" shall mean any work, whether in Source or Object
form, that is based on (or derived from) the Work and for which the
editorial revisions, annotations, elaborations, or other modifications
represent, as a whole, an original work of authorship. For the purposes
of this License, Derivative Works shall not include works that remain
separable from, or merely link (or bind by name) to the interfaces of,
the Work and Derivative Works thereof.
"Contribution" shall mean any work of authorship, including
the original version of the Work and any modifications or additions
to that Work or Derivative Works thereof, that is intentionally
submitted to Licensor for inclusion in the Work by the copyright owner
or by an individual or Legal Entity authorized to submit on behalf of
the copyright owner. For the purposes of this definition, "submitted"
means any form of electronic, verbal, or written communication sent
to the Licensor or its representatives, including but not limited to
communication on electronic mailing lists, source code control systems,
and issue tracking systems that are managed by, or on behalf of, the
Licensor for the purpose of discussing and improving the Work, but
excluding communication that is conspicuously marked or otherwise
designated in writing by the copyright owner as "Not a Contribution."
"Contributor" shall mean Licensor and any individual or Legal Entity
on behalf of whom a Contribution has been received by Licensor and
subsequently incorporated within the Work.
2. Grant of Copyright License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
copyright license to reproduce, prepare Derivative Works of,
publicly display, publicly perform, sublicense, and distribute the
Work and such Derivative Works in Source or Object form.
3. Grant of Patent License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
(except as stated in this section) patent license to make, have made,
use, offer to sell, sell, import, and otherwise transfer the Work,
where such license applies only to those patent claims licensable
by such Contributor that are necessarily infringed by their
Contribution(s) alone or by combination of their Contribution(s)
with the Work to which such Contribution(s) was submitted. If You
institute patent litigation against any entity (including a
cross-claim or counterclaim in a lawsuit) alleging that the Work
or a Contribution incorporated within the Work constitutes direct
or contributory patent infringement, then any patent licenses
granted to You under this License for that Work shall terminate
as of the date such litigation is filed.
4. Redistribution. You may reproduce and distribute copies of the
Work or Derivative Works thereof in any medium, with or without
modifications, and in Source or Object form, provided that You
meet the following conditions:
(a) You must give any other recipients of the Work or
Derivative Works a copy of this License; and
(b) You must cause any modified files to carry prominent notices
stating that You changed the files; and
(c) You must retain, in the Source form of any Derivative Works
that You distribute, all copyright, patent, trademark, and
attribution notices from the Source form of the Work,
excluding those notices that do not pertain to any part of
the Derivative Works; and
(d) If the Work includes a "NOTICE" text file as part of its
distribution, then any Derivative Works that You distribute must
include a readable copy of the attribution notices contained
within such NOTICE file, excluding those notices that do not
pertain to any part of the Derivative Works, in at least one
of the following places: within a NOTICE text file distributed
as part of the Derivative Works; within the Source form or
documentation, if provided along with the Derivative Works; or,
within a display generated by the Derivative Works, if and
wherever such third-party notices normally appear. The contents
of the NOTICE file are for informational purposes only and
do not modify the License. You may add Your own attribution
notices within Derivative Works that You distribute, alongside
or as an addendum to the NOTICE text from the Work, provided
that such additional attribution notices cannot be construed
as modifying the License.
You may add Your own copyright statement to Your modifications and
may provide additional or different license terms and conditions
for use, reproduction, or distribution of Your modifications, or
for any such Derivative Works as a whole, provided Your use,
reproduction, and distribution of the Work otherwise complies with
the conditions stated in this License.
5. Submission of Contributions. Unless You explicitly state otherwise,
any Contribution intentionally submitted for inclusion in the Work
by You to the Licensor shall be under the terms and conditions of
this License, without any additional terms or conditions.
Notwithstanding the above, nothing herein shall supersede or modify
the terms of any separate license agreement you may have executed
with Licensor regarding such Contributions.
6. Trademarks. This License does not grant permission to use the trade
names, trademarks, service marks, or product names of the Licensor,
except as required for reasonable and customary use in describing the
origin of the Work and reproducing the content of the NOTICE file.
7. Disclaimer of Warranty. Unless required by applicable law or
agreed to in writing, Licensor provides the Work (and each
Contributor provides its Contributions) on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
implied, including, without limitation, any warranties or conditions
of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
PARTICULAR PURPOSE. You are solely responsible for determining the
appropriateness of using or redistributing the Work and assume any
risks associated with Your exercise of permissions under this License.
8. Limitation of Liability. In no event and under no legal theory,
whether in tort (including negligence), contract, or otherwise,
unless required by applicable law (such as deliberate and grossly
negligent acts) or agreed to in writing, shall any Contributor be
liable to You for damages, including any direct, indirect, special,
incidental, or consequential damages of any character arising as a
result of this License or out of the use or inability to use the
Work (including but not limited to damages for loss of goodwill,
work stoppage, computer failure or malfunction, or any and all
other commercial damages or losses), even if such Contributor
has been advised of the possibility of such damages.
9. Accepting Warranty or Additional Liability. While redistributing
the Work or Derivative Works thereof, You may choose to offer,
and charge a fee for, acceptance of support, warranty, indemnity,
or other liability obligations and/or rights consistent with this
License. However, in accepting such obligations, You may act only
on Your own behalf and on Your sole responsibility, not on behalf
of any other Contributor, and only if You agree to indemnify,
defend, and hold each Contributor harmless for any liability
incurred by, or claims asserted against, such Contributor by reason
of your accepting any such warranty or additional liability.
END OF TERMS AND CONDITIONS
APPENDIX: How to apply the Apache License to your work.
To apply the Apache License to your work, attach the following
boilerplate notice, with the fields enclosed by brackets "[]"
replaced with your own identifying information. (Don't include
the brackets!) The text should be enclosed in the appropriate
comment syntax for the file format. We also recommend that a
file or class name and description of purpose be included on the
same "printed page" as the copyright notice for easier
identification within third-party archives.
Copyright 2024 Tobiko Data Inc.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
| null | [
"Intended Audience :: Developers",
"Intended Audience :: Science/Research",
"License :: OSI Approved :: Apache Software License",
"Operating System :: OS Independent",
"Programming Language :: SQL",
"Programming Language :: Python :: 3 :: Only"
] | [] | null | null | >=3.9 | [] | [] | [] | [
"astor",
"click",
"croniter",
"duckdb!=0.10.3,>=0.10.0",
"dateparser<=1.2.1",
"humanize",
"hyperscript>=0.1.0",
"importlib-metadata; python_version < \"3.12\"",
"ipywidgets",
"jinja2",
"packaging",
"pandas<3.0.0",
"pydantic>=2.0.0",
"python-dotenv",
"requests",
"rich[jupyter]",
"ruam... | [] | [] | [] | [
"Homepage, https://sqlmesh.com/",
"Documentation, https://sqlmesh.readthedocs.io/en/stable/",
"Repository, https://github.com/TobikoData/sqlmesh",
"Issues, https://github.com/TobikoData/sqlmesh/issues"
] | twine/6.2.0 CPython/3.10.19 | 2026-02-18T09:47:20.818197 | sqlmesh-0.230.2.dev1.tar.gz | 13,309,461 | 4b/9c/b734008f71f6e95d43c246119ff40723d517f36e20afcd00d91088f03c89/sqlmesh-0.230.2.dev1.tar.gz | source | sdist | null | false | 4b1347f41c5b4d1b9ae8f4674ba98cab | e3b89c4736f0cd44822d82e0a96b0358f4b7eea1eaed94eea3f87b34c3401122 | 4b9cb734008f71f6e95d43c246119ff40723d517f36e20afcd00d91088f03c89 | null | [
"LICENSE"
] | 183 |
2.4 | swarmauri_certs_ocspverify | 0.9.3.dev5 | OCSP certificate verification service for Swarmauri. | 
<p align="center">
<a href="https://pypi.org/project/swarmauri_certs_ocspverify/">
<img src="https://img.shields.io/pypi/dm/swarmauri_certs_ocspverify" alt="PyPI - Downloads"/></a>
<a href="https://hits.sh/github.com/swarmauri/swarmauri-sdk/tree/master/pkgs/community/swarmauri_certs_ocspverify/">
<img alt="Hits" src="https://hits.sh/github.com/swarmauri/swarmauri-sdk/tree/master/pkgs/community/swarmauri_certs_ocspverify.svg"/></a>
<a href="https://pypi.org/project/swarmauri_certs_ocspverify/">
<img src="https://img.shields.io/pypi/pyversions/swarmauri_certs_ocspverify" alt="PyPI - Python Version"/></a>
<a href="https://pypi.org/project/swarmauri_certs_ocspverify/">
<img src="https://img.shields.io/pypi/l/swarmauri_certs_ocspverify" alt="PyPI - License"/></a>
<a href="https://pypi.org/project/swarmauri_certs_ocspverify/">
<img src="https://img.shields.io/pypi/v/swarmauri_certs_ocspverify?label=swarmauri_certs_ocspverify&color=green" alt="PyPI - swarmauri_certs_ocspverify"/></a>
</p>
---
# swarmauri_certs_ocspverify
OCSP-based certificate verification service for the Swarmauri SDK.
This package provides an implementation of an `ICertService` that checks
certificate revocation status using the Online Certificate Status Protocol
(OCSP) defined in [RFC 6960](https://www.rfc-editor.org/rfc/rfc6960) while
remaining compatible with X.509 certificate guidelines from
[RFC 5280](https://www.rfc-editor.org/rfc/rfc5280).
## Features
- Parse PEM certificates to extract subject, issuer and OCSP responder URLs.
- Verify certificate status via OCSP responders advertised in the certificate's
Authority Information Access extension.
## Prerequisites
- Python 3.10 or newer.
- Leaf certificate PEM to inspect and validate.
- Issuer (intermediate) certificate PEM required to build the OCSP request.
- Network access to the OCSP responder URLs exposed in the certificate's Authority Information Access extension.
- Optional: trust root bundle if performing additional validation on issuer metadata alongside OCSP results.
## Installation
```bash
# pip
pip install swarmauri_certs_ocspverify
# poetry
poetry add swarmauri_certs_ocspverify
# uv (pyproject-based projects)
uv add swarmauri_certs_ocspverify
```
## Usage
Perform an OCSP status check for a leaf certificate using its issuer certificate:
```python
import asyncio
from pathlib import Path
from swarmauri_certs_ocspverify import OcspVerifyService
async def main() -> None:
service = OcspVerifyService()
leaf_cert = Path("leaf.pem").read_bytes()
issuer_cert = Path("issuer.pem").read_bytes()
verification = await service.verify_cert(
cert=leaf_cert,
intermediates=[issuer_cert],
check_revocation=True,
)
if verification["valid"]:
print("Certificate status: GOOD")
else:
print("Certificate status:", verification["reason"])
print("Next update:", verification.get("next_update"))
if __name__ == "__main__":
asyncio.run(main())
```
## Parsing OCSP Metadata
Use `parse_cert` to confirm which OCSP responder URLs are embedded and to inspect the validity window:
```python
import asyncio
from pathlib import Path
from swarmauri_certs_ocspverify import OcspVerifyService
async def describe() -> None:
service = OcspVerifyService()
leaf_cert = Path("leaf.pem").read_bytes()
metadata = await service.parse_cert(leaf_cert)
print("Subject:", metadata["subject"])
print("Issuer:", metadata["issuer"])
print("OCSP URLs:", metadata.get("ocsp_urls", []))
if __name__ == "__main__":
asyncio.run(describe())
```
## Best Practices
- Cache issuer certificates alongside leaf certificates so OCSP requests can be constructed quickly.
- Respect OCSP responder rate limits; consider backoff and caching GOOD responses until `next_update`.
- Combine OCSP checks with CRL fallbacks for authorities that support multiple revocation mechanisms.
- Log `reason` and timestamp fields from the verification output to aid in incident response and compliance reporting.
| text/markdown | Jacob Stewart | jacob@swarmauri.com | null | null | null | swarmauri, certs, ocspverify, ocsp, certificate, verification, service | [
"License :: OSI Approved :: Apache Software License",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Natural Language :: English",
"Development Status :: 3 - Alpha",
"Intended Audience :: Developers",
"Topic :: Software De... | [] | null | null | <3.13,>=3.10 | [] | [] | [] | [
"cryptography>=41.0.0",
"httpx>=0.27.0",
"pytest>=8.0; extra == \"dev\"",
"pytest-asyncio>=0.24.0; extra == \"dev\"",
"swarmauri_base",
"swarmauri_core",
"swarmauri_standard"
] | [] | [] | [] | [] | uv/0.10.4 {"installer":{"name":"uv","version":"0.10.4","subcommand":["publish"]},"python":null,"implementation":{"name":null,"version":null},"distro":{"name":"Ubuntu","version":"24.04","id":"noble","libc":null},"system":{"name":null,"release":null},"cpu":null,"openssl_version":null,"setuptools_version":null,"rustc_version":null,"ci":true} | 2026-02-18T09:47:16.661727 | swarmauri_certs_ocspverify-0.9.3.dev5-py3-none-any.whl | 9,346 | 84/8d/d3ab052e4975aec9deaf8f239fdb82c0d2b1ff122009a8caa28de732530d/swarmauri_certs_ocspverify-0.9.3.dev5-py3-none-any.whl | py3 | bdist_wheel | null | false | 3333b0138f4d197ad703e53829f0d5cb | 3c4c8a18420de98a41a99e97225e11f489bdd5b371d7f98bbb92caa34f4e88a2 | 848dd3ab052e4975aec9deaf8f239fdb82c0d2b1ff122009a8caa28de732530d | Apache-2.0 | [
"LICENSE"
] | 0 |
2.4 | swarmauri_certs_csronly | 0.8.3.dev5 | Community service for generating PKCS#10 CSRs | 
<p align="center">
<a href="https://pypi.org/project/swarmauri_certs_csronly/">
<img src="https://img.shields.io/pypi/dm/swarmauri_certs_csronly" alt="PyPI - Downloads"/></a>
<a href="https://hits.sh/github.com/swarmauri/swarmauri-sdk/tree/master/pkgs/community/swarmauri_certs_csronly/">
<img alt="Hits" src="https://hits.sh/github.com/swarmauri/swarmauri-sdk/tree/master/pkgs/community/swarmauri_certs_csronly.svg"/></a>
<a href="https://pypi.org/project/swarmauri_certs_csronly/">
<img src="https://img.shields.io/pypi/pyversions/swarmauri_certs_csronly" alt="PyPI - Python Version"/></a>
<a href="https://pypi.org/project/swarmauri_certs_csronly/">
<img src="https://img.shields.io/pypi/l/swarmauri_certs_csronly" alt="PyPI - License"/></a>
<a href="https://pypi.org/project/swarmauri_certs_csronly/">
<img src="https://img.shields.io/pypi/v/swarmauri_certs_csronly?label=swarmauri_certs_csronly&color=green" alt="PyPI - swarmauri_certs_csronly"/></a>
</p>
---
# swarmauri_certs_csronly
A community-provided certificate service that builds PKCS#10 Certificate Signing Requests (CSRs).
## Features
- `CsrOnlyService` focused exclusively on generating standards-compliant PKCS#10 CSRs (RFC 2986).
- Supports RSA (2048/3072/4096), ECDSA (P-256), and Ed25519 private keys.
- Adds subject alternative names, challenge passwords, and basic constraints when needed.
- Designed to interoperate with other Swarmauri certificate services that handle issuance/verification.
## Prerequisites
- Python 3.10 or newer.
- PEM-encoded private key material available locally or via a `KeyRef` provider.
- Subject metadata (CN, O, OU, etc.) for the entity requesting a certificate.
- Optional: SAN entries, basic constraints, and challenge passwords when integrating with stricter PKI workflows.
## Installation
```bash
# pip
pip install swarmauri_certs_csronly
# poetry
poetry add swarmauri_certs_csronly
# uv (pyproject-based projects)
uv add swarmauri_certs_csronly
```
## Usage
Generate a CSR for `example.com` with SAN entries using an existing private key:
```python
import asyncio
from pathlib import Path
from swarmauri_certs_csronly import CsrOnlyService
from swarmauri_core.crypto.types import KeyRef
async def main() -> None:
key_ref = KeyRef(material=Path("example-key.pem").read_bytes())
service = CsrOnlyService()
csr_pem = await service.create_csr(
key=key_ref,
subject={"CN": "example.com", "O": "Example Inc"},
san={"dns": ["example.com", "www.example.com"]},
)
Path("example.csr").write_bytes(csr_pem)
print("CSR written to example.csr")
if __name__ == "__main__":
asyncio.run(main())
```
## Advanced CSR Options
Fine-tune extensions and output encoding for specialized PKI workflows:
```python
import asyncio
from pathlib import Path
from swarmauri_certs_csronly import CsrOnlyService
from swarmauri_core.crypto.types import KeyRef
async def build_der_csr() -> None:
key_ref = KeyRef(material=Path("root-ca-key.pem").read_bytes())
service = CsrOnlyService()
csr_der = await service.create_csr(
key=key_ref,
subject={"CN": "Example Root CA"},
extensions={"basic_constraints": {"ca": True, "path_len": 0}},
challenge_password="p@ssw0rd",
output_der=True,
)
Path("root-ca.csr.der").write_bytes(csr_der)
print("DER CSR saved to root-ca.csr.der")
if __name__ == "__main__":
asyncio.run(build_der_csr())
```
## Best Practices
- Generate new key pairs and CSRs ahead of certificate expiry to allow review and approval time.
- Store private keys securely—`KeyRef` can reference hardware or cloud KMS-backed material rather than local files.
- Keep SAN lists minimal and auditable to avoid issuing overly permissive certificates.
- Pair this service with a signing backend (e.g., CFSSL, ACME, Azure Key Vault) to form a complete issuance pipeline.
| text/markdown | Jacob Stewart | jacob@swarmauri.com | null | null | null | swarmauri, certs, csronly, community, service, generating, pkcs, csrs | [
"License :: OSI Approved :: Apache Software License",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Natural Language :: English",
"Development Status :: 3 - Alpha",
"Intended Audience :: Developers",
"Topic :: Software De... | [] | null | null | <3.13,>=3.10 | [] | [] | [] | [
"cryptography>=41",
"pytest>=8.0; extra == \"tests\"",
"pytest-asyncio>=0.24.0; extra == \"tests\"",
"pytest-benchmark>=4.0.0; extra == \"tests\"",
"pytest-json-report>=1.5.0; extra == \"tests\"",
"pytest-timeout>=2.3.1; extra == \"tests\"",
"pytest-xdist>=3.6.1; extra == \"tests\"",
"ruff>=0.9.9; ext... | [] | [] | [] | [] | uv/0.10.4 {"installer":{"name":"uv","version":"0.10.4","subcommand":["publish"]},"python":null,"implementation":{"name":null,"version":null},"distro":{"name":"Ubuntu","version":"24.04","id":"noble","libc":null},"system":{"name":null,"release":null},"cpu":null,"openssl_version":null,"setuptools_version":null,"rustc_version":null,"ci":true} | 2026-02-18T09:47:10.021580 | swarmauri_certs_csronly-0.8.3.dev5.tar.gz | 8,571 | 4a/7d/31608efc02b76a3f587f32cd8925b0fc1d19c4cc9ab9670241e9eb05ab3b/swarmauri_certs_csronly-0.8.3.dev5.tar.gz | source | sdist | null | false | e80faa0014041f4b72683e25367878f1 | 47c728c518b630b69f7c594078db01b5cff572a2d2b7cb783a6d5bfa21b8e2c4 | 4a7d31608efc02b76a3f587f32cd8925b0fc1d19c4cc9ab9670241e9eb05ab3b | Apache-2.0 | [
"LICENSE"
] | 0 |
2.4 | swarmauri_certs_crlverifyservice | 0.1.3.dev5 | Certificate verification against CRLs | 
<p align="center">
<a href="https://pypi.org/project/swarmauri_certs_crlverifyservice/">
<img src="https://img.shields.io/pypi/dm/swarmauri_certs_crlverifyservice" alt="PyPI - Downloads"/></a>
<a href="https://hits.sh/github.com/swarmauri/swarmauri-sdk/tree/master/pkgs/community/swarmauri_certs_crlverifyservice/">
<img alt="Hits" src="https://hits.sh/github.com/swarmauri/swarmauri-sdk/tree/master/pkgs/community/swarmauri_certs_crlverifyservice.svg"/></a>
<a href="https://pypi.org/project/swarmauri_certs_crlverifyservice/">
<img src="https://img.shields.io/pypi/pyversions/swarmauri_certs_crlverifyservice" alt="PyPI - Python Version"/></a>
<a href="https://pypi.org/project/swarmauri_certs_crlverifyservice/">
<img src="https://img.shields.io/pypi/l/swarmauri_certs_crlverifyservice" alt="PyPI - License"/></a>
<a href="https://pypi.org/project/swarmauri_certs_crlverifyservice/">
<img src="https://img.shields.io/pypi/v/swarmauri_certs_crlverifyservice?label=swarmauri_certs_crlverifyservice&color=green" alt="PyPI - swarmauri_certs_crlverifyservice"/></a>
</p>
---
# swarmauri_certs_crlverifyservice
CRL-based certificate verification service for the Swarmauri SDK.
This package implements an `ICertService` that checks X.509 certificates
against Certificate Revocation Lists as described in
[RFC 5280](https://www.rfc-editor.org/rfc/rfc5280). It validates the
certificate's validity period, issuer, and revocation status.
## Features
- `CrlVerifyService` adapter dedicated to revocation-aware verification and parsing.
- Accepts PEM or DER certificates/CRLs and normalizes them with `cryptography`.
- Returns structured validity metadata, revocation flags, issuers, and extension details.
- Focuses purely on verification; CSR and signing flows stay delegated to other Swarmauri services.
## Prerequisites
- Python 3.10 or newer.
- Access to up-to-date CRLs for the certificate authorities you care about.
- Certificates and CRLs stored in PEM (Base64) or DER; the service can decode either.
- Optional: trusted root/intermediate certificates if you plan to record issuer context alongside revocation checks.
## Installation
```bash
# pip
pip install swarmauri_certs_crlverifyservice
# poetry
poetry add swarmauri_certs_crlverifyservice
# uv (pyproject-based projects)
uv add swarmauri_certs_crlverifyservice
```
## Quickstart: Revocation Check
Load a certificate and its corresponding CRL, then validate the revocation status and validity window:
```python
import asyncio
from pathlib import Path
from swarmauri_certs_crlverifyservice import CrlVerifyService
async def main() -> None:
service = CrlVerifyService()
cert_bytes = Path("leaf.pem").read_bytes()
crl_bytes = Path("issuer.crl").read_bytes()
verification = await service.verify_cert(
cert=cert_bytes,
crls=[crl_bytes],
check_revocation=True,
)
if verification["valid"]:
print("Certificate is valid.")
elif verification.get("revoked"):
print("Certificate was revoked:", verification["reason"])
else:
print("Certificate failed validation:", verification["reason"])
if __name__ == "__main__":
asyncio.run(main())
```
## Parsing Metadata
Use `parse_cert` to surface fields needed for logging, auditing, or dashboards:
```python
import asyncio
from pathlib import Path

from swarmauri_certs_crlverifyservice import CrlVerifyService

async def describe() -> None:
    service = CrlVerifyService()
    cert_bytes = Path("leaf.pem").read_bytes()
    metadata = await service.parse_cert(cert_bytes)
    print("Subject:", metadata["subject"])
    print("Valid until:", metadata["not_after"])
    print("Key usage:", metadata.get("key_usage"))

if __name__ == "__main__":
    asyncio.run(describe())
```
## Best Practices
- Refresh CRLs frequently; RFC 5280 `nextUpdate` dictates how long a CRL should be considered valid.
- Combine this service with Swarmauri signing services to perform a full lifecycle check (issue → deploy → monitor).
- Cache CRLs in memory or a fast datastore to avoid repeatedly downloading them when calling `verify_cert`.
- Log verification outputs (especially `reason` and `revoked`) to your observability pipeline to catch trust issues early.
| text/markdown | Jacob Stewart | jacob@swarmauri.com | null | null | null | swarmauri, certs, crlverifyservice, certificate, verification, against, crls | [
"License :: OSI Approved :: Apache Software License",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Natural Language :: English",
"Development Status :: 3 - Alpha",
"Intended Audience :: Developers",
"Topic :: Software De... | [] | null | null | <3.13,>=3.10 | [] | [] | [] | [
"cryptography>=41.0.0",
"swarmauri-base"
] | [] | [] | [] | [] | uv/0.10.4 {"installer":{"name":"uv","version":"0.10.4","subcommand":["publish"]},"python":null,"implementation":{"name":null,"version":null},"distro":{"name":"Ubuntu","version":"24.04","id":"noble","libc":null},"system":{"name":null,"release":null},"cpu":null,"openssl_version":null,"setuptools_version":null,"rustc_version":null,"ci":true} | 2026-02-18T09:47:05.880662 | swarmauri_certs_crlverifyservice-0.1.3.dev5-py3-none-any.whl | 9,905 | 67/88/6ae6e97e165907f31c5a4ab8d5c2df3be651a86d45ce120dcdb6bded8449/swarmauri_certs_crlverifyservice-0.1.3.dev5-py3-none-any.whl | py3 | bdist_wheel | null | false | d4b74b172bf092d6c867b57cbd4e409e | 3a59998b7aead127b7e9643ee86781c667d8a5ca7d05e9ff7f662f3b00c6919c | 67886ae6e97e165907f31c5a4ab8d5c2df3be651a86d45ce120dcdb6bded8449 | Apache-2.0 | [
"LICENSE"
] | 0 |
2.4 | swarmauri_certs_cfssl | 0.2.3.dev5 | CFSSL-backed certificate service for Swarmauri |

<p align="center">
<a href="https://pypi.org/project/swarmauri_certs_cfssl/">
<img src="https://img.shields.io/pypi/dm/swarmauri_certs_cfssl" alt="PyPI - Downloads"/></a>
<a href="https://hits.sh/github.com/swarmauri/swarmauri-sdk/tree/master/pkgs/community/swarmauri_certs_cfssl/">
<img alt="Hits" src="https://hits.sh/github.com/swarmauri/swarmauri-sdk/tree/master/pkgs/community/swarmauri_certs_cfssl.svg"/></a>
<a href="https://pypi.org/project/swarmauri_certs_cfssl/">
<img src="https://img.shields.io/pypi/pyversions/swarmauri_certs_cfssl" alt="PyPI - Python Version"/></a>
<a href="https://pypi.org/project/swarmauri_certs_cfssl/">
<img src="https://img.shields.io/pypi/l/swarmauri_certs_cfssl" alt="PyPI - License"/></a>
<a href="https://pypi.org/project/swarmauri_certs_cfssl/">
<img src="https://img.shields.io/pypi/v/swarmauri_certs_cfssl?label=swarmauri_certs_cfssl&color=green" alt="PyPI - swarmauri_certs_cfssl"/></a>
</p>
---
# Swarmauri Cert Cfssl
CFSSL-backed certificate service for Swarmauri.
## Features
- `CfsslCertService` adapter that wraps the CFSSL REST API for signing, parsing, and verifying certificates.
- Supports RSA, ECDSA (P-256/P-384), and Ed25519 key material with profile/label routing.
- Optional certificate bundling during verification to ensure complete chains before deployment.
- Detailed parsing utilities that expose SANs, key usage, EKU, Subject/Authority Key Identifiers, and more.
## Prerequisites
- Python 3.10 or newer.
- A reachable CFSSL instance (standalone binary, Kubernetes deployment, or the Cloudflare Docker image).
- Valid CFSSL signing profile(s) configured for your use case (e.g., `www`, `client`, `code_signing`).
- If your CFSSL endpoint is protected, API credentials or access tokens for the headers you plan to use.
## Installation
```bash
# pip
pip install swarmauri_certs_cfssl
# poetry
poetry add swarmauri_certs_cfssl
# uv (pyproject-based projects)
uv add swarmauri_certs_cfssl
```
## Quickstart: Issue a Certificate
`CfsslCertService` consumes CSRs generated by other Swarmauri certificate services (for example, the Azure or ACME packages). The example below submits a CSR to CFSSL and saves the issued certificate:
```python
import asyncio
from datetime import datetime, timedelta, timezone
from pathlib import Path

from swarmauri_certs_cfssl import CfsslCertService
from swarmauri_core.crypto.types import KeyRef

async def main() -> None:
    service = CfsslCertService(
        base_url="https://cfssl.internal",
        default_profile="www",
        timeout_s=15.0,
        auth_header=("X-Auth-Key", "super-secret-token"),
    )
    csr_bytes = Path("site.csr").read_bytes()

    # KeyRef tags allow you to override the CFSSL profile/label per request
    ca_key = KeyRef(material=b"", tags={"profile": "www", "label": "primary"})

    certificate_pem = await service.sign_cert(
        csr=csr_bytes,
        ca_key=ca_key,
        extensions={
            "subject_alt_name": {"dns": ["site.example.com", "www.site.example.com"]}
        },
        not_after=int((datetime.now(timezone.utc) + timedelta(days=90)).timestamp()),
    )
    Path("site.pem").write_bytes(certificate_pem)
    await service.aclose()

if __name__ == "__main__":
    asyncio.run(main())
```
## Verify and Parse Certificates
Leverage CFSSL's bundling API to confirm a certificate's trust chain, then inspect the returned metadata:
```python
import asyncio
from pathlib import Path

from swarmauri_certs_cfssl import CfsslCertService

async def verify_and_parse() -> None:
    service = CfsslCertService(
        base_url="https://cfssl.internal",
        use_bundle_for_verify=True,
    )
    cert_bytes = Path("site.pem").read_bytes()

    verification = await service.verify_cert(
        cert=cert_bytes,
        trust_roots=[Path("root.pem").read_bytes()],
    )
    print("Valid:", verification["valid"], "Chain length:", verification["chain_len"])

    parsed = await service.parse_cert(cert_bytes)
    print("Subject CN:", parsed["subject"].get("CN"))
    print("SAN entries:", parsed.get("san", {}))
    await service.aclose()

if __name__ == "__main__":
    asyncio.run(verify_and_parse())
```
## Notes
- `CfsslCertService` focuses on signing and validation. Generate CSRs with other Swarmauri services (e.g., `swarmauri_certs_acme`, `swarmauri_certs_azure`) or your existing PKI tooling.
- The client uses `httpx.AsyncClient`; reuse a service instance for multiple operations and call `aclose()` when finished to release connections.
- Profile and label defaults can be set globally in the constructor or dynamically by attaching tags to the `KeyRef` passed into `sign_cert`.
## Best Practices
- Store CFSSL credentials outside source control (environment variables, secret stores, or Swarmauri state providers).
- Enable TLS on the CFSSL API and pin the certificate when connecting over untrusted networks.
- Use dedicated CFSSL profiles for each application tier and rotate them regularly.
- Capture verification results (e.g., bundle size, expiry) in metrics to stay ahead of certificate renewals.
| text/markdown | Jacob Stewart | jacob@swarmauri.com | null | null | null | swarmauri, certs, cfssl, backed, certificate, service | [
"License :: OSI Approved :: Apache Software License",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Natural Language :: English",
"Development Status :: 3 - Alpha",
"Intended Audience :: Developers",
"Topic :: Software De... | [] | null | null | <3.13,>=3.10 | [] | [] | [] | [
"cryptography",
"httpx",
"pydantic>=2.0",
"pytest-benchmark>=4.0.0; extra == \"perf\"",
"swarmauri_base",
"swarmauri_core",
"swarmauri_standard"
] | [] | [] | [] | [] | uv/0.10.4 {"installer":{"name":"uv","version":"0.10.4","subcommand":["publish"]},"python":null,"implementation":{"name":null,"version":null},"distro":{"name":"Ubuntu","version":"24.04","id":"noble","libc":null},"system":{"name":null,"release":null},"cpu":null,"openssl_version":null,"setuptools_version":null,"rustc_version":null,"ci":true} | 2026-02-18T09:46:57.150617 | swarmauri_certs_cfssl-0.2.3.dev5-py3-none-any.whl | 12,646 | 53/e0/010cf7ff6d464d07d68cf5104a29ab106883340a79d29681f5f7d0ae78b6/swarmauri_certs_cfssl-0.2.3.dev5-py3-none-any.whl | py3 | bdist_wheel | null | false | 7101c4e8fcdcde1e07b883f800f75c41 | c6cce854aee4c9e869345ee6d2c1df7eedcd1964d0dc723d06fc1947029466dc | 53e0010cf7ff6d464d07d68cf5104a29ab106883340a79d29681f5f7d0ae78b6 | Apache-2.0 | [
"LICENSE"
] | 0 |
2.4 | swarmauri_certs_acme | 0.3.3.dev5 | ACME certificate service for Swarmauri | 
<p align="center">
<a href="https://pypi.org/project/swarmauri_certs_acme/">
<img src="https://img.shields.io/pypi/dm/swarmauri_certs_acme" alt="PyPI - Downloads"/></a>
<a href="https://hits.sh/github.com/swarmauri/swarmauri-sdk/tree/master/pkgs/community/swarmauri_certs_acme/">
<img alt="Hits" src="https://hits.sh/github.com/swarmauri/swarmauri-sdk/tree/master/pkgs/community/swarmauri_certs_acme.svg"/></a>
<a href="https://pypi.org/project/swarmauri_certs_acme/">
<img src="https://img.shields.io/pypi/pyversions/swarmauri_certs_acme" alt="PyPI - Python Version"/></a>
<a href="https://pypi.org/project/swarmauri_certs_acme/">
<img src="https://img.shields.io/pypi/l/swarmauri_certs_acme" alt="PyPI - License"/></a>
<a href="https://pypi.org/project/swarmauri_certs_acme/">
<img src="https://img.shields.io/pypi/v/swarmauri_certs_acme?label=swarmauri_certs_acme&color=green" alt="PyPI - swarmauri_certs_acme"/></a>
</p>
---
# Swarmauri ACME Certificate Service
Community plugin providing an ACME (RFC 8555) certificate service built on top of Swarmauri's certificate interfaces.
## Features
- Implements `AcmeCertService`, a drop-in `CertServiceBase` compatible class for Swarmauri workflows.
- Supports ACME directory discovery, order creation, finalization, and full chain retrieval.
- Handles RSA and EC key material while exposing capability metadata through `supports()`.
- Convenience helpers for certificate verification and parsing using `cryptography` primitives.
## Prerequisites
- Python 3.10 or newer.
- Existing ACME account key material (PEM encoded) accessible to your Swarmauri runtime.
- Network access to your chosen ACME directory (defaults to Let's Encrypt production).
- DNS or HTTP challenge automation handled externally; this service focuses on CSR submission and certificate retrieval.
## Installation
```bash
# pip
pip install swarmauri_certs_acme
# poetry
poetry add swarmauri_certs_acme
# uv (pyproject-based projects)
uv add swarmauri_certs_acme
```
## Quickstart
The snippet below submits a CSR to Let's Encrypt using `AcmeCertService` and persists the resulting PEM chain.
```python
import asyncio
from pathlib import Path

from swarmauri_certs_acme import AcmeCertService
from swarmauri_core.crypto.types import KeyRef

async def main() -> None:
    account_key = KeyRef(material=Path("account-key.pem").read_bytes())
    service = AcmeCertService(
        account_key=account_key,
        contact_emails=["admin@example.com"],
    )
    csr_bytes = Path("server.csr").read_bytes()
    certificate_chain = await service.sign_cert(
        csr=csr_bytes,
        ca_key=account_key,  # required by the CertService interface
    )
    Path("server-fullchain.pem").write_bytes(certificate_chain)
    print("Certificate chain written to server-fullchain.pem")

if __name__ == "__main__":
    asyncio.run(main())
```
## CSR Generation Example
`AcmeCertService` can construct a CSR when provided with private key material and subject metadata:
```python
import asyncio
from pathlib import Path

from swarmauri_certs_acme import AcmeCertService
from swarmauri_core.crypto.types import KeyRef

async def build_csr() -> None:
    account_key = KeyRef(material=Path("account-key.pem").read_bytes())
    host_key = KeyRef(material=Path("server-key.pem").read_bytes())
    service = AcmeCertService(account_key=account_key)
    csr_bytes = await service.create_csr(
        key=host_key,
        subject={"CN": "example.com"},
        san={"dns": ["example.com", "www.example.com"]},
    )
    Path("server.csr").write_bytes(csr_bytes)

if __name__ == "__main__":
    asyncio.run(build_csr())
```
## Verification and Parsing
Use the built-in helpers to inspect returned certificates before deployment:
```python
import asyncio
from pathlib import Path

from swarmauri_certs_acme import AcmeCertService
from swarmauri_core.crypto.types import KeyRef

async def inspect() -> None:
    account_key = KeyRef(material=Path("account-key.pem").read_bytes())
    service = AcmeCertService(account_key=account_key)
    pem_chain = Path("server-fullchain.pem").read_bytes()

    info = await service.verify_cert(pem_chain)
    print("Issuer:", info["issuer"])
    print("Valid until:", info["not_after"])

    metadata = await service.parse_cert(pem_chain)
    print(metadata)

if __name__ == "__main__":
    asyncio.run(inspect())
```
## Best Practices
- Rotate account keys periodically and store them in a secure vault (`KeyRef` works with external KMS integrations).
- When using Let's Encrypt production, respect rate limits and consider staging endpoints during development.
- Automate DNS/HTTP challenges upstream; this service assumes the order is ready for finalization once the CSR is submitted.
- Cache successful certificate chains and perform proactive renewals before `not_after` to avoid downtime.
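The proactive-renewal advice can be made concrete with a scheduling helper; `renewal_time` below is a sketch of one common policy (renew 30 days before expiry, or at two thirds of the lifetime, whichever comes first), not part of this package:

```python
from datetime import datetime, timedelta, timezone

def renewal_time(not_before: datetime, not_after: datetime,
                 lead: timedelta = timedelta(days=30)) -> datetime:
    """Renew at the earlier of `lead` before expiry or two thirds into the lifetime."""
    two_thirds = not_before + (not_after - not_before) * 2 / 3
    return min(not_after - lead, two_thirds)
```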
| text/markdown | Jacob Stewart | jacob@swarmauri.com | null | null | null | swarmauri, certs, acme, certificate, service | [
"License :: OSI Approved :: Apache Software License",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Natural Language :: English",
"Development Status :: 3 - Alpha",
"Intended Audience :: Developers",
"Topic :: Software De... | [] | null | null | <3.13,>=3.10 | [] | [] | [] | [
"acme>=5.0.0",
"cryptography",
"pytest-benchmark>=4.0.0; extra == \"perf\"",
"swarmauri_base",
"swarmauri_core",
"swarmauri_standard"
] | [] | [] | [] | [] | uv/0.10.4 {"installer":{"name":"uv","version":"0.10.4","subcommand":["publish"]},"python":null,"implementation":{"name":null,"version":null},"distro":{"name":"Ubuntu","version":"24.04","id":"noble","libc":null},"system":{"name":null,"release":null},"cpu":null,"openssl_version":null,"setuptools_version":null,"rustc_version":null,"ci":true} | 2026-02-18T09:46:55.360451 | swarmauri_certs_acme-0.3.3.dev5-py3-none-any.whl | 9,783 | c0/d6/78b38a0a5518f48eadca06618f1acba85f8e465d27367173fe9196d25506/swarmauri_certs_acme-0.3.3.dev5-py3-none-any.whl | py3 | bdist_wheel | null | false | bdfc0f5ed7ebd3a3fda4abc8778947bd | c1f5d5874c6700f8b7de7a6dedbc21475b71228897c84cb15efcd0937646d46f | c0d678b38a0a5518f48eadca06618f1acba85f8e465d27367173fe9196d25506 | Apache-2.0 | [
"LICENSE"
] | 0 |
2.4 | swarmauri_certs_azure | 0.3.3.dev5 | Azure Key Vault certificate utilities for the Swarmauri ecosystem | 
<p align="center">
<a href="https://pypi.org/project/swarmauri_certs_azure/">
<img src="https://img.shields.io/pypi/dm/swarmauri_certs_azure" alt="PyPI - Downloads"/></a>
<a href="https://hits.sh/github.com/swarmauri/swarmauri-sdk/tree/master/pkgs/community/swarmauri_certs_azure/">
<img alt="Hits" src="https://hits.sh/github.com/swarmauri/swarmauri-sdk/tree/master/pkgs/community/swarmauri_certs_azure.svg"/></a>
<a href="https://pypi.org/project/swarmauri_certs_azure/">
<img src="https://img.shields.io/pypi/pyversions/swarmauri_certs_azure" alt="PyPI - Python Version"/></a>
<a href="https://pypi.org/project/swarmauri_certs_azure/">
<img src="https://img.shields.io/pypi/l/swarmauri_certs_azure" alt="PyPI - License"/></a>
<a href="https://pypi.org/project/swarmauri_certs_azure/">
<img src="https://img.shields.io/pypi/v/swarmauri_certs_azure?label=swarmauri_certs_azure&color=green" alt="PyPI - swarmauri_certs_azure"/></a>
</p>
---
# swarmauri_certs_azure
Community-maintained utilities for working with X.509 certificates via Azure Key Vault.
## Features
- `AzureKeyVaultCertService` adapter that plugs into Swarmauri's certificate service architecture.
- RFC-aligned helpers for serial number generation (RFC 5280), PEM formatting (RFC 7468), and PKCS#10 CSR creation (RFC 2986).
- Native `DefaultAzureCredential` support so you can reuse the same authentication chain across tools.
- Works with RSA 2048-bit key material—perfect for Key Vault-backed certificate issuance flows.
## Prerequisites
- Python 3.10 or newer.
- An Azure Key Vault enabled for the Certificates and Keys resource providers.
- Exportable RSA key material (PEM) or an Azure Key Vault key that can be exported for CSR signing.
- Azure credentials configured for `DefaultAzureCredential` (e.g., `AZURE_CLIENT_ID`, managed identity, or CLI login).
## Installation
```bash
# pip
pip install swarmauri_certs_azure
# poetry
poetry add swarmauri_certs_azure
# uv (pyproject-based projects)
uv add swarmauri_certs_azure
```
## Quickstart
Generate a CSR using `AzureKeyVaultCertService` and store it for downstream issuance:
```python
import asyncio
from pathlib import Path

from azure.identity import DefaultAzureCredential
from swarmauri_certs_azure.certs import AzureKeyVaultCertService
from swarmauri_core.crypto.types import KeyRef

async def main() -> None:
    service = AzureKeyVaultCertService(
        vault_url="https://example-vault.vault.azure.net/",
        credential=DefaultAzureCredential(),
    )
    key_ref = KeyRef(material=Path("app-private-key.pem").read_bytes())
    csr_bytes = await service.create_csr(
        key=key_ref,
        subject={"CN": "app.example.com"},
        san={"dns": ["app.example.com", "www.app.example.com"]},
    )
    Path("app.csr").write_bytes(csr_bytes)
    print("CSR written to app.csr")

if __name__ == "__main__":
    asyncio.run(main())
```
## Integrate with Azure Certificate Workflows
After generating the CSR, import it into Azure Key Vault or an external CA:
```python
from pathlib import Path

from azure.identity import DefaultAzureCredential
from azure.keyvault.certificates import CertificateClient

vault_url = "https://example-vault.vault.azure.net/"
client = CertificateClient(vault_url=vault_url, credential=DefaultAzureCredential())

csr_bytes = Path("app.csr").read_bytes()
poller = client.begin_create_certificate(
    certificate_name="app-cert",
    policy={
        "contentType": "application/x-pem-file",
        "csr": csr_bytes,
    },
)
certificate = poller.result()
print("Certificate operation state:", certificate.properties.x509_thumbprint)
```
For external issuance, submit `app.csr` to your CA, then store the returned certificate chain back in Key Vault with `import_certificate` (certificate contacts, managed via `set_contacts`, are only needed if you want expiry notifications).
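`import_certificate` takes a single bytes payload, so an externally issued chain is usually concatenated into one PEM bundle first. A hypothetical helper (leaf-first ordering assumed; `build_pem_bundle` is not part of this package):

```python
def build_pem_bundle(leaf_pem: bytes, intermediates: list[bytes]) -> bytes:
    """Concatenate PEM blocks with the leaf first, followed by intermediates."""
    parts = [leaf_pem.strip()] + [p.strip() for p in intermediates]
    return b"\n".join(parts) + b"\n"
```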
## Testing
Run tests with:
```bash
uv run --package swarmauri_certs_azure --directory community pytest
```
## Best Practices
- Prefer managed identities or workload identity federation over client secrets in production.
- Scope Key Vault permissions tightly (`get`, `sign`, `unwrapKey`) for the keys used by this service.
- Rotate keys and certificates ahead of expiry; the helper functions simplify CSR generation for renewals.
- Persist generated CSRs and issued certificates securely to aid in auditing and disaster recovery.
| text/markdown | Jacob Stewart | jacob@swarmauri.com | null | null | null | swarmauri, certs, azure, key, vault, certificate, utilities, ecosystem | [
"License :: OSI Approved :: Apache Software License",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Natural Language :: English",
"Development Status :: 3 - Alpha",
"Intended Audience :: Developers",
"Topic :: Software De... | [] | null | null | <3.13,>=3.10 | [] | [] | [] | [
"asn1crypto",
"asn1crypto; extra == \"crypto\"",
"azure-identity",
"azure-identity; extra == \"azure\"",
"azure-keyvault-keys",
"azure-keyvault-keys; extra == \"azure\"",
"cryptography",
"cryptography; extra == \"crypto\"",
"swarmauri_base",
"swarmauri_core"
] | [] | [] | [] | [] | uv/0.10.4 {"installer":{"name":"uv","version":"0.10.4","subcommand":["publish"]},"python":null,"implementation":{"name":null,"version":null},"distro":{"name":"Ubuntu","version":"24.04","id":"noble","libc":null},"system":{"name":null,"release":null},"cpu":null,"openssl_version":null,"setuptools_version":null,"rustc_version":null,"ci":true} | 2026-02-18T09:46:51.334777 | swarmauri_certs_azure-0.3.3.dev5.tar.gz | 8,385 | 3e/05/6558c8228860cb6701bd5ba3fed256fc7079bc4026fea2bd1f88bf732195/swarmauri_certs_azure-0.3.3.dev5.tar.gz | source | sdist | null | false | 39a33f38d7e690addec95e5658ce0f1c | da4e5f29e2e73d7646925a2489959a928ceaa8f2696caa5060fdbb26b07d04c5 | 3e056558c8228860cb6701bd5ba3fed256fc7079bc4026fea2bd1f88bf732195 | Apache-2.0 | [
"LICENSE"
] | 0 |
2.4 | rustling | 0.5.0 | A blazingly fast library for computational linguistics | # Rustling
[](https://pypi.org/project/rustling/)
[](https://crates.io/crates/rustling)
Rustling is a blazingly fast library for computational linguistics.
It is written in Rust, with Python bindings.
Documentation: [Python](https://rustling.readthedocs.io/) | [Rust](https://docs.rs/rustling)
## Features
- **Language Models** — N-gram language models with smoothing
- `MLE` — Maximum Likelihood Estimation (no smoothing)
- `Lidstone` — Lidstone (additive) smoothing
- `Laplace` — Laplace (add-one) smoothing
- **Word Segmentation** — Models for segmenting unsegmented text into words
- `LongestStringMatching` — Greedy left-to-right longest match segmenter
- `RandomSegmenter` — Random baseline segmenter
- **Part-of-speech Tagging**
- `AveragedPerceptronTagger` - Averaged perceptron tagger
- **CHAT Parsing** — Parser for CHAT transcription files (CHILDES/TalkBank)
- `CHAT` — Read and query CHAT data from directories, files, strings, or ZIP archives
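For readers unfamiliar with the smoothing family above: MLE, Lidstone, and Laplace differ only in an additive constant gamma. A reference sketch in plain Python (independent of rustling's actual API, which is documented separately):

```python
from collections import Counter

def lidstone_prob(counts: Counter, word: str, gamma: float, vocab_size: int) -> float:
    """P(w) = (c(w) + gamma) / (N + gamma * V); gamma=0 is MLE, gamma=1 is Laplace."""
    total = sum(counts.values())
    return (counts[word] + gamma) / (total + gamma * vocab_size)
```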
## Performance
Benchmarked against pure Python implementations from NLTK, wordseg (v0.0.5), and pylangacq (v0.19.1).
See [`benchmarks/`](benchmarks/) for full details and reproduction scripts.
| Component | Task | Speedup | vs. |
|-----------|------|---------|-----|
| **Language Models** | Fit | **10x** | NLTK |
| | Score | **2x** | NLTK |
| | Generate | **80–112x** | NLTK |
| **Word Segmentation** | LongestStringMatching | **9x** | wordseg |
| | RandomSegmenter | **1.1x** | wordseg |
| **POS Tagging** | Training | **5x** | NLTK |
| | Tagging | **7x** | NLTK |
| **CHAT Parsing** | from_dir | **55x** | pylangacq |
| | from_zip | **48x** | pylangacq |
| | from_files | **63x** | pylangacq |
| | from_strs | **116x** | pylangacq |
| | words() | **3x** | pylangacq |
| | utterances() | **15x** | pylangacq |
## Installation
### Python
```bash
pip install rustling
```
### Rust
```bash
cargo add rustling
```
## License
MIT License
| text/markdown; charset=UTF-8; variant=GFM | null | "Jackson L. Lee" <jacksonlunlee@gmail.com> | null | null | MIT License | null | [
"Development Status :: 2 - Pre-Alpha",
"Intended Audience :: Developers",
"Intended Audience :: Education",
"Intended Audience :: Information Technology",
"Intended Audience :: Science/Research",
"License :: OSI Approved :: MIT License",
"Operating System :: OS Independent",
"Programming Language :: R... | [] | null | null | >=3.10 | [] | [] | [] | [] | [] | [] | [] | [] | twine/6.1.0 CPython/3.13.7 | 2026-02-18T09:46:08.398159 | rustling-0.5.0.tar.gz | 117,456 | 84/92/0067df4dc20446305597e193918196873acac16a6063d462d82d1188d6ea/rustling-0.5.0.tar.gz | source | sdist | null | false | 07cec8e4eb39b99aea9f63ec9b66f50a | 75f1071c72686a212224e70222f9636763d41976141286ec091955bf194c187f | 84920067df4dc20446305597e193918196873acac16a6063d462d82d1188d6ea | null | [
"LICENSE.md"
] | 825 |
2.4 | zpp_store | 0.6.0 | Data serialization system | # zpp_store
## Overview
A Python object serialization tool.
It supports:
- basic types: str, int, float, bool, type, none, complex, bytes, decimal
- structured types: list, dict, tuple, array, bytearray, frozenset
- io types: BytesIO, StringIO, TextIOWrapper, BufferedReader
- objects: Class, datetime, function, builtin function
The output file can be either YAML or binary, and it can be encrypted with a password.
The store works like a vault that can hold several pieces of data, using a key/value scheme to identify them.
It also includes a tool for storing data as classes, to structure it and make it easier to manipulate.
### Prerequisites
- Python 3
## Installation
```shell
pip install zpp_store
```
## Usage
### Initializing the store
The store can be accessed in two ways:
- By initializing the store in a variable
```python
import zpp_store

vault = zpp_store.Store()
```
- Or with a **with** statement
```python
import zpp_store

with zpp_store.Store() as vault:
    '''Processing'''
```
The Store class accepts several arguments:
- ***filename***: the output file.
If no output file is given, the result stays in the class's internal storage and can be retrieved with the **get_content()** method.
- ***protected***: whether the file should be encrypted (if no password is given, a prompt will ask for one)
- ***password***: the password used for encryption (implies protected)
- ***format***: the output format: **zpp_store.Formatstore.to_yaml**, **zpp_store.Formatstore.to_binary**, or **zpp_store.Formatstore.to_dict**.
Note that the to_dict format does not support writing to a file.
- ***compressed***: enable store compression
- ***cached***: enable caching of key lookups
<br>
### Serializing data
To serialize data, call the **Store** class and use the **push** method, passing the key name and the data to serialize.
```python
import zpp_store

class Person:
    def __init__(self, name, age, city):
        self.name = name
        self.age = age
        self.city = city

new_person = Person("Bob", 35, "Paris")
with zpp_store.Store() as vault:
    vault.push("utilisateur_bob", new_person)
```
Hierarchical data is supported by separating keys with a dot.
```python
vault.push("config.app.data", "data_line")  # yields {"config": {"app": {"data": "data_line"}}}
```
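The dotted-key behaviour above can be mimicked with plain dictionaries; `push_dotted` is an illustrative sketch, not part of zpp_store:

```python
def push_dotted(store: dict, dotted_key: str, value):
    """Create nested dicts for each dot-separated segment, then set the leaf."""
    *parents, leaf = dotted_key.split(".")
    node = store
    for part in parents:
        node = node.setdefault(part, {})
    node[leaf] = value
    return store
```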
<br>
### Deserializing data
To deserialize data, call the **Store** class and use the **pull** method, passing the name of the key to retrieve.
```python
import zpp_store

class Person:
    def __init__(self, name, age, city):
        self.name = name
        self.age = age
        self.city = city

with zpp_store.Store() as vault:
    new_person = vault.pull("utilisateur_bob")
```
<br>
### Deleting data
Data can be deleted with the **erase()** method, passing the key as a parameter.
```python
import zpp_store

with zpp_store.Store() as vault:
    vault.erase("utilisateur_bob")
```
The method returns **True** if data was deleted, and **False** otherwise.
<br>
### Listing keys
All keys available in a store can be listed.
```python
import zpp_store

with zpp_store.Store() as vault:
    print(vault.list())  # Prints: ['app', 'app.config', 'app.users']
```
### Structuring data
A dictionary can be turned into a **DataStore**, which allows its data to be manipulated like a class, using a.b.c access.
To do so, call the **structure()** method with the dictionary as argument.
```python
import zpp_store

dict_data = {"Bonjour": "Hello world", "Data": {"insert": True, "false": True}}
data = zpp_store.structure(dict_data)
```
The dictionary then becomes a **zpp_store.structure.DataStore**.
To help detect modifications, the DataStore can return a hash of its content via the **get_hash()** method.
```python
import zpp_store

dict_data = {"Bonjour": "Hello world", "Data": {"insert": True, "false": True}}
data = zpp_store.structure(dict_data)
hash_dict = data.get_hash()
```
<br>
### Destructuring data
To get a dictionary back from a **DataStore**, call the **destructure()** method.
```python
import zpp_store

data = zpp_store.destructure(datastore)
```
| text/markdown | null | null | null | null | mit | store zephyroff | [
"Development Status :: 5 - Production/Stable",
"Environment :: Console",
"License :: OSI Approved :: MIT License",
"Programming Language :: Python :: 3"
] | [] | https://github.com/ZephyrOff/zpp_store | null | null | [] | [] | [] | [
"PyYAML",
"msgpack",
"cryptography"
] | [] | [] | [] | [
"Documentation, https://github.com/ZephyrOff/zpp_store"
] | nexus/0.5.0 CPython/3.13.11 Windows/11 | 2026-02-18T09:45:47.079457 | zpp_store-0.6.0.tar.gz | 15,428 | f7/8f/4b84c590428ead5abd21f21710f6260b8c20241fa95c15f6a0a59e8a51b4/zpp_store-0.6.0.tar.gz | source | sdist | null | false | d17e8f49782ecc779b0f57d3fc7c795a | 1945c7fba2de683f0677e9fc2a8869dcbcb4de0c2b12b465e08a79f489d98436 | f78f4b84c590428ead5abd21f21710f6260b8c20241fa95c15f6a0a59e8a51b4 | null | [] | 0 |
2.4 | pydantic-json-patch | 0.6.3 | Pydantic models for implementing JSON Patch. | # Pydantic JSON Patch
[![Python uv CI][ci-badge]][ci-page]
[![Coverage Status][coverage-badge]][coverage-page]
[![PyPI - Version][pypi-badge]][pypi-page]
[Pydantic] models for implementing [JSON Patch].
## Installation
_Pydantic JSON Patch_ is published to [PyPI], and can be installed with e.g.:
```shell
pip install pydantic-json-patch
```
## Models
A model is provided for each of the six JSON Patch operations:
- `AddOp`
- `CopyOp`
- `MoveOp`
- `RemoveOp`
- `ReplaceOp`
- `TestOp`
As repeating the op is a bit awkward (`CopyOp(op="copy", ...)`), a `create` factory method is available:
```python
>>> from pydantic_json_patch import AddOp
>>> op = AddOp.create(path="/foo/bar", value=123)
>>> op
AddOp(op='add', path='/foo/bar', value=123)
>>> op.model_dump_json()
'{"op":"add","path":"/foo/bar","value":123}'
```
The operations that take a value (`AddOp`, `ReplaceOp`, and `TestOp`) are generic, so you can parameterize them with a specific value type:
```python
>>> from pydantic_json_patch import ReplaceOp
>>> op = ReplaceOp[str].create(path="/foo/bar", value="hello")
>>> op
ReplaceOp[str](op='replace', path='/foo/bar', value='hello')
```
Additionally, there are two compound types:
- `Operation` is the union of all the operations; and
- `JsonPatch` is a Pydantic `RootModel` representing a sequence of operations.
`JsonPatch` can be used directly for validation:
```python
>>> from pydantic_json_patch import JsonPatch
>>> patch = JsonPatch.model_validate_json('[{"op":"add","path":"/a/b/c","value":"foo"}]')
>>> patch[0]
AddOp(op='add', path='/a/b/c', value='foo')
```
### Pointer tokens
The `path` property (and `from` property, where present) of an operation is a [JSON Pointer].
This means that any `~` or `/` characters in property names need to be properly encoded.
To aid working with these, the models expose a read-only `path_tokens` property (and, where appropriate, `from_tokens`):
```python
>>> from pydantic_json_patch import CopyOp
>>> op = CopyOp.model_validate_json('{"op":"copy","path":"/foo/bar~1new","from":"/foo/bar~0old"}')
>>> op
CopyOp(op='copy', path='/foo/bar~1new', from_='/foo/bar~0old')
>>> op.path_tokens
('foo', 'bar/new')
>>> op.from_tokens
('foo', 'bar~old')
```
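The escaping rules behind these tokens are small enough to state in full (RFC 6901): `~` becomes `~0` and `/` becomes `~1`, with `~` handled first when encoding and last when decoding. A standalone sketch, not this package's implementation:

```python
def escape_token(token: str) -> str:
    # Encode '~' first so a literal '/' never produces an ambiguous '~1'.
    return token.replace("~", "~0").replace("/", "~1")

def unescape_token(ref: str) -> str:
    # Decode in the reverse order: '~1' -> '/', then '~0' -> '~'.
    return ref.replace("~1", "/").replace("~0", "~")
```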
Similarly, the `create` factory methods can accept sequences of tokens, and will encode them appropriately:
```python
>>> from pydantic_json_patch import TestOp
>>> op = TestOp.create(path=("annotations", "scope/value"), value=None)
>>> op
TestOp(op='test', path='/annotations/scope~1value', value=None)
>>> op.model_dump_json()
'{"op":"test","path":"/annotations/scope~1value","value":null}'
```
## FastAPI
You can use this package to validate a JSON Patch endpoint in a FastAPI application, for example:
```python
import typing as tp
from uuid import UUID

from fastapi import Body, FastAPI

from pydantic_json_patch import JsonPatch

app = FastAPI()

@app.patch("/resource/{resource_id}")
def _(resource_id: UUID, operations: tp.Annotated[JsonPatch, Body()]) -> ...:
    ...
```
This will provide a sensible example of the request body:
[![Screenshot of Swagger UI request body example][swagger-example]][swagger-example]
and list the models along with the other schemas:
[![Screenshot of Swagger UI schema list][swagger-schemas]][swagger-schemas]
### Value type validation
You can also use a more specific type to apply type validation to the value properties:
```python
import typing as tp
from uuid import UUID
from fastapi import Body, FastAPI
from pydantic import Discriminator
from pydantic_json_patch import AddOp, TestOp
app = FastAPI()
@app.patch("/resource/{resource_id}")
def _(
resource_id: UUID,
operations: tp.Annotated[list[tp.Annotated[AddOp[int] | TestOp[int], Discriminator("op")]], Body()],
) -> ...:
...
```
**Notes**:
- Explicitly specifying the [discriminator][pydantic-discriminator] gives better results on _failed_ validation for unions of operations; and
- Parameterised versions of the operations will also appear in the JSON Schema as e.g. `AddOp_int_` (with the title _"JsonPatchAddOperation\[int\]"_).
## Development
This project uses [uv] for managing dependencies.
Having installed uv, you can set the project up for local development with:
```shell
uv sync
uv run pre-commit install
```
The pre-commit hooks will ensure that the code style checks (using [isort] and [ruff]) are applied.
### Testing
The test suite uses [pytest] and can be run with:
```shell
uv run pytest
```
Additionally, there is [ty] type-checking that can be run with:
```shell
uv run ty check
```
### FastAPI
You can preview the FastAPI/Swagger documentation by running:
```shell
uv run fastapi dev tests/app.py
```
and visiting the Documentation link that's logged in the console.
This will auto-restart as you make changes.
[ci-badge]: https://github.com/textbook/pydantic_json_patch/actions/workflows/push.yml/badge.svg
[ci-page]: https://github.com/textbook/pydantic_json_patch/actions/workflows/push.yml
[coverage-badge]: https://coveralls.io/repos/github/textbook/pydantic_json_patch/badge.svg?branch=main
[coverage-page]: https://coveralls.io/github/textbook/pydantic_json_patch?branch=main
[fastapi]: https://fastapi.tiangolo.com/
[isort]: https://pycqa.github.io/isort/
[json patch]: https://datatracker.ietf.org/doc/html/rfc6902/
[json pointer]: https://datatracker.ietf.org/doc/html/rfc6901/
[pydantic]: https://docs.pydantic.dev/latest/
[pydantic-discriminator]: https://docs.pydantic.dev/latest/concepts/unions/#discriminated-unions-with-str-discriminators
[pypi]: https://pypi.org/
[pypi-badge]: https://img.shields.io/pypi/v/pydantic-json-patch?logo=python&logoColor=white&label=PyPI
[pypi-page]: https://pypi.org/project/pydantic-json-patch/
[pytest]: https://docs.pytest.org/en/stable/
[ruff]: https://docs.astral.sh/ruff/
[swagger-example]: https://github.com/textbook/pydantic_json_patch/blob/main/docs/swagger-example.png?raw=true
[swagger-schemas]: https://github.com/textbook/pydantic_json_patch/blob/main/docs/swagger-schemas.png?raw=true
[ty]: https://docs.astral.sh/ty/
[uv]: https://docs.astral.sh/uv/
| text/markdown | Jonathan Sharpe | Jonathan Sharpe <mail@jonrshar.pe> | null | null | null | null | [
"Development Status :: 4 - Beta",
"Framework :: FastAPI",
"Framework :: Pydantic",
"Framework :: Pydantic :: 2",
"Intended Audience :: Developers",
"License :: OSI Approved :: ISC License (ISCL)",
"Operating System :: OS Independent",
"Programming Language :: Python",
"Programming Language :: Python... | [] | null | null | >=3.10 | [] | [] | [] | [
"pydantic>=2.12",
"typing-extensions>=4.14.1"
] | [] | [] | [] | [
"repository, https://github.com/textbook/pydantic_json_patch",
"Issues, https://github.com/textbook/pydantic_json_patch/issues",
"Sponsor, https://ko-fi.com/textbook"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-18T09:45:40.223235 | pydantic_json_patch-0.6.3.tar.gz | 5,679 | 55/d8/285db282b0a4cd6f0612c54546b7d7d6feaea37b37e5156ec235911ecc3c/pydantic_json_patch-0.6.3.tar.gz | source | sdist | null | false | 7389712294045825bcbd15877cd5a5a4 | ada3b0d2f3f12e6ad68af26c9149999a45ce426741dd3cfd234615973e995216 | 55d8285db282b0a4cd6f0612c54546b7d7d6feaea37b37e5156ec235911ecc3c | ISC | [] | 200 |
2.4 | argparse-typing | 0.2.0 | Unofficial enhanced types for argparse | # argparse-typing
This is a fork of the `argparse.pyi` type stubs for Python's standard library `argparse` module.
## License Information
This project contains code from two sources:
1. **Original Python module** - Licensed under the PSF License Agreement
(see [LICENSE.PSF.txt](https://github.com/MagIlyasDOMA/argparse-typing/blob/main/LICENSE.PSF.txt))
Copyright (c) 2001-2025 Python Software Foundation
2. **Modifications and additions** - Licensed under the BSD 3-Clause License
(see [LICENSE](https://github.com/MagIlyasDOMA/argparse-typing/blob/main/LICENSE))
Copyright (c) 2026 Маг Ильяс DOMA (MagIlyasDOMA)
The original PSF-licensed code remains under the PSF license. All new code
and modifications are under the BSD license. You may use this package under
either license or both, depending on which parts of the code you are using.
## Original Source
The original module can be found in the [CPython repository](https://github.com/python/cpython).
| text/markdown | Маг Ильяс DOMA (MagIlyasDOMA) | null | null | null | BSD | argparse, type-stubs, typing, cli | [
"Development Status :: 4 - Beta",
"Intended Audience :: Developers",
"License :: OSI Approved :: BSD License",
"License :: OSI Approved :: Python Software Foundation License",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.9",
"Programming Language :: Python :: 3.10",
"Pro... | [] | null | null | >=3.9 | [] | [] | [] | [
"typing-extensions>=4.0.0",
"pip-setuptools>=1.1.4; extra == \"dev\"",
"mypy>=1.19.1; extra == \"dev\""
] | [] | [] | [] | [
"Homepage, https://github.com/MagIlyasDOMA/argparse-typing",
"Repository, https://github.com/MagIlyasDOMA/argparse-typing.git"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-18T09:44:50.173151 | argparse_typing-0.2.0.tar.gz | 13,068 | 9b/bf/e32c40d028178e93ff7486f734ce2bcec114a0e79524d6e1912d2125111e/argparse_typing-0.2.0.tar.gz | source | sdist | null | false | 9d2fb3d2116bba1d2291975c4fba1cea | 9a633486237bd42461a254008e5413aec8ae95c0219823fc379a2343f10fa9bf | 9bbfe32c40d028178e93ff7486f734ce2bcec114a0e79524d6e1912d2125111e | null | [
"LICENSE",
"LICENSE.PSF.txt",
"NOTICE"
] | 259 |
2.4 | PyViCare | 2.58.0 | Library to communicate with the Viessmann ViCare API. | # PyViCare
This library implements access to Viessmann devices by using the official API from the [Viessmann Developer Portal](https://developer.viessmann.com/).
## Breaking changes in version 2.27.x
- Some base classes have been renamed to provide a better support for non heating devices. See [PR #307](https://github.com/somm15/PyViCare/pull/307)
## Breaking changes in version 2.8.x
- The circuit, burner (Gaz) and compressor (Heat Pump) are now separated. Accessing the properties of the burner/compressor has moved from `device.circuits` to `device.burners` and `device.compressor`.
## Breaking changes in version 2.x
- The API to access your device changed to a general `PyViCare` class. Use this class to load all available devices.
- The API to access the heating circuit of the device has moved to the `Device` class. You can now access and iterate over all available circuits via `device.circuits`, which makes it easy to see which properties depend on the circuit.
See the example below for how you can use that.
## Breaking changes in version 1.x
- The versions prior to 1.x used an unofficial API which stopped working on July 15th, 2021. All users need to migrate to version 1.0.0 to continue using the API.
- An exception is raised if the library runs into an API rate limit. (See feature flag `raise_exception_on_rate_limit`.)
- An exception is raised if an unsupported device feature is used. (See feature flag `raise_exception_on_not_supported_device_feature`.)
- Python 3.4 is no longer supported.
- Python 3.9 is now supported.
## Prerequisites
To use PyViCare, every user has to register and create their personal API client. Follow these steps to create your client:
1. Login to the [Viessmann Developer Portal](https://app.developer.viessmann.com/) with **your existing ViCare app username/password**.
2. On the developer dashboard click *add* in the *clients* section.
3. Create a new client using the following data:
- Name: PyViCare
- Google reCAPTCHA: Disabled
- Redirect URIs: `vicare://oauth-callback/everest`
4. Copy the `Client ID` to use in your code. Pass it as a constructor parameter to the device.
Please note that not all properties from older versions and the ViCare mobile app are available in the new API. Missing properties were removed and might be added later if they are available again.
## Help
We need help testing and improving PyViCare, since the maintainers only have specific types of heating systems. For bugs, questions or feature requests join the [PyViCare channel on Discord](https://discord.gg/aM3SqCD88f) or create an issue in this repository.
## Device Features / Errors
Depending on the device, some features are not available or supported; calling the dedicated method then raises a `PyViCareNotSupportedFeatureError`. This is most likely not a bug, but a limitation of the device itself.
Tip: You can use Python's [contextlib.suppress](https://docs.python.org/3/library/contextlib.html#contextlib.suppress) to handle it gracefully.
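For example, the tip can be applied like this (using a stand-in exception class so the sketch runs without a connected device; in real code, import `PyViCareNotSupportedFeatureError` from the PyViCare package instead):

```python
from contextlib import suppress

# Stand-in for PyViCareNotSupportedFeatureError, so this sketch is
# self-contained; a real device object raises the PyViCare exception.
class PyViCareNotSupportedFeatureError(Exception):
    pass

def get_boiler_temperature():
    # A real device call would raise this when the feature is unsupported.
    raise PyViCareNotSupportedFeatureError("feature not supported by this device")

temperature = None
with suppress(PyViCareNotSupportedFeatureError):
    temperature = get_boiler_temperature()

print(temperature)  # -> None; the unsupported feature was skipped gracefully
```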
## Types of heating systems
- Use `asGazBoiler` for gas heating systems
- Use `asHeatPump` for heat pumps
- Use `asFuelCell` for fuel cells
- Use `asPelletsBoiler` for pellet heating systems
- Use `asOilBoiler` for oil heating systems
- Use `asHybridDevice` for gas/heat pump hybrid systems
## Basic Usage
```python
import sys
import logging
from PyViCare.PyViCare import PyViCare
client_id = "INSERT CLIENT ID"
email = "email@domain"
password = "password"
vicare = PyViCare()
vicare.initWithCredentials(email, password, client_id, "token.save")
device = vicare.devices[1]
print(device.getModel())
print("Online" if device.isOnline() else "Offline")
t = device.asAutoDetectDevice()
print(t.getDomesticHotWaterConfiguredTemperature())
print(t.getDomesticHotWaterStorageTemperature())
print(t.getOutsideTemperature())
print(t.getRoomTemperature())
print(t.getBoilerTemperature())
print(t.setDomesticHotWaterTemperature(59))
circuit = t.circuits[0] #select heating circuit
print(circuit.getSupplyTemperature())
print(circuit.getHeatingCurveShift())
print(circuit.getHeatingCurveSlope())
print(circuit.getActiveProgram())
print(circuit.getPrograms())
print(circuit.getCurrentDesiredTemperature())
print(circuit.getDesiredTemperatureForProgram("comfort"))
print(circuit.getActiveMode())
print(circuit.getDesiredTemperatureForProgram("comfort"))
print(circuit.setProgramTemperature("comfort",21))
print(circuit.activateProgram("comfort"))
print(circuit.deactivateComfort())
burner = t.burners[0] #select burner
print(burner.getActive())
compressor = t.compressors[0] #select compressor
print(compressor.getActive())
```
## API Usage in Postman
Follow these steps to access the API in Postman:
1. Create an access token in the `Authorization` tab with type `OAuth 2.0` and the following inputs:
- Token Name: `PyViCare`
- Grant Type: `Authorization Code (With PKCE)`
- Callback URL: `vicare://oauth-callback/everest`
- Authorize using browser: Disabled
- Auth URL: `https://iam.viessmann-climatesolutions.com/idp/v3/authorize`
- Access Token URL: `https://iam.viessmann-climatesolutions.com/idp/v3/token`
- Client ID: Your personal Client ID created in the developer portal.
- Client Secret: Blank
- Code Challenge Method: `SHA-256`
- Code Verifier: Blank
- Scope: `IoT User`
- State: Blank
- Client Authentication: `Send client credentials in body`.
A login popup will open. Enter your ViCare username and password.
2. Use this URL to access your `installationId`, `gatewaySerial` and `deviceId`:
`https://api.viessmann-climatesolutions.com/iot/v1/equipment/installations?includeGateways=true`
- `installationId` is `data[0].id`
- `gatewaySerial` is `data[0].gateways[0].serial`
- `deviceId` is `data[0].gateways[0].devices[0].id`
3. Use the above data to replace `{installationId}`, `{gatewaySerial}` and `{deviceId}` in this URL to investigate the Viessmann API:
`https://api.viessmann-climatesolutions.com/iot/v1/features/installations/{installationId}/gateways/{gatewaySerial}/devices/{deviceId}/features`
## Rate Limits
[Due to latest changes in the Viessmann API](https://www.viessmann-community.com/t5/Konnektivitaet/Q-amp-A-Viessmann-API/td-p/127660) rate limits can be hit. In that case a `PyViCareRateLimitError` is raised. You can read from the error (`limitResetDate`) when the rate limit is reset.
## More device types needed for test cases
To help us create more test cases, you can run the code below and open a pull request adding a new test for your device type. Your test data should be committed to [tests/response](tests/response) and named `<family><model>`.
The code below automatically removes "sensitive" information like installation IDs and serial numbers.
You can either replace the default values or use the `PYVICARE_*` environment variables.
```python
import sys
import os
from PyViCare.PyViCare import PyViCare
client_id = os.getenv("PYVICARE_CLIENT_ID", "INSERT CLIENT_ID")
email = os.getenv("PYVICARE_EMAIL", "email@domain")
password = os.getenv("PYVICARE_PASSWORD", "password")
vicare = PyViCare()
vicare.initWithCredentials(email, password, client_id, "token.save")
with open("dump.json", mode='w') as output:
output.write(vicare.devices[0].dump_secure())
```
To make the test data comparable with future updates, it must be sorted. No worries, this can be done automatically using [`jq`](https://jqlang.github.io/jq/).
```sh
jq ".data|=sort_by(.feature)" --sort-keys testData.json > testDataSorted.json
```
## Testing
### Home Assistant
To test a certain change in Home Assistant, one needs to have a `ViCare` integration installed as a custom component.
Change the dependency in the `manifest.json` to point to a GitHub branch or commit SHA:
```json
"requirements": [
"PyViCare@git+https://github.com/openviess/PyViCare.git@<branchname>"
],
```
To install `ViCare` as a custom component, one can use the terminal addon to install the changes from a certain Home Assistant PR:
```sh
curl -o- -L https://gist.githubusercontent.com/bdraco/43f8043cb04b9838383fd71353e99b18/raw/core_integration_pr | bash /dev/stdin -d vicare -p <pr-number>
```
| text/markdown | Simon Gillet | mail+pyvicare@gillet.ninja | Christopher Fenner | fenner.christopher@gmail.com | Apache-2.0 | viessmann, vicare, api | [
"Intended Audience :: Developers",
"License :: OSI Approved :: Apache Software License",
"Natural Language :: English",
"Operating System :: OS Independent",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language... | [] | null | null | <4.0,>=3.10 | [] | [] | [] | [
"authlib>1.2.0",
"deprecated<2.0.0,>=1.2.15",
"requests>=2.31.0"
] | [] | [] | [] | [
"Bug Tracker, https://github.com/openviess/PyViCare/issues",
"Changelog, https://github.com/openviess/PyViCare/releases",
"Documentation, https://github.com/openviess/PyViCare",
"Homepage, https://github.com/openviess/PyViCare",
"Repository, https://github.com/openviess/PyViCare"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-18T09:43:18.456430 | pyvicare-2.58.0.tar.gz | 31,656 | 4c/b3/32a9234fff3baab23b17a9b1c4848917b21175e427a8e565cbb6c96eac6b/pyvicare-2.58.0.tar.gz | source | sdist | null | false | 2323f8f23b72b336e58471e77f67aebf | 34fe8659a851e48d23c04f0be9c4f1d3f9ce5e289081dd3a9cfc4f09446dc73e | 4cb332a9234fff3baab23b17a9b1c4848917b21175e427a8e565cbb6c96eac6b | null | [
"LICENSE"
] | 0 |
2.4 | redzed | 26.2.18 | An asyncio-based library for building small automated systems | # Redzed
Redzed is an asyncio-based library for building small automated systems,
i.e. systems that control outputs according to input values,
the system's internal state, and the date and time. Redzed was written in Python.
It is free and open source.
Included are pre-defined logic blocks for general use. There are memory cells,
timers, programmable finite-state machines, outputs and many more.
Blocks have outputs and react to events. Blocks are complemented by triggers
that run user-supplied functions when certain outputs change. Triggered functions
evaluate outputs, make decisions and can send events to other blocks.
The mutual interaction of blocks and triggers makes it possible to build modular
automated systems of small to medium complexity.
What is not included: connecting the system to the outside world. That is the job of the application code.
## Documentation
Please read the [online documentation](https://redzed.readthedocs.io/en/latest/)
for more information.
### Note:
Redzed is intended to replace Edzed (an older library by the same author).
It has the same capabilities, but is based on simpler concepts, which makes
it easier to learn and to use.
| text/markdown | null | Vlado Potisk <redzed@poti.sk> | null | null | null | automation, finite-state machine | [
"Development Status :: 4 - Beta",
"Intended Audience :: Developers",
"Topic :: Software Development :: Libraries :: Python Modules",
"Programming Language :: Python :: 3"
] | [] | null | null | >=3.11 | [] | [] | [] | [
"pytest>=8.4.0; extra == \"tests\"",
"pytest-asyncio>=0.26.0; extra == \"tests\"",
"pytest-xdist; extra == \"tests\""
] | [] | [] | [] | [
"homepage, https://github.com/xitop/redzed",
"repository, https://github.com/xitop/redzed",
"documentation, https://redzed.readthedocs.io/en/latest/"
] | twine/6.1.0 CPython/3.13.11 | 2026-02-18T09:43:15.969834 | redzed-26.2.18-py3-none-any.whl | 55,936 | f8/9d/190ffc054b933ad2ed20097ea763de814c2644dbcd6083626b5d4ea95d56/redzed-26.2.18-py3-none-any.whl | py3 | bdist_wheel | null | false | c01c3c7af29ef364da152289ee7563c1 | efe7c1242f4f24bfab1f3227833f9ab8b44088121fd635fea3c586d5c90cbbaa | f89d190ffc054b933ad2ed20097ea763de814c2644dbcd6083626b5d4ea95d56 | MIT | [
"LICENSE.txt"
] | 109 |
2.4 | yt-mixer | 0.1.0 | Server-side YouTube Audio Mixer with Ducking and Podcast Processing | # YT Mixer 🎵
Server-side YouTube playlist mixer with **true shuffle** for more unpredictable playback order.
## Features
- Mix music + podcast playlists with automatic ducking
- **True random shuffle** using Fisher-Yates (avoids YouTube's linear pseudo-shuffle behavior)
- Three-tier audio quality:
- ⚡ **Immediate**: Quick playback, no normalization, vocal EQ applied
- 📊 **Quick Mix**: Per-track normalization + EQ, ready in ~30–60 seconds
- ✨ **Final Mix**: Full LUFS mastering, professional loudness & sidechain
- Hour-long chunks streamed seamlessly
- Session persistence
## Install
```bash
pip install -e .
```
## Usage
```bash
# Start service
yt-mixer service start
# Access at http://localhost:5052
# Enter two YouTube playlist IDs and mix!
```
## How it works
1. Downloads tracks from both playlists
2. **Properly shuffles** them using Fisher-Yates for true randomness
3. Applies vocal EQ + per-track normalization
4. Streams hour-long chunks with incremental quality upgrades (Immediate → Quick → Final)
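The Fisher-Yates shuffle mentioned in step 2 is the same algorithm Python's `random.shuffle` implements; a hand-rolled sketch (for illustration only) looks like this:

```python
import random

def fisher_yates_shuffle(items: list) -> None:
    """Shuffle items in place; every permutation is equally likely."""
    for i in range(len(items) - 1, 0, -1):
        j = random.randint(0, i)  # pick a partner from the unshuffled prefix
        items[i], items[j] = items[j], items[i]

tracks = ["song_a", "podcast_1", "song_b", "podcast_2"]
fisher_yates_shuffle(tracks)
print(tracks)  # the same four items, in uniformly random order
```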
| text/markdown | null | 1minds3t <1minds3t@proton.me> | null | null | MIT | youtube, audio, mixer, podcast, ffmpeg | [
"Development Status :: 4 - Beta",
"Intended Audience :: End Users/Desktop",
"License :: OSI Approved :: MIT License",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.8",
"Programming Language :: Python :: 3.9",
"Programming Language :: Python :: 3.10",
"Programming Language... | [] | null | null | >=3.8 | [] | [] | [] | [
"Flask>=3.0.0",
"yt-dlp>=2024.0.0",
"eventlet>=0.33.0"
] | [] | [] | [] | [
"Homepage, https://github.com/1minds3t/yt_mixer",
"Issues, https://github.com/1minds3t/yt_mixer/issues"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-18T09:43:07.387013 | yt_mixer-0.1.0.tar.gz | 32,560 | 76/c5/1a80a04bb554952a79c3cec2e7fa170412d0b060cbad82e3c66ee4b7e8d1/yt_mixer-0.1.0.tar.gz | source | sdist | null | false | 6abb9cb794831a8d7b804bbc90a318f6 | fd6d640475fa553390a4bbfa9732f398b431eac44511faee123483846b563e79 | 76c51a80a04bb554952a79c3cec2e7fa170412d0b060cbad82e3c66ee4b7e8d1 | null | [
"LICENSE",
"THIRD_PARTY_NOTICES.txt"
] | 234 |
2.4 | cap_upload_validator | 1.6.0 | Python tool for validating H5AD AnnData files before uploading to the Cell Annotation Platform. | # cap-validator
[](https://pypi.org/project/cap-upload-validator/) [](https://github.com/cellannotation/cap-validator/blob/main/LICENSE) [](https://github.com/cellannotation/cap-validator/actions)
## Overview
Python tool for validating H5AD AnnData files before uploading to the Cell Annotation Platform. The same validation code is used in [Cell Annotation Platform](https://celltype.info/) following requirements from the CAP-AnnData schema published [here](https://github.com/cellannotation/cap-data-schema/blob/main/cap-anndata-schema.md).
Full documentation can be found in the [GitHub Wiki](https://github.com/cellannotation/cap-validator/wiki).
## Features
- ✨ Validates all upload requirements and returns results at once
- 🚀 RAM efficient
- 🧬 Provides a full list of supported ENSEMBL gene IDs for *Homo sapiens* and *Mus musculus*
## Installation
```bash
pip install -U cap-upload-validator
```
## Usage
### Basic usage
```python
from cap_upload_validator import UploadValidator
h5ad_path = "path_to.h5ad"
uv = UploadValidator(h5ad_path)
uv.validate()
```
### CLI interface
```console
$ capval tmp/tmp.h5ad
CapMultiException:
AnnDataMissingEmbeddings:
The embedding is missing or is incorrectly named: embeddings must be a [n_cells x 2]
numpy array saved with the prefix X_, for example: X_tsne, X_pca or X_umap.
AnnDataMisingObsColumns:
Required obs column(s) missing: file must contain
'assay', 'disease', 'organism' and 'tissue' fields with valid values.
For details visit:
https://github.com/cellannotation/cap-validator/wiki/Validation-Errors
$ capval --help
usage: capval [-h] adata_path
CLI tool to validate an AnnData H5AD file before uploading to the Cell Annotation Platform.
The validator will raise CAP-specific errors if the file does not follow the CAP AnnData Schema,
as defined in:
https://github.com/cellannotation/cell-annotation-schema/blob/main/docs/cap_anndata_schema.md
Full documentation, including a list of possible validation errors, is available at:
https://github.com/cellannotation/cap-validator/wiki
Usage Example:
`capval path/to/anndata.h5ad`
positional arguments:
adata_path Path to the AnnData h5ad file.
optional arguments:
-h, --help show this help message and exit
```
## License
[BSD 3-Clause License](LICENSE)
| text/markdown | Roman Mukhin, Andrey Isaev, Evan Biederstedt | null | null | null | null | null | [
"Operating System :: OS Independent",
"Programming Language :: Python :: 3"
] | [] | null | null | >=3.9 | [] | [] | [] | [
"cap-anndata>=0.5.2",
"numpy>=2.0.2",
"pandas>=2.2.3",
"scipy>=1.13.1"
] | [] | [] | [] | [
"Homepage, https://celltype.info/",
"GitHub, https://github.com/cellannotation/cap-validator",
"Issues, https://github.com/cellannotation/cap-validator/issues",
"Changelog, https://github.com/cellannotation/cap-validator/blob/main/CHANGELOG.md",
"Documentation, https://github.com/cellannotation/cap-validato... | uv/0.8.0 | 2026-02-18T09:42:59.964892 | cap_upload_validator-1.6.0.tar.gz | 2,506,626 | 20/07/f091bdec7905f4b5f477132baa9489b234f8afc62ae9f71e63488c9a2677/cap_upload_validator-1.6.0.tar.gz | source | sdist | null | false | 26c61385c2faf44a8830061329d15585 | 95e5c2ce4a2082351c41597f2c520e8913f0c2bc8336c72b8b5a9913d5802595 | 2007f091bdec7905f4b5f477132baa9489b234f8afc62ae9f71e63488c9a2677 | BSD-3-Clause | [
"LICENSE"
] | 0 |
2.4 | cervellaswarm-code-intelligence | 0.1.0 | AST-based code analysis toolkit: symbol extraction, dependency graphs, semantic search, impact analysis, and repository mapping. | # CervellaSwarm Code Intelligence
[](LICENSE)
[](https://www.python.org)
[](tests/)
Find any symbol, trace any dependency, map any repository. Built on [tree-sitter](https://tree-sitter.github.io/).
```bash
pip install cervellaswarm-code-intelligence
```
## What It Does
Extract symbols from source code, build dependency graphs with PageRank scoring,
and answer questions like:
- Where is `UserService` defined?
- What calls `authenticate()`? What does it call?
- How risky is it to refactor `DatabasePool`?
- What are the most important symbols in this repo?
## Quick Start
### Find symbols across your codebase
```python
from cervellaswarm_code_intelligence import SemanticSearch
search = SemanticSearch("/path/to/your/repo")
# Where is this symbol defined?
location = search.find_symbol("UserService")
# => ("/path/to/your/repo/app/services.py", 42)
# Who calls this function?
callers = search.find_callers("authenticate")
# => [("app/auth.py", 15, "login"), ("app/api.py", 88, "verify_token")]
# What does this function call?
callees = search.find_callees("login")
# => ["authenticate", "generate_token", "log_attempt"]
```
### Estimate impact of code changes
```python
from cervellaswarm_code_intelligence import ImpactAnalyzer
analyzer = ImpactAnalyzer("/path/to/your/repo")
result = analyzer.estimate_impact("DatabasePool")
print(result.risk_level) # => "high"
print(result.risk_score) # => 0.62
print(result.callers_count) # => 14
print(result.files_affected) # => 7
print(result.reasons)
# => ["14 callers - high impact",
# "Used in 7 files - moderate scope",
# "Class type - changes may affect multiple methods"]
```
### Generate repository maps within token budgets
```python
from cervellaswarm_code_intelligence import RepoMapper
mapper = RepoMapper("/path/to/your/repo")
repo_map = mapper.build_map(token_budget=2000)
print(repo_map)
# => # REPOSITORY MAP
#
# ## app/auth.py
#
# def login(username: str, password: str) -> Token
# def verify_token(token: str) -> bool
# class AuthMiddleware
# ...
```
### Extract symbols from a single file
```python
from cervellaswarm_code_intelligence import SymbolExtractor, TreesitterParser
parser = TreesitterParser()
extractor = SymbolExtractor(parser)
symbols = extractor.extract_symbols("app/models.py")
for symbol in symbols:
print(f"{symbol.type:10} {symbol.name:20} line {symbol.line}")
# => class User line 5
# function create_user line 28
# function get_user_by_email line 45
```
### Build and analyze dependency graphs
```python
from cervellaswarm_code_intelligence import DependencyGraph, Symbol
graph = DependencyGraph()
# Add symbols and references
graph.add_symbol(login_symbol)
graph.add_symbol(auth_symbol)
graph.add_reference("auth.py:login", "auth.py:verify_credentials")
# Compute importance via PageRank
scores = graph.compute_importance()
# Get the most important symbols
top_10 = graph.get_top_symbols(n=10)
```
## CLI Tools
Three command-line tools are included:
```bash
# Find where a symbol is defined, who calls it, what it calls
cervella-search /path/to/repo UserService
cervella-search /path/to/repo authenticate callers
cervella-search /path/to/repo login callees
# Estimate impact of modifying a symbol
cervella-impact /path/to/repo DatabasePool
# Risk: HIGH (0.62) - 14 callers, 7 files affected
# Generate a repository map within a token budget
cervella-map --repo-path /path/to/repo --budget 2000 --output repo_map.md
cervella-map --repo-path /path/to/repo --filter "**/*.py" --stats
```
## Architecture
```
Source Files (.py, .ts, .tsx, .js, .jsx)
|
TreesitterParser -- Parse into AST
|
SymbolExtractor -- Extract functions, classes, interfaces
| |
PythonExtractor TypeScriptExtractor
|
DependencyGraph -- Build edges, compute PageRank
|
+-----------+-----------+
| | |
SemanticSearch RepoMapper ImpactAnalyzer
```
**5 layers, 14 modules, 4 external dependencies.**
## Supported Languages
| Language | Extensions | Functions | Classes | Interfaces | Types | References |
|------------|---------------------|-----------|---------|------------|-------|------------|
| Python | `.py` | Yes | Yes | -- | -- | Yes |
| TypeScript | `.ts`, `.tsx` | Yes | Yes | Yes | Yes | Yes |
| JavaScript | `.js`, `.jsx` | Yes | Yes | -- | -- | Yes |
Other languages: contributions welcome. The extractor architecture is designed for
easy addition of new language backends.
## API Reference
### Core Classes
| Class | Purpose | Key Methods |
|-------|---------|-------------|
| `Symbol` | Data class for extracted symbols | `.name`, `.type`, `.file`, `.line`, `.signature`, `.references` |
| `TreesitterParser` | Parse source files into ASTs | `.parse_file(path)`, `.detect_language(path)` |
| `SymbolExtractor` | Extract symbols from parsed files | `.extract_symbols(path)`, `.clear_cache()` |
| `DependencyGraph` | Build and analyze dependency graphs | `.add_symbol()`, `.compute_importance()`, `.get_top_symbols(n)` |
| `SemanticSearch` | High-level code navigation | `.find_symbol()`, `.find_callers()`, `.find_callees()`, `.find_references()` |
| `ImpactAnalyzer` | Risk assessment for code changes | `.estimate_impact(name)`, `.find_dependencies(path)`, `.find_dependents(path)` |
| `RepoMapper` | Generate token-budgeted repo maps | `.build_map(budget)`, `.get_stats()` |
### Risk Score Algorithm
Impact analysis computes risk as: `min(base + caller_factor + type_factor, 1.0)`
| Factor | Range | Source |
|--------|-------|--------|
| `base` | 0.0 - 0.3 | PageRank importance score |
| `caller_factor` | 0.0 - 0.4 | `min(callers / 20, 0.4)` |
| `type_factor` | 0.0 - 0.3 | Symbol type (class=0.3, interface=0.25, function=0.2) |
Risk levels: **low** (< 0.3), **medium** (0.3-0.5), **high** (0.5-0.7), **critical** (> 0.7).
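As a rough sketch (an assumption of how the factors combine; the authoritative logic lives in `ImpactAnalyzer`), the formula and thresholds above translate to:

```python
def risk_score(importance: float, callers: int, symbol_type: str) -> float:
    """Combine the three factors from the table above into one score."""
    base = min(importance, 0.3)  # assumed capping of PageRank score to 0.0-0.3
    caller_factor = min(callers / 20, 0.4)
    type_factor = {"class": 0.3, "interface": 0.25, "function": 0.2}.get(symbol_type, 0.0)
    return min(base + caller_factor + type_factor, 1.0)

def risk_level(score: float) -> str:
    if score < 0.3:
        return "low"
    if score < 0.5:
        return "medium"
    if score <= 0.7:
        return "high"
    return "critical"

score = risk_score(importance=0.02, callers=5, symbol_type="function")
print(round(score, 2), risk_level(score))  # -> 0.47 medium
```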
## Limitations
- **Language support**: Python, TypeScript, and JavaScript only. No Go, Rust, Java, C++.
- **Reference extraction**: Based on name matching within AST, not full type resolution.
This means it can produce false positives for common names.
- **Performance**: Builds a full in-memory index on initialization. For very large
repositories (10k+ files), the initial scan may take several seconds.
- **Token estimation**: Uses a 4-chars-per-token heuristic, which is approximate.
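For reference, the 4-chars-per-token heuristic amounts to the following (a sketch of the idea; the package's internal estimator may round differently):

```python
def estimate_tokens(text: str) -> int:
    # Rough heuristic: about 4 characters per token for English text and code.
    return len(text) // 4

print(estimate_tokens("def login(username: str) -> Token"))  # -> 8
```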
## Development
```bash
# Clone and install in development mode
git clone https://github.com/rafapra3008/cervellaswarm.git
cd cervellaswarm/packages/code-intelligence
pip install -e ".[dev]"
# Run tests (395 tests, ~0.5s)
pytest
# Run with coverage
pytest --cov=cervellaswarm_code_intelligence --cov-report=term-missing
```
## Part of CervellaSwarm
This package is the code intelligence engine of [CervellaSwarm](https://github.com/rafapra3008/cervellaswarm),
a multi-agent AI coordination system. It works standalone -- no other CervellaSwarm
packages are required.
## License
[Apache-2.0](LICENSE)
| text/markdown | CervellaSwarm Contributors | null | null | null | null | ast, code-analysis, dependency-graph, impact-analysis, repository-mapping, semantic-search, symbol-extraction, tree-sitter | [
"Development Status :: 3 - Alpha",
"Environment :: Console",
"Intended Audience :: Developers",
"License :: OSI Approved :: Apache Software License",
"Operating System :: OS Independent",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :... | [] | null | null | >=3.10 | [] | [] | [] | [
"networkx>=3.0",
"scipy>=1.10.0",
"tree-sitter-language-pack>=0.3.0",
"tree-sitter>=0.23.0",
"pytest-cov>=5.0; extra == \"dev\"",
"pytest>=8.0; extra == \"dev\"",
"ruff>=0.9.0; extra == \"dev\"",
"pytest-cov>=5.0; extra == \"test\"",
"pytest>=8.0; extra == \"test\""
] | [] | [] | [] | [
"Homepage, https://github.com/rafapra3008/cervellaswarm",
"Repository, https://github.com/rafapra3008/cervellaswarm",
"Documentation, https://github.com/rafapra3008/cervellaswarm/tree/main/packages/code-intelligence",
"Bug Tracker, https://github.com/rafapra3008/cervellaswarm/issues"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-18T09:42:53.867068 | cervellaswarm_code_intelligence-0.1.0.tar.gz | 76,530 | da/06/f1b4ff455ec80666494fdbb21a38e6cb3d44bd2d9fd1aeeb9ed76fbda029/cervellaswarm_code_intelligence-0.1.0.tar.gz | source | sdist | null | false | 03239da233866adc186fbcd85610d0c1 | 970ae6eca63ff000b20a030c564d7f2424f86fefe9a531203fe3718dadfaf79f | da06f1b4ff455ec80666494fdbb21a38e6cb3d44bd2d9fd1aeeb9ed76fbda029 | Apache-2.0 | [
"LICENSE",
"NOTICE"
] | 244 |
2.4 | mlguardx | 0.1.0 | ML data validation, profiling, drift detection and data contracts framework | # MLGuardian
- Smart Schema Inference
- Data Contracts
- Drift Detection
- Data Profiling
<!-- utils.py functionality -->
1. Tracks whether data has changed
2. Detects schema changes
3. Reduces memory usage
4. Automatically detects the CSV format
5. Prepares datasets for production ML
Why this matters:
1. Schema change
2. Data Corruption
3. Memory overload
<!-- Profiling script -->
An automated profiling script that analyzes dataset structure, quality, correlations, and drift, and produces warnings and actionable ML recommendations.
<!-- Schema.py -->
This is a schema management system for ML datasets.
It does 4 major things:
- Automatically infer dataset schema
- Detect semantic meaning (email, URL, UUID, etc.)
- Validate new data against stored schema
- Handle schema evolution & drift
<!-- Problems commonly faced in ML projects -->
1. The dataset schema changes -> the model crashes
2. More NaNs or NULLs appear -> the model fails
3. Accuracy silently drops

What this package does:
1. Automatically infers column types
2. Detects target column
3. Detects semantic types (email, uuid, etc.)
4. Stores min/max/mean
5. Saves schema to JSON/YAML
6. Validates new data against schema
7. Calculates null %
8. Detects high cardinality
9. Detects constant columns
10. Finds correlated features
11. Produces HTML report
12. Suggests recommendations
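The schema features above (type inference, null %, cardinality, constant-column detection) can be illustrated with a few lines of plain pandas. This is a concept sketch only, not the mlguardx API; `infer_schema` is a hypothetical helper name.

```python
import pandas as pd

def infer_schema(df: pd.DataFrame) -> dict:
    # Concept sketch of schema inference (not the mlguardx API):
    # per-column dtype, null percentage, cardinality, constant detection.
    schema = {}
    for col in df.columns:
        s = df[col]
        n_unique = int(s.nunique(dropna=True))
        schema[col] = {
            "dtype": str(s.dtype),
            "null_pct": round(100 * s.isna().mean(), 2),
            "n_unique": n_unique,
            "is_constant": n_unique <= 1,
        }
    return schema

df = pd.DataFrame({"a": [1.0, 2.0, 2.0, None], "b": ["x", "x", "x", "x"]})
schema = infer_schema(df)
```

A stored dict like this is enough to validate incoming batches: compare dtypes, flag columns whose null percentage jumps, and reject frames whose columns drift.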
## Installation
pip install mlguardx
| text/markdown | Astha Paika | asthapaika647@gmail.com | null | null | null | null | [] | [] | https://github.com/asthapaikacse/mlguardian | null | >=3.7 | [] | [] | [] | [
"pandas",
"numpy",
"scipy",
"scikit-learn",
"pyyaml"
] | [] | [] | [] | [] | twine/6.2.0 CPython/3.9.13 | 2026-02-18T09:42:15.201247 | mlguardx-0.1.0.tar.gz | 21,438 | 31/8a/66c9516a57509bbc59b3ff305da3abb1f18e2cdecbacd86168a247ef7020/mlguardx-0.1.0.tar.gz | source | sdist | null | false | 6659e04f4a3cafe0678eb5f81def4cce | 9458ba4862d3ee43ca0ccaa4f50d59b83a32ab6d44213962077b8f67b29a2459 | 318a66c9516a57509bbc59b3ff305da3abb1f18e2cdecbacd86168a247ef7020 | null | [] | 250 |
2.4 | concentric | 1.4 | a simple connection manager for connecting to various rdbms (mostly legacy) | concentric
==========
a connection manager for python for connecting to various databases.
supported databases:
* oracle
* netsuite
* mssql
* mysql
* vertica
* redshift
* postgres
* db2 i-series
overview
--------
concentric is based on `waddle <https://pypi.org/project/waddle/>`_ for secrets
management, which means it is tightly coupled to AWS KMS for its key management.
quick start
-----------
#. create a waddle configuration file
.. code-block::
oracle:
host: localhost
user: scott
password: tiger
sid: xe
#. waddle in the password for security
.. code-block::
waddle add-secret -f /path/to/config.yml oracle.password
#. use it
.. code-block::
from concentric.managers import setup_concentric
from concentric.managers import CachingConnectionManager as ccm
setup_concentric('/path/to/waddle_config.yml', '/path/to/another_config.yml')
conn = ccm.connect('oracle')
with conn.cursor() as cursor:
cursor.execute('select sysdate as dt from dual')
results = cursor.fetchall()
contributing
------------
Sample configuration files:
#. `db2 <./concentric/example_config/db2.yml>`_
#. `hp3000 <./concentric/example_config/hp3000.yml>`_
#. `mysql <./concentric/example_config/mysql.yml>`_
#. `netsuite <./concentric/example_config/netsuite.yml>`_
#. `oracle <./concentric/example_config/oracle_sid.yml>`_
#. `postgres <./concentric/example_config/postgres.yml>`_
#. `redshift <./concentric/example_config/redshift.yml>`_
#. `snowflake <./concentric/example_config/snowflake.yml>`_
#. `sql server <./concentric/example_config/sql_server.yml>`_
#. `vertica <./concentric/example_config/vertica.yml>`_
| null | Preetam Shingavi | p.shingavi@yahoo.com | null | null | BSD | null | [
"Development Status :: 5 - Production/Stable",
"Environment :: Console",
"Intended Audience :: Developers",
"Intended Audience :: System Administrators",
"License :: OSI Approved :: BSD License",
"Operating System :: POSIX",
"Operating System :: Unix",
"Operating System :: MacOS :: MacOS X",
"Operat... | [] | https://bitbucket.org/dbuy/concentric | null | null | [] | [] | [] | [
"waddle",
"raft",
"sqlalchemy",
"pyodbc; extra == \"db2i\"",
"sqlalchemy-ibmi; extra == \"db2i\"",
"jaydebeapi; extra == \"hp3000\"",
"pyodbc; extra == \"mssql\"",
"mysqlclient; extra == \"mysql\"",
"pyodbc; extra == \"netsuite\"",
"cx_oracle; extra == \"oracle\"",
"oracledb; extra == \"oracle\"... | [] | [] | [] | [] | twine/6.2.0 CPython/3.11.14 | 2026-02-18T09:40:40.878962 | concentric-1.4.tar.gz | 13,963 | 9c/eb/594d05d45a02e553a3305f8d5c5e0daf3019a2b91b10be03dceb0f3edab7/concentric-1.4.tar.gz | source | sdist | null | false | d0ecd250b5f4d1d0cfd86e36a681d9b2 | ee3d63bc95aa7b8b6ac9cfbef15aeef3a8643bb2a2a97bb93ade6fb98382c2cc | 9ceb594d05d45a02e553a3305f8d5c5e0daf3019a2b91b10be03dceb0f3edab7 | null | [] | 149 |
2.3 | unique_six | 0.1.2 | Unique Six API client | # Unique Six Connector
A Python connector library for the [Six API](https://api.six-group.com/web/), providing easy access to end-of-day history, free-text search, intraday history, intraday snapshots, and entity listing.
## Installation
```bash
poetry add unique_six
```
Or using pip:
```bash
pip install unique_six
```
## Development

```bash
# Clone the repository
git clone <repository_url>
cd unique_six
# Install dependencies
poetry install
# Run linting
poetry run ruff check .
# Run formatting
poetry run ruff format .
```
## License
Proprietary
## Authors
- Ahmed Jellouli <ahmed.jellouli.ext@unique.ch>
## Support
For issues and questions, please contact the maintainers or refer to the [Six API documentation](https://developer.six-group.com/en/home.html).
# Changelog
All notable changes to this project will be documented in this file.
The format is based on [Keep a Changelog](https://keepachangelog.com/en/1.0.0/),
and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0.html).
## [0.1.2] - 2026-02-18
- Bump `cryptography` to `^46.0.5` to fix CVE-2026-26007
## [0.1.1] - 2026-01-08
- Fix unresolved import of get six client
## [0.1.0] - 2026-01-06
- Initial release of `unique_six`
| text/markdown | Ahmed Jellouli | ahmed.jellouli.ext@unique.ch | null | null | Proprietary | null | [
"License :: Other/Proprietary License",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: 3.13"
] | [] | null | null | <4.0,>=3.12 | [] | [] | [] | [
"pydantic<3.0.0,>=2.12.4",
"python-dotenv<2.0.0,>=1.0.1",
"unique-toolkit<2.0.0,>=1.38.3",
"cryptography<47.0.0,>=46.0.5"
] | [] | [] | [] | [] | poetry/2.1.3 CPython/3.12.3 Linux/6.14.0-1017-azure | 2026-02-18T09:40:27.097158 | unique_six-0.1.2.tar.gz | 21,822 | ac/7f/ef005562cc3220b66401e22af0f1e94f612468836867cb690727ec8d914a/unique_six-0.1.2.tar.gz | source | sdist | null | false | 4208c12b132fd769babe8e16cd66c03b | 123e1c278015eb3a78403a82ba62c7cd61eb73c451badcf5d3660873d34ef15b | ac7fef005562cc3220b66401e22af0f1e94f612468836867cb690727ec8d914a | null | [] | 0 |
2.4 | quicklearnkit | 0.4.2 | Learning-first machine learning utilities library for simplified imports, sampling, splitting, and probabilistic preprocessing. |
# QuickLearnKit
[](https://pypi.org/project/quicklearnkit/)

[](https://quicklearnkit.readthedocs.io/en/latest/)

QuickLearnKit is a **learning-first machine learning utilities library** designed to simplify common ML workflows while preserving full control for advanced users.
It provides:
- Simplified model imports
- Sampling and dataset utilities
- Train–test splitting
- Probabilistic, group-aware imputation
- Teaching-friendly visualization wrappers
- Notebook → Script pipeline compilation
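As a rough illustration of what group-aware imputation means, here is a plain-pandas sketch. It is not the quicklearnkit API, and it is a deterministic simplification of the probabilistic variant: missing values are filled from the statistics of the row's own group.

```python
import pandas as pd

def group_mean_impute(df: pd.DataFrame, column: str, group: str) -> pd.DataFrame:
    # Fill NaNs in `column` with the mean of the row's own group.
    # Deterministic stand-in for probabilistic, group-aware imputation.
    out = df.copy()
    out[column] = out[column].fillna(
        out.groupby(group)[column].transform("mean")
    )
    return out

df = pd.DataFrame({"g": ["A", "A", "A", "B", "B"],
                   "v": [1.0, None, 3.0, 10.0, None]})
filled = group_mean_impute(df, "v", "g")  # A-rows filled with 2.0, B-rows with 10.0
```

A probabilistic version would instead sample from each group's empirical distribution rather than substituting the mean.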
---
## Installation
```bash
pip install quicklearnkit
````
---
## Documentation
Full documentation is available at:
👉 [https://quicklearnkit.readthedocs.io/en/latest/](https://quicklearnkit.readthedocs.io/en/latest/)
---
## Philosophy
> Remove mechanical friction so students can focus on concepts, not syntax.
QuickLearnKit bridges:
Learning → Experimentation → Structured building
---
## License
MIT License
| text/markdown | Hazi Afrid | null | null | null | MIT License
Copyright (c) 2025 Masterhazi
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.
| null | [] | [] | null | null | >=3.8 | [] | [] | [] | [
"numpy",
"pandas",
"scikit-learn",
"xgboost",
"seaborn",
"matplotlib"
] | [] | [] | [] | [] | twine/6.2.0 CPython/3.10.19 | 2026-02-18T09:40:20.827886 | quicklearnkit-0.4.2.tar.gz | 12,297 | a8/02/7396578dc77f092189f111e26b553b2157d2dd2d4e227d3a48516128e23e/quicklearnkit-0.4.2.tar.gz | source | sdist | null | false | 02ffe95adc9cf5bf1b4be7fbcb446f64 | 3e0a0e16e1276842d62cf20dcfddf8baaac122d8ef70d0540173c5f3f49622ae | a8027396578dc77f092189f111e26b553b2157d2dd2d4e227d3a48516128e23e | null | [
"LICENSE"
] | 225 |
2.1 | odoo14-addon-ssi-stock | 14.0.8.4.0 | Inventory | .. image:: https://img.shields.io/badge/licence-AGPL--3-blue.svg
:target: http://www.gnu.org/licenses/agpl-3.0-standalone.html
:alt: License: AGPL-3
=========
Inventory
=========
Installation
============
To install this module, you need to:
1. Clone the branch 14.0 of the repository https://github.com/open-synergy/ssi-stock
2. Add the path to this repository in your configuration (addons-path)
3. Update the module list
4. Go to menu *Apps -> Apps -> Main Apps*
5. Search For *Inventory*
6. Install the module
Bug Tracker
===========
Bugs are tracked on `GitHub Issues
<https://github.com/open-synergy/ssi-stock/issues>`_.
In case of trouble, please check there if your issue has already been reported.
If you spotted it first, help us smashing it by providing a detailed
and welcomed feedback.
Credits
=======
Contributors
------------
* Andhitia Rama <andhitia.r@gmail.com>
* Miftahussalam <miftahussalam08@gmail.com>
Maintainer
----------
.. image:: https://simetri-sinergi.id/logo.png
:alt: PT. Simetri Sinergi Indonesia
:target: https://simetri-sinergi.id.com
This module is maintained by the PT. Simetri Sinergi Indonesia.
| null | PT. Simetri Sinergi Indonesia, OpenSynergy Indonesia | null | null | null | AGPL-3 | null | [
"Programming Language :: Python",
"Framework :: Odoo",
"Framework :: Odoo :: 14.0",
"License :: OSI Approved :: GNU Affero General Public License v3"
] | [] | https://simetri-sinergi.id | null | >=3.6 | [] | [] | [] | [
"odoo14-addon-ssi-master-data-mixin",
"odoo14-addon-ssi-multiple-approval-mixin",
"odoo14-addon-ssi-policy-mixin",
"odoo14-addon-ssi-print-mixin",
"odoo14-addon-ssi-product",
"odoo14-addon-stock-inventory-preparation-filter",
"odoo14-addon-stock-move-backdating",
"odoo14-addon-stock-move-line-auto-fil... | [] | [] | [] | [] | twine/5.1.1 CPython/3.12.3 | 2026-02-18T09:40:12.059650 | odoo14_addon_ssi_stock-14.0.8.4.0-py3-none-any.whl | 80,651 | cf/a7/abce0598489594e031f4c80b480abb2fd0b44aadbcc137241515ebdfa3d0/odoo14_addon_ssi_stock-14.0.8.4.0-py3-none-any.whl | py3 | bdist_wheel | null | false | cc085c5fc6cf4e844ee50a285db99cc3 | 9d9c21016fab6ac9d0fed5f5062da2b25a9129bf7a87606ba57d983a29dc16c7 | cfa7abce0598489594e031f4c80b480abb2fd0b44aadbcc137241515ebdfa3d0 | null | [] | 102 |
2.4 | quirkvis | 0.1.1 | Highly Customizable Quantum Circuit SVG Visualizer | # Quantum QuirkVis 🙃
A Python package that draws quantum circuits written in QASM/OpenQASM 3.0 and outputs SVG, with a strong focus on personalization.
## Features
- SVG output
- Theming engine (JSON)
- Personalization (night mode, custom colors)
- Gate symbol substitutions (emojis, icons, animations)
- Awesome "dial" for parametrized gates
- Animations and other CSS filters
- Lightweight
# Themes
### Default

### Night

### Matrix

### Emoji

## Installation
```bash
pip install quirkvis
```
## Usage
```python
from quantum_quirkvis import draw
qasm_str = """
OPENQASM 3.0;
qubit[2] q;
h q[0];
cx q[0], q[1];
"""
draw(qasm_str, theme="matrix", filename="path/to/file.svg")
# or
svg = draw(qasm_str, theme="matrix")
```
## Personalization
Check the theme JSON files to see how much you can customize; anything can be changed.
Then create your own theme. You don't need to override the whole file: change only the values of interest, and the rest fall back to the defaults.
- mytheme.json
```json
{
"name": "default",
"styles": {
"background": "#777777",
}
}
```
then simply
```python
draw(qasm_str, theme="mytheme.json")
```
You can also use the cli provided which exposes the command **qasmvis**
```bash
qasmvis ghz.qasm -t matrix -o ghz.svg
```
or
```bash
qasmvis ghz.qasm -t matrix > ghz.svg
```
or
```bash
cat ghz.qasm | qasmvis -t night > ghz.svg
```
## Libraries
The package requires only pyqasm, which parses the QASM file or string into an AST that is then rendered to SVG with the selected theme.
## License
MIT
## Limitations
Currently the library "unrolls" the circuit, so complex or custom gates (CY, Toffoli, CSWAP, etc.) are drawn as their primitive decompositions.
U-n gates are not yet supported
## unitaryDESIGN
This repo is competing for a bounty in the [unitaryDESIGN](https://unitary.design) hackathon.
| text/markdown | null | Luis J Camargo <lsjcp@yahoo.com> | null | null | null | null | [] | [] | null | null | >=3.8 | [] | [] | [] | [
"pyqasm"
] | [] | [] | [] | [] | twine/6.1.0 CPython/3.13.7 | 2026-02-18T09:39:43.552635 | quirkvis-0.1.1.tar.gz | 11,904 | a0/e7/483f69e80bbcada4d23486082344a3c2703d306879edfa295b7cc3b6ff12/quirkvis-0.1.1.tar.gz | source | sdist | null | false | 749b6cc5bec6b2e9e36b4b8156fd656c | afcf609002075c68f0fc21f3c581a90d8fb875eafb69a85f5bd4d6f0e7acd4b9 | a0e7483f69e80bbcada4d23486082344a3c2703d306879edfa295b7cc3b6ff12 | null | [
"LICENSE"
] | 234 |
2.4 | codegrade | 17.0.29 | A client library for accessing CodeGrade | # CodeGrade API
This library makes it easier to use the CodeGrade API. Its API allows you to
automate your usage of CodeGrade. We are currently still busy documenting the
entire API, but everything that is possible in the UI of CodeGrade is also
possible through the API.
## Installation
You can install the library through [pypi](https://pypi.org/), simply run
`python3 -m pip install codegrade`. If you want to get a dev version with support
for the latest features, simply email [support@codegrade.com](mailto:support@codegrade.com)
and we'll provide you with a dev version.
## Usage
First, create a client:
```python
import os

import codegrade
# Don't store your password in your code!
with codegrade.login(
username='my-username',
password=os.getenv('CG_PASSWORD'),
tenant='My University',
) as client:
pass
# Or supply information interactively.
with codegrade.login_from_cli() as client:
pass
```
Now call your endpoint and use your models:
```python
from codegrade.models import PatchCourseData
courses = client.course.get_all()
for course in courses:
client.course.patch(
PatchCourseData(name=course.name + ' (NEW)'),
course_id=course.id,
)
# Or, simply use dictionaries.
for course in courses:
client.course.patch(
{"name": course.name + ' (NEW)'},
course_id=course.id,
)
```
For the complete documentation go to
https://python.api.codegrade.com.
## Backwards compatibility
CodeGrade is constantly upgrading its API, but we try to minimize backwards
incompatible changes. We'll announce every backwards incompatible change in the
[changelog](http://codegrade.com/changelog). A new version of the API
client is released with every release of CodeGrade, which is approximately every
month. To upgrade simply run `pip install --upgrade codegrade`.
## Supported python versions
We support Python 3.8 and above; `pypy` is currently not tested but should work
just fine.
## License
The library is licensed under AGPL-3.0-only or BSD-3-Clause-Clear. | text/markdown | null | null | null | null | null | null | [] | [] | null | null | <4.0,>=3.8.0 | [] | [] | [] | [
"httpx<1.0.0,>=0.19.0",
"python-dateutil<3,>=2.8.1",
"structlog>=20.1.0",
"typing-extensions<5,>=4.13.2",
"validate-email~=1.3"
] | [] | [] | [] | [
"Documentation, https://python.api.codegrade.com"
] | twine/6.1.0 CPython/3.12.9 | 2026-02-18T09:39:35.229019 | codegrade-17.0.29.tar.gz | 304,087 | 4f/60/da54fc45286c6d37312cca7ab1fb1e400bc66b99e79c6d6634f04530d369/codegrade-17.0.29.tar.gz | source | sdist | null | false | 430ab494e92824c37af2e4f25ac16eb5 | 74591f38cf7799df95f707837f78f7f4174ba69e1356e4ccfa86820037b919bd | 4f60da54fc45286c6d37312cca7ab1fb1e400bc66b99e79c6d6634f04530d369 | AGPL-3.0-only OR BSD-3-Clause-Clear | [] | 252 |
2.1 | materials-commons-cli | 2.2.2 | Materials Commons CLI | This package contains the materials_commons.cli module. This module is an interface
to the Materials Commons project. We assume you have used (or are otherwise familiar with) the Materials
Commons web site, https://materialscommons.org/, or a similar site based on the
Materials Commons code (https://github.com/materials-commons/materialscommons), and intend to use these
tools in that context.
| null | Materials Commons development team | materials-commons-help@umich.edu | null | null | MIT | materials science mc materials-commons prisms | [
"Development Status :: 4 - Beta",
"Intended Audience :: Developers",
"Intended Audience :: Science/Research",
"Topic :: Scientific/Engineering :: Information Analysis",
"License :: OSI Approved :: MIT License",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.8"
] | [] | https://materials-commons.github.io/materials-commons-cli/html/index.html | null | null | [] | [] | [] | [] | [] | [] | [] | [] | twine/6.2.0 CPython/3.10.12 | 2026-02-18T09:39:10.491643 | materials_commons_cli-2.2.2.tar.gz | 96,510 | d4/dd/bba535adb40d66a3a25c71ea78ed91f4ddb469ccbdcc1a5b70c667ed2f1e/materials_commons_cli-2.2.2.tar.gz | source | sdist | null | false | a37fbf64bb900dbc887144e8d4a8b6af | 1e2f82e0fa2b950d0e8b95a418aa799afa87e85f4a3807f3a34db2d901fb501f | d4ddbba535adb40d66a3a25c71ea78ed91f4ddb469ccbdcc1a5b70c667ed2f1e | null | [] | 230 |
2.4 | freeact | 0.7.8 | Freeact code action agent | # freeact
<p align="left">
<a href="https://gradion-ai.github.io/freeact/"><img alt="Website" src="https://img.shields.io/website?url=https%3A%2F%2Fgradion-ai.github.io%2Ffreeact%2F&up_message=online&down_message=offline&label=docs"></a>
<a href="https://pypi.org/project/freeact/"><img alt="PyPI - Version" src="https://img.shields.io/pypi/v/freeact?color=blue"></a>
<a href="https://github.com/gradion-ai/freeact/releases"><img alt="GitHub Release" src="https://img.shields.io/github/v/release/gradion-ai/freeact"></a>
<a href="https://github.com/gradion-ai/freeact/actions"><img alt="GitHub Actions Workflow Status" src="https://img.shields.io/github/actions/workflow/status/gradion-ai/freeact/test.yml"></a>
<a href="https://github.com/gradion-ai/freeact/blob/main/LICENSE"><img alt="GitHub License" src="https://img.shields.io/github/license/gradion-ai/freeact?color=blueviolet"></a>
</p>
Freeact is a lightweight agent that acts by executing Python code and shell commands.
Code actions are key for an agent to improve itself and its tool library.
Freeact has a tiny core, a small system prompt, and is extensible with agent skills.
It relies on a minimal set of generic tools: read, write, execute, subagent, and tool search.
Code and shell command execution runs locally in a stateful, sandboxed environment.
Freeact supports utilization of MCP servers by generating Python APIs for their tools.
**Supported models**: Freeact supports any model compatible with [Pydantic AI](https://ai.pydantic.dev/), with `gemini-3-flash-preview` as the default. See [Models](https://gradion-ai.github.io/freeact/models/) for provider configuration and examples.
## Documentation
- 📚 [Documentation](https://gradion-ai.github.io/freeact/)
- 🚀 [Quickstart](https://gradion-ai.github.io/freeact/quickstart/)
- 🤖 [llms.txt](https://gradion-ai.github.io/freeact/llms.txt)
- 🤖 [llms-full.txt](https://gradion-ai.github.io/freeact/llms-full.txt)
## Usage
| Component | Description |
|---|---|
| **[Agent SDK](https://gradion-ai.github.io/freeact/sdk/)** | Agent harness and Python API for building freeact applications. |
| **[CLI tool](https://gradion-ai.github.io/freeact/cli/)** | Terminal interface for interactive conversations with a freeact agent. |
## Capabilities
| Capability | Description |
|---|---|
| **Code actions** | Freeact agents act via Python code and shell commands. This enables tool composition and intermediate result processing in a single LLM inference pass. |
| **Local execution** | Freeact executes code and shell commands locally in an IPython kernel provided by [ipybox](https://github.com/gradion-ai/ipybox). Data, configuration and generated tools live in local workspaces. |
| **Sandbox mode** | IPython kernels optionally run in a sandbox environment based on Anthropic's [sandbox-runtime](https://github.com/anthropic-experimental/sandbox-runtime). It enforces filesystem and network restrictions on OS-level. |
| **MCP code mode** | Freeact calls MCP server tools programmatically via generated Python APIs. This enables composition of tool calls in code actions with much lower latency. |
| **Tool discovery** | Tools are discovered via category browsing or hybrid BM25/vector search. On-demand loading frees the context window and scales to larger tool libraries. |
| **Tool authoring** | Agents can create new tools, enhance existing tools, or save code actions as reusable tools. This captures successful experience as executable knowledge. |
| **Agent skills** | Skills give agents new capabilities and expertise based on [agentskills.io](https://agentskills.io/). They compose naturally with code actions and agent-authored tools. |
| **Subagent delegation** | Tasks can be delegated to subagents, each using their own sandbox. It enables specialization and parallelization without cluttering the main agent's context. |
| **Action approval** | Fine-grained approval of code actions and (programmatic) tool calls from both main agents and subagents. Enables human control over potentially risky actions. |
| **Session persistence** | Freeact persists agent state incrementally. Persisted sessions can be resumed and serve as a record for debugging, evaluation, and improvement. |
---
<sup>1)</sup> Freeact also supports JSON-based tool calls on MCP servers, but mainly for internal operations.<br>
<sup>2)</sup> A workspace is an agent's working directory where it manages tools, skills, configuration and other resources.
| text/markdown | null | Martin Krasser <martin@gradion.ai> | null | null | null | null | [] | [] | null | null | <3.15,>=3.11 | [] | [] | [] | [
"aiofiles>=25.1.0",
"aiostream>=0.7.1",
"google-genai>=1.56.0",
"ipybox>=0.7.4",
"pillow>=12.0.0",
"prompt-toolkit>=3.0.0",
"pydantic-ai>=1.59.0",
"python-dotenv>=1.0.0",
"pyyaml>=6.0.3",
"rich>=14.2.0",
"sqlite-vec>=0.1.0",
"watchfiles>=1.0.0"
] | [] | [] | [] | [] | twine/6.1.0 CPython/3.13.7 | 2026-02-18T09:36:54.293161 | freeact-0.7.8.tar.gz | 46,144 | b4/8f/6f3c54b0d92fa8624850894215e1cf3d1a31313884a395fb44bc6278a7a3/freeact-0.7.8.tar.gz | source | sdist | null | false | 94e78dce5e9fbedfa2e3601a0183b3bf | 657e5dbb072334f4359e107d8bfb0c40adf5f15bca6783ba55c705f6f2c5d0a1 | b48f6f3c54b0d92fa8624850894215e1cf3d1a31313884a395fb44bc6278a7a3 | Apache-2.0 | [
"LICENSE"
] | 234 |
2.1 | tsu-wave | 1.0.0 | TSU-WAVE: Tsunami Spectral Understanding of Wave-Amplitude Variance and Energy | <div align="center">
```
T S U - W A V E
Tsunami Spectral Understanding of Wave-Amplitude Variance and Energy
```
---
[](https://gitlab.com/gitdeeper4/tsu-wave/-/releases)
[](LICENSE)
[](https://python.org)
[]()
[]()
[](https://doi.org/10.5281/zenodo.XXXXXXXX)
[](https://osf.io/XXXXX/)
[](https://orcid.org/0009-0003-8903-0029)
---
**`91.3%` Run-up Accuracy** ·
**`96.4%` Threat Detection** ·
**`3.1%` False Alert Rate** ·
**`67 min` Mean Lead Time** ·
**`23` Events Validated**
---
[🌊 Website](https://tsu-wave.netlify.app) ·
[📖 Documentation](https://tsu-wave.netlify.app/documentation) ·
[📊 Live Dashboard](https://tsu-wave.netlify.app/dashboard) ·
[🔬 Research Paper](#-research-paper) ·
[🚀 Quick Start](#-quick-start)
</div>
---
## 📋 Table of Contents
- [Overview](#-overview)
- [What's New in v1.0.0](#-whats-new-in-v100)
- [Seven Hydrodynamic Parameters](#-seven-hydrodynamic-parameters)
- [Architecture](#️-architecture)
- [Project Structure](#-project-structure)
- [Quick Start](#-quick-start)
- [Installation](#-installation)
- [Usage](#-usage)
- [API Reference](#-api-reference)
- [Research Paper](#-research-paper)
- [Data & Resources](#-data--resources)
- [Contributing](#-contributing)
- [License](#-license)
- [Contact](#-contact)
---
## 🌊 Overview
**TSU-WAVE** is a production-ready, physics-based framework for real-time tsunami wave front analysis and coastal inundation forecasting. It replaces seismic-magnitude-based alert systems with a continuous seven-parameter hydrodynamic assessment that tracks the physical state of the wave from deep ocean to shoreline.
> *"The physics of long-wave shoaling, bathymetric energy focusing, and hydrodynamic front instability are deterministic and measurable in real time."*
> — TSU-WAVE Research Paper, February 2026
Validated against **23 documented events** spanning 36 years (1990–2026), propagation distances of 180 km to 14,200 km, and run-up heights of 0.3 m to 40.5 m.
### 📊 Performance vs. Existing Systems
| System | Run-up RMSE | False Alert Rate | Mean Lead Time |
|--------|-------------|-----------------|----------------|
| **TSU-WAVE** | **11.7%** | **3.1%** | **67 min** |
| DART + linear model | 35–65% | 8.4% | 52 min |
| MOST (NOAA) | 28–45% | 6.2% | 58 min |
| TUNAMI-N2 | 22–40% | 5.8% | 55 min |
| Seismic-only (legacy) | 60–300% | 12.1% | 61 min |
---
## 🆕 What's New in v1.0.0
> **Released:** February 17, 2026
- 🌊 **First public release** of the complete TSU-WAVE framework
- 🌐 **Website live**: [tsu-wave.netlify.app](https://tsu-wave.netlify.app)
- 📖 **Documentation portal**: [/documentation](https://tsu-wave.netlify.app/documentation)
- 📊 **Interactive dashboard**: [/dashboard](https://tsu-wave.netlify.app/dashboard)
- ✅ All **47/47 tests** passing
- 🗺️ **180 pre-computed BECF bay maps** included
- 📦 **23-event validation dataset** (1990–2026)
- 📝 **Research paper** finalized (28,000 words · 95 pages)
- 🔗 **Zenodo dataset**: `10.5281/zenodo.XXXXXXXX` *(to be activated)*
- 📋 **OSF pre-registration**: `osf.io/XXXXX` *(to be activated)*
### Version History
| Version | Date | Notes |
|---------|------|-------|
| **v1.0.0** | 2026-02-17 | ✅ First public release |
| v0.9.0 | 2026-01-20 | Beta — full validation suite |
| v0.5.0 | 2025-09-15 | Alpha — core NSWE solver |
| v0.1.0 | 2025-05-01 | Prototype — parameter definitions |
---
## 🔬 Seven Hydrodynamic Parameters
> All parameters are derived from the **Nonlinear Shallow-Water Equations (NSWE)** and computed continuously in real time. Each maps to a distinct physical process in the tsunami lifecycle.
---
### 1 · WCC — Wave Front Celerity Coefficient
Measures departure from linear shallow-water propagation speed, indicating onset of nonlinear wave regime.
```
c_NL = √(gH) · [1 + 3η/4H − π²H²/6λ²]
WCC = c_observed / c₀ = c_NL / √(gH)
```
> **Safe:** WCC < `1.35` · **Alert:** WCC > `1.35` · **Critical:** WCC > `1.58`
> *Activated when wave height-to-depth ratio η/H exceeds 0.15*
---
### 2 · KPR — Kinetic-to-Potential Energy Transfer Ratio
Tracks the partition between kinetic and potential wave energy, identifying bore formation.
```
E_K = ½·ρ·H·u² E_P = ½·ρ·g·η²
KPR = E_K / E_P = (H·u²) / (g·η²)
```
> **Safe:** KPR < `1.2` · **Alert:** KPR > `1.6` · **Critical:** KPR > `2.0` *(bore formation)*
> *Linear equipartition: KPR = 1.0*
---
### 3 · HFSI — Hydrodynamic Front Stability Index
Quantifies wave front stability via the Boussinesq parameter — the primary early-warning indicator.
```
Bo = H³ / (η·λ²)
HFSI = tanh(Bo)
```
> **Safe:** HFSI > `0.80` · **Alert:** HFSI < `0.60` · **Critical:** HFSI < `0.40`
> *Instability onset confirmed at h/H₀ = 0.42 ± 0.05 across 23 events*
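The definition transcribes directly to code. The two evaluations below use illustrative deep-ocean vs. nearshore values (assumed, not from the 23-event dataset); the Boussinesq parameter collapses as the wave shoals, driving HFSI toward the critical band.

```python
import math

def hfsi(H: float, eta: float, lam: float) -> float:
    # HFSI = tanh(Bo) with Bo = H^3 / (eta * lam^2)
    return math.tanh(H ** 3 / (eta * lam ** 2))

deep = hfsi(H=4000.0, eta=0.5, lam=200_000.0)  # deep ocean: stable front
near = hfsi(H=20.0, eta=2.0, lam=5_000.0)      # nearshore: unstable front
```

The tanh saturation keeps HFSI in (0, 1), so thresholds like 0.80 / 0.60 / 0.40 partition a bounded scale.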
---
### 4 · BECF — Bathymetric Energy Concentration Factor
Quantifies energy amplification by convergent bathymetric geometries — the dominant spatial control.
```
BECF = [H₀/H(x)]^(1/2) · [b₀/b(x)]
```
> **Safe:** BECF < `2.0` · **Alert:** BECF > `4.0` · **Critical:** BECF > `6.0`
> *Explains 84% of spatial run-up variability. Validated ρ = 0.947 (p < 0.001)*
---
### 5 · SDB — Spectral Dispersion Bandwidth
Tracks spectral spreading of wave energy and nonlinear harmonic energy transfer.
```
SDB = Δf₉₅ / f_peak
```
> **High Threat:** SDB < `1.0` *(narrow-band coherent bore)*
> **Reduced Threat:** SDB > `3.5` *(broad dispersed packet)*
> *Second harmonic F₂ > 15% at h/H₀ > 0.35 — confirmed nonlinear cascade*
---
### 6 · SBSP — Shoreline Boundary Stress Parameter
Estimates inundation momentum flux and bore formation at the shoreline transition.
```
SBSP = Fr² · (H/H_ref) = (u²·H) / (g·H_ref²)
```
> **Safe:** SBSP < `0.3` · **Alert:** SBSP > `0.7` · **Critical:** SBSP > `1.2` *(supercritical)*
> *Run-up regression: R = 19.7 × SBSP − 2.1 [m] · Pearson r = 0.956*
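The stress parameter and the run-up regression quoted above compose directly. The input values below are illustrative, not a documented event:

```python
def sbsp(u: float, H: float, H_ref: float, g: float = 9.81) -> float:
    # SBSP = Fr^2 * (H / H_ref) = u^2 * H / (g * H_ref^2)
    return u ** 2 * H / (g * H_ref ** 2)

def runup_estimate(s: float) -> float:
    # Empirical regression from above: R = 19.7 * SBSP - 2.1 [m]
    return 19.7 * s - 2.1

s = sbsp(u=8.0, H=5.0, H_ref=5.0)  # > 1.2 -> supercritical (critical band)
R = runup_estimate(s)              # predicted run-up in metres
```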
---
### 7 · SMVI — Sub-Surface Micro-Vorticity Index
Detects vorticity generation at bathymetric slope breaks — governs localized extreme run-up events.
```
ζ = ∂v/∂x − ∂u/∂y
SMVI = (1/A)·∫∫|ζ(x,y,t)|dA / ζ_reference
```
> **Safe:** SMVI < `0.2` · **Alert:** SMVI > `0.4` · **Critical:** SMVI > `0.6`
> *Monai Valley 1993: SMVI = 0.72 → 31 m run-up vs. 8 m regional average*
---
### Coastal Hazard Index (CHI)
All seven parameters combine into a single actionable index:
```
CHI = 0.12·WCC* + 0.19·KPR* + 0.24·HFSI* + 0.21·BECF*
+ 0.08·SDB* + 0.11·SBSP* + 0.05·SMVI*
```
| CHI Range | Status | Action |
|-----------|--------|--------|
| < 0.30 | 🟢 LOW | Monitoring mode |
| 0.30 – 0.60 | 🟡 MODERATE | Issue advisory |
| 0.60 – 0.80 | 🟠 HIGH | Issue warning / prepare evacuation |
| 0.80 – 1.00 | 🔴 SEVERE | Execute evacuation immediately |
| > 1.00 | ⛔ CATASTROPHIC | Maximum impact expected |
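The weighted sum and the banding table transcribe to a few lines. This sketch assumes the starred parameters have already been normalized to [0, 1]; the normalization scheme itself is not spelled out here.

```python
# Weights from the CHI definition above; they sum to 1.00.
WEIGHTS = {"WCC": 0.12, "KPR": 0.19, "HFSI": 0.24, "BECF": 0.21,
           "SDB": 0.08, "SBSP": 0.11, "SMVI": 0.05}

def chi(normalized: dict) -> float:
    # Weighted combination of the seven normalized (starred) parameters.
    return sum(WEIGHTS[k] * normalized[k] for k in WEIGHTS)

def status(c: float) -> str:
    # Map a CHI value to the alert band from the table above.
    if c < 0.30:
        return "LOW"
    if c < 0.60:
        return "MODERATE"
    if c < 0.80:
        return "HIGH"
    if c <= 1.00:
        return "SEVERE"
    return "CATASTROPHIC"

level = chi({k: 0.5 for k in WEIGHTS})  # all parameters mid-range
```

Because the weights sum to 1, a uniform input of 0.5 yields CHI = 0.5, i.e. the MODERATE band.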
---
## 🏗️ Architecture
```
┌─────────────────────────────────────────────────────────────────────┐
│           TSU-WAVE — Three-Layer Processing Architecture            │
└─────────────────────────────────────────────────────────────────────┘

  SENSOR LAYER                PROCESSING LAYER         OUTPUT LAYER

┌──────────────┐            ┌──────────────────┐     ┌────────────────┐
│ DART BPR     │──Iridium─▶ │                  │────▶│ CHI Dashboard  │
│ Tide Gauges  │──TCP/IP──▶ │     Signal       │     │ Run-up Map     │
│ ADCP Arrays  │──TCP/IP──▶ │   Processing     │────▶│ Alert Stream   │
│ GPS Buoys    │──Iridium─▶ │    Pipeline      │     │ REST API       │
└──────────────┘            │        │         │     │ WebSocket Feed │
 Deep ocean,                │        ▼         │     └────────────────┘
 shelf break,               │   NSWE Solver    │
 nearshore                  │   (Fortran 90)   │
                            │        │         │
 Latency budget:            │        ▼         │
   DART → receive: 120 s    │   7-Parameter    │
   Signal proc.:    15 s    │   Computation    │
   NSWE solve:     124 s    │        │         │
   CHI update:       1 s    │        ▼         │
   Alert send:      30 s    │    CHI Engine    │
   ─────────────            │  + Alert Manager │
   Total: ~5 min            └──────────────────┘
                              TimescaleDB · Redis
                              PostgreSQL · FastAPI
```
### Technology Stack
| Layer | Technology | Purpose |
|-------|-----------|---------|
| Core solver | Fortran 90 (f2py) | NSWE integration |
| Framework | Python 3.10+ | Parameter computation, API |
| Database | TimescaleDB + PostgreSQL 14 | Wave time-series storage |
| Cache | Redis 7 | Real-time parameter cache |
| API | FastAPI + JWT | REST + WebSocket endpoints |
| Dashboard | Streamlit | Live monitoring interface |
| Container | Docker + Kubernetes | Deployment |
| IaC | Terraform | Cloud infrastructure |
---
## 📁 Project Structure
```
tsu-wave/
│
├── 📄 README.md ← You are here
├── 📄 LICENSE ← MIT License
├── 📄 requirements.txt ← Python dependencies
├── 📄 pyproject.toml ← Package configuration
├── 🐳 docker-compose.yml ← Container orchestration
├── ☸️ kubernetes/ ← K8s manifests
│ ├── deployment.yaml
│ ├── service.yaml
│ └── ingress.yaml
├── ⚙️ .gitlab-ci.yml ← CI/CD pipeline
├── 🔧 terraform/ ← Infrastructure as Code
│ ├── main.tf
│ ├── variables.tf
│ └── outputs.tf
│
├── 📦 src/
│ │
│ ├── core/ ── NSWE Solver Core
│ │ ├── nswe_solver.f90 ← Nonlinear SW equations (Fortran)
│ │ ├── nswe_wrapper.py ← f2py Python interface
│ │ ├── boussinesq.py ← Dispersive extension terms
│ │ └── vorticity.py ← 2D vorticity transport
│ │
│ ├── parameters/ ── Seven Physical Parameters
│ │ ├── wcc.py ← Wave Front Celerity Coefficient
│ │ ├── kpr.py ← Kinetic/Potential Energy Ratio
│ │ ├── hfsi.py ← Hydrodynamic Front Stability Index
│ │ ├── becf.py ← Bathymetric Energy Concentration
│ │ ├── sdb.py ← Spectral Dispersion Bandwidth
│ │ ├── sbsp.py ← Shoreline Boundary Stress Param.
│ │ └── smvi.py ← Sub-Surface Micro-Vorticity Index
│ │
│ ├── hazard/ ── Hazard Assessment
│ │ ├── chi.py ← Coastal Hazard Index computation
│ │ ├── runup_forecast.py ← Run-up estimation from CHI
│ │ ├── alert_manager.py ← Threshold monitoring + dispatch
│ │ └── inundation_map.py ← Spatial inundation probability
│ │
│ ├── data/ ── Data Ingestion
│ │ ├── dart_reader.py ← DART BPR stream parser
│ │ ├── tide_gauge.py ← IOC/NOAA gauge ingest
│ │ ├── adcp_reader.py ← ADCP velocity profiles
│ │ ├── bathymetry.py ← ETOPO1/GEBCO grid manager
│ │ └── becf_maps.py ← Pre-computed BECF map library
│ │
│ ├── signals/ ── Signal Processing
│ │ ├── bandpass.py ← Tsunami-band Butterworth filter
│ │ ├── arrival_detect.py ← STA/LTA front detection
│ │ ├── spectral.py ← FFT + spectral analysis (SDB)
│ │ └── tidal_remove.py ← Harmonic tidal prediction
│ │
│ ├── database/ ── Data Persistence
│ │ ├── timescale.py ← TimescaleDB hypertables
│ │ ├── models.py ← SQLAlchemy ORM models
│ │ ├── redis_cache.py ← Real-time parameter cache
│ │ └── migrations/ ← Alembic schema migrations
│ │
│ ├── api/ ── REST + WebSocket API
│ │ ├── main.py ← FastAPI application entry
│ │ ├── endpoints/
│ │ │ ├── events.py ← Tsunami event endpoints
│ │ │ ├── parameters.py ← Real-time parameter endpoints
│ │ │ ├── forecast.py ← Run-up forecast endpoints
│ │ │ └── alerts.py ← Alert management endpoints
│ │ ├── websocket.py ← Real-time WebSocket handler
│ │ └── auth.py ← JWT authentication
│ │
│ ├── dashboard/ ── Monitoring Dashboard
│ │ ├── app.py ← Streamlit entry point
│ │ ├── chi_gauge.py ← Real-time CHI display
│ │ ├── parameter_plots.py ← 7-parameter time series
│ │ ├── wave_front_map.py ← Interactive propagation map
│ │ ├── becf_viewer.py ← Bathymetric focusing viewer
│ │ └── alert_panel.py ← Alert status dashboard
│ │
│ └── utils/ ── Shared Utilities
│ ├── config.py ← System configuration (YAML)
│ ├── logger.py ← Structured JSON logging
│ ├── units.py ← Physical unit conversions
│ └── constants.py ← Physical constants (g, ρ, etc.)
│
├── 🧪 tests/ ── Test Suite (47/47 passing ✅)
│ ├── unit/
│ │ ├── test_wcc.py
│ │ ├── test_kpr.py
│ │ ├── test_hfsi.py
│ │ ├── test_becf.py
│ │ ├── test_sdb.py
│ │ ├── test_sbsp.py
│ │ └── test_smvi.py
│ ├── integration/
│ │ ├── test_nswe_solver.py
│ │ ├── test_chi_pipeline.py
│ │ └── test_api_endpoints.py
│ └── validation/
│ ├── test_tohoku_2011.py
│ ├── test_indian_ocean_2004.py
│ └── test_23_event_suite.py
│
├── 📊 data/ ── Reference Datasets
│ ├── bathymetry/
│ │ ├── etopo1_pacific.nc ← ETOPO1 Pacific basin grid
│ │ ├── etopo1_indian.nc ← ETOPO1 Indian Ocean grid
│ │ └── etopo1_atlantic.nc ← ETOPO1 Atlantic basin grid
│ ├── becf_precomputed/
│ │ ├── pacific_bays.json ← 120 Pacific bay BECF values
│ │ ├── indian_bays.json ← 40 Indian Ocean bay BECF values
│ │ └── atlantic_bays.json ← 20 Atlantic bay BECF values
│ ├── validation_events/
│ │ ├── tohoku_2011/ ← DART + tide gauge records
│ │ ├── indian_ocean_2004/ ← DART + tide gauge records
│ │ ├── hokkaido_1993/ ← Archive tide gauge records
│ │ └── [20 additional events]/
│ └── runup_surveys/
│ └── itst_database.csv ← 712 field run-up points
│
├── 📓 notebooks/ ── Jupyter Analysis Notebooks
│ ├── 01_parameter_tutorial.ipynb ← Introduction to 7 parameters
│ ├── 02_tohoku_case_study.ipynb ← Full Tōhoku 2011 analysis
│ ├── 03_becf_global_map.ipynb ← World BECF visualization
│ ├── 04_smvi_sensitivity.ipynb ← SMVI parametric study
│ ├── 05_friction_validation.ipynb ← β=0.73 derivation
│ └── 06_chi_calibration.ipynb ← CHI weight optimization
│
├── ⚙️ config/ ── Configuration Files
│ ├── config.example.yml ← Template (copy to config.yml)
│ ├── thresholds.yml ← 7-parameter alert thresholds
│ ├── alert_routing.yml ← Alert dispatch rules
│ ├── dart_stations.yml ← DART station registry
│ └── becf_zones.yml ← High-BECF zone registry
│
├── 🚀 deployment/ ── Deployment Resources
│ ├── docker/
│ │ ├── Dockerfile ← Production image
│ │ ├── Dockerfile.dev ← Development image
│ │ └── nginx.conf ← Reverse proxy config
│ ├── kubernetes/
│ │ ├── namespace.yaml
│ │ ├── deployment.yaml
│ │ ├── service.yaml
│ │ ├── ingress.yaml
│ │ └── hpa.yaml ← Horizontal Pod Autoscaler
│ └── ansible/
│ ├── playbook.yml
│ └── inventory.ini
│
├── 📖 docs/ ── Full Documentation
│ ├── physics_guide.md ← Physical theory reference
│ ├── api_reference.md ← REST + WebSocket API docs
│ ├── operator_manual.md ← Warning center integration
│ ├── validation_report.md ← 23-event validation summary
│ ├── parameter_derivations.md ← Mathematical derivations
│ └── installation_guide.md ← Step-by-step setup
│
└── 📝 CHANGELOG.md ← Version history
```
---
## 🚀 Quick Start
### Prerequisites
```
Python 3.10+ · PostgreSQL 14+ · TimescaleDB 2.8+ · Redis 7+ · Docker 20.10+
```
### Install from PyPI *(coming soon)*
```bash
pip install tsu-wave
```
### Clone & Run
```bash
# Clone from GitLab (primary)
git clone https://gitlab.com/gitdeeper4/tsu-wave.git
cd tsu-wave
# Create virtual environment
python3 -m venv venv && source venv/bin/activate
# Install dependencies
pip install -r requirements.txt
# Compile Fortran NSWE solver
cd src/core && f2py -c nswe_solver.f90 -m nswe_solver && cd ../..
# Configure system
cp config/config.example.yml config/config.yml
# Edit config.yml with your database credentials and DART stream settings
# Setup database
./scripts/setup_database.sh
# Load pre-computed BECF maps
python scripts/load_becf_maps.py
# Launch all services
python src/main.py
```
Dashboard → [http://localhost:8080](http://localhost:8080)
API → [http://localhost:8000/docs](http://localhost:8000/docs)
### Docker (Recommended)
```bash
# Copy and configure
cp config/config.example.yml config/config.yml
# Start all services
docker-compose up -d
# Check health
docker-compose ps
```
```yaml
# docker-compose.yml services:
# tsu-wave-core — NSWE solver + parameter computation
# tsu-wave-api — FastAPI REST + WebSocket server
# tsu-wave-dash — Streamlit dashboard
# postgresql — TimescaleDB time-series database
# redis — Real-time parameter cache
# nginx — Reverse proxy
```
---
## 💻 Installation
### Ubuntu / Debian
```bash
# System dependencies
sudo apt update && sudo apt install -y \
python3.10 python3-pip python3-venv \
gfortran liblapack-dev \
postgresql-14 redis-server
# TimescaleDB extension
sudo add-apt-repository ppa:timescale/timescaledb-ppa
sudo apt update && sudo apt install timescaledb-postgresql-14
sudo timescaledb-tune --quiet --yes
# Database setup
sudo -u postgres psql <<EOF
CREATE DATABASE tsuwave;
CREATE USER tsuwave_user WITH PASSWORD 'your_password';
GRANT ALL PRIVILEGES ON DATABASE tsuwave TO tsuwave_user;
\c tsuwave
CREATE EXTENSION IF NOT EXISTS timescaledb;
EOF
```
### macOS
```bash
brew install python@3.10 postgresql@14 redis gcc
brew services start postgresql@14 redis
```
---
## 🔧 Usage
### Python API
```python
import asyncio

from tsuwave import TSUWave

async def main():
    # Initialize system
    tsw = TSUWave.from_config("config/config.yml")
    # Start real-time parameter computation
    await tsw.start()
    # Access the current Coastal Hazard Index for a coastal zone
    chi = await tsw.get_chi(zone="hilo_bay_hawaii")
    print(f"CHI: {chi.value:.3f} — Status: {chi.status}")
    # CHI: 0.724 — Status: HIGH
    # Get all seven parameters
    params = await tsw.get_parameters(zone="hilo_bay_hawaii")
    print(params)
    # WCC:  1.31  MONITOR
    # KPR:  1.44  MONITOR
    # HFSI: 0.63  ALERT
    # BECF: 4.80  ALERT
    # SDB:  1.20  MODERATE
    # SBSP: 0.67  ALERT
    # SMVI: 0.29  MONITOR

asyncio.run(main())
```
### Command Line
```bash
# Monitor active events
tsu-wave monitor --live
# Compute CHI for specific coastal zone
tsu-wave chi --zone hilo_bay --event active
# Run-up forecast
tsu-wave forecast --zone khao_lak --source sumatra
# Validate against historical event
tsu-wave validate --event tohoku_2011
# Generate operational report
tsu-wave report --event tohoku_2011 --format pdf
# System status
tsu-wave status --all
```
---
## 📡 API Reference
**Base URL:** `https://api.tsu-wave.io/v1`
**Authentication:** `Authorization: Bearer YOUR_API_KEY`
**WebSocket:** `wss://api.tsu-wave.io/ws/v1/realtime`
### Core Endpoints
| Method | Endpoint | Description |
|--------|----------|-------------|
| `GET` | `/events/active` | Current active tsunami events |
| `GET` | `/events/{id}/chi` | CHI time series for event |
| `GET` | `/events/{id}/parameters` | All 7 parameters (latest) |
| `GET` | `/coastal/{zone}/becf` | Pre-computed BECF for zone |
| `GET` | `/coastal/{zone}/runup` | Run-up forecast |
| `GET` | `/stations/{id}/waveform` | Raw tide gauge waveform |
| `POST` | `/forecast/runup` | On-demand run-up computation |
| `GET` | `/alerts/current` | Active threshold alerts |
| `WS` | `/ws/v1/realtime` | WebSocket real-time stream |
### Example: Get Parameters
```bash
curl -X GET "https://api.tsu-wave.io/v1/events/EV-2011-001/parameters" \
-H "Authorization: Bearer YOUR_API_KEY"
```
```json
{
"event_id": "EV-2011-001",
"zone": "miyako_bay",
"timestamp": "2011-03-11T13:46:00Z",
"chi": { "value": 0.97, "status": "CRITICAL" },
"parameters": {
"WCC": { "value": 1.56, "threshold": 1.35, "status": "critical" },
"KPR": { "value": 1.89, "threshold": 1.60, "status": "alert" },
"HFSI": { "value": 0.31, "threshold": 0.60, "status": "critical" },
"BECF": { "value": 7.30, "threshold": 4.00, "status": "critical" },
"SDB": { "value": 0.80, "threshold": 1.00, "status": "alert" },
"SBSP": { "value": 1.18, "threshold": 0.70, "status": "critical" },
"SMVI": { "value": 0.38, "threshold": 0.40, "status": "monitor" }
},
"run_up_forecast": {
"predicted_m": 38.8,
"confidence_interval": [34.1, 43.5],
"lead_time_min": 10
}
}
```
### Rate Limits
| Tier | Requests/min | Requests/day |
|------|-------------|--------------|
| Free | 60 | 1,000 |
| Research | 600 | 50,000 |
| Operational | 6,000 | unlimited |
---
## 🔬 Research Paper
**Title:** TSU-WAVE: A Multi-Parameter Hydrodynamic Framework for Real-Time Tsunami Wave Front Evolution, Energy Transfer Analysis, and Coastal Inundation Forecasting
**Authors:** Samir Baladi · Dr. Elena Marchetti · Prof. Kenji Watanabe · Dr. Lars Petersen · Dr. Amira Hassan
**Target:** Journal of Geophysical Research — Oceans · February 2026
### Key Physical Findings
| Finding | Quantitative Result |
|---------|---------------------|
| Instability onset | h/H₀ = **0.42 ± 0.05** |
| Friction exponent (field-validated) | β = **0.73 ± 0.04** |
| BECF–run-up correlation | ρ = **+0.947** (p < 0.001) |
| SMVI–anomaly correlation | ρ = **+0.831** (p < 0.001) |
| Manning friction error (vs. field) | **−34%** (systematic overestimate) |
| Second harmonic onset | h/H₀ > **0.35** → F₂ > 15% |
### Validated Case Studies
| Event | Year | Max Run-up | TSU-WAVE | Key Parameter |
|-------|------|-----------|----------|---------------|
| Tōhoku (Miyako) | 2011 | 40.5 m | 38.8 m | BECF = 7.3 |
| Indian Ocean (Aceh) | 2004 | 30.0 m | 28.5 m | SMVI = 0.61 |
| Hokkaido (Monai) | 1993 | 31.0 m | 29.8 m | SMVI = 0.72 |
### Citation
```bibtex
@article{baladi2026tsuwave,
title = {TSU-WAVE: A Multi-Parameter Hydrodynamic Framework for
Real-Time Tsunami Wave Front Evolution, Energy Transfer
Analysis, and Coastal Inundation Forecasting},
author = {Baladi, Samir and Marchetti, Elena and Watanabe, Kenji
and Petersen, Lars and Hassan, Amira},
journal = {Journal of Geophysical Research: Oceans},
year = {2026},
month = {February},
doi = {10.5281/zenodo.XXXXXXXX},
url = {https://doi.org/10.5281/zenodo.XXXXXXXX}
}
```
---
## 📊 Data & Resources
### Repositories
| Platform | URL | Role |
|----------|-----|------|
| 🦊 **GitLab** | [gitlab.com/gitdeeper4/tsu-wave](https://gitlab.com/gitdeeper4/tsu-wave) | **Primary** |
| 🐙 GitHub | [github.com/gitdeeper4/tsu-wave](https://github.com/gitdeeper4/tsu-wave) | Mirror |
| 🌲 Codeberg | [codeberg.org/gitdeeper4/tsu-wave](https://codeberg.org/gitdeeper4/tsu-wave) | Mirror |
| 🪣 Bitbucket | [bitbucket.org/gitdeeper7/tsu-wave](https://bitbucket.org/gitdeeper7/tsu-wave) | Mirror |
### Web & Documentation
| Resource | URL |
|----------|-----|
| 🌐 Website | [tsu-wave.netlify.app](https://tsu-wave.netlify.app) |
| 📖 Documentation | [tsu-wave.netlify.app/documentation](https://tsu-wave.netlify.app/documentation) |
| 📊 Dashboard | [tsu-wave.netlify.app/dashboard](https://tsu-wave.netlify.app/dashboard) |
### Research & Data
| Platform | Identifier | Contents |
|----------|-----------|----------|
| 📦 Zenodo | `10.5281/zenodo.XXXXXXXX` *(pending)* | Dataset · Paper · BECF Maps |
| 🔬 OSF | `osf.io/XXXXX` *(pending)* | Pre-registration · Protocols |
| 🐍 PyPI | `tsu-wave` *(coming soon)* | `pip install tsu-wave` |
| 🤗 Hugging Face | *(pending)* | Pre-trained ML components |
### External Data Sources
| Source | URL | Data Used |
|--------|-----|-----------|
| NOAA DART | [ndbc.noaa.gov](https://www.ndbc.noaa.gov/dart.shtml) | Deep-ocean BPR records |
| IOC Sea Level | [ioc-sealevelmonitoring.org](http://www.ioc-sealevelmonitoring.org) | Tide gauge records |
| GEBCO 2023 | [gebco.net](https://www.gebco.net) | Global bathymetry |
| NOAA NGDC | [ngdc.noaa.gov](https://www.ngdc.noaa.gov/hazard/tsu_db.shtml) | Run-up database |
---
## 🤝 Contributing
Contributions are welcome from oceanographers, coastal engineers, and hazard scientists.
```bash
# 1. Fork the repository on GitLab
# 2. Create a feature branch
git checkout -b feature/your-feature
# 3. Make your changes with tests
# 4. Run the full test suite
pytest tests/ -v
# 5. Commit with a descriptive message
git commit -m "feat: add SDB harmonic coupling correction"
# 6. Push and open a Merge Request
git push origin feature/your-feature
```
### Contribution Guidelines
- All new physical parameters require a peer-reviewed derivation reference
- Test coverage must remain ≥ 90%
- New features require validation against ≥ 1 historical event
- Code must follow PEP 8 with full type annotations and docstrings
### Issue Tracker
- 🦊 GitLab Issues: [gitlab.com/gitdeeper4/tsu-wave/-/issues](https://gitlab.com/gitdeeper4/tsu-wave/-/issues)
- 🐙 GitHub Issues: [github.com/gitdeeper4/tsu-wave/issues](https://github.com/gitdeeper4/tsu-wave/issues)
---
## 🙏 Acknowledgments
**Funding:** NSF-OCE ($1.8M) · UNESCO-IOC (€420K) · Ronin Institute Scholar Award ($45K) · **Total: $2.27M**
**Institutions:** NOAA PTWC · Japan Meteorological Agency · IOC/UNESCO IOTWMS
**Field Support:** International Tsunami Survey Team (ITST) · Japan DPRI
**Technical Consultation:** Dr. Frank González (NOAA-PMEL) · Prof. Costas Synolakis (USC)
---
## 📄 License
```
MIT License — Copyright (C) 2026 Samir Baladi and Contributors
Permission is hereby granted, free of charge, to any person obtaining
a copy of this software and associated documentation files (the
"Software"), to deal in the Software without restriction, including
without limitation the rights to use, copy, modify, merge, publish,
distribute, sublicense, and/or sell copies of the Software.
```
Full license: [LICENSE](LICENSE)
---
## 📞 Contact
**Samir Baladi** — Principal Investigator
*Ronin Institute / Rite of Renaissance*
[Email](mailto:gitdeeper@gmail.com)
[ORCID](https://orcid.org/0009-0003-8903-0029)
[GitLab](https://gitlab.com/gitdeeper4)
[GitHub](https://github.com/gitdeeper4)
---
<div align="center">
**Built on physics. Validated on data. Open to the world.**
⭐ Star · 🔱 Fork · 📝 Cite · 🤝 Contribute
[Website](https://tsu-wave.netlify.app)
[⬆ Back to top](#)
</div>
| text/markdown | Samir Baladi | gitdeeper@gmail.com | null | null | MIT | tsunami, early-warning, hydrodynamics, oceanography, natural-hazards, disaster-prevention | [] | [] | https://tsu-wave.netlify.app | null | >=3.8 | [] | [] | [] | [] | [] | [] | [] | [] | TSU-WAVE-Uploader/1.0 | 2026-02-18T09:36:19.423770 | tsu_wave-1.0.0.tar.gz | 106,143 | 3f/17/9ff895cd7150ecbea98fcdcf014bbb218ffb6f935fbf0d9f789bbdea33d0/tsu_wave-1.0.0.tar.gz | source | sdist | null | false | c98cf81bbb69fb20dd4af2472af92d9e | f7b7951a1f58d717fab2252b7fe5830f58535fa17b90e4a1550924a269dcfada | 3f179ff895cd7150ecbea98fcdcf014bbb218ffb6f935fbf0d9f789bbdea33d0 | null | [] | 252 |
2.4 | mytext | 0.5 | MyText: A Minimal AI-Powered Text Rewriting Tool |
<div align="center">
<img src="https://github.com/sepandhaghighi/mytext/raw/main/otherfiles/logo.png" width="350">
<h1>MyText: A Minimal AI-Powered Text Rewriting Tool</h1>
<br/>
<a href="https://codecov.io/gh/sepandhaghighi/mytext"><img src="https://codecov.io/gh/sepandhaghighi/mytext/graph/badge.svg?token=qNCcVof7QW"></a>
<a href="https://www.python.org/"><img src="https://img.shields.io/badge/built%20with-Python3-green.svg" alt="built with Python3"></a>
<a href="https://github.com/sepandhaghighi/mytext"><img alt="GitHub repo size" src="https://img.shields.io/github/repo-size/sepandhaghighi/mytext"></a>
<a href="https://badge.fury.io/py/mytext"><img src="https://badge.fury.io/py/mytext.svg" alt="PyPI version"></a>
</div>
## Overview
<p align="justify">
<b>MyText</b> is a lightweight AI-powered text enhancement tool that rewrites, paraphrases, and adjusts tone using modern LLM providers. It offers a clean command-line interface and a minimal Python API, supports multiple providers (Google AI Studio & Cloudflare Workers AI), and automatically selects the first available provider based on your environment variables.
</p>
<table>
<tr>
<td align="center">PyPI Counter</td>
<td align="center"><a href="http://pepy.tech/project/mytext"><img src="http://pepy.tech/badge/mytext"></a></td>
</tr>
<tr>
<td align="center">Github Stars</td>
<td align="center"><a href="https://github.com/sepandhaghighi/mytext"><img src="https://img.shields.io/github/stars/sepandhaghighi/mytext.svg?style=social&label=Stars"></a></td>
</tr>
</table>
<table>
<tr>
<td align="center">Branch</td>
<td align="center">main</td>
<td align="center">dev</td>
</tr>
<tr>
<td align="center">CI</td>
<td align="center"><img src="https://github.com/sepandhaghighi/mytext/actions/workflows/test.yml/badge.svg?branch=main"></td>
<td align="center"><img src="https://github.com/sepandhaghighi/mytext/actions/workflows/test.yml/badge.svg?branch=dev"></td>
</tr>
</table>
<table>
<tr>
<td align="center">Code Quality</td>
<td align="center"><a href="https://www.codefactor.io/repository/github/sepandhaghighi/mytext"><img src="https://www.codefactor.io/repository/github/sepandhaghighi/mytext/badge" alt="CodeFactor"></a></td>
<td align="center"><a href="https://app.codacy.com/gh/sepandhaghighi/mytext/dashboard?utm_source=gh&utm_medium=referral&utm_content=&utm_campaign=Badge_grade"><img src="https://app.codacy.com/project/badge/Grade/239efecb91c0428693c3ec744853aff5"></a></td>
</tr>
</table>
## Installation
### Source Code
- Download [Version 0.5](https://github.com/sepandhaghighi/mytext/archive/v0.5.zip) or [Latest Source](https://github.com/sepandhaghighi/mytext/archive/dev.zip)
- `pip install .`
### PyPI
- Check [Python Packaging User Guide](https://packaging.python.org/installing/)
- `pip install mytext==0.5`
## Usage
### CLI
#### Single Run
Executes a one-time text transformation using the provided options and exits immediately after producing the result.
```bash
mytext \
--mode="paraphrase" \
--tone="formal" \
--text="Can you update me on the project timeline by the end of the day?"
```
#### Loop
Starts an interactive session that repeatedly accepts new text inputs from the user while keeping the same configuration until the process is terminated.
```bash
mytext \
--mode="paraphrase" \
--tone="formal" \
--loop
```
#### Arguments
| Argument | Description | Default |
|--------- |-------------|---------|
| `--text` | Text to process (required unless `--loop` is used) | - |
| `--mode` | Text processing mode | `paraphrase` |
| `--tone` | Output text desired tone | `neutral` |
| `--provider` | AI provider selection | `auto` |
| `--loop` | Enable interactive loop mode | `false` |
| `--model` | Override provider LLM model | - |
| `--version` | Show application version | - |
| `--info` | Show application information | - |
ℹ️ Supported modes: `paraphrase`, `grammar`, `summarize`, `simplify`, `bulletize`, `shorten`
ℹ️ Supported tones: `neutral`, `formal`, `casual`, `friendly`, `professional`, `academic`, `creative`
ℹ️ Supported providers: `auto`, `ai-studio`, `cloudflare`, `openrouter`, `cerebras`, `groq`, `nvidia`
### Library
You can also use MyText directly inside Python.
```python
from mytext import run_mytext
from mytext import Mode, Tone, Provider
auth = {"api_key": "YOUR_KEY"}
result = run_mytext(
text="Let me know if you have any questions after reviewing the attached document.",
auth=auth,
mode=Mode.PARAPHRASE,
tone=Tone.NEUTRAL,
provider=Provider.AI_STUDIO
)
print(result["status"], result["message"])
```
#### Parameters
| Parameter | Description | Default |
|-----------|-------------|---------|
| `text` | Input text to process | - |
| `auth` | Authentication parameters for the provider | - |
| `mode` | Text processing mode | `Mode.PARAPHRASE` |
| `tone` | Output text desired tone | `Tone.NEUTRAL` |
| `provider` | AI provider | `Provider.AI_STUDIO` |
| `model` | Override provider LLM model | `None` |
## Supported Providers
MyText automatically detects which providers are available based on environment variables.
Each provider has a default model. You may optionally override it using either the CLI `--model` argument or a `*_MODEL` environment variable.
| Provider | Required Environment Variables | Default Model | Optional Model Override |
|---------|--------------------------------|------------|------------|
| [**AI Studio**](https://ai.google.dev/) | `AI_STUDIO_API_KEY` | `gemma-3-1b-it` | `AI_STUDIO_MODEL` |
| [**Cloudflare**](https://developers.cloudflare.com/workers-ai/) | `CLOUDFLARE_API_KEY`, `CLOUDFLARE_ACCOUNT_ID` | `meta/llama-3-8b-instruct` | `CLOUDFLARE_MODEL` |
| [**OpenRouter**](https://openrouter.ai/docs) | `OPENROUTER_API_KEY` | `google/gemma-3-27b-it:free` | `OPENROUTER_MODEL` |
| [**Cerebras**](https://docs.cerebras.ai/) | `CEREBRAS_API_KEY` | `gpt-oss-120b` | `CEREBRAS_MODEL` |
| [**Groq**](https://console.groq.com/docs) | `GROQ_API_KEY` | `openai/gpt-oss-20b` | `GROQ_MODEL` |
| [**NVIDIA**](https://docs.nvidia.com/nim/) | `NVIDIA_API_KEY` | `meta/llama-3.1-8b-instruct` | `NVIDIA_MODEL` |
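The "first available provider" behavior of `--provider auto` can be illustrated with a small sketch; the table above gives the required variables, while the selection order and the `detect_provider` helper here are assumptions for illustration, not MyText internals.

```python
import os

# Required environment variables per provider, checked in an assumed
# auto-selection order (the real order may differ).
PROVIDER_ENV_VARS = {
    "ai-studio": ("AI_STUDIO_API_KEY",),
    "cloudflare": ("CLOUDFLARE_API_KEY", "CLOUDFLARE_ACCOUNT_ID"),
    "openrouter": ("OPENROUTER_API_KEY",),
    "cerebras": ("CEREBRAS_API_KEY",),
    "groq": ("GROQ_API_KEY",),
    "nvidia": ("NVIDIA_API_KEY",),
}

def detect_provider():
    """Return the first provider whose required variables are all set."""
    for name, required in PROVIDER_ENV_VARS.items():
        if all(os.environ.get(var) for var in required):
            return name
    return None
```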
## Configuration Resolution Priority
MyText supports multiple configuration sources (CLI arguments, environment variables, and built-in defaults).
When resolving any configurable parameter (e.g., `model`), MyText follows this priority order:
1. CLI argument (highest priority)
2. Corresponding environment variable
3. Built-in default value (lowest priority)
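The three-step priority above amounts to a short lookup; `resolve` is an illustrative helper, not a MyText function.

```python
import os

def resolve(cli_value, env_var, default):
    """Resolve a parameter: CLI argument > environment variable > default."""
    if cli_value is not None:
        return cli_value
    return os.environ.get(env_var, default)

# e.g. model resolution for the AI Studio provider
model = resolve(None, "AI_STUDIO_MODEL", "gemma-3-1b-it")
```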
## Issues & Bug Reports
Just file an issue and describe it. We'll check it ASAP!
- Please complete the issue template
## Show Your Support
<h3>Star This Repo</h3>
Give a ⭐️ if this project helped you!
<h3>Donate to Our Project</h3>
<h4>Bitcoin</h4>
1KtNLEEeUbTEK9PdN6Ya3ZAKXaqoKUuxCy
<h4>Ethereum</h4>
0xcD4Db18B6664A9662123D4307B074aE968535388
<h4>Litecoin</h4>
Ldnz5gMcEeV8BAdsyf8FstWDC6uyYR6pgZ
<h4>Doge</h4>
DDUnKpFQbBqLpFVZ9DfuVysBdr249HxVDh
<h4>Tron</h4>
TCZxzPZLcJHr2qR3uPUB1tXB6L3FDSSAx7
<h4>Ripple</h4>
rN7ZuRG7HDGHR5nof8nu5LrsbmSB61V1qq
<h4>Binance Coin</h4>
bnb1zglwcf0ac3d0s2f6ck5kgwvcru4tlctt4p5qef
<h4>Tether</h4>
0xcD4Db18B6664A9662123D4307B074aE968535388
<h4>Dash</h4>
Xd3Yn2qZJ7VE8nbKw2fS98aLxR5M6WUU3s
<h4>Stellar</h4>
GALPOLPISRHIYHLQER2TLJRGUSZH52RYDK6C3HIU4PSMNAV65Q36EGNL
<h4>Zilliqa</h4>
zil1knmz8zj88cf0exr2ry7nav9elehxfcgqu3c5e5
<h4>Coffeete</h4>
<a href="http://www.coffeete.ir/opensource">
<img src="http://www.coffeete.ir/images/buttons/lemonchiffon.png" style="width:260px;" />
</a>
# Changelog
All notable changes to this project will be documented in this file.
The format is based on [Keep a Changelog](http://keepachangelog.com/en/1.0.0/)
and this project adheres to [Semantic Versioning](http://semver.org/spec/v2.0.0.html).
## [Unreleased]
## [0.5] - 2026-02-18
### Added
- `--provider` argument
- `--model` argument
- `_load_model_from_env` function
### Changed
- `model` parameter added to `run_mytext` function
- AI Studio default model changed to `gemma-3-1b-it`
- OpenRouter default model changed to `google/gemma-3-27b-it:free`
- Test system modified
- `README.md` updated
## [0.4] - 2025-12-25
### Added
- Groq provider
- NVIDIA provider
- `--loop` argument
### Changed
- Test system modified
- `README.md` updated
## [0.3] - 2025-12-17
### Added
- OpenRouter provider
- Cerebras provider
### Changed
- Test system modified
- `README.md` updated
- AI Studio main model changed to `gemini-2.5-flash`
- AI Studio fallback model changed to `gemma-3-1b-it`
- Providers moved to `providers.py`
## [0.2] - 2025-12-05
### Added
- Logo
- `summarize` mode
- `simplify` mode
- `bulletize` mode
- `shorten` mode
### Changed
- `README.md` updated
- Cloudflare fallback model changed to `meta/llama-3.1-8b-instruct-fast`
- Model switching modified
## [0.1] - 2025-11-26
### Added
- `run_mytext` function
- AI Studio provider
- Cloudflare provider
- `--mode` argument
- `--tone` argument
[Unreleased]: https://github.com/sepandhaghighi/mytext/compare/v0.5...dev
[0.5]: https://github.com/sepandhaghighi/mytext/compare/v0.4...v0.5
[0.4]: https://github.com/sepandhaghighi/mytext/compare/v0.3...v0.4
[0.3]: https://github.com/sepandhaghighi/mytext/compare/v0.2...v0.3
[0.2]: https://github.com/sepandhaghighi/mytext/compare/v0.1...v0.2
[0.1]: https://github.com/sepandhaghighi/mytext/compare/dde63ee...v0.1
| text/markdown | Sepand Haghighi | me@sepand.tech | null | null | MIT | text rewrite paraphrase editing llm ai text-processing cli | [
"Development Status :: 3 - Alpha",
"Natural Language :: English",
"License :: OSI Approved :: MIT License",
"Operating System :: OS Independent",
"Programming Language :: Python :: 3.7",
"Programming Language :: Python :: 3.8",
"Programming Language :: Python :: 3.9",
"Programming Language :: Python :... | [] | https://github.com/sepandhaghighi/mytext | https://github.com/sepandhaghighi/mytext/tarball/v0.5 | >=3.7 | [] | [] | [] | [
"memor>=0.6",
"requests>=2.20.0",
"art>=5.3"
] | [] | [] | [] | [
"Source, https://github.com/sepandhaghighi/mytext"
] | twine/6.2.0 CPython/3.14.3 | 2026-02-18T09:35:04.846145 | mytext-0.5-py3-none-any.whl | 13,182 | 6a/db/e3835e793d8cd96c437ca90caa3769cf2f1a7a059789c14a21ba4631ab71/mytext-0.5-py3-none-any.whl | py3 | bdist_wheel | null | false | 3e0de8743ea19f1981ab031250f0af97 | 71d67a5a5142d06c5fc80c593fec5dc7abfb59f7bdb7722abcbce51c8903d713 | 6adbe3835e793d8cd96c437ca90caa3769cf2f1a7a059789c14a21ba4631ab71 | null | [
"LICENSE",
"AUTHORS.md"
] | 246 |
2.4 | harmonized-telemetry-format | 0.0.3a1 | Core library for reading and writing the Harmonized Telemetry Format (HTF). | # Harmonized Telemetry Format (HTF)
Detailed description coming soon.
Exemplary HTF file content:
```htf
[static_metadata;s]12.51
[dim_metadata;index;delta]2;0.08;4;0.484
(channel;s;50;135)0=12.27;1=12.42;...
(empty_channel;m;50;135)0=;
```
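Until the full description lands, the example suggests a simple line grammar: a bracketed header of `;`-separated fields (`[...]` for metadata, `(...)` for channels) followed by the payload. A hedged parsing sketch, inferred from the example lines only — the actual HTF grammar may be richer:

```python
import re

# Header in [] or (), then the rest of the line as payload.
LINE_RE = re.compile(r"^([\[(])([^\])]+)[\])](.*)$")

def parse_htf_line(line):
    """Split one HTF line into (kind, header_fields, payload)."""
    match = LINE_RE.match(line.strip())
    if not match:
        raise ValueError(f"not an HTF line: {line!r}")
    kind = "metadata" if match.group(1) == "[" else "channel"
    return kind, match.group(2).split(";"), match.group(3)

print(parse_htf_line("[static_metadata;s]12.51"))
# ('metadata', ['static_metadata', 's'], '12.51')
```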
| text/markdown | null | Max Schlosser <schlosse@hs-mittweida.de> | null | null | null | null | [
"Programming Language :: Python :: 3",
"License :: OSI Approved :: MIT License",
"Operating System :: OS Independent",
"Topic :: Scientific/Engineering :: Information Analysis",
"Topic :: Utilities"
] | [] | null | null | >=3.8 | [] | [] | [] | [] | [] | [] | [] | [
"Homepage, https://github.com/Schloool/harmonized-telemetry-format",
"Bug Tracker, https://github.com/Schloool/harmonized-telemetry-format/issues"
] | twine/6.2.0 CPython/3.10.10 | 2026-02-18T09:34:04.995967 | harmonized_telemetry_format-0.0.3a1.tar.gz | 4,824 | 6c/89/001ca2669cf9bfe3fa7ee61dde378f276562a42fb765a97539bb13a67ff2/harmonized_telemetry_format-0.0.3a1.tar.gz | source | sdist | null | false | 01434e49a9e209bd18b5faba1bf24390 | 81866d230d7fa997a177a80bcfa49201d511d500e60e979bdd7c87da9d1b5868 | 6c89001ca2669cf9bfe3fa7ee61dde378f276562a42fb765a97539bb13a67ff2 | null | [] | 215 |
2.2 | trueform | 0.6.0 | Real-time geometric processing on NumPy arrays. Easy to use, robust on real-world data. | # trueform
Real-time geometric processing on NumPy arrays. Easy to use, robust on real-world data.
Mesh booleans, registration, remeshing and queries — at interactive speed on million-polygon meshes. Robust to non-manifold flaps, inconsistent winding, and pipeline artifacts. NumPy in, NumPy out.
**[Documentation](https://trueform.polydera.com/py/getting-started)** | **[Live Examples](https://trueform.polydera.com/live-examples/boolean)**
## Installation
```bash
pip install trueform
```
## Quick Tour
```python
import numpy as np
import trueform as tf
# NumPy arrays in
points = np.array([
[0, 0, 0], [1, 0, 0], [0, 1, 0], [0, 0, 1]
], dtype=np.float32)
faces = np.array([
[0, 1, 2], [0, 2, 3], [0, 3, 1], [1, 3, 2]
], dtype=np.int32)
mesh = tf.Mesh(faces, points)
# Or read from file
mesh = tf.read_stl("model.stl")
```
**Boolean operations:**
```python
(result_faces, result_points), labels = tf.boolean_union(mesh0, mesh1)
# With intersection curves
(result_faces, result_points), labels, (paths, curve_points) = tf.boolean_union(
mesh0, mesh1, return_curves=True
)
```
**Spatial queries:**
```python
static_mesh = tf.Mesh(faces0, points0)
dynamic_mesh = tf.Mesh(faces1, points1)
dynamic_mesh.transformation = rotation_matrix
does_intersect = tf.intersects(static_mesh, dynamic_mesh)
distance = tf.distance(static_mesh, dynamic_mesh)
(id0, id1), (dist2, pt0, pt1) = tf.neighbor_search(static_mesh, dynamic_mesh)
neighbors = tf.neighbor_search(dynamic_mesh, static_mesh.points[0], k=10)
for idx, dist2, pt in neighbors:
pass
```
→ [Full documentation](https://trueform.polydera.com/py/modules) covers mesh analysis, topology, isocontours, curvature, and more.
## Examples
- **[Guided Examples](https://trueform.polydera.com/py/examples)** — Step-by-step walkthroughs for spatial queries, topology, and booleans
- **[VTK Integration](https://trueform.polydera.com/py/examples/vtk-integration)** — Interactive VTK applications
Run examples locally:
```bash
git clone https://github.com/polydera/trueform.git
cd trueform/python/examples
pip install vtk # for interactive examples
python vtk/collision.py mesh.stl
```
## Blender Integration
Cached meshes with automatic dirty-tracking for live preview add-ons. See [Blender docs](https://trueform.polydera.com/py/blender).
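The caching idea can be sketched in plain Python; `MeshCache` and the version-counter key below are illustrative assumptions for this sketch, not the add-on's actual API:

```python
# Hypothetical sketch of dirty-tracking: rebuild a trueform mesh only when
# the source object's version changes, otherwise return the cached result.
class MeshCache:
    def __init__(self):
        self._cache = {}  # object name -> (version, built mesh)

    def get(self, name, version, build):
        entry = self._cache.get(name)
        if entry is None or entry[0] != version:  # dirty: source changed
            entry = (version, build())
            self._cache[name] = entry
        return entry[1]

builds = []
cache = MeshCache()
cache.get("cube", 1, lambda: builds.append(1) or "mesh-v1")
cache.get("cube", 1, lambda: builds.append(1) or "mesh-v1")  # cache hit
cache.get("cube", 2, lambda: builds.append(1) or "mesh-v2")  # version bumped, rebuilt
print(len(builds))  # 2 rebuilds total
```

In a real add-on the version would come from the host application's update notifications rather than a manual counter.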
## Benchmarks
| Operation | Input | Time | Speedup | Baseline |
|-----------|-------|------|---------|----------|
| Boolean Union | 2 × 1M | 28 ms | **84×** | CGAL `Simple_cartesian<double>` |
| Mesh–Mesh Curves | 2 × 1M | 7 ms | **233×** | CGAL `Simple_cartesian<double>` |
| ICP Registration | 1M | 7.7 ms | **93×** | libigl |
| Self-Intersection | 1M | 78 ms | **37×** | libigl EPECK (GMP/MPFR) |
| Isocontours | 1M, 16 cuts | 3.8 ms | **38×** | VTK `vtkContourFilter` |
| Connected Components | 1M | 15 ms | **10×** | CGAL |
| Boundary Paths | 1M | 12 ms | **11×** | CGAL |
| k-NN Query | 500K | 1.7 µs | **3×** | nanoflann k-d tree |
| Mesh–Mesh Distance | 2 × 1M | 0.2 ms | **2×** | Coal (FCL) `OBBRSS` |
| Decimation (50%) | 1M | 72 ms | **50×** | CGAL `edge_collapse` |
| Principal Curvatures | 1M | 25 ms | **55×** | libigl |
Apple M4 Max, 16 threads, Clang `-O3`. [Full methodology](https://trueform.polydera.com/py/benchmarks)
## License
Dual-licensed: [PolyForm Noncommercial 1.0.0](https://github.com/polydera/trueform/blob/main/LICENSE.noncommercial) for noncommercial use, [commercial licenses](mailto:info@polydera.com) available.
## Contributing
See [CONTRIBUTING.md](https://github.com/polydera/trueform/blob/main/CONTRIBUTING.md) and [open issues](https://github.com/polydera/trueform/issues).
## Citation
```bibtex
@software{trueform2025,
title={trueform: Real-time Geometric Processing},
author={Sajovic, {\v{Z}}iga and others},
organization={XLAB d.o.o.},
year={2025},
url={https://github.com/polydera/trueform}
}
```
| text/markdown | XLAB | Ziga Sajovic <info@polydera.com> | null | null | # PolyForm Noncommercial License 1.0.0
<https://polyformproject.org/licenses/noncommercial/1.0.0>
## Acceptance
In order to get any license under these terms, you must agree to them as both strict obligations and conditions to all your licenses.
## Copyright License
The licensor grants you a copyright license for the software to do everything you might do with the software that would otherwise infringe the licensor's copyright in it for any permitted purpose. However, you may only distribute the software according to [Distribution License](#distribution-license) and make changes or new works based on the software according to [Changes and New Works License](#changes-and-new-works-license).
## Distribution License
The licensor grants you an additional copyright license to distribute copies of the software. Your license to distribute covers distributing the software with changes and new works permitted by [Changes and New Works License](#changes-and-new-works-license).
## Notices
You must ensure that anyone who gets a copy of any part of the software from you also gets a copy of these terms or the URL for them above, as well as copies of any plain-text lines beginning with `Required Notice:` that the licensor provided with the software. For example:
> Required Notice: Copyright Yoyodyne, Inc. (http://example.com)
## Changes and New Works License
The licensor grants you an additional copyright license to make changes and new works based on the software for any permitted purpose.
## Patent License
The licensor grants you a patent license for the software that covers patent claims the licensor can license, or becomes able to license, that you would infringe by using the software.
## Noncommercial Purposes
Any noncommercial purpose is a permitted purpose.
## Personal Uses
Personal use for research, experiment, and testing for the benefit of public knowledge, personal study, private entertainment, hobby projects, amateur pursuits, or religious observance, without any anticipated commercial application, is use for a permitted purpose.
## Noncommercial Organizations
Use by any charitable organization, educational institution, public research organization, public safety or health organization, environmental protection organization, or government institution is use for a permitted purpose regardless of the source of funding or obligations resulting from the funding.
## Fair Use
You may have "fair use" rights for the software under the law. These terms do not limit them.
## No Other Rights
These terms do not allow you to sublicense or transfer any of your licenses to anyone else, or prevent the licensor from granting licenses to anyone else. These terms do not imply any other licenses.
## Patent Defense
If you make any written claim that the software infringes or contributes to infringement of any patent, your patent license for the software granted under these terms ends immediately. If your company makes such a claim, your patent license ends immediately for work on behalf of your company.
## Violations
The first time you are notified in writing that you have violated any of these terms, or done anything with the software not covered by your licenses, your licenses can nonetheless continue if you come into full compliance with these terms, and take practical steps to correct past violations, within 32 days of receiving notice. Otherwise, all your licenses end immediately.
## No Liability
***As far as the law allows, the software comes as is, without any warranty or condition, and the licensor will not be liable to you for any damages arising out of these terms or the use or nature of the software, under any kind of legal claim.***
## Definitions
The **licensor** is the individual or entity offering these terms, and the **software** is the software the licensor makes available under these terms.
**You** refers to the individual or entity agreeing to these terms.
**Your company** is any legal entity, sole proprietorship, or other kind of organization that you work for, plus all organizations that have control over, are under the control of, or are under common control with that organization. **Control** means ownership of substantially all the assets of an entity, or the power to direct its management and policies by vote, contract, or otherwise. Control can be direct or indirect.
**Your licenses** are all the licenses granted to you for the software under these terms.
**Use** means anything you do with the software requiring one of your licenses.
| mesh, geometry, computational-geometry, mesh-processing, mesh-boolean, collision-detection, point-cloud, csg, numpy | [
"Development Status :: 4 - Beta",
"Intended Audience :: Developers",
"Intended Audience :: Science/Research",
"License :: Other/Proprietary License",
"Operating System :: OS Independent",
"Programming Language :: Python",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3 :: O... | [] | null | null | >=3.9 | [] | [] | [] | [
"numpy>=1.23"
] | [] | [] | [] | [
"Homepage, https://trueform.polydera.com",
"Documentation, https://trueform.polydera.com",
"Repository, https://github.com/polydera/trueform",
"Issues, https://github.com/polydera/trueform/issues"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-18T09:33:17.581916 | trueform-0.6.0.tar.gz | 749,319 | d4/ff/f52d8abeeebce3c218356249fe8c741549fe22af774fc2b4da6c57119958/trueform-0.6.0.tar.gz | source | sdist | null | false | db962cb5872867badc219b44a30e2843 | f7b576cdf500e9e0e3b828adedccd6491e717b02f58b31148a82ae95b8a0d9a6 | d4fff52d8abeeebce3c218356249fe8c741549fe22af774fc2b4da6c57119958 | null | [] | 829 |
2.1 | fantasia | 4.1.1 | Functional ANnoTAtion based on embedding space SImilArity | 
[](https://pypi.org/project/fantasia/)
[](https://fantasia.readthedocs.io/en/latest/?badge=latest)

# FANTASIA v4.1
**Functional ANnoTAtion based on embedding space SImilArity**
FANTASIA is an advanced pipeline for the automatic functional annotation of protein sequences using state-of-the-art protein language models. It integrates deep learning embeddings and in-memory similarity searches, retrieving reference vectors from a PostgreSQL database with pgvector, to associate Gene Ontology (GO) terms with proteins.
For full documentation, visit [FANTASIA Documentation](https://fantasia.readthedocs.io/en/latest/).
For users who need a lightweight, standalone alternative, FANTASIA-Lite provides fast Gene Ontology annotation directly from local FASTA files, without requiring a database server or the full FANTASIA infrastructure. It leverages protein language model embeddings and nearest-neighbor similarity in embedding space to deliver high-quality functional annotations with minimal setup.
For FANTASIA-Lite, visit https://github.com/CBBIO/FANTASIA-Lite
## Reference Datasets
Two packaged reference datasets are available; select one depending on your analysis needs:
- **Main Reference (last layer, default)**
Embeddings extracted only from the **final hidden layer** of each PLM.
Recommended for most annotation tasks (smaller, faster to load).
*Record*: https://zenodo.org/records/17795871
- **Multilayer Reference (early layers + final layers)**
Embeddings extracted from **multiple hidden layers** (including intermediate and final).
Suitable for comparative and exploratory analyses requiring layer-wise representations.
*Record*: https://zenodo.org/records/17793273
## Key Features
- **✅ Available Embedding Models**
Supports protein language models: **ESM-2**, **ProtT5**, **ProstT5**, **Ankh3-Large**, and **ESM3c** for sequence representation.
- **🔍 Redundancy Filtering**
Filters out homologous sequences using **MMseqs2** in the lookup table, allowing controlled redundancy levels through an adjustable
threshold, ensuring reliable benchmarking and evaluation.
- **💾 Optimized Data Storage**
Embeddings are stored in **HDF5 format** for input sequences. The reference table, however, is hosted in a **public
relational PostgreSQL database** using **pgvector**.
- **🚀 Efficient Similarity Lookup**
High-throughput similarity search with a **hybrid approach**: reference embeddings are stored in a **PostgreSQL + pgvector** database and **fetched in batches to memory** to compute similarities at speed.
- **🧭 Global & Local Alignment of Hits**
Candidate hits from the reference table are **aligned both globally and locally** against the input protein for validation and scoring.
- **🧩 Multi-layer Embedding Support**
Optional support for **intermediate + final layers** to enable layer-wise analyses and improved exploration.
- **📦 Raw Outputs & Flexible Post-processing**
Exposes **raw result tables** for custom analyses and includes a **flexible post-processing & scoring system** that produces **TopGO-ready** files.
- **🔬 Functional Annotation by Similarity**
Assigns Gene Ontology (GO) terms to proteins based on **embedding space similarity**, using pre-trained embeddings from all supported models.
## Pipeline Overview (Simplified)
1. **Embedding Generation**
Computes protein embeddings using deep learning models (**ESM-2**, **ProtT5**, **ProstT5**, **Ankh3-Large**, and **ESM3c**).
2. **GO Term Lookup**
Performs vector similarity searches using **in-memory computations** to assign Gene Ontology terms. Reference
embeddings are retrieved from a **PostgreSQL database with pgvector**. Only experimental evidence codes are used for transfer.
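The lookup step amounts to a nearest-neighbor search in embedding space. The snippet below is a conceptual sketch with toy vectors and GO labels, not FANTASIA's actual code (which batches reference vectors from PostgreSQL/pgvector):

```python
import numpy as np

# Toy reference set: one embedding row per annotated reference protein.
reference = np.array([[1.0, 0.0], [0.0, 1.0], [0.7, 0.7]])
go_terms = ["GO:0003677", "GO:0005524", "GO:0008270"]

# Query embedding for the input protein.
query = np.array([0.9, 0.1])

# Cosine similarity = dot product of L2-normalized vectors.
ref_norm = reference / np.linalg.norm(reference, axis=1, keepdims=True)
sims = ref_norm @ (query / np.linalg.norm(query))

# Transfer the GO term of the most similar reference protein.
best = int(np.argmax(sims))
print(go_terms[best])  # GO:0003677
```

FANTASIA additionally restricts transfer to annotations backed by experimental evidence codes, as noted above.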
## Setting Up Required Services with Docker Compose
FANTASIA requires two key services:
- **PostgreSQL 16 with pgvector**: Stores reference protein embeddings and provides vector similarity search
- **RabbitMQ**: Message broker for distributed embedding task processing
### Prerequisites
- Docker and Docker Compose installed
### Quick Start
1. **Start services** (from the FANTASIA directory):
```bash
docker-compose up -d
```
2. **Verify services are running**:
```bash
docker-compose ps
```
Expected output:
```
CONTAINER ID IMAGE STATUS
xxx pgvector/pgvector:0.7.0-pg16 Up (healthy)
xxx rabbitmq:3.13-management Up (healthy)
```
3. **Test database connection**:
```bash
PGPASSWORD=clave psql -h localhost -U usuario -d BioData -c "SELECT 1"
```
### Service Credentials
The `docker-compose.yml` is configured with the following default credentials (matching `config.yaml`):
| Service | Host | Port | User | Password | Database |
|------------|-----------|-------|----------|----------|----------|
| PostgreSQL | localhost | 5432 | usuario | clave | BioData |
| RabbitMQ | localhost | 5672 | guest | guest | - |
RabbitMQ Management UI is available at: http://localhost:15672 (user: guest, password: guest)
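For scripted connectivity checks, the defaults above can be assembled into a standard PostgreSQL connection URI (a small sketch; any libpq-based client such as `psycopg` or SQLAlchemy accepts this DSN format):

```python
# Build a DSN from the default docker-compose credentials shown above.
creds = {
    "user": "usuario",
    "password": "clave",
    "host": "localhost",
    "port": 5432,
    "database": "BioData",
}
dsn = "postgresql://{user}:{password}@{host}:{port}/{database}".format(**creds)
print(dsn)  # postgresql://usuario:clave@localhost:5432/BioData
```

If you change the credentials in `docker-compose.yml`, update `config.yaml` (and any DSN like this one) to match.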
### Troubleshooting
**Connection refused error**:
```bash
# Check if containers are running
docker-compose ps
# If stopped, restart them
docker-compose restart
# View logs
docker-compose logs postgres
docker-compose logs rabbitmq
```
**Password authentication failed**:
Ensure the credentials in `docker-compose.yml` match those in `config.yaml`:
```bash
# Current values in docker-compose.yml
POSTGRES_USER: usuario
POSTGRES_PASSWORD: clave
POSTGRES_DB: BioData
```
**Cleaning up**: To remove containers and volumes:
```bash
docker-compose down -v
```
## 📚 Supported Embedding Models
| Name | Model ID | Params | Architecture | Description |
|--------------|-------------------------------------------|--------|-------------------|-----------------------------------------------------------------------------|
| **ESM-2** | `facebook/esm2_t33_650M_UR50D` | 650M | Encoder (33L) | Learns structure/function from UniRef50. No MSAs. Optimized for accuracy. |
| **ProtT5** | `Rostlab/prot_t5_xl_uniref50` | 1.2B | Encoder-Decoder | Trained on UniRef50. Strong transfer for structure/function tasks. |
| **ProstT5** | `Rostlab/ProstT5` | 1.2B | Multi-modal T5 | Learns 3Di structural states + function. Enhances contact/function tasks. |
| **Ankh3-Large** | `ElnaggarLab/ankh3-large` | 620M | Encoder (T5-style)| Fast inference. Good semantic/structural representation. |
| **ESM3c** | `esmc_600m` | 600M | Encoder (36L) | New gen. model trained on UniRef + MGnify + JGI. High precision & speed. |
## Acknowledgments
FANTASIA is the result of a collaborative effort between **Ana Rojas’ Lab (CBBIO)** (Andalusian Center for Developmental
Biology, CSIC) and **Rosa Fernández’s Lab** (Metazoa Phylogenomics Lab, Institute of Evolutionary Biology, CSIC-UPF).
This project demonstrates the synergy between research teams with diverse expertise.
This version of FANTASIA builds upon previous work from:
- [`Metazoa Phylogenomics Lab's FANTASIA`](https://github.com/MetazoaPhylogenomicsLab/FANTASIA)
The original implementation of FANTASIA for functional annotation.
- [`bio_embeddings`](https://github.com/sacdallago/bio_embeddings)
A state-of-the-art framework for generating protein sequence embeddings.
- [`GoPredSim`](https://github.com/Rostlab/goPredSim)
A similarity-based approach for Gene Ontology annotation.
- [`protein-information-system`](https://github.com/CBBIO/protein-information-system)
Serves as the **reference biological information system**, providing a robust data model and curated datasets for
protein structural and functional analysis.
We also extend our gratitude to **LifeHUB-CSIC** for inspiring this initiative and fostering innovation in computational
biology.
## Citing FANTASIA
If you use **FANTASIA** in your research, please cite the following publications:
1. Martínez-Redondo, G. I., Barrios, I., Vázquez-Valls, M., Rojas, A. M., & Fernández, R. (2024).
*Illuminating the functional landscape of the dark proteome across the Animal Tree of Life.*
[DOI: 10.1101/2024.02.28.582465](https://doi.org/10.1101/2024.02.28.582465)
2. Barrios-Núñez, I., Martínez-Redondo, G. I., Medina-Burgos, P., Cases, I., Fernández, R., & Rojas, A. M. (2024).
*Decoding proteome functional information in model organisms using protein language models.*
[DOI: 10.1101/2024.02.14.580341](https://doi.org/10.1101/2024.02.14.580341)
## License
FANTASIA is distributed under the terms of the [GNU Affero General Public License v3.0](LICENSE).
---
### 👥 Project Team
- **Ana M. Rojas**: [a.rojas.m@csic.es](mailto:a.rojas.m@csic.es)
- **Rosa Fernández**: [rosa.fernandez@ibe.upf-csic.es](mailto:rosa.fernandez@ibe.upf-csic.es)
- **Gemma I. Martínez-Redondo**: [gemma.martinez@ibe.upf-csic.es](mailto:gemma.martinez@ibe.upf-csic.es)
- **Francisco Miguel Pérez Canales**: [fmpercan@upo.es](mailto:fmpercan@upo.es)
- **Belén Carbonetto**: [belen.carbonetto.metazomics@gmail.com](mailto:belen.carbonetto.metazomics@gmail.com)
- **Francisco J. Ruiz Mota**: [fraruimot@alum.us.es](mailto:fraruimot@alum.us.es)
- **Àlex Domínguez Rodríguez**: [adomrod4@upo.es](mailto:adomrod4@upo.es)
---
| text/markdown | Francisco Miguel Pérez Canales | frapercan1@alum.us.es | null | null | null | null | [
"Programming Language :: Python :: 3"
] | [] | null | null | <4.0,>=3.12 | [] | [] | [] | [
"docopt<0.7.0,>=0.6.2",
"parasail<2.0.0,>=1.3.4",
"ete3<4.0.0,>=3.1.3",
"scipy<2.0.0,>=1.16.0",
"protein-information-system<4.0.0,>=3.1.0",
"goatools<2.0.0,>=1.4.12",
"polars<2.0.0,>=1.33.1",
"pyarrow<22.0.0,>=21.0.0",
"numpy<3.0.0,>=2.3.4"
] | [] | [] | [] | [] | poetry/1.4.0 CPython/3.12.12 Linux/6.8.0-1044-azure | 2026-02-18T09:32:39.336932 | fantasia-4.1.1.tar.gz | 64,716 | ea/d2/eb56e4ccd9bd5a6195e28804bcb77547296095c7f38317d397aaa0d56a59/fantasia-4.1.1.tar.gz | source | sdist | null | false | 13a1f93933d41c214ba145f321062dc1 | 2435aa75658df1eae4563edbf3b769004de15ea069906934e71576cd5a40bd13 | ead2eb56e4ccd9bd5a6195e28804bcb77547296095c7f38317d397aaa0d56a59 | null | [] | 244 |
2.4 | secops | 0.35.0 | Python SDK for wrapping the Google SecOps API for common use cases | # Google SecOps SDK for Python
[](https://pypi.org/project/secops/)
A Python SDK for interacting with Google Security Operations products, currently supporting Chronicle/SecOps SIEM.
This wraps the API for common use cases, including UDM searches, entity lookups, IoCs, alert management, case management, and detection rule management.
## Prerequisites
Follow these steps to ensure your environment is properly configured:
1. **Configure a Google Cloud Project for Google SecOps**
- Your Google Cloud project must be linked to your Google SecOps instance.
- Chronicle API needs to be enabled in your Google Cloud project.
- The project used for authentication must be the same project that was set up during your SecOps onboarding.
- For detailed instructions, see [Configure a Google Cloud project for Google SecOps](https://cloud.google.com/chronicle/docs/onboard/configure-cloud-project).
2. **Set up IAM Permissions**
- The service account or user credentials you use must have appropriate permissions
- The recommended predefined role is **Chronicle API Admin** (`roles/chronicle.admin`)
- For more granular access control, you can create custom roles with specific permissions
- See [Access control using IAM](https://cloud.google.com/chronicle/docs/onboard/configure-feature-access) for detailed permission information
3. **Required Information**
- Your Chronicle instance ID (customer_id)
- Your Google Cloud project number (project_id)
- Your preferred region (e.g., "us", "europe", "asia")
> **Note:** Using a Google Cloud project that is not linked to your SecOps instance will result in authentication failures, even if the service account/user has the correct IAM roles assigned.
## Installation
```bash
pip install secops
```
## Command Line Interface
The SDK also provides a comprehensive command-line interface (CLI) that makes it easy to interact with Google Security Operations products from your terminal:
```bash
# Save your credentials
secops config set --customer-id "your-instance-id" --project-id "your-project-id" --region "us"
# Now use commands without specifying credentials each time
secops search --query "metadata.event_type = \"NETWORK_CONNECTION\""
```
For detailed CLI documentation and examples, see the [CLI Documentation](https://github.com/google/secops-wrapper/blob/main/CLI.md).
## Authentication
The SDK supports two main authentication methods:
### 1. Application Default Credentials (ADC)
The simplest and recommended way to authenticate the SDK. Application Default Credentials provide a consistent authentication method that works across different Google Cloud environments and local development.
There are several ways to use ADC:
#### a. Using `gcloud` CLI (Recommended for Local Development)
```bash
# Login and set up application-default credentials
gcloud auth application-default login
```
Then in your code:
```python
from secops import SecOpsClient
# Initialize with default credentials - no explicit configuration needed
client = SecOpsClient()
```
#### b. Using Environment Variable
Set the environment variable pointing to your service account key:
```bash
export GOOGLE_APPLICATION_CREDENTIALS="/path/to/service-account.json"
```
Then in your code:
```python
from secops import SecOpsClient
# Initialize with default credentials - will automatically use the credentials file
client = SecOpsClient()
```
#### c. Google Cloud Environment (Automatic)
When running on Google Cloud services (Compute Engine, Cloud Functions, Cloud Run, etc.), ADC works automatically without any configuration:
```python
from secops import SecOpsClient
# Initialize with default credentials - will automatically use the service account
# assigned to your Google Cloud resource
client = SecOpsClient()
```
ADC will automatically try these authentication methods in order:
1. Environment variable `GOOGLE_APPLICATION_CREDENTIALS`
2. Google Cloud SDK credentials (set by `gcloud auth application-default login`)
3. Google Cloud-provided service account credentials
4. Local service account impersonation credentials
### 2. Service Account Authentication
For more explicit control, you can authenticate using a service account that is created in the Google Cloud project associated with Google SecOps.
**Important Note on Permissions:**
* This service account needs to be granted the appropriate Identity and Access Management (IAM) role to interact with the Google Secops (Chronicle) API. The recommended predefined role is **Chronicle API Admin** (`roles/chronicle.admin`). Alternatively, if your security policies require more granular control, you can create a custom IAM role with the specific permissions needed for the operations you intend to use (e.g., `chronicle.instances.get`, `chronicle.events.create`, `chronicle.rules.list`, etc.).
Once the service account is properly permissioned, you can authenticate using it in two ways:
#### a. Using a Service Account JSON File
```python
from secops import SecOpsClient
# Initialize with service account JSON file
client = SecOpsClient(service_account_path="/path/to/service-account.json")
```
#### b. Using Service Account Info Dictionary
If you prefer to manage credentials programmatically without a file, you can create a dictionary containing the service account key's contents.
```python
from secops import SecOpsClient
# Service account details as a dictionary
service_account_info = {
"type": "service_account",
"project_id": "your-project-id",
"private_key_id": "key-id",
"private_key": "-----BEGIN PRIVATE KEY-----\n...",
"client_email": "service-account@project.iam.gserviceaccount.com",
"client_id": "client-id",
"auth_uri": "https://accounts.google.com/o/oauth2/auth",
"token_uri": "https://oauth2.googleapis.com/token",
"auth_provider_x509_cert_url": "https://www.googleapis.com/oauth2/v1/certs",
"client_x509_cert_url": "https://www.googleapis.com/robot/v1/metadata/x509/..."
}
# Initialize with service account info
client = SecOpsClient(service_account_info=service_account_info)
```
### Impersonate Service Account
Both [Application Default Credentials](#1-application-default-credentials-adc) and [Service Account Authentication](#2-service-account-authentication) support impersonating a service account via the `impersonate_service_account` parameter:
```python
from secops import SecOpsClient
# Initialize with default credentials and impersonate service account
client = SecOpsClient(impersonate_service_account="secops@test-project.iam.gserviceaccount.com")
```
### Retry Configuration
The SDK provides built-in retry functionality that automatically handles transient errors such as rate limiting (429), server errors (500, 502, 503, 504), and network issues. You can customize the retry behavior when initializing the client:
```python
from secops import SecOpsClient
from secops.auth import RetryConfig
# Define retry configurations
retry_config = RetryConfig(
total=3, # Maximum number of retries (default: 5)
retry_status_codes=[429, 500, 502, 503, 504], # HTTP status codes to retry
allowed_methods=["GET", "DELETE"], # HTTP methods to retry
backoff_factor=0.5 # Backoff factor (default: 0.3)
)
# Initialize with custom retry config
client = SecOpsClient(retry_config=retry_config)
# Disable retry completely by marking retry config as False
client = SecOpsClient(retry_config=False)
```
## Using the Chronicle API
### Initializing the Chronicle Client
After creating a SecOpsClient, you need to initialize the Chronicle-specific client:
```python
# Initialize Chronicle client
chronicle = client.chronicle(
customer_id="your-chronicle-instance-id", # Your Chronicle instance ID
project_id="your-project-id", # Your GCP project ID
region="us" # Chronicle API region
)
```
[See available regions](https://github.com/google/secops-wrapper/blob/main/regions.md)
#### API Version Control
The SDK supports flexible API version selection:
- **Default Version**: Set `default_api_version` during client initialization (default is `v1alpha`)
- **Per-Method Override**: Many methods accept an `api_version` parameter to override the default for specific calls
**Supported API versions:**
- `v1` - Stable production API
- `v1beta` - Beta API with newer features
- `v1alpha` - Alpha API with experimental features
**Example with per-method version override:**
```python
from secops.chronicle.models import APIVersion
# Client defaults to v1alpha
chronicle = client.chronicle(
customer_id="your-chronicle-instance-id",
project_id="your-project-id",
region="us",
default_api_version="v1alpha"
)
# Use v1 for a specific rule operation
rule = chronicle.get_rule(
rule_id="ru_12345678-1234-1234-1234-123456789abc",
api_version=APIVersion.V1 # Override to use v1 for this call
)
```
### Log Ingestion
Ingest raw logs directly into Chronicle:
```python
from datetime import datetime, timezone
import json
# Create a sample log (this is an OKTA log)
current_time = datetime.now(timezone.utc).isoformat().replace('+00:00', 'Z')
okta_log = {
"actor": {
"alternateId": "mark.taylor@cymbal-investments.org",
"displayName": "Mark Taylor",
"id": "00u4j7xcb5N6zfiRP5d8",
"type": "User"
},
"client": {
"userAgent": {
"rawUserAgent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/112.0.0.0 Safari/537.36",
"os": "Windows 10",
"browser": "CHROME"
},
"ipAddress": "96.6.127.53",
"geographicalContext": {
"city": "New York",
"state": "New York",
"country": "United States",
"postalCode": "10118",
"geolocation": {"lat": 40.7123, "lon": -74.0068}
}
},
"displayMessage": "Max sign in attempts exceeded",
"eventType": "user.account.lock",
"outcome": {"result": "FAILURE", "reason": "LOCKED_OUT"},
"published": "2025-06-19T21:51:50.116Z",
"securityContext": {
"asNumber": 20940,
"asOrg": "akamai technologies inc.",
"isp": "akamai international b.v.",
"domain": "akamaitechnologies.com",
"isProxy": false
},
"severity": "DEBUG",
"legacyEventType": "core.user_auth.account_locked",
"uuid": "5b90a94a-d7ba-11ea-834a-85c24a1b2121",
"version": "0"
# ... additional OKTA log fields may be included
}
# Ingest a single log using the default forwarder
result = chronicle.ingest_log(
log_type="OKTA", # Chronicle log type
log_message=json.dumps(okta_log) # JSON string of the log
)
print(f"Operation: {result.get('operation')}")
# Batch ingestion: Ingest multiple logs in a single request
batch_logs = [
json.dumps({"actor": {"displayName": "User 1"}, "eventType": "user.session.start"}),
json.dumps({"actor": {"displayName": "User 2"}, "eventType": "user.session.start"}),
json.dumps({"actor": {"displayName": "User 3"}, "eventType": "user.session.start"})
]
# Ingest multiple logs in a single API call
batch_result = chronicle.ingest_log(
log_type="OKTA",
log_message=batch_logs # List of log message strings
)
print(f"Batch operation: {batch_result.get('operation')}")
# Add custom labels to your logs
labeled_result = chronicle.ingest_log(
log_type="OKTA",
log_message=json.dumps(okta_log),
labels={"environment": "production", "app": "web-portal", "team": "security"}
)
```
The SDK also supports non-JSON log formats. Here's an example with XML for Windows Event logs:
```python
# Create a Windows Event XML log
xml_content = """<Event xmlns='http://schemas.microsoft.com/win/2004/08/events/event'>
<System>
<Provider Name='Microsoft-Windows-Security-Auditing' Guid='{54849625-5478-4994-A5BA-3E3B0328C30D}'/>
<EventID>4624</EventID>
<Version>1</Version>
<Level>0</Level>
<Task>12544</Task>
<Opcode>0</Opcode>
<Keywords>0x8020000000000000</Keywords>
<TimeCreated SystemTime='2024-05-10T14:30:00Z'/>
<EventRecordID>202117513</EventRecordID>
<Correlation/>
<Execution ProcessID='656' ThreadID='700'/>
<Channel>Security</Channel>
<Computer>WIN-SERVER.xyz.net</Computer>
<Security/>
</System>
<EventData>
<Data Name='SubjectUserSid'>S-1-0-0</Data>
<Data Name='SubjectUserName'>-</Data>
<Data Name='TargetUserName'>svcUser</Data>
<Data Name='WorkstationName'>CLIENT-PC</Data>
<Data Name='LogonType'>3</Data>
</EventData>
</Event>"""
# Ingest the XML log - no json.dumps() needed for XML
result = chronicle.ingest_log(
log_type="WINEVTLOG_XML", # Windows Event Log XML format
log_message=xml_content # Raw XML content
)
print(f"Operation: {result.get('operation')}")
```
The SDK supports all log types available in Chronicle. You can:
1. View available log types:
```python
# Get all available log types
log_types = chronicle.get_all_log_types()
for lt in log_types[:5]:  # Show first 5
    print(f"{lt.id}: {lt.description}")
# Fetch only first 50 log types (single page)
log_types_page = chronicle.get_all_log_types(page_size=50)
# Fetch specific page using token
log_types_next = chronicle.get_all_log_types(
page_size=50,
page_token="next_page_token"
)
```
2. Search for specific log types:
```python
# Search for log types related to firewalls
firewall_types = chronicle.search_log_types("firewall")
for lt in firewall_types:
    print(f"{lt.id}: {lt.description}")
```
3. Validate log types:
```python
# Check if a log type is valid
if chronicle.is_valid_log_type("OKTA"):
print("Valid log type")
else:
print("Invalid log type")
```
4. Classify logs to predict log type:
```python
# Classify a raw log to determine its type
okta_log = '{"eventType": "user.session.start", "actor": {"alternateId": "user@example.com"}}'
predictions = chronicle.classify_logs(log_data=okta_log)
# Display predictions sorted by confidence score
for prediction in predictions:
    print(f"Log Type: {prediction['logType']}, Score: {prediction['score']}")
```
> **Note:** Confidence scores are provided by the API as guidance only and may not always accurately reflect classification certainty. Use scores for relative ranking rather than absolute confidence.
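Because scores are only useful for relative ranking, a common pattern is to take the highest-scoring prediction rather than thresholding on the score. The `predictions` list below uses toy values mirroring the shape returned by `classify_logs()` above:

```python
# Toy predictions in the same shape as classify_logs() output.
predictions = [
    {"logType": "OKTA", "score": 0.91},
    {"logType": "AZURE_AD", "score": 0.42},
]

# Rank by score and keep the top candidate.
best = max(predictions, key=lambda p: p["score"])
print(best["logType"])  # OKTA
```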
5. Use custom forwarders:
```python
# Create or get a custom forwarder
forwarder = chronicle.get_or_create_forwarder(display_name="MyCustomForwarder")
forwarder_id = forwarder["name"].split("/")[-1]
# Use the custom forwarder for log ingestion
result = chronicle.ingest_log(
log_type="WINDOWS",
log_message=json.dumps(windows_log),
forwarder_id=forwarder_id
)
```
### Forwarder Management
Chronicle log forwarders are essential for handling log ingestion with specific configurations. The SDK provides comprehensive methods for creating and managing forwarders:
#### Create a new forwarder
```python
# Create a basic forwarder with just a display name
forwarder = chronicle.create_forwarder(display_name="MyAppForwarder")
# Create a forwarder with optional configuration
forwarder = chronicle.create_forwarder(
    display_name="ProductionForwarder",
    metadata={"labels": {"env": "prod"}},
    upload_compression=True,  # Enable upload compression for efficiency
    enable_server=False,      # Server functionality disabled
    http_settings={
        "port": 8080,
        "host": "192.168.0.100",
        "routeSettings": {
            "availableStatusCode": 200,
            "readyStatusCode": 200,
            "unreadyStatusCode": 500
        }
    }
)
print(f"Created forwarder with ID: {forwarder['name'].split('/')[-1]}")
```
#### List all forwarders
Retrieve all forwarders in your Chronicle environment with pagination support:
```python
# Get the default page size (50)
forwarders = chronicle.list_forwarders()
# Get forwarders with custom page size
forwarders = chronicle.list_forwarders(page_size=100)
# Process the forwarders
for forwarder in forwarders.get("forwarders", []):
    forwarder_id = forwarder.get("name", "").split("/")[-1]
    display_name = forwarder.get("displayName", "")
    create_time = forwarder.get("createTime", "")
    print(f"Forwarder ID: {forwarder_id}, Name: {display_name}, Created: {create_time}")
```
#### Get forwarder details
Retrieve details about a specific forwarder using its ID:
```python
# Get a specific forwarder using its ID
forwarder_id = "1234567890"
forwarder = chronicle.get_forwarder(forwarder_id=forwarder_id)
# Access forwarder properties
display_name = forwarder.get("displayName", "")
metadata = forwarder.get("metadata", {})
server_enabled = forwarder.get("enableServer", False)
print(f"Forwarder {display_name} details:")
print(f" Metadata: {metadata}")
print(f" Server enabled: {server_enabled}")
```
#### Get or create a forwarder
Retrieve an existing forwarder by display name or create a new one if it doesn't exist:
```python
# Try to find a forwarder with the specified display name
# If not found, create a new one with that display name
forwarder = chronicle.get_or_create_forwarder(display_name="ApplicationLogForwarder")
# Extract the forwarder ID for use in log ingestion
forwarder_id = forwarder["name"].split("/")[-1]
```
#### Update a forwarder
Update an existing forwarder's configuration with specific properties:
```python
# Update a forwarder with new properties
forwarder = chronicle.update_forwarder(
    forwarder_id="1234567890",
    display_name="UpdatedForwarderName",
    metadata={"labels": {"env": "prod"}},
    upload_compression=True
)

# Update specific fields using an update mask
forwarder = chronicle.update_forwarder(
    forwarder_id="1234567890",
    display_name="ProdForwarder",
    update_mask=["display_name"]
)
print(f"Updated forwarder: {forwarder['name']}")
```
#### Delete a forwarder
Delete an existing forwarder by its ID:
```python
# Delete a forwarder by ID
chronicle.delete_forwarder(forwarder_id="1234567890")
print("Forwarder deleted successfully")
```
### Log Processing Pipelines
Chronicle log processing pipelines allow you to transform, filter, and enrich log data before it is stored in Chronicle. Common use cases include removing empty key-value pairs, redacting sensitive data, adding ingestion labels, filtering logs by field values, and extracting host information. Pipelines can be associated with log types (with optional collector IDs) and feeds, providing flexible control over your data ingestion workflow.
The SDK provides comprehensive methods for managing pipelines, associating streams, testing configurations, and fetching sample logs.
#### List pipelines
Retrieve all log processing pipelines in your Chronicle instance:
```python
# Get all pipelines
result = chronicle.list_log_processing_pipelines()
pipelines = result.get("logProcessingPipelines", [])
for pipeline in pipelines:
    pipeline_id = pipeline["name"].split("/")[-1]
    print(f"Pipeline: {pipeline['displayName']} (ID: {pipeline_id})")

# List with pagination
result = chronicle.list_log_processing_pipelines(
    page_size=50,
    page_token="next_page_token"
)
```
#### Get pipeline details
Retrieve details about a specific pipeline:
```python
# Get pipeline by ID
pipeline_id = "1234567890"
pipeline = chronicle.get_log_processing_pipeline(pipeline_id)
print(f"Name: {pipeline['displayName']}")
print(f"Description: {pipeline.get('description', 'N/A')}")
print(f"Processors: {len(pipeline.get('processors', []))}")
```
#### Create a pipeline
Create a new log processing pipeline with processors:
```python
# Define pipeline configuration
pipeline_config = {
    "displayName": "My Custom Pipeline",
    "description": "Filters and transforms application logs",
    "processors": [
        {
            "filterProcessor": {
                "include": {
                    "logMatchType": "REGEXP",
                    "logBodies": [".*error.*", ".*warning.*"],
                },
                "errorMode": "IGNORE",
            }
        }
    ],
    "customMetadata": [
        {"key": "environment", "value": "production"},
        {"key": "team", "value": "security"}
    ]
}

# Create the pipeline (server generates the ID)
created_pipeline = chronicle.create_log_processing_pipeline(
    pipeline=pipeline_config
)
pipeline_id = created_pipeline["name"].split("/")[-1]
print(f"Created pipeline with ID: {pipeline_id}")
```
#### Update a pipeline
Update an existing pipeline's configuration:
```python
# Get the existing pipeline first
pipeline = chronicle.get_log_processing_pipeline(pipeline_id)
# Update specific fields
updated_config = {
    "name": pipeline["name"],
    "description": "Updated description",
    "processors": pipeline["processors"]
}

# Patch with an update mask
updated_pipeline = chronicle.update_log_processing_pipeline(
    pipeline_id=pipeline_id,
    pipeline=updated_config,
    update_mask="description"
)
print(f"Updated: {updated_pipeline['displayName']}")
```
#### Delete a pipeline
Delete an existing pipeline:
```python
# Delete by ID
chronicle.delete_log_processing_pipeline(pipeline_id)
print("Pipeline deleted successfully")
# Delete with etag for concurrency control
chronicle.delete_log_processing_pipeline(
    pipeline_id=pipeline_id,
    etag="etag_value"
)
```
#### Associate streams with a pipeline
Associate log streams (by log type or feed) with a pipeline:
```python
# Associate by log type
streams = [
    {"logType": "WINEVTLOG"},
    {"logType": "LINUX"}
]
chronicle.associate_streams(
    pipeline_id=pipeline_id,
    streams=streams
)
print("Streams associated successfully")

# Associate by feed ID
feed_streams = [
    {"feed": "feed-uuid-1"},
    {"feed": "feed-uuid-2"}
]
chronicle.associate_streams(
    pipeline_id=pipeline_id,
    streams=feed_streams
)
```
#### Dissociate streams from a pipeline
Remove stream associations from a pipeline:
```python
# Dissociate streams
streams = [{"logType": "WINEVTLOG"}]
chronicle.dissociate_streams(
    pipeline_id=pipeline_id,
    streams=streams
)
print("Streams dissociated successfully")
```
#### Fetch associated pipeline
Find which pipeline is associated with a specific stream:
```python
# Find pipeline for a log type
stream_query = {"logType": "WINEVTLOG"}
associated = chronicle.fetch_associated_pipeline(stream=stream_query)
if associated:
    print(f"Associated pipeline: {associated['name']}")
else:
    print("No pipeline associated with this stream")
# Find pipeline for a feed
feed_query = {"feed": "feed-uuid"}
associated = chronicle.fetch_associated_pipeline(stream=feed_query)
```
#### Fetch sample logs
Retrieve sample logs for specific streams:
```python
# Fetch sample logs for log types
streams = [
    {"logType": "WINEVTLOG"},
    {"logType": "LINUX"}
]
result = chronicle.fetch_sample_logs_by_streams(
    streams=streams,
    sample_logs_count=10
)
for log in result.get("logs", []):
    print(f"Log: {log}")
```
#### Test a pipeline
Test a pipeline configuration against sample logs before deployment:
```python
import base64
from datetime import datetime, timezone
# Define pipeline to test
pipeline_config = {
    "displayName": "Test Pipeline",
    "processors": [
        {
            "filterProcessor": {
                "include": {
                    "logMatchType": "REGEXP",
                    "logBodies": [".*"],
                },
                "errorMode": "IGNORE",
            }
        }
    ]
}

# Create test logs with base64-encoded data
current_time = datetime.now(timezone.utc).isoformat()
log_data = base64.b64encode(b"Sample log entry").decode("utf-8")
input_logs = [
    {
        "data": log_data,
        "logEntryTime": current_time,
        "collectionTime": current_time,
    }
]

# Test the pipeline
result = chronicle.test_pipeline(
    pipeline=pipeline_config,
    input_logs=input_logs
)
print(f"Processed {len(result.get('logs', []))} logs")
for processed_log in result.get("logs", []):
    print(f"Result: {processed_log}")
```
6. Use custom timestamps:
```python
from datetime import datetime, timedelta, timezone
# Define custom timestamps
log_entry_time = datetime.now(timezone.utc) - timedelta(hours=1)
collection_time = datetime.now(timezone.utc)
result = chronicle.ingest_log(
    log_type="OKTA",
    log_message=json.dumps(okta_log),
    log_entry_time=log_entry_time,   # When the log was generated
    collection_time=collection_time  # When the log was collected
)
```
Ingest UDM events directly into Chronicle:
```python
import uuid
from datetime import datetime, timezone
# Generate a unique ID
event_id = str(uuid.uuid4())
# Get current time in ISO 8601 format
current_time = datetime.now(timezone.utc).isoformat().replace("+00:00", "Z")
# Create a UDM event for a network connection
network_event = {
    "metadata": {
        "id": event_id,
        "event_timestamp": current_time,
        "event_type": "NETWORK_CONNECTION",
        "product_name": "My Security Product",
        "vendor_name": "My Company"
    },
    "principal": {
        "hostname": "workstation-1",
        "ip": "192.168.1.100",
        "port": 12345
    },
    "target": {
        "ip": "203.0.113.10",
        "port": 443
    },
    "network": {
        "application_protocol": "HTTPS",
        "direction": "OUTBOUND"
    }
}
# Ingest a single UDM event
result = chronicle.ingest_udm(udm_events=network_event)
print(f"Ingested event with ID: {event_id}")
# Create a second event
process_event = {
    "metadata": {
        # No ID - one will be auto-generated
        "event_timestamp": current_time,
        "event_type": "PROCESS_LAUNCH",
        "product_name": "My Security Product",
        "vendor_name": "My Company"
    },
    "principal": {
        "hostname": "workstation-1",
        "process": {
            "command_line": "ping 8.8.8.8",
            "pid": 1234
        },
        "user": {
            "userid": "user123"
        }
    }
}
# Ingest multiple UDM events in a single call
result = chronicle.ingest_udm(udm_events=[network_event, process_event])
print("Multiple events ingested successfully")
```
Import entities into Chronicle:
```python
# Create a sample entity
entity = {
    "metadata": {
        "collected_timestamp": "2025-01-01T00:00:00Z",
        "vendor_name": "TestVendor",
        "product_name": "TestProduct",
        "entity_type": "USER",
    },
    "entity": {
        "user": {
            "userid": "testuser",
        }
    },
}
# Import a single entity
result = chronicle.import_entities(entities=entity, log_type="TEST_LOG_TYPE")
print(f"Imported entity: {result}")
# Import multiple entities
entity2 = {
    "metadata": {
        "collected_timestamp": "2025-01-01T00:00:00Z",
        "vendor_name": "TestVendor",
        "product_name": "TestProduct",
        "entity_type": "ASSET",
    },
    "entity": {
        "asset": {
            "hostname": "testhost",
        }
    },
}
entities = [entity, entity2]
result = chronicle.import_entities(entities=entities, log_type="TEST_LOG_TYPE")
print(f"Imported entities: {result}")
```
### Data Export
> **Note**: The Data Export API features are currently under test and review. We welcome your feedback and encourage you to submit any issues or unexpected behavior to the issue tracker so we can improve this functionality.
You can export Chronicle logs to Google Cloud Storage using the Data Export API:
```python
from datetime import datetime, timedelta, timezone
# Set time range for export
end_time = datetime.now(timezone.utc)
start_time = end_time - timedelta(days=1) # Last 24 hours
# Get available log types for export
available_log_types = chronicle.fetch_available_log_types(
    start_time=start_time,
    end_time=end_time
)

# Print available log types
for log_type in available_log_types["available_log_types"]:
    print(f"{log_type.display_name} ({log_type.log_type.split('/')[-1]})")
    print(f"  Available from {log_type.start_time} to {log_type.end_time}")
# Create a data export for a single log type (legacy method)
export = chronicle.create_data_export(
    gcs_bucket="projects/my-project/buckets/my-export-bucket",
    start_time=start_time,
    end_time=end_time,
    log_type="GCP_DNS"  # Single log type to export
)

# Create a data export for multiple log types
export_multiple = chronicle.create_data_export(
    gcs_bucket="projects/my-project/buckets/my-export-bucket",
    start_time=start_time,
    end_time=end_time,
    log_types=["WINDOWS", "LINUX", "GCP_DNS"]  # Multiple log types to export
)
# Get the export ID
export_id = export["name"].split("/")[-1]
print(f"Created export with ID: {export_id}")
print(f"Status: {export['data_export_status']['stage']}")
# List recent exports
recent_exports = chronicle.list_data_export(page_size=10)
print(f"Found {len(recent_exports.get('dataExports', []))} recent exports")
# Print details of recent exports
for item in recent_exports.get("dataExports", []):
    item_id = item["name"].split("/")[-1]
    if "dataExportStatus" in item:
        status = item["dataExportStatus"]["stage"]
    else:
        status = item["data_export_status"]["stage"]
    print(f"Export ID: {item_id}, Status: {status}")
# Check export status
status = chronicle.get_data_export(export_id)
# Update an export that is in IN_QUEUE state
if status.get("dataExportStatus", {}).get("stage") == "IN_QUEUE":
    # Update with a new start time
    updated_start = start_time + timedelta(hours=2)
    update_result = chronicle.update_data_export(
        data_export_id=export_id,
        start_time=updated_start,
        # Optionally update other parameters like end_time, gcs_bucket, or log_types
    )
    print("Export updated successfully")

# Cancel an export if needed
if status.get("dataExportStatus", {}).get("stage") in ["IN_QUEUE", "PROCESSING"]:
    cancelled = chronicle.cancel_data_export(export_id)
    print(f"Export has been cancelled. New status: {cancelled['data_export_status']['stage']}")
# Export all log types at once
export_all = chronicle.create_data_export(
    gcs_bucket="projects/my-project/buckets/my-export-bucket",
    start_time=start_time,
    end_time=end_time,
    export_all_logs=True
)
print(f"Created export for all logs. Status: {export_all['data_export_status']['stage']}")
```
The Data Export API supports:
- Exporting one, multiple, or all log types to Google Cloud Storage
- Listing recent exports and filtering results
- Checking export status and progress
- Updating exports that are in the queue
- Cancelling exports in progress
- Fetching available log types for a specific time range
If you encounter any issues with the Data Export functionality, please submit them to our issue tracker with detailed information about the problem and steps to reproduce.
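Because exports move through the `IN_QUEUE` and `PROCESSING` stages shown above, callers typically poll until the export leaves those stages. Below is a minimal polling sketch; `get_status` stands in for a call such as `lambda: chronicle.get_data_export(export_id)`, and any stage other than the two in-progress ones is treated as terminal (the exact terminal stage names are not assumed here):

```python
import time

def wait_for_export(get_status, poll_seconds=30, max_polls=120, sleep=time.sleep):
    """Poll an export's status until it leaves the queue/processing stages.

    get_status: zero-argument callable returning the export dict,
                e.g. lambda: chronicle.get_data_export(export_id)
    Returns the final export dict, or raises TimeoutError.
    """
    in_progress = {"IN_QUEUE", "PROCESSING"}  # stages shown in the examples above
    for _ in range(max_polls):
        export = get_status()
        stage = export.get("dataExportStatus", {}).get("stage", "")
        if stage not in in_progress:
            return export
        sleep(poll_seconds)
    raise TimeoutError("export did not reach a terminal stage in time")
```

The injectable `sleep` parameter keeps the helper testable; in production code the default `time.sleep` is used and `poll_seconds` should stay generous to avoid hammering the API.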
### Basic UDM Search
Search for network connection events:
```python
from datetime import datetime, timedelta, timezone
# Set time range for queries
end_time = datetime.now(timezone.utc)
start_time = end_time - timedelta(hours=24) # Last 24 hours
# Perform UDM search
results = chronicle.search_udm(
    query="""
    metadata.event_type = "NETWORK_CONNECTION"
    ip != ""
    """,
    start_time=start_time,
    end_time=end_time,
    max_events=5
)

# Example response:
{
    "events": [
        {
            "name": "projects/my-project/locations/us/instances/my-instance/events/encoded-event-id",
            "udm": {
                "metadata": {
                    "eventTimestamp": "2024-02-09T10:30:00Z",
                    "eventType": "NETWORK_CONNECTION"
                },
                "target": {
                    "ip": ["192.168.1.100"],
                    "port": 443
                },
                "principal": {
                    "hostname": "workstation-1"
                }
            }
        }
    ],
    "total_events": 1,
    "more_data_available": false
}
```
### UDM Search View
Retrieve UDM search results with additional contextual information, including detection data:
```python
from datetime import datetime, timedelta, timezone
# Set time range for queries
end_time = datetime.now(timezone.utc)
start_time = end_time - timedelta(hours=24) # Last 24 hours
# Fetch UDM search view results
results = chronicle.fetch_udm_search_view(
    query='metadata.event_type = "NETWORK_CONNECTION"',
    start_time=start_time,
    end_time=end_time,
    max_events=5,          # Limit to 5 events
    max_detections=10,     # Get up to 10 detections
    snapshot_query='feedback_summary.status = "OPEN"',  # Filter for open alerts
    case_insensitive=True  # Case-insensitive search
)
```
> **Note:** The `fetch_udm_search_view` method is synchronous and returns all results in a single response rather than as a stream, because the underlying `legacyFetchUDMSearchView` endpoint responds synchronously.
### Fetch UDM Field Values
Search for ingested UDM field values that match a query:
```python
# Search for fields containing "source"
results = chronicle.find_udm_field_values(
    query="source",
    page_size=10
)

# Example response:
{
    "valueMatches": [
        {
            "fieldPath": "metadata.ingestion_labels.key",
            "value": "source",
            "ingestionTime": "2025-08-18T08:00:11.670673Z",
            "matchEnd": 6
        },
        {
            "fieldPath": "additional.fields.key",
            "value": "source",
            "ingestionTime": "2025-02-18T19:45:01.811426Z",
            "matchEnd": 6
        }
    ],
    "fieldMatches": [
        {
            "fieldPath": "about.labels.value"
        },
        {
            "fieldPath": "additional.fields.value.string_value"
        }
    ],
    "fieldMatchRegex": "source"
}
```
### Statistics Queries
Get statistics about network connections grouped by hostname:
```python
stats = chronicle.get_stats(
    query="""metadata.event_type = "NETWORK_CONNECTION"
match:
    target.hostname
outcome:
    $count = count(metadata.id)
order:
    $count desc""",
    start_time=start_time,
    end_time=end_time,
    max_events=1000,
    max_values=10,
    timeout=180
)

# Example response:
{
    "columns": ["hostname", "count"],
    "rows": [
        {"hostname": "server-1", "count": 1500},
        {"hostname": "server-2", "count": 1200}
    ],
    "total_rows": 2
}
```
### CSV Export
Export specific fields to CSV format:
```python
csv_data = chronicle.fetch_udm_search_csv(
    query='metadata.event_type = "NETWORK_CONNECTION"',
    start_time=start_time,
    end_time=end_time,
    fields=["timestamp", "user", "hostname", "process name"]
)
# Example response:
"""
metadata.eventTimestamp,principal.hostname,target.ip,target.port
2024-02-09T10:30:00Z,workstation-1,192.168.1.100,443
2024-02-09T10:31:00Z,workstation-2,192.168.1.101,80
"""
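The return value is a plain CSV string, so it can be handed straight to the standard library's `csv` module for downstream processing. The sketch below reuses the sample response shown above as a literal string:

```python
import csv
import io

# Sample CSV text in the shape returned by fetch_udm_search_csv
csv_data = """metadata.eventTimestamp,principal.hostname,target.ip,target.port
2024-02-09T10:30:00Z,workstation-1,192.168.1.100,443
2024-02-09T10:31:00Z,workstation-2,192.168.1.101,80
"""

# DictReader keys each row by the header columns
rows = list(csv.DictReader(io.StringIO(csv_data)))
for row in rows:
    print(f"{row['principal.hostname']} -> {row['target.ip']}:{row['target.port']}")
```

Note that all parsed values are strings; convert ports or timestamps explicitly if you need typed data.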
```
### Query Validation
Validate a UDM query before execution:
```python
query = 'target.ip != "" and principal.hostname = "test-host"'
validation = chronicle.validate_query(query)
# Example response:
{
    "isValid": true,
    "queryType": "QUERY_TYPE_UDM_QUERY",
    "suggestedFields": [
        "target.ip",
        "principal.hostname"
    ]
}
```
### Natural Language Search
Search for events using natural language instead of UDM query syntax:
```python
from datetime import datetime, timedelta, timezone
# Set time range for queries
end_time = datetime.now(timezone.utc)
start_time = end_time - timedelta(hours=24) # Last 24 hours
# Option 1: Translate natural language to UDM query
udm_query = chronicle.translate_nl_to_udm("show me network connections")
print(f"Translated query: {udm_query}")
# Example output: 'metadata.event_type="NETWORK_CONNECTION"'
# Then run the query manually if needed
results = chronicle.search_udm(
    query=udm_query,
    start_time=start_time,
    end_time=end_time
)

# Option 2: Perform a complete search with natural language
results = chronicle.nl_search(
    text="show me failed login attempts",
    start_time=start_time,
    end_time=end_time,
    max_events=100
)

# Example response (same format as search_udm):
{
    "events": [
        {
            "event": {
                "metadata": {
                    "eventTimestamp": "2024-02-09T10:30:00Z",
                    "eventType": "USER_LOGIN"
                },
                "principal": {
                    "user": {
                        "userid": "jdoe"
                    }
                },
                "securityResult": {
                    "action": "BLOCK",
                    "summary": "Failed login attempt"
                }
            }
        }
    ],
    "total_events": 1
}
```
The natural language search feature supports various query patterns:
- "Show me network connections"
- "Find suspicious processes"
- "Show login failures in the last hour"
- "Display connections to IP address 192.168.1.100"
If the natural language cannot be translated to a valid UDM query, an `APIError` will be raised with a message indicating that no valid query could be generated.
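When wiring natural-language search into automation, it is worth catching that error and falling back to a hand-written query. The sketch below shows the pattern with a stub translator standing in for `chronicle.translate_nl_to_udm`, and a locally defined `APIError` standing in for the SDK's exception class, so the snippet is self-contained (import the real exception from the secops package in actual code):

```python
class APIError(Exception):
    """Stand-in for the SDK's APIError; use the real class in production code."""

def translate_nl_to_udm(text):
    # Stub translator: understands one phrase and fails otherwise,
    # mimicking a translation that cannot produce a valid UDM query.
    if "network connections" in text:
        return 'metadata.event_type="NETWORK_CONNECTION"'
    raise APIError("no valid query could be generated")

def build_query(text, fallback='metadata.event_type != ""'):
    """Translate natural language, falling back to a broad query on failure."""
    try:
        return translate_nl_to_udm(text)
    except APIError:
        return fallback

print(build_query("show me network connections"))
print(build_query("gibberish input"))  # falls back
```

In real code the fallback might instead prompt the user to rephrase, rather than silently broadening the search.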
### Entity Summary
Get detailed information about specific entities like IP addresses, domains, or file hashes. The function automatically detects the entity type based on the provided value and fetches a comprehensive summary including related entities, alerts, timeline, prevalence, and more.
```python
# IP address summary
ip_summary = chronicle.summarize_entity(
    value="8.8.8.8",
    start_time=start_time,
    end_time=end_time
)

# Domain summary
domain_summary = chronicle.summarize_entity(
    value="google.com",
    start_time=start_time,
    end_time=end_time
)

# File hash summary (SHA256)
file_hash = "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855"
file_summary = chronicle.summarize_entity(
    value=file_hash,
    start_time=start_time,
    end_time=end_time
)

# Optionally hint the preferred type if auto-detection might be ambiguous
user_summary = chronicle.summarize_entity(
    value="jdoe",
    start_time=start_time,
    end_time=end_time,
    preferred_entity_type="USER"
)
# Example response structure (EntitySummary object):
# Access attributes like: ip_summary.primary_entity, ip_summary.related_entities,
# ip_summary.alert_counts, ip_summary.timeline, ip_summary.prevalence, etc.
# Example fields within the EntitySummary object:
# primary_entity: {
#     "name": "entities/...",
#     "metadata": {
#         "entityType": "ASSET",  # Or FILE, DOMAIN_NAME, USER, etc.
#         "interval": { "startTime": "...", "endTime": "..." }
#     },
#     "metric": { "firstSeen": "...", "lastSeen": "..." },
#     "entity": {  # Contains specific details like 'asset', 'file', 'domain'
#         "asset": { "ip": ["8.8.8.8"] }
#     }
# }
# related_entities: [ { ... similar to primary_entity ... } ]
# alert_counts: [ { "rule": "Rule Name", "count": 5 } ]
# timeline: { "buckets": [ { "alertCount": 1, "eventCount": 10 } ], "bucketSize": "3600s" }
# prevalence: [ { "prevalenceTime": "...", "count": 100 } ]
# file_metadata_and_properties: {  # Only for FILE entities
#     "metadata": [ { "key": "...", "value": "..." } ],
#     "properties": [ { "title": "...", "properties": [ { "key": "...", "value": "..." } ] } ]
# }
```
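Since some sections of a summary (timeline, alert counts, file properties) may be absent for a given entity, downstream code usually guards its attribute access. A small sketch of that defensive pattern, using a plain namespace object in place of a live `EntitySummary` (attribute names follow the comments above; the data is invented):

```python
from types import SimpleNamespace

# Stand-in for an EntitySummary with some sections absent
summary = SimpleNamespace(
    primary_entity={"metadata": {"entityType": "ASSET"}},
    alert_counts=[{"rule": "Suspicious Login", "count": 5}],
    timeline=None,  # e.g. no timeline data returned for this entity
)

# Guard each section with `or {}` / `or []` so missing data degrades gracefully
entity_type = (summary.primary_entity or {}).get("metadata", {}).get("entityType", "UNKNOWN")
total_alerts = sum(a.get("count", 0) for a in (summary.alert_counts or []))
buckets = (summary.timeline or {}).get("buckets", [])

print(f"Type: {entity_type}, alerts: {total_alerts}, timeline buckets: {len(buckets)}")
```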
### List IoCs (Indicators of Compromise)
Retrieve IoC matches against ingested events:
```python
iocs = chronicle.list_iocs(
    start_time=start_time,
    end_time=end_time,
    max_matches=1000,
    add_mandiant_attributes=True,
    prioritized_only=False
)

# Process the results
for ioc in iocs['matches']:
    ioc_type = next(iter(ioc['artifactIndicator'].keys()))
    ioc_value = next(iter(ioc['artifactIndicator'].values()))
    print(f"IoC Type: {ioc_type}, Value: {ioc_value}")
    print(f"Sources: {', '.join(ioc['sources'])}")
```
The IoC response includes:
- The indicator itself (domain, IP, hash, etc.)
- Sources and categories
- Affected assets in | text/markdown | null | Google SecOps Team <chronicle@google.com> | null | null | null | chronicle, google, secops, security | [
"Development Status :: 3 - Alpha",
"Intended Audience :: Developers",
"License :: OSI Approved :: Apache Software License",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programming ... | [] | null | null | >=3.10 | [] | [] | [] | [
"google-api-python-client>=2.0.0",
"google-auth-httplib2>=0.1.0",
"google-auth>=2.0.0",
"sphinx-rtd-theme>=1.0.0; extra == \"docs\"",
"sphinx>=4.0.0; extra == \"docs\"",
"pytest-cov>=3.0.0; extra == \"test\"",
"pytest>=7.0.0; extra == \"test\"",
"python-dotenv>=0.17.1; extra == \"test\"",
"tox>=3.24... | [] | [] | [] | [
"Homepage, https://github.com/google/secops-wrapper",
"Documentation, https://github.com/google/secops-wrapper#readme",
"Repository, https://github.com/google/secops-wrapper.git",
"Issues, https://github.com/google/secops-wrapper/issues"
] | twine/6.2.0 CPython/3.11.14 | 2026-02-18T09:31:38.144509 | secops-0.35.0.tar.gz | 435,701 | 6d/32/2bff14942079182e6645a17582809d76f31a963061f00bb70a75a7038f62/secops-0.35.0.tar.gz | source | sdist | null | false | ed02df6501e1815d2051b52b784ccd82 | d241c20f84dd68b2b60c4191137d60e5ed90fa512bbe9925ee9136c869680bb8 | 6d322bff14942079182e6645a17582809d76f31a963061f00bb70a75a7038f62 | Apache-2.0 | [
"LICENSE"
] | 958 |
2.4 | anthropic-haystack | 5.3.0 | An integration of Anthropic Claude models into the Haystack framework. | # anthropic-haystack
[](https://pypi.org/project/anthropic-haystack)
[](https://pypi.org/project/anthropic-haystack)
- [Integration page](https://haystack.deepset.ai/integrations/anthropic)
- [Changelog](https://github.com/deepset-ai/haystack-core-integrations/blob/main/integrations/anthropic/CHANGELOG.md)
---
## Contributing
Refer to the general [Contribution Guidelines](https://github.com/deepset-ai/haystack-core-integrations/blob/main/CONTRIBUTING.md).
To run integration tests locally, you need to export the `ANTHROPIC_API_KEY` environment variable. | text/markdown | null | deepset GmbH <info@deepset.ai> | null | null | null | null | [
"Development Status :: 4 - Beta",
"License :: OSI Approved :: Apache Software License",
"Programming Language :: Python",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: 3.13",
"Programming... | [] | null | null | >=3.10 | [] | [] | [] | [
"anthropic>=0.47.0",
"haystack-ai>=2.23.0"
] | [] | [] | [] | [
"Documentation, https://github.com/deepset-ai/haystack-core-integrations/tree/main/integrations/anthropic#readme",
"Issues, https://github.com/deepset-ai/haystack-core-integrations/issues",
"Source, https://github.com/deepset-ai/haystack-core-integrations/tree/main/integrations/anthropic"
] | Hatch/1.16.3 cpython/3.12.12 HTTPX/0.28.1 | 2026-02-18T09:31:28.007746 | anthropic_haystack-5.3.0-py3-none-any.whl | 22,731 | ff/59/2355c891ff79c75bd94c889424b3642da4d57a3c59b0f138802903dc7a20/anthropic_haystack-5.3.0-py3-none-any.whl | py3 | bdist_wheel | null | false | 623538c84da2c942f1ad5604c8aa7835 | b353bc8328c2250d547dd697dec0e0cd18c03d7b94aaf415e91d0766dc7e3919 | ff592355c891ff79c75bd94c889424b3642da4d57a3c59b0f138802903dc7a20 | Apache-2.0 | [
"LICENSE.txt"
] | 788 |
2.3 | algomancy-gui | 0.4.3 | Dash components for the Algomancy library | ### algomancy-gui
UI components and configuration utilities for Algomancy dashboards built on Plotly Dash and Dash Bootstrap Components.
#### Features
- `StylingConfigurator` to control layout, theme colors, button styles, and card highlighting
- Page modules (home, data, scenario, compare, overview, admin) with helpers for building common UI
- Utilities such as `SettingsManager` and data‑page filename matching tools
#### Installation
```
pip install -e packages/algomancy-gui
```
Requires Python >= 3.14. Dependencies: `dash`, `dash_bootstrap_components`.
#### Quick start: configure styling
```python
from algomancy_gui.stylingconfigurator import (
StylingConfigurator,
LayoutSelection,
ColorConfiguration,
CardHighlightMode,
ButtonColorMode,
)
styling = StylingConfigurator(
layout_selection=LayoutSelection.SIDEBAR,
color_configuration=ColorConfiguration(
background_color="#FFFFFF",
theme_color_primary="#1F271B",
theme_color_secondary="#6DA34D",
theme_color_tertiary="#FEFAE0",
text_color="#424242",
text_color_highlight="#6DA34D",
text_color_selected="#FFFFFF",
button_color_mode=ButtonColorMode.UNIFIED,
button_colors={
"unified_color": "#6DA34D",
"unified_hover": "#8FBE74",
},
),
logo_path="CQM-logo-white.png",
button_path="cqm-button-white.png",
card_highlight_mode=CardHighlightMode.SUBTLE_DARK,
)
```
Use this `styling` in your `AppConfiguration` (see `example/main.py`).
#### Related docs and examples
- Example application wiring: `example/main.py`
- Root documentation: UI is referenced throughout `documentation/3_dash_contents.md`
| text/markdown | Pepijn Wissing | Pepijn Wissing <Wsg@cqm.nl> | null | null | null | null | [] | [] | null | null | >=3.14 | [] | [] | [] | [
"dash",
"dash-bootstrap-components",
"dash-auth>=2.3.0",
"algomancy-data",
"algomancy-scenario",
"algomancy-content",
"algomancy-utils",
"waitress>=3.0.2",
"strenum>=0.4.15"
] | [] | [] | [] | [] | uv/0.10.4 {"installer":{"name":"uv","version":"0.10.4","subcommand":["publish"]},"python":null,"implementation":{"name":null,"version":null},"distro":{"name":"Ubuntu","version":"24.04","id":"noble","libc":null},"system":{"name":null,"release":null},"cpu":null,"openssl_version":null,"setuptools_version":null,"rustc_version":null,"ci":true} | 2026-02-18T09:30:42.738763 | algomancy_gui-0.4.3-py3-none-any.whl | 71,042 | 3f/f6/c8c2148c04645b076efc80f41dfb94c99d81672302e44522c5a3ab36bf43/algomancy_gui-0.4.3-py3-none-any.whl | py3 | bdist_wheel | null | false | a1626713774f38f7a14fb0929f615619 | bbb731674dce25235a372162c56d1e6135610f906ad47afe908b115e45d58554 | 3ff6c8c2148c04645b076efc80f41dfb94c99d81672302e44522c5a3ab36bf43 | null | [] | 119 |
2.3 | algomancy-data | 0.4.3 | Data management model for the Algomancy library | ### algomancy-data
Data layer for Algomancy dashboards: schemas, extract/transform/load (ETL) primitives, validators, and data containers used by the GUI and scenario packages.
#### Features
- `DataSource` and `BaseDataSource` containers with table management and JSON (de)serialization
- Pluggable ETL pipeline building blocks: `Extractor`, `Transformer`, `Validator`, `Loader`
- `DataManager` orchestrators (stateful/stateless) to drive ETL and manage datasets
- Declarative `InputFileConfiguration` for file inputs (CSV, XLSX, JSON)
#### Installation
```
pip install -e packages/algomancy-data
```
Requires Python >= 3.14. Core dependency: `pandas`.
#### Quick start: use `DataSource` directly
```python
import pandas as pd
from algomancy_data import DataSource, DataClassification
ds = DataSource(ds_type=DataClassification.MASTER_DATA, name="warehouse")
ds.add_table("inventory", pd.DataFrame({"sku": ["A", "B"], "qty": [10, 5]}))
# JSON roundtrip
json_str = ds.to_json()
ds2 = DataSource.from_json(json_str)
assert ds2.get_table("inventory").equals(ds.get_table("inventory"))
```
#### Quick start: orchestrate ETL with a `DataManager`
`DataManager` wires `Extractor` → `Transformer` → `Validator` → `Loader`. You provide an `ETLFactory` that builds these parts for each input configuration.
```python
from typing import List
from algomancy_data import (
DataSource, DataClassification,
DataManager, StatelessDataManager, ETLFactory,
SingleInputFileConfiguration, FileExtension
)
class MyETLFactory(ETLFactory):
# Implement factory methods to build Extractor/Transformer/Validator/Loader
...
input_cfgs: List[SingleInputFileConfiguration] = [
SingleInputFileConfiguration(
tag="inventory", file_name="inventory", extension=FileExtension.CSV
)
]
dm: DataManager = StatelessDataManager(
etl_factory=MyETLFactory,
input_configs=input_cfgs,
save_type="json", # or other configured type
data_object_type=DataSource,
)
files = dm.prepare_files(file_items_with_path=[("inventory", "./data/inventory.csv")])
ds: DataSource = dm.etl_data(files=files, dataset_name="warehouse")
```
#### Documentation and examples
- Root docs: `documentation/1_data.md`
- End‑to‑end usage in the example app: `example/` (see `example/data_handling` and `example/main.py`)
| text/markdown | Pepijn Wissing | Pepijn Wissing <Wsg@cqm.nl> | null | null | null | null | [] | [] | null | null | >=3.14 | [] | [] | [] | [
"pandas",
"openpyxl>=3.1.5",
"algomancy-utils",
"strenum>=0.4.15"
] | [] | [] | [] | [] | uv/0.10.4 {"installer":{"name":"uv","version":"0.10.4","subcommand":["publish"]},"python":null,"implementation":{"name":null,"version":null},"distro":{"name":"Ubuntu","version":"24.04","id":"noble","libc":null},"system":{"name":null,"release":null},"cpu":null,"openssl_version":null,"setuptools_version":null,"rustc_version":null,"ci":true} | 2026-02-18T09:30:41.866491 | algomancy_data-0.4.3-py3-none-any.whl | 25,271 | cc/98/f2bc09111e482cd4f832e5fac1bdd48253995f91603f93d1766865dfc364/algomancy_data-0.4.3-py3-none-any.whl | py3 | bdist_wheel | null | false | 4f9ecc0943590cffb866fefcacafde70 | 8b14c2822ca4865f47519272c4cf821de453ee6c03b91c322ad150dec56b1d58 | cc98f2bc09111e482cd4f832e5fac1bdd48253995f91603f93d1766865dfc364 | null | [] | 121 |
2.3 | algomancy-content | 0.4.3 | Placeholder content for quick start to the Algomancy library | ### algomancy-content
Reusable content blocks (pages and callbacks) for Algomancy dashboards built with Dash. This package provides ready‑made creators for home, data, and overview pages, plus placeholder content to get you started quickly.
#### Features
- Standard page content creators (home, data, overview)
- Placeholder content for instant scaffolding
- Pairs naturally with `algomancy-gui` styling and the core launcher under `src/algomancy`
#### Installation
```
pip install -e packages/algomancy-content
```
Requires Python >= 3.14.
#### Quick start
```python
from algomancy_content import ShowcaseHomePage
# Plug into your AppConfiguration (see project root `example/main.py`)
home_content = ShowcaseHomePage.create_default_elements_showcase
home_callbacks = ShowcaseHomePage.register_callbacks
```
Use the prebuilt standard home page:
```python
from algomancy_content.pages.standardhomepage import StandardHomePage
home_content = StandardHomePage.create_content
home_callbacks = StandardHomePage.register_callbacks
```
#### Related docs and examples
- Example application: `example/main.py`
- Root documentation: `documentation/3_dash_contents.md`
| text/markdown | Pepijn Wissing | Pepijn Wissing <Wsg@cqm.nl> | null | null | null | null | [] | [] | null | null | >=3.14 | [] | [] | [] | [
"algomancy-data",
"algomancy-scenario",
"dash",
"dash-bootstrap-components",
"dash-daq",
"pandas"
] | [] | [] | [] | [] | uv/0.10.4 {"installer":{"name":"uv","version":"0.10.4","subcommand":["publish"]},"python":null,"implementation":{"name":null,"version":null},"distro":{"name":"Ubuntu","version":"24.04","id":"noble","libc":null},"system":{"name":null,"release":null},"cpu":null,"openssl_version":null,"setuptools_version":null,"rustc_version":null,"ci":true} | 2026-02-18T09:30:40.585908 | algomancy_content-0.4.3-py3-none-any.whl | 15,001 | 9b/63/855dfb53fdc709bceb82f3f12ca399197fb82e3cd2e54b22ff2a40df4713/algomancy_content-0.4.3-py3-none-any.whl | py3 | bdist_wheel | null | false | 75b8f04a68eb5e25b69f922f4d237045 | b78c40371502a6fadeaf5293b666d3287f10879b67f74a1b95273dfbfae1835a | 9b63855dfb53fdc709bceb82f3f12ca399197fb82e3cd2e54b22ff2a40df4713 | null | [] | 119 |
2.3 | algomancy-scenario | 0.4.3 | Scenario management model for the Algomancy library | ### algomancy-scenario
Scenario modeling utilities for Algomancy: define algorithms and parameters, run scenarios against data, and compute KPIs.
#### Features
- `Scenario` lifecycle with statuses (`CREATED`, `QUEUED`, `PROCESSING`, `COMPLETE`, `FAILED`)
- `BaseAlgorithm` and parameter classes to define pluggable algorithms
- KPI framework (`BaseKPI`) to compute metrics from algorithm results
- Works with `algomancy-data` data sources and can be orchestrated from the GUI
#### Installation
```
pip install -e packages/algomancy-scenario
```
Requires Python >= 3.14.
#### Quick start
Define a simple algorithm and KPI, then run a `Scenario`:
```python
from algomancy_scenario import (
    Scenario, ScenarioStatus,
    BaseAlgorithm, BaseParameterSet, BaseKPI,
)
from algomancy_data import DataSource, DataClassification

# Minimal parameters type
class ExampleParams(BaseParameterSet):
    def serialize(self) -> dict:
        return {"hello": "world"}

# Minimal algorithm
class ExampleAlgorithm(BaseAlgorithm):
    def __init__(self):
        super().__init__(name="Example", params=ExampleParams())

    @staticmethod
    def initialize_parameters() -> ExampleParams:  # used by GUI tooling
        return ExampleParams()

    def run(self, data: DataSource) -> dict:
        # do something with data and return a result dictionary
        self.set_progress(100)
        return {"count_tables": len(data.list_tables())}

# Minimal KPI
class CountTablesKPI(BaseKPI):
    def __init__(self):
        super().__init__(name="Tables", improvement_direction=None)

    def compute_and_check(self, result: dict):
        self.value = result["count_tables"]

# Prepare data
ds = DataSource(ds_type=DataClassification.MASTER_DATA, name="warehouse")

# Build and run scenario
scenario = Scenario(
    tag="demo",
    input_data=ds,
    kpis={"tables": CountTablesKPI()},
    algorithm=ExampleAlgorithm(),
)
scenario.process()
assert scenario.status == ScenarioStatus.COMPLETE
print("Tables KPI:", scenario.kpis["tables"].value)
```
#### Related docs and examples
- Example app demonstrates scenario wiring: `example/pages/ScenarioPageContent.py`
- Algorithm/KPI examples: `example/templates/algorithm/` and `example/templates/kpi/`
| text/markdown | Pepijn Wissing | Pepijn Wissing <Wsg@cqm.nl> | null | null | null | null | [] | [] | null | null | >=3.14 | [] | [] | [] | [
"algomancy-utils",
"algomancy-data",
"strenum>=0.4.15"
] | [] | [] | [] | [] | uv/0.10.4 {"installer":{"name":"uv","version":"0.10.4","subcommand":["publish"]},"python":null,"implementation":{"name":null,"version":null},"distro":{"name":"Ubuntu","version":"24.04","id":"noble","libc":null},"system":{"name":null,"release":null},"cpu":null,"openssl_version":null,"setuptools_version":null,"rustc_version":null,"ci":true} | 2026-02-18T09:30:38.489341 | algomancy_scenario-0.4.3-py3-none-any.whl | 17,535 | 38/02/e1abe70bba6dac1b08d5df5a89d5985e823d1e286b54e4091c267bfee657/algomancy_scenario-0.4.3-py3-none-any.whl | py3 | bdist_wheel | null | false | ed355c4935bd38c115e7e1c6a9557dfa | 40d33a3e5361ca35183bc40184484bdc21b38c936e653e34e395203145716bdf | 3802e1abe70bba6dac1b08d5df5a89d5985e823d1e286b54e4091c267bfee657 | null | [] | 120 |
2.3 | algomancy-cli | 0.4.3 | CLI shell for Algomancy to exercise backend functionality without the GUI. | # Algomancy CLI
Interactive terminal shell to exercise Algomancy backend functionality without the GUI. Useful for rapid development of ETL, algorithms, and scenarios.
## Install / Run
```
uv run algomancy-cli --example
```
Or point to your own configuration factory:
```
uv run algomancy-cli --config-callback myproject.config:make_config
```
Where `make_config` returns an `AppConfiguration` instance.
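The `module:attribute` form is a common Python convention for such flags; resolving it boils down to an import plus `getattr`. A sketch under the assumption that algomancy-cli follows this pattern (its actual parsing may differ):

```python
import importlib

def resolve_callback(spec: str):
    """Resolve a 'module:attribute' spec (e.g. 'myproject.config:make_config').

    Hypothetical helper for illustration -- not algomancy-cli's own code.
    """
    module_name, _, attr_name = spec.partition(":")
    module = importlib.import_module(module_name)
    return getattr(module, attr_name)

# Demo with a stdlib target instead of a real project config factory:
dumps = resolve_callback("json:dumps")
print(dumps({"ok": True}))  # → {"ok": true}
```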
## Commands
- `help` — show help
- `list-data` / `ld` — list datasets
- `load-data <name>` — load example data into dataset `<name>`
- `etl-data <name>` — run ETL to create dataset `<name>`
- `list-scenarios` / `ls` — list scenarios
- `create-scenario <tag> <dataset_key> <algo_name> [json_params]` — create scenario
- `run <scenario_id_or_tag>` — run scenario and wait for completion
- `status` — show processing status
- `quit` / `exit` — exit shell
Parameters for `create-scenario` can be provided as a JSON object, e.g.:
```
create-scenario test1 "Master data" Fast "{\"duration\": 0.5}"
```
## How it works
The CLI wraps `ScenarioManager` created via `AppConfiguration` just like the GUI launcher. See `src/algomancy/cli_launcher.py` for details.
| text/markdown | null | null | null | null | null | null | [] | [] | null | null | >=3.14 | [] | [] | [] | [
"algomancy-utils",
"algomancy-scenario",
"algomancy-data"
] | [] | [] | [] | [] | uv/0.10.4 {"installer":{"name":"uv","version":"0.10.4","subcommand":["publish"]},"python":null,"implementation":{"name":null,"version":null},"distro":{"name":"Ubuntu","version":"24.04","id":"noble","libc":null},"system":{"name":null,"release":null},"cpu":null,"openssl_version":null,"setuptools_version":null,"rustc_version":null,"ci":true} | 2026-02-18T09:30:37.690492 | algomancy_cli-0.4.3-py3-none-any.whl | 6,486 | 88/4e/e29bd93e48fdf212fb587acaf452a9ac7bc69b6e334e1ba9e6601646284d/algomancy_cli-0.4.3-py3-none-any.whl | py3 | bdist_wheel | null | false | d112b4da95814133ed28b030ef62371e | 89cdd8dfd875bd3e2690412f5989b0dde3a7d170a3629d978bdff8bc0e687a83 | 884ee29bd93e48fdf212fb587acaf452a9ac7bc69b6e334e1ba9e6601646284d | null | [] | 117 |
2.4 | Algomancy | 0.4.3 | A dashboarding framework for visualizing performances of algorithms or simulations in various scenarios. | # Algomancy
Algomancy is a lightweight framework for building interactive dashboards that visualize the performance of algorithms and/or simulations across scenarios. It brings together ETL, scenario orchestration, KPI computation, and a Dash-based UI with modular pages.
## Highlights
- Python 3.14+
- Dash UI with modular pages and a production-ready server
- Batteries-included packages: content, data, scenario, GUI, CLI
## Installation
- Using uv (recommended):
```
uv add algomancy
```
- Using pip:
```
pip install algomancy
```
## Minimal example
The following example launches a small placeholder dashboard using the default building blocks from the Algomancy ecosystem. Copy this into a file called `main.py` and run it.
## Set up folder structure
1. Create the following directory structure:
```text
root/
├── assets/ (*)
├── data/ (*)
├── src/
│   ├── data_handling/
│   ├── pages/
│   └── templates/
│       ├── kpi/
│       └── algorithm/
├── main.py (*)
├── README.md
└── pyproject.toml
```
> Only the items marked (*) are required.
2. Create `main.py`:
```python
from algomancy_gui.gui_launcher import GuiLauncher
from algomancy_gui.appconfiguration import AppConfiguration
from algomancy_content import (
    PlaceholderETLFactory,
    PlaceholderAlgorithm,
    PlaceholderKPI,
    PlaceholderSchema,
)
from algomancy_data import DataSource

def main() -> None:
    host = "127.0.0.1"
    port = 8050
    app_cfg = AppConfiguration(
        etl_factory=PlaceholderETLFactory,
        kpi_templates={"placeholder": PlaceholderKPI},
        algo_templates={"placeholder": PlaceholderAlgorithm},
        schemas=[PlaceholderSchema()],
        host=host,
        port=port,
        title="My Algomancy Dashboard",
    )
    app = GuiLauncher.build(app_cfg)
    GuiLauncher.run(app=app, host=app_cfg.host, port=app_cfg.port)

if __name__ == "__main__":
    main()
```
## Run
- Save the file as `main.py` and start the app:
```
uv run main.py
```
- Open your browser at http://127.0.0.1:8050
## Examples
- A more complete example (including assets and templates) is available in the algomancy repository under `example/`. The entry point is `example/main.py`.
## Requirements
- Python 3.14+
- Windows, macOS, or Linux
## CLI
- This package also exposes a CLI entry point `algomancy-cli`. Run `algomancy-cli --help` for usage.
## License
- See the `LICENSE` file included with this distribution.
## Changelog
- See `changelog.md` for notable changes. | text/markdown | Pepijn Wissing | Pepijn Wissing <pepijn.wissing@cqm.nl> | Pepijn Wissing, Bart Post | Pepijn Wissing <pepijn.wissing@cqm.nl>, Bart Post <bart.post@cqm.nl> | null | visualization, algorithm, simulation, scenario | [
"Development Status :: 4 - Beta",
"Programming Language :: Python"
] | [] | null | null | >=3.14 | [] | [] | [] | [
"algomancy-cli>=0.4.3",
"algomancy-content>=0.4.3",
"algomancy-data>=0.4.3",
"algomancy-gui>=0.4.3",
"algomancy-scenario>=0.4.3",
"algomancy-utils>=0.4.3",
"furo>=2025.12.19",
"myst-parser>=5.0.0",
"pydata-sphinx-theme>=0.16.1",
"sphinx>=9.1.0",
"sphinx-autobuild>=2025.8.25",
"sphinx-autodoc2>... | [] | [] | [] | [
"Documentation, https://algomancy.readthedocs.io/en/latest/",
"Repository, https://github.com/PepijnWissing/algomancy"
] | uv/0.10.4 {"installer":{"name":"uv","version":"0.10.4","subcommand":["publish"]},"python":null,"implementation":{"name":null,"version":null},"distro":{"name":"Ubuntu","version":"24.04","id":"noble","libc":null},"system":{"name":null,"release":null},"cpu":null,"openssl_version":null,"setuptools_version":null,"rustc_version":null,"ci":true} | 2026-02-18T09:30:35.841994 | algomancy-0.4.3-py3-none-any.whl | 4,296 | 03/6e/07b8c204cc584e0d4fab56f8ef6606f939753e30d00a786471f0eb60c03d/algomancy-0.4.3-py3-none-any.whl | py3 | bdist_wheel | null | false | f59873f1a341742c48a909fc10118256 | e3385e19315b1b4f9a434ec6c00f0c163a8a4b9c9901d449e0fd35f846eb8ef1 | 036e07b8c204cc584e0d4fab56f8ef6606f939753e30d00a786471f0eb60c03d | null | [
"LICENSE"
] | 0 |
2.4 | licenselynx | 2.1.0 | Deterministically map license strings to its canonical identifier | # LicenseLynx for Python
To use LicenseLynx in Python, you can call the ``map`` method from the ``LicenseLynx`` module to map a license name to its canonical form.
The return value is an object with the canonical name and the source of the license.
## Installation
To install the library, run following command:
```shell
pip install licenselynx
```
## Usage
```python
from licenselynx.licenselynx import LicenseLynx
# Map the license name
license_object = LicenseLynx.map("licenseName")
print(license_object.id)
print(license_object.src)
# Map the license name with risky mappings enabled
license_object = LicenseLynx.map("licenseName", risky=True)
```
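Conceptually, a deterministic mapping like this is a normalized lookup over a curated alias table. A toy sketch of the idea — the alias data and return type below are invented for the demo, not licenselynx's:

```python
# Toy alias table -- the real package ships a curated mapping dataset.
ALIASES = {
    "mit license": ("MIT", "spdx"),
    "mit": ("MIT", "spdx"),
    "apache 2.0": ("Apache-2.0", "spdx"),
}

def map_license(name: str):
    """Return (canonical_id, source) for a known alias, else None."""
    key = " ".join(name.lower().split())  # normalize case and whitespace
    return ALIASES.get(key)

print(map_license("  MIT   License "))  # → ('MIT', 'spdx')
```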
## License
This project is licensed under the [BSD 3-Clause "New" or "Revised" License](../LICENSE) (SPDX-License-Identifier: BSD-3-Clause).
Copyright (c) Siemens AG 2025 ALL RIGHTS RESERVED
| text/markdown | Leo Reinmann | leo.reinmann@siemens.com | null | null | null | null | [
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: 3.13",
"Programming Language :: Python :: 3.14"
] | [] | null | null | >=3.11 | [] | [] | [] | [] | [] | [] | [] | [] | twine/6.1.0 CPython/3.13.7 | 2026-02-18T09:30:34.170106 | licenselynx-2.1.0.tar.gz | 107,517 | 8d/2c/6bf70584456650b6c86fb9416b7d282538cdaa38114a00125205bb3b72ee/licenselynx-2.1.0.tar.gz | source | sdist | null | false | 6959dfde82db991206f24a3df991a2b2 | 0b05f8376e2e648d4aa4ae07ab14e90f04de13a19a2a089a87d762af02f34cd6 | 8d2c6bf70584456650b6c86fb9416b7d282538cdaa38114a00125205bb3b72ee | BSD-3-Clause | [
"LICENSE"
] | 561 |
2.4 | omer-bio | 0.1.0 | Ce package contient un module d'Analyse Promethee et des fonctions necessaires pour cette analyse | # Python package: omerbio
[](https://opensource.org/licenses/MIT)
<!-- [](https://www.metabohub.fr) -->
## Metadata
- authors: <etienne.jules@inrae.fr>
- creation date: `2025-08-19`
- main usage: This repo contains the code for the omerbio python package. A Promethee multi-criteria analysis weighted by PCA.
## Description
This package implements a multi-criteria analysis of data using the PROMETHEE method, with a weighting procedure based on principal component analysis (PCA).
## Features
The main functions are:
- `run_promethee_analysis`: Execute the multi-criteria analysis
- `generate_radar_plots`: Generates a pdf with radar plots of individuals on a set of criteria.
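For readers new to PROMETHEE: the method ranks alternatives by pairwise, criterion-wise preference comparisons aggregated into net outranking flows. A minimal NumPy sketch with the "usual" (0/1) preference function — illustrative only, not omerbio's API or its PCA-based weighting:

```python
import numpy as np

def promethee_net_flows(X, weights):
    """X: (n_alternatives, n_criteria) scores to maximize; weights sum to 1."""
    n = X.shape[0]
    phi = np.zeros(n)
    for a in range(n):
        for b in range(n):
            if a == b:
                continue
            # preference of a over b (and vice versa), aggregated over criteria
            pref_ab = np.sum(weights * (X[a] > X[b]))
            pref_ba = np.sum(weights * (X[b] > X[a]))
            phi[a] += pref_ab - pref_ba
    return phi / (n - 1)  # net outranking flow per alternative

X = np.array([[3.0, 2.0], [1.0, 4.0], [2.0, 1.0]])
w = np.array([0.5, 0.5])
print(promethee_net_flows(X, w))  # alternative 0 ranks first, 2 last
```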
## Getting Started
These instructions will get you a copy of the project up and running on your local machine for development and testing purposes.
### Prerequisites
Before you begin, ensure you have met the following requirements:
- **Operating System**: Windows 10+, macOS 10.14+, or Linux
- **Programming Language**: [Python] version 3.6 or higher
- **Package Manager**: [pip] version 24.1 or higher
- **Other dependencies**: Python packages: matplotlib, pandas, numpy, Promethee, sklearn
### Installing
Clone the repo and install the package locally (soon available via pip from GitLab or PyPI):
```bash
git clone https://forge.inrae.fr/pfem/dev/libs/omerbio
cd src
pip install -e .
```
### Running the tests
In the root folder of this repository run:
```
python -m unittest
```
This runs the unit test of the package functions.
## Changelog
COMING SOON !
## Authors
- **Elfried Salanon** - *Initial idea and implementation* - MTH, INRAE, UNH, PFEM
- **Salomon Gouiri Loembe** - *First packaging work* - INRAE, UNH, PFEM
- **Marie Lefebvre** - *Project management* - MTH, INRAE, UNH, PFEM
- **Etienne Jules** - *Project management, final packaging, maintainer* - MTH, INRAE, UNH, PFEM
## License
**Omerbio** is distributed under the MIT License.
Please refer to [LICENSE](LICENSE) file for further details.
## Support & External resources
<!-- NOTE: this section is facultative; customize / remove useless / add relevant / ... items in the list below -->
- :book: **Documentation** - Coming soon !
- :bug: **Bug Reports** - [GitLab Issues](https://forge.inrae.fr/pfem/dev/libs/omerbio/-/issues)
- :email: **Email** - <etienne.jules@inrae.fr>
## Acknowledgments
<!-- NOTE: this section is required -->
- Thanks to Elfried Salanon, Salomon Gouiri Loembe, Marie Lefebvre and Etienne Jules for their valuable contributions
- Inspired by and built with [Promethee python package](https://pypi.org/project/Promethee/)
- Special thanks to the open-source community
---
| text/markdown | null | Etienne Jules <etienne.jules@inrae.fr> | null | null | null | null | [
"Programming Language :: Python :: 3",
"Operating System :: OS Independent"
] | [] | null | null | >=3.9 | [] | [] | [] | [
"numpy",
"pandas",
"scikit-learn",
"matplotlib==3.9.4",
"PyMuPDF==1.26.3",
"Promethee"
] | [] | [] | [] | [
"Homepage, https://unh-pfem-gitlab.ara.inrae.fr/packages/omerbio",
"Issues, https://unh-pfem-gitlab.ara.inrae.fr/packages/omerbio/issues"
] | twine/6.2.0 CPython/3.9.25 | 2026-02-18T09:30:32.209065 | omer_bio-0.1.0.tar.gz | 6,463 | f4/0b/dc063b54153d055c3694ac27dc686fd4fef001dc2dfe117afc595f591033/omer_bio-0.1.0.tar.gz | source | sdist | null | false | 993b95b94b566a41fd8afa473cdb8ff1 | 6a20ce1e265ee97a75b27ba200b1d32d2d0875a9054b53c09db6c9a5c50b2025 | f40bdc063b54153d055c3694ac27dc686fd4fef001dc2dfe117afc595f591033 | MIT | [
"LICENSE"
] | 274 |
2.4 | trspecfit | 0.4.0 | Fit 2D time- and energy-resolved spectroscopy data | # trspecfit - 2D Time- and Energy-resolved Spectroscopy Fitting
[](https://time-resolved-spectroscopy-fit.readthedocs.io/en/latest/?badge=latest)
[](https://pypi.org/project/trspecfit/)
`trspecfit` is a Python package for modeling and fitting 1D energy-resolved and 2D time-and-energy-resolved spectroscopy data. It extends lmfit with composable spectral components, parameter-level time dynamics, convolution kernels, and simulation tools so you can build, fit, and validate physically meaningful models in one workflow.
## Capabilities
- Modular components (Gaussian, Voigt/GLP/GLS, Doniach-Sunjic, backgrounds, kernels)
- 1D and 2D model construction with time-dependent parameters
- Global fitting via `lmfit`, including CI and optional MCMC (`lmfit.emcee`)
- Synthetic data generation (single spectra, 2D datasets, noisy realizations)
- Parameter-sweep simulation for validation and ML training data generation
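As background, each spectral component is a parameterized line shape. A plain-NumPy FWHM-parameterized Gaussian shows the idea — trspecfit's actual components wrap lmfit and have their own signatures:

```python
import numpy as np

def gaussian(x, amplitude, center, fwhm):
    """Gaussian peak parameterized by FWHM, as is common in spectroscopy.

    Illustrative stand-in -- not trspecfit's component API.
    """
    sigma = fwhm / (2.0 * np.sqrt(2.0 * np.log(2.0)))
    return amplitude * np.exp(-0.5 * ((x - center) / sigma) ** 2)

x = np.linspace(-5.0, 5.0, 11)  # energy axis with a point exactly at the center
y = gaussian(x, amplitude=2.0, center=0.0, fwhm=2.0)
print(y.max())  # → 2.0 (peak height equals the amplitude at the center)
```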
## Documentation
Full docs are hosted on Read the Docs:
- Docs home: https://time-resolved-spectroscopy-fit.readthedocs.io/en/latest/
- Installation: https://time-resolved-spectroscopy-fit.readthedocs.io/en/latest/installation.html
- Quick start: https://time-resolved-spectroscopy-fit.readthedocs.io/en/latest/quickstart.html
- Examples: https://time-resolved-spectroscopy-fit.readthedocs.io/en/latest/examples/index.html
- API reference: https://time-resolved-spectroscopy-fit.readthedocs.io/en/latest/api/index.html
For consistent, central plot behavior, set plotting defaults at `Project` creation (typically via `project.yaml`), and see PlotConfig details and override patterns here: https://time-resolved-spectroscopy-fit.readthedocs.io/en/latest/api/plot_config.html
## Installation
Install from PyPI:
```bash
pip install trspecfit
```
Install from GitHub:
```bash
pip install git+https://github.com/InfinityMonkeyAtWork/time-resolved-spectroscopy-fit.git
```
## Quick Usage
```python
from trspecfit import Project, File
project = Project(path='examples/simulator', name='local-test')
file = File(parent_project=project, path='simulated_dataset')
file.load_model('models_energy.yaml', ['ModelName'])
file.describe_model()
file.add_time_dependence(
    model_yaml='models_time.yaml',
    model_info=['TimeModelName'],
    par_name='EnergyModelComponent_NN_par',
)
file.model_active.create_value2D()
value_2d = file.model_active.value2D
```
For full workflows, see the docs examples page and the notebooks in `examples/`.
## Development
```bash
# Create env (same on all platforms)
python -m venv .venv
# Activate virtual environment
# Linux / macOS
source .venv/bin/activate
# OR Windows PowerShell
.\.venv\Scripts\Activate
# Install and setup (same on all platforms)
pip install -U pip
pip install -e ".[dev]"
python -m pre_commit install --install-hooks
# Commit changes (same on all platforms)
pytest
python -m pre_commit run --all-files
```
## Repository Layout
- `src/trspecfit/` - package source
- `docs/` - Sphinx docs source
- `examples/` - notebooks and YAML models
- `tests/` - pytest test suite
## Copyright Notice
time-resolved spectroscopy fit (trspecfit) Copyright (c) 2025, The Regents of the University of California, through Lawrence Berkeley National Laboratory (subject to receipt of any required approvals from the U.S. Dept. of Energy). All rights reserved.
If you have questions about your rights to use or distribute this software,
please contact Berkeley Lab's Intellectual Property Office at
IPO@lbl.gov.
NOTICE. This Software was developed under funding from the U.S. Department
of Energy and the U.S. Government consequently retains certain rights. As
such, the U.S. Government has been granted for itself and others acting on
its behalf a paid-up, nonexclusive, irrevocable, worldwide license in the
Software to reproduce, distribute copies to the public, prepare derivative
works, and perform publicly and display publicly, and to permit others to do so.
| text/markdown | null | Johannes Mahl <johannes.a.mahl@gmail.com> | null | null | null | fit, fitting, spectroscopy, 2D, multi-dimensional, time-resolved, global fit | [
"Development Status :: 3 - Alpha",
"Intended Audience :: Science/Research",
"Topic :: Scientific/Engineering :: Physics",
"License :: OSI Approved :: BSD License",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.12"
] | [] | null | null | >=3.12 | [] | [] | [] | [
"lmfit",
"numdifftools",
"emcee",
"tqdm",
"corner",
"numpy",
"pandas",
"scipy",
"ruamel.yaml",
"h5py",
"IPython",
"matplotlib",
"pre-commit; extra == \"dev\"",
"mypy; extra == \"dev\"",
"pytest; extra == \"dev\""
] | [] | [] | [] | [
"Repository, https://github.com/InfinityMonkeyAtWork/time-resolved-spectroscopy-fit/",
"Bug Tracker, https://github.com/InfinityMonkeyAtWork/time-resolved-spectroscopy-fit/issues",
"Discussions, https://github.com/InfinityMonkeyAtWork/time-resolved-spectroscopy-fit/discussions"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-18T09:28:15.058283 | trspecfit-0.4.0.tar.gz | 2,158,577 | ea/a8/b5b1fb5eedda43f139e6f5eaea3d6691c3fb50d8309a8b1a35f12ecc962f/trspecfit-0.4.0.tar.gz | source | sdist | null | false | de250d3c0f5a3d3855666343b4f29ec2 | 5bbd9700baf1ba5bf1dc6467d74ef4b09630b63a2937ecb172f4ba5d8ef13cac | eaa8b5b1fb5eedda43f139e6f5eaea3d6691c3fb50d8309a8b1a35f12ecc962f | null | [
"LICENSE"
] | 247 |
2.3 | mijnbib | 0.10.5 | Python API voor de website mijn.bibliotheek.be | # mijnbib
Python API for bibliotheek.be (formerly mijn.bibliotheek.be)
With this Python library you can retrieve your borrowed items, reservations and
account information if you have an account on <https://bibliotheek.be>, the
Flemish & Brussels public library network. You can also extend loans.
This API allows you to show and extend your loans (or loans from multiple accounts
in your family) in your own coding projects, such as a small desktop or web
application.
A list of supported libraries can be found in [libraries.md](./libraries.md).
## Installation
Install via:
    pip install mijnbib
Or, to force an upgrade:
    pip install --upgrade mijnbib
## Usage
For example, retrieving your borrowed items can be done as follows (after installation):
    from mijnbib import MijnBibliotheek

    username = "johndoe"
    password = "12345678"
    account_id = "123"  # see the number in the URL, or via mb.get_accounts()

    mb = MijnBibliotheek(username, password)
    loans = mb.get_loans(account_id)
    print(loans)
For a more readable version, use `pprint()`:
    import pprint
    pprint.pprint([l for l in loans])

    [Loan(title='Erebus',
          loan_from=datetime.date(2023, 11, 25),
          loan_till=datetime.date(2023, 12, 23),
          author='Palin, Michael',
          type='Boek',
          extendable=True,
          extend_url='https://gent.bibliotheek.be/mijn-bibliotheek/lidmaatschappen/123/uitleningen/verlengen?loan-ids=789',
          extend_id='789',
          branchname='Gent Hoofdbibliotheek',
          id='456789',
          url='https://gent.bibliotheek.be/resolver.ashx?extid=%7Cwise-oostvlaanderen%7C456789',
          cover_url='https://webservices.bibliotheek.be/index.php?func=cover&ISBN=9789000359325&VLACCnr=10157217&CDR=&EAN=&ISMN=&EBS=&coversize=medium',
          account_id='123'
    )]
For more examples, see the code in the `examples` folder.
The examples also show how to use `asdict` to convert results to dictionaries.
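Since `Loan` appears to be a dataclass (see the repr above), `dataclasses.asdict` is all that is needed; a sketch with a stand-in dataclass mirroring a few `Loan` fields:

```python
from dataclasses import dataclass, asdict
import datetime

@dataclass
class Loan:  # stand-in with a few of mijnbib's Loan fields
    title: str
    loan_till: datetime.date
    extendable: bool

loan = Loan(title="Erebus", loan_till=datetime.date(2023, 12, 23), extendable=True)
d = asdict(loan)
print(d["title"], d["extendable"])  # → Erebus True
```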
## Command-line interface
You can call the module from the command-line as follows:
    python -m mijnbib loans
    python -m mijnbib --version
Or, directly via the CLI command:
    mijnbib loans
The `--help` option shows all available options:
    $ mijnbib --help
    usage: mijnbib [-h] [-V] [-v] {all,accounts,loans,reservations,login} ...

    Interact with bibliotheek.be website, e.g. to retrieve loans, reservations
    or accounts.

    Specify the required authentication parameters (username, password, ...)
    as a parameter of the subcommand. See the help of a subcommand for all
    parameters, e.g. `mijnbib --help all`

    More convenient is creating a `mijnbib.ini` file containing the parameters:

        [DEFAULT]
        username = john
        password = 123456
        accountid = 456

    positional arguments:
      {all,accounts,loans,reservations,login}
        all                 retrieve all information for all accounts
        accounts            retrieve accounts
        loans               retrieve loans for account id
        reservations        retrieve reservations for account id
        login               just log in, and report if success or not

    options:
      -h, --help            show this help message and exit
      -V, --version         show program's version number and exit
      -v, --verbose         show debug logging
## Notes
- **Error handling**. Depending on the application, it may be advisable to
provide error handling. The `errors.py` file contains the list of
Mijnbib-specific exceptions. The docstrings of the public methods contain
the errors that can occur. For example:
      from mijnbib import AuthenticationError, MijnbibError, MijnBibliotheek

      mb = MijnBibliotheek(username, password)
      try:
          accounts = mb.get_accounts()
      except AuthenticationError as e:
          print(e)  # wrong credentials
      except MijnbibError as e:
          print(e)  # any other custom mijnbib error
- **Compatibility with bibliotheek.be** - This Python API retrieves its data
via web scraping of the bibliotheek.be website.
Therefore it depends on the structure of the website. When the structure of
the website changes, it is very likely that all or certain functionality
will suddenly stop working.
In that case, you have to wait until this Python library is updated to deal
with the new structure.
Provide a try/except wrapper, where you either catch `MijnbibError`, or the
more specific `IncompatibleSourceError`.
## Alternatives
The Home Assistant plugin <https://github.com/myTselection/bibliotheek_be> scrapes
the bibliotheek.be website in a similar way.
## Development
This project uses `uv`. If needed, install first via, e.g.
    curl -LsSf https://astral.sh/uv/install.sh | sh
To install all dependencies for development:
    make init
If all is good, the following should print `mijnbib <version>`:
    uv run mijnbib --version
Note: This works because mijnbib is installed as a cli script via the
`project.scripts` entry in `pyproject.toml`, with `uv run` taking care of
activating the virtual environment before running the command.
You need `make` as well. For installation on Windows, see the options at
<https://stackoverflow.com/a/32127632/50899>
Running the tests, applying linting and code formatting can be done via:
    make test
    make lint
    make format
To work around the challenge of testing a web scraper, the following *snapshot
testing* approach can be used to get some confidence when applying refactoring:
1. Create a file `mijnbib.ini` in the project root folder, and make it contain
a section `[DEFAULT]` holding the following parameters: `username`,
`password` and `account_id`
2. Run `python tests/save_testref.py` to capture and store the current output
(a couple of files will be created)
3. Perform refactoring as needed
4. Run `pytest tests/tst_mijnbibliotheek.py` (note: it's `pytest` here!) to check
if the output still matches the earlier captured output
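The snapshot idea in miniature — capture the output once, then assert that later runs still produce it. A toy sketch, not the project's actual `save_testref.py`:

```python
import json
import pathlib

def scrape():  # stand-in for the real web-scraping calls
    return {"loans": ["Erebus"], "reservations": []}

snapshot = pathlib.Path("snapshot.json")
if not snapshot.exists():                      # step 2: capture the reference once
    snapshot.write_text(json.dumps(scrape(), sort_keys=True))

reference = json.loads(snapshot.read_text())   # step 4: compare after refactoring
assert scrape() == reference
print("snapshot matches")
```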
Creating a distribution archive:
    make clean
    make build
## Publishing
1. Update `changelog.md` (do not commit)
2. Do:
        make all
        uvx uv-ship next patch  # (updates pyproject.toml and uv.lock,
                                #  creates tag and pushes to remote)
        make clean build
        make publish
3. Create release in github, starting from tag
| text/markdown | Ward Van Heddeghem | Ward Van Heddeghem <wardvh@fastmail.fm> | null | null | MIT License
Copyright (c) 2023 Ward
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE. | mijn bibliotheek, bibliotheek | [
"License :: OSI Approved :: MIT License",
"Programming Language :: Python",
"Programming Language :: Python :: 3"
] | [] | null | null | >=3.9 | [] | [] | [] | [
"beautifulsoup4>=4.14.3",
"requests>=2.32.3",
"urllib3>=2.6.3"
] | [] | [] | [] | [
"Changelog, https://github.com/wvanhed/mijnbib/blob/main/changelog.md",
"Homepage, https://github.com/wvanhed/mijnbib"
] | twine/6.1.0 CPython/3.8.13 | 2026-02-18T09:28:02.807488 | mijnbib-0.10.5.tar.gz | 19,230 | 23/fb/4aa19aae1991f4e24e368420a6a84ac7cfcfd9c1fdab7c9db17330877367/mijnbib-0.10.5.tar.gz | source | sdist | null | false | c328559f991bf5ac143d7d3b1e74ceca | eadba43904976d05501831761b934ee38fe83592f15542c4bd474d6e58fa5212 | 23fb4aa19aae1991f4e24e368420a6a84ac7cfcfd9c1fdab7c9db17330877367 | null | [] | 256 |
2.4 | fiddler-langgraph | 1.4.0 | Python SDK for instrumenting GenAI Applications with Fiddler | # Fiddler LangGraph SDK
SDK for instrumenting GenAI Applications with Fiddler using OpenTelemetry and LangGraph.
## Installation
```bash
pip install fiddler-langgraph
```
**Note**: This SDK supports LangGraph versions >= 0.3.28 and <= 1.0.2. If you already have LangGraph installed in your environment, the SDK will work with your existing version as long as it falls within this range. If LangGraph is not installed or is outside the supported range, you'll get a helpful error message with installation instructions.
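The supported-range check boils down to a version comparison. A rough sketch using plain integer tuples — sufficient for simple `X.Y.Z` strings, though the SDK presumably uses proper version parsing:

```python
def parse(version: str):
    """Naive 'X.Y.Z' parser -- pre-releases and suffixes are out of scope."""
    return tuple(int(part) for part in version.split("."))

LOW, HIGH = parse("0.3.28"), parse("1.0.2")

def langgraph_supported(version: str) -> bool:
    """True when the installed LangGraph version falls within the range."""
    return LOW <= parse(version) <= HIGH

print(langgraph_supported("0.4.0"), langgraph_supported("1.1.0"))  # → True False
```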
### With Example Dependencies
To run the example scripts in the `examples/` directory:
```bash
pip install fiddler-langgraph[examples]
```
### Development Dependencies
For development and testing:
```bash
pip install fiddler-langgraph[dev]
```
## Quick Start
```python
from fiddler_langgraph import FiddlerClient
# Initialize the FiddlerClient with basic configuration
client = FiddlerClient(
    url="https://your-instance.fiddler.ai",
    api_key="fdl_api_key",
    application_id="fdl_application_id"  # Must be a valid UUID4
)
# For langgraph, you can instrument like below
from fiddler_langgraph.tracing.instrumentation import LangGraphInstrumentor, set_llm_context, set_conversation_id
LangGraphInstrumentor(client).instrument()
# Set additional context for LLM processing
from langchain_openai import ChatOpenAI
model = ChatOpenAI(model='gpt-4o-mini')
set_llm_context(model, "Previous conversation context")
# Set conversation ID for multi-turn conversations
from langgraph.graph import StateGraph
workflow = StateGraph(state_schema=State)
app = workflow.compile()
set_conversation_id("conversation_123")
app.invoke({"messages": [{"role": "user", "content": "Write a novel"}]})
```
## LangGraph Usage Examples
### Basic Instrumentation
```python
from fiddler_langgraph.tracing.instrumentation import LangGraphInstrumentor
# Initialize and instrument
instrumentor = LangGraphInstrumentor(client)
instrumentor.instrument()
```
### Setting LLM Context
```python
from fiddler_langgraph.tracing.instrumentation import set_llm_context
from langchain_openai import ChatOpenAI
model = ChatOpenAI(model='gpt-4o-mini')
set_llm_context(model, "User prefers concise responses")
```
### Conversation Tracking
```python
from fiddler_langgraph.tracing.instrumentation import set_conversation_id
import uuid
# Set conversation ID for tracking multi-turn conversations
conversation_id = str(uuid.uuid4())
set_conversation_id(conversation_id)
```
## Configuration
The Fiddler SDK provides flexible configuration options for OpenTelemetry integration and performance tuning.
### Basic Configuration
```python
client = FiddlerClient(
    api_key="your-api-key",
    application_id="your-app-id",  # Must be a valid UUID4
    url="https://your-instance.fiddler.ai",
)
```
### Advanced Configuration
```python
from opentelemetry.sdk.trace import SpanLimits, sampling
from opentelemetry.exporter.otlp.proto.http.trace_exporter import Compression
# Custom span limits for high-volume applications
custom_limits = SpanLimits(
    max_events=64,
    max_links=64,
    max_span_attributes=64,
    max_event_attributes=64,
    max_link_attributes=64,
    max_span_attribute_length=4096,
)

# Sampling strategy for production
sampler = sampling.TraceIdRatioBased(0.1)  # Sample 10% of traces

client = FiddlerClient(
    api_key="your-api-key",
    application_id="your-app-id",
    url="https://your-instance.fiddler.ai",
    span_limits=custom_limits,
    sampler=sampler,
    console_tracer=False,  # Set to True for debugging
    compression=Compression.Gzip,  # Enable gzip compression (default)
)
```
### Compression Options
The SDK supports compression for OTLP export to reduce payload size:
```python
from opentelemetry.exporter.otlp.proto.http.trace_exporter import Compression
# Enable gzip compression (default, recommended for production)
client = FiddlerClient(
    api_key="your-api-key",
    application_id="your-app-id",
    url="https://your-instance.fiddler.ai",
    compression=Compression.Gzip,
)

# Disable compression (useful for debugging or local development)
client = FiddlerClient(
    api_key="your-api-key",
    application_id="your-app-id",
    url="https://your-instance.fiddler.ai",
    compression=Compression.NoCompression,
)

# Use deflate compression (alternative to gzip)
client = FiddlerClient(
    api_key="your-api-key",
    application_id="your-app-id",
    url="https://your-instance.fiddler.ai",
    compression=Compression.Deflate,
)
```
### Environment Variables for Batch Processing
Configure batch span processor behavior using environment variables:
```python
import os
# Configure batch processing
os.environ['OTEL_BSP_MAX_QUEUE_SIZE'] = '500'
os.environ['OTEL_BSP_SCHEDULE_DELAY_MILLIS'] = '500'
os.environ['OTEL_BSP_MAX_EXPORT_BATCH_SIZE'] = '50'
os.environ['OTEL_BSP_EXPORT_TIMEOUT'] = '10000'
client = FiddlerClient(
    api_key="your-api-key",
    application_id="your-app-id",
    url="https://your-instance.fiddler.ai",
)
```
### Default Configuration
The SDK uses restrictive defaults to prevent excessive resource usage:
- **Span Limits**: 32 events/links/attributes per span, 2048 character attribute length
- **Batch Processing**: 100 queue size, 1000ms delay, 10 batch size, 5000ms timeout
- **Sampling**: Always on (100% sampling)
## Features
### Core Features
- **OpenTelemetry Integration**: Full tracing support with configurable span limits
- **Input Validation**: UUID4 validation for application IDs, URL validation
- **Flexible Configuration**: Custom span limits, sampling strategies, and batch processing
- **Resource Management**: Conservative defaults to prevent resource exhaustion
### LangGraph Instrumentation
- **Automatic Tracing**: Complete workflow tracing with span hierarchy
- **LLM Context Setting**: Set additional context information for LLM processing via `set_llm_context()`
- **Conversation Tracking**: Set conversation IDs for multi-turn conversations via `set_conversation_id()`
- **Message Serialization**: Smart handling of complex message content (lists, dicts)
- **Attribute Truncation**: Automatic truncation of long attribute values (256 character limit)
- **Error Handling**: Comprehensive error tracking and status reporting
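The attribute-truncation behavior described above can be sketched as follows. This is a minimal illustration, not the SDK's actual implementation; the helper name `truncate_attribute` is hypothetical.

```python
# Illustrative sketch only - the SDK's internal implementation may differ.
MAX_ATTRIBUTE_LENGTH = 256  # the limit described above

def truncate_attribute(value: str, limit: int = MAX_ATTRIBUTE_LENGTH) -> str:
    """Truncate long attribute values so spans stay within size limits."""
    if len(value) <= limit:
        return value
    return value[:limit]

print(len(truncate_attribute("x" * 300)))  # 256
```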
### Monitoring and Observability
- **Span Types**: Different span types for chains, tools, retrievers, and LLMs
- **Agent Tracking**: Automatic agent name and ID generation
- **Performance Metrics**: Timing, token usage, and model information
- **Error Context**: Detailed error information with stack traces
## Validation and Error Handling
The SDK includes comprehensive validation:
- **Application ID**: Must be a valid UUID4 string
- **URL**: Must have valid scheme (http/https) and netloc
- **Attribute Values**: Automatically truncated to prevent oversized spans
- **Message Content**: Smart serialization of complex data structures
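The UUID4 and URL checks described above can be approximated with the standard library. This is an illustrative sketch; the SDK's actual validation logic and error messages may differ.

```python
import uuid
from urllib.parse import urlparse

def is_valid_uuid4(value: str) -> bool:
    """True if `value` parses as a version-4 UUID."""
    try:
        return uuid.UUID(value).version == 4
    except (ValueError, AttributeError, TypeError):
        return False

def is_valid_url(value: str) -> bool:
    """True if `value` has an http/https scheme and a netloc."""
    parsed = urlparse(value)
    return parsed.scheme in ("http", "https") and bool(parsed.netloc)

print(is_valid_uuid4(str(uuid.uuid4())))                  # True
print(is_valid_url("https://your-instance.fiddler.ai"))   # True
```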
## Performance Considerations
- **High-volume applications**: Increase span limits and batch processing parameters
- **Low-latency requirements**: Decrease batch schedule delay
- **Memory constraints**: Use restrictive span limits and smaller batch sizes
- **Debugging**: Enable console tracer and use higher attribute limits
- **Production**: Use appropriate sampling strategies to control data volume
## Requirements
- Python 3.10, 3.11, 3.12, or 3.13
- Dependencies (automatically installed):
  - opentelemetry-api (1.34.1)
  - opentelemetry-sdk (1.34.1)
  - opentelemetry-instrumentation (0.55b1)
  - opentelemetry-exporter-otlp-proto-http (1.34.1)
  - langgraph (0.4.8)
  - langchain (0.3.26)
  - langchain-core (automatically installed with langchain)
## Development
### Running Tests
```bash
# Run all tests
pytest
# Run specific test file
pytest tests/core/test_client.py
# Run with coverage
pytest --cov=fiddler_langgraph
```
### Code Quality
```bash
# Run linting
flake8 fiddler_langgraph/
# Run type checking
mypy fiddler_langgraph/
# Run security checks
bandit -r fiddler_langgraph/
```
## License
Apache License 2.0 - see LICENSE file for details
| text/markdown | Fiddler AI | Fiddler AI <support@fiddler.ai> | null | null | null | fiddler, ai, genai, llm, monitoring, observability, instrumentation, langgraph, langchain, opentelemetry | [
"Development Status :: 3 - Alpha",
"Intended Audience :: Developers",
"Topic :: Software Development :: Libraries :: Python Modules",
"Topic :: System :: Monitoring",
"Topic :: Software Development :: Quality Assurance",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.10",
... | [] | https://fiddler.ai | null | >=3.10 | [] | [] | [] | [
"pip>=21.0",
"setuptools<82.0.0,>=61.0",
"opentelemetry-api<=1.39.1,>=1.19.0",
"opentelemetry-sdk<=1.39.1,>=1.19.0",
"opentelemetry-instrumentation<=0.60b1,>=0.40b0",
"opentelemetry-exporter-otlp-proto-http<=1.39.1,>=1.19.0",
"pydantic>=2.0",
"setuptools<82.0.0,>=61.0; extra == \"dev\"",
"pytest>=8.... | [] | [] | [] | [
"Homepage, https://fiddler.ai",
"Documentation, https://docs.fiddler.ai",
"Repository, https://github.com/fiddler-labs/fiddler-sdk",
"Issues, https://github.com/fiddler-labs/fiddler-sdk/issues"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-18T09:27:44.781075 | fiddler_langgraph-1.4.0.tar.gz | 37,298 | a1/24/3da316c42a8ba256652fd6dd92eae8972ca6eeb25d3ca4564e71892a15c7/fiddler_langgraph-1.4.0.tar.gz | source | sdist | null | false | 49a0c42818450b8041d1aa49199bb80b | 3b937db81a7ec84f56d5bd206c2c7279c5520608532c3e0d5c7ed9d0fbaace1c | a1243da316c42a8ba256652fd6dd92eae8972ca6eeb25d3ca4564e71892a15c7 | Apache-2.0 | [] | 319 |
2.4 | osdu-perf | 1.0.31 | Performance Testing Framework for OSDU Services - A comprehensive tool for testing OSDU APIs with Locust and Azure Load Testing SDK | # 🔥 OSDU Performance Testing Framework
[](https://www.python.org/downloads/)
[](https://opensource.org/licenses/MIT)
[](https://pypi.org/project/osdu-perf/)
A comprehensive Python framework for performance testing OSDU (Open Subsurface Data Universe) services. Features automatic test discovery, Azure authentication, Locust integration, and both local and cloud-based load testing capabilities with intelligent service orchestration.
## 📋 Key Features
✅ **Service Orchestration** - Intelligent service discovery and execution management
✅ **Azure Authentication** - Seamless Azure AD token management with multiple credential flows
✅ **Dual Execution Modes** - Run locally with Locust or scale with Azure Load Testing
✅ **CLI Tools** - Comprehensive command-line interface with three main commands
✅ **Template System** - Pre-built templates for common OSDU services
✅ **Configuration Management** - YAML-based configuration with environment-aware settings
✅ **Metrics Collection** - Automated metrics push to Azure Data Explorer (Kusto)
✅ **Environment Detection** - Automatically adapts behavior for local vs Azure environments
## 🏗️ Framework Architecture
### Core Components
- **`PerformanceUser`**: Locust integration with automatic service discovery
- **`ServiceOrchestrator`**: Plugin architecture for test discovery and execution
- **`BaseService`**: Abstract base class for implementing performance tests
- **`InputHandler`**: Configuration management and environment detection
- **`AzureTokenManager`**: Multi-credential authentication system
## 🚀 Quick Start
### Installation
```bash
# Install from PyPI
pip install osdu_perf
```
### Three Simple Commands
The framework provides three main commands for the complete performance testing workflow:
#### 1. Initialize Project (`init`)
```bash
# Create a new performance testing project
osdu_perf init <service_name>
# Examples:
osdu_perf init storage # Creates storage service performance tests
osdu_perf init search # Creates search service performance tests
osdu_perf init wellbore # Creates wellbore service performance tests
```
**What this creates:**
```
perf_tests/
├── config.yaml # Framework configuration
├── locustfile.py # Main test file with API calls
├── requirements.txt # Python dependencies
└── README.md # Project documentation
```
#### 2. Run Local Tests (`run local`)
```bash
# Run performance tests locally using Locust
osdu_perf run local --config config.yaml
```
**Features:**
- Uses Locust for load generation
- Azure CLI authentication for local development
- Real-time web UI at http://localhost:8089
- Automatic service discovery and execution
- Automatic metrics collection and push to Kusto
#### 3. Run Azure Load Tests (`run azure_load_test`)
```bash
# Deploy and run tests on Azure Load Testing service
osdu_perf run azure_load_test --config config.yaml
```
**Features:**
- Creates Azure Load Testing resources automatically
- Scales to hundreds/thousands of concurrent users
- Managed Identity authentication in Azure
- Comprehensive metrics and reporting
- Entitlements are created automatically on ADME for Azure load tests
## 🛠️ Command Reference
### 1. Initialize Command
```bash
osdu_perf init <service_name> [OPTIONS]
```
**Parameters:**
- `service_name`: Name of the OSDU service to test (e.g., storage, search, wellbore)
- `--force`: Force overwrite existing files without prompting
**Examples:**
```bash
osdu_perf init storage # Initialize storage service tests
osdu_perf init search --force # Force overwrite existing search tests
osdu_perf init wellbore # Initialize wellbore service tests
```
**Generated Files:**
- `config.yaml` - Framework configuration with OSDU connection details
- `locustfile.py` - Main test file with API calls to your service
- `requirements.txt` - Python dependencies
- `README.md` - Project-specific documentation
### 2. Local Testing Command
```bash
osdu_perf run local [OPTIONS]
```
**Configuration:**
- Uses `config.yaml` for base configuration
- CLI arguments override config file settings
- Environment variables provide runtime values
**Key Options:**
- `--config`: Path to config.yaml file (required)
- `--host`: OSDU host URL (overrides config)
- `--partition`: OSDU data partition ID (overrides config)
- `--app-id`: Azure AD Application ID (overrides config)
- `--users` (`-u`): Number of concurrent users (default: from config)
- `--spawn-rate` (`-r`): User spawn rate per second (default: from config)
- `--run-time` (`-t`): Test duration (default: from config)
**Examples:**
```bash
# Basic run using config.yaml
osdu_perf run local --config config.yaml
# Override specific settings
osdu_perf run local --config config.yaml --users 50 --run-time 5m
# Full override
osdu_perf run local \
--config config.yaml \
--host https://api.example.com \
--partition dp1 \
--app-id 12345678-1234-1234-1234-123456789abc \
--users 25 --spawn-rate 5
```
### 3. Azure Load Testing Command
```bash
osdu_perf run azure_load_test [OPTIONS]
```
**Required Parameters:**
- `--config`: Path to config.yaml file
**Optional Parameters:**
- `--loadtest-name`: Azure Load Testing resource name (auto-generated)
- `--test-name`: Test name (auto-generated with timestamp)
- `--engine-instances`: Number of load generator instances (default: from config)
- `--users` (`-u`): Number of concurrent users per instance (default: from config)
- `--run-time` (`-t`): Test duration (default: from config)
**Examples:**
```bash
# Basic Azure Load Test using config
osdu_perf run azure_load_test \
--config config.yaml \
--subscription-id "12345678-1234-1234-1234-123456789012" \
--resource-group "myResourceGroup" \
--location "eastus"
# High-scale cloud test
osdu_perf run azure_load_test \
--config config.yaml \
--subscription-id "12345678-1234-1234-1234-123456789012" \
--resource-group "myResourceGroup" \
--location "eastus" \
--users 100 --engine-instances 5 --run-time 30m
```
## 📝 Configuration System
### config.yaml Structure
The framework uses a centralized configuration file that supports both local and Azure environments:
```yaml
# OSDU Environment Configuration
osdu_environment:
  # OSDU instance details (required for run local command)
  host: "https://your-osdu-host.com"
  partition: "your-partition-id"
  app_id: "your-azure-app-id"
  # OSDU deployment details (optional - used for metrics collection)
  sku: "Standard"
  version: "25.2.35"
  # Authentication (optional - uses automatic token generation if not provided)
  auth:
    # Manual token override (optional)
    token: ""

# Metrics Collection Configuration
metrics_collector:
  # Kusto (Azure Data Explorer) Configuration
  kusto:
    cluster: "https://your-kusto-cluster.eastus.kusto.windows.net"
    database: "your-database"
    ingest_uri: "https://ingest-your-kusto.eastus.kusto.windows.net"

# Test Configuration (Optional)
test_settings:
  # Azure Load Test resource and test locations
  subscription_id: "your-azure-subscription-id"
  resource_group: "your-resource-group"
  location: "eastus"
  # Test-specific configurations
  default_wait_time:
    min: 1
    max: 3
  users: 10
  spawn_rate: 2
  run_time: "60s"
  engine_instances: 1
  test_name_prefix: "osdu_perf_test"
  test_scenario: "health_check"
  test_run_id_description: "Automated performance test"
```
### Configuration Hierarchy
The framework uses a layered configuration approach:
1. **config.yaml** (project-specific settings)
2. **CLI arguments** (highest priority)
## 🏗️ How It Works
### 🔍 Simple API-Based Approach
The framework now uses a simplified API-based approach where developers write test methods directly in `locustfile.py`:
```
perf_tests/
├── locustfile.py → OSDUUser class with @task methods for testing
├── config.yaml → Configuration for host, partition, authentication
└── requirements.txt → Dependencies (osdu_perf package)
```
**Simplified Process:**
1. `osdu_perf init <service>` generates `locustfile.py` template
2. Developers add `@task` methods with API calls (`self.get()`, `self.post()`, etc.)
3. `PerformanceUser` base class handles authentication, headers, tokens automatically
4. Run with `osdu_perf run local` or `osdu_perf run azure_load_test`
### 🎯 Smart Resource Naming
Based on detected services, Azure resources are automatically named:
- **Load Test Resource**: `osdu-{service}-loadtest-{timestamp}`
- **Test Name**: `osdu_{service}_test_{timestamp}`
- **Example**: `osdu-storage-loadtest-20241028` with test `osdu_storage_test_20241028_142250`
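The naming pattern above can be sketched as follows. This is an illustration of the pattern only; the framework's actual name generator and function names may differ.

```python
from datetime import datetime

def make_resource_names(service, now=None):
    """Build load-test resource and test names from a detected service name.

    Illustrative sketch of the pattern described above; `make_resource_names`
    is a hypothetical helper, not part of the osdu_perf API.
    """
    now = now or datetime.now()
    loadtest_resource = f"osdu-{service}-loadtest-{now:%Y%m%d}"
    test_name = f"osdu_{service}_test_{now:%Y%m%d_%H%M%S}"
    return loadtest_resource, test_name

resource, test = make_resource_names("storage", datetime(2024, 10, 28, 14, 22, 50))
print(resource)  # osdu-storage-loadtest-20241028
print(test)      # osdu_storage_test_20241028_142250
```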
### 🔐 Multi-Environment Authentication
**Local Development:**
- Azure CLI credentials (`az login`)
- Manual token via config or environment variables
- Automatic token refresh and caching
**Azure Load Testing:**
- Managed Identity authentication (no secrets needed)
- Environment variables injected by Azure Load Testing service
- Automatic credential detection and fallback
### 📊 Intelligent Metrics Collection
**Automatic Kusto Integration:**
- Detects environment (local vs Azure) automatically
- Uses appropriate authentication method
- Pushes detailed metrics to three tables:
- `LocustMetrics` - Per-endpoint statistics
- `LocustExceptions` - Error tracking
- `LocustTestSummary` - Overall test summaries
## 🧪 Writing Performance Tests
### Simple API-Based Approach
The framework generates your `locustfile.py`:
```python
"""
OSDU Performance Tests - Locust Configuration
Generated by OSDU Performance Testing Framework
"""
import os
from locust import events, task, tag
from osdu_perf import PerformanceUser


# STEP 1: Register custom CLI args with Locust
@events.init_command_line_parser.add_listener
def add_custom_args(parser):
    """Add OSDU-specific command line arguments"""
    parser.add_argument("--partition", type=str, default=os.getenv("PARTITION"), help="OSDU Data Partition ID")
    parser.add_argument("--appid", type=str, default=os.getenv("APPID"), help="Azure AD Application ID")


class OSDUUser(PerformanceUser):
    """
    OSDU Performance Test User

    This class automatically:
    - Handles Azure authentication using --appid
    - Manages HTTP headers and tokens
    - Provides simple API methods for testing
    - Manages Locust user simulation and load testing
    """

    def on_start(self):
        """Called when a user starts - performs setup"""
        super().on_start()
        # Access OSDU parameters from Locust parsed options or environment variables
        partition = getattr(self.environment.parsed_options, 'partition', None) or os.getenv('PARTITION')
        host = getattr(self.environment.parsed_options, 'host', None) or self.host or os.getenv('HOST')
        token = os.getenv('ADME_BEARER_TOKEN')  # Token only from environment for security
        appid = getattr(self.environment.parsed_options, 'appid', None) or os.getenv('APPID')
        print("🚀 Started performance testing user")
        print(f"   📍 Partition: {partition}")
        print(f"   🌐 Host: {host}")
        print(f"   🔑 Token: {'***' if token else 'Not provided'}")
        print(f"   🆔 App ID: {appid or 'Not provided'}")

    @tag("storage", "health_check")
    @task(1)
    def check_service_health(self):
        # Simple API call - framework handles headers, tokens, authentication
        self.get("/api/storage/v2/health")

    @tag("storage", "health_check")
    @task(2)
    def test_service_endpoints(self):
        # More API calls for your service
        self.get("/api/storage/v2/info")
        self.post("/api/storage/v2/records", json={"test": "data"})
```
### Key Implementation Points
1. **Inherit from PerformanceUser**: Your class extends `PerformanceUser` which handles all authentication and setup
2. **Use @task decorators**: Mark methods with `@task(weight)` to define test scenarios
3. **Simple HTTP methods**: Use `self.get()`, `self.post()`, `self.put()`, `self.delete()` - framework handles headers/tokens
4. **No manual authentication**: Framework automatically handles Azure AD tokens and HTTP headers
5. **Environment awareness**: Automatically adapts for local vs Azure Load Testing environments
### Available HTTP Methods
The `PerformanceUser` base class provides these simple methods:
```python
# GET request
self.get("/api/storage/v2/records/12345")
# POST request with JSON data
self.post("/api/storage/v2/records", json={
"kind": "osdu:wks:partition:storage:1.0.0",
"data": {"test": "data"}
})
# PUT request
self.put("/api/storage/v2/records/12345", json=updated_data)
# DELETE request
self.delete("/api/storage/v2/records/12345")
# Custom headers (if needed)
self.get("/api/storage/v2/info", headers={"Custom-Header": "value"})
# The underlying Locust client is also available
self.client.get("/api/storage/v2/records/12345")
```
### Authentication Handling
The framework automatically manages authentication:
- **Local Development**: Uses Azure CLI credentials (`az login`)
- **Azure Load Testing**: Uses Managed Identity
- **Manual Override**: Set `ADME_BEARER_TOKEN` environment variable
- **All requests**: Automatically include proper Authorization headers
## 🔧 Configuration & Environment Variables
### Configuration Hierarchy
The framework uses a layered configuration approach (highest priority first):
1. **CLI arguments** - Direct command-line overrides
2. **Environment variables** - Runtime values
3. **config.yaml** - Project-specific settings
4. **Default values** - Framework defaults
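The layered lookup described above can be sketched as follows. This is a simplified illustration; the framework's `InputHandler` implementation and function names may differ.

```python
import os

def resolve_setting(name, cli_args=None, yaml_config=None, default=None, env_var=None):
    """Return the first defined value: CLI > environment > config.yaml > default.

    Hypothetical helper sketching the hierarchy above; not part of osdu_perf.
    """
    cli_args = cli_args or {}
    yaml_config = yaml_config or {}
    if cli_args.get(name) is not None:
        return cli_args[name]
    if env_var and os.getenv(env_var) is not None:
        return os.getenv(env_var)
    if yaml_config.get(name) is not None:
        return yaml_config[name]
    return default

# A CLI value wins over an environment variable and config.yaml
os.environ["OSDU_PARTITION"] = "env-partition"
value = resolve_setting("partition", cli_args={"partition": "cli-partition"},
                        yaml_config={"partition": "yaml-partition"},
                        env_var="OSDU_PARTITION")
print(value)  # cli-partition
```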
### Environment Variables
**Universal Variables:**
- `OSDU_HOST`: Base URL of OSDU instance
- `OSDU_PARTITION`: Data partition ID
- `OSDU_APP_ID`: Azure AD Application ID
- `ADME_BEARER_TOKEN`: Manual bearer token override
**Azure Load Testing Variables (auto-set):**
- `AZURE_LOAD_TEST=true`: Indicates Azure environment
- `PARTITION`: Data partition ID
- `LOCUST_HOST`: OSDU host URL
- `APPID`: Azure AD Application ID
**Metrics Collection:**
- `KUSTO_CLUSTER`: Azure Data Explorer cluster URL
- `KUSTO_DATABASE`: Database name for metrics
- `TEST_RUN_ID`: Unique identifier for test run
### Azure Authentication
The framework supports multiple Azure authentication methods with automatic detection:
**Local Development:**
- Azure CLI credentials (`az login`)
- Service Principal (via environment variables)
- DefaultAzureCredential chain
**Azure Environments:**
- Managed Identity (preferred for Azure-hosted resources)
- System-assigned or user-assigned identities
- Automatic credential detection and fallback
## 📊 Monitoring & Results
### Local Testing (Web UI)
- Open http://localhost:8089 after starting with `--web-ui`
- Real-time performance metrics
- Request statistics and response times
- Download results as CSV
### Azure Load Testing
- Monitor in Azure Portal under "Load Testing"
- Comprehensive dashboards and metrics
- Automated result retention
- Integration with Azure Monitor
### Key Metrics
- **Requests per second (RPS)**
- **Average response time**
- **95th percentile response time**
- **Error rate**
- **Failure count by endpoint**
## 🚀 Advanced Usage
### Multiple Services
Test multiple services by adding more `@task` methods in your `locustfile.py`:
```python
class OSDUUser(PerformanceUser):
    @task(3)  # Higher weight = more frequent execution
    def test_storage_apis(self):
        self.get("/api/storage/v2/info")
        self.post("/api/storage/v2/records", json={"data": "test"})

    @task(2)
    def test_search_apis(self):
        self.get("/api/search/v2/query")
        self.post("/api/search/v2/query", json={"query": "*"})

    @task(1)
    def test_schema_apis(self):
        self.get("/api/schema-service/v1/schema")
```
All tests run in the same `locustfile.py` with automatic load balancing based on task weights.
### CI/CD Integration
```yaml
# Example GitHub Actions workflow
- name: Run OSDU Performance Tests
run: |
osdu_perf run local \
--host ${{ secrets.OSDU_HOST }} \
--partition ${{ secrets.OSDU_PARTITION }} \
--token ${{ secrets.OSDU_TOKEN }} \
--headless \
--users 5 \
--run-time 2m
```
## 🐛 Troubleshooting
### Common Issues
**Authentication Errors**
```bash
# Ensure Azure CLI is logged in
az login
```
**Import Errors**
```bash
# Install dependencies
pip install -r requirements.txt
```
**Service Discovery Issues**
```bash
# Ensure locustfile.py exists and inherits from PerformanceUser
ls locustfile.py
# Check class inheritance
grep "PerformanceUser" locustfile.py
```
**Azure Load Testing Errors**
```bash
# Install Azure dependencies
pip install azure-cli azure-identity azure-mgmt-loadtesting azure-mgmt-resource requests
```
## 🧩 Project Structure (Generated)
```
perf_tests/
├── locustfile.py # Main test file with API calls and @task methods
├── config.yaml # Framework configuration (OSDU, metrics, test settings)
├── requirements.txt # Python dependencies (osdu_perf package)
└── README.md # Project documentation
```
## 🧪 Development
### Running Tests
```bash
pytest tests/
```
### Code Quality
```bash
# Formatting
black osdu_perf/
# Linting
flake8 osdu_perf/
```
### Building Package
```bash
# Build wheel and source distribution
python -m build
# Upload to TestPyPI
python -m twine upload --repository testpypi dist/*
```
## 📄 License
This project is licensed under the MIT License — see the `LICENSE` file for details.
## 🆘 Support
- **Issues**: [GitHub Issues](https://github.com/janraj/osdu_perf/issues)
- **Contact**: janrajcj@microsoft.com
- **Documentation**: This README and inline code documentation
## 🚀 What's New in v1.0.24
- ✅ **Three-Command Workflow**: `init`, `run local`, `run azure` - complete testing pipeline
- ✅ **Configuration-Driven**: YAML-based configuration with environment-aware settings
- ✅ **Service Orchestration**: Intelligent service discovery with lifecycle management
- ✅ **Enhanced Authentication**: Multi-credential Azure authentication with automatic detection
- ✅ **Metrics Integration**: Automated Kusto metrics collection with environment detection
- ✅ **Template System**: Updated project templates with modern framework patterns
- ✅ **Error Handling**: Improved error handling and defensive coding patterns
- ✅ **CLI Improvements**: Better argument parsing and validation
---
**Generated by OSDU Performance Testing Framework v1.0.24**
| text/markdown | Janraj CJ | Janraj CJ <janrajcj@microsoft.com> | null | Janraj CJ <janrajcj@microsoft.com> | null | performance, testing, locust, azure, osdu, load-testing, azure-sdk, performance-testing | [
"Development Status :: 5 - Production/Stable",
"Intended Audience :: Developers",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.8",
"Programming Language :: Python :: 3.9",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming La... | [] | https://github.com/janraj/osdu_perf | null | >=3.8 | [] | [] | [] | [
"locust>=2.0.0",
"azure-identity>=1.13.0",
"azure-core>=1.28.0",
"azure-mgmt-core>=1.4.0",
"azure-mgmt-resource>=23.0.0",
"azure-mgmt-loadtesting>=1.0.0",
"azure-developer-loadtesting>=1.0.0",
"requests>=2.28.0",
"pyyaml>=6.0",
"azure-kusto-data>=5.0.0",
"azure-kusto-ingest>=5.0.0",
"pytest>=7... | [] | [] | [] | [
"Homepage, https://github.com/janraj/osdu_perf",
"Documentation, https://github.com/janraj/osdu_perf#readme",
"Repository, https://github.com/janraj/osdu_perf",
"Issues, https://github.com/janraj/osdu_perf/issues"
] | twine/6.2.0 CPython/3.14.0 | 2026-02-18T09:26:20.875839 | osdu_perf-1.0.31.tar.gz | 62,903 | d4/02/29c0ef4768afd9271d342ef245621f03ae64775c58caa8289876a46a832c/osdu_perf-1.0.31.tar.gz | source | sdist | null | false | ca6bd4c64b9e66c5813eb802b05f4efa | 036509f1fa94dfdb54a1804f35695f5154693b1ff4d860d5598ee185fd554e4a | d40229c0ef4768afd9271d342ef245621f03ae64775c58caa8289876a46a832c | MIT | [
"LICENSE"
] | 318 |
2.4 | digitalkin | 0.3.2b3 | SDK to build kin used in DigitalKin | # DigitalKin Python SDK
[](https://github.com/DigitalKin-ai/digitalkin/actions/workflows/ci.yml)
[](https://pypi.org/project/digitalkin/)
[](https://pypi.org/project/digitalkin/)
[](https://github.com/DigitalKin-ai/digitalkin/blob/main/LICENSE)
Welcome to the DigitalKin Python SDK, a powerful tool designed for developers
who aim to build and manage agents within multi-agent systems according to the
innovative DigitalKin agentic mesh standards. This SDK streamlines the process
of creating and managing custom Tools, Triggers, and Kin Archetypes while
ensuring full compliance with the DigitalKin ecosystem's standards.
## 🚀 Features
- **Seamless Integration**: Easily integrate with DigitalKin's services using
our comprehensive gRPC support.
- **Customizable Agents**: Build custom agents and manage their lifecycle
efficiently.
- **Standards Compliance**: Adhere to the latest DigitalKin agentic mesh
standards.
- **Robust Development Tools**: Utilize advanced development tools for testing,
building, and deploying your projects.
## 📦 Installation
To install the DigitalKin SDK, simply run:
```bash
pip install digitalkin
```
**Optional Taskiq Integration**: Asynchronous task execution powered by Taskiq, backed by RabbitMQ and Redis
To enable the RabbitMQ stream plugin and install the Taskiq extra, run:
```sh
sudo rabbitmq-plugins enable rabbitmq_stream
# Core + Taskiq integration (RabbitMQ broker)
pip install digitalkin[taskiq]
```
## 🛠️ Usage
### Basic Import
Start by importing the necessary modules:
```python
import digitalkin
```
## Features
### Taskiq with RabbitMQ
Taskiq integration allows the module to scale for heavy CPU tasks by running each request's stateless module in a separate worker instance.
- **Decoupled Scalability**: RabbitMQ brokers messages, letting producers and consumers scale independently.
- **Reliability**: Durable queues, acknowledgements, and dead-lettering ensure tasks aren’t lost.
- **Concurrency Control**: Taskiq’s worker pool manages parallel execution without custom schedulers.
- **Flexibility**: Built-in retries, exponential backoff, and Redis result-backend for resilient workflows.
- **Ecosystem**: Battle-tested `aio-pika` AMQP client plus Taskiq’s decorator-based API.
By combining Taskiq’s async API with RabbitMQ’s guarantees, you get a robust, production-ready queue with minimal boilerplate.
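As an illustration of the decorator-based API mentioned above, a minimal Taskiq worker wired to RabbitMQ with a Redis result backend might look like this. It is a sketch only: it assumes `taskiq`, `taskiq-aio-pika`, and `taskiq-redis` are installed and that RabbitMQ and Redis run locally; broker URLs and the task name are examples, not part of the digitalkin API.

```python
# Sketch only: assumes local RabbitMQ and Redis; URLs and names are examples.
from taskiq_aio_pika import AioPikaBroker
from taskiq_redis import RedisAsyncResultBackend

broker = AioPikaBroker("amqp://guest:guest@localhost:5672/").with_result_backend(
    RedisAsyncResultBackend("redis://localhost:6379"),
)

@broker.task
async def heavy_cpu_task(payload: dict) -> dict:
    # Stateless work executed in a worker process, not in the producer
    return {"processed": payload}
```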
## 👷♂️ Development
### Prerequisites
Ensure you have the following installed:
- Python 3.10+
- [uv](https://astral.sh/uv) - Modern Python package management
- [buf](https://buf.build/docs/installation) - Protocol buffer toolkit
- [protoc](https://grpc.io/docs/protoc-installation/) - Protocol Buffers
compiler
- [Task](https://taskfile.dev/) - Task runner
### Setting Up Your Development Environment
Clone the repository and set up your environment with these commands:
```bash
# Clone the repository with submodules
git clone --recurse-submodules https://github.com/DigitalKin-ai/digitalkin.git
cd digitalkin
# Setup development environment
task setup-dev
source .venv/bin/activate
```
### Common Development Tasks
Utilize the following commands for common tasks:
```bash
# Build the package
task build-package
# Run tests
task run-tests
# Format code using Ruff linter and formatter
task linter
# Clean build artifacts
task clean
# Bump version (major, minor, patch)
task bump-version -- major|minor|patch
```
### Publishing Process
1. Update code and commit changes. (following conventional branch/commit
standard)
2. Use `task bump-version -- major|minor|patch` command to commit new version.
3. Use GitHub "Create Release" workflow to publish the new version.
4. Workflow automatically publishes to Test PyPI and PyPI.
## 📄 License
This project is licensed under the terms specified in the LICENSE file.
---
For more information, please visit our
[Homepage](https://github.com/DigitalKin-ai/digitalkin), check our
[Documentation](https://github.com/DigitalKin-ai/digitalkin), or report issues
on our [Issues page](https://github.com/DigitalKin-ai/digitalkin/issues).
Happy coding! 🎉🚀
| text/markdown | null | "DigitalKin.ai" <contact@digitalkin.ai> | null | null | Attribution-NonCommercial-ShareAlike 4.0 International
=======================================================================
Creative Commons Corporation ("Creative Commons") is not a law firm and
does not provide legal services or legal advice. Distribution of
Creative Commons public licenses does not create a lawyer-client or
other relationship. Creative Commons makes its licenses and related
information available on an "as-is" basis. Creative Commons gives no
warranties regarding its licenses, any material licensed under their
terms and conditions, or any related information. Creative Commons
disclaims all liability for damages resulting from their use to the
fullest extent possible.
Using Creative Commons Public Licenses
Creative Commons public licenses provide a standard set of terms and
conditions that creators and other rights holders may use to share
original works of authorship and other material subject to copyright
and certain other rights specified in the public license below. The
following considerations are for informational purposes only, are not
exhaustive, and do not form part of our licenses.
Considerations for licensors: Our public licenses are
intended for use by those authorized to give the public
permission to use material in ways otherwise restricted by
copyright and certain other rights. Our licenses are
irrevocable. Licensors should read and understand the terms
and conditions of the license they choose before applying it.
Licensors should also secure all rights necessary before
applying our licenses so that the public can reuse the
material as expected. Licensors should clearly mark any
material not subject to the license. This includes other CC-
licensed material, or material used under an exception or
limitation to copyright. More considerations for licensors:
wiki.creativecommons.org/Considerations_for_licensors
Considerations for the public: By using one of our public
licenses, a licensor grants the public permission to use the
licensed material under specified terms and conditions. If
the licensor's permission is not necessary for any reason--for
example, because of any applicable exception or limitation to
copyright--then that use is not regulated by the license. Our
licenses grant only permissions under copyright and certain
other rights that a licensor has authority to grant. Use of
the licensed material may still be restricted for other
reasons, including because others have copyright or other
rights in the material. A licensor may make special requests,
such as asking that all changes be marked or described.
Although not required by our licenses, you are encouraged to
respect those requests where reasonable. More considerations
for the public:
wiki.creativecommons.org/Considerations_for_licensees
=======================================================================
Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International
Public License
By exercising the Licensed Rights (defined below), You accept and agree
to be bound by the terms and conditions of this Creative Commons
Attribution-NonCommercial-ShareAlike 4.0 International Public License
("Public License"). To the extent this Public License may be
interpreted as a contract, You are granted the Licensed Rights in
consideration of Your acceptance of these terms and conditions, and the
Licensor grants You such rights in consideration of benefits the
Licensor receives from making the Licensed Material available under
these terms and conditions.
Section 1 -- Definitions.
a. Adapted Material means material subject to Copyright and Similar
Rights that is derived from or based upon the Licensed Material
and in which the Licensed Material is translated, altered,
arranged, transformed, or otherwise modified in a manner requiring
permission under the Copyright and Similar Rights held by the
Licensor. For purposes of this Public License, where the Licensed
Material is a musical work, performance, or sound recording,
Adapted Material is always produced where the Licensed Material is
synched in timed relation with a moving image.
b. Adapter's License means the license You apply to Your Copyright
and Similar Rights in Your contributions to Adapted Material in
accordance with the terms and conditions of this Public License.
c. BY-NC-SA Compatible License means a license listed at
creativecommons.org/compatiblelicenses, approved by Creative
Commons as essentially the equivalent of this Public License.
d. Copyright and Similar Rights means copyright and/or similar rights
closely related to copyright including, without limitation,
performance, broadcast, sound recording, and Sui Generis Database
Rights, without regard to how the rights are labeled or
categorized. For purposes of this Public License, the rights
specified in Section 2(b)(1)-(2) are not Copyright and Similar
Rights.
e. Effective Technological Measures means those measures that, in the
absence of proper authority, may not be circumvented under laws
fulfilling obligations under Article 11 of the WIPO Copyright
Treaty adopted on December 20, 1996, and/or similar international
agreements.
f. Exceptions and Limitations means fair use, fair dealing, and/or
any other exception or limitation to Copyright and Similar Rights
that applies to Your use of the Licensed Material.
g. License Elements means the license attributes listed in the name
of a Creative Commons Public License. The License Elements of this
Public License are Attribution, NonCommercial, and ShareAlike.
h. Licensed Material means the artistic or literary work, database,
or other material to which the Licensor applied this Public
License.
i. Licensed Rights means the rights granted to You subject to the
terms and conditions of this Public License, which are limited to
all Copyright and Similar Rights that apply to Your use of the
Licensed Material and that the Licensor has authority to license.
j. Licensor means the individual(s) or entity(ies) granting rights
under this Public License.
k. NonCommercial means not primarily intended for or directed towards
commercial advantage or monetary compensation. For purposes of
this Public License, the exchange of the Licensed Material for
other material subject to Copyright and Similar Rights by digital
file-sharing or similar means is NonCommercial provided there is
no payment of monetary compensation in connection with the
exchange.
l. Share means to provide material to the public by any means or
process that requires permission under the Licensed Rights, such
as reproduction, public display, public performance, distribution,
dissemination, communication, or importation, and to make material
available to the public including in ways that members of the
public may access the material from a place and at a time
individually chosen by them.
m. Sui Generis Database Rights means rights other than copyright
resulting from Directive 96/9/EC of the European Parliament and of
the Council of 11 March 1996 on the legal protection of databases,
as amended and/or succeeded, as well as other essentially
equivalent rights anywhere in the world.
n. You means the individual or entity exercising the Licensed Rights
under this Public License. Your has a corresponding meaning.
Section 2 -- Scope.
a. License grant.
1. Subject to the terms and conditions of this Public License,
the Licensor hereby grants You a worldwide, royalty-free,
non-sublicensable, non-exclusive, irrevocable license to
exercise the Licensed Rights in the Licensed Material to:
a. reproduce and Share the Licensed Material, in whole or
in part, for NonCommercial purposes only; and
b. produce, reproduce, and Share Adapted Material for
NonCommercial purposes only.
2. Exceptions and Limitations. For the avoidance of doubt, where
Exceptions and Limitations apply to Your use, this Public
License does not apply, and You do not need to comply with
its terms and conditions.
3. Term. The term of this Public License is specified in Section
6(a).
4. Media and formats; technical modifications allowed. The
Licensor authorizes You to exercise the Licensed Rights in
all media and formats whether now known or hereafter created,
and to make technical modifications necessary to do so. The
Licensor waives and/or agrees not to assert any right or
authority to forbid You from making technical modifications
necessary to exercise the Licensed Rights, including
technical modifications necessary to circumvent Effective
Technological Measures. For purposes of this Public License,
simply making modifications authorized by this Section 2(a)
(4) never produces Adapted Material.
5. Downstream recipients.
a. Offer from the Licensor -- Licensed Material. Every
recipient of the Licensed Material automatically
receives an offer from the Licensor to exercise the
Licensed Rights under the terms and conditions of this
Public License.
b. Additional offer from the Licensor -- Adapted Material.
Every recipient of Adapted Material from You
automatically receives an offer from the Licensor to
exercise the Licensed Rights in the Adapted Material
under the conditions of the Adapter's License You apply.
c. No downstream restrictions. You may not offer or impose
any additional or different terms or conditions on, or
apply any Effective Technological Measures to, the
Licensed Material if doing so restricts exercise of the
Licensed Rights by any recipient of the Licensed
Material.
6. No endorsement. Nothing in this Public License constitutes or
may be construed as permission to assert or imply that You
are, or that Your use of the Licensed Material is, connected
with, or sponsored, endorsed, or granted official status by,
the Licensor or others designated to receive attribution as
provided in Section 3(a)(1)(A)(i).
b. Other rights.
1. Moral rights, such as the right of integrity, are not
licensed under this Public License, nor are publicity,
privacy, and/or other similar personality rights; however, to
the extent possible, the Licensor waives and/or agrees not to
assert any such rights held by the Licensor to the limited
extent necessary to allow You to exercise the Licensed
Rights, but not otherwise.
2. Patent and trademark rights are not licensed under this
Public License.
3. To the extent possible, the Licensor waives any right to
collect royalties from You for the exercise of the Licensed
Rights, whether directly or through a collecting society
under any voluntary or waivable statutory or compulsory
licensing scheme. In all other cases the Licensor expressly
reserves any right to collect such royalties, including when
the Licensed Material is used other than for NonCommercial
purposes.
Section 3 -- License Conditions.
Your exercise of the Licensed Rights is expressly made subject to the
following conditions.
a. Attribution.
1. If You Share the Licensed Material (including in modified
form), You must:
a. retain the following if it is supplied by the Licensor
with the Licensed Material:
i. identification of the creator(s) of the Licensed
Material and any others designated to receive
attribution, in any reasonable manner requested by
the Licensor (including by pseudonym if
designated);
ii. a copyright notice;
iii. a notice that refers to this Public License;
iv. a notice that refers to the disclaimer of
warranties;
v. a URI or hyperlink to the Licensed Material to the
extent reasonably practicable;
b. indicate if You modified the Licensed Material and
retain an indication of any previous modifications; and
c. indicate the Licensed Material is licensed under this
Public License, and include the text of, or the URI or
hyperlink to, this Public License.
2. You may satisfy the conditions in Section 3(a)(1) in any
reasonable manner based on the medium, means, and context in
which You Share the Licensed Material. For example, it may be
reasonable to satisfy the conditions by providing a URI or
hyperlink to a resource that includes the required
information.
3. If requested by the Licensor, You must remove any of the
information required by Section 3(a)(1)(A) to the extent
reasonably practicable.
b. ShareAlike.
In addition to the conditions in Section 3(a), if You Share
Adapted Material You produce, the following conditions also apply.
1. The Adapter's License You apply must be a Creative Commons
license with the same License Elements, this version or
later, or a BY-NC-SA Compatible License.
2. You must include the text of, or the URI or hyperlink to, the
Adapter's License You apply. You may satisfy this condition
in any reasonable manner based on the medium, means, and
context in which You Share Adapted Material.
3. You may not offer or impose any additional or different terms
or conditions on, or apply any Effective Technological
Measures to, Adapted Material that restrict exercise of the
rights granted under the Adapter's License You apply.
Section 4 -- Sui Generis Database Rights.
Where the Licensed Rights include Sui Generis Database Rights that
apply to Your use of the Licensed Material:
a. for the avoidance of doubt, Section 2(a)(1) grants You the right
to extract, reuse, reproduce, and Share all or a substantial
portion of the contents of the database for NonCommercial purposes
only;
b. if You include all or a substantial portion of the database
contents in a database in which You have Sui Generis Database
Rights, then the database in which You have Sui Generis Database
Rights (but not its individual contents) is Adapted Material,
including for purposes of Section 3(b); and
c. You must comply with the conditions in Section 3(a) if You Share
all or a substantial portion of the contents of the database.
For the avoidance of doubt, this Section 4 supplements and does not
replace Your obligations under this Public License where the Licensed
Rights include other Copyright and Similar Rights.
Section 5 -- Disclaimer of Warranties and Limitation of Liability.
a. UNLESS OTHERWISE SEPARATELY UNDERTAKEN BY THE LICENSOR, TO THE
EXTENT POSSIBLE, THE LICENSOR OFFERS THE LICENSED MATERIAL AS-IS
AND AS-AVAILABLE, AND MAKES NO REPRESENTATIONS OR WARRANTIES OF
ANY KIND CONCERNING THE LICENSED MATERIAL, WHETHER EXPRESS,
IMPLIED, STATUTORY, OR OTHER. THIS INCLUDES, WITHOUT LIMITATION,
WARRANTIES OF TITLE, MERCHANTABILITY, FITNESS FOR A PARTICULAR
PURPOSE, NON-INFRINGEMENT, ABSENCE OF LATENT OR OTHER DEFECTS,
ACCURACY, OR THE PRESENCE OR ABSENCE OF ERRORS, WHETHER OR NOT
KNOWN OR DISCOVERABLE. WHERE DISCLAIMERS OF WARRANTIES ARE NOT
ALLOWED IN FULL OR IN PART, THIS DISCLAIMER MAY NOT APPLY TO YOU.
b. TO THE EXTENT POSSIBLE, IN NO EVENT WILL THE LICENSOR BE LIABLE
TO YOU ON ANY LEGAL THEORY (INCLUDING, WITHOUT LIMITATION,
NEGLIGENCE) OR OTHERWISE FOR ANY DIRECT, SPECIAL, INDIRECT,
INCIDENTAL, CONSEQUENTIAL, PUNITIVE, EXEMPLARY, OR OTHER LOSSES,
COSTS, EXPENSES, OR DAMAGES ARISING OUT OF THIS PUBLIC LICENSE OR
USE OF THE LICENSED MATERIAL, EVEN IF THE LICENSOR HAS BEEN
ADVISED OF THE POSSIBILITY OF SUCH LOSSES, COSTS, EXPENSES, OR
DAMAGES. WHERE A LIMITATION OF LIABILITY IS NOT ALLOWED IN FULL OR
IN PART, THIS LIMITATION MAY NOT APPLY TO YOU.
c. The disclaimer of warranties and limitation of liability provided
above shall be interpreted in a manner that, to the extent
possible, most closely approximates an absolute disclaimer and
waiver of all liability.
Section 6 -- Term and Termination.
a. This Public License applies for the term of the Copyright and
Similar Rights licensed here. However, if You fail to comply with
this Public License, then Your rights under this Public License
terminate automatically.
b. Where Your right to use the Licensed Material has terminated under
Section 6(a), it reinstates:
1. automatically as of the date the violation is cured, provided
it is cured within 30 days of Your discovery of the
violation; or
2. upon express reinstatement by the Licensor.
For the avoidance of doubt, this Section 6(b) does not affect any
right the Licensor may have to seek remedies for Your violations
of this Public License.
c. For the avoidance of doubt, the Licensor may also offer the
Licensed Material under separate terms or conditions or stop
distributing the Licensed Material at any time; however, doing so
will not terminate this Public License.
d. Sections 1, 5, 6, 7, and 8 survive termination of this Public
License.
Section 7 -- Other Terms and Conditions.
a. The Licensor shall not be bound by any additional or different
terms or conditions communicated by You unless expressly agreed.
b. Any arrangements, understandings, or agreements regarding the
Licensed Material not stated herein are separate from and
independent of the terms and conditions of this Public License.
Section 8 -- Interpretation.
a. For the avoidance of doubt, this Public License does not, and
shall not be interpreted to, reduce, limit, restrict, or impose
conditions on any use of the Licensed Material that could lawfully
be made without permission under this Public License.
b. To the extent possible, if any provision of this Public License is
deemed unenforceable, it shall be automatically reformed to the
minimum extent necessary to make it enforceable. If the provision
cannot be reformed, it shall be severed from this Public License
without affecting the enforceability of the remaining terms and
conditions.
c. No term or condition of this Public License will be waived and no
failure to comply consented to unless expressly agreed to by the
Licensor.
d. Nothing in this Public License constitutes or may be interpreted
as a limitation upon, or waiver of, any privileges and immunities
that apply to the Licensor or You, including from the legal
processes of any jurisdiction or authority.
=======================================================================
Creative Commons is not a party to its public
licenses. Notwithstanding, Creative Commons may elect to apply one of
its public licenses to material it publishes and in those instances
will be considered the "Licensor." The text of the Creative Commons
public licenses is dedicated to the public domain under the CC0 Public
Domain Dedication. Except for the limited purpose of indicating that
material is shared under a Creative Commons public license or as
otherwise permitted by the Creative Commons policies published at
creativecommons.org/policies, Creative Commons does not authorize the
use of the trademark "Creative Commons" or any other trademark or logo
of Creative Commons without its prior written consent including,
without limitation, in connection with any unauthorized modifications
to any of its public licenses or any other arrangements,
understandings, or agreements concerning use of licensed material. For
the avoidance of doubt, this paragraph does not form part of the
public licenses.
Creative Commons may be contacted at creativecommons.org.
https://creativecommons.org/licenses/by-nc-sa/4.0/
| agent, digitalkin, gprc, kin, sdk | [
"Development Status :: 3 - Alpha",
"Intended Audience :: Developers",
"License :: Other/Proprietary License",
"Operating System :: OS Independent",
"Programming Language :: Python :: 3 :: Only",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :... | [] | null | null | >=3.10 | [] | [] | [] | [
"agentic-mesh-protocol==0.2.2",
"grpcio-health-checking>=1.78.0",
"grpcio-reflection>=1.78.0",
"grpcio-status>=1.78.0",
"pydantic>=2.12.5",
"surrealdb>=1.0.7",
"rstream>=0.40.1; extra == \"taskiq\"",
"taskiq-aio-pika>=0.5.0; extra == \"taskiq\"",
"taskiq-redis>=1.2.2; extra == \"taskiq\"",
"taskiq... | [] | [] | [] | [
"Documentation, https://github.com/DigitalKin-ai/digitalkin",
"Homepage, https://github.com/DigitalKin-ai/digitalkin",
"Issues, https://github.com/DigitalKin-ai/digitalkin/issues"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-18T09:25:31.850395 | digitalkin-0.3.2b3.tar.gz | 157,921 | c5/88/6a1a4b41130561cda8b1890c007ac5a7916f99ef6a0dfcda47238f8d986a/digitalkin-0.3.2b3.tar.gz | source | sdist | null | false | 4ec05531bc70cf3ab20dc28e2bf54867 | 05b60a4ddcb6969d7dfee5443d6a0a51b756e39579a0e5de4405ae1d3a15e050 | c5886a1a4b41130561cda8b1890c007ac5a7916f99ef6a0dfcda47238f8d986a | null | [
"LICENSE"
] | 343 |
2.4 | oslo.metrics | 0.15.0 | Oslo Metrics library | ====================
Oslo Metrics Library
====================
This Oslo metrics API supports collecting metrics data from other Oslo
libraries and exposing the metrics data to monitoring system.
| text/x-rst | null | OpenStack <openstack-discuss@lists.openstack.org> | null | null | null | null | [
"Environment :: OpenStack",
"Intended Audience :: Information Technology",
"Intended Audience :: System Administrators",
"License :: OSI Approved :: Apache Software License",
"Operating System :: POSIX :: Linux",
"Programming Language :: Python",
"Programming Language :: Python :: 3",
"Programming Lan... | [] | null | null | >=3.10 | [] | [] | [] | [
"pbr>=3.1.1",
"oslo.utils>=3.41.0",
"oslo.log>=3.44.0",
"oslo.config>=6.9.0",
"prometheus-client>=0.6.0"
] | [] | [] | [] | [
"Homepage, https://docs.openstack.org/oslo.metrics",
"Repository, https://opendev.org/openstack/oslo.metrics"
] | twine/6.2.0 CPython/3.11.14 | 2026-02-18T09:25:30.456912 | oslo_metrics-0.15.0.tar.gz | 22,822 | a6/67/c01de759855936af87a8fab181937d5b295cce437438161123081ad55857/oslo_metrics-0.15.0.tar.gz | source | sdist | null | false | 5286f6a3158ad230b25520fe64e7e28b | cd1064d654f763d0384aff8b3b08da78cd6959b0bb35dc13d55436617467bb8e | a667c01de759855936af87a8fab181937d5b295cce437438161123081ad55857 | null | [
"LICENSE"
] | 0 |
2.4 | django-unfold | 0.80.2 | Modern Django admin theme | [](https://unfoldadmin.com)
## Unfold - Modern Django Admin
[](https://pypi.org/project/django-unfold/)
[](https://discord.gg/9sQj9MEbNz)
[](https://github.com/unfoldadmin/django-unfold/actions?query=workflow%3Arelease)

Enhance Django Admin with a modern interface and powerful tools to build internal applications.
- **Documentation:** The full documentation is available at [unfoldadmin.com](https://unfoldadmin.com?utm_medium=github&utm_source=unfold).
- **Live demo:** The demo site is available at [unfoldadmin.com](https://unfoldadmin.com?utm_medium=github&utm_source=unfold).
- **Formula:** A repository with a demo implementation is available at [github.com/unfoldadmin/formula](https://github.com/unfoldadmin/formula?utm_medium=github&utm_source=unfold).
- **Turbo:** A Django & Next.js boilerplate implementing Unfold is available at [github.com/unfoldadmin/turbo](https://github.com/unfoldadmin/turbo?utm_medium=github&utm_source=unfold).
- **Discord:** Join our Unfold community on [Discord](https://discord.gg/9sQj9MEbNz).
## Quickstart
**Install the package**
```sh
pip install django-unfold
```
**Change INSTALLED_APPS in settings.py**
```python
INSTALLED_APPS = [
"unfold",
# Rest of the apps
]
```
**Use Unfold ModelAdmin**
```python
from django.contrib import admin
from unfold.admin import ModelAdmin

@admin.register(MyModel)
class MyModelAdmin(ModelAdmin):
    pass
```
*Unfold works alongside the default Django admin and requires no migration of existing models or workflows. Unfold is actively developed and continuously evolving as new use cases and edge cases are discovered.*
## Why Unfold?
- Built on `django.contrib.admin`: Enhances the existing admin without replacing it.
- Provides a modern interface and improved workflows.
- Designed for real internal tools and backoffice apps.
- Incremental adoption for existing projects.
## Features
- **Visual interface**: Provides a modern user interface based on the Tailwind CSS framework.
- **Sidebar navigation**: Simplifies the creation of sidebar menus with icons, collapsible sections, and more.
- **Dark mode support**: Includes both light and dark mode themes.
- **Flexible actions**: Provides multiple ways to define actions throughout the admin interface.
- **Advanced filters**: Features custom dropdowns, autocomplete, numeric, datetime, and text field filters.
- **Dashboard tools**: Includes helpers for building custom dashboard pages.
- **UI components**: Offers reusable interface components such as cards, buttons, and charts.
- **Crispy forms**: Custom template pack for django-crispy-forms to style forms with Unfold's design system.
- **WYSIWYG editor**: Built-in support for WYSIWYG editing through Trix.
- **Array widget:** Support for `django.contrib.postgres.fields.ArrayField`.
- **Inline tabs:** Group inlines into tab navigation in the change form.
- **Conditional fields:** Show or hide fields dynamically based on the values of other fields in the form.
- **Model tabs:** Allow defining custom tab navigation for models.
- **Fieldset tabs:** Merge multiple fieldsets into tabs in the change form.
- **Sortable inlines:** Allow sorting inlines by dragging and dropping.
- **Command palette**: Quickly search across models and custom data.
- **Datasets**: Custom changelists `ModelAdmin` displayed on change form detail pages.
- **Environment label:** Distinguish between environments by displaying a label.
- **Nonrelated inlines:** Display nonrelated models as inlines in the change form.
- **Paginated inlines:** Break down large record sets into pages within inlines for better admin performance.
- **Favicons:** Built-in support for configuring various site favicons.
- **Theming:** Customize color schemes, backgrounds, border radius, and more.
- **Font colors:** Adjust font colors for better readability.
- **Changeform modes:** Display fields in compressed mode in the change form.
- **Language switcher:** Allow changing language directly from the admin area.
- **Infinite paginator:** Efficiently handle large datasets with seamless pagination that reduces server load.
- **Parallel admin:** Supports [running the default admin](https://unfoldadmin.com/blog/migrating-django-admin-unfold/?utm_medium=github&utm_source=unfold) alongside Unfold.
- **Third-party packages:** Provides default support for multiple popular applications.
- **Configuration:** Allows basic options to be changed in `settings.py`.
- **Dependencies:** Built entirely on `django.contrib.admin`.
## Third-party package support
- [django-guardian](https://github.com/django-guardian/django-guardian) - [Integration guide](https://unfoldadmin.com/docs/integrations/django-guardian/)
- [django-import-export](https://github.com/django-import-export/django-import-export) - [Integration guide](https://unfoldadmin.com/docs/integrations/django-import-export/)
- [django-simple-history](https://github.com/jazzband/django-simple-history) - [Integration guide](https://unfoldadmin.com/docs/integrations/django-simple-history/)
- [django-constance](https://github.com/jazzband/django-constance) - [Integration guide](https://unfoldadmin.com/docs/integrations/django-constance/)
- [django-celery-beat](https://github.com/celery/django-celery-beat) - [Integration guide](https://unfoldadmin.com/docs/integrations/django-celery-beat/)
- [django-modeltranslation](https://github.com/deschler/django-modeltranslation) - [Integration guide](https://unfoldadmin.com/docs/integrations/django-modeltranslation/)
- [django-money](https://github.com/django-money/django-money) - [Integration guide](https://unfoldadmin.com/docs/integrations/django-money/)
- [django-location-field](https://github.com/caioariede/django-location-field) - [Integration guide](https://unfoldadmin.com/docs/integrations/django-location-field/)
- [djangoql](https://github.com/ivelum/djangoql) - [Integration guide](https://unfoldadmin.com/docs/integrations/djangoql/)
- [django-json-widget](https://github.com/jmrivas86/django-json-widget) - [Integration guide](https://unfoldadmin.com/docs/integrations/django-json-widget/)
## Professional services
Need help integrating, customizing, or scaling Django Admin with Unfold?
- **Consulting**: Expert guidance on Django architecture, performance, feature development, and Unfold integration. [Learn more](https://unfoldadmin.com/consulting/?utm_medium=github&utm_source=unfold)
- **Support**: Assistance with integrating or customizing Unfold, including live 1:1 calls and implementation review. Fixed price, no ongoing commitment. [Learn more](https://unfoldadmin.com/support/?utm_medium=github&utm_source=unfold)
- **Studio**: Extend Unfold with advanced dashboards, visual customization, and additional admin tooling. [Learn more](https://unfoldadmin.com/studio?utm_medium=github&utm_source=unfold)
[](https://unfoldadmin.com/studio?utm_medium=github&utm_source=unfold)
## Credits
- **Tailwind**: [Tailwind CSS](https://github.com/tailwindlabs/tailwindcss) - Licensed under the [MIT License](https://opensource.org/licenses/MIT).
- **Icons**: [Material Symbols](https://github.com/google/material-design-icons) - Licensed under the [Apache License 2.0](https://www.apache.org/licenses/LICENSE-2.0).
- **Font**: [Inter](https://github.com/rsms/inter) - Licensed under the [SIL Open Font License 1.1](https://scripts.sil.org/OFL).
- **Charts**: [Chart.js](https://github.com/chartjs/Chart.js) - Licensed under the [MIT License](https://opensource.org/licenses/MIT).
- **JavaScript Framework**: [Alpine.js](https://github.com/alpinejs/alpine) - Licensed under the [MIT License](https://opensource.org/licenses/MIT).
- **AJAX calls**: [HTMX](https://htmx.org/) - Licensed under the [BSD 2-Clause License](https://opensource.org/licenses/BSD-2-Clause).
- **Custom Scrollbars**: [SimpleBar](https://github.com/Grsmto/simplebar) - Licensed under the [MIT License](https://opensource.org/licenses/MIT).
- **Range Slider**: [noUiSlider](https://github.com/leongersen/noUiSlider) - Licensed under the [MIT License](https://opensource.org/licenses/MIT).
- **Number Formatting**: [wNumb](https://github.com/leongersen/wnumb) - Licensed under the [MIT License](https://opensource.org/licenses/MIT).
| text/markdown | null | null | null | null | null | admin, django, tailwind, theme | [
"Environment :: Web Environment",
"Framework :: Django :: 4.2",
"Framework :: Django :: 5.0",
"Framework :: Django :: 5.1",
"Framework :: Django :: 5.2",
"Framework :: Django :: 6.0",
"Intended Audience :: Developers",
"Operating System :: OS Independent",
"Programming Language :: Python",
"Progra... | [] | null | null | <4.0,>=3.10 | [] | [] | [] | [
"django>=4.2"
] | [] | [] | [] | [
"Homepage, https://unfoldadmin.com",
"Documentation, https://unfoldadmin.com/docs/",
"Repository, https://github.com/unfoldadmin/django-unfold",
"Issues, https://github.com/unfoldadmin/django-unfold/issues",
"Changelog, https://github.com/unfoldadmin/django-unfold/blob/main/CHANGELOG.md"
] | uv/0.10.4 {"installer":{"name":"uv","version":"0.10.4","subcommand":["publish"]},"python":null,"implementation":{"name":null,"version":null},"distro":{"name":"Ubuntu","version":"24.04","id":"noble","libc":null},"system":{"name":null,"release":null},"cpu":null,"openssl_version":null,"setuptools_version":null,"rustc_version":null,"ci":true} | 2026-02-18T09:25:28.853295 | django_unfold-0.80.2-py3-none-any.whl | 1,226,068 | 37/11/792187e14290dc7737a78905f6d7ab664da11bb2f29873b5152bdc14114a/django_unfold-0.80.2-py3-none-any.whl | py3 | bdist_wheel | null | false | f3b2804deb088ef0e8ccdeea015049ff | 9e9d98eb6bcbc58769a7e17b104fa17be88672fb0379e8ca26a4f978564b1b0b | 3711792187e14290dc7737a78905f6d7ab664da11bb2f29873b5152bdc14114a | MIT | [
"LICENSE.md"
] | 18,370 |
2.4 | drawcv | 0.1.5 | Stylized OpenCV rectangle and frame drawing helpers for detection overlays. | # 🖼️ drawcv — Stylish Bounding Boxes for OpenCV
<p align="center">
<img src="resource/visionframe_styles_gallery.png" alt="drawcv style gallery" width="100%"/>
</p>
<p align="center">
<a href="https://pypi.org/project/drawcv/"><img src="https://img.shields.io/pypi/v/drawcv?color=blue&label=PyPI" alt="PyPI version"/></a>
<a href="LICENSE"><img src="https://img.shields.io/badge/license-MIT-green" alt="MIT License"/></a>
<img src="https://img.shields.io/badge/python-3.8%2B-blue" alt="Python 3.8+"/>
<img src="https://img.shields.io/badge/OpenCV-compatible-brightgreen" alt="OpenCV compatible"/>
</p>
> Drop-in upgrade for `cv2.rectangle()` — turn plain bounding boxes into polished, production-ready detection overlays with a single function call.
---
## ✨ What is drawcv?
`drawcv` is a lightweight Python library that replaces OpenCV's bare-bones bounding rectangles with a collection of **modern, styled frames** ready for demos, dashboards, and detection pipelines. It works directly on `numpy` arrays using OpenCV under the hood, so it fits seamlessly into any existing workflow.
**One line change. Dramatically better visuals.**
---
## 📦 Installation
```bash
pip install drawcv
```
---
## 🚀 Quick Start
```python
import cv2
from drawcv import drawcv
image = cv2.imread("resource/test.png")
drawcv(
image=image,
style_id="pro-clean-blue", # style name or index (0, 1, 2…)
coords=(80, 60, 280, 220), # (x1, y1, x2, y2)
)
cv2.imwrite("output.jpg", image)
```
---
## 🔁 Migration from Plain OpenCV
If you're already using `cv2.rectangle`, switching takes seconds:
### Before
```python
cv2.rectangle(image, (x1, y1), (x2, y2), (0, 255, 0), 2)
```
### After
```python
from drawcv import drawcv
drawcv(
image=image,
style_id="pro-clean-blue",
coords=(x1, y1, x2, y2),
)
```
### With Optional Color & Line Width Override
```python
drawcv(
    image=image,
    style_id="futuristic-hud",
    coords=(x1, y1, x2, y2),
    color=(255, 255, 255),  # BGR color override
    line_width=1,
)
```
---
## 🎨 Available Styles
List all available styles programmatically:
```python
from drawcv import list_visionframe_styles
styles = list_visionframe_styles()
print(styles)
```
Styles can be referenced by **name** (e.g. `"futuristic-hud"`) or by **index** (e.g. `0`, `1`, `2`).
### Generate the Style Gallery
```python
import cv2
from drawcv import create_visionframe_gallery
gallery = create_visionframe_gallery()
cv2.imwrite("resource/visionframe_styles_gallery.png", gallery)
```
---
## 🔧 API Reference
### `drawcv(image, style_id, coords, color=None, line_width=None)`
| Parameter | Type | Description |
|-----------|------|-------------|
| `image` | `np.ndarray` | OpenCV image (BGR, modified in-place) |
| `style_id` | `str` or `int` | Style name or index from `list_visionframe_styles()` |
| `coords` | `tuple` | Bounding box as `(x1, y1, x2, y2)` |
| `color` | `tuple` (optional) | BGR color override `(B, G, R)` |
| `line_width` | `int` (optional) | Line thickness override |
### `list_visionframe_styles() → list[str]`
Returns a list of all available style names.
### `create_visionframe_gallery() → np.ndarray`
Generates a preview gallery image of all available styles.
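Many detection pipelines return boxes as `(x, y, width, height)` rather than the corner format `drawcv` expects for `coords`. A minimal pure-Python helper (`xywh_to_xyxy` is our own name, not part of the drawcv API) for the conversion:

```python
def xywh_to_xyxy(box):
    """Convert an (x, y, width, height) box to the (x1, y1, x2, y2)
    corner format expected by drawcv's `coords` parameter."""
    x, y, w, h = box
    return (x, y, x + w, y + h)

print(xywh_to_xyxy((80, 60, 200, 160)))  # → (80, 60, 280, 220)
```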
---
## 🏗️ Project Structure
```
visionframe/
├── src/ # Library source code
├── tests/ # Pytest test suite
├── resource/ # Sample images and gallery output
├── setup.py
├── pyproject.toml
└── requirements-dev.txt
```
---
## 🛠️ Local Development
```bash
# Create and activate virtual environment
python -m venv .venv
.venv\Scripts\activate # Windows
# source .venv/bin/activate # Linux/macOS
# Install in editable mode with dev dependencies
pip install -e .[dev]
```
### Run Tests
```bash
pytest
```
### Build Package
```bash
python -m build
```
---
## 📋 Requirements
- Python 3.8+
- `opencv-python`
- `numpy`
---
## 📄 License
This project is licensed under the [MIT License](LICENSE).
---
## 🤝 Contributing
Contributions are welcome! Feel free to open an issue or submit a pull request for new styles, bug fixes, or improvements.
1. Fork the repository
2. Create a feature branch (`git checkout -b feature/new-style`)
3. Commit your changes (`git commit -m 'Add new frame style'`)
4. Push and open a Pull Request
---
<p align="center">Made with ❤️ for the computer vision community</p>
| text/markdown | Kaushal | null | null | null | MIT License
Copyright (c) 2026 Kaushal
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.
| opencv, vision, computer-vision, bounding-box, annotation | [
"Development Status :: 3 - Alpha",
"Intended Audience :: Developers",
"License :: OSI Approved :: MIT License",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Topic :: Scientific/Engi... | [] | null | null | >=3.10 | [] | [] | [] | [
"numpy>=1.24",
"opencv-python>=4.8",
"build>=1.2; extra == \"dev\"",
"twine>=5.1; extra == \"dev\"",
"pytest>=8.0; extra == \"dev\""
] | [] | [] | [] | [
"Homepage, https://pypi.org/project/drawcv/",
"Repository, https://github.com/trainOwn/visionframe",
"Issues, https://github.com/trainOwn/visionframe/issues"
] | twine/6.2.0 CPython/3.13.5 | 2026-02-18T09:25:10.087612 | drawcv-0.1.5.tar.gz | 18,392 | a5/33/5125143f95a3a47032a4bb3ed4190351517ffe1f19c6c192cdcf4f70e07a/drawcv-0.1.5.tar.gz | source | sdist | null | false | 347bb7b0acb47706837e5e8a4d8d0b92 | e4e8e768da6bba9e7d2020d53979ffe9553fb4a7419c1938a86a5b0d4e3d7066 | a5335125143f95a3a47032a4bb3ed4190351517ffe1f19c6c192cdcf4f70e07a | null | [
"LICENSE"
] | 264 |
2.4 | aspen-pysys | 0.0.13 | Python interface for Aspen HYSYS | # aspen-pysys
Python interface for Aspen HYSYS
## Installation
Install via `pip install aspen-pysys` ([PyPI link](https://pypi.org/project/aspen-pysys/)).
## User guide
### Setup
1. Before using the package, ensure that the simulation case you wish to work on is already open in Aspen HYSYS.
2. Once your simulation file is open, run the following lines of code to set up the package.
3. A successful connection to your simulation file is indicated by 'Aspen HYSYS' being printed in the shell (or in a notebook cell if you are using Jupyter or something similar).
```python
from aspen_pysys import *
Hysys.init()
```
To open an Aspen HYSYS simulation file (`.hsc`), use `Hysys.open(sim_filepath)`, where `sim_filepath` is the path to your simulation file.
```python
from pathlib import Path
import os
current_filepath = Path(os.getcwd())
sim_filepath = current_filepath / "test_sim.hsc"
simcase = Hysys.open(sim_filepath)
simcase
```
4. You can now use the aspen-pysys package in your code! Please refer to the [tutorial folder](/tutorial/) for an interactive example. | text/markdown | null | Hariidaran Tamilmaran <hariidaran@proton.me> | null | null | null | null | [
"Operating System :: Microsoft :: Windows :: Windows 11",
"Programming Language :: Python :: 3",
"Topic :: Scientific/Engineering"
] | [] | null | null | >=3.9 | [] | [] | [] | [] | [] | [] | [] | [
"Homepage, https://codeberg.org/CacklingTanuki/aspen-pysys",
"Issues, https://codeberg.org/CacklingTanuki/aspen-pysys/issues"
] | twine/6.2.0 CPython/3.14.0 | 2026-02-18T09:23:40.290032 | aspen_pysys-0.0.13.tar.gz | 27,295 | d7/4e/38784d0fe66394457c63cc5def439fe425a1d78bb8468ae4e7c219d323fe/aspen_pysys-0.0.13.tar.gz | source | sdist | null | false | 5def0d44e24e08d0d114d765ea225c77 | 88ff40734d7fdc91b5de6a23d57e886a053ba3e840c2ede6c83132151a7ec166 | d74e38784d0fe66394457c63cc5def439fe425a1d78bb8468ae4e7c219d323fe | GPL-3.0-or-later | [
"LICENSE"
] | 271 |
2.4 | ucp-content | 0.1.13 | Python bindings for UCP (Unified Content Protocol) - Rust implementation | # UCP - Unified Content Protocol
Python bindings for the Rust UCP implementation.
## Installation
```bash
pip install ucp-content
```
## Usage
```python
import ucp
# Create a document
doc = ucp.create("My Document")
# Add blocks
root = doc.root_id
block1 = doc.add_block(root, "Hello, World!", role="paragraph")
# Edit blocks
doc.edit_block(block1, "Updated content")
# Render to markdown
md = ucp.render(doc)
print(md)
```
## Features
- **Document Operations**: Create, edit, move, delete blocks
- **Traversal**: Children, parent, ancestors, descendants, siblings
- **Finding**: By tag, label, role, content type
- **Edges**: Create relationships between blocks
- **LLM Utilities**: IdMapper for token-efficient prompts, PromptBuilder for UCL generation
- **Snapshots**: Version control for documents
- **UCL Execution**: Execute UCL commands on documents
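The "IdMapper for token-efficient prompts" feature can be understood with a small conceptual sketch. Note this is NOT the ucp API — the class and method names below are invented purely to illustrate the idea: long block IDs are replaced with short aliases so prompts sent to an LLM spend fewer tokens on identifiers, and the aliases can be expanded back afterwards.

```python
class ShortIdMapper:
    """Illustrative only: maps long block IDs to short aliases and back."""

    def __init__(self):
        self._to_short = {}
        self._to_long = {}

    def shorten(self, long_id):
        # Assign the next short alias (b0, b1, ...) on first sight.
        if long_id not in self._to_short:
            alias = f"b{len(self._to_short)}"
            self._to_short[long_id] = alias
            self._to_long[alias] = long_id
        return self._to_short[long_id]

    def expand(self, alias):
        # Recover the original long ID from its alias.
        return self._to_long[alias]

mapper = ShortIdMapper()
alias = mapper.shorten("8f14e45f-ceea-467f-9b5a-1c0d2e3f4a5b")
print(alias)                 # → b0
print(mapper.expand(alias))  # → 8f14e45f-ceea-467f-9b5a-1c0d2e3f4a5b
```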
## API Reference
See the [documentation](https://github.com/unified-content/ucp) for the full API reference.
| text/markdown; charset=UTF-8; variant=GFM | UCP Contributors | null | null | null | MIT | ucp, content, document, markdown, structured | [
"Development Status :: 4 - Beta",
"Programming Language :: Rust",
"Programming Language :: Python :: Implementation :: CPython",
"Programming Language :: Python :: 3.8",
"Programming Language :: Python :: 3.9",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Progra... | [] | null | null | >=3.8 | [] | [] | [] | [] | [] | [] | [] | [
"Documentation, https://github.com/unified-content/ucp",
"Homepage, https://github.com/unified-content/ucp",
"Repository, https://github.com/unified-content/ucp"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-18T09:21:37.384039 | ucp_content-0.1.13.tar.gz | 212,032 | 16/4b/794f0f84ab9ea5273e4e58c59206a316509e1667087227ee59f3fd86a1e1/ucp_content-0.1.13.tar.gz | source | sdist | null | false | 04bf0499531c77cfe431bed9b77c707c | fd6325c8cc4bdab35556330de64b1ce58c420c340a267bedfe31c66dec17f5ef | 164b794f0f84ab9ea5273e4e58c59206a316509e1667087227ee59f3fd86a1e1 | null | [] | 274 |
2.4 | mzidentml-reader | 0.4.15 | mzidentml-reader uses pyteomics (https://pyteomics.readthedocs.io/en/latest/index.html) to parse mzIdentML files (v1.2.0) and extract crosslink information. Results are written to a relational database (PostgreSQL or SQLite) using sqlalchemy. | # mzidentml-reader
[](https://opensource.org/licenses/Apache-2.0)
mzidentml-reader processes mzIdentML 1.2.0 and 1.3.0 files with the primary aim of extracting crosslink information.
It has three use cases:
1. to validate mzIdentML files against the criteria given here: https://www.ebi.ac.uk/pride/markdownpage/crosslinking
2. to extract information on crosslinked residue pairs and output it in a form more easily used by modelling software
3. to populate the database that is accessed by [crosslinking-api](https://github.com/Rappsilber-Laboratory/crosslinking-api)
It uses the pyteomics library (https://pyteomics.readthedocs.io/en/latest/index.html) as the underlying parser for mzIdentML.
Results are written into a relational database (PostgreSQL or SQLite) using sqlalchemy.
## Requirements
- Python 3.10 (includes SQLite3 in standard library)
- pipenv (for dependency management)
- PostgreSQL server (optional, only required for crosslinking-api database creation; validation and residue pair extraction use built-in SQLite3)
## Installation
### Production Installation
Install via PyPI:
```bash
pip install mzidentml-reader
```
PyPI project: https://pypi.org/project/mzidentml-reader/
For more installation details, see: https://packaging.python.org/en/latest/tutorials/installing-packages/
### Development Setup
Clone the repository and set up the development environment:
```bash
git clone https://github.com/PRIDE-Archive/mzidentml-reader.git
cd mzidentml-reader
pipenv install --python 3.10 --dev
pipenv shell
```
## Usage
`process_dataset` is the CLI entry point. Run it with `-h` to see all options:
```
process_dataset -h
```
Alternative (from the repository root):
```
python -m parser -h
```
### CLI Options Reference
One of the following mutually exclusive options is required:
| Option | Description |
|--------|-------------|
| `-p, --pxid <ID> [ID ...]` | ProteomeXchange accession(s), e.g. `PXD000001` or numbers only. Multiple IDs can be space-separated. |
| `-f, --ftp <URL>` | Process files from the specified FTP location. |
| `-d, --dir <PATH>` | Process files in the specified local directory. |
| `-v, --validate <PATH>` | Validate an mzIdentML file or all files in a directory. Exits after first failure. |
| `--seqsandresiduepairs <PATH>` | Extract sequences and crosslinked residue pairs as JSON. Requires `-j`. |
Additional options:
| Option | Description | Default |
|--------|-------------|---------|
| `-t, --temp <PATH>` | Temp folder for downloaded files or the sqlite DB. | System temp directory |
| `-n, --nopeaklist` | Skip peak list file checks. Works with `-d` and `-v` only. | Off |
| `-w, --writer <db\|api>` | Save data to database (`db`) or API (`api`). Used with `-p`, `-f`, `-d`. | `db` |
| `-j, --json <FILE>` | Output JSON filename. Required when using `--seqsandresiduepairs`. | |
| `-i, --identifier <ID>` | Project identifier for the database. Defaults to PXD accession or directory name. | |
| `--dontdelete` | Don't delete downloaded data after processing. | Off |
### 1. Validate a dataset
Run with the `-v` option to validate a dataset. The argument is the path to a specific mzIdentML file
or to a directory containing multiple mzIdentML files, in which case all of them will be validated. To pass, all the peaklist files
referenced must be in the same directory as the mzIdentML file(s). The converter creates an SQLite database in the
temporary folder for use during validation; the temporary folder can be specified with the `-t` option.
Use `-n` to skip peak list file checks (useful when peak list files are not available locally):
Examples:
```
process_dataset -v ~/mydata
```
```
process_dataset -v ~/mydata/mymzid.mzid -t ~/mytempdir
```
```
process_dataset -v ~/mydata/mymzid.mzid -n
```
The result is written to the console. If the data fails validation but the error message is not informative,
please open an issue on the GitHub repository: https://github.com/Rappsilber-Laboratory/mzidentml-reader/issues
### 2. Extract summary of crosslinked residue pairs
Run with the `--seqsandresiduepairs` option to extract a summary of search sequences and
crosslinked residue pairs. The output is JSON which is written to a file specified with the `-j` option (required).
The argument is the path to an mzIdentML file or a directory containing multiple mzIdentML files, in which case
all of them will be processed.
Examples:
```
process_dataset --seqsandresiduepairs ~/mydata -j output.json -t ~/mytempdir
```
```
process_dataset --seqsandresiduepairs ~/mydata/mymzid.mzid -j output.json
```
#### Programmatic access
The functionality can also be accessed programmatically in Python:
```python
from parser.process_dataset import sequences_and_residue_pairs
import tempfile
# Get sequences and residue pairs as a dictionary
filepath = "/path/to/file.mzid" # or directory containing .mzid files
tmpdir = tempfile.gettempdir() # or specify your own temp directory
data = sequences_and_residue_pairs(filepath, tmpdir)
# Iterate through sequences
print(f"Found {len(data['sequences'])} sequences:")
for seq in data['sequences']:
    print(f"  {seq['accession']}: {seq['sequence'][:50]}... (from {seq['file']})")
# Iterate through crosslinked residue pairs
print(f"\nFound {len(data['residue_pairs'])} unique crosslinked residue pairs:")
for pair in data['residue_pairs']:
    print(f"  {pair['prot1_acc']}:{pair['pos1']} <-> {pair['prot2_acc']}:{pair['pos2']}")
    print(f"    Match IDs: {pair['match_ids']}")
    print(f"    Modification accessions: {pair['mod_accs']}")
```
The returned dictionary has two keys:
- `sequences`: List of protein sequences (id, file, sequence, accession)
- `residue_pairs`: List of crosslinked residue pairs (prot1, prot1_acc, pos1, prot2, prot2_acc, pos2, match_ids, files, mod_accs)
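Once you have that dictionary, a common follow-up is to aggregate residue pairs per protein pair. A small illustrative helper (ours, not part of the mzidentml-reader API) operating on the documented structure:

```python
from collections import defaultdict

def pairs_by_protein(data):
    """Group crosslinked residue pairs from the documented
    sequences_and_residue_pairs() output by protein accession pair.
    Illustrative helper only, not part of the mzidentml-reader API."""
    grouped = defaultdict(list)
    for pair in data["residue_pairs"]:
        # Sort accessions so A-B and B-A links land in the same bucket.
        key = tuple(sorted((pair["prot1_acc"], pair["prot2_acc"])))
        grouped[key].append((pair["pos1"], pair["pos2"]))
    return dict(grouped)

# Minimal mock of the documented output shape (accessions are made up):
sample = {"residue_pairs": [
    {"prot1_acc": "P02768", "pos1": 190, "prot2_acc": "P02768", "pos2": 432},
    {"prot1_acc": "P69905", "pos1": 11, "prot2_acc": "P02768", "pos2": 525},
]}
print(pairs_by_protein(sample))
```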
### 3. Populate the crosslinking-api database
#### Create the database
```
sudo su postgres;
psql;
create database crosslinking;
create user xiadmin with login password 'your_password_here';
grant all privileges on database crosslinking to xiadmin;
\connect crosslinking;
GRANT ALL PRIVILEGES ON SCHEMA public TO xiadmin;
```
Find the `pg_hba.conf` file in the PostgreSQL installation directory and add a line to allow the `xiadmin` role to access the database, e.g.:
```
sudo nano /etc/postgresql/13/main/pg_hba.conf
```
then add the line:
`local crosslinking xiadmin md5`
then restart postgresql:
```
sudo service postgresql restart
```
#### Configure the python environment for the file parser
Edit the file `mzidentml-reader/config/database.ini` to point to your PostgreSQL database, e.g. so its content is:
```
[postgresql]
host=localhost
database=crosslinking
user=xiadmin
password=your_password_here
port=5432
```
#### Create the database schema
Run `create_db_schema.py` to create the database tables:
```
python parser/database/create_db_schema.py
```
#### Populate the database
To parse a test dataset:
```
process_dataset -d ~/PXD038060
```
The command line options that populate the database are `-d`, `-f` and `-p`. Only one of these can be used.
- `-d` — process files in a local directory
- `-f` — process files from an FTP location
- `-p` — process by ProteomeXchange identifier(s), space-separated
The `-i` option sets the project identifier in the database. It defaults to the PXD accession or the
name of the directory containing the mzIdentML file.
The `-w` option selects the writer method (`db` for database, `api` for API). Defaults to `db`.
Use `--dontdelete` to keep downloaded data after processing.
Examples:
```
process_dataset -p PXD038060
```
```
process_dataset -p PXD038060 PXD000001 -w api
```
```
process_dataset -f ftp://ftp.jpostdb.org/JPST001914/ -i JPST001914
```
### 4. Cleanup noncov modifications
The `cleanup_noncov` module removes invalid crosslink donor/acceptor modifications (`location="-1"`) from mzIdentML files.
This is useful for pre-processing files that contain noncovalent modifications that are not properly located.
#### Programmatic access
```python
from parser.cleanup_noncov import cleanup_noncov, cleanup_noncov_gz
# For plain .mzid files
peps_cleaned, mods_removed, sii_cleaned = cleanup_noncov("input.mzid", "output.mzid")
# For gzipped .mzid.gz files
peps_cleaned, mods_removed, sii_cleaned = cleanup_noncov_gz("input.mzid.gz", "output.mzid.gz")
print(f"Peptides cleaned: {peps_cleaned}")
print(f"Modifications removed: {mods_removed}")
print(f"SpectrumIdentificationItems cleaned: {sii_cleaned}")
```
## Development
### Code Quality
This project uses standardized code quality tools:
```bash
# Format code
pipenv run black .
# Sort imports
pipenv run isort .
# Check style and syntax
pipenv run flake8
```
### Testing
Make sure the test database user is available:
```bash
psql -p 5432 -c "create role ximzid_unittests with password 'ximzid_unittests';"
psql -p 5432 -c 'alter role ximzid_unittests with login;'
psql -p 5432 -c 'alter role ximzid_unittests with createdb;'
psql -p 5432 -c 'GRANT pg_signal_backend TO ximzid_unittests;'
```
Run tests with coverage:
```bash
pipenv run pytest # Run tests with coverage (80% threshold)
pipenv run pytest --cov-report=html # Generate HTML coverage report
pipenv run pytest -m "not slow" # Skip slow tests
```
| text/markdown | Colin Combe, Lars Kolbowski, Suresh Hewapathirana | null | null | null | Apache 2.0 | crosslinking python proteomics | [
"Programming Language :: Python :: 3",
"License :: OSI Approved :: Apache Software License",
"Operating System :: OS Independent",
"Topic :: Scientific/Engineering :: Bio-Informatics"
] | [
"any"
] | https://github.com/PRIDE-Archive/mzidentml-reader | null | >=3.10 | [] | [] | [] | [
"lxml>=4.9.1",
"numpy>=1.14.3",
"pandas>=0.21.0",
"pymzml>=0.7.8",
"pyteomics>=4.7.3",
"requests>=2.31.0",
"urllib3>=2.6.3",
"psycopg2-binary",
"sqlalchemy>=2.0.38",
"sqlalchemy-utils",
"obonet==1.1.0",
"orjson",
"authlib>=1.6.6",
"virtualenv>=20.36.1",
"filelock>=3.20.3",
"certifi>=20... | [] | [] | [] | [] | twine/6.2.0 CPython/3.9.25 | 2026-02-18T09:21:35.977359 | mzidentml_reader-0.4.15.tar.gz | 146,254 | 50/f6/184920f21c62b1771925e703f2e26aa897ecffa2b570d1ec61adc6a05f7d/mzidentml_reader-0.4.15.tar.gz | source | sdist | null | false | 45c42fd2df3139ac4a88d6db56d6212b | 1c1d88cc33e9de76fe09c8f13f8e5b6b5a67e89843eed1967091a99b47fa0886 | 50f6184920f21c62b1771925e703f2e26aa897ecffa2b570d1ec61adc6a05f7d | null | [
"LICENSE.md"
] | 257 |
2.1 | basedpyright | 1.38.1 | static type checking for Python (but based) | <h1><img src="https://docs.basedpyright.com/latest/img/readme_logo.png"> basedpyright</h1>
<!-- --8<-- [start:header] -->
[](https://pypi.org/project/basedpyright/)
[](https://marketplace.visualstudio.com/items?itemName=detachhead.basedpyright)
[](https://open-vsx.org/extension/detachhead/basedpyright)
[](https://packagecontrol.io/packages/LSP-basedpyright)
[](https://formulae.brew.sh/formula/basedpyright)
[](https://discord.gg/7y9upqPrk2)
[](https://docs.basedpyright.com)
Basedpyright is a fork of [pyright](https://github.com/microsoft/pyright) with various type checking improvements, pylance features and more.
<!-- --8<-- [end:header] -->
See [the documentation](https://detachhead.github.io/basedpyright) for information about why this fork exists, and a comprehensive list of features and improvements we've made to pyright.
| text/markdown | null | detachhead <detachhead@users.noreply.github.com> | null | null | null | null | [
"License :: OSI Approved :: MIT License",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.8",
"Programming Language :: Python :: 3.9",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programming ... | [] | null | null | >=3.8 | [] | [] | [] | [
"nodejs-wheel-binaries>=20.13.1"
] | [] | [] | [] | [
"repository, https://github.com/detachhead/basedpyright"
] | uv/0.10.2 {"installer":{"name":"uv","version":"0.10.2","subcommand":["publish"]},"python":null,"implementation":{"name":null,"version":null},"distro":{"name":"Ubuntu","version":"24.04","id":"noble","libc":null},"system":{"name":null,"release":null},"cpu":null,"openssl_version":null,"setuptools_version":null,"rustc_version":null,"ci":true} | 2026-02-18T09:20:50.090982 | basedpyright-1.38.1-py3-none-any.whl | 12,311,610 | 28/92/42f4dc30a28c052a70c939d8dbb34102674b48c89369010442038d3c888b/basedpyright-1.38.1-py3-none-any.whl | py3 | bdist_wheel | null | false | 2065ac30a09691b9d64f59545ee4a1d1 | 24f21661d2754687b64f3bc35efcc78781e11b08c8b2310312ed92bf178ea627 | 289242f4dc30a28c052a70c939d8dbb34102674b48c89369010442038d3c888b | null | [] | 63,252 |
2.2 | pypolymlp | 0.18.8 | This is the pypolymlp module. | # A generator of polynomial machine learning potentials
`pypolymlp` is a Python code designed for the development of polynomial machine learning potentials (MLPs) based on datasets generated from density functional theory (DFT) calculations. The code provides functionalities for fitting polynomial models to energy, force, and stress data, enabling the construction of accurate and computationally efficient interatomic potentials.
In addition to potential development, `pypolymlp` allows users to compute various physical properties and perform atomistic simulations using the trained MLPs.
## Polynomial machine learning potentials
A polynomial MLP represents the potential energy as a polynomial function of linearly independent polynomial invariants of the O(3) group. Developed polynomial MLPs are available in [Polynomial Machine Learning Potential Repository](http://cms.mtl.kyoto-u.ac.jp/seko/mlp-repository/index.html).
## Citation of pypolymlp
“Tutorial: Systematic development of polynomial machine learning potentials for elemental and alloy systems”, [A. Seko, J. Appl. Phys. 133, 011101 (2023)](https://doi.org/10.1063/5.0129045)
```
@article{pypolymlp,
  author = {Seko, Atsuto},
  title = {Tutorial: Systematic development of polynomial machine learning potentials for elemental and alloy systems},
  journal = {J. Appl. Phys.},
  volume = {133},
  number = {1},
  pages = {011101},
  year = {2023},
  month = {01},
}
```
## Required libraries and python modules
- python >= 3.9
- numpy != 2.0.*
- scipy
- pyyaml
- setuptools
- eigen3
- pybind11
- openmp (recommended)
[Optional]
- phonopy
- phono3py
- symfc
- sparse_dot_mkl
- spglib
- pymatgen
- ase
- joblib
## How to install pypolymlp
- Install from conda-forge (Recommended)
| Version | Last Update | Downloads | Platform | License |
| ---- | ---- | ---- | ---- | ---- |
|  |  | |  |  |
```
conda create -n pypolymlp-env
conda activate pypolymlp-env
conda install -c conda-forge pypolymlp
```
- Install from PyPI
```
conda create -n pypolymlp-env
conda activate pypolymlp-env
conda install -c conda-forge numpy scipy pybind11 eigen cmake cxx-compiler
pip install pypolymlp
```
Building C++ codes in pypolymlp may require a significant amount of time.
- Install from GitHub
```
git clone https://github.com/sekocha/pypolymlp.git
cd pypolymlp
conda create -n pypolymlp-env
conda activate pypolymlp-env
conda install -c conda-forge numpy scipy pybind11 eigen cmake cxx-compiler
pip install . -vvv
```
Building C++ codes in pypolymlp may require a significant amount of time.
## How to use pypolymlp
### Polynomial MLP development
To develop polynomial MLPs from datasets obtained from DFT calculations, both the command-line interface and the Python API are available.
Several procedures for generating structures used in DFT calculations are also supported.
- Tutorials
1. [Development of a single on-the-fly MLP](docs/tutorial_onthefly.md)
2. Development of a single general-purpose MLP
3. Development of Pareto-optimal MLPs
- [MLP development using command line interface](docs/mlpdev_command.md)
- [MLP development using Python API](docs/mlpdev_api.md)
- [Utilities for MLP development](docs/utilities.md)
- [Generator of structures used for DFT calculations](docs/strgen.md)
- Random atomic displacements with constant magnitude
- Random atomic displacements with sequential magnitudes and volume changes
- Random atomic displacements, cell expansion, and distortion
- Compression of vasprun.xml files
- Automatic division of DFT dataset
- Atomic energies
- Enumeration of optimal MLPs
- Estimation of computational costs
- Experimental features
- [SSCHA free energy model](docs/experimental/mlpdev_sscha.md)
- [Electronic free energy model](docs/experimental/mlpdev_electron.md)
- [Substitutional disordered model](docs/experimental/mlpdev_disorder.md)
### Calculations using polynomial MLP
In version 0.8.0 or earlier, polymlp files are generated in a plain text format as `polymlp.lammps`.
Starting from version 0.9.0, the files are generated in YAML format as `polymlp.yaml`.
Both formats are supported by the command-line interface and the Python API.
The following calculations can be performed using **pypolymlp** with the polynomial MLP files `polymlp.yaml` or `polymlp.lammps`.
- [Notes on hybrid polynomial MLPs](docs/calc_hybrid.md)
- [Energy, forces, stress tensor](docs/calc_property.md)
- [Equation of states](docs/calc_eos.md)
- [Local geometry optimization](docs/calc_geometry.md)
- [Elastic constants](docs/calc_elastic.md)
- [Phonon properties, Quasi-harmonic approximation](docs/calc_phonon.md)
- [Force constants](docs/calc_fc.md)
- [Polynomial invariants](docs/calc_features.md)
- Experimental features
- [Self-consistent phonon calculations](docs/experimental/calc_sscha.md)
- [Molecular dynamics](docs/experimental/calc_md.md)
- [Thermodynamic integration using molecular dynamics](docs/experimental/calc_ti.md)
- [Thermodynamic property calculation](docs/experimental/calc_thermodynamics.md)
- Evaluation of atomic-configuration-dependent electronic free energy
- Global structure optimization
- Structure optimization at finite temperatures
- [How to use polymlp in other calculator tools](docs/api_other_calc.md)
- LAMMPS
- phonopy and phono3py
- ASE
| text/markdown | null | Atsuto Seko <seko@cms.mtl.kyoto-u.ac.jp> | null | Atsuto Seko <seko@cms.mtl.kyoto-u.ac.jp> | BSD 3-Clause License
Copyright (c) 2024, pypolymlp
Redistribution and use in source and binary forms, with or without
modification, are permitted provided that the following conditions are met:
1. Redistributions of source code must retain the above copyright notice, this
list of conditions and the following disclaimer.
2. Redistributions in binary form must reproduce the above copyright notice,
this list of conditions and the following disclaimer in the documentation
and/or other materials provided with the distribution.
3. Neither the name of the copyright holder nor the names of its
contributors may be used to endorse or promote products derived from
this software without specific prior written permission.
THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE
FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR
SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER
CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY,
OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
| null | [] | [] | null | null | >=3.9 | [] | [] | [] | [
"numpy!=2.0.*",
"scipy",
"PyYAML>=5.3",
"symfc; extra == \"symfc\"",
"phonopy; extra == \"phonopy\"",
"phono3py; extra == \"phono3py\"",
"spglib; extra == \"spglib\"",
"symfc; extra == \"tools\"",
"phonopy; extra == \"tools\"",
"phono3py; extra == \"tools\"",
"spglib; extra == \"tools\""
] | [] | [] | [] | [
"Homepage, https://github.com/sekocha/pypolymlp",
"Repository, https://github.com/sekocha/pypolymlp"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-18T09:20:07.180835 | pypolymlp-0.18.8.tar.gz | 29,247,441 | f8/a9/f1839a16c6149acc7f15c664b0022af59ff7af66870436f739c40146ae24/pypolymlp-0.18.8.tar.gz | source | sdist | null | false | 01b749bc7724eff14d982152273f3ee8 | 2f27d7dc4c0b5906cddead6ebe889c16b1d7ea7474bc8069312fea30066b4d08 | f8a9f1839a16c6149acc7f15c664b0022af59ff7af66870436f739c40146ae24 | null | [] | 235 |
2.4 | pymadng | 0.8.4 | A python interface to MAD-NG running as subprocess | # PyMAD-NG
**Python interface to MAD-NG running as a subprocess**
[](https://pypi.org/project/pymadng/)
[](https://pymadng.readthedocs.io/en/latest/)
[](https://github.com/MethodicalAcceleratorDesign/MAD-NG.py/blob/main/LICENSE)
---
## 🚀 Installation
Install via pip from [PyPI](https://pypi.org/project/pymadng/):
```bash
pip install pymadng
```
---
## 🧠 Getting Started
Before diving into PyMAD-NG, we recommend you:
1. Familiarise yourself with [MAD-NG](https://madx.web.cern.ch/releases/madng/html/) — understanding MAD-NG is essential.
2. Read the [Quick Start Guide](https://pymadng.readthedocs.io/en/latest/quickstartguide.html) to see how to control MAD-NG from Python.
### Explore Key Examples
- **[LHC Matching Example](https://pymadng.readthedocs.io/en/latest/ex-lhc-couplingLocal.html)** – Real-world optics matching with intermediate feedback.
- **[Examples Page](https://pymadng.readthedocs.io/en/latest/examples.html)** - List of examples in an easy to read format.
- **[GitHub Examples Directory](https://github.com/MethodicalAcceleratorDesign/MAD-NG.py/blob/main/examples/)** – List of available examples on the repository
If anything seems unclear:
- Refer to the [API Reference](https://pymadng.readthedocs.io/en/latest/pymadng.html#module-pymadng)
- Check the [MAD-NG Docs](https://madx.web.cern.ch/releases/madng/html/)
- Or open an [issue](https://github.com/MethodicalAcceleratorDesign/MAD-NG.py/issues)
---
## 📚 Documentation
Full documentation and example breakdowns are hosted at:
[https://pymadng.readthedocs.io/en/latest/](https://pymadng.readthedocs.io/en/latest/)
To build locally:
```bash
git clone https://github.com/MethodicalAcceleratorDesign/MAD-NG.py.git
cd MAD-NG.py/docs
make html
```
---
## 🧪 Running Examples
Examples are stored in the `examples/` folder.
Run any script with:
```bash
python3 examples/ex-fodos.py
```
You can also batch-run everything using:
```bash
python3 runall.py
```
---
## 💡 Features
- High-level Python interface to MAD-NG
- Access to MAD-NG functions, sequences, optics, and tracking
- Dynamic `send()` and `recv()` communication
- Python-native handling of MAD tables and expressions
- Optional integration with `pandas` and `tfs-pandas`
---
## 🤝 Contributing
We welcome contributions! See [`CONTRIBUTING.md`](docs/source/contributing.md) or the [Contributing Guide](https://pymadng.readthedocs.io/en/latest/contributing.html) in the docs.
Bug reports, feature requests, and pull requests are encouraged.
---
## 📜 License
PyMAD-NG is licensed under the [GNU General Public License v3.0](https://github.com/MethodicalAcceleratorDesign/MAD-NG.py/blob/main/LICENSE).
---
## 🙌 Acknowledgements
Built on top of MAD-NG, developed at CERN. This interface aims to bring MAD's power to the Python ecosystem with minimal friction.
| text/markdown | Joshua Gray | null | null | null | null | null | [
"Programming Language :: Python :: 3",
"License :: OSI Approved :: GNU General Public License v3 (GPLv3)",
"Operating System :: Unix",
"Development Status :: 4 - Beta",
"Natural Language :: English",
"Topic :: Scientific/Engineering :: Physics"
] | [] | null | null | >=3.7 | [] | [] | [] | [
"numpy>=1.11.0",
"tfs-pandas>3.0.0; extra == \"tfs\""
] | [] | [] | [] | [
"Repository, https://github.com/MethodicalAcceleratorDesign/MAD-NG.py",
"Bug Tracker, https://github.com/MethodicalAcceleratorDesign/MAD-NG.py/issues",
"MAD Source, https://github.com/MethodicalAcceleratorDesign/MAD",
"Documentation, https://pymadng.readthedocs.io/en/latest/"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-18T09:20:02.405704 | pymadng-0.8.4.tar.gz | 10,208,156 | e7/ed/f1969f7abed159dad60bdb618a389a2e999700e6fcc83107c8d48016a533/pymadng-0.8.4.tar.gz | source | sdist | null | false | 3b5391d52ec42c87225c986ec18c2d00 | 237c67f956d87c5eac27463ab1d04d120b3fed7c7ec6eb9117940db8922039e6 | e7edf1969f7abed159dad60bdb618a389a2e999700e6fcc83107c8d48016a533 | null | [
"LICENSE"
] | 951 |