metadata_version string | name string | version string | summary string | description string | description_content_type string | author string | author_email string | maintainer string | maintainer_email string | license string | keywords string | classifiers list | platform list | home_page string | download_url string | requires_python string | requires list | provides list | obsoletes list | requires_dist list | provides_dist list | obsoletes_dist list | requires_external list | project_urls list | uploaded_via string | upload_time timestamp[us] | filename string | size int64 | path string | python_version string | packagetype string | comment_text string | has_signature bool | md5_digest string | sha256_digest string | blake2_256_digest string | license_expression string | license_files list | recent_7d_downloads int64 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
2.4 | relancify-sdk | 0.4.0 | Official Python SDK for the Relancify API. | # Relancify SDK (Python)
Official Python SDK for the Relancify API.
## Installation
```bash
pip install relancify-sdk
```
## Quickstart
```python
from relancify_sdk import RelancifyClient
client = RelancifyClient(
base_url="https://api.relancify.com/api/v1",
api_key="<your_api_key>",
)
agents = client.agents.list()
print(len(agents))
client.close()
```
## Available resources
- `client.agents`
- `client.operations`
- `client.runtime`
- `client.users`
- `client.voices`
- `client.api_keys`
## Notes
- The SDK uses synchronous `httpx`.
- HTTP errors are raised as `relancify_sdk.errors.ApiError`.
- Runtime websocket connections can use short-lived connect tokens via `client.runtime.create_connect_token(...)`.
- Publish flow: create/update agent locally, call `client.agents.publish(agent_id)`, then poll `client.operations.get(operation_id)` (see the sketch after this list).
- Agent IDs use the public format `ag_<uuid>` for all agent endpoints.
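A minimal sketch of the publish-and-poll flow described above, with HTTP errors handled as `ApiError`. The shape of the operation objects (the `id` and `status` fields and their values) is an assumption, not part of the documented API:
```python
import os
import time

from relancify_sdk import RelancifyClient
from relancify_sdk.errors import ApiError

client = RelancifyClient(
    base_url="https://api.relancify.com/api/v1",
    api_key=os.environ["RELANCIFY_API_KEY"],  # read from the environment, never hardcoded
)

agent_id = "ag_<your_agent_id>"

try:
    # Publish the agent, then poll the resulting operation until it settles.
    operation = client.agents.publish(agent_id)
    while True:
        op = client.operations.get(operation["id"])   # "id" field name is an assumption
        if op["status"] in ("succeeded", "failed"):   # status values are assumptions
            break
        time.sleep(1)
except ApiError as exc:
    print(f"Relancify API error: {exc}")
finally:
    client.close()
```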
## Security best practices
- Never hardcode API keys or bearer tokens in source code.
- Use environment variables or a secure secret manager.
- Rotate credentials periodically.
- Prefer short-lived access tokens when possible.
| text/markdown | Relancify | null | null | null | null | relancify, sdk, voice, agent, api | [
"Development Status :: 3 - Alpha",
"Intended Audience :: Developers",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3 :: Only",
"Programming Language :: Python :: 3.9",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Topic :: Software Development :: Libraries :: Python Modules"
] | [] | null | null | >=3.9 | [] | [] | [] | [
"httpx<1.0,>=0.25"
] | [] | [] | [] | [
"Homepage, https://www.relancify.com",
"Repository, https://github.com/NKSTUD/relancify-sdk",
"Documentation, https://www.relancify.com"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-20T11:28:25.590708 | relancify_sdk-0.4.0.tar.gz | 5,467 | 33/c2/b43cb1200570b42d93d70f52586f64a0a03ae6e0edc58476a4aa899b3808/relancify_sdk-0.4.0.tar.gz | source | sdist | null | false | ab01634864d1f1aa5641158e239c6e88 | c69352a4642a7fd0750c2a2ef2cd3d3f7feb54cd4e9205940359212135717fb9 | 33c2b43cb1200570b42d93d70f52586f64a0a03ae6e0edc58476a4aa899b3808 | null | [] | 215 |
2.4 | stimulsoft-data-adapters | 2026.1.4 | Stimulsoft data adapters for Python. | # Stimulsoft data adapters for Python.
Since pure JavaScript does not have built-in methods for working with remote databases, this functionality is implemented using server-side code.
## Install Data Adapters
To install the **Stimulsoft data adapters for Python**, you can use the specified command:
```
python -m pip install stimulsoft-data-adapters
```
All supported data adapters will be installed, as well as the universal **pyodbc** data driver, which supports most databases. If necessary, you can install any additional native data driver from the list below.
To install **Stimulsoft data adapters for Python** with all the necessary data drivers, you can use the following command:
```
python -m pip install stimulsoft-data-adapters[ext]
```
## Working with data adapters
To start working with data adapters, it is enough to create a **StiBaseHandler** object and call its **processRequest()** function, which accepts HTTP request data as input and generates a response that needs to be passed to the report generator.
The **StiBaseHandler** class supports simplified work with the Flask, Django, and Tornado frameworks. To process the request, it is enough to pass the request object to the handler, and return a response generated specifically for the framework.
### Flask
```
from flask import Flask, request
from stimulsoft_data_adapters import StiBaseHandler
app = Flask(__name__)
@app.route('/handler', methods = ['GET', 'POST'])
def handler():
handler = StiBaseHandler()
handler.processRequest(request)
return handler.getFrameworkResponse()
```
### Django
```
from django.http import HttpRequest
from stimulsoft_data_adapters import StiBaseHandler
def handler(request: HttpRequest):
handler = StiBaseHandler()
handler.processRequest(request)
return handler.getFrameworkResponse()
```
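To route requests to the view above, wire it into `urls.py` as usual for Django (a sketch using standard Django routing; the `views` module name is illustrative and not part of this package):
```
from django.urls import path
from . import views
urlpatterns = [
    path('handler', views.handler)
]
```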
### Tornado
```
from tornado.web import Application, RequestHandler
from stimulsoft_data_adapters import StiBaseHandler
class Handler(RequestHandler):
def get(self):
handler = StiBaseHandler()
handler.processRequest(self.request)
response = handler.getResponse()
self.set_header('Content-Type', response.contentType)
self.write(response.data)
```
For all other cases, it is enough to pass query vars and the request body to the handler. After that, you can get a response from the handler, which will contain the data and the necessary information.
```
from stimulsoft_data_adapters import StiBaseHandler
def handler():
handler = StiBaseHandler()
handler.processRequest(None, query, body)
response = handler.getResponse()
data = response.data
contentType = response.contentType
mimetype = response.mimetype
```
## Data adapter events
The handler provides two events: **onBeginProcessData** and **onEndProcessData**, which occur before connecting to the database and after receiving data.
```
from flask import Flask, request
from stimulsoft_data_adapters import StiBaseHandler, StiDataEventArgs
app = Flask(__name__)
@app.route('/handler', methods = ['GET', 'POST'])
def handler():
handler = StiBaseHandler()
handler.onBeginProcessData += beginProcessData
handler.onEndProcessData += endProcessData
handler.processRequest(request)
return handler.getFrameworkResponse()
```
### onBeginProcessData
In the event args, you can get and change all connection parameters, such as connection string, SQL query, connection name, connection type, data source name and others.
```
def beginProcessData(args: StiDataEventArgs):
args.command
args.database
args.connection
args.connectionString = args.connectionString.replace('Pwd=;', 'Pwd=**********;')
args.queryString
args.dataSource
```
### onEndProcessData
The event args, in addition to all connection parameters, will contain the result of the data request. It is a set of three arrays - column names, column types and data rows. All values can be changed in the event.
```
def endProcessData(args: StiDataEventArgs):
args.result.columns
args.result.types
args.result.rows
```
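For example, a minimal sketch of modifying the result in this event, assuming each row is a mutable list (the masked column is purely illustrative):
```
def endProcessData(args: StiDataEventArgs):
    # Mask the values of the first column before they are returned to the report generator.
    for row in args.result.rows:
        row[0] = '****'
```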
## Install database drivers
By default, without extras, only the data adapters will be installed. All required database drivers must be installed manually. This may be useful for some cases and certain operating systems, or for installing only the necessary drivers.
### MS SQL
To use the **MS SQL data adapter**, you need to install the specified package:
```
python -m pip install "pymssql[binary]"
```
Standard connection strings for MS SQL databases are supported.
You can also use the ODBC driver for MS SQL. For this, you need to install the [Microsoft ODBC Driver for SQL Server](https://learn.microsoft.com/en-us/sql/connect/odbc/download-odbc-driver-for-sql-server?view=sql-server-ver16) for your operating system. After this, you need to add the name of the ODBC driver to the connection string, for example:
```
Driver={ODBC Driver 18 for SQL Server}; Data Source=myServerAddress;
Initial Catalog=myDataBase; User ID=myUsername; Password=myPassword;
```
Also, additional connection string parameters are supported:
```
TrustServerCertificate=Yes;
```
### MySQL
To use the **MySQL data adapter**, you need to install the specified package:
```
python -m pip install mysql-connector-python
```
Standard connection strings for MySQL databases are supported.
You can also use the ODBC driver for MySQL. For this, you need to install the [Connector/ODBC for MySQL](https://dev.mysql.com/downloads/connector/odbc/) for your operating system. After this, you need to add the name of the ODBC driver to the connection string, for example:
```
Driver={MySQL ODBC 8.1 UNICODE Driver}; Server=myServerAddress;
Database=myDataBase; UserId=myUsername; Pwd=myPassword;
```
### PostgreSQL
To use the **PostgreSQL data adapter**, you need to install the specified package:
```
python -m pip install "psycopg[binary]"
```
Standard connection strings for PostgreSQL databases are supported.
You can also use the ODBC driver for PostgreSQL. For this, you need to install the [PostgreSQL ODBC Driver](https://odbc.postgresql.org/) for your operating system. After this, you need to add the name of the ODBC driver to the connection string, for example:
```
Driver={PostgreSQL Unicode}; Server=myServerAddress; Port=5432;
Database=myDataBase; User Id=myUsername; Password=myPassword;
```
### Firebird
To use the **Firebird data adapter**, you need to install the specified package:
```
python -m pip install firebird-driver
```
Standard connection strings for Firebird databases are supported.
You can also use the ODBC driver for Firebird. For this, you need to install the [Firebird ODBC Driver](https://firebirdsql.org/en/odbc-driver/) for your operating system. After this, you need to add the name of the ODBC driver to the connection string, for example:
```
Driver={Firebird/InterBase(r) driver}; User=SYSDBA; Password=masterkey;
Database=SampleDatabase.fdb; DataSource=myServerAddress; Port=3050;
```
### Oracle
To use the **Oracle data adapter**, you need to install the specified package:
```
python -m pip install oracledb
```
To run the driver, you will also need to install [Oracle Instant Client](https://www.oracle.com/pl/database/technologies/instant-client/downloads.html) for your operating system, and if required, add the path to it to the environment variables. Standard connection strings for Oracle databases are supported.
You can also use the ODBC driver for Oracle. For this, you need to install the [Oracle Instant Client ODBC](https://www.oracle.com/pl/database/technologies/releasenote-odbc-ic.html) for your operating system. After this, you need to add the name of the ODBC driver to the connection string, for example:
```
Driver={Oracle in instantclient_19_20}; Data Source=TORCL;
User Id=myUsername; Password=myPassword;
```
### MongoDB
To use the **MongoDB data adapter**, you need to install the specified package:
```
python -m pip install "pymongo[srv]"
```
Standard connection strings for MongoDB databases are supported.
### ODBC
To use the **ODBC data adapter**, you need to install the specified package (if for some reason it was not installed automatically with the data adapter package):
```
python -m pip install pyodbc
```
After that, you can create a native ODBC connection in the Report Designer using any supported driver specified on [this page](https://github.com/mkleehammer/pyodbc/wiki).
## Useful links
* [Live Demo](http://demo.stimulsoft.com/#Js)
* [Product Page](https://www.stimulsoft.com/en/products/reports-js)
* [Free Download](https://www.stimulsoft.com/en/downloads)
* [PyPI](https://pypi.org/project/stimulsoft-data-adapters/)
* [GitHub](https://github.com/stimulsoft/DataAdapters.JS/tree/main/PythonDataAdapters)
* [Documentation](https://www.stimulsoft.com/en/documentation/online/programming-manual/index.html?reports_js.htm)
* [License](https://www.stimulsoft.com/en/licensing/developers)
| text/markdown | Stimulsoft | info@stimulsoft.com | null | null | https://www.stimulsoft.com/en/licensing/developers | null | [
"License :: Other/Proprietary License",
"Framework :: Django",
"Framework :: Flask",
"Programming Language :: Python",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Topic :: Database",
"Topic :: Software Development"
] | [] | https://www.stimulsoft.com/en/products/reports-python | null | >=3.10 | [] | [] | [] | [
"pyodbc",
"requests",
"pyodbc; extra == \"ext\"",
"pymssql; extra == \"ext\"",
"mysql-connector-python; extra == \"ext\"",
"psycopg; extra == \"ext\"",
"firebird-driver; extra == \"ext\"",
"oracledb; extra == \"ext\"",
"pymongo[srv]; extra == \"ext\""
] | [] | [] | [] | [] | twine/6.1.0 CPython/3.13.5 | 2026-02-20T11:28:17.872712 | stimulsoft_data_adapters-2026.1.4.tar.gz | 29,361 | fa/91/1a76f4a584690720ce9f8ed40da23ff4c1a645572c69bb866d42fd9dded2/stimulsoft_data_adapters-2026.1.4.tar.gz | source | sdist | null | false | 3d82aa03e0874c4801cd5ad4ed34d1d9 | 075768dbd945ba09e09b8cb5666e85825964b7a9679d54d6cb6e9c8d14d359fb | fa911a76f4a584690720ce9f8ed40da23ff4c1a645572c69bb866d42fd9dded2 | null | [
"LICENSE.md"
] | 354 |
2.4 | stimulsoft-dashboards | 2026.1.4 | Data visualization in Python applications. | # Stimulsoft Dashboards.PYTHON
Data visualization in Python applications.
## About the product
Stimulsoft Dashboards.PYTHON is a fast and powerful tool for creating analytical dashboards in services and projects written in Python. The product includes a JavaScript data processing engine, a designer component for creating dashboards, and a fully interactive viewer for viewing ready-made dashboards on the screen of any device.
Stimulsoft Dashboards.PYTHON is a client-server system wherein a JavaScript component operates on the client side, and a Python server is responsible for data processing. These two parts are closely related and represent a single product that greatly simplifies working with dashboards in web applications written in Python.
## Install dashboard components
To install the **Stimulsoft Dashboards.PYTHON**, you can use the specified command:
```
python -m pip install stimulsoft-dashboards
```
## Working with data analysis tool
### Dashboard Engine
The **StiReport** component is designed to work with the dashboard engine in a Web project. Using this component, you can create a dashboard, load a dashboard from a file or string, and call a dashboard export function.
> For simplicity, all code examples in this tutorial use the Flask framework (any other can be used).
The code example shows how you can load a dashboard from a file, and export it to HTML format:
### app.py
```python
from flask import Flask, render_template, url_for, request
from stimulsoft_reports.report import StiReport
from stimulsoft_reports.report.enums import StiExportFormat
app = Flask(__name__)
@app.route('/report', methods = ['GET', 'POST'])
def report():
report = StiReport()
if report.processRequest(request):
return report.getFrameworkResponse()
report.loadFile(url_for('static', filename='reports/Financial.mrt'))
report.render()
report.exportDocument(StiExportFormat.HTML)
js = report.javascript.getHtml()
html = report.getHtml()
return render_template('report.html', reportJavaScript = js, reportHtml = html)
```
### report.html
```html
<!DOCTYPE html>
<html>
<head>
<meta http-equiv="Content-Type" content="text/html; charset=UTF-8" />
<title>Load and Export a Dashboard</title>
{{ reportJavaScript|safe }}
</head>
<body>
{{ reportHtml|safe }}
</body>
</html>
```
More details in [our documentation](https://www.stimulsoft.com/en/documentation/online/programming-manual/index.html?reports_python.htm).
### Report Viewer
The **StiViewer** component is designed for viewing, printing and exporting dashboards in a browser window. Full support for working with interactive dashboards has been implemented. Using this component, you can create a viewer object, set the necessary options, process the request and return the result of its execution, and receive the prepared JavaScript and HTML code of the component.
An example of displaying a viewer on an HTML page:
### app.py
```python
from flask import Flask, render_template, url_for, request
from stimulsoft_reports.report import StiReport
from stimulsoft_reports.viewer import StiViewer
app = Flask(__name__)
@app.route('/viewer', methods = ['GET', 'POST'])
def viewer():
viewer = StiViewer()
viewer.options.appearance.fullScreenMode = True
if viewer.processRequest(request):
return viewer.getFrameworkResponse()
report = StiReport()
report.loadFile(url_for('static', filename='dashboards/Financial.mrt'))
viewer.report = report
js = viewer.javascript.getHtml()
html = viewer.getHtml()
return render_template('viewer.html', viewerJavaScript = js, viewerHtml = html)
```
### viewer.html
```html
<!DOCTYPE html>
<html>
<head>
<meta http-equiv="Content-Type" content="text/html; charset=UTF-8" />
<title>Showing a Dashboard in the Viewer</title>
{{ viewerJavaScript|safe }}
</head>
<body>
{{ viewerHtml|safe }}
</body>
</html>
```
There is a simplified deployment of the viewer without using an HTML page template. For example, this same example can be implemented using only Python code:
### app.py
```python
from flask import Flask, url_for, request
from stimulsoft_reports.report import StiReport
from stimulsoft_reports.viewer import StiViewer
app = Flask(__name__)
@app.route('/viewer', methods = ['GET', 'POST'])
def viewer():
viewer = StiViewer()
viewer.options.appearance.fullScreenMode = True
if viewer.processRequest(request):
return viewer.getFrameworkResponse()
report = StiReport()
report.loadFile(url_for('static', filename='dashboards/Financial.mrt'))
viewer.report = report
return viewer.getFrameworkResponse()
```
More details in [our documentation](https://www.stimulsoft.com/en/documentation/online/programming-manual/index.html?reports_python.htm).
### Reports Designer
The **StiDesigner** component is designed for developing dashboards in a browser window. The designer's interface is built using HTML5, which allows it to be used on almost any modern platform and operating system. Because reports are built using JavaScript technology on the client, even a low-performance server side can be used.
An example of displaying a designer on an HTML page:
### app.py
```python
from flask import Flask, render_template, url_for, request
from stimulsoft_reports.report import StiReport
from stimulsoft_reports.designer import StiDesigner
app = Flask(__name__)
@app.route('/designer', methods = ['GET', 'POST'])
def designer():
designer = StiDesigner()
designer.options.appearance.fullScreenMode = True
if designer.processRequest(request):
return designer.getFrameworkResponse()
report = StiReport()
report.loadFile(url_for('static', filename='dashboards/Financial.mrt'))
designer.report = report
js = designer.javascript.getHtml()
html = designer.getHtml()
return render_template('designer.html', designerJavaScript = js, designerHtml = html)
```
### designer.html
```html
<!DOCTYPE html>
<html>
<head>
<meta http-equiv="Content-Type" content="text/html; charset=UTF-8" />
<title>Editing a Dashboard Template in the Designer</title>
{{ designerJavaScript|safe }}
</head>
<body>
{{ designerHtml|safe }}
</body>
</html>
```
There is a simplified deployment of the designer without using an HTML page template. For example, this same example can be implemented using only Python code:
### app.py
```python
from flask import Flask, url_for, request
from stimulsoft_reports.report import StiReport
from stimulsoft_reports.designer import StiDesigner
app = Flask(__name__)
@app.route('/designer', methods = ['GET', 'POST'])
def designer():
designer = StiDesigner()
designer.options.appearance.fullScreenMode = True
if designer.processRequest(request):
return designer.getFrameworkResponse()
report = StiReport()
report.loadFile(url_for('static', filename='dashboards/Financial.mrt'))
designer.report = report
return designer.getFrameworkResponse()
```
More details in [our documentation](https://www.stimulsoft.com/en/documentation/online/programming-manual/index.html?reports_python.htm).
## Useful links
* [Live Demo](http://demo.stimulsoft.com/#Js)
* [Product Page](https://www.stimulsoft.com/en/products/dashboards-python)
* [Free Download](https://www.stimulsoft.com/en/downloads)
* [PyPI](https://pypi.org/project/stimulsoft-dashboards/)
* [Documentation](https://www.stimulsoft.com/en/documentation/online/programming-manual/index.html?reports_python.htm)
* [License](https://www.stimulsoft.com/en/licensing/developers)
| text/markdown | Stimulsoft | info@stimulsoft.com | null | null | https://www.stimulsoft.com/en/licensing/developers | null | [
"License :: Other/Proprietary License",
"Framework :: Django",
"Framework :: Flask",
"Programming Language :: Python",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Topic :: Office/Business",
"Topic :: Software Development"
] | [] | https://www.stimulsoft.com/en/products/dashboards-python | null | >=3.10 | [] | [] | [] | [
"stimulsoft-reports==2026.1.4",
"stimulsoft-reports[ext]==2026.1.4; extra == \"ext\""
] | [] | [] | [] | [] | twine/6.1.0 CPython/3.13.5 | 2026-02-20T11:28:16.572916 | stimulsoft_dashboards-2026.1.4.tar.gz | 543,225 | ad/16/912b54ab1454ea3d5058b77e2e793727070eba453c423cc0a3400e50961a/stimulsoft_dashboards-2026.1.4.tar.gz | source | sdist | null | false | 5161be32565a10bad419b9529d8405c8 | 1e36cf47c308fbc6e5ac2f926f03469c6b527ea1091171b0b4bd01f56f2b1d0b | ad16912b54ab1454ea3d5058b77e2e793727070eba453c423cc0a3400e50961a | null | [
"LICENSE.md"
] | 278 |
2.4 | piper-control | 1.3.6 | A wrapper around piper_sdk for controlling AgileX Piper arms. | # piper_control - Library for controlling AgileX Pipers
## Overview
This repo provides low-level python modules for connecting and controlling
AgileX Piper robots.
* `piper_connect`: a python implementation of the CAN setup scripts bundled
with `piper_sdk`. These are not accessible in pip installs, and we found it
useful to be able to query and activate CAN ports programmatically.
* `piper_control`: our lightweight wrapper of `piper_sdk` for controlling
AgileX Piper robots.
The `piper_sdk` API is powerful and quickly maturing, but it's a bit complex
and under-documented, and we found it helpful to define a simple abstraction
for basic I/O.
There are also several sharp bits in `piper_sdk` which can make the robots
seem temperamental, e.g. becoming unresponsive despite repeated calls to
`MotionCtrl_2`, `EnableArm`, `GripperCtrl`, etc. We've bundled our solutions
into `PiperControl` so `reset` and the various move commands perform as one
would expect.
## Quick start
Install the dependencies and package:
```shell
sudo apt install can-utils
pip install piper_control
```
Set up the CAN connection to the arm(s):
```python
# Set up the connection to the Piper arm.
# These steps require sudo access.
from piper_control import piper_connect
# Print out the CAN ports that are available to connect.
print(piper_connect.find_ports())
# Activate all the ports so that you can connect to any arms connected to your
# machine.
piper_connect.activate()
# Check to see that all the ports are active.
print(piper_connect.active_ports())
```
Then control the robot:
```python
from piper_control import piper_interface
from piper_control import piper_init
robot = piper_interface.PiperInterface(can_port="can0")
# Resets the robot and enables the motors and motion controller for the arm.
# This call is necessary to be able to both query state and send commands to the
# robot.
piper_init.reset_arm(
robot,
arm_controller=piper_interface.ArmController.POSITION_VELOCITY,
move_mode=piper_interface.MoveMode.JOINT,
)
piper_init.reset_gripper(robot)
# See where the robot is now.
joint_angles = robot.get_joint_positions()
print(joint_angles)
# Move one joint of the arm.
joint_angles = robot.get_joint_positions()
joint_angles[-2] -= 0.1
print(f"Setting joint angles to {joint_angles}")
robot.command_joint_positions(joint_angles)
```
See the [tutorial.ipynb][tutorial] for a longer walkthrough.
## Gravity Compensation
The package includes tools for gravity compensation calibration and execution.
### Installation
Install with the gravity compensation dependencies:
```shell
pip install piper_control[gravity]
```
### Generate Calibration Samples
Collect joint position and effort samples across the robot's workspace:
```shell
piper-generate-samples -o samples.npz
```
If using `uv`:
```shell
uv run piper-generate-samples -o /tmp/grav_comp_samples.npz
```
Options:
* `--model-path`: Path to MuJoCo XML model (default: bundled model)
* `--joint-names`: Joint names in the model (default: joint1-6)
* `--num-samples`: Number of samples to collect (default: 50)
* `--can-port`: CAN interface name (default: can0)
### Try out the Gravity Compensation
Try the gravity compensation model using calibrated samples:
```shell
piper-gravity-compensation --samples-path samples.npz
```
Options:
* `--model-path`: Path to MuJoCo XML model (default: bundled model)
* `--joint-names`: Joint names in the model (default: joint1-6)
* `--can-port`: CAN interface name (default: can0)
* `--model-type`: Compensation model type (default: cubic).
<!-- markdownlint-disable-next-line MD007 -->
* Choices: linear, affine, quadratic, cubic, features, direct
* `--damping`: Velocity damping gain for stability (default: 1.0)
## Collision Protection
Set the collision protection level for all joints:
```shell
piper-set-collision-protection 5
```
Options:
* `level`: Protection level to set (default: 1)
* `--can-port`: CAN interface name (default: can0)
## Local development setup
Use this workflow when you need to develop on `piper_control` directly instead
of installing the released package from PyPI.
### 1. Clone the repository
```shell
git clone https://github.com/Reimagine-Robotics/piper_control.git
cd piper_control
```
### 2. Choose how you want to manage dependencies
#### Option A: Virtual environment + pip
```shell
python -m venv .venv
source .venv/bin/activate
python -m pip install --upgrade pip
pip install -e .
# Pull in the optional gravity-compensation tools if needed.
pip install -e .[gravity]
```
Install any extra dev tooling you care about (e.g. `pip install pre-commit`).
If you want to use our fork of `piper_sdk`, which fixes some jerkiness issues
when using MIT mode on the Piper, you can:
```shell
pip uninstall piper_sdk
pip install \
git+https://github.com/Reimagine-Robotics/piper_sdk.git@master#egg=piper_sdk
```
Swap `@master` for another branch or tag if you need something different.
#### Option B: uv-managed environment
[`uv`](https://github.com/astral-sh/uv) reads both `pyproject.toml` and
`uv.lock`, so it can recreate the exact environment the branch was developed
with, including the dev helpers defined in the `dev` dependency group.
```shell
# Create/refresh the .venv using the lockfile and install dev tools.
uv sync --all-extras --group dev
uv tree
```
`uv sync` automatically installs the gravity-compensation extras and the dev
tools (pre-commit, jupyterlab, pylint). Use `uv run <command>` to execute tools
without activating the environment manually.
> [!NOTE]
> `uv sync` will use our fork of `piper_sdk`, which fixes some jerkiness issues
> when using MIT mode on the Piper.
## Generating udev rules for CAN adapters
To avoid needing to run `sudo` to set up the CAN interface, you can create a
udev rule that sets the bitrate and desired name for your CAN adapter.
### Usage
1. Plug in your CAN adapter
2. Run the script:
```bash
sudo ./scripts/generate_udev_rule.bash -i can0 -b 1000000
```
Or name your robot (e.g. myrobot):
```bash
sudo ./scripts/generate_udev_rule.bash -i can0 -n myrobot -b 1000000
```
3. Unplug and replug the adapter to test
### Test
```bash
ip link show can0
```
Or if you named it something else:
```bash
ip link show myrobot
```
That's it!
## Linting
To lint the codebase, run:
```bash
uv run pre-commit
```
## Troubleshooting / FAQ
### Is my PiperInterface working?
This snippet is a good check for whether things are working:
```python
from piper_control import piper_interface
from piper_control import piper_init
robot = piper_interface.PiperInterface(can_port="can0")
piper_init.reset_arm(robot)
print(robot.get_joint_positions())
print(robot.get_joint_velocities())
print(robot.get_joint_efforts())
print(robot.get_gripper_state())
```
If you get the following text, then the CAN connection is not working. See
[this section](#canexceptionscanerror-errors) on how to debug the CAN
connection.
```text
<class 'can.exceptions.CanError'> Message NOT sent
```
If you get output that looks like this (as in the CAN seems like it is working,
but you get all 0's for the rest of the output), see
[this section](#get-all-zeros-when-calling-get_joint_positions).
```text
can0 is exist
can0 is UP
can0 bitrate is 1000000
can0 bus opened successfully.
[0.0, 0.0, 0.0, 0.0, 0.0, 0.0]
[0.0, 0.0, 0.0, 0.0, 0.0, 0.0]
[0.0, 0.0, 0.0, 0.0, 0.0, 0.0]
(0.0, 0.0)
```
### Get all zeros when calling `get_joint_positions()`
Run through these steps:
```python
# Assume you already have the PiperInterface object.
robot = piper_interface.PiperInterface(can_port="can0")
# Reset again.
piper_init.reset_arm(robot)
```
And after that, calling `robot.get_joint_positions()` should return non-zero
values. If it doesn't, then double-check the CAN connection. See
[this section](#canexceptionscanerror-errors).
### `can.exceptions.CanError` errors
Quite often, you may see the following error while connecting or resetting your
arm:
```text
<class 'can.exceptions.CanError'> Message NOT sent
```
There are several possible issues here, and several things you can try to fix
it:
1. Unplug the CAN cable, power off the robot, power on the robot, then plug the
CAN cable back in.
If this works but the error keeps recurring, the issue is likely the USB
cable. The CAN adapter that ships with the Piper is finicky and sensitive
to the USB cable used.
Be sure to call `piper_init.reset()` on the robot after starting it again.
2. Still doesn't work?
You can check whether the cable itself is working by making sure the CAN
adapter is visible:
```shell
lsusb | grep CAN
```
If nothing shows up, the cable is not working.
As mentioned above, the CAN adapter is sensitive to this piece.
3. Still not working? Make sure the CAN interface is set up correctly.
Some things to try:
1. Run:
```shell
ifconfig
```
And verify the output has your CAN interface (e.g. `can0`).
2. Check the bitrate of the interface:
```shell
ip -details link show can0 | grep bitrate
```
Verify this is set to something like 1000000.
3. Check the state of the CAN interface:
```shell
ip -details link show can0 | grep "can state"
```
If this is `ERROR-PASSIVE` or `BUS-OFF` (rather than `ERROR-ACTIVE`, the normal state), something is wrong here.
If any of these are not working, try resetting the CAN connection. You can
re-run `piper_connect.activate()` or run the steps manually here:
```shell
sudo ip link set can0 down
sudo ip link set can0 type can bitrate 1000000
sudo ip link set can0 up
```
Check the state afterwards:
```shell
ip -details link show can0 | grep "can state"
```
Try resetting again for good measure:
```shell
sudo ip link set can0 down
sudo ip link set can0 up
```
4. Still not working? Likely one of the other components are not working.
Make sure the high-low cables connected to your CAN adapter are inserted
properly. This is the most common failure mode we've seen. In particular,
ensure that the wire ends are wedged __at the top__ of the hole, not the
bottom.
If needed, swap out your main cable (the aviation connector in the back of
the robot) and try again. Try swapping out the CAN adapter too if needed.
[tutorial]: tutorial.ipynb "Tutorial"
| text/markdown | null | Reimagine Robotics <info@reimaginerobotics.ai> | null | null | null | null | [] | [] | null | null | >=3.10 | [] | [] | [] | [
"numpy",
"packaging",
"piper-sdk>=0.2.19",
"mujoco; extra == \"gravity\"",
"scipy; extra == \"gravity\""
] | [] | [] | [] | [
"Repository, https://github.com/Reimagine-Robotics/piper_control"
] | twine/6.2.0 CPython/3.13.7 | 2026-02-20T11:28:10.306589 | piper_control-1.3.6.tar.gz | 32,229 | c3/1f/f078e4d94aec6d4d35a8c6ef71012b35695ccf944672215a2e02cef6504b/piper_control-1.3.6.tar.gz | source | sdist | null | false | beb060a05cd3c5ff15233e3cc2b778d1 | 2a5274b5b202f629bb3fd3a93a517444850449c76cd031b808aa81cb338c4ebd | c31ff078e4d94aec6d4d35a8c6ef71012b35695ccf944672215a2e02cef6504b | MIT | [
"LICENSE"
] | 276 |
2.4 | git-portal | 0.1.1 | Modern git worktree manager with color coding and automation | # ⛩️ Portal - Git Worktree Manager
[](https://www.python.org/downloads/)
[](https://github.com/astral-sh/ruff)
[](https://github.com/python/mypy)
[](https://opensource.org/licenses/MIT)
<p align="center">
<img src="docs/assets/demo.gif" alt="Portal Demo" width="800">
</p>
Portal is a modern CLI tool for managing Git worktrees with automatic color coding and editor integration. Each worktree gets a deterministic color that syncs across your terminal, IDE, and CLI output, making it easy to identify which worktree you're working in.
## Features
### Color Coding System
- **Deterministic colors**: Each worktree gets a consistent color based on its name
- **Cross-tool sync**: Colors appear in iTerm tabs, VS Code/Cursor themes, and CLI output
- **Visual identification**: Quickly identify which worktree you're working in
### Automation & Hooks
- **8 hook types**: `command`, `copy`, `template`, `mkdir`, `symlink`, `env_update`, `script`, `git`
- **Project-level overrides**: Use `.portal.yml` files for project-specific automation
- **Conditional execution**: Run hooks based on file existence, platform, environment
- **Security-hardened**: Built-in protection against injection attacks and path traversal
- **Variable substitution**: Dynamic values like `{{worktree_name}}`, `{{color_hex}}`, `auto` ports
### Developer Tools
- **Interactive menu**: Arrow-key navigation with color-highlighted worktree entries
- **iTerm integration**: Open worktrees in new tabs with colored tab indicators
- **Claude integration**: Start AI-assisted coding sessions in worktree context
- **Shell completions**: Support for Bash and Zsh
### Technical Design
- **Type-safe**: Fully type-checked with mypy strict mode
- **Async operations**: Built with Python's async/await for better performance
- **Event-driven**: Extensible architecture using event bus pattern
- **Cross-platform**: Works on macOS and Linux (Windows via WSL)
## Installation
### Prerequisites
- Python 3.12 or higher
- Git 2.5+ (for worktree support)
- iTerm2 (optional, for terminal integration on macOS)
- VS Code or Cursor (optional, for editor integration)
### Quick Install
```bash
# With uv (recommended)
uv tool install git-portal[cli]
# With pipx
pipx install git-portal[cli]
# With Homebrew (macOS)
brew tap aureliensibiril/portal
brew install portal
```
### From Source
```bash
git clone https://github.com/aureliensibiril/portal.git
cd portal
uv venv --python 3.12
source .venv/bin/activate
uv pip install -e ".[cli]"
```
### Shell Integration
Install shell completions and functions:
```bash
# Install shell completions and portal cd function
portal shell install
```
This adds tab completion for Portal commands and the `pw` alias for quick worktree switching.
## Quick Start
### Create Your First Worktree
```bash
# Interactive mode - shows numbered list with color indicators
portal list
# Create a new worktree
portal new feature/awesome-feature
# Create from specific base branch
portal new hotfix/urgent-fix --base release-v2.0
```
### Example Workflow
```bash
# 1. Open the interactive menu (arrow-key navigation with true-color entries)
$ portal
⛩️ Portal - Git Worktree Manager
↑/↓ Navigate • Enter: Select • N: New worktree • Q: Quit
▶ main (base) # highlighted, colored per worktree
feature/auth
hotfix/bug
# Pressing Enter on a worktree opens an action submenu:
# ↵ Open terminal in new tab
# 📝 Open in Cursor
# 🤖 Open with Claude
# 🎨 Reset IDE colors
# 🗑️ Delete worktree
# 2. Create new worktree with hooks
$ portal new feature/payments
✅ Created worktree: feature/payments
Path: ../portal_worktrees/feature_payments
Color: Purple (#9C27B0)
✅ Hook: Copy environment file
✅ Hook: Install dependencies
✅ Opened in Cursor with Purple theme
# 3. Switch between worktrees
$ portal switch feature/auth
✅ Switched to feature/auth
# 4. Open in iTerm with tab color
$ portal terminal feature/payments
✅ Opened feature/payments in new iTerm tab
# 5. Start AI coding session
$ portal claude feature/payments
✅ Opened feature/payments with Claude
```
## Command Reference
### Core Worktree Management
| Command | Description | Example |
| -------------------------- | ------------------------------------- | ---------------------------------------- |
| `portal` | Open interactive menu | `portal` |
| `portal list` | Interactive worktree list with colors | `portal list --format table` |
| `portal new <branch>` | Create new worktree | `portal new feature/auth --base develop` |
| `portal delete [worktree]` | Delete a worktree | `portal delete feature/old --force` |
| `portal switch [worktree]` | Switch to a worktree | `portal switch main --open` |
| `portal info [worktree]` | Show worktree information | `portal info feature/auth` |
| `portal branches` | List available branches | `portal branches --fetch` |
**Options detail:**
| Command | Option | Description |
| --------------- | ----------------------------------------------- | ---------------------------------------- |
| `portal new` | `-b/--base <branch>` | Base branch for the new worktree |
| `portal new` | `-t/--template <name>` | Template to use |
| `portal new` | `--fetch` | Fetch remote branches before creating |
| `portal new` | `--no-hooks` | Skip hook execution |
| `portal new` | `--no-open` | Don't open in editor after creation |
| `portal delete` | `-f/--force` | Force deletion |
| `portal delete` | `--with-branch` | Also delete the associated branch |
| `portal list` | `--format [interactive\|simple\|table\|json]` | Output format (default: interactive) |
| `portal list` | `--no-interactive` | Disable interactive mode |
| `portal switch` | `--open` | Open in editor after switching |
### Editor Integration
| Command | Description | Example |
| -------------------------- | ----------------------------------- | ---------------------------- |
| `portal cursor [worktree]` | Open in Cursor IDE with color theme | `portal cursor feature/auth` |
| `portal vscode [worktree]` | Open in VS Code with color theme | `portal vscode main` |
### Terminal Integration
| Command | Description | Example |
| ---------------------------- | ------------------------- | ---------------------------- |
| `portal terminal [worktree]` | Open iTerm tab with color | `portal terminal feature/ui` |
| `portal claude [worktree]` | Open iTerm with Claude AI | `portal claude hotfix/bug` |
### Hook Management
| Command | Description | Example |
| -------------------------- | -------------------------------- | ------------------------------ |
| `portal hooks list` | Show hook configuration guidance | `portal hooks list` |
| `portal hooks run <stage>` | Run hooks for a specific stage | `portal hooks run post_create --dry-run` |
### Configuration
| Command | Description | Example |
| ---------------------------- | ------------------------------------------ | ----------------------------- |
| `portal config show` | Display current configuration with sources | `portal config show` |
| `portal config show --json` | Show configuration as JSON | `portal config show --json` |
| `portal config show --global`| Show only global configuration | `portal config show --global` |
| `portal config edit` | Edit configuration file | `portal config edit --global` |
| `portal config set` | Set a configuration value | `portal config set editor.default vscode --global` |
### Integration Management
| Command | Description | Example |
| ------------------------------ | ------------------------------------ | ------------------------------ |
| `portal integrations list` | List available integrations | `portal integrations list` |
| `portal integrations status` | Show status of all integrations | `portal integrations status` |
| `portal integrations test` | Test all available integrations | `portal integrations test` |
### Utility Commands
| Command | Description | Example |
| ---------------------- | ---------------------------- | ----------------------- |
| `portal shell install` | Install shell completions | `portal shell install` |
| `portal --version` | Show Portal version | `portal --version` |
## Configuration
Portal uses YAML configuration files with sensible defaults. You can inspect the current configuration and see exactly what values Portal will use with the `config show` command.
### Viewing Configuration
The `portal config show` command displays the merged configuration from all sources:
```bash
# Show current configuration with resolved paths and sources
portal config show
# Output as JSON for scripting
portal config show --json
# Show only global configuration
portal config show --global
```
This command shows:
- **Repository Context**: Current Git repository, project name, and branch
- **Configuration Values**: All settings with resolved paths showing exactly where worktrees will be created
- **Configuration Sources**: Which config files are loaded (project `.portal.yml`, global `~/.portal/config.yml`, and defaults)
### Example Configuration (`~/.portal/config.yml`)
```yaml
# Portal Configuration
version: "1.0"
base_dir: "../{project}_worktrees" # Where worktrees are created
# Color settings
colors:
enabled: true
sync_iterm: true # Sync colors to iTerm
sync_cursor: true # Sync colors to Cursor
sync_claude: true # Generate Claude context
high_contrast: false # Use high-contrast palette
# Editor settings
editor:
default: "cursor" # Default editor (cursor/vscode/vim)
auto_open: true # Auto-open on creation
# Shell integration
shell:
completions_enabled: true # Enable tab completion
cd_function: true # Install 'portal cd' function
prompt_integration: false # Show worktree in prompt
# Global hooks (can be overridden per-project)
hooks:
post_create:
- type: mkdir
config:
paths: ["logs", "tmp"]
- type: command
config:
command: "echo 'Worktree created'"
# Branch pattern mappings
branch_patterns:
"feature/*": "feature"
"hotfix/*": "hotfix"
"release/*": "release"
"bugfix/*": "bugfix"
```
## Hook System
Portal's powerful hook system automates worktree setup, teardown, and environment configuration. Hooks execute at specific lifecycle stages and support multiple operation types.
### Hook Types
Portal supports 8 different hook types for comprehensive automation:
| Hook Type | Purpose | Configuration |
| ---------------- | ------------------------ | ------------------------------------------------ |
| **`command`** | Execute shell commands | `command: "npm install"` |
| **`copy`** | Copy files/directories | `from: ".env.example"`, `to: ".env"` |
| **`template`** | Process Jinja2 templates | `template: "config.j2"`, `output: "config.json"` |
| **`mkdir`** | Create directories | `paths: ["logs", "tmp", "cache"]` |
| **`symlink`** | Create symbolic links | `source: "../shared"`, `target: "public"` |
| **`env_update`** | Update environment files | `file: ".env"`, `updates: {...}` |
| **`script`** | Run custom scripts | `script: "setup.sh"` |
| **`git`** | Git operations | `operation: "fetch"` |
### Hook Stages
Hooks execute at these lifecycle stages:
- **`pre_create`**: Before worktree creation
- **`post_create`**: After worktree creation
- **`pre_delete`**: Before worktree deletion
- **`post_delete`**: After worktree deletion
- **`pre_switch`**: Before switching worktrees
- **`post_switch`**: After switching worktrees
- **`pre_list`**: Before listing worktrees
- **`post_list`**: After listing worktrees
> **Note:** The `portal hooks run` command supports manually running `post_create`, `pre_delete`, `pre_switch`, and `post_switch`. The other stages are triggered automatically by their respective operations.
### Configuration Hierarchy
Portal merges configuration from multiple sources (in order of priority):
1. **`.portal.yml`** - Project configuration (highest priority)
2. **`~/.portal/config.yml`** - Global configuration
3. **Default configuration** (lowest priority)
Each level merges with the previous, allowing you to override specific values without redefining everything.
### Basic Hook Configuration
**Global Configuration** (`~/.portal/config.yml`):
```yaml
version: "1.0"
base_dir: "../{project}_worktrees"
hooks:
post_create:
- type: command
config:
command: "echo 'Setting up worktree...'"
- type: mkdir
config:
paths: ["logs", "tmp"]
```
**Project Configuration** (`.portal.yml` in project root):
```yaml
# These hooks merge with (and override) global hooks
hooks:
post_create:
# Copy files from project root to new worktree
- type: copy
config:
from: ".env.local.example"
to: ".env"
name: "Setup local environment"
# Install dependencies conditionally
- type: command
config:
command: "npm install --frozen-lockfile"
condition: "file_exists:package.json"
name: "Install Node.js dependencies"
# Auto-configure environment with dynamic values
- type: env_update
config:
file: ".env"
updates:
DATABASE_NAME: "{{project}}_{{worktree_name}}_dev"
API_PORT: "auto" # Automatically finds available port
REDIS_PREFIX: "{{worktree_name}}:"
name: "Configure development environment"
pre_delete:
# Cleanup before worktree deletion
- type: command
config:
command: "docker-compose down -v"
condition: "file_exists:docker-compose.yml"
on_error: "warn"
name: "Stop Docker services"
```
### Advanced Hook Features
#### Variable Substitution
All hooks support `{{variable}}` substitution:
```yaml
hooks:
post_create:
- type: env_update
config:
file: ".env"
updates:
DB_NAME: "{{project}}_{{worktree_name}}_dev"
WORKTREE_PATH: "{{worktree_path}}"
ASSIGNED_COLOR: "{{color_hex}}"
```
**Available Variables:**
- `{{project}}` - Project name (derived from worktree parent directory)
- `{{worktree_name}}` - Worktree name
- `{{worktree_path}}` - Full worktree path
- `{{worktree}}` - Alias for `{{worktree_path}}`
- `{{branch}}` - Git branch name
- `{{color_hex}}` - Assigned color hex code (e.g., `#3F51B5`)
- `{{home}}` - User home directory
- `{{user}}` - Current username
#### Conditional Execution
Execute hooks only when conditions are met:
```yaml
hooks:
post_create:
- type: command
config:
command: "npm install"
condition: "file_exists:package.json"
- type: command
config:
command: "pip install -r requirements.txt"
condition: "file_exists:requirements.txt"
- type: command
config:
command: "brew services start postgresql"
condition: "platform:darwin"
```
**Available Conditions:**
- `file_exists:filename` - File exists in worktree
- `file_not_exists:filename` - File doesn't exist
- `dir_exists:dirname` - Directory exists
- `env:VAR_NAME` - Environment variable is set
- `env:VAR_NAME=value` - Environment variable equals value
- `platform:darwin` - Platform check (darwin/linux/win32)
- `command_success:command` - Command executes successfully
**Negation:** Prefix any condition with `!` to negate it (e.g., `!file_exists:package.json` runs the hook only when `package.json` does not exist).
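For example, a hook that seeds a default `package.json` only when one is missing could look like this (a sketch; the command itself is illustrative):
```yaml
hooks:
  post_create:
    - type: command
      config:
        command: "npm init -y"
        condition: "!file_exists:package.json"
        name: "Create default package.json"
```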
#### Error Handling
Control what happens when hooks fail:
```yaml
hooks:
post_create:
- type: command
config:
command: "critical-setup"
on_error: "fail" # Stop execution on failure
- type: command
config:
command: "optional-setup"
on_error: "warn" # Continue with warning (default)
- type: command
config:
command: "nice-to-have"
on_error: "ignore" # Continue silently
```
### Common Hook Examples
#### Node.js Project Setup
```yaml
# .portal.yml file
hooks:
post_create:
- type: copy
config:
from: ".env.example"
to: ".env"
- type: command
config:
command: "npm install"
condition: "file_exists:package.json"
- type: env_update
config:
file: ".env"
updates:
NODE_ENV: "development"
PORT: "auto"
```
#### Rust Project Setup
```yaml
# .portal.yml file
hooks:
post_create:
- type: env_update
config:
file: ".env"
updates:
RUST_LOG: "{{project}}={{worktree_name}}=debug"
DATABASE_URL: "postgres://localhost/{{project}}_{{worktree_name}}"
name: "Configure Rust environment"
- type: command
config:
command: "cargo build --workspace"
condition: "file_exists:Cargo.toml"
name: "Build workspace"
- type: command
config:
command: "cargo sqlx database setup"
condition: "file_exists:migrations"
on_error: "warn"
name: "Setup database"
```
#### Full-Stack Development
```yaml
# .portal.yml file
hooks:
post_create:
# Setup environment files
- type: template
config:
template: "docker-compose.template.yml"
output: "docker-compose.yml"
variables:
db_port: "{{color_hex}}"
# Start services
- type: command
config:
command: "docker-compose up -d postgres redis"
# Install dependencies
- type: command
config:
command: "npm run setup:dev"
pre_delete:
# Cleanup services
- type: command
config:
command: "docker-compose down -v"
on_error: "warn"
```
### Security Features
Portal's hook system includes built-in security protections:
- **Command injection prevention**: Dangerous shell operators blocked by default
- **Path traversal protection**: Prevents `../` attacks in file operations
- **Input validation**: All hook configurations validated
- **Secure defaults**: Safe error handling and timeout protection
#### Shell Command Security
```yaml
# ❌ This is blocked for security:
- type: command
config:
command: "echo test; rm -rf /"
# ✅ Explicit bypass when needed:
- type: command
config:
command: "echo test && echo done"
allow_shell: true # Explicitly allow shell operators
```
### Project-Level Configuration
Use `.portal.yml` file in your project root for project-specific configurations:
```bash
# Project structure
my-project/
├── .git/
├── .portal.yml # Project configuration
└── src/
```
The `.portal.yml` file merges with and overrides global settings for project-specific automation.
## Tips & Tricks
### Productivity Aliases
After running `portal shell install`, you get:
- **`pw <worktree>`** - Quick switch to a worktree and cd into it
Additional aliases you can add to your shell config:
```bash
# Quick worktree commands
alias pnew="portal new"
alias plist="portal list"
alias pdel="portal delete"
# Open in editor
alias pcursor="portal cursor"
alias pvscode="portal vscode"
```
### Example Hook Scripts
**Auto-install dependencies:**
```yaml
hooks:
post_create:
- type: command
name: "Install deps"
config:
command: |
if [ -f package.json ]; then npm install
elif [ -f requirements.txt ]; then pip install -r requirements.txt
elif [ -f Gemfile ]; then bundle install
fi
allow_shell: true
```
**Rust project with workspace setup:**
```yaml
hooks:
post_create:
- type: command
name: "Build workspace"
config:
command: "cargo build --workspace"
- type: command
name: "Setup git hooks"
config:
command: "cargo husky install"
```
## Troubleshooting
### Worktrees Created in Wrong Location
Use `portal config show` to verify where worktrees will be created:
```bash
# Check current configuration and resolved paths
portal config show
# Look for this section in the output:
# Worktree Settings:
# Base Directory Template: ../{project}_worktrees
# Resolved Base Directory: ../myproject_worktrees
# Actual Worktree Path: /Users/you/code/myproject_worktrees
```
If the path is incorrect, check your `.portal.yml` file:
```yaml
# Default: Creates worktrees as siblings to main repo (recommended)
base_dir: "../{project}_worktrees"
# This would create worktrees inside the main repo (not recommended)
base_dir: "{project}_worktrees"
# This would create worktrees two levels up from the main repo
base_dir: "../../{project}_worktrees"
```
The `base_dir` path is relative to your main repository. Using `../` creates a sibling folder, which keeps worktrees organized and separate from your main codebase.
### Configuration Not Loading
Verify which configuration files are being loaded:
```bash
portal config show
# Check the "Configuration Sources" section to see which files are found
```
### Debugging Configuration Issues
```bash
# View merged configuration as JSON for detailed inspection
portal config show --json | jq '.'
# Check if running from correct Git repository
git rev-parse --show-toplevel
```
## License
MIT License - see [LICENSE](LICENSE) file for details.
---
<p align="center">
<a href="https://github.com/aureliensibiril/portal">⛩️ Portal</a> - Modern Git Worktree Manager
</p>
| text/markdown | Portal Team | null | null | null | MIT | cli, developer-tools, git, worktree | [
"Development Status :: 4 - Beta",
"Environment :: Console",
"Intended Audience :: Developers",
"License :: OSI Approved :: MIT License",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: 3.13",
"Topic :: Software Development :: Version Control :: Git"
] | [] | null | null | >=3.12 | [] | [] | [] | [
"gitpython>=3.1.40",
"jinja2>=3.1.2",
"pydantic-settings>=2.1.0",
"pydantic>=2.11.7",
"pyyaml>=6.0.1",
"click>=8.1.7; extra == \"cli\"",
"prompt-toolkit>=3.0.0; extra == \"cli\"",
"rich>=13.7.0; extra == \"cli\"",
"mypy>=1.7.0; extra == \"dev\"",
"pre-commit>=3.5.0; extra == \"dev\"",
"pytest-asyncio>=0.21.0; extra == \"dev\"",
"pytest-cov>=4.1.0; extra == \"dev\"",
"pytest>=7.4.0; extra == \"dev\"",
"ruff==0.13.1; extra == \"dev\""
] | [] | [] | [] | [
"Homepage, https://github.com/aureliensibiril/portal",
"Repository, https://github.com/aureliensibiril/portal",
"Issues, https://github.com/aureliensibiril/portal/issues"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-20T11:27:54.292477 | git_portal-0.1.1.tar.gz | 88,155 | 9e/c8/c114bc277ffe21db3b8423368d1ec9019fcf459b600fcdd9dab851c8c64d/git_portal-0.1.1.tar.gz | source | sdist | null | false | 640026e93fc7d67df8a28d568773da5e | 020cd42e86965ccf852fbc27abaa1cde3b3b781f79d5b5caadc7fbefb9caf4b8 | 9ec8c114bc277ffe21db3b8423368d1ec9019fcf459b600fcdd9dab851c8c64d | null | [] | 233 |
2.4 | notebooklm-wrapper | 0.1.1 | Pythonic wrapper for NotebookLM via notebooklm-mcp-cli MCP server | # NotebookLM Wrapper
Pythonic wrapper for Google NotebookLM via the MCP (Model Context Protocol). Connects to the `notebooklm-mcp` server and provides a clean, typed interface to all NotebookLM functionality.
## Features
- **Clean Python API** - No subprocess or CLI calls
- **Full type safety** - Pydantic models for all data
- **Sync + Async** - Use either programming style
- **28 operations** - Notebooks, sources, chat, research, studio, sharing, and more
- **Auth handled by server** - Run `nlm login` once, then use the wrapper
## Requirements
- Python 3.11+
- [notebooklm-mcp-cli](https://pypi.org/project/notebooklm-mcp-cli/) (installed automatically)
## Installation
**From PyPI** (once published):
```bash
pip install notebooklm-wrapper
```
**From source** (development or unreleased):
```bash
pip install git+https://github.com/ai-chitect/notebooklm-wrapper.git
# or, from a local clone:
pip install -e .
```
**Publishing to PyPI**: Run `./scripts/publish.sh` (runs tests, builds, then publishes). Use `./scripts/publish.sh --test` for TestPyPI. Requires [hatch](https://hatch.pypa.io/) and PyPI credentials.
**Prerequisites**: Authenticate with NotebookLM (one-time setup):
```bash
pip install notebooklm-mcp-cli
nlm login # Opens browser for authentication
```
## Quick Start
```python
from notebooklm_wrapper import NotebookLMClient
# Initialize client (uses default profile from nlm login)
client = NotebookLMClient()
# List notebooks
notebooks = client.notebook.list()
for nb in notebooks:
print(f"{nb.title} ({nb.id})")
# Create notebook
notebook = client.notebook.create(title="My Research")
# Add sources
client.source.add(
notebook.id,
"url",
url="https://example.com/article",
)
# Ask questions
response = client.chat.ask(notebook.id, "What are the main points?")
print(response.answer)
for citation in response.citations:
print(f" - {citation.source_title}")
```
## Async Usage
```python
import asyncio
from notebooklm_wrapper import AsyncNotebookLMClient
async def main():
client = AsyncNotebookLMClient()
notebooks = await client.notebook.list()
print(f"Found {len(notebooks)} notebooks")
# Create and query
notebook = await client.notebook.create(title="Async Research")
response = await client.chat.ask(notebook.id, "Summarize the key findings")
print(response.answer)
asyncio.run(main())
```
## Multi-Profile
```python
# Use specific profile (set via NOTEBOOKLM_MCP_PROFILE when spawning server)
client = NotebookLMClient(profile="work")
notebooks = client.notebook.list()
# Async with profile
async_client = AsyncNotebookLMClient(profile="personal")
```
## Multi-User / Web App
For multi-tenant apps (e.g. on Google Cloud) where each user has their own NotebookLM credentials stored in your DB (encrypted) and obtained via OAuth, use **`config_dir`** to isolate credentials per user. The wrapper spawns one MCP server process per client; with a unique `config_dir` per user, that process uses a dedicated credential store under `config_dir/.notebooklm-mcp-cli`.
1. **Store credentials:** After OAuth (or user-provided cookies), encrypt and store the cookie string in your DB keyed by user id.
2. **Per request/session:** Create a user-specific directory (or use a persistent path per user), decrypt credentials from your DB, then create a client and inject tokens:
```python
import os
from notebooklm_wrapper import AsyncNotebookLMClient
async def notebooklm_client_for_user(user_id: str, decrypted_cookies: str):
# Use a dedicated dir per user so the MCP server stores credentials there
config_dir = f"/app/data/users/{user_id}"
os.makedirs(config_dir, exist_ok=True)
client = AsyncNotebookLMClient(
profile=user_id,
config_dir=config_dir,
)
# Inject credentials so the server can use them (first time or refresh)
await client.auth.save_tokens(cookies=decrypted_cookies)
return client
# Then use the client as usual
async def handle_request(user_id: str, ...):
cookies = get_encrypted_cookies_from_db(user_id)
decrypted = decrypt(cookies)
client = await notebooklm_client_for_user(user_id, decrypted)
try:
notebooks = await client.notebook.list()
# ...
finally:
await client.disconnect()
```
See [docs/multi-user-credentials-design.md](docs/multi-user-credentials-design.md) for how credential isolation works and design details.
## API Reference
| Resource | Methods |
|----------|---------|
| `client.notebook` | `list()`, `get(id)`, `describe(id)`, `create(title)`, `rename(id, title)`, `delete(id, confirm=True)` |
| `client.source` | `add(notebook_id, type, url=...)`, `list_drive(notebook_id)`, `sync_drive(ids, confirm=True)`, `delete(id, confirm=True)`, `describe(id)`, `get_content(id)` |
| `client.chat` | `ask(notebook_id, query)`, `configure(notebook_id, goal=..., response_length=...)` |
| `client.research` | `start(query, source="web", mode="fast")`, `status(notebook_id)`, `import_sources(notebook_id, task_id)` |
| `client.studio` | `create(notebook_id, type, confirm=True)`, `status(notebook_id)`, `delete(notebook_id, artifact_id, confirm=True)` |
| `client.share` | `status(notebook_id)`, `set_public(notebook_id, is_public)`, `invite(notebook_id, email, role="viewer")` |
| `client.download` | `artifact(notebook_id, type, output_path)` |
| `client.note` | `create(notebook_id, content, title=...)`, `list(notebook_id)`, `update(notebook_id, note_id, ...)`, `delete(notebook_id, note_id, confirm=True)` |
| `client.auth` | `refresh()`, `save_tokens(cookies, ...)` |
| `client.export` | `to_docs(notebook_id, artifact_id)`, `to_sheets(notebook_id, artifact_id)` |
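For example, a short sketch chaining several of the calls above (titles, URLs, and the email address are placeholders; return objects follow the typed models described earlier):
```python
from notebooklm_wrapper import NotebookLMClient

client = NotebookLMClient()

nb = client.notebook.create(title="API Reference Demo")
client.source.add(nb.id, "url", url="https://example.com/article")

client.chat.configure(nb.id, goal="Summarize sources", response_length="short")
response = client.chat.ask(nb.id, "Give me a one-paragraph summary.")
print(response.answer)

client.share.invite(nb.id, "colleague@example.com", role="viewer")
client.notebook.delete(nb.id, confirm=True)
```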
## Error Handling
All operations raise `NotebookLMError` subclasses on failure:
```python
from notebooklm_wrapper import (
NotebookLMClient,
AuthenticationError,
NotFoundError,
ValidationError,
RateLimitError,
GenerationError,
)
client = NotebookLMClient()
try:
notebooks = client.notebook.list()
except AuthenticationError:
print("Run 'nlm login' to authenticate")
except NotFoundError as e:
print(f"Not found: {e}")
except RateLimitError as e:
print(f"Rate limited. Retry after: {e.retry_after}s")
```
Error messages include the tool name for easier debugging (e.g. `[notebook_list] Please login first`).
## Development
```bash
# Clone the repository
git clone https://github.com/ai-chitect/notebooklm-wrapper.git
cd notebooklm-wrapper
# Create virtual environment
python -m venv .venv
source .venv/bin/activate # Linux/macOS
# .venv\Scripts\activate # Windows
# Install in editable mode with dev dependencies
pip install -e ".[dev]"
# Run tests
pytest
# Run tests with coverage
pytest --cov=notebooklm_wrapper --cov-report=term-missing
# Lint
ruff check src/ tests/
# Format
black src/ tests/
isort src/ tests/
# Type check
mypy src/
# Publish to PyPI (after tests pass)
./scripts/publish.sh
# Test OAuth + list notebooks (integration-style; opens browser for login)
python scripts/test_oauth_list_notebooks.py
# Reuse credentials in a directory (skip login next time):
python scripts/test_oauth_list_notebooks.py --config-dir ./tmp_oauth_test
python scripts/test_oauth_list_notebooks.py --config-dir ./tmp_oauth_test --skip-login
```
## Acknowledgments
This wrapper connects to the [notebooklm-mcp-cli](https://github.com/jacob-bd/notebooklm-mcp-cli) MCP server by [Jacob Ben-David](https://github.com/jacob-bd), which provides the underlying NotebookLM integration. That project is MIT-licensed and compatible with this wrapper.
## License
MIT
| text/markdown | ai-chitect | null | null | null | null | ai, mcp, notebooklm, research, wrapper | [
"Development Status :: 3 - Alpha",
"Intended Audience :: Developers",
"License :: OSI Approved :: MIT License",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: 3.13",
"Typing :: Typed"
] | [] | null | null | >=3.11 | [] | [] | [] | [
"mcp>=1.26.0",
"notebooklm-mcp-cli>=0.3.0",
"pydantic>=2.0",
"black>=24.0; extra == \"dev\"",
"hatch>=1.0; extra == \"dev\"",
"isort>=5.13; extra == \"dev\"",
"mypy>=1.13; extra == \"dev\"",
"pytest-asyncio>=0.24; extra == \"dev\"",
"pytest-cov>=6.0; extra == \"dev\"",
"pytest-mock>=3.0; extra == \"dev\"",
"pytest>=8.0; extra == \"dev\"",
"ruff>=0.8; extra == \"dev\""
] | [] | [] | [] | [
"Homepage, https://github.com/ai-chitect/notebooklm-wrapper",
"Repository, https://github.com/ai-chitect/notebooklm-wrapper",
"Issues, https://github.com/ai-chitect/notebooklm-wrapper/issues"
] | Hatch/1.16.3 cpython/3.13.5 HTTPX/0.28.1 | 2026-02-20T11:27:49.178957 | notebooklm_wrapper-0.1.1.tar.gz | 143,303 | 62/9c/0758ca447ecbe5af3bc51463de00b2be3c29129ac26c17cbd6369bcc421b/notebooklm_wrapper-0.1.1.tar.gz | source | sdist | null | false | 52800d8725c4cca0a44d30de2bdeb1ac | 7b98b62d0199a0f0e7909332bac1b5f1b8943a709bc13bf1d5eb6aaab74f1e76 | 629c0758ca447ecbe5af3bc51463de00b2be3c29129ac26c17cbd6369bcc421b | MIT | [
"LICENSE"
] | 238 |
2.4 | ground-atc | 0.1.2 | Thin wrapper around OSMnx for generating taxi instructions for airports | # Ground ATC
Ground ATC produces taxi instructions for aircraft at any airport around the world.
# Installation
`pip install ground-atc`
or
`uv add ground-atc`
# Usage
```python
>>>from ground_atc import GroundController
>>>airport = GroundController(icao_code="YMML")
>>>instructions = airport.taxi_from_gate_to_runway(gate="13", runway="16")
Taxi via, Golf, Sierra, Uniform, Alpha, Bravo, hold short runway 16
```
# Notes
- This tool is intended to work with LLM-based applications and therefore returns the instructions as a string. This can be turned off by setting `turn_instructions=False` (see the sketch after these notes).
- It uses the OSMnx API but requires no API token or authentication. Airport data is cached locally; delete the cache if you need the latest data.
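A minimal sketch of disabling the spoken-style output, assuming `turn_instructions` is accepted as a `GroundController` keyword (check the actual API; it may belong on the taxi method instead):
```python
from ground_atc import GroundController

# Assumption: turn_instructions is a constructor keyword; verify against the API.
airport = GroundController(icao_code="YMML", turn_instructions=False)
route = airport.taxi_from_gate_to_runway(gate="13", runway="16")
print(route)  # structured taxi route rather than a spoken-style string
```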
# Contributions
Contributions are welcome. Feel free to create an issue or raise a PR | text/markdown | Karan Parekh | Karan Parekh <karanparekh501@gmail.com> | null | null | null | atc, airport, aviation, ground, controller | [] | [] | null | null | >=3.10 | [] | [] | [] | [
"osmnx>=2.0.7",
"pydantic>=2.12.5"
] | [] | [] | [] | [
"Repository, https://github.com/karan-parekh/ground-atc.git"
] | uv/0.9.21 {"installer":{"name":"uv","version":"0.9.21","subcommand":["publish"]},"python":null,"implementation":{"name":null,"version":null},"distro":null,"system":{"name":null,"release":null},"cpu":null,"openssl_version":null,"setuptools_version":null,"rustc_version":null,"ci":null} | 2026-02-20T11:27:45.801212 | ground_atc-0.1.2.tar.gz | 7,043 | 83/80/055d7ed3ce7127e6de26af7e9d7727c0d9490d15c8061a961d9f9085d376/ground_atc-0.1.2.tar.gz | source | sdist | null | false | 4dccac78a0e0d1a5efa9b1e428088c82 | 4a209754dfa8aa3eb82a49566a60d102696816dabaf9d79f7d5c731071aa622c | 8380055d7ed3ce7127e6de26af7e9d7727c0d9490d15c8061a961d9f9085d376 | MIT | [
"LICENSE"
] | 236 |
2.3 | databricks-sqlalchemy | 2.0.9 | Databricks SQLAlchemy plugin for Python | ## Databricks dialect for SQLALchemy 2.0
The Databricks dialect for SQLAlchemy serves as bridge between [SQLAlchemy](https://www.sqlalchemy.org/) and the Databricks SQL Python driver. A working example demonstrating usage can be found in `sqlalchemy_example.py`.
## Installation
To install the dialect and its dependencies:
```shell
pip install databricks-sqlalchemy
```
If you also plan to use `alembic`, install it as well:
```shell
pip install alembic
```
## Connection String
Every SQLAlchemy application that connects to a database needs to use an [Engine](https://docs.sqlalchemy.org/en/20/tutorial/engine.html#tutorial-engine), which you can create by passing a connection string to `create_engine`. The connection string must include these components:
1. Host
2. HTTP Path for a compute resource
3. API access token
4. Initial catalog for the connection
5. Initial schema for the connection
**Note: Our dialect is built and tested on workspaces with Unity Catalog enabled. Support for the `hive_metastore` catalog is untested.**
For example:
```python
import os
from sqlalchemy import create_engine
host = os.getenv("DATABRICKS_SERVER_HOSTNAME")
http_path = os.getenv("DATABRICKS_HTTP_PATH")
access_token = os.getenv("DATABRICKS_TOKEN")
catalog = os.getenv("DATABRICKS_CATALOG")
schema = os.getenv("DATABRICKS_SCHEMA")
engine = create_engine(
f"databricks://token:{access_token}@{host}?http_path={http_path}&catalog={catalog}&schema={schema}"
)
```
## Types
The [SQLAlchemy type hierarchy](https://docs.sqlalchemy.org/en/20/core/type_basics.html) contains backend-agnostic type implementations (represented in CamelCase) and backend-specific types (represented in UPPERCASE). The majority of SQLAlchemy's [CamelCase](https://docs.sqlalchemy.org/en/20/core/type_basics.html#the-camelcase-datatypes) types are supported. This means that a SQLAlchemy application using these types should "just work" with Databricks.
|SQLAlchemy Type|Databricks SQL Type|
|-|-|
[`BigInteger`](https://docs.sqlalchemy.org/en/20/core/type_basics.html#sqlalchemy.types.BigInteger)| [`BIGINT`](https://docs.databricks.com/en/sql/language-manual/data-types/bigint-type.html)
[`LargeBinary`](https://docs.sqlalchemy.org/en/20/core/type_basics.html#sqlalchemy.types.LargeBinary)| (not supported)|
[`Boolean`](https://docs.sqlalchemy.org/en/20/core/type_basics.html#sqlalchemy.types.Boolean)| [`BOOLEAN`](https://docs.databricks.com/en/sql/language-manual/data-types/boolean-type.html)
[`Date`](https://docs.sqlalchemy.org/en/20/core/type_basics.html#sqlalchemy.types.Date)| [`DATE`](https://docs.databricks.com/en/sql/language-manual/data-types/date-type.html)
[`DateTime`](https://docs.sqlalchemy.org/en/20/core/type_basics.html#sqlalchemy.types.DateTime)| [`TIMESTAMP_NTZ`](https://docs.databricks.com/en/sql/language-manual/data-types/timestamp-ntz-type.html)|
[`Double`](https://docs.sqlalchemy.org/en/20/core/type_basics.html#sqlalchemy.types.Double)| [`DOUBLE`](https://docs.databricks.com/en/sql/language-manual/data-types/double-type.html)
[`Enum`](https://docs.sqlalchemy.org/en/20/core/type_basics.html#sqlalchemy.types.Enum)| (not supported)|
[`Float`](https://docs.sqlalchemy.org/en/20/core/type_basics.html#sqlalchemy.types.Float)| [`FLOAT`](https://docs.databricks.com/en/sql/language-manual/data-types/float-type.html)
[`Integer`](https://docs.sqlalchemy.org/en/20/core/type_basics.html#sqlalchemy.types.Integer)| [`INT`](https://docs.databricks.com/en/sql/language-manual/data-types/int-type.html)
[`Numeric`](https://docs.sqlalchemy.org/en/20/core/type_basics.html#sqlalchemy.types.Numeric)| [`DECIMAL`](https://docs.databricks.com/en/sql/language-manual/data-types/decimal-type.html)|
[`PickleType`](https://docs.sqlalchemy.org/en/20/core/type_basics.html#sqlalchemy.types.PickleType)| (not supported)|
[`SmallInteger`](https://docs.sqlalchemy.org/en/20/core/type_basics.html#sqlalchemy.types.SmallInteger)| [`SMALLINT`](https://docs.databricks.com/en/sql/language-manual/data-types/smallint-type.html)
[`String`](https://docs.sqlalchemy.org/en/20/core/type_basics.html#sqlalchemy.types.String)| [`STRING`](https://docs.databricks.com/en/sql/language-manual/data-types/string-type.html)|
[`Text`](https://docs.sqlalchemy.org/en/20/core/type_basics.html#sqlalchemy.types.Text)| [`STRING`](https://docs.databricks.com/en/sql/language-manual/data-types/string-type.html)|
[`Time`](https://docs.sqlalchemy.org/en/20/core/type_basics.html#sqlalchemy.types.Time)| [`STRING`](https://docs.databricks.com/en/sql/language-manual/data-types/string-type.html)|
[`Unicode`](https://docs.sqlalchemy.org/en/20/core/type_basics.html#sqlalchemy.types.Unicode)| [`STRING`](https://docs.databricks.com/en/sql/language-manual/data-types/string-type.html)|
[`UnicodeText`](https://docs.sqlalchemy.org/en/20/core/type_basics.html#sqlalchemy.types.UnicodeText)| [`STRING`](https://docs.databricks.com/en/sql/language-manual/data-types/string-type.html)|
[`Uuid`](https://docs.sqlalchemy.org/en/20/core/type_basics.html#sqlalchemy.types.Uuid)| [`STRING`](https://docs.databricks.com/en/sql/language-manual/data-types/string-type.html)
In addition, the dialect exposes three UPPERCASE SQLAlchemy types which are specific to Databricks:
- [`databricks.sqlalchemy.TINYINT`](https://docs.databricks.com/en/sql/language-manual/data-types/tinyint-type.html)
- [`databricks.sqlalchemy.TIMESTAMP`](https://docs.databricks.com/en/sql/language-manual/data-types/timestamp-type.html)
- [`databricks.sqlalchemy.TIMESTAMP_NTZ`](https://docs.databricks.com/en/sql/language-manual/data-types/timestamp-ntz-type.html)
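For illustration, a minimal declarative sketch using these Databricks-specific types (the model, table, and column names are placeholders; assumes a standard SQLAlchemy 2.0 declarative `Base`):
```python
from sqlalchemy import BigInteger, Column
from sqlalchemy.orm import DeclarativeBase
from databricks.sqlalchemy import TINYINT, TIMESTAMP, TIMESTAMP_NTZ

class Base(DeclarativeBase):
    pass

class Reading(Base):
    __tablename__ = "readings"
    id = Column(BigInteger, primary_key=True)
    level = Column(TINYINT)            # 1-byte integer, Databricks-specific
    taken_at = Column(TIMESTAMP)       # timezone-aware timestamp
    logged_at = Column(TIMESTAMP_NTZ)  # timezone-naive timestamp
```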
### `LargeBinary()` and `PickleType()`
Databricks Runtime doesn't currently support binding of binary values in SQL queries, which is a prerequisite for this functionality in SQLAlchemy.
### `Enum()` and `CHECK` constraints
Support for `CHECK` constraints is not implemented in this dialect. Support is planned for a future release.
SQLAlchemy's `Enum()` type depends on `CHECK` constraints and is therefore not yet supported.
### `DateTime()`, `TIMESTAMP_NTZ()`, and `TIMESTAMP()`
Databricks Runtime provides two datetime-like types: `TIMESTAMP` which is always timezone-aware and `TIMESTAMP_NTZ` which is timezone agnostic. Both types can be imported from `databricks.sqlalchemy` and used in your models.
The SQLAlchemy documentation indicates that `DateTime()` is not timezone-aware by default. So our dialect maps this type to `TIMESTAMP_NTZ()`. In practice, you should never need to use `TIMESTAMP_NTZ()` directly. Just use `DateTime()`.
If you need your field to be timezone-aware, you can import `TIMESTAMP()` and use it instead.
_Note that SQLAlchemy documentation suggests that you can declare a `DateTime()` with `timezone=True` on supported backends. However, if you do this with the Databricks dialect, the `timezone` argument will be ignored._
```python
from sqlalchemy import DateTime
from databricks.sqlalchemy import TIMESTAMP
class SomeModel(Base):
some_date_without_timezone = DateTime()
some_date_with_timezone = TIMESTAMP()
```
### `String()`, `Text()`, `Unicode()`, and `UnicodeText()`
Databricks Runtime doesn't support length limitations for `STRING` fields. Therefore `String()` or `String(1)` or `String(255)` will all produce identical DDL. Since `Text()`, `Unicode()`, `UnicodeText()` all use the same underlying type in Databricks SQL, they will generate equivalent DDL.
### `Time()`
Databricks Runtime doesn't have a native time-like data type. To implement this type in SQLAlchemy, our dialect stores SQLAlchemy `Time()` values in a `STRING` field. Unlike `DateTime` above, this type can optionally support timezone awareness (since the dialect is in complete control of the strings that we write to the Delta table).
```python
from sqlalchemy import Time
class SomeModel(Base):
time_tz = Time(timezone=True)
time_ntz = Time()
```
# Usage Notes
## `Identity()` and `autoincrement`
Identity and generated value support is currently limited in this dialect.
When defining models, SQLAlchemy types can accept an [`autoincrement`](https://docs.sqlalchemy.org/en/20/core/metadata.html#sqlalchemy.schema.Column.params.autoincrement) argument. In our dialect, this argument is currently ignored. To create an auto-incrementing field in your model you can pass in an explicit [`Identity()`](https://docs.sqlalchemy.org/en/20/core/defaults.html#identity-ddl) instead.
Furthermore, in Databricks Runtime, only `BIGINT` fields can be configured to auto-increment. So in SQLAlchemy, you must use the `BigInteger()` type.
```python
from sqlalchemy import BigInteger, Column, Identity, String
class SomeModel(Base):
    id = Column(BigInteger, Identity(), primary_key=True)
    value = Column(String())
```
When calling `Base.metadata.create_all()`, the executed DDL will include `GENERATED ALWAYS AS IDENTITY` for the `id` column. This is useful when using SQLAlchemy to generate tables. However, as of this writing, `Identity()` constructs are not captured when SQLAlchemy reflects a table's metadata (support for this is planned).
## Parameters
`databricks-sql-connector` supports two approaches to parameterizing SQL queries: native and inline. Our SQLAlchemy 2.0 dialect always uses the native approach and is therefore limited to DBR 14.2 and above. If you are writing parameterized queries to be executed by SQLAlchemy, you must use the "named" paramstyle (`:param`). Read more about parameterization in `docs/parameters.md`.
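As a short sketch, a named-parameter query executed through SQLAlchemy Core (connection string and table name are placeholders; see the Connection String section above):
```python
from sqlalchemy import create_engine, text

engine = create_engine(
    "databricks://token:<token>@<host>?http_path=<path>&catalog=main&schema=test"
)
with engine.connect() as conn:
    row = conn.execute(
        text("SELECT * FROM some_table WHERE id = :record_id"),  # "named" paramstyle
        {"record_id": 42},
    ).first()
```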
## Usage with pandas
Use [`pandas.DataFrame.to_sql`](https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.to_sql.html) and [`pandas.read_sql`](https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.read_sql.html#pandas.read_sql) to write and read from Databricks SQL. These methods both accept a SQLAlchemy connection to interact with Databricks.
### Read from Databricks SQL into pandas
```python
from sqlalchemy import create_engine
import pandas as pd
engine = create_engine("databricks://token:dapi***@***.cloud.databricks.com?http_path=***&catalog=main&schema=test")
with engine.connect() as conn:
# This will read the contents of `main.test.some_table`
df = pd.read_sql("some_table", conn)
```
### Write to Databricks SQL from pandas
```python
from sqlalchemy import create_engine
import pandas as pd
engine = create_engine("databricks://token:dapi***@***.cloud.databricks.com?http_path=***&catalog=main&schema=test")
squares = [(i, i * i) for i in range(100)]
df = pd.DataFrame(data=squares,columns=['x','x_squared'])
with engine.connect() as conn:
# This will write the contents of `df` to `main.test.squares`
df.to_sql('squares',conn)
```
## [`PrimaryKey()`](https://docs.sqlalchemy.org/en/20/core/constraints.html#sqlalchemy.schema.PrimaryKeyConstraint) and [`ForeignKey()`](https://docs.sqlalchemy.org/en/20/core/constraints.html#defining-foreign-keys)
Unity Catalog workspaces in Databricks support PRIMARY KEY and FOREIGN KEY constraints. _Note that Databricks Runtime does not enforce the integrity of FOREIGN KEY constraints_. You can establish a primary key by setting `primary_key=True` when defining a column.
When building `ForeignKey` or `ForeignKeyConstraint` objects, you must specify a `name` for the constraint.
If your model definition requires a self-referential FOREIGN KEY constraint, you must include `use_alter=True` when defining the relationship.
```python
from sqlalchemy import Table, Column, ForeignKey, BigInteger, String
users = Table(
"users",
metadata_obj,
Column("id", BigInteger, primary_key=True),
Column("name", String(), nullable=False),
Column("email", String()),
Column("manager_id", ForeignKey("users.id", name="fk_users_manager_id_x_users_id", use_alter=True))
)
```
| text/markdown | Databricks | databricks-sql-connector-maintainers@databricks.com | null | null | Apache-2.0 | null | [
"License :: OSI Approved :: Apache Software License",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.8",
"Programming Language :: Python :: 3.9",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: 3.13"
] | [] | null | null | <4.0.0,>=3.8.0 | [] | [] | [] | [
"databricks_sql_connector>=4.0.0",
"pyarrow>=14.0.1",
"sqlalchemy>=2.0.21"
] | [] | [] | [] | [
"Bug Tracker, https://github.com/databricks/databricks-sqlalchemy/issues",
"Homepage, https://github.com/databricks/databricks-sqlalchemy"
] | twine/6.2.0 CPython/3.10.11 | 2026-02-20T11:27:13.051912 | databricks_sqlalchemy-2.0.9.tar.gz | 26,739 | 8b/56/09d2b75e3cfa77e88f96dd832c6f4a2bacbd3134fc9af2b0b0a118c3272c/databricks_sqlalchemy-2.0.9.tar.gz | source | sdist | null | false | f95acd224f46341ec4704839f8e5e103 | 9975830df92541485c0560998ee90f1ac0f76d9d3de4940582e951c09b49b9ed | 8b5609d2b75e3cfa77e88f96dd832c6f4a2bacbd3134fc9af2b0b0a118c3272c | null | [] | 126,428 |
2.4 | ramses-rf | 0.54.3 | A stateful RAMSES-II protocol decoder & analyser. | 


[](https://github.com/ramses-rf/ramses_rf/actions/workflows/check-cov.yml)
## Overview
**ramses_rf** is a Python client library/CLI utility used to interface with some Honeywell-compatible HVAC & CH/DHW systems that use 868MHz RF, such as:
- (Heat) **evohome**, **Sundial**, **Hometronic**, **Chronotherm**
- (HVAC) **Itho**, **Orcon**, **Nuaire**, **Vasco**, **ClimaRad**
> [!NOTE]
> Ramses RF can **not** interpret the new Honeywell Ramses-III (R3) messages used after a firmware upgrade since 2025, nor those from (some) new devices.
It requires a USB-to-RF device, either a Honeywell HGI80 (somewhat rare, expensive) or a USB/MQTT dongle running the [ramses_esp](https://github.com/IndaloTech/ramses_esp) or [evofw3](https://github.com/ghoti57/evofw3) firmware, such as the one from [here](https://indalo-tech.onlineweb.shop/) or your own ESP32-S3-WROOM-1 N16R8 with a CC1100 transponder.
It does four things:
- decodes RAMSES II-compatible packets and converts them into useful JSON
- builds a picture (schema, config & state) of evohome-compatible CH/DHW systems - either passively (by eavesdropping), or actively (probing)
- allows you to send commands to CH/DHW and HVAC systems, or monitor them for state changes
- allows you to emulate some hardware devices (remotes)
> [!WARNING]
> This library is not affiliated with Honeywell, Airios nor any final manufacturer. The developers take no responsibility for anything that may happen to your devices because of this library.
For CH/DHW, the simplest way to know if it will work with your system is to identify the box connected to your boiler/HVAC appliance as one of:
- **R8810A**: OpenTherm Bridge
- **BDR91A**: Wireless Relay (also BDR91T)
- **HC60NG**: Wireless Relay (older hardware)
Other systems that use this protocol may well work, such as some Itho Daalderop HVAC systems. YMMV.
This library includes a CLI and can be used as a standalone tool, but it is also used as a client library by:
- [ramses_cc](https://github.com/ramses-rf/ramses_cc), a Home Assistant integration
- [evohome-Listener](https://github.com/smar000/evohome-Listener), an MQTT gateway
## Installation
To use the `ramses_cc` Integration in Home Assistant, just install `Ramses RF` from HACS. It will take care of installing this library. See the [`Ramses_cc wiki`](https://github.com/ramses-rf/ramses_cc/wiki/1.-Installation) for details.
### Ramses_rf CLI
To install the `ramses_rf` command line client:
```
git clone https://github.com/ramses-rf/ramses_rf
cd ramses_rf
pip install -r requirements/requirements.txt
pip install -e .
```
The CLI is called `client.py` and is included in the code root.
It has options to monitor and parse Ramses-II traffic to screen or a log file, and to parse a file containing Ramses-II messages to the screen.
See the [client.py CLI wiki page](https://github.com/ramses-rf/ramses_rf/wiki/2.-The-client.py-command-line) for instructions.
For code development, some more setup is required. Please follow the steps in our [Developer's Resource](README-developers.md)
| text/markdown | null | David Bonnes <zxdavb@bonnes.me>, Egbert Broerse <dcc2@ebroerse.nl> | null | Egbert Broerse <dcc2@ebroerse.nl> | null | airios, chronotherm, climarad, evohome, hometronics, honeywell, itho, nuaire, orcon, ramses, resideo, round thermostat, sundial, vasco | [
"Programming Language :: Python :: 3.13",
"Topic :: Home Automation"
] | [] | null | null | >=3.12 | [] | [] | [] | [
"colorlog>=6.9.0",
"paho-mqtt>=2.1.0",
"pyserial-asyncio-fast>=0.16",
"voluptuous>=0.15.2"
] | [] | [] | [] | [
"Homepage, https://github.com/ramses-rf/ramses_rf",
"Bug Tracker, https://github.com/ramses-rf/ramses_rf/issues",
"Wiki, https://github.com/ramses-rf/ramses_rf/wiki"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-20T11:27:00.224923 | ramses_rf-0.54.3.tar.gz | 1,083,940 | 12/6a/ba9369fb96b6580a2aede610575ad6d9f660eefe4c2505a68f7ab4feedcd/ramses_rf-0.54.3.tar.gz | source | sdist | null | false | 5be2e13acda7403565676b783cfc94a7 | 5d0b5de159ec33902930a6e2f31b620a911c8b9da4bd5ed10acb6b29de9cf4f3 | 126aba9369fb96b6580a2aede610575ad6d9f660eefe4c2505a68f7ab4feedcd | MIT | [
"LICENSE"
] | 290 |
2.4 | agent-framework-lib | 0.5.9.post12 | A comprehensive Python framework for building and serving conversational AI agents with FastAPI | # Agent Framework Library
A comprehensive Python framework for building and serving conversational AI agents with FastAPI. Create production-ready AI agents in minutes with automatic session management, streaming responses, file storage, and easy MCP integration.
**Key Features:**
- 🚀 **Quick Setup** - Create agents in 10-15 minutes
- 🔌 **Easy MCP Integration** - Connect to external tools effortlessly
- 🛠️ **Off-the-Shelf Tools** - Pre-built tools for files, PDFs, charts, and more
- 🎯 **Skills System** - On-demand capability loading for token optimization
- 🔄 **Multi-Provider Support** - OpenAI, Anthropic, Gemini
- 🎯 **Smart Model Routing** - Auto mode selects the best model per query complexity
- 💾 **Session Management** - Automatic conversation persistence
- 📁 **File Storage** - Local, S3, MinIO support
## Installation
```bash
# Install with LlamaIndex support (recommended)
uv add agent-framework-lib[llamaindex]
# Install with MCP support
uv add agent-framework-lib[llamaindex,mcp]
# Install with all features
uv add agent-framework-lib[all]
# Or with pip
pip install agent-framework-lib[llamaindex]
```
**Available extras:** `llamaindex`, `mcp`, `mongodb`, `s3`, `minio`, `multimodal`
**Optional: System Dependencies**
The framework **automatically detects and configures** system libraries. Manual installation is only needed if you encounter issues:
**For PDF Generation (WeasyPrint):**
```bash
# macOS
brew install pango gdk-pixbuf libffi cairo
# Ubuntu/Debian
sudo apt-get install libpango-1.0-0 libpangoft2-1.0-0 libgdk-pixbuf2.0-0 libffi-dev libcairo2
# Fedora/RHEL
sudo dnf install pango gdk-pixbuf2 libffi-devel cairo
```
**For Chart/Mermaid Image Generation (Playwright):**
```bash
# Install Playwright and browser
uv add playwright
playwright install chromium
# Or with pip
pip install playwright
playwright install chromium
```
**For MCP Python Server (Deno):**
```bash
# macOS/Linux
curl -fsSL https://deno.land/install.sh | sh
# Windows (PowerShell)
irm https://deno.land/install.ps1 | iex
```
### Post-Installation Script (Recommended)
The framework includes a CLI script that automatically installs all optional dependencies (Playwright browsers and Deno runtime):
```bash
# Run after installing the package
agent-framework-post-install
```
This script:
- ✅ Installs Playwright Chromium browser (for charts, mermaid diagrams, tables)
- ✅ Installs Deno runtime (for MCP servers like `mcp-run-python`)
- ✅ Works on Windows, macOS, and Linux
- ✅ Detects if dependencies are already installed (fast path)
**Note:** The framework also attempts lazy auto-installation when tools are first used, but running the post-install script ensures everything is ready upfront.
The framework handles library path configuration automatically on startup.
## 🤖 Framework Helper Agent
The framework includes a built-in AI assistant that helps you create agents! Access it at `/helper` when running any agent server.
**Features:**
- 🧠 Deep knowledge of framework documentation, examples, and source code
- 🔍 Search tools for docs and examples
- 💡 Code generation assistance
- 📚 Indexed knowledge base (30+ files)
- 🗄️ Persistent knowledge graph (FalkorDB) - survives server restarts
**Access:** `http://localhost:8000/helper`
The helper agent indexes:
- All documentation (`docs/*.md`)
- All examples (`examples/*.py`)
- Core framework source (tools, storage, memory, session management)
**Re-indexing:** If you update documentation or examples, trigger a re-index:
```bash
curl -X POST http://localhost:8000/helper/reindex
```
**Model Configuration:**
By default, the helper agent uses Claude (if `ANTHROPIC_API_KEY` is set) or GPT-5 (if `OPENAI_API_KEY` is set). You can override this with:
```env
# Force a specific model (useful if your Anthropic key has reached its limit)
HELPER_AGENT_MODEL=gpt-5
```
**Example questions:**
- "How do I create an agent with memory?"
- "Show me how to use PDF tools"
- "What's the difference between Memori and Graphiti?"
- "How do I configure S3 storage?"
- "Search the web for LlamaIndex best practices"
## 🐳 Docker Development Environment
For local development, use Docker Compose to run all external services (Elasticsearch, MongoDB, PostgreSQL, FalkorDB, MinIO):
```bash
# Start all services
docker-compose --profile all up -d
# Copy environment template
cp .env.docker .env
# Edit .env to add your LLM API keys
# Stop services
docker-compose down
```
Use profiles to start only what you need:
```bash
docker-compose --profile storage up -d # Elasticsearch, MongoDB, MinIO
docker-compose --profile memory up -d # PostgreSQL, FalkorDB
```
**Full documentation:** See [Docker Setup Guide](docs/DOCKER_SETUP.md) for service details, ports, credentials, and troubleshooting.
## 🚀 Getting Started
### Create Your First Agent
Here's a complete, working agent with LlamaIndex:
```python
from typing import List
from agent_framework import LlamaIndexAgent, create_basic_agent_server
class MyAgent(LlamaIndexAgent):
def __init__(self):
super().__init__(
agent_id="my_calculator_agent",
name="Calculator Agent",
description="A helpful calculator assistant that can perform basic math operations."
)
def get_agent_prompt(self) -> str:
"""Define your agent's behavior and personality."""
return "You are a helpful calculator assistant."
def get_agent_tools(self) -> List[callable]:
"""Define the tools your agent can use.
Tools are automatically converted to LlamaIndex FunctionTool instances.
The function name becomes the tool name, and the docstring becomes the description.
"""
def add(a: float, b: float) -> float:
"""Add two numbers together."""
return a + b
def multiply(a: float, b: float) -> float:
"""Multiply two numbers together."""
return a * b
# Just return the functions - automatic conversion to FunctionTool
return [add, multiply]
# Start server - includes streaming, session management, web UI
create_basic_agent_server(MyAgent, port=8000)
```
**Required Methods:**
- `__init__()` - Call `super().__init__(agent_id, name, description)` with required identity info
- `get_agent_prompt()` - Return system prompt string
- `get_agent_tools()` - Return list of tools (can be empty)
**Optional Methods (have default implementations):**
- `create_fresh_context()` - Create new LlamaIndex Context (default provided)
- `serialize_context(ctx)` - Serialize context for persistence (default provided)
- `deserialize_context(state)` - Deserialize context from state (default provided)
- `initialize_agent()` - Customize agent creation (default: FunctionAgent)
- `configure_session()` - Add session setup logic
**That's it!** The framework provides default implementations for context management (state persistence), so you only need to implement the three core methods above.
**Run it:**
```bash
# Set your API key
export OPENAI_API_KEY=sk-your-key-here
# Run the agent
python my_agent.py
# Open http://localhost:8000/ui
```
## ⚙️ Configure Your Agent
### Environment Setup
Create a `.env` file:
```env
# Required: At least one API key
OPENAI_API_KEY=sk-your-openai-key
ANTHROPIC_API_KEY=sk-ant-your-anthropic-key
GEMINI_API_KEY=your-gemini-key
# Model Configuration
DEFAULT_MODEL=gpt-5-mini
# Multi-Model Routing (Auto Mode)
DEFAULT_MODEL_MODE=auto # "auto" or specific model name
AUTO_CLASSIFIER_MODEL=gpt-4o-mini # Model for complexity classification
PREFERRED_LIGHT_MODELS=gpt-4o-mini,claude-haiku-4-5-20251001
PREFERRED_STANDARD_MODELS=gpt-5-mini,claude-sonnet-4-5-20250929
PREFERRED_ADVANCED_MODELS=gpt-5,claude-opus-4-6
# Session Storage (optional)
SESSION_STORAGE_TYPE=memory # or "mongodb" or "elasticsearch"
MONGODB_CONNECTION_STRING=mongodb://localhost:27017
MONGODB_DATABASE_NAME=agent_sessions
# File Storage (optional)
LOCAL_STORAGE_PATH=./file_storage
AWS_S3_BUCKET=my-bucket
S3_AS_DEFAULT=false
```
### Remote Configuration (Elasticsearch-Managed Agents)
For production deployments, you can configure agents to be managed entirely via Elasticsearch, allowing ops teams to modify prompts and models at runtime without code deployments.
**Enable remote configuration:**
```python
from agent_framework import LlamaIndexAgent
class OpsManagedAgent(LlamaIndexAgent):
def __init__(self):
super().__init__(
agent_id="ops_managed_agent",
name="Ops Managed Agent",
description="An agent configured via Elasticsearch."
)
@classmethod
def get_use_remote_config(cls) -> bool:
"""Enable Elasticsearch-only configuration."""
return True
def get_agent_prompt(self) -> str:
# Fallback prompt if ES config not available
return "You are a helpful assistant."
def get_agent_tools(self) -> list:
return []
```
**Behavior:**
| `use_remote_config` | Server Startup | Session Init |
|---------------------|----------------|--------------|
| `False` (default) | Pushes hardcoded config to ES if different | Merges ES config with hardcoded |
| `True` | Skips pushing to ES | Reads ES config only (no merge) |
**When to use:**
- `use_remote_config=False` (default): Code-managed agents where developers control the config
- `use_remote_config=True`: Ops-managed agents where configuration is modified via ES/Kibana
**Fallback:** If `use_remote_config=True` but no ES config exists, the system falls back to hardcoded config and pushes it to ES with a warning.
## 🎯 Multi-Model Selection
The framework includes intelligent model routing that automatically selects the best model based on query complexity.
### Auto Mode (Default)
When `DEFAULT_MODEL_MODE=auto`, the system analyzes each query and routes it to the appropriate tier:
| Tier | Icon | Use Case | Example Models |
|------|------|----------|----------------|
| **Light** | 💨 | Simple queries, greetings, basic info | gpt-4o-mini, claude-haiku |
| **Standard** | ⚖️ | Typical questions, explanations | gpt-5-mini, claude-sonnet |
| **Advanced** | 🧠 | Complex analysis, creative tasks | gpt-5, claude-opus |
**Benefits:**
- 💰 **Cost optimization** - Use cheaper models for simple queries
- ⚡ **Speed** - Faster responses for trivial messages
- 🎯 **Quality** - Powerful models for complex tasks
### Manual Model Selection
Users can also select a specific model from the UI dropdown:
- Models grouped by tier with availability indicators (✓/✗)
- Preference persisted in localStorage
- Real-time routing indicator shows selected model
### Configuration
```env
# Default mode when no user preference
DEFAULT_MODEL_MODE=auto
# Model used for complexity classification (should be fast and cheap)
AUTO_CLASSIFIER_MODEL=gpt-4o-mini
# Preferred models per tier (comma-separated, in order of preference)
PREFERRED_LIGHT_MODELS=gpt-4o-mini,claude-haiku-4-5-20251001,gemini-2.5-flash-lite
PREFERRED_STANDARD_MODELS=gpt-5-mini,claude-sonnet-4-5-20250929,gemini-2.5-flash
PREFERRED_ADVANCED_MODELS=gpt-5,claude-opus-4-6,gemini-2.5-pro
```
### API Endpoint
```bash
# Get available models
curl http://localhost:8000/api/models
# Response
{
"models_by_tier": {
"light": [{"id": "gpt-4o-mini", "provider": "openai", "available": true}, ...],
"standard": [...],
"advanced": [...]
},
"default_mode": "auto",
"classifier_model": "gpt-4o-mini"
}
```
### Backward Compatibility
Agents with hardcoded models continue to work without changes:
```python
class MyAgent(LlamaIndexAgent):
def __init__(self):
super().__init__(...)
self._default_model = "gpt-5" # This model will always be used
```
### LlamaIndex Agent Configuration
Control model behavior in your agent:
```python
class MyAgent(LlamaIndexAgent):
def __init__(self):
super().__init__(
agent_id="my_agent",
name="My Agent",
description="A helpful assistant."
)
# Default model config (can be overridden per session)
self.default_temperature = 0.7
self.default_model = "gpt-5-mini"
```
**Runtime Configuration:**
Users can override settings per session via the API or web UI:
- Model selection (gpt-5, claude-4.5-sonnet, gemini-pro)
- Temperature (0.0 - 1.0)
- Max tokens
- System prompt override
## 🛠️ Off-the-Shelf Tools
The framework provides ready-to-use tools for common tasks. Import from `agent_framework.tools`:
### File Management Tools
```python
from agent_framework.tools import (
CreateFileTool, # Create text files
ListFilesTool, # List stored files
ReadFileTool, # Read file contents
GetFilePathTool # Get file system path
)
```
### PDF Generation Tools
```python
from agent_framework.tools import (
CreatePDFFromMarkdownTool, # Generate PDF from markdown
CreatePDFFromHTMLTool, # Generate PDF from HTML
CreatePDFWithImagesTool # Generate PDF with embedded images
)
```
### Chart & Visualization Tools
```python
from agent_framework.tools import (
ChartToImageTool, # Convert Chart.js config to PNG
MermaidToImageTool, # Convert Mermaid diagram to PNG
TableToImageTool # Convert table data to PNG
)
```
### Using Off-the-Shelf Tools
```python
from agent_framework import LlamaIndexAgent
from agent_framework.storage.file_system_management import FileStorageFactory
from agent_framework.tools import CreateFileTool, ListFilesTool, CreatePDFFromMarkdownTool
class MyAgent(LlamaIndexAgent):
def __init__(self):
super().__init__(
agent_id="my_agent",
name="File Agent",
description="An assistant with file storage and PDF generation capabilities."
)
self.file_storage = None
# Initialize tools
self.tools = [
CreateFileTool(),
ListFilesTool(),
CreatePDFFromMarkdownTool()
]
async def _ensure_file_storage(self):
if self.file_storage is None:
self.file_storage = await FileStorageFactory.create_storage_manager()
async def configure_session(self, session_configuration):
user_id = session_configuration.get('user_id', 'default_user')
session_id = session_configuration.get('session_id')
await self._ensure_file_storage()
# Inject dependencies into tools
for tool in self.tools:
tool.set_context(
file_storage=self.file_storage,
user_id=user_id,
session_id=session_id
)
await super().configure_session(session_configuration)
def get_agent_tools(self):
return [tool.get_tool_function() for tool in self.tools]
```
**Key Pattern:**
1. Instantiate tools in `__init__()`
2. Initialize file storage in `configure_session()`
3. Inject context with `tool.set_context()`
4. Return tool functions in `get_agent_tools()`
## 🔧 Create Custom Tools
Custom tools extend your agent's capabilities. The tool name and docstring are crucial - they tell the agent when and how to use the tool.
### Basic Custom Tool
```python
def get_weather(city: str) -> str:
"""Get the current weather for a specific city.
Args:
city: The name of the city to get weather for
Returns:
A description of the current weather
"""
# Your implementation here
return f"The weather in {city} is sunny, 22°C"
# Add to your agent
class MyAgent(LlamaIndexAgent):
def get_agent_tools(self):
# Just return the function - automatic conversion to FunctionTool
# Function name = tool name, docstring = tool description
return [get_weather]
```
**Important:**
- **Function name** should be explicit and descriptive (e.g., `get_weather`, not `weather`)
- **Docstring** is added as the tool description - the agent uses this to understand when to call the tool
- **Type hints** help the agent understand parameters
- **Args/Returns documentation** provides additional context
### Custom Tool with Dependencies
For tools that need file storage or other dependencies:
```python
from agent_framework.tools.base_tool import AgentTool
class MyCustomTool(AgentTool):
"""Base class handles dependency injection."""
async def execute(self, param1: str, param2: int) -> str:
"""Process data and store results.
Args:
param1: Description of first parameter
param2: Description of second parameter
Returns:
Result description
"""
# Access injected dependencies
user_id = self.user_id
session_id = self.session_id
file_storage = self.file_storage
# Your logic here
result = f"Processed {param1} with {param2}"
# Store file if needed
file_id = await file_storage.store_file(
user_id=user_id,
session_id=session_id,
filename="result.txt",
content=result.encode()
)
return f"Result stored with ID: {file_id}"
# Use in your agent
class MyAgent(LlamaIndexAgent):
def __init__(self):
super().__init__(
agent_id="my_agent",
name="My Agent",
description="A helpful assistant with custom tools."
)
self.custom_tool = MyCustomTool()
async def configure_session(self, session_configuration):
# Inject dependencies
self.custom_tool.set_context(
file_storage=self.file_storage,
user_id=session_configuration.get('user_id'),
session_id=session_configuration.get('session_id')
)
await super().configure_session(session_configuration)
def get_agent_tools(self):
return [self.custom_tool.get_tool_function()]
```
### Tool Naming Best Practices
```python
# ✅ GOOD - Explicit and clear
def calculate_mortgage_payment(principal: float, rate: float, years: int) -> float:
"""Calculate monthly mortgage payment."""
pass
def send_email_notification(recipient: str, subject: str, body: str) -> bool:
"""Send an email notification to a recipient."""
pass
# ❌ BAD - Too vague
def calculate(x: float, y: float) -> float:
"""Do calculation."""
pass
def send(data: str) -> bool:
"""Send something."""
pass
```
## 🔌 Adding MCP Servers
MCP (Model Context Protocol) allows your agent to connect to external tools and services.
### Basic MCP Setup
```python
from llama_index.tools.mcp import BasicMCPClient, McpToolSpec
class MyAgent(LlamaIndexAgent):
def __init__(self):
super().__init__(
agent_id="my_agent",
name="MCP Agent",
description="An assistant with access to external tools via MCP servers."
)
self.mcp_tools = []
self._mcp_initialized = False
async def _initialize_mcp_tools(self):
"""Load tools from MCP servers."""
if self._mcp_initialized:
return
# Configure your MCP server
mcp_configs = [
{
"command": "uvx",
"args": ["mcp-server-filesystem"],
"env": {"FILESYSTEM_ROOT": "/path/to/workspace"}
}
]
for config in mcp_configs:
client = BasicMCPClient(
config["command"],
args=config["args"],
env=config.get("env", {})
)
# Load tools from the MCP server
mcp_tool_spec = McpToolSpec(client=client)
tools = await mcp_tool_spec.to_tool_list_async()
self.mcp_tools.extend(tools)
self._mcp_initialized = True
async def initialize_agent(self, model_name, system_prompt, tools, **kwargs):
# Load MCP tools before initializing agent
await self._initialize_mcp_tools()
# Combine with other tools
all_tools = self.get_agent_tools()
await super().initialize_agent(model_name, system_prompt, all_tools, **kwargs)
def get_agent_tools(self):
# Return built-in tools + MCP tools
return self.mcp_tools
```
### Multiple MCP Servers
```python
def _get_mcp_configs(self):
"""Configure multiple MCP servers."""
return [
{
"name": "filesystem",
"command": "uvx",
"args": ["mcp-server-filesystem"],
"env": {"FILESYSTEM_ROOT": "/workspace"}
},
{
"name": "github",
"command": "uvx",
"args": ["mcp-server-github"],
"env": {
"GITHUB_TOKEN": os.getenv("GITHUB_TOKEN")
}
},
{
"name": "python",
"command": "uvx",
"args": ["mcp-run-python", "stdio"]
}
]
```
### Popular MCP Servers
```bash
# Filesystem operations
uvx mcp-server-filesystem
# GitHub integration
uvx mcp-server-github
# Python code execution
uvx mcp-run-python
# Database access
uvx mcp-neo4j-cypher
uvx mcp-server-postgres
```
**Installation:**
```bash
# Install with MCP support
uv add agent-framework-lib[llamaindex,mcp]
# Or add MCP to existing installation
uv add agent-framework-lib[mcp]
# MCP servers are run via uvx (no separate install needed)
```
**Using Deno-based MCP servers:**
If you need to use Deno-based MCP servers (like TypeScript MCP servers), the framework provides a helper function to ensure Deno works correctly even if it's not in your PATH:
```python
from agent_framework import get_deno_command
# Configure a Deno-based MCP server
mcp_config = {
"command": get_deno_command(), # Automatically uses correct Deno path
"args": ["run", "-N", "jsr:@pydantic/mcp-run-python", "stdio"]
}
```
This helper function:
- ✅ Automatically finds Deno even if not in system PATH
- ✅ Works seamlessly after `agent-framework-post-install`
- ✅ Returns absolute path to Deno binary when needed
## 🧠 Memory Module
Add long-term semantic memory to your agents, enabling them to remember information across conversations and provide personalized responses.
### Quick Start
```python
from agent_framework import LlamaIndexAgent
from agent_framework.memory import MemoryConfig
class MyMemoryAgent(LlamaIndexAgent):
def __init__(self):
super().__init__(
agent_id="memory_agent",
name="Memory Agent",
description="An agent with long-term memory."
)
def get_agent_prompt(self) -> str:
return "You are a helpful assistant that remembers user preferences."
def get_agent_tools(self) -> list:
return []
def get_memory_config(self):
"""Enable memory - just override this method!"""
return MemoryConfig.memori_simple(
database_url="sqlite:///memory.db"
)
```
### Memory Providers
| Provider | Backend | Best For |
|----------|---------|----------|
| **Memori** | SQLite, PostgreSQL, MySQL | Fast queries, simple setup |
| **Graphiti** | FalkorDB, Neo4j | Complex relationships, temporal queries |
| **Hybrid** | Both | Best of both worlds |
### Configuration Options
```python
# Memori with SQLite (simplest)
MemoryConfig.memori_simple(database_url="sqlite:///memory.db")
# Graphiti with FalkorDB
MemoryConfig.graphiti_simple(use_falkordb=True)
# Hybrid mode (both providers)
MemoryConfig.hybrid(
memori_database_url="sqlite:///memory.db",
graphiti_use_falkordb=True
)
```
### Memory Modes
- **Passive Injection**: Relevant memories automatically injected into prompts
- **Active Tools**: Agent can explicitly `recall_memory()`, `store_memory()`, `forget_memory()`
### Installation
```bash
# All memory support
uv add agent-framework-lib[memory]
# Or individual providers
uv add agent-framework-lib[memori]
uv add agent-framework-lib[graphiti]
```
**More info:** See [Memory Installation Guide](docs/MEMORY_INSTALLATION.md) and [Creating Agents Guide](docs/CREATING_AGENTS.md#adding-memory-to-your-agent)
## 🎯 Skills System
The Skills System provides modular, on-demand capability loading that reduces token consumption by ~80%. Instead of loading all instructions into every system prompt, skills deliver detailed instructions only when needed.
### How It Works
```
BEFORE: System Prompt = Base (~500) + Rich Content (~3000) = ~3500 tokens/message
AFTER: System Prompt = Base (~500) + Skills Discovery (~200) = ~700 tokens/message
+ On-demand skill loading (~500 tokens, one-time per skill)
```
### Quick Start
Skills are automatically available in all agents via `BaseAgent`. No need to explicitly inherit from `SkillsMixin`:
```python
from agent_framework import LlamaIndexAgent
class MySkillsAgent(LlamaIndexAgent):
def __init__(self):
super().__init__(
agent_id="skills_agent",
name="Skills Agent",
description="An agent with on-demand capabilities."
)
# Built-in skills are automatically registered by BaseAgent.__init__
def get_agent_prompt(self) -> str:
# Skills discovery prompt is automatically appended by BaseAgent
return "You are a helpful assistant."
def get_agent_tools(self) -> list:
# Skill tools are auto-loaded - no need to add them manually!
return [] # Only return custom tools specific to your agent
```
### Built-in Skills
| Category | Skills |
|----------|--------|
| **Visualization** | chart, mermaid, table |
| **Document** | file, pdf, pdf_with_images, file_access |
| **Web** | web_search |
| **Multimodal** | multimodal |
| **UI** | form, optionsblock, image_display |
### Agent Workflow
1. Agent receives user request: "Create a bar chart"
2. Agent calls `list_skills()` → sees available skills
3. Agent calls `load_skill("chart")` → gets Chart.js instructions
4. Agent uses `save_chart_as_image()` tool with loaded knowledge
5. Optionally calls `unload_skill("chart")` when done
**More info:** See [Creating Agents Guide](docs/CREATING_AGENTS.md#skills-integration) and [skills_demo_agent.py](examples/skills_demo_agent.py)
## 📝 Rich Content Capabilities (Automatic)
All agents automatically support rich content generation including:
- 📊 **Mermaid diagrams** (version 10.x syntax)
- 📈 **Chart.js charts** (bar, line, pie, doughnut, polarArea, radar, scatter, bubble)
- 📋 **Interactive forms** (formDefinition JSON)
- 🔘 **Clickable option buttons** (optionsblock)
- 📑 **Formatted tables** (tabledata)
**This is automatic!** The framework injects rich content instructions into all agent system prompts by default. You don't need to add anything to your `get_agent_prompt()`.
### Disabling Rich Content
If you need to disable automatic rich content injection for a specific agent or session:
**Via Session Configuration (UI or API):**
```python
# When initializing a session
session_config = {
"user_id": "user123",
"session_id": "session456",
"enable_rich_content": False # Disable rich content
}
```
**Via Web UI:**
Uncheck the "Enable rich content capabilities" checkbox when creating a session.
### Format Examples
**Chart:**
````markdown
```chart
{
"type": "chartjs",
"chartConfig": {
"type": "bar",
"data": {
"labels": ["Mon", "Tue", "Wed"],
"datasets": [{
"label": "Sales",
"data": [120, 150, 100]
}]
}
}
}
```
````
**Options Block:**
````markdown
```optionsblock
{
"question": "What would you like to do?",
"options": [
{"text": "Continue", "value": "continue"},
{"text": "Cancel", "value": "cancel"}
]
}
```
````
**Table:**
````markdown
```tabledata
{
"caption": "Sales Data",
"headers": ["Month", "Revenue"],
"rows": [["Jan", "$1000"], ["Feb", "$1200"]]
}
```
````
## 🎯 All Together: Complete Multi-Skills Agent
Here's a complete example combining all features - MCP, off-the-shelf tools, custom tools, and format support:
```python
import os
from typing import List, Any, Dict
from agent_framework import LlamaIndexAgent, create_basic_agent_server
from agent_framework.storage.file_system_management import FileStorageFactory
from agent_framework.tools import (
CreateFileTool, ListFilesTool, ReadFileTool,
CreatePDFFromMarkdownTool, CreatePDFFromHTMLTool,
ChartToImageTool, MermaidToImageTool, CreatePDFWithImagesTool, TableToImageTool
)
from llama_index.tools.mcp import BasicMCPClient, McpToolSpec
class MultiSkillsAgent(LlamaIndexAgent):
def __init__(self):
super().__init__(
agent_id="multi_skills_agent_v1",
name="Multi-Skills Agent",
description="A versatile assistant with file storage, PDF generation, charts, and MCP capabilities."
)
self.file_storage = None
self.mcp_tools = []
self._mcp_initialized = False
# Off-the-shelf tools
self.file_tools = [
CreateFileTool(),
ListFilesTool(),
ReadFileTool(),
CreatePDFFromMarkdownTool(),
CreatePDFFromHTMLTool(),
ChartToImageTool(),
MermaidToImageTool(),
TableToImageTool(),
CreatePDFWithImagesTool()
]
async def _ensure_file_storage(self):
if self.file_storage is None:
self.file_storage = await FileStorageFactory.create_storage_manager()
async def configure_session(self, session_configuration: Dict[str, Any]):
user_id = session_configuration.get('user_id', 'default_user')
session_id = session_configuration.get('session_id')
await self._ensure_file_storage()
# Inject context into file tools
for tool in self.file_tools:
tool.set_context(
file_storage=self.file_storage,
user_id=user_id,
session_id=session_id
)
await super().configure_session(session_configuration)
async def _initialize_mcp_tools(self):
if self._mcp_initialized:
return
try:
from llama_index.tools.mcp import BasicMCPClient, McpToolSpec
except ImportError:
return
# Configure MCP servers
mcp_configs = [
{
"command": "uvx",
"args": ["mcp-run-python", "stdio"]
}
]
for config in mcp_configs:
try:
client = BasicMCPClient(config["command"], args=config["args"])
mcp_tool_spec = McpToolSpec(client=client)
tools = await mcp_tool_spec.to_tool_list_async()
self.mcp_tools.extend(tools)
except Exception as e:
print(f"MCP initialization failed: {e}")
self._mcp_initialized = True
def get_agent_prompt(self) -> str:
return """You are a helpful assistant with multiple capabilities:
- Execute Python code via MCP
- Create, read, and list files
- Generate PDF documents from markdown or HTML
- Create charts, mermaid diagrams, and tables
- Present forms and option blocks to users
You can generate markdown, mermaid diagrams, charts, code blocks, forms and optionsblocks.
ALWAYS include option blocks when asking the user to select an option!
... See the format section above
"""
def get_agent_tools(self) -> List[callable]:
# Combine all tools
all_tools = []
all_tools.extend([tool.get_tool_function() for tool in self.file_tools])
all_tools.extend(self.mcp_tools)
return all_tools
async def initialize_agent(self, model_name, system_prompt, tools, **kwargs):
await self._initialize_mcp_tools()
all_tools = self.get_agent_tools()
await super().initialize_agent(model_name, system_prompt, all_tools, **kwargs)
# Start the server
if __name__ == "__main__":
create_basic_agent_server(MultiSkillsAgent, port=8000)
```
**Run it:**
```bash
export OPENAI_API_KEY=sk-your-key
python multi_skills_agent.py
# Open http://localhost:8000/ui
```
**Full example:** See `examples/agent_example_multi_skills.py` for the complete implementation with full format support prompt.
## 🌐 Web Interface
The framework includes a built-in web UI for testing and interacting with your agent.
**Access:** `http://localhost:8000/ui`
**Features:**
- 💬 Real-time message streaming
- 🎨 Rich format rendering (charts, tables, mermaid diagrams)
- 📁 File upload and management
- ⚙️ Model and parameter configuration
- 💾 Session management
- 📊 Conversation history
- 🎯 Interactive option blocks and forms
**Quick Test:**
```bash
# Start your agent
python my_agent.py
# Open in browser
open http://localhost:8000/ui
```
The UI automatically detects and renders:
- Chart.js visualizations from `chart` blocks
- Mermaid diagrams from `mermaid` blocks
- Tables from `tabledata` blocks
- Interactive forms from `formDefinition` JSON
- Clickable options from `optionsblock`
**API Documentation:** `http://localhost:8000/docs` (Swagger UI)
## 📚 Additional Resources
### Documentation
- **[Installation Guide](#installation-guide)** - Detailed setup instructions
- **[Configuration Guide](#configuration-guide)** - Environment and settings configuration
- **[Creating Agents Guide](#creating-agents)** - Guide to building custom agents
- **[Tools and MCP Guide](#tools-and-mcp)** - Tools and MCP integration
- **[Memory Installation Guide](docs/MEMORY_INSTALLATION.md)** - Memory module setup
- **[API Reference](#api-reference)** - Complete API documentation
### Examples
- **[Simple Agent](#example-simple-agent)** - Basic calculator agent
- **[File Storage Agent](#example-file-storage)** - File management
- **[MCP Integration](#example-mcp)** - MCP integration
- **[Memory Agent](examples/agent_with_memory_simple.py)** - Agent with long-term memory
- **[Multi-Skills Agent](#example-multi-skills)** - Complete multi-skills agent
- **[Custom Framework Agent](#example-custom-framework)** - Custom framework implementation
### API Endpoints
**Core:**
- `POST /message` - Send message to agent
- `POST /init` - Initialize session
- `POST /end` - End session
- `GET /sessions` - List sessions
**Files:**
- `POST /files/upload` - Upload file
- `GET /files/{file_id}/download` - Download file
- `GET /files` - List files
**Full API docs:** `http://localhost:8000/docs`
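For a programmatic sketch of the core endpoints (body shape taken from the curl example in the Authentication section below; `httpx` is just one possible HTTP client, and the auth header is only needed when `REQUIRE_AUTH=true`):
```python
import httpx

# Send a message to a locally running agent server
resp = httpx.post(
    "http://localhost:8000/message",
    json={"query": "Hello!"},
    headers={"Authorization": "Bearer sk-key-1"},  # only if REQUIRE_AUTH=true
    timeout=60,
)
print(resp.json())
```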
### Authentication
```env
# API Key Authentication
REQUIRE_AUTH=true
API_KEYS=sk-key-1,sk-key-2
```
```bash
curl -H "Authorization: Bearer sk-key-1" \
http://localhost:8000/message \
-H "Content-Type: application/json" \
-d '{"query": "Hello!"}'
```
---
**Quick Links:**
- 🎨 [Web UI](http://localhost:8000/ui)
- 📖 [API Docs](http://localhost:8000/docs)
- ⚙️ [Config Test](http://localhost:8000/config/models)
| text/markdown | null | Sebastian Pavel <sebastian@cinco.ai>, Elliott Girard <elliott.girard@icloud.com> | null | Sebastian Pavel <sebastian@cinco.ai> | MIT | ai, agents, fastapi, llamaindex, framework, conversational-ai, multi-agent, llm, openai, gemini, chatbot, session-management, framework-agnostic | [
"Development Status :: 4 - Beta",
"Intended Audience :: Developers",
"License :: OSI Approved :: MIT License",
"Operating System :: OS Independent",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: 3.13",
"Programming Language :: Python :: 3 :: Only",
"Topic :: Software Development :: Libraries :: Python Modules",
"Topic :: Scientific/Engineering :: Artificial Intelligence",
"Topic :: Communications :: Chat",
"Topic :: Internet :: WWW/HTTP :: HTTP Servers",
"Framework :: FastAPI",
"Environment :: Web Environment",
"Typing :: Typed"
] | [] | null | null | <3.14,>=3.10 | [] | [] | [] | [
"aiofiles>=24.1.0",
"fastapi>=0.115.12",
"uvicorn>=0.34.2",
"fastmcp>=2.2.7",
"mcp-python-interpreter",
"pyyaml>=6.0.2",
"pydantic>=2.0.0",
"opentelemetry-sdk>=1.33.1",
"opentelemetry-api>=1.33.1",
"opentelemetry-exporter-otlp-proto-grpc>=1.33.1",
"pymongo>=4.10.1",
"motor>=3.6.0",
"black>=25.1.0",
"markitdown[all]>=0.1.2",
"psutil>=7.0.0",
"weasyprint>=60.0",
"markdown>=3.5",
"playwright>=1.56.0",
"elasticsearch<9.0.0,>=8.11.0",
"ddgs>=9.9.3",
"llama-index>=0.14.10",
"llama-index-core>=0.12.0",
"llama-index-llms-openai>=0.4.0",
"llama-index-llms-anthropic>=0.10.9",
"llama-index-llms-google-genai>=0.1.0",
"graphiti-core>=0.24.3",
"tiktoken>=0.7.0",
"falkordb>=1.0.0",
"grpcio-status>=1.71.2",
"nodeenv>=1.8.0",
"llama-index-core>=0.12.0; extra == \"llamaindex\"",
"llama-index>=0.12.0; extra == \"llamaindex\"",
"llama-index-llms-openai>=0.4.0; extra == \"llamaindex\"",
"llama-index-llms-google-genai>=0.1.0; extra == \"llamaindex\"",
"llama-index-llms-anthropic>=0.10.5; extra == \"llamaindex\"",
"llama-index-tools-mcp>=0.4.0; extra == \"mcp\"",
"ddgs>=8.0.0; extra == \"websearch\"",
"pytest>=8.4.0; extra == \"dev\"",
"pytest-asyncio>=0.21.0; extra == \"dev\"",
"pytest-cov>=6.2.1; extra == \"dev\"",
"pytest-mock>=3.10.0; extra == \"dev\"",
"pytest-benchmark>=4.0.0; extra == \"dev\"",
"pytest-xdist>=3.3.0; extra == \"dev\"",
"black>=25.1.0; extra == \"dev\"",
"flake8>=6.0.0; extra == \"dev\"",
"mypy>=1.5.0; extra == \"dev\"",
"ruff>=0.1.0; extra == \"dev\"",
"pre-commit>=3.0.0; extra == \"dev\"",
"aiohttp>=3.12.13; extra == \"dev\"",
"httpx>=0.28.1; extra == \"dev\"",
"coverage>=7.0.0; extra == \"dev\"",
"pymongo>=4.10.1; extra == \"mongodb\"",
"motor>=3.6.0; extra == \"mongodb\"",
"elasticsearch>=8.11.0; extra == \"elasticsearch\"",
"boto3>=1.34.0; extra == \"s3\"",
"botocore>=1.34.0; extra == \"s3\"",
"minio>=7.2.0; extra == \"minio\"",
"google-cloud-storage>=2.14.0; extra == \"gcp\"",
"pillow>=10.0.0; extra == \"multimodal\"",
"opencv-python>=4.8.0; extra == \"multimodal\"",
"pytesseract>=0.3.10; extra == \"multimodal\"",
"memori>=0.1.0; extra == \"memory\"",
"graphiti-core>=0.3.0; extra == \"memory\"",
"memori>=0.1.0; extra == \"memori\"",
"graphiti-core>=0.24.3; extra == \"graphiti\"",
"graphiti-core[falkordb]>=0.24.3; extra == \"graphiti-falkordb\"",
"graphiti-core>=0.24.3; extra == \"graphiti-neo4j\"",
"neo4j>=5.0.0; extra == \"graphiti-neo4j\"",
"graphiti-core[falkordb]>=0.24.3; extra == \"graphiti-all\"",
"neo4j>=5.0.0; extra == \"graphiti-all\"",
"opentelemetry-sdk>=1.33.1; extra == \"observability\"",
"opentelemetry-api>=1.33.1; extra == \"observability\"",
"opentelemetry-exporter-otlp-proto-grpc>=1.33.1; extra == \"observability\"",
"traceloop-sdk>=0.30.0; extra == \"observability\"",
"psutil>=5.9.0; extra == \"monitoring\"",
"agent-framework-lib[dev,elasticsearch,gcp,graphiti-all,llamaindex,mcp,memory,microsoft,minio,mongodb,monitoring,multimodal,observability,s3,websearch]; extra == \"all\""
] | [] | [] | [] | [
"Homepage, https://github.com/Cinco-AI/AgentFramework",
"Repository, https://github.com/Cinco-AI/AgentFramework.git",
"Issues, https://github.com/Cinco-AI/AgentFramework/issues",
"Documentation, https://github.com/Cinco-AI/AgentFramework/blob/main/README.md",
"Changelog, https://github.com/Cinco-AI/AgentFramework/blob/main/docs/CHANGELOG.md",
"Bug Tracker, https://github.com/Cinco-AI/AgentFramework/issues",
"Source Code, https://github.com/Cinco-AI/AgentFramework"
] | twine/6.2.0 CPython/3.9.13 | 2026-02-20T11:26:28.967459 | agent_framework_lib-0.5.9.post12.tar.gz | 1,018,140 | 24/cc/dd81e6ba12d9a4988635aac2a9de9ec8c03b4b317f4158580c630b51c00a/agent_framework_lib-0.5.9.post12.tar.gz | source | sdist | null | false | a5cf171c3ccb7d99830c66da5b7ef119 | 7c29b892efb51e069d37df900396e9831b62b9bdf8644aa8211e6405c7bcc5a5 | 24ccdd81e6ba12d9a4988635aac2a9de9ec8c03b4b317f4158580c630b51c00a | null | [
"LICENSE"
] | 229 |
2.4 | onnxsim | 0.5.0 | Simplify your ONNX model | # ONNX Simplifier
[](https://pypi.python.org/pypi/onnx-simplifier/)
[](https://pypi.python.org/pypi/onnx-simplifier/)
[](https://pypi.python.org/pypi/onnx-simplifier/)
[](https://github.com/daquexian/onnx-simplifier/pulls)
_ONNX is great, but sometimes too complicated._
## Background
One day I wanted to export the following simple reshape operation to ONNX:
```python
import torch
class JustReshape(torch.nn.Module):
def __init__(self):
super(JustReshape, self).__init__()
def forward(self, x):
return x.view((x.shape[0], x.shape[1], x.shape[3], x.shape[2]))
net = JustReshape()
model_name = 'just_reshape.onnx'
dummy_input = torch.randn(2, 3, 4, 5)
torch.onnx.export(net, dummy_input, model_name, input_names=['input'], output_names=['output'])
```
The input shape in this model is static, so I expected the exported graph to be as simple as this:

However, I got the following complicated model instead:

## Our solution
ONNX Simplifier is presented to simplify the ONNX model. It infers the whole computation graph
and then replaces the redundant operators with their constant outputs (a.k.a. constant folding).
### Web version
We have published ONNX Simplifier on [convertmodel.com](https://www.convertmodel.com/#input=onnx&output=onnx). It works out of the box and **doesn't need any installation**. Note that it runs in the browser locally and your model is completely safe.
### Python version
```
pip3 install -U pip && pip3 install onnxsim
```
Then
```
onnxsim input_onnx_model output_onnx_model
```
For more advanced features, run the following command to see the help message:
```
onnxsim -h
```
## Demonstration
An overall comparison between
[a complicated model](https://github.com/JDAI-CV/DNNLibrary/issues/17#issuecomment-455934190)
and its simplified version:

## In-script workflow
If you would like to embed ONNX simplifier python package in another script, it is just that simple.
```python
import onnx
from onnxsim import simplify
# load your predefined ONNX model
model = onnx.load(filename)
# convert model
model_simp, check = simplify(model)
assert check, "Simplified ONNX model could not be validated"
# use model_simp as a standard ONNX model object
```
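If you also want to persist the simplified graph, here is a minimal self-contained sketch using the standard `onnx.save` helper (file names are illustrative):
```python
import onnx
from onnxsim import simplify

# load your original ONNX model
model = onnx.load("input.onnx")

# simplify and validate
model_simp, check = simplify(model)
assert check, "Simplified ONNX model could not be validated"

# write the simplified model to disk
onnx.save(model_simp, "output_simplified.onnx")
```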
You can see more details of the API in [onnxsim/onnx_simplifier.py](onnxsim/onnx_simplifier.py)
## Projects Using ONNX Simplifier
* [MXNet](https://mxnet.apache.org/versions/1.9.1/api/python/docs/tutorials/deploy/export/onnx.html#Simplify-the-exported-ONNX-model)
* [MMDetection](https://github.com/open-mmlab/mmdetection)
* [YOLOv5](https://github.com/ultralytics/yolov5)
* [ncnn](https://github.com/Tencent/ncnn)
* ...
## Chat
We created a Chinese QQ group for ONNX!
ONNX QQ Group (Chinese): 1021964010, verification code: nndab. Welcome to join!
For English users, I'm active on the [ONNX Slack](https://github.com/onnx/onnx#discuss). You can find and chat with me (daquexian) there.
| text/markdown | ONNX Simplifier Authors | daquexian566@gmail.com | null | null | Apache License v2.0 | deep-learning ONNX | [
"Development Status :: 4 - Beta",
"Intended Audience :: Developers",
"License :: OSI Approved :: Apache Software License",
"Programming Language :: Python :: 3 :: Only",
"Programming Language :: Python :: 3.7",
"Programming Language :: Python :: 3.8",
"Programming Language :: Python :: 3.9",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Topic :: Scientific/Engineering",
"Topic :: Software Development"
] | [] | https://github.com/daquexian/onnx-simplifier | null | >=3.7 | [] | [] | [] | [
"onnx",
"rich"
] | [] | [] | [] | [] | twine/6.1.0 CPython/3.13.7 | 2026-02-20T11:26:25.659844 | onnxsim-0.5.0.tar.gz | 20,995,842 | 26/5b/054c0f307e342fa2c8de26385d830c5a1fa57f44302b5bc121a90ee6fb71/onnxsim-0.5.0.tar.gz | source | sdist | null | false | 1bcd7ad070e2b38443d35f9de55a117d | 0469dcd54185baa2e14d55460559602905c6fa43d458bcffa8ce4b08f5f4e8e8 | 265b054c0f307e342fa2c8de26385d830c5a1fa57f44302b5bc121a90ee6fb71 | null | [
"LICENSE"
] | 7,460 |
2.4 | immuneML | 3.0.18 | immuneML is a software platform for machine learning analysis of immune receptor repertoires. | # immuneML



[](https://docs.airr-community.org/en/stable/swtools/airr_swtools_standard.html)
immuneML is a platform for machine learning-based analysis and
classification of adaptive immune receptors and repertoires (AIRR).
It supports the analyses of experimental B- and T-cell receptor data,
as well as synthetic data for benchmarking purposes.
In immuneML, users can define flexible workflows supporting different
machine learning libraries (such as scikit-learn or PyTorch), benchmarking of different approaches, numerous reports
of data characteristics, ML algorithms and their predictions, and
visualizations of results.
Additionally, users can extend the platform by defining their own data
representations, ML models, reports and visualizations.
Useful links:
- Main website: https://immuneml.uio.no
- Documentation: https://docs.immuneml.uio.no
- Documentation for the latest (unstable) version (development branch): https://uio-bmi.github.io/immuneML/
- Galaxy web interface: https://galaxy.immuneml.uiocloud.no
## Installation
We recommend installing immuneML inside a virtual environment.
immuneML uses **Python 3.9 or later**. If using immuneML simulation, Python 3.11 or later is recommended.
immuneML can be [installed directly using a package manager](<https://docs.immuneml.uio.no/latest/installation/install_with_package_manager.html#>) such as pip or conda,
or [set up via docker](<https://docs.immuneml.uio.no/latest/installation/installation_docker.html>).
#### Quick installation (immuneML essentials):
```bash
python3 -m venv ./immuneml_venv/
source ./immuneml_venv/bin/activate
pip install wheel
pip install immune-ml
```
or
```bash
conda create --prefix immuneml_env/ python=3.11
conda activate immuneml_env/
conda install -c bioconda immuneml
```
#### Detailed installation (immuneML extras):
Please check the documentation for more detailed instructions or [how to install optional dependencies](<https://docs.immuneml.uio.no/latest/installation/install_with_package_manager.html#installing-optional-dependencies>).
### Validating the installation
To validate the installation, run:
```bash
immune-ml -h
```
This should display a help message explaining immuneML usage.
To quickly test out whether immuneML is able to run, try running the quickstart command:
```bash
immune-ml-quickstart ./quickstart_results/
```
This will generate a synthetic dataset and run a simple machine learning analysis
on the generated data. The results folder will contain two sub-folders: one for the generated dataset (`synthetic_dataset`)
and one for the results of the machine learning analysis (`machine_learning_analysis`).
The files named `specs.yaml` are the input files for immuneML that describe how to generate
the dataset and how to do the machine learning analysis. The `index.html` files can be used
to navigate through all the results that were produced.
## Usage
### Quickstart
The quickest way to familiarize yourself with immuneML usage is to follow
one of the [Quickstart tutorials](https://docs.immuneml.uio.no/quickstart.html).
These tutorials provide a step-by-step guide on how to use immuneML for a
simple machine learning analysis on an adaptive immune receptor repertoire (AIRR) dataset,
using either the command line tool or the [Galaxy web interface](https://galaxy.immuneml.uiocloud.no).
### Overview of immuneML analyses
The figure below shows an overview of immuneML usage.
All parameters for an immuneML analysis are defined in the YAML specification file.
In this file, the settings of the analysis components are defined (also known as `definitions`,
shown in different colors in the figure).
Additionally, the YAML file describes one or more `instructions`, which are workflows that are
applied to the defined analysis components.
See also: [documentation of the YAML specification](https://docs.immuneml.uio.no/latest/yaml_specs/how_to_specify_an_analysis_with_yaml.html).
Each instruction produces different types of results, including trained ML models,
ML model predictions on a given dataset, plots or other reports describing the
dataset or trained models, or synthetic/simulated datasets.
These results can be navigated through the summary HTML file.
See also: [tutorials for specific immuneML use cases](https://docs.immuneml.uio.no/latest/tutorials.html#).

### Command line usage
The `immune-ml` command takes only two parameters: the YAML specification file and a result path.
An example is given here:
```bash
immune-ml path/to/specification.yaml result/folder/path/
```
### Results of an immuneML run
For each instruction specified in the YAML specification file, a subfolder is created in the
`result/folder/path`. Each subfolder will contain:
- An `index.html` file which shows an overview of the results produced by that instruction. Inspecting the results of an immuneML analysis typically starts here.
- A copy of the used YAML specification (`full_specification.yaml`) with all default parameters explicitly set.
- A log file (`log.txt`).
- A folder containing the imported dataset(s) in immuneML format.
- A folder containing all raw results produced by the instruction.
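For orientation, a typical instruction subfolder might look roughly like this (an illustrative sketch; the exact folder names depend on the instruction and dataset):
```
quickstart_results/
└── machine_learning_analysis/
    ├── index.html
    ├── full_specification.yaml
    ├── log.txt
    ├── <imported dataset folder>/
    └── <raw instruction results>/
```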
## Support
We will prioritize fixing important bugs, and try to answer any questions as soon as possible.
Please note that the platform is maintained by only two people, with occasional absences.
When experiencing an issue, please take the following steps:
1. **Make sure the latest version of immuneML is installed.** immuneML is under constant development, and the issue you experience may already be resolved in the latest version of the platform.
2. Check the ['troubleshooting' page](<https://docs.immuneml.uio.no/latest/troubleshooting.html>) in the immuneML documentation. Any known issues and their solutions are already described there.
3. If you still experience a problem and suspect a bug in immuneML, you can [report an issue on GitHub](https://github.com/uio-bmi/immuneML/issues). Please make sure to include the following information:
- The YAML specification you tried to run.
- The full output log file (log.txt).
- A list of dependency versions (can be retrieved with pip list or conda list).
 - We primarily test immuneML on Unix-based operating systems, so please mention it if you're using Windows.
- We will be able to help you fastest if you can also provide a small reproducible example, such as a very small dataset for which your run fails.
If this does not answer your question, you can contact us via:
- Twitter [`@immuneml`](https://twitter.com/immuneml)
- Email [`contact@immuneml.uio.no`](mailto:contact@immuneml.uio.no)
# Citing immuneML
If you are using immuneML in any published work, please cite:
Pavlović, M., Scheffer, L., Motwani, K. et al. The immuneML ecosystem for machine learning analysis of adaptive immune
receptor repertoires. Nat Mach Intell 3, 936–944 (2021). https://doi.org/10.1038/s42256-021-00413-z
<hr>
© Copyright 2021-2022, Milena Pavlovic, Lonneke Scheffer, Keshav Motwani, Victor Greiff, Geir Kjetil Sandve
| text/markdown | null | immuneML Team <milenpa@uio.no> | null | null | GNU Affero General Public License v3 | null | [
"Programming Language :: Python :: 3",
"License :: OSI Approved :: GNU Affero General Public License v3"
] | [] | null | null | >=3.7 | [] | [] | [] | [
"setuptools<70",
"numpy<=1.26.4",
"pandas>=2.1.0",
"PyYAML>=5.3",
"scikit-learn>=0.23",
"matplotlib>=3.1",
"editdistance",
"regex",
"tzlocal",
"airr>=1.5.1",
"pystache",
"dill>=0.3",
"plotly>=4",
"matplotlib-venn>=0.11",
"scipy<=1.12.0",
"bionumpy>=1.0.12",
"umap-learn",
"olga>=1.2.4",
"psutil",
"kaleido",
"logomaker",
"python-glmnet",
"gensim>=4; extra == \"word2vec\"",
"fisher>=0.1.9; extra == \"fisher\"",
"fishersapi; extra == \"fisher\"",
"tcrdist3>=0.1.6; extra == \"tcrdist\"",
"sonnia; extra == \"gen-models\"",
"torch; extra == \"gen-models\"",
"transformers; extra == \"gen-models\"",
"datasets; extra == \"gen-models\"",
"tokenizers; extra == \"gen-models\"",
"tensorflow<=2.15.0; extra == \"gen-models\"",
"accelerate>=0.26.0; extra == \"gen-models\"",
"transformers; extra == \"embeddings\"",
"torch; extra == \"embeddings\"",
"sentencepiece; extra == \"embeddings\"",
"esm; extra == \"embeddings\"",
"httpx; extra == \"embeddings\"",
"stitchr; extra == \"ligo\"",
"IMGTgeneDL; extra == \"ligo\"",
"torch; extra == \"dl\"",
"keras; extra == \"dl\"",
"tensorflow; extra == \"dl\"",
"logomaker; extra == \"dl\"",
"gensim; extra == \"dl\"",
"tcrdist3>=0.1.6; extra == \"all\"",
"sonnia; extra == \"all\"",
"torch; extra == \"all\"",
"stitchr; extra == \"all\"",
"IMGTgeneDL; extra == \"all\"",
"keras; extra == \"all\"",
"tensorflow; extra == \"all\"",
"fisher>=0.1.9; extra == \"all\"",
"logomaker; extra == \"all\"",
"fishersapi; extra == \"all\"",
"gensim>=4; extra == \"all\"",
"transformers; extra == \"all\"",
"sentencepiece; extra == \"all\"",
"httpx; extra == \"all\"",
"datasets; extra == \"all\"",
"tokenizers; extra == \"all\"",
"tensorflow<=2.15.0; extra == \"all\"",
"accelerate>=0.26.0; extra == \"all\""
] | [] | [] | [] | [
"Homepage, https://github.com/uio-bmi/immuneML"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-20T11:26:18.843921 | immuneml-3.0.18.tar.gz | 559,511 | 4b/e4/ce5bd6ec494df2e0cc11a622b5573afd6e50cf71727b3769278411af5b04/immuneml-3.0.18.tar.gz | source | sdist | null | false | 8ed160932cec40cd48ae08653cc515d9 | 010737381893458497323e0ebf0043be28cde9d08925951a1e43846c3359869d | 4be4ce5bd6ec494df2e0cc11a622b5573afd6e50cf71727b3769278411af5b04 | null | [
"LICENSE.md"
] | 0 |
2.4 | pygexml | 0.1.2 | A minimal Python wrapper around the PAGE-XML format for OCR output | # pygexml
A minimal Python wrapper around the [PAGE-XML][page-xml] format for OCR output.
[![pygexml checks, tests and docs][workflows-badge]][workflows] [![API docs online][api-docs-badge]][api-docs]
## Installation
```
pip install pygexml
```
Requires Python 3.12+.
## Usage
```python
from pygexml import Page
page = Page.from_xml_string(xml_string)
for line in page.all_text():
print(line)
```
### Data model
| Class | Import from |
|---|---|
| `Page` | `pygexml` |
| `Page`, `TextRegion`, `TextLine`, `Coords` | `pygexml.page` |
| `Point`, `Box`, `Polygon` | `pygexml.geometry` |
`Page`, `TextRegion` and `TextLine` each expose `all_text()` and `all_words()` iterators.
Lookups by ID are available via `lookup_region()` and `lookup_textline()`.
Refer to the [online API docs][api-docs] for details.
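For example, a minimal sketch of ID lookups and per-region iteration (the IDs used here and the behaviour on a missing ID are assumptions, not guarantees of the API):
```python
from pygexml import Page

page = Page.from_xml_string(xml_string)  # xml_string: PAGE-XML content as str

# Look up elements by their PAGE-XML IDs (the IDs below are hypothetical)
region = page.lookup_region("r1")
line = page.lookup_textline("r1_l1")

# Iterate text lines and words within a single region
for text in region.all_text():
    print(text)
for word in region.all_words():
    print(word)
```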
### Hypothesis strategies
The `pygexml.strategies` module provides [Hypothesis][hypothesis] strategies for all pygexml types, ready to use in property-based tests - including downstream projects:
```python
from hypothesis import given
from pygexml.strategies import st_pages
@given(st_pages())
def test_my_page_processing(page):
assert process(page) is not None
```
Refer to the [`pygexml.strategies` API docs][api-docs-strategies] for details.
## Development
```bash
pip install ".[dev,test,docs]"
black pygexml test # format
mypy pygexml test # type check
pyright pygexml test # type check
pytest -v # tests
pdoc -o .api_docs pygexml/* # API docs
```
CI runs on Python 3.12, 3.13 and 3.14. [API documentation][api-docs] is published to GitHub Pages on every push to `main`.
## Contributing
[Bug reports, feature requests][gh-issues] and [pull requests][gh-prs] are welcome. Feel free to open draft pull requests early to invite discussion and collaboration.
Please note that this project has a [Code of Conduct](CODE_OF_CONDUCT.md).
## Copyright and License
Copyright (c) 2026 [Mirko Westermeier][gh-memowe] (SCDH, University of Münster)
Released under the [MIT License](LICENSE).
[page-xml]: https://github.com/PRImA-Research-Lab/PAGE-XML
[workflows]: https://github.com/SCDH/pygexml/actions/workflows/checks_tests_docs.yml
[workflows-badge]: https://github.com/SCDH/pygexml/actions/workflows/checks_tests_docs.yml/badge.svg
[hypothesis]: https://hypothesis.readthedocs.io
[api-docs]: https://scdh.github.io/pygexml
[api-docs-strategies]: https://scdh.github.io/pygexml/pygexml/strategies.html
[api-docs-badge]: https://img.shields.io/badge/API%20docs-online-blue?logo=gitbook&logoColor=lightgrey
[gh-issues]: https://github.com/SCDH/pygexml/issues
[gh-prs]: https://github.com/SCDH/pygexml/pulls
[gh-memowe]: https://github.com/memowe
| text/markdown | null | Mirko Westermeier <mirko.westermeier@uni-muenster.de> | null | null | null | null | [] | [] | null | null | >=3.12 | [] | [] | [] | [
"lxml",
"hypothesis; extra == \"strategies\"",
"mypy; extra == \"dev\"",
"pyright; extra == \"dev\"",
"black; extra == \"dev\"",
"lxml-stubs; extra == \"dev\"",
"pytest; extra == \"test\"",
"hypothesis; extra == \"test\"",
"pdoc; extra == \"docs\""
] | [] | [] | [] | [
"Homepage, https://github.com/SCDH/pygexml",
"Repository, https://github.com/SCDH/pygexml",
"Documentation, https://scdh.github.io/pygexml",
"Issues, https://github.com/SCDH/pygexml/issues"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-20T11:25:40.555681 | pygexml-0.1.2.tar.gz | 9,072 | 7c/e4/a4d6bc6b4cbe73f8f076bb4e95531f37f94491e9acc6b9c5dcf7e97457a7/pygexml-0.1.2.tar.gz | source | sdist | null | false | ef58badea67bb7b59601be488c9e0338 | bdcb20e837f7608506798d7ea050b9d2eb5c4d4171ba394ed8da15c9144b390c | 7ce4a4d6bc6b4cbe73f8f076bb4e95531f37f94491e9acc6b9c5dcf7e97457a7 | MIT | [
"LICENSE"
] | 233 |
2.4 | esp-matter-dm-validator | 1.0.2 | A command-line utility for validating Matter device data model conformance against the official Matter specification. |
====================================
esp-matter-data-model-validator Tool
====================================
A command-line utility for validating Matter device data model conformance
against the official Matter specification.
Source code for `esp-matter-data-model-validator` is
`hosted on github <https://github.com/espressif/esp-matter-tools/tree/main/dmv_tool>`_.
Documentation
-------------
Visit online `esp-matter-dm-validator documentation <https://github.com/espressif/esp-matter-tools/tree/main/dmv_tool>`_
or run ``esp-matter-dm-validator -h``.
License
-------
The License for the project can be found
`here <https://github.com/espressif/esp-matter-tools/tree/main/LICENSE>`_
| text/markdown | Espressif Systems | null | null | null | Apache-2.0 | null | [
"License :: OSI Approved :: Apache Software License",
"Intended Audience :: Developers",
"Natural Language :: English",
"Operating System :: POSIX",
"Operating System :: MacOS :: MacOS X",
"Topic :: Software Development :: Embedded Systems"
] | [] | https://github.com/espressif/esp-matter-tools/tree/main/dmv_tool | null | >=3.10 | [] | [] | [] | [
"click>=8.1.7",
"tabulate>=0.9.0"
] | [] | [] | [] | [
"Documentation, https://github.com/espressif/esp-matter-tools/tree/main/dmv_tool/README.md",
"Source, https://github.com/espressif/esp-matter-tools/tree/main/dmv_tool"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-20T11:24:33.327339 | esp_matter_dm_validator-1.0.2.tar.gz | 144,350 | 64/4a/174bed53021e96b1f8885c9833f16b76d0963caa51cd852632651fe2e122/esp_matter_dm_validator-1.0.2.tar.gz | source | sdist | null | false | 389a4fa69b6d16fcb5da799e4fe36b0d | 9255e15e4d04bb816799d4a6f3c3c706128da4100ddfd4926b0be4f434087585 | 644a174bed53021e96b1f8885c9833f16b76d0963caa51cd852632651fe2e122 | null | [] | 225 |
2.3 | titlani | 0.3.0 | Misfin(C) mail protocol client and server library | # Titlani
**Misfin(C) mail protocol client and server library for Python.**
Titlani is a complete implementation of the [Misfin(C)](https://misfin.org/) mail transport protocol -- a lightweight, privacy-focused mail protocol influenced by Gemini that uses mandatory TLS with self-signed identity certificates and Trust-On-First-Use (TOFU) validation.
## Features
- **Full Misfin(C) protocol** -- Wire format parsing, status codes, gemmail message format
- **Async client and server** -- Built on `asyncio.Protocol` with TLS, TOFU, redirect handling, and middleware
- **Identity certificates** -- Generate and manage Misfin identity certs with custom layout (USER_ID, CN, SAN DNS)
- **At-rest encryption** -- Per-mailbox X25519 ECDH + AES-256-GCM encryption for stored messages
- **Sender verification** -- Probe-based and SPKI-based verification with SQLite caching
- **GMAP** -- Gemini Mailbox Access Protocol for remote mailbox access with tag management
- **Mailing lists** -- Server-side mailing list support with subscriber management and message forwarding
- **Rate limiting and access control** -- Token bucket rate limiting and IP allow/deny lists via tlacacoca
- **Contact blocking** -- Per-mailbox sender block lists
- **Auto-reply** -- Server-side out-of-office automatic replies with loop prevention
- **CLI tool** -- Full-featured `titlani` command for sending, serving, reading mail, and administration
## Installation
With `uv` (preferred):
```bash
uv tool install titlani
```
Or with `pip`:
```bash
pip install titlani
```
Requires **Python 3.13+**.
## Quick Start
Generate an identity, send a message, and start a server:
```bash
# Generate an identity certificate
titlani identity generate alice example.com --blurb "Alice Smith"
# Send a message
titlani send bob@remote.host "Hello from Misfin!" \
--cert alice.pem --key alice.key --subject "Greetings"
# Generate server and client config interactively
titlani init
# Start a server (auto-discovers config from ~/.config/titlani/)
titlani serve
# List and read your mail
titlani mail list
titlani mail read 1
```
Or use the Python API:
```python
import asyncio
from titlani import MisfinClient
async def main():
async with MisfinClient(
client_cert="alice.pem",
client_key="alice.key",
) as client:
response = await client.send(
to="bob@remote.host",
body="Hello from Misfin!",
subject="Greetings",
)
print(f"{response.status} {response.meta}")
asyncio.run(main())
```
## Documentation
Full documentation is available at [titlani.readthedocs.io](https://titlani.readthedocs.io):
- [**Quick Start**](https://titlani.readthedocs.io/quickstart/) -- Core workflow in 6 steps
- [**Tutorials**](https://titlani.readthedocs.io/tutorials/) -- Step-by-step guides for sending messages, running servers, and building clients
- [**How-To Guides**](https://titlani.readthedocs.io/how-to/) -- Recipes for encryption, verification, GMAP, mailing lists, and more
- [**CLI Reference**](https://titlani.readthedocs.io/reference/cli/) -- All `titlani` commands and options
- [**Configuration Reference**](https://titlani.readthedocs.io/reference/configuration/) -- Server and client TOML config
- [**API Reference**](https://titlani.readthedocs.io/reference/api/) -- Python API documentation
## License
See [LICENSE](LICENSE) for details.
| text/markdown | Alan Velasco | Alan Velasco <dev@alanbato.com> | null | null | MIT | mail, misfin, gemini, tls, protocol | [
"Development Status :: 4 - Beta",
"License :: OSI Approved :: MIT License",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.13",
"Topic :: Communications :: Email",
"Operating System :: OS Independent"
] | [] | null | null | >=3.13 | [] | [] | [] | [
"cryptography>=46.0.4",
"platformdirs>=4.0.0",
"pydantic-settings>=2.12.0",
"rich>=14.3.2",
"ruff>=0.15.0",
"structlog>=25.5.0",
"tlacacoca>=0.2.0",
"typer>=0.21.1"
] | [] | [] | [] | [
"Homepage, https://titlani.readthedocs.io",
"Repository, https://github.com/alanbato/titlani",
"Issues, https://github.com/alanbato/titlani/issues",
"Documentation, https://titlani.readthedocs.io"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-20T11:24:23.908234 | titlani-0.3.0.tar.gz | 59,678 | 71/0d/d175cf16ae83f227d214482b92edda59a98a8675dbcff12d7cfba1b14301/titlani-0.3.0.tar.gz | source | sdist | null | false | 1579f3daade5cdf5dbd08251ef822386 | 7e76e266e89994454d694be2e69bc0bbdd8142bb3a5aa3caf56f6ad1f13ec769 | 710dd175cf16ae83f227d214482b92edda59a98a8675dbcff12d7cfba1b14301 | null | [] | 193 |
2.4 | youtrack-sdk | 1.0.202602201123 | YouTrack SDK | # YouTrack REST API Client
A client library for accessing YouTrack REST API.
## Usage
```python
from datetime import date
from youtrack_sdk import Client
from youtrack_sdk.entities import (
DateIssueCustomField,
EnumBundleElement,
Issue,
Tag,
Project,
SingleEnumIssueCustomField,
SingleUserIssueCustomField,
StateBundleElement,
StateIssueCustomField,
User,
)
client = Client(base_url="https://dummy.myjetbrains.com/youtrack", token="dummy")
result = client.create_issue(
issue=Issue(
project=Project(id="0-0"),
summary="Created from YouTrack SDK",
description="Description **text**.",
tags=[
Tag(id="6-0"),
],
custom_fields=[
StateIssueCustomField(
name="State",
value=StateBundleElement(
name="In Progress",
),
),
SingleUserIssueCustomField(
name="Assignee",
value=User(
ring_id="00000000-a31c-4174-bb27-abd3387df67a",
),
),
SingleEnumIssueCustomField(
name="Type",
value=EnumBundleElement(
name="Bug",
),
),
DateIssueCustomField(
name="Due Date",
value=date(2005, 12, 31),
),
],
),
)
```
## Note
- Prefer internal entity IDs everywhere. Some methods accept readable issue IDs (e.g. HD-99), but this is not supported by every endpoint.
| text/markdown | moneymeets | service@moneymeets.com | null | null | null | youtrack, sdk | [
"Development Status :: 5 - Production/Stable",
"Intended Audience :: Developers",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.12",
"Topic :: Software Development :: Libraries :: Python Modules",
"License :: OSI Approved :: MIT License"
] | [] | https://github.com/moneymeets/youtrack-sdk | null | >=3.12 | [] | [] | [] | [
"requests",
"pydantic"
] | [] | [] | [] | [
"Repository, https://github.com/moneymeets/youtrack-sdk"
] | poetry/2.2.1 CPython/3.12.3 Linux/6.11.0-1018-azure | 2026-02-20T11:23:43.433597 | youtrack_sdk-1.0.202602201123.tar.gz | 10,368 | 1d/f8/cd1fd83442cdac0ab3f2441988b366fe72aeef37ad62d04c72b19048b3c9/youtrack_sdk-1.0.202602201123.tar.gz | source | sdist | null | false | 09ed3834b9cdf17bf0dc279deec644b1 | 1b5fb493694001fb6073a11811fb78efaad2e1053de7e283679903c91c8a12fb | 1df8cd1fd83442cdac0ab3f2441988b366fe72aeef37ad62d04c72b19048b3c9 | null | [] | 202 |
2.4 | julien-python-toolkit | 0.2.6 | Important code that I reuse through multiple projects. Please see license for allowed use. | # Readme
Reusable Python utilities used across multiple projects.
## Installation
```bash
pip install julien-python-toolkit
```
## Requirements
- Python 3.10+
## Usage
Example:
```python
from julien_python_toolkit.google import GoogleServices

service = GoogleServices(...)
```
## Features
- Google API helpers
- Common reusable utilities
- Designed for automation and backend systems
## License
See `LICENSE.txt`
## Changelog
See `CHANGELOG.md` for release history.
## Links
- PyPI: https://pypi.org/project/julien-python-toolkit/
| text/markdown | Julien Python | python.julien@hotmail.com | null | null | Custom Non-Commercial License | null | [
"Programming Language :: Python :: 3"
] | [] | https://github.com/JulienPython/JulienPythonToolkit-V001 | null | null | [] | [] | [] | [
"certifi==2024.8.30",
"google-api-core==2.19.1",
"google-api-python-client==2.139.0",
"google-auth==2.32.0",
"google-auth-httplib2==0.2.0",
"google-auth-oauthlib==1.2.1",
"googleapis-common-protos==1.63.2"
] | [] | [] | [] | [] | twine/6.1.0 CPython/3.13.7 | 2026-02-20T11:23:33.406892 | julien_python_toolkit-0.2.6.tar.gz | 21,894 | ca/66/41ca0cf3e8e024dee3478dbe0738a9c42ccff0321fb99a7c8af92dd0ec59/julien_python_toolkit-0.2.6.tar.gz | source | sdist | null | false | fc93b6d91660f34e973dcd0fd91a9e8d | 6d6c711f677cee7b2bd3bf42aabd51efe714bf2914caa00a1a5493913b926490 | ca6641ca0cf3e8e024dee3478dbe0738a9c42ccff0321fb99a7c8af92dd0ec59 | null | [
"LICENSE.txt"
] | 209 |
2.4 | linreg-core | 0.8.0 | Lightweight linear regression (OLS, Ridge, Lasso, Elastic Net) with diagnostic tests. Pure Rust - no external math dependencies. | # linreg-core
[](https://github.com/jesse-anderson/linreg-core/actions/workflows/ci.yml)
[](https://github.com/jesse-anderson/linreg-core/actions/workflows/ci.yml)
[](LICENSE-MIT)
[](https://crates.io/crates/linreg-core)
[](https://www.npmjs.com/package/linreg-core)
[](https://pypi.org/project/linreg-core/)
[](https://docs.rs/linreg-core)
[](https://jesse-anderson.net/linreg-core/)
A lightweight, self-contained linear regression library written in Rust. It compiles to WebAssembly for browser use, provides Python bindings via PyO3 and a native Windows DLL for Excel VBA, or runs as a native Rust crate.
**Key design principle:** All linear algebra and statistical distribution functions are implemented from scratch — no external math libraries required. This keeps binary sizes small and makes the crate highly portable.
**[Live Demo Link](https://jesse-anderson.net/linreg-core/)**
---
## Table of Contents
| Section | Description |
|---------|-------------|
| [Features](#features) | Regression methods, model statistics, feature importance, diagnostic tests |
| [Rust Usage](#rust-usage) | Native Rust crate usage |
| [WebAssembly Usage](#webassembly-usage) | Browser/JavaScript usage |
| [Python Usage](#python-usage) | Python bindings via PyO3 |
| [VBA / Excel Usage](#vba--excel-usage) | Excel VBA via native Windows DLL |
| [Feature Flags](#feature-flags) | Build configuration options |
| [Validation](#validation) | Testing and verification |
| [Implementation Notes](#implementation-notes) | Technical details |
---
## Features
### Regression Methods
- **OLS Regression:** Coefficients, standard errors, t-statistics, p-values, confidence intervals, model selection criteria (AIC, BIC, log-likelihood)
- **Ridge Regression:** L2-regularized regression with optional standardization, effective degrees of freedom, model selection criteria
- **Lasso Regression:** L1-regularized regression via coordinate descent with automatic variable selection, convergence tracking, model selection criteria
- **Elastic Net:** Combined L1 + L2 regularization for variable selection with multicollinearity handling, active set convergence, model selection criteria
- **Polynomial Regression:** Polynomial fitting of any degree with centering/standardization for numerical stability
- **LOESS:** Locally estimated scatterplot smoothing for non-parametric curve fitting with configurable span, polynomial degree, and robust fitting
- **WLS (Weighted Least Squares):** Regression with observation weights for heteroscedastic data, includes confidence intervals
- **Prediction Intervals:** Uncertainty bounds for individual future observations (OLS, Ridge, Lasso, Elastic Net)
- **K-Fold Cross Validation:** Model evaluation and hyperparameter tuning for all regression types (OLS, Ridge, Lasso, Elastic Net) with customizable folds, shuffling, and seeding
- **Lambda Path Generation:** Create regularization paths for cross-validation
- **Model Serialization:** Save/load trained models to/from JSON for all model types
### Model Statistics
- **Fit Metrics:** R-squared, Adjusted R-squared, F-statistic, F-test p-value
- **Error Metrics:** Mean Squared Error (MSE), Root Mean Squared Error (RMSE), Mean Absolute Error (MAE)
- **Model Selection:** Log-likelihood, AIC (Akaike Information Criterion), BIC (Bayesian Information Criterion)
- **Residuals:** Raw residuals, standardized residuals, fitted values, leverage (hat matrix diagonal)
- **Multicollinearity:** Variance Inflation Factor (VIF) for each predictor
### Feature Importance
- **Standardized Coefficients:** Coefficients scaled by standard deviation for cross-variable comparison
- **SHAP Values:** Exact SHAP (Shapley Additive Explanations) for linear models — local and global importance
- **Permutation Importance:** Performance drop when feature values are randomly shuffled
- **VIF Ranking:** Automatic multicollinearity assessment with interpretation guidance
### Diagnostic Tests
| Category | Tests |
|----------|-------|
| **Linearity** | Rainbow Test, Harvey-Collier Test, RESET Test |
| **Heteroscedasticity** | Breusch-Pagan (Koenker variant), White Test (R & Python methods) |
| **Normality** | Jarque-Bera, Shapiro-Wilk (n ≤ 5000), Anderson-Darling |
| **Autocorrelation** | Durbin-Watson, Breusch-Godfrey (higher-order) |
| **Multicollinearity** | Variance Inflation Factor (VIF) |
| **Influence** | Cook's Distance, DFBETAS, DFFITS |
---
## Rust Usage
Add to your `Cargo.toml`:
```toml
[dependencies]
linreg-core = { version = "0.8", default-features = false }
```
### OLS Regression (Rust)
```rust
use linreg_core::core::ols_regression;
fn main() -> Result<(), linreg_core::Error> {
let y = vec![2.5, 3.7, 4.2, 5.1, 6.3];
let x = vec![vec![1.0, 2.0, 3.0, 4.0, 5.0]];
let names = vec!["Intercept".to_string(), "X1".to_string()];
let result = ols_regression(&y, &x, &names)?;
println!("Coefficients: {:?}", result.coefficients);
println!("R-squared: {:.4}", result.r_squared);
println!("F-statistic: {:.4}", result.f_statistic);
println!("Log-likelihood: {:.4}", result.log_likelihood);
println!("AIC: {:.4}", result.aic);
println!("BIC: {:.4}", result.bic);
Ok(())
}
```
### Ridge Regression (Rust)
```rust,no_run
use linreg_core::regularized::{ridge_fit, RidgeFitOptions};
use linreg_core::linalg::Matrix;
fn main() -> Result<(), linreg_core::Error> {
let y = vec![2.5, 3.7, 4.2, 5.1, 6.3];
let x = Matrix::new(5, 2, vec![
1.0, 1.0, // row 0: intercept, x1
1.0, 2.0, // row 1
1.0, 3.0, // row 2
1.0, 4.0, // row 3
1.0, 5.0, // row 4
]);
let options = RidgeFitOptions {
lambda: 1.0,
standardize: true,
intercept: true,
};
let result = ridge_fit(&x, &y, &options)?;
println!("Intercept: {}", result.intercept);
println!("Coefficients: {:?}", result.coefficients);
println!("R-squared: {:.4}", result.r_squared);
println!("Effective degrees of freedom: {:.2}", result.effective_df);
println!("AIC: {:.4}", result.aic);
println!("BIC: {:.4}", result.bic);
Ok(())
}
```
### Lasso Regression (Rust)
```rust,no_run
use linreg_core::regularized::{lasso_fit, LassoFitOptions};
use linreg_core::linalg::Matrix;
fn main() -> Result<(), linreg_core::Error> {
let y = vec![2.5, 3.7, 4.2, 5.1, 6.3];
let x = Matrix::new(5, 3, vec![
1.0, 1.0, 0.5,
1.0, 2.0, 1.0,
1.0, 3.0, 1.5,
1.0, 4.0, 2.0,
1.0, 5.0, 2.5,
]);
let options = LassoFitOptions {
lambda: 0.1,
standardize: true,
intercept: true,
..Default::default()
};
let result = lasso_fit(&x, &y, &options)?;
println!("Intercept: {}", result.intercept);
println!("Coefficients: {:?}", result.coefficients);
println!("Non-zero coefficients: {}", result.n_nonzero);
println!("AIC: {:.4}", result.aic);
println!("BIC: {:.4}", result.bic);
Ok(())
}
```
### Elastic Net Regression (Rust)
```rust,no_run
use linreg_core::regularized::{elastic_net_fit, ElasticNetOptions};
use linreg_core::linalg::Matrix;
fn main() -> Result<(), linreg_core::Error> {
let y = vec![2.5, 3.7, 4.2, 5.1, 6.3];
let x = Matrix::new(5, 3, vec![
1.0, 1.0, 0.5,
1.0, 2.0, 1.0,
1.0, 3.0, 1.5,
1.0, 4.0, 2.0,
1.0, 5.0, 2.5,
]);
let options = ElasticNetOptions {
lambda: 0.1,
alpha: 0.5, // 0 = Ridge, 1 = Lasso, 0.5 = balanced
standardize: true,
intercept: true,
..Default::default()
};
let result = elastic_net_fit(&x, &y, &options)?;
println!("Intercept: {}", result.intercept);
println!("Coefficients: {:?}", result.coefficients);
println!("Non-zero coefficients: {}", result.n_nonzero);
println!("AIC: {:.4}", result.aic);
println!("BIC: {:.4}", result.bic);
Ok(())
}
```
### Polynomial Regression (Rust)
```rust,no_run
use linreg_core::polynomial::{polynomial_regression, polynomial_predict};
fn main() -> Result<(), linreg_core::Error> {
let y = vec![2.1, 4.9, 10.8, 19.5, 32.1]; // Quadratic relationship
let x = vec![1.0, 2.0, 3.0, 4.0, 5.0];
// Fit degree-2 polynomial with centering for numerical stability
let fit = polynomial_regression(&y, &x, 2, true, true)?;
println!("R²: {:.4}", fit.ols_output.r_squared);
println!("Coefficients: {:?}", fit.ols_output.coefficients);
// Predict at new x values
let new_x = vec![2.5, 5.5];
let predictions = polynomial_predict(&fit, &new_x)?;
println!("Predictions: {:?}", predictions);
Ok(())
}
```
### Feature Importance (Rust)
```rust,no_run
use linreg_core::feature_importance::{
standardized_coefficients, shap_values_linear,
permutation_importance_ols, PermutationImportanceOptions,
};
fn main() -> Result<(), linreg_core::Error> {
let y = vec![2.5, 3.7, 4.2, 5.1, 6.3];
let x1 = vec![1.0, 2.0, 3.0, 4.0, 5.0];
let x2 = vec![2.0, 4.0, 5.0, 4.0, 3.0];
// Standardized coefficients for comparison
let std_coef = standardized_coefficients(&[0.8, 0.5], &[x1.clone(), x2.clone()])?;
println!("Standardized: {:?}", std_coef.standardized_coefficients);
// SHAP values for local explanations
let shap = shap_values_linear(&[x1, x2], &[1.5, 0.3])?;
println!("SHAP: {:?}", shap.mean_abs_shap);
// Permutation importance
let perm = permutation_importance_ols(
&y, &[x1, x2], &PermutationImportanceOptions::default()
)?;
println!("Importance: {:?}", perm.importance);
Ok(())
}
```
### Diagnostic Tests (Rust)
```rust
use linreg_core::diagnostics::{
breusch_pagan_test, durbin_watson_test, jarque_bera_test,
shapiro_wilk_test, rainbow_test, harvey_collier_test,
white_test, anderson_darling_test, breusch_godfrey_test,
cooks_distance_test, dfbetas_test, dffits_test, vif_test,
reset_test, BGTestType, RainbowMethod, ResetType, WhiteMethod
};
fn main() -> Result<(), linreg_core::Error> {
let y = vec![/* your data */];
let x = vec![vec![/* predictor 1 */], vec![/* predictor 2 */]];
// Heteroscedasticity tests
let bp = breusch_pagan_test(&y, &x)?;
println!("Breusch-Pagan: LM={:.4}, p={:.4}", bp.statistic, bp.p_value);
let white = white_test(&y, &x, WhiteMethod::R)?;
println!("White: statistic={:.4}, p={:.4}", white.statistic, white.p_value);
// Autocorrelation tests
let dw = durbin_watson_test(&y, &x)?;
println!("Durbin-Watson: {:.4}", dw.statistic);
let bg = breusch_godfrey_test(&y, &x, 2, BGTestType::Chisq)?;
println!("Breusch-Godfrey (order 2): statistic={:.4}, p={:.4}", bg.statistic, bg.p_value);
// Normality tests
let jb = jarque_bera_test(&y, &x)?;
println!("Jarque-Bera: JB={:.4}, p={:.4}", jb.statistic, jb.p_value);
let sw = shapiro_wilk_test(&y, &x)?;
println!("Shapiro-Wilk: W={:.4}, p={:.4}", sw.statistic, sw.p_value);
let ad = anderson_darling_test(&y, &x)?;
println!("Anderson-Darling: A={:.4}, p={:.4}", ad.statistic, ad.p_value);
// Linearity tests
let rainbow = rainbow_test(&y, &x, 0.5, RainbowMethod::R)?;
println!("Rainbow: F={:.4}, p={:.4}",
rainbow.r_result.as_ref().unwrap().statistic,
rainbow.r_result.as_ref().unwrap().p_value);
let hc = harvey_collier_test(&y, &x)?;
println!("Harvey-Collier: t={:.4}, p={:.4}", hc.statistic, hc.p_value);
let reset = reset_test(&y, &x, &[2, 3], ResetType::Fitted)?;
println!("RESET: F={:.4}, p={:.4}", reset.f_statistic, reset.p_value);
// Influence diagnostics
let cd = cooks_distance_test(&y, &x)?;
println!("Cook's Distance: {} influential points", cd.influential_4_over_n.len());
let dfbetas = dfbetas_test(&y, &x)?;
println!("DFBETAS: {} influential observations", dfbetas.influential_observations.len());
let dffits = dffits_test(&y, &x)?;
println!("DFFITS: {} influential observations", dffits.influential_observations.len());
// Multicollinearity
let vif = vif_test(&y, &x)?;
println!("VIF: {:?}", vif.vif_values);
Ok(())
}
```
### WLS Regression (Rust)
```rust,no_run
use linreg_core::weighted_regression::wls_regression;
fn main() -> Result<(), linreg_core::Error> {
let y = vec![2.0, 4.0, 6.0, 8.0, 10.0];
let x1 = vec![1.0, 2.0, 3.0, 4.0, 5.0];
// Equal weights = OLS
let weights = vec![1.0, 1.0, 1.0, 1.0, 1.0];
let fit = wls_regression(&y, &[x1], &weights)?;
println!("Intercept: {} (SE: {}, t: {}, p: {})",
fit.coefficients[0],
fit.standard_errors[0],
fit.t_statistics[0],
fit.p_values[0]
);
println!("F-statistic: {} (p: {})", fit.f_statistic, fit.f_p_value);
println!("R-squared: {:.4}", fit.r_squared);
// Access confidence intervals
for (i, (&coef, &lower, &upper)) in fit.coefficients.iter()
.zip(fit.conf_int_lower.iter())
.zip(fit.conf_int_upper.iter())
.enumerate()
{
println!("Coefficient {}: [{}, {}]", i, lower, upper);
}
Ok(())
}
```
### LOESS Regression (Rust)
```rust,no_run
use linreg_core::loess::{loess_fit, LoessOptions};
fn main() -> Result<(), linreg_core::Error> {
// Single predictor only
let x = vec![0.0, 1.0, 2.0, 3.0, 4.0, 5.0, 6.0, 7.0, 8.0, 9.0, 10.0];
let y = vec![1.0, 3.5, 4.8, 6.2, 8.5, 11.0, 13.2, 14.8, 17.5, 19.0, 22.0];
// Default options: span=0.75, degree=2, robust iterations=0
let options = LoessOptions::default();
let result = loess_fit(&y, &[x], &options)?;
println!("Fitted values: {:?}", result.fitted_values);
println!("Residuals: {:?}", result.residuals);
Ok(())
}
```
**Custom LOESS options:**
```rust,no_run
use linreg_core::loess::{loess_fit, LoessOptions, LoessSurface};
let options = LoessOptions {
span: 0.5, // Smoothing parameter (0-1, smaller = less smooth)
degree: 1, // Polynomial degree (0=constant, 1=linear, 2=quadratic)
surface: LoessSurface::Direct, // Note: only "direct" is currently supported; "interpolate" is planned
robust_iterations: 3, // Number of robust fitting iterations (0 = disabled)
};
let result = loess_fit(&y, &[x], &options)?;
```
### K-Fold Cross Validation (Rust)
Cross-validation is used for model evaluation and hyperparameter tuning. The library supports K-Fold CV for all regression types:
```rust,no_run
use linreg_core::cross_validation::{kfold_cv_ols, kfold_cv_ridge, kfold_cv_lasso, kfold_cv_elastic_net, KFoldOptions};
fn main() -> Result<(), linreg_core::Error> {
let y = vec![2.5, 3.7, 4.2, 5.1, 6.3, 7.0, 7.5, 8.1];
let x1 = vec![1.0, 2.0, 3.0, 4.0, 5.0, 6.0, 7.0, 8.0];
let x2 = vec![2.0, 4.0, 5.0, 4.0, 3.0, 4.5, 5.5, 6.0];
let names = vec!["Intercept".to_string(), "X1".to_string(), "X2".to_string()];
// Configure CV options
let options = KFoldOptions {
n_folds: 5,
shuffle: true,
seed: Some(42), // For reproducibility
};
// OLS cross-validation
let ols_cv = kfold_cv_ols(&y, &[x1.clone(), x2.clone()], &names, &options)?;
println!("OLS CV RMSE: {:.4} (±{:.4})", ols_cv.mean_rmse, ols_cv.std_rmse);
println!("OLS CV R²: {:.4} (±{:.4})", ols_cv.mean_r_squared, ols_cv.std_r_squared);
// Ridge cross-validation (for lambda selection)
let lambda = 1.0;
let ridge_cv = kfold_cv_ridge(&[x1.clone(), x2.clone()], &y, lambda, true, &options)?;
println!("Ridge CV RMSE: {:.4}", ridge_cv.mean_rmse);
// Lasso cross-validation
let lasso_cv = kfold_cv_lasso(&[x1.clone(), x2.clone()], &y, 0.1, true, &options)?;
println!("Lasso CV RMSE: {:.4}", lasso_cv.mean_rmse);
// Elastic Net cross-validation
let enet_cv = kfold_cv_elastic_net(&[x1, x2], &y, 0.1, 0.5, true, &options)?;
println!("Elastic Net CV RMSE: {:.4}", enet_cv.mean_rmse);
// Access per-fold results
for fold in &ols_cv.fold_results {
println!("Fold {}: train={}, test={}, R²={:.4}",
fold.fold_index, fold.train_size, fold.test_size, fold.r_squared);
}
Ok(())
}
```
**CV Result fields:**
- `mean_rmse`, `std_rmse` - Mean and std of RMSE across folds
- `mean_mae`, `std_mae` - Mean and std of MAE across folds
- `mean_r_squared`, `std_r_squared` - Mean and std of R² across folds
- `mean_train_r_squared` - Mean training R² (for overfitting detection)
- `fold_results` - Per-fold metrics (train/test sizes, MSE, RMSE, MAE, R²)
- `fold_coefficients` - Coefficients from each fold (for stability analysis)
### Lambda Path Generation (Rust)
```rust,no_run
use linreg_core::regularized::{make_lambda_path, LambdaPathOptions};
use linreg_core::linalg::Matrix;
let x = Matrix::new(100, 5, vec![0.0; 500]);
let y = vec![0.0; 100];
let options = LambdaPathOptions {
nlambda: 100,
lambda_min_ratio: Some(0.01),
alpha: 1.0, // Lasso
..Default::default()
};
let lambdas = make_lambda_path(&x, &y, &options, None, Some(0));
for &lambda in lambdas.iter() {
// Fit model with this lambda
}
```
### Model Save/Load (Rust)
All trained models can be saved to disk and loaded back later:
```rust,no_run
use linreg_core::{ModelSave, ModelLoad};
// Train a model
let result = ols_regression(&y, &[x1], &names)?;
// Save to file
result.save("my_model.json")?;
// Or with a custom name
result.save_with_name("my_model.json", Some("My Housing Model".to_string()))?;
// Load back
let loaded = linreg_core::core::RegressionOutput::load("my_model.json")?;
```
The same `save()` and `load()` methods work for all model types: `RegressionOutput`, `RidgeFit`, `LassoFit`, `ElasticNetFit`, `WlsFit`, and `LoessFit`.
---
## WebAssembly Usage
**[Live Demo →](https://jesse-anderson.net/linreg-core/)**
Build with wasm-pack:
```bash
wasm-pack build --release --target web
```
### OLS Regression (WASM)
```javascript
import init, { ols_regression } from './pkg/linreg_core.js';
async function run() {
await init();
const y = [1, 2, 3, 4, 5];
const x = [[1, 2, 3, 4, 5]];
const names = ["Intercept", "X1"];
const resultJson = ols_regression(
JSON.stringify(y),
JSON.stringify(x),
JSON.stringify(names)
);
const result = JSON.parse(resultJson);
console.log("Coefficients:", result.coefficients);
console.log("R-squared:", result.r_squared);
console.log("Log-likelihood:", result.log_likelihood);
console.log("AIC:", result.aic);
console.log("BIC:", result.bic);
}
run();
```
### Ridge Regression (WASM)
```javascript
const result = JSON.parse(ridge_regression(
JSON.stringify(y),
JSON.stringify(x),
JSON.stringify(["Intercept", "X1", "X2"]),
1.0, // lambda
true // standardize
));
console.log("Coefficients:", result.coefficients);
console.log("R-squared:", result.r_squared);
console.log("Effective degrees of freedom:", result.effective_df);
console.log("AIC:", result.aic);
console.log("BIC:", result.bic);
```
### Lasso Regression (WASM)
```javascript
const result = JSON.parse(lasso_regression(
JSON.stringify(y),
JSON.stringify(x),
JSON.stringify(["Intercept", "X1", "X2"]),
0.1, // lambda
true, // standardize
100000, // max_iter
1e-7 // tol
));
console.log("Coefficients:", result.coefficients);
console.log("Non-zero coefficients:", result.n_nonzero);
console.log("AIC:", result.aic);
console.log("BIC:", result.bic);
```
### Elastic Net Regression (WASM)
```javascript
const result = JSON.parse(elastic_net_regression(
JSON.stringify(y),
JSON.stringify(x),
JSON.stringify(["Intercept", "X1", "X2"]),
0.1, // lambda
0.5, // alpha (0 = Ridge, 1 = Lasso, 0.5 = balanced)
true, // standardize
100000, // max_iter
1e-7 // tol
));
console.log("Coefficients:", result.coefficients);
console.log("Non-zero coefficients:", result.n_nonzero);
console.log("AIC:", result.aic);
console.log("BIC:", result.bic);
```
### Lambda Path Generation (WASM)
```javascript
const path = JSON.parse(make_lambda_path(
JSON.stringify(y),
JSON.stringify(x),
100, // n_lambda
0.01 // lambda_min_ratio (as fraction of lambda_max)
));
console.log("Lambda sequence:", path.lambda_path);
console.log("Lambda max:", path.lambda_max);
```
### WLS Regression (WASM)
```javascript
const result = JSON.parse(wls_regression(
JSON.stringify([2, 4, 6, 8, 10]),
JSON.stringify([[1, 2, 3, 4, 5]]),
JSON.stringify([1, 1, 1, 1, 1]) // weights (equal weights = OLS)
));
console.log("Coefficients:", result.coefficients);
console.log("Standard errors:", result.standard_errors);
console.log("P-values:", result.p_values);
console.log("R-squared:", result.r_squared);
console.log("F-statistic:", result.f_statistic);
console.log("Confidence intervals (lower):", result.conf_int_lower);
console.log("Confidence intervals (upper):", result.conf_int_upper);
```
### LOESS Regression (WASM)
```javascript
const result = JSON.parse(loess_fit(
JSON.stringify(y),
JSON.stringify(x[0]), // Single predictor only (flattened array)
0.5, // span (smoothing parameter: 0-1)
1, // degree (0=constant, 1=linear, 2=quadratic)
"direct", // surface method ("direct" only; "interpolate" is planned)
0 // robust iterations (0=disabled, >0=number of iterations)
));
console.log("Fitted values:", result.fitted_values);
console.log("Residuals:", result.residuals);
```
### K-Fold Cross Validation (WASM)
```javascript
// OLS cross-validation
const ols_cv = JSON.parse(kfold_cv_ols(
JSON.stringify(y),
JSON.stringify(x),
JSON.stringify(["Intercept", "X1", "X2"]),
5, // n_folds
"true", // shuffle (JSON boolean)
"42" // seed (JSON string number, or "null" for no seed)
));
console.log("OLS CV RMSE:", ols_cv.mean_rmse, "±", ols_cv.std_rmse);
console.log("OLS CV R²:", ols_cv.mean_r_squared, "±", ols_cv.std_r_squared);
// Ridge cross-validation
const ridge_cv = JSON.parse(kfold_cv_ridge(
JSON.stringify(y),
JSON.stringify(x),
1.0, // lambda
true, // standardize
5, // n_folds
"true", // shuffle
"42" // seed
));
// Lasso cross-validation
const lasso_cv = JSON.parse(kfold_cv_lasso(
JSON.stringify(y),
JSON.stringify(x),
0.1, // lambda
true, // standardize
5, // n_folds
"true", // shuffle
"42" // seed
));
// Elastic Net cross-validation
const enet_cv = JSON.parse(kfold_cv_elastic_net(
JSON.stringify(y),
JSON.stringify(x),
0.1, // lambda
0.5, // alpha (0 = Ridge, 1 = Lasso)
true, // standardize
5, // n_folds
"true", // shuffle
"42" // seed
));
// Access per-fold results
ols_cv.fold_results.forEach(fold => {
console.log(`Fold ${fold.fold_index}: R²=${fold.r_squared.toFixed(4)}`);
});
```
**Note:** In WASM, boolean and seed parameters are passed as JSON strings. Use `"true"`/`"false"` for shuffle and `"42"` or `"null"` for seed.
### Diagnostic Tests (WASM)
```javascript
// Rainbow test
const rainbow = JSON.parse(rainbow_test(
JSON.stringify(y),
JSON.stringify(x),
0.5, // fraction
"r" // method: "r", "python", or "both"
));
// Harvey-Collier test
const hc = JSON.parse(harvey_collier_test(
JSON.stringify(y),
JSON.stringify(x)
));
// Breusch-Pagan test
const bp = JSON.parse(breusch_pagan_test(
JSON.stringify(y),
JSON.stringify(x)
));
// White test (method selection: "r", "python", or "both")
const white = JSON.parse(white_test(
JSON.stringify(y),
JSON.stringify(x),
"r"
));
// White test - R-specific method
const whiteR = JSON.parse(r_white_test(
JSON.stringify(y),
JSON.stringify(x)
));
// White test - Python-specific method
const whitePy = JSON.parse(python_white_test(
JSON.stringify(y),
JSON.stringify(x)
));
// Jarque-Bera test
const jb = JSON.parse(jarque_bera_test(
JSON.stringify(y),
JSON.stringify(x)
));
// Durbin-Watson test
const dw = JSON.parse(durbin_watson_test(
JSON.stringify(y),
JSON.stringify(x)
));
// Shapiro-Wilk test
const sw = JSON.parse(shapiro_wilk_test(
JSON.stringify(y),
JSON.stringify(x)
));
// Anderson-Darling test
const ad = JSON.parse(anderson_darling_test(
JSON.stringify(y),
JSON.stringify(x)
));
// Cook's Distance
const cd = JSON.parse(cooks_distance_test(
JSON.stringify(y),
JSON.stringify(x)
));
// DFBETAS (influence on coefficients)
const dfbetas = JSON.parse(dfbetas_test(
JSON.stringify(y),
JSON.stringify(x)
));
// DFFITS (influence on fitted values)
const dffits = JSON.parse(dffits_test(
JSON.stringify(y),
JSON.stringify(x)
));
// VIF test (multicollinearity)
const vif = JSON.parse(vif_test(
JSON.stringify(y),
JSON.stringify(x)
));
console.log("VIF values:", vif.vif_values);
// RESET test (functional form)
const reset = JSON.parse(reset_test(
JSON.stringify(y),
JSON.stringify(x),
JSON.stringify([2, 3]), // powers
"fitted" // type: "fitted", "regressor", or "princomp"
));
// Breusch-Godfrey test (higher-order autocorrelation)
const bg = JSON.parse(breusch_godfrey_test(
JSON.stringify(y),
JSON.stringify(x),
1, // order
"chisq" // test_type: "chisq" or "f"
));
```
### Statistical Utilities (WASM)
```javascript
// Student's t CDF: P(T <= t)
const tCDF = get_t_cdf(1.96, 20);
// Critical t-value for two-tailed test
const tCrit = get_t_critical(0.05, 20);
// Normal inverse CDF (probit)
const zScore = get_normal_inverse(0.975);
// Descriptive statistics (all return JSON strings)
const mean = JSON.parse(stats_mean(JSON.stringify([1, 2, 3, 4, 5])));
const variance = JSON.parse(stats_variance(JSON.stringify([1, 2, 3, 4, 5])));
const stddev = JSON.parse(stats_stddev(JSON.stringify([1, 2, 3, 4, 5])));
const median = JSON.parse(stats_median(JSON.stringify([1, 2, 3, 4, 5])));
const quantile = JSON.parse(stats_quantile(JSON.stringify([1, 2, 3, 4, 5]), 0.5));
const correlation = JSON.parse(stats_correlation(
JSON.stringify([1, 2, 3, 4, 5]),
JSON.stringify([2, 4, 6, 8, 10])
));
```
### CSV Parsing (WASM)
```javascript
const csv = parse_csv(csvContent);
const parsed = JSON.parse(csv);
console.log("Headers:", parsed.headers);
console.log("Numeric columns:", parsed.numeric_columns);
```
### Helper Functions (WASM)
```javascript
const version = get_version(); // e.g., "0.5.0"
const msg = test(); // "Rust WASM is working!"
```
### Model Serialization (WASM)
```javascript
// Train a model
const resultJson = ols_regression(
JSON.stringify(y),
JSON.stringify(x),
JSON.stringify(names)
);
const result = JSON.parse(resultJson);
// Serialize with metadata
const serialized = serialize_model(
resultJson, // model JSON
"OLS", // model type: "OLS", "Ridge", "Lasso", "ElasticNet", "WLS", "LOESS"
"My Model" // optional name (null to omit)
);
// Get metadata without loading full model
const metadataJson = get_model_metadata(serialized);
const metadata = JSON.parse(metadataJson);
console.log("Model type:", metadata.model_type);
console.log("Created:", metadata.created_at);
// Deserialize to get model data back
const modelJson = deserialize_model(serialized);
const model = JSON.parse(modelJson);
// Download in browser
const blob = new Blob([serialized], { type: 'application/json' });
const url = URL.createObjectURL(blob);
const a = document.createElement('a');
a.href = url;
a.download = 'model.json';
a.click();
```
### Domain Security (WASM)
Optional domain restriction via build-time environment variable:
```bash
LINREG_DOMAIN_RESTRICT=example.com,mysite.com wasm-pack build --release --target web
```
When NOT set (default), all domains are allowed.
---
## Python Usage
Install from PyPI:
```bash
pip install linreg-core
```
### Quick Start (Python)
The recommended way to use `linreg-core` in Python is with native types (lists or numpy arrays):
```python
import linreg_core
# Works with Python lists
y = [1, 2, 3, 4, 5]
x = [[1, 2, 3, 4, 5]]
names = ["Intercept", "X1"]
result = linreg_core.ols_regression(y, x, names)
# Access attributes directly
print(f"R²: {result.r_squared}")
print(f"Coefficients: {result.coefficients}")
print(f"F-statistic: {result.f_statistic}")
# Get a formatted summary
print(result.summary())
```
**With NumPy arrays:**
```python
import numpy as np
import linreg_core
y = np.array([1, 2, 3, 4, 5])
x = np.array([[1, 2, 3, 4, 5]])
result = linreg_core.ols_regression(y, x, ["Intercept", "X1"])
print(result.summary())
```
**Result objects** provide:
- Direct attribute access (`result.r_squared`, `result.coefficients`, `result.aic`, `result.bic`, `result.log_likelihood`)
- `summary()` method for formatted output
- `to_dict()` method for JSON serialization
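For instance, `to_dict()` makes it easy to hand results to standard JSON tooling. A minimal sketch, assuming `result` from the Quick Start above (the exact dictionary keys depend on the model type):
```python
import json

# Serialize the fitted model's results via to_dict()
# (inspect the output for your model type; key set is not fixed here).
result_dict = result.to_dict()
print(json.dumps(result_dict, indent=2))

# The formatted summary remains available for human-readable output.
print(result.summary())
```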
### OLS Regression (Python)
```python
import linreg_core
y = [1, 2, 3, 4, 5]
x = [[1, 2, 3, 4, 5]]
names = ["Intercept", "X1"]
result = linreg_core.ols_regression(y, x, names)
print(f"Coefficients: {result.coefficients}")
print(f"R-squared: {result.r_squared}")
print(f"F-statistic: {result.f_statistic}")
print(f"Log-likelihood: {result.log_likelihood}")
print(f"AIC: {result.aic}")
print(f"BIC: {result.bic}")
```
### Ridge Regression (Python)
```python
result = linreg_core.ridge_regression(
y, x, ["Intercept", "X1"],
1.0, # lambda
True # standardize
)
print(f"Intercept: {result.intercept}")
print(f"Coefficients: {result.coefficients}")
print(f"Effective degrees of freedom: {result.effective_df:.2f}")
print(f"AIC: {result.aic}")
print(f"BIC: {result.bic}")
```
### Lasso Regression (Python)
```python
result = linreg_core.lasso_regression(
y, x, ["Intercept", "X1"],
0.1, # lambda
True, # standardize
100000, # max_iter
1e-7 # tol
)
print(f"Intercept: {result.intercept}")
print(f"Coefficients: {result.coefficients}")
print(f"Non-zero: {result.n_nonzero}")
print(f"Converged: {result.converged}")
print(f"AIC: {result.aic}")
print(f"BIC: {result.bic}")
```
### Elastic Net Regression (Python)
```python
result = linreg_core.elastic_net_regression(
y, x, ["Intercept", "X1"],
0.1, # lambda
0.5, # alpha (0 = Ridge, 1 = Lasso, 0.5 = balanced)
True, # standardize
100000, # max_iter
1e-7 # tol
)
print(f"Intercept: {result.intercept}")
print(f"Coefficients: {result.coefficients}")
print(f"Non-zero: {result.n_nonzero}")
print(f"AIC: {result.aic}")
print(f"BIC: {result.bic}")
```
### LOESS Regression (Python)
```python
result = linreg_core.loess_fit(
y,
x,          # Single predictor only
0.5,        # span (smoothing parameter: 0-1)
2,          # degree (0=constant, 1=linear, 2=quadratic)
"direct",   # surface ("direct" only; "interpolate" is planned)
0           # robust iterations (0=disabled, >0=number of iterations)
)
print(f"Fitted values: {result.fitted_values}")
print(f"Residuals: {result.residuals}")
```
### Lambda Path Generation (Python)
```python
path = linreg_core.make_lambda_path(
y, x,
100, # n_lambda
0.01 # lambda_min_ratio
)
print(f"Lambda max: {path.lambda_max}")
print(f"Lambda min: {path.lambda_min}")
print(f"Number: {path.n_lambda}")
```
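One common use of the path is to refit a regularized model at each candidate value and pick one by an information criterion. A minimal sketch using only the functions shown above; the log-spaced grid reconstruction is an assumption about how the path is laid out, and AIC-based selection is purely illustrative:
```python
import numpy as np
import linreg_core

path = linreg_core.make_lambda_path(y, x, 100, 0.01)

# Rebuild a log-spaced grid from the reported endpoints (assumed layout).
lambdas = np.logspace(np.log10(path.lambda_max), np.log10(path.lambda_min), path.n_lambda)

best_lambda, best_aic = None, float("inf")
for lam in lambdas:
    fit = linreg_core.lasso_regression(y, x, ["Intercept", "X1"], float(lam), True, 100000, 1e-7)
    if fit.aic < best_aic:
        best_lambda, best_aic = float(lam), fit.aic

print(f"Best lambda by AIC: {best_lambda:.4f} (AIC={best_aic:.2f})")
```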
### Diagnostic Tests (Python)
```python
# Breusch-Pagan test (heteroscedasticity)
bp = linreg_core.breusch_pagan_test(y, x)
print(f"Statistic: {bp.statistic}, p-value: {bp.p_value}")
# Harvey-Collier test (linearity)
hc = linreg_core.harvey_collier_test(y, x)
# Rainbow test (linearity) - supports "r", "python", or "both" methods
rainbow = linreg_core.rainbow_test(y, x, 0.5, "r")
# White test - choose method: "r", "python", or "both"
white = linreg_core.white_test(y, x, "r")
# Or use specific method functions
white_r = linreg_core.r_white_test(y, x)
white_py = linreg_core.python_white_test(y, x)
# Jarque-Bera test (normality)
jb = linreg_core.jarque_bera_test(y, x)
# Durbin-Watson test (autocorrelation)
dw = linreg_core.durbin_watson_test(y, x)
print(f"DW statistic: {dw.statistic}")
# Shapiro-Wilk test (normality)
sw = linreg_core.shapiro_wilk_test(y, x)
# Anderson-Darling test (normality)
ad = linreg_core.anderson_darling_test(y, x)
# Cook's Distance (influential observations)
cd = linreg_core.cooks_distance_test(y, x)
print(f"Influential points: {cd.influential_4_over_n}")
# DFBETAS (influence on each coefficient)
dfbetas = linreg_core.dfbetas_test(y, x)
print(f"Threshold: {dfbetas.threshold}")
print(f"Influential obs: {dfbetas.influential_observations}")
# DFFITS (influence on fitted values)
dffits = linreg_core.dffits_test(y, x)
print(f"Threshold: {dffits.threshold}")
print(f"Influential obs: {dffits.influential_observations}")
# RESET test (model specification)
reset = linreg_core.reset_test(y, x, [2, 3], "fitted")
# Breusch-Godfrey test (higher-order autocorrelation)
bg = linreg_core.breusch_godfrey_test(y, x, 1, "chisq")
```
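Because test results expose `statistic` and `p_value` attributes (shown for Breusch-Pagan above; the same attribute names are assumed here for the other tests), several checks can be batched into a small report. An illustrative convenience loop, not a built-in helper:
```python
# Run a handful of diagnostics and flag low p-values for closer inspection.
tests = {
    "Breusch-Pagan (heteroscedasticity)": linreg_core.breusch_pagan_test(y, x),
    "White (heteroscedasticity)": linreg_core.white_test(y, x, "r"),
    "Jarque-Bera (normality)": linreg_core.jarque_bera_test(y, x),
    "Rainbow (linearity)": linreg_core.rainbow_test(y, x, 0.5, "r"),
}
for name, res in tests.items():
    flag = "check residuals" if res.p_value < 0.05 else "ok"
    print(f"{name}: statistic={res.statistic:.3f}, p={res.p_value:.3f} -> {flag}")
```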
### Statistical Utilities (Python)
```python
# Student's t CDF
t_cdf = linreg_core.get_t_cdf(1.96, 20)
# Critical t-value (two-tailed)
t_crit = linreg_core.get_t_critical(0.05, 20)
# Normal inverse CDF (probit)
z_score = linreg_core.get_normal_inverse(0.975)
# Library version
version = linreg_core.get_version()
```
### Descriptive Statistics (Python)
```python
import numpy as np
# All return float directly (no parsing needed)
mean = linreg_core.stats_mean([1, 2, 3, 4, 5])
variance = linreg_core.stats_variance([1, 2, 3, 4, 5])
stddev = linreg_core.stats_stddev([1, 2, 3, 4, 5])
median = linreg_core.stats_median([1, 2, 3, 4, 5])
quantile = linreg_core.stats_quantile([1, 2, 3, 4, 5], 0.5)
correlation = linreg_core.stats_correlation([1, 2, 3, 4, 5], [2, 4, 6, 8, 10])
# Works with numpy arrays too
mean = linreg_core.stats_mean(np.array([1, 2, 3, 4, 5]))
```
### CSV Parsing (Python)
```python
csv_content = '''name,value,category
Alice,42.5,A
Bob,17.3,B
Charlie,99.9,A'''
result = linreg_core.parse_csv(csv_content)
print(f"Headers: {result.headers}")
print(f"Numeric columns: {result.numeric_columns}")
print(f"Data rows: {result.n_rows}")
```
### Model Save/Load (Python)
```python
# Train a model
result = linreg_core.ols_regression(y, x, names)
# Save to file
linreg_core.save_model(result, "my_model.json", name="My Housing Model")
# Load back
loaded = linreg_core.load_model("my_model.json")
print(f"R²: {loaded.r_squared}")
print(f"Coefficients: {loaded.coefficients}")
```
The `save_model()` and `load_model()` functions work with all result types: `OLSResult`, `RidgeResult`, `LassoResult`, `ElasticNetResult`, `LoessResult`, and `WlsResult`.
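For example, the same pattern works for a regularized fit. A minimal sketch reusing `y` and `x` from the earlier examples (file name and model name are placeholders):
```python
# Fit, save, and reload a Ridge model with the same save_model/load_model API.
ridge = linreg_core.ridge_regression(y, x, ["Intercept", "X1"], 1.0, True)
linreg_core.save_model(ridge, "ridge_model.json", name="Ridge demo")

ridge_loaded = linreg_core.load_model("ridge_model.json")
print(f"Intercept: {ridge_loaded.intercept}")
print(f"Coefficients: {ridge_loaded.coefficients}")
```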
---
## VBA / Excel Usage
The library ships as a native Windows DLL, letting you call it directly from Excel VBA via `Declare` statements. Prebuilt binaries are included in the `VBA_Example/` directory:
| File | Architecture |
|------|-------------|
| `linreg_core_x64.dll` | 64-bit Excel (Office 2010+) |
| `linreg_core_x86.dll` | 32-bit Excel (legacy) |
### Installation
1. Copy `linreg_core_x64.dll` (and/or `linreg_core_x86.dll`) to the same folder as your `.xlsm` workbook.
2. Import `LinregCore.bas` into your VBA project (ALT+F11 → File → Import File).
3. Optionally import `ExampleMacros.bas` for ready-to-run demo macros. Once both files are imported, run `SetupWorkbook()` from the Immediate Window or a button to automatically create example sheets and load sample data.
### Building from Source
```bash
# 64-bit (modern Excel)
cargo build --release --target x86_64-pc-windows-msvc --features ffi
# 32-bit (legacy Excel)
cargo build --release --target i686-pc-windows-msvc --features ffi
```
The 32-bit build automatically uses `linreg_core.def` to strip stdcall decoration, so VBA `Declare` statements work without modification.
### High-Level Wrappers
`LinregCore.bas` exposes friendly wrapper functions that return 2D Excel arrays you can drop straight into cells with `Application.Transpose`:
```vba
' OLS regression - returns (k+6)×5 summary array
Dim result As Variant
result = LinReg_OLS(y, X)
' Regularized regression
result = LinReg_Ridge(y, X, lambda:=1.0, standardize:=True)
result = LinReg_Lasso(y, X, lambda:=0.1)
result = LinReg_ElasticNet(y, X, lambda:=0.1, alpha:=0.5)
' Weighted OLS
result = LinReg_WLS(y, X, weights)
' Prediction intervals (n_new × 4: predicted, lower, upper, SE)
result = LinReg_PredictionIntervals(y, X, newX, alpha:=0.05)
' Diagnostic tests - each returns 1×3: {statistic, p-value, df}
result = LinReg_BreuschPagan(y, X)
result = LinReg_White(y, X)
result = LinReg_JarqueBera(y, X)
result = LinReg_ShapiroWilk(y, X)
result = LinReg_AndersonDarling(y, X)
result = LinReg_HarveyCollier(y, X)
result = LinReg_Rainbow(y, X, fraction:=0.5)
result = LinReg_Reset(y, X)
result = LinReg_DurbinWatson(y, X) ' {DW statistic, ρ, ""}
result = LinReg_BreuschGodfrey(y, X, lagOrder:=1)
' Influence diagnostics
result = LinReg_VIF(y, X) ' p×1
result = LinReg_CooksDistance(y, X) ' n×1
result = LinReg_DFFITS(y, X) ' n×1
result = LinReg_DFBETAS(y, X) ' (n+1)×(p+1) with header row/col
' Regularization path and cross-validation
result = LinReg_LambdaPath(y, X, nLambda:=100, lmr:=0.01, alpha:=1.0)
result = LinReg_KFoldOLS(y, X, nFolds:=5) ' 1×6 CV metrics
result = LinReg_KFoldRidge(y, X, lambda:=1.0)
result = LinReg_KFoldLasso(y, X, lambda:=0.1)
result = LinReg_KFoldElasticNet(y, X, lambda:=0.1, alpha:=0.5)
```
All wrappers return a 1-element array containing an error string on failure:
```vba
If IsArray(result) And UBound(result, 1) = 0 Then
MsgBox "Error: " & result(0)
Exit Sub
End If
```
### Low-Level Handle API
The DLL uses an opaque handle pattern. All `LR_*` functions return a `usize` handle (0 = error); call `LR_Free` when done:
```vba
' --- declarations already in LinregCore.bas ---
' Private Declare PtrSafe Function LR_OLS Lib "linreg_core_x64.dll" ...
' Private Declare PtrSafe Sub LR_Free Lib "linreg_core_x64.dll" ...
Sub LowLevelExample()
Dim n As Lo | text/markdown | null | null | null | null | MIT OR Apache-2.0 | regression, statistics, linear-regression, ridge, lasso | [
"Development Status :: 4 - Beta",
"Intended Audience :: Developers",
"Intended Audience :: Science/Research",
"License :: OSI Approved :: MIT License",
"License :: OSI Approved :: Apache Software License",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.8",
"Programming Language :: Python :: 3.9",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programming Language :: Rust",
"Topic :: Scientific/Engineering"
] | [] | https://jesse-anderson.net/linreg-core/ | null | >=3.8 | [] | [] | [] | [] | [] | [] | [] | [
"Documentation, https://docs.rs/linreg-core",
"Homepage, https://jesse-anderson.net/linreg-core/",
"Live Demo, https://jesse-anderson.net/linreg-core/",
"Repository, https://github.com/jesse-anderson/linreg-core"
] | twine/6.2.0 CPython/3.12.3 | 2026-02-20T11:23:13.605492 | linreg_core-0.8.0-cp39-cp39-win_amd64.whl | 829,960 | 4e/3f/fa1d6303ffe935a0276e52b030bd97976fbb6c94c6e7617ee67306ce624d/linreg_core-0.8.0-cp39-cp39-win_amd64.whl | cp39 | bdist_wheel | null | false | 46da3717b0f2bb0c9be4ea7997c9a0c6 | 4428e274feead2c52274d105c5b5c54bb8f6e1a480f4c88c0b7656773040cc10 | 4e3ffa1d6303ffe935a0276e52b030bd97976fbb6c94c6e7617ee67306ce624d | null | [
"LICENSE-APACHE",
"LICENSE-MIT"
] | 1,012 |
2.1 | dataqe-framework | 0.2.1 | Reusable Data Validation Framework for data migration, ETL validation, and cross-database reconciliation | # DataQE Framework - Data Quality and Equality Testing
A powerful Python framework for validating data quality and ensuring data consistency between source and target databases. Designed for data migration projects, ETL validation, and cross-database reconciliation.
**Version**: 0.0.1
## Overview
DataQE Framework enables organizations to:
- **Validate data migration quality** between different database systems
- **Ensure data consistency** across source and target environments
- **Run comprehensive test suites** with flexible comparison modes
- **Generate detailed reports** for compliance and audit trails
- **Support dynamic dataset replacement** for multi-release environments
## Key Features
### Multi-Database Support
- **MySQL** - Relational database validation
- **Google BigQuery** - Cloud data warehouse validation
- Extensible connector architecture for adding more databases
### Flexible Test Configuration
- YAML-based test definitions
- Single-source validation with expected conditions
- Source vs Target equality checks
- Threshold-based comparisons (percentage and absolute)
- Support for multiple test cases in a single execution
### Dynamic Dataset Replacement
- Replace dataset placeholders with actual release names
- Centralized configuration for dataset mappings
- Support for multiple sources with different release versions
### Comprehensive Reporting
- **ExecutionReport.html** - Full test results with detailed execution times
- **FailedExecutionReport.html** - Failed tests or confirmation of all tests passing
- **ExecutionReport.csv** - Structured test results for further analysis
- **AutomationData.csv** - CI/CD integration data
- Real-time console output with progress tracking
### Enterprise Features
- PHI data protection with KMS encryption support
- Detailed execution timing metrics
- Environment-based configuration
- Flexible credential management
## Installation
### Prerequisites
- Python 3.8+
- pip
### Install from Source
```bash
git clone <repository-url>
cd dataqe-framework
pip install -e .
```
### Verify Installation
```bash
dataqe-run --help
```
## Quick Start
### 1. Create Configuration File
Create `config.yml`:
```yaml
config_block_validation:
source:
database_type: mysql
mysql:
host: source-db.example.com
port: 3306
user: db_user
password: db_password
database: source_db
target:
database_type: gcpbq
gcp:
project_id: my-gcp-project
dataset_id: target_dataset
credentials_path: /path/to/credentials.json
other:
validation_script: test_suite.yml
preprocessor_queries: preprocessor_queries.yml
```
### 2. Create Test Suite
Create `test_suite.yml`:
```yaml
- test_row_count:
severity: critical
source:
query: |
SELECT COUNT(*) as value FROM users
target:
query: |
SELECT COUNT(*) as value FROM users
comparisons:
comment: "User count must match between source and target"
- test_with_threshold:
severity: high
source:
query: |
SELECT SUM(amount) as value FROM transactions
target:
query: |
SELECT SUM(amount) as value FROM transactions
comparisons:
threshold:
value: percentage
limit: 1
comment: "Transaction amounts must match within 1%"
```
### 3. Run Validation
```bash
dataqe-run --config config.yml
```
Check output directory for reports:
```
./output/ExecutionReport.html
./output/ExecutionReport.csv
./output/FailedExecutionReport.html
```
## Configuration
### Config Block Structure
```yaml
config_block_<name>:
source:
database_type: mysql|gcpbq
mysql: {...}
gcp: {...}
config_query_key: optional_query_key
source_name: optional_source_name
target:
database_type: mysql|gcpbq
mysql: {...}
gcp: {...}
config_query_key: optional_query_key
source_name: optional_source_name
other:
validation_script: path/to/test_suite.yml
preprocessor_queries: path/to/preprocessor_queries.yml
```
### Database Configuration
#### MySQL
```yaml
mysql:
host: hostname
port: 3306
user: username
password: password
database: database_name
```
#### Google BigQuery
```yaml
gcp:
project_id: my-project
dataset_id: my-dataset
credentials_path: /path/to/service-account.json
location: us-central1
use_encryption: false
```
See [CONFIGURATION.md](CONFIGURATION.md) for detailed configuration options.
## Test Suite Definition
Each test case has the following structure:
```yaml
- test_name:
severity: critical|high|medium|low
source:
query: |
SELECT COUNT(*) as value FROM table
config_query_key: optional_key
source_name: optional_source_name
target:
query: |
SELECT COUNT(*) as value FROM table
config_query_key: optional_key
source_name: optional_source_name
comparisons:
expected: optional_expected_value
threshold:
value: percentage|absolute
limit: number
comment: "Description of this test"
```
### Comparison Modes
#### 1. Source vs Target Equality
```yaml
comparisons:
comment: "Values must match exactly"
```
#### 2. Expected Value Check
```yaml
comparisons:
expected: ">=1000"
comment: "Count must be at least 1000"
```
#### 3. Percentage Threshold
```yaml
comparisons:
threshold:
value: percentage
limit: 5
comment: "Target can vary up to 5% from source"
```
#### 4. Absolute Difference
```yaml
comparisons:
threshold:
value: absolute
limit: 100
comment: "Target can differ by max 100 units"
```
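To make the two threshold modes concrete, here is how a single source/target pair would be judged. This is a purely illustrative check written outside the framework, not its internal implementation:
```python
source_value, target_value = 1000.0, 1030.0

# Percentage threshold: target may deviate up to `limit` percent of the source.
percentage_limit = 5
percentage_ok = abs(target_value - source_value) / source_value * 100 <= percentage_limit

# Absolute threshold: target may deviate up to `limit` units.
absolute_limit = 100
absolute_ok = abs(target_value - source_value) <= absolute_limit

print(percentage_ok, absolute_ok)  # True True
```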
## Dynamic Dataset Replacement
Replace dataset placeholders with actual release names:
### 1. Create Preprocessor Queries File
Create `preprocessor_queries.yml`:
```yaml
get_releases: |
SELECT source, current_release, previous_release
FROM release_metadata
WHERE is_active = TRUE
get_bcbsa_releases: |
SELECT 'bcbsa' as source, 'bcbsa_export1' as current_release, 'bcbsa_export3' as previous_release
```
### 2. Update Configuration
Add to `config.yml`:
```yaml
other:
validation_script: test_suite.yml
preprocessor_queries: preprocessor_queries.yml
```
### 3. Update Test Suite
Use placeholders in queries and specify the preprocessor key:
```yaml
- test_current_release:
source:
query: |
SELECT COUNT(*) as value FROM BCBSA_CURR_WEEK.users
config_query_key: get_bcbsa_releases
source_name: bcbsa
```
The framework will:
1. Execute `get_bcbsa_releases` query
2. Get current_release value (`bcbsa_export1`)
3. Replace `BCBSA_CURR_WEEK` → `bcbsa_export1`
4. Run the modified query
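Conceptually, the replacement amounts to a string substitution driven by the preprocessor query result. The snippet below is a hypothetical illustration of that idea, not the framework's actual internals (placeholder names follow the convention used above):
```python
# Row returned by the preprocessor query (get_bcbsa_releases).
release_row = {
    "source": "bcbsa",
    "current_release": "bcbsa_export1",
    "previous_release": "bcbsa_export3",
}

# Replace the dataset placeholders before the query is executed.
query = "SELECT COUNT(*) as value FROM BCBSA_CURR_WEEK.users"
query = query.replace("BCBSA_CURR_WEEK", release_row["current_release"])
query = query.replace("BCBSA_PREV_WEEK", release_row["previous_release"])

print(query)  # SELECT COUNT(*) as value FROM bcbsa_export1.users
```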
See [PREPROCESSOR.md](PREPROCESSOR.md) for detailed examples.
## Report Generation
### ExecutionReport.html
Full test execution report with:
- Test results (PASS/FAIL)
- Source and target values
- Execution timestamps
- Query execution times
- Comparison methods
### FailedExecutionReport.html
Summary of failed tests or confirmation of all tests passing
### ExecutionReport.csv
Structured test results for import into analysis tools:
- Test name
- Status
- Severity
- Source/Target values
- Execution time
### AutomationData.csv
CI/CD integration data:
- App name
- Branch
- Platform
- Owner
- Test report path
## Environment Variables
Configure the framework behavior using environment variables:
```bash
# Output directory for reports (default: ./output)
export DATAQE_OUTPUT_DIR=/path/to/output
# CI/CD metadata (used in AutomationData.csv)
export DATAQE_APP_NAME=my-app
export DATAQE_BRANCH=main
export DATAQE_PLATFORM=kubernetes
export DATAQE_OWNER=team-name
```
## Command Line Usage
### Basic Execution
```bash
dataqe-run --config /path/to/config.yml
```
### With Custom Output Directory
```bash
export DATAQE_OUTPUT_DIR=/custom/output
dataqe-run --config /path/to/config.yml
```
### CI/CD Integration
```bash
export DATAQE_APP_NAME=ecommerce-platform
export DATAQE_BRANCH=feature-branch
export DATAQE_PLATFORM=kubernetes
export DATAQE_OWNER=data-team
dataqe-run --config /path/to/config.yml
```
## Project Structure
```
dataqe-framework/
├── src/dataqe_framework/
│ ├── __init__.py
│ ├── cli.py # Command-line interface
│ ├── config_loader.py # YAML config loading
│ ├── executor.py # Test execution engine
│ ├── preprocessor.py # Query preprocessing
│ ├── reporter.py # Report generation
│ ├── comparison/
│ │ ├── comparator.py # Comparison logic
│ │ └── threshold.py # Threshold calculations
│ └── connectors/
│ ├── base_connector.py # Base connector interface
│ ├── mysql_connector.py # MySQL implementation
│ └── bigquery_connector.py # BigQuery implementation
├── example_preprocessor_config.yml
├── example_preprocessor_queries.yml
├── example_preprocessor_test_script.yml
├── README.md
├── CONFIGURATION.md
├── PREPROCESSOR.md
└── pyproject.toml
```
## Examples
### Example 1: Simple Row Count Validation
Test if row counts match between MySQL and BigQuery:
```yaml
- users_row_count:
severity: critical
source:
query: SELECT COUNT(*) as value FROM users
target:
query: SELECT COUNT(*) as value FROM users
comparisons:
comment: "User count must match exactly"
```
### Example 2: Multi-Release Dataset Validation
Validate current and previous release datasets:
```yaml
- current_release_sales:
severity: high
source:
query: |
SELECT SUM(amount) as value FROM BCBSA_CURR_WEEK.sales
config_query_key: get_bcbsa_releases
source_name: bcbsa
- previous_release_sales:
severity: medium
source:
query: |
SELECT SUM(amount) as value FROM BCBSA_PREV_WEEK.sales
config_query_key: get_bcbsa_releases
source_name: bcbsa
```
### Example 3: Threshold-Based Comparison
Allow data variations within acceptable ranges:
```yaml
- transaction_amounts:
severity: high
source:
query: SELECT SUM(amount) as value FROM transactions
target:
query: SELECT SUM(amount) as value FROM transactions
comparisons:
threshold:
value: percentage
limit: 2
comment: "Amounts must match within 2%"
```
## Troubleshooting
### Connection Issues
**MySQL Connection Refused**
```bash
# Check connectivity
mysql -h <host> -u <user> -p<password> <database>
# Verify in config.yml:
# - host is correct
# - port is 3306 (or custom port)
# - user/password are correct
```
**BigQuery Authentication Failed**
```bash
# Verify credentials file
gcloud auth application-default print-access-token
# Check in config.yml:
# - credentials_path points to valid service account JSON
# - credentials file has BigQuery permissions
```
### Query Execution Issues
**Query Timeout**
- Increase database timeout settings
- Optimize query performance
- Check database load
**Dataset Not Found**
- For preprocessor queries: verify `config_query_key` matches a key in `preprocessor_queries.yml`
- For dynamic replacement: verify placeholder format matches expected convention
### Report Generation Issues
**Output directory not writable**
```bash
chmod -R 755 ./output
```
**No output files generated**
- Check logs for errors
- Verify `DATAQE_OUTPUT_DIR` has write permissions
- Ensure test suite has valid queries
## Performance Considerations
- **Large result sets**: Memory usage scales with query result size
- **Many tests**: Execution time is cumulative
- **Database load**: Run during off-peak hours for production databases
- **Network latency**: BigQuery queries may take longer than MySQL
## Security
### Sensitive Data Handling
- Never commit credentials files
- Use environment variables for secrets
- Enable KMS encryption for PHI data in BigQuery
### Best Practices
- Use dedicated read-only database accounts
- Limit query timeout duration
- Monitor execution logs for suspicious patterns
- Review generated reports for sensitive data exposure
## Contributing
For bug reports and feature requests, please open an issue on the repository.
## Installation via pip
### From PyPI (Coming Soon)
```bash
pip install dataqe-framework
```
### From GitHub
```bash
pip install git+https://github.com/ShaikKhadarmohiddin/dataqe-framework.git
```
### From Source
```bash
git clone https://github.com/ShaikKhadarmohiddin/dataqe-framework.git
cd dataqe-framework
pip install -e .
```
## Author
**Khadar Shaik**
- Email: khadarmohiddin.shaik@apree.health
- GitHub: [@ShaikKhadarmohiddin](https://github.com/ShaikKhadarmohiddin)
## License
This project is licensed under the MIT License - see the [LICENSE](LICENSE) file for details.
MIT License - You are free to use this project for personal, educational, or commercial purposes.
## Support
For support and questions:
- Check documentation in the project repository
- Open an issue on [GitHub Issues](https://github.com/ShaikKhadarmohiddin/dataqe-framework/issues)
- Review troubleshooting section in [GETTING_STARTED.md](GETTING_STARTED.md)
- Consult test output and logs for error details
## Version History
### 0.0.1 (Initial Release)
- Multi-database support (MySQL, BigQuery)
- YAML-based test configuration
- Flexible comparison modes
- Dynamic dataset replacement
- Comprehensive reporting
- PHI data protection
- CI/CD integration support
| text/markdown | null | Khadar Shaik <khadarmohiddin.shaik@apree.health> | null | null | null | data-validation, data-quality, testing, ETL, migration, mysql, bigquery | [
"Development Status :: 5 - Production/Stable",
"Intended Audience :: Developers",
"Intended Audience :: System Administrators",
"License :: OSI Approved :: MIT License",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.9",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: 3.13",
"Topic :: Software Development :: Libraries :: Python Modules",
"Topic :: Database",
"Topic :: Software Development :: Testing"
] | [] | null | null | >=3.9 | [] | [] | [] | [
"google-cloud-bigquery>=3.0.0",
"pymysql>=1.0.0",
"pyyaml>=5.4",
"pandas>=1.3.0"
] | [] | [] | [] | [
"Homepage, https://github.com/ShaikKhadarmohiddin/dataqe-framework",
"Documentation, https://github.com/ShaikKhadarmohiddin/dataqe-framework#readme",
"Repository, https://github.com/ShaikKhadarmohiddin/dataqe-framework.git",
"Issues, https://github.com/ShaikKhadarmohiddin/dataqe-framework/issues"
] | twine/6.2.0 CPython/3.11.7 | 2026-02-20T11:23:02.163862 | dataqe_framework-0.2.1.tar.gz | 26,077 | 7b/cc/2d9f87af1bc32fcfe2a372ca9b69e12f0e521bcb4157466ccbab156703cd/dataqe_framework-0.2.1.tar.gz | source | sdist | null | false | a5bbf6e971964d2785e649afd2e9bb41 | c843e5a992f9845bfa8f4395df2b445432eff124b92549043dba32de7137f253 | 7bcc2d9f87af1bc32fcfe2a372ca9b69e12f0e521bcb4157466ccbab156703cd | null | [] | 218 |
2.4 | apify | 3.2.2b3 | Apify SDK for Python | <h1 align=center>Apify SDK for Python</h1>
<p align="center">
<a href="https://badge.fury.io/py/apify" rel="nofollow"><img src="https://badge.fury.io/py/apify.svg" alt="PyPI package version"></a>
<a href="https://pypi.org/project/apify/" rel="nofollow"><img src="https://img.shields.io/pypi/dm/apify" alt="PyPI package downloads"></a>
<a href="https://codecov.io/gh/apify/apify-sdk-python"><img src="https://codecov.io/gh/apify/apify-sdk-python/graph/badge.svg?token=Y6JBIZQFT6" alt="Codecov report"></a>
<a href="https://pypi.org/project/apify/" rel="nofollow"><img src="https://img.shields.io/pypi/pyversions/apify" alt="PyPI Python version"></a>
<a href="https://discord.gg/jyEM2PRvMU" rel="nofollow"><img src="https://img.shields.io/discord/801163717915574323?label=discord" alt="Chat on Discord"></a>
</p>
The Apify SDK for Python is the official library to create [Apify Actors](https://docs.apify.com/platform/actors)
in Python. It provides useful features like Actor lifecycle management, local storage emulation, and Actor
event handling.
If you just need to access the [Apify API](https://docs.apify.com/api/v2) from your Python applications,
check out the [Apify Client for Python](https://docs.apify.com/api/client/python) instead.
## Installation
The Apify SDK for Python is available on PyPI as the `apify` package.
For default installation, using Pip, run the following:
```bash
pip install apify
```
For users interested in integrating Apify with Scrapy, we provide a package extra called `scrapy`.
To install Apify with the `scrapy` extra, use the following command:
```bash
pip install apify[scrapy]
```
## Documentation
For usage instructions, check the documentation on [Apify Docs](https://docs.apify.com/sdk/python/).
## Examples
Below are a few examples demonstrating how to use the Apify SDK with some web scraping-related libraries.
### Apify SDK with HTTPX and BeautifulSoup
This example illustrates how to integrate the Apify SDK with [HTTPX](https://www.python-httpx.org/) and [BeautifulSoup](https://pypi.org/project/beautifulsoup4/) to scrape data from web pages.
```python
from bs4 import BeautifulSoup
from httpx import AsyncClient
from apify import Actor
async def main() -> None:
async with Actor:
# Retrieve the Actor input, and use default values if not provided.
actor_input = await Actor.get_input() or {}
start_urls = actor_input.get('start_urls', [{'url': 'https://apify.com'}])
# Open the default request queue for handling URLs to be processed.
request_queue = await Actor.open_request_queue()
# Enqueue the start URLs.
for start_url in start_urls:
url = start_url.get('url')
await request_queue.add_request(url)
# Process the URLs from the request queue.
while request := await request_queue.fetch_next_request():
Actor.log.info(f'Scraping {request.url} ...')
# Fetch the HTTP response from the specified URL using HTTPX.
async with AsyncClient() as client:
response = await client.get(request.url)
# Parse the HTML content using Beautiful Soup.
soup = BeautifulSoup(response.content, 'html.parser')
# Extract the desired data.
data = {
'url': request.url,
'title': soup.title.string,
'h1s': [h1.text for h1 in soup.find_all('h1')],
'h2s': [h2.text for h2 in soup.find_all('h2')],
'h3s': [h3.text for h3 in soup.find_all('h3')],
}
# Store the extracted data to the default dataset.
await Actor.push_data(data)
```
### Apify SDK with PlaywrightCrawler from Crawlee
This example demonstrates how to use the Apify SDK alongside `PlaywrightCrawler` from [Crawlee](https://crawlee.dev/python) to perform web scraping.
```python
from crawlee.crawlers import PlaywrightCrawler, PlaywrightCrawlingContext
from apify import Actor
async def main() -> None:
async with Actor:
# Retrieve the Actor input, and use default values if not provided.
actor_input = await Actor.get_input() or {}
start_urls = [url.get('url') for url in actor_input.get('start_urls', [{'url': 'https://apify.com'}])]
# Exit if no start URLs are provided.
if not start_urls:
Actor.log.info('No start URLs specified in Actor input, exiting...')
await Actor.exit()
# Create a crawler.
crawler = PlaywrightCrawler(
# Limit the crawl to max requests. Remove or increase it for crawling all links.
max_requests_per_crawl=50,
headless=True,
)
# Define a request handler, which will be called for every request.
@crawler.router.default_handler
async def request_handler(context: PlaywrightCrawlingContext) -> None:
url = context.request.url
Actor.log.info(f'Scraping {url}...')
# Extract the desired data.
data = {
'url': context.request.url,
'title': await context.page.title(),
'h1s': [await h1.text_content() for h1 in await context.page.locator('h1').all()],
'h2s': [await h2.text_content() for h2 in await context.page.locator('h2').all()],
'h3s': [await h3.text_content() for h3 in await context.page.locator('h3').all()],
}
# Store the extracted data to the default dataset.
await context.push_data(data)
# Enqueue additional links found on the current page.
await context.enqueue_links()
# Run the crawler with the starting URLs.
await crawler.run(start_urls)
```
## What are Actors?
Actors are serverless cloud programs that can do almost anything a human can do in a web browser.
They can do anything from small tasks such as filling in forms or unsubscribing from online services,
all the way up to scraping and processing vast numbers of web pages.
They can be run either locally, or on the [Apify platform](https://docs.apify.com/platform/),
where you can run them at scale, monitor them, schedule them, or publish and monetize them.
If you're new to Apify, learn [what is Apify](https://docs.apify.com/platform/about)
in the Apify platform documentation.
## Creating Actors
To create and run Actors through Apify Console,
see the [Console documentation](https://docs.apify.com/academy/getting-started/creating-actors#choose-your-template).
To create and run Python Actors locally, check the documentation for
[how to create and run Python Actors locally](https://docs.apify.com/sdk/python/docs/quick-start).
## Guides
To see how you can use the Apify SDK with other popular libraries used for web scraping,
check out our guides for using
[Requests and HTTPX](https://docs.apify.com/sdk/python/docs/guides/requests-and-httpx),
[Beautiful Soup](https://docs.apify.com/sdk/python/docs/guides/beautiful-soup),
[Playwright](https://docs.apify.com/sdk/python/docs/guides/playwright),
[Selenium](https://docs.apify.com/sdk/python/docs/guides/selenium),
or [Scrapy](https://docs.apify.com/sdk/python/docs/guides/scrapy).
## Usage concepts
To learn more about the features of the Apify SDK and how to use them,
check out the Usage Concepts section in the sidebar,
particularly the guides for the [Actor lifecycle](https://docs.apify.com/sdk/python/docs/concepts/actor-lifecycle),
[working with storages](https://docs.apify.com/sdk/python/docs/concepts/storages),
[handling Actor events](https://docs.apify.com/sdk/python/docs/concepts/actor-events)
or [how to use proxies](https://docs.apify.com/sdk/python/docs/concepts/proxy-management).
| text/markdown | null | "Apify Technologies s.r.o." <support@apify.com> | null | null | Apache License
Version 2.0, January 2004
http://www.apache.org/licenses/
TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
1. Definitions.
"License" shall mean the terms and conditions for use, reproduction,
and distribution as defined by Sections 1 through 9 of this document.
"Licensor" shall mean the copyright owner or entity authorized by
the copyright owner that is granting the License.
"Legal Entity" shall mean the union of the acting entity and all
other entities that control, are controlled by, or are under common
control with that entity. For the purposes of this definition,
"control" means (i) the power, direct or indirect, to cause the
direction or management of such entity, whether by contract or
otherwise, or (ii) ownership of fifty percent (50%) or more of the
outstanding shares, or (iii) beneficial ownership of such entity.
"You" (or "Your") shall mean an individual or Legal Entity
exercising permissions granted by this License.
"Source" form shall mean the preferred form for making modifications,
including but not limited to software source code, documentation
source, and configuration files.
"Object" form shall mean any form resulting from mechanical
transformation or translation of a Source form, including but
not limited to compiled object code, generated documentation,
and conversions to other media types.
"Work" shall mean the work of authorship, whether in Source or
Object form, made available under the License, as indicated by a
copyright notice that is included in or attached to the work
(an example is provided in the Appendix below).
"Derivative Works" shall mean any work, whether in Source or Object
form, that is based on (or derived from) the Work and for which the
editorial revisions, annotations, elaborations, or other modifications
represent, as a whole, an original work of authorship. For the purposes
of this License, Derivative Works shall not include works that remain
separable from, or merely link (or bind by name) to the interfaces of,
the Work and Derivative Works thereof.
"Contribution" shall mean any work of authorship, including
the original version of the Work and any modifications or additions
to that Work or Derivative Works thereof, that is intentionally
submitted to Licensor for inclusion in the Work by the copyright owner
or by an individual or Legal Entity authorized to submit on behalf of
the copyright owner. For the purposes of this definition, "submitted"
means any form of electronic, verbal, or written communication sent
to the Licensor or its representatives, including but not limited to
communication on electronic mailing lists, source code control systems,
and issue tracking systems that are managed by, or on behalf of, the
Licensor for the purpose of discussing and improving the Work, but
excluding communication that is conspicuously marked or otherwise
designated in writing by the copyright owner as "Not a Contribution."
"Contributor" shall mean Licensor and any individual or Legal Entity
on behalf of whom a Contribution has been received by Licensor and
subsequently incorporated within the Work.
2. Grant of Copyright License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
copyright license to reproduce, prepare Derivative Works of,
publicly display, publicly perform, sublicense, and distribute the
Work and such Derivative Works in Source or Object form.
3. Grant of Patent License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
(except as stated in this section) patent license to make, have made,
use, offer to sell, sell, import, and otherwise transfer the Work,
where such license applies only to those patent claims licensable
by such Contributor that are necessarily infringed by their
Contribution(s) alone or by combination of their Contribution(s)
with the Work to which such Contribution(s) was submitted. If You
institute patent litigation against any entity (including a
cross-claim or counterclaim in a lawsuit) alleging that the Work
or a Contribution incorporated within the Work constitutes direct
or contributory patent infringement, then any patent licenses
granted to You under this License for that Work shall terminate
as of the date such litigation is filed.
4. Redistribution. You may reproduce and distribute copies of the
Work or Derivative Works thereof in any medium, with or without
modifications, and in Source or Object form, provided that You
meet the following conditions:
(a) You must give any other recipients of the Work or
Derivative Works a copy of this License; and
(b) You must cause any modified files to carry prominent notices
stating that You changed the files; and
(c) You must retain, in the Source form of any Derivative Works
that You distribute, all copyright, patent, trademark, and
attribution notices from the Source form of the Work,
excluding those notices that do not pertain to any part of
the Derivative Works; and
(d) If the Work includes a "NOTICE" text file as part of its
distribution, then any Derivative Works that You distribute must
include a readable copy of the attribution notices contained
within such NOTICE file, excluding those notices that do not
pertain to any part of the Derivative Works, in at least one
of the following places: within a NOTICE text file distributed
as part of the Derivative Works; within the Source form or
documentation, if provided along with the Derivative Works; or,
within a display generated by the Derivative Works, if and
wherever such third-party notices normally appear. The contents
of the NOTICE file are for informational purposes only and
do not modify the License. You may add Your own attribution
notices within Derivative Works that You distribute, alongside
or as an addendum to the NOTICE text from the Work, provided
that such additional attribution notices cannot be construed
as modifying the License.
You may add Your own copyright statement to Your modifications and
may provide additional or different license terms and conditions
for use, reproduction, or distribution of Your modifications, or
for any such Derivative Works as a whole, provided Your use,
reproduction, and distribution of the Work otherwise complies with
the conditions stated in this License.
5. Submission of Contributions. Unless You explicitly state otherwise,
any Contribution intentionally submitted for inclusion in the Work
by You to the Licensor shall be under the terms and conditions of
this License, without any additional terms or conditions.
Notwithstanding the above, nothing herein shall supersede or modify
the terms of any separate license agreement you may have executed
with Licensor regarding such Contributions.
6. Trademarks. This License does not grant permission to use the trade
names, trademarks, service marks, or product names of the Licensor,
except as required for reasonable and customary use in describing the
origin of the Work and reproducing the content of the NOTICE file.
7. Disclaimer of Warranty. Unless required by applicable law or
agreed to in writing, Licensor provides the Work (and each
Contributor provides its Contributions) on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
implied, including, without limitation, any warranties or conditions
of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
PARTICULAR PURPOSE. You are solely responsible for determining the
appropriateness of using or redistributing the Work and assume any
risks associated with Your exercise of permissions under this License.
8. Limitation of Liability. In no event and under no legal theory,
whether in tort (including negligence), contract, or otherwise,
unless required by applicable law (such as deliberate and grossly
negligent acts) or agreed to in writing, shall any Contributor be
liable to You for damages, including any direct, indirect, special,
incidental, or consequential damages of any character arising as a
result of this License or out of the use or inability to use the
Work (including but not limited to damages for loss of goodwill,
work stoppage, computer failure or malfunction, or any and all
other commercial damages or losses), even if such Contributor
has been advised of the possibility of such damages.
9. Accepting Warranty or Additional Liability. While redistributing
the Work or Derivative Works thereof, You may choose to offer,
and charge a fee for, acceptance of support, warranty, indemnity,
or other liability obligations and/or rights consistent with this
License. However, in accepting such obligations, You may act only
on Your own behalf and on Your sole responsibility, not on behalf
of any other Contributor, and only if You agree to indemnify,
defend, and hold each Contributor harmless for any liability
incurred by, or claims asserted against, such Contributor by reason
of your accepting any such warranty or additional liability.
END OF TERMS AND CONDITIONS
APPENDIX: How to apply the Apache License to your work.
To apply the Apache License to your work, attach the following
boilerplate notice, with the fields enclosed by brackets "{}"
replaced with your own identifying information. (Don't include
the brackets!) The text should be enclosed in the appropriate
comment syntax for the file format. We also recommend that a
file or class name and description of purpose be included on the
same "printed page" as the copyright notice for easier
identification within third-party archives.
Copyright 2023 Apify Technologies s.r.o.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License. | apify, automation, chrome, crawlee, crawler, headless, scraper, scraping, sdk | [
"Development Status :: 5 - Production/Stable",
"Environment :: Console",
"Intended Audience :: Developers",
"License :: OSI Approved :: Apache Software License",
"Operating System :: OS Independent",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: 3.13",
"Programming Language :: Python :: 3.14",
"Topic :: Software Development :: Libraries"
] | [] | null | null | >=3.10 | [] | [] | [] | [
"apify-client<3.0.0,>=2.3.0",
"apify-shared<3.0.0,>=2.0.0",
"cachetools>=5.5.0",
"crawlee<2.0.0,>=1.0.4",
"cryptography>=42.0.0",
"impit>=0.8.0",
"lazy-object-proxy>=1.11.0",
"more-itertools>=10.2.0",
"pydantic>=2.11.0",
"typing-extensions>=4.1.0",
"websockets>=14.0",
"yarl>=1.18.0",
"scrapy>=2.11.0; extra == \"scrapy\""
] | [] | [] | [] | [
"Apify Homepage, https://apify.com",
"Changelog, https://docs.apify.com/sdk/python/docs/changelog",
"Discord, https://discord.com/invite/jyEM2PRvMU",
"Documentation, https://docs.apify.com/sdk/python/docs/overview",
"Homepage, https://docs.apify.com/sdk/python/",
"Issue Tracker, https://github.com/apify/apify-sdk-python/issues",
"Release Notes, https://docs.apify.com/sdk/python/docs/upgrading/upgrading-to-v2",
"Source Code, https://github.com/apify/apify-sdk-python"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-20T11:22:58.990188 | apify-3.2.2b3.tar.gz | 7,341,669 | 36/ae/a252a565244aca145a11bcfeb465888cc6d57f17ef91378fd4bfbef77e51/apify-3.2.2b3.tar.gz | source | sdist | null | false | a4b067376be7684b92f182e43a96461b | e08e98b91f6812cbd88336a63f0a4732a3e648a3c0c0f751db0a5c202426922b | 36aea252a565244aca145a11bcfeb465888cc6d57f17ef91378fd4bfbef77e51 | null | [
"LICENSE"
] | 194 |
2.4 | aceteam-aep | 0.1.0 | AEP-native execution layer for AI agents - spans, costs, budget enforcement | # aceteam-aep
AEP-native execution layer for AI agents. Replaces LangChain with direct provider SDKs while adding AEP protocol compliance (spans, costs, budget enforcement).
## Installation
```bash
pip install aceteam-aep
# Or with all providers:
pip install aceteam-aep[all]
```
## Quick Start
```python
from aceteam_aep import create_client, run_agent_loop, ChatMessage, tool
# Create a client
client = create_client("gpt-4o", api_key="sk-...")
# Define tools
@tool
def calculator(expression: str) -> str:
"""Evaluate a math expression."""
return str(eval(expression))
# Run agent loop
result = await run_agent_loop(
client,
[ChatMessage(role="user", content="What is 2+2?")],
tools=[calculator],
system_prompt="You are a helpful assistant.",
)
```
## AEP Compliance
Every execution through `run_agent_loop` can produce AEP-compliant output:
```python
from aceteam_aep import SpanTracker, CostTracker, BudgetEnforcer
tracker = SpanTracker()
costs = CostTracker(entity="org:my-org")
budget = BudgetEnforcer(total="10.00")
result = await run_agent_loop(
client, messages,
span_tracker=tracker,
cost_tracker=costs,
budget=budget,
)
# Access AEP data
print(tracker.get_spans()) # Execution trace
print(costs.get_cost_tree()) # Hierarchical costs
print(budget.state.remaining()) # Budget remaining
```
## Streaming
```python
from aceteam_aep import run_agent_loop_stream
async for event in run_agent_loop_stream(client, messages, tools=tools):
if event.type == "chunk":
print(event.data["text"], end="")
elif event.type == "tool_call_start":
print(f"\nCalling {event.data['name']}...")
elif event.type == "cost":
print(f"\nCost: ${event.data['compute_cost']}")
```
## Providers
- **OpenAI** (GPT-4o, o1, o3, etc.)
- **Anthropic** (Claude Opus, Sonnet, Haiku)
- **Google** (Gemini 2.5, 3.0)
- **xAI** (Grok)
- **Ollama** (local models)
- **OpenAI-compatible** (SambaNova, TheAgentic, DeepSeek)
| text/markdown | null | AceTeam AI <contact@aceteam.ai> | null | null | null | aep, agents, ai, cost-tracking, llm | [
"Development Status :: 3 - Alpha",
"Intended Audience :: Developers",
"License :: OSI Approved :: Apache Software License",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: 3.13",
"Topic :: Scientific/Engineering :: Artificial Intelligence"
] | [] | null | null | >=3.12 | [] | [] | [] | [
"anthropic<1.0.0,>=0.45.0",
"google-genai<2.0.0,>=1.0.0",
"httpx>=0.28.0",
"openai<2.0.0,>=1.65.0",
"pydantic>=2.11.0",
"ollama<1.0.0,>=0.4.0; extra == \"all\"",
"openai<2.0.0,>=1.65.0; extra == \"all\"",
"ollama<1.0.0,>=0.4.0; extra == \"dev\"",
"openai<2.0.0,>=1.65.0; extra == \"dev\"",
"pyright>=1.1; extra == \"dev\"",
"pytest-asyncio>=0.24; extra == \"dev\"",
"pytest>=8.0; extra == \"dev\"",
"ruff>=0.11; extra == \"dev\"",
"ollama<1.0.0,>=0.4.0; extra == \"ollama\"",
"openai<2.0.0,>=1.65.0; extra == \"xai\""
] | [] | [] | [] | [
"Homepage, https://aceteam.ai",
"Repository, https://github.com/aceteam-ai/aceteam-aep"
] | uv/0.8.23 | 2026-02-20T11:22:57.279236 | aceteam_aep-0.1.0.tar.gz | 66,926 | 91/cc/aa62b3813fcc100f9fb2ff158baddf53741fea1a268705fabbd4652ad79f/aceteam_aep-0.1.0.tar.gz | source | sdist | null | false | f05e170ce28fd26430d7b3c98842fc09 | dad31803f64208b035127b8d7a2149a5b8052d92e29767d847e642500bab4810 | 91ccaa62b3813fcc100f9fb2ff158baddf53741fea1a268705fabbd4652ad79f | Apache-2.0 | [] | 256 |
2.4 | inference-models | 0.19.0 | The new inference engine for Computer Vision models | # 🚀 What is inference-models?
`inference-models` is the library to make predictions from computer vision models provided by Roboflow — designed to
be fast, reliable, and user-friendly. It offers:
- **Multi-Backend Support**: Run models with PyTorch, ONNX, TensorRT, or Hugging Face backends
- **Automatic Model Loading**: Smart model resolution and backend selection
- **Minimal Dependencies**: Composable extras system for installing only what you need
- **Behavior-Based Interfaces**: Models with similar behavior share consistent APIs; custom models can define their own
- **Full Roboflow Platform Support:** Run any model trained on [Roboflow](https://roboflow.com)
Visit our [documentation](https://inference-models.roboflow.com/) for more information.
# 🛣️ Roadmap
With release `0.19.0`, we have reached the first stable release of `inference-models` and fully integrated
the package into `inference` - our main inference package - making it a selectable backend for running predictions
from models.
We are still making changes to add new features and models. The API should already be fairly stable, but
problems may still occur. If you encounter any issues, please [report them](https://github.com/roboflow/inference/issues).
# 💻 Installation
**CPU installation:**
```bash
uv pip install inference-models
# or with pip
pip install inference-models
```
`inference-models` can be installed with CUDA and TensorRT support - see [Installation Guide](https://inference-models.roboflow.com/getting-started/installation/) for more options.
# 🏃➡️ Usage
## Pretrained Models
Load and run a pretrained model:
```python
import cv2
import supervision as sv
from inference_models import AutoModel
# Load pretrained model from Roboflow
model = AutoModel.from_pretrained("rfdetr-base")
# Run inference (works with numpy arrays or torch.Tensor)
image = cv2.imread("<path-to-your-image>")
predictions = model(image)
# Use with supervision
annotator = sv.BoxAnnotator()
annotated = annotator.annotate(image, predictions[0].to_supervision())
```
## Your Roboflow Models
Load and run models trained on the [Roboflow platform](https://roboflow.com):
```python
import cv2
import supervision as sv
from inference_models import AutoModel
# Load your custom model from Roboflow
model = AutoModel.from_pretrained(
"<your-project>/<version>",
api_key="<your-api-key>" # model access secured with API key
)
# Run inference (works with numpy arrays or torch.Tensor)
image = cv2.imread("<path-to-your-image>")
predictions = model(image)
# Use with supervision
annotator = sv.BoxAnnotator()
annotated = annotator.annotate(image, predictions[0].to_supervision())
```
# 🧠 Supported Model Architectures
- **RFDetr**
- **SAM models family**
- **Vision-Language Models** (Florence, PaliGemma, Qwen, SmolVLM, Moondream)
- **OCR** (DocTR, EasyOCR, TrOCR)
- **YOLO**
- and many more
For detailed model documentation, see [Supported Models](https://inference-models.roboflow.com/models/).
# 🔧 Run your local models
Load your own model implementations from a local directory - models with architectures **not** in the main `inference-models` package. This is especially valuable for **production deployment** of custom models.
```python
from inference_models import AutoModel
model = AutoModel.from_pretrained(
"/path/to/my_custom_model",
allow_local_code_packages=True
)
```
See [Load Models from Local Packages](https://inference-models.roboflow.com/how-to/local-packages/) for complete details on creating custom model packages.
# 📄 License
The `inference-models` package is licensed under Apache 2.0. Individual models may have different licenses - see the [Supported Models](https://inference-models.roboflow.com/models/) for details.
---
Ready to get started? Head to the [Quick Overview](https://inference-models.roboflow.com/getting-started/overview/) →
| text/markdown | null | null | null | null | null | null | [] | [] | null | null | <3.13,>=3.9 | [] | [] | [] | [
"numpy",
"torch<3.0.0,>=2.0.0",
"torchvision",
"opencv-python>=4.8.1.78",
"requests<3.0.0,>=2.32.0",
"supervision>=0.26.0",
"backoff~=2.2.0",
"transformers<5.0.0,>=4.56.0",
"timm<2.0.0,>=1.0.0",
"accelerate<2.0.0,>=1.0.0",
"einops<1.0.0,>=0.7.0",
"peft<0.18.0,>=0.11.1",
"num2words~=0.5.14",
"bitsandbytes<0.48.0,>=0.42.0; sys_platform != \"darwin\"",
"pyvips<3.0.0,>=2.2.3",
"rf-clip==1.1",
"python-doctr[torch]<=0.11.0,>=0.10.0",
"packaging>=24.0.0",
"rich<15.0.0,>=13.0.0",
"pydantic<3.0.0,>=2.0.0",
"filelock<4.0.0,>=3.12.0",
"rich<15.0.0,>=14.1.0",
"segmentation-models-pytorch<1.0.0,>=0.5.0",
"scikit-image<0.26.0,>=0.24.0",
"easyocr~=1.7.2",
"sentencepiece<0.3.0,>=0.2.0",
"rf_groundingdino==0.3.0",
"tldextract~=5.1.2",
"pybase64~=1.0.0",
"rf-segment-anything==1.0",
"rf-sam-2==1.0.3",
"torch<3.0.0,>=2.0.0; extra == \"torch-cpu\"",
"torchvision; extra == \"torch-cpu\"",
"torch<3.0.0,>=2.0.0; extra == \"torch-cu118\"",
"torchvision; extra == \"torch-cu118\"",
"pycuda<2026.0.0,>=2025.0.0; (platform_system != \"darwin\" and python_version >= \"3.10\") and extra == \"torch-cu118\"",
"torch<3.0.0,>=2.0.0; extra == \"torch-cu124\"",
"torchvision; extra == \"torch-cu124\"",
"pycuda<2026.0.0,>=2025.0.0; (platform_system != \"darwin\" and python_version >= \"3.10\") and extra == \"torch-cu124\"",
"torch<3.0.0,>=2.0.0; extra == \"torch-cu126\"",
"torchvision; extra == \"torch-cu126\"",
"pycuda<2026.0.0,>=2025.0.0; (platform_system != \"darwin\" and python_version >= \"3.10\") and extra == \"torch-cu126\"",
"torch<3.0.0,>=2.0.0; extra == \"torch-cu128\"",
"torchvision; extra == \"torch-cu128\"",
"pycuda<2026.0.0,>=2025.0.0; (platform_system != \"darwin\" and python_version >= \"3.10\") and extra == \"torch-cu128\"",
"numpy<2.0.0; extra == \"torch-jp6-cu126\"",
"torch<3.0.0,>=2.0.0; extra == \"torch-jp6-cu126\"",
"torchvision; extra == \"torch-jp6-cu126\"",
"pycuda<2026.0.0,>=2025.0.0; extra == \"torch-jp6-cu126\"",
"onnxruntime<1.23.0,>=1.15.1; extra == \"onnx-cpu\"",
"onnxruntime-gpu<1.23.0,>=1.15.1; platform_system != \"darwin\" and extra == \"onnx-cu118\"",
"pycuda<2026.0.0,>=2025.0.0; platform_system != \"darwin\" and extra == \"onnx-cu118\"",
"onnxruntime-gpu<1.23.0,>=1.17.0; platform_system != \"darwin\" and extra == \"onnx-cu12\"",
"pycuda<2026.0.0,>=2025.0.0; platform_system != \"darwin\" and extra == \"onnx-cu12\"",
"numpy<2.0.0; (platform_system == \"Linux\" and platform_machine == \"aarch64\" and python_version >= \"3.10\") and extra == \"onnx-jp6-cu126\"",
"onnxruntime-gpu<1.24.0,>=1.17.0; (platform_system == \"Linux\" and platform_machine == \"aarch64\" and python_version >= \"3.10\") and extra == \"onnx-jp6-cu126\"",
"pycuda<2026.0.0,>=2025.0.0; (platform_system == \"Linux\" and platform_machine == \"aarch64\" and python_version >= \"3.10\") and extra == \"onnx-jp6-cu126\"",
"rf-mediapipe<0.11.0,>=0.9; extra == \"mediapipe\"",
"tensorrt-cu12<11.0.0,>=10.0.0; (platform_system == \"Linux\" or platform_system == \"Windows\") and extra == \"trt10\"",
"tensorrt-lean-cu12<11.0.0,>=10.0.0; (platform_system == \"Linux\" or platform_system == \"Windows\") and extra == \"trt10\"",
"pycuda<2026.0.0,>=2025.0.0; (platform_system != \"darwin\" and python_version >= \"3.10\") and extra == \"trt10\"",
"pytest>=8.0.0; extra == \"test\"",
"pytest-timeout==2.4.0; extra == \"test\"",
"pytest-xdist>=3.0.0; extra == \"test\"",
"pytest-timeout; extra == \"test\"",
"requests-mock>=1.12.1; extra == \"test\"",
"mkdocs-material[imaging]>=9.5.5; extra == \"docs\"",
"mkdocstrings<0.31.0,>=0.25.2; extra == \"docs\"",
"mkdocstrings-python<2.0.0,>=1.10.9; extra == \"docs\"",
"mike>=2.0.0; extra == \"docs\"",
"mkdocs-jupyter>=0.24.3; extra == \"docs\"",
"mkdocs-git-committers-plugin-2>=2.4.1; (python_version >= \"3.9\" and python_version < \"4\") and extra == \"docs\"",
"mkdocs-git-revision-date-localized-plugin>=1.2.4; extra == \"docs\"",
"mkdocs-literate-nav~=0.6.2; extra == \"docs\"",
"mkdocs-gen-files~=0.6.0; extra == \"docs\""
] | [] | [] | [] | [] | uv/0.7.12 | 2026-02-20T11:22:07.482730 | inference_models-0.19.0.tar.gz | 1,640,364 | a3/78/2203d10e97d85f304d62e8e48819d506eb12f98cbbd8199de4329ab21d3f/inference_models-0.19.0.tar.gz | source | sdist | null | false | 66e4dff9e71b6d6330aa5910a3f136f6 | e2d6e5268e57fffa720f13a2726f1c9c5afd8a31cbcf1d2455af1e3f48f9e6f1 | a3782203d10e97d85f304d62e8e48819d506eb12f98cbbd8199de4329ab21d3f | null | [] | 2,219 |
2.4 | spec2sdk | 1.0.202602201122 | Generate Pydantic models and API client code from OpenAPI 3.1.x specifications | # Usage
## From command line
- Local specification `spec2sdk --schema-path path/to/api.yml --output-dir path/to/output-dir/`
- Remote specification `spec2sdk --schema-url https://example.com/path/to/api.yml --output-dir path/to/output-dir/`
## From the code
```python
from pathlib import Path
from spec2sdk.main import generate
# Local specification
generate(schema_url=Path("path/to/api.yml").absolute().as_uri(), output_dir=Path("path/to/output-dir/"))
# Remote specification
generate(schema_url="https://example.com/path/to/api.yml", output_dir=Path("path/to/output-dir/"))
```
# Open API specification requirements
## Operation ID
`operationId` must be specified for each endpoint to generate meaningful method names. It must be unique among all operations described in the API.
### Input
```yaml
paths:
/health:
get:
operationId: healthCheck
responses:
'200':
description: Successful response
```
### Output
```python
class APIClient:
def health_check(self) -> None:
...
```
## Inline schemas
Inline schemas should be annotated with a schema name in the `x-schema-name` field; the name must not overlap with existing schema names in the specification.
### Input
```yaml
paths:
/me:
get:
operationId: getMe
responses:
'200':
description: Successful response
content:
application/json:
schema:
x-schema-name: User
type: object
properties:
name:
type: string
email:
type: string
```
### Output
```python
class User(Model):
name: str | None = Field(default=None)
email: str | None = Field(default=None)
```
## Enum variable names
Variable names for enums can be specified by the `x-enum-varnames` field.
### Input
```yaml
components:
schemas:
Direction:
x-enum-varnames: [ NORTH, SOUTH, WEST, EAST ]
type: string
enum: [ N, S, W, E ]
```
### Output
```python
from enum import StrEnum
class Direction(StrEnum):
NORTH = "N"
SOUTH = "S"
WEST = "W"
EAST = "E"
```
# Custom types
Register Python converters and renderers to implement custom types.
## Input
```yaml
components:
schemas:
User:
type: object
properties:
name:
type: string
email:
type: string
format: email
```
```python
from pathlib import Path
from spec2sdk.openapi.entities import DataType, StringDataType
from spec2sdk.models.annotations import TypeAnnotation
from spec2sdk.models.converters import converters, make_type_class_name
from spec2sdk.models.entities import SimpleType
from spec2sdk.models.imports import Import
from spec2sdk.main import generate
class EmailType(SimpleType):
@property
def type_definition(self) -> TypeAnnotation:
return TypeAnnotation(
type_hint="EmailStr",
type_imports=(Import(name="EmailStr", package="pydantic"),),
constraints=(),
)
def is_email_format(data_type: DataType) -> bool:
return isinstance(data_type, StringDataType) and data_type.format == "email"
@converters.register(predicate=is_email_format)
def convert_email_field(data_type: StringDataType) -> EmailType:
return EmailType(name=make_type_class_name(data_type))
if __name__ == "__main__":
generate(schema_url=Path("api.yml").absolute().as_uri(), output_dir=Path("output"))
```
## Output
```python
from pydantic import EmailStr, Field
class User(Model):
name: str | None = Field(default=None)
email: EmailStr | None = Field(default=None)
```
# Using generated client
1. Create an HTTP client. It should conform to the `HTTPClientProtocol`, which can be found in the generated `http_client.py`. Below is an example of an HTTP client implemented with the `httpx` library to handle HTTP requests. Assume that `sdk` is the output directory for the generated code.
```python
from http import HTTPStatus
import httpx
from httpx._types import AuthTypes, TimeoutTypes
from sdk.http_client import HTTPRequest, HTTPResponse
class HTTPClient:
def __init__(self, *, base_url: str, auth: AuthTypes | None = None, timeout: TimeoutTypes | None = None, **kwargs):
self._http_client = httpx.Client(auth=auth, base_url=base_url, timeout=timeout, **kwargs)
def send_request(self, *, request: HTTPRequest) -> HTTPResponse:
response = self._http_client.request(
method=request.method,
url=request.url,
content=request.content,
headers=request.headers,
)
return HTTPResponse(
status_code=HTTPStatus(response.status_code),
content=response.content,
headers=response.headers.multi_items(),
)
```
2. Create an API client. It should conform to the `APIClientProtocol`, which can be found in the generated `api_client.py`. Below is an example of an API client.
```python
from http import HTTPMethod, HTTPStatus
from types import NoneType
from typing import Any, Mapping, Type
from urllib.parse import urlencode
from pydantic import TypeAdapter
from sdk.api_client import APIClientResponse, SerializedData
from sdk.http_client import HTTPClientProtocol, HTTPRequest
class APIClient:
def __init__(self, http_client: HTTPClientProtocol):
self._http_client = http_client
def serialize[T](self, *, data: T, data_type: Type[T], content_type: str | None) -> SerializedData:
match content_type:
case "application/json":
return SerializedData(
content=TypeAdapter(data_type).dump_json(data, by_alias=True),
content_type=content_type,
)
case _:
return SerializedData(content=data, content_type=content_type)
def deserialize[T](self, *, data: bytes | None, data_type: Type[T], content_type: str | None) -> T:
match content_type:
case "application/json":
return TypeAdapter(data_type).validate_json(data)
case _:
return data
def build_url(self, path: str, query: Mapping[str, Any] | None = None) -> str:
if query is None:
return path
return f"{path}?{urlencode(query, doseq=True)}"
def send_request[I, O](
self,
*,
method: HTTPMethod,
path: str,
query: Mapping[str, Any] | None = None,
content_type: str | None = None,
data: I | None = None,
data_type: Type[I] = NoneType,
accept: str | None = None,
response_type: Type[O] = NoneType,
expected_status_code: HTTPStatus = HTTPStatus.OK,
) -> APIClientResponse[O]:
serialized_data = self.serialize(data=data, data_type=data_type, content_type=content_type)
request = HTTPRequest(
method=method,
url=self.build_url(path, query),
headers=(("Content-Type", serialized_data.content_type),) if serialized_data.content_type else (),
content=serialized_data.content,
)
response = self._http_client.send_request(request=request)
if response.status_code != expected_status_code:
raise Exception(
f"Response has unexpected status code. Expected {expected_status_code}, got {response.status_code}."
)
if accept is not None and not any(
response_content_type := tuple(
value for key, value in response.headers if (key.lower() == "content-type") and (accept in value)
),
):
raise Exception(f"Response has unexpected content type. Expected {accept}, got {response_content_type}.")
return APIClientResponse(
http_response=response,
data=self.deserialize(data=response.content, data_type=response_type, content_type=accept),
)
```
3. Combine the clients to access the API.
```python
from sdk.api import API
api = API(
api_client=APIClient(
http_client=HTTPClient(
base_url="https://api.example.com",
auth=BasicAuth(username="user", password="pass"),
),
),
)
```
| text/markdown | moneymeets | service@moneymeets.com | null | null | null | openapi, pydantic, code-generator, openapi-codegen | [
"Development Status :: 3 - Alpha",
"Intended Audience :: Developers",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.12",
"Topic :: Software Development :: Libraries :: Python Modules",
"License :: OSI Approved :: MIT License"
] | [] | https://github.com/moneymeets/spec2sdk | null | >=3.14 | [] | [] | [] | [
"black",
"jinja2",
"openapi-spec-validator",
"pydantic<3,>=2",
"pyhumps",
"ruff"
] | [] | [] | [] | [
"Repository, https://github.com/moneymeets/spec2sdk"
] | poetry/2.2.1 CPython/3.12.3 Linux/6.11.0-1018-azure | 2026-02-20T11:22:02.179178 | spec2sdk-1.0.202602201122.tar.gz | 19,281 | 0e/c7/3626fac33c8afbf675111b7cd1f85e0cf5ea691f93e2ae1d3eed41a34639/spec2sdk-1.0.202602201122.tar.gz | source | sdist | null | false | fbe49b206b2f2ae288a919602edb0e51 | 1801b654b0d20c19e2ede7adfa438694ecf6fdd48a15ce7a85cd04a11303abcb | 0ec73626fac33c8afbf675111b7cd1f85e0cf5ea691f93e2ae1d3eed41a34639 | null | [] | 201 |
2.4 | pypm-cli | 0.0.6 | A zero-config CLI tool that automatically infers dependencies from your Python source code. | <div align="center">
<img src="https://raw.githubusercontent.com/Suriyakumardurai/pypm/main/assets/pypm.png" alt="pypm" width="100%" />
</div>
# pypm – Python Package Manager
[](https://pypi.org/project/pypm-cli/)
[](https://pypi.org/project/pypm-cli/)
[](https://pypi.org/project/pypm-cli/)
[](https://github.com/Suriyakumardurai/pypm/actions/workflows/ci.yml)
[](https://github.com/astral-sh/ruff)
pypm is a zero-config CLI tool that automatically infers dependencies from your Python source code.
## ⚡ Lightning-Fast Performance
pypm is optimized for speed and efficiency:
- **Sub-200ms Inference**: Scans and parses projects in milliseconds.
- **Overlapping Pipeline**: Directory scanning and file parsing run in parallel.
- **Smart Caching**: `mtime`-based import caching skips unchanged files.
- **Memory-Aware**: Dynamic worker scaling for systems with limited RAM (e.g., 4GB).
## 🐍 Supported Python Versions
| Version | Status |
|---------|--------|
| Python 3.5 – 3.7 | ✅ Compatible (**vermin** verified) |
| Python 3.8 – 3.14 | ✅ Fully supported (CI tested) |
## 🚀 Installation
Install from PyPI:
```bash
pip install pypm-cli
```
After installation, you can run:
```bash
pypm --help
```
## ⚡ Quick Start
### 1️⃣ Infer Dependencies
Scan the current directory and generate/update `pyproject.toml`:
```bash
pypm infer
```
### 2️⃣ Benchmarking speed
Measure precisely how fast pypm is on your project:
```bash
pypm infer --bench
```
### 3️⃣ Dry Run (Preview Only)
See what would be added without modifying files:
```bash
pypm infer --dry-run
```
### 4️⃣ Infer + Install Dependencies
Infer and install packages automatically:
```bash
pypm install --bench
```
> **Note:** If `uv` is available, it will be used for faster installs. Otherwise, it falls back to `pip`.
## ✨ Features
- **Blazing Fast**: Sub-200ms execution on typical projects using overlapping I/O pipelines and `mtime` caching.
- **Offline-First Mapping**: Uses a bundled database of 200+ popular packages to resolve dependencies instantly without network.
- **Smart Inference**: Recursively scans your project for `.py` and `.ipynb` files and extracts all imports.
- **Automatic Resolution**: Maps module names to actual PyPI packages (e.g., `PIL` → `Pillow`, `zmq` → `pyzmq`, `attr` → `attrs`).
- **Standard Library Detection**: Automatically ignores 150+ Python built-in and stdlib modules.
- **Try/Except Import Detection**: Handles `try: import ujson except: import json` patterns correctly.
- **Database DSN Detection**: Automatically detects database dependencies from connection strings.
- **Dynamic Import Detection**: Catches `importlib.import_module()` and `__import__()` calls.
- **Framework-Aware**: Adds extras for FastAPI, Django, Flask, Celery, SQLAlchemy, etc.
- **Modern Standards**: Generates PEP 621–compliant `pyproject.toml`.
- **Secure**: Validates all package names before shell execution, sanitizes PyPI URLs, hardens cache files.
## 🔒 Security
pypm 0.0.6 includes built-in protections:
- **Command injection prevention**: All package names are validated against PEP 508 and checked for shell metacharacters before being passed to `pip`/`uv` (see the sketch after this list).
- **URL sanitization**: Import names are validated before being used in PyPI API URLs to prevent path traversal.
- **Cache hardening**: Cache files use restrictive permissions (600 on Unix) and entries are validated on load.
- **Symlink protection**: Symlinked directories and files are skipped during scanning.
- **File size limits**: Files larger than 10MB are skipped to prevent resource exhaustion.
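For illustration only, the kind of name check described above might look roughly like the sketch below. This is not pypm's actual implementation: the regex is the project-name pattern from PEP 508, while the function names, the metacharacter set, and the error handling are assumptions made for this example.
```python
import re
import subprocess

# Project-name pattern from PEP 508 (case-insensitive).
_PEP508_NAME = re.compile(r"^([A-Z0-9]|[A-Z0-9][A-Z0-9._-]*[A-Z0-9])$", re.IGNORECASE)
# Characters that should never appear in a package name passed to a subprocess.
_SHELL_METACHARS = set("&|;$<>`\"'\\")

def is_safe_package_name(name: str) -> bool:
    """Accept only names that match PEP 508 and contain no shell metacharacters."""
    return bool(_PEP508_NAME.match(name)) and not (_SHELL_METACHARS & set(name))

def install(packages):
    unsafe = [p for p in packages if not is_safe_package_name(p)]
    if unsafe:
        raise ValueError(f"Refusing to install suspicious package names: {unsafe}")
    # Arguments are passed as a list (no shell=True), so names are never shell-interpreted.
    subprocess.run(["pip", "install", *packages], check=True)
```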
## 📌 Example Workflow
```bash
# Inside your Python project
pypm infer
# Review generated pyproject.toml
cat pyproject.toml
# Install dependencies
pypm install
```
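For a concrete picture of what the inference step does, consider a project containing a single (hypothetical) module like the one below. Based on the behaviour described above, `pypm infer` would map `PIL` to `Pillow` and `zmq` to `pyzmq`, treat `json` and `pathlib` as standard library and ignore them, and still pick up the optional import inside the `try`/`except` block. The file name and its contents are purely illustrative.
```python
# app.py -- hypothetical project file used only to illustrate inference
import json               # stdlib: ignored by pypm
from pathlib import Path  # stdlib: ignored

import requests           # third-party: resolved to "requests"
import zmq                # module name differs from package: resolved to "pyzmq"
from PIL import Image     # module name differs from package: resolved to "Pillow"

try:
    import ujson as fastjson  # optional dependency in try/except: still detected
except ImportError:
    import json as fastjson

def load_config(path: str) -> dict:
    return json.loads(Path(path).read_text())
```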
## 🧠 Why pypm?
Manually maintaining dependencies leads to:
- Duplicate effort
- Forgotten imports
- Mismatched environments
- Dirty `requirements.txt` files
**pypm makes your imports the single source of truth.**
## 📚 Documentation
See full documentation in: `docs/`
## 🔧 Development Setup
If you want to contribute or run locally:
```bash
git clone https://github.com/Suriyakumardurai/pypm.git
cd pypm
pip install -e .[dev]
```
## 📦 Project
Available on PyPI: https://pypi.org/project/pypm-cli/
| text/markdown | null | "D. Suriya Kumar" <suriyakumardurai.sk.in@gmail.com> | null | null | MIT | pip, dependencies, pyproject.toml, inference, generator | [
"Development Status :: 4 - Beta",
"Intended Audience :: Developers",
"License :: OSI Approved :: MIT License",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.5",
"Programming Language :: Python :: 3.6",
"Programming Language :: Python :: 3.7",
"Programming Language :: Python :: 3.8",
"Programming Language :: Python :: 3.9",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: 3.13",
"Programming Language :: Python :: 3.14",
"Topic :: Software Development :: Libraries :: Python Modules",
"Topic :: System :: Systems Administration"
] | [] | null | null | >=3.5 | [] | [] | [] | [
"importlib-metadata; python_version < \"3.8\"",
"rich; python_version >= \"3.8\"",
"pytest; extra == \"dev\"",
"pytest-cov; extra == \"dev\"",
"ruff; extra == \"dev\"",
"mypy; extra == \"dev\"",
"types-requests; extra == \"dev\"",
"vermin; extra == \"dev\""
] | [] | [] | [] | [
"Homepage, https://github.com/Suriyakumardurai/pypm",
"Repository, https://github.com/Suriyakumardurai/pypm"
] | twine/6.2.0 CPython/3.10.0 | 2026-02-20T11:21:37.132078 | pypm_cli-0.0.6.tar.gz | 27,399 | 84/1a/2976669a885f819bde69be4e893fa1093c34e097056688e97fa42a4fdc6a/pypm_cli-0.0.6.tar.gz | source | sdist | null | false | 31fdd4d0b8f02cbce173a76152e7ffb2 | 86f6e07c06b52e38ad6f602124a9106b9dabd98fdc758083fc2b0f7d7d1b416d | 841a2976669a885f819bde69be4e893fa1093c34e097056688e97fa42a4fdc6a | null | [
"LICENSE"
] | 209 |
2.4 | asyncio_for_robotics | 1.3.3 | Asyncio interface for ROS 2, Zenoh, and other robotic middlewares. | # Asyncio For Robotics
| Requirements | Compatibility | Tests |
|---|---|---|
| [](https://pypi.org/project/asyncio_for_robotics/)<br>[](https://opensource.org/license/mit) | [](https://github.com/ros2)<br>[](https://zenoh.io/) | [](https://github.com/2lian/asyncio-for-robotics/actions/workflows/python-pytest.yml)<br>[](https://github.com/2lian/asyncio-for-robotics/actions/workflows/ros-pytest.yml) |
The Asyncio For Robotics (`afor`) library makes `asyncio` usable with ROS 2, Zenoh and more, letting you write linear, testable, and non-blocking Python code.
- Better syntax.
- Only native python: Better docs and support.
- Simplifies testing.
*Will this make my code slower?* [No.](https://github.com/2lian/asyncio-for-robotics/tree/main/README.md#about-speed)
*Will this make my code faster?* No. However, `asyncio` will help YOU write
better, faster code.
> [!TIP]
> `asyncio_for_robotics` interfaces do not replace the underlying primary interfaces! They add capabilities, giving you more choices, not fewer.
## Install
### Barebone
Compatible with ROS 2 (`jazzy`, `humble` and newer) out of the box. This library is pure Python (>=3.10), so it installs easily.
```bash
pip install asyncio_for_robotics
```
### Along with Zenoh
```bash
pip install asyncio_for_robotics eclipse-zenoh
```
## Read more
- [Detailed ROS 2 tutorial](https://github.com/2lian/asyncio-for-robotics/blob/main/using_with_ros.md)
- [Detailed examples](https://github.com/2lian/asyncio-for-robotics/blob/main/asyncio_for_robotics/example)
- [no talking 🦍 show me code 🦍](https://github.com/2lian/asyncio-for-robotics/blob/main/asyncio_for_robotics/example/ros2_pubsub.py)
- [Cross-Platform deployment even with ROS](https://github.com/2lian/asyncio-for-robotics/blob/main/cross_platform.md) [](https://pixi.sh)
- [Usage for software testing](https://github.com/2lian/asyncio-for-robotics/blob/main/tests)
### Available interfaces:
- **Rate**: Every tick of a clock. (native)
- **TextIO**: `stdout` lines of a `Popen` process (and other `TextIO` files). (native)
- **ROS 2**: Subscriber, Service Client, Service Server.
- **Zenoh**: Subscriber.
- [Implement your own interface!](https://github.com/2lian/asyncio-for-robotics/blob/main/own_proto_example.md)
> [!TIP]
> An interface is not required for every operation. ROS 2 native publishers and
> nodes work just fine. Furthermore, advanced behavior can be composed of
> generic `afor` object (see [ROS2 Event Callback
> Example](./asyncio_for_robotics/example/ros2_event_callback.py)).
## Code sample
Syntax is identical between ROS 2, Zenoh, TextIO, Rate...
### Wait for messages one by one
Application:
- Get the latest sensor data
- Get clock value
- Wait for trigger
- Wait for next tick of the Rate
- Wait for system to be operational
```python
sub = afor.Sub(...)
# get the latest message
latest = await sub.wait_for_value()
# get a new message
new = await sub.wait_for_new()
# get the next message received
next = await sub.wait_for_next()
```
### Continuously listen to a data stream
Application:
- Process a whole data stream
- React to changes in sensor data
- Execute on every tick of the Rate
```python
# Continuously process the latest messages
async for msg in sub.listen():
status = foo(msg)
if status == DONE:
break
# Continuously process all incoming messages
async for msg in sub.listen_reliable():
status = foo(msg)
if status == DONE:
break
```
### Improved Services / Queryable for ROS 2
> [!NOTE]
> This is only for ROS 2.
Application:
- Clients request a reply from a server.
- Servers can delay their response without blocking (not possible in native ROS 2)
```python
# The server is once again an afor subscriber, but it generates responder objects
server = afor.Server(...)
# Process all requests.
# The listen_reliable method is recommended as it cannot skip requests
async for responder in server.listen_reliable():
    if responder.request == "PING!":
        responder.response = "PONG!"
        await asyncio.sleep(...)  # the reply can be deferred
        responder.send()
    else:
        ...  # a reply is not necessary
```
```python
# the client implements an async call method
client = afor.Client(...)
response = await client.call("PING!")
```
### Process for the right amount of time
Application:
- Test if the system is responding as expected
- Run small tasks with small and local code
```python
# Listen with a timeout
data = await afor.soft_wait_for(sub.wait_for_new(), timeout=1)
if isinstance(data, TimeoutError):
pytest.fail(f"Failed to get new data in under 1 second")
# Process a codeblock with a timeout
async with afor.soft_timeout(1):
sum = 0
total = 0
async for msg in sub.listen_reliable():
number = process(msg)
sum += number
total += 1
last_second_average = sum/total
assert last_second_average == pytest.approx(expected_average)
```
### Apply pre-processing to a data-stream
Application:
- Parse payload of different transport into a common type.
```python
# ROS2 String type afor subscriber
inner_sub: Sub[String] = afor.ros2.Sub(String, "topic_name")
# converted into a subscriber generating python `str`
ros2str_func = lambda msg: msg.data
sub: Sub[str] = afor.ConverterSub(sub=inner_sub, convert_func=ros2str_func)
```
## About Speed
The inevitable question: *“But isn’t this slower than the ROS 2 executor? ROS 2 is the best!”*
In short: `rclpy`'s executor is the bottleneck.
- Compared to the best ROS 2 Jazzy can do (`SingleThreadedExecutor`), `afor` increases latency from 110us to 150us.
- Compared to other execution methods, `afor` is equivalent if not faster.
- If you find it slow, you should use C++ or Zenoh (or contribute to this repo?).
Benchmark code is available in [`./tests/bench/`](https://github.com/2lian/asyncio-for-robotics/blob/main/tests/bench/). It consists of two pairs of pub/sub endpoints infinitely echoing a message (using a single node); the messaging rate thus measures the request-to-response latency.
| With `afor` | Transport | Executor | | Frequency (kHz) | Latency (ms) |
|:----------:|:----------|:----------------------------------|-|---------:|---------:|
| ✔️ | Zenoh | None | | **95** | **0.01** |
| ✔️ | ROS 2 | [Experimental Asyncio](https://github.com/ros2/rclpy/pull/1399) | | **17** | **0.06** |
| ❌ | ROS 2 | [Experimental Asyncio](https://github.com/ros2/rclpy/pull/1399) | | 13 | 0.08 |
| ❌ | ROS 2 | SingleThreaded | | 9 | 0.11 |
| ✔️ | ROS 2 | SingleThreaded | | **7** | **0.15** |
| ✔️ | ROS 2 | MultiThreaded | | **3** | **0.3** |
| ❌ | ROS 2 | MultiThreaded | | **3** | **0.3** |
| ✔️ | ROS 2 | [`ros_loop` Method](https://github.com/m2-farzan/ros2-asyncio) | | 3 | 0.3 |
Details:
- `uvloop` was used, replacing the default asyncio event loop (roughly doubles performance for Zenoh)
- RMW was set to `rmw_zenoh_cpp`
- ROS 2 benchmarks use `afor`'s `ros2.ThreadedSession` (the default in `afor`).
- Only the benchmark of the [`ros_loop` method](https://github.com/m2-farzan/ros2-asyncio) uses `afor`'s second type of session: `ros2.SynchronousSession`.
- ROS 2 executors can easily be changed in `afor` when creating a session.
- The experimental `AsyncioExecutor` PR on ROS 2 rolling by nadavelkabets is incredible [https://github.com/ros2/rclpy/pull/1399](https://github.com/ros2/rclpy/pull/1399). Maybe I will add proper support for it (but only a few will want to use an unmerged experimental PR of ROS 2 rolling).
- If there is interest in those benchmarks I will improve them, so others can run them all easily.
Analysis:
- Zenoh is extremely fast, proving that `afor` is not the bottleneck.
- It is interesting that this `AsyncioExecutor` performs better when using `afor`, because `afor` does not bypass any code.
- I think this is due to `AsyncioExecutor` having some overhead that affects its own callback.
- Without `afor` the ROS 2 callback executes some code and publishes.
- With `afor` the ROS 2 callback returns immediately, and fully delegates execution to `asyncio`.
- The increase in latency with the `SingleThreaded` executor shows that getting data in and out of the `rclpy` executor and its thread is the main bottleneck.
- `AsyncioExecutor` does not have such a thread, so it can communicate directly.
- Zenoh has its own thread; however, it is built exclusively for multi-threaded operation, without any executor, and thus achieves far superior performance.
- `MultiThreadedExecutor` is just famously slow.
- Very surprisingly, the well known `ros_loop` method detailed here [https://github.com/m2-farzan/ros2-asyncio](https://github.com/m2-farzan/ros2-asyncio) is slow.
| text/markdown | null | Elian NEPPEL <elian.dev@posteo.com> | null | null | null | asyncio, robotics, ros2, zenoh, publisher, subscriber | [
"Development Status :: 4 - Beta",
"Intended Audience :: Science/Research",
"Intended Audience :: Developers",
"Natural Language :: English",
"Environment :: Console",
"Programming Language :: Python :: 3 :: Only",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: 3.13",
"Programming Language :: Python :: 3.14",
"Operating System :: POSIX :: Linux",
"Operating System :: Microsoft :: Windows",
"Operating System :: MacOS",
"Topic :: Scientific/Engineering",
"Topic :: Software Development :: Embedded Systems",
"Topic :: System :: Networking",
"Framework :: AsyncIO"
] | [] | null | null | <3.15,>=3.10 | [] | [] | [] | [
"typing_extensions>=4.0; python_version < \"3.11\"",
"async-timeout; python_version < \"3.11\"",
"eclipse-zenoh>=1.0.0; extra == \"zenoh\"",
"colorama; extra == \"dev\"",
"pytest; extra == \"dev\"",
"pytest-asyncio; extra == \"dev\"",
"pyfiglet; extra == \"benchmarks\"",
"uvloop; extra == \"benchmarks\"",
"build; extra == \"build\"",
"twine; extra == \"build\""
] | [] | [] | [] | [
"Homepage, https://github.com/2lian/asyncio-for-robotics",
"Repository, https://github.com/2lian/asyncio-for-robotics",
"Issues, https://github.com/2lian/asyncio-for-robotics/issues"
] | uv/0.8.23 | 2026-02-20T11:21:31.068394 | asyncio_for_robotics-1.3.3.tar.gz | 36,956 | f6/ca/c76b50af79df4314b6201f231201ddc2d41df0aed8cd34aed35bb3973c90/asyncio_for_robotics-1.3.3.tar.gz | source | sdist | null | false | 56dfb81c85a52a58630554069f161a1f | 83eca78e09cd07fcf2101528ac4e9ac8048191b691cde2c4bd9d0acb9ab08911 | f6cac76b50af79df4314b6201f231201ddc2d41df0aed8cd34aed35bb3973c90 | MIT | [
"LICENSE"
] | 0 |
2.4 | prs-ui | 0.1.3 | Reusable Reflex UI components for Polygenic Risk Score computation with PGS Catalog | # prs-ui
[](https://badge.fury.io/py/prs-ui)
Reusable [Reflex](https://reflex.dev/) UI components for **Polygenic Risk Score (PRS)** computation using the [PGS Catalog](https://www.pgscatalog.org/).
Built on top of [`just-prs`](https://pypi.org/project/just-prs/) for the computation engine and [`reflex-mui-datagrid`](https://pypi.org/project/reflex-mui-datagrid/) for data grid display.
## Installation
```bash
pip install prs-ui
```
## Quick Start
```python
import polars as pl
import reflex as rx
from reflex_mui_datagrid import LazyFrameGridMixin
from prs_ui import PRSComputeStateMixin, prs_section
class MyAppState(rx.State):
genome_build: str = "GRCh38"
cache_dir: str = ""
status_message: str = ""
class PRSState(PRSComputeStateMixin, LazyFrameGridMixin, MyAppState):
def load_genotypes(self, parquet_path: str) -> None:
lf = pl.scan_parquet(parquet_path)
self.set_prs_genotypes_lf(lf) # preferred: provide a LazyFrame
self.prs_genotypes_path = parquet_path
def prs_page() -> rx.Component:
return prs_section(PRSState)
```
## Components
| Component | Description |
|-----------|-------------|
| `prs_section(state)` | Complete PRS section: build selector + score grid + compute button + progress + results |
| `prs_build_selector(state)` | Genome build dropdown (GRCh37/GRCh38) |
| `prs_scores_selector(state)` | MUI DataGrid for score selection with checkboxes and filtering |
| `prs_compute_button(state)` | Compute button with disclaimer callout |
| `prs_progress_section(state)` | Progress bar and status text during computation |
| `prs_results_table(state)` | Results table with quality badges, interpretation cards, and CSV download |
## State Mixin
`PRSComputeStateMixin` provides all PRS computation logic as a Reflex state mixin. Mix it into your concrete state class alongside `LazyFrameGridMixin` to get the full PRS workflow.
The preferred input method is a polars LazyFrame via `set_prs_genotypes_lf()` -- memory-efficient and avoids re-reading VCF files on each computation. A parquet path (`prs_genotypes_path`) is supported as a fallback.
## Documentation
See the [just-prs documentation](https://github.com/antonkulaga/just-prs) for the full Python API, CLI reference, and integration guide.
| text/markdown | null | antonkulaga <antonkulaga@gmail.com> | null | null | null | null | [] | [] | null | null | >=3.13 | [] | [] | [] | [
"just-prs>=0.3.3",
"reflex-mui-datagrid>=0.1.9",
"reflex>=0.8.27"
] | [] | [] | [] | [] | uv/0.10.0 {"installer":{"name":"uv","version":"0.10.0","subcommand":["publish"]},"python":null,"implementation":{"name":null,"version":null},"distro":{"name":"Ubuntu","version":"24.04","id":"noble","libc":null},"system":{"name":null,"release":null},"cpu":null,"openssl_version":null,"setuptools_version":null,"rustc_version":null,"ci":true} | 2026-02-20T11:21:19.604244 | prs_ui-0.1.3.tar.gz | 13,305 | 3f/87/92813aa413abb366fe1f035c69cfc36060b3213e799c7d77f82bf878a558/prs_ui-0.1.3.tar.gz | source | sdist | null | false | 58f2d8ecb05dd94a71b4eb5d9aae68b7 | ce42cdac4d5271bb23f00421e678b84268d59c77905e2eb1d3bab2f20664c45a | 3f8792813aa413abb366fe1f035c69cfc36060b3213e799c7d77f82bf878a558 | null | [] | 209 |
2.4 | tencoin-core | 0.4.4 | Official Python library for Tencoin – HD wallets, SegWit, BIP39, RPC client | # Tencoin Core
Official Python library for Tencoin – HD wallets, transactions, and RPC client.
## Features
- **HD Wallets**: BIP-39 (seed phrases) + BIP-84 (SegWit native addresses)
- **Full BIP-32 Support**: Standard xprv/xpub extended keys with master/account-level derivation
- **Watch-Only Wallets**: Create wallets directly from xpub (no seed/private key required)
- **Multiple Address Types**:
- **SegWit v0** (P2WPKH): Native SegWit addresses (`tc1q...`)
- **Legacy P2PKH**: Pay-to-pubkey-hash addresses (`T...`)
- **Legacy P2SH**: Pay-to-script-hash addresses (`M...`)
- **P2WSH**: SegWit script addresses (`tc1q...` with 32-byte program)
- **Custom Scripts**: Support for multisig and custom redeem/witness scripts
- **Transaction Signing**: Automatic detection and signing for SegWit (BIP-143) and Legacy (P2PKH/P2SH) inputs
- **Key Derivation**:
- **BIP-84 (P2WPKH)**: `m/84'/5353'/0'/0/0` for native SegWit addresses
- **BIP-44 (P2PKH)**: `m/44'/5353'/0'/0/0` for legacy P2PKH addresses
- Additional custom BIP-32 paths supported via the BIP-32 API
- **Seed Phrases**: 12–24 word English mnemonics (full BIP-39 compliance)
- **Wallet Recovery**: From mnemonic phrases – compatible with standard BIP-39 implementations
- **Full BIP-39 Compatibility**: Mnemonics generated by other standard libraries (e.g., bip-utils, Electrum, MetaMask, Ian Coleman's tool) are validated and recovered
- **Secure Seed Generation**: PBKDF2-HMAC-SHA512 with the official 2048-word English wordlist (see the BIP-39 sketch after Installation)
## Installation
```bash
pip install tencoin-core
```
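## Example: BIP-39 seed derivation (background)
The tencoin-core API itself is not shown in this excerpt, so the sketch below illustrates only the generic BIP-39 flow described in the feature list, using the standard `mnemonic` package (available via the optional `mnemonic` extra). It is not tencoin-core's interface; the derivation path in the final comment is the BIP-84 path quoted above.
```python
# Generic BIP-39 sketch -- NOT the tencoin-core API.
from mnemonic import Mnemonic

mnemo = Mnemonic("english")                  # official 2048-word English wordlist
words = mnemo.generate(strength=128)         # 12-word seed phrase (use 256 for 24 words)
seed = mnemo.to_seed(words, passphrase="")   # PBKDF2-HMAC-SHA512 (2048 iterations)

print(words)
print(seed.hex())
# A BIP-84 Tencoin wallet would then derive keys from this seed along
# m/84'/5353'/0'/0/0 to obtain native SegWit (tc1q...) addresses.
```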
| text/markdown | Tencoin Development Team | null | null | null | MIT | tencoin, blockchain, cryptocurrency, hd-wallet, bip39, segwit, rpc | [
"Development Status :: 4 - Beta",
"Intended Audience :: Developers",
"License :: OSI Approved :: MIT License",
"Operating System :: OS Independent",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.8",
"Programming Language :: Python :: 3.9",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Topic :: Software Development :: Libraries :: Python Modules",
"Topic :: Utilities"
] | [] | null | null | >=3.8 | [] | [] | [] | [
"ecdsa>=0.18.0",
"requests>=2.28.0",
"bech32>=1.2.0",
"pytest>=7.0; extra == \"dev\"",
"pytest-cov; extra == \"dev\"",
"black; extra == \"dev\"",
"ruff; extra == \"dev\"",
"aiohttp>=3.8.0; extra == \"async\"",
"mnemonic>=0.20; extra == \"mnemonic\""
] | [] | [] | [] | [
"Homepage, https://github.com/TenCoinOrg/TenCoin",
"Repository, https://github.com/TenCoinOrg/TenCoin",
"Issues, https://github.com/TenCoinOrg/TenCoin/issues",
"Documentation, https://tencoin-core.readthedocs.io"
] | twine/6.2.0 CPython/3.11.9 | 2026-02-20T11:20:54.729534 | tencoin_core-0.4.4.tar.gz | 39,733 | e1/6f/6174cbddd30a0b59479c37a2faa9725886b033a5b5611de7ab395e3d66e6/tencoin_core-0.4.4.tar.gz | source | sdist | null | false | 1ec8d52c2b929749ca3f0269207ecd59 | 99499cb6426da560bbf40e80f04409f7c8c60b05497f0149452252267c9845ad | e16f6174cbddd30a0b59479c37a2faa9725886b033a5b5611de7ab395e3d66e6 | null | [] | 212 |
2.2 | redturtle.bandi | 1.6.2 | A product for announcements management based on rer.bandi |
===============
redturtle.bandi
===============
|python| |version| |ci| |downloads| |license|
.. |python| image:: https://img.shields.io/pypi/pyversions/redturtle.bandi.svg
:target: https://pypi.python.org/pypi/redturtle.bandi/
.. |version| image:: http://img.shields.io/pypi/v/redturtle.bandi.svg
:target: https://pypi.python.org/pypi/redturtle.bandi
.. |ci| image:: https://github.com/RedTurtle/redturtle.bandi/actions/workflows/test.yml/badge.svg
:target: https://github.com/RedTurtle/redturtle.bandi/actions
.. |downloads| image:: https://img.shields.io/pypi/dm/redturtle.bandi.svg
:target: https://pypi.org/project/redturtle.bandi/
.. |license| image:: https://img.shields.io/pypi/l/redturtle.bandi.svg
:target: https://pypi.org/project/redturtle.bandi/
:alt: License
Introduction
============
redturtle.bandi is a product for announcements management, based on the 3.x branch of `rer.bandi`__.
__ http://pypi.python.org/pypi/rer.bandi
It allows setting information about the announcement, such as the deadline to participate or the closing date.
Migration from rer.bandi
========================
If you need to migrate rer.bandi -> redturtle.bandi, follow these instructions:
- Copy bandi settings somewhere
- Add both products in the buildout
- Uninstall rer.bandi
- Install redturtle.bandi
- Fill Bandi control panel with old settings
- Call "migration-from-rer" view on the Plone site root (this view will change the base classe of already created Bando and Folder Deepening items, and clean history)
- Remove rer.bandi from buildout
Composition
===========
Different layouts
-----------------
There are two allowed views for an announcement:
* default view, with basic info on the right (like events) and extra info (folder deepenings) in the middle.
* alternative view that moves the extra info slot below the basic info.
Folder deepening
----------------
Like **rer.structured_content**, it has a special folder type called "*Folder Deepening*" that allows managing extra info or attachments that should be shown in the announcement's view.
Topic criterias
---------------
There are some new topic criteria that allow setting topic queries for announcements.
Announcements search
--------------------
There is a search form (http://yoursite/search_bandi_form) for quick searches.
Announcement state information
------------------------------
In the search results and in the two new topic views, there is also some information about the announcement, like its state (open, closed or in progress).
Announcements portlet
---------------------
There is also a portlet that shows announcement info from a topic (this portlet extends the base collection portlet).
Configurations
==============
An announcement has some fields for setting the announcement type and recipients.
Available values are set in the "Bandi Settings" control panel.
Authority Default value
-----------------------
A default authority value can be set for announcements. This information is taken from the "Bandi Settings" control panel (default_ente).
If the property is empty, the default value isn't set.
Tile
====
In order to use the bandi layout for tiles, the collective.tiles.collection product must be installed.
plone.restapi integrations
==========================
Controlpanel
------------
Bandi controlpanel is also exposed via restapi to allow Volto integration.
DateTime fields deserializer
----------------------------
There is a custom deserializer for DateTime fields to set the right timezone when saving these fields (like start and end in Events).
Dependencies
============
This product has been tested on Plone 5.2
Credits
=======
Developed with the support of `Regione Emilia Romagna`__;
Regione Emilia Romagna supports the `PloneGov initiative`__.
__ http://www.regione.emilia-romagna.it/
__ http://www.plonegov.it/
Authors
=======
This product was developed by RedTurtle Technology team.
.. image:: http://www.redturtle.net/redturtle_banner.png
:alt: RedTurtle Technology Site
:target: http://www.redturtle.net/
Changelog
=========
1.6.2 (2026-02-20)
------------------
- Fix upgrade-step to handle malformed data.
[cekk]
- Missing translations, black, isort, fix git tests
[thesaintsimon]
1.6.1 (2025-10-10)
------------------
- Handle the deserializer for datetime fields in case the field type has changed (date -> datetime)
[mamico]
- lint/isort
[mamico]
1.6.0 (2025-03-13)
------------------
- Add new criteria for bando_state.
[folix-01]
1.5.1 (2025-03-12)
------------------
- Fix upgrade-step to not broke on missing value.
[cekk]
1.5.0 (2025-02-20)
------------------
- Do not use key/value pairs in tipologia_bando and destinatari.
[cekk]
- Refactor retrieveContentsOfFolderDeepening to be more pluggable and use hooks for content-types based additional data.
[cell]
1.4.7 (2024-12-12)
------------------
- Update it translations
[lucabel]
1.4.6 (2024-09-09)
------------------
- Add effective and modified date to retrieveContentsOfFolderDeepening data.
[cekk]
1.4.5 (2024-04-15)
------------------
- Added "tipologia_bando_label" metadata.
[daniele]
1.4.4 (2024-02-20)
------------------
- Changed translation for states "Open" and "Closed".
[daniele]
1.4.3 (2023-06-27)
------------------
- Fix workaround for Link bug (?) (remoteUrl in catalog)
[mamico]
- Feat: file URLs completed with the filename
[mamico]
- Fix invalid tipologie_bando
[mamico]
1.4.2 (2022-10-07)
------------------
- Fix problem with scadenza_bando indexing: due to a datetime-to-DateTime conversion, tz information was badly transformed
[lucabel]
1.4.1 (2022-07-28)
------------------
- Added metadata "apertura_bando".
[daniele]
1.4.0 (2022-05-31)
------------------
- Add new bando state "scheduled" and new field to manage open date.
[cekk]
1.3.4 (2022-05-10)
------------------
- Re-introduced change from 1.2.0.
[cekk]
1.3.3 (2022-04-19)
------------------
- Fix problem with default values and missing
IContextAwareDefaultFactory
1.3.2 (2022-01-14)
------------------
- Fix attachments dimension calculation.
[cekk]
- Add content-type info in attachments.
[cekk]
1.3.1 (2022-01-14)
------------------
- Fix labels in controlpanel.
[cekk]
1.3.0 (2021-11-17)
------------------
- Fixed profile name in the mgrate_to_1100 upgrade step.
[eikichi18]
- Remove DateField deserializer customization (already used in redturtle.volto).
[cekk]
1.2.0 (2021-06-07)
------------------
- Save `scadenza_bando` with proper timezone (like start and end fields in Event).
[cekk]
1.1.2 (2021-04-12)
------------------
- Fix typo in upgrade-step for 1.1.0 version.
[cekk]
1.1.1 (2021-02-19)
------------------
- Controlpanel also available for plone.restapi.
[cekk]
1.1.0 (2021-02-19)
------------------
- Rename indexes.
[cekk]
1.0.2 (2020-12-30)
------------------
- Release on pypi.
[cekk]
1.0.1 (2020-10-30)
------------------
- Make some micro fix in bando view when search for attachments in
deepening folder
[lucabel]
1.0.0 (2020-03-06)
------------------
- Start new project from old rer.bandi implementation (3.x)
[cekk]
| null | RedTurtle Technology | sviluppoplone@redturtle.it | null | null | GPL | redturtle bandi announcements | [
"Development Status :: 5 - Production/Stable",
"Environment :: Web Environment",
"Framework :: Plone :: 5.2",
"Framework :: Plone :: 6.0",
"Framework :: Plone :: Addon",
"Framework :: Plone",
"License :: OSI Approved :: GNU General Public License v2 (GPLv2)",
"Operating System :: OS Independent",
"Programming Language :: Python :: 3.8",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python"
] | [] | https://github.com/PloneGov-IT/redturtle.bandi | null | >=3.8 | [] | [] | [] | [
"setuptools",
"lxml",
"plone.restapi",
"collective.tiles.collection",
"plone.app.testing; extra == \"test\"",
"plone.testing>=5.0.0; extra == \"test\"",
"plone.app.contenttypes; extra == \"test\"",
"plone.app.robotframework[debug]; extra == \"test\"",
"collective.MockMailHost; extra == \"test\""
] | [] | [] | [] | [
"PyPI, https://pypi.python.org/pypi/redturtle.bandi",
"Source, https://github.com/RedTurtle/redturtle.bandi",
"Tracker, https://github.com/RedTurtle/redturtle.bandi/issues"
] | twine/5.1.1 CPython/3.11.9 | 2026-02-20T11:20:04.037182 | redturtle_bandi-1.6.2.tar.gz | 67,485 | 82/87/839ebdc365ad40f6756e8cfc345908839180fa76d10e9642920d94ac50bd/redturtle_bandi-1.6.2.tar.gz | source | sdist | null | false | cd75567a079517f79c617ecfabe68122 | e55d4b5a09717dae501673ed19922839b5c9633788d720b9569d095410443733 | 8287839ebdc365ad40f6756e8cfc345908839180fa76d10e9642920d94ac50bd | null | [] | 0 |
2.4 | haiku.skills | 0.4.2 | Skill-powered AI agents implementing the Agent Skills specification with pydantic-ai | # haiku.skills
[](https://github.com/ggozad/haiku.skills/actions/workflows/test.yml)
[](https://codecov.io/gh/ggozad/haiku.skills)
Skill-powered AI agents implementing the [Agent Skills specification](https://agentskills.io/specification) with [pydantic-ai](https://ai.pydantic.dev/).
## How it works
`SkillToolset` is a pydantic-ai `FunctionToolset` that you attach to your own agent. It exposes a single `execute_skill` tool. When the agent calls it, a **focused sub-agent** spins up with only that skill's instructions and tools — then returns the result. The main agent never sees the skill's internal tools, so its tool space stays clean no matter how many skills you load.
This sub-agent architecture means each skill runs in isolation with its own system prompt, tools, and token budget. Skills don't interfere with each other, tool descriptions don't compete for attention, and failures in one skill can't confuse another.
## Features
- **Sub-agent execution** — Each skill runs in its own agent with dedicated instructions and tools
- **Skill discovery** — Scan filesystem paths for [SKILL.md](https://agentskills.io/specification) directories or load from Python entrypoints
- **In-process tools** — Attach pydantic-ai `Tool` functions or `AbstractToolset` instances to skills
- **Per-skill state** — Skills declare a Pydantic state model and namespace; state is passed to tools via `RunContext` and tracked on the toolset
- **AG-UI protocol** — State changes emit `StateDeltaEvent` (JSON Patch), compatible with the [AG-UI protocol](https://docs.ag-ui.com)
- **Script tools** — Python scripts in `scripts/` with a `main()` function, discovered and executed via `uv run` (see the sketch after the quick start)
- **MCP integration** — Wrap any MCP server (stdio, SSE, streamable HTTP) as a skill
## Quick start
```bash
uv add haiku.skills
```
```python
from pathlib import Path
from pydantic_ai import Agent
from haiku.skills import SkillToolset
toolset = SkillToolset(skill_paths=[Path("./skills")])
agent = Agent(
"anthropic:claude-sonnet-4-5-20250929",
instructions=toolset.system_prompt,
toolsets=[toolset],
)
result = await agent.run("Analyze this dataset.")
print(result.output)
```
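The *Script tools* feature listed above means a skill can ship plain Python scripts in its `scripts/` directory, each exposing a `main()` function, which haiku.skills discovers and runs via `uv run`. A minimal sketch of such a script follows; the path, the argument handling, and the `__main__` guard are illustrative assumptions rather than requirements stated in this README.
```python
# skills/data-analysis/scripts/word_count.py  (illustrative path)
# A script tool exposes a main() function; haiku.skills discovers it under the
# skill's scripts/ directory and executes it with `uv run`.
import sys
from pathlib import Path


def main() -> None:
    # Hypothetical behaviour: count the words in a file given as the first argument.
    target = Path(sys.argv[1])
    words = target.read_text(encoding="utf-8").split()
    print(f"{target.name}: {len(words)} words")


if __name__ == "__main__":
    main()
```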
## Documentation
Full documentation at [ggozad.github.io/haiku.skills](https://ggozad.github.io/haiku.skills/).
## License
MIT
| text/markdown | null | Yiorgis Gozadinos <ggozadinos@gmail.com> | null | null | MIT | agent-skills, ai-agents, ai-tools, mcp, pydantic-ai, skill-discovery, sub-agent | [
"Development Status :: 4 - Beta",
"Intended Audience :: Developers",
"Operating System :: MacOS",
"Operating System :: POSIX :: Linux",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: 3.13",
"Programming Language :: Python :: 3.14",
"Typing :: Typed"
] | [] | null | null | >=3.12 | [] | [] | [] | [
"pydantic-ai-slim[mcp]>=1.46.0",
"pydantic>=2.12.5",
"pyyaml>=6.0.3",
"skills-ref>=0.1.1",
"ag-ui-protocol>=0.1.11; extra == \"ag-ui\"",
"jsonpatch>=1.33; extra == \"ag-ui\"",
"ag-ui-protocol>=0.1.11; extra == \"tui\"",
"jsonpatch>=1.33; extra == \"tui\"",
"pydantic-ai-slim[logfire]>=1.46.0; extra == \"tui\"",
"python-dotenv>=1.1.0; extra == \"tui\"",
"textual>=3.2.0; extra == \"tui\"",
"typer>=0.21.2; extra == \"tui\""
] | [] | [] | [] | [
"Homepage, https://github.com/ggozad/haiku.skills",
"Repository, https://github.com/ggozad/haiku.skills",
"Issues, https://github.com/ggozad/haiku.skills/issues"
] | twine/6.2.0 CPython/3.13.12 | 2026-02-20T11:18:47.434755 | haiku_skills-0.4.2.tar.gz | 119,892 | 3b/13/96cb8124fd47d9f7b752f835b8c751bc67908a061a53a2ba392509d1fe68/haiku_skills-0.4.2.tar.gz | source | sdist | null | false | 242426ee9754e5ebdca6311fe262a5d3 | 4846e7ebc1c7919faa576badd77723a0d681cf3a92d90f7b97edf3f6eca8ea8d | 3b1396cb8124fd47d9f7b752f835b8c751bc67908a061a53a2ba392509d1fe68 | null | [
"LICENSE"
] | 0 |
2.4 | Pymeshit | 0.7.6 | Complete mesh generation and manipulation package with GUI | # PyMeshIt
[](https://www.python.org/downloads/)
[](LICENSE)
[](https://github.com/psf/black)
<p align="center">
<img src="resources/images/pymeshit_logo.png"/>
</p>
PyMeshIt is a complete Python package for mesh generation and manipulation with a full-featured Qt-based GUI. It provides a comprehensive workflow to process point clouds and polylines into conforming surface meshes and tetrahedral meshes.
**Note:** This version runs entirely in Python without C++ dependencies, making it easier to install and deploy.
## Highlights (GUI-driven workflow)
The included GUI (main.py) implements a full MeshIt workflow with the following main tabs:
- 1. Load Data — load points, wells (polylines) or VTU/Poly formats; manage multiple datasets and colors.
- 2. Convex Hull — compute dataset boundaries (convex or rim for/quasi-planar sheets) with corner detection.
- 3. Segmentation — refine hulls by target feature size and per-surface length tables (RefineByLength).
- 4. Triangulation — generate surface triangulations with gradient, min-angle, interpolation and uniform options.
- 5. Intersections — compute & visualize global surface–surface and polyline–surface intersections; triple point detection.
- 6. Refine & Mesh — refine intersection/hull lines, generate conforming surface meshes, constraint selection UI, per-surface mesh density table.
- 7. Pre‑Tetramesh — select conforming surfaces, validate for TetGen, manage selection tree for tetrahedralization.
- 8. Tetra Mesh — generate and visualize tetrahedral meshes, assign materials, export results.
## Installation
### From Release (Recommended)
PyMeshIt provides standalone executables for **Windows 10/11** and **Ubuntu (22.04+)**. These do not require Python to be installed.
1. Go to the [Releases page](https://github.com/waqashussain117/PyMeshit/releases).
2. Download the appropriate file for your OS:
- **Windows**: `PyMeshit-vX.X.X-win64.zip`
- **Ubuntu**: `PyMeshit-vX.X.X-linux.zip`
- **MacOS**: `PyMeshit-vX.X.X-macos.zip`
3. Extract the archive.
4. On Linux, you may need to make the file executable:
```bash
chmod +x PyMeshIt
```
5. Run the executable (`./PyMeshIt` on Linux, double-click `PyMeshIt.exe` on Windows).
> **Note**: macOS support is planned for future releases.
### From PyPI (Cross-Platform)
If you prefer to run from source or use the Python API, you can install via pip:
```bash
pip install triangle
pip install pymeshit
```
### From Source
If you want to install from source:
```bash
git clone https://github.com/waqashussain117/PyMeshit
cd PyMeshit
pip install -e .
```
### Requirements
The package will automatically install all required dependencies:
- numpy
- scipy
- matplotlib
- PySide6
- pyvista
- tetgen
- triangle (optional)
**Linux Users**: You may need to install system libraries for Qt. On Ubuntu:
```bash
sudo apt-get install libxcb-cursor0 libxkbcommon-x11-0 libegl1 libopengl0 libgl1
```
## Quick start (GUI)
If you are not using a standalone release, install the requirements and launch the GUI through Python.
After installation, run the GUI:
```bash
meshit-gui
```
Or from Python:
```python
import Pymeshit
Pymeshit.main_wrapper()
```
Typical workflow:
1. Load one or more point or VTU files (File → Load).
2. Compute hulls (Convex Hull tab).
3. Compute segmentation (Segmentation tab) — set "Target Feature Size" or per-surface values.
4. Run triangulation (Triangulation tab), choose interpolation and quality settings.
5. Compute intersections (Intersections tab) to extract shared constraints and triple points.
6. Refine intersection lines and generate conforming meshes (Refine & Mesh tab).
7. Select conforming surfaces and validate for TetGen (Pre‑Tetramesh tab).
8. Generate and visualize tetrahedral mesh (Tetra Mesh tab) and export.
## Programmatic Usage
## Troubleshooting
### Linux / Virtual Machine Issues
If you encounter an error like `X Error of failed request: BadWindow` or the application crashes on startup, it is likely due to graphics driver compatibility, especially in Virtual Machines (VirtualBox, VMware).
Try running the application with software rendering forced:
```bash
export LIBGL_ALWAYS_SOFTWARE=1
./PyMeshIt
```
Or force the Qt platform to X11:
```bash
export QT_QPA_PLATFORM=xcb
./PyMeshIt
```
## Contributing
Contributions are welcome. Please open an issue for discussion and submit PRs for fixes and features. Keep GUI behavior consistent with the tab-based workflow.
## License
This project is licensed under the GNU General Public License v3.0 - see the [LICENSE](LICENSE) file for details.
| text/markdown | null | Waqas Hussain <waqas.hussain117@gmail.com> | null | null | GPL-3.0 | null | [
"Development Status :: 3 - Alpha",
"Intended Audience :: Developers",
"Intended Audience :: Science/Research",
"Topic :: Scientific/Engineering",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.7",
"Programming Language :: Python :: 3.8",
"Programming Language :: Python :: 3.9",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"License :: OSI Approved :: GNU General Public License v3 (GPLv3)"
] | [] | null | null | >=3.7 | [] | [] | [] | [
"numpy>=1.20.0",
"scipy>=1.7.0",
"triangle",
"matplotlib>=3.3.0",
"PySide6>=6.0.0",
"pyvista>=0.30.0",
"pyvistaqt>=0.11.0",
"tetgen>=0.6.0"
] | [] | [] | [] | [
"Homepage, https://github.com/waqashussain117/PyMeshit",
"Bug Tracker, https://github.com/waqashussain117/PyMeshit/issues"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-20T11:18:46.920184 | pymeshit-0.7.6.tar.gz | 3,717,991 | 90/88/a7cf39c93ccae13471737845ffdc39f0b546a97c9373da4e93c5550cb78e/pymeshit-0.7.6.tar.gz | source | sdist | null | false | 77e6d632c96f294a214911bee17a3118 | 8c882cbf0f05d9029338507f326083d83942a7b3b6ff60f0e3146e5f9828fe16 | 9088a7cf39c93ccae13471737845ffdc39f0b546a97c9373da4e93c5550cb78e | null | [
"LICENSE"
] | 0 |
2.4 | assert-llm-tools | 1.0.0 | Automated Summary Scoring & Evaluation of Retained Text | # ASSERT LLM Tools
> **⚠️ Deprecated — this package is no longer maintained.**
>
> `assert_llm_tools` has been superseded by focused, independently-versioned packages:
>
> | Capability | New package | Install |
> |-----------|-------------|---------|
> | Summary evaluation | **assert-eval** | `pip install assert-eval` |
> | Compliance note evaluation | **assert-review** | `pip install assert-review` |
>
> Version 1.0.0 is the final release. No further updates will be made. Please migrate to the packages above.
---
**A**utomated **S**ummary **S**coring & **E**valuation of **R**etained **T**ext
ASSERT LLM Tools is a lightweight Python library for LLM-based text evaluation. It provides two main capabilities:
- **Summary evaluation** — score a summary against source text for coverage, factual accuracy, coherence, and more
- **Compliance note evaluation** — evaluate adviser meeting notes against regulatory frameworks (FCA, MiFID II) and return a structured gap report
All evaluation is LLM-based. No PyTorch, no BERT, no heavy dependencies.
## Installation
```bash
pip install assert-llm-tools
```
## Quick Start
### Summary Evaluation
```python
from assert_llm_tools import evaluate_summary, LLMConfig
config = LLMConfig(
provider="bedrock",
model_id="us.amazon.nova-pro-v1:0",
region="us-east-1",
)
results = evaluate_summary(
full_text="Original long text goes here...",
summary="Summary to evaluate goes here...",
metrics=["coverage", "factual_consistency", "coherence"],
llm_config=config,
)
print(results)
# {'coverage': 0.85, 'factual_consistency': 0.92, 'coherence': 0.88}
```
### Compliance Note Evaluation
```python
from assert_llm_tools import evaluate_note, LLMConfig
config = LLMConfig(
provider="bedrock",
model_id="us.amazon.nova-pro-v1:0",
region="us-east-1",
)
report = evaluate_note(
note_text="Client meeting note text goes here...",
framework="fca_suitability_v1",
llm_config=config,
)
print(report.overall_rating) # "Compliant" / "Minor Gaps" / "Requires Attention" / "Non-Compliant"
print(report.overall_score) # 0.0–1.0
print(report.passed) # True / False
for item in report.items:
print(f"{item.element_id}: {item.status} (score: {item.score:.2f})")
if item.suggestions:
for s in item.suggestions:
print(f" → {s}")
```
## Summary Evaluation
### Available Metrics
| Metric | Description |
|--------|-------------|
| `coverage` | How completely the summary captures claims from the source text |
| `factual_consistency` | Whether claims in the summary are supported by the source |
| `factual_alignment` | Combined coverage + consistency score |
| `topic_preservation` | How well the summary preserves the main topics |
| `conciseness` | Information density — does the summary avoid padding? |
| `redundancy` | Detects repetitive content within the summary |
| `coherence` | Logical flow and readability of the summary |
> **Deprecated names** (still accepted for backwards compatibility): `faithfulness` → use `coverage`; `hallucination` → use `factual_consistency`.
### Custom Evaluation Instructions
Tailor LLM evaluation criteria for your domain:
```python
results = evaluate_summary(
full_text=text,
summary=summary,
metrics=["coverage", "factual_consistency"],
llm_config=config,
custom_prompt_instructions={
"coverage": "Apply strict standards. Only mark a claim as covered if it is clearly and explicitly represented.",
"factual_consistency": "Flag any claim that adds detail not present in the original text.",
},
)
```
### Verbose Output
Pass `verbose=True` to include per-claim reasoning in the results:
```python
results = evaluate_summary(..., verbose=True)
```
## Compliance Note Evaluation
> ⚠️ **Experimental — do not use in live or production systems.**
>
> `evaluate_note()` is under active development. Outputs are non-deterministic (LLM-based), the API may change between releases, and results have not been validated against real regulatory decisions. This feature is intended for research, prototyping, and internal tooling only. It is not a substitute for qualified compliance review and must not be used to make or support live regulatory or client-facing decisions.
### evaluate_note()
```python
from assert_llm_tools import evaluate_note, LLMConfig
from assert_llm_tools.metrics.note.models import PassPolicy
report = evaluate_note(
note_text=note,
framework="fca_suitability_v1", # built-in ID or path to a custom YAML
llm_config=config,
mask_pii=False, # mask client PII before sending to LLM
verbose=False, # include LLM reasoning in GapItem.notes
custom_instruction=None, # additional instruction appended to all element prompts
pass_policy=None, # custom PassPolicy (see below)
metadata={"note_id": "N-001"}, # arbitrary key/value pairs, passed through to GapReport
)
```
### GapReport
| Field | Type | Description |
|-------|------|-------------|
| `framework_id` | `str` | Framework used for evaluation |
| `framework_version` | `str` | Framework version |
| `passed` | `bool` | Whether the note passes the framework's policy thresholds |
| `overall_score` | `float` | Weighted mean element score, 0.0–1.0 |
| `overall_rating` | `str` | Human-readable compliance rating (see below) |
| `items` | `List[GapItem]` | Per-element evaluation results |
| `summary` | `str` | LLM-generated narrative summary of the evaluation |
| `stats` | `GapReportStats` | Counts by status and severity |
| `pii_masked` | `bool` | Whether PII masking was applied |
| `metadata` | `dict` | Caller-supplied metadata, passed through unchanged |
**Overall rating values:**
| Rating | Meaning |
|--------|---------|
| `Compliant` | Passed — all elements fully present |
| `Minor Gaps` | Passed — but some elements are partial or optional elements missing |
| `Requires Attention` | Failed — high/medium gaps, no critical blockers |
| `Non-Compliant` | Failed — one or more critical required elements missing or below threshold |
### GapItem
| Field | Type | Description |
|-------|------|-------------|
| `element_id` | `str` | Element identifier from the framework |
| `status` | `str` | `"present"`, `"partial"`, or `"missing"` |
| `score` | `float` | 0.0–1.0 quality score for this element |
| `evidence` | `Optional[str]` | Quote or paraphrase from the note supporting the assessment. `None` when element is missing. |
| `severity` | `str` | `"critical"`, `"high"`, `"medium"`, or `"low"` |
| `required` | `bool` | Whether this element is required by the framework |
| `suggestions` | `List[str]` | Actionable remediation suggestions for gaps (empty when `status == "present"`) |
| `notes` | `Optional[str]` | LLM reasoning (only populated when `verbose=True`) |
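As an illustration, here is a minimal sketch of consuming a report, assuming the `GapReport` returned by `evaluate_note()` exposes the fields above as attributes:
```python
report = evaluate_note(note_text=note, framework="fca_suitability_v1", llm_config=config)

print(report.overall_rating, f"score={report.overall_score:.2f}", "passed" if report.passed else "failed")

# Walk the per-element results and surface anything that is not fully present.
for item in report.items:
    if item.status != "present":
        print(f"[{item.severity}] {item.element_id}: {item.status}")
        for suggestion in item.suggestions:
            print("  -", suggestion)
```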
### Built-in Frameworks
| Framework ID | Description |
|-------------|-------------|
| `fca_suitability_v1` | FCA suitability note requirements under COBS 9.2 / PS13/1 (9 elements) |
### Custom Frameworks
Pass a path to your own YAML file:
```python
report = evaluate_note(
note_text=note,
framework="/path/to/my_framework.yaml",
llm_config=config,
)
```
The YAML schema mirrors the built-in frameworks. See `assert_llm_tools/frameworks/fca_suitability_v1.yaml` for a reference example.
### Configurable Pass Policy
```python
from assert_llm_tools.metrics.note.models import PassPolicy
policy = PassPolicy(
critical_partial_threshold=0.5, # partial critical element treated as blocker if score < this
required_pass_threshold=0.6, # required element must score >= this to pass
score_correction_missing_cutoff=0.2,
score_correction_present_min=0.5,
score_correction_present_floor=0.7,
)
report = evaluate_note(note_text=note, framework="fca_suitability_v1", pass_policy=policy, llm_config=config)
```
## LLM Configuration
```python
from assert_llm_tools import LLMConfig
# AWS Bedrock
config = LLMConfig(
provider="bedrock",
model_id="us.amazon.nova-pro-v1:0",
region="us-east-1",
# api_key / api_secret / aws_session_token for explicit credentials (optional — uses ~/.aws by default)
)
# OpenAI
config = LLMConfig(
provider="openai",
model_id="gpt-4o",
api_key="your-openai-api-key",
)
```
### Supported Bedrock Model Families
| Model Family | Example Model IDs |
|-------------|-------------------|
| Amazon Nova | `us.amazon.nova-pro-v1:0`, `amazon.nova-lite-v1:0` |
| Anthropic Claude | `anthropic.claude-3-sonnet-20240229-v1:0` |
| Meta Llama | `meta.llama3-70b-instruct-v1:0` |
| Mistral AI | `mistral.mistral-large-2402-v1:0` |
| Cohere Command | `cohere.command-r-plus-v1:0` |
| AI21 Labs | `ai21.jamba-1-5-large-v1:0` |
## Proxy Configuration
```python
# Single proxy
config = LLMConfig(provider="bedrock", model_id="...", region="us-east-1",
proxy_url="http://proxy.example.com:8080")
# Protocol-specific
config = LLMConfig(provider="bedrock", model_id="...", region="us-east-1",
http_proxy="http://proxy.example.com:8080",
https_proxy="http://proxy.example.com:8443")
# Authenticated proxy
config = LLMConfig(provider="bedrock", model_id="...", region="us-east-1",
proxy_url="http://username:password@proxy.example.com:8080")
```
Standard `HTTP_PROXY` / `HTTPS_PROXY` environment variables are also respected.
## PII Masking
Apply PII detection and masking before any text is sent to the LLM:
```python
# Summary evaluation
results = evaluate_summary(
full_text=text, summary=summary, metrics=["coverage"],
llm_config=config, mask_pii=True,
)
# Note evaluation
report = evaluate_note(note_text=note, framework="fca_suitability_v1",
llm_config=config, mask_pii=True)
```
> **Note:** `mask_pii=False` is the default. For production use with real client data, set `mask_pii=True`. Output files (e.g. `--output report.json`) may contain verbatim evidence quotes — treat them accordingly.
## License
MIT
| text/markdown | null | Charlie Douglas <cdouglas@gmail.com> | null | null | MIT | null | [
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.8",
"License :: OSI Approved :: MIT License",
"Operating System :: OS Independent",
"Development Status :: 7 - Inactive",
"Intended Audience :: Developers",
"Topic :: Software Development :: Testing"
] | [] | null | null | >=3.8 | [] | [] | [] | [
"anthropic>=0.3.0",
"openai>=1.0.0",
"python-dotenv>=0.19.0",
"tiktoken==0.8.0",
"pytest>=7.0; extra == \"dev\"",
"black>=23.0.0; extra == \"dev\"",
"isort>=5.0.0; extra == \"dev\"",
"flake8>=4.0.0; extra == \"dev\"",
"build>=0.10.0; extra == \"dev\"",
"twine>=4.0.0; extra == \"dev\"",
"boto3>=1.28.0; extra == \"bedrock\"",
"openai>=1.53.0; extra == \"openai\"",
"boto3>=1.28.0; extra == \"all\"",
"openai>=1.53.0; extra == \"all\""
] | [] | [] | [] | [
"Homepage, https://github.com/charliedouglas/assert",
"Bug Tracker, https://github.com/charliedouglas/assert/issues"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-20T11:18:25.643136 | assert_llm_tools-1.0.0.tar.gz | 49,370 | c9/20/a3db94ea0993c236b9148719bec353e34a60fd21ce3659b2f8940517a909/assert_llm_tools-1.0.0.tar.gz | source | sdist | null | false | bdabd838af13f981efa5de9545fc8ab9 | 7cb9e39528d7e3a2402bbfde0efd3f8f3ed39c68fe6fdc377f950a5a24825437 | c920a3db94ea0993c236b9148719bec353e34a60fd21ce3659b2f8940517a909 | null | [
"LICENSE"
] | 223 |
2.4 | llama-index-llms-openai | 0.6.19 | llama-index llms openai integration | # LlamaIndex Llms Integration: Openai
## Installation
To install the required package, run:
```bash
%pip install llama-index-llms-openai
```
## Setup
1. Set your OpenAI API key as an environment variable. You can replace `"sk-..."` with your actual API key:
```python
import os
os.environ["OPENAI_API_KEY"] = "sk-..."
```
## Basic Usage
### Generate Completions
To generate a completion for a prompt, use the `complete` method:
```python
from llama_index.llms.openai import OpenAI
resp = OpenAI().complete("Paul Graham is ")
print(resp)
```
### Chat Responses
To send a chat message and receive a response, create a list of `ChatMessage` instances and use the `chat` method:
```python
from llama_index.core.llms import ChatMessage
messages = [
ChatMessage(
role="system", content="You are a pirate with a colorful personality."
),
ChatMessage(role="user", content="What is your name?"),
]
resp = OpenAI().chat(messages)
print(resp)
```
## Streaming Responses
### Stream Complete
To stream responses for a prompt, use the `stream_complete` method:
```python
from llama_index.llms.openai import OpenAI
llm = OpenAI()
resp = llm.stream_complete("Paul Graham is ")
for r in resp:
print(r.delta, end="")
```
### Stream Chat
To stream chat responses, use the `stream_chat` method:
```python
from llama_index.llms.openai import OpenAI
from llama_index.core.llms import ChatMessage
llm = OpenAI()
messages = [
ChatMessage(
role="system", content="You are a pirate with a colorful personality."
),
ChatMessage(role="user", content="What is your name?"),
]
resp = llm.stream_chat(messages)
for r in resp:
print(r.delta, end="")
```
## Configure Model
You can specify a particular model when creating the `OpenAI` instance:
```python
llm = OpenAI(model="gpt-3.5-turbo")
resp = llm.complete("Paul Graham is ")
print(resp)
messages = [
ChatMessage(
role="system", content="You are a pirate with a colorful personality."
),
ChatMessage(role="user", content="What is your name?"),
]
resp = llm.chat(messages)
print(resp)
```
## Asynchronous Usage
You can also use asynchronous methods for completion:
```python
from llama_index.llms.openai import OpenAI
llm = OpenAI(model="gpt-3.5-turbo")
resp = await llm.acomplete("Paul Graham is ")
print(resp)
```
## Set API Key at a Per-Instance Level
If desired, you can have separate LLM instances use different API keys:
```python
from llama_index.llms.openai import OpenAI
# This instance uses the explicitly provided (invalid) key ...
llm = OpenAI(model="gpt-3.5-turbo", api_key="BAD_KEY")
# ... while a fresh default instance falls back to the OPENAI_API_KEY environment variable.
resp = OpenAI().complete("Paul Graham is ")
print(resp)
```
### LLM Implementation example
https://docs.llamaindex.ai/en/stable/examples/llm/openai/
| text/markdown | llama-index | null | null | null | null | null | [] | [] | null | null | <4.0,>=3.9 | [] | [] | [] | [
"llama-index-core<0.15,>=0.14.5",
"openai<3,>=1.108.1"
] | [] | [] | [] | [] | uv/0.10.4 {"installer":{"name":"uv","version":"0.10.4","subcommand":["publish"]},"python":null,"implementation":{"name":null,"version":null},"distro":{"name":"Ubuntu","version":"24.04","id":"noble","libc":null},"system":{"name":null,"release":null},"cpu":null,"openssl_version":null,"setuptools_version":null,"rustc_version":null,"ci":true} | 2026-02-20T11:18:03.527670 | llama_index_llms_openai-0.6.19.tar.gz | 25,956 | 42/f0/810b09cab0d56de6f9476642d0e016c779f2ac3ec7845eb44ddc12a1796d/llama_index_llms_openai-0.6.19.tar.gz | source | sdist | null | false | 032a244c2b1448f77766c895cf2c0961 | a5e0fcddb7da875759406036e09b949cd64a2bb98da709d933147e41e0e6f78a | 42f0810b09cab0d56de6f9476642d0e016c779f2ac3ec7845eb44ddc12a1796d | MIT | [
"LICENSE"
] | 31,946 |
2.4 | qiskit-scaleway | 0.3.8 | A Qiskit package to connect to Scaleway Quantum as a Service | # Scaleway provider for Qiskit
**Qiskit Scaleway** is a Python package to run quantum circuits on [Scaleway](https://www.scaleway.com/en/) infrastructure, providing access to:
- [AQT](https://www.aqt.eu/) ion-trapped quantum computers
- [IQM](https://meetiqm.com/) superconducting quantum computers
- [Aer](https://github.com/Qiskit/qiskit-aer) state vector and tensor network multi-GPU emulators
- [Qsim](https://github.com/quantumlib/qsim) NISQ emulators
- [CUDA-Q](https://developer.nvidia.com/cuda-q) emulators by NVIDIA
To run circuits over [Quandela](https://www.quandela.com/) backends provided by Scaleway, you must use [Perceval SDK](https://perceval.quandela.net/) through the [Scaleway provider](https://perceval.quandela.net/docs/providers.html).
More info on the [Quantum service web page](https://www.scaleway.com/en/quantum-as-a-service/).
## Installation
We encourage installing Scaleway provider via the pip tool (a Python package manager):
```bash
pip install qiskit-scaleway
```
## Getting started
To instantiate the `ScalewayProvider`, you need an access token and a `project_id`:
```python
from qiskit import QuantumCircuit
from qiskit_scaleway import ScalewayProvider
provider = ScalewayProvider(
project_id="<your-scaleway-project-id>",
secret_key="<your-scaleway-secret-key>",
)
```
Alternatively, the `ScalewayProvider` can discover your credentials from environment variables:
```bash
export QISKIT_SCALEWAY_PROJECT_ID="project_id"
export QISKIT_SCALEWAY_SECRET_KEY="token"
```
Then you can instantiate the provider without any arguments:
```python
from qiskit import QuantumCircuit
from qiskit_scaleway import ScalewayProvider
provider = ScalewayProvider()
```
Now you can have access to the supported backends:
```python
# List all operational backends
backends = provider.backends(operational=True)
print(backends)
# List all backends with a minimum number of qubits
backends = provider.backends(min_num_qubits=35)
print(backends)
# Retrieve a backend by providing search criteria. The search must have a single match
backend = provider.get_backend("EMU-AER-H100") # Or any gate-based compatible QPU
```
Define a quantum circuit and run it
```python
# Define a quantum circuit that produces a 4-qubit GHZ state.
qc = QuantumCircuit(4)
qc.h(0)
qc.cx(0, 1)
qc.cx(0, 2)
qc.cx(0, 3)
qc.measure_all()
## DO NOT USE TRANSPILATION
## Transpilation is done server side on QaaS service
# Create and send a job to a new QPU's session (or on an existing one)
# Custom noise models are also supported
result = backend.run(qc, method="statevector", shots=1000).result()
if result.success:
print(result.get_counts())
else:
print(result.to_dict()["error"])
```
## Development
This repository is at its early stage and is still in active development. If you are looking for a way to contribute please read [CONTRIBUTING.md](CONTRIBUTING.md).
## Reach us
We love feedback. Feel free to reach us on the [Scaleway Slack community](https://slack.scaleway.com/); we are waiting for you on [#opensource](https://scaleway-community.slack.com/app_redirect?channel=opensource).
## License
[License Apache 2.0](LICENSE)
| text/markdown | The Scaleway Developers | vmacheret@scaleway.com | null | null | Apache 2 | null | [] | [] | null | null | >=3.12.0 | [] | [] | [] | [
"qiskit~=2.1",
"qiskit-aer~=0.17",
"randomname>=0.2.1",
"dataclasses-json>=0.6.4",
"dataclasses>=0.6",
"scaleway-qaas-client>=0.1.23",
"qio>=0.1.17"
] | [] | [] | [] | [
"Documentation, https://www.scaleway.com/en/quantum-as-a-service/",
"Source, https://github.com/scaleway/qiskit-scaleway",
"Tracker, https://github.com/scaleway/qiskit-scaleway/issues"
] | twine/6.2.0 CPython/3.9.25 | 2026-02-20T11:15:49.642248 | qiskit_scaleway-0.3.8.tar.gz | 20,998 | 7f/83/4884782cf0c484d5fbfc27498f4ce3b55fa92e0da2952704068a7d9886ef/qiskit_scaleway-0.3.8.tar.gz | source | sdist | null | false | bd810db194b099e3f05420362d74c6b7 | ceda7bcc4ccb1d672e6249805bcde6c349b7dfb57adaa33ff71dec600ec0b4d0 | 7f834884782cf0c484d5fbfc27498f4ce3b55fa92e0da2952704068a7d9886ef | null | [
"LICENSE"
] | 216 |
2.4 | epi-recorder | 2.6.0 | Verifiable execution evidence for AI systems. Portable, cryptographically signed artifacts. | <p align="center">
<img src="https://raw.githubusercontent.com/mohdibrahimaiml/epi-recorder/main/docs/assets/logo.png" alt="EPI Logo" width="180"/>
<br>
<h1 align="center">EPI</h1>
<p align="center"><strong>Execution evidence system for AI agents</strong></p>
<p align="center">
<em>Capture, seal, and verify every decision your agents make</em>
</p>
</p>
<p align="center">
<a href="https://pypi.org/project/epi-recorder/"><img src="https://img.shields.io/pypi/v/epi-recorder?style=flat-square&label=pypi&color=0073b7" alt="PyPI Version"/></a>
<a href="https://pepy.tech/project/epi-recorder"><img src="https://img.shields.io/pepy/dt/epi-recorder?style=flat-square&label=downloads&color=0073b7" alt="Downloads"/></a>
<a href="https://github.com/mohdibrahimaiml/epi-recorder"><img src="https://img.shields.io/pypi/pyversions/epi-recorder?style=flat-square&color=0073b7" alt="Python"/></a>
<a href="LICENSE"><img src="https://img.shields.io/github/license/mohdibrahimaiml/epi-recorder?style=flat-square&color=0073b7" alt="License"/></a>
<a href="https://github.com/mohdibrahimaiml/epi-recorder/stargazers"><img src="https://img.shields.io/github/stars/mohdibrahimaiml/epi-recorder?style=flat-square&color=0073b7" alt="Stars"/></a>
</p>
---
## What is EPI?
EPI is a **file format and recorder** that turns agent execution into durable, verifiable artifacts.
An `.epi` file is a **flight recorder for AI systems** — it captures every decision, tool call, and state transition, sealed with cryptographic signatures. No cloud dependency. No vendor lock-in. Works offline forever.
**Core guarantees:**
- **Capture once, inspect forever** — self-contained artifacts with embedded viewer
- **Complete execution history** — prompts, responses, state, timestamps, costs
- **Tamper-evident proof** — Ed25519 signatures for compliance and audits
- **Replay production failures** — debug locally with full context
---
## Architecture
```mermaid
flowchart LR
A[Agent Code] -->|"record()"| B(Capture Layer)
B -->|"Wrapper/API"| D[Recorder]
subgraph "Crash-Safe Storage"
D -->|"Atomic Write"| E[(SQLite WAL)]
end
E -->|Finalize| F[Packer]
K[Private Key] -->|"Ed25519 Sign"| F
F -->|ZIP| G[agent.epi]
style E fill:#f9f,stroke:#333
style G fill:#9f9,stroke:#333
```
**Design principles:**
1. **Crash-safe** — SQLite WAL ensures no data loss, even if agents crash mid-execution
2. **Explicit capture** — evidence is intentional and reviewable in code
3. **Cryptographic proof** — Ed25519 signatures that can't be forged or backdated
4. **Offline-first** — no cloud dependency; works in air-gapped environments
---
## Quick Start
### 1. Install
```bash
pip install epi-recorder
```
### 2. Record an Agent Run
```python
from epi_recorder import record, wrap_openai
from openai import OpenAI
client = wrap_openai(OpenAI())
with record("my_agent.epi"):
response = client.chat.completions.create(
model="gpt-4",
messages=[{"role": "user", "content": "Plan a trip to Tokyo"}]
)
```
That's it. EPI captures the full prompt and response, token usage and cost, timestamps and model info, and a complete environment snapshot.
### 3. Inspect the Results
```bash
epi view my_agent.epi # Opens in browser — no login, no cloud
epi verify my_agent.epi # Verify cryptographic integrity
```
### 4. Analyze Performance Across Runs
```python
from epi_recorder import AgentAnalytics
analytics = AgentAnalytics("./production_runs")
summary = analytics.performance_summary()
print(f"Success rate: {summary['success_rate']:.1%}")
print(f"Avg cost: ${summary['avg_cost_per_run']:.3f}")
print(f"Most common error: {summary['top_errors'][0]}")
analytics.generate_report("dashboard.html")
```
**[Full Documentation →](docs/)** · **[Example .epi File →](examples/demo_agent.epi)**
---
## The Problem EPI Solves
Production agents fail in ways traditional logging can't capture.
**Scenario:** A LangGraph agent processes 47 steps overnight. Step 31 makes a bad decision that cascades into failure. CloudWatch logs expired. You have no idea what the agent was "thinking."
| Traditional Logs | EPI Artifacts |
|:-----------------|:--------------|
| Expire after retention period | Persist forever as files |
| Missing agent state and reasoning | Complete checkpoint history |
| Can't replay locally | Full local replay with Ollama |
| No cryptographic proof | Ed25519 signatures for audits |
**Real incident:** An AutoGen agent approved a $12,000 refund instead of $120. With EPI, the team opened the `.epi` file, found the OCR preprocessing bug at step 17, and fixed it in 15 minutes. The signed artifact served as compliance evidence.
---
## Supported Providers
| Provider | Integration |
|:---------|:------------|
| OpenAI | `wrap_openai()` wrapper or explicit API |
| Anthropic | `wrap_anthropic()` wrapper or explicit API |
| Google Gemini | Explicit API |
| Ollama (local) | `wrap_openai()` with local endpoint |
| Any HTTP LLM | `log_llm_call()` explicit API |
EPI is provider-agnostic. The explicit API works with any response format.
---
## Key Features
### Async Support
Non-blocking I/O for LangGraph, AutoGen, and async-first frameworks:
```python
async with record("agent.epi") as epi:
response = await async_client.chat.completions.create(...)
await epi.alog_step("custom.event", {"reasoning": "..."})
```
### Local LLM Support
Record against Ollama for free, unlimited development:
```python
client = wrap_openai(OpenAI(
base_url="http://localhost:11434/v1",
api_key="ollama"
))
with record("test.epi"):
response = client.chat.completions.create(
model="deepseek-r1:7b",
messages=[{"role": "user", "content": "Debug this code..."}]
)
```
### LangGraph Checkpoint Integration
Native checkpoint saver for LangGraph state management:
```python
from langgraph.graph import StateGraph
from epi_recorder.integrations import EPICheckpointSaver
graph = StateGraph(AgentState)
checkpointer = EPICheckpointSaver("my_agent.epi")
result = graph.invoke(
{"messages": [HumanMessage(content="...")]},
{"configurable": {"thread_id": "user_123"}},
checkpointer=checkpointer
)
```
Captures all state transitions, checkpoint metadata, agent decision points, and handles large states (>1MB) via hashing.
### Agent Analytics
Track performance across hundreds of runs:
```python
analytics = AgentAnalytics("./production_runs")
summary = analytics.performance_summary()
analytics.generate_report("performance.html")
```
Provides success rate trends, cost analysis, error pattern detection, tool usage distribution, and period-to-period comparisons.
---
## Why EPI vs. Alternatives
EPI is not an observability dashboard. It's a **durable execution artifact system.**
Dashboards give you live metrics. EPI gives you portable, offline-verifiable records that last forever.
| Feature | **EPI** | LangSmith | Arize | W&B |
|:--------|:--------|:----------|:------|:----|
| **Offline-first** | Works without internet | Cloud required | Cloud required | Cloud required |
| **Agent state capture** | Full checkpoints (LangGraph native) | Traces only | Predictions only | Experiments only |
| **Cryptographic proof** | Ed25519 signatures | None | None | None |
| **Format lock-in** | Open spec (`.epi` format) | Proprietary API | Proprietary | Proprietary |
| **Compliance-ready** | EU AI Act, FDA, litigation | Limited | Limited | Not designed |
| **Local LLM support** | Ollama, llama.cpp | Cloud only | Cloud only | Cloud only |
| **Cost** | Free (open source) | $99+/mo | Custom pricing | $50+/mo |
| **Data privacy** | Self-hosted, offline | Cloud-dependent | Cloud-dependent | Cloud-dependent |
**EPI complements these tools** — use both for complete agent observability.
---
## The `.epi` Artifact Format
An `.epi` file is a self-contained ZIP archive with a defined structure:
```
my_agent.epi
├── mimetype # "application/epi+zip"
├── manifest.json # Metadata + Ed25519 signature
├── steps.jsonl # Execution timeline (NDJSON)
├── env.json # Runtime environment snapshot
└── viewer/
└── index.html # Self-contained offline viewer
```
**Properties:** self-contained (no external dependencies), universally viewable (opens in any browser), tamper-evident (Ed25519 signatures), and durable (works offline forever).
See **[EPI Specification](docs/EPI-SPEC.md)** for technical details.
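As a rough illustration, such an archive can be opened with nothing but the Python standard library (a minimal sketch based on the layout above; the exact manifest fields are defined in the spec):
```python
import json
import zipfile

# An .epi artifact is a plain ZIP archive, so the standard library is enough to peek inside.
with zipfile.ZipFile("my_agent.epi") as epi:
    print(epi.namelist())  # mimetype, manifest.json, steps.jsonl, env.json, viewer/...

    manifest = json.loads(epi.read("manifest.json"))  # metadata + signature
    print(sorted(manifest.keys()))

    # steps.jsonl is newline-delimited JSON: one execution step per line.
    with epi.open("steps.jsonl") as fh:
        steps = [json.loads(line) for line in fh if line.strip()]
    print(f"{len(steps)} recorded steps")
```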
---
## Cryptographic Properties
| Property | Implementation |
|:---------|:---------------|
| **Signatures** | Ed25519 (RFC 8032) |
| **Hashing** | SHA-256 content addressing |
| **Key Storage** | Local keyring, user-controlled |
| **Verification** | Client-side, zero external dependencies |
Signatures are optional but recommended. Unsigned artifacts are valid but can't prove origin.
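For reference, the Ed25519 primitive itself (shown here with the widely used `cryptography` package) works roughly as sketched below; this is a generic illustration of sign/verify, not EPI's actual key handling or manifest layout:
```python
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# Generate a keypair; EPI itself stores keys in a local, user-controlled keyring.
private_key = Ed25519PrivateKey.generate()
public_key = private_key.public_key()

payload = b"canonical manifest bytes"  # hypothetical payload for illustration
signature = private_key.sign(payload)  # 64-byte Ed25519 signature

try:
    public_key.verify(signature, payload)
    print("signature valid")
except InvalidSignature:
    print("payload or signature was tampered with")
```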
---
## Use Cases
### Developer Workflow
- Debug multi-step agent failures with full decision tree visibility
- A/B test prompts and models with side-by-side run comparison
- Track agent performance over time (success rates, costs, errors)
- Replay production failures locally with Ollama or real LLMs
- Share `.epi` files with teammates for collaborative debugging
### Enterprise Compliance
- Audit trails for regulators (EU AI Act, FDA, SEC)
- Litigation-grade evidence with cryptographic signatures
- Data governance with PII redaction and retention policies
- On-premises deployment for air-gapped environments
### Works With
LangGraph · LangChain · AutoGen · CrewAI · Custom frameworks · Any Python agent
---
## CLI Reference
| Command | Purpose |
|:--------|:--------|
| `epi run <script.py>` | Record execution to `.epi` |
| `epi verify <file.epi>` | Verify integrity and signature |
| `epi view <file.epi>` | Open in browser viewer |
| `epi keys list` | Manage signing keys |
| `epi debug <file.epi>` | Heuristic analysis |
| `epi chat <file.epi>` | Natural language querying |
See **[CLI Reference](docs/CLI.md)** for full documentation.
---
## Roadmap
**Current (v2.6.0):**
- Framework-native integrations (LiteLLM, LangChain, OpenTelemetry)
- CI/CD verification (GitHub Action, pytest plugin)
- OpenAI streaming support
- Global install for automatic recording
- Capture, verify, and replay agent runs
**Next:**
- Time-travel debugging (step through any past run)
- Team collaboration features
- Managed cloud platform (optional)
---
## Release History
| Version | Date | Highlights |
|:--------|:-----|:-----------|
| **2.6.0** | 2026-02-20 | LiteLLM, LangChain, OpenTelemetry, pytest plugin, GitHub Action, streaming |
| **2.5.0** | 2026-02-13 | Anthropic Claude wrapper, path resolution fix |
| **2.4.0** | 2026-02-12 | Agent Analytics, async/await, LangGraph, Ollama |
| **2.3.0** | 2026-02-06 | Explicit API, wrapper clients |
| **2.2.0** | 2026-01-30 | SQLite WAL, async support, thread safety |
| **2.1.3** | 2026-01-24 | Google Gemini support |
See **[CHANGELOG.md](./CHANGELOG.md)** for detailed release notes.
---
## Documentation
| Document | Description |
|:---------|:------------|
| **[EPI Specification](docs/EPI-SPEC.md)** | Technical specification for `.epi` format |
| **[CLI Reference](docs/CLI.md)** | Command-line interface documentation |
| **[CHANGELOG](CHANGELOG.md)** | Release notes |
| **[Contributing](CONTRIBUTING.md)** | Contribution guidelines |
| **[Security](SECURITY.md)** | Security policy and vulnerability reporting |
---
## Beta Program
We're looking for teams running agents in production.
**You get:** priority support, free forever, custom integrations.
**[Apply for Beta Access →](https://www.epilabs.org/contact.html)**
---
## Contributing
```bash
git clone https://github.com/mohdibrahimaiml/epi-recorder.git
cd epi-recorder
pip install -e ".[dev]"
pytest
```
See **[CONTRIBUTING.md](./CONTRIBUTING.md)** for guidelines.
---
## Traction
**6,500+ downloads** in 10 weeks · **v2.6.0** shipped Feb 2026
> *"EPI saved us 4 hours debugging a production agent failure."*
> — ML Engineer, Fintech
> *"The LangGraph integration is killer. Zero config."*
> — AI Platform Team Lead
---
## License
MIT License. See **[LICENSE](./LICENSE)**.
<p align="center">
<strong>Built by <a href="https://epilabs.org">EPI Labs</a></strong><br>
<em>Making AI agent execution verifiable.</em>
</p>
| text/markdown | null | EPI Labs <mohdibrahim@epilabs.org> | null | Mohd Ibrahim Afridi <mohdibrahim@epilabs.org> | MIT | evidence, forensics, audit, compliance, cryptography, ai, llm, verification, artifact, execution-trace, reproducibility, tamper-evident | [
"Development Status :: 4 - Beta",
"Intended Audience :: Developers",
"Intended Audience :: Legal Industry",
"Intended Audience :: Science/Research",
"License :: OSI Approved :: MIT License",
"Operating System :: OS Independent",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Topic :: Scientific/Engineering :: Artificial Intelligence",
"Topic :: Security :: Cryptography",
"Topic :: System :: Logging",
"Typing :: Typed",
"Framework :: Pydantic",
"Framework :: Pydantic :: 2"
] | [] | null | null | >=3.11 | [] | [] | [] | [
"pydantic>=2.0.0",
"cryptography>=41.0.0",
"cbor2>=5.6.0",
"typer[all]>=0.12.0",
"rich>=13.0.0",
"google-generativeai>=0.4.0",
"pytest>=8.0.0; extra == \"dev\"",
"pytest-asyncio>=0.23.0; extra == \"dev\"",
"pytest-cov>=4.1.0; extra == \"dev\"",
"black>=24.0.0; extra == \"dev\"",
"ruff>=0.3.0; extra == \"dev\""
] | [] | [] | [] | [
"Homepage, https://epilabs.org",
"Documentation, https://epilabs.org/docs",
"Repository, https://github.com/mohdibrahimaiml/epi-recorder",
"Issues, https://github.com/mohdibrahimaiml/epi-recorder/issues",
"Discussions, https://github.com/mohdibrahimaiml/epi-recorder/discussions"
] | twine/6.2.0 CPython/3.11.9 | 2026-02-20T11:15:21.445914 | epi_recorder-2.6.0.tar.gz | 153,896 | a1/1f/7f454564ec3e93433dc6792d54e5ca158a3b1c1b05ca669e2ab989b53600/epi_recorder-2.6.0.tar.gz | source | sdist | null | false | 082bb8b339d8029c1321782c0aa63046 | 8ff4d1e951264374880341bcdcfff49595e4edf86421da4cb3bd01480c8a8d1b | a11f7f454564ec3e93433dc6792d54e5ca158a3b1c1b05ca669e2ab989b53600 | null | [
"LICENSE"
] | 214 |
2.4 | fujitsu-quantum | 2.2.2 | Fujitsu Quantum Cloud SDK | An official SDK for Fujitsu Quantum Cloud.
| text/markdown | Fujitsu Limited | null | null | null | null | null | [] | [] | null | null | >=3.10 | [] | [] | [] | [
"boto3~=1.36",
"cryptography~=44.0",
"filelock~=3.17",
"grpcio~=1.70",
"protobuf~=5.29",
"pycognito~=2024.5",
"pyjson5~=1.6",
"requests~=2.32"
] | [] | [] | [] | [] | uv/0.8.10 | 2026-02-20T11:14:52.339208 | fujitsu_quantum-2.2.2-py3-none-any.whl | 49,146 | 5d/e2/400b3c6e0ba6c49bd9e3d77e36afd8f2e7e5193c35edf90c02a2c42d55ab/fujitsu_quantum-2.2.2-py3-none-any.whl | py3 | bdist_wheel | null | false | 2f1e64535f76e6fa980748465b024152 | 0f6bd191e460b9719d12b1638ef62a8d3824564d2d384cac72cb9c7c82e6c78f | 5de2400b3c6e0ba6c49bd9e3d77e36afd8f2e7e5193c35edf90c02a2c42d55ab | null | [] | 100 |
2.4 | tarsio | 0.5.0 | Fast Tars/JCE protocol implementation for Python, powered by Rust | # Tarsio
[](https://www.python.org/downloads/)
[](LICENSE)
[](https://github.com/astral-sh/ruff)
[](https://L-1124.github.io/Tarsio/)
**Tarsio** is a high-performance Python Tars (JCE) protocol library driven by a Rust core.
## Core Features
* 🚀 **High performance**: Core encoding/decoding is implemented in Rust and is 10-50x faster than pure-Python implementations.
* 🛡️ **Type safety**: Schemas are defined with standard Python type annotations, with support for explicit and implicit tags.
* ✨ **Declarative validation**: Supports `Meta` metadata constraints that are checked automatically during deserialization.
* 🧩 **Flexible modes**: Supports both strongly typed `Struct` schemas and schema-less `dict` (raw) mode.
## Quick Start
```python
from typing import Annotated
from tarsio import Struct, field, Meta, encode, decode
# 1. Define the schema
class User(Struct):
    # explicit tag
id: int = field(tag=0)
    # no explicit tag: assigned automatically in order
name: str
    # Annotated adds constraints; the tag is still set via field
groups: Annotated[list[str], Meta(min_len=1)] = field(tag=2, default_factory=list)
# 2. Create an object
alice = User(id=1001, name="Alice", groups=["admin", "dev"])
print(alice)
# > User(id=1001, name='Alice', groups=['admin', 'dev'])
# 3. Encode
data = encode(alice)
print(data.hex())
# 4. Decode
user = decode(User, data)
assert user == alice
```
## Documentation
Full documentation is available at [https://L-1124.github.io/Tarsio/](https://L-1124.github.io/Tarsio/).
## License
MIT
| text/markdown | null | l-1124 <68656403+L-1124@users.noreply.github.com> | null | null | MIT | binary, jce, protocol, serialization, struct, tars | [
"Development Status :: 4 - Beta",
"Intended Audience :: Developers",
"License :: OSI Approved :: MIT License",
"Operating System :: OS Independent",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: 3.13",
"Topic :: Software Development :: Libraries :: Python Modules",
"Topic :: System :: Networking",
"Typing :: Typed"
] | [] | null | null | >=3.10 | [] | [] | [] | [
"typing-extensions>=4.15.0",
"click>=8.0; extra == \"cli\"",
"rich>=14.2.0; extra == \"cli\""
] | [] | [] | [] | [
"Homepage, https://github.com/L-1124/Tarsio",
"Issues, https://github.com/L-1124/Tarsio/issues",
"Repository, https://github.com/L-1124/Tarsio"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-20T11:14:40.619378 | tarsio-0.5.0.tar.gz | 171,644 | 09/b2/43949e2b66026b8f83f651736b47e56a571cf05d59d7f74f758f4fde6798/tarsio-0.5.0.tar.gz | source | sdist | null | false | 0b24326df2662ddbc34223dbb9af04e3 | 8c25b136ede400a1d2bc9e5293875fb4ea546b2955e5eccf8cc62b7438c4a5fb | 09b243949e2b66026b8f83f651736b47e56a571cf05d59d7f74f758f4fde6798 | null | [
"LICENSE"
] | 2,241 |
2.4 | malac-hd | 1.4.0 | Mapping Language Compiler for Health Data. | # MaLaC-HD
[](https://pypi.org/project/malac-hd/)
[](https://gitlab.com/cdehealth/malac-hd/)
[](https://pepy.tech/projects/malac-hd)
[](https://gitlab.com/cdehealth/malac-hd/-/graphs/main/charts)
[](https://gitlab.com/cdehealth/malac-hd/-/blob/main/LICENSE)
[](https://gitlab.com/cdehealth/malac-hd/-/pipelines?page=1&scope=branches&ref=main)
MaLaC-HD (MApping LAnguage Compiler for Health Data) is a tool that converts mappings between different health data formats into executable code. It can also be used as a library to execute mappings dynamically.
[TOC]
## Contributing and Support
Please read [CONTRIBUTING.md](CONTRIBUTING.md) for contributing to cdeHealth projects which are hosted in the [cdeHealth group](https://gitlab.com/cdehealth) on GitLab.com.
Please read [CODE_OF_CONDUCT.md](CODE_OF_CONDUCT.md) to make participation in our community a harassment free experience for everyone.
## Getting Started
These instructions will get you a copy of the project up and running.
### Installation
Install malac-hd and its dependencies via pip, e.g. with CDA support:
```
pip install malac-hd[cda]
```
... or, if you want to build from source:
```
git clone https://gitlab.com/cdehealth/malac/models.git
pip install -e models/fhir
git clone https://gitlab.com/cdehealth/malac/utils.git
pip install -e utils
git clone https://gitlab.com/cdehealth/malac/transformer.git
pip install -e transformer/fhir
git clone https://gitlab.com/cdehealth/malac-hd.git
pip install -e malac-hd
```
... and if you need to transform to or from cda:
```
git clone https://gitlab.com/cdehealth/malac/models.git
pip install -e models/cda
```
### Using MaLaC-HD
with `malac-hd --help` you will get an overview:
```
____________________ MaLaC-HD 1.4.0 started ____________________
usage: malac-hd [-h] -m MAPPING [-co CONVERT_OUTPUT] [-ti TRANSFORM_INPUT] [-to TRANSFORM_OUTPUT] [-s]
You are using the MApping LAnguage Compiler for Health Data, short MaLaC-HD.
We differentiate between two modes, CONVERTING and TRANSFORMING.
The CONVERSION is done by compiling a given mapping to python code, that itself can be run with its own argument handling for TRANSFORMING input files.
Additionally, the TRANSFORMATION can also be be done by MaLaC-HD directly after CONVERSION, i.e. for direct testing purposes.
options:
-h, --help show this help message and exit
-m MAPPING, --mapping MAPPING
the mapping file path, the conversion/mapping rule language is detected by file ending, right now FML maps (*.map), FHIR R4 (*.4.fhir.xml) StructureMaps and ConceptMaps can be given as mappings
-co CONVERT_OUTPUT, --convert_output CONVERT_OUTPUT
the conversion python file path, if not given, saved in the working directory with the map-file name
-ti TRANSFORM_INPUT, --transform_input TRANSFORM_INPUT
the transformation input file path, the ressource type is detected by its root node inside the xml
-to TRANSFORM_OUTPUT, --transform_output TRANSFORM_OUTPUT
the transformation output file path, the ressource type is detected by its root node inside the xml
-s, --silent do not print the converted python mapping to console
```
#### Example Usage
Convert an FML map to Python and directly execute it:
```
malac-hd -m tests/fml/r4/aut_lab/CdaToBundle.4.map -co cdaToBundle.py -ti tests/structuremap/r4/aut_lab/Lab_Allgemeiner_Laborbefund.at.cda.xml -to bundle.4.fhir.xml
```
Execute the generated Python code:
```
python cdaToBundle.py -s tests/structuremap/r4/aut_lab/Lab_Allgemeiner_Laborbefund.at.cda.xml -t bundle.4.fhir.xml
```
The input files for the example are included in the source.
## Why?
Wherever systems operate in parallel in one form or another, one requirement becomes challenging in the context of standardized APIs: mapping data so that it is accessible to the different worlds involved.
There are many methods to map from one semi-structured data format to another. Most of the time, when a mapping requirement surfaces, it is solved by a quick, small script. After some time, that script needs technical or rule updates. As the rules are coded directly, they can only be altered by someone with knowledge of the mapping content, the mapping rules and the mapping technology.
To separate these concerns, HL7 FHIR created resources for mapping purposes, like the [StructureMap](https://www.hl7.org/fhir/structuremap.html) for [transformation](https://www.hl7.org/fhir/structuremap-operation-transform.html) and the [ConceptMap](https://www.hl7.org/fhir/conceptmap.html) for [translation](https://www.hl7.org/fhir/conceptmap-operation-translate.html). Additionally, a more human-readable metalanguage for creating such StructureMap and ConceptMap resources was created, called the [FHIR Mapping Language, or in short FML](https://www.hl7.org/fhir/mapping-language.html). Interestingly enough, FML is not restricted to FHIR as a source or target for the mapping. It could also be used, for example, to map from HL7 CDA to OHDSI OMOP CDM.
Current FML or StructureMap/ConceptMap tools are quite slow in the transformation/translation of the content to be mapped, as both steps, the processing of the rules and the transformation/translation itself, are performed every time. As a gold standard and sparring partner for this project, the Canadian open source project [HAPI](https://github.com/hapifhir) with its Swiss extension [matchbox](https://www.matchbox.health/) has been used. Additionally, we heard of an Italian project that speeds up matchbox, but could not find any public references.
We cannot use current tools to map data synchronously, as nearly all mapping executions need more time than the [Doherty threshold of 0.4 seconds](https://lawsofux.com/doherty-threshold/), resulting in the users' perception that "it got stuck". Also, the practical but restricted FHIR Mapping Language does not offer any possibility to extend it with further functions, in contrast to the StructureMap and ConceptMap resources themselves, which could easily be extended.
## How?
Focusing on but not restricting to the health information technology world, a partnership was created out of the experts in the CDA2FHIR HL7 Austria Working Group, to solve this issue. On one side with expertise in research and on the other side with expertise in EHRs.
By applying the MVP concept of build-measure-learn-repeat, first versions were created and tested against already existing FML-mappings in the HL7 community and matchbox as a mapping tool.
As the partners are themselves heavy users of the resulting MVPs, using them immediately after release, quick iterations of four months are planned.
## What?
MaLaC-HD is intended to focus on transformation speed and easy extensibility after compilation/conversion. This is achieved by separation of
* the processing from FML to StructureMap/ConceptMap,
* the conversion from StructureMap/ConceptMap to python code,
* the execution of the translation itself from source to target.
It uses the mappings and requirements of some preliminary projects (e.g. https://github.com/HL7Austria/CDA2FHIR) as testing data.
MaLaC-HD is not limited to the use of FML or StructureMap/ConceptMap for the translation, but already implements a conversion from StructureMaps/ConceptMaps to Python code. It can be used to transform/translate different input formats to different output formats, using some conversion/mapping rule language and the XSD or JSON schemas of the input and output formats. It also makes it easier to develop new conversion or mapping rule languages, further input formats and/or extensions.
## Detailed Workflow
The processing and conversion from FML/StructureMap/ConceptMap to Python code is handled in different components of MaLaC-HD. The following graph shows which components are responsible and in what order they are called.
[{: style="width: 100%"}](images/workflow.drawio.svg)
Our pipeline is able to directly process FML code, but can also convert StructureMap or ConceptMap resources to Python. For the FML to StructureMap/ConceptMap conversion and the ConceptMap to Python conversion, individual generators/parsers for the supported FHIR versions exist. Even though they share a lot of code, they contain specific logic for each FHIR version. For the StructureMap to Python conversion, only a generator for the latest FHIR version exists. Legacy StructureMaps are internally transformed to the latest version, to ensure backwards compatibility. As this is the most complex step in the conversion, this allows us to simplify things by only having a single codebase.
## Pursued Objectives
The development of MaLaC-HD focuses on the following three main objectives and their respective sub-objectives:
- The compilation of the conversion/mapping rules by MaLaC-HD and the resulting mapping of these rules in a common programming language must be easily readable and directly related to the conversion/mapping rules.
- The conversion/mapping rules must be ballotable so that they can be queried and discussed for completeness and correctness as part of a guideline or even as a standalone part in a community, such as the HL7 community.
- MaLaC-HD must not be dependent on any conversion rule language in order to be able to easily process conversion/mapping rules that can be mapped in different conversion rule languages.
- In particular, the limits of the respective conversion rule language require simple narrative extensibility in order to draw attention to the fact that further code must be added manually in the respective generated code by means of placeholders.
- The result of compiling the conversion/mapping rules must be trimmed for speed and stability so as not to add any obstructive delays.
- The compiled conversion/mapping rules should be executable as stand-alone program code without MaLaC-HD.
## Timeline
[{: style="width: 100%"}](images/timeline.png)
## Authors and acknowledgment
We want to thank
- [ELGA GmbH](https://www.elga.gv.at/) with their [CDA2FHIR](https://collab.hl7.at/display/BAL/AG+ELGA+CDA+Laborbefund+zu+FHIR) projects and
- [AIT Austrian Institute of Technology GmbH](https://www.ait.ac.at/) with their [SmartFOX](https://www.smart-fox.at/) project.
Additionally, we want to thank
- [Dave Kuhlman](https://davekuhlman.org/) with his open source implementation of [generateDS](https://www.davekuhlman.org/generateDS.html), making quick serializations of new data structures from their XSD schemas possible.
## License
This is an LGPL-licensed project, with a small addition: any usage of this project or of its results should contain a malac icon that is visible to the consumer. Multiple versions of the malac icon can be found in [images](images/). Changing the color of any malac_simple version is allowed, as long as the icon itself remains visible on your background.
| text/markdown | null | cdeHealth-Team <contact-project+cdehealth-malac-hd-52276676-issue-@incoming.gitlab.com> | null | null | null | null | [
"Development Status :: 5 - Production/Stable",
"Environment :: Console",
"Intended Audience :: Information Technology",
"Intended Audience :: Healthcare Industry",
"Intended Audience :: Developers",
"Intended Audience :: Science/Research",
"License :: OSI Approved :: GNU Lesser General Public License v3 (LGPLv3)",
"Natural Language :: English",
"Operating System :: OS Independent",
"Programming Language :: Python",
"Topic :: Software Development :: Code Generators",
"Topic :: Software Development :: Libraries"
] | [] | null | null | >=3.10 | [] | [] | [] | [
"malac-utils>=1.3.1",
"malac-models-fhir>=1.1.4",
"malac-models-cda>=1.1.0",
"malac-transformer-fhir>=1.1.2",
"antlr4-python3-runtime>=4.13.0",
"antlr4-tools>=0.2.1",
"malac-models-cda>=1.1.0; extra == \"cda\"",
"pytest>=7.0.0; extra == \"test\""
] | [] | [] | [] | [
"Homepage, https://gitlab.com/cdehealth/malac-hd",
"Documentation, https://gitlab.com/cdehealth/malac-hd",
"Release notes, https://gitlab.com/cdehealth/malac-hd/-/releases",
"Source, https://gitlab.com/cdehealth/malac-hd",
"Tracker, https://gitlab.com/cdehealth/malac-hd/-/issues"
] | twine/6.2.0 CPython/3.14.3 | 2026-02-20T11:14:38.525629 | malac_hd-1.4.0.tar.gz | 164,599 | 44/ff/8c783efe40c3afbb37e397a1107e1d0b649ec4eeecf55cd500cc9ee50a77/malac_hd-1.4.0.tar.gz | source | sdist | null | false | 83a90bae12fd96eb5a6e3698273e945e | a9cef9de389a2e2a17d0a03c41720eb6b4be2b027052b2f396653ec4250dae11 | 44ff8c783efe40c3afbb37e397a1107e1d0b649ec4eeecf55cd500cc9ee50a77 | null | [
"LICENSE"
] | 221 |
2.4 | ddev | 14.3.1 | The Datadog Agent integration developer tool | # ddev
| | |
| --- | --- |
| Package | [](https://pypi.org/project/ddev/) [](https://pypi.org/project/ddev/) |
| Meta | [](https://github.com/psf/black) [](https://github.com/python/mypy) [](https://github.com/pycqa/isort) [](https://spdx.org/licenses/) |
-----
This is the redesigned command line tooling for Datadog Agent integrations.
## Installation
Read the [`ddev` installation guide](https://datadoghq.dev/integrations-core/setup/#ddev).
## Documentation
Read the [`ddev` documentation](https://datadoghq.dev/integrations-core/ddev/about/).
| text/markdown | null | Datadog <packages@datadoghq.com> | null | null | null | agent, datadog, integration | [
"Development Status :: 5 - Production/Stable",
"Intended Audience :: Developers",
"Intended Audience :: System Administrators",
"License :: OSI Approved :: BSD License",
"Natural Language :: English",
"Operating System :: OS Independent",
"Programming Language :: Python :: 3.13"
] | [] | null | null | >=3.9 | [] | [] | [] | [
"anthropic>=0.18.0",
"click~=8.1.6",
"coverage",
"datadog",
"datadog-api-client==2.20.0",
"datadog-checks-dev[cli]~=35.6",
"hatch>=1.8.1",
"httpx",
"jsonpointer",
"matplotlib",
"pluggy",
"requests",
"rich>=12.5.1",
"squarify",
"stamina==23.2.0",
"tomli-w",
"tomli; python_version < \"3.11\"",
"tomlkit",
"tqdm"
] | [] | [] | [] | [
"Source, https://github.com/DataDog/integrations-core"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-20T11:12:34.814153 | ddev-14.3.1.tar.gz | 162,072 | 17/5d/d179cea1932deed1f5211c6a4c8e3125f24a41ce1cc4f579b36a3ae442f1/ddev-14.3.1.tar.gz | source | sdist | null | false | d4a2cff0166fbf0d95ded33afa337e93 | 3b66926572b8a77c208858439ad8aecb10d2065f4e50b95d5502becd7daddb50 | 175dd179cea1932deed1f5211c6a4c8e3125f24a41ce1cc4f579b36a3ae442f1 | BSD-3-Clause | [] | 1,046 |
2.4 | mcd-stitcher | 2.1.1.post1 | MCD to single tiffs, stitched tiff and tiff tools | # MCD STITCHER
[](https://pypi.org/project/mcd_stitcher/)
[](https://www.python.org/downloads/)
[](https://pypistats.org/packages/mcd-stitcher)
[](https://github.com/PawanChaurasia/mcd_stitcher/blob/main/LICENSE)
**MCD Stitcher** is a high-performance Python package designed to streamline the processing of Imaging Mass Cytometry (IMC) data. It simplifies the conversion of `.mcd` files into standards-compliant OME-TIFFs, handles ROI stitching, and provides tools for channel subsetting, pyramid generation and OME-TIFF compression.
### Key Features
- **Convert:** Fast transformation of MCD files to OME-TIFF.
- **Stitch:** Automatic whole-slide reconstruction from ROIs.
- **Optimize:** Channel filtering and pyramidal OME-TIFF generation for smooth viewing in QuPath, Napari, and ImageJ.
## Installation
**MCD Stitcher** requires Python **3.11** or higher.
To install the package, use the following command:
```
pip install mcd_stitcher
```
## ⚡ Workflow Commands
### ▶️ MCD_STITCH
**Description:** Converts all ROIs from MCD files into **whole-slide stitched OME-TIFFs**.
**Command:**
```
mcd_stitch <input_path> [<output_path>] [OPTIONS]
```
**Arguments:**
- **input_path:** Path to an MCD file or a folder containing `.mcd` files.
- **output_path:** (Optional) Output folder for stitched OME-TIFFs. Defaults to: `<input_path>/TIFF_stitched`
**Options:**
- **-d, --output_type [uint16 | float32]:** Output pixel data type. Default: `uint16`
- **-c, --compression [None | LZW | zstd]:** Compression method for the output OME-TIFFs. Default: `zstd`
- **-r, --roi:** Interactively select which ROIs to stitch.
**Example:**
1. **Stitch with default output folder and options**
```
mcd_stitch "/path/to/MCD_folder"
```
2. **Stitch with custom output folder and options**
```
mcd_stitch "/path/to/MCD_folder" "/path/to/TIFF_stitched" -d float32 -c None
```
### ▶️ MCD_CONVERT
**Description:** Converts all ROIs from input MCD files into **individual OME-TIFFs**.
**Command:**
```
mcd_convert <input_path> [<output_path>] [OPTIONS]
```
**Arguments:**
- **input_path:** Path to an MCD file or a folder containing `.mcd` files.
- **output_path:** (Optional) Output folder for the individual OME-TIFFs. Defaults to: `<input_path>/TIFF_Converted`
**Options:**
- **-d, --output_type [uint16 | float32]:** Output pixel data type. Default: `uint16`
- **-c, --compression [None | LZW | zstd]:** Compression method for the output OME-TIFFs. Default: `zstd`
**Example:**
1. **Convert with default output folder and options**
```
mcd_convert "/path/to/MCD_folder"
```
2. **Convert with custom output folder and options**
```
mcd_convert "/path/to/MCD_folder" "/path/to/TIFF_Converted" -d float32 -c LZW
```
### ▶️ TIFF_SUBSET
**Description:**
Subsets channels from OME-TIFF files, with options to list channels, filter specific channels, and generate pyramidal OME-TIFF outputs.
**Command:**
```
tiff_subset <input_path> [OPTIONS]
```
**Arguments:**
- **input_path:** Path to an OME-TIFF file **or a directory containing OME-TIFF files**.
**Options:**
- **-d, --output_type [uint16 | float32]:** Output pixel data type. Default: `uint16`
- **-c, --compression [None | LZW | zstd]:** Compression method for the output OME-TIFFs. Default: `zstd`
- **-l, --list-channels:** List all channels in the input OME-TIFF.
- **-f, --filter "CHANNELS":** Subset channels using a range or list (e.g. `"0-5,7,10"`).
- **-p, --pyramid:** Create a pyramidal (tiled) OME-TIFF output.
### **Examples:**
1. **List all channels in an OME-TIFF file:**
```
tiff_subset "path/to/file.ome.tiff" -l
```
2. **Subset channels 12 to 46 in an individual OME-TIFF:**
```
tiff_subset "path/to/file.ome.tiff" -f "12-46"
```
> **Note:** Other possible combinations: "1,6,20" or "5,6-10,55,60"
3. **Subset channels 12 to 46 in all OME-TIFFs in a directory:**
```
tiff_subset "path/to/directory" -f "12-46"
```
> **Note:** The output files will have `_filtered.ome.tiff` appended to the original filename.
4. **Convert an OME-TIFF file into a pyramidal OME-TIFF:**
```
tiff_subset "path/to/file.ome.tiff" -p
```
> **Note:** The output files will have `_pyramid.ome.tiff` appended to the original filename.
5. **Subset channels and generate pyramid OME-TIFF:**
```
tiff_subset "path/to/file.ome.tiff" -p -f "12-46"
```
> **Note:** The output files will have `_filtered_pyramid.ome.tiff` appended to the original filename.
## Issues
If you encounter any issues, please open a ticket on the [issue tracker](https://github.com/PawanChaurasia/mcd_stitcher/issues).
| text/markdown | null | Pawan Chaurasia <pchaurasia98@gmail.com> | null | null | GPL-3.0 | imaging, masscytometry, StandardBio, SBT, IMC, stitching, conversion | [
"Programming Language :: Python :: 3",
"License :: OSI Approved :: GNU General Public License v3 (GPLv3)",
"Operating System :: OS Independent"
] | [] | null | null | >=3.11 | [] | [] | [] | [
"click",
"imagecodecs",
"numpy",
"pandas",
"readimc",
"scikit-image",
"tifffile"
] | [] | [] | [] | [
"Homepage, https://github.com/PawanChaurasia/mcd_stitcher",
"Bug Tracker, https://github.com/PawanChaurasia/mcd_stitcher/issues",
"Repository, https://github.com/PawanChaurasia/mcd_stitcher"
] | twine/6.2.0 CPython/3.14.2 | 2026-02-20T11:12:27.587661 | mcd_stitcher-2.1.1.post1.tar.gz | 11,934 | 83/13/f6056bea9efaa08585004879944b472a22b2c76381316f26bf50842707d2/mcd_stitcher-2.1.1.post1.tar.gz | source | sdist | null | false | 3c8e7c8fb3d2bd82d5920b0f7b352d15 | 2a572ff20e0624c85f3f1374ad1188878ce9199f7547e17ac28a535927a206a0 | 8313f6056bea9efaa08585004879944b472a22b2c76381316f26bf50842707d2 | null | [
"LICENSE"
] | 224 |
2.4 | maeris | 1.0.0 | MCP server for extracting, analyzing, and documenting React/frontend APIs | # Maeris
Security and API scanning for your codebase, powered by Claude.
Maeris is an MCP (Model Context Protocol) server that gives Claude the ability to scan your code for security vulnerabilities, extract API definitions, and push results to the [Maeris Portal](https://autoe-light-dev.up.railway.app).
## Quick Start
**1. Install**
```bash
pip install maeris
```
**2. Register the MCP server for your project** (run from your project root)
```bash
maeris init
```
**3. Restart Claude Code**
That's it. Claude can now scan your codebase for security vulnerabilities and API calls.
## Authentication
Some features (like pushing scan results to the Maeris Portal) require an account.
```bash
maeris login
```
This opens your browser to authenticate. Credentials are stored per-project so each repo is fully isolated.
## Commands
| Command | Description |
|---|---|
| `maeris init` | Register the MCP server for the current repository |
| `maeris login` | Authenticate with Maeris Portal |
| `maeris logout` | Sign out and remove stored credentials |
| `maeris status` | Show current authentication status |
| `maeris switch-app` | Switch the active application |
## What Claude Can Do
Once the MCP server is running, ask Claude to:
- **Scan for vulnerabilities** — `"Scan this codebase for security issues"`
- **Extract APIs** — `"Extract all API calls from the src/ folder"`
- **Push to Maeris** — `"Push the scan results to Maeris"`
## Requirements
- Python 3.11+
- [Claude Code](https://claude.ai/code)
## License
MIT
| text/markdown | null | Anant Mathur <anant.mathur@autoelight.com> | null | null | MIT | null | [] | [] | null | null | >=3.11 | [] | [] | [] | [
"click>=8.1",
"httpx>=0.27",
"mcp<2.0.0,>=1.0.0",
"pydantic<3.0,>=2.5",
"pyjwt>=2.8",
"python-dotenv>=1.0.0",
"starlette>=0.36",
"structlog>=24.1",
"uvicorn>=0.27",
"mypy>=1.8; extra == \"dev\"",
"pytest-asyncio>=0.23; extra == \"dev\"",
"pytest-cov>=4.1; extra == \"dev\"",
"pytest>=8.0; extra == \"dev\"",
"ruff>=0.2; extra == \"dev\""
] | [] | [] | [] | [] | twine/6.2.0 CPython/3.12.1 | 2026-02-20T11:11:53.147802 | maeris-1.0.0.tar.gz | 160,567 | 57/14/0dca73de41dddc7308e3913bfab1115283fc9a107a20990ec705c1620711/maeris-1.0.0.tar.gz | source | sdist | null | false | 59ef6daf4cfc1be72f3f20f2504cd150 | 321abe72424cd1b003ecef88aea87c311cbe99662d3fe750b0043b8efd3151c4 | 57140dca73de41dddc7308e3913bfab1115283fc9a107a20990ec705c1620711 | null | [] | 248 |
2.4 | clemcore | 3.5.0 | The cLLM (chat-optimized Large Language Model, 'clem') framework tests such models' ability to engage in games, that is, rule-constituted activities played using language. | # clembench: A Framework for the Systematic Evaluation of Chat-Optimized Language Models as Conversational Agents
The cLLM (chat-optimized Large Language Model, "clem") framework allows researchers to easily evaluate the ability of large language models (LLM) by engaging them in games – rule-constituted activities played using language.
The framework is a systematic way of probing for the models' situated language understanding by framing them as agents, i.e., players which interact with a game master in 1-to-1 conversations.
This repository contains `clemcore`, the core framework code used to implement and run the games discussed in our EMNLP paper:
> Chalamalasetti, K., Götze, J., Hakimov, S., Madureira, B., Sadler, P., & Schlangen, D. (2023).
> [clembench: Using Game Play to Evaluate Chat-Optimized Language Models as Conversational Agents.](https://aclanthology.org/2023.emnlp-main.689/)
> In Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing, pages 11174–11219,
> Singapore. Association for Computational Linguistics.
**Clembench Repository**
Although the paper is called clembench, because it presents the set of games we propose for evaluating the models' capabilities, we decided to separate the framework code from the specific game implementations.
You can find the official set of benchmark games in the [clembench repository](https://github.com/clp-research/clembench).
**Clembench Results**
The results of running the benchmark on the games are uploaded to our [main project website](https://clembench.github.io).
The individual results for each run can be inspected via our [transcript browser](https://clembench.github.io/transcript-browser.html).
From the results we constitute a [leaderboard](https://clembench.github.io/leaderboard.html) which shows the performance of the most relevant models.
---
## Table of Contents
- [Overview](#overview)
- [Quickstart](#quickstart)
- [Installation](#installation)
- [CLI Commands](#cli-commands)
- [Games](#games)
- [Models](#models)
- [Backends](#backends)
- [Contributing](#contributing)
---
## Overview
The **clemcore** framework provides a systematic way of assessing *situated language understanding* of large language models by framing them as agents in rule-governed games.
- **This repo (clemcore)** → core framework, installable via pip.
- [**clembench repo**](https://github.com/clp-research/clembench) → set of official benchmark games built on top of clemcore.
- [**Project website**](https://clembench.github.io) → results, transcript browser, and leaderboard.
| Component | Purpose |
| ------------- | ------------------------------------------------------ |
| **clemcore** | Framework code, CLI (`clem`), backends, model registry |
| **clembench** | Collection of benchmark games for evaluation |
---
## Quickstart
Install and run your first game in a fresh virtual environment:
```bash
# 1. Create and activate a virtual environment
python3.10 -m venv venv
source venv/bin/activate
# 2. Install clemcore
pip install clemcore
# 3. Download clembench games and install requirements
export CLEMBENCH_HOME=/path/to/clembench
git clone https://github.com/clp-research/clembench $CLEMBENCH_HOME
pip install -r $CLEMBENCH_HOME/requirements.txt
# 4. List available games and try a dry-run with the mock model (programmatic player)
clem list games
clem run -g taboo -m mock mock
# 5. Run llama3-8b against the text-only benchmark (version 2 game instances)
clem run -g "{'benchmark': ['2.0']}" -m Meta-Llama-3.1-8B-Instruct
# 6. Perform a quantitative evaluation of the results
clem score && clem eval
# 7. Inspect episodes of game play for qualitative evaluation
clem transcribe
```
---
## Installation
We recommend to use Python 3.10 in a dedicated virtual environment:
```bash
sudo apt install python3.10 python3.10-venv python3.10-dev
python3.10 -m venv venv
source venv/bin/activate
```
Install clemcore:
```bash
pip install clemcore
```
Optional extras:
```
pip install clemcore[huggingface] # installs dependencies for using the local huggingface backend
pip install clemcore[vllm] # installs dependencies for using the local vllm backend
pip install clemcore[slurk] # installs dependencies for using the slurk backend
```
Check installation:
```
clem --version
# clem 3.3.2
```
---
## CLI Commands
After the installation you will have access to the `clem` CLI tool.
The main functions are:
```
clem list games # list the games available for a run
clem list backends # list the backends available for a run
clem list models # list the models available for a run
clem run -g <game> -m <model> # runs specified game using specified model
clem transcribe # translates interactions into html files
clem score # computes individual performance measures
clem eval # computes overall performance measures; requires scores
```
---
## Games
The [clembench repository](https://github.com/clp-research/clembench) provides a set of ready‑to‑use games.
To make them available to the `-g <game-name>` option of `clem run`:
1. Clone `clembench` to a directory of your choice.
2. Either run `clem` from within that directory, **or** set the `CLEMBENCH_HOME` environment variable to point to it.
Alternatively, you can place a `game_registry.json` file in your current working directory that points to the benchmark folder:
```json
[{
"benchmark_path": "path/to/clembench"
}]
```
To check the available games, run the following command:
```bash
clem list games
# Listing all available games (use -v option to see the whole specs)
# Found '26' game specs that match the game_selector='all'
# adventuregame:
# Interactive Fiction clemgame
# cloudgame:
# A simple game in which a player has to decide whether they see clouds
# or not and a second player has to judge this response.
# ...
```
If you want to list only a subset of games (not all, but more than one), you can use the `-s` (selector) option:
```
clem list games -s "{'benchmark':['2.0']}"
# Listing all available games (use -v option to see the whole specs)
# Found '14' game specs that match the game_selector='{'benchmark': ['2.0']}'
# adventuregame:
# Interactive Fiction clemgame
# codenames:
# Codenames game between a cluegiver and a guesser
# ...
```
> **Note:** These selectors can also be passed to the `-g` option of the `clem run` command!
>
To register custom games, extend the `game_registry.json`.
A minimal entry looks like this:
```json
{
"game_name": "mygame",
"game_path": "path/to/mygame",
"description": "A brief description of mygame",
"player": 1,
"image": "none",
"languages": ["en"]
}
```
---
## Models
A model implements an interface to generate a player's response given a message history during game play.
The `clemcore` package already comes with a huge variety of models registered in a bundled `model_registry.json`.
This makes them available for the `-m <model>` option of the `clem run` command.
To check the available list, run the following command:
```bash
clem list models | head
# Listing all available models by name (use -v option to see the whole specs)
# Found '215' registered model specs:
# slurk -> slurk (packaged)
# openchat -> openai_compatible (packaged)
# codellama-34b -> openai_compatible (packaged)
# Llama-3-70B-Instruct-Anyscale -> openai_compatible (packaged)
# Llama-3-70B-Together.ai -> openai_compatible (packaged)
# mistral-large-2411 -> openai_compatible (packaged)
# deepseek-v3 -> openai_compatible (packaged)
# deepseek-r1 -> openai_compatible (packaged)
```
To add a model, simply specify it in a `model_registry.json` within the directory where you run the `clem` CLI.
A minimal model specification would define the model's name and the backend to be used:
```json
{
"model_name": "custom_model",
"backend": "custom_backend"
}
```
The model will then show up in the listing:
```bash
clem list models | head
# Listing all available models by name (use -v option to see the whole specs)
# Found '216' registered model specs:
# custom_model -> custom_backend (/path/to/cwd/model_registry.json)
# slurk -> slurk (packaged)
# ...
```
> **Note:** Models defined by custom files in the current working directory always precede packaged models.
---
## Backends
`clemcore` supports a variety of model providers out of the box.
These backends are responsible for providing the models (or agents) that are supposed to play the games.
To see the available backends run the following command:
```bash
clem list backends
# Listing all supported backends (use -v option to see full file path)
# Found '14' supported backends.
# Then you can use models that specify one of the following backends:
# anthropic (packaged)
# slurk (packaged)
# vllm (packaged)
# mistral (packaged)
# huggingface_local (packaged)
# openai_compatible (packaged)
# openai (packaged)
# google (packaged)
# huggingface_multimodal (packaged)
# llamacpp (packaged)
# cohere (packaged)
# alephalpha (packaged)
# _player_human (packaged)
# _player_programmed (packaged)
```
You can add your own backend simply by adding a `<backend-name>_api.py` with a `Backend` implementation into the directory where you call the `clem` CLI.
The backend will then show up in the listing:
```bash
clem list backends
# Listing all supported backends (use -v option to see full file path)
# Found '15' supported backends.
# Then you can use models that specify one of the following backends:
# custom_backend (cwd)
# anthropic (packaged)
# ...
```
> **Note:** Backends defined by custom files in the current working directory always precede packaged backends.
### Proprietary Backends
Proprietary models are often only provided through backends that connect to a remote API.
These remote APIs are usually protected by the use of API keys.
To make all remote API backends fully functional, you have to add your secure key to a `key.json`. Use the provided `key.json.template`:
- Rename the file to `key.json`
- Add your keys in the `api_key` entries
- Place the file in either the working directory or `~/.clemcore`
> **Note:** Keys defined in the current working directory always precede others
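For illustration only, a filled-in `key.json` might look roughly like the following (the entry names below are hypothetical; the actual entries are given by the provided `key.json.template`):
```json
{
  "openai": { "api_key": "<your-openai-key>" },
  "anthropic": { "api_key": "<your-anthropic-key>" }
}
```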
### OpenAI Compatible Backend
The openai compatible backend comes with an additional `base_url` field in the `key.json` which allows you to define a remote API that is compatible with the OpenAI client.
This comes in very handy when you, for example, host your own models via a `vllm` server.
> **Note:** When using this backend you usually want to add your own model specifications (see above).
### Slurk Backend
The `clemcore` framework integrates with the [slurk experiment server](https://github.com/clp-research/slurk).
This "Chat Server for Dialogue Experiments and Data Collection" enables humans to play the games simply by using a browser.
Hence, for this to work, you have to set up a slurk server instance.
For testing purposes you can do this on your local machine using a docker container:
```
docker run -d -e FLASK_ENV=development -p 5000:80 slurk/server
```
Running in dev mode should start a slurk server on port `5000` and expose an API that is protected by the default `api_key`.
Now, similar to the openai compatible backend, `clem` must be informed about the location of the slurk host by filling in the respective entries in the `key.json`:
```
"slurk": {
"api_key": "00000000-0000-0000-0000-000000000000", # default value
"slurk_host": "http://localhost:5000"
}
```
Finally, the slurk backend is fully functional and becomes available to `clem`.
Note that in terms of benchmarking `clemcore` frames the human player as a "model" backed up by the slurk backend.
Hence, the command to play a single-player game is:
```
clem run -g <game> -m slurk
```
This will set up everything and expose a clickable url in the console output which redirects to the game room on the slurk server:
```
2025-08-20 13:59:16,531 - clemcore.cli - INFO - http://localhost:5000/login?name=player&token=091aee66-eecb-4da4-88dc-a6680384be82
```
Notably, the first 8 characters of the login token act as the model name, e.g. `091aee66`, to distinguish between players in the results folder.
For multi-player games, the index of the model argument determines the role to be played.
Look up the specific game specification to find out which index maps to which role.
You can simply play against a supported model by using, for example:
```
clem run -g <game> -m slurk <model-name>
```
---
## Contributing
Framework developers who want to contribute to the clemcore framework should follow these steps:
- Fork this repository via GitHub
- Clone the forked repository to your development machine
- Create a venv as mentioned above and install the project with `pip install -e .`
- Make sure that the venv folder is git-ignored
- Create a branch in the fork that is supposed to contain your changes
- Test your changes either by adding a test case or by installing the framework locally and running the CLI
- Commit and push your changes to the branch on your fork
- Create a pull request that aims to merge your branch with the main branch of this repository
---
This repository is tested on `Python 3.10`.
| text/markdown | null | Philipp Sadler <first.last@uni-potsdam.de>, Jonathan Jordan <first.last@uni-potsdam.de>, Sherzod Hakimov <first.last@uni-potsdam.de>, Anne Beyer <first.last@uni-potsdam.de>, "L. Pfennigschmidt" <first.last@uni-potsdam.de>, Kushal Koshti <first.last@uni-potsdam.de> | null | null | MIT | null | [] | [] | null | null | <3.13,>=3.10 | [] | [] | [] | [
"pyyaml>=6.0",
"numpy<2.0.0,>=1.24.3",
"retry>=0.9.2",
"tqdm>=4.65.0",
"nltk<4.0.0,>=3.9.2",
"openai<3.0.0,>=2.15.0",
"anthropic<1.0.0,>=0.75.0",
"cohere<6.0.0,>=5.20.1",
"google-genai<2.0.0,>=1.57.0",
"mistralai<2.0.0,>=1.10.0",
"matplotlib==3.7.1",
"pandas==2.0.1",
"seaborn==0.12.2",
"sparklines>=0.7.0",
"pylatex<2.0.0,>=1.4.0",
"ordered-set<5.0.0,>=4.1.0",
"markdown<4.0.0,>=3.8.0",
"pettingzoo<2.0.0,>=1.25.0",
"openenv-core~=0.1.1",
"datasets<5.0.0,>=4.4.0",
"transformers<5.0.0,>=4.55.2; extra == \"vllm\"",
"vllm<1.0.0,>=0.8.4; extra == \"vllm\"",
"accelerate<2.0.0,>=1.0.0; extra == \"huggingface\"",
"bitsandbytes<1.0.0,>=0.42.0; extra == \"huggingface\"",
"peft<1.0.0,>=0.17.0; extra == \"huggingface\"",
"timm<1.1.0,>=1.0.19; extra == \"huggingface\"",
"transformers<5.0.0,>=4.55.2; extra == \"huggingface\"",
"torchvision; extra == \"huggingface\"",
"python-engineio==4.4.0; extra == \"slurk\"",
"python-socketio==5.7.2; extra == \"slurk\"",
"websocket-client; extra == \"slurk\""
] | [] | [] | [] | [
"Homepage, https://github.com/clp-research/clemcore"
] | twine/6.1.0 CPython/3.10.12 | 2026-02-20T11:11:37.736505 | clemcore-3.5.0.tar.gz | 184,206 | 85/c7/90a2ba4bccd9a753878f3a74eac5b6f48b5e9fb958c40152639c9a27b98c/clemcore-3.5.0.tar.gz | source | sdist | null | false | fa311dbfd1c93c780f1b065f79ea857c | 18cc45394e022d3b6b4e7453b89ec558590f0a04eabc7b69f0832c220a9ce496 | 85c790a2ba4bccd9a753878f3a74eac5b6f48b5e9fb958c40152639c9a27b98c | null | [
"LICENSE"
] | 225 |
2.1 | cdk-sops-secrets | 2.6.3 | CDK Constructs that syncs your sops secrets into AWS SecretsManager secrets. | <img src="https://github.com/dbsystel/cdk-sops-secrets/blob/main/img/banner-dl-small.png?raw=true">

[](https://github.com/dbsystel/cdk-sops-secrets/actions/workflows/release.yml)
[](https://constructs.dev/packages/cdk-sops-secrets)
[](https://www.npmjs.com/package/cdk-sops-secrets)
[](https://www.npmjs.com/package/cdk-sops-secrets)
[](https://pypi.org/project/cdk-sops-secrets)
[](https://pypi.org/project/cdk-sops-secrets)
[](https://codecov.io/gh/dbsystel/cdk-sops-secrets)
[](https://github.com/dbsystel/cdk-sops-secrets/issues?q=is%3Aissue+is%3Aopen+label%3A%22security+vulnerability%22)
# Introduction
*Create secret values in AWS with infrastructure-as-code easily*
This construct library offers CDK Constructs that facilitate syncing [SOPS-encrypted secrets](https://github.com/getsops/sops) to AWS Secrets Manager and SSM Parameter Store.
It enables secure storage of secrets in Git repositories while allowing seamless synchronization and usage within AWS. Even large sets of SSM Parameters can be created quickly from a single file.
* Create AWS Secrets Manager secrets
* Create single SSM Parameter
* Create multiple SSM Parameter in a batch from a file
* Use SOPS json, yaml or dotenv as input files, as well as binary data
* No need for manual permission setups for the Custom Resource due to automatic least-privilege generation for the SyncProvider
# Table Of Contents
* [Introduction](#introduction)
* [Table Of Contents](#table-of-contents)
* [Available Constructs](#available-constructs)
* [SopsSecret — Sops to SecretsManager](#sopssecret--sops-to-secretsmanager)
* [SopsStringParameter — Sops to single SSM ParameterStore Parameter](#sopsstringparameter--sops-to-single-ssm-parameterstore-parameter)
* [MultiStringParameter — Sops to multiple SSM ParameterStore Parameters](#multistringparameter--sops-to-multiple-ssm-parameterstore-parameters)
* [SopsSyncProvider](#sopssyncprovider)
* [Common configuration options for SopsSecret, SopsStringParameter and MultiStringParameter](#common-configuration-options-for-sopssecret-sopsstringparameter-and-multistringparameter)
* [Considerations](#considerations)
* [UploadType: INLINE / ASSET](#uploadtype-inline--asset)
* [Stability](#stability)
* [FAQ](#faq)
* [How can I migrate to V2](#how-can-i-migrate-to-v2)
* [SecretsManager](#secretsmanager)
* [Parameter](#parameter)
* [MultiParameter](#multiparameter)
* [It does not work, what can I do?](#it-does-not-work-what-can-i-do)
* [I get errors with `dotenv` formatted files](#i-get-errors-with-dotenv-formatted-files)
* [Error: Error getting data key: 0 successful groups required, got 0](#error-error-getting-data-key-0-successful-groups-required-got-0)
* [Error: Asset of sync lambda not found](#error-asset-of-sync-lambda-not-found)
* [Can I upload the sops file myself and provide the required information as CloudFormation Parameter?](#can-i-upload-the-sops-file-myself-and-provide-the-required-information-as-cloudformation-parameter)
* [Can I access older versions of the secret stored in the SecretsManager?](#can-i-access-older-versions-of-the-secret-stored-in-the-secretsmanager)
* [I want the `raw` content of the sops file, but I always get the content nested in json](#i-want-the-raw-content-of-the-sops-file-but-i-always-get-the-content-nested-in-json)
* [License](#license)
# Available Constructs
The construct library cdk-sops-secrets supports three different Constructs that help you to sync your encrypted sops secrets to secure places in AWS.
Let's assume we want to store the following secret information in AWS:
```json
{
"apiKey": "sk-1234567890abcdef",
"database": {
"user": "admin",
"password": "P@ssw0rd!",
"host": "db.example.com"
},
"tokens": [
{ "service": "github", "token": "ghp_abcd1234" },
{ "service": "aws", "token": "AKIAIOSFODNN7EXAMPLE" }
],
"someOtherKey": "base64:VGhpcyBpcyBhIHNlY3JldCBrZXk="
}
```
It doesn't matter if this data is in `json`, `yaml` or `dotenv` format, `cdk-sops-secrets` can handle them all.
Even binary data is supported with some limitations.
## SopsSecret — Sops to SecretsManager
If you want to store your secret data in the AWS SecretsManager, use the `SopsSecret` construct. This is a "drop-in-replacement" for the [Secret Construct](https://docs.aws.amazon.com/cdk/api/v2/docs/aws-cdk-lib.aws_secretsmanager.Secret.html) of the AWS CDK.
Minimal Example:
```typescript
const secret = new SopsSecret(stack, 'MySopsSecret', {
secretName: 'mySecret', // name of the secret in AWS SecretsManager
sopsFilePath: 'secrets/sopsfile-encrypted-secret.json', // filepath to the sops encrypted file
});
```
The content of the referenced sops secret file will be synced to the AWS SecretsManager Secret with the name `mySecret`.
For convenience, several transformations apply:
* Nested structures and arrays will be resolved and flattened to a JSONPath notation
* All values will be stored as strings
This is also done because of limitations of CDK in conjunction with
[dynamic references](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/dynamic-references-secretsmanager.html) and limitations
of the `Key/Value` view of the AWS SecretsManager WebConsole. So the result saved in the AWS SecretsManager will actually be:
```json
{
"apiKey": "sk-1234567890abcdef",
"database.user": "admin",
"database.password": "P@ssw0rd!",
"database.host": "db.example.com",
"tokens[0].service": "github",
"tokens[0].token": "ghp_abcd1234",
"tokens[1].service": "aws",
"tokens[1].token": "AKIAIOSFODNN7EXAMPLE",
"someOtherKey": "base64:VGhpcyBpcyBhIHNlY3JldCBrZXk="
}
```
This allows you to access the values from your secret via CDK:
```typescript
secret.secretValueFromJson('"database.password"').toString(),
secret.secretValueFromJson('"tokens[0].token"').toString();
```
If you don't want these conversions, you can completely disable them by using the `rawOutput` property.
```typescript
const secret = new SopsSecret(stack, 'MySopsSecret', {
rawOutput: RawOutput.STRING,
...
});
```
This will turn off the conversions and just place the decrypted content in the target secret. It's also possible to use
`RawOutput.BINARY`; then the AWS SecretsManager Secret will be populated with binary data instead of string data.
## SopsStringParameter — Sops to single SSM ParameterStore Parameter
If you want to sync the whole content of a sops encrypted file to an encrypted AWS SSM ParameterStore Parameter, you can use the SopsStringParameter Construct.
```typescript
const parameter = new SopsStringParameter(stack, 'MySopsParameter', {
encryptionKey: Key.fromLookup(stack, 'DefaultKey', {
aliasName: 'alias/aws/ssm',
}),
sopsFilePath: 'secrets/sopsfile-encrypted-secret.json',
});
```
This will create a Parameter with the value of the decrypted sops file content. No transformations are applied.
## MultiStringParameter — Sops to multiple SSM ParameterStore Parameters
If you have a structured sops file (yaml, json, dotenv) and want to populate the AWS SSM ParameterStore with it, you want to use the MultiStringParameter Construct.
```typescript
const multi = new MultiStringParameter(stack, 'MyMultiParameter', {
encryptionKey: Key.fromLookup(stack, 'DefaultKey', {
aliasName: 'alias/aws/ssm',
}),
sopsFilePath: 'secrets/sopsfile-encrypted-secret.json',
});
```
This will create several AWS SSM ParameterStore Parameters:
```bash
ParameterName Value
/apiKey "sk-1234567890abcdef"
/database/user "admin"
/database/password "P@ssw0rd!"
/database/host "db.example.com"
/tokens/0/service "github"
/tokens/0/token "ghp_abcd1234"
/tokens/1/service "aws"
/tokens/1/token "AKIAIOSFODNN7EXAMPLE"
/someOtherKey "base64:VGhpcyBpcyBhIHNlY3JldCBrZXk="
```
You can configure the naming schema via the properties `keySeparator` and `keyPrefix`:
```typescript
const multi = new MultiStringParameter(stack, 'MyMultiParameter', {
  keyPrefix: 'mykeyprefix.', // All keys will start with this string, default '/'
  keySeparator: '-', // This separator is used when converting to a flat structure, default '/'
})
```
This would lead to Parameters
```bash
ParameterName Value
mykeyprefix.apiKey "sk-1234567890abcdef"
mykeyprefix.database-user "admin"
mykeyprefix.tokens-0-service "github"
...
```
## SopsSyncProvider
The SOPS-Provider is the custom resource AWS Lambda Function, that is doing all the work. It downloads, decrypts
and stores the secret content in your desired location. This Lambda Function needs several IAM permissions to do its work.
For most use cases, you don't need to create it on your own, as the other Constructs try to create this and derive the required IAM permissions from your input.
But there are use cases, that require you to change the defaults of this Provider. If this is the case,
you have to create the provider on your own and add it to the other constructs.
Note that a SopsSyncProvider is a [SingletonLambda](https://docs.aws.amazon.com/cdk/api/v2/docs/aws-cdk-lib.aws_lambda.SingletonFunction.html) that can only exist once.
```typescript
const provider = new SopsSyncProvider(this, 'MySopsSyncProvider', {
role: customRole, // you can pass a custom role
vpc: customVpc, // The default SopsSync Provider
vpcSubnets: { // won't run in any VPC,
subnets: [ // as it does not require
customSubnet1, // access to any VPC resources.
customSubnet2, // But if you want,
] // you can change this behaviour
}, // and set vpc, subnet and
securityGroups: [ // security groups to your
customSecurityGroup // needs.
],
logGroup: new LogGroup(this, 'MyLogGroup', { // you can add a custom log group
retention: RetentionDays.THREE_MONTHS, // with a custom retention period
encryptionKey: new KmsKey(this, 'MyKmsKey') // and custom encryption
}), //
uuid: 'MySopsSyncProvider', // Create a custom singleton by changing default uuid.
});
provider.addToRolePolicy( // You can pass PolicyStatements
new PolicyStatement({ // via the addToRolePolicy Method
actions: ['...'], //
resources: ['...'], //
}) //
); //
kmsKey.grantDecrypt( // The provider implements
provider // the IGrantable interface,
); // so you can use it as grant target
const secret = new SopsSecret(this, 'MySecret', {
sopsProvider: provider, // this property is available in all Constructs
...
});
```
## Common configuration options for SopsSecret, SopsStringParameter and MultiStringParameter
```typescript
const construct = new Sops...(this, 'My', {
/**
* use your own SopsSyncProvider
* @see SopsSyncProvider
*/
  sopsProvider: myCustomProvider, // default - a new provider will be created
/**
* the constructs try to derive the required iam permissions from the sops file
* and the target. If you don't want this, you can disable this behaviour.
* You have to take care of all required permissions on your own.
*/
autoGenerateIamPermissions: false, // default: true
/**
* the default behaviour of passing the sops file content to the provider is
* by embedding the base64 encoded content in the CloudFormation template.
* Using CDK Assets is also supported. It might be required to switch to
* Assets, if your sops files are very large.
*/
uploadType: UploadType.ASSET, // default: UploadType.INLINE
/**
* if you don't want this constructs to take care of passing the encrypted
* sops file to the sops provider, you can upload them yourself to a
* S3 bucket.
* You can pass bucket and key, and the constructs won't pass the content
* as ASSET or in the CloudFormation Template.
* As the construct isn't aware of the sopsfile, we can't derive the required
* permissions to decrypt the sops file. The same applies to the sopsFileFormat.
* You have to pass them all manually.
*/
sopsS3Bucket: 'my-custom-bucket',
sopsS3Key: 'encoded-sops.json',
sopsKmsKey: [
kmsKeyUsedForEncryption,
  ],
sopsFileFormat: 'json', // Allowed values are json, yaml, dotenv and binary
})
```
# Considerations
## UploadType: INLINE / ASSET
I decided that the default behavior should be "INLINE" because of the following considerations:
* Fewer permissions
  *If we use inline content instead of an S3 asset, the SopsSyncProvider does not need permissions to access the asset bucket and its KMS key.*
* Faster
*If we don't have to upload and download things from and to S3, it should be a little faster.*
* Interchangeable
*As we use the same information to generate the version of the secret,
no new version of the secret should be created, if you change from INLINE to ASSET or vice versa,
even if the CloudFormation resource updates.*
## Stability
You can consider this package as stable. Updates will follow [Semantic Versioning](https://semver.org/).
Nevertheless, I would recommend pinning the exact version of this library in your `package.json`.
# FAQ
## How can I migrate to V2
Some user-facing configuration properties had to be changed, so minor changes are required to make things work again.
### SecretsManager
* Removed property convertToJSON, flatten, stringifiedValues
* Use property rawOutput instead:
* `undefined / not set`: (default) convertToJSON and flatten and stringifiedValues = true
* `RawOutput.STRING`: convertToJSON and flatten and stringifiedValues = false
* `RawOutput.BINARY`: convertToJSON and flatten and stringifiedValues = false and Secret is binary
### Parameter
* Removed property convertToJSON, flatten, stringifiedValues - all of them made no sense - now only raw output of decrypted secret
### MultiParameter
* Removed property convertToJSON, flatten, stringifiedValues - most of this combinations made no sense
* Always convertToJson and flatten (as we have to parse it to create multiple parameters)
* You are allowed to choose the flattenSeparator
## It does not work, what can I do?
Even though this construct has unit and integration tests, there can be bugs and issues. As everything is performed by a CloudFormation custom resource provider, a good starting point is the log of the corresponding lambda function. It should be located in your AWS Account under CloudWatch - Log groups:
`/aws/lambda/YOUR-STACK-NAME-SingletonLambdaSopsSyncProviderSOMETHINGsomething1234`
## I get errors with `dotenv` formatted files
Only very basic dotenv syntax is supported right now: only single-line values are accepted, and the format must match:
```dotenv
key=value
```
Comments must be on their own line, not placed after value assignments.
## Error: Error getting data key: 0 successful groups required, got 0
This error message (and failed sync) is related to the getsops/sops issues [#948](https://github.com/getsops/sops/issues/948) and [#634](https://github.com/getsops/sops/issues/634). You must not create your secret with the `--aws-profile` flag. This profile will be written to your sops file and is required in every runtime environment. You have to define the profile to use via the environment variable `AWS_PROFILE` instead, to avoid this.
## Error: Asset of sync lambda not found
The lambda asset code is generated relative to the path of the index.ts in this package. With tools like nx this can lead to wrong results, so that the asset could not be found.
You can override the asset path via the [cdk.json](https://docs.aws.amazon.com/cdk/v2/guide/get_context_var.html) or via the flag `-c` of the CDK CLI.
The context used for this override is `sops_sync_provider_asset_path`.
So for example you can use
```bash
cdk deploy -c "sops_sync_provider_asset_path=some/path/asset.zip"
```
or in your cdk.json
```json
{
"context": {
"sops_sync_provider_asset_path": "some/path/asset.zip"
}
}
```
## Can I upload the sops file myself and provide the required information as CloudFormation Parameter?
This is possible in the following way. Ensure that you have created a custom sops provider
with proper IAM permissions.
```typescript
const sopsS3BucketParam = new CfnParameter(this, "s3BucketName", {
type: "String",
description: "The name of the Amazon S3 bucket where your sopsFile was uploaded."});
const sopsS3KeyParam = new CfnParameter(this, "s3KeyName", {
type: "String",
description: "The name of the key of the sopsFile inside the Amazon S3 bucket."});
const sopsKmsKeyArn = new CfnParameter(this, "sopsKeyArn", {
type: "String",
description: "The ARN of the KMS Key used for sops encryption"});
const sopsKmsKey = Key.fromKeyArn(this, 'Key', sopsKmsKeyArn.valueAsString)
new SopsSecret(stack, 'SopsSecret', {
sopsS3Bucket: sopsS3BucketParam.valueAsString,
sopsS3Key: sopsS3KeyParam.valueAsString,
sopsKmsKey: [
sopsKmsKey
],
sopsFileFormat: 'json',
...
});
```
## Can I access older versions of the secret stored in the SecretsManager?
While creating the secret or updating the entries of a secret, the native CDK function `cdk.FileSystem.fingerprint(...)` is used
to generate the version information of the AWS SecretsManager secret.
Therefore, it is possible to reference the entries from a specific AWS SecretsManager version.
```typescript
const versionId = cdk.FileSystem.fingerprint(`./sops/SomeSecrets.json`);
const passphrase = ecs.Secret.fromSecretsManagerVersion(
secretMgmt,
{ versionId: versionId },
'MY_PRIVATE_PASSPHRASE',
);
const container = TaskDef.addContainer('Container', {
secrets: {
MY_PRIVATE_PASSPHRASE: passphrase,
},
});
```
## I want the `raw` content of the sops file, but I always get the content nested in json
To get the best raw experience, you should encrypt your sops files in binary format:
```bash
sops encrypt ... my-whatever-file --output my-secret-information.sops.binary --input-type binary
```
You will lose features like only encrypting the values, not the keys.
The whole file content will be stored in the sops file.
You can store everything you like as binary, even binary data[^1].
When using binary encrypted secrets with these constructs, ensure the file ending is also `binary`, or override it via the
`sopsFormat` property.
This does not work for `MultiStringParameter`.
[^1]: Even if sops can handle binary data, only the AWS SecretsManager allows storing it.
# License
The Apache-2.0 license. Please have a look at the [LICENSE](LICENSE) and [LICENSE-3RD-PARTY](LICENSE-3RD-PARTY).
| text/markdown | Markus Siebert<markus.siebert@deutschebahn.com> | null | null | null | Apache-2.0 | null | [
"Intended Audience :: Developers",
"Operating System :: OS Independent",
"Programming Language :: JavaScript",
"Programming Language :: Python :: 3 :: Only",
"Programming Language :: Python :: 3.9",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Typing :: Typed",
"Development Status :: 5 - Production/Stable",
"License :: OSI Approved"
] | [] | https://constructs.dev/packages/cdk-sops-secrets | null | ~=3.9 | [] | [] | [] | [
"aws-cdk-lib<3.0.0,>=2.227.0",
"constructs<11.0.0,>=10.0.5",
"jsii<2.0.0,>=1.126.0",
"publication>=0.0.3",
"typeguard==2.13.3"
] | [] | [] | [] | [
"Source, https://github.com/dbsystel/cdk-sops-secrets.git"
] | twine/6.1.0 CPython/3.14.2 | 2026-02-20T11:11:35.163090 | cdk_sops_secrets-2.6.3.tar.gz | 16,227,399 | c9/cb/ec3f39326a27815686a5424416edac45620351374d732565640ba13289cb/cdk_sops_secrets-2.6.3.tar.gz | source | sdist | null | false | ed0f62b63fc753c374cf65a41bf7a789 | d344e7d5609d9392a837b94a1a631b3b4ea11c05230149ba33ff9eddd5fc4142 | c9cbec3f39326a27815686a5424416edac45620351374d732565640ba13289cb | null | [] | 271 |
2.4 | vws-web-tools | 2026.2.20 | Tools for interacting with the Vuforia Web Services (VWS) website. | |Build Status| |PyPI|
VWS-Web-Tools
=============
Tools for interacting with the VWS (Vuforia Web Services) website.
Installation
------------
.. code-block:: shell
pip install vws-web-tools
This is tested on Python |minimum-python-version|\+.
Usage
-----
.. code-block:: console
$ export VWS_EMAIL_ADDRESS="[YOUR-EMAIL]"
$ export VWS_PASSWORD="[YOUR-PASSWORD]"
$ TIME="$(date +%s%N | cut -b1-13)"
$ vws-web-tools create-vws-license --license-name "my-licence-$TIME"
$ vws-web-tools create-vws-cloud-database --license-name "my-licence-$TIME" --database-name "my-database-$TIME"
$ vws-web-tools show-database-details --database-name "my-database-$TIME"
Python API
----------
This project also exposes a Python API.
See the `Python API reference <https://vws-python.github.io/vws-web-tools/python-api.html>`__.
Full documentation
------------------
See the `full documentation <https://vws-python.github.io/vws-web-tools/>`__ for more information including how to contribute.
.. |Build Status| image:: https://github.com/VWS-Python/vws-web-tools/actions/workflows/ci.yml/badge.svg?branch=main
:target: https://github.com/VWS-Python/vws-web-tools/actions
.. |PyPI| image:: https://badge.fury.io/py/VWS-Web-Tools.svg
:target: https://badge.fury.io/py/VWS-Web-Tools
.. |minimum-python-version| replace:: 3.12
| text/x-rst | null | Adam Dangoor <adamdangoor@gmail.com> | null | null | null | vuforia, vws | [
"Development Status :: 5 - Production/Stable",
"Environment :: Web Environment",
"Framework :: Pytest",
"Operating System :: Microsoft :: Windows",
"Operating System :: POSIX",
"Programming Language :: Python :: 3 :: Only",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: 3.13",
"Programming Language :: Python :: 3.14"
] | [] | null | null | >=3.12 | [] | [] | [] | [
"beartype>=0.22.9",
"click>=8.1.7",
"pyyaml>=6.0.2",
"selenium>=4.25.0",
"tenacity>=9.0.0",
"actionlint-py==1.7.11.24; extra == \"dev\"",
"check-manifest==0.51; extra == \"dev\"",
"deptry==0.24.0; extra == \"dev\"",
"doc8==2.0.0; extra == \"dev\"",
"doccmd==2026.1.31.3; extra == \"dev\"",
"furo==2025.12.19; extra == \"dev\"",
"interrogate==1.7.0; extra == \"dev\"",
"mypy[faster-cache]==1.19.1; extra == \"dev\"",
"mypy-strict-kwargs==2026.1.12; extra == \"dev\"",
"prek==0.3.3; extra == \"dev\"",
"pydocstringformatter==0.7.5; extra == \"dev\"",
"pylint[spelling]==4.0.4; extra == \"dev\"",
"pyproject-fmt==2.16.0; extra == \"dev\"",
"pyrefly==0.52.0; extra == \"dev\"",
"pyright==1.1.408; extra == \"dev\"",
"pyroma==5.0.1; extra == \"dev\"",
"pytest==9.0.2; extra == \"dev\"",
"pytest-cov==7.0.0; extra == \"dev\"",
"pytest-regressions==2.10.0; extra == \"dev\"",
"ruff==0.15.1; extra == \"dev\"",
"shellcheck-py==0.11.0.1; extra == \"dev\"",
"shfmt-py==3.12.0.2; extra == \"dev\"",
"sphinx==9.1.0; extra == \"dev\"",
"sphinx-click==6.2.0; extra == \"dev\"",
"sphinx-copybutton==0.5.2; extra == \"dev\"",
"sphinx-lint==1.0.2; extra == \"dev\"",
"sphinx-pyproject==0.3.0; extra == \"dev\"",
"sphinx-substitution-extensions==2026.1.12; extra == \"dev\"",
"sphinxcontrib-spelling==8.0.2; extra == \"dev\"",
"ty==0.0.17; extra == \"dev\"",
"types-pyyaml==6.0.12.20250915; extra == \"dev\"",
"vulture==2.14; extra == \"dev\"",
"yamlfix==1.19.1; extra == \"dev\"",
"zizmor==1.22.0; extra == \"dev\"",
"check-wheel-contents==0.6.3; extra == \"release\""
] | [] | [] | [] | [
"Documentation, https://vws-python.github.io/vws-web-tools/",
"Source, https://github.com/VWS-Python/vws-web-tools"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-20T11:11:19.598856 | vws_web_tools-2026.2.20.tar.gz | 39,382 | 05/32/891d19cd4ba6b78f6050dad31152f1f3ca0e4f57c1578899352e50b0c4b0/vws_web_tools-2026.2.20.tar.gz | source | sdist | null | false | f94f7c8416108ee89bcc77244ff146f0 | fb60b69bb4a64c4c938368051d6f7d51dcf9124973985da8bb3d412926174fee | 0532891d19cd4ba6b78f6050dad31152f1f3ca0e4f57c1578899352e50b0c4b0 | MIT | [
"LICENSE"
] | 9,745 |
2.4 | agentwarden | 0.1.2 | Security and access control for AI Agents | # AgentWarden SDK
Security and permission management for AI Agents.
## Installation
```bash
pip install agentwarden
```
## Quick Start
```python
from agentwarden import AgentWarden
# Initialize with your API key
guard = AgentWarden(api_key="your-api-key")
# Check if agent can execute an action
result = guard.check(
agent_id="agent-001",
action="stripe.refund",
context={"amount": 50}
)
if result.allowed:
# Execute your action
stripe.refund.create(amount=50)
# Log the action
guard.log("agent-001", "stripe.refund", "success")
else:
print(f"Action blocked: {result.reason}")
```
## Helper Method
```python
def refund_payment():
return stripe.refund.create(amount=50)
# Check, execute, and log automatically
result = guard.execute(
"agent-001",
"stripe.refund",
refund_payment,
context={"amount": 50}
)
```
## Documentation
Visit [https://agentwarden.io](https://agentwarden.io) for full documentation.
## License
MIT License
| text/markdown | David Fdez | hello@agentwarden.io | null | null | null | null | [
"Development Status :: 3 - Alpha",
"Intended Audience :: Developers",
"License :: OSI Approved :: MIT License",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.9",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12"
] | [] | https://github.com/securewarden/agentwarden-sdk | null | >=3.9 | [] | [] | [] | [
"httpx>=0.27.0"
] | [] | [] | [] | [] | twine/6.2.0 CPython/3.10.12 | 2026-02-20T11:10:15.868610 | agentwarden-0.1.2.tar.gz | 3,944 | 54/21/d4b57d8964fb1e26023219ae245f29ac158a13df6aeb5a7773256d528fb3/agentwarden-0.1.2.tar.gz | source | sdist | null | false | 634b05edd209f7bb840c7c0f69435e98 | ef86adef21d72501710dc4ef89e042578357b45e53d9ec944e8f9c5adae62968 | 5421d4b57d8964fb1e26023219ae245f29ac158a13df6aeb5a7773256d528fb3 | null | [] | 230 |
2.4 | auto-playwright | 0.2.0 | Plain English to Playwright Python Automation | # auto-playwright
Plain-English `.flow` files → Playwright (Python) runner.
## Install
### Poetry
```bash
poetry install
```
Dev tools (pytest/formatters) via extras:
```bash
poetry install --extras dev
# or
poetry install --all-extras
```
### pip
```bash
pip install .
# Dev tools
pip install '.[dev]'
```
## Run
```bash
poetry run ap examples/amazon_search.flow
```
### Safety / Logging env vars
- `AUTO_PLAYWRIGHT_ALLOWED_DOMAINS` (optional): comma-separated hostname allowlist for **top-level** navigations.
- Supports exact hosts (`example.com`) and wildcards (`*.example.com`).
- If unset/empty, no allowlist is enforced.
- `AUTO_PLAYWRIGHT_POLICY_FILE` (optional): YAML policy file path.
- `allowed_domains`: additional domain allowlist.
- `blocked_actions`: DSL actions to block globally (for governance), e.g. `["sleep"]`.
  - Example policy: `tools/runtime_policy.example.yaml` (a minimal sketch also follows this list)
- `AUTO_PLAYWRIGHT_PERSISTENT_PROFILE` (optional, default `0`): use persistent browser profile (`1`/`true`) vs isolated ephemeral context (default).
- `AUTO_PLAYWRIGHT_PROFILE_DIR` (optional): profile path used when persistent profile is enabled.
- Logs and `run_report.*` are automatically redacted for common secrets (tokens/keys in env + URL query params).
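A minimal policy sketch for `AUTO_PLAYWRIGHT_POLICY_FILE` (hypothetical values; see `tools/runtime_policy.example.yaml` for the canonical format):
```yaml
# Allow only example.com and its subdomains; block the `sleep` DSL action globally.
allowed_domains:
  - example.com
  - "*.example.com"
blocked_actions:
  - sleep
```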
Run a directory (batch mode):
```bash
poetry run ap examples/
```
## Artifacts / Output layout
Each run writes all artifacts under:
```
output/<flow>/<timestamp>/
run_report.json
run_report.html
artifacts/
trace.zip (optional)
video/ (optional)
errors/error_step_*.png
code/ (debug/dump-code)
final/ (final healed test)
flows/ (generated flow, if different)
```
## Debug / code generation
Generate both plain + intelligent code and keep the browser open:
```bash
poetry run ap examples/amazon_search.flow --debug
```
Generate code only (no execution):
```bash
poetry run ap examples/amazon_search.flow --dump-code
```
## Tracing / Video / Screenshots
```bash
poetry run ap examples/amazon_search.flow --trace on
poetry run ap examples/amazon_search.flow --trace on-failure
poetry run ap examples/amazon_search.flow --video on
poetry run ap examples/amazon_search.flow --video on-failure
# enabled by default; use --no-screenshot-on-failure to disable
poetry run ap examples/amazon_search.flow --no-screenshot-on-failure
```
## Tests (unit / smoke)
No real browser/network is used in tests.
```bash
poetry run pytest
```
| text/markdown | Arif Shah | null | null | null | null | null | [
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: 3.13",
"Programming Language :: Python :: 3.14"
] | [] | null | null | <4.0,>=3.10 | [] | [] | [] | [
"black; extra == \"dev\" or extra == \"all\"",
"isort; extra == \"dev\" or extra == \"all\"",
"mypy; extra == \"dev\" or extra == \"all\"",
"openai<3.0.0,>=2.16.0",
"pip-audit; extra == \"dev\" or extra == \"all\"",
"playwright<2.0.0,>=1.57.0",
"pytest; extra == \"dev\" or extra == \"all\"",
"pytest-cov; extra == \"dev\" or extra == \"all\"",
"python-decouple<4.0,>=3.8",
"pyyaml<7.0.0,>=6.0.3",
"ruff; extra == \"dev\" or extra == \"all\""
] | [] | [] | [] | [] | twine/6.2.0 CPython/3.11.4 | 2026-02-20T11:09:43.867387 | auto_playwright-0.2.0.tar.gz | 40,045 | 6b/10/a298fabc1a903ce4437733e291d519a57c275a4e798afe899bc3207c7c7e/auto_playwright-0.2.0.tar.gz | source | sdist | null | false | 971bd08eade3649f708ea6653cbf1da0 | fddbf4e553b37271cb1a5597c534e63f5c66057bfe03e5ddcab6a11ace0a4303 | 6b10a298fabc1a903ce4437733e291d519a57c275a4e798afe899bc3207c7c7e | null | [] | 247 |
2.4 | sbatchman | 1.0.2 | A utility to create, launch and monitor code experiments on SLURM, PBS, or local machines. | <p align="center" style="padding: 0; margin: 0;">
<img src="https://github.com/LorenzoPichetti/SbatchMan/blob/main/docs/images/SbatchManLogo.png" alt="SbatchManLogo" style="width: 6cm;">
</p>
# SbatchMan
A utility to create, launch, and monitor code experiments on SLURM, PBS, or local machines.
# Documentation
You can find a comprehensive documentation at [https://sbatchman.readthedocs.io/en/latest/](https://sbatchman.readthedocs.io/en/latest/)
PyPi project: [https://pypi.org/project/sbatchman/](https://pypi.org/project/sbatchman/)
| text/markdown | Lorenzo Pichetti | Thomas Pasquali <thomas.pasquali@unitn.it>, Salvatore Andaloro <sbatchman@sasso.dev> | null | null | null | null | [
"Programming Language :: Python :: 3",
"License :: OSI Approved :: MIT License",
"Operating System :: OS Independent"
] | [] | null | null | >=3.8 | [] | [] | [] | [
"typer",
"textual",
"rich",
"PyYAML",
"platformdirs",
"pandas"
] | [] | [] | [] | [
"Homepage, https://github.com/LorenzoPichetti/SbatchMan",
"Bug Tracker, https://github.com/LorenzoPichetti/SbatchMan/issues"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-20T11:09:42.000753 | sbatchman-1.0.2.tar.gz | 38,247 | 05/f7/add1e6d0181aaaf5c49c96d34b665655634d4940a45374030bd84bca23b7/sbatchman-1.0.2.tar.gz | source | sdist | null | false | 3d6fedf73f92fc12034e9a0d31485f2d | d8564c5555b7a212b63c4126e4fa7e7e7d0f5ea1453cb50ed0b10953afb84643 | 05f7add1e6d0181aaaf5c49c96d34b665655634d4940a45374030bd84bca23b7 | null | [] | 217 |
2.4 | fixtures | 4.3.0 | Fixtures, reusable state for writing clean tests and more. | ************************************************************
fixtures: Fixtures with cleanups for testing and convenience
************************************************************
*fixtures* defines a Python contract for reusable state / support logic,
primarily for unit testing. Helper and adaption logic is included to make it
easy to write your own fixtures using the fixtures contract. Glue code is
provided that makes using fixtures that meet the ``Fixtures`` contract in
``unittest`` compatible test cases easy and straight forward.
Dependencies
============
* Python
This is the base language fixtures is written in and for.
The ``fixtures[streams]`` extra adds:
* ``testtools`` <https://launchpad.net/testtools>
``testtools`` provides helpful glue functions for the details API used to report
information about a fixture (whether it's used in a testing or production
environment).
For use in a unit test suite using the included glue, you will need a test
environment that supports ``TestCase.addCleanup``. Writing your own glue code
is easy. Alternatively, you can simply use Fixtures directly without any
support code.
To run the test suite for fixtures, ``testtools`` is needed.
To see exactly what version of Python is supported, see
``requires-python`` in ``pyproject.toml``.
Why Fixtures
============
Standard Python ``unittest`` provides no obvious method for making and reusing
state needed in a test case other than by adding a method on the test class.
This scales poorly - complex helper functions propagating up a test class
hierarchy is a regular pattern when this is done. Mocking, while a great tool,
doesn't itself prevent this (and helpers to mock complex things can accumulate
in the same way if placed on the test class).
By defining a uniform contract where helpers have no dependency on the test
class we permit all the regular code hygiene activities to take place without
the distorting influence of being in a class hierarchy that is modelling an
entirely different thing - which is what helpers on a ``TestCase`` suffer from.
About Fixtures
==============
A fixture represents some state. Each fixture has attributes on it that are
specific to the fixture. For instance, a fixture representing a directory that
can be used for temporary files might have an attribute ``path``.
Most fixtures have complete ``pydoc`` documentation, so be sure to check
``pydoc fixtures`` for usage information.
Creating Fixtures
=================
Minimally, subclass ``Fixture``, define ``_setUp`` to initialize your state,
schedule a cleanup for when ``cleanUp`` is called, and you're done:
.. code-block:: python
>>> import unittest
>>> import fixtures
>>> class NoddyFixture(fixtures.Fixture):
... def _setUp(self):
... self.frobnozzle = 42
... self.addCleanup(delattr, self, 'frobnozzle')
This will initialize ``frobnozzle`` when ``setUp`` is called, and when
``cleanUp`` is called get rid of the ``frobnozzle`` attribute. Prior to version
1.3.0 *fixtures* recommended overriding ``setUp``. This is still supported, but
since it is harder to write leak-free fixtures in this fashion, it is not
recommended.
If your fixture has diagnostic data - for instance the log file of an
application server, or log messages - it can expose that by creating a content
object (``testtools.content.Content``) and calling ``addDetail``:
.. code-block:: python
>>> from testtools.content import text_content
>>> class WithLog(fixtures.Fixture):
... def _setUp(self):
... self.addDetail('message', text_content('foo bar baz'))
The method ``useFixture`` will use another fixture, call ``setUp`` on it, call
``self.addCleanup(thefixture.cleanUp)``, attach any details from it and return
the fixture. This allows simple composition of different fixtures:
.. code-block:: python
>>> class ReusingFixture(fixtures.Fixture):
... def _setUp(self):
... self.noddy = self.useFixture(NoddyFixture())
There is a helper for adapting a function or function pair into Fixtures. It
puts the result of the function in ``fn_result``:
.. code-block:: python
>>> import os.path
>>> import shutil
>>> import tempfile
>>> def setup_function():
... return tempfile.mkdtemp()
>>> def teardown_function(fixture):
... shutil.rmtree(fixture)
>>> fixture = fixtures.FunctionFixture(setup_function, teardown_function)
>>> fixture.setUp()
>>> print (os.path.isdir(fixture.fn_result))
True
>>> fixture.cleanUp()
This can be expressed even more pithily:
.. code-block:: python
>>> fixture = fixtures.FunctionFixture(tempfile.mkdtemp, shutil.rmtree)
>>> fixture.setUp()
>>> print (os.path.isdir(fixture.fn_result))
True
>>> fixture.cleanUp()
Another variation is ``MethodFixture`` which is useful for adapting alternate
fixture implementations to Fixture:
.. code-block:: python
>>> class MyServer:
... def start(self):
... pass
... def stop(self):
... pass
>>> server = MyServer()
>>> fixture = fixtures.MethodFixture(server, server.start, server.stop)
You can also combine existing fixtures using ``CompoundFixture``:
.. code-block:: python
>>> noddy_with_log = fixtures.CompoundFixture([NoddyFixture(),
... WithLog()])
>>> with noddy_with_log as x:
... print (x.fixtures[0].frobnozzle)
42
The Fixture API
===============
The example above introduces some of the ``Fixture`` API. In order to be able
to clean up after a fixture has been used, all fixtures define a ``cleanUp``
method which should be called when a fixture is finished with.
Because it's nice to be able to build a particular set of related fixtures in
advance of using them, fixtures also have a ``setUp`` method which should be
called before trying to use them.
One common desire with fixtures that are expensive to create is to reuse them
in many test cases; to support this the base ``Fixture`` also defines a
``reset`` which calls ``self.cleanUp(); self.setUp()``. Fixtures that can more
efficiently make themselves reusable should override this method. This can then
be used to share state across multiple tests via things like ``testresources``,
``setUpClass``, or ``setUpModule``.
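For example, reusing the ``NoddyFixture`` from above (a minimal illustration; the default ``reset`` simply performs a clean-up followed by a fresh ``setUp``):
.. code-block:: python
>>> fixture = NoddyFixture()
>>> fixture.setUp()
>>> fixture.reset()
>>> fixture.frobnozzle
42
>>> fixture.cleanUp()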
When using a fixture with a test you can manually call the ``setUp`` and
``cleanUp`` methods. More convenient though is to use the included glue from
``fixtures.TestWithFixtures`` which provides a mixin defining ``useFixture``
(camel case because ``unittest`` is camel case throughout) method. It will call
``setUp`` on the fixture, call ``self.addCleanup(fixture)`` to schedule a
cleanup, and return the fixture. This lets one write:
.. code-block:: python
>>> import testtools
>>> import unittest
Note that we use ``testtools.TestCase``. ``testtools`` has its own
implementation of ``useFixture`` so there is no need to use
``fixtures.TestWithFixtures`` with ``testtools.TestCase``:
.. code-block:: python
>>> class NoddyTest(testtools.TestCase, fixtures.TestWithFixtures):
... def test_example(self):
... fixture = self.useFixture(NoddyFixture())
... self.assertEqual(42, fixture.frobnozzle)
>>> result = unittest.TestResult()
>>> _ = NoddyTest('test_example').run(result)
>>> print (result.wasSuccessful())
True
Fixtures implement the context protocol, so you can also use a fixture as a
context manager:
.. code-block:: python
>>> with fixtures.FunctionFixture(setup_function, teardown_function) as fixture:
... print (os.path.isdir(fixture.fn_result))
True
When multiple cleanups error, ``fixture.cleanUp()`` will raise a wrapper
exception rather than choosing an arbitrary single exception to raise:
.. code-block:: python
>>> import sys
>>> from fixtures.fixture import MultipleExceptions
>>> class BrokenFixture(fixtures.Fixture):
... def _setUp(self):
... self.addCleanup(lambda:1/0)
... self.addCleanup(lambda:1/0)
>>> fixture = BrokenFixture()
>>> fixture.setUp()
>>> try:
... fixture.cleanUp()
... except MultipleExceptions:
... exc_info = sys.exc_info()
>>> print (exc_info[1].args[0][0].__name__)
ZeroDivisionError
Fixtures often expose diagnostic details that can be useful for tracking down
issues. The ``getDetails`` method will return a dict of all the attached
details but can only be called before ``cleanUp`` is called. Each detail
object is an instance of ``testtools.content.Content``:
.. code-block:: python
>>> with WithLog() as l:
... print(l.getDetails()['message'].as_text())
foo bar baz
Errors in setUp
+++++++++++++++
The examples above used ``_setUp`` rather than ``setUp`` because the base
class implementation of ``setUp`` acts to reduce the chance of leaking
external resources if an error is raised from ``_setUp``. Specifically,
``setUp`` contains a try/except block which catches all exceptions, captures
any registered detail objects, and calls ``self.cleanUp`` before propagating
the error. As long as you take care to register any cleanups before calling
the code that may fail, this will cause them to be cleaned up. The captured
detail objects are provided to the args of the raised exception.
If the error that occurred was a subclass of ``Exception`` then ``setUp`` will
raise ``MultipleExceptions`` with the last element being a ``SetupError`` that
contains the detail objects. Otherwise, to prevent causing normally
uncatchable errors like ``KeyboardInterrupt`` being caught inappropriately in
the calling layer, the original exception will be raised as-is and no
diagnostic data other than that from the original exception will be available.
Shared Dependencies
+++++++++++++++++++
A common use case within complex environments is having some fixtures shared by
other ones.
Consider the case of testing using a ``TempDir`` with two fixtures built on top
of it; say a small database and a web server. Writing either one is nearly
trivial. However handling ``reset()`` correctly is hard: both the database and
web server would reasonably expect to be able to discard operating system
resources they may have open within the temporary directory before its removed.
A recursive ``reset()`` implementation would work for one, but not both.
Calling ``reset()`` on the ``TempDir`` instance between each test is probably
desirable but we don't want to have to do a complete ``cleanUp`` of the higher
layer fixtures (which would make the ``TempDir`` unused and trivially
resettable). We have a few options available to us.
Imagine that the webserver does not depend on the DB fixture in any way - we
just want the webserver and DB fixture to coexist in the same tempdir.
A simple option is to just provide an explicit dependency fixture for the
higher layer fixtures to use. This pushes complexity out of the core and onto
users of fixtures:
.. code-block:: python
>>> class WithDep(fixtures.Fixture):
... def __init__(self, tempdir, dependency_fixture):
... super(WithDep, self).__init__()
... self.tempdir = tempdir
... self.dependency_fixture = dependency_fixture
... def setUp(self):
... super(WithDep, self).setUp()
... self.addCleanup(self.dependency_fixture.cleanUp)
... self.dependency_fixture.setUp()
... # we assume that at this point self.tempdir is usable.
>>> DB = WithDep
>>> WebServer = WithDep
>>> tempdir = fixtures.TempDir()
>>> db = DB(tempdir, tempdir)
>>> server = WebServer(tempdir, db)
>>> server.setUp()
>>> server.cleanUp()
Another option is to write the fixtures to gracefully handle a dependency
being reset underneath them. This is insufficient if the fixtures would
block the dependency resetting (for instance by holding file locks open
in a tempdir - on Windows this will prevent the directory being deleted).
Another approach which ``fixtures`` neither helps nor hinders is to raise
a signal of some sort for each user of a fixture before it is reset. In the
example here, ``TempDir`` might offer a subscribers attribute that both the
DB and web server would be registered in. Calling ``reset`` or ``cleanUp``
on the tempdir would trigger a callback to all the subscribers; the DB and
web server reset methods would look something like:
.. code-block:: python
>>> def reset(self):
... if not self._cleaned:
... self._clean()
(Their action on the callback from the tempdir would be to do whatever work
was needed and set ``self._cleaned``.) This approach has the (perhaps)
surprising effect that resetting the webserver may reset the DB - if the
webserver were to be depending on ``tempdir.reset`` as a way to reset the
webserver's state.
Another approach which is not currently implemented is to provide an object
graph of dependencies and a reset mechanism that can traverse that, along with
a separation between 'reset starting' and 'reset finishing' - the DB and
webserver would both have their ``reset_starting`` methods called, then the
tempdir would be reset, and finally the DB and webserver would have
``reset_finishing`` called.
Stock Fixtures
==============
In addition to the ``Fixture``, ``FunctionFixture`` and ``MethodFixture``
classes, fixtures includes a number of pre-canned fixtures. The API docs for
fixtures will list the complete set of these, should the docs be out of date or
not to hand. For the complete feature set of each fixture please see the API
docs.
``ByteStream``
++++++++++++++
Trivial adapter to make a ``BytesIO`` (though it may in future auto-spill to
disk for large content) and expose that as a detail object, for automatic
inclusion in test failure descriptions. Very useful in combination with
``MonkeyPatch``:
.. code-block:: python
>>> fixture = fixtures.ByteStream('my-content')
>>> fixture.setUp()
>>> with fixtures.MonkeyPatch('sys.something', fixture.stream):
... pass
>>> fixture.cleanUp()
This requires the ``fixtures[streams]`` extra.
``EnvironmentVariable``
+++++++++++++++++++++++
Isolate your code from environmental variables, delete them or set them to a
new value:
.. code-block:: python
>>> fixture = fixtures.EnvironmentVariable('HOME')
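For example, pinning a variable to a known value for the duration of the fixture (a small sketch; the optional second argument is the new value, and omitting it deletes the variable instead):
.. code-block:: python
>>> import os
>>> with fixtures.EnvironmentVariable('MY_TEST_VAR', 'some-value'):
...     print(os.environ.get('MY_TEST_VAR'))
some-value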
``FakeLogger``
++++++++++++++
Isolate your code from an external logging configuration - so that your test
gets the output from logged messages, but they don't go to e.g. the console:
.. code-block:: python
>>> fixture = fixtures.FakeLogger()
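For instance, a minimal sketch (assuming the default arguments, which capture messages sent to the root logger at ``INFO`` and above; the captured text is exposed via the ``output`` attribute):
.. code-block:: python
>>> import logging
>>> with fixtures.FakeLogger() as fixture:
...     logging.info('something happened')
...     print('something happened' in fixture.output)
True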
``FakePopen``
+++++++++++++
Pretend to run an external command rather than needing it to be present to run
tests:
.. code-block:: python
>>> from io import BytesIO
>>> fixture = fixtures.FakePopen(lambda _: {'stdout': BytesIO(b'foobar')})
``LogHandler``
++++++++++++++
Replace or extend a logger's handlers. The behavior of this fixture depends on
the value of the ``nuke_handlers`` parameter: if ``true``, the logger's
existing handlers are removed and replaced by the provided handler, while if
``false`` the logger's set of handlers is extended by the provided handler:
.. code-block:: python
>>> from logging import StreamHandler
>>> fixture = fixtures.LogHandler(StreamHandler())
``MockPatchObject``
+++++++++++++++++++
Adapts ``unittest.mock.patch.object`` to be used as a fixture:
.. code-block:: python
>>> class Fred:
... value = 1
>>> fixture = fixtures.MockPatchObject(Fred, 'value', 2)
>>> with fixture:
... Fred().value
2
>>> Fred().value
1
``MockPatch``
+++++++++++++
Adapts ``unittest.mock.patch`` to be used as a fixture:
.. code-block:: python
>>> fixture = fixtures.MockPatch('subprocess.Popen.returncode', 3)
``MockPatchMultiple``
+++++++++++++++++++++
Adapts ``unittest.mock.patch.multiple`` to be used as a ``fixture``:
.. code-block:: python
>>> fixture = fixtures.MockPatchMultiple('subprocess.Popen', returncode=3)
``MonkeyPatch``
+++++++++++++++
Control the value of a named Python attribute
.. code-block:: python
>>> def fake_open(path, mode):
... pass
>>> fixture = fixtures.MonkeyPatch('builtins.open', fake_open)
Note that there are some complexities when patching methods - please see the
API documentation for details.
``NestedTempfile``
++++++++++++++++++
Change the default directory that the ``tempfile`` module places temporary
files and directories in. This can be useful for containing the noise created
by code which doesn't clean up its temporary files. This does not affect
temporary file creation where an explicit containing directory was provided:
.. code-block:: python
>>> fixture = fixtures.NestedTempfile()
``PackagePathEntry``
++++++++++++++++++++
Adds a single directory to the path for an existing Python package. This adds
to the ``package.__path__`` list. If the directory is already in the path,
nothing happens; if it isn't, it is added on ``setUp`` and removed on
``cleanUp``:
.. code-block:: python
>>> fixture = fixtures.PackagePathEntry('package/name', '/foo/bar')
``PythonPackage``
+++++++++++++++++
Creates a Python package directory. Particularly useful for testing code that
dynamically loads packages/modules, or for mocking out the command line entry
points to Python programs:
.. code-block:: python
>>> fixture = fixtures.PythonPackage('foo.bar', [('quux.py', '')])
``PythonPathEntry``
+++++++++++++++++++
Adds a single directory to ``sys.path``. If the directory is already in the
path, nothing happens; if it isn't, it is added on ``setUp`` and removed on
``cleanUp``:
.. code-block:: python
>>> fixture = fixtures.PythonPathEntry('/foo/bar')
``Stream``
++++++++++
Trivial adapter to expose a file-like object as a detail object, for automatic
inclusion in test failure descriptions. ``StringStream`` and ``ByteStream``
are concrete implementations of this fixture.
This requires the ``fixtures[streams]`` extra.
``StringStream``
++++++++++++++++
Trivial adapter to make a ``StringIO`` (though it may in future auto-spill to
disk for large content) and expose that as a detail object, for automatic
inclusion in test failure descriptions. Very useful in combination with
``MonkeyPatch``:
.. code-block:: python
>>> fixture = fixtures.StringStream('stdout')
>>> fixture.setUp()
>>> with fixtures.MonkeyPatch('sys.stdout', fixture.stream):
... pass
>>> fixture.cleanUp()
This requires the ``fixtures[streams]`` extra.
``TempDir``
+++++++++++
Create a temporary directory and clean it up later:
.. code-block:: python
>>> fixture = fixtures.TempDir()
The created directory is stored in the ``path`` attribute of the fixture after
``setUp``.
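For example (a minimal sketch):
.. code-block:: python
>>> fixture = fixtures.TempDir()
>>> fixture.setUp()
>>> fixture.path  # the newly created directory
>>> fixture.cleanUp()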
``TempHomeDir``
+++++++++++++++
Create a temporary directory and set it as ``$HOME`` in the environment:
.. code-block:: python
>>> fixture = fixtures.TempHomeDir()
The created directory is stored in the ``path`` attribute of the fixture after
``setUp``.
The environment will now have ``$HOME`` set to the same path, and the value
will be returned to its previous value after ``tearDown``.
``Timeout``
+++++++++++
Aborts if the covered code takes more than a specified number of whole wall-clock
seconds.
There are two possibilities, controlled by the ``gentle`` argument: when gentle,
an exception will be raised and the test (or other covered code) will fail.
When not gentle, the entire process will be terminated, which is less clean,
but more likely to break hangs where no Python code is running.
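For example, to fail a test that runs for more than 30 seconds (a minimal
sketch; see the API docs for the exact signature):
.. code-block:: python
>>> fixture = fixtures.Timeout(30, gentle=True)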
.. caution::
Only one timeout can be active at any time across all threads in a single
process. Using more than one has undefined results. (This could be improved
by chaining alarms.)
.. note::
Currently supported only on Unix because it relies on the ``alarm`` system
call.
``WarningsCapture``
+++++++++++++++++++
Capture warnings for later analysis:
.. code-block:: python
>>> fixture = fixtures.WarningsCapture()
The captured warnings are stored in the ``captures`` attribute of the fixture
after ``setUp``.
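For instance (a minimal sketch):
.. code-block:: python
>>> import warnings
>>> fixture = fixtures.WarningsCapture()
>>> fixture.setUp()
>>> warnings.warn('careful')
>>> fixture.captures  # list of captured warnings
>>> fixture.cleanUp()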
``WarningsFilter``
++++++++++++++++++
Configure warnings filters during test runs:
.. code-block:: python
>>> fixture = fixtures.WarningsFilter(
... [
... {
... 'action': 'ignore',
... 'message': 'foo',
... 'category': DeprecationWarning,
... },
... ]
... )
Order is important: entries closer to the front of the list override entries
later in the list, if both match a particular warning.
Contributing
============
Fixtures has its project homepage on `GitHub
<https://github.com/testing-cabal/fixtures>`_.
License
=======
Copyright (c) 2010, Robert Collins <robertc@robertcollins.net>
Licensed under either the Apache License, Version 2.0, or the BSD 3-clause
license, at the user's choice. Copies of both licenses are available in the
project source as Apache-2.0 and BSD. You may not use this file except in
compliance with one of these two licenses.
Unless required by applicable law or agreed to in writing, software
distributed under these licenses is distributed on an "AS IS" BASIS, WITHOUT
WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
license you chose for the specific language governing permissions and
limitations under that license.
| text/x-rst | null | Robert Collins <robertc@robertcollins.net> | null | null | Apache-2.0 or BSD | null | [
"Development Status :: 6 - Mature",
"Intended Audience :: Developers",
"License :: OSI Approved :: Apache Software License",
"License :: OSI Approved :: BSD License",
"Operating System :: OS Independent",
"Programming Language :: Python",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: 3.13",
"Programming Language :: Python :: 3.14",
"Programming Language :: Python :: Implementation :: CPython",
"Programming Language :: Python :: Implementation :: PyPy",
"Topic :: Software Development :: Quality Assurance",
"Topic :: Software Development :: Testing",
"Typing :: Typed"
] | [] | null | null | >=3.10 | [] | [] | [] | [
"docutils; extra == \"docs\"",
"testtools; extra == \"streams\"",
"testtools; extra == \"test\""
] | [] | [] | [] | [
"Homepage, https://github.com/testing-cabal/fixtures",
"Bug Tracker, https://github.com/testing-cabal/fixtures/issues",
"Source Code, https://github.com/testing-cabal/fixtures"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-20T11:09:37.294382 | fixtures-4.3.0.tar.gz | 46,637 | c9/87/554b9583ae1dcc0d3ae63f41be2c42237a6fb884b4f781a549dbb05c028f/fixtures-4.3.0.tar.gz | source | sdist | null | false | 86e4ea1a9c6b85f1d6f4ac7f4edfc799 | 02e18e0e7a9f088337b4e51ae102750788cb34dd882eae96ea76ec5f231996fe | c987554b9583ae1dcc0d3ae63f41be2c42237a6fb884b4f781a549dbb05c028f | null | [
"AUTHORS",
"COPYING"
] | 7,890 |
2.4 | loom-pipeline | 0.3.0 | Visual pipeline editor and runner for task automation | 
[](https://loom-examples.onrender.com/)
[](https://github.com/ljubobratovicrelja/loom/actions/workflows/ci.yml)
[](https://ljubobratovicrelja.github.io/loom/)
[](https://pypi.org/project/loom-pipeline/)
A lightweight visual pipeline runner for research.
Connect your Python scripts into a graph, tweak parameters, run experiments, see results — without setting up Airflow or learning a workflow framework.
**[Try the live demo](https://loom-examples.onrender.com/)** — no installation required. Browse and run the [example pipelines](examples/) in your browser. (First load may take ~30s to wake up.)
Loom gives you a CLI runner and visual editor for pipelines defined in YAML. Your scripts stay as regular Python with argparse — no framework to learn, no rewrites needed.
It's designed for research workflows. For production orchestration, tools like Airflow or Kubeflow are better suited.
## Installation
```bash
# Core runner only
pip install loom-pipeline
# With visual editor
pip install loom-pipeline[ui]
```
That's it. No configuration files to create, no external services to manage.
## Quick Start
Clone the repo and try an example:
```bash
git clone https://github.com/ljubobratovicrelja/loom.git
cd loom
pip install -e .[ui,examples]
# Run a pipeline from the command line
loom examples/image-processing/pipeline.yml
```
```
Pipeline: 3 step(s) to run [parallel]
----------------------------------------
[RUNNING] grayscale
[grayscale] Converted to grayscale: .loom-url-cache/35bb4a6_Lenna.png -> data/grayscale.png
[SUCCESS] grayscale
[RUNNING] blur
[blur] Gaussian blur (radius=15): data/grayscale.png -> data/blurred.png
[SUCCESS] blur
[RUNNING] edge_detect
[edge_detect] Edge detection: data/grayscale.png -> data/edges.png
[SUCCESS] edge_detect
----------------------------------------
Completed: 3/3 steps succeeded
```
Or open it in the visual editor:
```bash
# Edit a single pipeline
loom-ui examples/image-processing/pipeline.yml
# Browse all example pipelines
loom-ui examples/
```
The editor opens in your browser where you can see the pipeline graph, run steps, and view outputs.
## Building Your Own Pipeline
Add Loom to your project's environment:
```bash
pip install loom-pipeline[ui] # or just loom-pipeline for CLI only
```
Now you can run pipelines from within your project. Here's how to set one up.
### 1. Point it at your scripts
Say you have some Python scripts that process data:
```
tasks/
extract_features.py # Takes video, outputs CSV
train_model.py # Takes CSV, outputs model
evaluate.py # Takes model + test data, outputs metrics
```
### 2. Describe the pipeline in YAML
```yaml
# experiment.yml
variables:
video: data/raw/recording.mp4
features: data/processed/features.csv
model: models/classifier.pt
metrics: results/metrics.json
parameters:
learning_rate: 0.001
epochs: 100
pipeline:
- name: extract
task: tasks/extract_features.py
inputs:
video: $video
outputs:
-o: $features
- name: train
task: tasks/train_model.py
inputs:
data: $features
outputs:
-o: $model
args:
--lr: $learning_rate
--epochs: $epochs
- name: evaluate
task: tasks/evaluate.py
inputs:
model: $model
outputs:
-o: $metrics
```
### 3. Run it
```bash
# Run the full pipeline
loom experiment.yml
# Run just one step
loom experiment.yml --step train
# Run from a step onward
loom experiment.yml --from train
# Try different parameters
loom experiment.yml --set learning_rate=0.01 --set epochs=200
# Override file paths
loom experiment.yml --var video=other_recording.mp4
# Run steps in parallel
loom experiment.yml --parallel --max-workers 4
# Preview without executing
loom experiment.yml --dry-run
# Clean all data (move to trash) and re-run from scratch
loom experiment.yml --clean
loom experiment.yml
# Preview what would be cleaned
loom experiment.yml --clean-list
```
### 4. Or use the visual editor
```bash
loom-ui experiment.yml
```
This opens a browser-based editor where you can:
- See your pipeline as a visual graph
- Drag and drop to reorganize
- Run individual steps and see output in real-time
- Quickly see which outputs exist (green) vs missing (grey)
You can also point it at a directory to browse multiple pipelines:
```bash
loom-ui experiments/ # Browse all pipelines in a folder
```
Each pipeline should be in its own subdirectory with a `pipeline.yml` file inside. See [examples/](examples/) for the expected structure.
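For instance, a layout like the following would be picked up (the experiment names here are placeholders):
```
experiments/
  baseline-run/
    pipeline.yml
  tuned-run/
    pipeline.yml
```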
## How Scripts Work
Your scripts stay normal Python with argparse. Just add a YAML block in the docstring so Loom knows the interface:
```python
"""Extract features from video.
---
inputs:
video:
type: video
description: Input video file
outputs:
-o:
type: csv
description: Output features
args:
--sample-rate:
type: int
default: 30
description: Frames to sample per second
---
"""
import argparse
def main():
parser = argparse.ArgumentParser()
parser.add_argument("video")
parser.add_argument("-o", "--output", required=True)
parser.add_argument("--sample-rate", type=int, default=30)
args = parser.parse_args()
# ... your code ...
if __name__ == "__main__":
main()
```
The YAML frontmatter is optional but enables the editor to show input/output types and provide better validation.
## Use Cases
**Parameter exploration**: Create parallel branches in your pipeline to test different configurations side by side.
**Reproducible experiments**: The YAML file captures your entire experiment setup. Commit it to git alongside your code.
**Iterative development**: Run just the steps you're working on. Loom tracks dependencies so upstream steps run only when needed.
**Result organization**: Variables point to file paths, so your outputs are organized by experiment configuration.
## Philosophy
Loom is intentionally minimal:
- **No database** — Everything is files: your scripts, YAML configs, and outputs
- **No external services** — The visual editor runs a local server that stops when you close it
- **No lock-in** — Your scripts work with or without Loom
- **No magic** — Loom just builds shell commands and runs them
This makes it easy to adopt incrementally. Start with one experiment, see if it helps, expand from there.
## License
MIT License — see [LICENSE](LICENSE) for details.
---
*Built for researchers who want to see their experiments, not manage infrastructure.*
| text/markdown | null | Relja Ljubobratovic <ljubobratovic.relja@gmail.com> | null | null | null | null | [
"Development Status :: 3 - Alpha",
"Intended Audience :: Developers",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: 3.13"
] | [] | null | null | >=3.11 | [] | [] | [] | [
"PyYAML>=6.0",
"requests>=2.28.0",
"fastapi>=0.100.0; extra == \"ui\"",
"uvicorn>=0.23.0; extra == \"ui\"",
"websockets>=12.0; extra == \"ui\"",
"ruamel.yaml>=0.18.0; extra == \"ui\"",
"send2trash>=1.8.2; extra == \"ui\"",
"opencv-python-headless>=4.8.0; extra == \"ui\"",
"numpy>=1.24.0; extra == \"examples\"",
"scipy>=1.10.0; extra == \"examples\"",
"matplotlib>=3.7.0; extra == \"examples\"",
"opencv-python-headless>=4.8.0; extra == \"examples\"",
"loom-pipeline[examples,ui]; extra == \"dev\"",
"pytest>=7.0.0; extra == \"dev\"",
"pytest-cov>=4.0.0; extra == \"dev\"",
"pytest-asyncio>=0.21.0; extra == \"dev\"",
"httpx>=0.24.0; extra == \"dev\"",
"ruff>=0.1.0; extra == \"dev\"",
"mypy>=1.0.0; extra == \"dev\"",
"types-PyYAML>=6.0.0; extra == \"dev\"",
"types-requests>=2.28.0; extra == \"dev\"",
"mkdocs-material>=9.0.0; extra == \"dev\""
] | [] | [] | [] | [
"Homepage, https://github.com/ljubobratovicrelja/loom",
"Repository, https://github.com/ljubobratovicrelja/loom",
"Documentation, https://ljubobratovicrelja.github.io/loom/"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-20T11:09:28.966631 | loom_pipeline-0.3.0.tar.gz | 337,202 | df/2c/229b9c2302c72b2b4584d043273559242c84110ae6b1ffb2733748187d12/loom_pipeline-0.3.0.tar.gz | source | sdist | null | false | c8c5f4be6ff00ffe9fe751da1884a3dc | 4036aa47dfd73abb7d369df46e78c192e9099d3b5c9c81b6f7de56f4156a4247 | df2c229b9c2302c72b2b4584d043273559242c84110ae6b1ffb2733748187d12 | MIT | [
"LICENSE"
] | 227 |
2.4 | scanpydoc | 0.17.2 | A series of Sphinx extensions to get maintainable numpydoc style documentation. | scanpydoc |pypi| |docs| |tests| |checks| |cov|
==============================================
A collection of Sphinx extensions similar to (but more flexible than) numpydoc.
Check the self-documenting documentation at https://icb-scanpydoc.readthedocs-hosted.com
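As with any Sphinx extension, individual modules can be enabled in ``conf.py``
(a minimal sketch; the two extension names below are only examples, see the
documentation for the full list):
.. code-block:: python
extensions = [
    "scanpydoc.elegant_typehints",
    "scanpydoc.definition_list_typed_field",
]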
.. |pypi| image:: https://img.shields.io/pypi/v/scanpydoc.svg
:target: https://pypi.org/project/scanpydoc/
:alt: PyPI version
.. |docs| image:: https://readthedocs.com/projects/icb-scanpydoc/badge/
:target: https://icb-scanpydoc.readthedocs-hosted.com/
:alt: doc build status
.. |tests| image:: https://github.com/theislab/scanpydoc/actions/workflows/ci.yml/badge.svg
:target: https://github.com/theislab/scanpydoc/actions/workflows/ci.yml
:alt: python test status
.. |checks| image:: https://results.pre-commit.ci/badge/github/theislab/scanpydoc/main.svg
:target: https://results.pre-commit.ci/latest/github/theislab/scanpydoc/main
:alt: pre-commit.ci status
.. |cov| image:: https://codecov.io/gh/theislab/scanpydoc/branch/main/graph/badge.svg
:target: https://codecov.io/gh/theislab/scanpydoc
:alt: coverage
| text/x-rst | null | Philipp Angerer <phil.angerer@gmail.com> | null | null | null | null | [
"Framework :: Sphinx :: Extension",
"Intended Audience :: Developers",
"Programming Language :: Python :: 3",
"Topic :: Documentation :: Sphinx",
"Topic :: Software Development :: Libraries :: Python Modules",
"Typing :: Typed"
] | [] | null | null | >=3.12 | [] | [] | [] | [
"sphinx>=8.2",
"pre-commit; extra == \"dev\"",
"myst-parser; extra == \"doc\"",
"sphinx; extra == \"doc\"",
"sphinx-autodoc-typehints>=1.15.2; extra == \"doc\"",
"sphinx-book-theme>=1.1.0; extra == \"doc\"",
"myst-parser; extra == \"myst\"",
"coverage; extra == \"test\"",
"defusedxml; extra == \"test\"",
"legacy-api-wrap; extra == \"test\"",
"pytest; extra == \"test\"",
"sphinx>=8.1.0; extra == \"test\"",
"sphinx-book-theme>=1.1.0; extra == \"theme\"",
"sphinx-autodoc-typehints>=1.15.2; extra == \"typehints\""
] | [] | [] | [] | [
"Source, https://github.com/theislab/scanpydoc/",
"Documentation, https://icb-scanpydoc.readthedocs-hosted.com/"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-20T11:09:15.737056 | scanpydoc-0.17.2.tar.gz | 43,701 | 4b/d6/dfb191dbea490daf955f0c6192d94fe778c4d252dcd27bcdf11da28cb519/scanpydoc-0.17.2.tar.gz | source | sdist | null | false | fc8e0fa817300c41a751948e48d49ef2 | 7f811aeee3d54663dca8635d891250928c2bbb6423f1fa046164449793e265dc | 4bd6dfb191dbea490daf955f0c6192d94fe778c4d252dcd27bcdf11da28cb519 | GPL-3.0-or-later | [
"LICENSE"
] | 404 |
2.4 | TS-PCA | 0.0.12 | A package for unsupervised representation and principal component analysis of irregularly sampled time series with variable size relying on the shape analysis literature. | ### PCA for time series
Authors: Samuel Gruffaz, Thibaut Germain
This repository gathers the functions developed in the paper [**“Shape Analysis for Time Series”**](https://proceedings.neurips.cc/paper_files/paper/2024/file/ad86418f7bdfa685cd089e028efd75cd-Paper-Conference.pdf), located in the `TS_PCA` directory.
It is possible to represent **irregularly sampled time series of different lengths** and to apply **kernel PCA** to these representations in order to identify the main modes of shape variation in the time series.

<p align="center">
Time series graphs $(\mathsf{G}_i)_{i\in[5]}$ are represented as deformations of a reference time series graph $\mathsf{G}_0$ by transformations $(\chi_{\alpha_i})_{i\in[5]}$ parameterized by $(\alpha_i)_{i\in[5]}$.
</p>
These methods work particularly well when the analyzed dataset is **homogeneous in terms of shapes**, for example when each time series corresponds to:
* a heartbeat recording,
* a respiratory cycle,
* an electricity consumption pattern,
* or a heating load curve.
# Dataset format
The main requirement is to represent the time series dataset as a collection of **time series graphs**. Each time series graph should be an array `T` of shape `(n_samples, d+1)`, where `T[:, 0]` contains the time points, and `T[:, 1:]` contains the time series values of dimension `d`.
The full dataset should be an array of fixed shape `(n_time_series, n_samples_max, d+1)` along with a corresponding mask of shape `(n_time_series, n_samples_max, 1)`, where `n_samples_max` is the maximum number of samples among all time series. This accommodates the fact that each time series may have a different number of samples.
Default parameters work well when the distance between two consecutive time points is approximately 1.
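For illustration, here is a minimal NumPy sketch of how two series of different lengths could be padded into such arrays (the zero padding and the use of 1 to mark valid samples are assumptions; check the package examples for the exact convention):
```python
import numpy as np

# Two toy one-dimensional time series (d = 1) with different lengths.
t1 = np.linspace(0.0, 9.0, 10)   # 10 samples
t2 = np.linspace(0.0, 6.0, 7)    # 7 samples
series = [
    np.column_stack([t1, np.sin(t1)]),
    np.column_stack([t2, np.cos(t2)]),
]

n_samples_max = max(s.shape[0] for s in series)
dataset = np.zeros((len(series), n_samples_max, 2))       # (n_time_series, n_samples_max, d+1)
dataset_mask = np.zeros((len(series), n_samples_max, 1))  # 1 marks valid samples
for i, s in enumerate(series):
    dataset[i, : s.shape[0], :] = s
    dataset_mask[i, : s.shape[0], 0] = 1.0
```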
# TS-PCA: Basic Usage Example
This example demonstrates the basic workflow of using the `TS-PCA` package to analyze time-series data using TS-LDDMM representations and Kernel PCA.
```python
# Import or generate a toy dataset
N = 8
dataset, dataset_mask, graph_ref, graph_ref_mask = generate_easy_dataset(N=N)
# dataset is an array of shape (8, 200, 2) and dataset_mask an array of shape (8, 200, 1)
# Initialize the TS-PCA class
class_test = TS_PCA_()
# Step 1: Fit TS-LDDMM representations
# This learns the temporal-shape embeddings of the dataset.
# Set learning_graph_ref=True to learn the reference graph; here we keep it fixed.
class_test.fit_TS_LDDMM_representations(
dataset,
dataset_mask,
learning_graph_ref=False,
graph_ref=graph_ref,
graph_ref_mask=graph_ref_mask
)
# Step 2: Fit Kernel PCA on the learned representations
class_test.fit_kernel_PCA()
# Step 3: Visualize the principal components
class_test.plot_components()
```

<p align="center">
After applying Kernel PCA to the TS-LDDMM features $(\alpha_j)_{j \in [N]}$ extracted from a dataset of mouse respiratory cycles under drug exposure, we visualize the deformations $\chi_\alpha \cdot \mathsf{G}_0$ of the reference time series graph $\mathsf{G}_0$ as $\alpha$ varies along the principal component $PC_0$.
Notably, $\alpha=- 1.5 \sigma \times PC_0$ captures the deformation accounting for the effect of the drug on the respiratory cycle.
</p>
The `Docs` directory contains the files used to build the package documentation.
The `pages` directory contains the pages used to launch a **Streamlit application** from the menu, allowing users to test the different building blocks of the code.
**Coming next:**
* Complete documentation
* New kernels
| text/markdown | null | Samuel Gruffaz <samuel.gruffaz@ens-paris-saclay.fr>, Thibaut Germain <thibaut.germain@ens-paris-saclay.fr> | null | Samuel Gruffaz <samuel.gruffaz@ens-paris-saclay.fr>, Thibaut Germain <thibaut.germain@ens-paris-saclay.fr> | Creative Commons Legal Code
CC0 1.0 Universal
CREATIVE COMMONS CORPORATION IS NOT A LAW FIRM AND DOES NOT PROVIDE
LEGAL SERVICES. DISTRIBUTION OF THIS DOCUMENT DOES NOT CREATE AN
ATTORNEY-CLIENT RELATIONSHIP. CREATIVE COMMONS PROVIDES THIS
INFORMATION ON AN "AS-IS" BASIS. CREATIVE COMMONS MAKES NO WARRANTIES
REGARDING THE USE OF THIS DOCUMENT OR THE INFORMATION OR WORKS
PROVIDED HEREUNDER, AND DISCLAIMS LIABILITY FOR DAMAGES RESULTING FROM
THE USE OF THIS DOCUMENT OR THE INFORMATION OR WORKS PROVIDED
HEREUNDER.
Statement of Purpose
The laws of most jurisdictions throughout the world automatically confer
exclusive Copyright and Related Rights (defined below) upon the creator
and subsequent owner(s) (each and all, an "owner") of an original work of
authorship and/or a database (each, a "Work").
Certain owners wish to permanently relinquish those rights to a Work for
the purpose of contributing to a commons of creative, cultural and
scientific works ("Commons") that the public can reliably and without fear
of later claims of infringement build upon, modify, incorporate in other
works, reuse and redistribute as freely as possible in any form whatsoever
and for any purposes, including without limitation commercial purposes.
These owners may contribute to the Commons to promote the ideal of a free
culture and the further production of creative, cultural and scientific
works, or to gain reputation or greater distribution for their Work in
part through the use and efforts of others.
For these and/or other purposes and motivations, and without any
expectation of additional consideration or compensation, the person
associating CC0 with a Work (the "Affirmer"), to the extent that he or she
is an owner of Copyright and Related Rights in the Work, voluntarily
elects to apply CC0 to the Work and publicly distribute the Work under its
terms, with knowledge of his or her Copyright and Related Rights in the
Work and the meaning and intended legal effect of CC0 on those rights.
1. Copyright and Related Rights. A Work made available under CC0 may be
protected by copyright and related or neighboring rights ("Copyright and
Related Rights"). Copyright and Related Rights include, but are not
limited to, the following:
i. the right to reproduce, adapt, distribute, perform, display,
communicate, and translate a Work;
ii. moral rights retained by the original author(s) and/or performer(s);
iii. publicity and privacy rights pertaining to a person's image or
likeness depicted in a Work;
iv. rights protecting against unfair competition in regards to a Work,
subject to the limitations in paragraph 4(a), below;
v. rights protecting the extraction, dissemination, use and reuse of data
in a Work;
vi. database rights (such as those arising under Directive 96/9/EC of the
European Parliament and of the Council of 11 March 1996 on the legal
protection of databases, and under any national implementation
thereof, including any amended or successor version of such
directive); and
vii. other similar, equivalent or corresponding rights throughout the
world based on applicable law or treaty, and any national
implementations thereof.
2. Waiver. To the greatest extent permitted by, but not in contravention
of, applicable law, Affirmer hereby overtly, fully, permanently,
irrevocably and unconditionally waives, abandons, and surrenders all of
Affirmer's Copyright and Related Rights and associated claims and causes
of action, whether now known or unknown (including existing as well as
future claims and causes of action), in the Work (i) in all territories
worldwide, (ii) for the maximum duration provided by applicable law or
treaty (including future time extensions), (iii) in any current or future
medium and for any number of copies, and (iv) for any purpose whatsoever,
including without limitation commercial, advertising or promotional
purposes (the "Waiver"). Affirmer makes the Waiver for the benefit of each
member of the public at large and to the detriment of Affirmer's heirs and
successors, fully intending that such Waiver shall not be subject to
revocation, rescission, cancellation, termination, or any other legal or
equitable action to disrupt the quiet enjoyment of the Work by the public
as contemplated by Affirmer's express Statement of Purpose.
3. Public License Fallback. Should any part of the Waiver for any reason
be judged legally invalid or ineffective under applicable law, then the
Waiver shall be preserved to the maximum extent permitted taking into
account Affirmer's express Statement of Purpose. In addition, to the
extent the Waiver is so judged Affirmer hereby grants to each affected
person a royalty-free, non transferable, non sublicensable, non exclusive,
irrevocable and unconditional license to exercise Affirmer's Copyright and
Related Rights in the Work (i) in all territories worldwide, (ii) for the
maximum duration provided by applicable law or treaty (including future
time extensions), (iii) in any current or future medium and for any number
of copies, and (iv) for any purpose whatsoever, including without
limitation commercial, advertising or promotional purposes (the
"License"). The License shall be deemed effective as of the date CC0 was
applied by Affirmer to the Work. Should any part of the License for any
reason be judged legally invalid or ineffective under applicable law, such
partial invalidity or ineffectiveness shall not invalidate the remainder
of the License, and in such case Affirmer hereby affirms that he or she
will not (i) exercise any of his or her remaining Copyright and Related
Rights in the Work or (ii) assert any associated claims and causes of
action with respect to the Work, in either case contrary to Affirmer's
express Statement of Purpose.
4. Limitations and Disclaimers.
a. No trademark or patent rights held by Affirmer are waived, abandoned,
surrendered, licensed or otherwise affected by this document.
b. Affirmer offers the Work as-is and makes no representations or
warranties of any kind concerning the Work, express, implied,
statutory or otherwise, including without limitation warranties of
title, merchantability, fitness for a particular purpose, non
infringement, or the absence of latent or other defects, accuracy, or
the present or absence of errors, whether or not discoverable, all to
the greatest extent permissible under applicable law.
c. Affirmer disclaims responsibility for clearing rights of other persons
that may apply to the Work or any use thereof, including without
limitation any person's Copyright and Related Rights in the Work.
Further, Affirmer disclaims responsibility for obtaining any necessary
consents, permissions or other rights required for any use of the
Work.
d. Affirmer understands and acknowledges that Creative Commons is not a
party to this document and has no duty or obligation with respect to
this CC0 or use of the Work.
| null | [] | [] | null | null | null | [] | [] | [] | [
"jax>=0.4.30",
"optax>=0.2.3",
"matplotlib>=3.9.1",
"numpy>=2.0.2",
"scipy>=1.13.1",
"pandas>=2.2.2",
"scikit-learn>=1.5.1"
] | [] | [] | [] | [
"Repository, https://github.com/samuelgruffaz/PCA_for_time_series.git"
] | twine/6.2.0 CPython/3.10.19 | 2026-02-20T11:08:59.696915 | ts_pca-0.0.12.tar.gz | 28,042 | 82/c3/bf1c74ccb39dac3edc4b766600e96b6d272d70bbb939891326acc9da67ae/ts_pca-0.0.12.tar.gz | source | sdist | null | false | 25316e412c082f316266dbf7e787c9c2 | bd2b03b1f2dc1dc1b02071fda9de3e3a89c3aa78d464a2166cffea10d520e168 | 82c3bf1c74ccb39dac3edc4b766600e96b6d272d70bbb939891326acc9da67ae | null | [
"LICENSE"
] | 0 |
2.2 | gllm-pipeline-binary | 0.4.34 | A library containing components related to Gen AI applications pipeline orchestration. | # GLLM Pipeline
## Description
A library containing components related to Gen AI applications pipeline orchestration, including routers, steps, and utility functions for building and managing AI application workflows.
---
## Installation
### Prerequisites
Mandatory:
1. Python 3.11+ — [Install here](https://www.python.org/downloads/)
2. pip — [Install here](https://pip.pypa.io/en/stable/installation/)
3. uv — [Install here](https://docs.astral.sh/uv/getting-started/installation/)
Extras (required only for Artifact Registry installations):
1. gcloud CLI (for authentication) — [Install here](https://cloud.google.com/sdk/docs/install), then log in using:
```bash
gcloud auth login
```
---
### Option 1: Install from Artifact Registry
This option requires authentication via the `gcloud` CLI.
```bash
uv pip install \
--extra-index-url "https://oauth2accesstoken:$(gcloud auth print-access-token)@glsdk.gdplabs.id/gen-ai-internal/simple/" \
gllm-pipeline
```
---
### Option 2: Install from PyPI
This option requires no authentication.
However, it installs the **binary wheel** version of the package, which is fully usable but **does not include source code**.
```bash
uv pip install gllm-pipeline-binary
```
---
## Local Development Setup
### Prerequisites
1. Python 3.11+ — [Install here](https://www.python.org/downloads/)
2. pip — [Install here](https://pip.pypa.io/en/stable/installation/)
3. uv — [Install here](https://docs.astral.sh/uv/getting-started/installation/)
4. gcloud CLI — [Install here](https://cloud.google.com/sdk/docs/install), then log in using:
```bash
gcloud auth login
```
5. Git — [Install here](https://git-scm.com/downloads)
6. Access to the [GDP Labs SDK GitHub repository](https://github.com/GDP-ADMIN/gl-sdk)
---
### 1. Clone Repository
```bash
git clone git@github.com:GDP-ADMIN/gl-sdk.git
cd gl-sdk/libs/gllm-pipeline
```
---
### 2. Setup Authentication
Set the following environment variables to authenticate with internal package indexes:
```bash
export UV_INDEX_GEN_AI_INTERNAL_USERNAME=oauth2accesstoken
export UV_INDEX_GEN_AI_INTERNAL_PASSWORD="$(gcloud auth print-access-token)"
export UV_INDEX_GEN_AI_USERNAME=oauth2accesstoken
export UV_INDEX_GEN_AI_PASSWORD="$(gcloud auth print-access-token)"
```
---
### 3. Quick Setup
Run:
```bash
make setup
```
---
### 4. Activate Virtual Environment
```bash
source .venv/bin/activate
```
---
## Local Development Utilities
The following Makefile commands are available for quick operations:
### Install uv
```bash
make install-uv
```
### Install Pre-Commit
```bash
make install-pre-commit
```
### Install Dependencies
```bash
make install
```
### Update Dependencies
```bash
make update
```
### Run Tests
```bash
make test
```
---
## Contributing
Please refer to the [Python Style Guide](https://docs.google.com/document/d/1uRggCrHnVfDPBnG641FyQBwUwLoFw0kTzNqRm92vUwM/edit?usp=sharing)
for information about code style, documentation standards, and SCA requirements.
| text/markdown | null | Dimitrij Ray <dimitrij.ray@gdplabs.id>, Henry Wicaksono <henry.wicaksono@gdplabs.id>, Kadek Denaya <kadek.d.r.diana@gdplabs.id> | null | null | null | null | [] | [] | null | null | <3.13,>=3.11 | [] | [] | [] | [
"pydantic<2.12.0,>=2.11.7",
"gllm-core-binary<0.5.0,>=0.3.0",
"gllm-inference-binary<0.6.0,>=0.5.0",
"aiohttp<3.14.0,>=3.13.3",
"langgraph<2.0.0,>=0.6.0",
"typing-extensions<5.0.0,>=4.5.0",
"coverage<7.5.0,>=7.4.4; extra == \"dev\"",
"mypy<1.16.0,>=1.15.0; extra == \"dev\"",
"pre-commit<3.8.0,>=3.7.0; extra == \"dev\"",
"pytest<8.2.0,>=8.1.1; extra == \"dev\"",
"pytest-asyncio<0.24.0,>=0.23.6; extra == \"dev\"",
"pytest-cov<5.1.0,>=5.0.0; extra == \"dev\"",
"ruff<0.7.0,>=0.6.7; extra == \"dev\"",
"gllm-datastore-binary[chroma]<0.6.0,>=0.5.0; extra == \"cache\"",
"gllm-inference-binary[google]<0.6.0,>=0.5.0; extra == \"multimodal-router\"",
"azure-search-documents<12.0.0,>=11.5.1; extra == \"semantic-router\"",
"semantic-router<0.2.0,>=0.1.0; extra == \"semantic-router\""
] | [] | [] | [] | [] | twine/6.1.0 CPython/3.13.7 | 2026-02-20T11:08:19.153629 | gllm_pipeline_binary-0.4.34-cp312-cp312-macosx_13_0_arm64.whl | 989,409 | 90/73/69db9e45496632155114cd6a185719f3792ee08f41e90deb845b9158c043/gllm_pipeline_binary-0.4.34-cp312-cp312-macosx_13_0_arm64.whl | cp312 | bdist_wheel | null | false | 3ad1091a8c60cfe6bf17a47325bbfa66 | 4ce599abff4799ae356b2bcfea93dab86d89e6031902bb49dfebf2f2a4cd97ee | 907369db9e45496632155114cd6a185719f3792ee08f41e90deb845b9158c043 | null | [] | 377 |
2.4 | nessai-gw | 0.2.0 | Gravitational-wave reparameterisations and proposals for nessai | # nessai-gw
Gravitational-wave specific proposals and reparameterisations for nessai
## Usage
Once installed, these proposals can be used in `nessai` by specifying the
`flow_proposal_class` keyword argument when using the standard nested sampler.
### Example
```python
from nessai.flowsampler import FlowSampler

fs = FlowSampler(
model,
...,
flow_proposal_class="gwflowproposal",
)
```
| text/markdown | null | "Michael J. Williams" <michaeljw1@googlemail.com> | null | null | MIT | nested sampling, normalizing flows, machine learning | [
"Programming Language :: Python :: 3",
"License :: OSI Approved :: MIT License",
"Operating System :: OS Independent"
] | [] | null | null | >=3.9 | [] | [] | [] | [
"nessai>=0.14.0",
"numpy",
"scipy",
"pytest; extra == \"test\"",
"pytest-cov; extra == \"test\"",
"pytest-timeout; extra == \"test\"",
"pytest-rerunfailures; extra == \"test\"",
"pytest-integration; extra == \"test\"",
"pytest-requires; extra == \"test\"",
"bilby; extra == \"bilby\"",
"nessai-bilby; extra == \"bilby\"",
"lalsuite; extra == \"bilby\"",
"astropy; extra == \"bilby\""
] | [] | [] | [] | [
"Homepage, https://github.com/mj-will/nessai-gw"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-20T11:08:09.144850 | nessai_gw-0.2.0.tar.gz | 21,086 | 2b/26/023f98ab7af52e9580038e65f8e6c39abfdc493e41884d0436ab8f6f4019/nessai_gw-0.2.0.tar.gz | source | sdist | null | false | 6a865c7d629a93fcb17837e1d8eaf42c | b1e025200cb98c8cfa5e2590a2e6d891efb6d3a1833eba7d27015c1a2538f2b9 | 2b26023f98ab7af52e9580038e65f8e6c39abfdc493e41884d0436ab8f6f4019 | null | [
"LICENSE"
] | 239 |
2.4 | malac-utils | 1.3.1 | Mapping Language Compiler Utils | # MaLaC Utils
MaLaC-HD (MApping LAnguage Compiler for Health Data) is a tool that you can use to convert mappings between different health data formats to executable code. It can also be used as a library to dynamically execute mappings.
This is the utils package to execute generated mappings. You can use it independently from the `malac-hd` package, together with the appropriate model packages.
[TOC]
## Contributing and Support
Please read [CONTRIBUTING.md](CONTRIBUTING.md) for contributing to cdeHealth projects, which are hosted in the [cdeHealth group](https://gitlab.com/cdehealth) on GitLab.com.
Please read [CODE_OF_CONDUCT.md](CODE_OF_CONDUCT.md) to make participation in our community a harassment-free experience for everyone.
## Authors and acknowledgment
We want to thank
- [ELGA GmbH](https://www.elga.gv.at/) with their [CDA2FHIR](https://collab.hl7.at/display/BAL/AG+ELGA+CDA+Laborbefund+zu+FHIR) projects and
- [AIT Austrian Institute of Technology GmbH](https://www.ait.ac.at/) with their [SmartFOX](https://www.smart-fox.at/) project.
## License
This is an LGPL-licensed project.
| text/markdown | null | cdeHealth-Team <contact-project+cdehealth-malac-hd-52276676-issue-@incoming.gitlab.com> | null | null | null | null | [
"Development Status :: 5 - Production/Stable",
"Environment :: Console",
"Intended Audience :: Information Technology",
"Intended Audience :: Healthcare Industry",
"Intended Audience :: Developers",
"Intended Audience :: Science/Research",
"License :: OSI Approved :: GNU Lesser General Public License v3 (LGPLv3)",
"Natural Language :: English",
"Operating System :: OS Independent",
"Programming Language :: Python",
"Topic :: Software Development :: Code Generators",
"Topic :: Software Development :: Libraries"
] | [] | null | null | >=3.10 | [] | [] | [] | [
"python-dateutil>=2.8.2"
] | [] | [] | [] | [
"Homepage, https://gitlab.com/cdehealth/malac-hd",
"Documentation, https://gitlab.com/cdehealth/malac-hd",
"Release notes, https://gitlab.com/cdehealth/malac-hd/-/releases",
"Source, https://gitlab.com/cdehealth/malac-hd",
"Tracker, https://gitlab.com/cdehealth/malac-hd/-/issues"
] | twine/6.2.0 CPython/3.14.3 | 2026-02-20T11:07:54.764990 | malac_utils-1.3.1.tar.gz | 14,378 | 20/78/b0ef73ed872bb3b67ba84b5b3bf94d70f026fa5a26236acd55fe5058fad0/malac_utils-1.3.1.tar.gz | source | sdist | null | false | 1a1117883f97a30519dca197ac79b347 | 57ed9217f6435a657724e472e285687a4432c2a0a8837f1148fd6b2c038a8857 | 2078b0ef73ed872bb3b67ba84b5b3bf94d70f026fa5a26236acd55fe5058fad0 | null | [
"LICENSE"
] | 246 |
2.4 | gitlab-harvester | 0.2.13 | Build a GitLab instance project index and search repositories for sensitive keywords (API-only, no cloning). | # GitlabHarvester — Global GitLab Code & Secret Search Tool (Python)




**GitlabHarvester** is a fast, scalable tool for searching keywords across an entire GitLab instance using the API — without cloning repositories.
Built for **security audits, secret discovery, compliance checks, and large-scale code intelligence** across thousands of projects.
> Global term search across a full GitLab instance — especially valuable for GitLab CE environments.
---
## ⚡ Quick Start
Search a keyword:
```bash
gitlab-harvester -u https://gitlab.example.com -t $TOKEN --search password
```
Search from file:
```bash
gitlab-harvester -u https://gitlab.example.com -t $TOKEN --terms-file words.txt
```
Build project index only:
```bash
gitlab-harvester -u https://gitlab.example.com -t $TOKEN -m dump-index
```
Deduplicate results:
```bash
gitlab-harvester -m dedup --input-file session.jsonl --output-file clean.jsonl
```
Convert JSONL → JSON:
```bash
gitlab-harvester -m convert --input-file session.jsonl --output-file result.json
```
---
## 🚀 Overview
GitLab Community Edition does not provide full instance-wide code search like EE.
GitlabHarvester fills this gap by:
* building a lightweight instance project index
* scanning repositories via API
* streaming results in JSONL
* supporting resumable sessions
* keeping memory usage constant
Designed to operate efficiently on environments with **10k–100k repositories**.
---
## 🔍 Key Advantages
| Problem | Solution |
| ----------------------- | ---------------------- |
| No global search | Instance-wide scan |
| Cloning thousands of repos | API-only scanning |
| Large instances | Streaming architecture |
| Repeated audits | Cached project index |
---
## ✨ Features
* Instance-wide keyword search
* No repository cloning
* JSONL project index
* Branch scanning strategies
* Smart fork analysis
* Resume interrupted scans
* Streaming output
* Low memory footprint
* Automation-friendly
* Built-in post-processing tools
---
## 📦 Installation
### Recommended — install from PyPI
```bash
pipx install gitlab-harvester
```
Run:
```bash
gitlab-harvester --help
```
---
### Alternative — pip
```bash
pip install gitlab-harvester
```
---
### Development install
```bash
git clone https://github.com/Cur1iosity/GitlabHarvester.git
cd GitlabHarvester
pip install .
```
Editable mode:
```bash
pip install -e .
```
---
### Install latest dev version
```bash
pipx install git+https://github.com/Cur1iosity/GitlabHarvester.git
```
---
## Requirements
* Python **3.10+**
* GitLab token with **read_api** permission
---
## 🌿 Branch Control
Two independent controls:
* `--index-branches` — stored branches
* `--scan-branches` — scanned branches
Example:
```bash
gitlab-harvester -u ... -t ... --scan-branches 10
```
Store all + scan all:
```bash
gitlab-harvester -u ... -t ... --index-branches all --scan-branches all
```
Shortcut:
```bash
--branches N
```
---
## 🍴 Fork Strategies
```
--forks skip|include|branch-diff|all-branches
```
Recommended → **branch-diff**
| Mode | Behavior |
| ------------ | ------------------------------ |
| skip | ignore forks |
| include | treat as normal repos |
| branch-diff | scan default + unique branches |
| all-branches | full exhaustive scan |
---
## 💾 Sessions & Resume
Create session:
```bash
gitlab-harvester -u ... -t ... --terms-file words.txt --session audit
```
Resume:
```bash
gitlab-harvester -u ... -t ... --session-file audit.jsonl --resume
```
---
## 📊 Output
Two file types:
| File | Purpose |
| ------------- | ----------------------- |
| Project index | cached project metadata |
| Session file | hits + checkpoints |
Format → JSONL (streaming-friendly)
---
## 🧰 Post-Processing Modes
GitlabHarvester includes built-in post-processing utilities.
### Deduplicate results
```bash
gitlab-harvester -m dedup \
--input-file session.jsonl \
--output-file clean.jsonl
```
Options:
* `--sqlite-path file.sqlite`
* `--hash-algo blake2b|sha1|sha256`
* `--no-normalize-hits`
---
### Convert JSONL → JSON
```bash
gitlab-harvester -m convert \
--input-file session.jsonl \
--output-file result.json
```
Pretty print:
```bash
jq . result.json > formatted.json
```
---
## 🏗 Architecture
```
GitLab API
↓
Indexer
↓
Branch planner
↓
Matcher
↓
JSONL stream
```
Constant memory usage regardless of instance size.
---
## 🎯 Typical Use Cases
* secret discovery
* credential leaks detection
* internal audits
* redteam/pentest reconnaissance
* DevSecOps validation
* large-scale code search
---
## 🔐 Security Notice
Use only on GitLab instances where you are authorized to perform scanning.
---
## 🤝 Contributing
Pull requests and ideas welcome.
---
## 📜 License
MIT
| text/markdown | null | Cur1iosity <cur1iosity@protonmail.com> | null | null | MIT | gitlab, gitlab-api, code-search, secret-scanning, secret-detection, credential-leak, leak-detection, security-audit, devsecops, pentest, red-team, redteam, osint, recon, compliance, cli | [
"Development Status :: 3 - Alpha",
"Environment :: Console",
"Intended Audience :: Developers",
"Intended Audience :: Information Technology",
"Intended Audience :: System Administrators",
"License :: OSI Approved :: MIT License",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: 3.13",
"Topic :: Security",
"Topic :: Utilities",
"Topic :: Software Development :: Quality Assurance"
] | [] | null | null | >=3.10 | [] | [] | [] | [
"python-gitlab>=8.0.0",
"tqdm>=4.66.0",
"requests>=2.31.0"
] | [] | [] | [] | [
"Homepage, https://github.com/Cur1iosity/GitlabHarvester",
"Repository, https://github.com/Cur1iosity/GitlabHarvester",
"Issues, https://github.com/Cur1iosity/GitlabHarvester/issues",
"Documentation, https://github.com/Cur1iosity/GitlabHarvester#readme",
"Changelog, https://github.com/Cur1iosity/GitlabHarvester/releases"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-20T11:07:52.077679 | gitlab_harvester-0.2.13.tar.gz | 26,872 | e9/ee/e0f968bf454b773fa9dfeb27f007466d6124dc79536b8ddc3fd446f0bbb9/gitlab_harvester-0.2.13.tar.gz | source | sdist | null | false | 454b3a1e88ba6f94b26c2a63ffba074d | 4c7b973eadbede131da3f3b9cf4d7e06507a60828f8db891daa14cafc85de318 | e9eee0f968bf454b773fa9dfeb27f007466d6124dc79536b8ddc3fd446f0bbb9 | null | [
"LICENSE"
] | 231 |
2.4 | c2c-gpx | 0.0.5 | tool for exporting camptocamp searches into gpx files | # c2c_gpx
Export camptocamp search data to a GPX file intended for OsmAnd or OruxMaps.
## Install
```shell
python -m pip install c2c_gpx
```
This will add a new command `c2c_gpx` to your PATH.
Use `c2c_gpx -h` for help.
## How-To
Go to camptocamp.org and search for your document/activity/area of interest, add any filter you want.
When satisfied, use the url as a parameter to `c2c_gpx`:
```bash
c2c_gpx https://www.camptocamp.org/routes?bbox=1234,5678,9101,11213&act=rock_climbing -o my_routes.gpx
```
The resulting file can be opened in any map app.
## External resources
- https://gpx.studio/app to see your gpx file online
- https://osmand.net/docs/technical/osmand-file-formats/osmand-gpx/
- https://www.camptocamp.org/articles/838875/en/api-c2c-v6
- https://github.com/c2corg/v6_api/wiki
| text/markdown | null | null | null | null | null | null | [] | [] | null | null | >=3.10 | [] | [] | [] | [
"attrs==25.4.0",
"cattrs==26.1.0",
"certifi==2026.1.4",
"charset-normalizer==3.4.4",
"gpxpy==1.6.2",
"idna==3.11",
"markdown==3.10.2",
"platformdirs==4.9.2",
"pyproj==3.7.2",
"requests-cache==1.3.0",
"requests==2.32.5",
"tqdm==4.67.3",
"typing-extensions==4.15.0",
"url-normalize==2.2.1",
"urllib3==2.6.3"
] | [] | [] | [] | [
"Homepage, https://github.com/UlysseV/c2c_gpx",
"Repository, https://github.com/UlysseV/c2c_gpx.git",
"Issues, https://github.com/UlysseV/c2c_gpx/issues"
] | twine/6.2.0 CPython/3.12.3 | 2026-02-20T11:07:49.704220 | c2c_gpx-0.0.5.tar.gz | 7,629 | 67/b3/7e124e8b4f512bb2ecdc7a4a1db51c8c04c348edb29ec2f6db2b51155d5b/c2c_gpx-0.0.5.tar.gz | source | sdist | null | false | b88ec612a18c607f04f57f1c86cc7a45 | e98d12a070a6dc6b663f95820744197ef3bfaaf660c60cd619d1791cfe494f4f | 67b37e124e8b4f512bb2ecdc7a4a1db51c8c04c348edb29ec2f6db2b51155d5b | MIT | [
"LICENSE"
] | 232 |
2.4 | med-paper-assistant | 0.3.8 | A medical paper writing assistant using MCP support | # Medical Paper Assistant
<p align="center">
<a href="https://www.python.org/downloads/"><img alt="Python" src="https://img.shields.io/badge/Python-3.12+-blue?logo=python&logoColor=white"></a>
<a href="https://modelcontextprotocol.io/"><img alt="MCP" src="https://img.shields.io/badge/MCP-Compatible-green"></a>
<a href="https://github.com/features/copilot"><img alt="Copilot" src="https://img.shields.io/badge/GitHub_Copilot-Ready-8957e5?logo=github&logoColor=white"></a>
<a href="https://github.com/u9401066/med-paper-assistant"><img alt="License" src="https://img.shields.io/badge/License-Apache_2.0-blue"></a>
</p>
<p align="center">
<img alt="Windows" src="https://img.shields.io/badge/Windows-0078D6?logo=windows&logoColor=white">
<img alt="Linux" src="https://img.shields.io/badge/Linux-FCC624?logo=linux&logoColor=black">
<img alt="macOS" src="https://img.shields.io/badge/macOS-000000?logo=apple&logoColor=white">
</p>
<p align="center">
<b>🔬 An Integrated AI Toolkit for Medical Paper Writing</b><br>
<i>3 MCP Servers · ~107 Tools · 26 Skills · 14 Prompt Workflows — All in VS Code</i>
</p>
> 📖 [繁體中文版](README.zh-TW.md)
---
## 📦 What's in the Box
This is a **monorepo toolkit** that bundles everything a medical researcher needs — from literature search to Word/LaTeX export — into one integrated VS Code environment.
| Component | Type | Tools | Description |
| ------------------------------------------------------------------ | ---------------------- | ------ | ------------------------------------------------------------------------- |
| **[mdpaper](#-mdpaper-mcp-tools)** | Core MCP Server | 57 | Paper writing: projects, references, drafts, analysis, validation, export |
| **[pubmed-search](https://github.com/u9401066/pubmed-search-mcp)** | MCP Server (submodule) | 37 | PubMed/Europe PMC/CORE search, PICO, citation metrics, session mgmt |
| **[CGU](https://github.com/u9401066/creativity-generation-unit)** | MCP Server (submodule) | 13 | Creative generation: brainstorm, deep think, spark collision |
| **[VS Code Extension](vscode-extension/)** | Extension | 3 cmds | MCP server lifecycle, `@mdpaper` chat participant |
| **[Dashboard](dashboard/)** | Next.js Web App | — | Project management UI, diagram editor |
| **[Foam](https://foambubble.github.io/foam/)** | VS Code Extension | — | `[[wikilink]]` citation linking, hover preview, graph view |
| **[Skills](.claude/skills/)** | Agent Workflows | 26 | Guided multi-tool workflows (literature review, draft writing...) |
| **[Prompts](.github/prompts/)** | Prompt Files | 14 | `/mdpaper.search`, `/mdpaper.draft`, etc. |
**External MCP Servers** (optional, installed via uvx):
- **drawio** — CONSORT/PRISMA flowchart generation
- **zotero-keeper** — Import references from Zotero library
### How the Pieces Fit Together
```mermaid
flowchart LR
subgraph IDE["VS Code"]
Agent["Copilot Agent<br/>26 Skills · 14 Prompts"]
Foam[Foam Plugin]
Ext[MedPaper Extension]
Dash[Dashboard]
end
subgraph MCP["MCP Servers (~107 tools)"]
mdpaper["mdpaper<br/>57 tools<br/>Draft · Export · Validate"]
pubmed["pubmed-search<br/>37 tools<br/>Search · Metrics"]
cgu["CGU<br/>13 tools<br/>Deep Think · Ideas"]
end
subgraph Data["Project Data"]
proj[("projects/{slug}/<br/>· .memory/<br/>· references/<br/>· drafts/")]
end
Agent <-->|MCP| mdpaper
Agent <-->|MCP| pubmed
Agent <-->|MCP| cgu
mdpaper -->|HTTP API| pubmed
Foam <-->|Wikilinks| proj
mdpaper <--> proj
Ext --> mdpaper
Dash --> proj
```
---
## 🎯 Why This Tool?
**Traditional paper writing tools** require you to know exactly what you want before you start. But research is rarely that linear.
**Medical Paper Assistant** is different:
- 🔍 **Explore First, Decide Later** — Browse literature freely, save interesting papers, then decide your research direction
- 💬 **Conversational Workflow** — Chat naturally with AI to refine your ideas, not fight with forms
- 🧭 **Guided Process** — Step-by-step prompts guide you from concept to publication-ready manuscript
- 🔗 **All-in-One** — Search, write, cite, analyze, export — all integrated inside VS Code
| Traditional Tools | Medical Paper Assistant |
| ----------------------------------- | -------------------------------------- |
| Fixed templates, rigid workflow | Flexible, exploratory approach |
| Separate apps for search/write/cite | All-in-one: ~107 tools in VS Code |
| Manual reference management | Auto-save with verified PubMed data |
| Export then format | Direct Word export with journal styles |
| Learn complex UI | Natural language conversation |
---
## 🚀 Quick Start
### Prerequisites
| Requirement | Version | Check |
| ------------------ | ---------- | ------------------- |
| **Python** | 3.12+ | `python3 --version` |
| **Git** | Any recent | `git --version` |
| **VS Code** | Latest | Help → About |
| **GitHub Copilot** | Extension | Extensions panel |
### Install
```bash
# Clone with submodules
git clone --recursive https://github.com/u9401066/med-paper-assistant.git
cd med-paper-assistant
# Run setup script
./scripts/setup.sh # Linux/macOS
.\scripts\setup.ps1 # Windows PowerShell
```
The script will:
1. ✅ Create Python virtual environment (`.venv/`)
2. ✅ Install all dependencies (via `uv`)
3. ✅ Create `.vscode/mcp.json` configuration
4. ✅ Verify installation
**Verify**: In Copilot Chat, type `/mcp` — you should see `mdpaper` listed 🎉
### Optional Integrations
```bash
# Foam for reference linking (highly recommended)
code --install-extension foam.foam-vscode
# Draw.io for diagram generation
./scripts/setup-integrations.sh && ./scripts/start-drawio.sh
```
---
## 💬 MCP Prompts — Just Type and Go
In Copilot Chat, type these prompts to trigger guided workflows:
| Prompt | Description |
| ------------------- | --------------------------------------------------- |
| `/mdpaper.search` | 🔍 **Start here!** Explore literature, save papers |
| `/mdpaper.concept` | 📝 Develop research concept with novelty validation |
| `/mdpaper.draft` | ✍️ Write manuscript with auto-citations |
| `/mdpaper.analysis` | 📊 Analyze CSV data, generate figures & Table 1 |
| `/mdpaper.format` | 📄 Export to Word with journal formatting |
| `/mdpaper.clarify` | 🔄 Refine specific sections through conversation |
| `/mdpaper.project` | 📁 Create or switch research projects |
| `/mdpaper.strategy` | ⚙️ Configure search strategy (dates, filters) |
| `/mdpaper.help` | ❓ Show all available commands |
> 💡 **Recommended Workflow**: `/mdpaper.search` → `/mdpaper.concept` → `/mdpaper.draft` → `/mdpaper.format`
---
## 🧠 Skill System + Project Memory
**Our core differentiator:** We don't just provide tools — we provide **guided workflows** that know how to combine tools effectively, AND **project memory** that remembers your research journey across sessions.
### What is a Skill?
```
Tool = Single capability (search, save, analyze...)
Skill = Complete knowledge (how to combine tools to accomplish tasks)
```
**26 Skills** covering the full research lifecycle:
| Category | Skills | Triggers |
| -------------- | ----------------------------------------------------------------------------------- | ----------------------------------------- |
| 🔬 Research | `literature-review`, `concept-development`, `concept-validation`, `parallel-search` | "找論文", "search", "concept", "validate" |
| ✍️ Writing | `draft-writing`, `reference-management`, `word-export` | "寫草稿", "draft", "citation", "export" |
| 📁 Management | `project-management`, `memory-updater`, `memory-checkpoint` | "新專案", "切換", "存檔" |
| 🛠️ Development | `git-precommit`, `code-refactor`, `test-generator`, `code-reviewer` | "commit", "refactor", "test" |
### Project Memory
Each project maintains its own `.memory/` folder, so the AI continues previous research coherently:
```
projects/{slug}/
├── .memory/
│ ├── activeContext.md ← Agent's working memory
│ └── progress.md ← Research milestones
├── concept.md ← Research concept (with 🔒 protected sections)
├── references/ ← Foam-compatible literature library
├── drafts/ ← Markdown drafts with [[citations]]
├── data/ ← CSV data files
└── results/ ← Figures, .docx exports
```
---
## ✨ Key Features
### Literature & References
- **PubMed + Europe PMC + CORE** search (37 search tools)
- **PICO parsing** for clinical questions
- **MCP-to-MCP verified data** — PMID sent directly, no agent hallucination
- Layered trust: 🔒 VERIFIED (PubMed) · 🤖 AGENT (AI notes) · ✏️ USER (your notes)
- Foam wikilinks: `[[author2024_12345678]]` with hover preview & backlinks
### Writing & Editing
- **AI draft generation** per section (Introduction, Methods, Results, Discussion)
- **Citation-Aware Editing** — `patch_draft` validates all `[[wikilinks]]` before saving
- **Auto-fix citation format** — `[[12345678]]` → `[[author2024_12345678]]`
- **Novelty validation** — 3-round independent scoring (threshold: 75/100)
- **Anti-AI writing rules** — Evidence funnel structure, no clichés
### Data Analysis
- CSV dataset analysis with descriptive statistics
- Statistical tests (t-test, ANOVA, chi², correlation, Mann-Whitney, Fisher's)
- **Table 1 generator** — Baseline characteristics with automatic variable detection
- Publication-ready figures (matplotlib/seaborn)
### Export & Submission
- **Word export** with journal template support
- Cover letter + highlights generation
- Manuscript consistency checker
- Reviewer response generator (point-by-point format)
- Submission checklist (word count, figure format, etc.)
### Infrastructure
- **DDD Architecture** (Domain-Driven Design) with clean layer separation
- **15 pre-commit hooks** (ruff, mypy, bandit, pytest, prettier, doc-update...)
- **Workspace State** recovery for cross-session continuity
- **uv** for all Python package management
---
## 🏗️ Architecture
```
┌──────────────────────────────────────────────────────────────────────────┐
│ 👤 User Layer │
│ ┌─────────────────┐ ┌──────────────────────────────┐ ┌──────────┐ │
│ │ VS Code │ │ Foam Extension │ │Dashboard │ │
│ │ Editor │ │ [[wikilinks]] autocomplete │ │(Next.js) │ │
│ │ │ │ hover preview · backlinks │ │ │ │
│ └─────────────────┘ └──────────────────────────────┘ └──────────┘ │
└──────────────────────────────────────────────────────────────────────────┘
│
▼
┌──────────────────────────────────────────────────────────────────────────┐
│ 🤖 Copilot Agent (Orchestrator) │
│ 26 Skills + 14 Prompt Workflows + Agent Customization │
│ /mdpaper.search → /mdpaper.concept → /mdpaper.draft → export │
└───────┬──────────────────┬──────────────────┬──────────────────┬─────────┘
│ │ │ │
▼ ▼ ▼ ▼
┌───────────────┐ ┌───────────────┐ ┌───────────────┐ ┌───────────────┐
│ 📝 mdpaper │ │🔍 pubmed- │ │💡 cgu │ │🔌 External │
│ 57 tools │ │ search │ │ 13 tools │ │ MCPs (uvx) │
│ │ │ 37 tools │ │ │ │ │
│ • projects │ │ • PubMed │ │ • brainstorm │ │ 🎨 drawio │
│ • references │ │ • Europe PMC │ │ • deep_think │ │ • diagrams │
│ • drafts │ │ • CORE │ │ • spark │ │ │
│ • validation │ │ • PICO │ │ • methods │ │ 📖 zotero │
│ • analysis │ │ • Gene/Chem │ │ │ │ • import refs │
│ • export │ │ • Session │ │ │ │ │
└───────┬───────┘ └───────────────┘ └───────────────┘ └───────────────┘
│
▼
┌──────────────────────────────────────────────────────────────────────────┐
│ 💾 Local Storage │
│ projects/{slug}/ │
│ ├── concept.md ← Research concept with 🔒 protected sections │
│ ├── references/{pmid}/ ← Foam-compatible .md + metadata.json │
│ ├── drafts/ ← Markdown drafts with [[citations]] │
│ ├── data/ ← CSV data files │
│ └── results/ ← Figures, .docx exports │
└──────────────────────────────────────────────────────────────────────────┘
```
### MCP-to-MCP Direct Communication
When saving references, data flows directly between MCP servers — the Agent only passes a PMID, never full metadata:
```
Agent: "save PMID:24891204"
│
▼
mdpaper.save_reference_mcp(pmid="24891204")
│ Direct HTTP call (not through Agent)
▼
pubmed-search: GET /api/cached_article/24891204
│ Returns verified PubMed data
▼
Saved with layered trust:
🔒 VERIFIED: PubMed data (immutable)
🤖 AGENT: AI notes (marked source)
✏️ USER: Your notes (editable)
```
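In code, this direct hop is just an ordinary HTTP request from one server to the other. The sketch below illustrates the pattern; the port and the helper name are assumptions, while the `/api/cached_article/{pmid}` endpoint and the `_source` trust marker come from the flow above and the reference format shown later.
```python
import requests

# Assumed local address of the pubmed-search MCP server; adjust to your setup.
PUBMED_SEARCH_URL = "http://localhost:8765"

def fetch_verified_article(pmid: str) -> dict:
    """Fetch verified article metadata directly from pubmed-search.

    The agent only supplies the PMID; the full record never passes through
    the language model, so it cannot be altered or hallucinated.
    """
    resp = requests.get(f"{PUBMED_SEARCH_URL}/api/cached_article/{pmid}", timeout=30)
    resp.raise_for_status()
    article = resp.json()
    # Tag the record with its trust layer before writing it under references/{pmid}/
    article["_source"] = {"mcp": "pubmed-search", "verified": True}
    return article
```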
---
## 🛠️ mdpaper MCP Tools
**57 active tools** organized into 7 groups:
### 📁 Project Management (15 tools)
Projects, exploration mode, workspace state recovery, diagram management.
| Key Tools | Description |
| ------------------------------------------------------ | ------------------------------------ |
| `create_project` / `switch_project` / `delete_project` | Project lifecycle |
| `start_exploration` / `convert_exploration_to_project` | Explore-first workflow |
| `get_workspace_state` / `sync_workspace_state` | Cross-session recovery |
| `save_diagram` / `list_diagrams` | Draw.io integration |
| `setup_project_interactive` | Interactive paper type configuration |
### 📚 Reference Management (10 tools)
Save, search, format, and manage references with Foam integration.
| Key Tools | Description |
| --------------------------------------------------- | ------------------------------------------------------------- |
| `save_reference_mcp` | **Recommended** — Save by PMID via MCP-to-MCP (verified data) |
| `list_saved_references` / `search_local_references` | Browse & search library |
| `format_references` / `set_citation_style` | Vancouver / APA / Nature |
| `sync_references` | Sync `[[wikilinks]]` to numbered references |
### ✍️ Draft & Editing (13 tools)
Write, edit, cite — with built-in validation.
| Key Tools | Description |
| ------------------------------------------ | -------------------------------------------------------- |
| `write_draft` / `draft_section` | Create and write sections |
| `get_available_citations` | List all valid `[[citation_key]]` before editing |
| `patch_draft` | **Citation-aware** partial edit with wikilink validation |
| `insert_citation` / `suggest_citations` | Smart citation insertion |
| `scan_draft_citations` / `sync_references` | Citation management |
| `get_section_template` | Section-specific writing guidelines |
### ✅ Validation (3 tools)
| Tool | Description |
| ------------------------ | --------------------------------------------------- |
| `validate_concept` | Full novelty scoring (3 rounds, threshold 75/100) |
| `validate_concept_quick` | Quick structural check |
| `validate_wikilinks` | Auto-fix `[[12345678]]` → `[[author2024_12345678]]` |
| `validate_for_section` | Check concept before writing specific section |
### 📊 Data Analysis (9 tools)
| Tool | Description |
| ---------------------- | ----------------------------------------------------- |
| `analyze_dataset` | Descriptive statistics for CSV |
| `run_statistical_test` | t-test, ANOVA, chi², correlation, etc. |
| `generate_table_one` | Baseline characteristics with auto variable detection |
| `create_plot` | Publication-ready figures |
| `insert_figure` | Insert figure into draft with archive validation |
| `insert_table` | Insert table into draft with archive validation |
| `list_assets` | List figures and tables in project results |
### 📄 Export & Submission (6 + 1 tools)
| Category | Key Tools |
| --------------- | ---------------------------------------------------------------------------- |
| **Word Export** | `export_word`, `list_templates`, `start_document_session`, `verify_document` |
| **Submission** | `generate_cover_letter`, `check_formatting`, `generate_highlights` |
| **Review** | `create_reviewer_response`, `format_revision_changes` |
### 🔍 pubmed-search MCP Tools (37 tools)
| Category | Key Tools |
| --------------- | ------------------------------------------------------------------------- |
| **Search** | `search_literature`, `generate_search_queries`, `parse_pico` |
| **Databases** | PubMed, Europe PMC (fulltext + text mining), CORE (200M+ open access) |
| **Gene/Chem** | `search_gene`, `get_gene_details`, `search_compound`, `search_clinvar` |
| **Exploration** | `find_related_articles`, `find_citing_articles`, `get_article_references` |
| **Export** | `prepare_export` (RIS/BibTeX/CSV), `get_citation_metrics` (iCite RCR) |
| **Session** | `get_session_pmids`, `list_search_history` (survives AI memory limits) |
### 💡 CGU Creative Tools (13 tools)
| Category | Key Tools |
| ------------ | ----------------------------------------------------------- |
| **Ideation** | `generate_ideas`, `spark_collision`, `spark_collision_deep` |
| **Analysis** | `deep_think`, `multi_agent_brainstorm` |
| **Methods** | `list_methods`, `select_method`, `apply_method` |
---
## 🔗 Foam Integration
| Feature | How to Use | Benefit |
| --------------------- | ----------------------------------- | ------------------------------------- |
| **Wikilinks** | `[[greer2017_27345583]]` | Link references in drafts |
| **Hover Preview** | Mouse over any `[[link]]` | See abstract without opening file |
| **Backlinks Panel** | Open reference file | See which drafts cite this paper |
| **Graph View** | `Ctrl+Shift+P` → `Foam: Show Graph` | Visualize paper connections |
| **Project Isolation** | Auto-switches on `switch_project` | Only see current project's references |
### Citation Autocomplete
Type `[[` in any draft to trigger the autocomplete menu:
<!-- prettier-ignore -->
```markdown
According to previous studies [[ ← Type [[ here
┌─────────────────────────────┐
│ 🔍 greer2017_27345583 │
│ smith2020_12345678 │
│ chen2019_87654321 │
└─────────────────────────────┘
```
Search by author (`[[greer`), year (`[[2017`), PMID (`[[27345583`), or keyword (`[[sedation`).
---
## 📚 Reference File Structure
References are stored with **Foam-optimized, layered-trust** structure:
```
references/{pmid}/
├── {citation_key}.md ← YAML frontmatter + abstract (human-readable)
└── metadata.json ← Full metadata (programmatic access)
```
```yaml
---
# 🔒 VERIFIED (from PubMed, immutable)
title: "Complications of airway management"
author:
- { family: Pacheco-Lopez, given: Paulette C }
year: 2014
journal: Respiratory Care
pmid: "24891204"
_source:
mcp: pubmed-search
verified: true
# 🤖 AGENT (AI-generated, marked)
_agent:
notes: "Key review on airway complications"
relevance: high
# Foam
aliases: [pachecolopez2014, "PMID:24891204"]
tags: [reference, airway, review]
---
```
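Since `metadata.json` carries the full record, programmatic access does not require parsing the YAML frontmatter at all. A minimal sketch (the helper and the `projects/my-study` path are illustrative; only the `references/{pmid}/metadata.json` layout comes from the structure above):
```python
import json
from pathlib import Path

def load_reference_metadata(project_dir: str, pmid: str) -> dict:
    """Read the machine-readable metadata for a saved reference."""
    meta_path = Path(project_dir) / "references" / pmid / "metadata.json"
    return json.loads(meta_path.read_text(encoding="utf-8"))

meta = load_reference_metadata("projects/my-study", "24891204")
print(meta.get("title"))
```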
---
## 📂 Project Structure
```
med-paper-assistant/
├── src/med_paper_assistant/ # Core MCP server (DDD architecture)
│ ├── domain/ # Business logic, entities, value objects
│ ├── application/ # Use cases, services
│ ├── infrastructure/ # DAL, external services
│ └── interfaces/mcp/ # MCP server, 57 tools in 7 groups
│
├── integrations/ # Bundled MCP servers
│ ├── pubmed-search-mcp/ # PubMed/PMC/CORE search (37 tools)
│ └── cgu/ # Creative generation (13 tools)
│
├── vscode-extension/ # VS Code Extension
│ ├── src/ # Extension source
│ ├── skills/ # Agent skill definitions
│ └── prompts/ # Quick-action prompts
│
├── dashboard/ # Next.js project management UI
│ └── src/
│
├── projects/ # Research projects (isolated workspaces)
│ └── {slug}/
│ ├── .memory/ # Cross-session AI memory
│ ├── concept.md # Research concept
│ ├── references/ # Local reference library
│ ├── drafts/ # Markdown drafts
│ └── results/ # Figures, exports
│
├── .claude/skills/ # 26 Agent skill definitions
├── .github/prompts/ # 14 Prompt workflow files
├── templates/ # Journal Word templates
├── memory-bank/ # Global project memory
└── tests/ # pytest test suite
```
---
## 🗺️ Roadmap
| Status | Feature | Description |
| ------ | --------------------------- | ------------------------------------------------------ |
| ✅ | **3 MCP Servers** | mdpaper (57) + pubmed-search (37) + CGU (13) |
| ✅ | **Foam Integration** | Wikilinks, hover preview, backlinks, project isolation |
| ✅ | **Project Memory** | `.memory/` for cross-session AI context |
| ✅ | **Table 1 Generator** | Auto-generate baseline characteristics |
| ✅ | **Novelty Validation** | 3-round scoring with 75/100 threshold |
| ✅ | **Citation-Aware Editing** | `patch_draft` with wikilink validation |
| ✅ | **MCP-to-MCP Trust** | Verified PubMed data via direct HTTP |
| ✅ | **Pre-commit Hooks** | 15 hooks (ruff, mypy, bandit, pytest, prettier...) |
| 🔜 | **Full VSX Extension** | TreeView, CodeLens, Diagnostics (Direction C) |
| 🔜 | **Pandoc Export** | Word + LaTeX dual export with CSL citations |
| 📋 | **Systematic Review** | PRISMA flow, Risk of Bias, meta-analysis |
| 📋 | **AI Writing Intelligence** | Citation intelligence, coherence engine |
| 📋 | **REST API Mode** | Expose tools as REST API |
**Architecture Direction**: [Direction C — Full VSX + Foam + Pandoc](ROADMAP.md)
**Legend:** ✅ Complete | 🔜 In Progress | 📋 Planned
---
## 🤝 Contributing
We welcome contributions! See [CONTRIBUTING.md](CONTRIBUTING.md) for guidelines.
- 🐛 **Report bugs** — Open an issue
- 💡 **Suggest features** — Share your ideas
- 🔧 **Submit code** — Fork → Branch → PR
---
## 📄 License
Apache License 2.0 — See [LICENSE](LICENSE)
| text/markdown | null | Eric <medpaper@example.com> | null | null | null | assistant, mcp, medical, paper, pubmed | [] | [] | null | null | >=3.12 | [] | [] | [] | [
"mcp>=1.0.0",
"pandas>=2.0.0",
"pydantic>=2.0.0",
"python-docx>=1.0.0",
"requests>=2.28.0",
"tabulate>=0.9.0",
"biopython>=1.80; extra == \"all\"",
"creativity-generation-unit; extra == \"all\"",
"matplotlib>=3.7.0; extra == \"all\"",
"pubmed-search-mcp; extra == \"all\"",
"pypdf>=3.0.0; extra == \"all\"",
"scipy>=1.10.0; extra == \"all\"",
"seaborn>=0.12.0; extra == \"all\"",
"matplotlib>=3.7.0; extra == \"analysis\"",
"pypdf>=3.0.0; extra == \"analysis\"",
"scipy>=1.10.0; extra == \"analysis\"",
"seaborn>=0.12.0; extra == \"analysis\"",
"creativity-generation-unit; extra == \"creativity\"",
"mypy>=1.0.0; extra == \"dev\"",
"pytest-asyncio>=0.21.0; extra == \"dev\"",
"pytest-cov>=4.0.0; extra == \"dev\"",
"pytest>=7.0.0; extra == \"dev\"",
"ruff>=0.1.0; extra == \"dev\"",
"biopython>=1.80; extra == \"pubmed\"",
"pubmed-search-mcp; extra == \"pubmed\""
] | [] | [] | [] | [] | twine/6.1.0 CPython/3.13.7 | 2026-02-20T11:07:35.153302 | med_paper_assistant-0.3.8.tar.gz | 2,883,043 | 38/16/642c9e408984110613278aeb1ede89da433d15f5f092a85902dc8dfdcf60/med_paper_assistant-0.3.8.tar.gz | source | sdist | null | false | e5bd7d591311b83d464def55a6bbdd8a | f0b45815aabeb768b57afb054819a825d5772fdc9e660b9baf8bdb112ed04cac | 3816642c9e408984110613278aeb1ede89da433d15f5f092a85902dc8dfdcf60 | Apache-2.0 | [
"LICENSE"
] | 226 |
2.4 | nessai | 0.15.2 | Nessai: Nested Sampling with Artificial Intelligence | [](https://doi.org/10.5281/zenodo.4550693)
[](https://pypi.org/project/nessai/)
[](https://anaconda.org/conda-forge/nessai)
[](https://nessai.readthedocs.io/en/latest/?badge=latest)



[](https://codecov.io/gh/mj-will/nessai)
[](https://app.gitter.im/#/room/#nessai:gitter.im)
# nessai: Nested Sampling with Artificial Intelligence
``nessai`` (/ˈnɛsi/): Nested Sampling with Artificial Intelligence
``nessai`` is a nested sampling algorithm for Bayesian Inference that incorporates normalising flows. It is designed for applications where the Bayesian likelihood is computationally expensive.
## Installation
``nessai`` can be installed using ``pip``:
```console
pip install nessai
```
or via ``conda``
```console
conda install -c conda-forge -c pytorch nessai
```
### PyTorch
By default, the installed version of PyTorch will not necessarily match the drivers on your system. To install a different version with the correct CUDA support, see the PyTorch homepage for instructions: https://pytorch.org/.
### Using ``bilby``
As of ``bilby`` version 2.3.0, the recommended way to use ``nessai`` is via the [``nessai-bilby`` sampler plugin](https://github.com/bilby-dev/nessai-bilby).
This can be installed via either ``conda`` or ``pip`` and provides the most
up-to-date interface for ``nessai``.
This includes support for the importance nested sampler (``inessai``).
It can be installed using either
```console
pip install nessai-bilby
```
or
```console
conda install -c conda-forge nessai-bilby
```
See the examples included with ``nessai`` for how to run ``nessai`` via ``bilby``.
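As a rough orientation, a run via ``bilby`` boils down to defining a likelihood and priors and selecting ``nessai`` as the sampler. The snippet below is a minimal sketch rather than one of the bundled examples; ``nlive`` and ``outdir`` are standard ``bilby.run_sampler`` arguments, but check the bundled examples and the ``nessai-bilby`` documentation for the full set of options.
```python
import bilby
import numpy as np

class GaussianLikelihood(bilby.Likelihood):
    """Toy likelihood: 100 draws from a unit-variance Gaussian with unknown mean."""

    def __init__(self):
        super().__init__(parameters={"mu": None})
        self.data = np.random.normal(0.0, 1.0, 100)

    def log_likelihood(self):
        mu = self.parameters["mu"]
        return float(
            -0.5 * np.sum((self.data - mu) ** 2)
            - 0.5 * len(self.data) * np.log(2 * np.pi)
        )

priors = {"mu": bilby.core.prior.Uniform(-5, 5, name="mu")}

result = bilby.run_sampler(
    likelihood=GaussianLikelihood(),
    priors=priors,
    sampler="nessai",   # provided by the nessai-bilby plugin
    nlive=500,
    outdir="outdir",
)
```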
## Documentation
Documentation is available at: [nessai.readthedocs.io](https://nessai.readthedocs.io/)
## Help
For questions and other support, please either use our [gitter room](https://app.gitter.im/#/room/#nessai:gitter.im) or [open an issue](https://github.com/mj-will/nessai/issues/new/choose).
## Contributing
Please see the guidelines [here](https://github.com/mj-will/nessai/blob/master/CONTRIBUTING.md).
## Acknowledgements
The core nested sampling code, model design and code for computing the posterior in ``nessai`` was based on [`cpnest`](https://github.com/johnveitch/cpnest) with permission from the authors.
The normalising flows implemented in ``nessai`` are all either directly imported from [`nflows`](https://github.com/bayesiains/nflows/tree/master/nflows) or heavily based on it.
Other code snippets that draw on existing code reference the source in their corresponding doc-strings.
The authors also thank Christian Chapman-Bird, Laurence Datrier, Fergus Hayes, Jethro Linley and Simon Tait for their feedback and help finding bugs in ``nessai``.
## Citing
If you find ``nessai`` useful in your work please cite the DOI for this code and our papers:
```bibtex
@software{nessai,
author = {Michael J. Williams},
title = {nessai: Nested Sampling with Artificial Intelligence},
month = feb,
year = 2021,
publisher = {Zenodo},
version = {latest},
doi = {10.5281/zenodo.4550693},
url = {https://doi.org/10.5281/zenodo.4550693}
}
@article{Williams:2021qyt,
author = "Williams, Michael J. and Veitch, John and Messenger, Chris",
title = "{Nested sampling with normalizing flows for gravitational-wave inference}",
eprint = "2102.11056",
archivePrefix = "arXiv",
primaryClass = "gr-qc",
doi = "10.1103/PhysRevD.103.103006",
journal = "Phys. Rev. D",
volume = "103",
number = "10",
pages = "103006",
year = "2021"
}
@article{Williams:2023ppp,
author = "Williams, Michael J. and Veitch, John and Messenger, Chris",
title = "{Importance nested sampling with normalising flows}",
eprint = "2302.08526",
archivePrefix = "arXiv",
primaryClass = "astro-ph.IM",
reportNumber = "LIGO-P2200283",
month = "2",
year = "2023"
}
```
| text/markdown | null | "Michael J. Williams" <michaeljw1@googlemail.com> | null | null | MIT | nested sampling, normalizing flows, machine learning | [
"Programming Language :: Python :: 3",
"License :: OSI Approved :: MIT License",
"Operating System :: OS Independent"
] | [] | null | null | >=3.9 | [] | [] | [] | [
"glasflow",
"h5py>=3.0",
"matplotlib>=2.0",
"numpy>=1.9",
"pandas",
"scipy>0.16",
"seaborn",
"torch>=1.11.0",
"importlib-metadata; python_version < \"3.10\"",
"pytest; extra == \"test\"",
"pytest-cov; extra == \"test\"",
"pytest-timeout; extra == \"test\"",
"pytest-rerunfailures; extra == \"test\"",
"pytest-integration; extra == \"test\"",
"lalsuite; sys_platform != \"win32\" and extra == \"gw\"",
"bilby; extra == \"gw\"",
"astropy; extra == \"gw\"",
"faiss-cpu>=1.7.3; extra == \"clustering\"",
"pre-commit; extra == \"dev\"",
"ray[default]; (sys_platform != \"win32\" and python_version < \"3.12\") and extra == \"dev\"",
"multiprocess; extra == \"dev\"",
"corner; extra == \"dev\"",
"ruff; extra == \"dev\"",
"faiss-cpu; extra == \"dev\"",
"sphinx; extra == \"docs\"",
"sphinx_rtd_theme; extra == \"docs\"",
"numpydoc; extra == \"docs\"",
"sphinx-autoapi; extra == \"docs\"",
"nflows; extra == \"nflows\""
] | [] | [] | [] | [
"Homepage, https://github.com/mj-will/nessai",
"Documentation, https://nessai.readthedocs.io/"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-20T11:07:15.734015 | nessai-0.15.2.tar.gz | 333,182 | 63/6d/c5e66855085447119391acb878803412cfe4bc0a2a9a082d3949c6af22bc/nessai-0.15.2.tar.gz | source | sdist | null | false | eb5c6db5d86e063730585a46aa6e624b | 71d41d72f4597bbaf1277605ae6e9c1ffc5b8dde8ef4e39fb72c16437f98817a | 636dc5e66855085447119391acb878803412cfe4bc0a2a9a082d3949c6af22bc | null | [
"LICENSE.md"
] | 1,847 |
2.1 | qanswer_sdk | 3.1651.0 | QAnswer: Api Documentation | # QAnswer Python SDK
[](https://pypi.org/project/qanswer-sdk/)
[](https://pypi.org/project/qanswer-sdk/)
[](LICENSE)
Official **Python SDK** for the [QAnswer API](https://qanswer.eu), automatically generated from the OpenAPI specification.
This SDK allows Python applications to interact with QAnswer's services programmatically without needing to craft raw HTTP requests.
---
## 🚀 Features
- Full coverage of QAnswer API endpoints
- Type-safe models via [Pydantic](https://docs.pydantic.dev)
- Easy configuration of authentication and base URL
- Auto-generated and versioned with each API release
---
## 📦 Installation
You can install from [PyPI](https://pypi.org/project/qanswer-sdk/):
```bash
pip install qanswer-sdk
```
Or add it to your `requirements.txt`
```txt
qanswer-sdk==3.1651.0
```
---
## 🔑 Authentication
Most endpoints require authentication. You can configure authentication in several ways:
### API Key Authentication
```python
from qanswer_sdk import Configuration, ApiClient
from qanswer_sdk.api.chatbot_api import ChatbotApi
# Configure API key authorization
config = Configuration(
host="https://app.qanswer.ai/backend",
api_key={"QAnswer-Api-Key": "your-api-key-here"}
)
# Initialize client
with ApiClient(config) as client:
api = ChatbotApi(client)
# Use the API...
```
### Bearer Token Authentication
```python
from qanswer_sdk import Configuration, ApiClient
from qanswer_sdk.api.chatbot_api import ChatbotApi
# Configure Bearer token authorization
config = Configuration(
host="https://app.qanswer.ai/backend",
access_token="your-jwt-token-here"
)
# Initialize client
with ApiClient(config) as client:
api = ChatbotApi(client)
# Use the API...
```
---
## 📖 Usage Examples
### Chatbot API
```python
from qanswer_sdk import Configuration, ApiClient
from qanswer_sdk.api.chatbot_api import ChatbotApi
from qanswer_sdk.models.chatbot_chat_payload import ChatbotChatPayload
config = Configuration(
host="https://app.qanswer.ai/backend",
api_key={"QAnswer-Api-Key": "your-api-key"}
)
with ApiClient(config) as client:
api = ChatbotApi(client)
# Create chat payload
payload = ChatbotChatPayload(
question="What is artificial intelligence?",
username="admin",
conversation_id="df150332-97c2-4b6a-9d83-cd35e14cf89c",
llm_choice="openai-large"
# Add other required fields based on your model
)
# Send chat message
response = api.free_text_chatbot_chat(payload)
print(response.ai_response) # Access response data
```
### Chat Completion API
```python
from qanswer_sdk.api.chat_completion_api import ChatCompletionApi
with ApiClient(config) as client:
api = ChatCompletionApi(client)
# Use chat completion endpoints...
```
### Handle Different Response Types
```python
# Get just the data
response_data = api.free_text_chatbot_chat(payload)
# Get full HTTP response info
full_response = api.free_text_chatbot_chat_with_http_info(payload)
print(full_response.status_code)
print(full_response.headers)
print(full_response.data)
# Get raw HTTP response for streaming
raw_response = api.free_text_chatbot_chat_without_preload_content(payload)
```
---
## ⚙️ Configuration Options
```python
from qanswer_sdk import Configuration
config = Configuration(
host="https://app.qanswer.ai/backend", # API base URL
api_key={"QAnswer-Api-Key": "your-key"}, # API key auth
access_token="jwt-token", # Bearer token auth
username="user", # Basic auth username
password="pass", # Basic auth password
verify_ssl=True, # SSL verification
ssl_ca_cert="/path/to/ca.pem", # Custom CA certificate
connection_pool_maxsize=10, # Connection pool size
retries=3, # Number of retries
debug=False, # Enable debug logging
proxy="http://proxy:8080" # Proxy URL
)
```
---
## 🛠 Error Handling
```python
from qanswer_sdk.exceptions import (
ApiException,
BadRequestException,
UnauthorizedException,
NotFoundException
)
try:
response = api.free_text_chatbot_chat(payload)
except UnauthorizedException:
print("Invalid authentication credentials")
except BadRequestException as e:
print(f"Bad request: {e}")
except ApiException as e:
print(f"API error: {e.status} - {e.reason}")
```
---
## 📝 Models and Type Safety
All request and response objects are Pydantic models with full type safety:
```python
from qanswer_sdk.models.chatbot_chat_payload import ChatbotChatPayload
from qanswer_sdk.models.chatbot_response import ChatbotResponse
# Create typed request payload
payload = ChatbotChatPayload(
message="Hello",
# IDE will provide autocomplete for available fields
)
# Response is fully typed
response: ChatbotResponse = api.free_text_chatbot_chat(payload)
# Access typed response fields with autocomplete
print(response.answer)
```
---
## 📌 Versioning
This SDK follows the version of the QAnswer API.
The current version is: `3.1651.0 (branch: main)`
---
## 🤝 Support
For issues related to:
- **SDK usage:** Open an issue in this repository
- **API functionality:** Contact QAnswer support
- **Authentication:** Check your API key and permissions
---
## 📄 License
This project is licensed under the MIT License - see the LICENSE file for details.
---
## Made with ❤️ by The QA Company | text/markdown | OpenAPI Generator Community | team@openapitools.org | null | null | NoLicense | OpenAPI, OpenAPI-Generator, QAnswer: Api Documentation | [
"License :: Other/Proprietary License",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.8",
"Programming Language :: Python :: 3.9",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12"
] | [] | https://github.com/GIT_USER_ID/GIT_REPO_ID | null | <4.0,>=3.8 | [] | [] | [] | [
"urllib3<3.0.0,>=1.25.3",
"python-dateutil>=2.8.2",
"pydantic>=2",
"typing-extensions>=4.7.1"
] | [] | [] | [] | [
"Repository, https://github.com/GIT_USER_ID/GIT_REPO_ID"
] | poetry/1.8.2 CPython/3.12.3 Linux/6.8.0-90-generic | 2026-02-20T11:06:37.610157 | qanswer_sdk-3.1651.0.tar.gz | 285,365 | f6/c6/1baf271a690123e3d95e08d7544d6603a6da211701c09e73949da4c037c0/qanswer_sdk-3.1651.0.tar.gz | source | sdist | null | false | 54eb0c999f2d21b71f110e2f95faa9df | 21620a1c344ed48e98deacd733c98c66f3cf7eac0da684e38d050325b7adfb80 | f6c61baf271a690123e3d95e08d7544d6603a6da211701c09e73949da4c037c0 | null | [] | 0 |
2.4 | struckdown | 0.4.1 | struckdown: markdown-like syntax for structured conversations with language models | # struckdown
Markdown-based syntax for structured conversations with language models.
## Installation
```bash
pip install struckdown
```
## Quick Example
```bash
# Configure
export LLM_API_KEY="sk-..."
export LLM_API_BASE="https://api.openai.com/v1"
# Extract structured data
sd chat "Tell me a joke: [[joke]]"
sd batch *.txt "Purpose: [[purpose]] Price: [[number:price]]"
```
## Documentation
Full documentation, examples, and tutorials:
**https://github.com/benwhalley/struckdown**
## License
MIT
| text/markdown | null | Ben Whalley <benwhalley@gmail.com> | null | null | null | null | [] | [] | null | null | >=3.12 | [] | [] | [] | [
"certifi>=2023.0.0",
"asgiref>=3.7.0",
"instructor[litellm]>=1.11.0",
"jinja2>=3.1.6",
"lark>=1.2.2",
"pydantic>=2.11.7",
"python-box>=7.3.2",
"python-decouple>=3.8",
"typer>=0.16.0",
"more-itertools>=10.7.0",
"jinja-markdown>=1.210911",
"pytest>=8.4.2",
"joblib>=1.3.0",
"dateutils>=0.6.12",
"openpyxl>=3.1.0",
"pandas>=2.0.0",
"rich>=13.0.0",
"requests>=2.28.0",
"readability-lxml>=0.8.1",
"markdownify>=0.14.1",
"validators>=0.35.0",
"ddgs>=8.0.0",
"flask>=3.0.0",
"flask-limiter>=3.5.0",
"pytest-xdist>=3.8.0",
"gunicorn>=21.0.0",
"filelock>=3.12.0",
"rank-bm25>=0.2.2",
"diskcache>=5.6.0",
"requests-cache>=1.2.0",
"playwright>=1.40.0; extra == \"playwright\"",
"sentence-transformers>=2.2.0; extra == \"local\"",
"mkdocs>=1.5; extra == \"docs\"",
"mkdocs-material>=9.0; extra == \"docs\"",
"pymdown-extensions>=10.0; extra == \"docs\""
] | [] | [] | [] | [] | twine/6.2.0 CPython/3.12.9 | 2026-02-20T11:06:26.939194 | struckdown-0.4.1.tar.gz | 1,286,342 | 36/1a/b9bb321011be464d9a75a3f42aae8d13992b4cf5c1fc80509b8b5c080aca/struckdown-0.4.1.tar.gz | source | sdist | null | false | 6131064aadc9723ac5765a690e77cfe4 | b02451540a34fa8b405b62617a7dacf06cdb5e5a03a934500d4882b7a7579846 | 361ab9bb321011be464d9a75a3f42aae8d13992b4cf5c1fc80509b8b5c080aca | MIT | [
"LICENSE"
] | 236 |
2.4 | prismadata | 0.1.0 | Python client for the PrismaData location intelligence API | # prismadata
Python client for the [PrismaData](https://prismadata.io) location intelligence API.
## Installation
```bash
pip install prismadata
```
With optional extras:
```bash
pip install prismadata[pandas] # DataFrame enrichment
pip install prismadata[sklearn] # scikit-learn transformer
pip install prismadata[all] # everything (pandas, sklearn, cache, progress bars)
```
## Quick Start
```python
from prismadata import Client
client = Client(api_key="your-api-key")
# Geocode an address
result = client.geocode(full_address="Av Paulista 1000, Sao Paulo")
print(result["prismadata__geocoder__latitude"], result["prismadata__geocoder__longitude"])
# Query slum proximity
slum = client.slum(lat=-23.56, lng=-46.65)
print(slum["prismadata__favela__distancia_m"])
# Calculate a route
route = client.route([(-23.56, -46.65), (-23.57, -46.66)])
print(route["prismadata__routing_route__distancia_m"])
```
## Authentication
The client supports two authentication methods:
```python
# Using API key
client = Client(api_key="your-api-key")
# Using username and password
client = Client(username="your-user", password="your-pass")
```
Credentials can also be provided via environment variables:
```bash
export PRISMADATA_APIKEY="your-api-key"
# or
export PRISMADATA_USERNAME="your-user"
export PRISMADATA_PASSWORD="your-pass"
```
```python
# Picks up credentials from environment automatically
client = Client()
```
Credential resolution order: explicit `api_key` > explicit `username`/`password` > `PRISMADATA_APIKEY` env var > `PRISMADATA_USERNAME`+`PRISMADATA_PASSWORD` env vars.
## DataFrame Enrichment
```python
import pandas as pd
from prismadata import Client
client = Client(api_key="your-api-key")
df = pd.DataFrame({
"lat": [-23.56, -23.57, -23.58],
"lng": [-46.65, -46.66, -46.67],
})
enriched = client.enrich(df, services=["slum", "income_static", "infosc"])
print(enriched.columns.tolist())
# ['lat', 'lng', 'prismadata__favela__distancia_m', ..., 'prismadata__personal_income_static__percentil_br', ...]
# Use clean_columns=True for shorter column names
client = Client(api_key="your-api-key", clean_columns=True)
enriched = client.enrich(df, services=["slum"])
# Column names: 'favela_distancia_m', 'favela_nome', ...
```
## scikit-learn Pipeline
```python
from sklearn.pipeline import Pipeline
from sklearn.ensemble import RandomForestClassifier
from prismadata.sklearn import PrismaDataTransformer
pipe = Pipeline([
("enrich", PrismaDataTransformer(
api_key="your-api-key",
services=["slum", "income_static"],
)),
("model", RandomForestClassifier()),
])
pipe.fit(X_train, y_train)
```
## Async Client
All methods are available asynchronously via `AsyncClient`:
```python
from prismadata import AsyncClient
async with await AsyncClient.create(api_key="your-api-key") as client:
result = await client.slum(lat=-23.56, lng=-46.65)
print(result)
# Batch and enrichment work the same way
enriched = await client.enrich(df, services=["slum", "income_static"])
```
## Error Handling
```python
from prismadata import Client
from prismadata.exceptions import (
AuthenticationError,
BatchError,
RateLimitError,
PrismaDataError,
)
try:
result = client.slum_batch(large_point_dict)
except BatchError as e:
# Some chunks succeeded, some failed
print(f"Got {len(e.partial_results)} results, {len(e.failed_keys)} failed")
for key, value in e.partial_results.items():
process(key, value) # use what succeeded
retry(e.failed_keys) # retry what failed
except RateLimitError:
print("Rate limit exceeded, wait and retry")
except AuthenticationError:
print("Invalid credentials")
except PrismaDataError as e:
print(f"API error {e.status_code}: {e}")
```
## Available Methods
### Geocoding
- `client.geocode(full_address=..., zipcode=..., city=..., state=...)` - Address to coordinates
- `client.reverse_geocode(lat, lng)` - Coordinates to address
### Location Services
- `client.slum(lat, lng)` - Nearest slum/favela proximity
- `client.prison(lat, lng)` - Nearest prison proximity
- `client.border(lat, lng)` - Border proximity
- `client.infosc(lat, lng)` - Census sector info
- `client.income_static(lat, lng)` - Income percentiles
- `client.income_pdf(lat, lng, gender=..., age=...)` - Detailed income statistics
### Routing
- `client.route(points, profile="car")` - Route between points
- `client.isochrone(lat, lng, time_limit=600, profile="car")` - Reachable area
### Address Validation
- `client.compare_address(lat, lng, full_address=...)` - Compare address with coordinates
- `client.validate_address(locations, addresses)` - Validate against location history
- `client.cluster_locations(locations)` - Cluster location history
### Credit
- `client.precatory(cpf_cnpj=...)` - Credit summary (precatorios/RPVs)
- `client.precatory_detail(cpf_cnpj=...)` - Detailed credit list
### Batch Operations
- `client.slum_batch(points)` - Batch slum queries
- `client.prison_batch(points)` - Batch prison queries
- `client.border_batch(points)` - Batch border queries
- `client.infosc_batch(points)` - Batch census sector queries
- `client.route_batch(items, profile="car")` - Batch routing
- `client.isochrone_batch(items, profile="car")` - Batch isochrones
### Aggregator
- `client.aggregate(lat, lng, services=[...])` - Multiple services in one call
- `client.aggregate_batch(points, services=[...])` - Batch aggregation
- `client.geocode_aggregate(full_address=..., services=[...])` - Geocode + aggregate
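For instance, a single aggregated lookup might look like the following sketch (the coordinates and service names reuse the examples above; the exact keys of the returned dict follow the `prismadata__` column convention and are otherwise an assumption):
```python
from prismadata import Client

client = Client(api_key="your-api-key")

# One round trip instead of three separate calls
result = client.aggregate(
    lat=-23.56,
    lng=-46.65,
    services=["slum", "income_static", "infosc"],
)
print(result)  # e.g. keys like 'prismadata__favela__distancia_m', ...
```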
## Configuration
```python
client = Client(
api_key="your-key",
timeout=30, # Request timeout (seconds)
cache=True, # Enable disk cache (requires diskcache)
cache_ttl=86400, # Cache TTL (seconds)
clean_columns=False, # Keep 'prismadata__' prefix (default)
show_progress=True, # Show tqdm progress bars
app_name="my-app", # Sent as X-App header on every request
)
```
## License
MIT
| text/markdown | PrismaData | contato@prismadata.io | null | null | MIT | geolocation, geocoding, data-science, brazil, location-intelligence | [
"Development Status :: 4 - Beta",
"Intended Audience :: Developers",
"Intended Audience :: Science/Research",
"License :: OSI Approved :: MIT License",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: 3.13",
"Programming Language :: Python :: 3.14",
"Topic :: Scientific/Engineering :: GIS",
"Topic :: Software Development :: Libraries :: Python Modules",
"Typing :: Typed"
] | [] | null | null | <4.0,>=3.10 | [] | [] | [] | [
"diskcache>=5.0; extra == \"cache\" or extra == \"all\"",
"httpx<0.28,>=0.27",
"pandas>=1.5; extra == \"pandas\" or extra == \"sklearn\" or extra == \"all\"",
"scikit-learn>=1.0; extra == \"sklearn\" or extra == \"all\"",
"tenacity<10.0,>=9.0",
"tqdm>=4.0; extra == \"progress\" or extra == \"all\""
] | [] | [] | [] | [
"Bug Tracker, https://github.com/prismadata/python-client/issues",
"Changelog, https://github.com/prismadata/python-client/blob/main/CHANGELOG.md",
"Homepage, https://prismadata.io",
"Repository, https://github.com/prismadata/python-client"
] | poetry/2.3.2 CPython/3.13.0 Linux/6.18.9-arch1-2 | 2026-02-20T11:06:09.821043 | prismadata-0.1.0-py3-none-any.whl | 30,905 | 9c/b1/5dfc8089443f38f3d9b1193348e648f5843d3e0ff1897923d5f124f98ced/prismadata-0.1.0-py3-none-any.whl | py3 | bdist_wheel | null | false | 88450256cc13ba8a37802aed0883ea65 | a7f9ceb18c7e3af1fec5094946be68c3da98c4ae306afde7244708bcd4ec9b64 | 9cb15dfc8089443f38f3d9b1193348e648f5843d3e0ff1897923d5f124f98ced | null | [
"LICENSE"
] | 233 |
2.4 | spice-kernel-db | 0.9.1 | Browse, get, and manage SPICE kernels and metakernels across NASA and ESA mission archives | # spice-kernel-db
[](https://github.com/michaelaye/spice-kernel-db/actions/workflows/ci.yml)
Browse, get, and manage SPICE kernels and metakernels across NASA and ESA mission archives.
## What this tool does
1. **Mission setup**: Configure missions from NASA NAIF or ESA SPICE servers with an interactive dialog.
2. **Browse & get**: Browse available metakernels for a mission, then get one — the tool downloads all missing kernels automatically and makes the metakernel ready to use locally.
3. **Metakernel rewriting**: Rewrites `.tm` files for local use with **minimal edits** — only `PATH_VALUES` is changed, everything else stays identical to the original. A symlink tree bridges the gap between where the metakernel expects files and where they actually live on disk.
4. **Deduplication** (optional): Identifies identical kernel files across missions using SHA-256 hashing and replaces duplicates with symlinks. Per-mission opt-in — you can deduplicate some missions while keeping others untouched.
## Documentation
Full documentation is at [michaelaye.github.io/spice-kernel-db](https://michaelaye.github.io/spice-kernel-db/) and built with [Quarto](https://quarto.org/).
## Installation
```bash
pip install spice-kernel-db
```
Or with conda:
```bash
conda install -c michaelaye spice-kernel-db
```
Or from source:
```bash
git clone https://github.com/michaelaye/spice-kernel-db
cd spice-kernel-db
pip install -e ".[dev]"
```
## Quick start
### Set up a mission
```bash
spice-kernel-db mission add
```
Interactive dialog: choose a server (NASA NAIF / ESA SPICE) → pick a mission from the list → configure deduplication preference.
### Browse available metakernels
```bash
spice-kernel-db browse JUICE
```
Shows all `.tm` files in the mission's remote `mk/` directory, grouped by base name with version counts.
### Get a metakernel
```bash
spice-kernel-db get juice_ops.tm
```
Downloads the metakernel, checks which kernels you already have, downloads the missing ones in parallel, and creates symlinks so the `.tm` file works immediately.
### Use with spiceypy
```python
import spiceypy as spice
from spice_kernel_db import KernelDB
db = KernelDB()
mks = db.list_metakernels(mission="JUICE")
spice.furnsh(mks[0]["mk_path"])
```
### Python API
```python
from spice_kernel_db import KernelDB
db = KernelDB()
# Browse remote metakernels
db.browse_remote_metakernels(
"https://naif.jpl.nasa.gov/pub/naif/JUICE/kernels/mk/",
mission="JUICE",
)
# Get a metakernel (downloads missing kernels automatically)
db.get_metakernel(
"https://naif.jpl.nasa.gov/pub/naif/JUICE/kernels/mk/juice_ops.tm",
mission="JUICE",
)
# Optionally deduplicate across missions
db.deduplicate_with_symlinks(dry_run=True) # preview
db.deduplicate_with_symlinks(dry_run=False) # execute
```
## Supported servers
| Server | URL |
|--------|-----|
| NASA NAIF | `https://naif.jpl.nasa.gov/pub/naif/` |
| ESA SPICE | `https://spiftp.esac.esa.int/data/SPICE/` |
Both use the same `<server>/<MISSION>/kernels/mk/` directory structure.
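That shared layout means the remote metakernel directory for any mission can be derived from the server base URL alone. A small sketch (the helper is illustrative; the URLs come from the table above):
```python
NAIF = "https://naif.jpl.nasa.gov/pub/naif/"
ESA_SPICE = "https://spiftp.esac.esa.int/data/SPICE/"

def mk_dir(server: str, mission: str) -> str:
    """Build the remote metakernel directory for a mission."""
    return f"{server.rstrip('/')}/{mission}/kernels/mk/"

print(mk_dir(NAIF, "JUICE"))
# -> https://naif.jpl.nasa.gov/pub/naif/JUICE/kernels/mk/
```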
## Dependencies
- Python >= 3.11
- [DuckDB](https://duckdb.org/) >= 1.0
- [rich](https://rich.readthedocs.io/) >= 13.0
## License
MIT
| text/markdown | Michael | null | null | null | null | esa, kernels, metakernel, naif, planetary-science, spice | [
"Development Status :: 3 - Alpha",
"Intended Audience :: Science/Research",
"License :: OSI Approved :: MIT License",
"Programming Language :: Python :: 3",
"Topic :: Scientific/Engineering :: Astronomy"
] | [] | null | null | >=3.11 | [] | [] | [] | [
"duckdb>=1.0",
"rich>=13.0",
"pytest-tmp-files>=0.0.2; extra == \"dev\"",
"pytest>=7; extra == \"dev\"",
"spiceypy>=6.0; extra == \"spice\""
] | [] | [] | [] | [
"Homepage, https://github.com/michaelaye/spice-kernel-db",
"Repository, https://github.com/michaelaye/spice-kernel-db",
"Issues, https://github.com/michaelaye/spice-kernel-db/issues"
] | twine/6.2.0 CPython/3.14.2 | 2026-02-20T11:05:38.949886 | spice_kernel_db-0.9.1.tar.gz | 110,060 | 15/26/1457d7289cfc3466f4615d3961d3efb58b92b1045e6c0a8bbeb1accbf7ea/spice_kernel_db-0.9.1.tar.gz | source | sdist | null | false | 73d7fb8b4c114ed4d278a7b51fe21091 | e79255fe645d1d68b1148ecca09a0ee77ea3a972dadcb99d1965ca76ebee7771 | 15261457d7289cfc3466f4615d3961d3efb58b92b1045e6c0a8bbeb1accbf7ea | MIT | [
"LICENSE"
] | 226 |
2.4 | brr-cli | 0.3.0 | Research infrastructure management tooling. | # ❄️ brr ❄️
Opinionated research infrastructure tooling. Launch clusters, get SSH access, start building.
## Features
- **Shared filesystem** — All nodes share `$HOME` via EFS (AWS) or virtiofs (Nebius).
- **Coding tools** — Install Claude Code, Codex, or Gemini. Connect with e.g. `brr attach dev claude`
- **Autoscaling** — Ray-based cluster scaling with cached instances.
- **Project-based workflows** — Per-repo cluster configs and project-specific dependencies.
- **Auto-shutdown** — Monitors CPU, GPU, and SSH activity. Shuts down idle instances to save costs.
- **Dotfiles integration** — Take your dev environment (vim, tmux, shell config) to every cluster node.
## Prerequisites
- [uv](https://docs.astral.sh/uv/) (for installation)
## Quick Start
```sh
# Install (AWS only)
uv tool install brr-cli[aws]
# Install (both providers)
# uv tool install brr-cli[aws,nebius]
# Configure (interactive wizard)
brr configure # or: brr configure nebius
# Launch an H100
brr up aws:h100
# brr up nebius:h100
# Connect
brr attach aws:h100 # SSH
brr attach aws:h100 claude # Claude Code on the cluster
brr vscode aws:h100 # VS Code remote
```
Built-in templates use `provider:name` syntax (e.g. `aws:h100`). Inside a [project](#projects), short names like `brr up dev` work automatically.
Supported clouds: [AWS](#aws-setup) · [Nebius](#nebius-setup)
## Projects
For per-repo cluster configs, initialize a project:
```sh
cd my-research-repo/
brr init
```
This creates:
```
.brr/
aws/
dev.yaml # Single GPU for development
cluster.yaml # CPU head + GPU workers
setup.sh # Project-specific dependencies
```
Templates are Ray cluster YAML — edit them or add your own. Inside a project, use short names:
```sh
brr up dev # launches .brr/aws/dev.yaml
brr up cluster # launches .brr/aws/cluster.yaml
brr attach dev # SSH into dev cluster
brr down dev # tear down
```
If your project uses `uv`, `brr init` automatically adds `brr-cli` and `ray` to a `brr` dependency group. The cluster uses your project-locked versions — no manual setup needed.
All global config lives in `~/.brr/config.env`.
## Templates
### Built-in templates
| Template | Instance | GPU | Workers |
| :--- | :--- | :--- | :--- |
| `aws:cpu` | t3.2xlarge | — | 0-2 |
| `aws:l4` | g6.4xlarge | 1x L4 | — |
| `aws:h100` | p5.48xlarge | 8x H100 | — |
| `aws:cpu-l4` | t3.2xlarge + g6.4xlarge | 1x L4 | 0-4 |
| `nebius:cpu` | 8vcpu-32gb | — | 0-2 |
| `nebius:h100` | 1gpu-16vcpu-200gb | 1x H100 | — |
| `nebius:cpu-h100s` | 8vcpu-32gb + 8gpu-128vcpu-1600gb | 8x H100 | 0-4 |
### Overrides
Override template values inline:
```sh
brr up aws:cpu instance_type=t3.xlarge max_workers=4
brr up aws:h100 spot=true
brr up dev region=us-west-2
```
Preview the rendered config without launching:
```sh
brr up dev --dry-run
```
See available overrides for a template:
```sh
brr templates show dev
```
### Multi-provider
Use the provider prefix for built-in templates:
```sh
brr up aws:h100
brr up nebius:h100
brr attach nebius:h100
brr down nebius:h100
```
Both providers can run simultaneously. For projects with multiple providers, use the prefix: `brr up aws:dev`.
## Customization
### Node setup
`~/.brr/setup.sh` runs on every node boot. It installs packages, mounts shared storage, sets up Python/Ray, GitHub SSH keys, AI coding tools, dotfiles, and the idle shutdown daemon.
Edit it to customize:
```sh
vim ~/.brr/setup.sh
```
Project-specific dependencies go in `.brr/{provider}/setup.sh` (created by `brr init`), which runs after the global setup.
### uv integration
brr wraps the `uv` binary to route virtual environments away from the shared EFS home directory:
| Environment variable | Value | Purpose |
| :--- | :--- | :--- |
| `UV_CACHE_DIR` | `/tmp/uv` | Download cache (per-instance) |
| `UV_PYTHON_INSTALL_DIR` | `/tmp/uv/python` | Managed Python builds (per-instance) |
| `UV_PROJECT_ENVIRONMENT` | `/tmp/venvs/{project}` | Project venvs (per-instance) |
The wrapper lives at `~/.local/bin/uv` and delegates to the real binary at `~/.local/lib/uv`. Both persist on EFS so new instances reuse them without reinstalling. Only caches, Python builds, and venvs are per-instance (rebuilt on boot from lockfiles).
For uv-managed projects, Ray runs inside the project venv via `uv run --group brr ray start`. For non-uv clusters, Ray runs from a standalone venv at `/tmp/brr/venv`.
### AI coding tools
Install AI coding assistants on every cluster node:
```sh
brr configure tools # select Claude Code, Codex, and/or Gemini CLI
```
Then connect and start coding:
```sh
brr up dev
brr attach dev claude
```
### Dotfiles
Set a dotfiles repo to sync your dev environment to every node:
```sh
brr config set DOTFILES_REPO "https://github.com/user/dotfiles"
```
The repo is cloned to `~/dotfiles` and installed via `install.sh` (if present) or GNU Stow.
### Image baking
Bake the global setup into AMIs/images for fast boot:
```sh
brr bake aws # bake both CPU + GPU AMIs
brr bake status # check if baked images are up to date
```
After baking, clusters boot from the pre-built image. Only project-specific deps need to install. `brr up` warns when `setup.sh` has changed since the last bake.
### Idle shutdown
A systemd daemon monitors CPU, GPU, and SSH activity. When all signals are idle for the configured timeout, the instance shuts down.
Configure in `~/.brr/config.env`:
```
IDLE_SHUTDOWN_ENABLED="true"
IDLE_SHUTDOWN_TIMEOUT_MIN="30"
IDLE_SHUTDOWN_CPU_THRESHOLD="10"
IDLE_SHUTDOWN_GRACE_MIN="15"
```
The grace period prevents shutdown during initial setup. Monitor on a node with `journalctl -u idle-shutdown -f`.
### Node caching
By default, Nebius nodes are **deleted** on scale-down. Unlike AWS, stopped Nebius instances still incur disk charges, so deleting is cheaper.
To keep nodes stopped instead (faster restart, but you pay for disks while idle), enable caching in your template's provider config:
```yaml
provider:
cache_stopped_nodes: true
```
AWS nodes are cached (stopped) by default.
## Commands
| Command | Description |
| :--- | :--- |
| `brr up TEMPLATE [OVERRIDES...]` | Launch or update a cluster (`aws:h100`, `dev`, or `path.yaml`) |
| `brr up TEMPLATE --dry-run` | Preview rendered config without launching |
| `brr down TEMPLATE` | Stop a cluster (instances preserved for fast restart) |
| `brr down TEMPLATE --delete` | Terminate all instances and remove staging files |
| `brr attach TEMPLATE [COMMAND]` | SSH into head node, optionally run a command (e.g. `claude`) |
| `brr list [--all]` | List clusters (project-scoped by default, `--all` for everything) |
| `brr clean [TEMPLATE]` | Terminate stopped (cached) instances |
| `brr vscode TEMPLATE` | Open VS Code on a running cluster |
| `brr templates list` | List built-in templates |
| `brr templates show TEMPLATE` | Show template config and overrides |
| `brr init` | Initialize a project (interactive provider selection) |
| `brr configure [cloud\|tools\|general]` | Interactive setup (cloud provider, AI tools, settings) |
| `brr config [list\|get\|set\|path]` | View and manage configuration |
| `brr bake [aws\|nebius]` | Bake setup into cloud images |
| `brr bake status` | Check if baked images are up to date |
| `brr completion [bash\|zsh\|fish]` | Shell completion (`--install` to add to shell rc) |
| `brr nuke [aws\|nebius]` | Tear down all cloud resources |
## Cloud Setup
### AWS Setup
1. Attach the [IAM policy](brr/aws/iam-policy.json) to your IAM user
2. Install the [AWS CLI](https://aws.amazon.com/cli/) and run `aws configure`
3. *(Optional)* For GitHub SSH access on clusters, authenticate the [GitHub CLI](https://cli.github.com/):
```sh
gh auth login
gh auth refresh -h github.com -s admin:public_key
```
4. Run the setup wizard:
```sh
brr configure aws
```
### Nebius Setup
1. Install the [Nebius CLI](https://docs.nebius.com/cli/install) and run `nebius init`
2. Create a service account with editor permissions:
```sh
TENANT_ID="<your-tenant-id>" # from console.nebius.com → Administration
SA_ID=$(nebius iam service-account create \
--name brr-cluster --format json | jq -r '.metadata.id')
EDITORS_GROUP_ID=$(nebius iam group get-by-name \
--name editors --parent-id $TENANT_ID --format json | jq -r '.metadata.id')
nebius iam group-membership create \
--parent-id $EDITORS_GROUP_ID --member-id $SA_ID
```
3. Generate credentials:
```sh
mkdir -p ~/.nebius
nebius iam auth-public-key generate \
--service-account-id $SA_ID --output ~/.nebius/credentials.json
```
4. Run the setup wizard:
```sh
brr configure nebius
```
## Acknowledgments
This project started as a fork of [aws_wiz](https://github.com/besarthoxhaj/aws_wiz) by [Bes](https://github.com/besarthoxhaj) and has been inspired by discussions with colleagues from the [Encode: AI for Science Fellowship](https://encode.pillar.vc/).
| text/markdown | Jon Carter | null | null | null | null | aws, cloud, cluster, gpu, infrastructure, ray | [
"Development Status :: 3 - Alpha",
"Environment :: Console",
"Intended Audience :: Science/Research",
"Programming Language :: Python :: 3",
"Topic :: System :: Clustering"
] | [] | null | null | >=3.11 | [] | [] | [] | [
"click",
"inquirerpy",
"pyyaml",
"rich",
"boto3; extra == \"aws\"",
"ray[default]; extra == \"aws\"",
"nebius; extra == \"nebius\"",
"ray[default]; extra == \"nebius\""
] | [] | [] | [] | [
"Homepage, https://github.com/joncarter1/brr",
"Repository, https://github.com/joncarter1/brr",
"Issues, https://github.com/joncarter1/brr/issues"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-20T11:05:09.061293 | brr_cli-0.3.0.tar.gz | 194,190 | 13/6f/b609a53a0b492431ae16cc7ee11d0e6c342e50fe5d3431243c527ec916b5/brr_cli-0.3.0.tar.gz | source | sdist | null | false | fa924216bef2ca3d546240a3fcb128ac | 4b0b249547aaded0c60c396b5e234a54ea993c0ec963e7e1a20286cb5dcff925 | 136fb609a53a0b492431ae16cc7ee11d0e6c342e50fe5d3431243c527ec916b5 | MIT | [
"LICENSE"
] | 228 |
2.4 | wandelbots-nova | 4.10.1 | Official Python SDK for the Wandelbots Nova | # wandelbots-nova (Python SDK)
[](https://badge.fury.io/py/wandelbots-nova)
[](https://github.com/wandelbotsgmbh/wandelbots-nova/blob/main/LICENSE)
[](https://github.com/wandelbotsgmbh/wandelbots-nova/actions/workflows/nova-release.yaml)
[](https://deepwiki.com/wandelbotsgmbh/wandelbots-nova)
This library provides an SDK for the Wandelbots NOVA API.
The SDK will help you to build your own apps and services using Python on top of Wandelbots NOVA and makes programming a robot as easy as possible.
[417768496-f6157e4b-eea8-4b96-b302-1f3864ae44a9.webm](https://github.com/user-attachments/assets/ca7de6ba-c78d-414f-ae8f-f76d0890caf3)
## Table of Contents
- [Overview](#overview)
- [Prerequisites](#prerequisites)
- [Quickstart](#quickstart)
- [Installation](#installation)
- [Install with pip](#install-with-pip)
- [Install with uv and rerun visualization](#install-with-uv-and-rerun-visualization)
- [Configure environment variables](#configure-environment-variables)
- [Using the SDK](#using-the-sdk)
- [API essentials](#api-essentials)
- [Example gallery](#example-gallery)
- [Wandelscript](#wandelscript)
- [NOVAx](#novax)
- [Development](#development)
- [Release process](#release-process)
- [Additional resources](#additional-resources)
## Overview
[Wandelbots NOVA OS](https://www.wandelbots.com/) is a robot-agnostic operating system that enables developers to plan, program, control, and operate fleets of six-axis industrial robots through a unified API, across all major robot brands. It integrates modern development tools like Python and JavaScript APIs with AI-based control and motion planning, allowing developers to build automation tasks such as gluing, grinding, welding, and palletizing without needing to account for hardware differences. The software offers a powerful set of tools that support the creation of custom automation solutions throughout the entire automation lifecycle.
## Prerequisites
- A running NOVA instance (Get a Wandelbots NOVA account on [wandelbots.com](https://www.wandelbots.com/contact))
- Valid NOVA API credentials
- Python >=3.11
## Quickstart
1. Install the SDK using `pip` or set up a local `uv` project with extras for visualization. Refer to the [Installation](#installation) section for both options.
2. Copy `.env.template` to `.env` and fill in the base URL and access token for your NOVA deployment. Details are covered in [Configure environment variables](#configure-environment-variables).
3. Run an example to validate the setup, e.g. `uv run python examples/start_here.py`. Install the rerun extras and execute `uv run download-models` if you want interactive 3D visualization out of the box.
## Installation
### Install with pip
Install the library using pip:
```bash
pip install wandelbots-nova
```
### Install with uv and rerun visualization
Install [uv](https://docs.astral.sh/uv/getting-started/installation/) on your system.
Initialize a new uv project with the following command.
```bash
uv init
```
Install the library with the `nova-rerun-bridge` extra to use the visualization tool [rerun](https://rerun.io/).
See [extension README.md](nova_rerun_bridge/README.md) for further details.
```bash
uv add wandelbots-nova --extra nova-rerun-bridge
```
Download the robot models to visualize them in the rerun viewer.
```bash
uv run download-models
```
### Configure Environment Variables
Copy the provided `.env.template` file and rename it to `.env`:
```bash
cp .env.template .env
```
Open the `.env` file in a text editor and fill in the values. Here's what each variable does:
| Variable | Description | Required | Default | Example |
| ------------------- | -------------------------------------------------------------------------------- | -------- | ------- | ------------------------------------------------ |
| `NOVA_API` | Base URL or hostname of the Wandelbots NOVA server instance | Yes | None | `https://nova.example.com` or `http://172.0.0.1` |
| `NOVA_ACCESS_TOKEN` | Pre-obtained access token for Wandelbots NOVA (cloud or self-hosted deployments) | Yes\* | None | `eyJhbGciOi...` |
> **Note:**
>
> - `NOVA_API` is mandatory in every deployment. Always point it to the NOVA base URL you are targeting.
> - `NOVA_ACCESS_TOKEN` is the supported authentication mechanism. It is mandatory for the Wandelbots Cloud environment; for self-hosted deployments generate and supply a token with the required permissions.
> - Username/password authentication (`NOVA_USERNAME`/`NOVA_PASSWORD`) is deprecated and no longer supported.
## Using the SDK
### API essentials
Import the library in your code to get started.
```python
from nova import Nova
```
You can access the automatically generated NOVA API client using the `api` module.
```python
from nova import api
```
### Example gallery
Curated examples in this repository showcase typical SDK workflows:
1. **Basic usage**: [start_here.py](https://github.com/wandelbotsgmbh/wandelbots-nova/tree/main/examples/start_here.py)
2. **Robot movement and I/O control**: [plan_and_execute.py](https://github.com/wandelbotsgmbh/wandelbots-nova/tree/main/examples/plan_and_execute.py)
3. **Collision-free movement**: [collision_setup.py](https://github.com/wandelbotsgmbh/wandelbots-nova/tree/main/examples/collision_setup.py)
<img width="100%" alt="collision_free" src="https://github.com/user-attachments/assets/0416151f-1304-46e2-a4ab-485fcda766fc" />
4. **Multiple robot coordination**: [move_multiple_robots.py](https://github.com/wandelbotsgmbh/wandelbots-nova/tree/main/examples/move_multiple_robots.py)
5. **3D visualization with rerun**: [welding.py](https://github.com/wandelbotsgmbh/wandelbots-nova/tree/main/examples/welding.py)
> **Note**: Install [rerun extras](#install-with-uv-and-rerun-visualization) to enable visualization
<img width="1242" alt="pointcloud" src="https://github.com/user-attachments/assets/8e981f09-81ae-4e71-9851-42611f6b1843" />
6. **Custom TCPs (Tool Center Points)**: [visualize_tool.py](https://github.com/wandelbotsgmbh/wandelbots-nova/tree/main/examples/visualize_tool.py)
<img width="100%" alt="trajectory" src="https://github.com/user-attachments/assets/649de0b7-d90a-4095-ad51-d38d3ac2e716" />
7. **Custom mounting with multiple robots**: [robocore.py](https://github.com/wandelbotsgmbh/wandelbots-nova/tree/main/examples/robocore.py)
<img width="100%" alt="thumbnail" src="https://github.com/user-attachments/assets/6f0c441e-b133-4a3a-bf0e-0e947d3efad4" />
## Wandelscript
Wandelscript is a domain-specific language for programming robots.
It is a declarative language that allows you to describe the robot's behavior in a high-level way.
Wandelscript is suited to get yourself familiar with robot programming.
```bash
uv add wandelbots-nova --extra wandelscript
```
Here is a simple example of a Wandelscript program:
```python
robot = get_controller("controller")[0]
tcp("Flange")
home = read(robot, "pose")
sync
# Set the velocity of the robot to 200 mm/s
velocity(200)
for i = 0..3:
move via ptp() to home
# Move to a pose concatenating the home pose
move via line() to (50, 20, 30, 0, 0, 0) :: home
move via line() to (100, 20, 30, 0, 0, 0) :: home
move via line() to (50, 20, 30, 0, 0, 0) :: home
move via ptp() to home
```
To get started, use the [Quickstart](https://docs.wandelbots.io/latest/pathplanning-maintained/wandelscript/quickstart).
For implementation details or contributing to Wandelscript, refer to the [Wandelscript readme](/wandelscript/README.md).
## NOVAx
NOVAx is an app framework for building server applications on top of Wandelbots NOVA.
It provides common core concepts like the handling of programs and their execution.
You can create a new NOVAx app using the [NOVA CLI](https://github.com/wandelbotsgmbh/nova-cli) generator:
```bash
nova app create "your-nova-app" -g python_app
```
For more information on using NOVAx see the [README](https://github.com/wandelbotsgmbh/wandelbots-nova/tree/main/examples/your-nova-app/README.md). Explore [this example](https://github.com/wandelbotsgmbh/wandelbots-nova/tree/main/examples/your-nova-app/your-nova-app/app.py) to use the NOVAx entry point.
> **Important:** When using NOVAx, you must import the actual program functions from their respective Python files. Only importing the program files won't suffice. This ensures proper function registration and execution within the NOVAx runtime environment.
## Development
To install development dependencies, run
```bash
uv sync --extra "nova-rerun-bridge"
```
### Formatting
```bash
uv run ruff format
uv run ruff check --select I --fix
```
### Yaml linting
```bash
docker run --rm -it -v $(pwd):/data cytopia/yamllint -d .yamllint .
```
### Branch versions for testing
When working with feature branches or forks, it can be helpful to test the library as a dependency in other projects before merging.
You can specify custom sources in your pyproject.toml to pull the library from a specific branch:
Using PEP 621-style table syntax:
```toml
wandelbots-nova = { git = "https://github.com/wandelbotsgmbh/wandelbots-nova.git", branch = "fix/http-prefix" }
```
Using PEP 508 direct URL syntax:
```toml
wandelbots-nova @ git+https://github.com/wandelbotsgmbh/wandelbots-nova.git@fix/http-prefix
```
## Release process
### Branch behaviour overview
| Branch | Purpose | Published to | Example version |
| ----------- | ------------------------------------------------------ | -------------------------------------- | -------------------- |
| `main` | Stable releases (semantic versioning vX.Y.Z) | PyPI (`pip install wandelbots-nova`) | `v1.13.0` |
| `release/*` | LTS-releases, pre-releases or hotfixes for older lines | PyPI (labeled with release suffix) | `v1.8.7-release-1.x` |
| any other | Development builds | GitHub actions (not published to PyPI) | `e4c8af0647839...` |
### Stable releases from `main`
Merging into main triggers the release workflow:
1. `semantic-release` analyzes commit messages and bumps the version automatically (see the example commits below).
2. A source distribution and wheel are built and uploaded to PyPI.
3. A GitHub release is created (or updated) with the release assets.
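For reference, `semantic-release` derives the bump from conventional-commit prefixes. Assuming the default Angular-style commit convention (the exact rules depend on this repository's release configuration), commits map to versions roughly like this:
```bash
git commit -m "fix: handle empty TCP name"                        # patch bump (vX.Y.Z+1)
git commit -m "feat: add new motion settings helper"              # minor bump (vX.Y+1.0)
git commit -m "feat: rework API" -m "BREAKING CHANGE: drops v1"   # major bump (vX+1.0.0)
```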
### LTS releases from `release/*`
If you're on older major versions or under a special LTS contract:
1. Use (or create) a branch like `release/1.x`, `release/customer-foo`, etc.
2. Every commit to these branches triggers the same workflow as on `main`.
3. Versions include the branch name to prevent collisions, e.g. `v1.8.7-release-1.x`
### Create a dev build (manual)
Need a temporary test build? Use GitHub Actions:
1. Go to the [actions tab](https://github.com/wandelbotsgmbh/wandelbots-nova/actions).
2. Find **Nova SDK: Build dev wheel** and click `Run workflow`.
3. Select a branch and trigger the job.
4. After completion, open the [Installation step](#installation) to copy the ready-to-use `pip install` command:
```bash
pip install "wandelbots-nova @ git+https://github.com/wandelbotsgmbh/wandelbots-nova.git@<commit>"
```
## Additional resources
- [Examples](https://github.com/wandelbotsgmbh/wandelbots-nova/tree/main/examples) covering basic to advanced SDK scenarios
- [Technical wiki](https://deepwiki.com/wandelbotsgmbh/wandelbots-nova) with architecture notes and troubleshooting tips
- [Official documentation](https://docs.wandelbots.io/) for platform concepts and API guides
- [Code documentation](https://wandelbotsgmbh.github.io/wandelbots-nova/) generated from the latest SDK build
| text/markdown | Wandelbots GmbH | Christoph Biering <christoph.biering@wandelbots.com>, Mahsum Demir <mahsum.demir@wandelbots.com>, Dirk Sonnemann <dirk.sonnemann@wandelbots.com>, Andreas Langenhagen <andreas.langenhagen@wandelbots.com>, Stefan Wagner <stefan.wagner@wandelbots.com>, André Kühnert <andre.kuhnert@wandelbots.com> | null | null | null | null | [] | [] | null | null | <3.13,>=3.11 | [] | [] | [] | [
"aiostream<0.7,>=0.6.4",
"anyio<5,>=4.8.0",
"asyncstdlib<4,>=3.13.0",
"asyncua<2,>=1.1.5",
"blinker>=1.9.0",
"docstring-parser>=0.16.0",
"exceptiongroup>=1.2.2",
"httpx<0.29,>=0.28.0",
"loguru<0.8,>=0.7.2",
"nats-py>=2.11.0",
"numpy>1.1.19",
"pydantic<3,>=2.11.4",
"python-decouple~=3.8",
"scipy<2,>=1.14.1",
"wandelbots-api-client~=25.10.0",
"websockets<15,>=14.1.0",
"apscheduler>=3.11.0; extra == \"benchmark\"",
"pyyaml>5.3; extra == \"benchmark\"",
"requests>=2.32.3; extra == \"benchmark\"",
"rerun-sdk==0.26.2; extra == \"benchmark\"",
"trimesh>=4.5.3; extra == \"benchmark\"",
"apscheduler>=3.11.0; extra == \"nova-rerun-bridge\"",
"requests>=2.32.3; extra == \"nova-rerun-bridge\"",
"rerun-sdk==0.26.2; extra == \"nova-rerun-bridge\"",
"trimesh>=4.5.3; extra == \"nova-rerun-bridge\"",
"fastapi>=0.115.6; extra == \"novax\"",
"python-decouple>=3.8; extra == \"novax\"",
"uvicorn>=0.34.0; extra == \"novax\"",
"aiostream<0.7,>=0.6.1; extra == \"wandelscript\"",
"antlr4-python3-runtime==4.13.2; extra == \"wandelscript\"",
"dotenv; extra == \"wandelscript\"",
"geometricalgebra<0.2,>=0.1.3; extra == \"wandelscript\"",
"numpy>=1.1.19; extra == \"wandelscript\"",
"typer[all]<0.20,>=0.12; extra == \"wandelscript\""
] | [] | [] | [] | [] | twine/6.1.0 CPython/3.13.7 | 2026-02-20T11:05:04.374555 | wandelbots_nova-4.10.1.tar.gz | 12,494,248 | 23/7e/03c0e9d227961ef67c20568253566b26c954301e0dab1f598c2c8aaa1bbf/wandelbots_nova-4.10.1.tar.gz | source | sdist | null | false | 3aa6f7a29483765a8fb8c8f4d0b8b86e | 357cfd27a701cf5ea896cbeb5cccaa198667ea25de1810f630e36792799cbee3 | 237e03c0e9d227961ef67c20568253566b26c954301e0dab1f598c2c8aaa1bbf | null | [
"LICENSE"
] | 382 |
2.4 | experimaestro | 2.0.0 | Experimaestro is a computer science experiment manager | [](https://badge.fury.io/py/experimaestro)
[](https://experimaestro-python.readthedocs.io)
Experimaestro helps in designing and managing **complex experimental plans**. It allows for the definition of tasks and their dependencies, ensuring that each step in a workflow is executed in the correct order. Some key aspects of Experimaestro are:
- **Task Automation**: The tool automates repetitive tasks, making it easier to run large-scale experiments. It's particularly useful in scenarios where experiments need to be repeated with different parameters or datasets.
- **Resource Management**: It efficiently manages computational resources, which is critical when dealing with data-intensive tasks or when running multiple experiments in parallel.
- **Reproducibility**: By keeping a detailed record of experiments (the experimental plan in python), including parameters and environments, it aids in ensuring the reproducibility of scientific experiments, which is a fundamental requirement in research.
- **User Interface**: While primarily a back-end tool, Experimaestro also offers a user interface to help in managing and visualizing workflows (web and text-based).
The full documentation can be read by going to the following URL: [https://experimaestro-python.readthedocs.io](https://experimaestro-python.readthedocs.io). A tutorial (training a CNN on MNIST) is [available on github](https://github.com/experimaestro/experimaestro-demo).
# Screenshots
## Textual interface (new in v2)



# Install
## With pip
You can install the package using `pip install experimaestro`
## Develop
Check out the git repository, then
```
pip install -e .
```
# Example
This very simple example shows how to submit two tasks that concatenate two strings.
Under the hood,
- A directory is created for each task (in `workdir/jobs/helloworld.add/HASHID`)
based on a unique ID computed from the parameters
- Two processes for `Say` are launched (there are no dependencies, so they will be run in parallel)
- A tag `y` is created for the main task
<!-- SNIPPET: MAIN ARGS[%WORKDIR% --port 0 --sleeptime=0.0001] -->
```python
# --- Task and types definitions
import logging

logging.basicConfig(level=logging.DEBUG)

from pathlib import Path
from experimaestro import Task, Param, experiment, progress
import click
import time
import os
from typing import List


# --- Just to be able to monitor the tasks
def slowdown(sleeptime: int, N: int):
    logging.info("Sleeping %ds after each step", sleeptime)
    for i in range(N):
        time.sleep(sleeptime)
        progress((i+1)/N)


# --- Define the tasks
class Say(Task):
    word: Param[str]
    sleeptime: Param[float]

    def execute(self):
        slowdown(self.sleeptime, len(self.word))
        print(self.word.upper(),)


class Concat(Task):
    strings: Param[List[Say]]
    sleeptime: Param[float]

    def execute(self):
        says = []
        slowdown(self.sleeptime, len(self.strings))
        for string in self.strings:
            with open(string.__xpm_stdout__) as fp:
                says.append(fp.read().strip())
        print(" ".join(says))


# --- Defines the experiment
@click.option("--port", type=int, default=12345, help="Port for monitoring")
@click.option("--sleeptime", type=float, default=2, help="Sleep time")
@click.argument("workdir", type=Path)
@click.command()
def cli(port, workdir, sleeptime):
    """Runs an experiment"""
    # Sets the working directory and the name of the xp
    with experiment(workdir, "helloworld", port=port) as xp:
        # Submit the tasks
        hello = Say.C(word="hello", sleeptime=sleeptime).submit()
        world = Say.C(word="world", sleeptime=sleeptime).submit()

        # Concat will depend on the two first tasks
        Concat.C(strings=[hello, world], sleeptime=sleeptime).tag("y", 1).submit()


if __name__ == "__main__":
    cli()
```
which can be launched with `python test.py /tmp/helloworld-workdir`
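The monitoring port and the per-step sleep time can also be set on the command line (the values below are just illustrative; the flags correspond to the `click` options defined in the example):
```bash
python test.py /tmp/helloworld-workdir --port 12345 --sleeptime 0.5
```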
| text/markdown | null | Benjamin Piwowarski <benjamin@piwowarski.fr> | null | null | GPL-3.0-or-later | experiment manager | [
"Development Status :: 4 - Beta",
"Intended Audience :: Science/Research",
"License :: OSI Approved :: GNU General Public License v3 (GPLv3)",
"Operating System :: OS Independent",
"Programming Language :: Python",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Topic :: Software Development :: Libraries :: Python Modules"
] | [] | null | null | >=3.10 | [] | [] | [] | [
"arpeggio<3,>=2",
"attrs<24,>=23.1.0",
"click>=8",
"decorator<6,>=5",
"docstring-parser<1,>=0.15",
"fastapi<1,>=0.109",
"filelock<4,>=3.16",
"httpx<1,>=0.26",
"huggingface-hub>0.17",
"humanfriendly>=10",
"jinja2>=3",
"marshmallow<4,>=3.20",
"omegaconf<3,>=2.3",
"psutil<8,>=7",
"pyparsing<4,>=3.1",
"pyperclip<2,>=1.8",
"pytools<2024,>=2023.1.1",
"pyyaml<7,>=6.0.1",
"requests<3,>=2.31",
"rpyc<7,>=5",
"sortedcontainers<3,>=2.4",
"termcolor<3,>=2.3",
"textual-fspicker>=0.0.11",
"textual>=6",
"tqdm<5,>=4.66.1",
"typing-extensions>=4.2; python_version < \"3.12\"",
"uvicorn[standard]<1,>=0.27",
"watchdog>=2",
"codecarbon>=2.0; sys_platform != \"darwin\" and extra == \"carbon\"",
"zeus[apple]>=0.10.0; sys_platform == \"darwin\" and extra == \"carbon\"",
"docutils>=0.18; extra == \"dev\"",
"mypy>=1.0; extra == \"dev\"",
"pygments>=2.15; extra == \"dev\"",
"pytest-dependency>=0.6.0; extra == \"dev\"",
"pytest-order>=1.0.0; extra == \"dev\"",
"pytest-timeout>=2.4.0; extra == \"dev\"",
"pytest>=8.4.1; extra == \"dev\"",
"textual-dev>=1.8.0; extra == \"dev\"",
"myst-parser>=2.0; extra == \"docs\"",
"sphinx-codeautolink>=0.15; extra == \"docs\"",
"sphinx-copybutton>=0.5; extra == \"docs\"",
"sphinx-rtd-theme>=2.0; extra == \"docs\"",
"sphinx>=6; extra == \"docs\"",
"fabric>=3; extra == \"ssh\"",
"paramiko>=3.3; extra == \"ssh\""
] | [] | [] | [] | [
"Homepage, https://github.com/experimaestro/experimaestro-python",
"Documentation, https://experimaestro-python.readthedocs.io/",
"Repository, https://github.com/experimaestro/experimaestro-python",
"Bug Tracker, https://github.com/experimaestro/experimaestro-python/issues"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-20T11:04:44.563726 | experimaestro-2.0.0.tar.gz | 27,733,232 | 3f/fa/27992a0e93337c583602c087038d0b782eeeda080a699eab3c42c6e9d037/experimaestro-2.0.0.tar.gz | source | sdist | null | false | f4f07e473e005cf1a0e22a05f4343518 | bc14b477195d5ad36ec2d493b2eaa2fcdc6a0b4550887dde1a86297ea5349082 | 3ffa27992a0e93337c583602c087038d0b782eeeda080a699eab3c42c6e9d037 | null | [
"LICENSE"
] | 277 |
2.4 | zaira | 0.17.0 | CLI tool for offline Jira ticket management. Export tickets to markdown, generate reports, and keep everything in sync. | # Zaira
> **Pronunciation:** *ZAY-rah* /ˈzeɪ.rə/ — named after Jira (*JEE-rah* /ˈdʒiː.rə/), though it works with Confluence too. Finnish speakers may pronounce it however they like.
A CLI tool for Jira and Confluence management. Export tickets to markdown, generate reports, and keep everything in sync.
Designed for AI-assisted development workflows. By exporting Jira tickets to plain markdown files, AI agents and coding assistants can easily read project context, understand requirements, and reference ticket details without needing direct Jira API access.
## Installation
```bash
uv tool install zaira
```
Or with pip:
```bash
pip install zaira
```
## Setup
### 1. Configure credentials
Run `zaira init` to create the credentials file:
```bash
zaira init
```
This creates a credentials file in your platform's config directory (`~/Library/Application Support/zaira/` on macOS, `~/.config/zaira/` on Linux). Edit it with your Jira details:
```toml
site = "your-company.atlassian.net"
email = "your-email@example.com"
api_token = "your-api-token"
```
Get your API token from: https://id.atlassian.com/manage-profile/security/api-tokens
### 2. Initialize project (optional, for project managers)
For advanced workflows with named queries, report templates, and board aliases, generate `zproject.toml`:
```bash
zaira init-project FOO # Single project
zaira init-project FOO BAR # Multiple projects
```
This discovers each project's components, labels, and boards, then generates `zproject.toml` with named queries and reports. This is intended for project managers and power users who need repeatable reports and batch operations. Most commands work without this file.
## Commands
### export
Export individual tickets to stdout (default) or files:
```bash
# Output to stdout (default)
zaira export FOO-1234
# Save to tickets/ directory
zaira export FOO-1234 --files
zaira export FOO-1234 -o tickets/
# Bulk export with JQL, board, or sprint
zaira export --jql "project = FOO AND status = 'In Progress'" --files
zaira export --board 123 --files
zaira export --sprint 456 --files
# Export as JSON
zaira export FOO-1234 --format json
# Include linked pull requests (GitHub only)
zaira export FOO-1234 --with-prs
# Include custom fields (uses cached schema for name lookup)
zaira export FOO-1234 --all-fields
```
### create
Create a ticket from a YAML front matter file:
```bash
# Create ticket from file
zaira create ticket.md
# Create from stdin
zaira create - <<EOF
---
project: FOO
summary: Quick ticket
type: Task
---
Description here
EOF
# Preview without creating
zaira create ticket.md --dry-run
```
The file format matches exported tickets:
```markdown
---
project: FOO
summary: "Implement feature X"
type: Story
priority: High
components: [backend, api]
labels: [v2]
Epic Link: FOO-100 # Custom field (looked up via schema)
---
## Description
Feature description here...
```
Custom field names are mapped to IDs using the cached schema. Run `zaira info fields --refresh` to cache field mappings.
### my
Show your open tickets grouped by status:
```bash
zaira my
```
Tickets are sorted by age (oldest first) within each group. Uses the `my-tickets` query from `zproject.toml` if configured, otherwise defaults to `assignee = currentUser() AND status NOT IN (Done, Closed, Resolved, Disposal, Rejected)`.
### report
Generate markdown reports from JQL queries:
```bash
# Use a named report from zproject.toml
zaira report my-tickets
# Use a named query
zaira report --query my-tickets
# Use raw JQL
zaira report --jql "project = FOO AND type = Bug" --title "Bugs"
# Group by field
zaira report --jql "project = FOO" --group-by status
# Filter by label
zaira report --board main --label backend
# Export tickets along with the report
zaira report my-tickets --full
# Force re-export all tickets
zaira report my-tickets --full --force
# Output as JSON or CSV
zaira report my-tickets --format json
zaira report my-tickets --format csv
# Force file output without zproject.toml
zaira report --jql "project = FOO" --files
```
Reports are saved to `reports/` with YAML front matter containing the refresh command (markdown only).
### refresh
Refresh a report using the command stored in its front matter:
```bash
zaira refresh my-report.md
# Also export tickets referenced in the report
zaira refresh my-report --full
# Force re-export all tickets
zaira refresh my-report --full --force
```
When using `--full`, only tickets that have changed in Jira since the last refresh are re-exported.
### boards
List available Jira boards:
```bash
zaira boards
zaira boards --project FOO
```
### edit
Edit a ticket's fields:
```bash
# Title and description
zaira edit FOO-1234 --title "New title"
zaira edit FOO-1234 --description "New description"
zaira edit FOO-1234 -t "Title" -d "Description"
# Arbitrary fields with -F (repeatable)
zaira edit FOO-1234 -F "Priority=High"
zaira edit FOO-1234 -F "Priority=High" -F "Epic Link=FOO-100"
zaira edit FOO-1234 -F "labels=bug,urgent" -F "Story Points=5"
# Assign ticket
zaira edit FOO-1234 -F "assignee=me" # Assign to yourself
zaira edit FOO-1234 -F "assignee=user@example.com" # Assign by email
# From YAML file
zaira edit FOO-1234 --from fields.yaml
# From stdin
zaira edit FOO-1234 --from - <<EOF
Priority: High
Epic Link: FOO-100
Story Points: 5
labels: [bug, urgent]
EOF
# Multiline description via stdin
zaira edit FOO-1234 -d - <<EOF
h2. Overview
This is a *bold* statement with _italic_ text.
EOF
```
Custom field names are mapped to IDs using the cached schema. Descriptions support [Jira wiki syntax](https://jira.atlassian.com/secure/WikiRendererHelpAction.jspa?section=all).
### comment
Add a comment to a ticket:
```bash
zaira comment FOO-1234 "This is my comment"
# Multiline via stdin
zaira comment FOO-1234 - <<EOF
Line 1
Line 2
EOF
# Pipe from file or command
cat notes.txt | zaira comment FOO-1234 -
```
### transition
Transition a ticket to a new status:
```bash
zaira transition FOO-1234 "In Progress"
zaira transition FOO-1234 Done
```
### check (experimental)
Validate tickets against a `rules.yaml` file in the current directory:
```bash
zaira check FOO-123
zaira check FOO-123 FOO-456 FOO-789
zaira check FOO-123 --rules path/to/rules.yaml
```
Rules are scoped by issue type. Available checks:
- `required` — field must exist and be non-null
- `non_empty` — field must exist and not be empty string/empty list
- `contains` — string field must contain a substring
- `not_contains` — string field must not contain a substring
- `matches` — string field must match a regex (`re.search`; use `(?i)` for case-insensitive)
- `not_matches` — string field must not match a regex
- `one_of` — field value must be one of the allowed values; for list fields, all values must be in the allowed set
- `not_one_of` — field value must not be any of the forbidden values
- `subtask_types` — must have at least one subtask of each listed issue type
- `when.<status>` — additional rules that apply only when the ticket is in that status
- `if` — conditional rules that match on any field value
```yaml
Story:
  required: [Story Points, assignee]
  non_empty: [Description]
  contains:
    Description: "acceptance criteria"
  matches:
    Description: "\\bhttp\\S+" # must contain a link
  not_matches:
    summary: "(?i)\\bwip\\b" # summary must not contain WIP
  one_of:
    Priority: [Critical, High, Medium]
  when:
    Done:
      required: [Resolution]
      subtask_types: [Deployment Wave]
      not_contains:
        Description: "TODO"
  if:
    - match: { Priority: Critical }
      then:
        required: [Rollback Plan, Deployment Owner]
    - match: { components: backend }
      then:
        required: [API Review]
    - match: { labels: security, Priority: Critical }
      then:
        required: [Security Review]
```
`when` is sugar for the common status case. `if` is a list of `{match, then}` blocks — all fields in `match` must match (AND logic), and `then` contains any of the standard check types including `subtask_types`. For list fields like `components` and `labels`, `match` checks membership (value is in the list). For scalar fields, it checks exact equality. `if` blocks also respect the status override during transition validation.
Field names work for both standard fields (`summary`, `status`, `assignee`) and custom fields (`Release Date`, `Story Points`). Standard field lookup is case-insensitive.
**Transition validation:** When `rules.yaml` exists, `zaira transition` automatically checks the target status rules before transitioning. If the ticket fails validation, the transition is blocked:
```
$ zaira transition FOO-123 Done
Blocked: FOO-123 fails rules for 'Done':
FAIL required Resolution
FAIL not_contains Description
Use --no-check to skip validation.
```
Use `--no-check` to bypass: `zaira transition FOO-123 Done --no-check`
### link
Create a link between two tickets:
```bash
zaira link FOO-1234 FOO-5678 # Default: Relates
zaira link FOO-1234 FOO-5678 --type Blocks
zaira link FOO-1234 FOO-5678 -t Duplicates
```
### log
Log work hours to a ticket:
```bash
# Log time
zaira log FOO-1234 2h
zaira log FOO-1234 30m
zaira log FOO-1234 "1h 30m"
# Log with comment
zaira log FOO-1234 2h --comment "Code review"
zaira log FOO-1234 1h -c "Sprint planning"
# Log to a specific date
zaira log FOO-1234 3h --date 2026-02-05
# List existing worklogs
zaira log FOO-1234 --list
```
Time formats: `30m` (minutes), `2h` (hours), `1d` (day = 8h), `1w` (week = 40h), or compound like `1h 30m`. The `--list` flag shows all worklogs with author, date, and a total.
### hours
Show logged hours across all tickets for a time period:
```bash
# Last 7 days (default)
zaira hours
# Last 14 days
zaira hours --days 14
# Custom date range
zaira hours --from 2026-01-20 --to 2026-01-24
# Ticket totals only (no daily breakdown)
zaira hours --summary
# Hours by person on specific tickets
zaira hours FOO-123 FOO-456
# Combine ticket mode with date filtering
zaira hours FOO-123 --from 2026-01-01 --to 2026-01-31
```
Without ticket keys, shows your personal timesheet with daily breakdown. With ticket keys, shows hours split by person per ticket.
### attach
Upload attachments to a ticket:
```bash
zaira attach FOO-1234 screenshot.png
zaira attach FOO-1234 *.png doc.pdf # Multiple files
```
### dashboards
List available Jira dashboards:
```bash
zaira dashboards
zaira dashboards --mine # Only your dashboards
zaira dashboards --filter "sprint" # Filter by name
zaira dashboards --limit 100 # Max results (default: 50)
```
### dashboard
Export a specific dashboard:
```bash
zaira dashboard 16148
zaira dashboard "https://company.atlassian.net/jira/dashboards/16148"
zaira dashboard 16148 --format json
zaira dashboard 16148 -o dashboard.md
```
### wiki
Access Confluence pages using the same credentials:
```bash
# Get page by ID or URL (outputs markdown with front matter)
zaira wiki get 123456
zaira wiki get "https://site.atlassian.net/wiki/spaces/SPACE/pages/123456/Title"
zaira wiki get 123456 --format html # Raw storage format
zaira wiki get 123456 --format json # Full API response
# Export multiple pages to directory
zaira wiki get 123 456 789 -o docs/
# Export page and all children recursively
zaira wiki get 123 --children -o docs/
# List page and children (without exporting)
zaira wiki get 123 --list
# Search pages
zaira wiki search "search terms"
zaira wiki search "docs" --space TEAM # Filter by space
zaira wiki search --creator "John Doe" # Filter by creator
zaira wiki search "api" --limit 50 # Limit results (default: 25)
zaira wiki search "api" --format url # Output just URLs
zaira wiki search "api" --format json # Full JSON response
# Create page from markdown
zaira wiki create -s SPACE -t "Page Title" -m -b page.md
zaira wiki create -s SPACE -t "Title" -m -b - # From stdin
zaira wiki create -t "Child Page" -p 123 -m -b page.md # Under parent (space inferred)
# Upload attachments
zaira wiki attach 123456 image.png # Single file
zaira wiki attach 123456 *.png # Glob pattern
zaira wiki attach 123456 image.png --replace # Replace if exists
# Delete page
zaira wiki delete 123456 # Prompts for confirmation
zaira wiki delete 123456 --yes # Skip confirmation
# Edit page properties
zaira wiki edit 123456 --title "New Title"
zaira wiki edit 123456 --parent 789 # Move under different parent
zaira wiki edit 123456 --labels "docs,api,v2" # Set labels (replaces existing)
zaira wiki edit 123456 --space NEWSPACE # Move to different space
```
#### wiki put (with sync)
Update Confluence pages from markdown files with automatic sync tracking:
```bash
# Push local changes (page ID from front matter)
zaira wiki put -m page.md
# Multiple files / globs / directories
zaira wiki put -m docs/*.md
zaira wiki put -m docs/
# Check sync status
zaira wiki put -m page.md --status
# View diff between local and remote
zaira wiki put -m page.md --diff
# Pull remote changes to local file
zaira wiki put -m page.md --pull
# Force push (overwrite conflicts)
zaira wiki put -m page.md --force
# Create new pages for files without front matter
zaira wiki put -m docs/*.md --create # Parent auto-detected from siblings
zaira wiki put -m docs/*.md --create --parent 123 # Explicit parent
# Explicit page ID (single file, overrides front matter)
zaira wiki put -m page.md -p 123456
```
**Creating new pages:** With `--create`, files without `confluence:` front matter become new pages. The parent is auto-detected from sibling files (must all share the same parent), or specify with `--parent`. After creation, front matter is added to the file.
**Front matter:** Files link to Confluence pages via YAML front matter. Title and labels sync automatically on push/pull:
```markdown
---
confluence: 123456
title: My Document
labels: [docs, api]
---
Content here with 
```
**Image handling:** Local images (``) are automatically uploaded as Confluence attachments on push, and downloaded to `images/` on pull. Only changed images are re-uploaded.
**Conflict detection:** Tracks versions and content hashes. If both local and remote changed since last sync, you'll get a conflict warning:
```
Error: Conflict detected!
Local file changed since last sync
Remote changed: version 5 -> 7
Use --diff to see changes, --force to overwrite remote, or --pull to discard local changes
```
### info
Query Jira instance metadata. Results are cached locally and served from cache by default:
```bash
zaira info statuses # List statuses and categories
zaira info priorities # List priorities
zaira info issue-types # List issue types
zaira info link-types # List available link types
zaira info fields # List custom fields
zaira info fields --all # Include standard fields
zaira info fields --filter epic # Search by name or ID
# Project-specific metadata
zaira info components FOO # List components for project
zaira info labels FOO # List labels for project
# Refresh from Jira API (also updates cache)
zaira info statuses --refresh
zaira info fields -r
# Refresh all metadata at once
zaira info --save
```
Instance schema is cached at `~/.cache/zaira/zschema_PROFILE.json` and project schemas at `~/.cache/zaira/zproject_PROFILE_PROJECT.json`.
## Project Configuration (for project managers)
The `zproject.toml` file stores project-specific settings for project managers and power users. After running `zaira init-project`, you can edit this file to rename reports, add custom queries, and organize boards to match your workflow:
```toml
[project]
site = "company.atlassian.net"
profile = "work" # Optional: name for schema cache (default: "default")
[boards]
main = 123
support = 456
[queries]
my-tickets = "assignee = currentUser() AND project = FOO AND status != Done"
bugs = "project = FOO AND type = Bug AND status != Done"
# Queries can span multiple projects
all-my-work = "assignee = currentUser() AND project IN (FOO, BAR) AND status != Done"
[reports]
my-tickets = { query = "my-tickets", group_by = "status" }
bugs = { jql = "project = FOO AND type = Bug", group_by = "priority" }
sprint = { board = 123, group_by = "status", full = true }
# Reports can target multiple projects via JQL
cross-team = { jql = "project IN (FOO, BAR) AND type = Bug", group_by = "project" }
```
## Output Structure
```
project/
  zproject.toml                 # Project configuration
  tickets/                      # Exported tickets
    FOO-1234-ticket-title.md
    attachments/                # Downloaded attachments (up to 10 MB each)
      FOO-1234/
        screenshot.png
        design.pdf
    by-component/               # Symlinks grouped by component (markdown only)
      backend/
        FOO-1234-ticket-title.md -> ../../FOO-1234-ticket-title.md
    by-parent/                  # Symlinks grouped by parent ticket
      FOO-1000-epic-name/
        FOO-1234-ticket-title.md -> ../../FOO-1234-ticket-title.md
  reports/                      # Generated reports
    my-tickets.md
    my-tickets.json             # with --format json
    my-tickets.csv              # with --format csv
```
## Ticket Format
Exported tickets include YAML front matter:
```markdown
---
key: FOO-1234
summary: "Implement feature X"
type: Story
status: In Progress
priority: High
assignee: user@example.com
reporter: pm@example.com
components: Backend
labels: api, v2
parent: FOO-1000
Epic Link: FOO-1000 # Custom fields (with --all-fields)
Story Points: 5
synced: 2024-01-15T10:30:00
url: https://company.atlassian.net/browse/FOO-1234
---
# FOO-1234: Implement feature X
## Description
Feature description here...
## Attachments
- [screenshot.png](attachments/FOO-1234/screenshot.png) (145 KB, Jane Doe, 2024-01-14)
## Comments
### John Doe (2024-01-14T09:00:00)
Comment text...
```
## Python API
For programmatic access (or AI agents needing advanced Jira operations):
```python
import zaira
# Authenticated Jira client (jira.JIRA instance)
jira = zaira.client()
issue = jira.issue("FOO-1234")
issues = jira.search_issues("project = FOO AND status = 'In Progress'")
# Instance schema (fields, statuses, priorities, issue types, link types)
s = zaira.schema()
s["statuses"] # {'Open': 'To Do', 'In Progress': 'In Progress', ...}
s["fields"] # {'customfield_10001': 'Epic Link', ...}
s["priorities"] # ['Blocker', 'Critical', 'Major', ...]
# Project schema (components, labels)
ps = zaira.project_schema("FOO")
ps["components"] # ['Backend', 'Frontend', ...]
ps["labels"] # ['bug', 'feature', ...]
```
The client uses credentials from the platform config directory (`~/Library/Application Support/zaira/credentials.toml` on macOS, `~/.config/zaira/credentials.toml` on Linux). Schema functions return cached data populated by `zaira init-project` or `zaira info --save`.
## License
MIT
| text/markdown | null | null | null | null | null | jira, cli, tickets, export, markdown, reports, atlassian | [
"Development Status :: 4 - Beta",
"Environment :: Console",
"Intended Audience :: Developers",
"Operating System :: OS Independent",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: 3.13",
"Topic :: Software Development",
"Topic :: Utilities"
] | [] | null | null | >=3.11 | [] | [] | [] | [
"jira>=3.5.0",
"markdown>=3.5",
"platformdirs>=4.0.0",
"pyyaml>=6.0"
] | [] | [] | [] | [
"Homepage, https://github.com/vivainio/zaira",
"Repository, https://github.com/vivainio/zaira",
"Issues, https://github.com/vivainio/zaira/issues"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-20T11:04:34.162450 | zaira-0.17.0.tar.gz | 155,240 | 3f/38/bf21dd327abd412a6a3d74b5cf1a26428a0a4552ba9e76f87b9919cbe95b/zaira-0.17.0.tar.gz | source | sdist | null | false | 6d4b7adef8eeedf5f46ce3e81516a09a | 70a69a520b62a3ba1aac13a5ccc716986820434a27f762a16113b2695c7ee2b5 | 3f38bf21dd327abd412a6a3d74b5cf1a26428a0a4552ba9e76f87b9919cbe95b | MIT | [
"LICENSE"
] | 231 |
2.4 | alfasim-score | 1.1.0 | Python package to convert the SCORE input JSON to Alfacase | ===============
ALFAsim Score
===============
.. image:: https://img.shields.io/pypi/v/alfasim-score.svg
    :target: https://pypi.python.org/pypi/alfasim-score

.. image:: https://img.shields.io/pypi/pyversions/alfasim-score.svg
    :target: https://pypi.org/project/alfasim-score

.. image:: https://github.com/ESSS/alfasim-score/workflows/test/badge.svg
    :target: https://github.com/ESSS/alfasim-score/actions

.. image:: https://codecov.io/gh/ESSS/alfasim-score/branch/master/graph/badge.svg
    :target: https://codecov.io/gh/ESSS/alfasim-score

.. image:: https://img.shields.io/readthedocs/alfasim-score.svg
    :target: https://alfasim-score.readthedocs.io/en/latest/

.. image:: https://sonarcloud.io/api/project_badges/measure?project=ESSS_alfasim-score&metric=alert_status
    :target: https://sonarcloud.io/project/overview?id=ESSS_alfasim-score
What is alfasim-score?
=======================
Python package to convert the SCORE input JSON to Alfacase (ALFAsim input file).
Features
-----------
* Converter from Score input JSON to Alfacase
* Converter from Wellprop pvt tables to `.tab` pvt table format
* Parser for the ALFAsim results that generates a JSON compatible with SCORE
How to use it
-------------
#. First, the user needs to create an instance of the converter::

    from pathlib import Path
    from alfasim_score.converter.alfacase.alfasim_score_converter import AlfasimScoreConverter

    # path indicating where the SCORE input file is
    score_input_filepath = Path("path/to/score_input.json")
    # path indicating where the output file (converted from ALFAsim results) should be created
    score_output_filepath = Path("path/to/score_output_result.json")
    # then create a converter instance
    alfacase_converter = AlfasimScoreConverter(score_input_filepath, score_output_filepath)

#. To convert the SCORE input into an alfacase file, the user can do the following::

    alfacase_filepath = Path("path/where/save/converted_score.alfacase")
    alfacase_converter.generate_alfasim_input_file(alfacase_filepath)

#. Run the ALFAsim with the generated file (and the pvt tables in the same folder)

#. Once the result file of ALFAsim is generated, one can call the converter for the output file::

    alfasim_results_directory = Path("path/to/alfasim_results_folder")
    alfacase_converter.generate_score_output_file(alfasim_results_directory)

#. The user also must remember to convert and save the pvt table (as `.tab` file) if wellprop tables are being used::

    from alfasim_score.converter.wellprop.wellprop_pvt_table_converter import WellpropToPvtConverter

    table_converter = WellpropToPvtConverter(Path("name_of_folder_with_wellprop_tables"))
    table_converter.generate_pvt_table_file(Path("name_of_folder_to_save_converted_pvt_table"))
Development
-----------
For a complete description of what types of contributions are possible,
see the full `CONTRIBUTING <CONTRIBUTING.rst>`_ guide.
Here is a quick summary of the steps necessary to setup your environment to contribute to ``alfasim-score``.
#. Create a virtual environment and activate it::

    $ python -m virtualenv .env
    $ .env\Scripts\activate # windows
    $ source .env/bin/activate # linux

   .. note::

       If you use ``conda``, you can install ``virtualenv`` in the root environment::

           $ conda install -n root virtualenv

       Don't worry as this is safe to do.

#. Update ``pip``::

    $ python -m pip install -U pip

#. Install development dependencies::

    $ pip install -e .[testing]

#. Install pre-commit::

    $ pre-commit install

#. Run tests::

    $ pytest --pyargs alfasim_score

#. Generate docs locally::

    $ tox -e docs

   The documentation files will be generated in ``docs/_build``.
Release
-------
A reminder for the maintainers on how to make a new release.
Note that the VERSION should follow semantic versioning as X.Y.Z
Ex.: v1.0.5
1. Create a ``release-VERSION`` branch from ``upstream/master``.
2. Update ``CHANGELOG.rst``.
3. Push a branch with the changes.
4. Once all builds pass, push a ``VERSION`` tag to ``upstream``. Ex: ``git tag v1.0.5; git push origin --tags``
5. Merge the PR.
.. _`GitHub page` : https://github.com/ESSS/alfasim-score
.. _pytest: https://github.com/pytest-dev/pytest
.. _tox: https://github.com/tox-dev/tox
0.1.0 (2024-06-10)
------------------
* First release on PyPI.
0.2.0 (2024-12-18)
------------------
* Improvements on API.
* Add documentation on how to use the API.
1.0.0 (2025-04-11)
------------------
* Update the alfacase converter to create files compatible with ALFAsim APB plugin v1.0.1
* Add new converter for pvt tables from wellprops to `.tab` format
1.1.0 (2026-02-20)
------------------
* Update the alfacase converter to support ALFAsim APB plugin v2025.2.1
* Update converter to improve ALFAsim simulation performance:

  * Use Zamora correlation for PVT table input
  * Periodic calculation for APB
  * Update of thermal properties only in initialization
| null | ESSS | foss@esss.co | null | null | MIT license | ALFAsim, Score | [
"Development Status :: 5 - Production/Stable",
"Intended Audience :: Developers",
"License :: OSI Approved :: MIT License",
"Programming Language :: Python :: 3.8",
"Programming Language :: Python :: 3.9",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12"
] | [] | https://github.com/ESSS/alfasim-score | null | >=3.8 | [] | [] | [] | [
"alfasim-sdk==1.0.0",
"attrs>=18.1.0",
"numpy>=1.11.0",
"pandas>=2.0.0",
"oop-ext>=1.1",
"typing_extensions",
"codecov; extra == \"testing\"",
"mypy; extra == \"testing\"",
"pre-commit; extra == \"testing\"",
"pytest; extra == \"testing\"",
"pytest-cov; extra == \"testing\"",
"pytest-mock; extra == \"testing\"",
"pytest-regressions; extra == \"testing\"",
"tox; extra == \"testing\""
] | [] | [] | [] | [] | twine/6.0.1 CPython/3.12.8 | 2026-02-20T11:01:25.686950 | alfasim_score-1.1.0.tar.gz | 2,544,552 | b5/1f/83c92453c58d374bcb9548ce66a27a928550e646d8a68b48ca76d66a8ad5/alfasim_score-1.1.0.tar.gz | source | sdist | null | false | 5e4354b8842b9d178e6a6cec77c055a5 | 48360b8f96ba327e8fd0ee2114a8d9fe101380bb16fcb35f41856f146474530e | b51f83c92453c58d374bcb9548ce66a27a928550e646d8a68b48ca76d66a8ad5 | null | [] | 232 |
2.4 | SafePDF | 1.0.12 | A safe PDF manipulation tool | # SafePDF - Privacy-First PDF Toolkit
<p align="center">
<img src="img/SafePDF_small.avif" alt="SafePDF Banner" />
</p>
[](https://github.com/mcagriaksoy/SafePDF/releases/)
[](#license)
[](https://github.com/mcagriaksoy/SafePDF)
**SafePDF** is a privacy-focused, offline PDF manipulation tool. All operations are performed locally on your device—your sensitive documents never leave your computer.
<a href="https://github.com/mcagriaksoy/SafePDF/releases/" download>
<img src="https://img.shields.io/badge/Download-Windows_Installer-blue?style=for-the-badge" alt="Download SafePDF">
</a>
## Why SafePDF?
**100% Offline** - No cloud uploads, no internet required
**Fast & Lightweight** - Operations run directly on your device
**Privacy First** - Ideal for sensitive documents (legal, healthcare, financial)
**Multi-language** - English, German, Turkish support
> **Read more:** [The Security Concerns of Online PDF Tools](https://medium.com/dev-genius/the-untold-security-concerns-of-online-pdf-editing-tools-6ee1d83facd6)

## Features
- **Compress** - Reduce PDF file size with quality control
- **Split** - Separate PDFs by pages or custom ranges
- **Merge** - Combine multiple PDF files into one
- **Convert to Images** - Export PDF pages as JPG/JPEG
- **Rotate** - Rotate pages (90°, 180°, 270°)
- **Repair** - Fix corrupted PDF files
- **Convert to Word** - Export PDF as DOCX documents
- **Extract Text** - Extract plain text from PDFs
- **Extract Info** - View PDF metadata and properties
**Interface Features:**
- Drag & drop file selection
- Real-time progress tracking
- Multi-language UI
- Modern, intuitive design


## Quick Start
### Option 1: Download Executable (Recommended)
1. Download the latest release from [Releases](https://github.com/mcagriaksoy/SafePDF/releases/)
2. Extract the ZIP file
3. Run `SafePDF.exe`
### Option 2: Run from Source
**Requirements:**
- Python 3.7+
- pip
**Installation:**
```bash
# Clone the repository
git clone https://github.com/mcagriaksoy/SafePDF.git
cd SafePDF
# Install dependencies
pip install -r requirements.txt
# Run the application
python run_safe_pdf.py
```
## How to Use
1. **Select Operation** - Choose what you want to do (compress, split, merge, etc.)
2. **Select File** - Drag & drop your PDF or click to browse
3. **Adjust Settings** - Configure operation-specific options
4. **Execute** - Click to process your file
5. **View Results** - See output and open the processed file
## Development
### Project Structure
```
SafePDF/
├── SafePDF/
│ ├── ctrl/ # Controllers
│ ├── ui/ # User interface
│ ├── ops/ # PDF operations
│ ├── logger/ # Logging
│ └── text/ # Localization files
├── run_safe_pdf.py # Main launcher
└── requirements.txt # Dependencies
```
### Contributing
Contributions are welcome! Please:
- Report bugs via [Issues](https://github.com/mcagriaksoy/SafePDF/issues)
- Submit pull requests for improvements
- Follow existing code style
## Support
- Email: [info@safepdf.de](mailto:info@safepdf.de)
- Report Issues: [GitHub Issues](https://github.com/mcagriaksoy/SafePDF/issues)
- Documentation: [safepdf.de](https://safepdf.de)
## License
Released under [GPL-3.0](/LICENSE) by [@mcagriaksoy](https://github.com/mcagriaksoy).
## Support the Project
<a href="https://www.buymeacoffee.com/mcagriaksoy">
<img src="https://img.shields.io/badge/-buy_me_a%C2%A0coffee-gray?logo=buy-me-a-coffee" alt="Buy Me A Coffee">
</a>
---
Made for privacy-conscious users
| text/markdown | Mehmet Cagri Aksoy | info@safepdf.de | null | null | null | null | [
"Development Status :: 4 - Beta",
"Intended Audience :: End Users/Desktop",
"License :: OSI Approved :: GNU General Public License v3 (GPLv3)",
"Operating System :: OS Independent",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.14"
] | [] | https://github.com/mcagriaksoy/safepdf | null | >=3.8 | [] | [] | [] | [
"PyPDF2>=3.0.0",
"Pillow>=9.0.0",
"pypdfium2>=4.0.0",
"tkinterdnd2>=0.3.0",
"python-docx>=0.8.11",
"requests>=2.25.0",
"PyGitHub>=1.55.0",
"python-gnupg>=0.4.8"
] | [] | [] | [] | [] | twine/6.2.0 CPython/3.13.3 | 2026-02-20T11:01:12.502981 | safepdf-1.0.12.tar.gz | 101,740 | d9/0c/c0e6323dd5e79ab8d49a9c4bf418620da2c6cb206ed41de770cb9db7b0b1/safepdf-1.0.12.tar.gz | source | sdist | null | false | d2aeddc610bb78da30d459ec3a6e7da5 | dd905a83d0a1d9b3d8aad0365dc5e0d05456e9e569a02f265577fdb18461c8dd | d90cc0e6323dd5e79ab8d49a9c4bf418620da2c6cb206ed41de770cb9db7b0b1 | null | [
"LICENSE"
] | 0 |
2.3 | flux-networking-shared | 0.4.8 | Shared networking utilities for Flux daemon and TUI | # flux-networking-shared
Shared networking utilities for Flux daemon and TUI.
This package contains platform-independent networking code that can be used by both the flux-configd daemon and the flux_iso_networking TUI package.
## Components
- `UpnpQuerier` - UPnP port mapping discovery
- `NetworkObserver` - Network interface change observation (via probert)
- `IcmpPacketSender` - Raw ICMP ping utilities
- `SystemdConfigParser` - Parse systemd network configurations
- Network models (Route, NetworkInterface, FluxShapingPolicy, etc.)
## Platform Support
- **Linux**: Full functionality with real probert dependency
- **macOS**: Development support with stub probert implementation
| text/markdown | David White | David White <david@runonflux.io> | null | null | null | null | [] | [] | null | null | >=3.12 | [] | [] | [] | [
"aiofiles<26,>=25.1.0",
"pyyaml<7,>=6.0.3",
"textual<7,>=6.11.0",
"yarl<2,>=1.22.0",
"aiohttp<4,>=3.13.2; extra == \"backend\"",
"miniupnpc<3,>=2.3.3; extra == \"backend\"",
"flux-probert>=0.0.18; sys_platform == \"linux\" and extra == \"backend\"",
"pyroute2<1,>=0.9.2; sys_platform == \"linux\" and extra == \"backend\""
] | [] | [] | [] | [] | uv/0.6.3 | 2026-02-20T11:00:59.897253 | flux_networking_shared-0.4.8.tar.gz | 45,901 | f2/25/c6a10c6146cf4645a079bc7e8c62b4dbe28a08fc05227501c63cab531881/flux_networking_shared-0.4.8.tar.gz | source | sdist | null | false | 8d66dc51ed985643e3a6b86fff2978ba | 2c000f8cf1ea4649de2fec2c8a1cd6fb5eb58fb7af1ec864d986eb96fb323603 | f225c6a10c6146cf4645a079bc7e8c62b4dbe28a08fc05227501c63cab531881 | null | [] | 231 |
2.4 | keys-vals | 0.1.0 | Efficient Inference, Fine-tuning and Key-Value Caching on top of LitGPT | # KeysAndValues: Efficient Language Model Inference, Fine-tuning, and Key-value Caching
This library provides implementations of advanced key-value caching for
efficient long context inference and fine-tuning with large language models.
It sits on top of [LitGPT](https://github.com/Lightning-AI/litgpt/tree/main).
The library is primarily intended for research and evaluation. Using it as part
of a production system will require substantial extra efforts.
## Getting Started
We depend on `LitGPT` and inherit its dependencies. Depending on what you plan
to do, you can:
* Install `LitGPT` via `pip`: In case you do not plan to modify `LitGPT` code.
* Install `LitGPT` from source: In case your project includes modifying `LitGPT`
as well. If you are not sure, choose this path.
### Install `LitGPT` via `pip`
It is best to create a virtual environment:
```bash
git clone git@github.com:awslabs/keys_values.git
python3 -m venv keyval_venv
. keyval_venv/bin/activate
pip install --upgrade pip
pip install 'litgpt[all,test,extra]'
cd keys_values
pip install -e .
```
Run the tests in order to check whether the installation worked:
```bash
pytest test/
```
### Install `LitGPT` from source
First, install `LitGPT` from source:
```bash
git clone git@github.com:Lightning-AI/litgpt.git
cd litgpt
git checkout main
```
If you plan to modify their code beyond simple changes, it may be better to create
a fork. Next, you need to create a virtual environment:
```bash
python3 -m venv keyval_venv
. keyval_venv/bin/activate
pip install --upgrade pip
cd ${LITGPT_PATH}
pip install -e .[all,test,extra]
cd ${KEYS_VALUES_PATH}
pip install -e .
```
Here, replace `${LITGPT_PATH}` with the source path of `LitGPT` and
`${KEYS_VALUES_PATH}` with the source path of `keys_values`.
Run the tests in order to check whether the installation worked:
```bash
cd ${KEYS_VALUES_PATH}
pytest test/
```
## Example: Long Context Fine-tuning on LongBench V2
This example runs on a single `Nvidia A 100` GPU with 40 GB of RAM.
```bash
cd ${KEYS_VALUES_PATH}
python3 keys_values/__main__.py finetune_long_lora Qwen/Qwen2.5-0.5B --out_dir /home/ubuntu/out/finetune/longcontext_lora --data LongBenchV2 --data.max_seq_length 100000 --data.metadata_dir /home/ubuntu/out/finetune/longcontext_lora/data --head_model seq_classification_on_logits --precision bf16-true --verbose some --kv_cache.name h2o-default --kv_cache.cache_length 16384 --kv_cache.chunk_size 1024 --train.save_interval 10 --train.micro_batch_size 4 --train.global_batch_size 4 --eval.interval 10
```
What is happening here?
* `finetune_long_lora`: Default fine-tuning script for `LoRA`
* `--data LongBenchV2`: Using the `LongBenchV2` benchmark with its data loaders.
`--data.max_seq_length 100000` filters for sequences less than 100k tokens.
`--data.metadata_dir` stores metadata information about the dataset, so this
filtering runs much faster next time.
* `--head_model seq_classification_on_logits` selects head model and loss
function. The benchmark task is 4-way classification, each class represented
by a single letter. This loss function reduces the logits to these 4 tokens.
This is much like asking the model to output a single letter, but only allowing
for valid class labels.
* `--kv_cache.name h2o-default` selects the KV cache policy (`h2o`) and its
buffer strategy (`default` -- no quantization). `--kv_cache.cache_length` sets
the cache length (number of slots). Inference on sequences of at most this length
is done exactly with a single forward pass. `--kv_cache.chunk_size` sets the
chunk size. Sequences are processed in chunks of size
`cache_length, chunk_size, chunk_size, ...`; the first chunk is called the prefill
chunk (see the sketch after this list).
* `--train.micro_batch_size` sets the batch size for forward and backward
computations. `--train.global_batch_size` can be a multiple of the former, in
which case we use gradient averaging.
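To make the chunking scheme concrete, here is a small self-contained sketch (this helper is not part of the library; it only illustrates how a sequence is split into a prefill chunk followed by fixed-size chunks):
```python
def chunk_boundaries(seq_len: int, cache_length: int, chunk_size: int) -> list[tuple[int, int]]:
    """Token ranges processed per forward pass: one prefill chunk, then fixed-size chunks."""
    bounds = [(0, min(cache_length, seq_len))]  # prefill chunk
    while bounds[-1][1] < seq_len:
        start = bounds[-1][1]
        bounds.append((start, min(start + chunk_size, seq_len)))
    return bounds

# With the settings from the command above, a 100k-token sequence is processed as
# (0, 16384), (16384, 17408), (17408, 18432), ...
print(chunk_boundaries(seq_len=100_000, cache_length=16_384, chunk_size=1_024))
```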
If you use an AWS `p4d.24xlarge` instance, you can use 8 A100 GPUs in parallel.
At present, we support data parallelism via
[Lightning Fabric](https://lightning.ai/docs/fabric/stable/). Modifying the
CLI command above as follows runs training with an effective batch size of 32:
```bash
cd ${KEYS_VALUES_PATH}
python3 keys_values/__main__.py finetune_long_lora Qwen/Qwen2.5-0.5B --out_dir /home/ubuntu/out/finetune/longcontext_lora --devices 8 --data LongBenchV2 --data.max_seq_length 100000 --data.metadata_dir /home/ubuntu/out/finetune/longcontext_lora/data --head_model seq_classification_on_logits --precision bf16-true --verbose some --kv_cache.name h2o-default --kv_cache.cache_length 16384 --kv_cache.chunk_size 1024 --train.save_interval 10 --train.micro_batch_size 4 --train.global_batch_size 32 --eval.interval 10
```
Here, `--devices 8 --train.micro_batch_size 4 --train.global_batch_size 32` sets the
effective batch size to 32, the per-device batch size to 4, and asks to use 8
devices.
### What's Next?
* Try increasing `kv_cache.cache_length` and `kv_cache.chunk_size`. They have
the [largest impact on speed and accuracy](#cache-length-and-chunk-size).
* Play around with different [cache policies](#kv-cache-policy-and-configuration),
or try to use buffer quantization (both by `kv_cache.name`).
* Try using `finetune_offload_lora` instead of `finetune_long_lora`; this will
free up more memory for the backward pass, allowing you to explore options
like `grad.layers_per_cell` and `grad.chunks_per_cell_multiplier`. Larger
values speed up computations, but require more GPU memory. Or try
`finetune_offload_full` to fine-tune all model parameters.
* Your KV cache policy is not supported? Why not implement and
[contribute it back](#implementing-new-kv-cache-policies) to the community?
* You know how to implement GPU kernels in `CUDA` or `Triton` and would like to
help speeding up inference and fine-tuning with advanced cache policies?
Your help would be very welcome! Please [read this](#scaled-dot-product-attention).
## Long Context Inference
The library supports inference in the same rudimentary way as `LitGPT`, but
for contexts of essentially arbitrary length. The code in `generate/base` can
be used in the same way as the original `LitGPT` code. We integrate with
PyTorch `flex_attention` for fast scaled dot product attention (SDPA).
Having said that, we are aware that this is not competitive with leading
inference libraries, such as [vLLM](https://github.com/vllm-project/vllm) or
[SGLang](https://github.com/sgl-project/sglang). Our library lacks support
for multi-device strategies (context parallelism in particular) as well as
many crucial optimizations.
We are actively working towards supporting multi-device fine-tuning in a better
way than what we currently have. As for inference, neither vLLM nor SGLang
support advanced selective KV cache policies in more than an ad hoc fashion. If
you want long contexts, you need to provide many GPUs (and cannot use them to
increase batch size). A good strategy would be to try and integrate our KV cache
abstractions and basic implementations there, but rely on their advanced scaled
dot product attention (SDPA) kernels and multi-device low level code.
If you are motivated to work on such an integration, please do get in touch
(see [CONTRIBUTING.md](./CONTRIBUTING.md))! We would love to support users
being able to run inference with long contexts without having to spend a lot
of money on many GPUs, and we think that advanced selective KV cache policies
are an important factor for achieving this goal.
A script for evaluating fine-tuned models on long context test data is provided
in [finetune/longcontext_eval.py](./keys_values/finetune/longcontext_eval.py).
## Long Context Fine-tuning
A major distinguishing factor of this library is its support of long context
fine-tuning. Importantly, we fine-tune a model with a particular KV cache
policy in place. Existing solutions for long context fine-tuning either
restrict the model to a different architecture or store the key-value information
exactly, distributed across several GPU devices (this is called *context
parallelism* or *RingAttention*).
Context parallelism is a good choice if you have the required GPUs (you cannot
use them to achieve larger batch size then), and if you also require exact KV
caching across multiple GPU at inference time. However, if you like to use
advanced selective KV caching during inference (such as H2O), maybe on a single
device only, it may not be a good idea to use context parallelism for fine-tuning,
because this is not aware of the cache restrictions put in place during
inference. In contrast, the techniques provided here compute gradients with your
KV cache policy in place, which allows the model to adapt to it.
The following fine-tuning modes are currently provided:
* [finetune_long_lora](./keys_values/finetune/longcontext_lora.py): Fine-tune
parameters of LoRA adapters. Supports distributed data parallelism.
* [finetune_long_full](./keys_values/finetune/longcontext_full.py): Fine-tune
all model parameters. Supports distributed data parallelism. This is not a
good choice with `Adam` optimization, because the optimizer state is too large
to fit into GPU memory (this is independent of context lengths). Unfortunately,
our gradient computation clashes with assumptions made in `PyTorch
distributed`, so you cannot easily use fully sharded data parallel.
* [finetune_offload_lora](./keys_values/finetune/longcon_offload_lora.py):
Fine-tune parameters of LoRA adapters, using CPU offloading. Supports
distributed data parallelism. We keep model weights and optimizer state on
the CPU, running forward and backward on copies on the GPU. The backward
pass uses model shards, which frees up GPU memory which can be used to speed
up computations. This is the best choice for exploring our method for larger
models on GPUs with 40 GB of RAM or less.
* [finetune_offload_full](./keys_values/finetune/longcon_offload_full.py):
Fine-tune all model parameters, using CPU offloading. Supports distributed
data parallelism. Use this to explore full weights fine-tuning with `Adam`
optimizers.
They mostly share the same command line arguments, which are detailed in the
sequel.
### Basic Arguments
The scripts are called as follows:
```bash
python3 keys_values/__main__.py {mode} {model} [{command line args}]
```
Here, `mode` is the fine-tuning mode (`finetune_long_lora`, `finetune_long_full`,
`finetune_offload_lora`, `finetune_offload_full`), and `model` is the Hugging Face model name (for example,
`Qwen/Qwen2.5-0.5B` selects the 0.5B parameter version of Qwen 2.5). You can also
put a checkpoint path here. The Hugging Face model must be supported by `LitGPT`,
the default configuration is taken from there.
Basic arguments are:
* `precision`: Precision to be used for weights. The same is used for KV cache
buffers.
* `devices`: Number of GPU devices to be used. Defaults to 1. If `devices > 1`,
distributed data parallel optimization is run.
* `verbose`: Verbosity level, can be "none", "some", "more", "all".
* `train.*`: Parameters controlling training. This is taken from `LitGPT` without
modification. Most important ones:
- `train.micro_batch_size`: Batch size for individual computations on single
device.
- `train.global_batch_size`: Not for `finetune_offload_*`. Batch size used
for optimizer updates. Must be multiple of `train.micro_batch_size`. If
`train.global_batch_size == train.micro_batch_size * devices`, this is
distributed data parallel. For `finetune_offload_*`, this value is set
automatically.
- `train.save_interval`: Number of optimizer steps between saving checkpoints.
* `eval.*`: Parameters controlling evaluations on validation set. Taken from
`LitGPT` with little modification. Most important ones:
- `eval.interval`: Number of optimizer steps between evaluations.
- `eval.initial_validation`: Run validation before training starts? If this
is `False`, we run validation on two cases just to check whether things
break.
- `eval.final_validation`: Run validation after end of training?
- `eval.micro_batch_size`: Local batch size to be used for validation. Overrides
`train.micro_batch_size`. This can often be larger, because evaluation needs
less GPU memory than training.
* `lora.*`: Only for `finetune_long_lora`, `finetune_offload_lora` modes.
Controls LoRA parameterization of base model. This is taken from `LitGPT`
without modification. Most important ones:
- `lora.r`: Rank of LoRA parameterization. One axis of LoRA parameters have
this size.
- `lora.alpha`: This parameter is needed for scaling updates as `alpha / r`.
"This scaling helps to reduce the need to retune hyperparameters when we
vary r", see [Section 4.1](https://arxiv.org/pdf/2106.09685.pdf).
- `lora.dropout`: Dropout applied to input in the LoRA branch (before
multiplying with matrix `A`)
- `lora.query`: Apply LoRA to linear map to `query`?
- `lora.key`: Apply LoRA to linear map to `key`?
- `lora.value`: Apply LoRA to linear map to `value`?
- `lora.projection`: Apply LoRA to linear projection at end of multi-head
self attention?
- `lora.mlp`: Apply LoRA to linear maps of feed-forward network?
- `lora.head`: Apply LoRA to linear map to logits in the head?
### Dataset and Loss Function
These arguments select the dataset for training and evaluation, as well as the
loss function and head model to be used. We inherit dataset management from
`LitGPT`, in that a subclass of `litgpt.data.DataModule` needs to be provided.
An example is given by [data.LongBenchV2](./keys_values/data/longbench_v2.py#L127).
All `DataModule` subclasses imported in the script file can be chosen by `--data`.
Moreover, `--data.*` is used to set constructor parameters for the dataset.
Relevant arguments for `LongBenchV2` (which is the default dataset):
* `data.max_seq_length`: If given, we filter sequences to have token length
less or equal this limit. The remaining data is split into training and
validation sets.
* `data.metadata_dir`: If given, we store meta data into this directory. In
particular, we tokenize all sequences and determine their token lengths, so
that filtering runs much faster in the next call, independent of the value
of `data.max_seq_length`.
* `data.val_split_fraction`: The fraction of the dataset to use for the
validation dataset. The rest is used for training.
* `data.trainloader_longest_first`: If `True`, the training dataloader returns
the longest sequences in the first batch. This is useful in order to detect
out of memory errors early.
* `data.trainloader_shortest_first`: If `True`, the training dataloader returns
the shortest sequences in the first batch. This can be useful for debugging.
* `data.num_workers`, `data.pin_memory`: Arguments passed to
`torch.utils.data.DataLoader`.
* `data.test_set_tag`: If this is given, we also maintain a test dataset and
serve a test dataloader. The tag determines how the test set is chosen. Current
choices:
- "rest": All cases with sequence length > `data.max_seq_length`, sorted by
token sequence length (non-decreasing).
> When implementing a new `DataModule` for your dataset, we strongly recommend
> adopting [SimilarSequenceLengthIterable](./keys_values/data/iterators.py#L172)
> as `sampler` for the `DataLoader` objects returned by `train_dataloader` and
> `val_dataloader` (as well as `test_dataloader` if this is provided). This
> requires the sequence lengths (in tokens) for all data cases, which you need
> to compute when the dataset is first loaded. Since this takes time, we recommend
> storing these lengths as metadata. See `LongBenchV2` for a complete example.
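As a rough illustration of this recommendation, the sketch below groups data cases of similar token length into batches, so padding inside each batch stays small. It is a simplification with made-up numbers; the actual `SimilarSequenceLengthIterable` has its own interface and options.
```python
# Simplified illustration (not the library's SimilarSequenceLengthIterable):
# sort case indices by precomputed token length and cut them into batches of
# similar length, so padding inside each batch stays small.
def similar_length_batches(token_lengths, batch_size):
    order = sorted(range(len(token_lengths)), key=lambda i: token_lengths[i])
    return [order[i:i + batch_size] for i in range(0, len(order), batch_size)]

# `token_lengths` would be the lengths you precompute and store as metadata:
token_lengths = [1200, 350, 4100, 380, 1150, 4000]
print(similar_length_batches(token_lengths, batch_size=2))
# [[1, 3], [4, 0], [5, 2]] -- these index lists can be fed to a DataLoader
# via its `batch_sampler` argument.
```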
Training loss function and head model are represented by
[HeadModel](./keys_values/head_model.py#L24). In general, the LLM outputs a logits
tensor over the vocabulary, which the head model maps to a loss function value,
given a targets tensor as well. Head models support chunk-wise evaluation in
order to limit the amount of memory needed. The main method is
```python
def forward(
self,
model_outputs: torch.Tensor,
targets: Optional[torch.Tensor],
input_pos: int,
) -> torch.Tensor:
```
* `model_outputs`: `(batch_size, chunk_size, config.padded_vocab_size)` or
`(batch_size, chunk_size, config.n_embd)`. Outputs of the LLM for input
batch of shape `(batch_size, chunk_size)`.
* `targets`: `(batch_size, target_size)` or `None`, where
`target_size <= chunk_size`. If shorter, they align with `model_outputs`
on the right. If `None`, the model outputs are processed only (part of
input prompt).
* `input_pos`: Position in total sequence. Starts with `input_pos=0`. Must
be increased by `chunk_size` afterwards. This is not done by the `HeadModel`.
This is called sequentially over chunks, from left to right, and `input_pos=0`
starts a new batch. While most loss functions are just additive, some have a
state which allows for other aggregation modes over chunks. For some loss
functions, `targets` is passed with the final chunk only. If a loss function
is normalized over the number of targets, the
[HeadModel.num_target_entries](./keys_values/head_model.py#L73) method is used
in order to determine the normalization constants for each part.
For head models which operate on top of logits outputs, the
[HeadModel.needs_logits](./keys_values/head_model.py#L35) method returns `True`.
If this returns `False`, the head model operates on top of final layer outputs,
so the LLM skips the final linear map to logits.
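To make the chunk-wise calling convention concrete, here is a minimal sketch of accumulating the loss over chunks. The names `llm`, `head_model`, `chunks`, and `targets` are placeholders, and passing `targets` only with the final chunk is just one of the modes described above.
```python
# Minimal sketch of chunk-wise head model evaluation (placeholder names):
# chunks are processed left to right, input_pos=0 starts a new batch, and the
# caller advances input_pos by the chunk length after each call.
import torch

def chunked_loss(llm, head_model, chunks, targets):
    total = torch.zeros(())
    input_pos = 0
    for i, chunk in enumerate(chunks):                   # chunk: (batch_size, chunk_size)
        outputs = llm(chunk, input_pos=input_pos)        # logits or last-layer outputs
        tgt = targets if i == len(chunks) - 1 else None  # targets with final chunk only
        loss = head_model(outputs, tgt, input_pos)
        if tgt is not None:
            total = total + loss                         # additive loss functions
        input_pos += chunk.shape[1]
    return total
```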
The following head models are currently supported:
* `--head_model next_token_prediction`:
[CrossEntropyOnLogits](./keys_values/head_model.py#L132). Cross-entropy loss
on target tokens. Needs logits. `targets` can be shorter than `model_outputs`,
in which case they are aligned on the right. The current implementation only
supports this specific type of masking.<br>
For next-token prediction, ensure that the inputs to the LLM and the targets
are based on the same sequences, but shifted by one token position.
* `--head_model seq_classification_on_logits`:
[SequenceClassificationOnLogits](./keys_values/head_model.py#L222). Works for
multi-way classification. Needs logits. The label of each class must be
represented by a single token. The logits output by the LLM are restricted to
the class label tokens, then cross-entropy loss is applied. For example,
`LongBenchV2` is 4-way classification with class labels `A`, `B`, `C`, `D`.
The logits for these 4 tokens are selected and fed into the cross-entropy
loss.<br>
`targets.shape[1] == 1` for the last chunk (single token), `targets=None` for
the other chunks. This is simpler for the model to learn than using
`--head_model next_token_prediction` with classification targets, because
the model cannot output anything other than the class labels (see the sketch
after this list).
* `--head_model seq_classification`:
[SequenceClassification](./keys_values/head_model.py#L310). Works for
multi-way classification. Does not need logits. Here, the head model
contains a linear map from last layer outputs to logits over class labels,
whose weights are fine-tuned alongside LLM weights (in return, the final
linear map in the LLM is not trained). For example, `LongBenchV2` is 4-way
classification with class labels `A`, `B`, `C`, `D`, so the linear map in the
head model is given by `torch.nn.Linear(config.n_embd, 4, bias=True)`.
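For `--head_model seq_classification_on_logits`, the restriction of the logits to the class-label tokens can be pictured as below. The token ids are made up; in practice they come from the tokenizer, and this is only a toy picture, not the library's implementation.
```python
# Toy illustration of restricting logits to class-label tokens before
# cross-entropy (hypothetical token ids; not the library's implementation).
import torch
import torch.nn.functional as F

label_token_ids = torch.tensor([362, 426, 356, 423])  # ids for "A", "B", "C", "D"
logits = torch.randn(8, 1, 32_000)                    # (batch, 1, vocab): final chunk, one target token
targets = torch.randint(0, 4, (8,))                   # class index per example

class_logits = logits[:, -1, :].index_select(-1, label_token_ids)  # (batch, 4)
loss = F.cross_entropy(class_logits, targets)
print(loss.item())
```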
### KV Cache Policy and Configuration
For more details on our KV cache abstractions, please study the docstrings in
the codebase. We are preparing a comprehensive technical report on all novelties
implemented here.
A KV cache can be thought of as being represented by these variables:
```python
{
"keys": torch.Tensor(batch_size, n_query_groups, cache_length, head_size),
"values": torch.Tensor(batch_size, n_query_groups, cache_length, head_size),
"token_pos": torch.Tensor(batch_size, n_query_groups, cache_length),
}
```
It has up to `cache_length` slots, where key-value information can be stored.
Each slot provides an array of shape `(batch_size, n_query_groups, head_size)`,
in that every batch dimension and query group has its own key and value vectors.
We cannot say whether a token (position) is in the cache or not: it may be in the
cache for some `(b, h)`, but not for others. Also, `token_pos[b, h, j]` is the
token position (in the complete sequence batch) for which `keys[b, h, j, :]`,
`values[b, h, j, :]` stores KV information. This is important for book-keeping,
but also to create the causal attention masks for multi-head self attention.
In other words, we do not maintain keys and values as block-sparse tensors, but
as standard dense tensors: this is simple and allows us to use normal `PyTorch`
operators. `token_pos` matters only when creating attention masks. Moreover,
we use `torch.gather` to extract information for slots, and `torch.scatter`
to write information for new tokens into the cache.
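To make the slot layout concrete, here is a small standalone illustration: new entries for a chunk are written into their slots with `torch.scatter` and read back with `torch.gather`. It mimics the tensor shapes above but is not the library's implementation; in particular, which slots are overwritten would come from the eviction policy.
```python
# Conceptual sketch of the slot layout (not the library implementation).
import torch

batch_size, n_query_groups, cache_length, head_size = 2, 4, 8, 16
chunk_size = 3

keys = torch.zeros(batch_size, n_query_groups, cache_length, head_size)
token_pos = torch.zeros(batch_size, n_query_groups, cache_length, dtype=torch.long)

# New KV entries for a chunk of `chunk_size` tokens, together with the cache
# slots they overwrite (in the library, slot choices come from the policy):
new_keys = torch.randn(batch_size, n_query_groups, chunk_size, head_size)
new_pos = torch.arange(10, 10 + chunk_size)                       # their token positions
slots = torch.tensor([0, 3, 5]).expand(batch_size, n_query_groups, chunk_size)

# Write the new entries into their slots (the same pattern applies to `values`):
index = slots.unsqueeze(-1).expand(-1, -1, -1, head_size)
keys.scatter_(2, index, new_keys)
token_pos.scatter_(2, slots, new_pos.expand_as(slots))

# Read the same slots back out with torch.gather:
read_back = torch.gather(keys, 2, index)
assert torch.equal(read_back, new_keys)
```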
For the CLI, a cache is identified by `kv_cache.name`, which can be a string
`{cname}-{bname}`, where `cname` determines the KV cache policy (i.e., which
slots are overwritten once the cache is full) and `bname` determines the buffer
strategy (i.e., how the KV information is stored). These KV cache policies are
currently supported:
* `dense`: [DenseKVCache](./keys_values/kvcache/basics.py#L296). Represents
exact KV caching, in that the KV information for all tokens is stored. Can
only be used for sequences of length up to `cache_length`.
* `lastrec`: [LastRecentlyInsertedKVCache](./keys_values/kvcache/basics.py#L478).
This cache maintains KV information for the `cache_length` most recently
inserted tokens (but see the `init_grace_tokens` argument). When the
cache is full, new information overwrites slots which have not been
overwritten for the longest time.
* `h2o`: [H2OKVCache](./keys_values/kvcache/h2o.py#L28). Implements an improved
variant of the heavy hitter oracle (H2O) strategy (for citation, see
docstring). H2O scores each `(b, h, j)` by the sum of attention weights
assigned to the KV pair since it entered the cache. The entry with the lowest
"usage" score is evicted. In a strong sense, H2O implements the least
recently used (LRU) strategy known from general caches. It requires scaled
dot product attention (SDPA) to return summed attention weights.<br>
We implement a number of simple improvements over what has been published as
H2O.
* `qh2o`: [QuantizedH2OKVCache](./keys_values/kvcache/qh2o.py#L31).
When H2O is combined with buffer quantization (which is recommended), it can
be improved by taking quantization errors into account, as has been published
in a follow-up paper (see docstring for citation).
* `h2o-vlen`: [VLengthH2OKVCache](./keys_values/kvcache/h2o.py#L334). Replaces the
H2O cumulative attention weights score with an expected value norm score,
which accounts for the length of value vectors as well. In the end, the
attention output is a linear combination of value vectors, so these lengths
should play a role. Can be used as alternative to `h2o`.
* `qh2o-vlen`: [QuantizedVLengthH2OKVCache](./keys_values/kvcache/qh2o.py#L216).
Combination of `h2o-vlen` and `qh2o`. Can be used as alternative to `qh2o`.
* `h2o-orig`: [H2OOriginalKVCache](./keys_values/kvcache/h2o.py#L482). Implements
the H2O cache policy as originally published. This has some shortcomings which
we corrected with `h2o`. This cache is for comparison purposes only; we do not
recommend using it otherwise. Use `h2o` or one of the other variants instead.
The KV cache information across all layers of a model often takes more space on
the GPU than the model weights. It therefore makes sense to compress KV
information by quantization (compression and decompression must be very fast).
This is directed by the buffer strategy, which can be combined with the KV cache
policy. Note that KV information is maintained with the same `dtype` as the model
weights, so typically `float16` or `bfloat16`. Buffer strategies are:
* `default`: [DefaultKVCacheBuffers](./keys_values/kvcache/buffers.py#L390).
Buffers are stored as is, no compression. This is fastest, but needs the most
GPU memory.
* `torch-quantized4`, `torch-quantized8`:
[TorchBasicQuantizer](./keys_values/kvcache/quantize/pytorch.py#L119). Default
`PyTorch` quantization to 4 or 8 bits. This quantizer works on CPU as well.
* `bnb-quantized4`, `bnb-quantized8`:
[BitsAndBytesQuantizer](./keys_values/kvcache/quantize/bitsandbytes.py#L48).
`bitsandbytes` quantization to 4 or 8 bits. GPU only.
With 16 bit standard `dtype`, 4 bit quantization reduces GPU memory requirements
by a factor of 4, allowing you to choose a larger `cache_length`.
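To get a rough feel for the numbers (illustrative model sizes only; this ignores `token_pos` tensors and quantization metadata), the total KV buffer size across layers and the effect of 4-bit quantization can be estimated as follows.
```python
# Back-of-the-envelope KV cache memory estimate (illustrative numbers only;
# ignores token_pos tensors and quantization metadata).
def kv_cache_gib(n_layers, batch_size, n_query_groups, cache_length, head_size, bits):
    entries = 2 * n_layers * batch_size * n_query_groups * cache_length * head_size  # keys + values
    return entries * bits / 8 / 1024**3

full = kv_cache_gib(32, 1, 8, 32_768, 128, bits=16)   # bfloat16 buffers
q4 = kv_cache_gib(32, 1, 8, 32_768, 128, bits=4)      # 4-bit quantized buffers
print(f"16-bit: {full:.2f} GiB, 4-bit: {q4:.2f} GiB")  # ~4x smaller, so cache_length can grow
```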
The most important parameters for KV caching are `kv_cache.cache_length` and
`kv_cache.chunk_size`, they are discussed [below](#cache-length-and-chunk-size).
Other important arguments can be specified as `kv_cache.cache_kwargs.*`. They
are:
* `grace_period`: Not for `dense`, `lastrec`. For a score-based cache policy, we
can define a grace period. Tokens which enter the cache at position `t` cannot
be evicted before position `t + grace_period`. A grace period makes sense
if scores are noisy while tokens have been in the cache for only a short time.
* `max_chunk_size`: Not for `dense`, `lastrec`. Limits the length
`query.shape[2]` for calls to `kv_cache.forward` except for the prefill (when
`input_pos == 0`). This is used to speed up finding the score minimizers.
* `init_grace_tokens`: Only for `lastrec`. KV information for the first
`init_grace_tokens` tokens remains in the cache.
* `keep_initial_fraction`: Not for `dense`, `lastrec`. See docstring of
[AttnWeightsKVCache](./keys_values/kvcache/attn_weights.py#L283).
* `normalize_scores`: Not for `dense`, `lastrec`. Scores are accumulated over
the time (in token positions) an entry has already been in the cache, which may
favor earlier tokens. If `normalize_scores=True`, scores are normalized by the
age of the entry.
### Cache Length and Chunk Size
The most important argument for a KV cache is `kv_cache.cache_length`, the
number of slots. Sequences with no more than this number of tokens are processed
with a single forward pass and no cache evictions. Also, the first *prefill*
chunk to be processed is typically of this size, while subsequent chunks (if
any) are smaller.
**Note**: Our code supports different KV cache lengths for each layer, but this
is not yet enabled for the CLI.
As a rule of thumb, choose the cache length as large as possible without
running out of memory. To detect out-of-memory errors early, run inference with
the longest batch first, using `--data.trainloader_longest_first True`.
The next most important parameter is `kv_cache.chunk_size`. This is not a property of
the cache (but see `max_chunk_size`), but of inference and gradient
computation. We process a batch of long sequences in chunks. The first chunk
has length close to `cache_length`, subsequent chunks are shorter,
typically of length `chunk_size`. The larger the chunk size is, the faster a
long sequence (prompt) can be processed, but there is an important catch. Once
a KV cache is full, new KV information overwrites earlier content. This is done
in chunks of `chunk_size`. Here, the larger the chunk size, the worse the
approximation to exact KV caching becomes. As an extreme case, if
`chunk_size = cache_length`, the KV cache policy is not used at all, and
inference behaves as if the sequence was split into `cache_length`-sized
chunks, which are processed independently from each other!
This means that `chunk_size` is a real hyper-parameter: it determines runtime,
but also the approximation quality, which in turn can affect overall accuracy.
Note that GPU memory requirements do not strongly depend on `chunk_size`.
Finally, if `--kv_cache.randomize_chunk_sizes True` is used, then chunk sizes
after the first are picked at random from a distribution with mean
`kv_cache.chunk_size`. The idea behind randomized chunk sizes is to ensure the
model does not adapt to a fixed chunk size.
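The following sketch shows one plausible way the chunking could play out for a long prompt: a prefill of roughly `cache_length` tokens followed by `chunk_size`-sized chunks. The library's exact schedule (and the randomized variant) may differ; this is only meant to illustrate the roles of the two parameters.
```python
# Sketch of a possible chunk schedule (assumed; the library's exact splitting,
# and randomized chunk sizes, may differ).
def chunk_lengths(seq_len, cache_length, chunk_size):
    if seq_len <= cache_length:
        return [seq_len]              # single forward pass, no evictions
    lengths = [cache_length]          # prefill close to cache_length
    remaining = seq_len - cache_length
    while remaining > 0:
        step = min(chunk_size, remaining)
        lengths.append(step)
        remaining -= step
    return lengths

print(chunk_lengths(10_000, cache_length=4096, chunk_size=512))
# [4096, 512, 512, ..., 272]
```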
### Optimizer
The most popular stochastic gradient optimizers from `PyTorch` can be selected,
and others can easily be added. Optimizer arguments are:
* `--optimizer {name}`: Choose among
[SUPPORTED_OPTIMIZERS](./keys_values/finetune/args.py#L167). Defaults to
"AdamW".
* `optimizer.learning_rate`: Base learning rate
* `optimizer.weight_decay`: Weight decay constant
* `optimizer.eps`: Eps constant
* `optimizer.momentum`: Momentum constant (if supported)
* `optimizer.dampening`: Dampening constant as part of momentum (if supported)
* `optimizer.adam_betas`: Only for `Adam` optimizers. Tuple `(beta1, beta2)`
* `optimizer.adadelta_rho`: Only for `Adadelta`
* `optimizer.rmspprop_alpha`: Only for `RMSprop`
### Multi-head Self Attention, Scaled Dot Product Attention
Key-value information supports the computation of multi-head self attention (MHA),
in the case when queries are shorter than (and aligned on the right with) keys
and values. For token generation, `query` has length 1, while for processing
a long prompt, it often has length close to `chunk_size`. In fact, our KV
cache abstraction's [KVCache.forward](./keys_values/kvcache/base.py#L197)
computes MHA in this case, where `query`, `key`, `value` correspond to *new tokens*.
For exact KV caching, `key` and `value` would be appended to the existing
buffers. In general, they overwrite slots in the cache buffers, evicting the
information for earlier tokens if the cache is full.
The typical structure of this `forward` call is implemented in
[DefaultKVCache.forward](./keys_values/kvcache/base.py#L520). After the cache
is updated, we make a `self.mha(...)` call, passing `query` along with the
full cache content for keys and values. This
[MultiHeadSelfAttention](./keys_values/attention.py#L95) abstraction computes
the *scaled dot product attention* (SDPA) inner part of MHA, after `query,
key, value` are determined and position encoded. SDPA is by far the
computationally most crucial primitive in LLM inference and is usually
represented by highly optimized SDPA kernels written in CUDA.
#### Position Encoding, YaRN
We implement `RoPE` for position encoding, essentially following `LitGPT`. In
terms of adjusting `RoPE` for sequence length, we use `YaRN`, see docstring
of [YaRNPositionEncoding](./keys_values/pos_encoding.py#L259). This can be
switched off with `--yarn_rope False`, in which case the same static RoPE
is used for all sequences. This is not recommended.
Note that KV information passed to SDPA and stored in KV caches has keys (and
queries) encoded already. This works for fine-tuning and inference with some
expected sequence length. Dynamic YaRN would adjust RoPE during inference;
this is not implemented yet. For such a use case, KV information would have to
be stored before encoding.
#### Scaled Dot Product Attention
Scaled dot product attention (SDPA) is represented by
[MultiHeadSelfAttention.__call__](./keys_values/attention.py#L209). Ideally, its
implementations are via fast kernels, such as
[torch.nn.functional.scaled_dot_product_attention](https://docs.pytorch.org/docs/stable/generated/torch.nn.functional.scaled_dot_product_attention.html) or
[torch.nn.attention.flex_attention.flex_attention](https://docs.pytorch.org/docs/stable/nn.attention.flex_attention.html#torch.nn.attention.flex_attention.FlexKernelOptions).
However, we have some special requirements:
* Some KV cache policies require attention weights on top of attention outputs
returned by SDPA. The full attention weights would be a tensor of shape
`(batch_size, n_head, q_len, kv_len)`, where `q_len = query.shape[2]`,
`kv_len = key.shape[2]`, which is much too big to maintain in memory. We
ask for attention weights summed over the query axis, shape
`(batch_size, n_head, kv_len)`, with `return_attn_weights=True`. This is
sufficient to compute H2O and other scores.
* We need the "rectangular" case, where `1 << q_len << kv_len`, not just the
"training" (or prefill) case, `q_len == kv_len`, which many SDPA kernel
developers focus on almost exclusively.
* We need implicit causal attention masking even if `key`, `value` are
reordered, as expressed by `kv_cache.token_positions`. This is the least
important requirement, since `key`, `value` can cheaply be reordered.
We are currently working actively to improve the SDPA kernel situation for this
library (and would be very happy for help, see
[CONTRIBUTING.md](./CONTRIBUTING.md)). At present, we support these kernels:
* PyTorch `flex_attention` SDPA: We use
`torch.nn.attention.flex_attention.flex_attention`, see
[keys_values/flex_attention.py](./keys_values/flex_attention.py) for details.
Use `--sdpa.flex_attention True` to activate these kernels. We support
`config.sliding_window_size` and `config.attention_logit_softcapping` with
these fast kernels. We also reorder `key`, `value` so that the new entries
(corresponding to `query`) are on the right end. Cannot return attention
weights.
* Query-padded PyTorch SDPA: We use
`torch.nn.functional.scaled_dot_product_attention`, but pad `query` with
zeroes on the left to obtain the square "training" case. We also reorder
`key`, `value` so that the new entries (corresponding to `query`) are on
the right end. Cannot return attention weights.
* Naive blockwise SDPA: We use our own implementation
[scaled_dot_product_attention_in_blocks](./keys_values/attention.py#L477).
The computation is done in blocks so that no more than `tmp_array_limit_gb`
GB of GPU memory is needed for the temporary buffers.
We ran an experiment for many different `kv_len` to determine from which
`q_len` value onwards query-padded SDPA is faster than naive SDPA. However, if
attention weights are required, we currently have to use naive SDPA even for
large `q_len`.
Note that SDPA for the initial prefill call always uses the fast PyTorch SDPA.
This is because no scores are computed then, and so attention weights are not
needed even for H2O policies.
Relevant arguments are:
* `sdpa.flex_attention`: Selects `flex_attention`. Otherwise, query-padded SDPA
is used. `sdpa.flex_mask_compile` and `sdpa.flex_extend_kv` are parameters
for `flex_attention`.
* `attention_forward_temp_size_gb`: Size limit (in GB) for temporary buffers
in naive SDPA, used in `forward` pass.
* `attention_backward_temp_size_gb`: Same size limit, but for SDPA computations
during the `backward` pass. This is discussed [below](#gradient-computation).
### Gradient Computation
For more details on how gradients are computed in the presence of KV caches
(this is a novel contribution of this library), please study the docstrings in
the codebase. We are preparing a comprehensive technical report on all novelties
implemented here.
The main difficulty of computing gradients for long context models is large
GPU memory requirements. Even if gradients are blocked for KV cache score
computations, just using `torch.autograd` is out of the question. We do not
go into full details, but our technique is a combination of several ideas:
* Splitting backward computations into cells: Think of computations as an
array, the vertical axis being the model layers, the horizontal axis being
the sequence chunks. The first column has entries of length close to
`cache_length`, remaining columns have length `chunk_size`. We tile this
array with cells. A row of cells covers up to `grad.layers_per_cell` layers,
a column of cells covers a number of chunks.
* Activation and KV cache checkpointing: We run `torch.autograd` gradient
computation on each cell. This needs inputs and head gradients for each cell.
Inputs are obtained by activation checkpointing during forward pass
(horizontal) and checkpointing KV cache buffers (vertical). Checkpoints are
stored on CPU, possibly quantized. Since KV cache buffers are much larger,
we only checkpoint them for the current row of cells.
To be precise, gradients are computed in two phases:
* Forward phase: This is what we also do for inference, with KV cache policies
in action. However, we store activation checkpoints at each cell boundary
to CPU, and we also log all KV cache eviction decisions into a so-called
*replay log*.
* Backward phase: In this phase, we use *replay caches*. These are replicas of
the original KV caches, but instead of running a policy depending on inputs,
they just replay all decisions made during the forward pass. The backward
phase moves top down over rows of cells. For each row, we first run
forward over chunks to store KV cache checkpoints on CPU. Then, we loop
backwards over cells, running `torch.autograd` to accumulate gradients.
Two more ideas are important. The larger the cells are, the faster our method
runs, because `torch.autograd` is best run as few times as possible on larger
graphs.
However, `autograd` stores tensors in its compute graph which are needed during
the backward pass, which quickly fills up GPU memory. The largest such nodes
are KV cache buffers `keys`, `values` after each cache update, of size
`(batch_size, n_query_groups, cache_length, head_size)`. However, a single
chunk update of them is represented by `torch.scatter` calls with *new* entries
of size `(batch_size, n_query_groups, chunk_size, head_size)`. It is not hard
to see that we can reconstruct the sequence of cache buffers per chunk in the
backward direction, storing nodes of the latter size in the `autograd` graph
only.
Implementing this simple idea in `PyTorch` ends up being quite challenging; see
[CellComputationAutogradHooks](./keys_values/kvcache/gradient/autograd_hooks.py#L382).
We use the [autograd saved tensors hooks](https://docs.pytorch.org/tutorials/intermediate/autograd_saved_tensors_hooks_tutorial.html)
mechanism. This has some shortcomings, which renders our code somewhat complex.
However, it is only with this mechanism that we can run our method with
non-trivial cell sizes (i.e., not one cell per layer and chunk). How large
should a cell be in the horizontal direction? We argue that the sum of chunk
lengths for a cell should be approximately `cache_length`. With this convention,
the size of tensors stored in the `autograd` graph scales with `cache_length`
rather than `chunk_size`, so becomes comparable to KV cache size.
Second, when using `torch.nn.functional.scaled_dot_product_attention` as
operator, we find that this creates several large arrays in the `autograd` graph.
To get around this, we implemented our own `PyTorch` operator
[KVCacheScatterUpdateAndSDPAFunction](./keys_values/kvcache/gradient/sdpa_op.py#L474)
for SDPA fused with a `torch.scatter` KV cache update. Its `backward` requires naive
blockwise SDPA. We are working on a CUDA version for this fused SDPA operator,
which will speed up computations without sacrificing memory efficiency (like
PyTorch SDPA does).
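A minimal sketch of the cell-length convention follows, using the `max_cell_length` formula of `grad.chunks_per_cell_multiplier` described below (details, such as how an over-long prefill chunk is handled, are assumed): chunks are grouped left to right until adding another one would exceed the limit.
```python
# Sketch of grouping chunks into cells (assumed details; see the
# grad.chunks_per_cell_multiplier argument below for the formula).
def group_chunks_into_cells(chunk_lengths, n_query_groups, head_size, n_embd,
                            cache_length, multiplier=1.0):
    factor = 2 * n_query_groups * head_size / n_embd
    max_cell_length = int(factor * cache_length * multiplier)
    cells, current, current_len = [], [], 0
    for length in chunk_lengths:
        if current and current_len + length > max_cell_length:
            cells.append(current)
            current, current_len = [], 0
        current.append(length)
        current_len += length
    if current:
        cells.append(current)
    return cells

# Example: 8 query groups, head_size 128, n_embd 4096 -> factor = 0.5, limit 2048
print(group_chunks_into_cells([4096, 512, 512, 512, 512], 8, 128, 4096, cache_length=4096))
# [[4096], [512, 512, 512, 512]]
```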
Important arguments for gradient computations are:
* `--grad.layers_per_cell`: Second phase GPU memory requirements depend
linearly on this number. It states how many layers are processed in a cell.
The default is 1. Larger values mean less sequential processing, so faster
computation. Note that the CPU memory for layer input checkpoints scales
inverse linearly with this number.
* `--grad.chunks_per_cell_multiplier`: The length of a cell is the sum of
its chunks' lengths. With `max_cell_length = int(factor * kv_cache.cache_length *
grad.chunks_per_cell_multiplier)`, chunks are grouped into a cell until
its length is close to `max_cell_length`, but not larger. Here,
`factor = 2 * n_query_groups * head_size / n_embd`. By default,
`grad.chunks_per_cell_multiplier = 1`, so that embeddings for a cell need as
much memory as the (uncompressed) KV cache buffers (these two being the
main memory blocks needed). For larger values of the mu | text/markdown | null | Matthias Seeger <mseeger@gmail.com> | null | null | Apache License
Version 2.0, January 2004
http://www.apache.org/licenses/
TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
1. Definitions.
"License" shall mean the terms and conditions for use, reproduction,
and distribution as defined by Sections 1 through 9 of this document.
"Licensor" shall mean the copyright owner or entity authorized by
the copyright owner that is granting the License.
"Legal Entity" shall mean the union of the acting entity and all
other entities that control, are controlled by, or are under common
control with that entity. For the purposes of this definition,
"control" means (i) the power, direct or indirect, to cause the
direction or management of such entity, whether by contract or
otherwise, or (ii) ownership of fifty percent (50%) or more of the
outstanding shares, or (iii) beneficial ownership of such entity.
"You" (or "Your") shall mean an individual or Legal Entity
exercising permissions granted by this License.
"Source" form shall mean the preferred form for making modifications,
including but not limited to software source code, documentation
source, and configuration files.
"Object" form shall mean any form resulting from mechanical
transformation or translation of a Source form, including but
not limited to compiled object code, generated documentation,
and conversions to other media types.
"Work" shall mean the work of authorship, whether in Source or
Object form, made available under the License, as indicated by a
copyright notice that is included in or attached to the work
(an example is provided in the Appendix below).
"Derivative Works" shall mean any work, whether in Source or Object
form, that is based on (or derived from) the Work and for which the
editorial revisions, annotations, elaborations, or other modifications
represent, as a whole, an original work of authorship. For the purposes
of this License, Derivative Works shall not include works that remain
separable from, or merely link (or bind by name) to the interfaces of,
the Work and Derivative Works thereof.
"Contribution" shall mean any work of authorship, including
the original version of the Work and any modifications or additions
to that Work or Derivative Works thereof, that is intentionally
submitted to Licensor for inclusion in the Work by the copyright owner
or by an individual or Legal Entity authorized to submit on behalf of
the copyright owner. For the purposes of this definition, "submitted"
means any form of electronic, verbal, or written communication sent
to the Licensor or its representatives, including but not limited to
communication on electronic mailing lists, source code control systems,
and issue tracking systems that are managed by, or on behalf of, the
Licensor for the purpose of discussing and improving the Work, but
excluding communication that is conspicuously marked or otherwise
designated in writing by the copyright owner as "Not a Contribution."
"Contributor" shall mean Licensor and any individual or Legal Entity
on behalf of whom a Contribution has been received by Licensor and
subsequently incorporated within the Work.
2. Grant of Copyright License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
copyright license to reproduce, prepare Derivative Works of,
publicly display, publicly perform, sublicense, and distribute the
Work and such Derivative Works in Source or Object form.
3. Grant of Patent License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
(except as stated in this section) patent license to make, have made,
use, offer to sell, sell, import, and otherwise transfer the Work,
where such license applies only to those patent claims licensable
by such Contributor that are necessarily infringed by their
Contribution(s) alone or by combination of their Contribution(s)
with the Work to which such Contribution(s) was submitted. If You
institute patent litigation against any entity (including a
cross-claim or counterclaim in a lawsuit) alleging that the Work
or a Contribution incorporated within the Work constitutes direct
or contributory patent infringement, then any patent licenses
granted to You under this License for that Work shall terminate
as of the date such litigation is filed.
4. Redistribution. You may reproduce and distribute copies of the
Work or Derivative Works thereof in any medium, with or without
modifications, and in Source or Object form, provided that You
meet the following conditions:
(a) You must give any other recipients of the Work or
Derivative Works a copy of this License; and
(b) You must cause any modified files to carry prominent notices
stating that You changed the files; and
(c) You must retain, in the Source form of any Derivative Works
that You distribute, all copyright, patent, trademark, and
attribution notices from the Source form of the Work,
excluding those notices that do not pertain to any part of
the Derivative Works; and
(d) If the Work includes a "NOTICE" text file as part of its
distribution, then any Derivative Works that You distribute must
include a readable copy of the attribution notices contained
within such NOTICE file, excluding those notices that do not
pertain to any part of the Derivative Works, in at least one
of the following places: within a NOTICE text file distributed
as part of the Derivative Works; within the Source form or
documentation, if provided along with the Derivative Works; or,
within a display generated by the Derivative Works, if and
wherever such third-party notices normally appear. The contents
of the NOTICE file are for informational purposes only and
do not modify the License. You may add Your own attribution
notices within Derivative Works that You distribute, alongside
or as an addendum to the NOTICE text from the Work, provided
that such additional attribution notices cannot be construed
as modifying the License.
You may add Your own copyright statement to Your modifications and
may provide additional or different license terms and conditions
for use, reproduction, or distribution of Your modifications, or
for any such Derivative Works as a whole, provided Your use,
reproduction, and distribution of the Work otherwise complies with
the conditions stated in this License.
5. Submission of Contributions. Unless You explicitly state otherwise,
any Contribution intentionally submitted for inclusion in the Work
by You to the Licensor shall be under the terms and conditions of
this License, without any additional terms or conditions.
Notwithstanding the above, nothing herein shall supersede or modify
the terms of any separate license agreement you may have executed
with Licensor regarding such Contributions.
6. Trademarks. This License does not grant permission to use the trade
names, trademarks, service marks, or product names of the Licensor,
except as required for reasonable and customary use in describing the
origin of the Work and reproducing the content of the NOTICE file.
7. Disclaimer of Warranty. Unless required by applicable law or
agreed to in writing, Licensor provides the Work (and each
Contributor provides its Contributions) on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
implied, including, without limitation, any warranties or conditions
of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
PARTICULAR PURPOSE. You are solely responsible for determining the
appropriateness of using or redistributing the Work and assume any
risks associated with Your exercise of permissions under this License.
8. Limitation of Liability. In no event and under no legal theory,
whether in tort (including negligence), contract, or otherwise,
unless required by applicable law (such as deliberate and grossly
negligent acts) or agreed to in writing, shall any Contributor be
liable to You for damages, including any direct, indirect, special,
incidental, or consequential damages of any character arising as a
result of this License or out of the use or inability to use the
Work (including but not limited to damages for loss of goodwill,
work stoppage, computer failure or malfunction, or any and all
other commercial damages or losses), even if such Contributor
has been advised of the possibility of such damages.
9. Accepting Warranty or Additional Liability. While redistributing
the Work or Derivative Works thereof, You may choose to offer,
and charge a fee for, acceptance of support, warranty, indemnity,
or other liability obligations and/or rights consistent with this
License. However, in accepting such obligations, You may act only
on Your own behalf and on Your sole responsibility, not on behalf
of any other Contributor, and only if You agree to indemnify,
defend, and hold each Contributor harmless for any liability
incurred by, or claims asserted against, such Contributor by reason
of your accepting any such warranty or additional liability.
END OF TERMS AND CONDITIONS
APPENDIX: How to apply the Apache License to your work.
To apply the Apache License to your work, attach the following
boilerplate notice, with the fields enclosed by brackets "[]"
replaced with your own identifying information. (Don't include
the brackets!) The text should be enclosed in the appropriate
comment syntax for the file format. We also recommend that a
file or class name and description of purpose be included on the
same "printed page" as the copyright notice for easier
identification within third-party archives.
Copyright [yyyy] [name of copyright owner]
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
| null | [
"Development Status :: 3 - Alpha",
"Intended Audience :: Developers",
"Intended Audience :: Science/Research",
"Topic :: Scientific/Engineering :: Artificial Intelligence",
"License :: OSI Approved :: Apache Software License",
"Programming Language :: Python :: 3.8"
] | [] | null | null | >=3.8 | [] | [] | [] | [
"filelock",
"black==25.12.0",
"flake8"
] | [] | [] | [] | [
"homepage, https://github.com/awslabs/keys_values"
] | twine/6.2.0 CPython/3.14.2 | 2026-02-20T11:00:03.888887 | keys_vals-0.1.0.tar.gz | 348,321 | ec/6e/059e9edbd52777c0d3f0f1558ee87108ef692a72192c2a25d5e63b785b84/keys_vals-0.1.0.tar.gz | source | sdist | null | false | 663d6be38881e2c7a471928fd0df5db1 | 9b627010ba7a2b4f5e255f4f2016c3c47083d1d0e47c8107e53234d3928e8d10 | ec6e059e9edbd52777c0d3f0f1558ee87108ef692a72192c2a25d5e63b785b84 | null | [
"LICENSE",
"NOTICE"
] | 250 |
2.3 | hotstuff-python-sdk | 0.0.1b11 | Python SDK for interacting with Hotstuff L1 | # Hotstuff Python SDK
[](https://pypi.org/project/hotstuff-python-sdk/)
[](https://opensource.org/licenses/MIT)
[](https://pypi.org/project/hotstuff-python-sdk/)
> Python SDK for interacting with Hotstuff L1
## Table of Contents
- [Installation](#installation)
- [Quick Start](#quick-start)
- [API Clients](#api-clients)
- [InfoClient](#infoclient)
- [ExchangeClient](#exchangeclient)
- [SubscriptionClient](#subscriptionclient)
- [Transports](#transports)
- [HttpTransport](#httptransport)
- [WebSocketTransport](#websockettransport)
- [Advanced Usage](#advanced-usage)
- [Signing](#signing)
- [Error Handling](#error-handling)
- [Examples](#examples)
## Installation
### Using pip
```bash
pip install hotstuff-python-sdk
```
### Using Poetry
```bash
poetry add hotstuff-python-sdk
```
### Install from source
```bash
git clone https://github.com/hotstuff-labs/python-sdk.git
cd python-sdk
# Using Poetry (recommended)
poetry install
# Or using pip
pip install -e .
```
| text/markdown | hotstuff | null | null | null | MIT | hotstuff, trading, blockchain, defi, exchange | [
"Development Status :: 4 - Beta",
"Intended Audience :: Developers",
"License :: OSI Approved :: MIT License",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.8",
"Programming Language :: Python :: 3.9",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: 3.13",
"Topic :: Software Development :: Libraries :: Python Modules"
] | [] | https://github.com/hotstuff-labs/python-sdk | null | <4.0,>=3.8 | [] | [] | [] | [
"requests<3.0.0,>=2.31.0",
"websocket-client<2.0.0,>=1.6.0",
"eth-account<0.12.0,>=0.11.0",
"eth-utils<5.0.0,>=4.0.0",
"msgpack<2.0.0,>=1.0.0",
"web3<7.0.0,>=6.0.0"
] | [] | [] | [] | [
"Homepage, https://github.com/hotstuff-labs/python-sdk",
"Repository, https://github.com/hotstuff-labs/python-sdk"
] | poetry/2.1.3 CPython/3.10.19 Linux/6.11.0-1018-azure | 2026-02-20T10:59:05.963247 | hotstuff_python_sdk-0.0.1b11.tar.gz | 22,577 | f9/f8/859d65dec7c101fedaf5996898e15abab080d107b3a59197bd35be6daa5b/hotstuff_python_sdk-0.0.1b11.tar.gz | source | sdist | null | false | 59cedee2dca4d61970eb98f71b7db636 | f89a8abe46ed4262f06def6620c4129fc72bbb0b20314aecc4a284191095dc3d | f9f8859d65dec7c101fedaf5996898e15abab080d107b3a59197bd35be6daa5b | null | [] | 240 |
2.4 | batabyal | 1.2.0 | A lightweight Python package for Machine Learning utilities | ### Package: batabyal
---
**batabyal** is a lightweight Python package for Machine Learning utilities that provides:
- **cleaning_module** - A CSV data cleaning module
- **trainer_kit** - ML module for classification problems
### Installation
---
Use the below command in the terminal
```bash
pip install batabyal
```
### Importation
---
Import a specific item or the entire module, whatever is required
```python
from batabyal import cleaning_module as cm
from batabyal.trainer_kit import TransformedTargetClassifier, autofit_classification_model
```
### Usage
---
**1. cleaning_module:** It provides only one function `clean_csv` used for cleaning .csv datasets efficiently
```python
cm.clean_csv('filename.csv', numericData, charData, True, True)
#structure: clean_csv(file, numericData, charData, fill, case_sensitivity=False, dummies=None) -> pd.DataFrame
# If `fill==True`, NaN values in numeric columns are filled with the column mean.
# If `case_sensitivity=True`, all labelled values are lowercased.
# `dummies` is the list of values to replace with NaN before cleaning.
```
**2. trainer_kit:** It provides one wrapper class `TransformedTargetClassifier` for encoding and inversely transforming predictions to the original label and one function `autofit_classification_model` for autofitting classification models with the best algorithm and hyperparameters based on `roc_auc_ovr_weighted` score
```python
model = TransformedTargetClassifier(classifier=svc, transformer=labelEncoder)
# assume labelEncoder and svc are instances from sklearn (e.g. LabelEncoder and SVC)
# you can now use model.fit() and model.predict() with raw labelled data; encoding is handled internally for training and prediction
# model.predict() returns the original labels by inversely transforming the encoded numbers internally
result = autofit_classification_model(x, y, "numeric", 3)
#structure: autofit_classification_model(x:pd.DataFrame, y:pd.DataFrame, x_type:Literal["numeric", "categorical", "mixed"], n_splits:int, cat_features:list[str]=[], whitelisted_algorithms:list[Literal["LogisticRegression", "DecisionTree", "RandomForest", "GaussianNB", "BernoulliNB", "CategoricalNB", "CatBoost", "XGBoost", "Ripper", "SVC", "KNN"]]|Literal["auto"]="auto", enable_votingClassifier:bool=True, random_state:int|None=42, verbosity:bool=True) -> object
model = result.model #now use model.predict
score = result.score #print score
classifier = result.classifier #print classifier to know the best algorithm name that's used
convertible_model = result.convertible_model #extracts the model only (no preprocessing)
preprocessedX = result.preprocessedX #extracts the x features after preprocessing
n_features = result.n_features #returns total number of the preprocessed x features
initial_type = result.initial_type #initial type needed to convert the model to .onnx
result.export_to_onnx() #dump the model as 'model.onnx' in your current working directory
```
| text/markdown | T Batabyal | T Batabyal <tamanashbatabyal@gmail.com> | null | null | null | ML, DataScience | [
"Development Status :: 5 - Production/Stable",
"Programming Language :: Python :: 3",
"Operating System :: OS Independent"
] | [] | null | null | >=3.10 | [] | [] | [] | [
"pandas>=1.5.0",
"scikit-learn>=1.2.0",
"xgboost>=1.7.0",
"catboost>=1.2.0",
"wittgenstein>=0.3.4",
"skl2onnx>=1.15",
"onnx>=1.14",
"onnxmltools>=1.11",
"onnxruntime>=1.16",
"imodels>=1.3",
"numpy>=1.21"
] | [] | [] | [] | [] | twine/6.2.0 CPython/3.12.10 | 2026-02-20T10:58:55.103695 | batabyal-1.2.0.tar.gz | 9,112 | 02/83/7a6ea46c26eaa60347502b802eb7264d98bb7e8ef250153bae2e252b28e8/batabyal-1.2.0.tar.gz | source | sdist | null | false | 8b7dff33154a435099e641d868d4595e | 542c95402eece00e9e017b528f679fbf1280a95e0f5622fc1397bd524345a447 | 02837a6ea46c26eaa60347502b802eb7264d98bb7e8ef250153bae2e252b28e8 | MIT | [
"LICENSE"
] | 241 |
2.4 | datajunction | 0.0.69 | DataJunction client library for connecting to a DataJunction server | # DataJunction Python Client
This is a short introduction into the Python version of the DataJunction (DJ) client.
For a full comprehensive intro into the DJ functionality please check out [datajunction.io](https://datajunction.io/).
## Installation
To install:
```
pip install datajunction
```
## Intro
We have three top level client classes that help you choose the right path for your DataJunction actions.
1. `DJClient` for basic read only access to metrics, dimensions, SQL and data.
2. `DJBuilder` for those who would like to modify their DJ data model, build new nodes and/or modify the existing ones.
3. `DJAdmin` for the administrators of the system to define the connections to your data catalog and engines.
## DJ Client : Basic Access
Here you can see how to access and use the most common DataJunction features.
### Examples
To initialize the client:
```python
from datajunction import DJClient
dj = DJClient("http://localhost:8000")
```
**NOTE**
If you are running in our demo docker environment please change the above URL to "http://dj:8000".
You are now connected to your DJ service and you can start looking around. Let's see what namespaces we have in the system:
```python
dj.list_namespaces()
['default']
```
Next let's see what metrics and dimensions exist in the `default` namespace:
```python
dj.list_metrics(namespace="default")
['default.num_repair_orders',
'default.avg_repair_price',
'default.total_repair_cost',
'default.avg_length_of_employment',
'default.total_repair_order_discounts',
'default.avg_repair_order_discounts',
'default.avg_time_to_dispatch']
dj.list_dimensions(namespace="default")
['default.date_dim',
'default.repair_order',
'default.contractor',
'default.hard_hat',
'default.local_hard_hats',
'default.us_state',
'default.dispatcher',
'default.municipality_dim']
```
Now let's pick two metrics and see what dimensions they have in common:
```python
dj.common_dimensions(
metrics=["default.num_repair_orders", "default.total_repair_order_discounts"],
name_only=True
)
['default.dispatcher.company_name',
'default.dispatcher.dispatcher_id',
'default.dispatcher.phone',
'default.hard_hat.address',
'default.hard_hat.birth_date',
'default.hard_hat.city',
...
```
And finally let's ask DJ to show us some data for these metrics and some dimensions:
```python
dj.data(
metrics=["default.num_repair_orders", "default.total_repair_order_discounts"],
dimensions=["default.hard_hat.city"]
)
| default_DOT_num_repair_orders | default_DOT_total_repair_order_discounts | city |
| ----------------------------- | ---------------------------------------- | ----------- |
| 4 | 5475.110138 | Jersey City |
| 3 | 11483.300049 | Billerica |
| 5 | 6725.170074 | Southgate |
...
```
### Reference
List of all available DJ client methods:
- DJClient:
### list
- list_namespaces( prefix: Optional[str])
- list_dimensions( namespace: Optional[str])
- list_metrics( namespace: Optional[str])
- list_cubes( namespace: Optional[str])
- list_sources( namespace: Optional[str])
- list_transforms( namespace: Optional[str])
- list_nodes( namespace: Optional[str], type_: Optional[NodeType])
- list_nodes_with_tags( tag_names: List[str], node_type: Optional[NodeType])
- list_catalogs()
- list_engines()
### find
- common_dimensions( metrics: List[str], name_only: bool = False)
- common_metrics( dimensions: List[str], name_only: bool = False)
### execute
- sql( metrics: List[str],
dimensions: Optional[List[str]],
filters: Optional[List[str]],
engine_name: Optional[str],
engine_version: Optional[str])
- node_sql( node_name: str,
dimensions: Optional[List[str]],
filters: Optional[List[str]],
engine_name: Optional[str],
engine_version: Optional[str])
- data( metrics: List[str],
dimensions: Optional[List[str]],
filters: Optional[List[str]],
engine_name: Optional[str],
engine_version: Optional[str],
async_: bool = True)
- node_data( node_name: str,
dimensions: Optional[List[str]],
filters: Optional[List[str]],
engine_name: Optional[str],
engine_version: Optional[str],
async_: bool = True)
## DJ Builder : Data Modelling
In this section we'll show you few examples to modify the DJ data model and its nodes.
### Start Here
To initialize the DJ builder:
```python
from datajunction import DJBuilder
djbuilder = DJBuilder("http://localhost:8000")
```
**NOTE**
If you are running in our demo docker container please change the above URL to "http://dj:8000".
### Namespaces
To access a namespace or check if it exists you can use the same simple call:
```python
djbuilder.namespace("default")
Namespace(dj_client=..., namespace='default')
```
```python
djbuilder.namespace("foo")
[DJClientException]: Namespace `foo` does not exist.
```
To create a namespace:
```python
djbuilder.create_namespace("foo")
Namespace(dj_client=..., namespace='foo')
```
To delete (or restore) a namespace:
```python
djbuilder.delete_namespace("foo")
djbuilder.restore_namespace("foo")
```
**NOTE:**
The `cascade` parameter in both of the above methods allows for a cascading
effect applied to all underlying nodes and namespaces. Use it with caution!
### Tags
You can read existing tags as well as create new ones.
```python
djbuilder.tag(name="deprecated", description="This node has been deprecated.", tag_type="standard", tag_metadata={"contact": "Foo Bar"})
Tag(dj_client=..., name='deprecated', description='This node has been deprecated.', tag_type='standard', tag_metadata={"contact": "Foo Bar"})
```
```python
djbuilder.tag("official")
[DJClientException]: Tag `official` does not exist.
```
To create a tag:
```python
djbuilder.create_tag(name="deprecated", description="This node has been deprecated.", tag_type="standard", tag_metadata={"contact": "Foo Bar"})
Tag(dj_client=..., name="deprecated", description="This node has been deprecated.", tag_type="standard", tag_metadata={"contact": "Foo Bar"})
```
To add a tag to a node:
```python
repair_orders = djbuilder.source("default.repair_orders")
repair_orders.tags.append(djbuilder.tag("deprecated"))
repair_orders.save()
```
And to list the node names with a specific tag (or set of tags):
```python
djbuilder.list_nodes_with_tags(tag_names=["deprecated"]) # works with DJClient() as well
["default.repair_orders"]
```
### Nodes
To learn what **Node** means in the context of DJ, please check out [this datajunction.io page](https://datajunction.io/docs/0.1.0/dj-concepts/nodes/).
To list all (or some) nodes in the system you can use the `list_<node-type>()` methods described
in the **DJ Client : Basic Access** section or you can use the namespace based method:
All nodes for a given namespace can be found with:
```python
djbuilder.namespace("default").nodes()
```
Specific node types can be retrieved with:
```python
djbuilder.namespace("default").sources()
djbuilder.namespace("default").dimensions()
djbuilder.namespace("default").metrics()
djbuilder.namespace("default").transforms()
djbuilder.namespace("default").cubes()
```
To create a source node:
```python
repair_orders = djbuilder.create_source(
name="repair_orders",
display_name="Repair Orders",
description="Repair orders",
catalog="dj",
schema_="roads",
table="repair_orders",
)
```
Nodes can also be created in draft mode:
```python
repair_orders = djbuilder.create_source(
...,
mode=NodeMode.DRAFT
)
```
To create a dimension node:
```python
repair_order = djbuilder.create_dimension(
name="default.repair_order_dim",
query="""
SELECT
repair_order_id,
municipality_id,
hard_hat_id,
dispatcher_id
FROM default.repair_orders
""",
description="Repair order dimension",
primary_key=["repair_order_id"],
)
```
To create a transform node:
```python
large_revenue_payments_only = djbuilder.create_transform(
name="default.large_revenue_payments_only",
query="""
SELECT
payment_id,
payment_amount,
customer_id,
account_type
FROM default.revenue
WHERE payment_amount > 1000000
""",
description="Only large revenue payments",
)
```
To create a metric:
```python
num_repair_orders = djbuilder.create_metric(
name="default.num_repair_orders",
query="""
SELECT
count(repair_order_id)
FROM repair_orders
""",
description="Number of repair orders",
)
```
### Reference
List of all available DJ builder methods:
- DJBuilder:
### namespaces
- namespace( namespace: str)
- create_namespace( namespace: str)
- delete_namespace(self, namespace: str, cascade: bool = False)
- restore_namespace(self, namespace: str, cascade: bool = False)
### nodes
- delete_node(self, node_name: str)
- restore_node(self, node_name: str)
### nodes: source
- source(self, node_name: str)
- create_source( ..., mode: Optional[NodeMode] = NodeMode.PUBLISHED)
- register_table( catalog: str, schema: str, table: str)
- register_view( catalog: str, schema: str, view: str, query: str, replace: bool = False)
### nodes: transform
- transform(self, node_name: str)
- create_transform( ..., mode: Optional[NodeMode] = NodeMode.PUBLISHED)
### nodes: dimension
- dimension(self, node_name: str)
- create_dimension( ..., mode: Optional[NodeMode] = NodeMode.PUBLISHED)
### nodes: metric
- metric(self, node_name: str)
- create_metric( ..., mode: Optional[NodeMode] = NodeMode.PUBLISHED)
### nodes: cube
- cube(self, node_name: str)
- create_cube( ..., mode: Optional[NodeMode] = NodeMode.PUBLISHED)
## DJ System Administration
In this section we'll describe how to manage your catalog and engines.
### Start Here
To initialize the DJ admin:
```python
from datajunction import DJAdmin
djadmin = DJAdmin("http://localhost:8000")
```
**NOTE**
If you are running in our demo docker container please change the above URL to "http://dj:8000".
### Examples
To list available catalogs:
```python
djadmin.list_catalogs()
['warehouse']
```
To list available engines:
```python
djadmin.list_engines()
[{'name': 'duckdb', 'version': '0.7.1'}]
```
To create a catalog:
```python
djadmin.add_catalog(name="my-new-catalog")
```
To create a new engine:
```python
djadmin.add_engine(
name="Spark",
version="3.2.1",
uri="http:/foo",
dialect="spark"
)
```
To link an engine to a catalog:
```python
djadmin.link_engine_to_catalog(
engine="Spark", version="3.2.1", catalog="my-new-catalog"
)
```
### Reference
List of all available DJ admin methods:
- DJAdmin:
### Catalogs
- list_catalogs() # in DJClient
- get_catalog( name: str)
- add_catalog( name: str)
### Engines
- list_engines() # in DJClient
- get_engine( name: str)
- add_engine( name: str,version: str, uri: Optional[str], dialect: Optional[str])
### Together
- link_engine_to_catalog( engine_name: str, engine_version: str, catalog: str)
| text/markdown | null | DataJunction Authors <yian.shang@gmail.com> | null | null | MIT | null | [
"License :: OSI Approved :: MIT License",
"Operating System :: OS Independent",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12"
] | [] | null | null | <4.0,>=3.10 | [] | [] | [] | [
"alive-progress>=3.1.2",
"httpx>=0.27.0",
"pytest-xdist>=3.5.0",
"pyyaml>=6.0.1",
"requests<3.0.0,>=2.28.2",
"rich>=13.7.0",
"mcp>=1.0.0; extra == \"mcp\"",
"plotext>=5.2.8; extra == \"mcp\"",
"pydantic-settings>=2.10.1; extra == \"mcp\"",
"pydantic>=2.0; extra == \"mcp\"",
"pandas>=2.0.2; extra == \"pandas\""
] | [] | [] | [] | [
"repository, https://github.com/DataJunction/dj"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-20T10:57:56.300180 | datajunction-0.0.69.tar.gz | 123,400 | a7/c1/21e8441d198c4a5b334cb7033a1c50d86836d85e3dddee44441d440b355b/datajunction-0.0.69.tar.gz | source | sdist | null | false | 7148cae3aaa7db7ee4d3b2a8bad9dbb6 | 420d632ef14663b3c7ed0b309a29e779ca7b05f721833fa4f2ba47b8084eec48 | a7c121e8441d198c4a5b334cb7033a1c50d86836d85e3dddee44441d440b355b | null | [
"LICENSE.txt"
] | 544 |
2.4 | datajunction-reflection | 0.0.69 | OSS Implementation of a DataJunction Reflection Service | # DJ Reflection Service
The reflection service polls the DJ core service for all nodes with associated tables, whether source
tables or materialized tables. For each node, it refreshes the node's schema based on the associated
table's schema that it retrieves from the query service. It also retrieves the available partitions and
the valid through timestamp of these tables and reflects them accordingly to DJ core.
This service uses a celery beat scheduler, with a configurable polling interval that defaults to once per
hour and async tasks for each node's reflection.
| text/markdown | null | DataJunction Authors <roberto@dealmeida.net> | null | null | MIT | null | [
"License :: OSI Approved :: MIT License",
"Operating System :: OS Independent",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12"
] | [] | null | null | <4.0,>=3.10 | [] | [] | [] | [
"celery[redis]>=5.2.3",
"importlib-metadata",
"pydantic<2.0",
"python-dotenv==0.19.2",
"pytz",
"requests>=2.26.0",
"uvicorn[standard]>=0.21.1; extra == \"uvicorn\""
] | [] | [] | [] | [
"repository, https://github.com/DataJunction/dj"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-20T10:57:48.244767 | datajunction_reflection-0.0.69.tar.gz | 87,400 | bd/47/9e3f17fb93229a73f4259332f9da9eef65915e8c234d357a39a98c6325f0/datajunction_reflection-0.0.69.tar.gz | source | sdist | null | false | db715ecb9067435ec8730a2c3861ebfb | 3156236af4a93b8afbaa073bb86ba2e0a97c20cbe0bd81e5f6310aee3ded07ff | bd479e3f17fb93229a73f4259332f9da9eef65915e8c234d357a39a98c6325f0 | null | [
"LICENSE"
] | 252 |
2.4 | datajunction-server | 0.0.69 | DataJunction server library for running to a DataJunction server | # DataJunction
## Introduction
DataJunction (DJ) is an open source **metrics platform** that allows users to define
metrics and the data models behind them using **SQL**, serving as a **semantic layer**
on top of a physical data warehouse. By leveraging this metadata, DJ can enable efficient
retrieval of metrics data across different dimensions and filters.

## Getting Started
To launch the DataJunction UI with a minimal DataJunction backend, start the default docker compose environment.
```sh
docker compose up
```
If you'd like to launch the full suite of services, including open-source implementations of the DataJunction query service and
DataJunction reflection service specifications, use the `demo` profile.
```sh
docker compose --profile demo up
```
DJUI: [http://localhost:3000/](http://localhost:3000/)
DJ Swagger Docs: [http://localhost:8000/docs](http://localhost:8000/docs)
DJQS Swagger Docs: [http://localhost:8001/docs](http://localhost:8001/docs)
Jaeger UI: [http://localhost:16686/search](http://localhost:16686/search)
Jupyter Lab: [http://localhost:8888](http://localhost:8888)
## How does this work?
At its core, DJ stores metrics and their upstream abstractions as interconnected nodes.
These nodes can represent a variety of elements, such as tables in a data warehouse
(**source nodes**), SQL transformation logic (**transform nodes**), dimensions logic,
metrics logic, and even selections of metrics, dimensions, and filters (**cube nodes**).
By parsing each node's SQL into an AST and through dimensional links between columns,
DJ can infer a graph of dependencies between nodes, which allows it to find the
appropriate join paths between nodes to generate queries for metrics.
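As a toy illustration of why that dependency graph matters (this is not DJ's code, and the node names are made up), once each node's upstream references are known, finding a path from a metric down to one of its sources is a plain graph search:
```python
# Toy illustration (not DataJunction's implementation): once each node's SQL is
# parsed and its upstream references are known, finding a path between nodes is
# a simple breadth-first search over those dependencies.
from collections import deque

dependencies = {                      # node -> nodes it reads from (hypothetical)
    "revenue_metric": ["orders_transform"],
    "orders_transform": ["orders_source", "customers_dim"],
    "customers_dim": ["customers_source"],
}

def upstream_path(start: str, target: str) -> list[str] | None:
    """Breadth-first search from a metric down to a source node."""
    queue = deque([[start]])
    while queue:
        path = queue.popleft()
        if path[-1] == target:
            return path
        for dep in dependencies.get(path[-1], []):
            queue.append(path + [dep])
    return None

print(upstream_path("revenue_metric", "customers_source"))
# ['revenue_metric', 'orders_transform', 'customers_dim', 'customers_source']
```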
## AI Integration
DataJunction provides an MCP (Model Context Protocol) client that enables AI assistants like Claude to interact with your semantic layer.
The MCP client is part of the [DataJunction Python client package](../datajunction-clients/python/).
For installation and setup, see the [MCP documentation in the client package](../datajunction-clients/python/README_MCP.md).
## Documentation
For more detailed documentation, visit [datajunction.io](https://datajunction.io).
| text/markdown | null | null | null | null | MIT | metrics, semanticlayer | [
"License :: OSI Approved :: MIT License",
"Operating System :: OS Independent",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11"
] | [] | null | null | <3.12,>=3.10 | [] | [] | [] | [
"alembic>=1.10.3",
"antlr4-python3-runtime==4.13.1",
"bcrypt<=4.3.0,>=4.0.1",
"cachelib<1.0.0,>=0.10.2",
"cachetools>=5.3.1",
"celery<6.0.0,>=5.2.7",
"cryptography<=45.0.0",
"fastapi-cache2>=0.2.1",
"fastapi>=0.110.0",
"google-api-python-client>=2.95.0",
"google-auth-httplib2>=0.1.0",
"google-auth-oauthlib>=1.0.0",
"httpx>=0.27.0",
"jinja2>=3.1.4",
"line-profiler>=4.0.3",
"msgpack<2.0.0,>=1.0.5",
"nbformat>=5.10.4",
"opentelemetry-instrumentation-fastapi>=0.48b0",
"passlib>=1.7.4",
"psycopg>=3.1.16",
"pydantic-settings>=2.10.1",
"pydantic<2.11,>=2.0",
"pyjwt[crypto]>=2.8.0",
"python-dotenv<1.0.0,>=0.19.0",
"python-jose>=3.3.0",
"python-multipart>=0.0.20",
"pyyaml>=6.0.1",
"redis<5.0.0,>=4.5.4",
"requests<=2.29.0,>=2.28.2",
"rich<14.0.0,>=13.3.3",
"ruamel-yaml>=0.18.0",
"sqlalchemy-utils<1.0.0,>=0.40.0",
"sqlalchemy>=2",
"sse-starlette<=2.0.0,>=1.6.0",
"strawberry-graphql>=0.235.0",
"types-cachetools>=5.3.0.6",
"yamlfix>=1.16.0",
"yarl<2.0.0,>=1.8.2",
"snowflake-connector-python>=3.0.0; extra == \"all\"",
"snowflake-connector-python>=3.0.0; extra == \"snowflake\"",
"sqlglot>=18.0.1; extra == \"transpilation\"",
"uvicorn[standard]>=0.21.1; extra == \"uvicorn\""
] | [] | [] | [] | [
"Homepage, https://datajunction.io",
"Repository, https://github.com/DataJunction/dj"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-20T10:57:40.180031 | datajunction_server-0.0.69.tar.gz | 7,663 | a3/d1/67daeb28f9076e9b91def855670b160c72b127d4656d4f4b4e142f84fecc/datajunction_server-0.0.69.tar.gz | source | sdist | null | false | 8a4d7f87be5594c91be4c632d6491afd | 05a10ec2355863387208af58d1893f0a219569ed5a0a8a7ac375741c2167c0f4 | a3d167daeb28f9076e9b91def855670b160c72b127d4656d4f4b4e142f84fecc | null | [] | 267 |
2.4 | atac | 0.1.0 | Agentic Trajectory as Code (ATaC) - A declarative workflow DSL for AI Agents | # ATaC (Agentic Trajectory as Code)
[English](#english) | [Chinese (translated)](#chinese-translated)
---
## Chinese (translated)
ATaC is a declarative workflow DSL and CLI tool designed specifically for AI Agents. It lets you define complex agent behaviors (tool calls, conditional checks, iterative loops) as distributable, reusable "Trajectory as Code."
### 🚀 Core Features
- **Agent-native design**: Built for collaboration with LLM Agents. Besides human-readable YAML, a companion `SKILL.md` skill description file lets an Agent master the workflow instantly.
- **Declarative DSL**: Define workflows in YAML, with support for loops (`for`) and conditionals (`if-else`).
- **MCP native support**: Seamless integration with Model Context Protocol servers via the `mcp://` scheme.
- **Visual addressing**: Path coordinates from `atac show` (e.g., `0.2.then`) let an Agent manage nested logic with surgical precision.
### 🛠 Executor Support Matrix
| Executor | Scheme | Status | Note |
| :--- | :--- | :--- | :--- |
| **MCP** | `mcp://` | ✅ Supported | Native support for all MCP-compliant servers |
| **Bash** | `bash://` | ✅ Supported | Run local terminal commands and scripts |
| **Claude Code** | - | 🚧 Pending | Community contributions for built-in tool integration are welcome |
| **Kimi / Moonshot**| `kimi://` | ✅ Supported | Full support for all Kimi-CLI built-in tools |
### 📄 Agent Integration (Skills)
If you are building an Agent-assisted system, simply provide the project's `SKILL.md` to the Agent (e.g., as part of the System Prompt or inside a skills folder), and it will understand how to autonomously build, debug, and run complex task trajectories.
> [!TIP]
> **Recommended practice**:
> ```bash
> # 1. Integrate the skill file into your Agent
> cp SKILL.md path/to/your/agent/skills/
>
> # 2. Configure the MCP service directory
> export ATAC_MCP_SERVER_CONFIGS="/path/to/your/mcp/config.json"
> ```
### 📦 Quick Start
```bash
pip install atac
# 1. Configure the Amap Maps MCP server (add to mcp_config.json)
# {
#  "mcpServers": {
#    "amap-maps": {
#      "command": "npx",
#      "args": ["-y", "@amap/amap-maps-mcp-server"],
#      "env": { "AMAP_MAPS_API_KEY": "YOUR_API_KEY_HERE" }
#    }
#  }
# }
export ATAC_MCP_SERVER_CONFIGS="path/to/mcp_config.json"
# 2. Run the example trajectory
atac run example/multi_province_center.yaml
```
### 🤝 Contributing
We welcome contributions of all kinds!
1. **Fork** this repository and create a feature branch.
2. Make sure all changes pass `pytest` unit tests and `ruff` linting.
3. Submit a Pull Request with a detailed description of your changes.
---
## English
ATaC is a declarative workflow DSL and CLI tool designed specifically for AI Agents. It allows you to define complex agent behaviors—such as sequential tool calls, conditional branching, and iterative loops—as distributable and reusable "Trajectories as Code."
### 🚀 Key Features
- **Agent-Centric**: Built for LLM Agents. Every command and structure is designed to be easily manipulated by an AI, complemented by a dedicated `SKILL.md` for instant proficiency.
- **Declarative DSL**: Define workflows in YAML with built-in logic for `for` loops and `if-else` branches.
- **MCP Native**: Seamless integration with Model Context Protocol servers via the `mcp://` protocol.
- **Visual Addressing**: Precise control over nested logic using path coordinates (e.g., `0.2.then`) from `atac show`.
### 🛠 Executor Support Matrix
| Executor | Scheme | Status | Note |
| :--- | :--- | :--- | :--- |
| **MCP** | `mcp://` | ✅ Supported | Native support for all MCP servers |
| **Bash** | `bash://` | ✅ Supported | Run local terminal commands & scripts |
| **Claude Code** | - | 🚧 Pending | Community contributions are welcome! |
| **Kimi / Moonshot**| `kimi://` | ✅ Supported | Full support for Kimi-CLI built-in tools |
### 📄 Agent Integration (Skills)
The core value of ATaC lies in its **Skill System**. By providing the `SKILL.md` (found in the project root) to your Agent, it gains the immediate ability to autonomously architect, debug, and execute complex task trajectories.
> [!TIP]
> **Best Practice**:
> ```bash
> # 1. Integrate the skill file into your Agent
> cp SKILL.md path/to/your/agent/skills/
>
> # 2. Configure the MCP service directory
> export ATAC_MCP_SERVER_CONFIGS="/path/to/your/mcp/config.json"
> ```
### 📦 Quick Start
```bash
pip install atac
# 1. Configure Amap MCP (Add to your mcp_config.json)
# {
# "mcpServers": {
# "amap-maps": {
# "command": "npx",
# "args": ["-y", "@amap/amap-maps-mcp-server"],
# "env": { "AMAP_MAPS_API_KEY": "YOUR_API_KEY_HERE" }
# }
# }
# }
export ATAC_MCP_SERVER_CONFIGS="path/to/mcp_config.json"
# 2. Run the example trajectory
atac run example/multi_province_center.yaml
```
### 🤝 Contributing
Contributions of any kind are welcome!
1. **Fork** the repository and create your feature branch.
2. Ensure all changes pass `pytest` unit tests and `ruff` linting.
3. Submit a Pull Request with a detailed description of your changes.
---
### License
MIT License.
| text/markdown | null | null | null | null | null | null | [] | [] | null | null | >=3.13 | [] | [] | [] | [
"click>=8.3.1",
"jinja2>=3.1.6",
"jsonschema>=4.26.0",
"mcp>=1.26.0",
"pydantic>=2.12.5",
"pydantic-settings>=2.13.1",
"pyyaml>=6.0.3"
] | [] | [] | [] | [] | uv/0.9.5 | 2026-02-20T10:56:42.236650 | atac-0.1.0.tar.gz | 21,164 | a3/11/41ce0550977fc8f07f7e5313698d4b480a9ba92be9f1fc53e9a3e7c37f1b/atac-0.1.0.tar.gz | source | sdist | null | false | 908b08cd1d9337ba2ba324debe7eb58e | 346502f536deb93a2c8bfd268685b25e7e686df80276d3ab18132c96818f90e0 | a31141ce0550977fc8f07f7e5313698d4b480a9ba92be9f1fc53e9a3e7c37f1b | null | [
"LICENSE"
] | 253 |
2.4 | logs-py | 4.0.27 | A python interface for the LOGS public API | # LOGS-Py
LOGS is a scientific data management system (SDMS) that allows for automated data collection, visualization, and organization. Through its internal organizational concepts it allows you to enrich your experimental data with metadata. LOGS lets you adapt many of its organizational structures, which enables your data management to follow your internal workflows.
**LOGS-Py is a Python package** that interacts with the LOGS web API, enabling you to extract data from and push data to LOGS and, more generally, to interact with the LOGS backend programmatically. The main motivation behind the design of the library is to keep this interaction as pythonic as possible. Communication with the API remains mostly in the background while the user of the library handles native Python objects, yet the user can still interact with nearly all LOGS functionalities and entities.
Thus _this library_ firstly _targets lab and data scientists_, allowing them to freely interact with experimental data and its metadata without any prior knowledge of web technologies and communication. Secondly, it _enables power-users_ to implement highly specific workflow automations and 3rd-party software integrations with LOGS and other lab software.
## Installation
The **LOGS-Py** package can be easily installed by using `pip`.
For this open the `terminal` and do:
```bash
pip install logs-py
```
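As a rough, unverified sketch of the pythonic interaction described above, a script might connect to a LOGS instance and iterate over entities roughly as follows; the class name, constructor arguments, and accessor used here are assumptions for illustration only, so consult https://docs.logs-python.com for the actual interface.
```python
# Hypothetical usage sketch -- the names below are illustrative assumptions,
# not the verified logs-py API; see https://docs.logs-python.com.
from LOGS import LOGS  # assumed entry point of the logs-py package

# Hypothetical constructor: point the client at your LOGS instance.
logs = LOGS(url="https://mylogs.example.com", apiKey="<your_api_key>")

# Iterate over datasets as native Python objects (hypothetical accessor).
for dataset in logs.datasets():
    print(dataset.name)
```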
| text/markdown | Sina Kazemi | support@logs-repository.com | null | null | MIT | null | [
"Programming Language :: Python :: 3",
"License :: OSI Approved :: MIT License",
"Operating System :: OS Independent",
"Development Status :: 4 - Beta",
"Intended Audience :: Science/Research"
] | [] | https://docs.logs-python.com | null | >=3.8 | [] | [] | [] | [
"numpy",
"requests",
"regex>=2019.12.9",
"Pillow",
"deprecation",
"pytz",
"tzlocal; platform_system == \"Windows\"",
"typing_extensions"
] | [] | [] | [] | [] | twine/6.2.0 CPython/3.12.3 | 2026-02-20T10:56:28.981315 | logs_py-4.0.27-py3-none-any.whl | 215,394 | e1/a2/91328bf061535feed0167a52a467b28d793a0f4c595031a3e52b824effcb/logs_py-4.0.27-py3-none-any.whl | py3 | bdist_wheel | null | false | 43781ac4e879f166d39083a5c05c4a63 | f36bfd2d5a8d4d14c7d84ec23fa28bba3aa9853f379d90bb693e9d4083bc02a9 | e1a291328bf061535feed0167a52a467b28d793a0f4c595031a3e52b824effcb | null | [] | 104 |
2.4 | arxiv-pulse | 1.2.7 | An intelligent arXiv literature crawler and analyzer with AI-powered features | # arXiv Pulse
> Intelligent arXiv Literature Tracking System
[](https://pypi.org/project/arxiv-pulse/)


> 🌐 **Language**: [中文文档](https://github.com/kYangLi/arXiv-Pulse/blob/main/README_CN.md)
**arXiv Pulse** is a Python package for automated crawling, summarizing, and tracking of the latest research papers from arXiv. It supports all arXiv categories and provides a modern web interface for a professional literature management experience.
## 📸 Screenshots

## ✨ Key Features
- **🌐 Web Interface**: Modern FastAPI + Vue 3 + Element Plus interface with real-time SSE streaming
- **🚀 One-Command Start**: Simply run `pulse serve` to start the service
- **📝 Web Configuration**: First-time setup wizard, all settings stored in database
- **🤖 AI Auto-Processing**: Automatic translation, AI summarization, and figure extraction
- **💬 AI Chat Assistant**: Ask questions about papers with context-aware AI assistant
- **🔍 Smart Search**: Natural language queries with AI-powered keyword parsing
- **📁 Paper Collections**: Create, edit, and delete collections to organize important papers
- **🛒 Paper Basket**: Select multiple papers for batch operations
- **🔒 Secure by Default**: Localhost-only binding, explicit confirmation for remote access
- **🌍 Multilingual Support**: UI in Chinese/English, translation to multiple languages
## 🆕 What's New in 1.2.0
- **Enhanced UI Components**: Redesigned buttons, switches, selects, dialogs with refined shadows and transitions
- **Paper Index Numbers**: Visual index numbers on paper cards for easy reference
- **Back-to-Top Button**: Quick navigation with scroll-aware floating button
- **Tooltips for Floating Buttons**: Helpful labels on hover for all floating action buttons
- **Recent Papers AI Search**: Search within recent papers using natural language
- **Sync Page Improvements**: Better spacing, help icons with tooltips
- **SQLite WAL Mode**: Concurrent read/write operations for better performance
- **Bug Fixes**: Form submission, pagination visibility, index preservation during search
## 🚀 Quick Start
### Installation
```bash
pip install arxiv-pulse
```
### Start Service
```bash
# Create data directory
mkdir my_papers && cd my_papers
# Start web service (background mode by default)
pulse serve .
# Or specify port
pulse serve . --port 3000
# Foreground mode (see logs in terminal)
pulse serve . -f
```
Then visit http://localhost:8000
### Service Management
```bash
pulse status . # Check service status
pulse stop . # Stop service
pulse restart . # Restart service
pulse stop . --force # Force stop (SIGKILL)
```
### Remote Access (SSH Tunnel)
By default, the service only accepts localhost connections for security. For remote access, use SSH tunnel:
```bash
# On server
pulse serve .
# On your computer
ssh -L 8000:localhost:8000 user@server
# Then visit http://localhost:8000
```
This provides encrypted connection without exposing your API keys.
### First-Time Setup
1. Visit http://localhost:8000
2. Follow the setup wizard:
- **Step 1**: Configure AI API (OpenAI/DeepSeek key, model, endpoint)
- **Step 2**: Select research fields
- **Step 3**: Set sync parameters
- **Step 4**: Start initial sync
## 🔒 Security
arXiv Pulse is designed with security in mind:
- **Localhost-only by default**: Service binds to 127.0.0.1, inaccessible from external networks
- **No plaintext credentials**: API keys stored in local SQLite database, never transmitted
- **Explicit remote access**: Opening to non-localhost requires a flag with security warning
**For remote access**, we recommend:
1. **SSH Tunnel** (easiest): `ssh -L 8000:localhost:8000 user@server`
2. **VPN**: WireGuard, OpenVPN, or Tailscale
3. **Reverse Proxy**: Nginx/Caddy with HTTPS
```bash
# If you must open to network (not recommended)
pulse serve . --host 0.0.0.0 --allow-non-localhost-access-with-plaintext-transmission-risk
```
## 📖 Daily Usage
### Pages
| Page | Description |
|------|-------------|
| **Home** | Statistics overview, search by natural language |
| **Recent** | Papers from last N days, filter by field |
| **Sync** | Sync status, field management, manual sync |
| **Collections** | Organize important papers into collections |
### Features
- **Search**: Use natural language like "DFT calculations for battery materials"
- **Filter**: Click "Filter Fields" to select research areas
- **AI Chat**: Click the chat icon (bottom-right) to ask questions
- **Paper Basket**: Click basket icon on cards to collect papers for batch operations
- **Settings**: Click gear icon to modify API key, language, and sync options
## 📁 Project Structure
```
arxiv_pulse/
├── core/ # Core infrastructure (Config, Database, Lock)
├── models/ # SQLAlchemy ORM models
├── services/ # Business logic (AI, translation, papers)
├── crawler/ # ArXiv API crawler
├── ai/ # Paper summarizer, report generator
├── search/ # AI-powered search engine
├── cli/ # Command-line interface
├── web/ # FastAPI web application
│ ├── app.py # FastAPI app
│ ├── api/ # API endpoints
│ └── static/ # Vue 3 frontend (components, stores, i18n)
└── i18n/ # Backend translations
Data Directory/
├── data/arxiv_papers.db # SQLite database
└── web.log # Service log
```
For detailed architecture, see [DEV.md](DEV.md).
## 🔧 API Endpoints
| Endpoint | Method | Description |
|----------|--------|-------------|
| `/api/config` | GET/PUT | Get/update configuration |
| `/api/config/status` | GET | Get initialization status |
| `/api/papers/search/stream` | GET (SSE) | AI-powered search |
| `/api/papers/recent/update` | POST (SSE) | Update recent papers |
| `/api/collections` | GET/POST | List/create collections |
| `/api/stats` | GET | Database statistics |
| `/api/chat/sessions/{id}/send` | POST (SSE) | Send message to AI |
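As a small sketch of calling one of these endpoints from Python (assuming the service is running on the default http://localhost:8000 and that `/api/stats` returns JSON, as listed above), you could fetch the database statistics like this:
```python
# Minimal sketch: fetch database statistics from a locally running arXiv Pulse
# instance. Only the /api/stats endpoint and the default localhost:8000 binding
# come from this README; the shape of the returned JSON is not assumed here.
import json
import urllib.request

with urllib.request.urlopen("http://localhost:8000/api/stats") as resp:
    stats = json.load(resp)

print(json.dumps(stats, indent=2))
```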
## 🧪 Research Fields
arXiv Pulse supports **all arXiv categories**. Simply select your fields of interest in the Settings page. Pre-configured options include:
| Category | Example Fields |
|----------|----------------|
| Physics | Condensed Matter, Quantum Physics, High Energy, Nuclear, Astrophysics |
| Computation | DFT, First-Principles, MD, Force Fields, Computational Physics |
| AI/ML | Machine Learning, Artificial Intelligence, Computer Vision, NLP |
| Chemistry | Quantum Chemistry, Chemical Physics |
| Math | Mathematical Physics, Numerical Analysis, Statistics |
| Others | Quantitative Biology, Electrical Engineering, Economics |
You can also add custom search queries for any topic on arXiv.
## 🐛 Troubleshooting
**Q: Port already in use?**
```bash
pulse serve . --port 3000
```
**Q: Service shows "not running" but port is occupied?**
```bash
pulse stop . --force
# Or remove stale lock
rm .pulse.lock
```
**Q: How to reinitialize?**
```bash
rm data/arxiv_papers.db
pulse serve .
```
**Q: AI not responding?**
- Check API key in Settings
- Check console for errors (F12 → Console)
- Try foreground mode to see logs: `pulse serve . -f`
## 📄 License
GPL-3.0 - see [LICENSE](LICENSE) for details.
## 🙏 Acknowledgments
This project was developed by [OpenCode](https://github.com/anomalyco/opencode), an AI coding agent.
- **Yang Li** - For 500+ iterations of requirements discussions, design decisions, and testing feedback. This project would not exist without your patience and vision.
- [GLM-5](https://bigmodel.cn/glm-coding) - For providing the core intelligence that powers OpenCode. ~200 million tokens consumed in bringing this project to life.
- [arXiv.org](https://arxiv.org) - For the open API
- Computational materials science community - For inspiration and use cases
---
**arXiv Pulse** - Making arXiv literature tracking simple and efficient!
| text/markdown | null | "Yang Li, OpenCode, GLM-5" <lyang.1915@gmail.com> | null | null | null | arXiv, literature, crawler, research, paper, AI, summarization | [
"Development Status :: 5 - Production/Stable",
"Intended Audience :: Science/Research",
"Topic :: Scientific/Engineering :: Physics",
"Topic :: Scientific/Engineering :: Artificial Intelligence",
"Topic :: Internet :: WWW/HTTP :: Indexing/Search",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: 3.13"
] | [] | null | null | >=3.12 | [] | [] | [] | [
"arxiv>=2.1.3",
"requests>=2.32.3",
"pandas>=2.2.3",
"sqlalchemy>=2.0.36",
"openai>=1.70.0",
"httpx[socks]>=0.27.0",
"tqdm>=4.67.1",
"markdown>=3.7",
"click>=8.1.0",
"fastapi>=0.109.0",
"uvicorn>=0.27.0",
"python-multipart>=0.0.6",
"weasyprint>=62.0",
"pymupdf>=1.24.0",
"pytest>=8.0.0; extra == \"dev\"",
"pytest-asyncio>=0.23.0; extra == \"dev\"",
"pytest-cov>=4.1.0; extra == \"dev\"",
"black>=24.0.0; extra == \"dev\"",
"ruff>=0.4.0; extra == \"dev\"",
"mypy>=1.10.0; extra == \"dev\"",
"playwright>=1.45.0; extra == \"dev\"",
"types-requests>=2.32.0; extra == \"dev\"",
"types-markdown>=3.7.0; extra == \"dev\""
] | [] | [] | [] | [
"Homepage, https://github.com/kYangLi/ArXiv-Pulse",
"Repository, https://github.com/kYangLi/ArXiv-Pulse.git",
"Documentation, https://github.com/kYangLi/ArXiv-Pulse#readme"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-20T10:56:05.760219 | arxiv_pulse-1.2.7-py3-none-any.whl | 1,539,675 | b9/a2/3aca961f721122fb9d4a3e1baf202b6646bc7b54c51e0c7314a099901df2/arxiv_pulse-1.2.7-py3-none-any.whl | py3 | bdist_wheel | null | false | a588727873147d716233b09e4d1cc7ba | 6bb87caff45db157efa286fca50f6750de1cc4cc532cf8d136cfe6a6554ecbea | b9a23aca961f721122fb9d4a3e1baf202b6646bc7b54c51e0c7314a099901df2 | GPL-3.0-or-later | [
"LICENSE"
] | 93 |
2.4 | acoustotreams | 0.2.15 | A Python package for acoustic wave scattering based on the T-matrix method | [](https://pypi.org/project/acoustotreams)

[](https://NikUstimenko.github.io/acoustotreams)
# acoustotreams
The package `acoustotreams` adopts the framework of the `treams` package for acoustic wave scattering in finite and periodic arrangements of particles, based on the T-matrix method.
## Installation
Python 3.10 or newer is required (3.14 is not yet supported).
### Installation using pip
To install the package with pip, use
```sh
pip install acoustotreams
```
As a prerequisite, you also have to install the original `treams>0.4` as well as `numpy` and `scipy<1.17`
```sh
pip install treams
```
## Documentation
The documentation can be found at https://NikUstimenko.github.io/acoustotreams.
## Publications
When using this code please cite:
The following publications document the developments and methods for different parts of the code:
* [N. Ustimenko, I. Fernandez-Corbaton, and C. Rockstuhl, Singular value decomposition to describe bound states in the continuum in periodic metasurfaces, arXiv 2602.15741 (2026).](https://arxiv.org/abs/2602.15741)
* [O. Demeulenaere, N. Ustimenko, A. G. Athanassiadis, L. Gulati, C. Rockstuhl, and P. Fischer, Ultrasonic metamaterial at MHz frequencies using microstructured glass, arXiv 2512.20506 (2026).](https://arxiv.org/abs/2512.20506)
* [N. Ustimenko, A. B. Evlyukhin, V. Kyrimi, A. V. Kildishev, and C. Rockstuhl, Lattice-induced sound trapping in biperiodic metasurfaces of acoustic resonators, Phys. Rev. Res. 8, 013074 (2026).](https://doi.org/10.1103/wnmk-zhrb)
* [N. Ustimenko, C. Rockstuhl, and A. V. Kildishev, Optimal multipole center for subwavelength acoustic scatterers, Appl. Phys. Lett. 126, 142201 (2025).](https://doi.org/10.1063/5.0257760)
## Features
* [x] T-matrix calculations using a spherical or cylindrical wave basis set
* [x] Scattering from clusters of particles
* [x] Scattering from particles and clusters arranged in 3d-, 2d-, and 1d-lattices
* [x] Calculation of sound propagation in stratified media
* [x] Band calculation in crystal structures
| text/markdown | Nikita Ustimenko | nikita.ustimenko@kit.edu | null | null | MIT | null | [
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: 3.13",
"License :: OSI Approved :: MIT License",
"Operating System :: OS Independent"
] | [] | https://github.com/NikUstimenko/acoustotreams | null | <3.14,>=3.10 | [] | [] | [] | [
"numpy",
"scipy<1.17,>=1.14.1",
"treams>=0.4"
] | [] | [] | [] | [
"Bug Tracker, https://github.com/NikUstimenko/acoustotreams/issues"
] | twine/6.2.0 CPython/3.11.14 | 2026-02-20T10:55:51.412181 | acoustotreams-0.2.15.tar.gz | 44,104 | a5/d6/2c48e183c2ad40769826c632b2c091e5f18bfcf520c3db3f7908fa8e4c45/acoustotreams-0.2.15.tar.gz | source | sdist | null | false | 8a8ed5be3d457dfec4d96125bcb04499 | 6b97273b583fc23a2111c92a260a1740155ac7fbb9f8c56b38637c9daecfaa56 | a5d62c48e183c2ad40769826c632b2c091e5f18bfcf520c3db3f7908fa8e4c45 | null | [
"LICENSE"
] | 231 |
2.4 | gxformat2 | 0.22.0 | Galaxy Workflow Format 2 Descriptions |
.. image:: https://readthedocs.org/projects/gxformat2/badge/?version=latest
:target: https://gxformat2.readthedocs.io/en/latest/
.. image:: https://badge.fury.io/py/gxformat2.svg
:target: https://pypi.python.org/pypi/gxformat2/
.. image:: https://github.com/galaxyproject/gxformat2/workflows/Python%20CI/badge.svg
:target: https://github.com/galaxyproject/gxformat2/actions?query=workflow%3A%22Python+CI%22
.. image:: https://github.com/galaxyproject/gxformat2/workflows/Java%20CI/badge.svg
:target: https://github.com/galaxyproject/gxformat2/actions?query=workflow%3A%22Java+CI%22
.. image:: https://img.shields.io/badge/latest%20schema-v19.09-blue
:target: https://galaxyproject.github.io/gxformat2/v19_09.html
Format 2
--------
This package defines a high-level Galaxy_ workflow description termed "Format
2". The current schema version is v19_09 and the schema can be found
`here <https://galaxyproject.github.io/gxformat2/v19_09.html>`__. This version of
workflow format can be consumed by Galaxy since version 19.09.
The Format 2 workflow description is still somewhat experimental and may
yet change in small, potentially backward-incompatible ways until the format is
exported by Galaxy by default.
The traditional Galaxy workflow description (files ending in ``.ga`` extension,
sometimes called native workflows in this project) was not designed to be
concise and is neither readily human readable nor human writable. Galaxy
workflow Format 2 is being designed to address these limitations,
while also moving Galaxy's workflow description language toward standards such
as the Common Workflow Language.
gxformat2
---------
This Python project can be installed from PyPI using ``pip``.
::
$ pip install gxformat2
Check out the project tests or how it is used in projects such as Planemo and
Galaxy to see how to use the gxformat2 library. Reference documentation for
the `modules <https://gxformat2.readthedocs.io/en/latest/py-modindex.html>`__
can be found as part of the project's documentation.
This project also includes various scripts for working with Galaxy workflows.
Check out their help for more information.
::
$ gxwf-lint --help
$ gxwf-viz --help
$ gxwf-abstract-export --help
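As a minimal, unverified sketch, the ``gxwf-lint`` script can also be driven from Python via
``subprocess``; the workflow file name below is a placeholder, and passing a file path to the
script is an assumption based on its purpose rather than on documented usage::

    import subprocess

    # Run the gxwf-lint console script on a Format 2 workflow file.
    # "my_workflow.gxwf.yml" is a hypothetical path used only for illustration.
    result = subprocess.run(
        ["gxwf-lint", "my_workflow.gxwf.yml"],
        capture_output=True,
        text=True,
    )
    print(result.returncode)
    print(result.stdout)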
This library and associated scripts are licensed under the MIT License.
.. _Galaxy: https://galaxyproject.org/
History
-------
.. to_doc
---------------------
0.22.0 (2026-02-20)
---------------------
* Support URL and TRS URL references in subworkflow run: fields (thanks to
`@mvdbeek`_). `Pull Request 130`_
* Add ``--compact`` flag (thanks to `@mvdbeek`_). `Pull Request 112`_
* Add support for Python 3.14 (thanks to `@nsoranzo`_). `Pull Request 117`_
* Drop unmaintained codecov.io dependency (thanks to `@nsoranzo`_). `Pull
Request 125`_
* Enable dependabot version updates for GitHub actions (thanks to
`@nsoranzo`_). `Pull Request 116`_
---------------------
0.21.0 (2025-09-19)
---------------------
* Fix gxformat2 to .ga conversion if ``hide: true`` specified on output (thanks to
`@mvdbeek`_). `Pull Request 106`_
* Upgrade schema-salad version and auto-generated documents (thanks to
`@mvdbeek`_). `Pull Request 107`_
* GalaxyWorkflow: improve parsing speed & codegen (thanks to `@mr-c`_). `Pull
Request 108`_
* Fix docs building (thanks to `@nsoranzo`_). `Pull Request 109`_
* Add myst-parser to docs requirements (thanks to `@nsoranzo`_). `Pull Request
111`_
* Rebuild schema, bump up minimum Python version to 3.9 (thanks to
`@mvdbeek`_). `Pull Request 113`_
* Support for sample sheets and records (thanks to `@jmchilton`_). `Pull
Request 114`_
---------------------
0.20.0 (2024-08-23)
---------------------
* Arrays of workflow input parameters (thanks to `@mvdbeek`_). `Pull Request
100`_
* Design goals (thanks to `@jmchilton`_). `Pull Request 97`_
---------------------
0.19.0 (2024-07-23)
---------------------
* Sync markdown_parse with Galaxy (thanks to `@mvdbeek`_). `Pull Request 99`_
* More helpers for reasoning about gxformat2 steps (thanks to `@jmchilton`_).
`Pull Request 98`_
* Add abstraction for popping connection dictionary to model (thanks to
`@jmchilton`_). `Pull Request 96`_
* Add now mandatory readthedocs config files (thanks to `@nsoranzo`_). `Pull
Request 94`_
* Use `ConnectedValue` for connected values (thanks to `@mvdbeek`_). `Pull
Request 95`_
* Refresh codegen using schema-salad 8.4.20230808163024 (thanks to `@mr-c`_).
`Pull Request 92`_
* Update label comment (thanks to `@mvdbeek`_). `Pull Request 90`_
---------------------
0.18.0 (2023-05-12)
---------------------
* Fix input conversion if input has no label by @mvdbeek in https://github.com/galaxyproject/gxformat2/pull/89
---------------------
0.17.0 (2023-01-06)
---------------------
* Enable "when" for workflow steps by @mr-c in https://github.com/galaxyproject/gxformat2/pull/74
* When fixes by @mvdbeek in https://github.com/galaxyproject/gxformat2/pull/86
---------------------
0.16.0 (2022-09-20)
---------------------
* Add dev ``when`` on steps to backend (don't expose in schema yet). by @jmchilton in https://github.com/galaxyproject/gxformat2/pull/48
* Update project plumbing to allow dev release. by @jmchilton in https://github.com/galaxyproject/gxformat2/pull/49
* Drop support for Python 3.5, add 3.9 by @nsoranzo in https://github.com/galaxyproject/gxformat2/pull/52
* Relicense under the MIT license by @nsoranzo in https://github.com/galaxyproject/gxformat2/pull/58
* Format2: Add `label` attribute to `WorkflowInputParameter` and `WorkflowOutputParameter` by @nsoranzo in https://github.com/galaxyproject/gxformat2/pull/56
* Misc fixes and refactorings by @nsoranzo in https://github.com/galaxyproject/gxformat2/pull/55
* Convert Format2 workflow `label` to native `name` by @nsoranzo in https://github.com/galaxyproject/gxformat2/pull/54
* test_abstract_export: use different names for the different outputs by @simleo in https://github.com/galaxyproject/gxformat2/pull/57
* Fix 2 typos by @nsoranzo in https://github.com/galaxyproject/gxformat2/pull/62
* Propagate `doc` field to abstract CWL format by @nsoranzo in https://github.com/galaxyproject/gxformat2/pull/65
* Linting fixes by @mvdbeek in https://github.com/galaxyproject/gxformat2/pull/64
* Maintain collection_type if present by @mvdbeek in https://github.com/galaxyproject/gxformat2/pull/68
* Fix schema doc build by @nsoranzo in https://github.com/galaxyproject/gxformat2/pull/69
* Lint and deprecation fixes by @nsoranzo in https://github.com/galaxyproject/gxformat2/pull/70
* Run java codegenerator by @mvdbeek in https://github.com/galaxyproject/gxformat2/pull/71
* Run maven tests on pull_request by @mvdbeek in https://github.com/galaxyproject/gxformat2/pull/72
* fix schema-salad pycodegen by @mr-c in https://github.com/galaxyproject/gxformat2/pull/76
* Add workflow default file support by @mvdbeek in https://github.com/galaxyproject/gxformat2/pull/79
* Add typescript implementation by @mr-c in https://github.com/galaxyproject/gxformat2/pull/75
* Fix cytoscape HTML exports from dist package. by @jmchilton in https://github.com/galaxyproject/gxformat2/pull/82
* Add missing elements to schema, fix change_datatype conversion, CSS by @mvdbeek in https://github.com/galaxyproject/gxformat2/pull/83
* Support lists as data inputs by @mvdbeek in https://github.com/galaxyproject/gxformat2/pull/84
---------------------
0.15.0 (2020-08-12)
---------------------
* Lint types of default values.
* Fix bugs in schema related to differing type names between Galaxy and CWL.
* Generate cwl v1.2 instead of cwl v1.2.0-dev5 release now that it has been released.
* More testing of linting and CWL 1.2 export.
---------------------
0.14.0 (2020-08-11)
---------------------
* Bug fix where native export had explicit outputs declaration still in it (wouldn't break anything, but
was deceptive).
* Fixes for experimental CWL 1.2 abstract export.
* Improve script structures and documentation.
* Improve code structure - add more types, make more things immutable, mention mutability in docstrings.
---------------------
0.13.1 (2020-08-03)
---------------------
* Improve package structure - publish fixed sphinx docs, fix readme badges, add mypy typing support.
---------------------
0.13.0 (2020-07-30)
---------------------
* Add experimental export to CWL 1.2 using new abstract Operation classes.
---------------------
0.12.0 (2020-07-27)
---------------------
* Drop support for Python 2 - to support next bullet.
* Update schema parser for recent changes to schema salad.
---------------------
0.11.4 (2020-07-27)
---------------------
* Added abstraction for uniform access to workflow outputs across formats.
---------------------
0.11.3 (2020-07-23)
---------------------
* Bug fixes for exporting newer input concepts from native to Format 2.
* Added abstraction for uniform access to workflow inputs across formats.
---------------------
0.11.2 (2020-07-22)
---------------------
* Rework cytoscape and helpers for reuse from Planemo.
* Rev markdown validator for and from latest Galaxy changes.
---------------------
0.11.1 (2020-02-25)
---------------------
* Bug fix for gxwf-lint invocation from setup.py installed script.
---------------------
0.11.0 (2020-02-25)
---------------------
* Validate Galaxy Markdown in workflow reports as part of linting.
* Improved null handling in native ga workflow linting.
* Enhancements to workflow linting from Python. Lint for lack of documentation,
tools using the test toolshed, and implement special linting for training
material workflows to ensure a tag matches the workflow topic.
* Add gxwf-viz script that produces a cytoscape visualization of a workflow.
---------------------
0.10.1 (2019-12-07)
---------------------
* Bug fix to handle outputs without labels in Format 2 - they
don't validate per se but they are important for testing in the
Galaxy framework.
---------------------
0.10.0 (2019-12-06)
---------------------
* Implement schema, validation, linting (for Format 2 and .ga).
* Handle new reports field in Galaxy 19.09 workflows.
* Numerous fixes for conversion to and from native workflows.
* Numerous new test cases.
* Implement Java project for validating and linting both kinds of workflows.
---------------------
0.9.0 (2019-07-08)
---------------------
* Implement default values in gxformat2.
---------------------
0.8.4 (2019-06-24)
---------------------
* Fix output IDs of 0.
---------------------
0.8.3 (2019-05-23)
---------------------
* Implement set_columns PJA.
---------------------
0.8.2 (2019-03-16)
---------------------
* Allow another API return option for experimental tool creation API.
---------------------
0.8.1 (2019-03-11)
---------------------
* Implement change datatype PJA.
---------------------
0.8.0 (2018-11-01)
---------------------
* Implement experimental CWL-style step defaults (see Galaxy PR #6850).
---------------------
0.7.1 (2018-10-09)
---------------------
* Various small fixes for changes in 0.7.1.
---------------------
0.7.0 (2018-10-08)
---------------------
* Add some basic test cases.
* Allow ID-map style listing of steps.
* Ordered load (in addition to existing dump functionality) or ordering of steps in ID-map style variant works.
* Allow CWL-style $graph defs that can define multiple workflows in a single file.
* Initial work on de-duplicating subworkflow definitions on import.
* Fix position handling while exporting workflow.
---------------------
0.6.1 (2018-10-01)
---------------------
* Fix export of non-data parameters and implicit workflow connections.
---------------------
0.6.0 (2018-10-01)
---------------------
* Various fixes, allow id map style workflow input definitions.
---------------------
0.5.0 (2018-10-01)
---------------------
* More fixes for PJA, add the ``doc`` keyword to format 2 workflows to match CWL workflows. Map to and from native Galaxy workflows as annotations.
---------------------
0.4.0 (2018-10-01)
---------------------
* Fixes for exporting PJA when exporting workflows from native .ga to format 2.
---------------------
0.3.2 (2018-10-01)
---------------------
* Fixes for exporting workflow outputs from native .ga to format 2, support for modern map style output definitions like CWL 1.0.
---------------------
0.3.1 (2018-10-01)
---------------------
* Fixes for exporting subworkflows from native .ga to format 2.
---------------------
0.3.0 (2018-09-30)
---------------------
* More cwl style inputs, initial work on conversion from native workflows, various small fixes and tweaks.
---------------------
0.2.0 (2018-02-21)
---------------------
* Bring in latest Galaxy updates - Python 3 fixes, safe YAML usage, and more PJA implemented.
---------------------
0.1.1 (2016-08-15)
---------------------
* Fix one Python 3 incompatibility.
---------------------
0.1.0 (2016-05-02)
---------------------
* Initial version - code from Galaxy's test framework with changes
based on planemo testing.
.. github_links
.. _Pull Request 130: https://github.com/galaxyproject/gxformat2/pull/130
.. _Pull Request 112: https://github.com/galaxyproject/gxformat2/pull/112
.. _Pull Request 117: https://github.com/galaxyproject/gxformat2/pull/117
.. _Pull Request 125: https://github.com/galaxyproject/gxformat2/pull/125
.. _Pull Request 116: https://github.com/galaxyproject/gxformat2/pull/116
.. _Pull Request 106: https://github.com/galaxyproject/gxformat2/pull/106
.. _Pull Request 107: https://github.com/galaxyproject/gxformat2/pull/107
.. _Pull Request 108: https://github.com/galaxyproject/gxformat2/pull/108
.. _Pull Request 109: https://github.com/galaxyproject/gxformat2/pull/109
.. _Pull Request 111: https://github.com/galaxyproject/gxformat2/pull/111
.. _Pull Request 113: https://github.com/galaxyproject/gxformat2/pull/113
.. _Pull Request 114: https://github.com/galaxyproject/gxformat2/pull/114
.. _Pull Request 100: https://github.com/galaxyproject/gxformat2/pull/100
.. _Pull Request 97: https://github.com/galaxyproject/gxformat2/pull/97
.. _Pull Request 99: https://github.com/galaxyproject/gxformat2/pull/99
.. _Pull Request 98: https://github.com/galaxyproject/gxformat2/pull/98
.. _Pull Request 96: https://github.com/galaxyproject/gxformat2/pull/96
.. _Pull Request 94: https://github.com/galaxyproject/gxformat2/pull/94
.. _Pull Request 95: https://github.com/galaxyproject/gxformat2/pull/95
.. _Pull Request 92: https://github.com/galaxyproject/gxformat2/pull/92
.. _Pull Request 90: https://github.com/galaxyproject/gxformat2/pull/90
.. _@mvdbeek: https://github.com/mvdbeek
.. _@mr-c: https://github.com/mr-c
.. _@nsoranzo: https://github.com/nsoranzo
.. _@jmchilton: https://github.com/jmchilton
| null | Galaxy Project and Community | jmchilton@gmail.com | null | null | MIT | galaxy | [
"Development Status :: 5 - Production/Stable",
"Intended Audience :: Developers",
"Environment :: Console",
"License :: OSI Approved :: MIT License",
"Operating System :: POSIX",
"Topic :: Software Development",
"Topic :: Software Development :: Code Generators",
"Topic :: Software Development :: Testing",
"Natural Language :: English",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.9",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: 3.13",
"Programming Language :: Python :: 3.14"
] | [] | https://github.com/galaxyproject/gxformat2 | null | null | [] | [] | [] | [
"bioblend",
"pyyaml",
"schema-salad>8.7.20241010092723",
"typing_extensions"
] | [] | [] | [] | [] | twine/6.2.0 CPython/3.13.7 | 2026-02-20T10:55:07.761096 | gxformat2-0.22.0.tar.gz | 76,843 | 97/4d/125f2ebe54662c47b3205bebcad94655f6317889e17fedfbb809dbe77477/gxformat2-0.22.0.tar.gz | source | sdist | null | false | 0599d3a2b02ead92027d06bdee48a1b6 | 16a30fe912c406d15948aaaedfe386c8d07c43ab1d2ddae1b1c39cc57a6fe6fc | 974d125f2ebe54662c47b3205bebcad94655f6317889e17fedfbb809dbe77477 | null | [
"LICENSE"
] | 3,051 |
2.4 | aframexr | 0.7.1 | Python library to visualize data in Virtual and Extended Reality using Aframe components. | GitHub Pages URL: https://davidlab20.github.io/TFG/
| text/markdown | David Díaz | null | null | null | null | null | [] | [] | null | null | >=3.12 | [] | [] | [] | [
"IPython>=9.8.0",
"polars>=1.20.0",
"pandas>=2.3.0; extra == \"pandas\""
] | [] | [] | [] | [
"Documentation, https://davidlab20.github.io/TFG/",
"Source, https://github.com/davidlab20/TFG/"
] | uv/0.10.4 {"installer":{"name":"uv","version":"0.10.4","subcommand":["publish"]},"python":null,"implementation":{"name":null,"version":null},"distro":{"name":"Ubuntu","version":"24.04","id":"noble","libc":null},"system":{"name":null,"release":null},"cpu":null,"openssl_version":null,"setuptools_version":null,"rustc_version":null,"ci":null} | 2026-02-20T10:54:17.009990 | aframexr-0.7.1.tar.gz | 38,471 | c8/90/d210e9fdba1044788a7d24d11cf81e4579d2a0bfcf1d47ca487288d527ee/aframexr-0.7.1.tar.gz | source | sdist | null | false | 8c368b574635dd75527dae715a99963a | c7718dc24351125d825495327f2a889235fa8c2d7cd6bb1a2fb4f7cd6fd5acb0 | c890d210e9fdba1044788a7d24d11cf81e4579d2a0bfcf1d47ca487288d527ee | null | [
"LICENSE"
] | 244 |
2.1 | FoBiS.py | 3.2.1 | a Fortran Building System for poor men | FoBiS.py, a Fortran Building System for poor men, is a KISS tool for automatically building modern Fortran projects; it is able to automatically resolve the inter-module dependency hierarchy.
| null | Stefano Zaghi | stefano.zaghi@gmail.com | null | null | null | null | [
"Development Status :: 5 - Production/Stable",
"License :: OSI Approved :: GNU General Public License v3 (GPLv3)",
"Environment :: Console",
"Intended Audience :: End Users/Desktop",
"Programming Language :: Python",
"Programming Language :: Python :: 2",
"Programming Language :: Python :: 2.7",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.8",
"Topic :: Text Processing"
] | [] | https://github.com/szaghi/FoBiS | null | null | [] | [] | [] | [] | [] | [] | [] | [] | twine/6.1.0 CPython/3.12.3 | 2026-02-20T10:54:05.234622 | fobis_py-3.2.1.tar.gz | 52,650 | e2/c6/d907008cb2994b1fa20324028b5fedafb440615024631a6803231f7768df/fobis_py-3.2.1.tar.gz | source | sdist | null | false | 3d99daa02e3878c8f25eba452f62d65f | 123bb026f8cba20ed81db5355cc39be1d64408d646e5aeaf4706be4b021a8669 | e2c6d907008cb2994b1fa20324028b5fedafb440615024631a6803231f7768df | null | [] | 0 |
2.4 | arize-ax-cli | 0.1.3rc0 | Official Arize CLI tool for managing datasets, experiments, and more | <p align="center">
<a href="https://arize.com/ax">
<img src="https://storage.googleapis.com/arize-assets/arize-logo-white.jpg" width="600" />
</a>
<br/>
<a target="_blank" href="https://pypi.org/project/arize-ax-cli/">
<img src="https://img.shields.io/pypi/v/arize-ax-cli?color=blue">
</a>
<a target="_blank" href="https://pypi.org/project/arize-ax-cli/">
<img src="https://img.shields.io/pypi/pyversions/arize-ax-cli">
</a>
<a target="_blank" href="https://arize-ai.slack.com/join/shared_invite/zt-2w57bhem8-hq24MB6u7yE_ZF_ilOYSBw#/shared-invite/email">
<img src="https://img.shields.io/badge/slack-@arize-blue.svg?logo=slack">
</a>
</p>
---
# Arize AX CLI <!-- omit in toc -->
- [Features](#features)
- [Installation](#installation)
- [Using pip](#using-pip)
- [From source](#from-source)
- [Verify Installation](#verify-installation)
- [Quick Start](#quick-start)
- [1. Initialize Configuration](#1-initialize-configuration)
- [2. Verify Configuration](#2-verify-configuration)
- [3. Start Using the CLI](#3-start-using-the-cli)
- [Configuration](#configuration)
- [Configuration Commands](#configuration-commands)
- [Configuration Modes](#configuration-modes)
- [Simple Configuration (Recommended)](#simple-configuration-recommended)
- [Advanced Configuration](#advanced-configuration)
- [Configuration File Location](#configuration-file-location)
- [Configuration Reference](#configuration-reference)
- [All Available Sections](#all-available-sections)
- [Using Environment Variables](#using-environment-variables)
- [1. Auto-Detection During Setup](#1-auto-detection-during-setup)
- [2. Manual Environment Variable References](#2-manual-environment-variable-references)
- [Viewing Expanded Values](#viewing-expanded-values)
- [Multiple Profiles](#multiple-profiles)
- [Shell Autocompletion](#shell-autocompletion)
- [Quick Install (Recommended)](#quick-install-recommended)
- [Verify Installation](#verify-installation-1)
- [Manual Installation (Alternative)](#manual-installation-alternative)
- [Supported Shells](#supported-shells)
- [Commands](#commands)
- [Datasets](#datasets)
- [Projects](#projects)
- [Spans](#spans)
- [Traces](#traces)
- [Cache](#cache)
- [Global Options](#global-options)
- [Usage Examples](#usage-examples)
- [Creating a Dataset from a CSV File](#creating-a-dataset-from-a-csv-file)
- [Exporting Dataset List to JSON](#exporting-dataset-list-to-json)
- [Exporting Dataset Examples to Parquet](#exporting-dataset-examples-to-parquet)
- [Using a Different Profile for a Command](#using-a-different-profile-for-a-command)
- [Pagination](#pagination)
- [Working with Multiple Environments](#working-with-multiple-environments)
- [Advanced Topics](#advanced-topics)
- [Output Formats](#output-formats)
- [Programmatic Usage](#programmatic-usage)
- [Environment Variables](#environment-variables)
- [Debugging](#debugging)
- [Troubleshooting](#troubleshooting)
- [Configuration Issues](#configuration-issues)
- [Connection Issues](#connection-issues)
- [Shell Completion Not Working](#shell-completion-not-working)
- [Getting Help](#getting-help)
- [Command-specific Help](#command-specific-help)
- [Support](#support)
- [Contributing](#contributing)
- [License](#license)
- [Changelog](#changelog)
Official command-line interface for [Arize AI](https://arize.com) - streamline your MLOps workflows with datasets, experiments, projects, and more.
[](https://badge.fury.io/py/arize-ax-cli)
[](LICENSE)
[](https://www.python.org/downloads/)
## Features
- **Dataset Management**: Create, list, update, and delete datasets
- **Project Management**: Organize your ML projects
- **Spans & Traces**: Query and filter LLM spans and traces
- **Multiple Profiles**: Switch between different Arize environments
- **Flexible Output**: Export to JSON, CSV, Parquet, or display as tables
- **Shell Completion**: Tab completion for bash, zsh, and fish
- **Rich CLI Experience**: Beautiful terminal output with progress indicators
## Installation
### Using pip
```bash
pip install arize-ax-cli
```
### From source
```bash
git clone https://github.com/Arize-ai/arize-ax-cli.git
cd arize-ax-cli
pip install -e .
```
### Verify Installation
```bash
ax --version
```
## Quick Start
### 1. Initialize Configuration
The first time you use the CLI, you'll need to create a _configuration profile_:
```bash
ax profiles create
```
This interactive setup will:
- Detect existing `ARIZE_*` environment variables and offer to use them
- Guide you through credential setup if no environment variables are found
- Create a configuration profile (default or named)
- Save your preferences for output format, caching, and more
**Example output:**
```
_ _ _ __ __
/ \ _ __(_)_______ / \ \ \/ /
/ _ \ | '__| |_ / _ \ / _ \ \ /
/ ___ \| | | |/ / __/ / ___ \ / \
/_/ \_\_| |_/___\___| /_/ \_\_/\_\
AI Observability Platform
Welcome to Arize AX CLI!
No configuration found. Let's set it up!
Environment Variable Detection
✓ Detected ARIZE_API_KEY = ak_***************xyz
Create profile from detected environment variables? [Y/n]: y
Configuration saved to profile 'default'
You're ready to go! Try: ax datasets list
```
### 2. Verify Configuration
Check your configuration:
```bash
ax profiles show
```
### 3. Start Using the CLI
List your datasets:
```bash
ax datasets list
```
List your projects:
```bash
ax projects list
```
List spans in a project:
```bash
ax spans list <project-id>
```
List traces in a project:
```bash
ax traces list <project-id>
```
## Configuration
The Arize CLI uses a flexible configuration system that supports multiple profiles, environment variables, and two setup modes.
### Configuration Commands
| Command | Description |
| ------------------------------ | ------------------------------------------------ |
| `ax profiles create` | Create a new configuration profile interactively |
| `ax profiles list` | List all available profiles |
| `ax profiles show` | Display the current profile's configuration |
| `ax profiles use <profile>` | Switch to a different profile |
| `ax profiles delete <profile>` | Delete a configuration profile |
### Configuration Modes
When you run `ax profiles create`, you'll be prompted to choose between two configuration modes:
#### Simple Configuration (Recommended)
**Best for:** Most users, cloud deployments, standard Arize usage
The simple setup only asks for the essentials:
- **API Key**: Your Arize API key
- **Region**: US, EU, or leave unset (auto-detect)
- **Output Format**: table, json, csv, or parquet
**Example:**
```
Choose configuration mode:
> Simple (recommended)
Advanced
API Key: Insert value
API Key (e.g., ak-123...): [hidden input]
Region:
> (leave empty for unset)
US
EU
Use environment variable
Default output format:
> table
json
csv
parquet
```
**Generated configuration:**
```toml
[profile]
name = "default"
[auth]
api_key = "ak_your_api_key_here"
[routing]
region = "US"
[output]
format = "table"
```
#### Advanced Configuration
**Best for:** On-premise deployments, Private Connect, custom routing, performance tuning
The advanced setup provides full control over:
1. **API Key**: Your Arize credentials
2. **Routing**: Choose from multiple strategies:
- No override (use defaults)
- Region-based routing (US, EU)
- Single endpoint (on-premise deployments)
- Base domain (Private Connect)
- Custom endpoints & ports (granular control)
3. **Transport**: Performance tuning:
- Stream max workers
- Stream max queue bound
- PyArrow max chunksize
- Max HTTP payload size
4. **Security**: TLS certificate verification
5. **Output Format**: Default display format
**Example routing options:**
```
What type of override should we setup?
0 - No override (use defaults)
1 - Region (for region-based routing)
2 - Single endpoint (typical for on-prem deployments)
> 3 - Base Domain (for Private Connect)
4 - Custom endpoints & ports
```
**Generated configuration (example with Private Connect):**
```toml
[profile]
name = "production"
[auth]
api_key = "${ARIZE_API_KEY}"
[routing]
base_domain = "arize-private.yourcompany.com"
[transport]
stream_max_workers = 8
stream_max_queue_bound = 5000
pyarrow_max_chunksize = 10000
max_http_payload_size_mb = 8
[security]
request_verify = true
[storage]
directory = "~/.arize"
cache_enabled = true
[output]
format = "json"
```
### Configuration File Location
Configuration files are stored at:
- **Linux/macOS**: `~/.arize/profiles/<profile>.toml`
- **Windows**: `%USERPROFILE%\.arize\profiles\<profile>.toml`
### Configuration Reference
#### All Available Sections
**Authentication** (required)
```toml
[auth]
api_key = "ak_your_api_key_here"
# Or use environment variable reference:
api_key = "${ARIZE_API_KEY}"
```
**Routing** (choose one strategy)
```toml
[routing]
# Option 1: Region-based (recommended for cloud)
region = "US" # or "EU"
# Option 2: Single endpoint (on-premise)
single_host = "arize.yourcompany.com"
single_port = "443"
# Option 3: Base domain (Private Connect)
base_domain = "arize-private.yourcompany.com"
# Option 4: Custom endpoints (advanced)
api_host = "api.arize.com"
api_scheme = "https"
otlp_host = "otlp.arize.com"
otlp_scheme = "https"
flight_host = "flight.arize.com"
flight_port = "443"
flight_scheme = "grpc+tls"
```
**Transport** (optional, advanced only)
```toml
[transport]
stream_max_workers = 8
stream_max_queue_bound = 5000
pyarrow_max_chunksize = 10000
max_http_payload_size_mb = 8
```
**Security** (optional, advanced only)
```toml
[security]
request_verify = true # Set to false to disable SSL verification (not recommended)
```
**Storage** (optional)
```toml
[storage]
directory = "~/.arize"
cache_enabled = true
```
**Output** (optional)
```toml
[output]
format = "table" # Options: table, json, csv, parquet
```
### Using Environment Variables
The CLI can detect and use environment variables in two ways:
#### 1. Auto-Detection During Setup
When you run `ax profiles create`, the CLI automatically detects existing `ARIZE_*` environment variables and offers to use them:
```bash
ax profiles create
```
```
Environment Variable Detection
✓ Detected ARIZE_API_KEY = ak_***************xyz
✓ Detected ARIZE_REGION = US
Create profiles from detected environment variables? [Y/n]: y
```
This will create a configuration that references the environment variables:
```toml
[auth]
api_key = "${ARIZE_API_KEY}"
[routing]
region = "${ARIZE_REGION}"
```
#### 2. Manual Environment Variable References
During both Simple and Advanced setup, you can choose "Use environment variable" for any field to reference an environment variable:
```
API Key:
Insert value
> Use environment variable
Environment variable name for API Key: ARIZE_API_KEY
```
#### Viewing Expanded Values
To see the actual values (with environment variables expanded):
```bash
ax profiles show --expand
```
Without `--expand`, you'll see the variable references like `${ARIZE_API_KEY}`.
### Multiple Profiles
Create different profiles for different environments:
```bash
# Create a production profile
ax profiles create
# Enter profile name: production
# Create a staging profile
ax profiles create
# Enter profile name: staging
# List all profiles
ax profiles list
# Switch profiles
ax profiles use production
ax profiles use staging
# Use a specific profile for a single command
ax datasets list --profile production
```
## Shell Autocompletion
Enable tab completion for your shell to autocomplete commands, options, and arguments.
### Quick Install (Recommended)
The CLI includes a built-in installer that automatically configures completion for your shell:
```bash
ax --install-completion
```
This will:
- Detect your current shell (bash, zsh, or fish)
- Install the appropriate completion script
- Show you instructions to activate it
After running the command, restart your shell or open a new terminal window for the changes to take effect.
### Verify Installation
Once installed, test tab completion:
```bash
ax <TAB> # Shows available commands (cache, datasets, profiles, projects, spans, traces)
ax datasets <TAB> # Shows dataset subcommands (list, get, create, delete)
ax datasets list --<TAB> # Shows available options
```
### Manual Installation (Alternative)
If you prefer to see or customize the completion script before installing:
```bash
# View the completion script for your shell
ax --show-completion
# Save it to a file and source it manually
ax --show-completion >> ~/.bashrc # For bash
ax --show-completion >> ~/.zshrc # For zsh
```
### Supported Shells
- **Bash** (Linux, macOS, Windows Git Bash)
- **Zsh** (macOS default, Oh My Zsh)
- **Fish** (Linux, macOS)
- **PowerShell** (Windows)
## Commands
### Datasets
Manage your ML datasets:
```bash
# List datasets
ax datasets list --space-id <space-id> [--limit 15] [--cursor <cursor>]
# Get a specific dataset
ax datasets get <dataset-id>
# Create a new dataset
ax datasets create --name "My Dataset" --space-id <space-id> --file data.csv
# List examples from a dataset
ax datasets list_examples <dataset-id> [--version-id <version-id>] [--limit 30]
# Delete a dataset
ax datasets delete <dataset-id> [--force]
```
**Supported data file formats:**
- CSV (`.csv`)
- JSON (`.json`)
- JSON Lines (`.jsonl`)
- Parquet (`.parquet`)
### Projects
Organize your ML projects:
```bash
# List projects
ax projects list --space-id <space-id> [--limit 15] [--cursor <cursor>]
# Get a specific project
ax projects get <project-id>
# Create a new project
ax projects create --name "My Project" --space-id <space-id>
# Delete a project
ax projects delete <project-id> [--force]
```
### Spans
Query LLM spans in a project:
```bash
ax spans list <project-id> [--start-time <iso8601>] [--end-time <iso8601>] \
[--filter "<expr>"] [--limit 15] [--cursor <cursor>]
```
**Filter examples:**
```bash
ax spans list <project-id> --filter "status_code = 'ERROR'"
ax spans list <project-id> --filter "latency_ms > 1000"
ax spans list <project-id> --start-time 2024-01-01T00:00:00Z --end-time 2024-01-02T00:00:00Z
```
### Traces
Query root-level traces (spans with no parent) in a project. Automatically applies
a `parent_id = null` filter; any additional `--filter` is ANDed to it:
```bash
ax traces list <project-id> [--start-time <iso8601>] [--end-time <iso8601>] \
[--filter "<expr>"] [--limit 15] [--cursor <cursor>]
```
**Filter examples:**
```bash
ax traces list <project-id> --filter "status_code = 'ERROR'"
ax traces list <project-id> --start-time 2024-01-01T00:00:00Z
```
### Cache
Manage the local cache. The CLI caches downloaded resource data (e.g., dataset examples) locally as Parquet files to avoid redundant API calls. When you fetch a dataset's examples, the results are stored on disk so subsequent requests for the same version load instantly. The cache is automatically invalidated when a resource's `updated_at` timestamp changes, so you always get fresh data when something changes on the server.
Caching is enabled by default and can be toggled in your profile configuration:
```toml
[storage]
cache_enabled = true
```
```bash
# Clear the cache
ax cache clear
```
### Global Options
Available for all commands:
- `--profile, -p <name>`: Use a specific configuration profile
- `--output, -o <format>`: Set output format (`table`, `json`, `csv`, `parquet`, or a file path)
- `--help, -h`: Show help message
> **Note:** `--verbose, -v` is available on each individual subcommand (e.g., `ax datasets list --verbose`) rather than as a top-level flag.
## Usage Examples
### Creating a Dataset from a CSV File
```bash
ax datasets create \
--name "Customer Churn Dataset" \
--space-id sp_abc123 \
--file ./data/churn.csv
```
### Exporting Dataset List to JSON
```bash
ax datasets list --space-id sp_abc123 --output json > datasets.json
```
### Exporting Dataset Examples to Parquet
```bash
ax datasets list_examples ds_xyz789 --output examples.parquet
```
### Using a Different Profile for a Command
```bash
ax datasets list --space-id sp_abc123 --profile production
```
### Pagination
List more datasets using pagination:
```bash
# First page
ax datasets list --space-id sp_abc123 --limit 20
# Next page (use cursor from previous response)
ax datasets list --space-id sp_abc123 --limit 20 --cursor <cursor-value>
```
### Working with Multiple Environments
```bash
# Setup profiles for different environments
ax profiles create # Create "production" profile
ax profiles create # Create "staging" profile
# Switch contexts
ax profiles use production
ax datasets list --space-id sp_prod123
ax profiles use staging
ax datasets list --space-id sp_stage456
```
### Filtering Spans by Status
```bash
ax spans list <project-id> --filter "status_code = 'ERROR'" --output json
```
### Listing Traces in a Time Window
```bash
ax traces list <project-id> \
--start-time 2024-01-01T00:00:00Z \
--end-time 2024-01-02T00:00:00Z
```
## Advanced Topics
### Output Formats
The CLI supports multiple output formats:
1. **Table** (default): Human-readable table format
2. **JSON**: Machine-readable JSON
3. **CSV**: Comma-separated values
4. **Parquet**: Apache Parquet columnar format
Set default format in profiles:
```bash
ax profiles create # Select output format during setup
```
Or override per command:
```bash
ax datasets list --output json
ax datasets list --output datasets.csv
ax datasets list --output datasets.parquet
```
### Programmatic Usage
Integrate with scripts:
```bash
#!/bin/bash
# Export datasets to JSON
DATASETS=$(ax datasets list --space-id sp_abc123 --output json)
# Process with jq
echo "$DATASETS" | jq '.data[] | select(.name | contains("test"))'
# Export to file
ax datasets list_examples ds_xyz789 --output data.parquet
```
### Environment Variables
The CLI respects these environment variables:
- `ARIZE_API_KEY`: Your Arize API key
- `ARIZE_REGION`: Region (US, EU, etc.)
- Any other `ARIZE_*` variables will be detected during `ax profiles create`
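A typical shell setup exports these before running the CLI (the values below are placeholders):
```bash
# Placeholder values; use your real key and region
export ARIZE_API_KEY="your-api-key"
export ARIZE_REGION="US"
# Any ARIZE_* variable set here is picked up the next time you run `ax profiles create`
```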
### Debugging
Enable verbose mode to see detailed SDK logs:
```bash
ax datasets list --space-id sp_abc123 --verbose
```
## Troubleshooting
### Configuration Issues
**Problem**: `profiles file not found`
**Solution**: Run `ax profiles create` to create a configuration profile.
---
**Problem**: `Invalid API key`
**Solution**: Verify your API key:
1. Check your configuration: `ax profiles show`
2. Regenerate your API key from the Arize UI
3. Update your profiles: `ax profiles create` (overwrite existing)
---
### Connection Issues
**Problem**: `Connection refused` or `SSL errors`
**Solution**:
1. Check your routing configuration: `ax profiles show`
2. Verify network connectivity
3. For on-premise installations, ensure `single_host` is configured correctly
4. For SSL issues, check `security.request_verify` setting (use with caution)
---
### Shell Completion Not Working
**Problem**: Tab completion doesn't work
**Solution**:
1. Verify completion is installed: Run the installation command for your shell
2. Reload your shell or open a new terminal
3. Ensure `ax` is in your PATH: `which ax`
---
## Getting Help
### Command-specific Help
Every command has detailed help:
```bash
ax --help
ax datasets --help
ax datasets create --help
ax profiles --help
```
### Support
- **Documentation**: [https://docs.arize.com/cli](https://docs.arize.com/cli)
- **Bug Reports**: [GitHub Issues](https://github.com/Arize-ai/arize-ax-cli/issues)
- **Community**: [Arize Community Slack](https://arize-ai.slack.com)
- **Email**: [support@arize.com](mailto:support@arize.com)
## Contributing
We welcome contributions!
- **For developers**: See [DEVELOPMENT.md](DEVELOPMENT.md) for architecture, code structure, and development guide
- **For contributors**: See [CONTRIBUTING.md](CONTRIBUTING.md) for contribution guidelines (coming soon)
## License
This project is licensed under the Apache License 2.0 - see the [LICENSE](LICENSE) file for details.
## Changelog
See [CHANGELOG.md](CHANGELOG.md) for release notes and version history.
---
**Built with ❤️ by [Arize AI](https://arize.com)**
| text/markdown | null | Arize AI <support@arize.com> | null | null | Apache-2.0 | arize, cli, llm, machine-learning, mlops, observability | [
"Development Status :: 4 - Beta",
"Intended Audience :: Developers",
"License :: OSI Approved :: Apache Software License",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: 3.13",
"Programming Language :: Python :: 3.14",
"Topic :: Software Development :: Libraries :: Python Modules"
] | [] | null | null | >=3.11 | [] | [] | [] | [
"arize<9,>=8.0.0b0",
"pydantic<3,>=2.0.0",
"questionary<3,>=2.0.0",
"rich<14,>=13.0.0",
"shellingham<2,>=1.5.0",
"tomli-w<2,>=1.0.0",
"typer<1,>=0.12.0",
"mypy==1.19.1; extra == \"dev\"",
"pandas-stubs>=2.2.0; extra == \"dev\"",
"pytest-cov==6.0.0; extra == \"dev\"",
"pytest==8.4.2; extra == \"dev\"",
"ruff==0.14.9; extra == \"dev\"",
"taskipy<2,>=1.14.1; extra == \"dev\""
] | [] | [] | [] | [
"Homepage, https://arize.com",
"Documentation, https://docs.arize.com/cli",
"Repository, https://github.com/Arize-ai/ax-cli",
"Bug Tracker, https://github.com/Arize-ai/ax-cli/issues"
] | twine/5.0.0 CPython/3.12.12 | 2026-02-20T10:53:06.716468 | arize_ax_cli-0.1.3rc0.tar.gz | 38,315 | 9e/ca/db423bcbcd0c59ba0216276bec3463732755ba8bd33ce7c26e545a82bfd5/arize_ax_cli-0.1.3rc0.tar.gz | source | sdist | null | false | 0a234ce78dd6c6a1abbfd8fbbcaa8893 | a98308810abebdd5482b1228c0166df4881cfa2f1360d2a4c2cdef8d729a2777 | 9ecadb423bcbcd0c59ba0216276bec3463732755ba8bd33ce7c26e545a82bfd5 | null | [] | 200 |
2.4 | devdox-sonar | 0.0.1 | A CLI tool to analyze SonarCloud issues and attempt LLM-powered fixes | # DevDox AI Sonar
[](https://www.python.org/downloads/)
[](LICENSE)
[](https://pypi.org/project/devdox-sonar/)
[](https://github.com/montymobile1/devdox-ai-sonar/actions/workflows/build.yml)
DevDox AI Sonar is a command-line tool that reads the analysis reports SonarCloud has already produced for your project — every bug, code smell, and security vulnerability it found — and sends each issue to a Large Language Model along with the relevant source code and context. The LLM generates a structured fix with code blocks, line numbers, and a confidence score. You review it, apply it if it looks good, and a markdown changelog documents everything.
The CLI is built on [Click](https://github.com/pallets/click) for command handling, [Questionary](https://github.com/tmbo/questionary) for interactive prompts, and [Rich](https://github.com/Textualize/rich) for terminal formatting. Issue data and fix suggestions are modeled with [Pydantic](https://github.com/pydantic/pydantic) for validation. LLM prompts are assembled from [Jinja2](https://github.com/pallets/jinja) templates, making them easy to inspect and modify. All file I/O during fix application is async via [aiofiles](https://github.com/Tinche/aiofiles).
**PyPI:** [pypi.org/project/devdox-sonar](https://pypi.org/project/devdox-sonar/)
**Source:** [github.com/montymobile1/devdox-ai-sonar](https://github.com/montymobile1/devdox-ai-sonar)
---
## Table of Contents
- [What is DevDox AI Sonar?](#what-is-devdox-ai-sonar)
- [How It Works](#how-it-works)
- [Prerequisites](#prerequisites)
- [Installation](#installation)
- [Getting Started](#getting-started)
- [First-Time Setup](#first-time-setup)
- [Subsequent Runs](#subsequent-runs)
- [Configuration](#configuration)
- [Configuration Files](#configuration-files)
- [LLM Providers](#llm-providers)
- [Using the CLI](#using-the-cli)
- [Interactive Mode](#interactive-mode)
- [Direct Mode](#direct-mode)
- [CLI Options](#cli-options)
- [The fix_issues Pipeline](#the-fix_issues-pipeline)
- [Understanding the Output](#understanding-the-output)
- [Workflow Recipes](#workflow-recipes)
- [Advanced Topics](#advanced-topics)
- [Rule Exclusions](#rule-exclusions)
- [Supported Languages](#supported-languages)
- [Troubleshooting](#troubleshooting)
- [License](#license)
- [Authors](#authors)
- [Acknowledgments](#acknowledgments)
---
## What is DevDox AI Sonar?
**SonarCloud** is a service that scans your code on every push and produces an analysis report: bugs that will crash at runtime, security holes an attacker could exploit, and code smells that make your codebase harder to maintain. It tells you *what* is wrong. It does not tell you how to fix it.
For a project with hundreds of open issues, fixing them manually means reading each rule, understanding the context, writing a fix, and testing it. Most teams never get to it. The issues pile up.
**DevDox AI Sonar picks up where SonarCloud leaves off.** It requires that SonarCloud has already scanned your project and produced an analysis report. The tool then authenticates with SonarCloud, reads that report, fetches the flagged issues, and for each one, extracts the relevant source code with surrounding context and sends it to an LLM. The LLM returns a structured fix — actual code blocks with line numbers, import changes, helper functions, an explanation, and a confidence score. You review the fix, decide whether to apply it, and a markdown changelog records every change for audit.
You install it from PyPI and use it as a terminal command:
```bash
pip install devdox_sonar
devdox_sonar
```
<details>
<summary><strong>Glossary</strong></summary>
| Term | Meaning |
|------|---------|
| **SonarCloud** | Cloud service that scans your code and produces analysis reports. Free for open-source projects. |
| **Analysis Report** | The output SonarCloud generates after scanning your code. Contains all detected issues. DevDox AI Sonar reads this report. |
| **Issue** | A single problem SonarCloud found. Has a type, severity, rule, file, and line number. |
| **Rule** | The coding standard an issue violates. Example: `python:S1066` = "mergeable if statements should be combined." Each rule has a unique ID in the format `language:SXXXX`. |
| **Bug** | A logic error that will produce wrong results or crash. |
| **Code Smell** | Not a bug, but makes code harder to maintain — excessive complexity, duplication, dead code. |
| **Vulnerability** | A security issue — SQL injection, XSS, hardcoded credentials. |
| **Security Hotspot** | Code that *might* be a security issue and needs manual review. |
| **Severity** | How bad it is: **Blocker** > **Critical** > **Major** > **Minor** > **Info**. |
| **LLM Provider** | The AI service generating fixes. You bring your own API key. Supported: OpenAI, Google Gemini, TogetherAI, OpenRouter. |
| **Confidence Score** | 0.0 to 1.0 rating from the LLM indicating how certain it is about the fix. |
| **Dry Run** | Runs the full pipeline but skips all file writes. Safe to run anytime. |
</details>
---
## How It Works
SonarCloud must have already scanned your project before DevDox AI Sonar can do anything. The tool reads SonarCloud's existing analysis report — it does not perform its own code analysis.
```mermaid
flowchart LR
A["SonarCloud\n(already scanned)"]
B["Fetch Issues\nfrom report"]
C["Clone Repo\nto /tmp"]
D["Extract Code\n+ Context"]
E["Build Prompt\n(Jinja2)"]
F["Call LLM"]
G["Validate Fix"]
H{"Preview"}
I["Apply +\nChangelog"]
J["Skip"]
A -->|"analysis\nreport"| B --> C --> D --> E --> F --> G --> H
H -->|"apply = 1"| I
H -->|"apply = 0"| J
style A fill:#4a90d9,stroke:#2c6faa,color:#fff
style B fill:#7b68ee,stroke:#5b48ce,color:#fff
style C fill:#7b68ee,stroke:#5b48ce,color:#fff
style D fill:#7b68ee,stroke:#5b48ce,color:#fff
style E fill:#7b68ee,stroke:#5b48ce,color:#fff
style F fill:#f5a623,stroke:#d4891c,color:#fff
style G fill:#e67e22,stroke:#c96e1c,color:#fff
style H fill:#e8e8e8,stroke:#999,color:#333
style I fill:#50c878,stroke:#3da85e,color:#fff
style J fill:#ccc,stroke:#999,color:#666
```
1. **Fetch** — Authenticates with SonarCloud and reads the analysis report via the Issues API. Filters by the types and severities you configured. Regular issues (bugs, code smells) are grouped **by rule** so all issues of the same kind are batched together. Security issues are grouped **by file**.
2. **Clone and Extract** — Your repository is cloned to a temporary directory (your working tree is never touched). For each issue, the tool reads the flagged file, locates the exact lines from the report, and extracts code with surrounding context (`context_lines=10` by default). If the code has changed since SonarCloud last scanned, fuzzy matching is used to find the right location.
3. **Build Prompt** — The extracted code, rule description, severity, and issue metadata are assembled into a prompt using Jinja2 templates (located in `prompts/python/`).
4. **Call LLM** — The prompt is sent to your configured provider. The LLM returns structured JSON containing code blocks with line numbers, an explanation, and a confidence score.
5. **Validate** — When applying fixes (not preview-only), a **validator agent** — a second LLM call — reviews the fix for logic errors, security issues, and syntax problems. If it finds issues, it can correct them.
6. **Preview and Apply** — The terminal shows the file path, confidence score, and explanation. If `apply = 1`, the fix is written to disk. If `create_backup = 1`, the entire project directory is copied to `<project>_backup_YYYYMMDD_HHMMSS` in the parent directory first. A markdown changelog documents every change.
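To make step 2 concrete, the core of "line plus surrounding context" extraction is just a slice around the reported line number. The sketch below is illustrative only: the function name is made up, the real tool layers fuzzy matching on top when line numbers have drifted, and only the `context_lines=10` default is taken from the description above.
```python
# Illustrative sketch, not the tool's actual code.
from pathlib import Path

def extract_context(file_path: str, flagged_line: int, context_lines: int = 10) -> str:
    """Return the flagged line plus `context_lines` lines above and below it."""
    lines = Path(file_path).read_text(encoding="utf-8").splitlines()
    start = max(flagged_line - 1 - context_lines, 0)  # report line numbers are 1-based
    end = min(flagged_line + context_lines, len(lines))
    return "\n".join(lines[start:end])
```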
---
## Prerequisites
**Python 3.12 or higher**
```bash
python --version
```
**Git** — The tool clones your repo to a temp directory for code extraction.
```bash
git --version
```
**A SonarCloud project that has already been scanned.** DevDox AI Sonar does not scan your code — SonarCloud does. You need an existing SonarCloud project with at least one completed analysis. If you have not set up SonarCloud yet, it is free for open-source: [sonarcloud.io](https://sonarcloud.io/).
You will need these from your SonarCloud account:
- **API Token** — generate at [sonarcloud.io/account/security](https://sonarcloud.io/account/security)
- **Organization Key** — visible in your dashboard URL: `sonarcloud.io/organizations/<org-key>/projects`
- **Project Key** — visible on your project page: `sonarcloud.io/project/overview?id=<project-key>`
**An API key for at least one LLM provider:** [OpenAI](#openai), [Google Gemini](#google-gemini), [TogetherAI](#togetherai), or [OpenRouter](#openrouter).
---
## Installation
```bash
pip install devdox_sonar
```
Verify:
```bash
devdox_sonar --version
# devdox_sonar, version 0.0.1-beta
```
For contributors installing from source:
```bash
git clone https://github.com/montymobile1/devdox-ai-sonar.git
cd devdox-ai-sonar
pip install -e ".[dev]"
```
---
## Getting Started
### First-Time Setup
Run the tool with no arguments to start the setup wizard:
```bash
devdox_sonar
```
The wizard walks you through three steps.
**Step 1 — SonarCloud Connection**
| It asks for | Where to find it | Example |
|---|---|---|
| Token | [sonarcloud.io/account/security](https://sonarcloud.io/account/security) | `squ_abc123def456...` |
| Organization Key | Your SonarCloud dashboard URL | `my-company` |
| Project Key | Your project's SonarCloud page | `my-company_my-app` |
| Project Path | Absolute path to the code on your machine | `/home/user/projects/my-app` |
| Git URL | Repository clone URL | `https://github.com/my-org/my-app.git` |
Saved to `~/devdox/auth.json`.
**Step 2 — LLM Provider**
1. Pick a provider (OpenAI, Gemini, TogetherAI, or OpenRouter)
2. Paste your API key — it is validated against the provider's API immediately
3. Choose a model from the provider's available list
4. Optionally set it as the default provider
You can add multiple providers. Saved to `~/devdox/config.toml`.
**Step 3 — Analysis Parameters**
The wizard prompts you for each of the following. You can press Enter to skip any field and accept the current value.
| Parameter | Prompt | Possible values |
|---|---|---|
| **Max Fixes** | `Maximum fixes to generate (0-20)` | Any integer from 0 to 20 |
| **Issue Types** | `Issue types (comma-separated, or press Enter to skip)` | `BUG`, `VULNERABILITY`, `CODE_SMELL` — pass one or more, comma-separated |
| **Severities** | `Issue severities (comma-separated, or press Enter to skip)` | `BLOCKER`, `CRITICAL`, `MAJOR`, `MINOR`, `INFO` — pass one or more, comma-separated |
| **Apply** | `Apply fixes of SonarQube (press Enter to skip)` | `yes` (apply fixes to files) or `no` (preview only) |
| **Create Backup** | `Create backup before apply fixes (press Enter to skip)` | `yes` (copy project dir before modifying) or `no` |
| **Exclude Rules** | `Rules to be excluded (comma-separated, or press Enter to skip)` | Comma-separated rule IDs, e.g. `python:S7503,python:S3776`. See [Rule Exclusions](#rule-exclusions) for format and recommendations. |
After setup completes, you land in the interactive menu.
### Subsequent Runs
On subsequent runs, `devdox_sonar` detects your existing configuration and skips the setup wizard entirely. It loads your saved credentials and parameters from `~/devdox/auth.json` and `~/devdox/config.toml`, and goes straight to the interactive menu.
To reconfigure, either use the menu options (Add Provider, Update Provider, Change Parameters Configuration) or edit the config files directly.
---
## Configuration
### Configuration Files
All configuration lives in `~/devdox/`:
```
~/devdox/
├── auth.json # SonarCloud credentials (JSON)
└── config.toml # LLM providers + analysis parameters (TOML)
```
**auth.json**
```json
{
"SONAR_TOKEN": "squ_your_token",
"SONAR_ORG": "your-org-key",
"SONAR_PROJ": "your-project-key",
"PROJECT_PATH": "/home/user/projects/my-app",
"GIT_URL": "https://github.com/your-org/my-app.git"
}
```
**config.toml**
```toml
[llm]
default_provider = "openai"
default_model = "gpt-4o"
[[llm.providers]]
name = "openai"
api_key = "sk-your-key"
base_url = "https://api.openai.com/v1"
models = ["gpt-4o", "gpt-4-turbo"]
[configuration]
max_fixes = 5
types = "BUG,CODE_SMELL"
severities = "CRITICAL,MAJOR"
apply = 0
create_backup = 0
exclude_rules = ""
```
**Updating configuration**
Use the interactive menu (run `devdox_sonar` with no arguments) and select:
- **Add Provider** — add another LLM provider
- **Update Provider** — change an existing provider's API key or model
- **Change Parameters Configuration** — adjust types, severities, max fixes, apply, backup, and excluded rules
Or edit `~/devdox/auth.json` and `~/devdox/config.toml` directly.
---
### LLM Providers
You need at least one. The setup wizard lets you configure any of these:
#### OpenAI
- **Get a key:** [platform.openai.com](https://platform.openai.com) → API Keys → Create new secret key
- **Recommended models:** `gpt-4o`, `gpt-4-turbo`
#### Google Gemini
- **Get a key:** [ai.google.dev](https://ai.google.dev) → Google AI Studio → Get API Key
- **Recommended models:** `gemini-2.5-flash`, `gemini-pro`
- Has a **free tier** — useful for trying the tool without spending anything.
#### TogetherAI
- **Get a key:** [together.ai](https://www.together.ai) → Dashboard → Settings → API Keys
- **Recommended models:** `mixtral-8x7b`, `meta-llama/Llama-3-70b`
- Runs open-source models at lower cost.
#### OpenRouter
- **Get a key:** [openrouter.ai](https://openrouter.ai) → Dashboard → Keys → Create Key
- **Example models:** `anthropic/claude-sonnet-4`, `openai/gpt-4o`, `google/gemini-2.5-flash`
- One API key gives access to 400+ models. Model names use `provider/model-name` format.
---
## Using the CLI
DevDox AI Sonar provides two ways to run commands: an **interactive mode** with a menu, and a **direct mode** that skips the menu and runs a specific command immediately.
### Interactive Mode
```bash
devdox_sonar
```
On first run, the setup wizard runs (see [First-Time Setup](#first-time-setup)). On subsequent runs, your saved configuration is loaded and you go straight to the menu:
```
═══════════════════════════════════════════════
DevDox AI Sonar - Interactive Mode
═══════════════════════════════════════════════
? What would you like to do?
➕ Add Provider - Add provider or sonar configuration
✏️ Update Provider - Update provider or sonar configuration
🔧 Fix Issues - Generate and apply LLM-powered fixes
🔒 Fix Security Issues - Specialized security vulnerability fixes
📊 Analyze Project - Display SonarCloud analysis
🔍 Inspect Project - Analyze local directory structure
⚙️ Change Parameters Configuration
❌ Exit
```
Use arrow keys to navigate, Enter to select. Type `/` during any prompt to switch to a different command. Press Ctrl+C to exit.
### Direct Mode
If you already know which command you want, skip the interactive menu entirely using the `-c` flag:
```bash
devdox_sonar -c <command> [OPTIONS]
```
This runs the command immediately using your saved configuration from `~/devdox/`. You can override specific parameters with CLI options (see below).
Four commands are available in direct mode:
| Command | What it does |
|---|---|
| `fix_issues` | Reads SonarCloud's analysis report, fetches bugs and code smells, generates LLM fixes, and lets you preview or apply them. Issues are grouped **by rule**. |
| `fix_security_issues` | Same pipeline, but for security vulnerabilities only. Issues are grouped **by file**. Generates a separate `CHANGES_SECURITY_*.md` changelog. |
| `analyze` | Displays project metrics from SonarCloud (lines of code, coverage, bugs, vulnerabilities) and an issues table. No fixes are generated. |
| `inspect` | Analyzes your local directory: file counts by language, git status, SonarCloud configuration presence. Does not contact SonarCloud. |
The remaining menu options (Add Provider, Update Provider, Change Parameters Configuration) are only available through the interactive menu because they require interactive prompts.
**Examples:**
```bash
# Fix up to 5 issues, preview only (no files modified)
devdox_sonar -c fix_issues --max-fixes 5
# Fix issues and apply them to files
devdox_sonar -c fix_issues --apply 1 --max-fixes 3
# Run the full pipeline but skip all file writes
devdox_sonar -c fix_issues --dry-run
# Only critical bugs
devdox_sonar -c fix_issues --types BUG --severity CRITICAL,BLOCKER
# Show project metrics from SonarCloud
devdox_sonar -c analyze
```
### CLI Options
These options can be passed with any direct mode command to override your saved configuration:
```
-v, --verbose Enable debug logging
--types TEXT Comma-separated issue types: BUG, VULNERABILITY, CODE_SMELL
--severity TEXT Comma-separated severities: BLOCKER, CRITICAL, MAJOR, MINOR, INFO
--max-fixes INTEGER Number of issues to process (0-20)
--apply INTEGER 0 = preview only, 1 = apply fixes to files
--dry-run Run the full pipeline but skip all file writes
```
### The fix_issues Pipeline
```mermaid
flowchart TD
A["Load Config\nauth.json + config.toml"]
B["Clone Repo\nGit clone to /tmp"]
C["Fetch Issues\nfrom SonarCloud report\nFilter by type/severity"]
D["Group by Rule"]
subgraph LOOP ["For Each Rule Group"]
direction TB
E["Extract Code\nLocate lines + context"]
F["Select Handler"]
G["Generate Fix\nLLM or AST-based"]
H{"Preview\nFile, confidence,\nexplanation"}
I["Apply + Validate"]
J["Skip"]
K{"Continue to\nnext issue?"}
end
L["Write Changelog\nCHANGES_REGULAR_*.md"]
A --> B --> C --> D --> E
E --> F --> G --> H
H -->|"apply = 1"| I --> K
H -->|"apply = 0"| J --> K
K -->|Yes| E
K -->|No| L
style A fill:#4a90d9,stroke:#2c6faa,color:#fff
style B fill:#4a90d9,stroke:#2c6faa,color:#fff
style C fill:#4a90d9,stroke:#2c6faa,color:#fff
style D fill:#7b68ee,stroke:#5b48ce,color:#fff
style E fill:#9b59b6,stroke:#7d3c98,color:#fff
style F fill:#9b59b6,stroke:#7d3c98,color:#fff
style G fill:#f5a623,stroke:#d4891c,color:#fff
style H fill:#e8e8e8,stroke:#999,color:#333
style I fill:#50c878,stroke:#3da85e,color:#fff
style J fill:#ccc,stroke:#999,color:#666
style K fill:#e8e8e8,stroke:#999,color:#333
style L fill:#50c878,stroke:#3da85e,color:#fff
```
**Specialized rule handlers** — Most rules go through the LLM via `DefaultRuleHandler`. Two rules have dedicated handlers:
| Rule | Handler | Approach |
|---|---|---|
| `python:S7503` | `AsyncToSyncHandler` | Pure AST analysis. Detects async functions without `await`, removes the `async` keyword, and updates all call sites. No LLM call. |
| `python:S3776` | `CognitiveComplexityHandler` | Specialized refactoring prompt template optimized for breaking down complex functions. |
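As an illustration of the AST-based approach, the sketch below covers only the detection half of `python:S7503`, i.e. finding `async` functions that never `await`. The class and variable names are invented for this example and are not the package's API; the real `AsyncToSyncHandler` also removes the `async` keyword and rewrites call sites.
```python
# Illustrative names only; not the package's API.
import ast

class AsyncWithoutAwaitFinder(ast.NodeVisitor):
    """Collect names of async functions whose bodies contain no await / async for / async with."""

    def __init__(self) -> None:
        self.offenders: list[str] = []

    def visit_AsyncFunctionDef(self, node: ast.AsyncFunctionDef) -> None:
        uses_await = any(
            isinstance(child, (ast.Await, ast.AsyncFor, ast.AsyncWith))
            for child in ast.walk(node)
        )
        if not uses_await:
            self.offenders.append(node.name)
        self.generic_visit(node)

source = '''
async def load_settings(path):
    return open(path).read()
'''
finder = AsyncWithoutAwaitFinder()
finder.visit(ast.parse(source))
print(finder.offenders)  # ['load_settings']
```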
---
### Understanding the Output
**Terminal preview** — For each fix, the tool prints:
```
Fix Preview:
File: src/my_app/services/parser.py
Confidence: 0.92
Issues fixed: 1
Changes:
╭─ Explanation of changed ─────────────────────────────────────╮
│ Issue: python:S1066 - Collapsible if statements should be │
│ merged → Fix: Combined nested if statements into a single │
│ condition → Validation: Logic preserved, no side effects │
╰──────────────────────────────────────────────────────────────╯
```
When `apply = 1`, a results summary follows:
```
Results:
Attempted: 1
Successful: 1
Failed: 0
Success Rate: 100.0%
```
Between issues, you are prompted:
```
? Continue to next issue? (Y/n)
```
**Markdown changelog** — For every issue processed, a detailed entry is written to a changelog file in your project root. The terminal shows the summary; the changelog contains the full code diffs.
Regular issues generate `CHANGES_REGULAR_YYYYMMDDHHMMSS.md`. Security issues generate `CHANGES_SECURITY_YYYYMMDDHHMMSS.md`.
Each changelog entry includes:
````markdown
## Issue: `python:S1066`
**Severity:** MAJOR
**File:** `src/my_app/services/parser.py`
### Problem
Merge this if statement with the enclosing one.
### Explanation
Issue: python:S1066 - Collapsible if statements should be merged
→ Fix: Combined nested if statements into a single condition
→ Validation: Logic preserved, no side effects
### Impact Assessment
- **Risk Level:** Medium
- **Breaking Change:** No
- **Logic Preserved:** Yes
- **Testing Required:** Recommended
### Suggested Fix
#### `parse_config` (Lines 42-48)
**Original:**
```python
if config is not None:
if config.get("enabled"):
run_parser(config)
```
**Fixed:**
```python
if config is not None and config.get("enabled"):
run_parser(config)
```
### Review Checklist
- [ ] Code change preserves original logic
- [ ] No new bugs introduced
- [ ] Syntax validated
- [ ] Tests pass
- [ ] Ready for commit
````
**Confidence scores**
| Score | Guidance |
|---|---|
| 0.8 - 1.0 | Generally safe to apply. Review the changelog entry to confirm. |
| 0.6 - 0.8 | Read the changelog diff carefully. Test after applying. |
| Below 0.6 | Review manually or skip. |
**Backups** — When `create_backup = 1` and `apply = 1`, the tool copies your entire project directory to `<parent-dir>/<project-name>_backup_YYYYMMDD_HHMMSS/` before modifying any files.
---
### Workflow Recipes
```bash
# Preview fixes without modifying any files
devdox_sonar -c fix_issues --dry-run
# Fix up to 5 critical bugs, preview only
devdox_sonar -c fix_issues --types BUG --severity CRITICAL,BLOCKER --max-fixes 5
# Apply fixes to files
devdox_sonar -c fix_issues --apply 1 --max-fixes 10
# Security audit: review then apply
devdox_sonar -c analyze
devdox_sonar -c fix_security_issues --dry-run
devdox_sonar -c fix_security_issues --apply 1
# Code smell cleanup
devdox_sonar -c fix_issues --types CODE_SMELL --severity CRITICAL,MAJOR --max-fixes 10 --apply 1
# Debug logging
devdox_sonar --verbose -c fix_issues --max-fixes 3
```
---
## Advanced Topics
### Rule Exclusions
Some SonarCloud rules may not apply to your project. You can exclude specific rules so the tool skips them entirely.
**Format:** Comma-separated rule IDs in the format `language:SXXXX`. Example: `python:S7503,python:S3776,python:S107`.
**How to set them:**
Via the interactive menu: run `devdox_sonar`, select **Change Parameters Configuration**, and enter the rule IDs at the `Rules to be excluded` prompt.
Via `~/devdox/config.toml`:
```toml
[configuration]
exclude_rules = "python:S7503,python:S7493,python:S107"
```
**Commonly excluded rules:**
| Rule ID | What it flags | Why teams exclude it |
|---|---|---|
| `python:S7503` | Async functions that do not use `await` | Sometimes `async` is needed for interface compatibility even without `await` |
| `python:S7493` | Synchronous file I/O inside async functions | Intentional for small config files or startup code |
| `python:S107` | Functions with too many parameters | Common in FastAPI dependency injection |
| `python:S5852` | Regular expressions vulnerable to ReDoS | Safe patterns sometimes flagged incorrectly |
| `python:S3776` | Functions with high cognitive complexity | May prefer manual refactoring over LLM-generated rewrites |
**Finding rule IDs:** You can see rule IDs in the SonarCloud dashboard next to each issue, or in the changelog entries this tool generates.
---
### Supported Languages
**Issue fetching** works for all languages SonarCloud supports. The tool reads whatever SonarCloud has in its analysis report.
**Automated fixing** currently processes **Python files only** (`.py` extension, excluding files with a `test_` prefix). The prompt templates in `prompts/python/` are optimized for Python. The architecture (Jinja2 templates, language-agnostic models) is designed to support additional languages in the future.
---
## Troubleshooting
| Error | Cause | Fix |
|---|---|---|
| `401 Unauthorized` | SonarCloud token is invalid or expired | Generate a new token at [sonarcloud.io/account/security](https://sonarcloud.io/account/security) and update `~/devdox/auth.json` |
| `Invalid API key` | LLM provider rejected the key | Verify the key is correct and billing is enabled. Use **Update Provider** in the interactive menu. |
| `Configuration not found` | No config files exist yet | Run `devdox_sonar` to start the setup wizard |
| `File not found` | `PROJECT_PATH` does not match the repo structure | Ensure `PROJECT_PATH` in `~/devdox/auth.json` points to the repository root |
| No issues returned | SonarCloud has not scanned the project yet, or all issues are resolved | Verify your project has a completed analysis at [sonarcloud.io](https://sonarcloud.io/) |
<details>
<summary><strong>FAQ</strong></summary>
**Can I use this with self-hosted SonarQube?**
The underlying `SonarCloudAnalyzer` class accepts a `base_url` parameter. If you are building custom tooling, you could point it at your SonarQube instance. The CLI currently targets SonarCloud.
**What if the LLM generates a bad fix?**
Every fix includes a confidence score. The validator agent catches many issues. You can enable backups before applying. `--dry-run` lets you run the full pipeline without writing any files.
**Does it modify my working directory?**
The tool clones your repo to `/tmp` for code extraction. Applied fixes are written to the path specified in `PROJECT_PATH`. The clone step does not affect your local uncommitted changes.
**Does SonarCloud need to have scanned my project first?**
Yes. DevDox AI Sonar reads SonarCloud's existing analysis report. It does not perform code analysis itself. If SonarCloud has not scanned your project, there are no issues to fix.
**How much does it cost?**
DevDox AI Sonar is free and open-source (Apache 2.0). You pay only for LLM API calls to your chosen provider. Google Gemini offers a free tier.
</details>
---
## License
This project is licensed under the [Apache License 2.0](LICENSE). You are free to use, modify, and distribute this software, including in commercial and proprietary projects, provided you include the original license and notice. The license also provides an express grant of patent rights from contributors.
## Support
[github.com/montymobile1/devdox-ai-sonar/issues](https://github.com/montymobile1/devdox-ai-sonar/issues)
## Authors
Created and maintained by **Hayat Bourji** (hayat.bourgi@montyholding.com) at [Monty Mobile](https://github.com/montymobile1).
## Acknowledgments
Built with [Click](https://github.com/pallets/click), [Rich](https://github.com/Textualize/rich), [Questionary](https://github.com/tmbo/questionary), [Pydantic](https://github.com/pydantic/pydantic), [Jinja2](https://github.com/pallets/jinja), and [aiofiles](https://github.com/Tinche/aiofiles). Powered by [OpenAI](https://openai.com), [Google Gemini](https://ai.google.dev), [TogetherAI](https://together.ai), and [OpenRouter](https://openrouter.ai). Integrates with [SonarCloud](https://sonarcloud.io/).
| text/markdown | null | Hayat Bourji <hayat.bourgi@montyholding.com> | null | Hayat Bourji <hayat.bourgi@montyholding.com> | Apache-2.0 | sonarcloud, code-analysis, ai, llm, code-quality | [
"Development Status :: 3 - Alpha",
"Intended Audience :: Developers",
"License :: OSI Approved :: Apache Software License",
"Operating System :: OS Independent",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.12",
"Topic :: Software Development :: Testing",
"Topic :: Software Development :: Quality Assurance",
"Topic :: Internet :: WWW/HTTP :: HTTP Servers",
"Topic :: System :: Networking :: Monitoring"
] | [] | null | null | >=3.12 | [] | [] | [] | [
"click==8.1.7",
"rich-click==1.9.4",
"requests>=2.32",
"pydantic==2.9.0",
"pydantic_core==2.23.2",
"pydantic-settings==2.12.0",
"python-dotenv==1.0.0",
"openai==2.15.0",
"google-genai==1.60.0",
"sonar-tools==3.16.2",
"pycodestyle==2.14.0",
"autopep8==2.3.2",
"pathlib2==2.3.0",
"regex==2026.1.15",
"GitPython==3.1.46",
"together==2.1.1",
"Jinja2==3.1.6",
"types-requests==2.32.4.20250913",
"tomli==2.3.0",
"tomli-w==1.2.0",
"simple-term-menu==1.6.6",
"questionary==2.1.1",
"tomlkit==0.13.3",
"inquirer==3.4.1",
"aiofiles==25.1.0",
"langchain-core<1.0.0,>=0.3.78",
"pytest==8.4.2; extra == \"dev\"",
"black==25.11.0; extra == \"dev\"",
"isort==5.12.0; extra == \"dev\"",
"flake8==7.3.0; extra == \"dev\"",
"pytest-cov==7.0.0; extra == \"dev\"",
"pytest-asyncio==0.26.0; extra == \"dev\"",
"mypy==1.0.0; extra == \"dev\"",
"pre-commit==3.0.0; extra == \"dev\"",
"twine==4.0.0; extra == \"dev\"",
"build==0.10.0; extra == \"dev\"",
"ruff==0.14.7; extra == \"dev\"",
"types-aiofiles==25.1.0.20251011; extra == \"dev\""
] | [] | [] | [] | [
"Homepage, https://github.com/hayatbourgi/devdox-ai-sonar",
"Repository, https://github.com/hayatbourgi/devdox-ai-sonar",
"Issues, https://github.com/hayatbourgi/devdox-ai-sonar/issues"
] | twine/4.0.0 CPython/3.12.12 | 2026-02-20T10:53:05.375353 | devdox_sonar-0.0.1.tar.gz | 236,353 | a6/43/d8a21f3a37b57486f7e4db007bdcce4c45dd84a4b3d3cbe85f5625699705/devdox_sonar-0.0.1.tar.gz | source | sdist | null | false | 70454e3e7e00d688ea2a62abada13dd6 | 25ad278247bbeb256509535cebeb510f6672ac09c636b3519665abdb35f2cf0b | a643d8a21f3a37b57486f7e4db007bdcce4c45dd84a4b3d3cbe85f5625699705 | null | [] | 239 |
2.4 | invenio-stats | 6.1.0 | Invenio module for collecting statistics. | ..
This file is part of Invenio.
Copyright (C) 2017-2018 CERN.
Invenio is free software; you can redistribute it and/or modify it
under the terms of the MIT License; see LICENSE file for more details.
===============
Invenio-Stats
===============
.. image:: https://img.shields.io/github/license/inveniosoftware/invenio-stats.svg
:target: https://github.com/inveniosoftware/invenio-stats/blob/master/LICENSE
.. image:: https://github.com/inveniosoftware/invenio-stats/workflows/CI/badge.svg
:target: https://github.com/inveniosoftware/invenio-stats/actions?query=workflow%3ACI
.. image:: https://img.shields.io/coveralls/inveniosoftware/invenio-stats.svg
:target: https://coveralls.io/r/inveniosoftware/invenio-stats
.. image:: https://img.shields.io/pypi/v/invenio-stats.svg
:target: https://pypi.org/pypi/invenio-stats
Invenio module for collecting statistics.
This module provides the components for **statistical data processing and
querying**.
The most common statistics measure the occurrence of events in an Invenio
application, e.g. file downloads, record views and others. Invenio-stats
provides the tools to transform, register, compress and query those events.
However, statistics can also be fully customized and query the database directly.
The services it uses are:
- RabbitMQ for buffering incoming events.
- Elasticsearch or OpenSearch for aggregating and searching events.
Further documentation is available on: https://invenio-stats.readthedocs.io/
..
This file is part of Invenio.
Copyright (C) 2017-2025 CERN.
Copyright (C) 2024-2026 Graz University of Technology.
Invenio is free software; you can redistribute it and/or modify it
under the terms of the MIT License; see LICENSE file for more details.
Changes
=======
Version v6.1.0 (released 2026-01-29)
- feat(config): add STATS_EVENTS_UTC_DATETIME_ENABLED flag
Introduce STATS_EVENTS_UTC_DATETIME_ENABLED (default: False) to strip
tzinfo from event timestamps at build time. Set to True to opt-in to
timezone-aware UTC datetimes.
Version v6.0.0 (released 2026-01-29)
- chore(setup): bump dependencies
- chore(black): update formatting to >= 26.0
- fix(chore): DeprecationWarning stdlib
- fix: DeprecationWarning warn use warning
- tests: extend support to Python 3.14
- i18n:push translations
Version 5.1.1 (release 2025-06-09)
- tests: fix issues with CI
- translations: add untranslated strings and add translation workflow
Version 5.1.0 (release 2025-01-20)
- aggregations: add yearly interval
Version 5.0.0 (release 2024-12-10)
- tests: remove dependency to invenio-oauth2server
- setup: bump major dependencies
Version 4.2.1 (release 2024-11-30)
- setup: change to reusable workflows
- setup: pin dependencies
Version v4.2.0 (released 2024-08-27)
- processors: allow filtering out robots/machines
Version 4.1.0 (release 2024-08-14)
----------------------------------
- introduce a new config `STATS_REGISTER_INDEX_TEMPLATES` to be able to register
events and aggregations as index templates (ensure backwards compatibility)
Version 4.0.2 (release 2024-03-04)
----------------------------------
- aggregations: consider updated_timestamp field optional (ensure backwards compatibility)
Version 4.0.1 (release 2023-10-09)
----------------------------------
- aggregations: ensure events are aggregated only once
Version 4.0.0 (release 2023-10-03)
----------------------------------
- introduce new field ``updated_timestamp`` in the events and stats templates
and mappings
- improved calculation of aggregations skipping already aggregated events
- changed `refresh_interval` from 1m to 5s
- changed default events index name from daily to monthly
- moved BookmarkAPI to a new module
Version 3.1.0 (release 2023-04-20)
----------------------------------
- add extension method for building and caching queries
Version 3.0.0 (release 2023-03-01)
-------------------------------------
- Upgrade to ``invenio-search`` 2.x
- Drop support for Elasticsearch 2, 5, and 6
- Add support for OpenSearch 1 and 2
- Drop support for Python 2.7 and 3.6
- Remove function ``invenio_stats.utils:get_doctype``
- Fix ``validate_arguments`` for query classes
- Add ``build_event_emitter`` function for creating an ``EventEmitter`` but not registering it as a signal handler
- Add ``ext.get_event_emitter(name)`` function for caching built ``EventEmitter`` objects per name
- Replace elasticsearch-specific terminology
Version 2.0.0 (release 2023-02-23)
-------------------------------------
- add opensearch2 compatibility
Version 1.0.0a18 (release 2020-09-01)
-------------------------------------
- Fix isort arguments
- Filter pytest deprecation warnings
- Set default values for metrics instead of None, when no index found
Version 1.0.0a17 (release 2020-03-19)
-------------------------------------
- Removes Python 2.7 support.
- Centralizes Flask dependency via ``invenio-base``.
Version 1.0.0a16 (release 2020-02-24)
-------------------------------------
- bump celery dependency
- pin Werkzeug version
Version 1.0.0a15 (release 2019-11-27)
-------------------------------------
- Pin celery dependency
Version 1.0.0a14 (release 2019-11-27)
-------------------------------------
- Fix `get_bucket_size` method
Version 1.0.0a13 (release 2019-11-08)
-------------------------------------
- Bump invenio-queues
Version 1.0.0a12 (release 2019-11-08)
-------------------------------------
- Fixes templates for ElasticSearch 7
- Updates dependency of invenio-search
Version 1.0.0a11 (release 2019-10-02)
-------------------------------------
- Initial public release.
| null | CERN | info@invenio-software.org | null | null | MIT | invenio statistics | [
"Development Status :: 5 - Production/Stable"
] | [
"any"
] | https://github.com/inveniosoftware/invenio-stats | null | >=3.7 | [] | [] | [] | [
"counter-robots>=2018.6",
"invenio-base<3.0.0,>=2.0.0",
"invenio-cache<4.0.0,>=3.0.0",
"invenio-celery<3.0.0,>=2.0.0",
"invenio-queues>=1.0.0a2",
"maxminddb-geolite2>=2018.703",
"python-dateutil>=2.7.0",
"python-geoip>=1.2",
"invenio-i18n>=2.0.0",
"pytest-black>=0.6.0; extra == \"tests\"",
"invenio-accounts<8.0.0,>=7.0.0; extra == \"tests\"",
"invenio-app<4.0.0,>=3.0.0; extra == \"tests\"",
"invenio-db[postgresql]<3.0.0,>=2.2.0; extra == \"tests\"",
"invenio-files-rest<5.0.0,>=4.0.0; extra == \"tests\"",
"invenio-records<5.0.0,>=4.0.0; extra == \"tests\"",
"invenio-records-ui<4.0.0,>=3.0.0; extra == \"tests\"",
"pytest-invenio<5.0.0,>=4.0.0; extra == \"tests\"",
"Sphinx>=5; extra == \"tests\"",
"invenio-search[elasticsearch7]<4.0.0,>=3.0.0; extra == \"elasticsearch7\"",
"invenio-search[opensearch1]<4.0.0,>=3.0.0; extra == \"opensearch1\"",
"invenio-search[opensearch2]<4.0.0,>=3.0.0; extra == \"opensearch2\""
] | [] | [] | [] | [] | twine/6.1.0 CPython/3.13.7 | 2026-02-20T10:52:13.021393 | invenio_stats-6.1.0.tar.gz | 38,665 | b0/6a/3793b2d1a2924cdf5c554746d8bbd6cba4a42fc7b359666c594223aa64fb/invenio_stats-6.1.0.tar.gz | source | sdist | null | false | acfc85a629cd27eace2f9e79c928c2d8 | 0ca5e2bbe6814a2185eeb33d629d91650b60ac15246f3f90602ee54f42a3ef12 | b06a3793b2d1a2924cdf5c554746d8bbd6cba4a42fc7b359666c594223aa64fb | null | [
"LICENSE",
"AUTHORS.rst"
] | 488 |
2.4 | RenesSQLiteHelper | 0.2.0 | René's minimal wrapper around Python's sqlite3 module | # RenesSQLiteHelper
René's minimal wrapper around Python's built-in `sqlite3` module.
## Installation
```bash
pip install RenesSQLiteHelper
```
## Usage
### Create a database
The database file (here: `some-data`) is stored by default under `~/.local/share/sqlite-dbs`.
```python
from RenesSQLiteHelper import open_db, bulk_load
con = open_db('some-data', deleteIfExists = True)
con.execute('''
create table tab (
id integer primary key,
val text
)
''')
```
### Use the database
Bulk load
```python
con = open_db('some-data')
with bulk_load(con) as cur:
cur.execute('insert into tab values (?, ?)', (42, 'hello world'))
```
Selecting, etc.
```python
for rec in con.execute('select * from tab'):
    print(f"{rec['id']}: {rec['val']}")
```
| text/markdown | René Nyffenegger | null | null | null | null | null | [
"Programming Language :: Python :: 3",
"Operating System :: OS Independent"
] | [] | null | null | >=3.8 | [] | [] | [] | [] | [] | [] | [] | [
"Repository, https://github.com/ReneNyffenegger/py-RenesSQLiteHelper",
"Homepage, https://renenyffenegger.ch/notes/development/languages/Python/standard-library/sqlite3/RenesSQLiteHelper"
] | twine/6.2.0 CPython/3.12.3 | 2026-02-20T10:51:57.739397 | renessqlitehelper-0.2.0.tar.gz | 3,270 | 68/b4/b5c07a68fc177f154d6db637eed567a7700997a3913764861ef61490a3d9/renessqlitehelper-0.2.0.tar.gz | source | sdist | null | false | a921d10749f51d1be4076767ea87dbd9 | 6ea970548811ae8b5368afafd2031f1d802215be4817d628b738fc246598268f | 68b4b5c07a68fc177f154d6db637eed567a7700997a3913764861ef61490a3d9 | MIT | [
"LICENSE.md"
] | 0 |
2.4 | kwik | 1.4.0 | Fast, batteries-included, business-oriented, opinionated REST APIs framework | # Kwik
<div align="center">
<img src="docs/handbook/docs/img/logo.png" alt="Kwik Logo" width="400">
</div>
> **⚠️ Pre-Release Software Warning**
>
> Kwik v1.0 has been released and is ready for production use. The internal APIs, data structures, and framework interfaces are now stable. While not guaranteed, we strive to maintain backward compatibility following semantic versioning principles.
---
**Documentation**: https://davide.mezzogori.com/kwik/
[](https://codecov.io/github/dmezzogori/kwik)
---
Fast, batteries-included, business-oriented, opinionated REST APIs framework
## Acknowledgments
Python 3.12+
Kwik stands on the shoulders of a few giants:
* [FastAPI](https://fastapi.tiangolo.com/): for the underlying REST API server.
* [Pydantic](https://docs.pydantic.dev/1.10/): for the data validation and serialization.
* [SQLAlchemy](https://www.sqlalchemy.org/): for the ORM part.
## Installation
```console
$ pip install kwik
```
or
```console
$ uv add kwik
```
It will install Kwik and all its dependencies.
## Development
### Setup
```bash
# Clone the repository
git clone https://github.com/dmezzogori/kwik.git
cd kwik
# Install dependencies using uv
uv sync
# Start development server with hot reload
kwik
```
### Testing
```bash
# Run all tests with coverage (testcontainers automatically manages PostgreSQL)
pytest --cov=src/kwik --cov-report=term-missing
# Run tests in parallel (faster)
pytest -n auto
# Run specific test file
pytest tests/crud/test_crud_users.py
# Run only unit tests (skip integration tests)
pytest -m "not integration"
```
**Note**: Tests use testcontainers to automatically manage the PostgreSQL database. No manual database setup required (just docker).
### Code Quality
```bash
# Run linter and formatter
ruff check
ruff format
```
### Documentation
```bash
# Start documentation website locally
cd docs
docker compose up
# Access at http://localhost:8000
```
### Listing queries (DX)
- Unified dependency `kwik.dependencies.ListQuery` combines pagination, sorting, and filtering for list endpoints.
- Query params supported:
- `skip` and `limit` for pagination (stable default ordering by primary key when no sort is provided)
- `sorting` as comma-separated fields with optional direction, e.g. `?sorting=name:asc,id:desc`
- `filter_key` and `value` for simple equality filters, e.g. `?filter_key=is_active&value=true`
- Example endpoint:

  ```python
  def list_users(q: ListQuery, context: UserContext) -> Paginated[UserProfile]:
      total, data = crud_users.get_multi(context=context, **q)
      return {"total": total, "data": data}
  ```
- Invalid filter/sort fields return HTTP 400 with a clear message.
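Put together, a request against such a list endpoint could look like the following (the host, port, and `/users` path are placeholders for wherever the route is mounted; the query parameters are the ones described above):
```bash
# Host, port, and /users path are placeholders
curl "http://localhost:8000/users?skip=0&limit=10&sorting=name:asc,id:desc&filter_key=is_active&value=true"
```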
### Contributing
1. Create a feature branch (`git checkout -b feature/your-feature-name`)
2. Make your changes following the existing code style
3. Add tests for new functionality
4. Run tests and ensure they pass
5. Run linting and fix any issues
6. Commit your changes (`git commit -am '<Your commit message>'`)
7. Push to the branch (`git push origin feature/your-feature-name`)
8. Create a Pull Request
## License
This project is licensed under the terms of the MIT license.
| text/markdown | null | dmezzogori <dmezzogori@users.noreply.github.com> | null | null | null | null | [] | [] | null | null | >=3.12 | [] | [] | [] | [
"aiofiles==24.1.0",
"bcrypt==4.0.1",
"fastapi[standard]==0.116.1",
"gunicorn==23.0.0",
"httptools==0.6.4",
"loguru>=0.7.3",
"psycopg2-binary==2.9.10",
"pydantic-settings>=2.10.1",
"pydantic[email]<3.0.0,>=2.0.0",
"pyjwt>=2.10.0",
"pytest>=8.0.0",
"sqlalchemy<3.0.0,>=2.0.0",
"testcontainers>=4.0.0"
] | [] | [] | [] | [] | uv/0.10.4 {"installer":{"name":"uv","version":"0.10.4","subcommand":["publish"]},"python":null,"implementation":{"name":null,"version":null},"distro":{"name":"Ubuntu","version":"24.04","id":"noble","libc":null},"system":{"name":null,"release":null},"cpu":null,"openssl_version":null,"setuptools_version":null,"rustc_version":null,"ci":true} | 2026-02-20T10:51:51.252525 | kwik-1.4.0.tar.gz | 1,029,931 | 6b/eb/c25cc31fedd1aff299e946ab385812c2d593561277f7b7fc7d37ef9c630c/kwik-1.4.0.tar.gz | source | sdist | null | false | 6c4a75a9d82aa8807945d1ecbd705795 | 3140655604959d533846990598534eb0dc6620085ce0c6df9c7e41a54c8f9ad4 | 6bebc25cc31fedd1aff299e946ab385812c2d593561277f7b7fc7d37ef9c630c | null | [
"LICENSE"
] | 227 |
2.4 | vizpy | 0.1.1 | A state of the art prompt optimization library | # vizpy
A state of the art prompt optimization library.
Coming soon.
## Installation
```bash
pip install vizpy
```
## License
MIT License - see LICENSE file for details.
| text/markdown | Vizops AI | null | null | null | MIT | prompt, optimization, llm, ai | [
"Development Status :: 3 - Alpha",
"Intended Audience :: Developers",
"License :: OSI Approved :: MIT License",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.8",
"Programming Language :: Python :: 3.9",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12"
] | [] | null | null | >=3.8 | [] | [] | [] | [] | [] | [] | [] | [] | twine/6.2.0 CPython/3.9.6 | 2026-02-20T10:51:23.248018 | vizpy-0.1.1.tar.gz | 2,012 | bd/28/7aa8eff4eccb4b7ccc35254354cf3234f0302ef796ac649f4521df384417/vizpy-0.1.1.tar.gz | source | sdist | null | false | 9a01db803753eded3c87e4430b4eda1b | 4d774a283f2650024bb6bd7b443479de12d3c96c314f85abf07efdfc9ecc4a14 | bd287aa8eff4eccb4b7ccc35254354cf3234f0302ef796ac649f4521df384417 | null | [
"LICENSE"
] | 240 |
2.4 | llama-index-llms-modelslab | 0.1.0 | llama-index llms modelslab integration | # LlamaIndex LLMs ModelsLab Integration
Provides [ModelsLab](https://modelslab.com) as an LLM provider for LlamaIndex — giving RAG pipelines, agents, and query engines access to uncensored Llama 3.1 models with 128K context windows.
## Installation
```bash
pip install llama-index-llms-modelslab
```
## Setup
Get your API key at [modelslab.com](https://modelslab.com), then:
```bash
export MODELSLAB_API_KEY="your-api-key"
```
## Usage
### Basic completion
```python
from llama_index.llms.modelslab import ModelsLabLLM
llm = ModelsLabLLM(model="llama-3.1-8b-uncensored")
resp = llm.complete("Explain how attention mechanisms work in transformers.")
print(resp)
```
### Chat
```python
from llama_index.core.llms import ChatMessage
messages = [
ChatMessage(
role="user",
content="Write a Python function to merge two sorted lists.",
),
]
resp = llm.chat(messages)
print(resp)
```
### RAG pipeline
```python
from llama_index.core import VectorStoreIndex, SimpleDirectoryReader, Settings
from llama_index.llms.modelslab import ModelsLabLLM
Settings.llm = ModelsLabLLM(model="llama-3.1-70b-uncensored")
documents = SimpleDirectoryReader("data").load_data()
index = VectorStoreIndex.from_documents(documents)
query_engine = index.as_query_engine()
response = query_engine.query("Summarize the key findings.")
print(response)
```
### Streaming
```python
llm = ModelsLabLLM(model="llama-3.1-8b-uncensored")
for chunk in llm.stream_complete("Write a haiku about code:"):
print(chunk.delta, end="", flush=True)
```
## Models
| Model | Context Window | Best for |
| -------------------------- | -------------- | -------------------------------------- |
| `llama-3.1-8b-uncensored` | 128K | Fast completions, most tasks (default) |
| `llama-3.1-70b-uncensored` | 128K | Complex reasoning, high quality output |
## Configuration
```python
llm = ModelsLabLLM(
model="llama-3.1-8b-uncensored",
api_key="your-key", # or MODELSLAB_API_KEY env var
context_window=131072, # 128K (default)
temperature=0.7, # sampling temperature
max_tokens=2048, # max output tokens
is_chat_model=True, # use chat endpoint (default)
)
```
## API Reference
- ModelsLab docs: https://docs.modelslab.com
- Uncensored chat endpoint: https://docs.modelslab.com/uncensored-chat
| text/markdown | null | Your Name <you@example.com> | null | null | null | llama, llama-index, llm, modelslab, uncensored | [] | [] | null | null | <4.0,>=3.9 | [] | [] | [] | [
"llama-index-core<0.15,>=0.13.0",
"llama-index-llms-openai-like<0.6,>=0.5.0"
] | [] | [] | [] | [] | uv/0.10.4 {"installer":{"name":"uv","version":"0.10.4","subcommand":["publish"]},"python":null,"implementation":{"name":null,"version":null},"distro":{"name":"Ubuntu","version":"24.04","id":"noble","libc":null},"system":{"name":null,"release":null},"cpu":null,"openssl_version":null,"setuptools_version":null,"rustc_version":null,"ci":true} | 2026-02-20T10:49:38.058222 | llama_index_llms_modelslab-0.1.0-py3-none-any.whl | 3,541 | b2/ce/8467ebd564ef34f512cf7302d3a6ed997de922cdae8ed260596e2fe3c402/llama_index_llms_modelslab-0.1.0-py3-none-any.whl | py3 | bdist_wheel | null | false | 28da212029d62b42cd3a44855a8c6fc8 | cf86c1ca4e7a765a7b7d22740d5bd210018d1703389f35d5131a6350e065975d | b2ce8467ebd564ef34f512cf7302d3a6ed997de922cdae8ed260596e2fe3c402 | MIT | [] | 242 |
2.4 | nexusctl | 0.6.0 | CLI for managing Nexus Engine tenants, data sources, and deployments | # nexusctl
Command-line tool for managing [Nexus Engine](https://nexus-engine.io) — tenants, data sources, data views, and deployments.
## Installation
```bash
pip install nexusctl
```
Or install alongside the Nexus Engine Python client:
```bash
pip install nexus-engine[nexusctl]
```
## Usage
```bash
# Authenticate
nexusctl login
# Manage resources
nexusctl get data-sources
nexusctl get data-views
# Apply configuration from YAML
nexusctl apply -f data-source.yaml
# Run a calculation
nexusctl calculate -f my-tree.json
# View history and rollback
nexusctl history data-sources my-ds
nexusctl rollback data-sources my-ds --to <version-id>
```
## Documentation
Full documentation is available at [nexus-engine.io](https://nexus-engine.io).
## License
Dual-licensed under [MIT](https://opensource.org/licenses/MIT)
or [Apache-2.0](https://opensource.org/licenses/Apache-2.0).
| text/markdown; charset=UTF-8; variant=GFM | Synlynx | null | null | null | MIT OR Apache-2.0 | nexus-engine, cli, admin, devops | [
"Development Status :: 4 - Beta",
"Environment :: Console",
"Intended Audience :: Developers",
"Intended Audience :: System Administrators",
"License :: OSI Approved :: MIT License",
"License :: OSI Approved :: Apache Software License",
"Programming Language :: Rust",
"Topic :: Office/Business :: Financial",
"Topic :: System :: Systems Administration"
] | [] | null | null | >=3.9 | [] | [] | [] | [] | [] | [] | [] | [
"Homepage, https://nexus-engine.io",
"Documentation, https://nexus-engine.io"
] | twine/6.2.0 CPython/3.12.3 | 2026-02-20T10:48:52.790887 | nexusctl-0.6.0-py3-none-win_amd64.whl | 3,511,084 | 28/95/04076758ba7dfd4e541459643fa6679c4c0ab446fde8fc07828230270cc0/nexusctl-0.6.0-py3-none-win_amd64.whl | py3 | bdist_wheel | null | false | 4c056704f5004b7246545be7e3d19a2c | a5f5d33ebd4025da84150f38faedec552383684b87cf37972bda8a6e265c5a92 | 289504076758ba7dfd4e541459643fa6679c4c0ab446fde8fc07828230270cc0 | null | [] | 161 |
2.4 | nuclear-shape | 0.1.1 | Python package for analysing nuclear shape from 3D-genome models generated from Hi-C data. | # nuclear_shape
A small Python library for analysing nuclear shape from 3D‑genome models (Chrom3D `.cmm` files).
Includes ellipsoid fitting, PCA, basic shape metrics, and simple plotting/rendering.
## Installation
```bash
pip install nuclear_shape
```
## Basic Usage
```python
from nuclear_shape import NuclearShape
shape = NuclearShape("path/to/file.cmm")
shape.ellipsoid_fit()
shape.ellipsoid_inner()
shape.ellipsoid_outer()
shape.compute_pca()
shape.print_metrics()
shape.plot("sphericity", show=True)
shape.render("ellipsoid", show=True)
```
## Example Data
An example `.cmm` file (`test/example_real_data_0.cmm`) and a test script are included:
```bash
python test/test.py
```
## License
MIT License
| text/markdown | Rudi | null | null | null | MIT | null | [] | [] | null | null | >=3.9 | [] | [] | [] | [
"numpy",
"scipy",
"scikit-learn",
"tqdm",
"pandas",
"seaborn",
"matplotlib",
"Jinja2",
"trimesh",
"cvxpy",
"pytest; extra == \"dev\"",
"pytest-cov; extra == \"dev\"",
"black; extra == \"dev\"",
"flake8; extra == \"dev\""
] | [] | [] | [] | [] | twine/6.2.0 CPython/3.9.6 | 2026-02-20T10:48:36.306475 | nuclear_shape-0.1.1.tar.gz | 10,211 | c8/2c/4312f919bd93e5b8b02eb06daec2c7fe043212c15a470b1d9f3e52f1e2e9/nuclear_shape-0.1.1.tar.gz | source | sdist | null | false | 12e8ec29262f0ac668271978221e060e | 62cfd4432eaabcb8695c8ab9dcaa64e2912edaccf777628844e649466107e13f | c82c4312f919bd93e5b8b02eb06daec2c7fe043212c15a470b1d9f3e52f1e2e9 | null | [
"LICENSE"
] | 234 |
2.4 | harden | 0.1.0 | A Python security and hardening utility library | # harden
A Python security and hardening utility library.
Coming soon.
## Installation
```bash
pip install harden
```
## License
MIT License - see LICENSE file for details.
| text/markdown | Vizops AI | null | null | null | MIT | security, hardening, utility | [
"Development Status :: 3 - Alpha",
"Intended Audience :: Developers",
"License :: OSI Approved :: MIT License",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.8",
"Programming Language :: Python :: 3.9",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12"
] | [] | null | null | >=3.8 | [] | [] | [] | [] | [] | [] | [] | [] | twine/6.2.0 CPython/3.9.6 | 2026-02-20T10:46:52.069740 | harden-0.1.0.tar.gz | 2,037 | 9e/37/a0223ad4389093a188ee1aaaf81a21ca9950ba98b8dd6d3f757b97a17b76/harden-0.1.0.tar.gz | source | sdist | null | false | 97ae7bc3f26ef0b35266293de426d1f5 | 826d097072e02b5028527e94eb60d9265592eea43b81ab42bed549ec068f66a2 | 9e37a0223ad4389093a188ee1aaaf81a21ca9950ba98b8dd6d3f757b97a17b76 | null | [
"LICENSE"
] | 257 |
2.1 | odoo-addon-account-vendor-bank-account-default | 18.0.1.0.1 | Set a default bank account on partners for their vendor bills | .. image:: https://odoo-community.org/readme-banner-image
:target: https://odoo-community.org/get-involved?utm_source=readme
:alt: Odoo Community Association
===================================
Account Vendor Bank Account Default
===================================
..
!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
!! This file is generated by oca-gen-addon-readme !!
!! changes will be overwritten. !!
!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
!! source digest: sha256:c0eed6d2622a65a14df0a5b06c5e731468dfebf3dbcebf637c1b8af3ec994101
!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
.. |badge1| image:: https://img.shields.io/badge/maturity-Beta-yellow.png
:target: https://odoo-community.org/page/development-status
:alt: Beta
.. |badge2| image:: https://img.shields.io/badge/license-AGPL--3-blue.png
:target: http://www.gnu.org/licenses/agpl-3.0-standalone.html
:alt: License: AGPL-3
.. |badge3| image:: https://img.shields.io/badge/github-OCA%2Fbank--payment-lightgray.png?logo=github
:target: https://github.com/OCA/bank-payment/tree/18.0/account_vendor_bank_account_default
:alt: OCA/bank-payment
.. |badge4| image:: https://img.shields.io/badge/weblate-Translate%20me-F47D42.png
:target: https://translation.odoo-community.org/projects/bank-payment-18-0/bank-payment-18-0-account_vendor_bank_account_default
:alt: Translate me on Weblate
.. |badge5| image:: https://img.shields.io/badge/runboat-Try%20me-875A7B.png
:target: https://runboat.odoo-community.org/builds?repo=OCA/bank-payment&target_branch=18.0
:alt: Try me on Runboat
|badge1| |badge2| |badge3| |badge4| |badge5|
This module allows you to set a default bank account on partners for
their vendor bills.
**Table of contents**
.. contents::
:local:
Configuration
=============
To configure this module, you need to:
- Go to a partner form view, and edit the Default Bank Account in the
Sales and Purchase tab. The partner must be of type 'company', or an
individual without a parent company.
**Note:** If you do not set this value, it will be equal to the first bank account created in the contact. This is the standard behaviour of Odoo.
- You can disable the default bank account option at contact level by unchecking the "Has Default Bank Account" check.
Usage
=====
To use this module, you need to:
1. Create a vendor bill and select a partner with a default bank
account. Its recipient bank account will be the default one.
Known issues / Roadmap
======================
- This module depends on account_payment_partner. If the payment mode of
the invoice has a payment method that does not require a bank account,
the default bank account will be empty.
Bug Tracker
===========
Bugs are tracked on `GitHub Issues <https://github.com/OCA/bank-payment/issues>`_.
In case of trouble, please check there if your issue has already been reported.
If you spotted it first, help us to smash it by providing a detailed and welcomed
`feedback <https://github.com/OCA/bank-payment/issues/new?body=module:%20account_vendor_bank_account_default%0Aversion:%2018.0%0A%0A**Steps%20to%20reproduce**%0A-%20...%0A%0A**Current%20behavior**%0A%0A**Expected%20behavior**>`_.
Do not contact contributors directly about support or help with technical issues.
Credits
=======
Authors
-------
* Sygel
Contributors
------------
- `Sygel <https://www.sygel.es>`__:
- Harald Panten
- Valentin Vinagre
- Alberto Martínez
Maintainers
-----------
This module is maintained by the OCA.
.. image:: https://odoo-community.org/logo.png
:alt: Odoo Community Association
:target: https://odoo-community.org
OCA, or the Odoo Community Association, is a nonprofit organization whose
mission is to support the collaborative development of Odoo features and
promote its widespread use.
.. |maintainer-tisho99| image:: https://github.com/tisho99.png?size=40px
:target: https://github.com/tisho99
:alt: tisho99
Current `maintainer <https://odoo-community.org/page/maintainer-role>`__:
|maintainer-tisho99|
This module is part of the `OCA/bank-payment <https://github.com/OCA/bank-payment/tree/18.0/account_vendor_bank_account_default>`_ project on GitHub.
You are welcome to contribute. To learn how please visit https://odoo-community.org/page/Contribute.
| text/x-rst | Sygel, Odoo Community Association (OCA) | support@odoo-community.org | null | null | AGPL-3 | null | [
"Programming Language :: Python",
"Framework :: Odoo",
"Framework :: Odoo :: 18.0",
"License :: OSI Approved :: GNU Affero General Public License v3",
"Development Status :: 4 - Beta"
] | [] | https://github.com/OCA/bank-payment | null | >=3.10 | [] | [] | [] | [
"odoo-addon-account_payment_partner==18.0.*",
"odoo==18.0.*"
] | [] | [] | [] | [] | twine/6.2.0 CPython/3.12.3 | 2026-02-20T10:46:33.254980 | odoo_addon_account_vendor_bank_account_default-18.0.1.0.1-py3-none-any.whl | 51,383 | 3f/d4/1c9cae5ca9f36d4327d71e0993dde08e0bf7cb92ebaf33b7de2a04b1d915/odoo_addon_account_vendor_bank_account_default-18.0.1.0.1-py3-none-any.whl | py3 | bdist_wheel | null | false | be864d3818856c496ce272b0b8d36479 | 1bc3ed625dc0f0b23622b95ce0118c7bbc2abe02c2ce83e50af3dff118d7bde9 | 3fd41c9cae5ca9f36d4327d71e0993dde08e0bf7cb92ebaf33b7de2a04b1d915 | null | [] | 90 |
2.1 | shareddata | 6.79.0 | Memory Mapped / Shared Memory Database with S3 repository | # SharedData
A comprehensive ultrafast Python library for financial data.
## 📖 Table of Contents
- [🏗️ Core Features](#-core-features)
- [⚡ Quick Start](#-quick-start)
- [🔧 Configuration](#-configuration)
- [🚀 Advanced Usage](#-advanced-usage)
- [🔄 Development & Documentation](#-development--documentation)
- [📄 License](#-license)
- [🤝 Contributing](#-contributing)
## 🏗️ Core Features
SharedData provides a comprehensive set of features for high-performance financial data management:
- **🗃️ Database Schema & Indexing** - Optimized schemas for financial data types
- **🌐 Storage & Integration** - Multi-storage support (Local, S3, MongoDB, Redis)
- **📈 Performance & Scalability** - Parallel processing and advanced querying
- **📊 Data Containers** - Tables, Collections, Time Series, Streams, Cache, Metadata
- **⚡ Multiprocessing & Parallel Computing** - Sophisticated parallel processing library
- **🤖 Distributed Worker System** - Automated task execution and job scheduling
- **📋 Comprehensive Logging System** - Enterprise-grade logging with multiple destinations
- **🌐 Remote API Client** - REST API for remote data access and operations
**📚 [Read the complete Core Features guide →](docs/CORE_FEATURES.md)**
## ⚡ Quick Start
Get up and running with SharedData in minutes:
### Installation
```bash
# Create virtual environment
python -m venv venv
source venv/bin/activate
# Install SharedData
pip install -r requirements.txt
pip install -e .
```
### Basic Usage
```python
import pandas as pd
from SharedData.SharedData import SharedData
# Initialize SharedData
shdata = SharedData(__file__, user='master')
# Quick example - Tables
dates = pd.date_range('2025-01-01', '2025-01-10', freq='D')
symbols = ['AAPL', 'GOOGL', 'MSFT']
idx = pd.MultiIndex.from_product([dates, symbols], names=['date', 'symbol'])
df = pd.DataFrame({'price': 100, 'volume': 1000}, index=idx)
# Write and read data
tbl = shdata.table('MarketData', 'D1', 'TEST', 'PRICES', value=df)
tbl.write()
data = tbl.loc['2025-01-05':, 'AAPL'] # Fast symbol lookup
print(f"Retrieved {data.shape[0]} rows")
```
**📚 [Read the complete Quick Start guide →](docs/QUICK_START.md)**
## 🔧 Configuration
Configure SharedData with environment variables for your specific needs:
```bash
# Required Variables
SHAREDDATA_SECRET_KEY="your-secret-key"
SHAREDDATA_TOKEN="your-auth-token"
AWS_ACCESS_KEY_ID="your-aws-access-key"
S3_BUCKET="s3://your-bucket-name"
MONGODB_HOST="your-mongodb-host"
SHAREDDATA_ENDPOINT="http://your-server:port"
```
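One way to wire this up locally is to keep these variables in a `.env` file and load them before initializing SharedData. This is only a minimal sketch under the assumption that SharedData reads its configuration from the process environment (python-dotenv is already listed in the requirements):
```python
from dotenv import load_dotenv  # python-dotenv, listed in requirements.txt
from SharedData.SharedData import SharedData

# Assumption: the SHAREDDATA_*, AWS_*, S3_* and MONGODB_* variables above live
# in a local .env file and are picked up from the process environment.
load_dotenv()

shdata = SharedData(__file__, user='master')
```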
**📚 [Read the complete Configuration guide →](docs/CONFIGURATION.md)**
## 🚀 Advanced Usage
For power users and complex scenarios:
- **Advanced Data Operations** - Complex queries and aggregations
- **Performance Optimization** - Memory management and parallel processing
- **Distributed Computing** - Worker pools and distributed data processing
- **Custom Extensions** - Custom data containers and worker types
- **Production Deployment** - High availability and monitoring
- **Monitoring & Debugging** - Performance profiling and health checks
**📚 [Read the complete Advanced Usage guide →](docs/ADVANCED_USAGE.md)**
## 🔄 Development & Documentation
For developers contributing to SharedData:
### Building Documentation
```bash
# Generate documentation from docstrings
make gitea
# Alternative methods
python generate_docs.py
python update_docs.py
make all
```
### Running Tests
```bash
python -m pytest tests/
```
**📚 [Read the complete Development guide →](docs/DEVELOPMENT.md)**
## 📄 License
See [LICENSE](LICENSE) file for details.
## 🤝 Contributing
1. Fork the repository
2. Create a feature branch
3. Make your changes
4. Update documentation: `make gitea`
5. Submit a pull request
**📚 [Read the complete Contributing guide →](docs/DEVELOPMENT.md#contributing-guidelines)**
---
## 📚 Documentation Index
- **[🏗️ Core Features](docs/CORE_FEATURES.md)** - Comprehensive overview of all SharedData capabilities
- **[⚡ Quick Start](docs/QUICK_START.md)** - Get started with SharedData in minutes
- **[🔧 Configuration](docs/CONFIGURATION.md)** - Complete configuration reference
- **[🚀 Advanced Usage](docs/ADVANCED_USAGE.md)** - Advanced patterns and optimization
- **[🔄 Development](docs/DEVELOPMENT.md)** - Contributing and development guidelines
---
[⬆️ Back to top](#shareddata)
| text/markdown | null | null | null | null | null | null | [
"Programming Language :: Python :: 3",
"License :: OSI Approved :: MIT License",
"Operating System :: OS Independent"
] | [] | null | null | <3.13,>=3.9 | [] | [] | [] | [
"ipykernel==6.23.3",
"boto3==1.26.160",
"python-json-logger==2.0.7",
"python-dotenv==1.0.0",
"pandas==2.0.2",
"scipy==1.14.0",
"numpy==1.26.2",
"numba==0.59.1",
"XlsxWriter==3.1.2",
"openpyxl==3.1.2",
"tqdm==4.65.0",
"cffi==1.17.1",
"tzlocal==5.0.1",
"websockets==12.0",
"cryptography==41.0.7",
"lz4==4.3.3",
"flask==3.0.0",
"waitress==3.0.0",
"requests==2.31.0",
"flasgger==0.9.7.1",
"pymongo==4.8.0",
"setuptools==74.1.2",
"filelock==3.18.0",
"gunicorn==23.0.0",
"confluent-kafka==2.10.0",
"aiokafka==0.12.0",
"aiohttp==3.12.13",
"redis==6.2.0"
] | [] | [] | [] | [] | twine/6.0.1 CPython/3.12.3 | 2026-02-20T10:46:17.153268 | shareddata-6.79.0-py3-none-any.whl | 415,778 | 6d/4c/8ba288ace2a201a3392ddd22564fdc2e7df879069868a92a53bd9ad3ab24/shareddata-6.79.0-py3-none-any.whl | py3 | bdist_wheel | null | false | 40df73beefb490473012600afa33e095 | bd7593036e7f0d0ba97b23067691c7bcbe73d091be15707ef78500f6a279e2ab | 6d4c8ba288ace2a201a3392ddd22564fdc2e7df879069868a92a53bd9ad3ab24 | null | [] | 102 |
2.4 | sunset-cli | 0.1.1 | LLM-powered network automation CLI with a Textual TUI | # NetMind
**LLM-powered network automation agent** -- connect to Cisco IOS routers, talk to an AI that configures them for you.
NetMind uses Claude as the orchestration brain. You describe what you want ("Configure OSPF area 0 between these three routers"), and it discovers the topology, generates the right commands, asks for your approval, applies them, and verifies everything works.
---
## Quick Start
```bash
# Clone and enter the project
cd netmind
# Create a virtual environment
python3 -m venv venv
source venv/bin/activate
# Install dependencies
pip install -r requirements.txt
# Set your API key
cp .env.example .env
# Edit .env and add your ANTHROPIC_API_KEY
# Run
python main.py
```
## What It Does
1. **You add routers** via the Device Manager (Ctrl+D) -- IP, username, password.
2. **You ask NetMind** to configure something: "Set up OSPF area 0 on all routers."
3. **Claude discovers** the current state by running show commands over SSH.
4. **Claude generates** the exact IOS config commands needed.
5. **You review and approve** in the approval screen.
6. **Claude applies** the config, then verifies the result (neighbors up, routes learned).
7. **If something breaks**, there's a checkpoint to roll back.
## TUI Controls
| Key | Action |
|----------|-------------------------------|
| Ctrl+D | Open Device Manager |
| Ctrl+R | Toggle Read-Only / Interactive |
| Ctrl+L | Clear Chat |
| Ctrl+C | Quit |
| Escape | Back (from sub-screens) |
## Architecture
```
User (TUI)
|
v
ClaudeAgent <----> Claude API (tool calling)
|
v
ToolRegistry ---> DeviceManager ---> DeviceConnection (Netmiko/SSH)
| |
v v
SafetyGuard Cisco IOS Router
ApprovalManager
CheckpointManager
```
**Key principles:**
- Claude is the brain -- it decides what commands to run and in what order
- Every config change requires user approval (ApprovalScreen)
- Checkpoints are created before changes for rollback
- Read-only mode is on by default
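For orientation, the `DeviceConnection` layer in the architecture above wraps Netmiko's SSH handling. The snippet below is plain Netmiko (not NetMind's internal API) with placeholder credentials, just to illustrate the kind of call that sits at the bottom of the stack:
```python
from netmiko import ConnectHandler

# Hypothetical lab router -- replace host/credentials with your own.
device = {
    "device_type": "cisco_ios",
    "host": "192.0.2.1",
    "username": "admin",
    "password": "admin",
}

# Open an SSH session, run a read-only show command, and close it.
with ConnectHandler(**device) as conn:
    print(conn.send_command("show ip ospf neighbor"))
```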
## Project Structure
```
netmind/
├── main.py # Entry point
├── netmind/
│ ├── ui/ # Textual TUI
│ │ ├── app.py # Main app class
│ │ ├── screens/ # Main, DeviceManager, Approval screens
│ │ └── widgets/ # ChatPanel, DeviceList, StatusBar
│ ├── core/ # Device connectivity & safety
│ │ ├── device_connection.py # Netmiko SSH wrapper
│ │ ├── device_manager.py # Multi-device management
│ │ └── safety.py # Approval flow, checkpoints, guard rails
│ ├── agent/ # Claude integration
│ │ ├── claude_agent.py # Main agent loop
│ │ ├── tool_registry.py # Tool definitions
│ │ ├── conversation.py # Message history management
│ │ ├── system_prompt.py # Claude's persona and rules
│ │ └── tools/ # Tool handler implementations
│ ├── protocols/ # Protocol-specific helpers
│ │ └── ospf.py # OSPF verification
│ ├── models/ # Pydantic data models
│ └── utils/ # Config, logging, parsers
└── tests/
```
## Safety Model
NetMind starts in **read-only mode**. Claude can run any show command but cannot make changes.
When you switch to **interactive mode** (Ctrl+R):
- Config commands still require explicit approval
- A config checkpoint is taken before every change
- Dangerous commands (reload, erase, no ip address) are blocked or flagged
- Rollback is available if something goes wrong
## Testing
```bash
# Run all tests
python -m pytest tests/ -v
# Run parser tests only
python -m pytest tests/test_parsers.py -v
```
## Requirements
- Python 3.10+
- Access to Cisco IOS devices (GNS3, EVE-NG, or real hardware)
- Anthropic API key (Claude Sonnet 4.5)
## Current Protocol Support
- **OSPF** (MVP) -- full configure + verify + troubleshoot
- BGP, EIGRP, VLANs -- planned
## License
MIT
| text/markdown | null | null | null | null | MIT | network, automation, cisco, ssh, llm, cli | [
"Development Status :: 3 - Alpha",
"Environment :: Console",
"Intended Audience :: System Administrators",
"Topic :: System :: Networking",
"Programming Language :: Python :: 3"
] | [] | null | null | >=3.9 | [] | [] | [] | [
"anthropic>=0.42.0",
"netmiko>=4.3.0",
"textual>=0.85.0",
"textual-image>=0.8.0",
"python-dotenv>=1.0.0",
"pydantic>=2.0.0",
"pyyaml>=6.0.0",
"rich>=13.0.0",
"pytest>=7.0.0; extra == \"dev\"",
"pytest-asyncio>=0.23.0; extra == \"dev\""
] | [] | [] | [] | [] | twine/6.1.0 CPython/3.13.7 | 2026-02-20T10:45:31.297357 | sunset_cli-0.1.1.tar.gz | 118,615 | e7/06/aa7a279a0f57e98719822db8453790e0aae5cb6516f37ffefebe9f8ae881/sunset_cli-0.1.1.tar.gz | source | sdist | null | false | 5e107ffa6eae02126adb4b13b065120f | d392b1735fddecc845af23c492f63ecd064cf404451bba2a0f322b5e3f952031 | e706aa7a279a0f57e98719822db8453790e0aae5cb6516f37ffefebe9f8ae881 | null | [] | 239 |
2.4 | caf.brain | 0.1.0 | Common Analytical Framework package of Machine and Deep Learning tools. | 
<h1 align="center">CAF.brAIn</h1>
<p align="center">
<a href="https://transport-for-the-north.github.io/CAF-Handbook/python_tools/framework.html">
<img alt="CAF Status - Pre-Alpha" src="https://img.shields.io/badge/CAF%20Status-Pre--Alpha-orange">
</a>
<a href="https://pypi.org/project/caf.brain/">
<img alt="Supported Python versions" src="https://img.shields.io/pypi/pyversions/caf.brain.svg?style=flat-square">
</a>
<a href="https://pypi.org/project/caf.brain/">
<img alt="Latest release" src="https://img.shields.io/github/release/transport-for-the-north/caf.brain.svg?style=flat-square&maxAge=86400">
</a>
<a href="https://anaconda.org/conda-forge/caf.brain">
<img alt="Conda" src="https://img.shields.io/conda/v/conda-forge/caf.brain?style=flat-square&logo=condaforge">
</a>
</p>
<p align="center">
<a href="https://app.codecov.io/gh/transport-for-the-north/caf.brain">
<img alt="Coverage" src="https://img.shields.io/codecov/c/github/transport-for-the-north/caf.brain.svg?branch=main&style=flat-square&logo=CodeCov">
</a>
<a href="https://github.com/transport-for-the-north/caf.brain/actions?query=event%3Apush">
<img alt="Testing Badge" src="https://img.shields.io/github/actions/workflow/status/transport-for-the-north/caf.brain/tests.yml?style=flat-square&logo=GitHub&label=Tests">
</a>
<a href='https://cafbrain.readthedocs.io/en/stable/?badge=stable'>
<img alt='Documentation Status' src="https://img.shields.io/readthedocs/cafbrain?style=flat-square&logo=readthedocs">
</a>
<a href="https://github.com/psf/black">
<img alt="code style: black" src="https://img.shields.io/badge/code%20format-black-000000.svg">
</a>
</p>
> [!WARNING]
> This package is in an early stage of development so features may change or be removed.
> If using this package it is recommended to set a specific version and check before
> upgrading to a new version.
Common Analytical Framework package of Machine and Deep Learning tools.
## Common Analytical Framework
This package sits within the [Common Analytical Framework (CAF)](https://transport-for-the-north.github.io/caf_homepage/intro.html),
which is a collaboration between transport bodies in the UK to develop and maintain commonly used
transport analytics and appraisal tools.
---
<details><summary><h2>Contributing</h2></summary>
CAF.brain happily accepts contributions.
The best way to contribute to this project is to go to the [issues tab](https://github.com/transport-for-the-north/caf.brain/issues)
and report bugs or submit a feature request. This helps CAF.brain become more
stable and full-featured. Please check the closed bugs before submitting a bug report to see if your
question has already been answered.
Please see our [contribution guidelines](https://github.com/Transport-for-the-North/.github/blob/main/CONTRIBUTING.rst)
for details on contributing to the codebase or documentation.
</details>
<details><summary><h2>Documentation</h2></summary>
Documentation is created using [Sphinx](https://www.sphinx-doc.org/en/master/index.html) and is hosted online at
[cafbrain.readthedocs](https://cafbrain.readthedocs.io/en/stable/).
The documentation can be built locally once all the docs requirements
([`docs/requirements.txt`](docs/requirements.txt)) are installed into your Python environment.
The provided make batch file (inside the docs folder) allows building the documentation in
various target formats. The command for building the documentation is `make {target}`
(called from within docs/), where `{target}` is the type of documentation format to build. A full
list of all available target formats can be seen by running the `make` command without any
arguments, but the two most common are detailed below.
### HTML
The HTML documentation (seen on Read the Docs) can be built using the `make html` command, this
will build the web-based documentation and provide an index.html file as the homepage,
[`docs/build/html/index.html`](docs/build/html/index.html).
### PDF
The PDF documentation has some additional requirements before it can be built, as Sphinx will first
build a [LaTeX](https://www.latex-project.org/) version of the documentation and then use an
installed TeX distribution to build the PDF from it. If you already have a TeX distribution
set up then you can build the PDF with `make latexpdf`; otherwise follow the instructions below.
Installing LaTeX on Windows is best done using [MiKTeX](https://miktex.org/), as this provides a
simple way of handling any additional TeX packages. Details of other operating systems and TeX
distributions can be found on the [Getting LaTeX](https://www.latex-project.org/get/) page on
LaTeX's website.
MiKTeX provides an installer on its website [miktex.org/download](https://miktex.org/download),
which will run through the process of getting it installed and set up. In addition to MiKTeX,
Sphinx uses [Latexmk](https://mg.readthedocs.io/latexmk.html) to build the PDF; Latexmk is a Perl
script and so requires Perl to be installed on your machine, which can be done with the installer
provided by [Strawberry Perl](https://strawberryperl.com/).
Once MiKTeX and Perl are installed you are able to build the PDF from the LaTeX files. Sphinx
provides a target (latexpdf) which builds the LaTeX files and then immediately builds the PDF. When
running `make latexpdf`, MiKTeX may ask for permission to install some required TeX packages.
Once the command has finished, the PDF will be located at
[`docs/build/latex/cafbrain.pdf`](docs/build/latex/cafbrain.pdf).
</details>
## Maintainers
- Adil Zaheer (AdilZ16)
- Ben Taylor (BenTaylor-TfN)
## Credit
This project was created using the Common Analytical Framework cookiecutter template found here:
<https://github.com/Transport-for-the-North/cookiecutter-caf>
| text/markdown | Transport for the North | null | null | null | Copyright © Transport for the North (“TfN”) (2025).
Use of this software and associated documentation files (“the Software”) by you indicates your acceptance of the terms and conditions below (“the Licence”).
We make available the Software to you on the basis of this Licence. We do not sell the Software to you. We remain the owners of the Software at all times.
We grant you a non-exclusive, worldwide, royalty-free, perpetual licence:
(a) to use the Software;
(b) to develop, modify and maintain the Software;
(c) copy, publish, distribute and transmit the Software;
(d) adapt the Software.
Where you do any of the above you must acknowledge the source of the Software in your product or application by including or linking the following attribution statement in a prominent and noticeable location in or in the context of your product or application:
“Outputs derived from CAF.Brain (the Common Analytical Framework AI Toolkit), developed by Transport for the North [https://github.com/Transport-for-the-North/caf.brain]”
Any adaptations applied to the Software before application should be briefly described alongside the above citation. We encourage you to submit to us any adaptations to the Software and where you choose to do so your submissions will be considered for integration into the Software.
The above are important conditions of this Licence and if you fail to comply with them the rights granted to you under this Licence will end automatically.
This Licence does not grant you any right to use the Software in a way that suggests any official status or that we endorse you or your use of the Software.
The Software is licenced 'as is' and we exclude all representations, warranties, obligations and liabilities in relation to the Software to the maximum extent permitted by law.
We are not liable for any errors or omissions in the Software and shall not be liable for any loss, injury or damage of any kind caused by its use. We do not guarantee the continued supply of the Software.
This Licence is governed by the laws of the jurisdiction in which we have our principal place of business.
| null | [
"Programming Language :: Python :: 3 :: Only",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12"
] | [] | null | null | <3.13,>=3.10 | [] | [] | [] | [
"caf.toolkit>=0.9.0",
"joblib>=1.4.2",
"matplotlib>=3.8.4",
"mlxtend>=0.23.1",
"numpy>=1.23.3",
"pandas>=2.2.3",
"PyYAML>=6.0.2",
"scikit-learn>=1.6.1",
"scipy>=1.15.1",
"seaborn>=0.13.2",
"statsmodels>=0.14.0",
"tqdm>=4.66.2",
"black>=24; extra == \"dev\"",
"mypy>=1.10; extra == \"dev\"",
"mypy_extensions>=1.0.0; extra == \"dev\"",
"pydocstyle[toml]>=6.3; extra == \"dev\"",
"pylint>=3.2; extra == \"dev\"",
"isort>=5.13; extra == \"dev\"",
"pytest>=8.3; extra == \"dev\"",
"pytest-cov>=5.0; extra == \"dev\"",
"pytest-xdist>=3.6; extra == \"dev\"",
"versioningit>=3.1; extra == \"dev\"",
"geopandas>=1.0.1; extra == \"vision\"",
"albumentations>=2.0.8; extra == \"vision\"",
"beautifulsoup4>=4.12.3; extra == \"vision\"",
"dbfread>=2.0.7; extra == \"vision\"",
"optuna>=4.4.0; extra == \"vision\"",
"Pillow>=11.3.0; extra == \"vision\"",
"pyproj>=3.7.1; extra == \"vision\"",
"rasterio>=1.4.3; extra == \"vision\"",
"ray>=2.48.0; extra == \"vision\"",
"tensorflow>=2.10.0; extra == \"vision\"",
"torch>=2.8.0; extra == \"vision\"",
"ultralytics>=8.3.175; extra == \"vision\""
] | [] | [] | [] | [
"Bug Tracker, https://github.com/transport-for-the-north/caf.brain/issues",
"Homepage, https://github.com/transport-for-the-north/caf.brain",
"Source, https://github.com/transport-for-the-north/caf.brain",
"Documentation, https://cafbrain.readthedocs.io/"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-20T10:44:59.090876 | caf_brain-0.1.0.tar.gz | 54,550 | e3/dd/891d4b6040bf58fa4d3b24922934ad18754a3efd8d1357945c050f27a13c/caf_brain-0.1.0.tar.gz | source | sdist | null | false | 548094162dd9d738233fa7b9fdc923e2 | be01c4a3b1d9c11dec3a45693a06d2c06ba63e50de04d9040a0f74e75b2d81e2 | e3dd891d4b6040bf58fa4d3b24922934ad18754a3efd8d1357945c050f27a13c | null | [
"LICENSE"
] | 0 |
2.4 | django-configvars | 0.4.1 | Custom settings management for Django | # Django Configvars
Configure your Django project in an easy and readable way.









## Description
Configvars makes it possible to configure a Django-based project with a local file and environment variables (e.g. for Docker containers).
Environment variables take the highest precedence. If they are not set, the variables from the `local` module will be used; if these are not present either, the default values will be used:
```
ENV > LOCAL > DEFAULT
```
## Installation
`pip install git+https://gitlab.com/marcinjn/django-configvars.git`
### Basic configuration
Add `configvars` to your `settings.INSTALLED_APPS`:
```python
INSTALLED_APPS = [
# ...
"configvars",
# ...
]
```
### Quickstart
In your `settings.py` add these lines at the top of the file:
```python
from configvars import config, secret
SOME_API_KEY = config("SOME_API_KEY", "default_api_key")
SOME_API_SECRET = secret("SOME_API_SECRET", "")
```
Then use local settings to set these values or pass them as environment
variables. To use the local file, add these settings to a `local.py` file in
the same folder where `settings.py` is located, and fill it with:
```
SOME_API_KEY = "NEW_API_KEY"
SOME_API_SECRET = "NEW_API_SECRET"
```
To check whether they are applied properly, run `manage.py configvars`.
You can override these settings by using environment variables (e.g. for
deployment in containers). To do so, just declare an environment variable
as usual:
```
SOME_API_KEY="ENV_API_KEY" manage.py configvars
```
In case of secrets, you should provide a path to the secret file
containing a value:
```
SOME_API_SECRET="/run/secrets/SOME_API_SECRET" manage.py configvars
```
If the file does not exist, the path will be interpreted as a plain string value.
## Usage
### Config vars declaration
In your `settings.py` file declare configurable variables by using `config` or `secret` functions. The first one is used for regular variables, the second one - for secure variables (like passwords, secrets, etc).
```python
DATABASES = {
"default": {
"NAME": config("DB_NAME", "example"), # `example` as default database name
"USER": config("DB_USER", "postgres"), # `postgres` as default username
"PASSWORD": secret("DB_PASSWORD"),
"HOST": config("DB_HOST", "localhost"), # `localhost` as default host
"PORT": config("DB_PORT", 5432), # `5432` as default port
}
}
```
### Show configurable variables for your project
```bash
python manage.py configvars
```
Should result in something like that:
```
DB_NAME = 'example'
DB_USER = 'postgres'
DB_PASSWORD = None
DB_HOST = 'localhost'
DB_PORT = 5432
```
### Show only changed config variables
To show only the config variables changed by `local.py` or environment variables, use:
```bash
python manage.py configvars --changed
```
### Adding short description to your config variables
In your `settings.py`, declare `config` or `secret` with an additional `desc` argument:
```python
MY_CUSTOM_VARIABLE = config("MY_CUSTOM_VARIABLE", "default_value", desc="Sets a custom variable")
```
Then you can dump your config variables with descriptions:
```bash
$ python manage.py configvars --comment
MY_CUSTOM_VARIABLE = 'default_value' # Sets a custom variable
```
### Local settings
Django Configvars will try to import `<projectname>.local` module by
default. By using this file you can customize your config variables -
they will be used as current values.
To do so, create an empty `local.py` in the directory where your `settings.py` file
is located, then assign values to your variables.
*As local config variables are specific to a local machine, consider adding `local.py` to `.gitignore`.*
Note that:
* Local settings can be overridden by environment variables
* Local settings can be skipped for your project
To change the location or name of your local settings file, you must
initialize Django Configvars explicitly in the `settings.py` module:
```
from configvars import initialize
initialize("other.location.of.settings_local")
```
### Environment variables
Django Configvars will first check whether an environment variable with the
variable's name is defined. This is important for deployments in containers,
where configuration variables are mostly passed as environment variables.
If the environment variable does not exist, the local variable will be
used. If the local value is not defined either, the default value will be used.
Environment variables can be prefixed to avoid potential name conflicts.
To do so, you must initialize Django Configvars explicitly in the
`settings.py` file:
```
from configvars import initialize
initialize(env_prefix="MYPREFIX_")
```
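As a minimal sketch of how the prefix is meant to be used (assuming the prefix is simply prepended to the declared variable name when the environment is consulted):
```python
# settings.py
from configvars import initialize, config

initialize(env_prefix="MYPREFIX_")

# Resolved from the MYPREFIX_DB_NAME environment variable if it is set,
# otherwise from local.py, otherwise the default below.
DB_NAME = config("DB_NAME", "example")
```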
## Support
To ask a question, please create an issue.
## To do
* better support for type casts
* config vars view for Django Admin
## Contributing
You can contribute by creating issues, feature requests or merge requests.
## Authors and acknowledgment
- Marcin Nowak
## License
ISC License
Copyright (c) 2023 Marcin Nowak
Permission to use, copy, modify, and/or distribute this software for any
purpose with or without fee is hereby granted, provided that the above
copyright notice and this permission notice appear in all copies.
THE SOFTWARE IS PROVIDED "AS IS" AND THE AUTHOR DISCLAIMS ALL WARRANTIES WITH
REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF MERCHANTABILITY
AND FITNESS. IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR ANY SPECIAL, DIRECT,
INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES WHATSOEVER RESULTING FROM
LOSS OF USE, DATA OR PROFITS, WHETHER IN AN ACTION OF CONTRACT, NEGLIGENCE OR
OTHER TORTIOUS ACTION, ARISING OUT OF OR IN CONNECTION WITH THE USE OR
PERFORMANCE OF THIS SOFTWARE.
| text/markdown | null | Marcin Nowak <marcin.j.nowak@gmail.com> | null | null | null | web, python, django, config, settings | [
"Development Status :: 4 - Beta",
"Environment :: Web Environment",
"Framework :: Django",
"Framework :: Django :: 3.1",
"Framework :: Django :: 3.2",
"Framework :: Django :: 4.0",
"Framework :: Django :: 4.1",
"Framework :: Django :: 4.2",
"Framework :: Django :: 5.0",
"Framework :: Django :: 5.1",
"Framework :: Django :: 5.2",
"Framework :: Django :: 6.0",
"License :: OSI Approved :: ISC License (ISCL)",
"Operating System :: OS Independent",
"Programming Language :: Python",
"Programming Language :: Python :: 3.7",
"Programming Language :: Python :: 3.8",
"Programming Language :: Python :: 3.9",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: 3.13",
"Topic :: Internet :: WWW/HTTP",
"Topic :: Software Development :: Libraries :: Python Modules",
"Intended Audience :: Developers"
] | [] | null | null | null | [] | [] | [] | [] | [] | [] | [] | [
"Homepage, https://github.com/marcinn/django-configvars"
] | twine/6.2.0 CPython/3.14.2 | 2026-02-20T10:44:54.369698 | django_configvars-0.4.1.tar.gz | 8,542 | f5/2c/a668dd115419e7b98bdaca6adad2a6fa36540dc927070efda17ff10371d7/django_configvars-0.4.1.tar.gz | source | sdist | null | false | a78d6d93d75f456be9865826ecfe5f04 | eb1c29d57f48b198473f8733dda63fe420a0a551fb88e41c33f7fedcfc7e59a2 | f52ca668dd115419e7b98bdaca6adad2a6fa36540dc927070efda17ff10371d7 | null | [
"LICENSE"
] | 242 |
2.4 | pysurfex | 0.1.4 | Python API to SURFEX |
Python API to SURFEX (pysurfex)
=======================================================
An API in python to the external surface model SURFEX.
- Prepare input and namelists to a SURFEX binary
- Create atmospheric forcing for offline SURFEX runs
- Read SURFEX output
- Quality control of observations with titanlib
- Optimal interpolation with gridpp
- Monitor the observations usage
See online documentation in https://metno.github.io/pysurfex/
Installation of pregenerated packages from pypi (pip)
---------------------------------------------------------
Every release automatically triggers a pre-built package on PyPI, which can be installed with pip
.. code-block:: bash
pip3 install pysurfex
User installation:
.. code-block:: bash
pip3 install pysurfex --user
Run pysurfex from pre-built container
-------------------------------------------
Releases also trigger an update of the pysurfex container in the github container registry. Below is an example to run pgd without any arguments.
.. code-block:: bash
podman run -it ghcr.io/metno/pysurfex:latest pgd
Installation on debian based Linux system
--------------------------------------------
The following dependencies are needed. Install the non-standard ones e.g. with pip or your system package manager.
General dependencies (from pypi)
---------------------------------
.. code-block:: bash
numpy
pyproj
pyyaml
f90nml
To read NetCDF files:
.. code-block:: bash
NetCDF4
cfunits
To read grib files:
.. code-block:: bash
eccodes
from ECMWF https://software.ecmwf.int/wiki/display/ECC/Releases installed with ENABLE_PYTHON=ON
To read FA files:
.. code-block:: bash
falfilfa4py
epygram
To plot:
.. code-block:: bash
matplotlib
To get observations from frost.met.no API:
.. code-block:: bash
requests
For Quality control of observations
.. code-block:: bash
titanlib
For optimal interpolation and observation operators
.. code-block:: bash
gridpp
For testing:
.. code-block:: bash
pytest
Install pysurfex
-------------------------------------------
An environment manager like miniconda or micromamba is recommended to ensure consistency between the packages.
After installing it, set it up for the current session or permanently add it to your shell.
Now it is easy to create a suitable environment for pysurfex. Below is a recipe for micromamba.
.. code-block:: bash
# Install micromamba (linux, https://mamba.readthedocs.io/en/latest/installation/micromamba-installation.html)
"${SHELL}" <(curl -L micro.mamba.pm/install.sh)
# specify a installation location for micromamba and add it to your path afterwards. Default it will install in $HOME/.local/bin
export PATH=$HOME/.local/bin:$PATH # Use your PATH
# initialize your shell (needed in all shells), e.g:
eval "$(micromamba shell hook --shell bash)"
micromamba create env pysurfex
micromamba activate pysurfex
micromamba install python==3.12 poetry
Download the source code, then install ``pysurfex`` by executing the following inside the extracted
folder:
.. code-block:: bash
poetry install
If you are not already in a conda/mamba environment, this will install ``pysurfex`` in a poetry environment, and this environment can be activated interactively by:
.. code-block:: bash
poetry shell
or
Run pysurfex client applications
-------------------------------------------
.. code-block:: bash
poetry run [command]
# e.g.
poetry run python # will run python inside the pysurfex poetry environment
Examples
-----------------------
See https://metno.github.io/pysurfex/#examples
| text/x-rst | Trygve Aspelien | null | null | null | null | null | [
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: 3.13",
"Programming Language :: Python :: 3.14"
] | [] | null | null | <4.0,>=3.10 | [] | [] | [] | [
"black[jupyter]>=23.10.0; extra == \"linting\"",
"cartopy; extra == \"faformat\"",
"cfunits>=3.3.5; extra == \"netcdf\"",
"eccodes>1.5.1; extra == \"points\"",
"epygram>=2.0.0; extra == \"faformat\"",
"f90nml>=1.4.3",
"falfilfa4py>=1.0.0; extra == \"faformat\"",
"findlibs<=1.0.0,>=0.0.5; python_version < \"3.10\" and extra == \"points\"",
"gridpp>=0.7.1; extra == \"points\"",
"isort>=5.12.0; extra == \"linting\"",
"matplotlib>=3.7.1; extra == \"plot\"",
"netcdf4>=1.5.7; extra == \"netcdf\"",
"numpy>=1.20.1",
"pandas>=2.0.0; extra == \"verification\"",
"poethepoet[poetry-plugin]>=0.24.4; extra == \"linting\"",
"pydoclint>0.3.8; extra == \"linting\"",
"pyproj>=3.3.0",
"pytest>=7.2.2; extra == \"test\"",
"pytest-cov>=4.0.0; extra == \"test\"",
"pytest-mock>=3.7.0; extra == \"test\"",
"pytest-profiling>=1.7.0; extra == \"test\"",
"pytest-timeout>=2.1.0; extra == \"test\"",
"pytest-xdist>=3.2.0; extra == \"test\"",
"pyyaml>=6.0; extra == \"points\"",
"requests>=2.32.3; extra == \"points\"",
"ruff>=0.11.0; extra == \"linting\"",
"sphinx>=7.0.0; extra == \"test\"",
"titanlib>=0.3.4.dev3; extra == \"points\"",
"verif>=1.2.3; extra == \"verification\"",
"xarray>=2025.3.1; extra == \"verification\""
] | [] | [] | [] | [] | poetry/2.3.2 CPython/3.12.3 Linux/6.11.0-1018-azure | 2026-02-20T10:44:52.217827 | pysurfex-0.1.4-py3-none-any.whl | 142,570 | c9/76/6aa368a98a7bd92a69d67f6f05d3e99fc6dd633674a31cb5f395aadafcc5/pysurfex-0.1.4-py3-none-any.whl | py3 | bdist_wheel | null | false | 7f39bc92fd15e4bd94bbf0fe560f5f6e | d89ac82eee54f46540af0aa9f46ede621f5b8c758376d9e852187f8bfb0c9018 | c9766aa368a98a7bd92a69d67f6f05d3e99fc6dd633674a31cb5f395aadafcc5 | MIT | [] | 275 |
2.4 | syneto_api | 0.3.65 | Syneto Client API library | # Syneto API
Syneto Client API library: authentication, storage, virtualization and protection
# Installation
```
$ pip install syneto-api
```
# Basic Usage
```
from syneto_api import Authentication, Virtualization, Storage, Protection
auth_api = Authentication(url_base="https://syneto-instance-ip-address/api/auth", insecure_ssl=True)
response = auth_api.login(username="admin", password="admin")
jwt = response['jwt']
virt_api = Virtualization(url_base="https://syneto-instance-ip-address/api/virtualization", insecure_ssl=True)
virt_api.set_auth_jwt(jwt)
print(virt_api.get_vms())
storage_api = Storage(url_base="https://syneto-instance-ip-address/api/storage", insecure_ssl=True)
storage_api.set_auth_jwt(jwt)
print(storage_api.get_pools())
```
# Environment Variables
For convenience, the base URLs for the API endpoints are also accepted as environment variables; please see below.
```
AUTHORIZATION_USER=admin
AUTHORIZATION_PASS=admin
AUTHORIZATION_SERVICE=https://syneto-instance-ip-address/api/auth
VIRTUALIZATION_SERVICE=https://syneto-instance-ip-address/api/virtualization
STORAGE_SERVICE=https://syneto-instance-ip-address/api/storage
PROTECTION_SERVICE=https://syneto-instance-ip-address/api/protection
```
If you are using self-signed SSL certificates, set the following environment variable so that the HTTP request library does not perform SSL verification.
```
ALLOW_INSECURE_SSL=True
```
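With those variables exported, client construction can be kept free of hardcoded URLs. A minimal sketch, under the assumption that each client falls back to its corresponding `*_SERVICE` environment variable when `url_base` is omitted:
```python
from syneto_api import Authentication, Virtualization

# Assumes AUTHORIZATION_SERVICE, VIRTUALIZATION_SERVICE and ALLOW_INSECURE_SSL
# are already set in the environment, so no url_base is passed here.
auth_api = Authentication()
jwt = auth_api.login(username="admin", password="admin")["jwt"]

virt_api = Virtualization()
virt_api.set_auth_jwt(jwt)
print(virt_api.get_vms())
```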
# Publishing
See `RELEASE.md`
| text/markdown | Syneto | developers@syneto.eu | null | null | Proprietary | null | [
"License :: Other/Proprietary License",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.8",
"Programming Language :: Python :: 3.9",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: 3.13",
"Programming Language :: Python :: 3.14"
] | [] | https://github.com/SynetoNet/syneto-api | null | <4.0,>=3.8 | [] | [] | [] | [
"python-dotenv<0.17,>=0.16",
"requests[secure]<3.0,>=2.25",
"inflection<0.6,>=0.5",
"aiohttp<4.0,>=3.8",
"aiodns<4,>=3",
"tenacity!=8.4.0,<9.0.0,>=8.0.1",
"typing_extensions<5.0,>=4.12"
] | [] | [] | [] | [
"Homepage, https://github.com/SynetoNet/syneto-api",
"Repository, https://github.com/SynetoNet/syneto-api"
] | poetry/2.2.1 CPython/3.13.9 Darwin/25.3.0 | 2026-02-20T10:44:37.243175 | syneto_api-0.3.65.tar.gz | 15,207 | f3/75/52895c07b114bff501de1a321514a63c42592ab53e706b97fa568c40e847/syneto_api-0.3.65.tar.gz | source | sdist | null | false | 4d3eb61fd3154a5c109a438acb7efef1 | b26d12d92e8eb329942619a64b76fa2c61e4c7c66ceccaaf799522d4385a5e80 | f37552895c07b114bff501de1a321514a63c42592ab53e706b97fa568c40e847 | null | [] | 0 |
2.4 | blockapi | 2.3.5 | BlockAPI library | # blockapi
Library to interact with numerous cryptocurrency data APIs to get the basic info about account balance, transactions, staking information, etc.
List of supported coins:
| coin | API name | supported operations
| :---- | :------------| :---------------------
| XTZ | TzscanAPI | balance, transactions, activations, originations, delegations, endorsements, bakings
| | TzStatsAPI | staking (balance, rewards)
| ATOM | CosmosAPI | balance, transactions, rewards, delegates, votes
| DCR | DcrdataAPI | balance, transactions
| ADA | CardanoExplorerAPI | balance, transactions
| ZEC | ChainSoAPI | balance, transactions
| | MercerweissAPI | balance
| | ZchainAPI | balance
| ETC | BlockscoutAPI | balance
| NEO | NeoscanAPI | balance, transactions
| ZEN | ZensystemAPI | balance
| DASH | ChainSoAPI | balance, transactions
| | CryptoIDAPI | balance
| DOGE | ChainSoAPI | balance, transactions
| BNB | BinanceAPI | balance, transactions
| EOS | EosparkAPI | balance, transactions
| | GreymassAPI | balance
| BCH | BtcAPI | balance
| XLM | StellarAPI | balance
| RVN | RavencoinAPI | balance
| TRX | TronscanAPI | balance
| LTC | BlockcypherAPI | balance
| | ChainSoAPI | balance, transactions
| | CryptoIDAPI | balance
| | Ltc1TrezorAPI | balance, transactions
| BTC | BlockchainInfoAPI | balance, transactions
| | BlockonomicsAPI | balance, transactions
| | ChainSoAPI | balance, transactions
| | Btc1TrezorAPI | balance, transactions
| | Btc2TrezorAPI | balance, transactions
| | BitpayAPI | balance
| GRS | CryptoIDAPI | balance
| ETH | AlethioAPI | balance, transactions, events
| | EtherscanAPI | balance, transactions
| | EthplorerAPI | balance
| ONT | OntioAPI | balance, transactions
| VET | DigonchainAPI | balance
| BOS | BlockchainosAPI | balance, transactions
| LUNA | TerraMoneyAPI | balance, transactions, delegations
| DOT | SubscanPolkaAPI | balance, transactions, staking (locked, rewards)
| KSM | SubscanKusamaAPI | balance, transactions, staking (locked, rewards)
## Getting Started
These instructions will get you a copy of the project up and running on your local machine for development and testing purposes.
### Prerequisites
Python 3.x, PIP (if you'd like to install it this way).
### Installing
The library can be installed simply with pip:
```
pip install blockapi
```
or by running:
```
make install
```
### Usage examples
Example usage to get account balance:
```
import blockapi
myobj = blockapi.api.BlockchainInfoAPI("bitcoin-address-here")
myobj.get_balance()
```
For some coins there are multiple APIs available. With get_random_api_class_for_coin it is possible
to randomly pick any of the available APIs:
```
myapi = blockapi.get_random_api_class_for_coin('BTC')('1F1tAaz5x1HUXrCNLbtMDqcw6o5GNn4xqX')
myapi.get_balance()
```
To directly pick first random working API and ask it for the account balance:
```
>>> blockapi.get_balance_from_random_api('BTC','16ftSEQ4ctQFDtVZiUBusQUjRrGhM3JYwe')
0.010034040000000001
```
It is possible to ask for a list of working APIs for a coin. They are first checked automatically to see whether they work (the check requests a balance). Only APIs which pass this check are returned:
```
>>> blockapi.get_working_apis_for_coin('BTC')
(<class 'blockapi.api.blockchaininfo.BlockchainInfoAPI'>, <class 'blockapi.api.blockonomics.BlockonomicsAPI'>, <class 'blockapi.api.insight.BitpayAPI'>, <class 'blockapi.api.trezor.Btc2TrezorAPI'>, <class 'blockapi.api.trezor.Btc1TrezorAPI'>)
```
During API instance creation the supplied address is checked for validity; if the address
is not valid, a ValueError exception is raised:
```
>>> import blockapi
>>> blockapi.api.CosmosAPI('blahblah')
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/srv/apps/blockapi/src/blockapi/blockapi/services.py", line 195, in __init__
self.check_validity()
File "/srv/apps/blockapi/src/blockapi/blockapi/services.py", line 201, in check_validity
self.symbol, self.address_info.address
ValueError: Not a valid ATOM address: b'blahblah'
```
It is possible to display the result of the address validation with details like validity, network type, address type, or whether the supplied address is an extended one.
Not all of the details are available for every coin, though:
```
>>> import blockapi
>>> myapi = blockapi.api.TzscanAPI('valid tezos address here')
>>> myapi.address_info
ValidationResult(name='tezos', ticker='xtz', address=b'valid tezos-address here', valid=True, network='both', is_extended=False, address_type='originated_account')
```
## Running the tests
To run the included tests simply issue:
```
make test
```
## Contributing
TBD
## Authors
* **Devmons s.r.o.** - *Initial work* - [crypkit](https://github.com/crypkit)
See also the list of [contributors](https://github.com/crypkit/blockapi/contributors) who participated in this project.
## Credits
* **Chris Priest** - *moneywagon library we took many ideas from* - [moneywagon](https://github.com/priestc/moneywagon)
* **Joe Black** - *Address validation library* - [coinaddr](https://github.com/joeblackwaslike/coinaddr)
## License
This project is licensed under the MIT License - see the [LICENSE.md](LICENSE.md) file for details
| text/markdown | Devmons s.r.o. | null | null | null | MIT | null | [] | [] | https://github.com/crypkit/blockapi | null | null | [] | [] | [] | [
"requests<3.0,>=2.28",
"python-dateutil>=2.8.0",
"coinaddrng==1.1.1",
"web3<8.0.0,>=5.2.2",
"bs4>=0.0.1",
"lxml>=4.4.1",
"pydantic>=1.10.2",
"marko<2.0.0,>=1.3.0",
"fake_useragent>=1.1.3",
"pytest",
"pytest-vcr",
"requests_mock>=1.9.3",
"attrs<23.0.0,>=17.4.0",
"solders>=0.22.0"
] | [] | [] | [] | [] | twine/6.1.0 CPython/3.13.7 | 2026-02-20T10:44:00.276696 | blockapi-2.3.5.tar.gz | 107,838 | 1e/72/604cb2dca91bf03a42b15feeda2cf76e8c026af25d00ea97ccdaee8ba330/blockapi-2.3.5.tar.gz | source | sdist | null | false | 7d2e177661a2c549d52a90f94d1ca132 | a00bac9e239171a397b00633d1ad9e5b221debb31b4d8d8bdb816de32ab5483d | 1e72604cb2dca91bf03a42b15feeda2cf76e8c026af25d00ea97ccdaee8ba330 | null | [
"LICENSE.md"
] | 272 |
2.4 | segtraq | 0.0.3 | SegTraQ - A Python toolkit for quantitative and visual quality control of segmentation and transcript assignment in spatial omics data. | # SegTraQ
[](https://badge.fury.io/py/segtraq)
> ⚠️ Note: SegTraQ is under active development.
> Features, interfaces, and functionality may change in upcoming releases.
> SegTraQ currently supports imaging-based spatial transcriptomics data only.
> Support for sequencing-based spatial transcriptomics is under development and will be included in a future release.
> To install the latest development version, run `pip install git+https://github.com/LazDaria/SegTraQ`.
SegTraQ (**Seg**mentation and **Tra**nscript Assignment **Q**uality Control) is a Python toolkit for quantitative and visual quality control of segmentation and transcript assignment in spatial omics data.
## Getting Started
Please refer to the [documentation](https://lazdaria.github.io/SegTraQ) for details on the API and tutorials.
## Installation
To install `SegTraQ`, first create a Python environment and install the package using
```
pip install segtraq
```
The installation of the package should take less than a minute.
## System Requirements
### Hardware Requirements
`SegTraQ` requires only a standard computer with enough RAM to support the in-memory operations.
### Software Requirements
`SegTraQ` depends on the following packages:
```
scanpy
spatialdata
geopandas
igraph
rtree
rasterio
squidpy
anndata
ovrlpy
```
| text/markdown | null | Daria Lazic <daria.lazic@embl.de>, Matthias Meyer-Bender <matthias.meyerbender@embl.de>, Martin Emons <martin.emons@uzh.ch> | null | Daria Lazic <daria.lazic@embl.de>, Matthias Meyer-Bender <matthias.meyerbender@embl.de>, Martin Emons <martin.emons@uzh.ch> | MIT | null | [] | [] | null | null | >=3.11 | [] | [] | [] | [
"typer",
"scanpy",
"spatialdata>=0.7.2",
"joblib",
"geopandas",
"igraph",
"rtree",
"rasterio",
"squidpy>=1.6.2",
"tqdm",
"anndata>=0.12",
"ovrlpy>=1.1.0",
"setuptools<82.0.0",
"coverage; extra == \"test\"",
"pytest; extra == \"test\"",
"ruff; extra == \"test\"",
"ty; extra == \"test\"",
"ipdb; extra == \"test\"",
"black[jupyter]; extra == \"test\"",
"pytest-cov; extra == \"test\"",
"sphinx; extra == \"docs\"",
"sphinx-book-theme; extra == \"docs\"",
"myst-parser; extra == \"docs\"",
"nbsphinx; extra == \"docs\"",
"sphinx-autodoc-typehints; extra == \"docs\"",
"sphinx-copybutton; extra == \"docs\"",
"pandoc; extra == \"docs\"",
"pygments; extra == \"docs\"",
"ipython; extra == \"docs\""
] | [] | [] | [] | [
"bugs, https://github.com/LazDaria/segtraq/issues",
"changelog, https://github.com/LazDaria/segtraq/blob/master/changelog.md",
"homepage, https://github.com/LazDaria/segtraq"
] | twine/6.2.0 CPython/3.12.12 | 2026-02-20T10:43:26.174584 | segtraq-0.0.3.tar.gz | 80,121 | 85/dc/8320dad4526cb12bd23085cb8802d7c6c676b6a29213b25c738f27c8ae38/segtraq-0.0.3.tar.gz | source | sdist | null | false | c738b0f44c1d0a1f87f659f6d97ec2ca | 073a9234604e4125ce4268db1c85c36c1459e925a85065e268b2677c0c93b90a | 85dc8320dad4526cb12bd23085cb8802d7c6c676b6a29213b25c738f27c8ae38 | null | [
"LICENSE"
] | 224 |