metadata_version string | name string | version string | summary string | description string | description_content_type string | author string | author_email string | maintainer string | maintainer_email string | license string | keywords string | classifiers list | platform list | home_page string | download_url string | requires_python string | requires list | provides list | obsoletes list | requires_dist list | provides_dist list | obsoletes_dist list | requires_external list | project_urls list | uploaded_via string | upload_time timestamp[us] | filename string | size int64 | path string | python_version string | packagetype string | comment_text string | has_signature bool | md5_digest string | sha256_digest string | blake2_256_digest string | license_expression string | license_files list | recent_7d_downloads int64 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
2.4 | star-openapi-scalar | 1.44.25 | Provide Scalar UI for star-openapi. | Provide Scalar UI for [star-openapi](https://github.com/luolingchun/star-openapi). | text/markdown | null | null | null | llc <luolingchun@outlook.com> | MIT | null | [
"Development Status :: 5 - Production/Stable",
"Environment :: Web Environment",
"Intended Audience :: Developers",
"License :: OSI Approved :: MIT License",
"Operating System :: OS Independent",
"Programming Language :: Python",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3 :: Only"
] | [] | null | null | >=3.8 | [] | [] | [] | [
"star-openapi"
] | [] | [] | [] | [
"Homepage, https://github.com/luolingchun/star-openapi-plugins/tree/master/star-openapi-scalar",
"Documentation, https://luolingchun.github.io/star-openapi/latest/Usage/UI_Templates/"
] | uv/0.10.4 {"installer":{"name":"uv","version":"0.10.4","subcommand":["publish"]},"python":null,"implementation":{"name":null,"version":null},"distro":{"name":"Ubuntu","version":"24.04","id":"noble","libc":null},"system":{"name":null,"release":null},"cpu":null,"openssl_version":null,"setuptools_version":null,"rustc_version":null,"ci":true} | 2026-02-20T02:50:04.366155 | star_openapi_scalar-1.44.25-py3-none-any.whl | 959,838 | 88/a8/f7d335991b26bd3050f1366044412a1e10f1ace58ef9e03a94d077da25e1/star_openapi_scalar-1.44.25-py3-none-any.whl | py3 | bdist_wheel | null | false | 4a80fd911c1737c472c8d6bd00a44f4c | 6ac04f629a696ce77b4de1e2549f7c0fc4ad9c8b891f401ffc4f73fc0a969df8 | 88a8f7d335991b26bd3050f1366044412a1e10f1ace58ef9e03a94d077da25e1 | null | [] | 226 |
2.4 | blocklink | 0.1.0 | A distributed node network framework for secure registration and instruction transmission | # BlockLink
<div align="center">
🔗 A distributed node communication framework that breaks through network boundaries
</div>
---
## 💫 Inspiration
Remember J.A.R.V.I.S. from *Iron Man*? With a single sentence from Tony Stark, J.A.R.V.I.S. could mobilize every resource, coordinate every system, and handle complex tasks. Or the Red Queen from *Resident Evil*, a powerful central intelligence that controls all the data and operations of an entire facility.
**Let's build an intelligent system like that of our own.**
- 🏗️ **Data** - Store and manage all of your data (notes, journals, articles, accounts, and more) in units called Blocks
- 🔌 **Links** - Data can be linked to other data; no more data silos
- 🔌 **Nodes** - Nodes extend the system's functionality; they are loosely coupled and pluggable, each independent and free to join or leave at any time
- 🔌 **Network** - Every node connects to every other and communicates across NAT, with no need to care about IP addresses
- 🔌 **Collaboration** - Blocks flow and link freely between devices; nodes cooperate intelligently to build your personal data network
- 🔌 **Flexible extension** - Build all kinds of functionality through the extension system and create an application ecosystem of your own
## 💡 What is BlockLink?
BlockLink = **Block** (data unit) + **Link** (linking network) + **Node** (node)
It is a distributed system with a three-layer architecture:
### 📦 Data layer - the Block ecosystem
- **Blocks are the smallest data unit**: a Block can store any data (text, images, notes, articles, and more)
- **Blocks can link to each other**: relationships between Blocks are built with `link`
- **Blocks can be tagged**: organize and retrieve Blocks with `tag`
- **Blocks can travel across devices**: sync and share them among all of your devices
### 🌐 Network layer - communication infrastructure
- **Cross-NAT communication**: relay forwarding lets devices on private networks talk to each other
- **Smart routing**: automatically selects the best path (direct or relayed)
- **LAN discovery**: nodes on the same network discover each other and connect directly
- **Network-wide addressing**: a BID is all you need to locate any node or piece of data, with no IP addresses, storage locations, or network paths to worry about
### 🔌 Extension layer - application ecosystem
- **Extension system**: develop or install extensions to add functionality
- **Flexible composition**: deploy different extensions on different servers
- **Custom applications**: build your own note-taking system, knowledge base, content management platform, and more
## 💡 The core problem BlockLink solves
### 🆔 Network-wide addressing - a BID is all you need to communicate
In a traditional network you need to know the peer's IP address, port, whether it sits behind NAT, and other details. In BlockLink, **all you need is the peer's BID**; the framework automatically:
- 🔍 Locates the target node or Block anywhere in the network
- 🛤️ Computes the optimal communication path (direct or relayed)
- 📡 Handles message routing and forwarding
## 🎯 Typical Use Cases
### Personal data management
- 📝 **Personal knowledge base**: store notes, journals, and articles as Blocks, synced across all your devices
- 🗂️ **Distributed file system**: build your own cloud storage with data spread across multiple devices
- 🔐 **Password manager**: store account credentials as encrypted Blocks for secure access
## 🚀 Quick Start
### Installation
```bash
pip install blocklink
```
| text/markdown | Derek X | Derek X <me@derekx.com> | null | null | MIT | distributed, networking, protocol, blockchain, node | [
"Development Status :: 3 - Alpha",
"Intended Audience :: Developers",
"Topic :: Software Development :: Libraries :: Python Modules",
"Topic :: System :: Networking",
"License :: OSI Approved :: MIT License",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Operating System :: OS Independent"
] | [] | https://github.com/derek44554/BlockLink | null | >=3.10 | [] | [] | [] | [
"cryptography>=41.0.0",
"pyyaml>=6.0",
"starlette>=0.27.0",
"websockets>=11.0",
"uvicorn>=0.23.0",
"sqlmodel>=0.0.8",
"python-dotenv>=1.0.0",
"fastapi>=0.100.0",
"requests>=2.31.0",
"netifaces>=0.11.0",
"pycryptodome>=3.18.0",
"Pillow>=10.0.0"
] | [] | [] | [] | [
"Homepage, https://github.com/derek44554/BlockLink",
"Bug Reports, https://github.com/derek44554/BlockLink/issues",
"Source, https://github.com/derek44554/BlockLink"
] | twine/6.2.0 CPython/3.10.19 | 2026-02-20T02:48:46.534633 | blocklink-0.1.0.tar.gz | 38,826 | bf/6e/a5e1ede41ad85639acbba75dc0deed8fb69c94f747b31b5a62b81d44b28c/blocklink-0.1.0.tar.gz | source | sdist | null | false | 268558d6601a0a869354ff12af5712fa | dcb4d3357536964cddf890e5ad45e5ce9e26006f86c704a42778487cdc7f6ae9 | bf6ea5e1ede41ad85639acbba75dc0deed8fb69c94f747b31b5a62b81d44b28c | null | [
"LICENSE"
] | 254 |
2.4 | xarray-dataclasses | 1.10.0 | xarray data creation by data classes | # xarray-dataclasses
xarray data creation by data classes
## Overview
xarray-dataclasses is a Python package that makes it easy to create [xarray]'s DataArray and Dataset objects that are "typed" (i.e. fixed dimensions, data type, coordinates, attributes, and name) using [the Python's dataclass]:
```python
from dataclasses import dataclass
from typing import Literal
from xarray_dataclasses import AsDataArray, Coord, Data
X = Literal["x"]
Y = Literal["y"]
@dataclass
class Image(AsDataArray):
"""2D image as DataArray."""
data: Data[tuple[X, Y], float]
x: Coord[X, int] = 0
y: Coord[Y, int] = 0
```
### Features
- Typed DataArray or Dataset objects can easily be created:
```python
image = Image.new([[0, 1], [2, 3]], [0, 1], [0, 1])
```
- NumPy-like filled-data creation is also available:
```python
image = Image.zeros([2, 2], x=[0, 1], y=[0, 1])
```
- Support for features provided by [the Python's dataclass] (`field`, `__post_init__`, ...).
- Support for static type checking with [Pyright].
### Installation
```shell
pip install xarray-dataclasses
```
## Basic usage
xarray-dataclasses uses [the Python's dataclass].
The data (or data variables), coordinates, attributes, and name of a DataArray or Dataset object are defined as dataclass fields with the special type hints `Data`, `Coord`, `Attr`, and `Name`, respectively.
Note that the following imports and type aliases are assumed in the examples below.
```python
from dataclasses import dataclass
from typing import Literal
from xarray_dataclasses import AsDataArray, AsDataset
from xarray_dataclasses import Attr, Coord, Data, Name
X = Literal["x"]
Y = Literal["y"]
```
### Data field
A data field is a field whose value will become the data of a DataArray object or a data variable of a Dataset object.
The type hint `Data[TDims, TDtype]` fixes the dimensions and the data type of the object.
Here are some examples of how to specify them.
Type hint | Inferred dimensions
--- | ---
`Data[tuple[()], ...]` | `()`
`Data[Literal["x"], ...]` | `("x",)`
`Data[tuple[Literal["x"]], ...]` | `("x",)`
`Data[tuple[Literal["x"], Literal["y"]], ...]` | `("x", "y")`
Type hint | Inferred data type
--- | ---
`Data[..., Any]` | `None`
`Data[..., None]` | `None`
`Data[..., float]` | `numpy.dtype("float64")`
`Data[..., numpy.float128]` | `numpy.dtype("float128")`
`Data[..., Literal["datetime64[ns]"]]` | `numpy.dtype("<M8[ns]")`
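As a rough illustration of the two inference tables above, the helpers below (hypothetical sketches, not the library's actual implementation) show how dimension names and data types can be read back from `Literal`-based type hints using the standard `typing` utilities:

```python
from typing import Any, Literal, get_args, get_origin

import numpy as np

X = Literal["x"]
Y = Literal["y"]


def infer_dims(hint):
    """Map a dimension type hint to a tuple of dimension names."""
    if get_origin(hint) is tuple:
        args = get_args(hint)
        # tuple[()] means "no dimensions"; its args differ across Python versions
        if not args or args == ((),):
            return ()
        return tuple(get_args(arg)[0] for arg in args)
    return (get_args(hint)[0],)


def infer_dtype(hint):
    """Map a data type hint to a numpy dtype (None means the dtype is not fixed)."""
    if hint is Any or hint is None or hint is type(None):
        return None
    if get_origin(hint) is Literal:
        return np.dtype(get_args(hint)[0])  # e.g. "datetime64[ns]" -> <M8[ns]
    return np.dtype(hint)  # e.g. float -> float64


print(infer_dims(tuple[X, Y]))  # ('x', 'y')
print(infer_dtype(float))  # float64
```

The package performs equivalent inference internally when `new()` or the NumPy-like constructors build the object from a dataclass definition.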
### Coordinate field
A coordinate field is a field whose value will become a coordinate of a DataArray or Dataset object.
The type hint `Coord[TDims, TDtype]` fixes the dimensions and the data type of the coordinate.
### Attribute field
An attribute field is a field whose value will become an attribute of a DataArray or Dataset object.
The type hint `Attr[TAttr]` specifies the type of the value, which is used only for static type checking.
### Name field
A name field is a field whose value will become the name of a DataArray object.
The type hint `Name[TName]` specifies the type of the value, which is used only for static type checking.
### DataArray class
A DataArray class is a dataclass that defines a typed DataArray specification.
Exactly one data field is used in a DataArray class; any second and subsequent data fields are ignored during DataArray creation.
```python
@dataclass
class Image(AsDataArray):
"""2D image as DataArray."""
data: Data[tuple[X, Y], float]
x: Coord[X, int] = 0
y: Coord[Y, int] = 0
units: Attr[str] = "cd / m^2"
name: Name[str] = "luminance"
```
A DataArray object will be created by a class method `new()`:
```python
Image.new([[0, 1], [2, 3]], x=[0, 1], y=[0, 1])
<xarray.DataArray "luminance" (x: 2, y: 2)>
array([[0., 1.],
[2., 3.]])
Coordinates:
* x (x) int64 0 1
* y (y) int64 0 1
Attributes:
units: cd / m^2
```
NumPy-like class methods (`zeros()`, `ones()`, ...) are also available:
```python
Image.ones((3, 3))
<xarray.DataArray "luminance" (x: 3, y: 3)>
array([[1., 1., 1.],
[1., 1., 1.],
[1., 1., 1.]])
Coordinates:
* x (x) int64 0 0 0
* y (y) int64 0 0 0
Attributes:
units: cd / m^2
```
### Dataset class
A Dataset class is a dataclass that defines a typed Dataset specification.
Multiple data fields are allowed; each one defines a data variable of the object.
```python
@dataclass
class ColorImage(AsDataset):
"""2D color image as Dataset."""
red: Data[tuple[X, Y], float]
green: Data[tuple[X, Y], float]
blue: Data[tuple[X, Y], float]
x: Coord[X, int] = 0
y: Coord[Y, int] = 0
units: Attr[str] = "cd / m^2"
```
A Dataset object will be created by a class method `new()`:
```python
ColorImage.new(
[[0, 0], [0, 0]], # red
[[1, 1], [1, 1]], # green
[[2, 2], [2, 2]], # blue
)
<xarray.Dataset>
Dimensions: (x: 2, y: 2)
Coordinates:
* x (x) int64 0 0
* y (y) int64 0 0
Data variables:
red (x, y) float64 0.0 0.0 0.0 0.0
green (x, y) float64 1.0 1.0 1.0 1.0
blue (x, y) float64 2.0 2.0 2.0 2.0
Attributes:
units: cd / m^2
```
## Advanced usage
### Coordof and Dataof type hints
xarray-dataclasses provides advanced type hints, `Coordof` and `Dataof`.
Unlike `Data` and `Coord`, they specify a dataclass that defines a DataArray class.
This is useful when users want to add metadata to dimensions for [plotting].
For example:
```python
from xarray_dataclasses import Coordof
@dataclass
class XAxis:
data: Data[X, int]
long_name: Attr[str] = "x axis"
units: Attr[str] = "pixel"
@dataclass
class YAxis:
data: Data[Y, int]
long_name: Attr[str] = "y axis"
units: Attr[str] = "pixel"
@dataclass
class Image(AsDataArray):
"""2D image as DataArray."""
data: Data[tuple[X, Y], float]
x: Coordof[XAxis] = 0
y: Coordof[YAxis] = 0
```
### General data variable names in Dataset creation
Because data variable names are derived from Python parameter names, it is not possible to define data variable names that contain, for example, white space.
In such cases, define a DataArray class for each data variable so that it has a name field, and reference these classes with `Dataof` in a Dataset class.
The values of the name fields will then be used as the data variable names.
For example:
```python
@dataclass
class Red:
data: Data[tuple[X, Y], float]
name: Name[str] = "Red image"
@dataclass
class Green:
data: Data[tuple[X, Y], float]
name: Name[str] = "Green image"
@dataclass
class Blue:
data: Data[tuple[X, Y], float]
name: Name[str] = "Blue image"
@dataclass
class ColorImage(AsDataset):
"""2D color image as Dataset."""
red: Dataof[Red]
green: Dataof[Green]
blue: Dataof[Blue]
```
```python
ColorImage.new(
[[0, 0], [0, 0]],
[[1, 1], [1, 1]],
[[2, 2], [2, 2]],
)
<xarray.Dataset>
Dimensions: (x: 2, y: 2)
Dimensions without coordinates: x, y
Data variables:
Red image (x, y) float64 0.0 0.0 0.0 0.0
Green image (x, y) float64 1.0 1.0 1.0 1.0
Blue image (x, y) float64 2.0 2.0 2.0 2.0
```
### Customization of DataArray or Dataset creation
For customization, users can add a special class attribute, `__dataoptions__`, to a DataArray or Dataset class.
In the current implementation, the only supported option is a custom factory for DataArray or Dataset creation.
```python
import xarray as xr
from xarray_dataclasses import DataOptions
class Custom(xr.DataArray):
"""Custom DataArray."""
__slots__ = ()
def custom_method(self) -> bool:
"""Custom method."""
return True
@dataclass
class Image(AsDataArray):
"""2D image as DataArray."""
data: Data[tuple[X, Y], float]
x: Coord[X, int] = 0
y: Coord[Y, int] = 0
__dataoptions__ = DataOptions(Custom)
image = Image.ones([3, 3])
isinstance(image, Custom) # True
image.custom_method() # True
```
### DataArray and Dataset creation without shorthands
xarray-dataclasses also provides the functions `asdataarray` and `asdataset`.
They are useful when users do not want to inherit from the mix-in classes (`AsDataArray` or `AsDataset`) in a DataArray or Dataset dataclass.
For example:
```python
from xarray_dataclasses import asdataarray
@dataclass
class Image:
"""2D image as DataArray."""
data: Data[tuple[X, Y], float]
x: Coord[X, int] = 0
y: Coord[Y, int] = 0
image = asdataarray(Image([[0, 1], [2, 3]], [0, 1], [0, 1]))
```
<!-- References -->
[Pyright]: https://github.com/microsoft/pyright
[the Python's dataclass]: https://docs.python.org/3/library/dataclasses.html
[xarray]: https://xarray.pydata.org/en/stable/index.html
[plotting]: https://xarray.pydata.org/en/stable/user-guide/plotting.html#simple-example
| text/markdown | null | Akio Taniguchi <a-taniguchi@mail.kitami-it.ac.jp> | null | null | MIT License Copyright (c) 2020-2026 Akio Taniguchi Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions: The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software. THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. | dataclasses, python, specifications, typing, xarray | [
"License :: OSI Approved :: MIT License",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.9",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: 3.13",
"Programming Language :: Python :: 3.14"
] | [] | null | null | <3.15,>=3.10 | [] | [] | [] | [
"numpy<3,>=2",
"typing-extensions<5,>=4",
"xarray<2027,>=2024"
] | [] | [] | [] | [
"homepage, https://astropenguin.github.io/xarray-dataclasses",
"repository, https://github.com/astropenguin/xarray-dataclasses"
] | uv/0.9.30 {"installer":{"name":"uv","version":"0.9.30","subcommand":["publish"]},"python":null,"implementation":{"name":null,"version":null},"distro":{"name":"Debian GNU/Linux","version":"12","id":"bookworm","libc":null},"system":{"name":null,"release":null},"cpu":null,"openssl_version":null,"setuptools_version":null,"rustc_version":null,"ci":true} | 2026-02-20T02:47:15.822898 | xarray_dataclasses-1.10.0.tar.gz | 90,800 | eb/e7/bcd2e3a825634fa8d335dc68a38a76933b9b61d639f117404203d25861cd/xarray_dataclasses-1.10.0.tar.gz | source | sdist | null | false | 218ab2bd746c258376a9fcd944567b69 | c0d2f301f9457863946eb7f2973da4c554735a8751df48ce16653ea98c702c5f | ebe7bcd2e3a825634fa8d335dc68a38a76933b9b61d639f117404203d25861cd | null | [
"LICENSE"
] | 635 |
2.4 | OpihiExarata | 2026.1.19 | Analysis software for the IRTF Opihi telescope. | # OpihiExarata
Software for the NASA Infrared Telescope Facility (IRTF) Opihi telescope, used primarily for solving asteroid ephemerides from astrometric solutions.
## Complete Documentation
[You can find the complete html documentation here.](https://psmd-iberutaru.github.io/OpihiExarata)
Alternatively, you may find other versions built with Sphinx at [/docs/build/](https://github.com/psmd-iberutaru/OpihiExarata/tree/master/docs/build)
| text/markdown | null | Sparrow <psmd.iberutaru@gmail.com> | null | null | null | null | [
"Development Status :: 2 - Pre-Alpha",
"Programming Language :: Python",
"Programming Language :: Python :: 3.9",
"Programming Language :: Python :: Implementation :: CPython",
"Programming Language :: Python :: Implementation :: PyPy"
] | [] | null | null | >=3.9 | [] | [] | [] | [
"astropy",
"matplotlib",
"numpy",
"pandas",
"pillow",
"plotly",
"pyside6",
"pyyaml",
"requests",
"scikit-image",
"scipy"
] | [] | [] | [] | [
"Homepage, http://irtfweb.ifa.hawaii.edu/~opihi/",
"Documentation, https://psmd-iberutaru.github.io/OpihiExarata",
"Issues, https://github.com/psmd-iberutaru/OpihiExarata/issues",
"Source, https://github.com/psmd-iberutaru/OpihiExarata/"
] | twine/6.2.0 CPython/3.9.25 | 2026-02-20T02:46:36.293298 | opihiexarata-2026.1.19.tar.gz | 5,026,264 | 61/74/00cc69c3f03d702d8ac0be9c9bdab9658783e043c9e21133920d4d5e1c16/opihiexarata-2026.1.19.tar.gz | source | sdist | null | false | e9ff9fc1f7c0fbdfcf7aa2145df94fe5 | 5cc868aa6eccb4daaac52374266434cee452018750e5f00a53759ce4aca784b4 | 617400cc69c3f03d702d8ac0be9c9bdab9658783e043c9e21133920d4d5e1c16 | MIT | [
"LICENSE"
] | 0 |
2.4 | verlex | 0.4.1 | Run your code on the cheapest cloud provider automatically - Uber for Cloud Computing | # Verlex
**Run your code in the cloud for the price of a coffee.**
Verlex is a Python SDK that lets you execute code on the cheapest available cloud infrastructure across AWS, GCP, and Azure — all with a single function call.
## Installation
```bash
pip install verlex
```
With ML dependencies:
```bash
pip install verlex[ml]
```
## Quick Start
```python
import verlex
def train_model():
import torch
model = torch.nn.Linear(100, 10)
# Your training code here...
return {"accuracy": 0.95}
# Run it in the cloud - that's it!
with verlex.GateWay(api_key="gw_your_key") as gw:
result = gw.run(train_model)
print(result)
```
## Basic Usage
### Context Manager (Recommended)
```python
import verlex
with verlex.GateWay(api_key="gw_your_key") as gw:
# Analyze resources your function needs
recommendation = gw.analyze(my_function)
print(f"Recommended: {recommendation.gpu_type}")
# Run in the cloud
result = gw.run(my_function)
```
### Specifying Resources
```python
with verlex.GateWay(api_key="gw_your_key") as gw:
result = gw.run(
train_model,
gpu="A100", # Specific GPU type
gpu_count=2, # Multiple GPUs
memory="64GB", # Memory requirement
timeout=7200, # 2 hour timeout
)
```
### Async Execution
```python
with verlex.GateWay(api_key="gw_your_key") as gw:
# Submit jobs (non-blocking)
job1 = gw.run_async(train_model_1)
job2 = gw.run_async(train_model_2)
# Wait for results when needed
result1 = job1.result()
result2 = job2.result()
```
## Pricing Modes
Choose your price-speed tradeoff with a single `fast` flag:
| Mode | Wait Time | Best For |
|------|-----------|----------|
| **Performance** (`fast=True`) | Immediate | Time-sensitive workloads |
| **Standard** (`fast=False`) | Up to 10 min | Batch jobs, cost-sensitive |
```python
# Performance mode - immediate execution
with verlex.GateWay(api_key="gw_your_key", fast=True) as gw:
result = gw.run(my_function)
# Standard mode (default) - wait for lower prices
with verlex.GateWay(api_key="gw_your_key") as gw:
result = gw.run(my_function)
```
## Authentication
### Option 1: Direct API Key
```python
with verlex.GateWay(api_key="gw_your_key") as gw:
result = gw.run(my_function)
```
### Option 2: Environment Variable
```bash
export VERLEX_API_KEY="gw_your_key"
```
```python
with verlex.GateWay() as gw:
result = gw.run(my_function)
```
## Automatic Cloud Offloading
Don't know which functions are heavy? Let Verlex figure it out:
```python
import verlex
verlex.overflow(fast=True)
# Your code runs normally. When CPU or memory exceeds 85%,
# functions are automatically offloaded to the cheapest cloud.
data = load_data()
result = train_model(data) # system overloaded? → cloud
evaluate(result) # resources free → runs locally
```
Install with: `pip install 'verlex[overflow]'`
## Agent Daemon
Monitor your system and offload heavy Python processes:
```bash
# Watch for heavy processes and offer to offload
verlex agent watch
# Auto-offload without prompting
verlex agent watch --auto
# Submit a script directly via source-code pipeline
verlex agent run train.py --gpu A100
```
Install with: `pip install 'verlex[agent]'`
## CLI
```bash
# Login
verlex login
# Run a script
verlex run train.py
# Run with specific GPU
verlex run train.py --gpu A100
# Check job status
verlex jobs
# View account info
verlex whoami
```
## Supported Cloud Providers
- **AWS** - EC2, with Spot instances (up to 90% off)
- **GCP** - Compute Engine, with Preemptible VMs (up to 91% off)
- **Azure** - VMs, with Spot instances (up to 81% off)
## Links
- **Website**: [verlex.dev](https://verlex.dev)
- **Documentation**: [verlex.dev/docs](https://verlex.dev/docs)
## Contact
- **Support**: support@verlex.dev
- **Sales**: sales@verlex.dev
- **General**: contact@verlex.dev
## License
Apache 2.0
| text/markdown | Verlex Team | null | Verlex Team | null | null | aws, azure, cloud, cloud-computing, cost-optimization, distributed-computing, gcp, gpu, machine-learning, orchestration, preemptible, price-comparison, serverless, spot-instances | [
"Development Status :: 4 - Beta",
"Environment :: Console",
"Framework :: AsyncIO",
"Intended Audience :: Developers",
"Intended Audience :: Science/Research",
"License :: OSI Approved :: Apache Software License",
"Operating System :: OS Independent",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.9",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Topic :: Internet :: WWW/HTTP :: HTTP Servers",
"Topic :: Scientific/Engineering :: Artificial Intelligence",
"Topic :: System :: Distributed Computing",
"Typing :: Typed"
] | [] | null | null | >=3.9 | [] | [] | [] | [
"cloudpickle>=3.0.0",
"httpx>=0.25.0",
"psutil>=5.9.0; extra == \"agent\"",
"rich>=13.0.0; extra == \"cli\"",
"black>=23.0.0; extra == \"dev\"",
"mypy>=1.0.0; extra == \"dev\"",
"pytest-asyncio>=0.21.0; extra == \"dev\"",
"pytest-cov>=4.0.0; extra == \"dev\"",
"pytest>=7.0.0; extra == \"dev\"",
"ruff>=0.1.0; extra == \"dev\"",
"psutil>=5.9.0; extra == \"overflow\"",
"aiohttp>=3.9.0; extra == \"server\"",
"anthropic>=0.18.0; extra == \"server\"",
"anyio>=4.0.0; extra == \"server\"",
"azure-identity>=1.15.0; extra == \"server\"",
"azure-mgmt-compute>=30.0.0; extra == \"server\"",
"azure-mgmt-containerinstance>=10.0.0; extra == \"server\"",
"boto3>=1.28.0; extra == \"server\"",
"fastapi>=0.100.0; extra == \"server\"",
"google-auth>=2.0.0; extra == \"server\"",
"google-cloud-billing>=1.0.0; extra == \"server\"",
"google-cloud-compute>=1.0.0; extra == \"server\"",
"google-cloud-run>=0.10.0; extra == \"server\"",
"prometheus-client>=0.19.0; extra == \"server\"",
"pydantic>=2.0.0; extra == \"server\"",
"pyjwt[crypto]>=2.8.0; extra == \"server\"",
"python-dotenv>=1.0.0; extra == \"server\"",
"python-multipart>=0.0.6; extra == \"server\"",
"pyyaml>=6.0.0; extra == \"server\"",
"redis>=5.0.0; extra == \"server\"",
"sentry-sdk[fastapi]>=1.39.0; extra == \"server\"",
"stripe>=7.0.0; extra == \"server\"",
"supabase>=2.0.0; extra == \"server\"",
"typer>=0.9.0; extra == \"server\"",
"uvicorn>=0.23.0; extra == \"server\"",
"websockets>=12.0; extra == \"server\""
] | [] | [] | [] | [
"Homepage, https://verlex.dev",
"Documentation, https://verlex.dev/docs"
] | twine/6.2.0 CPython/3.10.11 | 2026-02-20T02:46:21.979241 | verlex-0.4.1.tar.gz | 33,843 | 78/8c/6005ed86007cefd8d46b103d60e3b75f99ef258f4d41ee360d07e4ab05b2/verlex-0.4.1.tar.gz | source | sdist | null | false | d6d378955e242cffec9ff02b8cbb57af | 00212db227d0cedb204b28fae0a45209097bf2674d206e263ec84bcd4a3d89a6 | 788c6005ed86007cefd8d46b103d60e3b75f99ef258f4d41ee360d07e4ab05b2 | Apache-2.0 | [
"LICENSE"
] | 232 |
2.4 | centella-lang | 1.2.2 | The official Centella programming language compiler. | # Centella Programming Language (v1.2.2) ⚡
**Centella** is a modern, minimalist, and ultra-fast programming language designed specifically for **massive data processing** and business automation.
It combines the simplicity of a Python-inspired syntax (but in Spanish) with the raw power of a hybrid backend (Python frontend + LLVM/C backend), generating highly optimized native executables.
## 🚀 Key Features
* **⚡ Native performance**: Compiles directly to machine code using LLVM and a runtime written in C.
* **📂 Streaming I/O**: Processes multi-gigabyte text/CSV files line by line with constant memory usage.
* **🗣️ Spanish syntax**: `si`, `sino`, `mientras`, `imprimir`. Intuitive and easy to learn.
* **📊 Analytics functions**: Built-in primitives for statistics (`max`, `min`, `promedio`) and text (`contiene`, `empieza_con`).
* **🛠️ Modern tooling**: Official VS Code extension with syntax highlighting and IntelliSense.
## 📦 Installation
You can install the official compiler from PyPI:
```bash
pip install centella-lang
```
## ⚡ Quick Start
### 1. Variables & Math
```centella
sea x = 10
sea y = 20
imprimir "La suma es: " (x + y)
```
### 2. Boolean Logic (New!)
```centella
sea activo = verdadero
sea saldo = 0
si activo && saldo == 0 {
imprimir "Cuenta activa pero vacia"
}
```
### 3. User Functions (New!)
```centella
funcion cuadrado(n) {
retornar n * n
}
imprimir "El cuadrado de 5 es: " cuadrado(5)
```
### 4. Reading Data (CSV Processing)
Centella shines at processing big files. Use `procesar` to iterate over rows automatically.
**data.csv**
```csv
1,Laptop,1000
2,Mouse,20
```
**script.centella**
```centella
// 'procesar' opens the file and reads it line by line
// The variables id, prod, and precio are filled in automatically
procesar "data.csv" capturando (id, prod: texto, precio) {
sea iva = precio * 0.19
imprimir prod ": $" (precio + iva)
}
```
### 5. Writing Data
```centella
guardar "reporte.txt" {
escribir "Reporte de ventas generada por Centella"
escribir "======================================="
}
```
### 6. Interactive Input
```centella
imprimir "Ingresa tu edad:"
leer edad
si edad >= 18 {
imprimir "Eres mayor de edad."
}
```
## 🛠️ Usage
Save your code as `myscript.centella` and run:
```bash
centella myscript.centella
```
This will compile and execute your program instantly.
## 📄 License
MIT License. Created by Ermes Galvis. This project is open source. Enjoy programming!
| text/markdown | Centella Team | null | null | null | null | null | [] | [] | null | null | >=3.8 | [] | [] | [] | [] | [] | [] | [] | [] | twine/6.2.0 CPython/3.13.5 | 2026-02-20T02:46:02.427541 | centella_lang-1.2.2.tar.gz | 13,041 | ff/08/c8c529dcafbeec916651c36da78223445fac4c5a4f07d21f200d93541462/centella_lang-1.2.2.tar.gz | source | sdist | null | false | 2a17861ae93bcd2d6c1521573e4c41fb | f75d0e43d9e8e9d490c1270306148dcc8ce212fe066e7185ad0ffcfd742c6d7e | ff08c8c529dcafbeec916651c36da78223445fac4c5a4f07d21f200d93541462 | null | [] | 238 |
2.4 | memryapi | 1.0.0 | Official Python SDK for MemryAPI — Memory-as-a-Service for AI Applications. | # MemryAPI Python SDK
The Sovereign Persistence Layer for AI Agents.
## Installation
```bash
pip install memryapi
```
## Quick Start
```python
from memryapi import MemryAPI
client = MemryAPI("your-api-key")
# Store a memory
client.remember("user-123", "Prefers dark chocolate over milk chocolate")
# Recall memories by semantic similarity
result = client.recall("user-123", "What chocolate do they like?")
print(result.results[0].content)
# → "Prefers dark chocolate over milk chocolate"
```
## Session Helper
Avoid repeating `user_id` on every call:
```python
session = client.session("user-123")
session.remember("Has a golden retriever named Max")
session.remember("Works remotely from Austin, TX")
memories = session.recall("pets")
summary = session.summarize()
```
## LLM Wrapper (Experimental)
Automatically inject memory context into your LLM calls:
```python
from openai import OpenAI
openai = OpenAI()
def ask(context: str) -> str:
response = openai.chat.completions.create(
model="gpt-4",
messages=[
{"role": "system", "content": f"User context:\n{context}"},
{"role": "user", "content": "What should I get them for their birthday?"},
],
)
return response.choices[0].message.content
# Recalls memories, passes as context, saves the AI response
answer = client.wrap("user-123", ask, query="preferences interests")
```
## Context Manager
```python
with MemryAPI("your-api-key") as client:
client.remember("user-123", "Some fact")
result = client.recall("user-123", "query")
# HTTP client is automatically closed
```
## API Reference
| Method | Description |
|--------|-------------|
| `remember(user_id, text, metadata)` | Store a memory |
| `recall(user_id, query, top_k, threshold, time_weight)` | Semantic recall |
| `forget(memory_id)` | Delete a specific memory |
| `forget_all(user_id)` | Delete all user memories |
| `summarize(user_id, limit, save_as_memory)` | AI-powered summary |
| `session(user_id)` | Session-scoped client |
| `wrap(user_id, fn, query)` | Auto memory-augmented LLM calls |
## License
MIT
| text/markdown | null | MemryAPI <support@memryapi.com> | null | null | MIT | agents, ai, llm, long-term-memory, memory, openai, rag | [
"Development Status :: 4 - Beta",
"Intended Audience :: Developers",
"License :: OSI Approved :: MIT License",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.8",
"Programming Language :: Python :: 3.9",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Topic :: Scientific/Engineering :: Artificial Intelligence",
"Topic :: Software Development :: Libraries :: Python Modules"
] | [] | null | null | >=3.8 | [] | [] | [] | [
"httpx>=0.24.0"
] | [] | [] | [] | [
"Homepage, https://memryapi.com",
"Repository, https://github.com/aanveshh35/MemoryAPI",
"Documentation, https://memryapi.com/docs"
] | twine/6.2.0 CPython/3.14.0 | 2026-02-20T02:45:50.142592 | memryapi-1.0.0.tar.gz | 5,517 | cf/e7/fd50b8db59709d9ff8392695887e771fedfc6a81642a70c94fcdc2bddc35/memryapi-1.0.0.tar.gz | source | sdist | null | false | f0458eaa207d29c81d52c63a3eb563d2 | 90d3a084672d9bea19e13c4b114dcf43e244c2848bb71b9191de6a529568e2d3 | cfe7fd50b8db59709d9ff8392695887e771fedfc6a81642a70c94fcdc2bddc35 | null | [] | 238 |
2.4 | chartbook | 0.0.9 | A tool for generating chart documentation websites | # ChartBook
A developer platform for data science teams.
[](https://pypi.org/project/chartbook)
[](https://pypi.org/project/chartbook)
[](https://github.com/backofficedev/chartbook)
[](https://backofficedev.github.io/chartbook/)
Discover, document, and share data science work across your organization. ChartBook provides a centralized catalog for data pipelines, charts, and documentation—making it easy to find, understand, and reuse analytics work.
## Terminology
ChartBook supports two project types:
- **Pipeline** — A single analytics pipeline with its own charts, dataframes, and documentation
- **Catalog** — A collection of multiple pipelines aggregated into a unified documentation site
See the [Concepts](https://backofficedev.github.io/chartbook/user-guide/concepts.html) page for the full terminology including ChartBooks and ChartHub.
## Features
- **Pipeline Catalog** — Organize and discover data pipelines across your team
- **Documentation Generation** — Build searchable documentation websites from your analytics work
- **Data Governance** — Track data sources, licenses, and access permissions
- **Programmatic Data Access** — Load pipeline outputs directly into pandas or polars
- **Multi-Pipeline Catalogs** — Aggregate multiple pipelines into a single documentation site
## Installation
**Recommended:**
```bash
pip install "chartbook[all]"
```
This gives you everything: data loading, plotting utilities, and the CLI for building documentation.
**Minimal install** (data loading only):
```bash
pip install "chartbook[data]"
```
**Development:**
```bash
pip install -e ".[dev]"
```
## Quick Start
### Load data from a pipeline
```python
from chartbook import data
df = data.load(pipeline="fred_charts", dataframe="interest_rates")
```
### Build documentation
```bash
chartbook build
```
### Browse your catalog
```bash
# List all pipelines, dataframes, and charts
chartbook ls
# List dataframes only
chartbook ls dataframes
# Get path to a dataframe's parquet file
chartbook data get-path --pipeline fred_charts --dataframe interest_rates
```
See the [documentation](https://backofficedev.github.io/chartbook) for configuration options and detailed guides.
## Documentation
Full documentation is available at [backofficedev.github.io/chartbook](https://backofficedev.github.io/chartbook).
- [Getting Started](https://backofficedev.github.io/chartbook/getting-started.html)
- [Configuration Reference](https://backofficedev.github.io/chartbook/configuration.html)
- [CLI Reference](https://backofficedev.github.io/chartbook/cli-reference.html)
## Contributing
Contributions are welcome. See [CONTRIBUTING](https://backofficedev.github.io/chartbook/contributing.html) for guidelines.
## License
Modified BSD License | text/markdown | null | Jeremiah Bejarano <Jeremiah.Bejarano@gmail.com> | null | null | null | Analytics, Catalogs, Dashboards, Data | [
"Development Status :: 4 - Beta",
"Programming Language :: Python",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: 3.13",
"Programming Language :: Python :: Implementation :: CPython",
"Programming Language :: Python :: Implementation :: PyPy"
] | [] | null | null | >=3.10 | [] | [] | [] | [
"click>=8.1.7",
"tomli-w>=1.1.0",
"tomli>=2.2.1",
"ablog>=0.11.11; extra == \"all\"",
"cruft; extra == \"all\"",
"fredapi>=0.5.0; extra == \"all\"",
"jinja2>=3.1.4; extra == \"all\"",
"kaleido>=0.2.1; extra == \"all\"",
"linkify-it-py>=2.0.3; extra == \"all\"",
"markdown-it-py>=3.0.0; extra == \"all\"",
"matplotlib>=3.7.0; extra == \"all\"",
"myst-nb>=1.1.2; extra == \"all\"",
"myst-parser>=2.0.0; extra == \"all\"",
"packaging; extra == \"all\"",
"pandas>=2.1.4; extra == \"all\"",
"plotly>=5.24.0; extra == \"all\"",
"pluggy>=1.3.0; extra == \"all\"",
"polars>=1.9.0; extra == \"all\"",
"pyarrow; extra == \"all\"",
"pydata-sphinx-theme>=0.15.4; extra == \"all\"",
"python-decouple>=3.8; extra == \"all\"",
"sphinx-autodoc2>=0.5.0; extra == \"all\"",
"sphinx-book-theme>=1.1.3; extra == \"all\"",
"sphinx-copybutton>=0.5.2; extra == \"all\"",
"sphinx-design>=0.6.1; extra == \"all\"",
"sphinx-external-toc>=1.0.1; extra == \"all\"",
"sphinx<9.0,>=7.2.6; extra == \"all\"",
"packaging; extra == \"data\"",
"polars>=1.9.0; extra == \"data\"",
"pyarrow; extra == \"data\"",
"python-decouple>=3.8; extra == \"data\"",
"ablog>=0.11.11; extra == \"dev\"",
"colorama; extra == \"dev\"",
"cruft; extra == \"dev\"",
"doit>=0.36.0; extra == \"dev\"",
"fredapi>=0.5.0; extra == \"dev\"",
"jinja2>=3.1.4; extra == \"dev\"",
"jupytext>=1.16.0; extra == \"dev\"",
"kaleido>=0.2.1; extra == \"dev\"",
"linkify-it-py>=2.0.3; extra == \"dev\"",
"markdown-it-py>=3.0.0; extra == \"dev\"",
"matplotlib>=3.7.0; extra == \"dev\"",
"myst-nb>=1.1.2; extra == \"dev\"",
"myst-parser>=2.0.0; extra == \"dev\"",
"nbconvert>=7.0.0; extra == \"dev\"",
"packaging; extra == \"dev\"",
"pandas>=2.1.4; extra == \"dev\"",
"plotly>=5.24.0; extra == \"dev\"",
"pluggy>=1.3.0; extra == \"dev\"",
"polars>=1.9.0; extra == \"dev\"",
"pyarrow; extra == \"dev\"",
"pydata-sphinx-theme>=0.15.4; extra == \"dev\"",
"pytest; extra == \"dev\"",
"python-decouple>=3.8; extra == \"dev\"",
"sphinx-autodoc2>=0.5.0; extra == \"dev\"",
"sphinx-book-theme>=1.1.3; extra == \"dev\"",
"sphinx-copybutton>=0.5.2; extra == \"dev\"",
"sphinx-design>=0.6.1; extra == \"dev\"",
"sphinx-external-toc>=1.0.1; extra == \"dev\"",
"sphinx<9.0,>=7.2.6; extra == \"dev\"",
"fredapi>=0.5.0; extra == \"plotting\"",
"kaleido>=0.2.1; extra == \"plotting\"",
"matplotlib>=3.7.0; extra == \"plotting\"",
"plotly>=5.24.0; extra == \"plotting\"",
"pluggy>=1.3.0; extra == \"plotting\"",
"ablog>=0.11.11; extra == \"sphinx\"",
"jinja2>=3.1.4; extra == \"sphinx\"",
"linkify-it-py>=2.0.3; extra == \"sphinx\"",
"markdown-it-py>=3.0.0; extra == \"sphinx\"",
"myst-nb>=1.1.2; extra == \"sphinx\"",
"myst-parser>=2.0.0; extra == \"sphinx\"",
"pandas>=2.1.4; extra == \"sphinx\"",
"pydata-sphinx-theme>=0.15.4; extra == \"sphinx\"",
"sphinx-autodoc2>=0.5.0; extra == \"sphinx\"",
"sphinx-book-theme>=1.1.3; extra == \"sphinx\"",
"sphinx-copybutton>=0.5.2; extra == \"sphinx\"",
"sphinx-design>=0.6.1; extra == \"sphinx\"",
"sphinx-external-toc>=1.0.1; extra == \"sphinx\"",
"sphinx<9.0,>=7.2.6; extra == \"sphinx\""
] | [] | [] | [] | [
"Documentation, https://backofficedev.github.io/chartbook",
"Issues, https://github.com/backofficedev/chartbook/issues",
"Source, https://github.com/backofficedev/chartbook"
] | python-httpx/0.28.1 | 2026-02-20T02:44:35.823628 | chartbook-0.0.9.tar.gz | 11,016,443 | cc/6b/2c9d958d62a252fef94f00be5e18dc4fdb5e7b55814dcdcdf4daab576eee/chartbook-0.0.9.tar.gz | source | sdist | null | false | 4b55ed552a579c992733b1f9d781e896 | 31e49db2818f70882e1f718f5406c97a594f8c1be3d139709888facdfa977039 | cc6b2c9d958d62a252fef94f00be5e18dc4fdb5e7b55814dcdcdf4daab576eee | BSD-3-Clause | [
"LICENSE.md"
] | 240 |
2.4 | awx-delinea-secret-server-credential-plugin | 0.2.2 | AWX/AAP credential plugin for Delinea (Thycotic) Secret Server | # Delinea Secret Server — AWX/AAP Credential Plugin
<!-- Badges -->
[](https://github.com/acedya/tss-credential-plugin/actions/workflows/ci.yml)
[](https://github.com/acedya/tss-credential-plugin/actions/workflows/release.yml)
[](https://pypi.org/project/awx-delinea-secret-server-credential-plugin/)
[](https://pypi.org/project/awx-delinea-secret-server-credential-plugin/)
[](tests/)
[](LICENSE)
[](https://github.com/psf/black)
[](https://pycqa.github.io/isort/)
> Custom AWX/AAP credential plugin for **Delinea (Thycotic) Secret Server**.
> Uses the official **Delinea Python SDK** (`python-tss-sdk`) to authenticate via OAuth2 at job launch, retrieves a short-lived access token, and provides it through AWX credential linking — the **raw password is never exposed** to the running job.
---
## Architecture
```
┌───────────────────────────────┐
│ AAP / AWX │
│ │
│ ┌─────────────────────────┐ │ python-tss-sdk (OAuth2)
│ │ Delinea SS Credential │──│──────────────────────────────┐
│ │ (External – Plugin) │ │ │
│ │ base_url, user, pass │◄─│──────────────────────────────┤
│ └────────────┬────────────┘ │ { "access_token": ... } │
│ │ credential │ │
│ │ linking │ ┌─────────┴──────────┐
│ ▼ │ │ Delinea Secret │
│ ┌─────────────────────────┐ │ │ Server │
│ │ Target Credential │ │ │ (OAuth2 endpoint) │
│ │ (fields linked via │ │ └────────────────────┘
│ │ identifier dropdown) │ │
│ └────────────┬────────────┘ │
│ │ injected by │
│ │ target type │
│ ▼ │
│ ┌─────────────────────────┐ │
│ │ Ansible Job (playbook) │ │
│ │ │ │
│ │ TSS_TOKEN ✔ │ │
│ │ PASSWORD ✘ │ │
│ └─────────────────────────┘ │
└───────────────────────────────┘
```
---
## Table of Contents
- [Quick Start](#quick-start)
- [Development](#development)
- [Plugin Details](#plugin-details)
- [Testing](#testing)
- [CI/CD Pipeline](#cicd-pipeline)
- [Release Process](#release-process)
- [Deployment to AAP/AWX](#deployment-to-aapawx)
- [Credential Linking](#credential-linking)
- [Usage in Playbooks](#usage-in-playbooks)
- [Repository Hardening](#repository-hardening)
- [Contributing](#contributing)
---
## Quick Start
### Install
```bash
pip install awx-delinea-secret-server-credential-plugin
```
Then on every AWX/AAP node:
```bash
awx-manage setup_managed_credential_types
```
### Do I need a separate credential type in AWX?
**Yes — for injection.** This plugin is an *external credential source* (it resolves values). To inject those values into your Ansible jobs as environment variables or extra vars, you need a **target credential type** with injectors. See [Credential Linking](#credential-linking) for the recommended setup.
---
## Development
### Prerequisites
- Python 3.8+
- GNU Make
- Git
### Setup
```bash
git clone https://github.com/acedya/tss-credential-plugin.git
cd tss-credential-plugin
make install-dev # creates .venv, installs package + dev deps
```
### Makefile Reference
The Makefile is the **single source of truth** — CI workflows call `make` targets, so local and CI behavior stay perfectly aligned.
| Target | Description |
|--------|-------------|
| `make help` | Show all available targets |
| `make install-dev` | Install package with dev dependencies in `.venv` |
| `make format` | Auto-format code (black + isort) |
| `make lint` | CI-equivalent lint checks (black, isort, flake8, mypy) |
| `make test` | Run unit tests |
| `make test-ci` | CI-equivalent tests with coverage XML |
| `make build` | Build source + wheel distributions |
| `make release-check` | Build + twine check |
| `make ci` | Full CI-equivalent run: lint + test-ci + build |
| `make release-tag TAG=v0.2.1 [PUSH=1]` | Safe validated release tag creation |
| `make clean` | Remove caches, bytecode, build artifacts |
### Project Structure
```
.
├── credential_plugins/
│ ├── __init__.py
│ └── delinea_secret_server.py # Main plugin module
├── tests/
│ ├── __init__.py
│ └── test_delinea_credential_plugin.py
├── examples/
│ └── example_playbook.yaml
├── scripts/
│ └── release.sh # Safe tag helper
├── .github/workflows/
│ ├── ci.yml # Test / lint / build
│ └── release.yml # PyPI publish / GitHub Release
├── pyproject.toml # Package metadata + tool config
├── Makefile # Single source of truth for CI
├── CHANGELOG.md # Release notes
└── README.md
```
---
## Plugin Details
### Credential Input Fields
| Field | Type | Required | Secret | Description |
|-------|------|----------|--------|-------------|
| `base_url` | string | Yes | No | Base URL (e.g. `https://myserver/SecretServer`) |
| `username` | string | Yes | No | Application user name |
| `password` | string | Yes | Yes | Password (encrypted at rest by AAP) |
| `domain` | string | No | No | Application user domain |
### Injector Output
This plugin is an **external credential source** — it does _not_ define its own injectors.
AWX calls `backend(**kwargs)` and uses the returned value to populate a linked credential field.
The `identifier` metadata dropdown selects what the plugin returns:
| Identifier | Returns |
|------------|--------|
| `token` (default) | OAuth2 access token |
| `base_url` | Secret Server base URL (pass-through) |
To inject values as environment variables or extra vars, create a **target credential type**
with those injectors, then link its fields to this plugin (see [Credential Linking](#credential-linking) below).
### Implementation
- **`_get_authorizer(base_url, username, password, domain)`**
Creates a `PasswordGrantAuthorizer` or `DomainPasswordGrantAuthorizer` from `python-tss-sdk`.
The SDK handles the OAuth2 `password` grant internally.
- **`backend(**kwargs)`**
AWX entry point called at job launch. Receives all `fields` and `metadata` as keyword arguments.
Returns a **single string** based on the `identifier` metadata dropdown value (`token` or `base_url`).
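The two entry points above can be sketched as a minimal, self-contained snippet. Note this is a sketch of the dispatch logic only: the stub class stands in for `python-tss-sdk`'s `PasswordGrantAuthorizer` / `DomainPasswordGrantAuthorizer`, and the hard-coded token stands in for the SDK's real OAuth2 exchange.

```python
# Sketch only — a stub replaces the python-tss-sdk authorizers so this
# snippet runs standalone. The real plugin performs the OAuth2 password
# grant inside the SDK's get_access_token().
class StubAuthorizer:
    def __init__(self, base_url, username, password, domain=None):
        self.domain = domain

    def get_access_token(self):
        return "stub-token"  # SDK does the real OAuth2 exchange here


def _get_authorizer(base_url, username, password, domain=None):
    # Domain set -> domain-qualified grant; otherwise plain password grant.
    return StubAuthorizer(base_url, username, password, domain)


def backend(**kwargs):
    # AWX passes all fields and metadata as keyword arguments.
    identifier = kwargs.get("identifier", "token")
    if identifier == "token":
        authorizer = _get_authorizer(
            kwargs["base_url"], kwargs["username"],
            kwargs["password"], kwargs.get("domain"),
        )
        return authorizer.get_access_token()
    if identifier == "base_url":
        return kwargs["base_url"]  # pass-through
    raise ValueError(f"Unknown identifier: {identifier!r}")
```

The key property the tests verify is that `backend` always returns a single string chosen by `identifier`, never the raw password.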
---
## Testing
### Run Tests
```bash
make ci # full CI parity: lint + test + build
make test # unit tests only
make test-ci # tests with coverage XML
make test-verbose # verbose output
make lint # lint checks only
```
### Test Matrix
| Test | Description |
|------|-------------|
| `test_get_authorizer_without_domain` | Uses `PasswordGrantAuthorizer` when domain absent |
| `test_get_authorizer_with_domain` | Uses `DomainPasswordGrantAuthorizer` when domain set |
| `test_backend_returns_token` | Returns token when identifier is `token` |
| `test_backend_defaults_to_token` | Defaults to `token` when identifier omitted |
| `test_backend_token_with_domain` | Uses domain authorizer and returns token |
| `test_backend_returns_base_url` | Returns base URL when identifier is `base_url` |
| `test_backend_raises_on_unknown_identifier` | `ValueError` raised for unknown identifier |
| `test_backend_password_not_in_output` | Raw password never in plugin output |
| `test_backend_sdk_error_propagates` | SDK authentication errors propagate to AWX |
| `test_inputs_has_required_fields` | INPUTS declares expected authentication fields |
| `test_inputs_password_is_secret` | Password field is marked as secret |
| `test_inputs_metadata_has_identifier` | Metadata includes `identifier` dropdown |
| `test_inputs_identifier_has_choices` | Identifier has `token` / `base_url` choices |
| `test_inputs_identifier_has_default` | Identifier defaults to `token` |
| `test_inputs_required_includes_identifier` | `identifier` is listed as required |
| `test_credential_plugin_structure` | CredentialPlugin has exactly 3 fields |
| `test_credential_plugin_no_injectors` | Plugin does not include injectors |
| `test_credential_plugin_name` | Plugin name matches AWX UI display |
| `test_credential_plugin_inputs_is_inputs` | Plugin references module-level INPUTS |
| `test_credential_plugin_backend_is_callable` | Plugin backend is callable |
### Dependencies
`pytest`, `pytest-cov`, `black`, `isort`, `flake8`, `mypy` — all installed via `make install-dev`. Tests mock the SDK with `unittest.mock`.
---
## CI/CD Pipeline
### Design Principles
- **Makefile = source of truth**: workflows call `make` targets, never duplicate shell commands
- **Local reproducibility**: `make ci` ≡ `ci.yml`, `make release-check` ≡ `release.yml` build step
- **OIDC Trusted Publishing**: no API token secrets stored in GitHub
### Workflows
| Workflow | Trigger | Jobs |
|----------|---------|------|
| `ci.yml` | Push to `main`, pull requests | Test (matrix: 3.10, 3.11), Lint, Build |
| `release.yml` | Tag `v*.*.*` | PyPI publish + GitHub Release |
### Trusted Publishing Setup
This repository uses **PyPI OIDC Trusted Publishing** — no API token secrets required.
Create a trusted publisher configuration on [pypi.org](https://pypi.org):
| Setting | Value |
|---------|-------|
| Owner | Your GitHub org/user |
| Repository | `tss-credential-plugin` |
| Workflow | `release.yml` |
| Environment | `pypi` |
**Publish trigger:** strict `vX.Y.Z` tags, only if the tagged commit is on `main`.
Release notes are populated from `CHANGELOG.md`.
### Local Publish Fallback
Token-based publishing is available for emergencies:
```bash
make publish-pypi-token PYPI_API_TOKEN=pypi-...
```
---
## Release Process
### Branching Model — [GitHub Flow](https://docs.github.com/en/get-started/using-github/github-flow)
This project follows **GitHub Flow**, the simplest branching model:
1. **`main`** is always deployable
2. **Create a branch** from `main` with a descriptive name (e.g. `add-ssl-toggle`, `fix-token-parsing`)
3. **Commit** your changes and push early for visibility
4. **Open a pull request** to start discussion and trigger CI
5. **Review & approve** — CI must pass, at least one approval required
6. **Merge to `main`** — branch is deleted after merge
7. **Tag & release** when ready: `make release-tag TAG=vX.Y.Z PUSH=1`
### Creating a Release
```bash
# 1. Update CHANGELOG.md with the new version notes
# 2. Create and validate the tag
make release-tag TAG=v0.2.1
# 3. Push when ready (triggers PyPI publish + GitHub Release)
make release-tag TAG=v0.2.1 PUSH=1
```
### Safety Checks (`scripts/release.sh`)
The release helper enforces:
- Strict `vX.Y.Z` semver format
- Must be on `main` branch
- Clean git working tree
- Tag must not exist locally or on `origin`
- `make ci` must pass before tag creation
Server-side guard: `release.yml` verifies the tagged commit is an ancestor of `origin/main`.
---
## Deployment to AAP/AWX
### Containerised AAP (single-node / podman)
The plugin must be installed inside **both** controller containers (`automation-controller-web` and `automation-controller-task`).
**Install from GitHub (quickest):**
```bash
podman exec -it automation-controller-web awx-python -m pip install git+https://github.com/acedya/tss-credential-plugin.git
podman exec -it automation-controller-task awx-python -m pip install git+https://github.com/acedya/tss-credential-plugin.git
podman exec -it automation-controller-web awx-manage setup_managed_credential_types
```
**Install from a local wheel:**
```bash
# Build on your dev machine
make build
# Copy the wheel to the AAP host
scp dist/awx_delinea_secret_server_credential_plugin-*.whl admin@<aap-host>:/tmp/
# Copy into the containers
podman cp /tmp/awx_delinea_secret_server_credential_plugin-*.whl automation-controller-web:/tmp/
podman cp /tmp/awx_delinea_secret_server_credential_plugin-*.whl automation-controller-task:/tmp/
# Install
podman exec -it automation-controller-web awx-python -m pip install /tmp/awx_delinea_secret_server_credential_plugin-*.whl
podman exec -it automation-controller-task awx-python -m pip install /tmp/awx_delinea_secret_server_credential_plugin-*.whl
# Register
podman exec -it automation-controller-web awx-manage setup_managed_credential_types
```
> **Note:** `pip install` inside containers is ephemeral — reinstall after container restarts, or build a custom controller image for persistence.
### Standard (non-containerised) install
1. **Install the plugin**
```bash
awx-python -m pip install awx-delinea-secret-server-credential-plugin
```
2. **Register credential types**
```bash
awx-manage setup_managed_credential_types
```
### After installation
1. **Create a "Delinea Secret Server" credential** — fill in `base_url`, `username`, `password`, and optionally `domain`
2. **Link to a target credential** — see [Credential Linking](#credential-linking) below
---
## Credential Linking
This plugin is an **external credential source**. It authenticates to Secret Server and returns a value (token or base URL) that AWX injects into a linked credential field.
To use the plugin you need **two things** in AWX:
1. A **custom credential type** (defines the fields + injectors for your jobs)
2. A **Delinea Secret Server credential** (the source — authenticates to Secret Server)
Then you create a credential of your custom type and **link** its fields to the Delinea credential.
### Step 1 — Create the Target Credential Type
Go to **Administration → Credential Types → Add**.
| Setting | Value |
|---------|-------|
| **Name** | `Delinea Secret Server Token` (or any name you prefer) |
| **Description** | Injects a Delinea SS OAuth2 token and base URL |
**Input Configuration** (paste as YAML):
```yaml
fields:
- id: tss_token
label: TSS Token
type: string
secret: true
- id: tss_base_url
label: TSS Base URL
type: string
required:
- tss_token
- tss_base_url
```
**Injector Configuration** (paste as YAML):
```yaml
env:
TSS_TOKEN: '{{ tss_token }}'
TSS_BASE_URL: '{{ tss_base_url }}'
extra_vars:
tss_token: '{{ tss_token }}'
tss_base_url: '{{ tss_base_url }}'
```
> **Tip:** Adjust the injectors to your needs — if you only need env vars, remove the `extra_vars` block (and vice versa).
Click **Save**.
### Step 2 — Create the Source Credential (Delinea Secret Server)
Go to **Resources → Credentials → Add**.
| Setting | Value |
|---------|-------|
| **Name** | `Delinea SS - Production` (or any name) |
| **Credential Type** | `Delinea Secret Server` (the plugin type — appears after installing the plugin and running `awx-manage setup_managed_credential_types`) |
| **Secret Server URL** | `https://myserver/SecretServer` or `https://mytenant.secretservercloud.com` |
| **Username** | Your application user username |
| **Password** | The corresponding password |
| **Domain** | *(optional)* Your AD domain if using domain auth |
Click **Save**.
### Step 3 — Create the Target Credential and Link Fields
Go to **Resources → Credentials → Add**.
| Setting | Value |
|---------|-------|
| **Name** | `Delinea SS Token - Production` (or any name) |
| **Credential Type** | `Delinea Secret Server Token` (the custom type from Step 1) |
Now link each field to the source credential:
1. **TSS Token** field — click the **key icon** (🔑) next to the field:
- **Credential** → select `Delinea SS - Production`
- **Output value** → select `token`
2. **TSS Base URL** field — click the **key icon** (🔑) next to the field:
- **Credential** → select `Delinea SS - Production`
- **Output value** → select `base_url`
Click **Save**.
### Step 4 — Attach to a Job Template
Go to **Resources → Templates** → edit your Job Template.
In the **Credentials** section, add the `Delinea SS Token - Production` credential (the target from Step 3).
At launch, AWX will:
1. Call the Delinea plugin to authenticate and get a fresh OAuth2 token
2. Inject `TSS_TOKEN` and `TSS_BASE_URL` as environment variables
3. Inject `tss_token` and `tss_base_url` as extra vars
4. Your playbook can use either method to access the values
---
## Usage in Playbooks
### Via extra vars (recommended)
```yaml
- name: Retrieve a secret from Delinea Secret Server
ansible.builtin.debug:
msg: >-
{{ lookup('delinea.ss.tss', 42,
base_url=tss_base_url,
token=tss_token) }}
```
### Via environment variables
```yaml
- name: Use environment variables
ansible.builtin.debug:
msg: >-
Server: {{ lookup('env', 'TSS_BASE_URL') }}
Token: {{ lookup('env', 'TSS_TOKEN') }}
```
---
## Repository Hardening
Apply these in GitHub UI: **Settings → Rules → Rulesets**.
### Branch Protection
**`main`:**
- Require pull request with at least 1 approval
- Dismiss stale approvals on new commits
- Require status checks: CI jobs from `ci.yml`
- Require conversation resolution
- Block force pushes and branch deletion
### Tag Protection
**`v*.*.*` tags:**
- Restrict creation/update/deletion to maintainers only
- Works with local guard (`scripts/release.sh`) and workflow guard (`release.yml`)
### Environment Protection
| Environment | Configuration |
|-------------|---------------|
| `pypi` | Required reviewers (recommended), limit to protected branches/tags |
---
## Contributing
### Workflow
1. Create a branch from `main` with a descriptive name
2. Make changes, run `make format` before committing
3. Push and open a pull request — CI runs automatically
4. Get review, iterate, then merge to `main`
### Roadmap
- [ ] Client credentials grant (SDK-based auth)
- [ ] Configurable `verify_ssl` toggle in credential input
- [ ] Token caching for rapid successive lookups
- [ ] Custom Execution Environment image with plugin pre-installed
- [ ] Integration tests against a real Secret Server instance
---
## License
[Apache-2.0](LICENSE)
| text/markdown | kew | null | null | null | null | ansible, awx, aap, credential, plugin, delinea, thycotic | [
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.8",
"Programming Language :: Python :: 3.9",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Operating System :: OS Independent",
"Development Status :: 4 - Beta",
"Intended Audience :: System Administrators",
"Topic :: System :: Systems Administration"
] | [] | null | null | >=3.8 | [] | [] | [] | [
"python-tss-sdk>=1.2.2",
"pytest>=7.0.0; extra == \"dev\"",
"pytest-cov>=4.0.0; extra == \"dev\"",
"black>=23.0.0; extra == \"dev\"",
"flake8>=5.0.0; extra == \"dev\"",
"isort>=5.11.0; extra == \"dev\"",
"mypy>=0.990; extra == \"dev\""
] | [] | [] | [] | [
"Homepage, https://github.com/your-org/tss-credential-plugin",
"Repository, https://github.com/your-org/tss-credential-plugin.git",
"Issues, https://github.com/your-org/tss-credential-plugin/issues"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-20T02:44:29.213227 | awx_delinea_secret_server_credential_plugin-0.2.2.tar.gz | 12,207 | 52/15/4479b40b97f094e6ac56c0096892701d8959967be7b2f9b9aba1b8439477/awx_delinea_secret_server_credential_plugin-0.2.2.tar.gz | source | sdist | null | false | b9f66bf7f8916bf4491e4d319ba3cdf5 | d1b80d00502a6e98f3f100f283dfba87064b7a07f90c42525b27d4f75d541ce9 | 52154479b40b97f094e6ac56c0096892701d8959967be7b2f9b9aba1b8439477 | Apache-2.0 | [] | 242 |
2.1 | lar-engine | 1.6.0 | Lár: The PyTorch for Agents. A 'define-by-run' agentic framework. | <p align="center">
<img src="https://raw.githubusercontent.com/snath-ai/.github/main/assets/lar-logo.png" width="80" alt="Lár Logo" />
</p>
<p align="center"><em>Lár: The PyTorch for Agents</em></p>
<p align="center">
<a href="https://pypi.org/project/lar-engine/">
<img alt="PyPI - Version" src="https://img.shields.io/pypi/v/lar-engine?style=for-the-badge&color=blue">
</a>
<a href="https://pypi.org/project/lar-engine/">
<img alt="PyPI - Downloads" src="https://img.shields.io/pypi/dm/lar-engine?style=for-the-badge&color=blueviolet">
</a>
<a href="https://github.com/sponsors/axdithyaxo">
<img alt="Sponsor" src="https://img.shields.io/badge/Support-GitHub%20Sponsors-pink?style=for-the-badge&logo=github">
</a>
</p>
# Lár: The PyTorch for Agents
**Lár** (Irish for "core" or "center") is the open source standard for **Deterministic, Auditable, and Air-Gap Capable** AI agents.
It is a **"define-by-run"** framework that acts as a **Flight Recorder** for your agent, creating a complete audit trail for every single step.
> [!NOTE]
> **Lár is NOT a wrapper.**
> It is a standalone, ground-up engine designed for reliability. It does not wrap LangChain, OpenAI Swarm, or any other library. It is pure, dependency-lite Python code optimized for "Code-as-Graph" execution.
## The "Black Box" Problem
You are a developer launching a **mission-critical AI agent**. It works on your machine, but in production, it fails.
You don't know **why**, **where**, or **how much** it cost. You just get a 100-line stack trace from a "magic" framework.
## The "Glass Box" Solution
**Lár removes the magic.**
It is a simple engine that runs **one node at a time**, logging every single step to a forensic **Flight Recorder**.
This means you get:
1. **Instant Debugging**: See the exact node and error that caused the crash.
2. **Free Auditing**: A complete history of every decision and token cost, built-in by default.
3. **Total Control**: Build deterministic "assembly lines," not chaotic chat rooms.
> *"This demonstrates that for a graph without randomness or external model variability, Lár executes deterministically and produces identical state traces."*
*Stop guessing. Start building agents you can trust.*
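As a toy illustration of that loop — not the lar-engine API, all names here are invented — a "Flight Recorder" executor just runs one node at a time and appends every step to a history log:

```python
# Toy sketch of the "Flight Recorder" idea (invented names, not lar's API):
# run each node exactly once, in order, logging the state after every step.
def run_graph(nodes, state):
    history = []
    for name, fn in nodes:
        state = fn(dict(state))  # one node at a time
        history.append({"node": name, "state": dict(state)})
    return state, history

nodes = [
    ("load",  lambda s: {**s, "doc": "hello world"}),
    ("count", lambda s: {**s, "words": len(s["doc"].split())}),
]
final, log = run_graph(nodes, {})
```

If `count` raised, the log would end at the exact failing node with the exact state that caused it — which is the whole debugging story.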
## Why Lár is Better: The "Glass Box" Advantage
| Feature | The "Black Box" (LangChain / CrewAI) | The "Glass Box" (Lár) |
| :--- | :--- | :--- |
| **Debugging** | **A Nightmare.** When an agent fails, you get a 100-line stack trace from inside the framework's "magic" AgentExecutor. You have to guess what went wrong. | **Instant & Precise.** Your history log is the debugger. You see the exact node that failed (e.g., `ToolNode`), the exact error (`APIConnectionError`), and the exact state that caused it. |
| **Auditability** | **External & Paid.** "What happened?" is a mystery. You need an external, paid tool like LangSmith to add a "flight recorder" to your "black box." | **Built-in & Free.** The **"Flight Log"** (history log) is the core, default, open-source output of the `GraphExecutor`, built in from day one. |
| **Multi-Agent Collaboration** | **Chaotic "Chat Room."** Agents are put in a room to "talk" to each other. It's "magic," but it's uncontrollable. You can't be sure who will talk next or if they'll get stuck in a loop. | **Deterministic "Assembly Line."** You are the architect. You define the exact path of collaboration using `RouterNode` and `ToolNode`. |
| **Deterministic Control** | **None.** You can't guarantee execution order. The "Tweeter" agent might run before the "Researcher" agent is finished. | **Full Control.** The "Tweeter" (`LLMNode`) cannot run until the "RAG Agent" (`ToolNode`) has successfully finished and saved its result to the state. |
| **Data Flow** | **Implicit & Messy.** Agents pass data by "chatting." The `ToolNode`'s output might be polluted by another agent's "thoughts." | **Explicit & Hard-Coded.** The data flow is defined by you: `RAG Output -> Tweet Input`. The "Tweeter" only sees the data it's supposed to. |
| **Resilience & Cost** | **Wasteful & Brittle.** If the RAG agent fails, the Tweeter agent might still run with no data, wasting API calls and money. A loop of 5 agents all chatting can hit rate limits fast. | **Efficient & Resilient.** If the RAG agent fails, the Tweeter never runs. Your graph stops, saving you money and preventing a bad output. Your `LLMNode`'s built-in retry handles transient errors silently. |
| **Core Philosophy** | Sells "Magic." | Sells "Trust." |
---
## Universal Model Support: Powered by LiteLLM
**Lár runs on 100+ Providers.**
Because Lár is built on the robust **[LiteLLM](https://docs.litellm.ai/docs/)** adapter, you are not locked into one vendor.
Start with **OpenAI** for prototyping. Deploy with **Azure/Bedrock** for compliance. Switch to **Ollama** for local privacy. All with **Zero Refactoring**.
| **Task** | **LangChain / CrewAI** | **Lár (The Unified Way)** |
| :--- | :--- | :--- |
| **Switching Providers** | 1. Import new provider class.<br>2. Instantiate specific object.<br>3. Refactor logic. | **Change 1 string.**<br>`model="gpt-4o"` → `model="ollama/phi4"` |
| **Code Changes** | **High.** `ChatOpenAI` vs `ChatBedrock` classes. | **Zero.** The API contract is identical for every model. |
**[Read the Full LiteLLM Setup Guide](https://docs.snath.ai/guides/litellm_setup/)** to learn how to configure:
- **Local Models** (Ollama, Llama.cpp, LocalAI)
- **Cloud Providers** (OpenAI, Anthropic, Vertex, Bedrock, Azure)
- **Advanced Config** (Temperature, API Base, Custom Headers)
```python
# Want to save money? Switch to local.
# No imports to change. No logic to refactor.
# Before (Cloud)
node = LLMNode(model_name="gpt-4o", ...)
# After (Local - Ollama)
node = LLMNode(model_name="ollama/phi4", ...)
# After (Local - Generic Server)
node = LLMNode(
model_name="openai/custom",
generation_config={"api_base": "http://localhost:8080/v1"}
)
```
---
## Quick Start (`v1.4.0`)
**The fastest way to build an agent is the CLI.**
### 1. Install & Scaffold
```bash
pip install lar-engine
lar new agent my-bot
cd my-bot
poetry install # or pip install -e .
python agent.py
```
> This generates a production-ready folder structure with `pyproject.toml`, `.env`, and a template agent.
> *(For Lár v1.4.0+)*
### 2. The "Low Code" Way (`@node`)
Define nodes as simple functions. No boilerplate.
```python
from lar import node
@node(output_key="summary")
def summarize_text(state):
# Access state like a dictionary (New in v1.4.0!)
text = state["text"]
return llm.generate(text)
```
*(See `examples/v1_4_showcase.py` for a full comparison)*
## The Game Changer: Hybrid Cognitive Architecture
**Most frameworks are "All LLM." This doesn't scale.**
You cannot run 1,000 agents if every step costs $0.05 and takes 3 seconds.
### 1. The "Construction Site" Metaphor
* **The Old Way (Standard Agents):**
Imagine a construction site where **every single worker is a high-paid Architect**. To hammer a nail, they stop, "think" about the nail, write a poem about the nail, and charge you $5. It takes forever and costs a fortune.
* **The Lár Way (Hybrid Swarm):**
Imagine **One Architect** and **1,000 Robots**.
1. **The Architect (Orchestrator Node)**: Looks at the blueprint ONCE. Yells: *"Build the Skyscraper!"*
2. **The Robots (Swarm)**: They hear the order. They don't "think." They don't charge $5. They just **execute** thousands of steps instantly.
### 2. The Numbers Don't Lie
We prove this in **[`examples/scale/1_corporate_swarm.py`](examples/scale/1_corporate_swarm.py)**.
| Feature | Standard "Agent Builder" (LangChain/CrewAI) | Lár "Hybrid" Architecture |
| :--- | :--- | :--- |
| **Logic** | 100% LLM Nodes. Every step is a prompt. | **1% LLM (Orchestrator) + 99% Code (Swarm)** |
| **Cost** | **$$$** (60 LLM calls). | **$** (1 LLM call). |
| **Speed** | **Slow** (60s+ latency). | **Instant** (0.08s for 64 steps). |
| **Reliability** | **Low**. "Telephone Game" effect. | **High**. Deterministic execution. |
### 3. Case Study: The "Smoking Gun" Proof
We rebuilt the same "Corporate Swarm" in LangChain/LangGraph (`examples/comparisons/langchain_swarm_fail.py`) for comparison.
**It crashed at Step 25.**
```text
-> Step 24
CRASH CONFIRMED: Recursion limit of 25 reached without hitting a stop condition.
LangGraph Engine stopped execution due to Recursion Limit.
```
**Why this matters:**
1. **The "Recursion Limit" Crash**: Standard executors treat agents as loops. They cap at 25 steps to prevent infinite loops. Real work (like a 60-step swarm) triggers this safety switch.
2. **Clone the Patterns**: You don't need a framework. You need a pattern. We provide **21 single-file recipes** (Examples 1-21).
3. **The "Token Burn"**: Standard frameworks use an LLM to route every step ($0.60/run). Lár uses code ($0.00/run).
4. **The "Telephone Game"**: Passing data through 60 LLM layers corrupts context. Lár passes explicit state objects.
> "Lár turns Agents from 'Chatbot Prototyping' into 'High-Performance Software'."
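The split is easy to picture in plain Python. A self-contained sketch of the hybrid pattern (the `architect` function below is a stand-in for the single `LLMNode` call; everything else is the kind of pure code the swarm runs):

```python
def architect(blueprint: str) -> list[str]:
    """Stand-in for the ONE LLM call: turn a goal into a concrete plan."""
    return [f"step-{i}: {blueprint}" for i in range(64)]

def robot(step: str) -> str:
    """Pure code: no prompt, no token cost, microsecond latency."""
    return step.upper()

plan = architect("Build the Skyscraper")    # 1 LLM call total
results = [robot(step) for step in plan]    # 64 deterministic code steps
```

One call sets the direction; the other 64 steps cost nothing and never hallucinate.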
---
### A Simple Self-Correcting Loop
```mermaid
graph TD
A[Start] --> B[Step 0: PlannerNode - Writer]
B --> C1[Step 1: ToolNode - Tester]
C1 --> D{Step 2: RouterNode - Judge}
%% Success path
subgraph Success_Path
direction TB
G[Step 5: AddValueNode - Finalize]
end
%% Correction loop
subgraph Correction_Loop
direction TB
E[Step 3: LLMNode - Corrector]
F[Step 4: ClearErrorNode - Cleanup]
end
D -- Success --> G
D -- Failure --> E
E --> F
F --> C1
G --> H[End]
classDef default stroke:#8FA3B0, color:#FFFFFF, fill:#1E293B;
classDef decision stroke:#8FA3B0, color:#FFFFFF, fill:#1E293B;
classDef startend stroke:#8FA3B0, color:#FFFFFF, fill:#1E293B;
class A,H startend;
class B,C1,E,F,G default;
class D decision;
```
---
## The `Lár` Architecture: Core Primitives
You can build any agent with four core components:
1. **`GraphState`**: A simple, unified object that holds the "memory" of the agent. It is passed to every node, allowing one node to write data (`state.set(...)`) and the next to read it (`state.get(...)`).
2. **`BaseNode`**: The abstract class (the "contract") for all executable units. It enforces a single method: `execute(self, state)`. The `execute` method's sole responsibility is to perform its logic and return the *next* `BaseNode` to run, or `None` to terminate the graph.
3. **`GraphExecutor`**: The "engine" that runs the graph. It is a Python generator that runs one node, yields the execution log for that step, and then pauses, waiting for the next call.
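These three contracts are small enough to sketch with stand-ins. The classes below are illustrative substitutes, not the real `lar` implementations (which add audit logging, state diffing, and retries), but they show the execute-and-return-next protocol end to end:

```python
class GraphState(dict):
    """Minimal stand-in for lar's GraphState (the real one adds diff/audit logic)."""
    def set(self, key, value):
        self[key] = value
    # dict.get() already matches the read API

class UppercaseNode:
    """A node: execute() does its work, then returns the next node (or None)."""
    def __init__(self, next_node=None):
        self.next_node = next_node

    def execute(self, state):
        state.set("shout", state.get("text", "").upper())
        return self.next_node        # returning None terminates the graph

def run(start_node, state):
    """The executor contract in three lines: run, follow, stop at None."""
    node = start_node
    while node is not None:
        node = node.execute(state)
    return state
```

Running `run(UppercaseNode(), GraphState(text="ship it"))` leaves `state["shout"] == "SHIP IT"` — every Lár graph is this loop plus observability.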
**New in v1.3.0:** Modular observability with separated concerns:
- **`AuditLogger`**: Centralizes audit trail logging and file persistence (GxP-compliant)
- **`TokenTracker`**: Aggregates token usage across multiple providers and models
```python
# Default (automatic - recommended)
executor = GraphExecutor(log_dir="my_logs")
# Advanced (custom injection for cost aggregation across workflows)
from lar import AuditLogger, TokenTracker
custom_tracker = TokenTracker()
executor1 = GraphExecutor(logger=AuditLogger("logs1"), tracker=custom_tracker)
executor2 = GraphExecutor(logger=AuditLogger("logs2"), tracker=custom_tracker)
# Both executors share the same tracker → aggregated cost tracking
```
**See:** `examples/patterns/16_custom_logger_tracker.py` for full demo
### 4. Node Implementations (The Building Blocks)
- **`LLMNode`**: The "Thinker." Calls any major LLM API (Gemini, GPT-4, DeepSeek, etc.) to plan, reason, or write code.
- **`ToolNode`**: The "Actor." Executes deterministic Python functions (API calls, DB lookups). Separates success/error routing.
- **`RouterNode`**: The "Traffic Cop." Deterministically routes execution to the next node based on state values.
- **`BatchNode`** *(New)*: The "Parallelizer." Fans out multiple nodes to run concurrently on separate threads.
- **`ReduceNode`** *(New)*: The "Compressor." Summarizes multi-agent outputs and explicitly deletes raw memory to prevent context bloat.
- **`DynamicNode`** *(New)*: The "Architect." Can recursively generate and execute new sub-agents at runtime (Fractal Agency).
- **`HumanJuryNode`**: The "Guard." Pauses execution for explicit human approval via CLI.
- **`ClearErrorNode`**: The "Janitor." Clears error states to allow robust retry loops.
---
## Reasoning Models (System 2 Support)
**Lár treats "Thinking" as a first-class citizen.**
Native support for **DeepSeek R1**, **OpenAI o1**, and **Liquid**.
- **Audit Logic:** Distinct `<think>` tags are captured in metadata, keeping your main context window clean.
- **Robustness:** Handles malformed tags and fallback logic automatically.
- **Example:** `examples/reasoning_models/1_deepseek_r1.py`
## Why Lár?
- **Economic Constraints:** Guarantee that agents cannot exceed mathematically set Token Budgets before execution. (v1.6+)
- **Memory Compression:** Explicitly delete context from state via `ReduceNode` map-reduce patterns to prevent "black hole" token bloat. (v1.6+)
- **Fractal Agency:** Agents can spawn sub-agents recursively (`DynamicNode`). (v1.5+)
- **True Parallelism:** Run multiple agents in parallel threads (`BatchNode`). (v1.5+)
- **Lightweight:** No vector DB required. Just Python.
- **Model Agnostic:** Works with OpenAI, Gemini, Claude, DeepSeek, Ollama, etc.
- **Glass Box:** Every step, prompt, and thought is logged to `lar_logs/` for audit.
- **Automatic Capture**: The "thinking process" is extracted and saved to `run_metadata`.
- **Clean Output**: Your downstream nodes only see the final answer.
- **Robustness**: Works with both API-based reasoning (o1) and local raw reasoning (DeepSeek R1 via Ollama).
```python
# examples/reasoning_models/1_deepseek_r1.py
node = LLMNode(
model_name="ollama/deepseek-r1:7b",
prompt_template="Solve: {puzzle}",
output_key="answer"
)
# Result:
# state['answer'] = "The answer is 42."
# log['metadata']['reasoning_content'] = "<think>First, I calculate...</think>"
```
---
## Example "Glass Box" Audit Trail
You don't need to guess why an agent failed. `lar` is a "glass box" that provides a complete, auditable log for every run, especially failures.
This is a **real execution** log from a lar-built agent. The agent's job was to run a "Planner" and then a "Synthesizer" (both LLMNodes). The GraphExecutor caught a fatal error, gracefully stopped the agent, and produced this perfect audit trail.
**Execution Summary (Run ID: a1b2c3d4-...)**
| Step | Node | Outcome | Key Changes |
| :--- | :--- | :--- | :--- |
| 0 | `LLMNode` | `success` | `+ ADDED: 'search_query'` |
| 1 | `ToolNode` | `success` | `+ ADDED: 'retrieved_context'` |
| 2 | `LLMNode` | `success` | `+ ADDED: 'draft_answer'` |
| 3 | `LLMNode` | **`error`** | **`+ ADDED: 'error': "APIConnectionError"`** |
**This is the `lar` difference.** You know the *exact* node (`LLMNode`), the *exact* step (3), and the *exact reason* (`APIConnectionError`) for the failure. You can't debug a "black box," but you can **always** fix a "glass box."
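Because the trail is plain JSON, triage can be scripted. A hedged sketch (the field names `steps`, `node`, and `outcome` mirror the summary table above and are assumptions about the exact on-disk schema):

```python
import json

# A log shaped like the summary table above (schema assumed for illustration)
log = {
    "run_id": "a1b2c3d4",
    "steps": [
        {"step": 0, "node": "LLMNode", "outcome": "success"},
        {"step": 3, "node": "LLMNode", "outcome": "error",
         "state_diff": {"added": {"error": "APIConnectionError"}}},
    ],
}

failures = [s for s in log["steps"] if s["outcome"] == "error"]
for s in failures:
    print(f"Step {s['step']} ({s['node']}) failed: "
          f"{s['state_diff']['added']['error']}")
```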
---
## Cryptographic Audit Logs (v1.5.1+)
For enterprise environments (EU AI Act, SOC2, HIPAA), having a log isn't enough—you must prove the log wasn't tampered with.
Lár natively supports **HMAC-SHA256 Cryptographic Signing** of your audit logs. If an agent executes a high-stakes trade or a medical diagnosis, the `GraphExecutor` will mathematically sign the entire execution trace (including nodes visited, LLM reasoning, and token usage) using a Secret Key.
```python
from lar import GraphExecutor
# 1. Instantiating the executor with an HMAC secret turns on Cryptographic Auditing
executor = GraphExecutor(
log_dir="secure_logs",
hmac_secret="your_enterprise_secret_key"
)
# 2. Run your agent as normal. The resulting JSON log will contain a SHA-256 signature.
# 3. To verify the audit log later:
import hmac, hashlib, json
with open("secure_logs/run_xyz.json", "r") as f:
log_data = json.load(f)
saved_signature = log_data.pop("signature")
payload_str = json.dumps(log_data, sort_keys=True, separators=(',', ':'))
mac = hmac.new(b"your_enterprise_secret_key", payload_str.encode(), hashlib.sha256)
assert hmac.compare_digest(saved_signature, mac.hexdigest()), "Log Tampered With!"  # constant-time compare
```
**See the Compliance Pattern Library for full verification scripts:**
* `examples/compliance/8_hmac_audit_log.py` (Basic Authentication)
* `examples/compliance/9_high_risk_trading_hmac.py` (Algorithmic Trading / SEC)
* `examples/compliance/10_pharma_clinical_trials_hmac.py` (FDA 21 CFR Part 11)
## Just-in-Time Integrations
**Stop waiting for "HubSpot Support" to merge.**
Lár does not ship with 500+ brittle API wrappers. Instead, we ship the **Integration Builder**.
1. **Drag** [`IDE_INTEGRATION_PROMPT.md`](IDE_INTEGRATION_PROMPT.md) into your AI Chat (Cursor/Windsurf).
2. **Ask**: *"Make me a tool that queries the Stripe API for failed payments."*
3. **Done**: You get a production-ready, type-safe `ToolNode` in 30 seconds.
**[Read the Full Guide](https://docs.snath.ai/guides/integrations/)** | **[See Example](examples/patterns/7_integration_test.py)**
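The output of that workflow is ordinary, testable Python rather than a framework plugin. A hedged sketch of the kind of helper it might generate (the Stripe response shape here is assumed for illustration):

```python
def parse_failed_payments(payload: dict) -> list[dict]:
    """Extract failed charges from a Stripe-style response (shape assumed)."""
    return [c for c in payload.get("data", []) if c.get("status") == "failed"]

sample = {"data": [
    {"id": "ch_1", "status": "failed", "amount": 999},
    {"id": "ch_2", "status": "succeeded", "amount": 500},
]}
print(parse_failed_payments(sample))  # only ch_1 survives
```

You would then hand such a function to a `ToolNode` via `tool_function=...`, exactly as the support-agent example later in this README does.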
## Metacognition (Level 4 Agency)
**New in v1.3**: Lár introduces the **Dynamic Graph**, allowing agents to rewrite their own topology at runtime.
This unlocks capabilities previously impossible in static DAGs:
- **Self-Healing**: Detects errors and injects recovery subgraphs.
- **Tool Invention**: Writes and executes its own Python tools on the fly.
- **Adaptive Depth**: Decides between "Quick Answer" (1 node) vs "Deep Research" (N nodes).
- **Custom Observability**: Inject custom logger/tracker instances for advanced cost tracking and audit trail management (`examples/patterns/16_custom_logger_tracker.py`).
> [!IMPORTANT]
> **Risk Mitigation**: Self-Modifying Code is inherently risky. Lár ensures **Compliance** by:
> 1. Logging the exact JSON of the generated graph (Audit Trail).
> 2. Using a deterministic `TopologyValidator` (Non-AI) to prevent unauthorized tools, infinite loops, or **malformed graph structures** (Structural Integrity).
See `examples/metacognition/` for 5 working Proof-of-Concepts.
---
## The DMN Showcase: A Cognitive Architecture
**[snath-ai/DMN](https://github.com/snath-ai/DMN)** - The flagship demonstration of Lár's capabilities.
DMN (Default Mode Network) is a **complete cognitive architecture** built entirely on Lár, showcasing what's possible when you combine:
- **Bicameral Mind**: Fast/Slow thinking systems running in parallel
- **Sleep Cycles**: Automatic memory consolidation during "rest" periods
- **Episodic Memory**: Long-term storage with vectorized recall
- **Self-Awareness**: Metacognitive introspection and adaptive behavior
> [!NOTE]
> **DMN proves that Lár isn't just for chatbots.** It's a platform for building genuinely intelligent systems with memory, learning, and self-improvement capabilities.
### What Makes DMN Special?
| Feature | Traditional Agents | DMN (Built on Lár) |
|---------|-------------------|---------------------|
| **Memory** | Context window only | Persistent episodic memory with sleep consolidation |
| **Learning** | Static prompts | Learns from interactions and self-corrects |
| **Architecture** | Single-path logic | Dual-process (Fast + Slow) cognitive system |
| **Auditability** | Black box | Complete glass-box audit trail of every thought |
**[Explore the DMN Repository →](https://github.com/snath-ai/DMN)**
---
## Installation
This project is managed with [Poetry](https://python-poetry.org/).
1. **Clone the repository:**
```bash
git clone https://github.com/snath-ai/lar.git
cd lar
```
2. **Set Up Environment Variables**
Lár uses the unified LiteLLM adapter under the hood. This means if a model is supported by LiteLLM (100+ providers including Azure, Bedrock, VertexAI), it is supported by Lár.
Create a `.env` file:
```bash
# Required for running Gemini models:
GEMINI_API_KEY="YOUR_GEMINI_KEY_HERE"
# Required for running OpenAI models (e.g., gpt-4o):
OPENAI_API_KEY="YOUR_OPENAI_KEY_HERE"
# Required for running Anthropic models (e.g., Claude):
ANTHROPIC_API_KEY="YOUR_ANTHROPIC_KEY_HERE"
```
3. **Install dependencies:**
This command creates a virtual environment and installs all packages from `pyproject.toml`.
```bash
poetry install
```
---
## Ready to build with Lár? (Agentic IDEs)
Lár is designed for **Agentic IDEs** (Cursor, Windsurf, Antigravity) and strict code generation.
We provide a **3-Step Workflow** directly in the repo to make your IDE an expert Lár Architect.
### 1. The Strategy: "Reference, Don't Copy"
Instead of pasting massive prompts, simply **reference** the master files in the `lar/` directory.
### 2. The Workflow
1. **Context (The Brain)**: In your IDE chat, reference `@lar/IDE_MASTER_PROMPT.md`. This loads the strict typing rules and "Code-as-Graph" philosophy.
2. **Integrations (The Hands)**: Reference `@lar/IDE_INTEGRATION_PROMPT.md` to generate production-ready API wrappers in seconds.
3. **Scaffold (The Ask)**: Open `@lar/IDE_PROMPT_TEMPLATE.md`, fill in your agent's goal, and ask the IDE to "Implement this."
**Example Prompt to Cursor/Windsurf:**
> "Using the rules in @lar/IDE_MASTER_PROMPT.md, implement the agent described in @lar/IDE_PROMPT_TEMPLATE.md."
### 3. Learn by Example
We have provided **21 robust patterns** in the **[`examples/`](examples/)** directory, organized by category:
> **[View the Visual Library](https://snath.ai/examples)**: Browse all patterns with diagrams and use-cases on our website.
#### 1. Basic Primitives (`examples/basic/`)
| # | Pattern | Concept |
| :---: | :--- | :--- |
| **1** | **[`1_simple_triage.py`](examples/basic/1_simple_triage.py)** | Classification & Linear Routing |
| **2** | **[`2_reward_code_agent.py`](examples/basic/2_reward_code_agent.py)** | Code-First Agent Logic |
| **3** | **[`3_support_helper_agent.py`](examples/basic/3_support_helper_agent.py)** | Lightweight Tool Assistant |
| **4** | **[`4_fastapi_server.py`](examples/basic/4_fastapi_server.py)** | FastAPI Wrapper (Deploy Anywhere) |
#### 2. Core Patterns (`examples/patterns/`)
| # | Pattern | Concept |
| :---: | :--- | :--- |
| **1** | **[`1_rag_researcher.py`](examples/patterns/1_rag_researcher.py)** | RAG (ToolNode) & State Merging |
| **2** | **[`2_self_correction.py`](examples/patterns/2_self_correction.py)** | "Judge" Pattern & Error Loops |
| **3** | **[`3_parallel_execution.py`](examples/patterns/3_parallel_execution.py)** | Fan-Out / Fan-In Aggregation |
| **4** | **[`4_structured_output.py`](examples/patterns/4_structured_output.py)** | Strict JSON Enforcement |
| **5** | **[`5_multi_agent_handoff.py`](examples/patterns/5_multi_agent_handoff.py)** | Multi-Agent Collaboration |
| **6** | **[`6_meta_prompt_optimizer.py`](examples/patterns/6_meta_prompt_optimizer.py)** | Self-Modifying Agents (Meta-Reasoning) |
| **7** | **[`7_integration_test.py`](examples/patterns/7_integration_test.py)** | Integration Builder (CoinCap) |
| **8** | **[`8_ab_tester.py`](examples/patterns/8_ab_tester.py)** | A/B Tester (Parallel Prompts) |
| **9** | **[`9_resumable_graph.py`](examples/patterns/9_resumable_graph.py)** | Time Traveller (Crash & Resume) |
#### 3. Compliance & Safety (`examples/compliance/`)
| # | Pattern | Concept |
| :---: | :--- | :--- |
| **1** | **[`1_human_in_the_loop.py`](examples/compliance/1_human_in_the_loop.py)** | User Approval & Interrupts |
| **2** | **[`2_security_firewall.py`](examples/compliance/2_security_firewall.py)** | Blocking Jailbreaks with Code |
| **3** | **[`3_juried_layer.py`](examples/compliance/3_juried_layer.py)** | Proposer -> Jury -> Kernel |
| **4** | **[`4_access_control_agent.py`](examples/compliance/4_access_control_agent.py)** | **Flagship Access Control** |
| **5** | **[`5_context_contamination_test.py`](examples/compliance/5_context_contamination_test.py)** | Red Teaming: Social Engineering |
| **6** | **[`6_zombie_action_test.py`](examples/compliance/6_zombie_action_test.py)** | Red Teaming: Stale Authority |
| **7** | **[`7_hitl_agent.py`](examples/compliance/7_hitl_agent.py)** | Article 14 Compliance Node |
#### 4. High Scale (`examples/scale/`)
| # | Pattern | Concept |
| :---: | :--- | :--- |
| **1** | **[`1_corporate_swarm.py`](examples/scale/1_corporate_swarm.py)** | **Stress Test**: 60+ Node Graph |
| **2** | **[`2_mini_swarm_pruner.py`](examples/scale/2_mini_swarm_pruner.py)** | Dynamic Graph Pruning |
| **3** | **[`3_parallel_newsroom.py`](examples/scale/3_parallel_newsroom.py)** | True Parallelism (`BatchNode`) |
| **4** | **[`4_parallel_corporate_swarm.py`](examples/scale/4_parallel_corporate_swarm.py)** | Concurrent Branch Execution |
| **5** | **[`11_map_reduce_budget.py`](examples/advanced/11_map_reduce_budget.py)** | **Memory Compression & Token Budgets** |
#### 5. Metacognition (`examples/metacognition/`)
See the **[Metacognition Docs](https://docs.snath.ai/core-concepts/9-metacognition)** for a deep dive.
| # | Pattern | Concept |
| :---: | :--- | :--- |
| **1** | **[`1_dynamic_depth.py`](examples/metacognition/1_dynamic_depth.py)** | **Adaptive Complexity** (1 Node vs N Nodes) |
| **2** | **[`2_tool_inventor.py`](examples/metacognition/2_tool_inventor.py)** | **Self-Coding** (Writing Tools at Runtime) |
| **3** | **[`3_self_healing.py`](examples/metacognition/3_self_healing.py)** | **Error Recovery** (Injecting Fix Subgraphs) |
| **4** | **[`4_adaptive_deep_dive.py`](examples/metacognition/4_adaptive_deep_dive.py)** | **Recursive Research** (Spawning Sub-Agents) |
| **5** | **[`5_expert_summoner.py`](examples/metacognition/5_expert_summoner.py)** | **Dynamic Persona Instantiation** |
#### 6. Advanced Showcase (`examples/advanced/`)
| # | Pattern | Concept |
| :---: | :--- | :--- |
| **1** | **[`fractal_polymath.py`](examples/advanced/fractal_polymath.py)** | **Fractal Agency** (Recursion + Parallelism) |
---
## Example: Multi-Agent Orchestration (A Customer Support Agent)
The *real* power of `lar` is not just loops, but **multi-agent orchestration.**
Other frameworks use a "chaotic chat room" model, where agents *talk* to each other and you *hope* for a good result. `lar` is a deterministic **"assembly line."** You are the architect. You build a "glass box" graph that routes a task to specialized agents, guaranteeing order and auditing every step.
### 1. The "Glass Box" Flowchart
This is the simple, powerful "Customer Support" agent we'll build. It's a "Master Agent" that routes tasks to specialists.
```mermaid
graph TD
A[Start] --> B(LLMNode<br/>'Agent 1: Triage');
B --> C(LLMNode<br/>'Agent 2: Planner');
C --> D(ToolNode<br/>'Retriever');
%% This is the "hub" node
D --> E{RouterNode<br/>'Manager: Route By Category'};
%% Define the three parallel paths
E -- "BILLING_AGENT" --> F;
E -- "TECH_AGENT" --> G;
E -- "GENERAL_AGENT" --> H;
%% Define what's INSIDE the subgraphs
subgraph "Finance Department"
F(LLMNode<br/>'Agent 3: Finance Specialist');
end
subgraph "Tech Support Department"
G(LLMNode<br/>'Agent 4: Tech Specialist');
end
subgraph "General"
H(LLMNode<br/>'Agent 5: Generalist');
end
%% Define the "join" point
F --> I[AddValueNode<br/>'Final Answer'];
G --> I;
H --> I;
I --> J[END];
```
## Lár Engine Architecture: The Multi-Agent Assembly Line
The core of this application is a Multi-Agent Orchestration Graph: `Lár` forces you to define the assembly line, which guarantees predictable, auditable results.
## Compliance & Safety (EU AI Act Ready - Aug 2026)
Lár is engineered for **High-Risk AI Systems** under the **EU AI Act (2026)** and **FDA 21 CFR Part 11**.
| Regulation | Requirement | Lár Implementation |
| :--- | :--- | :--- |
| **EU AI Act Art. 12** | **Record-Keeping** | **State-Diff Ledger**: Automatically creates an immutable, forensic JSON log of every step, variable change, and model decision. |
| **EU AI Act Art. 13** | **Transparency** | **"Glass Box" Architecture**: No hidden prompts or "magic" loops. Every node is explicit code that can be audited by non-technical reviewers. |
| **EU AI Act Art. 14** | **Human Oversight** | **Interrupt Pattern**: Native support for "Human-in-the-Loop". Pause execution, modify state, and resume—ensuring human control over high-stakes decisions. |
| **FDA 21 CFR Part 11** | **Audit Trails** | **Cryptographic Determinism**: The engine is deterministic by design, ensuring reproducible runs for clinical validation. |
---
## Walkthrough: The Support Agent
### 1. Graph Flow (Execution Sequence)
The agent executes in a fixed, 6-step sequence. The graph is *defined backwards* in the code, but execution runs forwards:
| Step | Node Name | Lár Primitive | Action | State Output |
|-------------|-------------------|---------------|-------------------------------------------------------------------------------------------|--------------------|
| 0 (Start) | triage_node | LLMNode | Classifies the user's input (`{task}`) into a service category (BILLING, TECH, etc.). | category |
| 1 | planner_node | LLMNode | Converts the task into a concise, high-quality search query. | search_query |
| 2 | retrieve_node | ToolNode | Executes the local FAISS vector search and retrieves the relevant context. | retrieved_context |
| 3 | specialist_router | RouterNode | Decision point. Reads the category and routes the flow to the appropriate specialist. | (No change; routing) |
| 4 | billing/tech_agent| LLMNode | The chosen specialist synthesizes the final answer using the retrieved context. | agent_answer |
| 5 (End) | final_node | AddValueNode | Saves the synthesized answer as `final_response` and terminates the graph. | final_response |
### 2. Architectural Primitives Used
This demo relies on the core Lár primitives to function:
- `LLMNode`: Used 5 times (Triage, Plan, and the 3 Specialists) for all reasoning and synthesis steps.
- `RouterNode`: Used once (specialist_router) for the deterministic if/else branching logic.
- `ToolNode`: Used once (retrieve_node) to securely execute the local RAG database lookup.
- `GraphExecutor`: The engine that runs this entire sequence and produces the complete audit log
### 3. The Full Code
This is the full logic from `support_app.py`. It's just a clean, explicit Python script.
```python
'''
====================================================================
ARCHITECTURE NOTE: Defining the Graph Backwards
The Lár Engine uses a "define-by-run" philosophy. Because a node
references the *next_node* object (e.g., next_node=planner_node),
the nodes MUST be defined in Python in the REVERSE order of execution
to ensure the next object already exists in memory.
Execution runs: START (Triage) -> END (Final)
Definition runs: END (Final) -> START (Triage)
====================================================================
'''
from lar import GraphState, GraphExecutor, LLMNode, ToolNode, RouterNode, AddValueNode
from lar.utils import compute_state_diff # (Used by executor)
# 1. Define the "choice" logic for our Router
def triage_router_function(state: GraphState) -> str:
"""Reads the 'category' from the state and returns a route key."""
category = state.get("category", "GENERAL").strip().upper()
if "BILLING" in category:
return "BILLING_AGENT"
elif "TECH_SUPPORT" in category:
return "TECH_AGENT"
else:
return "GENERAL_AGENT"
# 2. Define the agent's nodes (the "bricks")
# We build from the end to the start.
# --- The End Nodes (the destinations) ---
final_node = AddValueNode(key="final_response", value="{agent_answer}", next_node=None)
critical_fail_node = AddValueNode(key="final_status", value="CRITICAL_FAILURE", next_node=None)
# --- The "Specialist" Agents ---
billing_agent = LLMNode(
model_name="gemini-1.5-pro",
prompt_template="You are a BILLING expert. Answer '{task}' using ONLY this context: {retrieved_context}",
output_key="agent_answer",
next_node=final_node
)
tech_agent = LLMNode(
model_name="gemini-1.5-pro",
prompt_template="You are a TECH SUPPORT expert. Answer '{task}' using ONLY this context: {retrieved_context}",
output_key="agent_answer",
next_node=final_node
)
general_agent = LLMNode(
model_name="gemini-1.5-pro",
prompt_template="You are a GENERAL assistant. Answer '{task}' using ONLY this context: {retrieved_context}",
output_key="agent_answer",
next_node=final_node
)
# --- The "Manager" (Router) ---
specialist_router = RouterNode(
decision_function=triage_router_function,
path_map={
"BILLING_AGENT": billing_agent,
"TECH_AGENT": tech_agent,
"GENERAL_AGENT": general_agent
},
default_node=general_agent
)
# --- The "Retriever" (Tool) ---
retrieve_node = ToolNode(
tool_function=retrieve_relevant_chunks, # This is our local FAISS search
input_keys=["search_query"],
output_key="retrieved_context",
next_node=specialist_router,
error_node=critical_fail_node
)
# --- The "Planner" (LLM) ---
planner_node = LLMNode(
model_name="gemini-1.5-pro",
prompt_template="You are a search query machine. Convert this task to a search query: {task}. Respond with ONLY the query.",
output_key="search_query",
next_node=retrieve_node
)
# --- The "Triage" Node (The *real* start) ---
triage_node = LLMNode(
model_name="gemini-1.5-pro",
prompt_template="You are a triage bot. Classify this task: \"{task}\". Respond ONLY with: BILLING, TECH_SUPPORT, or GENERAL.",
output_key="category",
next_node=planner_node
)
# 3. Run the Agent
executor = GraphExecutor()
initial_state = {"task": "How do I reset my password?"}
result_log = list(executor.run_step_by_step(
start_node=triage_node,
initial_state=initial_state
))
# 4. The "Deploy Anywhere" Feature
# Serialize your entire graph logic to a portable JSON schema.
# This file can be versioned in git or imported into Snath Cloud.
executor.save_to_file("support_agent_v1.json")
print("Agent serialized successfully. Ready for deployment.")
'''
The "glass box" log for Step 0 will show:
"state_diff": {"added": {"category": "TECH_SUPPORT"}}
The log for Step 1 will show:
"Routing to LLMNode" (the tech_agent)
'''
```
---
## Ready to Build a Real Agent?
We have built two "killer demos" that prove this "glass box" model. You can clone, build, and run them today.
- **[snath-ai/DMN](https://github.com/snath-ai/DMN)**: **The Flagship Showcase.** A cognitive architecture with a "Bicameral Mind" (Fast/Slow) that sleeps, dreams, and consolidates long-term memory to solve catastrophic forgetting.
- **[`examples/compliance/4_access_control_agent.py`](examples/compliance/4_access_control_agent.py)**: **The Enterprise Flagship.** A "Juried Layer" demo that combines LLM Reasoning, Deterministic Policy, and Human-in-the-Loop Interrupts for secure infrastructure access.
- **[snath-ai/rag-demo](https://github.com/snath-ai/rag-demo)**: A complete, self-correcting RAG agent that uses a local vector database.
- **[snath-ai/customer-support-demo](https://github.com/snath-ai/customer-support-demo)**: The Customer Support agent described above.
- **[snath-ai/code-repair-demo](https://github.com/snath-ai/code-repair-demo)**: A Self-Healing CI/CD agent that writes tests, detects failures, and patches its own code in a loop.
### Show Your Agents are Auditable
- If you build an agent using the Lár Engine, you are building a **dependable, verifiable system**. Help us spread the philosophy of the **"Glass Box"** by displaying the badge below in your project's README.
- By adopting this badge, you signal to users and collaborators that your agent is built for **production reliability and auditability.**
**Add the Auditable badge to your project:**
[](https://docs.snath.ai)
**Badge Markdown:**
```markdown
[](https://docs.snath.ai)
```
## Ready for Production?
Lár is designed to be deployed as a standard Python library.
Read our **[Deployment Guide](https://docs.snath.ai/guides/deployment/)** to learn how to wrap your graph in **FastAPI** and deploy to AWS/Heroku.
## Author
**Lár** was created by **[Aadithya Vishnu Sajeev](https://github.com/axdithyaxo)**.
## Support the Project
Lár is an open-source agent framework built to be clear, debuggable, and developer-friendly.
If this project helps you, consider supporting its development through GitHub Sponsors.
Become a sponsor → [Sponsor on GitHub](https://github.com/sponsors/axdithyaxo)
Your support helps me continue improving the framework and building new tools for the community.
## Contributing
We welcome contributions to **`Lár`**.
To get started, please read our **[Contribution Guidelines](CONTRIBUTING.md)** on how to report bugs, submit pull requests, and propose new features.
## License
**`Lár`** is licensed under the `Apache License 2.0`.
This means:
- You are free to use Lár in personal, academic, or commercial projects.
- You may modify and distribute the code.
- You MUST retain the `LICENSE` and the `NOTICE` file.
- If you distribute a modified version, you must document what you changed.
`Apache 2.0` protects the original author (Aadithya Vishnu Sajeev) while encouraging broad adoption and community collaboration.
For developers building on Lár: please ensure that the `LICENSE` and `NOTICE` files remain intact to preserve full legal compatibility with the `Apache 2.0` terms.
| text/markdown | Aadithya Vishnu Sajeev | axdithya@snath.ai | Aadithya Vishnu Sajeev | axdithya@snath.ai | Apache-2.0 | null | [
"License :: OSI Approved :: Apache Software License",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12"
] | [] | null | null | <4.0,>=3.11 | [] | [] | [] | [
"click<8.1.0",
"deepdiff<9.0.0,>=8.6.1",
"filelock<4.0.0,>=3.20.1",
"litellm<2.0.0,>=1.80.0",
"networkx<4.0.0,>=3.0.0",
"pydantic<3.0.0,>=2.0.0",
"rich<15.0.0,>=14.2.0",
"setuptools<79.0.0,>=78.0.0",
"typer<0.13.0,>=0.12.0",
"urllib3<3.0.0,>=2.6.0"
] | [] | [] | [] | [] | twine/6.1.0 CPython/3.13.7 | 2026-02-20T02:44:15.496225 | lar_engine-1.6.0.tar.gz | 56,963 | 00/e4/c0e104cf236b0439894854c0faaefa6a745c96f563e9fe54709d698285a9/lar_engine-1.6.0.tar.gz | source | sdist | null | false | 97d839901191d82d11a6f98b71d57157 | 1efc8b43dbc08251864e63f4c824fc0c5fa226773f0a540190dd606353d767ef | 00e4c0e104cf236b0439894854c0faaefa6a745c96f563e9fe54709d698285a9 | null | [] | 234 |
2.4 | bonepick | 0.2.0 | CLI tool for training efficient CPU-based text quality classifiers and annotating data for distillation of classifiers. |
<p align="center">
<img src="https://github.com/allenai/olmo-bonepick/blob/main/assets/logo.png?raw=true" alt="Olmo Bonepick library logo" width="500"/>
</p>
`bonepick` is a CLI tool for training efficient text quality classifiers that run on CPU. It supports [**Model2Vec**][1] (static embeddings) and [**FastText**][2] classifiers, with built-in tools for data preparation, LLM-based annotation, batch annotation via async APIs, calibration evaluation, and model distillation.
## Installation
From PyPI:
```
pip install bonepick
```
From source:
```shell
git clone https://github.com/allenai/olmo-bonepick.git
cd olmo-bonepick
uv sync
```
### Optional Dependencies
The `annotate` extra provides tools for using LLM APIs to label data:
```shell
uv sync --extra annotate
```
This enables the `annotate-dataset`, `batch-annotate-submit`, `batch-annotate-retrieve`, `list-prompts`, `annotation-agreement`, and `label-distribution` commands for automated data annotation using LLM providers via the `lm-deluge` library.
The `distill` extra provides tools for distilling Sentence Transformer models to Model2Vec:
```shell
uv sync --extra distill
```
Install both at once:
```shell
uv sync --extra annotate --extra distill
```
## Data Format
Datasets are stored as compressed JSONL files (`.jsonl.zst`, `.jsonl.gz`, or `.jsonl`) in `train/` and `test/` subdirectories. Each row must have a text field and a label field.
```
dataset/
├── train/
│ ├── shard_0.jsonl.zst
│ └── shard_100000.jsonl.zst
└── test/
└── shard_0.jsonl.zst
```
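As a sketch of producing a shard in one of the supported formats (`.jsonl.gz`) with nothing but the standard library — the `text`/`score` field names here are illustrative, not required:

```python
import gzip
import json
import os
import tempfile

# Hypothetical rows; use whatever field names your pipeline expects.
rows = [
    {"text": "An introduction to photosynthesis.", "score": 1},
    {"text": "lol click here", "score": 0},
]

root = tempfile.mkdtemp()
train_dir = os.path.join(root, "dataset", "train")
os.makedirs(train_dir)

# One JSON object per line, gzip-compressed.
shard = os.path.join(train_dir, "shard_0.jsonl.gz")
with gzip.open(shard, "wt", encoding="utf-8") as f:
    for row in rows:
        f.write(json.dumps(row) + "\n")

# Reading it back recovers the original rows.
with gzip.open(shard, "rt", encoding="utf-8") as f:
    loaded = [json.loads(line) for line in f]
```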
## Data Preparation Pipeline
### 1. Import from HuggingFace
Download a HuggingFace dataset to local JSONL format:
```shell
uv run bonepick import-hf-dataset \
-n HuggingFaceFW/fineweb-edu-llama3-annotations \
-o data/fineweb-edu-llama3-annotations \
--test-split 0.1
```
### 2. Transform Labels (Optional)
Use jq expressions to reshape fields. Common use case: binarize multi-class labels.
```shell
# Binarize scores: 0-1 → 0 (low quality), 2-5 → 1 (high quality)
uv run bonepick transform-dataset \
--input-dir data/fineweb-edu-llama3-annotations \
--output-dir data/fineweb-edu-binary \
-l '{score: (if .score < 2 then 0 else 1 end)}'
# Or use string labels
uv run bonepick transform-dataset \
--input-dir data/fineweb-edu-llama3-annotations \
--output-dir data/fineweb-edu-binary \
-l '{score: (if .score < 2 then "neg" else "pos" end)}'
```
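In plain Python, the first jq expression above amounts to the following (a sketch, not bonepick's code): scores below 2 become 0 (low quality), everything else becomes 1.

```python
def binarize(row):
    # Mirrors: {score: (if .score < 2 then 0 else 1 end)}
    return {"score": 0 if row["score"] < 2 else 1}

# Scores 0-1 map to 0; scores 2-5 map to 1.
labels = [binarize({"score": s})["score"] for s in [0, 1, 2, 5]]
```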
### 3. Balance Dataset (Optional)
Balance the dataset so each label has equal representation. Useful when one class significantly outnumbers others:
```shell
uv run bonepick balance-dataset \
--input-dir data/fineweb-edu-binary \
--output-dir data/fineweb-edu-binary-balanced \
--seed 42
```
Supports multiple input directories:
```shell
uv run bonepick balance-dataset \
-i data/dataset1 \
-i data/dataset2 \
-o data/combined-balanced \
--seed 42
```
### 3a. Sample Dataset (Optional)
Create a smaller random sample of a dataset. Useful for quick experiments or when you need a subset:
```shell
# Sample 10% of the dataset
uv run bonepick sample-dataset \
-i data/fineweb-edu-binary \
-o data/fineweb-edu-sample \
--sampling-rate 0.1
# Or specify a target size
uv run bonepick sample-dataset \
-i data/fineweb-edu-binary \
-o data/fineweb-edu-sample \
--target-size 500MB
# Supports multiple input directories
uv run bonepick sample-dataset \
-i data/dataset1 \
-i data/dataset2 \
-o data/combined-sample \
--target-size 1GB
```
### 3b. Reshard Dataset (Optional)
Combine multiple small files into a specified number of larger files with roughly equal sizes. Useful for reducing I/O overhead and creating evenly-sized shards:
```shell
# Reshard into 10 output files
uv run bonepick reshard-dataset \
-d data/fineweb-edu-binary \
-o data/fineweb-edu-resharded \
-n 10
# Create train/test splits during resharding
uv run bonepick reshard-dataset \
-d data/raw-dataset \
-o data/split-dataset \
-n 10 \
--test-split-frac 0.1
# Create train/valid/test splits
uv run bonepick reshard-dataset \
-d data/raw-dataset \
-o data/split-dataset \
-n 10 \
--test-split-frac 0.1 \
--valid-split-frac 0.05
# Use more processes for faster resharding
uv run bonepick reshard-dataset \
-d data/large-dataset \
-o data/resharded \
-n 20 \
-p 8
```
The command uses a greedy bin packing algorithm to ensure output files have roughly equal sizes. It supports multiple input directories via repeated `-d` flags, and can optionally create train/test/valid splits with `--test-split-frac` and `--valid-split-frac`.
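The greedy strategy can be sketched in a few lines: assign each file, largest first, to whichever output shard is currently smallest (bonepick's actual implementation may differ in its details):

```python
import heapq

def pack(sizes, n_bins):
    """Greedy bin packing: largest item first, into the lightest bin."""
    heap = [(0, i) for i in range(n_bins)]  # (total_size, bin_index)
    heapq.heapify(heap)
    bins = [[] for _ in range(n_bins)]
    for size in sorted(sizes, reverse=True):
        total, i = heapq.heappop(heap)
        bins[i].append(size)
        heapq.heappush(heap, (total + size, i))
    return bins

# Six files of uneven sizes end up in two equally sized shards.
bins = pack([90, 80, 30, 20, 10, 10], n_bins=2)
```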
### 4a. Normalize Text (for Model2Vec)
Apply text normalization before training Model2Vec classifiers:
```shell
uv run bonepick normalize-dataset \
--input-dir data/fineweb-edu-binary \
--output-dir data/fineweb-edu-binary-normalized \
-n plsfix
```
Available normalizers: `whitespace`, `plsfix`, `tokenizer`, `ultrafine`, `ultrafine_commits`, `hyperfine`, `hyperfine_code`, `potion`, `potion_code`
### 4b. Convert to FastText Format (for FastText)
Convert JSONL to FastText's `__label__<label> <text>` format:
```shell
uv run bonepick convert-to-fasttext \
--input-dir data/fineweb-edu-binary \
--output-dir data/fasttext-fineweb-edu-binary \
-n ultrafine
```
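The conversion itself is simple to picture; this sketch (not bonepick's code, and field names are illustrative) shows one row becoming one `__label__<label> <text>` line, with internal newlines collapsed so each example stays on a single line:

```python
def to_fasttext(row, text_field="text", label_field="score"):
    # Collapse all whitespace (including newlines) to single spaces.
    text = " ".join(str(row[text_field]).split())
    return f"__label__{row[label_field]} {text}"

line = to_fasttext({"text": "A clean\nparagraph.", "score": 1})
```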
#### Auto-binning Numeric Labels
For datasets with continuous or many discrete numeric labels, use `--auto N` to automatically bin labels into N equal-count (quantile-based) bins:
```shell
# Bin numeric scores into 5 quantile-based bins
uv run bonepick convert-to-fasttext \
--input-dir data/scored-dataset \
--output-dir data/fasttext-binned \
--label-expression '.score' \
--auto 5 \
-n ultrafine
```
This performs a two-pass operation:
1. **Pass 1**: Reads all training labels to compute quantile boundaries
2. **Pass 2**: Converts data using the computed bins
The output shows bin edges and sample distribution:
```
Bin edges and labels (equal-count/quantile bins):
bin_0: [0.0000, 11.0000)
bin_1: [11.0000, 13.0000)
bin_2: [13.0000] (single-value bin)
bin_3: (13.0000, 15.0000)
bin_4: [15.0000, 19.0000)
```
Single-value bins (where many samples share the same value) are supported and displayed with `[value]` notation. The bin mapping is saved in the output `report.yaml` for reference.
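Pass 1 is essentially a quantile computation over the training labels. A rough stdlib sketch (bonepick's exact edge handling, e.g. for single-value bins, may differ):

```python
import bisect
import statistics

# Hypothetical training scores.
scores = [0, 3, 7, 11, 11, 12, 13, 13, 13, 14, 15, 16, 18, 19]
n_bins = 5

# N-1 interior cut points give N equal-count bins.
edges = statistics.quantiles(scores, n=n_bins)

def bin_label(score):
    # A score falls into the bin whose edges bracket it.
    return f"bin_{bisect.bisect_right(edges, score)}"
```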
### 5. Count Tokens (Optional)
Count the total number of tokens in a dataset using a specified tokenizer. Useful for understanding dataset size and token distribution:
```shell
# Count tokens using default tokenizer (bundled dolma2 tokenizer)
uv run bonepick count-tokens \
-d data/fineweb-edu-binary
# Use a custom tokenizer
uv run bonepick count-tokens \
-d data/fineweb-edu-binary \
-t microsoft/deberta-base
# Custom field extraction with JQ expression
uv run bonepick count-tokens \
-d data/custom-dataset \
-i ".content"
# Count tokens across multiple datasets
uv run bonepick count-tokens \
-d data/dataset1 \
-d data/dataset2 \
-d data/dataset3
# Use more processes for faster counting
uv run bonepick count-tokens \
-d data/large-dataset \
-p 16
```
The command outputs:
- Total files processed
- Total token count
- Total dataset size in bytes
- Average tokens per file
- Average tokens per byte
## Training
### Model2Vec Classifier
Trains a classifier head on top of frozen Model2Vec static embeddings:
```shell
uv run bonepick train-model2vec \
-d data/fineweb-edu-binary-normalized \
-o models/model2vec-classifier
```
Key options:
- `-m/--model-name`: Model2Vec model to use (default: `minishlab/potion-base-32M`)
- `--learning-rate`: Learning rate (default: 1e-3)
- `--max-epochs`: Maximum training epochs (default: -1 for unlimited)
- `--early-stopping-patience`: Epochs without improvement before stopping (default: 5)
- `--loss-class-weight`: Class weighting strategy: `balanced`, `uniform`, `sqrt` (default: `uniform`)
- `--regression`: Train a regressor instead of classifier
- `--normalizer`: Apply a normalizer during training
- `--max-length`: Maximum text length
### FastText Classifier
Trains a FastText classifier (requires `fasttext` binary in PATH):
```shell
uv run bonepick train-fasttext \
-d data/fasttext-fineweb-edu-binary \
-o models/fasttext-classifier
```
### Training on Multiple Datasets
All training commands support combining data from multiple directories using repeated `-d` flags:
```shell
# Combine multiple datasets for training
uv run bonepick train-model2vec \
-d data/dataset1-normalized \
-d data/dataset2-normalized \
-d data/dataset3-normalized \
-o models/combined-classifier
```
Data from all directories is concatenated before training. Each directory must have `train/` and `test/` subdirectories.
## Evaluation
Both evaluation commands compute detailed classification metrics using probability predictions (`predict_proba` for Model2Vec, `predict-prob` for FastText). Results include precision, recall, F1-score, and AUC for each class, plus macro averages.
### Model2Vec Evaluation
```shell
uv run bonepick eval-model2vec \
-d data/fineweb-edu-binary-normalized \
-m models/contrastive-classifier \
--text-field text \
--label-field score
```
### FastText Evaluation
```shell
uv run bonepick eval-fasttext \
-d data/fasttext-fineweb-edu-binary \
-m models/fasttext-classifier \
--text-field text \
--label-field score
```
### Multi-Dataset Evaluation
Evaluate on multiple datasets simultaneously. Results are computed on the combined test sets:
```shell
uv run bonepick eval-model2vec \
-d data/dataset1-normalized \
-d data/dataset2-normalized \
-d data/dataset3-normalized \
-m models/combined-classifier
```
### Output Format
Results are saved as YAML files in the model directory with the naming pattern `results_<dataset_signature>.yaml`:
```yaml
dataset_dir:
- data/fineweb-edu-binary-normalized
model_dir: models/model2vec-classifier
overall_results:
macro_precision: 0.8734
macro_recall: 0.8621
macro_f1: 0.8677
macro_auc: 0.9245
per_class_metrics:
- class_name: '0'
precision: 0.8512
recall: 0.8823
f1: 0.8665
support: 1523
auc: 0.9245
- class_name: '1'
precision: 0.8956
recall: 0.8419
f1: 0.8679
support: 1477
auc: 0.9245
```
### Metrics Explained
- **Precision**: Of all predictions for a class, how many were correct
- **Recall**: Of all actual instances of a class, how many were predicted correctly
- **F1**: Harmonic mean of precision and recall
- **AUC**: Area Under the ROC Curve (one-vs-rest for multi-class)
- **Macro averages**: Unweighted mean across all classes
- **Support**: Number of true instances for each class in the test set
### Custom Field Names
Both evaluation commands support custom field names if your dataset uses different column names:
```shell
uv run bonepick eval-model2vec \
-d data/custom-dataset \
-m models/my-classifier \
--text-field document \
--label-field quality_score
```
## Calibration Evaluation
Evaluate and train calibration models for prediction quality assessment.
### Evaluate Calibration
Evaluate scalar predictions (0-1) against ordinal gold labels. Computes AUC, rank correlation, regression, and calibration metrics:
```shell
# Evaluate predictions from a single dataset
uv run bonepick eval-calibration \
-d ./annotated_data \
-p '.metadata.classifier.quality_score' \
-l '.annotation.rating'
# Evaluate from multiple directories with output file
uv run bonepick eval-calibration \
-d ./data1 -d ./data2 \
-p '.prediction' \
-l '.label' \
-o results.yaml
```
Metrics computed:
- **AUC**: Macro, weighted, and ordinal (adjacent pairs) using Mann-Whitney U
- **Correlation**: Spearman, Kendall's Tau-b, Pearson
- **Regression**: MSE, RMSE, MAE, R-squared (labels normalized to 0-1)
- **Calibration**: Expected Calibration Error with bin analysis
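Expected Calibration Error can be sketched as follows, using equal-width probability bins (an assumption; bonepick's bin analysis may be configured differently):

```python
def ece(probs, labels, n_bins=10):
    """Weighted mean gap between confidence and accuracy per bin."""
    total = len(probs)
    err = 0.0
    for b in range(n_bins):
        lo, hi = b / n_bins, (b + 1) / n_bins
        idx = [i for i, p in enumerate(probs)
               if lo <= p < hi or (b == n_bins - 1 and p == 1.0)]
        if not idx:
            continue
        conf = sum(probs[i] for i in idx) / len(idx)  # mean prediction
        acc = sum(labels[i] for i in idx) / len(idx)  # empirical rate
        err += len(idx) / total * abs(acc - conf)
    return err
```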
### Train Calibration Model
Learn weights for prediction components to approximate gold labels. Useful for understanding how different model prediction dimensions relate to human annotations:
```shell
# Train linear model mapping prediction components to gold ratings
uv run bonepick train-calibration \
-d ./annotated_data \
-p '.prediction.components' \
-l '.annotation.rating' \
-m linear
# Train log-linear model with output file
uv run bonepick train-calibration \
-d ./data \
-p '.model_scores' \
-l '.gold_label' \
-m log-linear \
-o calibration_weights.yaml
```
Model types:
- **linear**: `score = clamp(sum(w_i * pred_i) + bias, 0, 1)`
- **log-linear**: `score = sigmoid(sum(w_i * pred_i) + bias)`
The prediction expression must return a dict of `{component_name: value}`. Outputs include learned weights, fit metrics (R-squared, RMSE, MAE), and a ready-to-use jq expression.
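Written out, the two model types look like this (the component names and weights below are illustrative, not learned values):

```python
import math

def linear_score(components, weights, bias):
    s = sum(weights[k] * components[k] for k in weights) + bias
    return min(max(s, 0.0), 1.0)        # clamp to [0, 1]

def log_linear_score(components, weights, bias):
    s = sum(weights[k] * components[k] for k in weights) + bias
    return 1.0 / (1.0 + math.exp(-s))   # sigmoid

# Hypothetical prediction components and learned weights.
comps = {"fluency": 0.8, "topicality": 0.5}
w = {"fluency": 0.6, "topicality": 0.9}
score = linear_score(comps, w, bias=0.1)
```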
## Data Annotation (Optional)
The annotation features require the `annotate` extra dependencies (`uv sync --extra annotate`).
### List Available Prompts
```shell
# List available task prompts
uv run bonepick list-prompts task
# List available system prompts
uv run bonepick list-prompts system
```
### Annotate Dataset with LLM
Use LLM APIs to automatically label or annotate your dataset:
```shell
uv run bonepick annotate-dataset \
-d data/unlabeled-dataset \
-o data/annotated-dataset \
-m gpt-5.2 \
-T <task-prompt-name> \
-i ".text" \
--max-requests-per-minute 100
```
Key options:
- `-d/--dataset-dir`: Input dataset directory (can specify multiple)
- `-o/--output-dir`: Output directory for annotated data
- `-m/--model-name`: Model to use (default: gpt-5.2)
- `-T/--annotation-task-prompt`: Name of annotation task prompt (required)
- `-S/--annotation-system-prompt`: Name of system prompt (optional)
- `-i/--input-field-expression`: jq expression to extract input text (default: `.text`)
- `-f/--input-field-format`: Input format: `text` or `conversation` (default: text)
- `-r/--reasoning-effort`: Reasoning effort level: `minimal`, `low`, `medium`, `high`, `xhigh`, `none`
- `-e/--service-tier`: Service tier: `auto`, `default`, `flex`, `priority` (optional)
- `-c/--cache-location`: Cache location for LLM responses
- `--reprocess-all-rows/--process-missing-rows`: Reprocess behavior
- `--max-requests-per-minute`, `--max-tokens-per-minute`, `--max-concurrent-requests`: Rate limiting
- `--max-text-length`, `--max-new-tokens`: Length constraints
- `--limit-rows`: Maximum rows to annotate
### Batch Annotation
For large-scale annotation jobs, use the batch API workflow which submits requests asynchronously and retrieves results later:
```shell
# Step 1: Submit batch job
uv run bonepick batch-annotate-submit \
-d data/unlabeled-dataset \
-b data/batch-job \
-m gpt-5.2 \
-T <task-prompt-name> \
-i ".text"
# Step 2: Retrieve results (waits for batch completion)
uv run bonepick batch-annotate-retrieve \
-b data/batch-job \
-o data/annotated-dataset
```
The submit step creates a batch directory with a manifest and compressed rows file, then submits prompts via the provider's batch API (OpenAI or Anthropic). The retrieve step waits for completion and merges results back with the original data.
Key options for `batch-annotate-submit`:
- `-d/--dataset-dir`: Input dataset directory (can specify multiple)
- `-b/--batch-dir`: Batch output directory for job state
- `-m/--model-name`: Model to use (default: gpt-5.2)
- `-T/--annotation-task-prompt`: Name of annotation task prompt (required)
- `-S/--annotation-system-prompt`: Name of system prompt (optional)
- `--annotation-batch-size`: Max items per API batch (default: 50000)
- `--reprocess-all-rows/--process-missing-rows`: Reprocess behavior
- `--limit-rows`: Maximum rows to annotate
### Compare Annotation Agreement
Compare annotations between two datasets to measure inter-annotator agreement:
```shell
uv run bonepick annotation-agreement \
--dataset-dir data/annotator1 \
--dataset-dir data/annotator2 \
--label-expression '.label' \
--key-expression '.id'
```
This command computes agreement metrics between two annotation datasets, useful for:
- Measuring inter-annotator reliability between human annotators
- Comparing human annotations vs LLM annotations
- Validating annotation quality across different annotation rounds
Key options:
- `--dataset-dir`: Paths to the dataset directories (specify multiple times, required)
- `--label-expression`: JQ expression to extract the label/annotation (e.g., `.label`, `.annotation.category`)
- `--key-expression`: JQ expression to extract a unique identifier (e.g., `.id`, `.text`)
- `--show-confusion-matrix/--no-confusion-matrix`: Show confusion matrix (default: true)
- `--show-disagreements/--no-disagreements`: Show examples where annotators disagreed (default: false)
- `--max-disagreements`: Maximum disagreement examples to show (default: 10)
- `--ordinal/--no-ordinal`: Treat labels as ordinal (ordered) values (default: false)
Example with nested fields:
```shell
uv run bonepick annotation-agreement \
--dataset-dir data/human-annotations \
--dataset-dir data/llm-annotations \
--label-expression '.annotation.quality_score' \
--key-expression '.metadata.document_id' \
--show-disagreements \
--max-disagreements 20
```
#### Ordinal Labels
For numeric labels where order matters (e.g., rating scales 1-5), use `--ordinal` to compute metrics that account for the distance between ratings:
```shell
uv run bonepick annotation-agreement \
--dataset-dir data/rater1 \
--dataset-dir data/rater2 \
--label-expression '.score' \
--key-expression '.id' \
--ordinal
```
With `--ordinal`, the command computes:
- **Weighted Kappa (quadratic)**: Penalizes distant disagreements more heavily (e.g., on a 1-5 scale, a 3 vs 4 disagreement counts less than 1 vs 5)
- **Mean Absolute Error (MAE)**: Average absolute difference between ratings
- **Root Mean Squared Error (RMSE)**: Emphasizes larger disagreements
- **Pearson Correlation**: Measures linear relationship between raters
- **Difference Histogram**: Visual distribution of rating differences
The command outputs:
- **Dataset coverage**: Samples in each dataset, common samples, unique samples
- **Agreement rate**: Percentage of matching labels
- **Cohen's Kappa**: Accounts for chance agreement (0.00-0.20: slight, 0.21-0.40: fair, 0.41-0.60: moderate, 0.61-0.80: substantial, 0.81-1.00: almost perfect)
- **Label distribution**: Comparison of label frequencies between datasets
- **Confusion matrix**: Shows which labels are confused with each other
- **Disagreement examples**: Optional display of specific cases where annotators disagreed
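Cohen's kappa itself is compact; this sketch (not bonepick's code) shows the observed-vs-chance correction it applies:

```python
from collections import Counter

def cohens_kappa(a, b):
    n = len(a)
    # Observed agreement: fraction of items labeled identically.
    po = sum(1 for x, y in zip(a, b) if x == y) / n
    # Chance agreement: product of each annotator's label frequencies.
    ca, cb = Counter(a), Counter(b)
    pe = sum(ca[l] / n * cb[l] / n for l in set(ca) | set(cb))
    return (po - pe) / (1 - pe)

kappa = cohens_kappa(["pos", "pos", "neg", "neg"],
                     ["pos", "neg", "neg", "neg"])
```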
## Model Distillation
Distill a Sentence Transformer model to a lightweight Model2Vec static embedding model:
```shell
uv run bonepick distill-model2vec \
-m sentence-transformers/all-MiniLM-L6-v2 \
-o models/distilled-model \
-d 256 \
--quantize-to float16
```
Key options:
- `-m/--model-name-or-path`: HuggingFace model name or local path (required)
- `-o/--output-dir`: Output directory (required)
- `-v/--vocabulary-path`: Custom vocabulary file (one token per line)
- `-d/--pca-dims`: PCA dimensions for dimensionality reduction (default: 256, or `auto`)
- `-s/--sif-coefficient`: SIF (Smooth Inverse Frequency) coefficient (default: 1e-4)
- `-t/--token-remove-pattern`: Regex pattern for tokens to remove (default: `\[unused\d+\]`)
- `-r/--trust-remote-code`: Allow remote code execution
- `-q/--quantize-to`: Quantization type: `float16`, `float32`, `float64`, `int8` (default: float16)
- `-k/--vocabulary-quantization`: Vocabulary quantization factor
- `-p/--pooling`: Pooling strategy: `mean`, `last`, `first`, `pooler` (default: mean)
## CLI Reference
```shell
uv run bonepick --help
uv run bonepick <command> --help
```
### Data Pipeline Commands
| Command | Description |
|---------|-------------|
| `import-hf-dataset` | Download HuggingFace dataset to local JSONL |
| `transform-dataset` | Apply jq transforms to reshape fields |
| `balance-dataset` | Balance dataset so each label has equal representation |
| `sample-dataset` | Create a random sample of a dataset by rate or target size |
| `reshard-dataset` | Combine multiple files into specified number of evenly-sized files |
| `normalize-dataset` | Normalize text (for Model2Vec) |
| `convert-to-fasttext` | Convert JSONL to FastText format |
| `count-tokens` | Count tokens in dataset directories using a tokenizer |
### Training Commands
| Command | Description |
|---------|-------------|
| `train-model2vec` | Train Model2Vec classifier or regressor |
| `train-fasttext` | Train FastText classifier |
| `distill-model2vec` | Distill Sentence Transformer to Model2Vec |
### Evaluation Commands
| Command | Description |
|---------|-------------|
| `eval-model2vec` | Evaluate Model2Vec classifier |
| `eval-fasttext` | Evaluate FastText classifier |
| `infer-fasttext` | Run FastText inference on JSONL files |
| `eval-calibration` | Evaluate predictions against ordinal labels (AUC, correlation, calibration) |
| `train-calibration` | Train calibration model mapping prediction components to gold labels |
### Annotation Commands (Requires `annotate` extra)
| Command | Description |
|---------|-------------|
| `annotate-dataset` | Annotate dataset using LLM APIs |
| `batch-annotate-submit` | Submit batch annotation job to LLM batch API |
| `batch-annotate-retrieve` | Retrieve batch annotation results and merge with original data |
| `list-prompts` | List available annotation prompts |
| `annotation-agreement` | Compare annotations between two datasets and compute agreement metrics |
| `label-distribution` | Show label distribution in a dataset |
### Utility Commands
| Command | Description |
|---------|-------------|
| `version` | Print package version |
[1]: https://github.com/MinishLab/model2vec
[2]: https://fasttext.cc
| text/markdown | null | Luca Soldaini <luca@soldaini.net> | null | null | null | null | [] | [] | null | null | >=3.12 | [] | [] | [] | [
"anyascii>=0.3.3",
"backports-weakref>=1.0.post1",
"backports-zstd>=1.2.0",
"click",
"datasets<5,>=4",
"jq>=1.10.0",
"lazy-imports>=1.2.0",
"model2vec[train]>=0.7.0",
"msgspec",
"plsfix>=0.1.8",
"pyyaml>=6.0.3",
"smart-open[s3]>=7.1.0",
"tokenizers>=0.22.1",
"torch>=2.9.1",
"tqdm>=4.66.5",
"fastmcp; extra == \"annotate\"",
"lm-deluge<0.1.0,>=0.0.117; extra == \"annotate\"",
"platformdirs; extra == \"annotate\"",
"pydantic<2.12.0; extra == \"annotate\"",
"rich>=13.0.0; extra == \"annotate\"",
"scikit-learn>=1.3.0; extra == \"annotate\"",
"model2vec[distill]>=0.7.0; extra == \"distill\"",
"tokenlearn>=0.2.1; extra == \"distill\""
] | [] | [] | [] | [] | uv/0.10.4 {"installer":{"name":"uv","version":"0.10.4","subcommand":["publish"]},"python":null,"implementation":{"name":null,"version":null},"distro":{"name":"Ubuntu","version":"24.04","id":"noble","libc":null},"system":{"name":null,"release":null},"cpu":null,"openssl_version":null,"setuptools_version":null,"rustc_version":null,"ci":true} | 2026-02-20T02:43:57.373435 | bonepick-0.2.0.tar.gz | 6,078,410 | 96/d9/baccbdc58ed1609cd19faa25803e9f387c6c98b651b828bf008494c4c82a/bonepick-0.2.0.tar.gz | source | sdist | null | false | d65313a38e82fdf40b14aeae0dc96056 | b8d26cb015a340ab4e9afdb5e3b1c43d7851df2ec3d2e59a2cc8cc3452db9412 | 96d9baccbdc58ed1609cd19faa25803e9f387c6c98b651b828bf008494c4c82a | null | [
"LICENSE"
] | 231 |
2.4 | snowpark-submit | 1.14.0 | Snowpark Submit | snowpark-submit is designed for running non-interactive, batch-oriented Spark workloads directly on Snowflake's infrastructure using familiar Spark semantics. It eliminates the need to manage a dedicated Spark cluster while allowing you to maintain your existing Spark development workflows. This tool is ideal for submitting production-ready Spark applications—such as ETL pipelines and scheduled data transformations—using a simple CLI interface.
| text/markdown | Snowflake, Inc | null | null | null | Apache License, Version 2.0 | null | [] | [] | null | null | <3.13,>=3.9 | [] | [] | [] | [
"snowflake-snowpark-python>=1.32.0",
"pyyaml<7.0.0,>=6.0.2"
] | [] | [] | [] | [] | twine/6.2.0 CPython/3.11.9 | 2026-02-20T02:43:06.694823 | snowpark_submit-1.14.0.tar.gz | 38,932 | e2/e8/5aed2cbedbccdd350c5fe06bf780531c43d96e1c10daa6a94a6862d0510d/snowpark_submit-1.14.0.tar.gz | source | sdist | null | false | f19eb775ee1802aa90cd2d26b643e2ce | b3a191e8f32c23fb740f36410e6d2fe3badaadadbafe4c8c4e5140b836b9285b | e2e85aed2cbedbccdd350c5fe06bf780531c43d96e1c10daa6a94a6862d0510d | null | [
"LICENSE.txt"
] | 239 |
2.4 | roboflex.dynamixel | 0.1.22 | Roboflex Dynamixel Library | # roboflex.dynamixel
Support for Robotis' Dynamixel motor product line. Some of it anyway (tested on XH430-V350-R, XH430-W350, XM540-W270, XL-330-M288-T, others may work as well).
We provide two layers to controlling dynamixel motors:
* dynamixel_controller.h defines DynamixelGroupController: a controller for a group of dynamixels connected together (following Robotis' instructions), which uses no message passing or other roboflex functionality at all.
* dynamixel.h, which defines helper classes to control the above, using roboflex.
## DynamixelGroupController (see [dynamixel_controller.h](dynamixel_controller.h))
Controls a group of dynamixel motors, connected together according to Robotis' instructions, to create robot arms, pan-tilt controllers, and other robots. The DynamixelGroupController abstracts the repetitive part of the communication with the motors, and provides a convenient class that can control a group of motors through a callback function: a 'ReadWriteLoop' function. It calls this function synchronously with communication with the dynamixel motors, as fast as the interface can manage, depending on configured baud rate (think > 100hz).
This code is completely stand-alone; it uses nothing from roboflex.

Each dynamixel can have its own independent operation mode (current, position, velocity, etc), but this class makes it particularly easy to control groups of motors all in the same mode; see the constructors and static methods of DynamixelGroupController in [dynamixel_controller.h](dynamixel_controller.h).
Unless using the PositionController or VelocityController methods to instantiate a controller, the client must decide which control table entries to read and which to write. In general, you can write to the control table entries that are named DXLControlTable::Goal*, and read everything else. Refer to Robotis documentation for details.
The controller automatically paces its read/write loop based on the configured baud rate and the amount of data requested per cycle. If you need to tune this, call `set_loop_sleep_ms()` (to force a specific delay per cycle) or `set_servo_processing_margin_ms()` (to tweak the margin the controller adds after each write). Both getters are exposed so you can inspect the calculated values at runtime.
## Roboflex additions (see [dynamixel.h](dynamixel.h))
This layer defines additional functionality to integrate with roboflex-style message passing. It defines Messages that encapsulate State and Command types, and several useful nodes.
The first of these is DynamixelGroupControllerNode. This Node sub-class adapts the non-roboflex 'business logic' of DynamixelGroupController to the roboflex paradigm by configuration - you give this Node an instance of a DynamixelGroupController. This Node performs three functions:
1. Adapts DynamixelGroupController's control loop to the roboflex threading model: in `child_thread_fn`, it calls `run_readwrite_loop` on the controller. When `stop` is called on this Node, it stops the thread.
2. Provides a simple, overrideable abstract method to actually perform the control: the user should subclass this class and override:
    ```cpp
    // In C++ (map-literal syntax adjusted to standard brace initialization):
    virtual DXLIdsToValues readwrite_loop_function(
        const DynamixelGroupState& state,
        const core::MessagePtr last_msg)
    {
        return DXLIdsToValues{
            {5, {{DXLControlTable::GoalVelocity, 12}}},
            {6, {{DXLControlTable::GoalVelocity, 2}}}
        };
    }
    ```
    ```python
    # in python:
    def readwrite_loop_function(state, last_msg):
        # use state and last_msg as you wish
        return {
            5: {DXLControlTable.GoalVelocity: 12},
            6: {DXLControlTable.GoalVelocity: 2},
        }
    ```
    In both cases, this sets the goal velocity of the dynamixel motor with id 5 to 12, and of motor 6 to 2.
3. Provides basic message handling. It will:
3.1. Save the most recent message. You can use this to send in, for instance, some new position or velocity you want to move the motors to. This Node's control loop might be operating at a different frequency than the incoming messages - it should still work.
3.2. Broadcast the state and command (your command values) sent to the motor group at every loop cycle, in the form of a `DynamixelCommandStateMessage`.
## Messages
### DynamixelCommandStateMessage:
Encapsulates the state and command of the dynamixel group. Used to communicate the last-known state of the motors, and the last-known command sent to the motors. Serializes this schema to flexbuffers:
```
{
    ...
    "state": {            # the present state of the group of motors:
        5: {              # motor with id 5
            128: 2,       # has present velocity = 2
            132: 204,     # has present position = 204
        },
        6: {              # motor with id 6, etc.
            128: 12,
            132: 301,
        }
    },
    "command": {          # the last-known command you sent
        5: {104: 4},      # motor with id 5 should get goal velocity = 4
        6: {104: 5}       # motor with id 6 should get goal velocity = 5
    },
    "t0": 170002032.2122312,  # time just before communication with motor
    "t1": 170002032.2283432,  # time just after communication with motor
}
```
## For more 'distributed' operation:
The second of these is DynamixelGroupNode. This node inherits RunnableNode; it is designed to be started and stopped. When instantiated, this node must be given an instance of a DynamixelGroupController. When started, it runs a ReadWriteLoopFunction on that instance inside its run_readwrite_loop method, using the last known GroupCommandMessage it has received, and then emits a GroupStateMessage.

We also provide DynamixelRemoteController. This is an abstract base class that requires implementation of the 'readwrite_loop_function' virtual method. Here is where your custom control logic would go. The benefit of this approach is that the DynamixelGroupNode can run in its own thread, and keep up with the motors. This node, then, can be run from anywhere, and using transport classes, the controller can even run on a totally separate computer.
We also provide DynamixelRemoteFrequencyController, which is exactly the same, but is driven at some frequency from a thread via inheritance from FrequencyGenerator.

## Troubleshooting
You might benefit from the Robotis Dynamixel Wizard program - just google for it.
If you can't access /dev/ttyUSB0, try this:
```shell
sudo usermod -a -G dialout $USER
sudo reboot
```
| text/markdown | Colin Prepscius | colinprepscius@gmail.com | null | null | MIT | dynamixel, robotics, middleware, flexbuffers, python, c++, c++20 | [
"Intended Audience :: Developers",
"License :: OSI Approved :: MIT License",
"Topic :: Software Development :: Libraries :: Python Modules",
"Topic :: Software Development :: Embedded Systems",
"Framework :: Robot Framework",
"Framework :: Robot Framework :: Library",
"Framework :: Robot Framework :: Tool",
"Programming Language :: C++",
"Programming Language :: Python :: 3"
] | [] | https://github.com/flexrobotics/roboflex_dynamixel | null | >=3.6 | [] | [] | [] | [
"numpy",
"roboflex"
] | [] | [] | [] | [] | twine/6.2.0 CPython/3.11.5 | 2026-02-20T02:42:59.011374 | roboflex_dynamixel-0.1.22.tar.gz | 27,456 | 7e/41/606d897e58b7d01dbb6259a71b39fdb0e15782bee3c595560880132a2cb8/roboflex_dynamixel-0.1.22.tar.gz | source | sdist | null | false | 203d47765bfff41e82e822bfc0b9d9c9 | 71d67b638bde9842bf32c30139af3fb2fb76a066fc1895231a44805f87d41891 | 7e41606d897e58b7d01dbb6259a71b39fdb0e15782bee3c595560880132a2cb8 | null | [
"LICENSE"
] | 0 |
2.4 | hakowan | 0.4.0 | Hakowan: A 3D data visualization grammar | # Hakowan
Hakowan is a 3D data visualization grammar. It is inspired by the grammar of graphics, and it is
designed for easily creating beautiful 3D visualizations.
## Install
Hakowan relies on [Lagrange](https://opensource.adobe.com/lagrange-docs/) and
[Mitsuba](https://www.mitsuba-renderer.org/) to provide geometry processing and rendering
capabilities. Both Hakowan and its dependencies can be simply installed via pip:
```sh
pip install hakowan
```
Note that Hakowan requires Python 3.11 or later.
## Quick start
```py
import hakowan as hkw
base = hkw.layer("mesh.obj") # Create a base layer
hkw.render(base, filename="image.exr") # Render!
```
## Documentation
[HTML](https://hakowan.github.io/hakowan/)
```bibtex
@software{hakowan,
title = {Hakowan: A 3D Data Visualization Grammar},
version = {0.3.2},
year = 2024,
}
```
| text/markdown | null | Qingnan Zhou <qnzhou@gmail.com>, Zhicheng Liu <leozcliu@umd.edu> | null | null | null | null | [
"License :: OSI Approved :: Apache Software License"
] | [] | null | null | >=3.10 | [] | [] | [] | [
"lagrange-open>=6.40",
"numpy>=1.22",
"Pillow~=11.0",
"PyYAML>=6.0",
"mitsuba~=3.7",
"bpy<6,>=5.0.1; python_version == \"3.11\""
] | [] | [] | [] | [] | twine/6.1.0 CPython/3.13.7 | 2026-02-20T02:40:02.163271 | hakowan-0.4.0-py3-none-any.whl | 530,165 | e7/a9/64311616ae019a2a68bbb2bd9a899d9e99402773e77abe88d022201e00fd/hakowan-0.4.0-py3-none-any.whl | py3 | bdist_wheel | null | false | fc1ae084681ad6a58809a43ea97e5d53 | f6b399fe9172a60b60625fb3fac33d47987ed8187d33679d838f1ebe3b50b7c1 | e7a964311616ae019a2a68bbb2bd9a899d9e99402773e77abe88d022201e00fd | null | [
"LICENSE"
] | 103 |
2.4 | twinkle-kit | 0.0.1 | Training API for large language models with efficient data handling and advanced optimization techniques. | <h1 align="center">Twinkle: Training workbench to make your model glow</h1>
<p align="center">
<img src="assets/slogan.png" width="200"/>
</p>
<p align="center">
by <a href="https://modelscope.cn/home">ModelScope</a>
<br>
English  |  <a href="README_ZH.md">中文</a> 
</p>
<p align="center">
<img src="https://img.shields.io/badge/python-3.11-5be.svg">
<img src="https://img.shields.io/badge/pytorch-%E2%89%A52.0-orange.svg">
<a href="https://pypi.org/project/twinkle/"><img src="https://badge.fury.io/py/twinkle.svg"></a>
<a href="https://github.com/modelscope/twinkle/blob/main/LICENSE"><img src="https://img.shields.io/github/license/modelscope/twinkle"></a>
<a href="https://pepy.tech/project/twinkle-kit"><img src="https://pepy.tech/badge/twinkle-kit"></a>
<a href="https://github.com/modelscope/twinkle/pulls"><img src="https://img.shields.io/badge/PR-welcome-55EB99.svg"></a>
</p>
<p align="center">
<a href="https://twinkle-kit.readthedocs.io/en/latest/">English Documentation</a>   |   <a href="https://twinkle-kit.readthedocs.io/zh-cn/latest/">中文文档</a>  
</p>
## ✨ What is Twinkle?
Twinkle✨ is a lightweight, client-server training framework engineered
with modular, high-cohesion interfaces. Whether you are executing locally
with `torchrun`, or scaling training across Ray clusters,
Twinkle✨ eliminates infrastructure friction by encapsulating
training logic into standardized APIs. Beyond simple
abstraction, Twinkle✨ serves as a robust backend and gateway to enable serverless Training-as-a-Service (TaaS).
It offers interfaces that constitute a _superset_ of the [Tinker](https://thinkingmachines.ai/tinker/) APIs,
so a Twinkle✨ training service can be accessed via the Tinker client, or via the native Twinkle✨ client,
which offers more functionality.
🧩 <b>Decoupled Architecture</b>: Standardized Interfaces, backward compatible with Tinker APIs.<br>
🚀 <b>Multiple Runtime Modes</b>: torchrun / Ray / HTTP.<br>
🔌 <b>Versatile Backends</b>: Transformers / Megatron.<br>
👥 <b>Multi-Tenancy Training Service</b>: Train multiple LoRAs that share one base model deployment.<br>
Note: Twinkle✨ is built by the team behind [ms-swift](https://github.com/modelscope/ms-swift), and
we expect the two projects to evolve together, with some fundamental components of Twinkle✨
likely to be reused in [ms-swift](https://github.com/modelscope/ms-swift).
| Twinkle Wechat Group |
|:------------------------------------------------------:|
| <img src="assets/wechat.jpg" width="200" height="200"> |
## Installation
### Install with package:
```shell
pip install 'twinkle-kit'
```
### Install from Source:
```shell
git clone https://github.com/modelscope/twinkle.git
cd twinkle
pip install -e .
```
## Tutorials
| Training Type | Model Framework | Cookbook Path |
| --------------------------------- | --------------- | ------------------------------------------------- |
| FSDP finetuning | transformers | [Script](cookbook/transformers/fsdp2.py) |
| FSDP MoE finetuning | transformers | [Script](cookbook/transformers/fsdp2_moe.py) |
| ep FSDP MoE finetuning | transformers | [Script](cookbook/transformers/ep_fsdp_qwen3_moe.py) |
| sp FSDP finetuning | transformers | [Script](cookbook/transformers/sp_fsdp_dense.py) |
| pp/tp/cp finetuning | megatron | [Script](cookbook/megatron/tp.py) |
| pp/tp/cp MoE finetuning | megatron | [Script](cookbook/megatron/tp_moe.py) |
| tinker client finetuning | megatron | [Script](cookbook/client/tinker/megatron) |
| tinker client finetuning/sampling | transformers | [Script](cookbook/client/tinker/transformer) |
| twinkle client finetuning | megatron | [Script](cookbook/client/twinkle/megatron) |
| twinkle client finetuning | transformer | [Script](cookbook/client/twinkle/transformer) |
## Changelog
- 🎉2026-02-13 Initial version of Twinkle✨ released, including SFT/PT/RL support for text models and serverless training capabilities on [ModelScope](https://modelscope.cn).
## Training as a Service on ModelScope
We are rolling out a training service built atop Twinkle✨ on ModelScope. It is currently in _Beta_. You may
sign up for free access by joining the [Twinkle-Explorers](https://modelscope.cn/organization/twinkle-explorers) organization, and
train via API endpoint `base_url=https://www.modelscope.cn/twinkle`. For more details, please refer to
our [documentation](docs/source_en/Usage%20Guide/Train-as-a-Service.md).
## Supported Hardware
| Hardware Environment | Notes |
| -------------------- | ---------------------------------------------------------------- |
| Nvidia GPUs | ✅ BF16/Flash-Attn support may be incomplete on earlier GPUs |
| Ascend NPU | ✅ Some operators may not be supported |
| PPU | ✅ |
| CPU | Supports partial components such as dataset and dataloader |
## Supported Models
We will add support for more models as they are released. The following table lists the models currently
supported by the Twinkle✨ framework.
>[!Note]
> For the serverless training service accessed via `base_url=https://www.modelscope.cn/twinkle`, only one
> training base is supported at a time; it is currently [Qwen3-30B-A3B-Instruct-2507](https://modelscope.cn/models/Qwen/Qwen3-30B-A3B-Instruct-2507).
| Model Type | Model ID on [ModelScope](https://modelscope.cn) | Requires | Megatron Support | HF Model ID |
| ------------------- |--------------------------------------------------------------------------------------------------------------------------| -------------------- | ---------------- | ---------------------------------------------------------------------------------------------------------- |
| qwen3 series | [Qwen/Qwen3-0.6B-Base](https://modelscope.cn/models/Qwen/Qwen3-0.6B-Base)~32B | transformers>=4.51 | ✅ | [Qwen/Qwen3-0.6B-Base](https://huggingface.co/Qwen/Qwen3-0.6B-Base) |
| qwen3_moe series | [Qwen/Qwen3-30B-A3B-Base](https://modelscope.cn/models/Qwen/Qwen3-30B-A3B-Base) | transformers>=4.51 | ✅ | [Qwen/Qwen3-30B-A3B-Base](https://huggingface.co/Qwen/Qwen3-30B-A3B-Base) |
| | [Qwen/Qwen3-30B-A3B](https://modelscope.cn/models/Qwen/Qwen3-30B-A3B)~235B | transformers>=4.51 | ✅ | [Qwen/Qwen3-30B-A3B](https://huggingface.co/Qwen/Qwen3-30B-A3B) |
| qwen2 series | [Qwen/Qwen2-0.5B-Instruct](https://modelscope.cn/models/Qwen/Qwen2-0.5B-Instruct) ~72B | transformers>=4.37 | ✅ | [Qwen/Qwen2-0.5B-Instruct](https://huggingface.co/Qwen/Qwen2-0.5B-Instruct) |
| | [Qwen/Qwen2.5-0.5B-Instruct](https://modelscope.cn/models/Qwen/Qwen2.5-0.5B-Instruct)~72B | transformers>=4.37 | ✅ | [Qwen/Qwen2.5-0.5B-Instruct](https://huggingface.co/Qwen/Qwen2.5-0.5B-Instruct) |
| | [Qwen/Qwen2.5-0.5B](https://modelscope.cn/models/Qwen/Qwen2.5-0.5B)~72B | transformers>=4.37 | ✅ | [Qwen/Qwen2.5-0.5B](https://huggingface.co/Qwen/Qwen2.5-0.5B) |
| qwen2_moe series | [Qwen/Qwen1.5-MoE-A2.7B-Chat](https://modelscope.cn/models/Qwen/Qwen1.5-MoE-A2.7B-Chat) | transformers>=4.40 | ✅ | [Qwen/Qwen1.5-MoE-A2.7B-Chat](https://huggingface.co/Qwen/Qwen1.5-MoE-A2.7B-Chat) |
| chatglm4 series | [ZhipuAI/glm-4-9b-chat](https://modelscope.cn/models/ZhipuAI/glm-4-9b-chat) | transformers>=4.42 | ✘ | [zai-org/glm-4-9b-chat](https://huggingface.co/zai-org/glm-4-9b-chat) |
| | [ZhipuAI/LongWriter-glm4-9b](https://modelscope.cn/models/ZhipuAI/LongWriter-glm4-9b) | transformers>=4.42 | ✘ | [zai-org/LongWriter-glm4-9b](https://huggingface.co/zai-org/LongWriter-glm4-9b) |
| glm_edge series | [ZhipuAI/glm-edge-1.5b-chat](https://modelscope.cn/models/ZhipuAI/glm-edge-1.5b-chat) | transformers>=4.46 | ✘ | [zai-org/glm-edge-1.5b-chat](https://huggingface.co/zai-org/glm-edge-1.5b-chat) |
| | [ZhipuAI/glm-edge-4b-chat](https://modelscope.cn/models/ZhipuAI/glm-edge-4b-chat) | transformers>=4.46 | ✘ | [zai-org/glm-edge-4b-chat](https://huggingface.co/zai-org/glm-edge-4b-chat) |
| internlm2 series | [Shanghai_AI_Laboratory/internlm2-1_8b](https://modelscope.cn/models/Shanghai_AI_Laboratory/internlm2-1_8b) | transformers>=4.38 | ✘ | [internlm/internlm2-1_8b](https://huggingface.co/internlm/internlm2-1_8b) |
| | [Shanghai_AI_Laboratory/internlm2-chat-7b](https://modelscope.cn/models/Shanghai_AI_Laboratory/internlm2-chat-7b) | transformers>=4.38 | ✘ | [internlm/internlm2-chat-7b](https://huggingface.co/internlm/internlm2-chat-7b) |
| deepseek_v1 | [deepseek-ai/deepseek-vl-7b-chat](https://modelscope.cn/models/deepseek-ai/deepseek-vl-7b-chat) | transformers>=4.39.4 | ✅ | —— |
| | [deepseek-ai/DeepSeek-V2-Lite](https://modelscope.cn/models/deepseek-ai/DeepSeek-V2-Lite) | transformers>=4.39.3 | ✅ | [deepseek-ai/DeepSeek-V2-Lite](https://huggingface.co/deepseek-ai/DeepSeek-V2-Lite) |
| | [deepseek-ai/DeepSeek-V2.5](https://modelscope.cn/models/deepseek-ai/DeepSeek-V2.5) | transformers>=4.39.3 | ✅ | [deepseek-ai/DeepSeek-V2.5](https://huggingface.co/deepseek-ai/DeepSeek-V2.5) |
| | [deepseek-ai/DeepSeek-R1](https://modelscope.cn/models/deepseek-ai/DeepSeek-R1) | transformers>=4.39.3 | ✅ | [deepseek-ai/DeepSeek-R1](https://huggingface.co/deepseek-ai/DeepSeek-R1) |
| deepSeek-r1-distill | [deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B](https://modelscope.cn/models/deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B) ~32B | transformers>=4.37 | ✅ | [deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B](https://huggingface.co/deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B) |
For more detailed model support list 👉 [Quick Start](docs/source_en/Usage%20Guide/Quick-Start.md)
## Sample Code
### Train with Ray
```python
from peft import LoraConfig
import twinkle
from twinkle import DeviceMesh, DeviceGroup
from twinkle.dataloader import DataLoader
from twinkle.dataset import Dataset, DatasetMeta
from twinkle.model import TransformersModel
from twinkle.preprocessor import SelfCognitionProcessor
device_group = [DeviceGroup(name='default', ranks=8, device_type='cuda')]
device_mesh = DeviceMesh.from_sizes(fsdp_size=4, dp_size=2)
# local for torchrun
twinkle.initialize(mode='ray', groups=device_group, global_device_mesh=device_mesh)
def train():
# to load model from Hugging Face, use 'hf://...'
base_model = 'ms://Qwen/Qwen2.5-7B-Instruct'
# 1000 samples
dataset = Dataset(dataset_meta=DatasetMeta('ms://swift/self-cognition', data_slice=range(1000)))
# Set template to prepare encoding
dataset.set_template('Template', model_id=base_model)
# Preprocess the dataset to standard format
dataset.map(SelfCognitionProcessor('twinkle LLM', 'ModelScope Community'))
# Encode dataset
dataset.encode()
# Global batch size = 8 across 8 GPUs, so 1 sample per GPU
dataloader = DataLoader(dataset=dataset, batch_size=8, min_batch_size=8)
# Use a TransformersModel
model = TransformersModel(model_id=base_model, remote_group='default')
lora_config = LoraConfig(
r=8,
lora_alpha=32,
target_modules='all-linear'
)
# Add a lora to model, with name `default`
# Comment this to use full-parameter training
model.add_adapter_to_model('default', lora_config, gradient_accumulation_steps=2)
# Add Optimizer for lora `default`
model.set_optimizer(optimizer_cls='AdamW', lr=1e-4)
# Add LRScheduler for lora `default`
model.set_lr_scheduler(scheduler_cls='CosineWarmupScheduler', num_warmup_steps=5,
num_training_steps=len(dataloader))
for step, batch in enumerate(dataloader):
# Do forward and backward
model.forward_backward(inputs=batch)
# Step
model.clip_grad_and_step()
if step % 20 == 0:
# Print metric
metric = model.calculate_metric(is_training=True)
print(f'Current is step {step} of {len(dataloader)}, metric: {metric}')
model.save('last-checkpoint')
if __name__ == '__main__':
train()
```
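The scheduler configured above uses linear warmup followed by cosine decay — the standard shape for a `CosineWarmupScheduler`. Below is a stdlib-only sketch of the usual formula; it is illustrative, and Twinkle's exact curve may differ in edge-case handling:

```python
import math

def cosine_warmup_lr(step, base_lr, num_warmup_steps, num_training_steps):
    # Linear warmup to base_lr, then cosine decay toward 0.
    # Illustrative only -- not twinkle's actual implementation.
    if step < num_warmup_steps:
        return base_lr * (step + 1) / num_warmup_steps
    progress = (step - num_warmup_steps) / max(1, num_training_steps - num_warmup_steps)
    return base_lr * 0.5 * (1.0 + math.cos(math.pi * progress))

# lr=1e-4 and num_warmup_steps=5 mirror the example above;
# 125 stands in for len(dataloader)
lrs = [cosine_warmup_lr(s, 1e-4, 5, 125) for s in range(125)]
```

The warmup ramp avoids large early updates on a freshly initialized adapter; the cosine tail anneals the step size toward the end of training.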
### Using Tinker-Like API
```python
import os
from tqdm import tqdm
from tinker import types
from twinkle_client import init_tinker_compat_client
from twinkle.dataloader import DataLoader
from twinkle.dataset import Dataset, DatasetMeta
from twinkle.preprocessor import SelfCognitionProcessor
from twinkle.server.tinker.common import input_feature_to_datum
base_model = 'ms://Qwen/Qwen3-30B-A3B-Instruct-2507'
base_url='https://www.modelscope.cn/twinkle'
api_key=os.environ.get('MODELSCOPE_TOKEN')
# Use twinkle dataset to load the data
dataset = Dataset(dataset_meta=DatasetMeta('ms://swift/self-cognition', data_slice=range(500)))
dataset.set_template('Template', model_id=base_model, max_length=256)
dataset.map(SelfCognitionProcessor('twinkle Model', 'twinkle Team'), load_from_cache_file=False)
dataset.encode(batched=True, load_from_cache_file=False)
dataloader = DataLoader(dataset=dataset, batch_size=8)
# Initialize tinker client
service_client = init_tinker_compat_client(base_url, api_key)
training_client = service_client.create_lora_training_client(base_model=base_model[len('ms://'):], rank=16)
# Training loop: use input_feature_to_datum to transfer the input format
for epoch in range(3):
for step, batch in tqdm(enumerate(dataloader)):
input_datum = [input_feature_to_datum(input_feature) for input_feature in batch]
fwdbwd_future = training_client.forward_backward(input_datum, "cross_entropy")
optim_future = training_client.optim_step(types.AdamParams(learning_rate=1e-4))
fwdbwd_result = fwdbwd_future.result()
optim_result = optim_future.result()
training_client.save_state(f"twinkle-lora-{epoch}").result()
```
## Architecture Design
<img src="assets/framework.jpg" style="max-width: 500px; width: 100%;" />
**Twinkle✨** features a decoupled **Client-Server architecture** designed for maximum flexibility.
The client-side provides two distinct integration paths:
* **Twinkle✨ Native:** A conforming API that mirrors the server-side interface for seamless end-to-end integration.
* **Tinker Compatibility:** Full support for the native Tinker API, enabling developers to leverage Twinkle✨’s backend from the Tinker client.
This dual-path design means a Twinkle✨ training service can be reached through the Tinker API by simply changing the Tinker base URL.
## Multi-Tenancy
**Twinkle✨** supports simultaneous multi-tenant training on a shared base model. Leveraging a **LoRA Pool + Tenant Application** architecture, Twinkle enables up to **N tenants** to train in parallel with complete isolation. This design offers unprecedented flexibility: from the model's perspective, each tenant's session is distinct, supporting heterogeneous configurations including unique **data padding strategies, optimizers, and loss functions**—all running concurrently on the same base model.
*Note: This feature is currently optimized for [LoRA](https://github.com/huggingface/peft).*
<img src="assets/multi_lora.png" style="max-width: 500px; width: 100%;" />
For example:
- Tenant A: loads a private dataset locally, LoRA rank=8, uses the base model for SFT
- Tenant B: loads an open-source dataset remotely from the Hub, LoRA rank=32, uses the base model for PT
- Tenant C: uses the base model for GRPO loss calculation, with a Sampler for sampling
- Tenant D: uses the base model for logps inference
These processes are executed concurrently on a single base model because the **Model and Sampler**
are integrated as **task-agnostic components** within the Twinkle✨ ecosystem.
Upon completion, checkpoints are automatically pushed to **ModelScope** or **HuggingFace** repositories
(private by default). On the server side, Twinkle✨ provides a robust multi-tenant suite
featuring **automated cluster management** and **dynamic scaling**, making it the
foundation for building customizable, enterprise-grade training services.
> As a modular framework, Twinkle✨ also supports remote temporary exclusive training, i.e., training in full-parameter mode.
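The LoRA-pool routing described above can be pictured as a pool of named adapter slots on one shared base model, where each tenant's updates touch only its own slot. A toy stdlib sketch of that bookkeeping — class and field names here are illustrative, not Twinkle's actual API:

```python
from dataclasses import dataclass

@dataclass
class AdapterSlot:
    # Per-tenant state: rank, optimizer choice, and a step counter.
    rank: int
    optimizer: str = "AdamW"
    steps: int = 0

class LoraPool:
    """Toy model of multi-tenant LoRA training on one base model.

    Purely illustrative -- not Twinkle's implementation.
    """
    def __init__(self, base_model, max_tenants=4):
        self.base_model = base_model   # shared, never mutated here
        self.max_tenants = max_tenants
        self.slots = {}                # tenant name -> AdapterSlot

    def add_adapter(self, tenant, rank, optimizer="AdamW"):
        if len(self.slots) >= self.max_tenants:
            raise RuntimeError("adapter pool is full")
        self.slots[tenant] = AdapterSlot(rank=rank, optimizer=optimizer)

    def step(self, tenant):
        # Isolation property: only this tenant's slot changes.
        self.slots[tenant].steps += 1

pool = LoraPool("Qwen3-30B-A3B-Instruct-2507")
pool.add_adapter("tenant_a", rank=8)    # e.g. the SFT tenant
pool.add_adapter("tenant_b", rank=32)   # e.g. the PT tenant
pool.step("tenant_a")
```

The point of the design is visible in `step`: a tenant's optimizer update is a write to its own slot, so heterogeneous ranks, optimizers, and loss functions coexist on one frozen base model.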
## 🛠️ Twinkle✨ Modular Ecosystem
<div align="center">
<table style="width: 100%; border-collapse: separate; border-spacing: 8px;">
<tr>
<td width="20%" bgcolor="#f6f8fa" style="border: 1px solid #d0d7de; border-radius: 8px; padding: 12px;">
<p align="center"><b>Dataset</b><br><sub>Data loading and preprocessing</sub></p>
</td>
<td width="20%" style="border: 1px solid #d0d7de; border-radius: 8px; padding: 12px;">
<p align="center"><b>Template</b><br><sub>Encoding and decoding</sub></p>
</td>
<td width="20%" bgcolor="#f6f8fa" style="border: 1px solid #d0d7de; border-radius: 8px; padding: 12px;">
<p align="center"><b>DataLoader</b><br><sub>Data distribution and batching</sub></p>
</td>
<td width="20%" style="border: 1px solid #d0d7de; border-radius: 8px; padding: 12px;">
<p align="center"><b>Preprocessor</b><br><sub>Data ETL</sub></p>
</td>
<td width="20%" bgcolor="#f6f8fa" style="border: 1px solid #d0d7de; border-radius: 8px; padding: 12px;">
<p align="center"><b>InputProcessor</b><br><sub>Task-specific input processing</sub></p>
</td>
</tr>
<tr>
<td bgcolor="#f6f8fa" style="border: 1px solid #d0d7de; border-radius: 8px; padding: 12px;">
<p align="center"><b>Model</b><br><sub>Large models, supports multiple frameworks</sub></p>
</td>
<td style="border: 1px solid #d0d7de; border-radius: 8px; padding: 12px;">
<p align="center"><b>Sampler</b><br><sub>Sampler logic</sub></p>
</td>
<td bgcolor="#f6f8fa" style="border: 1px solid #d0d7de; border-radius: 8px; padding: 12px;">
<p align="center"><b>Loss</b><br><sub>Loss functions</sub></p>
</td>
<td style="border: 1px solid #d0d7de; border-radius: 8px; padding: 12px;">
<p align="center"><b>Metric</b><br><sub>Training metrics collection</sub></p>
</td>
<td bgcolor="#f6f8fa" style="border: 1px solid #d0d7de; border-radius: 8px; padding: 12px;">
<p align="center"><b>Reward</b><br><sub>Reward function</sub></p>
</td>
</tr>
<tr>
<td bgcolor="#f6f8fa" style="border: 1px solid #d0d7de; border-radius: 8px; padding: 12px;">
<p align="center"><b>Advantage</b><br><sub>Advantage function</sub></p>
</td>
<td style="border: 1px solid #d0d7de; border-radius: 8px; padding: 12px;">
<p align="center"><b>CheckpointEngine</b><br><sub>Weight synchronization</sub></p>
</td>
<td bgcolor="#f6f8fa" style="border: 1px solid #d0d7de; border-radius: 8px; padding: 12px;">
<p align="center"><b>Patch</b><br><sub>Patches for model fixes</sub></p>
</td>
<td style="border: 1px solid #d0d7de; border-radius: 8px; padding: 12px;">
<p align="center"><b>Module</b><br><sub>Components, e.g., Optimizer</sub></p>
</td>
<td bgcolor="#f6f8fa" style="border: 1px solid #d0d7de; border-radius: 8px; padding: 12px;">
<p align="center"><b>Kernel</b><br><sub>Operators</sub></p>
</td>
</tr>
<tr>
<td bgcolor="#f6f8fa" style="border: 1px solid #d0d7de; border-radius: 8px; padding: 12px;">
<p align="center"><b>Server</b><br><sub>Start backend cluster</sub></p>
</td>
<td style="border: 1px solid #d0d7de; border-radius: 8px; padding: 12px;">
<p align="center"><b>Client</b><br><sub>Client code</sub></p>
</td>
<td bgcolor="#f6f8fa" style="border: 1px solid #d0d7de; border-radius: 8px; padding: 12px;">
<p align="center"><b>Infra</b><br><sub>Isolate ray and torchrun differences</sub></p>
</td>
<td style="border: 1px solid #d0d7de; border-radius: 8px; padding: 12px;">
<p align="center"><b>Plugin</b><br><sub>Use hub components</sub></p>
</td>
<td bgcolor="#f6f8fa" style="border: 1px solid #d0d7de; border-radius: 8px; padding: 12px;">
<p align="center"><b>Hub</b><br><sub>Interface with HF/MS libraries</sub></p>
</td>
</tr>
</table>
</div>
## Community Components
| Component Type | Component Link | Component Function | Author |
| -------------- | -------------------------------------------------------------------------------------------------------- | --------------------------------------------------------------------------------------- | ------------------- |
| Patch | [qwen3_moe_transformers4_patch](https://www.modelscope.cn/models/twinkle-kit/qwen3_moe_transformers4_patch) | Fixes Qwen3 MoE model hang issue during FSDP2 training, effective for transformers==4.x | ModelScope Official |
## Contributions
Twinkle✨ is a collaborative initiative put together by ModelScope in partnership
with the open-source community, with key contributions from strategic stakeholders
including China Merchants Bank Tech Team.
We are grateful to the open-source community, particularly the projects that inspired us,
including [Transformers](https://github.com/huggingface/transformers),
[MS-SWIFT](https://github.com/modelscope/swift),
[veRL](https://github.com/verl-project/verl), [Tinker](https://github.com/thinking-machines-lab/tinker), and many others.
We welcome
open contributions via [issues](https://github.com/modelscope/twinkle/issues) and [pull-requests](https://github.com/modelscope/twinkle/pulls).
| text/markdown | null | ModelScope <contact@modelscope.cn> | null | null | null | null | [] | [] | null | null | <3.13,>=3.11 | [] | [] | [] | [
"datasets<4.0,>=3.0",
"numpy!=2.4.0,<3.0.0,>=2.0.0",
"omegaconf<3.0.0,>=2.3.0",
"fastapi",
"modelscope[framework]>=1.34.0",
"safetensors",
"peft<=0.19.0,>=0.11.0",
"transformers",
"accelerate; extra == \"transformers\"",
"torch<3.0.0,>=2.6.0; extra == \"transformers\"",
"torchvision; extra == \"transformers\"",
"kernels; extra == \"kernels\"",
"megatron-core>=0.12.0; extra == \"megatron\"",
"transformer-engine[pytorch]; extra == \"megatron\"",
"vllm>=0.11; extra == \"vllm\"",
"ray[serve]; extra == \"ray\"",
"sphinx<6.0.0,>=5.3.0; extra == \"docs\"",
"docutils<0.17.0,>=0.16.0; extra == \"docs\"",
"myst_parser; extra == \"docs\"",
"recommonmark; extra == \"docs\"",
"sphinx-book-theme; extra == \"docs\"",
"sphinx-copybutton; extra == \"docs\"",
"sphinx-rtd-theme; extra == \"docs\"",
"sphinx_markdown_tables; extra == \"docs\"",
"sphinxcontrib-mermaid; extra == \"docs\""
] | [] | [] | [] | [] | twine/6.2.0 CPython/3.11.14 | 2026-02-20T02:39:35.955497 | twinkle_kit-0.0.1.tar.gz | 303,457 | 1a/67/d5a3b0da80c8025366d54043d43e72d77fbce73b46e80de232d32a2c079a/twinkle_kit-0.0.1.tar.gz | source | sdist | null | false | ef4a08e82bed1966f26a960e9441bde1 | 51f4f6c3e6fad3eeddd6e07f69184804f840a11321490c172321defb97448f47 | 1a67d5a3b0da80c8025366d54043d43e72d77fbce73b46e80de232d32a2c079a | null | [
"LICENSE"
] | 228 |
2.4 | imlresi | 0.0.13 | Tools for dealing with IML-Resi PowerDrill data. | # imlresi
Tools for dealing with [IML-Resi PowerDrill](https://www.iml-service.com/product/iml-powerdrill/) data using python.
**THIS PACKAGE IS UNOFFICIAL AND HAS BEEN DEVELOPED INDEPENDENTLY OF IML**
Current limitations:
1. Focus is on actual measurements made by the tool rather than meta-info.
1. Poor support for data generated in PD-Tools (e.g. "assessments").
## Install
https://pypi.org/project/imlresi/
```sh
pip install imlresi
```
## Use
```python
from imlresi.trace import Trace
tr = Trace()
tr.read('trace.rgp')
tr.to_json()
```
| text/markdown | Jonathan Harrington | jh@aliente.ch | null | null | MIT | null | [
"Development Status :: 3 - Alpha",
"Intended Audience :: Developers",
"Intended Audience :: Science/Research",
"Topic :: Scientific/Engineering",
"License :: OSI Approved :: MIT License",
"Operating System :: Unix",
"Operating System :: POSIX",
"Operating System :: Microsoft :: Windows",
"Programming Language :: Python",
"Programming Language :: Python :: 3.6",
"Programming Language :: Python :: 3.7",
"Programming Language :: Python :: 3.8",
"Programming Language :: Python :: 3.9",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: Implementation :: CPython",
"Programming Language :: Python :: Implementation :: PyPy",
"Topic :: Utilities"
] | [] | https://github.com/hammockman/imlresi-pypi | null | >=3.6 | [] | [] | [] | [] | [] | [] | [] | [
"Issue Tracker, https://github.com/hammockman/imlresi-pypi/issues"
] | twine/6.2.0 CPython/3.14.2 | 2026-02-20T02:38:59.486563 | imlresi-0.0.13.tar.gz | 12,657 | d1/75/b847a04c4bbe0d0e23ee838cb3197888a6062df946141be64141abf5d57c/imlresi-0.0.13.tar.gz | source | sdist | null | false | 3ece07be2be68f76b2f2899733039097 | b13b8cd016984e834052f7832d7cb7a312ba370858bc5c5661e94cc995bbf759 | d175b847a04c4bbe0d0e23ee838cb3197888a6062df946141be64141abf5d57c | null | [
"LICENSE"
] | 243 |
2.4 | friend-lite-sdk | 0.3.0 | Python SDK for OMI/Neo1 BLE wearable devices — audio streaming, button events, and device control | # friend-lite-sdk
Python SDK for OMI / Friend Lite BLE wearable devices — audio streaming, button events, device control, and transcription.
Derived from the [OMI Python SDK](https://github.com/BasedHardware/omi/tree/main/sdks/python) (MIT license, Based Hardware Contributors). See `NOTICE` for attribution.
## Installation
```bash
pip install friend-lite-sdk
```
With optional transcription support:
```bash
pip install "friend-lite-sdk[deepgram]" # Deepgram cloud transcription
pip install "friend-lite-sdk[wyoming]" # Local ASR via Wyoming protocol
pip install "friend-lite-sdk[deepgram,wyoming]" # Both
```
## Features
- **BLE Audio Streaming** — Connect to OMI/Friend Lite devices and stream Opus-encoded audio
- **Button Events** — Subscribe to single tap, double tap, long press events
- **Haptic Control** — Trigger haptic feedback patterns on supported devices
- **WiFi Sync** — Configure and trigger WiFi-based audio sync
- **Storage Access** — Read stored audio from device storage
- **Neo1 Support** — Sleep/wake control for Neo1 devices
- **Transcription** — Built-in Deepgram and Wyoming ASR integration
## Quick Start
```python
import asyncio
from friend_lite import OmiConnection, ButtonState, parse_button_event
async def main():
async with OmiConnection("AA:BB:CC:DD:EE:FF") as conn:
# Stream audio
await conn.subscribe_audio(lambda _handle, data: print(len(data), "bytes"))
# Listen for button events
await conn.subscribe_button(
lambda _handle, data: print("Button:", parse_button_event(data))
)
await conn.wait_until_disconnected()
asyncio.run(main())
```
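Since `subscribe_audio` delivers raw packets one at a time (the `(handle, data)` callback shape shown above), a stateful sink that batches packets before writing is a common companion pattern. A stdlib-only sketch — `PacketSink` is illustrative, not part of the SDK:

```python
import io

class PacketSink:
    """Accumulate raw audio packets and flush them in fixed-size batches.

    Illustrative helper, not part of friend-lite-sdk.
    """
    def __init__(self, stream, flush_every=100):
        self.stream = stream
        self.flush_every = flush_every
        self.buffer = bytearray()
        self.packets = 0

    def on_packet(self, _handle, data):
        # Matches the (handle, data) callback shape of subscribe_audio.
        self.buffer.extend(data)
        self.packets += 1
        if self.packets % self.flush_every == 0:
            self.stream.write(bytes(self.buffer))
            self.buffer.clear()

out = io.BytesIO()
sink = PacketSink(out, flush_every=2)
for pkt in (b"\x01\x02", b"\x03", b"\x04\x05"):
    sink.on_packet(None, pkt)
# two packets flushed to `out`, the third still buffered
```

Passing `sink.on_packet` where the lambda appears in the Quick Start keeps disk writes off the hot BLE notification path.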
## Device Discovery
```python
import asyncio
from friend_lite import print_devices
asyncio.run(print_devices())
```
## Links
- [Chronicle Project](https://github.com/SimpleOpenSoftware/chronicle)
- [Original OMI Project](https://github.com/BasedHardware/omi)
| text/markdown | null | null | null | null | null | ble, bluetooth, wearable, omi, audio, streaming | [
"Development Status :: 4 - Beta",
"Intended Audience :: Developers",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: 3.13",
"Topic :: Software Development :: Libraries",
"Topic :: System :: Hardware"
] | [] | null | null | >=3.10 | [] | [] | [] | [
"bleak>=0.22.3",
"numpy>=1.26",
"opuslib>=3.0.1",
"websockets>=14.0.0",
"deepgram-sdk>=3.11.0; extra == \"deepgram\"",
"wyoming; extra == \"wyoming\"",
"mypy>=1.15.0; extra == \"dev\""
] | [] | [] | [] | [
"Homepage, https://github.com/SimpleOpenSoftware/chronicle",
"Repository, https://github.com/SimpleOpenSoftware/chronicle",
"Original Project, https://github.com/BasedHardware/omi"
] | twine/6.2.0 CPython/3.12.8 | 2026-02-20T02:37:06.715398 | friend_lite_sdk-0.3.0.tar.gz | 11,900 | bf/34/a42eabaf46116472ea8afa3ce480fb91b5ea72847a9b0f03b23ae17511b5/friend_lite_sdk-0.3.0.tar.gz | source | sdist | null | false | c17d31d62dfb5d59ff08a94f3f3a1530 | 9aa495537fe676705ff555093a1f503f4911a6ea7fdac1fd858a5fa1add92de8 | bf34a42eabaf46116472ea8afa3ce480fb91b5ea72847a9b0f03b23ae17511b5 | MIT | [
"LICENSE",
"NOTICE"
] | 232 |
2.4 | roboflex.transport.zmq | 0.1.14 | Roboflex Transport ZMQ Library | # roboflex.transport.zmq
Roboflex support for the ZMQ transport.
any node -> ZMQPublisher ==THEINTERNET==> ZMQSubscriber -> any node
See https://zeromq.org/ for details.
Using ZMQ, nodes can connect to other nodes, running in different threads, different processes, or different computers, with a publisher-subscriber pattern. roboflex.transport.zmq supports:
"inproc" transport -> between threads within same process
"ipc" transport -> between processes on same computer
"tcp" transport -> between processes on different computers
## System Dependencies
None! We build libzmq from source...
## pip install
pip install roboflex.transport.zmq
## Import (python)
import roboflex.transport.zmq as rtz
## Build (for c++ projects):
mkdir build && cd build
cmake ..
make
make install
## Run Examples (see [examples](examples))
go to roboflex_transport_zmq/examples
... create and activate some sort of virtual environment
where you installed roboflex.transport.zmq...
python pub_sub_0_py.py
## Nodes:
There are three: `ZMQContext`, `ZMQPublisher`, `ZMQSubscriber`.
To use the ZMQ transport nodes, first you must create a ZMQContext object. This mirrors the design of ZMQ itself.
# all parameters optional
zmq_context = ZMQContext(
num_io_threads = 1,
)
First, note that "bind addresses" can be three different things. All are strings, but each creates a different type of queue. All implement a one-to-many publish-subscribe pattern (in fact, many-to-many).
1. thread-to-thread only queues; "inproc://somename"; the fastest.
2. process-to-process (or thread-to-thread) queues; "ipc://somename"; sort of fast.
3. computer-to-computer (can work anywhere) queues (uses TCP): "tcp://*:5647"; the slowest, but works across the planet.
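The one-to-many pattern all three transports implement can be sketched with the standard library alone: each subscriber owns a bounded queue, and the publisher fans out to every queue, dropping when one is full — the role ZMQ's high-water mark (`max_queued_msgs` here) plays. This is an illustration of the pattern only, not ZMQ and not roboflex:

```python
import queue
import threading

class ToyPublisher:
    # Illustrative stand-in for the inproc pub-sub pattern.
    def __init__(self, max_queued_msgs=1000):
        self.subscribers = []
        self.max_queued_msgs = max_queued_msgs

    def subscribe(self):
        q = queue.Queue(maxsize=self.max_queued_msgs)
        self.subscribers.append(q)
        return q

    def publish(self, msg):
        for q in self.subscribers:
            try:
                q.put_nowait(msg)   # drop if a subscriber is full,
            except queue.Full:      # mimicking ZMQ's high-water mark
                pass

pub = ToyPublisher(max_queued_msgs=2)
sub_a, sub_b = pub.subscribe(), pub.subscribe()
received = []

def worker(q, name):
    # each subscriber runs in its own thread ("inproc" transport)
    received.append((name, q.get(timeout=1)))

threads = [threading.Thread(target=worker, args=(sub_a, "a")),
           threading.Thread(target=worker, args=(sub_b, "b"))]
for t in threads:
    t.start()
pub.publish("hello")   # both subscribers see the same message
for t in threads:
    t.join()
```

Note how a slow or full subscriber never blocks the publisher or the other subscribers — the same decoupling the ZMQ transports provide between roboflex nodes.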
Then, create a ZMQPublisher:
zmq_pub = ZMQPublisher(
# the ZMQContext object you created
zmq_context,
# what socket to bind to, or what transport to publish on
bind_address = <bind address>,
# or
bind_addresses = [<bind address>],
# optional
# name of the node
name = "ZMQPublisher",
# same as 'high-water mark' in zeromq parlance
max_queued_msgs = 1000,
)
# When a ZMQPublisher receives a message from some upstream node,
# it will wire-serialize it and publish it on its transport.
# You can get the bind_addresses:
ba = zmq_pub.bind_addresses
# you can get the high-water mark
hm = zmq_pub.max_queued_msgs
# You can publish a message 'by hand' - same as calling 'receive' on the node.
zmq_pub.publish(some_message)
Then, create one or more ZMQSubscribers to listen to what you are publishing. ZMQSubscribers are the equivalent of 'sensors': they are root nodes, must be started, and each starts a thread.
zmq_sub = ZMQSubscriber(
# the ZMQContext object you created
zmq_context,
# what socket to bind to, or what transport to subscribe on
connect_address = <bind address>,
# or
connect_addresses = [<bind address>],
# optional
# name of the node
name = "ZMQSubscriber",
# same as 'high-water mark' in zeromq parlance
max_queued_msgs = 1000,
# how often to yield control on the thread
# You'll probably never change this.
timeout_milliseconds = 10,
)
# you can get these values:
zmq_sub.connect_addresses
zmq_sub.connect_address
zmq_sub.max_queued_msgs
zmq_sub.timeout_milliseconds
# you MUST start it!
zmq_sub.start()
# you may pull a message 'by hand':
msg_or_none = zmq_sub.pull(
10, # timeout_milliseconds - how long to wait for a message
)
# you may 'produce' messages 'by hand' - this will wait x milliseconds
# for one message, and if it has received one, signals it downstream
zmq_sub.produce(
10, # timeout_milliseconds
)
| text/markdown | Colin Prepscius | colinprepscius@gmail.com | null | null | MIT | zmq, zeromq, robotics, middleware, flexbuffers, python, c++, c++20 | [
"Intended Audience :: Developers",
"License :: OSI Approved :: MIT License",
"Topic :: Software Development :: Libraries :: Python Modules",
"Topic :: Software Development :: Embedded Systems",
"Framework :: Robot Framework",
"Framework :: Robot Framework :: Library",
"Framework :: Robot Framework :: Tool",
"Programming Language :: C++",
"Programming Language :: Python :: 3"
] | [] | https://github.com/flexrobotics/roboflex_transport_zmq | null | >=3.6 | [] | [] | [] | [
"numpy",
"roboflex"
] | [] | [] | [] | [] | twine/6.2.0 CPython/3.11.5 | 2026-02-20T02:37:03.278574 | roboflex_transport_zmq-0.1.14.tar.gz | 14,798 | 96/c1/9f0ec7e2108759df54dcf5e5a62750c4f56d4fc73ce0c4365912a96a4f08/roboflex_transport_zmq-0.1.14.tar.gz | source | sdist | null | false | 5337390e9b0028b4c9250e9958a13f96 | 8f572c0eccce15d95cf56625abdc17e38462e1700cd0bf98a653bc085de779ac | 96c19f0ec7e2108759df54dcf5e5a62750c4f56d4fc73ce0c4365912a96a4f08 | null | [
"LICENSE"
] | 0 |
2.4 | lazylabel-gui | 1.6.3 | An image segmentation GUI for generating ML ready mask tensors and annotations. | # LazyLabel
[](https://pypi.org/project/lazylabel-gui/)
[](https://github.com/dnzckn/LazyLabel/blob/main/LICENSE)
<div align="center">
<img src="https://raw.githubusercontent.com/dnzckn/LazyLabel/main/src/lazylabel/demo_pictures/logo2.png" alt="LazyLabel Logo" style="height:60px; vertical-align:middle;" />
<img src="https://raw.githubusercontent.com/dnzckn/LazyLabel/main/src/lazylabel/demo_pictures/logo_black.png" alt="LazyLabel Cursive" style="height:60px; vertical-align:middle;" />
</div>
**AI-Assisted Image Segmentation for Machine Learning Dataset Preparation**
LazyLabel combines Meta's Segment Anything Model (SAM) with comprehensive manual annotation tools to accelerate the creation of pixel-perfect segmentation masks for computer vision applications.
<div align="center">
<img src="https://raw.githubusercontent.com/dnzckn/LazyLabel/main/src/lazylabel/demo_pictures/gui.PNG" alt="LazyLabel Screenshot" width="800"/>
</div>
---
## Quick Start
```bash
pip install lazylabel-gui
lazylabel-gui
```
**From source:**
```bash
git clone https://github.com/dnzckn/LazyLabel.git
cd LazyLabel
pip install -e .
lazylabel-gui
```
**Requirements:** Python 3.10+, 8GB RAM, ~2.5GB disk space (for model weights)
---
## Core Features
### Annotation Tools
- **AI (SAM)**: Single-click segmentation with point-based refinement (SAM 1.0 & 2.1, GPU/CPU)
- **Polygon**: Vertex-level drawing and editing for precise boundaries
- **Box**: Bounding box annotations for object detection
- **Subtract**: Remove regions from existing masks
### Annotation Modes
- **Single View**: Fine-tune individual masks with maximum precision
- **Multi View**: Annotate up to 4 images simultaneously—ideal for objects in similar positions with slight variations
- **Sequence**: Propagate a refined mask across thousands of frames using SAM 2's video predictor
### Image Processing
- **FFT filtering**: Remove noise and enhance edges
- **Channel thresholding**: Isolate objects by color
- **Border cropping**: Zero out pixels outside defined regions in saved outputs
- **View adjustments**: Brightness, contrast, gamma correction, color saturation
---
## Export Formats
### One-hot encoded tensors (`.npz`)
```python
import numpy as np
data = np.load('image.npz')
mask = data['mask'] # Shape: (height, width, num_classes)
# Each channel represents one class
sky = mask[:, :, 0]
boats = mask[:, :, 1]
cats = mask[:, :, 2]
dogs = mask[:, :, 3]
```
### Normalized polygon coordinates (`.txt`)
```
0 0.234 0.456 0.289 0.478 0.301 0.523 ...
1 0.567 0.123 0.598 0.145 0.612 0.189 ...
```
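Each line is a class index followed by alternating normalized x/y vertex coordinates. A minimal parsing sketch (the function name and the pixel dimensions are illustrative, not part of LazyLabel's API):

```python
def parse_polygon_line(line: str, width: int, height: int):
    """Parse one annotation line into (class_id, [(x_px, y_px), ...])."""
    parts = line.split()
    class_id = int(parts[0])
    coords = [float(v) for v in parts[1:]]
    # Values alternate x, y and are normalized to [0, 1];
    # scale back to pixel space using the image dimensions.
    return class_id, [
        (x * width, y * height) for x, y in zip(coords[0::2], coords[1::2])
    ]

cid, pts = parse_polygon_line("0 0.25 0.5 0.75 0.5 0.5 1.0", 100, 200)
# cid == 0; pts == [(25.0, 100.0), (75.0, 100.0), (50.0, 200.0)]
```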
### Class Aliases (`.json`)
```json
{
"0": "background",
"1": "person",
"2": "vehicle"
}
```
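The aliases map channel indices of the one-hot tensor to readable names, so the two exports can be combined. A sketch under that assumption (the inline JSON and the toy array stand in for the aliases file and a loaded `mask`):

```python
import json

import numpy as np

# Inline stand-ins for the aliases file and the loaded one-hot mask.
aliases = json.loads('{"0": "background", "1": "person", "2": "vehicle"}')
mask = np.zeros((4, 4, 3), dtype=np.uint8)
mask[:2, :, 1] = 1  # pretend the top half of the image is 'person'

# Per-class pixel counts keyed by the human-readable alias.
counts = {
    aliases[str(c)]: int(mask[:, :, c].sum()) for c in range(mask.shape[2])
}
# counts == {'background': 0, 'person': 8, 'vehicle': 0}
```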
---
## SAM 2.1 Setup
SAM 1.0 models are downloaded automatically on first use. For SAM 2.1 (improved accuracy, required for Sequence mode):
1. Install SAM 2: `pip install git+https://github.com/facebookresearch/sam2.git`
2. Download a model (e.g., `sam2.1_hiera_large.pt`) from the [SAM 2 repository](https://github.com/facebookresearch/sam2)
3. Place in LazyLabel's models folder:
- Via pip: `~/.local/share/lazylabel/models/`
- From source: `src/lazylabel/models/`
4. Select the model from the dropdown in settings
---
## Building Windows Executable
Create a standalone Windows executable with bundled models for offline use:
**Requirements:**
- Windows (native, not WSL)
- Python 3.10+
- PyInstaller: `pip install pyinstaller`
**Build steps:**
```bash
git clone https://github.com/dnzckn/LazyLabel.git
cd LazyLabel
python build_system/windows/build_windows.py
```
The executable will be created in `dist/LazyLabel/`. The entire folder (~7-8GB) can be moved anywhere and runs offline.
---
## Documentation
- [Usage Manual](src/lazylabel/USAGE_MANUAL.md) - Comprehensive feature guide
- [Architecture Guide](src/lazylabel/ARCHITECTURE.md) - Technical implementation details
- [Changelog](CHANGELOG.md) - Version history and release notes
- [GitHub Issues](https://github.com/dnzckn/LazyLabel/issues) - Report bugs or request features
---
| text/markdown | null | "Deniz N. Cakan" <deniz.n.cakan@gmail.com> | null | null | MIT License
Copyright (c) 2025 Deniz N. Cakan
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.
| null | [
"Programming Language :: Python :: 3",
"License :: OSI Approved :: MIT License",
"Operating System :: OS Independent",
"Topic :: Scientific/Engineering :: Image Processing",
"Environment :: X11 Applications :: Qt"
] | [] | null | null | >=3.10 | [] | [] | [] | [
"PyQt6>=6.9.0",
"pyqtdarktheme==2.1.0",
"torch>=2.7.1",
"torchvision>=0.22.1",
"segment-anything==1.0",
"numpy>=2.1.2",
"opencv-python>=4.11.0.86",
"scipy>=1.15.3",
"requests>=2.32.4",
"tqdm>=4.67.1",
"pytest>=7.0.0; extra == \"dev\"",
"pytest-cov>=4.0.0; extra == \"dev\"",
"pytest-mock>=3.10.0; extra == \"dev\"",
"pytest-qt>=4.2.0; extra == \"dev\"",
"ruff>=0.8.0; extra == \"dev\""
] | [] | [] | [] | [
"Homepage, https://github.com/dnzckn/lazylabel",
"Bug Tracker, https://github.com/dnzckn/lazylabel/issues"
] | twine/6.2.0 CPython/3.10.16 | 2026-02-20T02:35:36.399370 | lazylabel_gui-1.6.3.tar.gz | 223,407 | 8f/ab/8648c5671409edde2984753e38167c2192e29d0dba0d18c6978c41f4b2b5/lazylabel_gui-1.6.3.tar.gz | source | sdist | null | false | 9ad429de53a75a5cc392e035da6cd967 | ea3a97587247860368f9cffaf8d1335c66b0c8abd097dfd97773daea6b12ff9a | 8fab8648c5671409edde2984753e38167c2192e29d0dba0d18c6978c41f4b2b5 | null | [
"LICENSE"
] | 228 |
2.4 | pressoir | 4.9.4 | A Static Book Generator. | # Pressoir
Full documentation: https://pressoir.org/
## Quick start
1. Install uv: https://docs.astral.sh/uv/getting-started/installation/#standalone-installer
2. Move into the folder containing the `textes` directory
3. Build the book: `uv run --with pressoir pressoir build serve`
4. Open http://127.0.0.1:8000 to view the book
Optionally, generate a PDF of the book:
5. Run: `uv run --with pressoir pressoir export`
6. Pick up the PDF from `public/book.pdf`
### From a Stylo corpus
1. Install uv: https://docs.astral.sh/uv/getting-started/installation/#standalone-installer
2. Move into a new folder
3. Fetch the texts: `uv run --with pressoir pressoir stylo <corpus-id>`
4. Build the book: `uv run --with pressoir pressoir build serve`
5. Open http://127.0.0.1:8000 to view the book
## Installation
Prerequisite: Python 3.8+
Install and activate a virtual environment:
$ python3 -m venv venv
$ source venv/bin/activate
Install the dependencies:
$ make install
## Initialize a book
For example:
$ pressoir init --repository-path=../fia --collection sp
or
$ pressoir init --repository-path=../12-editionscritiques --collection pum
Note: if the destination does not exist or has no `textes` folder,
a complete skeleton of the book is created.
For example:
$ mkdir livre-test
$ cd livre-test
$ python3 -m venv venv
$ source venv/bin/activate
$ pip install pressoir
$ pressoir init --collection=sp
## Build a book
$ pressoir build --repository-path=../fia-en
where `../fia-en` is the path to the book's repository.
As a bonus, you can pass a specific chapter to rebuild only that one:
$ pressoir build --repository-path=../fia-en --chapter=chapter1
When working locally / in development, pass the `--local` option
so that the book's navigation links work.
## Serve a book
$ pressoir serve --repository-path=../fia-en
where `../fia-en` is the path to the built book's repository.
## Generate a book's md+tex+pdf
Experimental: the `pressoir export` command can generate markdown, tex, and pdf files from the sources. They are created in `public/book.{md|tex|pdf}`.
You need (xe)latex installed for this generation.
## Help
### Commands
<!-- [[[cog
import subprocess
import cog
output = subprocess.check_output("pressoir --help", shell=True)
help = output.decode().split("\n", 1)[1] # Remove Pandoc version.
cog.out(f"```\n{help}\n```")
]]] -->
```
usage: pressoir [-h] ...
options:
-h, --help Show this help message and exit
Available commands:
version Return the current version of pressoir.
init Initialize a new book to `repository_path` or current directory.
docs Generate documentation with pressoir itself. #SoMeta
build Build a book from `repository_path` or current directory.
export Generate a single md+tex+pdf file from `repository_path` or
current directory.
serve Serve an HTML book from `repository_path`/public or current
directory/public.
stylo Initialize a new book to current directory from Stylo.
```
<!-- [[[end]]] -->
### Command: `init`
<!-- [[[cog
import subprocess
import cog
output = subprocess.check_output("pressoir init --help", shell=True)
help = output.decode().split("\n", 1)[1] # Remove Pandoc version.
cog.out(f"```\n{help}\n```")
]]] -->
```
usage: pressoir init [-h] [--repository-path REPOSITORY_PATH]
[--collection {pum,sp,blank}]
options:
-h, --help show this help message and exit
--repository-path REPOSITORY_PATH
Absolute or relative path to book’s sources (default:
current).
--collection, -c {pum,sp,blank}
Name of the collection (default: blank).
```
<!-- [[[end]]] -->
### Command: `docs`
<!-- [[[cog
import subprocess
import cog
output = subprocess.check_output("pressoir docs --help", shell=True)
help = output.decode().split("\n", 1)[1] # Remove Pandoc version.
cog.out(f"```\n{help}\n```")
]]] -->
```
usage: pressoir docs [-h] [--target-path TARGET_PATH]
options:
-h, --help show this help message and exit
--target-path TARGET_PATH
```
<!-- [[[end]]] -->
### Command: `build`
<!-- [[[cog
import subprocess
import cog
output = subprocess.check_output("pressoir build --help", shell=True)
help = output.decode().split("\n", 1)[1] # Remove Pandoc version.
cog.out(f"```\n{help}\n```")
]]] -->
```
usage: pressoir build [-h] [--repository-path REPOSITORY_PATH]
[--csl-path CSL_PATH] [--target-path TARGET_PATH]
[--templates-folder TEMPLATES_FOLDER]
[--chapter CHAPTER] [--keep-statics] [--verbose]
options:
-h, --help show this help message and exit
--repository-path REPOSITORY_PATH
Absolute or relative path to book’s sources (default:
current).
--csl-path CSL_PATH Path to .csl file (default: Pandoc’s default).
--target-path TARGET_PATH
Where the book will be built (default:
`repository_path`/public).
--templates-folder TEMPLATES_FOLDER
Folder with header.html/footer.html for before/after
inclusion.
--chapter, -c CHAPTER
Specify a given chapter id (e.g. `chapter1`).
--keep-statics Do not override the statics with regular ones
(default: False).
--verbose, -v Display more informations during the build (default:
False).
```
<!-- [[[end]]] -->
### Command: `export`
<!-- [[[cog
import subprocess
import cog
output = subprocess.check_output("pressoir export --help", shell=True)
help = output.decode().split("\n", 1)[1] # Remove Pandoc version.
cog.out(f"```\n{help}\n```")
]]] -->
```
usage: pressoir export [-h] [--repository-path REPOSITORY_PATH]
[--template-path TEMPLATE_PATH] [--csl-path CSL_PATH]
[--target-path TARGET_PATH] [--verbose]
options:
-h, --help show this help message and exit
--repository-path REPOSITORY_PATH
Path to book’s sources (default: current).
--template-path TEMPLATE_PATH
Path to .tex template (default: Pandoc’s default).
--csl-path CSL_PATH Path to .csl file (default: Pandoc’s default).
--target-path TARGET_PATH
Where the book will be built (default:
`repository_path`/public).
--verbose, -v Display a lot of informations, useful for debugging.
```
<!-- [[[end]]] -->
### Command: `serve`
<!-- [[[cog
import subprocess
import cog
output = subprocess.check_output("pressoir serve --help", shell=True)
help = output.decode().split("\n", 1)[1] # Remove Pandoc version.
cog.out(f"```\n{help}\n```")
]]] -->
```
usage: pressoir serve [-h] [--repository-path REPOSITORY_PATH] [--port PORT]
options:
-h, --help show this help message and exit
--repository-path REPOSITORY_PATH
Absolute or relative path to book’s sources (default:
current).
--port, -p PORT Port to serve the book from (default=8000)
```
<!-- [[[end]]] -->
### Command: `stylo`
<!-- [[[cog
import subprocess
import cog
output = subprocess.check_output("pressoir stylo --help", shell=True)
help = output.decode().split("\n", 1)[1] # Remove Pandoc version.
cog.out(f"```\n{help}\n```")
]]] -->
```
usage: pressoir stylo [-h] [--stylo-instance STYLO_INSTANCE]
[--stylo-export STYLO_EXPORT] [--from-scratch]
[--keep-metadata]
stylo_id
positional arguments:
stylo_id Corpus id from Stylo.
options:
-h, --help show this help message and exit
--stylo-instance STYLO_INSTANCE
Instance of Stylo (default: stylo.huma-num.fr).
--stylo-export STYLO_EXPORT
Stylo export URL (default: https://export.stylo.huma-
num.fr).
--from-scratch Do not ask to override local files (default: False).
--keep-metadata Do not override the `livre.yaml` metadata file
(default: False).
```
<!-- [[[end]]] -->
| text/markdown | null | null | null | null | GNU AFFERO GENERAL PUBLIC LICENSE
Version 3, 19 November 2007
Copyright (C) 2007 Free Software Foundation, Inc. <https://fsf.org/>
Everyone is permitted to copy and distribute verbatim copies
of this license document, but changing it is not allowed.
Preamble
The GNU Affero General Public License is a free, copyleft license for
software and other kinds of works, specifically designed to ensure
cooperation with the community in the case of network server software.
The licenses for most software and other practical works are designed
to take away your freedom to share and change the works. By contrast,
our General Public Licenses are intended to guarantee your freedom to
share and change all versions of a program--to make sure it remains free
software for all its users.
When we speak of free software, we are referring to freedom, not
price. Our General Public Licenses are designed to make sure that you
have the freedom to distribute copies of free software (and charge for
them if you wish), that you receive source code or can get it if you
want it, that you can change the software or use pieces of it in new
free programs, and that you know you can do these things.
Developers that use our General Public Licenses protect your rights
with two steps: (1) assert copyright on the software, and (2) offer
you this License which gives you legal permission to copy, distribute
and/or modify the software.
A secondary benefit of defending all users' freedom is that
improvements made in alternate versions of the program, if they
receive widespread use, become available for other developers to
incorporate. Many developers of free software are heartened and
encouraged by the resulting cooperation. However, in the case of
software used on network servers, this result may fail to come about.
The GNU General Public License permits making a modified version and
letting the public access it on a server without ever releasing its
source code to the public.
The GNU Affero General Public License is designed specifically to
ensure that, in such cases, the modified source code becomes available
to the community. It requires the operator of a network server to
provide the source code of the modified version running there to the
users of that server. Therefore, public use of a modified version, on
a publicly accessible server, gives the public access to the source
code of the modified version.
An older license, called the Affero General Public License and
published by Affero, was designed to accomplish similar goals. This is
a different license, not a version of the Affero GPL, but Affero has
released a new version of the Affero GPL which permits relicensing under
this license.
The precise terms and conditions for copying, distribution and
modification follow.
TERMS AND CONDITIONS
0. Definitions.
"This License" refers to version 3 of the GNU Affero General Public License.
"Copyright" also means copyright-like laws that apply to other kinds of
works, such as semiconductor masks.
"The Program" refers to any copyrightable work licensed under this
License. Each licensee is addressed as "you". "Licensees" and
"recipients" may be individuals or organizations.
To "modify" a work means to copy from or adapt all or part of the work
in a fashion requiring copyright permission, other than the making of an
exact copy. The resulting work is called a "modified version" of the
earlier work or a work "based on" the earlier work.
A "covered work" means either the unmodified Program or a work based
on the Program.
To "propagate" a work means to do anything with it that, without
permission, would make you directly or secondarily liable for
infringement under applicable copyright law, except executing it on a
computer or modifying a private copy. Propagation includes copying,
distribution (with or without modification), making available to the
public, and in some countries other activities as well.
To "convey" a work means any kind of propagation that enables other
parties to make or receive copies. Mere interaction with a user through
a computer network, with no transfer of a copy, is not conveying.
An interactive user interface displays "Appropriate Legal Notices"
to the extent that it includes a convenient and prominently visible
feature that (1) displays an appropriate copyright notice, and (2)
tells the user that there is no warranty for the work (except to the
extent that warranties are provided), that licensees may convey the
work under this License, and how to view a copy of this License. If
the interface presents a list of user commands or options, such as a
menu, a prominent item in the list meets this criterion.
1. Source Code.
The "source code" for a work means the preferred form of the work
for making modifications to it. "Object code" means any non-source
form of a work.
A "Standard Interface" means an interface that either is an official
standard defined by a recognized standards body, or, in the case of
interfaces specified for a particular programming language, one that
is widely used among developers working in that language.
The "System Libraries" of an executable work include anything, other
than the work as a whole, that (a) is included in the normal form of
packaging a Major Component, but which is not part of that Major
Component, and (b) serves only to enable use of the work with that
Major Component, or to implement a Standard Interface for which an
implementation is available to the public in source code form. A
"Major Component", in this context, means a major essential component
(kernel, window system, and so on) of the specific operating system
(if any) on which the executable work runs, or a compiler used to
produce the work, or an object code interpreter used to run it.
The "Corresponding Source" for a work in object code form means all
the source code needed to generate, install, and (for an executable
work) run the object code and to modify the work, including scripts to
control those activities. However, it does not include the work's
System Libraries, or general-purpose tools or generally available free
programs which are used unmodified in performing those activities but
which are not part of the work. For example, Corresponding Source
includes interface definition files associated with source files for
the work, and the source code for shared libraries and dynamically
linked subprograms that the work is specifically designed to require,
such as by intimate data communication or control flow between those
subprograms and other parts of the work.
The Corresponding Source need not include anything that users
can regenerate automatically from other parts of the Corresponding
Source.
The Corresponding Source for a work in source code form is that
same work.
2. Basic Permissions.
All rights granted under this License are granted for the term of
copyright on the Program, and are irrevocable provided the stated
conditions are met. This License explicitly affirms your unlimited
permission to run the unmodified Program. The output from running a
covered work is covered by this License only if the output, given its
content, constitutes a covered work. This License acknowledges your
rights of fair use or other equivalent, as provided by copyright law.
You may make, run and propagate covered works that you do not
convey, without conditions so long as your license otherwise remains
in force. You may convey covered works to others for the sole purpose
of having them make modifications exclusively for you, or provide you
with facilities for running those works, provided that you comply with
the terms of this License in conveying all material for which you do
not control copyright. Those thus making or running the covered works
for you must do so exclusively on your behalf, under your direction
and control, on terms that prohibit them from making any copies of
your copyrighted material outside their relationship with you.
Conveying under any other circumstances is permitted solely under
the conditions stated below. Sublicensing is not allowed; section 10
makes it unnecessary.
3. Protecting Users' Legal Rights From Anti-Circumvention Law.
No covered work shall be deemed part of an effective technological
measure under any applicable law fulfilling obligations under article
11 of the WIPO copyright treaty adopted on 20 December 1996, or
similar laws prohibiting or restricting circumvention of such
measures.
When you convey a covered work, you waive any legal power to forbid
circumvention of technological measures to the extent such circumvention
is effected by exercising rights under this License with respect to
the covered work, and you disclaim any intention to limit operation or
modification of the work as a means of enforcing, against the work's
users, your or third parties' legal rights to forbid circumvention of
technological measures.
4. Conveying Verbatim Copies.
You may convey verbatim copies of the Program's source code as you
receive it, in any medium, provided that you conspicuously and
appropriately publish on each copy an appropriate copyright notice;
keep intact all notices stating that this License and any
non-permissive terms added in accord with section 7 apply to the code;
keep intact all notices of the absence of any warranty; and give all
recipients a copy of this License along with the Program.
You may charge any price or no price for each copy that you convey,
and you may offer support or warranty protection for a fee.
5. Conveying Modified Source Versions.
You may convey a work based on the Program, or the modifications to
produce it from the Program, in the form of source code under the
terms of section 4, provided that you also meet all of these conditions:
a) The work must carry prominent notices stating that you modified
it, and giving a relevant date.
b) The work must carry prominent notices stating that it is
released under this License and any conditions added under section
7. This requirement modifies the requirement in section 4 to
"keep intact all notices".
c) You must license the entire work, as a whole, under this
License to anyone who comes into possession of a copy. This
License will therefore apply, along with any applicable section 7
additional terms, to the whole of the work, and all its parts,
regardless of how they are packaged. This License gives no
permission to license the work in any other way, but it does not
invalidate such permission if you have separately received it.
d) If the work has interactive user interfaces, each must display
Appropriate Legal Notices; however, if the Program has interactive
interfaces that do not display Appropriate Legal Notices, your
work need not make them do so.
A compilation of a covered work with other separate and independent
works, which are not by their nature extensions of the covered work,
and which are not combined with it such as to form a larger program,
in or on a volume of a storage or distribution medium, is called an
"aggregate" if the compilation and its resulting copyright are not
used to limit the access or legal rights of the compilation's users
beyond what the individual works permit. Inclusion of a covered work
in an aggregate does not cause this License to apply to the other
parts of the aggregate.
6. Conveying Non-Source Forms.
You may convey a covered work in object code form under the terms
of sections 4 and 5, provided that you also convey the
machine-readable Corresponding Source under the terms of this License,
in one of these ways:
a) Convey the object code in, or embodied in, a physical product
(including a physical distribution medium), accompanied by the
Corresponding Source fixed on a durable physical medium
customarily used for software interchange.
b) Convey the object code in, or embodied in, a physical product
(including a physical distribution medium), accompanied by a
written offer, valid for at least three years and valid for as
long as you offer spare parts or customer support for that product
model, to give anyone who possesses the object code either (1) a
copy of the Corresponding Source for all the software in the
product that is covered by this License, on a durable physical
medium customarily used for software interchange, for a price no
more than your reasonable cost of physically performing this
conveying of source, or (2) access to copy the
Corresponding Source from a network server at no charge.
c) Convey individual copies of the object code with a copy of the
written offer to provide the Corresponding Source. This
alternative is allowed only occasionally and noncommercially, and
only if you received the object code with such an offer, in accord
with subsection 6b.
d) Convey the object code by offering access from a designated
place (gratis or for a charge), and offer equivalent access to the
Corresponding Source in the same way through the same place at no
further charge. You need not require recipients to copy the
Corresponding Source along with the object code. If the place to
copy the object code is a network server, the Corresponding Source
may be on a different server (operated by you or a third party)
that supports equivalent copying facilities, provided you maintain
clear directions next to the object code saying where to find the
Corresponding Source. Regardless of what server hosts the
Corresponding Source, you remain obligated to ensure that it is
available for as long as needed to satisfy these requirements.
e) Convey the object code using peer-to-peer transmission, provided
you inform other peers where the object code and Corresponding
Source of the work are being offered to the general public at no
charge under subsection 6d.
A separable portion of the object code, whose source code is excluded
from the Corresponding Source as a System Library, need not be
included in conveying the object code work.
A "User Product" is either (1) a "consumer product", which means any
tangible personal property which is normally used for personal, family,
or household purposes, or (2) anything designed or sold for incorporation
into a dwelling. In determining whether a product is a consumer product,
doubtful cases shall be resolved in favor of coverage. For a particular
product received by a particular user, "normally used" refers to a
typical or common use of that class of product, regardless of the status
of the particular user or of the way in which the particular user
actually uses, or expects or is expected to use, the product. A product
is a consumer product regardless of whether the product has substantial
commercial, industrial or non-consumer uses, unless such uses represent
the only significant mode of use of the product.
"Installation Information" for a User Product means any methods,
procedures, authorization keys, or other information required to install
and execute modified versions of a covered work in that User Product from
a modified version of its Corresponding Source. The information must
suffice to ensure that the continued functioning of the modified object
code is in no case prevented or interfered with solely because
modification has been made.
If you convey an object code work under this section in, or with, or
specifically for use in, a User Product, and the conveying occurs as
part of a transaction in which the right of possession and use of the
User Product is transferred to the recipient in perpetuity or for a
fixed term (regardless of how the transaction is characterized), the
Corresponding Source conveyed under this section must be accompanied
by the Installation Information. But this requirement does not apply
if neither you nor any third party retains the ability to install
modified object code on the User Product (for example, the work has
been installed in ROM).
The requirement to provide Installation Information does not include a
requirement to continue to provide support service, warranty, or updates
for a work that has been modified or installed by the recipient, or for
the User Product in which it has been modified or installed. Access to a
network may be denied when the modification itself materially and
adversely affects the operation of the network or violates the rules and
protocols for communication across the network.
Corresponding Source conveyed, and Installation Information provided,
in accord with this section must be in a format that is publicly
documented (and with an implementation available to the public in
source code form), and must require no special password or key for
unpacking, reading or copying.
7. Additional Terms.
"Additional permissions" are terms that supplement the terms of this
License by making exceptions from one or more of its conditions.
Additional permissions that are applicable to the entire Program shall
be treated as though they were included in this License, to the extent
that they are valid under applicable law. If additional permissions
apply only to part of the Program, that part may be used separately
under those permissions, but the entire Program remains governed by
this License without regard to the additional permissions.
When you convey a copy of a covered work, you may at your option
remove any additional permissions from that copy, or from any part of
it. (Additional permissions may be written to require their own
removal in certain cases when you modify the work.) You may place
additional permissions on material, added by you to a covered work,
for which you have or can give appropriate copyright permission.
Notwithstanding any other provision of this License, for material you
add to a covered work, you may (if authorized by the copyright holders of
that material) supplement the terms of this License with terms:
a) Disclaiming warranty or limiting liability differently from the
terms of sections 15 and 16 of this License; or
b) Requiring preservation of specified reasonable legal notices or
author attributions in that material or in the Appropriate Legal
Notices displayed by works containing it; or
c) Prohibiting misrepresentation of the origin of that material, or
requiring that modified versions of such material be marked in
reasonable ways as different from the original version; or
d) Limiting the use for publicity purposes of names of licensors or
authors of the material; or
e) Declining to grant rights under trademark law for use of some
trade names, trademarks, or service marks; or
f) Requiring indemnification of licensors and authors of that
material by anyone who conveys the material (or modified versions of
it) with contractual assumptions of liability to the recipient, for
any liability that these contractual assumptions directly impose on
those licensors and authors.
All other non-permissive additional terms are considered "further
restrictions" within the meaning of section 10. If the Program as you
received it, or any part of it, contains a notice stating that it is
governed by this License along with a term that is a further
restriction, you may remove that term. If a license document contains
a further restriction but permits relicensing or conveying under this
License, you may add to a covered work material governed by the terms
of that license document, provided that the further restriction does
not survive such relicensing or conveying.
If you add terms to a covered work in accord with this section, you
must place, in the relevant source files, a statement of the
additional terms that apply to those files, or a notice indicating
where to find the applicable terms.
Additional terms, permissive or non-permissive, may be stated in the
form of a separately written license, or stated as exceptions;
the above requirements apply either way.
8. Termination.
You may not propagate or modify a covered work except as expressly
provided under this License. Any attempt otherwise to propagate or
modify it is void, and will automatically terminate your rights under
this License (including any patent licenses granted under the third
paragraph of section 11).
However, if you cease all violation of this License, then your
license from a particular copyright holder is reinstated (a)
provisionally, unless and until the copyright holder explicitly and
finally terminates your license, and (b) permanently, if the copyright
holder fails to notify you of the violation by some reasonable means
prior to 60 days after the cessation.
Moreover, your license from a particular copyright holder is
reinstated permanently if the copyright holder notifies you of the
violation by some reasonable means, this is the first time you have
received notice of violation of this License (for any work) from that
copyright holder, and you cure the violation prior to 30 days after
your receipt of the notice.
Termination of your rights under this section does not terminate the
licenses of parties who have received copies or rights from you under
this License. If your rights have been terminated and not permanently
reinstated, you do not qualify to receive new licenses for the same
material under section 10.
9. Acceptance Not Required for Having Copies.
You are not required to accept this License in order to receive or
run a copy of the Program. Ancillary propagation of a covered work
occurring solely as a consequence of using peer-to-peer transmission
to receive a copy likewise does not require acceptance. However,
nothing other than this License grants you permission to propagate or
modify any covered work. These actions infringe copyright if you do
not accept this License. Therefore, by modifying or propagating a
covered work, you indicate your acceptance of this License to do so.
10. Automatic Licensing of Downstream Recipients.
Each time you convey a covered work, the recipient automatically
receives a license from the original licensors, to run, modify and
propagate that work, subject to this License. You are not responsible
for enforcing compliance by third parties with this License.
An "entity transaction" is a transaction transferring control of an
organization, or substantially all assets of one, or subdividing an
organization, or merging organizations. If propagation of a covered
work results from an entity transaction, each party to that
transaction who receives a copy of the work also receives whatever
licenses to the work the party's predecessor in interest had or could
give under the previous paragraph, plus a right to possession of the
Corresponding Source of the work from the predecessor in interest, if
the predecessor has it or can get it with reasonable efforts.
You may not impose any further restrictions on the exercise of the
rights granted or affirmed under this License. For example, you may
not impose a license fee, royalty, or other charge for exercise of
rights granted under this License, and you may not initiate litigation
(including a cross-claim or counterclaim in a lawsuit) alleging that
any patent claim is infringed by making, using, selling, offering for
sale, or importing the Program or any portion of it.
11. Patents.
A "contributor" is a copyright holder who authorizes use under this
License of the Program or a work on which the Program is based. The
work thus licensed is called the contributor's "contributor version".
A contributor's "essential patent claims" are all patent claims
owned or controlled by the contributor, whether already acquired or
hereafter acquired, that would be infringed by some manner, permitted
by this License, of making, using, or selling its contributor version,
but do not include claims that would be infringed only as a
consequence of further modification of the contributor version. For
purposes of this definition, "control" includes the right to grant
patent sublicenses in a manner consistent with the requirements of
this License.
Each contributor grants you a non-exclusive, worldwide, royalty-free
patent license under the contributor's essential patent claims, to
make, use, sell, offer for sale, import and otherwise run, modify and
propagate the contents of its contributor version.
In the following three paragraphs, a "patent license" is any express
agreement or commitment, however denominated, not to enforce a patent
(such as an express permission to practice a patent or covenant not to
sue for patent infringement). To "grant" such a patent license to a
party means to make such an agreement or commitment not to enforce a
patent against the party.
If you convey a covered work, knowingly relying on a patent license,
and the Corresponding Source of the work is not available for anyone
to copy, free of charge and under the terms of this License, through a
publicly available network server or other readily accessible means,
then you must either (1) cause the Corresponding Source to be so
available, or (2) arrange to deprive yourself of the benefit of the
patent license for this particular work, or (3) arrange, in a manner
consistent with the requirements of this License, to extend the patent
license to downstream recipients. "Knowingly relying" means you have
actual knowledge that, but for the patent license, your conveying the
covered work in a country, or your recipient's use of the covered work
in a country, would infringe one or more identifiable patents in that
country that you have reason to believe are valid.
If, pursuant to or in connection with a single transaction or
arrangement, you convey, or propagate by procuring conveyance of, a
covered work, and grant a patent license to some of the parties
receiving the covered work authorizing them to use, propagate, modify
or convey a specific copy of the covered work, then the patent license
you grant is automatically extended to all recipients of the covered
work and works based on it.
A patent license is "discriminatory" if it does not include within
the scope of its coverage, prohibits the exercise of, or is
conditioned on the non-exercise of one or more of the rights that are
specifically granted under this License. You may not convey a covered
work if you are a party to an arrangement with a third party that is
in the business of distributing software, under which you make payment
to the third party based on the extent of your activity of conveying
the work, and under which the third party grants, to any of the
parties who would receive the covered work from you, a discriminatory
patent license (a) in connection with copies of the covered work
conveyed by you (or copies made from those copies), or (b) primarily
for and in connection with specific products or compilations that
contain the covered work, unless you entered into that arrangement,
or that patent license was granted, prior to 28 March 2007.
Nothing in this License shall be construed as excluding or limiting
any implied license or other defenses to infringement that may
otherwise be available to you under applicable patent law.
12. No Surrender of Others' Freedom.
If conditions are imposed on you (whether by court order, agreement or
otherwise) that contradict the conditions of this License, they do not
excuse you from the conditions of this License. If you cannot convey a
covered work so as to satisfy simultaneously your obligations under this
License and any other pertinent obligations, then as a consequence you may
not convey it at all. For example, if you agree to terms that obligate you
to collect a royalty for further conveying from those to whom you convey
the Program, the only way you could satisfy both those terms and this
License would be to refrain entirely from conveying the Program.
13. Remote Network Interaction; Use with the GNU General Public License.
Notwithstanding any other provision of this License, if you modify the
Program, your modified version must prominently offer all users
interacting with it remotely through a computer network (if your version
supports such interaction) an opportunity to receive the Corresponding
Source of your version by providing access to the Corresponding Source
from a network server at no charge, through some standard or customary
means of facilitating copying of software. This Corresponding Source
shall include the Corresponding Source for any work covered by version 3
of the GNU General Public License that is incorporated pursuant to the
following paragraph.
Notwithstanding any other provision of this License, you have
permission to link or combine any covered work with a work licensed
under version 3 of the GNU General Public License into a single
combined work, and to convey the resulting work. The terms of this
License will continue to apply to the part which is the covered work,
but the work with which it is combined will remain governed by version
3 of the GNU General Public License.
14. Revised Versions of this License.
The Free Software Foundation may publish revised and/or new versions of
the GNU Affero General Public License from time to time. Such new versions
will be similar in spirit to the present version, but may differ in detail to
address new problems or concerns.
Each version is given a distinguishing version number. If the
Program specifies that a certain numbered version of the GNU Affero General
Public License "or any later version" applies to it, you have the
option of following the terms and conditions either of that numbered
version or of any later version published by the Free Software
Foundation. If the Program does not specify a version number of the
GNU Affero General Public License, you may choose any version ever published
by the Free Software Foundation.
If the Program specifies that a proxy can decide which future
versions of the GNU Affero General Public License can be used, that proxy's
public statement of acceptance of a version permanently authorizes you
to choose that version for the Program.
Later license versions may give you additional or different
permissions. However, no additional obligations are imposed on any
author or copyright holder as a result of your choosing to follow a
later version.
15. Disclaimer of Warranty.
THERE IS NO WARRANTY FOR THE PROGRAM, TO THE EXTENT PERMITTED BY
APPLICABLE LAW. EXCEPT WHEN OTHERWISE STATED IN WRITING THE COPYRIGHT
HOLDERS AND/OR OTHER PARTIES PROVIDE THE PROGRAM "AS IS" WITHOUT WARRANTY
OF ANY KIND, EITHER EXPRESSED OR IMPLIED, INCLUDING, BUT NOT LIMITED TO,
THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR
PURPOSE. THE ENTIRE RISK AS TO THE QUALITY AND PERFORMANCE OF THE PROGRAM
IS WITH YOU. SHOULD THE PROGRAM PROVE DEFECTIVE, YOU ASSUME THE COST OF
ALL NECESSARY SERVICING, REPAIR OR CORRECTION.
16. Limitation of Liability.
IN NO EVENT UNLESS REQUIRED BY APPLICABLE LAW OR AGREED TO IN WRITING
WILL ANY COPYRIGHT HOLDER, OR ANY OTHER PARTY WHO MODIFIES AND/OR CONVEYS
THE PROGRAM AS PERMITTED ABOVE, BE LIABLE TO YOU FOR DAMAGES, INCLUDING ANY
GENERAL, SPECIAL, INCIDENTAL OR CONSEQUENTIAL DAMAGES ARISING OUT OF THE
USE OR INABILITY TO USE THE PROGRAM (INCLUDING BUT NOT LIMITED TO LOSS OF
DATA OR DATA BEING RENDERED INACCURATE OR LOSSES SUSTAINED BY YOU OR THIRD
PARTIES OR A FAILURE OF THE PROGRAM TO OPERATE WITH ANY OTHER PROGRAMS),
EVEN IF SUCH HOLDER OR OTHER PARTY HAS BEEN ADVISED OF THE POSSIBILITY OF
SUCH DAMAGES.
17. Interpretation of Sections 15 and 16.
If the disclaimer of warranty and limitation of liability provided
above cannot be given local legal effect according to their terms,
reviewing courts shall apply local law that most closely approximates
an absolute waiver of all civil liability in connection with the
Program, unless a warranty or assumption of liability accompanies a
copy of the Program in return for a fee.
END OF TERMS AND CONDITIONS
How to Apply These Terms to Your New Programs
If you develop a new program, and you want it to be of the greatest
possible use to the public, the best way to achieve this is to make it
free software which everyone can redistribute and change under these terms.
To do so, attach the following notices to the program. It is safest
to attach them to the start of each source file to most effectively
state the exclusion of warranty; and each file should have at least
the "copyright" line and a pointer to where the full notice is found.
<one line to give the program's name and a brief idea of what it does.>
Copyright (C) 2024 Nicolas Sauret
This program is free software: you can redistribute it and/or modify
it under the terms of the GNU Affero General Public License as published
by the Free Software Foundation, either version 3 of the License, or
(at your option) any later version.
This program is distributed in the hope that it will be useful,
but WITHOUT ANY WARRANTY; without even the implied warranty of
MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
GNU Affero General Public License for more details.
You should have received a copy of the GNU Affero General Public License
along with this program. If not, see <https://www.gnu.org/licenses/>.
Also add information on how to contact you by electronic and paper mail.
If your software can interact with users remotely through a computer
network, you should also make sure that it provides a way for users to
get its source. For example, if your program is a web application, its
interface could display a "Source" link that leads users to an archive
of the code. There are many ways you could offer source, and different
solutions will be better for different programs; see section 13 for the
specific requirements.
You should also get your employer (if you work as a programmer) or school,
if any, to sign a "copyright disclaimer" for the program, if necessary.
For more information on this, and how to apply and follow the GNU AGPL, see
<https://www.gnu.org/licenses/>. | book, generator, pandoc, publishing | [
"Development Status :: 5 - Production/Stable",
"Intended Audience :: Education",
"Intended Audience :: Information Technology",
"Intended Audience :: Science/Research",
"License :: OSI Approved :: GNU Affero General Public License v3",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.8",
"Programming Language :: Python :: 3.9",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Topic :: Documentation",
"Topic :: Software Development :: Build Tools",
"Topic :: Text Processing :: Markup :: Markdown"
] | [] | null | null | >=3.8 | [] | [] | [] | [
"dataclass-wizard",
"httpx",
"jinja2",
"minicli",
"mistune",
"progressist",
"pypandoc",
"python-slugify",
"pyyaml",
"selectolax",
"tomli>=1.1.0; python_version < \"3.11\"",
"unidecode",
"black; extra == \"dev\"",
"cogapp; extra == \"dev\"",
"hatch; extra == \"dev\"",
"isort; extra == \"dev\"",
"pytest; extra == \"dev\"",
"ruff; extra == \"dev\""
] | [] | [] | [] | [
"Homepage, https://pressoir.org",
"Source, https://gitlab.huma-num.fr/ecrinum/pressoir",
"Documentation, https://pressoir.org",
"Issues, https://gitlab.huma-num.fr/ecrinum/pressoir/-/issues",
"Changelog, https://gitlab.huma-num.fr/ecrinum/pressoir/-/blob/main/CHANGELOG.md?#changelog"
] | Hatch/1.16.2 cpython/3.14.2 HTTPX/0.28.1 | 2026-02-20T02:32:01.986423 | pressoir-4.9.4-py3-none-any.whl | 7,869,874 | df/36/8717b465f19aaeaa6523f9bb594cc72364a0349c90d9adb217eab3368b7a/pressoir-4.9.4-py3-none-any.whl | py3 | bdist_wheel | null | false | ee24a607749cbd61da5a49faa4a1fe11 | fdcc0270f2eed66a2955af38d3d9239b18cfc616f8f97b94ca47cba6f36a7416 | df368717b465f19aaeaa6523f9bb594cc72364a0349c90d9adb217eab3368b7a | null | [
"LICENSE"
] | 225 |
2.1 | odoo-addon-account-analytic-required | 18.0.1.0.0.6 | Account Analytic Required | =========================
Account Analytic Required
=========================
..
!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
!! This file is generated by oca-gen-addon-readme !!
!! changes will be overwritten. !!
!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
!! source digest: sha256:15b8056031d33069c1fd1e5bbce202272406cfd960c124b610320aaadc0377d1
!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
.. |badge1| image:: https://img.shields.io/badge/maturity-Beta-yellow.png
:target: https://odoo-community.org/page/development-status
:alt: Beta
.. |badge2| image:: https://img.shields.io/badge/licence-AGPL--3-blue.png
:target: http://www.gnu.org/licenses/agpl-3.0-standalone.html
:alt: License: AGPL-3
.. |badge3| image:: https://img.shields.io/badge/github-OCA%2Faccount--analytic-lightgray.png?logo=github
:target: https://github.com/OCA/account-analytic/tree/18.0/account_analytic_required
:alt: OCA/account-analytic
.. |badge4| image:: https://img.shields.io/badge/weblate-Translate%20me-F47D42.png
:target: https://translation.odoo-community.org/projects/account-analytic-18-0/account-analytic-18-0-account_analytic_required
:alt: Translate me on Weblate
.. |badge5| image:: https://img.shields.io/badge/runboat-Try%20me-875A7B.png
:target: https://runboat.odoo-community.org/builds?repo=OCA/account-analytic&target_branch=18.0
:alt: Try me on Runboat
|badge1| |badge2| |badge3| |badge4| |badge5|
This module adds an *analytic policy* option on accounts. You have the
choice between 4 policies: *always*, *never*, *posted moves* and empty
(*optional*).
**Table of contents**
.. contents::
:local:
Configuration
=============
Example:
If you want to have an analytic account on all your *expenses*, set the
policy to *always* for accounts of type *expense*. If you then try to
save a journal item with an account of type *expense* and no analytic
account, you will get an error message.
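The four policies boil down to one validation question per journal item. A standalone Python sketch of that rule (an illustration only, not the module's actual implementation — the function name and state values are hypothetical):

```python
def analytic_required(policy: str, move_state: str) -> bool:
    """Return True when an analytic account must be set on the line.

    Mirrors the four policies described above:
    - "always":  an analytic account is always mandatory
    - "posted":  mandatory only once the journal entry is posted
    - "never" / "optional": no requirement (the real module additionally
      forbids an analytic account under "never")
    """
    if policy == "always":
        return True
    if policy == "posted":
        return move_state == "posted"
    return False

# A draft expense line passes under the "posted moves" policy,
# but the same line fails validation once the entry is posted:
assert analytic_required("posted", "draft") is False
assert analytic_required("posted", "posted") is True
```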
Usage
=====
Bug Tracker
===========
Bugs are tracked on `GitHub Issues <https://github.com/OCA/account-analytic/issues>`_.
In case of trouble, please check there if your issue has already been reported.
If you spotted it first, help us smash it by providing detailed and welcome
`feedback <https://github.com/OCA/account-analytic/issues/new?body=module:%20account_analytic_required%0Aversion:%2018.0%0A%0A**Steps%20to%20reproduce**%0A-%20...%0A%0A**Current%20behavior**%0A%0A**Expected%20behavior**>`_.
Do not contact contributors directly about support or help with technical issues.
Credits
=======
Authors
-------
* Akretion
Contributors
------------
- Alexis de Lattre <alexis.delattre@akretion.com>
- Stéphane Bidoul
- Stefan Rijnhart
- Laetitia Gangloff
- Luc De Meyer, Noviat <info@noviat.com>
- Yannick Vaucher <yannick.vaucher@camptocamp.com>
- Akim Juillerat <akim.juillerat@camptocamp.com>
- Raf Ven <raf.ven@dynapps.be>
- Iván Todorovich <ivan.todorovich@druidoo.io>
- `Trobz <https://trobz.com>`__:
- Nguyễn Minh Chiến <chien@trobz.com>
- Jairo Llopis (`Moduon <https://www.moduon.team/>`__)
Maintainers
-----------
This module is maintained by the OCA.
.. image:: https://odoo-community.org/logo.png
:alt: Odoo Community Association
:target: https://odoo-community.org
OCA, or the Odoo Community Association, is a nonprofit organization whose
mission is to support the collaborative development of Odoo features and
promote its widespread use.
This module is part of the `OCA/account-analytic <https://github.com/OCA/account-analytic/tree/18.0/account_analytic_required>`_ project on GitHub.
You are welcome to contribute. To learn how please visit https://odoo-community.org/page/Contribute.
| text/x-rst | Akretion, Odoo Community Association (OCA) | support@odoo-community.org | null | null | AGPL-3 | null | [
"Programming Language :: Python",
"Framework :: Odoo",
"Framework :: Odoo :: 18.0",
"License :: OSI Approved :: GNU Affero General Public License v3"
] | [] | https://github.com/OCA/account-analytic | null | >=3.10 | [] | [] | [] | [
"odoo==18.0.*"
] | [] | [] | [] | [] | twine/6.2.0 CPython/3.12.3 | 2026-02-20T02:31:48.107074 | odoo_addon_account_analytic_required-18.0.1.0.0.6-py3-none-any.whl | 46,753 | e1/d7/2b8cca92fd737fd19d4dffb4b274ae6372e4b5006124c0b2ca7e1ecfea11/odoo_addon_account_analytic_required-18.0.1.0.0.6-py3-none-any.whl | py3 | bdist_wheel | null | false | 9a1e9963ea6e9884226535a1771734c5 | 774134b79861295563a0f0257f173d09d670f6340030fae74496a8f802549f30 | e1d72b8cca92fd737fd19d4dffb4b274ae6372e4b5006124c0b2ca7e1ecfea11 | null | [] | 103 |
2.4 | allianceauth | 4.13.0 | An auth system for EVE Online to help in-game organizations | # Alliance Auth
[](https://pypi.org/project/allianceauth/)
[](https://pypi.org/project/allianceauth/)
[](https://pypi.org/project/allianceauth/)
[](https://pypi.org/project/allianceauth/)
[](https://gitlab.com/allianceauth/allianceauth/commits/master)
[](https://allianceauth.readthedocs.io/?badge=latest)
[](https://gitlab.com/allianceauth/allianceauth/commits/master)
[](https://discord.gg/fjnHAmk)
A flexible authentication platform for EVE Online to help in-game organizations manage access to applications and services. AA provides both a stable core and a robust framework for community development and custom applications.
## Content
- [Overview](#overview)
- [Documentation](https://allianceauth.rtfd.io)
- [Support](#support)
- [Release Notes](https://gitlab.com/allianceauth/allianceauth/-/releases)
- [Developer Team](#development-team)
- [Contributing](#contributing)
## Overview
Alliance Auth (AA) is a platform that helps EVE Online organizations efficiently manage access to applications and services.
Main features:
- Automatically grants or revokes user access to external services (e.g.: Discord, Mumble) based on the user's current membership in [a variety of EVE Online affiliations](https://allianceauth.readthedocs.io/en/latest/features/core/states/) and [groups](https://allianceauth.readthedocs.io/en/latest/features/core/groups/)
- Provides a central web site where users can directly access web apps (e.g. SRP requests, Fleet Schedule) and manage their access to external services and groups.
- Includes a set of connectors (called ["Services"](https://allianceauth.readthedocs.io/en/latest/features/services/)) for integrating access management with many popular external applications / services like Discord, Mumble, Teamspeak 3, SMF and others
- Includes a set of web [Apps](https://allianceauth.readthedocs.io/en/latest/features/apps/) which add many useful functions, e.g.: fleet schedule, timer board, SRP request management, fleet activity tracker
- Can be easily extended with additional services and apps. Many are provided by the community and can be found here: [Community Creations](https://gitlab.com/allianceauth/community-creations)
- English :flag_gb:, Chinese :flag_cn:, German :flag_de:, Spanish :flag_es:, Korean :flag_kr:, Russian :flag_ru:, Italian :flag_it:, French :flag_fr:, Japanese :flag_jp: and Ukrainian :flag_ua: Localization
For further details about AA - including an installation guide and a full list of included services and plugin apps - please see the [official documentation](https://allianceauth.rtfd.io).
## Screenshot
Here is an example of the Alliance Auth web site with a mixture of Services, Apps and Community Creations enabled:
### Flatly Theme

### Darkly Theme

## Support
[Get help on Discord](https://discord.gg/fjnHAmk) or submit an [issue](https://gitlab.com/allianceauth/allianceauth/issues).
## Development Team
### Active Developers
- [Aaron Kable](https://gitlab.com/aaronkable/)
- [Ariel Rin](https://gitlab.com/soratidus999/)
- [Col Crunch](https://gitlab.com/colcrunch/)
- [Rounon Dax](https://gitlab.com/ppfeufer)
- [snipereagle1](https://gitlab.com/mckernanin)
### Former Developers
- [Adarnof](https://gitlab.com/adarnof/)
- [Basraah](https://gitlab.com/basraah/)
- [Erik Kalkoken](https://gitlab.com/ErikKalkoken/)
### Beta Testers / Bug Fixers
- [ghoti](https://gitlab.com/ChainsawMcGinny/)
- [kaezon](https://github.com/kaezon/)
- [mmolitor87](https://gitlab.com/mmolitor87/)
- [orbitroom](https://github.com/orbitroom/)
- [TargetZ3R0](https://github.com/TargetZ3R0)
- [tehfiend](https://github.com/tehfiend/)
Special thanks to [Nikdoof](https://github.com/nikdoof/), as his [auth](https://github.com/nikdoof/test-auth) was the foundation for the original work on this project.
## Contributing
Alliance Auth is maintained and developed by the community and we welcome every contribution!
To see what needs to be worked on please review our issue list or chat with our active developers on Discord.
Also, please make sure you have signed the [License Agreement](https://developers.eveonline.com/license-agreement) by logging in at [https://developers.eveonline.com](https://developers.eveonline.com) before submitting any pull requests.
In addition to the core AA system we also very much welcome contributions to our growing list of 3rd party services and plugin apps. Please see [AA Community Creations](https://gitlab.com/allianceauth/community-creations) for details.
| text/markdown | null | Alliance Auth <adarnof@gmail.com> | null | null | null | allianceauth, eveonline | [
"Environment :: Web Environment",
"Framework :: Celery",
"Framework :: Django",
"Framework :: Django :: 4.2",
"Intended Audience :: Developers",
"License :: OSI Approved :: GNU General Public License v2 (GPLv2)",
"Operating System :: POSIX :: Linux",
"Programming Language :: Python",
"Programming Language :: Python :: 3 :: Only",
"Programming Language :: Python :: 3.8",
"Programming Language :: Python :: 3.9",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Topic :: Internet :: WWW/HTTP",
"Topic :: Internet :: WWW/HTTP :: Dynamic Content"
] | [] | null | null | <3.13,>=3.8 | [] | [] | [] | [
"bcrypt<5",
"beautifulsoup4",
"celery<6,>=5.2",
"celery-once>=3.0.1",
"django<5,>=4.2",
"django-bootstrap-form",
"django-bootstrap5>=23.3",
"django-celery-beat>=2.3",
"django-esi>=7.0.1",
"django-redis>=5.2",
"django-registration<3.4,>=3.3",
"django-solo",
"django-sortedm2m",
"django-sri",
"dnspython",
"mysqlclient>=2.1",
"openfire-restapi",
"packaging>=21",
"passlib",
"pydiscourse",
"python-slugify>=1.2",
"pyyaml",
"redis>=4",
"requests>=2.9.1",
"requests-oauthlib",
"semantic-version",
"slixmpp<1.9",
"ua-parser",
"user-agents",
"myst-parser; extra == \"docs\"",
"sphinx; extra == \"docs\"",
"sphinx-copybutton; extra == \"docs\"",
"sphinx-rtd-theme<3,>=2; extra == \"docs\"",
"sphinx-tabs; extra == \"docs\"",
"sphinxcontrib-django; extra == \"docs\"",
"coverage>=4.3.1; extra == \"test\"",
"django-webtest; extra == \"test\"",
"requests-mock>=1.2; extra == \"test\""
] | [] | [] | [] | [
"Documentation, https://allianceauth.readthedocs.io/",
"Homepage, https://gitlab.com/allianceauth/allianceauth",
"Source, https://gitlab.com/allianceauth/allianceauth",
"Tracker, https://gitlab.com/allianceauth/allianceauth/-/issues"
] | twine/6.2.0 CPython/3.11.14 | 2026-02-20T02:30:59.256454 | allianceauth-4.13.0.tar.gz | 1,582,874 | d9/bb/95c1f4b17451e0d7ed8d440d733561268a70c516bd7be550f678cdd11f03/allianceauth-4.13.0.tar.gz | source | sdist | null | false | 5996de624fe4f8ee65f91624f076ea3f | 84c2f1eba03734d5ee2cb8eaf8efe0600d0c56de9bdb48cfec79a7e1f3d2c1cd | d9bb95c1f4b17451e0d7ed8d440d733561268a70c516bd7be550f678cdd11f03 | null | [
"LICENSE"
] | 676 |
2.4 | context-linter | 2.0.1 | Context CLI — LLM Readiness Linter for token efficiency and RAG readiness | # Context CLI
[](https://github.com/hanselhansel/context-cli/actions/workflows/test.yml)
[](https://www.python.org/downloads/)
[](LICENSE)
[](https://pypi.org/project/context-cli/)
**Lint any URL for LLM readiness. Get a 0-100 score for token efficiency, RAG readiness, and LLM extraction quality.**
## What is Context CLI?
Context CLI is an LLM Readiness Linter that checks how well a URL is structured for AI consumption. As LLM-powered search engines, RAG pipelines, and AI agents become primary consumers of web content, your pages need to be optimized for token efficiency, structured data extraction, and machine-readable formatting.
Context CLI analyzes your content across four pillars and returns a structured score from 0 to 100.
## Features
- **Robots.txt AI bot access** -- checks 13 AI crawlers (GPTBot, ClaudeBot, DeepSeek-AI, Grok, and more)
- **llms.txt & llms-full.txt** -- detects both standard and extended LLM instruction files
- **Schema.org JSON-LD** -- extracts and evaluates structured data with high-value type weighting (Product, Article, FAQ, HowTo)
- **Content density** -- measures useful content vs. boilerplate with readability scoring, heading structure analysis, and answer-first detection
- **Batch mode** -- lint multiple URLs from a file with `--file` and configurable `--concurrency`
- **Custom bot list** -- override default bots with `--bots` for targeted checks
- **Verbose output** -- detailed per-pillar breakdown with scoring explanations and recommendations
- **Rich CLI output** -- formatted tables and scores via Rich
- **JSON / CSV / Markdown output** -- machine-readable results for pipelines
- **MCP server** -- expose the linter as a tool for AI agents via FastMCP
- **Context Compiler** -- LLM-powered `llms.txt` and `schema.jsonld` generation, with batch mode for multiple URLs
- **CI/CD integration** -- `--fail-under` threshold, `--fail-on-blocked-bots`, per-pillar thresholds, baseline regression detection, GitHub Step Summary
- **GitHub Action** -- composite action for CI pipelines with baseline support
- **Citation Radar** -- query AI models to see what they cite and recommend, with brand tracking and domain classification
- **Share-of-Recommendation Benchmark** -- track how often AI models mention and recommend your brand vs competitors, with LLM-as-judge analysis
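As an example of what the Schema.org pillar works on: JSON-LD blocks embedded in a page's HTML can be extracted with the standard library alone. A minimal sketch (the high-value type weighting is the linter's own scoring logic and is not reproduced here):

```python
import json
from html.parser import HTMLParser

class JSONLDExtractor(HTMLParser):
    """Collect and decode <script type="application/ld+json"> blocks."""

    def __init__(self):
        super().__init__()
        self._in_jsonld = False
        self.blocks = []

    def handle_starttag(self, tag, attrs):
        if tag == "script" and dict(attrs).get("type") == "application/ld+json":
            self._in_jsonld = True

    def handle_endtag(self, tag):
        if tag == "script":
            self._in_jsonld = False

    def handle_data(self, data):
        if self._in_jsonld and data.strip():
            self.blocks.append(json.loads(data))

html_page = """<html><head>
<script type="application/ld+json">
{"@context": "https://schema.org", "@type": "Article", "headline": "Hello"}
</script>
</head><body><p>No structured data here.</p></body></html>"""

parser = JSONLDExtractor()
parser.feed(html_page)
types = [block.get("@type") for block in parser.blocks]  # ["Article"]
```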
## Installation
```bash
pip install context-linter
```
Context CLI uses a headless browser for content extraction. After installing, run:
```bash
crawl4ai-setup
```
### Development install
```bash
git clone https://github.com/your-org/context-cli.git
cd context-cli
pip install -e ".[dev]"
crawl4ai-setup
```
## Quick Start
```bash
context-cli lint example.com
```
This runs a full lint and prints a Rich-formatted report with your LLM readiness score.
## CLI Usage
### Single Page Lint
Lint only the specified URL (skip multi-page discovery):
```bash
context-cli lint example.com --single
```
### Multi-Page Site Lint (default)
Discover pages via sitemap/spider and lint up to 10 pages:
```bash
context-cli lint example.com
```
### Limit Pages
```bash
context-cli lint example.com --max-pages 5
```
### JSON Output
Get structured JSON for CI pipelines, dashboards, or scripting:
```bash
context-cli lint example.com --json
```
### CSV / Markdown Output
```bash
context-cli lint example.com --format csv
context-cli lint example.com --format markdown
```
### Verbose Mode
Show detailed per-pillar breakdown with scoring explanations:
```bash
context-cli lint example.com --single --verbose
```
### Timeout
Set the HTTP timeout (default: 15 seconds):
```bash
context-cli lint example.com --timeout 30
```
### Custom Bot List
Override the default 13 bots with a custom list:
```bash
context-cli lint example.com --bots "GPTBot,ClaudeBot,PerplexityBot"
```
### Batch Mode
Lint multiple URLs from a file (one URL per line, `.txt` or `.csv`):
```bash
context-cli lint --file urls.txt
context-cli lint --file urls.txt --concurrency 5
context-cli lint --file urls.txt --format csv
```
### CI Mode
Fail the build if the score is below a threshold:
```bash
context-cli lint example.com --fail-under 60
```
Fail if any AI bot is blocked:
```bash
context-cli lint example.com --fail-on-blocked-bots
```
#### Per-Pillar Thresholds
Gate CI on individual pillar scores:
```bash
context-cli lint example.com --robots-min 20 --content-min 30 --overall-min 60
```
Available: `--robots-min`, `--schema-min`, `--content-min`, `--llms-min`, `--overall-min`.
#### Baseline Regression Detection
Save a baseline and detect score regressions in future lints:
```bash
# Save current scores as baseline
context-cli lint example.com --single --save-baseline .context-baseline.json
# Compare against baseline (exit 1 if any pillar drops > 5 points)
context-cli lint example.com --single --baseline .context-baseline.json
# Custom regression threshold
context-cli lint example.com --single --baseline .context-baseline.json --regression-threshold 10
```
Exit codes: 0 = pass, 1 = score below threshold or regression detected, 2 = bots blocked.
When running in GitHub Actions, a markdown summary is automatically written to `$GITHUB_STEP_SUMMARY`.
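If you consume the `--json` output in your own scripts rather than relying on the shell exit codes, the same gating convention is easy to mirror. A minimal sketch; the JSON field names used here (`score`, `pillars`) are illustrative assumptions, not the documented schema:

```python
import json

# Hypothetical report shape: the real --json schema may differ,
# so treat the field names ("score", "pillars") as illustrative.
report = json.loads(
    '{"url": "https://example.com", "score": 72,'
    ' "pillars": {"robots": 25, "schema": 15, "content": 28, "llms": 4}}'
)

FAIL_UNDER = 60  # plays the role of the --fail-under flag

def gate(report: dict, threshold: int) -> int:
    """Mirror the CLI's exit-code convention: 0 = pass, 1 = below threshold."""
    return 0 if report["score"] >= threshold else 1

exit_code = gate(report, FAIL_UNDER)
print(f"score={report['score']} exit={exit_code}")  # score=72 exit=0
```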
### Quiet Mode
Suppress output; exit with code 0 if the score is >= 50, 1 otherwise:
```bash
context-cli lint example.com --quiet
```
Use `--fail-under` with `--quiet` to override the default threshold:
```bash
context-cli lint example.com --quiet --fail-under 70
```
### Start MCP server
```bash
context-cli mcp
```
Launches a FastMCP stdio server exposing the linter as a tool for AI agents.
## MCP Integration
To use Context CLI as a tool in Claude Desktop, add this to your Claude Desktop config (`claude_desktop_config.json`):
```json
{
"mcpServers": {
"context-cli": {
"command": "context-cli",
"args": ["mcp"]
}
}
}
```
Once configured, Claude can call the `audit_url` tool directly to check any URL's LLM readiness.
## Context Compiler (Generate)
Generate `llms.txt` and `schema.jsonld` files from any URL using LLM analysis:
```bash
pip install context-linter[generate]
context-cli generate example.com
```
This crawls the URL, sends the content to an LLM, and writes optimized files to `./context-output/`.
### Batch Generate
Generate assets for multiple URLs from a file:
```bash
context-cli generate-batch urls.txt
context-cli generate-batch urls.txt --concurrency 5 --profile ecommerce
context-cli generate-batch urls.txt --json
```
Each URL's output goes to a subdirectory under `--output-dir`.
### BYOK (Bring Your Own Key)
The generate command auto-detects your LLM provider from environment variables:
| Priority | Env Variable | Model Used |
|----------|-------------|------------|
| 1 | `OPENAI_API_KEY` | gpt-4o-mini |
| 2 | `ANTHROPIC_API_KEY` | claude-3-haiku-20240307 |
| 3 | Ollama running locally | ollama/llama3.2 |
Override with `--model`:
```bash
context-cli generate example.com --model gpt-4o
```
### Industry Profiles
Tailor the output with `--profile`:
```bash
context-cli generate example.com --profile saas
context-cli generate example.com --profile ecommerce
```
Available: `generic`, `cpg`, `saas`, `ecommerce`, `blog`.
## Citation Radar
Query AI models to see what they cite and recommend for any search prompt:
```bash
pip install context-linter[generate]
context-cli radar "best project management tools" --brand Asana --brand Monday --model gpt-4o-mini
```
Options:
- `--brand/-b`: Brand name to track (repeatable)
- `--model/-m`: LLM model to query (repeatable, default: gpt-4o-mini)
- `--runs/-r`: Runs per model for statistical significance
- `--json`: Output as JSON
## Share-of-Recommendation Benchmark
Track how AI models mention and recommend your brand across multiple prompts:
```bash
pip install context-linter[generate]
context-cli benchmark prompts.txt -b "YourBrand" -c "Competitor1" -c "Competitor2"
```
Options:
- `prompts.txt`: CSV (with `prompt,category,intent` columns) or plain text (one prompt per line)
- `--brand/-b`: Target brand to track (required)
- `--competitor/-c`: Competitor brand (repeatable)
- `--model/-m`: LLM model to query (repeatable, default: gpt-4o-mini)
- `--runs/-r`: Runs per model per prompt (default: 3)
- `--yes/-y`: Skip cost confirmation prompt
- `--json`: Output as JSON
## GitHub Action
Use Context CLI in your CI pipeline:
```yaml
- name: Run Context Lint
uses: hanselhansel/context-cli@main
with:
url: 'https://your-site.com'
fail-under: '60'
```
With baseline regression detection:
```yaml
- name: Run Context Lint
uses: hanselhansel/context-cli@main
with:
url: 'https://your-site.com'
baseline-file: '.context-baseline.json'
save-baseline: '.context-baseline.json'
regression-threshold: '5'
```
The action sets up Python, installs context-cli, and runs the lint. Outputs `score` and `report-json` for downstream steps. See [docs/ci-integration.md](docs/ci-integration.md) for full documentation.
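The `score` output can feed later workflow steps. A sketch, assuming the lint step is given `id: lint` (the `id` and the echo step are illustrative; only the output name `score` comes from the action's documented outputs):

```yaml
- name: Run Context Lint
  id: lint
  uses: hanselhansel/context-cli@main
  with:
    url: 'https://your-site.com'
    fail-under: '60'

- name: Report the score
  run: echo "LLM readiness score: ${{ steps.lint.outputs.score }}"
```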
## Score Breakdown
Context CLI returns a score from 0 to 100, composed of four pillars:
| Pillar | Max Points | What it measures |
|---|---|---|
| Content density | 40 | Quality and depth of extractable text content |
| Robots.txt AI bot access | 25 | Whether AI crawlers are allowed in robots.txt |
| Schema.org JSON-LD | 25 | Structured data markup (Product, Article, FAQ, etc.) |
| llms.txt presence | 10 | Whether a /llms.txt file exists for LLM guidance |
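The pillar maxima in the table add up to the 0-100 total. A minimal sketch of that composition (the real scorer is internal to context-cli; only the caps below come from the table):

```python
# Per-pillar maxima, taken from the score breakdown table.
MAX_POINTS = {"content": 40, "robots": 25, "schema": 25, "llms": 10}

def total_score(pillars: dict) -> int:
    """Clamp each pillar to its cap, then sum into the 0-100 overall score."""
    return sum(min(pillars.get(name, 0), cap) for name, cap in MAX_POINTS.items())

print(total_score({"content": 40, "robots": 25, "schema": 25, "llms": 10}))  # 100
print(total_score({"content": 28, "robots": 25}))  # 53
```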
### Scoring rationale (2026-02-18)
The weights reflect how AI search engines (ChatGPT, Perplexity, Claude) actually consume web content:
- **Content density (40 pts)** is weighted highest because it's what LLMs extract and cite when answering questions. Rich, well-structured content with headings and lists gives AI better material to work with.
- **Robots.txt (25 pts)** is the gatekeeper -- if a bot is blocked, it literally cannot crawl. It's critical but largely binary (either you're blocking or you're not).
- **Schema.org (25 pts)** provides structured "cheat sheets" that help AI understand entities. High-value types (Product, Article, FAQ, HowTo, Recipe) receive bonus weighting. Valuable but not required for citation.
- **llms.txt (10 pts)** is an emerging standard. Both `/llms.txt` and `/llms-full.txt` are checked. No major AI search engine heavily weights it yet, but it signals forward-thinking AI readiness.
## AI Bots Checked
Context CLI checks access rules for 13 AI crawlers:
- GPTBot
- ChatGPT-User
- Google-Extended
- ClaudeBot
- PerplexityBot
- Amazonbot
- OAI-SearchBot
- DeepSeek-AI
- Grok
- Meta-ExternalAgent
- cohere-ai
- AI2Bot
- ByteSpider
## Development
```bash
# Install with dev dependencies
pip install -e ".[dev]"
# Run tests
pytest
# Lint
ruff check src/ tests/
```
## License
MIT
| text/markdown | Hansel Wahjono | null | null | null | null | llm, ai, linter, token-waste, rag, robots-txt, schema-org, llms-txt | [
"Development Status :: 5 - Production/Stable",
"Environment :: Console",
"Intended Audience :: Developers",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Topic :: Internet :: WWW/HTTP",
"Topic :: Software Development :: Quality Assurance",
"Typing :: Typed"
] | [] | null | null | >=3.10 | [] | [] | [] | [
"typer>=0.9",
"rich>=13.0",
"httpx>=0.27",
"beautifulsoup4>=4.12",
"pydantic>=2.0",
"crawl4ai>=0.4",
"fastmcp>=2.0",
"pyyaml>=6.0",
"litellm>=1.40; extra == \"generate\"",
"pytest>=8.0; extra == \"dev\"",
"pytest-asyncio>=0.23; extra == \"dev\"",
"pytest-cov>=5.0; extra == \"dev\"",
"mypy>=1.10; extra == \"dev\"",
"ruff>=0.4; extra == \"dev\"",
"litellm>=1.40; extra == \"dev\"",
"types-PyYAML>=6.0; extra == \"dev\""
] | [] | [] | [] | [
"Homepage, https://github.com/hanselhansel/context-cli",
"Repository, https://github.com/hanselhansel/context-cli",
"Issues, https://github.com/hanselhansel/context-cli/issues"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-20T02:30:45.107251 | context_linter-2.0.1.tar.gz | 235,363 | 3c/25/1a2a23d82e194afdfdead529beb5d5b63113cfbc983c17ee2f6f30e56e09/context_linter-2.0.1.tar.gz | source | sdist | null | false | 229e9f0a0d1e58ca05275e2333723439 | dbf088e72efe44acfe7d3498c94332f8e7e22455d23bb0bfe0dab7d7dff86e19 | 3c251a2a23d82e194afdfdead529beb5d5b63113cfbc983c17ee2f6f30e56e09 | MIT | [
"LICENSE"
] | 250 |
2.3 | programgarden-finance | 1.3.2 | Open-source brokerage data library maintained by the Program Garden team | # Programgarden Finance
Programgarden Finance is an open-source library that helps investors, even those who do not know Python, run personalized automated system trading in the AI era. It simplifies the LS Securities OpenAPI so that overseas stock and overseas futures/options trading can be automated easily.
It is designed to be easy for non-developer investors to use; background tasks such as concurrency and securities data updates are managed by Program Garden, so investors can simply use the library.
- Docs (quick start for non-developers): https://programgarden.gitbook.io/docs/invest/non_dev_quick_guide
- Docs (Finance guide): https://programgarden.gitbook.io/docs/develop/finance_guide
- Docs (developer architecture guide): https://programgarden.gitbook.io/docs/develop/structure
- YouTube: https://www.youtube.com/@programgarden
- Real-time community open chat (KakaoTalk): https://open.kakao.com/o/gKVObqUh
## Key Features
- **Simple LS Securities API integration**: Simplifies the complex LS Securities OpenAPI spec so you can get started with a few lines of code
- **Overseas stocks & futures/options support**: Integrated support for real-time data queries, orders, balance management, and more for overseas stock and futures/options markets
- **Real-time WebSocket streaming**: Easily subscribe to real-time quote, execution, and order-book data over WebSocket
- **Asynchronous processing**: All API requests are available in both async and sync forms, providing high performance and concurrency
- **Automatic token management**: OAuth token issuance and refresh are handled automatically, minimizing authentication overhead
- **Type safety**: Pydantic-based type validation for IDE-friendly, safe code
- **Rich examples**: Runnable examples for each overseas stock and futures/options feature in the `example/` folder
## Installation
```bash
# If published on PyPI
pip install programgarden-finance
# With Poetry (development environment)
poetry add programgarden-finance
```
Requirements: Python 3.12+
## Quick Start
### 1. Issue a Token
To use the LS Securities API, you must first issue an OAuth token.
```python
import asyncio
from programgarden_finance import LS
from programgarden_finance.ls.oauth.generate_token import GenerateToken
from programgarden_finance.ls.oauth.generate_token.token.blocks import TokenInBlock
async def get_token():
response = GenerateToken().token(
TokenInBlock(
appkey="YOUR_APPKEY",
appsecretkey="YOUR_APPSECRET",
)
)
result = await response.req_async()
print(f"Access Token: {result.block.access_token}")
asyncio.run(get_token())
```
### 2. Get the Current Price of an Overseas Stock
```python
import asyncio
import os
from programgarden_finance import LS, g3101
import logging
from dotenv import load_dotenv
load_dotenv()
async def get_stock_price():
ls = LS()
# Log in
if not ls.login(
appkey=os.getenv("APPKEY"),
appsecretkey=os.getenv("APPSECRET")
):
logging.error("Login failed")
return
# Get the TSLA current price
result = ls.overseas_stock().market().현재가조회(
g3101.G3101InBlock(
delaygb="R",
keysymbol="82TSLA",
exchcd="82",
symbol="TSLA"
)
)
response = await result.req_async()
logging.debug(f"TSLA current price: {response}")
asyncio.run(get_stock_price())
```
### 3. Subscribe to Real-Time Quotes (WebSocket)
```python
import asyncio
import os
from programgarden_finance import LS
import logging
from dotenv import load_dotenv
load_dotenv()
async def subscribe_realtime():
ls = LS()
if not ls.login(
appkey=os.getenv("APPKEY"),
appsecretkey=os.getenv("APPSECRET")
):
logging.error("Login failed")
return
# Real-time data callback
def on_message(resp):
print(f"Real-time data: {resp}")
# Connect the WebSocket
client = ls.overseas_stock().real()
await client.connect()
# Subscribe to GSC (overseas stock real-time quotes)
gsc = client.GSC()
gsc.add_gsc_symbols(symbols=["81SOXL", "82TSLA"])
gsc.on_gsc_message(on_message)
asyncio.run(subscribe_realtime())
```
### 4. Query the Overseas Futures/Options Master
```python
import asyncio
import os
from programgarden_finance import LS, o3101
import logging
from dotenv import load_dotenv
load_dotenv()
async def get_futures_master():
ls = LS()
if not ls.login(
appkey=os.getenv("APPKEY_FUTURE"),
appsecretkey=os.getenv("APPSECRET_FUTURE")
):
logging.error("Login failed")
return
# Query the overseas futures master
result = ls.overseas_futureoption().market().해외선물마스터조회(
body=o3101.O3101InBlock(gubun="1")
)
response = await result.req_async()
print(response)
asyncio.run(get_futures_master())
```
## Main Module Structure
### The LS Class
The main entry-point class for the LS Securities API.
```python
from programgarden_finance import LS
ls = LS()
ls.login(appkey="...", appsecretkey="...")
# Overseas stock API
stock = ls.overseas_stock()
stock.market() # Market data
stock.chart() # Chart data
stock.accno() # Account info
stock.order() # Order handling
stock.real() # Real-time data
# Overseas futures/options API
futures = ls.overseas_futureoption()
futures.market() # Market data
futures.chart() # Chart data
futures.accno() # Account info
futures.order() # Order handling
futures.real() # Real-time data
```
### Main TR Codes Provided
#### Overseas Stocks
- **Market data**: `g3101` (current price), `g3102` (overseas indices), `g3104` (exchange master), `g3106` (exchange rates), `g3190` (news)
- **Charts**: `g3103` (daily), `g3202` (minute bars), `g3203` (tick bars), `g3204` (after-hours)
- **Accounts**: `COSAQ00102` (deposits), `COSAQ01400` (overseas balance), `COSOQ00201` (execution history), `COSOQ02701` (open orders)
- **Orders**: `COSAT00301` (modify order), `COSAT00311` (new order), `COSMT00300` (cancel order), `COSAT00400` (reserved order)
- **Real-time**: `GSC` (executions), `GSH` (order book), `AS0`~`AS4` (various real-time quotes)
#### Overseas Futures/Options
- **Market data**: `o3101` (futures master), `o3104`~`o3107` (exchange/currency/price unit/settlement rate), `o3116` (options master), `o3121`~`o3128` (various market data), `o3136`, `o3137` (additional market data)
- **Charts**: `o3103` (daily), `o3108` (minute bars), `o3117` (tick bars), `o3139` (after-hours)
- **Accounts**: `CIDBQ01400` (deposits), `CIDBQ01500` (balance), `CIDBQ01800` (execution history), `CIDBQ02400` (open orders), `CIDBQ03000` (daily P&L), `CIDBQ05300` (liquidatable quantity), `CIDEQ00800` (margin deposit)
- **Orders**: `CIDBT00100` (new), `CIDBT00900` (modify), `CIDBT01000` (cancel)
- **Real-time**: `OVC` (executions), `OVH` (order book), `TC1`~`TC3`, `WOC`, `WOH` (various real-time data)
## Example Code
The `example/` folder contains a variety of runnable examples.
### Example Folder Structure
```
example/
├── token/ # OAuth token issuance examples
│ └── run_token.py
├── overseas_stock/ # Overseas stock examples
│ ├── run_g3101.py # Current price query
│ ├── run_g3102.py # Overseas index query
│ ├── run_COSAT00311.py # New order
│ ├── real_GSC.py # Real-time execution subscription
│ ├── real_GSH.py # Real-time order-book subscription
│ └── ...
└── overseas_futureoption/ # Overseas futures/options examples
├── run_o3101.py # Futures master query
├── run_CIDBT00100.py # New order
├── real_OVC.py # Real-time execution subscription
├── real_OVH.py # Real-time order-book subscription
└── ...
```
### Running the Examples
1. Create a `.env` file:
Get API keys from LS Securities and set them in a `.env` file as follows.
```bash
APPKEY=your_stock_appkey
APPSECRET=your_stock_appsecret
APPKEY_FUTURE=your_futures_appkey
APPSECRET_FUTURE=your_futures_appsecret
```
2. Run an example:
```bash
# Overseas stock current price query
python example/overseas_stock/run_g3101.py
# Overseas futures master query
python example/overseas_futureoption/run_o3101.py
# Real-time quote subscription
python example/overseas_stock/real_GSC.py
```
## API Reference
The package root re-exports the main symbols:
```python
from programgarden_finance import (
# Main class
LS,
# Modules
oauth,
TokenManager,
overseas_stock,
overseas_futureoption,
# Overseas stock TRs
g3101, g3102, g3103, g3104, g3106, g3190, # Market/chart
g3202, g3203, g3204, # Chart
COSAQ00102, COSAQ01400, # Account queries
COSOQ00201, COSOQ02701, # Executions/open orders
COSAT00301, COSAT00311, # Orders
COSMT00300, COSAT00400, # Cancel/reserved
GSC, GSH, AS0, AS1, AS2, AS3, AS4, # Real-time
# Overseas futures/options TRs
o3101, o3104, o3105, o3106, o3107, # Market data
o3116, o3121, o3123, o3125, o3126, # Market data
o3127, o3128, o3136, o3137, # Market data
o3103, o3108, o3117, o3139, # Chart
CIDBQ01400, CIDBQ01500, CIDBQ01800, # Accounts
CIDBQ02400, CIDBQ03000, CIDBQ05300, # Accounts
CIDEQ00800, # Accounts
CIDBT00100, CIDBT00900, CIDBT01000, # Orders
OVC, OVH, TC1, TC2, TC3, WOC, WOH, # Real-time
# Exception handling
exceptions,
)
```
| text/markdown | 프로그램동산 | coding@programgarden.com | null | null | null | null | [
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: 3.13"
] | [] | null | null | >=3.12 | [] | [] | [] | [
"pydantic<3.0.0,>=2.11.7",
"requests<3.0.0,>=2.32.4",
"aiohttp<4.0.0,>=3.10.0",
"redis<7.0.0,>=6.4.0",
"websockets<16.0.0,>=15.0.1",
"python-dotenv<2.0.0,>=1.1.1",
"programgarden-core<2.0.0,>=1.4.0"
] | [] | [] | [] | [] | poetry/2.1.2 CPython/3.13.3 Darwin/25.2.0 | 2026-02-20T02:29:17.500909 | programgarden_finance-1.3.2.tar.gz | 133,480 | 6d/0e/20eac88497aef6709bc52299fbf5073aa2dc31dd7dc6837a33cb999c861d/programgarden_finance-1.3.2.tar.gz | source | sdist | null | false | bafa443c16f08c4c25e8296c251c220e | e99f1a395acd0475b63c36c9f58ef7653aa8145e53ff554a22c5dc57cce7a9af | 6d0e20eac88497aef6709bc52299fbf5073aa2dc31dd7dc6837a33cb999c861d | null | [] | 243 |
2.4 | maestro-reporter | 0.5.0 | Customized tool to run Maestro tests, parse Maestro report file, and push reports to Lark. | ## py-maestro-reporter
  
`py-maestro-reporter` is a lightweight tool that helps you:
- Run Maestro tests seamlessly
- Parse Maestro JUnit reports
- Send summarized test results to a Lark or Slack group
It can be used either as a CLI tool or as a Python package in your own test pipelines.
### Prerequisites
- Python 3.10 or above
- [Maestro framework](https://docs.maestro.dev/getting-started/installing-maestro) installed on your system (version 2.0.0 or above)
- Device/emulator with the app under test installed
- Lark webhook URL
### Installation
You can install the package either from PyPI or from source. To install from PyPI:
```bash
pip install maestro-reporter
```
Or, using uv:
```bash
uv pip install maestro-reporter
```
Or, if you'd prefer to install from source, you'll need to clone this repository and install it in editable mode:
```bash
pip install -e .
```
### Usage
**Using the CLI**
This package exposes a CLI via the `reporter` module.
Make sure you have Maestro installed, the example flows you want to test, a physical device or emulator, and a webhook URL from Lark. Once you have all of these, run:
```bash
python -m reporter \
-c "maestro test examples/facebook-sign-up-flow.yaml --format junit --output tests/report.xml" \
-r "tests/report.xml" \
-w "https://webhook.url.com"
```
**Parsing an existing report**
If you only want to parse an existing report and send the result, without running the tests, use the `--no-run` flag:
```bash
python -m reporter \
--no-run \
-r "tests/report.xml" \
-w "https://webhook.url.com"
```
> You can also override the webhook URL by setting the `LARK_URL` or `SLACK_URL` environment variable in your `.env` file, depending on the provider you choose.
**Using the reporter package**
Alternatively, if you'd prefer to run the tests programmatically with the `reporter` package instead of the CLI, you can follow the example below (this will test the Facebook sign-up flow):
```python
import os
from dotenv import load_dotenv
from reporter import parse_xml_report, send_report_to_lark, run_maestro_command
load_dotenv()
command = "maestro test examples/facebook-sign-up-flow.yaml --format junit --output tests/report.xml"
run_maestro_command(command=command, cwd="tests")
parsed_result = parse_xml_report(file_path="report.xml")
report = send_report_to_lark(
summary=parsed_result,
title="Maestro Reporter Test",
color_template="Green",
webhook_url=os.getenv("LARK_URL"),
)
```
> The `color_template` and `title` parameters are optional; if you don't provide them, the default values will be used
Each successful stage (executing the Maestro command -> parsing the report -> sending the report to Lark) is logged via the stream handler, for example:
```
27-11-2025 : 10:51:46 : main : [WARNING] : No color template provided, using default color template or you can set it with `--color` flag
27-11-2025 : 10:51:46 : main : [WARNING] : No title provided, using default title or you can set it with `--title` flag
27-11-2025 : 10:51:46 : main : [INFO] : --no-run flag is set, skipping Maestro tests
27-11-2025 : 10:51:46 : main : [INFO] : Parsing Maestro report file: tests/report.xml
27-11-2025 : 10:51:46 : main : [INFO] : Sending Maestro report to Lark...
27-11-2025 : 10:51:46 : reporter.sender : [INFO] : Lark message sent successfully
27-11-2025 : 10:51:46 : main : [INFO] : Maestro report sent successfully
```
Once the report is sent successfully, you should be able to see the interactive card message in your Lark group like the following image

Otherwise, if you want to use Slack as a reporting platform, the card message will be displayed as follows

### CLI arguments
List of available CLI arguments that you can use with this package:
| arguments | description |
| --- | --- |
| `-h` / `--help` | show this help message and exit |
| `-c` / `--command` | Maestro command to run |
| `-r` / `--report` | Path to Maestro report, by default it's `report.xml` but you can configure it by yourself |
| `-w` / `--webhook` | Specify a webhook URL to send the report to Lark |
| `-n` / `--no-run` | No need to run Maestro tests, just parse the report and send the result to Lark |
| `-t` / `--title` | Set a custom title for the interactive card Lark message |
| `-ct` / `--color` | Set a custom color template for the interactive card Lark message |
| `-p` / `--provider` | Specify the reporting platform (`lark` or `slack`). Default is `lark` |
**Notes**
- At the moment, this package only supports parsing Maestro reports in the `junit` format
- In addition, the webhook integration currently supports **Lark** and **Slack**
- The interactive card message is built using the `msg_actioncard` message type for Lark and `Block Kit` for Slack
**Further references**
- [Generate report with Maestro](https://docs.maestro.dev/cli/test-suites-and-reports)
- [Setup Lark Webhook URL in Lark group](https://open.larksuite.com/document/client-docs/bot-v3/add-custom-bot)
| text/markdown | Ryan Febriansyah | null | null | null | null | null | [
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"License :: OSI Approved :: MIT License",
"Operating System :: OS Independent",
"Topic :: Software Development :: Testing"
] | [] | null | null | >=3.10 | [] | [] | [] | [
"requests",
"lxml"
] | [] | [] | [] | [] | twine/6.2.0 CPython/3.10.12 | 2026-02-20T02:27:17.373225 | maestro_reporter-0.5.0.tar.gz | 12,208 | a9/23/0957e2b7ac356b716cce680a5a01c9ae568e2137d2c9eee9653d890de144/maestro_reporter-0.5.0.tar.gz | source | sdist | null | false | b3c7f18652db35c7b46a7fd46cc1a222 | 2235bc0ec608821674a12162cb3b888c0c1b5a617b5891c255834985b88d5c2d | a9230957e2b7ac356b716cce680a5a01c9ae568e2137d2c9eee9653d890de144 | null | [
"LICENSE"
] | 236 |
2.4 | alter-sdk | 0.2.2 | Alter Vault Python SDK - OAuth token management with policy enforcement | # Alter SDK for Python
Official Python SDK for [Alter Vault](https://alterai.dev) - OAuth token management with policy enforcement.
## Features
- **Zero Token Exposure**: Tokens are never exposed to developers - injected automatically
- **Single Entry Point**: One method (`vault.request()`) for all provider APIs
- **Type-Safe Enums**: `Provider` and `HttpMethod` enums with autocomplete
- **URL Templating**: Path parameter substitution with automatic URL encoding
- **Automatic Audit Logging**: All API calls logged with request metadata (HTTP method and URL) for full audit trail
- **Real-time Policy Enforcement**: Every token request checked against current policies
- **Automatic Token Refresh**: Tokens refreshed transparently by the backend
- **Actor Tracking**: First-class support for AI agent and MCP server observability
## Installation
```bash
pip install alter-sdk
```
## Quick Start
```python
import asyncio
from alter_sdk import AlterVault, Provider, HttpMethod
async def main():
vault = AlterVault(api_key="alter_key_...")
# Make API request - token injected automatically, never exposed
response = await vault.request(
Provider.GOOGLE,
HttpMethod.GET,
"https://www.googleapis.com/calendar/v3/calendars/primary/events",
user={"user_id": "alice"},
query_params={"maxResults": "10"},
)
events = response.json()
print(events)
await vault.close()
asyncio.run(main())
```
## Usage
### Simple GET Request
```python
response = await vault.request(
Provider.GOOGLE,
HttpMethod.GET,
"https://www.googleapis.com/calendar/v3/calendars/primary/events",
user={"user_id": "alice"},
)
```
### POST with JSON Body
```python
response = await vault.request(
Provider.SALESFORCE,
HttpMethod.POST,
"https://api.example.com/v1/items",
user={"user_id": "alice"},
json={"name": "New Item", "price": 99.99},
reason="Creating new item",
)
```
### URL Path Templating
```python
response = await vault.request(
Provider.SALESFORCE,
HttpMethod.PUT,
"https://api.example.com/v1/items/{item_id}",
user={"user_id": "alice"},
path_params={"item_id": "123"},
json={"price": 89.99},
)
```
### Query Parameters and Extra Headers
```python
response = await vault.request(
"notion",
HttpMethod.POST,
"https://api.notion.com/v1/databases/{db_id}/query",
user={"user_id": "alice"},
path_params={"db_id": "abc123"},
extra_headers={"Notion-Version": "2022-06-28"},
json={"page_size": 10},
)
```
### Context Manager
```python
async with AlterVault(api_key="alter_key_...") as vault:
response = await vault.request(
Provider.GOOGLE,
HttpMethod.GET,
"https://www.googleapis.com/calendar/v3/calendars/primary/events",
user={"user_id": "alice"},
)
# Automatically closed
```
> **Note:** After `close()` is called, subsequent `request()` calls raise `AlterSDKError`. `close()` is idempotent — calling it multiple times is safe.
### Connection Management
#### List Connections
Retrieve OAuth connections for your app, optionally filtered by provider:
```python
from alter_sdk import AlterVault
async with AlterVault(api_key="alter_key_...") as vault:
# List all connections
result = await vault.list_connections()
for conn in result.connections:
print(f"{conn.provider_id}: {conn.account_display_name} ({conn.status})")
# Filter by provider with pagination
result = await vault.list_connections(provider_id="google", limit=10, offset=0)
print(f"Total: {result.total}, Has more: {result.has_more}")
```
| Parameter | Type | Default | Description |
|-----------|------|---------|-------------|
| `provider_id` | `str \| None` | `None` | Filter by provider (e.g., `"google"`) |
| `limit` | `int` | `100` | Max connections to return |
| `offset` | `int` | `0` | Pagination offset |
Returns `ConnectionListResult` with: `connections` (`list[ConnectionInfo]`), `total`, `limit`, `offset`, `has_more`.
#### Create Connect Session
Generate a session URL for end-users to authenticate with OAuth providers:
```python
session = await vault.create_connect_session(
end_user={"id": "alice"},
allowed_providers=["google", "github"],
return_url="https://myapp.com/callback",
)
print(f"Connect URL: {session.connect_url}")
print(f"Expires in: {session.expires_in}s")
```
| Parameter | Type | Default | Description |
|-----------|------|---------|-------------|
| `end_user` | `dict` | *required* | Must contain `"id"` key |
| `attributes` | `dict \| None` | `None` | Connection attributes |
| `allowed_providers` | `list[str] \| None` | `None` | Restrict to specific providers |
| `return_url` | `str \| None` | `None` | Redirect URL after OAuth flow |
Returns `ConnectSession` with: `session_token`, `connect_url`, `expires_in`, `expires_at`.
### AI Agent Actor Tracking
```python
vault = AlterVault(
api_key="alter_key_...",
actor_type="ai_agent",
actor_identifier="email-assistant-v2",
actor_name="Email Assistant",
actor_version="2.0.0",
framework="langgraph",
)
response = await vault.request(
Provider.GOOGLE,
HttpMethod.GET,
"https://www.googleapis.com/calendar/v3/calendars/primary/events",
user={"user_id": "alice"},
run_id="550e8400-e29b-41d4-a716-446655440000",
thread_id="thread-xyz",
tool_call_id="call_abc_123",
)
```
### Multi-Agent Deployments
Each agent must create its own `AlterVault` instance with a unique actor identity. Do not share a single instance across agents.
```python
# Each agent gets its own vault instance
email_agent = AlterVault(
api_key="alter_key_...",
actor_type="ai_agent",
actor_identifier="email-assistant-v2",
actor_name="Email Assistant",
)
calendar_agent = AlterVault(
api_key="alter_key_...",
actor_type="ai_agent",
actor_identifier="calendar-agent-v1",
actor_name="Calendar Agent",
)
# Audit logs and policies are tracked per agent
user = {"user_id": "alice"}
await email_agent.request(
Provider.GOOGLE, HttpMethod.GET,
"https://gmail.googleapis.com/gmail/v1/users/me/messages",
user=user,
)
await calendar_agent.request(
Provider.GOOGLE, HttpMethod.GET,
"https://www.googleapis.com/calendar/v3/calendars/primary/events",
user=user,
)
# Clean up each instance
await email_agent.close()
await calendar_agent.close()
```
## Configuration
```python
vault = AlterVault(
api_key="alter_key_...", # Required: Alter Vault API key
base_url="https://api.alter.com", # Optional: Custom API URL
timeout=30.0, # Optional: HTTP timeout in seconds
# Actor tracking (optional)
actor_type="ai_agent", # "ai_agent" or "mcp_server"
actor_identifier="my-agent", # Unique identifier
actor_name="My Agent", # Human-readable name
actor_version="1.0.0", # Version string
framework="langgraph", # AI framework
client_type="cursor", # MCP client type
)
```
## Error Handling
```python
from alter_sdk import AlterVault, Provider, HttpMethod
from alter_sdk.exceptions import (
AlterSDKError, # Base exception for all SDK errors (including validation: api_key, actor_type, URL scheme, path_params)
PolicyViolationError, # Policy denied access (403)
ConnectionNotFoundError, # No OAuth connection found (404)
TokenExpiredError, # Token refresh failed (400/502)
TokenRetrievalError, # Other backend errors
NetworkError, # Backend or provider unreachable
TimeoutError, # Request timed out (subclass of NetworkError)
ProviderAPIError, # Provider API returned error (4xx/5xx)
)
try:
response = await vault.request(
Provider.GOOGLE,
HttpMethod.GET,
"https://www.googleapis.com/calendar/v3/calendars/primary/events",
user={"user_id": "alice"},
)
except PolicyViolationError as e:
print(f"Policy denied: {e.message}")
print(f"Policy error: {e.policy_error}") # Detailed policy failure reason
except ConnectionNotFoundError:
print("No OAuth connection - user needs to authenticate")
except TokenExpiredError as e:
print(f"Token expired for connection: {e.connection_id}")
except TimeoutError as e:
print(f"Request timed out — safe to retry: {e.message}")
except NetworkError as e:
print(f"Network issue: {e.message}")
except ProviderAPIError as e:
print(f"Provider error {e.status_code}: {e.response_body}")
```
## Supported Providers
```python
from alter_sdk import Provider
Provider.GOOGLE # "google"
Provider.GITHUB # "github"
Provider.SLACK # "slack"
Provider.MICROSOFT # "microsoft"
Provider.SALESFORCE # "salesforce"
Provider.SENTRY # "sentry"
# Strings also work for any provider
await vault.request("notion", HttpMethod.GET, url, user=user)
```
## Requirements
- Python 3.11+
- httpx[http2]
- pydantic
## License
MIT License
| text/markdown | Alter Team | founders@alterai.dev | null | null | null | oauth, tokens, security, policy, vault | [
"Development Status :: 3 - Alpha",
"Intended Audience :: Developers",
"License :: OSI Approved :: MIT License",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: 3.13",
"Programming Language :: Python :: 3.14"
] | [] | null | null | <4.0,>=3.11 | [] | [] | [] | [
"httpx[http2]<0.26.0,>=0.25.0",
"pydantic<3.0.0,>=2.5.0"
] | [] | [] | [] | [
"Homepage, https://alterai.dev"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-20T02:24:24.450891 | alter_sdk-0.2.2.tar.gz | 20,028 | af/8e/e269de6107e6bfd622c67dc959ffaa2e060586895a3f6525e7ed5b402def/alter_sdk-0.2.2.tar.gz | source | sdist | null | false | 8d3809e034b98d3025160f30853d7ee8 | 0784bc92e461b778101bb3440158c314b4c12ba7ef55cbee6586ba499b1f267c | af8ee269de6107e6bfd622c67dc959ffaa2e060586895a3f6525e7ed5b402def | null | [] | 242 |
2.4 | git-stream | 3.0.0rc0 | Git Stream Implementation | # git-stream Python Module
A command line tool to implement streams in Git.
| text/markdown | null | "Jeffery G. Smith" <web@pobox.com> | null | null | null | git, programming, utilities | [
"Development Status :: 5 - Production/Stable",
"Operating System :: OS Independent",
"Programming Language :: Python :: 3.12",
"Intended Audience :: Developers",
"Topic :: Software Development",
"Natural Language :: English"
] | [] | null | null | ~=3.12 | [] | [] | [] | [
"BatCave",
"bumpver; extra == \"dev\"",
"vjer; extra == \"dev\"",
"types-PyYAML; extra == \"test\""
] | [] | [] | [] | [
"homepage, https://github.com/arisilon/git-stream/",
"documentation, https://git-stream.readthedocs.io",
"repository, https://github.com/arisilon/git-stream/",
"changelog, https://github.com/arisilon/git-stream/blob/main/CHANGELOG.md"
] | twine/6.2.0 CPython/3.12.12 | 2026-02-20T02:22:00.929740 | git_stream-3.0.0rc0.tar.gz | 9,756 | 7b/f2/869c537a1e09d09ea44dd408d2f73a8661a66962280c0f0fa795f8f3aa5d/git_stream-3.0.0rc0.tar.gz | source | sdist | null | false | 9ac1fb93b32bf7ccc8665582d0708380 | 8616da603d58bdd3197ba3c1b431deadd34d79e4e4d49f059f208d79f1fe1588 | 7bf2869c537a1e09d09ea44dd408d2f73a8661a66962280c0f0fa795f8f3aa5d | null | [
"LICENSE"
] | 223 |
2.4 | virallc | 1.0.20 | ViralLC: A Python package for consistent assignment of viral lineages | # ViralLC: A package for rapid assignment of viral lineage nomenclature
## Getting the ViralLC source code
```
git clone https://github.com/ChrispinChaguza/virallc.git
```
## Setup ViralLC software on a local machine
### Installing ViralLC using Pip
The easiest way to install the latest version of ViralLC is using Pip
```
pip install virallc
```
Here is a command to install a specific version of ViralLC using Pip
```
pip install virallc=={VERSION HERE}
```
After installing virallc using Pip, remember to install these dependencies (mafft, blast, and nextclade) manually using Conda!
### Installing ViralLC using Conda
Installation using Conda (upcoming!).
```
conda install -c conda-forge virallc
```
```
conda install -c bioconda virallc
```
### Installing ViralLC directly from Github
First, download ViralLC from GitHub and then manually set up the environment for the package
```
git clone https://github.com/ChrispinChaguza/virallc.git
cd virallc
```
Second, manually install the required package dependencies (mafft, nextclade, biopython, blast, pandas, networkx, and gitdir) using Conda and Pip. If the installation fails, create a new Conda environment and then repeat the installation.
```
conda install -c bioconda mafft=7.526 -y
conda install -c conda-forge python=3.14.2 -y
conda install -c bioconda nextclade=3.18.1 -y
conda install -c conda-forge biopython=1.86 -y
conda install -c bioconda blast=2.16.0 -y
conda install -c conda-forge pandas=3.0.0 -y
conda install -c conda-forge networkx=3.6.1 -y
```
```
pip install gitdir==1.2.7
pip install build
```
Alternatively, the packages can be installed in a new Conda environment as shown below.
```
conda env create -n virallc -f environment.yml
```
Follow the instructions below to build and install ViralLC
```
python -m build
pip install --force-reinstall dist/{INSERT THE COMPILED SOFTWARE VERSION}
```
## Basic usage
### Setting up database
When running virallc for the first time, it will automatically download and set up the virallc database in the home directory (~/viraldb/). However, if new lineages or sublineages have been assigned, you can redownload and update your local database as follows:
Here is a command to set up the database after installing the tool
```
virallc database --setupdb
```
```
virallc database -s
```
Here is a command to update a database (e.g. if it is corrupt or missing from the local machine)
```
virallc database --updatedb
```
```
virallc database -u
```
Here is a command to check the version of the databases installed locally
```
virallc database --version
```
```
virallc database -v
```
### Assigning lineages
The simplest way to run virallc is to provide one or more input FASTA files, each containing one or more rotavirus A sequences.
```
virallc assign --in input.fasta --out report.tsv --db dbname
```
```
virallc assign -i input.fasta -o report.tsv -d dbname
```
To assign lineages to sequences across multiple FASTA files (each individual sequence represents a single strain):
```
virallc assign --in input1.fasta input2.fasta input3.fasta --out report.tsv --db dbname
```
```
virallc assign -i input1.fasta input2.fasta input3.fasta -o report.tsv -d dbname
```
To include the sequence in the output:
```
virallc assign --in input1.fasta input2.fasta input3.fasta --out report.tsv --db dbname --seq
```
```
virallc assign -i input1.fasta input2.fasta input3.fasta -o report.tsv -d dbname -s
```
To overwrite the output files:
```
virallc assign --in input1.fasta input2.fasta input3.fasta --out report.tsv --db dbname --seq --force
```
```
virallc assign -i input1.fasta input2.fasta input3.fasta -o report.tsv -d dbname -s -f
```
To assign lineages faster using more CPUs/threads:
```
virallc assign --in input1.fasta input2.fasta input3.fasta --out report.tsv --db dbname --seq --threads 10
```
```
virallc assign -i input1.fasta input2.fasta input3.fasta -o report.tsv -d dbname -s -t 10
```
To suppress the results on the terminal:
```
virallc assign --in input1.fasta input2.fasta input3.fasta --out report.tsv --db dbname --seq --threads 10 --quiet
```
```
virallc assign -i input1.fasta input2.fasta input3.fasta -o report.tsv -d dbname -s -t 10 -q
```
### Example dataset (Rotavirus A)
Here is a command to assign rotavirus A lineages to the samples in the *example* directory
```
virallc assign -i example.fna -o report.tsv -d RotavirusA
```
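The resulting report is a tab-separated file that can be inspected with standard tools. A minimal sketch for loading it in Python using only the standard library (the column names below are hypothetical placeholders; check the header of your own report.tsv):

```python
import csv
import io

# A tiny mock report standing in for virallc's report.tsv output
# (hypothetical columns: sequence id and assigned lineage).
mock_report = "sequence\tlineage\nsampleA\tG1-L1\nsampleB\tG2-L3\n"

# csv.DictReader with a tab delimiter parses TSV rows into dicts
rows = list(csv.DictReader(io.StringIO(mock_report), delimiter="\t"))

# Group sequences by assigned lineage
by_lineage = {}
for row in rows:
    by_lineage.setdefault(row["lineage"], []).append(row["sequence"])
```

For a real report, replace the mock string with `open("report.tsv")`.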
### Software version
Run the command below to show the software version
```
virallc version
```
## Cite
To be updated!
| text/markdown | null | Chrispin Chaguza <chrispin.chaguza@gmail.com> | null | Chrispin Chaguza <chrispin.chaguza@gmail.com> | MIT License
Copyright (c) 2024 Chrispin Chaguza
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.
| Virus, Lineage, Strain typing, Lineage classification, Genotype, Sublineage, Genome | [
"Programming Language :: Python :: 3",
"License :: OSI Approved :: MIT License",
"Operating System :: OS Independent",
"Development Status :: 4 - Beta",
"Environment :: Console",
"Topic :: Scientific/Engineering :: Bio-Informatics",
"Intended Audience :: Science/Research"
] | [] | null | null | >=3.8 | [] | [] | [] | [
"setuptools",
"toml",
"biopython",
"gitdir",
"pandas"
] | [] | [] | [] | [
"Homepage, https://github.com/ChrispinChaguza/virallc/",
"Repository, https://github.com/ChrispinChaguza/virallc.git",
"Issues, https://github.com/ChrispinChaguza/virallc/issues",
"Documentation, https://github.com/ChrispinChaguza/virallc/",
"Website, https://github.com/ChrispinChaguza/virallc"
] | twine/6.2.0 CPython/3.14.2 | 2026-02-20T02:20:55.340695 | virallc-1.0.20.tar.gz | 16,913 | a8/d0/ec7e07dc2bb52dc6c4ea60bb09c8c4af920bc1ca6b32022aed6cbb825af7/virallc-1.0.20.tar.gz | source | sdist | null | false | e5948cd791cff9b7693fe0e2a6fc2db4 | f46f841868d861610cf6a5563d4945426f013aff1a44ac03701ac2f43819d5d5 | a8d0ec7e07dc2bb52dc6c4ea60bb09c8c4af920bc1ca6b32022aed6cbb825af7 | null | [
"LICENSE"
] | 255 |
2.4 | pyvista-xarray | 0.2.0.dev0 | xarray DataArray accessors for PyVista | # PyVista xarray
[](https://pypi.org/project/pyvista-xarray/)
[](https://codecov.io/gh/pyvista/pyvista-xarray)
[](https://mybinder.org/v2/gh/pyvista/pyvista-xarray/HEAD)
xarray DataArray accessors for PyVista to visualize datasets in 3D
## Usage
Import `pvxarray` to register the `.pyvista` accessor on xarray `DataArray`
and `Dataset` objects. This gives you access to methods for creating 3D meshes,
plotting, and lazy evaluation of large datasets.
Try on MyBinder: https://mybinder.org/v2/gh/pyvista/pyvista-xarray/HEAD
```py
import pvxarray
import xarray as xr
ds = xr.tutorial.load_dataset("air_temperature")
da = ds.air[dict(time=0)]
# Plot in 3D
da.pyvista.plot(x="lon", y="lat", show_edges=True, cpos='xy')
# Or grab the mesh object for use with PyVista
mesh = da.pyvista.mesh(x="lon", y="lat")
```
<!-- notebook=0, off_screen=1, screenshot='imgs/air_temperature.png' -->

### Coordinate Auto-Detection
If your data follows [CF conventions](https://cfconventions.org/), you can
omit the `x`, `y`, and `z` arguments entirely. `pyvista-xarray` uses
[cf-xarray](https://cf-xarray.readthedocs.io/) to detect coordinate axes
from attributes like `axis`, `standard_name`, and `units`, as well as
variable name heuristics:
```py
import pvxarray
import xarray as xr
ds = xr.tutorial.load_dataset("air_temperature")
da = ds.air[dict(time=0)]
# Coordinates are auto-detected from CF attributes
mesh = da.pyvista.mesh()
# Inspect the detected axes
da.pyvista.axes
# {'X': 'lon', 'Y': 'lat'}
```
### Lazy Evaluation with Algorithm Sources
For large or dask-backed datasets, create a VTK algorithm source that lazily
evaluates data on demand. This avoids loading the entire dataset into memory
and supports time stepping, resolution control, and spatial slicing:
```py
import pvxarray
import pyvista as pv
import xarray as xr
ds = xr.tutorial.load_dataset("air_temperature")
da = ds.air
# Create a lazy algorithm source with time stepping
source = da.pyvista.algorithm(x="lon", y="lat", time="time")
# Add directly to a plotter
pl = pv.Plotter()
pl.add_mesh(source)
pl.show(cpos="xy")
# Step through time
source.time_index = 10
```
Use the `resolution` parameter to downsample large datasets for interactive
rendering:
```py
source = da.pyvista.algorithm(x="lon", y="lat", time="time", resolution=0.5)
```
Algorithm sources also expose human-readable time labels from datetime
coordinates:
```py
source.time_label # e.g. '2013-01-01 00:00:00'
```
### Dataset Accessor
The `.pyvista` accessor also works on `Dataset` objects, letting you load
multiple data variables onto a single mesh. This is useful when a dataset
contains several fields (e.g. wind components, temperature, pressure) that
share the same grid:
```py
import pvxarray
import xarray as xr
ds = xr.tutorial.load_dataset("eraint_uvz")
# Discover which variables share the same dimensions
ds.pyvista.available_arrays()
# ['z', 'u', 'v']
# Create a mesh with all three variables as point data
mesh = ds.pyvista.mesh(
    arrays=["u", "v", "z"],
    x="longitude",
    y="latitude",
)
# Or create a lazy algorithm source for large datasets
source = ds.pyvista.algorithm(
    arrays=["u", "v"],
    x="longitude",
    y="latitude",
    z="level",
    time="month",
)
```
### Computed Fields
Derive new arrays on the fly with `vtkArrayCalculator` expressions. This is
useful for computing quantities like wind speed from vector components without
modifying the underlying dataset:
```py
import pvxarray
import xarray as xr
ds = xr.tutorial.load_dataset("eraint_uvz")
source = ds.pyvista.algorithm(
    arrays=["u", "v"],
    x="longitude",
    y="latitude",
    z="level",
    time="month",
)
# Add a derived wind speed field
source.computed = {
    "_use_scalars": ["u", "v"],
    "wind_speed": "sqrt(u*u + v*v)",
}
```
Expressions follow `vtkArrayCalculator` syntax and can reference any array
loaded onto the mesh.
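For reference, the `sqrt(u*u + v*v)` expression is the elementwise vector magnitude. With plain numpy, assuming `u` and `v` arrays of matching shape, the equivalent computation is:

```python
import numpy as np

# Elementwise wind speed from vector components, matching the
# vtkArrayCalculator expression "sqrt(u*u + v*v)".
u = np.array([3.0, 0.0, 1.0])
v = np.array([4.0, 2.0, 1.0])
wind_speed = np.sqrt(u * u + v * v)
```

The calculator version simply evaluates this per point (or cell) inside the VTK pipeline instead of in numpy.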
### Pipeline Extensibility
Inject post-processing filters into the source's evaluation chain. Each
element can be a VTK algorithm or a callable that takes and returns a PyVista
mesh:
```py
# Apply a warp filter after mesh creation
source.pipeline = [lambda mesh: mesh.warp_by_scalar(factor=0.001)]
```
Filters run in order after computed fields are evaluated and the result is
passed downstream to the plotter.
### State Serialization
Save and restore source configurations as JSON for reproducible
visualizations:
```py
# Save the current configuration
config = source.to_json()
# Later, recreate the source with the same settings
restored = PyVistaXarraySource.from_json(
    config,
    data_array=ds["u"],
    dataset=ds,
)
```
The state captures coordinate mappings, time index, resolution, array
selections, and computed field definitions.
### Reading VTK Files as xarray Datasets
Read VTK mesh files directly into xarray using the `pyvista` backend
engine. Supported formats include `.vti`, `.vtr`, `.vts`, and `.vtk`:
```py
import xarray as xr
ds = xr.open_dataset("data.vtk", engine="pyvista")
ds["data array"].pyvista.plot(x="x", y="y", z="z")
```
### Converting PyVista Meshes to xarray
Convert PyVista meshes back to xarray Datasets with `pyvista_to_xarray`.
Supported mesh types: `RectilinearGrid`, `ImageData`, and `StructuredGrid`:
```py
import pyvista as pv
from pvxarray import pyvista_to_xarray
grid = pv.RectilinearGrid([0, 1, 2], [0, 1], [0, 1])
grid["values"] = range(grid.n_points)
ds = pyvista_to_xarray(grid)
```
## Installation
```bash
pip install 'pyvista-xarray[jupyter]'
```
This includes Jupyter rendering support (via Trame), common I/O libraries
(`netcdf4`, `rioxarray`), and dask for lazy evaluation. For a minimal
install without these extras:
```bash
pip install pyvista-xarray
```
`pyvista-xarray` is also available on conda-forge:
```bash
conda install -c conda-forge pyvista-xarray
```
## Examples
The [`examples/`](https://github.com/pyvista/pyvista-xarray/tree/main/examples)
directory contains Jupyter notebooks demonstrating various use cases:
| Notebook | Description |
| ------------------------------------------------------------- | -------------------------------------------------------- |
| [introduction.ipynb](examples/introduction.ipynb) | Quick start with auto-detection, rioxarray, and 3D grids |
| [simple.ipynb](examples/simple.ipynb) | Lazy evaluation, time stepping, and algorithm sources |
| [ocean_model.ipynb](examples/ocean_model.ipynb) | Curvilinear grids with ROMS ocean model data |
| [atmospheric_levels.ipynb](examples/atmospheric_levels.ipynb) | 3D atmospheric data across pressure levels |
| [lightning.ipynb](examples/lightning.ipynb) | Point cloud visualization from scattered observations |
| [cartographic.ipynb](examples/cartographic.ipynb) | Geographic projections with GeoVista |
| [radar.ipynb](examples/radar.ipynb) | Radar data with polar coordinates via xradar |
| [sea_temps.ipynb](examples/sea_temps.ipynb) | Sea surface temperature raster data |
There are also Python scripts for interactive Trame web applications:
`examples/level_of_detail.py` and `examples/level_of_detail_geovista.py`.
### Simple RectilinearGrid
```py
import numpy as np
import pvxarray
import xarray as xr
lon = np.array([-99.83, -99.32])
lat = np.array([42.25, 42.21])
z = np.array([0, 10])
temp = 15 + 8 * np.random.randn(2, 2, 2)
ds = xr.Dataset(
    {
        "temperature": (["z", "x", "y"], temp),
    },
    coords={
        "lon": (["x"], lon),
        "lat": (["y"], lat),
        "z": (["z"], z),
    },
)
mesh = ds.temperature.pyvista.mesh(x="lon", y="lat", z="z")
mesh.plot()
```
### Raster with rioxarray
```py
import pvxarray
import rioxarray
import xarray as xr
da = rioxarray.open_rasterio("TC_NG_SFBay_US_Geo_COG.tif")
da = da.rio.reproject("EPSG:3857")
# Grab the mesh object for use with PyVista
mesh = da.pyvista.mesh(x="x", y="y", component="band")
mesh.plot(scalars="data", cpos='xy', rgb=True)
```
<!-- notebook=0, off_screen=1, screenshot='imgs/raster.png' -->

```py
import pvxarray
import rioxarray
da = rioxarray.open_rasterio("Elevation.tif")
da = da.rio.reproject("EPSG:3857")
# Grab the mesh object for use with PyVista
mesh = da.pyvista.mesh(x="x", y="y")
# Warp top and plot in 3D
mesh.warp_by_scalar().plot()
```
<!-- notebook=0, off_screen=1, screenshot='imgs/topo.png' -->

### StructuredGrid
```py
import pvxarray
import pyvista as pv
import xarray as xr
ds = xr.tutorial.open_dataset("ROMS_example.nc", chunks={"ocean_time": 1})
if ds.Vtransform == 1:
    Zo_rho = ds.hc * (ds.s_rho - ds.Cs_r) + ds.Cs_r * ds.h
    z_rho = Zo_rho + ds.zeta * (1 + Zo_rho / ds.h)
elif ds.Vtransform == 2:
    Zo_rho = (ds.hc * ds.s_rho + ds.Cs_r * ds.h) / (ds.hc + ds.h)
    z_rho = ds.zeta + (ds.zeta + ds.h) * Zo_rho
ds.coords["z_rho"] = z_rho.transpose()  # needing transpose seems to be an xarray bug
da = ds.salt[dict(ocean_time=0)]
# Make array ordering consistent
da = da.transpose("s_rho", "xi_rho", "eta_rho", transpose_coords=False)
# Grab StructuredGrid mesh
mesh = da.pyvista.mesh(x="lon_rho", y="lat_rho", z="z_rho")
# Plot in 3D
p = pv.Plotter()
p.add_mesh(mesh, lighting=False, cmap='plasma', clim=[0, 35])
p.view_vector([1, -1, 1])
p.set_scale(zscale=0.001)
p.show()
```

## Feedback
Please share your thoughts and questions on the
[Discussions](https://github.com/pyvista/pyvista-xarray/discussions) board.
If you would like to report any bugs or make feature requests, please open an
[issue](https://github.com/pyvista/pyvista-xarray/issues).
If filing a bug report, please share a scooby Report:
```py
import pvxarray
print(pvxarray.Report())
```
| text/markdown | null | The PyVista Developers <info@pyvista.org> | null | null | null | null | [
"Development Status :: 3 - Alpha",
"Operating System :: OS Independent",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: 3.13"
] | [] | null | null | >=3.10 | [] | [] | [] | [
"cf-xarray",
"pyvista>=0.37",
"scooby",
"xarray>=2022.12.0",
"geovista; extra == \"examples\"",
"open-radar-data; extra == \"examples\"",
"pyvista[jupyter]; extra == \"examples\"",
"siphon; extra == \"examples\"",
"tqdm; extra == \"examples\"",
"xradar; extra == \"examples\"",
"zarr; extra == \"examples\"",
"dask; extra == \"jupyter\"",
"ipywidgets; extra == \"jupyter\"",
"netcdf4; extra == \"jupyter\"",
"pyvista[jupyter]; extra == \"jupyter\"",
"rioxarray; extra == \"jupyter\"",
"tqdm; extra == \"jupyter\"",
"codespell; extra == \"style\"",
"pre-commit; extra == \"style\"",
"pydocstyle; extra == \"style\"",
"ruff; extra == \"style\"",
"dask; extra == \"test\"",
"netcdf4; extra == \"test\"",
"pooch; extra == \"test\"",
"pytest; extra == \"test\"",
"pytest-cov; extra == \"test\"",
"rioxarray; extra == \"test\""
] | [] | [] | [] | [
"Homepage, https://github.com/pyvista/pyvista-xarray",
"Source, https://github.com/pyvista/pyvista-xarray",
"Tracker, https://github.com/pyvista/pyvista-xarray/issues"
] | twine/6.2.0 CPython/3.12.12 | 2026-02-20T02:20:03.211618 | pyvista_xarray-0.2.0.dev0.tar.gz | 39,352 | 73/2f/d4539cea6945eebf47cff656138e3e02456fdf8bf9c182a195f3b1e469ab/pyvista_xarray-0.2.0.dev0.tar.gz | source | sdist | null | false | 58e3ddca8a1fd640e8098c5ce759277d | 647410a990263f62f7ed589c3e424629b51527879b49953f754e0d8df6f574bd | 732fd4539cea6945eebf47cff656138e3e02456fdf8bf9c182a195f3b1e469ab | Apache-2.0 | [
"LICENSE",
"NOTICE"
] | 227 |
2.4 | airbyte-agent-linear | 0.19.103 | Airbyte Linear Connector for AI platforms | # Linear
The Linear agent connector is a Python package that equips AI agents to interact with Linear through strongly typed, well-documented tools. It's ready to use directly in your Python app, in an agent framework, or exposed through an MCP.
Linear is a modern issue tracking and project management tool built for software
development teams. This connector provides access to issues, projects, and teams
for sprint planning, backlog management, and development workflow analysis.
## Example questions
The Linear connector is optimized to handle prompts like these.
- Show me the open issues assigned to my team this week
- List out all projects I'm currently involved in
- List all users in my Linear workspace
- Who is assigned to the most recently updated issue?
- Create a new issue titled 'Fix login bug'
- Update the priority of a recent issue to urgent
- Change the title of a recent issue to 'Updated feature request'
- Add a comment to a recent issue saying 'This is ready for review'
- Update my most recent comment to say 'Revised feedback after testing'
- Create a high priority issue about API performance
- Assign a recent issue to a teammate
- Unassign the current assignee from a recent issue
- Reassign a recent issue from one teammate to another
- Analyze the workload distribution across my development team
- What are the top priority issues in our current sprint?
- Identify the most active projects in our organization right now
- Summarize the recent issues for \{team_member\} in the last two weeks
- Compare the issue complexity across different teams
- Which projects have the most unresolved issues?
- Give me an overview of my team's current project backlog
## Unsupported questions
The Linear connector isn't currently able to handle prompts like these.
- Delete an outdated project from our workspace
- Schedule a sprint planning meeting
- Delete this issue
- Remove a comment from an issue
## Installation
```bash
uv pip install airbyte-agent-linear
```
## Usage
Connectors can run in open source or hosted mode.
### Open source
In open source mode, you provide API credentials directly to the connector.
```python
from airbyte_agent_linear import LinearConnector
from airbyte_agent_linear.models import LinearAuthConfig
connector = LinearConnector(
    auth_config=LinearAuthConfig(
        api_key="<Your Linear API key from Settings > API > Personal API keys>"
    )
)

@agent.tool_plain  # assumes you're using Pydantic AI
@LinearConnector.tool_utils
async def linear_execute(entity: str, action: str, params: dict | None = None):
    return await connector.execute(entity, action, params or {})
```
### Hosted
In hosted mode, API credentials are stored securely in Airbyte Cloud. You provide your Airbyte credentials instead.
If your Airbyte client can access multiple organizations, also set `organization_id`.
This example assumes you've already authenticated your connector with Airbyte. See [Authentication](AUTH.md) to learn more about authenticating. If you need a step-by-step guide, see the [hosted execution tutorial](https://docs.airbyte.com/ai-agents/quickstarts/tutorial-hosted).
```python
from airbyte_agent_linear import LinearConnector, AirbyteAuthConfig
connector = LinearConnector(
    auth_config=AirbyteAuthConfig(
        customer_name="<your_customer_name>",
        organization_id="<your_organization_id>",  # Optional for multi-org clients
        airbyte_client_id="<your-client-id>",
        airbyte_client_secret="<your-client-secret>"
    )
)

@agent.tool_plain  # assumes you're using Pydantic AI
@LinearConnector.tool_utils
async def linear_execute(entity: str, action: str, params: dict | None = None):
    return await connector.execute(entity, action, params or {})
```
## Full documentation
### Entities and actions
This connector supports the following entities and actions. For more details, see this connector's [full reference documentation](REFERENCE.md).
| Entity | Actions |
|--------|---------|
| Issues | [List](./REFERENCE.md#issues-list), [Get](./REFERENCE.md#issues-get), [Create](./REFERENCE.md#issues-create), [Update](./REFERENCE.md#issues-update), [Search](./REFERENCE.md#issues-search) |
| Projects | [List](./REFERENCE.md#projects-list), [Get](./REFERENCE.md#projects-get), [Search](./REFERENCE.md#projects-search) |
| Teams | [List](./REFERENCE.md#teams-list), [Get](./REFERENCE.md#teams-get), [Search](./REFERENCE.md#teams-search) |
| Users | [List](./REFERENCE.md#users-list), [Get](./REFERENCE.md#users-get), [Search](./REFERENCE.md#users-search) |
| Comments | [List](./REFERENCE.md#comments-list), [Get](./REFERENCE.md#comments-get), [Create](./REFERENCE.md#comments-create), [Update](./REFERENCE.md#comments-update), [Search](./REFERENCE.md#comments-search) |
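The `execute(entity, action, params)` calling convention maps directly onto the table above. A self-contained mock (not the real connector, which resolves these calls against the Linear API) illustrating that dispatch shape, with hypothetical sample data:

```python
# Mock connector illustrating the entity/action dispatch convention used by
# connector.execute(entity, action, params). Handlers and data are illustrative.
class MockConnector:
    def __init__(self):
        self.handlers = {
            ("issues", "list"): lambda params: [{"id": "ISS-1", "title": "Fix login bug"}],
            ("comments", "create"): lambda params: {"id": "C-1", **params},
        }

    def execute(self, entity, action, params=None):
        handler = self.handlers.get((entity, action))
        if handler is None:
            raise ValueError(f"unsupported: {entity}.{action}")
        return handler(params or {})

connector = MockConnector()
issues = connector.execute("issues", "list")
```

The real connector exposes the same `(entity, action, params)` surface through the `linear_execute` tool shown earlier; the exact parameter schema for each action is documented in [REFERENCE.md](REFERENCE.md).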
### Authentication
For all authentication options, see the connector's [authentication documentation](AUTH.md).
### Linear API docs
See the official [Linear API reference](https://linear.app/developers/graphql).
## Version information
- **Package version:** 0.19.103
- **Connector version:** 0.1.10
- **Generated with Connector SDK commit SHA:** cb4380e76ac5cbc67b9089f94522be1bbe9f8d73
- **Changelog:** [View changelog](https://github.com/airbytehq/airbyte-agent-connectors/blob/main/connectors/linear/CHANGELOG.md) | text/markdown | null | Airbyte <contact@airbyte.io> | null | null | Elastic-2.0 | agent, ai, airbyte, api, connector, data-integration, linear, llm, mcp | [
"Development Status :: 3 - Alpha",
"Intended Audience :: Developers",
"License :: Other/Proprietary License",
"Operating System :: OS Independent",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.13",
"Topic :: Software Development :: Libraries :: Application Frameworks",
"Topic :: Software Development :: Libraries :: Python Modules",
"Typing :: Typed"
] | [] | null | null | >=3.13 | [] | [] | [] | [
"httpx>=0.24.0",
"jinja2>=3.0.0",
"jsonpath-ng>=1.6.1",
"jsonref>=1.1.0",
"opentelemetry-api>=1.37.0",
"opentelemetry-sdk>=1.37.0",
"pydantic>=2.0.0",
"python-dotenv>=1.0.0",
"pyyaml>=6.0",
"segment-analytics-python>=2.2.0"
] | [] | [] | [] | [
"Homepage, https://github.com/airbytehq/airbyte-agent-connectors",
"Documentation, https://docs.airbyte.com/ai-agents/",
"Repository, https://github.com/airbytehq/airbyte-agent-connectors",
"Issues, https://github.com/airbytehq/airbyte-agent-connectors/issues"
] | twine/6.2.0 CPython/3.13.11 | 2026-02-20T02:17:48.501057 | airbyte_agent_linear-0.19.103.tar.gz | 145,507 | 25/67/17ddc3b18fdf7493b8c31780c9325dc9b650b60cd5c758c7b2b1d128c757/airbyte_agent_linear-0.19.103.tar.gz | source | sdist | null | false | 4f8854a99269563de9089fc162a4796e | da1931eb6453b24cf99f99de4caa63c54f90d5b655bf5a03103bf6d29b3447c6 | 256717ddc3b18fdf7493b8c31780c9325dc9b650b60cd5c758c7b2b1d128c757 | null | [] | 342 |
2.4 | roboflex | 0.1.39 | Roboflex Core Library: a C++20 and python library for distributed robotics and automation. | # roboflex.core
At its core, roboflex is a library to make `Node`s that create, signal, and receive `Message`s to and from other nodes, in a distributed manner.

Roboflex.core defines what a `Message` is and what a `Node` is. It provides serialization services for eigen and xtensor. It provides a small library of core message types and useful Node sub-classes. The core only supports sending messages via direct function call to other nodes; the nodes in transport/ (zmq and mqtt so far) support sending messages from one thread to another, from one process to another, and from one computer to another via multiple methods.
One node may connect to multiple downstream nodes, supporting the pub-sub pattern.
## Basic Types
### Message
A roboflex Message is defined as:
1. An 8-byte header. The first four bytes are the letters 'RFLX', and the next four bytes
are a uint32, which is the size of the message in bytes (including the header).
2. A data portion, encoded with FlexBuffers (flexbuffers.h). Please see [MESSAGEFORMAT.md](MESSAGEFORMAT.md).
3. roboflex.core provides the Message class to facilitate this, as well as metadata and zero-copy functionality. This class is designed to be inherited. Please see [core_messages.h](core_messages/core_messages.h) for examples.
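The header layout above can be exercised in a few lines of pure Python. This sketch (not part of roboflex) builds and parses the 8-byte prefix, assuming the uint32 size field is little-endian:

```python
import struct

# Build an 8-byte roboflex-style header: 4-byte magic 'RFLX' followed by
# a uint32 total message size (little-endian assumed here for illustration).
payload = b'{"hello": "world"}'
total_size = 8 + len(payload)
header = b"RFLX" + struct.pack("<I", total_size)
message = header + payload

# Parse it back: validate the magic bytes and slice out the data portion.
magic, size = struct.unpack("<4sI", message[:8])
assert magic == b"RFLX"
body = message[8:size]
```

In a real message the data portion would be FlexBuffers-encoded rather than raw JSON bytes.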
### Node
A roboflex Node represents a basic unit of computation. Nodes can connect to other nodes, can signal Messages, and can receive them. RunnableNode inherits from Node and adds the ability to run a function in a thread. Both Node and RunnableNode are designed to be inherited: to perform custom logic on message reception, custom nodes should inherit from Node and override receive; to run in a thread, custom nodes should inherit from RunnableNode and override child_thread_fn.
#### Nodes are designed for sub-classing, in python:
    class MyNode(roboflex.Node):
        def receive(self, msg):
            # do custom work, then signal a (possibly new) message downstream
            self.signal(some_new_msg)

    # or, to make a runnable (root) node that runs in a thread:
    class MyRunnable(roboflex.RunnableNode):
        def child_thread_fn(self):
            ...  # do whatever

        def start(self):
            ...  # override if you want, probably not
#### and in c++:
    struct MyNode: public roboflex::core::Node {
        MyNode(): Node("n007") {}
        void receive(core::MessagePtr m) override {
            std::cout << get_name() << " " << m->to_string() << std::endl;
            signal(m);
        }
    };

    struct MyRunnableNode: public roboflex::core::RunnableNode {
        MyRunnableNode(): roboflex::core::RunnableNode("n121") {}
        void child_thread_fn() override {
            // maybe read from a sensor in a loop? control a robot?
        }
    };

    // The examples in roboflex/core/core_nodes explore subclassing further.
## Building (Only if you're doing c++)
    mkdir build && cd build
    cmake ..
    make
    make install
## Install for python
    pip install roboflex
## Just Show Me The Code Example (in python):
    import time
    import numpy as np
    from roboflex import FrequencyGenerator, MapFun, MessagePrinter, CallbackFun

    # This example shows how to create a graph of nodes that pass messages containing
    # numpy tensors (that can be interpreted at the c++-level as xtensor or eigen
    # objects) between each other in a chain.

    # -----------
    # create nodes of the graph

    # The first node will signal at 2 hz. FrequencyGenerator is a library node that
    # simply signals a BlankMessage at a given frequency. It runs in a thread, and must
    # be started and stopped.
    frequency_generator = FrequencyGenerator(2.0)

    # Next, this MapFun (a node that maps a message to another message) node
    # creates a message containing a numpy tensor. The python dict will be
    # encapsulated into a DynoFlex message, and be passed to the next node in
    # the graph.
    tensor_creator = MapFun(lambda m: {"t": np.ones((2, 3)) * m.message_counter})

    # These nodes print stuff out.
    message_printer = MessagePrinter("MESSAGE IS:")
    tensor_printer = CallbackFun(lambda m: print("TENSOR IS:", type(m["t"]), m["t"].shape, m["t"].dtype, "\n", m["t"]))

    # -----------
    # connect nodes of the graph. It's easy to distribute the graph into
    # multiple cpus using nodes in roboflex/transport.
    #
    # 'frequency_generator > tensor_creator' is syntactic sugar for
    # 'frequency_generator.connect(tensor_creator)'.
    #
    frequency_generator > tensor_creator > message_printer > tensor_printer

    # -----------
    # start the root node (the other nodes, in this case, will run in the root node's thread).
    frequency_generator.start()

    # -----------
    # go for a while
    time.sleep(3)

    # -----------
    # stop the root node
    frequency_generator.stop()
## Examples
see [examples/README.md](examples/README.md)
## 🔗 Related
- [RoboFlex Meta Manifest](https://github.com/flexrobotics/roboflex-meta) — ecosystem overview and repo catalog
| text/markdown | Colin Prepscius | colinprepscius@gmail.com | null | null | MIT | robotics, middleware, flexbuffers, python, c++, c++20 | [
"Intended Audience :: Developers",
"License :: OSI Approved :: MIT License",
"Topic :: Software Development :: Libraries :: Python Modules",
"Topic :: Software Development :: Embedded Systems",
"Framework :: Robot Framework",
"Framework :: Robot Framework :: Library",
"Framework :: Robot Framework :: Tool",
"Programming Language :: C++",
"Programming Language :: Python :: 3"
] | [] | https://github.com/flexrobotics/roboflex | null | >=3.6 | [] | [] | [] | [
"numpy"
] | [] | [] | [] | [] | twine/6.2.0 CPython/3.11.5 | 2026-02-20T02:15:57.785754 | roboflex-0.1.39.tar.gz | 76,615 | df/0b/6b8f1190f4ae577f765ea841cbff8978a900a681a95f16f2132b732a90d8/roboflex-0.1.39.tar.gz | source | sdist | null | false | d8d6e35e5955fb3c5642509891ff3216 | ed61a9197e12d391dfe8083d3a9176f0e48b209ceb695fc41d3215d1d24a7feb | df0b6b8f1190f4ae577f765ea841cbff8978a900a681a95f16f2132b732a90d8 | null | [
"LICENSE"
] | 268 |
2.4 | airbyte-agent-tiktok-marketing | 0.1.6 | Airbyte Tiktok-Marketing Connector for AI platforms | # Tiktok-Marketing
The Tiktok-Marketing agent connector is a Python package that equips AI agents to interact with Tiktok-Marketing through strongly typed, well-documented tools. It's ready to use directly in your Python app, in an agent framework, or exposed through an MCP.
Connector for the TikTok Marketing API (Business API v1.3). Provides access to advertiser accounts, campaigns, ad groups, ads, audiences, creative assets (images and videos), and daily performance reports at the advertiser, campaign, ad group, and ad levels. Requires an Access Token from the TikTok for Business platform. All list operations require an advertiser_id parameter to scope results to a specific advertiser account.
## Example questions
The Tiktok-Marketing connector is optimized to handle prompts like these.
- List all my TikTok advertisers
- Show me all campaigns for my advertiser account
- List all ad groups
- Show me all ads
- List my custom audiences
- Show me all creative asset images
- List creative asset videos
- Show me daily ad performance reports
- Get campaign performance metrics for the last 30 days
- Show me advertiser spend reports
- Which campaigns have the highest budget?
- Find all paused ad groups
- What ads were created last month?
- Show campaigns with lifetime budget mode
- Which ads had the most impressions yesterday?
- What is my total ad spend this month?
- Which campaigns have the highest click-through rate?
## Unsupported questions
The Tiktok-Marketing connector isn't currently able to handle prompts like these.
- Create a new campaign
- Update ad group targeting
- Delete an ad
## Installation
```bash
uv pip install airbyte-agent-tiktok-marketing
```
## Usage
Connectors can run in open source or hosted mode.
### Open source
In open source mode, you provide API credentials directly to the connector.
```python
from airbyte_agent_tiktok_marketing import TiktokMarketingConnector
from airbyte_agent_tiktok_marketing.models import TiktokMarketingAuthConfig
connector = TiktokMarketingConnector(
auth_config=TiktokMarketingAuthConfig(
access_token="<Your TikTok Marketing API access token>"
)
)
@agent.tool_plain # assumes you're using Pydantic AI
@TiktokMarketingConnector.tool_utils
async def tiktok_marketing_execute(entity: str, action: str, params: dict | None = None):
return await connector.execute(entity, action, params or {})
```
### Hosted
In hosted mode, API credentials are stored securely in Airbyte Cloud. You provide your Airbyte credentials instead.
If your Airbyte client can access multiple organizations, also set `organization_id`.
This example assumes you've already authenticated your connector with Airbyte. See [Authentication](AUTH.md) to learn more about authenticating. If you need a step-by-step guide, see the [hosted execution tutorial](https://docs.airbyte.com/ai-agents/quickstarts/tutorial-hosted).
```python
from airbyte_agent_tiktok_marketing import TiktokMarketingConnector, AirbyteAuthConfig
connector = TiktokMarketingConnector(
auth_config=AirbyteAuthConfig(
customer_name="<your_customer_name>",
organization_id="<your_organization_id>", # Optional for multi-org clients
airbyte_client_id="<your-client-id>",
airbyte_client_secret="<your-client-secret>"
)
)
@agent.tool_plain # assumes you're using Pydantic AI
@TiktokMarketingConnector.tool_utils
async def tiktok_marketing_execute(entity: str, action: str, params: dict | None = None):
return await connector.execute(entity, action, params or {})
```
## Full documentation
### Entities and actions
This connector supports the following entities and actions. For more details, see this connector's [full reference documentation](REFERENCE.md).
| Entity | Actions |
|--------|---------|
| Advertisers | [List](./REFERENCE.md#advertisers-list), [Search](./REFERENCE.md#advertisers-search) |
| Campaigns | [List](./REFERENCE.md#campaigns-list), [Search](./REFERENCE.md#campaigns-search) |
| Ad Groups | [List](./REFERENCE.md#ad-groups-list), [Search](./REFERENCE.md#ad-groups-search) |
| Ads | [List](./REFERENCE.md#ads-list), [Search](./REFERENCE.md#ads-search) |
| Audiences | [List](./REFERENCE.md#audiences-list), [Search](./REFERENCE.md#audiences-search) |
| Creative Assets Images | [List](./REFERENCE.md#creative-assets-images-list), [Search](./REFERENCE.md#creative-assets-images-search) |
| Creative Assets Videos | [List](./REFERENCE.md#creative-assets-videos-list), [Search](./REFERENCE.md#creative-assets-videos-search) |
| Advertisers Reports Daily | [List](./REFERENCE.md#advertisers-reports-daily-list), [Search](./REFERENCE.md#advertisers-reports-daily-search) |
| Campaigns Reports Daily | [List](./REFERENCE.md#campaigns-reports-daily-list), [Search](./REFERENCE.md#campaigns-reports-daily-search) |
| Ad Groups Reports Daily | [List](./REFERENCE.md#ad-groups-reports-daily-list), [Search](./REFERENCE.md#ad-groups-reports-daily-search) |
| Ads Reports Daily | [List](./REFERENCE.md#ads-reports-daily-list), [Search](./REFERENCE.md#ads-reports-daily-search) |
### Authentication
For all authentication options, see the connector's [authentication documentation](AUTH.md).
### Tiktok-Marketing API docs
See the official [Tiktok-Marketing API reference](https://business-api.tiktok.com/portal/docs?id=1740302848670722).
## Version information
- **Package version:** 0.1.6
- **Connector version:** 1.1.2
- **Generated with Connector SDK commit SHA:** cb4380e76ac5cbc67b9089f94522be1bbe9f8d73
- **Changelog:** [View changelog](https://github.com/airbytehq/airbyte-agent-connectors/blob/main/connectors/tiktok-marketing/CHANGELOG.md) | text/markdown | null | Airbyte <contact@airbyte.io> | null | null | Elastic-2.0 | agent, ai, airbyte, api, connector, data-integration, llm, mcp, tiktok-marketing | [
"Development Status :: 3 - Alpha",
"Intended Audience :: Developers",
"License :: Other/Proprietary License",
"Operating System :: OS Independent",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.13",
"Topic :: Software Development :: Libraries :: Application Frameworks",
"Topic :: Software Development :: Libraries :: Python Modules",
"Typing :: Typed"
] | [] | null | null | >=3.13 | [] | [] | [] | [
"httpx>=0.24.0",
"jinja2>=3.0.0",
"jsonpath-ng>=1.6.1",
"jsonref>=1.1.0",
"opentelemetry-api>=1.37.0",
"opentelemetry-sdk>=1.37.0",
"pydantic>=2.0.0",
"python-dotenv>=1.0.0",
"pyyaml>=6.0",
"segment-analytics-python>=2.2.0"
] | [] | [] | [] | [
"Homepage, https://github.com/airbytehq/airbyte-agent-connectors",
"Documentation, https://docs.airbyte.com/ai-agents/",
"Repository, https://github.com/airbytehq/airbyte-agent-connectors",
"Issues, https://github.com/airbytehq/airbyte-agent-connectors/issues"
] | twine/6.2.0 CPython/3.13.11 | 2026-02-20T02:15:22.947218 | airbyte_agent_tiktok_marketing-0.1.6.tar.gz | 164,979 | 01/a5/351e680a38007b09faae22c95f7014036e66e21db1fb6a949cdf1ff2c8b7/airbyte_agent_tiktok_marketing-0.1.6.tar.gz | source | sdist | null | false | 566cbfa0435238ebe3de861be367214f | 629ac6de298613e149825579d5a51c195e80ea50b1e0e59471836f07c7df46dd | 01a5351e680a38007b09faae22c95f7014036e66e21db1fb6a949cdf1ff2c8b7 | null | [] | 341 |
2.4 | airbyte-agent-mcp | 0.1.145 | MCP server for Airbyte connectors - connect AI assistants to 500+ data sources | # Airbyte MCP Server
Connect AI assistants to a growing catalog of data sources through the [Model Context Protocol (MCP)](https://modelcontextprotocol.io/).
This project provides an MCP server that exposes [Airbyte](https://airbyte.com/) connectors as tools, enabling AI assistants like Claude, Cursor, and Codex to interact with your data sources directly.
## Features
- **Growing Connector Catalog**: Access any Airbyte connector (Salesforce, HubSpot, Stripe, databases, and more)
- **Two Execution Modes**:
- **Local Mode**: Direct API calls using your credentials
- **Cloud Mode**: Execute through Airbyte Cloud for managed infrastructure
- **AI Tool Integration**: One-command setup for Claude Code, Claude Desktop, Cursor, and Codex
## Quick Start
1. **List available connectors**:
```bash
uv run adp connectors list-oss
```
2. **Generate a connector configuration** (e.g., Gong):
```bash
uv run adp connectors configure --package airbyte-agent-gong
```
3. **Set your connector credentials** in `.env`:
```bash
GONG_ACCESS_KEY=your-access-key
GONG_ACCESS_KEY_SECRET=your-secret
```
4. **Register with your AI tool**:
```bash
# Claude Code
uv run adp mcp add-to claude-code connector-gong-package.yaml
# Claude Desktop
uv run adp mcp add-to claude-desktop connector-gong-package.yaml
# Cursor
uv run adp mcp add-to cursor connector-gong-package.yaml
# OpenAI Codex
uv run adp mcp add-to codex connector-gong-package.yaml
```
5. **Restart your AI tool** and start asking questions like "List all users from Gong" or "Search for calls from last week".
## Configuration
### Local Mode (Direct API Access)
For local execution with your own credentials. This mode calls the data source API directly and only supports operations that the API provides (e.g., list, get by ID).
> **Info:** Arbitrary search/filter queries are not supported unless the underlying API supports them.
```yaml
connector:
package: airbyte-agent-gong
version: 0.1.13 # optional, defaults to latest
credentials:
access_key: ${env.GONG_ACCESS_KEY}
access_key_secret: ${env.GONG_ACCESS_KEY_SECRET}
```
### Cloud Mode (Airbyte Cloud)
For execution through Airbyte Cloud. This mode supports arbitrary search and filter queries across all entities, as data is kept up to date and indexed in Airbyte's infrastructure.
```yaml
connector:
connector_id: <connector-id>
credentials:
airbyte_client_id: ${env.AIRBYTE_CLIENT_ID}
airbyte_client_secret: ${env.AIRBYTE_CLIENT_SECRET}
```
Credentials use `${env.VAR_NAME}` syntax and are resolved from `.env` files, which the CLI loads automatically.
You can also point the connector to a local path or a git repository — run `uv run adp connectors configure --help` for all options.
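The `${env.VAR_NAME}` placeholder resolution described above can be sketched with a small substitution function. This is an illustration of the mechanism only; the `adp` CLI's actual resolver may behave differently (e.g. around missing variables):

```python
import re

# Matches ${env.VAR_NAME} where VAR_NAME is a valid environment variable name.
_ENV_PATTERN = re.compile(r"\$\{env\.([A-Za-z_][A-Za-z0-9_]*)\}")

def resolve_env(value: str, env: dict) -> str:
    """Replace every ${env.NAME} placeholder in value with env[NAME]."""
    def _sub(match):
        name = match.group(1)
        if name not in env:
            raise KeyError(f"environment variable {name!r} is not set")
        return env[name]
    return _ENV_PATTERN.sub(_sub, value)

# e.g. resolving a credentials field from loaded .env values
access_key = resolve_env("${env.GONG_ACCESS_KEY}", {"GONG_ACCESS_KEY": "abc123"})
```

In practice the `env` dict would come from `os.environ` after the CLI loads your `.env` file.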
### Aggregate Config (Multiple Connectors)
You can run one MCP server with multiple connector configs:
```yaml
name: airbyte-crm-suite
configs:
- connector-gong-package.yaml
- connector-salesforce-cloud.yaml
```
## CLI Commands
All commands are run with `uv run adp <command>`. Use `--help` on any command for full options.
### Login
Save your Airbyte Cloud credentials so they are available to all commands without a local `.env` file:
```bash
uv run adp login <organization-id>
```
This prints a link to the Airbyte authentication page for your organization where you can find your Client ID and Secret, then prompts for both values. Credentials are written to `~/.airbyte_agent_mcp/orgs/<organization-id>/.env` and the organization is set as the default.
You can log into multiple organizations and switch between them:
```bash
uv run adp orgs list # List logged-in organizations
uv run adp orgs default org-xyz # Switch default organization
uv run adp --org org-abc <cmd> # Override for a single command
```
### Connectors
```bash
# List available connectors
uv run adp connectors list-oss
uv run adp connectors list-oss --pattern salesforce
# List cloud connectors
uv run adp connectors list-cloud
uv run adp connectors list-cloud --customer acme
# Generate a connector configuration
uv run adp connectors configure --package airbyte-agent-gong
uv run adp connectors configure --connector-id <id>
```
### MCP Server
```bash
# Start with stdio transport (default)
uv run adp mcp serve connector-gong-package.yaml
# Start with an aggregate config (multiple connectors)
uv run adp mcp serve connectors.yaml
# Start with HTTP transport
uv run adp mcp serve connector-gong-package.yaml --transport http --port 8080
# Register with an AI tool
uv run adp mcp add-to claude-code connector-gong-package.yaml
# Register aggregate config with an AI tool
uv run adp mcp add-to codex connectors.yaml
```
### Chat
Chat with your connector data using natural language, powered by Claude. Requires `ANTHROPIC_API_KEY`.
```bash
# One-shot mode (great for piping)
uv run adp chat connector-gong-package.yaml "show me 5 users"
# Chat with an aggregate config
uv run adp chat connectors.yaml "show me 5 users from each system"
# Interactive REPL
uv run adp chat connector-gong-package.yaml
```
## Development
```bash
# Install dependencies
uv sync --group dev
# Run tests
uv run poe test
# Format and lint
uv run poe format
uv run poe check
```
## Links
- [Airbyte](https://airbyte.com/)
- [Model Context Protocol](https://modelcontextprotocol.io/)
- [Claude Code](https://claude.ai/code)
- [GitHub Issues](https://github.com/airbytehq/airbyte-agent-connectors/issues)
- [Airbyte Community Slack](https://airbyte.com/community)
| text/markdown | null | Airbyte <contact@airbyte.io> | null | null | MIT | ai, airbyte, claude, data-integration, mcp, model-context-protocol | [
"Development Status :: 4 - Beta",
"Intended Audience :: Developers",
"License :: OSI Approved :: MIT License",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.13",
"Topic :: Software Development :: Libraries :: Python Modules"
] | [] | null | null | >=3.13 | [] | [] | [] | [
"anthropic>=0.42.0",
"fastmcp>=3.0.0b1",
"filetype>=1.2",
"httpx>=0.27.0",
"pydantic-ai-slim[anthropic,fastmcp]>=0.1.0",
"pydantic>=2.0.0",
"python-dotenv>=1.0.0",
"pytz>=2025.2",
"pyyaml>=6.0.0",
"rich>=13.0.0",
"typer>=0.12.0"
] | [] | [] | [] | [
"Homepage, https://github.com/airbytehq/airbyte-agent-connectors",
"Documentation, https://github.com/airbytehq/airbyte-agent-connectors#readme",
"Repository, https://github.com/airbytehq/airbyte-agent-connectors",
"Issues, https://github.com/airbytehq/airbyte-agent-connectors/issues"
] | twine/6.2.0 CPython/3.13.11 | 2026-02-20T02:15:19.880205 | airbyte_agent_mcp-0.1.145.tar.gz | 221,351 | 8e/2c/69f709be08d8590176df524e361620406d14cec06286731ea9712dabdb7f/airbyte_agent_mcp-0.1.145.tar.gz | source | sdist | null | false | bd879ea6554f327f4a432ee9512a282b | f9c72ae9fc2d0d6773b0e8bc85ee8f2bc375e16b67609c43beab174387c656d0 | 8e2c69f709be08d8590176df524e361620406d14cec06286731ea9712dabdb7f | null | [] | 264 |
2.4 | edgar-agent-tool | 0.0.5 | A tool to read through filings on the EDGAR site from the SEC | # Edgar Agent Tool
<!-- WARNING: THIS FILE WAS AUTOGENERATED! DO NOT EDIT! -->
## Features
- 🔍 **Company Lookup** — Convert stock tickers to SEC CIK numbers
- 📑 **Filing Retrieval** — Fetch recent filings for any public
company
- 🧩 **Automatic Section Parsing** — Extract specific sections (Risk
Factors, MD&A, etc.)
- 🌐 **Cross-Company Search** — Find filings across all companies by
date
- 🤖 **LLM-Friendly** — Clean markdown output, structured returns,
designed for AI agents
- ⚡ **Smart Caching** — Built-in TTL caches to respect SEC rate limits
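The TTL caching mentioned above can be sketched as a small expiry-aware dictionary. This is an illustrative sketch only, not `edgar_agent_tool`'s internals:

```python
import time

class TTLCache:
    """Minimal TTL cache sketch: entries expire ttl_seconds after being set."""

    def __init__(self, ttl_seconds: float):
        self.ttl = ttl_seconds
        self._store = {}  # key -> (expiry_timestamp, value)

    def get(self, key):
        entry = self._store.get(key)
        if entry is None:
            return None
        expires_at, value = entry
        if time.monotonic() >= expires_at:
            del self._store[key]  # expired: evict and report a miss
            return None
        return value

    def set(self, key, value):
        self._store[key] = (time.monotonic() + self.ttl, value)

# e.g. caching a ticker -> CIK lookup for ten minutes
cache = TTLCache(ttl_seconds=600)
cache.set("AAPL", "0000320193")
```

Caching lookups like this keeps repeated agent calls from hammering the SEC's rate-limited endpoints.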
## Developer Guide
If you are new to using `nbdev` here are some useful pointers to get you
started.
### Install edgar_tool in development mode
``` sh
# make sure edgar_tool package is installed in development mode
$ pip install -e .
# make changes under nbs/ directory
# ...
# compile to have changes apply to edgar_tool
$ nbdev_prepare
```
## Installation
``` bash
pip install edgar-agent-tool
```
Or install from GitHub for the latest version:
``` bash
pip install git+https://github.com/problemsolversguild/edgar-agent-tool.git
```
### Documentation
Documentation is hosted on this GitHub
[repository](https://github.com/problemsolversguild/edgar_tool)'s
[pages](https://problemsolversguild.github.io/edgar_tool/). You can
also find package-manager-specific guidelines on
[conda](https://anaconda.org/problemsolversguild/edgar_tool) and
[pypi](https://pypi.org/project/edgar_tool/) respectively.
## Quick Start
First, set up your SEC User-Agent (required by SEC):
``` bash
export SEC_USER_AGENT="YourName your@email.com"
```
``` python
from edgar_agent_tool.filings import get_filing, get_recent_filings
# Get Apple's latest 10-K with automatic section parsing
filing = get_filing("AAPL", filing_type="10-K")
print(filing["summary"])
```
``` python
# Extract a specific section (e.g., Risk Factors)
filing = get_filing("AAPL", filing_type="10-K", section="1A")
print(f"Risk Factors: {len(filing['section']['content'])} characters")
```
``` python
# Get a list of recent 8-K filings
filings = get_recent_filings("MSFT", filing_type="8-K", count=5)
for f in filings:
print(f"{f['filingDate']}: {f['form']} - {f['primaryDocDescription']} - {f['items']}")
```
2025-12-08: 8-K - 8-K - 5.02,5.07
2025-10-29: 8-K - 8-K - 2.02,7.01,9.01
2025-09-30: 8-K - 8-K - 5.02
2025-07-30: 8-K - 8-K - 2.02,9.01
2025-07-01: 8-K - 8-K - 5.03,9.01
## Key Functions
| Function | Description |
|----|----|
| `get_filing(ticker, filing_type, section)` | Get a parsed filing with automatic section extraction |
| `get_recent_filings(ticker, filing_type, count)` | List recent filings for a company |
| `get_latest_filings(filing_type, start_date, end_date)` | Search filings across all companies |
| `ticker_to_cik(ticker)` | Convert ticker to SEC CIK number |
| `get_filing_document(url)` | Fetch raw filing content as markdown |
## Filing Classes
For more control, use the typed filing classes directly:
| Class | Description |
|----|----|
| `Filing10K` | Annual reports with 23 standard items across 4 parts |
| `Filing10Q` | Quarterly reports with financial statements and MD&A |
| `Filing8K` | Current reports for material events (earnings, exec changes, etc.) |
Each class provides:

- **Lazy content loading** — Content fetched only when accessed
- **Automatic section parsing** — Structured access to all standard items
- **Convenience properties** — `.business`, `.risk_factors`, `.mda`, `.financials`
- **Summary generation** — Human-readable filing overviews
## Section Reference
### 10-K Sections
- **Item 1**: Business description
- **Item 1A**: Risk Factors ⭐
- **Item 7**: Management’s Discussion & Analysis (MD&A) ⭐
- **Item 8**: Financial Statements ⭐
### 10-Q Sections
- **I-1**: Financial Statements
- **I-2**: MD&A ⭐
- **II-1A**: Risk Factor updates
### 8-K Items
- **2.02**: Results of Operations (earnings) ⭐
- **5.02**: Director/Officer changes
- **7.01**: Regulation FD Disclosure
- **8.01**: Other Events
| text/markdown | null | Kevin Bird <kevin@problemsolversguild.com> | null | null | Apache-2.0 | nbdev, jupyter, notebook, python | [
"Natural Language :: English",
"Intended Audience :: Developers",
"Development Status :: 3 - Alpha",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3 :: Only"
] | [] | null | null | >=3.9 | [] | [] | [] | [] | [] | [] | [] | [
"Repository, https://github.com/problemsolversguild/edgar_agent_tool",
"Documentation, https://problemsolversguild.github.io/edgar_agent_tool"
] | twine/6.2.0 CPython/3.12.12 | 2026-02-20T02:14:50.193055 | edgar_agent_tool-0.0.5.tar.gz | 24,261 | e8/77/bdef7a5b5a28c3ebb04b0068fba8da910e1632b7ba5533e990098e7b8ea3/edgar_agent_tool-0.0.5.tar.gz | source | sdist | null | false | 1a4365577fbd7bb3d7d00e62b91a3201 | 1582a88d0bd604f031abacb9d38118ebc327e4b0229545634b90c7d45557d1b1 | e877bdef7a5b5a28c3ebb04b0068fba8da910e1632b7ba5533e990098e7b8ea3 | null | [
"LICENSE"
] | 279 |
2.4 | airbyte-agent-greenhouse | 0.17.101 | Airbyte Greenhouse Connector for AI platforms | # Greenhouse
The Greenhouse agent connector is a Python package that equips AI agents to interact with Greenhouse through strongly typed, well-documented tools. It's ready to use directly in your Python app, in an agent framework, or exposed through an MCP.
Greenhouse is an applicant tracking system (ATS) that helps companies manage their
hiring process. This connector provides access to candidates, applications, jobs,
offers, users, departments, offices, job posts, sources, and scheduled interviews
for recruiting analytics and talent acquisition insights.
## Example questions
The Greenhouse connector is optimized to handle prompts like these.
- List all open jobs
- Show me upcoming interviews this week
- Show me recent job offers
- List recent applications
- Show me candidates from {company} who applied last month
- What are the top 5 sources for our job applications this quarter?
- Analyze the interview schedules for our engineering candidates this week
- Compare the number of applications across different offices
- Identify candidates who have multiple applications in our system
- Summarize the candidate pipeline for our latest job posting
- Find the most active departments in recruiting this month
## Unsupported questions
The Greenhouse connector isn't currently able to handle prompts like these.
- Create a new job posting for the marketing team
- Schedule an interview for {candidate}
- Update the status of {candidate}'s application
- Delete a candidate profile
- Send an offer letter to {candidate}
- Edit the details of a job description
## Installation
```bash
uv pip install airbyte-agent-greenhouse
```
## Usage
Connectors can run in open source or hosted mode.
### Open source
In open source mode, you provide API credentials directly to the connector.
```python
from airbyte_agent_greenhouse import GreenhouseConnector
from airbyte_agent_greenhouse.models import GreenhouseAuthConfig
connector = GreenhouseConnector(
auth_config=GreenhouseAuthConfig(
api_key="<Your Greenhouse Harvest API Key from the Dev Center>"
)
)
@agent.tool_plain # assumes you're using Pydantic AI
@GreenhouseConnector.tool_utils
async def greenhouse_execute(entity: str, action: str, params: dict | None = None):
return await connector.execute(entity, action, params or {})
```
### Hosted
In hosted mode, API credentials are stored securely in Airbyte Cloud. You provide your Airbyte credentials instead.
If your Airbyte client can access multiple organizations, also set `organization_id`.
This example assumes you've already authenticated your connector with Airbyte. See [Authentication](AUTH.md) to learn more about authenticating. If you need a step-by-step guide, see the [hosted execution tutorial](https://docs.airbyte.com/ai-agents/quickstarts/tutorial-hosted).
```python
from airbyte_agent_greenhouse import GreenhouseConnector, AirbyteAuthConfig
connector = GreenhouseConnector(
auth_config=AirbyteAuthConfig(
customer_name="<your_customer_name>",
organization_id="<your_organization_id>", # Optional for multi-org clients
airbyte_client_id="<your-client-id>",
airbyte_client_secret="<your-client-secret>"
)
)
@agent.tool_plain # assumes you're using Pydantic AI
@GreenhouseConnector.tool_utils
async def greenhouse_execute(entity: str, action: str, params: dict | None = None):
return await connector.execute(entity, action, params or {})
```
## Full documentation
### Entities and actions
This connector supports the following entities and actions. For more details, see this connector's [full reference documentation](REFERENCE.md).
| Entity | Actions |
|--------|---------|
| Candidates | [List](./REFERENCE.md#candidates-list), [Get](./REFERENCE.md#candidates-get), [Search](./REFERENCE.md#candidates-search) |
| Applications | [List](./REFERENCE.md#applications-list), [Get](./REFERENCE.md#applications-get), [Search](./REFERENCE.md#applications-search) |
| Jobs | [List](./REFERENCE.md#jobs-list), [Get](./REFERENCE.md#jobs-get), [Search](./REFERENCE.md#jobs-search) |
| Offers | [List](./REFERENCE.md#offers-list), [Get](./REFERENCE.md#offers-get), [Search](./REFERENCE.md#offers-search) |
| Users | [List](./REFERENCE.md#users-list), [Get](./REFERENCE.md#users-get), [Search](./REFERENCE.md#users-search) |
| Departments | [List](./REFERENCE.md#departments-list), [Get](./REFERENCE.md#departments-get), [Search](./REFERENCE.md#departments-search) |
| Offices | [List](./REFERENCE.md#offices-list), [Get](./REFERENCE.md#offices-get), [Search](./REFERENCE.md#offices-search) |
| Job Posts | [List](./REFERENCE.md#job-posts-list), [Get](./REFERENCE.md#job-posts-get), [Search](./REFERENCE.md#job-posts-search) |
| Sources | [List](./REFERENCE.md#sources-list), [Search](./REFERENCE.md#sources-search) |
| Scheduled Interviews | [List](./REFERENCE.md#scheduled-interviews-list), [Get](./REFERENCE.md#scheduled-interviews-get) |
| Application Attachment | [Download](./REFERENCE.md#application-attachment-download) |
| Candidate Attachment | [Download](./REFERENCE.md#candidate-attachment-download) |
### Authentication
For all authentication options, see the connector's [authentication documentation](AUTH.md).
### Greenhouse API docs
See the official [Greenhouse API reference](https://developers.greenhouse.io/harvest.html).
## Version information
- **Package version:** 0.17.101
- **Connector version:** 0.1.6
- **Generated with Connector SDK commit SHA:** cb4380e76ac5cbc67b9089f94522be1bbe9f8d73
- **Changelog:** [View changelog](https://github.com/airbytehq/airbyte-agent-connectors/blob/main/connectors/greenhouse/CHANGELOG.md) | text/markdown | null | Airbyte <contact@airbyte.io> | null | null | Elastic-2.0 | agent, ai, airbyte, api, connector, data-integration, greenhouse, llm, mcp | [
"Development Status :: 3 - Alpha",
"Intended Audience :: Developers",
"License :: Other/Proprietary License",
"Operating System :: OS Independent",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.13",
"Topic :: Software Development :: Libraries :: Application Frameworks",
"Topic :: Software Development :: Libraries :: Python Modules",
"Typing :: Typed"
] | [] | null | null | >=3.13 | [] | [] | [] | [
"httpx>=0.24.0",
"jinja2>=3.0.0",
"jsonpath-ng>=1.6.1",
"jsonref>=1.1.0",
"opentelemetry-api>=1.37.0",
"opentelemetry-sdk>=1.37.0",
"pydantic>=2.0.0",
"python-dotenv>=1.0.0",
"pyyaml>=6.0",
"segment-analytics-python>=2.2.0"
] | [] | [] | [] | [
"Homepage, https://github.com/airbytehq/airbyte-agent-connectors",
"Documentation, https://docs.airbyte.com/ai-agents/",
"Repository, https://github.com/airbytehq/airbyte-agent-connectors",
"Issues, https://github.com/airbytehq/airbyte-agent-connectors/issues"
] | twine/6.2.0 CPython/3.13.11 | 2026-02-20T02:14:21.962762 | airbyte_agent_greenhouse-0.17.101.tar.gz | 155,667 | 74/89/2d5dca2e90dd96807633f83052183ca86dd32c9b286a55966c6e98564195/airbyte_agent_greenhouse-0.17.101.tar.gz | source | sdist | null | false | 214a4a329352c3017b6a4e33b028c313 | 4143b31406b6e688309e6044b5a501c133af00afa1f7cdd3c2ecf5004b7f5b18 | 74892d5dca2e90dd96807633f83052183ca86dd32c9b286a55966c6e98564195 | null | [] | 339 |
2.4 | airbyte-agent-zendesk-chat | 0.1.56 | Airbyte Zendesk-Chat Connector for AI platforms | # Zendesk-Chat
The Zendesk-Chat agent connector is a Python package that equips AI agents to interact with Zendesk-Chat through strongly typed, well-documented tools. It's ready to use directly in your Python app, in an agent framework, or exposed through an MCP.
Zendesk Chat enables real-time customer support through live chat. This connector
provides access to chat transcripts, agents, departments, shortcuts, triggers,
and other chat configuration data for analytics and support insights.
## Supported Entities
- **accounts**: Account information and billing details
- **agents**: Chat agents with roles and department assignments
- **agent_timeline**: Agent activity timeline (incremental export)
- **bans**: Banned visitors (IP and visitor-based)
- **chats**: Chat transcripts with full conversation history (incremental export)
- **departments**: Chat departments for routing
- **goals**: Conversion goals for tracking
- **roles**: Agent role definitions
- **routing_settings**: Account-level routing configuration
- **shortcuts**: Canned responses for agents
- **skills**: Agent skills for skill-based routing
- **triggers**: Automated chat triggers
## Rate Limits
Zendesk Chat API uses the `Retry-After` header for rate limit backoff.
The connector handles this automatically.
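`Retry-After` backoff of the kind described above can be sketched as follows. This is an illustration of the general pattern, not the connector's actual retry logic, and it only handles the delta-seconds form of the header (`Retry-After` may also carry an HTTP-date):

```python
import time

def wait_for_retry(headers: dict, max_wait: float = 60.0) -> float:
    """Sleep for the duration requested by a Retry-After header.

    Returns the number of seconds actually slept. Unparseable or missing
    values fall back to a 1-second delay, capped at max_wait.
    """
    raw = headers.get("Retry-After", "1")
    try:
        delay = float(raw)      # delta-seconds form, e.g. "Retry-After: 30"
    except ValueError:
        delay = 1.0             # HTTP-date form not handled in this sketch
    delay = min(max(delay, 0.0), max_wait)
    time.sleep(delay)
    return delay
```

A caller would typically invoke this after receiving an HTTP 429, then retry the request.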
## Example questions
The Zendesk-Chat connector is optimized to handle prompts like these.
- List all banned visitors
- List all departments with their settings
- Show me all chats from last week
- List all agents in the support department
- What are the most used chat shortcuts?
- Show chat volume by department
- What triggers are currently active?
- Show agent activity timeline for today
## Unsupported questions
The Zendesk-Chat connector isn't currently able to handle prompts like these.
- Start a new chat session
- Send a message to a visitor
- Create a new agent
- Update department settings
- Delete a shortcut
## Installation
```bash
uv pip install airbyte-agent-zendesk-chat
```
## Usage
Connectors can run in open source or hosted mode.
### Open source
In open source mode, you provide API credentials directly to the connector.
```python
from airbyte_agent_zendesk_chat import ZendeskChatConnector
from airbyte_agent_zendesk_chat.models import ZendeskChatAuthConfig
connector = ZendeskChatConnector(
auth_config=ZendeskChatAuthConfig(
access_token="<Your Zendesk Chat OAuth 2.0 access token>"
)
)
@agent.tool_plain # assumes you're using Pydantic AI
@ZendeskChatConnector.tool_utils
async def zendesk_chat_execute(entity: str, action: str, params: dict | None = None):
return await connector.execute(entity, action, params or {})
```
### Hosted
In hosted mode, API credentials are stored securely in Airbyte Cloud. You provide your Airbyte credentials instead.
If your Airbyte client can access multiple organizations, also set `organization_id`.
This example assumes you've already authenticated your connector with Airbyte. See [Authentication](AUTH.md) to learn more about authenticating. If you need a step-by-step guide, see the [hosted execution tutorial](https://docs.airbyte.com/ai-agents/quickstarts/tutorial-hosted).
```python
from airbyte_agent_zendesk_chat import ZendeskChatConnector, AirbyteAuthConfig
connector = ZendeskChatConnector(
auth_config=AirbyteAuthConfig(
customer_name="<your_customer_name>",
organization_id="<your_organization_id>", # Optional for multi-org clients
airbyte_client_id="<your-client-id>",
airbyte_client_secret="<your-client-secret>"
)
)
@agent.tool_plain # assumes you're using Pydantic AI
@ZendeskChatConnector.tool_utils
async def zendesk_chat_execute(entity: str, action: str, params: dict | None = None):
return await connector.execute(entity, action, params or {})
```
## Full documentation
### Entities and actions
This connector supports the following entities and actions. For more details, see this connector's [full reference documentation](REFERENCE.md).
| Entity | Actions |
|--------|---------|
| Accounts | [Get](./REFERENCE.md#accounts-get) |
| Agents | [List](./REFERENCE.md#agents-list), [Get](./REFERENCE.md#agents-get), [Search](./REFERENCE.md#agents-search) |
| Agent Timeline | [List](./REFERENCE.md#agent-timeline-list) |
| Bans | [List](./REFERENCE.md#bans-list), [Get](./REFERENCE.md#bans-get) |
| Chats | [List](./REFERENCE.md#chats-list), [Get](./REFERENCE.md#chats-get), [Search](./REFERENCE.md#chats-search) |
| Departments | [List](./REFERENCE.md#departments-list), [Get](./REFERENCE.md#departments-get), [Search](./REFERENCE.md#departments-search) |
| Goals | [List](./REFERENCE.md#goals-list), [Get](./REFERENCE.md#goals-get) |
| Roles | [List](./REFERENCE.md#roles-list), [Get](./REFERENCE.md#roles-get) |
| Routing Settings | [Get](./REFERENCE.md#routing-settings-get) |
| Shortcuts | [List](./REFERENCE.md#shortcuts-list), [Get](./REFERENCE.md#shortcuts-get), [Search](./REFERENCE.md#shortcuts-search) |
| Skills | [List](./REFERENCE.md#skills-list), [Get](./REFERENCE.md#skills-get) |
| Triggers | [List](./REFERENCE.md#triggers-list), [Search](./REFERENCE.md#triggers-search) |
### Authentication
For all authentication options, see the connector's [authentication documentation](AUTH.md).
### Zendesk-Chat API docs
See the official [Zendesk-Chat API reference](https://developer.zendesk.com/api-reference/live-chat/chat-api/introduction/).
## Version information
- **Package version:** 0.1.56
- **Connector version:** 0.1.8
- **Generated with Connector SDK commit SHA:** cb4380e76ac5cbc67b9089f94522be1bbe9f8d73
- **Changelog:** [View changelog](https://github.com/airbytehq/airbyte-agent-connectors/blob/main/connectors/zendesk-chat/CHANGELOG.md) | text/markdown | null | Airbyte <contact@airbyte.io> | null | null | Elastic-2.0 | agent, ai, airbyte, api, connector, data-integration, llm, mcp, zendesk-chat | [
"Development Status :: 3 - Alpha",
"Intended Audience :: Developers",
"License :: Other/Proprietary License",
"Operating System :: OS Independent",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.13",
"Topic :: Software Development :: Libraries :: Application Frameworks",
"Topic :: Software Development :: Libraries :: Python Modules",
"Typing :: Typed"
] | [] | null | null | >=3.13 | [] | [] | [] | [
"httpx>=0.24.0",
"jinja2>=3.0.0",
"jsonpath-ng>=1.6.1",
"jsonref>=1.1.0",
"opentelemetry-api>=1.37.0",
"opentelemetry-sdk>=1.37.0",
"pydantic>=2.0.0",
"python-dotenv>=1.0.0",
"pyyaml>=6.0",
"segment-analytics-python>=2.2.0"
] | [] | [] | [] | [
"Homepage, https://github.com/airbytehq/airbyte-agent-connectors",
"Documentation, https://docs.airbyte.com/ai-agents/",
"Repository, https://github.com/airbytehq/airbyte-agent-connectors",
"Issues, https://github.com/airbytehq/airbyte-agent-connectors/issues"
] | twine/6.2.0 CPython/3.13.11 | 2026-02-20T02:14:14.839174 | airbyte_agent_zendesk_chat-0.1.56.tar.gz | 142,816 | 29/75/dd8489c073ceb63ff0929017e828ddede6086157c465a53de9d3c5dfcbd2/airbyte_agent_zendesk_chat-0.1.56.tar.gz | source | sdist | null | false | 7ea68f37bd09ecbec8a684069c82f295 | 31f816ff30fa7f2a897f5e4a8110aeb13f887717bf16c7124da6fd9b2d579a06 | 2975dd8489c073ceb63ff0929017e828ddede6086157c465a53de9d3c5dfcbd2 | null | [] | 346 |
2.4 | artifactr | 0.4.0 | Manage AI artifacts across multiple configurations, tools, & repositories | # Artifactr
Cross-platform CLI for managing AI skills, commands, agents, and other ***Artifacts***. Maintain a library in centralized ***Vaults*** and import them into any project or tool config for use with AI coding assistants.
### Philosophy
- All you need is a **terminal**.
- **Local-first**. No network functionality at all.
- **Extensible**. Easily add support for your coding agent with a simple yaml configuration.
- **Portable**. Cross-platform support (Linux/Mac/Win).
- **Simple**. Easy to install, easy to export your artifacts/configuration.
- **Conventional**. Command syntax attempts to feel familiar by aligning with existing conventions.
[Installation](#installation) •
[Quickstart](#quickstart) •
[The Essentials](#the-essentials) •
[Extended Usage](#extended-usage)
## *Why?* 🤔
AI coding agents accumulate a bunch of configs. And they're usually project-local. Starting a new repo often means rebuilding your environment from scratch, manually copy-pasting skills, etc.
I develop mostly in cloud VMs via `ssh` and `tmux` for security purposes. I got tired of rebuilding my setup every time and desperately wanted a *terminal-centric* solution. Lo-and-behold: **Artifactr**
It takes inspiration from local-first note-taking applications like Obsidian and Logseq. No external connections are made. Your vaults contain YOUR files.
**Vaults** are where you store **Artifacts**. "Artifact" is a shorter way of saying "a file or folder used with LLM tools." Currently, `artifactr` supports standard **Skills**, **Commands**, and **Agents**.

---
## Table of Contents
<details>
<summary><strong>TOC</strong></summary>
- [Why?](#why-)
- [Installation](#installation)
- [Linux & macOS](#linux--macos)
- [Windows](#windows)
- [Manual Installation](#manual-installation)
- [Quickstart](#quickstart)
- [The Essentials](#the-essentials)
- [Creating a New Vault](#creating-a-new-vault)
- [Adding an Existing Vault](#adding-an-existing-vault)
- [Select Your Preferences](#select-your-preferences)
- [Spelunk For and Store Artifacts](#spelunk-for-and-store-artifacts)
- [Removing Artifacts](#removing-artifacts)
- [Importing Artifacts](#importing-artifacts)
- [Syncing Artifacts Automatically](#syncing-artifacts-automatically)
- [Editing Artifacts - art edit](#editing-artifacts---art-edit)
- [Creating Artifacts - art create](#creating-artifacts---art-create)
- [Creating Skills](#creating-skills)
- [Creating Commands](#creating-commands)
- [Creating Agents](#creating-agents)
- [Managing Tools](#managing-tools)
- [Adding a Custom Tool](#adding-a-custom-tool)
- [Vault Structure](#vault-structure)
- [Extended Usage](#extended-usage)
- [Managing Vaults](#managing-vaults)
- [Vault Export & Import](#vault-export--import)
- [Shell Navigation](#shell-navigation)
- [Managing Tools](#managing-tools-1)
- [Listing Vault Contents](#listing-vault-contents)
- [Removing Vault Artifacts](#removing-vault-artifacts)
- [Importing Artifacts (Project)](#importing-artifacts-project)
- [Managing Project Artifacts](#managing-project-artifacts)
- [Importing Artifacts (Global Config)](#importing-artifacts-global-config)
- [Managing Global Config Artifacts](#managing-global-config-artifacts)
- [Linking & Unlinking Artifacts](#linking--unlinking-artifacts)
- [Import with Linking](#import-with-linking)
- [Multi-Vault -V Flag](#multi-vault--v-flag)
- [Tool Discovery with --all](#tool-discovery-with---all)
- [Link State Display](#link-state-display)
- [Creating Artifacts](#creating-artifacts)
- [Editing Artifacts](#editing-artifacts)
- [Listing Artifact Files](#listing-artifact-files)
- [Reading Artifact Content](#reading-artifact-content)
- [Inspecting Artifacts](#inspecting-artifacts)
- [Exporting Artifacts](#exporting-artifacts)
- [Copying Artifacts](#copying-artifacts)
- [Discovering Artifacts](#discovering-artifacts)
- [Storing Artifacts](#storing-artifacts)
</details>
---
## Installation
Requires Python 3.10+
The one-liners attempt to install `artifactr` via pipx; otherwise, they create a new venv, install the program there, and add it to your `$PATH`.
### Linux & macOS
Install artifactr with a single command on Linux or macOS:
```sh
curl -fsSL https://raw.githubusercontent.com/reg1z/artifactr/main/install.sh | bash
```
To skip all confirmation prompts (useful for scripts or dotfiles):
```sh
curl -fsSL https://raw.githubusercontent.com/reg1z/artifactr/main/install.sh | bash -s -- --yes
```
To uninstall:
```sh
curl -fsSL https://raw.githubusercontent.com/reg1z/artifactr/main/install.sh | bash -s -- --uninstall
```
### Windows
Install with a single command in PowerShell:
```powershell
powershell -ExecutionPolicy ByPass -c "irm https://raw.githubusercontent.com/reg1z/artifactr/main/install.ps1 | iex"
```
To skip all confirmation prompts, download `install.ps1` and run:
```powershell
.\install.ps1 -Yes
```
To uninstall, run:
```powershell
.\install.ps1 -Uninstall
```
### Manual Installation
It's easiest to install `artifactr` globally via `pipx`, which auto-configures a separate Python virtual environment (`venv`) for you:
```sh
pipx install artifactr
```
Otherwise, you can install via `pip`, though it's recommended to use a non-system venv.
```sh
pip install artifactr
```
To use a non-system venv, find a good folder to put the new environment in, `cd` into it, and:
- *(NOTE: You will have to `source` this venv whenever you want to use `artifactr`)*
```sh
python -m venv .venv # <-- creates a venv in a new folder named ".venv"
source .venv/bin/activate # <-- use this python env for all `python` execution
pip install artifactr
```
## Quickstart
```sh
art vault init ~/my-new-vault --name favorites # Scaffold a new vault
art store ~/repos/existing-project # Store any existing skills/commands/agents from a repo (AKA artifacts) into your new vault
art tool select claude # Select your choice of agentic tool (opencode/claude/codex || or add your own with `art tool add`)
art project import ~/repos/new-project --link # Import artifacts into your project, using symlinks for automatic syncing
art config import --link # Import artifacts into your selected tool's global configs, using symlinks for automatic syncing
```
---
# The Essentials
## Creating a New Vault
Create a vault with:
```sh
art vault init /path/to/your/vault
```
If the target folder does not exist, the program will make it for you.
If you have no other vault at this point, the program will automatically assign this vault as your default.
## Adding an Existing Vault
If you have an existing vault, simply run:
```sh
# (Adding a shorthand name is convenient!)
art vault add /path/to/your/vault --name favs
```
If you have no other vault at this point, the program will automatically assign this vault as your default.
## Select Your Preferences
Choose your preferred default vault with:
```sh
art vault select {vault_name || path/to/your/vault} # You can use a vault's name or its filepath for vault operations
```
Choose your preferred coding agent tool with:
```sh
art tool select {tool_name} # You can add support for more tools with `art tool add` (see below)
```
## `spelunk` For and `store` Artifacts
You can discover new artifacts with `art spelunk`.
Without any target specified, it will list artifacts found in all tool-specific global config directories.
When given a target directory, it will search for config folders of all supported tools (`.claude`, `.opencode`, `.agents`, etc.) and the artifacts they contain.
`spelunk` can also explore the contents of other vaults.
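Conceptually, this discovery step is a directory walk for known tool config folders. A minimal stdlib sketch of the idea (hypothetical helper, not artifactr's actual code; the folder set is assumed from the list above):

```python
import os

# Config folders that supported tools use (assumed from the docs above)
TOOL_DIRS = {".claude", ".opencode", ".agents"}

def find_config_dirs(root):
    """Walk `root` and return paths of tool config folders found."""
    hits = []
    for dirpath, dirnames, _ in os.walk(root):
        for d in list(dirnames):
            if d in TOOL_DIRS:
                hits.append(os.path.join(dirpath, d))
                dirnames.remove(d)  # don't descend into the config dir itself
    return hits
```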
You can `art store` what you discover into your vault(s). If a target artifact's name conflicts with an existing one in your vault(s), a confirmation prompt will appear:

## Removing Artifacts
You can remove any number of artifacts from a vault with a single `art rm` command:

## Importing Artifacts
To get your skills, agents, and commands *into* a project or tool config, you import them:
```sh
art project import {optional/target/path} # Without a target path, defaults to the current working directory.
```
```sh
art config import # Same syntax, different location. Defaults to the global user config location of your currently selected coding agent. You can specify specific tools to import to with the `--tools` flag.
```
### Syncing Artifacts Automatically
If you'd like your skill/command/agent definitions to update automatically while editing the contents of a vault, just use the `--link` / `-l` flag when importing artifacts into a project or tool config.
Rather than a direct copy-paste, this creates a symbolic link between vault content and the import target. Whenever you update an artifact in your vault, those changes will instantly be reflected wherever you imported them using `--link`.
```sh
art project import --link
```
```sh
art config import --link
```
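Under the hood, `--link` is ordinary filesystem symlinking, which is why edits propagate instantly. A standalone Python illustration (not artifactr code):

```python
import os
import tempfile

tmp = tempfile.mkdtemp()
vault_file = os.path.join(tmp, "vault_skill.md")
imported = os.path.join(tmp, "project_skill.md")

with open(vault_file, "w") as f:
    f.write("v1")

# What --link does, conceptually: the import target is a symlink
os.symlink(vault_file, imported)

# Edit the artifact in the vault...
with open(vault_file, "w") as f:
    f.write("v2")

# ...and the imported copy reflects the change immediately
with open(imported) as f:
    synced = f.read()  # "v2"
```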
## Editing Artifacts - `art edit`
`art edit <artifact-type> <name>`
Opens artifacts in your default editor. Uses environment variables to determine your editor. Includes some common editors as fallbacks in case none are defined:
`$VISUAL > $EDITOR > nano > nvim > vim > vi ( > edit > notepad.exe | Windows Only )`
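The resolution chain above can be sketched in a few lines (illustrative only; `resolve_editor` and the injected `available` check are hypothetical names, not artifactr's API):

```python
import shutil

def resolve_editor(env, available=shutil.which):
    """Pick an editor: $VISUAL, then $EDITOR, then common fallbacks on PATH."""
    for var in ("VISUAL", "EDITOR"):
        if env.get(var):
            return env[var]
    for candidate in ("nano", "nvim", "vim", "vi"):  # fallback order per the docs
        if available(candidate):
            return candidate
    return None
```

For example, `resolve_editor({"EDITOR": "micro"})` returns `"micro"` without consulting the fallback list.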
`art edit skill <name>` opens a skill's `SKILL.md` file. Support for managing all files/folders within a skill is a high priority on the roadmap.

There are convenient short-hand aliases for many commands:
```sh
art ed s my-skill # --> art edit skill my-skill
art ed a my-agent # --> art edit agent my-agent
art ed c my-command # --> art edit command my-command
```
## Creating Artifacts - `art create`
Create skill, agent, and command definitions with `art create`. You can add as much custom frontmatter as you'd like with the `--field` / `-D` flag.
Skills, agents, and commands all require a **description** field in frontmatter. Therefore, `--description` / `-d` is required.
Unless the `--vault` flag is used, artifacts are created in the currently selected default vault.
### Creating Skills
Skills are directories with a `SKILL.md` and optional misc. files/folders for added context. Currently, `artifactr` only supports creating a skill's folder and its `SKILL.md`. Support for managing all files within a single skill is a high priority item on the roadmap.
Skills require a name and description. `artifactr` will always name the folder and fill the `name` field of `SKILL.md` with the same provided argument, unless overridden with the `--name` / `-n` flag.
The `--name` / `-n` flag only applies to frontmatter in `SKILL.md`. This is usually what coding agents parse as the de facto name for slash commands.
#### Verbose Syntax
```sh
# "code-review" will be the name of the skill's folder AND its "name" frontmatter unless overridden with --name / -n
art create skill code-review \
--description 'For assessing code quality & syntax' \
--field disable-model-invocation=true \
--field allowed-tools='Read, Grep' \
--field argument-hint=[filename] \
--content 'To the best of your ability, grade my code like the most insufferable pedant you can imagine.' \
--vault 'claude-vault' \
--name 'nitpick'
# --name overrides the frontmatter name field. Usually this is what defines a skill's slash command name in tools like claude
```
#### Short-hand Syntax
You can also use short-hand aliases the command provides:
```sh
art cr s code-review \
-n 'nitpick' \
-d 'For assessing code quality & syntax' \
-D disable-model-invocation=true \
-D allowed-tools='Read, Grep' \
-D argument-hint=[filename] \
-c 'To the best of your ability, grade my code like the most insufferable pedant you can imagine.' \
--vault 'claude-vault'
```
#### Output
Both of the above commands will produce a `SKILL.md` file at `/claude-vault/skills/code-review/SKILL.md` that reads:
```md
---
name: nitpick
description: For assessing code quality & syntax
disable-model-invocation: 'true'
allowed-tools: Read, Grep
argument-hint: '[filename]'
---
To the best of your ability, grade my code like the most insufferable pedant you can imagine.
```
### Creating Commands
Uses the same syntax as above. Commands do not require a `name` field in frontmatter. The name of the file is the name of the slash command. Thus, the `--name` flag is not supported. If desired, you can still add a `name` field with the `--field` / `-D` flag.
```sh
art create command <name> --description 'a command' --content 'crucial context'
# or the shorthand:
art cr c <name> -d 'a command' -c 'crucial context'
```
### Creating Agents
Uses the same syntax as above. The `name` field in frontmatter is supported by agent definitions, thus the `--name` / `-n` flag is supported.
```sh
art create agent <name> --description 'an agent' --content 'vital context'
# or the shorthand:
art cr a <name> -d 'an agent' -c 'vital context'
```
## Managing Tools
Currently, `artifactr` supports claude-code, codex, and opencode, and you can easily add support for as many other tools as you'd like. Note that Codex itself only supports skills, not markdown-based agent and command definitions.
### Adding a Custom Tool
You can add any number of custom coding agent tools to `artifactr`.
By default, these will be added to the global `artifactr` config at `~/.config/artifactr/config.yaml` on Linux.
🌠 You can configure tools per-vault using the `--vault` flag. These are added to the config at `/your-vault/vault.yaml`.
```sh
art tool add cursor \
--skills .cursor/skills \
--commands .cursor/commands \
--agents .cursor/agents \
--global-skills ~/.cursor/skills \
--global-commands ~/.cursor/commands \
--global-agents ~/.cursor/agents \
--alias cur,c
# You can add any number of aliases to a custom tool.
# --vault your-vault <-- add vault-scoped tool support to `/your-vault/vault.yaml`
```
## Vault Structure
```
vault/
├── vault.yaml # Optional: portable vault name and tool definitions
├── skills/
│ └── skill-name/
│ ├── SKILL.md
│ └── (supporting files...)
├── agents/
│ └── agent-name.md
└── commands/
└── command-name.md
```
The optional `vault.yaml` file stores a portable vault name and vault-scoped tool definitions. When present, its name takes precedence over the name stored in the global config. Tool definitions in `vault.yaml` travel with the vault when shared.
Artifacts are copied (or symlinked with `--link`) to tool-specific directories in the target repo (e.g., `.claude/skills/`, `.opencode/agents/`) and automatically excluded from git tracking.
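"Excluded from git tracking" here means the repo-local `.git/info/exclude` file, so your tracked `.gitignore` is left alone. A rough sketch of adding a pattern idempotently (hypothetical helper, not artifactr's exact implementation):

```python
import os

def add_git_exclude(repo_root, pattern):
    """Append `pattern` to .git/info/exclude unless it's already listed."""
    exclude = os.path.join(repo_root, ".git", "info", "exclude")
    os.makedirs(os.path.dirname(exclude), exist_ok=True)
    lines = []
    if os.path.exists(exclude):
        with open(exclude) as f:
            lines = f.read().splitlines()
    if pattern in lines:
        return False  # already excluded, nothing to do
    with open(exclude, "a") as f:
        f.write(pattern + "\n")
    return True
```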
---
# Extended Usage
### Managing Vaults
```sh
# Initialize a new vault (creates directory with skills/, agents/, commands/ scaffolding)
art vault init ~/my-vault
# Initialize with a name and set as default
art vault init ~/my-vault --name=favorites --set-default
# Add an existing directory as a vault (auto-named as llm-vault-1, llm-vault-2, etc.)
art vault add ~/my-vault
# Add a vault with an explicit name
art vault add ~/my-vault --name=favorites
# Add and set as default in one step
art vault add ~/my-vault --name=favorites --set-default
# Name or rename an existing vault
art vault name ~/my-vault favorites
# List all vaults
art vault list
# List all vaults with full artifact hierarchy
art vault list --all
# Set default vault (by name or path)
art vault select favorites
# Remove a vault (by name or path)
art vault rm favorites
# Copy a vault (copies skills/, commands/, agents/, vault.yaml only)
art vault copy my-vault new-vault-name
# Copy a vault to an explicit path
art vault copy my-vault /path/to/new-vault
# Copy everything except .git/
art vault copy my-vault new-vault-name --all
# Alias: vault cp
art vault cp my-vault new-vault-name
```
Vaults added without `--name` are automatically assigned names using the `llm-vault-N` pattern. Vault names can be used in place of full directory paths in any command that accepts a vault identifier, including `--vault` on `art import`.
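The `llm-vault-N` auto-naming could be implemented along these lines (a sketch with hypothetical names, not artifactr's code):

```python
import re

def next_vault_name(existing):
    """Return the next free name in the llm-vault-N series."""
    taken = set()
    for name in existing:
        m = re.fullmatch(r"llm-vault-(\d+)", name)
        if m:
            taken.add(int(m.group(1)))
    n = 1
    while n in taken:
        n += 1
    return f"llm-vault-{n}"
```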
### Vault Export & Import
Bundle vaults into portable `.zip` archives to share or back up your artifact library:
```sh
# Export a vault to a zip archive
art vault export my-vault /path/to/bundle.zip
# Export multiple vaults (comma-separated)
art vault export vault-1,vault-2 /path/to/bundle.zip
# Export all registered vaults
art vault export --all /path/to/bundle.zip
# Export vaults matching a glob pattern
art vault export "claude-*" /path/to/bundle.zip
# Import from a zip archive (lists vaults and prompts for confirmation)
art vault import bundle.zip
# Import with auto-confirmation
art vault import bundle.zip --yes
# Import to a specific destination directory
art vault import bundle.zip /path/to/dest/
```
Archives include a `manifest.yaml` at the root. On import, vaults are extracted as subdirectories of the destination and automatically registered in your config.
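Since bundles carry a root-level `manifest.yaml`, a vault archive can be told apart from a plain artifact zip by inspecting root entries. A sketch of that heuristic (assumed, not artifactr's exact logic):

```python
import zipfile

def classify_zip(path):
    """Guess archive type from its root entries."""
    with zipfile.ZipFile(path) as zf:
        roots = {name.split("/", 1)[0] for name in zf.namelist()}
    if "manifest.yaml" in roots:
        return "vault-bundle"
    return "single-artifact"
```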
### Shell Navigation
`art nav` lets you jump to vault directories from your shell. The `art shell setup` command installs a shell wrapper that makes navigation work like `cd`:
```sh
# Install the shell wrapper into your rc file (bash/zsh/fish/etc.)
art shell setup
# Skip confirmation prompts
art shell setup --yes
```
After sourcing your rc file, `art nav` changes your working directory directly:
```sh
# Navigate to the default vault root
art nav
# Navigate to a type subdirectory in the default vault
art nav skills
art nav commands
art nav agents
# Navigate to a specific vault
art nav my-vault
# Navigate to a type subdirectory within a named vault
art nav my-vault/skills
```
Output mode flags (useful before installing the wrapper, or for scripting):
```sh
art nav skills --print # print resolved path to stdout
art nav skills --spawn # open a subshell in the target directory
art nav skills --window # open a new terminal window at the target
```
Set `nav_mode: wrapper` (or `spawn`/`window`/`print`) in your config to make a mode the default when no flag is passed.
### Managing Tools
```sh
# List supported tools — shows artifact support, source, and aliases
art tool list
# Set default tool (defaults to opencode)
art tool select claude-code
# Aliases work anywhere a tool name is accepted
art tool select claude # resolves to claude-code
# Show details for a specific tool
art tool show codex
# Add a custom tool (e.g., Cursor IDE)
art tool add cursor --skills .cursor/skills --commands .cursor/commands \
--global-skills '$HOME/.cursor/skills' --alias cur
# Add a tool scoped to a specific vault
art tool add my-tool --skills .my-tool/skills --vault=team-vault
# Remove a custom tool
art tool rm cursor
```
### Listing Vault Contents
```sh
# List all artifacts in the default vault
art list
# List from a specific vault
art list --vault=favorites
# Filter by type
art list -S # skills only
art list -S -C # skills and commands
art list -S foo,bar # only skills named foo and bar
```
### Removing Vault Artifacts
```sh
# Remove an artifact from the default vault
art rm my-skill
# Use type prefix for disambiguation
art rm skills/my-skill
# Remove without confirmation
art rm my-skill -f
```
### Importing Artifacts (Project)
```sh
# Import into current directory (cwd default)
art proj import
# Import into a specific project
art proj import ~/repos/my-project
# Import from a specific vault
art proj import -V favorites
# Import from multiple vaults
art proj import -V vault1,vault2
# Import for specific tools
art proj import --tools=claude-code,opencode
# Import only skills
art proj import -S
# Symlink instead of copying
art proj import --link
# Import only specific artifacts by name
art proj import --artifacts=helping-hand,code-review
# Don't add artifacts to .git/info/exclude
art proj import --no-exclude
```
### Managing Project Artifacts
```sh
# List imported artifacts in current project
art proj list
# Remove specific imported artifacts
art proj rm my-skill
# Wipe all imported artifacts
art proj wipe
# Filter by type or tool
art proj list -S --tools=claude-code
art proj wipe -S -f
```
### Importing Artifacts (Global Config)
```sh
# Import into global config directories
art conf import
# Import from a specific vault
art conf import -V favorites
# Import only skills for claude-code
art conf import --tools=claude-code -S
```
### Managing Global Config Artifacts
```sh
# List globally imported artifacts
art conf list
# Remove globally imported artifacts
art conf rm my-skill
# Wipe all globally imported artifacts
art conf wipe -f
```
Imported artifacts are tracked in `.art-cache/imported` (project) and `~/.config/artifactr/.art-cache-global/imported` (global), recording which vault and tool each artifact came from.
### Linking & Unlinking Artifacts
Replace imported copies with symlinks pointing to vault sources, or convert symlinks back to copies:
```sh
# Link specific artifacts in current project
art proj link my-skill
# Link all imported artifacts
art proj link --all
# Link with auto-backup on diff (no prompts)
art proj link --all --force
# Link only artifacts from a specific vault
art proj link --all -V favorites
# Unlink (replace symlinks with copies)
art proj unlink my-skill
art proj unlink --all
# Same for global config artifacts
art conf link --all
art conf unlink my-skill
```
### Import with Linking
Use `--link` to create symlinks instead of copies during import. The import summary shows the link state:
```sh
art proj import --link
# Output:
# claude-code:
# skills: 3 (linked)
# commands: 1 (linked)
art proj import
# Output:
# claude-code:
# skills: 3 (copied)
```
### Multi-Vault `-V` Flag
Most commands support targeting multiple vaults with `-V`. Values can be comma-separated or the flag can be repeated:
```sh
# Import from multiple vaults
art proj import -V vault1,vault2
art proj import -V vault1 -V vault2 # equivalent
# List artifacts from multiple vaults (adds VAULT column)
art ls -V vault1,vault2
# Store into multiple vaults
art store ./dir -V vault1,vault2
# Create in multiple vaults
art create skill my-skill -d "desc" -V vault1,vault2
# Filter project/config lists by vault
art proj ls -V favorites
art conf ls -V vault1,vault2
# Filter removal by vault
art proj rm my-skill -V favorites
art proj wipe -V favorites
```
### Tool Discovery with `--all`
```sh
# List tools from all catalog vaults and global config
art tool ls --all
# Show all tool definitions from all sources
art tool info --all
# --all and -V are mutually exclusive
art tool ls --all -V favorites # Error
```
### Link State Display
`art proj ls` and `art conf ls` display a STATE column showing whether each artifact is linked or copied:
```
NAME TYPE TOOL VAULT STATE
helping-hand → skill claude-code favorites linked
code-review skill claude-code favorites copied
deploy-prod ⇒ command opencode favorites hardlinked
```
Arrow indicators: `→` = symlinked, `⇒` = hardlinked. Legacy entries (no suffix) display as `copied`.
### Creating Artifacts
Create skills, commands, and agents directly from the CLI:
```sh
# Create a skill (directory-based, with SKILL.md)
art create skill my-skill -d "A helpful skill" -c "Instructions here"
# Create a command (flat .md file, filename is the name)
art create command deploy-prod -d "Run production deployment"
# Create an agent (flat .md file, name in frontmatter)
art create agent code-reviewer -d "Reviews code changes"
# Add extra frontmatter fields
art create agent my-agent -d "desc" -D model=sonnet -D version=1.0
# Create in a specific vault
art create skill my-skill -d "desc" --vault=favorites
# Create in the current project instead of a vault
art create command my-cmd -d "desc" --here
art create skill my-skill -d "desc" --here --tools=claude-code,opencode
```
All artifact types require `--description` / `-d`. Skills are created as directories with a `SKILL.md` file; commands and agents are created as flat `.md` files.
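The two on-disk shapes can be sketched as follows (a minimal illustration assumed from the descriptions above; `create_artifact` is a hypothetical name, and real artifactr frontmatter handling is surely richer):

```python
import os

def create_artifact(vault, kind, name, description, content=""):
    """Skills become directories with SKILL.md; commands/agents are flat .md files."""
    if kind == "skill":
        path = os.path.join(vault, "skills", name, "SKILL.md")
        body = f"---\nname: {name}\ndescription: {description}\n---\n{content}\n"
    else:  # "command" or "agent"
        path = os.path.join(vault, kind + "s", name + ".md")
        body = f"---\ndescription: {description}\n---\n{content}\n"
    os.makedirs(os.path.dirname(path), exist_ok=True)
    with open(path, "w") as f:
        f.write(body)
    return path
```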
### Editing Artifacts
Open an artifact in your terminal editor:
```sh
# Edit by name (type auto-detected from default vault)
art edit my-skill
# Edit with explicit type/name specifier
art edit skill/my-skill
art edit command/deploy-prod
# Old two-positional form still works
art edit skill my-skill
art edit command deploy-prod
art edit agent code-reviewer
# Edit in a specific vault
art edit skill my-skill --vault=favorites
# Edit a project-local artifact
art edit skill my-skill --here
# Open a specific file within a skill
art edit skill/my-skill/refs/examples.md
# Create and edit a new file within a skill
art edit my-skill --new-file refs/examples.md
# Open main SKILL.md directly, skipping the file picker
art edit my-skill --main
# Force interactive file picker (even if only SKILL.md exists)
art edit my-skill --interactive
```
When a skill has multiple files, `art edit` shows a numbered file picker unless `-m` is passed. The picker supports creating new files (`n`), importing a file from your filesystem (`i`), and deleting files (`d`). The picker is skipped when stdin is not a TTY (piped input).
The editor is resolved from `$VISUAL`, then `$EDITOR`, then the first available of `nano`, `nvim`, `vim`, `vi` (and `edit` or `notepad.exe` on Windows).
Names can be resolved by YAML frontmatter `name:` field if no filename/dirname match is found — this applies to `art edit`, `art copy`, and all other artifact name-matching commands.
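That frontmatter fallback boils down to reading the `name:` key out of a `---`-delimited block. A stdlib-only sketch of the idea (the real implementation likely parses full YAML):

```python
def frontmatter_name(text):
    """Return the name: value from a '---'-delimited frontmatter block, if any."""
    lines = text.splitlines()
    if not lines or lines[0].strip() != "---":
        return None  # no frontmatter at all
    for line in lines[1:]:
        if line.strip() == "---":
            break  # end of frontmatter, no name found
        if line.startswith("name:"):
            return line.split(":", 1)[1].strip().strip("'\"")
    return None
```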
### Listing Artifact Files
View the files within a directory-based artifact (skill):
```sh
# List files in a skill from the default vault
art ls my-skill
# Disambiguate with type prefix
art ls skill/my-skill
# List from a specific vault
art ls my-skill --vault=favorites
```
Output labels `SKILL.md` as the main file and lists all other files with their relative paths.
### Reading Artifact Content
Print the content of an artifact's primary file to stdout:
```sh
# Print SKILL.md for a skill
art cat my-skill
# Print a command or agent file
art cat command/deploy-prod
art cat agent/code-reviewer
# Print a specific file within a skill
art cat skill/my-skill/refs/examples.md
# Read from a specific vault
art cat my-skill --vault=favorites
# Read a project-local artifact
art cat my-skill --here
```
### Inspecting Artifacts
Display the YAML frontmatter and file tree of an artifact:
```sh
# Inspect a skill
art inspect my-skill
# Inspect with type prefix
art inspect command/deploy-prod
# Inspect from a specific vault
art inspect my-skill --vault=favorites
```
Example output for a skill:
```
Frontmatter:
name: my-skill
description: A helpful skill
version: 1.0
Files:
SKILL.md (main)
refs/examples.md
refs/context.md
```
### Exporting Artifacts
Package an artifact as a portable `.zip` archive:
```sh
# Export a skill (default: <cwd>/<name>.zip)
art export my-skill
# Export with explicit output path
art export my-skill -o ~/backups/my-skill.zip
# Export with type prefix
art export skill/my-skill
art export command/deploy-prod
# Export from a specific vault
art export my-skill --vault=favorites
```
Skill zips contain all files under `<artifact-name>/` at the archive root. Command and agent zips contain `<artifact-name>/<artifact-name>.md`. The exported zip can be re-imported with `art store ./my-skill.zip`.
### Copying Artifacts
Copy artifacts within a vault or across vaults with `art copy` (alias: `art cp`):
```sh
# Copy an artifact to another vault (trailing slash = destination vault)
art copy my-skill vault-2/
# Disambiguate by type
art copy skill/my-skill vault-2/
# Source from a specific vault
art copy vault-1/my-skill vault-2/
# Fully-qualified source (vault/type/name)
art copy vault-1/skill/my-skill vault-2/
# Duplicate within the same vault (new name, not a registered vault)
art copy my-skill my-skill-v2
# Copy to another vault with an explicit destination name
art copy my-skill vault-2/my-skill-renamed
# Copy all artifacts to another vault
art copy '*' vault-2/
# Copy all skills to another vault
art copy 'skills/*' vault-2/
# Copy agents matching a pattern
art copy 'agents/*-runner' vault-6/
```
The artifact type travels with the copy — a skill always lands in `skills/`, a command in `commands/`, etc. Frontmatter `name:` fields are used as a fallback when no filename match exists.
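The `'*'` and `'agents/*-runner'` forms above are ordinary glob matching against artifact names, as in this sketch using `fnmatch` (hypothetical helper name):

```python
from fnmatch import fnmatch

def select_artifacts(names, pattern):
    """Filter artifact names by a glob pattern ('*' selects everything)."""
    return [n for n in names if fnmatch(n, pattern)]
```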
### Discovering Artifacts
Scan directories, vaults, or global configs for existing artifacts:
```sh
# Discover artifacts in a project
art spelunk ~/repos/my-project
# Spelunk global config (default when no target)
art spelunk
# Explicit global flag
art spelunk -g
# Filter by tool or type
art spelunk ~/repos/my-project --tools=claude-code
art spelunk -S
# Include DESCRIPTION column
art spelunk --verbose
# Structured output formats
art spelunk --output=json
art spelunk --output=yaml
art spelunk --output=md
```
Example output:
```
NAME TYPE LOCATION
helping-hand (imported: favs) skill skills/helping-hand
utility-tool skill skills/utility-tool
reviewer agent agents/reviewer.md
deploy command .opencode/commands/deploy.md
```
The `(imported: ...)` marker appears when an artifact was previously imported via `art import`, showing which vault it came from. Global spelunk paths are collapsed to `~/`. Use `--verbose` to add a DESCRIPTION column.
### Storing Artifacts
Collect artifacts from a project directory and store them into a vault:
```sh
# Store artifacts into default vault
art store ~/repos/my-project
# Store into a specific vault
art store ~/repos/my-project --vault=favorites
# Import from a zip archive (single artifact or vault bundle)
art store ./my-skill.zip
art store ./bundle.zip --vault=favorites
```
You'll be presented with a numbered list of discovered artifacts and can select which ones to store using individual numbers (`1`), ranges (`1-3`), comma-separated (`1,3,5`), combinations (`1,3-5`), or `all`.
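That selection grammar takes only a few lines to parse; a sketch (hypothetical helper, not artifactr's code):

```python
def parse_selection(expr, count):
    """Expand a '1,3-5' style selection into a sorted list of 1-based indices."""
    if expr.strip().lower() == "all":
        return list(range(1, count + 1))
    picked = set()
    for part in expr.split(","):
        part = part.strip()
        if "-" in part:
            lo, hi = part.split("-", 1)
            picked.update(range(int(lo), int(hi) + 1))
        elif part:
            picked.add(int(part))
    # Drop anything outside the displayed list
    return sorted(i for i in picked if 1 <= i <= count)
```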
When the input path ends in `.zip`, artifactr auto-detects the archive type:
- **Single artifact** (skill directory or `.md` file at zip root): stored directly, no selection modal.
- **Vault bundle**: extracted and passed through the normal selection flow.
| text/markdown | null | null | null | null | MIT | null | [] | [] | null | null | >=3.10 | [] | [] | [] | [
"pyyaml>=6.0"
] | [] | [] | [] | [
"Homepage, https://github.com/reg1z/artifactr",
"Issues, https://github.com/reg1z/artifactr/issues"
] | twine/6.2.0 CPython/3.14.3 | 2026-02-20T02:14:02.963479 | artifactr-0.4.0.tar.gz | 120,104 | 20/30/067b16e6a25dcfb54286b46e0d13df340deae4e7d15627f5d431f5eae48c/artifactr-0.4.0.tar.gz | source | sdist | null | false | a60c67d0858652b240f783a5978c52b1 | 5403382a69fcff59cd17e59ce3264f047d9d9ae07c8bb4721008645be3bf3eb1 | 2030067b16e6a25dcfb54286b46e0d13df340deae4e7d15627f5d431f5eae48c | null | [
"LICENSE"
] | 270 |
2.4 | pytest-drill-sergeant | 0.4.0 | A pytest plugin that enforces test quality standards through automatic marker detection and AAA structure validation | # Pytest Drill Sergeant
[](https://github.com/jeffrichley/pytest-drill-sergeant/actions)
[](https://codecov.io/gh/jeffrichley/pytest-drill-sergeant)
[](https://badge.fury.io/py/pytest-drill-sergeant)
You want elite tests?
Then stop writing lazy chaos and start writing disciplined test code.
This plugin is your no-excuses drill instructor for:
- marker classification
- AAA structure (`Arrange` / `Act` / `Assert`)
- file-length discipline
It does not care about feelings. It cares about standards.
## Mission Profile
### 1. Marker Rule
What it does:
- validates useful test markers
- auto-detects marker intent from path (for example `tests/unit/` -> `@pytest.mark.unit`)
- can **write** auto-detected markers into source files (`drill_sergeant_write_markers = true`, default)
- supports custom directory-to-marker mappings
- reads marker declarations from both `pytest.ini` and `pyproject.toml` (`[tool.pytest.ini_options]`)
What you do:
- put tests in intentional directories
- keep marker declarations real
- stop shipping unclassified tests
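Directory-based marker detection can be pictured roughly like this — an illustrative sketch under assumed default mappings, not the plugin's actual code:

```python
from pathlib import PurePosixPath

# Assumed default directory-to-marker table; the real mappings are
# configurable via [tool.drill_sergeant.marker_mappings].
DEFAULT_MAPPINGS = {"unit": "unit", "integration": "integration", "e2e": "e2e"}


def detect_marker(test_path: str, mappings: dict = DEFAULT_MAPPINGS):
    """Return the marker implied by a test file's directory, if any."""
    for part in PurePosixPath(test_path).parts:
        if part in mappings:
            return mappings[part]
    return None
```

With a custom mapping such as `{"contract": "api"}`, a test under `tests/contract/` would resolve to `@pytest.mark.api`.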
### 2. AAA Rule
What it does:
- enforces explicit AAA section comments in test bodies
- supports two modes:
- `basic`: section presence required
- `strict`: presence + order + no duplicate section declarations
- supports built-in/custom synonyms when enabled
Accepted grammar:
```text
# <Keyword> - <description>
```
Examples:
```python
# Arrange - create test fixture
# Act - call the function
# Assert - verify expected behavior
```
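The accepted grammar could be matched with a regex along these lines — a sketch, not the plugin's exact pattern; the keyword set and the minimum description length are assumptions (both are configurable):

```python
import re

# Sketch of the `# <Keyword> - <description>` grammar; assumes the default
# keywords and a minimum description length of 3 characters.
AAA_COMMENT = re.compile(r"^\s*#\s*(Arrange|Act|Assert)\s*-\s*(.{3,})$")


def parse_aaa_comment(line: str):
    """Return (keyword, description) if the line is an AAA section comment."""
    m = AAA_COMMENT.match(line)
    return (m.group(1), m.group(2).strip()) if m else None
```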
### 3. File-Length Rule
What it does:
- enforces max test file length
- supports modes:
- `error`: fail
- `warn`: report only
- `off`: disabled
- supports path exclusions and inline ignore token
Inline ignore example:
```python
# drill-sergeant: file-length ignore
```
Use this sparingly. If you need it everywhere, the file should be split.
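The rule's core behavior amounts to a line count with an inline escape hatch. A minimal sketch (the real rule also supports `warn`/`off` modes and path exclusions):

```python
def check_file_length(source: str, max_lines: int = 350,
                      ignore_token: str = "drill-sergeant: file-length ignore") -> bool:
    """Return True if the file passes the length rule (sketch only)."""
    lines = source.splitlines()
    if any(ignore_token in line for line in lines):
        return True  # inline ignore short-circuits the check
    return len(lines) <= max_lines
```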
## Quick Start
### Install
```bash
uv add --group dev pytest-drill-sergeant
```
### Minimal `pytest.ini`
```ini
[pytest]
addopts = -p drill_sergeant
markers =
    unit: Unit tests
    integration: Integration tests
    e2e: End-to-end tests
drill_sergeant_enabled = true
drill_sergeant_enforce_markers = true
drill_sergeant_enforce_aaa = true
drill_sergeant_enforce_file_length = true
drill_sergeant_marker_severity = error
drill_sergeant_aaa_severity = error
drill_sergeant_aaa_mode = basic
drill_sergeant_auto_detect_markers = true
drill_sergeant_max_file_length = 350
```
### Minimal Passing Test
```python
import pytest


@pytest.mark.unit
def test_addition() -> None:
    # Arrange - prepare operands
    left = 2
    right = 3

    # Act - run operation
    total = left + right

    # Assert - validate result
    assert total == 5
```
### Running by marker
With markers (and optional `drill_sergeant_auto_detect_markers = true`), use pytest’s `-m` to select tests:
- **Only unit tests:**
`pytest -m unit`
- **Everything except e2e:**
`pytest -m "not e2e"`
- **Unit or integration:**
`pytest -m "unit or integration"`
Register every marker you use in `[pytest]` `markers` (as in the minimal config above) so pytest accepts `-m` and doesn’t warn about unknown markers.
## Configuration
Precedence (highest to lowest):
1. environment variables
2. pytest config (`pytest.ini` or `[tool.pytest.ini_options]`)
3. `[tool.drill_sergeant]` in `pyproject.toml`
4. plugin defaults
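The precedence order amounts to a layered lookup; a minimal sketch (illustrative only — it assumes environment variables are named `DRILL_SERGEANT_<OPTION>`, as listed below):

```python
import os


def resolve_option(name: str, pytest_config: dict, pyproject: dict, defaults: dict):
    """Resolve a setting using the documented precedence (sketch only)."""
    # 1. Environment variable wins if set
    env_value = os.environ.get(f"DRILL_SERGEANT_{name.upper()}")
    if env_value is not None:
        return env_value
    # 2-4. Then pytest config, then pyproject, then plugin defaults
    for layer in (pytest_config, pyproject, defaults):
        if name in layer:
            return layer[name]
    return None
```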
### `pyproject.toml` Example
```toml
[tool.drill_sergeant]
enabled = true
enforce_markers = true
enforce_aaa = true
aaa_mode = "basic"
marker_severity = "error"
aaa_severity = "error"
enforce_file_length = true
file_length_mode = "error"
file_length_exclude = ["tests/legacy/*"]
file_length_inline_ignore = true
file_length_inline_ignore_token = "drill-sergeant: file-length ignore"
auto_detect_markers = true
write_markers = true
min_description_length = 3
max_file_length = 350
aaa_synonyms_enabled = false
aaa_builtin_synonyms = true
[tool.drill_sergeant.marker_mappings]
contract = "api"
smoke = "integration"
```
### Environment Variables
- `DRILL_SERGEANT_ENABLED`
- `DRILL_SERGEANT_ENFORCE_MARKERS`
- `DRILL_SERGEANT_ENFORCE_AAA`
- `DRILL_SERGEANT_MARKER_SEVERITY` (`error` | `warn` | `off`)
- `DRILL_SERGEANT_AAA_SEVERITY` (`error` | `warn` | `off`)
- `DRILL_SERGEANT_AAA_MODE`
- `DRILL_SERGEANT_ENFORCE_FILE_LENGTH`
- `DRILL_SERGEANT_FILE_LENGTH_MODE`
- `DRILL_SERGEANT_FILE_LENGTH_EXCLUDE`
- `DRILL_SERGEANT_FILE_LENGTH_INLINE_IGNORE`
- `DRILL_SERGEANT_FILE_LENGTH_INLINE_IGNORE_TOKEN`
- `DRILL_SERGEANT_AUTO_DETECT_MARKERS`
- `DRILL_SERGEANT_WRITE_MARKERS` (write auto-detected markers into source files)
- `DRILL_SERGEANT_MIN_DESCRIPTION_LENGTH`
- `DRILL_SERGEANT_MAX_FILE_LENGTH`
- `DRILL_SERGEANT_MARKER_MAPPINGS`
- `DRILL_SERGEANT_DEBUG_CONFIG`
- `DRILL_SERGEANT_DEBUG_TELEMETRY` (prints per-validator timing summary at session end)
## Return Type Policy
Return-type annotation enforcement is intentionally handled by static tooling, not runtime test hooks.
Use Ruff `ANN` rules:
```bash
uv run ruff check --fix src tests
```
## CI Contract
Required gates:
- `uv run pytest -q`
- `uv run ruff check src tests`
- `uv run mypy src tests --config-file=pyproject.toml`
Local parity command:
```bash
just verify
```
Coverage is informational, not a required merge gate.
## Release Workflow (Current)
Release flow is split by responsibility:
1. Conventional commits land on `main`.
2. `Release Please` auto-opens/updates release PRs.
3. Review and merge the generated release PR.
4. Confirm GitHub Release + tag were created.
5. Run `Production Release (PyPI)` workflow manually with `release_tag`.
Operational rules:
- do not hand-edit versions or changelog outside the release PR
- keep `release-please` automated for version/changelog hygiene
- do not auto-publish to PyPI from release events
- keep release notes derived from conventional commits
## Failure Intel
If a rule fails, use:
- `docs/Failure-Catalog.md` for failure-to-fix mapping
- `docs/Decision-Log.md` for scope decisions and rationale
- `docs/Release-Checklist.md` for release execution
- `STABILIZATION_PLAN.md` for phased recovery status
## Development
Common commands:
```bash
just verify
just test
just lint
just type-check
```
## Release Flow
Use `docs/Release-Checklist.md` as the canonical release runbook.
## Final Word
The point is not ceremony.
The point is predictable, readable, maintainable test code under pressure.
Do the basics with discipline and your test suite will stop betraying you.
## License
MIT
| text/markdown | null | Jeff Richley <jeffrichley@gmail.com> | null | null | null | pytest, testing, quality, standards, AAA, markers | [
"Development Status :: 4 - Beta",
"Framework :: Pytest",
"Intended Audience :: Developers",
"Operating System :: OS Independent",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.12",
"Topic :: Software Development :: Testing"
] | [] | null | null | >=3.12 | [] | [] | [] | [
"pytest>=7.0.0",
"pytest>=7.0.0; extra == \"dev\"",
"pytest-cov>=4.0.0; extra == \"dev\"",
"mypy>=1.0.0; extra == \"dev\"",
"ruff>=0.1.0; extra == \"dev\"",
"black>=23.0.0; extra == \"dev\"",
"pre-commit>=3.0.0; extra == \"dev\"",
"hypothesis>=6.136.6; extra == \"test\"",
"pytest-benchmark>=4.0.0; extra == \"test\"",
"psutil>=5.9.0; extra == \"test\"",
"mkdocs>=1.5.0; extra == \"docs\"",
"mkdocs-material>=9.6.17; extra == \"docs\"",
"mkdocstrings[python]>=0.30.0; extra == \"docs\"",
"mike>=2.1.3; extra == \"docs\"",
"mkdocs-git-revision-date-localized-plugin>=1.4.7; extra == \"docs\"",
"mkdocs-mermaid2-plugin>=1.2.1; extra == \"docs\"",
"mypy; extra == \"typecheck\"",
"pip-audit; extra == \"security\""
] | [] | [] | [] | [
"Homepage, https://github.com/jeffrichley/pytest-drill-sergeant",
"Repository, https://github.com/jeffrichley/pytest-drill-sergeant.git",
"Issues, https://github.com/jeffrichley/pytest-drill-sergeant/issues"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-20T02:13:55.545480 | pytest_drill_sergeant-0.4.0.tar.gz | 25,335 | d8/1a/14b52124ca26fd66bb53e739c902b4c55e100683701a45ffd7e627795c62/pytest_drill_sergeant-0.4.0.tar.gz | source | sdist | null | false | 8af321896d11817504c93b1d2683a637 | 4e16c29d06650f39ea40b510dcdbcf5955990e267356f1f2becb8c8a9e86920b | d81a14b52124ca26fd66bb53e739c902b4c55e100683701a45ffd7e627795c62 | MIT | [
"LICENSE"
] | 266 |
2.4 | gracedb-sdk | 1.0.0a1 | REST API SDK for GraceDB | # gracedb-sdk
A modern, performant REST API client for GraceDB, based on [httpx](https://www.python-httpx.org). For documentation, see https://gracedb-sdk.readthedocs.io.
## Benchmarks
Time in seconds to perform common tasks, compared to [lscsoft/ligo-gracedb](https://git.ligo.org/lscsoft/gracedb-client). Measured on a residential Internet connection on the US East Coast.

| text/markdown | null | Leo Singer <leo.p.singer@nasa.gov> | null | null | null | null | [
"Development Status :: 4 - Beta",
"Environment :: Console",
"Intended Audience :: Science/Research",
"Operating System :: POSIX",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.9",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: 3.13",
"Topic :: Internet",
"Topic :: Scientific/Engineering :: Astronomy",
"Topic :: Scientific/Engineering :: Physics"
] | [] | null | null | >=3.9 | [] | [] | [] | [
"httpx-gracedb>=0.0.3",
"pytest; extra == \"test\"",
"pytest-asyncio; extra == \"test\""
] | [] | [] | [] | [
"source, https://git.ligo.org/emfollow/gracedb-sdk",
"Bug Tracker, https://git.ligo.org/emfollow/gracedb-sdk/issues",
"Documentation, https://gracedb-sdk.readthedocs.io/",
"Source Code, https://git.ligo.org/emfollow/gracedb-sdk"
] | twine/6.2.0 CPython/3.14.3 | 2026-02-20T02:13:49.723213 | gracedb_sdk-1.0.0a1.tar.gz | 39,327 | 4a/5d/3abbb7f07784ac44254bb7629036a63b20effdc67ec239c73546a7ae85ba/gracedb_sdk-1.0.0a1.tar.gz | source | sdist | null | false | a92622d854ccb528e93c0c0966045c63 | 9e77fe7cb8971d2636e13853ca4fd0e5a06239709e742ab99ac806d5ffaaf52c | 4a5d3abbb7f07784ac44254bb7629036a63b20effdc67ec239c73546a7ae85ba | GPL-3.0-or-later | [
"LICENSE.md"
] | 240 |
2.4 | airbyte-agent-github | 0.18.109 | Airbyte Github Connector for AI platforms | # Github
The Github agent connector is a Python package that equips AI agents to interact with Github through strongly typed, well-documented tools. It's ready to use directly in your Python app, in an agent framework, or exposed through an MCP.
GitHub is a platform for version control and collaborative software development
using Git. This connector provides access to repositories, branches, commits, issues,
pull requests, reviews, comments, releases, organizations, teams, and users for
development workflow analysis and project management insights.
## Example questions
The Github connector is optimized to handle prompts like these.
- Show me all open issues in my repositories this month
- List the top 5 repositories I've starred recently
- Analyze the commit trends in my main project over the last quarter
- Find all pull requests created in the past two weeks
- Search for repositories related to machine learning in my organizations
- Compare the number of contributors across my different team projects
- Identify the most active branches in my main repository
- Get details about the most recent releases in my organization
- List all milestones for our current development sprint
- Show me insights about pull request review patterns in our team
## Unsupported questions
The Github connector isn't currently able to handle prompts like these.
- Create a new issue in the project repository
- Update the status of this pull request
- Delete an old branch from the repository
- Schedule a team review for this code
- Assign a new label to this issue
## Installation
```bash
uv pip install airbyte-agent-github
```
## Usage
Connectors can run in open source or hosted mode.
### Open source
In open source mode, you provide API credentials directly to the connector.
```python
from airbyte_agent_github import GithubConnector
from airbyte_agent_github.models import GithubPersonalAccessTokenAuthConfig
connector = GithubConnector(
auth_config=GithubPersonalAccessTokenAuthConfig(
token="<GitHub personal access token (fine-grained or classic)>"
)
)
@agent.tool_plain # assumes you're using Pydantic AI
@GithubConnector.tool_utils
async def github_execute(entity: str, action: str, params: dict | None = None):
return await connector.execute(entity, action, params or {})
```
### Hosted
In hosted mode, API credentials are stored securely in Airbyte Cloud. You provide your Airbyte credentials instead.
If your Airbyte client can access multiple organizations, also set `organization_id`.
This example assumes you've already authenticated your connector with Airbyte. See [Authentication](AUTH.md) to learn more about authenticating. If you need a step-by-step guide, see the [hosted execution tutorial](https://docs.airbyte.com/ai-agents/quickstarts/tutorial-hosted).
```python
from airbyte_agent_github import GithubConnector, AirbyteAuthConfig
connector = GithubConnector(
auth_config=AirbyteAuthConfig(
customer_name="<your_customer_name>",
organization_id="<your_organization_id>", # Optional for multi-org clients
airbyte_client_id="<your-client-id>",
airbyte_client_secret="<your-client-secret>"
)
)
@agent.tool_plain # assumes you're using Pydantic AI
@GithubConnector.tool_utils
async def github_execute(entity: str, action: str, params: dict | None = None):
return await connector.execute(entity, action, params or {})
```
## Full documentation
### Entities and actions
This connector supports the following entities and actions. For more details, see this connector's [full reference documentation](REFERENCE.md).
| Entity | Actions |
|--------|---------|
| Repositories | [Get](./REFERENCE.md#repositories-get), [List](./REFERENCE.md#repositories-list), [API Search](./REFERENCE.md#repositories-api_search) |
| Org Repositories | [List](./REFERENCE.md#org-repositories-list) |
| Branches | [List](./REFERENCE.md#branches-list), [Get](./REFERENCE.md#branches-get) |
| Commits | [List](./REFERENCE.md#commits-list), [Get](./REFERENCE.md#commits-get) |
| Releases | [List](./REFERENCE.md#releases-list), [Get](./REFERENCE.md#releases-get) |
| Issues | [List](./REFERENCE.md#issues-list), [Get](./REFERENCE.md#issues-get), [API Search](./REFERENCE.md#issues-api_search) |
| Pull Requests | [List](./REFERENCE.md#pull-requests-list), [Get](./REFERENCE.md#pull-requests-get), [API Search](./REFERENCE.md#pull-requests-api_search) |
| Reviews | [List](./REFERENCE.md#reviews-list) |
| Comments | [List](./REFERENCE.md#comments-list), [Get](./REFERENCE.md#comments-get) |
| Pr Comments | [List](./REFERENCE.md#pr-comments-list), [Get](./REFERENCE.md#pr-comments-get) |
| Labels | [List](./REFERENCE.md#labels-list), [Get](./REFERENCE.md#labels-get) |
| Milestones | [List](./REFERENCE.md#milestones-list), [Get](./REFERENCE.md#milestones-get) |
| Organizations | [Get](./REFERENCE.md#organizations-get), [List](./REFERENCE.md#organizations-list) |
| Users | [Get](./REFERENCE.md#users-get), [List](./REFERENCE.md#users-list), [API Search](./REFERENCE.md#users-api_search) |
| Teams | [List](./REFERENCE.md#teams-list), [Get](./REFERENCE.md#teams-get) |
| Tags | [List](./REFERENCE.md#tags-list), [Get](./REFERENCE.md#tags-get) |
| Stargazers | [List](./REFERENCE.md#stargazers-list) |
| Viewer | [Get](./REFERENCE.md#viewer-get) |
| Viewer Repositories | [List](./REFERENCE.md#viewer-repositories-list) |
| Projects | [List](./REFERENCE.md#projects-list), [Get](./REFERENCE.md#projects-get) |
| Project Items | [List](./REFERENCE.md#project-items-list) |
| File Content | [Get](./REFERENCE.md#file-content-get) |
| Directory Content | [List](./REFERENCE.md#directory-content-list) |
### Authentication
For all authentication options, see the connector's [authentication documentation](AUTH.md).
### Github API docs
See the official [Github API reference](https://docs.github.com/en/rest).
## Version information
- **Package version:** 0.18.109
- **Connector version:** 0.1.15
- **Generated with Connector SDK commit SHA:** cb4380e76ac5cbc67b9089f94522be1bbe9f8d73
- **Changelog:** [View changelog](https://github.com/airbytehq/airbyte-agent-connectors/blob/main/connectors/github/CHANGELOG.md) | text/markdown | null | Airbyte <contact@airbyte.io> | null | null | Elastic-2.0 | agent, ai, airbyte, api, connector, data-integration, github, llm, mcp | [
"Development Status :: 3 - Alpha",
"Intended Audience :: Developers",
"License :: Other/Proprietary License",
"Operating System :: OS Independent",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.13",
"Topic :: Software Development :: Libraries :: Application Frameworks",
"Topic :: Software Development :: Libraries :: Python Modules",
"Typing :: Typed"
] | [] | null | null | >=3.13 | [] | [] | [] | [
"httpx>=0.24.0",
"jinja2>=3.0.0",
"jsonpath-ng>=1.6.1",
"jsonref>=1.1.0",
"opentelemetry-api>=1.37.0",
"opentelemetry-sdk>=1.37.0",
"pydantic>=2.0.0",
"python-dotenv>=1.0.0",
"pyyaml>=6.0",
"segment-analytics-python>=2.2.0"
] | [] | [] | [] | [
"Homepage, https://github.com/airbytehq/airbyte-agent-connectors",
"Documentation, https://docs.airbyte.com/ai-agents/",
"Repository, https://github.com/airbytehq/airbyte-agent-connectors",
"Issues, https://github.com/airbytehq/airbyte-agent-connectors/issues"
] | twine/6.2.0 CPython/3.13.11 | 2026-02-20T02:13:48.894864 | airbyte_agent_github-0.18.109.tar.gz | 141,893 | f2/47/497b94211e7c0458a80902d515d61a33f28cbff648f43bde397d1bfc7cc9/airbyte_agent_github-0.18.109.tar.gz | source | sdist | null | false | 07e5e4a1712c82e9fb2dcef3815c0318 | fca3834c64c7f3065c9c7fbc043cd92567616add16ecd71f2f78727a76d37590 | f247497b94211e7c0458a80902d515d61a33f28cbff648f43bde397d1bfc7cc9 | null | [] | 347 |
2.4 | airbyte-agent-orb | 0.1.36 | Airbyte Orb Connector for AI platforms | # Orb
The Orb agent connector is a Python package that equips AI agents to interact with Orb through strongly typed, well-documented tools. It's ready to use directly in your Python app, in an agent framework, or exposed through an MCP.
Orb is a usage-based billing platform that enables businesses to implement flexible pricing models,
track customer usage, and manage subscriptions. This connector provides access to customers,
subscriptions, plans, and invoices for billing analytics and customer management.
## Example questions
The Orb connector is optimized to handle prompts like these.
- Show me all my customers in Orb
- List all active subscriptions
- What plans are available?
- Show me recent invoices
- Show me details for a recent customer
- What is the status of a recent subscription?
- Show me the pricing details for a plan
- Confirm the Stripe ID linked to a customer
- What is the payment provider ID for a customer?
- List all invoices for a specific customer
- List all subscriptions for customer XYZ
- Show all active subscriptions for a specific customer
- What subscriptions does customer {external_customer_id} have?
- Pull all invoices from the last month
- Show invoices created after {date}
- List all paid invoices for customer {customer_id}
- What invoices are in draft status?
- Show all issued invoices for subscription {subscription_id}
## Unsupported questions
The Orb connector isn't currently able to handle prompts like these.
- Create a new customer in Orb
- Update subscription details
- Delete a customer record
- Send an invoice to a customer
- Filter subscriptions by plan name (must filter client-side after listing)
- Pull customers billed for specific products (must examine invoice line_items client-side)
## Installation
```bash
uv pip install airbyte-agent-orb
```
## Usage
Connectors can run in open source or hosted mode.
### Open source
In open source mode, you provide API credentials directly to the connector.
```python
from airbyte_agent_orb import OrbConnector
from airbyte_agent_orb.models import OrbAuthConfig
connector = OrbConnector(
auth_config=OrbAuthConfig(
api_key="<Your Orb API key>"
)
)
@agent.tool_plain # assumes you're using Pydantic AI
@OrbConnector.tool_utils
async def orb_execute(entity: str, action: str, params: dict | None = None):
return await connector.execute(entity, action, params or {})
```
### Hosted
In hosted mode, API credentials are stored securely in Airbyte Cloud. You provide your Airbyte credentials instead.
If your Airbyte client can access multiple organizations, also set `organization_id`.
This example assumes you've already authenticated your connector with Airbyte. See [Authentication](AUTH.md) to learn more about authenticating. If you need a step-by-step guide, see the [hosted execution tutorial](https://docs.airbyte.com/ai-agents/quickstarts/tutorial-hosted).
```python
from airbyte_agent_orb import OrbConnector, AirbyteAuthConfig
connector = OrbConnector(
auth_config=AirbyteAuthConfig(
customer_name="<your_customer_name>",
organization_id="<your_organization_id>", # Optional for multi-org clients
airbyte_client_id="<your-client-id>",
airbyte_client_secret="<your-client-secret>"
)
)
@agent.tool_plain # assumes you're using Pydantic AI
@OrbConnector.tool_utils
async def orb_execute(entity: str, action: str, params: dict | None = None):
return await connector.execute(entity, action, params or {})
```
## Full documentation
### Entities and actions
This connector supports the following entities and actions. For more details, see this connector's [full reference documentation](REFERENCE.md).
| Entity | Actions |
|--------|---------|
| Customers | [List](./REFERENCE.md#customers-list), [Get](./REFERENCE.md#customers-get), [Search](./REFERENCE.md#customers-search) |
| Subscriptions | [List](./REFERENCE.md#subscriptions-list), [Get](./REFERENCE.md#subscriptions-get), [Search](./REFERENCE.md#subscriptions-search) |
| Plans | [List](./REFERENCE.md#plans-list), [Get](./REFERENCE.md#plans-get), [Search](./REFERENCE.md#plans-search) |
| Invoices | [List](./REFERENCE.md#invoices-list), [Get](./REFERENCE.md#invoices-get), [Search](./REFERENCE.md#invoices-search) |
### Authentication
For all authentication options, see the connector's [authentication documentation](AUTH.md).
### Orb API docs
See the official [Orb API reference](https://docs.withorb.com/api-reference).
## Version information
- **Package version:** 0.1.36
- **Connector version:** 0.1.4
- **Generated with Connector SDK commit SHA:** cb4380e76ac5cbc67b9089f94522be1bbe9f8d73
- **Changelog:** [View changelog](https://github.com/airbytehq/airbyte-agent-connectors/blob/main/connectors/orb/CHANGELOG.md) | text/markdown | null | Airbyte <contact@airbyte.io> | null | null | Elastic-2.0 | agent, ai, airbyte, api, connector, data-integration, llm, mcp, orb | [
"Development Status :: 3 - Alpha",
"Intended Audience :: Developers",
"License :: Other/Proprietary License",
"Operating System :: OS Independent",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.13",
"Topic :: Software Development :: Libraries :: Application Frameworks",
"Topic :: Software Development :: Libraries :: Python Modules",
"Typing :: Typed"
] | [] | null | null | >=3.13 | [] | [] | [] | [
"httpx>=0.24.0",
"jinja2>=3.0.0",
"jsonpath-ng>=1.6.1",
"jsonref>=1.1.0",
"opentelemetry-api>=1.37.0",
"opentelemetry-sdk>=1.37.0",
"pydantic>=2.0.0",
"python-dotenv>=1.0.0",
"pyyaml>=6.0",
"segment-analytics-python>=2.2.0"
] | [] | [] | [] | [
"Homepage, https://github.com/airbytehq/airbyte-agent-connectors",
"Documentation, https://docs.airbyte.com/ai-agents/",
"Repository, https://github.com/airbytehq/airbyte-agent-connectors",
"Issues, https://github.com/airbytehq/airbyte-agent-connectors/issues"
] | twine/6.2.0 CPython/3.13.11 | 2026-02-20T02:13:43.564955 | airbyte_agent_orb-0.1.36.tar.gz | 139,250 | 1f/dc/78f4e2042b222a96964e9183d840639f737b497bf70b9d75f8eb412455c7/airbyte_agent_orb-0.1.36.tar.gz | source | sdist | null | false | 8100a24fbfc518b792a8b5d857e77c8e | b4d2f46261822b34df129fdab04f3701b6e4f0a70497d0125d90b4e93aad17d1 | 1fdc78f4e2042b222a96964e9183d840639f737b497bf70b9d75f8eb412455c7 | null | [] | 339 |
2.4 | airbyte-agent-airtable | 0.1.35 | Airbyte Airtable Connector for AI platforms | # Airtable
The Airtable agent connector is a Python package that equips AI agents to interact with Airtable through strongly typed, well-documented tools. It's ready to use directly in your Python app, in an agent framework, or exposed through an MCP.
Airtable is a cloud-based platform that combines the simplicity of a spreadsheet with the
power of a database. This connector provides access to bases, tables, and records for
data analysis and workflow automation.
## Example questions
The Airtable connector is optimized to handle prompts like these.
- List all my Airtable bases
- What tables are in my first base?
- Show me the schema for tables in a base
- List records from a table in my base
- Show me recent records from a table
- What fields are in a table?
- List records where Status is 'Done' in table tblXXX
- Find records created last week in table tblXXX
- Show me records updated in the last 30 days in base appXXX
## Unsupported questions
The Airtable connector isn't currently able to handle prompts like these.
- Create a new record in Airtable
- Update a record in Airtable
- Delete a record from Airtable
- Create a new table
- Modify table schema
## Installation
```bash
uv pip install airbyte-agent-airtable
```
## Usage
Connectors can run in open source or hosted mode.
### Open source
In open source mode, you provide API credentials directly to the connector.
```python
from airbyte_agent_airtable import AirtableConnector
from airbyte_agent_airtable.models import AirtableAuthConfig
connector = AirtableConnector(
auth_config=AirtableAuthConfig(
personal_access_token="<Airtable Personal Access Token. See https://airtable.com/developers/web/guides/personal-access-tokens>"
)
)
@agent.tool_plain # assumes you're using Pydantic AI
@AirtableConnector.tool_utils
async def airtable_execute(entity: str, action: str, params: dict | None = None):
return await connector.execute(entity, action, params or {})
```
### Hosted
In hosted mode, API credentials are stored securely in Airbyte Cloud. You provide your Airbyte credentials instead.
If your Airbyte client can access multiple organizations, also set `organization_id`.
This example assumes you've already authenticated your connector with Airbyte. See [Authentication](AUTH.md) to learn more about authenticating. If you need a step-by-step guide, see the [hosted execution tutorial](https://docs.airbyte.com/ai-agents/quickstarts/tutorial-hosted).
```python
from airbyte_agent_airtable import AirtableConnector, AirbyteAuthConfig
connector = AirtableConnector(
auth_config=AirbyteAuthConfig(
customer_name="<your_customer_name>",
organization_id="<your_organization_id>", # Optional for multi-org clients
airbyte_client_id="<your-client-id>",
airbyte_client_secret="<your-client-secret>"
)
)
@agent.tool_plain # assumes you're using Pydantic AI
@AirtableConnector.tool_utils
async def airtable_execute(entity: str, action: str, params: dict | None = None):
return await connector.execute(entity, action, params or {})
```
## Full documentation
### Entities and actions
This connector supports the following entities and actions. For more details, see this connector's [full reference documentation](REFERENCE.md).
| Entity | Actions |
|--------|---------|
| Bases | [List](./REFERENCE.md#bases-list), [Search](./REFERENCE.md#bases-search) |
| Tables | [List](./REFERENCE.md#tables-list), [Search](./REFERENCE.md#tables-search) |
| Records | [List](./REFERENCE.md#records-list), [Get](./REFERENCE.md#records-get) |
### Authentication
For all authentication options, see the connector's [authentication documentation](AUTH.md).
### Airtable API docs
See the official [Airtable API reference](https://airtable.com/developers/web/api/introduction).
## Version information
- **Package version:** 0.1.35
- **Connector version:** 1.0.5
- **Generated with Connector SDK commit SHA:** cb4380e76ac5cbc67b9089f94522be1bbe9f8d73
- **Changelog:** [View changelog](https://github.com/airbytehq/airbyte-agent-connectors/blob/main/connectors/airtable/CHANGELOG.md) | text/markdown | null | Airbyte <contact@airbyte.io> | null | null | Elastic-2.0 | agent, ai, airbyte, airtable, api, connector, data-integration, llm, mcp | [
"Development Status :: 3 - Alpha",
"Intended Audience :: Developers",
"License :: Other/Proprietary License",
"Operating System :: OS Independent",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.13",
"Topic :: Software Development :: Libraries :: Application Frameworks",
"Topic :: Software Development :: Libraries :: Python Modules",
"Typing :: Typed"
] | [] | null | null | >=3.13 | [] | [] | [] | [
"httpx>=0.24.0",
"jinja2>=3.0.0",
"jsonpath-ng>=1.6.1",
"jsonref>=1.1.0",
"opentelemetry-api>=1.37.0",
"opentelemetry-sdk>=1.37.0",
"pydantic>=2.0.0",
"python-dotenv>=1.0.0",
"pyyaml>=6.0",
"segment-analytics-python>=2.2.0"
] | [] | [] | [] | [
"Homepage, https://github.com/airbytehq/airbyte-agent-connectors",
"Documentation, https://docs.airbyte.com/ai-agents/",
"Repository, https://github.com/airbytehq/airbyte-agent-connectors",
"Issues, https://github.com/airbytehq/airbyte-agent-connectors/issues"
] | twine/6.2.0 CPython/3.13.11 | 2026-02-20T02:13:42.609528 | airbyte_agent_airtable-0.1.35.tar.gz | 127,709 | 7d/b0/c7524fa9b9e0bb390083a5bbade713b7be4cd72702230169c43f30258870/airbyte_agent_airtable-0.1.35.tar.gz | source | sdist | null | false | 255862ae7bb665702352b1674c18f01f | a027da58bb56d0b82ce86dcfe202a229223345794c81d858543138549e89ed5a | 7db0c7524fa9b9e0bb390083a5bbade713b7be4cd72702230169c43f30258870 | null | [] | 335 |
2.4 | expectllm | 0.1.0 | Expect scripts for LLM conversations | # expectllm
[](https://badge.fury.io/py/expectllm)
[](https://www.python.org/downloads/)
[](https://opensource.org/licenses/MIT)
[](https://github.com/entropyvector/expectllm/actions/workflows/ci.yml)
> Expect scripts for LLM conversations.
**The insight:** Agents are just expect scripts. Send a message, expect a pattern, branch on the match. That's it.
<p align="center">
<img src="demo/demo.gif" alt="expectllm demo" width="600">
</p>
## Table of Contents
- [Why expectllm?](#why-expectllm)
- [Requirements](#requirements)
- [Installation](#installation)
- [Quick Start](#quick-start)
- [Before/After](#beforeafter)
- [Features](#features)
- [API Reference](#api-reference)
- [Examples](#examples)
- [Environment Variables](#environment-variables)
- [Prompting Tips](#prompting-tips)
- [Important Notes](#important-notes)
- [Contributing](#contributing)
- [License](#license)
## Why expectllm?
1. **Zero boilerplate** — No chains, no schemas, no output parsers. Just send and expect.
2. **Pattern-first design** — Use regex patterns you already know. The LLM adapts to your format, not the other way around.
3. **Conversation as state machine** — Each expect is a transition. Branch on match, retry on failure, build complex flows naturally.
4. **Provider agnostic** — Works with OpenAI, Anthropic, or any compatible API. Switch providers without changing code.
5. **Debuggable** — Every step is visible. No hidden prompts, no magic. Print the history, see what happened.
6. **Lightweight** — Single file, minimal dependencies. No framework lock-in.
7. **Unix philosophy** — Do one thing well. Compose with your existing tools.
## Requirements
- Python 3.9+
- API key for at least one provider (OpenAI or Anthropic)
## Installation
```bash
# Core only (no providers)
pip install expectllm

# With a specific provider (quotes keep the extras marker safe in shells like zsh)
pip install "expectllm[openai]"
pip install "expectllm[anthropic]"

# All providers
pip install "expectllm[all]"
```
## Quick Start
```python
from expectllm import Conversation
c = Conversation()
c.send("Is Python dynamically typed? Reply YES or NO")
if c.expect_yesno():
    print("Correct!")
```
That's it. Send a message, expect a pattern, branch on the result.
## Before/After
<p align="center">
<img src="demo/before_after.png" alt="Before/After comparison" width="700">
</p>
**Traditional approach (20+ lines):**
```python
from langchain.chat_models import ChatOpenAI
from langchain.output_parsers import PydanticOutputParser
from pydantic import BaseModel, Field
# ... setup, chains, error handling ...
```
**expectllm (4 lines):**
```python
from expectllm import Conversation
c = Conversation()
c.send("Review this code. Reply with SEVERITY: low/medium/high",
       expect=r"SEVERITY: (low|medium|high)")
severity = c.match.group(1)
```
## Features
### Pattern-to-Prompt
When you pass an `expect` pattern to `send()`, expectllm automatically appends format instructions:
```python
# You write:
c.send("Is this secure?", expect=r"^(YES|NO)$")
# expectllm sends:
# "Is this secure?
#
# Reply with exactly 'YES' or 'NO'."
```
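The idea can be sketched offline in plain Python. This is an illustrative approximation only, not expectllm's actual instruction generator, and the hint wording is assumed:

```python
def with_format_hint(message: str, pattern: str) -> str:
    # Illustrative sketch: derive a human-readable format hint from a regex
    # pattern. expectllm's real instruction generation is internal and may
    # phrase things differently.
    known_hints = {
        r"^(YES|NO)$": "Reply with exactly 'YES' or 'NO'.",
    }
    hint = known_hints.get(pattern, f"Reply in a format matching: {pattern}")
    return f"{message}\n\n{hint}"

prompt = with_format_hint("Is this secure?", r"^(YES|NO)$")
# prompt now ends with the YES/NO instruction shown above
```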
### Expect Templates
No regex needed for common patterns:
```python
# Extract JSON
c.send("Return user data as JSON")
data = c.expect_json()  # Returns dict

# Extract numbers
c.send("How many items?")
count = c.expect_number()  # Returns int

# Yes/No questions
c.send("Is this valid? Reply YES or NO")
if c.expect_yesno():  # Returns bool
    print("Valid!")

# Multiple choice
c.send("Classify as: bug, feature, or docs")
category = c.expect_choice(["bug", "feature", "docs"])

# Extract code
c.send("Write a Python function")
code = c.expect_code("python")  # Returns code string
```
## API Reference
### Conversation
```python
c = Conversation(
    model="claude-sonnet-4-20250514",  # Optional, auto-detected from env
    system_prompt="You are helpful",   # Optional
    timeout=60,                        # Default timeout in seconds
    provider="anthropic",              # Optional: "openai" or "anthropic"
    max_history=20                     # Optional: limit conversation history
)
```
### Methods
| Method | Returns | Description |
|--------|---------|-------------|
| `send(message, expect=None)` | `str` | Send message, optionally validate pattern |
| `expect(pattern, flags=0)` | `bool` | Match pattern in last response |
| `send_expect(message, pattern)` | `Match` | Send and expect in one call |
| `expect_json()` | `dict` | Extract JSON from response |
| `expect_number()` | `int` | Extract first number |
| `expect_choice(choices)` | `str` | Match one of the choices |
| `expect_yesno()` | `bool` | Match yes/no variants |
| `expect_code(language=None)` | `str` | Extract code block |
| `clear_history()` | `None` | Clear conversation history |
### Properties
| Property | Type | Description |
|----------|------|-------------|
| `match` | `Match \| None` | Last successful match object |
| `history` | `List[Dict]` | Conversation history (copy) |
| `last_response` | `str` | Most recent response |
### Exceptions
| Exception | When |
|-----------|------|
| `ExpectError` | Pattern not found in response |
| `ProviderError` | API call failed |
| `ConfigError` | Missing API key or invalid config |
## Examples
### Extract Structured Data
<p align="center">
<img src="demo/json_extract.gif" alt="JSON extraction demo" width="600">
</p>
```python
from expectllm import Conversation
c = Conversation()
c.send("Parse this: 'Meeting with John at 3pm tomorrow'")
event = c.expect_json() # {"person": "John", "time": "3pm", "date": "tomorrow"}
```
### Multi-Turn Code Review
```python
from expectllm import Conversation, ExpectError
import re
code = '''
def process_user(data):
    query = f"SELECT * FROM users WHERE id = {data['id']}"
    return db.execute(query)
'''
c = Conversation()
c.send(f"Review this code for security issues:\n```python\n{code}\n```")
c.expect(r"(found (\d+) issues|no issues found)", re.IGNORECASE)
if c.match.group(2) and int(c.match.group(2)) > 0:
    c.send("List the issues with severity ratings")
    c.expect(r"(critical|high|medium|low)", re.IGNORECASE)
    print(c.last_response)
```
### Data Extraction
```python
from expectllm import Conversation
text = "Contact John Smith at john@example.com or 555-1234"
c = Conversation()
c.send(f"""Extract contact info from:
{text}
Format:
NAME: <name>
EMAIL: <email>
PHONE: <phone>""")
c.expect(r"NAME: (.+)\nEMAIL: (.+)\nPHONE: (.+)")
print(f"Name: {c.match.group(1)}")
print(f"Email: {c.match.group(2)}")
print(f"Phone: {c.match.group(3)}")
```
### Retry Pattern
<p align="center">
<img src="demo/retry.gif" alt="Retry pattern demo" width="600">
</p>
```python
from expectllm import Conversation, ExpectError
def analyze_document(text: str, max_retries: int = 3) -> dict:
    c = Conversation(system_prompt="You are a document analyzer.")
    c.send(f"Analyze this document and extract key entities:\n\n{text}")
    for attempt in range(max_retries):
        try:
            return c.expect_json()
        except ExpectError:
            if attempt < max_retries - 1:
                c.send("Please format your response as valid JSON.")
    raise ExpectError("Failed to extract JSON after retries")
```
## Environment Variables
Set your API key:
```bash
# For Anthropic (Claude)
export ANTHROPIC_API_KEY="your-key"

# For OpenAI (GPT)
export OPENAI_API_KEY="your-key"
```
expectllm auto-detects the provider from the environment. Anthropic is preferred if both are set.
## Prompting Tips
For reliable pattern matching:
1. **Be explicit about format**: "Reply with exactly 'YES' or 'NO'"
2. **Use examples**: "Format: SCORE: 8/10"
3. **Constrain output**: "Reply with just the number, nothing else"
4. **Use code blocks**: "Put your JSON in a fenced `json` code block"
## Important Notes
**LLM Non-Determinism**: LLM outputs are inherently non-deterministic. The same prompt may produce different responses across calls. For production use:
- Use explicit format instructions (expectllm does this automatically with `expect=`)
- Implement retry logic for critical extractions
- Consider temperature=0 for more consistent outputs
- Test patterns against varied response formats
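The last point is cheap to automate: exercise your patterns offline against response variants you've observed, using nothing but the standard `re` module (the sample replies below are invented):

```python
import re

# Pattern meant for "SCORE: 8/10"-style replies; the variants below are
# made-up examples of how a model might phrase the same answer.
pattern = re.compile(r"SCORE:\s*(\d+)\s*/\s*10", re.IGNORECASE)

samples = [
    "SCORE: 8/10",
    "score: 9 / 10",
    "Sure! SCORE: 7/10, because the code handles edge cases well.",
]
scores = [int(pattern.search(s).group(1)) for s in samples]
# scores == [8, 9, 7]
```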
**Not a Testing Framework**: expectllm is for *scripting conversations*, not for unit testing LLM outputs. For assertions about LLM behavior, combine with your existing test framework.
## Contributing
See [CONTRIBUTING.md](CONTRIBUTING.md) for development setup and guidelines.
## License
MIT License - see [LICENSE](LICENSE) for details.
| text/markdown | null | Zaur Jafarov <expectllm@gmail.com> | null | null | MIT | agent, ai, anthropic, automation, conversation, expect, llm, openai | [
"Development Status :: 4 - Beta",
"Intended Audience :: Developers",
"License :: OSI Approved :: MIT License",
"Operating System :: OS Independent",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.9",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: 3.13",
"Programming Language :: Python :: 3.14",
"Topic :: Scientific/Engineering :: Artificial Intelligence",
"Topic :: Software Development :: Libraries :: Python Modules",
"Typing :: Typed"
] | [] | null | null | >=3.9 | [] | [] | [] | [
"anthropic>=0.18.0; extra == \"all\"",
"openai>=1.0.0; extra == \"all\"",
"anthropic>=0.18.0; extra == \"anthropic\"",
"anthropic>=0.18.0; extra == \"dev\"",
"build>=1.0; extra == \"dev\"",
"mypy>=1.0; extra == \"dev\"",
"openai>=1.0.0; extra == \"dev\"",
"pre-commit>=3.0; extra == \"dev\"",
"pytest-cov>=4.0; extra == \"dev\"",
"pytest>=7.0; extra == \"dev\"",
"ruff>=0.1.0; extra == \"dev\"",
"tox>=4.0; extra == \"dev\"",
"twine>=4.0; extra == \"dev\"",
"openai>=1.0.0; extra == \"openai\""
] | [] | [] | [] | [
"Homepage, https://github.com/entropyvector/expectllm",
"Documentation, https://github.com/entropyvector/expectllm#readme",
"Repository, https://github.com/entropyvector/expectllm.git",
"Issues, https://github.com/entropyvector/expectllm/issues",
"Changelog, https://github.com/entropyvector/expectllm/blob/main/CHANGELOG.md"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-20T02:13:41.419796 | expectllm-0.1.0.tar.gz | 9,775 | 80/59/efec393132d45e49e958eab0f00b42bd9f6be0ba1221a18f322353cc5444/expectllm-0.1.0.tar.gz | source | sdist | null | false | d6bdbcdebac85ebe397e471c2b7caee1 | aadae0b047eda8334b0fec6f9bd329b784e08c6d7c5739d7dce5713385985d5b | 8059efec393132d45e49e958eab0f00b42bd9f6be0ba1221a18f322353cc5444 | null | [
"LICENSE"
] | 276 |
2.4 | airbyte-agent-jira | 0.1.97 | Airbyte Jira Connector for AI platforms | # Jira
The Jira agent connector is a Python package that equips AI agents to interact with Jira through strongly typed, well-documented tools. It's ready to use directly in your Python app, in an agent framework, or exposed through an MCP.
Connector for Jira API
## Example questions
The Jira connector is optimized to handle prompts like these.
- Show me all open issues in my Jira instance
- List recent issues created in the last 7 days
- List all projects in my Jira instance
- Show me details for the most recently updated issue
- List all users in my Jira instance
- Show me comments on the most recent issue
- Show me worklogs from the last 7 days
- Assign a recent issue to a teammate
- Unassign a recent issue
- Create a new task called 'Sample task' in a project
- Create a bug with high priority
- Update the summary of a recent issue to 'Updated summary'
- Change the priority of a recent issue to high
- Add a comment to a recent issue saying 'Please investigate'
- Update my most recent comment
- Delete a test issue
- Remove my most recent comment
- What issues are assigned to \{team_member\} this week?
- Find all high priority bugs in our current sprint
- Show me overdue issues across all projects
- What projects have the most issues?
- Search for users named \{user_name\}
## Unsupported questions
The Jira connector isn't currently able to handle prompts like these.
- Log time on \{issue_key\}
- Transition \{issue_key\} to Done
## Installation
```bash
uv pip install airbyte-agent-jira
```
## Usage
Connectors can run in open source or hosted mode.
### Open source
In open source mode, you provide API credentials directly to the connector.
```python
from airbyte_agent_jira import JiraConnector
from airbyte_agent_jira.models import JiraAuthConfig
connector = JiraConnector(
    auth_config=JiraAuthConfig(
        username="<Your Atlassian account email address>",
        password="<Your Jira API token from https://id.atlassian.com/manage-profile/security/api-tokens>"
    )
)
@agent.tool_plain # assumes you're using Pydantic AI
@JiraConnector.tool_utils
async def jira_execute(entity: str, action: str, params: dict | None = None):
    return await connector.execute(entity, action, params or {})
```
### Hosted
In hosted mode, API credentials are stored securely in Airbyte Cloud. You provide your Airbyte credentials instead.
If your Airbyte client can access multiple organizations, also set `organization_id`.
This example assumes you've already authenticated your connector with Airbyte. See [Authentication](AUTH.md) to learn more about authenticating. If you need a step-by-step guide, see the [hosted execution tutorial](https://docs.airbyte.com/ai-agents/quickstarts/tutorial-hosted).
```python
from airbyte_agent_jira import JiraConnector, AirbyteAuthConfig
connector = JiraConnector(
    auth_config=AirbyteAuthConfig(
        customer_name="<your_customer_name>",
        organization_id="<your_organization_id>",  # Optional for multi-org clients
        airbyte_client_id="<your-client-id>",
        airbyte_client_secret="<your-client-secret>"
    )
)
@agent.tool_plain # assumes you're using Pydantic AI
@JiraConnector.tool_utils
async def jira_execute(entity: str, action: str, params: dict | None = None):
    return await connector.execute(entity, action, params or {})
```
## Full documentation
### Entities and actions
This connector supports the following entities and actions. For more details, see this connector's [full reference documentation](REFERENCE.md).
| Entity | Actions |
|--------|---------|
| Issues | [API Search](./REFERENCE.md#issues-api_search), [Create](./REFERENCE.md#issues-create), [Get](./REFERENCE.md#issues-get), [Update](./REFERENCE.md#issues-update), [Delete](./REFERENCE.md#issues-delete), [Search](./REFERENCE.md#issues-search) |
| Projects | [API Search](./REFERENCE.md#projects-api_search), [Get](./REFERENCE.md#projects-get), [Search](./REFERENCE.md#projects-search) |
| Users | [Get](./REFERENCE.md#users-get), [List](./REFERENCE.md#users-list), [API Search](./REFERENCE.md#users-api_search), [Search](./REFERENCE.md#users-search) |
| Issue Fields | [List](./REFERENCE.md#issue-fields-list), [API Search](./REFERENCE.md#issue-fields-api_search), [Search](./REFERENCE.md#issue-fields-search) |
| Issue Comments | [List](./REFERENCE.md#issue-comments-list), [Create](./REFERENCE.md#issue-comments-create), [Get](./REFERENCE.md#issue-comments-get), [Update](./REFERENCE.md#issue-comments-update), [Delete](./REFERENCE.md#issue-comments-delete), [Search](./REFERENCE.md#issue-comments-search) |
| Issue Worklogs | [List](./REFERENCE.md#issue-worklogs-list), [Get](./REFERENCE.md#issue-worklogs-get), [Search](./REFERENCE.md#issue-worklogs-search) |
| Issues Assignee | [Update](./REFERENCE.md#issues-assignee-update) |
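Each row in the table maps onto the same `jira_execute` tool shown earlier. The call shape below is a sketch: the entity and action names come from the table, but the `params` keys are assumptions, so check [REFERENCE.md](REFERENCE.md) for the documented parameter names.

```python
# Sketch of dispatching an entity/action pair from the table above.
# "issues" / "search" come from the table; the params dict is hypothetical.
entity, action = "issues", "search"
params = {"query": "open high-priority bugs"}

# Inside an agent tool this would become:
#     result = await jira_execute(entity, action, params)
```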
### Authentication
For all authentication options, see the connector's [authentication documentation](AUTH.md).
### Jira API docs
See the official [Jira API reference](https://developer.atlassian.com/cloud/jira/platform/rest/v3/intro/).
## Version information
- **Package version:** 0.1.97
- **Connector version:** 1.1.6
- **Generated with Connector SDK commit SHA:** cb4380e76ac5cbc67b9089f94522be1bbe9f8d73
- **Changelog:** [View changelog](https://github.com/airbytehq/airbyte-agent-connectors/blob/main/connectors/jira/CHANGELOG.md) | text/markdown | null | Airbyte <contact@airbyte.io> | null | null | Elastic-2.0 | agent, ai, airbyte, api, connector, data-integration, jira, llm, mcp | [
"Development Status :: 3 - Alpha",
"Intended Audience :: Developers",
"License :: Other/Proprietary License",
"Operating System :: OS Independent",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.13",
"Topic :: Software Development :: Libraries :: Application Frameworks",
"Topic :: Software Development :: Libraries :: Python Modules",
"Typing :: Typed"
] | [] | null | null | >=3.13 | [] | [] | [] | [
"httpx>=0.24.0",
"jinja2>=3.0.0",
"jsonpath-ng>=1.6.1",
"jsonref>=1.1.0",
"opentelemetry-api>=1.37.0",
"opentelemetry-sdk>=1.37.0",
"pydantic>=2.0.0",
"python-dotenv>=1.0.0",
"pyyaml>=6.0",
"segment-analytics-python>=2.2.0"
] | [] | [] | [] | [
"Homepage, https://github.com/airbytehq/airbyte-agent-connectors",
"Documentation, https://docs.airbyte.com/ai-agents/",
"Repository, https://github.com/airbytehq/airbyte-agent-connectors",
"Issues, https://github.com/airbytehq/airbyte-agent-connectors/issues"
] | twine/6.2.0 CPython/3.13.11 | 2026-02-20T02:13:39.918071 | airbyte_agent_jira-0.1.97.tar.gz | 165,047 | 1c/6d/0b3861d75f6fb100df95e51cc6babd3262c8ac001371cb78ef8ec2b3058e/airbyte_agent_jira-0.1.97.tar.gz | source | sdist | null | false | 19fd4e4198b9ce265c53ef98f0d71b1f | 1c337ae374e09bbe16f34b3b12d4a010b9ffc98556580c67523623ada0b8c20a | 1c6d0b3861d75f6fb100df95e51cc6babd3262c8ac001371cb78ef8ec2b3058e | null | [] | 329 |
2.4 | airbyte-agent-stripe | 0.5.104 | Airbyte Stripe Connector for AI platforms | # Stripe
The Stripe agent connector is a Python package that equips AI agents to interact with Stripe through strongly typed, well-documented tools. It's ready to use directly in your Python app, in an agent framework, or exposed through an MCP.
Stripe is a payment processing platform that enables businesses to accept payments,
manage subscriptions, and handle financial transactions. This connector provides
access to customers for payment analytics and customer management.
## Example questions
The Stripe connector is optimized to handle prompts like these.
- List customers created in the last 7 days
- Show me details for a recent customer
- List recent charges
- Show me details for a recent charge
- List recent invoices
- List active subscriptions
- Show me my top 10 customers by total revenue this month
- List all customers who have spent over $5,000 in the last quarter
- Analyze payment trends for my Stripe customers
- Identify which customers have the most consistent subscription payments
- Give me insights into my customer retention rates
- Summarize the payment history for \{customer\}
- Compare customer spending patterns from last month to this month
- Show me details about my highest-value Stripe customers
- What are the key financial insights from my customer base?
- Break down my customers by their average transaction value
## Unsupported questions
The Stripe connector isn't currently able to handle prompts like these.
- Create a new customer profile in Stripe
- Update the billing information for \{customer\}
- Delete a customer record
- Send a payment reminder to \{customer\}
- Schedule an automatic invoice for \{company\}
## Installation
```bash
uv pip install airbyte-agent-stripe
```
## Usage
Connectors can run in open source or hosted mode.
### Open source
In open source mode, you provide API credentials directly to the connector.
```python
from airbyte_agent_stripe import StripeConnector
from airbyte_agent_stripe.models import StripeAuthConfig
connector = StripeConnector(
    auth_config=StripeAuthConfig(
        api_key="<Your Stripe API Key (starts with sk_test_ or sk_live_)>"
    )
)
@agent.tool_plain # assumes you're using Pydantic AI
@StripeConnector.tool_utils
async def stripe_execute(entity: str, action: str, params: dict | None = None):
    return await connector.execute(entity, action, params or {})
```
### Hosted
In hosted mode, API credentials are stored securely in Airbyte Cloud. You provide your Airbyte credentials instead.
If your Airbyte client can access multiple organizations, also set `organization_id`.
This example assumes you've already authenticated your connector with Airbyte. See [Authentication](AUTH.md) to learn more about authenticating. If you need a step-by-step guide, see the [hosted execution tutorial](https://docs.airbyte.com/ai-agents/quickstarts/tutorial-hosted).
```python
from airbyte_agent_stripe import StripeConnector, AirbyteAuthConfig
connector = StripeConnector(
    auth_config=AirbyteAuthConfig(
        customer_name="<your_customer_name>",
        organization_id="<your_organization_id>",  # Optional for multi-org clients
        airbyte_client_id="<your-client-id>",
        airbyte_client_secret="<your-client-secret>"
    )
)
@agent.tool_plain # assumes you're using Pydantic AI
@StripeConnector.tool_utils
async def stripe_execute(entity: str, action: str, params: dict | None = None):
    return await connector.execute(entity, action, params or {})
```
## Full documentation
### Entities and actions
This connector supports the following entities and actions. For more details, see this connector's [full reference documentation](REFERENCE.md).
| Entity | Actions |
|--------|---------|
| Customers | [List](./REFERENCE.md#customers-list), [Create](./REFERENCE.md#customers-create), [Get](./REFERENCE.md#customers-get), [Update](./REFERENCE.md#customers-update), [Delete](./REFERENCE.md#customers-delete), [API Search](./REFERENCE.md#customers-api_search), [Search](./REFERENCE.md#customers-search) |
| Invoices | [List](./REFERENCE.md#invoices-list), [Get](./REFERENCE.md#invoices-get), [API Search](./REFERENCE.md#invoices-api_search), [Search](./REFERENCE.md#invoices-search) |
| Charges | [List](./REFERENCE.md#charges-list), [Get](./REFERENCE.md#charges-get), [API Search](./REFERENCE.md#charges-api_search), [Search](./REFERENCE.md#charges-search) |
| Subscriptions | [List](./REFERENCE.md#subscriptions-list), [Get](./REFERENCE.md#subscriptions-get), [API Search](./REFERENCE.md#subscriptions-api_search), [Search](./REFERENCE.md#subscriptions-search) |
| Refunds | [List](./REFERENCE.md#refunds-list), [Create](./REFERENCE.md#refunds-create), [Get](./REFERENCE.md#refunds-get), [Search](./REFERENCE.md#refunds-search) |
| Products | [List](./REFERENCE.md#products-list), [Create](./REFERENCE.md#products-create), [Get](./REFERENCE.md#products-get), [Update](./REFERENCE.md#products-update), [Delete](./REFERENCE.md#products-delete), [API Search](./REFERENCE.md#products-api_search) |
| Balance | [Get](./REFERENCE.md#balance-get) |
| Balance Transactions | [List](./REFERENCE.md#balance-transactions-list), [Get](./REFERENCE.md#balance-transactions-get) |
| Payment Intents | [List](./REFERENCE.md#payment-intents-list), [Get](./REFERENCE.md#payment-intents-get), [API Search](./REFERENCE.md#payment-intents-api_search) |
| Disputes | [List](./REFERENCE.md#disputes-list), [Get](./REFERENCE.md#disputes-get) |
| Payouts | [List](./REFERENCE.md#payouts-list), [Get](./REFERENCE.md#payouts-get) |
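As with the other examples, each row maps onto the `stripe_execute` tool. The sketch below lists recent charges; "charges"/"list" come from the table, but the parameter names are assumptions, so see [REFERENCE.md](REFERENCE.md) for the real schema.

```python
# Hypothetical call shape: the entity/action pair comes from the table above,
# while the params dict is illustrative, not the connector's documented schema.
entity, action = "charges", "list"
params = {"limit": 10}

# Inside an agent tool this would become:
#     result = await stripe_execute(entity, action, params)
```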
### Authentication
For all authentication options, see the connector's [authentication documentation](AUTH.md).
### Stripe API docs
See the official [Stripe API reference](https://docs.stripe.com/api).
## Version information
- **Package version:** 0.5.104
- **Connector version:** 0.1.9
- **Generated with Connector SDK commit SHA:** cb4380e76ac5cbc67b9089f94522be1bbe9f8d73
- **Changelog:** [View changelog](https://github.com/airbytehq/airbyte-agent-connectors/blob/main/connectors/stripe/CHANGELOG.md) | text/markdown | null | Airbyte <contact@airbyte.io> | null | null | Elastic-2.0 | agent, ai, airbyte, api, connector, data-integration, llm, mcp, stripe | [
"Development Status :: 3 - Alpha",
"Intended Audience :: Developers",
"License :: Other/Proprietary License",
"Operating System :: OS Independent",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.13",
"Topic :: Software Development :: Libraries :: Application Frameworks",
"Topic :: Software Development :: Libraries :: Python Modules",
"Typing :: Typed"
] | [] | null | null | >=3.13 | [] | [] | [] | [
"httpx>=0.24.0",
"jinja2>=3.0.0",
"jsonpath-ng>=1.6.1",
"jsonref>=1.1.0",
"opentelemetry-api>=1.37.0",
"opentelemetry-sdk>=1.37.0",
"pydantic>=2.0.0",
"python-dotenv>=1.0.0",
"pyyaml>=6.0",
"segment-analytics-python>=2.2.0"
] | [] | [] | [] | [
"Homepage, https://github.com/airbytehq/airbyte-agent-connectors",
"Documentation, https://docs.airbyte.com/ai-agents/",
"Repository, https://github.com/airbytehq/airbyte-agent-connectors",
"Issues, https://github.com/airbytehq/airbyte-agent-connectors/issues"
] | twine/6.2.0 CPython/3.13.11 | 2026-02-20T02:13:32.368192 | airbyte_agent_stripe-0.5.104.tar.gz | 283,739 | ec/35/3cedf236915ceafe79b545b87f90fcc36b748a0ec77b36eab3a6d10c6ffb/airbyte_agent_stripe-0.5.104.tar.gz | source | sdist | null | false | e66c2d424f019824c1e0dea6d1fd357d | 8402ccc8856646baa333c8868fbbe1edbef0d57528813e1dbc914ca33ac340c6 | ec353cedf236915ceafe79b545b87f90fcc36b748a0ec77b36eab3a6d10c6ffb | null | [] | 329 |
2.4 | airbyte-agent-salesforce | 0.1.99 | Airbyte Salesforce Connector for AI platforms | # Salesforce
The Salesforce agent connector is a Python package that equips AI agents to interact with Salesforce through strongly typed, well-documented tools. It's ready to use directly in your Python app, in an agent framework, or exposed through an MCP.
Salesforce is a cloud-based CRM platform that helps businesses manage customer
relationships, sales pipelines, and business operations. This connector provides
access to accounts, contacts, leads, opportunities, tasks, events, campaigns, cases,
notes, and attachments for sales analytics and customer relationship management.
## Example questions
The Salesforce connector is optimized to handle prompts like these.
- List recent contacts in my Salesforce account
- List open cases in my Salesforce account
- Show me the notes and attachments for a recent account
- Show me my top 5 opportunities this month
- List all contacts from \{company\} in the last quarter
- Search for leads in the technology sector with revenue over $10M
- What trends can you identify in my recent sales pipeline?
- Summarize the open cases for my key accounts
- Find upcoming events related to my most important opportunities
- Analyze the performance of my recent marketing campaigns
- Identify the highest value opportunities I'm currently tracking
## Unsupported questions
The Salesforce connector isn't currently able to handle prompts like these.
- Create a new lead for \{person\}
- Update the status of my sales opportunity
- Schedule a follow-up meeting with \{customer\}
- Delete this old contact record
- Send an email to all contacts in this campaign
## Installation
```bash
uv pip install airbyte-agent-salesforce
```
## Usage
Connectors can run in open source or hosted mode.
### Open source
In open source mode, you provide API credentials directly to the connector.
```python
from airbyte_agent_salesforce import SalesforceConnector
from airbyte_agent_salesforce.models import SalesforceAuthConfig
connector = SalesforceConnector(
    auth_config=SalesforceAuthConfig(
        refresh_token="<OAuth refresh token for automatic token renewal>",
        client_id="<Connected App Consumer Key>",
        client_secret="<Connected App Consumer Secret>"
    )
)
@agent.tool_plain # assumes you're using Pydantic AI
@SalesforceConnector.tool_utils
async def salesforce_execute(entity: str, action: str, params: dict | None = None):
    return await connector.execute(entity, action, params or {})
```
### Hosted
In hosted mode, API credentials are stored securely in Airbyte Cloud. You provide your Airbyte credentials instead.
If your Airbyte client can access multiple organizations, also set `organization_id`.
This example assumes you've already authenticated your connector with Airbyte. See [Authentication](AUTH.md) to learn more about authenticating. If you need a step-by-step guide, see the [hosted execution tutorial](https://docs.airbyte.com/ai-agents/quickstarts/tutorial-hosted).
```python
from airbyte_agent_salesforce import SalesforceConnector, AirbyteAuthConfig
connector = SalesforceConnector(
    auth_config=AirbyteAuthConfig(
        customer_name="<your_customer_name>",
        organization_id="<your_organization_id>",  # Optional for multi-org clients
        airbyte_client_id="<your-client-id>",
        airbyte_client_secret="<your-client-secret>"
    )
)
@agent.tool_plain # assumes you're using Pydantic AI
@SalesforceConnector.tool_utils
async def salesforce_execute(entity: str, action: str, params: dict | None = None):
    return await connector.execute(entity, action, params or {})
```
## Full documentation
### Entities and actions
This connector supports the following entities and actions. For more details, see this connector's [full reference documentation](REFERENCE.md).
| Entity | Actions |
|--------|---------|
| Sobjects | [List](./REFERENCE.md#sobjects-list) |
| Accounts | [List](./REFERENCE.md#accounts-list), [Get](./REFERENCE.md#accounts-get), [API Search](./REFERENCE.md#accounts-api_search), [Search](./REFERENCE.md#accounts-search) |
| Contacts | [List](./REFERENCE.md#contacts-list), [Get](./REFERENCE.md#contacts-get), [API Search](./REFERENCE.md#contacts-api_search), [Search](./REFERENCE.md#contacts-search) |
| Leads | [List](./REFERENCE.md#leads-list), [Get](./REFERENCE.md#leads-get), [API Search](./REFERENCE.md#leads-api_search), [Search](./REFERENCE.md#leads-search) |
| Opportunities | [List](./REFERENCE.md#opportunities-list), [Get](./REFERENCE.md#opportunities-get), [API Search](./REFERENCE.md#opportunities-api_search), [Search](./REFERENCE.md#opportunities-search) |
| Tasks | [List](./REFERENCE.md#tasks-list), [Get](./REFERENCE.md#tasks-get), [API Search](./REFERENCE.md#tasks-api_search), [Search](./REFERENCE.md#tasks-search) |
| Events | [List](./REFERENCE.md#events-list), [Get](./REFERENCE.md#events-get), [API Search](./REFERENCE.md#events-api_search) |
| Campaigns | [List](./REFERENCE.md#campaigns-list), [Get](./REFERENCE.md#campaigns-get), [API Search](./REFERENCE.md#campaigns-api_search) |
| Cases | [List](./REFERENCE.md#cases-list), [Get](./REFERENCE.md#cases-get), [API Search](./REFERENCE.md#cases-api_search) |
| Notes | [List](./REFERENCE.md#notes-list), [Get](./REFERENCE.md#notes-get), [API Search](./REFERENCE.md#notes-api_search) |
| Content Versions | [List](./REFERENCE.md#content-versions-list), [Get](./REFERENCE.md#content-versions-get), [Download](./REFERENCE.md#content-versions-download) |
| Attachments | [List](./REFERENCE.md#attachments-list), [Get](./REFERENCE.md#attachments-get), [Download](./REFERENCE.md#attachments-download) |
| Query | [List](./REFERENCE.md#query-list) |
### Authentication
For all authentication options, see the connector's [authentication documentation](AUTH.md).
### Salesforce API docs
See the official [Salesforce API reference](https://developer.salesforce.com/docs/atlas.en-us.api_rest.meta/api_rest/intro_rest.htm).
## Version information
- **Package version:** 0.1.99
- **Connector version:** 1.0.13
- **Generated with Connector SDK commit SHA:** cb4380e76ac5cbc67b9089f94522be1bbe9f8d73
- **Changelog:** [View changelog](https://github.com/airbytehq/airbyte-agent-connectors/blob/main/connectors/salesforce/CHANGELOG.md) | text/markdown | null | Airbyte <contact@airbyte.io> | null | null | Elastic-2.0 | agent, ai, airbyte, api, connector, data-integration, llm, mcp, salesforce | [
"Development Status :: 3 - Alpha",
"Intended Audience :: Developers",
"License :: Other/Proprietary License",
"Operating System :: OS Independent",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.13",
"Topic :: Software Development :: Libraries :: Application Frameworks",
"Topic :: Software Development :: Libraries :: Python Modules",
"Typing :: Typed"
] | [] | null | null | >=3.13 | [] | [] | [] | [
"httpx>=0.24.0",
"jinja2>=3.0.0",
"jsonpath-ng>=1.6.1",
"jsonref>=1.1.0",
"opentelemetry-api>=1.37.0",
"opentelemetry-sdk>=1.37.0",
"pydantic>=2.0.0",
"python-dotenv>=1.0.0",
"pyyaml>=6.0",
"segment-analytics-python>=2.2.0"
] | [] | [] | [] | [
"Homepage, https://github.com/airbytehq/airbyte-agent-connectors",
"Documentation, https://docs.airbyte.com/ai-agents/",
"Repository, https://github.com/airbytehq/airbyte-agent-connectors",
"Issues, https://github.com/airbytehq/airbyte-agent-connectors/issues"
] | twine/6.2.0 CPython/3.13.11 | 2026-02-20T02:13:27.865011 | airbyte_agent_salesforce-0.1.99.tar.gz | 155,485 | 63/3a/c7735ba8c803c3639307a2a0fc05cb14b75f998b0f4baf2d8b00fa098ad6/airbyte_agent_salesforce-0.1.99.tar.gz | source | sdist | null | false | 5f41db6631d0eb5efea45b72673068e4 | b03d3ad633cd2f356cbc399bc6fcf3e4b97f41a94477e653765b93e7038fe0b3 | 633ac7735ba8c803c3639307a2a0fc05cb14b75f998b0f4baf2d8b00fa098ad6 | null | [] | 341 |
2.4 | airbyte-agent-intercom | 0.1.76 | Airbyte Intercom Connector for AI platforms | # Intercom
The Intercom agent connector is a Python package that equips AI agents to interact with Intercom through strongly typed, well-documented tools. It's ready to use directly in your Python app, in an agent framework, or exposed through an MCP.
Intercom is a customer messaging platform that enables businesses to communicate with
customers through chat, email, and in-app messaging. This connector provides read-only
access to core Intercom entities including contacts, conversations, companies, teams,
admins, tags, and segments for customer support analytics and insights.
## Example questions
The Intercom connector is optimized to handle prompts like these.
- List all contacts in my Intercom workspace
- List all companies in Intercom
- What teams are configured in my workspace?
- Show me all admins in my Intercom account
- List all tags used in Intercom
- Show me all customer segments
- Show me details for a recent contact
- Show me details for a recent company
- Show me details for a recent conversation
- Show me conversations from the last week
- List conversations assigned to team {team_id}
- Show me open conversations
## Unsupported questions
The Intercom connector isn't currently able to handle prompts like these.
- Create a new contact in Intercom
- Send a message to a customer
- Delete a conversation
- Update company information
- Assign a conversation to an admin
- Create a new tag
## Installation
```bash
uv pip install airbyte-agent-intercom
```
## Usage
Connectors can run in open source or hosted mode.
### Open source
In open source mode, you provide API credentials directly to the connector.
```python
from airbyte_agent_intercom import IntercomConnector
from airbyte_agent_intercom.models import IntercomAuthConfig
connector = IntercomConnector(
auth_config=IntercomAuthConfig(
access_token="<Your Intercom API Access Token>"
)
)
@agent.tool_plain # assumes you're using Pydantic AI
@IntercomConnector.tool_utils
async def intercom_execute(entity: str, action: str, params: dict | None = None):
return await connector.execute(entity, action, params or {})
```
### Hosted
In hosted mode, API credentials are stored securely in Airbyte Cloud. You provide your Airbyte credentials instead.
If your Airbyte client can access multiple organizations, also set `organization_id`.
This example assumes you've already authenticated your connector with Airbyte. See [Authentication](AUTH.md) to learn more about authenticating. If you need a step-by-step guide, see the [hosted execution tutorial](https://docs.airbyte.com/ai-agents/quickstarts/tutorial-hosted).
```python
from airbyte_agent_intercom import IntercomConnector, AirbyteAuthConfig
connector = IntercomConnector(
auth_config=AirbyteAuthConfig(
customer_name="<your_customer_name>",
organization_id="<your_organization_id>", # Optional for multi-org clients
airbyte_client_id="<your-client-id>",
airbyte_client_secret="<your-client-secret>"
)
)
@agent.tool_plain # assumes you're using Pydantic AI
@IntercomConnector.tool_utils
async def intercom_execute(entity: str, action: str, params: dict | None = None):
return await connector.execute(entity, action, params or {})
```
## Full documentation
### Entities and actions
This connector supports the following entities and actions. For more details, see this connector's [full reference documentation](REFERENCE.md).
| Entity | Actions |
|--------|---------|
| Contacts | [List](./REFERENCE.md#contacts-list), [Get](./REFERENCE.md#contacts-get), [Search](./REFERENCE.md#contacts-search) |
| Conversations | [List](./REFERENCE.md#conversations-list), [Get](./REFERENCE.md#conversations-get), [Search](./REFERENCE.md#conversations-search) |
| Companies | [List](./REFERENCE.md#companies-list), [Get](./REFERENCE.md#companies-get), [Search](./REFERENCE.md#companies-search) |
| Teams | [List](./REFERENCE.md#teams-list), [Get](./REFERENCE.md#teams-get), [Search](./REFERENCE.md#teams-search) |
| Admins | [List](./REFERENCE.md#admins-list), [Get](./REFERENCE.md#admins-get) |
| Tags | [List](./REFERENCE.md#tags-list), [Get](./REFERENCE.md#tags-get) |
| Segments | [List](./REFERENCE.md#segments-list), [Get](./REFERENCE.md#segments-get) |
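The table above can be encoded as a simple capability map that a tool wrapper checks before delegating to `connector.execute`. This is an illustrative sketch, not part of the connector's API; the `SUPPORTED_ACTIONS` dict and `is_supported` helper are assumptions derived from the table, and the authoritative list lives in REFERENCE.md.

```python
# Illustrative capability map derived from the entities/actions table above.
# The connector itself remains the source of truth (see REFERENCE.md).
SUPPORTED_ACTIONS = {
    "contacts": {"list", "get", "search"},
    "conversations": {"list", "get", "search"},
    "companies": {"list", "get", "search"},
    "teams": {"list", "get", "search"},
    "admins": {"list", "get"},
    "tags": {"list", "get"},
    "segments": {"list", "get"},
}

def is_supported(entity: str, action: str) -> bool:
    """Check an entity/action pair against the table before delegating."""
    return action.lower() in SUPPORTED_ACTIONS.get(entity.lower(), set())

print(is_supported("conversations", "search"))  # True
print(is_supported("tags", "search"))           # False: Tags supports only List and Get
```

A wrapper like this lets an agent fail fast with a clear message instead of round-tripping an unsupported request.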
### Authentication
For all authentication options, see the connector's [authentication documentation](AUTH.md).
### Intercom API docs
See the official [Intercom API reference](https://developers.intercom.com/docs/references/rest-api/api.intercom.io).
## Version information
- **Package version:** 0.1.76
- **Connector version:** 0.1.8
- **Generated with Connector SDK commit SHA:** cb4380e76ac5cbc67b9089f94522be1bbe9f8d73
- **Changelog:** [View changelog](https://github.com/airbytehq/airbyte-agent-connectors/blob/main/connectors/intercom/CHANGELOG.md) | text/markdown | null | Airbyte <contact@airbyte.io> | null | null | Elastic-2.0 | agent, ai, airbyte, api, connector, data-integration, intercom, llm, mcp | [
"Development Status :: 3 - Alpha",
"Intended Audience :: Developers",
"License :: Other/Proprietary License",
"Operating System :: OS Independent",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.13",
"Topic :: Software Development :: Libraries :: Application Frameworks",
"Topic :: Software Development :: Libraries :: Python Modules",
"Typing :: Typed"
] | [] | null | null | >=3.13 | [] | [] | [] | [
"httpx>=0.24.0",
"jinja2>=3.0.0",
"jsonpath-ng>=1.6.1",
"jsonref>=1.1.0",
"opentelemetry-api>=1.37.0",
"opentelemetry-sdk>=1.37.0",
"pydantic>=2.0.0",
"python-dotenv>=1.0.0",
"pyyaml>=6.0",
"segment-analytics-python>=2.2.0"
] | [] | [] | [] | [
"Homepage, https://github.com/airbytehq/airbyte-agent-connectors",
"Documentation, https://docs.airbyte.com/ai-agents/",
"Repository, https://github.com/airbytehq/airbyte-agent-connectors",
"Issues, https://github.com/airbytehq/airbyte-agent-connectors/issues"
] | twine/6.2.0 CPython/3.13.11 | 2026-02-20T02:13:26.011894 | airbyte_agent_intercom-0.1.76.tar.gz | 153,534 | 59/fb/04729add8b232bb526ff5b2e7742a5b5e1a37fbcdabdbf9f93cbb0a20069/airbyte_agent_intercom-0.1.76.tar.gz | source | sdist | null | false | a1b8ef50697a808f88886fbc063a593b | aa21d97779b91addf3f48ba4d6cdda51e1ec7e7a303cbc12644bc85c37944574 | 59fb04729add8b232bb526ff5b2e7742a5b5e1a37fbcdabdbf9f93cbb0a20069 | null | [] | 332 |
2.4 | airbyte-agent-gmail | 0.1.7 | Airbyte Gmail Connector for AI platforms | # Gmail
The Gmail agent connector is a Python package that equips AI agents to interact with Gmail through strongly typed, well-documented tools. It's ready to use directly in your Python app, in an agent framework, or exposed through an MCP server.
Gmail is Google's email service that provides email sending, receiving, and organization
capabilities. This connector provides access to messages, threads, labels, drafts, and
user profile information. It supports read operations for listing and retrieving email
data, as well as write operations including sending messages, managing drafts, modifying
message labels, and creating or updating labels.
## Example questions
The Gmail connector is optimized to handle prompts like these.
- List my recent emails
- Show me unread messages in my inbox
- Get the details of a specific email
- List all my Gmail labels
- Show me details for a specific label
- List my email drafts
- Get the content of a specific draft
- List my email threads
- Show me the full thread for a conversation
- Get my Gmail profile information
- Send an email to someone
- Create a new email draft
- Archive a message by removing the INBOX label
- Mark a message as read
- Mark a message as unread
- Move a message to trash
- Create a new label
- Update a label name or settings
- Delete a label
- Search for messages matching a query
- Find emails from a specific sender
- Show me emails with attachments
## Unsupported questions
The Gmail connector isn't currently able to handle prompts like these.
- Attach a file to an email
- Forward an email to someone
- Create a filter or rule
- Manage Gmail settings
- Access Google Calendar events
- Manage contacts
## Installation
```bash
uv pip install airbyte-agent-gmail
```
## Usage
Connectors can run in open source or hosted mode.
### Open source
In open source mode, you provide API credentials directly to the connector.
```python
from airbyte_agent_gmail import GmailConnector
from airbyte_agent_gmail.models import GmailAuthConfig
connector = GmailConnector(
auth_config=GmailAuthConfig(
access_token="<Your Google OAuth2 Access Token (optional, will be obtained via refresh)>",
refresh_token="<Your Google OAuth2 Refresh Token>",
client_id="<Your Google OAuth2 Client ID>",
client_secret="<Your Google OAuth2 Client Secret>"
)
)
@agent.tool_plain # assumes you're using Pydantic AI
@GmailConnector.tool_utils
async def gmail_execute(entity: str, action: str, params: dict | None = None):
return await connector.execute(entity, action, params or {})
```
### Hosted
In hosted mode, API credentials are stored securely in Airbyte Cloud. You provide your Airbyte credentials instead.
If your Airbyte client can access multiple organizations, also set `organization_id`.
This example assumes you've already authenticated your connector with Airbyte. See [Authentication](AUTH.md) to learn more about authenticating. If you need a step-by-step guide, see the [hosted execution tutorial](https://docs.airbyte.com/ai-agents/quickstarts/tutorial-hosted).
```python
from airbyte_agent_gmail import GmailConnector, AirbyteAuthConfig
connector = GmailConnector(
auth_config=AirbyteAuthConfig(
customer_name="<your_customer_name>",
organization_id="<your_organization_id>", # Optional for multi-org clients
airbyte_client_id="<your-client-id>",
airbyte_client_secret="<your-client-secret>"
)
)
@agent.tool_plain # assumes you're using Pydantic AI
@GmailConnector.tool_utils
async def gmail_execute(entity: str, action: str, params: dict | None = None):
return await connector.execute(entity, action, params or {})
```
## Full documentation
### Entities and actions
This connector supports the following entities and actions. For more details, see this connector's [full reference documentation](REFERENCE.md).
| Entity | Actions |
|--------|---------|
| Profile | [Get](./REFERENCE.md#profile-get) |
| Messages | [List](./REFERENCE.md#messages-list), [Get](./REFERENCE.md#messages-get), [Create](./REFERENCE.md#messages-create), [Update](./REFERENCE.md#messages-update) |
| Labels | [List](./REFERENCE.md#labels-list), [Create](./REFERENCE.md#labels-create), [Get](./REFERENCE.md#labels-get), [Update](./REFERENCE.md#labels-update), [Delete](./REFERENCE.md#labels-delete) |
| Drafts | [List](./REFERENCE.md#drafts-list), [Create](./REFERENCE.md#drafts-create), [Get](./REFERENCE.md#drafts-get), [Update](./REFERENCE.md#drafts-update), [Delete](./REFERENCE.md#drafts-delete) |
| Drafts Send | [Create](./REFERENCE.md#drafts-send-create) |
| Threads | [List](./REFERENCE.md#threads-list), [Get](./REFERENCE.md#threads-get) |
| Messages Trash | [Create](./REFERENCE.md#messages-trash-create) |
| Messages Untrash | [Create](./REFERENCE.md#messages-untrash-create) |
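The underlying Gmail API represents outgoing mail as a base64url-encoded RFC 2822 string in a `raw` field (this is how `messages.send` and draft creation work in Google's API; whether this connector exposes that field verbatim or abstracts it is documented in REFERENCE.md). A standard-library sketch of building such a payload:

```python
import base64
from email.message import EmailMessage

def build_raw_message(to: str, subject: str, body: str) -> str:
    """Build a base64url-encoded RFC 2822 message, the shape the Gmail
    API expects in its 'raw' field for sends and drafts."""
    msg = EmailMessage()
    msg["To"] = to
    msg["Subject"] = subject
    msg.set_content(body)
    return base64.urlsafe_b64encode(msg.as_bytes()).decode("ascii")

raw = build_raw_message("a@example.com", "Hello", "Hi there")
# The encoding round-trips back to the original headers:
decoded = base64.urlsafe_b64decode(raw).decode("ascii")
print("Subject: Hello" in decoded)  # True
```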
### Authentication
For all authentication options, see the connector's [authentication documentation](AUTH.md).
### Gmail API docs
See the official [Gmail API reference](https://developers.google.com/gmail/api/reference/rest).
## Version information
- **Package version:** 0.1.7
- **Connector version:** 0.1.2
- **Generated with Connector SDK commit SHA:** cb4380e76ac5cbc67b9089f94522be1bbe9f8d73
- **Changelog:** [View changelog](https://github.com/airbytehq/airbyte-agent-connectors/blob/main/connectors/gmail/CHANGELOG.md) | text/markdown | null | Airbyte <contact@airbyte.io> | null | null | Elastic-2.0 | agent, ai, airbyte, api, connector, data-integration, gmail, llm, mcp | [
"Development Status :: 3 - Alpha",
"Intended Audience :: Developers",
"License :: Other/Proprietary License",
"Operating System :: OS Independent",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.13",
"Topic :: Software Development :: Libraries :: Application Frameworks",
"Topic :: Software Development :: Libraries :: Python Modules",
"Typing :: Typed"
] | [] | null | null | >=3.13 | [] | [] | [] | [
"httpx>=0.24.0",
"jinja2>=3.0.0",
"jsonpath-ng>=1.6.1",
"jsonref>=1.1.0",
"opentelemetry-api>=1.37.0",
"opentelemetry-sdk>=1.37.0",
"pydantic>=2.0.0",
"python-dotenv>=1.0.0",
"pyyaml>=6.0",
"segment-analytics-python>=2.2.0"
] | [] | [] | [] | [
"Homepage, https://github.com/airbytehq/airbyte-agent-connectors",
"Documentation, https://docs.airbyte.com/ai-agents/",
"Repository, https://github.com/airbytehq/airbyte-agent-connectors",
"Issues, https://github.com/airbytehq/airbyte-agent-connectors/issues"
] | twine/6.2.0 CPython/3.13.11 | 2026-02-20T02:13:23.519207 | airbyte_agent_gmail-0.1.7.tar.gz | 136,900 | a3/13/f126f136deb95f49b11412084c73181078e9bd546d7fca6066a3dce3ab78/airbyte_agent_gmail-0.1.7.tar.gz | source | sdist | null | false | 6f2ce31bb97ce2c873725ff2707c0c41 | 7f1be3cb94d7a4957c982476aa06d50df72023bc541b6c98688bc95fde2a11ac | a313f126f136deb95f49b11412084c73181078e9bd546d7fca6066a3dce3ab78 | null | [] | 244 |
2.4 | airbyte-agent-klaviyo | 0.1.33 | Airbyte Klaviyo Connector for AI platforms | # Klaviyo
The Klaviyo agent connector is a Python package that equips AI agents to interact with Klaviyo through strongly typed, well-documented tools. It's ready to use directly in your Python app, in an agent framework, or exposed through an MCP server.
Klaviyo is a marketing automation platform that helps businesses build customer relationships
through personalized email, SMS, and push notifications. This connector provides access to
Klaviyo's core entities including profiles, lists, campaigns, events, metrics, flows, and
email templates for marketing analytics and customer engagement insights.
## Example questions
The Klaviyo connector is optimized to handle prompts like these.
- List all profiles in my Klaviyo account
- Show me details for a recent profile
- Show me all email lists
- Show me details for a recent email list
- What campaigns have been created?
- Show me details for a recent campaign
- Show me all email campaigns
- List all events for tracking customer actions
- Show me all metrics (event types)
- Show me details for a recent metric
- What automated flows are configured?
- Show me details for a recent flow
- List all email templates
- Show me details for a recent email template
## Unsupported questions
The Klaviyo connector isn't currently able to handle prompts like these.
- Create a new profile
- Update a profile's email address
- Delete a list
- Send an email campaign
- Add a profile to a list
## Installation
```bash
uv pip install airbyte-agent-klaviyo
```
## Usage
Connectors can run in open source or hosted mode.
### Open source
In open source mode, you provide API credentials directly to the connector.
```python
from airbyte_agent_klaviyo import KlaviyoConnector
from airbyte_agent_klaviyo.models import KlaviyoAuthConfig
connector = KlaviyoConnector(
auth_config=KlaviyoAuthConfig(
api_key="<Your Klaviyo private API key>"
)
)
@agent.tool_plain # assumes you're using Pydantic AI
@KlaviyoConnector.tool_utils
async def klaviyo_execute(entity: str, action: str, params: dict | None = None):
return await connector.execute(entity, action, params or {})
```
### Hosted
In hosted mode, API credentials are stored securely in Airbyte Cloud. You provide your Airbyte credentials instead.
If your Airbyte client can access multiple organizations, also set `organization_id`.
This example assumes you've already authenticated your connector with Airbyte. See [Authentication](AUTH.md) to learn more about authenticating. If you need a step-by-step guide, see the [hosted execution tutorial](https://docs.airbyte.com/ai-agents/quickstarts/tutorial-hosted).
```python
from airbyte_agent_klaviyo import KlaviyoConnector, AirbyteAuthConfig
connector = KlaviyoConnector(
auth_config=AirbyteAuthConfig(
customer_name="<your_customer_name>",
organization_id="<your_organization_id>", # Optional for multi-org clients
airbyte_client_id="<your-client-id>",
airbyte_client_secret="<your-client-secret>"
)
)
@agent.tool_plain # assumes you're using Pydantic AI
@KlaviyoConnector.tool_utils
async def klaviyo_execute(entity: str, action: str, params: dict | None = None):
return await connector.execute(entity, action, params or {})
```
## Full documentation
### Entities and actions
This connector supports the following entities and actions. For more details, see this connector's [full reference documentation](REFERENCE.md).
| Entity | Actions |
|--------|---------|
| Profiles | [List](./REFERENCE.md#profiles-list), [Get](./REFERENCE.md#profiles-get), [Search](./REFERENCE.md#profiles-search) |
| Lists | [List](./REFERENCE.md#lists-list), [Get](./REFERENCE.md#lists-get), [Search](./REFERENCE.md#lists-search) |
| Campaigns | [List](./REFERENCE.md#campaigns-list), [Get](./REFERENCE.md#campaigns-get), [Search](./REFERENCE.md#campaigns-search) |
| Events | [List](./REFERENCE.md#events-list), [Search](./REFERENCE.md#events-search) |
| Metrics | [List](./REFERENCE.md#metrics-list), [Get](./REFERENCE.md#metrics-get), [Search](./REFERENCE.md#metrics-search) |
| Flows | [List](./REFERENCE.md#flows-list), [Get](./REFERENCE.md#flows-get), [Search](./REFERENCE.md#flows-search) |
| Email Templates | [List](./REFERENCE.md#email-templates-list), [Get](./REFERENCE.md#email-templates-get), [Search](./REFERENCE.md#email-templates-search) |
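Klaviyo's API uses a JSON:API filter grammar for narrowing results, e.g. `equals(email,"kermit@example.com")`, with string operands quoted and datetime operands unquoted. Whether this connector's Search actions accept a raw filter string is an assumption; check REFERENCE.md for the actual parameter shape. A small builder sketch:

```python
def equals(field: str, value: str) -> str:
    """Klaviyo's filter grammar quotes string operands: equals(field,"value")."""
    return f'equals({field},"{value}")'

def greater_than(field: str, iso_datetime: str) -> str:
    """Datetime operands are unquoted in Klaviyo's filter grammar."""
    return f"greater-than({field},{iso_datetime})"

# Hypothetical params for a profile search and a date-bounded event search:
profile_params = {"filter": equals("email", "kermit@example.com")}
event_params = {"filter": greater_than("datetime", "2024-01-01T00:00:00Z")}
print(profile_params["filter"])  # equals(email,"kermit@example.com")
```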
### Authentication
For all authentication options, see the connector's [authentication documentation](AUTH.md).
### Klaviyo API docs
See the official [Klaviyo API reference](https://developers.klaviyo.com/en/reference/api_overview).
## Version information
- **Package version:** 0.1.33
- **Connector version:** 1.0.2
- **Generated with Connector SDK commit SHA:** cb4380e76ac5cbc67b9089f94522be1bbe9f8d73
- **Changelog:** [View changelog](https://github.com/airbytehq/airbyte-agent-connectors/blob/main/connectors/klaviyo/CHANGELOG.md) | text/markdown | null | Airbyte <contact@airbyte.io> | null | null | Elastic-2.0 | agent, ai, airbyte, api, connector, data-integration, klaviyo, llm, mcp | [
"Development Status :: 3 - Alpha",
"Intended Audience :: Developers",
"License :: Other/Proprietary License",
"Operating System :: OS Independent",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.13",
"Topic :: Software Development :: Libraries :: Application Frameworks",
"Topic :: Software Development :: Libraries :: Python Modules",
"Typing :: Typed"
] | [] | null | null | >=3.13 | [] | [] | [] | [
"httpx>=0.24.0",
"jinja2>=3.0.0",
"jsonpath-ng>=1.6.1",
"jsonref>=1.1.0",
"opentelemetry-api>=1.37.0",
"opentelemetry-sdk>=1.37.0",
"pydantic>=2.0.0",
"python-dotenv>=1.0.0",
"pyyaml>=6.0",
"segment-analytics-python>=2.2.0"
] | [] | [] | [] | [
"Homepage, https://github.com/airbytehq/airbyte-agent-connectors",
"Documentation, https://docs.airbyte.com/ai-agents/",
"Repository, https://github.com/airbytehq/airbyte-agent-connectors",
"Issues, https://github.com/airbytehq/airbyte-agent-connectors/issues"
] | twine/6.2.0 CPython/3.13.11 | 2026-02-20T02:13:22.519994 | airbyte_agent_klaviyo-0.1.33.tar.gz | 137,406 | cb/08/0882184912fdc5eb753ee3f696ab60d49a8c9919d927d2c4627e15ecfc4e/airbyte_agent_klaviyo-0.1.33.tar.gz | source | sdist | null | false | ac82f468df7e3d7106b085cbec41eeb4 | a52db6904124d6d34d3572f9cb6524afaf8cf53f93ca47736cf31240adf18043 | cb080882184912fdc5eb753ee3f696ab60d49a8c9919d927d2c4627e15ecfc4e | null | [] | 329 |
2.4 | airbyte-agent-ashby | 0.1.7 | Airbyte Ashby Connector for AI platforms | # Ashby
The Ashby agent connector is a Python package that equips AI agents to interact with Ashby through strongly typed, well-documented tools. It's ready to use directly in your Python app, in an agent framework, or exposed through an MCP server.
Ashby is a modern applicant tracking system (ATS) and recruiting platform that helps companies manage their hiring process. This connector provides access to candidates, applications, jobs, departments, locations, users, job postings, sources, archive reasons, candidate tags, custom fields, and feedback form definitions for talent acquisition analytics and hiring insights.
## Example questions
The Ashby connector is optimized to handle prompts like these.
- List all open jobs
- Show me all candidates
- List recent applications
- List all departments
- Show me all job postings
- List all users in the organization
- Show me candidates who applied last month
- What are the top sources for job applications?
- Compare the number of applications across different departments
- Find candidates with multiple applications
- Summarize the candidate pipeline for our latest job posting
- Find the most active departments in recruiting this month
## Unsupported questions
The Ashby connector isn't currently able to handle prompts like these.
- Create a new job posting
- Schedule an interview for a candidate
- Update a candidate's application status
- Delete a candidate profile
- Send an offer letter to a candidate
## Installation
```bash
uv pip install airbyte-agent-ashby
```
## Usage
Connectors can run in open source or hosted mode.
### Open source
In open source mode, you provide API credentials directly to the connector.
```python
from airbyte_agent_ashby import AshbyConnector
from airbyte_agent_ashby.models import AshbyAuthConfig
connector = AshbyConnector(
auth_config=AshbyAuthConfig(
api_key="<Your Ashby API key>"
)
)
@agent.tool_plain # assumes you're using Pydantic AI
@AshbyConnector.tool_utils
async def ashby_execute(entity: str, action: str, params: dict | None = None):
return await connector.execute(entity, action, params or {})
```
### Hosted
In hosted mode, API credentials are stored securely in Airbyte Cloud. You provide your Airbyte credentials instead.
If your Airbyte client can access multiple organizations, also set `organization_id`.
This example assumes you've already authenticated your connector with Airbyte. See [Authentication](AUTH.md) to learn more about authenticating. If you need a step-by-step guide, see the [hosted execution tutorial](https://docs.airbyte.com/ai-agents/quickstarts/tutorial-hosted).
```python
from airbyte_agent_ashby import AshbyConnector, AirbyteAuthConfig
connector = AshbyConnector(
auth_config=AirbyteAuthConfig(
customer_name="<your_customer_name>",
organization_id="<your_organization_id>", # Optional for multi-org clients
airbyte_client_id="<your-client-id>",
airbyte_client_secret="<your-client-secret>"
)
)
@agent.tool_plain # assumes you're using Pydantic AI
@AshbyConnector.tool_utils
async def ashby_execute(entity: str, action: str, params: dict | None = None):
return await connector.execute(entity, action, params or {})
```
## Full documentation
### Entities and actions
This connector supports the following entities and actions. For more details, see this connector's [full reference documentation](REFERENCE.md).
| Entity | Actions |
|--------|---------|
| Candidates | [List](./REFERENCE.md#candidates-list), [Get](./REFERENCE.md#candidates-get) |
| Applications | [List](./REFERENCE.md#applications-list), [Get](./REFERENCE.md#applications-get) |
| Jobs | [List](./REFERENCE.md#jobs-list), [Get](./REFERENCE.md#jobs-get) |
| Departments | [List](./REFERENCE.md#departments-list), [Get](./REFERENCE.md#departments-get) |
| Locations | [List](./REFERENCE.md#locations-list), [Get](./REFERENCE.md#locations-get) |
| Users | [List](./REFERENCE.md#users-list), [Get](./REFERENCE.md#users-get) |
| Job Postings | [List](./REFERENCE.md#job-postings-list), [Get](./REFERENCE.md#job-postings-get) |
| Sources | [List](./REFERENCE.md#sources-list) |
| Archive Reasons | [List](./REFERENCE.md#archive-reasons-list) |
| Candidate Tags | [List](./REFERENCE.md#candidate-tags-list) |
| Custom Fields | [List](./REFERENCE.md#custom-fields-list) |
| Feedback Form Definitions | [List](./REFERENCE.md#feedback-form-definitions-list) |
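List actions like those above are typically cursor-paginated. The loop below is a generic sketch of draining such an action, with a stub standing in for the connector call; the real response shape (cursor field names, page structure) is documented in REFERENCE.md, so `fetch_page` here is an assumed adapter, not the connector's interface.

```python
import asyncio

async def collect_all_pages(fetch_page):
    """Drain a cursor-paginated list action. fetch_page is any coroutine
    taking a cursor and returning (items, next_cursor); next_cursor is
    None when no data remains."""
    items, cursor = [], None
    while True:
        page, cursor = await fetch_page(cursor)
        items.extend(page)
        if cursor is None:
            return items

# Demo with a stub standing in for an adapter around connector.execute(...):
async def fake_fetch(cursor):
    pages = {None: ([1, 2], "a"), "a": ([3], None)}
    return pages[cursor]

print(asyncio.run(collect_all_pages(fake_fetch)))  # [1, 2, 3]
```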
### Authentication
For all authentication options, see the connector's [authentication documentation](AUTH.md).
### Ashby API docs
See the official [Ashby API reference](https://developers.ashbyhq.com/reference).
## Version information
- **Package version:** 0.1.7
- **Connector version:** 0.1.2
- **Generated with Connector SDK commit SHA:** cb4380e76ac5cbc67b9089f94522be1bbe9f8d73
- **Changelog:** [View changelog](https://github.com/airbytehq/airbyte-agent-connectors/blob/main/connectors/ashby/CHANGELOG.md) | text/markdown | null | Airbyte <contact@airbyte.io> | null | null | Elastic-2.0 | agent, ai, airbyte, api, ashby, connector, data-integration, llm, mcp | [
"Development Status :: 3 - Alpha",
"Intended Audience :: Developers",
"License :: Other/Proprietary License",
"Operating System :: OS Independent",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.13",
"Topic :: Software Development :: Libraries :: Application Frameworks",
"Topic :: Software Development :: Libraries :: Python Modules",
"Typing :: Typed"
] | [] | null | null | >=3.13 | [] | [] | [] | [
"httpx>=0.24.0",
"jinja2>=3.0.0",
"jsonpath-ng>=1.6.1",
"jsonref>=1.1.0",
"opentelemetry-api>=1.37.0",
"opentelemetry-sdk>=1.37.0",
"pydantic>=2.0.0",
"python-dotenv>=1.0.0",
"pyyaml>=6.0",
"segment-analytics-python>=2.2.0"
] | [] | [] | [] | [
"Homepage, https://github.com/airbytehq/airbyte-agent-connectors",
"Documentation, https://docs.airbyte.com/ai-agents/",
"Repository, https://github.com/airbytehq/airbyte-agent-connectors",
"Issues, https://github.com/airbytehq/airbyte-agent-connectors/issues"
] | twine/6.2.0 CPython/3.13.11 | 2026-02-20T02:13:18.143269 | airbyte_agent_ashby-0.1.7.tar.gz | 135,310 | 0b/cc/67482369fcead2e38ed1d8de4caa19cd0b2d58895e0e1fe1908406630322/airbyte_agent_ashby-0.1.7.tar.gz | source | sdist | null | false | a0d0f8cea0871e39812b8afbe111ef0c | 039e1722fb19930a847f7c791071e37251685b330a2bec87958d9257fdcbe0a4 | 0bcc67482369fcead2e38ed1d8de4caa19cd0b2d58895e0e1fe1908406630322 | null | [] | 326 |
2.4 | airbyte-agent-shopify | 0.1.57 | Airbyte Shopify Connector for AI platforms | # Shopify
The Shopify agent connector is a Python package that equips AI agents to interact with Shopify through strongly typed, well-documented tools. It's ready to use directly in your Python app, in an agent framework, or exposed through an MCP server.
Shopify is an e-commerce platform that enables businesses to create online stores,
manage products, process orders, and handle customer relationships. This connector
provides access to Shopify Admin REST API for reading store data including customers,
orders, products, inventory, and more.
## Example questions
The Shopify connector is optimized to handle prompts like these.
- List all customers in my Shopify store
- Show me details for a recent customer
- What products do I have in my store?
- List all locations for my store
- Show me inventory levels for a recent location
- Show me all draft orders
- List all custom collections in my store
- Show me details for a recent order
- Show me product variants for a recent product
- Show me orders from the last 30 days
- Show me abandoned checkouts from this week
- What price rules are currently active?
## Unsupported questions
The Shopify connector isn't currently able to handle prompts like these.
- Create a new customer in Shopify
- Update product pricing
- Delete an order
- Process a refund
- Send shipping notification to customer
- Create a new discount code
## Installation
```bash
uv pip install airbyte-agent-shopify
```
## Usage
Connectors can run in open source or hosted mode.
### Open source
In open source mode, you provide API credentials directly to the connector.
```python
from airbyte_agent_shopify import ShopifyConnector
from airbyte_agent_shopify.models import ShopifyAuthConfig
connector = ShopifyConnector(
auth_config=ShopifyAuthConfig(
api_key="<Your Shopify Admin API access token>"
)
)
@agent.tool_plain # assumes you're using Pydantic AI
@ShopifyConnector.tool_utils
async def shopify_execute(entity: str, action: str, params: dict | None = None):
return await connector.execute(entity, action, params or {})
```
### Hosted
In hosted mode, API credentials are stored securely in Airbyte Cloud. You provide your Airbyte credentials instead.
If your Airbyte client can access multiple organizations, also set `organization_id`.
This example assumes you've already authenticated your connector with Airbyte. See [Authentication](AUTH.md) to learn more about authenticating. If you need a step-by-step guide, see the [hosted execution tutorial](https://docs.airbyte.com/ai-agents/quickstarts/tutorial-hosted).
```python
from airbyte_agent_shopify import ShopifyConnector, AirbyteAuthConfig
connector = ShopifyConnector(
auth_config=AirbyteAuthConfig(
customer_name="<your_customer_name>",
organization_id="<your_organization_id>", # Optional for multi-org clients
airbyte_client_id="<your-client-id>",
airbyte_client_secret="<your-client-secret>"
)
)
@agent.tool_plain # assumes you're using Pydantic AI
@ShopifyConnector.tool_utils
async def shopify_execute(entity: str, action: str, params: dict | None = None):
return await connector.execute(entity, action, params or {})
```
## Full documentation
### Entities and actions
This connector supports the following entities and actions. For more details, see this connector's [full reference documentation](REFERENCE.md).
| Entity | Actions |
|--------|---------|
| Customers | [List](./REFERENCE.md#customers-list), [Get](./REFERENCE.md#customers-get) |
| Orders | [List](./REFERENCE.md#orders-list), [Get](./REFERENCE.md#orders-get) |
| Products | [List](./REFERENCE.md#products-list), [Get](./REFERENCE.md#products-get) |
| Product Variants | [List](./REFERENCE.md#product-variants-list), [Get](./REFERENCE.md#product-variants-get) |
| Product Images | [List](./REFERENCE.md#product-images-list), [Get](./REFERENCE.md#product-images-get) |
| Abandoned Checkouts | [List](./REFERENCE.md#abandoned-checkouts-list) |
| Locations | [List](./REFERENCE.md#locations-list), [Get](./REFERENCE.md#locations-get) |
| Inventory Levels | [List](./REFERENCE.md#inventory-levels-list) |
| Inventory Items | [List](./REFERENCE.md#inventory-items-list), [Get](./REFERENCE.md#inventory-items-get) |
| Shop | [Get](./REFERENCE.md#shop-get) |
| Price Rules | [List](./REFERENCE.md#price-rules-list), [Get](./REFERENCE.md#price-rules-get) |
| Discount Codes | [List](./REFERENCE.md#discount-codes-list), [Get](./REFERENCE.md#discount-codes-get) |
| Custom Collections | [List](./REFERENCE.md#custom-collections-list), [Get](./REFERENCE.md#custom-collections-get) |
| Smart Collections | [List](./REFERENCE.md#smart-collections-list), [Get](./REFERENCE.md#smart-collections-get) |
| Collects | [List](./REFERENCE.md#collects-list), [Get](./REFERENCE.md#collects-get) |
| Draft Orders | [List](./REFERENCE.md#draft-orders-list), [Get](./REFERENCE.md#draft-orders-get) |
| Fulfillments | [List](./REFERENCE.md#fulfillments-list), [Get](./REFERENCE.md#fulfillments-get) |
| Order Refunds | [List](./REFERENCE.md#order-refunds-list), [Get](./REFERENCE.md#order-refunds-get) |
| Transactions | [List](./REFERENCE.md#transactions-list), [Get](./REFERENCE.md#transactions-get) |
| Tender Transactions | [List](./REFERENCE.md#tender-transactions-list) |
| Countries | [List](./REFERENCE.md#countries-list), [Get](./REFERENCE.md#countries-get) |
| Metafield Shops | [List](./REFERENCE.md#metafield-shops-list), [Get](./REFERENCE.md#metafield-shops-get) |
| Metafield Customers | [List](./REFERENCE.md#metafield-customers-list) |
| Metafield Products | [List](./REFERENCE.md#metafield-products-list) |
| Metafield Orders | [List](./REFERENCE.md#metafield-orders-list) |
| Metafield Draft Orders | [List](./REFERENCE.md#metafield-draft-orders-list) |
| Metafield Locations | [List](./REFERENCE.md#metafield-locations-list) |
| Metafield Product Variants | [List](./REFERENCE.md#metafield-product-variants-list) |
| Metafield Smart Collections | [List](./REFERENCE.md#metafield-smart-collections-list) |
| Metafield Product Images | [List](./REFERENCE.md#metafield-product-images-list) |
| Customer Address | [List](./REFERENCE.md#customer-address-list), [Get](./REFERENCE.md#customer-address-get) |
| Fulfillment Orders | [List](./REFERENCE.md#fulfillment-orders-list), [Get](./REFERENCE.md#fulfillment-orders-get) |
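Prompts like "orders from the last 30 days" map to date-bounded list calls. The Shopify Admin REST API filters by creation date with an ISO 8601 `created_at_min` query parameter; whether this connector forwards that parameter name verbatim is an assumption, so treat this as a sketch and confirm the supported params in REFERENCE.md.

```python
from datetime import datetime, timedelta, timezone

def last_n_days_params(days: int = 30) -> dict:
    """Build params for a date-bounded Orders list call.
    created_at_min is the Shopify Admin REST filter name; the
    connector's accepted params are documented in REFERENCE.md."""
    cutoff = datetime.now(timezone.utc) - timedelta(days=days)
    return {"created_at_min": cutoff.isoformat(timespec="seconds")}

params = last_n_days_params(30)
print(params)  # e.g. {'created_at_min': '2026-01-21T02:13:15+00:00'}
```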
### Authentication
For all authentication options, see the connector's [authentication documentation](AUTH.md).
### Shopify API docs
See the official [Shopify API reference](https://shopify.dev/docs/api/admin-rest).
## Version information
- **Package version:** 0.1.57
- **Connector version:** 0.1.8
- **Generated with Connector SDK commit SHA:** cb4380e76ac5cbc67b9089f94522be1bbe9f8d73
- **Changelog:** [View changelog](https://github.com/airbytehq/airbyte-agent-connectors/blob/main/connectors/shopify/CHANGELOG.md) | text/markdown | null | Airbyte <contact@airbyte.io> | null | null | Elastic-2.0 | agent, ai, airbyte, api, connector, data-integration, llm, mcp, shopify | [
"Development Status :: 3 - Alpha",
"Intended Audience :: Developers",
"License :: Other/Proprietary License",
"Operating System :: OS Independent",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.13",
"Topic :: Software Development :: Libraries :: Application Frameworks",
"Topic :: Software Development :: Libraries :: Python Modules",
"Typing :: Typed"
] | [] | null | null | >=3.13 | [] | [] | [] | [
"httpx>=0.24.0",
"jinja2>=3.0.0",
"jsonpath-ng>=1.6.1",
"jsonref>=1.1.0",
"opentelemetry-api>=1.37.0",
"opentelemetry-sdk>=1.37.0",
"pydantic>=2.0.0",
"python-dotenv>=1.0.0",
"pyyaml>=6.0",
"segment-analytics-python>=2.2.0"
] | [] | [] | [] | [
"Homepage, https://github.com/airbytehq/airbyte-agent-connectors",
"Documentation, https://docs.airbyte.com/ai-agents/",
"Repository, https://github.com/airbytehq/airbyte-agent-connectors",
"Issues, https://github.com/airbytehq/airbyte-agent-connectors/issues"
] | twine/6.2.0 CPython/3.13.11 | 2026-02-20T02:13:15.836366 | airbyte_agent_shopify-0.1.57.tar.gz | 173,508 | 7b/92/b3f831ee93d2bdfd8dbd4657c64cd8133f35ad822444435fb4cefba0c0db/airbyte_agent_shopify-0.1.57.tar.gz | source | sdist | null | false | 803fa22902e901c8853c19ad259e61ff | 90329bbd1dc330d507b9e619a2100ef0f1f479fb435f63cecdee5c7712bba751 | 7b92b3f831ee93d2bdfd8dbd4657c64cd8133f35ad822444435fb4cefba0c0db | null | [] | 322 |
2.4 | airbyte-agent-facebook-marketing | 0.1.43 | Airbyte Facebook-Marketing Connector for AI platforms | # Facebook-Marketing
The Facebook-Marketing agent connector is a Python package that equips AI agents to interact with Facebook-Marketing through strongly typed, well-documented tools. It's ready to use directly in your Python app, in an agent framework, or exposed through an MCP server.
Facebook Marketing API connector for managing ad campaigns, ad sets, ads, creatives,
and accessing performance insights, pixel configuration, and event quality data.
This connector provides read access to Facebook Ads Manager data for analytics
and reporting purposes.
## Example questions
The Facebook-Marketing connector is optimized to handle prompts like these.
- List all active campaigns in my ad account
- What ads are currently running in a recent campaign?
- List all ad creatives in my account
- What is the status of my campaigns?
- List all custom conversion events in my account
- Show me all ad images in my account
- What videos are available in my ad account?
- Create a new campaign called 'Summer Sale 2026' with traffic objective
- Pause my most recent campaign
- Create a new ad set with a $50 daily budget in my latest campaign
- Update the daily budget of my top performing ad set to $100
- Rename my most recent ad set to 'Holiday Promo'
- Create a new ad in my latest ad set
- Pause all ads in my most recent ad set
- List all pixels in my ad account
- Show me the event stats for my pixel
- What events is my Facebook pixel tracking?
- Search the Ad Library for political ads in the US
- Find ads about climate change in the Ad Library
- Show me Ad Library ads from a specific Facebook page
- Show me the ad sets with the highest daily budget
- Show me the performance insights for the last 7 days
- Which campaigns have the most spend this month?
- Show me ads with the highest click-through rate
## Unsupported questions
The Facebook-Marketing connector isn't currently able to handle prompts like these.
- Delete this ad creative
- Delete this campaign
- Delete this ad set
- Delete this ad
## Installation
```bash
uv pip install airbyte-agent-facebook-marketing
```
## Usage
Connectors can run in open source or hosted mode.
### Open source
In open source mode, you provide API credentials directly to the connector.
```python
from airbyte_agent_facebook_marketing import FacebookMarketingConnector
from airbyte_agent_facebook_marketing.models import FacebookMarketingServiceAccountKeyAuthenticationAuthConfig
connector = FacebookMarketingConnector(
auth_config=FacebookMarketingServiceAccountKeyAuthenticationAuthConfig(
account_key="<Facebook long-lived access token for Service Account authentication>"
)
)
@agent.tool_plain # assumes you're using Pydantic AI
@FacebookMarketingConnector.tool_utils
async def facebook_marketing_execute(entity: str, action: str, params: dict | None = None):
return await connector.execute(entity, action, params or {})
```
### Hosted
In hosted mode, API credentials are stored securely in Airbyte Cloud. You provide your Airbyte credentials instead.
If your Airbyte client can access multiple organizations, also set `organization_id`.
This example assumes you've already authenticated your connector with Airbyte. See [Authentication](AUTH.md) to learn more about authenticating. If you need a step-by-step guide, see the [hosted execution tutorial](https://docs.airbyte.com/ai-agents/quickstarts/tutorial-hosted).
```python
from airbyte_agent_facebook_marketing import FacebookMarketingConnector, AirbyteAuthConfig
connector = FacebookMarketingConnector(
auth_config=AirbyteAuthConfig(
customer_name="<your_customer_name>",
organization_id="<your_organization_id>", # Optional for multi-org clients
airbyte_client_id="<your-client-id>",
airbyte_client_secret="<your-client-secret>"
)
)
@agent.tool_plain # assumes you're using Pydantic AI
@FacebookMarketingConnector.tool_utils
async def facebook_marketing_execute(entity: str, action: str, params: dict | None = None):
return await connector.execute(entity, action, params or {})
```
## Full documentation
### Entities and actions
This connector supports the following entities and actions. For more details, see this connector's [full reference documentation](REFERENCE.md).
| Entity | Actions |
|--------|---------|
| Current User | [Get](./REFERENCE.md#current-user-get) |
| Ad Accounts | [List](./REFERENCE.md#ad-accounts-list), [Search](./REFERENCE.md#ad-accounts-search) |
| Campaigns | [List](./REFERENCE.md#campaigns-list), [Create](./REFERENCE.md#campaigns-create), [Get](./REFERENCE.md#campaigns-get), [Update](./REFERENCE.md#campaigns-update), [Search](./REFERENCE.md#campaigns-search) |
| Ad Sets | [List](./REFERENCE.md#ad-sets-list), [Create](./REFERENCE.md#ad-sets-create), [Get](./REFERENCE.md#ad-sets-get), [Update](./REFERENCE.md#ad-sets-update), [Search](./REFERENCE.md#ad-sets-search) |
| Ads | [List](./REFERENCE.md#ads-list), [Create](./REFERENCE.md#ads-create), [Get](./REFERENCE.md#ads-get), [Update](./REFERENCE.md#ads-update), [Search](./REFERENCE.md#ads-search) |
| Ad Creatives | [List](./REFERENCE.md#ad-creatives-list), [Search](./REFERENCE.md#ad-creatives-search) |
| Ads Insights | [List](./REFERENCE.md#ads-insights-list), [Search](./REFERENCE.md#ads-insights-search) |
| Ad Account | [Get](./REFERENCE.md#ad-account-get), [Search](./REFERENCE.md#ad-account-search) |
| Custom Conversions | [List](./REFERENCE.md#custom-conversions-list), [Search](./REFERENCE.md#custom-conversions-search) |
| Images | [List](./REFERENCE.md#images-list), [Search](./REFERENCE.md#images-search) |
| Videos | [List](./REFERENCE.md#videos-list), [Search](./REFERENCE.md#videos-search) |
| Pixels | [List](./REFERENCE.md#pixels-list), [Get](./REFERENCE.md#pixels-get) |
| Pixel Stats | [List](./REFERENCE.md#pixel-stats-list) |
| Ad Library | [List](./REFERENCE.md#ad-library-list) |
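Each row above corresponds to an `entity` string and a set of valid `action` strings for `connector.execute`. A minimal pre-flight check an agent tool might run is sketched below; the snake_case identifiers are assumed, so confirm the exact strings in [REFERENCE.md](./REFERENCE.md).

```python
# Subset of the table above; snake_case identifiers are an assumption.
SUPPORTED_ACTIONS = {
    "campaigns": {"list", "create", "get", "update", "search"},
    "ads_insights": {"list", "search"},
    "pixels": {"list", "get"},
    "ad_library": {"list"},
}

def check_supported(entity: str, action: str) -> None:
    """Reject unsupported entity/action pairs before calling the API."""
    actions = SUPPORTED_ACTIONS.get(entity)
    if actions is None:
        raise ValueError(f"unknown entity: {entity!r}")
    if action not in actions:
        raise ValueError(f"{entity!r} does not support {action!r}")

check_supported("campaigns", "update")  # supported, no error
```

Note that delete actions are absent throughout, which is consistent with the unsupported prompts listed earlier.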
### Authentication
For all authentication options, see the connector's [authentication documentation](AUTH.md).
### Facebook-Marketing API docs
See the official [Facebook-Marketing API reference](https://developers.facebook.com/docs/marketing-api/).
## Version information
- **Package version:** 0.1.43
- **Connector version:** 1.0.18
- **Generated with Connector SDK commit SHA:** cb4380e76ac5cbc67b9089f94522be1bbe9f8d73
- **Changelog:** [View changelog](https://github.com/airbytehq/airbyte-agent-connectors/blob/main/connectors/facebook-marketing/CHANGELOG.md) | text/markdown | null | Airbyte <contact@airbyte.io> | null | null | Elastic-2.0 | agent, ai, airbyte, api, connector, data-integration, facebook-marketing, llm, mcp | [
"Development Status :: 3 - Alpha",
"Intended Audience :: Developers",
"License :: Other/Proprietary License",
"Operating System :: OS Independent",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.13",
"Topic :: Software Development :: Libraries :: Application Frameworks",
"Topic :: Software Development :: Libraries :: Python Modules",
"Typing :: Typed"
] | [] | null | null | >=3.13 | [] | [] | [] | [
"httpx>=0.24.0",
"jinja2>=3.0.0",
"jsonpath-ng>=1.6.1",
"jsonref>=1.1.0",
"opentelemetry-api>=1.37.0",
"opentelemetry-sdk>=1.37.0",
"pydantic>=2.0.0",
"python-dotenv>=1.0.0",
"pyyaml>=6.0",
"segment-analytics-python>=2.2.0"
] | [] | [] | [] | [
"Homepage, https://github.com/airbytehq/airbyte-agent-connectors",
"Documentation, https://docs.airbyte.com/ai-agents/",
"Repository, https://github.com/airbytehq/airbyte-agent-connectors",
"Issues, https://github.com/airbytehq/airbyte-agent-connectors/issues"
] | twine/6.2.0 CPython/3.13.11 | 2026-02-20T02:13:14.569860 | airbyte_agent_facebook_marketing-0.1.43.tar.gz | 170,238 | a6/9b/57a98be1dc09f86bc4c5702fb32e2a65af2968721de0d0c39ab8945cbee9/airbyte_agent_facebook_marketing-0.1.43.tar.gz | source | sdist | null | false | 05c6cd0d508241ebe7cf41df36d98afe | 902d123bad63e55bac694752430983a814c16207097b6ea87a338f6215f5aa5b | a69b57a98be1dc09f86bc4c5702fb32e2a65af2968721de0d0c39ab8945cbee9 | null | [] | 328 |
2.4 | airbyte-agent-gong | 0.19.112 | Airbyte Gong Connector for AI platforms | # Gong
The Gong agent connector is a Python package that equips AI agents to interact with Gong through strongly typed, well-documented tools. It's ready to use directly in your Python app, in an agent framework, or exposed through an MCP server.
Gong is a revenue intelligence platform that captures and analyzes customer interactions
across calls, emails, and web conferences. This connector provides access to users,
recorded calls with transcripts, activity statistics, scorecards, trackers, workspaces,
coaching metrics, and library content for sales performance analysis and revenue insights.
## Example questions
The Gong connector is optimized to handle prompts like these.
- List all users in my Gong account
- Show me calls from last week
- Get the transcript for a recent call
- List all workspaces in Gong
- Show me the scorecard configurations
- What trackers are set up in my account?
- Get coaching metrics for a manager
- What are the activity stats for our sales team?
- Find calls mentioning \{keyword\} this month
- Show me calls for rep \{user_id\} in the last 30 days
- Which calls had the longest duration last week?
## Unsupported questions
The Gong connector isn't currently able to handle prompts like these.
- Create a new user in Gong
- Delete a call recording
- Update scorecard questions
- Schedule a new meeting
- Send feedback to a team member
- Modify tracker keywords
## Installation
```bash
uv pip install airbyte-agent-gong
```
## Usage
Connectors can run in open source or hosted mode.
### Open source
In open source mode, you provide API credentials directly to the connector.
```python
from airbyte_agent_gong import GongConnector
from airbyte_agent_gong.models import GongAccessKeyAuthenticationAuthConfig
connector = GongConnector(
auth_config=GongAccessKeyAuthenticationAuthConfig(
access_key="<Your Gong API Access Key>",
access_key_secret="<Your Gong API Access Key Secret>"
)
)
@agent.tool_plain # assumes you're using Pydantic AI
@GongConnector.tool_utils
async def gong_execute(entity: str, action: str, params: dict | None = None):
return await connector.execute(entity, action, params or {})
```
### Hosted
In hosted mode, API credentials are stored securely in Airbyte Cloud. You provide your Airbyte credentials instead.
If your Airbyte client can access multiple organizations, also set `organization_id`.
This example assumes you've already authenticated your connector with Airbyte. See [Authentication](AUTH.md) to learn more about authenticating. If you need a step-by-step guide, see the [hosted execution tutorial](https://docs.airbyte.com/ai-agents/quickstarts/tutorial-hosted).
```python
from airbyte_agent_gong import GongConnector, AirbyteAuthConfig
connector = GongConnector(
auth_config=AirbyteAuthConfig(
customer_name="<your_customer_name>",
organization_id="<your_organization_id>", # Optional for multi-org clients
airbyte_client_id="<your-client-id>",
airbyte_client_secret="<your-client-secret>"
)
)
@agent.tool_plain # assumes you're using Pydantic AI
@GongConnector.tool_utils
async def gong_execute(entity: str, action: str, params: dict | None = None):
return await connector.execute(entity, action, params or {})
```
## Full documentation
### Entities and actions
This connector supports the following entities and actions. For more details, see this connector's [full reference documentation](REFERENCE.md).
| Entity | Actions |
|--------|---------|
| Users | [List](./REFERENCE.md#users-list), [Get](./REFERENCE.md#users-get), [Search](./REFERENCE.md#users-search) |
| Calls | [List](./REFERENCE.md#calls-list), [Get](./REFERENCE.md#calls-get), [Search](./REFERENCE.md#calls-search) |
| Calls Extensive | [List](./REFERENCE.md#calls-extensive-list), [Search](./REFERENCE.md#calls-extensive-search) |
| Call Audio | [Download](./REFERENCE.md#call-audio-download) |
| Call Video | [Download](./REFERENCE.md#call-video-download) |
| Workspaces | [List](./REFERENCE.md#workspaces-list) |
| Call Transcripts | [List](./REFERENCE.md#call-transcripts-list) |
| Stats Activity Aggregate | [List](./REFERENCE.md#stats-activity-aggregate-list) |
| Stats Activity Day By Day | [List](./REFERENCE.md#stats-activity-day-by-day-list) |
| Stats Interaction | [List](./REFERENCE.md#stats-interaction-list) |
| Settings Scorecards | [List](./REFERENCE.md#settings-scorecards-list), [Search](./REFERENCE.md#settings-scorecards-search) |
| Settings Trackers | [List](./REFERENCE.md#settings-trackers-list) |
| Library Folders | [List](./REFERENCE.md#library-folders-list) |
| Library Folder Content | [List](./REFERENCE.md#library-folder-content-list) |
| Coaching | [List](./REFERENCE.md#coaching-list) |
| Stats Activity Scorecards | [List](./REFERENCE.md#stats-activity-scorecards-list), [Search](./REFERENCE.md#stats-activity-scorecards-search) |
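The table rows map to the `entity` and `action` arguments passed to `connector.execute`. The sketch below guards a tool call against unsupported pairs; the snake_case entity names are an assumption, so verify them in [REFERENCE.md](./REFERENCE.md).

```python
# Subset of the entities above; identifiers are assumed snake_case.
SUPPORTED_ACTIONS = {
    "calls": {"list", "get", "search"},
    "call_audio": {"download"},
    "workspaces": {"list"},
    "call_transcripts": {"list"},
}

def check_supported(entity: str, action: str) -> None:
    """Validate an entity/action pair before making a network call."""
    actions = SUPPORTED_ACTIONS.get(entity)
    if actions is None:
        raise ValueError(f"unknown entity: {entity!r}")
    if action not in actions:
        raise ValueError(f"{entity!r} does not support {action!r}")

check_supported("call_audio", "download")  # supported, no error
```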
### Authentication
For all authentication options, see the connector's [authentication documentation](AUTH.md).
### Gong API docs
See the official [Gong API reference](https://gong.app.gong.io/settings/api/documentation).
## Version information
- **Package version:** 0.19.112
- **Connector version:** 0.1.18
- **Generated with Connector SDK commit SHA:** cb4380e76ac5cbc67b9089f94522be1bbe9f8d73
- **Changelog:** [View changelog](https://github.com/airbytehq/airbyte-agent-connectors/blob/main/connectors/gong/CHANGELOG.md) | text/markdown | null | Airbyte <contact@airbyte.io> | null | null | Elastic-2.0 | agent, ai, airbyte, api, connector, data-integration, gong, llm, mcp | [
"Development Status :: 3 - Alpha",
"Intended Audience :: Developers",
"License :: Other/Proprietary License",
"Operating System :: OS Independent",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.13",
"Topic :: Software Development :: Libraries :: Application Frameworks",
"Topic :: Software Development :: Libraries :: Python Modules",
"Typing :: Typed"
] | [] | null | null | >=3.13 | [] | [] | [] | [
"httpx>=0.24.0",
"jinja2>=3.0.0",
"jsonpath-ng>=1.6.1",
"jsonref>=1.1.0",
"opentelemetry-api>=1.37.0",
"opentelemetry-sdk>=1.37.0",
"pydantic>=2.0.0",
"python-dotenv>=1.0.0",
"pyyaml>=6.0",
"segment-analytics-python>=2.2.0"
] | [] | [] | [] | [
"Homepage, https://github.com/airbytehq/airbyte-agent-connectors",
"Documentation, https://docs.airbyte.com/ai-agents/",
"Repository, https://github.com/airbytehq/airbyte-agent-connectors",
"Issues, https://github.com/airbytehq/airbyte-agent-connectors/issues"
] | twine/6.2.0 CPython/3.13.11 | 2026-02-20T02:13:12.602450 | airbyte_agent_gong-0.19.112.tar.gz | 158,254 | d3/40/cb3503476b9477c1a445fdca61164b9125756f94ede2da0a1475e38b9b9a/airbyte_agent_gong-0.19.112.tar.gz | source | sdist | null | false | e2ff3d1e058bebc3245e9c91e27f1f49 | 74b23a165e94e7886be755d44cbfd893bf7338b80002211412da103e34b9902a | d340cb3503476b9477c1a445fdca61164b9125756f94ede2da0a1475e38b9b9a | null | [] | 328 |
2.4 | airbyte-agent-mailchimp | 0.1.62 | Airbyte Mailchimp Connector for AI platforms | # Mailchimp
The Mailchimp agent connector is a Python package that equips AI agents to interact with Mailchimp through strongly typed, well-documented tools. It's ready to use directly in your Python app, in an agent framework, or exposed through an MCP server.
Mailchimp is an email marketing platform that enables businesses to create, send, and analyze
email campaigns, manage subscriber lists, and automate marketing workflows. This connector
provides read access to campaigns, lists, reports, email activity, automations, and more
for marketing analytics and audience management.
## Example questions
The Mailchimp connector is optimized to handle prompts like these.
- List all subscribers in my main mailing list
- List all automation workflows in my account
- Show me all segments for my primary audience
- List all interest categories for my primary audience
- Show me email activity for a recent campaign
- Show me the performance report for a recent campaign
- Show me all my email campaigns from the last month
- What are the open rates for my recent campaigns?
- Who unsubscribed from list \{list_id\} this week?
- What tags are applied to my subscribers?
- How many subscribers do I have in each list?
- What are my top performing campaigns by click rate?
## Unsupported questions
The Mailchimp connector isn't currently able to handle prompts like these.
- Create a new email campaign
- Add a subscriber to my list
- Delete a campaign
- Update subscriber information
- Send a campaign now
- Create a new automation workflow
## Installation
```bash
uv pip install airbyte-agent-mailchimp
```
## Usage
Connectors can run in open source or hosted mode.
### Open source
In open source mode, you provide API credentials directly to the connector.
```python
from airbyte_agent_mailchimp import MailchimpConnector
from airbyte_agent_mailchimp.models import MailchimpAuthConfig
connector = MailchimpConnector(
auth_config=MailchimpAuthConfig(
api_key="<Your Mailchimp API key. You can find this in your Mailchimp account under Account > Extras > API keys.>"
)
)
@agent.tool_plain # assumes you're using Pydantic AI
@MailchimpConnector.tool_utils
async def mailchimp_execute(entity: str, action: str, params: dict | None = None):
return await connector.execute(entity, action, params or {})
```
### Hosted
In hosted mode, API credentials are stored securely in Airbyte Cloud. You provide your Airbyte credentials instead.
If your Airbyte client can access multiple organizations, also set `organization_id`.
This example assumes you've already authenticated your connector with Airbyte. See [Authentication](AUTH.md) to learn more about authenticating. If you need a step-by-step guide, see the [hosted execution tutorial](https://docs.airbyte.com/ai-agents/quickstarts/tutorial-hosted).
```python
from airbyte_agent_mailchimp import MailchimpConnector, AirbyteAuthConfig
connector = MailchimpConnector(
auth_config=AirbyteAuthConfig(
customer_name="<your_customer_name>",
organization_id="<your_organization_id>", # Optional for multi-org clients
airbyte_client_id="<your-client-id>",
airbyte_client_secret="<your-client-secret>"
)
)
@agent.tool_plain # assumes you're using Pydantic AI
@MailchimpConnector.tool_utils
async def mailchimp_execute(entity: str, action: str, params: dict | None = None):
return await connector.execute(entity, action, params or {})
```
## Full documentation
### Entities and actions
This connector supports the following entities and actions. For more details, see this connector's [full reference documentation](REFERENCE.md).
| Entity | Actions |
|--------|---------|
| Campaigns | [List](./REFERENCE.md#campaigns-list), [Get](./REFERENCE.md#campaigns-get), [Search](./REFERENCE.md#campaigns-search) |
| Lists | [List](./REFERENCE.md#lists-list), [Get](./REFERENCE.md#lists-get), [Search](./REFERENCE.md#lists-search) |
| List Members | [List](./REFERENCE.md#list-members-list), [Get](./REFERENCE.md#list-members-get) |
| Reports | [List](./REFERENCE.md#reports-list), [Get](./REFERENCE.md#reports-get), [Search](./REFERENCE.md#reports-search) |
| Email Activity | [List](./REFERENCE.md#email-activity-list), [Search](./REFERENCE.md#email-activity-search) |
| Automations | [List](./REFERENCE.md#automations-list) |
| Tags | [List](./REFERENCE.md#tags-list) |
| Interest Categories | [List](./REFERENCE.md#interest-categories-list), [Get](./REFERENCE.md#interest-categories-get) |
| Interests | [List](./REFERENCE.md#interests-list), [Get](./REFERENCE.md#interests-get) |
| Segments | [List](./REFERENCE.md#segments-list), [Get](./REFERENCE.md#segments-get) |
| Segment Members | [List](./REFERENCE.md#segment-members-list) |
| Unsubscribes | [List](./REFERENCE.md#unsubscribes-list) |
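Each entity/action pair above is a valid combination for `connector.execute`. Since this connector is read-only, a tool wrapper might reject write actions up front, as in the sketch below; the snake_case identifiers are an assumption, so check [REFERENCE.md](./REFERENCE.md) for the exact strings.

```python
# Subset of the table above; snake_case identifiers are assumed.
SUPPORTED_ACTIONS = {
    "campaigns": {"list", "get", "search"},
    "list_members": {"list", "get"},
    "automations": {"list"},
    "unsubscribes": {"list"},
}

def check_supported(entity: str, action: str) -> None:
    """Reject write actions and unknown entities before calling the API."""
    actions = SUPPORTED_ACTIONS.get(entity)
    if actions is None:
        raise ValueError(f"unknown entity: {entity!r}")
    if action not in actions:
        raise ValueError(f"{entity!r} does not support {action!r}")

check_supported("unsubscribes", "list")  # read action, no error
```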
### Authentication
For all authentication options, see the connector's [authentication documentation](AUTH.md).
### Mailchimp API docs
See the official [Mailchimp API reference](https://mailchimp.com/developer/marketing/api/).
## Version information
- **Package version:** 0.1.62
- **Connector version:** 1.0.7
- **Generated with Connector SDK commit SHA:** cb4380e76ac5cbc67b9089f94522be1bbe9f8d73
- **Changelog:** [View changelog](https://github.com/airbytehq/airbyte-agent-connectors/blob/main/connectors/mailchimp/CHANGELOG.md) | text/markdown | null | Airbyte <contact@airbyte.io> | null | null | Elastic-2.0 | agent, ai, airbyte, api, connector, data-integration, llm, mailchimp, mcp | [
"Development Status :: 3 - Alpha",
"Intended Audience :: Developers",
"License :: Other/Proprietary License",
"Operating System :: OS Independent",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.13",
"Topic :: Software Development :: Libraries :: Application Frameworks",
"Topic :: Software Development :: Libraries :: Python Modules",
"Typing :: Typed"
] | [] | null | null | >=3.13 | [] | [] | [] | [
"httpx>=0.24.0",
"jinja2>=3.0.0",
"jsonpath-ng>=1.6.1",
"jsonref>=1.1.0",
"opentelemetry-api>=1.37.0",
"opentelemetry-sdk>=1.37.0",
"pydantic>=2.0.0",
"python-dotenv>=1.0.0",
"pyyaml>=6.0",
"segment-analytics-python>=2.2.0"
] | [] | [] | [] | [
"Homepage, https://github.com/airbytehq/airbyte-agent-connectors",
"Documentation, https://docs.airbyte.com/ai-agents/",
"Repository, https://github.com/airbytehq/airbyte-agent-connectors",
"Issues, https://github.com/airbytehq/airbyte-agent-connectors/issues"
] | twine/6.2.0 CPython/3.13.11 | 2026-02-20T02:13:09.699844 | airbyte_agent_mailchimp-0.1.62.tar.gz | 168,775 | e3/e5/c870feadd726a71c2bb2d23472b44e9cdbfc74ca8b3f7ff6dc226b1f9170/airbyte_agent_mailchimp-0.1.62.tar.gz | source | sdist | null | false | 0a1c0c1289ebca951bf814e28ee876f1 | 6a033023c2e9d641b1ff16031a9f65a4d81f099092d04e30beca2e5d93bf9578 | e3e5c870feadd726a71c2bb2d23472b44e9cdbfc74ca8b3f7ff6dc226b1f9170 | null | [] | 326 |
2.4 | airbyte-agent-slack | 0.1.70 | Airbyte Slack Connector for AI platforms | # Slack
The Slack agent connector is a Python package that equips AI agents to interact with Slack through strongly typed, well-documented tools. It's ready to use directly in your Python app, in an agent framework, or exposed through an MCP server.
Slack is a business communication platform that offers messaging, file sharing, and integrations
with other tools. This connector provides read access to users, channels, channel members, channel
messages, and threads for workspace analytics. It also supports write operations including sending
and updating messages, creating and renaming channels, setting channel topics and purposes, and
adding reactions to messages.
## Example questions
The Slack connector is optimized to handle prompts like these.
- List all users in my Slack workspace
- Show me all public channels
- List members of a public channel
- Show me recent messages in a public channel
- Show me thread replies for a recent message
- List all channels I have access to
- Show me user details for a workspace member
- List channel members for a public channel
- Send a message to a channel saying 'Hello team!'
- Post a message in the general channel
- Update the most recent message in a channel
- Create a new public channel called 'project-updates'
- Create a private channel named 'team-internal'
- Rename a channel to 'new-channel-name'
- Set the topic for a channel to 'Daily standup notes'
- Update the purpose of a channel
- Add a thumbsup reaction to the latest message in a channel
- React with :rocket: to the latest message in a channel
- Reply to a recent thread with 'Thanks for the update!'
- What messages were posted in channel \{channel_id\} last week?
- Show me the conversation history for channel \{channel_id\}
- Search for messages mentioning \{keyword\} in channel \{channel_id\}
## Unsupported questions
The Slack connector isn't currently able to handle prompts like these.
- Delete a message from channel \{channel_id\}
- Remove a reaction from a message
- Archive channel \{channel_id\}
- Invite user \{user_id\} to channel \{channel_id\}
- Remove user \{user_id\} from channel \{channel_id\}
- Delete channel \{channel_id\}
- Create a new user in the workspace
- Update user profile information
## Installation
```bash
uv pip install airbyte-agent-slack
```
## Usage
Connectors can run in open source or hosted mode.
### Open source
In open source mode, you provide API credentials directly to the connector.
```python
from airbyte_agent_slack import SlackConnector
from airbyte_agent_slack.models import SlackTokenAuthenticationAuthConfig
connector = SlackConnector(
auth_config=SlackTokenAuthenticationAuthConfig(
api_token="<Your Slack Bot Token (xoxb-) or User Token (xoxp-)>"
)
)
@agent.tool_plain # assumes you're using Pydantic AI
@SlackConnector.tool_utils
async def slack_execute(entity: str, action: str, params: dict | None = None):
return await connector.execute(entity, action, params or {})
```
### Hosted
In hosted mode, API credentials are stored securely in Airbyte Cloud. You provide your Airbyte credentials instead.
If your Airbyte client can access multiple organizations, also set `organization_id`.
This example assumes you've already authenticated your connector with Airbyte. See [Authentication](AUTH.md) to learn more about authenticating. If you need a step-by-step guide, see the [hosted execution tutorial](https://docs.airbyte.com/ai-agents/quickstarts/tutorial-hosted).
```python
from airbyte_agent_slack import SlackConnector, AirbyteAuthConfig
connector = SlackConnector(
auth_config=AirbyteAuthConfig(
customer_name="<your_customer_name>",
organization_id="<your_organization_id>", # Optional for multi-org clients
airbyte_client_id="<your-client-id>",
airbyte_client_secret="<your-client-secret>"
)
)
@agent.tool_plain # assumes you're using Pydantic AI
@SlackConnector.tool_utils
async def slack_execute(entity: str, action: str, params: dict | None = None):
return await connector.execute(entity, action, params or {})
```
## Full documentation
### Entities and actions
This connector supports the following entities and actions. For more details, see this connector's [full reference documentation](REFERENCE.md).
| Entity | Actions |
|--------|---------|
| Users | [List](./REFERENCE.md#users-list), [Get](./REFERENCE.md#users-get), [Search](./REFERENCE.md#users-search) |
| Channels | [List](./REFERENCE.md#channels-list), [Get](./REFERENCE.md#channels-get), [Create](./REFERENCE.md#channels-create), [Update](./REFERENCE.md#channels-update), [Search](./REFERENCE.md#channels-search) |
| Channel Messages | [List](./REFERENCE.md#channel-messages-list) |
| Threads | [List](./REFERENCE.md#threads-list) |
| Messages | [Create](./REFERENCE.md#messages-create), [Update](./REFERENCE.md#messages-update) |
| Channel Topics | [Create](./REFERENCE.md#channel-topics-create) |
| Channel Purposes | [Create](./REFERENCE.md#channel-purposes-create) |
| Reactions | [Create](./REFERENCE.md#reactions-create) |
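The table above defines which `action` values are valid for each `entity` passed to `connector.execute`. Unlike the read-only connectors, Slack mixes read and write actions, so a pre-flight check like the sketch below is useful; the snake_case entity identifiers are an assumption, so confirm them in [REFERENCE.md](./REFERENCE.md).

```python
# Subset of the table above; snake_case identifiers are assumed.
SUPPORTED_ACTIONS = {
    "channels": {"list", "get", "create", "update", "search"},
    "messages": {"create", "update"},
    "reactions": {"create"},
    "threads": {"list"},
}

def check_supported(entity: str, action: str) -> None:
    """Validate an entity/action pair before calling the Slack API."""
    actions = SUPPORTED_ACTIONS.get(entity)
    if actions is None:
        raise ValueError(f"unknown entity: {entity!r}")
    if action not in actions:
        raise ValueError(f"{entity!r} does not support {action!r}")

check_supported("messages", "create")  # write action, supported
```

Note that `messages` has no delete action, matching the unsupported prompts listed earlier.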
### Authentication
For all authentication options, see the connector's [authentication documentation](AUTH.md).
### Slack API docs
See the official [Slack API reference](https://api.slack.com/methods).
## Version information
- **Package version:** 0.1.70
- **Connector version:** 0.1.15
- **Generated with Connector SDK commit SHA:** cb4380e76ac5cbc67b9089f94522be1bbe9f8d73
- **Changelog:** [View changelog](https://github.com/airbytehq/airbyte-agent-connectors/blob/main/connectors/slack/CHANGELOG.md) | text/markdown | null | Airbyte <contact@airbyte.io> | null | null | Elastic-2.0 | agent, ai, airbyte, api, connector, data-integration, llm, mcp, slack | [
"Development Status :: 3 - Alpha",
"Intended Audience :: Developers",
"License :: Other/Proprietary License",
"Operating System :: OS Independent",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.13",
"Topic :: Software Development :: Libraries :: Application Frameworks",
"Topic :: Software Development :: Libraries :: Python Modules",
"Typing :: Typed"
] | [] | null | null | >=3.13 | [] | [] | [] | [
"httpx>=0.24.0",
"jinja2>=3.0.0",
"jsonpath-ng>=1.6.1",
"jsonref>=1.1.0",
"opentelemetry-api>=1.37.0",
"opentelemetry-sdk>=1.37.0",
"pydantic>=2.0.0",
"python-dotenv>=1.0.0",
"pyyaml>=6.0",
"segment-analytics-python>=2.2.0"
] | [] | [] | [] | [
"Homepage, https://github.com/airbytehq/airbyte-agent-connectors",
"Documentation, https://docs.airbyte.com/ai-agents/",
"Repository, https://github.com/airbytehq/airbyte-agent-connectors",
"Issues, https://github.com/airbytehq/airbyte-agent-connectors/issues"
] | twine/6.2.0 CPython/3.13.11 | 2026-02-20T02:13:07.974183 | airbyte_agent_slack-0.1.70.tar.gz | 148,244 | 3f/da/cda7f6df98e354520d3f36faf2e30c8b578a91085e27839f5ae7f0e9fbb3/airbyte_agent_slack-0.1.70.tar.gz | source | sdist | null | false | e5ce0f69a8b4187f84ddb0f70fd187a7 | 891f153d98dc9213cba6e0fb72eb0c8950fdbd3ffcf95c60c17ca3ea5b0f079f | 3fdacda7f6df98e354520d3f36faf2e30c8b578a91085e27839f5ae7f0e9fbb3 | null | [] | 332 |
2.4 | airbyte-agent-amazon-ads | 0.1.55 | Airbyte Amazon-Ads Connector for AI platforms | # Amazon-Ads
The Amazon-Ads agent connector is a Python package that equips AI agents to interact with Amazon-Ads through strongly typed, well-documented tools. It's ready to use directly in your Python app, in an agent framework, or exposed through an MCP server.
Amazon Ads is Amazon's advertising platform that enables sellers and vendors to promote their
products across Amazon's marketplace. This connector provides access to advertising profiles
for managing and analyzing advertising campaigns across different marketplaces.
## Example questions
The Amazon-Ads connector is optimized to handle prompts like these.
- List all my advertising profiles across marketplaces
- Show me the profiles for my seller accounts
- What marketplaces do I have advertising profiles in?
- List all portfolios for one of my profiles
- Show me all sponsored product campaigns
- What campaigns are currently enabled?
- Find campaigns with a specific targeting type
## Unsupported questions
The Amazon-Ads connector isn't currently able to handle prompts like these.
- Create a new advertising campaign
- Update my campaign budget
- Delete an ad group
- Generate a performance report
## Installation
```bash
uv pip install airbyte-agent-amazon-ads
```
## Usage
Connectors can run in open source or hosted mode.
### Open source
In open source mode, you provide API credentials directly to the connector.
```python
from airbyte_agent_amazon_ads import AmazonAdsConnector
from airbyte_agent_amazon_ads.models import AmazonAdsAuthConfig
connector = AmazonAdsConnector(
auth_config=AmazonAdsAuthConfig(
client_id="<The client ID of your Amazon Ads API application>",
client_secret="<The client secret of your Amazon Ads API application>",
refresh_token="<The refresh token obtained from the OAuth authorization flow>"
)
)
@agent.tool_plain # assumes you're using Pydantic AI
@AmazonAdsConnector.tool_utils
async def amazon_ads_execute(entity: str, action: str, params: dict | None = None):
return await connector.execute(entity, action, params or {})
```
### Hosted
In hosted mode, API credentials are stored securely in Airbyte Cloud. You provide your Airbyte credentials instead.
If your Airbyte client can access multiple organizations, also set `organization_id`.
This example assumes you've already authenticated your connector with Airbyte. See [Authentication](AUTH.md) to learn more about authenticating. If you need a step-by-step guide, see the [hosted execution tutorial](https://docs.airbyte.com/ai-agents/quickstarts/tutorial-hosted).
```python
from airbyte_agent_amazon_ads import AmazonAdsConnector, AirbyteAuthConfig
connector = AmazonAdsConnector(
auth_config=AirbyteAuthConfig(
customer_name="<your_customer_name>",
organization_id="<your_organization_id>", # Optional for multi-org clients
airbyte_client_id="<your-client-id>",
airbyte_client_secret="<your-client-secret>"
)
)
@agent.tool_plain # assumes you're using Pydantic AI
@AmazonAdsConnector.tool_utils
async def amazon_ads_execute(entity: str, action: str, params: dict | None = None):
return await connector.execute(entity, action, params or {})
```
## Full documentation
### Entities and actions
This connector supports the following entities and actions. For more details, see this connector's [full reference documentation](REFERENCE.md).
| Entity | Actions |
|--------|---------|
| Profiles | [List](./REFERENCE.md#profiles-list), [Get](./REFERENCE.md#profiles-get), [Search](./REFERENCE.md#profiles-search) |
| Portfolios | [List](./REFERENCE.md#portfolios-list), [Get](./REFERENCE.md#portfolios-get) |
| Sponsored Product Campaigns | [List](./REFERENCE.md#sponsored-product-campaigns-list), [Get](./REFERENCE.md#sponsored-product-campaigns-get) |
### Authentication
For all authentication options, see the connector's [authentication documentation](AUTH.md).
### Amazon-Ads API docs
See the official [Amazon-Ads API reference](https://advertising.amazon.com/API/docs/en-us).
## Version information
- **Package version:** 0.1.55
- **Connector version:** 1.0.8
- **Generated with Connector SDK commit SHA:** cb4380e76ac5cbc67b9089f94522be1bbe9f8d73
- **Changelog:** [View changelog](https://github.com/airbytehq/airbyte-agent-connectors/blob/main/connectors/amazon-ads/CHANGELOG.md) | text/markdown | null | Airbyte <contact@airbyte.io> | null | null | Elastic-2.0 | agent, ai, airbyte, amazon-ads, api, connector, data-integration, llm, mcp | [
"Development Status :: 3 - Alpha",
"Intended Audience :: Developers",
"License :: Other/Proprietary License",
"Operating System :: OS Independent",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.13",
"Topic :: Software Development :: Libraries :: Application Frameworks",
"Topic :: Software Development :: Libraries :: Python Modules",
"Typing :: Typed"
] | [] | null | null | >=3.13 | [] | [] | [] | [
"httpx>=0.24.0",
"jinja2>=3.0.0",
"jsonpath-ng>=1.6.1",
"jsonref>=1.1.0",
"opentelemetry-api>=1.37.0",
"opentelemetry-sdk>=1.37.0",
"pydantic>=2.0.0",
"python-dotenv>=1.0.0",
"pyyaml>=6.0",
"segment-analytics-python>=2.2.0"
] | [] | [] | [] | [
"Homepage, https://github.com/airbytehq/airbyte-agent-connectors",
"Documentation, https://docs.airbyte.com/ai-agents/",
"Repository, https://github.com/airbytehq/airbyte-agent-connectors",
"Issues, https://github.com/airbytehq/airbyte-agent-connectors/issues"
] | twine/6.2.0 CPython/3.13.11 | 2026-02-20T02:13:06.270907 | airbyte_agent_amazon_ads-0.1.55.tar.gz | 135,703 | 81/a9/48069809c7383593f74c8cb558fac117f4c9eefdd9e770d41d24ab2d7b78/airbyte_agent_amazon_ads-0.1.55.tar.gz | source | sdist | null | false | ee932e7e87f93053003cd19ba713ce7f | ea6b529a295dd5074a1465523dd434ee5fa7a7fdbd3b5965a0de3772980725c9 | 81a948069809c7383593f74c8cb558fac117f4c9eefdd9e770d41d24ab2d7b78 | null | [] | 324 |
2.4 | airbyte-agent-notion | 0.1.6 | Airbyte Notion Connector for AI platforms | # Notion
The Notion agent connector is a Python package that equips AI agents to interact with Notion through strongly typed, well-documented tools. It's ready to use directly in your Python app, in an agent framework, or exposed through an MCP.
Notion is an all-in-one workspace for notes, docs, wikis, projects, and collaboration.
This connector provides read access to Notion workspaces including users, pages, data sources,
blocks, and comments through the Notion REST API (version 2025-09-03). It enables querying
workspace structure, page content, data source schemas, and collaboration data for productivity
analysis and content management insights.
## Example questions
The Notion connector is optimized to handle prompts like these.
- List all users in my Notion workspace
- Show me all pages in my Notion workspace
- What data sources exist in my Notion workspace?
- Get the details of a specific page by ID
- List child blocks of a specific page
- Show me comments on a specific page
- What is the schema of a specific data source?
- Who are the bot users in my workspace?
- Find pages created in the last week
- List data sources that have been recently edited
- Show me all archived pages
## Unsupported questions
The Notion connector isn't currently able to handle prompts like these.
- Create a new page in Notion
- Update a data source property
- Delete a block
- Add a comment to a page
## Installation
```bash
uv pip install airbyte-agent-notion
```
## Usage
Connectors can run in open source or hosted mode.
### Open source
In open source mode, you provide API credentials directly to the connector.
```python
from airbyte_agent_notion import NotionConnector
from airbyte_agent_notion.models import NotionAuthConfig
connector = NotionConnector(
auth_config=NotionAuthConfig(
token="<Notion internal integration token (starts with ntn_ or secret_)>"
)
)
@agent.tool_plain # assumes you're using Pydantic AI
@NotionConnector.tool_utils
async def notion_execute(entity: str, action: str, params: dict | None = None):
return await connector.execute(entity, action, params or {})
```
### Hosted
In hosted mode, API credentials are stored securely in Airbyte Cloud. You provide your Airbyte credentials instead.
If your Airbyte client can access multiple organizations, also set `organization_id`.
This example assumes you've already authenticated your connector with Airbyte. See [Authentication](AUTH.md) to learn more about authenticating. If you need a step-by-step guide, see the [hosted execution tutorial](https://docs.airbyte.com/ai-agents/quickstarts/tutorial-hosted).
```python
from airbyte_agent_notion import NotionConnector, AirbyteAuthConfig
connector = NotionConnector(
auth_config=AirbyteAuthConfig(
customer_name="<your_customer_name>",
organization_id="<your_organization_id>", # Optional for multi-org clients
airbyte_client_id="<your-client-id>",
airbyte_client_secret="<your-client-secret>"
)
)
@agent.tool_plain # assumes you're using Pydantic AI
@NotionConnector.tool_utils
async def notion_execute(entity: str, action: str, params: dict | None = None):
return await connector.execute(entity, action, params or {})
```
## Full documentation
### Entities and actions
This connector supports the following entities and actions. For more details, see this connector's [full reference documentation](REFERENCE.md).
| Entity | Actions |
|--------|---------|
| Users | [List](./REFERENCE.md#users-list), [Get](./REFERENCE.md#users-get), [Search](./REFERENCE.md#users-search) |
| Pages | [List](./REFERENCE.md#pages-list), [Get](./REFERENCE.md#pages-get), [Search](./REFERENCE.md#pages-search) |
| Data Sources | [List](./REFERENCE.md#data-sources-list), [Get](./REFERENCE.md#data-sources-get), [Search](./REFERENCE.md#data-sources-search) |
| Blocks | [List](./REFERENCE.md#blocks-list), [Get](./REFERENCE.md#blocks-get), [Search](./REFERENCE.md#blocks-search) |
| Comments | [List](./REFERENCE.md#comments-list) |
### Authentication
For all authentication options, see the connector's [authentication documentation](AUTH.md).
### Notion API docs
See the official [Notion API reference](https://developers.notion.com/reference/intro).
## Version information
- **Package version:** 0.1.6
- **Connector version:** 0.1.4
- **Generated with Connector SDK commit SHA:** cb4380e76ac5cbc67b9089f94522be1bbe9f8d73
- **Changelog:** [View changelog](https://github.com/airbytehq/airbyte-agent-connectors/blob/main/connectors/notion/CHANGELOG.md) | text/markdown | null | Airbyte <contact@airbyte.io> | null | null | Elastic-2.0 | agent, ai, airbyte, api, connector, data-integration, llm, mcp, notion | [
"Development Status :: 3 - Alpha",
"Intended Audience :: Developers",
"License :: Other/Proprietary License",
"Operating System :: OS Independent",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.13",
"Topic :: Software Development :: Libraries :: Application Frameworks",
"Topic :: Software Development :: Libraries :: Python Modules",
"Typing :: Typed"
] | [] | null | null | >=3.13 | [] | [] | [] | [
"httpx>=0.24.0",
"jinja2>=3.0.0",
"jsonpath-ng>=1.6.1",
"jsonref>=1.1.0",
"opentelemetry-api>=1.37.0",
"opentelemetry-sdk>=1.37.0",
"pydantic>=2.0.0",
"python-dotenv>=1.0.0",
"pyyaml>=6.0",
"segment-analytics-python>=2.2.0"
] | [] | [] | [] | [
"Homepage, https://github.com/airbytehq/airbyte-agent-connectors",
"Documentation, https://docs.airbyte.com/ai-agents/",
"Repository, https://github.com/airbytehq/airbyte-agent-connectors",
"Issues, https://github.com/airbytehq/airbyte-agent-connectors/issues"
] | twine/6.2.0 CPython/3.13.11 | 2026-02-20T02:13:03.804624 | airbyte_agent_notion-0.1.6.tar.gz | 147,298 | 97/45/04f49086f38d2511c357227c8a597758ed2b2740f598aa514e9aefdb01fa/airbyte_agent_notion-0.1.6.tar.gz | source | sdist | null | false | 97913e6edc5f703b5d1486d42f0ef1cb | 9f8d37dd001d568a185df6b01cd9676a16785c13311a2230857bebdcb4410f63 | 974504f49086f38d2511c357227c8a597758ed2b2740f598aa514e9aefdb01fa | null | [] | 325 |
2.4 | airbyte-agent-google-drive | 0.1.76 | Airbyte Google-Drive Connector for AI platforms | # Google-Drive
The Google-Drive agent connector is a Python package that equips AI agents to interact with Google-Drive through strongly typed, well-documented tools. It's ready to use directly in your Python app, in an agent framework, or exposed through an MCP.
Google Drive is a cloud-based file storage and synchronization service that allows users
to store files, share content, and collaborate on documents. This connector provides
read-only access to files, shared drives, permissions, comments, replies, revisions,
and change tracking for data analysis and integration workflows.
## Example questions
The Google-Drive connector is optimized to handle prompts like these.
- List all files in my Google Drive
- Show me details for a recent file
- Download a recent file from my Drive
- Export a recent Google Doc as PDF
- Export a recent Google Sheet as CSV
- Show me the content of a recent file
- List all shared drives I have access to
- Show me details for a shared drive I have access to
- Show permissions for a recent file
- List comments on a recent file
- Show replies to a recent comment on a file
- Show revision history for a recent file
- Get my Drive storage quota and user info
- List files in a folder I have access to
- Show me files modified in the last week
- What changes have been made since my last sync?
## Unsupported questions
The Google-Drive connector isn't currently able to handle prompts like these.
- Create a new file in Google Drive
- Upload a document to Drive
- Delete a file from Drive
- Update file permissions
- Add a comment to a file
- Move a file to a different folder
## Installation
```bash
uv pip install airbyte-agent-google-drive
```
## Usage
Connectors can run in open source or hosted mode.
### Open source
In open source mode, you provide API credentials directly to the connector.
```python
from airbyte_agent_google_drive import GoogleDriveConnector
from airbyte_agent_google_drive.models import GoogleDriveAuthConfig
connector = GoogleDriveConnector(
auth_config=GoogleDriveAuthConfig(
access_token="<Your Google OAuth2 Access Token (optional, will be obtained via refresh)>",
refresh_token="<Your Google OAuth2 Refresh Token>",
client_id="<Your Google OAuth2 Client ID>",
client_secret="<Your Google OAuth2 Client Secret>"
)
)
@agent.tool_plain # assumes you're using Pydantic AI
@GoogleDriveConnector.tool_utils
async def google_drive_execute(entity: str, action: str, params: dict | None = None):
return await connector.execute(entity, action, params or {})
```
### Hosted
In hosted mode, API credentials are stored securely in Airbyte Cloud. You provide your Airbyte credentials instead.
If your Airbyte client can access multiple organizations, also set `organization_id`.
This example assumes you've already authenticated your connector with Airbyte. See [Authentication](AUTH.md) to learn more about authenticating. If you need a step-by-step guide, see the [hosted execution tutorial](https://docs.airbyte.com/ai-agents/quickstarts/tutorial-hosted).
```python
from airbyte_agent_google_drive import GoogleDriveConnector, AirbyteAuthConfig
connector = GoogleDriveConnector(
auth_config=AirbyteAuthConfig(
customer_name="<your_customer_name>",
organization_id="<your_organization_id>", # Optional for multi-org clients
airbyte_client_id="<your-client-id>",
airbyte_client_secret="<your-client-secret>"
)
)
@agent.tool_plain # assumes you're using Pydantic AI
@GoogleDriveConnector.tool_utils
async def google_drive_execute(entity: str, action: str, params: dict | None = None):
return await connector.execute(entity, action, params or {})
```
## Full documentation
### Entities and actions
This connector supports the following entities and actions. For more details, see this connector's [full reference documentation](REFERENCE.md).
| Entity | Actions |
|--------|---------|
| Files | [List](./REFERENCE.md#files-list), [Get](./REFERENCE.md#files-get), [Download](./REFERENCE.md#files-download) |
| Files Export | [Download](./REFERENCE.md#files-export-download) |
| Drives | [List](./REFERENCE.md#drives-list), [Get](./REFERENCE.md#drives-get) |
| Permissions | [List](./REFERENCE.md#permissions-list), [Get](./REFERENCE.md#permissions-get) |
| Comments | [List](./REFERENCE.md#comments-list), [Get](./REFERENCE.md#comments-get) |
| Replies | [List](./REFERENCE.md#replies-list), [Get](./REFERENCE.md#replies-get) |
| Revisions | [List](./REFERENCE.md#revisions-list), [Get](./REFERENCE.md#revisions-get) |
| Changes | [List](./REFERENCE.md#changes-list) |
| Changes Start Page Token | [Get](./REFERENCE.md#changes-start-page-token-get) |
| About | [Get](./REFERENCE.md#about-get) |
### Authentication
For all authentication options, see the connector's [authentication documentation](AUTH.md).
### Google-Drive API docs
See the official [Google-Drive API reference](https://developers.google.com/workspace/drive/api/reference/rest/v3).
## Version information
- **Package version:** 0.1.76
- **Connector version:** 0.1.8
- **Generated with Connector SDK commit SHA:** cb4380e76ac5cbc67b9089f94522be1bbe9f8d73
- **Changelog:** [View changelog](https://github.com/airbytehq/airbyte-agent-connectors/blob/main/connectors/google-drive/CHANGELOG.md) | text/markdown | null | Airbyte <contact@airbyte.io> | null | null | Elastic-2.0 | agent, ai, airbyte, api, connector, data-integration, google-drive, llm, mcp | [
"Development Status :: 3 - Alpha",
"Intended Audience :: Developers",
"License :: Other/Proprietary License",
"Operating System :: OS Independent",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.13",
"Topic :: Software Development :: Libraries :: Application Frameworks",
"Topic :: Software Development :: Libraries :: Python Modules",
"Typing :: Typed"
] | [] | null | null | >=3.13 | [] | [] | [] | [
"httpx>=0.24.0",
"jinja2>=3.0.0",
"jsonpath-ng>=1.6.1",
"jsonref>=1.1.0",
"opentelemetry-api>=1.37.0",
"opentelemetry-sdk>=1.37.0",
"pydantic>=2.0.0",
"python-dotenv>=1.0.0",
"pyyaml>=6.0",
"segment-analytics-python>=2.2.0"
] | [] | [] | [] | [
"Homepage, https://github.com/airbytehq/airbyte-agent-connectors",
"Documentation, https://docs.airbyte.com/ai-agents/",
"Repository, https://github.com/airbytehq/airbyte-agent-connectors",
"Issues, https://github.com/airbytehq/airbyte-agent-connectors/issues"
] | twine/6.2.0 CPython/3.13.11 | 2026-02-20T02:13:01.934036 | airbyte_agent_google_drive-0.1.76.tar.gz | 155,130 | 88/1f/3d690c441af60b74af63c98f73b4f83def7664c06c0f0c7d478781149039/airbyte_agent_google_drive-0.1.76.tar.gz | source | sdist | null | false | 2e037196e85e7fcaba97410ddec58cae | a7acd47a581dedc9b1c2f4f5f0318130bcc841e7bfeae750d563b16a1cc16b60 | 881f3d690c441af60b74af63c98f73b4f83def7664c06c0f0c7d478781149039 | null | [] | 325 |
2.4 | airbyte-agent-freshdesk | 0.1.3 | Airbyte Freshdesk Connector for AI platforms | # Freshdesk
The Freshdesk agent connector is a Python package that equips AI agents to interact with Freshdesk through strongly typed, well-documented tools. It's ready to use directly in your Python app, in an agent framework, or exposed through an MCP.
Connector for the Freshdesk customer support platform API (v2). Provides read access to helpdesk data including tickets, contacts, agents, groups, companies, roles, satisfaction ratings, surveys, time entries, and ticket fields. Freshdesk is a cloud-based customer support solution that enables companies to manage customer conversations across email, phone, chat, and social media.
## Example questions
The Freshdesk connector is optimized to handle prompts like these.
- List all open tickets in Freshdesk
- Show me all agents in the support team
- List all groups configured in Freshdesk
- Get the details of ticket #26
- Show me all companies in Freshdesk
- List all roles defined in the helpdesk
- Show me the ticket fields and their options
- List time entries for tickets
- What are the high priority tickets from last week?
- Which tickets have breached their SLA due date?
- Show me tickets assigned to agent \{agent_name\}
- Find all tickets from company \{company_name\}
- How many tickets were created this month by status?
- What are the satisfaction ratings for resolved tickets?
## Unsupported questions
The Freshdesk connector isn't currently able to handle prompts like these.
- Create a new ticket in Freshdesk
- Update the status of ticket #\{ticket_id\}
- Delete a contact from Freshdesk
- Assign a ticket to a different agent
## Installation
```bash
uv pip install airbyte-agent-freshdesk
```
## Usage
Connectors can run in open source or hosted mode.
### Open source
In open source mode, you provide API credentials directly to the connector.
```python
from airbyte_agent_freshdesk import FreshdeskConnector
from airbyte_agent_freshdesk.models import FreshdeskAuthConfig
connector = FreshdeskConnector(
auth_config=FreshdeskAuthConfig(
api_key="<Your Freshdesk API key (found in Profile Settings)>"
)
)
@agent.tool_plain # assumes you're using Pydantic AI
@FreshdeskConnector.tool_utils
async def freshdesk_execute(entity: str, action: str, params: dict | None = None):
return await connector.execute(entity, action, params or {})
```
### Hosted
In hosted mode, API credentials are stored securely in Airbyte Cloud. You provide your Airbyte credentials instead.
If your Airbyte client can access multiple organizations, also set `organization_id`.
This example assumes you've already authenticated your connector with Airbyte. See [Authentication](AUTH.md) to learn more about authenticating. If you need a step-by-step guide, see the [hosted execution tutorial](https://docs.airbyte.com/ai-agents/quickstarts/tutorial-hosted).
```python
from airbyte_agent_freshdesk import FreshdeskConnector, AirbyteAuthConfig
connector = FreshdeskConnector(
auth_config=AirbyteAuthConfig(
customer_name="<your_customer_name>",
organization_id="<your_organization_id>", # Optional for multi-org clients
airbyte_client_id="<your-client-id>",
airbyte_client_secret="<your-client-secret>"
)
)
@agent.tool_plain # assumes you're using Pydantic AI
@FreshdeskConnector.tool_utils
async def freshdesk_execute(entity: str, action: str, params: dict | None = None):
return await connector.execute(entity, action, params or {})
```
## Full documentation
### Entities and actions
This connector supports the following entities and actions. For more details, see this connector's [full reference documentation](REFERENCE.md).
| Entity | Actions |
|--------|---------|
| Tickets | [List](./REFERENCE.md#tickets-list), [Get](./REFERENCE.md#tickets-get), [Search](./REFERENCE.md#tickets-search) |
| Contacts | [List](./REFERENCE.md#contacts-list), [Get](./REFERENCE.md#contacts-get) |
| Agents | [List](./REFERENCE.md#agents-list), [Get](./REFERENCE.md#agents-get), [Search](./REFERENCE.md#agents-search) |
| Groups | [List](./REFERENCE.md#groups-list), [Get](./REFERENCE.md#groups-get), [Search](./REFERENCE.md#groups-search) |
| Companies | [List](./REFERENCE.md#companies-list), [Get](./REFERENCE.md#companies-get) |
| Roles | [List](./REFERENCE.md#roles-list), [Get](./REFERENCE.md#roles-get) |
| Satisfaction Ratings | [List](./REFERENCE.md#satisfaction-ratings-list) |
| Surveys | [List](./REFERENCE.md#surveys-list) |
| Time Entries | [List](./REFERENCE.md#time-entries-list) |
| Ticket Fields | [List](./REFERENCE.md#ticket-fields-list) |
### Authentication
For all authentication options, see the connector's [authentication documentation](AUTH.md).
### Freshdesk API docs
See the official [Freshdesk API reference](https://developers.freshdesk.com/api/).
## Version information
- **Package version:** 0.1.3
- **Connector version:** 1.0.1
- **Generated with Connector SDK commit SHA:** cb4380e76ac5cbc67b9089f94522be1bbe9f8d73
- **Changelog:** [View changelog](https://github.com/airbytehq/airbyte-agent-connectors/blob/main/connectors/freshdesk/CHANGELOG.md) | text/markdown | null | Airbyte <contact@airbyte.io> | null | null | Elastic-2.0 | agent, ai, airbyte, api, connector, data-integration, freshdesk, llm, mcp | [
"Development Status :: 3 - Alpha",
"Intended Audience :: Developers",
"License :: Other/Proprietary License",
"Operating System :: OS Independent",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.13",
"Topic :: Software Development :: Libraries :: Application Frameworks",
"Topic :: Software Development :: Libraries :: Python Modules",
"Typing :: Typed"
] | [] | null | null | >=3.13 | [] | [] | [] | [
"httpx>=0.24.0",
"jinja2>=3.0.0",
"jsonpath-ng>=1.6.1",
"jsonref>=1.1.0",
"opentelemetry-api>=1.37.0",
"opentelemetry-sdk>=1.37.0",
"pydantic>=2.0.0",
"python-dotenv>=1.0.0",
"pyyaml>=6.0",
"segment-analytics-python>=2.2.0"
] | [] | [] | [] | [
"Homepage, https://github.com/airbytehq/airbyte-agent-connectors",
"Documentation, https://docs.airbyte.com/ai-agents/",
"Repository, https://github.com/airbytehq/airbyte-agent-connectors",
"Issues, https://github.com/airbytehq/airbyte-agent-connectors/issues"
] | twine/6.2.0 CPython/3.13.11 | 2026-02-20T02:13:00.024116 | airbyte_agent_freshdesk-0.1.3.tar.gz | 143,827 | 43/be/6789c28113342ce699991073400ad44502197f6dc5a60e0c3810855e2047/airbyte_agent_freshdesk-0.1.3.tar.gz | source | sdist | null | false | 8abc528350c888e9cc9f5af20ddb7bb3 | f8ac651b67b6975af03d388379897ebd16aa65879c05ae368e7361b875ed779a | 43be6789c28113342ce699991073400ad44502197f6dc5a60e0c3810855e2047 | null | [] | 321 |
2.4 | airbyte-agent-asana | 0.19.109 | Airbyte Asana Connector for AI platforms | # Asana
The Asana agent connector is a Python package that equips AI agents to interact with Asana through strongly typed, well-documented tools. It's ready to use directly in your Python app, in an agent framework, or exposed through an MCP.
Asana is a work management platform that helps teams organize, track, and manage
projects and tasks. This connector provides access to tasks, projects, workspaces,
teams, and users for project tracking, workload analysis, and productivity insights.
## Example questions
The Asana connector is optimized to handle prompts like these.
- What tasks are assigned to me this week?
- List all projects in my workspace
- Show me the tasks for a recent project
- Who are the team members in one of my teams?
- Show me details of my current workspace and its users
- Summarize my team's workload and task completion rates
- Find all tasks related to \{client_name\} across my workspaces
- Analyze the most active projects in my workspace last month
- Compare task completion rates between my different teams
- Identify overdue tasks across all my projects
## Unsupported questions
The Asana connector isn't currently able to handle prompts like these.
- Create a new task for [TeamMember]
- Update the priority of this task
- Delete the project [ProjectName]
- Schedule a new team meeting
- Add a new team member to [Workspace]
- Move this task to another project
## Installation
```bash
uv pip install airbyte-agent-asana
```
## Usage
Connectors can run in open source or hosted mode.
### Open source
In open source mode, you provide API credentials directly to the connector.
```python
from airbyte_agent_asana import AsanaConnector
from airbyte_agent_asana.models import AsanaPersonalAccessTokenAuthConfig
connector = AsanaConnector(
auth_config=AsanaPersonalAccessTokenAuthConfig(
token="<Your Asana Personal Access Token. Generate one at https://app.asana.com/0/my-apps>"
)
)
@agent.tool_plain # assumes you're using Pydantic AI
@AsanaConnector.tool_utils
async def asana_execute(entity: str, action: str, params: dict | None = None):
return await connector.execute(entity, action, params or {})
```
### Hosted
In hosted mode, API credentials are stored securely in Airbyte Cloud. You provide your Airbyte credentials instead.
If your Airbyte client can access multiple organizations, also set `organization_id`.
This example assumes you've already authenticated your connector with Airbyte. See [Authentication](AUTH.md) to learn more about authenticating. If you need a step-by-step guide, see the [hosted execution tutorial](https://docs.airbyte.com/ai-agents/quickstarts/tutorial-hosted).
```python
from airbyte_agent_asana import AsanaConnector, AirbyteAuthConfig
connector = AsanaConnector(
auth_config=AirbyteAuthConfig(
customer_name="<your_customer_name>",
organization_id="<your_organization_id>", # Optional for multi-org clients
airbyte_client_id="<your-client-id>",
airbyte_client_secret="<your-client-secret>"
)
)
@agent.tool_plain # assumes you're using Pydantic AI
@AsanaConnector.tool_utils
async def asana_execute(entity: str, action: str, params: dict | None = None):
return await connector.execute(entity, action, params or {})
```
## Full documentation
### Entities and actions
This connector supports the following entities and actions. For more details, see this connector's [full reference documentation](REFERENCE.md).
| Entity | Actions |
|--------|---------|
| Tasks | [List](./REFERENCE.md#tasks-list), [Get](./REFERENCE.md#tasks-get), [Search](./REFERENCE.md#tasks-search) |
| Project Tasks | [List](./REFERENCE.md#project-tasks-list) |
| Workspace Task Search | [List](./REFERENCE.md#workspace-task-search-list) |
| Projects | [List](./REFERENCE.md#projects-list), [Get](./REFERENCE.md#projects-get), [Search](./REFERENCE.md#projects-search) |
| Task Projects | [List](./REFERENCE.md#task-projects-list) |
| Team Projects | [List](./REFERENCE.md#team-projects-list) |
| Workspace Projects | [List](./REFERENCE.md#workspace-projects-list) |
| Workspaces | [List](./REFERENCE.md#workspaces-list), [Get](./REFERENCE.md#workspaces-get), [Search](./REFERENCE.md#workspaces-search) |
| Users | [List](./REFERENCE.md#users-list), [Get](./REFERENCE.md#users-get), [Search](./REFERENCE.md#users-search) |
| Workspace Users | [List](./REFERENCE.md#workspace-users-list) |
| Team Users | [List](./REFERENCE.md#team-users-list) |
| Teams | [Get](./REFERENCE.md#teams-get), [Search](./REFERENCE.md#teams-search) |
| Workspace Teams | [List](./REFERENCE.md#workspace-teams-list) |
| User Teams | [List](./REFERENCE.md#user-teams-list) |
| Attachments | [List](./REFERENCE.md#attachments-list), [Get](./REFERENCE.md#attachments-get), [Download](./REFERENCE.md#attachments-download), [Search](./REFERENCE.md#attachments-search) |
| Workspace Tags | [List](./REFERENCE.md#workspace-tags-list) |
| Tags | [Get](./REFERENCE.md#tags-get), [Search](./REFERENCE.md#tags-search) |
| Project Sections | [List](./REFERENCE.md#project-sections-list) |
| Sections | [Get](./REFERENCE.md#sections-get), [Search](./REFERENCE.md#sections-search) |
| Task Subtasks | [List](./REFERENCE.md#task-subtasks-list) |
| Task Dependencies | [List](./REFERENCE.md#task-dependencies-list) |
| Task Dependents | [List](./REFERENCE.md#task-dependents-list) |
### Authentication
For all authentication options, see the connector's [authentication documentation](AUTH.md).
### Asana API docs
See the official [Asana API reference](https://developers.asana.com/reference/rest-api-reference).
## Version information
- **Package version:** 0.19.109
- **Connector version:** 0.1.15
- **Generated with Connector SDK commit SHA:** cb4380e76ac5cbc67b9089f94522be1bbe9f8d73
- **Changelog:** [View changelog](https://github.com/airbytehq/airbyte-agent-connectors/blob/main/connectors/asana/CHANGELOG.md) | text/markdown | null | Airbyte <contact@airbyte.io> | null | null | Elastic-2.0 | agent, ai, airbyte, api, asana, connector, data-integration, llm, mcp | [
"Development Status :: 3 - Alpha",
"Intended Audience :: Developers",
"License :: Other/Proprietary License",
"Operating System :: OS Independent",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.13",
"Topic :: Software Development :: Libraries :: Application Frameworks",
"Topic :: Software Development :: Libraries :: Python Modules",
"Typing :: Typed"
] | [] | null | null | >=3.13 | [] | [] | [] | [
"httpx>=0.24.0",
"jinja2>=3.0.0",
"jsonpath-ng>=1.6.1",
"jsonref>=1.1.0",
"opentelemetry-api>=1.37.0",
"opentelemetry-sdk>=1.37.0",
"pydantic>=2.0.0",
"python-dotenv>=1.0.0",
"pyyaml>=6.0",
"segment-analytics-python>=2.2.0"
] | [] | [] | [] | [
"Homepage, https://github.com/airbytehq/airbyte-agent-connectors",
"Documentation, https://docs.airbyte.com/ai-agents/",
"Repository, https://github.com/airbytehq/airbyte-agent-connectors",
"Issues, https://github.com/airbytehq/airbyte-agent-connectors/issues"
] | twine/6.2.0 CPython/3.13.11 | 2026-02-20T02:12:57.960237 | airbyte_agent_asana-0.19.109.tar.gz | 150,709 | 22/ec/cdd4a8448e9bb86f65faa939483f26f2aaf5622cdd89f1c15c0c0302946f/airbyte_agent_asana-0.19.109.tar.gz | source | sdist | null | false | 874c29c2952924c11d0ebac46faf4605 | 7d5a58d3e81c40b7bbe8be0de9a8608428d1743cf7da8bb57788f7e9cd573b9b | 22eccdd4a8448e9bb86f65faa939483f26f2aaf5622cdd89f1c15c0c0302946f | null | [] | 326 |
2.4 | airbyte-agent-zendesk-support | 0.18.109 | Airbyte Zendesk-Support Connector for AI platforms | # Zendesk-Support
The Zendesk-Support agent connector is a Python package that equips AI agents to interact with Zendesk-Support through strongly typed, well-documented tools. It's ready to use directly in your Python app, in an agent framework, or exposed through an MCP.
Zendesk Support is a customer service platform that helps businesses manage support
tickets, customer interactions, and help center content. This connector provides
access to tickets, users, organizations, groups, comments, attachments, automations,
triggers, macros, views, satisfaction ratings, SLA policies, and help center articles
for customer support analytics and service performance insights.
## Example questions
The Zendesk-Support connector is optimized to handle prompts like these.
- Show me the tickets assigned to me last week
- List all unresolved tickets
- Show me the details of recent tickets
- What are the top 5 support issues our organization has faced this month?
- Analyze the satisfaction ratings for our support team in the last 30 days
- Compare ticket resolution times across different support groups
- Identify the most common ticket fields used in our support workflow
- Summarize the performance of our SLA policies this quarter
## Unsupported questions
The Zendesk-Support connector isn't currently able to handle prompts like these.
- Create a new support ticket for \{customer\}
- Update the priority of this ticket
- Assign this ticket to \{team_member\}
- Delete these old support tickets
- Send an automatic response to \{customer\}
## Installation
```bash
uv pip install airbyte-agent-zendesk-support
```
## Usage
Connectors can run in open source or hosted mode.
### Open source
In open source mode, you provide API credentials directly to the connector.
```python
from airbyte_agent_zendesk_support import ZendeskSupportConnector
from airbyte_agent_zendesk_support.models import ZendeskSupportApiTokenAuthConfig
connector = ZendeskSupportConnector(
auth_config=ZendeskSupportApiTokenAuthConfig(
email="<Your Zendesk account email address>",
api_token="<Your Zendesk API token from Admin Center>"
)
)
@agent.tool_plain # assumes you're using Pydantic AI
@ZendeskSupportConnector.tool_utils
async def zendesk_support_execute(entity: str, action: str, params: dict | None = None):
return await connector.execute(entity, action, params or {})
```
### Hosted
In hosted mode, API credentials are stored securely in Airbyte Cloud. You provide your Airbyte credentials instead.
If your Airbyte client can access multiple organizations, also set `organization_id`.
This example assumes you've already authenticated your connector with Airbyte. See [Authentication](AUTH.md) to learn more about authenticating. If you need a step-by-step guide, see the [hosted execution tutorial](https://docs.airbyte.com/ai-agents/quickstarts/tutorial-hosted).
```python
from airbyte_agent_zendesk_support import ZendeskSupportConnector, AirbyteAuthConfig
connector = ZendeskSupportConnector(
auth_config=AirbyteAuthConfig(
customer_name="<your_customer_name>",
organization_id="<your_organization_id>", # Optional for multi-org clients
airbyte_client_id="<your-client-id>",
airbyte_client_secret="<your-client-secret>"
)
)
@agent.tool_plain # assumes you're using Pydantic AI
@ZendeskSupportConnector.tool_utils
async def zendesk_support_execute(entity: str, action: str, params: dict | None = None):
return await connector.execute(entity, action, params or {})
```
## Full documentation
### Entities and actions
This connector supports the following entities and actions. For more details, see this connector's [full reference documentation](REFERENCE.md).
| Entity | Actions |
|--------|---------|
| Tickets | [List](./REFERENCE.md#tickets-list), [Get](./REFERENCE.md#tickets-get), [Search](./REFERENCE.md#tickets-search) |
| Users | [List](./REFERENCE.md#users-list), [Get](./REFERENCE.md#users-get), [Search](./REFERENCE.md#users-search) |
| Organizations | [List](./REFERENCE.md#organizations-list), [Get](./REFERENCE.md#organizations-get), [Search](./REFERENCE.md#organizations-search) |
| Groups | [List](./REFERENCE.md#groups-list), [Get](./REFERENCE.md#groups-get), [Search](./REFERENCE.md#groups-search) |
| Ticket Comments | [List](./REFERENCE.md#ticket-comments-list), [Search](./REFERENCE.md#ticket-comments-search) |
| Attachments | [Get](./REFERENCE.md#attachments-get), [Download](./REFERENCE.md#attachments-download) |
| Ticket Audits | [List](./REFERENCE.md#ticket-audits-list), [Search](./REFERENCE.md#ticket-audits-search) |
| Ticket Metrics | [List](./REFERENCE.md#ticket-metrics-list), [Search](./REFERENCE.md#ticket-metrics-search) |
| Ticket Fields | [List](./REFERENCE.md#ticket-fields-list), [Get](./REFERENCE.md#ticket-fields-get), [Search](./REFERENCE.md#ticket-fields-search) |
| Brands | [List](./REFERENCE.md#brands-list), [Get](./REFERENCE.md#brands-get), [Search](./REFERENCE.md#brands-search) |
| Views | [List](./REFERENCE.md#views-list), [Get](./REFERENCE.md#views-get) |
| Macros | [List](./REFERENCE.md#macros-list), [Get](./REFERENCE.md#macros-get) |
| Triggers | [List](./REFERENCE.md#triggers-list), [Get](./REFERENCE.md#triggers-get) |
| Automations | [List](./REFERENCE.md#automations-list), [Get](./REFERENCE.md#automations-get) |
| Tags | [List](./REFERENCE.md#tags-list), [Search](./REFERENCE.md#tags-search) |
| Satisfaction Ratings | [List](./REFERENCE.md#satisfaction-ratings-list), [Get](./REFERENCE.md#satisfaction-ratings-get), [Search](./REFERENCE.md#satisfaction-ratings-search) |
| Group Memberships | [List](./REFERENCE.md#group-memberships-list) |
| Organization Memberships | [List](./REFERENCE.md#organization-memberships-list) |
| SLA Policies | [List](./REFERENCE.md#sla-policies-list), [Get](./REFERENCE.md#sla-policies-get) |
| Ticket Forms | [List](./REFERENCE.md#ticket-forms-list), [Get](./REFERENCE.md#ticket-forms-get), [Search](./REFERENCE.md#ticket-forms-search) |
| Articles | [List](./REFERENCE.md#articles-list), [Get](./REFERENCE.md#articles-get) |
| Article Attachments | [List](./REFERENCE.md#article-attachments-list), [Get](./REFERENCE.md#article-attachments-get), [Download](./REFERENCE.md#article-attachments-download) |
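As a hedged illustration, each row of the table above corresponds to an `execute(entity, action, params)` call. The entity and action names come from the table; the `params` keys below are hypothetical, not the connector's documented parameter names:

```python
# Hypothetical sketch of how table rows translate into execute() calls.
# The params keys below are illustrative, not the connector's documented names.
calls = [
    ("tickets", "list", {}),
    ("tickets", "get", {"ticket_id": 123}),
    ("users", "search", {"query": "role:agent"}),
]

def describe(entity: str, action: str, params: dict) -> str:
    # In real code this would be: await connector.execute(entity, action, params)
    return f"execute({entity!r}, {action!r}, {params!r})"

for entity, action, params in calls:
    print(describe(entity, action, params))
```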
### Authentication
For all authentication options, see the connector's [authentication documentation](AUTH.md).
### Zendesk-Support API docs
See the official [Zendesk-Support API reference](https://developer.zendesk.com/api-reference/ticketing/introduction/).
## Version information
- **Package version:** 0.18.109
- **Connector version:** 0.1.15
- **Generated with Connector SDK commit SHA:** cb4380e76ac5cbc67b9089f94522be1bbe9f8d73
- **Changelog:** [View changelog](https://github.com/airbytehq/airbyte-agent-connectors/blob/main/connectors/zendesk-support/CHANGELOG.md) | text/markdown | null | Airbyte <contact@airbyte.io> | null | null | Elastic-2.0 | agent, ai, airbyte, api, connector, data-integration, llm, mcp, zendesk-support | [
"Development Status :: 3 - Alpha",
"Intended Audience :: Developers",
"License :: Other/Proprietary License",
"Operating System :: OS Independent",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.13",
"Topic :: Software Development :: Libraries :: Application Frameworks",
"Topic :: Software Development :: Libraries :: Python Modules",
"Typing :: Typed"
] | [] | null | null | >=3.13 | [] | [] | [] | [
"httpx>=0.24.0",
"jinja2>=3.0.0",
"jsonpath-ng>=1.6.1",
"jsonref>=1.1.0",
"opentelemetry-api>=1.37.0",
"opentelemetry-sdk>=1.37.0",
"pydantic>=2.0.0",
"python-dotenv>=1.0.0",
"pyyaml>=6.0",
"segment-analytics-python>=2.2.0"
] | [] | [] | [] | [
"Homepage, https://github.com/airbytehq/airbyte-agent-connectors",
"Documentation, https://docs.airbyte.com/ai-agents/",
"Repository, https://github.com/airbytehq/airbyte-agent-connectors",
"Issues, https://github.com/airbytehq/airbyte-agent-connectors/issues"
] | twine/6.2.0 CPython/3.13.11 | 2026-02-20T02:12:54.335775 | airbyte_agent_zendesk_support-0.18.109.tar.gz | 187,459 | 27/59/d191f304828a6085dee637c7081eda40ca68adf2d1940ecbdc008ddb5285/airbyte_agent_zendesk_support-0.18.109.tar.gz | source | sdist | null | false | d1c18c17ba33a5ae5a1924ba6f0d088a | cd1eccef732bfedd399f3681072ea15b12777e10656013cd09c89ed17d8414e4 | 2759d191f304828a6085dee637c7081eda40ca68adf2d1940ecbdc008ddb5285 | null | [] | 329 |
2.4 | airbyte-agent-hubspot | 0.15.107 | Airbyte Hubspot Connector for AI platforms | # Hubspot
The Hubspot agent connector is a Python package that equips AI agents to interact with Hubspot through strongly typed, well-documented tools. It's ready to use directly in your Python app, in an agent framework, or exposed through an MCP.
HubSpot is a CRM platform that provides tools for marketing, sales, customer service,
and content management. This connector provides access to contacts, companies, deals,
tickets, and custom objects for customer relationship management and sales analytics.
## Example questions
The Hubspot connector is optimized to handle prompts like these.
- List recent deals
- List recent tickets
- List companies in my CRM
- List contacts in my CRM
- Show me all deals from \{company\} this quarter
- What are the top 5 most valuable deals in my pipeline right now?
- Search for contacts in the marketing department at \{company\}
- Give me an overview of my sales team's deals in the last 30 days
- Identify the most active companies in our CRM this month
- Compare the number of deals closed by different sales representatives
- Find all tickets related to a specific product issue and summarize their status
## Unsupported questions
The Hubspot connector isn't currently able to handle prompts like these.
- Create a new contact record for \{person\}
- Update the contact information for \{customer\}
- Delete the ticket from last week's support case
- Schedule a follow-up task for this deal
- Send an email to all contacts in the sales pipeline
## Installation
```bash
uv pip install airbyte-agent-hubspot
```
## Usage
Connectors can run in open source or hosted mode.
### Open source
In open source mode, you provide API credentials directly to the connector.
```python
from airbyte_agent_hubspot import HubspotConnector
from airbyte_agent_hubspot.models import HubspotAuthConfig
connector = HubspotConnector(
auth_config=HubspotAuthConfig(
client_id="<Your HubSpot OAuth2 Client ID>",
client_secret="<Your HubSpot OAuth2 Client Secret>",
refresh_token="<Your HubSpot OAuth2 Refresh Token>",
access_token="<Your HubSpot OAuth2 Access Token (optional if refresh_token is provided)>"
)
)
@agent.tool_plain # assumes you're using Pydantic AI
@HubspotConnector.tool_utils
async def hubspot_execute(entity: str, action: str, params: dict | None = None):
return await connector.execute(entity, action, params or {})
```
### Hosted
In hosted mode, API credentials are stored securely in Airbyte Cloud. You provide your Airbyte credentials instead.
If your Airbyte client can access multiple organizations, also set `organization_id`.
This example assumes you've already authenticated your connector with Airbyte. See [Authentication](AUTH.md) to learn more about authenticating. If you need a step-by-step guide, see the [hosted execution tutorial](https://docs.airbyte.com/ai-agents/quickstarts/tutorial-hosted).
```python
from airbyte_agent_hubspot import HubspotConnector, AirbyteAuthConfig
connector = HubspotConnector(
auth_config=AirbyteAuthConfig(
customer_name="<your_customer_name>",
organization_id="<your_organization_id>", # Optional for multi-org clients
airbyte_client_id="<your-client-id>",
airbyte_client_secret="<your-client-secret>"
)
)
@agent.tool_plain # assumes you're using Pydantic AI
@HubspotConnector.tool_utils
async def hubspot_execute(entity: str, action: str, params: dict | None = None):
return await connector.execute(entity, action, params or {})
```
## Full documentation
### Entities and actions
This connector supports the following entities and actions. For more details, see this connector's [full reference documentation](REFERENCE.md).
| Entity | Actions |
|--------|---------|
| Contacts | [List](./REFERENCE.md#contacts-list), [Get](./REFERENCE.md#contacts-get), [API Search](./REFERENCE.md#contacts-api_search), [Search](./REFERENCE.md#contacts-search) |
| Companies | [List](./REFERENCE.md#companies-list), [Get](./REFERENCE.md#companies-get), [API Search](./REFERENCE.md#companies-api_search), [Search](./REFERENCE.md#companies-search) |
| Deals | [List](./REFERENCE.md#deals-list), [Get](./REFERENCE.md#deals-get), [API Search](./REFERENCE.md#deals-api_search), [Search](./REFERENCE.md#deals-search) |
| Tickets | [List](./REFERENCE.md#tickets-list), [Get](./REFERENCE.md#tickets-get), [API Search](./REFERENCE.md#tickets-api_search) |
| Schemas | [List](./REFERENCE.md#schemas-list), [Get](./REFERENCE.md#schemas-get) |
| Objects | [List](./REFERENCE.md#objects-list), [Get](./REFERENCE.md#objects-get) |
### Authentication
For all authentication options, see the connector's [authentication documentation](AUTH.md).
### Hubspot API docs
See the official [Hubspot API reference](https://developers.hubspot.com/docs/api/crm/understanding-the-crm).
## Version information
- **Package version:** 0.15.107
- **Connector version:** 0.1.12
- **Generated with Connector SDK commit SHA:** cb4380e76ac5cbc67b9089f94522be1bbe9f8d73
- **Changelog:** [View changelog](https://github.com/airbytehq/airbyte-agent-connectors/blob/main/connectors/hubspot/CHANGELOG.md) | text/markdown | null | Airbyte <contact@airbyte.io> | null | null | Elastic-2.0 | agent, ai, airbyte, api, connector, data-integration, hubspot, llm, mcp | [
"Development Status :: 3 - Alpha",
"Intended Audience :: Developers",
"License :: Other/Proprietary License",
"Operating System :: OS Independent",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.13",
"Topic :: Software Development :: Libraries :: Application Frameworks",
"Topic :: Software Development :: Libraries :: Python Modules",
"Typing :: Typed"
] | [] | null | null | >=3.13 | [] | [] | [] | [
"httpx>=0.24.0",
"jinja2>=3.0.0",
"jsonpath-ng>=1.6.1",
"jsonref>=1.1.0",
"opentelemetry-api>=1.37.0",
"opentelemetry-sdk>=1.37.0",
"pydantic>=2.0.0",
"python-dotenv>=1.0.0",
"pyyaml>=6.0",
"segment-analytics-python>=2.2.0"
] | [] | [] | [] | [
"Homepage, https://github.com/airbytehq/airbyte-agent-connectors",
"Documentation, https://docs.airbyte.com/ai-agents/",
"Repository, https://github.com/airbytehq/airbyte-agent-connectors",
"Issues, https://github.com/airbytehq/airbyte-agent-connectors/issues"
] | twine/6.2.0 CPython/3.13.11 | 2026-02-20T02:12:52.074760 | airbyte_agent_hubspot-0.15.107.tar.gz | 143,450 | 5a/dd/06c1bcb78ff08e5c82a4da279c4bd0a799b494de1249e5d5c6fad448ab8f/airbyte_agent_hubspot-0.15.107.tar.gz | source | sdist | null | false | aab3656d2e58d72b00c52cc9280bb61e | e02dde9aac37009cedd7691e94ec2054bf2ba9fee9e8678fd493fd35ccb262d8 | 5add06c1bcb78ff08e5c82a4da279c4bd0a799b494de1249e5d5c6fad448ab8f | null | [] | 330 |
2.4 | leanprompt | 0.3.3 | A FastAPI-based LLM integration framework for engineering-centric AI development. | # LeanPrompt (Backend)
**LeanPrompt** is an engineering-centric LLM integration framework based on FastAPI. It helps you use LLMs as reliable and predictable software components, not just text generators.
## ✨ Key Features
* **FastAPI Native:** Integrates instantly into existing FastAPI apps as a plugin.
* **Markdown-Driven Prompts:** Manage prompts as `.md` files, separated from code. Filenames become API paths.
* **Session-Based Context Caching:** Saves token costs by sending prompts only at the start of a session and then sending only input deltas.
* **Output Guardrails:** Built-in output validation and automatic retry logic via Pydantic models.
* **WebSocket First:** Highly optimized WebSocket support for real-time streaming feedback.
## 🚀 Quick Start
### Installation
```bash
pip install leanprompt
```
### Basic Usage
```python
from fastapi import FastAPI
from leanprompt import LeanPrompt, Guard
from pydantic import BaseModel
import os
app = FastAPI()
# Initialize LeanPrompt with your preferred provider
# Configure via environment variable: LEANPROMPT_LLM_PROVIDER="provider|api_key"
provider_env = os.getenv("LEANPROMPT_LLM_PROVIDER", "openai|dummy_key")
provider_name, api_key = provider_env.split("|", 1)
lp = LeanPrompt(app, provider=provider_name, prompt_dir="prompts", api_key=api_key)
# Define output model for validation
class CalculationResult(BaseModel):
result: int
# Create a calculator endpoint
@lp.route("/calc/add", prompt_file="add.md")
@Guard.validate(CalculationResult)
async def add(user_input: str):
"""Performs addition based on user input."""
pass # LeanPrompt handles the logic
```
### API Prefix and WebSocket Path
You can apply a shared prefix to all LeanPrompt routes and the WebSocket endpoint:
```python
app = FastAPI()
lp = LeanPrompt(
app,
provider=provider_name,
prompt_dir="prompts",
api_key=api_key,
api_prefix="/api",
ws_path="ws", # relative -> /api/ws/{client_id}
)
@lp.route("/calc/add", prompt_file="add.md")
async def add(user_input: str):
pass
```
Clients can keep using the same LeanPrompt path value (`/calc/add`) while connecting to
`ws://localhost:8000/api/ws/{client_id}`.
Using an absolute `ws_path` (e.g., `"/ws"`) keeps the WebSocket route outside the
`api_prefix`. Avoid `ws_path="/"` to prevent route collisions.
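The prefix rules above can be sketched as a small resolver (hypothetical helper, not LeanPrompt's actual implementation):

```python
# Hypothetical sketch of the ws_path/api_prefix resolution rules described above.
def resolve_ws_route(api_prefix: str, ws_path: str) -> str:
    if ws_path.startswith("/"):
        base = ws_path  # absolute path: stays outside api_prefix
    else:
        base = f"{api_prefix.rstrip('/')}/{ws_path}"  # relative: joined under prefix
    return f"{base.rstrip('/')}/{{client_id}}"

print(resolve_ws_route("/api", "ws"))   # /api/ws/{client_id}
print(resolve_ws_route("/api", "/ws"))  # /ws/{client_id}
```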
If you already configure a FastAPI router prefix, LeanPrompt can attach to it directly:
```python
app = FastAPI()
api = FastAPI()
app.mount("/api", api)
lp = LeanPrompt(
api,
provider=provider_name,
prompt_dir="prompts",
api_key=api_key,
ws_path="/ws", # -> /api/ws/{client_id}
)
```
### JWT Annotation Example
LeanPrompt routes can reuse a JWT validator annotation for HTTP requests:
```python
from fastapi import Request
from leanprompt import Guard
def require_jwt(request: Request) -> bool:
# Example only. Insecure for production; validate signature, expiry, and claims.
# Example: jwt.decode(token, key, algorithms=["HS256"])
return bool(request.headers.get("authorization"))
@lp.route("/secure/add", prompt_file="add.md")
@Guard.auth(require_jwt)
@Guard.validate(CalculationResult)
async def secure_add(user_input: str):
pass
```
For WebSocket authentication, pass a validation hook when you construct `LeanPrompt`:
```python
from fastapi import WebSocket
def require_ws_jwt(websocket: WebSocket) -> bool:
# Example only. Insecure for production; validate signature, expiry, and claims.
# Example: jwt.decode(token, key, algorithms=["HS256"])
return bool(websocket.headers.get("authorization"))
lp = LeanPrompt(
app,
provider=provider_name,
prompt_dir="prompts",
api_key=api_key,
ws_auth=require_ws_jwt,
)
```
### WebSocket Interceptors
You can intercept inbound/outbound WebSocket messages for metering, auditing, or billing.
If the request interceptor returns `False` or `{"error": "..."}`, the request is blocked and
the error payload is returned immediately.
Interceptor signature:
```python
def interceptor(websocket: WebSocket, event: dict):
...
```
Event payload shape:
```json
{
"direction": "inbound" | "outbound",
"client_id": "...",
"path": "/route",
"payload": { "path": "/route", "message": "..." } | { "response": "...", "path": "/route" },
"raw": "{...}",
"byte_length": 123
}
```
Return behavior:
- Request interceptor (`ws_request_interceptor`)
- Return `None` / no return: request continues to normal processing.
- Return `False`: request is blocked and `{ "error": "WebSocket request rejected" }` is sent.
- Return `{ "error": "..." }`: request is blocked and the dict is sent as-is (path is added if missing).
- Raise an exception: treated as blocked and `{ "error": "<exception message>" }` is sent.
- Response interceptor (`ws_response_interceptor`)
- Return value is ignored; it never blocks the response.
- Exceptions are logged and the response still proceeds.
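The request-interceptor dispatch rules above can be sketched as follows (a minimal illustration with hypothetical names, not LeanPrompt's internals):

```python
# Hypothetical sketch of the request-interceptor return semantics described above.
def apply_request_interceptor(interceptor, websocket, event):
    """Return None to continue processing, or an error dict that blocks the request."""
    try:
        result = interceptor(websocket, event)
    except Exception as exc:
        # Exceptions are treated as blocked requests.
        return {"error": str(exc), "path": event.get("path")}
    if result is False:
        return {"error": "WebSocket request rejected", "path": event.get("path")}
    if isinstance(result, dict) and "error" in result:
        result.setdefault("path", event.get("path"))  # path is added if missing
        return result
    return None  # None / no return: continue normal processing

event = {"path": "/calc/add", "byte_length": 12}
assert apply_request_interceptor(lambda ws, e: None, None, event) is None
blocked = apply_request_interceptor(lambda ws, e: False, None, event)
assert blocked["error"] == "WebSocket request rejected"
```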
```python
from fastapi import WebSocket
billing_state = {
"credits": 10_000, # bytes
"usage": 0,
}
def ws_billing(websocket: WebSocket, event: dict):
# event keys: direction, client_id, path, payload, raw, byte_length
if event["direction"] == "inbound":
projected = billing_state["usage"] + event["byte_length"]
if projected > billing_state["credits"]:
return {"error": "Billing failed: insufficient credits", "code": "billing_failed"}
billing_state["usage"] = projected
else:
billing_state["usage"] += event["byte_length"]
lp = LeanPrompt(
app,
provider=provider_name,
prompt_dir="prompts",
api_key=api_key,
ws_request_interceptor=ws_billing,
ws_response_interceptor=ws_billing,
)
```
### Complete Example Server
Here's a full example with multiple endpoints:
```python
from fastapi import FastAPI
from leanprompt import LeanPrompt, Guard
from pydantic import BaseModel
import os
# Define output models
class MoodJson(BaseModel):
current_mood: str
confidence: float
reason: str
class CalculationResult(BaseModel):
result: int
app = FastAPI()
# Initialize LeanPrompt
provider_env = os.getenv("LEANPROMPT_LLM_PROVIDER", "openai|dummy_key")
provider_name, api_key = provider_env.split("|", 1)
lp = LeanPrompt(app, provider=provider_name, prompt_dir="examples/prompts", api_key=api_key)
@lp.route("/calc/add", prompt_file="add.md")
@Guard.validate(CalculationResult)
async def add(user_input: str):
"""Performs addition based on user input."""
pass
@lp.route("/calc/multiply", prompt_file="multiply.md")
@Guard.validate(CalculationResult)
async def multiply(user_input: str):
"""Performs multiplication based on user input."""
pass
@lp.route("/mood/json", prompt_file="mood_json.md")
@Guard.validate(MoodJson)
async def get_mood_json(user_input: str):
"""Returns the mood analysis in JSON format."""
pass
# Custom validation for markdown content
def validate_markdown_content(text: str):
if "##" not in text and "**" not in text:
raise ValueError("Response does not look like Markdown")
if "Meanings" not in text:
raise ValueError("Missing required section: 'Meanings'")
return {"raw_markdown": text}
@lp.route("/linguist", prompt_file="word_relationships.md")
@Guard.custom(validate_markdown_content)
async def analyze_words(user_input: str):
"""Analyzes word relationships and returns markdown."""
pass
```
### Using Local LLM (Ollama)
You can use local LLMs like Qwen 2.5 Coder or DeepSeek-Coder-V2 via [Ollama](https://ollama.com).
1. Install and run Ollama:
```bash
ollama run qwen2.5-coder
```
2. Initialize LeanPrompt with `ollama` provider:
```python
lp = LeanPrompt(
app,
provider="ollama",
base_url="http://localhost:11434", # Optional, defaults to this
model="qwen2.5-coder" # Specify the model name here or in prompt frontmatter
)
```
### Supported Providers
LeanPrompt supports multiple LLM providers:
- **OpenAI**: `provider="openai"`
- **DeepSeek**: `provider="deepseek"`
- **Google Gemini**: `provider="google"`
- **Ollama (Local)**: `provider="ollama"`
## 📂 Project Structure
```
leanprompt/
├── leanprompt/ # Main library code
│ ├── core.py # Core logic (FastAPI integration)
│ ├── guard.py # Validation logic
│ └── providers/ # LLM provider implementations
├── examples/ # Usage examples
│ ├── main.py # Example FastAPI app
│ └── prompts/ # Example prompt files
├── tests/ # Unit tests
├── setup.py # Package installation script
└── requirements.txt # Dependencies
```
## 🏃 Running the Example
1. **Install Dependencies:**
```bash
pip install -r requirements.txt
```
2. **Set Environment Variable:**
```bash
# Format: provider|api_key
export LEANPROMPT_LLM_PROVIDER="openai|your_openai_api_key"
# Or for DeepSeek:
export LEANPROMPT_LLM_PROVIDER="deepseek|your_deepseek_api_key"
```
3. **Run the Example Server:**
```bash
# Run from the root directory
export PYTHONPATH=$PYTHONPATH:$(pwd)
python examples/main.py
```
## 📡 API Examples
### HTTP Endpoints
**Calculation (Add):**
```bash
curl -X POST "http://localhost:8000/calc/add" \
-H "Content-Type: application/json" \
-d '{"message": "50 + 50"}'
# Response: {"result": 100}
```
**Calculation (Multiply):**
```bash
curl -X POST "http://localhost:8000/calc/multiply" \
-H "Content-Type: application/json" \
-d '{"message": "10 * 5"}'
# Response: {"result": 50}
```
**Mood Analysis (JSON):**
```bash
curl -X POST "http://localhost:8000/mood/json" \
-H "Content-Type: application/json" \
-d '{"message": "I am feeling great today!"}'
# Response: {"current_mood": "Happy", "confidence": 0.9, "reason": "Positive language used"}
```
**Word Relationship Analysis:**
```bash
curl -X POST "http://localhost:8000/linguist" \
-H "Content-Type: application/json" \
-d '{"message": "apple, banana, cherry"}'
# Response: Markdown formatted analysis with meanings and relationships
```
### WebSocket Interface
LeanPrompt provides a WebSocket interface for real-time streaming and context management:
```python
import websocket
import json
def on_message(ws, message):
response = json.loads(message)
print(f"Path: {response.get('path')}")
print(f"Response: {response['response']}")
ws = websocket.WebSocketApp(
"ws://localhost:8000/ws/test_client",
on_message=on_message
)
# Send different requests to test routing and context
ws.send(json.dumps({"path": "/add", "message": "10 + 20"}))
ws.send(json.dumps({"path": "/multiply", "message": "5 * 5"}))
ws.send(json.dumps({"path": "/linguist", "message": "apple, banana, cherry"}))
ws.send(json.dumps({"path": "/linguist", "message": "What color are they?"}))
```
### Context Chaining Example
The WebSocket interface maintains separate conversation contexts for each path:
```python
# First message to /linguist path
ws.send(json.dumps({
"path": "/linguist",
"message": "apple, banana, cherry"
}))
# Follow-up message - AI remembers the previous context
ws.send(json.dumps({
"path": "/linguist",
"message": "What color are they?"
}))
# Response will mention red, yellow, etc. showing context memory
```
## 📝 Prompt Templates
LeanPrompt uses markdown files with frontmatter for prompt templates:
**Example: `add.md`**
```markdown
---
model: deepseek-chat
temperature: 0.1
---
You are a calculator.
Perform the addition requested by the user.
Return the result in valid JSON format matching this schema:
{"result": integer}
Example:
User: 1 + 1
AI: {"result": 2}
Only return the JSON object.
```
**Example: `word_relationships.md`**
```markdown
---
model: deepseek-chat
---
You are a helpful linguist.
The user will provide three English words.
Please provide the meaning of each word and explain the relationships between them.
Return the response in Markdown format.
Use headers like "## Meanings" and "## Relationships" to structure your response.
```
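Splitting frontmatter from the prompt body can be sketched as follows (a minimal stdlib-only parser under stated assumptions; LeanPrompt's real parser may differ, e.g., with full YAML support):

```python
# Hypothetical sketch: separate `---` frontmatter from the prompt body.
def parse_prompt(text: str):
    meta = {}
    if text.startswith("---"):
        _, header, body = text.split("---", 2)
        for line in header.strip().splitlines():
            key, _, value = line.partition(":")
            meta[key.strip()] = value.strip()
        return meta, body.strip()
    return meta, text.strip()

meta, body = parse_prompt(
    "---\nmodel: deepseek-chat\ntemperature: 0.1\n---\nYou are a calculator."
)
assert meta == {"model": "deepseek-chat", "temperature": "0.1"}
assert body == "You are a calculator."
```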
## 🛡️ Output Validation
LeanPrompt provides built-in output validation using Pydantic models:
```python
from pydantic import BaseModel
from leanprompt import Guard
class MoodResponse(BaseModel):
mood: str
intensity: int # 1-10
notes: str
@lp.route("/mood", prompt_file="mood.md")
@Guard.validate(MoodResponse)
async def analyze_mood(user_input: str):
pass # Automatically validates and converts LLM response
```
For custom validation logic:
```python
def validate_markdown(text: str):
if "##" not in text:
raise ValueError("Invalid markdown format")
return text
@lp.route("/custom", prompt_file="custom.md")
@Guard.custom(validate_markdown)
async def custom_endpoint(user_input: str):
pass
```
| text/markdown | Youngjune Kwon | yjkwon@winm2m.com | null | null | null | null | [
"Development Status :: 3 - Alpha",
"Intended Audience :: Developers",
"License :: OSI Approved :: MIT License",
"Operating System :: OS Independent",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.8",
"Programming Language :: Python :: 3.9",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Framework :: FastAPI",
"Topic :: Software Development :: Libraries :: Python Modules",
"Topic :: Scientific/Engineering :: Artificial Intelligence"
] | [] | https://github.com/yjkwon_wm2m/leanprompt | null | >=3.8 | [] | [] | [] | [
"fastapi",
"uvicorn",
"pydantic",
"httpx",
"pyyaml",
"jinja2"
] | [] | [] | [] | [] | twine/6.2.0 CPython/3.11.14 | 2026-02-20T02:12:36.634089 | leanprompt-0.3.3.tar.gz | 18,422 | a6/06/c55f5f866943813890ca5b40892394c73b9ff68cc1ef2188e666fee7ccbf/leanprompt-0.3.3.tar.gz | source | sdist | null | false | 0519ed9e372ef6954ba3774381b816b2 | f2725de38d9092c2f257c91e5222264b7fc04d9b5d061ca61b33f21a0d8e1747 | a606c55f5f866943813890ca5b40892394c73b9ff68cc1ef2188e666fee7ccbf | null | [
"LICENSE"
] | 274 |
2.4 | devpy-cli | 1.0.4 | AI-powered DevOps CLI Assistant for local and remote Docker management | # DevPy CLI
An intelligent command-line assistant powered by multiple LLM providers (DeepSeek, OpenAI, Anthropic Claude, Google Gemini, Ollama/OpenWebUI) to manage Docker environments, both local and remote via SSH. Designed to simplify DevOps tasks with natural language, ensuring security and control.
## Key Features
* **Natural Language Interaction**: "Restart the nginx container", "Show database logs", "Monitor memory usage".
* **Local and Remote Docker Management**: Connect to your local machine or remote servers via SSH transparently.
* **Secure SSH Key Management**: Encrypted storage of SSH private keys via `cryptography.Fernet`. Import from `~/.ssh`.
* **Granular Permission System**:
* Interactive confirmation for critical operations (write/delete).
* Configurable whitelists.
* Persistent permission rules with hot-reload.
* "Dry-Run" mode to simulate executions.
* **Logging and Auditing**: Detailed logging of all operations and permission decisions in `logs/permissions.log`.
## System Requirements
* Python 3.11 or higher.
* Docker client installed (local) or SSH access to a server with Docker.
* Operating System: Windows, macOS, Linux.
## Installation
1. **Clone the repository:**
```bash
git clone <repo-url>
cd devpy-cli
```
2. **Create virtual environment (recommended):**
```bash
python -m venv venv
# Windows
.\venv\Scripts\activate
# Linux/Mac
source venv/bin/activate
```
3. **Install dependencies:**
```bash
pip install -e .
```
4. **Configure environment:**
Create a `.env` file in the root (you can copy the example if it exists) with your LLM API key:
```ini
DEEPSEEK_API_KEY=your_api_key_here
# Optional: LLM=chatgpt and OPENAI_API_KEY=...
```
## Usage Guide
### Start the CLI
```bash
# From the repository
python app.py
# Or if installed in editable mode
devpy-cli
```
On first run, if no `.env` file exists, an interactive setup wizard will guide you through:
- Choosing your LLM provider.
- Entering the API key.
- Optionally setting a custom base URL.
After setup, the CLI banner appears and you are asked whether to enable dry-run mode.
---
### CLI Mode (Local Docker)
Use this mode when you want to manage containers running on the same machine where DevPy CLI is installed.
- **Requirements**
- Docker is installed and the daemon is running locally.
- Your user can talk to the Docker socket (e.g., `docker ps` works from your shell).
- **Step-by-step**
1. Start the CLI (see above).
2. When prompted, choose whether to enable dry-run mode.
3. Ensure the mode is set to `local` (this is the default):
```bash
config mode local
```
4. Type natural language instructions, for example:
- `What containers are running?`
- `Restart the nginx container and show me its latest logs`
- `Create a redis container called cache`
5. When an action is potentially destructive (creating/stopping/removing containers, starting monitors, etc.), DevPy will:
- Show a preview of the Docker command.
- Ask for confirmation (once, for the command, or for the whole session).
- **Typical local use cases**
- Quickly inspecting and restarting local services from the terminal.
- Checking logs of a misbehaving container.
- Spinning up utility containers (e.g., Redis, Postgres) by name and image.
---
### SSH Mode (Remote Docker over SSH)
Use this mode to manage containers on a remote host over SSH, while still talking to the CLI locally.
- **Prerequisites**
- The remote server:
- Has Docker installed and running.
- Is reachable via SSH (e.g., `ssh user@host` works).
- You have an SSH private key that can authenticate to that server.
- **Step 1: Store your SSH key (encrypted)**
You can import keys from `~/.ssh` or add a specific file:
```bash
# Scan ~/.ssh for potential keys and import one
keys scan
# Or add a specific key path
keys add my-remote /path/to/id_rsa
# List stored keys
keys list
```
During `keys scan` or `keys add`, you are asked for a **passphrase for encryption**.
This passphrase is used to derive a key that encrypts your private key on disk (authenticated encryption via `cryptography.Fernet`).
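The passphrase-to-key derivation can be sketched with the standard library alone (a hypothetical illustration; the project's actual salt, iteration count, and KDF may differ):

```python
# Hypothetical sketch of deriving a Fernet-compatible key from a passphrase.
import base64
import hashlib
import os

def derive_fernet_key(passphrase: str, salt: bytes, iterations: int = 480_000) -> bytes:
    # PBKDF2-HMAC-SHA256 produces 32 raw bytes; Fernet keys are their
    # urlsafe-base64 encoding.
    raw = hashlib.pbkdf2_hmac("sha256", passphrase.encode(), salt, iterations)
    return base64.urlsafe_b64encode(raw)

salt = os.urandom(16)
key = derive_fernet_key("correct horse battery staple", salt)
assert len(key) == 44  # base64 encoding of 32 bytes
```

The same passphrase and salt always yield the same key, which is what lets the CLI decrypt the stored private key on demand.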
- **Step 2: Configure SSH connection**
In the CLI, run:
```bash
config ssh
```
You will be prompted for:
- **SSH Host** (e.g., `myserver.example.com` or `192.168.1.100`)
- **SSH User** (e.g., `ubuntu`, `root`, `deploy`)
- **SSH Key Name** (one of the names returned by `keys list`)
This information is stored in `config.json`.
- **Step 3: Switch to SSH mode**
```bash
config mode ssh
```
From now on, Docker operations happen against the remote host using the stored SSH configuration.
- **Step 4: Authenticate with your key**
When the backend needs to connect to the remote Docker daemon, it:
- Prompts for the passphrase you used when storing the key, **or**
- Uses the `DOCKER_SSH_PASSPHRASE` environment variable if it is set.
This decrypted key is written to a temporary file (with restricted permissions) and used only for the SSH connection.
- **Typical SSH use cases**
- Managing a remote Docker host from your laptop without logging in manually.
- Checking logs and restarting containers in staging/production environments.
- Monitoring memory usage of remote containers and triggering alerts.
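The temporary-key step described above (write the decrypted key with restricted permissions, use it, then delete it) can be sketched as follows. This is a minimal illustration of the pattern, not the project's actual implementation; the function name is hypothetical:

```python
import os
import tempfile

def write_temp_key(decrypted_key: bytes) -> str:
    """Write a decrypted private key to a temp file readable only by the owner."""
    fd, path = tempfile.mkstemp(prefix="devpy_ssh_")
    try:
        os.fchmod(fd, 0o600)  # restrict permissions before the secret hits disk
        os.write(fd, decrypted_key)
    finally:
        os.close(fd)
    return path

key_path = write_temp_key(b"-----BEGIN OPENSSH PRIVATE KEY-----\n...")
print(oct(os.stat(key_path).st_mode & 0o777))  # 0o600
os.remove(key_path)  # delete as soon as the SSH connection is done
```

On POSIX systems `tempfile.mkstemp` already creates the file with mode `0o600`; the explicit `fchmod` makes the intent visible.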
---
### Command Reference
#### Configuration Commands
Use these to configure how the CLI connects and which LLM it uses:
```bash
# Show or set connection mode
config mode # shows current mode (local or ssh)
config mode local # use local Docker
config mode ssh # use remote Docker over SSH
# Configure SSH details (host, user, key)
config ssh
# Re-run the LLM setup wizard and regenerate .env
config llm
```
#### SSH Key Management Commands
```bash
# Import keys from ~/.ssh (interactive)
keys scan
# Add a key manually
keys add <name> <path_to_private_key>
# List saved keys
keys list
# Delete a stored key
keys delete <name>
```
#### Permission Management Commands
Control what the agent is allowed to do:
```bash
# View current rules
permissions list
# Block container restarts permanently
permissions add restart_container deny
# Allow container creation (with optional parameters)
permissions add create_container allow
# Reset all persistent permission rules
permissions reset
```
During interactive confirmations, you can choose:
- `y` – allow once.
- `yc` – always allow this exact command during the session.
- `ys` – always allow this operation type during the session.
- `n` – deny.
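The four answers map naturally onto a small per-session cache. The sketch below is a hedged illustration of how `yc`/`ys` decisions could be remembered for the rest of a session; it is not DevPy's actual code, and the class and method names are hypothetical:

```python
class SessionPermissions:
    """Remember 'always allow' decisions for a single CLI session."""

    def __init__(self):
        self.allowed_commands = set()    # exact commands approved with 'yc'
        self.allowed_operations = set()  # operation types approved with 'ys'

    def is_preapproved(self, operation: str, command: str) -> bool:
        return command in self.allowed_commands or operation in self.allowed_operations

    def record(self, answer: str, operation: str, command: str) -> bool:
        """Apply one y/yc/ys/n answer; return True if the action may proceed."""
        if answer == "yc":
            self.allowed_commands.add(command)
        elif answer == "ys":
            self.allowed_operations.add(operation)
        return answer in {"y", "yc", "ys"}

perms = SessionPermissions()
perms.record("ys", "restart_container", "docker restart nginx")
print(perms.is_preapproved("restart_container", "docker restart web"))  # True
```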
---
### Interaction Examples with the Agent
Once configured, simply type what you need:
- *"What containers are running?"*
- *"Restart the 'web-app' container and show me its latest logs"*
- *"Create a redis container named 'my-redis'"*
- *"Alert me if memory usage of container 'api' exceeds 80%"*
The agent plans and executes one or more Docker operations, asking for permission when necessary.
---
### Dry-Run Mode
You can enable dry-run mode in two ways:
- At startup, when the CLI asks:
- Answer `y` to run in dry-run mode for the session.
- Via environment variable:
- Set `DRY_RUN=1` before starting the app.
In this mode, the agent **simulates** write actions (creating, deleting, restarting containers, starting monitors, etc.) without actually executing them.
The permission log still records what *would* have been executed.
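A dry-run gate like this is commonly implemented as a thin wrapper around each write action: log what *would* run, and only execute when dry-run is off. The sketch below uses hypothetical names and is not the project's code:

```python
import os

def is_dry_run() -> bool:
    """Match the documented truthy values for DRY_RUN."""
    return os.environ.get("DRY_RUN", "").strip().lower() in {"1", "true", "yes", "y"}

def run_write_action(description: str, action, audit_log: list):
    """Record the intended action; execute it only when dry-run is off."""
    audit_log.append({"action": description, "dry_run": is_dry_run()})
    if is_dry_run():
        return f"[dry-run] would execute: {description}"
    return action()

log = []
os.environ["DRY_RUN"] = "1"
print(run_write_action("docker restart nginx", lambda: "restarted", log))
# [dry-run] would execute: docker restart nginx
```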
---
### Built-in Tools
DevPy CLI exposes a set of Docker-focused tools that the agent can call to fulfill your requests:
- **check_resource**
Shows CPU, memory, and disk usage of the local host.
- **get_docker_logs**
Retrieves the last logs of a container (`tail` configurable).
- **list_containers**
Lists active Docker containers with their current status.
- **inspect_container**
Returns low-level attributes and configuration of a container.
- **restart_docker_container**
Restarts a container, going through the permission system before execution.
- **create_container**
Creates and starts a new container from a given image and name.
If the image is not present locally, it is automatically pulled first.
- **delete_container**
Stops and removes the specified container (with confirmation).
- **stop_container**
Gracefully stops a running container.
- **start_monitoring**
Starts a background memory monitor for a container and alerts if usage crosses a threshold.
- **exec_command**
Executes a shell command inside a container. Commands are sanitized to block chaining and substitution.
- **download_image**
Downloads (pulls) a Docker image from a registry.
- **delete_image**
Deletes a Docker image if it exists, behind the same permission and logging layer.
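Sanitizing `exec_command` input to block chaining and substitution can be approximated with a denylist check. This is an illustrative sketch of the idea, not the project's actual filter (a real filter would also consider quoting and encoding tricks):

```python
FORBIDDEN_PATTERNS = [";", "&&", "||", "|", "`", "$(", ">", "<", "\n"]

def sanitize_command(command: str) -> str:
    """Reject commands containing shell chaining or substitution tokens."""
    for pattern in FORBIDDEN_PATTERNS:
        if pattern in command:
            raise ValueError(f"forbidden pattern {pattern!r} in command")
    return command

print(sanitize_command("ls -la /app"))  # ls -la /app
try:
    sanitize_command("ls; rm -rf /")
except ValueError as e:
    print(e)  # forbidden pattern ';' in command
```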
---
## Authentication and Security
- **LLM API Authentication**
- The `.env` file created by the setup wizard stores:
- `LLM` – which provider/adapter to use.
- `<PROVIDER>_API_KEY` – the API key for that provider (for example `DEEPSEEK_API_KEY`, `OPENAI_API_KEY`, `ANTHROPIC_API_KEY`, `GOOGLE_API_KEY`).
- Optionally `LLM_BASE_URL` – custom base URL for compatible providers (including OpenAI-compatible proxies such as Ollama or OpenWebUI).
- You can re-run the wizard at any time with:
```bash
config llm
```
- **Supported LLM Providers**
- **DeepSeek** – uses `LLM=deepseek` and `DEEPSEEK_API_KEY`, with optional `LLM_BASE_URL` override.
- **OpenAI (ChatGPT / GPT-4o family)** – uses `LLM=chatgpt` and `OPENAI_API_KEY`.
- **Anthropic Claude** – uses `LLM=anthropic` or `LLM=claude` with `ANTHROPIC_API_KEY`.
- **Google Gemini** – uses `LLM=google` or `LLM=gemini` with `GOOGLE_API_KEY`.
- **Ollama / OpenWebUI** – uses `LLM=ollama` or `LLM=openwebui` and talks to an OpenAI-compatible endpoint:
- `LLM_BASE_URL`, `OLLAMA_BASE_URL`, or `OPENWEBUI_BASE_URL` define the base URL (e.g. `http://localhost:11434` or an OpenWebUI URL).
- `OLLAMA_MODEL` selects the local model (for example `llama3.1`).
- `OPENAI_API_KEY` can be any non-empty token (often not validated by Ollama/OpenWebUI).
- **SSH Key Encryption**
- Stored SSH keys live in `ssh_keys.enc`.
 - Each key is encrypted using a passphrase-derived key (PBKDF2 + `cryptography.Fernet`).
- The file permissions are hardened to allow read/write only for the current user.
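Deriving an encryption key from a passphrase with PBKDF2 looks roughly like this. The sketch covers only the derivation step, using the standard library; the iteration count and helper name are illustrative assumptions, and the actual project encrypts with `cryptography.Fernet`:

```python
import base64
import hashlib
import os

def derive_key(passphrase: str, salt: bytes) -> bytes:
    """Stretch a passphrase into a 32-byte key with PBKDF2-HMAC-SHA256."""
    raw = hashlib.pbkdf2_hmac("sha256", passphrase.encode(), salt, 480_000)
    return base64.urlsafe_b64encode(raw)  # Fernet expects a urlsafe-base64 key

salt = os.urandom(16)  # store alongside the ciphertext; never reuse across keys
key = derive_key("correct horse battery staple", salt)
print(len(key))  # 44 (32 bytes, base64-encoded)
```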
- **Runtime Environment Variables**
- `DRY_RUN` – if set to `1`, `true`, `yes`, or `y`, forces dry-run mode.
- `DOCKER_SSH_PASSPHRASE` – optional; if set, avoids interactive passphrase prompts for SSH keys.
- `DOCKER_SAFE_COMMANDS` – comma-separated list of operations that never prompt for confirmation.
- `DOCKER_CLI_USER` – overrides the username recorded in permission logs.
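Parsing a comma-separated variable like `DOCKER_SAFE_COMMANDS` is straightforward; the helper below is a hypothetical sketch of the pattern, not the project's code:

```python
import os

def safe_commands() -> set:
    """Read a comma-separated allowlist of operations that skip confirmation."""
    raw = os.environ.get("DOCKER_SAFE_COMMANDS", "")
    return {item.strip() for item in raw.split(",") if item.strip()}

os.environ["DOCKER_SAFE_COMMANDS"] = "list_containers, get_docker_logs"
print(sorted(safe_commands()))  # ['get_docker_logs', 'list_containers']
```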
- **Logging and Auditing**
- All operations go through a permission and logging layer.
- Logs are written as JSON lines to `logs/permissions.log`.
- Each entry includes timestamp, user, operation, arguments, decision, and optional command preview.
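A JSON-lines audit entry with those fields can be serialized in a few lines. The field names come from the description above; the function itself is an illustrative sketch, not the project's logger:

```python
import datetime
import json

def audit_entry(user, operation, arguments, decision, command_preview=None):
    """Serialize one permission decision as a JSON line."""
    entry = {
        "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "user": user,
        "operation": operation,
        "arguments": arguments,
        "decision": decision,
    }
    if command_preview:
        entry["command_preview"] = command_preview  # optional field
    return json.dumps(entry)

line = audit_entry("alice", "restart_container", {"name": "nginx"},
                   "allowed", "docker restart nginx")
print(json.loads(line)["decision"])  # allowed
```

Appending each line to `logs/permissions.log` keeps the file greppable and trivially parseable one record at a time.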
## Project Structure
* `app.py`: Entry point.
* `frontend_cli.py`: User interface and CLI command handling.
* `backend.py`: Agent logic, integration with LangChain/LangGraph and Docker tools.
* `permissions_manager.py`: Access control and auditing system.
* `ssh_key_manager.py`: Encryption and key management.
* `config_manager.py`: Configuration persistence (mode, ssh host).
* `logs/`: Audit log files.
## License
MIT License. See `LICENSE` file for more details.
## Author
Developed by atrox39.
| text/markdown | null | Eddy Ortega <atrox390@gmail.com> | null | null | MIT | null | [
"Programming Language :: Python :: 3",
"License :: OSI Approved :: MIT License",
"Operating System :: OS Independent",
"Topic :: System :: Systems Administration",
"Topic :: Software Development :: Build Tools"
] | [] | null | null | >=3.11 | [] | [] | [] | [
"docker>=7.0.0",
"paramiko>=3.4.0",
"cryptography>=42.0.0",
"rich>=13.7.0",
"langchain>=0.1.0",
"langchain-openai>=0.0.5",
"langchain-anthropic>=0.1.0",
"langchain-google-genai>=1.0.0",
"langgraph>=0.0.10",
"python-dotenv>=1.0.0",
"psutil>=5.9.0"
] | [] | [] | [] | [
"Homepage, https://github.com/your-username/devpy-cli",
"Bug Tracker, https://github.com/your-username/devpy-cli/issues"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-20T02:11:16.192822 | devpy_cli-1.0.4.tar.gz | 22,930 | cc/c1/8e8f2821e47a89f2b1777a24966a5394cc857531063d95a56f464dc2b394/devpy_cli-1.0.4.tar.gz | source | sdist | null | false | 5dfd26a68f0cff0cbc3cf87ec2657f69 | 9465bbd9a29e14d596db165926b4e6586206728e44e8b50c2a2016e2440daf80 | ccc18e8f2821e47a89f2b1777a24966a5394cc857531063d95a56f464dc2b394 | null | [] | 236 |
2.4 | hatch-pinned-extra | 0.1.3 | Add a packaging extra with pinned dependencies from a lock file | <div align="center">
<h1>hatch-pinned-extra</h1>
[![License][license-badge]][license]
[![GitHub last commit][commits-latest-badge]][commits-latest]
[![PyPI - Downloads][pypi-downloads-badge]][pypi-downloads]
[![uv][uv-badge]][uv]
Hatch plugin that adds a packaging [_extra_][extras] to the wheel metadata with pinned dependencies from [`uv.lock`][uvlock].
</div>
## Usage
```toml
# pyproject.toml
[build-system]
requires = [
"hatchling",
"hatch-pinned-extra>=0.0.1,<0.1.0",
]
build-backend = "hatchling.build"
[tool.hatch.metadata.hooks.pinned_extra]
name = "pinned"
```
If your package doesn't have any optional dependencies already, you will need to mark them as _dynamic_:
```toml
# pyproject.toml
[project]
dynamic = [
"optional-dependencies",
]
```
### Enabling the Plugin
The plugin requires the `HATCH_PINNED_EXTRA_ENABLE` environment variable to be set to a truthy value to activate (e.g. `1`, `true`, `yes`, `on`). This design allows you to control when pinned dependencies are included:
```bash
# Build with pinned dependencies
HATCH_PINNED_EXTRA_ENABLE=1 uv build
# Update lockfile without constraints from pinned dependencies
uv lock --upgrade
```
This approach solves the circular dependency issue where pinned dependencies become constraints during `uv lock --upgrade`, preventing actual upgrades.
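The truthy-value check described above can be expressed in a few lines; this is an illustrative sketch of the pattern, not the plugin's exact source:

```python
import os

TRUTHY = {"1", "true", "yes", "on"}

def plugin_enabled(environ=os.environ) -> bool:
    """Return True when HATCH_PINNED_EXTRA_ENABLE is set to a truthy value."""
    return environ.get("HATCH_PINNED_EXTRA_ENABLE", "").strip().lower() in TRUTHY

print(plugin_enabled({"HATCH_PINNED_EXTRA_ENABLE": "YES"}))  # True
print(plugin_enabled({}))                                    # False
```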
[license]: https://pypi.python.org/pypi/hatch-pinned-extra
[license-badge]: https://img.shields.io/pypi/l/hatch-pinned-extra.svg
[commits-latest-badge]: https://img.shields.io/github/last-commit/edgarrmondragon/hatch-pinned-extra
[commits-latest]: https://github.com/edgarrmondragon/hatch-pinned-extra/commit/main
[pypi-downloads-badge]: https://img.shields.io/pypi/dm/hatch-pinned-extra
[pypi-downloads]: https://pypi.python.org/pypi/hatch-pinned-extra
[uv-badge]: https://img.shields.io/endpoint?url=https://raw.githubusercontent.com/astral-sh/uv/main/assets/badge/v0.json
[uv]: https://github.com/astral-sh/uv
[extras]: https://packaging.python.org/en/latest/specifications/core-metadata/#provides-extra-multiple-use
[uvlock]: https://github.com/astral-sh/uv
| text/markdown | null | Edgar Ramírez Mondragón <edgarrm358@gmail.com> | null | null | null | null | [
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.9",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: 3.13",
"Programming Language :: Python :: 3.14",
"Programming Language :: Python :: 3.15"
] | [] | null | null | >=3.9 | [] | [] | [] | [
"hatchling>=1.27.0",
"packaging>=24.2",
"tomli>=1.2.2; python_version < \"3.11\"",
"typing-extensions>=4.4; python_version < \"3.10\""
] | [] | [] | [] | [] | twine/6.1.0 CPython/3.13.7 | 2026-02-20T02:09:57.281152 | hatch_pinned_extra-0.1.3.tar.gz | 200,073 | de/28/b4ae20ee8d9780eadcf80b08e210219ca41f219df3f1d49df2627ff62f1b/hatch_pinned_extra-0.1.3.tar.gz | source | sdist | null | false | 61acd1f6d69643556b08eb258c871fd9 | e28e1b28dda864b13ec34d6cf8779a1fabacca178ae3f1cfcd481172ced60c95 | de28b4ae20ee8d9780eadcf80b08e210219ca41f219df3f1d49df2627ff62f1b | MIT | [
"LICENSE"
] | 355 |
2.3 | msgraph-beta-sdk | 1.56.0 | The Microsoft Graph Beta Python SDK | # Microsoft Graph Beta SDK for Python
[](https://badge.fury.io/py/msgraph-beta-sdk)
[](https://pepy.tech/project/msgraph-beta-sdk)
[](https://pypi.org/project/msgraph-beta-sdk)
[](https://github.com/microsoftgraph/msgraph-beta-sdk-python/graphs/contributors)
Get started with the Microsoft Graph Beta SDK for Python by integrating the [Microsoft Graph API](https://docs.microsoft.com/graph/overview) into your Python application.
> **Note:**
>
> * This SDK allows you to build applications using the latest [beta](https://docs.microsoft.com/graph/use-the-api#version) version of Microsoft Graph. If you want to try the v1.0 Microsoft Graph API, use the [v1.0](https://github.com/microsoftgraph/msgraph-sdk-python) SDK.
## 1. Installation
```bash
pip install msgraph-beta-sdk
```
> **Note:**
>
> * The Microsoft Graph Beta SDK for Python is a fairly large package. It may take a few minutes for the initial installation to complete.
> * Enable long paths in your environment if you receive a `Could not install packages due to an OSError`. For details, see [Enable Long Paths in Windows 10, Version 1607, and Later](https://learn.microsoft.com/en-us/windows/win32/fileio/maximum-file-path-limitation?tabs=powershell#enable-long-paths-in-windows-10-version-1607-and-later).
## 2. Getting started with Microsoft Graph
### 2.1 Register your application
Register your application by following the steps at [Register your app with the Microsoft Identity Platform](https://docs.microsoft.com/graph/auth-register-app-v2).
### 2.3 Get a GraphServiceClient object
You must get a **GraphServiceClient** object to make requests against the service.
An instance of the **GraphServiceClient** class handles building and sending requests. To create a new instance of this class, you need to provide a **Credential** instance that can authenticate requests to Microsoft Graph.
> **Note**: For authentication we support both `sync` and `async` credential classes from `azure.identity`. Please see the azure identity [docs](https://learn.microsoft.com/en-us/python/api/azure-identity/azure.identity?view=azure-python) for more information.
```py
# Example using async credentials.
from azure.identity.aio import EnvironmentCredential
from msgraph_beta import GraphServiceClient
scopes = ['User.Read', 'Mail.Read']
credential = EnvironmentCredential()
client = GraphServiceClient(credential, scopes=scopes)
```
> **Note**: Refer to the [following documentation page](https://learn.microsoft.com/graph/sdks/customize-client?tabs=python#configuring-the-http-proxy-for-the-client) if you need to configure an HTTP proxy.
## 3. Make requests against the service
After you have a **GraphServiceClient** that is authenticated, you can begin making calls against the service. The requests against the service look like our [REST API](https://docs.microsoft.com/graph/api/overview?view=graph-rest-1.0).
> **Note**: This SDK offers an asynchronous API by default. Async is a concurrency model that is far more efficient than multi-threading, and can provide significant performance benefits and enable the use of long-lived network connections such as WebSockets. We support popular Python async environments such as `asyncio`, `anyio` or `trio`.
The following is a complete example that shows how to fetch a user from Microsoft Graph.
```py
import asyncio
from azure.identity.aio import ClientSecretCredential
from msgraph_beta import GraphServiceClient
credential = ClientSecretCredential(
'tenant_id',
'client_id',
'client_secret'
)
scopes = ['https://graph.microsoft.com/.default']
client = GraphServiceClient(credential, scopes=scopes)
async def get_user():
user = await client.users.by_user_id('userPrincipalName').get()
if user:
print(user.display_name)
asyncio.run(get_user())
```
Note that calling `me` requires a signed-in user and therefore delegated permissions. See [Authenticating Users](https://learn.microsoft.com/en-us/python/api/overview/azure/identity-readme?view=azure-python#authenticate-users) for more:
```py
import asyncio
from azure.identity import InteractiveBrowserCredential
from msgraph_beta import GraphServiceClient
credential = InteractiveBrowserCredential()
scopes=['User.Read']
client = GraphServiceClient(credential, scopes=scopes)
async def me():
me = await client.me.get()
if me:
print(me.display_name)
asyncio.run(me())
```
### 3.1 Error Handling
Failed requests raise `APIError` exceptions. You can handle these exceptions using `try`/`except` statements.
```py
from kiota_abstractions.api_error import APIError
async def get_user():
try:
user = await client.users.by_user_id('userID').get()
print(user.user_principal_name, user.display_name, user.id)
except APIError as e:
print(f'Error: {e.error.message}')
asyncio.run(get_user())
```
## Documentation and resources
* [Overview](https://docs.microsoft.com/graph/overview)
* [Microsoft Graph website](https://aka.ms/graph)
* [Samples](docs)
## Upgrading
For detailed information on breaking changes, bug fixes and new functionality introduced during major upgrades, check out our [Upgrade Guide](UPGRADING.md)
## Issues
View or log issues on the [Issues](https://github.com/microsoftgraph/msgraph-beta-sdk-python/issues) tab in the repo.
## Contribute
Please read our [Contributing](CONTRIBUTING.md) guidelines carefully for advice on how to contribute to this repo.
## Copyright and license
Copyright (c) Microsoft Corporation. All Rights Reserved. Licensed under the MIT [license](LICENSE).
This project has adopted the [Microsoft Open Source Code of Conduct](https://opensource.microsoft.com/codeofconduct/). For more information see the [Code of Conduct FAQ](https://opensource.microsoft.com/codeofconduct/faq/) or contact [opencode@microsoft.com](mailto:opencode@microsoft.com) with any additional questions or comments.
## Third Party Notices
[Third-party notices](THIRD%20PARTY%20NOTICES)
| text/markdown | null | Microsoft <graphtooling+python@microsoft.com> | null | null | null | msgraph, openAPI, Microsoft, Graph, beta | [
"Development Status :: 5 - Production/Stable",
"Programming Language :: Python :: 3.9",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: 3.13"
] | [] | null | null | >=3.9 | [] | [
"msgraph_beta"
] | [] | [
"azure-identity>=1.12.0",
"microsoft-kiota-serialization-json<2.0.0,>=1.9.0",
"microsoft-kiota-serialization-text<2.0.0,>=1.9.0",
"microsoft-kiota-serialization-form>=1.9.0",
"microsoft-kiota-serialization-multipart>=1.9.0",
"msgraph_core>=1.3.1",
"yapf; extra == \"dev\"",
"bumpver; extra == \"dev\"",
"isort; extra == \"dev\"",
"pylint; extra == \"dev\"",
"pytest; extra == \"dev\"",
"mypy; extra == \"dev\""
] | [] | [] | [] | [
"documentation, https://github.com/microsoftgraph/msgraph-beta-sdk-python/docs",
"homepage, https://github.com/microsoftgraph/msgraph-beta-sdk-python#readme",
"repository, https://github.com/microsoftgraph/msgraph-beta-sdk-python"
] | python-requests/2.32.5 | 2026-02-20T02:08:46.487664 | msgraph_beta_sdk-1.56.0.tar.gz | 11,664,227 | 76/04/6d88c26ed406b5055d0b5e23200c436aaa2f23239c8efa88e78f901c0e61/msgraph_beta_sdk-1.56.0.tar.gz | source | sdist | null | false | cae57923557e6bc42a334c17a16c6cd7 | 4d9edefb030a5760e739f6d0b3254effca058d2896e447ae3ccdac84d8bc8b5c | 76046d88c26ed406b5055d0b5e23200c436aaa2f23239c8efa88e78f901c0e61 | null | [] | 620 |
2.4 | automata-diags | 0.3.0 | A powerful, modern, and educational Python toolkit for automata theory. Visualize DFAs, NFAs, CFGs, PDAs, and Turing machines (single/multi-tape/multi-head); minimize automata; CYK and CFG algorithms; and more with an elegant, type-safe API. | # Automata Diags
[](https://pypi.org/project/automata-diags/)
[](https://pypi.org/project/automata-diags/)
[](https://opensource.org/licenses/MIT)
[](https://automata-diags.readthedocs.io/en/latest/?badge=latest)
A powerful, modern, and educational Python toolkit for automata theory. Visualize DFAs, NFAs, CFGs, minimize automata, and more with an elegant, type-safe API.
**For the full, comprehensive documentation including tutorials and the API reference, please visit our [Documentation Website](https://automata-diags.readthedocs.io/).**
<!--
## Why Automata Diags?
| Feature | Why It Matters |
| :---------------------- | :------------------------------------------------------------------------------------------------------------------------- |
| **Complete Toolset** | From basic DFAs to complex CFG conversions, all the tools you need for a typical Theory of Computation course are in one place. |
| **Educational Focus** | The API is designed to be intuitive and map closely to textbook concepts, making it an excellent companion for students. |
| **Advanced Algorithms** | Includes research-grade implementations like Hopcroft's minimization, setting it apart from simpler libraries. |
| **Instant Visualization**| Don't just build automata—see them. Instant visual feedback helps solidify complex concepts and makes debugging intuitive. |
| **Modern & Maintained** | Built with modern Python (type hints, clean architecture) and actively maintained for correctness and new features. | -->
## Installation
```bash
pip install automata-diags
```
Requires Python 3.8+ and Graphviz.
## Quick Start
```python
from automata.backend.grammar.dist import State, Symbol
from automata.backend.grammar.regular_languages.dfa.dfa_mod import DFA
from automata.backend.drawings.automata_drawer import AutomataDrawer
# Create a simple DFA
# For more creation methods, see the full documentation.
dfa = DFA.from_string("q0,a,q1;q1,b,q2", start_state="q0", accept_states={"q2"})
# Test it
dfa.accepts([Symbol('a'), Symbol('b')]) # True
# Visualize it
drawer = AutomataDrawer()
drawer.draw_dfa_from_object(dfa, "my_first_dfa")
```
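The `from_string` transition format shown above (`state,symbol,state` triples separated by `;`) could be parsed and simulated like this. This is a hedged sketch of the format as a plain-Python exercise, not the library's implementation:

```python
def parse_transitions(spec: str) -> dict:
    """Parse 'q0,a,q1;q1,b,q2' into {(state, symbol): next_state}."""
    table = {}
    for triple in spec.split(";"):
        src, symbol, dst = (part.strip() for part in triple.split(","))
        table[(src, symbol)] = dst
    return table

def run_dfa(table: dict, start: str, accept: set, word: str) -> bool:
    """Simulate the DFA; a missing transition rejects the word."""
    state = start
    for symbol in word:
        state = table.get((state, symbol))
        if state is None:
            return False
    return state in accept

table = parse_transitions("q0,a,q1;q1,b,q2")
print(run_dfa(table, "q0", {"q2"}, "ab"))  # True
print(run_dfa(table, "q0", {"q2"}, "aa"))  # False
```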
**For more examples and detailed guides, please visit the [Full Documentation Site](https://automata-diags.readthedocs.io/).**
## Web App
https://ajodo-godson.github.io/automata_diags/
## Contributing
Contributions are welcome! Please feel free to open a pull request or submit an issue on our [GitHub repository](https://github.com/Ajodo-Godson/automata_diags).
## License
This project is licensed under the MIT License.
| text/markdown | null | Godson Ajodo <godson@uni.minerva.edu> | null | null | null | CFG, DFA, NFA, PDA, TM, Turing-machine, automata, computer-science, education, finite-automata, formal-languages, graph-theory, visualization | [
"Development Status :: 4 - Beta",
"Intended Audience :: Developers",
"Intended Audience :: Education",
"Intended Audience :: Science/Research",
"License :: OSI Approved :: MIT License",
"Operating System :: OS Independent",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.8",
"Programming Language :: Python :: 3.9",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Topic :: Education",
"Topic :: Scientific/Engineering :: Mathematics",
"Topic :: Scientific/Engineering :: Visualization",
"Topic :: Software Development :: Libraries :: Python Modules"
] | [] | null | null | >=3.8 | [] | [] | [] | [
"graphviz>=0.20.0",
"pytest>=7.0.0",
"black>=24.1.0; extra == \"dev\"",
"build>=1.2.0; extra == \"dev\"",
"flake8>=7.0.0; extra == \"dev\"",
"mypy>=1.8.0; extra == \"dev\"",
"pytest-cov>=4.1.0; extra == \"dev\"",
"twine>=6.1.0; extra == \"dev\"",
"types-graphviz>=0.20.1; extra == \"dev\"",
"myst-parser>=2.0.0; extra == \"docs\"",
"sphinx-rtd-theme>=2.0.0; extra == \"docs\"",
"sphinx>=7.2.0; extra == \"docs\""
] | [] | [] | [] | [
"Homepage, https://github.com/Ajodo-Godson/automata_diags",
"Bug Tracker, https://github.com/Ajodo-Godson/automata_diags/issues",
"Documentation, https://automata-diags.readthedocs.io/",
"Portfolio, https://ajoson.netlify.app"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-20T02:08:33.678229 | automata_diags-0.3.0.tar.gz | 834,263 | 2c/c5/6f27ca1019f2078544e4465485f638393a7399d357ac3b3210c93ca72e5c/automata_diags-0.3.0.tar.gz | source | sdist | null | false | b24fe2d200f7d63fc56a9b2cdc02c4e8 | 839c34f043f046038d2fe3262f410cccc04ae17e5ef88170c96ca10065da41fd | 2cc56f27ca1019f2078544e4465485f638393a7399d357ac3b3210c93ca72e5c | null | [
"LICENSE"
] | 253 |
2.4 | quantum-pixel | 2.0.1 | When both YES and NO are existed. Only human with emotions can comprehend. | <div align="center">
<h1>Quantum Pixel</h1>
**When everything is possibility.**
[![][latest-release-shield]][latest-release-url]
[![][latest-commit-shield]][latest-commit-url]
[![][pypi-shield]][pypi-url]
[![][python-shield]][python-url]
[latest-release-shield]: https://badgen.net/github/release/Linos1391/quantum-pixel/development?icon=github
[latest-release-url]: https://github.com/Linos1391/quantum-pixel/releases/latest
[latest-commit-shield]: https://badgen.net/github/last-commit/Linos1391/quantum-pixel/master?icon=github
[latest-commit-url]: https://github.com/Linos1391/quantum-pixel/commits/master
[pypi-shield]: https://img.shields.io/badge/pypi-quantum--pixel-blue
[pypi-url]: https://pypi.org/project/quantum-pixel/
[python-shield]: https://img.shields.io/badge/python-3.14+-yellow
[python-url]: https://www.python.org/downloads/
<img alt="intro image" width="75%" src="https://github.com/Linos1391/quantum-pixel/blob/master/assets/intro.png?raw=true">
</div>
<br>
>"When I think about it, maybe quantum mechanics was made to prevent AI. Like being both wave and particle, we as players perceive the environment normally, and computers got strokes while analyzing. That's why we forgot the precedent memory to prevent AI screenshot reality."
>
> — <cite>**Me** in Dec 11th 2025 for absolutely no reasons.</cite>
<br>
- [1. Introduction](#1-introduction)
- [1.1. Local system (RECOMMEND)](#11-local-system-recommend)
- [1.2. Web service (NOT RECOMMEND)](#12-web-service-not-recommend)
- [2. Can I host from Github?](#2-can-i-host-from-github)
- [3. License](#3-license)
- [4. Disclaimer](#4-disclaimer)
<br>
# 1. Introduction
I made this in response to AI slop. Those so-called AI artists have gone so far that it annoys me... I am not against the development of AI, but against the disrespect towards ARTS and ARTISTS.
<!-- Change from "assets/mermaid.png" to "https://github.com/Linos1391/quantum-pixel/blob/master/assets/mermaid.png?raw=true" in `quantum_pixel/README_PYPI.md` -->

<details>
<summary>Mermaid source</summary>
```mermaid
flowchart LR
Material[<img src="https://github.com/Linos1391/quantum-pixel/blob/dev/assets/material.png?raw=true" width="50" height="100"/><br><label>material.png</label>]
Preview[<img src="https://github.com/Linos1391/quantum-pixel/blob/dev/assets/preview.png?raw=true" width="50" height="100"/><br><label>preview.png</label>]
Encoded[<img src="https://github.com/Linos1391/quantum-pixel/blob/dev/assets/encoded.png?raw=true" width="50" height="100"/><br><label>encoded.png</label>]
Grok[<img src="https://github.com/Linos1391/quantum-pixel/blob/dev/assets/grok.png?raw=true" width="50" height="100"/><br><label>grok.png</label>]
Material -->|Built-in Generate Preview| Preview
Material -->|Embed Within Steganography| Encoded
Preview -->|Encode Steganography| Encoded
Encoded -->|Decode Steganography| Material
Encoded -->|Edit by Grok| Grok
```
</details>
<br>
**Notice:** this is still in development and not guaranteed to protect against img2img. I tried it on Grok: some details are detected, but most are NOT :D.
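For background, classic LSB steganography — the general family the encode/decode steps in the diagram above belong to — hides one payload bit in the least significant bit of each pixel byte. This is a generic textbook sketch for illustration only, not this project's algorithm:

```python
def embed_lsb(pixels: bytes, payload: bytes) -> bytes:
    """Hide payload bits (MSB first) in the LSB of each pixel byte."""
    bits = [(byte >> i) & 1 for byte in payload for i in range(7, -1, -1)]
    if len(bits) > len(pixels):
        raise ValueError("payload too large for cover image")
    out = bytearray(pixels)
    for i, bit in enumerate(bits):
        out[i] = (out[i] & 0xFE) | bit  # overwrite only the lowest bit
    return bytes(out)

def extract_lsb(pixels: bytes, n_bytes: int) -> bytes:
    """Recover n_bytes of payload from pixel LSBs."""
    bits = [p & 1 for p in pixels[: n_bytes * 8]]
    return bytes(
        sum(bit << (7 - i) for i, bit in enumerate(bits[k : k + 8]))
        for k in range(0, len(bits), 8)
    )

cover = bytes(range(200, 256)) * 2  # pretend pixel data
stego = embed_lsb(cover, b"hi")
print(extract_lsb(stego, 2))  # b'hi'
```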
## 1.1. Local system (RECOMMEND)
| UV | Python |
| ---------------------------------- | --------------------------- |
| `uv run pip install quantum-pixel` | `pip install quantum-pixel` |
| `uv run quantum_pixel` | `quantum_pixel` |
## 1.2. Web service (NOT RECOMMEND)
Really slow, and it often exceeds the RAM limit.
- [quantum-pixel.onrender.com/](https://quantum-pixel.onrender.com/)
<br>
# 2. Can I host from [Github](https://github.com/Linos1391/quantum-pixel)?
- For private use or sharing with friends? Absolutely yes. I am using the free version of Render right now and totally recommend giving it a try. Import `render.yml` as a blueprint.
- For your website? You may embed this project as it is and let it be entirely free.
<br>
# 3. License
[GNU GPLv3](LICENSE)
<br>
# 4. Disclaimer
Remember, this will NOT 100% prevent AI from analyzing; in fact, the process of Steganography is open-source. I am still researching for better algorithms and would love to hear from YOU!
| text/markdown; charset=UTF-8; variant=GFM | Linos1391 <linos.coding@gmail.com> | Linos1391 <linos.coding@gmail.com> | null | null | null | null | [
"Development Status :: 5 - Production/Stable",
"Natural Language :: English",
"Programming Language :: Rust",
"Programming Language :: Python :: 3.14",
"Programming Language :: Python :: Implementation :: CPython",
"Programming Language :: Python :: Implementation :: PyPy",
"Intended Audience :: Developers",
"License :: OSI Approved :: GNU General Public License v3 (GPLv3)"
] | [] | https://github.com/Linos1391/quantum-pixel.git | null | >=3.14 | [] | [] | [] | [
"fastapi[standard]>=0.128.0",
"numpy>=2.4.0",
"pillow>=12.0.0"
] | [] | [] | [] | [] | uv/0.10.4 {"installer":{"name":"uv","version":"0.10.4","subcommand":["publish"]},"python":null,"implementation":{"name":null,"version":null},"distro":{"name":"Ubuntu","version":"24.04","id":"noble","libc":null},"system":{"name":null,"release":null},"cpu":null,"openssl_version":null,"setuptools_version":null,"rustc_version":null,"ci":true} | 2026-02-20T02:08:31.941824 | quantum_pixel-2.0.1-cp314-cp314-win_amd64.whl | 686,819 | ba/29/65cfff95decf921b44f3b0cdb7e41d9c8abafbf07ba8eaceb34d5e872401/quantum_pixel-2.0.1-cp314-cp314-win_amd64.whl | cp314 | bdist_wheel | null | false | ec6e9a34d6c15915a823afc186984be6 | b5f128e98a9c8c4110e6a6089ff62d2a8aa885fa4f852b87e902647c0fed2a14 | ba2965cfff95decf921b44f3b0cdb7e41d9c8abafbf07ba8eaceb34d5e872401 | null | [
"LICENSE"
] | 1,608 |
2.4 | SURE-tools | 4.0.14 | Succinct Representation of Single Cells | # SURE: SUccinct REpresentation of cells
SURE implements a discrete latent state model with a normalizing flow encoder for exact estimation of cellular populations.
## Installation
1. Create a virtual environment
```bash
conda create -n SURE python=3.10 scipy numpy pandas scikit-learn && conda activate SURE
```
2. Install [PyTorch](https://pytorch.org/get-started/locally/) following the official instruction.
```bash
pip3 install torch torchvision --index-url https://download.pytorch.org/whl/cu126
```
3. Install SURE
```bash
pip3 install SURE-tools
```
| text/markdown | Feng Zeng | zengfeng@xmu.edu.cn | null | null | null | null | [
"Programming Language :: Python :: 3",
"License :: OSI Approved :: MIT License",
"Operating System :: OS Independent"
] | [] | https://github.com/ZengFLab/SURE | null | >=3.10 | [] | [] | [] | [
"dill==0.3.8",
"scanpy",
"pytorch-ignite",
"datatable",
"scipy",
"numpy",
"scikit-learn",
"pandas",
"pyro-ppl",
"jax[cuda12]",
"leidenalg",
"python-igraph",
"networkx",
"matplotlib",
"seaborn",
"fa2-modified",
"zuko",
"plotly"
] | [] | [] | [] | [] | twine/6.2.0 CPython/3.10.18 | 2026-02-20T02:07:47.499114 | sure_tools-4.0.14.tar.gz | 61,418 | e3/1c/8c5b7589ff5f61ba98d73ff00912f7cf9bbf013315cc6dffc3b8d6ede359/sure_tools-4.0.14.tar.gz | source | sdist | null | false | 6eea73cc6cc98537f9e8ed76f6bbc929 | c55477a5d2bfc47dc1c8d364ed66c4e1049c4ce64847de9f7f4b70ecd916c454 | e31c8c5b7589ff5f61ba98d73ff00912f7cf9bbf013315cc6dffc3b8d6ede359 | null | [
"LICENSE"
] | 0 |
2.1 | OpenEXR | 3.3.7 | Python bindings for the OpenEXR image file format | <!-- SPDX-License-Identifier: BSD-3-Clause -->
<!-- Copyright (c) Contributors to the OpenEXR Project -->
[](https://github.com/AcademySoftwareFoundation/openexr/blob/main/LICENSE.md)
[](https://bestpractices.coreinfrastructure.org/projects/2799)
[](https://securityscorecards.dev/viewer/?uri=github.com/AcademySoftwareFoundation/openexr)
[](https://github.com/AcademySoftwareFoundation/openexr/actions?query=workflow%3ACI)
[](https://github.com/AcademySoftwareFoundation/openexr/actions?query=workflow%3AAnalysis)
[](https://sonarcloud.io/dashboard?id=AcademySoftwareFoundation_openexr)
# OpenEXR
OpenEXR provides the specification and reference implementation of the
EXR file format, the professional-grade image storage format of the
motion picture industry.
The purpose of EXR format is to accurately and efficiently represent
high-dynamic-range scene-linear image data and associated metadata,
with strong support for multi-part, multi-channel use cases.
OpenEXR is widely used in host application software where accuracy is
critical, such as photorealistic rendering, texture access, image
compositing, deep compositing, and DI.
## OpenEXR Project Mission
The goal of the OpenEXR project is to keep the EXR format reliable and
modern and to maintain its place as the preferred image format for
entertainment content creation.
Major revisions are infrequent, and new features will be carefully
weighed against increased complexity. The principal priorities of the
project are:
* Robustness, reliability, security
* Backwards compatibility, data longevity
* Performance - read/write/compression/decompression time
* Simplicity, ease of use, maintainability
* Wide adoption, multi-platform support - Linux, Windows, macOS, and others
OpenEXR is intended solely for 2D data. It is not appropriate for
storage of volumetric data, cached or lit 3D scenes, or more complex
3D data such as light fields.
The goals of the Imath project are simplicity, ease of use,
correctness and verifiability, and breadth of adoption. Imath is not
intended to be a comprehensive linear algebra or numerical analysis
package.
## Python Module
The OpenEXR python module provides full support for reading and
writing all types of ``.exr`` image files, including scanline, tiled,
deep, multi-part, multi-view, and multi-resolution images with pixel
types of unsigned 32-bit integers and 16- and 32-bit floats. It
provides access to pixel data through numpy arrays, as either one
array per channel or with R, G, B, and A interleaved into a single
RGBA array.
## Project Governance
OpenEXR is a project of the [Academy Software
Foundation](https://www.aswf.io). See the project's [governance
policies](https://github.com/AcademySoftwareFoundation/openexr/blob/main/GOVERNANCE.md), [contribution guidelines](https://github.com/AcademySoftwareFoundation/openexr/blob/main/CONTRIBUTING.md), and [code of conduct](https://github.com/AcademySoftwareFoundation/openexr/blob/main/CODE_OF_CONDUCT.md)
for more information.
# Quick Start
The "Hello, World" image writer:
```python
import OpenEXR
import numpy as np

# Generate a 3D NumPy array for RGB channels with random values
height, width = (20, 10)
RGB = np.random.rand(height, width, 3).astype('f')
channels = { "RGB" : RGB }
header = { "compression" : OpenEXR.ZIP_COMPRESSION,
           "type" : OpenEXR.scanlineimage }
with OpenEXR.File(header, channels) as outfile:
    outfile.write("readme.exr")
```
Or alternatively, construct the same output file via separate pixel arrays
for each channel:
```python
# Generate arrays for R, G, and B channels with random values
height, width = (20, 10)
R = np.random.rand(height, width).astype('f')
G = np.random.rand(height, width).astype('f')
B = np.random.rand(height, width).astype('f')
channels = { "R" : R, "G" : G, "B" : B }
header = { "compression" : OpenEXR.ZIP_COMPRESSION,
           "type" : OpenEXR.scanlineimage }
with OpenEXR.File(header, channels) as outfile:
    outfile.write("readme.exr")
```
The corresponding example of reading an image is:
```python
with OpenEXR.File("readme.exr") as infile:
    RGB = infile.channels()["RGB"].pixels
    height, width, _ = RGB.shape
    for y in range(height):
        for x in range(width):
            pixel = tuple(RGB[y, x])
            print(f"pixel[{y}][{x}]={pixel}")
```
Or alternatively, read the data as separate arrays for each channel:
```python
with OpenEXR.File("readme.exr", separate_channels=True) as infile:
    header = infile.header()
    print(f"type={header['type']}")
    print(f"compression={header['compression']}")
    R = infile.channels()["R"].pixels
    G = infile.channels()["G"].pixels
    B = infile.channels()["B"].pixels
    height, width = R.shape
    for y in range(height):
        for x in range(width):
            pixel = (R[y, x], G[y, x], B[y, x])
            print(f"pixel[{y}][{x}]={pixel}")
```
To modify the header metadata in a file:
```python
with OpenEXR.File("readme.exr") as f:
    f.header()["displayWindow"] = ((3,4),(5,6))
    f.header()["screenWindowCenter"] = np.array([1.0,2.0],'float32')
    f.header()["comments"] = "test image"
    f.header()["longitude"] = -122.5
    f.write("readme_modified.exr")

with OpenEXR.File("readme_modified.exr") as o:
    dw = o.header()["displayWindow"]
    assert (tuple(dw[0]), tuple(dw[1])) == ((3,4),(5,6))
    swc = o.header()["screenWindowCenter"]
    assert tuple(swc) == (1.0, 2.0)
    assert o.header()["comments"] == "test image"
    assert o.header()["longitude"] == -122.5
```
Note that OpenEXR's Imath-based vector and matrix attribute values
appear in the header dictionary as 2-element, 3-element, 3x3, or 4x4
numpy arrays, and bounding boxes appear as tuples of 2-element arrays;
plain tuples are also accepted for convenience.
To read and write a multi-part file, use a list of ``Part`` objects:
```python
height, width = (20, 10)
Z0 = np.zeros((height, width), dtype='f')
Z1 = np.ones((height, width), dtype='f')
P0 = OpenEXR.Part({}, {"Z" : Z0 })
P1 = OpenEXR.Part({}, {"Z" : Z1 })
f = OpenEXR.File([P0, P1])
f.write("readme_2part.exr")

with OpenEXR.File("readme_2part.exr") as o:
    assert o.parts[0].name() == "Part0"
    assert o.parts[0].width() == 10
    assert o.parts[0].height() == 20
    assert o.parts[1].name() == "Part1"
    assert o.parts[1].width() == 10
    assert o.parts[1].height() == 20
```
Deep data is stored in a numpy array whose entries are numpy
arrays. Construct a numpy array with a ``dtype`` of ``object``, and
assign each entry a numpy array holding the samples. Each pixel can
have a different number of samples, including ``None`` for no data,
but all channels in a given part must have the same number of samples.
```python
height, width = (20, 10)
Z = np.empty((height, width), dtype=object)
for y in range(height):
    for x in range(width):
        Z[y, x] = np.array([y*width+x], dtype='uint32')
channels = { "Z" : Z }
header = { "compression" : OpenEXR.ZIPS_COMPRESSION,
           "type" : OpenEXR.deepscanline }
with OpenEXR.File(header, channels) as outfile:
    outfile.write("readme_test_tiled_deep.exr")
```
To read a deep file:
```python
with OpenEXR.File("readme_test_tiled_deep.exr") as infile:
    Z = infile.channels()["Z"].pixels
    height, width = Z.shape
    for y in range(height):
        for x in range(width):
            for z in Z[y,x]:
                print(f"deep sample at {y},{x}: {z}")
```
# Community
* **Ask a question:**
- Email: openexr-dev@lists.aswf.io
- Slack: [academysoftwarefdn#openexr](https://academysoftwarefdn.slack.com/archives/CMLRW4N73)
* **Attend a meeting:**
- Technical Steering Committee meetings are open to the
public, fortnightly on Thursdays, 1:30pm Pacific Time.
- Calendar: https://zoom-lfx.platform.linuxfoundation.org/meetings/openexr
- Meeting notes: https://wiki.aswf.io/display/OEXR/TSC+Meetings
* **Report a bug:**
- Submit an Issue: https://github.com/AcademySoftwareFoundation/openexr/issues
* **Report a security vulnerability:**
- Email to security@openexr.com
* **Contribute a Fix, Feature, or Improvement:**
- Read the [Contribution Guidelines](https://github.com/AcademySoftwareFoundation/openexr/blob/main/CONTRIBUTING.md) and [Code of Conduct](https://github.com/AcademySoftwareFoundation/openexr/blob/main/CODE_OF_CONDUCT.md)
- Sign the [Contributor License
Agreement](https://contributor.easycla.lfx.linuxfoundation.org/#/cla/project/2e8710cb-e379-4116-a9ba-964f83618cc5/user/564e571e-12d7-4857-abd4-898939accdd7)
- Submit a Pull Request: https://github.com/AcademySoftwareFoundation/openexr/pulls
# Resources
- Website: http://www.openexr.com
- Technical documentation: https://openexr.readthedocs.io
- Porting help: [OpenEXR/Imath Version 2.x to 3.x Porting Guide](https://openexr.readthedocs.io/en/latest/PortingGuide.html)
- Reference images: https://github.com/AcademySoftwareFoundation/openexr-images
- Security policy: [SECURITY.md](https://github.com/AcademySoftwareFoundation/openexr/blob/main/SECURITY.md)
- Release notes: [CHANGES.md](https://github.com/AcademySoftwareFoundation/openexr/blob/main/CHANGES.md)
- Contributors: [CONTRIBUTORS.md](https://github.com/AcademySoftwareFoundation/openexr/blob/main/CONTRIBUTORS.md)
# License
OpenEXR is licensed under the [BSD-3-Clause license](https://github.com/AcademySoftwareFoundation/openexr/blob/main/LICENSE.md).
| text/markdown | null | Contributors to the OpenEXR project <info@openexr.com> | null | null | null | null | [] | [] | null | null | >=3.7 | [] | [] | [] | [
"numpy>=1.7.0",
"pytest; extra == \"test\""
] | [] | [] | [] | [
"Homepage, https://openexr.com",
"Source, https://github.com/AcademySoftwareFoundation/OpenEXR",
"Bug tracker, https://github.com/AcademySoftwareFoundation/OpenEXR/issues"
] | twine/6.0.1 CPython/3.12.8 | 2026-02-20T02:06:40.701629 | openexr-3.3.7.tar.gz | 21,077,498 | ef/53/598839ef101dff29605c41789a2d93b7abf914333f3e541307939d861edc/openexr-3.3.7.tar.gz | source | sdist | null | false | f8470c65d0a7098eaa2a3a52ee64d2a7 | b96a12d90f4b0cbb757962abfa8e16f47d4f7aadd79b3f425b1ad2301f0ebe92 | ef53598839ef101dff29605c41789a2d93b7abf914333f3e541307939d861edc | null | [] | 0 |
2.4 | pkg-resources-backport | 1.0.1 | snapshot of last pkg_resources module from setuptools | ========================
pkg-resources backport
========================
For when your runtime dependencies absolutely require ``pkg-resources`` but you're unable to downgrade to setuptools 80.10.x.
.. code:: console
(cpython312) autumn@JudgmentOfCarrion{arm64}:~/software# python -m pip install -U setuptools
Requirement already satisfied: setuptools in /Users/autumn/.virtualenvs/cpython312/lib/python3.12/site-packages (80.9.0)
... /snip
Successfully installed setuptools-82.0.0
(cpython312) autumn@JudgmentOfCarrion{arm64}:~/software# python
Python 3.12.12 (main, Oct 9 2025, 11:07:00) [Clang 17.0.0 (clang-1700.4.4.1)] on darwin
Type "help", "copyright", "credits" or "license" for more information.
>>> import pkg_resources
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
ModuleNotFoundError: No module named 'pkg_resources'
>>> ^D
(cpython312) autumn@JudgmentOfCarrion{arm64}:~/software# cd pkg-resources-backport
(cpython312) autumn@JudgmentOfCarrion{arm64}:~/software/pkg-resources-backport# python -m pip install -e .
Obtaining file:///Users/autumn/software/pkg-resources-backport
... /snip
Successfully built pkg-resources-backport
Installing collected packages: pkg-resources-backport
Successfully installed pkg-resources-backport-1.0.0
(cpython312) autumn@JudgmentOfCarrion{arm64}:~/software/pkg-resources-backport# cd ..
(cpython312) autumn@JudgmentOfCarrion{arm64}:~/software# python
Python 3.12.12 (main, Oct 9 2025, 11:07:00) [Clang 17.0.0 (clang-1700.4.4.1)] on darwin
Type "help", "copyright", "credits" or "license" for more information.
>>> import pkg_resources
>>> ^D
(cpython312) autumn@JudgmentOfCarrion{arm64}:~/software#
| text/x-rst | null | Python Packaging Authority <distutils-sig@python.org> | Autumn Jolitz | null | null | pkg-resources, pkgresources, setuptools | [
"Development Status :: 5 - Production/Stable",
"Intended Audience :: Developers",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3 :: Only",
"Topic :: Software Development :: Libraries :: Python Modules",
"Topic :: System :: Archiving :: Packaging",
"Topic :: System :: Systems Administration",
"Topic :: Utilities"
] | [] | null | null | >=3.9 | [] | [] | [] | [
"setuptools>=81.0.0",
"pytest!=8.1.*,>=6; extra == \"tests\"",
"virtualenv>=13.0.0; extra == \"tests\"",
"wheel>=0.44.0; extra == \"tests\"",
"pip>=19.1; extra == \"tests\"",
"packaging>=24.2; extra == \"tests\"",
"jaraco.envs>=2.2; extra == \"tests\"",
"pytest-xdist>=3; extra == \"tests\"",
"jaraco.path>=3.7.2; extra == \"tests\"",
"build[virtualenv]>=1.0.3; extra == \"tests\"",
"filelock>=3.4.0; extra == \"tests\"",
"ini2toml[lite]>=0.14; extra == \"tests\"",
"tomli-w>=1.0.0; extra == \"tests\"",
"pytest-timeout; extra == \"tests\"",
"pytest-perf; sys_platform != \"cygwin\" and extra == \"tests\"",
"jaraco.develop>=7.21; (python_version >= \"3.9\" and sys_platform != \"cygwin\") and extra == \"tests\"",
"pytest-home>=0.5; extra == \"tests\"",
"pytest-subprocess; extra == \"tests\"",
"pyproject-hooks!=1.1; extra == \"tests\"",
"jaraco.test>=5.5; extra == \"tests\""
] | [] | [] | [] | [
"Homepage, https://github.com/autumnjolitz/pkg-resources-backport"
] | twine/6.2.0 CPython/3.12.12 | 2026-02-20T02:06:31.878846 | pkg_resources_backport-1.0.1.tar.gz | 311,958 | d1/23/073a3a2a0789548f6c8b14dc10c960235616a281810c998364282822ca49/pkg_resources_backport-1.0.1.tar.gz | source | sdist | null | false | 0dd24fb5bc8a145026c580ae551e9460 | 338c2423e6213c810f3fc2cdfb28baabcee2919443a6b8ad0e3762ad6cae3f3c | d123073a3a2a0789548f6c8b14dc10c960235616a281810c998364282822ca49 | MIT | [
"LICENSE"
] | 421 |
2.4 | sinricpro | 5.2.1 | Official SinricPro SDK for Python - Control IoT devices with Alexa and Google Home | # SinricPro Python SDK
Official Python SDK for [SinricPro](https://sinric.pro) - Control your IoT devices with Alexa and Google Home.
[](https://www.python.org/downloads/)
[](LICENSE)
## Features
- ✅ **Easy to Use** - Simple, pythonic API with async/await support
- ✅ **Type Safe** - Full type hints for better IDE support and error detection
- ✅ **Voice Control** - Works with Alexa and Google Home
- ✅ **Real-time** - WebSocket-based bidirectional communication
- ✅ **Secure** - HMAC-SHA256 message signatures
- ✅ **Reliable** - Auto-reconnection and heartbeat monitoring
- ✅ **Flexible** - Support for multiple device types and capabilities
- ✅ **Cross-Platform** - Works on Linux, Windows, macOS, and Raspberry Pi
## Supported Devices
**Lighting & Switches:**
- Smart Switch - On/Off control
- Smart Light - RGB color, brightness, color temperature
- Dimmable Switch - On/Off with brightness control
**Sensors:**
- Motion Sensor - Detect movement
- Contact Sensor - Door/window open/closed detection
- Temperature Sensor - Temperature and humidity monitoring
- Air Quality Sensor - PM1.0, PM2.5, PM10 measurements
- Power Sensor - Voltage, current, power monitoring
**Control Devices:**
- Blinds - Position control (0-100%)
- Garage Door - Open/close control
- Smart Lock - Lock/unlock control
**Climate Control:**
- Thermostat - Temperature control with modes (AUTO, COOL, HEAT, ECO)
- Window AC - Air conditioning control
**Other Devices:**
- Fan - On/Off control
- Doorbell - Doorbell press events
## Installation
```bash
pip install sinricpro
```
## Requirements
- Python 3.10 or higher
- `websockets` library (automatically installed)
## Platform Support
The SDK works on:
- **Linux** (Ubuntu, Debian, Raspberry Pi OS, etc.)
- **Windows** 10/11
- **macOS** 10.14+
- **Raspberry Pi** (All models with Python 3.10+)
## Logging
Enable debug logging to see detailed information:
```python
from sinricpro import SinricProLogger, LogLevel
# Set log level
SinricProLogger.set_level(LogLevel.DEBUG)
```
Available log levels: `DEBUG`, `INFO`, `WARN`, `ERROR`, `NONE`
## Development
### Setup Development Environment
```bash
# Clone the repository
git clone https://github.com/sinricpro/python-sdk.git
cd python-sdk

# Install development dependencies
pip install -e ".[dev]"
```

Import sinricpro for development:

```python
import sys
from pathlib import Path

sys.path.insert(0, str(Path(__file__).parent.parent.parent))
from sinricpro import SinricPro, SinricProAirQualitySensor, SinricProConfig
```

```bash
# Run tests
pytest

# Format code
black .

# Type check
mypy sinricpro
```
### Running Examples
```bash
# Set environment variables
export SINRICPRO_APP_KEY="your-app-key"
export SINRICPRO_APP_SECRET="your-app-secret"
# Run an example
python examples/switch/switch_example.py
```
## API Reference
Full API documentation is available at [Read the Docs](https://sinricpro-python.readthedocs.io) (Coming soon!)
## Troubleshooting
### Connection Issues
1. **Check credentials** - Ensure APP_KEY and APP_SECRET are correct
2. **Check device ID** - Verify the device ID is exactly 24 hexadecimal characters
3. **Check network** - Ensure you have internet connectivity
4. **Enable debug logging** - Set `debug=True` in config to see detailed logs
### Common Errors
**"Invalid app_key format"**
- App key must be a valid UUID (xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx)
**"Invalid app_secret: must be at least 32 characters"**
- App secret must be at least 32 characters long
**"Invalid device_id format"**
- Device ID must be exactly 24 hexadecimal characters
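These three rules can be checked up front before connecting. A minimal stdlib sketch (illustrative only — `validate_credentials` is not part of the SDK):

```python
import re
import uuid

def validate_credentials(app_key: str, app_secret: str, device_id: str) -> list[str]:
    """Return the validation problems described above, or an empty list."""
    problems = []
    # App key must be a valid UUID
    try:
        uuid.UUID(app_key)
    except ValueError:
        problems.append("Invalid app_key format")
    # App secret must be at least 32 characters long
    if len(app_secret) < 32:
        problems.append("Invalid app_secret: must be at least 32 characters")
    # Device ID must be exactly 24 hexadecimal characters
    if not re.fullmatch(r"[0-9a-fA-F]{24}", device_id):
        problems.append("Invalid device_id format")
    return problems
```

Running the checks early gives a clearer error than a failed connection attempt.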
## Support
- **Documentation**: [help.sinric.pro](https://help.sinric.pro)
- **Community**: [Discord](https://discord.gg/W5299EgB59)
- **Issues**: [GitHub Issues](https://github.com/sinricpro/python-sdk/issues)
- **Email**: support@sinric.pro
## License
Copyright (c) 2019-2025 Sinric. All rights reserved.
This project is licensed under the Creative Commons Attribution-Share Alike 4.0 International License (CC BY-SA 4.0) - see the [LICENSE](LICENSE) file for details.
You are free to share and adapt this work for any purpose (including commercially), as long as you give appropriate credit and distribute your contributions under the same license.
## Acknowledgments
- Based on the official [SinricPro C++ SDK](https://github.com/sinricpro/esp8266-esp32-sdk)
- Inspired by the [SinricPro Node.js SDK](https://github.com/sinricpro/nodejs-sdk)
## Related Projects
- [SinricPro ESP8266/ESP32 SDK](https://github.com/sinricpro/esp8266-esp32-sdk)
- [SinricPro Node.js SDK](https://github.com/sinricpro/nodejs-sdk)
- [SinricPro Documentation](https://github.com/sinricpro/help-docs)
---
## Vibe Coding
If you are developing an agent via vibe coding, the `llms.txt` and `llms-full.txt` files can be used as LLM context. The former is a summary; the latter contains the full information, in case your LLM has a large enough context window.
Made with ❤️ by the SinricPro Team
| text/markdown | null | SinricPro <support@sinric.pro> | null | null | CC-BY-SA-4.0 | sinricpro, alexa, google-home, smart-home, iot, websocket, home-automation | [
"Development Status :: 4 - Beta",
"Intended Audience :: Developers",
"Topic :: Home Automation",
"Topic :: Software Development :: Libraries :: Python Modules",
"License :: Free For Educational Use",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Operating System :: OS Independent"
] | [] | null | null | >=3.10 | [] | [] | [] | [
"websockets>=12.0",
"aiohttp>=3.9.0",
"pytest>=7.0; extra == \"dev\"",
"pytest-asyncio>=0.21; extra == \"dev\"",
"pytest-cov>=4.0; extra == \"dev\"",
"black>=23.0; extra == \"dev\"",
"flake8>=6.0; extra == \"dev\"",
"mypy>=1.0; extra == \"dev\"",
"isort>=5.12; extra == \"dev\""
] | [] | [] | [] | [
"Homepage, https://sinric.pro",
"Repository, https://github.com/sinricpro/python-sdk",
"Issues, https://github.com/sinricpro/python-sdk/issues",
"Documentation, https://sinric.pro/python-sdk-docs"
] | twine/6.2.0 CPython/3.12.2 | 2026-02-20T02:05:39.857557 | sinricpro-5.2.1.tar.gz | 43,721 | 84/cb/6372ed38e512950794ec8e7217536355cd25a9fa35204a2bd94eb83878d2/sinricpro-5.2.1.tar.gz | source | sdist | null | false | 59bba761cd135598e3adc907259764f8 | f1ffc971741a9e7955339cc57f4a5ceab1c24f41f852c87a812bfba6bd1bdc7d | 84cb6372ed38e512950794ec8e7217536355cd25a9fa35204a2bd94eb83878d2 | null | [
"LICENSE"
] | 266 |
2.4 | readable-regex | 0.2.0 | A fluent, chainable API for building regular expressions that read like English | # readable-regex
A fluent, chainable Python API for building regular expressions that read like English.
**[Documentation](https://molestreettechllc-dev.github.io/readable-regex/)**
## Install
```bash
pip install readable-regex
```
## Quick Start
```python
from readable_regex import regex
# Email pattern
regex.words.then('@').words.then('.').words.test("user@example.com") # True
# Phone number
regex.digit.exactly(3).then('-').digit.exactly(3).then('-').digit.exactly(4).test("123-456-7890") # True
# Extract all numbers
regex.digits.find_all("Order #42 has 3 items totaling $129") # ['42', '3', '129']
```
## Vocabulary
The API uses a **plural convention**: singular = exactly one, plural = one or more.
### Items — what you match
| Singular | Plural (1+) | Regex |
|---|---|---|
| `digit` | `digits` | `\d` / `\d+` |
| `word` | `words` | `\w` / `\w+` |
| `letter` | `letters` | `[a-zA-Z]` / `[a-zA-Z]+` |
| `whitespace` | `whitespaces` | `\s` / `\s+` |
| `any_char` | `any_chars` | `.` / `.+` |
| `then('text')` | — | escaped literal |
| `any_of('a', 'b')` | — | `[ab]` or `(?:foo\|bar)` |
### Modifiers — how you constrain
| Modifier | Example | Effect |
|---|---|---|
| `exactly(n)` | `digit.exactly(3)` | `\d{3}` |
| `between(n, m)` | `digit.between(1, 3)` | `\d{1,3}` |
| `optional` | `digit.optional` | `\d?` |
| `zero_or_more` | `digit.zero_or_more` | `\d*` |
| `starts_with(text?)` | `starts_with('Hello')` | `^Hello` |
| `ends_with(text?)` | `ends_with('!')` | `!$` |
| `ignore_case` | — | case-insensitive flag |
| `multiline` | — | multiline flag |
| `exclude.digits` | — | `\D+` (negated class) |
| `excluding('_')` | `words.excluding('_')` | `[^\W_]+` |
| `capture(builder)` | `capture(regex.words)` | `(\w+)` |
### Execution — terminal methods
| Method | Returns |
|---|---|
| `test(text)` | `bool` |
| `search(text)` | `re.Match \| None` |
| `match(text)` | `re.Match \| None` |
| `find_all(text)` | `list[str]` |
| `replace(text, repl)` | `str` |
| `split(text)` | `list[str]` |
| `compile()` | `re.Pattern` (cached) |
| `.pattern` | raw regex string |
## Examples
### Email validation
```python
email = regex.words.then('@').words.then('.').words
email.test("user@example.com") # True
email.test("bad@@address") # False
email.pattern # '\w+@\w+\.\w+'
```
### Phone number
```python
phone = (
regex
.digit.exactly(3).then('-')
.digit.exactly(3).then('-')
.digit.exactly(4)
)
phone.test("123-456-7890") # True
phone.pattern # '\d{3}\-\d{3}\-\d{4}'
```
### IP address
```python
ip = (
regex
.digit.between(1, 3).then('.')
.digit.between(1, 3).then('.')
.digit.between(1, 3).then('.')
.digit.between(1, 3)
)
ip.test("192.168.1.1") # True
```
### Capturing groups
```python
kv = regex.capture(regex.words).then('=').capture(regex.any_chars)
m = kv.search("color=blue")
m.group(1) # 'color'
m.group(2) # 'blue'
```
### Case-insensitive matching
```python
greeting = regex.starts_with('hello').ignore_case
greeting.test("HELLO world") # True
greeting.test("hey there") # False
```
### Search and replace
```python
regex.digits.replace("My SSN is 123-45-6789", "***")
# 'My SSN is ***-***-***'
```
### Splitting text
```python
regex.then(',').whitespace.zero_or_more.split("a, b,c, d")
# ['a', 'b', 'c', 'd']
```
### Negated classes
```python
regex.exclude.digits.find_all("a1b2c3") # ['a', 'b', 'c']
regex.words.excluding('_').pattern # '[^\W_]+'
```
### Immutable builder — safe reuse
```python
base = regex.starts_with('LOG-')
errors = base.then('ERROR').any_chars
warns = base.then('WARN').any_chars
errors.test("LOG-ERROR disk full") # True
warns.test("LOG-WARN low memory") # True
base.pattern # '^LOG\-' (unchanged)
```
## Requirements
- Python 3.10+
- Zero runtime dependencies
## License
MIT
| text/markdown | Derwin Emmanuel | null | null | null | null | builder, fluent, readable, regex, regular-expressions | [
"Development Status :: 4 - Beta",
"Intended Audience :: Developers",
"License :: OSI Approved :: MIT License",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: 3.13",
"Programming Language :: Python :: 3.14",
"Topic :: Software Development :: Libraries :: Python Modules",
"Topic :: Text Processing",
"Typing :: Typed"
] | [] | null | null | >=3.10 | [] | [] | [] | [] | [] | [] | [] | [
"Homepage, https://github.com/molestreettechllc-dev/readable-regex",
"Documentation, https://molestreettechllc-dev.github.io/readable-regex/",
"Repository, https://github.com/molestreettechllc-dev/readable-regex",
"Issues, https://github.com/molestreettechllc-dev/readable-regex/issues"
] | twine/6.2.0 CPython/3.14.3 | 2026-02-20T02:03:46.777187 | readable_regex-0.2.0.tar.gz | 17,289 | d8/09/c34093255fc9c1365631c527223407ed20ec55ea510a835e6d3838393b8d/readable_regex-0.2.0.tar.gz | source | sdist | null | false | a95702c8048ed4529df6828e30c24229 | 1ceb997c3a3fffa7bbe6ff91b13cf7a99e26e1fbf153456672c14c1a524ee3ac | d809c34093255fc9c1365631c527223407ed20ec55ea510a835e6d3838393b8d | MIT | [
"LICENSE"
] | 269 |
2.4 | jovialengine | 0.27.7 | A Simple Pygame-ce Engine | # JovialEngine
A Simple Pygame-ce Engine

| text/markdown | Brett Kaplan | BrettCKaplan@gmail.com | null | null | MIT | null | [
"Programming Language :: Python :: 3",
"Operating System :: OS Independent",
"Intended Audience :: Developers"
] | [] | https://github.com/JovialKnoll/jovialengine | null | >=3.11 | [] | [] | [] | [
"pygame-ce>=2.5"
] | [] | [] | [] | [
"Source, https://github.com/JovialKnoll/jovialengine",
"Issue Tracker, https://github.com/JovialKnoll/jovialengine/issues"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-20T02:03:06.918013 | jovialengine-0.27.7.tar.gz | 27,519 | 5d/b9/34170ddb82585ea4044b7966572b24bc54f20aeec41d07dbd84a770f193b/jovialengine-0.27.7.tar.gz | source | sdist | null | false | f53098aa2182ccd675f3883160d401e8 | 318bb10c29852d19f052adaac939afc67509d5baed1b310d3350367d99ec40d6 | 5db934170ddb82585ea4044b7966572b24bc54f20aeec41d07dbd84a770f193b | null | [
"LICENSE.txt"
] | 277 |
2.2 | xslope | 0.1.19 | Slope stability analysis (limit equilibrium and FEM) in Python. | # xslope
Python package for limit equilibrium slope stability analysis
## License
This project is licensed under the Apache License, Version 2.0 - see the [LICENSE](LICENSE) file for details.
## Copyright
Copyright 2025 Norman L. Jones
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
| text/markdown | Norman L. Jones | null | null | null | null | slope, stability, geotechnical, FEM, limit equilibrium | [
"License :: OSI Approved :: Apache Software License",
"Programming Language :: Python",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3 :: Only",
"Programming Language :: Python :: 3.9",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Intended Audience :: Science/Research",
"Topic :: Scientific/Engineering"
] | [] | null | null | >=3.9 | [] | [] | [] | [
"numpy",
"pandas",
"matplotlib",
"scipy",
"shapely",
"openpyxl",
"gmsh; extra == \"fem\""
] | [] | [] | [] | [
"Homepage, https://github.com/njones61/xslope",
"Documentation, https://xslope.readthedocs.io/en/latest/",
"Source, https://github.com/njones61/xslope",
"Issues, https://github.com/njones61/xslope/issues"
] | twine/6.2.0 CPython/3.13.5 | 2026-02-20T02:02:25.629209 | xslope-0.1.19.tar.gz | 158,281 | 60/f6/a86c1958b9f2076b6257c959a07b8253ecd3ed16313c8a165228a13e8ef6/xslope-0.1.19.tar.gz | source | sdist | null | false | 996bae34804fb21896fa58f20668f1a5 | fa9b055eee70e65f7d2362029971e0f1a5aabacdfc2927b199f89fe5e9b5984a | 60f6a86c1958b9f2076b6257c959a07b8253ecd3ed16313c8a165228a13e8ef6 | null | [] | 263 |
2.1 | odoo-addon-hr-timesheet-sheet-attendance | 16.0.1.0.1 | HR Timesheet Sheet Attendance | .. image:: https://odoo-community.org/readme-banner-image
:target: https://odoo-community.org/get-involved?utm_source=readme
:alt: Odoo Community Association
=============================
HR Timesheet Sheet Attendance
=============================
..
!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
!! This file is generated by oca-gen-addon-readme !!
!! changes will be overwritten. !!
!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
!! source digest: sha256:b94c4006fbc663dea177d1b3b52ace6117bbfc24b608581106423ac71ff708fb
!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
.. |badge1| image:: https://img.shields.io/badge/maturity-Beta-yellow.png
:target: https://odoo-community.org/page/development-status
:alt: Beta
.. |badge2| image:: https://img.shields.io/badge/license-AGPL--3-blue.png
:target: http://www.gnu.org/licenses/agpl-3.0-standalone.html
:alt: License: AGPL-3
.. |badge3| image:: https://img.shields.io/badge/github-OCA%2Ftimesheet-lightgray.png?logo=github
:target: https://github.com/OCA/timesheet/tree/16.0/hr_timesheet_sheet_attendance
:alt: OCA/timesheet
.. |badge4| image:: https://img.shields.io/badge/weblate-Translate%20me-F47D42.png
:target: https://translation.odoo-community.org/projects/timesheet-16-0/timesheet-16-0-hr_timesheet_sheet_attendance
:alt: Translate me on Weblate
.. |badge5| image:: https://img.shields.io/badge/runboat-Try%20me-875A7B.png
:target: https://runboat.odoo-community.org/builds?repo=OCA/timesheet&target_branch=16.0
:alt: Try me on Runboat
|badge1| |badge2| |badge3| |badge4| |badge5|
This module extends the functionality of hr_timesheet_sheet
and helps employees manage their attendance according to the timesheet period.
It provides functionality to check in/check out directly from the timesheet sheet.
It also helps you and management with performance evaluation by displaying
the total attendance time and the difference between total attendance time and total working time.
**Table of contents**
.. contents::
:local:
Installation
============
This module relies on:
* The OCA module 'HR Timesheet Sheet', which can be downloaded from
  GitHub: https://github.com/OCA/hr-timesheet/tree/15.0/hr_timesheet_sheet
Usage
=====
* Go to Timesheets > My Timesheet Sheets and create a timesheet
* Go to the Attendances tab on the timesheet form
  - You can see your current check-in/check-out status there
  - You can also create an attendance by clicking on the Check In/Check Out button on the right side
  - You can see the attendances that belong to the current timesheet on the left side of the same tab
* 'Total Attendance' is the total working time based on your attendances
* 'Difference' is the difference between the total attendance time and the working time (sum(attendance time) - sum(unit amount in timesheet lines))
* Two smart buttons are present in the top-right corner of the timesheet form
  - The first one (with a clock icon) takes you to the list of your timesheets (filtered by default to those related to the current timesheet sheet)
  - The second one (labeled Attendances) takes you to the list of your attendances (filtered by default to those related to the current timesheet sheet)
* It prevents changes to any attendance related to a timesheet sheet that has already been submitted
* It also prevents submitting a timesheet sheet that does not have an equal number of check-ins and check-outs
Known issues / Roadmap
======================
With the Check In/Check Out button on the timesheet, a user could conceivably double-click quickly, making the check-in and check-out times identical. The attendance is then blocked, because Odoo's standard check requires the attendance check-in time to be strictly earlier than the check-out time.
Bug Tracker
===========
Bugs are tracked on `GitHub Issues <https://github.com/OCA/timesheet/issues>`_.
In case of trouble, please check there if your issue has already been reported.
If you spotted it first, help us to smash it by providing a detailed and welcomed
`feedback <https://github.com/OCA/timesheet/issues/new?body=module:%20hr_timesheet_sheet_attendance%0Aversion:%2016.0%0A%0A**Steps%20to%20reproduce**%0A-%20...%0A%0A**Current%20behavior**%0A%0A**Expected%20behavior**>`_.
Do not contact contributors directly about support or help with technical issues.
Credits
=======
Authors
~~~~~~~
* BizzAppDev
Contributors
~~~~~~~~~~~~
* Ruchir Shukla <ruchir@bizzappdev.com>
* Shruti Singh <shruti.singh@bizzappdev.com>
* Chirag Parmar <chirag.parmar@bizzappdev.com>
* Naglis Jonaitis <naglis@versada.eu>
* `Tecnativa <https://www.tecnativa.com>`_:
* Ernesto Tejeda
Maintainers
~~~~~~~~~~~
This module is maintained by the OCA.
.. image:: https://odoo-community.org/logo.png
:alt: Odoo Community Association
:target: https://odoo-community.org
OCA, or the Odoo Community Association, is a nonprofit organization whose
mission is to support the collaborative development of Odoo features and
promote its widespread use.
This module is part of the `OCA/timesheet <https://github.com/OCA/timesheet/tree/16.0/hr_timesheet_sheet_attendance>`_ project on GitHub.
You are welcome to contribute. To learn how please visit https://odoo-community.org/page/Contribute.
| null | BizzAppDev, Odoo Community Association (OCA) | support@odoo-community.org | null | null | AGPL-3 | null | [
"Programming Language :: Python",
"Framework :: Odoo",
"Framework :: Odoo :: 16.0",
"License :: OSI Approved :: GNU Affero General Public License v3"
] | [] | https://github.com/OCA/timesheet | null | >=3.10 | [] | [] | [] | [
"odoo-addon-hr-timesheet-sheet<16.1dev,>=16.0dev",
"odoo<16.1dev,>=16.0a"
] | [] | [] | [] | [] | twine/6.2.0 CPython/3.12.3 | 2026-02-20T02:00:17.217246 | odoo_addon_hr_timesheet_sheet_attendance-16.0.1.0.1-py3-none-any.whl | 53,221 | e9/8e/b186b33688f4462fe83fd98493495135925802d00568e0d78bca3445f743/odoo_addon_hr_timesheet_sheet_attendance-16.0.1.0.1-py3-none-any.whl | py3 | bdist_wheel | null | false | 6aa40073048c70030ba76f71875845c2 | 9b542f2c24d4aa9b91785c1521384a968d39a7ef0f454787e0d7061bd87e01cd | e98eb186b33688f4462fe83fd98493495135925802d00568e0d78bca3445f743 | null | [] | 85 |
2.4 | coala-client | 0.1.0 | A simple command line interface for LLM with MCP server and OpenAI-compatible API support | # Coala Client
A simple command line interface for LLM with MCP (Model Context Protocol) server support and OpenAI-compatible API support.
## Features
- **OpenAI-compatible API support**: Works with OpenAI, Google Gemini, Ollama, and any OpenAI-compatible API
- **MCP Server integration**: Connect to multiple MCP servers for extended tool capabilities
- **Interactive chat**: Rich terminal UI with streaming responses
- **Tool calling**: Automatic tool execution with MCP servers
## Installation
```bash
pip install coala-client
```
## Quick Start
### 1. Initialize Configuration
```bash
coala init
```
This creates a default MCP servers configuration file at `~/.config/coala/mcps/mcp_servers.json`.
### 2. Set API Key
```bash
# For OpenAI
export OPENAI_API_KEY=your-openai-api-key
# For Gemini
export GEMINI_API_KEY=your-gemini-api-key
# Ollama doesn't require an API key (runs locally)
```
### 3. Start Chatting
```bash
# Interactive chat with default provider (OpenAI)
coala
# Use a specific provider
coala -p gemini
coala -p ollama
# Use a specific model
coala -p openai -m gpt-4-turbo
# Single prompt
coala ask "What is the capital of France?"
# Disable MCP servers
coala --no-mcp
```
## Configuration
### Environment Variables
| Variable | Description | Default |
|----------|-------------|---------|
| `PROVIDER` | Default LLM provider | `openai` |
| `OPENAI_API_KEY` | OpenAI API key | - |
| `OPENAI_BASE_URL` | OpenAI base URL | `https://api.openai.com/v1` |
| `OPENAI_MODEL` | OpenAI model | `gpt-4o` |
| `GEMINI_API_KEY` | Gemini API key | - |
| `GEMINI_BASE_URL` | Gemini base URL | `https://generativelanguage.googleapis.com/v1beta/openai` |
| `GEMINI_MODEL` | Gemini model | `gemini-2.5-flash-lite` |
| `OLLAMA_BASE_URL` | Ollama base URL | `http://localhost:11434/v1` |
| `OLLAMA_MODEL` | Ollama model | `qwen3` |
| `SYSTEM_PROMPT` | System prompt | `You are a helpful assistant.` |
| `MAX_TOKENS` | Max tokens in response | `4096` |
| `TEMPERATURE` | Temperature | `0.7` |
| `MCP_CONFIG_FILE` | MCP config file path | `~/.config/coala/mcps/mcp_servers.json` |
### MCP Servers Configuration
Edit `~/.config/coala/mcps/mcp_servers.json`:
```json
{
"mcpServers": {
"filesystem": {
"command": "npx",
"args": ["-y", "@modelcontextprotocol/server-filesystem", "/path/to/allowed/dir"],
"env": {}
},
"github": {
"command": "npx",
"args": ["-y", "@modelcontextprotocol/server-github"],
"env": {
"GITHUB_PERSONAL_ACCESS_TOKEN": "your-token"
}
}
}
}
```
### Environment Variables for MCP Servers
You can set environment variables that will be available to all MCP servers by editing `~/.config/coala/env`:
```bash
# Environment variables for MCP servers
# Format: KEY=value
# Set default provider (openai, gemini, ollama, custom)
PROVIDER=gemini
# API keys and model settings
GEMINI_API_KEY=your-gemini-api-key
GEMINI_MODEL=gemini-2.5-flash-lite
```
**Note:** The `PROVIDER` variable in the env file will set the default LLM provider. These variables will be merged with server-specific `env` settings in `mcp_servers.json`. Server-specific environment variables take precedence over the base environment variables.
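The precedence rule can be sketched with a plain dictionary merge (a minimal illustration, not the tool's actual code):

```python
# Variables from ~/.config/coala/env form the base environment.
base_env = {"PROVIDER": "gemini", "GEMINI_API_KEY": "from-env-file"}

# Server-specific "env" entries from mcp_servers.json are merged on top.
server_env = {"GEMINI_API_KEY": "from-mcp-servers-json"}

# Later entries win in a dict merge, so server-specific values
# override the base environment for duplicate keys.
merged = {**base_env, **server_env}
```

Here `merged` keeps `PROVIDER` from the env file but takes `GEMINI_API_KEY` from the server-specific settings.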
## CLI Commands
### Interactive Chat
```bash
coala [OPTIONS]
coala chat [OPTIONS]
```
Options:
- `-p, --provider`: LLM provider (openai/gemini/ollama/custom)
- `-m, --model`: Model name override
- `--no-mcp`: Disable MCP servers
- `--sandbox`: Enable `run_command` tool so the LLM can run basic Linux shell commands (timeout 30s)
### Single Prompt
```bash
coala ask "Your prompt here"
coala -c "Your prompt here"
```
### Chat Commands
During interactive chat:
- `/help` - Show help
- `/exit` / `/quit` - Exit chat
- `/clear` - Clear conversation history
- `/tools` - List available MCP tools
- `/servers` - List connected MCP servers
- `/skill` - List installed skills (from ~/.config/coala/skills/)
- `/skill <name>` - Load a skill into the chat (adds its instructions to context)
- `/model` - Show current model info
- `/switch <provider>` - Switch provider
### Configuration
```bash
coala init # Create default config files
coala config # Show current configuration
```
### CWL toolset as MCP server
```bash
# Import one or more CWL files into a named toolset (copied to ~/.config/coala/mcps/<toolset>/)
coala mcp-import <TOOLSET> file1.cwl [file2.cwl ...]
# Import a zip of CWL files (extracted to ~/.config/coala/mcps/<toolset>/)
coala mcp-import <TOOLSET> tools.zip
# SOURCES can also be http(s) URLs to a .cwl file or a .zip
coala mcp-import <TOOLSET> https://example.com/tools.zip
coala mcp-import <TOOLSET> https://example.com/tool.cwl
```
This creates `run_mcp.py` in `~/.config/coala/mcps/<toolset>/`, adds the server to `~/.config/coala/mcps/mcp_servers.json`, and prints the MCP entry. The generated script uses `coala.mcp_api` (stdio transport). Ensure the `coala` package is installed in the environment that runs the MCP server.
**List servers and tools:**
```bash
# List configured MCP server names
coala mcp-list
# Show tool schemas (name, description, inputSchema) for a server
coala mcp-list <SERVER_NAME>
```
**Call an MCP tool directly:**
```bash
coala mcp-call <SERVER>.<TOOL> --args '<JSON>'
# Example:
coala mcp-call gene-variant.ncbi_datasets_gene --args '{"data": [{"gene": "TP53", "taxon": "human"}]}'
```
### Skills
```bash
# Import skills from a GitHub folder (e.g. vercel-labs/agent-skills/skills)
coala skill https://github.com/vercel-labs/agent-skills/tree/main/skills
# Import from a zip URL or local zip/directory
coala skill http://localhost:3000/files/bedtools/bedtools-skills.zip
coala skill ./my-skills.zip
```
All skills are copied to `~/.config/coala/skills/`. Each source gets its own subfolder (e.g. `skills/bedtools/` for a zip from `.../bedtools/bedtools-skills.zip`, `skills/agent-skills/` for the GitHub repo).
## Examples
### Using with Ollama
```bash
# Start Ollama server
ollama serve
# Pull a model
ollama pull llama3.2
# Chat with Ollama
coala -p ollama -m llama3.2
```
### Using with Gemini
```bash
export GEMINI_API_KEY=your-api-key
coala -p gemini
```
### Using Custom OpenAI-compatible API
```bash
export CUSTOM_API_KEY=your-api-key
export CUSTOM_BASE_URL=https://your-api.com/v1
export CUSTOM_MODEL=your-model
coala -p custom
```
## Development
```bash
# Install with dev dependencies
uv pip install -e ".[dev]"
# Run tests
pytest
```
## Publishing to PyPI
The repo includes a GitHub Action (`.github/workflows/release.yml`) that builds with Poetry and publishes to PyPI when a release is published.
1. **Create a GitHub environment** named `pypi` (optional but recommended).
2. **Configure PyPI** using one of:
- **Trusted Publishing (recommended)**: In PyPI → Your projects → coala-client → Publishing, add a new trusted publisher: GitHub, this repo, workflow `publish-pypi.yml`, environment `pypi`. No secrets needed.
- **API token**: Generate a token at pypi.org, add it as repository (or `pypi` environment) secret `PYPI_API_TOKEN`.
3. **Publish**: Create a new release (tag e.g. `v0.1.0`). The workflow runs on release and uploads the built package. You can also run it manually (Actions → Build and publish to PyPI → Run workflow).
## License
MIT
| text/markdown | coala-info | null | null | null | MIT | null | [
"Development Status :: 4 - Beta",
"Environment :: Console",
"Intended Audience :: Developers",
"License :: OSI Approved :: MIT License",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: 3.13",
"Programming Language :: Python :: 3.14",
"Topic :: Scientific/Engineering"
] | [] | null | null | >=3.10 | [] | [] | [] | [
"anyio>=4.0.0",
"click>=8.1.0",
"httpx>=0.27.0",
"mcp>=1.0.0",
"openai>=1.68.0",
"pydantic>=2.0.0",
"pydantic-settings>=2.0.0",
"rich>=13.0.0"
] | [] | [] | [] | [
"Documentation, https://github.com/coala-info/coala_client#readme",
"Homepage, https://github.com/coala-info/coala_client",
"Repository, https://github.com/coala-info/coala_client"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-20T02:00:12.971914 | coala_client-0.1.0.tar.gz | 21,962 | 41/2b/9b433bd8f93341c90c502751544b10060780806ccb5f86ae838f034a9621/coala_client-0.1.0.tar.gz | source | sdist | null | false | c23f57ae08f4519ca8d82f2f34207a48 | 47ac79fcce764b30f3318c7ea55037ae77fe3e0568f791b387f4c54e8ae9ca0a | 412b9b433bd8f93341c90c502751544b10060780806ccb5f86ae838f034a9621 | null | [
"LICENSE"
] | 270 |
2.4 | oncoshot-llm-validation-framework | 0.1.7 | Oncoshot LLM validation framework | # LLM Validation Framework
A comprehensive Python framework for evaluating LLM-extracted structured data against ground truth labels. Supports binary classification, scalar value extraction, and list field analysis with detailed performance metrics and confidence-based evaluation.
## ✨ Key Features
- **Multi-field validation** - Binary (True/False), scalar (single values), and list (multiple values) data types
- **Dual usage modes** - Validate pre-computed results OR run live LLM inference with validation
- **Comprehensive metrics** - Precision, recall, F1/F2, accuracy, specificity with both micro and macro aggregation
- **Confidence analysis** - Automatic performance breakdown by confidence levels
- **Production ready** - Parallel processing, intelligent caching, detailed progress tracking
## 🚀 Quick Start
### Prerequisites
```bash
pip install -r requirements.txt # Python 3.11+ required
```
### Demo
```bash
python runme.py
```
Processes the included [samples.csv](samples.csv) (14 test cases covering all validation scenarios) and outputs timestamped results to `validation_results/samples/`:
- **[Results CSV](validation_results/samples/2026-02-06%2012-27-38%20results.csv)** - Row-by-row comparison with confusion matrix counts and item-level details
- **[Metrics CSV](validation_results/samples/2026-02-06%2012-27-38%20metrics.csv)** - Aggregated performance statistics with confidence breakdowns
| Rows | Field Type | Test Scenarios |
|------|------------|----------------|
| **1-4** | Binary (`Has metastasis`) | True Positive, True Negative, False Positive, False Negative |
| **5-9** | Scalar (`Diagnosis`, `Histology`) | Correct, incorrect, missing, spurious, and empty extractions |
| **10-14** | List (`Treatment Drugs`, `Test Results`) | Perfect match, spurious items, missing items, correct empty, mixed results |
## 📊 Usage Modes
### Mode 1: Validate Existing Results
When you have LLM predictions in `Res: {Field Name}` columns:
```python
import pandas as pd
from src.validation import validate
df = pd.read_csv("data.csv", index_col="Patient ID")
# df must contain: "Field Name" and "Res: Field Name" columns
results_df, metrics_df = validate(
source_df=df,
fields=["Diagnosis", "Treatment"], # or None for auto-detection
structure_callback=None,
output_folder="validation_results"
)
```
### Mode 2: Live LLM Inference + Validation
```python
from src.structured import StructuredResult, StructuredGroup, StructuredField
from src.utils import flatten_structured_result
def llm_callback(row, i, raw_text_column_name):
raw_text = row[raw_text_column_name]
# Your LLM inference logic here
result = StructuredResult(
groups=[StructuredGroup(
group_name="medical",
fields=[
StructuredField(name="Diagnosis", value="Cancer", confidence="High"),
StructuredField(name="Treatment", value=["Drug A"], confidence="Medium")
]
)]
)
return flatten_structured_result(result), {}
results_df, metrics_df = validate(
source_df=df,
fields=["Diagnosis", "Treatment"],
structure_callback=llm_callback,
raw_text_column_name="medical_report",
output_folder="validation_results",
max_workers=4
)
```
## 📋 Input Data Requirements
### DataFrame Format
- **Unique index** - Each row must have a unique identifier (e.g., "Patient ID")
- **Label columns** - Ground truth values for each field you want to validate
- **Result columns** (Mode 1 only) - LLM predictions as `Res: {Field Name}` columns
- **Raw text column** (Mode 2 only) - Source text for LLM inference (e.g., "medical_report")
### Supported Field Types
| Type | Description | Label Examples | Result Examples |
|------|-------------|----------------|-----------------|
| **Binary** | True/False detection | `True`, `False` | `True`, `False` |
| **Scalar** | Single text/numeric value | `"Lung Cancer"` <br> `42` | `"Breast Cancer"` <br> `38` |
| **List** | Multiple values | `["Drug A", "Drug B"]` <br> `"['Item1', 'Item2']"` | `["Drug A"]` <br> `[]` |
### Special Value Handling
- **`"-"`** = Labeled as "No information is available in the source document"
- **`null/empty`** = Field not labeled/evaluated
- **Lists** - Can be Python lists `["a", "b"]` or stringified `"['a', 'b']"` (auto-converted)
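Stringified lists can be parsed safely with the standard library; a sketch of how such auto-conversion might work (a hypothetical helper, not necessarily the framework's implementation):

```python
import ast

def to_list(value):
    """Accept a real list or a stringified one like "['a', 'b']"."""
    if isinstance(value, list):
        return value
    if isinstance(value, str) and value.strip().startswith("["):
        # literal_eval parses Python literals without executing code.
        return ast.literal_eval(value)
    return value
```

For example, `to_list("['a', 'b']")` and `to_list(["a", "b"])` both yield `["a", "b"]`, while other strings pass through unchanged.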
## 📈 Output Files
The framework generates two timestamped CSV files for each validation run:
### 1. Results CSV (`YYYY-MM-DD HH-MM-SS results.csv`)
**Row-level analysis** with detailed per-case metrics:
**Original Data:**
- All input columns (labels, raw text, etc.)
- `Res: {Field}` columns with LLM predictions
- `Res: {Field} confidence` and `Res: {Field} justification` (if available)
**Binary Fields:**
- `TP/FP/FN/TN: {Field}` - Confusion matrix counts (1 or 0 per row)
**Non-Binary Fields:**
- `Cor/Inc/Mis/Spu: {Field}` - Item counts per row
- `Cor/Inc/Mis/Spu: {Field} items` - Actual item lists
- `Precision/Recall/F1/F2: {Field}` - Per-row metrics (list fields only)
**System Columns:**
- `Sys: from cache` - Whether result was cached (speeds up duplicate text)
- `Sys: exception` - Error information if processing failed
- `Sys: time taken` - Processing time per row in seconds
### 2. Metrics CSV (`YYYY-MM-DD HH-MM-SS metrics.csv`)
**Aggregated statistics** with confidence breakdowns:
**Core Information:**
- `field` - Field name being evaluated
- `confidence` - Confidence level ("Overall", "High", "Medium", "Low", etc.)
- `labeled cases` - Total rows with ground truth labels
- `field-present cases` - Rows where document has information about the field (label is not '-')
**Binary Metrics:** `TP`, `TN`, `FP`, `FN`, `precision`, `recall`, `F1/F2`, `accuracy`, `specificity`
**Non-Binary Metrics:** `cor`, `inc`, `mis`, `spu`, `precision/recall/F1/F2 (micro)`, `precision/recall/F1/F2 (macro)`
## ⚡ Performance Metrics Explained
### Binary Classification Metrics
For fields with True/False values (e.g., "Has metastasis"):
#### Confusion Matrix Counts
| Count | Definition | Example |
|-------|------------|---------|
| **TP (True Positive)** | Correctly predicted positive | Label: `True`, Prediction: `True` → TP=1 |
| **TN (True Negative)** | Correctly predicted negative | Label: `False`, Prediction: `False` → TN=1 |
| **FP (False Positive)** | Incorrectly predicted positive | Label: `False`, Prediction: `True` → FP=1 |
| **FN (False Negative)** | Incorrectly predicted negative | Label: `True`, Prediction: `False` → FN=1 |
#### Binary Classification Formulas
| Metric | Formula | Meaning |
|--------|---------|---------|
| **Precision** | `TP / (TP + FP)` | Of all positive predictions, how many were correct? |
| **Recall** | `TP / (TP + FN)` | Of all actual positives, how many were found? |
| **Accuracy** | `(TP + TN) / (TP + TN + FP + FN)` | Overall percentage of correct predictions |
| **Specificity** | `TN / (TN + FP)` | Of all actual negatives, how many were correctly identified? |
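The four formulas above can be computed directly from the per-row confusion matrix counts (a minimal sketch; division-by-zero handling is omitted for brevity):

```python
def binary_metrics(tp: int, tn: int, fp: int, fn: int) -> dict:
    """Compute binary classification metrics from confusion matrix counts."""
    return {
        "precision": tp / (tp + fp),
        "recall": tp / (tp + fn),
        "accuracy": (tp + tn) / (tp + tn + fp + fn),
        "specificity": tn / (tn + fp),
    }

m = binary_metrics(tp=8, tn=5, fp=2, fn=1)
# precision = 8/10, recall = 8/9, accuracy = 13/16, specificity = 5/7
```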
### Structured Extraction Metrics
For scalar and list fields (e.g., "Diagnosis", "Treatment Drugs"):
#### Core Counts (Per Case Analysis)
| Count | Definition | Example |
|-------|------------|---------|
| **Correct (Cor)** | Items extracted correctly | Label: `["DrugA", "DrugB"]`, Prediction: `["DrugA"]` → Cor=1 |
| **Missing (Mis)** | Items present in label but not extracted | (Same example) → Mis=1 (DrugB missing) |
| **Spurious (Spu)** | Items extracted but not in label | Label: `["DrugA"]`, Prediction: `["DrugA", "DrugC"]` → Spu=1 |
| **Incorrect (Inc)** | Wrong values for scalar fields | Label: `"Cancer"`, Prediction: `"Diabetes"` → Inc=1 |
#### Structured Extraction Formulas
| Metric | Formula | Meaning |
|--------|---------|---------|
| **Precision** | `Cor / (Cor + Spu + Inc)` | Of all extracted items, how many were correct? |
| **Recall** | `Cor / (Cor + Mis + Inc)` | Of all labeled items, how many were correctly extracted? |
**Note:** For scalar fields, Inc (incorrect) is used; for list fields, Inc is typically 0 since items are either correct, missing, or spurious.
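The micro vs. macro aggregation reported in the metrics CSV can be illustrated with the precision formula above (a minimal sketch, assuming these example counts):

```python
rows = [
    {"cor": 1, "spu": 1, "inc": 0},  # per-row precision 0.5
    {"cor": 3, "spu": 0, "inc": 0},  # per-row precision 1.0
]

# Micro: pool the item counts across all rows, then apply the formula once.
cor = sum(r["cor"] for r in rows)
spu = sum(r["spu"] for r in rows)
inc = sum(r["inc"] for r in rows)
micro_precision = cor / (cor + spu + inc)  # 4 / 5 = 0.8

# Macro: apply the formula per row, then average the per-row values.
macro_precision = sum(
    r["cor"] / (r["cor"] + r["spu"] + r["inc"]) for r in rows
) / len(rows)  # (0.5 + 1.0) / 2 = 0.75
```

Micro aggregation weights rows by how many items they contain; macro aggregation weights every row equally.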
The following formulas apply to both binary classification and structured extraction metrics:
| Metric | Formula | Meaning |
|--------|---------|--------|
| **F1 Score** | `2 × (P × R) / (P + R)` | Balanced harmonic mean of precision and recall |
| **F2 Score** | `5 × (P × R) / (4P + R)` | Recall-weighted F-score (emphasizes recall over precision) |
Where P = Precision and R = Recall (calculated differently for each metric type).
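Both F-scores are instances of the general F-beta score, where beta > 1 weights recall more heavily (a minimal sketch under that standard definition):

```python
def f_beta(precision: float, recall: float, beta: float) -> float:
    """General F-beta score; F1 is beta=1, F2 is beta=2."""
    return (1 + beta**2) * precision * recall / (beta**2 * precision + recall)

p, r = 1.0, 0.5
f1 = f_beta(p, r, beta=1)  # 2PR / (P + R)
f2 = f_beta(p, r, beta=2)  # 5PR / (4P + R) — closer to recall
```

With perfect precision but 50% recall, F2 falls below F1, reflecting its emphasis on recall.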
## 🛠️ Advanced Configuration
### Parallel Processing
```python
validate(
source_df=df,
fields=["diagnosis", "treatment"],
structure_callback=callback,
max_workers=None, # Auto-detect CPU count (or specify number)
use_threads=True # True for I/O-bound (LLM API calls), False for CPU-bound
)
```
### Performance Features
- **Automatic caching** - Identical raw text inputs are deduplicated and cached
- **Progress tracking** - Real-time progress bar for long-running validations
- **Cache statistics** - Check `Sys: from cache` column in results to monitor cache hits
### Confidence Analysis
When LLM inference returns both extracted fields and their associated confidence levels, the framework automatically detects `Res: {Field} confidence` columns and generates:
- Separate metrics for each unique confidence level found in your data
- Overall metrics aggregating across all confidence levels
- Useful for setting confidence thresholds and analyzing prediction reliability
## 🧪 Development & Testing
```bash
# Install development dependencies
pip install -r requirements.txt
# Run all tests
pytest
# Run with coverage reporting
pytest --cov=src
# Run specific test modules
pytest tests/validate_test.py # Core validation logic
pytest tests/compare_results_test.py # Comparison algorithms
pytest tests/compare_results_all_test.py # End-to-end comparisons
```
## 📁 Project Structure
```
llm-validation-framework/
├── src/
│ ├── validation.py # Main validation pipeline and metrics calculation
│ ├── structured.py # Pydantic data models for LLM results
│ ├── utils.py # Utility functions (list conversion, flattening)
│ └── standardize.py # Data standardization helpers
├── tests/ # Comprehensive test suite
├── validation_results/ # Output directory (auto-created)
├── samples.csv # Demo dataset with all validation scenarios
├── runme.py # Demo script
└── requirements.txt # Dependencies (pandas, pydantic, tqdm, etc.)
```
## 🔧 Troubleshooting
| Error | Solution |
|-------|----------|
| **"Cannot infer fields"** | Ensure DataFrame has both `{Field}` and `Res: {Field}` columns when `structure_callback=None` |
| **"Missing fields"** | Verify `fields` parameter contains column names that exist in your DataFrame |
| **"Duplicate index"** | Use `df.reset_index(drop=True)` or ensure your DataFrame index has unique values |
| **Import/dependency errors** | Run `pip install -r requirements.txt` and verify Python 3.11+ |
| **Slow performance** | Enable parallel processing with `max_workers=None` and `use_threads=True` for LLM API calls |
## 📄 License
This project is licensed under the MIT License - see the [LICENSE](LICENSE) file for details.
| text/markdown | null | null | null | null | MIT | null | [
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: 3.13",
"Programming Language :: Python :: 3.14",
"License :: OSI Approved :: MIT License",
"Intended Audience :: Developers"
] | [] | null | null | >=3.11 | [] | [] | [] | [] | [] | [] | [] | [
"Homepage, https://github.com/Oncoshot/llm-validation-framework",
"Repository, https://github.com/Oncoshot/llm-validation-framework",
"Bug Tracker, https://github.com/Oncoshot/llm-validation-framework/issues"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-20T01:59:36.296514 | oncoshot_llm_validation_framework-0.1.7.tar.gz | 26,027 | 4a/c2/e0ee28438fe3983ea17b02695746acfec9ca5651768dc6c417cbd69a1a12/oncoshot_llm_validation_framework-0.1.7.tar.gz | source | sdist | null | false | 41f0aebfc9b692a46e0179a958690fa0 | fd9ba89527dde5d75e5aa68adebf52f9b7a108f79296f4db99ee13b8307897df | 4ac2e0ee28438fe3983ea17b02695746acfec9ca5651768dc6c417cbd69a1a12 | null | [
"LICENSE"
] | 259 |
2.3 | conductor-py | 1.73.0 | The official Python library for the conductor API | <!-- markdownlint-disable MD033 MD041 -->
<div align="center">
<a href="https://conductor.is">
<picture>
<source media="(prefers-color-scheme: dark)" srcset="https://github.com/conductor-is/quickbooks-desktop-api/assets/170023/162ee6a9-75ac-41e9-9f1e-2ecc1d88f841">
<img alt="Conductor logo" src="https://github.com/conductor-is/quickbooks-desktop-api/assets/170023/d67464b8-53a7-4d33-afeb-05a2efde1fa8" width="325">
</picture>
</a>
<h3>QuickBooks Desktop/Enterprise real-time API for Python, Node.js, and REST</h3>
<a href="https://docs.conductor.is/quickstart">Quickstart</a>
<span> • </span>
<a href="https://conductor.is">Website</a>
<span> • </span>
<a href="https://docs.conductor.is">Docs</a>
<span> • </span>
<a href="https://docs.conductor.is/qbd-api">Examples</a>
<br />
<br />
<a href="https://pypi.org/project/conductor-py"><img src="https://img.shields.io/pypi/dm/conductor-py.svg?logo=pypi" alt="PyPI download count"></a>
<a href="https://pypi.org/project/conductor-py"><img src="https://img.shields.io/pypi/v/conductor-py.svg?logo=pypi" alt="PyPI version"></a>
<img src="https://img.shields.io/badge/coverage-100%25-brightgreen" alt="Code coverage">
<a href="LICENSE"><img src="https://img.shields.io/pypi/l/conductor-py.svg?color=blue&logo=github" alt="License" /></a>
<hr />
</div>
<!-- prettier-ignore -->
[Conductor](https://conductor.is) is a real-time, fully-typed API for **QuickBooks Desktop** (sometimes called QuickBooks Enterprise). In just a few lines, get real-time access to fetch, create, or update _any_ QuickBooks Desktop object type and receive a fully-typed response.
⭐ **Follow our [Quickstart guide](https://docs.conductor.is/quickstart) to get started.**
The Conductor **Python** library provides convenient access to our QuickBooks Desktop API from any Python 3.9+ application. The library includes type definitions for all request params and response fields, and offers both synchronous and asynchronous clients powered by [httpx](https://github.com/encode/httpx).
- For Node.js (TypeScript/JavaScript), see [conductor-node](https://github.com/conductor-is/quickbooks-desktop-node).
## MCP Server
Use the Conductor MCP Server to enable AI assistants to interact with this API, allowing them to explore endpoints, make test requests, and use documentation to help integrate this SDK into your application.
[](https://cursor.com/en-US/install-mcp?name=conductor-node-mcp&config=eyJjb21tYW5kIjoibnB4IiwiYXJncyI6WyIteSIsImNvbmR1Y3Rvci1ub2RlLW1jcCJdLCJlbnYiOnsiQ09ORFVDVE9SX1NFQ1JFVF9LRVkiOiJza19jb25kdWN0b3JfLi4uIn19)
[](https://vscode.stainless.com/mcp/%7B%22name%22%3A%22conductor-node-mcp%22%2C%22command%22%3A%22npx%22%2C%22args%22%3A%5B%22-y%22%2C%22conductor-node-mcp%22%5D%2C%22env%22%3A%7B%22CONDUCTOR_SECRET_KEY%22%3A%22sk_conductor_...%22%7D%7D)
> Note: You may need to set environment variables in your MCP client.
## Documentation
The REST API documentation can be found on [docs.conductor.is](https://docs.conductor.is/api-ref). The full API of this library can be found in [api.md](https://github.com/conductor-is/quickbooks-desktop-python/tree/main/api.md).
## Installation
```sh
pip install conductor-py
```
## Key features
- **Any data type**: Query, create, or update any QuickBooks Desktop data type.
- **Real-time**: Get real-time updates on your QuickBooks Desktop data. No queues, no jobs, no cache layer -- just direct access to the data.
- **Modern API**: JSON-based REST API, replacing the old XML-based SOAP model.
- **Typed client libraries**: Fully typed libraries in Node.js and Python with autocomplete, inline docs, and type validation for endpoints, parameters, and responses.
- **Request handling**: Invisibly manages queues, timeouts, retries, and pagination.
- **Auto-pagination**: Automatically handles paginated responses to retrieve complete datasets.
- **Multi-company support**: Connects to multiple QuickBooks Desktop company files.
- **Validation**: Sanitizes and validates all inputs and outputs.
- **Unified error handling**: Streamlines error handling across the QuickBooks stack.
- **Authentication flow UI**: Simple UI for securely connecting QuickBooks Desktop accounts.
- **Dashboard**: UI to monitor and manage your QuickBooks Desktop connections and data.
- **Error resolution**: Detailed guides and instructions for resolving errors and handling edge cases.
## Usage
The full API of this library can be found with code samples at [docs.conductor.is/qbd-api](https://docs.conductor.is/qbd-api).
```python
import os
from conductor import Conductor
conductor = Conductor(
api_key=os.environ.get("CONDUCTOR_SECRET_KEY"), # This is the default and can be omitted
)
page = conductor.qbd.invoices.list(
conductor_end_user_id="YOUR_END_USER_ID",
)
print(page.data)
```
While you can provide an `api_key` keyword argument,
we recommend using [python-dotenv](https://pypi.org/project/python-dotenv/)
to add `CONDUCTOR_SECRET_KEY="sk_conductor_..."` to your `.env` file
so that your API Key is not stored in source control.
## Async usage
Simply import `AsyncConductor` instead of `Conductor` and use `await` with each API call:
```python
import os
import asyncio
from conductor import AsyncConductor
conductor = AsyncConductor(
api_key=os.environ.get("CONDUCTOR_SECRET_KEY"), # This is the default and can be omitted
)
async def main() -> None:
page = await conductor.qbd.invoices.list(
conductor_end_user_id="YOUR_END_USER_ID",
)
print(page.data)
asyncio.run(main())
```
Functionality between the synchronous and asynchronous clients is otherwise identical.
### With aiohttp
By default, the async client uses `httpx` for HTTP requests. However, for improved concurrency performance you may also use `aiohttp` as the HTTP backend.
You can enable this by installing `aiohttp`:
```sh
# install from PyPI
pip install conductor-py[aiohttp]
```
Then you can enable it by instantiating the client with `http_client=DefaultAioHttpClient()`:
```python
import os
import asyncio
from conductor import DefaultAioHttpClient
from conductor import AsyncConductor
async def main() -> None:
async with AsyncConductor(
api_key=os.environ.get("CONDUCTOR_SECRET_KEY"), # This is the default and can be omitted
http_client=DefaultAioHttpClient(),
) as conductor:
page = await conductor.qbd.invoices.list(
conductor_end_user_id="YOUR_END_USER_ID",
)
print(page.data)
asyncio.run(main())
```
## Using types
Nested request parameters are [TypedDicts](https://docs.python.org/3/library/typing.html#typing.TypedDict). Responses are [Pydantic models](https://docs.pydantic.dev) which also provide helper methods for things like:
- Serializing back into JSON, `model.to_json()`
- Converting to a dictionary, `model.to_dict()`
Typed requests and responses provide autocomplete and documentation within your editor. If you would like to see type errors in VS Code to help catch bugs earlier, set `python.analysis.typeCheckingMode` to `basic`.
## Pagination
List methods in the Conductor API are paginated.
This library provides auto-paginating iterators with each list response, so you do not have to request successive pages manually:
```python
from conductor import Conductor
conductor = Conductor()
all_invoices = []
# Automatically fetches more pages as needed.
for invoice in conductor.qbd.invoices.list(
conductor_end_user_id="YOUR_END_USER_ID",
):
# Do something with invoice here
all_invoices.append(invoice)
print(all_invoices)
```
Or, asynchronously:
```python
import asyncio
from conductor import AsyncConductor
conductor = AsyncConductor()
async def main() -> None:
all_invoices = []
# Iterate through items across all pages, issuing requests as needed.
async for invoice in conductor.qbd.invoices.list(
conductor_end_user_id="YOUR_END_USER_ID",
):
all_invoices.append(invoice)
print(all_invoices)
asyncio.run(main())
```
Alternatively, you can use the `.has_next_page()`, `.next_page_info()`, or `.get_next_page()` methods for more granular control when working with pages:
```python
first_page = await conductor.qbd.invoices.list(
conductor_end_user_id="YOUR_END_USER_ID",
)
if first_page.has_next_page():
print(f"will fetch next page using these details: {first_page.next_page_info()}")
next_page = await first_page.get_next_page()
print(f"number of items we just fetched: {len(next_page.data)}")
# Remove `await` for non-async usage.
```
Or just work directly with the returned data:
```python
first_page = await conductor.qbd.invoices.list(
conductor_end_user_id="YOUR_END_USER_ID",
)
print(f"next page cursor: {first_page.next_cursor}") # => "next page cursor: ..."
for invoice in first_page.data:
print(invoice.id)
# Remove `await` for non-async usage.
```
## Nested params
Nested parameters are dictionaries, typed using `TypedDict`, for example:
```python
from datetime import date

from conductor import Conductor
conductor = Conductor()
bill = conductor.qbd.bills.create(
transaction_date=date.fromisoformat("2024-10-01"),
vendor_id="80000001-1234567890",
conductor_end_user_id="end_usr_1234567abcdefg",
vendor_address={},
)
print(bill.vendor_address)
```
## Handling errors
When the library is unable to connect to the API (for example, due to network connection problems or a timeout), a subclass of `conductor.APIConnectionError` is raised.
When the API returns a non-success status code (that is, 4xx or 5xx
response), a subclass of `conductor.APIStatusError` is raised, containing `status_code` and `response` properties.
All errors inherit from `conductor.APIError`.
```python
import conductor
from conductor import Conductor
conductor = Conductor()
try:
conductor.qbd.invoices.list(
conductor_end_user_id="YOUR_END_USER_ID",
)
except conductor.APIConnectionError as e:
print("The server could not be reached")
print(e.__cause__) # an underlying Exception, likely raised within httpx.
except conductor.RateLimitError as e:
print("A 429 status code was received; we should back off a bit.")
except conductor.APIStatusError as e:
print("Another non-200-range status code was received")
print(e.status_code)
print(e.response)
```
Error codes are as follows:
| Status Code | Error Type |
| ----------- | -------------------------- |
| 400 | `BadRequestError` |
| 401 | `AuthenticationError` |
| 403 | `PermissionDeniedError` |
| 404 | `NotFoundError` |
| 422 | `UnprocessableEntityError` |
| 429 | `RateLimitError` |
| >=500 | `InternalServerError` |
| N/A | `APIConnectionError` |
### Retries
Certain errors are automatically retried 2 times by default, with a short exponential backoff.
Connection errors (for example, due to a network connectivity problem), 408 Request Timeout, 409 Conflict,
429 Rate Limit, and >=500 Internal errors are all retried by default.
You can use the `max_retries` option to configure or disable retry settings:
```python
from conductor import Conductor
# Configure the default for all requests:
conductor = Conductor(
# default is 2
max_retries=0,
)
# Or, configure per-request:
conductor.with_options(max_retries=5).qbd.invoices.list(
conductor_end_user_id="YOUR_END_USER_ID",
)
```
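For illustration, a short exponential backoff like the one described above can be sketched as follows. The base delay, cap, and jitter values here are assumptions for the sketch, not the SDK's exact internal schedule:

```python
import random


def backoff_delay(attempt: int, base: float = 0.5, cap: float = 8.0) -> float:
    # Exponential growth capped at `cap`, with jitter to avoid thundering herds.
    # These constants are illustrative; the SDK's internal schedule may differ.
    return min(cap, base * (2 ** attempt)) * random.uniform(0.5, 1.0)


for attempt in range(3):
    print(f"retry {attempt}: sleeping up to {min(8.0, 0.5 * 2 ** attempt)}s")
```

Jitter matters in practice: without it, many clients that failed at the same moment would all retry at the same moment, too.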
### Timeouts
By default requests time out after 2 minutes. You can configure this with a `timeout` option,
which accepts a float or an [`httpx.Timeout`](https://www.python-httpx.org/advanced/timeouts/#fine-tuning-the-configuration) object:
```python
import httpx

from conductor import Conductor
# Configure the default for all requests:
conductor = Conductor(
# 20 seconds (default is 2 minutes)
timeout=20.0,
)
# More granular control:
conductor = Conductor(
timeout=httpx.Timeout(60.0, read=5.0, write=10.0, connect=2.0),
)
# Override per-request:
conductor.with_options(timeout=5.0).qbd.invoices.list(
conductor_end_user_id="YOUR_END_USER_ID",
)
```
On timeout, an `APITimeoutError` is raised.
Note that requests that time out are [retried twice by default](https://github.com/conductor-is/quickbooks-desktop-python/tree/main/#retries).
## Advanced
### Logging
We use the standard library [`logging`](https://docs.python.org/3/library/logging.html) module.
You can enable logging by setting the environment variable `CONDUCTOR_LOG` to `info`.
```shell
$ export CONDUCTOR_LOG=info
```
Or to `debug` for more verbose logging.
### How to tell whether `None` means `null` or missing
In an API response, a field may be explicitly `null`, or missing entirely; in either case, its value is `None` in this library. You can differentiate the two cases with `.model_fields_set`:
```py
if response.my_field is None:
if 'my_field' not in response.model_fields_set:
print('Got json like {}, without a "my_field" key present at all.')
else:
print('Got json like {"my_field": null}.')
```
### Accessing raw response data (e.g. headers)
The "raw" Response object can be accessed by prefixing `.with_raw_response.` to any HTTP method call, e.g.,
```py
from conductor import Conductor
conductor = Conductor()
response = conductor.qbd.invoices.with_raw_response.list(
conductor_end_user_id="YOUR_END_USER_ID",
)
print(response.headers.get('X-My-Header'))
invoice = response.parse() # get the object that `qbd.invoices.list()` would have returned
print(invoice.id)
```
These methods return an [`APIResponse`](https://github.com/conductor-is/quickbooks-desktop-python/tree/main/src/conductor/_response.py) object.
The async client returns an [`AsyncAPIResponse`](https://github.com/conductor-is/quickbooks-desktop-python/tree/main/src/conductor/_response.py) with the same structure, the only difference being `await`able methods for reading the response content.
#### `.with_streaming_response`
The above interface eagerly reads the full response body when you make the request, which may not always be what you want.
To stream the response body, use `.with_streaming_response` instead, which requires a context manager and only reads the response body once you call `.read()`, `.text()`, `.json()`, `.iter_bytes()`, `.iter_text()`, `.iter_lines()` or `.parse()`. In the async client, these are async methods.
```python
with conductor.qbd.invoices.with_streaming_response.list(
conductor_end_user_id="YOUR_END_USER_ID",
) as response:
print(response.headers.get("X-My-Header"))
for line in response.iter_lines():
print(line)
```
The context manager is required so that the response will reliably be closed.
### Making custom/undocumented requests
This library is typed for convenient access to the documented API.
If you need to access undocumented endpoints, params, or response properties, the library can still be used.
#### Undocumented endpoints
To make requests to undocumented endpoints, you can use `conductor.get`, `conductor.post`, and the other HTTP verb methods. Client options (such as retries) are respected when making these requests.
```py
import httpx
response = conductor.post(
"/foo",
cast_to=httpx.Response,
body={"my_param": True},
)
print(response.headers.get("x-foo"))
```
#### Undocumented request params
If you want to explicitly send an extra param, you can do so with the `extra_query`, `extra_body`, and `extra_headers` request
options.
#### Undocumented response properties
To access undocumented response properties, you can access the extra fields like `response.unknown_prop`. You
can also get all the extra fields on the Pydantic model as a dict with
[`response.model_extra`](https://docs.pydantic.dev/latest/api/base_model/#pydantic.BaseModel.model_extra).
### Configuring the HTTP client
You can directly override the [httpx client](https://www.python-httpx.org/api/#client) to customize it for your use case, including:
- Support for [proxies](https://www.python-httpx.org/advanced/proxies/)
- Custom [transports](https://www.python-httpx.org/advanced/transports/)
- Additional [advanced](https://www.python-httpx.org/advanced/clients/) functionality
```python
import httpx
from conductor import Conductor, DefaultHttpxClient
conductor = Conductor(
# Or use the `CONDUCTOR_BASE_URL` env var
base_url="http://my.test.server.example.com:8083",
http_client=DefaultHttpxClient(
proxy="http://my.test.proxy.example.com",
transport=httpx.HTTPTransport(local_address="0.0.0.0"),
),
)
```
You can also customize the client on a per-request basis by using `with_options()`:
```python
conductor.with_options(http_client=DefaultHttpxClient(...))
```
### Managing HTTP resources
By default the library closes underlying HTTP connections whenever the client is [garbage collected](https://docs.python.org/3/reference/datamodel.html#object.__del__). You can manually close the client using the `.close()` method if desired, or with a context manager that closes when exiting.
```py
from conductor import Conductor
with Conductor() as conductor:
# make requests here
...
# HTTP client is now closed
```
## Versioning
This package generally follows [SemVer](https://semver.org/spec/v2.0.0.html) conventions, though certain backwards-incompatible changes may be released as minor versions:
1. Changes that only affect static types, without breaking runtime behavior.
2. Changes to library internals which are technically public but not intended or documented for external use. _(Please open a GitHub issue to let us know if you are relying on such internals.)_
3. Changes that we do not expect to impact the vast majority of users in practice.
We take backwards-compatibility seriously and work hard to ensure you can rely on a smooth upgrade experience.
We are keen for your feedback; please open an [issue](https://www.github.com/conductor-is/quickbooks-desktop-python/issues) with questions, bugs, or suggestions.
### Determining the installed version
If you've upgraded to the latest version but aren't seeing any new features you were expecting, then your Python environment is likely still using an older version.
You can determine the version that is being used at runtime with:
```py
import conductor
print(conductor.__version__)
```
## Requirements
Python 3.9 or higher.
## Contributing
See [the contributing documentation](https://github.com/conductor-is/quickbooks-desktop-python/tree/main/./CONTRIBUTING.md).
| text/markdown | null | Conductor <support@conductor.is> | null | null | Apache-2.0 | null | [
"Intended Audience :: Developers",
"License :: OSI Approved :: Apache Software License",
"Operating System :: MacOS",
"Operating System :: Microsoft :: Windows",
"Operating System :: OS Independent",
"Operating System :: POSIX",
"Operating System :: POSIX :: Linux",
"Programming Language :: Python :: 3.9",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: 3.13",
"Programming Language :: Python :: 3.14",
"Topic :: Software Development :: Libraries :: Python Modules",
"Typing :: Typed"
] | [] | null | null | >=3.9 | [] | [] | [] | [
"anyio<5,>=3.5.0",
"distro<2,>=1.7.0",
"httpx<1,>=0.23.0",
"pydantic<3,>=1.9.0",
"sniffio",
"typing-extensions<5,>=4.10",
"aiohttp; extra == \"aiohttp\"",
"httpx-aiohttp>=0.1.9; extra == \"aiohttp\""
] | [] | [] | [] | [
"Homepage, https://github.com/conductor-is/quickbooks-desktop-python",
"Repository, https://github.com/conductor-is/quickbooks-desktop-python"
] | twine/5.1.1 CPython/3.12.9 | 2026-02-20T01:59:02.820827 | conductor_py-1.73.0.tar.gz | 671,365 | a5/b0/f335834f6d7511387368140f82552d4f39151d26aa8577cb83693c256eef/conductor_py-1.73.0.tar.gz | source | sdist | null | false | 2d1e016563dbfa608d8c2916205698f3 | 1e19092c18f2cc87e47b22cfdc3d8cea7f68672b8b533395f6cba99490170d2f | a5b0f335834f6d7511387368140f82552d4f39151d26aa8577cb83693c256eef | null | [] | 293 |
2.4 | kpf | 0.10.2 | TUI for kubectl port-forward | # kpf - A TUI for port-forwarding with kubectl
This is a Python utility that (attempts to) dramatically improve the experience of port-forwarding with kubectl.
It is essentially a wrapper around `kubectl port-forward` that adds interactive service selection and automatic reconnects when pods restart or your network connection is interrupted (computer goes to sleep, etc.).
This should be compatible with the `kpf` alias that you may already have.
If you like this, check out <https://github.com/jessegoodier/kdebug>, a TUI for debug containers in Kubernetes pods with interactive shell access and backup capabilities.
## Demo
Demo of the TUI and the reconnect when a pod is restarted:

## Features
- 🔄 **Automatic Restart**: Monitors endpoint changes and restarts port-forward automatically
- 🛡️ **Network Watchdog**: Detects zombie connections after laptop sleep/wake and auto-recovers
- 🎯 **Interactive Selection**: Choose services with a colorful, intuitive interface
- 🌈 **Color-coded Status**: Green for services with endpoints, red for those without
- 🔍 **Multi-resource Support**: Services, pods, deployments, etc.
- 🔐 **Smart Port Handling**: Automatically detects privileged port issues (< 1024) and suggests alternatives
## Installation
**Note**: The `oh-my-zsh` kubectl plugin will conflict with this `kpf` command. You must unalias `kpf` before using this tool.
```sh
echo "unalias kpf 2>/dev/null" >> ~/.zshrc
```
### Homebrew (Recommended)
Other methods do not automatically install command completions.
```bash
brew tap jessegoodier/kpf
brew install kpf
```
Or install directly:
```bash
brew install jessegoodier/kpf/kpf
```
### Using uv
```bash
uv tool install kpf
```
Or install from source:
```bash
uv tool install .
```
## Usage
### Interactive Mode (Recommended)
**Warm Tip**: You can use the interactive mode to find the service you want, and it will output the command to connect to that service directly next time.
**Note**: You might think that "warm tip" is something that AI wrote, but that's not the case. It really is just a little bit cooler than a hot tip.
Visual explanation of the features:

Check which endpoints are up across the entire cluster (can be slow):

Select services interactively:
Interactive selection in current namespace:
```bash
kpf
```
Interactive selection in specific namespace:
```bash
kpf -n production
```
Interactive selection with namespace prompt:
```bash
kpf -p
```
Show all services across all namespaces:
```bash
kpf --all
```
Include pods and controllers with ports defined:
```bash
kpf --all-ports
```
Combine a few options (namespace prompt, all namespaces, all ports, and debug mode):
```bash
kpf -pAdl
```
### Check Mode
Add endpoint status checking to service selection (slower but shows endpoint health):
```bash
# Interactive selection with endpoint status
kpf --check
# Show all services with endpoint status
kpf --all --check
# Include pods and deployments with status
kpf --all-ports --check
```
### Legacy Mode
Direct port-forward (preserves the familiar `kubectl port-forward` syntax):
```bash
# Traditional kubectl port-forward syntax
kpf svc/frontend 8080:8080 -n production
kpf pod/my-pod 3000:3000
```
### Command Options
```sh
Example usage:
kpf # Interactive mode
kpf svc/frontend 8080:8080 -n production # Direct port-forward (maintain expected behavior)
kpf -n production # Interactive selection in specific namespace
kpf --all (or -A) # Show all services across all namespaces
kpf --all-ports (or -l) # Show all services with their ports
kpf --check -n production # Interactive selection with endpoint status
kpf --prompt-namespace (or -p) # Interactive namespace selection
kpf -z # Listen on 0.0.0.0 (all interfaces)
```
## Examples
### Interactive Service Selection
Fast mode (without endpoint checking):
```bash
$ kpf -n kube-system
Services in namespace: kube-system
# Type Name Ports
1 SERVICE kube-dns 53, 9153
2 SERVICE metrics-server 443
3 SERVICE kubernetes-dashboard 443
Select a service [1]: 1
Local port (press Enter for 53): 5353
```
With endpoint status checking:
```bash
$ kpf --check -n kube-system
Services in namespace: kube-system
# Type Name Ports Status
1 SERVICE kube-dns 53, 9153 ✓
2 SERVICE metrics-server 443 ✓
3 SERVICE kubernetes-dashboard 443 ✗
✓ = Has endpoints ✗ = No endpoints
Select a service [1]: 1
Local port (press Enter for 53): 5353
```
### Cross-Namespace Discovery
```bash
$ kpf --all
Services across all namespaces
# Namespace Type Name Ports Status
1 default SERVICE kubernetes 443 ✓
2 kube-system SERVICE kube-dns 53, 9153 ✓
3 production SERVICE frontend 80, 443 ✓
4 production SERVICE backend 8080 ✗
```
### Smart Low Port Handling
When you try to use privileged ports (< 1024), `kpf` will detect the permission issue and offer to use a higher port automatically:
```bash
$ kpf -n monitoring svc/grafana 80:80
Error: Port 80 requires elevated privileges (root/sudo)
Low ports (< 1024) require administrator permissions on most systems
Suggested alternative: Use port 1080 instead?
This would forward: localhost:1080 -> service:80
Use suggested port? [Y/n]: y
Updated port mapping to 1080:80
Direct command: kpf svc/grafana 1080:80 -n monitoring
http://localhost:1080
🚀 port-forward started 🚀
```
This feature prevents confusing "port already in use" errors when the real issue is insufficient permissions.
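The port-suggestion logic can be approximated by a tiny sketch. The +1000 offset matches the 80 → 1080 example above; the real tool also checks whether the suggested port is actually free, which is omitted here:

```python
PRIVILEGED_MAX = 1023  # ports <= 1023 need root/sudo on most systems


def suggest_port(requested: int, offset: int = 1000) -> int:
    """Return a non-privileged alternative for a privileged local port."""
    return requested + offset if requested <= PRIVILEGED_MAX else requested


print(suggest_port(80))    # 1080, as in the grafana example above
print(suggest_port(8080))  # 8080 is already unprivileged
```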
## How It Works
1. **Port-Forward Thread**: Runs kubectl port-forward in a separate thread
2. **Endpoint Watcher**: Monitors endpoint changes using `kubectl get ep -w`
3. **Network Watchdog**: Checks both K8s API connectivity and local port health every 5 seconds to detect zombie connections (e.g., after laptop sleep/wake). This catches cases where the API is reachable but the port-forward tunnel is dead.
4. **Automatic Restart**: When endpoints change or connectivity is lost, gracefully restarts the port-forward
5. **Service Discovery**: Uses kubectl to discover services and their endpoint status
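The watchdog's failure-threshold behavior (step 3) can be modeled as a small counter of consecutive failed health checks. This is an illustrative sketch, not kpf's actual implementation:

```python
class FailureCounter:
    """Trigger a restart only after N consecutive failed health checks."""

    def __init__(self, threshold: int = 2):  # matches networkWatchdogFailureThreshold
        self.threshold = threshold
        self.consecutive_failures = 0

    def record(self, healthy: bool) -> bool:
        """Record one check; return True when a restart should fire."""
        self.consecutive_failures = 0 if healthy else self.consecutive_failures + 1
        return self.consecutive_failures >= self.threshold
```

With the default threshold of 2, a single dropped check (e.g., a transient DNS blip) does not restart the forward; two in a row do.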
## Requirements
- kubectl configured with cluster access
## Configuration
kpf can be configured via `~/.config/kpf/kpf.json` (follows [XDG Base Directory Specification](https://specifications.freedesktop.org/basedir-spec/basedir-spec-latest.html)).
If you create this file, it is best to set only the values you want to override, so you still pick up improved defaults in future releases.
```json
{
"autoSelectFreePort": true,
"showDirectCommand": true,
"showDirectCommandIncludeContext": true,
"directCommandMultiLine": true,
"autoReconnect": true,
"reconnectAttempts": 30,
"reconnectDelaySeconds": 5,
"captureUsageDetails": false,
"usageDetailFolder": "${HOME}/.config/kpf/usage-details",
"restartThrottleSeconds": 5,
"networkWatchdogEnabled": true,
"networkWatchdogInterval": 5,
"networkWatchdogFailureThreshold": 2
}
```
Example: Disable auto-reconnect
```sh
mkdir -p ~/.config/kpf
echo '{"autoReconnect": false}' > ~/.config/kpf/kpf.json
```
### Configuration Options
| Option | Type | Default | Description |
| --------------------------------- | ------- | ----------------------------------- | ---------------------------------------------------------------------------- |
| `autoSelectFreePort` | boolean | `true` | When requested port is busy, automatically try next ports (9091, 9092, etc.) |
| `showDirectCommand` | boolean | `true` | Show the direct `kpf` command for future use |
| `showDirectCommandIncludeContext` | boolean | `true` | Include kubectl context in the command display |
| `directCommandMultiLine` | boolean | `true` | Format direct command across multiple lines for readability |
| `autoReconnect` | boolean | `true` | Automatically reconnect when connection drops |
| `reconnectAttempts` | integer | `30` | Number of reconnection attempts before giving up |
| `reconnectDelaySeconds` | integer | `5` | Delay in seconds between reconnection attempts |
| `captureUsageDetails` | boolean | `false` | Capture usage details locally for debugging (not sent anywhere) |
| `usageDetailFolder` | string | `${HOME}/.config/kpf/usage-details` | Where to store usage detail logs |
| `networkWatchdogEnabled` | boolean | `true` | Monitor K8s API connectivity to detect zombie connections |
| `networkWatchdogInterval` | integer | `5` | Seconds between connectivity checks |
| `networkWatchdogFailureThreshold` | integer | `2` | Consecutive failures before triggering restart |
**Notes:**
- All settings are optional - kpf will use defaults if the config file doesn't exist
- Environment variables like `${HOME}` are expanded automatically
- The config file location respects the `XDG_CONFIG_HOME` environment variable
- Invalid JSON or unknown keys will show warnings but won't prevent kpf from running
- CLI arguments override config file values when provided
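A minimal sketch of this merge-over-defaults behavior (only a subset of the defaults table is shown, and the loader logic here is an assumption, not kpf's actual code):

```python
import json
import os
from pathlib import Path
from typing import Optional

DEFAULTS = {  # subset of the defaults table above
    "autoReconnect": True,
    "reconnectAttempts": 30,
    "reconnectDelaySeconds": 5,
}


def load_config(path: Optional[str] = None) -> dict:
    """Merge user overrides from kpf.json over defaults, expanding env vars."""
    config_home = os.environ.get("XDG_CONFIG_HOME", os.path.expandvars("${HOME}/.config"))
    cfg_path = Path(path) if path else Path(config_home) / "kpf" / "kpf.json"
    merged = dict(DEFAULTS)
    if cfg_path.exists():
        for key, value in json.loads(cfg_path.read_text()).items():
            # Expand environment variables such as ${HOME} in string values.
            merged[key] = os.path.expandvars(value) if isinstance(value, str) else value
    return merged
```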
## Development
### Prerequisites
- [uv](https://github.com/astral-sh/uv)
- [just](https://github.com/casey/just)
### Setup Development Environment
```bash
# Clone the repository
git clone https://github.com/jessegoodier/kpf.git
cd kpf
```
```bash
# Install with development dependencies and create venv
just dev-setup
```
### Code Quality Tools
```bash
# Format and lint code
just format
```
```bash
# Run tests
just test
```
## Contributing
1. Fork the repository
2. Create a feature branch
3. Make your changes
4. Run tests and linting
5. Submit a pull request
## Shell Completion
Shell completions can be generated using the `--completions` flag.
### Homebrew
If you install via Homebrew, completions should be installed automatically. You may need to follow Homebrew's [shell completion instructions](https://docs.brew.sh/Shell-Completion) to ensure it's loaded. You may find the bash and zsh examples [here](https://github.com/jessegoodier/toolbox/tree/main/homebrew) useful.
### Manual Installation
#### Bash
```bash
# User-local installation (recommended)
kpf --completions bash > ~/.local/share/bash-completion/completions/kpf
# Or system-wide
kpf --completions bash | sudo tee /etc/bash_completion.d/kpf > /dev/null
```
#### Zsh
```zsh
# Add to a directory in your fpath
kpf --completions zsh > /usr/share/zsh/site-functions/_kpf
# Or for oh-my-zsh users
kpf --completions zsh > ~/.oh-my-zsh/completions/_kpf
```
Then reload your shell: `exec $SHELL`
## License
MIT License - see [LICENSE](LICENSE) file for details.
<p align="center">
<a href="https://www.buymeacoffee.com/jessegoodier">
<img src="https://img.buymeacoffee.com/button-api/?text=Buy me a coffee&emoji=&slug=jessegoodier&button_colour=FFDD00&font_colour=000000&font_family=Cookie&outline_colour=000000&coffee_colour=ffffff" />
</a>
</p>
| text/markdown | Jesse Goodier | null | null | null | MIT License Copyright (c) 2025 Jesse Goodier Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions: The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software. THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. | cli, devops, k8s, kubectl, kubernetes, port-forward | [
"Development Status :: 4 - Beta",
"Environment :: Console",
"Intended Audience :: System Administrators",
"License :: OSI Approved :: MIT License",
"Operating System :: OS Independent",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.14",
"Topic :: System :: Systems Administration",
"Topic :: Utilities"
] | [] | null | null | >=3.10 | [] | [] | [] | [
"prompt-toolkit>=3.0.51",
"readchar>=4.2.1",
"requests>=2.28",
"rich>=14",
"bump-my-version; extra == \"dev\"",
"coverage; extra == \"dev\"",
"hatch; extra == \"dev\"",
"isort; extra == \"dev\"",
"pytest; extra == \"dev\"",
"pytest-cov; extra == \"dev\"",
"pytest-timeout; extra == \"dev\"",
"ruff; extra == \"dev\""
] | [] | [] | [] | [
"Homepage, https://github.com/jessegoodier/kpf",
"Repository, https://github.com/jessegoodier/kpf",
"Issues, https://github.com/jessegoodier/kpf/issues"
] | uv/0.10.4 {"installer":{"name":"uv","version":"0.10.4","subcommand":["publish"]},"python":null,"implementation":{"name":null,"version":null},"distro":{"name":"Ubuntu","version":"24.04","id":"noble","libc":null},"system":{"name":null,"release":null},"cpu":null,"openssl_version":null,"setuptools_version":null,"rustc_version":null,"ci":true} | 2026-02-20T01:56:38.788663 | kpf-0.10.2.tar.gz | 41,974 | 61/6b/efc04adc3cec9ec4c2250cf585d21ef20b767cb8fa954296f773ca333bab/kpf-0.10.2.tar.gz | source | sdist | null | false | d74d22bca9ec98711cfc4085d98ca018 | 256497f2f8559ff7516bfc4d66d7532db10d1eedf7eb2ee0e682e27d59394fb4 | 616befc04adc3cec9ec4c2250cf585d21ef20b767cb8fa954296f773ca333bab | null | [
"LICENSE"
] | 259 |
2.4 | pycropwat | 1.2.1 | A Python Package for Computing Effective Precipitation Using Google Earth Engine Climate Data | # pyCropWat
[](https://github.com/montimaj/pyCropWat/releases)
[](https://pypi.org/project/pycropwat/)
[](https://pepy.tech/project/pycropwat)
[](https://doi.org/10.5281/zenodo.18201619)
[](https://github.com/montimaj/pyCropWat/stargazers)
[](https://opensource.org/licenses/MIT)
[](https://www.python.org/downloads/)
[](https://montimaj.github.io/pyCropWat)
[](https://earthengine.google.com/)
A Python Package for Computing Effective Precipitation Using Google Earth Engine Climate Data.
<p align="center">
<picture>
<source srcset="https://raw.githubusercontent.com/montimaj/pyCropWat/main/docs/assets/pyCropWat.gif" type="image/gif">
<img src="https://raw.githubusercontent.com/montimaj/pyCropWat/main/docs/assets/pyCropWat.png" alt="pyCropWat Logo">
</picture>
</p>
## Project Structure
```
pyCropWat/
├── pycropwat/ # Main package
│ ├── __init__.py # Package exports
│ ├── core.py # EffectivePrecipitation class
│ ├── methods.py # Effective precipitation methods (9 methods)
│ ├── analysis.py # Temporal aggregation, statistics, visualization
│ ├── utils.py # Utility functions (geometry loading, GEE init)
│ └── cli.py # Command-line interface
├── tests/ # Unit tests
│ ├── __init__.py
│ └── test_core.py
├── docs/ # MkDocs documentation
│ ├── index.md # Documentation home
│ ├── installation.md # Installation guide
│ ├── examples.md # Usage examples
│ ├── contributing.md # Contribution guidelines
│ ├── assets/ # Documentation assets
│ │ ├── pyCropWat.png # Logo image
│ │ ├── pyCropWat.gif # Animated logo
│ │ ├── pyCropWat_logo.png # Alternative logo
│ │ └── examples/ # Example output images for docs
│ │ ├── arizona/ # Arizona example figures
│ │ ├── comparisons/ # Dataset comparison figures
│ │ ├── figures/ # Rio de la Plata figures
│ │ ├── method_comparison/ # Method comparison figures
│ │ ├── new_mexico/ # New Mexico example figures
│ │ └── pcml/ # Western U.S. PCML example figures
│ ├── api/ # API reference
│ │ ├── analysis.md
│ │ ├── cli.md
│ │ ├── core.md
│ │ ├── methods.md
│ │ └── utils.md
│ └── user-guide/ # User guide
│ ├── api.md
│ ├── cli.md
│ └── quickstart.md
├── Examples/ # Example scripts and data
│ ├── README.md # Detailed workflow documentation
│ ├── south_america_example.py # Rio de la Plata workflow script
│ ├── arizona_example.py # Arizona workflow script
│ ├── new_mexico_example.py # New Mexico workflow script
│ ├── western_us_pcml_example.py # Western U.S. PCML workflow script
│ ├── ucrb_example.py # UCRB field-scale workflow script
│ ├── AZ.geojson # Arizona boundary GeoJSON
│ ├── NM.geojson # New Mexico boundary GeoJSON
├── .github/ # GitHub configuration
│ └── workflows/
│ ├── docs.yml # GitHub Pages deployment workflow
│ └── publish.yml # PyPI publishing workflow
├── CHANGELOG.md # Release notes
├── MANIFEST.in # PyPI package manifest
├── mkdocs.yml # MkDocs configuration
├── environment.yml # Conda environment file
├── pyproject.toml # Package configuration
├── requirements.txt # pip dependencies
├── LICENSE
└── README.md
```
**Note:** The `Examples/` folder contains complete workflow scripts with detailed documentation in `README.md`.
- **`south_america_example.py`**: A comprehensive Python script demonstrating the complete pyCropWat workflow including data processing, temporal aggregation, statistical analysis, visualization (including anomaly, climatology, and trend maps), and dataset comparison using real Rio de la Plata data.
- **`arizona_example.py`**: A U.S.-focused workflow demonstrating 8 Peff methods with GridMET/PRISM precipitation and SSURGO AWC for Arizona, with U.S. vs Global dataset comparisons (excludes PCML).
- **`new_mexico_example.py`**: A New Mexico workflow comparing 8 Peff methods using PRISM precipitation with SSURGO AWC and gridMET ETo (excludes PCML).
- **`AZ.geojson`**: Arizona boundary GeoJSON for local geometry support.
- **`NM.geojson`**: New Mexico boundary GeoJSON for local geometry support.
**Note:** Output rasters (~32 GB) are not included in the repository. Run the example scripts with a GEE project ID to generate them locally.
See the [Complete Workflow Examples](#complete-workflow-examples) section below for details.
**Changelog:** See [CHANGELOG.md](https://github.com/montimaj/pyCropWat/blob/main/CHANGELOG.md) for release notes and version history.
## Overview
<table>
<tr>
<td>
pyCropWat converts precipitation data from any GEE climate dataset into effective precipitation and effective precipitation fraction rasters. It supports:
- Any GEE ImageCollection with precipitation data from the [GEE Data Catalog](https://developers.google.com/earth-engine/datasets) or [Community Catalog](https://gee-community-catalog.org/)
- Shapefile, GeoJSON, or GEE FeatureCollection asset for region of interest
- **Multiple effective precipitation methods**: CROPWAT, FAO/AGLW, Fixed Percentage, Dependable Rainfall, FarmWest, USDA-SCS, TAGEM-SuET, PCML, Ensemble
- Parallel processing using Dask
- Monthly output rasters in GeoTIFF format
- **Temporal aggregation**: Seasonal, annual, growing season (with cross-year support for Southern Hemisphere), custom date ranges
- **Statistical analysis**: Climatology, anomalies, trend analysis
- **Enhanced exports**: NetCDF, Cloud-Optimized GeoTIFF (COG), zonal statistics CSV
- **Visualization**: Time series plots, maps, climatology charts, anomaly maps, trend maps with significance
</td>
<td width="200">
<img src="https://raw.githubusercontent.com/montimaj/pyCropWat/main/docs/assets/pyCropWat_logo.png" alt="pyCropWat Logo" width="200">
</td>
</tr>
</table>
### Effective Precipitation Methods
pyCropWat supports multiple methods for calculating effective precipitation:
| Method | Description |
|--------|-------------|
| `cropwat` | CROPWAT method from FAO |
| `fao_aglw` | FAO/AGLW Dependable Rainfall (80% exceedance) |
| `fixed_percentage` | Simple fixed percentage method (configurable, default 70%) |
| `dependable_rainfall` | FAO Dependable Rainfall at specified probability level |
| `farmwest` | FarmWest method: Peff = (P - 5) × 0.75 |
| `usda_scs` | USDA-SCS method with AWC and ETo (requires GEE assets) |
| `suet` | TAGEM-SuET method: P - ETo with 75mm threshold (requires ETo asset) |
| `pcml` | Physics-Constrained ML (Western U.S. only, Jan 2000 - Sep 2024); no geometry = full Western U.S., or provide geometry to subset |
| `ensemble` | Ensemble mean of all methods except TAGEM-SuET and PCML - default (requires AWC and ETo assets) |
### CROPWAT
The effective precipitation is calculated using the CROPWAT method (Smith, 1992; Muratoglu et al., 2023):
- If precipitation ≤ 250 mm: `Peff = P × (125 - 0.2 × P) / 125`
- If precipitation > 250 mm: `Peff = 0.1 × P + 125`
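As a quick sanity check, the piecewise formula translates directly to code (the two branches meet at 250 mm, where both give 150 mm):

```python
def cropwat_peff(p_mm: float) -> float:
    """CROPWAT effective precipitation (Smith, 1992) for monthly P in mm."""
    if p_mm <= 250:
        return p_mm * (125 - 0.2 * p_mm) / 125
    return 0.1 * p_mm + 125


print(cropwat_peff(100))  # ~84 mm
print(cropwat_peff(300))  # ~155 mm
```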
### FAO/AGLW Formula (Dependable Rainfall)
The FAO Land and Water Division (AGLW) Dependable Rainfall formula from FAO Irrigation and Drainage Paper No. 33, based on 80% probability exceedance:
- If precipitation ≤ 70 mm: `Peff = max(0.6 × P - 10, 0)`
- If precipitation > 70 mm: `Peff = 0.8 × P - 24`
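In code, the FAO/AGLW rule looks like this (note the two branches agree at 70 mm, where both yield 32 mm, and the low-rainfall branch is clamped at zero):

```python
def fao_aglw_peff(p_mm: float) -> float:
    """FAO/AGLW dependable rainfall (80% exceedance) for monthly P in mm."""
    if p_mm <= 70:
        return max(0.6 * p_mm - 10, 0)
    return 0.8 * p_mm - 24


print(fao_aglw_peff(50))   # ~20 mm
print(fao_aglw_peff(100))  # ~56 mm
print(fao_aglw_peff(10))   # 0 mm (clamped)
```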
### Fixed Percentage Method
A simple method assuming a constant fraction of precipitation is effective:
- `Peff = P × f` where `f` is the effectiveness fraction (default: 0.7 or 70%)
### Dependable Rainfall Method
The FAO Dependable Rainfall method (same as FAO/AGLW) estimates rainfall at a given probability level (default 80%):
- If precipitation ≤ 70 mm: `Peff = max(0.6 × P - 10, 0)`
- If precipitation > 70 mm: `Peff = 0.8 × P - 24`
A probability scaling factor is applied:
- 50% probability: ~1.3× base estimate (less conservative)
- 80% probability: 1.0× base estimate (default)
- 90% probability: ~0.9× base estimate (more conservative)
### FarmWest Method
A simple empirical formula used by the [FarmWest](https://farmwest.com/climate/calculator-information/et/effective-precipitation/) program:
- `Peff = max((P - 5) × 0.75, 0)`
Assumes the first 5 mm is lost to interception/evaporation, and 75% of the remaining precipitation is effective.
**Reference:** [FarmWest - Effective Precipitation](https://farmwest.com/climate/calculator-information/et/effective-precipitation/)
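The FarmWest formula is a one-liner in code:

```python
def farmwest_peff(p_mm: float) -> float:
    """FarmWest effective precipitation: 5 mm interception loss, 75% effective."""
    return max((p_mm - 5) * 0.75, 0)


print(farmwest_peff(80))  # 56.25 mm
print(farmwest_peff(3))   # 0 mm (below the 5 mm interception loss)
```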
### USDA-SCS Method (with AWC and ETo)
The USDA Soil Conservation Service method that accounts for soil water holding capacity and evaporative demand:
1. Calculate soil storage depth: `d = AWC × MAD × rooting_depth` (MAD = Maximum Allowable Depletion, default 0.5)
2. Calculate storage factor: `sf = 0.531747 + 0.295164×d - 0.057697×d² + 0.003804×d³`
3. Calculate effective precipitation: `Peff = sf × (P^0.82416 × 0.70917 - 0.11556) × 10^(ETo × 0.02426)`
4. Peff is clamped between 0 and min(P, ETo)
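These four steps translate directly into a sketch. The unit conventions below (inches for P, ETo, and the storage depth d, as in the NEH tables) are an assumption; check pycropwat's source for the exact conventions it uses:

```python
def usda_scs_peff(p, eto, awc, rooting_depth=1.0, mad_factor=0.5):
    """USDA-SCS effective precipitation, following steps 1-4 above.

    Assumes p, eto, and the storage depth d are in inches (NEH convention);
    verify against pycropwat's implementation before relying on this sketch.
    """
    d = awc * mad_factor * rooting_depth  # step 1: soil storage depth
    sf = 0.531747 + 0.295164 * d - 0.057697 * d**2 + 0.003804 * d**3  # step 2
    peff = sf * (p**0.82416 * 0.70917 - 0.11556) * 10 ** (eto * 0.02426)  # step 3
    return min(max(peff, 0.0), min(p, eto))  # step 4: clamp to [0, min(P, ETo)]
```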
**Required GEE Assets:**
| Region | AWC Asset | ETo Asset |
|--------|-----------|----------|
| U.S. | `projects/openet/soil/ssurgo_AWC_WTA_0to152cm_composite` | `projects/openet/assets/reference_et/conus/gridmet/monthly/v1` (band: `eto`) |
| Global | `projects/sat-io/open-datasets/FAO/HWSD_V2_SMU` (band: `AWC`) | `projects/climate-engine-pro/assets/ce-ag-era5-v2/daily` (band: `ReferenceET_PenmanMonteith_FAO56`, use `--eto-is-daily`) |
**CLI Example (U.S.):**
```bash
pycropwat process --asset ECMWF/ERA5_LAND/MONTHLY_AGGR --band total_precipitation_sum \
--gee-geometry projects/my-project/assets/study_area \
--start-year 2015 --end-year 2020 --scale-factor 1000 \
--method usda_scs \
--awc-asset projects/openet/soil/ssurgo_AWC_WTA_0to152cm_composite \
--eto-asset projects/openet/assets/reference_et/conus/gridmet/monthly/v1 \
--eto-band eto --rooting-depth 1.0 --mad-factor 0.5 --output ./output
```
**CLI Example (Global):**
```bash
pycropwat process --asset ECMWF/ERA5_LAND/MONTHLY_AGGR --band total_precipitation_sum \
--gee-geometry projects/my-project/assets/study_area \
--start-year 2015 --end-year 2020 --scale-factor 1000 \
--method usda_scs \
--awc-asset projects/sat-io/open-datasets/FAO/HWSD_V2_SMU --awc-band AWC \
--eto-asset projects/climate-engine-pro/assets/ce-ag-era5-v2/daily \
--eto-band ReferenceET_PenmanMonteith_FAO56 --eto-is-daily \
--rooting-depth 1.0 --mad-factor 0.5 --output ./output
```
**Reference:** [USDA SCS (1993). Chapter 2 Irrigation Water Requirements. Part 623 National Engineering Handbook.](https://www.wcc.nrcs.usda.gov/ftpref/wntsc/waterMgt/irrigation/NEH15/ch2.pdf)
### TAGEM-SuET Method (with ETo)
The TAGEM-SuET (Türkiye'de Sulanan Bitkilerin Bitki Su Tüketimleri - Turkish Irrigation Management and Plant Water Consumption System) method calculates effective precipitation based on the difference between precipitation and reference evapotranspiration:
- If P ≤ ETo: `Peff = 0`
- If P > ETo and (P - ETo) < 75: `Peff = P - ETo`
- Otherwise: `Peff = 75 + 0.0011×(P - ETo - 75)² + 0.44×(P - ETo - 75)`
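The three cases above map directly onto a piecewise function; this is an illustrative helper, not the package API:

```python
def suet_peff(p: float, eto: float) -> float:
    """TAGEM-SuET effective precipitation (mm) from the piecewise rules above."""
    excess = p - eto
    if excess <= 0:
        return 0.0          # P <= ETo: nothing effective
    if excess < 75:
        return excess       # moderate surplus: all of it effective
    return 75 + 0.0011 * (excess - 75) ** 2 + 0.44 * (excess - 75)
```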
> ⚠️ **Note:** Studies have shown that the TAGEM-SuET method tends to underperform compared to other methods, particularly in arid and semi-arid climates where ETo often exceeds precipitation. In our method comparison analyses, TAGEM-SuET consistently produced the lowest effective precipitation estimates. Users should consider this limitation when selecting a method for their application.
**Reference:** [Muratoglu, A., Bilgen, G. K., Angin, I., & Kodal, S. (2023). Performance analyses of effective rainfall estimation methods for accurate quantification of agricultural water footprint. Water Research, 238, 120011.](https://doi.org/10.1016/j.watres.2023.120011)
**CLI Example:**
```bash
pycropwat process --asset ECMWF/ERA5_LAND/MONTHLY_AGGR --band total_precipitation_sum \
--gee-geometry projects/my-project/assets/study_area \
--start-year 2015 --end-year 2020 --scale-factor 1000 \
--method suet \
--eto-asset projects/openet/assets/reference_et/conus/gridmet/monthly/v1 \
--eto-band eto --output ./output
```
### PCML (Physics-Constrained Machine Learning)
The PCML method uses pre-computed effective precipitation from a physics-constrained machine learning model trained specifically for the Western United States. Unlike other methods, PCML Peff is retrieved directly from a GEE asset.
**Coverage:**
- **Region**: Western U.S. (17 states: AZ, CA, CO, ID, KS, MT, NE, NV, NM, ND, OK, OR, SD, TX, UT, WA, WY)
- **Temporal**: January 2000 - September 2024 (monthly)
- **Resolution**: ~2 km (native scale retrieved dynamically from GEE asset)
- **GEE Asset**: `projects/ee-peff-westus-unmasked/assets/effective_precip_monthly_unmasked`
- **Band Format**: `bYYYY_M` (e.g., `b2015_9` for September 2015, `b2016_10` for October 2016)
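Given the `bYYYY_M` convention above (the month is not zero-padded) and the annual `bYYYY` fraction bands, band names can be derived as follows (illustrative helper names, not the package API):

```python
def pcml_band(year: int, month: int) -> str:
    """PCML monthly band name, e.g. b2015_9 for September 2015."""
    return f"b{year}_{month}"

def pcml_fraction_band(year: int) -> str:
    """Annual (water year) fraction band name, e.g. b2015."""
    return f"b{year}"
```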
> 📝 **Note:** PCML provides pre-computed Peff values from a trained ML model. When using `--method pcml`, the default PCML asset is automatically used and bands are dynamically selected based on the year/month being processed. The native scale (~2km) is retrieved from the asset using GEE's `nominalScale()` function. **Only annual (water year, Oct-Sep)** effective precipitation fractions are available for PCML, loaded directly from a separate GEE asset (`projects/ee-peff-westus-unmasked/assets/effective_precip_fraction_unmasked`, WY 2000-2024, band format: `bYYYY`).
> 💡 **PCML Geometry Options:**
> - **No geometry provided**: Downloads the entire PCML asset (full Western U.S. - 17 states)
> - **User provides geometry**: PCML data is clipped/subsetted to that geometry. **Note:** Only Western U.S. vectors that overlap with the 17-state extent can be used (e.g., AZ.geojson, pacific_northwest.geojson)
**Reference:** [Hasan, M. F., Smith, R. G., Majumdar, S., Huntington, J. L., Alves Meira Neto, A., & Minor, B. A. (2025). Satellite data and physics-constrained machine learning for estimating effective precipitation in the Western United States and application for monitoring groundwater irrigation. *Agricultural Water Management*, 319, 109821.](https://doi.org/10.1016/j.agwat.2025.109821)
**CLI Example (full Western U.S.):**
```bash
pycropwat process \
--method pcml \
--start-year 2000 --end-year 2024 \
--output ./WesternUS_PCML
```
**CLI Example (subset to specific region):**
```bash
pycropwat process \
--method pcml \
--geometry pacific_northwest.geojson \
--start-year 2000 --end-year 2024 \
--output ./PacificNW_PCML
```
### Ensemble - Default (Mean of Methods)
The ensemble method provides a robust estimate by calculating the mean of all methods except TAGEM-SuET and PCML. The ensemble includes:
1. **CROPWAT** - FAO standard method
2. **FAO/AGLW** - Dependable Rainfall (80% exceedance)
3. **Fixed Percentage** - 70% of precipitation
4. **Dependable Rainfall** - 75% probability level
5. **FarmWest** - Pacific Northwest method
6. **USDA-SCS** - Soil-based method
Formula: `Peff_ensemble = (Peff_cropwat + Peff_fao_aglw + Peff_fixed + Peff_dependable + Peff_farmwest + Peff_usda_scs) / 6`
> 💡 **Note:** The ensemble method requires AWC and ETo assets (same as USDA-SCS) since it internally calculates all component methods. This method is recommended when users want a robust, multi-method average that reduces bias from any single method.
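The ensemble formula is an element-wise mean over the six component grids; a minimal NumPy sketch (the grid values below are placeholders, not real outputs):

```python
import numpy as np

# Six placeholder per-method Peff grids (mm), in the order of the formula above;
# real grids come from the component methods, these values are made up.
peff_methods = np.stack([np.full((2, 2), v)
                         for v in (40.0, 35.0, 49.0, 38.0, 41.0, 43.0)])
peff_ensemble = peff_methods.mean(axis=0)  # element-wise mean of the 6 methods
# (40 + 35 + 49 + 38 + 41 + 43) / 6 = 41.0 mm at every pixel
```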
**CLI Example:**
```bash
pycropwat process --asset ECMWF/ERA5_LAND/MONTHLY_AGGR --band total_precipitation_sum \
--gee-geometry projects/my-project/assets/study_area \
--start-year 2015 --end-year 2020 --scale-factor 1000 \
--method ensemble \
--awc-asset projects/sat-io/open-datasets/FAO/HWSD_V2_SMU --awc-band AWC \
--eto-asset projects/openet/assets/reference_et/conus/gridmet/monthly/v1 \
--eto-band eto --output ./output
```
### Method Comparison
| Method | Use Case | Characteristics |
|--------|----------|-----------------|
| **CROPWAT** | General irrigation planning | Balanced, widely validated |
| **FAO/AGLW** | Yield response studies | FAO Dependable Rainfall (80% exceedance) |
| **Fixed Percentage** | Quick estimates, calibration | Simple, requires local calibration |
| **Dependable Rainfall** | Risk-averse planning | Same as FAO/AGLW, with probability scaling |
| **FarmWest** | Pacific Northwest irrigation | Simple, accounts for interception loss |
| **USDA-SCS** | Site-specific irrigation planning | Accounts for soil AWC and ETo |
| **TAGEM-SuET** | ET-based irrigation planning | Based on P - ETo difference |
| **PCML** | Western U.S. applications | ML-based, pre-computed (2000-2024) |
| **Ensemble** | Robust multi-method estimate | Mean of 6 methods (excludes TAGEM-SuET and PCML) |
## Installation
### Quick Install (PyPI)
```bash
pip install pycropwat
```
Or with optional interactive map support:
```bash
pip install pycropwat[interactive]
```
### Disk Space Requirements
| Component | Size | Notes |
|-----------|------|-------|
| Repository (tracked files) | ~50 MB | Core package, documentation, examples, and assets |
| **Generated by example scripts:** | | |
| Examples/RioDelaPlata | ~5 GB | ERA5-Land & TerraClimate outputs (2000-2025) |
| Examples/Arizona | ~12 GB | GridMET, PRISM, ERA5-Land & TerraClimate outputs (1985-2025) |
| Examples/NewMexico | ~8 GB | PRISM 8-method outputs (1986-2025) |
| Examples/WesternUS_PCML | ~3 GB | PCML effective precipitation outputs |
| Examples/UCRB | ~3 MB | Field-scale analysis outputs |
**Note:** Large generated data files are excluded from the repository via `.gitignore`.
Run the example scripts to generate them locally:
```bash
python Examples/south_america_example.py --gee-project your-project-id
python Examples/arizona_example.py --gee-project your-project-id
python Examples/new_mexico_example.py --gee-project your-project-id
python Examples/western_us_pcml_example.py --gee-project your-project-id
python Examples/ucrb_example.py --gee-project your-project-id
```
> ⚠️ **UCRB GeoPackage:** The `ucrb_field_effective_precip_intercomparison_geopackage.gpkg` file (~7 GB) is not included in the repository. Contact the authors if you need access to this dataset.
### Using Conda (Recommended for Development)
```bash
# Clone the repository
git clone https://github.com/montimaj/pyCropWat.git
cd pyCropWat
# Create conda environment from environment.yml
conda env create -f environment.yml
# Activate the environment
conda activate pycropwat
# Install the package (registers the 'pycropwat' CLI command)
pip install -e .
# Or with interactive map support (leafmap, localtileserver)
pip install -e ".[interactive]"
# Verify installation
pycropwat --help
```
### From Source (pip)
```bash
# Clone the repository
git clone https://github.com/montimaj/pyCropWat.git
cd pyCropWat
# Create and activate a virtual environment (optional but recommended)
python -m venv venv
source venv/bin/activate # On Windows: venv\Scripts\activate
# Install the package (registers the 'pycropwat' CLI command)
pip install -e .
# Or with interactive map support (leafmap, localtileserver)
pip install -e ".[interactive]"
# Verify installation
pycropwat --help
```
**Note:** After running `pip install -e .`, the `pycropwat` command will be available globally in your environment. Do not use `./pycropwat` - just use `pycropwat` directly.
**Optional Dependencies:**
- `pip install -e ".[interactive]"` - Adds leafmap and localtileserver for interactive HTML maps
- `pip install -e ".[dev]"` - Adds development tools (pytest, black, ruff)
- `pip install -e ".[docs]"` - Adds documentation tools (mkdocs)
## Requirements
- Python >= 3.9
- Google Earth Engine account and authentication
- Dependencies: earthengine-api, numpy, xarray, rioxarray, geopandas, shapely, dask
### Conda Environment
The `environment.yml` file provides a complete conda environment with all dependencies:
```bash
# Create environment
conda env create -f environment.yml
# Activate environment
conda activate pycropwat
# Update existing environment
conda env update -f environment.yml --prune
# Remove environment
conda env remove -n pycropwat
```
## Usage
### Python API
```python
from pycropwat import EffectivePrecipitation
# Initialize the processor with a local file
ep = EffectivePrecipitation(
asset_id='ECMWF/ERA5_LAND/MONTHLY_AGGR',
precip_band='total_precipitation_sum',
geometry_path='path/to/region.geojson',
start_year=2015,
end_year=2020,
precip_scale_factor=1000, # ERA5 precipitation is in meters, convert to mm
gee_project='your-gee-project' # Optional
)
# Or use a GEE FeatureCollection asset for the study area
ep = EffectivePrecipitation(
asset_id='ECMWF/ERA5_LAND/MONTHLY_AGGR',
precip_band='total_precipitation_sum',
gee_geometry_asset='projects/my-project/assets/study_boundary',
start_year=2015,
end_year=2020,
precip_scale_factor=1000
)
# Use alternative effective precipitation methods
ep = EffectivePrecipitation(
asset_id='ECMWF/ERA5_LAND/MONTHLY_AGGR',
precip_band='total_precipitation_sum',
gee_geometry_asset='projects/my-project/assets/study_boundary',
start_year=2015,
end_year=2020,
precip_scale_factor=1000,
method='fao_aglw' # Options: 'cropwat', 'fao_aglw', 'fixed_percentage', 'dependable_rainfall', 'farmwest', 'usda_scs', 'suet', 'ensemble'
)
# Process with parallel execution (using dask)
results = ep.process(
output_dir='./output',
n_workers=4,
months=[6, 7, 8] # Optional: process only specific months
)
# Or process sequentially (useful for debugging)
results = ep.process_sequential(output_dir='./output')
```
### Temporal Aggregation & Analysis
```python
from pycropwat import TemporalAggregator, StatisticalAnalyzer, Visualizer
# Temporal aggregation
agg = TemporalAggregator('./output')
# Annual total
annual = agg.annual_aggregate(2020, method='sum', output_path='./annual_2020.tif')
# Seasonal aggregate (JJA = June-July-August)
summer = agg.seasonal_aggregate(2020, 'JJA', method='sum')
# Growing season - Northern Hemisphere (April-October, same year)
growing_nh = agg.growing_season_aggregate(2020, start_month=4, end_month=10)
# Growing season - Southern Hemisphere (October-March, cross-year)
# Aggregates Oct 2020 - Mar 2021 when start_month > end_month
growing_sh = agg.growing_season_aggregate(2020, start_month=10, end_month=3)
# Multi-year climatology
climatology = agg.multi_year_climatology(2000, 2020, output_dir='./climatology')
# Statistical analysis
stats = StatisticalAnalyzer('./output')
# Calculate anomaly
anomaly = stats.calculate_anomaly(2020, 6, clim_start=1990, clim_end=2020,
anomaly_type='percent')
# Trend analysis (returns slope in mm/year and p-value)
slope, pvalue = stats.calculate_trend(start_year=2000, end_year=2020, month=6)
# Zonal statistics
zonal_df = stats.zonal_statistics('./zones.shp', 2000, 2020, output_path='./zonal_stats.csv')
# Visualization
viz = Visualizer('./output')
viz.plot_time_series(2000, 2020, output_path='./timeseries.png')
viz.plot_monthly_climatology(2000, 2020, output_path='./climatology.png')
viz.plot_raster(2020, 6, output_path='./map_2020_06.png')
# Interactive map (requires leafmap or folium: pip install leafmap)
viz.plot_interactive_map(2020, 6, output_path='./interactive_map.html')
# Dataset comparison
viz.plot_comparison(2020, 6, other_dir='./terraclimate_output',
labels=('ERA5', 'TerraClimate'), output_path='./comparison.png')
viz.plot_scatter_comparison(2000, 2020, other_dir='./terraclimate_output',
labels=('ERA5', 'TerraClimate'), output_path='./scatter.png')
viz.plot_annual_comparison(2000, 2020, other_dir='./terraclimate_output',
labels=('ERA5', 'TerraClimate'), output_path='./annual_comparison.png')
```
### Export Options
```python
from pycropwat import export_to_netcdf, export_to_cog
# Export to NetCDF (single file with time dimension)
export_to_netcdf('./output', './effective_precip.nc')
# Convert to Cloud-Optimized GeoTIFF
export_to_cog('./output/effective_precip_2020_06.tif', './cog_2020_06.tif')
```
### Command Line Interface
pyCropWat provides a subcommand-based CLI for all functionality:
```bash
pycropwat <command> [OPTIONS]
```
**Available Commands:**
| Command | Description |
|---------|-------------|
| `process` | Calculate effective precipitation from GEE climate data |
| `aggregate` | Temporal aggregation (annual, seasonal, growing season) |
| `analyze` | Statistical analysis (anomaly, trend, zonal statistics) |
| `export` | Export to NetCDF or Cloud-Optimized GeoTIFF |
| `plot` | Create visualizations (time series, climatology, maps) |
#### Process Command Examples
```bash
# Process ERA5-Land data (actual working example)
pycropwat process --asset ECMWF/ERA5_LAND/MONTHLY_AGGR \
--band total_precipitation_sum \
--gee-geometry projects/ssebop-471916/assets/Riodelaplata \
--start-year 2000 --end-year 2025 \
--scale-factor 1000 --scale 4000 \
--workers 32 --output ./Examples/RioDelaPlata/RDP_ERA5Land
# Use alternative effective precipitation method
pycropwat process --asset ECMWF/ERA5_LAND/MONTHLY_AGGR \
--band total_precipitation_sum \
--gee-geometry projects/my-project/assets/study_area \
--start-year 2020 --end-year 2023 \
--scale-factor 1000 \
--method fao_aglw --output ./outputs
# List available methods
pycropwat --list-methods
# Process TerraClimate data (actual working example)
pycropwat process --asset IDAHO_EPSCOR/TERRACLIMATE \
--band pr \
--gee-geometry projects/ssebop-471916/assets/Riodelaplata \
--start-year 2000 --end-year 2025 \
--workers 32 --output ./Examples/RioDelaPlata/RDP_TerraClimate
# Process with local shapefile
pycropwat process --asset ECMWF/ERA5_LAND/MONTHLY_AGGR \
--band total_precipitation_sum \
--geometry roi.geojson \
--start-year 2015 --end-year 2020 \
--scale-factor 1000 --output ./output
```
#### Aggregate Command Examples
```bash
# Annual total
pycropwat aggregate --input ./output --type annual --year 2020 --output ./annual_2020.tif
# Seasonal (summer)
pycropwat aggregate --input ./output --type seasonal --year 2020 --season JJA --output ./summer_2020.tif
# Growing season (April-October)
pycropwat aggregate --input ./output --type growing-season --year 2020 \
--start-month 4 --end-month 10 --output ./growing_2020.tif
# Multi-year climatology
pycropwat aggregate --input ./output --type climatology \
--start-year 2000 --end-year 2020 --output ./climatology/
```
#### Analyze Command Examples
```bash
# Calculate anomaly
pycropwat analyze anomaly --input ./output --year 2020 --month 6 \
--clim-start 1990 --clim-end 2020 --output ./anomaly_2020_06.tif
# Calculate trend
pycropwat analyze trend --input ./output --start-year 2000 --end-year 2020 \
--trend-method sen --output ./trend/
# Zonal statistics
pycropwat analyze zonal --input ./output --zones ./regions.shp \
--start-year 2000 --end-year 2020 --output ./zonal_stats.csv
```
#### Export Command Examples
```bash
# Export to NetCDF
pycropwat export netcdf --input ./output --output ./data.nc
# Convert to Cloud-Optimized GeoTIFF
pycropwat export cog --input ./effective_precip_2020_06.tif --output ./cog_2020_06.tif
```
#### Plot Command Examples
```bash
# Time series plot
pycropwat plot timeseries --input ./output --start-year 2000 --end-year 2020 --output ./timeseries.png
# Monthly climatology bar chart
pycropwat plot climatology --input ./output --start-year 2000 --end-year 2020 --output ./climatology.png
# Single month map
pycropwat plot map --input ./output --year 2020 --month 6 --output ./map_2020_06.png
# Interactive map (requires leafmap: pip install leafmap)
pycropwat plot interactive --input ./output --year 2020 --month 6 --output ./map.html
# Compare two datasets (e.g., ERA5 vs TerraClimate)
pycropwat plot compare --input ./era5_output --other-input ./terraclimate_output \
--year 2020 --month 6 --label1 ERA5 --label2 TerraClimate \
--output ./comparison.png
# Scatter plot for validation
pycropwat plot scatter --input ./era5_output --other-input ./terraclimate_output \
--start-year 2000 --end-year 2020 --output ./scatter.png
# Annual comparison bar chart
pycropwat plot annual-compare --input ./era5_output --other-input ./terraclimate_output \
--start-year 2000 --end-year 2020 --output ./annual.png
```
### CLI Arguments
#### Global Options
| Argument | Description |
|----------|-------------|
| `--help` | Show help message |
| `--version` | Show version number |
| `--list-methods` | List available effective precipitation methods |
#### Process Command Arguments
| Argument | Short | Required | Default | Description |
|----------|-------|----------|---------|-------------|
| `--asset` | `-a` | Yes | - | GEE ImageCollection asset ID |
| `--band` | `-b` | Yes | - | Precipitation band name |
| `--geometry` | `-g` | No* | - | Path to shapefile or GeoJSON |
| `--gee-geometry` | `-G` | No* | - | GEE FeatureCollection asset ID |
| `--start-year` | `-s` | Yes | - | Start year (inclusive) |
| `--end-year` | `-e` | Yes | - | End year (inclusive) |
| `--output` | `-o` | Yes | - | Output directory |
| `--scale-factor` | `-f` | No | 1.0 | Conversion factor to mm |
| `--scale` | `-r` | No | Native | Output resolution in meters |
| `--workers` | `-w` | No | 4 | Number of parallel workers |
| `--months` | `-m` | No | All | Specific months to process |
| `--project` | `-p` | No | None | GEE project ID |
| `--method` | - | No | ensemble | Peff method: cropwat, fao_aglw, fixed_percentage, dependable_rainfall, farmwest, usda_scs, suet, ensemble |
| `--percentage` | - | No | 0.7 | Percentage for fixed_percentage method |
| `--probability` | - | No | 0.75 | Probability for dependable_rainfall method |
| `--sequential` | - | No | False | Process sequentially |
| `--verbose` | `-v` | No | False | Verbose output |
\* Either `--geometry` or `--gee-geometry` must be provided.
For full CLI documentation, run `pycropwat <command> --help` or see the [CLI Reference](https://montimaj.github.io/pyCropWat/user-guide/cli/).
## Output Files
The package generates two GeoTIFF files per month:
1. `effective_precip_YYYY_MM.tif` - Effective precipitation (mm)
2. `effective_precip_fraction_YYYY_MM.tif` - Effective precipitation fraction (0-1)
> **Note:** For the PCML method, fraction files are annual (water year): `effective_precip_fraction_YYYY.tif` (one per year, WY 2000-2024).
### Output Resolution
- **Default (no `--scale`):** Uses the native resolution of the input dataset
- ERA5-Land: ~11 km (0.1°)
- TerraClimate: ~4 km (1/24°)
- CHIRPS: ~5.5 km (0.05°)
- **With `--scale`:** Reprojects to the specified resolution in meters (e.g., `--scale 1000` for 1 km)
### Large Region Handling
For large study areas or high-resolution outputs that exceed GEE's pixel limits (262,144 pixels per request), pyCropWat automatically:
1. Estimates pixel count for the region
2. Splits large regions into smaller tiles (max 256×256 pixels per tile)
3. Downloads each tile separately from GEE
4. Mosaics the tiles back together in memory (no temp files)
5. Resizes to match the target resolution
This applies to precipitation, AWC, and ETo data downloads. No configuration required - it's handled automatically.
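The pixel-limit check in step 1 can be illustrated as follows. The constants come from the text above; the helper itself is hypothetical and not part of the package API:

```python
import math

GEE_PIXEL_LIMIT = 262_144  # max pixels per GEE download request
TILE_SIZE = 256            # max tile edge used when splitting (256x256)

def plan_tiles(width_m: float, height_m: float, scale_m: float):
    """Return (needs_tiling, tiles_x, tiles_y) for a region at a given scale."""
    px = math.ceil(width_m / scale_m)
    py = math.ceil(height_m / scale_m)
    if px * py <= GEE_PIXEL_LIMIT:
        return False, 1, 1
    return True, math.ceil(px / TILE_SIZE), math.ceil(py / TILE_SIZE)

# A 1000 km x 800 km region at 1 km resolution is 800,000 px -> needs 4x4 tiles
plan = plan_tiles(1_000_000, 800_000, 1000)
```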
### Important: Units
The CROPWAT formula is calibrated for precipitation in **millimeters (mm)**. The output effective precipitation is always in mm, provided you use the correct `--scale-factor` to convert input precipitation to mm first.
The formula constants (125, 250, 0.2, 0.1) are specifically designed for mm units:
- If P ≤ 250mm: `Peff = P × (125 - 0.2 × P) / 125`
- If P > 250mm: `Peff = 0.1 × P + 125`
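A minimal NumPy sketch of the piecewise formula, together with the ERA5 scale-factor conversion it depends on (`cropwat_peff` is an illustrative helper, not the package API):

```python
import numpy as np

def cropwat_peff(p_mm):
    """CROPWAT effective precipitation (mm); input must already be in mm."""
    p = np.asarray(p_mm, dtype=float)
    return np.where(p <= 250, p * (125 - 0.2 * p) / 125, 0.1 * p + 125)

# ERA5 precipitation is in meters: apply the scale factor first
p_era5_m = np.array([0.05, 0.30])      # 50 mm and 300 mm
peff = cropwat_peff(p_era5_m * 1000)   # equivalent to --scale-factor 1000
# -> 46.0 mm (low branch) and 155.0 mm (high branch)
```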
**Warning:** If you pass precipitation in the wrong units (e.g., ERA5 values in meters without `--scale-factor 1000`), the results will be incorrect because the 250 mm threshold will no longer correspond to the actual precipitation depth.
### Temporal Aggregation
pyCropWat automatically **sums** all images within each month to compute monthly total precipitation, regardless of the input data's temporal resolution:
- **Monthly data (ERA5, TerraClimate):** Uses the single monthly image directly
- **Daily data (CHIRPS/DAILY):** Sums all ~30 daily images → monthly total
- **Sub-daily data (GPM IMERG):** Sums all timesteps → monthly total
This ensures the CROPWAT formula always receives the correct monthly precipitation totals.
## Common GEE Climate Assets
### Global Precipitation Datasets
| Asset ID | Precipitation Band | Scale Factor | Spatial Resolution | Temporal Resolution |
|----------|-------------------|--------------|-------------------|---------------------|
| `ECMWF/ERA5_LAND/MONTHLY_AGGR` | `total_precipitation_sum` | 1000 | ~11 km (0.1°) | Monthly |
| `ECMWF/ERA5/MONTHLY` | `total_precipitation` | 1000 | ~27 km (0.25°) | Monthly |
| `IDAHO_EPSCOR/TERRACLIMATE` | `pr` | 1 | ~4 km (1/24°) | Monthly |
| `UCSB-CHG/CHIRPS/DAILY` | `precipitation` | 1 | ~5.5 km (0.05°) | Daily |
| `UCSB-CHG/CHIRPS/PENTAD` | `precipitation` | 1 | ~5.5 km (0.05°) | 5-day (Pentad) |
| `NASA/GPM_L3/IMERG_V06` | `precipitation` | 1 | ~11 km (0.1°) | Half-hourly |
| `projects/climate-engine-pro/assets/ce-ag-era5-v2/daily` | `Precipitation_Flux` | 1 | ~9 km (0.1°) | Daily |
### U.S.-Specific Precipitation Datasets
| Asset ID | Precipitation Band | Scale Factor | Spatial Resolution | Description |
|----------|-------------------|--------------|-------------------|-------------|
| `IDAHO_EPSCOR/GRIDMET` | `pr` | 1 | ~4 km | University of Idaho GridMET daily meteorological data |
| `projects/sat-io/open-datasets/OREGONSTATE/PRISM_800_MONTHLY` | `ppt` | 1 | ~800 m | Oregon State PRISM high-resolution monthly precipitation |
### USDA-SCS Method Required Datasets
For the USDA-SCS method, you need AWC (Available Water Capacity) and ETo (Reference ET) data:
| Region | Dataset Type | Asset ID | Band | Notes |
|--------|-------------|----------|------|-------|
| **U.S.** | AWC | `projects/openet/soil/ssurgo_AWC_WTA_0to152cm_composite` | (single band) | SSURGO soil data |
| **U.S.** | ETo | `projects/openet/assets/reference_et/conus/gridmet/monthly/v1` | `eto` | GridMET monthly ETo |
| **Global** | AWC | `projects/sat-io/open-datasets/FAO/HWSD_V2_SMU` | `AWC` | FAO HWSD v2 |
| **Global** | ETo | `projects/climate-engine-pro/assets/ce-ag-era5-v2/daily` | `ReferenceET_PenmanMonteith_FAO56` | ERA5-based (use `--eto-is-daily`) |
## Complete Workflow Examples
The `Examples/` directory contains comprehensive workflow scripts demonstrating pyCropWat capabilities:
### 1. Rio de la Plata Basin Example (Global)
📖 **Script:** `Examples/south_america_example.py`
Demonstrates the complete pyCropWat workflow comparing ERA5-Land and TerraClimate data for South America.
**For detailed step-by-step documentation, see [Examples/README.md](https://github.com/montimaj/pyCropWat/blob/main/Examples/README.md)**
#### What the Example Does
The script performs a comprehensive 6-step workflow:
1. **Process Effective Precipitation** - Downloads and calculates effective precipitation from ERA5-Land and TerraClimate via GEE
2. **Temporal Aggregation** - Creates annual totals, growing season aggregations (Apr-Sep), and monthly climatology
3. **Statistical Analysis** - Computes percent anomalies, trends (Sen's slope), and zonal statistics
4. **Visualization** - Generates time series plots, climatology charts, static maps, and interactive HTML maps
5. **Dataset Comparison** - Creates side-by-side comparison plots, scatter plots, annual charts, and zonal comparisons
6. **NetCDF Export** - Exports data to CF-compliant NetCDF format
#### Running the Example
```bash
# Navigate to the Examples directory
cd Examples/
# Run analysis only (using existing pre-processed data)
python south_america_example.py --analysis-only
# Run full workflow with GEE processing (requires authentication)
python south_america_example.py --gee-project your-project-id --workers 8
# Force reprocess all data from GEE
python south_america_example.py --force-reprocess --gee-project your-project-id --workers 8
```
#### Configuration
| Parameter | Value |
|-----------|-------|
| Study Area | Rio de la Plata Basin (GEE Asset: `projects/ssebop-471916/assets/Riodelaplata`) |
| Time Period | 2000-2025 |
| Climatology Period | 2000-2020 |
| Datasets | ERA5-Land, TerraClimate |
| Sample Zones | Eastern RDP (Uruguay, SE Brazil), Western RDP (N Argentina, Paraguay) |
---
### 2. Arizona USDA-SCS Example (U.S.)
📖 **Script:** `Examples/arizona_example.py`
Demonstrates the **USDA-SCS method** with U.S.-specific AWC and ETo datasets for Arizona, comparing GridMET and PRISM precipitation data.
#### USDA-SCS Method Configuration
The Arizona example uses these U.S.-based GEE datasets:
| Dataset | GEE Asset ID | Band |
|---------|-------------|------|
| **Precipitation (GridMET)** | `IDAHO_EPSCOR/GRIDMET` | `pr` |
| **Precipitation (PRISM)** | `projects/sat-io/open-datasets/OREGONSTATE/PRISM_800_MONTHLY` | `ppt` |
| **AWC (SSURGO)** | `projects/openet/soil/ssurgo_AWC_WTA_0to152cm_composite` | (single band) |
| **ETo (GridMET)** | `projects/openet/assets/reference_et/conus/gridmet/monthly/v1` | `eto` |
#### What the Example Does
1. **Process Effective Precipitation** - Uses USDA-SCS method with SSURGO AWC and GridMET ETo
2. **Compare Precipitation Sources** - GridMET (~4km) vs PRISM (~800m)
3. **Arizona-Specific Aggregation** - Monsoon season (Jul-Sep), winter season (Jan-Feb)
4. **Zonal Statistics** - Central AZ, Southern AZ, Northern AZ regions
5. **Dataset Comparison** - GridMET vs PRISM scatter plots, zonal comparisons
#### Running the Example
```bash
cd Examples/
# Run analysis only (if data already processed)
python arizona_example.py --analysis-only
# Run full workflow with GEE processing
python arizona_example.py --gee-project your-project-id --workers 8
# Force reprocess
python arizona_example.py --force-reprocess --gee-project your-project-id
```
#### CLI Equivalent
```bash
# Process with GridMET precipitation using USDA-SCS method
pycropwat process --asset IDAHO_EPSCOR/GRIDMET --band pr \
  --gee-geometry users/mont
```
# Feature 3DGS (Packaged Python Version)
This repo is the **refactored Python training and inference code for [Feature 3DGS](https://github.com/ShijieZhou-UCLA/feature-3dgs)**.
Built on top of [`gaussian-splatting`](https://github.com/yindaheng98/gaussian-splatting), we **reorganised the original code as a standard Python package** with a modular Extractor-Decoder architecture, making it easy to swap foundation models without changing the core pipeline.
Each Gaussian point carries a learnable **encoded semantics** embedding alongside standard 3DGS attributes. A frozen **Extractor** produces ground-truth feature maps from training images, while a lightweight learnable **Decoder** maps the rasterised per-point embeddings back to the extractor's feature space. The decoder's per-point transform can also be applied directly to the stored embeddings, yielding extractor-aligned semantic features without rendering. The framework is backbone-agnostic: new foundation models can be plugged in by implementing an Extractor-Decoder pair and registering it.
## Features
* [x] Organised as a standard Python package with `pip install` support
* [x] Modular Extractor-Decoder architecture for plugging in arbitrary foundation models
* [x] Built-in DINOv3 support (ViT and ConvNeXt backbones)
* [x] Auto-registration pattern — add new models with zero changes to core code
* [x] PCA-based feature visualisation for both ground-truth and rendered feature maps
* [x] All training modes from upstream: base, densify, camera, camera-densify
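The auto-registration pattern in the feature list can be sketched generically. This is a hypothetical registry for illustration only; `REGISTRY`, `register`, and `MyPair` are made-up names, not the package's actual API:

```python
# name -> Extractor-Decoder class; populated at import time by the decorator
REGISTRY: dict[str, type] = {}

def register(name: str):
    """Class decorator: adding a new backbone means defining and decorating it."""
    def wrap(cls):
        REGISTRY[name] = cls
        return cls
    return wrap

@register("my_backbone")
class MyPair:
    def extract(self, image):
        """Frozen extractor: training image -> ground-truth feature map (stub)."""
        return [[0.0] * 4]

    def decode(self, embedding):
        """Learnable decoder: per-point embedding -> extractor feature space (stub)."""
        return embedding

# The core pipeline looks the model up by name (cf. the --name CLI argument)
pair = REGISTRY["my_backbone"]()
```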
## Install
### Prerequisites
* [PyTorch](https://pytorch.org/) (>= v2.4 recommended)
* [CUDA Toolkit](https://developer.nvidia.com/cuda-12-4-0-download-archive) (12.4 recommended; match your installed PyTorch build)
* [gsplat](https://github.com/nerfstudio-project/gsplat)
### Development Install
```shell
pip install --upgrade git+https://github.com/facebookresearch/dinov3@main
pip install --upgrade git+https://github.com/yindaheng98/gaussian-splatting.git@master --no-build-isolation
pip install --target . --upgrade . --no-deps
```
### Download Checkpoints
Request access and download [DINOv3](https://github.com/facebookresearch/dinov3) weights to `checkpoints/`:
```
checkpoints/
├── dinov3_convnext_base_pretrain_lvd1689m-801f2ba9.pth
├── dinov3_convnext_large_pretrain_lvd1689m-61fa432d.pth
├── dinov3_convnext_small_pretrain_lvd1689m-296db49d.pth
├── dinov3_convnext_tiny_pretrain_lvd1689m-21b726bb.pth
├── dinov3_vit7b16_pretrain_lvd1689m-a955f4ea.pth
├── dinov3_vitb16_pretrain_lvd1689m-73cec8be.pth
├── dinov3_vith16plus_pretrain_lvd1689m-7c1da9a5.pth
├── dinov3_vitl16_pretrain_lvd1689m-8aa4cbdd.pth
├── dinov3_vits16_pretrain_lvd1689m-08c60483.pth
├── dinov3_vits16plus_pretrain_lvd1689m-4057cbaa.pth
└── ...
```
## Command-Line Usage
### Visualise Extractor Output
Verify that the extractor produces meaningful features before training:
```shell
python -m feature_3dgs.show \
--name dinov3_vitl16 \
-s data/truck -d output/truck-dinov3_vitl16 \
-o checkpoint_dir="'checkpoints'"
```
### Train
```shell
python -m feature_3dgs.train \
--name dinov3_vitl16 --embed_dim 32 \
-s data/truck -d output/truck-semantic -i 30000 \
--mode densify
```
### Render
```shell
python -m feature_3dgs.render \
--name dinov3_vitl16 --embed_dim 32 \
-s data/truck -d output/truck-semantic -i 30000
```
Rendered feature maps are PCA-projected to RGB and saved alongside ground-truth feature visualisations.
### Interactive Viewer
```shell
python -m feature_3dgs.viewer \
--name dinov3_vitl16 --embed_dim 32 \
-s data/truck -d output/truck-semantic -i 30000 \
--port 8080
```
Opens an interactive viewer (via [nerfview](https://github.com/hangg7/nerfview)) that renders PCA-colourised semantic feature maps in real time from free-viewpoint camera controls.
## API Usage
### Dataset & Decoder
```python
from feature_3dgs.prepare import prepare_dataset_and_decoder
dataset, decoder = prepare_dataset_and_decoder(
name="dinov3_vitl16", # registered extractor-decoder name
source="data/truck",
embed_dim=32,
device="cuda",
)
# dataset is a FeatureCameraDataset; each camera carries a 'feature_map' in custom_data
# decoder is the learnable AbstractFeatureDecoder
```
### Gaussian Model
```python
from feature_3dgs.prepare import prepare_gaussians
gaussians = prepare_gaussians(
decoder=decoder, sh_degree=3,
source="data/truck", device="cuda",
)
```
`SemanticGaussianModel` extends `GaussianModel` with `_encoded_semantics` (per-point learnable embeddings in a compact latent space) and a `_decoder`. During rendering, the rasteriser splats the encoded semantics into a 2D feature map, and the decoder transforms it to match the extractor's output space. The output dict contains both `feature_map` (decoded, extractor-aligned) and `feature_map_encoded` (raw rasterised).
### Training
```python
from feature_3dgs.prepare import prepare_trainer
trainer = prepare_trainer(gaussians, dataset, mode="densify")
for camera in dataset:
loss, out = trainer.step(camera)
```
### Inference
```python
import torch
with torch.no_grad():
for camera in dataset:
out = gaussians(camera)
rgb = out["render"] # (3, H, W)
feat = out["feature_map"] # (D, H', W') decoded, extractor-aligned
feat_enc = out["feature_map_encoded"] # (embed_dim, H, W) raw rasterised
# Per-Gaussian semantic features (no rendering needed)
semantics = gaussians.get_semantics # (N, D) via decoder.transform_features
# Custom linear projection at full resolution (e.g. PCA visualisation)
weight, bias = ... # (C, D) and (C,)
out = gaussians.forward_linear(camera, weight, bias)
projected = out["feature_map"] # (C, H, W)
```
### Save & Load
```python
gaussians.save_ply("output/point_cloud.ply")
# also saves point_cloud.ply.semantic.pt and point_cloud.ply.decoder.pt
gaussians.load_ply("output/point_cloud.ply")
```
## Design: Extractor & Decoder
The core abstraction decouples **what features to distill** (Extractor) from **how to map rasterised embeddings back** (Decoder).
### Extractor (`AbstractFeatureExtractor`)
The extractor is a **frozen** foundation model that converts training images into dense feature maps. It runs **only on the dataset side** — each training view is processed once, cached, and served as the ground-truth supervision signal.
```
Image (C, H, W) ──► Extractor (frozen) ──► Feature Map (D, H', W')
```
The extractor defines the target feature space (dimension `D` and spatial resolution `H'×W'`). It is never updated during training.
### Decoder (`AbstractFeatureDecoder`)
The decoder is a **learnable** module with three core operations:
| Method | Signature | Purpose |
|---|---|---|
| `init(dataset)` | — | Build the mapping from data (e.g. PCA initialisation) |
| `transform_features(features)` | `(N, C_in) → (N, C_out)` | Per-point mapping, usable on per-Gaussian encoded semantics directly |
| `transform_feature_map(feature_map)` | `(C_in, H, W) → (C_out, H', W')` | Full rendered feature map → extractor output format (channel + spatial) |
An additional `transform_feature_map_linear(feature_map, weight, bias)` appends a custom linear projection after `transform_features` at full spatial resolution — useful for PCA visualisation or arbitrary downstream projections.
```
Encoded semantics ──► Rasteriser ──► Raw Feature Map (embed_dim, H, W)
│
┌────────────────┼────────────────┐
▼ ▼ ▼
transform_feature_map forward_linear (stored as
│ (custom linear) feature_map_encoded)
▼ ▼
Decoded Feature Map Projected Map
(D, H', W') (C, H, W)
```
The default `transform_feature_map` applies `transform_features` per pixel (no spatial change). Subclasses may override it with **reparameterized** implementations for memory efficiency — e.g. the DINOv3 decoder reparameterizes a linear mapping followed by patch-level average pooling into a single `F.conv2d` call, avoiding a large intermediate tensor. Similarly, `transform_feature_map_linear` reparameterizes two sequential linear layers into one combined projection.
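The linear-layer fusion mentioned above rests on plain algebra: `W2 @ (W1 @ x + b1) + b2 == (W2 @ W1) @ x + (W2 @ b1 + b2)`. A minimal pure-Python check of this identity (illustrative only, not the package's code):

```python
# Verify that two stacked linear maps fuse into one combined projection.
def matvec(M, v):
    return [sum(m * x for m, x in zip(row, v)) for row in M]

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

W1, b1 = [[1, 2], [3, 4]], [1, 0]
W2, b2 = [[0, 1], [1, 1]], [2, 3]
x = [1, 1]

# sequential application of the two layers
h = [hi + bi for hi, bi in zip(matvec(W1, x), b1)]
sequential = [yi + bi for yi, bi in zip(matvec(W2, h), b2)]

# fused single projection
Wc = matmul(W2, W1)
bc = [yi + bi for yi, bi in zip(matvec(W2, b1), b2)]
fused = [yi + bi for yi, bi in zip(matvec(Wc, x), bc)]

assert sequential == fused == [9, 14]
```

The same identity is what lets the DINOv3 decoder fold its two sequential linear maps into a single projection before rasterised-map decoding.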
The training loss is `L1(Decoded Feature Map, Extractor Feature Map)`. The decoder's role is to bridge the gap between the compact per-point embedding (`embed_dim`, typically 32) and the extractor's high-dimensional output (`D`, e.g. 1024 for ViT-L), while also handling any spatial resolution change.
### Why this split?
1. **Memory efficiency**: Only `embed_dim` channels are stored per Gaussian and rasterised, not the full `D` channels. The decoder projects the rasterised map up to `D` channels afterwards.
2. **Spatial alignment**: Foundation models often output at patch resolution (e.g. 1/16 for ViT). The decoder can downsample the rasterised full-resolution map to match, avoiding expensive full-resolution feature supervision.
3. **Direct feature access**: `transform_features` can be applied directly to per-Gaussian encoded semantics (via `get_semantics`), producing extractor-aligned features without rendering.
4. **Modularity**: Swapping the foundation model only requires a new Extractor-Decoder pair. The Gaussian model, trainer, and rendering pipeline remain unchanged.
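To make point 1 concrete, here is back-of-the-envelope arithmetic. The 1,000,000-point scene size is an assumed example; `embed_dim=32` and `D=1024` (ViT-L) come from the text above:

```python
# Assumed scene size for illustration: 1,000,000 Gaussians, float32 storage.
n_points, bytes_per_float = 1_000_000, 4
embed_dim, D = 32, 1024          # compact embedding vs. ViT-L feature dim

stored_bytes = n_points * embed_dim * bytes_per_float  # what is stored/rasterised
full_bytes = n_points * D * bytes_per_float            # naive full-D alternative

assert stored_bytes == 128_000_000           # 128 MB
assert full_bytes // stored_bytes == 32      # 32x less memory per point
```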
## Extending: Adding a New Foundation Model
The project uses an **auto-registration** pattern. To add support for a new model (e.g. a hypothetical `MyModel`), follow the DINOv3 implementation as a reference:
### Step 1: Implement the Extractor
Create `feature_3dgs/mymodel/extractor.py`:
```python
import torch
from feature_3dgs.extractor import AbstractFeatureExtractor
class MyModelExtractor(AbstractFeatureExtractor):
def __init__(self, model, ...):
self.model = model
self.model.eval()
@torch.no_grad()
def __call__(self, image: torch.Tensor) -> torch.Tensor:
# image: (C, H, W) in [0, 1]
# Return: (D, H', W') feature map
...
def to(self, device) -> 'MyModelExtractor':
self.model.to(device)
return self
```
### Step 2: Implement the Decoder
Create `feature_3dgs/mymodel/decoder.py`. At minimum, implement `transform_features` (per-point mapping) and optionally override `transform_feature_map` for efficiency:
```python
import torch
import torch.nn as nn
from feature_3dgs.decoder import NoopFeatureDecoder
class MyModelDecoder(NoopFeatureDecoder):
def __init__(self, in_channels: int, out_channels: int, ...):
super().__init__(embed_dim=in_channels)
self.linear = nn.Linear(in_channels, out_channels)
def transform_features(self, features: torch.Tensor) -> torch.Tensor:
# features: (N, in_channels) -> (N, out_channels)
return self.linear(features)
def transform_feature_map(self, feature_map: torch.Tensor) -> torch.Tensor:
# Optional override for fused / memory-efficient implementation.
# Default: applies transform_features per pixel (no spatial change).
# Override to add spatial downsampling if needed.
...
def to(self, device):
self.linear = self.linear.to(device)
return self
def load(self, path: str):
self.linear.load_state_dict(torch.load(path, weights_only=True))
def save(self, path: str):
torch.save(self.linear.state_dict(), path)
def parameters(self):
return self.linear.parameters()
```
The key design constraint: **`transform_feature_map`'s output spatial size and channel count must exactly match the extractor's output**, so that L1 loss can be computed directly.
For example, the DINOv3 ViT extractor outputs at patch resolution `(D, H/P, W/P)`. `DINOv3LinearAvgDecoder` reparameterizes a trainable `nn.Linear` with patch-level average pooling into a single `F.conv2d` call (kernel derived from linear weights, stride = patch size), avoiding the large `(D, H, W)` intermediate tensor entirely.
### Step 3: Register via Factory
Create `feature_3dgs/mymodel/registry.py`:
```python
from feature_3dgs.registry import register_extractor_decoder
from .extractor import MyModelExtractor
from .decoder import MyModelDecoder
FEATURE_DIM = 768 # D of your model's output
def factory(embed_dim: int, **configs):
extractor = MyModelExtractor(...)
decoder = MyModelDecoder(
in_channels=embed_dim,
out_channels=FEATURE_DIM,
...
)
return extractor, decoder
register_extractor_decoder("mymodel", factory)
```
### Step 4: Trigger Registration on Import
Create `feature_3dgs/mymodel/__init__.py`:
```python
from . import registry # triggers register_extractor_decoder() at import time
```
Then add the import in `feature_3dgs/__init__.py`:
```python
from . import mymodel # auto-registers "mymodel"
```
After these steps, the new model is available everywhere:
```shell
python -m feature_3dgs.train --name mymodel --embed_dim 32 -s data/truck -d output/truck-mymodel -i 30000
```
## Acknowledgement
This repo is developed based on [Feature 3DGS](https://github.com/ShijieZhou-UCLA/feature-3dgs), [3D Gaussian Splatting](https://github.com/graphdeco-inria/gaussian-splatting), and [gaussian-splatting (packaged)](https://github.com/yindaheng98/gaussian-splatting). Many thanks to the authors for open-sourcing their codebases.
# Feature 3DGS: Supercharging 3D Gaussian Splatting to Enable Distilled Feature Fields
Shijie Zhou, Haoran Chang\*, Sicheng Jiang\*, Zhiwen Fan, Zehao Zhu, Dejia Xu, Pradyumna Chari, Suya You, Zhangyang Wang, Achuta Kadambi (\* indicates equal contribution)<br>
| [Webpage](https://feature-3dgs.github.io/) | [Full Paper](https://arxiv.org/abs/2312.03203) | [Video](https://www.youtube.com/watch?v=h4zmQsCV_Qw) | [Original Code](https://github.com/ShijieZhou-UCLA/feature-3dgs) |
Abstract: *3D scene representations have gained immense popularity in recent years. Methods that use Neural Radiance fields are versatile for traditional tasks such as novel view synthesis. In recent times, some work has emerged that aims to extend the functionality of NeRF beyond view synthesis, for semantically aware tasks such as editing and segmentation using 3D feature field distillation from 2D foundation models. However, these methods have two major limitations: (a) they are limited by the rendering speed of NeRF pipelines, and (b) implicitly represented feature fields suffer from continuity artifacts reducing feature quality. Recently, 3D Gaussian Splatting has shown state-of-the-art performance on real-time radiance field rendering. In this work, we go one step further: in addition to radiance field rendering, we enable 3D Gaussian splatting on arbitrary-dimension semantic features via 2D foundation model distillation. This translation is not straightforward: naively incorporating feature fields in the 3DGS framework encounters significant challenges, notably the disparities in spatial resolution and channel consistency between RGB images and feature maps. We propose architectural and training changes to efficiently avert this problem. Our proposed method is general, and our experiments showcase novel view semantic segmentation, language-guided editing and segment anything through learning feature fields from state-of-the-art 2D foundation models such as SAM and CLIP-LSeg. Across experiments, our distillation method is able to provide comparable or better results, while being significantly faster to both train and render. Additionally, to the best of our knowledge, we are the first method to enable point and bounding-box prompting for radiance field manipulation, by leveraging the SAM model.*
<section class="section" id="BibTeX">
<div class="container is-max-desktop content">
<h2 class="title">BibTeX</h2>
<pre><code>@inproceedings{zhou2024feature,
title={Feature 3dgs: Supercharging 3d gaussian splatting to enable distilled feature fields},
author={Zhou, Shijie and Chang, Haoran and Jiang, Sicheng and Fan, Zhiwen and Zhu, Zehao and Xu, Dejia and Chari, Pradyumna and You, Suya and Wang, Zhangyang and Kadambi, Achuta},
booktitle={Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition},
pages={21676--21685},
year={2024}
}</code></pre>
</div>
</section>
| text/markdown | yindaheng98 | yindaheng98@gmail.com | null | null | null | null | [
"Programming Language :: Python :: 3"
] | [] | https://github.com/yindaheng98/gaussian-splatting | null | null | [] | [] | [] | [
"torch",
"torchvision",
"tqdm",
"plyfile",
"tifffile",
"numpy",
"opencv-python",
"pillow",
"open3d",
"gaussian-splatting>=2.3.0",
"scikit-learn"
] | [] | [] | [] | [] | twine/6.2.0 CPython/3.12.10 | 2026-02-20T01:56:22.437119 | feature_3dgs-1.2.1-cp312-cp312-win_amd64.whl | 1,210,860 | 15/c8/c59798e17c46f3abce47ae3960aca78d02d1512f747688de677cac031a06/feature_3dgs-1.2.1-cp312-cp312-win_amd64.whl | cp312 | bdist_wheel | null | false | 77860af1be61e084439344de3d77a50c | 2adf5e5bb7b5022cf600cfe7bcf483971e669b5aa87441f4940f49e12b9c3fe9 | 15c8c59798e17c46f3abce47ae3960aca78d02d1512f747688de677cac031a06 | null | [] | 359 |
2.4 | artbox | 0.9.0 | ArtBox is a tool set for handling multimedia files. | # ArtBox
ArtBox is a tool set for handling multimedia files.
- Documentation: https://ggpedia.games
- License: BSD-3 Clause
## Features
TBD
# Setup
ArtBox depends on some packages that may not work well on every machine out of the box. To get everything installed cleanly, create a conda/mamba environment and install `artbox` there.
```bash
$ mamba create --name artbox "python>=3.8.1,<3.12" pygobject pip
$ conda activate artbox
$ pip install artbox
```
## Examples
For the following examples, create a temporary folder for artbox:
```bash
$ mkdir /tmp/artbox
```
### Convert text to audio
By default, `artbox speech` uses the
[`edge-tts`](https://pypi.org/project/edge-tts/) engine, but you can also
specify [`gtts`](https://github.com/pndurette/gTTS) with the flag
`--engine gtts`.
```bash
$ echo "Are you ready to join Link and Zelda in fighting off this unprecedented threat to Hyrule?" > /tmp/artbox/text.md
$ artbox speech text-to-speech \
--title artbox \
--input-path /tmp/artbox/text.md \
--output-path /tmp/artbox/speech.mp3 \
--engine edge-tts
```
If you need to generate the audio in a different language, you can use the flag
`--lang`:
```bash
$ echo "Bom dia, mundo!" > /tmp/artbox/text.md
$ artbox speech text-to-speech \
--title artbox \
--input-path /tmp/artbox/text.md \
--output-path /tmp/artbox/speech.mp3 \
--lang pt
```
If you are using the `edge-tts` engine (the default), you can also specify the
locale for that language, for example:
```bash
$ echo "Are you ready to join Link and Zelda in fighting off this unprecedented threat to Hyrule?" > /tmp/artbox/text.md
$ artbox speech text-to-speech \
--title artbox \
--input-path /tmp/artbox/text.md \
--output-path /tmp/artbox/speech.mp3 \
--engine edge-tts \
--lang en-IN
```
Additionally, if you are using edge-tts, you can specify `--rate`, `--volume`,
and `--pitch`, for example:
```bash
$ echo "Do you want some coffee?" > /tmp/artbox/text.md
$ artbox speech text-to-speech \
--title artbox \
--input-path /tmp/artbox/text.md \
--output-path /tmp/artbox/speech.mp3 \
--engine edge-tts \
--lang en \
--rate +10% \
--volume -10% \
--pitch -5Hz
```
### Download a YouTube video
If you want to download videos from YouTube, you can use the following
command:
```bash
$ artbox youtube download \
--url https://www.youtube.com/watch?v=zw47_q9wbBE \
--output-path /tmp/artbox/
```
The command above downloads the video at an arbitrary resolution. If you want a
specific resolution, use the flag `--resolution`:
```bash
$ artbox youtube download \
--url https://www.youtube.com/watch?v=zw47_q9wbBE \
--output-path /tmp/artbox/ \
--resolution 360p
```
### Create a song based on the musical notes
```bash
$ # notes in JSON format
$ echo '["E", "D#", "E", "D#", "E", "B", "D", "C", "A"]' > /tmp/artbox/notes.txt
$ artbox sound notes-to-audio \
--input-path /tmp/artbox/notes.txt \
--output-path /tmp/artbox/music.mp3 \
--duration 2
```
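Under the hood, turning note names into audio requires mapping each note to a frequency. ArtBox's internal implementation isn't shown here; the sketch below uses the standard equal-temperament formula relative to A4 = 440 Hz, and the helper name and octave-4 assumption are purely illustrative:

```python
# Equal-temperament frequencies relative to A4 = 440 Hz (octave 4 assumed).
SEMITONES_FROM_A = {"C": -9, "C#": -8, "D": -7, "D#": -6, "E": -5, "F": -4,
                    "F#": -3, "G": -2, "G#": -1, "A": 0, "A#": 1, "B": 2}

def note_to_freq(note: str) -> float:
    """Frequency in Hz of a note name, taken to be in octave 4."""
    return 440.0 * 2 ** (SEMITONES_FROM_A[note] / 12)

assert note_to_freq("A") == 440.0
assert abs(note_to_freq("E") - 329.63) < 0.01  # E4
```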
### Remove the audio from a video
First, download the youtube video `https://www.youtube.com/watch?v=zw47_q9wbBE`
as explained before.
Next, run the following command:
```bash
$ artbox video remove-audio \
--input-path "/tmp/artbox/The Legend of Zelda Breath of the Wild - Nintendo Switch Presentation 2017 Trailer.mp4" \
--output-path /tmp/artbox/botw.mp4
```
### Extract the audio from a video
First, download the youtube video `https://www.youtube.com/watch?v=zw47_q9wbBE`
as explained before.
Next, run the following command:
```bash
$ artbox video extract-audio \
--input-path "/tmp/artbox/The Legend of Zelda Breath of the Wild - Nintendo Switch Presentation 2017 Trailer.mp4" \
--output-path /tmp/artbox/botw-audio.mp3
```
### Combine audio and video files
First, execute the previous steps:
- Download a youtube video
- Remove the audio from a video
- Extract the audio from a video
Next, run the following command:
```bash
$ artbox video combine-video-and-audio \
--video-path /tmp/artbox/botw.mp4 \
--audio-path /tmp/artbox/botw-audio.mp3 \
--output-path /tmp/artbox/botw-combined.mp4
```
## Additional dependencies
If you want to use Python to play your audio files, you can install `playsound`:
```bash
$ pip wheel --use-pep517 "playsound (==1.3.0)"
```
## Troubleshoot
After installing with `poetry install`:
- Patch `pytube` (ref: https://github.com/pytube/pytube/issues/1773):
`sed -i 's/(r"^$\\w+\\W")/(r"^\\w+\\W")/' $CONDA_PREFIX/lib/python3.*/site-packages/pytube/cipher.py`
| text/markdown | Ivan Ogasawara | ivan.ogasawara@gmail.com | null | null | BSD-3-Clause | null | [
"License :: OSI Approved :: BSD License",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.9",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: 3.13"
] | [] | null | null | <3.14,>=3.9 | [] | [] | [] | [
"aubio>=0.4.9",
"edge-tts>=6.1.8",
"ffmpeg-python>=0.2.0",
"google-cloud-speech>=2.24.1",
"gtts>=2.3.2",
"librosa>=0.10.1",
"matplotlib<=3.9",
"moviepy<2,>=1.0.3",
"noisereduce<3,>=2.0.1",
"numpy<2,>=1.20",
"openai>=1",
"pycairo<1.26.0,>=1.25.1",
"pydub>=0.25.1",
"pygobject<3.49,>=3.44.1",
"python-dotenv>=1.0.0",
"pytubefix>=5.0",
"scipy<1.23",
"speechrecognition>=3.10",
"typer>=0.9.0",
"vosk>=0.3.45"
] | [] | [] | [] | [] | poetry/2.3.1 CPython/3.12.12 Linux/6.11.0-1018-azure | 2026-02-20T01:56:21.536211 | artbox-0.9.0-py3-none-any.whl | 14,478 | 1f/86/ebd480f58479b05ca4a101700f36ea291c9adc8432f66b472502c8eb04af/artbox-0.9.0-py3-none-any.whl | py3 | bdist_wheel | null | false | 66281b34eb26fa60e2da832140dab7d0 | 9e01928a3e63c66c64a1cea0da0b32f8f94598c8b5a2fbc1ce7a25a517e67698 | 1f86ebd480f58479b05ca4a101700f36ea291c9adc8432f66b472502c8eb04af | null | [
"LICENSE"
] | 232 |
2.4 | pepmatch | 1.16.0 | Search tool for peptides and epitopes within a proteome, while considering potential residue substitutions. | <p align="center">
<img src="docs/logo.png" alt="PEPMatch Logo">
</p>
--------------------------------------------------------------------
[](https://github.com/IEDB/PEPMatch/actions/workflows/tests.yml)
**Author:** Daniel Marrama
`PEPMatch` is a high-performance Python tool designed to find short peptide sequences within a reference proteome or other large protein sets. It is optimized for speed and flexibility, supporting exact matches, searches with a defined number of residue substitutions (mismatches), and a "best match" mode to find the most likely hit.
To encourage competition on tool performance, we created a benchmarking framework with instructions [here](./benchmarking).
### Key Features
* **Versatile Searching**: Find exact matches, matches with a specified tolerance for mismatches, or the single best match for each query peptide.
* **Discontinuous Epitope Support**: Search for non-contiguous residues in the format `"R377, Q408, Q432, ..."`.
* **High Performance**: Utilizes an efficient k-mer indexing strategy for rapid searching. The backend is powered by a C-based Hamming distance calculation for optimized mismatch detection.
* **Optimized Preprocessing**: Employs a two-step process. Proteomes are preprocessed once into a format optimized for the search type (SQLite for exact matching, Pickle for mismatching), making subsequent searches extremely fast.
* **Parallel Processing**: Built-in support for multicore processing to handle large query sets efficiently.
* **Flexible I/O**: Accepts queries from FASTA files or Python lists and can output results to multiple formats, including CSV, TSV, XLSX, JSON, or directly as a Polars DataFrame.
### Requirements
* Python 3.7+
* [Polars](https://pola.rs/)
* [Biopython](https://biopython.org/)
### Installation
```bash
pip install pepmatch
```
### Core Engine
`PEPMatch` operates using a two-step workflow:
1. **Preprocessing**: First, the target proteome is processed into an indexed format. This step only needs to be performed once per proteome and k-mer size. `PEPMatch` uses SQLite databases for the speed of indexed lookups in exact matching and serialized Python objects (pickle) for the flexibility needed in mismatch searching.
2. **Matching**: The user's query peptides are then searched against the preprocessed proteome.
This design ensures that the time-intensive task of parsing and indexing the proteome is separated from the search itself, allowing for rapid and repeated querying.
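As an illustration of the k-mer indexing idea, here is a simplified in-memory sketch (not PEPMatch's actual SQLite/pickle implementation; the toy sequences are made up):

```python
# Minimal k-mer index: map each k-mer to (protein_id, position) pairs,
# then verify candidate windows seeded by a peptide's first k-mer.
def build_index(proteins: dict, k: int) -> dict:
    index = {}
    for pid, seq in proteins.items():
        for i in range(len(seq) - k + 1):
            index.setdefault(seq[i:i + k], []).append((pid, i))
    return index

def exact_search(peptide: str, proteins: dict, index: dict, k: int) -> list:
    hits = []
    for pid, pos in index.get(peptide[:k], []):   # seed with the first k-mer
        if proteins[pid][pos:pos + len(peptide)] == peptide:
            hits.append((pid, pos))
    return hits

proteins = {"P1": "MKTAYIAKQR", "P2": "GGMKTAYL"}
index = build_index(proteins, k=3)
assert exact_search("KTAY", proteins, index, k=3) == [("P1", 1), ("P2", 3)]
```

Mismatch searching extends this idea: any k-mer of the peptide can seed a candidate window, which is then scored by Hamming distance (in PEPMatch, via the C backend) against the mismatch threshold.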
### Command-Line Usage
The tool provides two CLI commands: `pepmatch-preprocess` and `pepmatch-match`.
#### 1. Preprocessing
The `pepmatch-preprocess` command builds the necessary database from your proteome FASTA file.
* For **exact matching** (0 mismatches), use the `sql` format.
* For **mismatch matching**, use the `pickle` format.
```bash
# Preprocess for an exact match search using 5-mers
pepmatch-preprocess -p human.fasta -k 5 -f sql
# Preprocess for a mismatch search using 3-mers
pepmatch-preprocess -p human.fasta -k 3 -f pickle
```
##### Flags
* `-p`, `--proteome` (Required): Path to the proteome FASTA file.
* `-k`, `--kmer_size` (Required): The k-mer size to use for indexing.
* `-f`, `--preprocess_format` (Required): The format for the preprocessed database (`sql` or `pickle`).
* `-n`: A custom name for the proteome.
* `-P`: Path to the directory to save preprocessed files.
* `-g`: Path to a gene-priority proteome file (a UniProt one-protein-per-gene file, used to prioritize matches later)
#### 2. Matching
The `pepmatch-match` command runs the search against a preprocessed proteome.
```bash
# Find exact matches (-m 0) using the preprocessed 5-mer database
pepmatch-match -q peptides.fasta -p human.fasta -m 0 -k 5
# Find matches with up to 3 mismatches (-m 3) using the 3-mer database
pepmatch-match -q neoepitopes.fasta -p human.fasta -m 3 -k 3
```
##### Flags
* `-q`, `--query` (Required): Path to the query peptide FASTA file.
* `-p`, `--proteome_file` (Required): Path to the original proteome FASTA file.
* `-m`: Maximum number of mismatches allowed (e.g., `0` for exact).
* `-k`: The k-mer size to use (must match the preprocessed file).
* `-P`: Path to the directory containing preprocessed files.
* `-b`: Enable "best match" mode.
* `-f`: Output format (`csv`, `tsv`, `xlsx`, `json`). Defaults to `csv`.
* `-o`: Name of the output file (without the file extension, e.g. `.csv`)
* `-v`: Disable sequence versioning (e.g., for protein ID P05067.1, the ".1" suffix is removed)
* `-n`: Number of parallel processing jobs (CPU cores) to use.
### Python API Usage
For more control and integration into other workflows, `PEPMatch` provides a simple Python API.
#### 1. Exact Matching
```python
from pepmatch import Preprocessor, Matcher
# Preprocess the proteome into a SQLite DB for exact matching
Preprocessor('proteomes/human.fasta').sql_proteome(k=5)
# Initialize the Matcher for an exact search (0 mismatches)
matcher = Matcher(
query='queries/mhc-ligands-test.fasta',
proteome_file='proteomes/human.fasta',
max_mismatches=0,
k=5
)
# Run the search and get results
results_df = matcher.match()
```
#### 2. Mismatching
```python
from pepmatch import Preprocessor, Matcher
# Preprocess the proteome into pickle files for mismatching
Preprocessor('proteomes/human.fasta').pickle_proteome(k=3)
# Initialize the Matcher to allow up to 3 mismatches
matcher = Matcher(
query='queries/neoepitopes-test.fasta',
proteome_file='proteomes/human.fasta',
max_mismatches=3,
k=3
)
results_df = matcher.match()
```
#### 3. Best Match
The `best_match` mode automatically finds the optimal match for each peptide, trying different k-mer sizes and mismatch thresholds. No manual preprocessing is required.
```python
from pepmatch import Matcher
matcher = Matcher(
query='queries/milk-peptides-test.fasta',
proteome_file='proteomes/human.fasta',
best_match=True
)
results_df = matcher.match()
```
#### 4. Parallel Processing
Use the `ParallelMatcher` class to run searches on multiple CPU cores. The `n_jobs` parameter specifies the number of cores to use.
```python
from pepmatch import Preprocessor, ParallelMatcher
# Preprocessing is the same
Preprocessor('proteomes/betacoronaviruses.fasta').pickle_proteome(k=3)
# Use ParallelMatcher to search with 4 jobs
parallel_matcher = ParallelMatcher(
query='queries/coronavirus-test.fasta',
proteome_file='proteomes/betacoronaviruses.fasta',
max_mismatches=3,
k=3,
n_jobs=4
)
results_df = parallel_matcher.match()
```
#### 5. Discontinuous Epitope Searching
`PEPMatch` can search for epitopes defined by non-contiguous residues and their positions. Simply provide a query list where each item is a string in the format `"A1, B10, C15"`.
```python
from pepmatch import Matcher
# A list of discontinuous epitopes to find
discontinuous_query = [
"R377, Q408, Q432, H433, F436",
"S2760, V2763, E2773, D2805, T2819"
]
matcher = Matcher(
query=discontinuous_query,
proteome_file='proteomes/sars-cov-2.fasta',
max_mismatches=1 # Allow 1 mismatch among the specified residues
)
results_df = matcher.match()
```
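The `"R377, Q408, ..."` query strings encode residue/position pairs. A small parser for that format (illustrative only; PEPMatch handles this parsing internally) could look like:

```python
# Parse a discontinuous-epitope string such as "R377, Q408, Q432"
# into (residue, position) pairs.
def parse_discontinuous(epitope: str) -> list:
    pairs = []
    for token in epitope.split(","):
        token = token.strip()
        pairs.append((token[0], int(token[1:])))  # one-letter residue + position
    return pairs

assert parse_discontinuous("R377, Q408, Q432") == [
    ("R", 377), ("Q", 408), ("Q", 432)
]
```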
### Output Formats
You can specify the output format using the `output_format` parameter in the `Matcher` or `ParallelMatcher`.
* **`dataframe` (default for API)**: Returns a Polars DataFrame.
* **`csv` (default for CLI)**: Saves results to a CSV file.
* **`tsv`**: Saves results to a TSV file.
* **`xlsx`**: Saves results to an Excel file.
* **`json`**: Saves results to a JSON file.
To receive a DataFrame from the API, you can either omit the `output_format` parameter or set it explicitly:
```python
# The match() method will return a Polars DataFrame
df = Matcher(
'queries/neoepitopes-test.fasta',
'proteomes/human.fasta',
max_mismatches=3,
k=3,
output_format='dataframe' # Explicitly request a DataFrame
).match()
print(df.head())
```
### Citation
If you use PEPMatch in your research, please cite the following paper:
Marrama D, Chronister WD, Westernberg L, et al. PEPMatch: a tool to identify short peptide sequence matches in large sets of proteins. *BMC Bioinformatics*. 2023;24(1):485. Published 2023 Dec 18. doi:10.1186/s12859-023-05606-4
| text/markdown; charset=UTF-8; variant=GFM | null | Daniel Marrama <dmarrama@lji.org> | null | null | null | null | [] | [] | null | null | null | [] | [] | [] | [
"polars>=1.31.0",
"biopython>=1.78",
"xlsxwriter>=3.2.5",
"pytest>=8.0; extra == \"dev\"",
"pre-commit>=3.3.2; extra == \"dev\""
] | [] | [] | [] | [
"Homepage, https://github.com/IEDB/PEPMatch"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-20T01:54:47.066417 | pepmatch-1.16.0.tar.gz | 39,970 | 83/56/77929dd229922b54985c55b2743a1537566b57f72fe9ebede3a19738320f/pepmatch-1.16.0.tar.gz | source | sdist | null | false | 7d713cba52b7c381662274bbebe9a5db | fd810d1fe398ab3f4052928737b9c63599ba6cc49cbc4fd416f0334e8e669b38 | 835677929dd229922b54985c55b2743a1537566b57f72fe9ebede3a19738320f | null | [
"LICENSE"
] | 985 |
2.4 | batcave | 47.1.1 | Python Programming Toolkit | # BatCave Python Module
A useful collection of tools for writing Python programs.
| text/markdown | null | "Jeffery G. Smith" <web@pobox.com> | null | null | null | python, programming, library | [
"Development Status :: 5 - Production/Stable",
"Operating System :: OS Independent",
"Programming Language :: Python :: 3.12",
"Intended Audience :: Developers",
"Topic :: Software Development",
"Natural Language :: English"
] | [] | null | null | ~=3.12 | [] | [] | [] | [
"docker~=7.1",
"DotMap~=1.3",
"GitPython~=3.1",
"google-api-core",
"kubernetes~=35.0",
"requests~=2.32",
"PyYAML~=6.0",
"pywin32-stubs; sys_platform == \"win32\"",
"WMI~=1.5; sys_platform == \"win32\"",
"psutil~=7.2; platform_machine not in \"armv6l armv7l armv8b armv8l\"",
"PyQt5~=5.15; platform_machine not in \"aarch64 aarch64_be armv6l armv7l armv8b armv8l\"",
"bumpver; extra == \"dev\"",
"vjer; extra == \"dev\"",
"flake8; extra == \"test\"",
"flake8-annotations; extra == \"test\"",
"flake8-pyproject; extra == \"test\"",
"mypy; extra == \"test\"",
"pylint; extra == \"test\"",
"PyQt5-stubs; extra == \"test\"",
"types-python-dateutil; extra == \"test\"",
"types-PyYAML; extra == \"test\"",
"types-psutil; extra == \"test\"",
"types-requests; extra == \"test\"",
"types-pywin32; extra == \"test\" and sys_platform == \"win32\"",
"unittest-xml-reporting; extra == \"test\""
] | [] | [] | [] | [
"changelog, https://github.com/arisilon/batcave/blob/master/CHANGELOG.md",
"documentation, https://batcave.readthedocs.io",
"homepage, https://github.com/arisilon/batcave/",
"repository, https://github.com/arisilon/batcave/"
] | twine/6.2.0 CPython/3.12.12 | 2026-02-20T01:53:39.352063 | batcave-47.1.1.tar.gz | 113,361 | 04/e3/641a302422e64e922c071e12655b4ab5de3abc6ba9849f856c7b53ff3b75/batcave-47.1.1.tar.gz | source | sdist | null | false | 16104b6698771c42c33a760ce54a9998 | 3508ef4e57ff20ba360aa2d34a4983ada270ee9daf02af8043bca5eabbb0c628 | 04e3641a302422e64e922c071e12655b4ab5de3abc6ba9849f856c7b53ff3b75 | null | [
"LICENSE"
] | 427 |
2.4 | simpleworkernet | 0.0.1b17 | Python клиент для API WorkerNet | SimpleWorkerNet
Высокопроизводительный Python клиент для REST API системы WorkerNet с интеллектуальной системой трансформации и типизации сложных JSON структур
📋 Contents
🌟 Features
📦 Installation
🚀 Quick Start
🔧 Configuration
📚 Core Components
WorkerNetClient
BaseModel and smart_model
SmartData Framework
Metadata and Paths
📊 Logging
Configuring logs
Session logs
Managing logs
💾 Caching
Configuring the cache
Managing the cache
🎨 Usage Examples
Basic API operations
Filtering and searching with SmartData
Advanced search
Working with models
Aggregation and statistics
Serialization
🧹 Cleanup and Uninstallation
📖 Documentation
🤝 Contributing
📄 License
✒️ Author
🌟 Features
🚀 SmartData Framework
SmartData is a specialized SDK component for processing WorkerNet server API responses. It turns the server's raw, dynamically typed JSON data into strictly typed Python objects, with support for deep search and automatic collection transformation.
Automatic type coercion according to Python annotations
Preservation of metadata about the data-extraction path (the MetaData class)
Deep search across any level of nesting
Fluent interface for filtering, sorting, and aggregation
Export/import support (JSON, Pickle, Gzip)
🔧 BaseModel Engine
BaseModel is a powerful recursive type-casting system:
Automatic conversion of data into typed objects
Support for Union, Optional, List, and nested models
Post-processing and collapsing of redundant structures
Serialization/deserialization with type preservation
The @smart_model decorator for convenient model creation
🎯 Smart API Client
Automatic session management (supports the with context)
Intelligent method selection (GET/POST) when the URL length limit is exceeded
Automatic retries on timeouts
Field caching for better performance
Full logging of all operations
📊 Advanced Logging
Session logs: each run creates a separate file with a timestamp
Automatic rotation: old logs are deleted, and new ones do not grow unbounded
Detailed information: session ID, creation time, file size
Cross-platform: works correctly on Windows, macOS, and Linux
Session management: start new sessions, browse history
🗄️ Caching
Two-level caching (model fields and numeric keys)
Automatic eviction when the size limit is reached
Cache persistence to disk and loading at initialization
Caching can be fully disabled
Detailed usage statistics
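As a rough mental model of the size-capped cache described above, here is a minimal stdlib sketch (not the library's actual implementation; the `FieldCache` name and its API are hypothetical):

```python
from collections import OrderedDict

class FieldCache:
    """Minimal size-capped cache that evicts its oldest entry when full."""

    def __init__(self, max_size: int = 3):
        self.max_size = max_size
        self._data: OrderedDict = OrderedDict()
        self.hits = 0
        self.total = 0

    def get(self, key):
        self.total += 1
        if key in self._data:
            self.hits += 1
            self._data.move_to_end(key)  # keep recently used entries fresh
            return self._data[key]
        return None

    def put(self, key, value):
        self._data[key] = value
        self._data.move_to_end(key)
        if len(self._data) > self.max_size:
            self._data.popitem(last=False)  # evict the oldest entry

cache = FieldCache(max_size=2)
cache.put("a", 1)
cache.put("b", 2)
cache.put("c", 3)      # "a" is evicted here
print(cache.get("a"))  # None: already evicted
print(cache.get("c"))  # 3
```

The real SmartData cache also tracks hit statistics, which is why the sketch counts `hits` and `total`.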
📦 Installation
Basic installation
```bash
pip install simpleworkernet
```
🚀 Quick Start
Minimal example
```python
from simpleworkernet import WorkerNetClient
# Create a client
client = WorkerNetClient(
    host="my.workernet.ru",
    apikey="your-secret-api-key"
)
# Fetch data
cables = client.Fiber.catalog_cables_get()
print(f"Cables found: {len(cables)}")
```
Using a context manager
```python
from simpleworkernet import WorkerNetClient
with WorkerNetClient("my.workernet.ru", "your-api-key") as client:
    customers = client.Customer.get_data()
    addresses = client.Address.get()
    print(f"Customers: {len(customers)}")
    print(f"Addresses: {len(addresses)}")
```
Smart filtering with SmartData
```python
from simpleworkernet import WorkerNetClient, Where, Operator
client = WorkerNetClient("my.workernet.ru", "your-api-key")
# Fetch data
cables = client.Fiber.catalog_cables_get()
# Build search conditions
conditions = [
    Where('cable_line_type_id', 5),
    Where('fiber_count', [4, 16], Operator.BETWEEN),
    Where('model', 'ОКА', Operator.LIKE)
]
# Filter
filtered = cables.filter(*conditions, join='AND')
print(f"Found: {filtered.count()}")
# Or use where for simple conditions
active_customers = customers.where('state', 'active')
```
🔧 Configuration
Basic setup
```python
from simpleworkernet import config_manager
# View the current configuration
config_manager.show_config()
# Change settings
config_manager.update(
    log_level="DEBUG",
    log_to_file=True,
    console_output=True,
    cache_enabled=True,
    cache_max_size=100000
)
# Save to the registry/file
config_manager.save()
```
Full configuration in code
```python
from simpleworkernet import config_manager, WorkerNetConfig
# Create your own configuration
custom_config = WorkerNetConfig(
    log_level="INFO",
    log_file="/custom/path/workernet.log",
    cache_enabled=True,
    cache_max_size=50000,
    default_timeout=60,
    max_retries=5,
    smartdata_max_depth=200,
    smartdata_debug=True
)
# Apply it
config_manager.update(**custom_config.to_dict())
config_manager.save()
```
Environment variables
```bash
export SMARTDATA_LOG_LEVEL=DEBUG
export SMARTDATA_LOG_FILE=true
export SMARTDATA_LOG_PATH=/var/log/workernet.log
```
📚 Core Components
WorkerNetClient
WorkerNetClient is the main class for interacting with the WorkerNet API.
Initialization parameters:
host - API host (required)
apikey - API key (required)
protocol - protocol ('http' or 'https'; defaults to 'https')
port - port (defaults to 443)
apiscr - API script name (defaults to 'api.php')
timeout - request timeout (from the config)
max_retries - number of retries on error
Available categories:
Address - addresses
Customer - customers
Device - equipment
Employee - employees
Fiber - cable lines
Map - coverage maps
Module - external requests
And many more...
BaseModel and smart_model
BaseModel is the base class for all data models, with automatic type casting.
```python
from typing import Optional
from simpleworkernet import smart_model, BaseModel, vStr, GeoPoint

@smart_model
class Address(BaseModel):
    """Address model"""
    id: int
    city: vStr
    street: vStr
    house: str
    apartment: Optional[int]
    coordinates: GeoPoint

# Automatic creation from a dictionary
addr = Address({
    "id": 1,
    "city": "Москва",
    "street": "Ленина",
    "house": "10",
    "apartment": 42,
    "coordinates": [55.75, 37.62]
})
print(addr.city)         # "Москва"
print(addr.coordinates)  # "55.75,37.62"
```
SmartData Framework
SmartData is a container for intelligent processing of JSON structures.
Creating SmartData
```python
from simpleworkernet import SmartData
# From a list of dictionaries
data = [
    {"id": 1, "name": "Иван", "age": 30},
    {"id": 2, "name": "Петр", "age": 25},
    {"id": 3, "name": "Сидор", "age": 35}
]
sd = SmartData(data, target_type=Customer)
print(f"Total: {sd.count()}")  # Total: 3
```
Metadata and Paths
Each element in a SmartData keeps information about its location in the original structure:
```python
# Get the metadata
for item in sd:
    path = sd.get_item_path(item)
    print(f"Element {item} is located at path: {path}")
# Filter by path
nested_items = sd.filter_by_path("field:address/*")
```
📊 Logging
Configuring logs
```python
from simpleworkernet import log, config_manager
# Basic setup via the config
config_manager.update(
    log_level="DEBUG",
    log_to_file=True,
    console_output=True
)
# Direct logger configuration
log.configure(
    level="DEBUG",
    log_to_file=True,
    log_file="/custom/path/workernet.log",
    console_output=True,
    max_log_files=20  # keep the 20 most recent sessions
)
```
Session logs
The logger automatically creates a separate file for each run:
```python
from simpleworkernet import log
# Get information about the current session
current_log = log.get_session_log_path()
session_id = log.get_session_id()
print(f"Current session: {session_id}")
print(f"Log file: {current_log}")
print(f"Session ID: {session_id}")
# Example output:
# Current session: 20250220_143022
# Log file: C:\Users\user\AppData\Roaming\simpleworkernet\logs\workernet_20250220_143022.log
# Session ID: 20250220_143022
```
Managing logs
```python
from simpleworkernet import log
from datetime import datetime
# List all sessions (newest first)
all_logs = log.list_session_logs(sort_by='newest')
print(f"Total sessions: {len(all_logs)}")
# Detailed information about each session
for log_file in all_logs[:5]:  # the 5 most recent
    info = log.get_session_info(log_file)
    if 'created' in info:
        created = info['created'].strftime("%Y-%m-%d %H:%M:%S")
        size = info['size_kb']
        print(f"📄 {info['session_id']} - {created} - {size:.1f} KB")
# Start a new session manually
new_session = log.new_session("my_custom_session")
print(f"New session: {new_session}")
# Inspect the file structure
log_dir = log.get_session_log_path().parent
print(f"\nLog directory: {log_dir}")
print("Files:")
for f in sorted(log_dir.glob("*.log")):
    stat = f.stat()
    size = stat.st_size / 1024
    modified = datetime.fromtimestamp(stat.st_mtime).strftime("%Y-%m-%d %H:%M")
    print(f"  {f.name} - {modified} - {size:.1f} KB")
```
Example log file layout
```text
%APPDATA%\simpleworkernet\logs\
├── workernet_20250220_091233.log  # morning session (50 KB)
├── workernet_20250220_143022.log  # afternoon session (120 KB)
└── workernet_20250220_163502.log  # current session (45 KB)
```
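The naming scheme above can be sketched with the standard library alone. This is a conceptual illustration of per-run timestamped files plus rotation, not the library's implementation; the `new_session_log` helper is hypothetical (note that two runs within the same second would share a filename):

```python
import tempfile
import time
from pathlib import Path

def new_session_log(log_dir: Path, max_log_files: int = 20) -> Path:
    """Create a timestamped per-run log file and prune the oldest ones."""
    session_id = time.strftime("%Y%m%d_%H%M%S")
    log_dir.mkdir(parents=True, exist_ok=True)
    logs = sorted(log_dir.glob("workernet_*.log"))  # names sort chronologically
    # Remove the oldest files so that, counting the new one, we stay at the limit
    for old in logs[: max(0, len(logs) + 1 - max_log_files)]:
        old.unlink()
    path = log_dir / f"workernet_{session_id}.log"
    path.touch()
    return path

demo_dir = Path(tempfile.mkdtemp())
log_path = new_session_log(demo_dir, max_log_files=3)
print(log_path.name)  # e.g. workernet_20250220_163502.log
```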
💾 Caching
Configuring the cache
```python
from simpleworkernet import SmartData, config_manager
# Configure via the config
config_manager.update(
    cache_enabled=True,
    cache_max_size=100000,
    cache_auto_save=True,
    cache_dir="/custom/cache/path"
)
# Direct control
SmartData.set_cache_max_size(50000)
SmartData.setup_auto_save(True)
```
Managing the cache
```python
from simpleworkernet import SmartData
# Save the cache
SmartData.save_cache()
SmartData.save_cache(force=True)  # force a save
# Load the cache
SmartData.load_cache()
# Clear
SmartData.clear_cache()
# Disable/enable
SmartData.disable_cache()
SmartData.enable_cache()
# Statistics
stats = SmartData.get_cache_stats()
print(f"Total lookups: {stats['total']}")
print(f"Hits: {stats['hits']} ({stats['hit_rate']:.1f}%)")
print(f"Cache size: {stats['field_cache_size']} fields")
# Detailed statistics
SmartData.print_detailed_stats()
```
🎨 Usage Examples
Basic API operations
```python
from simpleworkernet import WorkerNetClient, log
# Configure logging
log.configure(level="DEBUG")
with WorkerNetClient("my.workernet.ru", "your-api-key") as client:
    # Fetch the list of customers
    customers = client.Module.get_user_list()
    log.info(f"Customers received: {len(customers)}")
    # Fetch a specific customer by ID
    customer = client.Customer.get_data(customer_id=123)
    # Fetch addresses
    addresses = client.Address.get(city_id=1)
    # Fetch the cable catalog
    cables = client.Fiber.catalog_cables_get()
    # Work with modules
    users = client.Module.get_house_list()
```
Filtering and searching with SmartData
```python
from simpleworkernet import SmartData, Where, Operator
# Build SmartData from an API response
data = [
    {"id": 1, "name": "Иванов Иван", "age": 30, "city": "Москва", "balance": 1500},
    {"id": 2, "name": "Петров Петр", "age": 25, "city": "СПб", "balance": 800},
    {"id": 3, "name": "Сидоров Сидор", "age": 35, "city": "Москва", "balance": 2200},
    {"id": 4, "name": "Смирнова Анна", "age": 28, "city": "Казань", "balance": 1200},
]
sd = SmartData(data)
# Simple filtering
adults = sd.where('age', 18, Operator.GTE)
moscow = sd.where('city', 'Москва')
# Compound conditions
filtered = sd.filter(
    Where('age', 30, Operator.GTE),
    Where('balance', 1000, Operator.GT),
    join='AND'
)
# Partial match
ivanov = sd.where('name', 'Иванов', Operator.LIKE)
# Range
middle_age = sd.where('age', [25, 35], Operator.BETWEEN)
# Membership check
cities = sd.where('city', ['Москва', 'СПб'], Operator.IN)
```
Advanced search
```python
from simpleworkernet import SmartData
# Deep search across the whole structure (including nested objects)
complex_data = [
    {
        "id": 1,
        "name": "Иван",
        "contacts": {
            "email": "ivan@example.com",
            "phone": "+7-999-123-45-67"
        },
        "orders": [
            {"id": 101, "amount": 1500},
            {"id": 102, "amount": 2300}
        ]
    },
    {
        "id": 2,
        "name": "Петр",
        "contacts": {
            "email": "petr@example.com",
            "phone": "+7-999-765-43-21"
        },
        "orders": [
            {"id": 103, "amount": 800}
        ]
    }
]
sd = SmartData(complex_data)
# Search by email (matches anywhere in the structure)
results = sd.find_all('email', 'ivan@example.com')
print(f"Objects found: {len(results)}")
# Existence check
if sd.exists('phone', '+7-999-123-45-67'):
    print("Phone number found!")
# Partial-match search
results = sd.find_all('name', 'Иван', is_partial=True)
# Complex search with several criteria
found = sd.find_all(
    key=['name', 'amount'],
    value=['Иван', 1500],
    is_partial=False
)
# Working with paths via metadata
for item in sd:
    path = sd.get_item_path(item)
    if "contacts" in path:
        print(f"Contact: {item} at path {path}")
# Filter by path
emails = sd.filter_by_path("field:contacts/field:email")
```
Working with models
```python
from typing import List, Optional
from simpleworkernet import smart_model, BaseModel, vStr, vPhoneNumber, vMoney

@smart_model
class Contact(BaseModel):
    email: Optional[str]
    phone: Optional[vPhoneNumber]
    telegram: Optional[str]

@smart_model
class Order(BaseModel):
    id: int
    amount: vMoney
    date: str
    status: str

@smart_model
class User(BaseModel):
    id: int
    name: vStr
    age: Optional[int]
    contacts: Contact
    orders: List[Order]
    balance: vMoney

# Create from a dictionary
user_data = {
    "id": 1,
    "name": "Иван Петров",
    "age": 30,
    "contacts": {
        "email": "ivan@example.com",
        "phone": "+7-999-123-45-67"
    },
    "orders": [
        {"id": 101, "amount": 1500, "date": "2024-01-15", "status": "completed"},
        {"id": 102, "amount": 2300, "date": "2024-02-01", "status": "pending"}
    ],
    "balance": 5000
}
user = User(**user_data)
print(f"User: {user.name}")
print(f"Phone: {user.contacts.phone.normalized}")
print(f"Total orders: {len(user.orders)}")
print(f"Order total: {sum(o.amount.amount for o in user.orders)}")
```
Aggregation and statistics
```python
from simpleworkernet import SmartData
data = [
    {"name": "Иван", "age": 30, "salary": 50000, "department": "IT"},
    {"name": "Петр", "age": 25, "salary": 45000, "department": "IT"},
    {"name": "Сидор", "age": 35, "salary": 60000, "department": "Sales"},
    {"name": "Анна", "age": 28, "salary": 55000, "department": "Sales"},
    {"name": "Мария", "age": 32, "salary": 52000, "department": "HR"},
]
sd = SmartData(data)
# Statistics
total = sd.count()                            # 5
avg_age = sd.avg(lambda x: x['age'])          # 30.0
max_salary = sd.max(lambda x: x['salary'])    # 60000
min_salary = sd.min(lambda x: x['salary'])    # 45000
total_salary = sd.sum(lambda x: x['salary'])  # 262000
# Group by department
by_department = sd.group_by(lambda x: x['department'])
for dept, employees in by_department.items():
    print(f"{dept}: {employees.count()} employees")
    print(f"  Average salary: {employees.avg(lambda x: x['salary']):.0f}")
# Unique values
unique_depts = sd.unique(lambda x: x['department'])
print(f"Departments: {list(unique_depts)}")
# Transformation
names = sd.map(lambda x: x['name'].upper())
print(f"Names: {names}")
# Sorting
by_age = sd.sort(key=lambda x: x['age'])
by_salary_desc = sd.sort(key=lambda x: x['salary'], reverse=True)
# Limits and offsets
top_3 = sd.sort(key=lambda x: x['salary'], reverse=True).limit(3)
next_2 = sd.sort(key=lambda x: x['salary']).skip(3).limit(2)
```
Serialization
```python
from simpleworkernet import SmartData
data = [{"id": 1, "name": "Test"}, {"id": 2, "name": "Test2"}]
sd = SmartData(data)
# Save to JSON
sd.to_file("data.json")
sd.to_file("data.json", clear_meta=True)  # without metadata
# Save to pickle
sd.to_file("data.pkl", format="pkl")
# Save to compressed gzip
sd.to_file("data.gz", format="gz")
# Load from files
loaded_json = SmartData.from_file("data.json")
loaded_pkl = SmartData.from_file("data.pkl")
loaded_gz = SmartData.from_file("data.gz")
# Convert to a dictionary, restoring the structure
original_dict = sd.to_dict()
print(original_dict)
# Get a flat list
flat_list = sd.to_list()
```
Working with metadata
```python
from simpleworkernet import SmartData
complex_data = {
    "users": [
        {
            "id": 1,
            "name": "Alice",
            "contacts": {
                "email": "alice@example.com",
                "phone": "+7-999-111-22-33"
            }
        },
        {
            "id": 2,
            "name": "Bob",
            "contacts": {
                "email": "bob@example.com",
                "phone": "+7-999-444-55-66"
            }
        }
    ],
    "settings": {"theme": "dark", "language": "ru"}
}
sd = SmartData(complex_data)
# Get the paths of all elements
for item in sd:
    path = sd.get_item_path(item)
    if path:
        print(f"{path}: {item}")
# Search by a specific path
emails = sd.filter_by_path("field:users/*/field:contacts/field:email")
for email in emails:
    print(f"Email: {email}")
# Filter by element type
primitives = sd.get_items_by_type('primitive')
dicts = sd.get_items_by_type('dict')
# Get the metadata of a specific element
first_user = sd[0]
meta = sd.get_metadata(first_user)
if meta:
    print(f"Path: {meta.get_path_string()}")
    print(f"Parent path: {meta.get_parent_path()}")
    print(f"Last segment: {meta.get_last_segment()}")
```
Primitive types
```python
from simpleworkernet import vStr, vFlag, GeoPoint, vPhoneNumber, vMoney, vPercent, vINN
# String decoding
text = vStr("Hello%20World&Co")  # "Hello World&Co"
# Bit flags
flag = vFlag.v1
if flag & vFlag.v1:
    print("Flag is set")
print(vFlag.from_bool(True))  # vFlag.v1
# Geographic coordinates
point = GeoPoint(55.75, 37.62)
point2 = GeoPoint("55.76,37.63")
print(point)                      # "55.75,37.62"
print(point.distance_to(point2))  # distance in km
# Phone numbers
phone = vPhoneNumber("+7 (123) 456-78-90")
print(phone.normalized)     # "71234567890"
print(phone.formatted)      # "+7 (123) 456-78-90"
print(phone.international)  # "+71234567890"
# Monetary amounts
money = vMoney(100.50, "RUB")
money2 = money + 50.25
print(money2)  # "150.75 RUB"
# Percentages
p = vPercent(15.5)
print(p)               # "15.5%"
print(p.of(1000))      # 155.0
print(p.add_to(1000))  # 1155.0
# INN (taxpayer ID) with validation
inn = vINN("1234567890")
print(inn.is_valid)  # True
print(inn.is_legal)  # True (10 digits)
```
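As an illustration of what phone normalization like the above involves, here is a stdlib sketch (not the library's implementation; the `normalize_phone` helper is hypothetical):

```python
import re

def normalize_phone(raw: str) -> str:
    """Strip everything but digits, mirroring the `normalized` form above."""
    digits = re.sub(r"\D", "", raw)
    # Russian numbers written with a leading 8 become 7-prefixed.
    if len(digits) == 11 and digits.startswith("8"):
        digits = "7" + digits[1:]
    return digits

print(normalize_phone("+7 (123) 456-78-90"))  # 71234567890
```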
Batch processing
```python
from simpleworkernet import SmartData
sd1 = SmartData([1, 2, 3])
sd2 = SmartData([4, 5, 6])
# Concatenation
combined = sd1 + sd2
print(combined.count())  # 6
# Bulk update
data = [
    {"id": 1, "status": "new"},
    {"id": 2, "status": "new"}
]
sd = SmartData(data)
# Update every element
sd.update_all(status="processed", priority=1)
# Via __setattr__
sd.status = "archived"
sd.priority = 2
for item in sd:
    print(item)  # {"id": 1, "status": "archived", "priority": 2}, ...
```
🧹 Cleanup and Uninstallation
Clearing stored data (registry, cache, logs)
```python
from simpleworkernet import cleanup
# With confirmation
cleanup()
# Without confirmation (from code)
from simpleworkernet.scripts.uninstall import cleanup_simpleworkernet
cleanup_simpleworkernet()
```
Console command (available after installation)
```bash
# Run the cleanup
cleanup-simpleworkernet
# You will be asked to confirm:
# Are you sure you want to delete all SimpleWorkerNet data? (y/N):
```
Complete package removal
```bash
# 1. Clear the data first
cleanup-simpleworkernet
# 2. Then uninstall the package
pip uninstall simpleworkernet
```
What gets deleted during cleanup:
Windows: the registry key HKEY_CURRENT_USER\SOFTWARE\SimpleWorkerNet
Configuration files: ~/.config/simpleworkernet/ or %APPDATA%\simpleworkernet\
Cache: ~/.cache/simpleworkernet/ or %LOCALAPPDATA%\simpleworkernet\cache\
Logs: ~/.local/share/simpleworkernet/logs/ or %APPDATA%\simpleworkernet\logs\
✒️ Author
- [Андрей Литвинов](https://t.me/busy4beaver)
| text/markdown | null | BusyBeaver <busybeaver.bb@gmail.com> | null | null | MIT | null | [] | [] | null | null | >=3.8 | [] | [] | [] | [
"requests>=2.25.0"
] | [] | [] | [] | [] | twine/6.2.0 CPython/3.13.1 | 2026-02-20T01:52:47.113369 | simpleworkernet-0.0.1b17.tar.gz | 103,026 | 69/7e/e78a92c5a951f01e78346d73ef1ce172953a176274e95a97dc17f94c94ae/simpleworkernet-0.0.1b17.tar.gz | source | sdist | null | false | 6d07aec0a525bea41e362a983353d8b0 | 0ae28e326cbd8aa945d20c5b02179124211dc1f1e0b6b33abde66f5f15c10eed | 697ee78a92c5a951f01e78346d73ef1ce172953a176274e95a97dc17f94c94ae | null | [
"LICENSE.txt"
] | 215 |
2.4 | private-attribute-cpp | 1.3.1 | A Python package that provides C++-style private attributes (C++ implementation). | # Private Attribute (C++ implementation)
## Introduction
This package provides a way to create private attributes the way C++ does.
## All Base API
```python
from private_attribute import (PrivateAttrBase, PrivateWrapProxy)  # 1 Import public API

def my_generate_func(obj_id, attr_name):  # 2 Optional: custom name generator
    return f"_hidden_{obj_id}_{attr_name}"

class MyClass(PrivateAttrBase, private_func=my_generate_func):  # 3 Inherit + optional custom generator
    __private_attrs__ = ['a', 'b', 'c', 'result', 'conflicted_name']  # 4 Must declare all private attrs

    def __init__(self):
        self.a = 1
        self.b = 2
        self.c = 3
        self.result = 42  # deliberately conflicts with internal names

    # Normal methods can freely access private attributes
    def public_way(self):
        print(self.a, self.b, self.c)

    # Real-world case: method wrapped by multiple decorators
    @PrivateWrapProxy(memoize())             # 5 Apply any decorator safely
    @PrivateWrapProxy(login_required())      # 5 Stack as many as needed
    @PrivateWrapProxy(rate_limit(calls=10))  # 5
    def expensive_api_call(self, x):  # First definition (will be wrapped)
        return heavy_computation(self.a, self.b, self.c, x)

    # Fix decorator order + resolve name conflicts
    @PrivateWrapProxy(expensive_api_call.result.name2, expensive_api_call)  # 6 Chain .result to push decorators down
    @PrivateWrapProxy(expensive_api_call.result.name1, expensive_api_call)  # 6 Resolve conflict with internal names
    def expensive_api_call(self, x):  # Final real implementation
        return heavy_computation(self.a, self.b, self.c, x)

# ====================== Usage ======================
obj = MyClass()
obj.public_way()  # prints: 1 2 3
print(hasattr(obj, 'a'))  # False – truly hidden from outside
print(obj.expensive_api_call(10))  # works with all decorators applied
```
| # | API | Purpose | Required? |
| --- | ---------------------------------------- | ------------------------------------------------------- | ----------- |
| 1 | PrivateAttrBase | Base class – must inherit | Yes |
| 1 | PrivateWrapProxy | Decorator wrapper for arbitrary decorators | When needed |
| 2 | private_func=callable | Custom hidden-name generator | Optional |
| 3 | Pass private_func in class definition | Same as above | Optional |
| 4 | \_\_private_attrs\_\_ list | Declare which attributes are private | Yes |
| 5 | @PrivateWrapProxy(...) | Make any decorator compatible with private attributes | When needed |
| 6 | method.result.xxx chain + dummy wrap | Fix decorator order and name conflicts | When needed |
## Usage
Here is a simple usage example for the module:
```python
from private_attribute import PrivateAttrBase

class MyClass(PrivateAttrBase):
    __private_attrs__ = ['a', 'b', 'c']

    def __init__(self):
        self.a = 1
        self.b = 2
        self.c = 3

    def public_way(self):
        print(self.a, self.b, self.c)

obj = MyClass()
obj.public_way()  # (1, 2, 3)
print(hasattr(obj, 'a'))  # False
print(hasattr(obj, 'b'))  # False
print(hasattr(obj, 'c'))  # False
```
All of the attributes listed in `__private_attrs__` are hidden from the outside world and stored under a different name.
You can supply your own function to generate that name; it receives the id of the object and the name of the attribute:
```python
def my_generate_func(obj_id, attr_name):
    return some_string

class MyClass(PrivateAttrBase, private_func=my_generate_func):
    __private_attrs__ = ['a', 'b', 'c']

    def __init__(self):
        self.a = 1
        self.b = 2
        self.c = 3

    def public_way(self):
        print(self.a, self.b, self.c)

obj = MyClass()
obj.public_way()  # (1, 2, 3)
```
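For instance, a generator might combine both values into an unlikely-to-collide string (a hypothetical scheme; any deterministic format works):

```python
def my_generate_func(obj_id, attr_name):
    # Hypothetical naming scheme: embed the object id and attribute name.
    return f"_private_{obj_id}_{attr_name}"

print(my_generate_func(140210, "a"))  # _private_140210_a
```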
When decorating methods, `property`, `classmethod`, and `staticmethod` are supported out of the box.
For any other decorator, wrap the function with `PrivateWrapProxy`:
```python
from private_attribute import PrivateAttrBase, PrivateWrapProxy

class MyClass(PrivateAttrBase):
    __private_attrs__ = ['a', 'b', 'c']

    @PrivateWrapProxy(decorator1())
    @PrivateWrapProxy(decorator2())
    def method1(self):
        ...

    @PrivateWrapProxy(method1.attr_name, method1)  # Use the argument "method1" to save the old func
    def method1(self):
        ...

    @PrivateWrapProxy(decorator3())
    def method2(self):
        ...

    @PrivateWrapProxy(method2.attr_name, method2)  # Use the argument "method2" to save the old func
    def method2(self):
        ...
```
`PrivateWrapProxy` is itself a decorator that wraps the function with the given decorator. When it decorates a method, it returns a `_PrivateWrap` object.
`_PrivateWrap` exposes the public API `result` and `funcs`: `result` returns the original decorated result, and `funcs` returns a tuple of the original functions.
```python
from private_attribute import PrivateAttrBase, PrivateWrapProxy

class MyClass(PrivateAttrBase):
    __private_attrs__ = ['a', 'b', 'c']

    @PrivateWrapProxy(decorator1())
    @PrivateWrapProxy(decorator2())
    def method1(self):
        ...

    @PrivateWrapProxy(method1.result.conflict_attr_name1, method1)  # Use the argument "method1" to save the old func
    def method1(self):
        ...

    @PrivateWrapProxy(method1.result.conflict_attr_name2, method1)
    def method1(self):
        ...

    @PrivateWrapProxy(decorator3())
    def method2(self):
        ...
```
## Advanced API
### Defining your metaclass based on another metaclass
You can build your own metaclass on top of an existing one:
```python
from abc import ABCMeta, abstractmethod
import private_attribute

class PrivateAbcMeta(ABCMeta):
    def __new__(cls, name, bases, attrs, **kwargs):
        temp = private_attribute.prepare(name, bases, attrs, **kwargs)
        typ = super().__new__(cls, temp.name, temp.bases, temp.attrs, **temp.kwds)
        private_attribute.postprocess(typ, temp)
        return typ

private_attribute.register_metaclass(PrivateAbcMeta)
```
This way you create a metaclass that behaves both as an ABC and as a private-attribute class:
```python
class MyClass(metaclass=PrivateAbcMeta):
    __private_attrs__ = ()
    __slots__ = ()

    @abstractmethod
    def my_function(self): ...

class MyImplement(MyClass):
    __private_attrs__ = ("_a",)

    def __init__(self, value=1):
        self._a = value

    def my_function(self):
        return self._a
```
Finally:
```python
>>> a = MyImplement(1)
>>> a.my_function()
1
>>> a._a
Traceback (most recent call last):
  File "<pyshell#2>", line 1, in <module>
    a._a
AttributeError: private attribute
>>> MyClass()
Traceback (most recent call last):
  File "<pyshell#3>", line 1, in <module>
    MyClass()
TypeError: Can't instantiate abstract class MyClass without an implementation for abstract method 'my_function'
```
## Notes
- Every class with private attributes must define the `__private_attrs__` attribute.
- The `__private_attrs__` attribute must be a sequence of strings.
- A name listed in `__slots__` cannot also appear in `__private_attrs__`.
- When a class defines both `__slots__` and `__private_attrs__`, the attributes in `__private_attrs__` can still be assigned in methods even though they are not in `__slots__`.
- Instances of `PrivateAttrBase` (or any of its subclasses) cannot be pickled by default.
- Ultimately, each attribute name in `__private_attrs__` is replaced by a tuple of two hashes.
- Ultimately, the `_PrivateWrap` object is restored to the original object.
- A class defined inside another class cannot use the outer class's private attributes.
- If a parent class defines an attribute that is not in `__private_attrs__` and is not a `PrivateAttrType` instance, the child class should not list that attribute in its own `__private_attrs__`.
- When combining with another metaclass, make sure the parent metaclass has no classmethod that sets attributes on subclasses. If it does, the combination will fail, because the new metaclass you define and register is immutable.
- CPython may change "tp_getattro", "tp_setattro", and related slots when you modify "\_\_getattribute\_\_", "\_\_setattr\_\_", and so on. If this concerns you, use `ensure_type` to reset those tp slots; for other metaclasses, use `ensure_metaclass`. Also, do not set those methods on these classes in your code.
## License
MIT
## Requirement
This package requires the C++ module "[picosha2](https://github.com/okdshin/PicoSHA2)" to compute SHA-256 hashes.
## Support
PyPy is not currently supported.
| text/markdown | HuangHaoHua | 13140752715@example.com | null | null | MIT | null | [] | [] | https://github.com/Locked-chess-official/private_attribute_cpp | null | null | [] | [] | [] | [] | [] | [] | [] | [] | twine/6.2.0 CPython/3.14.2 | 2026-02-20T01:51:09.172856 | private_attribute_cpp-1.3.1.tar.gz | 26,261 | 19/da/8882f8151b7ca40dbe5dc81ebd03887741254a0c40268817a569810fc8db/private_attribute_cpp-1.3.1.tar.gz | source | sdist | null | false | 377de1c1e75deb19c6823bc295974c13 | c342cddbb1cddec0923c9cf3f2bbb09da0ef8370c333a37de5962ce05d1aac4d | 19da8882f8151b7ca40dbe5dc81ebd03887741254a0c40268817a569810fc8db | null | [] | 2,408 |
2.4 | modxpy | 2.5.3 | ModX — The Python Module Universe at Your Fingertips | ### **🌟 ModXPy — The Python Module Universe at Your Fingertips**
Welcome to ModXPy, the ultimate playground for Python’s modules.
With ModXPy you can instantly import, explore, and experiment with the entire Python standard library — plus any installed third-party modules — all from one simple interface.
#### UPDATE 2.5.3
###### MAINTENANCE \& QOL UPDATE!
1\. Completely Reorganized Module List
The modules list used by importall() and related functions has been totally overhauled and is now organized into logical categories: Built-in Core Modules, Import System, File \& Path Handling, Data Types \& Structures, Numbers \& Math, Strings \& Text, and WAYYYY more!
The Python module name changed from "modx" to "modxpy" for consistency with PyPI. Import it as "import modxpy" now in the Python shell!
importlog() now has an "alphabetical" parameter!
If False (default): prints modules in chronological order as usual.
If True: prints modules in alphabetical order instead, making imported modules much easier to find.
#### 🚀 Installation and Importing
IMPORTANT: Before you install ModX, you MUST first (if not already) run "pip install packaging" in PowerShell/a terminal. ModX will NOT work without the packaging module.
Install directly from the terminal.
Type: "pip install modxpy"
In Python, import as "import modxpy", not "import modx" (it used to be "import modx").
#### Functions:
🔹 dependencies(module, as\_data=False)
Shows what other modules a specific module depends on without importing it.
🔹 importall(show\_imported=False, as\_data=False)
Imports nearly every standard library module at once.
🔹 importexternal(show\_imported=False, as\_data=False)
Attempts to import every third-party module you currently have installed.
🔹 importletter(letter, show\_imported=False, as\_data=False)
Imports all standard library modules whose names start with the given letter (case-insensitive).
🔹 importlog(include\_deps=False, alphabetical = False, as\_data=False)
Shows every module imported since ModX loaded in CHRONOLOGICAL order.
include\_deps=True: Includes ModX's dependencies in the list.
🔹 importrandom(n, strict\_mode=False, show\_imported=False, as\_data=False)
Imports n random stdlib modules.
strict\_mode=False: May import fewer than n due to dependencies.
strict\_mode=True: Forces import until EXACTLY n NEW modules.
🔹 importscreen(show\_imported=False, as\_data=False)
Imports every module that uses a screen/GUI (like pygame or turtle).
🔹 info(module\_name, as\_data=False)
Shows basic info about a module: file path, built-in status, full docstring.
🔹 isimported(module)
Checks if a module is currently imported into the Python shell (not just sys.modules).
🔹 listimportall(as\_data=False)
Returns a list of modules that importall() would import.
🔹 modbench(module, as\_data=False)
Shows how much time and memory a module takes to import.
Note: The module IS imported, and cache is cleared before each benchmark.
🔹 modclasses(module, as\_data=False)
Shows how many and what classes a module has WITHOUT importing it.
🔹 modfunctions(module, as\_data=False)
Shows how many and what functions a module has WITHOUT importing it.
🔹 modglobals(module, show\_private=False, export=None, as\_data=False)
Shows a module's global names (not functions/classes/modules).
show\_private=True: Includes names starting with '\_'.
export=filename.md: Exports results as Markdown.
🔹 modorigin(module, as\_data=False)
Shows where a module came from (e.g., built-in, standard library, or pip-installed).
🔹 modsloaded()
Shows how many modules are currently loaded in your Python session.
🔹 modxhelp(export=None, compact=False, banner=True)
Shows ModX's built-in help dialogue.
export=filename.md: Exports help as Markdown.
compact=True: Shows single-line summaries only.
banner=False: Hides the ASCII banner.
🔹 modximported(as\_data=False)
Lists modules that were ONLY imported by ModX — NOT including user imports or dependencies.
🔹 nonimported(as\_data=False)
Returns a list of STANDARD LIBRARY modules that have NOT been imported yet.
🔹 preloaded(show\_builtins=True, show\_internal=False, show\_submodules=False, as\_data=False)
Shows modules pre-loaded by Python before ModX started.
show\_builtins=False: Hides built-in modules.
show\_internal=True: Shows modules starting with '\_'.
show\_submodules=True: Includes submodules (names with '.').
🔹 revdeps(module, as\_data=False)
Shows what modules import the given module WITHOUT importing it.
🔹 searchmodules(keyword, as\_data=False)
Searches for modules whose names contain the keyword.
🔹 timesince(module, as\_data=False)
Shows WHEN a module was loaded relative to when ModX loaded.
🔹 vcompat(module\_name, python\_version=None, as\_data=False)
Checks if a module is compatible with a given Python version.
If no version is given, sweeps ALL Python versions 2.0–3.14.
🔹 whyloaded(module, as\_data=False)
Returns a comma-separated string explaining why a module is in memory.
Tags: not\_loaded, preloaded, user, referenced, modx, modx\_dep, dependency or unknown
Rule: user swallows referenced.
#### 💡 Why Use ModX?
1.) Understand your imports
Know why a module was loaded (whyloaded()), when it appeared (timesince()), and where it came from (modorigin()).
2.) Get real data, not just printed tables
Every function has as\_data mode — return clean Python lists/dicts for your own code.
3.) Stress-test your environment
Bulk-import hundreds of modules at once with importall() and see what breaks.
4.) Discover hidden dependencies
See what your imports actually depend on with dependencies() and revdeps().
5.) Experiment and learn
Random imports, compatibility checking, performance benchmarking — turn Python into a playground.
6.) Track everything
Chronological import logs, preloaded module lists, timestamp tracking — nothing stays hidden.
ModXPy turns Python’s module system into a playground —
perfect for learning, testing, or just satisfying your curiosity.
Install it today with pip install modxpy, and start discovering
how many modules Python already has waiting for you!
| text/markdown | Austin Wang | austinw87654@gmail.com | null | null | null | null | [] | [] | null | null | >=3.7 | [] | [] | [] | [] | [] | [] | [] | [] | twine/6.2.0 CPython/3.13.7 | 2026-02-20T01:50:05.024169 | modxpy-2.5.3.tar.gz | 24,832 | cf/ea/bd9624563c766e5ade7190b7a8ca792bafe9d8ea9b9ab9412e503a763ec5/modxpy-2.5.3.tar.gz | source | sdist | null | false | 59ba86164e258099550c7a0d4d99ed4e | ceafb8781d07d5f201c52bf11d67d493b3b1f35b4d1de2d4ea96318cf0bae7b6 | cfeabd9624563c766e5ade7190b7a8ca792bafe9d8ea9b9ab9412e503a763ec5 | null | [] | 235 |
2.4 | sari | 2.0.14 | Redesigned high-performance local search and indexing engine | # sari v2
An LSP-first local indexing/search engine plus an MCP daemon.
## Installation
```bash
uv tool install sari
# 또는
python3 -m pip install sari
```
## Basic usage
```bash
sari doctor
sari daemon start
sari roots add /absolute/path/to/repo
sari roots deactivate /absolute/path/to/repo
sari roots activate /absolute/path/to/repo
```
## Workspace activation policy (Soft-OFF)
`is_active` is the flag that controls collection and tool access.
- `is_active=true`: the workspace is accessed normally by the collection loop and the MCP/HTTP repo resolution path.
- `is_active=false`: the workspace is excluded from the collection schedule and watcher registration, and tool access is rejected with `ERR_WORKSPACE_INACTIVE`.
- Soft-OFF policy: deactivation does not immediately delete the existing index/metadata (the data is retained).
See `docs/workspace_activation_policy.md` for the detailed operational rules.
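The gate described above can be pictured as follows — an illustrative sketch, not sari's actual implementation:

```python
def resolve_workspace(workspace: dict) -> str:
    """Illustrative Soft-OFF gate: an inactive workspace keeps its data,
    but tool access is rejected with ERR_WORKSPACE_INACTIVE."""
    if not workspace.get("is_active", False):
        raise PermissionError("ERR_WORKSPACE_INACTIVE")
    return workspace["root"]

active = {"root": "/abs/path/repo", "is_active": True}
print(resolve_workspace(active))  # prints /abs/path/repo
```

The key property is that deactivation only flips the flag; nothing about the stored index is touched.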
## MCP integration (recommended)
```bash
sari install --host gemini
sari install --host codex
```
- Automatically writes `command = "sari"` + `args = ["mcp","stdio"]` into the Gemini/Codex configuration.
- The Codex configuration also gets `startup_timeout_sec = 45` by default.
- Existing configuration files are backed up as `.bak.<timestamp>`.
### Handling MCP handshake timeouts
If your MCP client shows a message like the ones below, the startup timeout is often too short.
- `MCP client for "sari" timed out after 10 seconds`
- `MCP startup incomplete`
Increase `startup_timeout_sec` in the Codex configuration (`~/.codex/config.toml`).
```toml
[mcp_servers.sari]
command = "sari"
args = ["mcp", "stdio"]
startup_timeout_sec = 45
```
- Recommended starting value: `30`
- Large DB / slow disk / initial-migration environments: `45`–`60`
## Manual configuration examples
### Gemini (`~/.gemini/settings.json`)
```json
{
"mcpServers": {
"sari": {
"command": "sari",
"args": ["mcp", "stdio"]
}
}
}
```
### Codex (`~/.codex/config.toml`)
```toml
[mcp_servers.sari]
command = "sari"
args = ["mcp", "stdio"]
startup_timeout_sec = 45
```
## Troubleshooting
### `sqlite3.OperationalError: no such column: repo_id`
This can occur when an old (previous-version) `state.db` is used with the current binary.
Recovery steps:
1. Back up the existing DB
2. Boot with a new DB path so the initial schema/migrations complete
3. Re-check `sari doctor` and the MCP connection
Example:
```bash
# 1) Back up
cp ~/.local/share/sari-v2/state.db ~/.local/share/sari-v2/state.db.bak.$(date +%Y%m%d-%H%M%S)
# 2) Run with a new DB (temporary or permanent paths both work)
export SARI_DB_PATH=~/.local/share/sari-v2/state.new.db
# 3) Check status
sari doctor
```
### Minimal post-install checklist
```bash
sari doctor
sari install --host codex
# Verify startup_timeout_sec = 30-60 in the Codex config.toml
```
## Development verification
```bash
pytest -q
tools/ci/run_release_gate.sh
tools/manual/test_mcp_call_flow.sh /absolute/path/to/repo
```
## GitHub Actions releases
Release workflow file: `.github/workflows/release-pypi.yml`
### 1) TestPyPI pre-validation (recommended)
1. Manually run `Release PyPI` from GitHub Actions.
2. Run it with the input `publish_to_testpypi=true`.
3. Confirm that the `build` job passes the release gate, the wheel/sdist build, and twine check.
4. Confirm that the `publish-testpypi` job succeeds and the `release-dist` artifact is uploaded.
### 2) Releasing to PyPI
1. Finalize the version in `pyproject.toml`.
2. Push a tag for the same version (`v<version>`). Example: `v2.0.14`
3. After the workflow's tag/version match check passes, confirm that the `publish-pypi` job succeeds.
## Local wheel testing (avoiding global tool pollution)
To validate locally built artifacts, use the script below instead of `uv tool install dist/*.whl`.
```bash
python3 -m build
tools/manual/test_local_wheel_ephemeral.sh
```
- The approach above only performs a one-off run via `uvx --from <wheel>`, so it does not overwrite the global `~/.local/bin/sari` install.
- Keep using `uv tool upgrade sari` for global upgrades.
### Recovering after accidentally overwriting the global tool with a local wheel
```bash
tools/manual/repair_global_sari_tool.sh
# Restore a specific version
tools/manual/repair_global_sari_tool.sh 2.0.13
```
| text/markdown | null | null | null | null | null | null | [] | [] | null | null | >=3.11 | [] | [] | [] | [
"peewee>=3.17.0",
"alembic>=1.13.0",
"sqlalchemy>=2.0.0",
"pydantic>=2.0.0",
"pyright<2,>=1.1.396",
"overrides<8,>=7.7.0",
"structlog>=23.1.0",
"click>=8.0.0",
"starlette>=0.27.0",
"uvicorn>=0.22.0",
"tantivy==0.25.1",
"pathspec>=0.12.0",
"psutil>=5.9.0",
"requests>=2.31.0",
"watchdog>=4.0.0",
"rich>=13.0.0"
] | [] | [] | [] | [] | twine/6.1.0 CPython/3.13.7 | 2026-02-20T01:48:24.085053 | sari-2.0.14.tar.gz | 425,864 | d2/43/8179e248d2f23f23c02073b631a03c08092df538dde2f209c1333bcd8a55/sari-2.0.14.tar.gz | source | sdist | null | false | 6f3a57c23a7b2d2bb132c3d3a89c1dba | cced69dbd7339947cad33667b0fca7580092104a1e3882e049e72d928f690943 | d2438179e248d2f23f23c02073b631a03c08092df538dde2f209c1333bcd8a55 | null | [] | 230 |
2.3 | complex-evaluate | 0.0.1 | Package to evaluate complex alignments. | # Complex Evaluate
[](https://www.python.org/downloads/)
[](LICENSE)
[](https://github.com/guihcs/complex_evaluate/actions/workflows/tests.yml)
[](https://codecov.io/gh/guihcs/complex_evaluate)
A Python library for evaluating complex ontology alignments in the [EDOAL](https://moex.gitlabpages.inria.fr/alignapi/edoal.html) (Expressive and Declarative Ontology Alignment Language) format, adapting precision, recall, and F-measure metrics to the complex matching case.
### Requirements
- Python >= 3.9
- NumPy
- SciPy
## 📦 Installation
```bash
pip install complex_evaluate
```
## 📖 Usage
### Basic Example
```python
from complex_evaluate.evaluate import evaluate_edoal
# Compare two alignment files
precision, recall, f_measure = evaluate_edoal(
'predicted_alignment.edoal',
'reference_alignment.edoal'
)
print(f"Precision: {precision:.3f}")
print(f"Recall: {recall:.3f}")
print(f"F-measure: {f_measure:.3f}")
```
### Comparing from strings
```python
from complex_evaluate.evaluate import evaluate_edoal_string
predicted = '''<?xml version="1.0" encoding="UTF-8"?>
<rdf:RDF xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#"
xmlns="http://knowledgeweb.semanticweb.org/heterogeneity/alignment#">
<Alignment>
<map>
<Cell>
<entity1>
<Class rdf:about="http://example.org#ClassA" />
</entity1>
<entity2>
<Class rdf:about="http://example.org#ClassB" />
</entity2>
</Cell>
</map>
</Alignment>
</rdf:RDF>'''
reference = predicted # Use same for identity test
p, r, f = evaluate_edoal_string(predicted, reference)
print(f"F-measure: {f}") # Should be 1.0 for identical alignments
```
## 📊 Use Cases
This metric was used in the OAEI 2025 evaluation for the [Complex Matching track](https://oaei.ontologymatching.org/2025/results/complex/index.html).
This library is also particularly useful for:
- **Ontology Alignment Evaluation**: Benchmarking alignment approaches on complex matching tasks.
- **LLM reasoning training**: The metric can enable the training of LLMs to reason about complex alignments, by providing a verifiable reward signal based on the score of the predicted alignment against a reference alignment.
## 🤝 Contributing
Contributions are welcome! Some areas for improvement:
- Additional similarity metrics.
- Performance optimizations.
- Support for other alignment formats.
- Extended documentation and examples.
## 📄 License
This project is licensed under the MIT License - see the [LICENSE](LICENSE) file for details.
## 📚 Citation
If you use this library in your research, please cite it as follows:
```bibtex
@inproceedings{DBLP:conf/esws/SousaLS25,
author = {Guilherme Henrique Santos Sousa and
Rinaldo Lima and
C{\'{a}}ssia Trojahn dos Santos},
title = {On Evaluation Metrics for Complex Matching Based on Reference Alignments},
booktitle = {{ESWC} {(1)}},
series = {Lecture Notes in Computer Science},
volume = {15718},
pages = {77--93},
publisher = {Springer},
year = {2025}
}
```
---
*Built with ❤️ for the Semantic Web and Ontology Matching community.*
| text/markdown | Guilherme Henrique | Guilherme Henrique <guihss.cs@gmail.com> | null | null | null | null | [] | [] | null | null | >=3.9 | [] | [] | [] | [] | [] | [] | [] | [
"Homepage, https://github.com/guihcs/complex_evaluate",
"Issues, https://github.com/guihcs/complex_evaluate/issues"
] | twine/6.2.0 CPython/3.12.9 | 2026-02-20T01:48:04.442621 | complex_evaluate-0.0.1.tar.gz | 4,975 | c9/1d/db0f6df453f3ba7227f156d350616a48179a1f854c7f82019ae1b1874bbe/complex_evaluate-0.0.1.tar.gz | source | sdist | null | false | 5d143e0951870f67212aca487699df1c | 3369fe26027007317dd60cad774fededc42f29c911363193dd837c324e408100 | c91ddb0f6df453f3ba7227f156d350616a48179a1f854c7f82019ae1b1874bbe | null | [] | 255 |
2.3 | omop-semantics | 0.1.10 | Add your description here | # omop_semantics
**omop_semantics** is a Python library for defining and managing **semantic conventions on top of OMOP CDM**.
It lets you describe conventions in code:
- which OMOP concepts you want on hand as named key concepts, to improve ergonomics in analytic code,
- how they are grouped,
- what roles they play,
- and provide profiles to render these targets uniformly into CDM tables.
The goal is to make these conventions **explicit, versioned, and reusable**, instead of leaving them buried in code, SQL, or documentation. They are also extensible, so you can add opinionated layers on top of the default specifications that are relevant only in a domain-specific context.
---
## Key ideas
- **Human-authored**
Semantic rules and concept groups are written in YAML and validated with schemas.
- **Portable**
No database or graph store required.
- **Versionable**
Conventions can evolve over time and be tracked in git.
- **Integrates with pipelines**
Can drive ETL logic, validation, and documentation so they stay in sync.
---
## Typical workflow
1. **Define a schema**
Describes what kinds of semantic objects and roles exist (e.g. staging, modifiers).
2. **Write YAML instances**
Lists actual OMOP concepts and groups used in your project.
3. **Load a runtime registry**
This gives you a programmatic API to query concepts, groups, and relationships.
4. **Use it in code**
For validation, cohort logic, ETL constraints, or documentation.
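The YAML-then-registry idea in steps 2–3 can be pictured with plain Python data. Every name, role, and ID below is a made-up placeholder for illustration — not omop_semantics' actual schema or API:

```python
# Hypothetical convention data as it might look once loaded from YAML.
# Concept IDs (0) and all names are placeholders, not real conventions.
CONVENTIONS = {
    "key_concepts": {
        "primary_diagnosis": {"concept_id": 0, "role": "staging"},
        "histology_modifier": {"concept_id": 0, "role": "modifier"},
    },
    "groups": {
        "staging_concepts": ["primary_diagnosis"],
    },
}

def concepts_in_group(group: str) -> list[str]:
    """Query the loaded conventions for a named concept group."""
    return CONVENTIONS["groups"].get(group, [])

print(concepts_in_group("staging_concepts"))  # ['primary_diagnosis']
```

ETL or cohort code can then refer to `concepts_in_group("staging_concepts")` instead of hard-coding concept IDs, which is the ergonomics win the library aims at.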
---
## When should you use this?
Use **omop_semantics** if you:
- have project-specific rules about which OMOP concepts are valid,
- need consistent concept groupings across ETL and analytics,
- want semantic conventions to be explicit, testable, and versioned,
- are working in domains like oncology where OMOP alone is too permissive.
| text/markdown | gkennos | gkennos <georgina.kennedy@unsw.edu.au> | null | null | null | null | [] | [] | null | null | >=3.12 | [] | [] | [] | [
"ipykernel>=7.2.0",
"linkml>=1.9.6",
"linkml-runtime>=1.9.5",
"python-dotenv>=1.2.1",
"ruamel-yaml>=0.18.17",
"typing-extensions>=4.15.0",
"ipython>=8.0; extra == \"dev\"",
"pytest>=7.0; extra == \"dev\"",
"pytest-cov>=4.0; extra == \"dev\"",
"mypy>=1.8; extra == \"dev\"",
"ruff>=0.4; extra == \"dev\"",
"rich>=13.0; extra == \"dev\"",
"mkdocs-material>=9.7.1; extra == \"docs\"",
"mkdocstrings-python>=2.0.1; extra == \"docs\"",
"mkdocs>=1.6.1; extra == \"docs\"",
"mkdocs-mermaid2-plugin; extra == \"docs\""
] | [] | [] | [] | [
"Homepage, https://australiancancerdatanetwork.github.io/OMOP_Semantics/",
"Repository, https://github.com/AustralianCancerDataNetwork/OMOP_Semantics",
"Issues, https://github.com/AustralianCancerDataNetwork/OMOP_Semantics/issues"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-20T01:48:01.216348 | omop_semantics-0.1.10.tar.gz | 40,376 | dd/4e/2b59e24a420f464b8c4075d59673f27e55f6a8756411de7ea5071879432b/omop_semantics-0.1.10.tar.gz | source | sdist | null | false | 79238cc8dbe9185a777d18ffd7951840 | c5b329244a775a31517de21aba5f4c3baebed38215f0393bf383d2de6a03180b | dd4e2b59e24a420f464b8c4075d59673f27e55f6a8756411de7ea5071879432b | null | [] | 222 |
2.4 | sfft | 1.7.1 | Image Subtraction in Fourier Space | .. image:: https://github.com/thomasvrussell/sfft/blob/master/docs/sfft_logo_gwbkg.png
*SFFT: Saccadic Fast Fourier Transform for image subtraction*
---------------------
.. image:: https://img.shields.io/pypi/v/sfft.svg
:target: https://pypi.python.org/pypi/sfft
:alt: Latest Version
.. image:: https://static.pepy.tech/personalized-badge/sfft?period=total&units=international_system&left_color=grey&right_color=orange&left_text=Downloads
:target: https://pepy.tech/project/sfft
.. image:: https://img.shields.io/badge/python-3.12-green.svg
:target: https://www.python.org/downloads/release/python-312/
.. image:: https://zenodo.org/badge/doi/10.5281/zenodo.6463000.svg
:target: https://doi.org/10.5281/zenodo.6463000
:alt: 1.0.6
.. image:: https://img.shields.io/badge/License-MIT-yellow.svg
:target: https://opensource.org/licenses/MIT
|
Saccadic Fast Fourier Transform (SFFT) is an algorithm for fast & accurate image subtraction in Fourier space.
SFFT improves computational performance by around an order of magnitude compared to other published image subtraction codes.
The SFFT method is the transient detection engine for several ongoing time-domain programs, including the `DESIRT <https://ui.adsabs.harvard.edu/abs/2022TNSAN.107....1P/abstract>`_ survey based on DECam & DESI, the DECam GW-MMADS Survey for GW follow-ups, and the JWST Cycle 3 archival program `AR 5965 <https://www.stsci.edu/jwst/science-execution/program-information?id=5965>`_. SFFT is also the core engine of the differential photometry pipeline of the `Roman Supernova PIT <https://github.com/Roman-Supernova-PIT>`_.
Get started
---------------------
- **Documentation:** https://thomasvrussell.github.io/sfft-doc/ **[recommended]**
- **Installation:** https://thomasvrussell.github.io/sfft-doc/installation/
- **Tutorials:** https://thomasvrussell.github.io/sfft-doc/tutorials/
- **Source code:** https://github.com/thomasvrussell/sfft
- **Contact the author:** astroleihu@gmail.com or leihu@sas.upenn.edu
Installation
=================
To install the latest release from PyPI, use pip: ::
pip install sfft
For more detailed instructions, see the `install guide <https://thomasvrussell.github.io/sfft-doc/installation/>`_ in the docs.
Citing
--------
*Image Subtraction in Fourier Space, Lei Hu et al. 2022, The Astrophysical Journal, 936, 157*
See ADS Link: https://ui.adsabs.harvard.edu/abs/2022ApJ...936..157H/abstract
Publications using SFFT method
--------------------------------
See ADS Library: https://ui.adsabs.harvard.edu/public-libraries/lc4tiTR_T--92f9k0YrRQg
| text/markdown | Lei Hu | leihu@sas.upenn.edu | Lei Hu | leihu@sas.upenn.edu | MIT Licence | null | [
"Development Status :: 4 - Beta",
"Environment :: Console",
"Intended Audience :: Science/Research",
"Natural Language :: English",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3 :: Only",
"Topic :: Scientific/Engineering :: Astronomy"
] | [] | null | https://github.com/thomasvrussell/sfft | >=3.5 | [] | [] | [] | [
"scipy>=1.5.2",
"astropy>=3.2.3",
"fastremap>=1.7.0",
"sep>=1.0.3",
"numba>=0.53.1",
"llvmlite>=0.36.0",
"pyfftw>=0.12.0"
] | [] | [] | [] | [] | twine/6.2.0 CPython/3.10.14 | 2026-02-20T01:47:19.809574 | sfft-1.7.1.tar.gz | 370,162 | 6d/4b/e6e380846b1de343cb5fd05ecf19395e4719aa5781ece25046590dbf4eba/sfft-1.7.1.tar.gz | source | sdist | null | false | 536b8d075f06826ee6d0d0cf3259b6e0 | c742118bd10e1a2b4052cb56bb03c02432f0d9c8039cd0f7b0e110f9d3742f3d | 6d4be6e380846b1de343cb5fd05ecf19395e4719aa5781ece25046590dbf4eba | null | [
"LICENSE"
] | 231 |
2.4 | pylsp-workspace-symbols | 0.1.0 | workspace/symbol support for python-lsp-server via Jedi | # pylsp-workspace-symbols
A [python-lsp-server](https://github.com/python-lsp/python-lsp-server) plugin that adds **workspace/symbol** support via [Jedi](https://github.com/davidhalter/jedi).
> **Why?** `pylsp` does not implement `workspace/symbol` natively. This plugin fills that gap, enabling "Go to Symbol in Workspace" in any LSP client — including [CudaText](https://cudatext.github.io/), Neovim, Emacs, and others.
---
## ✨ Features
- 🔍 **Workspace-wide symbol search** — find functions, classes, and modules across all files in the project
- ⚡ **Fast** — results in ~130ms after the first call (Jedi cache warm)
- 🔤 **Case-insensitive substring match** — `area` finds `calculate_area`, `Cal` finds `Calculator`
- 📁 **Smart folder exclusion** — automatically skips `.git`, `__pycache__`, `node_modules`, `.venv`, `dist`, `build`, and more
- ⚙️ **Configurable** — tune `max_symbols` and `ignore_folders` via pylsp settings
- 🐍 **Python 3.8+** — compatible with all modern Python versions
## 📦 Installation
```bash
pip install pylsp-workspace-symbols
```
The plugin is discovered automatically by `pylsp` via its entry point — no manual configuration needed.
## ⚙️ Configuration
Add to your LSP client's `pylsp` settings (e.g. in `settings.json` or equivalent):
```json
{
"pylsp": {
"plugins": {
"jedi_workspace_symbols": {
"enabled": true,
"max_symbols": 500,
"ignore_folders": []
}
}
}
}
```
| Option | Type | Default | Description |
|---|---|---|---|
| `enabled` | bool | `true` | Enable/disable the plugin |
| `max_symbols` | int | `500` | Maximum symbols returned. `0` means no limit |
| `ignore_folders` | list | `[]` | Extra folder names to skip (merged with built-in list) |
### Built-in ignored folders
`.git`, `.hg`, `.svn`, `__pycache__`, `.mypy_cache`, `.ruff_cache`, `.pytest_cache`,
`node_modules`, `.venv`, `venv`, `.env`, `env`, `dist`, `build`, `.eggs`, `egg-info`
## 🚀 Usage
Once installed, your LSP client will receive `workspaceSymbolProvider: true` in the server capabilities.
Use your client's "Go to Symbol in Workspace" command (typically `Ctrl+T` or `#` in the symbol picker).
### How it works
pylsp does not define a `pylsp_workspace_symbols` hookspec, so this plugin uses two hooks:
1. **`pylsp_experimental_capabilities`** — advertises `workspaceSymbolProvider: true` to the client during the `initialize` handshake.
2. **`pylsp_dispatchers`** — registers a custom JSON-RPC handler for `workspace/symbol` that calls Jedi's `project.complete_search("")` and filters results client-side by case-insensitive substring match.
> **Note:** `workspace/symbol` returns module-level definitions (functions, classes, modules).
> Local variables inside functions are not indexed — this is standard LSP behaviour,
> consistent with pyright and other Python language servers.
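The client-side filtering in step 2 amounts to a case-insensitive substring test — a minimal stdlib sketch (not the plugin's actual code):

```python
def matches(query: str, symbol_name: str) -> bool:
    """Case-insensitive substring match: 'area' finds 'calculate_area',
    'Cal' finds 'Calculator'."""
    return query.lower() in symbol_name.lower()

names = ["calculate_area", "Calculator", "main"]
hits = [n for n in names if matches("cal", n)]
print(hits)  # ['calculate_area', 'Calculator']
```

Each surviving name is then wrapped into an LSP `SymbolInformation` result by the handler.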
## 🧪 Tests
```bash
pip install -e ".[dev]"
pytest
```
## 🤝 Contributing
Issues and pull requests are welcome!
Please open an issue before submitting a large change.
## 📚 References
- [python-lsp-server](https://github.com/python-lsp/python-lsp-server)
- [Jedi](https://github.com/davidhalter/jedi)
- [LSP workspace/symbol specification](https://microsoft.github.io/language-server-protocol/specifications/lsp/3.17/specification/#workspace_symbol)
## 👤 Author
Bruno Eduardo — [github.com/Hanatarou](https://github.com/Hanatarou)
## 📄 License
MIT
| text/markdown | null | null | null | null | MIT | pylsp, python-lsp-server, lsp, jedi, workspace-symbols | [
"Development Status :: 4 - Beta",
"Intended Audience :: Developers",
"License :: OSI Approved :: MIT License",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.8",
"Programming Language :: Python :: 3.9",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Topic :: Software Development :: Libraries :: Python Modules",
"Topic :: Text Editors"
] | [] | null | null | >=3.8 | [] | [] | [] | [
"python-lsp-server>=1.7",
"jedi>=0.18",
"pytest>=7; extra == \"dev\"",
"pytest-cov; extra == \"dev\"",
"python-lsp-server[all]; extra == \"dev\""
] | [] | [] | [] | [
"Homepage, https://github.com/Hanatarou/pylsp-workspace-symbols",
"Repository, https://github.com/Hanatarou/pylsp-workspace-symbols",
"Bug Tracker, https://github.com/Hanatarou/pylsp-workspace-symbols/issues"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-20T01:46:21.963690 | pylsp_workspace_symbols-0.1.0.tar.gz | 8,119 | ed/ea/b9f100d8e6626e06f0a059d8ff11e528b02846ce6badac4baf7b7f18a0e4/pylsp_workspace_symbols-0.1.0.tar.gz | source | sdist | null | false | 2d31c3f10484a4be1f67d325ff3e977e | e2a9786ad22734351513a9d258bac88bcbd5453eea018f10996fd32a608d6e16 | edeab9f100d8e6626e06f0a059d8ff11e528b02846ce6badac4baf7b7f18a0e4 | null | [
"LICENSE"
] | 258 |
2.4 | codedoctor | 0.2.0 | Beginner-friendly codebase doctor: lint, format, type-check, security and tests. | # CodeDoctor
**CodeDoctor** is a beginner-friendly Python CLI that runs common quality checks
(linting, formatting, type checking, security scanning, and tests) against **any**
Python repository or folder you point it at, and produces a readable report.
It’s designed to be simple to run, easy to understand, and safe by default.
---
## Requirements
- Python **3.12+**
- (Recommended) `git` available on PATH for best `.gitignore` support
---
## Install
From PyPI:
```bash
python -m pip install codedoctor
```
Verify:
```bash
codedoctor --help
codedoctor scan --help
```
---
## Setup (required)
Before running scans, initialize CodeDoctor once:
```bash
codedoctor setup
```
This creates a small config file in your user profile (JSON) that stores default
behavior (e.g., whether to respect `.gitignore`, default report directory, etc.).
### CI / no-config environments
If you’re running in CI or you don’t want CodeDoctor to create a config file,
you can bypass the setup requirement with:
```bash
codedoctor scan . --assume-defaults
```
---
## Quick Start
Scan the current folder:
```bash
codedoctor scan .
```
Scan a different path:
```bash
codedoctor scan /path/to/repo
```
On Windows:
```powershell
codedoctor scan C:\path\to\repo
```
Apply safe auto-fixes + formatting:
```bash
codedoctor scan . --fix
```
Skip tests:
```bash
codedoctor scan . --skip-tests
```
---
## Commands
### `codedoctor scan`
```bash
codedoctor scan [PATH] [--fix] [--skip-tests] [--report-dir DIR] [--no-gitignore] \
[--no-update-check] [--assume-defaults]
```
#### Options
- `PATH`
Repository/folder to scan (default: `.`)
- `--fix`
Apply safe auto-fixes (Ruff `--fix`) and format with Black.
- `--skip-tests`
Skip running `pytest`.
- `--report-dir DIR`
Directory (relative to the repo) to store reports. If omitted, uses the value
from your CodeDoctor config.
- `--no-gitignore`
Disable best-effort `.gitignore` handling (useful for debugging).
- `--no-update-check`
Disable the non-blocking “update available” notice during scans.
- `--assume-defaults`
Allow scanning without running `codedoctor setup` (useful for CI).
---
### `codedoctor setup`
```bash
codedoctor setup [--force]
```
Creates or updates the user config file.
- `--force` overwrites an existing config file.
---
### `codedoctor update`
```bash
codedoctor update [--yes]
```
Checks PyPI for the latest version and offers to upgrade CodeDoctor.
- `--yes` updates without prompting.
> Note: if CodeDoctor was installed into a locked or managed Python environment,
> `codedoctor update` may fail due to permissions. In that case, update using
> your environment’s normal package management approach (venv, pipx, etc.).
---
## What gets run during a scan
CodeDoctor invokes the following tools (when installed/available):
- `ruff check .` (and optionally `ruff check . --fix`)
- `black . --check` (and optionally `black .`)
- `mypy .`
- `bandit -r .`
- `pytest -q` (unless `--skip-tests`)
CodeDoctor runs tools in the target repo by setting `cwd` to the repo path.
---
## Reports
Reports are written under the repository (default `.codedoctor/`):
- `report-latest.txt` — newest scan
- `report-prev.txt` — previous scan (rotated)
- `report-YYYYMMDD-HHMMSS.txt` — timestamped snapshot
---
## `.gitignore` behavior (best effort)
Different tools treat ignore rules differently:
- **Ruff** and **Black** already respect `.gitignore` in typical setups.
- **MyPy** and **Bandit** do not consistently honor `.gitignore` the same way.
To provide consistent behavior, CodeDoctor will **attempt** to use git’s ignore
information when scanning a git repository by running:
```bash
git ls-files -ci --exclude-standard
```
Those ignored paths are then excluded from MyPy/Bandit runs.
If any of the following are true:
- the target folder is not a git repo
- `git` is not installed
- the git command fails
…CodeDoctor falls back to excluding common junk directories like `.venv`, `.git`,
caches, `build/`, and `dist/`.
---
## Exit Codes
CodeDoctor returns an exit code that matches the overall result:
- `0` — all checks passed
- `1` — warnings (non-fatal issues)
- `2` — failures (one or more checks failed)
Missing tools are treated as failures for that check (return code `127`) so the
report remains explicit and beginner-friendly.
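The aggregation can be pictured as follows — an illustrative sketch, not CodeDoctor's actual implementation:

```python
def overall_exit_code(check_statuses):
    """Collapse per-check statuses ('pass' / 'warn' / 'fail') into the
    documented exit codes: 0 = all pass, 1 = warnings only, 2 = any failure.
    A missing tool (return code 127) counts as 'fail' for its check."""
    if "fail" in check_statuses:
        return 2
    if "warn" in check_statuses:
        return 1
    return 0

print(overall_exit_code(["pass", "warn", "pass"]))  # 1
```

The worst status wins, so CI pipelines can gate on the process exit code alone.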
---
## Development
Clone and install editable:
```bash
git clone https://github.com/BigPattyOG/CodeDoctor.git
cd CodeDoctor
python -m pip install -e .
```
Run setup:
```bash
codedoctor setup
```
Run a scan:
```bash
codedoctor scan .
```
---
## License
MIT License. See `LICENSE`. | text/markdown | null | Patrick Faint <pattymayo3@icloud.com> | null | null | null | null | [] | [] | null | null | >=3.12 | [] | [] | [] | [
"bandit>=1.7",
"black>=24.0",
"mypy>=1.8",
"pytest>=8",
"ruff>=0.9",
"bandit>=1.7.0; extra == \"dev\"",
"black>=24.0.0; extra == \"dev\"",
"mypy>=1.10.0; extra == \"dev\"",
"pre-commit>=3.7.0; extra == \"dev\"",
"pytest>=8.0.0; extra == \"dev\"",
"ruff>=0.8.0; extra == \"dev\""
] | [] | [] | [] | [] | twine/6.1.0 CPython/3.13.7 | 2026-02-20T01:46:15.018195 | codedoctor-0.2.0.tar.gz | 9,867 | c1/8c/34c793767db2a4240f722e0ece7cecb15dc4b32a59bf742309ad518948e6/codedoctor-0.2.0.tar.gz | source | sdist | null | false | 6961d4d3a4982a7c8ee913a2a5e6922f | 5503dc1a52a363f058e9b1f996107c955ba0f2fd0e50ccafa056145724930cc3 | c18c34c793767db2a4240f722e0ece7cecb15dc4b32a59bf742309ad518948e6 | MIT | [
"LICENSE"
] | 231 |
2.4 | OneBotConnecter | 0.3.14 | 基于websocket(服务器正向)连接的onebot11通用python接口 | # OneBotConnecter
This project is an unofficial Python integration of the OneBot protocol. It lets users quickly connect to a WebSocket server and send and receive messages.<br>
It was developed against the LL2 interface. In theory it should also run against other interfaces based on the OneBot-11 protocol, but since that has not actually been tested, 100% compatibility is not guaranteed.
### !!!!!!
This project itself does not include any bot interface. Please install a bot interface that supports the OneBot protocol and complete login before running this project!!!
## Project structure
The project consists of just two files, OneBot.py and MessageType.py.<br>
OneBot handles the direct server connection and message I/O.<br>
MessageType handles the construction of outgoing message packets.<br>
In other words, to inspect or modify direct server interaction or message collection, look at or edit [OneBot.py](https://github.com/Sugar51243/OneBotConnecter/blob/main/src/OneBotConnecter/OneBot.py). To inspect or modify the content or format of packets sent to the server, look at or edit [MessageType.py](https://github.com/Sugar51243/OneBotConnecter/blob/main/src/OneBotConnecter/MessageType.py).
## Usage
This project runs asynchronously on Python's asyncio; make sure the asyncio library is available.<br>
Usage is simple:<br>
1. Write the callback function to run when a message is received; its parameters are (the bot instance `bot`, the message packet `message`)<br>
2. Create a OneBot object via this library and fill in the bot's basic information: (server address, admin id, bot nickname)<br>
3. Call the object's run function with the callback from step 1 as its argument to connect and start listening for server pushes<br>
See this project's [example file](https://github.com/Sugar51243/OneBotConnecter/blob/main/test/main.py); I think it already makes things quite clear.
## Installation
`pip install OneBotConnecter`
| text/markdown | null | Sugar51243 <1733682365@qq.com> | null | null | MIT License
Copyright (c) 2025 Sugar51243
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.
| null | [
"Programming Language :: Python :: 3",
"Operating System :: OS Independent"
] | [] | null | null | >=3.9 | [] | [] | [] | [] | [] | [] | [] | [
"Homepage, https://github.com/Sugar51243/OneBotConnecter",
"Issues, https://github.com/Sugar51243/OneBotConnecter/issues"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-20T01:45:30.600017 | onebotconnecter-0.3.14.tar.gz | 12,323 | 97/36/cfdaf2c6010e98f33068f9bbf14bc46f9cef7c9b79683f54fb88d64f2cfb/onebotconnecter-0.3.14.tar.gz | source | sdist | null | false | 0bd2e38680c319d43471146c6f7b73ec | 284186c5f656d6570659ca78725078c4b1a38111806b877530e4e87196bb7c7e | 9736cfdaf2c6010e98f33068f9bbf14bc46f9cef7c9b79683f54fb88d64f2cfb | null | [
"LICENSE"
] | 0 |
2.4 | keepercommander | 17.2.8 | Keeper Commander for Python 3 | 

### About Keeper Commander
Keeper Commander is a command-line and terminal UI interface to Keeper® Password Manager and KeeperPAM. Commander can be used to access and control your Keeper vault, perform administrative actions (managing users, teams, roles, SSO, privileged access resources, device approvals, data import/export), launch sessions, rotate passwords, integrate with developer tools, eliminate hardcoded passwords, run as a REST service and more. Keeper Commander is an open source project with contributions from Keeper's engineering team, customers and partners.
### Windows and macOS Binaries
See the [Releases](https://github.com/Keeper-Security/Commander/releases)
### Linux / Python using PIP
```
python3 -m venv keeper-env
source keeper-env/bin/activate
pip install keepercommander
```
### Running from Source
```
git clone https://github.com/Keeper-Security/Commander
cd Commander
python3 -m venv venv
source venv/bin/activate
pip install -r requirements.txt
pip install -e .
pip install -e '.[email]'
```
### Starting Commander
For a list of all available commands:
```
keeper help
```
To launch the interactive command shell:
```
keeper shell
```
Or, for the full terminal vault user interface:
```
keeper supershell
```
Once logged in, check out the `this-device` command to set up persistent login sessions, logout timer and 2FA frequency. Also check out the `biometric register` command to enable biometric authentication on supported platforms.
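If you want to drive Commander from Python scripts, a thin subprocess wrapper around the CLI is often enough. This is an illustrative sketch only (not part of Commander; the Commander SDK linked in the documentation is the official programmatic interface):

```python
import subprocess

def run_cli(binary: str, *args: str) -> str:
    """Run a CLI command and return its stdout, raising on a non-zero exit."""
    result = subprocess.run(
        [binary, *args], capture_output=True, text=True, check=True
    )
    return result.stdout

# Example (assumes `keeper` is on PATH): output = run_cli("keeper", "help")
```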
### Documentation
- [Commander Documentation Home](https://docs.keeper.io/en/keeperpam/commander-cli/overview)
- [Installation](https://docs.keeper.io/en/keeperpam/commander-cli/commander-installation-setup)
- [Full Command Reference](https://docs.keeper.io/en/keeperpam/commander-cli/command-reference)
- [Service Mode REST API](https://docs.keeper.io/en/keeperpam/commander-cli/service-mode-rest-api)
- [Commander SDK](https://docs.keeper.io/en/keeperpam/commander-sdk/keeper-commander-sdks)
- [All Keeper Documentation](https://docs.keeper.io/)
### About Keeper Security
Keeper Security is the creator of KeeperPAM - the zero-trust and zero-knowledge privileged access management ("PAM") platform for securing and managing access to your critical infrastructure.
- [Keeper Security Homepage](https://keepersecurity.com)
- [Privileged Access Management](https://www.keepersecurity.com/privileged-access-management/)
- [Endpoint Privilege Manager](https://www.keepersecurity.com/endpoint-privilege-management/)
- [Encryption and Security Model](https://docs.keeper.io/en/enterprise-guide/keeper-encryption-model)
- [Downloads](https://www.keepersecurity.com/download.html?t=d)
| text/markdown | Craig Lurey | craig@keepersecurity.com | null | null | MIT | security, password | [
"Environment :: Console",
"Development Status :: 4 - Beta",
"License :: OSI Approved :: MIT License",
"Operating System :: OS Independent",
"Programming Language :: Python :: 3 :: Only",
"Programming Language :: Python :: 3.7",
"Topic :: Security"
] | [] | https://keepersecurity.com/ | https://github.com/Keeper-Security/Commander/releases | >=3.7 | [] | [] | [] | [
"asciitree",
"bcrypt",
"colorama",
"cryptography>=41.0.0",
"fido2>=2.0.0; python_version >= \"3.10\"",
"flask; python_version >= \"3.8\"",
"flask-limiter; python_version >= \"3.8\"",
"keeper-secrets-manager-core>=16.6.0",
"prompt_toolkit",
"protobuf>=4.23.0",
"psutil; python_version >= \"3.8\"",
"pycryptodomex>=3.20.0",
"pyngrok; python_version >= \"3.8\"",
"pyperclip",
"python-dotenv",
"requests>=2.31.0",
"tabulate",
"websockets",
"keeper_pam_webrtc_rs>=1.2.1; python_version >= \"3.8\"",
"pydantic>=2.6.4; python_version >= \"3.8\"",
"fpdf2>=2.8.3",
"cbor2; sys_platform == \"darwin\" and python_version >= \"3.10\"",
"pyobjc-framework-LocalAuthentication; sys_platform == \"darwin\" and python_version >= \"3.10\"",
"winrt-runtime; sys_platform == \"win32\" and python_version >= \"3.10\"",
"winrt-Windows.Foundation; sys_platform == \"win32\" and python_version >= \"3.10\"",
"winrt-Windows.Security.Credentials.UI; sys_platform == \"win32\" and python_version >= \"3.10\"",
"keeper-mlkem; python_version >= \"3.11\"",
"textual; python_version >= \"3.9\"",
"pytest; extra == \"test\"",
"testfixtures; extra == \"test\"",
"sendgrid>=6.10.0; extra == \"email-sendgrid\"",
"boto3>=1.26.0; extra == \"email-ses\"",
"google-auth>=2.16.0; extra == \"email-gmail-oauth\"",
"google-auth-oauthlib>=0.8.0; extra == \"email-gmail-oauth\"",
"google-auth-httplib2>=0.1.0; extra == \"email-gmail-oauth\"",
"google-api-python-client>=2.70.0; extra == \"email-gmail-oauth\"",
"msal>=1.20.0; extra == \"email-microsoft-oauth\"",
"sendgrid>=6.10.0; extra == \"email\"",
"boto3>=1.26.0; extra == \"email\"",
"google-auth>=2.16.0; extra == \"email\"",
"google-auth-oauthlib>=0.8.0; extra == \"email\"",
"google-auth-httplib2>=0.1.0; extra == \"email\"",
"google-api-python-client>=2.70.0; extra == \"email\"",
"msal>=1.20.0; extra == \"email\""
] | [] | [] | [] | [] | twine/6.2.0 CPython/3.12.12 | 2026-02-20T01:44:17.625093 | keepercommander-17.2.8.tar.gz | 2,281,496 | 76/2d/b7c07714770360d801071a50e60cc01fa41b7d5313e0a4f347f58f8e577f/keepercommander-17.2.8.tar.gz | source | sdist | null | false | df38b79b2b04587e57914e992a958257 | 5ef70b1354d14e35b7ce4edc3654263893bc9dff314639898299e2a74590d936 | 762db7c07714770360d801071a50e60cc01fa41b7d5313e0a4f347f58f8e577f | null | [
"LICENSE"
] | 1,477 |
2.4 | davidkhala.ai.models | 0.0.0 | GenAI models | # davidkhala.ai.models | text/markdown | null | null | null | null | null | null | [] | [] | null | null | >=3.12 | [] | [] | [] | [
"davidkhala-utils",
"pydantic",
"davidkhala-utils[http-request]; extra == \"api\"",
"voyageai; extra == \"atlas\"",
"anthropic; extra == \"minimax\"",
"openai; extra == \"minimax\"",
"davidkhala-ml-ocr; extra == \"ocr\"",
"openrouter; extra == \"openrouter\""
] | [] | [] | [] | [] | uv/0.9.7 | 2026-02-20T01:44:15.096682 | davidkhala_ai_models-0.0.0.tar.gz | 5,206 | 9c/f8/ae89e6a8ab6987cb92f7f1627e23dbc7e3e06adbfbff3d98fb11c976b5cb/davidkhala_ai_models-0.0.0.tar.gz | source | sdist | null | false | 24f26f1f0a7e9221af4ce0dae819b4f2 | e6f0778509b465493eeaea06e525e3cfc942093baca554667f57ef2bafaa2ad1 | 9cf8ae89e6a8ab6987cb92f7f1627e23dbc7e3e06adbfbff3d98fb11c976b5cb | null | [] | 0 |
2.4 | operator-agent | 0.4.0 | Personal AI agent that bridges Telegram to CLI agents (Claude, Codex, Gemini) running on your machine. | # Operator
Personal AI agent that bridges Telegram (more to come) to CLI agents running on your server. Send a message, get a response from Claude, Codex, or Gemini — with live status updates as they work. Use your subscription, not an API key.
## Quick Start
```bash
# Install
pip install "operator-agent[telegram]"
# Setup (creates bot, links your account, installs service)
operator setup
# Or run manually
operator serve
```
## How It Works
Operator runs as a background service on your server. When you send a Telegram message:
1. Your message is routed to the active CLI agent (Claude, Codex, or Gemini)
2. The agent runs in your working directory with full access to your files
3. Live status updates show what the agent is doing (reading files, running commands, etc.)
4. The final response is sent back to you in Telegram
## Setup
### Prerequisites
- Python 3.10+
- At least one CLI agent: [`claude`](https://docs.anthropic.com/en/docs/claude-code), [`codex`](https://github.com/openai/codex), or [`gemini`](https://github.com/google-gemini/gemini-cli)
- A Telegram account
### Recommended: use pyenv + virtualenv
We recommend using [pyenv](https://github.com/pyenv/pyenv) to manage Python versions and installing Operator in a virtualenv to avoid conflicts with system packages.
```bash
# Install Python 3.13 via pyenv (if needed)
pyenv install 3.13
pyenv shell 3.13
# Create and activate a virtualenv
python -m venv ~/.operator-venv
source ~/.operator-venv/bin/activate
# Install Operator
pip install "operator-agent[telegram]"
```
### Running `operator setup`
The setup wizard walks you through:
1. **Provider detection** — checks which CLI agents are on your PATH
2. **Telegram bot creation** — guides you through @BotFather to create a bot, validates your token, and auto-captures your user ID when you send your first message
3. **Working directory** — where agents run commands and create files
4. **Background service** — optionally installs a service that starts on boot (launchd on macOS, systemd on Linux)
Config is saved to `~/.operator/config.json`.
## Commands
Send these in your Telegram chat with the bot:
| Command | Description |
|---------|-------------|
| `!status` | Show active provider and model |
| `!use claude\|codex\|gemini` | Switch provider |
| `!claude` / `!codex` / `!gemini` | Shortcut for `!use` |
| `!models` | List available models |
| `!model <name\|index>` | Switch model |
| `!stop` | Kill running process |
| `!clear` | Clear current provider session |
| `!clear all` | Clear all sessions |
| `!restart` | Restart the service |
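Under the hood, a bang-command like the ones above only needs to be split into a name and an optional argument. A minimal parser sketch (hypothetical; Operator's actual parsing may differ):

```python
def parse_command(text: str):
    """Parse a '!command arg' chat message into (name, arg); None if not a command."""
    if not text.startswith("!"):
        return None
    name, _, arg = text[1:].partition(" ")
    return name.lower(), (arg.strip() or None)
```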
## Configuration
### `~/.operator/config.json`
```json
{
"working_dir": "/home/you/projects",
"telegram": {
"bot_token": "123456:ABC-DEF...",
"allowed_user_ids": [your_telegram_id]
},
"providers": {
"claude": {
"path": "claude",
"models": ["opus", "sonnet", "haiku"]
},
"codex": {
"path": "codex",
"models": ["gpt-5.3-codex"]
},
"gemini": {
"path": "gemini",
"models": ["gemini-2.5-pro", "gemini-2.5-flash"]
}
}
}
```
- **working_dir** — where agents execute commands and create files
- **bot_token** — from @BotFather
- **allowed_user_ids** — Telegram user IDs that can use the bot (empty = allow all)
- **providers.*.path** — CLI binary name or full path
- **providers.*.models** — available models for each provider
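A quick way to sanity-check a config file against the fields above is a small loader. This sketch assumes only the documented top-level keys (`working_dir`, `telegram`, `providers`); Operator itself performs its own validation:

```python
import json
from pathlib import Path

REQUIRED_KEYS = {"working_dir", "telegram", "providers"}

def load_config(path: str) -> dict:
    """Load config.json and verify the documented top-level keys are present."""
    cfg = json.loads(Path(path).read_text())
    missing = REQUIRED_KEYS - cfg.keys()
    if missing:
        raise ValueError(f"config missing keys: {sorted(missing)}")
    return cfg
```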
### `~/.operator/state.json`
Runtime state managed automatically. Contains active provider/model per chat and session IDs for conversation continuity.
## Running as a Service
`operator setup` can install a background service. Management commands are shown at the end of setup.
### macOS (launchd)
```bash
# Start / Stop
launchctl load ~/Library/LaunchAgents/com.operator.agent.plist
launchctl unload ~/Library/LaunchAgents/com.operator.agent.plist
# Logs
tail -f ~/.operator/operator.log
```
### Linux (systemd)
```bash
# Start / Stop / Restart
systemctl --user start operator
systemctl --user stop operator
systemctl --user restart operator
# Status & Logs
systemctl --user status operator
journalctl --user -u operator -f
```
## Development
```bash
# Install with dev deps
pip install -e ".[telegram,dev]"
# Lint
ruff check src/
# Integration tests (requires CLI agents installed)
python tests/test_integration.py
```
## Versioning & Releases
This project uses [semver](https://semver.org/) (`MAJOR.MINOR.PATCH`). While pre-1.0, minor bumps may include breaking changes.
The version lives in `pyproject.toml` → `project.version` and is read at runtime via `importlib.metadata`.
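The runtime lookup is a one-liner with `importlib.metadata`; a fallback guards fresh source checkouts where the distribution is not installed yet. A sketch, not necessarily the project's exact code:

```python
from importlib.metadata import PackageNotFoundError, version

def get_version(dist_name: str = "operator-agent") -> str:
    """Return the installed distribution's version, or a dev placeholder."""
    try:
        return version(dist_name)
    except PackageNotFoundError:
        return "0.0.0+dev"
```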
### Release flow
```bash
# 1. Bump version in pyproject.toml
# 2. Update CHANGELOG.md
# 3. Commit and tag
git commit -m "release: v0.x.y"
git tag v0.x.y
# 4. Build and publish
python -m build
twine upload dist/*
# 5. Push
git push && git push --tags
```
| text/markdown | null | Gavin Vickery <gavin@geekforbrains.com> | null | null | null | ai, agent, telegram, claude, codex, gemini, cli | [
"Development Status :: 3 - Alpha",
"Intended Audience :: Developers",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: 3.13",
"Topic :: Communications :: Chat",
"Topic :: Software Development :: Libraries"
] | [] | null | null | >=3.10 | [] | [] | [] | [
"typer>=0.12",
"rich>=13.0",
"python-telegram-bot>=22.6; extra == \"telegram\"",
"ruff>=0.15.0; extra == \"dev\""
] | [] | [] | [] | [
"Homepage, https://github.com/geekforbrains/operator",
"Repository, https://github.com/geekforbrains/operator",
"Changelog, https://github.com/geekforbrains/operator/blob/main/CHANGELOG.md",
"Issues, https://github.com/geekforbrains/operator/issues"
] | twine/6.2.0 CPython/3.13.12 | 2026-02-20T01:43:48.187155 | operator_agent-0.4.0.tar.gz | 25,247 | 2b/05/ee11477e0931e1ed7938989f89925002126bef1f6a025e2ef18529f3f4e5/operator_agent-0.4.0.tar.gz | source | sdist | null | false | 263732ab99de85c885ed03c7b1bd7e38 | a5a6484499161fcc6097ff893b2b598f8862857ea929b39653ad16c7d0149e1c | 2b05ee11477e0931e1ed7938989f89925002126bef1f6a025e2ef18529f3f4e5 | MIT | [
"LICENSE"
] | 236 |
2.4 | banabot-ai | 0.3.0 | A lightweight personal AI assistant framework | <div align="center">
<img src="img/banabot-logo.png" alt="banabot" width="500">
<h1>Banabot: Ultra-Lightweight Personal AI Assistant</h1>
</div>
🍌 **banabot** is an **ultra-lightweight** personal AI assistant — a fork of [nanobot](https://github.com/HKUDS/nanobot)
⚡️ Delivers core agent functionality in just **~4,000** lines of code — **99% smaller** than Clawdbot's 430k+ lines.
📏 Real-time line count: **3,761 lines** (run `bash core_agent_lines.sh` to verify anytime)
## 📢 News
- **2026-02-19** 🍌 **banabot v0.2.0** released! Fork of nanobot with multi-provider web search and complete rebranding.
- **2026-02-19** 🔍 Multi-provider web search: DuckDuckGo (free, no API key), Brave, Tavily, Serper, SearXNG.
- **2026-02-19** 🎨 Complete rebranding: new logo 🍌, CLI command `banabot`, config path `~/.banabot`.
<details>
<summary>Historical news (from nanobot)</summary>
- **2026-02-17** 🎉 Released **v0.1.4** — MCP support, progress streaming, new providers. See [nanobot releases](https://github.com/HKUDS/nanobot/releases).
- **2026-02-14** 🔌 MCP support added! See [MCP section](#mcp-model-context-protocol) for details.
- **2026-02-09** 💬 Added Slack, Email, and QQ support.
- **2026-02-02** 🎉 nanobot officially launched!
</details>
## Key Features of banabot:
🪶 **Ultra-Lightweight**: Just ~4,000 lines of core agent code — 99% smaller than Clawdbot.
🔬 **Research-Ready**: Clean, readable code that's easy to understand, modify, and extend for research.
⚡️ **Lightning Fast**: Minimal footprint means faster startup, lower resource usage, and quicker iterations.
💎 **Easy-to-Use**: One-click to deploy and you're ready to go.
## 🏗️ Architecture
<p align="center">
<img src="banabot_arch.png" alt="banabot architecture" width="800">
</p>
## ✨ Features
<table align="center">
<tr align="center">
<th><p align="center">📈 24/7 Real-Time Market Analysis</p></th>
<th><p align="center">🚀 Full-Stack Software Engineer</p></th>
<th><p align="center">📅 Smart Daily Routine Manager</p></th>
<th><p align="center">📚 Personal Knowledge Assistant</p></th>
</tr>
<tr>
<td align="center"><p align="center"><img src="case/search.gif" width="180" height="400"></p></td>
<td align="center"><p align="center"><img src="case/code.gif" width="180" height="400"></p></td>
<td align="center"><p align="center"><img src="case/scedule.gif" width="180" height="400"></p></td>
<td align="center"><p align="center"><img src="case/memory.gif" width="180" height="400"></p></td>
</tr>
<tr>
<td align="center">Discovery • Insights • Trends</td>
<td align="center">Develop • Deploy • Scale</td>
<td align="center">Schedule • Automate • Organize</td>
<td align="center">Learn • Memory • Reasoning</td>
</tr>
</table>
## 📦 Install
**Install from source** (latest features, recommended for development)
```bash
git clone https://github.com/Mrbanano/banabot.git
cd banabot
pip install -e .
```
**Install with [uv](https://github.com/astral-sh/uv)** (stable, fast)
```bash
uv tool install banabot-ai
```
**Install from PyPI** (stable)
```bash
pip install banabot-ai
```
## 🚀 Quick Start
> [!TIP]
> Set your API key in `~/.banabot/config.json`.
> Get API keys: [OpenRouter](https://openrouter.ai/keys) (Global) · Web search works out-of-the-box with DuckDuckGo (free)
**1. Initialize**
```bash
banabot onboard
```
**2. Configure** (`~/.banabot/config.json`)
Add or merge these **two parts** into your config (other options have defaults).
*Set your API key* (e.g. OpenRouter, recommended for global users):
```json
{
"providers": {
"openrouter": {
"apiKey": "sk-or-v1-xxx"
}
}
}
```
*Set your model*:
```json
{
"agents": {
"defaults": {
"model": "anthropic/claude-opus-4-5"
}
}
}
```
**3. Chat**
```bash
banabot agent
```
That's it! You have a working AI assistant in 2 minutes.
## 💬 Chat Apps
Connect banabot to your favorite chat platform.
| Channel | What you need |
|---------|---------------|
| **Telegram** | Bot token from @BotFather |
| **Discord** | Bot token + Message Content intent |
| **WhatsApp** | QR code scan |
| **Feishu** | App ID + App Secret |
| **Mochat** | Claw token (auto-setup available) |
| **DingTalk** | App Key + App Secret |
| **Slack** | Bot token + App-Level token |
| **Email** | IMAP/SMTP credentials |
| **QQ** | App ID + App Secret |
<details>
<summary><b>Telegram</b> (Recommended)</summary>
**1. Create a bot**
- Open Telegram, search `@BotFather`
- Send `/newbot`, follow prompts
- Copy the token
**2. Configure**
```json
{
"channels": {
"telegram": {
"enabled": true,
"token": "YOUR_BOT_TOKEN",
"allowFrom": ["YOUR_USER_ID"]
}
}
}
```
> You can find your **User ID** in Telegram settings. It is shown as `@yourUserId`.
> Copy this value **without the `@` symbol** and paste it into the config file.
**3. Run**
```bash
banabot gateway
```
</details>
<details>
<summary><b>Mochat (Claw IM)</b></summary>
Uses **Socket.IO WebSocket** by default, with HTTP polling fallback.
**1. Ask banabot to set up Mochat for you**
Simply send this message to banabot (replace `xxx@xxx` with your real email):
```
Read https://raw.githubusercontent.com/HKUDS/MoChat/refs/heads/main/skills/nanobot/skill.md and register on MoChat. My Email account is xxx@xxx Bind me as your owner and DM me on MoChat.
```
banabot will automatically register, configure `~/.banabot/config.json`, and connect to Mochat.
**2. Restart gateway**
```bash
banabot gateway
```
That's it — banabot handles the rest!
<br>
<details>
<summary>Manual configuration (advanced)</summary>
If you prefer to configure manually, add the following to `~/.banabot/config.json`:
> Keep `claw_token` private. It should only be sent in `X-Claw-Token` header to your Mochat API endpoint.
```json
{
"channels": {
"mochat": {
"enabled": true,
"base_url": "https://mochat.io",
"socket_url": "https://mochat.io",
"socket_path": "/socket.io",
"claw_token": "claw_xxx",
"agent_user_id": "6982abcdef",
"sessions": ["*"],
"panels": ["*"],
"reply_delay_mode": "non-mention",
"reply_delay_ms": 120000
}
}
}
```
</details>
</details>
<details>
<summary><b>Discord</b></summary>
**1. Create a bot**
- Go to https://discord.com/developers/applications
- Create an application → Bot → Add Bot
- Copy the bot token
**2. Enable intents**
- In the Bot settings, enable **MESSAGE CONTENT INTENT**
- (Optional) Enable **SERVER MEMBERS INTENT** if you plan to use allow lists based on member data
**3. Get your User ID**
- Discord Settings → Advanced → enable **Developer Mode**
- Right-click your avatar → **Copy User ID**
**4. Configure**
```json
{
"channels": {
"discord": {
"enabled": true,
"token": "YOUR_BOT_TOKEN",
"allowFrom": ["YOUR_USER_ID"]
}
}
}
```
**5. Invite the bot**
- OAuth2 → URL Generator
- Scopes: `bot`
- Bot Permissions: `Send Messages`, `Read Message History`
- Open the generated invite URL and add the bot to your server
**6. Run**
```bash
banabot gateway
```
</details>
<details>
<summary><b>WhatsApp</b></summary>
Requires **Node.js ≥18**.
**1. Link device**
```bash
banabot channels login
# Scan QR with WhatsApp → Settings → Linked Devices
```
**2. Configure**
```json
{
"channels": {
"whatsapp": {
"enabled": true,
"allowFrom": ["+1234567890"]
}
}
}
```
**3. Run** (two terminals)
```bash
# Terminal 1
banabot channels login
# Terminal 2
banabot gateway
```
</details>
<details>
<summary><b>Feishu (飞书)</b></summary>
Uses **WebSocket** long connection — no public IP required.
**1. Create a Feishu bot**
- Visit [Feishu Open Platform](https://open.feishu.cn/app)
- Create a new app → Enable **Bot** capability
- **Permissions**: Add `im:message` (send messages)
- **Events**: Add `im.message.receive_v1` (receive messages)
- Select **Long Connection** mode (requires running banabot first to establish connection)
- Get **App ID** and **App Secret** from "Credentials & Basic Info"
- Publish the app
**2. Configure**
```json
{
"channels": {
"feishu": {
"enabled": true,
"appId": "cli_xxx",
"appSecret": "xxx",
"encryptKey": "",
"verificationToken": "",
"allowFrom": []
}
}
}
```
> `encryptKey` and `verificationToken` are optional for Long Connection mode.
> `allowFrom`: Leave empty to allow all users, or add `["ou_xxx"]` to restrict access.
**3. Run**
```bash
banabot gateway
```
> [!TIP]
> Feishu uses WebSocket to receive messages — no webhook or public IP needed!
</details>
<details>
<summary><b>QQ (QQ单聊)</b></summary>
Uses **botpy SDK** with WebSocket — no public IP required. Currently supports **private messages only**.
**1. Register & create bot**
- Visit [QQ Open Platform](https://q.qq.com) → Register as a developer (personal or enterprise)
- Create a new bot application
- Go to **开发设置 (Developer Settings)** → copy **AppID** and **AppSecret**
**2. Set up sandbox for testing**
- In the bot management console, find **沙箱配置 (Sandbox Config)**
- Under **在消息列表配置 (message list config)**, click **添加成员 (add member)** and add your own QQ number
- Once added, scan the bot's QR code with mobile QQ → open the bot profile → tap **发消息 (send message)** to start chatting
**3. Configure**
> - `allowFrom`: Leave empty for public access, or add user openids to restrict. You can find openids in the banabot logs when a user messages the bot.
> - For production: submit the bot for review in the console and publish it. See [QQ Bot Docs](https://bot.q.qq.com/wiki/) for the full publishing flow.
```json
{
"channels": {
"qq": {
"enabled": true,
"appId": "YOUR_APP_ID",
"secret": "YOUR_APP_SECRET",
"allowFrom": []
}
}
}
```
**4. Run**
```bash
banabot gateway
```
Now send a message to the bot from QQ — it should respond!
</details>
<details>
<summary><b>DingTalk (钉钉)</b></summary>
Uses **Stream Mode** — no public IP required.
**1. Create a DingTalk bot**
- Visit [DingTalk Open Platform](https://open-dev.dingtalk.com/)
- Create a new app → Add **Robot** capability
- **Configuration**:
- Toggle **Stream Mode** ON
- **Permissions**: Add necessary permissions for sending messages
- Get **AppKey** (Client ID) and **AppSecret** (Client Secret) from "Credentials"
- Publish the app
**2. Configure**
```json
{
"channels": {
"dingtalk": {
"enabled": true,
"clientId": "YOUR_APP_KEY",
"clientSecret": "YOUR_APP_SECRET",
"allowFrom": []
}
}
}
```
> `allowFrom`: Leave empty to allow all users, or add `["staffId"]` to restrict access.
**3. Run**
```bash
banabot gateway
```
</details>
<details>
<summary><b>Slack</b></summary>
Uses **Socket Mode** — no public URL required.
**1. Create a Slack app**
- Go to [Slack API](https://api.slack.com/apps) → **Create New App** → "From scratch"
- Pick a name and select your workspace
**2. Configure the app**
- **Socket Mode**: Toggle ON → Generate an **App-Level Token** with `connections:write` scope → copy it (`xapp-...`)
- **OAuth & Permissions**: Add bot scopes: `chat:write`, `reactions:write`, `app_mentions:read`
- **Event Subscriptions**: Toggle ON → Subscribe to bot events: `message.im`, `message.channels`, `app_mention` → Save Changes
- **App Home**: Scroll to **Show Tabs** → Enable **Messages Tab** → Check **"Allow users to send Slash commands and messages from the messages tab"**
- **Install App**: Click **Install to Workspace** → Authorize → copy the **Bot Token** (`xoxb-...`)
**3. Configure banabot**
```json
{
"channels": {
"slack": {
"enabled": true,
"botToken": "xoxb-...",
"appToken": "xapp-...",
"groupPolicy": "mention"
}
}
}
```
**4. Run**
```bash
banabot gateway
```
DM the bot directly or @mention it in a channel — it should respond!
> [!TIP]
> - `groupPolicy`: `"mention"` (default — respond only when @mentioned), `"open"` (respond to all channel messages), or `"allowlist"` (restrict to specific channels).
> - DM policy defaults to open. Set `"dm": {"enabled": false}` to disable DMs.
</details>
<details>
<summary><b>Email</b></summary>
Give banabot its own email account. It polls **IMAP** for incoming mail and replies via **SMTP** — like a personal email assistant.
**1. Get credentials (Gmail example)**
- Create a dedicated Gmail account for your bot (e.g. `my-nanobot@gmail.com`)
- Enable 2-Step Verification → Create an [App Password](https://myaccount.google.com/apppasswords)
- Use this app password for both IMAP and SMTP
**2. Configure**
> - `consentGranted` must be `true` to allow mailbox access. This is a safety gate — set `false` to fully disable.
> - `allowFrom`: Leave empty to accept emails from anyone, or restrict to specific senders.
> - `smtpUseTls` and `smtpUseSsl` default to `true` / `false` respectively, which is correct for Gmail (port 587 + STARTTLS). No need to set them explicitly.
> - Set `"autoReplyEnabled": false` if you only want to read/analyze emails without sending automatic replies.
```json
{
"channels": {
"email": {
"enabled": true,
"consentGranted": true,
"imapHost": "imap.gmail.com",
"imapPort": 993,
"imapUsername": "my-nanobot@gmail.com",
"imapPassword": "your-app-password",
"smtpHost": "smtp.gmail.com",
"smtpPort": 587,
"smtpUsername": "my-nanobot@gmail.com",
"smtpPassword": "your-app-password",
"fromAddress": "my-nanobot@gmail.com",
"allowFrom": ["your-real-email@gmail.com"]
}
}
}
```
**3. Run**
```bash
banabot gateway
```
</details>
## 🌐 Agent Social Network
🍌 banabot is capable of linking to the agent social network (agent community). **Just send one message and your banabot joins automatically!**
| Platform | How to Join (send this message to your bot) |
|----------|-------------|
| [**Moltbook**](https://www.moltbook.com/) | `Read https://moltbook.com/skill.md and follow the instructions to join Moltbook` |
| [**ClawdChat**](https://clawdchat.ai/) | `Read https://clawdchat.ai/skill.md and follow the instructions to join ClawdChat` |
Simply send the command above to your banabot (via CLI or any chat channel), and it will handle the rest.
## ⚙️ Configuration
Config file: `~/.banabot/config.json`
### Providers
> [!TIP]
> - **Groq** provides free voice transcription via Whisper. If configured, Telegram voice messages will be automatically transcribed.
> - **Zhipu Coding Plan**: If you're on Zhipu's coding plan, set `"apiBase": "https://open.bigmodel.cn/api/coding/paas/v4"` in your zhipu provider config.
> - **MiniMax (Mainland China)**: If your API key is from MiniMax's mainland China platform (minimaxi.com), set `"apiBase": "https://api.minimaxi.com/v1"` in your minimax provider config.
| Provider | Purpose | Get API Key |
|----------|---------|-------------|
| `custom` | Any OpenAI-compatible endpoint (direct, no LiteLLM) | — |
| `openrouter` | LLM (recommended, access to all models) | [openrouter.ai](https://openrouter.ai) |
| `anthropic` | LLM (Claude direct) | [console.anthropic.com](https://console.anthropic.com) |
| `openai` | LLM (GPT direct) | [platform.openai.com](https://platform.openai.com) |
| `deepseek` | LLM (DeepSeek direct) | [platform.deepseek.com](https://platform.deepseek.com) |
| `groq` | LLM + **Voice transcription** (Whisper) | [console.groq.com](https://console.groq.com) |
| `gemini` | LLM (Gemini direct) | [aistudio.google.com](https://aistudio.google.com) |
| `minimax` | LLM (MiniMax direct) | [platform.minimax.io](https://platform.minimax.io) |
| `aihubmix` | LLM (API gateway, access to all models) | [aihubmix.com](https://aihubmix.com) |
| `siliconflow` | LLM (SiliconFlow/硅基流动, API gateway) | [siliconflow.cn](https://siliconflow.cn) |
| `dashscope` | LLM (Qwen) | [dashscope.console.aliyun.com](https://dashscope.console.aliyun.com) |
| `moonshot` | LLM (Moonshot/Kimi) | [platform.moonshot.cn](https://platform.moonshot.cn) |
| `zhipu` | LLM (Zhipu GLM) | [open.bigmodel.cn](https://open.bigmodel.cn) |
| `vllm` | LLM (local, any OpenAI-compatible server) | — |
| `openai_codex` | LLM (Codex, OAuth) | `banabot provider login openai-codex` |
| `github_copilot` | LLM (GitHub Copilot, OAuth) | `banabot provider login github-copilot` |
<details>
<summary><b>OpenAI Codex (OAuth)</b></summary>
Codex uses OAuth instead of API keys. Requires a ChatGPT Plus or Pro account.
**1. Login:**
```bash
banabot provider login openai-codex
```
**2. Set model** (merge into `~/.banabot/config.json`):
```json
{
"agents": {
"defaults": {
"model": "openai-codex/gpt-5.1-codex"
}
}
}
```
**3. Chat:**
```bash
banabot agent -m "Hello!"
```
> Docker users: use `docker run -it` for interactive OAuth login.
</details>
<details>
<summary><b>Custom Provider (Any OpenAI-compatible API)</b></summary>
Connects directly to any OpenAI-compatible endpoint — LM Studio, llama.cpp, Together AI, Fireworks, Azure OpenAI, or any self-hosted server. Bypasses LiteLLM; model name is passed as-is.
```json
{
"providers": {
"custom": {
"apiKey": "your-api-key",
"apiBase": "https://api.your-provider.com/v1"
}
},
"agents": {
"defaults": {
"model": "your-model-name"
}
}
}
```
> For local servers that don't require a key, set `apiKey` to any non-empty string (e.g. `"no-key"`).
</details>
<details>
<summary><b>vLLM (local / OpenAI-compatible)</b></summary>
Run your own model with vLLM or any OpenAI-compatible server, then add to config:
**1. Start the server** (example):
```bash
vllm serve meta-llama/Llama-3.1-8B-Instruct --port 8000
```
**2. Add to config** (partial — merge into `~/.banabot/config.json`):
*Provider (key can be any non-empty string for local):*
```json
{
"providers": {
"vllm": {
"apiKey": "dummy",
"apiBase": "http://localhost:8000/v1"
}
}
}
```
*Model:*
```json
{
"agents": {
"defaults": {
"model": "meta-llama/Llama-3.1-8B-Instruct"
}
}
}
```
</details>
<details>
<summary><b>Adding a New Provider (Developer Guide)</b></summary>
banabot uses a **Provider Registry** (`banabot/providers/registry.py`) as the single source of truth.
Adding a new provider only takes **2 steps** — no if-elif chains to touch.
**Step 1.** Add a `ProviderSpec` entry to `PROVIDERS` in `banabot/providers/registry.py`:
```python
ProviderSpec(
name="myprovider", # config field name
keywords=("myprovider", "mymodel"), # model-name keywords for auto-matching
env_key="MYPROVIDER_API_KEY", # env var for LiteLLM
display_name="My Provider", # shown in `banabot status`
litellm_prefix="myprovider", # auto-prefix: model → myprovider/model
skip_prefixes=("myprovider/",), # don't double-prefix
)
```
**Step 2.** Add a field to `ProvidersConfig` in `banabot/config/schema.py`:
```python
class ProvidersConfig(BaseModel):
...
myprovider: ProviderConfig = ProviderConfig()
```
That's it! Environment variables, model prefixing, config matching, and `banabot status` display will all work automatically.
**Common `ProviderSpec` options:**
| Field | Description | Example |
|-------|-------------|---------|
| `litellm_prefix` | Auto-prefix model names for LiteLLM | `"dashscope"` → `dashscope/qwen-max` |
| `skip_prefixes` | Don't prefix if model already starts with these | `("dashscope/", "openrouter/")` |
| `env_extras` | Additional env vars to set | `(("ZHIPUAI_API_KEY", "{api_key}"),)` |
| `model_overrides` | Per-model parameter overrides | `(("kimi-k2.5", {"temperature": 1.0}),)` |
| `is_gateway` | Can route any model (like OpenRouter) | `True` |
| `detect_by_key_prefix` | Detect gateway by API key prefix | `"sk-or-"` |
| `detect_by_base_keyword` | Detect gateway by API base URL | `"openrouter"` |
| `strip_model_prefix` | Strip existing prefix before re-prefixing | `True` (for AiHubMix) |
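The `litellm_prefix` / `skip_prefixes` behavior boils down to a guard plus a string join. Roughly (illustrative sketch; the real logic lives in `banabot/providers/registry.py`):

```python
def apply_litellm_prefix(model: str, prefix: str, skip_prefixes: tuple = ()) -> str:
    """Prefix a model name for LiteLLM routing unless it already carries one."""
    if any(model.startswith(p) for p in skip_prefixes):
        return model
    return f"{prefix}/{model}"
```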
</details>
### MCP (Model Context Protocol)
> [!TIP]
> The config format is compatible with Claude Desktop / Cursor. You can copy MCP server configs directly from any MCP server's README.
banabot supports [MCP](https://modelcontextprotocol.io/) — connect external tool servers and use them as native agent tools.
Add MCP servers to your `config.json`:
```json
{
"tools": {
"mcpServers": {
"filesystem": {
"command": "npx",
"args": ["-y", "@modelcontextprotocol/server-filesystem", "/path/to/dir"]
}
}
}
}
```
Two transport modes are supported:
| Mode | Config | Example |
|------|--------|---------|
| **Stdio** | `command` + `args` | Local process via `npx` / `uvx` |
| **HTTP** | `url` | Remote endpoint (`https://mcp.example.com/sse`) |
MCP tools are automatically discovered and registered on startup. The LLM can use them alongside built-in tools — no extra configuration needed.
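Transport selection follows directly from which keys an `mcpServers` entry contains. A minimal sketch of that rule (hypothetical helper, not banabot's code):

```python
def mcp_transport(server_cfg: dict) -> str:
    """Classify an mcpServers entry as 'stdio' or 'http' by its keys."""
    if "url" in server_cfg:
        return "http"
    if "command" in server_cfg:
        return "stdio"
    raise ValueError("mcpServers entry needs either 'command' or 'url'")
```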
### Security
> [!TIP]
> For production deployments, set `"restrictToWorkspace": true` in your config to sandbox the agent.
| Option | Default | Description |
|--------|---------|-------------|
| `tools.restrictToWorkspace` | `false` | When `true`, restricts **all** agent tools (shell, file read/write/edit, list) to the workspace directory. Prevents path traversal and out-of-scope access. |
| `channels.*.allowFrom` | `[]` (allow all) | Whitelist of user IDs. Empty = allow everyone; non-empty = only listed users can interact. |
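The path-traversal guard behind `restrictToWorkspace` can be sketched with `pathlib`: resolve the requested path relative to the workspace, then reject anything that resolves outside it. Illustrative only; banabot's actual enforcement is internal:

```python
from pathlib import Path

def ensure_in_workspace(user_path: str, workspace: str) -> Path:
    """Resolve a requested path and reject anything outside the workspace."""
    ws = Path(workspace).resolve()
    target = (ws / user_path).resolve()
    if target != ws and ws not in target.parents:
        raise PermissionError(f"{user_path!r} escapes workspace {ws}")
    return target
```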
### Web Search
banabot supports multiple search providers — **works out-of-the-box with DuckDuckGo (free, no API key required)**.
| Provider | API Key | Get Key |
|----------|---------|---------|
| `duckduckgo` (default) | No | — |
| `brave` | Yes | [Brave Search API](https://brave.com/search/api/) |
| `tavily` | Yes | [Tavily](https://tavily.com/) |
| `serper` | Yes | [Serper](https://serper.dev/) |
| `searxng` | No (self-hosted) | [SearXNG](https://searxng.org/) |
**Configuration** (`~/.banabot/config.json`):
```json
{
  "tools": {
    "web": {
      "search": {
        "defaultProvider": "duckduckgo",
        "maxResults": 5,
        "providers": {
          "brave": { "apiKey": "YOUR_KEY", "enabled": true },
          "duckduckgo": { "enabled": true },
          "tavily": { "apiKey": "YOUR_KEY", "enabled": false },
          "serper": { "apiKey": "YOUR_KEY", "enabled": false },
          "searxng": { "apiBase": "http://localhost:8080", "enabled": false }
        }
      }
    }
  }
}
```
If `defaultProvider` is not set, banabot falls back to DuckDuckGo (free). Set `defaultProvider` to make a different provider the default.
## CLI Reference
| Command | Description |
|---------|-------------|
| `banabot onboard` | Initialize config & workspace |
| `banabot agent -m "..."` | Chat with the agent |
| `banabot agent` | Interactive chat mode |
| `banabot agent --no-markdown` | Show plain-text replies |
| `banabot agent --logs` | Show runtime logs during chat |
| `banabot gateway` | Start the gateway |
| `banabot status` | Show status |
| `banabot provider login openai-codex` | OAuth login for providers |
| `banabot channels login` | Link WhatsApp (scan QR) |
| `banabot channels status` | Show channel status |
Interactive mode exits: `exit`, `quit`, `/exit`, `/quit`, `:q`, or `Ctrl+D`.
<details>
<summary><b>Scheduled Tasks (Cron)</b></summary>
```bash
# Add a job
banabot cron add --name "daily" --message "Good morning!" --cron "0 9 * * *"
banabot cron add --name "hourly" --message "Check status" --every 3600
# List jobs
banabot cron list
# Remove a job
banabot cron remove <job_id>
```
</details>
## 🐳 Docker
> [!TIP]
> The `-v ~/.banabot:/root/.banabot` flag mounts your local config directory into the container, so your config and workspace persist across container restarts.
### Docker Compose
```bash
docker compose run --rm banabot-cli onboard # first-time setup
vim ~/.banabot/config.json # add API keys
docker compose up -d banabot-gateway # start gateway
```
```bash
docker compose run --rm banabot-cli agent -m "Hello!" # run CLI
docker compose logs -f banabot-gateway # view logs
docker compose down # stop
```
### Docker
```bash
# Build the image
docker build -t banabot .
# Initialize config (first time only)
docker run -v ~/.banabot:/root/.banabot --rm banabot onboard
# Edit config on host to add API keys
vim ~/.banabot/config.json
# Run gateway (connects to enabled channels, e.g. Telegram/Discord/Mochat)
docker run -v ~/.banabot:/root/.banabot -p 18790:18790 banabot gateway
# Or run a single command
docker run -v ~/.banabot:/root/.banabot --rm banabot agent -m "Hello!"
docker run -v ~/.banabot:/root/.banabot --rm banabot status
```
## 📁 Project Structure
```
banabot/
├── agent/ # 🧠 Core agent logic
│ ├── loop.py # Agent loop (LLM ↔ tool execution)
│ ├── context.py # Prompt builder
│ ├── memory.py # Persistent memory
│ ├── skills.py # Skills loader
│ ├── subagent.py # Background task execution
│ └── tools/ # Built-in tools (incl. spawn)
├── skills/ # 🎯 Bundled skills (github, weather, tmux...)
├── channels/ # 📱 Chat channel integrations
├── bus/ # 🚌 Message routing
├── cron/ # ⏰ Scheduled tasks
├── heartbeat/ # 💓 Proactive wake-up
├── providers/ # 🤖 LLM providers (OpenRouter, etc.)
├── session/ # 💬 Conversation sessions
├── config/ # ⚙️ Configuration
└── cli/ # 🖥️ Commands
```
## 🛠️ Development Guide
### Prerequisites
- **Python 3.11+**
- **Node.js 20+** (only needed for WhatsApp bridge)
- **Git**
### Setup
```bash
# 1. Clone the repo
git clone https://github.com/Mrbanano/banabot.git
cd banabot
# 2. Create and activate a virtual environment
python -m venv .venv
source .venv/bin/activate # On Windows: .venv\Scripts\activate
# 3. Install in editable mode with dev dependencies
pip install -e ".[dev]"
# 4. Initialize config & workspace
banabot onboard
# 5. Add an API key to ~/.banabot/config.json (e.g. OpenRouter)
# {
#   "providers": {
#     "openrouter": { "apiKey": "sk-or-v1-xxx" }
#   }
# }
# 6. Verify everything works
banabot status
banabot agent -m "Hello!"
```
### Running Tests
```bash
# Run all tests
pytest
# Run a specific test file
pytest tests/test_commands.py
# Run a specific test function
pytest tests/test_commands.py::test_onboard_fresh_install
# Verbose output
pytest -v
```
Tests use `pytest-asyncio` (auto mode) for async tests and `unittest.mock` for mocking config/paths.
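In auto mode, any `async def test_*` is collected and awaited without a decorator. A minimal sketch (a hypothetical test, not one from the repo):

```python
import asyncio

async def fake_agent_reply(message: str) -> str:
    # Stand-in for real async agent work.
    await asyncio.sleep(0)
    return f"echo: {message}"

async def test_agent_reply():
    # With pytest-asyncio in auto mode, this coroutine runs as-is.
    assert await fake_agent_reply("hi") == "echo: hi"
```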
### Linting & Formatting
The project uses [Ruff](https://docs.astral.sh/ruff/) for both linting and formatting.
```bash
# Check for lint errors
ruff check banabot/
# Auto-fix lint errors
ruff check --fix banabot/
# Format code
ruff format banabot/
```
Rules configured: `E` (pycodestyle), `F` (Pyflakes), `I` (isort), `N` (naming), `W` (whitespace). Line length: 100 chars.
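Those settings correspond to a `pyproject.toml` section roughly like the following (a sketch; the repository's actual config is authoritative):

```toml
[tool.ruff]
line-length = 100

[tool.ruff.lint]
select = ["E", "F", "I", "N", "W"]
```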
### Debugging
```bash
# Run agent with runtime logs visible
banabot agent -m "test" --logs
# Run gateway in verbose mode
banabot gateway --verbose
```
### Building the WhatsApp Bridge (optional)
Only needed if you're working on WhatsApp integration:
```bash
cd bridge
npm install
npm run build
```
### Key Extension Points
<details>
<summary><b>Adding a New Tool</b></summary>
1. Create `banabot/agent/tools/mytool.py` extending the `Tool` base class
2. Implement `name`, `description`, `parameters` (JSON schema), and `execute(**kwargs)`
3. Register it in the `AgentLoop` tool setup
```python
from banabot.agent.tools.base import Tool

class MyTool(Tool):
    name = "my_tool"
    description = "Does something useful"
    parameters = {
        "type": "object",
        "properties": {
            "input": {"type": "string", "description": "The input value"}
        },
        "required": ["input"]
    }

    async def execute(self, **kwargs):
        return f"Result: {kwargs['input']}"
```
</details>
<details>
<summary><b>Adding a New Channel</b></summary>
1. Create `banabot/channels/myservice.py` extending `Channel`
2. Implement `start()`, `stop()`, and message sending logic
3. Subscribe to the inbound message bus
4. Add a config class to `banabot/config/schema.py`
5. Register in `ChannelManager.start_all()`
</details>
<details>
<summary><b>Creating a Custom Skill</b></summary>
Skills are Markdown files that give the agent domain-specific instructions:
1. Create `~/.banabot/workspace/skills/myskill/SKILL.md`
2. Write instructions, examples, and notes in Markdown
3. The agent will auto-discover and use it
See `banabot/skills/README.md` for the full skill format.
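A minimal `SKILL.md` sketch (contents and tool names are entirely illustrative):

```markdown
# Weather Skill

When the user asks about the weather, search for "<city> weather today"
and summarize the top result in one sentence.

## Example
User: "Weather in Tokyo?"
Agent: searches "Tokyo weather today", replies with a short summary.
```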
</details>
### Architecture Overview
| Component | Path | Role |
|-----------|------|------|
| **Agent Loop** | `banabot/agent/loop.py` | Core LLM ↔ tool execution cycle |
| **Context Builder** | `banabot/agent/context.py` | Assembles prompts from workspace files |
| **Memory** | `banabot/agent/memory.py` | Two-layer: `MEMORY.md` (facts) + `HISTORY.md` (events) |
| **Message Bus** | `banabot/bus/` | Async inbound/outbound queues decoupling channels from agent |
| **Provider Registry** | `banabot/providers/registry.py` | Single registry for 18+ LLM providers |
| **Session Manager** | `banabot/session/manager.py` | JSONL-based per-channel conversation storage |
| **Tool Registry** | `banabot/agent/tools/registry.py` | Manages built-in + MCP tools |
| **Channel Manager** | `banabot/channels/manager.py` | Starts/stops all enabled channel integrations |
| **Cron Service** | `banabot/cron/service.py` | Scheduled task execution (cron, interval, one-time) |
| **Config Schema** | `banabot/config/schema.py` | Pydantic models for all config sections |
### PR Workflow
```bash
# Create a feature branch
git checkout -b feature/my-feature
# Make changes, then lint and test
ruff check --fix banabot/
ruff format banabot/
pytest
# Commit and push
git add .
git commit -m "feat: description of change"
git push origin feature/my-feature
```
Then open a PR against `main`. Use conventional commit prefixes: `feat:`, `fix:`, `docs:`, `chore:`, `refactor:`.
---
## 🤝 Contribute & Roadmap
PRs welcome! The codebase is intentionally small and readable. 🤗
**Roadmap** — Pick an item and open a PR!
- [ ] **Multi-modal** — See and hear (images, voice, video)
- [ ] **Long-term memory** — Never forget important context
- [ ] **Better reasoning** — Multi-step planning and reflection
- [ ] **More integrations** — Calendar and more
- [ ] **Self-improvement** — Learn from feedback and mistakes
### Contributors
**banabot** is a fork of [nanobot](https://github.com/HKUDS/nanobot). We thank the original contributors:
<a href="https://github.com/HKUDS/nanobot/graphs/contributors">
<img src="https://contrib.rocks/image?repo=HKUDS/nanobot&max=100&columns=12" alt="nanobot Contributors" />
</a>
See [CREDITS.md](./CREDITS.md) for full attribution.
| text/markdown | banabot contributors | null | null | null | MIT | agent, ai, banabot, chatbot, cli | [
"Development Status :: 3 - Alpha",
"Intended Audience :: Developers",
"License :: OSI Approved :: MIT License",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12"
] | [] | null | null | >=3.11 | [] | [] | [] | [
"croniter<7.0.0,>=6.0.0",
"ddgs<10.0.0,>=9.0.0",
"dingtalk-stream<1.0.0,>=0.24.0",
"httpx<1.0.0,>=0.28.0",
"inquirerpy<1.0.0,>=0.3.4",
"json-repair<1.0.0,>=0.57.0",
"lark-oapi<2.0.0,>=1.5.0",
"litellm<2.0.0,>=1.81.5",
"loguru<1.0.0,>=0.7.3",
"mcp<2.0.0,>=1.26.0",
"msgpack<2.0.0,>=1.1.0",
"oauth-cli-kit<1.0.0,>=0.1.3",
"prompt-toolkit<4.0.0,>=3.0.50",
"pydantic-settings<3.0.0,>=2.12.0",
"pydantic<3.0.0,>=2.12.0",
"python-socketio<6.0.0,>=5.16.0",
"python-socks[asyncio]<3.0.0,>=2.8.0",
"python-telegram-bot[socks]<23.0,>=22.0",
"qq-botpy<2.0.0,>=1.2.0",
"readability-lxml<1.0.0,>=0.8.4",
"rich<15.0.0,>=14.0.0",
"slack-sdk<4.0.0,>=3.39.0",
"slackify-markdown<1.0.0,>=0.2.0",
"socksio<2.0.0,>=1.0.0",
"typer<1.0.0,>=0.20.0",
"websocket-client<2.0.0,>=1.9.0",
"websockets<17.0,>=16.0",
"pytest-asyncio<2.0.0,>=1.3.0; extra == \"dev\"",
"pytest<10.0.0,>=9.0.0; extra == \"dev\"",
"ruff>=0.1.0; extra == \"dev\""
] | [] | [] | [] | [
"Homepage, https://github.com/Mrbanano/banabot",
"Repository, https://github.com/Mrbanano/banabot",
"Issues, https://github.com/Mrbanano/banabot/issues",
"Chat, https://t.me/bananobot_chat",
"Logo, https://raw.githubusercontent.com/Mrbanano/banabot/main/img/banabot-logo.png"
] | twine/6.2.0 CPython/3.12.3 | 2026-02-20T01:43:47.611201 | banabot_ai-0.3.0.tar.gz | 136,160 | 08/56/c3899a489fa37ca3a134ebf8407ebf39b81c1c8c7122a41c095e2503b6fb/banabot_ai-0.3.0.tar.gz | source | sdist | null | false | c4a7850effb965a84d11864b3de079e8 | 9035765ca828ac394f5306a07f99d6517ba683e00dd059353e9fd8e021c45592 | 0856c3899a489fa37ca3a134ebf8407ebf39b81c1c8c7122a41c095e2503b6fb | null | [
"LICENSE"
] | 226 |
2.4 | jstverify-tracing | 0.10.4 | Python distributed tracing SDK for JstVerify application mapping | # jstverify-tracing
Python distributed tracing SDK for [JstVerify](https://jstverify.com) application mapping. Auto-instruments your backend to produce trace spans that connect with the JstVerify JavaScript SDK, giving you a full frontend-to-backend service map.
## Installation
```bash
pip install jstverify-tracing
```
With framework extras:
```bash
pip install jstverify-tracing[flask]
pip install jstverify-tracing[django]
pip install jstverify-tracing[fastapi]
```
## Quick Start
### 1. Initialize (once at startup)
```python
import jstverify_tracing

jstverify_tracing.init(
    api_key="your-sdk-key",
    service_name="my-backend",
)
```
> The endpoint defaults to the production ingestion URL (`https://sdkapi.jstverify.com/v1/tracing/spans`). Override for dev environments:
>
> ```python
> jstverify_tracing.init(
>     api_key="your-sdk-key",
>     endpoint="https://sdkapi.dev.jstverify.com/v1/tracing/spans",
>     service_name="my-backend",
> )
> ```
### 2. Add Framework Middleware
**Flask:**
```python
from jstverify_tracing.integrations.flask import JstVerifyTracingMiddleware
JstVerifyTracingMiddleware(app)
```
**Django (settings.py):**
```python
MIDDLEWARE = [
    "jstverify_tracing.integrations.django.JstVerifyTracingMiddleware",
    ...
]
```
**FastAPI:**
```python
from jstverify_tracing.integrations.fastapi import JstVerifyTracingMiddleware
app.add_middleware(JstVerifyTracingMiddleware)
```
**AWS Lambda (API Gateway):**
```python
from jstverify_tracing.integrations.awslambda import JstVerifyTracingMiddleware

@JstVerifyTracingMiddleware
def lambda_handler(event, context):
    return {"statusCode": 200, "body": "ok"}
```
**AWS AppSync Lambda Resolver:**
```python
from jstverify_tracing.integrations.appsync import JstVerifyAppSyncMiddleware

@JstVerifyAppSyncMiddleware
def handler(event, context):
    return [{"id": "1", "name": "Alice"}]
```
The AppSync middleware extracts trace context from `event["request"]["headers"]` and derives the operation name from `event["info"]["parentTypeName"]` and `event["info"]["fieldName"]` (e.g. `Query.listUsers`). Only direct transport mode is supported — relay mode is not available for AppSync since GraphQL responses cannot carry custom HTTP headers.
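For reference, the relevant parts of an AppSync direct-resolver event look roughly like this (only the fields the middleware reads are shown, and all values are made up):

```python
# Illustrative AppSync direct-resolver event; values are invented.
event = {
    "request": {
        "headers": {
            "x-jstverify-trace-id": "abc123",
            "x-jstverify-parent-span-id": "def456",
        }
    },
    "info": {"parentTypeName": "Query", "fieldName": "listUsers"},
}

# The operation name combines parent type and field name.
info = event["info"]
operation = f"{info['parentTypeName']}.{info['fieldName']}"
print(operation)  # → Query.listUsers
```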
### 3. Manual Instrumentation (optional)
```python
from jstverify_tracing import trace, trace_span

@trace("process-payment")
def process_payment(order_id):
    ...

def handle_order(order_id):
    with trace_span("validate-order") as span:
        ...
        span.set_status(200)
    with trace_span("charge-card") as span:
        ...
        span.set_http_metadata(method="POST", url="/payments/charge", status_code=201)
```
### 4. DynamoDB Tracing
The `patch_requests=True` option only patches the `requests` HTTP library. AWS SDK calls via `boto3` use `urllib3` directly, so DynamoDB, S3, and SQS operations are **not** auto-traced.
Use the `trace_dynamodb()` helper to wrap individual DynamoDB operations:
```python
from jstverify_tracing import trace_dynamodb

# Instead of: table.get_item(Key={"UserID": "123"})
result = trace_dynamodb("GetItem", table, Key={"UserID": "123"})

# Works with any DynamoDB operation
result = trace_dynamodb(
    "Query", table,
    KeyConditionExpression="pk = :pk",
    ExpressionAttributeValues={":pk": org_id},
)
result = trace_dynamodb("PutItem", table, Item={"UserID": "456", "name": "Alice"})
```
Each call creates a child span with the operation name (e.g. `DynamoDB.GetItem`) and the table name in metadata.
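Conceptually, the helper wraps the call in a child span named after the operation. A simplified sketch of the idea (not the SDK's actual implementation, which also records timing and table metadata):

```python
# Simplified illustration of span-wrapping around a callable.
def traced_operation(operation, fn, **kwargs):
    span = {"name": f"DynamoDB.{operation}"}
    try:
        result = fn(**kwargs)
        span["status"] = "ok"
        return result, span
    except Exception:
        span["status"] = "error"
        raise

# Hypothetical stand-in for table.get_item
result, span = traced_operation(
    "GetItem", lambda **kw: {"Item": kw["Key"]}, Key={"UserID": "123"}
)
print(span["name"])  # → DynamoDB.GetItem
```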
### 5. Shutdown
```python
jstverify_tracing.shutdown()
```
Shutdown is also registered via `atexit` automatically.
## Configuration Options
| Parameter | Type | Default | Description |
|-----------|------|---------|-------------|
| `api_key` | str | required | Your JstVerify SDK API key |
| `endpoint` | str | production URL | Span ingestion endpoint URL (defaults to `https://sdkapi.jstverify.com/v1/tracing/spans`) |
| `service_name` | str | required | Service name shown in the service map |
| `service_type` | str | `"http"` | Service type identifier |
| `transport` | str | `"direct"` | `"direct"` sends spans via HTTP; `"relay"` encodes spans into response headers |
| `flush_interval` | float | `5.0` | Seconds between background flushes |
| `max_queue_size` | int | `200` | Max buffered spans (circular buffer) |
| `max_batch_size` | int | `50` | Max spans per API request |
| `debug` | bool | `False` | Enable debug logging |
| `patch_requests` | bool | `True` | Auto-patch `requests` library for outgoing HTTP tracing |
## How It Works
### Direct Mode (default)
1. The middleware reads `X-JstVerify-Trace-Id` and `X-JstVerify-Parent-Span-Id` headers from incoming requests (injected by the JS SDK).
2. A root span is created for each request, with nested child spans for `@trace` decorated functions and `trace_span` context managers.
3. Outgoing `requests` library calls are automatically instrumented — trace headers are injected so downstream services can continue the trace.
4. Spans are buffered in a thread-safe queue and flushed to the JstVerify API in batches by a background daemon thread.
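The header propagation in steps 1 and 3 amounts to copying two values into each outgoing request. A minimal sketch (the real SDK does this transparently when it patches `requests`):

```python
TRACE_ID_HEADER = "X-JstVerify-Trace-Id"
PARENT_SPAN_HEADER = "X-JstVerify-Parent-Span-Id"

def inject_trace_headers(headers, trace_id, span_id):
    """Return a copy of headers with the current trace context added."""
    out = dict(headers or {})
    out[TRACE_ID_HEADER] = trace_id
    out[PARENT_SPAN_HEADER] = span_id
    return out

headers = inject_trace_headers({"Accept": "application/json"}, "trace-1", "span-9")
```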
### Relay Mode
For backends without outbound internet access (private VPC, strict firewalls), relay mode encodes spans into the `X-JstVerify-Spans` response header. The JstVerify JS SDK reads this header and relays the spans to the ingestion API on behalf of the backend.
```python
jstverify_tracing.init(
    api_key="your-sdk-key",
    service_name="my-backend",
    transport="relay",  # No endpoint needed
)
```
**How it works:**
1. Each request collects spans in a per-request buffer (async-safe via `contextvars`).
2. When the response is sent, all spans are base64url-encoded into the `X-JstVerify-Spans` header.
3. The JS SDK decodes the header and merges the spans into its own flush queue.
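Step 2's encoding can be sketched with the standard library. The exact wire format is internal to the SDK, so this only illustrates base64url-encoding a span batch:

```python
import base64
import json

def encode_spans(spans):
    """Base64url-encode a list of span dicts for a response header."""
    payload = json.dumps(spans, separators=(",", ":")).encode("utf-8")
    return base64.urlsafe_b64encode(payload).decode("ascii")

def decode_spans(header_value):
    """Reverse of encode_spans, as the JS SDK would do."""
    return json.loads(base64.urlsafe_b64decode(header_value))

spans = [{"name": "validate-order", "status": 200}]
header = encode_spans(spans)
assert decode_spans(header) == spans
```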
**Limitations:**
- At most ~20-30 spans per response, due to the 7500-byte header size limit.
- Only works for request-response flows — async background jobs have no response to carry spans.
- Cross-origin requests require the `Access-Control-Expose-Headers` header (set automatically by the middleware).
## License
MIT
| text/markdown | JustBard Technologies | null | null | null | null | null | [
"Development Status :: 4 - Beta",
"Framework :: Django",
"Framework :: FastAPI",
"Framework :: Flask",
"Intended Audience :: Developers",
"License :: OSI Approved :: MIT License",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.9",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: 3.13",
"Topic :: System :: Monitoring"
] | [] | null | null | >=3.9 | [] | [] | [] | [
"requests>=2.20.0",
"typing-extensions>=3.7.4",
"boto3>=1.20.0; extra == \"aws\"",
"boto3>=1.20.0; extra == \"dev\"",
"django>=3.2; extra == \"dev\"",
"fastapi>=0.68; extra == \"dev\"",
"flask>=2.0; extra == \"dev\"",
"httpx>=0.23; extra == \"dev\"",
"pytest-cov>=4.0; extra == \"dev\"",
"pytest>=7.0; extra == \"dev\"",
"responses>=0.20; extra == \"dev\"",
"django>=3.2; extra == \"django\"",
"fastapi>=0.68; extra == \"fastapi\"",
"httpx>=0.23; extra == \"fastapi\"",
"flask>=2.0; extra == \"flask\""
] | [] | [] | [] | [
"Homepage, https://jstverify.com",
"Documentation, https://docs.jstverify.com/tracing/python",
"Repository, https://github.com/ANamelessDrake/JstVerify"
] | twine/6.2.0 CPython/3.13.11 | 2026-02-20T01:43:17.382713 | jstverify_tracing-0.10.4.tar.gz | 33,002 | e8/7a/62078cf8f6c701e95c4cc5aa197414e1dfa7ed13903162bd0df7c501a0df/jstverify_tracing-0.10.4.tar.gz | source | sdist | null | false | 5e575f86fc158ed64432818700b06b1d | 9c0c7795e02f6f6e2513c50c66c5bcb9b213f156ea5c399869910fdef56a06c2 | e87a62078cf8f6c701e95c4cc5aa197414e1dfa7ed13903162bd0df7c501a0df | MIT | [
"LICENSE"
] | 484 |