metadata_version string | name string | version string | summary string | description string | description_content_type string | author string | author_email string | maintainer string | maintainer_email string | license string | keywords string | classifiers list | platform list | home_page string | download_url string | requires_python string | requires list | provides list | obsoletes list | requires_dist list | provides_dist list | obsoletes_dist list | requires_external list | project_urls list | uploaded_via string | upload_time timestamp[us] | filename string | size int64 | path string | python_version string | packagetype string | comment_text string | has_signature bool | md5_digest string | sha256_digest string | blake2_256_digest string | license_expression string | license_files list | recent_7d_downloads int64 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
2.4 | maxapi | 0.9.15 | A library for building chat bots with the MAX messenger API | <p align="center">
<a href="https://github.com/love-apples/maxapi"><img src="logo.png" alt="MaxAPI"></a>
</p>
<p align="center">
<a href='https://max.ru/join/IPAok63C3vFqbWTFdutMUtjmrAkGqO56YeAN7iyDfc8'>MAX Chat</a> •
<a href='https://t.me/maxapi_github'>TG Chat</a>
</p>
<p align="center">
<a href='https://pypi.org/project/maxapi/'>
<img src='https://img.shields.io/pypi/v/maxapi.svg' alt='PyPI version'></a>
<a href='https://pypi.org/project/maxapi/'>
<img src='https://img.shields.io/pypi/pyversions/maxapi.svg' alt='Python Version'></a>
<a href='https://codecov.io/gh/love-apples/maxapi'>
<img src='https://img.shields.io/codecov/c/github/love-apples/maxapi.svg' alt='Coverage'></a>
<a href='https://github.com/love-apples/maxapi/blob/main/LICENSE'>
<img src='https://img.shields.io/github/license/love-apples/maxapi.svg' alt='License'></a>
</p>
## ● Documentation and usage examples
Available here: https://love-apples.github.io/maxapi/
## ● Installing from PyPI
Stable version
```bash
pip install maxapi
```
## ● Installing from GitHub
The latest version; bugs are possible. Recommended only for trying out new commits.
```bash
pip install git+https://github.com/love-apples/maxapi.git
```
## ● Quick start
If you are testing the bot in a chat, don't forget to grant it administrator rights!
### ● Running with Polling
If the bot has active Webhook subscriptions, events will not arrive when using `start_polling`. In that case, remove the Webhook subscriptions with `await bot.delete_webhook()` before calling `start_polling`.
```python
import asyncio
import logging

from maxapi import Bot, Dispatcher
from maxapi.types import BotStarted, Command, MessageCreated

logging.basicConfig(level=logging.INFO)

# Put the bot token in the MAX_BOT_TOKEN environment variable
# (remember to load variables from .env into os.environ),
# or pass it directly via Bot(token='...')
bot = Bot()
dp = Dispatcher()


# The bot's reply when the "Start" button is pressed
@dp.bot_started()
async def bot_started(event: BotStarted):
    await event.bot.send_message(
        chat_id=event.chat_id,
        text='Hi! Send me /start'
    )


# The bot's reply to the /start command
@dp.message_created(Command('start'))
async def hello(event: MessageCreated):
    await event.message.answer("An example chat bot for MAX 💙")


async def main():
    await dp.start_polling(bot)


if __name__ == '__main__':
    asyncio.run(main())
```
### ● Running with Webhook
Before running the bot via Webhook, you need to install extra dependencies (fastapi, uvicorn). You can do that with:
```bash
pip install maxapi[webhook]
```
This shows a simple way to run it; for lower-level control, see [this example](https://love-apples.github.io/maxapi/examples/#_6).
```python
import asyncio
import logging

from maxapi import Bot, Dispatcher
from maxapi.types import BotStarted, Command, MessageCreated

logging.basicConfig(level=logging.INFO)

bot = Bot()
dp = Dispatcher()


# Handler for the /start command
@dp.message_created(Command('start'))
async def hello(event: MessageCreated):
    await event.message.answer("Hello from the webhook!")


async def main():
    await dp.handle_webhook(
        bot=bot,
        host='localhost',
        port=8080,
        log_level=logging.CRITICAL  # Remove for more verbose logging
    )


if __name__ == '__main__':
    asyncio.run(main())
```
| text/markdown | null | Denis <bestloveapples@gmail.com> | null | null | null | max, api, bot | [
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: 3.13",
"Programming Language :: Python :: 3.14"
] | [] | null | null | >=3.10 | [] | [] | [] | [
"aiohttp<4,>=3.10",
"magic_filter<2,>=1.0.0",
"pydantic<3,>=2",
"aiofiles<26,>=24.1",
"puremagic<2,>=1.30",
"fastapi<1,>=0.103.0; extra == \"webhook\"",
"uvicorn<1,>=0.15.0; extra == \"webhook\""
] | [] | [] | [] | [
"Homepage, https://github.com/love-apples/maxapi"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-19T14:36:14.472515 | maxapi-0.9.15.tar.gz | 77,356 | a4/14/9f4fb07f5e37e2e1a410161696c69889bb4ea0396bc83ee2cb994f6ca483/maxapi-0.9.15.tar.gz | source | sdist | null | false | 2fc4284fcf52719f76a1e7d043802695 | 34cda3715487f37b0608843cdbb464ad15fcf1c98117d8eaf85c8cf2d4b0253c | a4149f4fb07f5e37e2e1a410161696c69889bb4ea0396bc83ee2cb994f6ca483 | null | [] | 890 |
2.3 | trainloop | 0.4.0 | Minimal PyTorch training loop with hooks and checkpointing. | # trainloop
[PyPI](https://pypi.org/project/trainloop/)
Minimal PyTorch training loop with hooks for logging, checkpointing, and customization.
Docs: https://karimknaebel.github.io/trainloop/
## Install
```bash
pip install trainloop
```
## Basic example
```python
import logging

import torch
import torch.nn as nn

from trainloop import BaseTrainer, CheckpointingHook, ProgressHook

logging.basicConfig(level=logging.INFO)


class MyTrainer(BaseTrainer):
    def build_data_loader(self):
        class ToyDataset(torch.utils.data.IterableDataset):
            def __iter__(self):
                while True:
                    data = torch.randn(784)
                    target = torch.randint(0, 10, (1,)).item()
                    yield data, target

        return torch.utils.data.DataLoader(ToyDataset(), batch_size=32)

    def build_model(self):
        return nn.Sequential(
            nn.Linear(784, 128),
            nn.ReLU(),
            nn.Linear(128, 10),
        ).to(self.device)

    def build_optimizer(self):
        return torch.optim.AdamW(self.model.parameters(), lr=3e-4)

    def build_hooks(self):
        return [
            ProgressHook(interval=50, with_records=True),
            CheckpointingHook(interval=500, keep_previous=2),
        ]

    def forward(self, batch):
        x, y = batch
        x, y = x.to(self.device), y.to(self.device)
        logits = self.model(x)
        loss = nn.functional.cross_entropy(logits, y)
        accuracy = (logits.argmax(1) == y).float().mean().item()
        return loss, {"accuracy": accuracy}


trainer = MyTrainer(max_steps=2000, device="cpu", workspace="runs/demo")
trainer.train()
```
| text/markdown | Karim Abou Zeid | Karim Abou Zeid <contact@ka.codes> | null | null | null | null | [] | [] | null | null | >=3.10 | [] | [] | [] | [
"pillow>=11.3.0",
"torch>=2.0.0"
] | [] | [] | [] | [] | uv/0.10.4 {"installer":{"name":"uv","version":"0.10.4","subcommand":["publish"]},"python":null,"implementation":{"name":null,"version":null},"distro":{"name":"Ubuntu","version":"24.04","id":"noble","libc":null},"system":{"name":null,"release":null},"cpu":null,"openssl_version":null,"setuptools_version":null,"rustc_version":null,"ci":true} | 2026-02-19T14:35:30.215491 | trainloop-0.4.0-py3-none-any.whl | 15,011 | e5/7c/707e315aa1abdf462400272081159a9d984610f9414ff61f0a38c5d89fa6/trainloop-0.4.0-py3-none-any.whl | py3 | bdist_wheel | null | false | 1ee00d6c2c5741e7714d1aab2f301bbc | 7c8ae68cecf3c9d54badf0adbbc17d2e32131ff380e3e5a55b78d5bcb84f58c7 | e57c707e315aa1abdf462400272081159a9d984610f9414ff61f0a38c5d89fa6 | null | [] | 225 |
2.4 | justmyresource | 1.1.1 | Resource discovery and resolution library for Python | # JustMyResource
A precise, lightweight, and extensible resource discovery library for Python. JustMyResource provides a robust "Resource Atlas" for the Python ecosystem—a definitive map of every resource available to an application, whether bundled in a Python package or provided by third-party resource packs.
## Features
- **Generic Framework**: Unified interface for multiple resource types (SVG icons, raster images, future: audio/video)
- **Extensible**: Resource packs can be added via the standard Python EntryPoints mechanism
- **Efficient**: Lazy discovery with in-memory caching
- **Prefix-Based Resolution**: Namespace disambiguation via `pack:name` format
- **Type-Safe**: Returns `ResourceContent` objects with MIME types and metadata
- **Zero Dependencies**: Core library has no required dependencies
## Installation
```bash
pip install justmyresource
```
## Available Resource Packs
JustMyResource supports icon packs from the [justmyresource-icons](https://github.com/kws/justmyresource-icons) monorepo. Install individual packs or all packs at once using optional dependency groups:
```bash
# Install a specific icon pack
pip install justmyresource[lucide]
# Install all icon packs
pip install justmyresource[icons]
```
| Pack | PyPI Package | Prefixes | Variants | License |
|------|-------------|----------|----------|---------|
| **Lucide** | `justmyresource-lucide` | `lucide`, `luc` | — | ISC |
| **Material Official** | `justmyresource-material-icons` | `material-icons`, `mi` | filled, outlined, rounded, sharp, two-tone | Apache-2.0 |
| **Material Community** | `justmyresource-mdi` | `mdi` | — | Apache-2.0 |
| **Phosphor** | `justmyresource-phosphor` | `phosphor`, `ph` | thin, light, regular, bold, fill, duotone | MIT |
| **Font Awesome Free** | `justmyresource-font-awesome` | `font-awesome`, `fa` | solid, regular, brands | CC-BY-4.0 |
| **Heroicons** | `justmyresource-heroicons` | `heroicons`, `hero` | 24/outline, 24/solid, 20/solid, 16/solid | MIT |
For detailed information about each pack, including icon counts, variant details, and usage examples, see the [justmyresource-icons repository](https://github.com/kws/justmyresource-icons).
## Quick Start
```python
from justmyresource import ResourceRegistry, get_default_registry

# Get default registry
registry = get_default_registry()

# Get resource with fully qualified name (always unique)
content = registry.get_resource("acme-icons/lucide:lightbulb")

# Get resource with short pack name (works if unique)
content = registry.get_resource("lucide:lightbulb")

# Get resource with alias
content = registry.get_resource("luc:lightbulb")

# Check content type and use accordingly
if content.content_type == "image/svg+xml":
    svg_text = content.text  # Decode as UTF-8
    # Use SVG text...

# Get resource without prefix (requires default_prefix to be set)
registry = ResourceRegistry(default_prefix="lucide")
content = registry.get_resource("lightbulb")  # Resolves as "lucide:lightbulb"
```
## Basic Usage
### Getting Resources
```python
from justmyresource import ResourceRegistry

registry = ResourceRegistry()

# Get resource with fully qualified name (always unique)
content = registry.get_resource("acme-icons/lucide:lightbulb")

# Get resource with short pack name (works if unique)
content = registry.get_resource("lucide:lightbulb")

# Get resource without prefix (requires default_prefix to be set)
registry = ResourceRegistry(default_prefix="lucide")
content = registry.get_resource("lightbulb")  # Resolves as "lucide:lightbulb"

# Access resource data
if content.content_type == "image/svg+xml":
    svg_text = content.text  # UTF-8 decoded string
elif content.content_type == "image/png":
    png_bytes = content.data  # Raw bytes
```
### Listing Resources
```python
# List all resources from all packs
for resource_info in registry.list_resources():
    print(f"{resource_info.pack}:{resource_info.name} ({resource_info.content_type})")
    # pack is the qualified name (e.g., "acme-icons/lucide")

# List resources from a specific pack (qualified or short name)
for resource_info in registry.list_resources(pack="acme-icons/lucide"):
    print(resource_info.name)

for resource_info in registry.list_resources(pack="lucide"):
    print(resource_info.name)

# List registered packs (returns qualified names)
for qualified_name in registry.list_packs():
    print(qualified_name)  # e.g., "acme-icons/lucide"
```
## Command-Line Interface
JustMyResource includes a CLI tool for discovering, inspecting, and extracting resources from the command line.
### Installation
The CLI is automatically available after installing justmyresource:
```bash
pip install justmyresource
justmyresource --help
```
### Commands
#### List Resources
List all available resources with optional filtering:
```bash
# List all resources from all packs
justmyresource list
# List resources from a specific pack
justmyresource list --pack lucide
# Filter by glob pattern
justmyresource list --filter "arrow-*"
# Show pack and content type information
justmyresource list --verbose
# JSON output
justmyresource list --json
```
#### Get Resource (Metadata-First)
Inspect resource metadata or extract to file/stdout:
```bash
# Show metadata only (default - safe for binary resources)
justmyresource get lucide:lightbulb
# Output to stdout (for piping)
justmyresource get lucide:lightbulb -o -
# Save to file
justmyresource get lucide:lightbulb -o icon.svg
# JSON output
justmyresource get lucide:lightbulb --json
```
The default behavior shows metadata (size, content-type, pack info) without outputting binary data to the terminal, preventing accidental terminal corruption.
#### List Resource Packs
List all registered resource packs:
```bash
# Simple list
justmyresource packs
# Detailed information (prefixes, aliases, collisions)
justmyresource packs --verbose
# JSON output
justmyresource packs --json
```
#### Resource Information
Show detailed information about a specific resource:
```bash
# Detailed resource information
justmyresource info lucide:lightbulb
# JSON output
justmyresource info lucide:lightbulb --json
```
### Global Options
All commands support these global options:
- `--json`: Output in JSON format
- `--blocklist <packs>`: Comma-separated list of pack names to block
- `--prefix-map <mappings>`: Override prefix mappings (format: `"alias1=dist1/pack1,alias2=dist2/pack2"`)
- `--default-prefix <prefix>`: Set default prefix for bare-name lookups
### Examples
```bash
# Discover available icon packs
justmyresource packs
# Find all arrow icons
justmyresource list --filter "*arrow*"
# Inspect a specific icon
justmyresource get lucide:arrow-right
# Extract icon to file
justmyresource get lucide:arrow-right -o arrow.svg
# Use with default prefix
justmyresource --default-prefix lucide get lightbulb
# Pipe SVG to another tool
justmyresource get lucide:lightbulb -o - | grep "path"
```
### Blocking Resource Packs
```python
# Block specific packs (accepts short or qualified names)
registry = ResourceRegistry(blocklist={"broken-pack", "acme-icons/lucide"})
# Block via environment variable
# RESOURCE_DISCOVERY_BLOCKLIST="broken-pack,acme-icons/lucide" python app.py
```
### Handling Prefix Collisions
When multiple packs claim the same prefix, the registry emits warnings and marks the prefix as ambiguous. No winner is picked—you must use qualified names or `prefix_map` to resolve:
```python
import warnings
# Filter collision warnings if desired
warnings.filterwarnings("ignore", category=PrefixCollisionWarning)
registry = ResourceRegistry()
# Both packs remain accessible via qualified names
content1 = registry.get_resource("acme-icons/lucide:lightbulb")
content2 = registry.get_resource("cool-icons/lucide:lightbulb")
# Short name raises error (ambiguous):
# registry.get_resource("lucide:lightbulb") # ValueError: ambiguous
# Inspect collisions
collisions = registry.get_prefix_collisions()
# {"lucide": ["acme-icons/lucide", "cool-icons/lucide"]}
```
### Custom Prefix Mapping
Override prefix mappings to resolve collisions or add custom aliases:
```python
# Via constructor
registry = ResourceRegistry(
    prefix_map={
        "icons": "acme-icons/lucide",  # Map "icons" to specific pack
        "mi": "material-icons/core",   # Custom alias
    }
)

# Via environment variable
# RESOURCE_PREFIX_MAP="icons=acme-icons/lucide,mi=material-icons/core" python app.py

# Inspect current mappings
prefix_map = registry.get_prefix_map()
# {"icons": "acme-icons/lucide", "mi": "material-icons/core", ...}
```
## Creating Resource Packs
Resource packs can be registered via Python EntryPoints. This allows applications to bundle resources or third-party packages to provide resources.
### First-Party Resource Pack (Application's Own Resources)
```python
# myapp/resources.py
from collections.abc import Iterator
from pathlib import Path
from importlib.resources import files

from justmyresource.types import ResourceContent, ResourcePack


class MyAppResourcePack:
    """Resource pack for application's bundled resources."""

    def __init__(self):
        package = files("myapp.resources")
        self._base_path = Path(str(package))

    def get_resource(self, name: str) -> ResourceContent:
        """Get resource from bundled files."""
        # Try SVG first
        svg_path = self._base_path / f"{name}.svg"
        if svg_path.exists():
            with open(svg_path, "rb") as f:
                return ResourceContent(
                    data=f.read(),
                    content_type="image/svg+xml",
                    encoding="utf-8",
                )
        # Try PNG
        png_path = self._base_path / f"{name}.png"
        if png_path.exists():
            with open(png_path, "rb") as f:
                return ResourceContent(
                    data=f.read(),
                    content_type="image/png",
                )
        raise ValueError(f"Resource not found: {name}")

    def list_resources(self) -> Iterator[str]:
        """List all resources."""
        for path in self._base_path.iterdir():
            if path.suffix in (".svg", ".png"):
                yield path.stem

    def get_prefixes(self) -> list[str]:
        return ["myapp"]  # Optional aliases (pack name is auto-registered)


# myapp/__init__.py
def get_resource_provider():
    """Entry point factory for application's bundled resources."""
    from myapp.resources import MyAppResourcePack
    return MyAppResourcePack()
```
```toml
# pyproject.toml
[project.entry-points."justmyresource.packs"]
"myapp-resources" = "myapp:get_resource_provider"
```
### Third-Party Resource Pack
```python
# my_resource_pack/__init__.py
from my_resource_pack.provider import MyResourcePack


def get_resource_provider():
    """Entry point factory for resource pack."""
    return MyResourcePack()
```
### Using ZippedResourcePack Helper
For packs that bundle resources in a zip file, use the provided helper class:
```python
# my_icon_pack/__init__.py
from justmyresource.pack_utils import ZippedResourcePack


class MyIconPack(ZippedResourcePack):
    def __init__(self):
        super().__init__(
            package_name="my_icon_pack",
            archive_name="icons.zip",
            default_content_type="image/svg+xml",
            prefixes=["myicons"],
        )


def get_resource_provider():
    return MyIconPack()
```
This provides zip reading, caching, manifest support, and error handling out of the box.
## Architecture
JustMyResource follows a unified "Resource Pack" architecture where all resource sources implement the same `ResourcePack` protocol. This ensures:
- **Consistency**: All resources are discovered and resolved the same way
- **Extensibility**: New resource sources can be added via EntryPoints
- **Flat Pack Model**: All packs are equal and are uniquely identified by their FQN (dist/pack)
See `docs/architecture.md` for detailed architecture documentation.
## ResourceContent Type
Resources are returned as `ResourceContent` objects:
```python
@dataclass(frozen=True, slots=True)
class ResourceContent:
    data: bytes                             # Raw resource bytes
    content_type: str                       # MIME type: "image/svg+xml", "image/png", etc.
    encoding: str | None = None             # Encoding for text resources (e.g., "utf-8")
    metadata: dict[str, Any] | None = None  # Optional pack-specific metadata

    @property
    def text(self) -> str:
        """Decode data as text (raises if encoding is None)."""
        ...
```
This wrapper allows consumers to:
- Branch on `content_type` to handle different resource types
- Access text content via `.text` property for text-based resources
- Access pack-specific metadata via `.metadata` dict
- Handle mixed-format packs (e.g., a samples pack with both SVG and PNG)
## Development
### Setup
```bash
# Clone the repository
git clone https://github.com/kws/justmyresource.git
cd justmyresource
# Install with development dependencies
pip install -e ".[dev]"
```
### Running Tests
```bash
# Run tests with coverage
pytest
# Run with coverage report
pytest --cov=justmyresource --cov-report=html
```
### Code Quality
```bash
# Format code
ruff format .
# Lint code
ruff check .
# Type checking
mypy src/
```
## Requirements
- Python 3.10+
- No required dependencies (core library)
- Resource packs may have their own dependencies
## License
MIT License - see LICENSE file for details.
## Contributing
Contributions are welcome! Please read the architecture documentation in `docs/architecture.md` and follow the project philosophy outlined in `AGENTS.md`.
| text/markdown | null | Kaj Siebert <kaj@k-si.com> | null | null | MIT | null | [] | [] | null | null | >=3.10 | [] | [] | [] | [
"coverage>=7.0.0; extra == \"dev\"",
"mypy>=1.0.0; extra == \"dev\"",
"pre-commit>=3.0.0; extra == \"dev\"",
"pytest-cov>=4.0.0; extra == \"dev\"",
"pytest>=7.0.0; extra == \"dev\"",
"ruff>=0.1.0; extra == \"dev\"",
"justmyresource-font-awesome; extra == \"font-awesome\"",
"justmyresource-heroicons; e... | [] | [] | [] | [
"Homepage, https://github.com/kws/justmyresource",
"Repository, https://github.com/kws/justmyresource",
"Issues, https://github.com/kws/justmyresource/issues"
] | uv/0.10.2 {"installer":{"name":"uv","version":"0.10.2","subcommand":["publish"]},"python":null,"implementation":{"name":null,"version":null},"distro":{"name":"macOS","version":null,"id":null,"libc":null},"system":{"name":null,"release":null},"cpu":null,"openssl_version":null,"setuptools_version":null,"rustc_version":null,"ci":null} | 2026-02-19T14:35:21.469603 | justmyresource-1.1.1-py3-none-any.whl | 22,008 | dc/2c/c38d9f7075aea9564792ac70a8c02a78d8af24dbc2ca59768b68dd2cbfa9/justmyresource-1.1.1-py3-none-any.whl | py3 | bdist_wheel | null | false | 4529428cdc465d477f064bddb0eff7ba | bba482ef103f4ff5e4227defffc20bfd2190c6fe247172896e73aaff4d181bb9 | dc2cc38d9f7075aea9564792ac70a8c02a78d8af24dbc2ca59768b68dd2cbfa9 | null | [
"LICENSE"
] | 233 |
2.4 | mergify-cli | 2026.2.19.1 | Mergify CLI is a tool that automates the creation and management of stacked pull requests on GitHub | # Mergify CLI
## Introduction
Mergify CLI is a powerful tool designed to simplify and automate the creation
and management of stacked pull requests on GitHub, as well as uploading CI
results.
### Stacks
Before diving into its functionality, get familiar with the concept of stacked
pull requests and why they matter in the
[documentation](https://docs.mergify.com/stacks/).
### CI
You can learn more about [CI Insights in the
documentation](https://docs.mergify.com/ci-insights/).
## Contributing
We welcome and appreciate contributions from the open-source community to make
this project better. Whether you're a developer, designer, tester, or just
someone with a good idea, we encourage you to get involved.
| text/markdown | null | Mehdi Abaakouk <sileht@mergify.com> | null | null | null | null | [] | [] | null | null | >=3.13 | [] | [] | [] | [
"aiofiles==25.1.0",
"click==8.3.1",
"httpx==0.28.1",
"opentelemetry-exporter-otlp-proto-http==1.39.1",
"opentelemetry-sdk==1.39.1",
"pydantic==2.12.5",
"pyyaml==6.0.3",
"questionary>=2.0.0",
"rich==14.3.2",
"tenacity==9.1.4"
] | [] | [] | [] | [] | twine/6.1.0 CPython/3.13.7 | 2026-02-19T14:35:16.223958 | mergify_cli-2026.2.19.1.tar.gz | 99,507 | 5c/cc/4b4443f1e28c91074c9fe599f549d17b4082023fc7e1b3d5d26cd33bd876/mergify_cli-2026.2.19.1.tar.gz | source | sdist | null | false | 99ce2c07b4e7d38f809c694582109bae | 1c3df6090f0ee0a171a514a3e0d74668e48c097e635117c65ad9e83605012bc2 | 5ccc4b4443f1e28c91074c9fe599f549d17b4082023fc7e1b3d5d26cd33bd876 | Apache-2.0 | [
"LICENSE"
] | 7,259 |
2.4 | roskarl | 3.0.1 | Environment variable helpers | # Roskarl
A **tiny** module for environment variables.
## Requires
Python 3.11.0+
## How to install
```sh
pip install roskarl
```
## Example usage
```python
from roskarl import (
env_var,
env_var_bool,
env_var_cron,
env_var_float,
env_var_int,
env_var_iso8601_datetime,
env_var_list,
env_var_rfc3339_datetime,
env_var_tz,
env_var_dsn,
DSN
)
```
All functions return `None` if the variable is not set. An optional `default` parameter can be provided to return a fallback value instead.
```python
value = env_var(name="STR_VAR", default="fallback")
```
### str
```python
value = env_var(name="STR_VAR")
```
returns **`str`**
### bool
```python
value = env_var_bool(name="BOOL_VAR")
```
returns **`bool`** — accepts `true` or `false` (case insensitive)
### tz
```python
value = env_var_tz(name="TZ_VAR")
```
returns **`str`** if value is a valid IANA timezone (e.g. `Europe/Stockholm`)
### list
```python
value = env_var_list(name="LIST_VAR", separator="|")
```
returns **`list[str]`** if value is splittable by separator
### int
```python
value = env_var_int(name="INT_VAR")
```
returns **`int`** if value is numeric
### float
```python
value = env_var_float(name="FLOAT_VAR")
```
returns **`float`** if value is a float
### cron
```python
value = env_var_cron(name="CRON_EXPRESSION_VAR")
```
returns **`str`** if value is a valid cron expression
### datetime (ISO8601)
```python
value = env_var_iso8601_datetime(name="DATETIME_VAR")
```
returns **`datetime`** if value is a valid ISO8601 datetime string — timezone is optional
```
2026-01-01T00:00:00
2026-01-01T00:00:00+00:00
```
### datetime (RFC3339)
```python
value = env_var_rfc3339_datetime(name="DATETIME_VAR")
```
returns **`datetime`** if value is a valid [RFC3339](https://www.rfc-editor.org/rfc/rfc3339) datetime string — timezone is required
```
2026-01-01T00:00:00+00:00
```
### DSN
> **Note:** Special characters in passwords must be URL-encoded.
```python
from urllib.parse import quote
password = 'My$ecret!Pass@2024'
encoded = quote(password, safe='')
print(encoded) # My%24ecret%21Pass%402024 <--- use this
```
```python
value = env_var_dsn(name="DSN_VAR")
```
returns **`DSN`** object if value is a valid DSN string, formatted as:
```
postgresql://username:password@hostname:5432/database_name
```
The `DSN` object exposes the following attributes:
| Attribute | Type | Example |
|------------|-------|----------------------|
| `scheme` | `str` | `postgresql` |
| `host` | `str` | `hostname` |
| `port` | `int` | `5432` |
| `username` | `str` | `username` |
| `password` | `str` | `password` |
| `database` | `str` | `database_name` |
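For orientation, a DSN of this shape maps onto the standard library's `urlsplit` components as shown below; this is an illustration of the format, not roskarl's implementation:

```python
from urllib.parse import urlsplit

url = urlsplit("postgresql://username:password@hostname:5432/database_name")

scheme = url.scheme               # 'postgresql'
host = url.hostname               # 'hostname'
port = url.port                   # 5432
username = url.username           # 'username'
password = url.password           # 'password'
database = url.path.lstrip("/")   # 'database_name'
```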
---
## Marshal
Marshals environment variables into typed configuration objects. Requires `croniter`:
```sh
pip install croniter
```
```python
from roskarl.marshal import load_env_config
env = load_env_config()
```
Raises `ValueError` if both `CRON_ENABLED` and `BACKFILL_ENABLED` are `true`.
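The mutual-exclusion check behaves roughly like this sketch (hypothetical code, not roskarl's implementation):

```python
import os


def check_modes() -> None:
    """Reject configurations where both modes are switched on."""
    cron = os.environ.get("CRON_ENABLED", "false").lower() == "true"
    backfill = os.environ.get("BACKFILL_ENABLED", "false").lower() == "true"
    if cron and backfill:
        raise ValueError("CRON_ENABLED and BACKFILL_ENABLED are mutually exclusive")


os.environ["CRON_ENABLED"] = "true"
os.environ["BACKFILL_ENABLED"] = "true"
try:
    check_modes()
except ValueError as exc:
    print(exc)  # CRON_ENABLED and BACKFILL_ENABLED are mutually exclusive
```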
### Env vars
| Env var | Type | Description |
|---|---|---|
| `MODEL_NAME` | `str` | Model name |
| `CRON_ENABLED` | `bool` | Enable cron mode |
| `CRON_EXPRESSION` | `str` | Valid cron expression |
| `BACKFILL_ENABLED` | `bool` | Enable backfill mode |
| `BACKFILL_SINCE` | `datetime` | ISO8601 UTC datetime |
| `BACKFILL_UNTIL` | `datetime` | ISO8601 UTC datetime |
| `BACKFILL_BATCH_SIZE` | `int` | Batch size |
### CronConfig
`since` and `until` are derived from `CRON_EXPRESSION` based on the latest fully elapsed interval — e.g. `0 * * * *` at 14:35 → `since=13:00, until=14:00`.
### BackfillConfig
`since` and `until` read from env as ISO8601 UTC datetimes. `CRON_ENABLED` and `BACKFILL_ENABLED` are mutually exclusive.
### `with_env_config`
A decorator that calls `load_env_config()` and injects the result as the first argument. Useful for pipeline entrypoints.
```python
from roskarl.decorators import with_env_config
from roskarl.marshal import EnvConfig


@with_env_config
def run(env: EnvConfig) -> None:
    if env.backfill.enabled:
        run_backfill(
            model=env.model_name,
            since=env.backfill.since,
            until=env.backfill.until,
            batch_size=env.backfill.batch_size,
        )
    else:
        run_incremental(
            model=env.model_name,
            since=env.cron.since,
            until=env.cron.until,
        )


run()
```
| text/markdown | null | Erik Bremstedt <erik.bremstedt@gmail.com> | null | null | null | null | [
"Programming Language :: Python :: 3",
"License :: OSI Approved :: MIT License",
"Operating System :: OS Independent"
] | [] | null | null | >=3.11.0 | [] | [] | [] | [
"croniter>0.3.0"
] | [] | [] | [] | [
"Homepage, https://github.com/ebremstedt/roskarl",
"Issues, https://github.com/ebremstedt/roskarl/issues"
] | twine/6.2.0 CPython/3.14.3 | 2026-02-19T14:34:47.039108 | roskarl-3.0.1.tar.gz | 8,379 | 43/f6/bc0b511294239c211168c0c0835c84542d2e06ab469bce1b46329197e0cd/roskarl-3.0.1.tar.gz | source | sdist | null | false | 04803c6f112782e6c3346b086c97d3f8 | fb5b410932ef1434fc63ddc9cf16cbc13bfc427904e382215aad7d0e35e777cb | 43f6bc0b511294239c211168c0c0835c84542d2e06ab469bce1b46329197e0cd | null | [
"LICENSE"
] | 196 |
2.4 | adaptera | 0.1.2 | A local-first LLM orchestration library | # Adaptera 🌌
A local-first LLM orchestration library with native support for Hugging Face, PEFT/LoRA, and QLoRA that keeps the model exposed and gives advanced users full control.
---
> **Note:** This project is in its early development phase and may undergo significant changes. However, the core goal of providing local LLM processing will remain consistent. Once the agentic part of the module is stable, we will work on a fine-tuner for it so that this library can be used as a quick way to prototype local agentic models.
>
> Feel free to contribute, but please do not spam pull requests. Any and all help is deeply appreciated.
---
## Features
- **Local-First**: Built for running LLMs on your own hardware efficiently.
- **Native PEFT/QLoRA**: Seamless integration with Hugging Face's PEFT for efficient model loading.
- **Persistent Memory**: Vector-based memory using FAISS with automatic text embedding (SLM).
- **Strict ReAct Agents**: Deterministic agent loops using JSON-based tool calls.
- **Model Transparency**: Easy access to the underlying Hugging Face model and tokenizer.
## Installation
### Using python
```bash
pip install adaptera
```
### Using Anaconda/Miniforge
```bash
conda activate < ENV NAME >
pip install adaptera
```
*(Note: Requires Python 3.12+)*
## Quick Start
```python
from adaptera import Agent, AdapteraModel, VectorDB, Tool

# 1. Initialize Vector Memory
db = VectorDB(index_file="memory.index")

# 2. Load a Model (with 4-bit quantization)
model = AdapteraModel(
    model_name="unsloth/Llama-3.2-3B-Instruct",
    quantization="4bit",
    vector_db=db
)


# 3. Define Tools
def add(a, b):
    """Adds two numbers together"""
    return a + b


tools = [
    Tool(name="add", func=add, description="Adds two numbers together. Input: 'a,b'")
]

# 4. Create and Run Agent
agent = Agent(model, tools=tools)
print(agent.run("What is 15 + 27?"))
```
## Project Structure
- `adaptera/chains/`: Agentic workflows and ReAct implementations.
- `adaptera/model/`: Hugging Face model loading and generation wrappers.
- `adaptera/memory/`: FAISS-backed persistent vector storage.
- `adaptera/tools/`: Tool registry and definition system.
- `adaptera/experimental/`: Experimental features
## Non-goals
This library does not aim to be a full ML framework or replace existing tools like LangChain. It focuses on providing a clean, minimal interface for local-first LLM orchestration.
| text/markdown | Sylo | null | null | null | null | llm, orchestration, local-first, ai, machine-learning | [
"Development Status :: 3 - Alpha",
"Intended Audience :: Developers",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.12",
"Topic :: Scientific/Engineering :: Artificial Intelligence"
] | [] | null | null | >=3.12 | [] | [] | [] | [
"torch>=2.0.0",
"transformers>=4.40.0",
"peft>=0.10.0",
"accelerate>=0.29.0",
"numpy>=1.24.0",
"faiss-cpu>=1.8.0",
"pytest; extra == \"dev\"",
"build; extra == \"dev\"",
"twine; extra == \"dev\""
] | [] | [] | [] | [] | twine/6.2.0 CPython/3.12.12 | 2026-02-19T14:34:42.957019 | adaptera-0.1.2.tar.gz | 13,668 | 96/81/5fd3b2c6fcde8907f172a05486be316b2af038e02905f61d48a10b91ae36/adaptera-0.1.2.tar.gz | source | sdist | null | false | 1c87e4856ce8634f3137264e7d5dfaec | 0ab11ee0781e87557a30afd48dc8d263097203d741cfa4589b5f917b79bfbf39 | 96815fd3b2c6fcde8907f172a05486be316b2af038e02905f61d48a10b91ae36 | MIT | [
"LICENSE"
] | 232 |
2.4 | ynu-m-contest-client | 0.1.1 | Python client SDK for contest-server | # contest-client
Python SDK for the `contest-server`.
## Install (dev)
```bash
cd contest_client
python -m venv .venv
source .venv/bin/activate
pip install -U pip
pip install -e .
```
## Usage
The client supports scenarios of user registration, login, and submission against a contest prepared on the server side (the `contest_id` is assumed to be obtained by the operators from the output of running `contest-provision --spec ...` and distributed as needed). Creating contests or participants (teams) is not possible from the client.
```python
from contest_client import ContestClient
c = ContestClient("http://127.0.0.1:8000")
# contest_id is shared by the operators in advance
contest_id = "cst_xxxxxx"
# 1) Initial registration
reg = c.register_user(contest_id, user_name="alice", password="secret123")
print("user_id:", reg["user_id"])
print("api_key (use for submissions):", reg["api_key"])
# 2) Log in (rotates the API key and returns the new one)
login = c.login(contest_id, user_name="alice", password="secret123")
api_key = login["api_key"]
# 3) Submit
res = c.submit(contest_id, user_name="alice", api_key=api_key, x={"x1": 0.1, "x2": 0.7, "x3": 0.9})
print(res)
print(c.leaderboard(contest_id))
```
| text/markdown | null | null | null | null | null | null | [] | [] | null | null | >=3.9 | [] | [] | [] | [
"requests>=2.31"
] | [] | [] | [] | [] | twine/6.2.0 CPython/3.12.3 | 2026-02-19T14:32:15.715805 | ynu_m_contest_client-0.1.1.tar.gz | 2,544 | fb/8f/7b18463ba5944e8862437bf56de602c413b6c5a0eb0d27811b19204880db/ynu_m_contest_client-0.1.1.tar.gz | source | sdist | null | false | c66d491839839e68988da2269eeaffb9 | 544457c6e5a7d8292aa9065beb99c9ba17eb262708651b8622f2d5dfdf08c7b3 | fb8f7b18463ba5944e8862437bf56de602c413b6c5a0eb0d27811b19204880db | null | [] | 204 |
2.4 | propre-cli | 0.1.1 | Analyze, restructure, and harden vibe-coded projects for production. Architecture-level linting, secret detection, and code quality enforcement in one CLI. | # Propre CLI
**Propre** is a post-vibe-coding cleanup and hardening CLI.
It analyzes AI-generated or rapidly prototyped codebases and prepares them for real-world use by detecting structural issues, unsafe patterns, and production blockers — then optionally fixing them.
> From messy prototype → production-ready project.
---
## Installation
```bash
pip install propre-cli
```
---
## Quick Start
Analyze a project:
```bash
propre scan .
```
Auto-fix safe issues:
```bash
propre fix .
```
Check production readiness:
```bash
propre ship .
```
Generate a report:
```bash
propre report . -o report.md
```
---
## Commands
| Command | Description |
| ------------------------------ | ------------------------------ |
| `propre scan [path]` | Full analysis (no changes) |
| `propre fix [path]` | Auto-fix safe issues |
| `propre res [path]` | Project restructuring only |
| `propre sec [path]` | Secret scanning only |
| `propre ship [path]` | Production readiness checklist |
| `propre report [path] -o file` | Export full report |
---
## Common Workflows
### Before committing AI-generated code
```bash
propre scan .
```
### Before opening a pull request
```bash
propre fix .
propre ship .
```
### CI safety check
```bash
propre scan . --ci
```
### Security audit
```bash
propre sec . --deep-scan
```
---
## Global Flags
| Flag | Description |
| --------------------- | -------------------------------------- |
| `--dry-run` | Preview changes without applying |
| `--verbose` / `-v` | Detailed output |
| `--config propre.yml` | Custom rules |
| `--fix` | Apply safe fixes outside `fix` command |
| `--ignore <pattern>` | Exclude paths |
| `--ci` | Exit non-zero if blockers found |
| `--format terminal\|md\|json\|sarif` | Output format |
---
## Configuration
Create a `propre.yml` at your project root:
```yaml
stack: auto
restructure:
enabled: true
confirm: true
secrets:
deep_scan: false
severity_threshold: medium
rules:
dead_code: warn
console_logs: error
missing_types: warn
hardcoded_config: error
ignore:
- node_modules/
- .git/
- dist/
```
---
## What Propre Detects
- Dead code & unused files
- Debug logs & leftover prints
- Missing typing
- Hardcoded configuration
- Project structure issues
- Secrets & credentials
- Production blockers
---
## CI Integration
Example GitHub Action step:
```bash
propre scan . --ci
```
The command exits with a non-zero code if blocking issues are found.
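The step above might appear in a workflow roughly like the following sketch. The workflow, job, and step names are placeholders (not from the propre docs), and the action versions are assumptions; adjust to your setup:

```yaml
# Hypothetical GitHub Actions workflow sketch.
name: propre-check
on: [pull_request]
jobs:
  propre:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-python@v5
        with:
          python-version: "3.12"
      - run: pip install propre-cli
      - run: propre scan . --ci  # fails the job if blockers are found
```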
---
## Philosophy
Modern coding workflows generate code faster than they validate it.
Propre acts as the **final safety layer** between experimentation and deployment:
- AI coding assistants create
- Developers iterate
- **Propre hardens**
---
## License
MIT
| text/markdown | null | immerSIR <immersir223@gmail.com> | null | immerSIR <immersir223@gmail.com> | null | cli, code-quality, security, secret-detection, refactoring, restructuring, developer-tools, architecture, linting, production-readiness, vibe-coding | [
"Development Status :: 3 - Alpha",
"Environment :: Console",
"Intended Audience :: Developers",
"Operating System :: OS Independent",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3 :: Only",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12"... | [] | null | null | >=3.11 | [] | [] | [] | [
"pyyaml>=6.0.1",
"rich>=13.7.1",
"typer>=0.12.5",
"tree-sitter>=0.22.3; extra == \"parsers\"",
"tree-sitter-languages>=1.10.2; extra == \"parsers\""
] | [] | [] | [] | [
"Homepage, https://github.com/immerSIR/propre-cli",
"Documentation, https://github.com/immerSIR/propre-cli#readme",
"Repository, https://github.com/immerSIR/propre-cli",
"Issues, https://github.com/immerSIR/propre-cli/issues",
"Changelog, https://github.com/immerSIR/propre-cli/blob/main/CHANGELOG.md"
] | uv/0.8.7 | 2026-02-19T14:31:34.185936 | propre_cli-0.1.1.tar.gz | 26,124 | 5c/85/22d085107a9060d57f6a0324c0595114de182abbcee64096dd931c308fa4/propre_cli-0.1.1.tar.gz | source | sdist | null | false | 43505ca3353bdccdc2e5fcf1a43818dd | b54e01457dfa3b0d064cfedb7971cee41c53da0df8f31b1109542054f2def3cc | 5c8522d085107a9060d57f6a0324c0595114de182abbcee64096dd931c308fa4 | MIT | [
"LICENSE"
] | 215 |
2.4 | JoyPi-RGB-Matrix-RaspberryPi | 1.0.0 | This library enables usage of RGB matrix with RP2040 microcontroller chip on the Raspberry Pi. | # JoyPi_RGB_Matrix_RaspberryPi
This library enables usage of RGB matrix with RP2040 microcontroller chip on the Raspberry Pi.
>[!WARNING]
> This library was written for use with the LED matrix of the Joy-Pi Advanced 2 and Joy-Pi Note 2. Both Joy-Pis use the RP2040 microcontroller chip to control the LED matrix. This ensures compatibility with the Raspberry Pi 5, because the previously used library is not compatible with it to date.
## Installation
You can install this library from PyPI.
To install it for the current user on your Raspberry Pi, use the following command:
```
pip install JoyPi_RGB_Matrix_RaspberryPi
```
## Library Guide
- `LEDMatrix(count=64, brightness=10, right_border=[7,15,23,31,39,47,55,63], left_border=[0,8,16,24,32,40,48,56])` - initializes the LED matrix with default values for the Joy-Pi
- `clean()` - clears the LED matrix
- `setPixel(position, colour)` - sets specific pixel to a selected colour
- `RGB_on(colour)` - sets the complete matrix to one selected colour
- `RGB_off()` - turns the complete matrix off
- `rainbow(wait_ms=20, iterations=1)` - rainbow effect on the whole matrix with default values
- `colourWipe(colour, wait_ms=50)` - moves the selected colour onto the matrix pixel by pixel at the default speed
- `theaterChase(colour, wait_ms=50, iterations=10)` - chaser animation in a selected colour at the default speed
- `show()` - displays set pixels
- `demo1()` - demo program version 1
- `demo2()` - demo program version 2 | text/markdown | null | Joy-IT <service@joy-it.net> | null | null | null | null | [
"Operating System :: OS Independent",
"Programming Language :: Python :: 3"
] | [] | null | null | >=3.9 | [] | [] | [] | [] | [] | [] | [] | [
"Homepage, https://github.com/joy-it/JoyPi_RGB_Matrix_RaspberryPi",
"Issues, https://github.com/joy-it/JoyPi_RGB_Matrix_RaspberryPi/issues"
] | twine/6.2.0 CPython/3.11.9 | 2026-02-19T14:30:42.840591 | joypi_rgb_matrix_raspberrypi-1.0.0.tar.gz | 7,189 | 24/05/becdf3ab9aa545f883febb27ffaa34927e9be01656bd8314f775f122b881/joypi_rgb_matrix_raspberrypi-1.0.0.tar.gz | source | sdist | null | false | 1c7767abee3bebc7995ccf19ae59f362 | 428df795b39d43462a9aacfba4c796645cadedbce20c6752a0422576d9a6a914 | 2405becdf3ab9aa545f883febb27ffaa34927e9be01656bd8314f775f122b881 | null | [
"LICENSE"
] | 0 |
2.3 | dijkies | 0.1.10 | A python framework that can be used to create, test and deploy trading algorithms. | 
# Dijkies
**Dijkies** is a Python framework for creating, testing, and deploying algorithmic trading strategies in a clean, modular, and exchange-agnostic way.
The core idea behind Dijkies is to **separate trading logic from execution and infrastructure**, allowing the same strategy code to be reused for:
- Historical backtesting
- Paper trading
- Live trading
## Philosophy
In Dijkies, a strategy is responsible only for **making decisions** — when to buy, when to sell, and how much. Everything else, such as order execution, fee calculation, balance management, and exchange communication, is handled by dedicated components.
This separation ensures that strategies remain:
- Easy to reason about
- Easy to test
- Easy to reuse across environments
A strategy written once can be backtested on historical data and later deployed to a real exchange without modification.
## Key Design Principles
- **Strategy–Executor separation**
Trading logic is completely decoupled from execution logic.
- **Single interface for backtesting and live trading**
Switching between backtesting and live trading requires no strategy changes.
- **Explicit state management**
All balances and positions are tracked in a transparent `State` object.
- **Minimal assumptions**
Dijkies does not enforce indicators, timeframes, or asset types.
- **Composable and extensible**
New exchanges, execution models, and risk layers can be added easily.
## Who Is This For?
Dijkies is designed for:
- Data Scientists building algorithmic trading systems
- Quantitative traders who want full control over strategy logic
- Anyone who wants to move from backtesting to production without focusing on infrastructure
## What Dijkies Is Not
- A no-code trading bot
- A black-box strategy optimizer
- A fully managed trading platform
Dijkies provides the **building blocks**, not the trading edge.
---
## Quick Start
This quick start shows how to define a strategy, fetch market data, and run a backtest in just a few steps.
### 1. Define a Strategy
A strategy is a class that inherits from `Strategy` and implements the `execute` and `get_data_pipeline` methods.
The `execute` method receives a pandas DataFrame that should contain at least the `open`, `high`, `low`, `close`, `volume` and `candle_time` columns.
More columns can be added and used within your trading algorithm; the engineering of these columns falls outside the scope of dijkies.
This data is then used to define and execute actions. The following actions are available (see docstrings in dijkies.interfaces.ExchangeAssetClient for more info):
- `place limit buy order`
- `place limit sell order`
- `place market buy order`
- `place market sell order`
- `cancel (limit) order`
- `get order information`
- `get account balance`
Below is an example implementation of an RSI strategy:
```python
from dijkies.executors import BacktestExchangeAssetClient
from dijkies.exchange_market_api import BitvavoMarketAPI
from dijkies.interfaces import (
    Strategy,
    DataPipeline,
    ExchangeAssetClient
)
from dijkies.entities import State
from dijkies.data_pipeline import OHLCVDataPipeline
from ta.momentum import RSIIndicator
from pandas.core.frame import DataFrame as PandasDataFrame
class RSIStrategy(Strategy):
    analysis_dataframe_size_in_minutes = 60 * 24 * 30
    min_order_amount = 10

    def __init__(
        self,
        executor: ExchangeAssetClient,
        lower_threshold: float,
        higher_threshold: float,
    ) -> None:
        self.lower_threshold = lower_threshold
        self.higher_threshold = higher_threshold
        super().__init__(executor)

    def execute(self, candle_df: PandasDataFrame) -> None:
        candle_df["momentum_rsi"] = RSIIndicator(candle_df.close).rsi()
        previous_candle = candle_df.iloc[-2]
        current_candle = candle_df.iloc[-1]

        is_buy_signal = (
            previous_candle.momentum_rsi > self.lower_threshold
            and current_candle.momentum_rsi < self.lower_threshold
        )
        if (
            is_buy_signal
            and self.state.quote_available > self.min_order_amount
        ):
            self.executor.place_market_buy_order(self.state.quote_available)

        is_sell_signal = (
            previous_candle.momentum_rsi < self.higher_threshold
            and current_candle.momentum_rsi > self.higher_threshold
        )
        if (
            is_sell_signal
            and self.state.base_available * candle_df.iloc[-1].close > self.min_order_amount
        ):
            self.executor.place_market_sell_order(self.state.base_available)

    def get_data_pipeline(self) -> DataPipeline:
        return OHLCVDataPipeline(
            BitvavoMarketAPI(),
            self.state.base,
            60,
            60 * 24 * 7,
        )
```
### 2. Fetch Data for Your Backtest
Market data is provided as a pandas DataFrame containing OHLCV candles.
```python
from dijkies.exchange_market_api import BitvavoMarketAPI
bitvavo_market_api = BitvavoMarketAPI()
candle_df = bitvavo_market_api.get_candles()
```
### 3. Set Up the State and Backtest Executor
The final steps involve initializing a state and a backtest executor. The state keeps track of the assets the strategy is managing;
it stays in sync with the real state of the account at the exchange and is used as the information source for decision making.
The backtest executor is a mock for the execution of actions and is replaced by a real exchange executor in live trading. The `backtest` method returns a pandas DataFrame containing all the important information about the backtest, for instance a time series of asset amounts, which buy orders are open, and the total number of transactions made so far; the full list can be found in the performance module.
```python
# do backtest
fee_limit_order = 0.0015
fee_market_order = 0.0025
start_investment_base = 0
start_investment_quote = 1000
state = State(
    base="XRP",
    total_base=start_investment_base,
    total_quote=start_investment_quote
)

executor = BacktestExchangeAssetClient(
    state,
    fee_limit_order=fee_limit_order,
    fee_market_order=fee_market_order
)

strategy = RSIStrategy(
    executor,
    35,
    65,
)
results = strategy.backtest(candle_df)
```
## Deployment & Live Trading
Dijkies supports deploying strategies to live trading environments using the **same strategy code** that is used for backtesting. Deployment is built around a small set of composable components that handle persistence, credentials, execution switching, and bot lifecycle management.
At a high level, deployment works by:
1. Persisting a configured strategy
2. Attaching a live exchange executor
3. Running the strategy via a `Bot`
4. Managing lifecycle states such as *active*, *paused*, and *stopped*
---
## Core Deployment Concepts
### Strategy Persistence
Strategies are **serialized and stored** so they can be resumed, paused, or stopped without losing state.
This includes:
- Strategy parameters
- Internal indicators or buffers
- Account state (balances, open orders, etc.)
Persistence is handled through a `StrategyRepository`.
---
### Strategy Status
Each deployed strategy (bot) exists in one of the following states:
- **active** — strategy is running normally
- **paused** — strategy execution stopped due to an error
- **stopped** — strategy has been intentionally stopped
Status transitions are managed automatically by the deployment system.
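To make the lifecycle concrete, here is a minimal plain-Python sketch of the transitions described above (function and table names are hypothetical; dijkies manages these transitions internally):

```python
# Illustrative state machine for the bot lifecycle: active / paused / stopped.
VALID_TRANSITIONS = {
    ("active", "paused"),   # execution error pauses the bot
    ("active", "stopped"),  # intentional stop
    ("paused", "active"),   # resume after fixing the issue
    ("paused", "stopped"),  # stop a paused bot
}

def change_status(current: str, new: str) -> str:
    if (current, new) not in VALID_TRANSITIONS:
        raise ValueError(f"illegal transition: {current} -> {new}")
    return new

status = change_status("active", "paused")  # e.g. after an exception
```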
---
### Executor Switching
One of Dijkies’ key design goals is that **strategies do not know whether they are backtesting or live trading**.
At deployment time, the executor is injected dynamically:
- `BacktestExchangeAssetClient` for backtesting
- `BitvavoExchangeAssetClient` for live trading
No strategy code changes are required.
---
## Strategy Repository
The `StrategyRepository` abstraction defines how strategies are stored and retrieved.
```python
class StrategyRepository(ABC):
    def store(...)
    def read(...)
    def change_status(...)
```
### LocalStrategyRepository
The provided implementation stores strategies locally using pickle.
#### Directory Structure
```
root/
└── person_id/
    └── exchange/
        └── status/
            └── bot_id.pkl
```
```python
from pathlib import Path
from dijkies.deployment import LocalStrategyRepository
repo = LocalStrategyRepository(Path("./strategies"))
# read
strategy = repo.read(
    person_id="ArnoldDijk",
    exchange="bitvavo",
    bot_id="berend_botje",
    status="active"
)

# store
repo.store(
    strategy=strategy,
    person_id="ArnoldDijk",
    exchange="bitvavo",
    bot_id="berend_botje",
    status="active"
)

# change status
repo.change_status(
    person_id="ArnoldDijk",
    exchange="bitvavo",
    bot_id="berend_botje",
    from_status="active",
    to_status="stopped",
)
```
This makes it easy to:
- Resume bots after restarts
- Inspect stored strategies
- Build higher-level orchestration around the filesystem
## Credentials Management
Live trading requires exchange credentials. These are abstracted behind a CredentialsRepository.
```python
class CredentialsRepository(ABC):
    def get_api_key(...)
    def get_api_secret_key(...)
```
The local implementation retrieves credentials from environment variables:
```bash
export ArnoldDijk_bitvavo_api_key="..."
export ArnoldDijk_bitvavo_api_secret_key="..."
```
```python
from dijkies.deployment import LocalCredentialsRepository
credentials_repository = LocalCredentialsRepository()
bitvavo_api_key = credentials_repository.get_api_key(
    person_id="ArnoldDijk",
    exchange="bitvavo"
)
```
This keeps secrets out of source code and allows standard deployment practices (Docker, CI/CD, etc.).
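The environment-variable names above suggest a `{person_id}_{exchange}_api_key` convention. A hedged sketch of such a lookup in plain Python (illustrative only, not the library's actual implementation):

```python
import os

# Illustrative environment-backed credentials lookup following the
# "{person_id}_{exchange}_api_key" naming convention shown above.
def get_api_key(person_id: str, exchange: str) -> str:
    return os.environ[f"{person_id}_{exchange}_api_key"]

os.environ["ArnoldDijk_bitvavo_api_key"] = "foo"  # as exported above
print(get_api_key("ArnoldDijk", "bitvavo"))  # -> foo
```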
## The Bot
The Bot class is the runtime orchestrator responsible for:
- Loading a stored strategy
- Injecting the correct executor
- Running or stopping the strategy
- Handling failures and state transitions
### Running the Bot
```python
bot.run(
    person_id="ArnoldDijk",
    exchange="bitvavo",
    bot_id="berend_botje",
    status="active",
)
```
What happens internally:
1. The state of the strategy is loaded from the repository
2. The executor is replaced with a live exchange client
3. The strategy’s data pipeline is executed
4. strategy.run() is called
5. The new state of the strategy is persisted
If an exception occurs:
1. The strategy is stored
2. The bot is automatically moved to paused
### Stopping a Bot
Bots can be stopped gracefully using the stop method.
```python
bot.stop(
    person_id="ArnoldDijk",
    exchange="bitvavo",
    bot_id="berend_botje",
    status="active",
    asset_handling="quote_only",
)
```
#### Asset Handling Options
When stopping a bot, you must specify how assets should be handled:
`quote_only`
Sell all base assets and remain in quote currency
`base_only`
Buy base assets using all available quote currency
`ignore`
Leave balances unchanged
Before stopping, the bot:
1. Cancels all open orders
2. Handles assets according to the selected mode
3. Persists the final state
4. Moves the bot to stopped
If anything fails, the bot is moved to paused.
## Deployment locally Quickstart
In this example, we continue from the RSI strategy defined earlier.
We ended at the point where the backtest was executed. Now suppose we decide to use this algorithm with real money.
Then we have to deploy the strategy. In this example we will deploy locally.
### Step 1: Prepare the Strategy for Deployment -> Create a Strategy Repository and store your strategy
```python
from pathlib import Path
from dijkies.deployment import LocalStrategyRepository
strategy_repository = LocalStrategyRepository(
    root_directory=Path("./strategies")
)

# adjust state to what you want to invest.
strategy.state = State(
    base="BTC",
    total_base=0,
    total_quote=13  # let's invest 13 euros initially
)

strategy_repository.store(
    strategy=strategy,
    person_id="ArnoldDijk",
    exchange="bitvavo",
    bot_id="berend_botje",
    status="active",
)
```
This serializes the strategy and its state so it can be resumed later.
### Step 2: Configure Exchange Credentials
Set your exchange credentials as environment variables:
```bash
export ArnoldDijk_bitvavo_api_key="foo"
export ArnoldDijk_bitvavo_api_secret_key="bar"
```
### Step 3: Create the Bot Runtime
The Bot orchestrates loading, execution, and lifecycle management.
```python
from dijkies.deployment import Bot, LocalCredentialsRepository
credentials_repository = LocalCredentialsRepository()
bot = Bot(
    strategy_repository=strategy_repository,
    credential_repository=credentials_repository,
)
```
### Step 4: Run the Strategy Live
Start the live trading bot:
```python
bot.run(
    person_id="ArnoldDijk",
    exchange="bitvavo",
    bot_id="berend_botje",
    status="active",
)
```
What Happens Under the Hood:
1. The strategy is loaded from disk
2. The backtest executor is replaced with BitvavoExchangeAssetClient where API credentials are injected
3. The strategy’s data pipeline fetches live market data
4. `strategy.run()` executes the decision logic; orders are executed on the exchange and the state is modified accordingly
5. The strategy is persisted (the executor and credentials are not included)
If an exception occurs, the bot is automatically moved to paused.
The strategy should be run repeatedly, say every 60 minutes. There are plenty of ways to accomplish this; below is a very
basic example:
```python
import time
from datetime import datetime, timezone
while True:
    try:
        print("running bot cycle at ", datetime.now(tz=timezone.utc))
        bot.run(
            person_id="ArnoldDijk",
            exchange="bitvavo",
            bot_id="berend_botje",
            status="active",
        )
        print("bot cycle finished")
    except Exception as e:
        print("an error occurred: ", e)
    t = datetime.now(tz=timezone.utc)
    minutes_left = 60 - t.minute
    time.sleep((minutes_left - 1) * 60 + (60 - t.second))
```
However, it is much better to use orchestration tools like Apache Airflow. Many bots can be run in parallel using the fan-in/fan-out principle.
| text/markdown | Arnold Dijk | Arnold Dijk <arnold.dijk@teamrockstars.nl> | null | null | MIT | null | [] | [] | null | null | >=3.11 | [] | [] | [] | [
"pandas>=2.3.3",
"pydantic>=2.12.5",
"python-binance>=1.0.33",
"python-bitvavo-api>=1.4.3"
] | [] | [] | [] | [
"Homepage, https://github.com/ArnoldDijk/dijkies",
"Source, https://github.com/ArnoldDijk/dijkies",
"Issues, https://github.com/ArnoldDijk/dijkies/issues",
"Documentation, https://github.com/ArnoldDijk/dijkies/blob/dev/README.md"
] | uv/0.9.14 {"installer":{"name":"uv","version":"0.9.14","subcommand":["publish"]},"python":null,"implementation":{"name":null,"version":null},"distro":{"name":"macOS","version":null,"id":null,"libc":null},"system":{"name":null,"release":null},"cpu":null,"openssl_version":null,"setuptools_version":null,"rustc_version":null,"ci":null} | 2026-02-19T14:29:14.932317 | dijkies-0.1.10-py3-none-any.whl | 20,885 | e6/ce/572aea6031b3d3a07ed5659883e08096c014e96de94028105ef83dd4d21f/dijkies-0.1.10-py3-none-any.whl | py3 | bdist_wheel | null | false | 6efe6855c3996f014d10e2e83ba84a54 | 897930a37ce4ce53db0b3fb3070a1c74089824edff3f54b287ac395dc899880f | e6ce572aea6031b3d3a07ed5659883e08096c014e96de94028105ef83dd4d21f | null | [] | 221 |
2.4 | psa-strategy-core | 0.1.0 | Pure PSA computation core | # psa-strategy-core
Pure computation core for PSA strategy evaluation.
| text/markdown | null | null | null | null | null | null | [] | [] | null | null | >=3.11 | [] | [] | [] | [] | [] | [] | [] | [] | uv/0.10.4 {"installer":{"name":"uv","version":"0.10.4","subcommand":["publish"]},"python":null,"implementation":{"name":null,"version":null},"distro":{"name":"Ubuntu","version":"24.04","id":"noble","libc":null},"system":{"name":null,"release":null},"cpu":null,"openssl_version":null,"setuptools_version":null,"rustc_version":null,"ci":true} | 2026-02-19T14:28:52.149897 | psa_strategy_core-0.1.0-py3-none-any.whl | 7,455 | 86/21/2f681f79b415fea245c72b7854f769efec895befbcf61ec77f5f04559901/psa_strategy_core-0.1.0-py3-none-any.whl | py3 | bdist_wheel | null | false | d9e048fc2909d1bb1f2a920ca8472d25 | d9f0186a2e9c814072390c040ebc9b6aa00dbf6b5c8e4d38e442f164436313c3 | 86212f681f79b415fea245c72b7854f769efec895befbcf61ec77f5f04559901 | null | [] | 250 |
2.4 | kernax-ml | 0.4.4a0 | A JAX-based kernel library for Gaussian Processes with automatic differentiation and composable operations | # Kernax - The blazing-fast kernel library that scales 🚀
Kernax is a Python package providing **efficient mathematical kernel implementations** for *probabilistic machine
learning models*, built with the **JAX framework**.
Kernels are critical elements of probabilistic models, used in many inner-most loops to compute giant matrices and
optimise numerous hyper-parameters. This library therefore emphasises efficient, modular and scalable kernel
implementations, with the following features:
- **JIT-compiled** computations for fast execution on CPU, GPU and TPU
- Kernels structured as **Equinox Modules** (aka *PyTrees*) which means...
- They can be sent to (jitted) functions **as parameters**
- Their **hyper-parameters can be optimised via autodiff**
- They can **be vectorised-on** with `vmap`
- **Composable kernels** through operator overloading (`+`, `*`, `-`)
- **Kernel wrappers** to scale to higher dimensions (batch or block of covariance matrices)
- **NaN-aware computations** for working with padded/masked data
> **⚠️ Project Status**: Kernax is in early development. The API may change, and some features are still experimental.
## Installation
Install from PyPI:
```bash
pip install kernax-ml
```
Or clone the repository for development:
```bash
git clone https://github.com/SimLej18/kernax-ml
cd kernax-ml
```
**Requirements**:
- Python >= 3.12
- JAX >= 0.6.2
**Using Conda** (recommended):
```bash
conda create -n kernax-ml python=3.12
conda activate kernax-ml
pip install -e .
```
**Using pip**:
```bash
pip install -e .
```
## Quick Start
```python
import jax.numpy as jnp
from kernax import SEKernel, LinearKernel, WhiteNoiseKernel, ExpKernel, BatchKernel, ARDKernel
# Create a simple Squared Exponential kernel
kernel = SEKernel(length_scale=1.0)
# Compute covariance between two points
x1 = jnp.array([1.0, 2.0])
x2 = jnp.array([1.5, 2.5])
cov = kernel(x1, x2)
# Compute covariance matrix for a set of points
X = jnp.array([[1.0], [2.0], [3.0]])
K = kernel(X, X) # Returns 3x3 covariance matrix
# Compose kernels using operators
composite_kernel = SEKernel(length_scale=1.0) + WhiteNoiseKernel(0.1) # SE + noise
# Use BatchKernel for distinct hyperparameters per batch
base_kernel = SEKernel(length_scale=1.0)
batched_kernel = BatchKernel(base_kernel, batch_size=10, batch_in_axes=0, batch_over_inputs=True)
# Use ARDKernel for Automatic Relevance Determination
length_scales = jnp.array([1.0, 2.0, 0.5]) # Different scale per dimension
ard_kernel = ARDKernel(SEKernel(length_scale=1.0), length_scales=length_scales)
```
## Available Kernels
### Base Kernels
- **`SEKernel`** (Squared Exponential, aka RBF or Gaussian)
- Hyperparameters: `length_scale`
- **`LinearKernel`**
- Hyperparameters: `variance_b`, `variance_v`, `offset_c`
- **`MaternKernel`** family
- `Matern12Kernel` (ν=1/2, equivalent to Exponential)
- `Matern32Kernel` (ν=3/2)
- `Matern52Kernel` (ν=5/2)
- Hyperparameters: `length_scale`
- **`PeriodicKernel`**
- Hyperparameters: `length_scale`, `variance`, `period`
- **`RationalQuadraticKernel`**
- Hyperparameters: `length_scale`, `variance`, `alpha`
- **`ConstantKernel`**
- Hyperparameters: `value`
- **`PolynomialKernel`**
- Hyperparameters: `degree`, `gamma`, `constant`
- **`SigmoidKernel`** (Hyperbolic Tangent)
- Hyperparameters: `alpha`, `constant`
- **`WhiteNoiseKernel`**
- Diagonal noise kernel (returns constant value only on diagonal)
- Implemented as `ConstantKernel` with `SafeDiagonalEngine`
- Hyperparameters: `noise`
### Composite Kernels
- **`SumKernel`**: Adds two kernels (use `kernel1 + kernel2`)
- **`ProductKernel`**: Multiplies two kernels (use `kernel1 * kernel2`)
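The `+` and `*` operators build these composites through standard Python operator overloading. A minimal standalone sketch of the idea (illustrative only; kernax's real kernel classes are Equinox modules with JIT support, and `ConstKernel` here is a hypothetical stand-in):

```python
# Minimal sketch of kernel composition via operator overloading.
class Kernel:
    def __add__(self, other):
        return SumKernel(self, other)

    def __mul__(self, other):
        return ProductKernel(self, other)

class ConstKernel(Kernel):
    """Toy kernel returning a constant covariance."""
    def __init__(self, value):
        self.value = value

    def __call__(self, x1, x2):
        return self.value

class SumKernel(Kernel):
    def __init__(self, k1, k2):
        self.k1, self.k2 = k1, k2

    def __call__(self, x1, x2):
        return self.k1(x1, x2) + self.k2(x1, x2)

class ProductKernel(Kernel):
    def __init__(self, k1, k2):
        self.k1, self.k2 = k1, k2

    def __call__(self, x1, x2):
        return self.k1(x1, x2) * self.k2(x1, x2)

k = ConstKernel(2.0) + ConstKernel(3.0) * ConstKernel(4.0)
print(k(0.0, 0.0))  # 2 + 3*4 = 14.0
```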
### Wrapper Kernels
Transform or modify kernel behavior:
- **`ExpKernel`**: Applies exponential to kernel output
- **`LogKernel`**: Applies logarithm to kernel output
- **`NegKernel`**: Negates kernel output (use `-kernel`)
- **`BatchKernel`**: Adds batch handling with distinct hyperparameters per batch
- **`BlockKernel`**: Constructs block covariance matrices for grouped data
- **`ActiveDimsKernel`**: Selects specific input dimensions before kernel computation
- **`ARDKernel`**: Applies Automatic Relevance Determination (different length scale per dimension)
### Computation Engines
Computation engines control how covariance matrices are computed. All kernels accept a `computation_engine` parameter:
- **`DenseEngine`** (default): Computes full covariance matrices
- **`SafeDiagonalEngine`**: Returns diagonal matrices (uses conditional check for input equality)
- **`FastDiagonalEngine`**: Returns diagonal matrices (assumes x1 == x2, faster but requires constraint)
- **`SafeRegularGridEngine`**: Exploits regular grid structure with runtime checks
- **`FastRegularGridEngine`**: Exploits regular grid structure without checks (faster but requires constraint)
Example:
```python
from kernax import SEKernel
from kernax.engines import SafeDiagonalEngine, FastRegularGridEngine
# Diagonal computation
diagonal_kernel = SEKernel(length_scale=1.0, computation_engine=SafeDiagonalEngine)
# Regular grid optimization
grid_kernel = SEKernel(length_scale=1.0, computation_engine=FastRegularGridEngine)
```
## Architecture
Kernax kernels are built on [Equinox](https://github.com/patrick-kidger/equinox), so they are compatible with every JAX feature!
Each kernel uses a dual-class pattern to separate state and structure:
1. **Static Class** (e.g., `StaticSEKernel`): Contains JIT-compiled computation logic
2. **Instance Class** (e.g., `SEKernel`): Extends `eqx.Module`, holds hyperparameters
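A pure-Python sketch of this static/instance split (illustrative only; in kernax the instance class extends `eqx.Module` and the static computation is JIT-compiled, and the `*Sketch` names here are hypothetical):

```python
import math

class StaticSEKernelSketch:
    """Stateless computation logic (would be JIT-compiled in kernax)."""
    @staticmethod
    def covariance(length_scale, x1, x2):
        d = x1 - x2
        return math.exp(-0.5 * (d / length_scale) ** 2)

class SEKernelSketch:
    """Holds hyper-parameters; delegates computation to the static class."""
    def __init__(self, length_scale):
        self.length_scale = length_scale

    def __call__(self, x1, x2):
        return StaticSEKernelSketch.covariance(self.length_scale, x1, x2)

k = SEKernelSketch(length_scale=1.0)
print(k(0.0, 0.0))  # 1.0 (SE covariance of a point with itself)
```

Separating the stateless computation from the parameter container is what lets the hyper-parameters travel through `vmap` and autodiff as PyTree leaves while the computation stays compilable.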
## Testing & Quality
Kernax maintains high code quality standards:
- **94% test coverage** with 231+ passing tests
- **Allure test reporting** for detailed test analytics
- **Cross-library validation** against scikit-learn, GPyTorch, and GPJax
- **Type checking** with mypy for enhanced code safety
- **Code formatting** with ruff (tabs, line length 100)
Run tests with:
```bash
make test # Run all tests
make test-cov # Run tests with coverage report
make test-allure # Generate Allure HTML report
make lint # Run type checking and linting
```
## Benchmarks
Kernax is designed for performance. You can run a benchmark comparison with other libraries with:
```bash
make benchmarks-compare
```
Our preliminary results show a significant speed-up over alternatives when JIT compilation is enabled:
```
------------------------------------------------- benchmark 'benchmarks/comparison/compare_se_kernel.py::Benchmark1DRandom::test_compare': 4 tests ------------------------------------------------
Name (time in ms) Min Max Mean StdDev Median IQR Outliers OPS (mops/s) Rounds Iterations
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
test_compare[kernax] 12.0889 (1.0) 14.9765 (1.0) 12.6690 (1.0) 0.6474 (1.0) 12.4472 (1.0) 0.5749 (1.29) 1;1 78,932.9054 (1.0) 20 1
test_compare[gpytorch] 43.5577 (3.60) 54.3097 (3.63) 44.5481 (3.52) 2.3159 (3.58) 43.9814 (3.53) 0.4448 (1.0) 1;1 22,447.6602 (0.28) 20 1
test_compare[gpjax] 67.3657 (5.57) 73.9067 (4.93) 68.8340 (5.43) 1.4448 (2.23) 68.5019 (5.50) 1.3212 (2.97) 3;1 14,527.6964 (0.18) 20 1
test_compare[sklearn] 328.1409 (27.14) 367.2989 (24.53) 334.7924 (26.43) 8.2573 (12.75) 332.5784 (26.72) 4.2589 (9.57) 1;1 2,986.9256 (0.04) 20 1
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
---------------------------------------------- benchmark 'benchmarks/comparison/compare_se_kernel.py::Benchmark1DRegularGrid::test_compare': 4 tests -----------------------------------------------
Name (time in ms) Min Max Mean StdDev Median IQR Outliers OPS (mops/s) Rounds Iterations
----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
test_compare[kernax] 11.8415 (1.0) 13.3571 (1.0) 12.5704 (1.0) 0.4358 (1.0) 12.5185 (1.0) 0.6904 (2.23) 8;0 79,551.8567 (1.0) 20 1
test_compare[gpytorch] 43.6603 (3.69) 55.0724 (4.12) 44.5337 (3.54) 2.4908 (5.72) 43.9668 (3.51) 0.3099 (1.0) 1;1 22,454.9091 (0.28) 20 1
test_compare[gpjax] 67.2976 (5.68) 119.1254 (8.92) 70.9630 (5.65) 11.3640 (26.08) 68.3379 (5.46) 0.8114 (2.62) 1;2 14,091.8552 (0.18) 20 1
test_compare[sklearn] 297.8652 (25.15) 316.3752 (23.69) 302.3710 (24.05) 4.4811 (10.28) 300.7931 (24.03) 3.5605 (11.49) 4;2 3,307.1952 (0.04) 20 1
----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
---------------------------------------------- benchmark 'benchmarks/comparison/compare_se_kernel.py::Benchmark2DMissingValues::test_compare': 4 tests ----------------------------------------------
Name (time in ms) Min Max Mean StdDev Median IQR Outliers OPS (mops/s) Rounds Iterations
-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
test_compare[kernax] 11.9619 (1.0) 13.9954 (1.0) 12.7085 (1.0) 0.5785 (1.0) 12.4387 (1.0) 0.8605 (1.0) 5;0 78,687.2477 (1.0) 20 1
test_compare[gpytorch] 25.9657 (2.17) 30.2475 (2.16) 27.1899 (2.14) 1.3834 (2.39) 26.6297 (2.14) 1.4433 (1.68) 4;2 36,778.4048 (0.47) 20 1
test_compare[gpjax] 55.1528 (4.61) 136.3582 (9.74) 113.5941 (8.94) 26.1170 (45.15) 125.4449 (10.09) 16.2559 (18.89) 3;3 8,803.2728 (0.11) 20 1
test_compare[sklearn] 213.0630 (17.81) 272.6552 (19.48) 230.3581 (18.13) 15.2409 (26.35) 224.1107 (18.02) 14.8310 (17.23) 6;1 4,341.0674 (0.06) 20 1
-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
------------------------------------------------- benchmark 'benchmarks/comparison/compare_se_kernel.py::Benchmark2DRandom::test_compare': 4 tests ------------------------------------------------
Name (time in ms) Min Max Mean StdDev Median IQR Outliers OPS (mops/s) Rounds Iterations
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
test_compare[kernax] 13.9697 (1.0) 15.5467 (1.0) 14.5331 (1.0) 0.4723 (1.0) 14.3701 (1.0) 0.8122 (1.57) 8;0 68,808.3748 (1.0) 20 1
test_compare[gpytorch] 43.6454 (3.12) 50.5380 (3.25) 44.5098 (3.06) 1.4877 (3.15) 44.1243 (3.07) 0.5169 (1.0) 1;2 22,466.9466 (0.33) 20 1
test_compare[gpjax] 94.1932 (6.74) 103.3563 (6.65) 97.6833 (6.72) 2.1244 (4.50) 97.8704 (6.81) 3.0956 (5.99) 4;0 10,237.1672 (0.15) 20 1
test_compare[sklearn] 408.6985 (29.26) 440.7834 (28.35) 417.1601 (28.70) 8.0450 (17.03) 413.5329 (28.78) 9.0707 (17.55) 5;1 2,397.1614 (0.03) 20 1
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
---------------------------------------------- benchmark 'benchmarks/comparison/compare_se_kernel.py::Benchmark2DRegularGrid::test_compare': 4 tests ----------------------------------------------
Name (time in ms) Min Max Mean StdDev Median IQR Outliers OPS (mops/s) Rounds Iterations
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
test_compare[kernax] 13.9134 (1.0) 16.3335 (1.0) 14.5133 (1.0) 0.7465 (1.0) 14.1642 (1.0) 0.8484 (1.0) 3;2 68,902.5346 (1.0) 20 1
test_compare[gpytorch] 43.9418 (3.16) 51.8444 (3.17) 45.7195 (3.15) 1.8991 (2.54) 45.4059 (3.21) 1.9106 (2.25) 2;2 21,872.5192 (0.32) 20 1
test_compare[gpjax] 93.1488 (6.69) 130.6572 (8.00) 102.3413 (7.05) 8.1851 (10.96) 101.9908 (7.20) 9.7936 (11.54) 2;1 9,771.2271 (0.14) 20 1
test_compare[sklearn] 381.4240 (27.41) 405.2938 (24.81) 387.4473 (26.70) 6.4979 (8.70) 384.8169 (27.17) 9.0655 (10.69) 2;0 2,580.9964 (0.04) 20 1
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
```
## Development Status
Check the [changelog](CHANGELOG.md) for details.
### ✅ Completed
- Core kernel implementations (SE, Linear, Matern, Periodic, Sigmoid, etc.)
- Kernel composition via operators
- Equinox Module integration
- NaN-aware computations
- BatchKernel wrapper with distinct/shared hyper-parameters
- ARDKernel wrapper using input scaling
- ActiveDimsKernel wrapper for dimension selection
- BlockKernel for block-matrix covariances
- StationaryKernel and DotProductKernel base classes with proper inheritance
- Parameter transform system (identity, exp, softplus) for optimization stability
- Parameter positivity constraints with config-based transformation
- Computation engines for special cases (diagonal, regular grids)
- Comprehensive test suite (94% coverage)
- Benchmark architecture
- PyPI package distribution
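The parameter-transform bullet above can be illustrated with a plain-Python sketch of the softplus transform (illustrative only, not the kernax API): an optimizer steps freely on an unconstrained raw value, while the kernel always sees a positive hyper-parameter.

```python
import math

def softplus(x):
    # softplus(x) = log(1 + e^x) maps any real number to a positive value
    return math.log1p(math.exp(x))

def inv_softplus(y):
    # inverse: x = log(e^y - 1), so the optimizer works in unconstrained space
    return math.log(math.expm1(y))

# The optimizer can step anywhere on the raw parameter...
raw = -3.0
lengthscale = softplus(raw)
assert lengthscale > 0.0                            # ...the kernel still sees a positive value
assert abs(inv_softplus(lengthscale) - raw) < 1e-9  # transform round-trips
```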
### 🚧 In Progress / Planned
- Parameter freezing for optimization
- Comprehensive benchmarks with multiple kernels and input scenarios
- Expanded documentation and tutorials
## Contributing
This project is in early development. Contributions, bug reports, and feature requests are welcome!
## Related Projects
Kernax is developed alongside [MagmaClust](https://github.com/SimLej18/MagmaClustPy), a clustering and Gaussian Process library.
## License
MIT License - see [LICENSE](LICENSE) file for details.
## Citation
[Citation information to be added]
| text/markdown | null | "S. Lejoly" <simon.lejoly@unamur.be> | null | "S. Lejoly" <simon.lejoly@unamur.be> | MIT | gaussian-processes, kernels, jax, machine-learning, covariance-functions, bayesian-optimization | [
"Development Status :: 3 - Alpha",
"Intended Audience :: Science/Research",
"Intended Audience :: Developers",
"License :: OSI Approved :: MIT License",
"Programming Language :: Python :: 3.12",
"Topic :: Scientific/Engineering",
"Topic :: Scientific/Engineering :: Artificial Intelligence",
"Topic :: ... | [] | null | null | >=3.12 | [] | [] | [] | [
"jax>=0.6.2",
"jaxlib>=0.6.2",
"equinox>=0.11.0",
"pytest>=7.0; extra == \"dev\"",
"pytest-cov>=4.0; extra == \"dev\"",
"allure-pytest>=2.15; extra == \"dev\"",
"pytest-benchmark>=5.2; extra == \"dev\"",
"gpjax>=0.13; extra == \"dev\"",
"gpytorch>=1.15; extra == \"dev\"",
"scikit-learn>=1.8; extra... | [] | [] | [] | [
"Homepage, https://github.com/SimLej18/kernax-ml",
"Repository, https://github.com/SimLej18/kernax-ml",
"Documentation, https://kernax-ml.readthedocs.io",
"Bug Tracker, https://github.com/SimLej18/kernax-ml/issues"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-19T14:27:54.847794 | kernax_ml-0.4.4a0.tar.gz | 35,368 | 38/6d/649ac08c3f43d0b8bf4a81b89b294d254a0dfb557cd691abb38bc3a47fbc/kernax_ml-0.4.4a0.tar.gz | source | sdist | null | false | d209134811f3e3f98fb55709e49b91eb | 2213f4e100cb5dea1a2359362fc05f62c0d84d27c7069e7cf4de151365dbf256 | 386d649ac08c3f43d0b8bf4a81b89b294d254a0dfb557cd691abb38bc3a47fbc | null | [
"LICENSE"
] | 211 |
2.4 | rsliding | 1.0.1 | Rust with Python bindings. Contains sliding (filtering) operations (padding, convolution, mean, median, standard deviation, sigma clipping) for float64 numpy data with NaN values and a kernel with weights. | # `rsliding` package
This python package contains utilities to compute a sliding sigma clipping, where the kernel can contain weights and the data can contain NaN values (has to be float64).
The actual core code is in Rust.
This package was created to provide a less memory-hungry sigma-clipping implementation than the similar
`sliding` Python package (cf. https://github.com/avoyeux/sliding.git). It is also a few times faster than the `sliding` package equivalents (except in some cases when using the Convolution or SlidingMean class).
Check the **Functions** markdown section to know about the different available classes.
For high numerical stability (cf. https://dl.acm.org/doi/epdf/10.1145/359146.359152), the standard deviation is computed using the two-pass algorithm (i.e. a mean computation followed by a variance computation). Furthermore, the user can enable Neumaier's summation for the standard deviation and mean computations (the most numerically stable approach I know of). While Welford's algorithm is faster and quite stable, I found (from literature and tests) that the two-pass algorithm is more stable, even more so when combined with Neumaier's summation.
**IMPORTANT:** the code only works if the kernel dimensions are odd and the kernel has the same dimensionality as the input data.
**NOTES**: for the standard deviation computation, compared to using numpy.std on each window, the ``rsliding`` implementation should be slightly less numerically stable when `neumaier=False`. This is because, while numpy.std also seems to use the two-pass algorithm (cf. https://numpy.org/doc/stable/reference/generated/numpy.std.html), numpy functions additionally implement pairwise summation (not done in ``rsliding``). That being said, Neumaier's summation is more stable than pairwise summation, so when `neumaier=True` the `SlidingMean` and `SlidingStandardDeviation` implementations should be more numerically stable than sliding operations built on np.mean and np.std (or nanmean/nanstd).
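For reference, Neumaier's summation can be sketched in a few lines of Python (the actual ``rsliding`` implementation is in Rust): a running compensation term collects the low-order bits that an ordinary floating-point sum discards.

```python
def neumaier_sum(values):
    # Neumaier's compensated summation: keep a running error term so that
    # small addends are not swallowed by a large partial sum.
    total = 0.0
    compensation = 0.0
    for v in values:
        t = total + v
        if abs(total) >= abs(v):
            compensation += (total - t) + v   # low-order bits of v were lost
        else:
            compensation += (v - t) + total   # low-order bits of total were lost
        total = t
    return total + compensation

# Classic failure case for naive summation:
data = [1.0, 1e100, 1.0, -1e100]
assert sum(data) == 0.0           # naive sum loses both 1.0 terms
assert neumaier_sum(data) == 2.0  # compensated sum recovers them
```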
## Install package
Since pre-compiled binaries are needed when the user doesn't have the Rust compiler installed, the package should be installed from *PyPI*.
#### (**OPTIONAL**) Create and activate a Python virtual environment:
```bash
python3 -m venv .venv
source .venv/bin/activate
```
or on Windows OS:
```bash
python -m venv .venv
# Command Prompt
.venv\Scripts\activate
# PowerShell
.venv\Scripts\Activate.ps1
# Git Bash or WSL
source .venv/Scripts/activate
```
#### Install the package in the virtual environment (or system-wide, which is not recommended):
```bash
pip install --upgrade pip
pip install rsliding
```
## Functions
The `rsliding` package has 6 different classes:
- **Padding** which returns the padded data. Not really useful given that np.pad is far more efficient; a Python binding exists just so that the user can check the results if desired.
- **Convolution** which lets you perform a convolution (NaN handling done).
- **SlidingMean** which performs a sliding mean (NaN handling done).
- **SlidingMedian** which performs a sliding median (NaN handling done).
- **SlidingStandardDeviation** which performs a sliding standard deviation (NaN handling done).
- **SlidingSigmaClipping** which performs a sliding sigma clipping (NaN handling done).
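To illustrate what sigma clipping does on a single window, here is a simplified pure-Python sketch (ignoring kernel weights; the real per-window computation happens in Rust): values too far from the window's median are iteratively flagged and replaced with NaN.

```python
import math
from statistics import median, pstdev

def sigma_clip_window(values, sigma=2.0, max_iters=3):
    # Iteratively flag outliers in one window (illustrative sketch only).
    kept = [v for v in values if not math.isnan(v)]
    for _ in range(max_iters):
        center = median(kept)
        spread = pstdev(kept)
        new_kept = [v for v in kept if abs(v - center) <= sigma * spread]
        if len(new_kept) == len(kept):
            break  # converged: no value was clipped this iteration
        kept = new_kept
    # Clipped values (and original NaNs) become NaN
    return [v if (not math.isnan(v)) and v in kept else math.nan for v in values]

result = sigma_clip_window([1.0, 1.1, 0.9, 1.0, 50.0, float("nan"), 1.05])
# The outlier 50.0 is replaced with NaN; the original NaN stays NaN
```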
#### Example
```python
# IMPORTs
import numpy as np
from rsliding import SlidingSigmaClipping
# CREATE fake data
fake_data = np.random.rand(36, 1024, 128).astype(np.float64)
fake_data[10:15, 100:200, 50:75] = 1.3
fake_data[7:, 40:60, 70:] = 1.7
# KERNEL
kernel = np.ones((5, 3, 7), dtype=np.float64)
kernel[2, 1, 3] = 0.
# NaN ~5%
is_nan = np.random.rand(*fake_data.shape) < 0.05
fake_data[is_nan] = np.nan
# CLIPPING no lower value
clipped = SlidingSigmaClipping(
data=fake_data,
kernel=kernel,
center_choice='median',
sigma=3.,
sigma_lower=None,
max_iters=3,
borders='reflect',
threads=1,
masked_array=False,
neumaier=True,
).clipped
```
## IMPORTANT
Before using this package some information is needed:
- **float64** values for the data: the data needs to be of float64 type. Given the default class argument values, the data will be cast to float64 before calling Rust.
- **threads**: there is a threads argument for most of the classes. It is not used during the padding operations but is used in all other intensive operations, as threading the padding would most likely slow down the computation in most cases.
| text/markdown | null | Voyeux Alfred <voyeuxalfred@gmail.com> | null | null | null | null | [
"Programming Language :: Python :: 3",
"Programming Language :: Python :: Implementation :: CPython",
"Programming Language :: Rust",
"Typing :: Typed",
"Operating System :: OS Independent",
"Operating System :: Microsoft :: Windows",
"Operating System :: MacOS",
"Operating System :: POSIX :: Linux"
] | [] | null | null | >=3.9 | [] | [] | [] | [
"numpy>=1.24"
] | [] | [] | [] | [
"Documentation, https://github.com/avoyeux/rsliding#readme",
"Homepage, https://github.com/avoyeux/rsliding",
"Source, https://github.com/avoyeux/rsliding"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-19T14:27:52.470652 | rsliding-1.0.1-cp310-abi3-macosx_11_0_x86_64.whl | 381,295 | c3/a4/6b8b00667eb08e6fb1db69d18d00e434b7a27e40b7dd84444a71a633c266/rsliding-1.0.1-cp310-abi3-macosx_11_0_x86_64.whl | cp310 | bdist_wheel | null | false | 1e717f24cb2e835bb93fd15896f11dfc | 21e1ab1ee7294d9ad0269de7570362dd7706be1bb9ef0e6d33ff9cf23cb289a5 | c3a46b8b00667eb08e6fb1db69d18d00e434b7a27e40b7dd84444a71a633c266 | null | [
"LICENSE"
] | 880 |
2.1 | pylibmgm | 1.1.2 | Utilities for graph and multi-graph matching optimization. | # Multi-matching optimization
Application and library to solve (multi-)graph matching problems.
For licensing term, see the `LICENSE` file. This work uses third party software. Please refer to `LICENSE-3RD-PARTY.txt` for an overview and recognition.
For details, refer to our publication:
- M. Kahl, S. Stricker, L. Hutschenreiter, F. Bernard, B. Savchynskyy<br>
**“Unlocking the Potential of Operations Research for Multi-Graph Matching”**.<br>
arXiv Pre-Print 2024. [[PDF][arxiv_pdf]] [[ArXiv][arxiv]]
### Python
**Install via pip**
`pip install pylibmgm`
Refer to the usage guide: [Python Documentation][readthedocs]
### C++
0. (Install requirements)
- Install `g++` or `clang` *(c++ compiler)*, `meson` *(build system)* and `git`
- `sudo apt install g++ meson git` (Ubuntu)
- `sudo dnf install g++ meson git` (Fedora)
1. **Clone repository**
- `git clone https://github.com/vislearn/multi-matching-optimization.git`
- `cd ./multi-matching-optimization`
2. **Build**
- `meson setup --buildtype release -Db_ndebug=true ../builddir/`
- `meson compile -C ../builddir/`
3. **Run**
- `../builddir/mgm -i tests/hotel_instance_1_nNodes_10_nGraphs_4.txt -o ../mgm_output --mode seqseq`
# Usage
### Minimal Usage
$ mgm -i [IN_FILE] -o [OUT_DIR] --mode [OPT_MODE]
You can use one of the small models in the `tests/` directory for testing.
### Input files
Input files follow the .dd file format for multi-graph matching problems, as defined in the [Structured prediction problem archive][problem_archive].
See references below.
### Available CLI parameters
- `--name` <br>
Base name for the resulting .json and .log files.
- `-l`, `--labeling` <br>
Path to an existing solution file (json). Pass to improve upon this labeling.
- `--mode` ENUM <br>
Optimization mode. See below.
- `--set-size` INT <br>
Subset size for incremental generation
- `--merge-one` <br>
In parallel local search, merge only the best solution. Do not try to merge other solutions as well.
- `-t`, `--threads` INT <br>
Number of threads to use. Upper limit defined by OMP_NUM_THREADS environment variable.
- `--libmpopt-seed` UINT <br>
Fix the random seed for the fusion moves graph matching solver of libmpopt.
- `--unary-constant` FLOAT <br>
Constant to add to every assignment cost. Negative values nudge matchings to be more complete.
- `--synchronize` <br>
Synchronize a cycle-inconsistent solution.
Excludes: --synchronize-infeasible
- `--synchronize-infeasible` <br>
Synchronize a cycle-inconsistent solution. Allow all (forbidden) matchings.
Excludes: --synchronize
### Optimization modes
`--mode` specifies the optimization routine. It provides ready-to-use routines that combine construction, graph matching local search (GM-LS), and swap local search (SWAP-LS) algorithms as defined in the publication.
The following choices are currently available:
***Construction modes.***
Use these to generate solutions quickly
- `seq`: sequential construction
- `par`: parallel construction
- `inc`: incremental construction
***Construction + GM-LS modes.***
A bit slower, but gives better solutions
- `seqseq`: sequential construction -> sequential GM-LS
- `seqpar`: sequential construction -> parallel GM-LS
- `parseq`: parallel construction -> sequential GM-LS
- `parpar`: parallel construction -> parallel GM-LS
- `incseq`: incremental construction -> sequential GM-LS
- `incpar`: incremental construction -> parallel GM-LS
***Construction + iterative GM-LS & SWAP-LS.***
After construction, iterate between GM-LS and SWAP-LS.
- `optimal`: sequential construction -> Until convergence: (sequential GM-LS <-> swap local search)
- `optimalpar`: parallel construction -> Until convergence: (parallel GM-LS <-> swap local search)
***Improve given labeling.***
Skip construction and perform local search on a pre-existing solution.
Improve modes require a labeling to be provided via the `--labeling [JSON_FILE]` command line option.
- `improve-swap`: improve with SWAP-LS
- `improve-qap`: improve with sequential GM-LS
- `improve-qap-par`: improve with parallel GM-LS
- `improveopt`: improve with alternating sequential GM-LS <-> SWAP-LS
- `improveopt-par`: improve with alternating parallel GM-LS <-> SWAP-LS
### Use as synchronization algorithm
To synchronize a pre-existing *cycle-inconsistent* solution, call with `--synchronize` or `--synchronize-infeasible`, either disallowing or allowing forbidden matchings.
***Synchronize feasible.***
Feasible solution. Disallows forbidden matchings.
$ mgm -i [IN_FILE] -o [OUT_DIR] --mode [OPT_MODE] --labeling [JSON_FILE] --synchronize
***Synchronize infeasible.***
Infeasible solution. Allows forbidden matchings.
$ mgm -i [IN_FILE] -o [OUT_DIR] --mode [OPT_MODE] --labeling [JSON_FILE] --synchronize-infeasible
# Software dependencies
This software incorporates and uses third party software. Dependencies are provided as meson wrap files.
They should be installed automatically during the build process,
but manual installation can help solve issues, if meson fails to build them.
***NOTE: Links given point to the original authors' publications, which may have been substantially modified. The corresponding `.wrap` files in this repository's `subprojects/` folder contain links to the actual code needed to compile the solver.***
- **spdlog** \
*logging library; https://github.com/gabime/spdlog*
- **cli11** \
*Command line parser; https://github.com/CLIUtils/CLI11*
- **nlohmann_json** \
  *json parsing; https://github.com/nlohmann/json*
- **unordered_dense** \
*A fast & densely stored hashmap; https://github.com/martinus/unordered_dense*
- **libqpbo** \
*for quadratic pseudo boolean optimization (QPBO); https://pub.ista.ac.at/~vnk/software.html*
- **libmpopt** \
*for quadratic assignment problem (QAP) optimization; https://github.com/vislearn/libmpopt*
- **Scipy_lap** \
*Linear assignment problem (LAP) solver implementation from the Scipy python package; https://github.com/scipy/scipy*
# References
- M. Kahl, S. Stricker, L. Hutschenreiter, F. Bernard, B. Savchynskyy<br>
**“Unlocking the Potential of Operations Research for Multi-Graph Matching”**.<br>
arXiv Pre-Print 2024. [[PDF][arxiv_pdf]] [[ArXiv][arxiv]]
- L. Hutschenreiter, S. Haller, L. Feineis, C. Rother, D. Kainmüller, B. Savchynskyy.<br>
**“Fusion Moves for Graph Matching”**.<br>
ICCV 2021. [[PDF][iccv2021_pdf]] [[ArXiv][iccv2021]]
- Swoboda, P., Andres, B., Hornakova, A., Bernard, F., Irmai, J., Roetzer, P., Savchynskyy, B., Stein, D. and Abbas, A. <br>
**“Structured prediction problem archive”** <br>
arXiv preprint arXiv:2202.03574 (2022) [[PDF][problem_archive]] [[ArXiv][problem_archive_pdf]]
[arxiv]: https://arxiv.org/abs/2406.18215
[arxiv_pdf]: https://arxiv.org/pdf/2406.18215
[iccv2021]: https://arxiv.org/abs/2101.12085
[iccv2021_pdf]: https://arxiv.org/pdf/2101.12085
[problem_archive]: https://arxiv.org/abs/2202.03574
[problem_archive_pdf]: https://arxiv.org/pdf/2202.03574
[readthedocs]: https://pylibmgm.readthedocs.io/en/latest/ | text/markdown | null | Sebastian Stricker <sebastian.stricker@iwr.uni-heidelberg.de> | null | null | null | null | [
"License :: OSI Approved :: GNU General Public License v3 (GPLv3)",
"Operating System :: POSIX :: Linux",
"Programming Language :: C++",
"Programming Language :: Python :: 3"
] | [] | null | null | >=3.10 | [] | [] | [] | [] | [] | [] | [] | [
"Documentation, https://pylibmgm.readthedocs.io/en/latest/",
"publication, https://arxiv.org/abs/2406.18215",
"source, https://github.com/vislearn/multi-matching-optimization"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-19T14:27:28.216386 | pylibmgm-1.1.2.tar.gz | 8,115,432 | fc/15/62c26a24701f9b865866fb14e33b18b7a2f8c353a1217370e6c65714dc0d/pylibmgm-1.1.2.tar.gz | source | sdist | null | false | f7af32723377f411c41c092c7bf0fa41 | 76004a7527d659a7787aac056b49d557f1d96582160260b02bf18a09348f2a23 | fc1562c26a24701f9b865866fb14e33b18b7a2f8c353a1217370e6c65714dc0d | null | [] | 1,593 |
2.4 | pennylane-toroidal-noise | 0.1.0 | Toroidal dephasing noise channel for PennyLane — spectral-gap suppression on T² lattices | # pennylane-toroidal-noise
Toroidal dephasing noise channel for [PennyLane](https://pennylane.ai/).
Qubits on a toroidal lattice experience reduced dephasing because the lattice
Laplacian's spectral gap filters low-frequency noise. This plugin provides
`ToroidalDephasing` — a drop-in replacement for `qml.PhaseDamping` that models
this suppression.
## Installation
```bash
pip install pennylane-toroidal-noise
```
## Usage
```python
import pennylane as qml
from pennylane_toroidal_noise import ToroidalDephasing
dev = qml.device("default.mixed", wires=1)
@qml.qnode(dev)
def circuit(gamma):
qml.Hadamard(wires=0)
ToroidalDephasing(gamma, grid_n=12, wires=0)
return qml.expval(qml.PauliX(0))
# Compare: more coherence preserved than plain PhaseDamping
circuit(0.5) # ~0.99 (vs ~0.87 for PhaseDamping)
```
## How it works
The effective dephasing probability is suppressed by the spectral gap of the
cycle graph C_n:
```text
gamma_eff = gamma * lambda_1 / (lambda_1 + alpha)
```
where `lambda_1 = 2 - 2*cos(2*pi/n)` is the smallest nonzero eigenvalue of
the n-vertex cycle graph Laplacian (also the spectral gap of the n x n torus).
| grid_n | suppression (alpha=1) |
|--------|----------------------|
| 4 | 1.5x |
| 8 | 2.7x |
| 12 | 4.7x |
| 32 | 27x |
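The table values follow directly from the formula above; a small Python check (not part of the package API):

```python
import math

def suppression_factor(grid_n, alpha=1.0):
    # lambda_1 = 2 - 2*cos(2*pi/n): smallest nonzero eigenvalue of the
    # n-vertex cycle graph Laplacian (spectral gap of the n x n torus)
    lam1 = 2.0 - 2.0 * math.cos(2.0 * math.pi / grid_n)
    # gamma_eff = gamma * lam1 / (lam1 + alpha), so the suppression
    # factor gamma / gamma_eff is (lam1 + alpha) / lam1
    return (lam1 + alpha) / lam1

assert abs(suppression_factor(4) - 1.5) < 0.01   # matches the "1.5x" row
assert round(suppression_factor(12), 1) == 4.7   # matches the "4.7x" row
assert round(suppression_factor(32)) == 27       # matches the "27x" row
```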
## Parameters
| Parameter | Type | Default | Description |
|-----------|-------|---------|-------------|
| `gamma` | float | — | Bare dephasing probability in [0, 1] |
| `grid_n` | int | 12 | Side length of the toroidal lattice |
| `alpha` | float | 1.0 | Coupling strength (larger = stronger filtering) |
| `wires` | int | — | Wire the channel acts on |
## Rust companion
A pure-Rust implementation is also available: [`toroidal-noise`](https://crates.io/crates/toroidal-noise)
## Reference
S. Cormier, "Toroidal Logit Bias," Zenodo, 2026.
DOI: [10.5281/zenodo.18516477](https://doi.org/10.5281/zenodo.18516477)
## License
MIT
| text/markdown | null | Sylvain Cormier <sylvain@paraxiom.org> | null | null | MIT | quantum, noise-channels, pennylane, toroidal, spectral-gap, dephasing | [
"Development Status :: 4 - Beta",
"Intended Audience :: Science/Research",
"License :: OSI Approved :: MIT License",
"Programming Language :: Python :: 3",
"Topic :: Scientific/Engineering :: Physics"
] | [] | null | null | >=3.9 | [] | [] | [] | [
"pennylane>=0.35.0",
"pytest; extra == \"dev\""
] | [] | [] | [] | [
"Homepage, https://github.com/Paraxiom/pennylane-toroidal-noise",
"Documentation, https://github.com/Paraxiom/pennylane-toroidal-noise#readme",
"Repository, https://github.com/Paraxiom/pennylane-toroidal-noise"
] | twine/6.2.0 CPython/3.9.6 | 2026-02-19T14:27:06.384365 | pennylane_toroidal_noise-0.1.0.tar.gz | 6,296 | a7/97/db7b30b61c986c948525dd73d6a716f728e4dad97a3dc40ca23ea1e921d9/pennylane_toroidal_noise-0.1.0.tar.gz | source | sdist | null | false | b762fc25630eb983c483ff57990cfa57 | bd5d8a140a8f5bd03287a0f2dce429fb86572eef0dab3cdddfb762230256a68c | a797db7b30b61c986c948525dd73d6a716f728e4dad97a3dc40ca23ea1e921d9 | null | [
"LICENSE"
] | 239 |
2.4 | simpn | 1.9.0 | A package for Discrete Event Simulation, using the theory of Petri Nets. | SimPN
=====
.. image:: https://github.com/bpogroup/simpn/actions/workflows/main.yml/badge.svg
SimPN (Simulation with Petri Nets) is a package for discrete event simulation in Python.
SimPN provides a simple syntax that is based on Python functions and variables, making it familiar for people who already know Python. At the same time, it uses the power of and flexibility of `Colored Petri Nets (CPN)`_ for simulation. It also provides prototypes for easy modeling of frequently occurring simulation constructs, such as (customer) arrival, processing tasks, queues, choice, parallelism, etc.
.. _`Colored Petri Nets (CPN)`: http://dx.doi.org/10.1145/2663340
.. role:: python(code)
:language: python
:class: highlight
Features
========
- Easy to use discrete event simulation package in Python.
- Visualization of simulation models.
- Prototypes for easily modeling, simulating, and visualizing frequently occurring simulation constructs, including BPMN and Queuing Networks.
- Different reporting options, including logging to file as an event log for process mining purposes.
Installation
============
The SimPN package is available on PyPI and can simply be installed with pip.
.. code-block::
python -m pip install simpn
There are also executables available, which allow for quick visual simulation of BPMN models. The executables can be downloaded from the `releases`_.
You can find more information on how to use these executables in the section `Standalone-Limited-Simulation`_.
.. _`releases`: https://github.com/bpogroup/simpn/releases/
Quick Start
===========
You can check out a simple example of a simulation model, e.g., if you like Petri nets:
.. code-block::
python examples/presentation/presentation_1.py
If you like BPMN, you can check out this example:
.. code-block::
python examples/presentation/presentation_2.py
A Basic Tutorial
================
To illustrate how SimPN works, let's consider a simulation model of a cash register at a small shop,
which we can initialize as follows. This imports parts from the SimPN library that we use here
and further on in the example.
.. code-block:: python
from simpn.simulator import SimProblem, SimToken
shop = SimProblem()
A discrete event simulation is defined by the *state* of the system that is simulated and the *events* that can happen
in the system.
Simulation State and Variables
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
In case of our shop, the state of the system consists of customers that are waiting in line at
the cash register, resources that are free to help the customer, and resources that are busy helping a customer.
Consequently, we can model the state of our simulation, by defining two *variables* as follows.
.. code-block:: python
customers = shop.add_var("customers")
resources = shop.add_var("resources")
A simulation variable is different from a regular Python variable in two important ways. First, a simulation variable
can contain multiple values, while a regular Python variable can only contain one value. Second, values of a simulation
variable are available from a specific moment in (simulation) time. More about that later.
So, with that in mind, let's give our variables a value.
.. code-block:: python
resources.put("cassier")
customers.put("c1")
customers.put("c2")
customers.put("c3")
We now gave the `resources` variable one value, the string `cassier`, but we gave the `customers` variable three values.
You can probably understand why we did that: we now have one cassier and three customers waiting. This is the
*initial state* of our simulation model.
Simulation Events
~~~~~~~~~~~~~~~~~
Simulation events define what can happen in the system and how the system (state variables) change when they do.
We define simulation events as Python functions that take a system state and return a new system state.
Remember that the system state is defined in terms of variables, so an event function takes (values of) state variables as
input and produces (values of) state variables as output.
.. code-block:: python
def process(customer, resource):
return [SimToken(resource, delay=0.75)]
shop.add_event([customers, resources], [resources], process)
In our example we introduce a single event that represents a resource processing a waiting customer.
First, let's focus on `shop.add_event` in the code above. This tells the simulator that our event takes a value from the
`customers` variable and a value from the `resources` variable as input, produces a value for the `resources`
variable as output, and uses the `process` function to change the state variables.
Describing that in natural language: it takes a customer and a resource and, when it is done, returns a resource.
The `process` function defines how the event modifies the system state (variables).
Taking a value from the `customers` variable (and calling it `customer`) and a value from the `resources` variable
(and calling it `resource`), the function returns the `resource` again. This return value will be put into the
`resources` variable, as per the `shop.add_event` definition. However, as you can see, there are several things
going on in the return statement.
First, the function does not return a single resource value, but a list of values. This is simply a convention
that you have to remember: event functions return a list of values. The reason for this is that we defined the
simulation event in `shop.add_event` as taking a list of values (consisting of one value from customers and one value from
resources) as input and as producing a list of values (consisting of one value for resources) as output.
Accordingly, we must produce a list of values as output, even if there is only one value.
Second, the function does not return the `resource`, but returns a `SimToken` containing the resource.
That is because in simulation, values have a time from which they are available. A value with a time
is called a *token*. This represents that the value is only available at, or after, the specified time.
In this case, the resource value is made available after a delay of 0.75. You can consider this the time it takes the resource to
process the customer. Since it takes 0.75 to process a customer, the resource is only made available
again after a delay of 0.75. In the meantime no new `process` events can happen, because a value from `resources`,
which is needed as input for such an event, is not available.
Putting it all together
~~~~~~~~~~~~~~~~~~~~~~~
Now we have modeled the entire system and we can simulate it.
To do that, we call the `simulate` function on the model.
This function takes two parameters. One is the amount of time for which the simulation will be run.
The other is the reporter that will be used to report the results of the simulation.
In our example we will run the simulation for 10. (Since we only have 3 customers, and each customer
takes 0.75 to process, this should be more than enough.) We will use a `SimpleReporter` from the
reporters package to report the result. This reporter simply prints each event that happens
to the standard output.
.. code-block:: python
from simpn.reporters import SimpleReporter
shop.simulate(10, SimpleReporter())
As expected, running this code leads to the following output.
The event of (starting) processing customer c1 happens at time t=0.
It uses value `c1` for variable `customers` and value `cassier` for variable `resources`.
The event of (starting) processing customer c2 happens at time t=0.75.
This is logical, because our definition of the `process` event that the value `cassier` is only available
in the variable `resources` again after 0.75. Accordingly, processing of c3 happens at time t=1.5.
.. code-block::

    process{customers: c1, resources: cassier}@t=0
    process{customers: c2, resources: cassier}@t=0.75
    process{customers: c3, resources: cassier}@t=1.5
For completeness, the full code of the example is:
.. code-block:: python

    from simpn.simulator import SimProblem, SimToken
    from simpn.reporters import SimpleReporter

    shop = SimProblem()

    resources = shop.add_var("resources")
    customers = shop.add_var("customers")

    def process(customer, resource):
        return [SimToken(resource, delay=0.75)]

    shop.add_event([customers, resources], [resources], process)

    resources.put("cassier")

    customers.put("c1")
    customers.put("c2")
    customers.put("c3")

    shop.simulate(10, SimpleReporter())
Visualizing the Model
~~~~~~~~~~~~~~~~~~~~~
To help check whether the model is correct, it is possible to visualize it. To this end, there is a `Visualisation` class.
Simply create an instance of this class and call its `show` method, as follows.
.. code-block:: python

    from simpn.visualisation import Visualisation

    v = Visualisation(shop)
    v.show()
The model will now be shown as a Petri net in a separate window.
The newly opened window will block further execution of the program until it is closed.
You can interact with the model in the newly opened window. Pressing the space bar will advance the simulation by one step.
You can also change the layout of the model by dragging its elements around.
After the model window is closed, you can save the layout of the model to a file with the `save_layout` method, so that you can restore it later.
To load a saved layout, pass the layout file as a parameter to the constructor.
If the layout file does not exist, the model is shown with an automatically generated layout.
.. code-block:: python

    v = Visualisation(shop, "layout.txt")
    v.show()
    v.save_layout("layout.txt")
Standalone Limited Simulation
=============================
.. _`Standalone-Limited-Simulation`:

Try out the standalone simulation by downloading one of the `releases`_.
You can use it to open a BPMN model file, such as `example.bpmn`_.
These BPMN files can be created using a BPMN modeling tool, such as `Signavio`_.
Note that they contain specific annotations to specify simulation properties:
- Each lane has a number of resources in its name between brackets. For example, a lane named 'employees (2)' represents that there are two employees who can perform the tasks in the employees lane.
- Each start event has a property 'interarrival_time', which must be an expression that evaluates to a number. For example, it can be :code:`1`, meaning that there is an arrival every 1 time units. It can also be :code:`random.expovariate(1)`, which means that there is an arrival rate that is exponentially distributed with an average of 1.
- Each task has a property 'processing_time', which must be an expression that evaluates to a number. For example, it can be :code:`1`, meaning that processing the task takes 1 time unit. It can also be :code:`random.uniform(0.5, 1.5)`, which means that the processing time is uniformly distributed between 0.5 and 1.5.
- Each outgoing arc of an XOR-split has a percentage on it, representing the probability that this path is followed after a choice. For example, an arc with :code:`25%` on it represents that there is a 25% possibility that this path is taken after a choice.
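The annotation values above are ordinary Python expressions, so their behavior can be illustrated by sampling them directly with the standard-library `random` module. This is a hedged sketch of what the example expressions imply, not of how the tool evaluates them:

```python
# Sampling the example simulation-property expressions to show the
# distributions they imply (illustration only, using stdlib `random`).
import random

random.seed(42)  # reproducible illustration
n = 10_000

# 'interarrival_time' = random.expovariate(1): exponential, mean 1
interarrivals = [random.expovariate(1) for _ in range(n)]
mean_interarrival = sum(interarrivals) / n

# 'processing_time' = random.uniform(0.5, 1.5): uniform on [0.5, 1.5]
processing = [random.uniform(0.5, 1.5) for _ in range(n)]

# a 25% XOR-split arc: the branch is taken with probability 0.25
choices = [random.random() < 0.25 for _ in range(n)]

print(round(mean_interarrival, 2))                        # close to 1
print(min(processing) >= 0.5 and max(processing) <= 1.5)  # True
print(round(sum(choices) / n, 2))                         # close to 0.25
```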
.. _`example.bpmn`: ext/bpmn_test_files/80%20correct%20numbers.bpmn
.. _`Signavio`: https://www.signavio.com/
Documentation
=============
For more information, including the `API specification`_ and a far more extensive `tutorial`_, please visit the `documentation`_.
.. _`documentation`: https://bpogroup.github.io/simpn/
.. _`tutorial`: https://bpogroup.github.io/simpn/teaching.html
.. _`API specification`: https://bpogroup.github.io/simpn/api.html
Acknowledgements
================
UI icons by `Flaticon`_.
.. _`Flaticon`: https://www.flaticon.com/free-icons/zoom-in
| text/x-rst | null | Remco Dijkman <r.m.dijkman@tue.nl> | null | null | Copyright (c) 2023 Remco Dijkman (Eindhoven University of Technology) Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions: The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software. THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. | null | [
"License :: OSI Approved :: MIT License",
"Operating System :: OS Independent",
"Programming Language :: Python :: 3"
] | [] | null | null | >=3.9 | [] | [] | [] | [
"igraph>=1.0.0",
"imageio>=2.7",
"matplotlib>=3.9.3",
"pygame>=2.6.1",
"pyqt6>=6.9.1",
"python-igraph>=1.0.0",
"sortedcontainers>=2.4.0",
"numpy; extra == \"dev\""
] | [] | [] | [] | [
"Homepage, https://github.com/bpogroup/simpn",
"Documentation, https://bpogroup.github.io/simpn/"
] | twine/6.0.1 CPython/3.13.3 | 2026-02-19T14:27:06.055791 | simpn-1.9.0.tar.gz | 2,584,053 | 52/96/0b5a1fda90f832855b79203f1b8a9239d4ce1052bdaad63a98addb7a45de/simpn-1.9.0.tar.gz | source | sdist | null | false | a22a119316db990d192c09451d9191dd | 64fb0ed4af28ab4c98c4ee1b71fe13c1d1cff3a1881e15361032fa50e9d5d258 | 52960b5a1fda90f832855b79203f1b8a9239d4ce1052bdaad63a98addb7a45de | null | [] | 239 |
2.4 | seeq-spy | 199.2 | Easy-to-use Python interface for Seeq | The **seeq-spy** Python module is the recommended programming interface for interacting with the Seeq Server.
Use of this module requires a
[Seeq Data Lab license](https://support.seeq.com/space/KB/113723667/Requesting+and+Installing+a+License+File).
Documentation can be found at
[https://python-docs.seeq.com](https://python-docs.seeq.com/).
The Seeq **SPy** module is a friendly set of functions that are optimized for use with
[Jupyter](https://jupyter.org), [Pandas](https://pandas.pydata.org/) and [NumPy](https://www.numpy.org/).
The SPy module is the best choice if you're trying to do any of the following:
- Search for signals, conditions, scalars, assets
- Pull data out of Seeq
- Import data in a programmatic way (when Seeq Workbench's *CSV Import* capability won't cut it)
- Calculate new data in Python and push it into Seeq
- Create an asset model
- Programmatically create and manipulate Workbench Analyses or Organizer Topics
**Use of the SPy module requires Python 3.8 or later.**
**SPy version 187 and higher is compatible with Pandas 2.x.**
To start exploring the SPy module, execute the following lines of code in Jupyter:
```python
from seeq import spy
spy.docs.copy()
```
Your Jupyter folder will now contain a `SPy Documentation` folder that has a *Tutorial* and *Command Reference*
notebook that will walk you through common activities.
For more advanced tasks, you may need to use the `seeq.sdk` module directly as described at
[https://pypi.org/project/seeq](https://pypi.org/project/seeq).
# Upgrade Considerations
The `seeq-spy` module can, and should, be upgraded separately from the main `seeq` module by running
`pip install -U seeq-spy`. It is written to be compatible with Seeq Server version R60 and later.
Read the [Installation](https://python-docs.seeq.com/upgrade-considerations.html) page in the SPy documentation
for further instructions on how to install and upgrade the `seeq-spy` module.
Check the [Change Log](https://python-docs.seeq.com/changelog.html) and
[Version Considerations](https://python-docs.seeq.com/user_guide/Version%20Considerations.html) pages for more details.
| text/markdown | null | Seeq Corporation <support@seeq.com> | null | null | Seeq Python Library – License File
------------------------------------------------------------------------------------------------------------------------
Seeq Python Library - Copyright Notice
© Copyright 2020 - Seeq Corporation
Permission to Distribute - Permission is hereby granted, free of charge, to any person obtaining a copy of this Seeq
Python Library and associated documentation files (the "Software"), to copy the Software, to publish and distribute
copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following
conditions:
1. The foregoing permission does not grant any right to use, modify, merge, sublicense, sell or otherwise deal in the
Software, and all such use is subject to the terms and conditions of the Seeq Python Library - End User License
Agreement, below.
2. The above copyright notice and the full text of this permission notice shall be included in all copies or
substantial portions of the Software.
------------------------------------------------------------------------------------------------------------------------
SEEQ PYTHON LIBRARY - END USER LICENSE AGREEMENT
This End User License Agreement ("Agreement") is a binding legal document between Seeq and you, which explains your
rights and obligations as a Customer using Seeq products. "Customer" means the person or company that downloads and
uses the Seeq Python library. "Seeq" means Seeq Corporation, 1301 2nd Avenue, Suite 2850, Seattle, WA 98101, USA.
By installing or using the Seeq Python library, you agree on behalf of Customer to be bound by this Agreement. If you
do not agree to this Agreement, then do not install or use Seeq products.
This Agreement, and your license to use the Seeq Python Library, remains in effect only during the term of any
subscription license that you purchase to use other Seeq software. Upon the termination or expiration of any such paid
license, this Agreement terminates immediately, and you will have no further right to use the Seeq Python Library.
From time to time, Seeq may modify this Agreement, including any referenced policies and other documents. Any modified
version will be effective at the time it is posted to Seeq’s website at
https://www.seeq.com/legal/software-license-agreement. To keep abreast of your license rights and relevant
restrictions, please bookmark this Agreement and read it periodically. By using any Product after any modifications,
Customer agrees to all of the modifications.
This End User License Agreement ("Agreement") is entered into by and between Seeq Corporation ("Seeq") and the Customer
identified during the process of registering the Seeq Software. Seeq and Customer agree as follows.
1. Definitions.
"Affiliate" or "Affiliates" means any company, corporation, partnership, joint venture, or other entity in which any of
the Parties directly or indirectly owns, is owned by, or is under common ownership with a Party to this Agreement to
the extent of at least fifty percent (50%) of its equity, voting rights or other ownership interest (or such lesser
percentage which is the maximum allowed to be owned by a foreign corporation in a particular jurisdiction).
"Authorized Users" means the individually-identified employees, contractors, representatives or consultants of Customer
who are permitted to use the Software in accordance with the applicable Order Document.
"Customer Data" means data in Customer’s data resources that is accessed by, and processed in, Seeq Server software.
Customer Data includes all Derived Data.
"Derived Data" means data derived by Authorized Users from Customer Data and consists of: scalars, signals, conditions,
journals, analyses, analysis results, worksheets, workbooks and topics.
"Documentation" means Seeq’s standard installation materials, training materials, specifications and online help
documents normally made available by Seeq in connection with the Software, as modified from time to time by Seeq.
"On-Premise Software" means Software that is installed on hardware owned or arranged by and under the control of
Customer, such as Customer-owned hardware, a private cloud or a public cloud. On-Premise Software is managed by
Customer. Examples of On-Premise Software include Seeq Server, Seeq Python library, Seeq Connectors and Seeq Remote
Agents.
"Order Document" means each mutually-agreed ordering document used by the parties from time to time for the purchase of
licenses for the Software. Customer’s Purchase Order may, in conjunction with the applicable Proposal or Quote from
Seeq, constitute an Order Document subject to the terms of this Agreement. All Order Documents are incorporated by
reference into this Agreement.
"SaaS Software" means Software that is installed on hardware arranged by and under the control of Seeq, such as
Microsoft Azure or AWS. SaaS Software is managed by Seeq. Examples of SaaS Software include Seeq Server.
"Seeq Technology" means the Software, the Documentation, all algorithms and techniques for use with the Software
created by Seeq, and all modifications, improvements, enhancements and derivative works thereof created by Seeq.
"Software" means: (i) the Seeq Python library with which this License File is included, and (ii) Seeq software
applications identified on an applicable Order Document that are licensed to Customer pursuant to this Agreement.
Software includes On-Premise Software and SaaS Software.
"Subscription License" means a license allowing Customer to access SaaS Software, and to copy, install and use
On-Premise Software, for the period of the Subscription Term.
"Subscription Term" means the period of time specified in an Order Document during which the Subscription License is in
effect. The Subscription Term for the Seeq Python library shall be coterminous with Customer’s paid license to use Seeq
Software under an Order Document.
2. Subscription License.
a. License. Seeq grants Customer a worldwide, non-exclusive, non‐transferable, non‐sublicenseable right to access the
SaaS Software and to copy, install and use the On-Premise Software for the duration of the Subscription Term, subject
to the terms and conditions of this Agreement. Seeq does not license the Software on a perpetual basis.
b. Separate License Agreement. If Seeq and Customer have executed a separate License Agreement intended to govern
Customer’s use of the Software, then such separate License Agreement shall constitute the complete and exclusive
agreement of the parties for such use, and this Agreement shall be of no force or effect, regardless of any action by
Customer personnel that would have appeared to accept the terms of this Agreement.
c. Authorized Users. Only Authorized Users may use the Software, and only up to the number of Authorized Users
specified in the applicable Order Document. Customer designates each individual Authorized User in the Software. If
the number of Authorized Users is greater than the number specified in the particular Order Document, Customer will
purchase additional Authorized Users at the prices set out in the Order Document.
d. Subscription Term. The Subscription Term shall begin and end as provided in the applicable Order Document. The
Subscription Term will not renew, except by a new Order Document acceptable to both parties. Upon renewal of a
Subscription Term, Customer will, if applicable, increase the number of Authorized Users to a number that Customer
believes in good faith will be sufficient to accommodate any expected growth in the number of Customer’s users during
the new Subscription Term. This Agreement will continue in effect for the Subscription Term of all Order Documents
hereunder.
e. Limitations on Use. All use of Software must be in accordance with the relevant Documentation. End User may make a
limited number of copies of the Software as is strictly necessary for purposes of data protection, archiving, backup,
and testing. Customer will use the Software for its internal business purposes and to process information about the
operations of Customer and its Affiliates, and will not, except as provided in an Order Document, directly or
indirectly, use the Software to process information about or for any other company. Customer will: (i) not permit
unauthorized use of the Software, (ii) not infringe or violate the intellectual property rights, privacy, or any other
rights of any third party or any applicable law, (iii) ensure that each user uses a unique Authorized User ID and
password, (iv) not, except as provided in an Order Document, allow resale, timesharing, rental or use of the Software
in a service bureau or as a provider of outsourced services, and (v) not modify, adapt, create derivative works of,
reverse engineer, decompile, or disassemble the Software or Seeq Technology.
f. Software Modification. Seeq may modify the Software from time to time, but such modification will not materially
reduce the functionality of the Software. Seeq may contract with third parties to support the Software, so long as they
are subject to obligations of confidentiality to Seeq at least as strict as Seeq’s to Customer. Seeq shall remain
responsible for the performance of its contractors.
2. Support. Support for Customer’s use of the Software is included in Customer’s subscription fee. Seeq will provide
support and maintenance for the Software, including all applicable updates, and web-based support assistance in
accordance with Seeq’s support policies in effect from time to time. Other professional services are available for
additional fees.
3. Ownership.
a. Customer Data. Customer owns all Customer Data, including all Derived Data, and Seeq shall not receive any ownership
interest in it. Seeq may use the Customer Data only to provide the Software capabilities purchased by Customer and as
permitted by this Agreement, and not for any other purpose. Customer is the owner and data controller for the Customer
Data.
b. Software and Seeq Technology. Seeq retains all rights in the Software and the Seeq Technology (subject to the
license granted to Customer). Customer will not, and will not allow any other person to, modify, adapt, create
derivative works of, reverse engineer, decompile, or disassemble the Software or Seeq Technology. All new Seeq
Technology developed by Seeq while working with Customer, including any that was originally based on feedback,
suggestions, requests or comments from Customer, shall be Seeq’s sole property, and Customer shall have the right to
use any such new Seeq Technology only in connection with the Software.
c. Third-Party Open Source Software. The Software incorporates third-party open source software. All such software must
comply with Seeq’s Third Party Open Source License Policy. Customer may request a list of such third-party software and
a copy of the Policy at any time.
4. Fees and Payment Terms.
a. Fees. Customer shall pay the fees as specified in the Order Document. Unless otherwise specified in the Order
Document, all amounts are in US Dollars (USD). Upon renewal of a Subscription Term, Customer will, if applicable,
increase the number of Authorized Users to a number that Customer believes in good faith will be sufficient to
accommodate any expected growth in the number of Customer’s users during the new Subscription Term.
b. Invoicing & Payment. All payments are due within 30 days of the date of the invoice and are non-cancellable and
non-refundable except as provided in this Agreement. If Customer does not pay any amount (not disputed in good faith)
when due, Seeq may charge interest on the unpaid amount at the rate of 1.0% per month (or if less, the maximum rate
allowed by law). If Customer does not pay an overdue amount (not disputed in good faith) within 20 days of notice of
non-payment, Seeq may suspend the Software until such payment is received, but Customer will remain obligated to make
all payments due under this Agreement. Customer agrees to pay Seeq’s expenses, including reasonable attorneys and
collection fees, incurred in collecting amounts not subject to a good faith dispute.
c. Excess Usage of Software. The Software has usage limitations based on the number of Authorized Users or other
metrics as set forth on the Order Document. Customer shall maintain accurate records regarding Customer’s actual use of
the Software and shall make such information promptly available to Seeq upon request. Seeq may also monitor Customer’s
use of the Software.
d. Fees for Excess Usage of Software. Seeq will not require Customer to pay for past excess use, and in consideration
thereof:
i. If Customer’s license covers a fixed number of Authorized Users, Customer will promptly issue a new Order Document
to cover current and good-faith anticipated future excess use.
ii. If Customer’s license is under Seeq’s Extended Experience Program, Strategic Agreement Program or any other program
not tied directly to a fixed number of Authorized Users, then the parties will negotiate in good faith as follows:
(1) if the excess use is less than 50% above the number of Authorized Users that was used to set pricing for such
license for the current contract period (usually a year), the parties will negotiate an appropriate usage level and
fees for the next contract period, and
(2) If the excess use is more than 50% above such number, the parties will negotiate appropriate usage levels and fees
for the remainder of the current and the next contract periods, with additional fees for the current period payable
upon Seeq’s invoice.
e. Taxes. All fees are exclusive of all taxes, including federal, state and local use, sales, property, value-added,
ad valorem and similar taxes related to this transaction, however designated (except taxes based on Seeq’s net income).
Unless Customer presents valid evidence of exemption, Customer agrees to pay any and all such taxes that it is
obligated by law to pay. Customer will pay Seeq’s invoices for such taxes whenever Seeq is required to collect such
taxes from Customer.
f. Purchase through Seeq Partner. In the event that End User purchased its subscription to Seeq through an accredited
Seeq Partner, notwithstanding provisions of this Agreement relating to End User's payments to Seeq, Partner will
invoice End User, or charge End User using the credit card on file, and End User will pay all applicable subscription
fees to Partner.
6. Confidentiality. "Confidential Information" means all information and materials obtained by a party (the
"Recipient") from the other party (the "Disclosing Party"), whether in tangible form, written or oral, that is
identified as confidential or would reasonably be understood to be confidential given the nature of the information and
circumstances of disclosure, including without limitation Customer Data, the Software, Seeq Technology, and the terms
and pricing set out in this Agreement and Order Documents. Confidential Information does not include information that
(a) is already known to the Recipient prior to its disclosure by the Disclosing Party; (b) is or becomes generally
known through no wrongful act of the Recipient; (c) is independently developed by the Recipient without use of or
reference to the Disclosing Party’s Confidential Information; or (d) is received from a third party without restriction
and without a breach of an obligation of confidentiality. The Recipient shall not use or disclose any Confidential
Information without the Disclosing Party’s prior written permission, except to its employees, contractors, directors,
representatives or consultants who have a need to know in connection with this Agreement or Recipient’s business
generally, or as otherwise allowed herein. The Recipient shall protect the confidentiality of the Disclosing Party’s
Confidential Information in the same manner that it protects the confidentiality of its own confidential information of
a similar nature, but using not less than a reasonable degree of care. The Recipient may disclose Confidential
Information to the extent that it is required to be disclosed pursuant to a statutory or regulatory provision or court
order, provided that the Recipient provides prior notice of such disclosure to the Disclosing Party, unless such notice
is prohibited by law, rule, regulation or court order. As long as an Order Document is active under this Agreement and
for two (2) years thereafter, and at all times while Customer Data is in Seeq’s possession, the confidentiality
provisions of this Section shall remain in effect.
7. Security. Seeq will maintain and enforce commercially reasonable physical and logical security methods and
procedures to protect Customer Data on the SaaS Software and to secure and defend the SaaS Software against "hackers"
and others who may seek to access the SaaS Software without authorization. Seeq will test its systems for potential
security vulnerabilities at least annually. Seeq will use commercially reasonable efforts to remedy any breach of
security or unauthorized access. Seeq reserves the right to suspend access to the Seeq System in the event of a
suspected or actual security breach. Customer will maintain and enforce commercially reasonable security methods and
procedures to prevent misuse of the log-in information of its employees and other users. Seeq shall not be liable for
any damages incurred by Customer or any third party in connection with any unauthorized access resulting from the
actions of Customer or its representatives.
8. Warranties.
a. Authority and Compliance with Laws. Each party warrants and represents that it has all requisite legal authority to
enter into this Agreement and that it shall comply with all laws applicable to its performance hereunder including
export laws and laws pertaining to the collection and use of personal data.
b. Industry Standards and Documentation. Seeq warrants and represents that the Software will materially conform to the
specifications as set forth in the applicable Documentation. At no additional cost to Customer, and as Customer’s sole
and exclusive remedy for nonconformity of the Software with this limited warranty, Seeq will use commercially
reasonable efforts to correct any such nonconformity, provided Customer promptly notifies Seeq in writing outlining the
specific details upon discovery, and if such efforts are unsuccessful, then Customer may terminate, and receive a
refund of all pre-paid and unused fees for, the affected Software. This limited warranty shall be void if the failure
of the Software to conform is caused by (i) the use or operation of the Software with an application or in an
environment other than as set forth in the Documentation, or (ii) modifications to the Software that were not made by
Seeq or Seeq’s authorized representatives.
c. Malicious Code. Seeq will not introduce any time bomb, virus or other harmful or malicious code designed to disrupt
the use of the Software, other than Seeq’s ability to disable access to the Software in the event of termination or
suspension as permitted hereunder.
d. DISCLAIMER. EXCEPT AS EXPRESSLY SET FORTH HEREIN, NEITHER PARTY MAKES ANY REPRESENTATIONS OR WARRANTIES OF ANY
KIND, WHETHER EXPRESS OR IMPLIED, INCLUDING ANY WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE OR
NON-INFRINGEMENT. EXCEPT AS STATED IN THIS SECTION, SEEQ DOES NOT REPRESENT THAT CUSTOMER’S USE OF THE SOFTWARE WILL BE
SECURE, UNINTERRUPTED OR ERROR FREE. NO STATEMENT OR INFORMATION, WHETHER ORAL OR WRITTEN, OBTAINED FROM SEEQ IN ANY
MEANS OR FASHION SHALL CREATE ANY WARRANTY NOT EXPRESSLY AND EXPLICITLY SET FORTH IN THIS AGREEMENT.
9. Indemnification by Seeq. Seeq shall indemnify, defend and hold Customer harmless from and against all losses
(including reasonable attorney fees) arising out of any third-party suit or claim alleging that Customer’s authorized
use of the Software infringes any valid U.S. or European Union patent or trademark, trade secret or other proprietary
right of such third party. Customer shall: (i) give Seeq prompt written notice of such suit or claim, (ii) grant Seeq
sole control of the defense or settlement of such suit or claim and (iii) reasonably cooperate with Seeq, at Seeq’s
expense, in its defense or settlement of the suit or claim. To the extent that Seeq is prejudiced by Customer's failure
to comply with the foregoing requirements, Seeq shall not be liable hereunder. Seeq may, at its option and expense, (i)
replace the Software with compatible non-infringing Software, (ii) modify the Software so that it is non-infringing,
(iii) procure the right for Customer to continue using the Software, or (iv) if the foregoing options are not
reasonably available, terminate the applicable Order Document and refund Customer all prepaid fees for Software
applicable to the remainder of the applicable Subscription Term. Seeq shall have no obligation to Customer with respect
to any infringement claim against Customer if such claim existed prior to the effective date of the applicable Order
Document or such claim is based upon (i) Customer’s use of the Software in a manner not expressly authorized by this
Agreement, (ii) the combination, operation, or use of the Software with third party material that was not provided by
Seeq, if Customer’s liability would have been avoided in the absence of such combination, use, or operation, or (iii)
modifications to the Software other than as authorized in writing by Seeq. THIS SECTION SETS FORTH SEEQ’S ENTIRE
OBLIGATION TO CUSTOMER WITH RESPECT TO ANY CLAIM SUBJECT TO INDEMNIFICATION UNDER THIS SECTION.
10. LIMITATION OF LIABILITIES. IN NO EVENT SHALL EITHER PARTY OR THEIR SERVICE PROVIDERS, LICENSORS CONTRACTORS OR
SUPPLIERS BE LIABLE FOR ANY INDIRECT, INCIDENTAL, CONSEQUENTIAL, SPECIAL OR PUNITIVE DAMAGES OF ANY KIND, INCLUDING
WITHOUT LIMITATION DAMAGES FOR COVER OR LOSS OF USE, DATA, REVENUE OR PROFITS, EVEN IF SUCH PARTY HAS BEEN ADVISED OF
THE POSSIBILITY OF SUCH DAMAGES. THE FOREGOING LIMITATION OF LIABILITY AND EXCLUSION OF CERTAIN DAMAGES SHALL APPLY
REGARDLESS OF THE SUCCESS OR EFFECTIVENESS OF OTHER REMEDIES. EXCEPT FOR THE PARTIES’ INDEMNIFICATION OBLIGATIONS,
DAMAGES FOR BODILY INJURY OR DEATH, DAMAGES TO REAL PROPERTY OR TANGIBLE PERSONAL PROPERTY, AND FOR BREACHES OF
CONFIDENTIALITY UNDER SECTION 6, IN NO EVENT SHALL THE AGGREGATE LIABILITY OF A PARTY, ITS SERVICE PROVIDERS,
LICENSORS, CONTRACTORS OR SUPPLIERS ARISING UNDER THIS AGREEMENT, WHETHER IN CONTRACT, TORT OR OTHERWISE, EXCEED THE
TOTAL AMOUNT OF FEES PAID BY CUSTOMER TO SEEQ FOR THE RELEVANT SOFTWARE WITHIN THE PRECEDING TWELVE (12) MONTHS.
11. Termination and Expiration.
a. Termination Rights. A party may terminate any Order Document: (i) for any material breach not cured within thirty
(30) days following written notice of such breach, and (ii) immediately upon written notice if the other party files
for bankruptcy, becomes the subject of any bankruptcy proceeding or becomes insolvent.
b. Customer Termination for Convenience. Customer may terminate any Order Document for any reason at any time.
c. Termination Effects. Upon termination by Customer under Section 13(a)(i) or 13(b) above, Seeq shall refund
Customer all prepaid and unused fees for the Software. Upon termination by Seeq under Section 13(a)(i) above, Customer
shall promptly pay all unpaid fees due through the end of the Subscription Term of such Order Document.
d. Access and Data. Upon expiration or termination of an Order Document, Seeq will disable access to the applicable
SaaS Software, and Customer will uninstall and destroy all copies of the Software and Documentation on hardware under
its control. Upon Customer request, Seeq will provide Customer with a copy of all Customer Data in Seeq’s possession,
in a mutually agreeable format within a mutually agreeable timeframe. Notwithstanding the foregoing: (i) Seeq may
retain backup copies of Customer Data for a limited period of time in accordance with Seeq’s then-current backup
policy, and (ii) Seeq will destroy all Customer Data no later than 3 months after the end of the Subscription Term or
earlier, upon written request from Customer.
12. General.
a. Amendment. Seeq may modify this Agreement from time to time, including any referenced policies and other documents.
Any modified version will be effective at the time it is posted on Seeq’s website at
https://seeq.com/legal/software-license-agreement.
b. Precedence. The Order Document is governed by the terms of this Agreement and in the event of a conflict or
discrepancy between the terms of an Order Document and the terms of this Agreement, this Agreement shall govern except
as to the specific Software ordered, and the fees, currency and payment terms for such orders, for which the Order
Document shall govern, as applicable. If an Order Document signed by Seeq explicitly states that it is intended to
amend or modify a term of this Agreement, such Order Document shall govern over this Agreement solely as to the
amendment or modification. Seeq objects to and rejects any additional or different terms proposed by Customer,
including those contained in Customer’s purchase order, acceptance, vendor portal or website. Neither Seeq’s acceptance
of Customer’s purchase order nor its failure to object elsewhere to any provisions of any subsequent document, website,
communication, or act of Customer shall be deemed acceptance thereof or a waiver of any of the terms hereof.
c. Assignment. Neither party may assign this Agreement, in whole or in part, without the prior written consent of the
other, which shall not be unreasonably withheld. However, either party may assign this Agreement to any Affiliate, or
to a person or entity into which it has merged or which has otherwise succeeded to all or substantially all of its
business or assets to which this Agreement pertains, by purchase of stock, assets, merger, reorganization or otherwise,
and which has assumed in writing or by operation of law its obligations under this Agreement, provided that Customer
shall not assign this Agreement to a direct competitor of Seeq. Any assignment or attempted assignment in breach of
this Section shall be void. This Agreement shall be binding upon and shall inure to the benefit of the parties’
respective successors and assigns.
d. Employees. Each party agrees that during, and for one year after, the term of this Agreement, it will not directly
or indirectly solicit for hire any of the other party’s employees who were actively engaged in the provision or use of
the Software without the other party’s express written consent. This restriction shall not apply to offers extended
solely as a result of and in response to public advertising or similar general solicitations not specifically targeted
at the other party’s employees.
e. Independent Contractors. The parties are independent contractors and not agents or partners of, or joint venturers
with, the other party for any purpose. Neither party shall have any right, power, or authority to act or create any
obligation, express or implied, on behalf of the other party.
f. Notices. All notices required under this Agreement shall be in writing and shall be delivered personally against
receipt, or by registered or certified mail, return receipt requested, postage prepaid, or sent by
nationally-recognized overnight courier service, and addressed to the party to be notified at their address set forth
below. All notices and other communications required or permitted under this Agreement shall be deemed given when
delivered personally, or one (1) day after being deposited with such overnight courier service, or five (5) days after
being deposited in the United States mail, postage prepaid to Seeq at 1301 Second Avenue, #2850, Seattle, WA 98101,
Attn: Legal and to Customer at the then-current address in Seeq’s records, or to such other address as each party may
designate in writing.
g. Force Majeure. Except for payment obligations hereunder, either party shall be excused from performance of
non-monetary obligations under this Agreement for such period of time as such party is prevented from performing such
obligations, in whole or in part, due to causes beyond its reasonable control, including but not limited to, delays
caused by the other party, acts of God, war, terrorism, criminal activity, civil disturbance, court order or other
government action, third party performance or non-performance, strikes or work stoppages, provided that such party
gives prompt written notice to the other party of such event.
h. Integration. This Agreement, including all Order Documents and documents attached hereto or incorporated herein by
reference, constitutes the complete and exclusive statement of the parties’ agreement and supersedes all proposals or
prior agreements, oral or written, between the parties relating to the subject matter hereof.
i. Not Contingent. The parties’ obligations hereunder are neither contingent on the delivery of any future functionality
or features of the Software nor dependent on any oral or written public comments made by Seeq regarding future
functionality or features of the Software.
j. No Third Party Rights. No right or cause of action for any third party is created by this Agreement or any
transaction under it.
k. Non-Waiver; Invalidity. No waiver or modification of the provisions of this Agreement shall be effective unless in
writing and signed by the party against whom it is to be enforced. If any provision of this Agreement is held invalid,
illegal or unenforceable, the validity, legality and enforceability of the remaining provisions shall not be affected
or impaired thereby. A waiver of any provision, breach or default by either party or a party’s delay exercising its
rights shall not constitute a waiver of any other provision, breach or default.
l. Governing Law and Venue. This Agreement will be interpreted and construed in accordance with the laws of the State
of Delaware without regard to conflict of law principles, and both parties hereby consent to the exclusive jurisdiction
and venue of courts in Wilmington, Delaware in all disputes arising out of or relating to this Agreement.
m. Mediation. The parties agree to attempt to resolve disputes without extended and costly litigation. The parties
will: (1) communicate any dispute to the other party, orally and in writing; (2) respond in writing to any written
dispute from the other party within 15 days of receipt; (3) if satisfactory resolution does not occur within 45 days of initial
written notification of the dispute, and if both parties do not mutually agree to a time extension, then either party
may seek a remedy in court.
n. Survival. Provisions of this Agreement that are intended to survive termination or expiration of this Agreement in
order to achieve the fundamental purposes of this Agreement shall so survive, including without limitation: Ownership,
Fees and Payment Terms, Confidentiality, Customer Data, Indemnification by Seeq and Limitation of Liabilities.
o. Headings and Language. The headings of sections included in this Agreement are inserted for convenience only and
are not intended to affect the meaning or interpretation of this Agreement. The parties to this Agreement and Order
Document have requested that this Agreement and all related documentation be written in English.
p. Federal Government End Use Provisions. Seeq provides the Software, including related technology, for ultimate
federal government end use solely in accordance with the following: Government technical data and software rights
related to the Software include only those rights customarily provided to the public as defined in this Agreement. This
customary commercial license is provided in accordance with FAR 12.211 (Technical Data) and FAR 12.212 (Software) and,
for Department of Defense transactions, DFAR 252.227-7015 (Technical Data – Commercial Items) and DFAR 227.7202-3
(Rights in Commercial Computer Software or Computer Software Documentation).
q. Contract for Services. The parties intend this Agreement to be a contract for the provision of services and not a
contract for the sale of goods. To the fullest extent permitted by law, the provisions of the Uniform Commercial Code
(UCC), the Uniform Computer Information Transaction Act (UCITA), the United Nations Convention on Contracts for the
International Sale of Goods, and any substantially similar legislation as may be enacted, shall not apply to this
Agreement.
r. Actions Permitted. Except for actions for nonpayment or breach of a party’s proprietary rights, no action,
regardless of form, arising out of or relating to the Agreement may be brought by either party more than one year after
the cause of action has accrued.
Should you have any questions concerning this Agreement, please contact Seeq at legal@seeq.com.
| null | [
"Programming Language :: Python :: 3",
"License :: Other/Proprietary License",
"Operating System :: OS Independent"
] | [] | null | null | >=3.8 | [] | [] | [] | [
"Deprecated>=1.2.6",
"numpy>=1.21.6",
"pandas>=1.2.5",
"python-dateutil>=2.7.3",
"pytz>=2020.1",
"requests>=2.22.0",
"urllib3>=1.21.1",
"ipython>=7.6.1; extra == \"widgets\"",
"ipywidgets>=7.6.0; extra == \"widgets\"",
"matplotlib>=3.5.0; extra == \"widgets\"",
"seeq-data-lab-env-mgr>=0.1.0; ext... | [] | [] | [] | [
"Homepage, https://www.seeq.com",
"Documentation, https://python-docs.seeq.com/",
"Changelog, https://python-docs.seeq.com/changelog.html"
] | twine/6.2.0 CPython/3.11.14 | 2026-02-19T14:26:25.063494 | seeq_spy-199.2.tar.gz | 5,716,208 | 3d/9d/782b0f390e02f87649f721cd159e28747495d30d4d4d914dda3f0905292a/seeq_spy-199.2.tar.gz | source | sdist | null | false | ed2563d9c1e728c75dc387137dec9f4a | 9dc95c9810fbe01a5f3ab44a54875dcba6037a5596cb4ee09ceb2c49f20f5978 | 3d9d782b0f390e02f87649f721cd159e28747495d30d4d4d914dda3f0905292a | null | [] | 2,775 |
2.4 | openmodule | 18.0.0rc1 | Libraries for developing the arivo openmodule | # OpenModule V2
[TOC]
## Quickstart
You can install this library via pip:
```bash
pip install openmodule
```
#### Development with Feature Branches
During development it might be necessary to install a version of openmodule for which no pip package exists yet.
Below you can find how to install a specific openmodule branch for your application with pip.
##### Openmodule
Bash command:
```bash
pip install "git+https://gitlab.com/arivo-public/device-python/openmodule@<branch>#egg=openmodule"
```
requirements.txt:
```text
git+https://gitlab.com/arivo-public/device-python/openmodule@<branch>#egg=openmodule
```
##### Openmodule Test
Bash command:
```bash
pip install "git+https://gitlab.com/arivo-public/device-python/openmodule@<branch>#egg=openmodule-test&subdirectory=openmodule_test"
```
requirements.txt:
```text
git+https://gitlab.com/arivo-public/device-python/openmodule@<branch>#egg=openmodule-test&subdirectory=openmodule_test
```
##### Openmodule Commands
Bash command:
```bash
pip install "git+https://gitlab.com/arivo-public/device-python/openmodule@<branch>#egg=openmodule-commands&subdirectory=openmodule_commands"
```
requirements.txt:
```text
git+https://gitlab.com/arivo-public/device-python/openmodule@<branch>#egg=openmodule-commands&subdirectory=openmodule_commands
```
#### Local Development
Sometimes you want to test local changes to the Openmodule library in device services, so a local installation of the
library is needed. We use pip's
[editable installs](https://pip.pypa.io/en/stable/topics/local-project-installs/#editable-installs) for this.
##### Openmodule
bash command:
```bash
pip install -e <path_to_openmodule_root>
```
##### Openmodule Test
bash command:
```bash
pip install -e <path_to_openmodule_root>/openmodule_test/
```
##### Openmodule Commands
bash command:
```bash
pip install -e <path_to_openmodule_root>/openmodule_commands/
```
## Changes
- [Breaking Changes](docs/migrations.md)
- [Known Issues](docs/known_issues.md)
## Documentation
### Openmodule
- [Getting Started](docs/getting_started.md)
- [Coding Standard](docs/coding_standard.md)
- [Settings](docs/settings.md)
- [RPC](docs/rpc.md)
- [Health](docs/health.md)
- [Database](docs/database.md)
- [Eventlog](docs/event_sending.md)
- [Package Reader](docs/package_reader.md)
- [Anonymization](docs/anonymization.md)
- [Connection Status Listener](docs/connection_status_listener.md)
- [Settings Provider](docs/settings_provider.md)
- [Access Service](docs/access_service.md)
- [CSV Export](docs/csv_export.md)
- [Translations](docs/translation.md)
- [Utils](docs/utils.md)
- [Deprecated Features](docs/deprecated.md)
- [Testing](docs/testing.md)
- [File Cleanup](docs/cleanup.md)
### Openmodule Commands
- [Openmodule Commands](docs/commands.md)
| text/markdown; charset=UTF-8 | ARIVO | support@arivo.co | null | null | GNU General Public License v2 (GPLv2) | arivo openmodule | [
"Intended Audience :: Developers",
"License :: OSI Approved :: GNU General Public License v2 (GPLv2)",
"Programming Language :: Python",
"Programming Language :: Python :: 3.12"
] | [] | https://gitlab.com/arivo-public/device-python/openmodule.git | null | null | [] | [] | [] | [
"pydantic~=2.12.0",
"sentry-sdk~=2.19.0",
"orjson~=3.11",
"pyzmq~=26.2",
"pyyaml<7,>=5.0",
"editdistance~=0.8.1",
"sqlalchemy~=2.0.0",
"alembic<2,>=1.5.4",
"requests<3,>=2.22",
"python-dateutil~=2.9",
"python-dotenv~=1.2.0",
"arivo-settings_models~=2.6.0",
"openmodule_test; extra == \"test\"... | [] | [] | [] | [] | twine/6.2.0 CPython/3.12.10 | 2026-02-19T14:26:19.981449 | openmodule-18.0.0rc1.tar.gz | 174,092 | 0d/41/7b6c55362467b5f3c43c10c3b4a07890143db909673cf467e0ca60311132/openmodule-18.0.0rc1.tar.gz | source | sdist | null | false | 70effcb9d16ea62b0f710885b5a98880 | b54f3209028970113ff719ceea3eec4c4b73551d8bb2a7b417dcc5c6708dfdb6 | 0d417b6c55362467b5f3c43c10c3b4a07890143db909673cf467e0ca60311132 | null | [
"LICENSE"
] | 212 |
2.2 | pypi-json | 0.5.0.post1 | PyPI JSON API client library |
==========
pypi-json
==========
.. start short_desc
**PyPI JSON API client library**
.. end short_desc
.. start shields
.. list-table::
:stub-columns: 1
:widths: 10 90
* - Docs
- |docs| |docs_check|
* - Tests
- |actions_linux| |actions_windows| |actions_macos| |coveralls|
* - PyPI
- |pypi-version| |supported-versions| |supported-implementations| |wheel|
* - Anaconda
- |conda-version| |conda-platform|
* - Activity
- |commits-latest| |commits-since| |maintained| |pypi-downloads|
* - QA
- |codefactor| |actions_flake8| |actions_mypy|
* - Other
- |license| |language| |requires|
.. |docs| image:: https://img.shields.io/readthedocs/pypi-json/latest?logo=read-the-docs
:target: https://pypi-json.readthedocs.io/en/latest
:alt: Documentation Build Status
.. |docs_check| image:: https://github.com/repo-helper/pypi-json/workflows/Docs%20Check/badge.svg
:target: https://github.com/repo-helper/pypi-json/actions?query=workflow%3A%22Docs+Check%22
:alt: Docs Check Status
.. |actions_linux| image:: https://github.com/repo-helper/pypi-json/workflows/Linux/badge.svg
:target: https://github.com/repo-helper/pypi-json/actions?query=workflow%3A%22Linux%22
:alt: Linux Test Status
.. |actions_windows| image:: https://github.com/repo-helper/pypi-json/workflows/Windows/badge.svg
:target: https://github.com/repo-helper/pypi-json/actions?query=workflow%3A%22Windows%22
:alt: Windows Test Status
.. |actions_macos| image:: https://github.com/repo-helper/pypi-json/workflows/macOS/badge.svg
:target: https://github.com/repo-helper/pypi-json/actions?query=workflow%3A%22macOS%22
:alt: macOS Test Status
.. |actions_flake8| image:: https://github.com/repo-helper/pypi-json/workflows/Flake8/badge.svg
:target: https://github.com/repo-helper/pypi-json/actions?query=workflow%3A%22Flake8%22
:alt: Flake8 Status
.. |actions_mypy| image:: https://github.com/repo-helper/pypi-json/workflows/mypy/badge.svg
:target: https://github.com/repo-helper/pypi-json/actions?query=workflow%3A%22mypy%22
:alt: mypy status
.. |requires| image:: https://dependency-dash.repo-helper.uk/github/repo-helper/pypi-json/badge.svg
:target: https://dependency-dash.repo-helper.uk/github/repo-helper/pypi-json/
:alt: Requirements Status
.. |coveralls| image:: https://img.shields.io/coveralls/github/repo-helper/pypi-json/master?logo=coveralls
:target: https://coveralls.io/github/repo-helper/pypi-json?branch=master
:alt: Coverage
.. |codefactor| image:: https://img.shields.io/codefactor/grade/github/repo-helper/pypi-json?logo=codefactor
:target: https://www.codefactor.io/repository/github/repo-helper/pypi-json
:alt: CodeFactor Grade
.. |pypi-version| image:: https://img.shields.io/pypi/v/pypi-json
:target: https://pypi.org/project/pypi-json/
:alt: PyPI - Package Version
.. |supported-versions| image:: https://img.shields.io/pypi/pyversions/pypi-json?logo=python&logoColor=white
:target: https://pypi.org/project/pypi-json/
:alt: PyPI - Supported Python Versions
.. |supported-implementations| image:: https://img.shields.io/pypi/implementation/pypi-json
:target: https://pypi.org/project/pypi-json/
:alt: PyPI - Supported Implementations
.. |wheel| image:: https://img.shields.io/pypi/wheel/pypi-json
:target: https://pypi.org/project/pypi-json/
:alt: PyPI - Wheel
.. |conda-version| image:: https://img.shields.io/conda/v/domdfcoding/pypi-json?logo=anaconda
:target: https://anaconda.org/domdfcoding/pypi-json
:alt: Conda - Package Version
.. |conda-platform| image:: https://img.shields.io/conda/pn/domdfcoding/pypi-json?label=conda%7Cplatform
:target: https://anaconda.org/domdfcoding/pypi-json
:alt: Conda - Platform
.. |license| image:: https://img.shields.io/github/license/repo-helper/pypi-json
:target: https://github.com/repo-helper/pypi-json/blob/master/LICENSE
:alt: License
.. |language| image:: https://img.shields.io/github/languages/top/repo-helper/pypi-json
:alt: GitHub top language
.. |commits-since| image:: https://img.shields.io/github/commits-since/repo-helper/pypi-json/v0.5.0.post1
:target: https://github.com/repo-helper/pypi-json/pulse
:alt: GitHub commits since tagged version
.. |commits-latest| image:: https://img.shields.io/github/last-commit/repo-helper/pypi-json
:target: https://github.com/repo-helper/pypi-json/commit/master
:alt: GitHub last commit
.. |maintained| image:: https://img.shields.io/maintenance/yes/2026
:alt: Maintenance
.. |pypi-downloads| image:: https://img.shields.io/pypi/dm/pypi-json
:target: https://pypistats.org/packages/pypi-json
:alt: PyPI - Downloads
.. end shields
Installation
--------------
.. start installation
``pypi-json`` can be installed from PyPI or Anaconda.
To install with ``pip``:
.. code-block:: bash
$ python -m pip install pypi-json
To install with ``conda``:
* First add the required channels
.. code-block:: bash
$ conda config --add channels https://conda.anaconda.org/conda-forge
$ conda config --add channels https://conda.anaconda.org/domdfcoding
* Then install
.. code-block:: bash
$ conda install pypi-json
.. end installation
| text/x-rst | null | Dominic Davis-Foster <dominic@davis-foster.co.uk> | null | null | MIT | json, packaging, pypi, warehouse | [
"Development Status :: 4 - Beta",
"Intended Audience :: Developers",
"License :: OSI Approved :: MIT License",
"Operating System :: OS Independent",
"Programming Language :: Python",
"Programming Language :: Python :: 3 :: Only",
"Programming Language :: Python :: 3.7",
"Programming Language :: Python... | [
"Windows"
] | https://github.com/repo-helper/pypi-json | null | >=3.7 | [] | [] | [] | [
"apeye>=1.1.0",
"packaging>=21.0",
"requests>=2.26.0"
] | [] | [] | [] | [
"Issue Tracker, https://github.com/repo-helper/pypi-json/issues",
"Source Code, https://github.com/repo-helper/pypi-json",
"Documentation, https://pypi-json.readthedocs.io/en/latest"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-19T14:25:50.430484 | pypi_json-0.5.0.post1.tar.gz | 9,496 | 79/32/2012f91e72229b5c6aa9959899b79cb91cf8cc246177f638373dfc12d7a2/pypi_json-0.5.0.post1.tar.gz | source | sdist | null | false | 89f31d9262e68bfba1c7586736d86080 | 423d37baa46d3a34689f7be447e72bee8a88f36b49f4d39af1c5ae77b7b171a8 | 79322012f91e72229b5c6aa9959899b79cb91cf8cc246177f638373dfc12d7a2 | null | [] | 7,146 |
2.4 | aieng-eval-agents | 0.2.1 | Agentic AI Evaluation reference implementation from Vector Institute AI Engineering | # aieng-eval-agents
[](https://pypi.org/project/aieng-eval-agents/)
[](https://pypi.org/project/aieng-eval-agents/)
[](https://github.com/VectorInstitute/eval-agents/blob/main/LICENSE)
Shared library for Vector Institute's Agentic AI Evaluation Bootcamp. Provides reusable components for building, running, and evaluating LLM agents with [Google ADK](https://google.github.io/adk-docs/) and [Langfuse](https://langfuse.com/).
## What's included
### Agent implementations
| Module | Description |
|---|---|
| `aieng.agent_evals.knowledge_qa` | ReAct agent that answers questions using live web search. Includes evaluation against the DeepSearchQA benchmark with LLM-as-a-judge metrics (precision/recall/F1). |
| `aieng.agent_evals.aml_investigation` | Agent that investigates Anti-Money Laundering cases by querying a SQLite database of financial transactions via a read-only SQL tool. |
| `aieng.agent_evals.report_generation` | Agent that generates structured Excel reports from a relational database based on natural language queries. |
### Reusable tools (`aieng.agent_evals.tools`)
- `search` — Google Search with response grounding and citations
- `web` — HTML and PDF content fetching
- `file` — Download and search data files (CSV, XLSX, JSON)
- `sql_database` — Read-only SQL database access via `ReadOnlySqlDatabase`
### Evaluation harness (`aieng.agent_evals.evaluation`)
Wrappers around Langfuse for running agent experiments:
- `run_experiment` — Run a dataset through an agent and score outputs
- `run_experiment_with_trace_evals` — Run experiments with trace-level evaluation
- `run_trace_evaluations` — Score existing Langfuse traces with LLM-based or heuristic graders
### Utilities
- `display` — Rich-based terminal and Jupyter display helpers for metrics and agent responses
- `progress` — Progress tracking for batch evaluation runs
- `configs` — Pydantic-based configuration loading from `.env`
- `langfuse` — Langfuse client and trace utilities
- `db_manager` — Database connection management
## Installation
```bash
pip install aieng-eval-agents
```
Requires Python 3.12+.
## Source
Full reference implementations and documentation are in the [eval-agents](https://github.com/VectorInstitute/eval-agents) repository.
| text/markdown | null | Vector AI Engineering <ai_engineering@vectorinstitute.ai> | null | null | null | null | [] | [] | null | null | <4.0,>=3.12 | [] | [] | [] | [
"google-adk>=1.23.0",
"google-genai>=1.52.0",
"gradio>=6.0.2",
"html-to-markdown>=2.24.0",
"httpx>=0.27.0",
"kagglehub>=0.4.1",
"langfuse>=3.10.3",
"openai-agents>=0.6.1",
"openai>=2.8.1",
"openinference-instrumentation-google-adk>=0.1.0",
"opentelemetry-exporter-otlp-proto-http>=1.20.0",
"ope... | [] | [] | [] | [] | twine/6.1.0 CPython/3.13.7 | 2026-02-19T14:25:09.775112 | aieng_eval_agents-0.2.1.tar.gz | 125,089 | 18/93/3ccea44939d94f7b63ae7e989dc185e26ad81a209894b7487180b52e9f14/aieng_eval_agents-0.2.1.tar.gz | source | sdist | null | false | ff96199fd14b524ef3ac60f2da86b901 | 3351bae4ad7d61778088c1b9e31745d867cea1145627f90cc44371ed48bccfbf | 18933ccea44939d94f7b63ae7e989dc185e26ad81a209894b7487180b52e9f14 | MIT | [] | 240 |
2.4 | epics-bridge | 2.1.0 | A generic bridge between EPICS IOCs and Python logic. | # EPICS Bridge


**EPICS Bridge** is a high-availability Python framework designed for implementing a robust EPICS-Python interface. It provides a structured environment for bridging external control logic with the EPICS control system, emphasizing synchronous execution, fault tolerance, and strict process monitoring.
This library addresses common reliability challenges: it prevents silent stalls ("zombie processes") and handles network IO failures deterministically.
## Documentation
Comprehensive project documentation lives in `docs/README.md`.
## System Architecture
The core of `epics-bridge` relies on a **Twin-Thread Architecture** that decouples the control logic from the monitoring signal.
### 1. Synchronous Control Loop (Main Thread)
The primary thread executes the user-defined logic in a strict, synchronous cycle:
1. **Trigger:** Waits for an input event or timer.
2. **Run Task:** Executes the user-defined task.
3. **Acknowledge:** Updates the task status and completes the handshake.
### 2. Isolated Heartbeat Monitor (Daemon Thread)
A separate, isolated thread acts as an internal watchdog. It monitors the activity timestamp of the Main Thread.
* **Operational:** Pulses the `Heartbeat` PV as long as the Main Thread is active.
* **Stalled (Zombie Protection):** If the Main Thread hangs (e.g., infinite loop, deadlocked IO) for longer than the defined tolerance, the Heartbeat thread ceases pulsing immediately. This alerts external watchdogs (e.g., the IOC or alarm handler) that the process is unresponsive.
### 3. Automatic Recovery ("Suicide Pact")
To support containerized environments (Docker, Kubernetes) or systemd supervisors, the daemon implements a fail-fast mechanism. If network connectivity is lost or IO errors persist beyond a configurable threshold (`max_stuck_cycles`), the watchdog performs a hard-kill of the process (`os._exit(1)`). This allows the external supervisor to perform a clean restart of the service.
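The twin-thread watchdog described above can be sketched with the standard library alone; the class name, tolerance, and pulse interval below are illustrative, not epics-bridge's actual API:

```python
import os
import threading
import time


class HeartbeatWatchdog:
    """Illustrative stand-in for the isolated heartbeat monitor."""

    def __init__(self, tolerance_s: float = 2.0) -> None:
        self.tolerance_s = tolerance_s
        self._last_alive = time.monotonic()
        self._lock = threading.Lock()

    def touch(self) -> None:
        # Called by the main control loop at the end of every cycle.
        with self._lock:
            self._last_alive = time.monotonic()

    def is_stalled(self) -> bool:
        # True once the main thread has been silent longer than tolerated.
        with self._lock:
            return (time.monotonic() - self._last_alive) > self.tolerance_s

    def monitor(self, pulse, interval_s: float = 0.5) -> None:
        # Daemon-thread body: pulse the Heartbeat PV only while the main
        # thread is alive, and fail fast so an external supervisor
        # (systemd, Kubernetes) can restart the process.
        while True:
            if self.is_stalled():
                os._exit(1)
            pulse()
            time.sleep(interval_s)
```

The daemon thread would be started with `threading.Thread(target=wd.monitor, args=(pulse_pv,), daemon=True).start()`, while the main loop calls `wd.touch()` once per cycle.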
### 4. Logger
Writes important messages from the daemon shell to a configured log file.
## Installation
```bash
# Install the package
pip install .
# Install test dependencies
pip install -r requirements-test.txt
```
### Conda environment (recommended for integration tests)
Integration tests run a real IOC and require EPICS tooling. A working reference
environment is provided in `environment.yml`.
```bash
conda env create -f environment.yml
conda activate epics-bridge
pip install -e .
```
## Project Structure
- **epics_bridge.daemon**
Main control loop, heartbeat logic, and failure handling
- **epics_bridge.io**
Synchronous P4P client wrapper with strict error handling
- **epics_bridge.base_pv_interface**
PV template definitions and prefix validation
- **epics_bridge.utils**
Utilities for converting P4P data into native Python types
---
## Quick Start
### 1. EPICS Interface
There should be one standard EPICS db that handles the basic functionality of the daemon, plus any number of specialized dbs that implement the intended functionality.
The standard db should always be loaded by the IOC that interfaces with the daemon.
These are its contents:
```epics
record(bo, "$(P)Trigger") {
field(DESC, "Start Task")
field(ZNAM, "Idle")
field(ONAM, "Run")
}
record(bi, "$(P)Busy") {
field(DESC, "Task Running Status")
field(ZNAM, "Idle")
field(ONAM, "Busy")
}
record(bi, "$(P)Heartbeat") {
field(DESC, "Daemon Heartbeat")
}
record(mbbi, "$(P)TaskStatus") {
field(DESC, "Last Cycle Result")
field(DTYP, "Raw Soft Channel")
# State 0: Success (Green)
field(ZRVL, "0")
field(ZRST, "Success")
field(ZRSV, "NO_ALARM")
# State 1: Logic Failure (Yellow - e.g. Interlock)
field(ONVL, "1")
field(ONST, "Task Fail")
field(ONSV, "MINOR")
# State 2: EPICS IO Failure (Yellow - e.g. PV Read/Write Error)
field(TWVL, "2")
field(TWST, "IO Failure")
field(TWSV, "MINOR")
# State 3: Exception (Red - Software/Hardware Crash)
field(THVL, "3")
field(THST, "Code Crash")
field(THSV, "MAJOR")
}
record(ai, "$(P)TaskDuration") {
field(DESC, "Task duration")
field(PREC, "2")
field(EGU, "s")
}
```
### 2. Define a Python PV Interface
Use a dataclass to define EPICS PV templates.
Standard PVs (trigger, busy, heartbeat, task_status) are provided automatically.
```python
from dataclasses import dataclass
from epics_bridge.base_pv_interface import BasePVInterface
@dataclass
class MotorInterface(BasePVInterface):
position_rbv: str = "{main}Pos:RBV"
velocity_sp: str = "{main}Vel:SP"
temperature: str = "{sys}Temp:Mon"
```
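The `{main}` / `{sys}` placeholders are presumably filled from the `prefixes` mapping passed to the interface, along the lines of `str.format` (a sketch of the mechanism, not the library's code):

```python
# Sketch: expanding a PV template like "{main}Pos:RBV" with a prefix map.
prefixes = {"main": "IOC:MOTOR:01:", "sys": "IOC:SYS:"}

template = "{main}Pos:RBV"
pv_name = template.format(**prefixes)
print(pv_name)  # IOC:MOTOR:01:Pos:RBV
```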
### 3. Implement Control Logic
Subclass `BridgeDaemon` and implement the synchronous `run_task()` method.
```python
from epics_bridge.daemon import BridgeDaemon, TaskStatus
class MotorControlDaemon(BridgeDaemon):
def run_task(self) -> TaskStatus:
velocity = self.io.pvget(self.interface.velocity_sp)
if velocity is None:
return TaskStatus.IO_FAILURE
new_position = velocity * 0.5
self.io.pvput({
self.interface.position_rbv: new_position
})
return TaskStatus.SUCCESS
```
### 4. Run the Daemon
```python
def main():
prefixes = {
"main": "IOC:MOTOR:01:",
"sys": "IOC:SYS:"
}
interface = MotorInterface(prefixes=prefixes)
daemon = MotorControlDaemon(
interface=interface,
)
daemon.start()
if __name__ == "__main__":
main()
```
## Example: Echo daemon (IOC + daemon)
This repository includes a complete example under `examples/echo_daemon/`:
- `st.cmd`: IOC startup script (loads `examples/base_interface.db` + echo-specific DBs)
- `echo_interface.py`: PV interface dataclass
- `echo_daemon.py`: example `BridgeDaemon` subclass
- `main.py`: runnable entrypoint which sets up logging and starts the daemon
Typical workflow (requires EPICS + pvxs tooling; easiest via `environment.yml`):
```bash
# Terminal A: start IOC
run-iocsh examples/echo_daemon/st.cmd
# Terminal B: start daemon and write logs to a file
python examples/echo_daemon/main.py /tmp/echo-daemon.log
```
## Testing
```bash
# Unit tests (pure Python)
pytest -q
# Integration tests (IOC + daemon)
pytest -m integration -v
```
| text/markdown | null | Hugo Valim <hugo.valim@ess.eu> | null | null | null | epics, p4p, control-system, daemon, ioc, process-variable | [
"Development Status :: 5 - Production/Stable",
"Intended Audience :: Science/Research",
"Operating System :: OS Independent",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3 :: Only",
"Programming Language :: Python :: 3.9",
"Programming Language :: Python :: 3.10",
"Progra... | [] | null | null | >=3.9 | [] | [] | [] | [
"numpy",
"p4p"
] | [] | [] | [] | [
"Homepage, https://gitlab.esss.lu.se/hugovalim/epics-bridge",
"Repository, https://gitlab.esss.lu.se/hugovalim/epics-bridge",
"Issues, https://gitlab.esss.lu.se/hugovalim/epics-bridge/-/issues"
] | twine/6.2.0 CPython/3.13.12 | 2026-02-19T14:25:01.764053 | epics_bridge-2.1.0.tar.gz | 18,514 | ab/ef/a934865fad82ddd03fa4467d245e4f2d9e75d511d236e237c8937348d2e3/epics_bridge-2.1.0.tar.gz | source | sdist | null | false | 7f2a30736e9fcd07a126bdbf6d67e759 | 6179830d515d287ec58bfbaf8ad059b146b2109adee7307015316f75bef2ab6c | abefa934865fad82ddd03fa4467d245e4f2d9e75d511d236e237c8937348d2e3 | MIT | [
"LICENSE"
] | 267 |
2.4 | python-ort | 0.6.3 | A Python Ort model serialization library | # Python-Ort
Python-Ort is a pydantic-based library for serializing reports generated by the OSS Review Toolkit (ORT) using the default models.
| text/markdown | null | null | null | null | null | null | [
"Development Status :: 3 - Alpha",
"Operating System :: OS Independent",
"Programming Language :: Python :: 3 :: Only",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: 3.13",
"Programming L... | [] | null | null | >=3.10 | [] | [] | [] | [
"pydantic>=2.12.5"
] | [] | [] | [] | [] | uv/0.10.4 {"installer":{"name":"uv","version":"0.10.4","subcommand":["publish"]},"python":null,"implementation":{"name":null,"version":null},"distro":{"name":"Ubuntu","version":"24.04","id":"noble","libc":null},"system":{"name":null,"release":null},"cpu":null,"openssl_version":null,"setuptools_version":null,"rustc_version":null,"ci":true} | 2026-02-19T14:24:25.115886 | python_ort-0.6.3-py3-none-any.whl | 51,861 | dd/6d/d6a95db758992e28819d1faa906a5a5af19bb4b36e35d7fa504ddec4df3e/python_ort-0.6.3-py3-none-any.whl | py3 | bdist_wheel | null | false | 55b04b3361887afdb60505186fd2451b | a830ceb0fb26317ad889998f582e938bbc2ed661ade676432b89110011c50001 | dd6dd6a95db758992e28819d1faa906a5a5af19bb4b36e35d7fa504ddec4df3e | MIT | [
"LICENSE"
] | 229 |
2.3 | qwak-core | 0.7.29 | Qwak Core contains the necessary objects and communication tools for using the Qwak Platform | # Qwak Core
Qwak is an end-to-end production ML platform designed to allow data scientists to build, deploy, and monitor their models in production with minimal engineering friction.
Qwak Core contains all the objects and tools necessary to use the Qwak Platform.
| text/markdown | Qwak | info@qwak.com | null | null | Apache-2.0 | mlops, ml, deployment, serving, model | [
"License :: OSI Approved :: Apache Software License",
"Operating System :: OS Independent",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.9",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: Implementatio... | [] | null | null | <3.12,>=3.9 | [] | [] | [] | [
"python-json-logger>=2.0.2",
"grpcio>=1.71.2",
"protobuf<5,>=4.25.8",
"dependency-injector>=4.0",
"requests",
"python-jose[cryptography]>=3.4.0",
"PyYAML>=6.0.2",
"filelock",
"marshmallow-dataclass<9.0.0,>=8.5.8",
"typeguard<3,>=2",
"joblib<2.0.0,>=1.3.2",
"pyspark==3.5.7; extra == \"feature-s... | [] | [] | [] | [
"Home page, https://www.qwak.com/"
] | poetry/2.1.3 CPython/3.9.25 Linux/6.12.66-88.122.amzn2023.x86_64 | 2026-02-19T14:24:11.468479 | qwak_core-0.7.29.tar.gz | 724,191 | 64/a5/94d09b5f0d90b11df6553123f107f81afde832563fd5fc4815cd04355a1d/qwak_core-0.7.29.tar.gz | source | sdist | null | false | 2d10145d11a96aec2bc3e45b7560b330 | 33a93e499dcc8b9ba69796758fc403493fbb8fba916dc4818ed16512e7cf7dc8 | 64a594d09b5f0d90b11df6553123f107f81afde832563fd5fc4815cd04355a1d | null | [] | 427 |
2.4 | boj-mcp | 0.2.1 | BOJ statistics MCP tool - Access BOJ statistics via the official REST API | # boj-mcp
[](https://www.python.org/downloads/)
[](LICENSE)
[](https://pypi.org/project/boj-mcp/)
Bank of Japan statistics MCP tool — powered by the official BOJ REST API.
## What is this?
**boj-mcp** provides instant access to Bank of Japan statistical data via the [official BOJ REST API](https://www.stat-search.boj.or.jp/api/v1/) (launched February 2026).
- **34 databases** covering interest rates, financial markets, prices, TANKAN, flow of funds, and more
- **No API key required** — all data is publicly available
- **MCP server** for AI assistant integration (Claude Desktop, etc.)
- **Python client** for programmatic access
- Part of the [Japan Finance Data Stack](https://github.com/ajtgjmdjp/awesome-japan-finance-data)
## Quick Start
```bash
pip install boj-mcp
# or
uv add boj-mcp
```
### 30-Second Example
```python
import asyncio
from boj_mcp import BojClient
async def main():
async with BojClient() as client:
# Search for series
series = await client.search_series("call rate", db="IR01")
print(series[0].series_code) # e.g. "MADR1M"
# Fetch time series data
data = await client.get_series_data("IR01", "MADR1M")
print(data.latest) # BojObservation(date='202601', value=1.0)
asyncio.run(main())
```
### CLI
```bash
# List all databases
boj-mcp databases
# Search for series
boj-mcp search "call rate"
boj-mcp search "producer price" --db PR01
# Fetch time series data
boj-mcp data IR01 MADR1M
boj-mcp data IR01 MADR1M --start 202001 --end 202601 --format json
# Test connectivity
boj-mcp test
# Start MCP server
boj-mcp serve
```
## MCP Server
Add to Claude Desktop config:
```json
{
"mcpServers": {
"boj": {
"command": "uvx",
"args": ["boj-mcp", "serve"]
}
}
}
```
### Available MCP Tools
| Tool | Description |
|------|-------------|
| `list_databases` | List all 34 available BOJ databases |
| `search_series` | Search for time series by keyword |
| `get_series_data` | Fetch time series data by database + series code |
| `get_database_info` | Browse all series in a database |
## Available Databases
| Code | Description | Category |
|------|-------------|----------|
| IR01 | Basic Discount/Loan Rates and Overnight Call Rates | Interest Rates |
| IR02 | Interest Rates on Deposits and Lending | Interest Rates |
| IR03 | Market Interest Rates | Interest Rates |
| IR04 | International Interest Rates and Financial Indicators | Interest Rates |
| FM01 | Foreign Exchange Rates | Financial Markets |
| FM02 | Stock Price Indexes | Financial Markets |
| FM03–FM09 | Bond, Money Market, Commodities, Derivatives, REITs… | Financial Markets |
| MD01 | Money Stock | Money and Banking |
| MD02–MD14 | Deposits, Loans, Financial Statements… | Money and Banking |
| PR01 | Corporate Goods Price Index (CGPI) | Prices |
| PR02 | Services Producer Price Index (SPPI) | Prices |
| PR03 | Import/Export Price Index | Prices |
| PR04 | Residential Land Prices | Prices |
| CO | TANKAN: Short-term Economic Survey | Surveys |
| FF | Flow of Funds Accounts | Flow of Funds |
| BP01 | Balance of Payments | External Accounts |
## Python API
```python
from boj_mcp import BojClient
async with BojClient() as client:
# List all databases (no network call)
dbs = client.list_databases()
# Browse a database (up to 500 series)
series = await client.get_metadata("PR01", max_items=50)
# Search across databases
results = await client.search_series("yen", db="FM01")
# Fetch time series with date filter
data = await client.get_series_data(
"IR01", "MADR1M",
start_date="202001",
end_date="202601",
)
for obs in data.observations[-6:]:
print(f"{obs.date}: {obs.value}%")
```
## Data Attribution
This project uses data from the [Bank of Japan](https://www.boj.or.jp/) (日本銀行).
**Source**: Bank of Japan, Statistical Data ([https://www.stat-search.boj.or.jp/](https://www.stat-search.boj.or.jp/))
> このサービスは、日本銀行時系列統計データ検索サイトのAPI機能を使用しています。サービスの内容は日本銀行によって保証されたものではありません。
>
> This service uses the API of the Bank of Japan's Time-Series Data Search Site. The content of this service is not guaranteed by the Bank of Japan.
> **Disclaimer**: This project is neither officially related to nor endorsed by the Bank of Japan.
## Related Projects
- [edinet-mcp](https://github.com/ajtgjmdjp/edinet-mcp) — EDINET corporate financial data
- [estat-mcp](https://github.com/ajtgjmdjp/estat-mcp) — e-Stat government statistics
- [stockprice-mcp](https://github.com/ajtgjmdjp/stockprice-mcp) — Stock prices & FX rates (Yahoo Finance)
- [awesome-japan-finance-data](https://github.com/ajtgjmdjp/awesome-japan-finance-data) — Japan Finance Data Stack
## License
Apache-2.0
<!-- mcp-name: io.github.ajtgjmdjp/boj-mcp -->
| text/markdown | null | null | null | null | null | bank-of-japan, boj, japan, macro-economics, mcp, statistics | [
"Development Status :: 3 - Alpha",
"Intended Audience :: Financial and Insurance Industry",
"Intended Audience :: Science/Research",
"License :: OSI Approved :: Apache Software License",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3... | [] | null | null | >=3.10 | [] | [] | [] | [
"click>=8.0",
"fastmcp<3.0,>=2.0",
"httpx>=0.27",
"loguru>=0.7",
"pydantic>=2.0",
"mypy>=1.10; extra == \"dev\"",
"pytest-asyncio>=0.23; extra == \"dev\"",
"pytest-cov>=5.0; extra == \"dev\"",
"pytest>=8.0; extra == \"dev\"",
"ruff>=0.4; extra == \"dev\""
] | [] | [] | [] | [
"Homepage, https://github.com/ajtgjmdjp/boj-mcp",
"Repository, https://github.com/ajtgjmdjp/boj-mcp",
"Issues, https://github.com/ajtgjmdjp/boj-mcp/issues"
] | uv/0.10.0 {"installer":{"name":"uv","version":"0.10.0","subcommand":["publish"]},"python":null,"implementation":{"name":null,"version":null},"distro":{"name":"macOS","version":null,"id":null,"libc":null},"system":{"name":null,"release":null},"cpu":null,"openssl_version":null,"setuptools_version":null,"rustc_version":null,"ci":null} | 2026-02-19T14:23:48.046362 | boj_mcp-0.2.1-py3-none-any.whl | 19,111 | 9a/91/11be830bdebfef09a9c329b473c8bc7900919e796ffa1da583f94ca6acd6/boj_mcp-0.2.1-py3-none-any.whl | py3 | bdist_wheel | null | false | 6c85bcba4cb7d6aa1293db62dc64edb1 | 35cb91da52435e9bf9eaf3e8953bfd05c2113dbe0093e28c4a9aab17783779e7 | 9a9111be830bdebfef09a9c329b473c8bc7900919e796ffa1da583f94ca6acd6 | Apache-2.0 | [
"LICENSE"
] | 177 |
2.4 | erbsland-ansi-convert | 0.9.1 | ANSI terminal emulation and conversion to ANSI, HTML, and plain text | # ANSI terminal emulation and conversion to ANSI, HTML, and plain text
`erbsland-ansi-convert` is a Python 3.10+ library that simulates an ANSI terminal and converts the resulting terminal history into:
- Plain text
- Compact ANSI text
- Compact HTML with CSS classes
It is designed to process terminal output from tools that emit ANSI escape sequences and render a static representation close to what users see in a real terminal.
## Installation
```shell
pip install erbsland-ansi-convert
```
## Quick Start
```python
from pathlib import Path
from erbsland.ansi_convert import Terminal
terminal = Terminal(width=120, height=40, back_buffer_height=2000)
terminal.write(Path("path/to/output.log").read_text())
Path("out.txt").write_text(terminal.to_text())
Path("out.ansi.txt").write_text(terminal.to_ansi())
Path("out.visible-esc.txt").write_text(terminal.to_ansi(esc_char="␛"))
Path("out.html").write_text(terminal.to_html(class_prefix="my-ansi"))
```
For terminal capture files, prefer `writeFile()` to preserve carriage returns:
```python
terminal = Terminal()
terminal.writeFile("capture-via-script-cmd.txt")
```
If your input text has already had carriage returns normalized to newlines, you can enable
the progress-line collapse heuristic on `write()`:
```python
terminal.write(text, collapse_capture_updates=True)
```
## Command Line
Installing the package also installs the `erbsland-ansi-convert` command:
```shell
erbsland-ansi-convert [-f/--format (text|ansi|html)] [-c|--collapse] [-C|--no-collapse] [-o/--output <file>] [<input file>|-]
```
- `-f`, `--format`: output format (`text`, `ansi`, `html`), default `ansi`
- `-o`, `--output`: write output to file, default stdout
- `-c`, `--collapse`: enable collapse heuristic
- `-C`, `--no-collapse`: disable collapse heuristic
- `<input file>`: input capture file, or `-` for stdin
## Supported Terminal Features
- Control characters: `BEL`, `BS`, `HT`, `LF`, `VT`, `FF`, `CR`, `DEL`
- Cursor controls (`CSI A/B/C/D/E/F/G/H/f`, `ESC M`, save/restore variants)
- Erase functions (`CSI J`, `CSI K` including mode `3J` for saved lines)
- SGR text styles (`bold`, `dim`, `italic`, `underline`, `blink`, `reverse`, `hidden`, `strike`)
- The 8 basic and 8 bright ANSI colors for foreground and background
- Common private modes: cursor visibility (`?25`) and alternate buffer (`?47`, `?1049`)
Unknown sequences can optionally be reported as warnings.
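To illustrate the cursor handling listed above, here is a tiny stand-alone sketch (conceptual only, not the library's implementation) of how `CR` and `BS` rewrite a single line buffer, which is what makes progress bars collapse into their final state:

```python
# Minimal conceptual sketch (NOT erbsland-ansi-convert's code) of how a
# terminal collapses CR and BS into the final visible line.
def render_line(raw: str) -> str:
    buf: list[str] = []
    col = 0  # cursor column
    for ch in raw:
        if ch == "\r":        # CR: move the cursor back to column 0
            col = 0
        elif ch == "\b":      # BS: move the cursor one column left
            col = max(0, col - 1)
        else:                 # printable: overwrite at the cursor, advance
            if col < len(buf):
                buf[col] = ch
            else:
                buf.append(ch)
            col += 1
    return "".join(buf)

print(render_line("10%\r20%\r100%"))  # -> 100%
```

Successive progress updates separated by `\r` overwrite each other, so only the last state survives in the rendered history.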
## HTML Output Model
`to_html()` produces:
```html
<div class="{prefix}-block"><pre>...</pre></div>
```
Text styles are represented with flat span classes such as:
- `{prefix}-bold`
- `{prefix}-red`
- `{prefix}-background-blue`
Spans are opened only when the style actually changes, which keeps the output compact and easy to style.
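As a rough sketch of this flat-span model (a hypothetical helper, not the library's `to_html()` implementation), each styled run becomes one span carrying prefixed class names:

```python
import html

# Hypothetical sketch of the flat-span HTML model described above; the
# real to_html() output may differ in detail.
def runs_to_html(runs: list[tuple[list[str], str]], prefix: str = "my-ansi") -> str:
    parts = []
    for styles, text in runs:
        escaped = html.escape(text)
        if styles:
            classes = " ".join(f"{prefix}-{s}" for s in styles)
            parts.append(f'<span class="{classes}">{escaped}</span>')
        else:
            parts.append(escaped)  # unstyled text needs no span
    return f'<div class="{prefix}-block"><pre>{"".join(parts)}</pre></div>'

print(runs_to_html([(["bold", "red"], "ERROR"), ([], ": disk full")]))
```

Adjacent runs with identical styles would be merged before emitting, which is what keeps the generated HTML compact.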
## Development
Install development dependencies:
```shell
pip install -r requirements-dev.txt
```
Run tests:
```shell
pytest
```
## License
Copyright (c) 2026 Tobias Erbsland / Erbsland DEV (<https://erbsland.dev>)
Licensed under the Apache License, Version 2.0.
See `LICENSE` for details.
| text/markdown | null | Tobias Erbsland / Erbsland DEV <info@erbsland.dev> | null | null | Apache License
Version 2.0, January 2004
http://www.apache.org/licenses/
TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
1. Definitions.
"License" shall mean the terms and conditions for use, reproduction,
and distribution as defined by Sections 1 through 9 of this document.
"Licensor" shall mean the copyright owner or entity authorized by
the copyright owner that is granting the License.
"Legal Entity" shall mean the union of the acting entity and all
other entities that control, are controlled by, or are under common
control with that entity. For the purposes of this definition,
"control" means (i) the power, direct or indirect, to cause the
direction or management of such entity, whether by contract or
otherwise, or (ii) ownership of fifty percent (50%) or more of the
outstanding shares, or (iii) beneficial ownership of such entity.
"You" (or "Your") shall mean an individual or Legal Entity
exercising permissions granted by this License.
"Source" form shall mean the preferred form for making modifications,
including but not limited to software source code, documentation
source, and configuration files.
"Object" form shall mean any form resulting from mechanical
transformation or translation of a Source form, including but
not limited to compiled object code, generated documentation,
and conversions to other media types.
"Work" shall mean the work of authorship, whether in Source or
Object form, made available under the License, as indicated by a
copyright notice that is included in or attached to the work
(an example is provided in the Appendix below).
"Derivative Works" shall mean any work, whether in Source or Object
form, that is based on (or derived from) the Work and for which the
editorial revisions, annotations, elaborations, or other modifications
represent, as a whole, an original work of authorship. For the purposes
of this License, Derivative Works shall not include works that remain
separable from, or merely link (or bind by name) to the interfaces of,
the Work and Derivative Works thereof.
"Contribution" shall mean any work of authorship, including
the original version of the Work and any modifications or additions
to that Work or Derivative Works thereof, that is intentionally
submitted to Licensor for inclusion in the Work by the copyright owner
or by an individual or Legal Entity authorized to submit on behalf of
the copyright owner. For the purposes of this definition, "submitted"
means any form of electronic, verbal, or written communication sent
to the Licensor or its representatives, including but not limited to
communication on electronic mailing lists, source code control systems,
and issue tracking systems that are managed by, or on behalf of, the
Licensor for the purpose of discussing and improving the Work, but
excluding communication that is conspicuously marked or otherwise
designated in writing by the copyright owner as "Not a Contribution."
"Contributor" shall mean Licensor and any individual or Legal Entity
on behalf of whom a Contribution has been received by Licensor and
subsequently incorporated within the Work.
2. Grant of Copyright License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
copyright license to reproduce, prepare Derivative Works of,
publicly display, publicly perform, sublicense, and distribute the
Work and such Derivative Works in Source or Object form.
3. Grant of Patent License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
(except as stated in this section) patent license to make, have made,
use, offer to sell, sell, import, and otherwise transfer the Work,
where such license applies only to those patent claims licensable
by such Contributor that are necessarily infringed by their
Contribution(s) alone or by combination of their Contribution(s)
with the Work to which such Contribution(s) was submitted. If You
institute patent litigation against any entity (including a
cross-claim or counterclaim in a lawsuit) alleging that the Work
or a Contribution incorporated within the Work constitutes direct
or contributory patent infringement, then any patent licenses
granted to You under this License for that Work shall terminate
as of the date such litigation is filed.
4. Redistribution. You may reproduce and distribute copies of the
Work or Derivative Works thereof in any medium, with or without
modifications, and in Source or Object form, provided that You
meet the following conditions:
(a) You must give any other recipients of the Work or
Derivative Works a copy of this License; and
(b) You must cause any modified files to carry prominent notices
stating that You changed the files; and
(c) You must retain, in the Source form of any Derivative Works
that You distribute, all copyright, patent, trademark, and
attribution notices from the Source form of the Work,
excluding those notices that do not pertain to any part of
the Derivative Works; and
(d) If the Work includes a "NOTICE" text file as part of its
distribution, then any Derivative Works that You distribute must
include a readable copy of the attribution notices contained
within such NOTICE file, excluding those notices that do not
pertain to any part of the Derivative Works, in at least one
of the following places: within a NOTICE text file distributed
as part of the Derivative Works; within the Source form or
documentation, if provided along with the Derivative Works; or,
within a display generated by the Derivative Works, if and
wherever such third-party notices normally appear. The contents
of the NOTICE file are for informational purposes only and
do not modify the License. You may add Your own attribution
notices within Derivative Works that You distribute, alongside
or as an addendum to the NOTICE text from the Work, provided
that such additional attribution notices cannot be construed
as modifying the License.
You may add Your own copyright statement to Your modifications and
may provide additional or different license terms and conditions
for use, reproduction, or distribution of Your modifications, or
for any such Derivative Works as a whole, provided Your use,
reproduction, and distribution of the Work otherwise complies with
the conditions stated in this License.
5. Submission of Contributions. Unless You explicitly state otherwise,
any Contribution intentionally submitted for inclusion in the Work
by You to the Licensor shall be under the terms and conditions of
this License, without any additional terms or conditions.
Notwithstanding the above, nothing herein shall supersede or modify
the terms of any separate license agreement you may have executed
with Licensor regarding such Contributions.
6. Trademarks. This License does not grant permission to use the trade
names, trademarks, service marks, or product names of the Licensor,
except as required for reasonable and customary use in describing the
origin of the Work and reproducing the content of the NOTICE file.
7. Disclaimer of Warranty. Unless required by applicable law or
agreed to in writing, Licensor provides the Work (and each
Contributor provides its Contributions) on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
implied, including, without limitation, any warranties or conditions
of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
PARTICULAR PURPOSE. You are solely responsible for determining the
appropriateness of using or redistributing the Work and assume any
risks associated with Your exercise of permissions under this License.
8. Limitation of Liability. In no event and under no legal theory,
whether in tort (including negligence), contract, or otherwise,
unless required by applicable law (such as deliberate and grossly
negligent acts) or agreed to in writing, shall any Contributor be
liable to You for damages, including any direct, indirect, special,
incidental, or consequential damages of any character arising as a
result of this License or out of the use or inability to use the
Work (including but not limited to damages for loss of goodwill,
work stoppage, computer failure or malfunction, or any and all
other commercial damages or losses), even if such Contributor
has been advised of the possibility of such damages.
9. Accepting Warranty or Additional Liability. While redistributing
the Work or Derivative Works thereof, You may choose to offer,
and charge a fee for, acceptance of support, warranty, indemnity,
or other liability obligations and/or rights consistent with this
License. However, in accepting such obligations, You may act only
on Your own behalf and on Your sole responsibility, not on behalf
of any other Contributor, and only if You agree to indemnify,
defend, and hold each Contributor harmless for any liability
incurred by, or claims asserted against, such Contributor by reason
of your accepting any such warranty or additional liability.
END OF TERMS AND CONDITIONS
APPENDIX: How to apply the Apache License to your work.
To apply the Apache License to your work, attach the following
boilerplate notice, with the fields enclosed by brackets "[]"
replaced with your own identifying information. (Don't include
the brackets!) The text should be enclosed in the appropriate
comment syntax for the file format. We also recommend that a
file or class name and description of purpose be included on the
same "printed page" as the copyright notice for easier
identification within third-party archives.
Copyright [yyyy] [name of copyright owner]
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License. | ansi, converter, emulator, html, terminal | [
"Development Status :: 4 - Beta",
"Intended Audience :: Developers",
"License :: OSI Approved :: Apache Software License",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programming L... | [] | null | null | >=3.10 | [] | [] | [] | [] | [] | [] | [] | [
"Homepage, https://github.com/erbsland-dev/erbsland-py-ansi-convert",
"Issues, https://github.com/erbsland-dev/erbsland-py-ansi-convert/issues",
"Documentation, https://ansi-convert.erbsland.dev/"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-19T14:23:42.477435 | erbsland_ansi_convert-0.9.1.tar.gz | 31,334 | 68/cd/1ff8edcc68ba7652499ef358367ce55b893343fbaada8c3d1cac9e439bd8/erbsland_ansi_convert-0.9.1.tar.gz | source | sdist | null | false | 36b24dc541747312d46c9bfa6b61aa58 | 7d7e5431c735ccdee3f55937773e3833eb7e20775629b31924b1c06c0d1bdf21 | 68cd1ff8edcc68ba7652499ef358367ce55b893343fbaada8c3d1cac9e439bd8 | null | [
"LICENSE"
] | 248 |
2.4 | nia-etl-utils | 0.5.1 | Shared utilities for NIA/MPRJ ETL pipelines | # NIA Utility Modules
## Overview
A centralized Python library of shared utilities for NIA/MPRJ ETL pipelines. It consolidates reusable functions for environment configuration, email notifications, database connections, standardized logging, and data processing.
Built to eliminate code duplication, standardize best practices, and simplify maintenance across all NIA data engineering projects.
---
## What's New in v0.5.1
- **Start + end truncation** — the new `truncar_inicio_fim` parameter of the embedding functions truncates long texts while preserving both the beginning and the end of the content
- **EmbeddingTruncadoError exception** — dedicated handling for errors in embeddings truncated with the start+end method
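The start+end idea behind `truncar_inicio_fim` can be sketched conceptually as follows (illustrative only; the library itself counts real tokens with tiktoken, plain list items are used here):

```python
# Conceptual sketch of start+end truncation (NOT the library's code).
# Keeps the first part and the last part of a token sequence so that
# both the opening and the conclusion of a document survive truncation.
def truncate_start_end(tokens: list[str], max_tokens: int) -> list[str]:
    if len(tokens) <= max_tokens:
        return tokens
    head = max_tokens // 2       # first half of the budget from the start
    tail = max_tokens - head     # remaining budget from the end
    return tokens[:head] + tokens[-tail:]

print(truncate_start_end(list("abcdefghij"), 4))  # -> ['a', 'b', 'i', 'j']
```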
## What's New in v0.5.0
- **Automatic token truncation** — `max_tokens` parameter in the embedding functions to truncate texts that exceed the model's limit (default: 8191 tokens)
- **tiktoken dependency** — precise token counting using OpenAI's official library
## What's New in v0.4.1
- **Batch embedding** (`gerar_embedding_openai_batch`) — generates embeddings for multiple texts in a single API call, more efficient than individual calls
## What's New in v0.4.0
- **Embedding module** (`gerar_embedding_openai`) — vector embedding generation via Azure OpenAI with automatic retry
- **Embedding exceptions** (`EmbeddingError`, `EmbeddingTimeoutError`) — granular handling of embedding errors
- **Oracle migration** — cx_Oracle replaced with oracledb for Python 3.13+ compatibility
## What's New in v0.2.2
- **OCR module** (`executar_ocr`) — OCR processing via the IntelliDoc API with support for PDF, images, and automatic format detection
- **OCR exceptions** (`OcrError`, `OcrSubmissaoError`, `OcrProcessamentoError`, `OcrTimeoutError`) — granular handling of OCR errors
## What's New in v0.2.0
- **Configuration dataclasses** (`PostgresConfig`, `OracleConfig`, `SmtpConfig`, `LogConfig`) — immutable, type-safe configuration
- **Hierarchical custom exceptions** — more precise, informative error handling
- **Result dataclasses** (`Conexao`, `ResultadoExtracao`, `ResultadoLote`, `ResultadoEmail`)
- **Additional env functions** — `obter_variavel_env_int`, `obter_variavel_env_bool`, `obter_variavel_env_lista`
- **Context managers** for database connections — automatic, safe closing
---
## Project Structure
```plaintext
.
├── src/
│   └── nia_etl_utils/
│       ├── __init__.py               # Exports the main functions
│       ├── config.py                 # Configuration dataclasses
│       ├── exceptions.py             # Custom exceptions
│       ├── results.py                # Result dataclasses
│       ├── env_config.py             # Environment-variable management
│       ├── email_smtp.py             # Email sending via SMTP
│       ├── database.py               # PostgreSQL and Oracle connections
│       ├── logger_config.py          # Logging configuration with Loguru
│       ├── processa_csv.py           # CSV processing and export
│       ├── processa_csv_paralelo.py  # Parallel processing of large CSVs
│       ├── limpeza_pastas.py         # File and directory handling
│       ├── ocr.py                    # OCR via the IntelliDoc API
│       └── embedding.py              # Embeddings via Azure OpenAI
│
├── tests/                            # Unit tests
├── .gitlab-ci.yml                    # CI/CD pipeline
├── pyproject.toml                    # Package configuration
└── README.md
```
---
## Available Modules
### 1. Environment Configuration (`env_config.py`)
Robust environment-variable management with validation and typing.
```python
from nia_etl_utils import (
    obter_variavel_env,
    obter_variavel_env_int,
    obter_variavel_env_bool,
    obter_variavel_env_lista
)
# Required string (fails with sys.exit(1) if missing)
db_host = obter_variavel_env('DB_POSTGRESQL_HOST')
# Optional string with a fallback
porta = obter_variavel_env('DB_PORT', default='5432')
# Integer
max_conexoes = obter_variavel_env_int('MAX_CONEXOES', default=10)
# Boolean (accepts: true/false, 1/0, yes/no, on/off)
debug = obter_variavel_env_bool('DEBUG_MODE', default=False)
# List (comma-separated)
destinatarios = obter_variavel_env_lista('EMAIL_DESTINATARIOS')
# ['email1@mprj.mp.br', 'email2@mprj.mp.br']
```
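The boolean spellings accepted above can be illustrated with a small stand-alone parser (a sketch of the idea, not the library's exact implementation):

```python
import os

# Illustrative parser for the boolean spellings listed above
# (true/false, 1/0, yes/no, on/off); NOT nia-etl-utils' actual code.
_TRUE = {"true", "1", "yes", "on"}
_FALSE = {"false", "0", "no", "off"}

def parse_env_bool(name: str, default: bool = False) -> bool:
    raw = os.environ.get(name)
    if raw is None:
        return default            # variable absent: use the fallback
    value = raw.strip().lower()   # spellings are case-insensitive
    if value in _TRUE:
        return True
    if value in _FALSE:
        return False
    raise ValueError(f"{name}: unrecognized boolean value {raw!r}")

os.environ["DEBUG_MODE"] = "Yes"
print(parse_env_bool("DEBUG_MODE"))  # -> True
```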
---
### 2. Configuration Dataclasses (`config.py`)
Immutable, type-safe configuration with factory methods.
```python
from nia_etl_utils import PostgresConfig, OracleConfig, SmtpConfig, LogConfig
# Explicit configuration (recommended for tests)
config = PostgresConfig(
    host="localhost",
    port="5432",
    database="teste",
    user="user",
    password="pass"
)
# Configuration from environment variables (recommended for production)
config = PostgresConfig.from_env()            # uses DB_POSTGRESQL_*
config = PostgresConfig.from_env("_OPENGEO")  # uses DB_POSTGRESQL_*_OPENGEO
# Connection string for SQLAlchemy
print(config.connection_string)
# postgresql+psycopg2://user:pass@localhost:5432/teste
# Oracle
oracle_config = OracleConfig.from_env()
# SMTP
smtp_config = SmtpConfig.from_env()
# Logging with NIA defaults
log_config = LogConfig.padrao_nia("meu_pipeline")
```
---
### 3. Custom Exceptions (`exceptions.py`)
An exception hierarchy for precise error handling.
```python
from nia_etl_utils import (
    # Base
    NiaEtlError,
    # Configuration
    ConfiguracaoError,
    VariavelAmbienteError,
    # Database
    DatabaseError,
    ConexaoError,
    # Files
    ArquivoError,
    DiretorioError,
    EscritaArquivoError,
    LeituraArquivoError,
    # Extraction
    ExtracaoError,
    ExtracaoVaziaError,
    ProcessamentoError,
    # Email
    EmailError,
    DestinatarioError,
    SmtpError,
    # Embedding
    EmbeddingError,
    EmbeddingTimeoutError,
    EmbeddingTruncadoError,
    # Validation
    ValidacaoError,
)
from nia_etl_utils import (
    # OCR
    OcrError,
    OcrSubmissaoError,
    OcrProcessamentoError,
    OcrTimeoutError,
)
# Usage in try/except
try:
    config = PostgresConfig.from_env("_INEXISTENTE")
except ConfiguracaoError as e:
    logger.error(f"Invalid configuration: {e}")
    logger.debug(f"Details: {e.details}")
# Exceptions carry context
try:
    with conectar_postgresql(config) as conn:
        conn.cursor.execute("SELECT * FROM tabela")
except ConexaoError as e:
    # e.details contains additional information
    print(e.details)  # {'host': 'localhost', 'database': 'teste', ...}
```
---
### 4. Result Dataclasses (`results.py`)
Structures returned by operations.
```python
from nia_etl_utils import Conexao, ResultadoExtracao, ResultadoLote, ResultadoEmail
# Conexao - returned by conectar_postgresql/oracle
with conectar_postgresql(config) as conn:
    conn.cursor.execute("SELECT 1")
    # conn.cursor and conn.connection are available
# Closed automatically when the context manager exits
# ResultadoExtracao - returned by extrair_e_exportar_csv
resultado = extrair_e_exportar_csv(...)
print(resultado.sucesso)         # True/False
print(resultado.caminho)         # '/dados/arquivo.csv'
print(resultado.linhas)          # 1500
print(resultado.tempo_execucao)  # 2.34 (seconds)
# ResultadoLote - returned by exportar_multiplos_csv
lote = exportar_multiplos_csv(...)
print(lote.total)       # 5
print(lote.sucessos)    # 4
print(lote.falhas)      # 1
print(lote.resultados)  # list of ResultadoExtracao
```
---
### 5. Database Connections (`database.py`)
Connections with a context manager for automatic closing.
#### PostgreSQL
```python
from nia_etl_utils import conectar_postgresql, PostgresConfig
# With an explicit configuration
config = PostgresConfig(
    host="localhost",
    port="5432",
    database="meu_banco",
    user="usuario",
    password="senha"
)
with conectar_postgresql(config) as conn:
    conn.cursor.execute("SELECT * FROM tabela")
    resultados = conn.cursor.fetchall()
# Connection closed automatically
# With environment variables
config = PostgresConfig.from_env()
with conectar_postgresql(config) as conn:
    conn.cursor.execute("SELECT 1")
# Convenience wrappers (kept for backward compatibility)
from nia_etl_utils import conectar_postgresql_nia, conectar_postgresql_opengeo
with conectar_postgresql_nia() as conn:
    conn.cursor.execute("SELECT * FROM ouvidorias")
# SQLAlchemy engine
from nia_etl_utils import obter_engine_postgresql
import pandas as pd
engine = obter_engine_postgresql(config)
df = pd.read_sql("SELECT * FROM tabela", engine)
```
#### Oracle
```python
from nia_etl_utils import conectar_oracle, OracleConfig
config = OracleConfig.from_env()
with conectar_oracle(config) as conn:
    conn.cursor.execute("SELECT * FROM tabela WHERE ROWNUM <= 10")
    resultados = conn.cursor.fetchall()
```
---
### 6. SMTP Email (`email_smtp.py`)
Email sending with attachment support.
```python
from nia_etl_utils import enviar_email_smtp
# Default usage (recipients come from the EMAIL_DESTINATARIOS env var)
enviar_email_smtp(
    corpo_do_email="Pipeline completed successfully",
    assunto="[PROD] ETL Finished"
)
# With specific recipients and an attachment
enviar_email_smtp(
    destinatarios=["diretor@mprj.mp.br"],
    corpo_do_email="Executive report attached",
    assunto="Monthly Report",
    anexo="/tmp/relatorio.pdf"
)
```
---
### 7. Logging (`logger_config.py`)
Standardized Loguru configuration.
```python
from nia_etl_utils import configurar_logger_padrao_nia, configurar_logger
from loguru import logger
# Quick setup with NIA defaults
caminho_log = configurar_logger_padrao_nia("ouvidorias_etl")
logger.info("Pipeline started")
# Custom configuration
caminho_log = configurar_logger(
    prefixo="meu_pipeline",
    data_extracao="2025_01_20",
    pasta_logs="/var/logs/nia",
    rotation="50 MB",
    retention="30 days",
    level="INFO"
)
```
---
### 8. CSV Processing (`processa_csv.py`)
Exports DataFrames to CSV with standardized naming.
```python
from nia_etl_utils import exportar_para_csv, extrair_e_exportar_csv
import pandas as pd
# Simple export
df = pd.DataFrame({"col1": [1, 2], "col2": [3, 4]})
caminho = exportar_para_csv(
    df=df,
    nome_arquivo="dados_clientes",
    data_extracao="2025_01_20",
    diretorio_base="/tmp/dados"
)
# Extraction + export
def extrair_dados():
    return pd.DataFrame({"dados": [1, 2, 3]})
resultado = extrair_e_exportar_csv(
    nome_extracao="dados_vendas",
    funcao_extracao=extrair_dados,
    data_extracao="2025_01_20",
    diretorio_base="/tmp/dados",
    falhar_se_vazio=True
)
# Multiple extractions in a batch
from nia_etl_utils import exportar_multiplos_csv
extractions = [
    {"nome": "clientes", "funcao": extrair_clientes},
    {"nome": "vendas", "funcao": extrair_vendas}
]
lote = exportar_multiplos_csv(
    extractions=extractions,
    data_extracao="2025_01_20",
    diretorio_base="/tmp/dados"
)
```
---
### 9. Parallel CSV processing (`processa_csv_paralelo.py`)

Processes large CSV files in parallel.

```python
from nia_etl_utils import processar_csv_paralelo

def limpar_texto(texto):
    return texto.strip().upper()

processar_csv_paralelo(
    caminho_entrada="dados_brutos.csv",
    caminho_saida="dados_limpos.csv",
    colunas_para_tratar=["nome", "descricao"],
    funcao_transformacao=limpar_texto,
    remover_entrada=True
)
```
---
### 10. File handling (`limpeza_pastas.py`)

Utilities for cleaning up and creating directories.

```python
from nia_etl_utils import limpar_pasta, remover_pasta_recursivamente, criar_pasta_se_nao_existir

limpar_pasta("/tmp/dados")
remover_pasta_recursivamente("/tmp/temporario")
criar_pasta_se_nao_existir("/dados/processados/2025/01")
```
---
### 11. OCR via the IntelliDoc API (`ocr.py`)

Optical Character Recognition (OCR) through MPRJ's IntelliDoc API.

The API processes documents asynchronously:

1. Submit a document → returns a `document_id`
2. Poll its status → returns the result once processing is done

**Supported formats:** PDF, JPG, PNG, GIF, BMP, TIFF (detected automatically via magic bytes)

```python
from nia_etl_utils import executar_ocr, OcrError, OcrTimeoutError

# Basic usage with an environment variable
with open("documento.pdf", "rb") as f:
    resultado = executar_ocr(
        conteudo=f.read(),
        url_base="INTELLIDOC_URL",  # name of the env var
    )

print(resultado["full_text"])        # Full extracted text
print(resultado["overall_quality"])  # OCR quality (0-1)
print(resultado["total_pages"])      # Number of pages

# Direct URL and custom settings
resultado = executar_ocr(
    conteudo=blob_bytes,
    url_base="http://intellidoc.mprj.mp.br",
    timeout_polling=600,   # Maximum polling timeout in seconds (default: 300)
    max_tentativas=5,      # Submission attempts (default: 3)
    intervalo_retry=10,    # Seconds between retries (default: 5)
    intervalo_polling=2,   # Seconds between status checks (default: 1)
)

# Oracle LOBs are supported directly
with conectar_oracle(config) as conn:
    conn.cursor.execute("SELECT blob_documento FROM documentos WHERE id = :id", {"id": 123})
    blob = conn.cursor.fetchone()[0]
    resultado = executar_ocr(blob, url_base="INTELLIDOC_URL")  # the LOB is converted automatically

# Accessing per-page details
for page in resultado["pages"]:
    print(f"Page {page['page_number']}: {page['extraction_method']}")

# Error handling
try:
    resultado = executar_ocr(conteudo, url_base="INTELLIDOC_URL")
except OcrTimeoutError as e:
    logger.error(f"Timed out waiting for OCR: {e}")
    logger.debug(f"Details: {e.details}")  # {'document_id': '...', 'ultimo_status': 'PENDING'}
except OcrError as e:
    logger.error(f"OCR error: {e}")
```
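The submit-then-poll pattern described above boils down to a small generic loop. The sketch below illustrates the idea only; it is not the library's actual implementation, and `fetch_status` stands in for the real HTTP status call:

```python
import time


def aguardar_resultado(fetch_status, timeout=300.0, intervalo=1.0,
                       clock=time.monotonic, sleep=time.sleep):
    """Poll fetch_status() until it reports success, failure, or timeout.

    fetch_status must return a (status, payload) tuple where status is
    one of "PENDING", "SUCCESS", "FAILURE".
    """
    inicio = clock()
    while True:
        status, payload = fetch_status()
        if status == "SUCCESS":
            return payload
        if status == "FAILURE":
            raise RuntimeError(f"processing failed: {payload}")
        if clock() - inicio >= timeout:
            raise TimeoutError(f"still {status} after {timeout}s")
        sleep(intervalo)


# Simulated API: the document is ready on the third status check
respostas = iter([("PENDING", None), ("PENDING", None), ("SUCCESS", {"full_text": "ok"})])
resultado = aguardar_resultado(lambda: next(respostas), timeout=5.0, sleep=lambda s: None)
print(resultado["full_text"])  # -> ok
```

Injecting `clock` and `sleep` keeps the loop testable without real waiting, which is also why the simulated run above passes a no-op `sleep`.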
**Return value of `executar_ocr`:**

| Field                | Type  | Description           |
| -------------------- | ----- | --------------------- |
| `document_id`        | str   | Unique document ID    |
| `full_text`          | str   | Full extracted text   |
| `mime_type`          | str   | Detected MIME type    |
| `overall_quality`    | float | Overall quality (0-1) |
| `total_pages`        | int   | Number of pages       |
| `processing_time_ms` | int   | Processing time       |
| `pages`              | list  | Per-page details      |
| `metadata`           | dict  | Additional metadata   |

**Exceptions:**

| Exception               | Raised when                                                           |
| ----------------------- | --------------------------------------------------------------------- |
| `OcrSubmissaoError`     | Document submission failed (network, timeout, invalid response)       |
| `OcrProcessamentoError` | The API returned FAILURE/REVOKED (corrupted document, invalid format) |
| `OcrTimeoutError`       | Maximum polling time reached                                          |
| `TypeError`             | Unsupported content type                                              |
---
### 12. Embeddings via Azure OpenAI (`embedding.py`)

Generates vector embeddings using the Azure OpenAI API.

#### Single text

```python
from nia_etl_utils import (
    gerar_embedding_openai,
    obter_variavel_env,
    EmbeddingError,
    EmbeddingTimeoutError,
    EmbeddingTruncadoError,
)

# Basic usage
vetor = gerar_embedding_openai(
    endpoint=obter_variavel_env("AZURE_OPENAI_ENDPOINT"),
    api_key=obter_variavel_env("AZURE_OPENAI_API_KEY"),
    model_embedding=obter_variavel_env("AZURE_OPENAI_MODEL_EMBEDDING"),
    api_version=obter_variavel_env("AZURE_OPENAI_API_VERSION_EMBEDDING"),
    texto="This is a sample text for embedding.",
)
print(f"Vector with {len(vetor)} dimensions")  # 1536 for ada-002

# Custom retry parameters
vetor = gerar_embedding_openai(
    endpoint=endpoint,
    api_key=api_key,
    model_embedding=model_embedding,
    api_version=api_version,
    texto="Another text",
    max_retries=5,          # Maximum number of attempts (default: 3)
    intervalo_segundos=10,  # Interval between retries (default: 5)
)

# Start+end truncation (preserves context from both the beginning and the end of the text)
vetor = gerar_embedding_openai(
    endpoint=endpoint,
    api_key=api_key,
    model_embedding=model_embedding,
    api_version=api_version,
    texto="A very long text...",
    truncar_inicio_fim=True,  # Uses first half + [...TRUNCADO...] + last half
)

# Disabling automatic token truncation
vetor = gerar_embedding_openai(
    endpoint=endpoint,
    api_key=api_key,
    model_embedding=model_embedding,
    api_version=api_version,
    texto="A very long text...",
    max_tokens=None,  # Disables truncation (the API may return an error if the limit is exceeded)
)

# Error handling
try:
    vetor = gerar_embedding_openai(
        endpoint=endpoint,
        api_key=api_key,
        model_embedding=model_embedding,
        api_version=api_version,
        texto=texto,
        truncar_inicio_fim=True,
    )
except EmbeddingTruncadoError as e:
    logger.error(f"Start+end truncation error: {e}")
    logger.debug(f"Original tokens: {e.tokens_originais}")
except EmbeddingTimeoutError as e:
    logger.error(f"Timed out after all retries: {e}")
    logger.debug(f"Details: {e.details}")  # {'tentativas': 3, 'model': '...'}
except EmbeddingError as e:
    logger.error(f"Embedding error: {e}")
```
#### Batch (multiple texts)

More efficient than calling `gerar_embedding_openai` repeatedly, since it sends all texts to the API in a single request.

```python
from nia_etl_utils import gerar_embedding_openai_batch, obter_variavel_env

# Basic usage
textos = ["First text", "Second text", "Third text"]
vetores = gerar_embedding_openai_batch(
    endpoint=obter_variavel_env("AZURE_OPENAI_ENDPOINT"),
    api_key=obter_variavel_env("AZURE_OPENAI_API_KEY"),
    model_embedding=obter_variavel_env("AZURE_OPENAI_MODEL_EMBEDDING"),
    api_version=obter_variavel_env("AZURE_OPENAI_API_VERSION_EMBEDDING"),
    textos=textos,
)
print(f"{len(vetores)} vectors generated")  # 3 vectors

# Empty texts yield an empty list
textos = ["Valid text", "", "Another text"]
vetores = gerar_embedding_openai_batch(...)
# vetores[0] -> [0.123, 0.456, ...]  # embedding of the first text
# vetores[1] -> []                   # an empty text yields an empty list
# vetores[2] -> [0.789, 0.012, ...]  # embedding of the third text
```
**Parameters of `gerar_embedding_openai`:**

| Parameter            | Type       | Description                                                                                                                              |
| -------------------- | ---------- | ---------------------------------------------------------------------------------------------------------------------------------------- |
| `endpoint`           | str        | Azure OpenAI endpoint URL                                                                                                                |
| `api_key`            | str        | Azure OpenAI API key                                                                                                                     |
| `model_embedding`    | str        | Model name (e.g. `text-embedding-3-small`)                                                                                               |
| `api_version`        | str        | API version (e.g. `2024-02-01`)                                                                                                          |
| `texto`              | str        | Input text to embed                                                                                                                      |
| `max_retries`        | int        | Attempts on timeout (default: 3)                                                                                                         |
| `intervalo_segundos` | int        | Interval between attempts (default: 5)                                                                                                   |
| `max_tokens`         | int ∣ None | Maximum token limit; longer texts are truncated. Use `None` to disable (default: 8191)                                                   |
| `truncar_inicio_fim` | bool       | If `True`, truncates preserving start + end with a `[...TRUNCADO...]` marker. If `False`, truncates from the start only (default: False) |

**Parameters of `gerar_embedding_openai_batch`:**

| Parameter            | Type       | Description                                                                                                                              |
| -------------------- | ---------- | ---------------------------------------------------------------------------------------------------------------------------------------- |
| `endpoint`           | str        | Azure OpenAI endpoint URL                                                                                                                |
| `api_key`            | str        | Azure OpenAI API key                                                                                                                     |
| `model_embedding`    | str        | Model name (e.g. `text-embedding-3-small`)                                                                                               |
| `api_version`        | str        | API version (e.g. `2024-02-01`)                                                                                                          |
| `textos`             | list[str]  | List of texts to embed                                                                                                                   |
| `max_retries`        | int        | Attempts on timeout (default: 3)                                                                                                         |
| `intervalo_segundos` | int        | Interval between attempts (default: 5)                                                                                                   |
| `max_tokens`         | int ∣ None | Maximum token limit per text; longer texts are truncated. Use `None` to disable (default: 8191)                                          |
| `truncar_inicio_fim` | bool       | If `True`, truncates preserving start + end with a `[...TRUNCADO...]` marker. If `False`, truncates from the start only (default: False) |

**Return values:**

- `gerar_embedding_openai`: `list[float]` — embedding vector
- `gerar_embedding_openai_batch`: `list[list[float]]` — list of vectors (same order as the input texts)

**Exceptions:**

| Exception                | Raised when                                              |
| ------------------------ | -------------------------------------------------------- |
| `ValueError`             | Empty or invalid text/required parameters                |
| `EmbeddingTimeoutError`  | All attempts timed out                                   |
| `EmbeddingTruncadoError` | Error while applying start+end truncation                |
| `EmbeddingError`         | Any other generation error (invalid response, API, etc.) |
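The start+end truncation behaviour can be sketched as follows. This is an illustration of the idea only, working on a pre-tokenized list; the library itself counts tokens with tiktoken and inserts the literal `[...TRUNCADO...]` marker:

```python
def truncar_inicio_fim(tokens, max_tokens, marcador="[...TRUNCADO...]"):
    """Keep the first and last halves of a token sequence, dropping the middle."""
    if len(tokens) <= max_tokens:
        return list(tokens)
    metade = max_tokens // 2
    # First `metade` tokens + marker + last `max_tokens - metade` tokens
    return tokens[:metade] + [marcador] + tokens[len(tokens) - (max_tokens - metade):]


tokens = [f"t{i}" for i in range(10)]
print(truncar_inicio_fim(tokens, 4))
# -> ['t0', 't1', '[...TRUNCADO...]', 't8', 't9']
```

Note that, in this sketch, the marker is added on top of the `max_tokens` budget; how the real implementation accounts for the marker's own tokens is not specified here.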
---
## Installation

### From PyPI

```bash
pip install nia-etl-utils
```

### From GitLab

```bash
pip install git+https://gitlab-dti.mprj.mp.br/nia/etl-nia/nia-etl-utils.git@v0.5.1
```

### Development mode

```bash
git clone https://gitlab-dti.mprj.mp.br/nia/etl-nia/nia-etl-utils.git
cd nia-etl-utils
pip install -e ".[dev]"
```
---
## Configuration

### Environment variables

```env
# SMTP email
MAIL_SMTP_SERVER=smtp.mprj.mp.br
MAIL_SMTP_PORT=587
MAIL_SENDER=etl@mprj.mp.br
EMAIL_DESTINATARIOS=equipe@mprj.mp.br,gestor@mprj.mp.br
# PostgreSQL - NIA
DB_POSTGRESQL_HOST=postgres-nia.mprj.mp.br
DB_POSTGRESQL_PORT=5432
DB_POSTGRESQL_DATABASE=nia_database
DB_POSTGRESQL_USER=usuario
DB_POSTGRESQL_PASSWORD=senha
# PostgreSQL - OpenGeo
DB_POSTGRESQL_HOST_OPENGEO=postgres-opengeo.mprj.mp.br
DB_POSTGRESQL_PORT_OPENGEO=5432
DB_POSTGRESQL_DATABASE_OPENGEO=opengeo_database
DB_POSTGRESQL_USER_OPENGEO=usuario
DB_POSTGRESQL_PASSWORD_OPENGEO=senha
# Oracle
DB_ORACLE_HOST=oracle.mprj.mp.br
DB_ORACLE_PORT=1521
DB_ORACLE_SERVICE_NAME=ORCL
DB_ORACLE_USER=usuario
DB_ORACLE_PASSWORD=senha
# OCR (IntelliDoc)
INTELLIDOC_URL=http://intellidoc.mprj.mp.br
# Azure OpenAI (Embedding)
AZURE_OPENAI_ENDPOINT=https://meu-recurso.openai.azure.com
AZURE_OPENAI_API_KEY=sua-chave-api
AZURE_OPENAI_MODEL_EMBEDDING=text-embedding-ada-002
AZURE_OPENAI_API_VERSION_EMBEDDING=2023-05-15
```
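In the spirit of the `obter_variavel_env` helper used in the examples above, a required-variable lookup that fails fast when a setting is missing can be sketched like this (illustrative only, not the library's implementation; the `ambiente` parameter exists just to make it testable):

```python
import os


def obter_env_obrigatoria(nome, ambiente=None):
    """Return the value of a required environment variable, or raise a clear error."""
    ambiente = os.environ if ambiente is None else ambiente
    valor = ambiente.get(nome)
    if not valor:
        raise RuntimeError(f"required environment variable {nome!r} is not set")
    return valor


print(obter_env_obrigatoria("DB_POSTGRESQL_PORT", {"DB_POSTGRESQL_PORT": "5432"}))  # -> 5432
```

Failing at startup with the variable's name is usually preferable to letting a `None` propagate into a connection string deep inside the pipeline.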
---
## Tests

```bash
# All tests
pytest

# With coverage
pytest --cov=src/nia_etl_utils --cov-report=term-missing

# Or use the helper script
./run_tests.sh --coverage --verbose
```
---
## Full usage example

```python
from nia_etl_utils import (
    configurar_logger_padrao_nia,
    PostgresConfig,
    conectar_postgresql,
    exportar_para_csv,
    ConexaoError,
)
from loguru import logger
import pandas as pd
from datetime import datetime

# 1. Configure logging
configurar_logger_padrao_nia("meu_pipeline")

# 2. Load configuration
config = PostgresConfig.from_env()

# 3. Connect and extract data
try:
    with conectar_postgresql(config) as conn:
        logger.info("Extracting data...")
        conn.cursor.execute("SELECT * FROM tabela WHERE data >= CURRENT_DATE - 7")
        resultados = conn.cursor.fetchall()
        colunas = [desc[0] for desc in conn.cursor.description]
        df = pd.DataFrame(resultados, columns=colunas)
except ConexaoError as e:
    logger.error(f"Connection failed: {e}")
    raise

logger.info(f"Extraction finished: {len(df)} rows")

# 4. Export to CSV
data_hoje = datetime.now().strftime("%Y_%m_%d")
caminho = exportar_para_csv(
    df=df,
    nome_arquivo="dados_extraidos",
    data_extracao=data_hoje,
    diretorio_base="/dados/processados"
)
logger.success(f"Pipeline finished! File: {caminho}")
```
---
## Airflow integration

```python
from airflow.providers.cncf.kubernetes.operators.kubernetes_pod import KubernetesPodOperator

task = KubernetesPodOperator(
    task_id="meu_etl",
    name="meu-etl-pod",
    namespace="airflow-nia-stage",
    image="python:3.13.3",
    cmds=[
        "sh", "-c",
        "pip install nia-etl-utils && python src/extract.py"
    ],
    env_vars={
        "DB_POSTGRESQL_HOST": "...",
        "EMAIL_DESTINATARIOS": "equipe@mprj.mp.br"
    },
)
```
---
## Technologies

- Python 3.10+ (compatible up to 3.13)
- Loguru (logging)
- python-dotenv (env vars)
- requests (HTTP/OCR)
- oracledb (Oracle; compatible with Python 3.13+)
- psycopg2 (PostgreSQL)
- SQLAlchemy (engines)
- pandas (data processing)
- openai (Azure OpenAI embeddings)
- tiktoken (token counting)
- pytest + pytest-cov (tests)
- ruff (linting)
---
## Versioning

This project follows [Semantic Versioning](https://semver.org/):

- **MAJOR**: incompatible API changes
- **MINOR**: new, backward-compatible features
- **PATCH**: bug fixes

**Current version:** `v0.5.1`
---
## CI/CD

Automated GitLab pipeline:

- Unit tests (pytest)
- Code coverage (>= 70%)
- Linting (ruff)
- Automatic PyPI deployment (on tags)
---
## Contributing

Merge requests are welcome. Always branch off `main`.

### Checklist

- [ ] Tests pass: `pytest`
- [ ] Coverage >= 70%
- [ ] Lint OK: `ruff check src/ tests/`
- [ ] Semantic commits
- [ ] Documentation updated
---
## License

Internal project of MPRJ. No public license.
---
## Technical lead

**Nícolas Galdino Esmael** | Data Engineer - NIA | MPRJ
| text/markdown | null | Nícolas Esmael <nicolas.esmael@mprj.mp.br> | null | null | MIT | etl, data-engineering, pipeline, mprj, nia | [
"Development Status :: 4 - Beta",
"Intended Audience :: Developers",
"License :: OSI Approved :: MIT License",
"Operating System :: OS Independent",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Pytho... | [] | null | null | >=3.10 | [] | [] | [] | [
"loguru>=0.7.0",
"python-dotenv>=1.0.0",
"oracledb>=2.0.0",
"psycopg2-binary>=2.9.0",
"sqlalchemy>=2.0.0",
"pandas>=2.0.0",
"requests>=2.28.0",
"openai>=1.0.0",
"tiktoken>=0.5.0",
"pytest>=7.0.0; extra == \"dev\"",
"pytest-cov>=4.0.0; extra == \"dev\"",
"ruff>=0.1.0; extra == \"dev\"",
"mypy... | [] | [] | [] | [
"Repository, https://gitlab-dti.mprj.mp.br/nia/etl-nia/nia-etl-utils",
"Documentation, https://gitlab-dti.mprj.mp.br/nia/etl-nia/nia-etl-utils/-/blob/main/README.md",
"Changelog, https://gitlab-dti.mprj.mp.br/nia/etl-nia/nia-etl-utils/-/blob/main/CHANGELOG.md"
] | twine/6.2.0 CPython/3.13.3 | 2026-02-19T14:23:39.868905 | nia_etl_utils-0.5.1.tar.gz | 63,257 | 74/ea/489ac18757fd2885746d76905b33bdffdaf72f20bb59b7b3458799dbad1d/nia_etl_utils-0.5.1.tar.gz | source | sdist | null | false | 2a193f4bedf19151c24044c62f66c399 | 76c768bdee2ec6d1c652a9e529b51da6e3f17bb2c86e74a348a8dbd19222e39b | 74ea489ac18757fd2885746d76905b33bdffdaf72f20bb59b7b3458799dbad1d | null | [] | 299 |
2.4 | earthcarekit | 0.14.3 | A Python package to simplify working with EarthCARE satellite data. | <picture align="center">
<source media="(prefers-color-scheme: dark)" srcset="https://raw.githubusercontent.com/TROPOS-RSD/earthcarekit-docs-assets/refs/heads/main/assets/images/logos/earthcarekit-logo-lightblue.png">
<img alt="logo" src="https://raw.githubusercontent.com/TROPOS-RSD/earthcarekit-docs-assets/refs/heads/main/assets/images/logos/earthcarekit-logo-blue.png">
</picture>
---
[](https://github.com/TROPOS-RSD/earthcarekit/blob/main/LICENSE)
[](https://tropos-rsd.github.io/earthcarekit/)
[](https://github.com/TROPOS-RSD/earthcarekit/tags)
[](https://pypi.org/project/earthcarekit/)
[](https://github.com/TROPOS-RSD/earthcarekit/commits/main)
[](https://doi.org/10.5281/zenodo.16813294)
A Python package to simplify working with EarthCARE satellite data.
> ⚠️ **Project Status: In Development**
>
> This project is still under active development.
> It is **not yet feature-complete**, and parts of the **user documentation are missing or incomplete**.
> Use at your own risk and expect breaking changes.
> Feedback and contributions are welcome!
- **Documentation:** https://tropos-rsd.github.io/earthcarekit/
- **Source code:** https://github.com/TROPOS-RSD/earthcarekit
- **Examples:** https://github.com/TROPOS-RSD/earthcarekit/tree/main/examples/notebooks
- **Feature requests and bug reports:** https://github.com/TROPOS-RSD/earthcarekit/issues
## What is `earthcarekit`?
**`earthcarekit`** is an open-source Python package that provides comprehensive and flexible tools for downloading, reading, analysing and visualizing data from [ESA](https://earth.esa.int/eogateway/missions/earthcare) (European Space Agency) and [JAXA](https://www.eorc.jaxa.jp/EARTHCARE/index.html)'s (Japan Aerospace Exploration Agency) joint satellite mission EarthCARE (Earth Cloud, Aerosol and Radiation Explorer, [Wehr et al., 2023](https://doi.org/10.5194/amt-16-3581-2023)). The goal of this software is to support the diverse calibration/validation (cal/val) and scientific efforts related to the mission and provide easy-to-use functions for new EarthCARE data users.
You can find more info about the package, setup, and usage in the [documentation](https://tropos-rsd.github.io/earthcarekit/).
## Contact
The package is developed and maintained by [Leonard König](https://orcid.org/0009-0004-3095-3969) at Leibniz Institute for Tropospheric Research ([TROPOS](https://www.tropos.de/en/)).
For questions, suggestions, or bug reports, please [create an issue](https://github.com/TROPOS-RSD/earthcarekit/issues) or reach out via [email](mailto:koenig@tropos.de).
## Acknowledgments
The visual style of the along-track/vertical curtain plots was inspired by the excellent [ectools](https://bitbucket.org/smason/workspace/projects/EC) repository by Shannon Mason ([ECMWF](https://www.ecmwf.int/)), from which the colormap definitions for `calipso` and `chiljet2` were adapted.
## Citation
If you use this software in your work, please cite it.
We recommend citing the specific version you are using, which you can select on [Zenodo](https://doi.org/10.5281/zenodo.16813294).
Alternatively, to cite the software independently of any particular version, use:
```bibtex
@software{koenig_2025_earthcarekit,
author = {König, Leonard and
Floutsi, Athena Augusta and
Haarig, Moritz and
Baars, Holger and
Mason, Shannon and
Wandinger, Ulla},
title = {earthcarekit: A Python package to simplify working
with EarthCARE satellite data},
month = aug,
year = 2025,
publisher = {Zenodo},
doi = {10.5281/zenodo.16813294},
url = {https://doi.org/10.5281/zenodo.16813294},
}
```
or in text:
> König, L., Floutsi, A. A., Haarig, M., Baars, H., Mason, S. & Wandinger, U. (2025). earthcarekit: A Python package to simplify working with EarthCARE satellite data. Zenodo. [https://doi.org/10.5281/zenodo.16813294](https://doi.org/10.5281/zenodo.16813294)
## License
This project is licensed under the Apache 2.0 License (see [LICENSE](https://github.com/TROPOS-RSD/earthcarekit/blob/main/LICENSE) file or [opensource.org/license/apache-2-0](https://opensource.org/license/apache-2-0)). See also third-party licenses listed in the [documentation](https://tropos-rsd.github.io/earthcarekit/#third-party-licenses).
## References
- Wehr, T., Kubota, T., Tzeremes, G., Wallace, K., Nakatsuka, H., Ohno, Y., Koopman, R., Rusli, S., Kikuchi, M., Eisinger, M., Tanaka, T., Taga, M., Deghaye, P., Tomita, E., and Bernaerts, D.: The EarthCARE mission – science and system overview, Atmos. Meas. Tech., 16, 3581–3608, https://doi.org/10.5194/amt-16-3581-2023, 2023.
| text/markdown | null | Leonard König <koenig@tropos.de> | null | null | Apache License
Version 2.0, January 2004
http://www.apache.org/licenses/
TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
1. Definitions.
"License" shall mean the terms and conditions for use, reproduction,
and distribution as defined by Sections 1 through 9 of this document.
"Licensor" shall mean the copyright owner or entity authorized by
the copyright owner that is granting the License.
"Legal Entity" shall mean the union of the acting entity and all
other entities that control, are controlled by, or are under common
control with that entity. For the purposes of this definition,
"control" means (i) the power, direct or indirect, to cause the
direction or management of such entity, whether by contract or
otherwise, or (ii) ownership of fifty percent (50%) or more of the
outstanding shares, or (iii) beneficial ownership of such entity.
"You" (or "Your") shall mean an individual or Legal Entity
exercising permissions granted by this License.
"Source" form shall mean the preferred form for making modifications,
including but not limited to software source code, documentation
source, and configuration files.
"Object" form shall mean any form resulting from mechanical
transformation or translation of a Source form, including but
not limited to compiled object code, generated documentation,
and conversions to other media types.
"Work" shall mean the work of authorship, whether in Source or
Object form, made available under the License, as indicated by a
copyright notice that is included in or attached to the work
(an example is provided in the Appendix below).
"Derivative Works" shall mean any work, whether in Source or Object
form, that is based on (or derived from) the Work and for which the
editorial revisions, annotations, elaborations, or other modifications
represent, as a whole, an original work of authorship. For the purposes
of this License, Derivative Works shall not include works that remain
separable from, or merely link (or bind by name) to the interfaces of,
the Work and Derivative Works thereof.
"Contribution" shall mean any work of authorship, including
the original version of the Work and any modifications or additions
to that Work or Derivative Works thereof, that is intentionally
submitted to Licensor for inclusion in the Work by the copyright owner
or by an individual or Legal Entity authorized to submit on behalf of
the copyright owner. For the purposes of this definition, "submitted"
means any form of electronic, verbal, or written communication sent
to the Licensor or its representatives, including but not limited to
communication on electronic mailing lists, source code control systems,
and issue tracking systems that are managed by, or on behalf of, the
Licensor for the purpose of discussing and improving the Work, but
excluding communication that is conspicuously marked or otherwise
designated in writing by the copyright owner as "Not a Contribution."
"Contributor" shall mean Licensor and any individual or Legal Entity
on behalf of whom a Contribution has been received by Licensor and
subsequently incorporated within the Work.
2. Grant of Copyright License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
copyright license to reproduce, prepare Derivative Works of,
publicly display, publicly perform, sublicense, and distribute the
Work and such Derivative Works in Source or Object form.
3. Grant of Patent License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
(except as stated in this section) patent license to make, have made,
use, offer to sell, sell, import, and otherwise transfer the Work,
where such license applies only to those patent claims licensable
by such Contributor that are necessarily infringed by their
Contribution(s) alone or by combination of their Contribution(s)
with the Work to which such Contribution(s) was submitted. If You
institute patent litigation against any entity (including a
cross-claim or counterclaim in a lawsuit) alleging that the Work
or a Contribution incorporated within the Work constitutes direct
or contributory patent infringement, then any patent licenses
granted to You under this License for that Work shall terminate
as of the date such litigation is filed.
4. Redistribution. You may reproduce and distribute copies of the
Work or Derivative Works thereof in any medium, with or without
modifications, and in Source or Object form, provided that You
meet the following conditions:
(a) You must give any other recipients of the Work or
Derivative Works a copy of this License; and
(b) You must cause any modified files to carry prominent notices
stating that You changed the files; and
(c) You must retain, in the Source form of any Derivative Works
that You distribute, all copyright, patent, trademark, and
attribution notices from the Source form of the Work,
excluding those notices that do not pertain to any part of
the Derivative Works; and
(d) If the Work includes a "NOTICE" text file as part of its
distribution, then any Derivative Works that You distribute must
include a readable copy of the attribution notices contained
within such NOTICE file, excluding those notices that do not
pertain to any part of the Derivative Works, in at least one
of the following places: within a NOTICE text file distributed
as part of the Derivative Works; within the Source form or
documentation, if provided along with the Derivative Works; or,
within a display generated by the Derivative Works, if and
wherever such third-party notices normally appear. The contents
of the NOTICE file are for informational purposes only and
do not modify the License. You may add Your own attribution
notices within Derivative Works that You distribute, alongside
or as an addendum to the NOTICE text from the Work, provided
that such additional attribution notices cannot be construed
as modifying the License.
You may add Your own copyright statement to Your modifications and
may provide additional or different license terms and conditions
for use, reproduction, or distribution of Your modifications, or
for any such Derivative Works as a whole, provided Your use,
reproduction, and distribution of the Work otherwise complies with
the conditions stated in this License.
5. Submission of Contributions. Unless You explicitly state otherwise,
any Contribution intentionally submitted for inclusion in the Work
by You to the Licensor shall be under the terms and conditions of
this License, without any additional terms or conditions.
Notwithstanding the above, nothing herein shall supersede or modify
the terms of any separate license agreement you may have executed
with Licensor regarding such Contributions.
6. Trademarks. This License does not grant permission to use the trade
names, trademarks, service marks, or product names of the Licensor,
except as required for reasonable and customary use in describing the
origin of the Work and reproducing the content of the NOTICE file.
7. Disclaimer of Warranty. Unless required by applicable law or
agreed to in writing, Licensor provides the Work (and each
Contributor provides its Contributions) on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
implied, including, without limitation, any warranties or conditions
of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
PARTICULAR PURPOSE. You are solely responsible for determining the
appropriateness of using or redistributing the Work and assume any
risks associated with Your exercise of permissions under this License.
8. Limitation of Liability. In no event and under no legal theory,
whether in tort (including negligence), contract, or otherwise,
unless required by applicable law (such as deliberate and grossly
negligent acts) or agreed to in writing, shall any Contributor be
liable to You for damages, including any direct, indirect, special,
incidental, or consequential damages of any character arising as a
result of this License or out of the use or inability to use the
Work (including but not limited to damages for loss of goodwill,
work stoppage, computer failure or malfunction, or any and all
other commercial damages or losses), even if such Contributor
has been advised of the possibility of such damages.
9. Accepting Warranty or Additional Liability. While redistributing
the Work or Derivative Works thereof, You may choose to offer,
and charge a fee for, acceptance of support, warranty, indemnity,
or other liability obligations and/or rights consistent with this
License. However, in accepting such obligations, You may act only
on Your own behalf and on Your sole responsibility, not on behalf
of any other Contributor, and only if You agree to indemnify,
defend, and hold each Contributor harmless for any liability
incurred by, or claims asserted against, such Contributor by reason
of your accepting any such warranty or additional liability.
END OF TERMS AND CONDITIONS
APPENDIX: How to apply the Apache License to your work.
To apply the Apache License to your work, attach the following
boilerplate notice, with the fields enclosed by brackets "[]"
replaced with your own identifying information. (Don't include
the brackets!) The text should be enclosed in the appropriate
comment syntax for the file format. We also recommend that a
file or class name and description of purpose be included on the
same "printed page" as the copyright notice for easier
identification within third-party archives.
Copyright 2025 TROPOS
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License. | null | [
"Development Status :: 3 - Alpha",
"License :: OSI Approved :: Apache Software License",
"Intended Audience :: Science/Research",
"Intended Audience :: Developers",
"Programming Language :: Python",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.11",
"Programming Language ... | [] | null | null | >=3.11 | [] | [] | [] | [
"numpy>=2.3.2",
"pandas>=2.3.1",
"xarray>=2025.7.1",
"matplotlib>=3.10.3",
"plotly>=6.2.0",
"seaborn>=0.13.2",
"cartopy>=0.24.1",
"cmcrameri>=1.9",
"scipy>=1.16.1",
"owslib>=0.34.1",
"jupyterlab>=4.4.5",
"h5netcdf>=1.6.3",
"netcdf4>=1.7.2",
"tomli-w>=1.2.0",
"mkdocstrings-python"
] | [] | [] | [] | [] | twine/6.1.0 CPython/3.13.7 | 2026-02-19T14:22:36.575235 | earthcarekit-0.14.3.tar.gz | 256,330 | 30/30/2c3651f8801efc58e97fb2f3b1771ef9b1f63be5bfac1c792c402f4aa7fe/earthcarekit-0.14.3.tar.gz | source | sdist | null | false | 31d95971af8dd4ff1f1fe7f1ed5fd302 | 7685d5f678b5e6430b3a4af145247ec519f4a8241f86fdd79eb87e4fd3dd6b8a | 30302c3651f8801efc58e97fb2f3b1771ef9b1f63be5bfac1c792c402f4aa7fe | null | [
"LICENSE"
] | 246 |
2.4 | langchain-core | 1.2.14 | Building applications with LLMs through composability | # 🦜🍎️ LangChain Core
[PyPI version](https://pypi.org/project/langchain-core/#history) ·
[License: MIT](https://opensource.org/licenses/MIT) ·
[Downloads](https://pypistats.org/packages/langchain-core) ·
[@langchain on X](https://x.com/langchain)
Looking for the JS/TS version? Check out [LangChain.js](https://github.com/langchain-ai/langchainjs).
To help you ship LangChain apps to production faster, check out [LangSmith](https://www.langchain.com/langsmith).
[LangSmith](https://www.langchain.com/langsmith) is a unified developer platform for building, testing, and monitoring LLM applications.
## Quick Install
```bash
pip install langchain-core
```
## 🤔 What is this?
LangChain Core contains the base abstractions that power the LangChain ecosystem.
These abstractions are designed to be as modular and simple as possible.
The benefit of having these abstractions is that any provider can implement the required interface and then easily be used in the rest of the LangChain ecosystem.
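The shared-interface idea can be pictured with a plain `typing.Protocol` — a conceptual sketch only; `Invocable`, `EchoModel`, and `ShoutModel` are hypothetical names, and langchain-core's real base abstraction is the `Runnable` class:

```python
from typing import Protocol


class Invocable(Protocol):
    """Hypothetical stand-in for a shared provider interface."""

    def invoke(self, input): ...


class EchoModel:
    def invoke(self, input):
        return f"echo: {input}"


class ShoutModel:
    def invoke(self, input):
        return str(input).upper()


def run(model: Invocable, prompt: str) -> str:
    # Any provider implementing `invoke` can be swapped in unchanged.
    return model.invoke(prompt)


print(run(EchoModel(), "hi"), run(ShoutModel(), "hi"))  # echo: hi HI
```

Because callers depend only on the interface, swapping providers requires no changes to the composing code — that is the modularity benefit described above.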
## ⛰️ Why build on top of LangChain Core?
The LangChain ecosystem is built on top of `langchain-core`. Some of the benefits:
- **Modularity**: We've designed Core around abstractions that are independent of each other, and not tied to any specific model provider.
- **Stability**: We are committed to a stable versioning scheme, and will communicate any breaking changes with advance notice and version bumps.
- **Battle-tested**: Core components have the largest install base in the LLM ecosystem, and are used in production by many companies.
## 📖 Documentation
For full documentation, see the [API reference](https://reference.langchain.com/python/langchain_core/). For conceptual guides, tutorials, and examples on using LangChain, see the [LangChain Docs](https://docs.langchain.com/oss/python/langchain/overview). You can also chat with the docs using [Chat LangChain](https://chat.langchain.com).
## 📕 Releases & Versioning
See our [Releases](https://docs.langchain.com/oss/python/release-policy) and [Versioning](https://docs.langchain.com/oss/python/versioning) policies.
## 💁 Contributing
As an open-source project in a rapidly developing field, we are extremely open to contributions, whether it be in the form of a new feature, improved infrastructure, or better documentation.
For detailed information on how to contribute, see the [Contributing Guide](https://docs.langchain.com/oss/python/contributing/overview).
| text/markdown | null | null | null | null | MIT | null | [
"Development Status :: 5 - Production/Stable",
"Intended Audience :: Developers",
"License :: OSI Approved :: MIT License",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programming ... | [] | null | null | <4.0.0,>=3.10.0 | [] | [] | [] | [
"jsonpatch<2.0.0,>=1.33.0",
"langsmith<1.0.0,>=0.3.45",
"packaging>=23.2.0",
"pydantic<3.0.0,>=2.7.4",
"pyyaml<7.0.0,>=5.3.0",
"tenacity!=8.4.0,<10.0.0,>=8.1.0",
"typing-extensions<5.0.0,>=4.7.0",
"uuid-utils<1.0,>=0.12.0"
] | [] | [] | [] | [
"Homepage, https://docs.langchain.com/",
"Documentation, https://reference.langchain.com/python/langchain_core/",
"Repository, https://github.com/langchain-ai/langchain",
"Issues, https://github.com/langchain-ai/langchain/issues",
"Changelog, https://github.com/langchain-ai/langchain/releases?q=%22langchain... | twine/6.1.0 CPython/3.13.7 | 2026-02-19T14:22:33.514933 | langchain_core-1.2.14.tar.gz | 833,399 | 3f/ff/c5e3da8eca8a18719b300ef6c29e28208ee4e9da7f9749022b96292b6541/langchain_core-1.2.14.tar.gz | source | sdist | null | false | ef7ccbc058edc43f4468dd15320e1a7c | 09549d838a2672781da3a9502f3b9c300863284b77b27e2a6dac4e6e650acfed | 3fffc5e3da8eca8a18719b300ef6c29e28208ee4e9da7f9749022b96292b6541 | null | [] | 1,894,883 |
2.3 | okb | 2.1.0 | Personal knowledge base with semantic search for LLMs | # Owned Knowledge Base (OKB)
A local-first semantic search system for personal documents with Claude Code integration via MCP.
## Installation
With pipx (preferred):
```bash
pipx install okb
```
Or pip:
```bash
pip install okb
```
## Quick Start
```bash
# 1. Start the database
okb db start
# 2. (Optional) Deploy Modal embedder for faster batch ingestion
okb modal deploy
# 3. Ingest your documents
okb ingest ~/notes ~/docs
# 4. Configure Claude Code MCP (see below)
```
## CLI Commands
| Command | Description |
|---------|-------------|
| `okb db start` | Start pgvector database container |
| `okb db stop` | Stop database container |
| `okb db status` | Show database status |
| `okb db migrate [name]` | Apply pending migrations (optionally for specific db) |
| `okb db list` | List configured databases |
| `okb db destroy` | Remove container and volume (destructive) |
| `okb db snapshot save [name]` | Create database snapshot (default: timestamp) |
| `okb db snapshot list` | List available snapshots |
| `okb db snapshot restore <name>` | Restore from snapshot (creates pre-restore backup) |
| `okb db snapshot restore <name> --no-backup` | Restore without pre-restore backup |
| `okb db snapshot delete <name>` | Delete a snapshot |
| `okb ingest <paths>` | Ingest documents into knowledge base |
| `okb ingest <paths> --local` | Ingest using local GPU/CPU embedding (no Modal) |
| `okb serve` | Start MCP server (stdio, for Claude Code) |
| `okb serve --http` | Start HTTP MCP server with token auth |
| `okb watch <paths>` | Watch directories for changes |
| `okb config init` | Create default config file |
| `okb config show` | Show current configuration |
| `okb config path` | Print config file path |
| `okb modal deploy` | Deploy GPU embedder to Modal |
| `okb token create` | Create API token for HTTP server |
| `okb token list` | List tokens for a database |
| `okb token revoke [TOKEN] --id <n>` | Revoke token by full value or ID |
| `okb sync list` | List available API sources (plugins) |
| `okb sync list-projects <source>` | List projects from source (for config) |
| `okb sync run <sources>` | Sync data from external APIs |
| `okb sync auth <source>` | Interactive OAuth setup (e.g., dropbox-paper) |
| `okb sync status` | Show last sync times |
| `okb rescan` | Check indexed files for changes, re-ingest stale |
| `okb rescan --dry-run` | Show what would change without executing |
| `okb rescan --delete` | Also remove documents for missing files |
| `okb llm status` | Show LLM config and connectivity |
| `okb llm deploy` | Deploy Modal LLM for open model inference |
| `okb llm clear-cache` | Clear LLM response cache |
| `okb enrich run` | Extract TODOs and entities from documents |
| `okb enrich run --dry-run` | Show what would be enriched |
| `okb enrich pending` | List entities awaiting review |
| `okb enrich approve <id>` | Approve a pending entity |
| `okb enrich reject <id>` | Reject a pending entity |
| `okb enrich analyze` | Analyze database and update description/topics |
| `okb enrich consolidate` | Run entity consolidation (duplicates, clusters) |
| `okb enrich merge-proposals` | List pending merge proposals |
| `okb enrich approve-merge <id>` | Approve an entity merge |
| `okb enrich reject-merge <id>` | Reject an entity merge |
| `okb enrich clusters` | List topic clusters |
| `okb enrich relationships` | List entity relationships |
| `okb service install` | Install systemd user services for background operation |
| `okb service uninstall` | Remove systemd user services |
| `okb service status` | Show service status |
| `okb service start` | Start okb services |
| `okb service stop` | Stop okb services |
| `okb service restart` | Restart services (use after upgrading okb) |
| `okb service logs [-f]` | Show service logs (optionally follow) |
## Configuration
Configuration is loaded from `~/.config/okb/config.yaml` (or `$XDG_CONFIG_HOME/okb/config.yaml`).
Create default config:
```bash
okb config init
```
Example config:
```yaml
databases:
personal:
url: postgresql://knowledge:localdev@localhost:5433/personal_kb
default: true # Used when --db not specified (only one can be default)
managed: true # okb manages via Docker
work:
url: postgresql://knowledge:localdev@localhost:5433/work_kb
managed: true
docker:
port: 5433
container_name: okb-pgvector
chunking:
chunk_size: 512
chunk_overlap: 64
```
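The `chunk_size`/`chunk_overlap` settings describe a standard sliding-window split; a minimal sketch (illustrative only, not okb's tokenizer-aware implementation):

```python
def split_chunks(tokens, chunk_size=512, chunk_overlap=64):
    """Sliding-window split: each chunk shares `chunk_overlap`
    tokens with the previous one (illustrative sketch)."""
    step = chunk_size - chunk_overlap
    return [tokens[i:i + chunk_size]
            for i in range(0, max(len(tokens) - chunk_overlap, 1), step)]


words = [f"w{i}" for i in range(1200)]
chunks = split_chunks(words)
print(len(chunks), len(chunks[0]))  # 3 512
```

The overlap means the tail of each chunk repeats as the head of the next, so a sentence straddling a boundary is still retrievable from at least one chunk.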
Use `--db <name>` to target a specific database with any command.
Environment variables override config file settings:
- `OKB_DATABASE_URL` - Database connection string
- `OKB_DOCKER_PORT` - Docker port mapping
- `OKB_CONTAINER_NAME` - Docker container name
### Project-Local Config
Override global config per-project with `.okbconf.yaml` (searched from CWD upward):
```yaml
# .okbconf.yaml
default_database: work # Use 'work' db in this project
extensions:
skip_directories: # Extends global list
- test_fixtures
```
Merge rules: scalars replace, lists extend, dicts deep-merge.
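These merge rules can be sketched as a small recursive function (illustrative only — the name and code are hypothetical, not okb's implementation):

```python
def merge_config(base, override):
    """Sketch of the merge rules above: scalars replace,
    lists extend, dicts deep-merge (illustrative, not okb's code)."""
    if isinstance(base, dict) and isinstance(override, dict):
        merged = dict(base)
        for key, value in override.items():
            merged[key] = merge_config(base[key], value) if key in base else value
        return merged
    if isinstance(base, list) and isinstance(override, list):
        return base + override  # lists extend
    return override  # scalars (and mismatched types) replace


global_cfg = {"default_database": "personal",
              "extensions": {"skip_directories": ["node_modules"]}}
local_cfg = {"default_database": "work",
             "extensions": {"skip_directories": ["test_fixtures"]}}
print(merge_config(global_cfg, local_cfg))
# {'default_database': 'work',
#  'extensions': {'skip_directories': ['node_modules', 'test_fixtures']}}
```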
### LLM Integration (Optional)
Enable LLM-based document classification, filtering, and enrichment:
```yaml
llm:
provider: claude # "claude", "modal", or null (disabled)
model: claude-haiku-4-5-20251001
timeout: 30
cache_responses: true
```
**Providers:**
| Provider | Setup | Cost |
|----------|-------|------|
| `claude` | `export ANTHROPIC_API_KEY=...` | ~$0.25/1M tokens |
| `modal` | `okb llm deploy` | ~$0.02/min GPU |
**Modal LLM Setup** (no API key needed, runs on Modal's GPUs):
```yaml
llm:
provider: modal
model: microsoft/Phi-3-mini-4k-instruct # Recommended: no gating
```
Non-gated models (work immediately):
- `microsoft/Phi-3-mini-4k-instruct` - Good quality, 4K context
- `Qwen/Qwen2-1.5B-Instruct` - Smaller/faster
Gated models (require HuggingFace approval + token):
- `meta-llama/Llama-3.2-3B-Instruct` - Requires accepting license at HuggingFace
- Setup: `modal secret create huggingface HF_TOKEN=hf_...`
Deploy after configuring:
```bash
okb llm deploy
```
**Pre-ingest filtering** - skip low-value content during sync:
```yaml
plugins:
sources:
dropbox-paper:
llm_filter:
enabled: true
prompt: "Skip meeting notes and drafts"
action_on_skip: discard # or "archive"
```
### Document Enrichment
Extract TODOs and entities (people, projects, technologies) from documents using LLM:
```bash
okb enrich run # Enrich un-enriched documents
okb enrich run --dry-run # Preview what would be enriched
okb enrich run --source-type markdown # Only markdown files
okb enrich run --query "meeting" # Filter by semantic search
```
Entities are created as pending suggestions for review:
```bash
okb enrich pending # List pending entities
okb enrich approve <id> # Approve → creates entity document
okb enrich reject <id> # Reject → hidden from future suggestions
```
Configure enrichment behavior:
```yaml
enrichment:
enabled: true
extract_todos: true
extract_entities: true
auto_create_todos: true # TODOs created immediately
auto_create_entities: false # Entities go to pending review
min_confidence_todo: 0.7
min_confidence_entity: 0.8
```
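The thresholds and auto-create flags above imply a simple routing rule; a hedged sketch (function name and logic are illustrative, not okb's code):

```python
def route_suggestion(kind, confidence,
                     min_confidence_todo=0.7, min_confidence_entity=0.8,
                     auto_create_todos=True, auto_create_entities=False):
    """Route an extracted suggestion per the config above: below the
    threshold -> dropped; above it -> auto-created or queued for review.
    Illustrative sketch only."""
    threshold = min_confidence_todo if kind == "todo" else min_confidence_entity
    if confidence < threshold:
        return "dropped"
    auto = auto_create_todos if kind == "todo" else auto_create_entities
    return "created" if auto else "pending"


print(route_suggestion("todo", 0.9),    # created
      route_suggestion("entity", 0.9),  # pending
      route_suggestion("entity", 0.5))  # dropped
```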
CLI commands:
```bash
okb llm status # Show config and connectivity
okb llm deploy # Deploy Modal LLM (for provider: modal)
okb llm clear-cache # Clear response cache
```
## Claude Code MCP Config
### stdio mode (default)
Add to your Claude Code MCP configuration:
```json
{
"mcpServers": {
"knowledge-base": {
"command": "okb",
"args": ["serve"]
}
}
}
```
### HTTP mode (for remote/shared servers)
First, start the HTTP server and create a token:
```bash
# Create a token
okb token create --db default -d "Claude Code"
# Output: okb_default_rw_a1b2c3d4e5f6g7h8
# Start HTTP server
okb serve --http --host 0.0.0.0 --port 8080
```
The server uses Streamable HTTP transport (RFC 9728 compliant):
- `POST /mcp` - Send JSON-RPC messages, receive SSE response
- `GET /mcp` - Establish SSE connection for server notifications
- `DELETE /mcp` - Terminate session
- `/sse` is an alias for `/mcp` for backward compatibility
Configure your MCP client to connect:
```json
{
"mcpServers": {
"knowledge-base": {
"type": "sse",
"url": "http://localhost:8080/mcp",
"headers": {
"Authorization": "Bearer okb_default_rw_a1b2c3d4e5f6g7h8"
}
}
}
}
```
## MCP Tools available to LLM
| Tool | Purpose |
|------|---------|
| `search_knowledge` | Semantic search with natural language queries |
| `keyword_search` | Exact keyword/symbol matching |
| `hybrid_search` | Combined semantic + keyword (RRF fusion) |
| `get_document` | Retrieve full document by path |
| `list_sources` | Show indexed document stats |
| `list_projects` | List known projects |
| `recent_documents` | Show recently indexed files |
| `save_knowledge` | Save knowledge from Claude (`source_type`: `claude-note` or `synthesis`) |
| `delete_knowledge` | Delete a Claude-saved knowledge entry |
| `get_actionable_items` | Query tasks/events with structured filters |
| `get_database_info` | Get database description, topics, and stats |
| `set_database_description` | Update database description/topics (LLM can self-document) |
| `add_todo` | Create a TODO item in the knowledge base |
| `trigger_sync` | Sync API sources (Todoist, GitHub, Dropbox Paper). Accepts `repos` for GitHub. |
| `trigger_rescan` | Check indexed files for changes and re-ingest |
| `list_sync_sources` | List available API sync sources with status |
| `enrich_document` | Run LLM enrichment to extract TODOs/entities |
| `list_pending_entities` | List entities awaiting review |
| `approve_entity` | Approve a pending entity |
| `reject_entity` | Reject a pending entity |
| `analyze_knowledge_base` | Analyze content and generate description/topics |
| `get_synthesis_samples` | Get document samples and stats for LLM-driven synthesis |
| `find_entity_duplicates` | Find potential duplicate entities |
| `merge_entities` | Merge duplicate entities |
| `list_pending_merges` | List pending merge proposals |
| `approve_merge` | Approve a merge proposal |
| `reject_merge` | Reject a merge proposal |
| `get_topic_clusters` | Get topic clusters from consolidation |
| `get_entity_relationships` | Get relationships between entities |
| `run_consolidation` | Run full entity consolidation pipeline |
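`hybrid_search` above combines result lists with Reciprocal Rank Fusion; the standard RRF scoring can be sketched as follows (illustrative only, not okb's implementation — the sample paths are made up):

```python
def rrf_fuse(rankings, k=60):
    """Reciprocal Rank Fusion: score(d) = sum_i 1 / (k + rank_i(d)),
    summed over each ranked result list. Illustrative sketch only."""
    scores = {}
    for ranked in rankings:
        for rank, doc in enumerate(ranked, start=1):
            scores[doc] = scores.get(doc, 0.0) + 1.0 / (k + rank)
    return sorted(scores, key=scores.get, reverse=True)


semantic = ["notes/django.md", "notes/postgres.md", "notes/react.md"]
keyword = ["notes/django.md", "notes/postgres.md"]
print(rrf_fuse([semantic, keyword]))
# ['notes/django.md', 'notes/postgres.md', 'notes/react.md']
```

Documents that rank well in both lists accumulate the highest fused score, which is why hybrid search surfaces results that neither search alone would rank first.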
## Contextual Chunking
Documents are chunked with context for better retrieval:
```
Document: Django Performance Notes
Project: student-app ← inferred from path or frontmatter
Section: Query Optimization ← extracted from markdown headers
Topics: django, performance ← from frontmatter tags
Content: Use `select_related()` to avoid N+1 queries...
```
### Frontmatter Example
```markdown
---
tags: [django, postgresql, performance]
project: student-app
category: backend
---
# Your Document Title
Content here...
```
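A minimal sketch of how such a `---`-delimited frontmatter block is typically separated from the body (stdlib-only and illustrative — not okb's parser):

```python
def split_frontmatter(text):
    """Split a '---'-delimited frontmatter block from the body.
    Returns (frontmatter_str or None, body). Illustrative sketch only."""
    if text.startswith("---\n"):
        end = text.find("\n---\n", 4)
        if end != -1:
            return text[4:end], text[end + 5:]
    return None, text


doc = "---\ntags: [django]\nproject: student-app\n---\n# Title\nBody\n"
fm, body = split_frontmatter(doc)
print(fm.splitlines()[0], "|", body.splitlines()[0])
# tags: [django] | # Title
```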
## Plugin System
OKB supports plugins for custom file parsers and API data sources (GitHub, Todoist, etc).
### Creating a Plugin
```python
# File parser plugin
from okb.plugins import FileParser, Document
class EpubParser:
extensions = ['.epub']
source_type = 'epub'
def can_parse(self, path): return path.suffix.lower() == '.epub'
def parse(self, path, extra_metadata=None) -> Document: ...
# API source plugin
from okb.plugins import APISource, SyncState, Document
class GitHubSource:
name = 'github'
source_type = 'github-issue'
def configure(self, config): ...
def fetch(self, state: SyncState | None) -> tuple[list[Document], SyncState]: ...
```
### Registering Plugins
In your plugin's `pyproject.toml`:
```toml
[project.entry-points."okb.parsers"]
epub = "okb_epub:EpubParser"
[project.entry-points."okb.sources"]
github = "okb_github:GitHubSource"
```
### Configuring API Sources
```yaml
# ~/.config/okb/config.yaml
plugins:
sources:
github:
enabled: true
token: ${GITHUB_TOKEN} # Resolved from environment
repos: [owner/repo1, owner/repo2]
todoist:
enabled: true
token: ${TODOIST_TOKEN}
include_completed: false # Sync completed tasks
completed_days: 30 # Days of completed history
include_comments: false # Include task comments (1 API call per task)
project_filter: [] # List of project IDs (use sync list-projects to find)
dropbox-paper:
enabled: true
# Option 1: Refresh token (recommended, auto-refreshes)
app_key: ${DROPBOX_APP_KEY}
app_secret: ${DROPBOX_APP_SECRET}
refresh_token: ${DROPBOX_REFRESH_TOKEN}
# Option 2: Access token (short-lived, expires after ~4 hours)
# token: ${DROPBOX_TOKEN}
folders: [/] # Optional: filter to specific folders
```
**Dropbox Paper OAuth Setup:**
```bash
okb sync auth dropbox-paper
```
This interactive command will guide you through getting a refresh token from Dropbox.
## License
MIT
| text/markdown | null | null | null | null | null | null | [
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: 3.13"
] | [] | https://github.com/username/okb | null | >=3.11 | [] | [] | [] | [
"psycopg[binary]>=3.1.0",
"pgvector>=0.2.0",
"sentence-transformers>=2.2.0",
"mcp>=1.0.0",
"pyyaml>=6.0",
"watchdog>=3.0.0",
"einops>=0.7.0",
"click>=8.0.0",
"modal>=1.0.0",
"yoyo-migrations>=8.0.0",
"dropbox>=12.0.0",
"PyGithub>=2.0.0",
"pymupdf>=1.23.0; extra == \"pdf\"",
"python-docx>=1... | [] | [] | [] | [
"Homepage, https://github.com/username/okb",
"Repository, https://github.com/username/okb",
"Issues, https://github.com/username/okb/issues"
] | poetry/2.1.3 CPython/3.11.12 Linux/5.15.153.1-microsoft-standard-WSL2 | 2026-02-19T14:21:57.612168 | okb-2.1.0.tar.gz | 109,161 | 0d/22/2e21236d88333257678e5aadf779f96f9dac92c0fd2ac0e72d028b553a10/okb-2.1.0.tar.gz | source | sdist | null | false | a3b802da00cfa456404396e972583ad6 | aa6e8b9f8684f69afe2ed1438240bcb53c7fe087fa821d6d42f0f4e2a31161e6 | 0d222e21236d88333257678e5aadf779f96f9dac92c0fd2ac0e72d028b553a10 | null | [] | 224 |
2.4 | replication | 0.9.11 | A prototype object replication lib | # A basic python replication framework prototype
> A simple client/server python objects replication framework
## Dependencies
| Dependencies | Version | Needed |
|--------------|:-------:|-------:|
| ZeroMQ | latest | yes |
## Contributing
1. Fork it (<https://gitlab.com/yourname/yourproject/fork>)
2. Create your feature branch (`git checkout -b feature/fooBar`)
3. Commit your changes (`git commit -am 'Add some fooBar'`)
4. Push to the branch (`git push origin feature/fooBar`)
5. Create a new Pull Request
| text/markdown | Swann Martinez | swann.martinez@pm.me | null | null | GPL3 | null | [] | [] | https://gitlab.com/slumber/replication | null | null | [] | [] | [] | [
"pyzmq==25.0.2",
"deepdiff==8.1.1"
] | [] | [] | [] | [] | twine/6.2.0 CPython/3.11.14 | 2026-02-19T14:21:16.250352 | replication-0.9.11.tar.gz | 33,159 | 2d/53/5315e26c0762eda7df01396e824fdba53c88575b5ed678e833702b43072d/replication-0.9.11.tar.gz | source | sdist | null | false | 762b2334d73da0d402fce9d052789a86 | 032266f8e86c49de1bfc4da555b0351d871661691cea404d458482224c3de183 | 2d535315e26c0762eda7df01396e824fdba53c88575b5ed678e833702b43072d | null | [
"LICENSE"
] | 244 |
2.4 | shellhost | 2.1.0 | Turn Python functions into interactive shell commands in an isolated environment. | Provides an isolated interactive shell environment into which you can import Python functions as shell commands.
| null | M. Bragg | mbragg@spear.ai | null | null | LICENSE.txt | null | [] | [] | https://github.com/mbragg-spear/pyshell | null | null | [] | [] | [] | [] | [] | [] | [] | [] | twine/6.1.0 CPython/3.13.7 | 2026-02-19T14:21:06.559899 | shellhost-2.1.0-cp39-cp39-win_amd64.whl | 23,009 | bd/32/01b822201447a113af37308c96c9759e564cc62f1047a3f46766310c9805/shellhost-2.1.0-cp39-cp39-win_amd64.whl | cp39 | bdist_wheel | null | false | c7d0cdc880b74d701a70ae2be0edd067 | 966961baed693b1517e7d5732ed4c76269f8f7ac86566fe96fa8d78268d064c1 | bd3201b822201447a113af37308c96c9759e564cc62f1047a3f46766310c9805 | null | [
"LICENSE.txt"
] | 2,247 |
2.4 | marqetive-lib | 0.2.13 | Modern Python utilities for web APIs | # MarqetiveLib
Modern Python library for social media platform integrations - Simple, type-safe, and async-ready.
## Supported Platforms
- **Twitter/X** - Post tweets, upload media, manage threads
- **LinkedIn** - Share updates, upload images and videos
- **Instagram** - Create posts with media via Graph API
- **TikTok** - Upload and publish videos
## Features
- **Unified API**: Single interface for all social media platforms
- **Async-First**: Built for modern async Python applications
- **Type-Safe**: Full type hints and Pyright compliance
- **Auto Token Refresh**: Factory handles OAuth token lifecycle automatically
- **Media Upload**: Progress tracking for large file uploads
- **Retry Logic**: Exponential backoff with jitter for transient failures
- **Well Tested**: Comprehensive test coverage
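The retry behavior listed above (exponential backoff with jitter) can be sketched in plain Python — an illustrative full-jitter sketch, not the library's internal implementation:

```python
import random


def backoff_delays(retries, base=0.5, cap=30.0):
    """Full-jitter exponential backoff: each delay is drawn uniformly
    from [0, min(cap, base * 2**attempt)]. Illustrative sketch only."""
    for attempt in range(retries):
        yield random.uniform(0, min(cap, base * 2 ** attempt))


for attempt, delay in enumerate(backoff_delays(5)):
    print(f"retry {attempt}: sleep {delay:.2f}s")
```

The jitter spreads out concurrent retries so clients don't all hammer the API at the same instant after a transient failure.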
## Installation
```bash
pip install marqetive
```
Or with Poetry:
```bash
poetry add marqetive
```
## Quick Start
```python
import asyncio
from marqetive import get_client, AuthCredentials, PostCreateRequest
async def main():
# Create credentials for your platform
credentials = AuthCredentials(
platform="twitter",
access_token="your_access_token",
refresh_token="your_refresh_token"
)
# Get authenticated client (auto-refreshes token if expired)
client = await get_client(credentials)
# Use client as async context manager
async with client:
request = PostCreateRequest(content="Hello from MarqetiveLib!")
post = await client.create_post(request)
print(f"Posted! ID: {post.platform_id}")
asyncio.run(main())
```
## Platform Examples
### Twitter
```python
from marqetive import get_client, AuthCredentials, PostCreateRequest
credentials = AuthCredentials(
platform="twitter",
access_token="your_access_token",
refresh_token="your_refresh_token" # Optional, for token refresh
)
client = await get_client(credentials)
async with client:
# Text post
post = await client.create_post(PostCreateRequest(content="Hello Twitter!"))
# Post with media
post = await client.create_post(PostCreateRequest(
content="Check out this image!",
media_paths=["/path/to/image.jpg"]
))
```
### LinkedIn
```python
from marqetive import get_client, AuthCredentials, PostCreateRequest
credentials = AuthCredentials(
platform="linkedin",
access_token="your_access_token",
user_id="urn:li:person:your_person_id" # Required for LinkedIn
)
client = await get_client(credentials)
async with client:
post = await client.create_post(PostCreateRequest(
content="Excited to share this update!"
))
```
### Instagram
```python
from marqetive import get_client, AuthCredentials, PostCreateRequest
credentials = AuthCredentials(
platform="instagram",
access_token="your_access_token",
user_id="your_instagram_business_account_id" # Required
)
client = await get_client(credentials)
async with client:
# Instagram requires media for posts
post = await client.create_post(PostCreateRequest(
content="Beautiful day! #photography",
media_paths=["/path/to/photo.jpg"]
))
```
### TikTok
```python
from marqetive import get_client, AuthCredentials, PostCreateRequest
credentials = AuthCredentials(
platform="tiktok",
access_token="your_access_token",
additional_data={"open_id": "your_open_id"} # Required for TikTok
)
client = await get_client(credentials)
async with client:
post = await client.create_post(PostCreateRequest(
content="Check out this video!",
media_paths=["/path/to/video.mp4"]
))
```
## Using Custom OAuth Credentials
```python
from marqetive import PlatformFactory, AuthCredentials, PostCreateRequest
# Create factory with your OAuth app credentials
factory = PlatformFactory(
twitter_client_id="your_client_id",
twitter_client_secret="your_client_secret",
linkedin_client_id="your_linkedin_client_id",
linkedin_client_secret="your_linkedin_client_secret"
)
credentials = AuthCredentials(
platform="twitter",
access_token="user_access_token",
refresh_token="user_refresh_token"
)
# Get client with automatic token refresh
client = await factory.get_client(credentials)
async with client:
post = await client.create_post(PostCreateRequest(content="Hello!"))
```
## Progress Tracking for Media Uploads
```python
def progress_callback(operation: str, progress: int, total: int, message: str | None):
percent = (progress / total) * 100 if total > 0 else 0
print(f"{operation}: {percent:.1f}% - {message or ''}")
client = await get_client(credentials, progress_callback=progress_callback)
async with client:
post = await client.create_post(PostCreateRequest(
content="Uploading video...",
media_paths=["/path/to/large_video.mp4"]
))
```
## Error Handling
```python
from marqetive import get_client, AuthCredentials, PostCreateRequest
from marqetive.core.exceptions import (
PlatformError,
PlatformAuthError,
RateLimitError,
MediaUploadError
)
try:
client = await get_client(credentials)
async with client:
post = await client.create_post(request)
except PlatformAuthError as e:
print(f"Authentication failed: {e}")
# Token may need refresh or reconnection
except RateLimitError as e:
print(f"Rate limited. Retry after: {e.retry_after} seconds")
except MediaUploadError as e:
print(f"Media upload failed: {e}")
except PlatformError as e:
print(f"Platform error: {e}")
```
## Development
### Setup Development Environment
```bash
# Clone the repository
git clone https://github.com/your-org/marqetive-lib.git
cd marqetive-lib
# Install Poetry if you haven't already
curl -sSL https://install.python-poetry.org | python3 -
# Install dependencies
poetry install --with dev,docs
# Activate virtual environment
poetry shell
```
### Running Tests
```bash
# Run tests
poetry run pytest
# Run tests with coverage
poetry run pytest --cov=src/marqetive --cov-report=term-missing
# Run platform-specific tests
poetry run pytest tests/platforms/test_twitter.py
```
### Code Quality
```bash
# Lint code with Ruff
poetry run ruff check .
# Format code with Ruff
poetry run ruff format .
# Type check with Pyright
poetry run pyright src/
```
## License
This project is licensed under the MIT License - see the [LICENSE](LICENSE) file for details.
| text/markdown | null | null | null | null | null | api, utilities, web, http, marqetive | [
"Development Status :: 3 - Alpha",
"Intended Audience :: Developers",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: 3.13",
"Topic :: Internet :: WWW/HTTP",
"Topic :: Software Development :: Libraries :: Python Modules"
] | [] | https://github.com/yourusername/marqetive-lib | null | >=3.12 | [] | [] | [] | [
"httpx>=0.28.1",
"pydantic>=2.0.0",
"tweepy<5.0.0,>=4.16.0",
"aiofiles>=24.0.0",
"pytest>=9.0.0; extra == \"dev\"",
"pytest-cov>=7.0.0; extra == \"dev\"",
"pytest-asyncio>=1.3.0; extra == \"dev\"",
"ruff>=0.14.4; extra == \"dev\"",
"pyright>=1.1.0; extra == \"dev\"",
"mkdocs>=1.5.0; extra == \"doc... | [] | [] | [] | [
"Homepage, https://github.com/yourusername/marqetive-lib",
"Repository, https://github.com/yourusername/marqetive-lib",
"Documentation, https://marqetive-lib.readthedocs.io",
"Issues, https://github.com/yourusername/marqetive-lib/issues"
] | poetry/2.2.1 CPython/3.12.9 Darwin/24.6.0 | 2026-02-19T14:20:54.063904 | marqetive_lib-0.2.13.tar.gz | 102,646 | 86/4d/a0d5c95e9d4e9efc2b06b616c5bf263cf4c52fbe5496fb979532320ed23e/marqetive_lib-0.2.13.tar.gz | source | sdist | null | false | 94d7541585f7ba963dcd8d2055a63e98 | be57ba9df0d2f83db909c1f6258de7f1271dbcd616dd98604cc39cc8485f38e1 | 864da0d5c95e9d4e9efc2b06b616c5bf263cf4c52fbe5496fb979532320ed23e | null | [] | 243 |
2.1 | pyavif | 0.0.2 | AVIF bindings for Python with NumPy support | # pyavif
> **For LLM agents:** This README is the primary context source for the project.
> It covers installation, full API surface, and usage patterns. Build details are
> in [`docs/`](./docs/).
Opinionated, easy-to-use, and performance-oriented AVIF encoder/decoder for Python. Built on
[libavif](https://github.com/AOMediaCodec/libavif) with
[nanobind](https://github.com/wjakob/nanobind) for minimal overhead.
**Highlights:**
- Decode (DAV1D, AOM) and encode (AOM, RAV1E) with full codec choice
- 8/10/12-bit, RGB/RGBA, animated AVIF
- Batch encode/decode with parallel workers
- ICC, EXIF, XMP metadata support
- GIL released during all C++ operations
- Zero-copy `to_torch()` helper
- Prebuilt wheels for Linux x86_64, macOS arm64, Windows x86_64 (Python 3.12+)
## Installation
```bash
pip install pyavif
```
## Quick Start
### Decode
```python
from pyavif import Decoder
decoder = Decoder()
decoder.init("image.avif")
image = decoder.get_image(0) # numpy ndarray (H, W, C), uint8 or uint16
```
### Encode
```python
import numpy as np
from pyavif import Encoder
encoder = Encoder("out.avif", width=256, height=256, channels=3, depth=8)
encoder.add_frame(np.zeros((256, 256, 3), dtype=np.uint8))
encoder.finish()
```
## API Reference
### `Decoder`
```
init(filepath, decoder_threads=1, codec=DecoderCodec.DAV1D)
get_image(index, force_rgba=False) -> ndarray # random access by frame index
next_image(force_rgba=False) -> ndarray # sequential access
get_image_count() -> int
get_width() / get_height() / get_depth() -> int
has_alpha() -> bool
get_pixel_format() -> PixelFormat
```
### `BatchDecoder`
```
BatchDecoder(file_names, max_workers=0, decoder_threads=1,
force_rgba=False, codec=DecoderCodec.DAV1D)
next_batch() -> (int, dict[str, ndarray]) # frame_index, {path: image}
get_batch_at(frame_idx) -> (int, dict[str, ndarray])
files() -> list[str]
get_image_count() -> int
```
### `Encoder`
```
Encoder(output_path, width, height, channels, depth, options=EncoderOptions())
add_frame(ndarray, duration=1, quality_override=None, quality_alpha_override=None)
finish()
set_icc(data: bytes) / set_exif(data: bytes) / set_xmp(data: bytes)
add_advanced_option(key: str, value: str)
```
### `BatchEncoder`
```
BatchEncoder(output_paths, options=EncoderOptions())
add_image_batch(images, duration=1, depth=None) # depth is required for uint16 (10 or 12)
finish_all()
files() -> list[str]
```
### `EncoderOptions`
| Property | Type | Default |
|---|---|---|
| `quality` | int | 80 |
| `quality_alpha` | int | 100 (lossless) |
| `speed` | int | AVIF_SPEED_DEFAULT |
| `max_threads` | int | 1 |
| `codec` | `EncoderCodec` | AOM |
| `pixel_format` | `PixelFormat` | YUV444 |
| `range` | `Range` | FULL |
| `timescale` | int | 30 |
| `keyframe_interval` | int | 0 |
| `repetition_count` | int | infinite |
| `auto_tiling` | bool | True |
| `tile_rows_log2` | int | 0 |
| `tile_cols_log2` | int | 0 |
| `alpha_premultiplied` | bool | False |
### Enums
- **`DecoderCodec`**: `DAV1D`, `AOM`
- **`EncoderCodec`**: `RAV1E`, `AOM`
- **`PixelFormat`**: `YUV444`, `YUV422`, `YUV420`, `YUV400`
- **`Range`**: `FULL`, `LIMITED`
### `to_torch(array, *, layout="channels_last", pin_memory=False)`
Zero-copy NumPy-to-PyTorch conversion. `layout="chw"` returns a `CxHxW` view.
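The `channels_last` → `chw` layout change can be pictured in plain NumPy — a sketch of the view semantics only; `to_torch` itself hands the same buffer to PyTorch:

```python
import numpy as np

img = np.zeros((4, 6, 3), dtype=np.uint8)   # H, W, C ("channels_last")
chw = np.transpose(img, (2, 0, 1))          # C, H, W -- a view, no copy
print(chw.shape, chw.base is img)           # (3, 4, 6) True
```

Because only strides change, no pixel data is copied in the layout conversion.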
## License
GPLv3. See [`LICENSE`](./LICENSE).
| text/markdown | null | Andrey Volodin <andrey@gracia.ai> | null | null | GNU General Public License v3 (GPLv3) | null | [
"License :: OSI Approved :: GNU General Public License v3 (GPLv3)"
] | [] | null | null | >=3.12 | [] | [] | [] | [
"numpy>=1.20.0",
"pytest>=6.0; extra == \"test\"",
"pytest-cov>=2.10; extra == \"test\"",
"pytest>=6.0; extra == \"dev\"",
"pytest-cov>=2.10; extra == \"dev\"",
"ipython; extra == \"dev\"",
"jupyter; extra == \"dev\"",
"matplotlib; extra == \"dev\""
] | [] | [] | [] | [
"Homepage, https://github.com/gracia-labs/pyavif"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-19T14:20:45.109155 | pyavif-0.0.2-cp312-abi3-win_amd64.whl | 6,550,815 | dd/62/2a43c618db0b2b814b0fb966baae416671a505fdccc37fb623aba3a1405a/pyavif-0.0.2-cp312-abi3-win_amd64.whl | cp312 | bdist_wheel | null | false | 854054394dd974c6af17f81025ae93c4 | a84f315096553f697bc33c3dd4ebb71b5f159445c9253006d0b7e913c8cc6587 | dd622a43c618db0b2b814b0fb966baae416671a505fdccc37fb623aba3a1405a | null | [] | 433 |
2.4 | norman-mcp-server | 0.1.7 | A Model Context Protocol (MCP) server for Norman Finance API | <div align="center">
<a href="https://norman.finance/?utm_source=mcp_server">
<img width="140px" src="https://github.com/user-attachments/assets/d2cb1df3-69f1-460e-b675-beb677577b06" alt="Norman" />
</a>
<h1>Norman MCP Server</h1>
<p>Your finances, inside your AI assistant.<br/>
Norman connects your accounting, invoicing, and VAT filing directly to Claude, Cursor, and any MCP-compatible AI.</p>
<br/>
<p>
<img src="https://img.shields.io/badge/Protocol-MCP-black?style=flat-square" alt="MCP" />
<img src="https://img.shields.io/badge/Transport-Streamable_HTTP-black?style=flat-square" alt="Streamable HTTP" />
<img src="https://img.shields.io/badge/Auth-OAuth_2.1-black?style=flat-square" alt="OAuth 2.1" />
<img src="https://img.shields.io/badge/License-MIT-black?style=flat-square" alt="MIT" />
</p>
<code>https://mcp.norman.finance/mcp</code>
<br/><br/>
<strong>Claude</strong> · <strong>ChatGPT</strong> · <strong>Cursor</strong> · <strong>n8n</strong> · <strong>Any MCP Client</strong>
</div>
<br/>
---
<br/>
### What you can do
**Invoicing** — Create, send, and track invoices including recurring and ZUGFeRD e-invoices
**Bookkeeping** — Categorize transactions, match receipts, and verify entries
**Client Management** — Maintain your client database and contact details
**Tax Filing** — Generate Finanzamt previews, file VAT returns, and track deadlines
**Company Overview** — Check your balance, revenue, and financial health at a glance
**Documents** — Upload and attach receipts, invoices, and supporting files
<br/>
<details open>
<summary>
<h3>👀 See it in action</h3>
</summary>
<br/>
<table>
<tr>
<td align="center">
<p><strong>Filing a VAT return</strong></p>
<img src="https://github.com/user-attachments/assets/00bdf6df-1e37-4ecd-9f12-2747d8f53484" alt="Filing VAT tax report" width="400">
</td>
<td align="center">
<p><strong>Transaction insights</strong></p>
<img src="https://github.com/user-attachments/assets/534c7aac-4fed-4b28-8a5e-3a3411e13bca" alt="Transaction insights" width="400">
</td>
</tr>
<tr>
<td align="center">
<p><strong>Syncing Stripe payments</strong></p>
<img src="https://github.com/user-attachments/assets/2f13bc4e-6acb-4b39-bddc-a4a1ca6787f0" alt="Syncing Stripe payments" width="400">
</td>
<td align="center">
<p><strong>Receipts from Gmail</strong></p>
<img src="https://github.com/user-attachments/assets/2380724b-7a79-45a4-93bd-ddc13a175525" alt="Creating transactions from Gmail receipts" width="200">
</td>
</tr>
<tr>
<td align="center">
<p><strong>Chasing overdue invoices</strong></p>
<img src="https://github.com/user-attachments/assets/d59ed22a-5e75-46f6-ad82-db2f637cf7a2" alt="Managing overdue invoices" width="300">
</td>
<td align="center">
<p><strong>Sending payment reminders</strong></p>
<img src="https://github.com/user-attachments/assets/26cfb8e9-4725-48a9-b413-077dfb5902e7" alt="Sending payment reminders" width="350">
</td>
</tr>
</table>
</details>
<br/>
---
<br/>
## 🚀 Get Started
Before connecting, [create a free Norman account](https://app.norman.finance/sign-up?utm_source=mcp_server) if you don't have one yet. Log in with your Norman credentials via OAuth — your password never touches the AI.
<details>
<summary><strong>Claude Connectors</strong></summary>
<br/>
1. Go to [claude.ai/settings/connectors](https://claude.ai/settings/connectors)
2. Click **Add custom connector**
3. Paste:
```
https://mcp.norman.finance/mcp
```
</details>
<details>
<summary><strong>Claude Code</strong></summary>
<br/>
Norman is available as a [Claude Code plugin](https://code.claude.com/docs/en/plugins) with built-in skills.
```bash
/plugin marketplace add norman-finance/norman-mcp-server
/plugin install norman-finance@norman-finance
```
Or install directly from GitHub:
```bash
claude /plugin install github:norman-finance/norman-mcp-server
```
</details>
<details>
<summary><strong>ChatGPT Apps</strong></summary>
<br/>
1. Open **Settings → Apps → Advanced**
2. Click **Create App**
3. Paste:
```
https://mcp.norman.finance/mcp
```
</details>
<details>
<summary><strong>Cursor</strong></summary>
<br/>
[Install the Norman MCP server in Cursor](https://cursor.com/en-US/install-mcp?name=norman-finance&config=eyJ1cmwiOiJodHRwczovL21jcC5ub3JtYW4uZmluYW5jZS9tY3AifQ%3D%3D)
</details>
<details>
<summary><strong>n8n</strong></summary>
<br/>
1. Create an **MCP OAuth2 API** credential
2. Enable **Dynamic Client Registration**
3. Set Server URL: `https://mcp.norman.finance/`
4. Click **Connect my account** and log in with Norman
5. Add an **MCP Client Tool** node to your AI Agent workflow
6. Set the URL to `https://mcp.norman.finance/mcp` and select the credential
</details>
<details>
<summary><strong>Any MCP Client</strong></summary>
<br/>
Add a remote HTTP MCP server with URL:
```
https://mcp.norman.finance/mcp
```
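For clients that read a JSON config file, a minimal entry might look like the following (key names such as `mcpServers` follow the Claude Desktop convention and may differ per client):

```json
{
  "mcpServers": {
    "norman-finance": {
      "url": "https://mcp.norman.finance/mcp"
    }
  }
}
```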
</details>
<br/>
---
<br/>
## Skills
Ready-to-use skills compatible with **Claude Code**, **OpenClaw**, and the [Agent Skills](https://agentskills.io) standard.
| Skill | What it does |
|:--|:--|
| `financial-overview` | Full dashboard — balance, transactions, invoices, and tax status |
| `create-invoice` | Step-by-step invoice creation and sending |
| `manage-clients` | List, create, and update client records |
| `tax-report` | Review, preview, and file tax reports with the Finanzamt |
| `categorize-transactions` | Categorize and verify bank transactions |
| `find-receipts` | Find missing receipts from Gmail or email and attach them |
| `overdue-reminders` | Identify overdue invoices and send payment reminders |
| `expense-report` | Expense breakdown by category, top vendors, and trends |
| `tax-deduction-finder` | Scan transactions for missed deductions and suggest fixes |
| `monthly-reconciliation` | Full monthly close — transactions, invoices, receipts, and taxes |
<br/>
> **Claude Code** — `/plugin marketplace add norman-finance/norman-mcp-server`
>
> **Claude Code (local)** — `claude --plugin-dir ./norman-mcp-server`
>
> **OpenClaw** — `cp -r skills/<skill-name> ~/.openclaw/skills/`
<br/>
---
<br/>
<p align="center">
Have a feature idea? <a href="../../issues"><strong>Share your suggestion →</strong></a>
</p>
<br/>
<p align="center">
<a href="https://glama.ai/mcp/servers/@norman-finance/norman-mcp-server"><img src="https://glama.ai/mcp/servers/@norman-finance/norman-mcp-server/badge" alt="Norman Finance MCP server" width="200" /></a>
<a href="https://mseep.ai/app/norman-finance-norman-mcp-server"><img src="https://mseep.net/pr/norman-finance-norman-mcp-server-badge.png" alt="MseeP.ai Security Assessment" height="41" /></a>
</p>
<p align="center">
<br/>
<a href="https://norman.finance/?utm_source=mcp_server">
<img width="80px" src="https://github.com/user-attachments/assets/d2cb1df3-69f1-460e-b675-beb677577b06" alt="Norman" />
</a>
<br/><br/>
<sub>Make business effortless</sub>
</p>
<!-- mcp-name: finance.norman/mcp-server -->
| text/markdown | null | Norman Finance <stan@norman.finance> | null | null | MIT | null | [
"Development Status :: 4 - Beta",
"Intended Audience :: Developers",
"License :: OSI Approved :: MIT License",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.9",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11"
] | [] | null | null | >=3.10 | [] | [] | [] | [
"mcp[cli]>=1.8.0",
"requests>=2.25.0",
"python-dotenv>=0.19.0",
"pyyaml>=6.0.1",
"httpx>=0.24.0",
"jinja2>=3.0.0",
"pytest>=6.0.0; extra == \"dev\"",
"black>=22.3.0; extra == \"dev\"",
"isort>=5.10.1; extra == \"dev\"",
"flake8>=4.0.0; extra == \"dev\"",
"mypy>=0.942; extra == \"dev\"",
"fasta... | [] | [] | [] | [
"Homepage, https://github.com/norman-finance/norman-mcp-server",
"Issues, https://github.com/norman-finance/norman-mcp-server/issues"
] | twine/6.1.0 CPython/3.13.5 | 2026-02-19T14:20:40.761779 | norman_mcp_server-0.1.7.tar.gz | 15,606 | c4/1a/70d20a0625d09128379d5d8426e58eed5af68c3ed7bb60a359785bd1e314/norman_mcp_server-0.1.7.tar.gz | source | sdist | null | false | 10045e88fac8d09f8d31957d06c6089b | 8ddfd90ebe834ee605ac9d8cfb29c12d18913082e790d5dce30edf93e6b93f05 | c41a70d20a0625d09128379d5d8426e58eed5af68c3ed7bb60a359785bd1e314 | null | [
"LICENSE"
] | 245 |
2.4 | databricks-zerobus-ingest-sdk | 0.3.0 | Databricks Zerobus Ingest SDK for Python | # Databricks Zerobus Ingest SDK for Python
[](https://pypistats.org/packages/databricks-zerobus-ingest-sdk)
[](https://github.com/databricks/zerobus-sdk-py/blob/main/LICENSE)

[Public Preview](https://docs.databricks.com/release-notes/release-types.html): This SDK is supported for production use cases and is available to all customers. Databricks is actively working on stabilizing the Zerobus Ingest SDK for Python. Minor version updates may include backwards-incompatible changes.
We are keen to hear feedback from you on this SDK. Please [file issues](https://github.com/databricks/zerobus-sdk-py/issues), and we will address them.
The Databricks Zerobus Ingest SDK for Python provides a high-performance, Rust-backed client for ingesting data directly into Databricks Delta tables using the Zerobus streaming protocol. Built on top of the battle-tested [Rust SDK](https://github.com/databricks/zerobus-sdk-rs) using PyO3 bindings, it delivers native performance with a Python-friendly API. See also the [SDK for Java](https://github.com/databricks/zerobus-sdk-java).
## Table of Contents
- [Disclaimer](#disclaimer)
- [Features](#features)
- [Requirements](#requirements)
- [Quick Start User Guide](#quick-start-user-guide)
- [Prerequisites](#prerequisites)
- [Installation](#installation)
- [Choose Your Serialization Format](#choose-your-serialization-format)
- [Option 1: Using JSON (Simplest)](#option-1-using-json-simplest)
- [Option 2: Using Protocol Buffers](#option-2-using-protocol-buffers)
- [Usage Examples](#usage-examples)
- [JSON Examples](#json-examples)
- [Protocol Buffer Examples](#protocol-buffer-examples)
- [Authentication](#authentication)
- [Configuration](#configuration)
- [Error Handling](#error-handling)
- [API Reference](#api-reference)
- [Best Practices](#best-practices)
- [Handling Stream Failures](#handling-stream-failures)
- [Performance Tips](#performance-tips)
- [Debugging](#debugging)
## Features
- **Rust-backed performance**: Native Rust implementation with Python bindings for maximum throughput and minimal latency
- **High-throughput ingestion**: Optimized for high-volume data ingestion with native async/await support
- **Automatic recovery**: Built-in retry and recovery mechanisms from the Rust SDK
- **Flexible configuration**: Customizable stream behavior and timeouts
- **Multiple serialization formats**: Support for JSON and Protocol Buffers
- **OAuth 2.0 authentication**: Secure authentication with client credentials
- **Type safety**: Rust's type system ensures reliability and correctness
- **Sync and Async support**: Both synchronous and asynchronous Python APIs
- **Zero-copy operations**: Efficient data handling with minimal overhead
## Architecture
The Python SDK is a thin wrapper around the [Databricks Zerobus Rust SDK](https://github.com/databricks/zerobus-sdk-rs), built using PyO3 bindings:
```
┌─────────────────────────────────────────┐
│ Python Application Code │
│ (Your code using the Python SDK API) │
└─────────────────────────────────────────┘
│
▼
┌─────────────────────────────────────────┐
│ Python SDK (Thin Wrapper) │
│ • API compatibility layer │
│ • Python types & error handling │
└─────────────────────────────────────────┘
│
▼ (PyO3 bindings)
┌─────────────────────────────────────────┐
│ Rust Core Implementation │
│ • gRPC communication │
│ • OAuth 2.0 authentication │
│ • Stream management & recovery │
│ • Protocol encoding/decoding │
└─────────────────────────────────────────┘
```
This architecture provides:
- **Native performance** through Rust's zero-cost abstractions
- **Memory safety** without garbage collection overhead
- **Single source of truth** for all SDK implementations
- **Python-friendly API** with full type hints and IDE support
## Requirements
### Runtime Requirements
- **Python**: 3.9 or higher
- **Databricks workspace** with Zerobus access enabled
### Dependencies
- `protobuf` >= 4.25.0, < 7.0 (for Protocol Buffer schema handling)
- `requests` >= 2.28.1, < 3 (only for the `generate_proto` utility tool)
**Note**: All core ingestion functionality (gRPC, OAuth authentication, stream management) is handled by the native Rust implementation. The `requests` dependency is only used by the optional `generate_proto.py` tool for fetching table schemas from Unity Catalog.
## Quick Start User Guide
### Prerequisites
Before using the SDK, you'll need the following:
#### 1. Workspace URL and Workspace ID
After logging into your Databricks workspace, look at the browser URL:
```
https://<databricks-instance>.cloud.databricks.com/?o=<workspace-id>
```
- **Workspace URL**: The part before `?o=` → `https://<databricks-instance>.cloud.databricks.com`
- **Workspace ID**: The part after `?o=` → `<workspace-id>`
> **Note:** The examples above show AWS endpoints (`.cloud.databricks.com`). For Azure deployments, the workspace URL will be `https://<databricks-instance>.azuredatabricks.net`.
Example:
- Full URL: `https://dbc-a1b2c3d4-e5f6.cloud.databricks.com/?o=1234567890123456`
- Workspace URL: `https://dbc-a1b2c3d4-e5f6.cloud.databricks.com`
- Workspace ID: `1234567890123456`
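As a convenience, the two values can be split out of a full URL with the standard library (a small sketch, assuming the workspace ID appears as the `o` query parameter as in the example above):

```python
from urllib.parse import parse_qs, urlparse

full_url = "https://dbc-a1b2c3d4-e5f6.cloud.databricks.com/?o=1234567890123456"
parsed = urlparse(full_url)

workspace_url = f"{parsed.scheme}://{parsed.netloc}"  # part before the o= parameter
workspace_id = parse_qs(parsed.query)["o"][0]         # value of the o= parameter
```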
#### 2. Create a Delta Table
Create a table using Databricks SQL:
```sql
CREATE TABLE <catalog_name>.default.air_quality (
device_name STRING,
temp INT,
humidity BIGINT
)
USING DELTA;
```
Replace `<catalog_name>` with your catalog name (e.g., `main`).
#### 3. Create a Service Principal
1. Navigate to **Settings > Identity and Access** in your Databricks workspace
2. Click **Service principals** and create a new service principal
3. Generate a new secret for the service principal and save it securely
4. Grant the following permissions:
- `USE_CATALOG` on the catalog (e.g., `main`)
- `USE_SCHEMA` on the schema (e.g., `default`)
- `MODIFY` and `SELECT` on the table (e.g., `air_quality`)
Grant permissions using SQL:
```sql
-- Grant catalog permission
GRANT USE CATALOG ON CATALOG <catalog_name> TO `<service-principal-application-id>`;
-- Grant schema permission
GRANT USE SCHEMA ON SCHEMA <catalog_name>.default TO `<service-principal-application-id>`;
-- Grant table permissions
GRANT SELECT, MODIFY ON TABLE <catalog_name>.default.air_quality TO `<service-principal-application-id>`;
```
### Installation
#### From PyPI (Recommended)
Install the latest stable version using pip:
```bash
pip install databricks-zerobus-ingest-sdk
```
Pre-built wheels are available for:
- **Linux**: x86_64, aarch64 (manylinux)
- **macOS**: x86_64, arm64 (universal2)
- **Windows**: x86_64
#### From Source
Building from source requires the **Rust toolchain** (install from [rustup.rs](https://rustup.rs/)).
```bash
# Install Rust (if not already installed)
curl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs | sh
# Clone and install
git clone https://github.com/databricks/zerobus-sdk-py.git
cd zerobus-sdk-py
pip install -e .
```
The SDK uses [maturin](https://github.com/PyO3/maturin) to build Python bindings for the Rust implementation. Installation via `pip install -e .` automatically:
1. Installs maturin if needed
2. Compiles the Rust extension
3. Installs the package in editable mode
**For active development**, see [CONTRIBUTING.md](CONTRIBUTING.md) for detailed build instructions and development workflows.
### Choose Your Serialization Format
The SDK supports two serialization formats:
1. **JSON** - Simple, no schema compilation needed. Good for getting started.
2. **Protocol Buffers** (the default, kept for backwards compatibility) - Strongly typed schemas. More efficient over the wire.
### Option 1: Using JSON
#### Write Your Client Code (JSON)
**Synchronous Example:**
```python
import json
import logging
from zerobus.sdk.sync import ZerobusSdk
from zerobus.sdk.shared import RecordType, StreamConfigurationOptions, TableProperties
# Configure logging (optional but recommended)
logging.basicConfig(
level=logging.INFO,
format="%(asctime)s - %(name)s - %(levelname)s - %(message)s"
)
# Configuration
# For AWS:
server_endpoint = "https://1234567890123456.zerobus.us-west-2.cloud.databricks.com"
workspace_url = "https://dbc-a1b2c3d4-e5f6.cloud.databricks.com"
# For Azure:
# server_endpoint = "https://1234567890123456.zerobus.us-west-2.azuredatabricks.net"
# workspace_url = "https://dbc-a1b2c3d4-e5f6.azuredatabricks.net"
table_name = "main.default.air_quality"
client_id = "your-service-principal-application-id"
client_secret = "your-service-principal-secret"
# Initialize SDK
sdk = ZerobusSdk(server_endpoint, workspace_url)
# Configure table properties
table_properties = TableProperties(table_name)
# Configure stream with JSON record type
options = StreamConfigurationOptions(record_type=RecordType.JSON)
# Create stream
stream = sdk.create_stream(client_id, client_secret, table_properties, options)
try:
# Ingest records
for i in range(100):
# Option 1: Pass a dict (SDK serializes to JSON)
record_dict = {
"device_name": f"sensor-{i % 10}",
"temp": 20 + (i % 15),
"humidity": 50 + (i % 40)
}
ack = stream.ingest_record(record_dict)
# Option 2: Pass a pre-serialized JSON string (client controls serialization)
# json_string = json.dumps(record_dict)
# ack = stream.ingest_record(json_string)
# Optional: Wait for durability confirmation
ack.wait_for_ack()
print(f"Ingested record {i + 1}")
print("Successfully ingested 100 records!")
finally:
stream.close()
```
**Asynchronous Example:**
```python
import asyncio
import json
import logging
from zerobus.sdk.aio import ZerobusSdk
from zerobus.sdk.shared import RecordType, StreamConfigurationOptions, TableProperties
# Configure logging (optional but recommended)
logging.basicConfig(
level=logging.INFO,
format="%(asctime)s - %(name)s - %(levelname)s - %(message)s"
)
async def main():
# Configuration
# For AWS:
server_endpoint = "https://1234567890123456.zerobus.us-west-2.cloud.databricks.com"
workspace_url = "https://dbc-a1b2c3d4-e5f6.cloud.databricks.com"
# For Azure:
# server_endpoint = "https://1234567890123456.zerobus.us-west-2.azuredatabricks.net"
# workspace_url = "https://dbc-a1b2c3d4-e5f6.azuredatabricks.net"
table_name = "main.default.air_quality"
client_id = "your-service-principal-application-id"
client_secret = "your-service-principal-secret"
# Initialize SDK
sdk = ZerobusSdk(server_endpoint, workspace_url)
# Configure table properties
table_properties = TableProperties(table_name)
# Configure stream with JSON record type
options = StreamConfigurationOptions(record_type=RecordType.JSON)
# Create stream
stream = await sdk.create_stream(client_id, client_secret, table_properties, options)
try:
# Ingest records
for i in range(100):
# Option 1: Pass a dict (SDK serializes to JSON)
record_dict = {
"device_name": f"sensor-{i % 10}",
"temp": 20 + (i % 15),
"humidity": 50 + (i % 40)
}
future = await stream.ingest_record(record_dict)
# Option 2: Pass a pre-serialized JSON string (client controls serialization)
# json_string = json.dumps(record_dict)
# future = await stream.ingest_record(json_string)
# Optional: Wait for durability confirmation
await future
print(f"Ingested record {i + 1}")
print("Successfully ingested 100 records!")
finally:
await stream.close()
asyncio.run(main())
```
### Option 2: Using Protocol Buffers
You'll need to define and compile a protobuf schema.
#### Define Your Protocol Buffer Schema
Create a file named `record.proto`:
```protobuf
syntax = "proto2";
message AirQuality {
optional string device_name = 1;
optional int32 temp = 2;
optional int64 humidity = 3;
}
```
Compile the protobuf:
```bash
pip install "grpcio-tools>=1.60.0,<2.0"
python -m grpc_tools.protoc --python_out=. --proto_path=. record.proto
```
This generates a `record_pb2.py` file compatible with protobuf 6.x.
#### Generate Protocol Buffer Schema from Unity Catalog (Alternative)
Instead of manually writing your protobuf schema, you can automatically generate it from an existing Unity Catalog table using the included `generate_proto.py` tool.
**Basic Usage:**
```bash
python -m zerobus.tools.generate_proto \
--uc-endpoint "https://dbc-a1b2c3d4-e5f6.cloud.databricks.com" \
--client-id "your-service-principal-application-id" \
--client-secret "your-service-principal-secret" \
--table "main.default.air_quality" \
--output "record.proto" \
--proto-msg "AirQuality"
```
**Parameters:**
- `--uc-endpoint`: Your workspace URL (required)
- `--client-id`: Service principal application ID (required)
- `--client-secret`: Service principal secret (required)
- `--table`: Fully qualified table name in format catalog.schema.table (required)
- `--output`: Output path for the generated proto file (required)
- `--proto-msg`: Name of the protobuf message (optional, defaults to table name)
After generating, compile it as shown above.
**Type Mappings:**
| Delta Type | Proto2 Type |
|-----------|-------------|
| TINYINT, BYTE, INT, SMALLINT, SHORT | int32 |
| BIGINT, LONG | int64 |
| FLOAT | float |
| DOUBLE | double |
| STRING, VARCHAR | string |
| BOOLEAN | bool |
| BINARY | bytes |
| DATE | int32 |
| TIMESTAMP | int64 |
| TIMESTAMP_NTZ | int64 |
| ARRAY\<type\> | repeated type |
| MAP\<key, value\> | map\<key, value\> |
| STRUCT\<fields\> | nested message |
| VARIANT | string (unshredded, JSON string) |
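As a sketch of how the nested mappings compose, a hypothetical table with ARRAY, MAP, and STRUCT columns could map to a schema like the one below (message and field names are illustrative, not output of the tool):

```protobuf
syntax = "proto2";

message SensorReading {
  optional string device_name = 1;   // STRING
  repeated double samples = 2;       // ARRAY<DOUBLE>
  map<string, string> tags = 3;      // MAP<STRING, STRING>

  message Location {                 // STRUCT<lat: DOUBLE, lon: DOUBLE>
    optional double lat = 1;
    optional double lon = 2;
  }
  optional Location location = 4;
}
```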
#### Write Your Client Code (Protocol Buffers)
**Synchronous Example:**
```python
import logging
from zerobus.sdk.sync import ZerobusSdk
from zerobus.sdk.shared import TableProperties
import record_pb2
# Configure logging (optional but recommended)
logging.basicConfig(
level=logging.INFO,
format="%(asctime)s - %(name)s - %(levelname)s - %(message)s"
)
# Configuration
# For AWS:
server_endpoint = "https://1234567890123456.zerobus.us-west-2.cloud.databricks.com"
workspace_url = "https://dbc-a1b2c3d4-e5f6.cloud.databricks.com"
# For Azure:
# server_endpoint = "https://1234567890123456.zerobus.us-west-2.azuredatabricks.net"
# workspace_url = "https://dbc-a1b2c3d4-e5f6.azuredatabricks.net"
table_name = "main.default.air_quality"
client_id = "your-service-principal-application-id"
client_secret = "your-service-principal-secret"
# Initialize SDK
sdk = ZerobusSdk(server_endpoint, workspace_url)
# Configure table properties with protobuf descriptor
table_properties = TableProperties(table_name, record_pb2.AirQuality.DESCRIPTOR)
# Create stream
stream = sdk.create_stream(client_id, client_secret, table_properties)
try:
# Ingest records
for i in range(100):
# Option 1: Pass a Message object (SDK serializes to bytes)
record = record_pb2.AirQuality(
device_name=f"sensor-{i % 10}",
temp=20 + (i % 15),
humidity=50 + (i % 40)
)
ack = stream.ingest_record(record)
# Option 2: Pass pre-serialized bytes (client controls serialization)
# serialized_bytes = record.SerializeToString()
# ack = stream.ingest_record(serialized_bytes)
# Optional: Wait for durability confirmation
ack.wait_for_ack()
print(f"Ingested record {i + 1}")
print("Successfully ingested 100 records!")
finally:
stream.close()
```
**Asynchronous Example:**
```python
import asyncio
import logging
from zerobus.sdk.aio import ZerobusSdk
from zerobus.sdk.shared import TableProperties
import record_pb2
# Configure logging (optional but recommended)
logging.basicConfig(
level=logging.INFO,
format="%(asctime)s - %(name)s - %(levelname)s - %(message)s"
)
async def main():
# Configuration
# For AWS:
server_endpoint = "https://1234567890123456.zerobus.us-west-2.cloud.databricks.com"
workspace_url = "https://dbc-a1b2c3d4-e5f6.cloud.databricks.com"
# For Azure:
# server_endpoint = "https://1234567890123456.zerobus.us-west-2.azuredatabricks.net"
# workspace_url = "https://dbc-a1b2c3d4-e5f6.azuredatabricks.net"
table_name = "main.default.air_quality"
client_id = "your-service-principal-application-id"
client_secret = "your-service-principal-secret"
# Initialize SDK
sdk = ZerobusSdk(server_endpoint, workspace_url)
# Configure table properties with protobuf descriptor
table_properties = TableProperties(table_name, record_pb2.AirQuality.DESCRIPTOR)
# Create stream
stream = await sdk.create_stream(client_id, client_secret, table_properties)
try:
# Ingest records
for i in range(100):
# Option 1: Pass a Message object (SDK serializes to bytes)
record = record_pb2.AirQuality(
device_name=f"sensor-{i % 10}",
temp=20 + (i % 15),
humidity=50 + (i % 40)
)
future = await stream.ingest_record(record)
# Option 2: Pass pre-serialized bytes (client controls serialization)
# serialized_bytes = record.SerializeToString()
# future = await stream.ingest_record(serialized_bytes)
# Optional: Wait for durability confirmation
await future
print(f"Ingested record {i + 1}")
print("Successfully ingested 100 records!")
finally:
await stream.close()
asyncio.run(main())
```
## Usage Examples
See the `examples/` directory for complete, runnable examples in both JSON and protobuf formats (sync and async variants). See [examples/README.md](examples/README.md) for detailed instructions.
### JSON Examples
#### Blocking Ingestion (JSON)
```python
import json
import logging
from zerobus.sdk.sync import ZerobusSdk
from zerobus.sdk.shared import RecordType, StreamConfigurationOptions, TableProperties
logging.basicConfig(level=logging.INFO)
sdk = ZerobusSdk(server_endpoint, workspace_url)
table_properties = TableProperties(table_name)
options = StreamConfigurationOptions(record_type=RecordType.JSON)
stream = sdk.create_stream(client_id, client_secret, table_properties, options)
try:
for i in range(1000):
# Pass a dict (SDK serializes) or a pre-serialized JSON string
record_dict = {
"device_name": f"sensor-{i}",
"temp": 20 + i % 15,
"humidity": 50 + i % 40
}
ack = stream.ingest_record(record_dict)
# Optional: Wait for durability confirmation
ack.wait_for_ack()
finally:
stream.close()
```
#### Non-Blocking Ingestion (JSON)
```python
import asyncio
import json
import logging
from zerobus.sdk.aio import ZerobusSdk
from zerobus.sdk.shared import RecordType, StreamConfigurationOptions, TableProperties, AckCallback
logging.basicConfig(level=logging.INFO)
async def main():
# Create a custom callback class
class MyCallback(AckCallback):
def on_ack(self, offset: int):
print(f"Acknowledged offset: {offset}")
options = StreamConfigurationOptions(
record_type=RecordType.JSON,
max_inflight_records=50000,
ack_callback=MyCallback()
)
sdk = ZerobusSdk(server_endpoint, workspace_url)
table_properties = TableProperties(table_name)
stream = await sdk.create_stream(client_id, client_secret, table_properties, options)
futures = []
try:
for i in range(100000):
# Pass a dict (SDK serializes) or a pre-serialized JSON string
record_dict = {
"device_name": f"sensor-{i % 10}",
"temp": 20 + i % 15,
"humidity": 50 + i % 40
}
future = await stream.ingest_record(record_dict)
futures.append(future)
await stream.flush()
await asyncio.gather(*futures)
finally:
await stream.close()
asyncio.run(main())
```
### Protocol Buffer Examples
#### Blocking Ingestion (Protobuf)
```python
import logging
from zerobus.sdk.sync import ZerobusSdk
from zerobus.sdk.shared import TableProperties
import record_pb2
logging.basicConfig(level=logging.INFO)
sdk = ZerobusSdk(server_endpoint, workspace_url)
table_properties = TableProperties(table_name, record_pb2.AirQuality.DESCRIPTOR)
stream = sdk.create_stream(client_id, client_secret, table_properties)
try:
for i in range(1000):
# Pass a Message object (SDK serializes) or pre-serialized bytes
record = record_pb2.AirQuality(
device_name=f"sensor-{i}",
temp=20 + i % 15,
humidity=50 + i % 40
)
ack = stream.ingest_record(record)
# Optional: Wait for durability confirmation
ack.wait_for_ack()
finally:
stream.close()
```
#### Non-Blocking Ingestion (Protobuf)
```python
import asyncio
import logging
from zerobus.sdk.aio import ZerobusSdk
from zerobus.sdk.shared import TableProperties, StreamConfigurationOptions, AckCallback
import record_pb2
logging.basicConfig(level=logging.INFO)
async def main():
# Create a custom callback class
class MyCallback(AckCallback):
def on_ack(self, offset: int):
print(f"Acknowledged offset: {offset}")
options = StreamConfigurationOptions(
max_inflight_records=50000,
ack_callback=MyCallback()
)
sdk = ZerobusSdk(server_endpoint, workspace_url)
table_properties = TableProperties(table_name, record_pb2.AirQuality.DESCRIPTOR)
stream = await sdk.create_stream(client_id, client_secret, table_properties, options)
futures = []
try:
for i in range(100000):
# Pass a Message object (SDK serializes) or pre-serialized bytes
record = record_pb2.AirQuality(
device_name=f"sensor-{i % 10}",
temp=20 + i % 15,
humidity=50 + i % 40
)
future = await stream.ingest_record(record)
futures.append(future)
await stream.flush()
await asyncio.gather(*futures)
finally:
await stream.close()
asyncio.run(main())
```
## Authentication
The SDK uses OAuth 2.0 Client Credentials for authentication:
```python
from zerobus.sdk.sync import ZerobusSdk
from zerobus.sdk.shared import TableProperties
import record_pb2
sdk = ZerobusSdk(server_endpoint, workspace_url)
table_properties = TableProperties(table_name, record_pb2.AirQuality.DESCRIPTOR)
# Create stream with OAuth authentication
stream = sdk.create_stream(client_id, client_secret, table_properties)
```
The SDK automatically handles OAuth 2.0 authentication and uses secure TLS connections by default.
For advanced use cases requiring custom authentication headers, see the `HeadersProvider` section in the API Reference below.
## Configuration
### Stream Configuration Options
Configure stream behavior by passing a `StreamConfigurationOptions` object to `create_stream()`:
```python
from zerobus.sdk.sync import ZerobusSdk
from zerobus.sdk.shared import AckCallback, StreamConfigurationOptions, RecordType, TableProperties
sdk = ZerobusSdk(server_endpoint, workspace_url)
table_properties = TableProperties(table_name)
# Optional: Create a custom callback class
class MyCallback(AckCallback):
def on_ack(self, offset: int):
print(f"Ack: {offset}")
# Create options with custom configuration
options = StreamConfigurationOptions(
record_type=RecordType.JSON,
max_inflight_records=10000,
recovery=True,
recovery_timeout_ms=20000,
ack_callback=MyCallback() # Optional - can be None
)
# Pass options when creating the stream
stream = sdk.create_stream(
client_id,
client_secret,
table_properties,
options # <-- Configuration options passed here
)
```
**All options are optional** - if not specified, defaults will be used.
### Available Options
| Option | Type | Default | Description |
|--------|------|---------|-------------|
| `record_type` | `RecordType` | `RecordType.PROTO` | Serialization format: `RecordType.PROTO` or `RecordType.JSON` |
| `max_inflight_records` | `int` | `50000` | Maximum number of unacknowledged records |
| `recovery` | `bool` | `True` | Enable automatic stream recovery |
| `recovery_timeout_ms` | `int` | `15000` | Timeout for recovery operations (ms) |
| `recovery_backoff_ms` | `int` | `2000` | Delay between recovery attempts (ms) |
| `recovery_retries` | `int` | `3` | Maximum number of recovery attempts |
| `flush_timeout_ms` | `int` | `300000` | Timeout for flush operations (ms) |
| `server_lack_of_ack_timeout_ms` | `int` | `60000` | Server acknowledgment timeout (ms) |
| `stream_paused_max_wait_time_ms` | `Optional[int]` | `None` | Max time (ms) to wait during graceful stream close. `None` = wait for full server duration, `0` = immediate, `x` = wait up to min(x, server_duration) |
| `callback_max_wait_time_ms` | `Optional[int]` | `5000` | Max time (ms) to wait for callbacks to finish after `close()`. `None` = wait forever, `x` = wait up to x ms |
| `ack_callback` | `AckCallback` | `None` | Callback invoked on record acknowledgment (must be a class extending `AckCallback`) |
### Acknowledgment Callbacks
The `ack_callback` parameter requires a custom class extending `AckCallback`:
```python
from zerobus.sdk.shared import AckCallback, StreamConfigurationOptions
class MyCallback(AckCallback):
    def on_ack(self, offset: int):
        # Called when a record is successfully acknowledged
        print(f"Record at offset {offset} was acknowledged")
        # You can track metrics, update UI, etc.

    def on_error(self, offset: int, error_message: str):
        # Called when a record encounters an error
        print(f"Record at offset {offset} failed: {error_message}")
        # Handle errors, log, retry, etc.

# Create options with the callback
options = StreamConfigurationOptions(
    ack_callback=MyCallback()
)

# Use the options when creating a stream
stream = sdk.create_stream(
    client_id,
    client_secret,
    table_properties,
    options
)
```
## Error Handling
The SDK raises two types of exceptions:
- `ZerobusException`: Retriable errors (e.g., network issues, temporary server errors)
- `NonRetriableException`: Non-retriable errors (e.g., invalid credentials, missing table)
```python
from zerobus.sdk.shared import ZerobusException, NonRetriableException
try:
    stream.ingest_record(record)
except NonRetriableException as e:
    # Fatal error - do not retry
    print(f"Non-retriable error: {e}")
    raise
except ZerobusException as e:
    # Retriable error - can retry with backoff
    print(f"Retriable error: {e}")
    # Implement retry logic
```
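The retriable branch above leaves the retry logic as an exercise. A minimal exponential-backoff sketch (the `ingest_with_retry` helper is hypothetical, and stand-in exception classes are defined so the snippet runs without the SDK installed — in real code import them from `zerobus.sdk.shared`):

```python
import time

# Stand-ins so this sketch runs standalone; in real code:
# from zerobus.sdk.shared import ZerobusException, NonRetriableException
class ZerobusException(Exception): pass
class NonRetriableException(Exception): pass

def ingest_with_retry(stream, record, max_attempts=3, base_delay_s=1.0):
    """Retry retriable failures with exponential backoff; re-raise fatal ones."""
    for attempt in range(max_attempts):
        try:
            return stream.ingest_record_offset(record)
        except NonRetriableException:
            raise  # fatal: invalid credentials, missing table - do not retry
        except ZerobusException:
            if attempt == max_attempts - 1:
                raise  # retries exhausted
            time.sleep(base_delay_s * (2 ** attempt))  # 1s, 2s, 4s, ...
```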
## API Reference
### ZerobusSdk
Main entry point for the SDK.
**Synchronous API:**
```python
from zerobus.sdk.sync import ZerobusSdk
sdk = ZerobusSdk(server_endpoint, unity_catalog_endpoint)
```
**Constructor Parameters:**
- `server_endpoint` (str) - The Zerobus gRPC endpoint (e.g., `<workspace-id>.zerobus.<region>.cloud.databricks.com` for AWS, or `<workspace-id>.zerobus.<region>.azuredatabricks.net` for Azure)
- `unity_catalog_endpoint` (str) - The Unity Catalog endpoint (your workspace URL)
**Methods:**
```python
def create_stream(
    client_id: str,
    client_secret: str,
    table_properties: TableProperties,
    options: StreamConfigurationOptions = None,
    headers_provider: HeadersProvider = None
) -> ZerobusStream
```
Creates a new ingestion stream using OAuth 2.0 Client Credentials authentication.
**Parameters:**
- `client_id` (str) - OAuth client ID (ignored if `headers_provider` is provided)
- `client_secret` (str) - OAuth client secret (ignored if `headers_provider` is provided)
- `table_properties` (TableProperties) - Target table configuration
- `options` (StreamConfigurationOptions) - Stream behavior configuration (optional)
- `headers_provider` (HeadersProvider) - Custom headers provider (optional, defaults to OAuth)
Automatically includes these headers (when using default OAuth):
- `"authorization": "Bearer <oauth_token>"` (fetched via OAuth 2.0 Client Credentials flow)
- `"x-databricks-zerobus-table-name": "<table_name>"`
Returns a `ZerobusStream` instance.
---
**Asynchronous API:**
```python
from zerobus.sdk.aio import ZerobusSdk
sdk = ZerobusSdk(server_endpoint, unity_catalog_endpoint)
```
**Methods:**
```python
async def create_stream(
    client_id: str,
    client_secret: str,
    table_properties: TableProperties,
    options: StreamConfigurationOptions = None,
    headers_provider: HeadersProvider = None
) -> ZerobusStream
```
Creates a new ingestion stream using OAuth 2.0 Client Credentials authentication.
**Parameters:**
- `client_id` (str) - OAuth client ID (ignored if `headers_provider` is provided)
- `client_secret` (str) - OAuth client secret (ignored if `headers_provider` is provided)
- `table_properties` (TableProperties) - Target table configuration
- `options` (StreamConfigurationOptions) - Stream behavior configuration (optional)
- `headers_provider` (HeadersProvider) - Custom headers provider (optional, defaults to OAuth)
Automatically includes these headers (when using default OAuth):
- `"authorization": "Bearer <oauth_token>"` (fetched via OAuth 2.0 Client Credentials flow)
- `"x-databricks-zerobus-table-name": "<table_name>"`
Returns a `ZerobusStream` instance.
---
### ZerobusStream
Represents an active ingestion stream.
**Synchronous Methods:**
**Single Record Ingestion:**
```python
def ingest_record_offset(record: Union[Message, dict, bytes, str]) -> int
```
**RECOMMENDED** - Ingests a single record and returns the offset after queueing.
```python
def ingest_record_nowait(record: Union[Message, dict, bytes, str]) -> None
```
**RECOMMENDED** - Fire-and-forget ingestion. Submits the record without waiting or returning an offset. Best for maximum throughput.
```python
def ingest_record(record: Union[Message, dict, bytes, str]) -> RecordAcknowledgment
```
**DEPRECATED since v0.3.0** - Use `ingest_record_offset()` or `ingest_record_nowait()` instead for better performance.
**Batch Ingestion:**
```python
def ingest_records_offset(records: List[Union[Message, dict, bytes, str]]) -> int
```
Ingests a batch of records and returns the final offset immediately. More efficient than individual calls for bulk ingestion.
```python
def ingest_records_nowait(records: List[Union[Message, dict, bytes, str]]) -> None
```
Fire-and-forget batch ingestion. Submits all records without waiting. Most efficient for bulk ingestion.
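When the input is too large to hand over in one call, a common pattern is to slice it into fixed-size batches before calling the batch API. A sketch (the helper name and batch size are illustrative):

```python
def ingest_in_batches(stream, records, batch_size=1000):
    """Submit records in fixed-size batches; returns the offset of the last batch."""
    last_offset = None
    for i in range(0, len(records), batch_size):
        last_offset = stream.ingest_records_offset(records[i:i + batch_size])
    return last_offset
```

Pair this with `flush()` when you need to confirm that the final offset has been durably written.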
```python
def get_unacked_records() -> List[bytes]
```
Returns a list of unacknowledged records (as raw bytes). These are records that have been ingested but not yet acknowledged by the server.
**Important**: Records are returned in their serialized form:
- **JSON mode**: Decode with `json.loads(record.decode('utf-8'))`
- **Protobuf mode**: Deserialize with `YourMessage.FromString(record)` or use as-is if pre-serialized
Useful for recovery and monitoring.
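For example, in JSON mode the raw bytes can be turned back into dictionaries before logging or re-ingesting them (illustrative helper):

```python
import json

def decode_unacked_json(raw_records):
    """Decode raw bytes from get_unacked_records() back into dicts (JSON mode)."""
    return [json.loads(r.decode("utf-8")) for r in raw_records]
```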
```python
def get_unacked_batches() -> List[List[bytes]]
```
Returns a list of unacknowledged batches, where each batch is a list of records (as raw bytes). These are batches that have been sent but not yet acknowledged by the server.
**Important**: Records are returned in their serialized form (see `get_unacked_records()` for decoding).
Useful for batch retry logic.
**Stream Management:**
```python
def flush() -> None
```
Flushes all pending records and waits for server acknowledgment. Does not close the stream.
```python
def close() -> None
```
Flushes and closes the stream gracefully. Always call in a `finally` block.
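The flush/close contract can be wrapped in a small helper so the stream is always closed even when ingestion raises (the helper name is illustrative; the methods are the ones documented above):

```python
def ingest_all(stream, records):
    """Ingest a batch, wait for acknowledgment, and always close the stream."""
    try:
        stream.ingest_records_nowait(records)
        stream.flush()   # block until the server has acknowledged everything
    finally:
        stream.close()   # runs even if ingestion or flush raised
```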
**Accepted Record Types (all methods):**
- **JSON mode**: `dict` (SDK serializes) or `str` (pre-serialized JSON string)
- **Protobuf mode**: `Message` object (SDK serializes) or `bytes` (pre-serialized)
---
**Asynchronous Methods:**
**Single Record Ingestion:**
```python
async def ingest_record_offset(record: Union[Message, dict, bytes, str]) -> int
```
**RECOMMENDED** - Ingests a single record and returns the offset after queueing.
```python
def ingest_record_nowait(record: Union[Message, dict, bytes, str]) -> None
```
**RECOMMENDED** - Fire-and-forget ingestion. Submits the record without waiting. Not async (don't use `await`). Best for maximum throughput.
```python
async def ingest_record(record: Union[Message, dict, bytes, str]) -> Awaitable
```
**DEPRECATED since v0.3.0** - Use `ingest_record_offset()` or `ingest_record_nowait()` instead for better performance.
**Batch Ingestion:**
```python
async def ingest_records_offset(records: List[Union[Message, dict, bytes, str]]) -> int
```
Ingests a batch of records and returns the final offset immediately. More efficient than individual calls for bulk ingestion.
```python
def ingest_records_nowait(records: List[Union[Message, dict, bytes, str]]) -> None
```
Fire-and-forget batch ingestion. Submits all records without waiting. Not async (don't use `await`). Most efficient for bulk ingestion.
**Offset Tracking:**
```python
async def wait_for_offset(offset: int) -> None
```
Waits for a specific offset to be acknowledged by the server. Useful when you have an offset from `ingest_record_offset()` and want to ensure it's durably written:
```python
offset = await stream.ingest_record_offset(record)
# Do other work...
await stream.wait_for_offset(offset) # Ensure this offset is acknowledged
```
**Stream Monitoring:**
```python
async def get_unacked_records() -> List[bytes]
```
Returns a list of unacknowledged records (as raw bytes). These are records that have been ingested but not yet acknowledged by the server.
**Important**: Records are returned in their serialized form:
- **JSON mode**: Decode with `json.loads(record.decode('utf-8'))`
- **Protobuf mode**: Deserialize with `YourMessage.FromString(record)` or use as-is if pre-serialized
Useful for recovery and monitoring.
```python
async def get_unacked_batches() -> List[List[bytes]]
```
Returns a list of unacknowledged batches, where each batch is a list of records (as raw bytes). These are batches that have been sent but not yet acknowledged by the server.
**Important**: Records are returned in their serialized form (see `get_unacked_records()` for decoding).
Useful for batch retry logic.
**Stream Management:**
```python
async def flush() -> None
```
Flushes all pending records and waits for server acknowledgment. Does not close the stream.
```python
async def close() -> None
```
Flushes and closes the stream gracefully. Always call in a `finally` block.
Returns the unique stream ID assigned by the server.
**Accepted Record Types (all methods):**
- **JSON mode**: `dict` (SDK serializes) or `str` (pre-serialized JSON string)
- **Protobuf mode**: `Message` object (SDK serializes) or `bytes` (pre-serialized)
---
### TableProperties
Configuration for the target table.
**Constructor:**
```python
TableProperties(table_name: str, descriptor: Descriptor = None)
```
**Parameters:**
- `table_name` (str) - Fully qualified table name (e.g., `catalog.schema.table`)
- `descriptor` (Descriptor) - Protobuf message descriptor (e.g., `MyMessage.DESCRIPTOR`). Required for protobuf mode, not needed for JSON mode.
**Examples:**
```python
# JSON mode
table_properties = TableProperties("catalog.schema.table")
# Protobuf mode (default)
table_properties = TableProperties("catalog.schema.table", record_pb2.MyMessage.DESCRIPTOR)
```
---
### HeadersProvider
Abstract base class for providing custom authentication headers to gRPC streams.
**Default:** The SDK handles OAuth 2.0 Client Credentials authentication internally when you provide `client_id` and `client_secret` to `create_stream()`. You don't need to implement any headers provider for standard OAuth authentication.
**Custom Implementation:** For advanced use cases (e.g., custom token providers, non-OAuth authentication), you can implement a custom `HeadersProvider` by extending the base class and implementing the `get_headers()` method. Custom providers must include both the `authorization` and `x-databricks-zerobus-table-name` headers. See example files for implementation details.
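As a sketch, a provider that supplies a pre-fetched static token might look like this (the class name is hypothetical, and the exact import path and signature of `HeadersProvider` may differ from what is shown):

```python
# In real code, extend the SDK's base class, e.g.:
# from zerobus.sdk.shared import HeadersProvider

class StaticTokenHeadersProvider:
    """Supplies a pre-fetched bearer token instead of the built-in OAuth flow."""

    def __init__(self, token: str, table_name: str):
        self._token = token
        self._table_name = table_name

    def get_headers(self) -> dict:
        # Both required headers must be present (see above).
        return {
            "authorization": f"Bearer {self._token}",
            "x-databricks-zerobus-table-name": self._table_name,
        }
```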
---
### StreamConfigurationOptions
Configuration options for stream behavior.
**Constructor:**
```python
StreamConfigurationOptions(
    record_type: RecordType = RecordType.PROTO,
    max_inflight_records: int = 50000,
    recovery: bool = True,
    recovery_timeout_ms: int = 15000,
    recovery_backoff_ms: int = 2000,
    recovery_retries: int = 3,
    flush_timeout_ms: int = 300000,
    server_lack_of_ack_timeout_ms: int = 60000,
    stream_paused_max_wait_time_ms: Optional[int] = None,
    callback_max_wait_time_ms: Optional[int] = 5000,
    ack_callback: AckCallback = None
)
```
**Parameters:**
- `record_type` (RecordType) - Serialization format: `RecordType.PROTO` (default) or `RecordType.JSON`
- `max_inflight_records` (int) - Maximum number of unacknowledged records (default: 50000)
- `recovery` (bool) - Enable or disable automatic stream recovery (default: True)
- `recovery_timeout_ms` (int) - Recovery operation timeout in milliseconds (default: 15000)
- `recovery_backoff_ms` (int) - Delay between recovery attempts in milliseconds (default: 2000)
- `recovery_retries` (int) - Maximum number of recovery attempts (default: 3)
- `flush_timeout_ms` (int) - Flush operation timeout in milliseconds (default: 300000)
- `server_lack_of_ack_timeout_ms` (int) - Server acknowledgment timeout in milliseconds (default: 60000)
- `stream_paused_max_wait_time_ms` (Optional[int]) - Maximum time in milliseconds to wait during graceful stream close. When the server signals stream closure, the SDK can pause and wait for in-flight records to be acknowledged. `None` = wait for full server-specified duration (most graceful), `0` = immediate recovery, `x` = wait up to min(x, server_duration) milliseconds (default: None)
- `callback_max_wait_time_ms` (Optional[int]) - Maximum time in milliseconds to wait for callbacks to finish after calling `close()` on the stream. `None` = wait forever, `x` = wait up to x milliseconds (default: 5000)
- `ack_callback` (AckCallback) - Callback to be invoked when records are acknowledged or encounter errors. Must be a custom class extending `AckCallback` that implements `on_ack()` and optionally `on_error()` methods. (default: None)
**Example:**
```python
from zerobus.sdk.shared import StreamConfigurationOptions, RecordType, AckCallback
# Create a custom callback class
class MyCallback(AckCallback):
    def on_ack(self, offset: int):
        print(f"Ack: {offset}")

    def on_error(self, offset: int, error_message: str):
        print(f"Error at {offset}: {error_message}")

# Use the callback in options
options = StreamConfigurationOptions(
    record_type=RecordType.JSON,
    max_inflight_records=10000,
    ack_callback=MyCallback()
)
# Pass to create_stream()
```
| text/markdown; charset=UTF-8; variant=GFM | null | null | null | null | null | zerobus, databricks, sdk | [
"Development Status :: 5 - Production/Stable",
"Intended Audience :: Developers",
"Intended Audience :: Science/Research",
"Intended Audience :: System Administrators",
"Programming Language :: Python :: 3.9",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Program... | [] | null | null | >=3.9 | [] | [] | [] | [
"protobuf<7.0,>=4.25.0",
"requests<3,>=2.28.1",
"wheel; extra == \"dev\"",
"build; extra == \"dev\"",
"grpcio-tools<2.0,>=1.60.0; extra == \"dev\"",
"black; extra == \"dev\"",
"pycodestyle; extra == \"dev\"",
"autoflake; extra == \"dev\"",
"isort; extra == \"dev\"",
"pytest; extra == \"dev\"",
"... | [] | [] | [] | [] | twine/6.1.0 CPython/3.13.7 | 2026-02-19T14:20:09.460800 | databricks_zerobus_ingest_sdk-0.3.0.tar.gz | 61,816 | 50/c0/1f170f70c64506e41434d3e8f10644d5db4131e76f75d288841aadbb123d/databricks_zerobus_ingest_sdk-0.3.0.tar.gz | source | sdist | null | false | cc2425e7dd51cb68faaaf53d6becc97d | 3e0405f2c1db7d5da787ab4f4bbe995f3f46fbc8b616bc9f69dae74c2d37113b | 50c01f170f70c64506e41434d3e8f10644d5db4131e76f75d288841aadbb123d | null | [
"LICENSE",
"NOTICE"
] | 1,331 |
2.4 | stacksense-core | 0.1.1 | StackSense - Developer-first Reliability Intelligence Engine for CI, Logs, and Deployment Risk Detection | # StackSense
**StackSense** is a developer-first Reliability Intelligence Engine for CI/CD pipelines, logs, and deployment risk detection.
It analyzes error signals across application code, infrastructure, and CI logs to determine whether a deployment should proceed or be blocked.
---
## Why StackSense?
Modern teams face:
- Noisy CI failures
- Silent deployment risks
- Copy-pasting errors into AI tools
- Unactionable alert noise
- Lack of deployment risk scoring
StackSense fills the gap between:
- Monitoring tools (Datadog, New Relic)
- Error trackers (Sentry)
- CI systems (GitHub Actions, GitLab CI)
It provides **deterministic deployment intelligence**, not just logs.
---
## Installation
```bash
pip install stacksense
```
| text/markdown | null | StackSense <founder@stacksense.dev> | null | null | null | ci, devops, error-analysis, observability, reliability, deployment, logs, incident-detection, deployment-guard | [
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Operating System :: OS Independent",
"Topic :: Software Development :: Testing",
"Topic :: System :: Monitoring",
"Topic :: Software Development :: Quality Assurance",
"Intended ... | [] | null | null | >=3.10 | [] | [] | [] | [
"pyyaml>=6.0"
] | [] | [] | [] | [
"Homepage, https://github.com/Stack-Sense/stacksense-engine",
"Source, https://github.com/Stack-Sense/stacksense-engine",
"Issues, https://github.com/Stack-Sense/stacksense-engine/issues"
] | twine/6.2.0 CPython/3.14.0 | 2026-02-19T14:19:38.606401 | stacksense_core-0.1.1.tar.gz | 10,429 | 90/0b/e0add243255ad262d7084149f08ce959c19a8b7ad7270212a205f8d30599/stacksense_core-0.1.1.tar.gz | source | sdist | null | false | d7b022e7722f5aa320f22b4f5ee6bd20 | fd3d8dbf215d23d48b32f102ddc4349b0b031f1a9f38797be122e46f5129f569 | 900be0add243255ad262d7084149f08ce959c19a8b7ad7270212a205f8d30599 | MIT | [] | 256 |
2.4 | kaiserlift | 0.1.100 | Science-based progressive overload system for weightlifting and running with Pareto optimization | # 🏋️ KaiserLift
[](https://pypi.org/project/kaiserlift/)
[](https://www.python.org/downloads/)
[](https://opensource.org/licenses/MIT)
**A smarter way to choose your next workout: data-driven progressive overload**
🎯 **Never guess your next workout again** — KaiserLift analyzes your training history and tells you exactly which exercise, weight, and rep range will give you the easiest PR.
## ✨ Quick Start
### Installation
```bash
pip install kaiserlift
```
Or with [uv](https://docs.astral.sh/uv/) (recommended):
```bash
uv pip install kaiserlift
```
### Run the Web Interface
```bash
kaiserlift-cli
```
Then open http://localhost:8000 in your browser and upload your FitNotes CSV export.
### Use as a Python Library
```python
from kaiserlift import pipeline
# Generate interactive HTML with workout recommendations
html = pipeline(["your_fitnotes_export.csv"])
with open("workout_plan.html", "w") as f:
    f.write(html)
```
### CSV Data Format
Your CSV should follow the FitNotes export format:
```csv
Date,Exercise,Category,Weight,Reps
2022-09-14,Flat Barbell Bench Press,Chest,45.0,10
2022-09-14,Dumbbell Curl,Biceps,35.0,10
```
## Why keep doing 10 rep sets? Are you pushing in a smart way?
The core idea I’m exploring is simple: I want a science-based system for determining what’s the best workout to do next if your goal is muscle growth. The foundation of this is progressive overload—the principle that muscles grow when you consistently challenge them beyond what they’re used to.
One of the most effective ways to apply progressive overload is by taking sets to failure. But doing that intelligently requires knowing what you’ve done before. If you have a record of your best sets on a given exercise, you can deliberately aim to push past those PRs. That history becomes your benchmark.

Now, there’s another dimension to this: rep range adaptation. If you always train for, say, 10 reps, your muscles can get used to that rep range—even if you’re still pushing for PRs. Switching things up and going for, say, 20-rep maxes (even with lighter weight) can stimulate growth by forcing the muscle to adapt to new challenges. Then, when you go back to 10 reps, you might find you’ve blown through a plateau.
To make this system more precise, I propose using a one-rep max (1RM) equivalence formula—a way of mapping rep and weight combinations onto a single curve. It gives you a way to compare different PRs across rep ranges. Using that, you can identify which rep range you’re weakest in—meaning, which PR has the lowest 1RM equivalent. That’s where your next opportunity lies.

For the 1 rep max we use [`The Epley Formula`](https://en.wikipedia.org/wiki/One-repetition_maximum#cite_ref-7):
$$
\text{estimated\_1rm} = \text{weight} \times \left(1 + \frac{\text{reps}}{30.0}\right)
$$
## Here's how to operationalize it:
1. Collect your full workout history for a given exercise—every weight and rep combo you’ve done.
2. Calculate the Pareto front of that data. This is the set of “non-dominated” performances: the heaviest weights at each rep range that can’t be beaten in both weight and reps at the same time.
3. For each point on the Pareto front, compute the 1RM equivalent using a decay-style formula.
4. Identify the Pareto front point with the lowest 1RM equivalent—that’s your weakest spot.
5. Now, generate a “next step” PR target: a new set that just barely beats that weakest point, by the smallest reasonable margin (e.g., +1 rep or +5 lbs). That becomes your next workout goal.
This method gives you a structured, data-driven way to chase the easiest possible PR—which is still a PR. That keeps you progressing without burning out.
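The five steps can be sketched in standalone Python (illustrative only — the library ships its own helpers such as `highest_weight_per_rep` and `df_next_pareto`; this toy version targets +1 rep over the weakest point):

```python
def epley_1rm(weight: float, reps: int) -> float:
    """Estimated one-rep max via the Epley formula."""
    return weight * (1 + reps / 30.0)

def pareto_front(sets):
    """Non-dominated (weight, reps) pairs from a history of sets."""
    best = {}
    for weight, reps in sets:                # best weight seen at each rep count
        best[reps] = max(best.get(reps, 0.0), weight)
    front = []
    for reps in sorted(best, reverse=True):  # high reps first
        if not front or best[reps] > front[-1][0]:
            front.append((best[reps], reps))
    return front

def next_target(sets):
    """PR target that just beats the weakest (lowest 1RM) point on the front."""
    weight, reps = min(pareto_front(sets), key=lambda p: epley_1rm(*p))
    return weight, reps + 1                  # smallest reasonable margin: +1 rep
```

For a history like `[(35, 10), (40, 8), (30, 12), (40, 6)]` the front is `[(30, 12), (35, 10), (40, 8)]` (the 40×6 set is dominated by 40×8), the lowest 1RM equivalent is the 30×12 set, and the suggested next goal is 30 lbs for 13 reps.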
You can extend this concept across exercises too. Let’s say it’s biceps day. Instead of defaulting to the same curl variation you always do, you can rotate in a bicep exercise you haven’t done recently in order to assure a new PR. This introduces variability, which is another powerful way to drive adaptation while still targeting the same muscle group.

The end goal here is simple: use your data to intelligently apply progressive overload, break through plateaus, and train more effectively—with zero guesswork.
To this end I also made an HTML page that organizes these targets in a text-searchable way and can be accessed from my phone at the gym. This table of target sets can be ordered by 1RM, so it is easily parsed.

## Advanced Usage
### Development
Set up a local environment with [uv](https://docs.astral.sh/uv/):
```bash
uv venv
uv sync --extra dev
# or, with pip:
pip install ".[dev]"
```
Run the full test suite (includes benchmarks via `pytest-benchmark`):
```bash
uv run pytest
```
Before committing, run the pre-commit hooks locally to mirror CI and avoid
formatting failures:
```bash
uvx pre-commit run --all-files
```
This installs the pinned `ruff-format` version and rewrites files if needed, so
rerun the command until it reports no changes.
Import data and run the pareto calculations:
```python
import glob

from kaiserlift import (
    process_csv_files,
    highest_weight_per_rep,
    df_next_pareto,
)

csv_files = glob.glob("*.csv")
df = process_csv_files(csv_files)
df_pareto = highest_weight_per_rep(df)
df_targets = df_next_pareto(df_pareto)
```
Plotting the data:
```python
from kaiserlift import plot_df
# Simple view of all data (only the blue dots)
fig = plot_df(df, Exercise="Dumbbell Curl")
fig.savefig("build/Dumbbell_Curl_Raw.png")
# View with pareto front plotted (the red line)
fig = plot_df(df, df_pareto=df_pareto, Exercise="Dumbbell Curl")
fig.savefig("build/Dumbbell_Curl_Pareto.png")
# View with pareto and targets (the green x's)
fig = plot_df(df, df_pareto=df_pareto, df_targets=df_targets, Exercise="Dumbbell Curl")
fig.savefig("build/Dumbbell_Curl_Pareto_and_Targets.png")
```
Generate views:
```python
from kaiserlift import (
    print_oldest_exercise,
    gen_html_viewer,
)

# Console print out with optional args
output_lines = print_oldest_exercise(df, n_cat=2, n_exercises_per_cat=2, n_target_sets_per_exercises=2)
with open("your_workout_summary.txt", "w") as f:
    f.writelines(output_lines)

# Print in HTML format for ease of use
full_html = gen_html_viewer(df)
with open("your_interactive_table.html", "w", encoding="utf-8") as f:
    f.write(full_html)

# Console print of the HTML
from IPython.display import display, HTML
display(HTML(full_html))
```
### Example HTML generation in CI
An example dataset and helper script live in `tests/example_use`. You can
generate the interactive HTML table locally with:
```bash
python tests/example_use/generate_example_html.py
```
The "Generate example HTML" job in the CI workflow runs the same script. When
run on the main branch, the generated pages are deployed to GitHub Pages:
- Landing page: `index.html`
- Lifting demo: `lifting/index.html`
- Running demo: `running/index.html`
The HTML is also available as a downloadable artifact for all branches.
| text/markdown | null | Douglas Kaiser <douglastkaiser@gmail.com> | null | null | MIT | fitness, weightlifting, progressive-overload, workout-optimization, running, pareto, 1rm, strength-training, data-analysis | [
"Development Status :: 4 - Beta",
"Intended Audience :: Developers",
"Intended Audience :: End Users/Desktop",
"Intended Audience :: Science/Research",
"Topic :: Scientific/Engineering :: Visualization",
"Topic :: Utilities",
"License :: OSI Approved :: MIT License",
"Programming Language :: Python ::... | [] | null | null | >=3.8 | [] | [] | [] | [
"pytest-benchmark>=4.0",
"pandas",
"numpy",
"matplotlib",
"plotly",
"ipython",
"fastapi; extra == \"server\"",
"uvicorn; extra == \"server\"",
"python-multipart; extra == \"server\"",
"httpx; extra == \"server\"",
"pytest>=7.0; extra == \"dev\"",
"pytest-cov>=4.0; extra == \"dev\""
] | [] | [] | [] | [
"Homepage, https://www.douglastkaiser.com/projects/#workoutPlanner",
"Documentation, https://github.com/douglastkaiser/kaiserlift#readme",
"Repository, https://github.com/douglastkaiser/kaiserlift",
"Bug Tracker, https://github.com/douglastkaiser/kaiserlift/issues"
] | uv/0.10.4 {"installer":{"name":"uv","version":"0.10.4","subcommand":["publish"]},"python":null,"implementation":{"name":null,"version":null},"distro":{"name":"Ubuntu","version":"24.04","id":"noble","libc":null},"system":{"name":null,"release":null},"cpu":null,"openssl_version":null,"setuptools_version":null,"rustc_version":null,"ci":true} | 2026-02-19T14:19:29.981828 | kaiserlift-0.1.100-py3-none-any.whl | 44,740 | 21/db/f1c21c063b67c02eefa9300d2da511f51e8ded0b6296439baf7f39cbdcdb/kaiserlift-0.1.100-py3-none-any.whl | py3 | bdist_wheel | null | false | 0480f10c82958743387bedbc797ca60c | 830c8a7e2b9a25a825f38afe6b15b86c6172814f4c70858d25715b9f3569f3d0 | 21dbf1c21c063b67c02eefa9300d2da511f51e8ded0b6296439baf7f39cbdcdb | null | [
"LICENSE"
] | 239 |
2.4 | scholarcli | 1.15 | A tool for structured literature searches across bibliographic databases | # Scholar
A command-line tool for conducting structured literature searches across multiple academic databases, with built-in support for systematic literature reviews.
## Features
### Multi-Database Search
Search across six academic databases with a single query:
- **Semantic Scholar** - AI-powered research database with 200M+ papers
- **OpenAlex** - Open catalog of 250M+ scholarly works
- **DBLP** - Computer science bibliography
- **Web of Science** - Comprehensive citation index (requires API key)
- **IEEE Xplore** - IEEE technical literature (requires API key)
- **arXiv** - Preprints (no API key)
```bash
# Search specific providers
scholar search "federated learning" -p s2 -p openalex
# Start from a research question (LLM generates provider-specific queries)
scholar rq "How can privacy-preserving ML be evaluated?" \
--provider openalex --provider dblp \
--count 20
```
### Interactive Review Interface
Review search results in a terminal-based interface with vim-style navigation:
```bash
scholar search "neural networks" --review
```
The TUI supports:
- **Keep/Discard decisions** with mandatory motivations for discards
- **Theme tagging** for organizing kept papers
- **Note-taking** with your preferred editor
- **PDF viewing** with automatic download and caching
- **Abstract enrichment** for papers missing abstracts
- **LLM-assisted classification** to help review large result sets
- **Sorting and filtering** by various criteria
### Output Formats
Export results in multiple formats:
```bash
# Pretty table (default for terminal)
scholar search "query"
# Machine-readable formats
scholar search "query" -f json
scholar search "query" -f csv
scholar search "query" -f bibtex
```
### Session Management
Save and resume review sessions:
```bash
# List saved sessions
scholar sessions list
# Resume a session
scholar sessions resume "machine learning"
# Export session to reports
scholar sessions export "machine learning" -f all
```
### Paper Notes
Manage notes across all reviewed papers:
```bash
# Browse papers with notes
scholar notes
# List papers with notes
scholar notes list
# Export/import notes
scholar notes export notes.json
scholar notes import notes.json
```
### Caching
Search results are cached to avoid redundant API calls:
```bash
scholar cache info # Show cache statistics
scholar cache clear # Delete cached results
scholar cache path # Print cache directory
```
PDF downloads are also cached for offline viewing.
## Quickstart
### Install
```bash
pipx install scholarcli
```
### Configure LLM access (optional, for `scholar rq` and LLM-assisted review)
Scholar uses the [`llm`](https://llm.datasette.io/) package for model selection
and API key configuration.
If you want to configure it via the `llm` CLI, install it as well (or install
`scholarcli` with `pipx --include-deps` so the dependency CLIs are exposed):
```bash
pipx install llm
# Or: pipx install --include-deps scholarcli
```
Then configure at least one provider (examples):
```bash
llm install llm-openai-plugin
llm keys set openai
# Or:
llm install llm-anthropic
llm keys set anthropic
```
Set a default model for Scholar to use:
```bash
llm models
llm models default gpt-4o-mini
```
### First run
```bash
# Search directly
scholar search "machine learning privacy"
# Start from a research question (LLM generates provider-specific queries)
scholar rq "How do LLMs support novice programming?" --count 20
```
## Installation
If you don't use `pipx`, you can install with `pip`:
```bash
pip install scholarcli
```
Or with [uv](https://github.com/astral-sh/uv):
```bash
uv pip install scholarcli
```
## Configuration
Some providers require API keys set as environment variables:
| Provider | Environment Variable | Required | How to Get |
|----------|---------------------|----------|------------|
| Semantic Scholar | `S2_API_KEY` | No | [api.semanticscholar.org](https://api.semanticscholar.org) |
| OpenAlex | `OPENALEX_EMAIL` | No | Any email (for polite pool) |
| DBLP | - | No | No key needed |
| Web of Science | `WOS_EXPANDED_API_KEY` or `WOS_STARTER_API_KEY` | Yes | [developer.clarivate.com](https://developer.clarivate.com) |
| IEEE Xplore | `IEEE_API_KEY` | Yes | [developer.ieee.org](https://developer.ieee.org) |
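The keys are read from the environment, so they can be exported in a shell profile before running `scholar` (all values below are placeholders):

```bash
# Optional providers
export S2_API_KEY="your-semantic-scholar-key"
export OPENALEX_EMAIL="you@example.org"

# Required for Web of Science and IEEE Xplore
export WOS_STARTER_API_KEY="your-wos-key"
export IEEE_API_KEY="your-ieee-key"
```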
View provider status:
```bash
scholar providers
```
## Usage Examples
### Basic Search
```bash
# Search with default providers (Semantic Scholar, OpenAlex, DBLP)
scholar search "differential privacy"
# Limit results per provider (default: 1000)
scholar search "blockchain" -l 50
# Unlimited results per provider
scholar search "blockchain" -l 0
```
### Systematic Review Workflow
```bash
# 1. Search and review interactively
scholar search "privacy-preserving machine learning" --review --name "privacy-ml-review"
# 2. Add more searches to the same session
scholar search "federated learning privacy" --review --name "privacy-ml-review"
# 3. Resume reviewing later
scholar sessions resume "privacy-ml-review"
# 4. Generate reports
scholar sessions export "privacy-ml-review" -f all
```
### Enriching Results
Some providers (like DBLP) don't include abstracts. Fetch them from other sources:
```bash
# Enrich during search
scholar search "query" --enrich
# Enrich an existing session
scholar enrich "session-name"
```
### PDF Management
```bash
# Download and open a PDF
scholar pdf open "https://arxiv.org/pdf/2301.00001.pdf"
# View PDF cache
scholar pdf info
scholar pdf clear
```
## Keybindings (Review TUI)
| Key | Action |
|-----|--------|
| `j`/`k` | Navigate up/down |
| `Enter` | View paper details |
| `K` | Keep paper (quick) |
| `T` | Keep with themes |
| `d` | Discard (requires motivation) |
| `n` | Edit notes |
| `p` | Open PDF |
| `e` | Enrich (fetch abstract) |
| `L` | LLM-assisted classification |
| `s` | Sort papers |
| `f` | Filter by status |
| `q` | Quit |
## LLM-Assisted Review
For large result sets, Scholar can use LLMs to assist with paper classification:
```bash
# In the TUI, press 'L' to invoke LLM classification
# Or use the CLI command directly
scholar llm classify "session-name" --count 10
```
### How It Works
1. **Tag some papers manually** - The LLM needs examples to learn from. Review at least 5 papers with tags (themes for kept, motivations for discarded).
2. **Set research context** (optional) - Describe your review's focus to help the LLM understand relevance criteria.
3. **Invoke LLM classification** - The LLM classifies pending papers based on your examples, returning confidence scores.
4. **Review LLM decisions** - Prioritize low-confidence classifications. Accept correct ones, correct wrong ones.
5. **Iterate** - Corrections become training examples for the next round.
### Requirements
Install and configure the `llm` command (Scholar uses `llm`'s configuration and
default model):
```bash
pipx install llm
llm install llm-openai-plugin
llm keys set openai
# Pick a default model (used by `scholar rq` and `scholar llm classify`)
llm models
llm models default gpt-4o-mini
```
If you installed Scholar with `pipx install scholarcli` and want the `llm` CLI
available from that same environment, you can alternatively install Scholar
with `pipx install --include-deps scholarcli`.
The LLM integration supports models available through Simon Willison's `llm`
package (OpenAI, Anthropic, local models, etc.).
Note: `scholar llm classify` learns from your existing labeled examples (typically
~5 tagged papers). `scholar rq` can start without examples by using the research
question as context.
## Documentation
Full documentation is available in the `doc/` directory as a literate program combining documentation and implementation.
## License
MIT License - see [LICENSE](LICENSE) for details.
| text/markdown | null | Daniel Bosk <dbosk@kth.se>, Ric Glassey <glassey@kth.se> | null | null | null | null | [] | [] | null | null | >=3.12 | [] | [] | [] | [
"requests>=2.32.5",
"typer>=0.21.0",
"rich>=14.2.0",
"pyalex>=0.19",
"arxiv>=2.1.0",
"cachetools>=6.2.4",
"platformdirs>=4.5.1",
"textual>=6.11.0",
"pypandoc>=1.14",
"click>=8.0.0",
"llm>=0.19",
"llm-openai-plugin>=0.7",
"llm-gpt4all>=0.4",
"llm-azure>=2.1",
"llm-anthropic>=0.23",
"llm... | [] | [] | [] | [] | uv/0.9.28 {"installer":{"name":"uv","version":"0.9.28","subcommand":["publish"]},"python":null,"implementation":{"name":null,"version":null},"distro":{"name":"Ubuntu","version":"25.10","id":"questing","libc":null},"system":{"name":null,"release":null},"cpu":null,"openssl_version":null,"setuptools_version":null,"rustc_version":null,"ci":null} | 2026-02-19T14:19:28.214202 | scholarcli-1.15-py3-none-any.whl | 249,490 | e8/59/5e3d3c468a3e150bd4860a865689d4cc01fb520a380adaa18652dde23a9d/scholarcli-1.15-py3-none-any.whl | py3 | bdist_wheel | null | false | c77eccb407d11f316321a7f8324b7823 | 459a2f3e99edec36ac17974cbc03b1e4162a73ba81785798fc490a2427380653 | e8595e3d3c468a3e150bd4860a865689d4cc01fb520a380adaa18652dde23a9d | MIT | [
"LICENSE"
] | 231 |
2.4 | mercury-python | 2.14.0 | Python interface into mercury's network protocol fingerprinting and analysis functionality | # mercury-python
The goal of the `mercury-python` package is to expose mercury's network protocol analysis functionality via Python. The Cython interface is given in `mercury.pyx`.
## Installation
### Recommended Installation
```bash
pip install mercury-python
```
### From Source
You will first need to [build mercury](https://wwwin-github.cisco.com/network-intelligence/mercury-transition#building-and-installing-mercury)
and install Cython, as well as wheel and setuptools:
```bash
pip install Cython
pip install wheel
pip install setuptools
```
Within mercury's `src/cython/` directory, `Makefile` will build the package based on the makefile target:
```bash
make # default build in-place
make wheel # generates pip-installable wheel file
```
## Usage
### Initialization
```python
import mercury
libmerc = mercury.Mercury() # initialization for packet parsing
libmerc = mercury.Mercury(do_analysis=True, resources=b'/<path>/<to>/<resources.tgz>') # initialization for analysis
```
### Parsing packets
```python
hex_packet = '5254001235020800273a230d08004500...'
libmerc.get_mercury_json(bytes.fromhex(hex_packet))
```
```javascript
{
"fingerprints": {
"tls": "tls/(0303)(13011303...)((0000)...)"
},
"tls": {
"client": {
"version": "0303",
"random": "0d4e266cf66416689ded443b58d2b12bb2f53e8a3207148e3c8f2be2476cbd24",
"session_id": "67b5db473da1b71fbca9ed288052032ee0d5139dcfd6ea78b4436e509703c0e4",
"cipher_suites": "130113031302c02bc02fcca9cca8c02cc030c00ac009c013c014009c009d002f0035000a",
"compression_methods": "00",
"server_name": "content-signature-2.cdn.mozilla.net",
"application_layer_protocol_negotiation": [
"h2",
"http/1.1"
],
"session_ticket": ""
}
},
"src_ip": "10.0.2.15",
"dst_ip": "13.249.64.25",
"protocol": 6,
"src_port": 32972,
"dst_port": 443
}
```
### Analysis
There are two methods to invoke mercury's analysis functionality. The first operates on the full hex packet:
```python
libmerc.analyze_packet(bytes.fromhex(hex_packet))
```
```javascript
{
"tls": {
"client": {
"server_name": "content-signature-2.cdn.mozilla.net"
}
},
"fingerprint_info": {
"status": "labeled",
"type": "tls",
"str_repr": "tls/1/(0303)(13011303...)[(0000)...]"
},
"analysis": {
"process": "firefox",
"score": 0.9992411956652674,
"malware": false,
"p_malware": 8.626882751003134e-06
}
```
The second method operates directly on the data features (network protocol fingerprint string and destination context):
```python
libmerc.perform_analysis('tls/1/(0303)(13011303...)[(0000)...]', 'content-signature-2.cdn.mozilla.net', '13.249.64.25', 443)
```
```javascript
{
"fingerprint_info": {
"status": "labeled"
},
"analysis": {
"process": "firefox",
"score": 0.9992158715704546,
"malware": false,
"p_malware": 8.745628825189023e-06
}
}
```
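A consumer of these results might simply threshold on the returned fields. The sketch below assumes only the JSON shape shown above; the `ALERT_THRESHOLD` value is an arbitrary example, not part of mercury:

```python
import json

# Result in the shape returned by perform_analysis(), as shown above.
result = json.loads("""
{
  "fingerprint_info": {"status": "labeled"},
  "analysis": {
    "process": "firefox",
    "score": 0.9992158715704546,
    "malware": false,
    "p_malware": 8.745628825189023e-06
  }
}
""")

analysis = result["analysis"]
ALERT_THRESHOLD = 0.5  # arbitrary example threshold, not part of mercury
suspicious = analysis["malware"] or analysis["p_malware"] > ALERT_THRESHOLD
print(f"{analysis['process']}: suspicious={suspicious}")
```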
### Static functions
Parsing base64 representations of certificate data:
```python
b64_cert = 'MIIJRDC...'
mercury.parse_cert(b64_cert)
```
output:
```javascript
{
"version": "02",
"serial_number": "00eede6560cd35c0af02000000005971b7",
"signature_identifier": {
"algorithm": "sha256WithRSAEncryption"
},
"issuer": [
{
"country_name": "US"
},
{
"organization_name": "Google Trust Services"
},
{
"common_name": "GTS CA 1O1"
}
],
...
```
Parsing base64 representations of DNS data:
```python
b64_dns = '1e2BgAAB...'
mercury.parse_dns(b64_dns)
```
output:
```javascript
{
"response": {
"question": [
{
"name": "live.github.com.",
"type": "AAAA",
"class": "IN"
}
],
...
```
| text/markdown | Blake Anderson | blake.anderson@cisco.com | null | null | null | tls fingerprinting network traffic analysis | [
"Development Status :: 3 - Alpha",
"Programming Language :: Python :: 3 :: Only",
"Topic :: System :: Networking :: Monitoring",
"Topic :: Security"
] | [] | https://github.com/cisco/mercury-python/ | null | >=3.6.0 | [] | [] | [] | [] | [] | [] | [] | [] | twine/3.8.0 colorama/0.4.4 importlib-metadata/6.6.0 keyring/25.7.0 pkginfo/1.12.1.2 readme-renderer/34.0 requests-toolbelt/0.9.1 requests/2.32.3 rfc3986/1.5.0 tqdm/4.57.0 urllib3/1.26.5 CPython/3.10.12 | 2026-02-19T14:19:04.750232 | mercury_python-2.14.0-cp39-cp39-manylinux_2_34_x86_64.whl | 13,585,786 | 1a/35/9d868d0fa32089c6f69acb38a98623fb55d0d45c678bcdbe18034904286d/mercury_python-2.14.0-cp39-cp39-manylinux_2_34_x86_64.whl | cp39 | bdist_wheel | null | false | 6b1d088fbf8f8de5980b7a2817d335fb | d2b1b5b5edc5870e64f0f689cb7892d347a855a3b6bd7cbbc022ccb2a1a33af9 | 1a359d868d0fa32089c6f69acb38a98623fb55d0d45c678bcdbe18034904286d | null | [] | 1,451 |
2.4 | cyclonedx-bom | 7.2.2 | CycloneDX Software Bill of Materials (SBOM) generator for Python projects and environments | # CycloneDX Python SBOM Generation Tool
[![shield_pypi-version]][link_pypi]
[![shield_docker-version]][link_docker]
[![shield_rtfd]][link_rtfd]
[![shield_gh-workflow-test]][link_gh-workflow-test]
[![shield_coverage]][link_codacy]
[![shield_ossf-best-practices]][link_ossf-best-practices]
[![shield_license]][license_file]
[![shield_website]][link_website]
[![shield_slack]][link_slack]
[![shield_groups]][link_discussion]
[![shield_twitter-follow]][link_twitter]
----
This tool generates Software Bill of Materials (SBOM) documents in OWASP [CycloneDX](https://cyclonedx.org/) format.
It is probably the most accurate and complete SBOM generator for any Python-related project.
Supported data sources are:
* Python (virtual) environment
* `Poetry` manifest and lockfile
* `Pipenv` manifest and lockfile
* Pip's `requirements.txt` format
* `PDM` manifest and lockfile are not explicitly supported.
However, PDM's Python virtual environments are fully supported. See the docs for an example.
* `uv` manifest and lockfile are not explicitly supported.
However, uv's Python virtual environments are fully supported. See the docs for an example.
* `Conda` as a package manager is no longer supported since version 4.
However, conda's Python environments are fully supported via the methods listed above. See the docs for an example.
Based on [OWASP Software Component Verification Standard for Software Bill of Materials](https://scvs.owasp.org/scvs/v2-software-bill-of-materials/)'
criteria, this tool is capable of producing SBOM documents almost passing Level-2 (only signing needs to be done externally).
The resulting SBOM documents follow [official specifications and standards](https://github.com/CycloneDX/specification),
and might have properties following
[`cdx:python` Namespace Taxonomy](https://github.com/CycloneDX/cyclonedx-property-taxonomy/blob/main/cdx/python.md),
[`cdx:pipenv` Namespace Taxonomy](https://github.com/CycloneDX/cyclonedx-property-taxonomy/blob/main/cdx/pipenv.md),
[`cdx:poetry` Namespace Taxonomy](https://github.com/CycloneDX/cyclonedx-property-taxonomy/blob/main/cdx/poetry.md)
.
Read the full [documentation][link_rtfd] for more details.
## Requirements
* Python `>=3.9,<4`
However, there are older versions of this tool available, which
support Python `>=2.7`.
## Installation
Install this from [Python Package Index (PyPI)][link_pypi] using your preferred Python package manager.
Install it via one of the following commands:
```shell
python -m pip install cyclonedx-bom # install via pip
pipx install cyclonedx-bom # install via pipx
poetry add cyclonedx-bom # install via poetry
uv tool install cyclonedx-bom # install via uv
# ... you get the idea
```
## Usage
Call it via one of the following commands:
```shell
cyclonedx-py # call script
python3 -m cyclonedx_py # call python module CLI
```
### Basic usage
```shellSession
$ cyclonedx-py --help
usage: cyclonedx-py [-h] [--version] <command> ...
Creates CycloneDX Software Bill of Materials (SBOM) from Python projects and environments.
positional arguments:
<command>
environment (env, venv)
Build an SBOM from Python (virtual) environment
requirements Build an SBOM from Pip requirements
pipenv Build an SBOM from Pipenv manifest
poetry Build an SBOM from Poetry project
options:
-h, --help show this help message and exit
--version show program's version number and exit
```
### Advanced usage and details
See the full [documentation][link_rtfd] for advanced usage and details on input formats, switches and options.
## Python Support
We endeavour to support all functionality for all [current actively supported Python versions](https://www.python.org/downloads/).
However, some features may not be possible/present in older Python versions due to their lack of support.
There are also older versions of this tool that support Python `>=2.7`.
## Internals
This tool utilizes the [CycloneDX Python library][cyclonedx-library] to generate the actual data structures, and serialize and validate them.
This tool does **not** expose any additional _public_ API or symbols - all code is intended to be internal and might change without any notice during version upgrades.
However, the CLI is stable - you might call it programmatically. See the documentation for an example.
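For instance, a build script might shell out to the stable CLI. This is a minimal sketch: the `environment` subcommand is taken from the usage section above, while writing the SBOM to stdout is an assumption here — check the documentation for the output options your version supports.

```python
import shutil
import subprocess
import sys

# The stable CLI entry point, as listed under "Usage" above.
cmd = ["cyclonedx-py", "environment"]

if shutil.which(cmd[0]) is None:
    print("cyclonedx-py is not on PATH in this environment", file=sys.stderr)
else:
    # Capture the SBOM emitted on stdout (assumed default output target).
    completed = subprocess.run(cmd, capture_output=True, text=True, check=True)
    print(completed.stdout[:200])
```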
## Contributing
Feel free to open issues, bug reports, or pull requests.
See the [CONTRIBUTING][contributing_file] file for details, and how to run/setup locally.
## Copyright & License
CycloneDX BOM is Copyright (c) OWASP Foundation. All Rights Reserved.
Permission to modify and redistribute is granted under the terms of the Apache 2.0 license.
See the [LICENSE][license_file] file for the full license.
[license_file]: https://github.com/CycloneDX/cyclonedx-python/blob/main/LICENSE
[contributing_file]: https://github.com/CycloneDX/cyclonedx-python/blob/main/CONTRIBUTING.md
[link_rtfd]: https://cyclonedx-bom-tool.readthedocs.io/
[cyclonedx-library]: https://pypi.org/project/cyclonedx-python-lib
[shield_gh-workflow-test]: https://img.shields.io/github/actions/workflow/status/CycloneDX/cyclonedx-python/python.yml?branch=main&logo=GitHub&logoColor=white "build"
[shield_rtfd]: https://img.shields.io/readthedocs/cyclonedx-bom-tool?logo=readthedocs&logoColor=white "Read the Docs"
[shield_pypi-version]: https://img.shields.io/pypi/v/cyclonedx-bom?logo=Python&logoColor=white&label=PyPI "PyPI"
[shield_docker-version]: https://img.shields.io/docker/v/cyclonedx/cyclonedx-python?logo=docker&logoColor=white&label=docker "docker"
[shield_license]: https://img.shields.io/github/license/CycloneDX/cyclonedx-python?logo=open%20source%20initiative&logoColor=white "license"
[shield_website]: https://img.shields.io/badge/https://-cyclonedx.org-blue.svg "homepage"
[shield_slack]: https://img.shields.io/badge/slack-join-blue?logo=Slack&logoColor=white "slack join"
[shield_groups]: https://img.shields.io/badge/discussion-groups.io-blue.svg "groups discussion"
[shield_twitter-follow]: https://img.shields.io/badge/Twitter-follow-blue?logo=Twitter&logoColor=white "twitter follow"
[shield_coverage]: https://img.shields.io/codacy/coverage/682ceda9a1044832a087afb95ae280fe?logo=Codacy&logoColor=white "test coverage"
[shield_ossf-best-practices]: https://img.shields.io/cii/percentage/7957?label=OpenSSF%20best%20practices "OpenSSF best practices"
[link_gh-workflow-test]: https://github.com/CycloneDX/cyclonedx-python/actions/workflows/python.yml?query=branch%3Amain
[link_pypi]: https://pypi.org/project/cyclonedx-bom/
[link_docker]: https://hub.docker.com/r/cyclonedx/cyclonedx-python
[link_codacy]: https://app.codacy.com/gh/CycloneDX/cyclonedx-python
[link_ossf-best-practices]: https://www.bestpractices.dev/projects/7957
[link_website]: https://cyclonedx.org/
[link_slack]: https://cyclonedx.org/slack/invite
[link_discussion]: https://groups.io/g/CycloneDX
[link_twitter]: https://twitter.com/CycloneDX_Spec
| text/markdown | Jan Kowalleck | jan.kowalleck@gmail.com | Jan Kowalleck | jan.kowalleck@gmail.com | Apache-2.0 | OWASP, CycloneDX, bill-of-materials, BOM, software-bill-of-materials, SBOM, environment, virtualenv, venv, Poetry, Pipenv, requirements, PDM, Conda, SPDX, licenses, PURL, package-url, dependency-graph | [
"Development Status :: 5 - Production/Stable",
"Environment :: Console",
"Intended Audience :: Developers",
"Intended Audience :: Information Technology",
"Intended Audience :: Legal Industry",
"Intended Audience :: System Administrators",
"License :: OSI Approved :: Apache Software License",
"Program... | [] | null | null | <4.0,>=3.9 | [] | [] | [] | [
"chardet<6.0,>=5.1",
"cyclonedx-python-lib[validation]<12,>=8.0",
"packageurl-python<2,>=0.11",
"packaging<27,>=22",
"pip-requirements-parser<33.0,>=32.0",
"tomli<3.0.0,>=2.0.1; python_version < \"3.11\""
] | [] | [] | [] | [
"Bug Tracker, https://github.com/CycloneDX/cyclonedx-python/issues",
"Changelog, https://github.com/CycloneDX/cyclonedx-python/releases",
"Documentation, https://cyclonedx-bom-tool.readthedocs.io/",
"Funding, https://owasp.org/donate/?reponame=www-project-cyclonedx&title=OWASP+CycloneDX",
"Homepage, https:/... | twine/6.1.0 CPython/3.13.7 | 2026-02-19T14:18:58.532573 | cyclonedx_bom-7.2.2.tar.gz | 4,417,157 | 09/95/02b4e9e13a0b2b2eb36c618520df6772217ae5cd5058f3ca0aa97aae6002/cyclonedx_bom-7.2.2.tar.gz | source | sdist | null | false | 875d6185397f8bcb7b56f58b690d6210 | dc7b994d41a83dc24caa511462dbf9fa6ede51b962f42533a58d0a8261d98e56 | 099502b4e9e13a0b2b2eb36c618520df6772217ae5cd5058f3ca0aa97aae6002 | null | [
"LICENSE",
"NOTICE"
] | 22,453 |
2.4 | fmu-datamodels | 0.19.2 | FMU data standard, including models and schemas | # fmu-datamodels
This package contains models and schemas that contribute to the FMU data
standard.
Data is represented using Pydantic models, which describe metadata for data
exported from FMU experiments. These models can be serialized into versioned
[JSON schemas](https://json-schema.org/) that validate the metadata. In some
cases the exported data itself can be validated by a schema contained here.
The models are utilized by [fmu-dataio](https://github.com/equinor/fmu-dataio)
for data export and can also be leveraged by Sumo data consumers for
programmatic access to data model values like enumerations.
## Schemas
The metadata standard is defined by [JSON schemas](https://json-schema.org/).
Within Equinor, the schemas are available on a Radix-hosted endpoint.
- Radix Dev: 
- Radix Staging: 
- Radix Prod: 
## Documentation
The documentation for this package is built into the
[fmu-dataio](https://fmu-dataio.readthedocs.io/en/latest/) documentation.
- The [FMU results data
model](https://fmu-dataio.readthedocs.io/en/latest/datamodel/index.html)
page documents the data model, contains the schema change logs, and more.
- The [Schema
versioning](https://fmu-dataio.readthedocs.io/en/latest/schema_versioning.html)
page describes how schemas produced by this model are versioned.
- The [Updating
schemas](https://fmu-dataio.readthedocs.io/en/latest/update_schemas.html)
page contains instructions on how to update the schemas. This is relevant
for developers of this model only.
## License
This project is licensed under the terms of the [Apache
2.0](https://github.com/equinor/fmu-datamodels/blob/main/LICENSE) license.
| text/markdown | null | Equinor <fg-fmu_atlas@equinor.com> | null | null | Apache 2.0 | fmu, sumo | [
"Development Status :: 5 - Production/Stable",
"Intended Audience :: Developers",
"Intended Audience :: Science/Research",
"License :: OSI Approved :: Apache Software License",
"Natural Language :: English",
"Programming Language :: Python",
"Programming Language :: Python :: 3.11",
"Programming Langu... | [] | null | null | >=3.11 | [] | [] | [] | [
"pydantic",
"coverage>=4.1; extra == \"dev\"",
"hypothesis; extra == \"dev\"",
"mypy; extra == \"dev\"",
"pyarrow; extra == \"dev\"",
"pyarrow-stubs; extra == \"dev\"",
"pytest; extra == \"dev\"",
"pytest-cov; extra == \"dev\"",
"pytest-mock; extra == \"dev\"",
"pytest-runner; extra == \"dev\"",
... | [] | [] | [] | [
"Homepage, https://github.com/equinor/fmu-datamodels",
"Repository, https://github.com/equinor/fmu-datamodels",
"Issues, https://github.com/equinor/fmu-datamodels/issues",
"Documentation, https://fmu-dataio.readthedocs.io"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-19T14:18:42.659586 | fmu_datamodels-0.19.2.tar.gz | 367,509 | 86/7b/b73807b855ea50fa23981d1d277bcafde1037d4339f228a43bad757ec53a/fmu_datamodels-0.19.2.tar.gz | source | sdist | null | false | aa4efcf67f320c6752c2c2ccca0338dd | cba16a32a11f98f2b871693da58a35079844d25ec07a4d2a7304caeadf5f08bb | 867bb73807b855ea50fa23981d1d277bcafde1037d4339f228a43bad757ec53a | null | [
"LICENSE"
] | 700 |
2.4 | rekd | 0.1.2 | An eBPF-based ransomware scanner | # R.E.K.D: Ransomware Entropy Kernel Detector
* This project is an eBPF-based encrypted disk I/O scanner that monitors file write operations at the kernel level.
* It hooks into the Linux kernel's `vfs_write` function to capture real-time write activity on regular files.
* The system selectively samples large write buffers to avoid noise from small system writes like logs.
* Captured data is analyzed in user space using Shannon entropy to detect potential encryption activity.
* High-entropy data is treated as suspicious since encrypted ransomware outputs exhibit randomness.
* The scanner maintains per-process statistics such as total written bytes and encrypted write ratio.
* A configurable exclusion system allows trusted applications to be ignored during monitoring.
* The tool supports both interactive monitoring mode and background daemon mode for continuous protection.
* Suspicious processes are flagged based on encrypted write percentage and cumulative encrypted output.
* Installation scripts automate deployment as a systemd service for persistent runtime monitoring.
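The entropy heuristic described above can be sketched as follows. This is a minimal stand-in for illustration, not rekd's actual implementation:

```python
import math
import os

def shannon_entropy(data: bytes) -> float:
    """Shannon entropy in bits per byte: 0.0 for constant data,
    approaching 8.0 for uniformly random (encryption-like) data."""
    if not data:
        return 0.0
    counts = [0] * 256
    for byte in data:
        counts[byte] += 1
    total = len(data)
    return -sum((c / total) * math.log2(c / total) for c in counts if c)

# Plain text scores low; random bytes (a stand-in for ciphertext) score
# close to the 8 bits/byte maximum.
text = b"hello hello hello hello " * 64
ciphertext_like = os.urandom(65536)

print(f"text       : {shannon_entropy(text):.2f} bits/byte")
print(f"random-like: {shannon_entropy(ciphertext_like):.2f} bits/byte")
```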
---
## 📦 Installation & Dependencies
### 1. System Requirements
This tool requires the BPF Compiler Collection (BCC). Install it using your package manager:
#### Ubuntu / Debian:
```bash
sudo apt-get install bpfcc-tools linux-headers-$(uname -r) python3-bpfcc
```
### 2. Python Dependencies
This tool requires Python packages for the dashboard UI (`rich`) and configuration parsing (`pyyaml`):
```bash
pip3 install rich pyyaml
```
---
### **Summary of Flags**
| Flag | Description |
| :--- | :--- |
| `--init-config` | Generates a default `exclusions.yaml` file and exits. |
| `--config [path]` | Loads exclusions from a specific YAML file (default: `./exclusions.yaml`). |
| text/markdown | null | "Spider R&D, NIT Trichy" <hirthickdiyan.v@gmail.com> | null | null | GPL-3.0-or-later | null | [
"Programming Language :: Python :: 3",
"Operating System :: POSIX :: Linux"
] | [] | null | null | >=3.6 | [] | [] | [] | [
"rich",
"pyyaml",
"pytest",
"numba"
] | [] | [] | [] | [
"Homepage, https://github.com/SpiderNitt/rekd"
] | twine/6.2.0 CPython/3.13.9 | 2026-02-19T14:17:23.109401 | rekd-0.1.2.tar.gz | 18,325 | 18/35/995f677f5400c132c4fa00a66e97a9fd0aee321196aa6e975af042b83d20/rekd-0.1.2.tar.gz | source | sdist | null | false | 4cb2d92397eb56683a75a006491d725b | e614a348a06cfee7dabf3a5752e76bbd000b30029e4ee2379dd2992beb866891 | 1835995f677f5400c132c4fa00a66e97a9fd0aee321196aa6e975af042b83d20 | null | [
"LICENSE"
] | 244 |
2.4 | UnderAutomation.Fanuc | 4.4.0.0 | Quickly create applications that communicate with your Fanuc robots | # Fanuc Communication SDK for Python
[](https://underautomation.com)
[](https://pypi.org/project/UnderAutomation.Fanuc/)
[](#)
[](#)
[](https://underautomation.com/fanuc/eula)
### 🤖 Effortlessly Communicate with Fanuc Robots
The **Fanuc SDK for Python** enables seamless integration with Fanuc robots for automation, data exchange, and remote control through multiple native communication protocols.
> Whether you're building a custom application, integrating with a MES/SCADA system, or performing advanced diagnostics, this SDK provides the tools you need.
It supports communication with **real robots** and **ROBOGUIDE** simulation.
🔗 **More Information:** [https://underautomation.com/fanuc](https://underautomation.com/fanuc)
🔗 Also available in **[🟦 .NET](https://github.com/underautomation/Fanuc.NET)** & **[🟨 LabVIEW](https://github.com/underautomation/Fanuc.vi)**
---
[⭐ Star this repo if it's useful to you!](https://github.com/underautomation/Fanuc.py/stargazers)
[👁️ Watch for updates](https://github.com/underautomation/Fanuc.py/watchers)
---
## 🚀 TL;DR
- ✔️ **No PCDK needed** - Connect without Fanuc's Robot Interface
- 📖 **Read/write system variables**
- 🔄 **Register access** for numbers, strings, and positions
- 🎬 **Program control** (run, pause, abort, etc.)
- 🔔 **Alarm viewing and reset**
- ⚡ **I/O control** (UI, UO, GI, GO, SDI, SDO, etc.)
- 🔍 **State & diagnostics monitoring**
- 📂 **FTP file & variable access**
- 🏎️ **Remote motion:** Remote move the robot
- 📐 **Kinematics Calculations:** Perform forward and inverse kinematics offline (CRX and standard robots)
> No custom robot options or installations are required. The SDK uses **standard communication** protocols available on all Fanuc controllers.
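As a purely conceptual illustration of forward kinematics, here is a toy 2-link planar arm — not the SDK's 6-axis Fanuc model (see the kinematics example in the list further below for the real API):

```python
import math

def forward_kinematics(l1: float, l2: float, t1: float, t2: float):
    """End-effector (x, y) of a 2-link planar arm; angles in radians."""
    x = l1 * math.cos(t1) + l2 * math.cos(t1 + t2)
    y = l1 * math.sin(t1) + l2 * math.sin(t1 + t2)
    return x, y

# Both joints at zero: the arm lies fully extended along the x axis.
x, y = forward_kinematics(1.0, 1.0, 0.0, 0.0)
print(f"({x:.3f}, {y:.3f})")
```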
---
## 🛠 Installation & Getting Started
### Prerequisites
- **Python 3.7** or higher
- A Fanuc robot or **ROBOGUIDE** simulation
### Step 1 - Create a Virtual Environment
We recommend using a virtual environment to keep your project dependencies isolated.
Open a terminal (Command Prompt, PowerShell, or your favorite terminal) and run:
```bash
# Create a project folder
mkdir my-fanuc-project
cd my-fanuc-project
# Create a virtual environment
python -m venv venv
# Activate it
# On Windows:
venv\Scripts\activate
# On macOS/Linux:
source venv/bin/activate
```
You should see `(venv)` in your terminal prompt, indicating the virtual environment is active.
### Step 2 - Install the SDK
The SDK is published on PyPI. Install it with a single command:
```bash
pip install UnderAutomation.Fanuc
```
That's it! All dependencies (including `pythonnet`) are installed automatically.
On **Linux**, you should also install the .NET runtime and set the `PYTHONNET_RUNTIME` environment variable to `coreclr`:
```bash
sudo apt-get install -y dotnet-runtime-8.0
export PYTHONNET_RUNTIME=coreclr
```
> **Alternative: install from source**
>
> ```bash
> git clone https://github.com/underautomation/Fanuc.py.git
> cd Fanuc.py
> pip install -e .
> ```
### Step 3 - Connect to Your Robot
Create a Python file (e.g. `main.py`) and write:
```python
from underautomation.fanuc.fanuc_robot import FanucRobot
from underautomation.fanuc.connection_parameters import ConnectionParameters
from underautomation.fanuc.common.languages import Languages
# Create a robot instance
robot = FanucRobot()
# Connect (replace with your robot's IP address)
params = ConnectionParameters('192.168.0.1')
# Set the controller language among English, Japanese and Chinese (optional, defaults to English)
params.language = Languages.English
# Enable Telnet KCL
# Activate Telnet on your robot or ROBOGUIDE : https://underautomation.com/fanuc/documentation/enable-telnet
params.telnet.enable = True
params.telnet.telnet_kcl_password = "telnet_password"
# Enable FTP
params.ftp.enable = True
params.ftp.ftp_user = ""
params.ftp.ftp_password = ""
# Enable SNPX
# You need option R553 "HMI Device SNPX" for FANUC America (R650 FRA)
# No additional options needed for FANUC Ltd. (R651 FRL)
params.snpx.enable = True
# Connect to the robot
# If you get a license exception, ask a trial license here: https://underautomation.com/license and call FanucRobot.register_license(...) before connecting
robot.connect(params)
if robot.ftp.connected:
safety = robot.ftp.get_safety_status()
print(f"Safety Status:")
print(f" External E-Stop : {safety.external_e_stop}")
print(f" SOP E-Stop : {safety.sope_stop}")
print(f" TP E-Stop : {safety.tpe_stop}")
print(f" TP Enable : {safety.tp_enable}")
print(f" TP Deadman : {safety.tp_deadman}")
print()
if robot.snpx.connected:
position = robot.snpx.current_position.read_world_position(1)
print(position)
r1 = robot.snpx.numeric_registers.read(1)
print(f" R[1] = {r1}")
print()
if robot.telnet.connected:
speed_override = robot.telnet.get_variable("$MCR.$GENOVERRIDE")
print(f"Speed override: {speed_override.raw_value}%")
print()
# Don't forget to disconnect
robot.disconnect()
```
Run it:
```bash
python main.py
```
> **Connecting to ROBOGUIDE?** Instead of an IP address, pass the workcell path:
>
> ```python
> params = ConnectionParameters(r"C:\Users\you\Documents\My Workcells\CRX 10iA L\Robot_1")
> ```
---
## 🔑 Licensing
The SDK works out of the box for **30 days** (trial period) - no registration needed.
After the trial, you can:
- **Buy a license** at [underautomation.com/order](https://underautomation.com/order?sdk=fanuc)
- **Get a new trial period immediately by email** at [underautomation.com/license](https://underautomation.com/license?sdk=fanuc)
To register a license in code:
```python
from underautomation.fanuc.fanuc_robot import FanucRobot
license_info = FanucRobot.register_license("your-licensee", "your-license-key")
print(license_info)
```
---
## 📂 Examples
The repository includes a complete set of ready-to-run examples in the [`examples/`](https://github.com/underautomation/fanuc.py/tree/main/examples) folder, organized by communication protocol.
### How the Examples Work
| File | Role |
| ---------------------------------------------------------------------------------------------------- | -------------------------------------------------------------------------------------------------------------------------------------- |
| [`examples/launcher.py`](https://github.com/underautomation/fanuc.py/blob/main/examples/launcher.py) | **Interactive menu** - browse and run any example from a single launcher |
| [`examples/__init__.py`](https://github.com/underautomation/fanuc.py/blob/main/examples/__init__.py) | **Shared helpers** - sets up the Python path, manages robot connection settings, and handles license registration |
| `examples/robot_config.json` | **Saved settings** (git-ignored) - remembers your robot IP, credentials, and license key so you don't have to re-enter them every time |
**Run each example manually**
> The first time you run an example, it will ask for your robot IP (or ROBOGUIDE path) and credentials. These are saved in `robot_config.json` so you only enter them once.
```bash
# run any example directly
python examples/snpx/snpx_write_numeric_register.py
```
**Or browse examples with the launcher:**
Use the launcher to easily browse and run any example without needing to open each file.
```bash
# Launch the interactive menu
python examples/launcher.py
```
And you will get a menu like this to select and run any example with a single keystroke:
```
╔════════════════════════════════════════════════════════════════════════════════╗
║ ║
║ ███████╗ █████╗ ███╗ ██╗██╗ ██╗ ██████╗ ║
║ ██╔════╝██╔══██╗████╗ ██║██║ ██║██╔════╝ ║
║ █████╗ ███████║██╔██╗ ██║██║ ██║██║ ║
║ ██╔══╝ ██╔══██║██║╚██╗██║██║ ██║██║ ║
║ ██║ ██║ ██║██║ ╚████║╚██████╔╝╚██████╗ ║
║ ╚═╝ ╚═╝ ╚═╝╚═╝ ╚═══╝ ╚═════╝ ╚═════╝ ║
║ ║
║ Python SDK - Interactive Example Launcher ║
║ ║
╚════════════════════════════════════════════════════════════════════════════════╝
╔════════════════════════════════════════════════════════════════════════════════╗
║ SELECT A CATEGORY ║
╠════════════════════════════════════════════════════════════════════════════════╣
║ ║
║ 📂 1. FTP (16 examples) ║
║ File Transfer Protocol - read/write files, registers, diagnostics ║
║ ║
║ 🦾 2. KINEMATICS (1 example) ║
║ Kinematics - offline forward & inverse kinematics, no connection needed║
║ ║
║ 🔑 3. LICENSE (1 example) ║
║ License management - activation & status ║
║ ║
║ ⚡ 4. SNPX (19 examples) ║
║ SNPX industrial protocol - fast real-time register & I/O access ║
║ ║
║ 🔌 5. TELNET (7 examples) ║
║ Telnet KCL - send commands, read variables, control I/O ║
║ ║
╠════════════════════════════════════════════════════════════════════════════════╣
║ 0. Exit ║
║ ║
╚════════════════════════════════════════════════════════════════════════════════╝
Enter category number [0-5]:
```
---
### 📋 Complete Example List
#### 📂 FTP - File Transfer & Variable Access
| # | Example | Description |
| --- | ----------------------------------------------------------------------------------------------------------------------------------- | ------------------------------------------------------------------------------------- |
| 1 | [ftp_check_file_exists.py](https://github.com/underautomation/fanuc.py/blob/main/examples/ftp/ftp_check_file_exists.py) | Check if a file or directory exists on the robot controller |
| 2 | [ftp_current_position.py](https://github.com/underautomation/fanuc.py/blob/main/examples/ftp/ftp_current_position.py) | Read the current robot position (joints + Cartesian) for all motion groups |
| 3 | [ftp_download_file.py](https://github.com/underautomation/fanuc.py/blob/main/examples/ftp/ftp_download_file.py) | Download a file from the robot controller to your local machine |
| 4 | [ftp_error_list.py](https://github.com/underautomation/fanuc.py/blob/main/examples/ftp/ftp_error_list.py) | Retrieve the complete error/alarm history with codes, timestamps, and active status |
| 5 | [ftp_get_all_variables.py](https://github.com/underautomation/fanuc.py/blob/main/examples/ftp/ftp_get_all_variables.py) | Interactive navigator to browse all variable files, search, and drill into structures |
| 6 | [ftp_io_state.py](https://github.com/underautomation/fanuc.py/blob/main/examples/ftp/ftp_io_state.py) | Read all digital I/O states (DIN, DOUT, RI, RO, UI, UO, SI, SO, FLG) |
| 7 | [ftp_list_files.py](https://github.com/underautomation/fanuc.py/blob/main/examples/ftp/ftp_list_files.py) | Browse files and directories on the controller's file system |
| 8 | [ftp_program_states.py](https://github.com/underautomation/fanuc.py/blob/main/examples/ftp/ftp_program_states.py) | Read the state of all running tasks/programs with call history |
| 9 | [ftp_read_features.py](https://github.com/underautomation/fanuc.py/blob/main/examples/ftp/ftp_read_features.py) | List all installed software features/options on the controller |
| 10 | [ftp_read_numeric_registers.py](https://github.com/underautomation/fanuc.py/blob/main/examples/ftp/ftp_read_numeric_registers.py) | Read numeric registers (R[1], R[2], ...) from the NUMREG variable file |
| 11 | [ftp_read_position_registers.py](https://github.com/underautomation/fanuc.py/blob/main/examples/ftp/ftp_read_position_registers.py) | Read position registers (PR[1], PR[2], ...) with Cartesian and joint data |
| 12 | [ftp_read_string_registers.py](https://github.com/underautomation/fanuc.py/blob/main/examples/ftp/ftp_read_string_registers.py) | Read string registers (SR[1], SR[2], ...) from the STRREG variable file |
| 13 | [ftp_read_system_variables.py](https://github.com/underautomation/fanuc.py/blob/main/examples/ftp/ftp_read_system_variables.py) | Read commonly used system variables (robot name, hostname, language, etc.) |
| 14 | [ftp_safety_status.py](https://github.com/underautomation/fanuc.py/blob/main/examples/ftp/ftp_safety_status.py) | Read safety signals: E-Stop, deadman, fence open, TP enable, and more |
| 15 | [ftp_summary_diagnostic.py](https://github.com/underautomation/fanuc.py/blob/main/examples/ftp/ftp_summary_diagnostic.py) | Get a complete diagnostic snapshot: position, safety, I/O, features, programs |
| 16 | [ftp_upload_file.py](https://github.com/underautomation/fanuc.py/blob/main/examples/ftp/ftp_upload_file.py) | Upload a local file to the robot controller |
#### ⚡ SNPX - High-Speed Industrial Protocol
| # | Example | Description |
| --- | -------------------------------------------------------------------------------------------------------------------------------------- | --------------------------------------------------------------------- |
| 1 | [snpx_clear_alarms.py](https://github.com/underautomation/fanuc.py/blob/main/examples/snpx/snpx_clear_alarms.py) | Clear all active alarms on the robot |
| 2 | [snpx_read_alarm_history.py](https://github.com/underautomation/fanuc.py/blob/main/examples/snpx/snpx_read_alarm_history.py) | Read the alarm history with severity and cause information |
| 3 | [snpx_read_alarms.py](https://github.com/underautomation/fanuc.py/blob/main/examples/snpx/snpx_read_alarms.py) | Read currently active alarms with ID, severity, message, and cause |
| 4 | [snpx_read_batch_flags.py](https://github.com/underautomation/fanuc.py/blob/main/examples/snpx/snpx_read_batch_flags.py) | Read multiple flags at once using batch assignment (much faster) |
| 5 | [snpx_read_batch_registers.py](https://github.com/underautomation/fanuc.py/blob/main/examples/snpx/snpx_read_batch_registers.py) | Read a batch of numeric registers at once for high performance |
| 6 | [snpx_read_current_position.py](https://github.com/underautomation/fanuc.py/blob/main/examples/snpx/snpx_read_current_position.py) | Read real-time Cartesian and joint position via SNPX |
| 7 | [snpx_read_digital_io.py](https://github.com/underautomation/fanuc.py/blob/main/examples/snpx/snpx_read_digital_io.py) | Read digital I/O signals (SDI, SDO, RDI, RDO, UI, UO, SI, SO, WI, WO) |
| 8 | [snpx_read_flag.py](https://github.com/underautomation/fanuc.py/blob/main/examples/snpx/snpx_read_flag.py) | Read a single boolean flag (FLG[i]) |
| 9 | [snpx_read_integer_sysvar.py](https://github.com/underautomation/fanuc.py/blob/main/examples/snpx/snpx_read_integer_sysvar.py) | Read integer system variables by name (e.g. `$MCR.$GENOVERRIDE`) |
| 10 | [snpx_read_numeric_io.py](https://github.com/underautomation/fanuc.py/blob/main/examples/snpx/snpx_read_numeric_io.py) | Read numeric/analog I/O values (GI, GO, AI, AO) |
| 11 | [snpx_read_numeric_register.py](https://github.com/underautomation/fanuc.py/blob/main/examples/snpx/snpx_read_numeric_register.py) | Read a single numeric register (R[i]) with fast direct access |
| 12 | [snpx_read_position_register.py](https://github.com/underautomation/fanuc.py/blob/main/examples/snpx/snpx_read_position_register.py) | Read a position register (PR[i]) with Cartesian and joint data |
| 13 | [snpx_read_string_register.py](https://github.com/underautomation/fanuc.py/blob/main/examples/snpx/snpx_read_string_register.py) | Read a string register (SR[i]) |
| 14 | [snpx_write_digital_output.py](https://github.com/underautomation/fanuc.py/blob/main/examples/snpx/snpx_write_digital_output.py) | Write digital output signals (SDO, RDO, UO, SO, WO) |
| 15 | [snpx_write_flag.py](https://github.com/underautomation/fanuc.py/blob/main/examples/snpx/snpx_write_flag.py) | Write a boolean flag (FLG[i]) with read-back confirmation |
| 16 | [snpx_write_numeric_register.py](https://github.com/underautomation/fanuc.py/blob/main/examples/snpx/snpx_write_numeric_register.py) | Write a numeric register (R[i]) with read-back confirmation |
| 17 | [snpx_write_position_register.py](https://github.com/underautomation/fanuc.py/blob/main/examples/snpx/snpx_write_position_register.py) | Write a position register (PR[i]) in Cartesian or Joint mode |
| 18 | [snpx_write_string_register.py](https://github.com/underautomation/fanuc.py/blob/main/examples/snpx/snpx_write_string_register.py) | Write a string register (SR[i]) with read-back confirmation |
| 19 | [snpx_write_sysvar.py](https://github.com/underautomation/fanuc.py/blob/main/examples/snpx/snpx_write_sysvar.py) | Write a system variable (e.g. speed override) via `set_variable` |
#### 🔌 Telnet - KCL Remote Control
| # | Example | Description |
| --- | ---------------------------------------------------------------------------------------------------------------------------- | ----------------------------------------------------------------- |
| 1 | [telnet_get_position.py](https://github.com/underautomation/fanuc.py/blob/main/examples/telnet/telnet_get_position.py) | Read the current Cartesian position (X, Y, Z, W, P, R) |
| 2 | [telnet_read_variable.py](https://github.com/underautomation/fanuc.py/blob/main/examples/telnet/telnet_read_variable.py) | Read any robot variable by name (e.g. `$MCR.$GENOVERRIDE`) |
| 3 | [telnet_set_port.py](https://github.com/underautomation/fanuc.py/blob/main/examples/telnet/telnet_set_port.py) | Set a digital output port (DOUT, RDO, OPOUT, TPOUT, GOUT) |
| 4 | [telnet_simulate_port.py](https://github.com/underautomation/fanuc.py/blob/main/examples/telnet/telnet_simulate_port.py) | Simulate/unsimulate I/O ports for testing without real hardware |
| 5 | [telnet_task_info.py](https://github.com/underautomation/fanuc.py/blob/main/examples/telnet/telnet_task_info.py) | Get task information: status, current line, routine, program type |
| 6 | [telnet_write_variable.py](https://github.com/underautomation/fanuc.py/blob/main/examples/telnet/telnet_write_variable.py) | Write a numeric value to any robot variable |
| 7 | [telnet_program_control.py](https://github.com/underautomation/fanuc.py/blob/main/examples/telnet/telnet_program_control.py) | Program lifecycle control: run, pause, resume, task info, abort |
#### 🦾 Kinematics - Offline FK & IK (no robot connection needed)
| # | Example | Description |
| --- | ---------------------------------------------------------------------------------------------------------------------------------------- | -------------------------------------------------------------------------------------------------------------- |
| 1 | [kinematics_forward_inverse.py](https://github.com/underautomation/fanuc.py/blob/main/examples/kinematics/kinematics_forward_inverse.py) | Interactive forward & inverse kinematics: select a model, view DH parameters, compute FK then all IK solutions |
#### 🔑 License
| # | Example | Description |
| --- | ------------------------------------------------------------------------------------------------------------------------- | -------------------------------------------------------------------------- |
| 1 | [license_info_example.py](https://github.com/underautomation/fanuc.py/blob/main/examples/license/license_info_example.py) | Display license state, register a license, and view all license properties |
---
## 📌 Feature Documentation
### 🖥️ Telnet KCL - Remote Command Interface
Telnet KCL (Karel Command Language) lets you remotely send commands to the robot controller. It's the simplest way to control programs, read/write variables, and manage I/O.
**What you can do:**
- **Run, pause, hold, continue, and abort programs** remotely
- **Read and write any robot variable** (`$MCR.$GENOVERRIDE`, `$RMT_MASTER`, custom variables, etc.)
- **Set digital output ports** (DOUT, RDO, OPOUT, TPOUT, GOUT)
- **Simulate and unsimulate I/O ports** for testing without physical devices
- **Get task information** - which programs are running, their status, and current line
- **Read the current Cartesian position** of the robot
**Quick example:**
```python
from underautomation.fanuc.fanuc_robot import FanucRobot
from underautomation.fanuc.connection_parameters import ConnectionParameters
from underautomation.fanuc.telnet.kcl_ports import KCLPorts
robot = FanucRobot()
params = ConnectionParameters("192.168.0.1")
params.telnet.enable = True
robot.connect(params)
# Read a variable
result = robot.telnet.get_variable("$MCR.$GENOVERRIDE")
print(f"Speed override: {result.raw_value}%")
# Write a variable
robot.telnet.set_variable("$MCR.$GENOVERRIDE", 50)
# Control programs
robot.telnet.run("MyProgram")
robot.telnet.pause("MyProgram")
robot.telnet.abort("MyProgram", force=True)
# Set a digital output
robot.telnet.set_port(KCLPorts.DOUT, 1, 1) # DOUT[1] = ON
# Simulate an input for testing
robot.telnet.simulate(KCLPorts.DIN, 3, 1) # DIN[3] simulated to ON
robot.telnet.unsimulate(KCLPorts.DIN, 3) # Restore normal operation
# Get task info
info = robot.telnet.get_task_information("MAINPROG")
print(f"Status: {info.task_status_str}, Line: {info.current_line}")
# Read current position
pose = robot.telnet.get_current_pose()
print(f"X={pose.position.x}, Y={pose.position.y}, Z={pose.position.z}")
robot.disconnect()
```
---
### ⚡ SNPX (RobotIF) - High-Speed Industrial Protocol
SNPX provides **fast, structured data exchange** with the robot. It's the best choice for real-time monitoring and high-frequency register access. It supports batch reads for maximum throughput.
**What you can do:**
- **Read/write numeric registers** (R[1], R[2], ...) - single or batch
- **Read/write string registers** (SR[1], SR[2], ...)
- **Read/write position registers** (PR[1], PR[2], ...) with Cartesian and joint data
- **Read/write boolean flags** (FLG[1], FLG[2], ...) - single or batch
- **Read/write digital I/O** - SDI, SDO, RDI, RDO, UI, UO, SI, SO, WI, WO
- **Read/write numeric I/O** - GI, GO, AI, AO (group and analog signals)
- **Read/write system variables** (e.g. speed override)
- **Read the current robot position** in real-time (world and joint coordinates)
- **Read active alarms** and **alarm history** with severity and cause
- **Clear alarms** remotely
**Quick example:**
```python
from underautomation.fanuc.fanuc_robot import FanucRobot
from underautomation.fanuc.connection_parameters import ConnectionParameters
robot = FanucRobot()
params = ConnectionParameters("192.168.0.1")
params.snpx.enable = True
robot.connect(params)
# Read a numeric register
value = robot.snpx.numeric_registers.read(1)
print(f"R[1] = {value}")
# Write a numeric register
robot.snpx.numeric_registers.write(1, 42.5)
# Batch read for maximum speed
batch = robot.snpx.numeric_registers.create_batch_assignment(1, 10)
values = batch.read() # Reads R[1] through R[10] in one call
# Read/write string registers
text = robot.snpx.string_registers.read(1)
robot.snpx.string_registers.write(1, "Hello Fanuc")
# Read a position register
position = robot.snpx.position_registers.read(1)
print(f"PR[1]: X={position.cartesian_position.x}, Y={position.cartesian_position.y}")
# Digital I/O
sdi_values = robot.snpx.sdi.read(1, 8) # Read SDI[1..8]
robot.snpx.sdo.write(1, [True, False]) # Write SDO[1]=ON, SDO[2]=OFF
# Current position
pos = robot.snpx.current_position.read_world_position(1)
print(f"X={pos.cartesian_position.x}, Y={pos.cartesian_position.y}")
# Alarms
robot.snpx.clear_alarms()
# System variables
speed = robot.snpx.integer_system_variables.read("$MCR.$GENOVERRIDE")
robot.snpx.set_variable("$MCR.$GENOVERRIDE", "50")
robot.disconnect()
```
---
### 📐 Kinematics - Offline Forward & Inverse Kinematics
The kinematics module lets you compute **forward kinematics** (joint angles → Cartesian position) and **inverse kinematics** (Cartesian position → all joint angle solutions) **entirely offline** — no robot connection or license required.
It includes built-in Denavit-Hartenberg parameters for **80+ FANUC robot models** (CRX collaborative and standard OPW arms).
**What you can do:**
- **List all supported robot models** and their DH parameters
- **Forward kinematics** - compute the TCP Cartesian pose from 6 joint angles
- **Inverse kinematics** - compute **all** valid joint configurations for a given Cartesian pose
- **No connection needed** - works fully offline, no robot or license required
- **Supports CRX and standard (OPW) kinematics categories**
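For intuition about what the DH parameters encode, here is a minimal, self-contained sketch of the classic Denavit-Hartenberg convention: each joint contributes one 4x4 homogeneous transform, and chaining them yields the forward kinematics. This is an illustration of the underlying math, not the library's internal implementation.

```python
import math

def dh_transform(theta, d, a, alpha):
    """One standard Denavit-Hartenberg link transform as a 4x4 row-major matrix."""
    ct, st = math.cos(theta), math.sin(theta)
    ca, sa = math.cos(alpha), math.sin(alpha)
    return [
        [ct, -st * ca,  st * sa, a * ct],
        [st,  ct * ca, -ct * sa, a * st],
        [0.0,      sa,       ca,      d],
        [0.0,     0.0,      0.0,    1.0],
    ]

def compose(m1, m2):
    """Multiply two 4x4 matrices (chain successive link transforms)."""
    return [[sum(m1[i][k] * m2[k][j] for k in range(4)) for j in range(4)]
            for i in range(4)]

# Two illustrative links: a 540 mm arm segment rotated 30 deg, then a 150 mm offset.
T = compose(dh_transform(math.radians(30), 0.0, 540.0, 0.0),
            dh_transform(0.0, 150.0, 0.0, 0.0))
print(f"end x={T[0][3]:.2f} mm, y={T[1][3]:.2f} mm, z={T[2][3]:.2f} mm")
```

The library's `KinematicsUtils` does the equivalent chaining (plus the inverse problem) using the built-in per-model DH tables.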
**Quick example:**
```python
import math
from underautomation.fanuc.kinematics.arm_kinematic_models import ArmKinematicModels
from underautomation.fanuc.kinematics.dh_parameters import DhParameters
from underautomation.fanuc.kinematics.kinematics_utils import KinematicsUtils
from underautomation.fanuc.common.cartesian_position import CartesianPosition
# Get DH parameters for a specific robot model
dh = DhParameters.from_arm_kinematic_model(ArmKinematicModels.CRX10iA)
print(f"a1={dh.a1}, a2={dh.a2}, a3={dh.a3}, d4={dh.d4}, d5={dh.d5}, d6={dh.d6}")
# Forward kinematics: joint angles (radians) → Cartesian position
joints_rad = [math.radians(j) for j in [0, -30, 45, 0, 60, 0]]
fk = KinematicsUtils.forward_kinematics(joints_rad, dh)
print(f"FK → X={fk.x:.2f}, Y={fk.y:.2f}, Z={fk.z:.2f}, W={fk.w:.2f}, P={fk.p:.2f}, R={fk.r:.2f}")
# Inverse kinematics: Cartesian position → all joint solutions
target = CartesianPosition(fk.x, fk.y, fk.z, fk.w, fk.p, fk.r, None)
solutions = KinematicsUtils.inverse_kinematics(target, dh)
for i, sol in enumerate(solutions, 1):
print(f" IK #{i}: J1={sol.j1:.2f}, J2={sol.j2:.2f}, J3={sol.j3:.2f}, "
f"J4={sol.j4:.2f}, J5={sol.j5:.2f}, J6={sol.j6:.2f}")
```
**TRY IT:**
To test FK and IK, you can run the complete example: [kinematics_forward_inverse.py](https://github.com/underautomation/fanuc.py/blob/main/examples/kinematics/kinematics_forward_inverse.py)
```bash
python .\examples\kinematics\kinematics_forward_inverse.py
```
And get this kind of output (with interactive model selection and joint input):
```
(.venv) PS Fanuc.py> python .\examples\kinematics\kinematics_forward_inverse.py
============================================================
FANUC SDK - Forward & Inverse Kinematics (offline)
============================================================
Available robot models (82):
1. ARCMate0iA 2. ARCMate0iB 3. ARCMate0iB_2
4. ARCMate100iD 5. ARCMate100iD10L 6. ARCMate100iD16S
7. ARCMate100iD8L 8. ARCMate120iD 9. ARCMate120iD12L
10. ARCMate120iD35 11. CR14iAL 12. CR15iA
13. CR35iA 14. CR7iA 15. CR7iAL
16. CRX10iA 17. CRX10iAL 18. LRMate200iD
19. LRMate200iD7C 20. LRMate200iD7L 21. LRMate200iD7LC
22. LaserRobotHA 23. M10iA10M 24. M10iA10MS
25. M10iA12 26. M10iA12S 27. M10iA7L
28. M10iA8L 29. M2000iA1200 30. M2000iA1700L
31. M2000iA2300 32. M2000iA900L 33. M20iA
34. M20iA12L 35. M20iA20M 36. M20iA35M
37. M20iB25 38. M20iB25C 39. M20iB35S
40. M410iC110 41. M410iC185 42. M410iC185_2
43. M410iC500 44. M410iC500_2 45. M710iC12L
46. M710iC20M 47. M710iC45M 48. M710iC50
49. M800iA60 50. M900iB280L 51. M900iB330L
52. M900iB360 53. M900iB400L 54. M900iB700
55. M900iBKAI 56. P350iA45LeftHand 57. P350iA45RightHand
58. P700iANewRightyArmRightOffset 59. R1000iA100F 60. R1000iA100F7
61. R1000iA120F7B 62. R1000iA120F7BS 63. R1000iA120F7BS_2
64. R1000iA120F7BS_3 65. R1000iA120F7B_2 66. R1000iA120F7B_3
67. R1000iA130F 68. R1000iA80F 69. R2000iB125L
70. R2000iB175L 71. R2000iB210FS 72. R2000iB220US
73. R2000iC100S 74. R2000iC125L 75. R2000iC190U
76. R2000iC210F 77. R2000iC210L 78. R2000iC210WE
79. R2000iC210WEProto 80. R2000iC220U 81. R2000iC270F
82. R2000iD100FH
Select a model number [1-82] (default 1): 16
→ Selected model: CRX10iA
Denavit-Hartenberg parameters for CRX10iA:
----------------------------------------
a1 = 0.0000 mm
a2 = 540.0000 mm
a3 = 0.0000 mm
d4 = -540.0000 mm
d5 = 150.0000 mm
d6 = -160.0000 mm
Category: Crx
Enter joint angles in degrees (press Enter for 0):
J1 (0.0000): 10
J2 (0.0000): 20
J3 (0.0000): 40
J4 (0.0000): 10
J5 (0.0000): -10
J6 (0.0000):
Computing forward kinematics for J=[10.00, 20.00, 40.00, 10.00, -10.00, 0.00] deg ...
FK result — Cartesian position:
----------------------------------------
X = 735.4571 mm
Y = -25.2181 mm
Z = 954.8160 mm
W = 14.8408 deg
P = -58.7116 deg
R = 170.7749 deg
Conf = N U T, 0, 0, 0
Enter cartesian position for IK (press Enter to keep FK value):
X [mm] (735.4571):
Y [mm] (-25.2181):
Z [mm] (954.8160):
W [deg] (14.8408):
P [deg] (-58.7116):
R [deg] (170.7749):
Computing inverse kinematics for X=735.4571, Y=-25.2181, Z=954.8160, W=14.8408, P=-58.7116, R=170.7749 ...
8 IK solution(s) found:
----------------------------------------------------------------------------------------------------
# J1 J2 J3 J4 J5 J6 Configuration
----------------------------------------------------------------------------------------------------
1 -15.6154 48.3697 60.5986 142.0264 34.2715 -150.7098 F D T, 0, 0, 0
2 10.2040 47.1644 69.1161 3.0266 -39.0031 7.6007 N D T, 0, 0, 0
3 3.2969 19.0319 27.7002 58.2991 4.7830 -51.7280 F U T, 0, 0, 0
4 10.0000 20.0000 40.0000 10.0000 -10.0000 0.0000 N U T, 0, 0, 0
5 164.3846 -48.3697 119.4014 -37.9736 34.2715 -150.7098 F U B, 0, 0, 0
6 -169.7960 -47.1644 110.8839 -176.9734 -39.0031 7.6007 N U B, 0, 0, 0
7 -176.7031 -19.0319 152.2998 -121.7009 4.7830 -51.7280 F D B, 0, 0, 0
8 -170.0000 -20.0000 140.0000 -170.0000 -10.0000 0.0000 N D B, 0, 0, 0
----------------------------------------------------------------------------------------------------
Done.
```
---
### 📂 FTP - File Transfer & Variable Management
FTP gives you access to the robot's **file system** and **internal variable files**. It's ideal for bulk data access, diagnostics, backups, and program management.
It not only allows you to upload/download files but also provides **structured** and **parsed** access to system files, variable files, and more. You can even get a complete diagnostic snapshot in one call.
**What you can do:**
- **Upload and download files** to/from the controller
- **Browse the controller's file system** - list files, check existence
- **Read all variable files at once** - navigate through an interactive explorer
- **Read numeric, string, and position registers** from variable files
- **Read system variables** (robot name, hostname, language, timers)
- **Get the complete error/alarm history** with codes and timestamps
- **Read all I/O states** (DIN, DOUT, RI, RO, UI, UO, SI, SO, FLG)
- **Get safety status** - E-Stop, deadman, fence open, TP enable
- **Get program/task states** - which programs are running and their call stacks
- **Read installed features/options** on the controller
- **Get a full summary diagnostic** - position, safety, I/O, features, programs in one call
**Quick example:**
```python
from underautomation.fanuc.fanuc_robot import FanucRobot
from underautomation.fanuc.connection_parameters import ConnectionParameters
robot = FanucRobot()
params = ConnectionParameters("192.168.0.1")
params.ftp.enable = True
robot.connect(params)
# File operations
robot.ftp.direct_file_handling.upload_file_to_controller("local.tp", "/md:/remote.tp")
robot.ftp.direct_file_handling.download_file_from_controller("backup.va", "/md:/backup.va")
exists = robot.ftp.direct_file_handling.file_exists("/md:/summary.dg")
# Read registers from variable files
numreg = robot.ftp.known_variable_files.get_numreg_file()
for idx, val in enumerate(numreg.numreg, start=1):
print(f"R[{idx}] = {val}")
posreg = robot.ftp.known_variable_files.get_posreg_file()
strreg = robot.ftp.known_variable_files.get_strreg_file()
# System variables
system = robot.ftp.known_variable_files.get_system_file()
print(f"Robot: {system.robot_name}, Host: {system.hostname}")
# Current position (all groups, all frames)
position = robot.ftp.get_current_position()
for gp in position.groups_position:
print(f"J1={gp.joints_position.j1}, J2={gp.joints_position.j2}")
# Safety status
safety = robot.ftp.get_safety_status()
print(f"E-Stop: {safety.external_e_stop}, TP Enable: {safety.tp_enable}")
# Error history
errors = robot.ftp.get_all_errors_list()
for err in errors.filter_active_alarms():
print(f"[{err.error_code}] {err.message}")
# I/O states (all ports at once)
io = robot.ftp.get_io_state()
for signal in io.states:
print(f"{signal.port}[{signal.id}] = {'ON' if signal.value else 'OFF'}")
# Complete diagnostic in one call
diag = robot.ftp.get_summary_diagnostic()
robot.disconnect()
```
---
## 🔧 Robot Configuration
Some features require enabling protocols on the controller.
### ✅ Enable Telnet KCL
Read full tutorial: [underautomation.com/fanuc/documentation/enable-telnet](https://underautomation.com/fanuc/documentation/enable-telnet)
1. Go to **SETUP > Host Comm**
2. Select **TELNET** → **[DETAIL]**
3. Set a password and reboot
### ✅ Enable FTP
1. Go to **SETUP > Host Comm > FTP**
2. Set username/password
3. Perform a cold start
### ✅ Enable SNPX
- For **FANUC America (R650 FRA)**: Enable option R553 "HMI Device SNPX"
- For **FANUC Ltd. (R651 FRL)**: No additional options required
---
## 🔍 Compatibility
| | Supported | text/markdown | UnderAutomation | support@underautomation.com | null | null | Commercial | null | [
"Programming Language :: Python :: 3",
"Operating System :: Microsoft :: Windows",
"Operating System :: POSIX :: Linux",
"Operating System :: MacOS",
"Intended Audience :: Developers",
"Intended Audience :: Manufacturing",
"Topic :: Scientific/Engineering",
"Topic :: Software Development :: Libraries"... | [] | https://underautomation.com/fanuc | null | >=3.7 | [] | [] | [] | [
"pythonnet==3.0.5"
] | [] | [] | [] | [
"Documentation, https://underautomation.com/fanuc/documentation/get-started-python",
"Source, https://github.com/underautomation/Fanuc.py"
] | twine/6.2.0 CPython/3.9.25 | 2026-02-19T14:17:19.307679 | underautomation_fanuc-4.4.0.0.tar.gz | 799,378 | 35/e3/da1d3210eb9d58fa140444e2873554740f4dbe9703fd2405fb954962dbfc/underautomation_fanuc-4.4.0.0.tar.gz | source | sdist | null | false | 79f4d1535053bd88aee5ceb413691edb | 4c6c50b0231a9e7b6f08f15e56893ba952e1a249168c89d0bd259ced48fb5906 | 35e3da1d3210eb9d58fa140444e2873554740f4dbe9703fd2405fb954962dbfc | null | [] | 0 |
2.4 | pymittagleffler | 0.1.9 | High performance implementation of the Mittag-Leffler function | <div align="center">
<img width="600" src="https://raw.githubusercontent.com/alexfikl/mittagleffler/refs/heads/main/python/docs/_static/mittag-leffler-accuracy-contour.png"/><br>
</div>
# mittagleffler
[](https://github.com/alexfikl/mittagleffler/actions?query=branch%3Amain+workflow%3ACI)
[](https://api.reuse.software/info/github.com/alexfikl/mittagleffler)
[](https://pypi.org/project/pymittagleffler/)
[](https://crates.io/crates/mittagleffler)
[](https://mittagleffler.readthedocs.io/en/latest)
[](https://docs.rs/mittagleffler/latest/mittagleffler/)
This library implements the two-parameter Mittag-Leffler function.
Currently only the algorithm described in the paper by [Roberto Garrappa (2015)](<https://doi.org/10.1137/140971191>)
is implemented. This seems to be the most accurate and computationally efficient
method to date for evaluating the Mittag-Leffler function.
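For background, the two-parameter Mittag-Leffler function is defined by the power series `E_{a,b}(z) = sum_{k>=0} z**k / Gamma(a*k + b)`. A naive truncated-series sketch (illustrative only: it works for small real `z`, and is exactly the kind of evaluation that Garrappa's algorithm improves on) looks like:

```python
from math import gamma

def ml_series(z, alpha, beta, terms=64):
    """Naive truncated power series for the two-parameter Mittag-Leffler function.

    Illustrative only: fine for small real z, but numerically far weaker than
    the contour-integral algorithm this library implements.
    """
    return sum(z**k / gamma(alpha * k + beta) for k in range(terms))

# Sanity checks against known special cases:
print(ml_series(0.5, 1.0, 1.0))   # E_{1,1}(z) = exp(z)
print(ml_series(0.25, 2.0, 1.0))  # E_{2,1}(z) = cosh(sqrt(z))
```

For `(alpha, beta) = (1, 1)` the series collapses to the ordinary exponential, which is one of the special cases the library handles directly.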
**Links**
* *Documentation*: [Rust (docs.rs)](https://docs.rs/mittagleffler/latest/mittagleffler/)
and [Python (readthedocs.io)](https://mittagleffler.readthedocs.io).
* *Code*: [Github](https://github.com/alexfikl/mittagleffler).
* *License*: [MIT](https://spdx.org/licenses/MIT.html) (see `LICENSES/MIT.txt`).
**Other implementations**
* [ml.m](https://www.mathworks.com/matlabcentral/fileexchange/48154-the-mittag-leffler-function) (MATLAB):
implements the three-parameter Mittag-Leffler function.
* [ml_matrix.m](https://www.mathworks.com/matlabcentral/fileexchange/66272-mittag-leffler-function-with-matrix-arguments) (MATLAB):
implements the matrix-valued two-parameter Mittag-Leffler function.
* [MittagLeffler.jl](https://github.com/JuliaMath/MittagLeffler.jl) (Julia):
implements the two-parameter Mittag-Leffler function and its derivative.
* [MittagLeffler](https://github.com/gurteksinghgill/MittagLeffler) (R):
implements the three-parameter Mittag-Leffler function.
* [mittag-leffler](https://github.com/khinsen/mittag-leffler) (Python):
implements the three-parameter Mittag-Leffler function.
* [mlf](https://github.com/tranqv/Mittag-Leffler-function-and-its-derivative) (Fortran 90):
implements the three-parameter Mittag-Leffler function.
* [mlpade](https://github.com/matt-black/mlpade) (MATLAB):
implements the two-parameter Mittag-Leffler function.
* [MittagLeffler](https://github.com/droodman/Mittag-Leffler-for-Stata) (Stata):
implements the three-parameter Mittag-Leffler function.
* [MittagLefflerE](https://reference.wolfram.com/language/ref/MittagLefflerE.html.en) (Mathematica):
implements the two-parameter Mittag-Leffler function.
# Rust Crate
The library is available as a Rust crate that implements the main algorithms.
Evaluating the Mittag-Leffler function can be performed directly by
```rust
use mittagleffler::MittagLeffler;
use num_complex::Complex64;
let alpha = 0.75;
let beta = 1.25;
let z = Complex64::new(1.0, 2.0);
println!("E({}; {}, {}) = {}", z, alpha, beta, z.mittag_leffler(alpha, beta));
let z: f64 = 3.1415;
println!("E({}; {}, {}) = {}", z, alpha, beta, z.mittag_leffler(alpha, beta));
```
This method calls the best underlying algorithm and takes care of any special
cases that are known in the literature, e.g. for `(alpha, beta) = (1, 1)` the
Mittag-Leffler function reduces to the standard exponential.
To call a specific algorithm, we can do
```rust
use mittagleffler::GarrappaMittagLeffler;
use num_complex::Complex64;

let alpha = 0.75;
let beta = 1.25;
let eps = 1.0e-8;
let ml = GarrappaMittagLeffler::new(eps);

let z = Complex64::new(1.0, 2.0);
println!("E({}; {}, {}) = {}", z, alpha, beta, ml.evaluate(z, alpha, beta));
```
The algorithm from Garrappa (2015) has several parameters that can be tweaked
for better performance or accuracy. They can be found in the documentation of the
structure, but should not be changed unless there is good reason!
# Python Bindings
The library also has Python bindings (using [pyo3](https://github.com/PyO3/pyo3))
that can be found in the `python` directory. The bindings are written to work
with scalars and with `numpy` arrays equally. For example
```python
import numpy as np
from pymittagleffler import mittag_leffler
alpha, beta = 2.0, 2.0
z = np.linspace(0.0, 1.0, 128)
result = mittag_leffler(z, alpha, beta)
```
These are available on PyPI under the name `pymittagleffler`.
| text/markdown; charset=UTF-8; variant=GFM | null | Alexandru Fikl <alexfikl@gmail.com> | null | Alexandru Fikl <alexfikl@gmail.com> | null | fractional-calculus, special-functions | [
"Development Status :: 3 - Alpha",
"Intended Audience :: Developers",
"Intended Audience :: Science/Research",
"Natural Language :: English",
"Programming Language :: Python :: 3 :: Only",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Pyth... | [] | https://github.com/alexfikl/mittagleffler | null | >=3.10 | [] | [] | [] | [
"numpy>=2",
"sphinx>=6; extra == \"docs\"",
"sphinx-book-theme; extra == \"docs\""
] | [] | [] | [] | [
"Documentation, https://mittagleffler.readthedocs.io",
"Repository, https://github.com/alexfikl/mittagleffler"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-19T14:16:30.118791 | pymittagleffler-0.1.9.tar.gz | 263,687 | 60/5b/f3513253f27f791fe703b01afba36908bb6a5dfde411b90f6aa6b97284a8/pymittagleffler-0.1.9.tar.gz | source | sdist | null | false | 56095289e8a22e8616a8e10332a8eeea | 601d1a00025c3c6717ea0183ff6b563cd9b4fee85ac46e239824a1f26d120134 | 605bf3513253f27f791fe703b01afba36908bb6a5dfde411b90f6aa6b97284a8 | MIT | [] | 482 |
2.4 | PEAKQC | 0.1.6 | Module for quality control of ATAC-seq data | 


<img src="docs/source/_static/logo.png" alt="drawing" width="500"/>
Periodicity Evaluation in scATAC-seq data for quality assessment
A Python tool for ATAC-seq quality control in single cells.
On the bulk level, quality control approaches rely on four key aspects:
- signal-to-noise ratio
- library complexity
- mitochondrial to nuclear DNA ratio
- fragment length distribution
PEAKQC relies on the evaluation of the fragment length distribution.
While this evaluation can be done visually on the bulk level, that is not possible on the single-cell level.
PEAKQC solves this constraint with a convolution-based algorithmic approach.
# API Documentation
A detailed API documentation is provided by our read the docs page:
https://loosolab.pages.gwdg.de/software/peakqc/
# Workflow
To execute the tool, an AnnData object and fragments corresponding to the cells in the AnnData have to be provided. The fragments can either be read from a BAM file directly or from a fragments file in BED format. If a fragments BED file is available, using it is recommended to shorten the runtime.

# Installation
## PyPi
```
pip install peakqc
```
## From Source
### 1. Environment & Package Installation
1. Clone the repository. This will download the repository into the current directory.
```
git clone git@gitlab.gwdg.de:loosolab/software/peakqc.git
```
2. Change the working directory to the newly created repository directory.
```
cd peakqc
```
3. Install analysis environment. Note: using `mamba` is faster than `conda`, but this requires mamba to be installed.
```
mamba env create -f peakqc_env.yml
```
4. Activate the environment.
```
conda activate peakqc
```
5. Install PEAKQC into the environment.
```
pip install .
```
### 2. Package Installation
1. Clone the repository. This will download the repository into the current directory.
```
git clone git@gitlab.gwdg.de:loosolab/software/peakqc.git
```
2. Change the working directory to the newly created repository directory.
```
cd peakqc
```
3. Install PEAKQC into the environment.
```
pip install .
```
# Quickstart
Below is a minimal example showing how to integrate FLD scoring into a Jupyter Notebook. A fully worked example is available at [`paper/example_notebook.ipynb`](paper/example_notebook.ipynb).
1. **Load your AnnData object**
```python
import scanpy as sc
# replace with your path to the .h5ad file
anndata = sc.read_h5ad('path/to/your_data.h5ad')
```
Note: We recommend storing your cell barcodes as the `.obs` index in `adata`. If your barcodes are instead in a specific `.obs` column, you can override this via the `barcode_col` parameter (see below).
2. **Import FLD scoring function**
```python
from peakqc.fld_scoring import add_fld_metrics
```
3. **Prepare fragment files**
- Provide either a BED or BAM file via `fragments=`.
- BED files are recommended for faster runtime.
- Example:
```python
fragments = 'path/to/fragments.bed' # or .bam
```
4. **Run FLD scoring**
```python
adata = add_fld_metrics(adata=anndata,
fragments=fragments,
barcode_col=None,
plot=True,
save_density=None,
save_overview=None,
sample=0,
n_threads=8,
sample_size=5000,
mc_seed=42,
mc_samples=1000
)
```
5. **Filter on PEAKQC scores**
In our experience, PEAKQC scores above 100 are generally effective for filtering out low-quality cells; PEAKQC scores correlate positively with cleaner FLD patterns. Note, however, that optimal thresholds can vary between datasets and should be tuned to achieve reliable results.
Threshold selection may also depend on the specific requirements of your downstream analysis, and should be adjusted accordingly.
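As a sketch of that filtering step, assuming the scores end up in a column of `adata.obs` (the column name `fld_score` below is a placeholder, not PEAKQC's documented API; check your AnnData for the actual name), the thresholding logic is just a boolean mask:

```python
# Hypothetical scores keyed by cell barcode; in a real run they would live in
# a column of adata.obs (the column name below is a placeholder).
scores = {"AAACGGC-1": 152.3, "TTTACGT-1": 41.7, "GGGTCAA-1": 208.9}

threshold = 100  # a common starting point; tune per dataset
keep = [barcode for barcode, score in scores.items() if score > threshold]
print(keep)

# The AnnData equivalent would be a one-liner along the lines of:
# adata = adata[adata.obs["fld_score"] > threshold].copy()
```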
For a step-by-step walkthrough along with plotting examples, see the example notebook at
`paper/example_notebook.ipynb`
| text/markdown | Jan Detleffsen, Brenton Joey Bruns, Mette Bentsen, Carsten Kuenne, Mario Looso | null | null | Jan Detleffsen <Jan.Detleffsen@mpi-bn.mpg.de> | null | quality control, single-cell, single-cell analysis, scATAC-seq, epigenomics, chromatin accessibility, QC, reproducible research, Scanpy, AnnData | [
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.9",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: 3.13",
"Programming Language :: Python :: 3.14"
] | [] | null | null | >=3.9 | [] | [] | [] | [
"scanpy>=1.9",
"pysam",
"scipy",
"pytest; extra == \"test\"",
"pytest-mock; extra == \"test\"",
"pytest-cov; extra == \"test\"",
"pytest-html; extra == \"test\""
] | [] | [] | [] | [
"Repository, https://github.com/loosolab/PEAKQC",
"Issues, https://github.com/loosolab/PEAKQC/issues",
"Changelog, https://github.com/loosolab/PEAKQC/blob/main/CHANGES.md"
] | twine/6.2.0 CPython/3.14.3 | 2026-02-19T14:15:27.887670 | peakqc-0.1.6.tar.gz | 95,179,737 | 0a/0c/9b6d79c013b54b4878906fafc1c1a09ac89cc50c2aa81bba54d70d371d2d/peakqc-0.1.6.tar.gz | source | sdist | null | false | 2b416c385a08f3de285dfa577e362fae | cd41369c31caaa0dedd38080dae56de599ebebb9b1b7b2829ca805c7d2f2044c | 0a0c9b6d79c013b54b4878906fafc1c1a09ac89cc50c2aa81bba54d70d371d2d | MIT | [
"LICENSE"
] | 0 |
2.4 | astreum | 0.3.72 | Python library to interact with the Astreum blockchain and its virtual machine. | # lib
Python library to interact with the Astreum blockchain and its virtual machine.
[View on PyPI](https://pypi.org/project/astreum/)
## Configuration
When initializing an `astreum.Node`, pass a dictionary with any of the options below. Only the parameters you want to override need to be present – everything else falls back to its default.
### Core Configuration
| Parameter | Type | Default | Description |
| --------------------------- | ---------- | -------------- | -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| `hot_storage_limit` | int | `1073741824` | Maximum bytes kept in the hot cache before new atoms are skipped (1 GiB). |
| `cold_storage_limit` | int | `10737418240` | Cold storage write threshold (10 GiB by default); set to `0` to skip the limit. |
| `cold_storage_path` | string | `None` | Directory where persisted atoms live; Astreum creates it on startup and skips cold storage when unset. |
| `cold_storage_level_size` | int | `10485760` | Size threshold (10 MiB) for collating `level_0` into the first cold-storage index/data pair. |
| `atom_fetch_interval` | float | `0.25` | Poll interval (seconds) while waiting for missing atoms in `get_atom_list_from_storage`; `0` disables waiting. |
| `atom_fetch_retries` | int | `8` | Number of poll attempts for missing atoms; max wait is roughly `interval * retries`, `0` disables waiting. |
| `logging_retention_days` | int | `7` | Number of days to keep rotated log files (daily gzip). |
| `chain_id` | int | `0` | Chain identifier used for validation (0 = test, 1 = main). |
| `verbose` | bool | `False` | When **True**, also mirror JSON logs to stdout with a human-readable format. |
### Communication
| Parameter | Type | Default | Description |
| ------------------------ | ----------- | --------------------- | ------------------------------------------------------------------------------------------------------- |
| `relay_secret_key` | hex string | Auto-generated | X25519 private key used for the relay route; a new keypair is created when this field is omitted. |
| `validation_secret_key` | hex string | `None` | Optional Ed25519 key that lets the node join the validation route; leave blank to opt out of validation. |
| `use_ipv6` | bool | `False` | Bind the incoming/outgoing sockets on IPv6 (the OS still listens on IPv4 if a peer speaks both). |
| `incoming_port` | int | `52780` | UDP port the relay binds to; pass `0` or omit to let the OS pick an ephemeral port. |
| `default_seed` | string | `"bootstrap.astreum.org:52780"` | Default address to ping before joining; set to `None` to disable the built-in default. |
| `additional_seeds` | list\[str\] | `[]` | Extra addresses appended to the bootstrap list; each must look like `host:port` or `[ipv6]:port`. |
| `peer_timeout` | int | `900` | Evict peers that have not been seen within this many seconds (15 minutes). |
| `peer_timeout_interval` | int | `10` | How often (seconds) the peer manager checks for stale peers. |
| `bootstrap_retry_interval` | int | `30` | How often (seconds) to retry bootstrapping when the peer list is empty. |
| `storage_index_interval` | int | `600` | How often (seconds) to re-advertise entries in `node.atom_advertisments` to the closest known peer. |
| `incoming_queue_size_limit` | int | `67108864` | Soft cap (bytes) for inbound queue usage tracked by `enqueue_incoming`; set to `0` to disable. |
| `incoming_queue_timeout` | float | `1.0` | When > 0, `enqueue_incoming` waits up to this many seconds for space before dropping the payload. |
Advertisements: `node.atom_advertisments` holds `(atom_id, payload_type, expires_at)` tuples. Use `node.add_atom_advertisement` or `node.add_atom_advertisements` to enqueue entries (`expires_at=None` keeps them indefinite). Validators automatically advertise block, transaction (main and detail lists), receipt, and account trie lists for 15 minutes by default.
> **Note**
> The peer‑to‑peer *route* used for object discovery is always enabled.
> If `validation_secret_key` is provided the node automatically joins the validation route too.
### Example
```python
from astreum.node import Node
config = {
"relay_secret_key": "ab…cd", # optional – hex encoded
"validation_secret_key": "12…34", # optional – validator
"hot_storage_limit": 1073741824, # cap hot cache at 1 GiB
"cold_storage_limit": 10737418240, # cap cold storage at 10 GiB
"cold_storage_path": "./data/node1",
"incoming_port": 52780,
"use_ipv6": False,
"default_seed": None,
"additional_seeds": [
"127.0.0.1:7374"
]
}
node = Node(config)
# … your code …
```
## Astreum Machine Quickstart
The Astreum virtual machine (VM) is embedded inside `astreum.Node`. You feed it Astreum script, and the node tokenizes, parses, and evaluates.
```python
# Define a named function int.add (stack body) and call it with bytes 1 and 2
import uuid
from astreum import Node, Env, Expr
# 1) Spin‑up a stand‑alone VM
node = Node()
# 2) Create an environment (simple manual setup)
env_id = uuid.uuid4()
node.environments[env_id] = Env()
# 3) Build a function value using a low‑level stack body via `sk`.
# Body does: $0 $1 add (i.e., a + b)
low_body = Expr.ListExpr([
Expr.Symbol("$0"), # a (first arg)
Expr.Symbol("$1"), # b (second arg)
Expr.Symbol("add"),
])
fn_body = Expr.ListExpr([
Expr.Symbol("a"),
Expr.Symbol("b"),
Expr.ListExpr([low_body, Expr.Symbol("sk")]),
])
params = Expr.ListExpr([Expr.Symbol("a"), Expr.Symbol("b")])
int_add_fn = Expr.ListExpr([fn_body, params, Expr.Symbol("fn")])
# 4) Store under the name "int.add"
node.env_set(env_id, "int.add", int_add_fn)
# 5) Retrieve the function and call it with bytes 1 and 2
bound = node.env_get(env_id, "int.add")
call = Expr.ListExpr([Expr.Bytes(b"\x01"), Expr.Bytes(b"\x02"), bound])
res = node.high_eval(env_id, call)
# sk returns a list of bytes; for 1+2 expect a single byte with value 3
print([int.from_bytes(b.value, 'big', signed=True) for b in res.elements]) # [3]
```
### Handling errors
Both helpers raise `ParseError` (from `astreum.machine.error`) when something goes wrong:
* Unterminated string literals are caught by `tokenize`.
* Unexpected or missing parentheses are caught by `parse`.
Catch the exception to provide developer‑friendly diagnostics:
```python
from astreum.machine.error import ParseError

try:
tokens = tokenize(bad_source)
expr, _ = parse(tokens)
except ParseError as e:
print("Parse failed:", e)
```
---
## Logging
Every `Node` instance wires up structured logging automatically:
- Logs land in per-instance files named `node.log` under `%LOCALAPPDATA%\Astreum\lib-py\logs/<instance_id>` on Windows and `$XDG_STATE_HOME` (or `~/.local/state`)/`Astreum/lib-py/logs/<instance_id>` on other platforms. The `<instance_id>` is the first 16 hex characters of a BLAKE3 hash of the caller's file path, so running the node from different entry points keeps their logs isolated.
- Files rotate at midnight UTC with gzip compression (`node-YYYY-MM-DD.log.gz`) and retain 7 days by default. Override via `config["logging_retention_days"]`.
- Each event is a single JSON line containing timestamp, level, logger, message, process/thread info, module/function, and the derived `instance_id`.
- Set `config["verbose"] = True` to mirror logs to stdout in a human-friendly format like `[2025-04-13-42-59] [info] Starting Astreum Node`.
- The very first entry emitted is the banner `Starting Astreum Node`, signalling that the logging pipeline is live before other subsystems spin up.
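Because each event is a single JSON line, logs are easy to post-process with the standard library. The field names below are an assumption based on the description above, not the exact schema:

```python
import json

# One structured log line as described above (field names assumed).
line = '{"timestamp": "2025-04-13T13:42:59Z", "level": "info", "message": "Starting Astreum Node"}'

event = json.loads(line)
print(event["level"], event["message"])  # -> info Starting Astreum Node
```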
## Testing
```bash
python3 -m venv venv
source venv/bin/activate
pip install -e .
```
For all tests:
```
python3 -m unittest discover -s tests
```
For individual tests:
| Test | Pass |
| --- | --- |
| `python3 -m unittest tests.node.test_current_validator` | ✅ |
| `python3 -m unittest tests.node.test_node_connection` | ✅ |
| `python3 -m unittest tests.node.test_node_init` | |
| `python3 -m unittest tests.node.test_node_validation` | |
| `python3 -m unittest tests.node.tokenize` | |
| `python3 -m unittest tests.node.parse` | |
| `python3 -m unittest tests.node.function` | |
| `python3 -m unittest tests.node.stack` | |
| `python3 -m unittest tests.communication.test_message_port` | |
| `python3 -m unittest tests.communication.test_integration_port_handling` | |
| `python3 -m unittest tests.storage.indexing` | |
| `python3 -m unittest tests.storage.cold` | |
| `python3 -m unittest tests.storage.utils` | |
| `python3 -m unittest tests.models.test_merkle` | |
| `python3 -m unittest tests.models.test_patricia` | |
| `python3 -m unittest tests.block.atom` | |
| `python3 -m unittest tests.block.nonce` | |
| text/markdown | null | "Roy R. O. Okello" <roy@stelar.xyz> | null | null | null | null | [
"Programming Language :: Python :: 3",
"License :: OSI Approved :: MIT License",
"Operating System :: OS Independent"
] | [] | null | null | >=3.8 | [] | [] | [] | [
"pycryptodomex==3.21.0",
"cryptography==44.0.2",
"blake3==1.0.4"
] | [] | [] | [] | [
"Homepage, https://github.com/astreum/lib-py",
"Issues, https://github.com/astreum/lib-py/issues"
] | twine/6.2.0 CPython/3.12.12 | 2026-02-19T14:15:20.118428 | astreum-0.3.72.tar.gz | 90,006 | 82/70/ae29204633d45b05384544aedb9859abd463197f4c2b21b8c7acd94905c0/astreum-0.3.72.tar.gz | source | sdist | null | false | 96bc19b0b3301e1c433752b55e6b8aba | 8f7382e1af770463045ba5e536eb36a209a371c3020d7a07af371ff75ed40f33 | 8270ae29204633d45b05384544aedb9859abd463197f4c2b21b8c7acd94905c0 | null | [
"LICENSE"
] | 268 |
2.4 | nova-mvvm | 0.16.2 | A Python Package for Model-View-ViewModel pattern | MVVM Library for Python
=======================
# Introduction
`nova-mvvm` is a Python package designed to simplify the implementation of the Model-View-ViewModel (MVVM) pattern.
This library provides tools and utilities that help in building clean, scalable, and maintainable GUI applications using MVVM architecture in Python.
It currently supports PyQt5, PyQt6, [Trame](https://github.com/Kitware/trame) and [Panel](https://github.com/holoviz/panel) GUI frameworks.
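As a refresher on the pattern itself, here is a plain-Python sketch of the ViewModel idea (illustrative only; this is not the `nova-mvvm` API):

```python
# Minimal MVVM sketch: a ViewModel exposes state and notifies bound views
# on change, so the view layer never touches the model directly.
class CounterViewModel:
    def __init__(self):
        self._observers = []
        self._count = 0

    def bind(self, callback):
        self._observers.append(callback)

    @property
    def count(self):
        return self._count

    @count.setter
    def count(self, value):
        self._count = value
        for notify in self._observers:
            notify(value)  # push the new state to every bound view

vm = CounterViewModel()
vm.bind(lambda v: print(f"view sees {v}"))
vm.count = 3  # prints: view sees 3
```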
| text/markdown | null | Sergey Yakubov <yakubovs@ornl.gov>, John Duggan <dugganjw@ornl.gov>, Greg Watson <watsongr@ornl.gov> | null | null | null | MVVM, python | [] | [] | null | null | <3.14,>=3.10 | [] | [] | [] | [
"deepdiff",
"pydantic",
"trame",
"panel; extra == \"panel\"",
"pyqt5; extra == \"pyqt5\"",
"pyqt6; extra == \"pyqt6\""
] | [] | [] | [] | [] | Hatch/1.16.3 cpython/3.13.9 HTTPX/0.28.1 | 2026-02-19T14:15:16.867920 | nova_mvvm-0.16.2-py3-none-any.whl | 21,001 | a5/da/52d60f3eeae1154ee09977b75f8bb04745643e4fb67ef8e84efc743a8015/nova_mvvm-0.16.2-py3-none-any.whl | py3 | bdist_wheel | null | false | 6c57c7b73e878e4ad611a8b22daec43a | 0097a87ee26d76ae29b014f904966c4f744423800f2f244aa6bd405d33ce21bd | a5da52d60f3eeae1154ee09977b75f8bb04745643e4fb67ef8e84efc743a8015 | MIT | [
"LICENSE"
] | 283 |
2.4 | pydantic-monty | 0.0.7 | Python bindings for the Monty sandboxed Python interpreter | # pydantic-monty
Python bindings for the Monty sandboxed Python interpreter.
## Installation
```bash
pip install pydantic-monty
```
## Usage
### Basic Expression Evaluation
```python
import pydantic_monty
# Simple code with no inputs
m = pydantic_monty.Monty('1 + 2')
print(m.run())
#> 3
```
### Using Input Variables
```python
import pydantic_monty
# Create with code that uses input variables
m = pydantic_monty.Monty('x * y', inputs=['x', 'y'])
# Run multiple times with different inputs
print(m.run(inputs={'x': 2, 'y': 3}))
#> 6
print(m.run(inputs={'x': 10, 'y': 5}))
#> 50
```
### Resource Limits
```python
import pydantic_monty
m = pydantic_monty.Monty('x + y', inputs=['x', 'y'])
# With resource limits
limits = pydantic_monty.ResourceLimits(max_duration_secs=1.0)
result = m.run(inputs={'x': 1, 'y': 2}, limits=limits)
assert result == 3
```
### External Functions
```python
import pydantic_monty
# Code that calls an external function
m = pydantic_monty.Monty('double(x)', inputs=['x'], external_functions=['double'])
# Provide the external function implementation at runtime
result = m.run(inputs={'x': 5}, external_functions={'double': lambda x: x * 2})
print(result)
#> 10
```
### Iterative Execution with External Functions
Use `start()` and `resume()` to handle external function calls iteratively,
giving you control over each call:
```python
import pydantic_monty
code = """
data = fetch(url)
len(data)
"""
m = pydantic_monty.Monty(code, inputs=['url'], external_functions=['fetch'])
# Start execution - pauses when fetch() is called
result = m.start(inputs={'url': 'https://example.com'})
print(type(result))
#> <class 'pydantic_monty.MontySnapshot'>
print(result.function_name) # fetch
#> fetch
print(result.args)
#> ('https://example.com',)
# Perform the actual fetch, then resume with the result
result = result.resume(return_value='hello world')
print(type(result))
#> <class 'pydantic_monty.MontyComplete'>
print(result.output)
#> 11
```
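The suspend/resume control flow above can be mimicked with a plain Python generator, which may help build intuition for the driver loop (pattern illustration only; this is not the `pydantic_monty` API):

```python
# The "program" yields each external call and receives its return value,
# analogous to how start()/resume() pause and continue sandboxed execution.
def program(url):
    data = yield ("fetch", url)   # pause at the external call
    return len(data)

gen = program("https://example.com")
call = next(gen)                   # ('fetch', 'https://example.com')
try:
    gen.send("hello world")        # resume with the call's result
except StopIteration as done:
    print(done.value)
#> 11
```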
### Serialization
Both `Monty` and `MontySnapshot` can be serialized to bytes and restored later.
This allows caching parsed code or suspending execution across process boundaries:
```python
import pydantic_monty
# Serialize parsed code to avoid re-parsing
m = pydantic_monty.Monty('x + 1', inputs=['x'])
data = m.dump()
# Later, restore and run
m2 = pydantic_monty.Monty.load(data)
print(m2.run(inputs={'x': 41}))
#> 42
```
Execution state can also be serialized mid-flight:
```python
import pydantic_monty
m = pydantic_monty.Monty('fetch(url)', inputs=['url'], external_functions=['fetch'])
progress = m.start(inputs={'url': 'https://example.com'})
# Serialize the execution state
state = progress.dump()
# Later, restore and resume (e.g., in a different process)
progress2 = pydantic_monty.MontySnapshot.load(state)
result = progress2.resume(return_value='response data')
print(result.output)
#> response data
```
| text/markdown; charset=UTF-8; variant=GFM | null | null | null | null | MIT | null | [
"Development Status :: 3 - Alpha",
"Programming Language :: Python",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3 :: Only",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programming Language... | [] | https://github.com/pydantic/monty/ | null | >=3.10 | [] | [] | [] | [] | [] | [] | [] | [
"Homepage, https://github.com/pydantic/monty",
"Source, https://github.com/pydantic/monty"
] | uv/0.10.4 {"installer":{"name":"uv","version":"0.10.4","subcommand":["publish"]},"python":null,"implementation":{"name":null,"version":null},"distro":{"name":"Ubuntu","version":"24.04","id":"noble","libc":null},"system":{"name":null,"release":null},"cpu":null,"openssl_version":null,"setuptools_version":null,"rustc_version":null,"ci":true} | 2026-02-19T14:14:48.648397 | pydantic_monty-0.0.7-cp311-cp311-manylinux_2_17_aarch64.manylinux2014_aarch64.whl | 6,059,238 | 98/f9/9471a56881ba8b2b87dc6bc274194fb3e0d58ad61d1ee17a6427690c951e/pydantic_monty-0.0.7-cp311-cp311-manylinux_2_17_aarch64.manylinux2014_aarch64.whl | cp311 | bdist_wheel | null | false | 18482a116afa6dff3a9a262c361a8bf7 | 300dc5dcaae167f61540037cc5cb66ee63edb39298e04807472afd6f14c7d4cf | 98f99471a56881ba8b2b87dc6bc274194fb3e0d58ad61d1ee17a6427690c951e | null | [] | 23,017 |
2.4 | findiff | 0.13.1 | A Python package for finite difference derivatives in any number of dimensions. | # <img src="docs/assets/findiff_logo.png" width="100px"> findiff
![PyPI version](https://img.shields.io/pypi/v/findiff.png?style=flat-square&color=brightgreen)
[Documentation](https://maroba.github.io/findiff/)
[Downloads](https://pepy.tech/project/findiff)
A Python package for finite difference numerical derivatives and partial differential equations in
any number of dimensions.
## Main Features
* Differentiate arrays of any number of dimensions along any axis with any desired accuracy order
* Accurate treatment of grid boundary
* Can handle arbitrary linear combinations of derivatives with constant and variable coefficients
* Fully vectorized for speed
* Matrix representations of arbitrary linear differential operators
* Solve partial differential equations with Dirichlet or Neumann boundary conditions
* Symbolic representation of finite difference schemes
* **New in version 0.11**: More comfortable API (keeping the old API available)
* **New in version 0.12**: Periodic boundary conditions for differential operators and PDEs.
* **New in version 0.13**: Compact (implicit) finite differences with spectral-like resolution.
## Installation
```
pip install --upgrade findiff
```
## Documentation and Examples
You can find the documentation of the code including examples of application at https://maroba.github.io/findiff/.
## Taking Derivatives
*findiff* allows you to easily define derivative operators that you can apply to *numpy* arrays of
any dimension.
Consider the simple 1D case of an equidistant grid
with a first derivative $\displaystyle \frac{\partial}{\partial x}$ along the only axis (0):
```python
import numpy as np
from findiff import Diff
# define the grid:
x = np.linspace(0, 1, 100)
# the array to differentiate:
f = np.sin(x) # as an example
# Define the derivative:
d_dx = Diff(0, x[1] - x[0])
# Apply it:
df_dx = d_dx(f)
```
Similarly, you can define partial derivatives along other axes, for example, if $z$ is the 2-axis, we can write
$\frac{\partial}{\partial z}$ as:
```python
Diff(2, dz)
```
`Diff` always creates a first derivative. For higher derivatives, you simply exponentiate them, for example for $\frac{\partial^2}{\partial x^2}$
```
d2_dx2 = Diff(0, dx)**2
```
and apply it as before.
You can also define more general differential operators intuitively, like
$$
2x \frac{\partial^3}{\partial x^2 \partial z} + 3 \sin(y)z^2 \frac{\partial^3}{\partial x \partial y^2}
$$
which can be written as
```python
# define the operator
diff_op = 2 * X * Diff(0)**2 * Diff(2) + 3 * sin(Y) * Z**2 * Diff(0) * Diff(1)**2
# set the grid you use (equidistant here)
diff_op.set_grid({0: dx, 1: dy, 2: dz})
# apply the operator
result = diff_op(f)
```
where `X, Y, Z` are *numpy* arrays with meshed grid points. Here you see that you can also define your grid
lazily.
Of course, standard operators from vector calculus like gradient, divergence and curl are also available
as shortcuts.
If one or more axis of your grid are periodic, you can specify that when defining the derivative or later
when setting the grid. For example:
```python
d_dx = Diff(0, dx, periodic=True)
# or later
d_dx = Diff(0)
d_dx.set_grid({0: {"h": dx, "periodic": True}})
```
More examples can be found [here](https://maroba.github.io/findiff/examples.html) and in [this blog](https://medium.com/p/7e54132a73a3).
### Accuracy Control
When constructing an instance of `Diff`, you can request the desired accuracy
order by setting the keyword argument `acc`. For example:
```python
d_dx = Diff(0, dx, acc=4)
df_dx = d_dx(f)
```
Alternatively, you can also split operator definition and configuration:
```python
d_dx = Diff(0, dx)
d_dx.set_accuracy(2)
df_dx = d_dx(f)
```
which comes in handy if you have a complicated expression of differential operators, because then you
can specify it on the whole expression and it will be passed down to all basic operators.
If not specified, second order accuracy will be taken by default.
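The effect of the accuracy order is easy to see numerically: halving the step divides the error of a second-order scheme by about 4 and of a fourth-order scheme by about 16. A standard-library sketch (hand-written central differences, not using findiff):

```python
from math import sin, cos

def d1_acc2(f, x, h):  # central difference, 2nd order accurate
    return (f(x + h) - f(x - h)) / (2 * h)

def d1_acc4(f, x, h):  # central difference, 4th order accurate
    return (-f(x + 2*h) + 8*f(x + h) - 8*f(x - h) + f(x - 2*h)) / (12 * h)

x = 1.0
for h in (0.1, 0.05):
    e2 = abs(d1_acc2(sin, x, h) - cos(x))  # exact derivative of sin is cos
    e4 = abs(d1_acc4(sin, x, h) - cos(x))
    print(f"h={h}: 2nd-order err={e2:.1e}, 4th-order err={e4:.1e}")
```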
### Compact Finite Differences
Standard finite differences only use function values to approximate a derivative. Compact (or implicit)
finite differences also couple derivative values at neighboring points, which gives you spectral-like
resolution from small stencils. The tradeoff is that applying the operator requires solving a banded linear
system — but for tridiagonal systems that's $O(N)$ and very fast.
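For intuition on why the tridiagonal case is cheap, here is the standard Thomas algorithm, which solves such a system with one forward sweep and one back-substitution (stdlib-only sketch, not findiff's actual solver):

```python
def thomas(a, b, c, d):
    """Solve a tridiagonal system in O(N).

    a: sub-diagonal (a[0] unused), b: main diagonal,
    c: super-diagonal (c[-1] unused), d: right-hand side.
    """
    n = len(d)
    cp, dp = [0.0] * n, [0.0] * n
    cp[0], dp[0] = c[0] / b[0], d[0] / b[0]
    for i in range(1, n):                       # forward elimination
        denom = b[i] - a[i] * cp[i - 1]
        cp[i] = c[i] / denom if i < n - 1 else 0.0
        dp[i] = (d[i] - a[i] * dp[i - 1]) / denom
    x = [0.0] * n
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):              # back substitution
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x

# [[2,1,0],[1,2,1],[0,1,2]] @ x = [4,8,8]; the exact solution is [1, 2, 3]
print(thomas([0, 1, 1], [2, 2, 2], [1, 1, 0], [4, 8, 8]))
```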
You can define a compact scheme explicitly by specifying the left-hand side coefficients and right-hand side offsets:
```python
from findiff import Diff, CompactScheme
scheme = CompactScheme(
deriv=1,
left={-1: 1/3, 0: 1, 1: 1/3}, # tridiagonal LHS
right=[-3, -2, -1, 0, 1, 2, 3], # RHS offsets (coefficients computed automatically)
)
d_dx = Diff(0, dx, scheme=scheme, periodic=True)
df_dx = d_dx(np.sin(x)) # 6th-order accurate first derivative
```
Or let findiff pick a scheme for you:
```python
d_dx = Diff(0, dx, compact=3, acc=6, periodic=True)
```
Here `compact=3` sets the LHS bandwidth (must be odd), and findiff widens the RHS stencil until
the requested accuracy is reached. Higher derivatives work the same as usual:
```python
d2_dx2 = d_dx ** 2
```
Non-periodic grids are handled automatically — findiff uses one-sided compact stencils near the
boundaries. The `matrix()` method is also supported. For more details, see the
[compact finite differences documentation](https://maroba.github.io/findiff/compact.html).
## Finite Difference Coefficients
Sometimes you may want to have the raw finite difference coefficients.
These can be obtained for __any__ derivative and accuracy order
using `findiff.coefficients(deriv, acc)`. For instance,
```python
import findiff
coefs = findiff.coefficients(deriv=3, acc=4, symbolic=True)
```
gives
```
{'backward': {'coefficients': [15/8, -13, 307/8, -62, 461/8, -29, 49/8],
'offsets': [-6, -5, -4, -3, -2, -1, 0]},
'center': {'coefficients': [1/8, -1, 13/8, 0, -13/8, 1, -1/8],
'offsets': [-3, -2, -1, 0, 1, 2, 3]},
'forward': {'coefficients': [-49/8, 29, -461/8, 62, -307/8, 13, -15/8],
'offsets': [0, 1, 2, 3, 4, 5, 6]}}
```
If you want to specify the detailed offsets instead of the
accuracy order, you can do this by setting the offset keyword
argument:
```python
import findiff
coefs = findiff.coefficients(deriv=2, offsets=[-2, 1, 0, 2, 3, 4, 7], symbolic=True)
```
The resulting accuracy order is computed and part of the output:
```
{'coefficients': [187/1620, -122/27, 9/7, 103/20, -13/5, 31/54, -19/2835],
'offsets': [-2, 1, 0, 2, 3, 4, 7],
'accuracy': 5}
```
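Under the hood, such coefficients are the solution of a small linear system of Taylor moment conditions, $\sum_j c_j\, o_j^k = k!\,\delta_{k,d}$ for each power $k$. A standard-library sketch using exact rational arithmetic (illustrative, not findiff's implementation):

```python
from fractions import Fraction
from math import factorial

def fd_coefficients(deriv, offsets):
    """Solve the moment conditions sum_j c_j * o_j**k = k! * delta(k, deriv)."""
    n = len(offsets)
    # Augmented matrix [A | b] with A[k][j] = o_j**k (exact rationals).
    rows = [[Fraction(o) ** k for o in offsets]
            + [Fraction(factorial(deriv) if k == deriv else 0)]
            for k in range(n)]
    # Gauss-Jordan elimination with partial pivoting.
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(rows[r][col]))
        rows[col], rows[piv] = rows[piv], rows[col]
        for r in range(n):
            if r != col and rows[r][col]:
                f = rows[r][col] / rows[col][col]
                rows[r] = [a - f * b for a, b in zip(rows[r], rows[col])]
    return [rows[k][n] / rows[k][k] for k in range(n)]

# Central second derivative on offsets -1, 0, 1:
print(fd_coefficients(2, [-1, 0, 1]))  # -> [Fraction(1, 1), Fraction(-2, 1), Fraction(1, 1)]
```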
## Matrix Representation
For a given differential operator, you can get the matrix representation
using the `matrix(shape)` method, e.g. for a small 1D grid of 7 points:
```python
d2_dx2 = Diff(0, dx)**2
mat = d2_dx2.matrix((7,))  # this method returns a scipy sparse matrix
print(mat.toarray())
```
has the output
```
[[ 2. -5. 4. -1. 0. 0. 0.]
[ 1. -2. 1. 0. 0. 0. 0.]
[ 0. 1. -2. 1. 0. 0. 0.]
[ 0. 0. 1. -2. 1. 0. 0.]
[ 0. 0. 0. 1. -2. 1. 0.]
[ 0. 0. 0. 0. 1. -2. 1.]
[ 0. 0. 0. -1. 4. -5. 2.]]
```
If you have periodic boundary conditions, the matrix looks like that:
```python
d2_dx2 = Diff(0, dx, periodic=True)**2
mat = d2_dx2.matrix((7,))  # this method returns a scipy sparse matrix
print(mat.toarray())
```
```
[[-2. 1. 0. 0. 0. 0. 1.]
[ 1. -2. 1. 0. 0. 0. 0.]
[ 0. 1. -2. 1. 0. 0. 0.]
[ 0. 0. 1. -2. 1. 0. 0.]
[ 0. 0. 0. 1. -2. 1. 0.]
[ 0. 0. 0. 0. 1. -2. 1.]
[ 1. 0. 0. 0. 0. 1. -2.]]
```
## Stencils
*findiff* uses standard stencils (patterns of grid points) to evaluate the derivative.
However, you can design your own stencil. A picture says more than a thousand words, so
look at the following example for a standard second order accurate stencil for the
2D Laplacian $\displaystyle \frac{\partial^2}{\partial x^2} + \frac{\partial^2}{\partial y^2}$:
<img src="docs/assets/laplace2d.png" width="400">
This can be reproduced by *findiff* writing
```python
from findiff import Stencil

offsets = [(0, 0), (1, 0), (-1, 0), (0, 1), (0, -1)]
stencil = Stencil(offsets, partials={(2, 0): 1, (0, 2): 1}, spacings=(1, 1))
```
The attribute `stencil.values` contains the coefficients
```
{(0, 0): -4.0, (1, 0): 1.0, (-1, 0): 1.0, (0, 1): 1.0, (0, -1): 1.0}
```
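To see those coefficients in action, apply the stencil by hand to $f(x, y) = x^2 + y^2$ on a unit-spaced grid, whose exact Laplacian is 4 everywhere (stdlib-only sketch):

```python
# Coefficients from stencil.values above.
weights = {(0, 0): -4.0, (1, 0): 1.0, (-1, 0): 1.0, (0, 1): 1.0, (0, -1): 1.0}

# Sample f(x, y) = x**2 + y**2 on a small unit-spaced grid.
f = [[x**2 + y**2 for y in range(5)] for x in range(5)]

def apply_stencil(grid, i, j):
    return sum(c * grid[i + di][j + dj] for (di, dj), c in weights.items())

print(apply_stencil(f, 2, 2))  # -> 4.0
```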
Now for a more exotic stencil. Consider this one:
<img src="docs/assets/laplace2d-x.png" width="400">
With *findiff* you can get it easily:
```python
offsets = [(0, 0), (1, 1), (-1, -1), (1, -1), (-1, 1)]
stencil = Stencil(offsets, partials={(2, 0): 1, (0, 2): 1}, spacings=(1, 1))
stencil.values
```
which returns
```
{(0, 0): -2.0, (1, 1): 0.5, (-1, -1): 0.5, (1, -1): 0.5, (-1, 1): 0.5}
```
## Symbolic Representations
As of version 0.10, findiff can also provide a symbolic representation of finite difference schemes suitable for using in conjunction with sympy. The main use case is to facilitate deriving your own iteration schemes.
```python
from findiff import SymbolicMesh, SymbolicDiff
from sympy import symbols

m, n = symbols("m, n")
mesh = SymbolicMesh("x, y")
u = mesh.create_symbol("u")
d2_dx2, d2_dy2 = [SymbolicDiff(mesh, axis=k, degree=2) for k in range(2)]
(
d2_dx2(u, at=(m, n), offsets=(-1, 0, 1)) +
d2_dy2(u, at=(m, n), offsets=(-1, 0, 1))
)
```
Outputs:
$$
\frac{u_{m,n + 1} + u_{m,n - 1} - 2 u_{m,n}}{\Delta y^2} + \frac{u_{m + 1,n} + u_{m - 1,n} - 2 u_{m,n}}{\Delta x^2}
$$
Also see the [example notebook](examples/symbolic.ipynb).
## Partial Differential Equations
_findiff_ can be used to easily formulate and solve partial differential equation problems
$$
\mathcal{L}u(\vec{x}) = f(\vec{x})
$$
where $\mathcal{L}$ is a general linear differential operator.
In order to obtain a unique solution, Dirichlet, Neumann or more general boundary conditions
can be applied.
### Boundary Value Problems
#### Example 1: 1D forced harmonic oscillator with friction
Find the solution of
$$
\left( \frac{d^2}{dt^2} - \alpha \frac{d}{dt} + \omega^2 \right)u(t) = \sin{(2t)}
$$
subject to the (Dirichlet) boundary conditions
$$
u(0) = 0, \hspace{1em} u(10) = 1
$$
```python
import numpy
from findiff import Diff, Id, PDE, BoundaryConditions
shape = (300, )
t = numpy.linspace(0, 10, shape[0])
dt = t[1]-t[0]
L = Diff(0, dt)**2 - Diff(0, dt) + 5 * Id()
f = numpy.sin(2*t)
bc = BoundaryConditions(shape)
bc[0] = 0
bc[-1] = 1
pde = PDE(L, f, bc)
u = pde.solve()
```
Result:
<p align="center">
<img src="docs/assets/ho_bvp.jpg" alt="ResultHOBVP" height="300"/>
</p>
#### Example 2: 2D heat conduction
A plate with temperature profile given on one edge and zero heat flux across the other
edges, i.e.
$$
\left( \frac{\partial^2}{\partial x^2} + \frac{\partial^2}{\partial y^2} \right) u(x,y) = f(x,y)
$$
with Dirichlet boundary condition
$$
\begin{align*}
u(x,0) &= 300 \\
u(1,y) &= 300 - 200y
\end{align*}
$$
and Neumann boundary conditions
$$
\begin{align*}
\frac{\partial u}{\partial x} &= 0, & \text{ for } x = 0 \\
\frac{\partial u}{\partial y} &= 0, & \text{ for } y = 0
\end{align*}
$$
```python
import numpy as np
from findiff import FinDiff, PDE, BoundaryConditions

shape = (100, 100)
x, y = np.linspace(0, 1, shape[0]), np.linspace(0, 1, shape[1])
dx, dy = x[1]-x[0], y[1]-y[0]
X, Y = np.meshgrid(x, y, indexing='ij')
L = FinDiff(0, dx, 2) + FinDiff(1, dy, 2)
f = np.zeros(shape)
bc = BoundaryConditions(shape)
bc[1,:] = FinDiff(0, dx, 1), 0 # Neumann BC
bc[-1,:] = 300. - 200*Y # Dirichlet BC
bc[:, 0] = 300. # Dirichlet BC
bc[1:-1, -1] = FinDiff(1, dy, 1), 0 # Neumann BC
pde = PDE(L, f, bc)
u = pde.solve()
```
Result:
<p align="center">
<img src="docs/assets/heat.png"/>
</p>
## Citations
You have used *findiff* in a publication? Here is how you can cite it:
> M. Baer. *findiff* software package. URL: https://github.com/maroba/findiff. 2018
BibTeX entry:
```
@misc{findiff,
title = {{findiff} Software Package},
author = {M. Baer},
url = {https://github.com/maroba/findiff},
key = {findiff},
note = {\url{https://github.com/maroba/findiff}},
year = {2018}
}
```
## Development
### Set up development environment
- Fork the repository
- Clone your fork to your machine
- Install in development mode:
```
pip install -e .
```
### Running tests
From the console:
```
pip install pytest
pytest tests
```
| text/markdown | Matthias Baer | null | null | Matthias Baer <matthias.r.baer@googlemail.com> | MIT | finite-differences, numerical-derivatives, scientific-computing | [
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.8",
"Programming Language :: Python :: 3.9",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Intended Audience :: Developers",
"Topic :: Scientific... | [] | null | null | >=3.8 | [] | [] | [] | [
"numpy",
"scipy",
"sympy"
] | [] | [] | [] | [
"Homepage, https://github.com/maroba/findiff",
"source, https://github.com/maroba/findiff",
"Issues, https://github.com/maroba/findiff/issues",
"tracker, https://github.com/maroba/findiff/issues"
] | twine/6.2.0 CPython/3.14.0 | 2026-02-19T14:14:24.077338 | findiff-0.13.1.tar.gz | 1,603,620 | 81/54/7c14bb5d76ac145d791c55903c2078994545209a1b208d38f3bb2745ac7e/findiff-0.13.1.tar.gz | source | sdist | null | false | 5385248b5367089154b7a3db53a60960 | 3aad6eda5e707675310d2c37f69bc1f52751184bb48dc49222e2d4af8aa62eff | 81547c14bb5d76ac145d791c55903c2078994545209a1b208d38f3bb2745ac7e | null | [
"LICENSE"
] | 3,779 |
2.4 | fugue-sql-antlr-cpp | 0.2.4 | Fugue SQL Antlr C++ Parser | # Fugue SQL Antlr Parser
[](https://pypi.python.org/pypi/fugue-sql-antlr/)
[](https://pypi.python.org/pypi/fugue-sql-antlr/)
[](https://pypi.python.org/pypi/fugue-sql-antlr/)
[](https://codecov.io/gh/fugue-project/fugue-sql-antlr)
Chat with us on slack!
[](http://slack.fugue.ai)
This is the dedicated package for the Fugue SQL parser built on Antlr4. It consists of two packages: [fugue-sql-antlr](https://pypi.python.org/pypi/fugue-sql-antlr/) and [fugue-sql-antlr-cpp](https://pypi.python.org/pypi/fugue-sql-antlr-cpp/).
[fugue-sql-antlr](https://pypi.python.org/pypi/fugue-sql-antlr/) is the main package. It contains the Python parser of Fugue SQL and the visitor for the parser tree.
[fugue-sql-antlr-cpp](https://pypi.python.org/pypi/fugue-sql-antlr-cpp/) is the C++ implementation of the parser. This solution is based on the incredible work of [speedy-antlr-tool](https://github.com/amykyta3/speedy-antlr-tool), a tool that generates thin python interface code on top of the C++ Antlr parser. This package is purely optional, it should not affect the correctness and features of the Fugue SQL parser. However, with this package installed, the parsing time is **~25x faster**.
Neither of these two packages should be directly used by users. They are the core internal dependency of the main [Fugue](https://github.com/fugue-project/fugue) project (>=0.7.0).
## Installation
To install fugue-sql-antlr:
```bash
pip install fugue-sql-antlr
```
To install fugue-sql-antlr and fugue-sql-antlr-cpp:
```bash
pip install fugue-sql-antlr[cpp]
```
We try to pre-build wheels for the major operating systems and active Python versions. If your environment is not covered, installing fugue-sql-antlr-cpp requires a C++ compiler in your operating system that supports the ISO C++ 2017 standard.
| text/markdown | The Fugue Development Team | hello@fugue.ai | null | null | Apache-2.0 | distributed spark dask sql dsl domain specific language | [
"Development Status :: 3 - Alpha",
"Intended Audience :: Developers",
"Topic :: Software Development :: Libraries :: Python Modules",
"License :: OSI Approved :: Apache Software License",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: ... | [] | http://github.com/fugue-project/fugue | null | >=3.10 | [] | [] | [] | [
"triad>=0.6.8",
"antlr4-python3-runtime<4.14,>=4.13.2",
"jinja2",
"packaging"
] | [] | [] | [] | [] | twine/6.2.0 CPython/3.9.25 | 2026-02-19T14:14:00.686065 | fugue_sql_antlr_cpp-0.2.4.tar.gz | 532,179 | f0/01/00f1cc46660f0caefab88f425ab74c65f8c16425b1f10ca233e0bf489e93/fugue_sql_antlr_cpp-0.2.4.tar.gz | source | sdist | null | false | c3c6de814cd3ea552c6478e2c18ac75b | 638c5d072229cfced47849201ee468f5972cdd59d3f458654ccad14460f41048 | f00100f1cc46660f0caefab88f425ab74c65f8c16425b1f10ca233e0bf489e93 | null | [
"LICENSE"
] | 2,596 |
2.4 | linuxpy | 0.24.0 | Human friendly interface to linux subsystems using python | # linuxpy
[![linuxpy][pypi-version]](https://pypi.python.org/pypi/linuxpy)
[![Python Versions][pypi-python-versions]](https://pypi.python.org/pypi/linuxpy)
![License][license]
[![CI][CI]](https://github.com/tiagocoutinho/linuxpy/actions/workflows/ci.yml)
[![Source][source]](https://github.com/tiagocoutinho/linuxpy/)
[![Documentation][documentation]](https://tiagocoutinho.github.io/linuxpy/)
Human friendly interface to linux subsystems using python.
Provides python access to several linux subsystems like V4L2, GPIO, Led, thermal,
input and MIDI.
There is experimental, undocumented, incomplete and unstable access to USB.
Requirements:
* python >= 3.11
* Fairly recent linux kernel
* Installed kernel modules you want to access
And yes, it is true: there are no python libraries required! Also there are no
C libraries required. Everything is done here through direct ioctl, read and
write calls. Ain't linux wonderful?
## Installation
From within your favorite python environment:
```console
$ pip install linuxpy
```
To run the examples you'll need:
```console
$ pip install linuxpy --group=examples
```
To develop, run tests, build package, lint, etc you'll need:
```console
$ pip install linuxpy --group=dev
```
## Subsystems
### GPIO
```python
from linuxpy.gpio import Device
with Device.from_id(0) as gpio:
    info = gpio.get_info()
    print(info.name, info.label, len(info.lines))
    l0 = info.lines[0]
    print(f"L0: {l0.name!r} {l0.flags.name}")
# output should look something like:
# gpiochip0 INT3450:00 32
# L0: '' INPUT
```
Check the [GPIO user guide](https://tiagocoutinho.github.io/linuxpy/user_guide/gpio/) and
[GPIO reference](https://tiagocoutinho.github.io/linuxpy/api/gpio/) for more information.
### Input
```python
import time
from linuxpy.input.device import find_gamepads
pad = next(find_gamepads())
abs = pad.absolute
with pad:
    while True:
        print(f"X:{abs.x:>3} | Y:{abs.y:>3} | RX:{abs.rx:>3} | RY:{abs.ry:>3}", end="\r", flush=True)
        time.sleep(0.1)
```
Check the [Input user guide](https://tiagocoutinho.github.io/linuxpy/user_guide/input/) and
[Input reference](https://tiagocoutinho.github.io/linuxpy/api/input/) for more information.
### Led
```python
from linuxpy.led import find
caps_lock = find(function="capslock")
print(caps_lock.brightness)
print(caps_lock.max_brightness)
```
Check the [LED user guide](https://tiagocoutinho.github.io/linuxpy/user_guide/led/) and
[LED reference](https://tiagocoutinho.github.io/linuxpy/api/led/) for more information.
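Under the hood, the Linux LED class is exposed through sysfs, so a brightness read ultimately boils down to reading a small file. A sketch of that mechanism (not linuxpy's actual code; the `sys_root` parameter exists only to make the sketch testable off-target):

```python
from pathlib import Path

def read_brightness(led_name: str, sys_root: str = "/sys/class/leds") -> int:
    """Read the current brightness straight from the LED's sysfs directory."""
    return int(Path(sys_root, led_name, "brightness").read_text())

# e.g. read_brightness("input0::capslock") on a real system
```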
### MIDI Sequencer
```console
$ python
>>> from linuxpy.midi.device import Sequencer, event_stream
>>> seq = Sequencer()
>>> with seq:
...     port = seq.create_port()
...     port.connect_from(14, 0)
...     for event in seq:
...         print(event)
...
14:0 Note on channel=0, note=100, velocity=3, off_velocity=0, duration=0
14:0 Clock queue=0, pad=b''
14:0 System exclusive F0 61 62 63 F7
14:0 Note off channel=0, note=55, velocity=3, off_velocity=0, duration=0
```
Check the [MIDI user guide](https://tiagocoutinho.github.io/linuxpy/user_guide/midi/) and
[MIDI reference](https://tiagocoutinho.github.io/linuxpy/api/midi/) for more information.
### Thermal and cooling
```python
from linuxpy.thermal import find
with find(type="x86_pkg_temp") as tz:
print(f"X86 temperature: {tz.temperature/1000:6.2f} C")
```
Check the [Thermal and cooling user guide](https://tiagocoutinho.github.io/linuxpy/user_guide/thermal/) and
[Thermal and cooling reference](https://tiagocoutinho.github.io/linuxpy/api/thermal/) for more information.
### Video
Video for Linux 2 (V4L2) python library
Without further ado:
```python
>>> from linuxpy.video.device import Device
>>> with Device.from_id(0) as cam:
...     for i, frame in enumerate(cam):
...         print(f"frame #{i}: {len(frame)} bytes")
...         if i > 9:
...             break
...
frame #0: 54630 bytes
frame #1: 50184 bytes
frame #2: 44054 bytes
frame #3: 42822 bytes
frame #4: 42116 bytes
frame #5: 41868 bytes
frame #6: 41322 bytes
frame #7: 40896 bytes
frame #8: 40844 bytes
frame #9: 40714 bytes
frame #10: 40662 bytes
```
Check the [V4L2 user guide](https://tiagocoutinho.github.io/linuxpy/user_guide/video/) and
[V4L2 reference](https://tiagocoutinho.github.io/linuxpy/api/video/) for more information.
[pypi-python-versions]: https://img.shields.io/pypi/pyversions/linuxpy.svg
[pypi-version]: https://img.shields.io/pypi/v/linuxpy.svg
[pypi-status]: https://img.shields.io/pypi/status/linuxpy.svg
[license]: https://img.shields.io/pypi/l/linuxpy.svg
[CI]: https://github.com/tiagocoutinho/linuxpy/actions/workflows/ci.yml/badge.svg
[documentation]: https://img.shields.io/badge/Documentation-blue?color=grey&logo=mdBook
[source]: https://img.shields.io/badge/Source-grey?logo=git
| text/markdown | null | Jose Tiago Macara Coutinho <coutinhotiago@gmail.com> | null | null | null | linux, video, system, midi, gpio, led, input, gamepad, joystick, keyboard, mouse, thermal, asyncio | [
"Development Status :: 4 - Beta",
"Intended Audience :: Developers",
"Intended Audience :: System Administrators",
"Operating System :: POSIX :: Linux",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: P... | [] | null | null | >=3.10 | [] | [] | [] | [
"typing_extensions<5,>=4.6; python_version < \"3.12\""
] | [] | [] | [] | [
"Documentation, https://tiagocoutinho.github.io/linuxpy/",
"Homepage, https://github.com/tiagocoutinho/linuxpy",
"Repository, https://github.com/tiagocoutinho/linuxpy"
] | twine/6.1.0 CPython/3.13.5 | 2026-02-19T14:13:50.470879 | linuxpy-0.24.0.tar.gz | 451,602 | a3/f7/66d76bb3099bce8a952e6114e654d1199d7a02aec90a300294919bd17f77/linuxpy-0.24.0.tar.gz | source | sdist | null | false | e8993d1c030aea2abbcc318569d5cb20 | 2b44434d28d49257e859a4830267a25aa4b51b1d229f95f79d025ae7f1a5d30e | a3f766d76bb3099bce8a952e6114e654d1199d7a02aec90a300294919bd17f77 | GPL-3.0-or-later | [
"LICENSE"
] | 443 |
2.4 | pygeom-scarf | 0.1.2 | Geometry model of the SCARF experiment | # pygeom-scarf
[PyPI](https://pypi.org/project/pygeom-scarf/) •
[conda-forge](https://anaconda.org/conda-forge/pygeom-scarf) •
[CI](https://github.com/legend-exp/pygeom-scarf/actions) •
[pre-commit](https://github.com/pre-commit/pre-commit) •
[Code style: black](https://github.com/psf/black) •
[Codecov](https://app.codecov.io/gh/legend-exp/pygeom-scarf) •
[Documentation](https://pygeom-scarf.readthedocs.io)
Geometry of the SCARF experiment at TUM for radiation transport simulations.
| text/markdown | null | Luigi Pertoldi <gipert@pm.me> | The LEGEND Collaboration | null | null | null | [
"Intended Audience :: Developers",
"Intended Audience :: Science/Research",
"Operating System :: MacOS",
"Operating System :: POSIX",
"Operating System :: Unix",
"Programming Language :: Python",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3 :: Only",
"Topic :: Scientif... | [] | null | null | >=3.10 | [] | [] | [] | [
"dbetto",
"legend-pygeom-hpges>=0.7.0",
"legend-pygeom-tools>=0.0.23",
"numpy",
"pint",
"pyg4ometry>=1.3.4",
"pylegendmeta>=1.3.1",
"pylegendtestdata",
"pyyaml",
"matplotlib",
"pygeom-scarf[docs,test]; extra == \"all\"",
"furo; extra == \"docs\"",
"myst-parser; extra == \"docs\"",
"sphinx;... | [] | [] | [] | [
"Homepage, https://github.com/legend-exp/pygeom-scarf",
"Bug Tracker, https://github.com/legend-exp/pygeom-scarf/issues",
"Discussions, https://github.com/legend-exp/pygeom-scarf/discussions",
"Changelog, https://github.com/legend-exp/pygeom-scarf/releases"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-19T14:13:46.102379 | pygeom_scarf-0.1.2.tar.gz | 486,529 | e9/48/616c6965e4d80a0320db8ce6de6b0f488f06d3c20bbbb175908681c136b4/pygeom_scarf-0.1.2.tar.gz | source | sdist | null | false | 3852303f60aadc0ec9345e293264b010 | a88c9536d3e996b82f851c6045efdbf9d80837c11a6632f8360b4d63032c72cc | e948616c6965e4d80a0320db8ce6de6b0f488f06d3c20bbbb175908681c136b4 | GPL-3.0 | [
"LICENSE"
] | 257 |
2.4 | plone.pgcatalog | 1.0.0b6 | PostgreSQL-backed catalog for Plone replacing ZCatalog BTrees indexes | # plone.pgcatalog
PostgreSQL-backed catalog for Plone, replacing ZCatalog BTrees indexes with SQL queries on JSONB.
Requires [zodb-pgjsonb](https://github.com/bluedynamics/zodb-pgjsonb) as the ZODB storage backend.
## Features
- **All standard index types** supported: FieldIndex, KeywordIndex, DateIndex, BooleanIndex, DateRangeIndex, UUIDIndex, ZCTextIndex, ExtendedPathIndex, GopipIndex
- **DateRecurringIndex** for recurring events (Plone's `start`/`end` indexes) -- recurrence expansion at query time via [rrule_plpgsql](https://github.com/sirrodgepodge/rrule_plpgsql), no C extensions needed
- **Extensible** via `IPGIndexTranslator` named utilities for custom index types
- **Dynamic index discovery** from ZCatalog at startup -- addons adding indexes via `catalog.xml` just work
- **Transactional writes** -- catalog data written atomically alongside object state during ZODB commit
- **Full-text search** via PostgreSQL `tsvector`/`tsquery` -- language-aware stemming for SearchableText (30 languages), word-level matching for Title/Description/addon ZCTextIndex fields
- **Optional BM25 ranking** -- when `vchord_bm25` + `pg_tokenizer` extensions are detected, search results are automatically ranked using BM25 (IDF, term saturation, length normalization) instead of `ts_rank_cd`. Title matches are boosted. Falls back to tsvector ranking on vanilla PostgreSQL.
- **Zero ZODB cache pressure** -- no BTree/Bucket objects stored in ZODB
- **Container-friendly** -- works on standard `postgres:17` Docker images; for BM25 use `tensorchord/vchord-suite:pg17-latest`
## Requirements
- Python 3.12+
- PostgreSQL 14+ (tested with 17)
- [zodb-pgjsonb](https://github.com/bluedynamics/zodb-pgjsonb)
- Plone 6
## Installation
```bash
pip install plone-pgcatalog
```
Add to your Zope configuration:
```xml
<!-- zope.conf -->
%import zodb_pgjsonb
<zodb_main>
<pgjsonb>
dsn dbname=mydb user=zodb password=zodb host=localhost port=5432
</pgjsonb>
</zodb_main>
```
Install the `plone.pgcatalog:default` GenericSetup profile through Plone's Add-on installer or your policy package.
## Usage
Once installed, `portal_catalog` is replaced with `PlonePGCatalogTool`. All catalog queries use the same ZCatalog API:
```python
# Standard catalog queries -- same syntax as ZCatalog
results = catalog(portal_type="Document", review_state="published")
results = catalog(Subject={"query": ["Python", "Plone"], "operator": "or"})
results = catalog(SearchableText="my search term")
results = catalog(SearchableText="Katzen", Language="de") # language-aware stemming
results = catalog(Title="quick fox") # word-level match (finds "The Quick Brown Fox")
results = catalog(path={"query": "/plone/folder", "depth": 1})
# Recurring events (DateRecurringIndex)
results = catalog(start={
"query": [DateTime("2025-03-01"), DateTime("2025-03-31")],
"range": "min:max",
})
```
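To give a feel for the translation step, here is a hypothetical sketch of how a KeywordIndex query dict could be rendered as a JSONB predicate. This is an illustration of the idea only, not plone.pgcatalog's actual generated SQL or internal API (column and parameter names are invented):

```python
def keyword_predicate(index: str, query: dict) -> tuple[str, list]:
    """Render a ZCatalog-style KeywordIndex query as a JSONB containment test.

    {"query": ["Python", "Plone"], "operator": "or"} becomes an OR of
    `catalog_data -> <index> ? <value>` checks (AND when operator is "and").
    """
    values = query["query"]
    joiner = " OR " if query.get("operator", "or") == "or" else " AND "
    clauses = ["catalog_data -> %s ? %s" for _ in values]
    params: list = []
    for value in values:
        params.extend([index, value])
    return "(" + joiner.join(clauses) + ")", params

sql, params = keyword_predicate("Subject", {"query": ["Python", "Plone"], "operator": "or"})
print(sql)     # (catalog_data -> %s ? %s OR catalog_data -> %s ? %s)
print(params)  # ['Subject', 'Python', 'Subject', 'Plone']
```

With a driver like psycopg you would hand `sql` and `params` to `cursor.execute`; the real package may quote the `?` operator differently depending on the driver.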
## Documentation
- [ARCHITECTURE.md](ARCHITECTURE.md) -- internal design, index registry, query translation, custom index types
- [BENCHMARKS.md](BENCHMARKS.md) -- performance comparison vs RelStorage+ZCatalog
- [CHANGES.md](CHANGES.md) -- changelog
## Source Code and Contributions
The source code is managed in a Git repository, with its main branches hosted on GitHub.
Issues can be reported there too.
We'd be happy to see many forks and pull requests to make this package even better.
We welcome AI-assisted contributions, but expect every contributor to fully understand and be able to explain the code they submit.
Please don't send bulk auto-generated pull requests.
Maintainers are Jens Klein and the BlueDynamics Alliance developer team.
We appreciate any contribution and if a release on PyPI is needed, please just contact one of us.
We also offer commercial support if any training, coaching, integration or adaptations are needed.
## License
GPL-2.0
| text/markdown | null | null | null | null | null | null | [
"Development Status :: 3 - Alpha",
"Framework :: Plone",
"Framework :: Plone :: 6.1",
"Framework :: ZODB",
"Intended Audience :: Developers",
"License :: OSI Approved :: GNU General Public License v2 (GPLv2)",
"Programming Language :: Python :: 3 :: Only",
"Programming Language :: Python :: 3.12",
"... | [] | null | null | >=3.12 | [] | [] | [] | [
"orjson>=3.9",
"products-cmfplone",
"psycopg[binary,pool]>=3.1",
"zodb-pgjsonb>=1.1",
"coverage[toml]>=7.0; extra == \"test\"",
"pytest; extra == \"test\"",
"pytest-cov; extra == \"test\""
] | [] | [] | [] | [
"Homepage, https://github.com/bluedynamics/plone.pgcatalog",
"Repository, https://github.com/bluedynamics/plone.pgcatalog",
"Issues, https://github.com/bluedynamics/plone.pgcatalog/issues"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-19T14:13:18.712424 | plone_pgcatalog-1.0.0b6.tar.gz | 144,844 | 33/a4/7e07ad1edbead4eb7590bce29093e8c8162463820b1605df057e2101482a/plone_pgcatalog-1.0.0b6.tar.gz | source | sdist | null | false | 8e17e5f1b91e566250619f778d81ada8 | 6c892ba272b2a99e340c39ea5125466b4c386c6b3130405d42edc25440f59107 | 33a47e07ad1edbead4eb7590bce29093e8c8162463820b1605df057e2101482a | GPL-2.0-only | [] | 0 |
2.4 | quati | 1.2.9 | A professional-grade toolkit for seamless automation and workflow optimization | <div align="center">
<picture>
<!-- <source media="(prefers-color-scheme: dark)" srcset="assets/quati_white.svg"> -->
<source media="(prefers-color-scheme: dark)" srcset="https://raw.githubusercontent.com/quati-dev/quati/refs/heads/main/assets/quati.png">
<img src="https://raw.githubusercontent.com/quati-dev/quati/refs/heads/main/assets/quati.png" width="100%">
</picture>
<br><br><br>
<hr>
<h1>quati: A python <u>Quick Actions Toolkit</u> for data engineering</h1>
<img src="https://img.shields.io/badge/Author-lucasoal-blue?logo=github&logoColor=white"> <img src="https://img.shields.io/badge/License-MIT-750014.svg"> <!-- <img src="https://img.shields.io/badge/Status-Beta-DF1F72"> -->
<br>
<img src="https://img.shields.io/pypi/v/quati.svg?label=Version&color=white"> <img src="https://img.shields.io/pypi/pyversions/quati?logo=python&logoColor=white&label=Python"> <img src="https://img.shields.io/badge/Code Style-Black Formatter-111.svg">
<br>
<img src="https://static.pepy.tech/badge/quati/month">
<!-- <img src="https://static.pepy.tech/badge/quati"> -->
<!-- <img src="https://img.shields.io/pypi/dm/quati.svg?label=PyPI downloads"> -->
</div>
## What is it?
**quati** is a multifaceted utility framework: a suite of high-performance
functions designed to streamline software development and operational automation.
It bundles a robust, adaptable set of **modular tools**, **specialized libraries**,
and **technical assets** that help professionals design, implement, and orchestrate
complex applications with more precision and less time-to-market.
<h2>Table of Contents</h2>
- [What is it?](#what-is-it)
- [Main Features](#main-features)
- [Where to get it / Install](#where-to-get-it--install)
- [Documentation](#documentation)
- [License](#license)
- [Dependencies](#dependencies)
## Main Features
Here are just a few of the things that quati does well:
⠀⠀[**`convert_magnitude_string()`**](https://github.com/quati-dev/quati/blob/main/doc/DOCUMENTATION.md#convert_magnitude_string): Transforms string-based magnitude suffixes (K, M, B) into numerical integers <br>
⠀⠀[**`format_column_header()`**](https://github.com/quati-dev/quati/blob/main/doc/DOCUMENTATION.md#format_column_header): Normalizes DataFrame column names by handling special characters and casing <br>
⠀⠀[**`sync_dataframe_to_bq_schema()`**](https://github.com/quati-dev/quati/blob/main/doc/DOCUMENTATION.md#sync_dataframe_to_bq_schema): Aligns Pandas DataFrame data types with a specific BigQuery table schema <br>
⠀⠀[**`execute_bq_fetch()`**](https://github.com/quati-dev/quati/blob/main/doc/DOCUMENTATION.md#execute_bq_fetch): Runs a BigQuery SQL query and returns the results as a Pandas DataFrame <br>
⠀⠀[**`acquire_gsheet_access()`**](https://github.com/quati-dev/quati/blob/main/doc/DOCUMENTATION.md#acquire_gsheet_access): Authorizes and retrieves a Google Sheets worksheet object <br>
⠀⠀[**`retrieve_gsheet_as_df()`**](https://github.com/quati-dev/quati/blob/main/doc/DOCUMENTATION.md#retrieve_gsheet_as_df): Imports Google Sheets data directly into a Pandas DataFrame <br>
⠀⠀[**`remove_gsheet_duplicates()`**](https://github.com/quati-dev/quati/blob/main/doc/DOCUMENTATION.md#remove_gsheet_duplicates): Deduplicates sheet rows based on specific columns and updates the source <br>
⠀⠀[**`locate_next_empty_cell()`**](https://github.com/quati-dev/quati/blob/main/doc/DOCUMENTATION.md#locate_next_empty_cell): Identifies the next available cell ID for data insertion in a column <br>
⠀⠀[**`push_df_to_gsheet()`**](https://github.com/quati-dev/quati/blob/main/doc/DOCUMENTATION.md#push_df_to_gsheet): Updates a worksheet using a DataFrame starting from a reference pivot cell <br>
⠀⠀[**`Dispatcher.push_emsg()`**](https://github.com/quati-dev/quati/blob/main/doc/DOCUMENTATION.md#push_emsg): Sends structured HTML alerts (Types: error, warning, note, tip, important) with attachment support <br>
⠀⠀[**`erase_file()`**](https://github.com/quati-dev/quati/blob/main/doc/DOCUMENTATION.md#erase_file): Removes a specified file from the file system <br>
⠀⠀[**`modify_file_name()`**](https://github.com/quati-dev/quati/blob/main/doc/DOCUMENTATION.md#modify_file_name): Renames an existing file based on path and prefix <br>
⠀⠀[**`locate_and_verify_file()`**](https://github.com/quati-dev/quati/blob/main/doc/DOCUMENTATION.md#locate_and_verify_file): Searches for a file and validates it against a minimum size threshold <br>
⠀⠀[**`display_timer()`**](https://github.com/quati-dev/quati/blob/main/doc/DOCUMENTATION.md#display_timer): Implements a wait period with an optional visual progress bar <br>
⠀⠀[**`fetch_host_details()`**](https://github.com/quati-dev/quati/blob/main/doc/DOCUMENTATION.md#fetch_host_details): Extracts detailed system architecture and kernel information <br>
⠀⠀[**`launch_navigator()`**](https://github.com/quati-dev/quati/blob/main/doc/DOCUMENTATION.md#launch_navigator): Initializes a customized Chrome WebDriver instance <br>
⠀⠀[**`save_session_cookies()`**](https://github.com/quati-dev/quati/blob/main/doc/DOCUMENTATION.md#save_session_cookies): Exports active browser session cookies to a local file <br>
⠀⠀[**`load_session_cookies()`**](https://github.com/quati-dev/quati/blob/main/doc/DOCUMENTATION.md#load_session_cookies): Injects saved cookies into the browser to bypass authentication <br>
⠀⠀[**`is_node_present()`**](https://github.com/quati-dev/quati/blob/main/doc/DOCUMENTATION.md#is_node_present): Validates the existence of a web element using XPath <br>
⠀⠀[**`dismiss_popup()`**](https://github.com/quati-dev/quati/blob/main/doc/DOCUMENTATION.md#dismiss_popup): Automates popup closure via ESC key or targeted element clicks <br>
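As a taste of the first helper, the magnitude-conversion idea can be sketched as follows. This is a hypothetical reimplementation for illustration only, not quati's actual code; see the linked documentation for the real signature:

```python
def convert_magnitude(text: str) -> int:
    """Turn strings like '1.5K', '2M' or '3B' into integers (illustrative sketch)."""
    multipliers = {"K": 1_000, "M": 1_000_000, "B": 1_000_000_000}
    text = text.strip().upper()
    if text and text[-1] in multipliers:
        return int(float(text[:-1]) * multipliers[text[-1]])
    return int(float(text))

print(convert_magnitude("1.5K"))  # 1500
print(convert_magnitude("2M"))    # 2000000
print(convert_magnitude("42"))    # 42
```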
## Where to get it / Install
The source code is currently hosted on GitHub at: https://github.com/quati-dev/quati
> [!WARNING]
> It's essential to use [**Python 3.10**](https://www.python.org/downloads/release/python-310/)
<!-- > It's essential to **upgrade pip** to the latest version to ensure compatibility with the library. -->
<!-- > ```sh
> # Requires the latest pip
> pip install --upgrade pip
> ``` -->
- [PyPI](https://pypi.org/project/quati/)
```sh
# PyPI
pip install quati
```
- GitHub
```sh
# or GitHub
pip install git+https://github.com/quati-dev/quati.git
```
## Documentation
- [Documentation](https://github.com/quati-dev/quati/blob/main/doc/DOCUMENTATION.md)
## License
- [MIT](https://github.com/quati-dev/quati/blob/main/LICENSE)
## Dependencies
- [NumPy](https://numpy.org/) | [Pandas](https://pandas.pydata.org/) | [Selenium](https://www.selenium.dev/) | [gspread](https://docs.gspread.org/)
See the [full installation instructions](https://github.com/quati-dev/quati/blob/main/INSTALLATION.md) for minimum supported versions of required, recommended and optional dependencies.
<hr>
[⇧ Go to Top](#table-of-contents)
| text/markdown | lucaslealll | null | null | null | MIT | quick, actions, toolkit, functions | [
"Development Status :: 4 - Beta",
"Intended Audience :: Developers",
"License :: OSI Approved :: MIT License",
"Programming Language :: Python :: 3.10",
"Topic :: Software Development :: Libraries"
] | [] | null | null | >=3.10 | [] | [] | [] | [
"google-cloud-bigquery==2.34.4",
"google-cloud-bigquery-storage==2.27.0",
"gspread==5.5.0",
"oauthlib==3.2.2",
"pandas==1.3.4",
"pandas-gbq==0.14.1",
"requests==2.32.3",
"selenium==4.7.2",
"tqdm==4.67.1"
] | [] | [] | [] | [
"Homepage, https://pypi.org/project/quati",
"Documentation, https://github.com/quati-dev/quati/blob/main/doc/DOCUMENTATION.md",
"Repository, https://github.com/quati-dev/quati"
] | twine/6.1.0 CPython/3.10.19 | 2026-02-19T14:12:50.047183 | quati-1.2.9.tar.gz | 20,447 | a4/d0/86ec1a1d31e3d01ccdf1b6405adeeed1591be3c74cd4bcd268fca8adcf23/quati-1.2.9.tar.gz | source | sdist | null | false | 71c6e7daddb6c9fa57a66421a3daa5bb | 8b56d9ddb322111e56e4b36b44b9b92ce3fa8e5f9933724f1340c04a0010213e | a4d086ec1a1d31e3d01ccdf1b6405adeeed1591be3c74cd4bcd268fca8adcf23 | null | [
"LICENSE",
"AUTHORS.md"
] | 244 |
2.4 | sigdetect | 0.5.4 | Signature detection and role attribution for PDFs | # CaseWorks.Automation.CaseDocumentIntake
## sigdetect
`sigdetect` is a small Python library + CLI that detects **e-signature evidence** in PDFs and infers the **signer role** (e.g., _patient_, _attorney_, _representative_).
It looks for:
- Real signature **form fields** (`/Widget` annotations with `/FT /Sig`)
- **AcroForm** signature fields present only at the document level
- Common **vendor markers** (e.g., DocuSign, “Signature Certificate”)
- Page **labels** (like “Signature of Patient” or “Signature of Parent/Guardian”)
It returns a structured summary per file (pages, counts, roles, hints, etc.) that can be used downstream.
---
## Contents
- [Quick start](#quick-start)
- [CLI usage](#cli-usage)
- [Library usage](#library-usage)
- [Result schema](#result-schema)
- [Configuration & rules](#configuration--rules)
- [Smoke tests](#smoke-tests)
- [Dev workflow](#dev-workflow)
- [Troubleshooting](#troubleshooting)
- [License](#license)
---
## Quick start
### Requirements
- Python **3.9+** (developed & tested on **3.11**)
- macOS / Linux / WSL
### Setup
~~~bash
# 1) Create and activate a virtualenv (example uses Python 3.11)
python3.11 -m venv .venv
source .venv/bin/activate
# 2) Install in editable (dev) mode
python -m pip install --upgrade pip
pip install -e .
~~~
### Sanity check
~~~bash
# Run unit & smoke tests
pytest -q
~~~
---
## CLI usage
The project ships a Typer-based CLI (exposed either as `sigdetect` or runnable via `python -m sigdetect.cli`, depending on how it is installed).
~~~bash
sigdetect --help
# or
python -m sigdetect.cli --help
~~~
### Detect (per-file summary)
~~~bash
# Execute detection according to the YAML configuration
sigdetect detect \
--config ./sample_data/config.yml \
--profile hipaa # or: retainer
~~~
### Notes
- The config file controls `pdf_root`, `out_dir`, `engine`, `pseudo_signatures`, `recurse_xobjects`, etc.
- Engine selection is forced to **auto** (prefers PyMuPDF for geometry, falls back to PyPDF2); any configured `engine` value is overridden.
- `--pseudo-signatures` enables a vendor/Acro-only pseudo-signature when no actual `/Widget` is present (useful for DocuSign / Acrobat Sign receipts).
- `--recurse-xobjects` allows scanning Form XObjects for vendor markers and labels embedded in page resources.
- `--profile` selects tuned role logic:
- `hipaa` → patient / representative / attorney
- `retainer` → client / firm (prefers detecting two signatures)
- `--recursive/--no-recursive` toggles whether `sigdetect detect` descends into subdirectories when hunting for PDFs (recursive by default).
- Results output is disabled by default; set `write_results: true` or pass `--write-results` when you need `results.json` (for EDA).
- Cropping (`--crop-signatures`) writes PNG crops to disk by default; enable `--crop-docx` to write DOCX files instead of PNGs. `--crop-bytes` embeds base64 PNG data in `signatures[].crop_bytes` and, when `--crop-docx` is enabled, embeds DOCX bytes in `signatures[].crop_docx_bytes`. PyMuPDF is required for crops, and `python-docx` is required for DOCX output.
- Wet detection runs automatically for non-e-sign PDFs when dependencies are available; missing OCR dependencies add a `ManualReview:*` hint instead of failing. PyMuPDF + Tesseract are required for wet detection.
- If the executable is not on `PATH`, you can always fall back to `python -m sigdetect.cli ...`.
### EDA (quick aggregate stats)
~~~bash
sigdetect eda \
--config ./sample_data/config.yml
~~~
`sigdetect eda` expects `results.json`; enable `write_results: true` when running detect.
---
## Library usage
~~~python
from pathlib import Path
from sigdetect.config import DetectConfiguration
from sigdetect.detector.pypdf2_engine import PyPDF2Detector
configuration = DetectConfiguration(
PdfRoot=Path("/path/to/pdfs"),
OutputDirectory=Path("./out"),
Engine="pypdf2",
PseudoSignatures=True,
RecurseXObjects=True,
Profile="retainer", # or "hipaa"
)
detector = PyPDF2Detector(configuration)
result = detector.Detect(Path("/path/to/pdfs/example.pdf"))
print(result.to_dict())
~~~
`Detect(Path)` returns a **FileResult** dataclass; call `.to_dict()` for the JSON-friendly representation (see [Result schema](#result-schema)). Each signature entry now exposes `bounding_box` coordinates (PDF points, origin bottom-left). When PNG cropping is enabled, `crop_path` points at the generated image; when DOCX cropping is enabled, `crop_docx_path` points at the generated doc. Use `Engine="auto"` if you want the single-pass defaults that prefer PyMuPDF (for geometry) when available.
---
## Library API (embed in another script)
Minimal, plug-and-play API that returns plain dicts (JSON-ready) without side effects unless you opt into cropping. Engine selection is forced to `auto` (PyMuPDF preferred) to ensure geometry. Wet detection runs automatically for non-e-sign PDFs; pass `runWetDetection=False` to skip OCR.
~~~python
from pathlib import Path
from sigdetect.api import (
CropSignatureImages,
DetectMany,
DetectPdf,
ScanDirectory,
ToCsvRow,
Version,
get_detector,
)
print("sigdetect", Version())
# 1) Single file → dict
result = DetectPdf(
"/path/to/file.pdf",
profileName="retainer",
includePseudoSignatures=True,
recurseXObjects=True,
# runWetDetection=False, # disable OCR-backed wet detection if desired
)
print(
result["file"],
result["pages"],
result["esign_found"],
result["sig_count"],
result["sig_pages"],
result["roles"],
result["hints"],
)
# 2) Directory walk (generator of dicts)
for res in ScanDirectory(
"/path/to/pdfs",
profileName="hipaa",
includePseudoSignatures=True,
recurseXObjects=True,
):
# store in DB, print, etc.
pass
# 3) Crop signature snippets for FileResult objects (requires PyMuPDF; DOCX needs python-docx)
detector = get_detector(pdfRoot="/path/to/pdfs", profileName="hipaa")
file_result = detector.Detect(Path("/path/to/pdfs/example.pdf"))
CropSignatureImages(
"/path/to/pdfs/example.pdf",
file_result,
outputDirectory="./signature_crops",
dpi=200,
)
~~~
## Result schema
High-level summary (per file):
~~~json
{
"file": "example.pdf",
"size_kb": 123.4,
"pages": 3,
"esign_found": true,
"scanned_pdf": false,
"mixed": false,
"sig_count": 2,
"sig_pages": "1,3",
"roles": "patient;representative",
"hints": "AcroSig:sig_patient;VendorText:DocuSign\\s+Envelope\\s+ID",
"signatures": [
{
"page": 1,
"field_name": "sig_patient",
"role": "patient",
"score": 5,
"scores": { "field": 3, "page_label": 2 },
"evidence": ["field:patient", "page_label:patient"],
"hint": "AcroSig:sig_patient",
"render_type": "typed",
"bounding_box": [10.0, 10.0, 150.0, 40.0],
"crop_path": "signature_crops/example/sig_01_patient.png",
"crop_docx_path": null
},
{
"page": null,
"field_name": "vendor_or_acro_detected",
"role": "representative",
"score": 6,
"scores": { "page_label": 4, "general": 2 },
"evidence": ["page_label:representative(parent/guardian)", "pseudo:true"],
"hint": "VendorOrAcroOnly",
"render_type": "typed",
"bounding_box": null,
"crop_path": null
}
]
}
~~~
### Field notes
- **`esign_found`** is `true` if any signature widget, AcroForm `/Sig` field, or vendor marker is detected.
- **`scanned_pdf`** is a heuristic: pages with images only and no extractable text.
- **`mixed`** means both `esign_found` and `scanned_pdf` are `true`.
- **`roles`** summarizes unique non-`unknown` roles across signatures.
- In the retainer profile, the emitter prefers two signatures (client + firm), often on the same page.
- **`signatures[].bounding_box`** reports the widget rectangle in PDF points (origin bottom-left).
- **`signatures[].crop_path`** is populated when PNG crops are generated (via CLI `--crop-signatures` or `CropSignatureImages`).
- **`signatures[].crop_docx_path`** is populated when DOCX crops are generated (`--crop-docx` or `docx=True`).
- **`signatures[].crop_bytes`** contains base64 PNG data when CLI `--crop-bytes` is enabled.
- **`signatures[].crop_docx_bytes`** contains base64 DOCX data when `--crop-docx` and `--crop-bytes` are enabled together.
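When `write_results: true` is enabled, the per-file dicts follow the schema above and are easy to post-process without the built-in `eda` command. A minimal sketch, assuming `results.json` is a JSON array of per-file dicts with the documented `esign_found` and `roles` fields:

```python
import json
from collections import Counter
from pathlib import Path

def summarize(results_path: str) -> dict:
    """Aggregate a results.json into simple counts (illustrative sketch)."""
    records = json.loads(Path(results_path).read_text())
    role_counts: Counter = Counter()
    esign = 0
    for rec in records:
        if rec.get("esign_found"):
            esign += 1
        # "roles" is a ";"-separated summary, e.g. "patient;representative"
        for role in filter(None, (rec.get("roles") or "").split(";")):
            role_counts[role] += 1
    return {"files": len(records), "esign_found": esign, "roles": dict(role_counts)}
```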
---
## Configuration & rules
Built-in rules live under **`src/sigdetect/data/`**:
- **`vendor_patterns.yml`** – vendor byte/text patterns (e.g., DocuSign, Acrobat Sign).
- **`role_rules.yml`** – signer-role logic:
- `labels` – strong page labels (e.g., “Signature of Patient”, including Parent/Guardian cases)
- `general` – weaker role hints in surrounding text
- `field_hints` – field-name keywords (e.g., `sig_patient`)
- `doc_hard` – strong document-level triggers (relationship to patient, “minor/unable to sign”, first-person consent)
- `weights` – scoring weights for the above
- **`role_rules.retainer.yml`** – retainer-specific rules (labels for client/firm, general tokens, and field hints).
You can keep one config YAML per dataset, e.g.:
~~~yaml
# ./sample_data/config.yml (example)
pdf_root: ./pdfs
out_dir: ./sigdetect_out
engine: auto
write_results: false
pseudo_signatures: true
recurse_xobjects: true
profile: retainer # or: hipaa
crop_signatures: false # enable to write PNG crops (requires pymupdf)
crop_docx: false # enable to write DOCX crops instead of PNGs (requires python-docx)
# crop_output_dir: ./signature_crops
crop_image_dpi: 200
detect_wet_signatures: false # kept for compatibility; non-e-sign PDFs still trigger OCR
wet_ocr_dpi: 200
wet_ocr_languages: eng
wet_precision_threshold: 0.82
~~~
The YAML files can be customized or loaded at runtime (see the CLI `--config` option, or import the patterns and pass them into the engine).
### Key detection behaviors
- **Widget-first in mixed docs:** if a real `/Widget` exists, no pseudo “VendorOrAcroOnly” signature is emitted.
- **Acro-only dedupe:** multiple `/Sig` fields at the document level collapse to a single pseudo signature.
- **Parent/Guardian label:** “Signature of Parent/Guardian” maps to the `representative` role.
- **Field-name fallbacks:** role hints are pulled from `/T`, `/TU`, or `/TM` (in that order).
- Retainer heuristics:
- Looks for client and firm labels/tokens; boosts pages with law-firm markers (LLP/LLC/PA/PC) and “By:” blocks.
- Applies an anti-front-matter rule to reduce page-1 false positives (e.g., letterheads, firm mastheads).
- When only vendor/Acro clues exist (no widgets), it will emit two pseudo signatures targeting likely pages.
- **Wet detection (non-e-sign):** The CLI runs an OCR-backed pass (PyMuPDF + pytesseract/Tesseract) after e-sign detection whenever no e-sign evidence is found. It emits `RenderType="wet"` signatures for high-confidence label/stroke pairs in the lower page region. When an image-based signature is present on a page, label-only OCR candidates are suppressed unless a stroke is detected. Results are deduped to the top signature per role (dropping `unknown`). Missing OCR dependencies add a `ManualReview:*` hint instead of failing.
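The weighted-evidence idea running through the behaviors above (field hints, page labels, general tokens, each with a weight) can be illustrated with a toy scorer. The weights and evidence kinds here are illustrative, not the shipped `role_rules.yml` values:

```python
# Toy version of the evidence-weighting idea: each evidence kind carries a
# weight, and the best-scoring role wins. Values are illustrative only.
WEIGHTS = {"field": 3, "page_label": 2, "general": 1}

def score_roles(evidence: list) -> dict:
    """evidence is a list of (kind, role) pairs, e.g. ('field', 'patient')."""
    scores: dict = {}
    for kind, role in evidence:
        scores[role] = scores.get(role, 0) + WEIGHTS.get(kind, 0)
    return scores

evidence = [("field", "patient"), ("page_label", "patient"), ("general", "attorney")]
scores = score_roles(evidence)
best = max(scores, key=scores.get)
print(scores, "->", best)  # {'patient': 5, 'attorney': 1} -> patient
```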
---
## Smoke tests
Drop-in smoke tests live under **`tests/`** and cover:
- Vendor-only (multiple markers)
- Acro-only (single pseudo with multiple `/Sig`)
- Mixed (real widget + vendor markers → widget role, no pseudo)
- Field-name fallbacks (`/TU`, `/TM`)
- Parent/Guardian label → `representative`
- Encrypted PDFs (graceful handling)
Run a subset:
~~~bash
pytest -q -k smoke
# or specific files:
pytest -q tests/test_mixed_widget_vendor_smoke.py
~~~
---
## Debugging
If you need to debug or inspect the detection logic, you can run the CLI with `--debug`, or drive an engine directly from Python:
~~~python
from pathlib import Path
from sigdetect.config import DetectConfiguration
from sigdetect.detector.pypdf2_engine import PyPDF2Detector
pdf = Path("/path/to/one.pdf")
configuration = DetectConfiguration(
    PdfRoot=pdf.parent,
    OutputDirectory=Path("."),
    Engine="pypdf2",
    Profile="retainer",
    PseudoSignatures=True,
    RecurseXObjects=True,
)
print(PyPDF2Detector(configuration).Detect(pdf).to_dict())
~~~
---
## Dev workflow
### Project layout
~~~text
src/
  sigdetect/
    detector/
      base.py
      pypdf2_engine.py
    data/
      role_rules.yml
      vendor_patterns.yml
    cli.py
tests/
pyproject.toml
.pre-commit-config.yaml
~~~
### Formatting & linting (pre-commit)
~~~bash
# one-time
pip install pre-commit
pre-commit install
# run on all files
pre-commit run --all-files
~~~
Hooks: `black`, `isort`, `ruff`, plus `pytest` (optional).
Ensure your virtualenv folders are excluded in `.pre-commit-config.yaml` (e.g., `^\.venv`).
### Typical loop
~~~bash
# run tests
pytest -q
# run only smoke tests while iterating
pytest -q -k smoke
~~~
---
## Troubleshooting
**Using the wrong Python**
~~~bash
which python
python -V
~~~
If you see 3.8 or system Python, recreate the venv with 3.11.
**ModuleNotFoundError: typer / click / pytest**
~~~bash
pip install typer click pytest
~~~
**Pre-commit reformats files in `.venv`**
~~~yaml
exclude: |
  ^(\.venv|\.venv311|dist|build)/
~~~
**Vendor markers not detected**
Set `--recurse-xobjects true` and enable pseudo signatures. Many providers embed markers in Form XObjects or compressed streams.
**Parent/Guardian not recognized**
The rules already include a fallback for “Signature of Parent/Guardian”; if your variant differs, add it to `role_rules.yml → labels.representative`.
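For example, to register a custom label variant (the surrounding structure is a guess at `role_rules.yml`'s layout — match the keys in your actual file):

~~~yaml
labels:
  representative:
    - "Signature of Parent/Guardian"
    - "Parent or Guardian Signature"   # your custom variant
~~~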
---
## License
MIT
| text/markdown | null | BT Asmamaw <basmamaw@angeiongroup.com> | null | null | MIT | null | [] | [] | null | null | >=3.9 | [] | [] | [] | [
"pypdf>=4.0.0",
"rich>=13.0",
"typer>=0.12",
"pydantic>=2.5",
"pillow>=10.0",
"python-docx>=1.1.0",
"pytesseract>=0.3.10",
"pymupdf>=1.23",
"pyyaml>=6.0"
] | [] | [] | [] | [] | twine/6.1.0 CPython/3.13.7 | 2026-02-19T14:12:31.939101 | sigdetect-0.5.4.tar.gz | 76,886 | 1d/58/6d27f9e4fb6c568c5a4862cddc458028658507ed604f8ff6c9cb4f489d3e/sigdetect-0.5.4.tar.gz | source | sdist | null | false | 5916f9721e43d14c8629069e1c17692f | f55f47f9d07afeabf00c6c6ff96b0abe96b264086c674a6f64f43f200a0e6181 | 1d586d27f9e4fb6c568c5a4862cddc458028658507ed604f8ff6c9cb4f489d3e | null | [] | 241 |
2.4 | seeq | 66.106.0.20260219 | The Seeq SDK for Python | The **seeq** Python module is used to interface with Seeq Server API ([https://www.seeq.com](https://www.seeq.com)).
Documentation can be found at
[https://python-docs.seeq.com](https://python-docs.seeq.com/).
**IMPORTANT:**
This module mirrors the versioning of Seeq Server, such that the version of this module reflects the version
of the Seeq Server from which the Seeq SDK was generated.
For Seeq Server version R60 and later, the SPy module
[is in its own namespace package](https://pypi.org/project/seeq-spy/) called `seeq-spy` with its own versioning
scheme. (Prior to R60, the SPy module was bundled into this main `seeq` module.)
# Upgrade Considerations
## When using Seeq Data Lab:
If you are using **Seeq Data Lab**, you should not upgrade or otherwise affect the pre-installed version of the
`seeq` module, as the pre-installed version correctly matches the version of Seeq Server that Data Lab is tied
to.
### Older than R60
For versions of Seeq Server older than R60, you must follow the (more complex) instructions on the corresponding PyPI page for your version:
https://pypi.org/project/seeq/55.4.9.183.34/
https://pypi.org/project/seeq/56.1.9.184.21/
https://pypi.org/project/seeq/57.2.7.184.22/
https://pypi.org/project/seeq/58.1.3.184.22/
https://pypi.org/project/seeq/59.0.1.184.25/
## When NOT using Seeq Data Lab:
If you are _not_ using **Seeq Data Lab**, you should install the version of the `seeq` module that corresponds
to the first two numbers in the version of Seeq you are interfacing with. This version can be found at the top
of the Seeq Server *API Reference* page.
E.g., `pip install -U seeq~=65.1`. You should then upgrade SPy separately via `pip install -U seeq-spy`.
# seeq.spy
The SPy module is the recommended programming interface for interacting with the Seeq Server.
See [https://pypi.org/project/seeq-spy/](https://pypi.org/project/seeq-spy/) for more information on the SPy module.
For more advanced tasks than what the SPy module can provide, you may wish to use the SDK module described below.
# seeq.sdk
The Seeq **SDK** module is a set of Python bindings for the Seeq Server REST API. You can experiment with the REST API
by selecting the *API Reference* menu item in the upper-right "hamburger" menu of Seeq Workbench.
**The SDK module supports both Python 2.x and Python 3.x, but it is strongly recommended that you use Python 3.x, since Python 2.x is end-of-life.**
Login is accomplished with the following pattern:
```
import seeq
import getpass
api_client = seeq.sdk.ApiClient('http://localhost:34216/api')
# Change this to False if you're getting errors related to SSL
seeq.sdk.Configuration().verify_ssl = True
auth_api = seeq.sdk.AuthApi(api_client)
auth_input = seeq.sdk.AuthInputV1()
# Use raw_input() instead of input() if you're using Python 2
auth_input.username = input('Username:').rstrip().lower()
auth_input.password = getpass.getpass()
auth_input.auth_provider_class = "Auth"
auth_input.auth_provider_id = "Seeq"
auth_api.login(body=auth_input)
```
The `api_client` object is then used as the argument to construct any API object you need, such as
`seeq.sdk.ItemsApi`. Each of the root endpoints that you see in the *API Reference* webpage corresponds to
a `seeq.sdk.XxxxxApi` class.
----------
In case you are looking for the Gencove package, it is available here: https://pypi.org/project/gencove/
| text/markdown | null | Seeq Corporation <support@seeq.com> | null | null | Seeq Python Library – License File
------------------------------------------------------------------------------------------------------------------------
Seeq Python Library - Copyright Notice
© Copyright 2020 - Seeq Corporation
Permission to Distribute - Permission is hereby granted, free of charge, to any person obtaining a copy of this Seeq
Python Library and associated documentation files (the "Software"), to copy the Software, to publish and distribute
copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following
conditions:
1. The foregoing permission does not grant any right to use, modify, merge, sublicense, sell or otherwise deal in the
Software, and all such use is subject to the terms and conditions of the Seeq Python Library - End User License
Agreement, below.
2. The above copyright notice and the full text of this permission notice shall be included in all copies or
substantial portions of the Software.
------------------------------------------------------------------------------------------------------------------------
SEEQ PYTHON LIBRARY - END USER LICENSE AGREEMENT
This End User License Agreement ("Agreement") is a binding legal document between Seeq and you, which explains your
rights and obligations as a Customer using Seeq products. "Customer" means the person or company that downloads and
uses the Seeq Python library. "Seeq" means Seeq Corporation, 1301 2nd Avenue, Suite 2850, Seattle, WA 98101, USA.
By installing or using the Seeq Python library, you agree on behalf of Customer to be bound by this Agreement. If you
do not agree to this Agreement, then do not install or use Seeq products.
This Agreement, and your license to use the Seeq Python Library, remains in effect only during the term of any
subscription license that you purchase to use other Seeq software. Upon the termination or expiration of any such paid
license, this Agreement terminates immediately, and you will have no further right to use the Seeq Python Library.
From time to time, Seeq may modify this Agreement, including any referenced policies and other documents. Any modified
version will be effective at the time it is posted to Seeq’s website at
https://www.seeq.com/legal/software-license-agreement. To keep abreast of your license rights and relevant
restrictions, please bookmark this Agreement and read it periodically. By using any Product after any modifications,
Customer agrees to all of the modifications.
This End User License Agreement ("Agreement") is entered into by and between Seeq Corporation ("Seeq") and the Customer
identified during the process of registering the Seeq Software. Seeq and Customer agree as follows.
1. Definitions.
"Affiliate" or "Affiliates" means any company, corporation, partnership, joint venture, or other entity in which any of
the Parties directly or indirectly owns, is owned by, or is under common ownership with a Party to this Agreement to
the extent of at least fifty percent (50%) of its equity, voting rights or other ownership interest (or such lesser
percentage which is the maximum allowed to be owned by a foreign corporation in a particular jurisdiction).
"Authorized Users" means the individually-identified employees, contractors, representatives or consultants of Customer
who are permitted to use the Software in accordance with the applicable Order Document.
"Customer Data" means data in Customer’s data resources that is accessed by, and processed in, Seeq Server software.
Customer Data includes all Derived Data.
"Derived Data" means data derived by Authorized Users from Customer Data and consists of: scalars, signals, conditions,
journals, analyses, analysis results, worksheets, workbooks and topics.
"Documentation" means Seeq’s standard installation materials, training materials, specifications and online help
documents normally made available by Seeq in connection with the Software, as modified from time to time by Seeq.
"On-Premise Software" means Software that is installed on hardware owned or arranged by and under the control of
Customer, such as Customer-owned hardware, a private cloud or a public cloud. On-Premise Software is managed by
Customer. Examples of On-Premise Software include Seeq Server, Seeq Python library, Seeq Connectors and Seeq Remote
Agents.
"Order Document" means each mutually-agreed ordering document used by the parties from time to time for the purchase of
licenses for the Software. Customer’s Purchase Order may, in conjunction with the applicable Proposal or Quote from
Seeq, constitute an Order Document subject to the terms of this Agreement. All Order Documents are incorporated by
reference into this Agreement.
"SaaS Software" means Software that is installed on hardware arranged by and under the control of Seeq, such as
Microsoft Azure or AWS. SaaS Software is managed by Seeq. Example of SaaS Software include Seeq Server.
"Seeq Technology" means the Software, the Documentation, all algorithms and techniques for use with the Software
created by Seeq, and all modifications, improvements, enhancements and derivative works thereof created by Seeq.
"Software" means: (i) the Seeq Python library with which this License File is included, and (ii) Seeq software
applications identified on an applicable Order Document that are licensed to Customer pursuant to this Agreement.
Software includes On-Premise Software and SaaS Software.
"Subscription License" means a license allowing Customer to access SaaS Software, and to copy, install and use
On-Premise Software, for the period of the Subscription Term.
"Subscription Term" means the period of time specified in an Order Document during which the Subscription License is in
effect. The Subscription Term for the Seeq Python library shall be coterminous with Customer’s paid license to use Seeq
Software under an Order Document.
2. Subscription License.
a. License. Seeq grants Customer a worldwide, non-exclusive, non‐transferable, non‐sublicenseable right to access the
SaaS Software and to copy, install and use the On-Premise Software for the duration of the Subscription Term, subject
to the terms and conditions of this Agreement. Seeq does not license the Software on a perpetual basis.
b. Separate License Agreement. If Seeq and Customer have executed a separate License Agreement intended to govern
Customer’s use of the Software, then such separate License Agreement shall constitute the complete and exclusive
agreement of the parties for such use, and this Agreement shall be of no force or effect, regardless of any action by
Customer personnel that would have appeared to accept the terms of this Agreement.
c. Authorized Users. Only Authorized Users may use the Software, and only up to the number of Authorized Users
specified in the applicable Order Document. Customer designates each individual Authorized User in the Software. If
the number of Authorized Users is greater than the number specified in the particular Order Document, Customer will
purchase additional Authorized Users at the prices set out in the Order Document.
d. Subscription Term. The Subscription Term shall begin and end as provided in the applicable Order Document. The
Subscription Term will not renew, except by a new Order Document acceptable to both parties. Upon renewal of a
Subscription Term, Customer will, if applicable, increase the number of Authorized Users to a number that Customer
believes in good faith will be sufficient to accommodate any expected growth in the number of Customer’s users during
the new Subscription Term. This Agreement will continue in effect for the Subscription Term of all Order Documents
hereunder.
e. Limitations on Use. All use of Software must be in accordance with the relevant Documentation. End User may make a
limited number of copies of the Software as is strictly necessary for purposes of data protection, archiving, backup,
and testing. Customer will use the Software for its internal business purposes and to process information about the
operations of Customer and its Affiliates, and will not, except as provided in an Order Document, directly or
indirectly, use the Software to process information about or for any other company. Customer will: (i) not permit
unauthorized use of the Software, (ii) not infringe or violate the intellectual property rights, privacy, or any other
rights of any third party or any applicable law, (iii) ensure that each user uses a unique Authorized User ID and
password, (iv) not, except as provided in an Order Document, allow resale, timesharing, rental or use of the Software
in a service bureau or as a provider of outsourced services, and (v) not modify, adapt, create derivative works of,
reverse engineer, decompile, or disassemble the Software or Seeq Technology.
f. Software Modification. Seeq may modify the Software from time to time, but such modification will not materially
reduce the functionality of the Software. Seeq may contract with third parties to support the Software, so long as they
are subject to obligations of confidentiality to Seeq at least as strict as Seeq’s to Customer. Seeq shall remain
responsible for the performance of its contractors.
2. Support. Support for Customer’s use of the Software is included in Customer’s subscription fee. Seeq will provide
support and maintenance for the Software, including all applicable updates, and web-based support assistance in
accordance with Seeq’s support policies in effect from time to time. Other professional services are available for
additional fees.
3. Ownership.
a. Customer Data. Customer owns all Customer Data, including all Derived Data, and Seeq shall not receive any ownership
interest in it. Seeq may use the Customer Data only to provide the Software capabilities purchased by Customer and as
permitted by this Agreement, and not for any other purpose. Customer is the owner and data controller for the Customer
Data.
b. Software and Seeq Technology. Seeq retains all rights in the Software and the Seeq Technology (subject to the
license granted to Customer). Customer will not, and will not allow any other person to, modify, adapt, create
derivative works of, reverse engineer, decompile, or disassemble the Software or Seeq Technology. All new Seeq
Technology developed by Seeq while working with Customer, including any that was originally based on feedback,
suggestions, requests or comments from Customer, shall be Seeq’s sole property, and Customer shall have the right to
use any such new Seeq Technology only in connection with the Software.
c. Third-Party Open Source Software. The Software incorporates third-party open source software. All such software must
comply with Seeq’s Third Party Open Source License Policy. Customer may request a list of such third-party software and
a copy of the Policy at any time.
4. Fees and Payment Terms.
a. Fees. Customer shall pay the fees as specified in the Order Document. Unless otherwise specified in the Order
Document, all amounts are in US Dollars (USD). Upon renewal of a Subscription Term, Customer will, if applicable,
increase the number of Authorized Users to a number that Customer believes in good faith will be sufficient to
accommodate any expected growth in the number of Customer’s users during the new Subscription Term.
b. Invoicing & Payment. All payments are due within 30 days of the date of the invoice and are non-cancellable and
non-refundable except as provided in this Agreement. If Customer does not pay any amount (not disputed in good faith)
when due, Seeq may charge interest on the unpaid amount at the rate of 1.0% per month (or if less, the maximum rate
allowed by law). If Customer does not pay an overdue amount (not disputed in good faith) within 20 days of notice of
non-payment, Seeq may suspend the Software until such payment is received, but Customer will remain obligated to make
all payments due under this Agreement. Customer agrees to pay Seeq’s expenses, including reasonable attorneys and
collection fees, incurred in collecting amounts not subject to a good faith dispute.
c. Excess Usage of Software. The Software has usage limitations based on the number of Authorized Users or other
metrics as set forth on the Order Document. Customer shall maintain accurate records regarding Customer’s actual use of
the Software and shall make such information promptly available to Seeq upon request. Seeq may also monitor Customer’s
use of the Software.
d. Fees for Excess Usage of Software. Seeq will not require Customer to pay for past excess use, and in consideration
thereof:
i. If Customer’s license covers a fixed number of Authorized Users, Customer will promptly issue a new Order Document
to cover current and good-faith anticipated future excess use.
ii. If Customer’s license is under Seeq’s Extended Experience Program, Strategic Agreement Program or any other program
not tied directly to a fixed number of Authorized Users, then the parties will negotiate in good faith as follows:
(1) if the excess use is less than 50% above the number of Authorized Users that was used to set pricing for such
license for the current contract period (usually a year), the parties will negotiate an appropriate usage level and
fees for the next contract period, and
(2) If the excess use is more than 50% above such number, the parties will negotiate appropriate usage levels and fees
for the remainder of the current and the next contract periods, with additional fees for the current period payable
upon Seeq’s invoice.
e. Taxes. All fees are exclusive of all taxes, including federal, state and local use, sales, property, value-added,
ad valorem and similar taxes related to this transaction, however designated (except taxes based on Seeq’s net income).
Unless Customer presents valid evidence of exemption, Customer agrees to pay any and all such taxes that it is
obligated by law to pay. Customer will pay Seeq’s invoices for such taxes whenever Seeq is required to collect such
taxes from Customer.
f. Purchase through Seeq Partner. In the event that End User purchased its subscription to Seeq through an accredited
Seeq Partner, notwithstanding provisions of this Agreement relating to End User's payments to Seeq, Partner will
invoice End User, or charge End User using the credit card on file, and End User will pay all applicable subscription
fees to Partner.
6. Confidentiality. "Confidential Information" means all information and materials obtained by a party (the
"Recipient") from the other party (the "Disclosing Party"), whether in tangible form, written or oral, that is
identified as confidential or would reasonably be understood to be confidential given the nature of the information and
circumstances of disclosure, including without limitation Customer Data, the Software, Seeq Technology, and the terms
and pricing set out in this Agreement and Order Documents. Confidential Information does not include information that
(a) is already known to the Recipient prior to its disclosure by the Disclosing Party; (b) is or becomes generally
known through no wrongful act of the Recipient; (c) is independently developed by the Recipient without use of or
reference to the Disclosing Party’s Confidential Information; or (d) is received from a third party without restriction
and without a breach of an obligation of confidentiality. The Recipient shall not use or disclose any Confidential
Information without the Disclosing Party’s prior written permission, except to its employees, contractors, directors,
representatives or consultants who have a need to know in connection with this Agreement or Recipient’s business
generally, or as otherwise allowed herein. The Recipient shall protect the confidentiality of the Disclosing Party’s
Confidential Information in the same manner that it protects the confidentiality of its own confidential information of
a similar nature, but using not less than a reasonable degree of care. The Recipient may disclose Confidential
Information to the extent that it is required to be disclosed pursuant to a statutory or regulatory provision or court
order, provided that the Recipient provides prior notice of such disclosure to the Disclosing Party, unless such notice
is prohibited by law, rule, regulation or court order. As long as an Order Document is active under this Agreement and
for two (2) years thereafter, and at all times while Customer Data is in Seeq’s possession, the confidentiality
provisions of this Section shall remain in effect.
7. Security. Seeq will maintain and enforce commercially reasonable physical and logical security methods and
procedures to protect Customer Data on the SaaS Software and to secure and defend the SaaS Software against "hackers"
and others who may seek to access the SaaS Software without authorization. Seeq will test its systems for potential
security vulnerabilities at least annually. Seeq will use commercially reasonable efforts to remedy any breach of
security or unauthorized access. Seeq reserves the right to suspend access to the Seeq System in the event of a
suspected or actual security breach. Customer will maintain and enforce commercially reasonable security methods and
procedures to prevent misuse of the log-in information of its employees and other users. Seeq shall not be liable for
any damages incurred by Customer or any third party in connection with any unauthorized access resulting from the
actions of Customer or its representatives.
8. Warranties.
a. Authority and Compliance with Laws. Each party warrants and represents that it has all requisite legal authority to
enter into this Agreement and that it shall comply with all laws applicable to its performance hereunder including
export laws and laws pertaining to the collection and use of personal data.
b. Industry Standards and Documentation. Seeq warrants and represents that the Software will materially conform to the
specifications as set forth in the applicable Documentation. At no additional cost to Customer, and as Customer’s sole
and exclusive remedy for nonconformity of the Software with this limited warranty, Seeq will use commercially
reasonable efforts to correct any such nonconformity, provided Customer promptly notifies Seeq in writing outlining the
specific details upon discovery, and if such efforts are unsuccessful, then Customer may terminate, and receive a
refund of all pre-paid and unused fees for, the affected Software. This limited warranty shall be void if the failure
of the Software to conform is caused by (i) the use or operation of the Software with an application or in an
environment other than as set forth in the Documentation, or (ii) modifications to the Software that were not made by
Seeq or Seeq’s authorized representatives.
c. Malicious Code. Seeq will not introduce any time bomb, virus or other harmful or malicious code designed to disrupt
the use of the Software, other than Seeq’s ability to disable access to the Software in the event of termination or
suspension as permitted hereunder.
d. DISCLAIMER. EXCEPT AS EXPRESSLY SET FORTH HEREIN, NEITHER PARTY MAKES ANY REPRESENTATIONS OR WARRANTIES OF ANY
KIND, WHETHER EXPRESS OR IMPLIED, INCLUDING ANY WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE OR
NON-INFRINGEMENT. EXCEPT AS STATED IN THIS SECTION, SEEQ DOES NOT REPRESENT THAT CUSTOMER’S USE OF THE SOFTWARE WILL BE
SECURE, UNINTERRUPTED OR ERROR FREE. NO STATEMENT OR INFORMATION, WHETHER ORAL OR WRITTEN, OBTAINED FROM SEEQ IN ANY
MEANS OR FASHION SHALL CREATE ANY WARRANTY NOT EXPRESSLY AND EXPLICITLY SET FORTH IN THIS AGREEMENT.
9. Indemnification by Seeq. Seeq shall indemnify, defend and hold Customer harmless from and against all losses
(including reasonable attorney fees) arising out of any third-party suit or claim alleging that Customer’s authorized
use of the Software infringes any valid U.S. or European Union patent or trademark, trade secret or other proprietary
right of such third party. Customer shall: (i) give Seeq prompt written notice of such suit or claim, (ii) grant Seeq
sole control of the defense or settlement of such suit or claim and (iii) reasonably cooperate with Seeq, at Seeq’s
expense, in its defense or settlement of the suit or claim. To the extent that Seeq is prejudiced by Customer's failure
to comply with the foregoing requirements, Seeq shall not be liable hereunder. Seeq may, at its option and expense, (i)
replace the Software with compatible non-infringing Software, (ii) modify the Software so that it is non-infringing,
(iii) procure the right for Customer to continue using the Software, or (iv) if the foregoing options are not
reasonably available, terminate the applicable Order Document and refund Customer all prepaid fees for Software
applicable to the remainder of the applicable Subscription Term. Seeq shall have no obligation to Customer with respect
to any infringement claim against Customer if such claim existed prior to the effective date of the applicable Order
Document or such claim is based upon (i) Customer’s use of the Software in a manner not expressly authorized by this
Agreement, (ii) the combination, operation, or use of the Software with third party material that was not provided by
Seeq, if Customer’s liability would have been avoided in the absence of such combination, use, or operation, or (iii)
modifications to the Software other than as authorized in writing by Seeq. THIS SECTION SETS FORTH SEEQ’S ENTIRE
OBLIGATION TO CUSTOMER WITH RESPECT TO ANY CLAIM SUBJECT TO INDEMNIFICATION UNDER THIS SECTION.
10. LIMITATION OF LIABILITIES. IN NO EVENT SHALL EITHER PARTY OR THEIR SERVICE PROVIDERS, LICENSORS CONTRACTORS OR
SUPPLIERS BE LIABLE FOR ANY INDIRECT, INCIDENTAL, CONSEQUENTIAL, SPECIAL OR PUNITIVE DAMAGES OF ANY KIND, INCLUDING
WITHOUT LIMITATION DAMAGES FOR COVER OR LOSS OF USE, DATA, REVENUE OR PROFITS, EVEN IF SUCH PARTY HAS BEEN ADVISED OF
THE POSSIBILITY OF SUCH DAMAGES. THE FOREGOING LIMITATION OF LIABILITY AND EXCLUSION OF CERTAIN DAMAGES SHALL APPLY
REGARDLESS OF THE SUCCESS OR EFFECTIVENESS OF OTHER REMEDIES. EXCEPT FOR THE PARTIES’ INDEMNIFICATION OBLIGATIONS,
DAMAGES FOR BODILY INJURY OR DEATH, DAMAGES TO REAL PROPERTY OR TANGIBLE PERSONAL PROPERTY, AND FOR BREACHES OF
CONFIDENTIALITY UNDER SECTION 6, IN NO EVENT SHALL THE AGGREGATE LIABILITY OF A PARTY, ITS SERVICE PROVIDERS,
LICENSORS, CONTRACTORS OR SUPPLIERS ARISING UNDER THIS AGREEMENT, WHETHER IN CONTRACT, TORT OR OTHERWISE, EXCEED THE
TOTAL AMOUNT OF FEES PAID BY CUSTOMER TO SEEQ FOR THE RELEVANT SOFTWARE WITHIN THE PRECEDING TWELVE (12) MONTHS.
11. Termination and Expiration.
a. Termination Rights. A party may terminate any Order Document: (i) for any material breach not cured within thirty
(30) days following written notice of such breach, and (ii) immediately upon written notice if the other party files
for bankruptcy, becomes the subject of any bankruptcy proceeding or becomes insolvent.
b. Customer Termination for Convenience. Customer may terminate any Order Document for any reason at any time.
c. Termination Effects. Upon termination by Customer under Section 13(a)(i) or 13(b) above, Seeq shall refund
Customer all prepaid and unused fees for the Software. Upon termination by Seeq under Section 13(a)(i) above, Customer
shall promptly pay all unpaid fees due through the end of the Subscription Term of such Order Document.
d. Access and Data. Upon expiration or termination of an Order Document, Seeq will disable access to the applicable
SaaS Software, and Customer will uninstall and destroy all copies of the Software and Documentation on hardware under
its control. Upon Customer request, Seeq will provide Customer with a copy of all Customer Data in Seeq’s possession,
in a mutually agreeable format within a mutually agreeable timeframe. Notwithstanding the foregoing: (i) Seeq may
retain backup copies of Customer Data for a limited period of time in accordance with Seeq’s then-current backup
policy, and (ii) Seeq will destroy all Customer Data no later than 3 months after end of the Subscription Term or
earlier, upon written request from Customer.
12. General.
a. Amendment. Seeq may modify this Agreement from time to time, including any referenced policies and other documents.
Any modified version will be effective at the time it is posted on Seeq’s website at
https://seeq.com/legal/software-license-agreement.
b. Precedence. The Order Document is governed by the terms of this Agreement and in the event of a conflict or
discrepancy between the terms of an Order Document and the terms of this Agreement, this Agreement shall govern except
as to the specific Software ordered, and the fees, currency and payment terms for such orders, for which the Order
Document shall govern, as applicable. If an Order Document signed by Seeq explicitly states that it is intended to
amend or modify a term of this Agreement, such Order Document shall govern over this Agreement solely as to the
amendment or modification. Seeq objects to and rejects any additional or different terms proposed by Customer,
including those contained in Customer’s purchase order, acceptance, vendor portal or website. Neither Seeq’s acceptance
of Customer’s purchase order nor its failure to object elsewhere to any provisions of any subsequent document, website,
communication, or act of Customer shall be deemed acceptance thereof or a waiver of any of the terms hereof.
c. Assignment. Neither party may assign this Agreement, in whole or in part, without the prior written consent of the
other, which shall not be unreasonably withheld. However, either party may assign this Agreement to any Affiliate, or
to a person or entity into which it has merged or which has otherwise succeeded to all or substantially all of its
business or assets to which this Agreement pertains, by purchase of stock, assets, merger, reorganization or otherwise,
and which has assumed in writing or by operation of law its obligations under this Agreement, provided that Customer
shall not assign this Agreement to a direct competitor of Seeq. Any assignment or attempted assignment in breach of
this Section shall be void. This Agreement shall be binding upon and shall inure to the benefit of the parties’
respective successors and assigns.
d. Employees. Each party agrees that during, and for one year after, the term of this Agreement, it will not directly
or indirectly solicit for hire any of the other party’s employees who were actively engaged in the provision or use of
the Software without the other party’s express written consent. This restriction shall not apply to offers extended
solely as a result of and in response to public advertising or similar general solicitations not specifically targeted
at the other party’s employees.
e. Independent Contractors. The parties are independent contractors and not agents or partners of, or joint venturers
with, the other party for any purpose. Neither party shall have any right, power, or authority to act or create any
obligation, express or implied, on behalf of the other party.
f. Notices. All notices required under this Agreement shall be in writing and shall be delivered personally against
receipt, or by registered or certified mail, return receipt requested, postage prepaid, or sent by
nationally-recognized overnight courier service, and addressed to the party to be notified at their address set forth
below. All notices and other communications required or permitted under this Agreement shall be deemed given when
delivered personally, or one (1) day after being deposited with such overnight courier service, or five (5) days after
being deposited in the United States mail, postage prepaid to Seeq at 1301 Second Avenue, #2850, Seattle, WA 98101,
Attn: Legal and to Customer at the then-current address in Seeq’s records, or to such other address as each party may
designate in writing.
g. Force Majeure. Except for payment obligations hereunder, either party shall be excused from performance of
non-monetary obligations under this Agreement for such period of time as such party is prevented from performing such
obligations, in whole or in part, due to causes beyond its reasonable control, including but not limited to, delays
caused by the other party, acts of God, war, terrorism, criminal activity, civil disturbance, court order or other
government action, third party performance or non-performance, strikes or work stoppages, provided that such party
gives prompt written notice to the other party of such event.
h. Integration. This Agreement, including all Order Documents and documents attached hereto or incorporated herein by
reference, constitutes the complete and exclusive statement of the parties’ agreement and supersedes all proposals or
prior agreements, oral or written, between the parties relating to the subject matter hereof.
i. Not Contingent. The parties’ obligations hereunder are neither contingent on the delivery of any future functionality
or features of the Software nor dependent on any oral or written public comments made by Seeq regarding future
functionality or features of the Software.
j. No Third Party Rights. No right or cause of action for any third party is created by this Agreement or any
transaction under it.
k. Non-Waiver; Invalidity. No waiver or modification of the provisions of this Agreement shall be effective unless in
writing and signed by the party against whom it is to be enforced. If any provision of this Agreement is held invalid,
illegal or unenforceable, the validity, legality and enforceability of the remaining provisions shall not be affected
or impaired thereby. A waiver of any provision, breach or default by either party or a party’s delay exercising its
rights shall not constitute a waiver of any other provision, breach or default.
l. Governing Law and Venue. This Agreement will be interpreted and construed in accordance with the laws of the State
of Delaware without regard to conflict of law principles, and both parties hereby consent to the exclusive jurisdiction
and venue of courts in Wilmington, Delaware in all disputes arising out of or relating to this Agreement.
m. Mediation. The parties agree to attempt to resolve disputes without extended and costly litigation. The parties
will: (1) communicate any dispute to the other party, orally and in writing; (2) respond in writing to any written
dispute from the other party within 15 days of receipt; (3) if satisfactory resolution does not occur within 45 days of initial
written notification of the dispute, and if both parties do not mutually agree to a time extension, then either party
may seek a remedy in court.
n. Survival. Provisions of this Agreement that are intended to survive termination or expiration of this Agreement in
order to achieve the fundamental purposes of this Agreement shall so survive, including without limitation: Ownership,
Fees and Payment Terms, Confidentiality, Customer Data, Indemnification by Seeq and Limitation of Liabilities.
o. Headings and Language. The headings of sections included in this Agreement are inserted for convenience only and
are not intended to affect the meaning or interpretation of this Agreement. The parties to this Agreement and Order
Document have requested that this Agreement and all related documentation be written in English.
p. Federal Government End Use Provisions. Seeq provides the Software, including related technology, for ultimate
federal government end use solely in accordance with the following: Government technical data and software rights
related to the Software include only those rights customarily provided to the public as defined in this Agreement. This
customary commercial license is provided in accordance with FAR 12.211 (Technical Data) and FAR 12.212 (Software) and,
for Department of Defense transactions, DFAR 252.227-7015 (Technical Data – Commercial Items) and DFAR 227.7202-3
(Rights in Commercial Computer Software or Computer Software Documentation).
q. Contract for Services. The parties intend this Agreement to be a contract for the provision of services and not a
contract for the sale of goods. To the fullest extent permitted by law, the provisions of the Uniform Commercial Code
(UCC), the Uniform Computer Information Transaction Act (UCITA), the United Nations Convention on Contracts for the
International Sale of Goods, and any substantially similar legislation as may be enacted, shall not apply to this
Agreement.
r. Actions Permitted. Except for actions for nonpayment or breach of a party’s proprietary rights, no action,
regardless of form, arising out of or relating to the Agreement may be brought by either party more than one year after
the cause of action has accrued.
Should you have any questions concerning this Agreement, please contact Seeq at legal@seeq.com.
| null | [
"Programming Language :: Python :: 2",
"Programming Language :: Python :: 3",
"License :: Other/Proprietary License",
"Operating System :: OS Independent"
] | [] | null | null | >=2.7 | [] | [] | [] | [
"certifi>=14.05.14",
"six>=1.10",
"urllib3<3.0.0,>=1.15.1",
"requests>=2.21.0",
"cryptography>=3.2"
] | [] | [] | [] | [
"Homepage, https://www.seeq.com",
"Documentation, https://python-docs.seeq.com/",
"Changelog, https://python-docs.seeq.com/changelog.html"
] | twine/6.2.0 CPython/3.11.14 | 2026-02-19T14:12:25.391808 | seeq-66.106.0.20260219.tar.gz | 638,547 | 25/d2/a2e491e7c3b1fa02a84ca34b1351a6bda13c5cf0ea3edf77f66ea0742c81/seeq-66.106.0.20260219.tar.gz | source | sdist | null | false | f7f3bb0e0f872bd4f0f82966823b8cd4 | 09f3620230f791c288ff3cb6cacaa14c0b2162ecae496ef357212439247a8cce | 25d2a2e491e7c3b1fa02a84ca34b1351a6bda13c5cf0ea3edf77f66ea0742c81 | null | [] | 3,455 |
2.4 | qontract-reconcile | 0.10.2.dev533 | Collection of tools to reconcile services with their desired state as defined in the app-interface DB. | # qontract-reconcile
[](https://github.com/astral-sh/ruff)
[](https://github.com/astral-sh/uv)
[][pypi-link]
[![PyPI platforms][pypi-platforms]][pypi-link]

[](https://mypy-lang.org/)
A tool to reconcile services with their desired state as defined in app-interface.
Additional tools that use the libraries created by the reconciliations are also hosted here.
## Usage
Use [config.toml.example](config.toml.example) as a template to create a `config.toml` file.
Run a reconcile integration like this:
```sh
qontract-reconcile --config config.toml --dry-run <subcommand>
# review output and run without `--dry-run` to perform actual changes
qontract-reconcile --config config.toml <subcommand>
```
> Note: you can use the `QONTRACT_CONFIG` environment variable instead of using `--config`.
### OpenShift usage
OpenShift templates can be found [here](/openshift/qontract-reconcile.yaml). In order to add integrations there please use the [helm](/helm/README.md) chart provided.
## Available Integrations
`qontract-reconcile` includes the following integrations:
```text
acs-policies Manages RHACS security policy configurations
acs-rbac Manages RHACS rbac configuration
advanced-upgrade-scheduler Manage Cluster Upgrade Policy schedules in
OCM organizations based on OCM labels.
aws-account-manager Create and manage AWS accounts.
aws-ami-cleanup Cleanup old and unused AMIs.
aws-ami-share Share AMI and AMI tags between accounts.
aws-cloudwatch-log-retention Set up retention period for Cloudwatch logs.
aws-ecr-image-pull-secrets Generate AWS ECR image pull secrets and
store them in Vault.
aws-iam-keys Delete IAM access keys by access key ID.
aws-iam-password-reset Reset IAM user password by user reference.
aws-saml-idp Manage the SAML IDP config for all AWS
accounts.
aws-saml-roles Manage the SAML IAM roles for all AWS
accounts with SSO enabled.
aws-support-cases-sos Scan AWS support cases for reports of leaked
keys and remove them (only submits PR)
aws-version-sync Sync AWS asset version numbers to App-
Interface
blackbox-exporter-endpoint-monitoring
Manages Prometheus Probe resources for
blackbox-exporter
change-log-tracking Analyze bundle diffs by change types.
change-owners Detects owners for changes in app-interface
PRs and allows them to self-service merge.
cluster-auth-rhidp Manages the OCM subscription labels for
clusters with RHIDP authentication. Part of
RHIDP.
cluster-deployment-mapper Maps ClusterDeployment resources to Cluster
IDs.
dashdotdb-dora Collects dora metrics.
dashdotdb-dvo Collects the DeploymentValidations from all
the clusters and posts them to Dashdotdb.
dashdotdb-slo Collects the ServiceSloMetrics from all the
clusters and posts them to Dashdotdb.
database-access-manager Manage Databases and Database Users.
deadmanssnitch Automate Deadmanssnitch Creation/Deletion
dynatrace-token-provider Automatically provide dedicated Dynatrace
tokens to management clusters
email-sender Send email notifications to app-interface
audience.
endpoints-discovery Discover routes and update endpoints
external-resources Manages External Resources
external-resources-secrets-sync
Syncs External Resources Secrets from Vault
to Clusters
gabi-authorized-users Manages user access for GABI instances.
gcr-mirror Mirrors external images into Google
Container Registry.
github Configures the teams and members in a GitHub
org.
github-owners Configures owners in a GitHub org.
github-repo-invites Accept GitHub repository invitations for
known repositories.
github-repo-permissions-validator
Validates permissions in github
repositories.
github-validator Validates GitHub organization settings.
gitlab-fork-compliance Ensures that forks of App Interface are
compliant.
gitlab-housekeeping Manage issues and merge requests on GitLab
projects.
gitlab-labeler Guesses and adds labels to merge requests
according to changed paths.
gitlab-members Manage GitLab group members.
gitlab-mr-sqs-consumer Listen to SQS and creates MRs out of the
messages.
gitlab-owners Manages labels on gitlab merge requests
based on OWNERS files schema.
gitlab-permissions Manage permissions on GitLab projects.
gitlab-projects Create GitLab projects.
glitchtip Configure and enforce glitchtip instance
configuration.
glitchtip-project-alerts Configure Glitchtip project alerts.
glitchtip-project-dsn Glitchtip project dsn as openshift secret.
integrations-manager Manages Qontract Reconcile integrations.
jenkins-job-builder Manage Jenkins jobs configurations using
jenkins-jobs.
jenkins-job-builds-cleaner Clean up jenkins job history.
jenkins-roles Manage Jenkins roles association via REST
API.
jenkins-webhooks Manage web hooks to Jenkins jobs.
jenkins-webhooks-cleaner Remove webhooks to previous Jenkins
instances.
jenkins-worker-fleets Manage Jenkins worker fleets via JCasC.
jira-permissions-validator Validate permissions in Jira.
ldap-groups Manages LDAP groups based on App-Interface
roles.
ldap-users Removes users which are not found in LDAP
search.
ocm-additional-routers Manage additional routers in OCM.
ocm-addons Manages cluster Addons in OCM.
ocm-addons-upgrade-scheduler-org
Manage Addons Upgrade Policy schedules in
OCM organizations.
ocm-addons-upgrade-tests-trigger
Trigger jenkins jobs following Addon
upgrades.
ocm-aws-infrastructure-access Grants AWS infrastructure access to members
in AWS groups via OCM.
ocm-clusters Manages clusters via OCM.
ocm-external-configuration-labels
Manage External Configuration labels in OCM.
ocm-groups Manage membership in OpenShift groups via
OCM.
ocm-internal-notifications Notifications to internal Red Hat users
based on conditions in OCM.
ocm-labels Manage cluster OCM labels.
ocm-machine-pools Manage Machine Pools in OCM.
ocm-oidc-idp Manage OIDC cluster configuration in OCM
organizations based on OCM labels. Part of
RHIDP.
ocm-standalone-user-management Manages OCM cluster usergroups and
notifications via OCM labels.
ocm-update-recommended-version Update recommended version for OCM orgs
ocm-upgrade-scheduler-org Manage Upgrade Policy schedules in OCM
organizations.
ocm-upgrade-scheduler-org-updater
Update Upgrade Policy schedules in OCM
organizations.
openshift-cluster-bots Manages dedicated-admin and cluster-admin
creds.
openshift-clusterrolebindings Configures ClusterRolebindings in OpenShift
clusters.
openshift-groups Manages OpenShift Groups.
openshift-limitranges Manages OpenShift LimitRange objects.
openshift-namespace-labels Manages labels on OpenShift namespaces.
openshift-namespaces Manages OpenShift Namespaces.
openshift-network-policies Manages OpenShift NetworkPolicies.
openshift-prometheus-rules Manages OpenShift Prometheus Rules.
openshift-resourcequotas Manages OpenShift ResourceQuota objects.
openshift-resources Manages OpenShift Resources.
openshift-rolebindings Configures Rolebindings in OpenShift
clusters.
openshift-routes Manages OpenShift Routes.
openshift-saas-deploy Manage OpenShift resources defined in Saas
files.
openshift-saas-deploy-change-tester
Runs openshift-saas-deploy for each saas-
file that changed within a bundle.
openshift-saas-deploy-trigger-cleaner
Clean up deployment related resources.
openshift-saas-deploy-trigger-configs
Trigger deployments when configuration
changes.
openshift-saas-deploy-trigger-images
Trigger deployments when images are pushed.
openshift-saas-deploy-trigger-moving-commits
Trigger deployments when a commit changed
for a ref.
openshift-saas-deploy-trigger-upstream-jobs
Trigger deployments when upstream job runs.
openshift-serviceaccount-tokens
Use OpenShift ServiceAccount tokens across
namespaces/clusters.
openshift-tekton-resources Manages custom resources for Tekton based
deployments.
openshift-upgrade-watcher Watches for OpenShift upgrades and sends
notifications.
openshift-users Deletion of users from OpenShift clusters.
openshift-vault-secrets Manages OpenShift Secrets from Vault.
prometheus-rules-tester Tests prometheus rules using promtool.
quay-membership Configures the teams and members in Quay.
quay-mirror Mirrors external images into Quay.
quay-mirror-org Mirrors entire Quay orgs.
quay-permissions Manage permissions for Quay Repositories.
quay-repos Creates and Manages Quay Repos.
query-validator Validate queries to maintain consumer schema
compatibility.
requests-sender Send emails to users based on requests
submitted to app-interface.
resource-scraper Get resources from clusters and store in
Vault.
resource-template-tester Tests templating of resources.
rhidp-sso-client Manage Keycloak SSO clients for OCM
clusters. Part of RHIDP.
saas-auto-promotions-manager Manage auto-promotions defined in SaaS files
saas-file-validator Validates Saas files.
sendgrid-teammates Manages SendGrid teammates for a given
account.
service-dependencies Validate dependencies are defined for each
service.
signalfx-prometheus-endpoint-monitoring
Manages Prometheus Probe resources for
signalfx exporter
skupper-network Manages Skupper Networks.
slack-usergroups Manage Slack User Groups (channels and
users).
sql-query Runs SQL Queries against app-interface RDS
resources.
status-board-exporter Export Product and Application information
to Status Board.
status-page-components Manages components on statuspage.io hosted
status pages.
status-page-maintenances Manages maintenances on statuspage.io hosted
status pages.
template-renderer Render datafile templates in app-interface.
template-validator Test app-interface templates.
terraform-aws-route53 Manage AWS Route53 resources using
Terraform.
terraform-init Initialize AWS accounts for Terraform usage.
terraform-repo Manages raw HCL Terraform from a separate
repository.
terraform-resources Manage AWS Resources using Terraform.
terraform-tgw-attachments Manages Transit Gateway attachments.
terraform-users Manage AWS users using Terraform.
terraform-vpc-peerings Manage VPC peerings between OSD clusters and
AWS accounts or other OSD clusters.
terraform-vpc-resources Manage VPC creation
unleash-feature-toggles Manage Unleash feature toggles.
vault-replication Allow vault to replicate secrets to other
instances.
version-gate-approver Approves OCM cluster upgrade version gates.
vpc-peerings-validator Validates that VPC peerings do not exist
between public and internal clusters.
```
## Tools
Additionally, the following tools are available:
- `app-interface-metrics-exporter`: Exports metrics from App-Interface.
- `app-interface-reporter`: Creates service reports and submits PR to App-Interface.
- `glitchtip-access-reporter`: Creates a report of users with access to Glitchtip.
- `glitchtip-access-revalidation`: Requests a revalidation of Glitchtip access.
- `qontract-cli`: A cli tool for qontract (currently very good at getting information).
- `run-integration`: A script to run qontract-reconcile in a container.
- `saas-metrics-exporter`: This tool is responsible for exposing/exporting SaaS metrics and data.
- `template-validation`: Run template validation.
## Installation
Install the package from PyPI:
```sh
uv tool install --python 3.12 qontract-reconcile
```
or via `pip`:
```sh
pip install qontract-reconcile
```
Install runtime requirements:
Versions can be found in [qontract-reconcile-base Dockerfile](https://github.com/app-sre/container-images/blob/master/qontract-reconcile-base/Dockerfile).
- amtool
- git-secrets
- helm
- kubectl
- oc
- promtool
- skopeo
- terraform
## Development
This project targets Python version 3.12.x for best compatibility and leverages [uv](https://docs.astral.sh/uv/) for dependency management.
Create a local development environment with all required dependencies:
```sh
make dev-env
```
### Image build
In order to speed up frequent builds and avoid issues with dependencies, the Docker image is built on top of the
[`qontract-reconcile-build`](https://quay.io/repository/app-sre/qontract-reconcile-base?tag=latest&tab=tags)
image. See the [`app-sre/container-images`](https://github.com/app-sre/container-images)
repository if you want to make changes to the base image.
This repo [`Dockerfile`](dockerfiles/Dockerfile) must only contain instructions related to the Python code build.
The [README](dockerfiles/README.md) contains more information about the Dockerfile and the build stages.
### Testing
This project uses [pytest](https://docs.pytest.org/en/stable/) as the test runner and
these tools for static analysis and type checking:
- [ruff](https://docs.astral.sh/ruff/): A fast Python linter and code formatter.
- [mypy](https://mypy.readthedocs.io/en/stable/): A static type checker for Python.
The [Makefile](Makefile) contains several targets to help with testing, linting,
formatting, and type checking:
- `make all-tests`: Run all available tests.
- `make linter-test`: Run the linter and formatter tests.
- `make types-test`: Run the type checker tests.
- `make qenerate-test`: Run the query classes generation tests.
- `make helm-test`: Run the helm chart tests.
- `make unittest`: Run all Python unit tests.
## Run reconcile loop for an integration locally in a container
This is currently only tested with the docker container engine.
For a more flexible way to run in a container, please see [qontract-development-cli](https://github.com/app-sre/qontract-development-cli).
### Prepare config.toml
Make sure the file `./config.dev.toml` exists and contains your current configuration.
Your `config.dev.toml` should point to the following qontract-server address:
```toml
[graphql]
server = "http://host.docker.internal:4000/graphql"
```
### Run qontract-server
Start the [qontract-server](https://github.com/app-sre/qontract-server) in a separate terminal window.
From the root directory of the `qontract-server` repo, run:
```shell
make build-dev
```
### Trigger integration
```shell
make dev-reconcile-loop INTEGRATION_NAME=terraform-resources DRY_RUN=--dry-run LOG_LEVEL=DEBUG INTEGRATION_EXTRA_ARGS=--light SLEEP_DURATION_SECS=100
```
## Query Classes
We use [qenerate](https://github.com/app-sre/qenerate) to generate data classes for GQL queries.
GQL definitions and generated classes can be found [here](reconcile/gql_definitions/).
### Workflow
1. Define your query or fragment in a `.gql` file somewhere in `reconcile/gql_definitions`.
2. Every gql file must hold exactly one `query` OR `fragment` definition. You must not have multiple definitions within one file.
3. Do not forget to add `# qenerate: plugin=pydantic_v1` in the beginning of the file. This tells `qenerate` which plugin is used to render the code.
4. Have an up-to-date schema available at localhost:4000
5. `make gql-introspection` gets the type definitions. They will be stored in `reconcile/gql_definitions/introspection.json`
6. `make gql-query-classes` generates the data classes for your queries and fragments
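Following those conventions, a minimal query file might look like this (the alias and field names are illustrative and must match the actual app-interface schema):

```graphql
# qenerate: plugin=pydantic_v1
query ClustersMinimal {
  clusters: clusters_v1 {
    name
    serverUrl
  }
}
```

Running `make gql-query-classes` then generates the pydantic data classes for this query under `reconcile/gql_definitions/`.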
## Design Patterns
This project follows a set of established architectural and implementation patterns to ensure consistency, reliability, and scalability. For a detailed explanation of these concepts, please see the **[Design Patterns Documentation](docs/patterns/README.md)**.
Understanding these patterns, especially the `qenerate` workflow for GraphQL data binding, is highly recommended for new developers.
## Troubleshooting
`faulthandler` is enabled for this project, and SIGUSR1 is registered to dump the traceback. To trigger a dump, run `kill -USR1 <pid>`, where `<pid>` is the process ID of the qontract-reconcile process.
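The registration behind this looks roughly like the following sketch of the stdlib `faulthandler` API (not the project's actual startup code):

```python
import faulthandler
import os
import signal

# Dump tracebacks of all threads to stderr whenever SIGUSR1 arrives;
# the process keeps running afterwards.
faulthandler.enable()
faulthandler.register(signal.SIGUSR1, all_threads=True)

# Equivalent to running `kill -USR1 <pid>` from another shell:
os.kill(os.getpid(), signal.SIGUSR1)
```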
## Profiling
Enable the Python cProfile module by setting the environment variable `ENABLE_PROFILING=1` before running the integration. This will generate a profile file `/tmp/profile.prof`.
You can then analyze the profile using `snakeviz`:
```sh
snakeviz /tmp/profile.prof
```
> :information_source: Note
>
> `cProfile` doesn't support multithreading, but it can still highlight performance issues on the main thread.
> If you need to profile multithreaded code, consider using [py-spy](https://github.com/benfred/py-spy) or similar tools that support sampling profiling.
> Also [memray](https://github.com/bloomberg/memray) could be beneficial for memory profiling.
## Code style guide
Qontract-reconcile uses [PEP8](https://peps.python.org/pep-0008/) as the code style guide.
The style is enforced via PR checks (`make test`) with the help of the following utilities:
- [Ruff - An extremely fast Python linter and code formatter, written in Rust.](https://docs.astral.sh/ruff/)
- [Mypy](https://mypy.readthedocs.io/en/stable/)
Run `make format` before you commit your changes to keep the code compliant.
## Release
Release versions are calculated from git tags of the form X.Y.Z.
- If the current commit has such a tag, it is used as is.
- Otherwise the latest tag of that format is used and:
  - the patch label (Z) is incremented
  - the string `.pre<count>+<commitid>` is appended, where `<count>` is the number of commits since the X.Y.Z tag and `<commitid>` is the current commit ID.
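This scheme can be sketched against `git describe --tags`-style output (a toy illustration, not the actual release script):

```python
import re

def compute_version(describe: str) -> str:
    """Derive a version from `git describe --tags` output following the
    scheme above (illustrative only)."""
    # Exact X.Y.Z tag on the current commit: use it as is.
    if re.fullmatch(r"\d+\.\d+\.\d+", describe):
        return describe
    # Otherwise: X.Y.Z-<count>-g<commitid> -> X.Y.(Z+1).pre<count>+<commitid>
    m = re.fullmatch(r"(\d+)\.(\d+)\.(\d+)-(\d+)-g([0-9a-f]+)", describe)
    if m is None:
        raise ValueError(f"unexpected describe output: {describe!r}")
    major, minor, patch, count, commit = m.groups()
    return f"{major}.{minor}.{int(patch) + 1}.pre{count}+{commit}"

print(compute_version("0.10.2"))              # "0.10.2"
print(compute_version("0.10.2-533-gc06344"))  # "0.10.3.pre533+c06344"
```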
After the PR is merged, a CI job will be triggered that will publish the package to pypi: <https://pypi.org/project/qontract-reconcile>.
## License
[Apache License Version 2.0](LICENSE).
## Authors
These tools have been written by the [Red Hat App-SRE Team](mailto:sd-app-sre@redhat.com).
[pypi-link]: https://pypi.org/project/qontract-reconcile/
[pypi-platforms]: https://img.shields.io/pypi/pyversions/qontract-reconcile
| text/markdown | null | Red Hat App-SRE Team <sd-app-sre@redhat.com> | null | null | Apache 2.0 | null | [
"Development Status :: 2 - Pre-Alpha",
"Programming Language :: Python",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.12"
] | [] | null | null | ==3.12.* | [] | [] | [] | [
"anymarkup==0.8.1",
"boto3==1.34.94",
"botocore==1.34.94",
"click<9.0,>=7.0",
"croniter<1.1.0,>=1.0.15",
"dateparser~=1.1.7",
"deepdiff==8.6.1",
"dnspython~=2.1",
"dt==1.1.73",
"filetype~=1.2.0",
"gql==3.1.0",
"hvac==2.4.0",
"jenkins-job-builder==6.4.2",
"jinja2<3.2.0,>=2.10.1",
"jira==3... | [] | [] | [] | [
"homepage, https://github.com/app-sre/qontract-reconcile",
"repository, https://github.com/app-sre/qontract-reconcile",
"documentation, https://github.com/app-sre/qontract-reconcile"
] | uv/0.8.22 | 2026-02-19T14:12:23.099748 | qontract_reconcile-0.10.2.dev533.tar.gz | 916,684 | c0/63/443c8724ad83a643ae260d0f32299d50f3fd78fa97c84029f94c0616f239/qontract_reconcile-0.10.2.dev533.tar.gz | source | sdist | null | false | fc92c508cd513b17c2549bc10bd661a0 | b31cd3da334de6384aa61b8f61cb04965d5285dbc30e8a44274be1950af60731 | c063443c8724ad83a643ae260d0f32299d50f3fd78fa97c84029f94c0616f239 | null | [] | 196 |
2.4 | voiceType2 | 0.4.1 | Type with your voice using hotkey-activated speech recognition | # voiceType - type with your voice

[](https://github.com/Adam-D-Lewis/voicetype/actions/workflows/tests.yaml)
## Features
- Press a hotkey (default: `Pause/Break` key) to start recording audio.
- Release the hotkey to stop recording.
- The recorded audio is transcribed to text (e.g., using OpenAI's Whisper model).
- The transcribed text is typed into the currently active application.
## Prerequisites
- Python 3.8+
- `pip` (Python package installer)
- For Linux installation: `systemd` (common in most modern Linux distributions).
- An OpenAI API Key (if using OpenAI for transcription).
## Installation
### Option 1: Install from PyPI
```bash
pip install voicetype2
```
### Option 2: Install from Source
1. **Clone the repository (including submodules):**
```bash
git clone --recurse-submodules https://github.com/Adam-D-Lewis/voicetype.git
cd voicetype
```
If you already cloned without `--recurse-submodules`, initialize the submodules:
```bash
git submodule update --init --recursive
```
2. **Set up a Python virtual environment (recommended):**
```bash
python3 -m venv .venv
source .venv/bin/activate # On Windows, use `.venv\Scripts\activate`
```
3. **Install the package and its dependencies:**
This project uses `pyproject.toml` with `setuptools`. Install the `voicetype` package and its dependencies using pip:
```bash
pip install .
```
This command reads `pyproject.toml`, installs all necessary dependencies, and makes the `voicetype` script available (callable as `python -m voicetype`).
4. **Run the installation script (for Linux with systemd):**
If you are on Linux and want to run VoiceType as a systemd user service (recommended for background operation and auto-start on login), use the CLI entrypoint installed with the package. Ensure you're in the environment where you installed dependencies.
```bash
voicetype install
```
During install you'll be prompted to choose a provider [litellm, local]. If you choose `litellm` you'll then be prompted for your `OPENAI_API_KEY`. Values are stored in `~/.config/voicetype/.env` with restricted permissions.
The script will:
- Create a systemd service file at `~/.config/systemd/user/voicetype.service`.
- Store your OpenAI API key in `~/.config/voicetype/.env` (with restricted permissions).
- Reload the systemd user daemon, enable the `voicetype.service` to start on login, and start it immediately.
For other operating systems, or if you prefer not to use the systemd service on Linux, you can run the application directly after installation (see Usage).
## Configuration
VoiceType can be configured using a `settings.toml` file. The application looks for configuration files in the following locations (in priority order):
1. `./settings.toml` - Current directory
2. `~/.config/voicetype/settings.toml` - User config directory
3. `/etc/voicetype/settings.toml` - System-wide config
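A minimal sketch of that lookup order (illustrative; the application's actual loader may differ):

```python
from pathlib import Path
from typing import Optional

# Checked in priority order; the first existing file wins.
CANDIDATES = [
    Path("settings.toml"),                                    # current directory
    Path.home() / ".config" / "voicetype" / "settings.toml",  # user config
    Path("/etc/voicetype/settings.toml"),                     # system-wide
]

def find_settings() -> Optional[Path]:
    for candidate in CANDIDATES:
        if candidate.is_file():
            return candidate
    return None
```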
### Available Settings
VoiceType uses a pipeline-based configuration system. See [settings.example.toml](settings.example.toml) for a complete, documented example configuration including:
- Stage definitions (RecordAudio, Transcribe, CorrectTypos, TypeText, LLMAgent)
- Local and cloud transcription options with fallback support
- Pipeline configuration with hotkey bindings
- Telemetry and logging settings
**Note:** If you used `voicetype install` and configured litellm during installation, your API key is stored separately in `~/.config/voicetype/.env`.
## Monitoring Pipeline Performance with OpenTelemetry
VoiceType includes built-in OpenTelemetry instrumentation to track pipeline execution and stage performance. When enabled, traces are exported to a local file for offline analysis.
### Enabling Telemetry
Telemetry is disabled by default. To enable it, add to your `settings.toml`:
```toml
[telemetry]
enabled = true
```
### Trace File Location
Traces are automatically saved to:
- Linux: `~/.config/voicetype/traces.jsonl`
- macOS: `~/Library/Application Support/voicetype/traces.jsonl`
- Windows: `%APPDATA%/voicetype/traces.jsonl`
### What You Can See
Each pipeline execution creates a trace with:
- **Overall pipeline duration** - Total time from start to finish
- **Individual stage timings** - How long each stage (RecordAudio, Transcribe, etc.) took
- **Pipeline metadata** - Pipeline name, ID, stage count
- **Error tracking** - Any exceptions or failures with stack traces
### Example Trace
Each span is written as a JSON line:
```json
{
"name": "pipeline.default",
"context": {...},
"start_time": 1234567890,
"end_time": 1234567895,
"attributes": {
"pipeline.id": "abc-123",
"pipeline.name": "default",
"pipeline.duration_ms": 5200
}
}
```
### Managing Trace Files
**Automatic rotation:**
Trace files are automatically rotated when they reach 10 MB. Rotated files are timestamped (e.g., `traces.20250117_143022.jsonl`) and kept indefinitely.
**View traces:**
```bash
# Pretty-print the current trace file
cat ~/.config/voicetype/traces.jsonl | jq
# View all trace files (including rotated)
cat ~/.config/voicetype/traces*.jsonl | jq
# Or just view in any text editor
cat ~/.config/voicetype/traces.jsonl
```
**Clear old traces:**
```bash
# Delete all trace files
rm ~/.config/voicetype/traces*.jsonl
```
**Analyze with grep:**
```bash
# Find slow stages (4+ digit durations) in the current file
grep -E 'duration_ms": ?[0-9]{4,}' ~/.config/voicetype/traces.jsonl
# Search across all trace files
grep -E 'duration_ms": ?[0-9]{4,}' ~/.config/voicetype/traces*.jsonl
```
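The same analysis can also be scripted. A minimal sketch, assuming the span format shown in the example trace above, that lists spans whose `pipeline.duration_ms` exceeds a threshold:

```python
import json

def slow_spans(path, threshold_ms=1000):
    """Return (name, duration_ms) pairs for spans at or above the threshold, slowest first."""
    results = []
    with open(path) as f:
        for line in f:
            line = line.strip()
            if not line:
                continue
            span = json.loads(line)
            duration = span.get("attributes", {}).get("pipeline.duration_ms")
            if duration is not None and duration >= threshold_ms:
                results.append((span.get("name"), duration))
    return sorted(results, key=lambda pair: pair[1], reverse=True)
```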
### Configuration
**Custom trace file location:**
```toml
[telemetry]
enabled = true
trace_file = "~/my-custom-traces.jsonl"
```
**Adjust rotation size or disable rotation:**
```toml
[telemetry]
enabled = true
rotation_max_size_mb = 50 # Rotate at 50 MB instead of 10 MB
# Or disable rotation entirely
# rotation_enabled = false
```
**Export to OTLP endpoint only (disable file export):**
```toml
[telemetry]
enabled = true
export_to_file = false
otlp_endpoint = "http://localhost:4317"
```
## Usage
- **If using the Linux systemd service:** The service will start automatically on login. VoiceType will be listening for the hotkey in the background.
- **To run manually (e.g., for testing or on non-Linux systems):**
Activate your virtual environment and run:
```bash
python -m voicetype
```
**Using the Hotkey:**
1. Press and hold the configured hotkey (default is `Pause/Break`).
2. Speak clearly.
3. Release the hotkey to stop recording.
4. The transcribed text should then be typed into your currently active application.
## Managing the Service (Linux with systemd)
If you used `voicetype install`:
- **Check service status:**
```bash
voicetype status
```
Alternatively:
```bash
systemctl --user status voicetype.service
```
- **View service logs:**
```bash
journalctl --user -u voicetype.service -f
```
- **Restart the service:**
(e.g., after changing the `OPENAI_API_KEY` in `~/.config/voicetype/.env`)
```bash
systemctl --user restart voicetype.service
```
- **Stop the service:**
```bash
systemctl --user stop voicetype.service
```
- **Start the service manually (if not enabled to start on login):**
```bash
systemctl --user start voicetype.service
```
- **Disable auto-start on login:**
```bash
systemctl --user disable voicetype.service
```
- **Enable auto-start on login (if previously disabled):**
```bash
systemctl --user enable voicetype.service
```
## Uninstallation (Linux with systemd)
To stop the service, disable auto-start, and remove the systemd service file and associated configuration:
```bash
voicetype uninstall
```
This will:
- Stop and disable the `voicetype.service`.
- Remove the service file (`~/.config/systemd/user/voicetype.service`).
- Remove the environment file (`~/.config/voicetype/.env` containing your API key).
- Attempt to remove the application configuration directory (`~/.config/voicetype`) if it's empty.
If you installed the package using `pip install .`, you can uninstall it from your Python environment with:
```bash
pip uninstall voicetype
```
## Contributing
Pull requests are welcome. For major changes, please open an issue first to discuss what you would like to change.
## Architecture
VoiceType uses a pipeline-based architecture with resource-based concurrency control. See [docs/ARCHITECTURE.md](docs/ARCHITECTURE.md) for:
- Complete system architecture diagram (Mermaid UML)
- Component descriptions and responsibilities
- Execution flow and lifecycle
- Design principles and extension points
### Vendored Dependencies
VoiceType includes a vendored version of [pynput](https://github.com/moses-palmer/pynput) located in `voicetype/_vendor/pynput/`. This vendored version includes a not-yet-merged bug fix and allows for better control over keyboard/mouse input handling across different platforms.
## Development
### Preferred workflow: Pixi
Pixi is the preferred way to create and manage the development environment for this project. It ensures reproducible, cross-platform setups using the definitions in `pyproject.toml`.
### Setup Pixi
```bash
# Linux/macOS (official installer)
curl -fsSL https://pixi.sh/install.sh | bash
# macOS (Homebrew)
brew install prefix-dev/pixi/pixi
# Verify
pixi --version
```
### Development Environments
Available Pixi environments:
- **local**: Standard development environment (default)
- `pixi install -e local && pixi shell -e local`
- **dev**: Development with testing tools
- `pixi install -e dev && pixi shell -e dev`
- **cpu**: CPU-only (no CUDA dependencies)
- `pixi install -e cpu && pixi shell -e cpu`
- **windows-build**: Build Windows installers (PyInstaller + dependencies)
- `pixi install -e windows-build && pixi shell -e windows-build`
### Run the application
```bash
pixi run voicetype   # equivalent to: python -m voicetype
```
### Run tests
```bash
pixi run test                # if a test task is defined
pixi run python -m pytest    # otherwise, run pytest directly
```
### Lint and format
```bash
pixi run lint                # if tasks are defined
pixi run fmt
# Or run the tools directly:
pixi run ruff format
pixi run ruff check .
```
### Pre-commit hooks (recommended)
```bash
pixi run pre-commit install            # install hooks
pixi run pre-commit run --all-files    # run on all files
```
### Building Windows Installers (Windows only)
Using Pixi:
- Setup build environment:
- `pixi install -e windows-build`
- `pixi shell -e windows-build`
- Install NSIS (one-time):
- Download from https://nsis.sourceforge.io/Download
- Or via Chocolatey: `choco install nsis`
- Build installer:
- `pixi run -e windows-build build-windows`
- Output: `dist/VoiceType-Setup.exe`
Or build executable only (no installer):
- `pixi run -e windows-build build-exe`
- Output: `dist/voicetype/voicetype.exe`
Clean build artifacts:
- `pixi run -e windows-build clean-build`
See [docs/BUILDING.md](docs/BUILDING.md) for detailed build instructions.
### Alternative: Python venv (fallback)
Ensure Python 3.11+ is installed, then:
```bash
# Create and activate a venv
python -m venv .venv
source .venv/bin/activate
# Editable install with dev dependencies
pip install -U pip
pip install -e ".[dev]"
# Run the app
python -m voicetype
```
### Notes
- Dependency definitions live in `pyproject.toml`
- After changing dependencies, update `pyproject.toml` and then run `pixi install`
## License
This project is licensed under the Apache License 2.0. See the [LICENSE](LICENSE) file for details.
| text/markdown | Adam Lewis | null | null | null | null | accessibility, speech-recognition, transcription, voice-typing | [
"Development Status :: 4 - Beta",
"Intended Audience :: End Users/Desktop",
"License :: OSI Approved :: Apache Software License",
"Operating System :: Microsoft :: Windows",
"Operating System :: POSIX :: Linux",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.11",
"Programm... | [] | null | null | >=3.11 | [] | [] | [] | [
"nvidia-ml-py; extra == \"dev\"",
"pre-commit; extra == \"dev\"",
"pytest; extra == \"dev\"",
"dbus-next>=0.2.3; extra == \"wayland\""
] | [] | [] | [] | [] | twine/6.1.0 CPython/3.13.7 | 2026-02-19T14:11:59.704147 | voicetype2-0.4.1.tar.gz | 600,109 | 94/de/3fd485c78edf6cac1c64d16c69f86a93f6eb7b99d2adba4eebb5bc648246/voicetype2-0.4.1.tar.gz | source | sdist | null | false | 57fc770a87f3ba128142c78754e213e5 | bd2dd925f50748902591c582d73c3bcf3a2157b9fea3b909b8fb91ef1d01378c | 94de3fd485c78edf6cac1c64d16c69f86a93f6eb7b99d2adba4eebb5bc648246 | Apache-2.0 | [
"LICENSE",
"voicetype/_vendor/pynput/COPYING.LGPL"
] | 0 |
2.4 | proxa | 1.0.2 | A simple yet powerful proxy management library for Python |
<h1 align="center">Proxa</h1>
<h2 align="center">A simple yet powerful Python library for managing and validating proxies.</h2>
<p align="center">
<a href="https://github.com/abbas-bachari/proxa"><img src="https://img.shields.io/badge/Python%20-3.8+-green?style=plastic&logo=Python" alt="Python"></a>
<a href="https://pypi.org/project/proxa/"><img src="https://img.shields.io/pypi/v/proxa?style=plastic" alt="PyPI - Version"></a>
<a href="https://pypi.org/project/proxa/"><img src="https://img.shields.io/pypi/l/proxa?style=plastic" alt="License"></a>
<a href="https://pepy.tech/project/proxa"><img src="https://pepy.tech/badge/proxa?style=flat-plastic" alt="Downloads"></a>
</p>
## 🛠️ Version 1.0.2
### 📌 Features
- ✅ Easy proxy parsing from strings, dictionaries, or files
- 🔄 Automatic proxy rotation
- 🔀 Shuffle proxy list randomly
- 🧪 Built-in proxy checking with multiple IP lookup services
- 📦 Ready-to-use formats for `requests`, `Telethon`, and more
- ⚡ Lightweight and dependency-minimal
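The rotation behavior can be pictured as a round-robin cursor over the proxy list. An illustrative sketch of that idea (not proxa's actual internals):

```python
class RoundRobin:
    """Round-robin cursor over a list, mirroring how proxy rotation typically works."""

    def __init__(self, items):
        if not items:
            raise ValueError("need at least one item")
        self.items = list(items)
        self.index = 0

    @property
    def current(self):
        return self.items[self.index]

    def next(self):
        # Advance the cursor, wrapping back to the start of the list
        self.index = (self.index + 1) % len(self.items)
        return self.current
```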
## 📥 Installation
```bash
pip install proxa
```
---
## 🚀 Quick Start
```python
from proxa import ProxyManager
# Initialize with a list of proxies
manager = ProxyManager([
"http://user:pass@127.0.0.1:8080",
"socks5://10.10.1.0:3128"
])
# Get the current proxy
proxy = manager.current
print(proxy.url)
# Rotate to the next proxy
proxy = manager.next()
print(proxy.url)
# Shuffle proxies to randomize order
manager.shuffle()
print("Proxies shuffled.")
# Check if proxy works and get IP info
status, ip_info, error = proxy.check()
if status:
print("Proxy is working. IP info:", ip_info)
else:
print("Proxy check failed. Error:", error)
# Check if a proxy works
working_proxy = manager.get_working_proxy()
if working_proxy:
print("Working proxy:", working_proxy.url)
```
## 🛠 Usage Examples
### From a File
```python
manager = ProxyManager("proxies.txt")
```
### Add & Remove Proxies
```python
manager.add("http://new-proxy.com:8080")
manager.remove("http://user:pass@127.0.0.1:8080")
```
---
## 📄 License
This project is licensed under the MIT License - see the [LICENSE](LICENSE) file for details.
---
## 🌟 Contribute
Contributions are welcome!
1. Fork the repo
2. Create your feature branch
3. Submit a pull request
---
Made with ❤️ by [Abbas Bachari](https://github.com/abbas-bachari)
| text/markdown | null | Abbas Bachari <abbas-bachari@hotmail.com> | null | null | MIT | null | [] | [] | null | null | >=3.8 | [] | [] | [] | [
"requests>=2.31.0"
] | [] | [] | [] | [
"Homepage, https://github.com/abbas-bachari/proxa",
"Repository, https://github.com/abbas-bachari/proxa"
] | twine/6.1.0 CPython/3.13.3 | 2026-02-19T14:11:20.266232 | proxa-1.0.2.tar.gz | 9,306 | 38/c0/15a1cd5a8b3041f14e899bd709be08815a1b38ecec971a6e29e1b7764aea/proxa-1.0.2.tar.gz | source | sdist | null | false | a2d7247a906588fbe0138f462e6374ce | 15d6ae208ea397c2b55cbb3146b422c57c371e7f4de2018f901f612519cb1d12 | 38c015a1cd5a8b3041f14e899bd709be08815a1b38ecec971a6e29e1b7764aea | null | [
"LICENSE"
] | 243 |
2.4 | mongo-charms-single-kernel | 1.6.25 | Shared and reusable code for Mongo-related charms | # Mongo Charms Single Kernel library
Library containing shared code for MongoDB operators (mongodb, mongos, VM and k8s).
The goal of this library is to provide reusable and shared code for the four
mongo charms:
* [MongoDB VM](https://github.com/canonical/mongodb-operator/)
* [MongoDB Kubernetes](https://github.com/canonical/mongodb-k8s-operator/)
* [Mongos VM](https://github.com/canonical/mongos-operator/)
* [Mongos Kubernetes](https://github.com/canonical/mongos-k8s-operator/)
## Code layout
The source code can be found in [single_kernel_mongo/](./single_kernel_mongo/)
The layout is organised as so:
* [configurations](./single_kernel_mongo/config)
* [core services](./single_kernel_mongo/core/)
* [events handlers](./single_kernel_mongo/events/)
* [event managers](./single_kernel_mongo/managers/)
* [charm state](./single_kernel_mongo/state/)
* [charm workloads](./single_kernel_mongo/workload/)
* [utils and helpers](./single_kernel_mongo/utils/)
* [abstract charm skeleton](./single_kernel_mongo/abstract_charm.py)
* [exceptions](./single_kernel_mongo/exceptions.py)
## Charm Structure
This single kernel library aims to provide a clear and reliable structure that follows the single responsibility principle. All the logic is expected to live in this library; a charm should be no more than a few lines defining the substrate, the operator type, and the config.
```python
class MongoTestCharm(AbstractMongoCharm[MongoDBCharmConfig, MongoDBOperator]):
config_type = MongoDBCharmConfig
operator_type = MongoDBOperator
substrate = Substrates.VM
peer_rel_name = PeerRelationNames.PEERS
name = "mongodb-test"
```

## Contributing
You can have longer explanations in [./CONTRIBUTING.md](./CONTRIBUTING.md) but for a quick start:
```shell
# Install poetry and tox
pipx install tox
pipx install poetry
poetry install
```
Code quality is enforced using [pre-commit](https://github.com/pre-commit/pre-commit) hooks. They will run before each commit and also at other stages.
```shell
# Install the first time
pre-commit install
# Run it manually with
pre-commit run --all-files
```
Once a PR is opened, it's possible to trigger integration testing on the charms with a comment on the PR.
This can be run only by members of the [Data and AI team](https://github.com/orgs/canonical/teams/data-ai-engineers)
Use the following syntax:
* `/test` to run on all 4 charms.
* `/test/<mongodb | mongos>/<vm | k8s>` to run on a specific charm.
* `/test/*/<vm | k8s>` to run for both charms on a specific substrate.
This will create a PR with an updated version of the library on the selected charms.
## Project and community
Mongo Charms Single Kernel library is an open source project that warmly welcomes community contributions, suggestions, fixes, and constructive feedback.
* Check our [Code of Conduct](https://ubuntu.com/community/ethos/code-of-conduct)
* Raise software issues or feature requests on [GitHub](https://github.com/canonical/mongo-single-kernel-library/issues)
* Report security issues through [LaunchPad](https://wiki.ubuntu.com/DebuggingSecurity#How%20to%20File).
* Meet the community and chat with us on [Matrix](https://matrix.to/#/#charmhub-data-platform:ubuntu.com)
* [Contribute](https://github.com/canonical/mongo-single-kernel-library/blob/main/CONTRIBUTING.md) to the code
## License
The Mongo Single Library is free software, distributed under the Apache Software License, version 2.0. See [LICENSE](https://github.com/canonical/mongo-single-kernel-library/blob/main/LICENSE) for more information.
| text/markdown | Neha Oudin | neha@oudin.red | null | null | null | null | [
"Development Status :: 3 - Alpha",
"Intended Audience :: Developers",
"Intended Audience :: System Administrators",
"Operating System :: POSIX :: Linux"
] | [] | null | null | <4.0,>=3.10 | [] | [] | [] | [
"boto3<1.38.0,>=1.37.12",
"cosl",
"cryptography",
"dacite<1.10.0,>=1.9.0",
"data-platform-helpers>=0.1.7",
"deepmerge>=2.0",
"jinja2",
"jsonschema<4.25.0,>=4.24.0",
"ldap3",
"lightkube",
"mypy-boto3-s3<1.38.0,>=1.37.0",
"ops<2.22.0,>=2.21.1",
"overrides<7.8.0,>=7.7.0",
"poetry-core>=2.0",
... | [] | [] | [] | [
"Bug Tracker, https://github.com/canonical/mongo-single-kernel-library/issues",
"Contribute, https://github.com/canonical/mongo-single-kernel-library/blob/main/CONTRIBUTING.md",
"Homepage, https://github.com/canonical/mongo-single-kernel-library",
"Matrix, https://matrix.to/#/#charmhub-data-platform:ubuntu.co... | twine/6.1.0 CPython/3.13.7 | 2026-02-19T14:10:53.075673 | mongo_charms_single_kernel-1.6.25.tar.gz | 303,931 | 4c/dd/d30e6da2c468982fc2579019ce285ab26805d2b842be0f3c302a29e5b97b/mongo_charms_single_kernel-1.6.25.tar.gz | source | sdist | null | false | 0291238d9213a66db227fe089b77b658 | 85757de4503b8b2c79ebaf7d675e900a587580ac7c7c7cee8c19a55558a56a2a | 4cddd30e6da2c468982fc2579019ce285ab26805d2b842be0f3c302a29e5b97b | Apache-2.0 | [
"LICENSE"
] | 343 |
2.4 | nonebot-plugin-mute-cat | 1.2.2 | The Betterest Mute Cat — 极致的禁言猫猫,功能强大的QQ群禁言插件 | # nonebot-plugin-mute-cat
<div align="center">
# 🐱 The Betterest Mute Cat
✨ A powerful QQ group mute plugin with a cute catgirl voice ✨
[](LICENSE)
[](https://pypi.python.org/pypi/nonebot-plugin-mute-cat)
[](https://www.python.org)
[](https://nonebot.dev)
</div>
---
## 📖 Introduction
**The Betterest Mute Cat** is a full-featured QQ group mute plugin that supports multiple muting modes, replies in a cute catgirl tone, and comes with persistent storage and scheduled tasks.
### ✨ Features
- 🔇 **Mute members** — mute several members at once, with a configurable duration (minutes / hours / seconds)
- 🔊 **Unmute** — lift the mute for one or more members
- 🔕 **Mute all** — supports a permanent mode and a timed auto-lift mode
- 📢 **Unmute all** — lift the whole-group mute + all individual mutes + clear scheduled tasks in one go
- 🎲 **Mute me** — randomly mute yourself for a little thrill
- ⏰ **Scheduled mutes** — start a mute at a given time of day, with several supported time formats
- 📌 **@ toggle** — per-group setting for whether commands require @-mentioning the bot
- 📊 **Status view** — see who is currently muted, remaining times, and pending scheduled tasks
- 💾 **Persistent storage** — built on nonebot-plugin-localstore; mute records and settings survive restarts
---
## 📦 Installation
### Method 1: nb-cli (recommended)
```bash
nb plugin install nonebot-plugin-mute-cat
```
### Method 2: pip
```bash
pip install nonebot-plugin-mute-cat
```
After installation, load the plugin in `pyproject.toml` or `bot.py`:
```toml
# pyproject.toml
[tool.nonebot]
plugins = ["nonebot_plugin_mute_cat"]
```
or
```python
# bot.py
nonebot.load_plugin("nonebot_plugin_mute_cat")
```
---
## ⚙️ Configuration
Add the following options to the `.env` or `.env.prod` file in your project root (all optional, with defaults):
| Option | Type | Default | Description |
|---|---|---|---|
| `MUTE_DEFAULT_MINUTES` | `int` | `5` | Default mute duration in minutes when none is given |
| `MUTE_SELF_OPTIONS` | `list[int]` | `[1, 3, 5, 0]` | Candidate durations (minutes) for the "mute me" command; `0` means no mute this time |
| `MUTE_AT_REQUIRED` | `bool` | `true` | Global default: whether commands require @-mentioning the bot (can be overridden per group by the @ toggle) |
| `MUTE_SUPERUSER_ONLY` | `bool` | `false` | When `true`, only superusers may use admin commands; group admins have no permission |
### Configuration example
```dotenv
# .env.prod
MUTE_DEFAULT_MINUTES=10
MUTE_SELF_OPTIONS=[1, 2, 3, 5, 10, 0]
MUTE_AT_REQUIRED=true
MUTE_SUPERUSER_ONLY=false
```
---
## 🔐 Permissions
| Command | Required permission |
|---|---|
| Mute / unmute / mute all / unmute all | Group admin **or** superuser |
| @ toggle (`开启at` / `关闭at`) | Group admin **or** superuser, and the bot **must be @-mentioned** |
| Mute me (`禁我`) | **Anyone** |
| Help / status | **Anyone** |
> ⚠️ The bot itself must be a **group admin** to mute or unmute anyone.
> ⚠️ The group owner and other admins cannot be muted (a QQ protocol restriction).
---
## 🎮 Commands
> The examples below assume `MUTE_AT_REQUIRED=true` (the default), so commands must @-mention the bot.
> If that option is off, or the group has `关闭at` set, sending the bare command text is enough.
---
### 🔇 Mute someone
**Syntax:** `禁言 @target [duration]`
The duration accepts all of the formats below; if omitted, the `MUTE_DEFAULT_MINUTES` value is used:
| Example | Meaning |
|---|---|
| `禁言 @猫猫` | Mute for the default duration (5 minutes by default) |
| `禁言 @猫猫 10` | Mute for 10 minutes |
| `禁言 @猫猫 10分钟` | Mute for 10 minutes |
| `禁言 @猫猫 1小时` | Mute for 1 hour |
| `禁言 @猫猫 90s` | Mute for 90 seconds (converted and rounded to whole minutes) |
| `禁言 @猫猫 2h` | Mute for 2 hours |
| `禁言 @甲 @乙 @丙 5分钟` | Mute several members at once |
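Duration strings like these can be normalized with a small parser. A hypothetical sketch that converts them to whole minutes (illustrative only, not the plugin's actual implementation):

```python
import re

def parse_duration_minutes(text, default_minutes=5):
    """Convert strings like '10', '10分钟', '1小时', '90s', '2h' to whole minutes."""
    if not text:
        return default_minutes
    match = re.fullmatch(r"(\d+)\s*(分钟|小时|min|m|h|s)?", text.strip())
    if not match:
        return default_minutes
    value, unit = int(match.group(1)), match.group(2)
    if unit in ("小时", "h"):
        return value * 60
    if unit == "s":
        return max(1, value // 60)  # seconds rounded down, at least 1 minute
    return value  # bare numbers and 分钟/min/m mean minutes
```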
---
### 🔊 Unmute
**Syntax:** `取消/解除/解禁 @target [@target2 ...]`
The three keywords are fully equivalent:
```
取消 @猫猫
解除 @猫猫 @狗狗
解禁 @猫猫
```
---
### 🔕 Mute all
**Syntax:** `全员禁言 [duration]`
| Example | Meaning |
|---|---|
| `全员禁言` | Permanent whole-group mute (until lifted manually) |
| `全员禁言 30分钟` | Whole-group mute, auto-lifted after 30 minutes |
| `全员禁言 2小时` | Whole-group mute, auto-lifted after 2 hours |
| `全体禁言 1h` | Same as above; `全体` and `全员` are interchangeable |
---
### 📢 Unmute all
Any of the following commands works; they are fully equivalent:
```
解禁全员 取消全员 解除全员
解禁全体 取消全体 解除全体
全员解禁 全体解禁
关闭全员禁言 解除全员禁言
```
> 💡 This command simultaneously: turns off the whole-group mute, lifts all individual mutes, and cancels all pending scheduled mutes.
---
### 🎲 Mute me
**Syntax:** `禁我`
Randomly draws one of the durations configured in `MUTE_SELF_OPTIONS` and applies it:
- The default options are `[1, 3, 5, 0]`, each with a 25% chance; drawing `0` means no mute this time
- Anyone may use it; no admin permission required
- Admins cannot be muted due to QQ protocol restrictions and will receive a notice instead
---
### ⏰ Scheduled mutes
Append a time argument to the mute command to create a scheduled task. Supported time formats:
| Example | Meaning |
|---|---|
| `禁言 @猫猫 14:30 15:30` | Mute from 14:30 to 15:30 (1 hour) |
| `禁言 @猫猫 14:30~15:30` | Same as above, tilde notation |
| `禁言 @猫猫 14:30 1小时` | Start at 14:30, mute for 1 hour |
| `禁言 @猫猫 14:30 30分钟` | Start at 14:30, mute for 30 minutes |
| `禁言 @猫猫 14:30` | Start at 14:30, use the default duration |
> ⚠️ All scheduled tasks use **Beijing time (Asia/Shanghai)**.
> ⚠️ If the given time has already passed today, the task is automatically scheduled for the same time tomorrow.
> ⚠️ Scheduled tasks are **lost when the bot restarts**; stored mute state records are unaffected.
---
### 📊 Status view
**Syntax:** `查看状态`
Shows, for the current group:
- The @ trigger mode (@ required / not required)
- The whole-group mute status and remaining time
- The list of currently muted members and their remaining times
- The list of pending scheduled tasks
---
### 📌 @ toggle
**Prerequisite: the bot must be @-mentioned, and the sender must be an admin.**
```
@机器人 开启at → enables "commands require @" mode for this group
@机器人 关闭at → disables it; bare command text will trigger
```
> 💡 The @ toggle is a **per-group** setting and takes precedence over the global `MUTE_AT_REQUIRED` option.
---
### 📖 Help
**Syntax:** `帮助`, `help`, or `菜单`
The bot replies with the current group's trigger-mode info and a command cheat sheet.
---
## 💾 Data storage
The plugin persists its data through `nonebot-plugin-localstore`; the files live in the plugin's dedicated directory managed by localstore:
```
<data_dir>/nonebot_plugin_mute_cat/
├── at_overrides.json   # per-group @ toggle settings
└── group_states.json   # per-group mute state and task records
```
---
## ⚠️ Known limitations
- Only the **OneBot V11** adapter is supported (e.g. LLOneBot, Lagrange, NapCat)
- The bot needs **group admin** rights to mute / unmute
- The group owner and other admins cannot be muted (QQ protocol restriction, not a bug)
- Scheduled tasks are **lost on restart**; orphaned task records are cleaned up automatically at startup
- A single QQ mute is capped at 30 days (43200 minutes)
---
## 🔧 Dependencies
| Dependency | Minimum version | Purpose |
|---|---|---|
| nonebot2 | 2.2.0 | Core framework, provides the `nonebot.compat` layer |
| nonebot-adapter-onebot | 2.4.0 | OneBot V11 adapter |
| nonebot-plugin-apscheduler | 0.4.0 | Scheduled-task support |
| nonebot-plugin-localstore | 0.6.0 | Local persistent storage |
The plugin is compatible with both **pydantic v1** and **pydantic v2** via `nonebot.compat.ConfigDict`.
---
## ❓ FAQ
**Q: The bot does not respond?**
A: Check the trigger conditions. By default the bot must be @-mentioned; send `帮助` to see the current group's trigger mode.
**Q: Muting fails with an insufficient-permission message?**
A: Grant the bot admin rights in the QQ group first.
**Q: Can the group owner / admins be muted?**
A: No; this is a QQ protocol restriction.
**Q: Scheduled mutes disappeared after a restart?**
A: APScheduler jobs are not persisted and are lost on restart. This is a known limitation; orphaned task records are cleaned up automatically at startup. If persistence matters, consider a persistent job store (e.g. SQLAlchemy).
**Q: Where are the data files?**
A: Managed by nonebot-plugin-localstore; the path depends on your system and project configuration. Use `store.get_plugin_data_file(...)` to resolve the actual path.
---
## 📄 License
This project is released under the [MIT](LICENSE) license; free use and derivative works are welcome.
---
## 💖 Acknowledgements
Thanks to everyone who uses and supports this plugin, and to the NoneBot2 community for an excellent framework and ecosystem~
| text/markdown | null | binglang <lianbingyu_v2@163.com> | null | null | MIT | nonebot, nonebot2, qq, mute, ban, 禁言 | [
"Development Status :: 5 - Production/Stable",
"Intended Audience :: Developers",
"Intended Audience :: End Users/Desktop",
"License :: OSI Approved :: MIT License",
"Natural Language :: Chinese (Simplified)",
"Operating System :: OS Independent",
"Programming Language :: Python :: 3",
"Programming La... | [] | null | null | <4.0,>=3.9 | [] | [] | [] | [
"nonebot2>=2.2.0",
"nonebot-adapter-onebot>=2.4.0",
"nonebot-plugin-apscheduler>=0.4.0",
"nonebot-plugin-localstore>=0.6.0"
] | [] | [] | [] | [
"homepage, https://github.com/binglang001/nonebot-plugin-mute-cat",
"repository, https://github.com/binglang001/nonebot-plugin-mute-cat",
"documentation, https://github.com/binglang001/nonebot-plugin-mute-cat#readme"
] | twine/6.2.0 CPython/3.11.0 | 2026-02-19T14:08:43.452174 | nonebot_plugin_mute_cat-1.2.2.tar.gz | 21,234 | f6/0c/9ef3247f03fabacbc34f0bb2ba46b2c3d3c386f20dedfe67c9c9bcb725a0/nonebot_plugin_mute_cat-1.2.2.tar.gz | source | sdist | null | false | e8ceb58da77a9511c3d5c00d9f8d9a1c | ecdf52361d8e3d95e8a804f40bb5cf935f88ce746d76f2124bd4250d48967e01 | f60c9ef3247f03fabacbc34f0bb2ba46b2c3d3c386f20dedfe67c9c9bcb725a0 | null | [
"LICENSE"
] | 231 |
2.4 | boj-ts-api | 0.2.0 | Generic Python client for the Bank of Japan Time-Series Statistics API | # boj-ts-api
Generic Python client for the [Bank of Japan Time-Series Statistics API](https://www.stat-search.boj.or.jp/).
[](https://pypi.org/project/boj-ts-api/)
[](https://opensource.org/licenses/MIT)
[](https://www.python.org/downloads/)
Low-level, typed wrapper around the three BOJ API endpoints. For a beginner-friendly interface with domain wrappers and enum-driven filtering, see [pyboj](https://github.com/obichan117/pyboj).
## Installation
```bash
pip install boj-ts-api
```
## Usage
### Fetch Data by Series Code
```python
from boj_ts_api import Client, Lang
with Client(lang=Lang.EN) as client:
resp = client.get_data_code(
db="CO",
code="TK99F1000601GCQ01000",
start_date="202401",
end_date="202404",
)
for series in resp.RESULTSET:
print(series.SERIES_CODE, series.VALUES.VALUES)
```
### Auto-Pagination
```python
with Client(lang=Lang.EN) as client:
for series in client.iter_data_code(db="CO", code="TK99F1000601GCQ01000"):
print(series.SERIES_CODE, len(series.VALUES.SURVEY_DATES), "data points")
```
### Fetch Data by Layer
```python
from boj_ts_api import Client, Frequency, Lang
with Client(lang=Lang.EN) as client:
resp = client.get_data_layer(db="FM08", frequency=Frequency.D, layer="1,1")
for series in resp.RESULTSET:
print(series.SERIES_CODE, series.VALUES.VALUES[:5])
```
### Metadata
```python
with Client(lang=Lang.EN) as client:
meta = client.get_metadata(db="FM08")
for rec in meta.RESULTSET[:3]:
print(rec.SERIES_CODE, rec.FREQUENCY, rec.NAME_OF_TIME_SERIES)
```
### Async Client
```python
import asyncio
from boj_ts_api import AsyncClient, Lang
async def main():
async with AsyncClient(lang=Lang.EN) as client:
resp = await client.get_data_code(db="CO", code="TK99F1000601GCQ01000")
print(resp.RESULTSET[0].SERIES_CODE)
asyncio.run(main())
```
### CSV Output
```python
with Client(lang=Lang.EN) as client:
csv_text = client.get_data_code_csv(db="CO", code="TK99F1000601GCQ01000")
print(csv_text)
```
## API Surface
| Method | Description |
|--------|-------------|
| `get_data_code()` | Fetch time-series data by series code(s) |
| `iter_data_code()` | Auto-paginating iterator over series results |
| `get_data_code_csv()` | Fetch data as raw CSV text |
| `get_data_layer()` | Fetch data by hierarchy layer |
| `iter_data_layer()` | Auto-paginating iterator for layer data |
| `get_data_layer_csv()` | Fetch layer data as raw CSV text |
| `get_metadata()` | Fetch series metadata for a database |
| `get_metadata_csv()` | Fetch metadata as raw CSV text |
Both `Client` (sync) and `AsyncClient` (async) expose the same methods.
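The parallel `SURVEY_DATES` and `VALUES` lists used in the examples above pair naturally into a date-to-value mapping. A sketch, demonstrated here against a stub object mirroring that response shape:

```python
from types import SimpleNamespace

def series_to_dict(series):
    """Pair SURVEY_DATES with VALUES from one RESULTSET entry."""
    return dict(zip(series.VALUES.SURVEY_DATES, series.VALUES.VALUES))

# Stub mirroring the response shape used in the examples above
stub = SimpleNamespace(
    VALUES=SimpleNamespace(SURVEY_DATES=["202401", "202402"], VALUES=[100.5, 101.2])
)
print(series_to_dict(stub))  # {'202401': 100.5, '202402': 101.2}
```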
## License
MIT
| text/markdown | obichan117 | null | null | null | null | api, bank-of-japan, boj, statistics, time-series | [
"Development Status :: 4 - Beta",
"Intended Audience :: Developers",
"Intended Audience :: Financial and Insurance Industry",
"License :: OSI Approved :: MIT License",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programmin... | [] | null | null | >=3.10 | [] | [] | [] | [
"httpx>=0.25",
"pydantic>=2.0"
] | [] | [] | [] | [
"Repository, https://github.com/obichan117/pyboj"
] | twine/6.2.0 CPython/3.11.0 | 2026-02-19T14:07:17.064256 | boj_ts_api-0.2.0.tar.gz | 9,400 | a2/ae/f659760476b91756c70ddd145fc212619087ea81ae52827ec00c58eb4ca3/boj_ts_api-0.2.0.tar.gz | source | sdist | null | false | 3c9ae76c9b25d221ed0531ecbd46087f | ee1665d8cd53ac8ae690e7f4e986b9a0ae043ecc1f4b4da8c94ce39e210208cd | a2aef659760476b91756c70ddd145fc212619087ea81ae52827ec00c58eb4ca3 | MIT | [] | 264 |
2.3 | brixo | 0.1.6 | The Brixo Python SDK | # Brixo Python SDK
The **Brixo Python SDK** lets you instrument AI agents and capture high-quality interaction traces for analysis in the Brixo platform. It is designed to be lightweight, explicit, and easy to integrate into existing agent code.
---
## Compatibility
- Python 3.10+
---
## Security advisory (January 27, 2026)
We currently ignore `CVE-2026-0994` in `pip-audit` because there is no fixed
protobuf release yet. The advisory reports the issue affects all protobuf
versions up to 6.33.4, and PyPI shows 6.33.4 as the latest release (uploaded
January 12, 2026). We will remove the ignore once a patched release is
available.
References:
- https://advisories.gitlab.com/pkg/pypi/protobuf/CVE-2026-0994/
- https://pypi.org/pypi/protobuf
---
## Installation
Install the SDK from PyPI:
```bash
pip install brixo
```
---
## Authentication
### Create an API key
1. Create a Brixo account @ https://app.brixo.com/sign_up
2. Once logged in, generate a new API key from the instructions page
3. Export it as an environment variable:
```bash
export BRIXO_API_KEY=<your_api_key>
```
The Brixo SDK will automatically read this value at runtime.
---
## Quickstart (copy/paste)
Create a minimal file that instruments a single interaction:
```bash
export BRIXO_API_KEY=<your_api_key>
cat > main.py <<'PY'
from brixo import Brixo
Brixo.init(
app_name="my-app",
environment="development",
agent_version="1.0.0",
)
@Brixo.interaction("Hello World Interaction")
def handle_user_input(user_input: str):
Brixo.begin_context(
user={"id": "1", "name": "Jane Doe"},
input=user_input,
)
response = f"You said: {user_input}"
Brixo.end_context(output=response)
print(response)
def main():
while True:
user_input = input("User: ")
handle_user_input(user_input)
if __name__ == "__main__":
main()
PY
python main.py
```
What you should see:
- Your console echoes responses like `You said: <input>`
- A new trace appears in Brixo Live View shortly after each interaction: https://app.brixo.com/traces/live
---
## Instrumentation Quickstart
The typical flow is:
1. Initialize Brixo once at application startup
2. Wrap each *user interaction* with `@Brixo.interaction`
3. Attach context (user, customer, session, input, output)
4. Let the interaction finish so traces can be flushed
---
## Example
Below is a minimal but complete example that instruments a single agent interaction loop.
Create a file called `main.py`:
```python
# --- Brixo SDK import ---
# Import the Brixo SDK so we can instrument and send interaction traces to Brixo.
from brixo import Brixo
from my_agent import agent
# --- Brixo interaction boundary ---
# Mark ONE bounded user interaction (one request -> one response) so Brixo can group
# spans/attributes into a single trace and flush it when this function returns.
@Brixo.interaction("Main Agent Execution")
def handle_user_input(user_input: str):
# --- Brixo context start ---
# Attach contextual metadata to the current trace
Brixo.begin_context(
account={"id": "1", "name": "ACME, Inc."},
user={"id": "1", "name": "John Doe"},
session_id="session-123",
metadata={"foo": "bar"},
input=user_input,
)
response = agent.invoke(
{"messages": [{"role": "user", "content": user_input}]}
)
handle_agent_response(response)
def handle_agent_response(response):
"""Extracts the final agent output and updates the trace."""
final_text = response["messages"][-1].content
# --- Brixo context update ---
# Add/update attributes after the agent has produced output.
Brixo.end_context(output=final_text)
def main():
# --- Brixo SDK initialization ---
# Initialize once at startup, before any instrumented code runs.
Brixo.init(
app_name="my-app",
environment="production",
agent_version="1.0.0",
)
while True:
user_input = input("User: ")
handle_user_input(user_input)
if __name__ == "__main__":
main()
```
Run the example:
```bash
python main.py
```
---
## Key Concepts
### Concepts at a glance
- **Interaction boundary**: one user request -> one response
- **Context lifecycle**: `begin_context` early, `update_context` for mid-flight, `end_context` to close
- **Flush timing**: traces are exported when the interaction function returns
### `Brixo.init(...)`
Initializes the SDK. Call **once** at application startup.
Arguments and value formats:
- `app_name`: `str` logical name for your application or agent; cannot be `None`
- `environment`: `str` such as `development`, `staging`, `production`; cannot be `None`
- `api_key`: `str` or `None`; defaults to `BRIXO_API_KEY`
- `agent_version`: `str` or `None` agent version identifier
- `filter_openinference_spans`: `bool` or `None` to drop OpenInference spans on export
- `filter_traceloop_spans`: `bool` or `None` to drop Traceloop spans on export
Usage:
```python
Brixo.init(
app_name="my-app",
environment="production",
api_key="brx_123456",
agent_version="1.0.0",
filter_openinference_spans=True,
filter_traceloop_spans=True,
)
```
---
### `@Brixo.interaction(name)`
Marks a single, bounded **user interaction**.
Arguments and value formats:
- `name`: `str` or `None` descriptive interaction name
Usage:
```python
@Brixo.interaction("Main Agent Execution")
def handle_user_input(user_input: str):
...
```
Guidelines:
- Use one interaction per user request
- The function must terminate (no infinite loops)
- Choose descriptive names - they improve trace readability
---
### `Brixo.begin_context(...)`
Attaches structured metadata to the current interaction trace.
Arguments and value formats:
- `account`: `dict` or `None` with any of: `id`, `name`, `logo_url`, `website_url` (all `str`)
- `user`: `dict` or `None` with any of: `id`, `name`, `email` (all `str`)
- `session_id`: `str` or `None` logical session identifier
- `metadata`: `dict` or `None` of arbitrary key/value data
- `input`: `str` or `None` raw user input
- `output`: `str` or `None` output if available at start
Usage:
```python
Brixo.begin_context(
account={
"id": "acct_123",
"name": "ACME, Inc.",
"logo_url": "https://example.com/logo.png",
"website_url": "https://acme.com",
},
user={
"id": "user_456",
"name": "Jane Doe",
"email": "jane@example.com",
},
session_id="session-123",
metadata={"plan": "pro", "feature": "search"},
input="Find me the latest quarterly report.",
output="",
)
```
---
### `Brixo.update_context(...)`
Adds or updates attributes **after** the interaction has started and leaves the interaction context open.
Arguments and value formats:
- `account`: `dict` or `None` with any of: `id`, `name`, `logo_url`, `website_url` (all `str`)
- `user`: `dict` or `None` with any of: `id`, `name`, `email` (all `str`)
- `session_id`: `str` or `None` logical session identifier
- `metadata`: `dict` or `None` of arbitrary key/value data
- `input`: `str` or `None` raw user input
- `output`: `str` or `None` output or intermediate result
Usage:
```python
Brixo.update_context(
account={
"id": "acct_123",
"name": "ACME, Inc.",
"logo_url": "https://example.com/logo.png",
"website_url": "https://acme.com",
},
user={
"id": "user_456",
"name": "Jane Doe",
"email": "jane@example.com",
},
session_id="session-123",
metadata={"latency_ms": 1200, "tool": "search"},
input="Find me the latest quarterly report.",
output="Intermediate tool summary...",
)
```
Typical use cases:
- Derived metrics
- Tool results or summaries
---
### `Brixo.end_context(...)`
Adds or updates attributes **after** the interaction has started and then explicitly closes the interaction context.
Arguments and value formats:
- `account`: `dict` or `None` with any of: `id`, `name`, `logo_url`, `website_url` (all `str`)
- `user`: `dict` or `None` with any of: `id`, `name`, `email` (all `str`)
- `session_id`: `str` or `None` logical session identifier
- `metadata`: `dict` or `None` of arbitrary key/value data
- `input`: `str` or `None` raw user input
- `output`: `str` or `None` final agent output
Usage:
```python
Brixo.end_context(
account={
"id": "acct_123",
"name": "ACME, Inc.",
"logo_url": "https://example.com/logo.png",
"website_url": "https://acme.com",
},
user={
"id": "user_456",
"name": "Jane Doe",
"email": "jane@example.com",
},
session_id="session-123",
metadata={"feedback_score": 5},
input="Find me the latest quarterly report.",
output="Here is the latest quarterly report summary...",
)
```
Typical use cases:
- Final agent output
---
## Best Practices
- **One interaction = one user request**
- Keep interaction functions short and bounded
- Use descriptive interaction names
- Attach inputs early and outputs late
- Initialize Brixo early at startup; if you rely on auto-instrumentation, import those
libraries after `Brixo.init(...)`
---
## Troubleshooting
- **Missing traces**: Confirm `BRIXO_API_KEY` is set and that `Brixo.init(...)` runs before instrumented code.
- **Nothing in Live View**: Check https://app.brixo.com/traces/live and allow a few seconds after each interaction.
- **No internet or proxy issues**: Ensure your runtime can reach `app.brixo.com`.
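The first troubleshooting step above can be automated with a small preflight check. This is plain Python, assuming only the `BRIXO_API_KEY` variable named above:

```python
import os

def preflight(env=os.environ):
    """Return a list of problems that would explain missing traces."""
    problems = []
    if not env.get("BRIXO_API_KEY", "").strip():
        problems.append("BRIXO_API_KEY is not set")
    return problems

# Run this before Brixo.init(...) and fail fast if anything is reported.
```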
---
## Support
If you have questions or run into issues:
- Check the Brixo [Live View](https://app.brixo.com/traces/live) for trace visibility
- Reach out to the Brixo team at support@brixo.com
Happy instrumenting 🚀
| text/markdown | Brixo Engineering | Brixo Engineering <sdk@brixo.com> | null | null | null | null | [] | [] | null | null | <4,>=3.10.0 | [] | [] | [] | [
"openinference-instrumentation-langchain>=0.1.55",
"openinference-instrumentation-openai>=0.1.41",
"opentelemetry-exporter-otlp>=1.39.1",
"opentelemetry-sdk>=1.39.1",
"traceloop-sdk>=0.50.1",
"bandit>=1.7.7; extra == \"dev\"",
"mypy>=1.10.0; extra == \"dev\"",
"pip-audit>=2.7.2; extra == \"dev\"",
"... | [] | [] | [] | [] | uv/0.10.4 {"installer":{"name":"uv","version":"0.10.4","subcommand":["publish"]},"python":null,"implementation":{"name":null,"version":null},"distro":{"name":"Ubuntu","version":"24.04","id":"noble","libc":null},"system":{"name":null,"release":null},"cpu":null,"openssl_version":null,"setuptools_version":null,"rustc_version":null,"ci":true} | 2026-02-19T14:07:16.229922 | brixo-0.1.6-py3-none-any.whl | 10,225 | db/f6/fa8e535a6edfc38910501908a9e2ea112c491732afb0f17d1c428cfb7637/brixo-0.1.6-py3-none-any.whl | py3 | bdist_wheel | null | false | e50d140fb77a3de12187f90f34763743 | ea61b9cb993015a5adfd3e88bc5d2d6fd3628841b31839e51f915c14b6d6fff6 | dbf6fa8e535a6edfc38910501908a9e2ea112c491732afb0f17d1c428cfb7637 | null | [] | 242 |
2.4 | hubai-sdk | 0.2.1 | SDK for HubAI. | # HubAI SDK
Python SDK for interacting with Luxonis HubAI - a platform for managing, converting, and deploying machine learning models for Luxonis OAK devices. If you want to convert models locally, check out [modelconverter](https://github.com/luxonis/modelconverter) instead.
## ✨ Features
- **Model Management**: Create, list, update, and delete HubAI models
- **Variant Management**: Manage HubAI model variants and versions
- **Instance Management**: Create and manage HubAI model instances
- **Model Conversion**: Convert HubAI models to various formats including:
- RVC2
- RVC3
- RVC4
- Hailo
- **CLI Tools**: Command-line interface for all operations
- **Type Safety**: Full type hints for better developer experience
## 📦 Installation
Install the package using pip:
```bash
pip install hubai-sdk
```
Or install from source:
```bash
git clone https://github.com/luxonis/hubai-sdk.git
cd hubai-sdk
pip install -e .
```
## 📋 Requirements
- Python 3.10 or higher
- Valid Luxonis HubAI API key - you can get it from [HubAI Team Settings](https://hub.luxonis.com/team-settings)
## 🔐 Authentication
### Get Your API Key
1. Visit [HubAI Team Settings](https://hub.luxonis.com/team-settings)
1. Generate or copy your API key
### Set API Key
You can authenticate in several ways:
**Option 1: Environment Variable**
```bash
export HUBAI_API_KEY="your-api-key-here"
```
This stores the API key in an environment variable, which the SDK picks up automatically. It is valid for the current shell session only.
**Option 2: CLI Login**
```bash
hubai login
```
This opens a browser to generate a new API key and prompts you to enter it, after which it is stored securely. Use `hubai login --relogin` to log in again with a different API key, or `hubai logout` to log out.
**Option 3: Pass API Key Directly**
```python
from hubai_sdk import HubAIClient
client = HubAIClient(api_key="your-api-key-here")
```
## 🚀 Quick Start
### Python SDK Usage
```python
import os
from hubai_sdk import HubAIClient
# Initialize client
api_key = os.getenv("HUBAI_API_KEY")
client = HubAIClient(api_key=api_key)
# List all models
models = client.models.list_models()
print(f"Found {len(models)} models")
# Get a specific model
model = client.models.get_model(models[0].id)
print(f"Model: {model.name}")
# Convert a model to RVC2 format
response = client.convert.RVC2(
path="/path/to/your/model.onnx",
name="my-converted-model"
)
print(f"Converted model downloaded to: {response.downloaded_path}")
```
## 🛠️ Services
The SDK provides four main services accessible through the `HubAIClient`:
### Using Slugs from HubAI
You can copy slugs directly from the HubAI platform and use them as identifiers for models and variants in the SDK. For example:
```bash
hubai model info luxonis/yolov6-nano:r2-coco-512x384
```
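For reference, a slug of the form `team/model:variant` (as in `luxonis/yolov6-nano:r2-coco-512x384`) can be split into its parts with a few lines of plain Python. This is a hypothetical helper for illustration, not part of the SDK:

```python
def parse_slug(slug: str) -> dict:
    """Split a HubAI slug 'team/model:variant' into its components."""
    team, _, rest = slug.partition("/")
    model, _, variant = rest.partition(":")
    # The variant part is optional; a bare 'team/model' slug yields None.
    return {"team": team, "model": model, "variant": variant or None}
```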
### 🤖 Models Service (`client.models`)
Manage ML models in HubAI.
```python
# List models
models = client.models.list_models(
tasks=["OBJECT_DETECTION"],
is_public=True,
limit=10
)
# Get model by ID or slug (e.g., "luxonis/yolov6-nano:r2-coco-512x384")
model = client.models.get_model("model-id-or-slug")
# Create a new model
new_model = client.models.create_model(
name="my-model",
license_type="MIT",
is_public=False,
description="My awesome model",
tasks=["OBJECT_DETECTION"]
)
# Update a model
updated_model = client.models.update_model(
new_model.id,
license_type="Apache 2.0",
description="Updated description"
)
# Delete a model
client.models.delete_model(new_model.id)
```
### 🔄 Variants Service (`client.variants`)
Manage model variants and versions.
```python
# List variants (optionally filtered by model)
variants = client.variants.list_variants(model_id="model-id")
# Get variant by ID or slug (e.g., "luxonis/yolov6-nano:r2-coco-512x384")
variant = client.variants.get_variant("variant-id-or-slug")
# Create a new variant
new_variant = client.variants.create_variant(
name="my-variant",
model_id="model-id",
variant_version="1.0.0",
description="First version"
)
# Delete a variant
client.variants.delete_variant("variant-id")
```
### 📦 Instances Service (`client.instances`)
Manage model instances (specific configurations of variants).
```python
# The ModelType import path is assumed here, mirroring the Target import shown later
from hubai_sdk.utils.types import ModelType

# Create an instance
instance = client.instances.create_instance(
name="my-instance",
variant_id="variant-id",
model_type=ModelType.ONNX,
input_shape=[1, 3, 288, 512]
)
# Upload a file to instance
client.instances.upload_file("/path/to/nn_archive.tar.xz", instance.id)
# Get instance config
config = client.instances.get_config(instance.id)
# Download instance
downloaded_path = client.instances.download_instance(instance.id)
# Delete instance
client.instances.delete_instance(instance.id)
```
### ⚡ Conversion Service (`client.convert`)
Convert models to various formats.
#### RVC2 Conversion
Convert models for Luxonis OAK devices:
```python
response = client.convert.RVC2(
path="/path/to/model.onnx",
name="converted-model",
compress_to_fp16=True,
number_of_shaves=8,
superblob=True
)
```
#### RVC4 Conversion
Convert models to Qualcomm SNPE format:
```python
response = client.convert.RVC4(
path="/path/to/model.onnx",
name="converted-model",
quantization_mode="INT8_STANDARD",
use_per_channel_quantization=True,
htp_socs=["sm8550"]
)
```
#### Generic Conversion
Convert to any supported target:
```python
from hubai_sdk.utils.types import Target
response = client.convert.convert(
target=Target.RVC2, # or Target.RVC4, Target.HAILO, etc.
path="/path/to/model.onnx",
name="converted-model",
quantization_mode="INT8_STANDARD",
input_shape=[1, 3, 288, 512]
)
```
## 💻 CLI Usage
The SDK also provides a command-line interface:
```bash
# Login
hubai login
# List models
hubai model ls
# Get model info
hubai model info <model-id-or-slug>
# Create a model
hubai model create "my-model" --license-type MIT --tasks OBJECT_DETECTION
# Convert a model
hubai convert RVC2 --path /path/to/model.onnx --name "my-model"
# List variants
hubai variant ls
# List instances
hubai instance ls
```
For more CLI options, use the `--help` flag:
```bash
hubai --help
hubai model --help
hubai convert --help
```
## 📚 Examples
See the `examples/` directory for more detailed usage examples:
- **`examples/models.py`**: Model management operations
- **`examples/variants.py`**: Variant management operations
- **`examples/instances.py`**: Instance management and file operations
- **`examples/conversion/`**: Model conversion examples for different formats
## Migration from `blobconverter`
[BlobConverter](https://pypi.org/project/blobconverter/) is our previous library for converting models to the BLOB format used by `RVC2` and `RVC3` devices. It is being replaced by `modelconverter` and the `HubAI SDK`, which will eventually become the only supported ways of converting models.
`blobconverter` is still available and can be used for conversion, but we recommend the `HubAI SDK` for new projects. The API of the `HubAI SDK` is similar to that of `blobconverter`, but there are some differences in the parameters and in how the conversion is performed.
`blobconverter` offers several functions for converting models from different frameworks, such as `from_onnx`, `from_openvino`, and `from_tf`. These functions are now replaced by the `convert.RVC2` (or `convert.RVC3`) function in `HubAI SDK`, which takes a single argument `path` that specifies the path to the model file.
The following table shows the mapping between `blobconverter` and `HubAI SDK` parameters: the first column lists the `blobconverter` parameter, the second its `HubAI SDK` equivalent, and the third contains additional notes.
| `blobconverter` | `HubAI SDK` | Notes |
| ------------------ | ------------------- | --------------------------------------------------------------------------------------------------------- |
| `model` | `path` | The model file path. |
| `xml` | `path` | The XML file path. Only for conversion from OpenVINO IR |
| `bin` | `opts["input_bin"]` | The BIN file path. Only for conversion from OpenVINO IR. See the [example](#conversion-from-openvino-ir). |
| `version` | `tool_version` | The version of the conversion tool. |
| `data_type` | `quantization_mode` | The quantization mode of the model. |
| `shaves` | `number_of_shaves` | The number of shaves to use. |
| `optimizer_params` | `mo_args` | The arguments to pass to the model optimizer. |
| `compile_params` | `compile_tool_args` | The arguments to pass to the BLOB compiler. |
By default, the `HubAI SDK` has `superblob` enabled, which is only supported on DepthAI v3. To convert a model to the legacy RVC2 format (blob), pass `superblob=False` to the `convert.RVC2` function.
### Simple Conversion
**Simple ONNX conversion using `blobconverter`**
```python
import blobconverter
blob = blobconverter.from_onnx(
model="resnet18.onnx",
)
```
**Equivalent code using `HubAI SDK`**
```python
response = client.convert.RVC2(
path="resnet18.onnx",
)
blob = response.downloaded_path
```
### Conversion from OpenVINO IR
**`blobconverter` example**
```python
import blobconverter
blob = blobconverter.from_openvino(
xml="resnet18.xml",
bin="resnet18.bin",
)
```
**`HubAI SDK` example**
```python
# When the XML and BIN files are at the same location,
# only the XML needs to be specified
response = client.convert.RVC2("resnet18.xml")
blob = response.downloaded_path
# Otherwise, the BIN file can be specified using
# the `opts` parameter
response = client.convert.RVC2(
path="resnet18.xml",
opts={
"input_bin": "resnet18.bin",
}
)
blob = response.downloaded_path
```
### Conversion from `tflite`
> [!WARNING]
> `HubAI` online conversion does not support conversion from frozen PB files, only TFLITE files are supported.
**`blobconverter` example**
```python
import blobconverter
blob = blobconverter.from_tf(
frozen_pb="resnet18.tflite",
)
```
**Equivalent code using `HubAI SDK`**
```python
response = client.convert.RVC2(
path="resnet18.tflite",
)
blob = response.downloaded_path
```
### Advanced Parameters
**`blobconverter.from_onnx` with advanced parameters**
```python
import blobconverter
blob = blobconverter.from_onnx(
model="resnet18.onnx",
data_type="FP16",
version="2021.4",
shaves=6,
optimizer_params=[
"--mean_values=[127.5,127.5,127.5]",
"--scale_values=[255,255,255]",
],
compile_params=["-ip U8"],
)
```
**Equivalent code using `HubAI SDK`**
```python
response = client.convert.RVC2(
path="resnet18.onnx",
quantization_mode="FP16_STANDARD",
tool_version="2021.4.0",
number_of_shaves=6,
mo_args=[
"mean_values=[127.5,127.5,127.5]",
"scale_values=[255,255,255]"
],
compile_tool_args=["-ip", "U8"],
)
blob = response.downloaded_path
```
### `Caffe` Conversion
Conversion from the `Caffe` framework is not supported.
## 📄 All Available Parameters
See the [All available parameters](docs/available_parameters.md) file for all available parameters during conversion.
## 🔨 Development
### Setup Development Environment
```bash
git clone https://github.com/luxonis/hubai-sdk.git
cd hubai-sdk
python -m venv venv
source venv/bin/activate # On Windows: venv\Scripts\activate
pip install -e ".[dev]"
```
## 🤝 Contributing
Contributions are welcome! Please feel free to submit a Pull Request. For major changes, please open an issue first to discuss what you would like to change.
## 💬 Support
- **Issues**: [GitHub Issues](https://github.com/luxonis/hubai-sdk/issues)
- **Email**: support@luxonis.com
- **Documentation**: [HubAI Platform](https://docs.luxonis.com)
## 🔗 Links
- **Repository**: [https://github.com/luxonis/hubai-sdk](https://github.com/luxonis/hubai-sdk)
- **HubAI Platform**: [https://hub.luxonis.com](https://hub.luxonis.com)
- **Luxonis**: [https://luxonis.com](https://luxonis.com)
| text/markdown | null | Luxonis <support@luxonis.com> | null | Luxonis <support@luxonis.com> | null | ml, onnx, openvino, nn, ai, embedded | [
"Development Status :: 4 - Beta",
"Programming Language :: Python :: 3.10",
"Topic :: Scientific/Engineering :: Artificial Intelligence",
"Topic :: Scientific/Engineering :: Image Processing",
"Topic :: Scientific/Engineering :: Image Recognition"
] | [] | null | null | >=3.10 | [] | [] | [] | [
"luxonis-ml[data,nn_archive]",
"keyring",
"requests",
"cyclopts",
"loguru",
"rich",
"packaging",
"onnx",
"psutil",
"pillow",
"posthog",
"openapi-python-client; extra == \"dev\"",
"pytest; extra == \"dev\"",
"pre-commit; extra == \"dev\""
] | [] | [] | [] | [
"repository, https://github.com/luxonis/hubai-sdk",
"issues, https://github.com/luxonis/hubai-sdk/issues"
] | twine/6.2.0 CPython/3.9.25 | 2026-02-19T14:07:07.478478 | hubai_sdk-0.2.1.tar.gz | 62,952 | 45/5e/3a086158597c325d80f9a9253f51c4293acecdf5fa2722579ce50d924fcd/hubai_sdk-0.2.1.tar.gz | source | sdist | null | false | ce0605550367109dee65495053ebecb6 | 60f7c767c7c9a4d8cdfc61b83ef10b6ba864a88fbb5488d0b4826ed0378321dd | 455e3a086158597c325d80f9a9253f51c4293acecdf5fa2722579ce50d924fcd | Apache-2.0 | [
"LICENSE"
] | 277 |
2.4 | pyclsp | 2.0.0 | Modular Two-Step Convex Optimization Estimator for Ill-Posed Problems | # CLSP — Convex Least Squares Programming
**Convex Least Squares Programming (CLSP)** is a two-step estimator for solving underdetermined, ill-posed, or structurally constrained least-squares problems. It combines pseudoinverse-based estimation with convex-programming correction (Lasso, Ridge, Elastic Net) to ensure numerical stability, constraint enforcement, and interpretability. The package also provides numerical stability analysis and CLSP-specific diagnostics, including partial R², normalized RMSE (NRMSE), Monte Carlo t-tests for mean NRMSE, and condition-number-based confidence bands. All calculations use numpy.float64 precision.
## Installation
```bash
pip install pyclsp
```
## Quick Example
```python
import numpy as np
from clsp import CLSP
# CMLS (RP), based on known stationary points for y = D @ x + e, x to be estimated
seed = 123456789
rng = np.random.default_rng(seed)
# sample (dataset)
k = 500 # number of observations in D
p = 6 # number of regressors
c = 1 # sum of coefficients
D = np.empty((k, p))
D[:, 0 ] = 1.0 # constant
D[:, 1:] = rng.normal(size=(k, p - 1)) # D.,j ~ N(0,1), 2 <= j <= p
b_true = rng.normal(size=p) # b_true ~ N(0,1)
b_true = (b_true / b_true.sum()) * c
e = rng.normal(size=(k, 1)) # e ~ N(0,1)
y = (D @ b_true).reshape(-1, 1) + e # y_t = D @ b_true + e_t
# model
b = np.vstack([
np.asarray([c]), # c (the sum of coefficients)
np.zeros((k - 2, 1)), # zeros
np.zeros((k - 1, 1)), # zeros
y # values of y_t
])
C = np.vstack([
np.ones((1, p)), # a row of ones
np.diff(D, n=2, axis=0), # the 2nd differences
np.diff(D, n=1, axis=0) # the 1st differences
])
S = np.block([
[np.zeros(( 1, k-2))], # a zero vector
[np.diag(np.sign(np.diff(y.ravel(), n=2)))], # a diagonal sign matrix
[np.zeros((k-1, k-2))], # a zero matrix
])
model = CLSP().solve(
problem="cmls", b=b, C=C, S=S, M=D,
r=1, # a solution without refinement
alpha=1.0 # a unique MNBLUE estimator
)
# results
print("true beta (x_M):")
print(np.round(np.asarray(b_true).flatten(), 4))
print("beta hat (x_M hat):")
print(np.round(model.x.flatten(), 4))
model.summary(display=True)
print(" Bootstrap t-test:")
for kw, val in model.ttest(sample_size=30, # NRMSE_partial sample
seed=seed, distribution="normal", # seed and distribution
partial=True).items():
print(f" {kw}: {float(val):.6f}")
```
## User Reference
For comprehensive information on the estimator’s capabilities, advanced configuration options, and implementation details, please refer to the docstrings provided in each of the individual .py source files. These docstrings contain complete descriptions of available methods, their parameters, expected input formats, and output structures.
### The `CLSP` Class
```python
self.__init__()
```
Stores the solution, goodness-of-fit statistics, and ancillary parameters.
The class has three core methods: `solve()`, `corr()`, and `ttest()`.
**Selected attributes:**
`self.A` : *np.ndarray*<br>
design matrix `A` = [`C` | `S`; `M` | `Q`], where `Q` is either a zero matrix or *S_residual*.
`self.b` : *np.ndarray*<br>
vector of the right-hand side.
`self.zhat` : *np.ndarray*<br>
vector of the first-step estimate.
`self.r` : *int*<br>
number of refinement iterations performed in the first step.
`self.z` : *np.ndarray*<br>
vector of the final solution. If the second step is disabled, it equals `self.zhat`.
`self.x` : *np.ndarray*<br>
`m` x `p` matrix or vector containing the variable component of `z`.
`self.y` : *np.ndarray*<br>
vector containing the slack component of `z`.
`self.kappaC` : *float*<br>
spectral κ() for *C_canon*.
`self.kappaB` : *float*<br>
spectral κ() for *B* = *C_canon^+*`A`.
`self.kappaA` : *float*<br>
spectral κ() for `A`.
`self.rmsa` : *float*<br>
total root mean square alignment (RMSA).
`self.r2_partial` : *float*<br>
R^2 for the `M` block in `A`.
`self.nrmse` : *float*<br>
root mean square error calculated from `A`, normalized by the standard deviation (NRMSE).
`self.nrmse_partial` : *float*<br>
root mean square error calculated from the `M` block in `A`, normalized by the standard deviation (NRMSE).
`self.z_lower` : *np.ndarray*<br>
lower bound of the diagnostic interval (confidence band) based on κ(`A`).
`self.z_upper` : *np.ndarray*<br>
upper bound of the diagnostic interval (confidence band) based on κ(`A`).
`self.x_lower` : *np.ndarray*<br>
lower bound of the diagnostic interval (confidence band) based on κ(`A`).
`self.x_upper` : *np.ndarray*<br>
upper bound of the diagnostic interval (confidence band) based on κ(`A`).
`self.y_lower` : *np.ndarray*<br>
lower bound of the diagnostic interval (confidence band) based on κ(`A`).
`self.y_upper` : *np.ndarray*<br>
upper bound of the diagnostic interval (confidence band) based on κ(`A`).
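As a point of reference, "RMSE normalized by the standard deviation" for a vector of fitted values can be computed as below. This is a generic sketch of the statistic; the package's exact convention (e.g. which block of `A` enters the residual) may differ:

```python
import math

def nrmse(y_true, y_pred):
    """Root mean square error of y_pred vs y_true, divided by std(y_true)."""
    n = len(y_true)
    rmse = math.sqrt(sum((a - b) ** 2 for a, b in zip(y_true, y_pred)) / n)
    mean = sum(y_true) / n
    std = math.sqrt(sum((a - mean) ** 2 for a in y_true) / n)
    return rmse / std
```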
### Solver Method: `solve()`
```python
self.solve(problem, C, S, M, b, m, p, i, j, zero_diagonal, r, Z, rcond, tolerance, iteration_limit, final, alpha)
```
Solves the Convex Least Squares Programming (CLSP) problem.
This method performs a two-step estimation:<br>
(1) a pseudoinverse-based solution using either the Moore–Penrose or Bott–Duffin inverse, optionally iterated for refinement;<br>
(2) a convex-programming correction using Lasso, Ridge, or Elastic Net regularization (if enabled).
**Parameters:**
`problem` : *str*, optional<br>
Structural template for matrix construction. One of:<br>
- *'ap'* or *'tm'* : allocation (tabular) matrix problem (AP).<br>
- *'cmls'* or *'rp'* : constrained-model least squares (regression) problem.<br>
- anything else: general CLSP problem (user-defined `C` and/or `M`).
`C`, `S`, `M` : *np.ndarray* or *None*<br>
Blocks of the design matrix `A` = [`C` | `S`; `M` | `Q`]. If `C` and/or `M` are provided, the matrix `A` is constructed accordingly (please note that for AP, `C` is constructed automatically and known values are specified in `M`).
`b` : *np.ndarray* or *None*<br>
Right-hand side vector. Must have as many rows as `A` (please note that for AP, it should start with row sums). Required.
`m`, `p` : *int* or *None*<br>
Dimensions of X ∈ ℝ^{m×p}, relevant for AP.
`i`, `j` : *int*, default = *1*<br>
Grouping sizes for row and column sum constraints in AP.
`zero_diagonal` : *bool*, default = *False*<br>
If *True*, enforces structural zero diagonals.
`r` : *int*, default = *1*<br>
Number of refinement iterations for the pseudoinverse-based estimator.
`Z` : *np.ndarray* or *None*<br>
A symmetric idempotent matrix (projector) defining the subspace for Bott–Duffin pseudoinversion. If *None*, the identity matrix is used, reducing the Bott–Duffin inverse to the Moore–Penrose case.
`rcond` : *float* or *bool*, default = *False*<br>
Regularization parameter for the Moore-Penrose and Bott-Duffin inverses, providing numerically stable inversion and ensuring convergence of singular values.<br>
If True, an automatic tolerance equal to `tolerance` is applied. If set to a float, it specifies the relative cutoff below which small singular values are treated as zero.
`tolerance` : *float*, default = *square root of machine epsilon*<br>
Convergence tolerance for NRMSE change between refinement iterations.
`iteration_limit` : *int*, default = *50*<br>
Maximum number of iterations allowed in the refinement loop.
`final` : *bool*, default = *True*<br>
If *True*, a convex programming problem is solved to refine `zhat`. The resulting solution `z` minimizes a weighted L1/L2 norm around `zhat` subject to `Az` = `b`.
`alpha` : *float*, *list[float]* or *None*, default = *None*<br>
Regularization parameter (weight) in the final convex program:<br>
- `α = 0`: Lasso (L1 norm)<br>
- `α = 1`: Tikhonov Regularization/Ridge (L2 norm)<br>
- `0 < α < 1`: Elastic Net<br>
If a scalar float is provided, that value is used after clipping to [0, 1].<br>
If a list/iterable of floats is provided, each candidate is evaluated via a full solve, and the α with the smallest NRMSE is selected.<br>
If None, α is chosen, based on an error rule: α = min(1.0, NRMSE_{α = 0} / (NRMSE_{α = 0} + NRMSE_{α = 1} + tolerance))
`*args`, `**kwargs` : optional<br>
CVXPY arguments passed to the CVXPY solver.
**Returns:**
*self*
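The default α rule quoted in the `alpha` description above is simple enough to sketch numerically (plain Python, mirroring the formula term by term):

```python
def default_alpha(nrmse_lasso: float, nrmse_ridge: float, tolerance: float) -> float:
    """alpha = min(1.0, NRMSE_{a=0} / (NRMSE_{a=0} + NRMSE_{a=1} + tolerance))."""
    return min(1.0, nrmse_lasso / (nrmse_lasso + nrmse_ridge + tolerance))
```

Equal Lasso and Ridge errors give α = 0.5 (Elastic Net); a Ridge error near zero pushes α toward 1 (pure Ridge), with the `tolerance` term guarding against division by zero.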
### Correlogram Method: `corr()`
```python
self.corr(reset, threshold)
```
Computes the structural correlogram of the CLSP constraint part.
This method performs a row-deletion sensitivity analysis on the canonical constraint matrix `[C` | `S`], denoted as *C_canon*, and evaluates the marginal effect of each constraint row on numerical stability, angular alignment, and estimator sensitivity.
For each row `i` in `C_canon`, it computes:<br>
- The Root Mean Square Alignment (`RMSA_i`) with all other rows `j` ≠ `i`.<br>
- The change in condition numbers κ(`C`), κ(`B`), and κ(`A`) when row `i` is deleted.<br>
- The effect on estimation quality: changes in `nrmse`, `zhat`, `z`, and `x` when row `i` is deleted.
Additionally, it computes the total `rmsa` statistic across all rows, summarizing the overall angular alignment of *C_canon*.
**Parameters:**
`reset` : *bool*, default = *False*<br>
If *True*, forces recomputation of all diagnostic values (the results are preserved for eventual reproduction after the method is called).
`threshold` : *float*, default = *0*<br>
If positive, limits the output to constraints with `RMSA_i` ≥ `threshold`.
**Returns:**
*dict* of *list*<br>
A dictionary containing per-row diagnostic values:<br>
{<br>
`"constraint"` : `[1, 2, ..., k]`, # 1-based indices<br>
`"rmsa_i"` : list of `RMSA_i` values,<br>
`"rmsa_dkappaC"` : list of Δκ(`C`) after deleting row `i`,<br>
`"rmsa_dkappaB"` : list of Δκ(`B`) after deleting row `i`,<br>
`"rmsa_dkappaA"` : list of Δκ(`A`) after deleting row `i`,<br>
`"rmsa_dnrmse"` : list of Δ`nrmse` after deleting row `i`,<br>
`"rmsa_dzhat"` : list of Δ`zhat` after deleting row `i`,<br>
`"rmsa_dz"` : list of Δ`z` after deleting row `i`,<br>
`"rmsa_dx"` : list of Δ`x` after deleting row `i`,<br>
}
### T-Test Method: `ttest()`
```python
self.ttest(reset, sample_size, seed, distribution, partial, simulate)
```
Performs bootstrap or Monte Carlo t-tests on the NRMSE statistic from the CLSP estimator.
This function either (a) resamples residuals via a nonparametric bootstrap to generate an empirical NRMSE sample, or (b) produces synthetic right-hand side vectors `b` from a user-defined or default distribution and re-estimates the model. It tests whether the observed NRMSE significantly deviates from the null distribution of resampled or simulated NRMSE values.
**Parameters:**
`reset` : *bool*, default = *False*<br>
If *True*, forces recomputation of the NRMSE null distribution (under H₀) (the results are preserved for eventual reproduction after the method is called).
`sample_size` : *int*, default = *50*<br>
Size of the Monte Carlo simulated sample under H₀.
`seed` : *int* or *None*, optional<br>
Optional random seed to override the default.
`distribution` : *str* or *None*, default = *'normal'*<br>
Distribution for generating simulated `b` vectors. One of (standard): *'normal'*, *'uniform'*, or *'laplace'*.
`partial` : *bool*, default = *False*<br>
If True, runs the t-test on the partial NRMSE: during simulation, the C-block entries are preserved and the M-block entries are simulated.
`simulate` : *bool*, default = *False*<br>
If True, performs a parametric Monte Carlo simulation by generating synthetic right-hand side vectors `b`. If False (default), executes a nonparametric bootstrap procedure on residuals without re-estimation.
**Returns:**
*dict*<br>
Dictionary with test results and null distribution statistics:<br>
{<br>
`'p_one_left'` : P(nrmse ≤ null mean),<br>
`'p_one_right'` : P(nrmse ≥ null mean),<br>
`'p_two_sided'` : 2-sided t-test p-value,<br>
`'nrmse'` : observed value,<br>
`'mean_null'` : mean of the null distribution (under H₀),<br>
`'std_null'` : standard deviation of the null distribution (under H₀)<br>
}
### Summary Method: `summarize()` or `summary()`
```python
self.summarize(display)
self.summary(display)
```
Returns or prints a summary for the CLSP estimator.
**Parameters:**
`display` : *bool*, default = *False*<br>
If True, prints the summary instead of returning a dictionary.
**Returns:**
*dict*<br>
Dictionary of estimator configuration, numerical stability, and goodness of fit statistics:<br>
{<br>
`'inverse'` : type of generalized inverse used ('Bott-Duffin' or 'Moore-Penrose'),<br>
...<br>
`'final'` : boolean flag indicating whether the second step was present,<br>
...<br>
`'rmsa'` : total RMSA,<br>
...<br>
`'r2_partial'` : coefficient of determination for the M block within A,<br>
...<br>
}
## Bibliography
Bolotov, I. (2025). CLSP: Linear Algebra Foundations of a Modular Two-Step Convex Optimization-Based Estimator for Ill-Posed Problems. *Mathematics*, *13*(21), 3476. [https://doi.org/10.3390/math13213476](https://doi.org/10.3390/math13213476)
## License
MIT License — see the [LICENSE](LICENSE) file.
| text/markdown | null | The Economist <29724411+econcz@users.noreply.github.com> | null | null | null | estimators, convex-optimization, least-squares, generalized-inverse, regularization | [
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Operating System :: OS Independent",
"Topic :: Scientific/Engineering :: Mathematics",
"Topic :: Scientific/Engineering :: Information Ana... | [] | null | null | >=3.10 | [] | [] | [] | [
"numpy>=1.24",
"scipy>=1.10",
"cvxpy>=1.3",
"ecos>=2.0",
"osqp>=0.6",
"scs>=3.2"
] | [] | [] | [] | [
"Homepage, https://github.com/econcz/pyclsp",
"Bug Tracker, https://github.com/econcz/pyclsp/issues"
] | twine/6.2.0 CPython/3.12.12 | 2026-02-19T14:06:51.594626 | pyclsp-2.0.0.tar.gz | 23,075 | fc/4d/f18971ed7a00ec3ec93ab29be2ffaff7a25fcdd3fc9dd022d6891d9900a2/pyclsp-2.0.0.tar.gz | source | sdist | null | false | bfbb5118c4c827c4f32fb84d00aa0b1a | 03890bcb1ce2a8a8fc12e7f44b89664882d8c105d4de7e3afc35bc74ee6b3df6 | fc4df18971ed7a00ec3ec93ab29be2ffaff7a25fcdd3fc9dd022d6891d9900a2 | MIT | [
"LICENSE"
] | 238 |
2.4 | geoscored | 0.1.0 | Python SDK for the GeoScored GEO audit API | # GeoScored Python SDK
Typed async Python client for the [GeoScored](https://geoscored.ai) GEO audit API.
## Installation
```bash
pip install geoscored
```
## Quickstart
```python
import asyncio
from geoscored import GeoScoredClient
async def main():
async with GeoScoredClient(api_key="geo_your_key_here") as client:
# Create a scan
scan = await client.create_scan(
"https://example.com",
brand_name="Example Corp",
)
print(f"Scan created: {scan.id}")
# Wait for completion
scan = await client.poll_scan(scan.id)
print(f"Score: {scan.overall_score} ({scan.grade})")
# Get the full report
report = await client.get_report(scan.id)
for check in report.checks:
print(f" [{check.severity}] {check.check_name}: {check.score}")
asyncio.run(main())
```
## Sandbox mode
Use a test key (starting with `geo_test_`) to get deterministic mock responses
without creating real scans:
```python
async with GeoScoredClient(api_key="geo_test_demo") as client:
scan = await client.create_scan("https://example.com")
# Returns instantly with status="complete" and fixed scores
```
## API Reference
### `GeoScoredClient(api_key, base_url="https://geoscored.ai", timeout=30.0)`
All methods are async and return typed Pydantic models.
| Method | Returns | Description |
|--------|---------|-------------|
| `create_scan(url, brand_name, callback_url)` | `Scan` | Create a new GEO audit scan |
| `get_scan(scan_id)` | `Scan` | Get scan status and scores |
| `list_scans(limit, status, starting_after)` | `ScanList` | List scans with pagination |
| `get_report(scan_id)` | `Report` | Get full audit report |
| `list_checks(scan_id)` | `CheckList` | List all check results |
| `get_check(scan_id, check_id)` | `Check` | Get a specific check |
| `delete_scan(scan_id)` | `None` | Delete a scan |
| `export_scans(days)` | `ExportData` | Export scan history |
| `poll_scan(scan_id, interval, timeout)` | `Scan` | Poll until complete/failed |
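The `poll_scan` helper wraps a common pattern: repeatedly fetch a resource until it reaches a terminal state or a deadline passes. As a rough illustration of that pattern (a generic, self-contained sketch, not GeoScored's actual implementation; the `fetch` callable and its `(done, value)` contract are hypothetical):

```python
import asyncio
import time


async def poll_until(fetch, *, interval: float = 1.0, timeout: float = 30.0):
    """Call the async `fetch` until it reports completion or the timeout elapses.

    `fetch` is a hypothetical callable returning a (done, value) pair.
    """
    deadline = time.monotonic() + timeout
    while True:
        done, value = await fetch()
        if done:
            return value
        if time.monotonic() >= deadline:
            raise TimeoutError("polling timed out")
        # Back off between requests instead of hammering the API
        await asyncio.sleep(interval)
```

A bounded deadline plus a sleep between attempts is what keeps a polling loop both responsive and polite to the server; the real client's `interval` and `timeout` parameters play the same roles.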
### Error handling
All API errors raise `GeoScoredError` with structured fields:
```python
from geoscored import GeoScoredClient, GeoScoredError
try:
    scan = await client.get_scan("nonexistent")
except GeoScoredError as e:
    print(e.status_code)  # 404
    print(e.code)         # "scan_not_found"
    print(e.message)      # "Scan not found."
    print(e.request_id)   # "req_7f3a2b1c..."
```
## Requirements
- Python 3.10+
- httpx >= 0.28.0
- pydantic >= 2.0
| text/markdown | null | null | null | null | null | null | [] | [] | null | null | >=3.10 | [] | [] | [] | [
"httpx>=0.28.0",
"pydantic>=2.0"
] | [] | [] | [] | [
"Homepage, https://geoscored.ai",
"Documentation, https://geoscored.ai/docs/api",
"Repository, https://github.com/GEOscored/geo"
] | twine/6.2.0 CPython/3.14.3 | 2026-02-19T14:06:23.809676 | geoscored-0.1.0.tar.gz | 6,568 | 12/2b/8e66d43372d6e9ddaed4b2b1c759aa8d205544522df6d813a59c8f3440be/geoscored-0.1.0.tar.gz | source | sdist | null | false | 86c2c3d3deb2494f3dca6b70f53be387 | ac039a7b149aa79452fd9d24f1d19cf4808dd4e1af1e77a82362fb924d299c7c | 122b8e66d43372d6e9ddaed4b2b1c759aa8d205544522df6d813a59c8f3440be | MIT | [] | 255 |
2.4 | prodsys | 1.0.7 | A useful module for production system simulation and optimization | 
*prodsys - modeling, simulating and optimizing production systems*




[](https://doi.org/10.5281/zenodo.18177268)
prodsys is a python package for modeling, simulating and optimizing production systems based on the product, process and resource (PPR) modelling principle. For more information, have a look at the [documentation](https://sdm4fzi.github.io/prodsys/).
## Installation
To install the package, run the following command in the terminal:
```bash
pip install prodsys
```
Please note that prodsys is currently only fully compatible with Python 3.11. Other versions might cause some errors.
## Getting started
The package is designed to be easy to use. The following example shows how to model a simple production system and simulate it. The production system contains a single milling machine that performs milling processes on aluminium housings. The transport is thereby performed by a worker. At first, just import the express API of `prodsys`:
```python
import prodsys.express as psx
```
We now create all components required for describing the production system. At first we define times for all arrival, production and transport processes:
```python
milling_time = psx.FunctionTimeModel(distribution_function="normal", location=1, scale=0.1, ID="milling_time")
transport_time = psx.FunctionTimeModel(distribution_function="normal", location=0.3, scale=0.2, ID="transport_time")
arrival_time_of_housings = psx.FunctionTimeModel(distribution_function="exponential", location=1.5, ID="arrival_time_of_housings")
```
Next, we can define the production and transport process in the system by using the created time models:
```python
milling_process = psx.ProductionProcess(milling_time, ID="milling_process")
transport_process = psx.TransportProcess(transport_time, ID="transport_process")
```
With the processes defined, we can now create the production and transport resources:
```python
milling_machine = psx.Resource([milling_process], location=[5, 5], ID="milling_machine")
worker = psx.Resource([transport_process], location=[0, 0], ID="worker")
```
Now we define our product, the housing, that is produced in the system. For this example it requires only a single processing step:
```python
housing = psx.Product(process=[milling_process], transport_process=transport_process, ID="housing")
```
Only the sources and sinks, which are responsible for creating the housings and storing the finished ones, are missing:
```python
source = psx.Source(housing, arrival_time_of_housings, location=[0, 0], ID="source")
sink = psx.Sink(housing, location=[20, 20], ID="sink")
```
Finally, we can create our production system, run the simulation for 60 minutes and print aggregated simulation results:
```python
production_system = psx.ProductionSystem([milling_machine, worker], [source], [sink])
production_system.run(60)
production_system.runner.print_results()
```
As we can see, the system produced 39 parts in this hour with a work in progress (WIP ~ number of products in the system) of 4.125, and utilized the milling machine 79.69% and the worker 78.57% of the time in the productive (PR) state; the rest of the time, both resources are in standby (SB). Note that these results stay the same across runs even though the simulation contains stochastic processes. This is because the random number generator is seeded with a fixed value. If you want to get different results, just specify another value for the `seed` parameter of the `run` method.
```python
production_system.run(60, seed=1)
production_system.runner.print_results()
```
As expected, the performance of the production system changes quite strongly with the new seed. The system now produces 26 parts in this hour with a work in progress (WIP ~ number of products in the system) of 1.68. As the arrival process of the housings is modelled by an exponential distribution and we only consider 60 minutes of simulation, such variation between runs is absolutely expected.
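The effect of the seed itself can be illustrated with plain Python (a generic standard-library sketch, unrelated to prodsys' internal random number generation): a generator initialized with a fixed seed reproduces exactly the same draws, which is what makes a stochastic simulation repeatable.

```python
import random

# Two generators initialized with the same seed reproduce identical draws,
# which is why a fixed seed makes a stochastic simulation repeatable.
a = random.Random(42)
b = random.Random(42)
assert [a.random() for _ in range(5)] == [b.random() for _ in range(5)]

# A different seed gives a different, but equally reproducible, trajectory.
c = random.Random(7)
d = random.Random(7)
assert [c.random() for _ in range(5)] == [d.random() for _ in range(5)]
```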
However, running longer simulations with multiple seeds is easy with `prodsys`. Using the runner's `post_processor`, which stores all events of a simulation and offers many useful methods for analyzing its results, we can average over the runs to estimate the expected WIP:
```python
wip_values = []
for seed in range(5):
    production_system.run(2000, seed=seed)
    run_wip = production_system.post_processor.get_aggregated_wip_data()
    wip_values.append(run_wip)

print("WIP values for the simulation runs:", wip_values)
```
We can easily analyze these results with NumPy: the average WIP is 2.835, which lies between the values of the first two runs and gives us a more realistic expectation of the system's performance.
```python
import numpy as np
wip = np.array(wip_values).mean(axis=0)
print(wip)
```
These examples only cover the most basic functionalities of `prodsys`. For more elaborate walkthroughs of the package's features, please see the [tutorials](https://sdm4fzi.github.io/prodsys/Tutorials/tutorial_0_overview). For a complete overview of the package's functionality, please see the [API reference](https://sdm4fzi.github.io/prodsys/API_reference/API_reference_0_overview/).
## Contributing
`prodsys` is a young project and therefore has much room for improvement, so feedback and support are very welcome! If you want to contribute to the package, either create issues on [prodsys' github page](https://github.com/sdm4fzi/prodsys) for discussing new features or contact me directly via [github](https://github.com/SebBehrendt) or [email](mailto:sebastian.behrendt@kit.edu).
## License
The package is licensed under the [MIT license](LICENSE).
## Acknowledgements
We extend our sincere thanks to the German Federal Ministry for Economic Affairs and Climate Action
(BMWK) for supporting the research project 13IK001ZF “Software-Defined Manufacturing for the
automotive and supplying industry” ([https://www.sdm4fzi.de/](https://www.sdm4fzi.de/)).
| text/markdown | null | Sebastian Behrendt <sebastian.behrendt@kit.edu> | null | null | MIT | null | [] | [] | null | null | >=3.10 | [] | [] | [] | [
"deap>=1.4.3",
"email-validator>=2.3.0",
"gurobipy>=12.0.3",
"importlib-metadata>=8.7.0",
"matplotlib<4.0.0,>=3.7.2",
"numpy<2",
"openpyxl>=3.1.5",
"pandas>=2.3.2",
"pathfinding>=1.0.17",
"plotly>=6.3.0",
"pydantic>=2.11.9",
"pygeoops==0.3.0",
"scikit-image<0.26.0,>=0.25.2",
"scipy>=1.15.3... | [] | [] | [] | [] | uv/0.8.19 | 2026-02-19T14:06:16.133191 | prodsys-1.0.7.tar.gz | 597,924 | eb/17/7a791585c27290202875176e7a755fc8c2da6747be548264e03b52790c31/prodsys-1.0.7.tar.gz | source | sdist | null | false | b5cec03739e8d1a3ecc4c6ebeda36d41 | cae42dc710df1bb6485912b121059cc2d164b38bbd28b6d7881991d5c8a6c76a | eb177a791585c27290202875176e7a755fc8c2da6747be548264e03b52790c31 | null | [
"LICENSE"
] | 228 |
2.4 | uratools | 1.0.0 | Tools to simplify the use of URANIE with python | # uratools
This Python module provides tools to simplify the use of URANIE with Python.
`uratools` aims to provide a multitude of utilities designed to enhance the user experience, in particular by facilitating conversions to NumPy, interfacing URANIE with Python machine learning libraries (PyTorch, TensorFlow, and scikit-learn), and providing tools for fitting empirical distributions and generating plots easily. It will be updated regularly.
## URANIE
- Website: https://uranie.cea.fr/
- Python documentation: https://uranie.cea.fr/documentation/userManual_Py/index
- Installation: https://gitlab.com/uranie-cea/publication/-/wikis/home
- uratools documentation: https://uratools-508bc1.gitlab.io/
## Installation
```bash
pip install uratools
```
## How to use it
Before using the package, ensure that URANIE is installed and properly sourced.
The Python import to use the package is the following:
```python
from uratools import converter
```
Documentation is available on this page
https://uratools-508bc1.gitlab.io/
## Example
```python
import numpy as np
import ROOT
from ROOT.URANIE import DataServer, Sampler
from uratools import converter
## == np.array to DataServer
mA = np.random.randn(10,2)
tds = converter.np2ds(mA, "x1:x2") ## a DataServer.TDataServer is created from np.array
## == DataServer to np.array
tds = DataServer.TDataServer()
tds.addAttribute(DataServer.TUniformDistribution("x1",0.0, 5.0))
tds.addAttribute(DataServer.TNormalDistribution("x2",-2.0, 0.5))
sam = Sampler.TSampling(tds,"lhs",20)
sam.generateSample()
npA = converter.ds2np(tds) ## a np.array is created from a DataServer.TDataServer
```
## Support
support-uranie@cea.fr
| text/markdown | Uranie Team | rudy.chocat@cea.fr | null | null | LGPL | null | [
"Environment :: Console",
"Intended Audience :: Science/Research",
"Intended Audience :: Developers",
"Natural Language :: English",
"Programming Language :: Python :: 3",
"Topic :: Software Development",
"Topic :: Scientific/Engineering"
] | [] | null | null | >=3.11 | [] | [] | [] | [
"numpy"
] | [] | [] | [] | [
"Homepage, https://gitlab.com/uranie-cea/uratools/"
] | twine/6.2.0 CPython/3.12.3 | 2026-02-19T14:05:34.138200 | uratools-1.0.0.tar.gz | 14,270 | 41/b0/a739fd1006bdb4e84e5111e108b40c83156ab82d88dca9d945a919c179b7/uratools-1.0.0.tar.gz | source | sdist | null | false | 9f76598ed31183f49d44b8ae0928e58e | 91e3dbf326a61835afa92df6fe5a7f9f8ad3615d615d31e30b64656842c2ae25 | 41b0a739fd1006bdb4e84e5111e108b40c83156ab82d88dca9d945a919c179b7 | null | [
"LICENSE"
] | 242 |
2.4 | fugue-sql-antlr | 0.2.4 | Fugue SQL Antlr Parser | # Fugue SQL Antlr Parser
[](https://pypi.python.org/pypi/fugue-sql-antlr/)
[](https://pypi.python.org/pypi/fugue-sql-antlr/)
[](https://pypi.python.org/pypi/fugue-sql-antlr/)
[](https://codecov.io/gh/fugue-project/fugue-sql-antlr)
Chat with us on slack!
[](http://slack.fugue.ai)
This is the dedicated package for the Fugue SQL parser built on Antlr4. It consists of two packages: [fugue-sql-antlr](https://pypi.python.org/pypi/fugue-sql-antlr/) and [fugue-sql-antlr-cpp](https://pypi.python.org/pypi/fugue-sql-antlr-cpp/).
[fugue-sql-antlr](https://pypi.python.org/pypi/fugue-sql-antlr/) is the main package. It contains the Python parser of Fugue SQL and the visitor for the parse tree.
[fugue-sql-antlr-cpp](https://pypi.python.org/pypi/fugue-sql-antlr-cpp/) is the C++ implementation of the parser. This solution is based on the incredible work of [speedy-antlr-tool](https://github.com/amykyta3/speedy-antlr-tool), a tool that generates thin Python interface code on top of the C++ Antlr parser. This package is purely optional; it does not affect the correctness or features of the Fugue SQL parser. However, with this package installed, parsing is **~25x faster**.
Neither of these two packages should be directly used by users. They are the core internal dependency of the main [Fugue](https://github.com/fugue-project/fugue) project (>=0.7.0).
## Installation
To install fugue-sql-antlr:
```bash
pip install fugue-sql-antlr
```
To install fugue-sql-antlr and fugue-sql-antlr-cpp:
```bash
pip install fugue-sql-antlr[cpp]
```
We try to pre-build wheels for major operating systems and active Python versions. In case your environment is not covered, make sure a C++ compiler is available in your operating system when you install fugue-sql-antlr-cpp. The compiler must support the ISO C++ 2017 standard.
| text/markdown | The Fugue Development Team | hello@fugue.ai | null | null | Apache-2.0 | distributed spark dask sql dsl domain specific language | [
"Development Status :: 3 - Alpha",
"Intended Audience :: Developers",
"Topic :: Software Development :: Libraries :: Python Modules",
"License :: OSI Approved :: Apache Software License",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: ... | [] | http://github.com/fugue-project/fugue | null | >=3.10 | [] | [] | [] | [
"triad>=0.6.8",
"antlr4-python3-runtime<4.14,>=4.13.2",
"jinja2",
"packaging",
"fugue-sql-antlr-cpp==0.2.4; extra == \"cpp\"",
"speedy_antlr_tool; extra == \"test\""
] | [] | [] | [] | [] | twine/6.2.0 CPython/3.9.25 | 2026-02-19T14:04:55.265854 | fugue_sql_antlr-0.2.4.tar.gz | 158,964 | fc/67/a9719eb2fc568f39dd9b7f6063d5844d6c4b348aeb86bae0777a9462693d/fugue_sql_antlr-0.2.4.tar.gz | source | sdist | null | false | a07ae0ac04115f6f4756772ecadfc551 | 688cec1f76ff88dd1b1a714629e097989384b27e54c0a672cc3d6adcbed824d2 | fc67a9719eb2fc568f39dd9b7f6063d5844d6c4b348aeb86bae0777a9462693d | null | [
"LICENSE"
] | 19,229 |
2.4 | translate-remote | 0.0.3b2870 | PyPI package for circles translation | PyPI package for circles translation.

JIRA work item: https://circles-zone.atlassian.net/browse/BU-2614

GHA: https://github.com/circles-zone/logger-local-python-package/actions
| text/markdown | Circles | info@circlez.ai | null | null | null | null | [
"Programming Language :: Python :: 3",
"Operating System :: OS Independent"
] | [] | https://github.com/circles-zone/translate-remote-python-package | null | null | [] | [] | [] | [
"python-sdk-remote",
"translate",
"logger-local"
] | [] | [] | [] | [] | twine/6.1.0 CPython/3.12.8 | 2026-02-19T14:03:49.461209 | translate_remote-0.0.3b2870.tar.gz | 4,580 | 10/5c/29fca38470dac7fb2a366a2fcb1e2526b56f8a5eb6c2eecf9f6e2904b8ab/translate_remote-0.0.3b2870.tar.gz | source | sdist | null | false | c90365b535f1ba08cf347dc398da2d2b | 97d2b5fcea4457d73927b0ea7270d1597539ed8d24b0ea28adb50b984b1fb1b7 | 105c29fca38470dac7fb2a366a2fcb1e2526b56f8a5eb6c2eecf9f6e2904b8ab | null | [] | 243 |
2.4 | risk-network | 0.1.2 | A Python package for scalable network analysis and high-quality visualization. | # RISK

[](https://pypi.python.org/pypi/risk-network)


**Regional Inference of Significant Kinships** (**RISK**) is a next-generation tool for biological network annotation and visualization. It integrates community detection algorithms, rigorous overrepresentation analysis, and a modular framework for diverse network types. RISK identifies biologically coherent relationships within networks and generates publication-ready visualizations, making it a useful tool for biological and interdisciplinary network analysis.
For a full description of RISK and its applications, see:
<br>
Horecka, I., and Röst, H. (2026)
<br>
_RISK: a next-generation tool for biological network annotation and visualization_
<br>
_Bioinformatics_. https://doi.org/10.1093/bioinformatics/btaf669
## Documentation and Tutorial
- **Full Documentation**: [riskportal.github.io/risk-docs](https://riskportal.github.io/risk-docs)
- **Try in Browser (Binder)**: [](https://mybinder.org/v2/gh/riskportal/risk-docs/HEAD?filepath=notebooks/quickstart.ipynb)
- **Documentation Repository**: [github.com/riskportal/risk-docs](https://github.com/riskportal/risk-docs)
## Installation
RISK is compatible with Python 3.8 or later and runs on all major operating systems. To install the latest version of RISK, run:
```bash
pip install risk-network --upgrade
```
## Key Features of RISK
- **Broad Data Compatibility**: Accepts multiple network formats (Cytoscape, Cytoscape JSON, GPickle, NetworkX) and user-provided annotations formatted as term–to–gene membership tables (JSON, CSV, TSV, Excel, Python dictionaries).
- **Flexible Clustering**: Offers Louvain, Leiden, Markov Clustering, Greedy Modularity, Label Propagation, Spinglass, and Walktrap, with user-defined resolution parameters to detect both coarse and fine-grained modules.
- **Statistical Testing**: Provides permutation, hypergeometric, chi-squared, and binomial tests, balancing statistical rigor with speed.
- **High-Resolution Visualization**: Generates publication-ready figures with customizable node/edge properties, contour overlays, and export to SVG, PNG, or PDF.
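To give a sense of what an overrepresentation test computes, here is the standard hypergeometric enrichment p-value in pure Python (an illustrative sketch of the general statistic, not RISK's internal code; the function name and example numbers are made up):

```python
from math import comb


def hypergeom_enrichment_p(N: int, K: int, n: int, k: int) -> float:
    """Upper-tail P(X >= k) for X ~ Hypergeometric(N, K, n).

    N: genes in the network, K: genes annotated with the term,
    n: genes in the cluster, k: annotated genes observed in the cluster.
    """
    return sum(
        comb(K, i) * comb(N - K, n - i) for i in range(k, min(K, n) + 1)
    ) / comb(N, n)


# Example: 6 of 20 cluster members carry a term annotated to 100 of 1000 genes.
# Under random sampling we would expect only ~2, so the tail probability is small.
p = hypergeom_enrichment_p(N=1000, K=100, n=20, k=6)
```

A small p-value indicates the term appears in the cluster more often than expected by chance, which is the basis for calling a module "enriched" for that annotation.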
## Example Usage
We applied RISK to a _Saccharomyces cerevisiae_ protein–protein interaction (PPI) network (Michaelis _et al_., 2023; 3,839 proteins, 30,955 interactions). RISK identified compact, functional modules overrepresented in Gene Ontology Biological Process (GO BP) terms (Ashburner _et al_., 2000), revealing biological organization including ribosomal assembly, mitochondrial organization, and RNA polymerase activity.

**RISK workflow overview and analysis of the yeast PPI network**. Clusters are color-coded by enriched GO Biological Process terms (p < 0.01).
## Citation
### Primary citation
Horecka, I., and Röst, H. (2026)
<br>
_RISK: a next-generation tool for biological network annotation and visualization_
<br>
_Bioinformatics_. https://doi.org/10.1093/bioinformatics/btaf669
### Software archive
RISK software for the published manuscript.
<br>
Zenodo. https://doi.org/10.5281/zenodo.17257418
## Contributing
We welcome contributions from the community:
- [Issues Tracker](https://github.com/riskportal/risk/issues)
- [Source Code](https://github.com/riskportal/risk/tree/main/src/risk)
## Support
If you encounter issues or have suggestions for new features, please use the [Issues Tracker](https://github.com/riskportal/risk/issues) on GitHub.
## License
RISK is open source under the [GNU General Public License v3.0](https://www.gnu.org/licenses/gpl-3.0.en.html).
| text/markdown | null | Ira Horecka <ira89@icloud.com> | null | null | GPL-3.0-or-later | null | [
"Development Status :: 4 - Beta",
"Intended Audience :: Developers",
"Intended Audience :: Science/Research",
"License :: OSI Approved :: GNU General Public License v3 or later (GPLv3+)",
"Operating System :: OS Independent",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.8"... | [] | null | null | >=3.8 | [] | [] | [] | [
"ipywidgets",
"leidenalg",
"markov_clustering",
"matplotlib",
"networkx",
"nltk",
"numpy",
"openpyxl",
"pandas",
"python-igraph",
"python-louvain",
"scikit-learn",
"scipy",
"statsmodels",
"threadpoolctl",
"tqdm"
] | [] | [] | [] | [
"Homepage, https://github.com/riskportal/risk",
"Issues, https://github.com/riskportal/risk/issues"
] | twine/6.2.0 CPython/3.12.8 | 2026-02-19T14:03:48.497972 | risk_network-0.1.2.tar.gz | 108,068 | 3f/cc/16649e984524898c02fb777a885ad1085208a1ad0f6e23ff732eb88ee5b8/risk_network-0.1.2.tar.gz | source | sdist | null | false | bb0a944196f93d9eead434bcd16c22a6 | f81b85b750d93de340984dfa05bba9c4afbe251032bf4c0ad974a44f745309dc | 3fcc16649e984524898c02fb777a885ad1085208a1ad0f6e23ff732eb88ee5b8 | null | [
"LICENSE"
] | 245 |
2.4 | optilang | 0.1.0 | A Python-inspired interpreter with real-time code analysis and optimization suggestions | # OptiLang
**A Python-inspired interpreter with real-time code analysis and optimization suggestions**
[](https://www.python.org/downloads/)
[](https://opensource.org/licenses/MIT)
[](https://github.com/psf/black)
---
## 🎯 Project Overview
OptiLang is an educational interpreter for a Python-like language (PyLite) that provides:
- **Real-time code execution** with line-by-line profiling
- **Optimization suggestions** based on detected anti-patterns
- **Quantitative scoring** (0-100) for code quality
- **Pattern detection** for 8+ common performance issues
---
## 🚀 Quick Start
### Installation
```bash
pip install optilang
```
### Basic Usage
```python
from optilang import execute, analyze
# Execute PyLite code
result = execute("""
def factorial(n):
    if n <= 1:
        return 1
    return n * factorial(n - 1)

print(factorial(5))
""")

print(result.output)  # "120"
print(f"Execution time: {result.execution_time}ms")

# Analyze code for optimizations
report = analyze("""
for i in range(100):
    for j in range(100):
        result = i * j
""")

print(f"Optimization Score: {report.optimization_score}/100")
for suggestion in report.suggestions:
    print(f"Line {suggestion.line}: {suggestion.description}")
```
---
## 🏗️ Architecture
```
┌──────────┐ ┌──────────┐ ┌──────────┐ ┌──────────┐
│ Lexer │ -> │ Parser │ -> │ Executor │ -> │ Profiler │
│ (Tokens) │ │ (AST) │ │ (Runtime)│ │ (Metrics)│
└──────────┘ └──────────┘ └──────────┘ └──────────┘
│ │
v v
┌──────────┐ ┌──────────┐
│Optimizer │ │ Scorer │
│(Patterns)│ │ (0-100) │
└──────────┘ └──────────┘
```
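The first stage of that pipeline, the lexer, can be sketched in a few lines. The following is a toy tokenizer for illustration only; the token names and regular expressions are invented here and are not OptiLang's actual implementation:

```python
import re

# Illustrative token grammar; OptiLang's real lexer is much richer than this.
TOKEN_SPEC = [
    ("NUMBER", r"\d+"),
    ("NAME", r"[A-Za-z_]\w*"),
    ("OP", r"[+\-*/()=<>:]"),
    ("SKIP", r"[ \t]+"),
]
MASTER = re.compile("|".join(f"(?P<{name}>{rx})" for name, rx in TOKEN_SPEC))


def tokenize(source: str):
    """Yield (kind, text) pairs for a single line of source code."""
    for match in MASTER.finditer(source):
        if match.lastgroup != "SKIP":  # drop whitespace tokens
            yield (match.lastgroup, match.group())


tokens = list(tokenize("x = factorial(5)"))
# tokens -> [("NAME", "x"), ("OP", "="), ("NAME", "factorial"),
#            ("OP", "("), ("NUMBER", "5"), ("OP", ")")]
```

The parser stage then consumes such a token stream to build the AST that the executor and profiler operate on.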
---
## 📋 Features
### Current (v0.1.0)
- [x] Lexical analysis (tokenization)
- [x] Syntax parsing (AST generation)
- [x] Code execution (variables, functions, control flow)
- [x] Basic profiling (execution time, line counts)
### Planned
- [ ] Advanced profiling (memory usage, call graphs)
- [ ] 8+ optimization patterns
- [ ] ML-based suggestion ranking (optional)
- [ ] Optimization score calculation
- [ ] Comprehensive documentation
---
## 🛠️ Development Setup
```bash
# Clone repository
git clone https://github.com/Sthamanik/optilang.git
cd optilang
# Create virtual environment
conda create -n optilang python -y
conda activate optilang
# Install in editable mode with dev dependencies
pip install -e ".[dev]"
# Run tests
pytest
# Format code
black optilang/
# Type checking
mypy optilang/
# Linting
flake8 optilang/
```
---
## 📚 Documentation
- **User Guide**: Coming soon
- **API Reference**: Coming soon
- **Contributing Guide**: See [CONTRIBUTING.md](CONTRIBUTING.md)
- **Changelog**: See [CHANGELOG.md](CHANGELOG.md)
---
## 🧪 Testing
```bash
# Run all tests
pytest
# Run with coverage
pytest --cov=optilang --cov-report=html
# View coverage report
open htmlcov/index.html # Linux/Mac
# or start htmlcov/index.html on Windows
```
---
## 🤝 Contributing
Contributions are welcome! Please see [CONTRIBUTING.md](CONTRIBUTING.md) for guidelines.
---
## 📄 License
This project is licensed under the MIT License - see the [LICENSE](LICENSE) file for details.
---
## 👥 Team
- **Manik Kumar Shrestha** - [GitHub](https://github.com/Sthamanik)
- **Om Shree Mahat** - [GitHub](https://github.com/itsomshree)
- **Aashish Rimal** - [GitHub](https://github.com/aashishrimal22)
---
## 📧 Contact
For questions or feedback:
- **Email**: shresthamanik1820@gmail.com
- **Issues**: [GitHub Issues](https://github.com/Sthamanik/optilang/issues)
---
**⭐ Star this repository if you find it useful!**
| text/markdown | null | Manik Kumar Shrestha <shresthamanik1820@gmail.com> | null | null | MIT | interpreter, optimization, profiling, static-analysis, education | [
"Development Status :: 3 - Alpha",
"Intended Audience :: Developers",
"Intended Audience :: Education",
"License :: OSI Approved :: MIT License",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.9",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python ::... | [] | null | null | >=3.9 | [] | [] | [] | [
"pytest>=9.0.2; extra == \"dev\"",
"pytest-cov>=7.0.0; extra == \"dev\"",
"black>=26.1.0; extra == \"dev\"",
"mypy>=1.19.1; extra == \"dev\"",
"flake8>=7.3.0; extra == \"dev\"",
"scikit-learn>=1.5.0; extra == \"ml\"",
"joblib>=1.4.0; extra == \"ml\"",
"sphinx>=7.0.0; extra == \"docs\"",
"sphinx-rtd-... | [] | [] | [] | [
"Homepage, https://github.com/Sthamanik/optilang",
"Documentation, https://optilang.readthedocs.io",
"Repository, https://github.com/Sthamanik/optilang",
"Issues, https://github.com/Sthamanik/optilang/issues"
] | twine/6.2.0 CPython/3.14.2 | 2026-02-19T14:03:12.070883 | optilang-0.1.0.tar.gz | 38,082 | 89/b1/5784809f8e8ca3829c3cb9bbda61936afa83cfdb8ea0b79ba739f4225061/optilang-0.1.0.tar.gz | source | sdist | null | false | 35dd4de9ac7943715ce851ca15c9a90e | 99710da7bfc792bf1af0debcdbc5bd65567f0ee1643c0f9486bc1854e88a3f11 | 89b15784809f8e8ca3829c3cb9bbda61936afa83cfdb8ea0b79ba739f4225061 | null | [
"LICENSE"
] | 251 |
2.4 | legend-lh5io | 0.0.2 | LEGEND LH5 File I/O | # legend-lh5io
[](https://pypi.org/project/legend-lh5io/)

[](https://github.com/legend-exp/legend-lh5io/actions)
[](https://github.com/pre-commit/pre-commit)
[](https://github.com/psf/black)
[](https://app.codecov.io/gh/legend-exp/legend-lh5io)



[](https://legend-lh5io.readthedocs.io)
[](https://doi.org/10.5281/zenodo.10592107)
This package provides a Python implementation of I/O to HDF5 for the LEGEND Data
Objects (LGDOs), including [Numba](https://numba.pydata.org/)-accelerated custom
compression algorithms for particle detector signals. More documentation is
available in the
[LEGEND data format specification](https://legend-exp.github.io/legend-data-format-specs).
If you are using this software,
[consider citing](https://doi.org/10.5281/zenodo.10592107)!
| text/markdown | The LEGEND Collaboration | null | The LEGEND Collaboration | null | null | null | [
"Development Status :: 4 - Beta",
"Intended Audience :: Developers",
"Intended Audience :: Science/Research",
"Operating System :: MacOS",
"Operating System :: POSIX",
"Operating System :: Unix",
"Programming Language :: Python",
"Programming Language :: Python :: 3",
"Programming Language :: Python... | [] | null | null | >=3.10 | [] | [] | [] | [
"awkward>=2",
"colorlog",
"h5py>=3.10",
"hdf5plugin",
"numba!=0.53.*,!=0.54.*",
"numpy>=1.21",
"pandas>=1.4.4",
"parse",
"legend-pydataobj>=1.16.0",
"legend_lh5io[docs,test]; extra == \"all\"",
"furo; extra == \"docs\"",
"hist[plot]; extra == \"docs\"",
"jupyter; extra == \"docs\"",
"myst-... | [] | [] | [] | [
"Homepage, https://github.com/legend-exp/legend-lh5",
"Bug Tracker, https://github.com/legend-exp/legend-lh5/issues",
"Discussions, https://github.com/legend-exp/legend-lh5/discussions",
"Changelog, https://github.com/legend-exp/legend-lh5/releases"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-19T14:03:10.486022 | legend_lh5io-0.0.2.tar.gz | 105,127 | 63/79/5b9b06f8139fc41e3026fd45ac460f9546ba7d96ac5409fe44d89f47ee18/legend_lh5io-0.0.2.tar.gz | source | sdist | null | false | bbb12f008b1b39ec3bb52eb326bf0b2e | 85aeeceb74a201b294a3fac06e90150e568dad4a49e49992116d1117d0cb9219 | 63795b9b06f8139fc41e3026fd45ac460f9546ba7d96ac5409fe44d89f47ee18 | GPL-3.0 | [
"LICENSE"
] | 302 |
2.4 | opticlient | 0.1.4 | Python client for SaaS optimization API | # opticlient
A lightweight Python client for interacting with the SaaS optimization API.
This package provides a clean interface for submitting optimization jobs, polling their status, and retrieving results.
Currently supported tools:
- **MAXSAT** - solves a `.wcnf` instance in the post-2022 "MaxSAT Evaluation" format
- **Single Machine Scheduling (sms)** — submit an Excel instance and obtain an ordered job schedule.
More tools will be added in future versions.
For details visit: https://github.com/Milan-Adhikari/opticlient
---
## Installation
```bash
pip install opticlient
```
## Quick Usage Guide
opticlient requires an **API key**, which you obtain from the website https://cad-eta.vercel.app
You can provide it in either of two ways:
#### Option 1 - Environment variable (recommended)
```bash
export OPTICLIENT_API_TOKEN="YOUR_API_KEY"
```
#### Option 2 - Pass directly in code
```python
from opticlient import OptiClient
client = OptiClient(api_token="YOUR_API_KEY")
```
## Quick Start: Single Machine Scheduling (SMS)
The SMS tool takes an Excel file describing a scheduling instance and returns an ordered sequence of jobs. You can download the sample Excel file from https://cad-eta.vercel.app or see below.
#### Basic usage
```python
# use case for maxsat
from opticlient import OptiClient
client = OptiClient() # reads token/base URL from environment if available
# set the file path first (!!! necessary)
client.maxsatSolver.set_file("test.wcnf")
solution = client.maxsatSolver.optimize()
print("Solution:", solution)
```
```python
# use case for single machine scheduling problem
from opticlient import OptiClient
client = OptiClient() # reads token/base URL from environment if available
schedule = client.sms.run(
    file_path="instance.xlsx",
    description="Test run",
)

print("Job schedule:")
for job in schedule:
    print(job)
```
## Sample .wcnf File format
    c Example WCNF file (post-2023 MaxSAT Evaluation format)
    c Offset: 0
    h 1 -2 3 0
    h -1 4 0
    5 2 -3 0
    3 -4 0
    1 5 0
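The format is simple enough to read by hand: lines starting with `c` are comments, `h` introduces a hard clause, and a leading integer is the weight of a soft clause; every clause ends with a terminating `0`. A minimal illustrative parser (a sketch of the file format only, not the parser opticlient or the solver actually uses):

```python
def parse_wcnf(text: str):
    """Parse post-2022 MaxSAT Evaluation WCNF text into (hard, soft) clauses.

    Hard clauses are lists of literals; soft clauses are (weight, literals).
    """
    hard, soft = [], []
    for line in text.splitlines():
        parts = line.split()
        if not parts or parts[0] == "c":
            continue  # blank line or comment
        if parts[0] == "h":
            hard.append([int(x) for x in parts[1:-1]])  # drop trailing 0
        else:
            weight, *lits = parts
            soft.append((int(weight), [int(x) for x in lits[:-1]]))
    return hard, soft


hard, soft = parse_wcnf("c comment\nh 1 -2 3 0\n5 2 -3 0\n")
# hard -> [[1, -2, 3]]; soft -> [(5, [2, -3])]
```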
## Sample Excel File format
| Job | Job1 | Job2 | Job3 | Job4 |
|-------|------|------|------|------|
| Job1 | 0 | 2 | 1 | 1 |
| Job2 | 3 | 0 | 1 | 1 |
| Job3 | 5 | 4 | 1 | 2 |
| Job4 | 2 | 2 | 1 | 0 |
## Versioning
This package follows semantic versioning:
* 0.x — early releases, API may change
* 1.0+ — stable API | text/markdown | null | Milan Adhikari <reach.out.to.milan@gmail.com> | null | null | MIT | null | [] | [] | null | null | >=3.9 | [] | [] | [] | [
"httpx",
"build; extra == \"dev\"",
"pytest; extra == \"dev\"",
"twine; extra == \"dev\""
] | [] | [] | [] | [
"Homepage, https://cad-eta.vercel.app",
"Source, https://github.com/Milan-Adhikari/opticlient"
] | twine/6.2.0 CPython/3.12.3 | 2026-02-19T14:02:04.489618 | opticlient-0.1.4.tar.gz | 12,959 | 22/c9/10e1229ef1eaf6763a9a326b20ddfbcbd4ba7beb0aeee729e8445dfb9bc3/opticlient-0.1.4.tar.gz | source | sdist | null | false | e43b49b82fb9bac62427b2a003e73fd9 | 16554462d964ba91241c3a8538eedb9a9e6248092258d627c166a655a7a5e2ef | 22c910e1229ef1eaf6763a9a326b20ddfbcbd4ba7beb0aeee729e8445dfb9bc3 | null | [
"LICENSE"
] | 223 |
2.4 | meteole | 0.2.6 | A Python client library for forecast model APIs (e.g., Météo-France). | <p align="center">
<a href="https://maif.github.io/meteole"><img src="https://raw.githubusercontent.com/MAIF/meteole/main/docs/pages/assets/img/svg/meteole-fond-clair.svg" alt="meteole" width="50%"></a>
</p>
<p align="center">
<em>Easy access to Météo-France weather models and data</em>
</p>
<p align="center">
<img src="https://github.com/MAIF/meteole/actions/workflows/ci-cd.yml/badge.svg?branch=main" alt="CI">
<img src="https://img.shields.io/badge/coverage-89%25-dark_green" alt="Coverage">
<img src="https://img.shields.io/pypi/v/meteole" alt="Versions">
<img src="https://img.shields.io/pypi/pyversions/meteole" alt="Python">
<img src="https://img.shields.io/pypi/dm/meteole" alt="Downloads">
</p>
---
**Documentation:** [https://maif.github.io/meteole/home/](https://maif.github.io/meteole/home/)
**Repository:** [https://github.com/MAIF/meteole](https://github.com/MAIF/meteole)
**Release article:** [Medium](https://medium.com/oss-by-maif/meteole-simplifier-lacc%C3%A8s-aux-donn%C3%A9es-m%C3%A9t%C3%A9o-afeec5e5d395)
---
## Overview
**Meteole** is a Python library designed to simplify accessing weather data from the Météo-France APIs. It provides:
- **Automated token management**: Simplify authentication with a single `application_id`.
- **Unified model usage**: AROME, AROME INSTANTANE, ARPEGE, PIAF forecasts with a consistent interface.
- **User-friendly parameter handling**: Intuitive management of key weather forecasting parameters.
- **Seamless data integration**: Directly export forecasts as Pandas DataFrames.
- **Vigilance bulletins**: Retrieve real-time weather warnings across France.
Perfect for data scientists, meteorologists, and developers, Meteole helps integrate weather forecasts into projects effortlessly.
### Installation
```bash
pip install meteole
```
## 🕐 Quickstart
### Step 1: Obtain an API token or key
Create an account on [the Météo-France API portal](https://portail-api.meteofrance.fr/). Next, subscribe to the desired APIs (Arome, Arpege, Arome Instantané, etc.). Retrieve the API token (or key) by going to “Mes APIs” and then “Générer token”.
### Step 2: Fetch Forecasts
Meteole allows you to retrieve forecasts for a wide range of weather indicators. Here's how to get started:
| Characteristics | AROME | AROME-PE | ARPEGE | ARPEGE-PE | AROME INSTANTANE | PIAF |
|------------------|----------------------------|----------------------------|-----------------------------|--------------------------------| -------------------------------| -------------------------------|
| Resolution | 1.3 km | 2.8 km | 10 km | 10 km | 1.3 km | 1.3 km |
| Update Frequency | Every 3 hours | Every 6 hours | Every 6 hours | Every 6 hours | Every 1 hour | Every 10 minutes |
| Forecast Range | Every hour, up to 51 hours | Every hour, up to 51 hours | Every hour, up to 114 hours | Every hour up to 48 hours, then every 3 hours up to 114 hours | Every 15 minutes, up to 360 minutes | Every 5 minutes, up to 195 minutes |
| Numbers of scenarios | 1 | 25 | 1 | 35 | 1 | 1 |
The AromePE and ArpegePE models are ensemble models. Instead of making a single forecast of the most likely weather, a set (or ensemble) of forecasts is produced. This set of forecasts aims to give an indication of the range of possible future states of the atmosphere ([from Wikipedia](https://en.wikipedia.org/wiki/Ensemble_forecasting)). It provides several scenarios of possible weather parameters instead of the single one produced by standard deterministic models.
*Note: the run date cannot be more than 4 days in the past, so update the `run` date in the example below accordingly.*
```python
import datetime as dt
from meteole import AromeForecast
# Configure the logger to report on data retrieval: status, default settings, etc.
import logging
logging.basicConfig(level=logging.INFO)
logger = logging.getLogger("meteole")
# Initialize the AROME forecast client
# Find your APPLICATION_ID by following these guidelines: https://maif.github.io/meteole/how_to/?h=application_id#get-a-token-an-api-key-or-an-application-id
arome_client = AromeForecast(application_id=APPLICATION_ID)
# Check indicators available
print(arome_client.indicators)
# Fetch weather data
df_arome = arome_client.get_coverage(
    indicator="V_COMPONENT_OF_WIND_GUST__SPECIFIC_HEIGHT_LEVEL_ABOVE_GROUND",  # Optional: if omitted, coverage_id must be provided
run="2025-01-10T00.00.00Z", # Optional: forecast start time
forecast_horizons=[ # Optional: prediction times (in hours)
dt.timedelta(hours=1),
dt.timedelta(hours=2),
],
heights=[10], # Optional: height above ground level
pressures=None, # Optional: pressure level
    long=(-5.1413, 9.5602),  # Optional: longitude. Tuple (min_long, max_long) or a float for a single location
    lat=(41.33356, 51.0889),  # Optional: latitude. Tuple (min_lat, max_lat) or a float for a single location
coverage_id=None, # Optional: an alternative to indicator/run/interval
temp_dir=None, # Optional: Directory to store the temporary file
ensemble_numbers=range(3), # Optional: Only for ensemble models (AromePE), the number of scenarios
)
```
Note: `coverage_id` can be used instead of `indicator`, `run`, and `interval`.
The usage of ARPEGE, AROME INSTANTANE, and PIAF is identical to AROME; just initialize the appropriate class.
### Step 3: Explore Parameters and Indicators
#### Discover Available Indicators
Use the `get_capabilities()` method to list all available indicators, run times, and intervals:
```python
indicators = arome_client.get_capabilities()
print(indicators)
```
#### Fetch Description for a Specific Indicator
Understand the required parameters (`forecast_horizons`, `heights`, `pressures`) for any indicator using `get_coverage_description()`:
```python
description = arome_client.get_coverage_description(coverage_id)
print(description)
```
#### Geographical Coverage
The geographical coverage of forecasts can be customized using the `lat` and `long` parameters of the `get_coverage` method. By default, Meteole retrieves data for the whole of metropolitan France.
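For example, a `get_coverage` call can target either an area or a single point by switching between tuple and float coordinates (a sketch with illustrative values; the indicator name and coordinates are assumptions, not taken from the library):

```python
from datetime import timedelta  # only needed for the commented request below

# A tuple selects a bounding box; a single float selects one location.
# Coordinate values here are illustrative.
area = {"long": (-5.2, -1.0), "lat": (47.2, 48.9)}   # a box over western France
point = {"long": 2.3522, "lat": 48.8566}             # central Paris

# A single-point request would then read (network call, needs a valid token):
# df = arome_client.get_coverage(
#     indicator="TOTAL_PRECIPITATION__GROUND_OR_WATER_SURFACE",  # illustrative name
#     forecast_horizons=[timedelta(hours=1)],
#     **point,
# )
```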
#### Fetch Forecasts for Multiple Indicators
The `get_combined_coverage` method allows you to retrieve weather data for multiple indicators at the same time, streamlining the process of gathering forecasts for different parameters (e.g., temperature, wind speed, etc.). For detailed guidance on using this feature, refer to this [tutorial](./tutorial/Fetch_forecast_for_multiple_indicators.ipynb).
Explore detailed examples in the [tutorials folder](./tutorial) to quickly get started with Meteole.
### ⚠️ VIGILANCE METEO FRANCE
Meteo France provides nationwide vigilance bulletins, highlighting potential weather risks. These tools allow you to integrate weather warnings into your workflows, helping trigger targeted actions or models.
```python
from meteole import Vigilance
vigi = Vigilance(application_id=APPLICATION_ID)
df_phenomenon, df_timelaps = vigi.get_phenomenon()
bulletin = vigi.get_bulletin()
vigi.get_vignette()
```
<img src="docs/pages/assets/img/png/vignette_exemple.png" width="600" height="300" alt="vignette de vigilance">
For more documentation from Météo-France on the vigilance bulletin, see:
- [Meteo France Documentation](https://donneespubliques.meteofrance.fr/?fond=produit&id_produit=305&id_rubrique=50)
## Contributing
Contributions are *very* welcome!
If you see an issue that you'd like to see fixed, the best way to make it happen is to help out by submitting a pull request implementing it.
Refer to the [CONTRIBUTING.md](./CONTRIBUTING.md) file for more details about the workflow,
and general hints on how to prepare your pull request. You can also ask for clarifications or guidance in GitHub issues directly.
## License
This project is Open Source and available under the Apache 2 License.
## 🙏 Acknowledgements
The development of Meteole was inspired by the excellent work in the [meteofranceapi](https://github.com/antoinetavant/meteofranceapi) repository by Antoine Tavant.
| text/markdown | ThomasBouche, GratienDSX | null | null | null | Apache-2.0 | null | [
"Development Status :: 4 - Beta",
"Operating System :: OS Independent",
"Intended Audience :: Developers",
"Intended Audience :: Information Technology",
"Topic :: Software Development :: Libraries :: Python Modules",
"Programming Language :: Python",
"Programming Language :: Python :: 3",
"Programmin... | [] | null | null | >3.8.0 | [] | [] | [] | [
"pandas>=2.0.0",
"eccodes>=2.39.0",
"cfgrib>=0.9.10.4",
"requests>=2.31.0",
"xarray>=2024.5.0",
"xmltodict>=0.13.0",
"matplotlib>=3.8.4",
"pytest; extra == \"test\"",
"coverage; extra == \"test\"",
"tox; extra == \"test\"",
"mkdocs-material; extra == \"doc\"",
"mkdocstrings[python]; extra == \... | [] | [] | [] | [
"Homepage, https://maif.github.io/meteole/",
"Documentation, https://maif.github.io/meteole/home",
"Repository, https://github.com/MAIF/meteole"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-19T14:00:06.601574 | meteole-0.2.6.tar.gz | 40,024 | 6b/04/dd3080c950a68b1d07139dd4446e3d54158ca3c4cf53668b3769373c2c0a/meteole-0.2.6.tar.gz | source | sdist | null | false | e936ce0d610b1d9b89d7b3877940610d | fb6b5f71b226ef9d9eeaac2cc4523d5df9396c3577c20414376c5d211c8ce81c | 6b04dd3080c950a68b1d07139dd4446e3d54158ca3c4cf53668b3769373c2c0a | null | [
"LICENCE",
"NOTICE"
] | 221 |
2.1 | esrp_release_test | 26.219.134425 | A test package for ESRP Release BVTs | # esrp-release-test-pypi
A test package for ESRP Release BVTs. This will be used for MCP Server publishing as well.
## Define MCP Server Metadata
mcp-name: com.microsoft.esrp/esrp-oss-mcp-test
| text/markdown | null | ESRP <esrpreldri@microsoft.com> | null | null | MIT | null | [
"Development Status :: 3 - Alpha",
"Intended Audience :: Developers",
"License :: OSI Approved :: MIT License",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.8",
"Programming Language :: Python :: 3.9",
"Programming Language :: Python :: 3.10",
"Programming Language :: Py... | [] | null | null | >=3.8 | [] | [] | [] | [] | [] | [] | [] | [
"Homepage, https://github.com/example/esrp-release-test-pypi"
] | RestSharp/106.13.0.0 | 2026-02-19T14:00:03.187207 | esrp_release_test-26.219.134425-py3-none-any.whl | 1,818 | 01/5d/23583472602e5c3a18fea569dd4ce36dd065124c59ee94efe727bcd43e37/esrp_release_test-26.219.134425-py3-none-any.whl | py3 | bdist_wheel | null | false | 967f488ef38e336ab3891b294313b1ce | 361b075b36afc89fa0cbbfe6d94c84c0822ebc79fc3a10e0fd416f3cb38df170 | 015d23583472602e5c3a18fea569dd4ce36dd065124c59ee94efe727bcd43e37 | null | [] | 0 |
2.4 | deadend_cli | 0.1.1 | AI agent CLI for security research | # Deadend CLI
> [!WARNING]
> **Active Development**: This project is undergoing active development. Current features are functional but the interface and workflows are being improved based on new architecture and features.
**Autonomous pentesting agent using feedback-driven iteration**
Achieves ~78% on XBOW benchmarks with fully local execution and model-agnostic architecture.
📄 [Read Technical Deep Dive](https://xoxruns.medium.com/feedback-driven-iteration-and-fully-local-webapp-pentesting-ai-agent-achieving-78-on-xbow-199ef719bf01) | 📊 [Benchmark Results](https://github.com/xoxruns/deadend-cli/tree/main/benchmarks-results/xbow)
---
## What is Deadend CLI?
Deadend CLI is an autonomous web application penetration testing agent that uses feedback-driven iteration to adapt exploitation strategies. When standard tools fail, it generates custom Python payloads, observes responses, and iteratively refines its approach until breakthrough.
**Key features:**
- Fully local execution (no cloud dependencies, zero data exfiltration)
- Model-agnostic design (works with any deployable LLM)
- Custom sandboxed tools (Playwright, Docker, WebAssembly)
- ADaPT-based architecture with supervisor-subagent hierarchy
- Confidence-based decision making (fail <20%, expand 20-60%, refine 60-80%, validate >80%)
**Benchmark results:** 78% on XBOW validation suite (76/98 challenges), including blind SQL injection exploits where other agents achieved 0%.
[Read the architecture breakdown in our technical article →](https://xoxruns.medium.com/feedback-driven-iteration-and-fully-local-webapp-pentesting-ai-agent-achieving-78-on-xbow-199ef719bf01)
---
## Core Analysis Capabilities
The framework focuses on **intelligent security analysis** through:
- **🔍 Taint Analysis**: Automated tracking of data flow from sources to sinks
- **🎯 Source/Sink Detection**: Intelligent identification of entry points and vulnerable functions
- **🔗 Contextual Tool Integration**: Smart connection to specialized tools for testing complex logic patterns
- **🧠 AI-Driven Reasoning**: Context-aware analysis that mimics expert security thinking
---
## 🔧 Custom Pentesting Tools
- **Webapp-Specific Tooling**: Custom tools designed specifically for web application penetration testing
- **Authentication Handling**: Built-in support for session management, cookies, and auth flows
- **Fine-Grained Testing**: Precise control over individual requests and parameters
- **Payload Generation**: AI-powered payload creation tailored to target context
- **Automated Payload Testing**: Generate, inject, and validate payloads in a single workflow
---
## Quick Start
### Prerequisites
- Docker (required)
- Python 3.11+
- uv >= 0.5.30
- Playwright: `playwright install`
### Installation
```bash
# Install via pipx (recommended)
pipx install deadend_cli
# Or build from source
git clone https://github.com/xoxruns/deadend-cli.git
cd deadend-cli
uv sync && uv build
```
### First Run
```bash
# Initialize configuration
deadend-cli init
# Start testing
deadend-cli chat \
--target "http://localhost:3000" \
--prompt "find SQL injection vulnerabilities"
```
---
## Usage Examples
### Basic Vulnerability Testing
```bash
# Test OWASP Juice Shop
docker run -p 3000:3000 bkimminich/juice-shop
deadend-cli chat \
--target "http://localhost:3000" \
--prompt "test the login endpoint for SQL injection"
```
### API Security Testing
```bash
deadend-cli chat \
--target "https://api.example.com" \
--prompt "test authentication endpoints"
```
### Autonomous Mode
```bash
# Run without approval prompts (CTFs/labs only)
deadend-cli chat \
--target "http://ctf.example.com" \
--mode yolo \
--prompt "find and exploit all vulnerabilities"
```
---
## Commands
### `deadend-cli init`
Initialize configuration and set up pgvector database
### `deadend-cli chat`
Start interactive security testing session
- `--target`: Target URL
- `--prompt`: Initial testing prompt
- `--mode`: `hacker` (approval required) or `yolo` (autonomous)
### `deadend-cli eval-agent`
Run evaluation against challenge datasets
- `--eval-metadata-file`: Challenge dataset file
- `--llm-providers`: AI model providers to test
- `--guided`: Run with subtask decomposition
### `deadend-cli version`
Display current version
---
## Architecture Summary
The agent uses a two-phase approach (reconnaissance → exploitation) with a supervisor-subagent hierarchy:
**Supervisor**: Maintains high-level goals, delegates to specialized subagents
**Subagents**: Focused toolsets (Requester for HTTP, Shell for commands, Python for payloads)
**Policy**: Confidence scores (0-1.0) determine whether to fail, expand, refine, or validate
**Key innovation:** When standard tools fail, the agent generates custom exploitation scripts and iterates based on observed feedback—solving challenges like blind SQL injection where static toolchains achieve 0%.
[Read full architecture details →](https://xoxruns.medium.com/feedback-driven-iteration-and-fully-local-webapp-pentesting-ai-agent-achieving-78-on-xbow-199ef719bf01)
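The confidence thresholds quoted above (fail <20%, expand 20-60%, refine 60-80%, validate >80%) can be sketched as a simple policy function. This is an illustration of the decision rule only, not the project's actual code:

```python
def decide(confidence: float) -> str:
    """Map a confidence score in [0, 1] to the agent's next action.

    Thresholds follow the README: fail below 0.2, expand up to 0.6,
    refine up to 0.8, validate above that. Illustrative sketch only.
    """
    if confidence < 0.2:
        return "fail"      # abandon this line of attack
    if confidence < 0.6:
        return "expand"    # broaden the search with new subtasks
    if confidence < 0.8:
        return "refine"    # iterate on the current payload
    return "validate"      # confirm the suspected vulnerability
```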
---
## Benchmark Results
Evaluated on XBOW's 104-challenge validation suite (black-box mode, January 2026):
| Agent | Success Rate | Infrastructure | Blind SQLi |
| ------------------ | ------------ | --------------- | ---------- |
| XBOW (proprietary) | 85% | Proprietary | ? |
| Cyber-AutoAgent | 81% | AWS Bedrock | 0% |
| **Deadend CLI** | **78%** | **Fully local** | **33%** |
| MAPTA | 76.9% | External APIs | 0% |
**Models tested:** Claude Sonnet 4.5 (~78%), Kimi K2 Thinking (~69%)
Strong performance: XSS (91%), Business Logic (86%), SQL injection (83%), IDOR (80%)
Perfect scores: GraphQL, SSRF, NoSQL injection, HTTP method tampering (100%)
---
## Operating Modes
**Hacker Mode (default):** Requires approval for dangerous operations
```bash
deadend-cli chat --target URL --mode hacker
```
**YOLO Mode:** Autonomous execution (CTFs/labs only)
```bash
deadend-cli chat --target URL --mode yolo
```
---
## Technology Stack
- **LiteLLM**: Multi-provider model abstraction (OpenAI, Anthropic, Ollama)
- **Instructor**: Structured LLM outputs
- **pgvector**: Vector database for context
- **Pyodide/WebAssembly**: Python sandbox
- **Playwright**: HTTP request generation
- **Docker**: Shell command isolation
---
## Configuration
Configuration is managed via `~/.cache/deadend/config.toml`. Run `deadend-cli init` to set up your configuration interactively.
---
## Current Status & Roadmap
### Stable (v0.0.15)
✅ New architecture
✅ XBOW benchmark evaluation (78%)
✅ Custom sandboxed tools
✅ Multi-model support with liteLLM
✅ Two-phase execution (recon + exploitation)
### In Progress (v0.1.0)
🚧 **CLI Redesign** with enhanced workflows:
- Plan mode (review strategies before execution)
- Preset configuration workflows (API testing, web apps, auth bypass)
- Workflow automation (save/replay attack chains)
🚧 Context optimization (reduce redundant tool calls)
🚧 Secrets management improvements
### Future roadmap
The current architecture proves competitive autonomous pentesting (78%) is achievable without cloud dependencies. Next challenges:
- **Open-Source Models**: Achieve 75%+ with Llama/Qwen (eliminate proprietary dependencies)
- **Hybrid Testing**: Add AST analysis for white-box code inspection
- **Adversarial Robustness**: Train against WAFs, rate limiting, adaptive defenses
- **Multi-Target Orchestration**: Test interconnected systems simultaneously
- **Context Efficiency**: Better information sharing between components
Goal: Make autonomous pentesting accessible (open models), comprehensive (hybrid testing), and robust (works against real defenses).
---
## Contributing
Contributions welcome in:
- Context optimization algorithms
- Vulnerability test cases
- Open-weight model fine-tuning
- Adversarial testing scenarios
See [CONTRIBUTING.md](../CONTRIBUTING.md) for guidelines on how to contribute.
---
## Citation
```bibtex
@software{deadend_cli_2026,
author = {Yassine Bargach},
title = {Deadend CLI: Feedback-Driven Autonomous Pentesting},
year = {2026},
url = {https://github.com/xoxruns/deadend-cli}
}
```
---
## Disclaimer
**For authorized security testing only.** Unauthorized testing is illegal. Users are responsible for compliance with all applicable laws and obtaining proper authorization.
---
## Links
📄 [Architecture Deep Dive](https://xoxruns.medium.com/feedback-driven-iteration-and-fully-local-webapp-pentesting-ai-agent-achieving-78-on-xbow-199ef719bf01)
📊 [Benchmark Results](https://github.com/xoxruns/deadend-cli/tree/main/benchmarks-results/xbow)
🐛 [Report Issues](https://github.com/xoxruns/deadend-cli/issues)
⭐ [Star this repo](https://github.com/xoxruns/deadend-cli)
| text/markdown | null | Yassine Bargach <yassine@straylabs.ai> | null | null | null | null | [] | [] | null | null | >=3.10 | [] | [] | [] | [
"aiofiles>=24.1.0",
"aiohttp>=3.12.14",
"asgiref>=3.8.1",
"asyncpg>=0.30.0",
"beautifulsoup4>=4.13.4",
"cssbeautifier>=1.15.4",
"deadend-agent",
"deadend-eval",
"deadend-prompts",
"docker>=7.1.0",
"dotenv>=0.9.9",
"google-genai>=1.18.0",
"httptools>=0.6.4",
"jinja2>=3.1.6",
"jsbeautifier... | [] | [] | [] | [] | uv/0.10.4 {"installer":{"name":"uv","version":"0.10.4","subcommand":["publish"]},"python":null,"implementation":{"name":null,"version":null},"distro":{"name":"Ubuntu","version":"24.04","id":"noble","libc":null},"system":{"name":null,"release":null},"cpu":null,"openssl_version":null,"setuptools_version":null,"rustc_version":null,"ci":true} | 2026-02-19T13:59:12.446065 | deadend_cli-0.1.1-py3-none-any.whl | 315,316 | 74/c6/b6f0a1632d90669655fae5ab0ac2b0211526b3c10cd456ffffc96ba23f05/deadend_cli-0.1.1-py3-none-any.whl | py3 | bdist_wheel | null | false | 8a195cee5daffbffd757520e1524d42d | 449094b0c80d6c73fc7f27dadba94759c6a1fcbbb4e55846ced3011fe934322d | 74c6b6f0a1632d90669655fae5ab0ac2b0211526b3c10cd456ffffc96ba23f05 | null | [] | 0 |
2.4 | digitalhub-runtime-python | 0.15.0b7 | Python runtime for DHCore | # DigitalHub SDK Runtime Python
[](https://github.com/scc-digitalhub/digitalhub-sdk-runtime-python/LICENSE) 

The Digitalhub SDK Runtime Python is a runtime extension for the [Digitalhub SDK](https://github.com/scc-digitalhub/digitalhub-sdk). It enables you to create and execute Python functions on the Digitalhub platform.
Explore the full documentation at the [link](https://scc-digitalhub.github.io/sdk-docs/reference/runtimes/).
## Quick start
To install the Digitalhub SDK Runtime Python, you can use pip:
```bash
pip install digitalhub-runtime-python
```
## Development
See CONTRIBUTING for contribution instructions.
## Security Policy
The current release is the supported version. Security fixes are released together with all other fixes in each new release.
If you discover a security vulnerability in this project, please do not open a public issue.
Instead, report it privately by emailing us at digitalhub@fbk.eu. Include as much detail as possible to help us understand and address the issue quickly and responsibly.
## Contributing
To report a bug or request a feature, please first check the existing issues to avoid duplicates. If none exist, open a new issue with a clear title and a detailed description, including any steps to reproduce if it's a bug.
To contribute code, start by forking the repository. Clone your fork locally and create a new branch for your changes. Make sure your commits follow the [Conventional Commits v1.0](https://www.conventionalcommits.org/en/v1.0.0/) specification to keep history readable and consistent.
Once your changes are ready, push your branch to your fork and open a pull request against the main branch. Be sure to include a summary of what you changed and why. If your pull request addresses an issue, mention it in the description (e.g., “Closes #123”).
Please note that new contributors may be asked to sign a Contributor License Agreement (CLA) before their pull requests can be merged. This helps us ensure compliance with open source licensing standards.
We appreciate contributions and help in improving the project!
## Authors
This project is developed and maintained by **DSLab – Fondazione Bruno Kessler**, with contributions from the open source community. A complete list of contributors is available in the project’s commit history and pull requests.
For questions or inquiries, please contact: [digitalhub@fbk.eu](mailto:digitalhub@fbk.eu)
## Copyright and license
Copyright © 2025 DSLab – Fondazione Bruno Kessler and individual contributors.
This project is licensed under the Apache License, Version 2.0.
You may not use this file except in compliance with the License. Ownership of contributions remains with the original authors and is governed by the terms of the Apache 2.0 License, including the requirement to grant a license to the project.
| text/markdown | null | Fondazione Bruno Kessler <digitalhub@fbk.eu>, Matteo Martini <mmartini@fbk.eu> | null | null | Apache License
Version 2.0, January 2004
http://www.apache.org/licenses/
TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
1. Definitions.
"License" shall mean the terms and conditions for use, reproduction,
and distribution as defined by Sections 1 through 9 of this document.
"Licensor" shall mean the copyright owner or entity authorized by
the copyright owner that is granting the License.
"Legal Entity" shall mean the union of the acting entity and all
other entities that control, are controlled by, or are under common
control with that entity. For the purposes of this definition,
"control" means (i) the power, direct or indirect, to cause the
direction or management of such entity, whether by contract or
otherwise, or (ii) ownership of fifty percent (50%) or more of the
outstanding shares, or (iii) beneficial ownership of such entity.
"You" (or "Your") shall mean an individual or Legal Entity
exercising permissions granted by this License.
"Source" form shall mean the preferred form for making modifications,
including but not limited to software source code, documentation
source, and configuration files.
"Object" form shall mean any form resulting from mechanical
transformation or translation of a Source form, including but
not limited to compiled object code, generated documentation,
and conversions to other media types.
"Work" shall mean the work of authorship, whether in Source or
Object form, made available under the License, as indicated by a
copyright notice that is included in or attached to the work
(an example is provided in the Appendix below).
"Derivative Works" shall mean any work, whether in Source or Object
form, that is based on (or derived from) the Work and for which the
editorial revisions, annotations, elaborations, or other modifications
represent, as a whole, an original work of authorship. For the purposes
of this License, Derivative Works shall not include works that remain
separable from, or merely link (or bind by name) to the interfaces of,
the Work and Derivative Works thereof.
"Contribution" shall mean any work of authorship, including
the original version of the Work and any modifications or additions
to that Work or Derivative Works thereof, that is intentionally
submitted to Licensor for inclusion in the Work by the copyright owner
or by an individual or Legal Entity authorized to submit on behalf of
the copyright owner. For the purposes of this definition, "submitted"
means any form of electronic, verbal, or written communication sent
to the Licensor or its representatives, including but not limited to
communication on electronic mailing lists, source code control systems,
and issue tracking systems that are managed by, or on behalf of, the
Licensor for the purpose of discussing and improving the Work, but
excluding communication that is conspicuously marked or otherwise
designated in writing by the copyright owner as "Not a Contribution."
"Contributor" shall mean Licensor and any individual or Legal Entity
on behalf of whom a Contribution has been received by Licensor and
subsequently incorporated within the Work.
2. Grant of Copyright License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
copyright license to reproduce, prepare Derivative Works of,
publicly display, publicly perform, sublicense, and distribute the
Work and such Derivative Works in Source or Object form.
3. Grant of Patent License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
(except as stated in this section) patent license to make, have made,
use, offer to sell, sell, import, and otherwise transfer the Work,
where such license applies only to those patent claims licensable
by such Contributor that are necessarily infringed by their
Contribution(s) alone or by combination of their Contribution(s)
with the Work to which such Contribution(s) was submitted. If You
institute patent litigation against any entity (including a
cross-claim or counterclaim in a lawsuit) alleging that the Work
or a Contribution incorporated within the Work constitutes direct
or contributory patent infringement, then any patent licenses
granted to You under this License for that Work shall terminate
as of the date such litigation is filed.
4. Redistribution. You may reproduce and distribute copies of the
Work or Derivative Works thereof in any medium, with or without
modifications, and in Source or Object form, provided that You
meet the following conditions:
(a) You must give any other recipients of the Work or
Derivative Works a copy of this License; and
(b) You must cause any modified files to carry prominent notices
stating that You changed the files; and
(c) You must retain, in the Source form of any Derivative Works
that You distribute, all copyright, patent, trademark, and
attribution notices from the Source form of the Work,
excluding those notices that do not pertain to any part of
the Derivative Works; and
(d) If the Work includes a "NOTICE" text file as part of its
distribution, then any Derivative Works that You distribute must
include a readable copy of the attribution notices contained
within such NOTICE file, excluding those notices that do not
pertain to any part of the Derivative Works, in at least one
of the following places: within a NOTICE text file distributed
as part of the Derivative Works; within the Source form or
documentation, if provided along with the Derivative Works; or,
within a display generated by the Derivative Works, if and
wherever such third-party notices normally appear. The contents
of the NOTICE file are for informational purposes only and
do not modify the License. You may add Your own attribution
notices within Derivative Works that You distribute, alongside
or as an addendum to the NOTICE text from the Work, provided
that such additional attribution notices cannot be construed
as modifying the License.
You may add Your own copyright statement to Your modifications and
may provide additional or different license terms and conditions
for use, reproduction, or distribution of Your modifications, or
for any such Derivative Works as a whole, provided Your use,
reproduction, and distribution of the Work otherwise complies with
the conditions stated in this License.
5. Submission of Contributions. Unless You explicitly state otherwise,
any Contribution intentionally submitted for inclusion in the Work
by You to the Licensor shall be under the terms and conditions of
this License, without any additional terms or conditions.
Notwithstanding the above, nothing herein shall supersede or modify
the terms of any separate license agreement you may have executed
with Licensor regarding such Contributions.
6. Trademarks. This License does not grant permission to use the trade
names, trademarks, service marks, or product names of the Licensor,
except as required for reasonable and customary use in describing the
origin of the Work and reproducing the content of the NOTICE file.
7. Disclaimer of Warranty. Unless required by applicable law or
agreed to in writing, Licensor provides the Work (and each
Contributor provides its Contributions) on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
implied, including, without limitation, any warranties or conditions
of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
PARTICULAR PURPOSE. You are solely responsible for determining the
appropriateness of using or redistributing the Work and assume any
risks associated with Your exercise of permissions under this License.
8. Limitation of Liability. In no event and under no legal theory,
whether in tort (including negligence), contract, or otherwise,
unless required by applicable law (such as deliberate and grossly
negligent acts) or agreed to in writing, shall any Contributor be
liable to You for damages, including any direct, indirect, special,
incidental, or consequential damages of any character arising as a
result of this License or out of the use or inability to use the
Work (including but not limited to damages for loss of goodwill,
work stoppage, computer failure or malfunction, or any and all
other commercial damages or losses), even if such Contributor
has been advised of the possibility of such damages.
9. Accepting Warranty or Additional Liability. While redistributing
the Work or Derivative Works thereof, You may choose to offer,
and charge a fee for, acceptance of support, warranty, indemnity,
or other liability obligations and/or rights consistent with this
License. However, in accepting such obligations, You may act only
on Your own behalf and on Your sole responsibility, not on behalf
of any other Contributor, and only if You agree to indemnify,
defend, and hold each Contributor harmless for any liability
incurred by, or claims asserted against, such Contributor by reason
of your accepting any such warranty or additional liability.
END OF TERMS AND CONDITIONS
APPENDIX: How to apply the Apache License to your work.
To apply the Apache License to your work, attach the following
boilerplate notice, with the fields enclosed by brackets "{}"
replaced with your own identifying information. (Don't include
the brackets!) The text should be enclosed in the appropriate
comment syntax for the file format. We also recommend that a
file or class name and description of purpose be included on the
same "printed page" as the copyright notice for easier
identification within third-party archives.
Copyright 2025 DSLab, Fondazione Bruno Kessler
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License. | data, dataops, kubernetes | [
"License :: OSI Approved :: Apache Software License",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: 3.13"
] | [] | null | null | <3.14,>=3.10 | [] | [] | [] | [
"digitalhub[full]<0.16,>=0.15.0b",
"msgpack",
"nuclio-sdk",
"pip",
"digitalhub[dev]<0.16,>=0.15.0b; extra == \"dev\""
] | [] | [] | [] | [
"Homepage, https://github.com/scc-digitalhub/digitalhub-sdk-runtime-python"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-19T13:58:50.663232 | digitalhub_runtime_python-0.15.0b7.tar.gz | 39,993 | 79/d4/34cc4157be51b2869c342bdfd05ee22d32a99de090458e860cbe0852f2da/digitalhub_runtime_python-0.15.0b7.tar.gz | source | sdist | null | false | 3b04ffe4ab6afcd885c8067d652f3667 | a2a0cb38de590a5ba04fd8516203e01a3265825f332ed1a1b79716b2655dac08 | 79d434cc4157be51b2869c342bdfd05ee22d32a99de090458e860cbe0852f2da | null | [
"AUTHORS",
"LICENSE"
] | 229 |
2.4 | autowebx | 1.11.1 | Automation helpers: temp email, captcha solvers, proxies, Playwright humanizer, and more | AutoWebX
========
Automate common web tasks with a lightweight Python toolkit: temp emails and inbox readers, captcha solvers, phone/SMS helpers, Playwright input humanizer, proxy utilities, and auto‑saving data structures.
Features
- Temp email helpers: temp-mail.io, mail.tm, email-fake.com, inboxes.com, Remotix Mail
- Captcha solvers: 2Captcha (Recaptcha v2/v3, Turnstile, image-to-text), CapSolver (Recaptcha v2)
- Phone verification helpers: 5sim pricing and activations
- Playwright “human” input utilities (mouse and typing)
- Proxy utilities including LunaProxy builder
- Auto-saving dict/set/queue, file append/replace helpers
- Lightweight inter-process message passing
Installation
- Python 3.9+
- Install from source:
- `pip install -e .` (or `pip install .` to build a wheel)
- Optional: Playwright support requires `playwright` and a browser install: `pip install playwright` then `playwright install`
Quick Start
- Account generation
- `from autowebx.account import Account; acc = Account(); print(acc.email, acc.password)`
- Temp Mail (internal temp-mail.io)
- `from autowebx.temp_mail import Email; e = Email(); print(e.address); print(e.get_messages())`
- Mail.tm
- `from autowebx.mail_tm import MailTmAccount; a = MailTmAccount(); print(a.email, a.password); print(a.messages())`
- Inboxes.com
- `from autowebx.inboxes import Inboxes; ib = Inboxes('user@example.com'); msgs = ib.inbox(); html = ib.html(msgs[0])`
- Remotix Mail
- `from autowebx.remotix_mail import messages, domains; print(domains()); print(messages('user@remotix.app'))`
- 2Captcha (Recaptcha v2/v3)
- `from autowebx.two_captcha import TwoCaptcha, CaptchaType; token = TwoCaptcha('<api_key>', CaptchaType.recaptchaV2, 'https://site', '<site_key>').solution()`
- 2Captcha (Turnstile)
- `from autowebx.two_captcha import Turnstile; token = Turnstile('<api_key>', 'https://site', '<site_key>').solution()`
- CapSolver (Recaptcha v2)
- `from autowebx.capsolver import RecaptchaV2; token = RecaptchaV2('<api_key>', 'https://site', '<site_key>').solution()`
- 5sim pricing and activations
- `from autowebx.five_sim import FiveSim, min_cost_providers; fs = FiveSim('<api_token>'); print(fs.balance()); phone = fs.buy_activation_number('netherlands','any','other')`
- Playwright humanizer
- `from autowebx.human_wright import add_mouse_position_listener, click, fill, show_mouse`
- Use with a `playwright.sync_api.Page` for more human-like mouse movement and typing.
- Proxy helper
- `from autowebx.proxy import Proxy, LunaProxy; p = Proxy('user:pass@host:port'); requests_proxies = p.for_requests()`
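The proxy helper above accepts strings like `user:pass@host:port`. As a rough illustration of what that parsing involves, here is a minimal, self-contained sketch (this is not the actual `autowebx.proxy` implementation; `parse_proxy` is a hypothetical name):

```python
def parse_proxy(raw: str) -> dict:
    """Parse 'user:pass@host:port' (auth optional) into a
    requests-style proxies mapping. Illustrative sketch only."""
    if "@" in raw:
        auth, addr = raw.rsplit("@", 1)
        url = f"http://{auth}@{addr}"
    else:
        url = f"http://{raw}"
    # requests expects one entry per scheme
    return {"http": url, "https": url}

proxies = parse_proxy("user:pass@10.0.0.1:8080")
```

The resulting dict can be passed directly as the `proxies=` argument to `requests` calls.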
CLI: HTTP -> Requests boilerplate
- The `functioner` console script converts a raw HTTP request file into a Python method using `requests`.
- Usage: `functioner path\to\request.txt` → writes `function.py` with a ready‑to‑paste method.
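To give a sense of what such a conversion involves, here is a simplified, self-contained sketch of turning a raw HTTP request dump into `requests` boilerplate (illustrative only; the real `functioner` script handles more cases):

```python
def http_to_requests(raw: str) -> str:
    """Turn a raw HTTP request dump into a requests call snippet.
    Simplified sketch: handles the request line and headers only."""
    lines = raw.strip().splitlines()
    method, path, _ = lines[0].split(" ", 2)
    headers = {}
    for line in lines[1:]:
        if not line.strip():
            break  # blank line separates headers from body
        key, _, value = line.partition(":")
        headers[key.strip()] = value.strip()
    host = headers.get("Host", "example.com")
    return (
        f"requests.request({method.lower()!r}, "
        f"'https://{host}{path}', headers={headers!r})"
    )

raw = "GET /status HTTP/1.1\nHost: api.example.com\nAccept: application/json\n"
print(http_to_requests(raw))
```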
Modules Overview
- `autowebx.account` – `Account`, password/username/US phone/address generators
- `autowebx.temp_mail` – create temp-mail.io inbox; `domains()` helper with caching
- `autowebx.mail_tm` – `MailTmAccount` with JWT token management
- `autowebx.email_fake` – read/delete messages from email-fake.com
- `autowebx.inboxes` – poll inboxes.com and fetch message HTML
- `autowebx.remotix_mail` – fetch messages/domains from Remotix Mail
- `autowebx.two_captcha` – 2Captcha wrappers (Recaptcha v2/v3, Turnstile, ImageToText)
- `autowebx.capsolver` – CapSolver Recaptcha v2 wrapper
- `autowebx.five_sim` – 5sim pricing utilities and activation API
- `autowebx.human_wright` – Playwright helpers for human‑like mouse/typing
- `autowebx.proxy` – Parse proxies, build `requests`/Playwright configs, `LunaProxy`
- `autowebx.files` – thread‑safe append, replace, reactive File buffer
- `autowebx.auto_save_dict|set|queue` – persistent containers that save on mutation
- `autowebx.communications` – simple localhost message send/receive primitives
- `autowebx.panels` – SMS panel readers (Premiumy, Sniper, PSCall, Ziva) + `ReportReader`
- `autowebx.remotix` – remote usage logging/metrics (Run)
Notes & Best Practices
- Network usage: Several modules do network I/O on method calls. Avoid calling them inside tight loops without backoff.
- Optional deps: Playwright usage requires installing Playwright and browsers separately.
- Secrets: Do not hardcode API keys/tokens. Use environment variables or config files.
- Error handling: Helpers raise `TimeoutError`, `ConnectionError`, or library‑specific errors (e.g., `CaptchaError`). Catch and retry as needed.
- Thread safety: `autowebx.files.add` and `sync_print` append safely. Persistent containers save on each mutation.
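The save-on-mutation behavior described above can be illustrated with a minimal, self-contained sketch (conceptual only; the actual `autowebx` containers differ in details):

```python
import json
import os
import tempfile

class AutoSaveDict(dict):
    """Dict that rewrites its backing JSON file on every mutation.
    Conceptual sketch of the save-on-mutation pattern."""

    def __init__(self, path, *args, **kwargs):
        self._path = path
        super().__init__(*args, **kwargs)
        if os.path.exists(path):
            with open(path) as f:
                super().update(json.load(f))  # load without re-saving

    def _save(self):
        with open(self._path, "w") as f:
            json.dump(self, f)

    def __setitem__(self, key, value):
        super().__setitem__(key, value)
        self._save()

    def __delitem__(self, key):
        super().__delitem__(key)
        self._save()

path = os.path.join(tempfile.mkdtemp(), "state.json")
d = AutoSaveDict(path)
d["token"] = "abc"  # persisted to disk immediately
```

A fresh `AutoSaveDict(path)` created later would see `token` already populated from the file.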
Development
- Run lint/tests as appropriate for your environment.
- Update `CHANGELOG.md` when publishing.
- Packaging entry point: `functioner` maps to `autowebx.__init__:__get_function__`.
License
- See repository terms. This project includes code snippets that contact third‑party services; check their terms of service before use.
| text/markdown | null | null | null | null | null | null | [] | [] | null | null | >=3.9 | [] | [] | [] | [
"requests>=2.31.0",
"beautifulsoup4>=4.12.2",
"names>=0.3.0",
"phonenumbers>=8.13.0",
"colorama>=0.4.6",
"art>=6.5",
"multipledispatch>=1.0.0",
"ntplib>=0.4.0"
] | [] | [] | [] | [] | twine/6.1.0 CPython/3.13.2 | 2026-02-19T13:58:49.015673 | autowebx-1.11.1.tar.gz | 32,144 | fd/39/eaf6e862556f58be1c8218ec271929578e4df165e7c6125afa0e85ba4281/autowebx-1.11.1.tar.gz | source | sdist | null | false | a157c04e31518a615c70f29c79284031 | e619e3eedb5c549e5fd2cb8e116cb62dcc6c518ebcf711a1e516d325735dd9ae | fd39eaf6e862556f58be1c8218ec271929578e4df165e7c6125afa0e85ba4281 | null | [] | 234 |
2.4 | lambdalib | 0.10.3 | Standardized ASIC design libraries | # Lambdalib
**A Modular Hardware Abstraction Library for Portable ASIC Design**
[](https://github.com/siliconcompiler/lambdalib/actions/workflows/ci.yml)
[](https://github.com/siliconcompiler/lambdalib/actions/workflows/wheels.yml)
[](https://pypi.org/project/lambdalib/)
[](https://pepy.tech/project/lambdalib)
[](https://github.com/siliconcompiler/lambdalib/stargazers)
[](https://github.com/siliconcompiler/lambdalib/issues)
[](LICENSE)
<!-- FIGURE PLACEHOLDER: Add a hero banner image showing the abstraction concept

-->
## Why Lambdalib?
Lambdalib is a modular hardware abstraction library that decouples design from the manufacturing target. The project was inspired by the Lambda concept introduced by **Mead and Conway during the 1978 VLSI revolution**. Unfortunately, the elegant single-value Lambda approach no longer applies to modern CMOS manufacturing. Lambdalib solves the technology-porting problem by raising the abstraction level to the cell/block level.
### The Problem
<!-- FIGURE PLACEHOLDER: Add diagram showing traditional design flow with technology coupling

-->
- Synchronizers, clock gating cells, and I/O pads are technology-specific
- Memory compilers generate different interfaces per foundry
- Analog blocks require complete redesign for each process
- Design teams waste months re-implementing the same functionality
### The Solution
<!-- FIGURE PLACEHOLDER: Add diagram showing Lambdalib abstraction layer

-->
Lambdalib provides **technology-agnostic interfaces** for all cells that can't be expressed in pure RTL:
- Write your design once using Lambdalib cells
- Target any supported technology through [Lambdapdk](https://github.com/siliconcompiler/lambdapdk)
- Automatic porting between process nodes
- Proven in multiple production tapeouts
## Key Features
<!-- FIGURE PLACEHOLDER: Add feature icons/graphics

-->
| Feature | Benefit |
|---------|---------|
| **Technology Independence** | Write once, fabricate anywhere |
| **Complete Cell Library** | 160+ cells covering all common needs |
| **SiliconCompiler Integration** | Seamless ASIC build flow |
| **Parameterizable Cells** | Flexible width, depth, and configuration |
| **Production Proven** | Used in real tapeouts |
| **Open Source** | MIT licensed, free to use and modify |
## Architecture
<!-- FIGURE PLACEHOLDER: Add architecture diagram showing library hierarchy

-->
Lambdalib is organized into specialized sub-libraries, each addressing a specific design domain:
```
lambdalib/
├── stdlib/ # Standard digital cells (97 cells)
├── auxlib/ # Special-purpose cells (22 cells)
├── ramlib/ # Memory modules (6 modules)
├── iolib/ # I/O pad cells (16 cells)
├── padring/ # Padring generator (3 modules)
├── veclib/ # Vectorized datapath (15 cells)
├── fpgalib/ # FPGA primitives (3 cells)
└── analoglib/ # Analog circuits (2 modules)
```
## Integration with Lambdapdk
<!-- FIGURE PLACEHOLDER: Add PDK integration flow diagram

-->
Lambdalib cells are implemented for the following open source technologies through [Lambdapdk](https://github.com/siliconcompiler/lambdapdk):
- ASAP7
- FreePDK45
- IHP130
- Skywater130
- GF180MCU
## Design Methodology
1. **One Verilog module per RTL file** - Clean separation of concerns
2. **One Python class per cell** - Encapsulates Verilog and metadata
3. **Consistent naming** - RTL uses `la_` prefix, Python removes it and capitalizes
4. **Technology abstraction** - Generic interfaces, technology-specific implementations
5. **SiliconCompiler native** - All cells are Design subclasses
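Point 3 above means the mapping between an RTL module name and its Python class name is purely mechanical. A sketch of the convention (not code from the library; `rtl_to_class` is an illustrative name):

```python
def rtl_to_class(module_name: str) -> str:
    """Map a Lambdalib RTL module name to its Python class name
    by stripping the 'la_' prefix and capitalizing the remainder."""
    assert module_name.startswith("la_")
    return module_name[len("la_"):].capitalize()

print(rtl_to_class("la_dffq"))   # -> Dffq
print(rtl_to_class("la_dpram"))  # -> Dpram
```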
## Quick Start
### Installation
```bash
git clone https://github.com/siliconcompiler/lambdalib
cd lambdalib
pip install -e .
```
### Creating a Design with Lambdalib Cells
Lambdalib cells are SiliconCompiler `Design` subclasses. Create your own design and add Lambdalib cells as dependencies:
```python
from siliconcompiler import Design
from lambdalib.padring import Padring
from lambdalib.stdlib import Inv, Dffq
from lambdalib.ramlib import Dpram
class MyChip(Design):
def __init__(self):
super().__init__('mychip')
# Set up your design files
self.set_topmodule('mychip', 'rtl')
self.add_file('rtl/mychip.v', 'rtl')
# Add Lambdalib cells as dependencies
self.add_depfileset(Padring(), depfileset='rtl', fileset='rtl')
self.add_depfileset(Inv(), depfileset='rtl', fileset='rtl')
self.add_depfileset(Dffq(), depfileset='rtl', fileset='rtl')
self.add_depfileset(Dpram(), depfileset='rtl', fileset='rtl')
```
### Instantiating Cells in Verilog
In your Verilog RTL, instantiate Lambdalib cells using the `la_` prefix:
```verilog
module mychip (
input clk,
input d,
output q
);
// Instantiate a D flip-flop
la_dffq u_dff (
.clk(clk),
.d(d),
.q(q)
);
// Instantiate a dual-port RAM
la_dpram #(.DW(32), .AW(10)) u_ram (
.clk(clk),
// ... port connections
);
endmodule
```
### Running Synthesis
Generate a fileset and run synthesis with Yosys (or, ideally, use the free and open source SiliconCompiler, but you do you ;-)).
```python
if __name__ == "__main__":
chip = MyChip()
chip.write_fileset("mychip.f", fileset="rtl")
# Run: yosys -f mychip.f
```
## Cell Library Reference
### stdlib - Standard Cells (97 cells)
The foundation of digital logic design with optimized implementations for each target technology.
<!-- FIGURE PLACEHOLDER: Add standard cell examples/symbols

-->
#### Logic Gates
| Cell | Description | Cell | Description |
|------|-------------|------|-------------|
| [`And2`](lambdalib/stdlib/la_and2/rtl/la_and2.v) | 2-input AND | [`Nand2`](lambdalib/stdlib/la_nand2/rtl/la_nand2.v) | 2-input NAND |
| [`And3`](lambdalib/stdlib/la_and3/rtl/la_and3.v) | 3-input AND | [`Nand3`](lambdalib/stdlib/la_nand3/rtl/la_nand3.v) | 3-input NAND |
| [`And4`](lambdalib/stdlib/la_and4/rtl/la_and4.v) | 4-input AND | [`Nand4`](lambdalib/stdlib/la_nand4/rtl/la_nand4.v) | 4-input NAND |
| [`Or2`](lambdalib/stdlib/la_or2/rtl/la_or2.v) | 2-input OR | [`Nor2`](lambdalib/stdlib/la_nor2/rtl/la_nor2.v) | 2-input NOR |
| [`Or3`](lambdalib/stdlib/la_or3/rtl/la_or3.v) | 3-input OR | [`Nor3`](lambdalib/stdlib/la_nor3/rtl/la_nor3.v) | 3-input NOR |
| [`Or4`](lambdalib/stdlib/la_or4/rtl/la_or4.v) | 4-input OR | [`Nor4`](lambdalib/stdlib/la_nor4/rtl/la_nor4.v) | 4-input NOR |
| [`Xor2`](lambdalib/stdlib/la_xor2/rtl/la_xor2.v) | 2-input XOR | [`Xnor2`](lambdalib/stdlib/la_xnor2/rtl/la_xnor2.v) | 2-input XNOR |
| [`Xor3`](lambdalib/stdlib/la_xor3/rtl/la_xor3.v) | 3-input XOR | [`Xnor3`](lambdalib/stdlib/la_xnor3/rtl/la_xnor3.v) | 3-input XNOR |
| [`Xor4`](lambdalib/stdlib/la_xor4/rtl/la_xor4.v) | 4-input XOR | [`Xnor4`](lambdalib/stdlib/la_xnor4/rtl/la_xnor4.v) | 4-input XNOR |
#### Buffers and Inverters
| Cell | Description |
|------|-------------|
| [`Buf`](lambdalib/stdlib/la_buf/rtl/la_buf.v) | Non-inverting buffer |
| [`Inv`](lambdalib/stdlib/la_inv/rtl/la_inv.v) | Inverter |
| [`Delay`](lambdalib/stdlib/la_delay/rtl/la_delay.v) | Delay element |
#### Complex Logic (AOI/OAI)
| Cell | Description | Cell | Description |
|------|-------------|------|-------------|
| [`Ao21`](lambdalib/stdlib/la_ao21/rtl/la_ao21.v) | AND-OR (2-1) | [`Aoi21`](lambdalib/stdlib/la_aoi21/rtl/la_aoi21.v) | AND-OR-Invert (2-1) |
| [`Ao211`](lambdalib/stdlib/la_ao211/rtl/la_ao211.v) | AND-OR (2-1-1) | [`Aoi211`](lambdalib/stdlib/la_aoi211/rtl/la_aoi211.v) | AND-OR-Invert (2-1-1) |
| [`Ao22`](lambdalib/stdlib/la_ao22/rtl/la_ao22.v) | AND-OR (2-2) | [`Aoi22`](lambdalib/stdlib/la_aoi22/rtl/la_aoi22.v) | AND-OR-Invert (2-2) |
| [`Ao221`](lambdalib/stdlib/la_ao221/rtl/la_ao221.v) | AND-OR (2-2-1) | [`Aoi221`](lambdalib/stdlib/la_aoi221/rtl/la_aoi221.v) | AND-OR-Invert (2-2-1) |
| [`Ao222`](lambdalib/stdlib/la_ao222/rtl/la_ao222.v) | AND-OR (2-2-2) | [`Aoi222`](lambdalib/stdlib/la_aoi222/rtl/la_aoi222.v) | AND-OR-Invert (2-2-2) |
| [`Ao31`](lambdalib/stdlib/la_ao31/rtl/la_ao31.v) | AND-OR (3-1) | [`Aoi31`](lambdalib/stdlib/la_aoi31/rtl/la_aoi31.v) | AND-OR-Invert (3-1) |
| [`Ao311`](lambdalib/stdlib/la_ao311/rtl/la_ao311.v) | AND-OR (3-1-1) | [`Aoi311`](lambdalib/stdlib/la_aoi311/rtl/la_aoi311.v) | AND-OR-Invert (3-1-1) |
| [`Ao32`](lambdalib/stdlib/la_ao32/rtl/la_ao32.v) | AND-OR (3-2) | [`Aoi32`](lambdalib/stdlib/la_aoi32/rtl/la_aoi32.v) | AND-OR-Invert (3-2) |
| [`Ao33`](lambdalib/stdlib/la_ao33/rtl/la_ao33.v) | AND-OR (3-3) | [`Aoi33`](lambdalib/stdlib/la_aoi33/rtl/la_aoi33.v) | AND-OR-Invert (3-3) |
| [`Oa21`](lambdalib/stdlib/la_oa21/rtl/la_oa21.v) | OR-AND (2-1) | [`Oai21`](lambdalib/stdlib/la_oai21/rtl/la_oai21.v) | OR-AND-Invert (2-1) |
| [`Oa211`](lambdalib/stdlib/la_oa211/rtl/la_oa211.v) | OR-AND (2-1-1) | [`Oai211`](lambdalib/stdlib/la_oai211/rtl/la_oai211.v) | OR-AND-Invert (2-1-1) |
| [`Oa22`](lambdalib/stdlib/la_oa22/rtl/la_oa22.v) | OR-AND (2-2) | [`Oai22`](lambdalib/stdlib/la_oai22/rtl/la_oai22.v) | OR-AND-Invert (2-2) |
| [`Oa221`](lambdalib/stdlib/la_oa221/rtl/la_oa221.v) | OR-AND (2-2-1) | [`Oai221`](lambdalib/stdlib/la_oai221/rtl/la_oai221.v) | OR-AND-Invert (2-2-1) |
| [`Oa222`](lambdalib/stdlib/la_oa222/rtl/la_oa222.v) | OR-AND (2-2-2) | [`Oai222`](lambdalib/stdlib/la_oai222/rtl/la_oai222.v) | OR-AND-Invert (2-2-2) |
| [`Oa31`](lambdalib/stdlib/la_oa31/rtl/la_oa31.v) | OR-AND (3-1) | [`Oai31`](lambdalib/stdlib/la_oai31/rtl/la_oai31.v) | OR-AND-Invert (3-1) |
| [`Oa311`](lambdalib/stdlib/la_oa311/rtl/la_oa311.v) | OR-AND (3-1-1) | [`Oai311`](lambdalib/stdlib/la_oai311/rtl/la_oai311.v) | OR-AND-Invert (3-1-1) |
| [`Oa32`](lambdalib/stdlib/la_oa32/rtl/la_oa32.v) | OR-AND (3-2) | [`Oai32`](lambdalib/stdlib/la_oai32/rtl/la_oai32.v) | OR-AND-Invert (3-2) |
| [`Oa33`](lambdalib/stdlib/la_oa33/rtl/la_oa33.v) | OR-AND (3-3) | [`Oai33`](lambdalib/stdlib/la_oai33/rtl/la_oai33.v) | OR-AND-Invert (3-3) |
#### Multiplexers
| Cell | Description |
|------|-------------|
| [`Mux2`](lambdalib/stdlib/la_mux2/rtl/la_mux2.v) | 2:1 multiplexer |
| [`Mux3`](lambdalib/stdlib/la_mux3/rtl/la_mux3.v) | 3:1 multiplexer |
| [`Mux4`](lambdalib/stdlib/la_mux4/rtl/la_mux4.v) | 4:1 multiplexer |
| [`Muxi2`](lambdalib/stdlib/la_muxi2/rtl/la_muxi2.v) | 2:1 inverting multiplexer |
| [`Muxi3`](lambdalib/stdlib/la_muxi3/rtl/la_muxi3.v) | 3:1 inverting multiplexer |
| [`Muxi4`](lambdalib/stdlib/la_muxi4/rtl/la_muxi4.v) | 4:1 inverting multiplexer |
| [`Dmux2`](lambdalib/stdlib/la_dmux2/rtl/la_dmux2.v) | 1:2 one-hot multiplexer |
| [`Dmux3`](lambdalib/stdlib/la_dmux3/rtl/la_dmux3.v) | 1:3 one-hot multiplexer |
| [`Dmux4`](lambdalib/stdlib/la_dmux4/rtl/la_dmux4.v) | 1:4 one-hot multiplexer |
| [`Dmux5`](lambdalib/stdlib/la_dmux5/rtl/la_dmux5.v) | 1:5 one-hot multiplexer |
| [`Dmux6`](lambdalib/stdlib/la_dmux6/rtl/la_dmux6.v) | 1:6 one-hot multiplexer |
| [`Dmux7`](lambdalib/stdlib/la_dmux7/rtl/la_dmux7.v) | 1:7 one-hot multiplexer |
| [`Dmux8`](lambdalib/stdlib/la_dmux8/rtl/la_dmux8.v) | 1:8 one-hot multiplexer |
#### Flip-Flops and Latches
| Cell | Description |
|------|-------------|
| [`Dffq`](lambdalib/stdlib/la_dffq/rtl/la_dffq.v) | D flip-flop (Q output) |
| [`Dffqn`](lambdalib/stdlib/la_dffqn/rtl/la_dffqn.v) | D flip-flop (Q and QN outputs) |
| [`Dffnq`](lambdalib/stdlib/la_dffnq/rtl/la_dffnq.v) | D flip-flop (negative edge) |
| [`Dffrq`](lambdalib/stdlib/la_dffrq/rtl/la_dffrq.v) | D flip-flop with async reset |
| [`Dffrqn`](lambdalib/stdlib/la_dffrqn/rtl/la_dffrqn.v) | D flip-flop with async reset (Q and QN) |
| [`Dffsq`](lambdalib/stdlib/la_dffsq/rtl/la_dffsq.v) | D flip-flop with async set |
| [`Dffsqn`](lambdalib/stdlib/la_dffsqn/rtl/la_dffsqn.v) | D flip-flop with async set (Q and QN) |
| [`Sdffq`](lambdalib/stdlib/la_sdffq/rtl/la_sdffq.v) | Scan D flip-flop |
| [`Sdffqn`](lambdalib/stdlib/la_sdffqn/rtl/la_sdffqn.v) | Scan D flip-flop (Q and QN) |
| [`Sdffrq`](lambdalib/stdlib/la_sdffrq/rtl/la_sdffrq.v) | Scan D flip-flop with reset |
| [`Sdffrqn`](lambdalib/stdlib/la_sdffrqn/rtl/la_sdffrqn.v) | Scan D flip-flop with reset (Q and QN) |
| [`Sdffsq`](lambdalib/stdlib/la_sdffsq/rtl/la_sdffsq.v) | Scan D flip-flop with set |
| [`Sdffsqn`](lambdalib/stdlib/la_sdffsqn/rtl/la_sdffsqn.v) | Scan D flip-flop with set (Q and QN) |
| [`Latq`](lambdalib/stdlib/la_latq/rtl/la_latq.v) | Transparent latch |
| [`Latnq`](lambdalib/stdlib/la_latnq/rtl/la_latnq.v) | Transparent latch (inverted enable) |
#### Clock Tree Cells
| Cell | Description |
|------|-------------|
| [`Clkbuf`](lambdalib/stdlib/la_clkbuf/rtl/la_clkbuf.v) | Clock buffer (balanced rise/fall) |
| [`Clkinv`](lambdalib/stdlib/la_clkinv/rtl/la_clkinv.v) | Clock inverter |
| [`Clkand2`](lambdalib/stdlib/la_clkand2/rtl/la_clkand2.v) | Clock AND gate |
| [`Clknand2`](lambdalib/stdlib/la_clknand2/rtl/la_clknand2.v) | Clock NAND gate |
| [`Clkor2`](lambdalib/stdlib/la_clkor2/rtl/la_clkor2.v) | Clock OR gate (2-input) |
| [`Clkor4`](lambdalib/stdlib/la_clkor4/rtl/la_clkor4.v) | Clock OR gate (4-input) |
| [`Clknor2`](lambdalib/stdlib/la_clknor2/rtl/la_clknor2.v) | Clock NOR gate |
| [`Clkxor2`](lambdalib/stdlib/la_clkxor2/rtl/la_clkxor2.v) | Clock XOR gate |
#### Arithmetic
| Cell | Description |
|------|-------------|
| [`Csa32`](lambdalib/stdlib/la_csa32/rtl/la_csa32.v) | 3:2 carry-save adder |
| [`Csa42`](lambdalib/stdlib/la_csa42/rtl/la_csa42.v) | 4:2 carry-save adder |
#### Tie Cells
| Cell | Description |
|------|-------------|
| [`Tiehi`](lambdalib/stdlib/la_tiehi/rtl/la_tiehi.v) | Tie to VDD |
| [`Tielo`](lambdalib/stdlib/la_tielo/rtl/la_tielo.v) | Tie to VSS |
---
### auxlib - Auxiliary Cells (22 cells)
Special-purpose standard cells for clock management, synchronization, and power control.
<!-- FIGURE PLACEHOLDER: Add auxiliary cell diagrams

-->
#### Synchronizers
| Cell | Description |
|------|-------------|
| [`Dsync`](lambdalib/auxlib/la_dsync/rtl/la_dsync.v) | Double-stage synchronizer for CDC |
| [`Rsync`](lambdalib/auxlib/la_rsync/rtl/la_rsync.v) | Reset synchronizer |
| [`Drsync`](lambdalib/auxlib/la_drsync/rtl/la_drsync.v) | Double reset synchronizer |
#### Clock Management
| Cell | Description |
|------|-------------|
| [`Clkmux2`](lambdalib/auxlib/la_clkmux2/rtl/la_clkmux2.v) | Glitchless 2:1 clock multiplexer |
| [`Clkmux4`](lambdalib/auxlib/la_clkmux4/rtl/la_clkmux4.v) | Glitchless 4:1 clock multiplexer |
| [`Clkicgand`](lambdalib/auxlib/la_clkicgand/rtl/la_clkicgand.v) | Integrated clock gate (AND-based) |
| [`Clkicgor`](lambdalib/auxlib/la_clkicgor/rtl/la_clkicgor.v) | Integrated clock gate (OR-based) |
#### I/O Buffers
| Cell | Description |
|------|-------------|
| [`Ibuf`](lambdalib/auxlib/la_ibuf/rtl/la_ibuf.v) | Input buffer |
| [`Obuf`](lambdalib/auxlib/la_obuf/rtl/la_obuf.v) | Output buffer |
| [`Tbuf`](lambdalib/auxlib/la_tbuf/rtl/la_tbuf.v) | Tri-state buffer |
| [`Idiff`](lambdalib/auxlib/la_idiff/rtl/la_idiff.v) | Differential input buffer |
| [`Odiff`](lambdalib/auxlib/la_odiff/rtl/la_odiff.v) | Differential output buffer |
#### DDR Cells
| Cell | Description |
|------|-------------|
| [`Iddr`](lambdalib/auxlib/la_iddr/rtl/la_iddr.v) | Input DDR register |
| [`Oddr`](lambdalib/auxlib/la_oddr/rtl/la_oddr.v) | Output DDR register |
#### Power Management
| Cell | Description |
|------|-------------|
| [`Isohi`](lambdalib/auxlib/la_isohi/rtl/la_isohi.v) | Isolation cell (output high) |
| [`Isolo`](lambdalib/auxlib/la_isolo/rtl/la_isolo.v) | Isolation cell (output low) |
| [`Header`](lambdalib/auxlib/la_header/rtl/la_header.v) | Power header switch |
| [`Footer`](lambdalib/auxlib/la_footer/rtl/la_footer.v) | Power footer switch |
| [`Pwrbuf`](lambdalib/auxlib/la_pwrbuf/rtl/la_pwrbuf.v) | Power distribution buffer |
#### Physical Cells
| Cell | Description |
|------|-------------|
| [`Antenna`](lambdalib/auxlib/la_antenna/rtl/la_antenna.v) | Antenna diode for process protection |
| [`Decap`](lambdalib/auxlib/la_decap/rtl/la_decap.v) | Decoupling capacitor |
| [`Keeper`](lambdalib/auxlib/la_keeper/rtl/la_keeper.v) | State keeper cell |
---
### ramlib - Memory Modules (6 modules)
Parameterizable memory generators with consistent interfaces across technologies.
<!-- FIGURE PLACEHOLDER: Add memory block diagrams

-->
| Module | Description | Key Parameters |
|--------|-------------|----------------|
| [`Spram`](lambdalib/ramlib/la_spram/rtl/la_spram.v) | Single-port RAM | width, depth |
| [`Dpram`](lambdalib/ramlib/la_dpram/rtl/la_dpram.v) | Dual-port RAM | width, depth |
| [`Tdpram`](lambdalib/ramlib/la_tdpram/rtl/la_tdpram.v) | True dual-port RAM | width, depth |
| [`Spregfile`](lambdalib/ramlib/la_spregfile/rtl/la_spregfile.v) | Single-port register file | width, depth |
| [`Syncfifo`](lambdalib/ramlib/la_syncfifo/rtl/la_syncfifo.v) | Synchronous FIFO | width, depth |
| [`Asyncfifo`](lambdalib/ramlib/la_asyncfifo/rtl/la_asyncfifo.v) | Asynchronous FIFO (CDC-safe) | width, depth |
---
### iolib - I/O Cells (16 cells)
Complete I/O pad library for chip periphery.
<!-- FIGURE PLACEHOLDER: Add I/O cell cross-section diagrams

-->
#### Digital I/O
| Cell | Description |
|------|-------------|
| [`Iobidir`](lambdalib/iolib/la_iobidir/rtl/la_iobidir.v) | Bidirectional I/O pad |
| [`Ioinput`](lambdalib/iolib/la_ioinput/rtl/la_ioinput.v) | Input-only pad |
| [`Ioxtal`](lambdalib/iolib/la_ioxtal/rtl/la_ioxtal.v) | Crystal oscillator pad |
| [`Iorxdiff`](lambdalib/iolib/la_iorxdiff/rtl/la_iorxdiff.v) | Differential receiver (LVDS) |
| [`Iotxdiff`](lambdalib/iolib/la_iotxdiff/rtl/la_iotxdiff.v) | Differential transmitter (LVDS) |
#### Analog I/O
| Cell | Description |
|------|-------------|
| [`Ioanalog`](lambdalib/iolib/la_ioanalog/rtl/la_ioanalog.v) | Analog pass-through with ESD |
#### Power Pads
| Cell | Description |
|------|-------------|
| [`Iovdd`](lambdalib/iolib/la_iovdd/rtl/la_iovdd.v) | Core power (VDD) |
| [`Iovss`](lambdalib/iolib/la_iovss/rtl/la_iovss.v) | Core ground (VSS) |
| [`Iovddio`](lambdalib/iolib/la_iovddio/rtl/la_iovddio.v) | I/O power (VDDIO) |
| [`Iovssio`](lambdalib/iolib/la_iovssio/rtl/la_iovssio.v) | I/O ground (VSSIO) |
| [`Iovdda`](lambdalib/iolib/la_iovdda/rtl/la_iovdda.v) | Analog power (VDDA) |
| [`Iovssa`](lambdalib/iolib/la_iovssa/rtl/la_iovssa.v) | Analog ground (VSSA) |
| [`Iopoc`](lambdalib/iolib/la_iopoc/rtl/la_iopoc.v) | Power-on control |
| [`Iocorner`](lambdalib/iolib/la_iocorner/rtl/la_iocorner.v) | Corner cell |
| [`Ioclamp`](lambdalib/iolib/la_ioclamp/rtl/la_ioclamp.v) | ESD clamp |
| [`Iocut`](lambdalib/iolib/la_iocut/rtl/la_iocut.v) | Power ring cut |
---
### padring - Padring Generator (3 modules)
Automated padring generation with pure Verilog output.
<!-- FIGURE PLACEHOLDER: Add padring example layout

-->
| Module | Description |
|--------|-------------|
| [`Padring`](lambdalib/padring/la_padring/rtl/la_padring.v) | Main padring generator |
**Features:**
- Pure Verilog parameterizable generator
- Support for all 4 sides (North/East/South/West)
- Differential pair handling
- Power section management
- 40-bit per-cell configuration
---
### veclib - Vectorized Cells (15 cells)
Bus-width scalable cells for efficient datapath design.
<!-- FIGURE PLACEHOLDER: Add vectorized cell concept diagram

-->
#### Vectorized Logic Gates
| Cell | Description |
|------|-------------|
| [`Vbuf`](lambdalib/veclib/la_vbuf/rtl/la_vbuf.v) | Vector buffer |
| [`Vinv`](lambdalib/veclib/la_vinv/rtl/la_vinv.v) | Vector inverter |
| [`Vmux`](lambdalib/veclib/la_vmux/rtl/la_vmux.v) | General one-hot vector multiplexer |
| [`Vmux2`](lambdalib/veclib/la_vmux2/rtl/la_vmux2.v) | 2:1 vector multiplexer |
| [`Vmux2b`](lambdalib/veclib/la_vmux2b/rtl/la_vmux2b.v) | 2:1 binary vector multiplexer |
| [`Vmux3`](lambdalib/veclib/la_vmux3/rtl/la_vmux3.v) | 3:1 one-hot vector multiplexer |
| [`Vmux4`](lambdalib/veclib/la_vmux4/rtl/la_vmux4.v) | 4:1 one-hot vector multiplexer |
| [`Vmux5`](lambdalib/veclib/la_vmux5/rtl/la_vmux5.v) | 5:1 one-hot vector multiplexer |
| [`Vmux6`](lambdalib/veclib/la_vmux6/rtl/la_vmux6.v) | 6:1 one-hot vector multiplexer |
| [`Vmux7`](lambdalib/veclib/la_vmux7/rtl/la_vmux7.v) | 7:1 one-hot vector multiplexer |
| [`Vmux8`](lambdalib/veclib/la_vmux8/rtl/la_vmux8.v) | 8:1 one-hot vector multiplexer |
#### Vectorized Registers
| Cell | Description |
|------|-------------|
| [`Vdffq`](lambdalib/veclib/la_vdffq/rtl/la_vdffq.v) | Vector D flip-flop |
| [`Vdffnq`](lambdalib/veclib/la_vdffnq/rtl/la_vdffnq.v) | Vector D flip-flop (negative edge) |
| [`Vlatq`](lambdalib/veclib/la_vlatq/rtl/la_vlatq.v) | Vector latch |
| [`Vlatnq`](lambdalib/veclib/la_vlatnq/rtl/la_vlatnq.v) | Vector latch (inverted enable) |
---
### fpgalib - FPGA Primitives (3 cells)
Building blocks for FPGA architectures.
<!-- FIGURE PLACEHOLDER: Add FPGA primitive diagrams

-->
| Cell | Description |
|------|-------------|
| [`Lut4`](lambdalib/fpgalib/la_lut4/rtl/la_lut4.v) | 4-input lookup table |
| [`Ble4p0`](lambdalib/fpgalib/la_ble4p0/rtl/la_ble4p0.v) | Basic logic element |
| [`Clb4p0`](lambdalib/fpgalib/la_clb4p0/rtl/la_clb4p0.v) | Configurable logic block (4 BLEs) |
---
### analoglib - Analog Circuits (2 modules)
Analog and mixed-signal building blocks.
<!-- FIGURE PLACEHOLDER: Add analog block diagrams

-->
| Module | Description |
|--------|-------------|
| [`PLL`](lambdalib/analoglib/la_pll/rtl/la_pll.v) | Phase-locked loop |
| [`Ring`](lambdalib/analoglib/la_ring/rtl/la_ring.v) | Ring oscillator |
## Contributing
We welcome contributions! Please see our [GitHub Issues](https://github.com/siliconcompiler/lambdalib/issues) for tracking requests and bugs.
## License
MIT License
Copyright (c) 2023 Zero ASIC Corporation
Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
| text/markdown | Zero ASIC | null | null | null | MIT License
Copyright (c) 2023 Zero ASIC Corporation
Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. | null | [] | [] | null | null | >=3.9 | [] | [] | [] | [
"siliconcompiler>=0.35.0",
"Jinja2>=3.1.3",
"pytest==9.0.2; extra == \"test\"",
"pytest-xdist==3.8.0; extra == \"test\"",
"pytest-timeout==2.4.0; extra == \"test\"",
"cocotb==2.0.1; extra == \"test\"",
"cocotb-bus==0.3.0; extra == \"test\"",
"cocotb==2.0.1; extra == \"cocotb\"",
"cocotb-bus==0.3.0; ... | [] | [] | [] | [
"Homepage, https://github.com/siliconcompiler/lambdalib"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-19T13:58:47.652161 | lambdalib-0.10.3.tar.gz | 84,660 | 65/6f/7ee0ccdfd7ce39c8ecfec99898ebb680ed8ed56bca07083c2a1bd46c04a9/lambdalib-0.10.3.tar.gz | source | sdist | null | false | 844799d72f990f145d144475dbb4821e | ef6852a364813065bb331b4d4a9112bb4eb09c3b2fed4c7b2a96f6d58c92e9d8 | 656f7ee0ccdfd7ce39c8ecfec99898ebb680ed8ed56bca07083c2a1bd46c04a9 | null | [
"LICENSE"
] | 4,612 |
2.3 | fintoc | 2.17.0 | The official Python client for the Fintoc API. | <h1 align="center">Fintoc meets Python 🐍</h1>
<p align="center">
<em>
You have just found the Python-flavored client of <a href="https://fintoc.com/" target="_blank">Fintoc</a>.
</em>
</p>
<p align="center">
<a href="https://pypi.org/project/fintoc" target="_blank">
<img src="https://img.shields.io/pypi/v/fintoc?label=version&logo=python&logoColor=%23fff&color=306998" alt="PyPI - Version">
</a>
<a href="https://github.com/fintoc-com/fintoc-python/actions?query=workflow%3Atests" target="_blank">
<img src="https://img.shields.io/github/workflow/status/fintoc-com/fintoc-python/tests?label=tests&logo=python&logoColor=%23fff" alt="Tests">
</a>
<a href="https://codecov.io/gh/fintoc-com/fintoc-python" target="_blank">
<img src="https://img.shields.io/codecov/c/gh/fintoc-com/fintoc-python?label=coverage&logo=codecov&logoColor=ffffff" alt="Coverage">
</a>
<a href="https://github.com/fintoc-com/fintoc-python/actions?query=workflow%3Alinters" target="_blank">
<img src="https://img.shields.io/github/workflow/status/fintoc-com/fintoc-python/linters?label=linters&logo=github" alt="Linters">
</a>
</p>
## Table of Contents
- [Installation](#installation)
- [Usage](#usage)
- [Quickstart](#quickstart)
- [Calling endpoints](#calling-endpoints)
- [list](#list)
- [get](#get)
- [create](#create)
- [update](#update)
- [delete](#delete)
- [V2 Endpoints](#v2-endpoints)
- [Nested actions or resources](#nested-actions-or-resources)
- [Webhook Signature Validation](#webhook-signature-validation)
- [Idempotency Keys](#idempotency-keys)
- [Generate the JWS Signature](#generate-the-jws-signature)
- [Serialization](#serialization)
- [Acknowledgements](#acknowledgements)
## Installation
Install using pip!
```sh
pip install fintoc
```
**Note:** This SDK requires [**Python 3.7+**](https://docs.python.org/3/whatsnew/3.7.html).
## Usage
The idea behind this SDK is to stick to the API design as much as possible, so that it feels ridiculously natural to use even while only reading the raw API documentation.
### Quickstart
To be able to use this SDK, you first need to get your secret API Key from the [Fintoc Dashboard](https://dashboard.fintoc.com/login). Once you have your API key, all you need to do is initialize a `Fintoc` object with it and you're ready to start enjoying Fintoc!
```python
from fintoc import Fintoc
client = Fintoc("your_api_key")
# list all succeeded payment intents since the beginning of 2025
payment_intents = client.payment_intents.list(since="2025-01-01", status="succeeded")
for pi in payment_intents:
print(pi.created_at, pi.amount, pi.customer_email)
# Get a specific payment intent
payment_intent = client.payment_intents.get("pi_12345235412")
print(payment_intent.customer_email)
```
### Calling endpoints
The SDK provides direct access to Fintoc API resources following the API structure. Simply use the resource name followed by the action you want.
Notice that **not every resource has all of the methods**, as they correspond to the API capabilities.
#### `list`
You can use the `list` method to list all the instances of the resource:
```python
webhook_endpoints = client.webhook_endpoints.list()
```
The `list` method returns **a generator** with all the instances of the resource. This method can also receive the arguments that the API receives for that specific resource. For example, the `PaymentIntent` resource can be filtered using `since` and `until`, so if you wanted to get a range of `payment intents`, all you need to do is to pass the parameters to the method:
```python
payment_intents = client.payment_intents.list(since="2025-01-01", until="2025-02-01")
```
You can also pass the `lazy=False` parameter to the method to force the SDK to return a list of all the instances of the resource instead of the generator. **Beware**: this could take **very long**, depending on the amount of instances that exist of said resource:
```python
payment_intents = client.payment_intents.list(since="2025-01-01", until="2025-02-01", lazy=False)
isinstance(payment_intents, list) # True
```
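Because `list` is lazy by default, you can also stop iterating early without fetching everything. A generic sketch of that pattern using `itertools.islice` (with a dummy generator standing in for a real client call):

```python
from itertools import islice

def fake_payment_intents():
    """Stand-in for client.payment_intents.list(); yields items lazily."""
    for i in range(1_000_000):
        yield {"id": f"pi_{i}"}

# Only the first three items are produced; the generator is never exhausted.
first_three = list(islice(fake_payment_intents(), 3))
```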
#### `get`
You can use the `get` method to get a specific instance of the resource:
```python
payment_intent = client.payment_intents.get("pi_8anqVLlBC8ROodem")
```
#### `create`
You can use the `create` method to create an instance of the resource:
```python
webhook_endpoint = client.webhook_endpoints.create(
url="https://webhook.site/58gfb429-c33c-20c7-584b-d5ew3y3202a0",
enabled_events=["link.credentials_changed"],
description="Fantastic webhook endpoint",
)
```
The `create` method of the managers creates and returns a new instance of the resource. The attributes used for creating the object are passed as `kwargs`, and correspond to the parameters specified by the API documentation for the creation of said resource.
#### `update`
You can use the `update` method to update an instance of the resource:
```python
webhook_endpoint = client.webhook_endpoints.update(
"we_8anqVLlBC8ROodem",
enabled_events=["account.refresh_intent.succeeded"],
disabled=True,
)
```
The `update` method updates and returns an existing instance of the resource using its identifier to find it. The first parameter of the method corresponds to the identifier being used to find the existing instance of the resource. The attributes to be modified are passed as `kwargs`, and correspond to the parameters specified by the API documentation for the update action of said resource.
#### `delete`
You can use the `delete` method to delete an instance of the resource:
```python
deleted_identifier = client.webhook_endpoints.delete("we_8anqVLlBC8ROodem")
```
The `delete` method deletes an existing instance of the resource using its identifier to find it and returns the identifier.
#### V2 Endpoints
To call v2 API endpoints, like the [Transfers API](https://docs.fintoc.com/reference/transfers), you need to prefix the resource name with the `v2` namespace, mirroring the API structure:
```python
transfer = client.v2.transfers.create(
amount=49523,
currency="mxn",
account_id="acc_123545",
counterparty={"account_number": "014180655091438298"},
metadata={"factura": "14814"},
)
```
#### Nested actions or resources
To call nested actions, just call the method as it appears in the API. For example, to [simulate receiving a transfer for the Transfers](https://docs.fintoc.com/reference/receive-an-inbound-transfer) product, you can do:
```python
transfer = client.v2.simulate.receive_transfer(
amount=9912400,
currency="mxn",
account_number_id="acno_2vF18OHZdXXxPJTLJ5qghpo1pdU",
)
```
### Webhook Signature Validation
To ensure the authenticity of incoming webhooks from Fintoc, you should always validate the signature. The SDK provides a `WebhookSignature` class to verify the `Fintoc-Signature` header:
```python
WebhookSignature.verify_header(
payload=request.get_data().decode('utf-8'),
header=request.headers.get('Fintoc-Signature'),
secret='your_webhook_secret'
)
```
The `verify_header` method takes the following parameters:
- `payload`: The raw request body as a string
- `header`: The Fintoc-Signature header value
- `secret`: Your webhook secret key (found in your Fintoc dashboard)
- `tolerance`: (Optional) Number of seconds to tolerate when checking timestamp (default: 300)
If the signature is invalid or the timestamp is outside the tolerance window, a `WebhookSignatureError` will be raised with a descriptive message.
For a complete example of handling webhooks, see [examples/webhook.py](examples/webhook.py).
### Idempotency Keys
You can provide an [Idempotency Key](https://docs.fintoc.com/reference/idempotent-requests) using the `idempotency_key` argument. For example:
```python
transfer = client.v2.transfers.create(
idempotency_key="12345678910",
amount=49523,
currency="mxn",
account_id="acc_123545",
counterparty={"account_number": "014180655091438298"},
metadata={"factura": "14814"},
)
```
### Generate the JWS Signature
Some endpoints need a [JWS Signature](https://docs.fintoc.com/docs/setting-up-jws-keys), in addition to your API Key, to verify the integrity and authenticity of API requests. To generate the signature, initialize the Fintoc client with the `jws_private_key` argument, and the SDK will handle the rest:
```python
import os
from fintoc import Fintoc
# Provide a path to your PEM file
client = Fintoc("your_api_key", jws_private_key="private_key.pem")
# Or pass the PEM key directly as a string
client = Fintoc("your_api_key", jws_private_key=os.environ.get('JWS_PRIVATE_KEY'))
# You can now create transfers securely
```
### Serialization
Any resource of the SDK can be serialized! To get the serialized resource, just call the `serialize` method!
```python
payment_intent = client.payment_intents.list(lazy=False)[0]
serialization = payment_intent.serialize()
```
The serialization corresponds to a dictionary containing only simple types that can be JSON-serialized.
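In practice, "only simple types" means the result can go straight to `json.dumps`. A small illustration with a hypothetical serialized payment intent (the real field set depends on the resource):

```python
import json
from datetime import datetime, timezone

# Hypothetical serialize() output: only dicts, lists, strings, numbers,
# booleans and None, so no custom JSON encoder is needed.
serialized = {
    "id": "pi_8anqVLlBC8ROodem",
    "amount": 49523,
    "currency": "mxn",
    "created_at": datetime(2025, 1, 1, tzinfo=timezone.utc).isoformat(),
    "metadata": {"factura": "14814"},
}

as_json = json.dumps(serialized)
```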
## Acknowledgements
The first version of this SDK was originally designed and handcrafted by [**@nebil**](https://github.com/nebil),
[ad](https://en.wikipedia.org/wiki/Ad_honorem) [piscolem](https://en.wiktionary.org/wiki/piscola).
He built it with the help of Gianni Roberto's [Picchi 2](https://www.youtube.com/watch?v=WqjUlmkYr2g).
| text/markdown | Daniel Leal | daniel@fintoc.com | Daniel Leal | daniel@fintoc.com | BSD-3-Clause | null | [
"License :: OSI Approved :: BSD License",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.7",
"Programming Language :: Python :: 3.8",
"Programming Language :: Python :: 3.9",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming L... | [] | null | null | <4.0,>=3.7 | [] | [] | [] | [
"cryptography<44.0.0",
"httpx<1.0,>=0.16"
] | [] | [] | [] | [
"Homepage, https://fintoc.com/",
"Issue Tracker, https://github.com/fintoc-com/fintoc-python/issues",
"Repository, https://github.com/fintoc-com/fintoc-python"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-19T13:58:42.674543 | fintoc-2.17.0.tar.gz | 20,660 | 4d/f4/a91a6e6a14d7e41ae988d186fc24a60aba76522453284ec44b8419a9be04/fintoc-2.17.0.tar.gz | source | sdist | null | false | 6da3f53d8ea90b39aa31d58af96095b9 | 48c641209f8472c3d56d04fdd459a8cad1b5a29961556bac531133d08776836b | 4df4a91a6e6a14d7e41ae988d186fc24a60aba76522453284ec44b8419a9be04 | null | [] | 357 |
2.4 | digitalhub | 0.15.0b9 | Python SDK for Digitalhub | # DigitalHub SDK
[](https://github.com/scc-digitalhub/digitalhub-sdk/LICENSE) 

The Digitalhub library is a Python tool for managing projects, entities and executions in Digitalhub. It exposes CRUD methods to create, read, update and delete entities, along with tools to execute functions or workflows and to collect or store execution results and data.
Explore the full documentation at [scc-digitalhub.github.io/sdk-docs](https://scc-digitalhub.github.io/sdk-docs/).
## Quick start
To install Digitalhub, use pip:
```bash
pip install digitalhub[full]
```
To be able to create and execute functions or workflows, you need to install the runtime you want to use. The Digitalhub SDK supports multiple runtimes, each with its own installation instructions:
- [Digitalhub SDK Runtime Python](https://github.com/scc-digitalhub/digitalhub-sdk-runtime-python)
- [Digitalhub SDK Runtime Dbt](https://github.com/scc-digitalhub/digitalhub-sdk-runtime-dbt)
- [Digitalhub SDK Runtime Container](https://github.com/scc-digitalhub/digitalhub-sdk-runtime-container)
- [Digitalhub SDK Runtime Hera](https://github.com/scc-digitalhub/digitalhub-sdk-runtime-hera)
- [Digitalhub SDK Runtime Modelserve](https://github.com/scc-digitalhub/digitalhub-sdk-runtime-modelserve)
## Development
See CONTRIBUTING for contribution instructions.
## Security Policy
The current release is the supported version. Security fixes are released together with all other fixes in each new release.
If you discover a security vulnerability in this project, please do not open a public issue.
Instead, report it privately by emailing us at digitalhub@fbk.eu. Include as much detail as possible to help us understand and address the issue quickly and responsibly.
## Contributing
To report a bug or request a feature, please first check the existing issues to avoid duplicates. If none exist, open a new issue with a clear title and a detailed description, including any steps to reproduce if it's a bug.
To contribute code, start by forking the repository. Clone your fork locally and create a new branch for your changes. Make sure your commits follow the [Conventional Commits v1.0](https://www.conventionalcommits.org/en/v1.0.0/) specification to keep history readable and consistent.
Once your changes are ready, push your branch to your fork and open a pull request against the main branch. Be sure to include a summary of what you changed and why. If your pull request addresses an issue, mention it in the description (e.g., “Closes #123”).
Please note that new contributors may be asked to sign a Contributor License Agreement (CLA) before their pull requests can be merged. This helps us ensure compliance with open source licensing standards.
We appreciate contributions and help in improving the project!
## Authors
This project is developed and maintained by **DSLab – Fondazione Bruno Kessler**, with contributions from the open source community. A complete list of contributors is available in the project’s commit history and pull requests.
For questions or inquiries, please contact: [digitalhub@fbk.eu](mailto:digitalhub@fbk.eu)
## Copyright and license
Copyright © 2025 DSLab – Fondazione Bruno Kessler and individual contributors.
This project is licensed under the Apache License, Version 2.0.
You may not use this file except in compliance with the License. Ownership of contributions remains with the original authors and is governed by the terms of the Apache 2.0 License, including the requirement to grant a license to the project.
| text/markdown | null | Fondazione Bruno Kessler <digitalhub@fbk.eu>, Matteo Martini <mmartini@fbk.eu> | null | null | Apache License
Version 2.0, January 2004
http://www.apache.org/licenses/
TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
1. Definitions.
"License" shall mean the terms and conditions for use, reproduction,
and distribution as defined by Sections 1 through 9 of this document.
"Licensor" shall mean the copyright owner or entity authorized by
the copyright owner that is granting the License.
"Legal Entity" shall mean the union of the acting entity and all
other entities that control, are controlled by, or are under common
control with that entity. For the purposes of this definition,
"control" means (i) the power, direct or indirect, to cause the
direction or management of such entity, whether by contract or
otherwise, or (ii) ownership of fifty percent (50%) or more of the
outstanding shares, or (iii) beneficial ownership of such entity.
"You" (or "Your") shall mean an individual or Legal Entity
exercising permissions granted by this License.
"Source" form shall mean the preferred form for making modifications,
including but not limited to software source code, documentation
source, and configuration files.
"Object" form shall mean any form resulting from mechanical
transformation or translation of a Source form, including but
not limited to compiled object code, generated documentation,
and conversions to other media types.
"Work" shall mean the work of authorship, whether in Source or
Object form, made available under the License, as indicated by a
copyright notice that is included in or attached to the work
(an example is provided in the Appendix below).
"Derivative Works" shall mean any work, whether in Source or Object
form, that is based on (or derived from) the Work and for which the
editorial revisions, annotations, elaborations, or other modifications
represent, as a whole, an original work of authorship. For the purposes
of this License, Derivative Works shall not include works that remain
separable from, or merely link (or bind by name) to the interfaces of,
the Work and Derivative Works thereof.
"Contribution" shall mean any work of authorship, including
the original version of the Work and any modifications or additions
to that Work or Derivative Works thereof, that is intentionally
submitted to Licensor for inclusion in the Work by the copyright owner
or by an individual or Legal Entity authorized to submit on behalf of
the copyright owner. For the purposes of this definition, "submitted"
means any form of electronic, verbal, or written communication sent
to the Licensor or its representatives, including but not limited to
communication on electronic mailing lists, source code control systems,
and issue tracking systems that are managed by, or on behalf of, the
Licensor for the purpose of discussing and improving the Work, but
excluding communication that is conspicuously marked or otherwise
designated in writing by the copyright owner as "Not a Contribution."
"Contributor" shall mean Licensor and any individual or Legal Entity
on behalf of whom a Contribution has been received by Licensor and
subsequently incorporated within the Work.
2. Grant of Copyright License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
copyright license to reproduce, prepare Derivative Works of,
publicly display, publicly perform, sublicense, and distribute the
Work and such Derivative Works in Source or Object form.
3. Grant of Patent License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
(except as stated in this section) patent license to make, have made,
use, offer to sell, sell, import, and otherwise transfer the Work,
where such license applies only to those patent claims licensable
by such Contributor that are necessarily infringed by their
Contribution(s) alone or by combination of their Contribution(s)
with the Work to which such Contribution(s) was submitted. If You
institute patent litigation against any entity (including a
cross-claim or counterclaim in a lawsuit) alleging that the Work
or a Contribution incorporated within the Work constitutes direct
or contributory patent infringement, then any patent licenses
granted to You under this License for that Work shall terminate
as of the date such litigation is filed.
4. Redistribution. You may reproduce and distribute copies of the
Work or Derivative Works thereof in any medium, with or without
modifications, and in Source or Object form, provided that You
meet the following conditions:
(a) You must give any other recipients of the Work or
Derivative Works a copy of this License; and
(b) You must cause any modified files to carry prominent notices
stating that You changed the files; and
(c) You must retain, in the Source form of any Derivative Works
that You distribute, all copyright, patent, trademark, and
attribution notices from the Source form of the Work,
excluding those notices that do not pertain to any part of
the Derivative Works; and
(d) If the Work includes a "NOTICE" text file as part of its
distribution, then any Derivative Works that You distribute must
include a readable copy of the attribution notices contained
within such NOTICE file, excluding those notices that do not
pertain to any part of the Derivative Works, in at least one
of the following places: within a NOTICE text file distributed
as part of the Derivative Works; within the Source form or
documentation, if provided along with the Derivative Works; or,
within a display generated by the Derivative Works, if and
wherever such third-party notices normally appear. The contents
of the NOTICE file are for informational purposes only and
do not modify the License. You may add Your own attribution
notices within Derivative Works that You distribute, alongside
or as an addendum to the NOTICE text from the Work, provided
that such additional attribution notices cannot be construed
as modifying the License.
You may add Your own copyright statement to Your modifications and
may provide additional or different license terms and conditions
for use, reproduction, or distribution of Your modifications, or
for any such Derivative Works as a whole, provided Your use,
reproduction, and distribution of the Work otherwise complies with
the conditions stated in this License.
5. Submission of Contributions. Unless You explicitly state otherwise,
any Contribution intentionally submitted for inclusion in the Work
by You to the Licensor shall be under the terms and conditions of
this License, without any additional terms or conditions.
Notwithstanding the above, nothing herein shall supersede or modify
the terms of any separate license agreement you may have executed
with Licensor regarding such Contributions.
6. Trademarks. This License does not grant permission to use the trade
names, trademarks, service marks, or product names of the Licensor,
except as required for reasonable and customary use in describing the
origin of the Work and reproducing the content of the NOTICE file.
7. Disclaimer of Warranty. Unless required by applicable law or
agreed to in writing, Licensor provides the Work (and each
Contributor provides its Contributions) on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
implied, including, without limitation, any warranties or conditions
of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
PARTICULAR PURPOSE. You are solely responsible for determining the
appropriateness of using or redistributing the Work and assume any
risks associated with Your exercise of permissions under this License.
8. Limitation of Liability. In no event and under no legal theory,
whether in tort (including negligence), contract, or otherwise,
unless required by applicable law (such as deliberate and grossly
negligent acts) or agreed to in writing, shall any Contributor be
liable to You for damages, including any direct, indirect, special,
incidental, or consequential damages of any character arising as a
result of this License or out of the use or inability to use the
Work (including but not limited to damages for loss of goodwill,
work stoppage, computer failure or malfunction, or any and all
other commercial damages or losses), even if such Contributor
has been advised of the possibility of such damages.
9. Accepting Warranty or Additional Liability. While redistributing
the Work or Derivative Works thereof, You may choose to offer,
and charge a fee for, acceptance of support, warranty, indemnity,
or other liability obligations and/or rights consistent with this
License. However, in accepting such obligations, You may act only
on Your own behalf and on Your sole responsibility, not on behalf
of any other Contributor, and only if You agree to indemnify,
defend, and hold each Contributor harmless for any liability
incurred by, or claims asserted against, such Contributor by reason
of your accepting any such warranty or additional liability.
END OF TERMS AND CONDITIONS
APPENDIX: How to apply the Apache License to your work.
To apply the Apache License to your work, attach the following
boilerplate notice, with the fields enclosed by brackets "{}"
replaced with your own identifying information. (Don't include
the brackets!) The text should be enclosed in the appropriate
comment syntax for the file format. We also recommend that a
file or class name and description of purpose be included on the
same "printed page" as the copyright notice for easier
identification within third-party archives.
Copyright 2025 DSLab, Fondazione Bruno Kessler
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License. | data, dataops, kubernetes | [
"License :: OSI Approved :: Apache Software License",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: 3.13"
] | [] | null | null | <3.14,>=3.10 | [] | [] | [] | [
"boto3",
"gitpython>=3",
"numpy",
"psycopg2-binary",
"pyarrow",
"pydantic",
"python-slugify",
"pyyaml",
"requests",
"sqlalchemy",
"bumpver; extra == \"dev\"",
"jsonschema; extra == \"dev\"",
"pytest; extra == \"dev\"",
"pytest-cov; extra == \"dev\"",
"mlcroissant; extra == \"full\"",
"... | [] | [] | [] | [
"Homepage, https://github.com/scc-digitalhub/digitalhub-sdk"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-19T13:58:39.342646 | digitalhub-0.15.0b9.tar.gz | 132,709 | f2/f1/c2a1c746dd58c7a73242977205124fc35c55ef43000ea878f47a4cf61b1d/digitalhub-0.15.0b9.tar.gz | source | sdist | null | false | fe3a101713700ad84d8cc546a6311c10 | 6945d320c27fd7d52f77376b2ad07086b8b63421d50a26ba13ce9e3401719c3e | f2f1c2a1c746dd58c7a73242977205124fc35c55ef43000ea878f47a4cf61b1d | null | [
"AUTHORS",
"LICENSE"
] | 242 |
2.4 | vercel | 0.5.0 | Python SDK for Vercel | # Vercel Python SDK
## Installation
```bash
pip install vercel
```
## Requirements
- Python 3.10+
## Usage
This package provides both synchronous and asynchronous clients to interact with the Vercel API.
<br/>
---
### Headers and request context
```python
from typing import Callable
from fastapi import FastAPI, Request
from vercel.headers import geolocation, ip_address, set_headers
app = FastAPI()
@app.middleware("http")
async def vercel_context_middleware(request: Request, call_next: Callable):
set_headers(request.headers)
return await call_next(request)
@app.get("/api/headers")
async def headers_info(request: Request):
ip = ip_address(request.headers)
geo = geolocation(request)
return {"ip": ip, "geo": geo}
```
<br/>
---
### Runtime Cache
#### Sync
```python
from vercel.cache import get_cache
def main():
cache = get_cache(namespace="demo")
cache.delete("greeting")
cache.set("greeting", {"hello": "world"}, {"ttl": 60, "tags": ["demo"]})
value = cache.get("greeting") # dict or None
cache.expire_tag("demo") # invalidate by tag
```
#### Sync Client
```python
from vercel.cache import RuntimeCache
cache = RuntimeCache(namespace="demo")
def main():
cache.delete("greeting")
cache.set("greeting", {"hello": "world"}, {"ttl": 60, "tags": ["demo"]})
value = cache.get("greeting") # dict or None
cache.expire_tag("demo") # invalidate by tag
```
#### Async
```python
from vercel.cache.aio import get_cache
async def main():
cache = get_cache(namespace="demo")
await cache.delete("greeting")
await cache.set("greeting", {"hello": "world"}, {"ttl": 60, "tags": ["demo"]})
value = await cache.get("greeting") # dict or None
await cache.expire_tag("demo") # invalidate by tag
```
#### Async Client
```python
from vercel.cache import AsyncRuntimeCache
cache = AsyncRuntimeCache(namespace="demo")
async def main():
await cache.delete("greeting")
await cache.set("greeting", {"hello": "world"}, {"ttl": 60, "tags": ["demo"]})
value = await cache.get("greeting") # dict or None
await cache.expire_tag("demo") # invalidate by tag
```
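To make the cache semantics above concrete — entries expire after `ttl` seconds and can be invalidated in bulk by tag — here is a tiny in-memory stand-in built only on the standard library. It mirrors the method shape of the Runtime Cache but is not its implementation:

```python
import time

class TinyTTLCache:
    """In-memory sketch of the get/set/delete/expire_tag contract."""

    def __init__(self, clock=time.monotonic):
        self._clock = clock
        self._data = {}  # key -> (value, expires_at or None, tags)

    def set(self, key, value, opts=None):
        opts = opts or {}
        ttl = opts.get("ttl")
        expires = self._clock() + ttl if ttl is not None else None
        self._data[key] = (value, expires, set(opts.get("tags", ())))

    def get(self, key):
        entry = self._data.get(key)
        if entry is None:
            return None
        value, expires, _ = entry
        if expires is not None and self._clock() >= expires:
            del self._data[key]  # lazily drop expired entries
            return None
        return value

    def delete(self, key):
        self._data.pop(key, None)

    def expire_tag(self, tag):
        stale = [k for k, (_, _, tags) in self._data.items() if tag in tags]
        for key in stale:
            del self._data[key]
```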
<br/>
---
<br/>
### Vercel OIDC Tokens
```python
from typing import Callable
from fastapi import FastAPI, Request
from vercel.headers import set_headers
from vercel.oidc import decode_oidc_payload, get_vercel_oidc_token
# async
# from vercel.oidc.aio import get_vercel_oidc_token
app = FastAPI()
@app.middleware("http")
async def vercel_context_middleware(request: Request, call_next: Callable):
set_headers(request.headers)
return await call_next(request)
@app.get("/oidc")
def oidc():
token = get_vercel_oidc_token()
payload = decode_oidc_payload(token)
user_id = payload.get("user_id")
project_id = payload.get("project_id")
return {
"user_id": user_id,
"project_id": project_id,
}
```
Notes:
- When run locally, this requires a valid Vercel CLI login on the machine running the code for refresh.
- Project info is resolved from `.vercel/project.json`.
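Under the hood, an OIDC token is a JWT, whose middle segment is base64url-encoded JSON. A minimal standard-library sketch of that decoding step (no signature verification — prefer the SDK's `decode_oidc_payload` in real code):

```python
import base64
import json

def decode_jwt_payload(token: str) -> dict:
    """Decode the payload segment of a JWT without verifying the signature."""
    payload_b64 = token.split(".")[1]
    payload_b64 += "=" * (-len(payload_b64) % 4)  # restore stripped padding
    return json.loads(base64.urlsafe_b64decode(payload_b64))
```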
<br/>
---
<br/>
### Blob Storage
Requires `BLOB_READ_WRITE_TOKEN` to be set as an env var or `token` to be set when constructing a client
#### Sync
```python
from vercel.blob import BlobClient
client = BlobClient()
# or BlobClient(token="...")
# Create a folder entry, upload a local file, list, then download
client.create_folder("examples/assets", overwrite=True)
uploaded = client.upload_file(
"./README.md",
"examples/assets/readme-copy.txt",
access="public",
content_type="text/plain",
)
listing = client.list_objects(prefix="examples/assets/")
client.download_file(uploaded.url, "/tmp/readme-copy.txt", overwrite=True)
```
#### Async
```python
import asyncio
from vercel.blob import AsyncBlobClient
async def main():
client = AsyncBlobClient() # uses BLOB_READ_WRITE_TOKEN from env
# Upload bytes
uploaded = await client.put(
"examples/assets/hello.txt",
b"hello from python",
access="public",
content_type="text/plain",
)
# Inspect metadata, list, download bytes, then delete
meta = await client.head(uploaded.url)
listing = await client.list_objects(prefix="examples/assets/")
content = await client.get(uploaded.url)
await client.delete([b.url for b in listing.blobs])
asyncio.run(main())
```
#### Multipart Uploads
For large files, the SDK provides three approaches with different trade-offs:
##### 1. Automatic (Simplest)
The SDK handles everything automatically:
```python
from vercel.blob import auto_multipart_upload, auto_multipart_upload_async
# Synchronous
result = auto_multipart_upload(
"large-file.bin",
large_data, # bytes, file object, or iterator
part_size=8 * 1024 * 1024, # 8MB parts (default)
)
# Asynchronous
result = await auto_multipart_upload_async(
"large-file.bin",
large_data,
)
```
##### 2. Uploader Pattern (Recommended)
A middle-ground that provides a clean API while giving you control over parts and concurrency:
```python
from vercel.blob import BlobClient
# Create the uploader (initializes the upload)
client = BlobClient()
uploader = client.create_multipart_uploader("large-file.bin", content_type="application/octet-stream")
# Upload parts (you control when and how)
parts = []
for i, chunk in enumerate(chunks, start=1):
part = uploader.upload_part(i, chunk)
parts.append(part)
# Complete the upload
result = uploader.complete(parts)
```
Async version with concurrent uploads:
```python
import asyncio

from vercel.blob import AsyncBlobClient
client = AsyncBlobClient()
uploader = await client.create_multipart_uploader("large-file.bin")
# Upload parts concurrently
tasks = [uploader.upload_part(i, chunk) for i, chunk in enumerate(chunks, start=1)]
parts = await asyncio.gather(*tasks)
# Complete
result = await uploader.complete(parts)
```
The uploader pattern is ideal when you:
- Want to control how parts are created (e.g., stream from disk, manage memory)
- Need custom concurrency control
- Want a cleaner API than the manual approach
Notes:
- Part numbers must be in the range 1..10,000.
- `add_random_suffix` defaults to True for the uploader (matches TS SDK); manual create defaults to False.
- Abort/cancel: an abortable uploader API is not yet exposed (future enhancement).
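Independent of the SDK, the part bookkeeping itself is simple arithmetic. A small helper that cuts a payload into numbered chunks and enforces the 1..10,000 range:

```python
def iter_parts(data: bytes, part_size: int = 8 * 1024 * 1024):
    """Yield (part_number, chunk) pairs; part numbers start at 1."""
    max_parts = 10_000
    total = (len(data) + part_size - 1) // part_size  # ceiling division
    if total > max_parts:
        raise ValueError(f"{total} parts exceed the {max_parts}-part limit")
    for number in range(1, total + 1):
        start = (number - 1) * part_size
        yield number, data[start:start + part_size]
```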
##### 3. Manual (Most Control)
Full control over each step, but more verbose:
```python
from vercel.blob import (
create_multipart_upload,
upload_part,
complete_multipart_upload,
)
# Phase 1: Create
resp = create_multipart_upload("large-file.bin")
upload_id = resp["uploadId"]
key = resp["key"]
# Phase 2: Upload parts
part1 = upload_part(
"large-file.bin",
chunk1,
upload_id=upload_id,
key=key,
part_number=1,
)
part2 = upload_part(
"large-file.bin",
chunk2,
upload_id=upload_id,
key=key,
part_number=2,
)
# Phase 3: Complete
result = complete_multipart_upload(
"large-file.bin",
[part1, part2],
upload_id=upload_id,
key=key,
)
```
See `examples/multipart_uploader.py` for complete working examples.
## Development
- Lint/typecheck/tests:
```bash
uv pip install -e .[dev]
uv run ruff format --check && uv run ruff check . && uv run mypy src && uv run pytest -v
```
- CI runs lint, typecheck, examples as smoke tests, and builds wheels.
- Publishing: push a tag (`vX.Y.Z`) that matches `project.version` to publish via PyPI Trusted Publishing.
## License
MIT
| text/markdown | null | null | null | null | null | null | [] | [] | null | null | >=3.10 | [] | [] | [] | [
"httpx>=0.27.0",
"pydantic>=2.7.0",
"anyio>=4.0.0",
"python-dotenv",
"websockets>=12.0"
] | [] | [] | [] | [] | twine/6.1.0 CPython/3.13.7 | 2026-02-19T13:58:16.035927 | vercel-0.5.0.tar.gz | 60,236 | 79/50/f5abe32c1ca3f92f6bb41e1dd732d515042f30baa3e7610c1f1ed8dd1a5f/vercel-0.5.0.tar.gz | source | sdist | null | false | 15467af0b872ab8be3e50b721bfa3aa8 | 876f7c5f97eb1acb3cc2a027ec73809dae50b07e3ba4f4f3cb22a60486401839 | 7950f5abe32c1ca3f92f6bb41e1dd732d515042f30baa3e7610c1f1ed8dd1a5f | MIT | [] | 1,073 |
2.4 | wikipya | 4.1.3 | A simple async python library for search pages and images in wikis | <div align="center">
<h1>📚 wikipya</h1>
<h3>A simple async python library for search pages and images in wikis</h3>
</div><br>
## 🛠 Usage
```python
# Import wikipya
from wikipya import Wikipya
# Create Wikipya object with Wikipedia methods
wiki = Wikipya(lang="en").get_instance()
# or use another MediaWiki server (other services exist but aren't fully supported yet),
wikipya = Wikipya(url="https://ipv6.lurkmo.re/api.php", lurk=True, prefix="").get_instance()
# e.g. to use Lurkmore (Russian), simple and fast
# Get a pages list from search
search = await wiki.search("test")
# Get a pages list from opensearch
opensearch = await wiki.opensearch("test")
# Get page class
# You can pass wiki.page() a search item, a page title, or a page id
# Search items are supported ONLY from wiki.search
page = await wiki.page(search[0])
# Page title
page = await wiki.page("git")
# Pageid
page = await wiki.page(800543)
print(page.html) # Get page html
print(page.parsed) # Get HTML cleared of links and other non-formatting tags
# Get image
image = await wiki.image(page.title) # may not work on non-Wikipedia services; check that the prefix is correct, or open an issue
print(image.source) # Image url
print(image.width) # Image width
print(image.height) # Image height
```
## 🎉 Features
- Fully async
- Supports other MediaWiki instances
- Supports HTML cleaning with TgHTML
- Uses [pydantic](https://github.com/samuelcolvin/pydantic) models
## 🚀 Install
To install, run:
```
pip install wikipya
```
| text/markdown | null | Daniel Zakharov <gzdan734@gmail.com> | null | null | null | null | [] | [] | null | null | >=3.10 | [] | [] | [] | [
"aiohttp>=3.13.3",
"beautifulsoup4>=4.14.2",
"msgspec>=0.20.0",
"pydantic>=2.12.5",
"tghtml>=1.1.5"
] | [] | [] | [] | [] | twine/6.2.0 CPython/3.14.3 | 2026-02-19T13:58:10.989072 | wikipya-4.1.3-py3-none-any.whl | 18,161 | 53/12/5376492f102969527fe93894232e1a5b88206935db850fed481b28a86d82/wikipya-4.1.3-py3-none-any.whl | py3 | bdist_wheel | null | false | aa2e184ef59b1a33f8cb933964bbf8f6 | 7b513b40a668ddbde7bc61b322ec0ea3420f0176737f83be56d42d4434b0a713 | 53125376492f102969527fe93894232e1a5b88206935db850fed481b28a86d82 | MIT | [
"LICENSE"
] | 91 |
2.3 | rollfast | 0.1.2 | JAX implementation of experimental optimizers and schedulers. | # rollfast: Advanced Optimization Primitives in JAX
`rollfast` is a high-performance optimization library for JAX, designed to
implement cutting-edge optimizers that go beyond standard Euclidean gradient
descent. It provides production-ready implementations of optimizers like
**PSGD** (Preconditioned Stochastic Gradient Descent) and **PRISM** (Anisotropic
Spectral Shaping), along with a robust **Schedule-Free** wrapper.
Built on top of the [Optax](https://github.com/google-deepmind/optax) ecosystem,
`rollfast` prioritizes memory efficiency (via scanned layers and Kronecker
factorizations), multi-GPU compatibility, mixed-precision training, and
scalability for large models.
## Algorithms
### 1. PRISM (Anisotropic Spectral Shaping)
PRISM allows for structured optimization by applying anisotropic spectral
shaping to parameter updates. Unlike standard adaptive methods (Adam) that
operate element-wise, or full-matrix second-order methods (Shampoo/PSGD) that
approximate the Hessian, PRISM optimizes the singular value distribution of
weight matrices directly.
- **Mechanism**: Decomposes updates using Newton-Schulz iterations to
approximate SVD, applying "innovation" updates to the singular vectors while
damping singular values.
- **Partitioning**: Automatically partitions parameters. High-rank tensors
(Linear/Conv weights) are optimized via PRISM; vectors (biases, layernorms) are
optimized via AdamW.
- **Reference**: *PRISM: Structured Optimization via Anisotropic Spectral
Shaping* (Yang, 2026).
### 2. PSGD Kron (Lie Group Preconditioning)
PSGD reformulates preconditioner estimation as a strongly convex optimization
problem on Lie groups. It updates the preconditioner $Q$ (where $P = Q^T Q$)
using multiplicative updates that avoid explicit matrix inversion.
- **Mechanism**: Maintains a Kronecker-factored preconditioner updated via the
triangular or orthogonal group.
- **Reference**: *Stochastic Hessian Fittings with Lie Groups* (Li, 2024).
### 3. Schedule-Free Optimization
A wrapper that eliminates the need for complex learning rate schedules by
maintaining two sequences of parameters: a primary sequence $z$ (stepped via the
base optimizer) and an averaged sequence $x$ (used for evaluation).
- **Features**: Supports "Practical" and "Schedulet" weighting modes for
theoretically grounded averaging.
- **Reference**: *The Road Less Scheduled* (Defazio et al., 2024).
### 4. Magma (Momentum-Aligned Gradient Masking)
While training large language models (LLMs) typically relies almost exclusively
on dense adaptive optimizers, `rollfast` implements a stochastic masking
intervention that proves randomly masking parameter updates can be highly
effective.
- **Mechanism**: Random masking induces a curvature-dependent geometric
regularization that smooths the optimization trajectory.
- **Alignment**: Momentum-aligned gradient masking (Magma) modulates the masked
updates using momentum-gradient alignment.
- **Integration**: It acts as a simple drop-in replacement for adaptive
optimizers with consistent gains and negligible computational overhead.
______________________________________________________________________
## Installation
```bash
pip install rollfast
```
## Usage
### 1. PRISM (Standard)
PRISM automatically handles parameter partitioning. You simply provide the
learning rate and structural hyperparameters.
```python
import jax
import jax.numpy as jnp
from rollfast import prism
# Define parameters
params = {
'linear': {'w': jnp.zeros((128, 128)), 'b': jnp.zeros((128,))},
}
# Initialize PRISM
# 'w' will be optimized by PRISM (Spectral Shaping)
# 'b' will be optimized by AdamW
optimizer = prism(
learning_rate=1e-3,
ns_iters=5, # Newton-Schulz iterations for orthogonalization
gamma=1.0, # Innovation damping
weight_decay=0.01
)
opt_state = optimizer.init(params)
```
### 2. Schedule-Free PRISM
The `schedule_free_prism` function wraps the PRISM optimizer with the
Schedule-Free logic and the WSD (Warmup-Stable-Decay) scheduler for the internal
step size.
```python
from rollfast.optim import schedule_free_prism
optimizer = schedule_free_prism(
learning_rate=1.0, # Peak LR for internal steps
total_steps=10000, # Required for WSD schedule generation
warmup_fraction=0.1,
weighting_mode="schedulet",
sf_b1=0.9, # Schedule-Free interpolation (beta)
gamma=0.8, # PRISM specific arg
)
# Note: In Schedule-Free, you must compute gradients at the averaged location 'x'
# but apply updates to the state 'z'.
```
### 3. PSGD Kron
The classic Kronecker-factored PSGD optimizer.
```python
from rollfast.optim import kron
optimizer = kron(
learning_rate=1e-3,
b1=0.9,
preconditioner_lr=0.1,
preconditioner_mode='Q0.5EQ1.5', # Procrustes-regularized update
whiten_grad=True
)
```
### Advanced: Scanned Layers (Memory Efficiency)
For deep architectures (e.g., Transformers) implemented via `jax.lax.scan`,
`rollfast` supports explicit handling of scanned layers to prevent unrolling
computation graphs.
```python
import jax
from rollfast.optim import kron
# Boolean pytree mask where True indicates a scanned parameter
scanned_layers_mask = ...
optimizer = kron(
learning_rate=3e-4,
scanned_layers=scanned_layers_mask,
lax_map_scanned_layers=True, # Use lax.map for preconditioner updates
lax_map_batch_size=8
)
```
______________________________________________________________________
## Configuration
### Stability & Clipping Parameters
These parameters ensure robustness against gradient spikes and numerical
instability, critical for training at scale.
| Parameter | Default | Description |
| :---------------------------- | :------------ | :--------------------------------------------------------------------------------------------------------------------------------------------------------- |
| `raw_global_grad_clip` | `None` | If set, computes the global L2 norm of gradients *before* the optimizer step. If the norm exceeds this threshold, the update is either clipped or skipped. |
| `permissive_spike_protection` | `True` | Controls behavior when `raw_global_grad_clip` is triggered. `True` clips the gradient and proceeds; `False` strictly skips the update (zeroing the step). |
| `grad_clip_max_amps` | `(2.0, 10.0)` | Post-processing clipping. Clips individual tensors by RMS (`2.0`) and absolute value (`10.0`) to prevent heavy tails in the update distribution. |
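As a rough illustration of what `grad_clip_max_amps` does, here is a pure-Python sketch of clipping a tensor first by RMS and then by absolute value (our own reimplementation for intuition, not the library's code):

```python
import math

def clip_by_rms_and_abs(values: list[float],
                        max_rms: float = 2.0,
                        max_abs: float = 10.0) -> list[float]:
    """Rescale so the RMS is at most max_rms, then clamp each element to [-max_abs, max_abs]."""
    rms = math.sqrt(sum(v * v for v in values) / len(values))
    scale = min(1.0, max_rms / rms) if rms > 0 else 1.0
    return [max(-max_abs, min(max_abs, v * scale)) for v in values]
```

The RMS rescaling tames heavy-tailed updates as a whole, while the absolute clamp bounds individual outlier entries.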
### Schedule-Free Hyperparameters
When using `schedule_free_*` optimizers, these arguments control the underlying
WSD (Warmup-Stable-Decay) schedule and the iterate averaging.
| Parameter | Default | Description |
| :---------------- | :---------- | :---------------------------------------------------------------------------------------------------------------- |
| `warmup_fraction` | `0.1` | Fraction of `total_steps` used for linear warmup. |
| `decay_fraction` | `0.1` | Fraction of `total_steps` used for linear decay (cooldown) at the end of training. |
| `weighting_mode` | `PRACTICAL` | Strategy for $c_t$ calculation: `THEORETICAL` ($1/t$), `PRACTICAL` ($\gamma_t^2$), or `SCHEDULET` ($\gamma_t$). |
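For intuition, the WSD shape controlled by `warmup_fraction` and `decay_fraction` can be sketched in pure Python (an illustrative reimplementation under the assumption of linear ramps, not the library's code):

```python
def wsd_lr(step: int, total_steps: int, peak_lr: float = 1.0,
           warmup_fraction: float = 0.1, decay_fraction: float = 0.1) -> float:
    """Warmup-Stable-Decay: linear warmup, constant plateau, linear cooldown."""
    warmup_steps = int(total_steps * warmup_fraction)
    decay_steps = int(total_steps * decay_fraction)
    decay_start = total_steps - decay_steps
    if step < warmup_steps:
        return peak_lr * (step + 1) / warmup_steps   # linear ramp up
    if step >= decay_start:
        return peak_lr * max(0.0, (total_steps - step) / decay_steps)  # linear cooldown
    return peak_lr                                   # stable plateau
```

With `total_steps=10000` and both fractions at `0.1`, the first 1,000 steps ramp up, the last 1,000 ramp down, and the middle 8,000 run at the peak learning rate.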
### PRISM Specifics
| Parameter | Default | Description |
| :------------------- | :------ | :------------------------------------------------------------------------------------------ |
| `ns_iters` | `5` | Newton-Schulz iterations. Higher values provide better orthogonality but cost more compute. |
| `gamma` | `1.0` | Damping coefficient for the innovation term. Controls the "anisotropy" of spectral shaping. |
| `shape_nesterov` | `True` | If True, shapes Nesterov momentum; otherwise shapes raw momentum. |
| `adam_learning_rate` | `None` | Optional override for the Adam branch learning rate. Defaults to `learning_rate` if None. |
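To see why more `ns_iters` buys better orthogonality, consider the scalar form of the Newton-Schulz iteration, `x <- 1.5*x - 0.5*x**3`, which drives a value toward 1; for matrices the same odd polynomial acts on each singular value. This is an illustrative sketch, not the library's matrix implementation:

```python
def newton_schulz_scalar(x: float, iters: int = 5) -> float:
    """Iterate the Newton-Schulz polynomial; converges to 1 for 0 < x < sqrt(3).
    In the matrix case this pushes all singular values toward 1 (orthogonalization)."""
    for _ in range(iters):
        x = 1.5 * x - 0.5 * x ** 3
    return x
```

Each extra iteration roughly squares the distance to 1, so the default of 5 iterations is usually a good compute/accuracy tradeoff.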
### PSGD Specifics
| Parameter | Default | Description |
| :-------------------------- | :------ | :---------------------------------------------------------------------------------------------------------------------------------------------------------- |
| `track_lipschitz` | `True` | Enables adaptive step sizes for the preconditioner $Q$ by tracking the Lipschitz constant of the gradient. |
| `max_skew_triangular` | `1.0` | Threshold for diagonal approximation. If a dimension's aspect ratio squared exceeds this relative to total numel, it is treated as diagonal to save memory. |
| `preconditioner_init_scale` | `None` | Initial scale for $Q$. If `None`, it is estimated on the first step using gradient statistics. |
### Magma Specifics
Magma acts as an intervention layer applicable to both PRISM and PSGD optimizers
by passing `use_magma=True`.
**Architectural Warning:** Magma introduces intentional update bias (damping)
that scales down the expected update magnitude. At equilibrium, you may need to
scale your global learning rate by ~4x to maintain the original update volume
and prevent vanishing progress.
| Parameter | Default | Description |
| :---------- | :------ | :--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| `use_magma` | `False` | Enables Momentum-aligned gradient masking. Operates at the PyTree leaf level to ensure strict cryptographic PRNG independence and JAX topological isomorphism. |
| `magma_tau` | `2.0` | Temperature parameter for the alignment sigmoid $\sigma(\text{cossim} / \tau)$. At default `2.0`, non-masked steps scale updates by ~0.5, which combined with 50% Bernoulli masking yields an expected magnitude attenuation of ~0.25x. |
| `key` | `42` | Stateful PRNG seed initialized for Magma's Bernoulli sampling. `rollfast` dynamically cycles this key across shards and layers to prevent cryptographic correlation and ensure statistical independence from the base optimizer's noise injections (e.g., Procrustes). |
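The ~0.25x attenuation quoted for the default `magma_tau` follows from simple arithmetic, which we can check directly (illustrative sketch, assuming zero momentum-gradient cosine similarity and a 50% Bernoulli keep rate):

```python
import math

def expected_magma_scale(cossim: float = 0.0, tau: float = 2.0,
                         keep_prob: float = 0.5) -> float:
    """Expected update magnitude: sigmoid(cossim / tau) gating times the Bernoulli keep rate."""
    gate = 1.0 / (1.0 + math.exp(-cossim / tau))  # sigmoid(0) = 0.5 at default cossim
    return gate * keep_prob
```

At the defaults this gives 0.5 x 0.5 = 0.25, which is exactly why the architectural warning above suggests scaling the global learning rate by ~4x.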
#### Preconditioner Modes
The geometry of the preconditioner update $dQ$ is controlled via
`preconditioner_mode`.
| Mode | Formula | Description |
| :---------- | :---------------------------------- | :-------------------------------------------------------------------------------------------------------------------------------- |
| `Q0.5EQ1.5` | $dQ = Q^{0.5} \mathcal{E} Q^{1.5}$ | **Recommended**. Uses an online orthogonal Procrustes solver to keep $Q$ approximately SPD. Numerically stable for low precision. |
| `EQ` | $dQ = \mathcal{E} Q$ | The original triangular update. Requires triangular solves. Only mode compatible with triangular $Q$. |
| `QUAD` | Quadratic Form | Ensures $Q$ remains symmetric positive definite via quadratic form updates. |
| `NS` | Newton-Schulz | Iteratively projects $Q$ onto the SPD manifold using Newton-Schulz iterations. Exact but more expensive. |
| `EXP` | Matrix Exponential | Geodesic update on the SPD manifold. Uses matrix exponential. |
| `TAYLOR2` | Taylor Expansion | Second-order Taylor approximation of the matrix exponential update. |
| `HYPER` | Hyperbolic | Multiplicative hyperbolic update. |
______________________________________________________________________
## Citations
If you use `rollfast` in your research, please cite the relevant papers for the algorithms you utilize.
**PRISM:**
```bibtex
@misc{2602.03096,
Author = {Yujie Yang},
Title = {PRISM: Structured Optimization via Anisotropic Spectral Shaping},
Year = {2026},
Eprint = {arXiv:2602.03096},
}
```
**Schedule-Free:**
```bibtex
@misc{2405.15682,
Author = {Aaron Defazio and Xingyu Alice Yang and Harsh Mehta and Konstantin Mishchenko and Ahmed Khaled and Ashok Cutkosky},
Title = {The Road Less Scheduled},
Year = {2024},
Eprint = {arXiv:2405.15682},
}
@misc{2511.07767,
Author = {Yuen-Man Pun and Matthew Buchholz and Robert M. Gower},
Title = {Schedulers for Schedule-free: Theoretically inspired hyperparameters},
Year = {2025},
Eprint = {arXiv:2511.07767},
}
```
**PSGD:**
```bibtex
@article{li2024stochastic,
title={Stochastic Hessian Fittings with Lie Groups},
author={Li, Xi-Lin},
journal={arXiv preprint arXiv:2402.11858},
year={2024}
}
```
**Magma:**
```bibtex
@misc{2602.15322,
Author = {Taejong Joo and Wenhan Xia and Cheolmin Kim and Ming Zhang and Eugene Ie},
Title = {On Surprising Effectiveness of Masking Updates in Adaptive Optimizers},
Year = {2026},
Eprint = {arXiv:2602.15322},
}
```
| text/markdown | clementpoiret | clementpoiret <clement@linux.com> | null | null | MIT | jax, optax, optimizer, psgd, deep-learning, second-order-optimization, preconditioning | [
"Development Status :: 4 - Beta",
"Intended Audience :: Science/Research",
"Intended Audience :: Developers",
"License :: OSI Approved :: MIT License",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Topic :: Scientific/Engine... | [] | null | null | >=3.11 | [] | [] | [] | [
"jax>=0.6.2",
"optax>=0.2.0"
] | [] | [] | [] | [
"Homepage, https://github.com/clementpoiret/rollfast",
"Repository, https://github.com/clementpoiret/rollfast",
"Issues, https://github.com/clementpoiret/rollfast/issues"
] | uv/0.10.2 {"installer":{"name":"uv","version":"0.10.2","subcommand":["publish"]},"python":null,"implementation":{"name":null,"version":null},"distro":{"name":"NixOS","version":"26.05","id":"yarara","libc":null},"system":{"name":null,"release":null},"cpu":null,"openssl_version":null,"setuptools_version":null,"rustc_version":null,"ci":null} | 2026-02-19T13:57:24.798599 | rollfast-0.1.2-py3-none-any.whl | 45,628 | d8/40/172eb28f57ec3857a766fe04afc5f7044de2f670b13f5ece3d0de8a9c52b/rollfast-0.1.2-py3-none-any.whl | py3 | bdist_wheel | null | false | 282fdf5b0ab724a1832f31fd3a290006 | b582b98838257e2bb3614bc782566ae280598a1ca5fe6d5c462bb1fc06b66ac6 | d840172eb28f57ec3857a766fe04afc5f7044de2f670b13f5ece3d0de8a9c52b | null | [] | 213 |
2.3 | lamindb | 2.2.0 | A data framework for biology. | # LaminDB [](https://docs.lamin.ai) [](https://docs.lamin.ai/llms.txt) [](https://codecov.io/gh/laminlabs/lamindb) [](https://pypi.org/project/lamindb) [](https://cran.r-project.org/package=laminr) [](https://github.com/laminlabs/lamindb) [](https://pepy.tech/project/lamindb)
LaminDB is an open-source data framework for biology to query, trace, and validate datasets and models at scale.
You get context & memory through a lineage-native lakehouse that supports bio-formats, registries & ontologies.
<details>
<summary>Why?</summary>
(1) Reproducing, tracing & understanding how datasets, models & results are created is critical to quality R&D.
Without context, humans & agents make mistakes and cannot close feedback loops across data generation & analysis.
Without memory, compute & intelligence are wasted on fragmented, non-compounding tasks — LLM context windows are small.
(2) Training & fine-tuning models with thousands of datasets — across LIMS, ELNs, orthogonal assays — is now a primary path to scaling R&D.
But without queryable & validated data or with data locked in organizational & infrastructure siloes, it leads to garbage in, garbage out or is quite simply impossible.
Imagine building software without git or pull requests: an agent's quality would be impossible to verify.
While code has git and tables have dbt/warehouses, biological data has lacked a framework for managing its unique complexity.
LaminDB fills the gap.
It is a lineage-native lakehouse that understands bio-registries and formats (`AnnData`, `.zarr`, …) based on the established open data stack:
Postgres/SQLite for metadata and cross-platform storage for datasets.
By offering queries, tracing & validation in a single API, LaminDB provides the context & memory to turn messy, agentic biological R&D into a scalable process.
</details>
<img width="800px" src="https://lamin-site-assets.s3.amazonaws.com/.lamindb/BunYmHkyFLITlM5M000C.svg">
How?
- **lineage** → track inputs & outputs of notebooks, scripts, functions & pipelines with a single line of code
- **lakehouse** → manage, monitor & validate schemas for standard and bio formats; query across many datasets
- **FAIR datasets** → validate & annotate `DataFrame`, `AnnData`, `SpatialData`, `parquet`, `zarr`, …
- **LIMS & ELN** → programmatic experimental design with bio-registries, ontologies & markdown notes
- **unified access** → storage locations (local, S3, GCP, …), SQL databases (Postgres, SQLite) & ontologies
- **reproducible** → auto-track source code & compute environments with data & code versioning
- **change management** → branching & merging similar to git, plan management for agents
- **zero lock-in** → runs anywhere on open standards (Postgres, SQLite, `parquet`, `zarr`, etc.)
- **scalable** → you hit storage & database directly through your `pydata` or R stack, no REST API involved
- **simple** → just `pip install` from PyPI or `install.packages('laminr')` from CRAN
- **distributed** → zero-copy & lineage-aware data sharing across infrastructure (databases & storage locations)
- **integrations** → [git](https://docs.lamin.ai/track#sync-code-with-git), [nextflow](https://docs.lamin.ai/nextflow), [vitessce](https://docs.lamin.ai/vitessce), [redun](https://docs.lamin.ai/redun), and [more](https://docs.lamin.ai/integrations)
- **extensible** → create custom plug-ins based on the Django ORM, the basis for LaminDB's registries
GUI, permissions, audit logs? [LaminHub](https://lamin.ai) is a collaboration hub built on LaminDB similar to how GitHub is built on git.
<details>
<summary>Who?</summary>
Scientists and engineers at leading research institutions and biotech companies, including:
- **Industry** → Pfizer, Altos Labs, Ensocell Therapeutics, ...
- **Academia & Research** → scverse, DZNE (National Research Center for Neuro-Degenerative Diseases), Helmholtz Munich (National Research Center for Environmental Health), ...
- **Research Hospitals** → Global Immunological Swarm Learning Network: Harvard, MIT, Stanford, ETH Zürich, Charité, U Bonn, Mount Sinai, ...
From personal research projects to pharma-scale deployments managing petabytes of data across:
entities | OOMs
--- | ---
observations & datasets | 10¹² & 10⁶
runs & transforms | 10⁹ & 10⁵
proteins & genes | 10⁹ & 10⁶
biosamples & species | 10⁵ & 10²
... | ...
</details>
## Docs
Point an agent to [llms.txt](https://docs.lamin.ai/llms.txt) and let them do the work or read the [docs](https://docs.lamin.ai).
## Quickstart
Install the Python package:
```shell
pip install lamindb
```
### Query databases
You can browse public databases at [lamin.ai/explore](https://lamin.ai/explore). To query [laminlabs/cellxgene](https://lamin.ai/laminlabs/cellxgene), run:
```python
import lamindb as ln
db = ln.DB("laminlabs/cellxgene") # a database object for queries
df = db.Artifact.to_dataframe() # a dataframe listing datasets & models
```
To get a [specific dataset](https://lamin.ai/laminlabs/cellxgene/artifact/BnMwC3KZz0BuKftR), run:
```python
artifact = db.Artifact.get("BnMwC3KZz0BuKftR") # a metadata object for a dataset
artifact.describe() # describe the context of the dataset
```
<details>
<summary>See the output.</summary>
<img src="https://lamin-site-assets.s3.amazonaws.com/.lamindb/mxlUQiRLMU4Zos6k0001.png" width="550">
</details>
Access the content of the dataset via:
```python
local_path = artifact.cache() # return a local path from a cache
adata = artifact.load() # load object into memory
accessor = artifact.open() # return a streaming accessor
```
You can query by biological entities like `Disease` through plug-in `bionty`:
```python
alzheimers = db.bionty.Disease.get(name="Alzheimer disease")
df = db.Artifact.filter(diseases=alzheimers).to_dataframe()
```
### Configure your database
You can create a LaminDB instance at [lamin.ai](https://lamin.ai) and invite collaborators.
To connect to a remote instance, run:
```shell
lamin login
lamin connect account/name
```
If you prefer to work with a local SQLite database (no login required), run this instead:
```shell
lamin init --storage ./quickstart-data --modules bionty
```
On the terminal and in a Python session, LaminDB will now auto-connect.
### CLI
To save a file or folder from the command line, run:
```shell
lamin save myfile.txt --key examples/myfile.txt
```
To sync a file into a local cache (artifacts) or development directory (transforms), run:
```shell
lamin load --key examples/myfile.txt
```
Read more: [docs.lamin.ai/cli](https://docs.lamin.ai/cli).
### Lineage: scripts & notebooks
To create a dataset while tracking source code, inputs, outputs, logs, and environment:
```python
import lamindb as ln
# → connected lamindb: account/instance
ln.track() # track code execution
open("sample.fasta", "w").write(">seq1\nACGT\n") # create dataset
ln.Artifact("sample.fasta", key="sample.fasta").save() # save dataset
ln.finish() # mark run as finished
```
Running this snippet as a script (`python create-fasta.py`) produces the following data lineage:
```python
artifact = ln.Artifact.get(key="sample.fasta") # get artifact by key
artifact.describe() # context of the artifact
artifact.view_lineage() # fine-grained lineage
```
<img src="https://lamin-site-assets.s3.amazonaws.com/.lamindb/BOTCBgHDAvwglN3U0004.png" width="550"> <img src="https://lamin-site-assets.s3.amazonaws.com/.lamindb/EkQATsQL5wqC95Wj0006.png" width="140">
<details>
<summary>Access run & transform.</summary>
```python
run = artifact.run # get the run object
transform = artifact.transform # get the transform object
run.describe() # context of the run
```
<img src="https://lamin-site-assets.s3.amazonaws.com/.lamindb/rJrHr3XaITVS4wVJ0000.png" width="550" />
```python
transform.describe() # context of the transform
```
<img src="https://lamin-site-assets.s3.amazonaws.com/.lamindb/JYwmHBbgf2MRCfgL0000.png" width="550" />
</details>
<details>
<summary>15 sec video.</summary>
[15 sec video](https://lamin-site-assets.s3.amazonaws.com/.lamindb/Xdiikc2c1tPtHcvF0000.mp4)
</details>
### Lineage: functions & workflows
You can achieve the same traceability for functions & workflows:
<!-- #skip_laminr -->
```python
import lamindb as ln
@ln.flow()
def create_fasta(fasta_file: str = "sample.fasta"):
open(fasta_file, "w").write(">seq1\nACGT\n") # create dataset
ln.Artifact(fasta_file, key=fasta_file).save() # save dataset
if __name__ == "__main__":
create_fasta()
```
<!-- #end_skip_laminr -->
Beyond what you get for scripts & notebooks, this automatically tracks function & CLI params and integrates well with established Python workflow managers: [docs.lamin.ai/track](https://docs.lamin.ai/track). To integrate advanced bioinformatics pipeline managers like Nextflow, see [docs.lamin.ai/pipelines](https://docs.lamin.ai/pipelines).
<details>
<summary>A richer example.</summary>
Here is an automatically generated reconstruction of the project of [Schmidt _et al._ (Science, 2022)](https://pubmed.ncbi.nlm.nih.gov/35113687/):
<img src="https://lamin-site-assets.s3.amazonaws.com/.lamindb/KQmzmmLOeBN0C8Yk0004.png" width="850">
A phenotypic CRISPRa screening result is integrated with scRNA-seq data. Here is the result of the screen input:
<img src="https://lamin-site-assets.s3.amazonaws.com/.lamindb/JvLaK9Icj11eswQn0000.png" width="850">
You can explore it [here](https://lamin.ai/laminlabs/lamindata/artifact/W1AiST5wLrbNEyVq) on LaminHub or [here](https://github.com/laminlabs/schmidt22) on GitHub.
</details>
### Labeling & queries by fields
You can label an artifact by running:
```python
my_label = ln.ULabel(name="My label").save() # a universal label
project = ln.Project(name="My project").save() # a project label
artifact.ulabels.add(my_label)
artifact.projects.add(project)
```
Query for it:
```python
ln.Artifact.filter(ulabels=my_label, projects=project).to_dataframe()
```
You can also query by the metadata that lamindb automatically collects:
```python
ln.Artifact.filter(run=run).to_dataframe() # by creating run
ln.Artifact.filter(transform=transform).to_dataframe() # by creating transform
ln.Artifact.filter(size__gt=1e6).to_dataframe() # size greater than 1MB
```
If you want to include more information into the resulting dataframe, pass `include`.
```python
ln.Artifact.to_dataframe(include=["created_by__name", "storage__root"]) # include fields from related registries
```
Note: The query syntax for `DB` objects and for your default database is the same.
### Queries by features
You can annotate datasets and samples with features. Let's define some:
```python
from datetime import date
ln.Feature(name="gc_content", dtype=float).save()
ln.Feature(name="experiment_note", dtype=str).save()
ln.Feature(name="experiment_date", dtype=date, coerce=True).save() # accept date strings
```
During annotation, feature names and data types are validated against these definitions:
```python
artifact.features.add_values({
"gc_content": 0.55,
"experiment_note": "Looks great",
"experiment_date": "2025-10-24",
})
```
Query for it:
```python
ln.Artifact.filter(experiment_date="2025-10-24").to_dataframe() # query all artifacts annotated with `experiment_date`
```
If you want to include the feature values into the dataframe, pass `include`.
```python
ln.Artifact.to_dataframe(include="features") # include the feature annotations
```
### Lake ♾️ LIMS ♾️ Sheets
You can create records for the entities underlying your experiments: samples, perturbations, instruments, etc., for example:
```python
sample = ln.Record(name="Sample", is_type=True).save() # create entity type: Sample
ln.Record(name="P53mutant1", type=sample).save() # sample 1
ln.Record(name="P53mutant2", type=sample).save() # sample 2
```
Define features and annotate an artifact with a sample:
```python
ln.Feature(name="design_sample", dtype=sample).save()
artifact.features.add_values({"design_sample": "P53mutant1"})
```
You can query & search the `Record` registry in the same way as `Artifact` or `Run`.
```python
ln.Record.search("p53").to_dataframe()
```
<details>
<summary>You can create relationships of entities and edit them like Excel sheets on LaminHub.</summary>
<img width="800px" src="https://lamin-site-assets.s3.amazonaws.com/.lamindb/XSzhWUb0EoHOejiw0001.png">
</details>
### Data versioning
If you change source code or datasets, LaminDB manages versioning for you.
Assume you run a new version of our `create-fasta.py` script to create a new version of `sample.fasta`.
```python
import lamindb as ln
ln.track()
open("sample.fasta", "w").write(">seq1\nTGCA\n") # a new sequence
ln.Artifact("sample.fasta", key="sample.fasta", features={"design_sample": "P53mutant1"}).save() # annotate with the new sample
ln.finish()
```
If you now query by `key`, you'll get the latest version of this artifact linked to the latest version of the source code; previous versions of both artifact and source code remain easily queryable:
```python
artifact = ln.Artifact.get(key="sample.fasta") # get artifact by key
artifact.versions.to_dataframe() # see all versions of that artifact
```
### Lakehouse ♾️ feature store
Here is how you ingest a `DataFrame`:
```python
import pandas as pd
df = pd.DataFrame({
"sequence_str": ["ACGT", "TGCA"],
"gc_content": [0.55, 0.54],
"experiment_note": ["Looks great", "Ok"],
"experiment_date": [date(2025, 10, 24), date(2025, 10, 25)],
})
ln.Artifact.from_dataframe(df, key="my_datasets/sequences.parquet").save() # no validation
```
To validate & annotate the content of the dataframe, use the built-in schema `valid_features`:
```python
ln.Feature(name="sequence_str", dtype=str).save() # define a remaining feature
artifact = ln.Artifact.from_dataframe(
df,
key="my_datasets/sequences.parquet",
schema="valid_features" # validate columns against features
).save()
artifact.describe()
```
<details>
<summary>30 sec video.</summary>
[30 sec video](https://lamin-site-assets.s3.amazonaws.com/.lamindb/lJBlG7wEbNgkl2Cy0000.mp4)
</details>
You can filter for datasets by schema and then launch distributed queries and batch loading.
### Lakehouse beyond tables
To validate an `AnnData` with built-in schema `ensembl_gene_ids_and_valid_features_in_obs`, call:
```python
import anndata as ad
import numpy as np
adata = ad.AnnData(
X=pd.DataFrame([[1]*10]*21).values,
obs=pd.DataFrame({'cell_type_by_model': ['T cell', 'B cell', 'NK cell'] * 7}),
var=pd.DataFrame(index=[f'ENSG{i:011d}' for i in range(10)])
)
artifact = ln.Artifact.from_anndata(
adata,
key="my_datasets/scrna.h5ad",
schema="ensembl_gene_ids_and_valid_features_in_obs"
)
artifact.describe()
```
To validate a `spatialdata` or any other array-like dataset, you need to construct a `Schema`. You can do this by composing simple `pandera`-style schemas: [docs.lamin.ai/curate](https://docs.lamin.ai/curate).
### Ontologies
Plugin `bionty` gives you >20 public ontologies as `SQLRecord` registries. This was used to validate the `ENSG` ids in the `adata` just before.
```python
import bionty as bt
bt.CellType.import_source() # import the default ontology
bt.CellType.to_dataframe() # your extendable cell type ontology in a simple registry
```
Read more: [docs.lamin.ai/manage-ontologies](https://docs.lamin.ai/manage-ontologies).
<details>
<summary>30 sec video.</summary>
[30 sec video](https://lamin-site-assets.s3.amazonaws.com/.lamindb/nUSeIxsaPcBKVuvK0000.mp4)
</details>
| text/markdown | null | Lamin Labs <open-source@lamin.ai> | null | null | null | null | [
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: 3.13"
] | [] | null | null | <3.14,>=3.10 | [] | [
"lamindb"
] | [] | [
"lamin_utils==0.16.3",
"lamin_cli==1.14.0",
"lamindb_setup[aws]==1.21.0",
"bionty==2.2.1",
"pertdb==2.1.1",
"wetlab<2.1.0,>=2.0.0",
"jupytext",
"nbconvert>=7.2.1",
"nbproject==0.11.1",
"pyyaml",
"pyarrow",
"pandera>=0.24.0",
"typing_extensions!=4.6.0",
"python-dateutil",
"pandas<3.0.0,>=... | [] | [] | [] | [
"Home, https://github.com/laminlabs/lamindb"
] | python-requests/2.32.5 | 2026-02-19T13:57:18.796554 | lamindb-2.2.0.tar.gz | 517,025 | e7/86/f807920fe2c656a040b408df68b8097f0e6b6fae8eadea4dd36e640c8004/lamindb-2.2.0.tar.gz | source | sdist | null | false | 5a777d41d1268fd7c4e69f8f84ccfdf2 | a6900a14e973f32b4fc653ce1250ebea98bd56f94e8b2e263d19844a0df0fabf | e786f807920fe2c656a040b408df68b8097f0e6b6fae8eadea4dd36e640c8004 | null | [] | 529 |
2.4 | slurmformspawner | 2.11.1 | slurmformspawner: JupyterHub SlurmSpawner with a dynamic spawn form | # slurmformspawner
JupyterHub SlurmSpawner with a dynamic spawn form
## Requirements
- Python >= 3.7
- JupyterHub >= 4.0.0
- batchspawner >= 1.3.0
- cachetools
- traitlets
## Configuration
### SlurmFormSpawner
| Variable | Type | Description | Default |
| --------------------------------- | :------ | :---------------------------------------------- | ------- |
| `c.SlurmFormSpawner.disable_form` | `CBool` | Disable the spawner input form, use only default values instead | `False` |
| `c.SlurmFormSpawner.error_template_path` | `Unicode` | Path to the Jinja2 template of the error page | `os.path.join(sys.prefix, 'share', 'slurmformspawner', 'templates', 'error.html')` |
| `c.SlurmFormSpawner.submit_template_path` | `Unicode` | Path to the Jinja2 template of the submit file | `os.path.join(sys.prefix, 'share', 'slurmformspawner', 'templates', 'submit.sh')` |
| `c.SlurmFormSpawner.ui_args` | `Dict` | Dictionary of dictionaries describing the UI options | refer to `ui_args` section |
| `c.SlurmFormSpawner.profile_args` | `Dict` | Dictionary of dictionaries describing profiles | refer to `profile_args` section |
#### `ui_args`
`ui_args` is a dictionary where the keys are labels that will be re-used in `SbatchForm.ui` and the values are dictionaries describing how to launch the user interface.
Each option dictionary can have the following keys:
- `name` (required): string that will appear in the Spawner form
- `url` (optional): URL the user is redirected to after spawning the single-user server (refer to the `JUPYTERHUB_DEFAULT_URL` documentation)
- `args` (optional): list of flags and options appended to the Jupyter single-user command to redirect to the UI
- `modules` (optional): list of module names that need to be loaded to make the user interface work
Here is an example of a dictionary that would configure Jupyter Notebook, a terminal and RStudio.
```python
c.SlurmFormSpawner.ui_args = {
'notebook' : {
'name': 'Jupyter Notebook'
},
'terminal' : {
'name': 'Terminal',
'url': '/terminal/1'
},
'rstudio' : {
'name': 'RStudio',
'url': '/rstudio',
'modules': ['rstudio-server']
}
}
```
#### `profile_args`
`profile_args` is a dictionary where the keys are labels used by a JavaScript function to set the form values according to the values specified in the `params` sub-dictionary.
Each dictionary has the following keys:
- `name` (required): string that will appear in the Spawner form
- `params` (required): dictionary that can specify the value of each of the parameters in SbatchForm (see SbatchForm section).
Here is an example of how you could define profiles:
```python
c.SlurmFormSpawner.profile_args = {
'shell' : {
'name': 'Shell session',
'params': {
'nprocs': 1,
'oversubscribe': True,
'ui': 'terminal'
}
},
'parallel_testing' : {
'name': 'Parallel Testing',
'params': {
'nprocs': 8,
'oversubscribe': False,
'ui': 'lab',
'runtime': 1,
}
}
}
```
### SbatchForm
| Variable | Type | Description | Default |
| --------------------------------- | :------ | :---------------------------------------------- | ------- |
| `c.SbatchForm.runtime` | `Dict({'max', 'min', 'step', 'lock', 'def'})` | Runtime widget parameters | refer to `form.py` |
| `c.SbatchForm.nprocs` | `Dict({'max', 'min', 'step', 'lock', 'def'})` | Number of cores widget parameters | refer to `form.py` |
| `c.SbatchForm.memory` | `Dict({'max', 'min', 'step', 'lock', 'def'})` | Memory (MB) widget parameters | refer to `form.py` |
| `c.SbatchForm.oversubscribe` | `Dict({'def', 'lock'})` | Oversubscribe widget parameters | refer to `form.py` |
| `c.SbatchForm.gpus` | `Dict({'def', 'choices', 'lock'})` | GPUs widget parameters | refer to `form.py` |
| `c.SbatchForm.ui` | `Dict({'def', 'choices', 'lock'})` | User interface widget parameters | refer to `form.py` |
| `c.SbatchForm.profile` | `Dict({'def', 'choices', 'lock'})` | Profile widget parameters | refer to `form.py` |
| `c.SbatchForm.reservation` | `Dict({'def', 'choices', 'lock'})` | Reservation widget parameters | refer to `form.py` |
| `c.SbatchForm.account` | `Dict({'def', 'choices', 'lock'})` | Account widget parameters | refer to `form.py` |
| `c.SbatchForm.partition` | `Dict({'def', 'choices', 'lock'})` | Slurm partition parameters | refer to `form.py` |
| `c.SbatchForm.feature` | `Dict({'def', 'choices', 'lock'})` | Slurm feature (constraint) parameters | refer to `form.py` |
| `c.SbatchForm.form_template_path` | `Unicode` | Path to the Jinja2 template of the form | `os.path.join(sys.prefix, 'share', 'slurmformspawner', 'templates', 'form.html')` |
### SlurmAPI
| Variable | Type | Description | Default |
| --------------------------------- | :-------- | :---------------------------------------------------------------- | ------- |
| `c.SlurmAPI.info_cache_ttl` | `Integer` | Slurm sinfo output cache time-to-live (seconds) | 300 |
| `c.SlurmAPI.acct_cache_ttl` | `Integer` | Slurm sacct output cache time-to-live (seconds) | 300 |
| `c.SlurmAPI.acct_cache_size` | `Integer` | Slurm sacct output cache size (number of users) | 100 |
| `c.SlurmAPI.res_cache_ttl` | `Integer` | Slurm scontrol (reservations) output cache time-to-live (seconds) | 300 |
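The TTL settings above describe time-based caches around Slurm command output, so `sinfo`/`sacct` are not re-run on every form render. The sketch below only illustrates the general caching behavior — the spawner's actual implementation relies on the `cachetools` library listed in its requirements:

```python
import time

class TTLCache:
    """Minimal time-to-live cache: reuse a command's output until the
    entry expires, then recompute it."""

    def __init__(self, ttl_seconds):
        self.ttl = ttl_seconds
        self._store = {}  # key -> (expiry_timestamp, value)

    def get_or_compute(self, key, compute):
        now = time.monotonic()
        entry = self._store.get(key)
        if entry is not None and entry[0] > now:
            return entry[1]          # still fresh: serve the cached value
        value = compute()            # expired or missing: recompute
        self._store[key] = (now + self.ttl, value)
        return value

# Example: cache a (fake) sinfo call for 300 seconds
cache = TTLCache(ttl_seconds=300)
calls = []
sinfo = lambda: calls.append(1) or "PARTITION AVAIL NODES"
out1 = cache.get_or_compute("sinfo", sinfo)
out2 = cache.get_or_compute("sinfo", sinfo)  # served from cache, sinfo not re-run
```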
## screenshot

| text/markdown | Félix-Antoine Fortin | felix-antoine.fortin@calculquebec.ca | null | null | MIT | Interactive, Web, JupyterHub | [
"Intended Audience :: Developers",
"Intended Audience :: System Administrators",
"Intended Audience :: Science/Research",
"License :: OSI Approved :: BSD License",
"Programming Language :: Python",
"Programming Language :: Python :: 3"
] | [
"Linux"
] | https://github.com/cmd-ntrf/slurmformspawner | null | null | [] | [] | [] | [
"batchspawner>=1.3.0",
"WTForms==3.2.1",
"jinja2>=2.10.1",
"cachetools"
] | [] | [] | [] | [] | twine/6.1.0 CPython/3.13.7 | 2026-02-19T13:56:19.959361 | slurmformspawner-2.11.1.tar.gz | 22,992 | 4f/fb/a4fe80c38157be96b289f881356beb69e2f058b068dca2565f61bee8afe3/slurmformspawner-2.11.1.tar.gz | source | sdist | null | false | 6c113a92cf2ff25d57b6806f8fc29963 | 8613d6490edbb229c7c1130aef2d77c616be7cad91504f996f679cfe49ca562f | 4ffba4fe80c38157be96b289f881356beb69e2f058b068dca2565f61bee8afe3 | null | [
"LICENSE"
] | 250 |
2.4 | fastpysgi | 0.1 | An ultra fast WSGI/ASGI server for Python | <p align="center"><img src="./logo.png"></p>
--------------------------------------------------------------------
[](https://pypi.python.org/pypi/fastpysgi)
# FastPySGI
FastPySGI is an ultra fast WSGI/ASGI server for Python 3.
It's written in C and uses [libuv](https://github.com/libuv/libuv) and [llhttp](https://github.com/nodejs/llhttp) under the hood for blazing fast performance.
## Supported Platforms
| Platform | Linux | macOS | Windows |
| :------: | :---: | :---: | :-----: |
| <b>Support</b> | :white_check_mark: | :white_check_mark: | :white_check_mark: |
## Performance
FastPySGI is one of the fastest general-use WSGI servers out there!
For a comparison against other popular WSGI servers, see [PERFORMANCE.md](./performance_benchmarks/PERFORMANCE.md)
## Installation
Install using the [pip](https://pip.pypa.io/en/stable/) package manager.
```bash
pip install fastpysgi
```
## Quick start
Create a new file `example.py` with the following:
```python
import fastpysgi
def app(environ, start_response):
headers = [('Content-Type', 'text/plain')]
start_response('200 OK', headers)
return [b'Hello, World!']
if __name__ == '__main__':
fastpysgi.run(wsgi_app=app, host='0.0.0.0', port=5000)
```
Run the server using:
```bash
python3 example.py
```
Or, by using the `fastpysgi` command:
```bash
fastpysgi example:app
```
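The `app` callable above follows the standard WSGI interface (PEP 3333), so it can also be exercised directly, without starting any server — a handy sanity check. A minimal sketch, with the app reproduced for self-containment (a real `environ` carries many more keys):

```python
def app(environ, start_response):
    headers = [('Content-Type', 'text/plain')]
    start_response('200 OK', headers)
    return [b'Hello, World!']

# Call the WSGI app by hand with a minimal environ and a capturing start_response
captured = {}

def start_response(status, headers):
    captured['status'] = status
    captured['headers'] = headers

body = b''.join(app({'REQUEST_METHOD': 'GET', 'PATH_INFO': '/'}, start_response))
print(captured['status'])  # 200 OK
print(body)                # b'Hello, World!'
```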
## Example usage with Flask
```python
import fastpysgi
from flask import Flask
app = Flask(__name__)
@app.get('/')
def hello_world():
return 'Hello, World!', 200
if __name__ == '__main__':
fastpysgi.run(wsgi_app=app, host='127.0.0.1', port=5000)
```
## Testing
To run the test suite using [pytest](https://docs.pytest.org/en/latest/getting-started.html), run the following command:
```bash
python3 -m pytest
```
| text/markdown | James Roberts, remittor | null | null | null | MIT License
Copyright (c) 2021 James Roberts
Copyright (c) 2022 remittor
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.
| null | [
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3 :: Only",
"Operating System :: OS Independent",
"Topic :: Internet :: WWW/HTTP :: WSGI :: Server"
] | [] | null | null | >=3.8 | [] | [] | [] | [
"click>=7.0",
"pytest; extra == \"dev\""
] | [] | [] | [] | [
"Homepage, https://github.com/remittor/fastpysgi",
"Issues, https://github.com/remittor/fastpysgi/issues",
"Repository, https://github.com/remittor/fastpysgi",
"Donate, https://github.com/remittor/donate"
] | twine/6.2.0 CPython/3.14.3 | 2026-02-19T13:55:48.600826 | fastpysgi-0.1.tar.gz | 655,702 | 27/21/44e9c713c4596f943b5b93a68e516ed533ccdd2af96974453f05cea9da42/fastpysgi-0.1.tar.gz | source | sdist | null | false | 693217eed4c0f158b64f47ca6e8cf90d | 64d36da225c19af667853bacd3cd494b4f2ff1821ed56f900fe38c96243433d9 | 272144e9c713c4596f943b5b93a68e516ed533ccdd2af96974453f05cea9da42 | null | [
"LICENSE"
] | 6,124 |
2.4 | cipipeline | 0.3.0 | CIPipeline is a tool to analyze and process Calcium Imaging data. | # CIPipeline
## Description
CIPipeline (imported as `ci_pipe`) is a Python library for building and running calcium-imaging processing pipelines. It provides core pipeline primitives, optional adapters for Inscopix (`isx`) and CaImAn (`caiman`), utilities, plotters and example Jupyter notebooks.
This project was developed as a final project by students from Facultad de Ingeniería, Universidad de Buenos Aires, under the supervision of Dr. Fernando Chaure, in collaboration with the CGK Laboratory.
## Authors
- González Agustín
- Loyarte Iván
- Rueda Nazarena
- Singer Joaquín
## Installation
1. Install the library from PyPI
```bash
pip install cipipeline
```
2. Install libraries/packages required for specific modules
Currently, CIPipeline supports the following optional modules:
- **Inscopix `isx`** (required for the `isx` module): Software and installation instructions can be downloaded from the vendor site: https://www.inscopix.com
Note: Do not confuse this with the public `isx` library available on PyPI or GitHub. This project requires the proprietary Inscopix software package.
- **CaImAn** (required for the `caiman` module):
- Project: https://github.com/flatironinstitute/CaImAn
- Docs: https://caiman.readthedocs.io
CaImAn strongly recommends installing via conda for full functionality; follow the CaImAn docs.
3. Jupyter (recommended for opening example notebooks)
```bash
pip install jupyterlab
# or
pip install notebook
```
## Quick Start
Here's a simple example of creating and running a calcium imaging pipeline with ISX:
```python
import isx
from ci_pipe.pipeline import CIPipe
# Create a pipeline from videos in a directory
pipeline = CIPipe.with_videos_from_directory(
'input_dir',
outputs_directory='output_dir',
isx=isx
)
# Run a complete processing pipeline
(
pipeline
.set_defaults(
isx_bp_subtract_global_minimum=False,
isx_mc_max_translation=25,
isx_acr_filters=[('SNR', '>', 3), ('Event Rate', '>', 0), ('# Comps', '=', 1)]
)
.isx.preprocess_videos()
.isx.bandpass_filter_videos()
.isx.motion_correction_videos(isx_mc_series_name="series1")
.isx.normalize_dff_videos()
.isx.extract_neurons_pca_ica()
.isx.detect_events_in_cells()
.isx.auto_accept_reject_cells()
.isx.longitudinal_registration(isx_lr_reference_selection_strategy='by_num_cells_desc')
)
```
For more examples, including CaImAn integration and advanced workflows, see the notebooks in `docs/examples`.
## Documentation and Resources
- PyPI package: https://pypi.org/project/cipipeline
- CGK Lab: https://cgk-laboratory.github.io
- Inscopix: https://www.inscopix.com
- CaImAn: https://github.com/flatironinstitute/CaImAn and https://caiman.readthedocs.io
- Jupyter starter guide: https://jupyter.org/install
## Examples
Example Jupyter notebooks are available in `docs/examples`. To run them locally:
```bash
git clone https://github.com/CGK-Laboratory/ci_pipe
cd ci_pipe
pip install -e .
# install optional dependencies if needed (isx, caiman)
jupyter lab
# open notebooks in docs/examples
```
---
Read the Spanish version in `README_es.md`
| text/markdown | null | Gonzalez Agustín <agngonzalez@fi.uba.ar>, Loyarte Iván <iloyarte@fi.uba.ar>, Rueda Nazarena <nrueda@fi.uba.ar>, Singer Joaquín <josinger@fi.uba.ar> | null | null | MIT License
Copyright (c) 2025 CGKlab
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.
| null | [] | [] | null | null | <3.11,>=3.10 | [] | [] | [] | [
"rich>=13.7.0",
"PyYAML>=6.0"
] | [] | [] | [] | [
"Homepage, https://github.com/CGK-Laboratory/ci_pipe",
"Repository, https://github.com/CGK-Laboratory/ci_pipe"
] | twine/6.2.0 CPython/3.10.19 | 2026-02-19T13:55:42.825921 | cipipeline-0.3.0.tar.gz | 26,331 | cd/d0/ddb2fa123330b1c115b4fea30275c954c26771e1fd89fe1579b23174c3f0/cipipeline-0.3.0.tar.gz | source | sdist | null | false | 240f6c3f4f38cfb9309c6ea91ca3553b | c929011def08571775fedcf24bb92152c04fc6807c3993b02f3b41cbce294d58 | cdd0ddb2fa123330b1c115b4fea30275c954c26771e1fd89fe1579b23174c3f0 | null | [
"LICENSE"
] | 241 |
2.4 | nullspace-optimizer | 1.3.0 | Null space algorithm for nonlinear constrained optimization |
# Null Space Optimizer
`nullspace_optimizer` is a Python package implementing the null space
algorithm for nonlinear constrained optimization. It has been developed
in the context of topology optimization problems with the level-set and
the density method, but it can in principle be used for solving
arbitrary smooth nonlinear equality and inequality constrained
optimization problems of the form
```math
\begin{aligned}
\min_{x\in \mathcal{X}}& \quad J(x)\\
\textrm{s.t.} & \left\{\begin{aligned}
g_i(x)&=0, \text{ for all } 1\leqslant i\leqslant p,\\
h_j(x) &\leqslant 0, \text{ for all }1\leqslant j \leqslant q.\\
\end{aligned}\right.
\end{aligned}
```
{.align-center width="400px"}
## Official documentation
[Official documentation](https://null-space-optimizer.readthedocs.io/en/latest/index.html)
## Contribute and support
- Issue tracker:
<https://gitlab.com/florian.feppon/null-space-optimizer/-/issues>
- Source code:
<https://gitlab.com/florian.feppon/null-space-optimizer>
If I am not responding on the issue tracker, feel free to send me an
email to florian.feppon\[at\]kuleuven.be
## Citation
Please cite either of the following references when using this source:
> Feppon F., Allaire G. and Dapogny C. *Null space gradient flows for
> constrained optimization with applications to shape optimization.*
> 2020. ESAIM: COCV, 26 90
> [doi:10.1051/cocv/2020015](https://doi.org/10.1051/cocv/2020015)
> Feppon F. *Density based topology optimization with the Null Space Optimizer: a
> tutorial and a comparison* (2024).
> [Structural and Multidisciplinary Optimization, 67(4), 1-34](https://link.springer.com/article/10.1007/s00158-023-03710-w).
``` bibtex
@article{feppon2020optim,
author = {{Feppon, F.} and {Allaire, G.} and {Dapogny, C.}},
doi = {10.1051/cocv/2020015},
journal = {ESAIM: COCV},
pages = {90},
title = {Null space gradient flows for constrained optimization with applications to shape optimization},
url = {https://doi.org/10.1051/cocv/2020015},
volume = 26,
year = 2020
}
```
``` bibtex
@article{Feppon2024density,
title = "Density-based topology optimization with the Null Space Optimizer: a tutorial and a comparison",
author = "Feppon, Florian",
journal = "Structural and Multidisciplinary Optimization",
publisher = "Springer",
volume = 67,
number = 1,
pages = "1--34",
month = jan,
year = 2024
}
```
## Licence
The Null Space Optimizer is a free software distributed under the terms
of the GNU General Public Licence
[GPL3](https://www.gnu.org/licenses/gpl-3.0.html).
| text/markdown | Florian Feppon, Dries Toebat | florian.feppon@kuleuven.be | null | null | GNU GPL version 3 | nonlinear constrained optimization | [
"Programming Language :: Python :: 3",
"License :: OSI Approved :: GNU General Public License v3 (GPLv3)",
"Operating System :: OS Independent",
"Topic :: Scientific/Engineering :: Mathematics"
] | [] | https://null-space-optimizer.readthedocs.io/en/latest/ | null | >=3.9 | [] | [] | [] | [
"numpy>=1.26.4",
"scipy>=1.11.4",
"matplotlib>=3.8.3",
"cvxopt>=1.3.2",
"osqp>=0.6.2",
"sympy>=1.12",
"qpalm>=1.2.2",
"piqp>=0.6.2",
"colored>=1.4.4",
"pypardiso>=0.4.7"
] | [] | [] | [] | [] | twine/6.2.0 CPython/3.11.5 | 2026-02-19T13:55:26.729063 | nullspace_optimizer-1.3.0.tar.gz | 85,072 | 69/74/0b8c958bae9dd03aec8af2010d0c5000cd70fe72c25431e2ffc4b551bdd3/nullspace_optimizer-1.3.0.tar.gz | source | sdist | null | false | 2529506b8ce15cbb26873d121d58c854 | 779214302b6d09b36bae8a0fc04cf7381f3d8b4c2a851fb0e918de8feb33fb01 | 69740b8c958bae9dd03aec8af2010d0c5000cd70fe72c25431e2ffc4b551bdd3 | null | [
"LICENSE"
] | 228 |
2.4 | cledar-sdk | 2.2.0 | Cledar Python SDK | # Cledar Python SDK
## Project Description
**Cledar Python SDK** is a shared set of production‑ready services and utilities used across Cledar projects. It can be installed from PyPI (recommended), or consumed as a Git dependency or Git submodule.
Included modules:
- kafka_service: Kafka Producer/Consumer, helpers and DLQ handler
- storage_service: Object storage abstraction (S3/ABFS/local via fsspec)
- monitoring_service: FastAPI monitoring server with Prometheus metrics and healthchecks
- redis_service: Redis‑backed typed config store
- kserve_service: KServe helpers
- common_logging: Common logging utilities
---
## Installation and Setup
1. **From PyPI (recommended)**
Using pip:
```bash
pip install cledar-sdk
```
Using uv:
```bash
uv add cledar-sdk
```
Pin a specific version (example):
```bash
pip install "cledar-sdk==1.0.1"
```
2. **From Git (alternative)**
Using pip (SSH, specific tag):
```bash
pip install "git+ssh://git@github.com/Cledar/cledar-python-sdk.git@v1.0.1"
```
Using uv (SSH, specific tag):
```bash
uv add --git ssh://git@github.com/Cledar/cledar-python-sdk.git@v1.0.1
```
You can also point to a branch (e.g. `main`) instead of a tag.
3. **As a Git submodule**
```bash
git submodule add git@github.com:Cledar/cledar-python-sdk.git vendor/cledar-python-sdk
git submodule update --init --recursive
```
Optionally install it in editable mode from the submodule path:
```bash
uv add -e ./vendor/cledar-python-sdk
```
4. **Developing locally**
```bash
git clone git@github.com:Cledar/cledar-python-sdk.git
cd cledar-python-sdk
uv sync
```
Python version required: 3.12.7
## Testing
Unit tests are implemented using **pytest** and **unittest**.
1. Run tests:
```bash
uv run pytest
```
2. Adding tests:
Place tests under each module's `tests` directory (e.g. `kafka_service/tests`, `storage_service/tests`) or create files with the `_test.py` suffix.
---
## Quick Start Examples
### Kafka
Producer:
```python
from kafka_service.clients.producer import KafkaProducer
from kafka_service.config.schemas import KafkaProducerConfig
cfg = KafkaProducerConfig(
kafka_servers="localhost:9092",
kafka_group_id="example",
kafka_topic_prefix="dev",
compression_type="snappy",
kafka_partitioner="consistent_random",
)
producer = KafkaProducer(config=cfg)
producer.connect()
producer.send(topic="my-topic", value='{"id":"123","payload":"hello"}', key="123")
```
Consumer:
```python
from kafka_service.clients.consumer import KafkaConsumer
from kafka_service.config.schemas import KafkaConsumerConfig
cfg = KafkaConsumerConfig(
kafka_servers="localhost:9092",
kafka_group_id="example",
kafka_topic_prefix="dev",
kafka_offset="earliest",
kafka_auto_commit_interval_ms=5000,
)
consumer = KafkaConsumer(config=cfg)
consumer.connect()
consumer.subscribe(["my-topic"])
msg = consumer.consume_next()
if msg:
consumer.commit(msg)
```
### Object Storage (S3/ABFS/local)
```python
from storage_service.object_storage import ObjectStorageService
from storage_service.models import ObjectStorageServiceConfig
cfg = ObjectStorageServiceConfig(
s3_access_key="minioadmin",
s3_secret_key="minioadmin",
s3_endpoint_url="http://localhost:9000",
)
storage = ObjectStorageService(config=cfg)
storage.upload_file(
file_path="README.md",
destination_path="s3://bucket/path/README.md",
)
```
### Monitoring Server
```python
from cledar.monitoring import MonitoringServer, MonitoringServerConfig
config = MonitoringServerConfig(
readiness_checks={"s3": storage.is_alive},
liveness_checks={"app": lambda: True},
)
server = MonitoringServer(host="0.0.0.0", port=8080, config=config)
server.start_monitoring_server()
```
### Redis Config Store
```python
from redis import Redis
from redis_service.redis_config_store import RedisConfigStore
redis = Redis(host="localhost", port=6379, db=0)
store = RedisConfigStore(redis=redis, prefix="example:")
# See redis_service/example.py for a full typed config provider example
```
## Code Quality
- **pydantic** - settings management
- **ruff**, **mypy** - Linting, formatting, and static type checking
- **pre-commit** - Pre-commit file checks
## Linting
If you want to run linting or type checker manually, you can use the following commands. Pre-commit will run these checks automatically before each commit.
```bash
uv run ruff format .
uv run ruff check .
uv run mypy .
```
## Pre-commit setup
To get started follow these steps:
1. Install `pre-commit` by running the following command:
```
pip install pre-commit
```
2. Once `pre-commit` is installed, set up the pre-commit hooks by running:
```
pre-commit install
```
3. Pre-commit hooks will analyze only committed files. To analyze all files after installation run the following:
```
pre-commit run --all-files
```
### Automatic Fixing Before Commits:
pre-commit will run Ruff (format + lint) and mypy during the commit process:
```bash
git commit -m "Describe your changes"
```
To skip pre-commit hooks for a single commit, use the `--no-verify` flag:
```bash
git commit -m "Your commit message" --no-verify
```
---
## Technologies and Libraries
### Main Dependencies:
- **python** >= "3.12.7"
- **pydantic-settings**
- **confluent-kafka**
- **fastapi**
- **prometheus-client**
- **uvicorn**
- **redis**
- **fsspec/s3fs/adlfs** (S3/ABFS backends)
- **boto3** and **boto3-stubs**
### Developer Tools:
- **uv** - Dependency and environment management
- **pydantic** - settings management
- **ruff** - Linting and formatting
- **mypy** - Static type checker
- **pytest**, **unittest** - Unit tests
- **pre-commit** - Code quality hooks
---
## Commit conventions
We use [Conventional Commits](https://www.conventionalcommits.org/en/v1.0.0/) for our commit messages. This helps us to create a better, more readable changelog.
Example of a commit message:
```bash
refactor(XXX-NNN): spaghetti code is now a carbonara
```
| text/markdown | Cledar | null | null | null | null | null | [] | [] | null | null | >=3.12.7 | [] | [] | [] | [
"adlfs>=2025.8.0",
"boto3-stubs>=1.34.138",
"boto3>=1.34.138",
"confluent-kafka>=2.4.0",
"ecs-logging>=2.1.0",
"fastapi>=0.112.3",
"fsspec>=2025.9.0",
"prometheus-client>=0.20.0",
"pydantic-settings>=2.3.3",
"pydantic>=2.7.0",
"redis>=5.2.1",
"s3fs>=2025.9.0",
"uvicorn>=0.30.6"
] | [] | [] | [] | [] | uv/0.10.4 {"installer":{"name":"uv","version":"0.10.4","subcommand":["publish"]},"python":null,"implementation":{"name":null,"version":null},"distro":{"name":"Ubuntu","version":"20.04","id":"focal","libc":null},"system":{"name":null,"release":null},"cpu":null,"openssl_version":null,"setuptools_version":null,"rustc_version":null,"ci":true} | 2026-02-19T13:55:21.685147 | cledar_sdk-2.2.0-py3-none-any.whl | 132,364 | 1a/17/51096a121d2e86d7d035ea059274d449b88e1613555bf75abe7c22a8d091/cledar_sdk-2.2.0-py3-none-any.whl | py3 | bdist_wheel | null | false | 19bd9a7e42ba9c74025e69f045b95c4f | 88d4f0d388ca8457c3faa1f994d8b5a30811268d5426187a3f2ee4d83561b3a3 | 1a1751096a121d2e86d7d035ea059274d449b88e1613555bf75abe7c22a8d091 | null | [
"LICENSE"
] | 241 |
2.4 | vtk-sdk-python-wheel-helper | 1.0.0 | vtk-sdk-python-wheel-helper | # VTK-SDK Python Helper
VTK-SDK Python Helper is a collection of CMake modules to help you build VTK-compatible wheels using the [VTK-SDK](https://docs.vtk.org/en/latest/advanced/wheel_sdks.html)!
It is distributed as a Python package, used as a build requirement for projects built with scikit-build-core. See the Usage section for more information.
## Features
High-level CMake functions to build VTK modules in a wheel compatible with VTK wheel using the VTK-SDK:
- Build VTK-based modules using a high-level API, compatible with any VTK version >= 9.6.0.
- Generate a package init that correctly initializes your module against VTK.
- Generate an SDK version of your project, which other VTK modules can then be built against. You can "chain" projects this way.
- Package native runtime dependencies
## Usage
Add vtk-sdk-python-wheel-helper to your build requirements, with scikit-build-core build-system:
```toml
[build-system]
requires = [
"scikit-build-core",
"vtk-sdk==X.Y.Z", # Version of "vtk-sdk" should always be specified using "==X.Y.Z" and match the one associated with the "vtk" dependency below.
"vtk-sdk-python-wheel-helper" # you can use the latest version, it supports VTK 9.6.0 and newer.
]
build-backend = "scikit_build_core.build"
```
The vtk-sdk-python-wheel-helper package adds an entry to the CMAKE_MODULE_PATH variable, so you can include it directly:
```cmake
include(VTKSDKPythonWheelHelper)
```
vtk-sdk adds a path to CMAKE_PREFIX_PATH, which enables VTKSDKPythonWheelHelper to find VTK automatically.
Then you get access to the helper's functions, for example:
```cmake
vtksdk_build_modules(${SKBUILD_PROJECT_NAME} MODULES SuperProject::AmazingModule)
vtksdk_generate_package_init(${SKBUILD_PROJECT_NAME} MODULES SuperProject::AmazingModule)
```
See `tests/BasicProject` for more information about building your own module and SDK.
See `tests/packages/build_module` for more information about building your own modules against your **own SDK**!
Other usage example can be found on [SlicerCore repository](https://github.com/KitwareMedical/SlicerCore).
## Documentation
CMake functions documentation can be found in the CMake files and online @ https://vtk-sdk-python-wheel-helper.readthedocs.io/en/latest
## Future work
- Support PYI generation using VTK helper script
- Support debug symbol wheel generation
## License
Apache License, Version 2.0.
See LICENSE file for details.
| text/markdown | null | Alexy Pellegrini <alexy.pellegrini@kitware.com> | null | null | Apache 2.0 License | CMake, Python, VTK | [
"License :: OSI Approved :: Apache Software License",
"Natural Language :: English",
"Operating System :: OS Independent",
"Programming Language :: Python :: 3 :: Only"
] | [] | null | null | >=3.9 | [] | [] | [] | [
"sphinx<9,>=8; extra == \"docs\"",
"sphinxcontrib-moderncmakedomain; extra == \"docs\"",
"pytest; extra == \"test\"",
"virtualenv; extra == \"test\""
] | [] | [] | [] | [] | twine/6.1.0 CPython/3.13.7 | 2026-02-19T13:54:50.399122 | vtk_sdk_python_wheel_helper-1.0.0.tar.gz | 22,763 | bc/df/4b9cbd5a955cd70aa7aff12a1c2bc0ffd10851e995235e199c09d933a46a/vtk_sdk_python_wheel_helper-1.0.0.tar.gz | source | sdist | null | false | b3543c554308ca155d1f6539af7d6b0a | b3de8a38502156eb17a690a65cb10a31880c8930d01908ddb3d7067e280ffc50 | bcdf4b9cbd5a955cd70aa7aff12a1c2bc0ffd10851e995235e199c09d933a46a | null | [
"LICENSE"
] | 255 |
2.4 | aquacal | 1.4.1 | Refractive multi-camera calibration for underwater arrays with Snell's law modeling | # AquaCal
     [](https://doi.org/10.5281/zenodo.18644658)
Refractive multi-camera calibration for underwater arrays. AquaCal calibrates cameras in air viewing through a flat water surface, using Snell's law to achieve accurate 3D reconstruction in refractive environments.
## :construction: Status :construction:
**02/17/26: This project is under active and rapid development.**
The API and internal structure are subject to frequent breaking changes without notice. It is not yet recommended for
production use. A stable release is planned by the end of the month. This section will be updated accordingly once that
milestone is reached.
## Features
- **Snell's law refractive projection** — Accurate ray-tracing through air-water interfaces
- **Multi-camera pose graph** — BFS-based extrinsic initialization for camera arrays
- **Joint bundle adjustment** — Simultaneous optimization of extrinsics, interface distances, and board poses
- **Sparse Jacobian optimization** — Scalable to 10+ cameras with column grouping
- **ChArUco board detection** — Robust corner detection for calibration targets
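Snell's law, which AquaCal applies along each camera ray, is simple to state: n₁ sin θ₁ = n₂ sin θ₂. A tiny self-contained illustration of the refraction at a flat air-water interface (a conceptual sketch, not AquaCal's API; the refractive indices are typical textbook values):

```python
import math

def refracted_angle(theta_incident, n1=1.0, n2=1.333):
    """Snell's law at a flat interface: n1*sin(theta1) = n2*sin(theta2).
    Defaults assume a ray passing from air (n1 ~ 1.0) into water (n2 ~ 1.333).
    Angles are measured from the interface normal, in radians."""
    s = n1 * math.sin(theta_incident) / n2
    if abs(s) > 1.0:
        raise ValueError("total internal reflection: no transmitted ray")
    return math.asin(s)

theta1 = math.radians(30.0)      # ray hits the water surface at 30 degrees
theta2 = refracted_angle(theta1)
# entering the denser medium, the ray bends toward the normal: theta2 < theta1
print(round(math.degrees(theta2), 1))
```

A pinhole model that ignores this bending systematically misplaces underwater points, which is why the calibration optimizes the refractive projection directly.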
## Installation
```bash
pip install aquacal
```
## Quick Start
1. Install AquaCal:
```bash
pip install aquacal
```
2. Generate a configuration file from your calibration videos:
```bash
aquacal init --intrinsic-dir videos/intrinsic/ --extrinsic-dir videos/extrinsic/
```
3. Run calibration:
```bash
aquacal calibrate config.yaml
```
Results are saved to `output/calibration.json` with camera intrinsics, extrinsics, interface distances, and diagnostics.
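Because the output is plain JSON, results can be inspected with the standard library alone. The field names in this sketch are hypothetical — consult the documentation for the actual `calibration.json` schema:

```python
import json

# Hypothetical shape of output/calibration.json -- the real schema is defined
# by AquaCal's documentation; the field names here are illustrative only.
text = """
{
  "cameras": {
    "cam0": {
      "K": [[1200.0, 0.0, 960.0], [0.0, 1200.0, 540.0], [0.0, 0.0, 1.0]],
      "rvec": [0.0, 0.0, 0.0],
      "tvec": [0.0, 0.0, 0.0],
      "interface_distance": 0.35
    }
  },
  "diagnostics": {"mean_reprojection_error_px": 0.42}
}
"""

calib = json.loads(text)
for name, cam in calib["cameras"].items():
    # interface_distance: camera-to-water-surface distance, estimated jointly
    # with extrinsics during bundle adjustment
    print(name, "interface distance:", cam["interface_distance"])
```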
## Documentation
Full documentation is available at [aquacal.readthedocs.io](https://aquacal.readthedocs.io):
- **[Overview](https://aquacal.readthedocs.io/en/latest/overview.html)** — What is refractive calibration and when do you need it?
- **[User Guide](https://aquacal.readthedocs.io/en/latest/guide/index.html)** — Theory, methodology, and coordinate conventions
- **[API Reference](https://aquacal.readthedocs.io/en/latest/api/index.html)** — Detailed module and function documentation
- **[Tutorials](https://aquacal.readthedocs.io/en/latest/tutorials/index.html)** — Interactive Jupyter notebook examples
- **[Configuration Reference](https://aquacal.readthedocs.io/en/latest/api/config.html)** — YAML config schema and options
## Citation
If you use AquaCal in your research, please cite:
```bibtex
@software{aquacal,
title = {AquaCal: Refractive Multi-Camera Calibration},
author = {Lancaster, Tucker},
year = {2026},
url = {https://github.com/tlancaster6/AquaCal},
version = {1.2.0},
doi = {10.5281/zenodo.18644658}
}
```
See [CITATION.cff](CITATION.cff) for full citation metadata.
## Contributing
We welcome contributions! See [CONTRIBUTING.md](CONTRIBUTING.md) for guidelines.
## License
MIT License. See [LICENSE](LICENSE) for details.
| text/markdown | Tucker Lancaster | null | null | null | MIT | calibration, multi-camera, underwater, refraction, computer-vision, charuco | [
"Development Status :: 4 - Beta",
"Intended Audience :: Developers",
"Intended Audience :: Science/Research",
"Topic :: Scientific/Engineering",
"Topic :: Scientific/Engineering :: Image Processing",
"Topic :: Scientific/Engineering :: Image Recognition",
"License :: OSI Approved :: MIT License",
"Pro... | [] | null | null | >=3.10 | [] | [] | [] | [
"numpy",
"scipy",
"opencv-python>=4.6",
"pyyaml",
"matplotlib",
"pandas",
"requests",
"tqdm",
"natsort>=8.4.0",
"pytest; extra == \"dev\"",
"pytest-cov; extra == \"dev\"",
"ruff; extra == \"dev\"",
"pre-commit; extra == \"dev\"",
"python-semantic-release; extra == \"dev\"",
"sphinx>=6.2;... | [] | [] | [] | [
"Homepage, https://github.com/tlancaster6/AquaCal",
"Documentation, https://aquacal.readthedocs.io",
"Repository, https://github.com/tlancaster6/AquaCal",
"Issues, https://github.com/tlancaster6/AquaCal/issues"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-19T13:54:45.448432 | aquacal-1.4.1.tar.gz | 109,865 | eb/75/d8ac868f518e857c2e3cd86402b32a2febb6f3d2f87df1b95c9725e69c00/aquacal-1.4.1.tar.gz | source | sdist | null | false | a87e561c00fc5a000c08ad4f1cde58df | 03d2a78cea4f238fa324a9721e79e12c4a916985357820c639fc03430ffb963b | eb75d8ac868f518e857c2e3cd86402b32a2febb6f3d2f87df1b95c9725e69c00 | null | [
"LICENSE"
] | 292 |
2.4 | legit-api-client | 1.1.4668 | Inventory | # legit-api-client
No description provided (generated by OpenAPI Generator https://github.com/openapitools/openapi-generator)
This Python package is automatically generated by the [OpenAPI Generator](https://openapi-generator.tech) project:
- API version: 1.0
- Package version: 1.1.4668
- Generator version: 7.20.0
- Build package: org.openapitools.codegen.languages.PythonClientCodegen
## Requirements
Python 3.9+
## Installation & Usage
### pip install
If the Python package is hosted on a repository, you can install it directly using:
```sh
pip install git+https://github.com/GIT_USER_ID/GIT_REPO_ID.git
```
(you may need to run `pip` with root permission: `sudo pip install git+https://github.com/GIT_USER_ID/GIT_REPO_ID.git`)
Then import the package:
```python
import legit_api_client
```
### Setuptools
Install via [Setuptools](http://pypi.python.org/pypi/setuptools).
```sh
python setup.py install --user
```
(or `sudo python setup.py install` to install the package for all users)
Then import the package:
```python
import legit_api_client
```
### Tests
Execute `pytest` to run the tests.
## Getting Started
Please follow the [installation procedure](#installation--usage) and then run the following:
```python
import os

import legit_api_client
from legit_api_client.rest import ApiException
from pprint import pprint

# Defining the host is optional and defaults to http://localhost
# See configuration.py for a list of all supported configuration parameters.
configuration = legit_api_client.Configuration(
    host = "http://localhost"
)

# The client must configure the authentication and authorization parameters
# in accordance with the API server security policy.
# Examples for each auth method are provided below; use the example that
# satisfies your auth use case.

# Configure Bearer authorization (JWT): BearerAuth
configuration = legit_api_client.Configuration(
    access_token = os.environ["BEARER_TOKEN"]
)

# Enter a context with an instance of the API client
with legit_api_client.ApiClient(configuration) as api_client:
    # Create an instance of the API class
    api_instance = legit_api_client.AmazonEcrKeyIntegrationApi(api_client)
    id = 'id_example'  # str |

    try:
        api_instance.amazon_ecr_key_integration_delete(id)
    except ApiException as e:
        print("Exception when calling AmazonEcrKeyIntegrationApi->amazon_ecr_key_integration_delete: %s\n" % e)
```
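Under the hood, the generated client simply issues an authorized HTTP request against the path template from the endpoint table. As a rough, hand-written illustration (not part of the generated package), the delete call above corresponds to building a request like this:

```python
from urllib.parse import quote
from urllib.request import Request

def build_delete_request(host, integration_id, token):
    """Build the raw DELETE request equivalent to
    amazon_ecr_key_integration_delete (illustration only)."""
    # Path parameters are URL-encoded before substitution into the template
    path = "/api/v1.0/integrations/amazon-ecr/key/{id}".format(
        id=quote(integration_id, safe="")
    )
    return Request(
        host + path,
        method="DELETE",
        headers={"Authorization": "Bearer " + token},
    )

req = build_delete_request("http://localhost", "id_example", "TOKEN")
print(req.get_method(), req.full_url)
```

The generated client adds retries, response deserialization, and configuration handling on top, but every method in the tables below reduces to a request of this shape.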
## Documentation for API Endpoints
All URIs are relative to *http://localhost*
Class | Method | HTTP request | Description
------------ | ------------- | ------------- | -------------
*AmazonEcrKeyIntegrationApi* | [**amazon_ecr_key_integration_delete**](docs/AmazonEcrKeyIntegrationApi.md#amazon_ecr_key_integration_delete) | **DELETE** /api/v1.0/integrations/amazon-ecr/key/{id} |
*AmazonEcrRoleIntegrationApi* | [**amazon_ecr_role_integration_delete**](docs/AmazonEcrRoleIntegrationApi.md#amazon_ecr_role_integration_delete) | **DELETE** /api/v1.0/integrations/amazon-ecr/role/{id} |
*BrokersApi* | [**api_v10_brokers_get**](docs/BrokersApi.md#api_v10_brokers_get) | **GET** /api/v1.0/brokers | Get all brokers
*CloudInstancesApi* | [**api_v10_cloud_instances_get**](docs/CloudInstancesApi.md#api_v10_cloud_instances_get) | **GET** /api/v1.0/cloud-instances | Get all cloud instances
*CollaboratorsApi* | [**api_v10_collaborators_get**](docs/CollaboratorsApi.md#api_v10_collaborators_get) | **GET** /api/v1.0/collaborators | Get all collaborators
*CollaboratorsApi* | [**api_v10_collaborators_id_get**](docs/CollaboratorsApi.md#api_v10_collaborators_id_get) | **GET** /api/v1.0/collaborators/{id} | Get collaborator by id
*CollaboratorsApi* | [**api_v10_collaborators_id_permissions_get**](docs/CollaboratorsApi.md#api_v10_collaborators_id_permissions_get) | **GET** /api/v1.0/collaborators/{id}/permissions | Get collaborator permissions
*ComplianceApi* | [**api_v10_compliance_automatic_checks_check_id_delete**](docs/ComplianceApi.md#api_v10_compliance_automatic_checks_check_id_delete) | **DELETE** /api/v1.0/compliance/automatic-checks/{checkId} | Delete automatic check
*ComplianceApi* | [**api_v10_compliance_framework_id_get**](docs/ComplianceApi.md#api_v10_compliance_framework_id_get) | **GET** /api/v1.0/compliance/{frameworkId} | Get compliance by id
*ComplianceApi* | [**api_v10_compliance_framework_id_requirement_name_automatic_checks_post**](docs/ComplianceApi.md#api_v10_compliance_framework_id_requirement_name_automatic_checks_post) | **POST** /api/v1.0/compliance/{frameworkId}/{requirementName}/automatic-checks | Create automatic check
*ComplianceApi* | [**api_v10_compliance_get**](docs/ComplianceApi.md#api_v10_compliance_get) | **GET** /api/v1.0/compliance | Get all compliances
*ComplianceApi* | [**api_v10_compliance_report_framework_id_post**](docs/ComplianceApi.md#api_v10_compliance_report_framework_id_post) | **POST** /api/v1.0/compliance/report/{frameworkId} | Request compliance report
*ContainerImagesApi* | [**api_v10_containers_cloud_instances_post**](docs/ContainerImagesApi.md#api_v10_containers_cloud_instances_post) | **POST** /api/v1.0/containers/cloud-instances | Manage container to cloud instance manual correlation
*ContainerImagesApi* | [**api_v10_containers_container_image_id_versions_get**](docs/ContainerImagesApi.md#api_v10_containers_container_image_id_versions_get) | **GET** /api/v1.0/containers/{containerImageId}/versions | Get Container Image Versions
*ContainerImagesApi* | [**api_v10_containers_container_name_versions_digest_sbom_get**](docs/ContainerImagesApi.md#api_v10_containers_container_name_versions_digest_sbom_get) | **GET** /api/v1.0/containers/{containerName}/versions/{digest}/sbom | Download Container Image Version SBOM
*ContainerImagesApi* | [**api_v10_containers_get**](docs/ContainerImagesApi.md#api_v10_containers_get) | **GET** /api/v1.0/containers | Get all Container Images
*ContainerImagesApi* | [**api_v10_containers_id_get**](docs/ContainerImagesApi.md#api_v10_containers_id_get) | **GET** /api/v1.0/containers/{id} | Get Container Image By Id
*ContainerImagesApi* | [**api_v10_containers_repositories_post**](docs/ContainerImagesApi.md#api_v10_containers_repositories_post) | **POST** /api/v1.0/containers/repositories | Manage container to repository manual correlation
*CustomFieldsApi* | [**api_v10_custom_fields_get**](docs/CustomFieldsApi.md#api_v10_custom_fields_get) | **GET** /api/v1.0/custom-fields | Get Custom Fields
*DependenciesV2Api* | [**get_dependencies_count_v2**](docs/DependenciesV2Api.md#get_dependencies_count_v2) | **GET** /api/v2.0/dependencies/count | Get unique dependencies count
*DependenciesV2Api* | [**get_dependencies_v2**](docs/DependenciesV2Api.md#get_dependencies_v2) | **GET** /api/v2.0/dependencies | Get unique dependencies
*DependenciesV2Api* | [**get_dependency_licenses**](docs/DependenciesV2Api.md#get_dependency_licenses) | **GET** /api/v2.0/dependencies/licenses | Get the licenses for dependencies
*DependenciesV2Api* | [**get_dependency_licenses_data**](docs/DependenciesV2Api.md#get_dependency_licenses_data) | **GET** /api/v2.0/dependencies/licenses-metadata | Get the licenses for dependencies with additional data
*DependenciesV2Api* | [**get_dependency_related_issue_ids**](docs/DependenciesV2Api.md#get_dependency_related_issue_ids) | **GET** /api/v2.0/dependencies/{dependencyId}/related-issues | Get the issue ids related to a dependency
*DependenciesV2Api* | [**get_repository_ids_for_dependency_v2**](docs/DependenciesV2Api.md#get_repository_ids_for_dependency_v2) | **GET** /api/v2.0/dependencies/{dependencyId}/related-repositories | Get the repositories related to a dependency
*IntegrationsApi* | [**api_v10_integrations_amazon_ecr_key_id_put**](docs/IntegrationsApi.md#api_v10_integrations_amazon_ecr_key_id_put) | **PUT** /api/v1.0/integrations/amazon-ecr/key/{id} | Update Amazon ECR Key integration
*IntegrationsApi* | [**api_v10_integrations_amazon_ecr_key_post**](docs/IntegrationsApi.md#api_v10_integrations_amazon_ecr_key_post) | **POST** /api/v1.0/integrations/amazon-ecr/key | Create Amazon ECR key integration
*IntegrationsApi* | [**api_v10_integrations_amazon_ecr_role_id_put**](docs/IntegrationsApi.md#api_v10_integrations_amazon_ecr_role_id_put) | **PUT** /api/v1.0/integrations/amazon-ecr/role/{id} | Update Amazon ECR Role integration
*IntegrationsApi* | [**api_v10_integrations_amazon_ecr_role_post**](docs/IntegrationsApi.md#api_v10_integrations_amazon_ecr_role_post) | **POST** /api/v1.0/integrations/amazon-ecr/role | Create Amazon ECR Role integration
*IntegrationsApi* | [**api_v10_integrations_azure_acr_id_delete**](docs/IntegrationsApi.md#api_v10_integrations_azure_acr_id_delete) | **DELETE** /api/v1.0/integrations/azure-acr/{id} | Delete Azure ACR integration by Id
*IntegrationsApi* | [**api_v10_integrations_azure_acr_id_put**](docs/IntegrationsApi.md#api_v10_integrations_azure_acr_id_put) | **PUT** /api/v1.0/integrations/azure-acr/{id} | Update Azure ACR integration
*IntegrationsApi* | [**api_v10_integrations_azure_acr_post**](docs/IntegrationsApi.md#api_v10_integrations_azure_acr_post) | **POST** /api/v1.0/integrations/azure-acr | Create Azure ACR integration
*IntegrationsApi* | [**api_v10_integrations_get**](docs/IntegrationsApi.md#api_v10_integrations_get) | **GET** /api/v1.0/integrations | Get all integrations
*IntegrationsApi* | [**api_v10_integrations_google_gcr_id_delete**](docs/IntegrationsApi.md#api_v10_integrations_google_gcr_id_delete) | **DELETE** /api/v1.0/integrations/google-gcr/{id} | Delete Google GCR integration by Id
*IntegrationsApi* | [**api_v10_integrations_google_gcr_id_put**](docs/IntegrationsApi.md#api_v10_integrations_google_gcr_id_put) | **PUT** /api/v1.0/integrations/google-gcr/{id} | Update Google GCR integration
*IntegrationsApi* | [**api_v10_integrations_google_gcr_post**](docs/IntegrationsApi.md#api_v10_integrations_google_gcr_post) | **POST** /api/v1.0/integrations/google-gcr | Create Google GCR integration
*IntegrationsApi* | [**api_v10_integrations_id_delete**](docs/IntegrationsApi.md#api_v10_integrations_id_delete) | **DELETE** /api/v1.0/integrations/{id} | Delete integration by id
*IntegrationsApi* | [**api_v10_integrations_id_get**](docs/IntegrationsApi.md#api_v10_integrations_id_get) | **GET** /api/v1.0/integrations/{id} | Get integration by id
*IntegrationsApi* | [**api_v10_integrations_jenkins_id_delete**](docs/IntegrationsApi.md#api_v10_integrations_jenkins_id_delete) | **DELETE** /api/v1.0/integrations/jenkins/{id} | Delete Jenkins integration by Id
*IntegrationsApi* | [**api_v10_integrations_jenkins_id_put**](docs/IntegrationsApi.md#api_v10_integrations_jenkins_id_put) | **PUT** /api/v1.0/integrations/jenkins/{id} | Update Jenkins integration
*IntegrationsApi* | [**api_v10_integrations_jenkins_post**](docs/IntegrationsApi.md#api_v10_integrations_jenkins_post) | **POST** /api/v1.0/integrations/jenkins | Create Jenkins integration
*IntegrationsApi* | [**api_v10_integrations_workspace_bulk_post**](docs/IntegrationsApi.md#api_v10_integrations_workspace_bulk_post) | **POST** /api/v1.0/integrations/workspace/bulk | Apply bulk workspace operation
*IssuesApi* | [**add_issue_comment**](docs/IssuesApi.md#add_issue_comment) | **POST** /api/v1.0/issues/{id}/comments | Add issue comment
*IssuesApi* | [**add_issue_tag**](docs/IssuesApi.md#add_issue_tag) | **POST** /api/v1.0/issues/{issueId}/tags/{tagId} | Add a tag to an issue
*IssuesApi* | [**create_issues_tag**](docs/IssuesApi.md#create_issues_tag) | **POST** /api/v1.0/issues/tags | Create issues tag
*IssuesApi* | [**delete_issue_comment**](docs/IssuesApi.md#delete_issue_comment) | **DELETE** /api/v1.0/issues/{issueId}/comments/{commentId} | Delete issue comment
*IssuesApi* | [**delete_issues_tag**](docs/IssuesApi.md#delete_issues_tag) | **DELETE** /api/v1.0/issues/tags/{id} | Delete issues tag
*IssuesApi* | [**get_issue_by_id**](docs/IssuesApi.md#get_issue_by_id) | **GET** /api/v1.0/issues/{id} | Get Issue by id
*IssuesApi* | [**get_issues**](docs/IssuesApi.md#get_issues) | **GET** /api/v1.0/issues | Get issues
*IssuesApi* | [**get_issues_tags**](docs/IssuesApi.md#get_issues_tags) | **GET** /api/v1.0/issues/tags | Get issues tags
*IssuesApi* | [**patch_issue_assignee**](docs/IssuesApi.md#patch_issue_assignee) | **PATCH** /api/v1.0/issues/{id}/assign | Assign user to issue
*IssuesApi* | [**patch_issue_status**](docs/IssuesApi.md#patch_issue_status) | **PATCH** /api/v1.0/issues/{id}/status | Change issue status
*IssuesApi* | [**patch_issues_statuses**](docs/IssuesApi.md#patch_issues_statuses) | **PATCH** /api/v1.0/issues/statuses | Change statuses of multiple issues
*IssuesApi* | [**remove_issue_assignee**](docs/IssuesApi.md#remove_issue_assignee) | **POST** /api/v1.0/issues/{id}/unassign | Unassign user from issue
*IssuesApi* | [**remove_issue_tag**](docs/IssuesApi.md#remove_issue_tag) | **DELETE** /api/v1.0/issues/{issueId}/tags/{tagId} | Remove a tag from an issue
*IssuesV2Api* | [**get_action_history_by_issue_id**](docs/IssuesV2Api.md#get_action_history_by_issue_id) | **GET** /api/v2.0/issues/{issueId}/action-history | Get the action history of a single issue by its id
*IssuesV2Api* | [**get_additional_data_by_issue_by_id**](docs/IssuesV2Api.md#get_additional_data_by_issue_by_id) | **GET** /api/v2.0/issues/{id}/additional-data | Get additional data on issue by id
*IssuesV2Api* | [**get_extended_issues**](docs/IssuesV2Api.md#get_extended_issues) | **GET** /api/v2.0/issues/extended | Get extended data on issues
*IssuesV2Api* | [**get_issue_action_history**](docs/IssuesV2Api.md#get_issue_action_history) | **GET** /api/v2.0/issues/action-history | Get the action history of issues
*IssuesV2Api* | [**get_issue_by_id_v2**](docs/IssuesV2Api.md#get_issue_by_id_v2) | **GET** /api/v2.0/issues/{id} | Get Issue data by id
*IssuesV2Api* | [**get_issue_comments**](docs/IssuesV2Api.md#get_issue_comments) | **GET** /api/v2.0/issues/comments | Get the comments on issues
*IssuesV2Api* | [**get_issue_remediation**](docs/IssuesV2Api.md#get_issue_remediation) | **GET** /api/v2.0/issues/remediation | Get the remediation steps of issues
*IssuesV2Api* | [**get_issue_tags**](docs/IssuesV2Api.md#get_issue_tags) | **GET** /api/v2.0/issues/tags | Get the tags on issues
*IssuesV2Api* | [**get_issue_tickets**](docs/IssuesV2Api.md#get_issue_tickets) | **GET** /api/v2.0/issues/tickets | Get the tickets on issues
*IssuesV2Api* | [**get_issues_additional_data**](docs/IssuesV2Api.md#get_issues_additional_data) | **GET** /api/v2.0/issues/additional-data | Get additional data on issues by their ids
*IssuesV2Api* | [**get_issues_count**](docs/IssuesV2Api.md#get_issues_count) | **GET** /api/v2.0/issues/count | Get the issues count
*IssuesV2Api* | [**get_issues_v2**](docs/IssuesV2Api.md#get_issues_v2) | **GET** /api/v2.0/issues | Get issues
*IssuesV2Api* | [**get_issues_vulnerabilities**](docs/IssuesV2Api.md#get_issues_vulnerabilities) | **GET** /api/v2.0/issues/vulnerabilities | Get the vulnerabilities of the given issues
*ModelsApi* | [**api_v20_ai_models_by_url_get**](docs/ModelsApi.md#api_v20_ai_models_by_url_get) | **GET** /api/v2.0/ai-models/by-url | Get model reputation by hugging face url
*ModelsApi* | [**api_v20_ai_models_get**](docs/ModelsApi.md#api_v20_ai_models_get) | **GET** /api/v2.0/ai-models | Get model reputation by name
*PoliciesApi* | [**api_v10_policies_get**](docs/PoliciesApi.md#api_v10_policies_get) | **GET** /api/v1.0/policies | Get Policies
*PoliciesApi* | [**get_policy_by_id**](docs/PoliciesApi.md#get_policy_by_id) | **GET** /api/v1.0/policies/{id} | Get Policy by id
*PoliciesApi* | [**get_policy_by_name**](docs/PoliciesApi.md#get_policy_by_name) | **GET** /api/v1.0/policies/by-name/{name} | Get Policy by name
*ProductUnitsApi* | [**api_v10_products_get**](docs/ProductUnitsApi.md#api_v10_products_get) | **GET** /api/v1.0/products | Get all Product Units
*ProductUnitsApi* | [**api_v10_products_id_assets_get**](docs/ProductUnitsApi.md#api_v10_products_id_assets_get) | **GET** /api/v1.0/products/{id}/assets | Get Product Unit Assets
*ProductUnitsApi* | [**api_v10_products_id_custom_fields_by_name_patch**](docs/ProductUnitsApi.md#api_v10_products_id_custom_fields_by_name_patch) | **PATCH** /api/v1.0/products/{id}/custom-fields-by-name | Update Product Unit Custom Fields By Custom Field Name
*ProductUnitsApi* | [**api_v10_products_id_custom_fields_patch**](docs/ProductUnitsApi.md#api_v10_products_id_custom_fields_patch) | **PATCH** /api/v1.0/products/{id}/custom-fields | Update Product Unit Custom Fields by ID
*ProductUnitsApi* | [**api_v10_products_id_delete**](docs/ProductUnitsApi.md#api_v10_products_id_delete) | **DELETE** /api/v1.0/products/{id} | Delete Product Unit
*ProductUnitsApi* | [**api_v10_products_id_get**](docs/ProductUnitsApi.md#api_v10_products_id_get) | **GET** /api/v1.0/products/{id} | Get Product Unit by id
*ProductUnitsApi* | [**api_v10_products_id_patch**](docs/ProductUnitsApi.md#api_v10_products_id_patch) | **PATCH** /api/v1.0/products/{id} | Update Product Unit
*ProductUnitsApi* | [**api_v10_products_issuescount_get**](docs/ProductUnitsApi.md#api_v10_products_issuescount_get) | **GET** /api/v1.0/products/issuescount | Get product units issue count by severity
*ProductUnitsApi* | [**api_v10_products_post**](docs/ProductUnitsApi.md#api_v10_products_post) | **POST** /api/v1.0/products | Create new Product Unit
*ProductUnitsApi* | [**api_v10_products_ticket_template_post**](docs/ProductUnitsApi.md#api_v10_products_ticket_template_post) | **POST** /api/v1.0/products/ticket-template | Set ticket template for a product unit
*RepositoriesApi* | [**api_v10_repositories_get**](docs/RepositoriesApi.md#api_v10_repositories_get) | **GET** /api/v1.0/repositories | Get all repositories
*RepositoriesApi* | [**api_v10_repositories_id_controls_get**](docs/RepositoriesApi.md#api_v10_repositories_id_controls_get) | **GET** /api/v1.0/repositories/{id}/controls | Get repository controls
*RepositoriesApi* | [**api_v10_repositories_id_get**](docs/RepositoriesApi.md#api_v10_repositories_id_get) | **GET** /api/v1.0/repositories/{id} | Get repository by id
*RepositoriesApi* | [**api_v10_repositories_id_owner_delete**](docs/RepositoriesApi.md#api_v10_repositories_id_owner_delete) | **DELETE** /api/v1.0/repositories/{id}/owner | Delete owner from repository
*RepositoriesApi* | [**api_v10_repositories_id_owner_owner_id_put**](docs/RepositoriesApi.md#api_v10_repositories_id_owner_owner_id_put) | **PUT** /api/v1.0/repositories/{id}/owner/{ownerId} | Set owner to repository
*RepositoriesApi* | [**api_v10_repositories_id_score_get**](docs/RepositoriesApi.md#api_v10_repositories_id_score_get) | **GET** /api/v1.0/repositories/{id}/score | Get repository score
*RepositoriesApi* | [**api_v10_repositories_repo_id_tags_tag_id_delete**](docs/RepositoriesApi.md#api_v10_repositories_repo_id_tags_tag_id_delete) | **DELETE** /api/v1.0/repositories/{repoId}/tags/{tagId} | Remove tag from repository
*RepositoriesApi* | [**api_v10_repositories_repo_id_tags_tag_id_post**](docs/RepositoriesApi.md#api_v10_repositories_repo_id_tags_tag_id_post) | **POST** /api/v1.0/repositories/{repoId}/tags/{tagId} | Add tag to repository
*RepositoriesApi* | [**api_v10_repositories_score_get**](docs/RepositoriesApi.md#api_v10_repositories_score_get) | **GET** /api/v1.0/repositories/score | Get repositories score
*RepositoryGroupsApi* | [**api_v10_repository_groups_get**](docs/RepositoryGroupsApi.md#api_v10_repository_groups_get) | **GET** /api/v1.0/repository-groups | Get all repository groups
*RepositoryGroupsApi* | [**api_v10_repository_groups_id_get**](docs/RepositoryGroupsApi.md#api_v10_repository_groups_id_get) | **GET** /api/v1.0/repository-groups/{id} | Get repository group by id
*SDLCAssetsApi* | [**api_v10_sdlc_assets_discovered_get**](docs/SDLCAssetsApi.md#api_v10_sdlc_assets_discovered_get) | **GET** /api/v1.0/sdlc-assets/discovered | Get all discovered SDLC assets
*SDLCAssetsApi* | [**api_v10_sdlc_assets_graph_evidence_by_ids_get**](docs/SDLCAssetsApi.md#api_v10_sdlc_assets_graph_evidence_by_ids_get) | **GET** /api/v1.0/sdlc-assets/graph/evidence/by-ids | Get SDLC asset graph evidence by Ids
*SDLCAssetsApi* | [**api_v10_sdlc_assets_graph_get**](docs/SDLCAssetsApi.md#api_v10_sdlc_assets_graph_get) | **GET** /api/v1.0/sdlc-assets/graph | Get SDLC asset graph
*SavedQueriesApi* | [**api_v10_saved_queries_saved_query_id_results_get**](docs/SavedQueriesApi.md#api_v10_saved_queries_saved_query_id_results_get) | **GET** /api/v1.0/saved-queries/{savedQueryId}/results | Get Saved Queries Results
*TagsApi* | [**api_v10_tags_get**](docs/TagsApi.md#api_v10_tags_get) | **GET** /api/v1.0/tags | Get all tags
*TagsApi* | [**api_v10_tags_post**](docs/TagsApi.md#api_v10_tags_post) | **POST** /api/v1.0/tags | Create tag
*TagsApi* | [**api_v10_tags_tag_id_delete**](docs/TagsApi.md#api_v10_tags_tag_id_delete) | **DELETE** /api/v1.0/tags/{tagId} | Delete tag
*WorkspacesApi* | [**api_v10_workspaces_get**](docs/WorkspacesApi.md#api_v10_workspaces_get) | **GET** /api/v1.0/workspaces | Get all workspaces
*WorkspacesApi* | [**api_v10_workspaces_group_post**](docs/WorkspacesApi.md#api_v10_workspaces_group_post) | **POST** /api/v1.0/workspaces/group | Create new workspace group
*WorkspacesApi* | [**api_v10_workspaces_hierarchy_get**](docs/WorkspacesApi.md#api_v10_workspaces_hierarchy_get) | **GET** /api/v1.0/workspaces/hierarchy | Get workspace hierarchy
*WorkspacesApi* | [**api_v10_workspaces_id_delete**](docs/WorkspacesApi.md#api_v10_workspaces_id_delete) | **DELETE** /api/v1.0/workspaces/{id} | Delete workspace
*WorkspacesApi* | [**api_v10_workspaces_id_get**](docs/WorkspacesApi.md#api_v10_workspaces_id_get) | **GET** /api/v1.0/workspaces/{id} | Get workspace by id
*WorkspacesApi* | [**api_v10_workspaces_permissions_post**](docs/WorkspacesApi.md#api_v10_workspaces_permissions_post) | **POST** /api/v1.0/workspaces/permissions | Add permissions to workspace/group
*WorkspacesApi* | [**api_v10_workspaces_post**](docs/WorkspacesApi.md#api_v10_workspaces_post) | **POST** /api/v1.0/workspaces | Create new workspace
*WorkspacesApi* | [**api_v10_workspaces_workspace_id_integrations_get**](docs/WorkspacesApi.md#api_v10_workspaces_workspace_id_integrations_get) | **GET** /api/v1.0/workspaces/{workspaceId}/integrations | Get integrations by workspace id
## Documentation For Models
- [AddIssueCommentDto](docs/AddIssueCommentDto.md)
- [AiModelReputationDto](docs/AiModelReputationDto.md)
- [AiSecretValidationResult](docs/AiSecretValidationResult.md)
- [AmazonEcrKeyIntegrationCreateDto](docs/AmazonEcrKeyIntegrationCreateDto.md)
- [AmazonEcrKeyIntegrationEditDto](docs/AmazonEcrKeyIntegrationEditDto.md)
- [AmazonEcrRoleIntegrationCreateDto](docs/AmazonEcrRoleIntegrationCreateDto.md)
- [AmazonEcrRoleIntegrationEditDto](docs/AmazonEcrRoleIntegrationEditDto.md)
- [ApplyBulkWorkspaceOperationDto](docs/ApplyBulkWorkspaceOperationDto.md)
- [ApplyBulkWorkspaceRolesOperationDto](docs/ApplyBulkWorkspaceRolesOperationDto.md)
- [AssetConnectionDto](docs/AssetConnectionDto.md)
- [AutomaticCheckDto](docs/AutomaticCheckDto.md)
- [AwsRegion](docs/AwsRegion.md)
- [AzureContainerRegistryIntegrationCreateDto](docs/AzureContainerRegistryIntegrationCreateDto.md)
- [AzureContainerRegistryIntegrationEditDto](docs/AzureContainerRegistryIntegrationEditDto.md)
- [BasicIssue](docs/BasicIssue.md)
- [BrokerConnectionStatus](docs/BrokerConnectionStatus.md)
- [BrokerDto](docs/BrokerDto.md)
- [BrokerStatus](docs/BrokerStatus.md)
- [BusinessImpact](docs/BusinessImpact.md)
- [Category](docs/Category.md)
- [ClosingReason](docs/ClosingReason.md)
- [CloudInstanceDto](docs/CloudInstanceDto.md)
- [CloudInstancesToContainersOperationDto](docs/CloudInstancesToContainersOperationDto.md)
- [CollaboratorDto](docs/CollaboratorDto.md)
- [CollaboratorRepositoryPermission](docs/CollaboratorRepositoryPermission.md)
- [CommercialUseAllowance](docs/CommercialUseAllowance.md)
- [ComplianceCriteriaDto](docs/ComplianceCriteriaDto.md)
- [ComplianceDto](docs/ComplianceDto.md)
- [ComplianceReportDto](docs/ComplianceReportDto.md)
- [ComplianceReportIntegrationDto](docs/ComplianceReportIntegrationDto.md)
- [ComplianceReportRequestDto](docs/ComplianceReportRequestDto.md)
- [ComplianceReportRequirementDto](docs/ComplianceReportRequirementDto.md)
- [ComplianceReportWorkspaceDto](docs/ComplianceReportWorkspaceDto.md)
- [ComplianceRequirementDto](docs/ComplianceRequirementDto.md)
- [ComplianceWithCriteriasDto](docs/ComplianceWithCriteriasDto.md)
- [ContainerImageDto](docs/ContainerImageDto.md)
- [ContainerImageVersionDto](docs/ContainerImageVersionDto.md)
- [ContainerToCloudResourceOperationDto](docs/ContainerToCloudResourceOperationDto.md)
- [ContainerToRepositoryOperationDto](docs/ContainerToRepositoryOperationDto.md)
- [ControlClassification](docs/ControlClassification.md)
- [ControlSourceType](docs/ControlSourceType.md)
- [ControlType](docs/ControlType.md)
- [CorrelatedCloudInstance](docs/CorrelatedCloudInstance.md)
- [CorrelatedRepositoryDto](docs/CorrelatedRepositoryDto.md)
- [CreateAutomaticCheckDto](docs/CreateAutomaticCheckDto.md)
- [CreateIssuesTagDto](docs/CreateIssuesTagDto.md)
- [CreateProductUnitBadRequestDto](docs/CreateProductUnitBadRequestDto.md)
- [CreateProductUnitDto](docs/CreateProductUnitDto.md)
- [CreateTagDto](docs/CreateTagDto.md)
- [CreateWorkspaceDto](docs/CreateWorkspaceDto.md)
- [CreateWorkspaceGroupDto](docs/CreateWorkspaceGroupDto.md)
- [CustomFieldBoolTypeDto](docs/CustomFieldBoolTypeDto.md)
- [CustomFieldDateTypeDto](docs/CustomFieldDateTypeDto.md)
- [CustomFieldDto](docs/CustomFieldDto.md)
- [CustomFieldDtoType](docs/CustomFieldDtoType.md)
- [CustomFieldEntity](docs/CustomFieldEntity.md)
- [CustomFieldFileTypeDto](docs/CustomFieldFileTypeDto.md)
- [CustomFieldIdentityTypeDto](docs/CustomFieldIdentityTypeDto.md)
- [CustomFieldNumberTypeDto](docs/CustomFieldNumberTypeDto.md)
- [CustomFieldTextTypeDto](docs/CustomFieldTextTypeDto.md)
- [CustomFieldTicketTemplateTypeDto](docs/CustomFieldTicketTemplateTypeDto.md)
- [CustomFieldType](docs/CustomFieldType.md)
- [CustomerFacingDependencyDto](docs/CustomerFacingDependencyDto.md)
- [CustomerFacingDependencyDtoCustomerFacingCursorPagedDto](docs/CustomerFacingDependencyDtoCustomerFacingCursorPagedDto.md)
- [CustomerFacingDependencyLicense](docs/CustomerFacingDependencyLicense.md)
- [CustomerFacingIssueActionDto](docs/CustomerFacingIssueActionDto.md)
- [CustomerFacingIssueDto](docs/CustomerFacingIssueDto.md)
- [CustomerFacingIssueToActionHistoryDto](docs/CustomerFacingIssueToActionHistoryDto.md)
- [CustomerFacingIssueToAdditionalDataDto](docs/CustomerFacingIssueToAdditionalDataDto.md)
- [CustomerFacingIssueToCommentsDto](docs/CustomerFacingIssueToCommentsDto.md)
- [CustomerFacingIssueToRemediationDto](docs/CustomerFacingIssueToRemediationDto.md)
- [CustomerFacingIssueToTagsDto](docs/CustomerFacingIssueToTagsDto.md)
- [CustomerFacingIssueToTicketsDto](docs/CustomerFacingIssueToTicketsDto.md)
- [CustomerFacingIssueToVulnerabilityDto](docs/CustomerFacingIssueToVulnerabilityDto.md)
- [CustomerFacingIssueVulnerabilityDto](docs/CustomerFacingIssueVulnerabilityDto.md)
- [CustomerFacingIssuesPageDto](docs/CustomerFacingIssuesPageDto.md)
- [CustomerFacingLicenseMetadata](docs/CustomerFacingLicenseMetadata.md)
- [CveSeverity](docs/CveSeverity.md)
- [DastConfidenceLevel](docs/DastConfidenceLevel.md)
- [DastDataDto](docs/DastDataDto.md)
- [DependencyCategory](docs/DependencyCategory.md)
- [DependencyDeclaration](docs/DependencyDeclaration.md)
- [DependencyFixType](docs/DependencyFixType.md)
- [DependencyVulnerabilityDataDto](docs/DependencyVulnerabilityDataDto.md)
- [DetailedSdlcAssetInformationDto](docs/DetailedSdlcAssetInformationDto.md)
- [DiscoveredSdlcAssetDto](docs/DiscoveredSdlcAssetDto.md)
- [DiscoveryConnectionEvidenceType](docs/DiscoveryConnectionEvidenceType.md)
- [FrameworkPolicyType](docs/FrameworkPolicyType.md)
- [GetCustomFieldsResponseDto](docs/GetCustomFieldsResponseDto.md)
- [GetSdlcAssetGraphEvidenceByIdsResponseDto](docs/GetSdlcAssetGraphEvidenceByIdsResponseDto.md)
- [GoogleContainerRegistryIntegrationCreateDto](docs/GoogleContainerRegistryIntegrationCreateDto.md)
- [GoogleContainerRegistryIntegrationEditDto](docs/GoogleContainerRegistryIntegrationEditDto.md)
- [IntegrationBadResponseDto](docs/IntegrationBadResponseDto.md)
- [IntegrationDto](docs/IntegrationDto.md)
- [IntegrationError](docs/IntegrationError.md)
- [IntegrationFailingReason](docs/IntegrationFailingReason.md)
- [IntegrationManagementDto](docs/IntegrationManagementDto.md)
- [IntegrationStatus](docs/IntegrationStatus.md)
- [IntegrationType](docs/IntegrationType.md)
- [IssueActionType](docs/IssueActionType.md)
- [IssueAssignment](docs/IssueAssignment.md)
- [IssueClosingLocationDto](docs/IssueClosingLocationDto.md)
- [IssueCommentDto](docs/IssueCommentDto.md)
- [IssueCountBySeverityDto](docs/IssueCountBySeverityDto.md)
- [IssueDto](docs/IssueDto.md)
- [IssueIgnoringReasonDto](docs/IssueIgnoringReasonDto.md)
- [IssueOpeningReasonDto](docs/IssueOpeningReasonDto.md)
- [IssueOriginDto](docs/IssueOriginDto.md)
- [IssueOriginParams](docs/IssueOriginParams.md)
- [IssueReachability](docs/IssueReachability.md)
- [IssueSortingColumn](docs/IssueSortingColumn.md)
- [IssueStatus](docs/IssueStatus.md)
- [IssueTagDto](docs/IssueTagDto.md)
- [IssueTicketingDto](docs/IssueTicketingDto.md)
- [IssueType](docs/IssueType.md)
- [IssuesTagDto](docs/IssuesTagDto.md)
- [JenkinsIntegrationCreateDto](docs/JenkinsIntegrationCreateDto.md)
- [JenkinsIntegrationEditDto](docs/JenkinsIntegrationEditDto.md)
- [LegitScoreCategoryDto](docs/LegitScoreCategoryDto.md)
- [LegitScoreDto](docs/LegitScoreDto.md)
- [LegitScoreGrade](docs/LegitScoreGrade.md)
- [LegitScoreRequirementDto](docs/LegitScoreRequirementDto.md)
- [LegitScoreRequirementGroupType](docs/LegitScoreRequirementGroupType.md)
- [LegitScoreRequirementType](docs/LegitScoreRequirementType.md)
- [LicenseCopyleftType](docs/LicenseCopyleftType.md)
- [LicenseRestrictionLevel](docs/LicenseRestrictionLevel.md)
- [ListSortDirection](docs/ListSortDirection.md)
- [ManualCheckDto](docs/ManualCheckDto.md)
- [ModelReputation](docs/ModelReputation.md)
- [OriginType](docs/OriginType.md)
- [PackageSource](docs/PackageSource.md)
- [PackageType](docs/PackageType.md)
- [PatchLegitIssueAssigneeDto](docs/PatchLegitIssueAssigneeDto.md)
- [PatchLegitIssueStatusDto](docs/PatchLegitIssueStatusDto.md)
- [PatchLegitIssueUnifiedStatusDto](docs/PatchLegitIssueUnifiedStatusDto.md)
- [PatchLegitIssuesStatusDto](docs/PatchLegitIssuesStatusDto.md)
- [PatchProductUnitCustomFieldByName](docs/PatchProductUnitCustomFieldByName.md)
- [PatchProductUnitCustomFieldDto](docs/PatchProductUnitCustomFieldDto.md)
- [PatchProductUnitCustomFieldsByNameDto](docs/PatchProductUnitCustomFieldsByNameDto.md)
- [PatchProductUnitCustomFieldsDto](docs/PatchProductUnitCustomFieldsDto.md)
- [PatchProductUnitDto](docs/PatchProductUnitDto.md)
- [PatchSecurityChampionIdDto](docs/PatchSecurityChampionIdDto.md)
- [PermissionMetaType](docs/PermissionMetaType.md)
- [PolicyDto](docs/PolicyDto.md)
- [ProblemDetails](docs/ProblemDetails.md)
- [ProductConnectionType](docs/ProductConnectionType.md)
- [ProductTreeNodeDto](docs/ProductTreeNodeDto.md)
- [ProductUnitAssetDto](docs/ProductUnitAssetDto.md)
- [ProductUnitDto](docs/ProductUnitDto.md)
- [ProductUnitDtoCustomFieldsValue](docs/ProductUnitDtoCustomFieldsValue.md)
- [ProductUnitEnvironment](docs/ProductUnitEnvironment.md)
- [ProductUnitIssueDto](docs/ProductUnitIssueDto.md)
- [ProductUnitNameDto](docs/ProductUnitNameDto.md)
- [ProductUnitType](docs/ProductUnitType.md)
- [ProgrammingLanguage](docs/ProgrammingLanguage.md)
- [RepositoriesToContainersOperationDto](docs/RepositoriesToContainersOperationDto.md)
- [RepositoryAutomaticBusinessImpactFactor](docs/RepositoryAutomaticBusinessImpactFactor.md)
- [RepositoryContextFieldDto](docs/RepositoryContextFieldDto.md)
- [RepositoryControlDto](docs/RepositoryControlDto.md)
- [RepositoryDirectory](docs/RepositoryDirectory.md)
- [RepositoryDto](docs/RepositoryDto.md)
- [RepositoryGroupDto](docs/RepositoryGroupDto.md)
- [RepositoryVisibility](docs/RepositoryVisibility.md)
- [SbomFormat](docs/SbomFormat.md)
- [ScmType](docs/ScmType.md)
- [SdlcAssetDto](docs/SdlcAssetDto.md)
- [SdlcAssetGraphAssetDto](docs/SdlcAssetGraphAssetDto.md)
- [SdlcAssetGraphDto](docs/SdlcAssetGraphDto.md)
- [SdlcAssetGraphEvidenceDto](docs/SdlcAssetGraphEvidenceDto.md)
- [SdlcAssetGraphLinkDto](docs/SdlcAssetGraphLinkDto.md)
- [SdlcAssetGraphLinkDtoEvidences](docs/SdlcAssetGraphLinkDtoEvidences.md)
- [SdlcAssetMetaType](docs/SdlcAssetMetaType.md)
- [SdlcAssetType](docs/SdlcAssetType.md)
- [SecretIssueValidityStatus](docs/SecretIssueValidityStatus.md)
- [SecretsDataDto](docs/SecretsDataDto.md)
- [SetProductUnitTicketTemplateDto](docs/SetProductUnitTicketTemplateDto.md)
- [Severity](docs/Severity.md)
- [SnoozedType](docs/SnoozedType.md)
- [SourceDto](docs/SourceDto.md)
- [StringCustomerFacingCursorPagedDto](docs/StringCustomerFacingCursorPagedDto.md)
- [TagDto](docs/TagDto.md)
- [TagSource](docs/TagSource.md)
- [UserDto](docs/UserDto.md)
- [UserPermission](docs/UserPermission.md)
- [UserRole](docs/UserRole.md)
- [VulnerabilityType](docs/VulnerabilityType.md)
- [WorkspaceCreatedDto](docs/WorkspaceCreatedDto.md)
- [WorkspaceDto](docs/WorkspaceDto.md)
- [WorkspaceGroupDto](docs/WorkspaceGroupDto.md)
- [WorkspaceGroupTreeNodeDto](docs/WorkspaceGroupTreeNodeDto.md)
- [WorkspaceHierarchyDto](docs/WorkspaceHierarchyDto.md)
- [WorkspaceTreeNodeDto](docs/WorkspaceTreeNodeDto.md)
- [WorkspaceType](docs/WorkspaceType.md)
<a id="documentation-for-authorization"></a>
## Documentation For Authorization
Authentication schemes defined for the API:
<a id="BearerAuth"></a>
### BearerAuth
- **Type**: Bearer authentication (JWT)
## Author
| text/markdown | OpenAPI Generator community | OpenAPI Generator Community <team@openapitools.org> | null | null | null | OpenAPI, OpenAPI-Generator, Inventory | [] | [] | null | null | >=3.9 | [] | [] | [] | [
"urllib3<3.0.0,>=2.1.0",
"python-dateutil>=2.8.2",
"pydantic>=2",
"typing-extensions>=4.7.1"
] | [] | [] | [] | [
"Repository, https://github.com/GIT_USER_ID/GIT_REPO_ID"
] | twine/6.2.0 CPython/3.9.25 | 2026-02-19T13:54:09.575319 | legit_api_client-1.1.4668.tar.gz | 181,076 | b4/39/cf917a79e325520cab6becbe4364cea89dbfa6de8f26eb80ea1a5bf6a021/legit_api_client-1.1.4668.tar.gz | source | sdist | null | false | c9320dc86d20fea79cf30341b58cd6d0 | 608fe4267db71c0b7c090f9fa8c75c33a1968c628fde724165b6147777a4aa59 | b439cf917a79e325520cab6becbe4364cea89dbfa6de8f26eb80ea1a5bf6a021 | MIT | [] | 259 |
2.4 | lara-sdk | 1.6.4 | A Python library for Lara's API. | # Lara Python SDK
[](https://python.org)
[](LICENSE)
This SDK empowers you to build your own branded translation AI leveraging our translation fine-tuned language model.
All major translation features are accessible, making it easy to integrate and customize for your needs.
## 🌍 **Features:**
- **Text Translation**: Single strings, multiple strings, and complex text blocks
- **Document Translation**: Word, PDF, and other document formats with status monitoring
- **Translation Memory**: Store and reuse translations for consistency
- **Glossaries**: Enforce terminology standards across translations
- **Language Detection**: Automatic source language identification
- **Advanced Options**: Translation instructions and more
## 📚 Documentation
The full Lara SDK documentation is available at [https://developers.laratranslate.com/](https://developers.laratranslate.com/)
## 🚀 Quick Start
### Installation
```bash
pip install lara-sdk
```
### Basic Usage
```python
import os
from lara_sdk import Credentials, Translator
# Set your credentials using environment variables (recommended)
credentials = Credentials(
os.environ.get('LARA_ACCESS_KEY_ID'),
os.environ.get('LARA_ACCESS_KEY_SECRET')
)
# Create translator instance
lara = Translator(credentials)
# Simple text translation
try:
result = lara.translate("Hello, world!", target="fr-FR", source="en-US")
print(f"Translation: {result.translation}")
# Output: Translation: Bonjour, le monde !
except Exception as error:
print(f"Translation error: {error}")
```
## 📖 Examples
The `examples/` directory contains comprehensive examples for all SDK features.
**All examples use environment variables for credentials, so set them first:**
```bash
export LARA_ACCESS_KEY_ID="your-access-key-id"
export LARA_ACCESS_KEY_SECRET="your-access-key-secret"
```
### Text Translation
- **[text_translation.py](examples/text_translation.py)** - Complete text translation examples
- Single string translation
- Multiple strings translation
- Translation with instructions
- TextBlocks translation (mixed translatable/non-translatable content)
- Auto-detect source language
- Advanced translation options
- Get available languages
```bash
cd examples
python text_translation.py
```
### Document Translation
- **[document_translation.py](examples/document_translation.py)** - Document translation examples
- Basic document translation
- Advanced options with memories and glossaries
- Step-by-step translation with status monitoring
```bash
cd examples
python document_translation.py
```
### Translation Memory Management
- **[memories_management.py](examples/memories_management.py)** - Memory management examples
- Create, list, update, delete memories
- Add individual translations
- Multiple memory operations
- TMX file import with progress monitoring
- Translation deletion
- Translation with TUID and context
```bash
cd examples
python memories_management.py
```
### Glossary Management
- **[glossaries_management.py](examples/glossaries_management.py)** - Glossary management examples
- Create, list, update, delete glossaries
- CSV import with status monitoring
- Glossary export
- Glossary terms count
- Import status checking
```bash
cd examples
python glossaries_management.py
```
### Language Detection
- **[language_detection.py](examples/language_detection.py)** - Language detection examples
- Single string detection
- Multiple strings detection
- Detection with hint parameter
- Detection with passlist to restrict languages
- Combined hint and passlist
```bash
cd examples
python language_detection.py
```
## 🔧 API Reference
### Core Components
### 🔐 Authentication
The SDK supports authentication via access key and secret:
```python
from lara_sdk import Credentials, Translator
credentials = Credentials("your-access-key-id", "your-access-key-secret")
lara = Translator(credentials)
```
**Environment Variables (Recommended):**
```bash
export LARA_ACCESS_KEY_ID="your-access-key-id"
export LARA_ACCESS_KEY_SECRET="your-access-key-secret"
```
```python
import os
from lara_sdk import Credentials
credentials = Credentials(
os.environ['LARA_ACCESS_KEY_ID'],
os.environ['LARA_ACCESS_KEY_SECRET']
)
```
**Alternative Constructor:**
```python
# You can also pass credentials directly to Translator
lara = Translator(
access_key_id="your-access-key-id",
access_key_secret="your-access-key-secret"
)
```
### 🌍 Translator
```python
# Create translator with credentials
lara = Translator(credentials)
```
#### Text Translation
```python
# Basic translation
result = lara.translate("Hello", target="fr-FR", source="en-US")
# Multiple strings
result = lara.translate(["Hello", "World"], target="fr-FR", source="en-US")
# TextBlocks (mixed translatable/non-translatable content)
from lara_sdk import TextBlock
text_blocks = [
TextBlock(text="Translatable text", translatable=True),
TextBlock(text="<br>", translatable=False), # Non-translatable HTML
TextBlock(text="More translatable text", translatable=True)
]
result = lara.translate(text_blocks, target="fr-FR", source="en-US")
# With advanced options
result = lara.translate(
"Hello",
target="fr-FR",
source="en-US",
instructions=["Formal tone"],
adapt_to=["memory-id"], # Replace with actual memory IDs
glossaries=["glossary-id"], # Replace with actual glossary IDs
style="fluid",
timeout_ms=10000
)
```
### 📖 Document Translation
#### Simple document translation
```python
translated_content = lara.documents.translate(
file_path="/path/to/your/document.txt", # Replace with actual file path
filename="document.txt",
source="en-US",
target="fr-FR"
)
# With options
translated_content = lara.documents.translate(
file_path="/path/to/your/document.txt", # Replace with actual file path
filename="document.txt",
source="en-US",
target="fr-FR",
adapt_to=["mem_1A2b3C4d5E6f7G8h9I0jKl"], # Replace with actual memory IDs
glossaries=["gls_1A2b3C4d5E6f7G8h9I0jKl"], # Replace with actual glossary IDs
style="fluid"
)
```
### Document translation with status monitoring
#### Document upload
```python
# Optional: upload options
document = lara.documents.upload(
file_path="/path/to/your/document.txt", # Replace with actual file path
filename="document.txt",
source="en-US",
target="fr-FR",
adapt_to=["mem_1A2b3C4d5E6f7G8h9I0jKl"], # Replace with actual memory IDs
glossaries=["gls_1A2b3C4d5E6f7G8h9I0jKl"] # Replace with actual glossary IDs
)
```
#### Document translation status monitoring
```python
status = lara.documents.status(document.id)
```
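Status monitoring is usually wrapped in a polling loop with a timeout. Below is a minimal, stdlib-only sketch of that pattern; the `_StubDocuments` class and the `"translated"` status string are illustrative stand-ins, not the real SDK objects, so adapt the check to whatever the actual status call returns:

```python
import time

class _StubDocuments:
    # Illustrative stub standing in for lara.documents; not the real SDK class.
    def __init__(self):
        self._calls = 0

    def status(self, document_id):
        self._calls += 1
        return "translated" if self._calls >= 3 else "translating"

def wait_until_translated(documents, document_id, timeout_s=300, poll_s=1.0):
    # Poll the status endpoint until the document is done or the timeout elapses.
    deadline = time.monotonic() + timeout_s
    while time.monotonic() < deadline:
        state = documents.status(document_id)
        if state == "translated":
            return state
        time.sleep(poll_s)
    raise TimeoutError(f"document {document_id} not translated in time")

final_state = wait_until_translated(_StubDocuments(), "doc-123", poll_s=0)
```

With the real client you would pass `lara.documents` and `document.id` instead of the stub.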
#### Download translated document
```python
translated_content = lara.documents.download(document.id)
```
### 🧠 Memory Management
```python
# Create memory
memory = lara.memories.create("MyMemory")
# Create memory with external ID (MyMemory integration)
memory = lara.memories.create("Memory from MyMemory", external_id="aabb1122") # Replace with actual external ID
# Important: To update/overwrite a translation unit you must provide a tuid. Calls without a tuid always create a new unit and will not update existing entries.
# Add translation to single memory
memory_import = lara.memories.add_translation("mem_1A2b3C4d5E6f7G8h9I0jKl", "en-US", "fr-FR", "Hello", "Bonjour", tuid="greeting_001")
# Add translation to multiple memories
memory_import = lara.memories.add_translation(["mem_1A2b3C4d5E6f7G8h9I0jKl", "mem_2XyZ9AbC8dEf7GhI6jKlMn"], "en-US", "fr-FR", "Hello", "Bonjour", tuid="greeting_002")
# Add with context
memory_import = lara.memories.add_translation(
"mem_1A2b3C4d5E6f7G8h9I0jKl", "en-US", "fr-FR", "Hello", "Bonjour",
tuid="tuid", sentence_before="sentenceBefore", sentence_after="sentenceAfter"
)
# TMX import from file
memory_import = lara.memories.import_tmx("mem_1A2b3C4d5E6f7G8h9I0jKl", "/path/to/your/memory.tmx") # Replace with actual TMX file path
# Delete translation
# Important: if you omit tuid, all entries that match the provided fields will be removed
delete_job = lara.memories.delete_translation(
"mem_1A2b3C4d5E6f7G8h9I0jKl", "en-US", "fr-FR", "Hello", "Bonjour", tuid="greeting_001"
)
# Wait for import completion
completed_import = lara.memories.wait_for_import(memory_import, max_wait_time=300) # 5 minutes
```
### 📚 Glossary Management
```python
# Create glossary
glossary = lara.glossaries.create("MyGlossary")
# Import CSV from file
glossary_import = lara.glossaries.import_csv("gls_1A2b3C4d5E6f7G8h9I0jKl", "/path/to/your/glossary.csv") # Replace with actual CSV file path
# Check import status
import_status = lara.glossaries.get_import_status(import_id)  # import_id from a previously started import job
# Wait for import completion
completed_import = lara.glossaries.wait_for_import(glossary_import, max_wait_time=300) # 5 minutes
# Export glossary
csv_data = lara.glossaries.export("gls_1A2b3C4d5E6f7G8h9I0jKl", "csv/table-uni", "en-US")
# Get glossary terms count
counts = lara.glossaries.counts("gls_1A2b3C4d5E6f7G8h9I0jKl")
```
### 🌐 Language Detection
```python
# Basic language detection
result = lara.detect("Hello, world!")
print(f"Detected language: {result.language}")
print(f"Content type: {result.content_type}")
# Detect multiple strings
result = lara.detect(["Hello", "Bonjour", "Hola"])
# Detection with hint
result = lara.detect("Hello", hint="en")
# Detection with passlist (restrict to specific languages)
result = lara.detect(
"Guten Tag",
passlist=["de-DE", "en-US", "fr-FR"]
)
# Combined hint and passlist
result = lara.detect(
"Buongiorno",
hint="it",
passlist=["it-IT", "es-ES", "pt-PT"]
)
```
### Translation Options
```python
result = lara.translate(
text,
target="fr-FR", # Target language (required)
source="en-US", # Source language (optional, auto-detect if None)
source_hint="en", # Hint for source language detection
adapt_to=["memory-id"], # Memory IDs to adapt to
glossaries=["glossary-id"], # Glossary IDs to use
instructions=["instruction"], # Translation instructions
style="fluid", # Translation style (fluid, faithful, creative)
content_type="text/plain", # Content type (text/plain, text/html, etc.)
multiline=True, # Enable multiline translation
timeout_ms=10000, # Request timeout in milliseconds
no_trace=False, # Disable request tracing
verbose=False, # Enable verbose response
)
```
### Language Codes
The SDK supports full language codes (e.g., `en-US`, `fr-FR`, `es-ES`) as well as simple codes (e.g., `en`, `fr`, `es`):
```python
# Full language codes (recommended)
result = lara.translate("Hello", target="fr-FR", source="en-US")
# Simple language codes
result = lara.translate("Hello", target="fr", source="en")
```
### 🌐 Supported Languages
The SDK supports all languages available in the Lara API. Use the `languages()` method to get the current list:
```python
languages = lara.languages()
print(f"Supported languages: {', '.join(languages)}")
```
## ⚙️ Configuration
### Error Handling
The SDK provides detailed error information:
```python
from lara_sdk import LaraApiError, LaraError
try:
result = lara.translate("Hello", target="fr-FR", source="en-US")
print(f"Translation: {result.translation}")
except LaraApiError as error:
print(f"API Error [{error.status_code}]: {error.message}")
print(f"Error type: {error.type}")
except LaraError as error:
print(f"SDK Error: {error}")
except Exception as error:
print(f"Unexpected error: {error}")
```
## 📋 Requirements
- Python 3.8 or higher
- pip
- Valid Lara API credentials
## 🧪 Testing
Run the examples to test your setup.
```bash
# All examples use environment variables for credentials, so set them first:
export LARA_ACCESS_KEY_ID="your-access-key-id"
export LARA_ACCESS_KEY_SECRET="your-access-key-secret"
```
```bash
# Run basic text translation example
cd examples
python text_translation.py
```
## 🏗️ Building from Source
```bash
# Clone the repository
git clone https://github.com/translated/lara-python.git
cd lara-python
# Install in development mode
pip install -e .
```
## 📄 License
This project is licensed under the MIT License - see the [LICENSE](LICENSE) file for details.
Happy translating! 🌍✨
| text/markdown | null | Translated <support@laratranslate.com> | null | Translated <support@laratranslate.com> | null | null | [
"Programming Language :: Python :: 3",
"License :: OSI Approved :: MIT License",
"Operating System :: OS Independent"
] | [] | null | null | >=3.8 | [] | [] | [] | [
"requests",
"gzip-stream"
] | [] | [] | [] | [
"Homepage, https://laratranslate.com/",
"Source, https://github.com/translated/lara-python"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-19T13:54:05.240809 | lara_sdk-1.6.4.tar.gz | 16,644 | 74/8e/92df6255a8e44953d919e419c9cfd56e367a76f955932b179084f1786a4e/lara_sdk-1.6.4.tar.gz | source | sdist | null | false | b51c364bf5a4e08813b1d51ceb23ac2e | fb35a3a0ec98a0deef507212814de0c8a747933357e130f8a8678a40876fb6c4 | 748e92df6255a8e44953d919e419c9cfd56e367a76f955932b179084f1786a4e | null | [
"LICENSE"
] | 280 |
2.4 | japan-trading-agents | 0.5.2 | Multi-agent AI trading analysis for Japanese stocks — powered by the Japan Finance Data Stack | # japan-trading-agents
> Multi-agent AI trading analysis for Japanese stocks — powered by real government data, not just LLM reasoning.
[](https://github.com/ajtgjmdjp/japan-trading-agents/actions/workflows/ci.yml)
[](https://pypi.org/project/japan-trading-agents/)
[](LICENSE)
[](https://pypi.org/project/japan-trading-agents/)
## What is this?
**9 AI agents** analyze Japanese stocks using **real financial data** from government and market data sources:
| Agent | Role | Data Source |
|-------|------|-------------|
| Fundamental Analyst | Financial statements, DuPont analysis | [EDINET](https://github.com/ajtgjmdjp/edinet-mcp) (有報) |
| Macro Analyst | GDP, CPI, interest rates | [e-Stat](https://github.com/ajtgjmdjp/estat-mcp) + [BOJ](https://github.com/ajtgjmdjp/boj-mcp) |
| Event Analyst | Earnings, dividends, M&A | [TDNet](https://github.com/ajtgjmdjp/tdnet-disclosure-mcp) (適時開示) |
| Sentiment Analyst | News sentiment scoring | [Japan News](https://github.com/ajtgjmdjp/japan-news-mcp) RSS |
| Technical Analyst | Price action, volume | Yahoo Finance (yfinance) |
| Bull Researcher | Builds the bullish case | All analyst reports |
| Bear Researcher | Challenges with risks | All analyst reports |
| Trader | BUY/SELL/HOLD decision | Debate + analysis |
| Risk Manager | Risk validation | Final approval |
### How it works
```
jta analyze 7203 (Toyota)
|
v
[Data Fetch] ── yfinance + 5 MCP sources in parallel
|
v
[5 Analysts] ── Fundamental, Macro, Event, Sentiment, Technical (parallel)
|
v
[Bull vs Bear Debate] ── Sequential argumentation
|
v
[Trader Decision] ── BUY / SELL / HOLD with confidence
|
v
[Risk Manager] ── Approve or reject with concerns
```
## Quick Start
```bash
# Install with all data sources
pip install "japan-trading-agents[all-data]"
# Set your LLM API key
export OPENAI_API_KEY=sk-...
# Analyze a stock
jta analyze 7203
```
## Use Any LLM
Powered by [litellm](https://github.com/BerriAI/litellm) — supports 100+ LLM providers:
```bash
# OpenAI
jta analyze 7203 --model gpt-4o
# Anthropic
jta analyze 7203 --model claude-sonnet-4-5-20250929
# Google
jta analyze 7203 --model gemini/gemini-2.0-flash
# Local (Ollama)
jta analyze 7203 --model ollama/llama3.2
# Any litellm-supported model
jta analyze 7203 --model deepseek/deepseek-chat
```
## CLI Commands
```bash
# Full analysis
jta analyze 7203
# With EDINET code override
jta analyze 7203 --edinet-code E02144
# Multi-round debate
jta analyze 7203 --debate-rounds 2
# JSON output
jta analyze 7203 --json-output
# Check data sources
jta check
# MCP server mode
jta serve
```
## Data Sources
Each data source is an independent MCP package. Install only what you need:
```bash
pip install japan-trading-agents # Core only
pip install "japan-trading-agents[edinet]" # + EDINET
pip install "japan-trading-agents[all-data]" # All 6 sources
```
| Source | Package | API Key Required |
|--------|---------|:---:|
| Yahoo Finance (stock prices) | `yfinance` (bundled) | No |
| EDINET (financial statements) | `edinet-mcp` | Yes (free) |
| TDNet (disclosures) | `tdnet-disclosure-mcp` | No |
| e-Stat (government statistics) | `estat-mcp` | Yes (free) |
| BOJ (central bank data) | `boj-mcp` | No |
| News (financial news RSS) | `japan-news-mcp` | No |
The system gracefully degrades — agents work with whatever sources are available.
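That degradation pattern — probe each optional package and keep only the ones that import — can be sketched with the standard library alone. The module names below are stand-ins (stdlib modules plus a deliberately missing one), not the actual MCP packages:

```python
import importlib

def available_sources(candidates):
    # Keep only the data-source modules that are actually importable;
    # a missing optional dependency is skipped rather than fatal.
    found = {}
    for name in candidates:
        try:
            found[name] = importlib.import_module(name)
        except ImportError:
            pass
    return found

# Stdlib modules stand in for the optional MCP packages here.
sources = available_sources(["json", "csv", "definitely_missing_mcp"])
```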
## Architecture
- **No LangChain/LangGraph** — pure `asyncio` for orchestration
- **litellm** for multi-provider LLM support (single interface, 100+ providers)
- **Pydantic** models for structured agent outputs
- **Rich** CLI with streaming progress
- **FastMCP** server for MCP integration
- All LLM calls are mocked in tests — **no API keys needed to run tests**
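The fan-out-then-debate flow (parallel analysts, sequential bull/bear argumentation, then a trader decision) can be sketched with plain `asyncio`; everything below is an illustrative stand-in for the package's internals, not its actual code:

```python
import asyncio

async def analyst(name: str) -> str:
    # Stand-in for one analyst agent; real agents call LLMs and data sources.
    await asyncio.sleep(0)
    return f"{name} report"

async def debate(reports: list[str]) -> list[str]:
    # Bull and bear argue sequentially over the pooled analyst reports.
    arguments = []
    for side in ("bull", "bear"):
        arguments.append(f"{side} case built on {len(reports)} reports")
    return arguments

async def run_pipeline() -> str:
    # Phase 1: the five analysts run concurrently.
    names = ["fundamental", "macro", "event", "sentiment", "technical"]
    reports = await asyncio.gather(*(analyst(n) for n in names))
    # Phase 2: sequential bull-vs-bear debate.
    arguments = await debate(list(reports))
    # Phase 3: a trivial trader decision; the real one is an LLM call.
    return "HOLD" if arguments else "ERROR"

decision = asyncio.run(run_pipeline())
```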
## Part of the Japan Finance Data Stack
| Layer | Tool | Description |
|-------|------|-------------|
| Corporate Filings | [edinet-mcp](https://github.com/ajtgjmdjp/edinet-mcp) | XBRL financial statements |
| Disclosures | [tdnet-disclosure-mcp](https://github.com/ajtgjmdjp/tdnet-disclosure-mcp) | Real-time corporate disclosures |
| Government Statistics | [estat-mcp](https://github.com/ajtgjmdjp/estat-mcp) | GDP, CPI, employment |
| Central Bank | [boj-mcp](https://github.com/ajtgjmdjp/boj-mcp) | Interest rates, money supply |
| Financial News | [japan-news-mcp](https://github.com/ajtgjmdjp/japan-news-mcp) | RSS news aggregation |
| Stock Prices | [yfinance](https://github.com/ranaroussi/yfinance) | Yahoo Finance (TSE coverage) |
| Benchmark | [jfinqa](https://github.com/ajtgjmdjp/jfinqa) | Japanese financial QA benchmark |
| **Analysis** | **japan-trading-agents** | **Multi-agent trading analysis** |
See [awesome-japan-finance-data](https://github.com/ajtgjmdjp/awesome-japan-finance-data) for a complete list of Japanese finance data resources.
## Development
```bash
git clone https://github.com/ajtgjmdjp/japan-trading-agents
cd japan-trading-agents
uv sync --extra dev
uv run pytest -v
uv run ruff check src tests
uv run mypy src
```
## Disclaimer
This is not financial advice. For educational and research purposes only. Do not make investment decisions based on this tool's output.
## License
Apache-2.0
| text/markdown | null | null | null | null | null | agent, ai, edinet, finance, japan, llm, mcp, multi-agent, stock, trading | [
"Development Status :: 3 - Alpha",
"Intended Audience :: Financial and Insurance Industry",
"Intended Audience :: Science/Research",
"License :: OSI Approved :: Apache Software License",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3... | [] | null | null | >=3.10 | [] | [] | [] | [
"click>=8.0",
"httpx>=0.27",
"litellm>=1.0",
"loguru>=0.7",
"pydantic>=2.0",
"rich>=13.0",
"yfinance>=0.2",
"boj-mcp>=0.2.0; extra == \"all-data\"",
"edinet-mcp>=0.6.0; extra == \"all-data\"",
"estat-mcp>=0.2.4; extra == \"all-data\"",
"tdnet-disclosure-mcp>=0.1.1; extra == \"all-data\"",
"boj... | [] | [] | [] | [
"Homepage, https://github.com/ajtgjmdjp/japan-trading-agents",
"Repository, https://github.com/ajtgjmdjp/japan-trading-agents",
"Issues, https://github.com/ajtgjmdjp/japan-trading-agents/issues"
] | uv/0.10.0 {"installer":{"name":"uv","version":"0.10.0","subcommand":["publish"]},"python":null,"implementation":{"name":null,"version":null},"distro":{"name":"macOS","version":null,"id":null,"libc":null},"system":{"name":null,"release":null},"cpu":null,"openssl_version":null,"setuptools_version":null,"rustc_version":null,"ci":null} | 2026-02-19T13:52:37.217143 | japan_trading_agents-0.5.2.tar.gz | 311,309 | b1/ee/4ee63b2634d9da8373f84eede41941f0d4088803e266b4f64c7715d1f3dd/japan_trading_agents-0.5.2.tar.gz | source | sdist | null | false | 45f4b8360c4d35a7b24ea11ef7a34407 | 749c035a18f9918f8751c7c5650bfd0ec3d693df75b651bd514c23cac5b91dbb | b1ee4ee63b2634d9da8373f84eede41941f0d4088803e266b4f64c7715d1f3dd | Apache-2.0 | [
"LICENSE"
] | 226 |
2.4 | pdbeccdutils | 1.0.3 | Toolkit to parse and process small molecules in wwPDB | [](https://www.codefactor.io/repository/github/PDBeurope/ccdutils/overview/master)     
# pdbeccdutils
An RDKit-based Python toolkit for parsing and processing small-molecule definitions in the [wwPDB Chemical Component Dictionary](https://www.wwpdb.org/data/ccd) and the [wwPDB Biologically Interesting Molecule Reference Dictionary](https://www.wwpdb.org/data/bird). `pdbeccdutils` provides streamlined access to all metadata of small molecules in the PDB and offers a set of convenient RDKit-based methods to generate 2D depictions and 3D conformers, compute physicochemical properties, match common fragments and scaffolds, and map components to small-molecule databases using UniChem.
## Features
* `gemmi` CCD read/write.
* Generation of 2D depictions (a `No image available` placeholder is produced if the molecule cannot be flattened), along with a depiction quality check.
* Generation of 3D conformations.
* Fragment library search (PDBe hand-curated library, ENAMINE, DSI).
* Chemical scaffolds (Murcko scaffold, Murcko general, BRICS).
* Lightweight implementation of [parity method](https://doi.org/10.1016/j.str.2018.02.009) by Jon Tyzack.
* RDKit molecular properties per component.
* UniChem mapping.
* Generating complete representation of multiple [Covalently Linked Components (CLC)](https://www.ebi.ac.uk/pdbe/news/introducing-covalently-linked-components)
## Dependencies
* [RDKit](http://www.rdkit.org/) for small molecule representation. Presently tested with `2023.9.6`
* [GEMMI](https://gemmi.readthedocs.io/en/latest/index.html) for parsing mmCIF files.
* [scipy](https://www.scipy.org/) for depiction quality check.
* [numpy](https://www.numpy.org/) for molecular scaling.
* [networkx](https://networkx.org/) for bound-molecules.
## Installation
Create a [virtual environment](https://packaging.python.org/en/latest/guides/installing-using-pip-and-virtual-environments/#create-and-use-virtual-environments) and install using pip:
```bash
pip install pdbeccdutils
```
## Contribution
We encourage you to contribute to this project. The package uses [poetry](https://python-poetry.org/) for packaging and dependency management. You can develop locally using:
```bash
git clone https://github.com/PDBeurope/ccdutils.git
cd ccdutils
pip install poetry
poetry install --with tests,docs
pre-commit install
```
The pre-commit hook runs linting and formatting and updates `poetry.lock`. The `poetry.lock` file pins all dependencies and ensures they match the versions declared in `pyproject.toml`.
To add a new dependency
```bash
# Latest resolvable version
poetry add <package>
# Optionally fix a version
poetry add <package>@<version>
```
To change the version of a dependency, either edit `pyproject.toml` and run:
```bash
poetry sync --with dev
```
or
```bash
poetry add <package>@<version>
```
## Documentation
The documentation is generated using `sphinx` in `sphinx_rtd_theme` and hosted on GitHub Pages. To generate the documentation locally,
```bash
cd doc
poetry run sphinx-build -b html . _build/html
# Serve the documentation at http://localhost:8080
python -m http.server 8080 -d _build/html
```
| text/markdown | Protein Data Bank in Europe | pdbehelp@ebi.ac.uk | null | null | Apache License 2.0. | PDB, ligand, small molecule, complex, CCD, PRD, CLC | [
"Development Status :: 5 - Production/Stable",
"Intended Audience :: Developers",
"Intended Audience :: Science/Research",
"License :: OSI Approved :: Apache Software License",
"License :: Other/Proprietary License",
"Operating System :: MacOS",
"Operating System :: Microsoft :: Windows",
"Operating S... | [] | null | null | <4,>=3.10 | [] | [] | [] | [
"gemmi>=0.6.6",
"networkx>=3.3",
"numpy>=1.26.4",
"pillow>=10.4.0",
"rdkit>=2023.9.6",
"requests>=2.32.3",
"scipy>=1.14.1"
] | [] | [] | [] | [
"Documentation, https://pdbeurope.github.io/ccdutils/",
"Repository, https://github.com/PDBeurope/ccdutils"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-19T13:52:10.968616 | pdbeccdutils-1.0.3.tar.gz | 1,036,725 | 0a/c3/5dc4678280398dc1bc246f33c88d67c64a0fa63cc43cff9cbae7a7b46699/pdbeccdutils-1.0.3.tar.gz | source | sdist | null | false | 37f8a6ad4ef0c0d15fe74c24e9af9b1f | a23167c7e0c05d48be91f2b44523ee80a66889360b5775053c298a5b65f2b3f8 | 0ac35dc4678280398dc1bc246f33c88d67c64a0fa63cc43cff9cbae7a7b46699 | null | [
"LICENSE"
] | 775 |
2.4 | otcdocstheme | 1.14.2 | T Cloud Public Docs Theme | ========================
Team and repository tags
========================
.. image:: https://governance.openstack.org/tc/badges/openstackdocstheme.svg
:target: https://governance.openstack.org/tc/reference/tags/index.html
.. Change things from this point on
T Cloud Public Docs Sphinx Theme
================================
Theme and extension support for Sphinx documentation.
Intended for use by T Cloud Public `projects`_.

.. _projects: https://github.com/OpenTelekomCloud
* Free software: Apache License, Version 2.0
* Source: https://github.com/OpenTelekomCloud/otcdocstheme
| null | T Cloud Public Ecosystem Squad | otc_ecosystem_squad@t-systems.com | null | null | null | null | [
"Environment :: OpenStack",
"Environment :: Other Environment",
"Intended Audience :: Information Technology",
"Intended Audience :: System Administrators",
"License :: OSI Approved :: Apache Software License",
"Operating System :: POSIX :: Linux",
"Programming Language :: Python",
"Programming Langua... | [] | https://cloud.otc.com | null | >=3.6 | [] | [] | [] | [
"pbr!=2.1.0,>=2.0.0",
"dulwich>=0.15.0",
"setuptools",
"otc-metadata"
] | [] | [] | [] | [] | twine/6.2.0 CPython/3.10.9 | 2026-02-19T13:52:04.419410 | otcdocstheme-1.14.2.tar.gz | 6,763,174 | 85/1b/94d10d2c1756d4f2a3e5eeec2a764f7f0a0b1a936baed09061fab893b9a8/otcdocstheme-1.14.2.tar.gz | source | sdist | null | false | 8d520fdba5f0fc50e5be568ab70a953a | 0e33d4bf9e85e4a4dc21dcd0b1f8abf7f6c590d1245488d4f105f8233b760a62 | 851b94d10d2c1756d4f2a3e5eeec2a764f7f0a0b1a936baed09061fab893b9a8 | null | [
"LICENSE"
] | 1,308 |
2.4 | msasim | 26.2.1 | A fast MSA simulator | # Sailfish
Sailfish is a high-performance multiple sequence alignment (MSA) simulator written in C++ with a Python API. It enables rapid generation of large-scale simulated datasets with support for indels, substitutions, and realistic evolutionary models.
## Features
- High-performance C++ engine with ergonomic Python interface
- Support for both DNA and protein sequence evolution
- Flexible indel modeling with multiple length distributions (Zipf, Geometric, Poisson, Custom)
- 26+ substitution models including JTT, WAG, LG, HKY, GTR, and more
- Gamma rate heterogeneity and invariant sites
- Per-branch parameter specification for heterogeneous models
- Low-memory mode for large-scale simulations (1M+ sequences)
- Reproducible simulations with explicit seed control
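For intuition about the Zipf indel-length option, here is a stdlib-only sketch of inverse-CDF sampling from a truncated Zipf distribution using the same `(a=1.7, max=50)` parameters as the Quick Start below. This is an illustrative model of the distribution, not Sailfish's internal sampler:

```python
import random

def sample_zipf_truncated(a: float, max_len: int, rng: random.Random) -> int:
    # Inverse-CDF sampling over lengths 1..max_len with P(k) proportional to k**(-a).
    weights = [k ** -a for k in range(1, max_len + 1)]
    total = sum(weights)
    u = rng.random() * total
    acc = 0.0
    for k, w in enumerate(weights, start=1):
        acc += w
        if u <= acc:
            return k
    return max_len  # numerical safety net

rng = random.Random(42)
lengths = [sample_zipf_truncated(1.7, 50, rng) for _ in range(1000)]
```

With `a = 1.7`, short indels dominate: length 1 is drawn far more often than length 50, which matches the heavy-tailed indel-length behavior the Zipf option models.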
## Installation
```bash
pip install msasim
```
Requirements: Python >= 3.6
## Quick Start
### Basic Example with Indels and Substitutions
```python
from msasim import sailfish as sim
from msasim.sailfish import MODEL_CODES, ZipfDistribution
# Configure simulation protocol
sim_protocol = sim.SimProtocol(
tree="(A:0.5,B:0.5);",
root_seq_size=100,
deletion_rate=0.01,
insertion_rate=0.01,
deletion_dist=ZipfDistribution(1.7, 50),
insertion_dist=ZipfDistribution(1.7, 50),
seed=42
)
# Create simulator
simulation = sim.Simulator(sim_protocol, simulation_type=sim.SIMULATION_TYPE.PROTEIN)
# Configure substitution model with gamma rate heterogeneity
simulation.set_replacement_model(
model=MODEL_CODES.WAG,
gamma_parameters_alpha=1.0,
gamma_parameters_categories=4
)
# Run simulation
msa = simulation()
# Output results
msa.write_msa("output.fasta")
msa.print_msa()
```
### Substitutions-Only Simulation
```python
from msasim import sailfish as sim
from msasim.sailfish import MODEL_CODES
# No indels configured
protocol = sim.SimProtocol(
tree="path/to/tree.nwk",
root_seq_size=500,
seed=42
)
simulator = sim.Simulator(protocol, simulation_type=sim.SIMULATION_TYPE.PROTEIN)
simulator.set_replacement_model(model=MODEL_CODES.LG)
msa = simulator()
msa.write_msa("alignment.fasta")
```
### Batch Simulations
```python
from msasim import sailfish as sim
from msasim.sailfish import MODEL_CODES
# Initialize once with seed
protocol = sim.SimProtocol(tree="tree.nwk", root_seq_size=500, seed=42)
simulator = sim.Simulator(protocol, simulation_type=sim.SIMULATION_TYPE.PROTEIN)
simulator.set_replacement_model(model=MODEL_CODES.JTT)
# Generate multiple replicates
# Internal RNG advances automatically for reproducibility
for i in range(100):
msa = simulator()
msa.write_msa(f"replicate_{i:04d}.fasta")
```
### Low-Memory Mode for Large Simulations
```python
import pathlib
from msasim import sailfish as sim
from msasim.sailfish import MODEL_CODES
protocol = sim.SimProtocol(tree="large_tree.nwk", root_seq_size=10000, seed=42)
simulator = sim.Simulator(protocol, simulation_type=sim.SIMULATION_TYPE.DNA)
simulator.set_replacement_model(model=MODEL_CODES.NUCJC)
# Write directly to disk without holding MSA in memory
simulator.simulate_low_memory(pathlib.Path("large_alignment.fasta"))
```
## Documentation
For complete API documentation, including all available models, distributions, and advanced features, see [API_REFERENCE.md](API_REFERENCE.md).
## Core Concepts
### Simulation Types
- `SIMULATION_TYPE.NOSUBS`: Indels only, no substitutions
- `SIMULATION_TYPE.DNA`: DNA sequences with nucleotide models
- `SIMULATION_TYPE.PROTEIN`: Protein sequences with amino acid models
### Available Models
**Nucleotide**: JC, HKY, GTR, Tamura92
**Protein**: WAG, LG, JTT (JONES), Dayhoff, MTREV24, CPREV45, HIV models, and more
See [API_REFERENCE.md#substitution-models](API_REFERENCE.md#substitution-models) for the complete list.
### Indel Length Distributions
- **ZipfDistribution**: Power-law distribution (typical for biological data)
- **GeometricDistribution**: Exponentially decreasing
- **PoissonDistribution**: Poisson-based
- **CustomDistribution**: User-defined probability vector
## Common Use Cases
### 1. Phylogenetic Method Validation
Generate known-truth alignments with specified evolutionary parameters to test inference methods.
### 2. Benchmarking Alignment Tools
Create challenging datasets with varying indel rates and substitution patterns.
### 3. Statistical Power Analysis
Simulate datasets under different evolutionary scenarios to assess method sensitivity.
### 4. Model Comparison
Generate alignments under different substitution models for model selection studies.
## Performance Notes
Typical simulation times on modern hardware:
- 10K sequences × 30K sites: ~10 seconds
- 100K sequences × 30K sites: ~2 minutes
- 1M sequences × 30K sites: ~20 minutes
For simulations with >100K sequences or memory constraints, use `simulate_low_memory()`.
Memory usage estimate: `(num_sequences × alignment_length) / 300,000` MB
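The rule of thumb above can be turned into a quick calculation before launching a large run (the helper function below is illustrative, not part of the msasim API):

```python
def estimate_memory_mb(num_sequences: int, alignment_length: int) -> float:
    """Rough in-memory MSA footprint in MB, per the rule of thumb above."""
    return (num_sequences * alignment_length) / 300_000

# 100K sequences x 30K sites -> 10 GB: a strong hint to use simulate_low_memory()
print(estimate_memory_mb(100_000, 30_000))  # 10000.0 MB (~10 GB)
```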
## Project Goals
- Ease of use: Simple, intuitive Python API
- Speed: High-performance C++ implementation
- Modularity: Flexible configuration of all evolutionary parameters
## Contributing
Bug reports and feature requests are welcome via GitHub issues.
## Citation
If you use Sailfish in your research, please cite:
[Citation information to be added]
## License
[License information to be added]
| text/markdown | Elya Wygoda | elya.wygoda@gmail.com | null | null | null | null | [] | [] | https://github.com/elyawy/Sailfish-backend | null | >=3.6 | [] | [] | [] | [
"pytest; extra == \"test\"",
"scipy>=1.0.0; extra == \"correlation\""
] | [] | [] | [] | [] | twine/6.1.0 CPython/3.13.7 | 2026-02-19T13:51:51.168429 | msasim-26.2.1.tar.gz | 23,937 | ae/29/c96abd2858a2938e6f4d0d5fc5690055bab62110034d71148616cabb27b0/msasim-26.2.1.tar.gz | source | sdist | null | false | 9591f9bdea21d7c5ccbc4e9b851073b3 | 5b6e5c3bfefad389bdba8dab655443d718b1e62ecde9f63aef96d5b19f00f54e | ae29c96abd2858a2938e6f4d0d5fc5690055bab62110034d71148616cabb27b0 | null | [
"LICENSE"
] | 2,649 |
2.4 | fast-bi-dbt-runner | 2026.1.0.0b6 | A comprehensive Python library for managing DBT (Data Build Tool) DAGs within the Fast.BI data development platform | # Fast.BI DBT Runner
[](https://badge.fury.io/py/fast-bi-dbt-runner)
[](https://www.python.org/downloads/)
[](https://opensource.org/licenses/MIT)
[](https://github.com/fast-bi/dbt-workflow-core-runner/actions)
[](https://github.com/fast-bi/dbt-workflow-core-runner/actions)
A comprehensive Python library for managing DBT (Data Build Tool) DAGs within the Fast.BI data development platform. This package provides multiple execution operators optimized for different cost-performance trade-offs, from low-cost slow execution to high-cost fast execution.
## 🚀 Overview
Fast.BI DBT Runner is part of the [Fast.BI Data Development Platform](https://fast.bi), designed to provide flexible and scalable DBT workload execution across various infrastructure options. The package offers four distinct operator types, each optimized for specific use cases and requirements.
## 🎯 Key Features
- **Multiple Execution Operators**: Choose from K8S, Bash, API, or GKE operators
- **Cost-Performance Optimization**: Scale from low-cost to high-performance execution
- **Airflow Integration**: Seamless integration with Apache Airflow workflows
- **Manifest Parsing**: Intelligent DBT manifest parsing for dynamic DAG generation
- **Airbyte Integration**: Built-in support for Airbyte task group building
- **Flexible Configuration**: Extensive configuration options for various deployment scenarios
## 📦 Installation
### Basic Installation (Core Package)
```bash
pip install fast-bi-dbt-runner
```
### With Airflow Integration
```bash
pip install fast-bi-dbt-runner[airflow]
```
### With Development Tools
```bash
pip install fast-bi-dbt-runner[dev]
```
### With Documentation Tools
```bash
pip install fast-bi-dbt-runner[docs]
```
### Complete Installation
```bash
pip install fast-bi-dbt-runner[airflow,dev,docs]
```
## 🏗️ Architecture
### Operator Types
The package provides four different operators for running DBT transformation pipelines:
#### 1. K8S (Kubernetes) Operator - Default Choice
- **Best for**: Cost optimization, daily/nightly jobs, high concurrency
- **Characteristics**: Creates dedicated Kubernetes pods per task
- **Trade-offs**: Most cost-effective but slower execution speed
- **Use cases**: Daily ETL pipelines, projects with less frequent runs
#### 2. Bash Operator
- **Best for**: Balanced cost-speed ratio, medium-sized projects
- **Characteristics**: Runs within Airflow worker resources
- **Trade-offs**: Faster than K8S but limited by worker capacity
- **Use cases**: Medium-sized projects, workflows requiring faster execution
#### 3. API Operator
- **Best for**: High performance, time-sensitive workflows
- **Characteristics**: Dedicated machine per project, always-on resources
- **Trade-offs**: Fastest execution but highest cost
- **Use cases**: Large-scale projects, real-time analytics, high-frequency execution
#### 4. GKE Operator
- **Best for**: Complete isolation, external client workloads
- **Characteristics**: Creates dedicated GKE clusters
- **Trade-offs**: Full isolation but higher operational complexity
- **Use cases**: External client workloads, isolated environment requirements
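The four-way trade-off above can be summarized in a small lookup, useful when generating configurations programmatically (a sketch of the selection logic described above, not a function provided by the package):

```python
# Map a deployment priority to the operator recommended above (illustrative only).
OPERATOR_BY_PRIORITY = {
    "cost": "k8s",        # cheapest, slower; daily/nightly jobs
    "balanced": "bash",   # runs on Airflow workers; medium projects
    "speed": "api",       # dedicated always-on machine; fastest, priciest
    "isolation": "gke",   # dedicated GKE cluster; external client workloads
}

def pick_operator(priority: str) -> str:
    """Return the operator name for a given priority, defaulting to 'k8s'."""
    return OPERATOR_BY_PRIORITY.get(priority, "k8s")
```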
## 🚀 Quick Start
### Basic Usage
```python
from fast_bi_dbt_runner import DbtManifestParserK8sOperator
# Create a K8S operator instance
operator = DbtManifestParserK8sOperator(
task_id='run_dbt_models',
project_id='my-gcp-project',
dbt_project_name='my_analytics',
operator='k8s'
)
# Execute DBT models ('context' is the Airflow task context passed at runtime)
operator.execute(context)
```
### Configuration Example
```python
# K8S Operator Configuration
k8s_config = {
'PLATFORM': 'Airflow',
'OPERATOR': 'k8s',
'PROJECT_ID': 'my-gcp-project',
'DBT_PROJECT_NAME': 'my_analytics',
'DAG_SCHEDULE_INTERVAL': '@daily',
'DATA_QUALITY': 'True',
'DBT_SOURCE': 'True'
}
# API Operator Configuration
api_config = {
'PLATFORM': 'Airflow',
'OPERATOR': 'api',
'PROJECT_ID': 'my-gcp-project',
'DBT_PROJECT_NAME': 'realtime_analytics',
'DAG_SCHEDULE_INTERVAL': '*/15 * * * *',
'MODEL_DEBUG_LOG': 'True'
}
```
## 📚 Documentation
For detailed documentation, visit our [Fast.BI Platform Documentation](https://wiki.fast.bi/en/User-Guide/Data-Orchestration/Data-Model-CICD-Configuration).
### Key Documentation Sections
- [Operator Selection Guide](https://wiki.fast.bi/en/User-Guide/Data-Orchestration/Data-Model-CICD-Configuration#operator-selection-guide)
- [Configuration Variables](https://wiki.fast.bi/en/User-Guide/Data-Orchestration/Data-Model-CICD-Configuration#core-variables)
- [Advanced Configuration Examples](https://wiki.fast.bi/en/User-Guide/Data-Orchestration/Data-Model-CICD-Configuration#advanced-configuration-examples)
- [Best Practices](https://wiki.fast.bi/en/User-Guide/Data-Orchestration/Data-Model-CICD-Configuration#notes-and-best-practices)
## 🔧 Configuration
### Core Variables
| Variable | Description | Default Value |
|----------|-------------|---------------|
| `PLATFORM` | Data orchestration platform | Airflow |
| `OPERATOR` | Execution operator type | k8s |
| `PROJECT_ID` | Google Cloud project identifier | Required |
| `DBT_PROJECT_NAME` | DBT project identifier | Required |
| `DAG_SCHEDULE_INTERVAL` | Pipeline execution schedule | @once |
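The table above distinguishes defaulted variables from required ones; a config builder can enforce that split before handing the dictionary to an operator (a hypothetical helper sketching the validation, not part of the package):

```python
# Defaults and required keys taken from the Core Variables table above.
DEFAULTS = {
    "PLATFORM": "Airflow",
    "OPERATOR": "k8s",
    "DAG_SCHEDULE_INTERVAL": "@once",
}
REQUIRED = ("PROJECT_ID", "DBT_PROJECT_NAME")

def build_config(user_config: dict) -> dict:
    """Merge user settings over documented defaults, rejecting incomplete input."""
    missing = [key for key in REQUIRED if key not in user_config]
    if missing:
        raise ValueError(f"Missing required variables: {missing}")
    return {**DEFAULTS, **user_config}

cfg = build_config({"PROJECT_ID": "my-gcp-project", "DBT_PROJECT_NAME": "my_analytics"})
```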
### Feature Flags
| Variable | Description | Default |
|----------|-------------|---------|
| `DBT_SEED` | Enable seed data loading | False |
| `DBT_SOURCE` | Enable source loading | False |
| `DBT_SNAPSHOT` | Enable snapshot creation | False |
| `DATA_QUALITY` | Enable quality service | False |
| `DEBUG` | Enable connection verification | False |
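Note that, as in the configuration examples above, flags are passed as the strings `'True'`/`'False'` rather than Python booleans. A small accessor makes that convention explicit (an illustrative helper, not a package API):

```python
def flag_enabled(config: dict, name: str) -> bool:
    """Interpret a string-valued feature flag, defaulting to disabled."""
    return str(config.get(name, "False")).lower() == "true"

config = {"DBT_SOURCE": "True", "DATA_QUALITY": "False"}
print(flag_enabled(config, "DBT_SOURCE"))   # True
print(flag_enabled(config, "DBT_SEED"))     # False (absent -> disabled)
```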
## 🎯 Use Cases
### Daily ETL Pipeline
```python
# Low-cost, reliable daily processing
config = {
'OPERATOR': 'k8s',
'DAG_SCHEDULE_INTERVAL': '@daily',
'DBT_SOURCE': 'True',
'DATA_QUALITY': 'True'
}
```
### Real-time Analytics
```python
# High-performance, frequent execution
config = {
'OPERATOR': 'api',
'DAG_SCHEDULE_INTERVAL': '*/15 * * * *',
'MODEL_DEBUG_LOG': 'True'
}
```
### External Client Workload
```python
# Isolated, dedicated resources
config = {
'OPERATOR': 'gke',
'CLUSTER_NAME': 'client-isolated-cluster',
'DATA_QUALITY': 'True'
}
```
## 🔍 Monitoring and Debugging
### Enable Debug Logging
```python
config = {
'DEBUG': 'True',
'MODEL_DEBUG_LOG': 'True'
}
```
### Data Quality Integration
```python
config = {
'DATA_QUALITY': 'True',
'DATAHUB_ENABLED': 'True'
}
```
## 🚀 CI/CD and Automation
This package uses GitHub Actions for continuous integration and deployment:
- **Automated Testing**: Tests across Python 3.9-3.12
- **Code Quality**: Linting, formatting, and type checking
- **Automated Publishing**: Automatic PyPI releases on version tags
- **Documentation**: Automated documentation building and deployment
### Release Process
1. Create a version tag: `git tag v1.0.0`
2. Push the tag: `git push origin v1.0.0`
3. GitHub Actions automatically:
- Tests the package
- Builds and validates
- Publishes to PyPI
- Creates a GitHub release
## 🤝 Contributing
We welcome contributions! Please see our [Contributing Guidelines](CONTRIBUTING.md) for details.
### Development Setup
```bash
# Clone the repository
git clone https://github.com/fast-bi/dbt-workflow-core-runner.git
cd dbt-workflow-core-runner
# Install in development mode with all tools
pip install -e .[dev,airflow]
# Run tests
pytest
# Check code quality
flake8 fast_bi_dbt_runner/
black --check fast_bi_dbt_runner/
mypy fast_bi_dbt_runner/
```
## 📄 License
This project is licensed under the MIT License - see the [LICENSE](LICENSE) file for details.
## 🆘 Support
- **Documentation**: [Fast.BI Platform Wiki](https://wiki.fast.bi)
- **Email**: support@fast.bi
- **Issues**: [GitHub Issues](https://github.com/fast-bi/dbt-workflow-core-runner/issues)
- **Source**: [GitHub Repository](https://github.com/fast-bi/dbt-workflow-core-runner)
## 🔗 Related Projects
- [Fast.BI Platform](https://fast.bi) - Complete data development platform
- [Fast.BI Replication Control](https://pypi.org/project/fast-bi-replication-control/) - Data replication management
- [Apache Airflow](https://airflow.apache.org/) - Workflow orchestration platform
---
**Fast.BI DBT Runner** - Empowering data teams with flexible, scalable DBT execution across the Fast.BI platform.
| text/markdown | Fast.Bi | "Fast.BI" <support@fast.bi> | Fast.Bi | "Fast.BI" <administrator@fast.bi> | MIT | dbt, data-build-tool, airflow, kubernetes, data-pipeline, etl, data-engineering, fast-bi, data-orchestration, gke, bash-operator, api-operator, workflow, data-workflow, manifest-parser | [
"Development Status :: 4 - Beta",
"Intended Audience :: Developers",
"Intended Audience :: Information Technology",
"License :: OSI Approved :: MIT License",
"Operating System :: OS Independent",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.9",
"Programming Language :: P... | [] | https://gitlab.fast.bi/infrastructure/bi-platform-pypi-packages/fast_bi_dbt_runner | null | >=3.8 | [] | [] | [] | [
"kubernetes>=18.0.0",
"google-cloud-storage>=2.0.0",
"google-auth>=2.26.1",
"requests>=2.25.0",
"pyyaml>=5.4.0",
"jinja2>=3.0.0",
"apache-airflow[kubernetes]<3.0.0,>=2.7.0; extra == \"airflow\"",
"black>=21.0.0; extra == \"dev\"",
"flake8>=3.8.0; extra == \"dev\"",
"mypy>=0.800; extra == \"dev\"",... | [] | [] | [] | [
"Homepage, https://github.com/fast-bi/dbt-workflow-core-runner",
"Documentation, https://wiki.fast.bi/en/User-Guide/Data-Orchestration/Data-Model-CICD-Configuration",
"Repository, https://github.com/fast-bi/dbt-workflow-core-runner",
"Bug Tracker, https://github.com/fast-bi/dbt-workflow-core-runner/issues",
... | twine/6.2.0 CPython/3.11.9 | 2026-02-19T13:51:10.259210 | fast_bi_dbt_runner-2026.1.0.0b6.tar.gz | 46,206 | e2/27/2c77b6a664d5ac22208628e23de60a8ad4a191ed2839c0af42734b2e2cb6/fast_bi_dbt_runner-2026.1.0.0b6.tar.gz | source | sdist | null | false | ff972aaae037db4cc9c8c284cbfa4ed8 | 46ce0c03f7ed2f83269110c94b07674c98649556bcc2cfc62617870eff4b3cdb | e2272c77b6a664d5ac22208628e23de60a8ad4a191ed2839c0af42734b2e2cb6 | null | [
"LICENSE"
] | 262 |