metadata_version string | name string | version string | summary string | description string | description_content_type string | author string | author_email string | maintainer string | maintainer_email string | license string | keywords string | classifiers list | platform list | home_page string | download_url string | requires_python string | requires list | provides list | obsoletes list | requires_dist list | provides_dist list | obsoletes_dist list | requires_external list | project_urls list | uploaded_via string | upload_time timestamp[us] | filename string | size int64 | path string | python_version string | packagetype string | comment_text string | has_signature bool | md5_digest string | sha256_digest string | blake2_256_digest string | license_expression string | license_files list | recent_7d_downloads int64 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
2.4 | llama-cpp-py-sync | 0.8095 | Auto-synchronized Python bindings for llama.cpp using CFFI ABI mode | # llama-cpp-py-sync
**Auto-synchronized Python bindings for llama.cpp**
[build](https://github.com/FarisZahrani/llama-cpp-py-sync/actions/workflows/build.yml)
[sync](https://github.com/FarisZahrani/llama-cpp-py-sync/actions/workflows/sync.yml)
[test](https://github.com/FarisZahrani/llama-cpp-py-sync/actions/workflows/test.yml)
[PyPI](https://pypi.org/project/llama-cpp-py-sync/)
[MIT License](https://opensource.org/licenses/MIT)
## Overview
**llama-cpp-py-sync** provides Python bindings for `llama.cpp` that are kept up-to-date automatically. It generates bindings from upstream headers using **CFFI ABI mode**, and ships prebuilt wheels.
### Key Features
- Automatic upstream sync and binding regeneration
- Prebuilt wheels built by CI
- CPU wheels published to PyPI
- Backend-specific wheels (CUDA / Vulkan / Metal) published to GitHub Releases
- CI checks that the generated CFFI surface matches the upstream C API (functions, structs, enums, and signatures)
- A small, explicit Python API (`Llama.generate`, `tokenize`, `get_embeddings`, etc.)
### What You Get (and What You Don’t)
- This project binds to the **public C API** that llama.cpp exposes in `llama.h`.
- It does **not** attempt to bind llama.cpp’s internal C++ implementation such as private headers, C++ classes/templates, or functions that never appear in `llama.h`.
- We use **CFFI ABI mode**: Python loads a prebuilt shared library at runtime (no compiled Python extension module for the bindings).
- Because of that, you still need a compatible llama.cpp shared library available, either bundled in the wheel or via `LLAMA_CPP_LIB`.
- You get a small high-level API (`llama_cpp_py_sync.Llama`) for common tasks, and an “escape hatch” to call the low-level C functions directly via CFFI when needed.
### High-level vs Low-level APIs
- High-level API: `llama_cpp_py_sync.Llama` is the recommended entry point for typical usage such as generation, tokenization, and embeddings.
```python
import llama_cpp_py_sync as llama
with llama.Llama("path/to/model.gguf", n_ctx=2048, n_gpu_layers=0) as llm:
print(llm.generate("Hello", max_tokens=64))
```
- Low-level API: `llama_cpp_py_sync._cffi_bindings` exposes CFFI access to the underlying llama.cpp C API for advanced use.
```python
from llama_cpp_py_sync._cffi_bindings import get_ffi, get_lib
ffi = get_ffi()
lib = get_lib()
print(ffi.string(lib.llama_print_system_info()).decode("utf-8", errors="replace"))
```
## Installation
This project supports **Python 3.8+**. During the current testing phase, CI builds are pinned to **Python 3.11.9** for reproducibility, but the published wheels are intended to work across supported Python versions.
### From PyPI (Recommended)
```bash
pip install llama-cpp-py-sync
```
This installs the **CPU** wheel.
Note: depending on CI configuration and platform support, additional wheels (e.g. macOS Metal) may also be published to PyPI.
### Quick Chat (Recommended)
After installing from PyPI, you can start an interactive chat session with:
```bash
python -m llama_cpp_py_sync chat
```
If you do not pass `--model` (and `LLAMA_MODEL` is not set), the CLI will prompt before downloading a default GGUF model and cache it locally for future runs.
To auto-download without prompting, pass `--yes`.
One-shot prompt:
```bash
python -m llama_cpp_py_sync chat --prompt "Say 'ok'." --max-tokens 32
```
Use a specific local model:
```bash
python -m llama_cpp_py_sync chat --model path/to/model.gguf
```
### From GitHub Releases (Wheel)
Download the wheel for your platform/backend from GitHub Releases and install the `.whl`:
```bash
pip install path/to/llama_cpp_py_sync-*.whl
```
### From Source
```bash
git clone https://github.com/FarisZahrani/llama-cpp-py-sync.git
cd llama-cpp-py-sync
# Sync upstream llama.cpp
python scripts/sync_upstream.py
# Regenerate CFFI bindings from the synced llama.cpp headers
# (Optional) record the exact llama.cpp commit SHA in the generated file.
python scripts/gen_bindings.py --commit-sha "$(python scripts/sync_upstream.py --sha)"
# Build the shared library
python scripts/build_llama_cpp.py
# Install the package
pip install -e .
```
`vendor/llama.cpp` is cloned locally by `scripts/sync_upstream.py` (and in CI during builds) and is not committed to this repository.
## Quick Start
```python
import llama_cpp_py_sync as llama
# Load a model
llm = llama.Llama("path/to/model.gguf", n_ctx=2048, n_gpu_layers=35)
# Generate text
response = llm.generate("Hello, world!", max_tokens=100)
print(response)
# Streaming generation
for token in llm.generate("Write a poem:", max_tokens=100, stream=True):
print(token, end="", flush=True)
# Clean up
llm.close()
```
### Using Context Manager
```python
with llama.Llama("model.gguf", n_gpu_layers=35) as llm:
print(llm.generate("Once upon a time"))
```
### Embeddings
```python
# Load an embedding model
with llama.Llama("embed-model.gguf", embedding=True) as llm:
emb = llm.get_embeddings("Hello, world!")
print(f"Embedding dimension: {len(emb)}")
```
### Check Available Backends
```python
from llama_cpp_py_sync import get_available_backends, get_backend_info
print(get_available_backends()) # ['cuda', 'blas'] or similar
info = get_backend_info()
print(f"CUDA available: {info.cuda}")
print(f"Metal available: {info.metal}")
```
<details>
<summary>Full API (click to expand)</summary>
```python
import llama_cpp_py_sync as llama
# Versions
llama.__version__
llama.__llama_cpp_commit__
# Main class
llm = llama.Llama(
model_path="path/to/model.gguf",
n_ctx=512,
n_batch=512,
n_threads=None,
n_gpu_layers=0,
seed=-1,
use_mmap=True,
use_mlock=False,
verbose=False,
embedding=False,
)
text = llm.generate(
"Hello",
max_tokens=256,
temperature=0.8,
top_k=40,
top_p=0.95,
min_p=0.05,
repeat_penalty=1.1,
stop_sequences=None,
stream=False,
)
stream = llm.generate(
"Hello",
max_tokens=256,
stream=True,
)
tokens = llm.tokenize("Hello")
text = llm.detokenize(tokens)
piece = llm.token_to_piece(tokens[0])
llm.get_model_desc()
llm.get_model_size()
llm.get_model_n_params()
# Embeddings (requires embedding=True)
emb = llm.get_embeddings("Hello")
llm.close()
# Module-level embeddings helpers
llama.get_embeddings("path/to/model.gguf", "Hello")
llama.get_embeddings_batch("path/to/model.gguf", ["Hello", "World"])
# Backend helpers
llama.get_available_backends()
llama.get_backend_info()
llama.is_cuda_available()
llama.is_metal_available()
llama.is_vulkan_available()
llama.is_rocm_available()
llama.is_blas_available()
```
</details>
## How It Works
### Automatic Synchronization
1. **Scheduled Checks**: GitHub Actions checks upstream llama.cpp on a schedule
2. **Tag Mirroring**: When an upstream tag exists, the workflow can mirror it into this repository
3. **Wheel Building**: CI builds wheels for all platforms/backends
4. **Release Publishing**: GitHub Releases are created only for tags that exist upstream
5. **PyPI Publishing**: CPU-only wheels are published to PyPI for upstream tags (if configured)
### Bindings Validation (API Surface)
To keep the Python bindings aligned with upstream, CI runs a validation step that compares upstream `llama.h` to the generated CFFI `cdef`.
It checks:
- Public function coverage (missing/extra)
- Struct and enum coverage (missing fields/members)
- Function signatures (return + parameter types)
Local run (after syncing upstream headers):
```bash
python scripts/sync_upstream.py
python scripts/gen_bindings.py --commit-sha "$(python scripts/sync_upstream.py --sha)"
python scripts/validate_cffi_surface.py --check-structs --check-enums --check-signatures
```
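The missing/extra function check can be sketched as a set comparison between names parsed from upstream `llama.h` and names present in the generated `cdef`. This is an illustrative regex-based sketch only; the real `scripts/validate_cffi_surface.py` is more thorough and also covers structs, enums, and signatures.

```python
# Illustrative sketch of the "missing/extra function" comparison performed
# by the validation step. Names are extracted with a naive regex; the real
# scripts/validate_cffi_surface.py parses declarations properly.
import re

FUNC_RE = re.compile(r"\bllama_\w+(?=\s*\()")

def function_names(c_source):
    """Collect llama_* identifiers that appear as function-like names."""
    return set(FUNC_RE.findall(c_source))

def compare_surfaces(header_src, cdef_src):
    header = function_names(header_src)
    cdef = function_names(cdef_src)
    return {
        "missing": header - cdef,  # declared upstream, absent from bindings
        "extra": cdef - header,    # in bindings, gone from upstream
    }

if __name__ == "__main__":
    header = "LLAMA_API int llama_n_ctx(void); LLAMA_API void llama_free(void*);"
    cdef = "int llama_n_ctx(void);"
    print(compare_surfaces(header, cdef))  # {'missing': {'llama_free'}, 'extra': set()}
```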
### CFFI ABI Mode
Unlike pybind11 or manual ctypes, CFFI ABI mode:
- Reads C declarations directly (no compilation needed for bindings)
- Loads the shared library at runtime via `ffi.dlopen()`
- Automatically handles type conversions
- Works across platforms without modification
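The mechanism can be illustrated with a generic CFFI ABI-mode snippet that declares one C signature and opens a shared library at runtime. This sketch assumes `cffi` is installed and a POSIX platform where `dlopen(None)` yields the C standard library; the package does the same thing with the `llama.h` declarations and the llama.cpp shared library.

```python
# Generic CFFI ABI-mode sketch: declare a C signature, dlopen a shared
# library at runtime, and call it -- no extension module is compiled.
# Assumes a POSIX system where dlopen(None) exposes the C library.
from cffi import FFI

ffi = FFI()
ffi.cdef("int abs(int x);")  # declaration copied from a C header
lib = ffi.dlopen(None)       # runtime load; the package loads the llama.cpp library instead
print(lib.abs(-42))          # prints 42
```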
### Version Tracking
Check which llama.cpp version you're running:
```python
import llama_cpp_py_sync as llama
print(f"Package version: {llama.__version__}")
print(f"llama.cpp commit: {llama.__llama_cpp_commit__}")
print(f"llama.cpp tag: {getattr(llama, '__llama_cpp_tag__', '')}")
```
## GPU Backend Selection
### Build-time Detection
The build system automatically detects available backends:
| Backend | Platform | Detection |
|---------|----------|-----------|
| CUDA | Linux, Windows | `CUDA_HOME` or `/usr/local/cuda` |
| ROCm | Linux | `ROCM_PATH` or `/opt/rocm` |
| Metal | macOS | Xcode SDK |
| Vulkan | All | `VULKAN_SDK` environment variable |
| BLAS | All | OpenBLAS, MKL, or Accelerate |
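The environment-variable rows of the table can be approximated by a small probe like the one below. This is a hypothetical sketch covering only the env-var/path checks (CUDA, ROCm, Vulkan); the authoritative logic, including Metal and BLAS detection, lives in `scripts/build_llama_cpp.py`.

```python
# Hypothetical sketch of the build-time backend probe summarized in the
# table above; the authoritative logic is in scripts/build_llama_cpp.py.
import os

def detect_backends(env=os.environ, path_exists=os.path.exists):
    backends = []
    if env.get("CUDA_HOME") or path_exists("/usr/local/cuda"):
        backends.append("cuda")
    if env.get("ROCM_PATH") or path_exists("/opt/rocm"):
        backends.append("rocm")
    if env.get("VULKAN_SDK"):
        backends.append("vulkan")
    return backends

print(detect_backends({"VULKAN_SDK": "/opt/vulkan"}, lambda p: False))  # ['vulkan']
```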
### Runtime Configuration
```python
# Use GPU acceleration
llm = llama.Llama("model.gguf", n_gpu_layers=35)
# CPU only (no GPU offload)
llm = llama.Llama("model.gguf", n_gpu_layers=0)
# Full GPU offload (all layers)
llm = llama.Llama("model.gguf", n_gpu_layers=-1)
```
## API Reference
### Llama Class
```python
class Llama:
def __init__(
self,
model_path: str,
n_ctx: int = 512, # Context window size
n_batch: int = 512, # Batch size for prompt processing
n_threads: int = None, # CPU threads (auto-detect if None)
n_gpu_layers: int = 0, # Layers to offload to GPU
seed: int = -1, # Random seed (-1 for random)
use_mmap: bool = True, # Memory map model file
use_mlock: bool = False, # Lock model in RAM
verbose: bool = False, # Print loading info
embedding: bool = False, # Enable embedding mode
): ...
def generate(
self,
prompt: str,
max_tokens: int = 256,
temperature: float = 0.8,
top_k: int = 40,
top_p: float = 0.95,
min_p: float = 0.05,
repeat_penalty: float = 1.1,
stop_sequences: List[str] = None,
stream: bool = False,
) -> Union[str, Iterator[str]]: ...
def tokenize(self, text: str, add_special: bool = True) -> List[int]: ...
def detokenize(self, tokens: List[int]) -> str: ...
def get_embeddings(self, text: str) -> List[float]: ...
def close(self): ...
```
### Backend Functions
```python
def get_available_backends() -> List[str]: ...
def get_backend_info() -> BackendInfo: ...
def is_cuda_available() -> bool: ...
def is_metal_available() -> bool: ...
def is_vulkan_available() -> bool: ...
def is_rocm_available() -> bool: ...
def is_blas_available() -> bool: ...
```
### Embedding Functions
```python
def get_embeddings(model: Union[str, Llama], text: str) -> List[float]: ...
def get_embeddings_batch(model: Union[str, Llama], texts: List[str]) -> List[List[float]]: ...
def cosine_similarity(a: List[float], b: List[float]) -> float: ...
```
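`cosine_similarity` computes the standard dot-product-over-norms measure. A minimal pure-Python equivalent (illustrative only, not the package's actual implementation) looks like this:

```python
# Minimal pure-Python equivalent of cosine_similarity (illustrative;
# not the package's actual implementation).
import math

def cosine_similarity(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0

print(cosine_similarity([1.0, 0.0], [1.0, 0.0]))  # 1.0
print(cosine_similarity([1.0, 0.0], [0.0, 1.0]))  # 0.0
```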
## Examples
See the `examples/` directory:
- `basic_generation.py` - Simple text generation
- `streaming_generation.py` - Real-time token streaming
- `embeddings_example.py` - Generate and compare embeddings
- `backend_info.py` - Check available GPU backends
- `benchmark.py` - Measure token throughput
## Smoke Test / Chat CLI
This repository includes an interactive smoke test that can run either as a one-shot prompt (CI-friendly) or as a back-and-forth chat.
```bash
# Interactive chat (Ctrl+C or blank line to exit)
python -m llama_cpp_py_sync chat
# One-shot prompt
python -m llama_cpp_py_sync chat --prompt "Say 'ok'." --max-tokens 16
# Use a specific model
python -m llama_cpp_py_sync chat --model path/to/model.gguf
```
By default it uses `LLAMA_MODEL` if set. Otherwise it falls back to a default GGUF model, which is cached locally after the first download.
If the default model is missing, the CLI will prompt before downloading it. To auto-download without prompting, pass `--yes` or set `LLAMA_AUTO_DOWNLOAD=1`.
Model cache location:
- **Windows**: `%LOCALAPPDATA%\llama-cpp-py-sync\models\`
- **Linux/macOS**: `~/.cache/llama-cpp-py-sync/models/`
## Building from Source
### Prerequisites
- Python 3.8+
- Ninja
- CMake (configure step)
- C/C++ compiler (GCC, Clang, MSVC)
- Git
### Build Commands
```bash
# Clone repository
git clone https://github.com/FarisZahrani/llama-cpp-py-sync.git
cd llama-cpp-py-sync
# Sync upstream llama.cpp
python scripts/sync_upstream.py
# Regenerate bindings from the synced llama.cpp headers
# (Optional) record the exact llama.cpp commit SHA in the generated file.
python scripts/gen_bindings.py --commit-sha "$(python scripts/sync_upstream.py --sha)"
# Build with auto-detected backends
python scripts/build_llama_cpp.py
# Build a specific backend
python scripts/build_llama_cpp.py --backend cuda
python scripts/build_llama_cpp.py --backend vulkan
python scripts/build_llama_cpp.py --backend cpu
# On Windows, the build script bundles required runtime DLLs (MSVC/OpenMP and backend runtimes)
# next to the built library by default. You can disable this behavior with:
python scripts/build_llama_cpp.py --no-bundle-runtime-dlls
# Detect available backends without building
python scripts/build_llama_cpp.py --detect-only
# Build wheel
pip install build
python -m build --wheel
```
### Low-level C API access (advanced)
If you need direct access to the underlying C API (beyond the high-level `Llama` wrapper), you can use the generated CFFI bindings:
```python
from llama_cpp_py_sync._cffi_bindings import get_ffi, get_lib
ffi = get_ffi()
lib = get_lib()
print(ffi.string(lib.llama_print_system_info()).decode("utf-8", errors="replace"))
```
## Project Structure
```
llama-cpp-py-sync/
├── src/llama_cpp_py_sync/ # Python package
│ ├── __init__.py # Public API
│ ├── _cffi_bindings.py # Auto-generated CFFI bindings
│ ├── _version.py # Version info
│ ├── llama.py # High-level Llama class
│ ├── embeddings.py # Embedding utilities
│ └── backends.py # Backend detection
├── scripts/ # Build and sync scripts
│ ├── sync_upstream.py # Sync upstream llama.cpp
│ ├── gen_bindings.py # Generate CFFI bindings
│ ├── build_llama_cpp.py # Build shared library
│ └── auto_version.py # Version generation
├── examples/ # Example scripts
├── vendor/llama.cpp/ # Upstream source (cloned at build time)
├── .github/workflows/ # CI/CD pipelines
├── pyproject.toml # Package metadata
└── README.md # This file
```
## Contributing
Contributions are welcome! Please:
1. Fork the repository
2. Create a feature branch
3. Make your changes
4. Run checks:
```bash
python scripts/run_tests.py
```
Optionally also verify wheel packaging locally:
```bash
python scripts/run_tests.py --build-wheel
```
5. Submit a pull request
## License
MIT License - see [LICENSE](LICENSE) for details.
This project uses llama.cpp which is also MIT licensed.
Third-party license notices are included in [THIRD_PARTY_NOTICES.txt](THIRD_PARTY_NOTICES.txt).
## Acknowledgments
- [ggml-org/llama.cpp](https://github.com/ggml-org/llama.cpp) - The upstream C/C++ implementation
- [CFFI](https://cffi.readthedocs.io/) - C Foreign Function Interface for Python
| text/markdown | null | Faris Al-Zahrani <contact@fariszahrani.com> | null | Faris Al-Zahrani <contact@fariszahrani.com> | MIT | llama, llama.cpp, llm, language-model, ai, machine-learning, gguf, inference, cffi, bindings | [
"Development Status :: 4 - Beta",
"Intended Audience :: Developers",
"Intended Audience :: Science/Research",
"Operating System :: OS Independent",
"Operating System :: POSIX :: Linux",
"Operating System :: MacOS :: MacOS X",
"Operating System :: Microsoft :: Windows",
"Programming Language :: Python ... | [] | null | null | >=3.8 | [] | [] | [] | [
"cffi>=1.15.0",
"certifi>=2023.7.22",
"numpy>=1.20.0",
"pytest>=7.0.0; extra == \"dev\"",
"pytest-cov>=4.0.0; extra == \"dev\"",
"black>=23.0.0; extra == \"dev\"",
"ruff>=0.1.0; extra == \"dev\"",
"mypy>=1.0.0; extra == \"dev\""
] | [] | [] | [] | [
"Homepage, https://github.com/FarisZahrani/llama-cpp-py-sync",
"Repository, https://github.com/FarisZahrani/llama-cpp-py-sync",
"Documentation, https://github.com/FarisZahrani/llama-cpp-py-sync#readme",
"Issues, https://github.com/FarisZahrani/llama-cpp-py-sync/issues"
] | twine/6.2.0 CPython/3.12.3 | 2026-02-19T03:49:10.474690 | llama_cpp_py_sync-0.8095-1metal-py3-none-macosx_14_0_arm64.whl | 4,762,990 | d6/c6/c26a2c78b7abe0a4e9d921fea9b809cb2499dce079bec0bfeedd7ff0ad2d/llama_cpp_py_sync-0.8095-1metal-py3-none-macosx_14_0_arm64.whl | py3 | bdist_wheel | null | false | a27f72df4e61754f3804efebd0045343 | 30325e7e860b71ecfc1b330805009012d0f57c037fef8815d0de53b27450d1bc | d6c6c26a2c78b7abe0a4e9d921fea9b809cb2499dce079bec0bfeedd7ff0ad2d | null | [
"LICENSE"
] | 310 |
2.4 | python-hwpx | 2.0 | A collection of Python utilities for loading and editing Hancom HWPX packages | # python-hwpx
`python-hwpx` is a collection of Python tools for reading, editing, and reprocessing Hancom HWPX documents with automation scripts. It bundles everything from low-level tools for inspecting the Open Packaging Convention (OPC) container to a high-level API for working with paragraphs, tables, and memos, plus utilities for text extraction and object search.
## Feature Summary
- **Package loading and validation** – `hwpx.opc.package.HwpxPackage` verifies `mimetype`, `container.xml`, and `version.xml` while loading every part into memory.
- **Document editing API** – `hwpx.document.HwpxDocument` exposes paragraphs, tables, memos, and header attributes as Python objects and makes it easy to add new content. Editing a section's header or footer also updates `<hp:headerApply>`/`<hp:footerApply>` and the linked master pages.
- **Typed body model** – `hwpx.oxml.body` maps tables, controls, inline shapes, and change-tracking tags onto data classes, which can be inspected and modified through `HwpxOxmlParagraph.model`/`HwpxOxmlRun.model` and then serialized back to XML.
- **Memo and field anchors** – `add_memo_with_anchor()` creates a memo and automatically inserts the MEMO field control so it displays immediately in the Hangul (한/글) editor.
- **Header reference-list navigation** – bullets, paragraph properties, border fills, styles, change-tracking entries, and author information are parsed into dataclasses, and lookup helpers such as `document.border_fills`, `document.bullets`, and `document.styles` simplify ID-based search.
- **Master-page, history, and version parts** – master-page/history/version parts listed in the manifest can be edited and saved directly via `document.master_pages`, `document.histories`, and `document.version`.
- **Style-based text replacement** – selectively replace or delete text by filtering on run formatting (color, underline, `charPrIDRef`). Strings separated by highlight markers or tags are replaced with their formatting preserved.
- **Text extraction pipeline** – `hwpx.tools.text_extractor.TextExtractor` returns paragraph text while rendering highlights, footnotes, and controls in whatever representation you choose.
- **Rich documentation** – a quick start, 50 usage patterns, and installation/FAQ/schema overviews are available as Sphinx-based web docs.
## Installation
You can install the latest release directly from PyPI.
```bash
python -m pip install python-hwpx
```
To work on a development version or modify the documentation build yourself, clone the repository and use an editable install.
```bash
git clone https://github.com/<your-org>/python-hwpx.git
cd python-hwpx
python -m pip install -e .[dev]
```
The Sphinx docs live under `docs/`; after `python -m pip install -r docs/requirements.txt`, a local preview is available with `make -C docs html`.
## Try It in 5 Minutes
```python
from io import BytesIO
from hwpx import HwpxDocument
from hwpx.templates import blank_document_bytes
# 1) Open a document from the blank template
source = BytesIO(blank_document_bytes())
document = HwpxDocument.open(source)
print("sections:", len(document.sections))
# 2) Add a paragraph, a table, and a memo
section = document.sections[0]
paragraph = document.add_paragraph("Auto-generated paragraph", section=section)
# If the document has no default solid border fill for tables, add_table() creates one automatically.
table = document.add_table(rows=2, cols=2, section=section)
table.set_cell_text(0, 0, "Item")
table.set_cell_text(0, 1, "Value")
table.set_cell_text(1, 0, "Paragraph count")
table.set_cell_text(1, 1, str(len(document.paragraphs)))
document.add_memo_with_anchor("Review before release", paragraph=paragraph, memo_shape_id_ref="0")
# 3) Save under a different name
document.save_to_path("output/example.hwpx")
```
`HwpxDocument.add_table()` creates a "default solid line" `borderFill` in the header reference list when the document defines none, and links that reference to the table and every cell.
`table.set_cell_text()` removes line-layout caches such as `lineSegArray` left in existing paragraphs, so the Hangul editor recomputes line breaks when the document is reopened. To work with merged table structures, use `table.iter_grid()` or `table.get_cell_map()` to inspect the mapping between the logical grid and the physical cells, and `set_cell_text(..., logical=True, split_merged=True)` to edit by logical coordinates while automatically splitting merged cells.
More real-world patterns are covered in the [quick start](docs/quickstart.md) and the "quick example collection" in the [usage guide](docs/usage.md).
### Save API Changes
`HwpxDocument` splits the save use cases as follows.
- `save_to_path(path) -> str | PathLike[str]`: save to the given path and return that path
- `save_to_stream(stream) -> BinaryIO`: save to a file/buffer stream and return that stream
- `to_bytes() -> bytes`: return the document serialized in memory as bytes
The old `save()` is kept for backward compatibility but emits a deprecation warning; new code should use the three methods above.
## Documentation
[Usage guide](https://airmang.github.io/python-hwpx/)
## Examples and Tools
- The `examples/` directory contains examples for text extraction, object search, and generating QA checklists. It is not included in the PyPI package, so clone the repository or use the code snippets from the web docs if you need them.
- `hwpx.templates.blank_document_bytes()` provides a built-in template for creating a blank HWPX document without any extra resources.
## Known Limitations
- `add_shape()`/`add_control()` do not generate every child element the Hangul editor requires, so open complex objects in the editor to verify them.
## Contributing
Bug reports and suggestions for improvement are always welcome. See [CONTRIBUTING.md](CONTRIBUTING.md) for development-environment setup and how to run the tests.
## License and Contact
- License: [LICENSE](LICENSE)
- Contact: the issue tracker or kokyuhyun@hotmail.com
| text/markdown | python-hwpx Maintainers | null | null | null | Non-Commercial License
Copyright (c) 2024 python-hwpx Maintainers
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to use,
copy, modify, merge, publish, distribute, and sublicense the Software only for
non-commercial purposes, subject to the following conditions:
1. Non-Commercial Use Only. The Software may be used, copied, modified,
merged, published, distributed, and sublicensed only for non-commercial
purposes. "Non-Commercial" means use that is not primarily intended for or
directed toward commercial advantage, monetary compensation, or any form of
direct or indirect commercial exploitation.
2. Attribution. The above copyright notice and this permission notice shall be
included in all copies or substantial portions of the Software.
3. No Warranty of Commercial Support. The maintainers are not obligated to
provide commercial support, maintenance, or updates.
THIS SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE, OR NON-INFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.
If you require permissions to use this Software for commercial purposes,
please contact the copyright holders to negotiate an alternative licensing
arrangement.
| hwp, hwpx, hancom, opc, xml | [
"Development Status :: 3 - Alpha",
"Intended Audience :: Developers",
"License :: OSI Approved :: MIT License",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Topic :: Software Develo... | [] | null | null | >=3.10 | [] | [] | [] | [
"lxml<6,>=4.9",
"build>=1.0; extra == \"dev\"",
"twine>=4.0; extra == \"dev\"",
"pytest>=7.4; extra == \"dev\"",
"pytest>=7.4; extra == \"test\"",
"pytest-cov>=5.0; extra == \"test\"",
"mypy>=1.10; extra == \"typecheck\"",
"pyright>=1.1.390; extra == \"typecheck\""
] | [] | [] | [] | [
"Homepage, https://github.com/airmang/python-hwpx",
"Documentation, https://github.com/airmang/python-hwpx/tree/main/docs",
"Issues, https://github.com/airmang/python-hwpx/issues"
] | twine/6.2.0 CPython/3.12.7 | 2026-02-19T03:48:42.371742 | python_hwpx-2.0.tar.gz | 92,835 | fe/b3/b7859fd287d0b0280021edea28fd7b6457148b26b878d53e7853c057c332/python_hwpx-2.0.tar.gz | source | sdist | null | false | 48223f6c09844814e288db4e5399b657 | eeb42ece5512e1854ad994bb1603a7195f92955eb89751446e286a3f04f0fb3a | feb3b7859fd287d0b0280021edea28fd7b6457148b26b878d53e7853c057c332 | null | [
"LICENSE"
] | 870 |
2.4 | fpdf2 | 2.8.6 | Simple & fast PDF generation for Python | [PyPI history](https://pypi.org/pypi/fpdf2#history)
[PyPI project](https://pypi.org/project/fpdf2/)
[LGPL-3.0 license](https://www.gnu.org/licenses/lgpl-3.0)
[CI status](https://github.com/py-pdf/fpdf2/actions?query=branch%3Amaster)
[Code coverage](https://codecov.io/gh/py-pdf/fpdf2)
[Trusted Publishers](https://blog.pypi.org/posts/2023-04-20-introducing-trusted-publishers/)
[CI workflow](https://github.com/py-pdf/fpdf2/actions/workflows/continuous-integration-workflow.yml)
[Dependents](https://libraries.io/pypi/fpdf2/dependents)
[Downloads](https://pepy.tech/project/fpdf2)
[Contributors](https://github.com/py-pdf/fpdf2/graphs/contributors)
[Commit activity](https://github.com/py-pdf/fpdf2/commits/master)
[Issues](https://github.com/py-pdf/fpdf2/issues)
[Pull requests](https://github.com/py-pdf/fpdf2/pulls)
[PRs welcome](https://makeapullrequest.com)
[First-timers only](https://www.firsttimersonly.com/)
→ come look at our [good first issues](https://github.com/py-pdf/fpdf2/issues?q=is%3Aissue+is%3Aopen+label%3A%22good+first+issue%22)
# fpdf2

`fpdf2` is a PDF creation library for Python:
```python
from fpdf import FPDF
pdf = FPDF()
pdf.add_page()
pdf.set_font('helvetica', size=12)
pdf.cell(text="hello world")
pdf.output("hello_world.pdf")
```
Go try it **now** online in a Jupyter notebook: [Google Colab](https://colab.research.google.com/github/py-pdf/fpdf2/blob/master/tutorial/notebook.ipynb) or [nbviewer](https://nbviewer.org/github/py-pdf/fpdf2/blob/master/tutorial/notebook.ipynb)
Compared with other PDF libraries, `fpdf2` is **fast, versatile, easy to learn and to extend** ([example](https://github.com/digidigital/Extensions-and-Scripts-for-pyFPDF-fpdf2)).
It is also entirely written in Python and has very few dependencies:
[Pillow](https://pillow.readthedocs.io/en/stable/), [defusedxml](https://pypi.org/project/defusedxml/), & [fontTools](https://fonttools.readthedocs.io/en/latest/index.html). It is a fork and the successor of `PyFPDF` (_cf._ [history](https://py-pdf.github.io/fpdf2/History.html)).
**Development status**: this project is **mature** and **actively maintained**.
We are looking for contributing developers: if you want to get involved but don't know how,
or would like to volunteer helping maintain this lib, [open a discussion](https://github.com/py-pdf/fpdf2/discussions)!
## Installation Instructions
```bash
pip install fpdf2
```
To get the latest, unreleased, development version straight from the development branch of this repository:
```bash
pip install git+https://github.com/py-pdf/fpdf2.git@master
```
## Features
* Python 3.10+ support
* [Unicode](https://py-pdf.github.io/fpdf2/Unicode.html) (UTF-8) TrueType font subset embedding
* Internal / external [links](https://py-pdf.github.io/fpdf2/Links.html)
* Embedding images, including transparency and alpha channel
* Arbitrary path drawing and basic [SVG](https://py-pdf.github.io/fpdf2/SVG.html) import
* Embedding [barcodes](https://py-pdf.github.io/fpdf2/Barcodes.html), [charts & graphs](https://py-pdf.github.io/fpdf2/Maths.html), [emojis, symbols & dingbats](https://py-pdf.github.io/fpdf2/EmojisSymbolsDingbats.html)
* [Tables](https://py-pdf.github.io/fpdf2/Tables.html) and also [cell / multi-cell / plaintext writing](https://py-pdf.github.io/fpdf2/Text.html), with [automatic page breaks](https://py-pdf.github.io/fpdf2/PageBreaks.html), line break and text justification
* Choice of measurement unit, page format & margins. Optional page header and footer
* Basic [conversion from HTML to PDF](https://py-pdf.github.io/fpdf2/HTML.html)
* A [templating system](https://py-pdf.github.io/fpdf2/Templates.html) to render PDFs in batches
* Images & links alternative descriptions, for accessibility
* Table of contents & [document outline](https://py-pdf.github.io/fpdf2/DocumentOutlineAndTableOfContents.html)
* [Document encryption](https://py-pdf.github.io/fpdf2/Encryption.html) & [document signing](https://py-pdf.github.io/fpdf2/Signing.html)
* [Annotations](https://py-pdf.github.io/fpdf2/Annotations.html), including text highlights, and [file attachments](https://py-pdf.github.io/fpdf2/FileAttachments.html)
* [Presentation mode](https://py-pdf.github.io/fpdf2/Presentations.html) with control over page display duration & transitions
* Optional basic Markdown-like styling: `**bold**, __italics__`
* Can render [mathematical equations & charts](https://py-pdf.github.io/fpdf2/Maths.html)
* Usage examples with [Django](https://www.djangoproject.com/), [Flask](https://flask.palletsprojects.com), [FastAPI](https://fastapi.tiangolo.com/), [streamlit](https://streamlit.io/), AWS lambdas... : [Usage in web APIs](https://py-pdf.github.io/fpdf2/UsageInWebAPI.html)
* more than 1300 unit tests running under Linux & Windows, with `qpdf`-based PDF diffing, timing & memory usage checks, and a high code coverage
Our 350+ reference PDF test files, generated by `fpdf2`, are validated using 3 different checkers:
[qpdf](https://github.com/qpdf/qpdf)
[Datalogics PDF Checker](https://www.datalogics.com/repair-pdf-files)
[veraPDF](https://verapdf.org)
## Please show the value
Choosing a project dependency can be difficult. We need to ensure stability and maintainability of our projects.
Surveys show that GitHub stars count play an important factor when assessing library quality.
⭐ Please give this repository a star. It takes seconds and will help your fellow developers! ⭐
## Please share with the community
This library relies on community interactions. Please consider sharing a post about `fpdf2` and the value it provides 😊
[Reddit](https://reddit.com/submit?url=https://github.com/py-pdf/fpdf2&title=fpdf2)
[Hacker News](https://news.ycombinator.com/submitlink?u=https://github.com/py-pdf/fpdf2)
[Twitter](https://twitter.com/share?url=https://github.com/py-pdf/fpdf2&t=fpdf2)
[Facebook](https://www.facebook.com/sharer/sharer.php?u=https://github.com/py-pdf/fpdf2)
[LinkedIn](https://www.linkedin.com/shareArticle?url=https://github.com/py-pdf/fpdf2&title=fpdf2)
## Documentation
- [Documentation Home](https://py-pdf.github.io/fpdf2/)
- Tutorial in several languages: [English](https://py-pdf.github.io/fpdf2/Tutorial.html) - [Deutsch](https://py-pdf.github.io/fpdf2/Tutorial-de.html) - [español](https://py-pdf.github.io/fpdf2/Tutorial-es.html) - [हिंदी](https://py-pdf.github.io/fpdf2/Tutorial-hi.html) - [português](https://py-pdf.github.io/fpdf2/Tutorial-pt.html) - [Русский](https://py-pdf.github.io/fpdf2/Tutorial-ru.html) - [Italian](https://py-pdf.github.io/fpdf2/Tutorial-it.html) - [français](https://py-pdf.github.io/fpdf2/Tutorial-fr.html) - [Ελληνικά](https://py-pdf.github.io/fpdf2/Tutorial-gr.html) - [עברית](https://py-pdf.github.io/fpdf2/Tutorial-he.html) - [简体中文](https://py-pdf.github.io/fpdf2/Tutorial-zh.html) - [বাংলা](https://py-pdf.github.io/fpdf2/Tutorial-bn.html) - [ភាសាខ្មែរ](https://py-pdf.github.io/fpdf2/Tutorial-km.html) - [日本語](https://py-pdf.github.io/fpdf2/Tutorial-ja.html) - [Dutch](https://py-pdf.github.io/fpdf2/Tutorial-nl.html) - [Polski](https://py-pdf.github.io/fpdf2/Tutorial-pl.html) - [Türkçe](https://py-pdf.github.io/fpdf2/Tutorial-tr.html) - [Indonesian](https://py-pdf.github.io/fpdf2/Tutorial-id.html) - [Slovenščina](https://py-pdf.github.io/fpdf2/Tutorial-sl.html)
- Release notes: [CHANGELOG.md](https://github.com/py-pdf/fpdf2/blob/master/CHANGELOG.md)
- A series of blog posts: [fpdf2 tag @ ludochaordic](https://chezsoi.org/lucas/blog/tag/fpdf2.html)
You can also have a look at the `tests/`, they're great usage examples!
## Development
Please check the [dedicated documentation page](https://py-pdf.github.io/fpdf2/Development.html).
## Contributors ✨
This library could only exist thanks to the dedication of many volunteers around the world:
<!-- ALL-CONTRIBUTORS-LIST:START - Do not remove or modify this section -->
<!-- prettier-ignore-start -->
<!-- markdownlint-disable -->
<table>
<tbody>
<tr>
<td align="center" valign="top" width="14.28%"><a href="https://github.com/reingart"><img src="https://avatars.githubusercontent.com/u/1041385?v=4?s=100" width="100px;" alt="Mariano Reingart"/><br /><sub><b>Mariano Reingart</b></sub></a><br /><a href="https://github.com/py-pdf/fpdf2/commits?author=reingart" title="Code">💻</a></td>
<td align="center" valign="top" width="14.28%"><a href="http://lymaconsulting.github.io/"><img src="https://avatars.githubusercontent.com/u/8921892?v=4?s=100" width="100px;" alt="David Ankin"/><br /><sub><b>David Ankin</b></sub></a><br /><a href="https://github.com/py-pdf/fpdf2/issues?q=author%3Aalexanderankin" title="Bug reports">🐛</a> <a href="https://github.com/py-pdf/fpdf2/commits?author=alexanderankin" title="Code">💻</a> <a href="https://github.com/py-pdf/fpdf2/commits?author=alexanderankin" title="Documentation">📖</a> <a href="#maintenance-alexanderankin" title="Maintenance">🚧</a> <a href="#question-alexanderankin" title="Answering Questions">💬</a> <a href="https://github.com/py-pdf/fpdf2/pulls?q=is%3Apr+reviewed-by%3Aalexanderankin" title="Reviewed Pull Requests">👀</a> <a href="https://github.com/py-pdf/fpdf2/commits?author=alexanderankin" title="Tests">⚠️</a></td>
<td align="center" valign="top" width="14.28%"><a href="https://github.com/alexp1917"><img src="https://avatars.githubusercontent.com/u/66129071?v=4?s=100" width="100px;" alt="Alex Pavlovich"/><br /><sub><b>Alex Pavlovich</b></sub></a><br /><a href="https://github.com/py-pdf/fpdf2/issues?q=author%3Aalexp1917" title="Bug reports">🐛</a> <a href="https://github.com/py-pdf/fpdf2/commits?author=alexp1917" title="Code">💻</a> <a href="https://github.com/py-pdf/fpdf2/commits?author=alexp1917" title="Documentation">📖</a> <a href="#question-alexp1917" title="Answering Questions">💬</a> <a href="https://github.com/py-pdf/fpdf2/pulls?q=is%3Apr+reviewed-by%3Aalexp1917" title="Reviewed Pull Requests">👀</a> <a href="https://github.com/py-pdf/fpdf2/commits?author=alexp1917" title="Tests">⚠️</a></td>
<td align="center" valign="top" width="14.28%"><a href="https://chezsoi.org/lucas/blog/"><img src="https://avatars.githubusercontent.com/u/925560?v=4?s=100" width="100px;" alt="Lucas Cimon"/><br /><sub><b>Lucas Cimon</b></sub></a><br /><a href="#blog-Lucas-C" title="Blogposts">📝</a> <a href="https://github.com/py-pdf/fpdf2/commits?author=Lucas-C" title="Code">💻</a> <a href="https://github.com/py-pdf/fpdf2/commits?author=Lucas-C" title="Documentation">📖</a> <a href="#infra-Lucas-C" title="Infrastructure (Hosting, Build-Tools, etc)">🚇</a> <a href="#maintenance-Lucas-C" title="Maintenance">🚧</a> <a href="#question-Lucas-C" title="Answering Questions">💬</a> <a href="https://github.com/py-pdf/fpdf2/issues?q=author%3ALucas-C" title="Bug reports">🐛</a></td>
<td align="center" valign="top" width="14.28%"><a href="https://github.com/eumiro"><img src="https://avatars.githubusercontent.com/u/6774676?v=4?s=100" width="100px;" alt="Miroslav Šedivý"/><br /><sub><b>Miroslav Šedivý</b></sub></a><br /><a href="https://github.com/py-pdf/fpdf2/commits?author=eumiro" title="Code">💻</a> <a href="https://github.com/py-pdf/fpdf2/commits?author=eumiro" title="Tests">⚠️</a></td>
<td align="center" valign="top" width="14.28%"><a href="https://github.com/fbernhart"><img src="https://avatars.githubusercontent.com/u/70264417?v=4?s=100" width="100px;" alt="Florian Bernhart"/><br /><sub><b>Florian Bernhart</b></sub></a><br /><a href="https://github.com/py-pdf/fpdf2/commits?author=fbernhart" title="Code">💻</a> <a href="https://github.com/py-pdf/fpdf2/commits?author=fbernhart" title="Tests">⚠️</a></td>
<td align="center" valign="top" width="14.28%"><a href="http://pr.linkedin.com/in/edwoodocasio/"><img src="https://avatars.githubusercontent.com/u/82513?v=4?s=100" width="100px;" alt="Edwood Ocasio"/><br /><sub><b>Edwood Ocasio</b></sub></a><br /><a href="https://github.com/py-pdf/fpdf2/commits?author=eocasio" title="Code">💻</a> <a href="https://github.com/py-pdf/fpdf2/commits?author=eocasio" title="Tests">⚠️</a></td>
</tr>
<tr>
<td align="center" valign="top" width="14.28%"><a href="https://github.com/marcelotduarte"><img src="https://avatars.githubusercontent.com/u/12752334?v=4?s=100" width="100px;" alt="Marcelo Duarte"/><br /><sub><b>Marcelo Duarte</b></sub></a><br /><a href="https://github.com/py-pdf/fpdf2/commits?author=marcelotduarte" title="Code">💻</a></td>
<td align="center" valign="top" width="14.28%"><a href="https://github.com/RomanKharin"><img src="https://avatars.githubusercontent.com/u/6203756?v=4?s=100" width="100px;" alt="Roman Kharin"/><br /><sub><b>Roman Kharin</b></sub></a><br /><a href="https://github.com/py-pdf/fpdf2/commits?author=RomanKharin" title="Code">💻</a> <a href="#ideas-RomanKharin" title="Ideas, Planning, & Feedback">🤔</a></td>
<td align="center" valign="top" width="14.28%"><a href="https://github.com/cgfrost"><img src="https://avatars.githubusercontent.com/u/166104?v=4?s=100" width="100px;" alt="Christopher Frost"/><br /><sub><b>Christopher Frost</b></sub></a><br /><a href="https://github.com/py-pdf/fpdf2/issues?q=author%3Acgfrost" title="Bug reports">🐛</a> <a href="https://github.com/py-pdf/fpdf2/commits?author=cgfrost" title="Code">💻</a></td>
<td align="center" valign="top" width="14.28%"><a href="http://www.ne.ch/sitn"><img src="https://avatars.githubusercontent.com/u/1681332?v=4?s=100" width="100px;" alt="Michael Kalbermatten"/><br /><sub><b>Michael Kalbermatten</b></sub></a><br /><a href="https://github.com/py-pdf/fpdf2/issues?q=author%3Akalbermattenm" title="Bug reports">🐛</a> <a href="https://github.com/py-pdf/fpdf2/commits?author=kalbermattenm" title="Code">💻</a></td>
<td align="center" valign="top" width="14.28%"><a href="https://yanone.de/"><img src="https://avatars.githubusercontent.com/u/175386?v=4?s=100" width="100px;" alt="Yanone"/><br /><sub><b>Yanone</b></sub></a><br /><a href="https://github.com/py-pdf/fpdf2/commits?author=yanone" title="Code">💻</a></td>
<td align="center" valign="top" width="14.28%"><a href="https://github.com/leoleozhu"><img src="https://avatars.githubusercontent.com/u/738445?v=4?s=100" width="100px;" alt="Leo Zhu"/><br /><sub><b>Leo Zhu</b></sub></a><br /><a href="https://github.com/py-pdf/fpdf2/commits?author=leoleozhu" title="Code">💻</a></td>
<td align="center" valign="top" width="14.28%"><a href="https://www.abishekgoda.com/"><img src="https://avatars.githubusercontent.com/u/310520?v=4?s=100" width="100px;" alt="Abishek Goda"/><br /><sub><b>Abishek Goda</b></sub></a><br /><a href="https://github.com/py-pdf/fpdf2/commits?author=abishek" title="Code">💻</a></td>
</tr>
<tr>
<td align="center" valign="top" width="14.28%"><a href="https://www.cd-net.net/"><img src="https://avatars.githubusercontent.com/u/1515637?v=4?s=100" width="100px;" alt="Arthur Moore"/><br /><sub><b>Arthur Moore</b></sub></a><br /><a href="https://github.com/py-pdf/fpdf2/commits?author=EmperorArthur" title="Code">💻</a> <a href="https://github.com/py-pdf/fpdf2/commits?author=EmperorArthur" title="Tests">⚠️</a> <a href="https://github.com/py-pdf/fpdf2/issues?q=author%3AEmperorArthur" title="Bug reports">🐛</a></td>
<td align="center" valign="top" width="14.28%"><a href="https://boghison.com/"><img src="https://avatars.githubusercontent.com/u/7976283?v=4?s=100" width="100px;" alt="Bogdan Cuza"/><br /><sub><b>Bogdan Cuza</b></sub></a><br /><a href="https://github.com/py-pdf/fpdf2/commits?author=boghison" title="Code">💻</a></td>
<td align="center" valign="top" width="14.28%"><a href="https://github.com/craigahobbs"><img src="https://avatars.githubusercontent.com/u/1263515?v=4?s=100" width="100px;" alt="Craig Hobbs"/><br /><sub><b>Craig Hobbs</b></sub></a><br /><a href="https://github.com/py-pdf/fpdf2/commits?author=craigahobbs" title="Code">💻</a></td>
<td align="center" valign="top" width="14.28%"><a href="https://github.com/xitrushiy"><img src="https://avatars.githubusercontent.com/u/17336659?v=4?s=100" width="100px;" alt="xitrushiy"/><br /><sub><b>xitrushiy</b></sub></a><br /><a href="https://github.com/py-pdf/fpdf2/issues?q=author%3Axitrushiy" title="Bug reports">🐛</a> <a href="https://github.com/py-pdf/fpdf2/commits?author=xitrushiy" title="Code">💻</a></td>
<td align="center" valign="top" width="14.28%"><a href="https://github.com/jredrejo"><img src="https://avatars.githubusercontent.com/u/1008178?v=4?s=100" width="100px;" alt="José L. Redrejo Rodríguez"/><br /><sub><b>José L. Redrejo Rodríguez</b></sub></a><br /><a href="https://github.com/py-pdf/fpdf2/commits?author=jredrejo" title="Code">💻</a></td>
<td align="center" valign="top" width="14.28%"><a href="https://jugmac00.github.io/"><img src="https://avatars.githubusercontent.com/u/9895620?v=4?s=100" width="100px;" alt="Jürgen Gmach"/><br /><sub><b>Jürgen Gmach</b></sub></a><br /><a href="https://github.com/py-pdf/fpdf2/commits?author=jugmac00" title="Code">💻</a></td>
<td align="center" valign="top" width="14.28%"><a href="https://github.com/Larivact"><img src="https://avatars.githubusercontent.com/u/8731884?v=4?s=100" width="100px;" alt="Larivact"/><br /><sub><b>Larivact</b></sub></a><br /><a href="https://github.com/py-pdf/fpdf2/commits?author=Larivact" title="Code">💻</a></td>
</tr>
<tr>
<td align="center" valign="top" width="14.28%"><a href="https://github.com/leonelcamara"><img src="https://avatars.githubusercontent.com/u/1198145?v=4?s=100" width="100px;" alt="Leonel Câmara"/><br /><sub><b>Leonel Câmara</b></sub></a><br /><a href="https://github.com/py-pdf/fpdf2/commits?author=leonelcamara" title="Code">💻</a></td>
<td align="center" valign="top" width="14.28%"><a href="https://github.com/mark-steadman"><img src="https://avatars.githubusercontent.com/u/15779053?v=4?s=100" width="100px;" alt="Mark Steadman"/><br /><sub><b>Mark Steadman</b></sub></a><br /><a href="https://github.com/py-pdf/fpdf2/issues?q=author%3Amark-steadman" title="Bug reports">🐛</a> <a href="https://github.com/py-pdf/fpdf2/commits?author=mark-steadman" title="Code">💻</a></td>
<td align="center" valign="top" width="14.28%"><a href="https://github.com/sergeyfitts"><img src="https://avatars.githubusercontent.com/u/40498252?v=4?s=100" width="100px;" alt="Sergey"/><br /><sub><b>Sergey</b></sub></a><br /><a href="https://github.com/py-pdf/fpdf2/commits?author=sergeyfitts" title="Code">💻</a></td>
<td align="center" valign="top" width="14.28%"><a href="https://github.com/Stan-C421"><img src="https://avatars.githubusercontent.com/u/82440217?v=4?s=100" width="100px;" alt="Stan-C421"/><br /><sub><b>Stan-C421</b></sub></a><br /><a href="https://github.com/py-pdf/fpdf2/commits?author=Stan-C421" title="Code">💻</a></td>
<td align="center" valign="top" width="14.28%"><a href="https://github.com/viraj-shah18"><img src="https://avatars.githubusercontent.com/u/44942391?v=4?s=100" width="100px;" alt="Viraj Shah"/><br /><sub><b>Viraj Shah</b></sub></a><br /><a href="https://github.com/py-pdf/fpdf2/commits?author=viraj-shah18" title="Code">💻</a></td>
<td align="center" valign="top" width="14.28%"><a href="https://github.com/cornicis"><img src="https://avatars.githubusercontent.com/u/11545033?v=4?s=100" width="100px;" alt="cornicis"/><br /><sub><b>cornicis</b></sub></a><br /><a href="https://github.com/py-pdf/fpdf2/commits?author=cornicis" title="Code">💻</a></td>
<td align="center" valign="top" width="14.28%"><a href="https://github.com/moe-25"><img src="https://avatars.githubusercontent.com/u/85580959?v=4?s=100" width="100px;" alt="moe-25"/><br /><sub><b>moe-25</b></sub></a><br /><a href="https://github.com/py-pdf/fpdf2/commits?author=moe-25" title="Code">💻</a> <a href="https://github.com/py-pdf/fpdf2/pulls?q=is%3Apr+reviewed-by%3Amoe-25" title="Reviewed Pull Requests">👀</a> <a href="#research-moe-25" title="Research">🔬</a> <a href="https://github.com/py-pdf/fpdf2/issues?q=author%3Amoe-25" title="Bug reports">🐛</a></td>
</tr>
<tr>
<td align="center" valign="top" width="14.28%"><a href="https://github.com/niphlod"><img src="https://avatars.githubusercontent.com/u/122119?v=4?s=100" width="100px;" alt="Simone Bizzotto"/><br /><sub><b>Simone Bizzotto</b></sub></a><br /><a href="https://github.com/py-pdf/fpdf2/commits?author=niphlod" title="Code">💻</a></td>
<td align="center" valign="top" width="14.28%"><a href="https://github.com/bnyw"><img src="https://avatars.githubusercontent.com/u/32655514?v=4?s=100" width="100px;" alt="Boonyawe Sirimaha"/><br /><sub><b>Boonyawe Sirimaha</b></sub></a><br /><a href="https://github.com/py-pdf/fpdf2/issues?q=author%3Abnyw" title="Bug reports">🐛</a></td>
<td align="center" valign="top" width="14.28%"><a href="https://github.com/torque"><img src="https://avatars.githubusercontent.com/u/949138?v=4?s=100" width="100px;" alt="T"/><br /><sub><b>T</b></sub></a><br /><a href="https://github.com/py-pdf/fpdf2/commits?author=torque" title="Code">💻</a> <a href="#design-torque" title="Design">🎨</a></td>
<td align="center" valign="top" width="14.28%"><a href="https://github.com/AubsUK"><img src="https://avatars.githubusercontent.com/u/68870168?v=4?s=100" width="100px;" alt="AubsUK"/><br /><sub><b>AubsUK</b></sub></a><br /><a href="#question-AubsUK" title="Answering Questions">💬</a></td>
<td align="center" valign="top" width="14.28%"><a href="http://www.schorsch.com/"><img src="https://avatars.githubusercontent.com/u/17468844?v=4?s=100" width="100px;" alt="Georg Mischler"/><br /><sub><b>Georg Mischler</b></sub></a><br /><a href="https://github.com/py-pdf/fpdf2/issues?q=author%3Agmischler" title="Bug reports">🐛</a> <a href="https://github.com/py-pdf/fpdf2/commits?author=gmischler" title="Code">💻</a> <a href="#design-gmischler" title="Design">🎨</a> <a href="https://github.com/py-pdf/fpdf2/commits?author=gmischler" title="Documentation">📖</a> <a href="#ideas-gmischler" title="Ideas, Planning, & Feedback">🤔</a> <a href="#question-gmischler" title="Answering Questions">💬</a> <a href="https://github.com/py-pdf/fpdf2/commits?author=gmischler" title="Tests">⚠️</a></td>
<td align="center" valign="top" width="14.28%"><a href="https://www.buymeacoffee.com/ping"><img src="https://avatars.githubusercontent.com/u/104607?v=4?s=100" width="100px;" alt="ping"/><br /><sub><b>ping</b></sub></a><br /><a href="https://github.com/py-pdf/fpdf2/issues?q=author%3Aping" title="Bug reports">🐛</a></td>
<td align="center" valign="top" width="14.28%"><a href="http://portfedh@gmail.com"><img src="https://avatars.githubusercontent.com/u/59422723?v=4?s=100" width="100px;" alt="Portfedh"/><br /><sub><b>Portfedh</b></sub></a><br /><a href="https://github.com/py-pdf/fpdf2/commits?author=portfedh" title="Documentation">📖</a> <a href="#tutorial-portfedh" title="Tutorials">✅</a></td>
</tr>
<tr>
<td align="center" valign="top" width="14.28%"><a href="https://github.com/tabarnhack"><img src="https://avatars.githubusercontent.com/u/34366899?v=4?s=100" width="100px;" alt="Tabarnhack"/><br /><sub><b>Tabarnhack</b></sub></a><br /><a href="https://github.com/py-pdf/fpdf2/commits?author=tabarnhack" title="Code">💻</a></td>
<td align="center" valign="top" width="14.28%"><a href="https://github.com/Mridulbirla13"><img src="https://avatars.githubusercontent.com/u/24730417?v=4?s=100" width="100px;" alt="Mridul Birla"/><br /><sub><b>Mridul Birla</b></sub></a><br /><a href="#translation-Mridulbirla13" title="Translation">🌍</a></td>
<td align="center" valign="top" width="14.28%"><a href="https://github.com/digidigital"><img src="https://avatars.githubusercontent.com/u/28964886?v=4?s=100" width="100px;" alt="digidigital"/><br /><sub><b>digidigital</b></sub></a><br /><a href="#translation-digidigital" title="Translation">🌍</a></td>
<td align="center" valign="top" width="14.28%"><a href="https://github.com/xit4"><img src="https://avatars.githubusercontent.com/u/7601720?v=4?s=100" width="100px;" alt="Xit"/><br /><sub><b>Xit</b></sub></a><br /><a href="#translation-xit4" title="Translation">🌍</a></td>
<td align="center" valign="top" width="14.28%"><a href="https://github.com/AABur"><img src="https://avatars.githubusercontent.com/u/41373199?v=4?s=100" width="100px;" alt="Alexander Burchenko"/><br /><sub><b>Alexander Burchenko</b></sub></a><br /><a href="#translation-AABur" title="Translation">🌍</a></td>
<td align="center" valign="top" width="14.28%"><a href="https://github.com/fuscati"><img src="https://avatars.githubusercontent.com/u/48717599?v=4?s=100" width="100px;" alt="André Assunção"/><br /><sub><b>André Assunção</b></sub></a><br /><a href="#translation-fuscati" title="Translation">🌍</a></td>
<td align="center" valign="top" width="14.28%"><a href="http://frenchcomputerguy.com/"><img src="https://avatars.githubusercontent.com/u/5825096?v=4?s=100" width="100px;" alt="Quentin Brault"/><br /><sub><b>Quentin Brault</b></sub></a><br /><a href="#translation-Tititesouris" title="Translation">🌍</a></td>
</tr>
<tr>
<td align="center" valign="top" width="14.28%"><a href="https://github.com/paulacampigotto"><img src="https://avatars.githubusercontent.com/u/36995920?v=4?s=100" width="100px;" alt="Paula Campigotto"/><br /><sub><b>Paula Campigotto</b></sub></a><br /><a href="https://github.com/py-pdf/fpdf2/issues?q=author%3Apaulacampigotto" title="Bug reports">🐛</a> <a href="https://github.com/py-pdf/fpdf2/commits?author=paulacampigotto" title="Code">💻</a> <a href="https://github.com/py-pdf/fpdf2/pulls?q=is%3Apr+reviewed-by%3Apaulacampigotto" title="Reviewed Pull Requests">👀</a></td>
<td align="center" valign="top" width="14.28%"><a href="https://github.com/bettman-latin"><img src="https://avatars.githubusercontent.com/u/91155492?v=4?s=100" width="100px;" alt="bettman-latin"/><br /><sub><b>bettman-latin</b></sub></a><br /><a href="https://github.com/py-pdf/fpdf2/commits?author=bettman-latin" title="Code">💻</a></td>
<td align="center" valign="top" width="14.28%"><a href="https://github.com/oleksii-shyman"><img src="https://avatars.githubusercontent.com/u/8827452?v=4?s=100" width="100px;" alt="oleksii-shyman"/><br /><sub><b>oleksii-shyman</b></sub></a><br /><a href="https://github.com/py-pdf/fpdf2/commits?author=oleksii-shyman" title="Code">💻</a> <a href="#design-oleksii-shyman" title="Design">🎨</a> <a href="#ideas-oleksii-shyman" title="Ideas, Planning, & Feedback">🤔</a></td>
<td align="center" valign="top" width="14.28%"><a href="https://lcomrade.su"><img src="https://avatars.githubusercontent.com/u/70049256?v=4?s=100" width="100px;" alt="lcomrade"/><br /><sub><b>lcomrade</b></sub></a><br /><a href="https://github.com/py-pdf/fpdf2/commits?author=lcomrade" title="Documentation">📖</a> <a href="https://github.com/py-pdf/fpdf2/issues?q=author%3Alcomrade" title="Bug reports">🐛</a> <a href="https://github.com/py-pdf/fpdf2/commits?author=lcomrade" title="Code">💻</a></td>
<td align="center" valign="top" width="14.28%"><a href="https://github.com/pwt"><img src="https://avatars.githubusercontent.com/u/1089749?v=4?s=100" width="100px;" alt="pwt"/><br /><sub><b>pwt</b></sub></a><br /><a href="https://github.com/py-pdf/fpdf2/issues?q=author%3Apwt" title="Bug reports">🐛</a> <a href="https://github.com/py-pdf/fpdf2/commits?author=pwt" title="Code">💻</a></td>
<td align="center" valign="top" width="14.28%"><a href="https://github.com/mcerveny"><img src="https://avatars.githubusercontent.com/u/1438115?v=4?s=100" width="100px;" alt="Martin Cerveny"/><br /><sub><b>Martin Cerveny</b></sub></a><br /><a href="https://github.com/py-pdf/fpdf2/issues?q=author%3Amcerveny" title="Bug reports">🐛</a> <a href="https://github.com/py-pdf/fpdf2/commits?author=mcerveny" title="Code">💻</a></td>
<td align="center" valign="top" width="14.28%"><a href="https://github.com/Spenhouet"><img src="https://avatars.githubusercontent.com/u/7819068?v=4?s=100" width="100px;" alt="Spenhouet"/><br /><sub><b>Spenhouet</b></sub></a><br /><a href="https://github.com/py-pdf/fpdf2/issues?q=author%3ASpenhouet" title="Bug reports">🐛</a> <a href="https://github.com/py-pdf/fpdf2/pulls?q=is%3Apr+reviewed-by%3ASpenhouet" title="Reviewed Pull Requests">👀</a></td>
</tr>
<tr>
<td align="center" valign="top" width="14.28%"><a href="https://github.com/mtkumar123"><img src="https://avatars.githubusercontent.com/u/89176219?v=4?s=100" width="100px;" alt="mtkumar123"/><br /><sub><b>mtkumar123</b></sub></a><br /><a href="https://github.com/py-pdf/fpdf2/commits?author=mtkumar123" title="Code">💻</a></td>
<td align="center" valign="top" width="14.28%"><a href="https://github.com/RedShy"><img src="https://avatars.githubusercontent.com/u/24901693?v=4?s=100" width="100px;" alt="Davide Consalvo"/><br /><sub><b>Davide Consalvo</b></sub></a><br /><a href="https://github.com/py-pdf/fpdf2/commits?author=RedShy" title="Code">💻</a> <a href="#question-RedShy" title="Answering Questions">💬</a> <a href="#design-RedShy" title="Design">🎨</a></td>
<td align="center" valign="top" width="14.28%"><a href="http://blog.whatgeek.com.pt"><img src="https://avatars.githubusercontent.com/u/2813722?v=4?s=100" width="100px;" alt="Bruno Santos"/><br /><sub><b>Bruno Santos</b></sub></a><br /><a href="https://github.com/py-pdf/fpdf2/issues?q=author%3Afeiticeir0" title="Bug reports">🐛</a></td>
<td align="center" valign="top" width="14.28%"><a href="https://github.com/cgkoutzigiannis"><img src="https://avatars.githubusercontent.com/u/41803093?v=4?s=100" width="100px;" alt="cgkoutzigiannis"/><br /><sub><b>cgkoutzigiannis</b></sub></a><br /><a href="https://github.com/py-pdf/fpdf2/commits?author=cgkoutzigiannis" title="Tests">⚠️</a></td>
<td align="center" valign="top" width="14.28%"><a href="https://github.com/iwayankurniawan"><img src="https://avatars.githubusercontent.com/u/30134645?v=4?s=100" width="100px;" alt="I Wayan Kurniawan"/><br /><sub><b>I Wayan Kurniawan</b></sub></a><br /><a href="https://github.com/py-pdf/fpdf2/commits?author=iwayankurniawan" title="Documentation">📖</a></td>
<td align="center" valign="top" width="14.28%"><a href="https://rysta.io"><img src="https://avatars.githubusercontent.com/u/4029642?v=4?s=100" width="100px;" alt="Sven Eliasson"/><br /><sub><b>Sven Eliasson</b></sub></a><br /><a href="https://github.com/py-pdf/fpdf2/commits?author=comino" title="Documentation">📖</a></td>
<td align="center" valign="top" width="14.28%"><a href="https://github.com/gonzalobarbaran"><img src="https://avatars.githubusercontent.com/u/59395855?v=4?s=100" width="100px;" alt="gonzalobarbaran"/><br /><sub><b>gonzalobarbaran</b></sub></a><br /><a href="https://github.com/py-pdf/fpdf2/commits?author=gonzalobarbaran" title="Code">💻</a></td>
</tr>
<tr>
<td align="center" valign="top" width="14.28%"><a href="https://www.nuttapat.me"><img src="https://avatars.githubusercontent.com/u/2115896?v=4?s=100" width="100px;" alt="Nuttapat Koonarangsri"/><br /><sub><b>Nuttapat Koonarangsri</b></sub></a><br /><a href="https://github.com/py-pdf/fpdf2/commits?author=hackinteach" title="Documentation">📖</a> <a href="https://github.com/py-pdf/fpdf2/issues?q=author%3Ahackinteach" title="Bug reports">🐛</a></td>
<td align="center" valign="top" width="14.28%"><a href="https://github.com/sokratisvas"><img src="https://avatars.githubusercontent.com/u/77175483?v=4?s=100" width="100px;" alt="Sokratis Vasiliou"/><br /><sub><b>Sokratis Vasiliou</b></sub></a><br /><a href="#translation-sokratisvas" title="Translation">🌍</a></td>
<td align="center" valign="top" width="14.28%"><a href="https://github.com/semaeostomea"><img src="https://avatars.githubusercontent.com/u/100974908?v=4?s=100" width="100px;" alt="semaeostomea"/><br /><sub><b>semaeostomea</b></sub></a><br /><a href="https://github.com/py-pdf/fpdf2/commits?author=semaeostomea" title="Documentation">📖</a> <a href="https://github.com/py-pdf/fpdf2/commits?author=semaeostomea" title="Code">💻</a></td>
<td align="center" valign="top" width="14.28%"><a href="https://github.com/Jmillan-Dev"><img src="https://avatars.githubusercontent.com/u/39383390?v=4?s=100" width="100px;" alt="Josué Millán Zamora"/><br /><sub><b>Josué Millán Zamora</b></sub></a><br /><a href="https://github.com/py-pdf/fpdf2/commits?author=Jmillan-Dev" title="Code">💻</a></td>
<td align="center" valign="top" width="14.28%"><a href="https://github.com/me-suzy"><img src="https://avatars.githubusercontent.com/u/2770489?v=4?s=100" width="100px;" alt="me-suzy"/><br /><sub><b>me-suzy</b></sub></a><br /><a href="https://github.com/py-pdf/fpdf2/issues?q=author%3Ame-suzy" title="Bug reports">🐛</a></td>
<td align="center" valign="top" width="14.28%"><a href="https://github.com/dmail00"><img src="https://avatars.githubusercontent.com/u/79044603?v=4?s=100" width="100px;" alt="dmail00"/><br /><sub><b>dmail00</b></sub></a><br /><a href="https://github.com/py-pdf/fpdf2/issues?q=author%3Admail00" title="Bug reports">🐛</a> <a href="https://github.com/py-pdf/fpdf2/commits?author=dmail00" title="Code">💻</a></td>
<td align="center" valign="top" width="14.28%"><a href="https://github.com/GerardoAllende"><img src="https://avatars.githubusercontent.com/u/8699267?v=4?s=100" width="100px;" alt="Gerardo Allende"/><br /><sub><b>Gerardo Allende</b></sub></a><br /><a href="https://github.com/py-pdf/fpdf2/commits?author=GerardoAllende" title="Code">💻</a> <a href="#research-GerardoAllende" title="Research">🔬</a></td>
</tr>
<tr>
<td align="center" valign="top" width="14.28%"><a href="https://nicholasjin.github.io/"><img src="https://avatars.githubusercontent.com/u/15252734?v=4?s=100" width="100px;" alt="Nicholas Jin"/><br /><sub><b>Nicholas Jin</b></sub></a><br /><a href="https://github.com/py-pdf/fpdf2/issues?q=author%3Anicholasjin" title="Bug reports">🐛</a></td>
<td align="center" valign="top" width="14.28%"><a href="https://portfolio-yk-jp.vercel.app/"><img src="https://avatars.githubusercontent.com/u/69574727?v=4?s=100" width="100px;" alt="Yusuke"/><br /><sub><b>Yusuke</b></sub></a><br /><a href="https://github.com/py-pdf/fpdf2/commits?author=yk-jp" title="Code">💻</a></td>
<td align="center" valign="top" width="14.28%"><a href="https://github.com/Tillrzhtgrfho"><img src="https://avatars.githubusercontent.com/u/86628355?v=4?s=100" width="100px;" alt="Tillrzhtgrfho"/><br /><sub><b>Tillrzhtgrfho</b></sub></a><br /><a href="https://github.com/py-pdf/fpdf2/issues?q=author%3ATillrzhtgrfho" title="Bug reports">🐛</a></td>
<td align="center" valign="top" width="14.28%"><a href="http://dario.icu/"><img src="https://avatars.githubusercontent.com/u/35274810?v=4?s=100" width="100px;" alt="Dario Ackermann"/><br /><sub><b>Dario Ackermann</b></sub></a><br /><a href="https://github.com/py-pdf/fpdf2/issues?q=author%3Adarioackermann" title="Bug reports">🐛</a></td>
<td align="center" valign="top" width="14.28%"><a href="https://github.com/TzviGreenfeld"><img src="https://avatars.githubusercontent.com/u/43534411?v=4?s=100" width="100px;" alt="Tzvi Greenfeld"/><br /><sub><b>Tzvi Greenfeld</b></sub></a><br /><a href="https://github.com/py-pdf/fpdf2/commits?author=TzviGreenfeld" title="Documentation">📖</a> <a href="#translation-TzviGreenfeld" title="Translation">🌍</a></td>
<td align="center" valign="top" width="14.28%"><a href="https://github.com/devdev29"><img src="https://avatars.githubusercontent.com/u/88680035?v=4?s=100" width="100px;" alt="devdev29"/><br /><sub><b>devdev29</b></sub></a><br /><a href="https://github.com/py-pdf/fpdf2/commits?author=devdev29" title="Documentation">📖</a> <a href="https://github.com/py-pdf/fpdf2/commits?author=devdev29" title="Code">💻</a></td>
<td align="center" valign="top" width="14.28%"><a href="https://github.com/Zenigata"><img src="https://avatars.githubusercontent.com/u/1022393?v=4?s=100" width="100px;" alt="Johan Bonneau"/><br /><sub><b>Johan Bonneau</b></sub></a><br /><a href="https://github.com/py-pdf/fpdf2/commits?author=Zenigata" title="Documentation">📖</a></td>
</tr>
<tr>
<td align="center" valign="top" width="14.28%"><a href="https://github.com/jmunoz94"><img src="https://avatars.githubusercontent.com/u/48921408?v=4?s=100" width="100px;" alt="Jesús Alberto Muñoz Mesa"/><br /><sub><b>Jesús Alberto Muñoz Mesa</b></sub></a><br /><a href="https://github.com/py-pdf/fpdf2/commits?author=jmunoz94" title="Tests">⚠️</a> <a href="https://github.com/py-pdf/fpdf2/commits?author=jmunoz94" title="Documentation">📖</a> <a href="#translation-jmunoz94" title="Translation">🌍</a></td>
<td align="center" valign="top" width="14.28%"><a href="https://jdeep.me"><img src="https://avatars.githubusercontent.com/u/64089730?v=4?s=100" width="100px;" alt="Jaydeep Das"/><br /><sub><b>Jaydeep Das</b></sub></a><br /><a href="#question-JDeepD" title="Answering Questions">💬</a></td>
<td align="center" valign="top" width="14.28%"><a href="https://github.com/seanpmulholland"><img src="https://avatars.githubusercontent.com/u/79894395?v=4?s=100" width="100px;" alt="Sean"/><br /><sub><b>Sean</b></sub></a><br /><a href="https://github.com/py-pdf/fpdf2/commits?author=seanpmulholland" title="Code">💻</a></td>
<td align="center" valign="top" width="14.28%"><a href="https://github.com/andersonhc"><img src="https://avatars.githubusercontent.com/u/948125?v=4?s=100" width="100px;" alt="Anderson Herzogenrath da Costa"/><br /><sub><b>Anderson Herzogenrath da Costa</b></sub></a><br /><a href="#question-andersonhc" title="Answering Questions">💬</a> <a href="https://github.com/py-pdf/fpdf2/commits?author=andersonhc" title="Code">💻</a> <a href="#research-andersonhc" titl | text/markdown | Olivier PLATHEY, Max, Lucas Cimon (@Lucas-C), Georg Mischler (@gmischler), Anderson Herzogenrath da Costa (@andersonhc) | null | null | null | null | pdf, unicode, png, jpg, ttf, barcode, library, markdown | [
"Development Status :: 5 - Production/Stable",
"Intended Audience :: Developers",
"Programming Language :: Python",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: 3.13",
"Programming Langu... | [] | null | null | >=3.10 | [] | [] | [] | [
"defusedxml",
"Pillow!=9.2.*,>=8.3.2",
"fonttools>=4.34.0",
"bandit; extra == \"dev\"",
"black; extra == \"dev\"",
"mypy; extra == \"dev\"",
"pre-commit; extra == \"dev\"",
"pyright; extra == \"dev\"",
"pylint; extra == \"dev\"",
"semgrep; extra == \"dev\"",
"zizmor; extra == \"dev\"",
"lxml; ... | [] | [] | [] | [
"Homepage, https://py-pdf.github.io/fpdf2/",
"Documentation, https://py-pdf.github.io/fpdf2/",
"Code, https://github.com/py-pdf/fpdf2",
"Issue tracker, https://github.com/py-pdf/fpdf2/issues"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-19T03:45:25.736108 | fpdf2-2.8.6.tar.gz | 362,052 | 74/3f/b23cf3a1943905645f44df95de8289770e69f85c9442abf89f93dc5c4a74/fpdf2-2.8.6.tar.gz | source | sdist | null | false | 474f5528037bbf60ee21f4d74171beb5 | 5132f26bbeee69a7ca6a292e4da1eb3241147b5aea9348b35e780ecd02bf5fc2 | 743fb23cf3a1943905645f44df95de8289770e69f85c9442abf89f93dc5c4a74 | LGPL-3.0-only | [
"LICENSE"
] | 178,347 |
[dflockd-client 1.6.0, "dflockd python client"]

# dflockd-client
<!--toc:start-->
- [dflockd-client](#dflockd-client)
- [Installation](#installation)
- [Quick start](#quick-start)
- [Async client](#async-client)
- [Sync client](#sync-client)
- [Manual acquire/release](#manual-acquirerelease)
- [Two-phase lock acquisition](#two-phase-lock-acquisition)
- [Parameters](#parameters)
  - [Authentication](#authentication)
- [TLS](#tls)
- [Semaphores](#semaphores)
- [Parameters](#parameters-1)
- [Stats](#stats)
- [Multi-server sharding](#multi-server-sharding)
<!--toc:end-->
A Python client library for [dflockd](https://github.com/mtingers/dflockd) — a
lightweight distributed lock server with FIFO ordering, automatic lease expiry,
and background renewal.
[Read the docs here](https://mtingers.github.io/dflockd-client-py/)
## Installation
```bash
pip install dflockd-client
```
Or with uv:
```bash
uv add dflockd-client
```
## Quick start
### Async client
```python
import asyncio
from dflockd_client.client import DistributedLock
async def main():
async with DistributedLock("my-key", acquire_timeout_s=10) as lock:
print(lock.token, lock.lease)
# critical section — lease auto-renews in background
asyncio.run(main())
```
### Sync client
```python
from dflockd_client.sync_client import DistributedLock
with DistributedLock("my-key", acquire_timeout_s=10) as lock:
print(lock.token, lock.lease)
# critical section — lease auto-renews in background thread
```
### Manual acquire/release
Both clients support explicit `acquire()` / `release()` outside of a context manager:
```python
from dflockd_client.sync_client import DistributedLock
lock = DistributedLock("my-key")
if lock.acquire():
    try:
        pass  # critical section
    finally:
        lock.release()
```
### Two-phase lock acquisition
The `enqueue()` / `wait()` methods split lock acquisition into two steps, allowing you to notify an external system after joining the queue but before blocking:
```python
from dflockd_client.sync_client import DistributedLock
lock = DistributedLock("my-key")
status = lock.enqueue() # join queue, returns "acquired" or "queued"
notify_external_system() # your application logic here
if lock.wait(timeout_s=10):  # block until granted (no-op if already acquired)
    try:
        pass  # critical section
    finally:
        lock.release()
```
Async equivalent:
```python
from dflockd_client.client import DistributedLock

# inside an async function:
lock = DistributedLock("my-key")
status = await lock.enqueue()
await notify_external_system()
if await lock.wait(timeout_s=10):
    try:
        pass  # critical section
    finally:
        await lock.release()
```
### Parameters
| Parameter | Default | Description |
| ------------------- | ----------------------- | ----------------------------------------------------------------------- |
| `key` | _(required)_ | Lock name |
| `acquire_timeout_s` | `10` | Seconds to wait for lock acquisition |
| `lease_ttl_s` | `None` (server default) | Lease duration in seconds |
| `servers` | `[("127.0.0.1", 6388)]` | List of `(host, port)` tuples |
| `sharding_strategy` | `stable_hash_shard` | `Callable[[str, int], int]` — maps `(key, num_servers)` to server index |
| `renew_ratio` | `0.5` | Renew at `lease * ratio` seconds |
| `ssl_context` | `None` | `ssl.SSLContext` for TLS connections. `None` uses plain TCP |
| `auth_token` | `None` | Auth token for servers started with `--auth-token`. `None` skips auth |
## Authentication
When the dflockd server is started with `--auth-token`, pass the token to authenticate:
```python
from dflockd_client.sync_client import DistributedLock
with DistributedLock("my-key", auth_token="mysecret") as lock:
    print(lock.token, lock.lease)
```
Async equivalent:
```python
from dflockd_client.client import DistributedLock
async with DistributedLock("my-key", auth_token="mysecret") as lock:
    print(lock.token, lock.lease)
```
Both `DistributedLock` and `DistributedSemaphore` accept `auth_token` in the async and sync clients. A `PermissionError` is raised if the token is invalid.
## TLS
To connect to a TLS-enabled dflockd server, pass an `ssl.SSLContext`:
```python
import ssl
from dflockd_client.sync_client import DistributedLock
ctx = ssl.create_default_context() # uses system CA bundle
# or: ctx = ssl.create_default_context(cafile="/path/to/ca.pem")
with DistributedLock("my-key", ssl_context=ctx) as lock:
    print(lock.token, lock.lease)
```
Async equivalent:
```python
import ssl
from dflockd_client.client import DistributedLock
ctx = ssl.create_default_context()
async with DistributedLock("my-key", ssl_context=ctx) as lock:
    print(lock.token, lock.lease)
```
Both `DistributedLock` and `DistributedSemaphore` accept `ssl_context` in the async and sync clients.
## Semaphores
`DistributedSemaphore` allows up to N concurrent holders per key, using the same API patterns as `DistributedLock`:
```python
from dflockd_client.sync_client import DistributedSemaphore
# Allow up to 3 concurrent workers on this key
with DistributedSemaphore("my-key", limit=3, acquire_timeout_s=10) as sem:
    print(sem.token, sem.lease)
    # critical section — up to 3 holders at once
```
Async equivalent:
```python
from dflockd_client.client import DistributedSemaphore
async with DistributedSemaphore("my-key", limit=3, acquire_timeout_s=10) as sem:
    print(sem.token, sem.lease)
```
Manual acquire/release and two-phase (`enqueue()` / `wait()`) work the same as locks.
### Parameters
| Parameter | Default | Description |
| ------------------- | ----------------------- | ----------------------------------------------------------------------- |
| `key` | _(required)_ | Semaphore name |
| `limit` | _(required)_ | Maximum concurrent holders |
| `acquire_timeout_s` | `10` | Seconds to wait for acquisition |
| `lease_ttl_s` | `None` (server default) | Lease duration in seconds |
| `servers` | `[("127.0.0.1", 6388)]` | List of `(host, port)` tuples |
| `sharding_strategy` | `stable_hash_shard` | `Callable[[str, int], int]` — maps `(key, num_servers)` to server index |
| `renew_ratio` | `0.5` | Renew at `lease * ratio` seconds |
| `ssl_context` | `None` | `ssl.SSLContext` for TLS connections. `None` uses plain TCP |
| `auth_token` | `None` | Auth token for servers started with `--auth-token`. `None` skips auth |
## Stats
Query server state (connections, held locks, active semaphores) using the low-level `stats()` function:
```python
import asyncio
from dflockd_client.client import stats
async def main():
    reader, writer = await asyncio.open_connection("127.0.0.1", 6388)
    result = await stats(reader, writer)
    print(result)
    # {'connections': 1, 'locks': [], 'semaphores': [], 'idle_locks': [], 'idle_semaphores': []}
    writer.close()
    await writer.wait_closed()

asyncio.run(main())
```
Sync equivalent:
```python
import socket
from dflockd_client.sync_client import stats
sock = socket.create_connection(("127.0.0.1", 6388))
rfile = sock.makefile("r", encoding="utf-8")
result = stats(sock, rfile)
print(result)
rfile.close()
sock.close()
```
Returns a dict with `connections`, `locks`, `semaphores`, `idle_locks`, and `idle_semaphores`.
## Multi-server sharding
When running multiple dflockd instances, the client can distribute keys across servers using consistent hashing. Each key always routes to the same server.
```python
from dflockd_client.sync_client import DistributedLock
servers = [("server1", 6388), ("server2", 6388), ("server3", 6388)]
with DistributedLock("my-key", servers=servers) as lock:
    print(lock.token, lock.lease)
```
The default strategy uses `zlib.crc32` for stable, deterministic hashing. You can provide a custom strategy:
```python
from dflockd_client.sync_client import DistributedLock

servers = [("server1", 6388), ("server2", 6388), ("server3", 6388)]

def my_strategy(key: str, num_servers: int) -> int:
    """Route all keys to the first server."""
    return 0

with DistributedLock("my-key", servers=servers, sharding_strategy=my_strategy) as lock:
    pass
```
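For reference, a deterministic crc32-based strategy with the same signature can be sketched as follows. This is an illustrative stand-in written from the description above, not the library's actual `stable_hash_shard` implementation:

```python
import zlib

def crc32_shard(key: str, num_servers: int) -> int:
    """Map a key to a server index deterministically via CRC-32."""
    # zlib.crc32 is stable across runs and platforms, unlike the built-in hash().
    return zlib.crc32(key.encode("utf-8")) % num_servers
```

Because the hash is computed from the key bytes alone, every client routes the same key to the same server as long as the server list order is identical.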
| text/markdown | Matth Ingersoll | Matth Ingersoll <matth@mtingers.com> | null | null | null | null | [] | [] | null | null | >=3.12 | [] | [] | [] | [
"pytest-asyncio>=1.3.0; extra == \"dev\"",
"pytest-cov>=7.0.0; extra == \"dev\"",
"pyright>=1.1; extra == \"dev\""
] | [] | [] | [] | [
"Homepage, https://github.com/mtingers/dflockd-client-py",
"Repository, https://github.com/mtingers/dflockd-client-py",
"Documentation, https://mtingers.github.io/dflockd-client-py/",
"Bug Tracker, https://github.com/mtingers/dflockd-client-py/issues",
"Changelog, https://github.com/mtingers/dflockd-client-... | uv/0.10.0 {"installer":{"name":"uv","version":"0.10.0","subcommand":["publish"]},"python":null,"implementation":{"name":null,"version":null},"distro":{"name":"macOS","version":null,"id":null,"libc":null},"system":{"name":null,"release":null},"cpu":null,"openssl_version":null,"setuptools_version":null,"rustc_version":null,"ci":null} | 2026-02-19T03:43:55.284910 | dflockd_client-1.6.0.tar.gz | 7,109 | bf/7d/5374b0f823f1c881558040f67245f53a99dd0dab10f7894f47a1aad65945/dflockd_client-1.6.0.tar.gz | source | sdist | null | false | 5224d7401f68a47540786ed4309e2006 | 875fb044e5f55f5fb3f5a4963ddf61d147ba51d3841a7ec935e149cb47480dc0 | bf7d5374b0f823f1c881558040f67245f53a99dd0dab10f7894f47a1aad65945 | MIT | [] | 275 |
2.4 | funbuild | 1.6.19 | funbuild | # funbuild
[](https://badge.fury.io/py/funbuild)
[](https://www.python.org/downloads/)
[](https://opensource.org/licenses/MIT)
funbuild is a modern Python project build and management tool designed to streamline the development, build, release, and maintenance of Python projects. It integrates multiple build tools and best practices, giving developers a one-stop project management solution.
## ✨ Features
- 🚀 **Multiple build backends**: supports PyPI, Poetry, UV, and other build workflows
- 🔄 **Automated version management**: smart version bumping and release flow
- 📦 **Dependency management**: handles project dependencies and environment setup automatically
- 🔧 **Git integration**: built-in Git operations, including pull, push, and history cleanup
- 🤖 **AI commit messages**: integrates gcop to generate smart commit messages automatically
- 🧹 **Project cleanup**: automatically cleans caches, build artifacts, and history
- 📋 **Tag management**: automatically creates and manages Git tags
- 🎯 **Command-line interface**: a simple, easy-to-use CLI
## 📋 Requirements
- Python 3.9 or later
- Git (for the version-control features)
## 🚀 Installation
### From PyPI (recommended)
```bash
pip install funbuild
```
### From source
```bash
git clone https://github.com/farfarfun/funbuild.git
cd funbuild
pip install .
```
## 📖 Usage Guide
### Basic commands
From your project root, use the following commands to manage your build workflow:
#### Version management
```bash
# Bump the project version
funbuild upgrade
# Bump to a specific version
funbuild upgrade --version 1.2.3
```
#### Git operations
```bash
# Pull the latest code
funbuild pull
# Push code (commit message can be auto-generated)
funbuild push --message "your commit message"
# Generate the commit message automatically with AI
funbuild push
```
#### Building the project
```bash
# Install project dependencies
funbuild install
# Build and publish the project
funbuild build --message "release notes"
# Build only, without publishing
funbuild build --no-publish
```
#### Project maintenance
```bash
# Clean up Git history
funbuild clean_history
# Clean build caches and temporary files
funbuild clean
# Create Git tags
funbuild tags
```
### Advanced usage
#### Configuration file
funbuild is configured through `pyproject.toml`, where you can customize the build behavior:
```toml
[tool.funbuild]
# Build backend: pypi, poetry, uv
build_type = "uv"
# Automatic version bump strategy
version_strategy = "patch" # major, minor, patch
# Whether to run tests before publishing
run_tests = true
# Custom build commands
build_commands = [
    "ruff check .",
    "pytest tests/",
]
```
#### Environment variables
```bash
# Set the build backend
export FUNBUILD_TYPE=uv
# Set the publish repository
export FUNBUILD_REPOSITORY=https://upload.pypi.org/legacy/
# Enable verbose logging
export FUNBUILD_VERBOSE=1
```
## 🔧 Integrated Tools
funbuild builds on the following excellent tools:
- **[uv](https://github.com/astral-sh/uv)**: a modern Python package manager
- **[ruff](https://github.com/astral-sh/ruff)**: a fast Python linter and formatter
- **[gcop](https://github.com/farfarfun/gcop)**: an AI-powered Git commit message generator
- **[typer](https://typer.tiangolo.com/)**: a modern CLI application framework
## 📁 Project Structure
```
funbuild/
├── src/
│   └── funbuild/
│       ├── core/          # Core build logic
│       ├── shell/         # Shell command execution
│       └── tool/          # Tool integrations
├── examples/              # Usage examples
├── tests/                 # Test files
├── pyproject.toml         # Project configuration
└── README.md              # Project documentation
```
## 🤝 Contributing
Contributions of any kind are welcome! Please follow these steps:
1. Fork this repository
2. Create a feature branch (`git checkout -b feature/AmazingFeature`)
3. Commit your changes (`git commit -m 'Add some AmazingFeature'`)
4. Push the branch (`git push origin feature/AmazingFeature`)
5. Open a Pull Request
### Development setup
```bash
# Clone the repository
git clone https://github.com/farfarfun/funbuild.git
cd funbuild
# Install development dependencies
pip install -e ".[dev]"
# Run the tests
pytest tests/
# Format and lint the code
ruff format .
ruff check . --fix
```
## 📄 License
This project is released under the [MIT License](LICENSE).
## 🔗 Links
- [GitHub repository](https://github.com/farfarfun/funbuild)
- [PyPI page](https://pypi.org/project/funbuild/)
- [Releases](https://github.com/farfarfun/funbuild/releases)
- [Issue tracker](https://github.com/farfarfun/funbuild/issues)
## 👥 Maintainers
- **牛哥** - [niuliangtao@qq.com](mailto:niuliangtao@qq.com)
- **farfarfun** - [farfarfun@qq.com](mailto:farfarfun@qq.com)
## 🙏 Acknowledgements
Thanks to all the developers and users who have contributed to funbuild!
---
If you find funbuild helpful, please give us a ⭐️!
| text/markdown | null | 牛哥 <niuliangtao@qq.com>, farfarfun <farfarfun@qq.com> | null | 牛哥 <niuliangtao@qq.com>, farfarfun <farfarfun@qq.com> | MIT | build, requirements, packaging, uv | [] | [] | null | null | >=3.9 | [] | [] | [] | [
"uv>=0.8.4",
"ruff>=0.12.7",
"toml>=0.10.2",
"typer-slim",
"gcop>=1.7.3",
"nltlog>=1.0.11"
] | [] | [] | [] | [
"Organization, https://github.com/farfarfun",
"Repository, https://github.com/farfarfun/funbuild",
"Releases, https://github.com/farfarfun/funbuild/releases"
] | uv/0.10.2 {"installer":{"name":"uv","version":"0.10.2","subcommand":["publish"]},"python":null,"implementation":{"name":null,"version":null},"distro":{"name":"macOS","version":null,"id":null,"libc":null},"system":{"name":null,"release":null},"cpu":null,"openssl_version":null,"setuptools_version":null,"rustc_version":null,"ci":null} | 2026-02-19T03:42:42.721665 | funbuild-1.6.19-py3-none-any.whl | 9,616 | aa/55/28911fb5d7d5a057b5c1c55b50b7fb5c581833737b67811fd16324a62341/funbuild-1.6.19-py3-none-any.whl | py3 | bdist_wheel | null | false | efd2febb32773f145413db0b4b42da2c | 6224af8637654a73e7a3be9697957be2de464b4e0d21731202c6e757525b941f | aa5528911fb5d7d5a057b5c1c55b50b7fb5c581833737b67811fd16324a62341 | null | [] | 110 |
2.4 | pmtvs-valve | 0.0.1 | Signal analysis primitives | # pmtvs-valve
Part of the pmtvs signal analysis ecosystem. Coming soon.
See [pmtvs](https://pypi.org/project/pmtvs/) for the main package.
| text/markdown | null | null | null | null | MIT | null | [
"Development Status :: 1 - Planning",
"Intended Audience :: Science/Research",
"Topic :: Scientific/Engineering :: Physics",
"Programming Language :: Python :: 3"
] | [] | null | null | >=3.10 | [] | [] | [] | [] | [] | [] | [] | [] | twine/6.2.0 CPython/3.14.2 | 2026-02-19T03:37:22.347550 | pmtvs_valve-0.0.1.tar.gz | 1,254 | 63/96/f17fcacf541a179f2a0444787aa38ec9957169f691178f4fe5d97a5211bc/pmtvs_valve-0.0.1.tar.gz | source | sdist | null | false | 532563f801943006da29defc3387c249 | 3f702897cba9639fe8b475c7a7cf37d859427a70558f976faeac4414b7c9bcfd | 6396f17fcacf541a179f2a0444787aa38ec9957169f691178f4fe5d97a5211bc | null | [] | 291 |
2.4 | pmtvs-pipeline | 0.0.1 | Signal analysis primitives | # pmtvs-pipeline
Part of the pmtvs signal analysis ecosystem. Coming soon.
See [pmtvs](https://pypi.org/project/pmtvs/) for the main package.
| text/markdown | null | null | null | null | MIT | null | [
"Development Status :: 1 - Planning",
"Intended Audience :: Science/Research",
"Topic :: Scientific/Engineering :: Physics",
"Programming Language :: Python :: 3"
] | [] | null | null | >=3.10 | [] | [] | [] | [] | [] | [] | [] | [] | twine/6.2.0 CPython/3.14.2 | 2026-02-19T03:36:17.823159 | pmtvs_pipeline-0.0.1.tar.gz | 1,261 | 8b/05/9a7db2276c995753ff26b3ef1cdbf1389d68862448d3a06c163810e6e9ca/pmtvs_pipeline-0.0.1.tar.gz | source | sdist | null | false | 9370d1beb01122d8b339f9acdec9bf79 | 53cdf82b7fb39ac349aa3856a01756fefb3bed4fd53673f3ec66f409b0b92479 | 8b059a7db2276c995753ff26b3ef1cdbf1389d68862448d3a06c163810e6e9ca | null | [] | 293 |
2.4 | hwpx-mcp-server | 1.0.0 | Model Context Protocol server for local HWPX document automation. | # 한글 MCP (HWPX) 서버 - 한글 자동화 HWPX 문서 자동 생성·편집·검증mcp #
이 프로젝트는 **한글 MCP(HWPX) 서버**로, HWPX 문서를 한글 워드프로세서 없이 직접 열고 자동화할 수 있도록 설계되었습니다.
Gemini CLI, Claude Desktop과 같은 MCP 클라이언트에 연결하여 문서 생성·편집·탐색 기능을 제공합니다.
[](https://pypi.org/project/hwpx-mcp-server/)
[](https://opensource.org/licenses/MIT)
[](https://github.com/your-repo/hwpx-mcp-server/actions/workflows/ci.yml)
**The most powerful way to work with HWPX documents freely, in pure Python.**
`hwpx-mcp-server` is a server that follows the [Model Context Protocol](https://github.com/modelcontextprotocol/specification) standard, built on the powerful [`python-hwpx`](https://github.com/airmang/python-hwpx) library. It integrates seamlessly with modern AI clients such as Gemini and Claude, providing a rich feature set for opening, searching, editing, and saving local HWPX documents.
-----
## ✨ Key Features
* **✅ Standard MCP server**: a stable stdio-based server built on the official `mcp` SDK.
* **📂 Zero configuration**: resolves paths relative to the current working directory, no setup required.
* **📄 Powerful document editing**: everything from text extraction and pagination to style, table, memo, and object editing.
* **🧩 HWP compatibility + automatic conversion**: read-only browsing and search of binary `.hwp` documents, plus a `convert_hwp_to_hwpx` tool that converts them to `.hwpx` and feeds them straight into the editing pipeline.
* **🛡️ Safe saving**: an automatic backup (`*.bak`) option guards against unexpected data loss.
* **🚀 Instant start**: with `uv` installed, a single line gets you going: `uvx hwpx-mcp-server`.
## 🚀 Quick Start
### 1. Install `uv`
First, install the Python package tool `uv`.
[👉 Astral uv installation guide](https://docs.astral.sh/uv/getting-started/installation/)
### 2. Configure your MCP client
Add the server entry below to your MCP client configuration.
```json
{
"mcpServers": {
"hwpx": {
"command": "uvx",
"args": ["hwpx-mcp-server"],
"env": {
"HWPX_MCP_PAGING_PARA_LIMIT": "200",
"HWPX_MCP_AUTOBACKUP": "1",
"LOG_LEVEL": "INFO"
}
}
}
}
```
### 3. Run the server (for local use)
Run the command below in a terminal and the server starts right away.
```bash
uvx hwpx-mcp-server
```
> On first run, `uvx` installs the required dependencies automatically; `python-hwpx` 1.9 or later must be available.
> The server resolves paths relative to the directory it was launched from, so no separate working-directory configuration is needed.
### 4. Streamable HTTP for remote deployment
If remote MCP clients (e.g. an internal gateway or an AI orchestrator) need access, switch the transport to `streamable-http`:
```bash
uvx hwpx-mcp-server --transport streamable-http --host 0.0.0.0 --port 8080
```
Recommended operational practices:
- **Port binding**: in containers/VMs, bind with `--host 0.0.0.0` and restrict external access to allowed ranges via a firewall or security group.
- **Reverse proxy**: put Nginx/Traefik/Caddy in front and handle TLS termination (HTTPS), authentication, and rate limiting at the proxy layer.
- **Proxy timeouts**: Streamable HTTP uses SSE responses, so set long-lived connection timeouts such as `proxy_read_timeout` (for Nginx) generously.
- **Separate health checks**: keep the proxy/orchestrator health-check endpoint separate from the MCP path so it does not conflict with long-lived connection traffic.
## ⚙️ Environment Variables
| Variable | Description | Default |
| --- | --- | --- |
| `HWPX_MCP_PAGING_PARA_LIMIT` | Maximum number of paragraphs returned by the pagination tools | `200` |
| `HWPX_MCP_AUTOBACKUP` | If `1`, create a `<file>.bak` backup before saving | `0` |
| `LOG_LEVEL` | Log level for the JSONL output on stderr | `INFO` |
| `HWPX_MCP_HARDENING` | If `1`, enable the hardened edit pipeline and the search/context tools | `0` |
| `HWPX_MCP_TOOLSET` | CSV of tool categories to expose (`core`,`tables`,`styles`,`pipeline`,`debug`) | all tools if unset |
> ℹ️ The `read_text` tool returns at most 200 paragraphs by default. For larger dumps, pass the `limit` argument explicitly in the tool call or raise the `HWPX_MCP_PAGING_PARA_LIMIT` environment variable. This mirrors the Microsoft Office Word workflow of reading only the range you need, sequentially.
> 🔐 Running with `HWPX_MCP_HARDENING=1` exposes the new edit pipeline (`plan → preview → apply`) together with the search/context tools. If the value is `0` or unset, only the existing tool surface is kept.
> 🧰 `HWPX_MCP_TOOLSET` narrows the exposed tools by category, e.g. `HWPX_MCP_TOOLSET=core,tables`.
>
> ✅ The recommended profile is **`core` first**. In production, start with `core` and add `tables`, `styles`, `pipeline`, and `debug` incrementally as needed.
>
> - Minimal surface: `HWPX_MCP_TOOLSET=core`
> - Table-editing focus: `HWPX_MCP_TOOLSET=core,tables`
> - With style editing: `HWPX_MCP_TOOLSET=core,styles`
> - With the hardened pipeline: `HWPX_MCP_TOOLSET=core,pipeline` + `HWPX_MCP_HARDENING=1`
### 📁 Document Locator
All tool inputs now use a **discriminated union** locator to identify the document. The default remains the top-level `path` field, identical to the previous schema, so existing calls keep working with no extra declaration. When needed, specify `type` explicitly to use the HTTP backend or a pre-registered handle.
- **Local file (same as the previous schema)**
```jsonc
{
"name": "open_info",
"arguments": {
"path": "sample.hwpx"
}
}
```
- **With the HTTP backend**: when the server runs in HTTP storage mode, point at the remote path via the `uri` field, optionally adding a `backend` hint.
```jsonc
{
"name": "open_info",
"arguments": {
"type": "uri",
"uri": "reports/weekly.hwpx",
"backend": "http"
}
}
```
- **Using a pre-registered handle**: ordinary tools (`open_info`, `read_text`, `set_table_cell_text`, and so on) now officially support `handleId`.
```jsonc
{
"name": "open_info",
"arguments": {
"type": "handle",
"handleId": "h_0123456789abcdef"
}
}
```
- **Cross-document example**: a flow that copies a table from document A into document B.
```jsonc
{
"name": "copy_table_between_documents",
"arguments": {
"sourceDocument": {"type": "handle", "handleId": "h_source"},
"sourceTableIndex": 0,
"targetDocument": {"type": "handle", "handleId": "h_target"},
"targetSectionIndex": 0,
"autoFit": true
}
}
```
Each variant may additionally carry a `backend` field, and explicitly passing a nested `document` object is also allowed. Schemas pass through a sanitizer and are served flattened, without `$ref`.
### 🔐 Hardened Edit Pipeline (optional)
With the hardening flag on, a set of new tools is exposed so that every edit request goes through three stages: **Plan → Preview → Apply**. The flag is a feature under testing, introduced so that requests from cheaper LLM models still complete successfully.
1. **`hwpx.plan_edit`**: describe the target and the intended operation, and the server returns a stable `planId` plus a summary of the expected work.
2. **`hwpx.preview_edit`**: request a preview with the issued `planId` to get review data including the actual diff, ambiguity warnings, and a safety score. Until this step is on record, you cannot proceed to the apply stage.
3. **`hwpx.apply_edit`**: only the same `planId` that went through `preview`, with an explicit `confirm: true`, actually changes the document. If you specify an `idempotencyKey`, repeated identical requests are safely ignored.
Each stage uses the standard `ServerResponse` wrapper; on error it returns codes such as `PREVIEW_REQUIRED`, `AMBIGUOUS_TARGET`, `UNSAFE_WILDCARD`, and `IDEMPOTENT_REPLAY` together with suggested follow-up actions (`next_actions`). All schemas are exposed flattened, without `$ref` or `anyOf`, via a draft-07-compatible sanitizer.
Hardening mode also expands the supporting tools:
- **`hwpx.search`**: regex or keyword search across the document, returning matches that include only exposable context.
- **`hwpx.get_context`**: extracts only a limited window around a specific paragraph, useful for review while preserving privacy.
## 🧠 Prompt Templates (prompts/list, prompts/get)
The server provides versioned, ready-to-call prompt IDs, using a `prompt_id@vN` naming convention to manage version compatibility.
- `summary@v1`: summarize the document's full text
- `table_to_csv@v1`: extract a table and convert it to CSV
- `document_lint@v1`: lint the text against rules
Each prompt's description includes the **tool name/argument ↔ template variable mapping**, an argument schema, and example inputs/outputs.
### Example 1) Requesting the summary prompt via prompts/get
```json
{
"method": "prompts/get",
"params": {
"name": "summary@v1",
"arguments": {
"path": "sample.hwpx",
"summaryStyle": "임원 보고",
"maxSentences": "4"
}
}
}
```
### Example 2) Requesting the table-to-CSV prompt via prompts/get
```json
{
"method": "prompts/get",
"params": {
"name": "table_to_csv@v1",
"arguments": {
"path": "sample.hwpx",
"tableIndex": "0",
"delimiter": ",",
"headerPolicy": "첫 행 헤더 유지"
}
}
}
```
### Example 3) Requesting the document lint prompt via prompts/get
```json
{
"method": "prompts/get",
"params": {
"name": "document_lint@v1",
"arguments": {
"path": "sample.hwpx",
"maxLineLen": "100",
"forbidPatterns": "\"TODO\", \"TBD\""
}
}
}
```
## 🛠️ Available Tools
A wide range of document editing and management tools is provided. The detailed input/output schema of each tool is available via the JSON Schema included in the `ListTools` response.
<details>
<summary><b>Expand the full tool list...</b></summary>

- **Document info and navigation**
  - `open_info`: document metadata plus paragraph/header count summary
  - `list_sections`, `list_headers`: explore the section/header structure
  - `list_master_pages_histories_versions`: master page / history / version summary
- **Content extraction and search**
  - `read_text`, `read_paragraphs`, `text_extract_report`: text extraction with pagination, selected paragraphs, and annotations
  - `analyze_template_structure`: summarize a form document into header/body/footer (heuristic) and detect placeholder candidates
  - `find`, `find_runs_by_style`: text search and style-based search
  - `hwpx.search` *(flag-gated)*: regex/keyword search returning stable node identifiers
  - `hwpx.get_context` *(flag-gated)*: retrieve only a limited window around a paragraph
- **Document editing**
  - `replace_text_in_runs`: replace text while preserving styles (saves the document by default; pass `dryRun: true` if you only want a preview)
  - `add_paragraph`, `insert_paragraphs_bulk`: add paragraphs
  - `add_table`, `get_table_cell_map`, `set_table_cell_text`, `replace_table_region`, `split_table_cell`: create/edit tables and split merged cells
  - `add_shape`, `add_control`: add objects
  - `add_memo`, `attach_memo_field`, `add_memo_with_anchor`, `remove_memo`: manage memos
  - `hwpx.plan_edit`, `hwpx.preview_edit`, `hwpx.apply_edit` *(flag-gated)*: the validated three-stage edit pipeline
- **Styling**
  - `ensure_run_style`, `list_styles_and_bullets`: list/create styles and bullets
  - `apply_style_to_text_ranges`, `apply_style_to_paragraphs`: apply styles per word or per paragraph
- **File management**
  - `save`, `save_as`: save documents
  - `fill_template`: create a template copy plus multiple replacements in a single call
  - `make_blank`: create a new blank document
  - `convert_hwp_to_hwpx`: convert binary HWP to HWPX (text/table oriented)
- **Structure validation and advanced search**
  - `object_find_by_tag`, `object_find_by_attr`: search XML elements
  - `validate_structure`, `lint_text_conventions`: validate document structure and lint text
</details>
### 🎯 Reading only the paragraphs you need
The `read_text` pagination is convenient for scanning a large document sequentially, but when you want specific paragraphs directly, the `read_paragraphs` tool is a better fit. Pass just the paragraph numbers you want in the `paragraphIndexes` array and it returns exactly those paragraphs, in order. Each item includes the original paragraph index (`paragraphIndex`) alongside the extracted text, so you can precisely re-fetch paragraphs remembered from an earlier call.
```jsonc
{
"name": "read_paragraphs",
"arguments": {
"path": "sample.hwpx",
"paragraphIndexes": [1, 4, 9],
"withHighlights": false,
"withFootnotes": false
}
}
```
Since only the selected paragraphs are processed, repeated exploration of a large document avoids unnecessary text copying, and the highlight/footnote options work exactly as in `read_text`. Requesting a nonexistent index raises an error, so use the paragraph counts from earlier calls to make safe requests.
### 🔍 Tuning search context length
The `find` tool returns a `context` snippet trimmed to the 80 characters before and after each match by default, with `...` added at either end when truncated. If you need a wider view, use the `contextRadius` argument to adjust how many characters are kept.
```jsonc
{
"name": "find",
"arguments": {
"path": "sample.hwpx",
"query": "HWPX",
"contextRadius": 200
}
}
```
The `contextRadius` value is the number of characters to include on each side of the match.
### 📐 Advanced table editing
The `get_table_cell_map` tool serializes the table's entire grid as-is, letting you see at a glance which anchor cell (`anchor`) each position belongs to and what the merge spans (`rowSpan`, `colSpan`) are. The response always fills the full rows-by-columns grid, reporting each position's `row`/`column` coordinates and the text of its merged anchor cell.
```jsonc
{
"name": "get_table_cell_map",
"arguments": {"path": "sample.hwpx", "tableIndex": 0},
"result": {
"rowCount": 3,
"columnCount": 3,
"grid": [
[
{"row": 0, "column": 0, "anchor": {"row": 0, "column": 0}, "rowSpan": 2, "colSpan": 2, "text": "제목"},
{"row": 0, "column": 1, "anchor": {"row": 0, "column": 0}, "rowSpan": 2, "colSpan": 2, "text": "제목"},
{"row": 0, "column": 2, "anchor": {"row": 0, "column": 2}, "rowSpan": 3, "colSpan": 1, "text": "요약"}
],
[
{"row": 1, "column": 0, "anchor": {"row": 0, "column": 0}, "rowSpan": 2, "colSpan": 2, "text": "제목"},
{"row": 1, "column": 1, "anchor": {"row": 0, "column": 0}, "rowSpan": 2, "colSpan": 2, "text": "제목"},
{"row": 1, "column": 2, "anchor": {"row": 0, "column": 2}, "rowSpan": 3, "colSpan": 1, "text": "요약"}
],
"... remaining rows omitted ..."
]
}
}
```
`set_table_cell_text` and `replace_table_region` support optional `logical`/`splitMerged` flags. With `logical: true` you can use the logical coordinate system you just inspected, and passing `splitMerged: true` as well automatically splits the relevant merged region before writing. When filling in long text, additionally specify `autoFit: true` to recompute each column width from the cell content length, updating the overall table width (`hp:sz`) and the cell sizes (`hp:cellSz`) together. When you need to split a merged cell yourself, the `split_table_cell` tool splits it while reporting the original span.
```jsonc
{
"name": "set_table_cell_text",
"arguments": {
"path": "sample.hwpx",
"tableIndex": 0,
"row": 1,
"col": 1,
"text": "논리 좌표 편집",
"logical": true,
"splitMerged": true,
"autoFit": true,
"dryRun": false
}
}
```
The example above targets logical coordinate `(1, 1)` inside a 2×2 merged cell, splitting it automatically and then writing the text. To check whether a cell was split and what its original span was, call `split_table_cell`.
```jsonc
{
"name": "split_table_cell",
"arguments": {"path": "sample.hwpx", "tableIndex": 0, "row": 0, "col": 0},
"result": {"startRow": 0, "startCol": 0, "rowSpan": 2, "colSpan": 2}
}
```
The `rowSpan`/`colSpan` values in the response describe the merge span before splitting, so a frontend client can update its UI state immediately.
## ☢️ Advanced: Inspecting the OPC package
> **⚠️ Warning:** The tools below expose the internal OPC parts of an HWPX document as-is. Misreading the structure can lead to misinterpreting the document, so use them only with a good understanding of the schema and relationships. To prevent accidental corruption, the MCP server currently provides **read-only tools only**.
* `package_parts`: list the paths of all OPC parts contained in the package.
* `package_get_text`: read a given part as text (encoding selectable).
* `package_get_xml`: return a given part as an XML string.
#### Example scenario
To inspect the contents of the style definition XML (`Styles.xml`):
1. Pass `{"path": "sample.hwpx"}` to the `package_parts` tool to find part names such as `Contents/Styles.xml`.
2. Pass `{"path": "sample.hwpx", "partName": "Contents/Styles.xml"}` to the `package_get_xml` tool to review the raw XML of that part safely.
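The two steps above can be written as concrete tool calls; the request bodies below simply restate the arguments given in the scenario:

```jsonc
// Step 1: list the OPC parts to find the style definition part.
{"name": "package_parts", "arguments": {"path": "sample.hwpx"}}

// Step 2: fetch that part's raw XML for review.
{"name": "package_get_xml", "arguments": {"path": "sample.hwpx", "partName": "Contents/Styles.xml"}}
```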
## 🔁 Automatic HWP → HWPX conversion
The `convert_hwp_to_hwpx` tool converts `.hwp` documents to `.hwpx` by mapping the output of `hwp5proc xml` internally.
- Input: `source` (required, a `.hwp` path), `output` (optional; defaults to the same path with an `.hwpx` extension)
- Output: whether the conversion succeeded, counts of converted paragraphs/tables, a list of skipped elements, and warning messages
Example:
```json
{
"name": "convert_hwp_to_hwpx",
"arguments": {
"source": "legacy/report.hwp",
"output": "legacy/report.hwpx"
}
}
```
### Supported scope
- **P0**: plain paragraph text
- **P1 (partial)**: table rows/columns and cell text
- **P2/P3**: OLE objects, footnotes/endnotes, change tracking, form controls, and the like may be skipped with a warning
### Known limitations
- The conversion aims for **text preservation plus basic structure migration**, not 100% visual fidelity.
- Complex formatting (fine-grained styles, advanced objects, some merged tables) may need manual touch-up in the resulting document.
- Without a working `hwp5proc` environment, the conversion tool fails and returns an error with installation guidance.
## 🧪 Tests
An end-to-end test suite is included, verifying everything from core features to live invocations of all the MCP tools.
```bash
# 1. Install the test dependencies
python -m pip install -e .[test]
# 2. Run the tests
python -m pytest
```
`tests/test_mcp_end_to_end.py` actually invokes most of the tools the server exposes, thoroughly verifying core behaviors such as text, table, and memo editing, OPC package reading, and automatic backup creation.
## 🧑💻 Development Notes
* The server is built entirely from pure-Python libraries: `python-hwpx>=1.9`, `mcp`, `anyio`, `pydantic`, and so on.
* All tool handlers manipulate documents safely through the `HwpxOps` path helpers and the `HwpxDocument` API.
* Destructive operations (modify/save) offer a `dryRun` flag first, and when the auto-backup option is enabled a `.bak` file is created for extra safety.
* JSON schemas are exposed via the internal `schema.builder` path after passing through a draft-07-compatible sanitizer, so you can expect a flat structure with `$ref`/`anyOf` removed.
### 🔒 Server hardening & JSON Schema (draft-07): opt-in
- Setting `HWPX_MCP_HARDENING=1` enables the plan/preview/apply pipeline, `hwpx.search`, and `hwpx.get_context`.
- With the flag off (`0` or unset), only the existing tools remain, while the strengthened schema sanitizer still applies.
- Running `pytest -q` also exercises the schema-regression, pipeline-gate, and idempotency tests, confirming safety before deployment.
## 📜 License
This project is distributed under the [MIT License](LICENSE). See the license file for details.
## Contact
Kyuhyun Ko, teacher at Gwanggyo High School: kokyuhyun@hotmail.com
## 🧩 Working with form (template) documents
### 1) Understand the structure: `analyze_template_structure`
To identify editable vs. locked regions and placeholder candidates as soon as you open a form document, use the tool below.
```json
{
"name": "analyze_template_structure",
"arguments": {
"path": "sample.hwpx",
"placeholderPatterns": ["\\{\\{[^{}]+\\}\\}", "본문 영역"],
"lockKeywords": ["학교장", "직인", "로고"]
}
}
```
The response includes `summary` (paragraph/placeholder counts), `regions` (header/body/footer), and `placeholders` (token / paragraph index / editability).
### 2) Fill in one call: `fill_template`
Instead of the old multi-step `save_as -> find -> replace...` sequence, a single `fill_template` call handles the template copy and multiple replacements.
```json
{
"name": "fill_template",
"arguments": {
"source": "forms/notice_template.hwpx",
"output": "out/notice_2026.hwpx",
"replacements": {
"본문 영역": "실제 안내문 본문",
"제2025년": "제2026년",
"2025. 1. 1.": "2026. 3. 5."
}
}
}
```
## 🗂️ Document Handle Registry tools and session lifetime policy
### New tools
- `open_document_handle`: register a locator (path/uri/handleId) and return a standard `handle` object
- `list_open_documents`: return the handles registered in the current process, plus the session policy (`sessionPolicy`)
- `close_document_handle`: release a specific handle from the registry
- `copy_table_between_documents`: read a table from document A and copy it into document B (a cross-document operation)
### Session lifetime policy
- **Registry scope**: per process (`registryScope=process`)
- **Request scope**: per request (`requestScope=request`); each MCP request runs independently, but the handle registry persists within the process
- **Cache/registry release conditions**
  1. A `close_document_handle` call releases that handle immediately
  2. Server process exit or restart releases the entire registry
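A minimal handle lifecycle can be sketched as the tool calls below (the `handleId` value is an illustrative placeholder; use the one actually returned by `open_document_handle`):

```jsonc
// Register a locator and receive a handle object.
{"name": "open_document_handle", "arguments": {"path": "sample.hwpx"}}

// Inspect the handles registered in this process, plus the session policy.
{"name": "list_open_documents", "arguments": {}}

// Release the handle once the cross-document work is done.
{"name": "close_document_handle", "arguments": {"handleId": "h_0123456789abcdef"}}
```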
## 📚 Resources: usage examples and the URI contract
MCP Resources offer **read-only queries over registered handles**. The server auto-registers a handle whenever a tool call resolves an actual document path, and `resources/list` exposes only the currently registered handles.
### URI scheme
- `hwpx://documents/{handle}/metadata`: document metadata plus section/paragraph/header counts
- `hwpx://documents/{handle}/paragraphs`: the full paragraph text of the document
- `hwpx://documents/{handle}/tables`: table index/row/column summary
`{handle}` is an opaque identifier of the form `h_<16-char hash>`; an unregistered handle returns a standardized `HANDLE_NOT_FOUND` error.
### Example flow
1. First call `open_document_handle` (or `open_info`/`read_text`) to open a document; its handle gets registered.
2. Call `resources/list` to see that handle's `metadata/paragraphs/tables` URIs.
3. Read the desired URI with `resources/read` to receive a JSON (`application/json`) body.
Example URIs:
```text
hwpx://documents/h_0123456789abcdef/metadata
hwpx://documents/h_0123456789abcdef/paragraphs
hwpx://documents/h_0123456789abcdef/tables
```
| text/markdown | Kohkyuhyun | null | null | null | MIT License
Copyright (c) 2025 HWPX MCP Server
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.
| null | [] | [] | null | null | >=3.10 | [] | [] | [] | [
"python-hwpx>=1.9",
"pydantic>=2.7",
"anyio>=4.0",
"mcp>=1.14.1",
"modelcontextprotocol>=0.1.0",
"olefile>=0.47",
"uvicorn>=0.30",
"pytest>=8.0; extra == \"test\""
] | [] | [] | [] | [] | twine/6.2.0 CPython/3.12.7 | 2026-02-19T03:35:49.094077 | hwpx_mcp_server-1.0.0.tar.gz | 88,885 | 08/ec/20c452b64092a35745066b7f1bd5b19f8c8227d610899540592227169431/hwpx_mcp_server-1.0.0.tar.gz | source | sdist | null | false | aad4a8153fc4e1a4b771424bf326a1be | eff6961b9bbd5850a737f8149e6af71626315128f713d9c297b97f8d14e41bb0 | 08ec20c452b64092a35745066b7f1bd5b19f8c8227d610899540592227169431 | null | [
"LICENSE"
] | 418 |
2.4 | endstone-easyhotpotato | 0.1.0 | 基于 EndStone 的烫手山芋插件 / The Python hot potato plugin based on EndStone. | <div align="center">

<h3>EndStone-EasyHotPotato</h3>
<p>
<b>A hot potato minigame plugin based on EndStone</b>
Powered by EndStone.<br>
</p>
</div>
<div align="center">
[](README.md) [](README_EN.md)
[](https://github.com/MengHanLOVE1027/endstone-easyhotpotato/releases) [](https://opensource.org/licenses/AGPL-3.0) [](https://www.python.org/) [](https://endstone.io) [](https://github.com/MengHanLOVE1027/endstone-easyhotpotato/releases)
</div>
---
## 📖 Introduction
EasyHotPotato is a hot potato minigame plugin designed for EndStone servers, offering an exciting and fun multiplayer competitive experience. Players must pass the "hot potato" to other players within a time limit and avoid having it explode in their own hands. The plugin handles the game flow automatically and supports player record tracking, a leaderboard system, particle effects, sound feedback, and a BossBar countdown, giving server players a complete game experience.
---
## ✨ Core Features
| Feature | Description |
| ----------------- | --------------------------------- |
| 🎮 **Automatic game flow** | Full game lifecycle management: joining, waiting, starting, and ending are handled automatically |
| 📊 **Record tracking** | Records player wins, total games, win rate, and more |
| 🏆 **Leaderboard** | Player ranking sorted automatically by wins and win rate |
| 📜 **Match history** | Records the details of every match, including participants, duration, and winner |
| 🎨 **Particle effects** | Flame particles on the potato holder and an explosion effect on elimination |
| 🔊 **Sound feedback** | Sound cues for game events such as passing the potato and the potato exploding |
| 📊 **BossBar display** | Shows game state, countdown, and the potato holder in real time |
| 🌈 **Rainbow marquee** | A polished BossBar rainbow cycling effect |
| 🗺️ **Area management** | Flexible waiting-area and arena setup |
| ⚙️ **Configurable parameters** | Customizable game duration, minimum players, activity radius, and more |
| 🎯 **Position detection** | Automatically detects players leaving the arena |
| 🌍 **Multilingual UI** | Display in Chinese, English, and other languages |
| 📝 **Full logging** | Colored log output, split into files by date |
---
## 🗂️ Directory Structure
```
ServerRoot/
├── logs/
│   └── EasyHotPotato/                                 # Log directory
│       └── easyhotpotato_YYYYMMDD.log                 # Main log file
├── plugins/
│   ├── endstone_easyhotpotato-x.x.x-py3-none-any.whl  # Main plugin file
│   └── EasyHotPotato/                                 # Plugin resource directory
│       ├── config/
│       │   └── config.json                            # Configuration file
│       └── data/                                      # Data directory
│           ├── player_stats.json                      # Player statistics data
│           └── game_history.json                      # Match history data
```
---
## 🚀 Quick Start
### Installation
1. **Download the plugin**
   - Download the latest version from the [Releases page](https://github.com/MengHanLOVE1027/endstone-easyhotpotato/releases)
   - Or get it from [MineBBS](https://www.minebbs.com/resources/easyhotpotato-ehp-endstone.15329/)
2. **Install the plugin**
```bash
# Copy the plugin file into the server's plugins directory
cp endstone_easyhotpotato-x.x.x-py3-none-any.whl plugins/
```
3. **Start the server**
   - Restart the server or run the `/reload` command
   - The plugin automatically generates its default configuration file and data directories
---
## ⚙️ Configuration
The configuration file is located at `plugins/EasyHotPotato/config/config.json`.
### 📋 Main options
```json
{
    // 📍 Waiting-area center coordinates
    "waitPos": {
        "x": 0,
        "y": 0,
        "z": 0,
        "dimid": 0
    },
    // 📍 Arena center coordinates
    "gamePos": {
        "x": 100,
        "y": 64,
        "z": 100,
        "dimid": 0
    },
    // 📏 Arena radius
    "areaSize": {
        "x": 10,
        "z": 10
    },
    // ⏰ Game parameters
    "waitTime": 120,  // Preparation time after the lobby fills (seconds)
    "preTime": 10,    // Warm-up countdown before the match starts (seconds)
    "gameTime": 180,  // Game duration (seconds)
    // 👥 Player count settings
    "minPlayers": 2,  // Minimum players to trigger an automatic start
    "maxPlayers": 0   // Maximum players; 0 means unlimited
}
```
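The options above map naturally onto a small loader. Below is a minimal sketch (a hypothetical helper, not part of the plugin's actual code) that merges an on-disk JSON file over these defaults; note that a real `config.json` must not contain `//` comments, since JSON has no comment syntax — the comments above are documentation only.

```python
import json
from pathlib import Path

# Default values mirroring the documented configuration options.
DEFAULTS = {
    "waitPos": {"x": 0, "y": 0, "z": 0, "dimid": 0},
    "gamePos": {"x": 100, "y": 64, "z": 100, "dimid": 0},
    "areaSize": {"x": 10, "z": 10},
    "waitTime": 120,   # seconds
    "preTime": 10,     # seconds
    "gameTime": 180,   # seconds
    "minPlayers": 2,
    "maxPlayers": 0,   # 0 = unlimited
}

def load_config(path: str) -> dict:
    """Return DEFAULTS overridden by any top-level keys present in the JSON file."""
    cfg = dict(DEFAULTS)
    p = Path(path)
    if p.exists():
        cfg.update(json.loads(p.read_text(encoding="utf-8")))
    return cfg
```

If the file is missing, the defaults are used unchanged, which matches the plugin's behavior of generating a default configuration on first run.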
---
## 🎮 Command Reference
### Player commands
| Command | Permission | Description |
| ---------------- | ---- | ------------------ |
| `/easyhotpotato` | All players | Open the main game menu |
| `/easyhotpotato status` | All players | View the current game status |
| `/easyhotpotato stats [player]` | All players | View statistics, optionally for a named player |
| `/easyhotpotato help` | All players | Show help |
### Admin commands
| Command | Permission | Description |
| ---------------- | ---- | ------------------ |
| `/easyhotpotato` | OP | Open the main game menu (includes admin management options) |
---
## 🎯 Game Rules
1. **Joining**: Enter the arena through the main menu and wait for other players to join.
2. **Game start**: Once the minimum player count is reached, the game begins after the waiting period and warm-up countdown.
3. **Passing the potato**: The player holding the potato must pass it by physically attacking another player.
4. **Potato explosion**: Each round has a random countdown; when the timer reaches zero, the potato explodes and the current holder is eliminated.
5. **Area restriction**: Leaving the arena results in immediate elimination.
6. **Victory**: The last remaining player wins.
7. **Statistics**: The player with the most wins tops the leaderboard.
---
## 🔧 Advanced Features
### 🎨 Particle effects
The plugin applies a continuous flame particle effect to the potato holder and plays an explosion particle effect on elimination, enhancing the visual experience.
### 🔊 Sound feedback
- **Pass sound**: A fire-charge sound plays when the potato is passed
- **Explosion sound**: An explosion sound plays when the potato detonates
- **Countdown sound**: During the final 5 seconds, an experience-orb sound plays with gradually rising pitch
### 📊 BossBar display
- **Waiting phase**: Shows the current player count and the required minimum
- **Countdown phase**: Shows the countdown to game start
- **In game**: Shows the potato holder and the remaining time
- **Elimination notice**: Shows information about eliminated players
---
## 🛠️ Troubleshooting
### Common issues
<details>
<summary><b>❓ The game won't start</b></summary>
**Checklist:**
1. Confirm the number of participating players meets the minimum requirement
```bash
/easyhotpotato status
```
2. Check the waiting and warm-up time settings in the configuration file
3. Inspect the log file
```bash
cat logs/EasyHotPotato/easyhotpotato_*.log
```
</details>
<details>
<summary><b>❓ Players can't join the game</b></summary>
**Troubleshooting steps:**
1. Check whether the maximum player limit has been reached
2. Confirm the player isn't already in a game
3. Check the log file for detailed error information
</details>
<details>
<summary><b>❓ A player was eliminated unexpectedly</b></summary>
**Troubleshooting steps:**
1. Check whether the player left the arena
2. Confirm the arena radius is configured sensibly
3. Check the log file for the elimination reason
</details>
### 📊 Log files
| Log | Location | Purpose |
| -------- | ----------------------------------------------------- | -------------------------- |
| Main log | `logs/EasyHotPotato/easyhotpotato_YYYYMMDD.log` | Records routine information such as game operations and player activity |
---
## 📄 License
This project is open-sourced under the **AGPL-3.0** license.
```
Copyright (c) 2023 MengHanLOVE (梦涵LOVE)
This program is free software: you may freely redistribute and modify it,
provided you comply with the terms of the AGPL-3.0 license.
```
See the [LICENSE](LICENSE) file for the full license text.
---
## 👥 Contributing
Issues and pull requests are welcome!
1. **Fork the repository**
2. **Create a feature branch**
```bash
git checkout -b feature/AmazingFeature
```
3. **Commit your changes**
```bash
git commit -m 'Add some AmazingFeature'
```
4. **Push the branch**
```bash
git push origin feature/AmazingFeature
```
5. **Open a Pull Request**
---
## 🌟 Support & Feedback
- **GitHub Issues**: [Report an issue](https://github.com/MengHanLOVE1027/endstone-easyhotpotato/issues)
- **MineBBS**: [Discussion thread](https://www.minebbs.com/resources/easyhotpotato-ehp-endstone.15329/)
- **Author**: MengHanLOVE (梦涵LOVE)
---
<div align="center">
**⭐ If this project helps you, please give it a Star!**
[](https://star-history.com/#MengHanLOVE1027/endstone-easyhotpotato&Date)
**Made with ❤️ by MengHanLOVE**
</div>
| text/markdown | MengHanLOVE | MengHanLOVE <2193438288@qq.com> | null | null | null | endstone, plugins, backup | [
"Programming Language :: Python :: 3",
"Operating System :: OS Independent"
] | [] | https://github.com/MengHanLOVE1027 | null | >=3.8 | [] | [] | [] | [
"psutil>=5.9.0",
"requests>=2.28.0"
] | [] | [] | [] | [
"Homepage, https://github.com/MengHanLOVE1027/endstone-easyhotpotato"
] | twine/6.2.0 CPython/3.14.2 | 2026-02-19T03:35:46.647512 | endstone_easyhotpotato-0.1.0.tar.gz | 39,404 | 3b/2b/57187d2d9d4e4d5369e3a39edd85a0787b4c6cd8bf68af0dc554184f127f/endstone_easyhotpotato-0.1.0.tar.gz | source | sdist | null | false | 3a82657e117a04d3dfb71a2743ef7614 | 28ac064a8f3a60e96a3c3051c6cca5de09c15d5107bf097bff26079018403614 | 3b2b57187d2d9d4e4d5369e3a39edd85a0787b4c6cd8bf68af0dc554184f127f | AGPL-3.0-or-later | [
"LICENSE"
] | 306 |
2.4 | pmtvs-electrical | 0.0.1 | Signal analysis primitives | # pmtvs-electrical
Part of the pmtvs signal analysis ecosystem. Coming soon.
See [pmtvs](https://pypi.org/project/pmtvs/) for the main package.
| text/markdown | null | null | null | null | MIT | null | [
"Development Status :: 1 - Planning",
"Intended Audience :: Science/Research",
"Topic :: Scientific/Engineering :: Physics",
"Programming Language :: Python :: 3"
] | [] | null | null | >=3.10 | [] | [] | [] | [] | [] | [] | [] | [] | twine/6.2.0 CPython/3.14.2 | 2026-02-19T03:35:13.066428 | pmtvs_electrical-0.0.1.tar.gz | 1,254 | bb/cf/f84a8ee043907a7cb2fb81fbf7902416063daea1b69f01b19b2a2ddc1b65/pmtvs_electrical-0.0.1.tar.gz | source | sdist | null | false | 08e05ef128b1c0316089bccd822d0e87 | 16882fa7a428172ff34b753c96755df3ac0a92b610e458a34e8a2aed20c2e47e | bbcff84a8ee043907a7cb2fb81fbf7902416063daea1b69f01b19b2a2ddc1b65 | null | [] | 296 |
2.4 | zoo-framework | 0.6.0 | 🎪 Zoo Framework - A simple and quick multi-threaded Python framework with zoo metaphor | <div align="center">
<img src="https://mxstorage.oss-cn-beijing.aliyuncs.com/oss-accesslog/zf-main-logo.png" alt="Zoo Framework Logo" width="400"/>
# 🎪 Zoo Framework
**A simple and quick multi-threaded Python framework with zoo metaphor**
[](https://www.python.org/)
[](https://pypi.org/project/zoo-framework/)
[](LICENSE)
[](https://github.com/YearsAlso/zoo-framework/actions)
[](https://codecov.io/gh/YearsAlso/zoo-framework)
[English](#english) | [中文](#中文)
</div>
---
<a name="english"></a>
## 🇬🇧 English
### 🎯 What is Zoo Framework?
Zoo Framework is a Python multi-threaded framework based on the **zoo metaphor**. It provides an intuitive way to manage concurrent tasks through familiar concepts:
| Concept | Real World | Framework Component |
|---------|------------|---------------------|
| 🦁 **Worker** | Animals | Task execution units |
| 🏠 **Cage** | Cages | Thread-safe containers |
| 👨🌾 **Master** | Zookeeper | Framework manager |
| 🍎 **Event** | Food | Inter-worker communication |
| 🥘 **FIFO** | Feeder queue | Event management |
### ✨ Features
- 🔄 **Multi-threaded Execution** - Efficient concurrent task processing
- 🔄 **State Machine** - Powerful state management with persistence
- 📢 **Event System** - Flexible publish-subscribe messaging
- 🔌 **Plugin System** - Extensible architecture for third-party plugins
- 🏠 **Thread Safety** - Automatic thread-safe wrappers
- 📊 **Health Monitoring** - SVM (State Vector Machine) worker monitoring
- 🚀 **Async Support** - Native asyncio integration
- 📝 **Structured Logging** - JSON-formatted logs with metrics
### 📦 Installation
```bash
# From PyPI
pip install zoo-framework
# Or with all optional dependencies
pip install zoo-framework[dev,docs]
```
### 🚀 Quick Start
```python
from zoo_framework.core import Master
from zoo_framework.workers import BaseWorker
from zoo_framework.core.aop import cage

@cage  # Thread-safe wrapper
class MyWorker(BaseWorker):
    """🦁 Your first animal in the zoo!"""

    def __init__(self):
        super().__init__({
            "is_loop": True,     # Loop execution
            "delay_time": 1.0,   # Execute every 1 second
            "name": "MyWorker"
        })
        self.counter = 0

    def _execute(self):
        """⚡ Execute business logic"""
        self.counter += 1
        print(f"🎪 Hello from MyWorker! Count: {self.counter}")

# Start the zoo
if __name__ == "__main__":
    master = Master()
    master.run()
```
### 🏗️ Architecture
```mermaid
graph TB
M[👨🌾 Master] -->|Manages| W[👷 Workers]
M -->|Monitors| SVM[📊 SVM]
W -->|Lives in| C[🏠 Cages]
W -->|Consumes| E[🍎 Events]
E -->|Queued in| F[📊 FIFO]
```
### 📚 Documentation
- [Development Guide](docs/DEVELOPMENT.md) - Setup development environment
- [Architecture](docs/ARCHITECTURE.md) - Framework architecture
- [Contributing](docs/CONTRIBUTING.md) - How to contribute
- [API Reference](docs/API_REFERENCE.md) - API documentation
### 🤝 Contributing
We welcome contributions! Please see [Contributing Guide](docs/CONTRIBUTING.md) for details.
```bash
# Fork and clone
git clone https://github.com/YOUR_USERNAME/zoo-framework.git
# Setup development environment
pip install -e ".[dev]"
pre-commit install
# Run tests
pytest
```
### 📄 License
Apache License 2.0 © [XiangMeng](https://github.com/YearsAlso)
---
<a name="中文"></a>
## 🇨🇳 中文
### 🎯 What is Zoo Framework?
Zoo Framework is a Python multi-threaded framework based on the **zoo metaphor**. It provides an intuitive way to manage concurrent tasks through familiar concepts:
| Concept | Real World | Framework Component |
|------|----------|----------|
| 🦁 **Worker** | Animals | Task execution units |
| 🏠 **Cage** | Cages | Thread-safe containers |
| 👨🌾 **Master** | Zookeeper | Framework manager |
| 🍎 **Event** | Food | Inter-worker communication |
| 🥘 **FIFO** | Feeder queue | Event management |
### ✨ Features
- 🔄 **Multi-threaded Execution** - Efficient concurrent task processing
- 🔄 **State Machine** - Powerful state management with persistence
- 📢 **Event System** - Flexible publish-subscribe messaging
- 🔌 **Plugin System** - Extensible architecture for third-party plugins
- 🏠 **Thread Safety** - Automatic thread-safe wrappers
- 📊 **Health Monitoring** - SVM (State Vector Machine) worker monitoring
- 🚀 **Async Support** - Native asyncio integration
- 📝 **Structured Logging** - JSON-formatted logs with metrics
### 📦 Installation
```bash
# From PyPI
pip install zoo-framework
# Or with all optional dependencies
pip install zoo-framework[dev,docs]
```
### 🚀 Quick Start
```python
from zoo_framework.core import Master
from zoo_framework.workers import BaseWorker
from zoo_framework.core.aop import cage

@cage  # Thread-safe wrapper
class MyWorker(BaseWorker):
    """🦁 The first animal in the zoo!"""

    def __init__(self):
        super().__init__({
            "is_loop": True,     # Loop execution
            "delay_time": 1.0,   # Execute every 1 second
            "name": "MyWorker"
        })
        self.counter = 0

    def _execute(self):
        """⚡ Execute business logic"""
        self.counter += 1
        print(f"🎪 Hello from MyWorker! Count: {self.counter}")

# Start the zoo
if __name__ == "__main__":
    master = Master()
    master.run()
```
### 🏗️ Architecture
```mermaid
graph TB
    M[👨🌾 Master] -->|Manages| W[👷 Workers]
    M -->|Monitors| SVM[📊 SVM]
    W -->|Lives in| C[🏠 Cages]
    W -->|Consumes| E[🍎 Events]
    E -->|Queued in| F[📊 FIFO]
```
### 🎪 Core Concepts
#### 👷 Worker - Animal
A Worker is the basic unit of task execution, like an animal in the zoo:
```python
from zoo_framework.workers import BaseWorker

class LionWorker(BaseWorker):
    def __init__(self):
        super().__init__({
            "is_loop": True,
            "delay_time": 2.0,
            "name": "🦁 LionWorker"
        })

    def _execute(self):
        print("🦁 The lion is patrolling its territory!")
```
#### 🏠 Cage
A Cage provides thread safety and lifecycle management:
```python
from zoo_framework.workers import BaseWorker
from zoo_framework.core.aop import cage

@cage  # Put the Worker into a safe cage
class SafeWorker(BaseWorker):
    def _execute(self):
        # Thread-safe code
        pass
```
#### 🔄 State Machine
Manage complex state transitions:
```python
from zoo_framework.statemachine import StateMachineManager

sm = StateMachineManager()
sm.create_state_machine("order")
sm.add_state("order", "pending")
sm.add_state("order", "paid")
sm.transition("order", "pending", "paid")
```
### 📚 Documentation
- [Development Guide](docs/DEVELOPMENT.md) - Setup development environment
- [Architecture](docs/ARCHITECTURE.md) - Framework architecture
- [Contributing](docs/CONTRIBUTING.md) - How to contribute
- [API Reference](docs/API_REFERENCE.md) - API documentation
### 🛠️ CLI Tools
```bash
# Create a simple object
zfc --create simple_object
# Create a thread example
zfc --thread demo
```
### 🤝 Contributing
We welcome contributions! Please see the [Contributing Guide](docs/CONTRIBUTING.md) for details.
```bash
# Fork and clone
git clone https://github.com/YOUR_USERNAME/zoo-framework.git
# Setup development environment
pip install -e ".[dev]"
pre-commit install
# Run tests
pytest
```
### 📄 License
Apache License 2.0 © [XiangMeng](https://github.com/YearsAlso)
---
<div align="center">
🎪 **Happy Coding in the Zoo!** 🦁
</div>
| text/markdown | null | XiangMeng <mengxiang931015@live.com> | null | null | Apache-2.0 | async, event-driven, framework, multi-threaded, state-machine | [
"Development Status :: 4 - Beta",
"Intended Audience :: Developers",
"License :: OSI Approved :: Apache Software License",
"Operating System :: OS Independent",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.13",
"Topic :: Software Development :: Libraries :: Application Fra... | [] | null | null | >=3.13 | [] | [] | [] | [
"click>=8.0.0",
"gevent>=23.0.0",
"jinja2>=3.0.0",
"python-dotenv>=1.0.0",
"pyyaml>=6.0",
"typing-extensions>=4.7.0",
"build>=1.0.0; extra == \"dev\"",
"bump-my-version>=0.15.0; extra == \"dev\"",
"mypy>=1.7.0; extra == \"dev\"",
"pre-commit>=3.5.0; extra == \"dev\"",
"pytest-asyncio>=0.21.0; ex... | [] | [] | [] | [
"Homepage, https://github.com/YearsAlso/zoo-framework",
"Documentation, https://yearsalso.github.io/zoo-framework/",
"Repository, https://github.com/YearsAlso/zoo-framework",
"Issues, https://github.com/YearsAlso/zoo-framework/issues"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-19T03:34:56.124064 | zoo_framework-0.6.0.tar.gz | 132,830 | 92/e8/d71f56150ce0a549b35e4fa0e140f392bbca1ab49af8ba324c309f7a8399/zoo_framework-0.6.0.tar.gz | source | sdist | null | false | a17216c900813f2ba873d4ab5a785a50 | c9c58d90262d80f06d8bb04d901e678a3f9111bf0f354c789d16993f97b31399 | 92e8d71f56150ce0a549b35e4fa0e140f392bbca1ab49af8ba324c309f7a8399 | null | [
"LICENSE"
] | 278 |
2.4 | scikit-fallback | 0.2.0.post0 | Efficient and reliable machine learning with a reject option and selectiveness | [](https://pypi.org/project/scikit-fallback/)
[](https://pepy.tech/project/scikit-fallback)

[](https://www.codefactor.io/repository/github/sanjaradylov/scikit-fallback)

[](https://github.com/sanjaradylov/scikit-fallback/actions/workflows/build-docs.yml)
[](https://www.python.org/downloads/release/python-3913/)
[](https://github.com/psf/black)
[](https://github.com/astral-sh/ruff)
[](https://x.com/intent/tweet?text=Wow:%20https%3A%2F%2Fgithub.com%2Fsanjaradylov%2Fscikit-fallback%20@sanjaradylov)
# 👁 Overview
**`scikit-fallback`** is a `scikit-learn`-compatible Python package for selective machine learning.
## TL;DR
🔙 Augment your classification pipelines with
[`skfb.estimators`](https://scikit-fallback.readthedocs.io/en/latest/estimators.html#estimators)
such as
[`AnomalyFallbackClassifier`](https://scikit-fallback.readthedocs.io/en/latest/estimators.html#anomaly-based-fallback-classifiers)
and
[`ThresholdFallbackClassifier`](https://scikit-fallback.readthedocs.io/en/latest/estimators.html#skfb.estimators.ThresholdFallbackClassifier)
to allow them to *abstain* from predictions in cases of uncertainty or anomaly.<br>
📊 Inspect their performance by calculating *combined*, *prediction-rejection* metrics
such as [`predict_reject_recall_score`](https://scikit-fallback.readthedocs.io/en/latest/metrics.html#skfb.metrics.predict_reject_recall_score),
or visualizing distributions of confidence scores with
[`PairedHistogramDisplay`](https://scikit-fallback.readthedocs.io/en/latest/metrics.html#skfb.metrics.PairedHistogramDisplay),
and other tools from [`skfb.metrics`](https://scikit-fallback.readthedocs.io/en/latest/metrics.html#).<br>
🎶 Combine your costly ensembles with
[`RoutingClassifier`](https://scikit-fallback.readthedocs.io/en/latest/ensemble.html#skfb.ensemble.RoutingClassifier)
or
[`ThresholdCascadeClassifierCV`](https://scikit-fallback.readthedocs.io/en/latest/ensemble.html#skfb.ensemble.ThresholdCascadeClassifierCV)
and other
[`skfb.ensemble`](https://scikit-fallback.readthedocs.io/en/latest/ensemble.html#)
meta-estimators to streamline
inference while elevating model performance.<br>
📒 See the [documentation](https://scikit-fallback.readthedocs.io/en/latest/index.html),
[tutorials](https://medium.com/@sshadylov), and [examples](./examples/) for more details and motivation.
# 🤔 Why `scikit-fallback`?
To *fall back (on)* means to retreat from making predictions, to rely on other
tools for support. `scikit-fallback` offers functionality to enhance your machine learning
solutions with selectiveness and a reject option.
## Machine Learning with Rejections
To allow your classification pipelines to abstain from predictions, you can
wrap them with a *rejector*. Training a rejector means both fitting your model and
learning to accept or reject predictions. Evaluation of a rejector depends
on *fallback mode* (inference with or without *fallback labels*) and measures the ability
of the rejector to both accept correct predictions and reject ambiguous ones.
For example,
[`skfb.estimators.ThresholdFallbackClassifierCV`](https://scikit-fallback.readthedocs.io/en/latest/estimators.html#skfb.estimators.ThresholdFallbackClassifierCV)
fits a base estimator and then finds the best confidence threshold such that predictions with
maximum probability lower than this are rejected:
```python
>>> import numpy as np
>>> from sklearn.linear_model import LogisticRegression
>>> from skfb.estimators import ThresholdFallbackClassifierCV
>>> X = np.array([[0, 0], [4, 4], [1, 1], [3, 3], [2.5, 2], [2., 2.5]])
>>> y = np.array([0, 1, 0, 1, 0, 1])
>>> # Train LogisticRegression and let it fallback based on confidence scores.
>>> rejector = ThresholdFallbackClassifierCV(
... estimator=LogisticRegression(random_state=0),
... thresholds=(0.5, 0.55, 0.6, 0.65),
... ambiguity_threshold=0.0,
... cv=2,
... fallback_label=-1,
... fallback_mode="store").fit(X, y)
>>> # If probability is lower than this, predict `fallback_label` = -1.
>>> rejector.threshold_
0.55
>>> # Make predictions and see which inputs were accepted or rejected.
>>> y_pred = rejector.predict(X)
>>> # If `fallback_mode` == "store", always accept but also mask rejections.
>>> y_pred, y_pred.get_dense_fallback_mask()
(FBNDArray([0, 1, 0, 1, 1, 1]),
array([False, False, False, False, True, False]))
>>> # This allows calculation of combined metrics (e.g., predict-reject accuracy).
>>> rejector.score(X, y)
1.0
>>> # Otherwise, allow fallbacks
>>> rejector.set_params(fallback_mode="return").predict(X)
array([ 0, 1, 0, 1, -1, 1])
>>> # and calculate accuracy only on accepted samples,
>>> rejector.score(X, y)
1.0
>>> # or just switch off rejections and fallback to a plain LogisticRegression.
>>> rejector.set_params(fallback_mode="ignore").score(X, y)
0.8333333333333334
```
See [Estimators](https://scikit-fallback.readthedocs.io/en/latest/estimators.html#) for
more examples of rejection meta-estimators and
[Combined Metrics](https://scikit-fallback.readthedocs.io/en/latest/metrics.html)
for evaluation and inspection tools.
## Dynamic Ensembling
While common ensembling methods such as voting and stacking aim to boost predictive performance,
they also increase inference costs as a result of output aggregations. Alternatively, we could
learn to choose which individual model or subset of models in an ensemble should make a
decision, thereby reducing inference overhead while preserving, or sometimes even
improving, predictive performance.
For example,
[`skfb.ensemble.ThresholdCascadeClassifierCV`](https://scikit-fallback.readthedocs.io/en/latest/ensemble.html#skfb.ensemble.ThresholdCascadeClassifierCV)
builds a *cascade* from a sequence of
models arranged by their inference costs (and basically, by their performance - e.g., from
weakest but fastest to strongest but slowest) and learns confidence thresholds that determine
whether the current model in the sequence makes a prediction or defers to the next model
based on its confidence score for a given input:
```python
>>> from skfb.ensemble import ThresholdCascadeClassifierCV
>>> from sklearn.datasets import make_classification
>>> from sklearn.ensemble import HistGradientBoostingClassifier
>>> X, y = make_classification(
... n_samples=1_000, n_features=100, n_redundant=97, class_sep=0.1, flip_y=0.05,
... random_state=0)
>>> weak = HistGradientBoostingClassifier(max_iter=10, max_depth=2, random_state=0)
>>> okay = HistGradientBoostingClassifier(max_iter=20, max_depth=3, random_state=0)
>>> buff = HistGradientBoostingClassifier(max_iter=99, max_depth=4, random_state=0)
>>> # Train all models and learn thresholds per model s.t. if the current model's max
>>> # confidence score is lower, it defers the decision to the next in the cascade.
>>> cascading = ThresholdCascadeClassifierCV(
... estimators=[weak, okay, buff],
... costs=[1.1, 1.2, 1.99],
... cv_thresholds=5,
... cv=3,
... scoring="accuracy",
... return_earray=True,
... response_method="predict_proba").fit(X, y)
>>> # Best thresholds for `weak` and `okay`
>>> # (`buff` will always predict if `weak` and `okay` fall back):
>>> cascading.best_thresholds_
array([0.6125, 0.8375])
>>> # If `return_earray` is True, predictions will be of type `skfb.core.FBNDArray`,
>>> # which store `acceptance_rate` w/ the ratios of accepted inputs per model.
>>> cascading.predict(X).acceptance_rates
array([0.659, 0.003, 0.338])
```
# 🏗 Installation
`scikit-fallback` requires:
* Python (>=3.9, <3.14)
* scikit-learn (>=1.0)
* numpy
* scipy
* matplotlib (>=3.0) (optional)
and, along with its requirements, can be installed via `pip`:
```bash
pip install scikit-fallback
```
---
**Note:** when using *Python 3.9 w/ scikit-learn>=1.7*, subclassing of `BaseEstimator` and
scikit-learn mixins might result in runtime errors related to `__sklearn_tags__`. Also,
if you have *scikit-learn<=1.2*, you will see warnings about the unavailability of nested
or general parameter validation, which *you can ignore*.
# 🔗 Links
1. [Documentation](https://scikit-fallback.readthedocs.io/en/latest/index.html)
2. [Medium Series](https://medium.com/@sshadylov)
3. Examples & Notebooks: [examples/](./examples/) and https://kaggle.com/sshadylov
4. Related Research:
1. Hendrickx, K., Perini, L., Van der Plas, D. et al. Machine learning with a reject option: a survey. Mach Learn 113, 3073–3110 (2024).
2. Wittawat Jitkrittum, Neha Gupta, Aditya K Menon, Harikrishna Narasimhan, Ankit Rawat, and Sanjiv Kumar. When does confidence-based cascade deferral suffice? NeurIPS, 36, 2024.
3. And more (coming soon).
| text/markdown | Sanjar Ad[yi]lov | null | Sanjar Ad[yi]lov | null | BSD 3-Clause License
Copyright (c) 2024-2026, Sanjar Ad[yi]lov.
All rights reserved.
Redistribution and use in source and binary forms, with or without
modification, are permitted provided that the following conditions are met:
* Redistributions of source code must retain the above copyright notice, this
list of conditions and the following disclaimer.
* Redistributions in binary form must reproduce the above copyright notice,
this list of conditions and the following disclaimer in the documentation
and/or other materials provided with the distribution.
* Neither the name of the copyright holder nor the names of its
contributors may be used to endorse or promote products derived from
this software without specific prior written permission.
THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE
FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR
SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER
CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY,
OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
| null | [
"Development Status :: 5 - Production/Stable",
"Intended Audience :: Science/Research",
"Intended Audience :: Developers",
"License :: OSI Approved :: BSD License",
"Operating System :: Microsoft :: Windows",
"Operating System :: POSIX",
"Operating System :: Unix",
"Operating System :: MacOS",
"Prog... | [] | null | null | >=3.9 | [] | [] | [] | [
"scikit-learn>=1.0",
"black>=24.4.0; extra == \"tests\"",
"matplotlib>=3.0; extra == \"tests\"",
"pre-commit>=3.7.0; extra == \"tests\"",
"pylint~=3.1.0; extra == \"tests\"",
"pytest>=8.0; extra == \"tests\"",
"jupyterlab; extra == \"dev\"",
"matplotlib>=3.0; extra == \"examples\""
] | [] | [] | [] | [
"Homepage, https://github.com/sanjaradylov/scikit-fallback"
] | twine/6.2.0 CPython/3.13.9 | 2026-02-19T03:34:19.121008 | scikit_fallback-0.2.0.post0.tar.gz | 42,711 | a4/bd/e09ad69d1a06939d704a7f9a97a2b8fbd589a9a5a94ea1a834b242f4041c/scikit_fallback-0.2.0.post0.tar.gz | source | sdist | null | false | 46beceb1f8878d8d1488e239523b0b6e | 70b00cd76e6737557526d6f6356262aa3d843f2c9943f4bb4344261a90db0b11 | a4bde09ad69d1a06939d704a7f9a97a2b8fbd589a9a5a94ea1a834b242f4041c | null | [
"LICENSE"
] | 287 |
2.4 | pmtvs-process | 0.0.1 | Signal analysis primitives | # pmtvs-process
Part of the pmtvs signal analysis ecosystem. Coming soon.
See [pmtvs](https://pypi.org/project/pmtvs/) for the main package.
| text/markdown | null | null | null | null | MIT | null | [
"Development Status :: 1 - Planning",
"Intended Audience :: Science/Research",
"Topic :: Scientific/Engineering :: Physics",
"Programming Language :: Python :: 3"
] | [] | null | null | >=3.10 | [] | [] | [] | [] | [] | [] | [] | [] | twine/6.2.0 CPython/3.14.2 | 2026-02-19T03:34:08.389078 | pmtvs_process-0.0.1.tar.gz | 1,260 | 48/50/755a75ec10652b7875d3764e9d14f0a0baa88bbe4104cfa61fde44c1772e/pmtvs_process-0.0.1.tar.gz | source | sdist | null | false | 37b2e40e18fa891db6987698df3b95bb | 532d806ae43aa06634405adcfdbe304c1ecc7d2a644f7b770a41c404df84b6d6 | 4850755a75ec10652b7875d3764e9d14f0a0baa88bbe4104cfa61fde44c1772e | null | [] | 297 |
2.4 | zlib-rs | 0.1.2 | High-performance, memory-safe Python bindings for zlib-rs (Rust). Up to 22x faster than CPython zlib. | > **Notice:** This project was built entirely with AI assistance and is an experimental proof of concept.
> It is not intended for production use. Use at your own risk.
# zlib-rs-python
[](https://github.com/farhaanaliii/zlib-rs-python/actions/workflows/CI.yml)
[](https://pypi.org/project/zlib-rs/)
[](https://pypi.org/project/zlib-rs/)
[](https://opensource.org/licenses/MIT)
A high-performance, memory-safe Python binding for the [zlib-rs](https://github.com/trifectatechfoundation/zlib-rs) Rust crate. Up to **22x faster** than CPython's built-in `zlib` module.
## Features
- **High Performance** -- Up to 22x faster compression and 17x faster checksums compared to CPython's built-in zlib (see [benchmarks](#benchmarks)).
- **Memory Safe** -- Built on top of `zlib-rs`, a memory-safe, pure-Rust implementation of the zlib compression algorithm.
- **Drop-in Compatible** -- Follows the standard Python `zlib` API. Swap imports and go.
- **Cross-Platform** -- Builds and runs on Linux, macOS, and Windows with no C dependencies.
- **Zero C Dependencies** -- No system zlib headers or C compiler required. Everything is compiled from Rust source.
## Installation
### From PyPI
```bash
pip install zlib-rs
```
### From Source
```bash
pip install maturin
git clone https://github.com/farhaanaliii/zlib-rs-python.git
cd zlib-rs-python
maturin develop --release
```
## Quick Start
```python
import zlib_rs as zlib
# One-shot compression
data = b"Hello, zlib-rs!" * 1000
compressed = zlib.compress(data)
original = zlib.decompress(compressed)
assert data == original
# Streaming compression
compressor = zlib.compressobj(level=6)
compressed = compressor.compress(data)
compressed += compressor.flush()
decompressor = zlib.decompressobj()
result = decompressor.decompress(compressed)
assert data == result
# Checksums
checksum_adler = zlib.adler32(data)
checksum_crc = zlib.crc32(data)
```
### Drop-in Replacement
To use `zlib-rs` globally in an existing project without changing every import:
```python
import sys
import zlib_rs
sys.modules["zlib"] = zlib_rs
# Now any library importing 'zlib' will use 'zlib-rs' instead
import zlib
compressed = zlib.compress(b"transparent replacement")
```
## API Reference
| Function / Class | Description |
|---|---|
| `compress(data, level=-1)` | One-shot compression. Returns compressed bytes. |
| `decompress(data, wbits=15, bufsize=16384)` | One-shot decompression. Returns original bytes. |
| `compressobj(level, method, wbits, ...)` | Create a streaming compression object. |
| `decompressobj(wbits, zdict)` | Create a streaming decompression object. |
| `adler32(data, value=1)` | Compute Adler-32 checksum. |
| `crc32(data, value=0)` | Compute CRC-32 checksum. |
All standard zlib constants are available: `Z_BEST_COMPRESSION`, `Z_BEST_SPEED`, `Z_DEFAULT_COMPRESSION`, `Z_DEFAULT_STRATEGY`, `Z_DEFLATED`, `Z_FINISH`, `Z_NO_FLUSH`, `Z_SYNC_FLUSH`, `MAX_WBITS`, `DEF_MEM_LEVEL`, etc.
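Because `zlib-rs` follows the stdlib `zlib` signatures, checksums can be computed incrementally by threading the running value back into the next call. The sketch below uses the stdlib `zlib` module so it runs without `zlib-rs` installed; after `import zlib_rs as zlib`, the calls are intended to behave identically.

```python
import zlib  # stdlib; swap for `import zlib_rs as zlib` to use the Rust backend

chunks = [b"Hello, ", b"zlib-rs!"]

# Feed chunks one at a time, passing the running value as the second argument.
crc = 0      # crc32 starts at 0
adler = 1    # adler32 starts at 1
for chunk in chunks:
    crc = zlib.crc32(chunk, crc)
    adler = zlib.adler32(chunk, adler)

# The incremental result matches a one-shot checksum of the joined data.
assert crc == zlib.crc32(b"".join(chunks))
assert adler == zlib.adler32(b"".join(chunks))
```

This pattern is useful for checksumming streamed data (e.g., file chunks) without buffering the whole payload in memory.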
## Benchmarks
All benchmarks measured on Windows, Python 3.12, with release optimizations
(`lto = "fat"`, `codegen-units = 1`, `target-cpu=native`).
### One-Shot Compression
| Data Size | Level | CPython zlib | zlib-rs | Speedup |
|-----------|-------|--------------|---------|---------|
| 1 KB | 1 | 57.2 us | 46.3 us | 1.2x faster |
| 1 KB | 6 | 62.6 us | 20.0 us | 3.1x faster |
| 64 KB | 1 | 155.6 us | 63.2 us | 2.5x faster |
| 64 KB | 6 | 398.1 us | 86.3 us | 4.6x faster |
| 1 MB | 6 | 6.45 ms | 401.4 us | **16.1x faster** |
| 10 MB | 6 | 94.60 ms | 4.22 ms | **22.4x faster** |
### One-Shot Decompression
| Data Size | Level | CPython zlib | zlib-rs | Speedup |
|-----------|-------|--------------|---------|---------|
| 1 KB | 1 | 4.4 us | 2.3 us | 1.9x faster |
| 64 KB | 9 | 50.5 us | 15.0 us | 3.4x faster |
| 1 MB | 6 | 1.18 ms | 664.0 us | 1.8x faster |
| 10 MB | 1 | 11.99 ms | 5.86 ms | 2.0x faster |
### Checksums
| Operation | Data Size | CPython zlib | zlib-rs | Speedup |
|-----------|-----------|--------------|---------|----------|
| adler32 | 64 KB | 21.0 us | 2.0 us | 10.5x faster |
| crc32 | 64 KB | 41.9 us | 3.2 us | 13.1x faster |
| adler32 | 1 MB | 364.6 us | 33.1 us | 11.0x faster |
| crc32 | 1 MB | 814.8 us | 48.3 us | **16.9x faster** |
### Running Benchmarks
```bash
maturin develop --release
python benchmarks/bench_zlib.py
```
## Development
### Prerequisites
- [Rust](https://rustup.rs/) 1.75 or later
- Python 3.8 or later
- [maturin](https://github.com/PyO3/maturin)
### Setup
```bash
git clone https://github.com/farhaanaliii/zlib-rs-python.git
cd zlib-rs-python
python -m venv .venv
# Linux / macOS
source .venv/bin/activate
# Windows (PowerShell)
.\.venv\Scripts\Activate.ps1
pip install maturin pytest
```
### Build
Debug build (fast compile, slow runtime):
```bash
maturin develop
```
Release build (slow compile, fast runtime -- recommended for benchmarks):
```bash
maturin develop --release
```
### Test
```bash
pytest tests/ -v
```
### Dev Scripts
Platform-specific dev scripts are provided in the `scripts/` directory:
| Script | Description |
|---|---|
| `scripts/dev.ps1` | Windows: build, test, and benchmark |
| `scripts/dev.sh` | Linux/macOS: build, test, and benchmark |
| `scripts/release.ps1` | Windows: bump version, tag, and push release |
| `scripts/release.sh` | Linux/macOS: bump version, tag, and push release |
Usage:
```bash
# Windows (PowerShell)
.\scripts\dev.ps1 build # Build release
.\scripts\dev.ps1 test # Run tests
.\scripts\dev.ps1 bench # Run benchmarks
.\scripts\dev.ps1 all # Build + test + bench
# Linux / macOS
./scripts/dev.sh build
./scripts/dev.sh test
./scripts/dev.sh bench
./scripts/dev.sh all
```
## Project Structure
```
zlib-rs-python/
  src/
    lib.rs                     # Rust source -- PyO3 bindings to zlib-rs
  python/
    zlib_rs/
      __init__.py              # Python package entry point
  tests/
    test_compatibility.py      # Basic cross-compat tests
    test_compress.py           # One-shot compress/decompress
    test_streaming.py          # Streaming operations
    test_checksums.py          # adler32/crc32
    test_edge_cases.py         # Edge cases, constants, errors
  benchmarks/
    bench_zlib.py              # Performance comparison vs CPython zlib
  scripts/
    dev.ps1                    # Windows dev script
    dev.sh                     # Linux/macOS dev script
    release.ps1                # Windows release script
    release.sh                 # Linux/macOS release script
  .github/
    workflows/
      CI.yml                   # Test + lint on push/PR
      release.yml              # Cross-platform build + PyPI publish on tag
  Cargo.toml                   # Rust dependencies and release profile
  pyproject.toml               # Python package metadata (PyPI)
  LICENSE
  README.md
```
## Releasing
Releases are automated via GitHub Actions. To create a new release:
```bash
# Windows
.\scripts\release.ps1 0.2.0
# Linux / macOS
./scripts/release.sh 0.2.0
```
This will:
1. Update version in `Cargo.toml` and `__init__.py`
2. Build and test locally
3. Commit, create a `v0.2.0` tag, and push to GitHub
4. GitHub Actions then builds cross-platform wheels (Linux x86_64/aarch64, Windows x86_64, macOS x86_64/arm64)
5. Tests wheels on all platforms
6. Publishes to [PyPI](https://pypi.org/project/zlib-rs/)
7. Creates a [GitHub Release](https://github.com/farhaanaliii/zlib-rs-python/releases) with all wheel assets
## Contributing
Contributions are welcome. Please see [CONTRIBUTING.md](CONTRIBUTING.md) for guidelines.
## License
This project is licensed under the MIT License. See the [LICENSE](LICENSE) file for details.
## Acknowledgements
- [zlib-rs](https://github.com/trifectatechfoundation/zlib-rs) -- The underlying Rust zlib implementation by the Trifecta Tech Foundation.
- [PyO3](https://github.com/PyO3/pyo3) -- Rust bindings for Python.
- [maturin](https://github.com/PyO3/maturin) -- Build system for Rust-based Python packages.
| text/markdown; charset=UTF-8; variant=GFM | null | Farhaan Ali <i.farhanali.dev@gmail.com> | null | null | null | zlib, compression, rust, performance, deflate | [
"Development Status :: 3 - Alpha",
"Intended Audience :: Developers",
"License :: OSI Approved :: MIT License",
"Operating System :: OS Independent",
"Programming Language :: Rust",
"Programming Language :: Python :: Implementation :: CPython",
"Programming Language :: Python :: Implementation :: PyPy",... | [] | null | null | >=3.8 | [] | [] | [] | [] | [] | [] | [] | [
"Documentation, https://github.com/farhaanaliii/zlib-rs-python#readme",
"Homepage, https://github.com/farhaanaliii/zlib-rs-python",
"Issues, https://github.com/farhaanaliii/zlib-rs-python/issues",
"Repository, https://github.com/farhaanaliii/zlib-rs-python"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-19T03:33:10.123569 | zlib_rs-0.1.2.tar.gz | 22,763 | 10/d5/37a2db5c2bae8b900dc52527a3a1bb740731741cfe680ec816d10ce282de/zlib_rs-0.1.2.tar.gz | source | sdist | null | false | 0ca1c6996af8f95c9ea80e92cca79a47 | 1e6214c5f2a83a2751c9716d9da8672745825f62a3d89669ac73a1e7ea3efba0 | 10d537a2db5c2bae8b900dc52527a3a1bb740731741cfe680ec816d10ce282de | null | [
"LICENSE"
] | 994 |
2.4 | pmtvs-structural | 0.0.1 | Signal analysis primitives | # pmtvs-structural
Part of the pmtvs signal analysis ecosystem. Coming soon.
See [pmtvs](https://pypi.org/project/pmtvs/) for the main package.
| text/markdown | null | null | null | null | MIT | null | [
"Development Status :: 1 - Planning",
"Intended Audience :: Science/Research",
"Topic :: Scientific/Engineering :: Physics",
"Programming Language :: Python :: 3"
] | [] | null | null | >=3.10 | [] | [] | [] | [] | [] | [] | [] | [] | twine/6.2.0 CPython/3.14.2 | 2026-02-19T03:33:04.292386 | pmtvs_structural-0.0.1.tar.gz | 1,273 | e8/92/08b9410a671421b54286fbd6ed00d9d9dc60212027f4e5753a701a3701d7/pmtvs_structural-0.0.1.tar.gz | source | sdist | null | false | 11548db85d31a37554b89d0158ae78ab | 59d542056a57b1afc9eae3d86834e94a0e86bd68ccf70a508042647363922c15 | e89208b9410a671421b54286fbd6ed00d9d9dc60212027f4e5753a701a3701d7 | null | [] | 285 |
2.1 | odoo-addon-mass-mailing-partner | 18.0.1.0.2.1 | Link partners with mass-mailing | .. image:: https://odoo-community.org/readme-banner-image
:target: https://odoo-community.org/get-involved?utm_source=readme
:alt: Odoo Community Association
===============================
Link partners with mass-mailing
===============================
..
!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
!! This file is generated by oca-gen-addon-readme !!
!! changes will be overwritten. !!
!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
!! source digest: sha256:a7056f8ef21fb459f07fcc4abce2aa8010faf0bc4fc5233689edbddb3797efbb
!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
.. |badge1| image:: https://img.shields.io/badge/maturity-Beta-yellow.png
:target: https://odoo-community.org/page/development-status
:alt: Beta
.. |badge2| image:: https://img.shields.io/badge/license-AGPL--3-blue.png
:target: http://www.gnu.org/licenses/agpl-3.0-standalone.html
:alt: License: AGPL-3
.. |badge3| image:: https://img.shields.io/badge/github-OCA%2Fmass--mailing-lightgray.png?logo=github
:target: https://github.com/OCA/mass-mailing/tree/18.0/mass_mailing_partner
:alt: OCA/mass-mailing
.. |badge4| image:: https://img.shields.io/badge/weblate-Translate%20me-F47D42.png
:target: https://translation.odoo-community.org/projects/mass-mailing-18-0/mass-mailing-18-0-mass_mailing_partner
:alt: Translate me on Weblate
.. |badge5| image:: https://img.shields.io/badge/runboat-Try%20me-875A7B.png
:target: https://runboat.odoo-community.org/builds?repo=OCA/mass-mailing&target_branch=18.0
:alt: Try me on Runboat
|badge1| |badge2| |badge3| |badge4| |badge5|
This module links mass-mailing contacts with partners.
Features
--------
- When creating or saving a mass-mailing contact, partners are matched
  by email: the matched partner is linked, or a new one is created when
  there is no match and the mailing list's partner-mandatory field is
  checked.
- Mailing contacts smart button in partner form.
- Mass mailing stats smart button in partner form.
- Filter and group by partner in the mail statistics tree view.
**Table of contents**
.. contents::
:local:
Configuration
=============
On first install, all existing mass-mailing contacts are matched against
partners. Mass-mailing statistics are likewise matched using their model
and ``res_id``.
Usage
=====
In partner view, there is a new action called "Add to mailing list".
This action opens a pop-up to select a mailing list. Selected partners
will be added as mailing list contacts.
Bug Tracker
===========
Bugs are tracked on `GitHub Issues <https://github.com/OCA/mass-mailing/issues>`_.
In case of trouble, please check there if your issue has already been reported.
If you spotted it first, help us smash it by providing detailed and welcome
`feedback <https://github.com/OCA/mass-mailing/issues/new?body=module:%20mass_mailing_partner%0Aversion:%2018.0%0A%0A**Steps%20to%20reproduce**%0A-%20...%0A%0A**Current%20behavior**%0A%0A**Expected%20behavior**>`_.
Do not contact contributors directly about support or help with technical issues.
Credits
=======
Authors
-------
* Tecnativa
Contributors
------------
- `Tecnativa <https://www.tecnativa.com>`__:
- Pedro M. Baeza
- Rafael Blasco
- Antonio Espinosa
- Javier Iniesta
- Jairo Llopis
- David Vidal
- Ernesto Tejeda
- Victor M.M. Torres
- Manuel Calero
- Víctor Martínez
- `Hibou Corp. <https://hibou.io>`__
- `Trobz <https://trobz.com>`__:
- Nguyễn Minh Chiến <chien@trobz.com>
- `360ERP <https://www.360erp.com>`__:
- Kevin Khao
Other credits
-------------
The migration of this module from 15.0 to 16.0 was financially supported
by Camptocamp.
Maintainers
-----------
This module is maintained by the OCA.
.. image:: https://odoo-community.org/logo.png
:alt: Odoo Community Association
:target: https://odoo-community.org
OCA, or the Odoo Community Association, is a nonprofit organization whose
mission is to support the collaborative development of Odoo features and
promote its widespread use.
This module is part of the `OCA/mass-mailing <https://github.com/OCA/mass-mailing/tree/18.0/mass_mailing_partner>`_ project on GitHub.
You are welcome to contribute. To learn how please visit https://odoo-community.org/page/Contribute.
| text/x-rst | Tecnativa, Odoo Community Association (OCA) | support@odoo-community.org | null | null | AGPL-3 | null | [
"Programming Language :: Python",
"Framework :: Odoo",
"Framework :: Odoo :: 18.0",
"License :: OSI Approved :: GNU Affero General Public License v3"
] | [] | https://github.com/OCA/mass-mailing | null | >=3.10 | [] | [] | [] | [
"odoo==18.0.*"
] | [] | [] | [] | [] | twine/6.2.0 CPython/3.12.3 | 2026-02-19T03:32:57.310798 | odoo_addon_mass_mailing_partner-18.0.1.0.2.1-py3-none-any.whl | 154,605 | 7e/8a/9d3a3a9944f40634b113576850ddd7d9a8311ae08de5933fb1bd8e0939fe/odoo_addon_mass_mailing_partner-18.0.1.0.2.1-py3-none-any.whl | py3 | bdist_wheel | null | false | 49fd6b2ca9a39a29cbf21ee6b3e92413 | 86f9f54471945ee8355e437058439e15283a1f00755b134f23e130a70bdecc92 | 7e8a9d3a3a9944f40634b113576850ddd7d9a8311ae08de5933fb1bd8e0939fe | null | [] | 107 |
2.4 | afm-langchain | 0.1.3 | AFM LangChain execution backend | # AFM LangChain Backend
[](https://pypi.org/project/afm-langchain/)
[](https://pypi.org/project/afm-langchain/)
[](https://opensource.org/licenses/Apache-2.0)
LangChain execution backend for [Agent-Flavored Markdown (AFM)](https://github.com/wso2/agent-flavored-markdown) agents.
This package implements the `AgentRunner` protocol from `afm-core`, providing LLM orchestration using the LangChain framework.
## Features
- **AgentRunner Protocol Implementation**: Pluggable backend for AFM agents
- **LLM Provider Support**: OpenAI and Anthropic models
- **MCP Tool Integration**: Connect external tools via Model Context Protocol
- **Conversation Management**: Session history and state management
- **Plugin Registration**: Auto-discovered via Python entry points
## Installation
This package is typically installed as part of `afm-cli`. For LangChain-specific use:
```bash
pip install afm-langchain
```
## Supported Providers
### OpenAI
```yaml
model:
provider: openai
name: gpt-4o # or other OpenAI models
```
Requires: `OPENAI_API_KEY` environment variable
### Anthropic
```yaml
model:
provider: anthropic
name: claude-sonnet-4-5 # or other Claude models
```
Requires: `ANTHROPIC_API_KEY` environment variable
## Development
### Setup
This project uses [uv](https://docs.astral.sh/uv/) for dependency management.
```bash
# Clone the repository
git clone https://github.com/wso2/reference-implementations-afm.git
cd reference-implementations-afm/python-interpreter
# Install dependencies
uv sync
# Activate the virtual environment
source .venv/bin/activate
```
### Running Tests
```bash
# Run afm-langchain tests
uv run pytest packages/afm-langchain/tests/
# Run with coverage
uv run pytest packages/afm-langchain/tests/ --cov=afm_langchain
```
### Code Quality
```bash
# Format code
uv run ruff format
# Lint code
uv run ruff check
```
### Project Structure
```text
packages/afm-langchain/src/afm_langchain/
├── __init__.py
├── backend.py # LangChainRunner implementation
├── model_factory.py # LLM provider factory
├── mcp_manager.py # MCP tool management
└── tools_adapter.py # Tool calling adapter
```
## Usage
The LangChain backend is automatically registered and used when you run an AFM agent:
```python
from afm.runner import get_runner
# Get the LangChain runner
runner = get_runner("langchain")
# Run an agent
result = await runner.run(agent, user_input)
```
## Documentation
For comprehensive documentation, see the [project README](https://github.com/wso2/reference-implementations-afm/tree/main/python-interpreter).
## License
Apache-2.0
| text/markdown | null | null | null | null | null | null | [
"Development Status :: 3 - Alpha",
"Programming Language :: Python :: 3.11"
] | [] | null | null | >=3.11 | [] | [] | [] | [
"afm-core<0.2.0,>=0.1.0",
"langchain>=1.2.8",
"langchain-openai>=1.1.7",
"langchain-anthropic>=1.3.1",
"mcp>=1.26.0",
"langchain-mcp-adapters>=0.2.1"
] | [] | [] | [] | [
"Repository, https://github.com/wso2/reference-implementations-afm"
] | uv/0.10.4 {"installer":{"name":"uv","version":"0.10.4","subcommand":["publish"]},"python":null,"implementation":{"name":null,"version":null},"distro":{"name":"Ubuntu","version":"24.04","id":"noble","libc":null},"system":{"name":null,"release":null},"cpu":null,"openssl_version":null,"setuptools_version":null,"rustc_version":null,"ci":true} | 2026-02-19T03:32:33.555502 | afm_langchain-0.1.3-py3-none-any.whl | 11,266 | aa/02/df734c6358b9127739363c1ef912525938e587916c3a4bc8a0ed81618db8/afm_langchain-0.1.3-py3-none-any.whl | py3 | bdist_wheel | null | false | bdc4b65a6493e407ae78cdc0297d8ff1 | 2d081b4d4a268c036049d0822d49860e4039c9a0110f82f4ef691e224cc5ddd7 | aa02df734c6358b9127739363c1ef912525938e587916c3a4bc8a0ed81618db8 | Apache-2.0 | [] | 284 |
2.4 | pmtvs-rotating-machinery | 0.0.1 | Signal analysis primitives | # pmtvs-rotating-machinery
Part of the pmtvs signal analysis ecosystem. Coming soon.
See [pmtvs](https://pypi.org/project/pmtvs/) for the main package.
| text/markdown | null | null | null | null | MIT | null | [
"Development Status :: 1 - Planning",
"Intended Audience :: Science/Research",
"Topic :: Scientific/Engineering :: Physics",
"Programming Language :: Python :: 3"
] | [] | null | null | >=3.10 | [] | [] | [] | [] | [] | [] | [] | [] | twine/6.2.0 CPython/3.14.2 | 2026-02-19T03:31:58.136197 | pmtvs_rotating_machinery-0.0.1.tar.gz | 1,288 | 35/68/0b85c45e95e4e2a0ecf0b675de591ef8234d39fbce979c7a4e6da2857e73/pmtvs_rotating_machinery-0.0.1.tar.gz | source | sdist | null | false | 5899cc494933cbec8f8ada4d629f07e1 | c528e264620026205d61184f10eb8c3be80425ef7c2c1e67ddbe9c597ec8bcd1 | 35680b85c45e95e4e2a0ecf0b675de591ef8234d39fbce979c7a4e6da2857e73 | null | [] | 283 |
2.4 | pmtvs-manufacturing | 0.0.1 | Signal analysis primitives | # pmtvs-manufacturing
Part of the pmtvs signal analysis ecosystem. Coming soon.
See [pmtvs](https://pypi.org/project/pmtvs/) for the main package.
| text/markdown | null | null | null | null | MIT | null | [
"Development Status :: 1 - Planning",
"Intended Audience :: Science/Research",
"Topic :: Scientific/Engineering :: Physics",
"Programming Language :: Python :: 3"
] | [] | null | null | >=3.10 | [] | [] | [] | [] | [] | [] | [] | [] | twine/6.2.0 CPython/3.14.2 | 2026-02-19T03:30:53.419856 | pmtvs_manufacturing-0.0.1.tar.gz | 1,278 | 97/3e/8a291d017ad680b6a14f83be91a64acae0ad6f126cbec8afca3820fa16cd/pmtvs_manufacturing-0.0.1.tar.gz | source | sdist | null | false | 60e35e6049fe9a626809f82c022f6b40 | e036f86bed45a21f9164dd62f6efeaea10efb41cc3a0661d2bc454427c787cab | 973e8a291d017ad680b6a14f83be91a64acae0ad6f126cbec8afca3820fa16cd | null | [] | 280 |
2.4 | pmtvs-energy | 0.0.1 | Signal analysis primitives | # pmtvs-energy
Part of the pmtvs signal analysis ecosystem. Coming soon.
See [pmtvs](https://pypi.org/project/pmtvs/) for the main package.
| text/markdown | null | null | null | null | MIT | null | [
"Development Status :: 1 - Planning",
"Intended Audience :: Science/Research",
"Topic :: Scientific/Engineering :: Physics",
"Programming Language :: Python :: 3"
] | [] | null | null | >=3.10 | [] | [] | [] | [] | [] | [] | [] | [] | twine/6.2.0 CPython/3.14.2 | 2026-02-19T03:29:49.417656 | pmtvs_energy-0.0.1.tar.gz | 1,243 | 9f/7b/2837447c789753bcc254a3c6938d0613261d44f06ce04f56cfb71e7cfb0f/pmtvs_energy-0.0.1.tar.gz | source | sdist | null | false | 3dc0a91c3b89c3723e2c60acf9db0314 | 541bdc38b329eedd50a77feb44b3a3b563f207adb109e1a4b12d3ba5374c7fef | 9f7b2837447c789753bcc254a3c6938d0613261d44f06ce04f56cfb71e7cfb0f | null | [] | 277 |
2.4 | afm-cli | 0.2.9 | AFM CLI metapackage: installs afm-core and afm-langchain | # AFM CLI
[](https://pypi.org/project/afm-cli/)
[](https://pypi.org/project/afm-cli/)
[](https://opensource.org/licenses/Apache-2.0)
A command-line interface for running [Agent-Flavored Markdown (AFM)](https://github.com/wso2/agent-flavored-markdown) agents.
This package provides everything you need to run AFM agents, including:
- **afm-core**: Parser, validation, and interface implementations
- **afm-langchain**: LangChain-based execution backend with support for OpenAI and Anthropic
## Installation
```bash
pip install afm-cli
```
## Quick Start
### 1. Set your API key
```bash
export OPENAI_API_KEY="your-api-key-here"
# or
export ANTHROPIC_API_KEY="your-api-key-here"
```
### 2. Create an AFM file
Create a file named `agent.afm.md`:
```markdown
---
name: Friendly Assistant
interfaces:
- type: consolechat
model:
provider: openai
name: gpt-4o
---
# Role
You are a helpful and friendly AI assistant.
# Instructions
- Be concise but thorough in your responses
- If you don't know something, say so honestly
- Always be polite and professional
```
### 3. Run the agent
```bash
afm run agent.afm.md
```
This will start an interactive console chat with your agent.
## Usage
### `afm run <file>`
Run an AFM agent file and start its configured interfaces.
```bash
# Run with default settings
afm run agent.afm.md
# Run on a specific port (for web interfaces)
afm run agent.afm.md --port 8080
```
### `afm validate <file>`
Validate an AFM file without running it.
```bash
afm validate agent.afm.md
```
### `afm framework list`
List available execution backends.
```bash
afm framework list
```
## Configuration
### Environment Variables
- `OPENAI_API_KEY` - Required for OpenAI models
- `ANTHROPIC_API_KEY` - Required for Anthropic models
### CLI Options
- `-p, --port PORT` - Port for web interfaces (default: 8000)
- `--help` - Show help message
## Features
### Interface Types
AFM agents can expose multiple interfaces simultaneously:
- **consolechat**: Interactive terminal-based chat (default)
- **webchat**: HTTP API with built-in web UI
- **webhook**: Webhook endpoint for event-driven agents
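Since interfaces are declared in the frontmatter, exposing several at once is just a matter of listing them. A sketch of such a frontmatter block — the field shapes follow the earlier `consolechat` example, and combining these particular entries is an assumption based on the "simultaneously" note above:

```yaml
name: Multi-Interface Agent
interfaces:
  - type: consolechat
  - type: webchat
model:
  provider: openai
  name: gpt-4o
```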
### MCP Tool Support
Agents can use external tools via Model Context Protocol (MCP).
For more examples and detailed documentation, see [AFM Examples](https://wso2.github.io/agent-flavored-markdown/examples/).
## Docker
You can also run agents using Docker:
```bash
# Build the image
docker build -t afm-langchain-interpreter .
# Run with an AFM file mounted
docker run -v $(pwd)/agent.afm.md:/app/agent.afm.md \
-e OPENAI_API_KEY=$OPENAI_API_KEY \
-p 8000:8000 \
afm-langchain-interpreter afm /app/agent.afm.md
```
## Documentation
For more detailed documentation, see the [project README](https://github.com/wso2/reference-implementations-afm/tree/main/python-interpreter).
## License
Apache-2.0
| text/markdown | null | null | null | null | null | null | [
"Development Status :: 3 - Alpha",
"Programming Language :: Python :: 3.11"
] | [] | null | null | >=3.11 | [] | [] | [] | [
"afm-core==0.1.5",
"afm-langchain>=0.1.0"
] | [] | [] | [] | [
"Repository, https://github.com/wso2/reference-implementations-afm"
] | uv/0.10.4 {"installer":{"name":"uv","version":"0.10.4","subcommand":["publish"]},"python":null,"implementation":{"name":null,"version":null},"distro":{"name":"Ubuntu","version":"24.04","id":"noble","libc":null},"system":{"name":null,"release":null},"cpu":null,"openssl_version":null,"setuptools_version":null,"rustc_version":null,"ci":true} | 2026-02-19T03:29:06.589585 | afm_cli-0.2.9-py3-none-any.whl | 3,333 | b0/dc/494e46ae566e75f8c0a821471fe3327c56f4a34112de333a552189c32ea6/afm_cli-0.2.9-py3-none-any.whl | py3 | bdist_wheel | null | false | 69753f248ebe908173e212b981f4e6d0 | 1b4d32e69e259efcf350cd80d34f331bd1d8bfe4fa9de50e57f4a3a30e29a5c9 | b0dc494e46ae566e75f8c0a821471fe3327c56f4a34112de333a552189c32ea6 | Apache-2.0 | [] | 261 |
2.4 | afm-core | 0.1.6 | AFM (Agent-Flavored Markdown) core: parser, CLI, protocols, and interfaces | # AFM Core
[](https://pypi.org/project/afm-core/)
[](https://pypi.org/project/afm-core/)
[](https://opensource.org/licenses/Apache-2.0)
Core library for [Agent-Flavored Markdown (AFM)](https://github.com/wso2/agent-flavored-markdown) - providing parsing, validation, and interface implementations.
## Features
- **AFM File Parser**: Extracts YAML frontmatter and Markdown content (Role, Instructions sections)
- **Pydantic Models**: Type-safe validation of AFM schema for agents, interfaces, and tools
- **Interface Implementations**: Console chat, web chat, and webhook interfaces
- **AgentRunner Protocol**: Pluggable backend system for different execution frameworks
- **CLI Framework**: Entry point for the `afm` command with validation and run commands
## Installation
This package is typically installed as part of `afm-cli`. For development or custom integrations:
```bash
pip install afm-core
```
## Usage
```python
from afm.parser import parse_afm_file
# Parse an AFM file (returns an AFMRecord)
record = parse_afm_file("agent.afm.md")
# Access parsed data
print(f"Agent: {record.metadata.name}")
print(f"Model: {record.metadata.model.provider}/{record.metadata.model.name}")
print(f"Interfaces: {[i.type for i in record.metadata.interfaces]}")
print(f"Role: {record.role}")
print(f"Instructions: {record.instructions}")
```
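Conceptually, the parser first separates the YAML frontmatter from the Markdown body, then validates the frontmatter against the Pydantic models. A minimal, hypothetical illustration of that split (not the library's actual implementation):

```python
import re

AFM_TEXT = """---
name: Friendly Assistant
model:
  provider: openai
  name: gpt-4o
---
# Role
You are a helpful assistant.
"""

# Split on the '---' fences: group 1 is the YAML frontmatter,
# group 2 is the Markdown body scanned for Role/Instructions sections.
match = re.match(r"^---\n(.*?)\n---\n(.*)$", AFM_TEXT, re.DOTALL)
frontmatter, body = match.groups()
print(frontmatter)
print(body)
```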
## Development
### Setup
This project uses [uv](https://docs.astral.sh/uv/) for dependency management.
```bash
# Clone the repository
git clone https://github.com/wso2/reference-implementations-afm.git
cd reference-implementations-afm/python-interpreter
# Install dependencies
uv sync
```
### Running Tests
```bash
# Run all tests
uv run pytest packages/afm-core/tests/
# Run with coverage
uv run pytest packages/afm-core/tests/ --cov=afm
```
### Code Quality
```bash
# Format code
uv run ruff format
# Lint code
uv run ruff check
```
### Project Structure
```text
packages/afm-core/src/afm/
├── __init__.py
├── cli.py # CLI entry point
├── exceptions.py # Custom exceptions
├── models.py # Pydantic models
├── parser.py # AFM file parsing
├── runner.py # AgentRunner protocol and runner utilities
├── schema_validator.py # Schema validation
├── templates.py # Prompt templates
├── update.py # Update checker
├── variables.py # Variable substitution
└── interfaces/ # Interface implementations
├── __init__.py
├── base.py # Interface utilities (get_interfaces, get_http_path)
├── console_chat.py
├── web_chat.py
└── webhook.py
```
## Documentation
For comprehensive documentation, see the [project README](https://github.com/wso2/reference-implementations-afm/tree/main/python-interpreter).
## License
Apache-2.0
| text/markdown | null | null | null | null | null | null | [
"Development Status :: 3 - Alpha",
"Programming Language :: Python :: 3.11"
] | [] | null | null | >=3.11 | [] | [] | [] | [
"click>=8.0.0",
"pydantic>=2.0",
"pyyaml>=6.0",
"jsonschema>=4.26.0",
"textual>=2.1.0",
"fastapi>=0.128.1",
"httpx>=0.28.1",
"uvicorn>=0.40.0",
"platformdirs>=4.0",
"packaging>=24.0",
"rich>=14.3.2"
] | [] | [] | [] | [
"Repository, https://github.com/wso2/reference-implementations-afm"
] | uv/0.10.4 {"installer":{"name":"uv","version":"0.10.4","subcommand":["publish"]},"python":null,"implementation":{"name":null,"version":null},"distro":{"name":"Ubuntu","version":"24.04","id":"noble","libc":null},"system":{"name":null,"release":null},"cpu":null,"openssl_version":null,"setuptools_version":null,"rustc_version":null,"ci":true} | 2026-02-19T03:29:04.580593 | afm_core-0.1.6.tar.gz | 30,011 | c2/a9/d25b2a0882c148996392fe350f7c94aa04a2176b59d3b78c393d4f959e18/afm_core-0.1.6.tar.gz | source | sdist | null | false | 033299229819b0e43e5295d8871a9af2 | b547a0b62913da1e7a9694c5bdfae7b6d3d8d0ad27d383020d2ea97dc3cfa45b | c2a9d25b2a0882c148996392fe350f7c94aa04a2176b59d3b78c393d4f959e18 | Apache-2.0 | [] | 263 |
2.4 | pmtvs-financial | 0.0.1 | Signal analysis primitives | # pmtvs-financial
Part of the pmtvs signal analysis ecosystem. Coming soon.
See [pmtvs](https://pypi.org/project/pmtvs/) for the main package.
| text/markdown | null | null | null | null | MIT | null | [
"Development Status :: 1 - Planning",
"Intended Audience :: Science/Research",
"Topic :: Scientific/Engineering :: Physics",
"Programming Language :: Python :: 3"
] | [] | null | null | >=3.10 | [] | [] | [] | [] | [] | [] | [] | [] | twine/6.2.0 CPython/3.14.2 | 2026-02-19T03:28:44.605146 | pmtvs_financial-0.0.1.tar.gz | 1,252 | 73/4c/1fd620bc44103aeb2ab4d899b4dd8922bbc83b951e14ea2594fb3b8c473f/pmtvs_financial-0.0.1.tar.gz | source | sdist | null | false | c1f141d9f3a121f7fc2c3bcf07f8601d | e18339339fd691277340dc939efa44276d80ae8267cc4cba4397411be4170fbe | 734c1fd620bc44103aeb2ab4d899b4dd8922bbc83b951e14ea2594fb3b8c473f | null | [] | 275 |
2.4 | ataraxis-data-structures | 6.0.0 | Provides classes and structures for storing, manipulating, and sharing data between Python processes. | # ataraxis-data-structures
Provides classes and structures for storing, manipulating, and sharing data between Python processes.


[](https://github.com/astral-sh/uv)
[](https://github.com/astral-sh/ruff)




___
## Detailed Description
This library aggregates the classes and methods used by other Ataraxis and Sun lab libraries for working with data.
This includes classes to manipulate the data, share (move) the data between different Python processes, and store the
data in non-volatile memory (on disk). Generally, these classes either implement novel functionality not available
through other popular libraries or extend existing functionality to match the specific needs of other
Ataraxis libraries.
## Features
- Supports Windows, Linux, and macOS.
- Provides a process- and thread-safe way of sharing data between multiple processes through a NumPy array structure.
- Extends the standard Python dataclass to support saving and loading its data to and from YAML files.
- Provides a fast and scalable data logger optimized for saving serialized data from multiple parallel processes in
non-volatile memory.
- Offers efficient batch processing of log archives with support for parallel workflows.
- Includes a file-based processing pipeline tracker for coordinating multi-process and multi-host data processing jobs.
- Provides utilities for data integrity verification, directory transfer, and time-series interpolation.
- Apache 2.0 License.
## Table of Contents
- [Dependencies](#dependencies)
- [Installation](#installation)
- [Usage](#usage)
- [YamlConfig](#yamlconfig)
- [SharedMemoryArray](#sharedmemoryarray)
- [DataLogger](#datalogger)
- [LogArchiveReader](#logarchivereader)
- [ProcessingTracker](#processingtracker)
- [Processing Utilities](#processing-utilities)
- [API Documentation](#api-documentation)
- [Developers](#developers)
- [Versioning](#versioning)
- [Authors](#authors)
- [License](#license)
- [Acknowledgments](#acknowledgments)
___
## Dependencies
For users, all library dependencies are installed automatically by all supported installation methods. For developers,
see the [Developers](#developers) section for information on installing additional development dependencies.
___
## Installation
### Source
***Note,*** installation from source is ***highly discouraged*** for anyone who is not an active project developer.
1. Download this repository to the local machine using the preferred method, such as git-cloning. Use one of the
[stable releases](https://github.com/Sun-Lab-NBB/ataraxis-data-structures/tags) that include precompiled binary
and source code distribution (sdist) wheels.
2. If the downloaded distribution is stored as a compressed archive, unpack it using the appropriate decompression tool.
3. `cd` to the root directory of the prepared project distribution.
4. Run `pip install .` to install the project and its dependencies.
### pip
Use the following command to install the library and all of its dependencies via
[pip](https://pip.pypa.io/en/stable/): `pip install ataraxis-data-structures`
___
## Usage
This section provides an overview of each component exposed by the library. For detailed information about method
signatures and parameters, consult the [API documentation](#api-documentation).
### YamlConfig
The YamlConfig class extends the functionality of the standard Python dataclass module by bundling the dataclass
instances with methods to save and load their data to and from .yaml files. Primarily, this functionality is
implemented to support storing runtime configuration data in a non-volatile, human-readable, and editable format.
The YamlConfig class is designed to be subclassed by custom dataclass instances to gain the .yaml saving and loading
functionality realized through the inherited `to_yaml()` and `from_yaml()` methods:
```python
from ataraxis_data_structures import YamlConfig
from dataclasses import dataclass
from pathlib import Path
import tempfile
# All YamlConfig functionality is accessed via subclassing.
@dataclass
class MyConfig(YamlConfig):
integer: int = 0
string: str = "random"
# Instantiates the test class using custom values that do not match the default initialization values.
config = MyConfig(integer=123, string="hello")
# Saves the instance data to a YAML file in a temporary directory. The saved data can be modified by directly
# editing the saved .yaml file.
tempdir = tempfile.TemporaryDirectory() # Creates a temporary directory for illustration purposes.
out_path = Path(tempdir.name).joinpath("my_config.yaml") # Resolves the path to the output file.
config.to_yaml(file_path=out_path)
# Ensures that the cache file has been created.
assert out_path.exists()
# Creates a new MyConfig instance using the data inside the .yaml file.
loaded_config = MyConfig.from_yaml(file_path=out_path)
# Ensures that the loaded data matches the original MyConfig instance data.
assert loaded_config.integer == config.integer
assert loaded_config.string == config.string
```
### SharedMemoryArray
The SharedMemoryArray class supports sharing data between multiple Python processes in a thread- and process-safe way.
To do so, it implements a shared memory buffer accessed via an n-dimensional NumPy array instance, allowing different
processes to read and write any element(s) of the array.
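For intuition, this is the stdlib primitive such a class builds on: a named block of memory that any process knowing the name can map. The sketch below uses bare `multiprocessing.shared_memory`, without the NumPy view or the locking that the library layers on top:

```python
from multiprocessing import shared_memory

# Create a named shared-memory block and write through the creator's handle.
creator = shared_memory.SharedMemory(create=True, size=8)
creator.buf[0] = 42

# Open a second handle by name, as a child process would, and read back.
reader = shared_memory.SharedMemory(name=creator.name)
value_seen = reader.buf[0]

reader.close()
creator.close()
creator.unlink()  # destroy the buffer once all handles are closed
print(value_seen)
```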
#### SharedMemoryArray Creation
The SharedMemoryArray only needs to be instantiated once by the main runtime process (thread) and provided to all
children processes as an input. The initialization process uses the specified prototype NumPy array and unique buffer
name to generate a (new) NumPy array whose data is stored in a shared memory buffer accessible from any thread or
process. ***Note,*** the array dimensions and datatype cannot be changed after initialization.
```python
from ataraxis_data_structures import SharedMemoryArray
import numpy as np
# The prototype array and buffer name determine the layout of the SharedMemoryArray for its entire lifetime:
prototype = np.array([[1, 2, 3], [4, 5, 6]], dtype=np.float64)
buffer_name = "unique_buffer_name" # Has to be unique for all concurrently used SharedMemoryArray instances.
# To initialize the array, use the create_array() method. Do not call the class initialization method directly!
sma = SharedMemoryArray.create_array(name=buffer_name, prototype=prototype)
# Ensures that the shared memory buffer is destroyed when the instance is garbage-collected.
sma.enable_buffer_destruction()
# The instantiated SharedMemoryArray object wraps an n-dimensional NumPy array with the same dimensions and data
# type as the prototype and uses the unique shared memory buffer name to identify the shared memory buffer to
# connect to from different processes.
assert sma.name == buffer_name
assert sma.shape == prototype.shape
assert sma.datatype == prototype.dtype
# Demonstrates the current values for the critical SharedMemoryArray parameters evaluated above:
print(sma)
```
#### SharedMemoryArray Connection, Disconnection, and Destruction
Each process using the SharedMemoryArray instance, including the process that created it, must use the `connect()`
method to connect to the array before reading or writing data. At the end of its runtime, each connected process must
call the `disconnect()` method to release the local reference to the shared buffer. The main process also needs to call
the `destroy()` method to destroy the shared memory buffer.
```python
import numpy as np
from ataraxis_data_structures import SharedMemoryArray
# Initializes a SharedMemoryArray
prototype = np.zeros(shape=6, dtype=np.uint64)
buffer_name = "unique_buffer"
sma = SharedMemoryArray.create_array(name=buffer_name, prototype=prototype)
# This method has to be called before attempting to manipulate the data inside the array.
sma.connect()
# The connection status of the array can be verified at any time by using is_connected property:
assert sma.is_connected
# Each process that connected to the shared memory buffer must disconnect from it at the end of its runtime. On
# Windows platforms, when all processes are disconnected from the buffer, the buffer is automatically
# garbage-collected.
sma.disconnect() # For each connect() call, there has to be a matching disconnect() call
assert not sma.is_connected
# On Unix platforms, the buffer persists even after being disconnected by all instances, unless it is explicitly
# destroyed.
sma.destroy() # For each create_array() call, there has to be a matching destroy() call
```
#### Reading and Writing SharedMemoryArray Data
For routine data writing or reading operations, the SharedMemoryArray supports accessing its data via indexing or
slicing, just like a regular NumPy array. Critically, accessing the data in this way is process-safe, as the instance
first acquires an exclusive multiprocessing Lock before interfacing with the data. For more complex access scenarios,
it is possible to use the `array()` method to directly access and manipulate the underlying NumPy array object used by
the instance.
```python
import numpy as np
from ataraxis_data_structures import SharedMemoryArray
# Initializes a SharedMemoryArray
prototype = np.array([1, 2, 3, 4, 5, 6], dtype=np.uint64)
buffer_name = "unique_buffer"
sma = SharedMemoryArray.create_array(name=buffer_name, prototype=prototype)
sma.connect()
# The SharedMemoryArray data can be accessed directly using indexing or slicing, just like any regular NumPy array
# or Python iterable:
# Index
assert sma[2] == np.uint64(3)
assert isinstance(sma[2], np.uint64)
sma[2] = 123 # Written data must be convertible to the datatype of the underlying NumPy array
assert sma[2] == np.uint64(123)
# Slice
assert np.array_equal(sma[:4], np.array([1, 2, 123, 4], dtype=np.uint64))
assert isinstance(sma[:4], np.ndarray)
# It is also possible to directly access the underlying NumPy array, which allows using the full range of NumPy
# operations. The accessor method is used as a context manager and, with 'with_lock=True', enforces exclusive
# access to the array's data via an internal multiprocessing lock mechanism:
with sma.array(with_lock=True) as array:
print(f"Before clipping: {array}")
# Clipping replaces the out-of-bounds value '123' with '10'. Passing 'out=array' writes the result back into
# the shared buffer in place; a plain 'array = np.clip(...)' would only rebind the local variable.
np.clip(array, 0, 10, out=array)
print(f"After clipping: {array}")
# Cleans up the array buffer
sma.disconnect()
sma.destroy()
```
#### Using SharedMemoryArray from Multiple Processes
While all methods showcased above run in the same process, the main advantage of the SharedMemoryArray class is that
it behaves the same way when used from different Python processes. The following example demonstrates using a
SharedMemoryArray across multiple concurrent processes:
```python
from multiprocessing import Process
from ataraxis_base_utilities import console
from ataraxis_time import PrecisionTimer
import numpy as np
from ataraxis_data_structures import SharedMemoryArray
def concurrent_worker(shared_memory_object: SharedMemoryArray, index: int) -> None:
"""This worker runs in a remote process.
It increments the shared memory array variable by 1 if the variable is even. Since each increment shifts it to
be odd, to work as intended, this process has to work together with a different process that increments odd
values. The process shuts down once the value reaches 200.
Args:
shared_memory_object: The SharedMemoryArray instance to work with.
index: The index inside the array to increment
"""
# Connects to the array
shared_memory_object.connect()
# Runs until the value becomes 200
while shared_memory_object[index] < 200:
# Reads data from the input index
shared_value = shared_memory_object[index]
# Checks if the value is even and below 200
if shared_value % 2 == 0 and shared_value < 200:
# Increments the value by one and writes it back to the array
shared_memory_object[index] = shared_value + 1
# Disconnects and terminates the process
shared_memory_object.disconnect()
if __name__ == "__main__":
console.enable() # Enables terminal printouts
# Initializes a SharedMemoryArray
sma = SharedMemoryArray.create_array("test_concurrent", np.zeros(5, dtype=np.int32))
# Generates multiple processes and uses each to repeatedly write and read data from different indices of the
# same array.
processes = [Process(target=concurrent_worker, args=(sma, i)) for i in range(5)]
for p in processes:
p.start()
# Finishes setting up the local array instance by connecting to the shared memory buffer and enabling the
# shared memory buffer cleanup when the instance is garbage-collected (a safety feature).
sma.connect()
sma.enable_buffer_destruction()
# Marks the beginning of the test runtime
console.echo(f"Running the multiprocessing example on {len(processes)} processes...")
timer = PrecisionTimer("ms")
timer.reset()
# For each of the array indices, increments the value of the index if it is odd. Child processes increment
# even values and ignore odd ones, so the only way for this code to finish is if children and parent process
# take turns incrementing shared values until they reach 200
while np.any(sma[0:5] < 200): # Runs as long as any value is below 200
# Note, while it is possible to index the data from the SharedMemoryArray, it is also possible to retrieve
# and manipulate the underlying NumPy array directly. This allows using the full range of NumPy operations
# on the shared memory data:
with sma.array(with_lock=True) as arr:
mask = (arr % 2 != 0) & (arr < 200) # Uses a boolean mask to discover odd values below 200
arr[mask] += 1 # Increments only the values that meet the condition above
# Waits for the processes to join
for p in processes:
p.join()
# Verifies that all processes ran as expected and incremented their respective variable
assert np.all(sma[0:5] == 200)
# Marks the end of the test runtime.
time_taken = timer.elapsed
console.echo(f"Example runtime: complete. Time taken: {time_taken / 1000:.2f} seconds.")
# Cleans up the shared memory array after all processes are terminated
sma.disconnect()
sma.destroy()
```
### DataLogger
The DataLogger class manages a logger running in an independent Process and exposes a shared Queue object for
buffering and piping data to the logger from any other process. Currently, the class is specifically designed for
saving serialized byte arrays used by other Ataraxis libraries, most notably ataraxis-video-system and
ataraxis-transport-layer.
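The queue-based architecture described above can be sketched with stdlib primitives. This is an illustrative mock of the pattern, not the DataLogger implementation: a worker consumes serialized entries from a shared queue and persists them, with a sentinel value signaling shutdown (a thread stands in for the saver process purely to keep the sketch short):

```python
import queue
import threading

SENTINEL = None  # Hypothetical shutdown marker; DataLogger's actual stop mechanism is internal.


def saver_worker(input_queue: queue.Queue, storage: list) -> None:
    """Consumes serialized entries from the queue until the sentinel arrives."""
    while True:
        entry = input_queue.get()
        if entry is SENTINEL:
            break
        storage.append(entry)  # A real saver would write each entry to disk instead.


log_queue: queue.Queue = queue.Queue()
saved: list[bytes] = []
saver = threading.Thread(target=saver_worker, args=(log_queue, saved))
saver.start()

# Producers (any thread or process in the real design) put serialized payloads into the queue.
log_queue.put(b"\x01\x02\x03")
log_queue.put(b"\x04\x05")

log_queue.put(SENTINEL)  # Analogous to stopping the logger.
saver.join()
```

After the worker joins, `saved` holds the two payloads in submission order, mirroring how the logger drains its input queue before shutting down.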
#### Creating and Starting the DataLogger
DataLogger is intended to be initialized only once, in the main runtime process, and provided to all child
processes as an input. ***Note,*** while a single DataLogger instance is enough for most use cases, it is
possible to use more than one DataLogger instance at the same time.
```python
from pathlib import Path
import tempfile
from ataraxis_data_structures import DataLogger
# Due to the internal use of the 'Process' class, each DataLogger call has to be protected by the __main__ guard
# at the highest level of the call hierarchy.
if __name__ == "__main__":
# As a minimum, each DataLogger has to be given the path to the output directory and a unique name to
# distinguish the instance from any other concurrently active DataLogger instance.
tempdir = tempfile.TemporaryDirectory() # Creates a temporary directory for illustration purposes
logger = DataLogger(output_directory=Path(tempdir.name), instance_name="my_name")
# The DataLogger initialized above creates a new directory: 'tempdir/my_name_data_log' to store logged entries.
# Before the DataLogger starts saving data, its saver process needs to be initialized via the start() method.
# Until the saver is initialized, the instance buffers all incoming data in RAM (via the internal Queue
# object), which may eventually exhaust the available memory.
logger.start()
# Each call to the start() method must be matched with a corresponding call to the stop() method. This method
# shuts down the logger process and releases any resources held by the instance.
logger.stop()
```
#### Data Logging
The DataLogger is explicitly designed to log serialized data of arbitrary size. To enforce the correct data
formatting, all data submitted to the logger must be packaged into a LogPackage class instance before it is put into
the DataLogger's input queue.
```python
from pathlib import Path
import tempfile
import numpy as np
from ataraxis_data_structures import DataLogger, LogPackage
from ataraxis_time import get_timestamp, TimestampFormats
if __name__ == "__main__":
# Initializes and starts the DataLogger.
tempdir = tempfile.TemporaryDirectory()
logger = DataLogger(output_directory=Path(tempdir.name), instance_name="my_name")
logger.start()
# The DataLogger uses a multiprocessing Queue to buffer and pipe the incoming data to the saver process. The
# queue is accessible via the 'input_queue' property of each logger instance.
logger_queue = logger.input_queue
# The DataLogger is explicitly designed to log serialized data. All data submitted to the logger must be
# packaged into a LogPackage instance to ensure that it adheres to the proper format expected by the logger
# instance.
source_id = np.uint8(1) # Has to be a uint8 type
timestamp = np.uint64(get_timestamp(output_format=TimestampFormats.INTEGER)) # Has to be a uint64 type
data = np.array([1, 2, 3, 4, 5], dtype=np.uint8) # Has to be a uint8 NumPy array
logger_queue.put(LogPackage(source_id, timestamp, data))
# The timer used to timestamp the log entries has to be precise enough to resolve two consecutive data
# entries. Due to these constraints, it is recommended to use a nanosecond or microsecond timer, such as the
# one offered by the ataraxis-time library.
timestamp = np.uint64(get_timestamp(output_format=TimestampFormats.INTEGER))
data = np.array([6, 7, 8, 9, 10], dtype=np.uint8)
logger_queue.put(LogPackage(source_id, timestamp, data)) # Same source id as the package above
# Stops the data logger.
logger.stop()
# The DataLogger saves the input LogPackage instances as serialized NumPy byte array .npy files. The output
# directory for the saved files can be queried from the DataLogger instance's 'output_directory' property.
assert len(list(logger.output_directory.glob("**/*.npy"))) == 2
```
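Because each saved entry is a plain .npy file, the log output can also be inspected directly with `np.load`. A minimal sketch follows; note that the exact byte layout of each serialized entry is a DataLogger implementation detail, so this only demonstrates raw loading:

```python
from pathlib import Path

import numpy as np


def load_log_entries(log_directory: Path) -> list[np.ndarray]:
    """Loads every .npy entry found under the directory, sorted by file name."""
    return [np.load(entry_file) for entry_file in sorted(log_directory.glob("**/*.npy"))]
```

With the example above, `load_log_entries(logger.output_directory)` would return both logged entries as NumPy byte arrays.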
#### Log Archive Assembly
To optimize log writing speed and minimize the time the data spends in volatile memory, all log entries are saved
to disk as separate NumPy .npy files. While this format is efficient for time-critical runtimes, it is not
optimal for long-term storage and data transfer. To optimize post-runtime data storage, the library offers the
`assemble_log_archives()` function, which aggregates the .npy files from each data source into an (uncompressed)
.npz archive.
```python
from pathlib import Path
import tempfile
import numpy as np
from ataraxis_data_structures import DataLogger, LogPackage, assemble_log_archives
if __name__ == "__main__":
# Creates and starts the DataLogger instance.
tempdir = tempfile.TemporaryDirectory()
logger = DataLogger(output_directory=Path(tempdir.name), instance_name="my_name")
logger.start()
logger_queue = logger.input_queue
# Generates and logs 255 data messages. This generates 255 unique .npy files under the logger's output
# directory.
for i in range(255):
logger_queue.put(LogPackage(np.uint8(1), np.uint64(i), np.array([i, i, i], dtype=np.uint8)))
# Stops the data logger.
logger.stop()
# Depending on the runtime context, a DataLogger instance can generate a large number of individual .npy files
# as part of its runtime. While having advantages for real-time data logging, this format of storing the data
# is not ideal for later data transfer and manipulation. Therefore, it is recommended to always use the
# assemble_log_archives() function to aggregate the individual .npy files into one or more .npz archives.
assemble_log_archives(
log_directory=logger.output_directory, remove_sources=True, memory_mapping=True, verbose=True
)
# The archive assembly creates a single .npz file named after the source_id (1_log.npz), using all available
# .npy files. Generally, each unique data source is assembled into a separate .npz archive.
assert len(list(logger.output_directory.glob("**/*.npy"))) == 0
assert len(list(logger.output_directory.glob("**/*.npz"))) == 1
```
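Since the assembled output is a standard (uncompressed) .npz file, it can also be opened directly with `np.load` for quick inspection. The names of the individual arrays inside the archive are an implementation detail of this library, so the sketch below only lists them rather than interpreting them:

```python
from pathlib import Path

import numpy as np


def list_archive_entries(archive_path: Path) -> list[str]:
    """Returns the names of the arrays stored inside a .npz archive."""
    with np.load(archive_path) as archive:
        return list(archive.files)
```

For structured access to the logged messages, prefer the LogArchiveReader class described below.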
### LogArchiveReader
The LogArchiveReader class provides efficient access to .npz log archives generated by DataLogger instances. It
supports onset timestamp discovery, message iteration, and batch assignment for multiprocessing workflows.
#### Basic Usage
Each .npz archive contains messages from a single source (producer). The LogArchiveReader automatically discovers the
onset timestamp stored in the archive and converts elapsed timestamps to absolute UTC timestamps.
```python
from pathlib import Path
from ataraxis_data_structures import LogArchiveReader
# Creates a reader for an existing archive
archive_path = Path("/path/to/1_log.npz")
reader = LogArchiveReader(archive_path)
# The onset timestamp is automatically discovered (UTC epoch reference in microseconds)
print(f"Onset timestamp: {reader.onset_timestamp_us}")
print(f"Message count: {reader.message_count}")
# Iterates through all messages in the archive
for message in reader.iter_messages():
print(f"Timestamp: {message.timestamp_us}, Payload size: {len(message.payload)}")
```
#### Batch Processing for Multiprocessing Workflows
The LogArchiveReader supports efficient batch processing for parallel workflows. The main process can generate batch
assignments, and worker processes can create lightweight reader instances by passing the pre-discovered onset timestamp
to skip redundant scanning.
```python
from pathlib import Path
from concurrent.futures import ProcessPoolExecutor
from ataraxis_data_structures import LogArchiveReader
import numpy as np
def process_batch(archive_path: Path, onset_us: np.uint64, keys: list[str]) -> int:
"""Worker function that processes a batch of messages."""
# Creates a lightweight reader with pre-discovered onset (skips onset discovery)
reader = LogArchiveReader(archive_path, onset_us=onset_us)
processed = 0
for message in reader.iter_messages(keys=keys):
# Process each message...
processed += 1
return processed
if __name__ == "__main__":
archive_path = Path("/path/to/1_log.npz")
# Main process discovers onset and generates batches
reader = LogArchiveReader(archive_path)
onset_us = reader.onset_timestamp_us
batches = reader.get_batches(workers=4, batch_multiplier=4)
# Distributes batches to worker processes
with ProcessPoolExecutor(max_workers=4) as executor:
futures = [executor.submit(process_batch, archive_path, onset_us, batch) for batch in batches]
total_processed = sum(f.result() for f in futures)
print(f"Processed {total_processed} messages")
```
#### Reading All Messages at Once
For smaller archives, all messages can be read into memory at once:
```python
from pathlib import Path
from ataraxis_data_structures import LogArchiveReader
reader = LogArchiveReader(Path("/path/to/1_log.npz"))
# Returns a tuple of (timestamps_array, payloads_list)
timestamps, payloads = reader.read_all_messages()
print(f"Read {len(timestamps)} messages")
```
### ProcessingTracker
The ProcessingTracker class tracks the state of data processing pipelines and provides tools for communicating this
state between multiple processes and host machines. It uses a file-based approach with a .yaml file for state storage
and a .lock file for thread-safe access.
#### Creating and Initializing the Tracker
The ProcessingTracker extends YamlConfig and uses file locking to ensure safe concurrent access from multiple
processes.
```python
from pathlib import Path
from ataraxis_data_structures import ProcessingTracker
# Creates a tracker pointing to a .yaml file
tracker = ProcessingTracker(file_path=Path("/path/to/tracker.yaml"))
# Initializes jobs to be tracked (each job is a tuple of (job_name, specifier))
# Specifiers differentiate instances of the same job (e.g., different data batches)
job_ids = tracker.initialize_jobs([
("process_video", "session_001"),
("process_video", "session_002"),
("extract_frames", "session_001"),
("extract_frames", "session_002"),
("generate_report", ""), # Empty specifier for jobs without batches
])
print(f"Initialized {len(job_ids)} jobs")
```
#### Managing Job Lifecycle
Jobs transition through states: SCHEDULED → RUNNING → SUCCEEDED or FAILED.
```python
from pathlib import Path
from ataraxis_data_structures import ProcessingTracker, ProcessingStatus
tracker = ProcessingTracker(file_path=Path("/path/to/tracker.yaml"))
# Generates a job ID using the same name and specifier used during initialization
job_id = ProcessingTracker.generate_job_id("process_video", "session_001")
# Marks the job as started (optionally with an executor ID like a SLURM job ID)
tracker.start_job(job_id, executor_id="slurm_12345")
# Queries the current status
status = tracker.get_job_status(job_id)
print(f"Job status: {status}") # ProcessingStatus.RUNNING
# Marks the job as completed successfully
tracker.complete_job(job_id)
# Or, if the job failed:
# tracker.fail_job(job_id, error_message="Out of memory")
```
#### Querying Pipeline State
The ProcessingTracker provides methods for querying the overall pipeline state and individual job information.
```python
from pathlib import Path
from ataraxis_data_structures import ProcessingTracker, ProcessingStatus
tracker = ProcessingTracker(file_path=Path("/path/to/tracker.yaml"))
# Checks if all jobs have completed successfully
if tracker.complete:
print("Pipeline completed successfully!")
# Checks if any job has failed
if tracker.encountered_error:
print("Pipeline encountered errors!")
# Gets a summary of job counts by status
summary = tracker.get_summary()
for status, count in summary.items():
print(f"{status.name}: {count}")
# Gets all job IDs with a specific status
failed_jobs = tracker.get_jobs_by_status(ProcessingStatus.FAILED)
scheduled_jobs = tracker.get_jobs_by_status("SCHEDULED") # String names also work
# Searches for jobs by name or specifier patterns
matches = tracker.find_jobs(job_name="process", specifier="session_001")
for job_id, (name, spec) in matches.items():
print(f"Found job: {name} ({spec})")
# Gets detailed job information
job_info = tracker.get_job_info(job_id)
print(f"Job: {job_info.job_name}, Status: {job_info.status}")
print(f"Started at: {job_info.started_at}, Completed at: {job_info.completed_at}")
```
#### Retrying Failed Jobs
Failed jobs can be reset for retry:
```python
from pathlib import Path
from ataraxis_data_structures import ProcessingTracker
tracker = ProcessingTracker(file_path=Path("/path/to/tracker.yaml"))
# Resets all failed jobs back to SCHEDULED status
retried_ids = tracker.retry_failed_jobs()
print(f"Reset {len(retried_ids)} failed jobs for retry")
# Or reset the entire tracker
tracker.reset()
```
### Processing Utilities
The library provides several utility functions for common data processing tasks.
#### Directory Checksum Calculation
The `calculate_directory_checksum()` function computes an xxHash3-128 checksum for an entire directory, accounting for
both file contents and directory structure.
```python
from pathlib import Path
from ataraxis_data_structures import calculate_directory_checksum
# Calculates checksum with progress tracking
checksum = calculate_directory_checksum(
directory=Path("/path/to/data"),
num_processes=None, # Uses all available CPU cores
progress=True, # Shows progress bar
save_checksum=True, # Saves to ax_checksum.txt in the directory
)
print(f"Directory checksum: {checksum}")
# Calculates checksum without saving or progress tracking (for batch processing)
checksum = calculate_directory_checksum(
directory=Path("/path/to/data"),
progress=False,
save_checksum=False,
excluded_files={"ax_checksum.txt", ".gitignore"}, # Excludes specific files
)
```
#### Directory Transfer
The `transfer_directory()` function copies directories with optional integrity verification and parallel processing.
```python
from pathlib import Path
from ataraxis_data_structures import transfer_directory
# Transfers with integrity verification
transfer_directory(
source=Path("/path/to/source"),
destination=Path("/path/to/destination"),
num_threads=4, # Uses 4 threads for parallel copy
verify_integrity=True, # Verifies checksum after transfer
remove_source=False, # Keeps source after transfer
progress=True, # Shows progress bar
)
# Moves data (transfers and removes source)
transfer_directory(
source=Path("/path/to/source"),
destination=Path("/path/to/destination"),
num_threads=0, # Uses all available CPU cores
verify_integrity=True,
remove_source=True, # Removes source after successful transfer
)
```
#### Directory Deletion
The `delete_directory()` function removes directories using parallel file deletion for improved performance.
```python
from pathlib import Path
from ataraxis_data_structures import delete_directory
# Deletes a directory and all its contents
delete_directory(Path("/path/to/directory"))
```
#### Data Interpolation
The `interpolate_data()` function aligns time-series data to target coordinates using linear interpolation (for
continuous data) or last-known-value interpolation (for discrete data).
```python
import numpy as np
from ataraxis_data_structures import interpolate_data
# Source data with timestamps and values
source_timestamps = np.array([0, 100, 200, 300, 400], dtype=np.uint64)
source_values = np.array([10.0, 20.0, 15.0, 25.0, 30.0], dtype=np.float64)
# Target timestamps for interpolation
target_timestamps = np.array([50, 150, 250, 350], dtype=np.uint64)
# Continuous interpolation (linear)
interpolated_continuous = interpolate_data(
source_coordinates=source_timestamps,
source_values=source_values,
target_coordinates=target_timestamps,
is_discrete=False,
)
print(f"Continuous: {interpolated_continuous}") # [15.0, 17.5, 20.0, 27.5]
# Discrete interpolation (last known value)
discrete_values = np.array([1, 2, 3, 4, 5], dtype=np.uint8)
interpolated_discrete = interpolate_data(
source_coordinates=source_timestamps,
source_values=discrete_values,
target_coordinates=target_timestamps,
is_discrete=True,
)
print(f"Discrete: {interpolated_discrete}") # [1, 2, 3, 4]
```
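For reference, the two interpolation modes described above can be reproduced with plain NumPy: `np.interp` mirrors the continuous (linear) mode, and a `searchsorted`-based lookup mirrors the discrete (last-known-value) mode. This is an illustrative re-implementation of the semantics, not the library's actual code:

```python
import numpy as np

source_x = np.array([0, 100, 200, 300, 400], dtype=np.uint64)
continuous_y = np.array([10.0, 20.0, 15.0, 25.0, 30.0], dtype=np.float64)
discrete_y = np.array([1, 2, 3, 4, 5], dtype=np.uint8)
target_x = np.array([50, 150, 250, 350], dtype=np.uint64)

# Continuous mode: straight linear interpolation between neighboring source points.
continuous = np.interp(target_x, source_x, continuous_y)

# Discrete mode: each target takes the value of the latest source point at or before it.
indices = np.searchsorted(source_x, target_x, side="right") - 1
discrete = discrete_y[indices]
```

Evaluating both against the example data reproduces the outputs shown above: `[15.0, 17.5, 20.0, 27.5]` for the continuous mode and `[1, 2, 3, 4]` for the discrete mode.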
___
## API Documentation
See the [API documentation](https://ataraxis-data-structures-api-docs.netlify.app/) for the detailed description of the
methods and classes exposed by components of this library.
___
## Developers
This section provides installation, dependency, and build-system instructions for developers who want to modify
the source code of this library.
### Installing the Project
***Note,*** this installation method requires **mamba version 2.3.2 or above**. Currently, all Sun lab automation
pipelines require that mamba is installed through the [miniforge3](https://github.com/conda-forge/miniforge) installer.
1. Download this repository to the local machine using the preferred method, such as git-cloning.
2. If the downloaded distribution is stored as a compressed archive, unpack it using the appropriate decompression tool.
3. `cd` to the root directory of the prepared project distribution.
4. Install the core Sun lab development dependencies into the ***base*** mamba environment via the
`mamba install tox uv tox-uv` command.
5. Use the `tox -e create` command to create the project-specific development environment followed by `tox -e install`
command to install the project into that environment as a library.
### Additional Dependencies
In addition to installing the project and all user dependencies, install the following dependencies:
1. [Python](https://www.python.org/downloads/) distributions, one for each version supported by the developed project.
Currently, this library supports the three latest stable versions. It is recommended to use a tool like
[pyenv](https://github.com/pyenv/pyenv) to install and manage the required versions.
### Development Automation
This project uses `tox` for development automation. The following tox environments are available:
| Environment | Description |
|----------------------|--------------------------------------------------------------|
| `lint` | Runs ruff formatting, ruff linting, and mypy type checking |
| `stubs` | Generates py.typed marker and .pyi stub files |
| `{py312,...}-test` | Runs the test suite via pytest for each supported Python |
| `coverage` | Aggregates test coverage into an HTML report |
| `docs` | Builds the API documentation via Sphinx |
| `build` | Builds sdist and wheel distributions |
| `upload` | Uploads distributions to PyPI via twine |
| `install` | Builds and installs the project into its mamba environment |
| `uninstall` | Uninstalls the project from its mamba environment |
| `create` | Creates the project's mamba development environment |
| `remove` | Removes the project's mamba development environment |
| `provision` | Recreates the mamba environment from scratch |
| `export` | Exports the mamba environment as .yml and spec.txt files |
| `import` | Creates or updates the mamba environment from a .yml file |
Run any environment using `tox -e ENVIRONMENT`. For example, `tox -e lint`.
***Note,*** all pull requests for this project have to successfully complete the `tox` task before being merged. To
expedite the task's runtime, use the `tox --parallel` command to run some tasks in parallel.
### Automation Troubleshooting
Many packages used in `tox` automation pipelines (uv, mypy, ruff) and `tox` itself may experience runtime failures. In
most cases, this is related to their caching behavior. If an unintelligible error is encountered with any of the
automation components, deleting the corresponding cache directories (`.tox`, `.ruff_cache`, `.mypy_cache`, etc.)
manually or via a CLI command typically resolves the issue.
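As a sketch, the cache directories mentioned above can be cleared from the project root with a few lines of stdlib Python, equivalent to deleting the directories manually (the directory names come from the tools listed above; extend the tuple for any other caches your setup produces):

```python
import shutil
from pathlib import Path


def clear_tool_caches(project_root: Path) -> None:
    """Deletes the common automation tool cache directories, if present."""
    for cache_name in (".tox", ".ruff_cache", ".mypy_cache"):
        # ignore_errors=True makes the call a no-op when the directory does not exist.
        shutil.rmtree(project_root / cache_name, ignore_errors=True)
```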
___
## Versioning
This project uses [semantic versioning](https://semver.org/). See the
[tags on this repository](https://github.com/Sun-Lab-NBB/ataraxis-data-structures/tags) for the available project
releases.
___
## Authors
- Ivan Kondratyev ([Inkaros](https://github.com/Inkaros))
___
## License
This project is licensed under the Apache 2.0 License: see the [LICENSE](LICENSE) file for details.
___
## Acknowledgments
- All Sun lab [members](https://neuroai.github.io/sunlab/people) for providing the inspiration and comments during the
development of this library.
- The creators of all other dependencies and projects listed in the [pyproject.toml](pyproject.toml) file.
| text/markdown | Ivan Kondratyev | null | null | Ivan Kondratyev <ik278@cornell.edu> | null | ataraxis, checksum, data logging, data manipulation, data structures, data transfer, interpolation, processing, shared memory | [
"Development Status :: 5 - Production/Stable",
"Intended Audience :: Developers",
"Operating System :: MacOS :: MacOS X",
"Operating System :: Microsoft :: Windows",
"Operating System :: POSIX :: Linux",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: 3.13",
"Programming L... | [] | null | null | <3.15,>=3.12 | [] | [] | [] | [
"ataraxis-base-utilities<7,>=6",
"ataraxis-time<7,>=6",
"dacite<2,>=1",
"filelock<4,>=3",
"numpy<3,>=2",
"pyyaml<7,>=6",
"xxhash<4,>=3"
] | [] | [] | [] | [
"Homepage, https://github.com/Sun-Lab-NBB/ataraxis-data-structures",
"Documentation, https://ataraxis-data-structures-api-docs.netlify.app/"
] | twine/6.2.0 CPython/3.14.3 | 2026-02-19T03:28:30.592342 | ataraxis_data_structures-6.0.0.tar.gz | 89,386 | b3/5b/e68add2f849a57c74daeb6b61c0c1c3bfa35989b330d66331b410fcded71/ataraxis_data_structures-6.0.0.tar.gz | source | sdist | null | false | 8db5aec35ce299cb7177e14c4a9e036a | 9cc8af587238b7ad5dd30f23757cb834fb8ffa3ca3345db7355b0024b7ce5396 | b35be68add2f849a57c74daeb6b61c0c1c3bfa35989b330d66331b410fcded71 | Apache-2.0 | [
"LICENSE"
] | 272 |
2.1 | odoo-addon-maintenance-equipment-usage | 18.0.1.0.0.4 | Maintenance Equipment Usage | .. image:: https://odoo-community.org/readme-banner-image
:target: https://odoo-community.org/get-involved?utm_source=readme
:alt: Odoo Community Association
===========================
Maintenance Equipment Usage
===========================
..
!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
!! This file is generated by oca-gen-addon-readme !!
!! changes will be overwritten. !!
!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
!! source digest: sha256:fb59eb35211a56c4edea67cc3ce2430fee9f06f3daf363e4a664b3d474940c8b
!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
.. |badge1| image:: https://img.shields.io/badge/maturity-Beta-yellow.png
:target: https://odoo-community.org/page/development-status
:alt: Beta
.. |badge2| image:: https://img.shields.io/badge/license-AGPL--3-blue.png
:target: http://www.gnu.org/licenses/agpl-3.0-standalone.html
:alt: License: AGPL-3
.. |badge3| image:: https://img.shields.io/badge/github-OCA%2Fmaintenance-lightgray.png?logo=github
:target: https://github.com/OCA/maintenance/tree/18.0/maintenance_equipment_usage
:alt: OCA/maintenance
.. |badge4| image:: https://img.shields.io/badge/weblate-Translate%20me-F47D42.png
:target: https://translation.odoo-community.org/projects/maintenance-18-0/maintenance-18-0-maintenance_equipment_usage
:alt: Translate me on Weblate
.. |badge5| image:: https://img.shields.io/badge/runboat-Try%20me-875A7B.png
:target: https://runboat.odoo-community.org/builds?repo=OCA/maintenance&target_branch=18.0
:alt: Try me on Runboat
|badge1| |badge2| |badge3| |badge4| |badge5|
This module allows recording usages of maintenance equipment by
employees, with their dates, states and comments.
**Table of contents**
.. contents::
:local:
Bug Tracker
===========
Bugs are tracked on `GitHub Issues <https://github.com/OCA/maintenance/issues>`_.
In case of trouble, please check there if your issue has already been reported.
If you spotted it first, help us to smash it by providing a detailed and welcomed
`feedback <https://github.com/OCA/maintenance/issues/new?body=module:%20maintenance_equipment_usage%0Aversion:%2018.0%0A%0A**Steps%20to%20reproduce**%0A-%20...%0A%0A**Current%20behavior**%0A%0A**Expected%20behavior**>`_.
Do not contact contributors directly about support or help with technical issues.
Credits
=======
Authors
-------
* César Fernández
* Tecnativa
Contributors
------------
- César Fernández Domínguez
- `Tecnativa <https://www.tecnativa.com>`__:
- Víctor Martínez
- Pedro M. Baeza
- `Heliconia Solutions Pvt. Ltd. <https://www.heliconia.io>`__
- Bhavesh Heliconia
Maintainers
-----------
This module is maintained by the OCA.
.. image:: https://odoo-community.org/logo.png
:alt: Odoo Community Association
:target: https://odoo-community.org
OCA, or the Odoo Community Association, is a nonprofit organization whose
mission is to support the collaborative development of Odoo features and
promote its widespread use.
.. |maintainer-victoralmau| image:: https://github.com/victoralmau.png?size=40px
:target: https://github.com/victoralmau
:alt: victoralmau
Current `maintainer <https://odoo-community.org/page/maintainer-role>`__:
|maintainer-victoralmau|
This module is part of the `OCA/maintenance <https://github.com/OCA/maintenance/tree/18.0/maintenance_equipment_usage>`_ project on GitHub.
You are welcome to contribute. To learn how please visit https://odoo-community.org/page/Contribute.
| text/x-rst | =?utf-8?q?C=C3=A9sar_Fern=C3=A1ndez=2C_Tecnativa=2C_Odoo_Community_Association_=28OCA=29?= | support@odoo-community.org | null | null | AGPL-3 | null | [
"Programming Language :: Python",
"Framework :: Odoo",
"Framework :: Odoo :: 18.0",
"License :: OSI Approved :: GNU Affero General Public License v3"
] | [] | https://github.com/OCA/maintenance | null | >=3.10 | [] | [] | [] | [
"odoo==18.0.*"
] | [] | [] | [] | [] | twine/6.2.0 CPython/3.12.3 | 2026-02-19T03:28:00.473618 | odoo_addon_maintenance_equipment_usage-18.0.1.0.0.4-py3-none-any.whl | 34,808 | 6d/c7/d6232298cf2398ce623e5f9652b0f755b03ebaf718744f098b96e21f369f/odoo_addon_maintenance_equipment_usage-18.0.1.0.0.4-py3-none-any.whl | py3 | bdist_wheel | null | false | 91b594ee1ad4d99d956dd622732fe235 | 15dfddd17ce6b11f85fdbaf58f0d8325cc88702a0254570c1c93da957569021b | 6dc7d6232298cf2398ce623e5f9652b0f755b03ebaf718744f098b96e21f369f | null | [] | 106 |
2.4 | pmtvs-audio | 0.0.1 | Signal analysis primitives | # pmtvs-audio
Part of the pmtvs signal analysis ecosystem. Coming soon.
See [pmtvs](https://pypi.org/project/pmtvs/) for the main package.
| text/markdown | null | null | null | null | MIT | null | [
"Development Status :: 1 - Planning",
"Intended Audience :: Science/Research",
"Topic :: Scientific/Engineering :: Physics",
"Programming Language :: Python :: 3"
] | [] | null | null | >=3.10 | [] | [] | [] | [] | [] | [] | [] | [] | twine/6.2.0 CPython/3.14.2 | 2026-02-19T03:27:40.439060 | pmtvs_audio-0.0.1.tar.gz | 1,246 | 87/13/3d4e4df967c55b6d69e047767422ce0c922516a949ab586787c53e0118f7/pmtvs_audio-0.0.1.tar.gz | source | sdist | null | false | fefd0dc4066122683d1730bfe51a57ec | de6df5ac444071adb0cf8ea1361f98ca8a0560ebf7111eccfe1d23e5c522b4bb | 87133d4e4df967c55b6d69e047767422ce0c922516a949ab586787c53e0118f7 | null | [] | 267 |
2.1 | odoo-addon-mail-post-defer | 18.0.1.0.1.1 | Faster and cancellable outgoing messages | .. image:: https://odoo-community.org/readme-banner-image
:target: https://odoo-community.org/get-involved?utm_source=readme
:alt: Odoo Community Association
========================
Deferred Message Posting
========================
..
!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
!! This file is generated by oca-gen-addon-readme !!
!! changes will be overwritten. !!
!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
!! source digest: sha256:9e2d83f15087adaaf73a0d037632aa9b26388f81a23d8cf43836c5f8b19b0ba1
!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
.. |badge1| image:: https://img.shields.io/badge/maturity-Alpha-red.png
:target: https://odoo-community.org/page/development-status
:alt: Alpha
.. |badge2| image:: https://img.shields.io/badge/license-LGPL--3-blue.png
:target: http://www.gnu.org/licenses/lgpl-3.0-standalone.html
:alt: License: LGPL-3
.. |badge3| image:: https://img.shields.io/badge/github-OCA%2Fmail-lightgray.png?logo=github
:target: https://github.com/OCA/mail/tree/18.0/mail_post_defer
:alt: OCA/mail
.. |badge4| image:: https://img.shields.io/badge/weblate-Translate%20me-F47D42.png
:target: https://translation.odoo-community.org/projects/mail-18-0/mail-18-0-mail_post_defer
:alt: Translate me on Weblate
.. |badge5| image:: https://img.shields.io/badge/runboat-Try%20me-875A7B.png
:target: https://runboat.odoo-community.org/builds?repo=OCA/mail&target_branch=18.0
:alt: Try me on Runboat
|badge1| |badge2| |badge3| |badge4| |badge5|
This module enhances mail threads by routing messages through the mail queue by default.
Without this module, Odoo attempts to notify recipients of your message
immediately. If your mail server is slow or you have many followers,
posting can take a long time. Install this module and make Odoo more
snappy!
All emails will be kept in the outgoing queue for at least 30 seconds,
giving you some time to rethink what you wrote. During that time, you
can still delete the message and start again.
.. IMPORTANT::
This is an alpha version; the data model and design can change at any time without warning.
For development or testing purposes only; do not use in production.
`More details on development status <https://odoo-community.org/page/development-status>`_
**Table of contents**
.. contents::
:local:
Configuration
=============
You usually don't need to do anything. The module is configured
appropriately out of the box. Just make sure the following scheduled
actions are active:
- Mail: Email Queue Manager (mail.ir_cron_mail_scheduler_action)
- Notification: Send scheduled message notifications
(mail.ir_cron_send_scheduled_message)
Mail queue processing is handled by a cron job. This is normal Odoo
behavior, not specific to this module. However, since you will start
using that queue for every message posted by any user in any thread,
this module configures that job to execute every minute by default.
You can still change that cadence after installing the module (although
it is not recommended). To do so:
1. Log in with an administrator user.
2. Activate developer mode.
3. Go to *Settings > Technical > Automation > Scheduled Actions*.
4. Edit the action named "Mail: Email Queue Manager".
5. Lower the frequency in the field *Execute Every*. Recommended: 1
minute.
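If you prefer to pin the cadence from code instead of the UI, the same change can be shipped as module data. A minimal sketch, assuming a custom module that depends on ``mail`` (the external id below is Odoo's standard mail queue cron, the action named "Mail: Email Queue Manager"):

```xml
<?xml version="1.0" encoding="utf-8"?>
<odoo>
    <!-- Override the standard mail queue cron to run every minute.
         Using the fully qualified external id updates the existing
         record rather than creating a new one. -->
    <record id="mail.ir_cron_mail_scheduler_action" model="ir.cron">
        <field name="interval_number">1</field>
        <field name="interval_type">minutes</field>
    </record>
</odoo>
```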
Usage
=====
To use this module, you need to:
1. Go to the form view of any record that has a mail thread. It can be a
partner, for example.
2. Post a message.
The mail is now in the outgoing mail queue. It will stay there for at
least 30 seconds and will actually be sent the next time the "Mail: Email
Queue Manager" cron job runs.
While the message has not yet been sent:
1. Click the little envelope. You will see a paper airplane icon,
indicating it is still outgoing.
2. Hover over the message and click on *⠇ > 🗑️ Delete*. Mails will not
be sent.
Known issues / Roadmap
======================
- Add minimal deferring time configuration if it ever becomes necessary.
See https://github.com/OCA/social/pull/1001#issuecomment-1461581573
for the rationale behind current hardcoded value of 30 seconds.
Bug Tracker
===========
Bugs are tracked on `GitHub Issues <https://github.com/OCA/mail/issues>`_.
In case of trouble, please check there if your issue has already been reported.
If you spotted it first, help us to smash it by providing a detailed and welcomed
`feedback <https://github.com/OCA/mail/issues/new?body=module:%20mail_post_defer%0Aversion:%2018.0%0A%0A**Steps%20to%20reproduce**%0A-%20...%0A%0A**Current%20behavior**%0A%0A**Expected%20behavior**>`_.
Do not contact contributors directly about support or help with technical issues.
Credits
=======
Authors
-------
* Moduon
Contributors
------------
- Jairo Llopis (https://www.moduon.team/)
Maintainers
-----------
This module is maintained by the OCA.
.. image:: https://odoo-community.org/logo.png
:alt: Odoo Community Association
:target: https://odoo-community.org
OCA, or the Odoo Community Association, is a nonprofit organization whose
mission is to support the collaborative development of Odoo features and
promote its widespread use.
.. |maintainer-Yajo| image:: https://github.com/Yajo.png?size=40px
:target: https://github.com/Yajo
:alt: Yajo
Current `maintainer <https://odoo-community.org/page/maintainer-role>`__:
|maintainer-Yajo|
This module is part of the `OCA/mail <https://github.com/OCA/mail/tree/18.0/mail_post_defer>`_ project on GitHub.
You are welcome to contribute. To learn how please visit https://odoo-community.org/page/Contribute.
| text/x-rst | Moduon, Odoo Community Association (OCA) | support@odoo-community.org | null | null | LGPL-3 | null | [
"Programming Language :: Python",
"Framework :: Odoo",
"Framework :: Odoo :: 18.0",
"License :: OSI Approved :: GNU Lesser General Public License v3 (LGPLv3)",
"Development Status :: 3 - Alpha"
] | [] | https://github.com/OCA/mail | null | >=3.10 | [] | [] | [] | [
"odoo==18.0.*"
] | [] | [] | [] | [] | twine/6.2.0 CPython/3.12.3 | 2026-02-19T03:26:45.818148 | odoo_addon_mail_post_defer-18.0.1.0.1.1-py3-none-any.whl | 52,499 | cd/3f/326262f572349061d47d59fd382acbfaa72a0c632de4cd11c161f958da04/odoo_addon_mail_post_defer-18.0.1.0.1.1-py3-none-any.whl | py3 | bdist_wheel | null | false | 6cf7a8012abcbbde73c9764539c199fc | 3319b6529ad529455ab30a3ee5718856d0df460a5fb1233d0b9406e588258345 | cd3f326262f572349061d47d59fd382acbfaa72a0c632de4cd11c161f958da04 | null | [] | 95 |
2.4 | pmtvs-seismic | 0.0.1 | Signal analysis primitives | # pmtvs-seismic
Part of the pmtvs signal analysis ecosystem. Coming soon.
See [pmtvs](https://pypi.org/project/pmtvs/) for the main package.
| text/markdown | null | null | null | null | MIT | null | [
"Development Status :: 1 - Planning",
"Intended Audience :: Science/Research",
"Topic :: Scientific/Engineering :: Physics",
"Programming Language :: Python :: 3"
] | [] | null | null | >=3.10 | [] | [] | [] | [] | [] | [] | [] | [] | twine/6.2.0 CPython/3.14.2 | 2026-02-19T03:26:35.677082 | pmtvs_seismic-0.0.1.tar.gz | 1,266 | 02/02/e4ed7f099e7a497bedd058958baffbe0acf1a1b97e8a66aab8e8e16f0949/pmtvs_seismic-0.0.1.tar.gz | source | sdist | null | false | 263c7814c7380dc81aa3a5edd873745e | 59bc0f0f3d38a2b230e1ea9e2fce256aab21de48146ab79056c9393628325ee5 | 0202e4ed7f099e7a497bedd058958baffbe0acf1a1b97e8a66aab8e8e16f0949 | null | [] | 277 |
2.4 | ghosttrace | 0.3.1 | Record AI agent decisions, including phantom branches, latency, and costs. | # 👻 GhostTrace
[](https://pypi.org/project/ghosttrace/)
[](https://opensource.org/licenses/MIT)
[](https://pypi.org/project/ghosttrace/)
**GhostTrace** is a lightweight Python library designed to record the "roads not taken" by your AI agents. It captures rejected alternatives (phantom branches), tracks their latency, and estimates API costs, providing deep insights into your agent's decision-making process.
---
## ✨ Features
- **🛤️ Phantom Branch Tracking**: Record every alternative decision your agent considered but rejected.
- **💰 Cost Estimation**: Automatically calculate estimated USD costs for various LLM models (GPT-4, Claude-3, etc.).
- **⏱️ Latency Monitoring**: Measure the time taken for each decision branch.
- **📄 JSON Exports**: Save traces to a structured `.ghost.json` format for later analysis or replaying.
- **💻 Interactive CLI**: Replay agent sessions in your terminal with rich, formatted output.
---
## 🚀 Installation
Install GhostTrace via pip:
```bash
pip install ghosttrace
```
---
## 🛠️ Quick Start
### Basic Usage
```python
from ghosttrace.ghost_writer import GhostWriter
# Initialize the writer
writer = GhostWriter(output_dir='.')
def my_evaluate_fn(decision, context):
# Your logic to evaluate a decision
if "risky" in decision:
return {"status": "rejected", "reason": "Too risky for production"}
return {"status": "accepted"}
# Evaluate and record a decision with tracking
result = writer.evaluate_and_record(
decision="Update database schema directly",
evaluate_fn=my_evaluate_fn,
context={"env": "production"},
input_tokens=1200,
output_tokens=400,
model="gpt-4-turbo"
)
```
### Using the CLI
GhostTrace comes with a built-in CLI to replay your agent's traces:
```bash
# Run a mock recording session
ghosttrace record --goal "Refactor auth module"
# Replay the session
ghosttrace replay <session_id>.ghost.json
# Show phantom branches (the roads not taken)
ghosttrace replay <session_id>.ghost.json --show-phantoms
```
---
## 📊 Supported Models for Cost Tracking
GhostTrace supports cost estimation for popular models including:
- **OpenAI**: `gpt-4`, `gpt-4-turbo`, `gpt-3.5-turbo`
- **Anthropic**: `claude-3-opus`, `claude-3-sonnet`, `claude-3-haiku`
- **Custom**: Default pricing available for other models.
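Cost estimation of this kind is essentially a per-token price lookup. An illustrative stand-alone sketch (the prices and helper below are hypothetical and not GhostTrace's actual pricing table or API):

```python
# Illustrative per-1K-token USD prices (hypothetical numbers, NOT
# GhostTrace's real pricing table).
MODEL_PRICES = {
    "gpt-4-turbo": (0.01, 0.03),          # (input, output) per 1K tokens
    "claude-3-haiku": (0.00025, 0.00125),
}
DEFAULT_PRICE = (0.001, 0.002)            # fallback for unknown models

def estimate_cost(model, input_tokens, output_tokens):
    """Return the estimated USD cost for one model call."""
    in_price, out_price = MODEL_PRICES.get(model, DEFAULT_PRICE)
    return input_tokens / 1000 * in_price + output_tokens / 1000 * out_price
```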
---
## 🤝 Contributing
Contributions are welcome! Feel free to open an issue or submit a pull request to help improve GhostTrace.
## 📄 License
This project is licensed under the MIT License - see the [LICENSE](LICENSE) file for details.
---
<p align="center">
Made with 👻 by <a href="https://github.com/ahmedallam222">Ahmed Allam</a>
</p>
| text/markdown | null | Ahmed Allam <ahmedallam222@gmail.com> | null | null | MIT | ai, agents, llm, tracing, observability, cost-tracking | [
"Programming Language :: Python :: 3",
"License :: OSI Approved :: MIT License",
"Operating System :: OS Independent",
"Topic :: Scientific/Engineering :: Artificial Intelligence"
] | [] | null | null | >=3.10 | [] | [] | [] | [
"typer[all]>=0.9.0",
"rich>=13.0.0"
] | [] | [] | [] | [
"Homepage, https://github.com/ahmedallam222/ghosttrace",
"Bug Tracker, https://github.com/ahmedallam222/ghosttrace/issues",
"Source Code, https://github.com/ahmedallam222/ghosttrace"
] | twine/6.2.0 CPython/3.11.0rc1 | 2026-02-19T03:25:53.013857 | ghosttrace-0.3.1.tar.gz | 9,209 | 14/78/35c55ac464be4dade84796399584a7f6b388a35b03ca1dbac1f19db37649/ghosttrace-0.3.1.tar.gz | source | sdist | null | false | 070f05d3a2595c742125248e1abdeeea | 1897af8c35e7ee6245c7ce626cde5da9b8bde4ea8c0d7e1dc3878ffb1af118fc | 147835c55ac464be4dade84796399584a7f6b388a35b03ca1dbac1f19db37649 | null | [] | 259 |
2.4 | pmtvs-biomedical | 0.0.1 | Signal analysis primitives | # pmtvs-biomedical
Part of the pmtvs signal analysis ecosystem. Coming soon.
See [pmtvs](https://pypi.org/project/pmtvs/) for the main package.
| text/markdown | null | null | null | null | MIT | null | [
"Development Status :: 1 - Planning",
"Intended Audience :: Science/Research",
"Topic :: Scientific/Engineering :: Physics",
"Programming Language :: Python :: 3"
] | [] | null | null | >=3.10 | [] | [] | [] | [] | [] | [] | [] | [] | twine/6.2.0 CPython/3.14.2 | 2026-02-19T03:25:31.165379 | pmtvs_biomedical-0.0.1.tar.gz | 1,265 | 6c/d4/835dbc23dffea6254edb66794728836289945ffc896162003c3e00fbfb25/pmtvs_biomedical-0.0.1.tar.gz | source | sdist | null | false | 60095bc3335928aec140006846807284 | ca36db7d08f7184f738adf163c33c783f24e4a2983472a91701de6b38d1380e7 | 6cd4835dbc23dffea6254edb66794728836289945ffc896162003c3e00fbfb25 | null | [] | 277 |
2.3 | sai-chat | 0.1.3 | Simple AI interface to chat with your LLM models from the terminal | # sai
Simple AI interface to chat with your Ollama models from the terminal
# Features
- [x] Pretty-print real-time responses in Markdown, using the `rich` library.
- [x] Keep conversation context.
- [x] Autodetect available models, with an option to select one.
- [x] Support for custom prompts.
- [ ] Conversation persistence (sessions).
# Requirements
An Ollama instance is required to get access to local models.
By default, the URL is set to `http://localhost:11434`.
# Install
You can install it using any package manager of your preference like `pip`,
but the recommended way is `uv tool`.
## Recommended
Using `uv`:
```shell
uv tool install sai-chat
```
# Usage
Start using it in your terminal just by running `sai` command:
```shell
luis@laptop:~ $ sai
╭───────────────────────────────────────────────────────╮
│ Welcome to Sai. Chat with your local LLM models. │
│ │
│ Available commands: │
│ │
│ • /setup : Setup Ollama URL and preferences │
│ • /model : Select a model │
│ • /roles : List and select a role │
│ • /role add : Create a new custom role │
│ • /role delete : Delete a custom role │
│ • /help : Show this help message │
│ • /quit : Exit the application │
╰───────────────────────────────────────────────────────╯
> hi
╭────────────────────────────────────── LLM Response ✔ ─╮
│ Hi there! How can I help you today? 😊 │
╰────────────────────── gemma3:1b ──────────────────────╯
>
```
# Status
This project is under development. Feel free to contribute or provide feedback! | text/markdown | null | null | null | null | null | null | [] | [] | null | null | >=3.12 | [] | [] | [] | [
"httpx>=0.28.1",
"inquirer>=3.4.1",
"python-dotenv>=1.2.1",
"rich>=14.3.2",
"tomli-w>=1.2.0"
] | [] | [] | [] | [
"Homepage, https://github.com/luisgdev/sai",
"Repository, https://github.com/luisgdev/sai",
"Issues, https://github.com/luisgdev/sai/issues",
"Documentation, https://github.com/luisgdev/sai#readme"
] | uv/0.5.13 | 2026-02-19T03:25:29.907270 | sai_chat-0.1.3.tar.gz | 6,441 | f5/9b/4b1286a951db36eb64af7cccdfa41b89d00b2702bdb9058eac67a5ca281c/sai_chat-0.1.3.tar.gz | source | sdist | null | false | 28abda0b5bb428d5e419190722c7b384 | 1060d06782b1064d3427a437f02caed31ec94063da47c4dcecc8d1c7b8eb2f54 | f59b4b1286a951db36eb64af7cccdfa41b89d00b2702bdb9058eac67a5ca281c | null | [] | 253 |
2.4 | pmtvs-industrial | 0.0.1 | Signal analysis primitives | # pmtvs-industrial
Part of the pmtvs signal analysis ecosystem. Coming soon.
See [pmtvs](https://pypi.org/project/pmtvs/) for the main package.
| text/markdown | null | null | null | null | MIT | null | [
"Development Status :: 1 - Planning",
"Intended Audience :: Science/Research",
"Topic :: Scientific/Engineering :: Physics",
"Programming Language :: Python :: 3"
] | [] | null | null | >=3.10 | [] | [] | [] | [] | [] | [] | [] | [] | twine/6.2.0 CPython/3.14.2 | 2026-02-19T03:24:26.861444 | pmtvs_industrial-0.0.1.tar.gz | 1,266 | 04/dc/e2c21fb7c152022f17bdc520852f013527f885e9e2b5f1fba580a159d160/pmtvs_industrial-0.0.1.tar.gz | source | sdist | null | false | 75f013b5f841209d1b334e5cb6f3bcc5 | 898de34f26cf955b8ee76f398d7f74a022ebf65a5a2fdb20ea1780e78bc4b189 | 04dce2c21fb7c152022f17bdc520852f013527f885e9e2b5f1fba580a159d160 | null | [] | 270 |
2.4 | MsTargetPeaker | 0.3.6 | A peak-searching procedure to identify optimal chromatographic peak regions for peak integration. | # MsTargetPeaker: a quality-aware deep reinforcement learning approach for peak identification in targeted proteomics
MsTargetPeaker incorporates a deep reinforcement learning agent and Monte Carlo tree search to locate target peak regions in targeted mass spectrometry.
The agent was trained with proximal policy optimization on a big collection of targeted MS datasets containing around 1.7M peak groups.
During the training, we established a gymnasium environment for the agent to move peak boundaries to locate target signals.
To define optimal peaks, we designed a reward function incorporating our previously developed TMSQE quality scoring.
Thus, the agent learns autonomously to find high-scoring peak regions without manually annotated peaks.
In the end, the training process took about 200M timesteps to reach a performance plateau.
The peak search in MsTargetPeaker is performed with Monte Carlo tree search guided by this agent to improve generalizability, especially on unseen datasets. To further improve precision on ambiguous peaks, additional search rounds are appended that try to locate peak regions enclosing the true target signals.
After running the peak search, the generated peak csv file can be imported into Skyline for manual re-evaluation or peak integration.
MsTargetPeaker also provides a peak reporter to generate interpretable peak quality reports.
Currently, MsTargetPeaker supports peptide MRM/PRM data.
## Installation
MsTargetPeaker was built as a Python package. You can use the following command to install the package.
```
pip install MsTargetPeaker
```
After installation, you can use the `MsTargetPeaker` and `MsTargetReporter` command-line tools for identifying peak regions and assessing peak quality.
Use `--help` or `-h` to see detailed argument descriptions.
## Input Data Format
MsTargetPeaker currently accepts chromatogram data in tab-separated value (TSV) format. This chromatogram file can be exported via **Skyline**.
The required nine column headers for chromatograms are listed as follows.
| FileName | PeptideModifiedSequence | PrecursorCharge | ProductMz | FragmentIon | ProductCharge | IsotopeLabelType | Times | Intensities|
|----------|---------------------------|------------------|-----------|-------------|---------------|------------------|-------|------------|
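Before running the peak search, it can save time to verify that an exported TSV actually carries these headers. A minimal stand-alone check (not part of MsTargetPeaker itself; standard library only):

```python
import csv

# The nine column headers MsTargetPeaker expects in a Skyline-exported TSV.
REQUIRED_COLUMNS = {
    "FileName", "PeptideModifiedSequence", "PrecursorCharge", "ProductMz",
    "FragmentIon", "ProductCharge", "IsotopeLabelType", "Times", "Intensities",
}

def check_chromatogram_columns(path):
    """Raise ValueError if the TSV header is missing any required column."""
    with open(path, newline="") as fh:
        header = next(csv.reader(fh, delimiter="\t"))
    missing = REQUIRED_COLUMNS - set(header)
    if missing:
        raise ValueError(f"missing columns: {sorted(missing)}")
    return True
```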
## Usage
Use the following command to run `MsTargetPeaker` to search peak regions in chromatograms.
```Shell
MsTargetPeaker <chromatogram_tsv> <output_peak_boundary_csv>
```
With this command, MsTargetPeaker takes the first chromatogram TSV file as input and outputs the resulting peak regions to the CSV file specified in the second argument.
The resulting peak CSV file can be imported into Skyline to update peak regions in the chromatograms.
The full arguments are shown below:
```Shell
MsTargetPeaker [-h] [--speed SPEED] [--search SEARCH] [--config CONFIG] [--picked PICKED] [--process_num PROCESS_NUM] [--internal_standard_type {heavy,light}] [--device DEVICE] chromatogram_tsv output_peak_boundary_csv
```
| Argument | | Description | Value Type |Default Values<tr><td colspan="5">**INPUT**</td></tr>
|:--------|--|:------------|------------|----------:|
|chromatogram_tsv| |The chromatogram TSV file path| File path|no default<tr><td colspan="5">**OUTPUT**</td></tr>
|output_peak_boundary_csv| |The output peak boundary CSV file path|File path|no default<tr><td colspan="5">**Options**</td></tr>
|--help| -h |Show the detailed argument list|(no value)|unset|
|--version| -v |Display the package version|(no value)|unset|
|--speed | -s |The speed mode of UltraFast (10X), Faster (5X), Fast (2X), or Standard (1X speed). This can be customized in the config file.| string |UltraFast|
|--mode | -m |The search mode using the parameter set of MRM or PRM. This can be customized in the config file.| string |MRM|
|--prescreen| -pre |Prescreen peak regions for better peak boundaries as initial state.|int|50|
|--internal_standard_type|-r|Set the internal standard reference to heavy or light ions.|{`heavy`, `light`}|heavy<tr><td colspan="5">**GROUPING**</td></tr>
| --process_num | -p | The parallel process number to search peak regions | integer | 4|
| --device | -d | Use cpu or cuda device for peak picking. | string | auto <tr><td colspan="5">**Incremental Peak Search**</td></tr>
| --picked | | The previously picked boundaries for incremental peak search. | File path | unset |
|--start_round|-sr|Specify the starting MCTS round in the config file. This is useful for incremental peak search.| int| 1|
|--end_round|-er|Specify the ending MCTS round in the config file. This is useful for incremental peak search.| int| 7|
## Incremental Peak Search
MsTargetPeaker supports incremental peak search from a previously identified peak boundary csv file (You may use the peak boundary results from Skyline or other peak identification tools).
To further reduce the search time, users can initially run with `--speed=UltraFast` for a quick result.
Then, specify `--picked={the peak csv file}` together with `--start_round=4` to resume the search with the parameters of the 4th through last MCTS rounds.
With this setting, we can re-search peak groups whose rewards failed to pass the threshold set in the config file.
## Configuration
The default configuration file is MsTargetPeaker.cfg. You may customize this file to suit your preferences.
## Quality Reporter
The reporter can be run independently to generate the following four reports:
1. Transition quality files in a folder.
2. An Excel file containing two sheets: sample quality and replicate group quality.
3. A PDF showing chromatogram plots.
4. A PDF showing the probability density functions of the peak start and end for each target.
To run the quality reporter, use the following command:
```
MsTargetReporter [-h] [--internal_standard_type {heavy,light}] [--top_n_fragment TOP_N_FRAGMENT] [--group_csv GROUP_CSV] [--output_chromatogram_pdf] [--chromatogram_dpi CHROMATOGRAM_DPI]
[--chromatogram_nrow CHROMATOGRAM_NROW] [--chromatogram_ncol CHROMATOGRAM_NCOL] [--chromatogram_fig_w CHROMATOGRAM_FIG_W] [--chromatogram_fig_h CHROMATOGRAM_FIG_H] [--output_mixed_mol] [--reorder_by_group]
chromatogram_tsv peak_boundary_csv output_folder
```
The full arguments are shown below:
| Argument | | Description | Value Type |Default Values<tr><td colspan="5">**INPUT**</td></tr>
|:--------|--|:------------|------------|----------:|
|chromatogram_tsv| |The chromatogram TSV file path| File path|no default|
|peak_boundary_csv| |The input peak boundary CSV file path|File path|no default<tr><td colspan="5">**OUTPUT**</td></tr>
|output_folder| |The output folder path|File path|no default<tr><td colspan="5">**Options**</td></tr>
|--help| -h |Show the detailed argument list|(no value)|unset|
|--group_csv|-g|The CSV file containing the replicate group information|File path|unset|
|--top_n_fragment|-n|Automatically select top N transition ions for reporting the quality|integer|5<tr><td colspan="5">**Options for Generating Chromatogram Plots**</td></tr>
|--output_chromatogram_pdf|-pdf|Set for generating chromatogram plots in a file named chromatogram_plots.pdf|File path|unset|
|--output_mixed_mol|-mix|If set, chromatogram plots for each target molecule will be mixed in one PDF page.|(no value)|unset|
|--reorder_by_group|-r|If set, target molecule will be reordered based on the replicate group. Only works if the --group_csv is provided.|(no value)|unset|
|--chromatogram_dpi|-dpi|The dpi of chromatogram plots. Only works when --output_chromatogram_pdf is set.|integer|200|
|--chromatogram_nrow|-nrow| | |
|--chromatogram_ncol|-ncol| | |
|--chromatogram_fig_w|-figw| | |
|--chromatogram_fig_h|-figh| | |
## Utility Functions
### Chromatogram Checking
We noticed that certain chromatogram TSV files exported from Skyline may have unpaired `Time` and `Intensity` arrays between light and heavy ions.
Also, since we currently rely on the modified peptide sequence and sample file name to identify each peak group,
duplicate peptide-sample names in the chromatogram data can cause issues.
We provided `MsTargetChromChecker` to solve these two issues.
For unaligned data points, we apply interpolation so that light and heavy ions share the same number of retention-time points with matching intensities.
For duplicated peak-group names, MsTargetChromChecker appends a suffix to the sample file names with the pattern `::n`, where n is a number indicating the duplicate.
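The interpolation idea can be sketched with `numpy.interp` (an illustrative stand-alone example, not MsTargetChromChecker's actual code; the array values are made up):

```python
import numpy as np

def align_to_reference(ref_times, times, intensities):
    """Resample one ion's intensity trace onto a reference retention-time
    grid by linear interpolation, so light and heavy arrays pair up 1:1."""
    return np.interp(ref_times, times, intensities)

# Heavy ion sampled on a finer grid than the light ion (made-up values).
heavy_times = np.array([0.0, 0.5, 1.0, 1.5, 2.0])
light_times = np.array([0.0, 1.0, 2.0])
light_ints = np.array([10.0, 30.0, 20.0])

# Both ions now have five matching (time, intensity) points.
light_on_heavy_grid = align_to_reference(heavy_times, light_times, light_ints)
```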
Use the following command to run `MsTargetChromChecker`,
```
MsTargetChromChecker [-h] chromatogram_tsv output_chrom_tsv
```
### Parallelism
As running `MsTargetPeaker` takes time, we provide `MsTargetChromSplitter` to split the task into smaller ones.
Each split task can be run in parallel on different processes or machines. The results from these tasks can then be merged with `MsTargetPeakMerger`.
`MsTargetChromSplitter`
```
MsTargetChromSplitter [-h] -n [number of file] chromatogram_tsv output_folder
```
If the `-n` argument is not specified, the default number of splits is the number of target molecules.
`MsTargetPeakMerger`
```
MsTargetPeakMerger [-h] input_folder output_csv_file
```
`MsTargetPeakMerger` accepts a folder containing multiple peak CSV files and outputs a merged version of those files.
You can use `MsTargetChromSplitter` to split the input chromatogram TSV file, run MsTargetPeaker in parallel on each split file to search for peak regions, and merge the resulting peak CSV files in a folder using `MsTargetPeakMerger`.
| text/markdown | null | Chi Yang <chiyang@mail.cgu.edu.tw> | null | null | null | null | [] | [] | null | null | null | [] | [] | [] | [
"numpy<2",
"scipy",
"pandas",
"xlsxwriter",
"gymnasium",
"tqdm",
"matplotlib>=3.6.3",
"torch>=1.13"
] | [] | [] | [] | [] | twine/6.1.0 CPython/3.13.1 | 2026-02-19T03:24:25.983625 | mstargetpeaker-0.3.6.tar.gz | 4,236,266 | 34/e2/773570b32f747da18955e3332986afa2bb4d7fb59b15af8bf964c279440d/mstargetpeaker-0.3.6.tar.gz | source | sdist | null | false | c3561e43fb72f26c61ceef583c1b50ec | 318acad6ae6d048fe39cb39d92020c62c0abf5c7d6ec124f5aa6457897ce9e2a | 34e2773570b32f747da18955e3332986afa2bb4d7fb59b15af8bf964c279440d | null | [
"LICENSE"
] | 0 |
2.4 | claude-code-plugins-v2 | 1.0.0 | Bundled plugins for Claude Code including Agent SDK development tools, PR review toolkit, and commit workflows | # Claude Code
 [![npm]](https://www.npmjs.com/package/@anthropic-ai/claude-code)
[npm]: https://img.shields.io/npm/v/@anthropic-ai/claude-code.svg?style=flat-square
Claude Code is an agentic coding tool that lives in your terminal, understands your codebase, and helps you code faster by executing routine tasks, explaining complex code, and handling git workflows -- all through natural language commands. Use it in your terminal, IDE, or tag @claude on GitHub.
**Learn more in the [official documentation](https://code.claude.com/docs/en/overview)**.
<img src="./demo.gif" />
## Get started
> [!NOTE]
> Installation via npm is deprecated. Use one of the recommended methods below.
For more installation options, uninstall steps, and troubleshooting, see the [setup documentation](https://code.claude.com/docs/en/setup).
1. Install Claude Code:
**MacOS/Linux (Recommended):**
```bash
curl -fsSL https://claude.ai/install.sh | bash
```
**Homebrew (MacOS/Linux):**
```bash
brew install --cask claude-code
```
**Windows (Recommended):**
```powershell
irm https://claude.ai/install.ps1 | iex
```
**WinGet (Windows):**
```powershell
winget install Anthropic.ClaudeCode
```
**NPM (Deprecated):**
```bash
npm install -g @anthropic-ai/claude-code
```
2. Navigate to your project directory and run `claude`.
## Plugins
This repository includes several Claude Code plugins that extend functionality with custom commands and agents. See the [plugins directory](./plugins/README.md) for detailed documentation on available plugins.
## Reporting Bugs
We welcome your feedback. Use the `/bug` command to report issues directly within Claude Code, or file a [GitHub issue](https://github.com/anthropics/claude-code/issues).
## Connect on Discord
Join the [Claude Developers Discord](https://anthropic.com/discord) to connect with other developers using Claude Code. Get help, share feedback, and discuss your projects with the community.
## Data collection, usage, and retention
When you use Claude Code, we collect feedback, which includes usage data (such as code acceptance or rejections), associated conversation data, and user feedback submitted via the `/bug` command.
### How we use your data
See our [data usage policies](https://code.claude.com/docs/en/data-usage).
### Privacy safeguards
We have implemented several safeguards to protect your data, including limited retention periods for sensitive information, restricted access to user session data, and clear policies against using feedback for model training.
For full details, please review our [Commercial Terms of Service](https://www.anthropic.com/legal/commercial-terms) and [Privacy Policy](https://www.anthropic.com/legal/privacy).
| text/markdown | Anthropic | Anthropic <support@anthropic.com> | null | null | MIT | claude, claude-code, plugins, ai, development | [
"Development Status :: 4 - Beta",
"Intended Audience :: Developers",
"License :: OSI Approved :: MIT License",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.8",
"Programming Language :: Python :: 3.9",
"Programming Language :: Python :: 3.10",
"Programming Language :: Pyt... | [] | https://github.com/anthropics/claude-code | null | >=3.8 | [] | [] | [] | [] | [] | [] | [] | [
"Homepage, https://github.com/anthropics/claude-code",
"Documentation, https://code.claude.com/docs",
"Repository, https://github.com/anthropics/claude-code",
"Issues, https://github.com/anthropics/claude-code/issues"
] | twine/6.2.0 CPython/3.14.3 | 2026-02-19T03:23:35.824941 | claude_code_plugins_v2-1.0.0.tar.gz | 62,024,278 | 36/4b/ab0986d419cfaca7f8ed77a02c446eb76137e590331b766f1c2824132f42/claude_code_plugins_v2-1.0.0.tar.gz | source | sdist | null | false | acf19994938576fa9f600ddd9a146c36 | 64736518287239486424d96f1d8b52bca38e2c65358cb88320b5c1a6149d11c8 | 364bab0986d419cfaca7f8ed77a02c446eb76137e590331b766f1c2824132f42 | null | [
"LICENSE.md"
] | 123 |
2.4 | pmtvs-aerospace | 0.0.1 | Signal analysis primitives | # pmtvs-aerospace
Part of the pmtvs signal analysis ecosystem. Coming soon.
See [pmtvs](https://pypi.org/project/pmtvs/) for the main package.
| text/markdown | null | null | null | null | MIT | null | [
"Development Status :: 1 - Planning",
"Intended Audience :: Science/Research",
"Topic :: Scientific/Engineering :: Physics",
"Programming Language :: Python :: 3"
] | [] | null | null | >=3.10 | [] | [] | [] | [] | [] | [] | [] | [] | twine/6.2.0 CPython/3.14.2 | 2026-02-19T03:23:22.956029 | pmtvs_aerospace-0.0.1.tar.gz | 1,261 | 39/5a/fe222b45670f843376c2aca69061a3d94137fdb49842a2ea579513ea326e/pmtvs_aerospace-0.0.1.tar.gz | source | sdist | null | false | 36025521292d31690d87ebc0d57861df | d6b095ff09399e40e33d618535a5f1c16849a5dc18d13d9122f8c4d1d73a24f6 | 395afe222b45670f843376c2aca69061a3d94137fdb49842a2ea579513ea326e | null | [] | 261 |
2.1 | cdktn-provider-cloudflare | 14.0.0 | Prebuilt cloudflare Provider for CDK Terrain (cdktn) | # CDKTN prebuilt bindings for cloudflare/cloudflare provider version 5.17.0
This repo builds and publishes the [Terraform cloudflare provider](https://registry.terraform.io/providers/cloudflare/cloudflare/5.17.0/docs) bindings for [CDK Terrain](https://cdktn.io).
## Available Packages
### NPM
The npm package is available at [https://www.npmjs.com/package/@cdktn/provider-cloudflare](https://www.npmjs.com/package/@cdktn/provider-cloudflare).
`npm install @cdktn/provider-cloudflare`
### PyPI
The PyPI package is available at [https://pypi.org/project/cdktn-provider-cloudflare](https://pypi.org/project/cdktn-provider-cloudflare).
`pipenv install cdktn-provider-cloudflare`
### Nuget
The Nuget package is available at [https://www.nuget.org/packages/Io.Cdktn.Providers.Cloudflare](https://www.nuget.org/packages/Io.Cdktn.Providers.Cloudflare).
`dotnet add package Io.Cdktn.Providers.Cloudflare`
### Maven
The Maven package is available at [https://mvnrepository.com/artifact/io.cdktn/cdktn-provider-cloudflare](https://mvnrepository.com/artifact/io.cdktn/cdktn-provider-cloudflare).
```xml
<dependency>
<groupId>io.cdktn</groupId>
<artifactId>cdktn-provider-cloudflare</artifactId>
<version>[REPLACE WITH DESIRED VERSION]</version>
</dependency>
```
### Go
The Go package is generated into the [`github.com/cdktn-io/cdktn-provider-cloudflare-go`](https://github.com/cdktn-io/cdktn-provider-cloudflare-go) package.
`go get github.com/cdktn-io/cdktn-provider-cloudflare-go/cloudflare/<version>`
Where `<version>` is the version of the prebuilt provider you would like to use, e.g. `v11`. The full module name can be found
within the [go.mod](https://github.com/cdktn-io/cdktn-provider-cloudflare-go/blob/main/cloudflare/go.mod#L1) file.
## Docs
Find auto-generated docs for this provider here:
* [Typescript](./docs/API.typescript.md)
* [Python](./docs/API.python.md)
* [Java](./docs/API.java.md)
* [C#](./docs/API.csharp.md)
* [Go](./docs/API.go.md)
You can also visit a hosted version of the documentation on [constructs.dev](https://constructs.dev/packages/@cdktn/provider-cloudflare).
## Versioning
This project is explicitly not tracking the Terraform cloudflare provider version 1:1. In fact, it always tracks `latest` of `~> 5.0` with every release. If there are scenarios where you explicitly have to pin your provider version, you can do so by [generating the provider constructs manually](https://cdktn.io/docs/concepts/providers#import-providers).
These are the upstream dependencies:
* [CDK Terrain](https://cdktn.io) - Last official release
* [Terraform cloudflare provider](https://registry.terraform.io/providers/cloudflare/cloudflare/5.17.0)
* [Terraform Engine](https://terraform.io)
If there are breaking changes (backward incompatible) in any of the above, the major version of this project will be bumped.
## Features / Issues / Bugs
Please report bugs and issues to the [CDK Terrain](https://cdktn.io) project:
* [Create bug report](https://github.com/open-constructs/cdk-terrain/issues)
* [Create feature request](https://github.com/open-constructs/cdk-terrain/issues)
## Contributing
### Projen
This repository is mostly based on [Projen](https://projen.io), which takes care of generating the entire repository.
### cdktn-provider-project based on Projen
There's a custom [project builder](https://github.com/cdktn-io/cdktn-provider-project) which encapsulates the common settings for all `cdktn` prebuilt providers.
### Provider Version
The provider version can be adjusted in [./.projenrc.js](./.projenrc.js).
### Repository Management
The repository is managed by [CDKTN Repository Manager](https://github.com/cdktn-io/cdktn-repository-manager/).
| text/markdown | CDK Terrain Maintainers | null | null | null | MPL-2.0 | null | [
"Intended Audience :: Developers",
"Operating System :: OS Independent",
"Programming Language :: JavaScript",
"Programming Language :: Python :: 3 :: Only",
"Programming Language :: Python :: 3.9",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Typing :: Typed",
... | [] | https://github.com/cdktn-io/cdktn-provider-cloudflare.git | null | ~=3.9 | [] | [] | [] | [
"cdktn<0.23.0,>=0.22.0",
"constructs<11.0.0,>=10.4.2",
"jsii<2.0.0,>=1.119.0",
"publication>=0.0.3",
"typeguard<4.3.0,>=2.13.3"
] | [] | [] | [] | [
"Source, https://github.com/cdktn-io/cdktn-provider-cloudflare.git"
] | twine/6.1.0 CPython/3.14.3 | 2026-02-19T03:23:22.638217 | cdktn_provider_cloudflare-14.0.0.tar.gz | 11,505,295 | 31/3e/ae3830fdd061cbbb0c70b1eaf99037b7c056feac550958c27352a4957972/cdktn_provider_cloudflare-14.0.0.tar.gz | source | sdist | null | false | 7cf94fb873b5104de0cf69309f580dc0 | 07119c415949d5c007179a90b086a26f7412ca36baddfd1c175f7c6f34db34f6 | 313eae3830fdd061cbbb0c70b1eaf99037b7c056feac550958c27352a4957972 | null | [] | 260 |
2.4 | nemosine-mind | 1.0.1 | MiND: A Minimal Deterministic Middleware for Auditing, Reproducing, and Logging Large Language Model Interactions in Research Workflows. | # MiND
**MiND: A Minimal Deterministic Middleware for Auditable LLM Interaction**
MiND is a lightweight, API-agnostic middleware designed to mediate interactions between users and large language models (LLMs). It provides a deterministic orchestration layer that enables structured preprocessing, routing, and logging of prompts and responses without modifying the underlying models.
The name **MiND** originally refers to *Minimal Nemosine Design*. In the context of this repository and its associated software publication, MiND is used to denote a **Minimal Deterministic Middleware** for auditable LLM interaction. This naming reflects the architectural role of the system while remaining consistent with its origin as an extracted core from the broader Nemosine framework.
Internally, the reference implementation is still called AME ("Arquitetura Mínima Executável", Portuguese for Minimal Executable Architecture). In this repository, AME corresponds directly to MiND and represents its executable reference implementation.
---
## Overview
Large Language Models (LLMs) are increasingly integrated into workflows involving sensitive data, iterative reasoning, and long-term user interaction. Despite this growing adoption, most LLM-based applications still rely on direct submission of unstructured prompts to proprietary model APIs. This interaction paradigm offers limited control over:
- Data exposure and privacy
- Traceability and auditability
- Portability of user interaction histories
- Post-hoc inspection of model behavior
MiND addresses these limitations by introducing a deterministic middleware layer positioned between users and LLMs. Instead of forwarding raw user inputs directly to a model, MiND performs structured preprocessing, routing, and logging of interactions before and after each model invocation.
MiND is explicitly non-agentic. It does not perform autonomous planning, goal formulation, multi-step reasoning, tool orchestration, or adaptive control. The middleware operates strictly as a deterministic and externally controlled interaction layer.
MiND does not implement fine-tuning, RLHF, model alignment techniques, or internal model modification. Any agent-like behavior, if desired, must be implemented externally and remains out of scope for this software.
---
## Minimal Execution
MiND is intentionally designed as a minimal deterministic middleware. A minimal execution consists of cloning the repository, installing the backend dependencies listed in `src/backend/requirements.txt`, and running the backend entry point at `src/backend/main.py`. This execution initializes the deterministic orchestration flow and validates the middleware structure. Full interaction with external LLM providers may require a valid API key configured as an environment variable, depending on the selected backend; no key is required for this minimal execution check.
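The steps above can be sketched as shell commands (the repository URL is taken from the project links; the paths are as stated above):

```shell
# Minimal execution check -- no LLM API key required.
git clone https://github.com/edersouzamelo/nemosine-10-MiND.git
cd nemosine-10-MiND
pip install -r src/backend/requirements.txt
python src/backend/main.py
```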
---
## Design Principles
MiND is built around the following core principles:
- **Determinism**
The middleware itself is non-agentic and deterministic. It does not perform autonomous planning or decision-making.
- **Externalized State**
Interaction state, context updates, and routing decisions are handled explicitly outside the LLM, rather than being embedded implicitly in conversational history.
- **Auditability**
Every interaction generates structured artifacts, enabling inspection and post-hoc analysis without access to model internals.
- **API Agnosticism**
MiND is designed to operate independently of any specific LLM provider or API.
- **Minimalism**
The framework provides a minimal executable core intended as architectural infrastructure, not as a full application or agent framework.
---
## Architecture
MiND operates by classifying incoming inputs into predefined processing modules. Each module is responsible for a restricted subset of the interaction context, such as:
- Input classification
- Context retrieval
- Prompt assembly
- Response handling and validation
By isolating these responsibilities, MiND prevents unintended cross-contamination of conversational state and makes each processing step explicit and inspectable.
During execution, MiND generates structured artifacts, including:
- JSON-based interaction logs
- Persistent records stored in a relational database (optional)
As a result, prompt construction, context updates, and response delivery become explicit steps rather than opaque side effects of conversational history managed by external platforms.
---
## Practical Capabilities
By externalizing interaction state and control logic, MiND enables several practical capabilities:
- **Auditable interaction trails**
Creation of structured logs for LLM usage without requiring access to model internals.
- **Reduced data exposure**
Limiting the information transmitted to each individual model invocation, including optional redaction or symbolic encoding of sensitive data.
- **Portability across LLM providers**
Preservation of user interaction histories when switching between different LLM APIs.
- **External behavioral constraints**
Experimentation with response formats, policies, or interaction rules without fine-tuning or modifying the underlying models.
---
## Scope and Non-Goals
MiND is designed as middleware infrastructure and deliberately avoids several common claims:
- It does **not** attempt to infer causal mechanisms inside LLMs.
- It does **not** replace model fine-tuning, RLHF, or training-based alignment methods.
- It is **not** an autonomous agent framework.
Its contribution lies in providing a controlled, inspectable, and reproducible interaction layer around existing LLMs.
---
## Example Use Case
In a research workflow involving sensitive medical data, MiND can be configured to:
- Log all interactions in structured form
- Redact patient identifiers before model invocation
- Store interaction traces locally
- Avoid reliance on provider-specific conversation histories
This enables reproducible experimentation and auditing while minimizing data exposure.
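As a self-contained illustration of this pattern (not the MiND API itself; the class name, identifier pattern, and log path below are all hypothetical), a deterministic wrapper that redacts identifiers and writes JSON interaction logs could look like:

```python
import json
import re
import time


class AuditedLLMClient:
    """Hypothetical sketch of deterministic middleware around an LLM call.

    `model_fn` is any callable prompt -> response; because the middleware is
    API-agnostic, the underlying provider is freely swappable.
    """

    # Example patient-identifier pattern; real deployments would configure this.
    ID_PATTERN = re.compile(r"\bMRN-\d+\b")

    def __init__(self, model_fn, log_path="interactions.jsonl"):
        self.model_fn = model_fn
        self.log_path = log_path

    def redact(self, text):
        # Replace identifiers before anything leaves the middleware.
        return self.ID_PATTERN.sub("[REDACTED]", text)

    def invoke(self, prompt):
        safe_prompt = self.redact(prompt)
        response = self.model_fn(safe_prompt)
        # Every invocation leaves a structured, locally stored trace.
        record = {
            "timestamp": time.time(),
            "prompt": safe_prompt,
            "response": response,
        }
        with open(self.log_path, "a") as f:
            f.write(json.dumps(record) + "\n")
        return response


client = AuditedLLMClient(model_fn=lambda p: "ok: " + p, log_path="demo.jsonl")
print(client.invoke("Summarize record MRN-12345"))
# The raw identifier never reaches the model or the log.
```

The redaction step runs before the provider call, so the sensitive token is absent from both the outbound prompt and the persisted trace.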
---
## Target Audience
MiND is intended for:
- Researchers requiring reproducible LLM experiments with controlled prompt construction
- Developers building privacy-sensitive LLM applications
- Organizations needing auditable AI interaction logs for compliance or post-hoc analysis
---
## Relationship to Nemosine
MiND originates as the minimal executable core extracted from the broader **Nemosine** cognitive architecture. While Nemosine encompasses higher-level symbolic, modular, and theoretical constructs, MiND focuses exclusively on the minimal deterministic middleware required to operationalize controlled LLM interaction.
MiND can be used independently and does not require adoption of the broader Nemosine framework.
---
## License
This project is licensed under the **GNU General Public License v3.0 (GPL-3.0)**.
---
## Author
**Edervaldo José de Souza Melo**
Independent Researcher — Brazil
---
## Status
- Executable minimal architecture
- Deterministic and non-agentic
- Auditable via structured logs
- Designed as architectural infrastructure, not a final product
| text/markdown | Edervaldo José de Souza Melo | null | null | null | GNU General Public License v3.0 only | deterministic middleware, reproducible AI, LLM auditing, logging, research software | [
"Development Status :: 4 - Beta",
"Intended Audience :: Science/Research",
"Intended Audience :: Developers",
"License :: OSI Approved :: GNU General Public License v3 (GPLv3)",
"Operating System :: OS Independent",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3 :: Only",
... | [] | null | null | >=3.9 | [] | [] | [] | [
"fastapi>=0.111",
"uvicorn[standard]>=0.30",
"pydantic>=2.7",
"python-dotenv>=1.0",
"openai>=1.40"
] | [] | [] | [] | [
"Homepage, https://github.com/edersouzamelo/nemosine-10-MiND",
"Repository, https://github.com/edersouzamelo/nemosine-10-MiND",
"Documentation, https://github.com/edersouzamelo/nemosine-10-MiND",
"Archive, https://doi.org/10.5281/zenodo.18637799"
] | twine/6.2.0 CPython/3.11.9 | 2026-02-19T03:23:09.817918 | nemosine_mind-1.0.1.tar.gz | 23,714 | fd/28/c1492436a010a26e157e48991c3a8bad58ec7d7ef56448d665cf877d59b1/nemosine_mind-1.0.1.tar.gz | source | sdist | null | false | c92e35063c09b9b657b3509080958bc8 | 4270d097387088a4514a67b33aac0f4504fa1a350907c1a7fea8530091f4749b | fd28c1492436a010a26e157e48991c3a8bad58ec7d7ef56448d665cf877d59b1 | null | [
"LICENSE"
] | 263 |
2.4 | pmtvs-automotive | 0.0.1 | Signal analysis primitives | # pmtvs-automotive
Part of the pmtvs signal analysis ecosystem. Coming soon.
See [pmtvs](https://pypi.org/project/pmtvs/) for the main package.
| text/markdown | null | null | null | null | MIT | null | [
"Development Status :: 1 - Planning",
"Intended Audience :: Science/Research",
"Topic :: Scientific/Engineering :: Physics",
"Programming Language :: Python :: 3"
] | [] | null | null | >=3.10 | [] | [] | [] | [] | [] | [] | [] | [] | twine/6.2.0 CPython/3.14.2 | 2026-02-19T03:22:18.344654 | pmtvs_automotive-0.0.1.tar.gz | 1,265 | f0/7a/19a260696b670483ce083b1f77393e5a23f4eb6850686d1eb1b19bfdff74/pmtvs_automotive-0.0.1.tar.gz | source | sdist | null | false | d24bfec3e8f46cd53033505cbeb80433 | 61c80a19dc9ed723b619317a3dbf4de45c4af1ca90bfd24022c6bcb94f93b644 | f07a19a260696b670483ce083b1f77393e5a23f4eb6850686d1eb1b19bfdff74 | null | [] | 258 |
2.4 | claude-code-plugins-v1 | 1.0.5 | Bundled plugins for Claude Code including Agent SDK development tools, PR review toolkit, and commit workflows | # Claude Code
 [![npm]](https://www.npmjs.com/package/@anthropic-ai/claude-code)
[npm]: https://img.shields.io/npm/v/@anthropic-ai/claude-code.svg?style=flat-square
Claude Code is an agentic coding tool that lives in your terminal, understands your codebase, and helps you code faster by executing routine tasks, explaining complex code, and handling git workflows -- all through natural language commands. Use it in your terminal, IDE, or tag @claude on GitHub.
**Learn more in the [official documentation](https://code.claude.com/docs/en/overview)**.
<img src="./demo.gif" />
## Get started
> [!NOTE]
> Installation via npm is deprecated. Use one of the recommended methods below.
For more installation options, uninstall steps, and troubleshooting, see the [setup documentation](https://code.claude.com/docs/en/setup).
1. Install Claude Code:
**MacOS/Linux (Recommended):**
```bash
curl -fsSL https://claude.ai/install.sh | bash
```
**Homebrew (MacOS/Linux):**
```bash
brew install --cask claude-code
```
**Windows (Recommended):**
```powershell
irm https://claude.ai/install.ps1 | iex
```
**WinGet (Windows):**
```powershell
winget install Anthropic.ClaudeCode
```
**NPM (Deprecated):**
```bash
npm install -g @anthropic-ai/claude-code
```
2. Navigate to your project directory and run `claude`.
## Plugins
This repository includes several Claude Code plugins that extend functionality with custom commands and agents. See the [plugins directory](./plugins/README.md) for detailed documentation on available plugins.
## Reporting Bugs
We welcome your feedback. Use the `/bug` command to report issues directly within Claude Code, or file a [GitHub issue](https://github.com/anthropics/claude-code/issues).
## Connect on Discord
Join the [Claude Developers Discord](https://anthropic.com/discord) to connect with other developers using Claude Code. Get help, share feedback, and discuss your projects with the community.
## Data collection, usage, and retention
When you use Claude Code, we collect feedback, which includes usage data (such as code acceptance or rejections), associated conversation data, and user feedback submitted via the `/bug` command.
### How we use your data
See our [data usage policies](https://code.claude.com/docs/en/data-usage).
### Privacy safeguards
We have implemented several safeguards to protect your data, including limited retention periods for sensitive information, restricted access to user session data, and clear policies against using feedback for model training.
For full details, please review our [Commercial Terms of Service](https://www.anthropic.com/legal/commercial-terms) and [Privacy Policy](https://www.anthropic.com/legal/privacy).
| text/markdown | Anthropic | Anthropic <support@anthropic.com> | null | null | MIT | claude, claude-code, plugins, ai, development | [
"Development Status :: 4 - Beta",
"Intended Audience :: Developers",
"License :: OSI Approved :: MIT License",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.8",
"Programming Language :: Python :: 3.9",
"Programming Language :: Python :: 3.10",
"Programming Language :: Pyt... | [] | https://github.com/anthropics/claude-code | null | >=3.8 | [] | [] | [] | [] | [] | [] | [] | [
"Homepage, https://github.com/anthropics/claude-code",
"Documentation, https://code.claude.com/docs",
"Repository, https://github.com/anthropics/claude-code",
"Issues, https://github.com/anthropics/claude-code/issues"
] | twine/6.2.0 CPython/3.14.3 | 2026-02-19T03:21:47.215889 | claude_code_plugins_v1-1.0.5-py3-none-any.whl | 62,016,657 | 4d/77/8227998c1c31df9e7d839e5100457ca9274cfb3a836fdce372cf179d5911/claude_code_plugins_v1-1.0.5-py3-none-any.whl | py3 | bdist_wheel | null | false | 6e1f5e4ac97ee7217110c9e0de5631b7 | 55ecc4c1c56a24bdcdc039dba309f91ca227ce42911cf940eac71f2ff3e38e74 | 4d778227998c1c31df9e7d839e5100457ca9274cfb3a836fdce372cf179d5911 | null | [
"LICENSE.md"
] | 112 |
2.4 | pmtvs-memory | 0.0.1 | Signal analysis primitives | # pmtvs-memory
Part of the pmtvs signal analysis ecosystem. Coming soon.
See [pmtvs](https://pypi.org/project/pmtvs/) for the main package.
| text/markdown | null | null | null | null | MIT | null | [
"Development Status :: 1 - Planning",
"Intended Audience :: Science/Research",
"Topic :: Scientific/Engineering :: Physics",
"Programming Language :: Python :: 3"
] | [] | null | null | >=3.10 | [] | [] | [] | [] | [] | [] | [] | [] | twine/6.2.0 CPython/3.14.2 | 2026-02-19T03:21:14.137910 | pmtvs_memory-0.0.1.tar.gz | 1,252 | 80/0b/159b0d50e48884326475436acbdfa7bdcbf66b843b3e0e6ede0c48f68731/pmtvs_memory-0.0.1.tar.gz | source | sdist | null | false | e90251c46f1156369d8a0a913fe61866 | 8810e34da364041fc7b5d1c09689b99884840347730d2bc1f87dfb12841d4b10 | 800b159b0d50e48884326475436acbdfa7bdcbf66b843b3e0e6ede0c48f68731 | null | [] | 259 |
2.4 | dd-embed | 0.1.0 | Shared embedding model abstraction layer for Digital Duck projects | # dd-embed
Shared embedding model abstraction layer for Digital Duck projects.
Extracted from semanscope and maniscope. Zero heavy deps in core (only numpy).
Adapters lazy-import their SDKs only when used.
## Install
```bash
pip install dd-embed # numpy only
pip install "dd-embed[sentence-transformers]" # + sentence-transformers
pip install "dd-embed[openai]" # + OpenAI SDK (also covers openrouter)
pip install "dd-embed[voyageai]" # + Voyage AI SDK
pip install "dd-embed[gemini]" # + Google GenAI SDK
pip install "dd-embed[all]" # all provider SDKs
```
## Quick Start
```python
from dd_embed import embed
# Using sentence-transformers (local, free)
embeddings = embed(["hello", "world"], provider="sentence_transformers",
model_name="all-MiniLM-L6-v2")
print(embeddings.shape) # (2, 384)
# Using OpenAI
embeddings = embed(["hello"], provider="openai", api_key="sk-...")
# Using Ollama (local)
embeddings = embed(["hello"], provider="ollama", model_name="bge-m3")
```
## Built-in Adapters
| Name | Class | SDK | Notes |
|------|-------|-----|-------|
| `sentence_transformers` | `SentenceTransformerAdapter` | `sentence-transformers` | Local, free, used by maniscope |
| `huggingface` | `HuggingFaceAdapter` | `transformers` + `torch` | AutoModel + mean pooling, E5/Qwen support |
| `ollama` | `OllamaEmbedAdapter` | `requests` | Local Ollama server |
| `openai` | `OpenAIEmbedAdapter` | `openai` | OpenAI embeddings API |
| `openrouter` | `OpenAIEmbedAdapter` (configured) | `openai` | OpenAI-compat endpoint |
| `gemini` | `GeminiEmbedAdapter` | `google-generativeai` | Google Gemini embeddings |
| `voyage` | `VoyageEmbedAdapter` | `voyageai` | Voyage AI embeddings |
## Embedding Cache
Disk-persistent, per-word granular cache (ported from semanscope):
```python
from dd_embed import EmbeddingCache, get_adapter
cache = EmbeddingCache() # default: ~/projects/embedding_cache/dd_embed/master.pkl
adapter = get_adapter("sentence_transformers", model_name="all-MiniLM-L6-v2")
embeddings, cached, computed = cache.get_embeddings(
texts=["apple", "banana", "cherry"],
model_name="all-MiniLM-L6-v2",
scope="en",
embed_fn=lambda texts: adapter.embed(texts).embeddings,
)
print(f"Cached: {cached}, Computed: {computed}")
cache.save()
```
## Custom Adapters
```python
from dd_embed import EmbeddingAdapter, EmbeddingResult, register_adapter, embed
import numpy as np
class MyAdapter(EmbeddingAdapter):
def embed(self, texts, **kwargs):
vecs = np.random.randn(len(texts), 128) # your logic here
return EmbeddingResult(
embeddings=vecs, success=True, provider="my_api",
model="v1", dimensions=128, num_texts=len(texts),
)
register_adapter("my_api", MyAdapter)
result = embed(["hello"], provider="my_api")
```
## Environment Variables
| Variable | Description | Default |
|----------|-------------|---------|
| `OPENAI_API_KEY` | OpenAI API key | -- |
| `OPENROUTER_API_KEY` | OpenRouter API key | -- |
| `GEMINI_API_KEY` | Google Gemini API key | -- |
| `VOYAGE_API_KEY` | Voyage AI API key | -- |
| `OLLAMA_HOST` | Ollama server URL | `http://localhost:11434` |
## License
MIT
| text/markdown | null | Digital Duck <p2p2learn@outlook.com> | null | null | null | embedding, digital-duck, ai, sentence-transformers, nlp | [
"Development Status :: 3 - Alpha",
"Intended Audience :: Developers",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Topic :: Scientific/Engineering :: Artificial Intelligence"
] | [] | null | null | >=3.10 | [] | [] | [] | [
"numpy>=1.24.0",
"sentence-transformers>=2.0.0; extra == \"sentence-transformers\"",
"transformers>=4.30.0; extra == \"transformers\"",
"torch>=2.0.0; extra == \"transformers\"",
"openai>=1.0.0; extra == \"openai\"",
"voyageai>=0.1.0; extra == \"voyageai\"",
"google-generativeai>=0.3.0; extra == \"gemin... | [] | [] | [] | [
"Homepage, https://github.com/digital-duck/dd-embed",
"Repository, https://github.com/digital-duck/dd-embed",
"Issues, https://github.com/digital-duck/dd-embed/issues"
] | twine/6.2.0 CPython/3.11.5 | 2026-02-19T03:20:38.068462 | dd_embed-0.1.0.tar.gz | 14,653 | 26/5e/3fa85b67e5b831c6eec1e340f2ac3741dcda21989547088cd181c6e3b4f4/dd_embed-0.1.0.tar.gz | source | sdist | null | false | e888e2fe5892a627355f0fb782e649e7 | 7f39ef28fb5f3226c18746cdf608d9af4b319c669a1d1f117593851781148525 | 265e3fa85b67e5b831c6eec1e340f2ac3741dcda21989547088cd181c6e3b4f4 | MIT | [
"LICENSE"
] | 301 |
2.4 | pmtvs-continuity | 0.0.1 | Signal analysis primitives | # pmtvs-continuity
Part of the pmtvs signal analysis ecosystem. Coming soon.
See [pmtvs](https://pypi.org/project/pmtvs/) for the main package.
| text/markdown | null | null | null | null | MIT | null | [
"Development Status :: 1 - Planning",
"Intended Audience :: Science/Research",
"Topic :: Scientific/Engineering :: Physics",
"Programming Language :: Python :: 3"
] | [] | null | null | >=3.10 | [] | [] | [] | [] | [] | [] | [] | [] | twine/6.2.0 CPython/3.14.2 | 2026-02-19T03:20:09.624971 | pmtvs_continuity-0.0.1.tar.gz | 1,277 | 9e/f3/4085f382555875d3b3a25904b5ffb953a244c3fb80972ca1a8a442d4b6f5/pmtvs_continuity-0.0.1.tar.gz | source | sdist | null | false | 62472220d1427d73ff513ee8a5a6f5a4 | fd58485e62d7da3bde8bb717ef34034988cf8a99202deab9e783d1b488f53ac5 | 9ef34085f382555875d3b3a25904b5ffb953a244c3fb80972ca1a8a442d4b6f5 | null | [] | 260 |
2.4 | zeus | 0.14.0 | A framework for deep learning energy measurement and optimization. | <div align="center">
<picture>
<source media="(prefers-color-scheme: dark)" srcset="https://raw.githubusercontent.com/ml-energy/zeus/master/docs/assets/img/logo_dark.svg">
<source media="(prefers-color-scheme: light)" srcset="https://raw.githubusercontent.com/ml-energy/zeus/master/docs/assets/img/logo_light.svg">
<img alt="Zeus logo" width="55%" src="https://raw.githubusercontent.com/ml-energy/zeus/master/docs/assets/img/logo_light.svg">
</picture>
<h1>Deep Learning Energy Measurement and Optimization</h1>
[](https://join.slack.com/t/zeus-ml/shared_invite/zt-36fl1m7qa-Ihky6FbfxLtobx40hMj3VA)
[](https://hub.docker.com/r/symbioticlab/zeus)
[](https://ml.energy/zeus)
[](/LICENSE)
</div>
---
**Project News** ⚡
- \[2025/12\] With NVIDIA, Google, and Meta, we led a NeurIPS 25 tutorial on [Energy and Power as First‑Class ML Design Metrics](https://ml.energy/tutorials/neurips25/)!
- \[2025/12\] [The ML.ENERGY leaderboard](https://ml.energy/leaderboard) got a major upgrade to v3. Read our in-depth [technical analysis blog post](https://ml.energy/blog/measurement/energy/diagnosing-inference-energy-consumption-with-the-mlenergy-leaderboard-v30/).
- \[2025/09\] We shared our experience and design philosophy for [The ML.ENERGY Benchmark](https://github.com/ml-energy/benchmark) in our [NeurIPS 25 D&B Spotlight paper](https://neurips.cc/virtual/2025/loc/san-diego/poster/121781).
- \[2025/05\] Zeus now supports CPU, DRAM, AMD GPU, Apple Silicon, and NVIDIA Jetson platform energy measurement!
- \[2024/11\] Perseus, an optimizer for large model training, appeared at SOSP'24! [Paper](https://dl.acm.org/doi/10.1145/3694715.3695970) | [Blog](https://ml.energy/zeus/research_overview/perseus) | [Optimizer](https://ml.energy/zeus/optimize/pipeline_frequency_optimizer)
- \[2024/05\] Zeus is now a PyTorch ecosystem project. Read the PyTorch blog post [here](https://pytorch.org/blog/zeus/)!
- \[2024/02\] Zeus was selected as a [2024 Mozilla Technology Fund awardee](https://foundation.mozilla.org/en/blog/open-source-AI-for-environmental-justice/)!
---
Zeus is a library for (1) [**measuring**](https://ml.energy/zeus/measure) the energy consumption of Deep Learning workloads and (2) [**optimizing**](https://ml.energy/zeus/optimize) their energy consumption.
Zeus is part of [The ML.ENERGY Initiative](https://ml.energy).
## Repository Organization
```
zeus/
├── zeus/ # ⚡ Zeus Python package
│ ├── monitor/ # - Energy and power measurement (programmatic & CLI)
│ ├── optimizer/ # - Collection of time and energy optimizers
│ ├── device/ # - Abstraction layer over CPU and GPU devices
│ ├── utils/ # - Utility functions and classes
│ ├── _legacy/ # - Legacy code to keep our research papers reproducible
│ ├── metric.py # - Prometheus metric export support
│ ├── show_env.py # - Installation & device detection verification script
│ └── callback.py # - Base class for callbacks during training
│
├── zeusd # 🌩️ Zeus daemon
│
├── docker/ # 🐳 Dockerfiles and Docker Compose files
│
└── examples/ # 🛠️ Zeus usage examples
```
## Getting Started
Please refer to our [Getting Started](https://ml.energy/zeus/getting_started) page.
After that, you might look at
- [Measuring Energy](https://ml.energy/zeus/measure)
- [Optimizing Energy](https://ml.energy/zeus/optimize)
### Docker image
We provide a Docker image fully equipped with all dependencies and environments.
Refer to our [Docker Hub repository](https://hub.docker.com/r/mlenergy/zeus) and [`Dockerfile`](docker/Dockerfile).
### Examples
We provide working examples for integrating and running Zeus in the [`examples/`](/examples) directory.
## Research
Zeus is rooted in multiple research papers.
Even more research is ongoing, and Zeus will continue to expand and improve.
1. Zeus (NSDI 23): [Paper](https://www.usenix.org/conference/nsdi23/presentation/you) | [Blog](https://ml.energy/zeus/research_overview/zeus) | [Slides](https://www.usenix.org/system/files/nsdi23_slides_chung.pdf)
1. Chase (ICLR Workshop 23): [Paper](https://arxiv.org/abs/2303.02508)
1. Perseus (SOSP 24): [Paper](https://arxiv.org/abs/2312.06902) | [Blog](https://ml.energy/zeus/research_overview/perseus) | [Slides](https://jaewonchung.me/pdf.js/web/viewer.html?file=/assets/attachments/pubs/Perseus_slides.pdf#pagemode=none)
1. The ML.ENERGY Benchmark (NeurIPS 25 D&B Spotlight): [Paper](https://arxiv.org/abs/2505.06371) | [Repository](https://github.com/ml-energy/benchmark) | [Leaderboard](https://ml.energy/leaderboard)
1. Where Do the Joules Go? Diagnosing Inference Energy Consumption: [ArXiv](https://arxiv.org/abs/2601.22076) | [Blog](https://ml.energy/blog/measurement/energy/diagnosing-inference-energy-consumption-with-the-mlenergy-leaderboard-v30/)
If you find Zeus relevant to your research, please consider citing:
```bibtex
@inproceedings{zeus-nsdi23,
title = {Zeus: Understanding and Optimizing {GPU} Energy Consumption of {DNN} Training},
author = {Jie You and Jae-Won Chung and Mosharaf Chowdhury},
booktitle = {USENIX NSDI},
year = {2023}
}
```
## Other Resources
1. Energy-Efficient Deep Learning with PyTorch and Zeus (PyTorch conference 2023): [Recording](https://youtu.be/veM3x9Lhw2A) | [Slides](https://ml.energy/assets/attachments/pytorch_conf_2023_slides.pdf)
## Contact
Jae-Won Chung (jwnchung@umich.edu)
## Newsletter
Subscribe to the [ML.ENERGY newsletter](https://buttondown.com/ml-energy) for the latest news on Zeus and other projects by the ML.ENERGY Initiative.
| text/markdown | Zeus Team | null | null | null | null | deep-learning, power, energy, mlsys | [
"Environment :: GPU :: NVIDIA CUDA",
"Topic :: Scientific/Engineering :: Artificial Intelligence",
"Programming Language :: Python :: 3.9",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12"
] | [] | null | null | >=3.9 | [] | [] | [] | [
"numpy",
"pandas",
"scikit-learn",
"nvidia-ml-py",
"pydantic",
"rich",
"tyro",
"httpx",
"amdsmi",
"python-dateutil",
"pydantic<2; extra == \"pfo\"",
"fastapi[standard]; extra == \"pfo-server\"",
"pydantic<2; extra == \"pfo-server\"",
"lowtime; extra == \"pfo-server\"",
"aiofiles; extra =... | [] | [] | [] | [
"Repository, https://github.com/ml-energy/zeus",
"Homepage, https://ml.energy/zeus",
"Documentation, https://ml.energy/zeus"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-19T03:19:39.554181 | zeus-0.14.0.tar.gz | 211,285 | 4b/80/d7ca989c16af40c5b05c5fd85cfe52231deb5441060eb6e9ca5d7d9b4c5c/zeus-0.14.0.tar.gz | source | sdist | null | false | 872e1531cae48787bc62dff086e7c23e | 090772b220a525b8edabcdd7fb5b7edd54e3f8951eba8ef89c9714b55c5e8aa8 | 4b80d7ca989c16af40c5b05c5fd85cfe52231deb5441060eb6e9ca5d7d9b4c5c | Apache-2.0 | [
"LICENSE"
] | 334 |
2.4 | pmtvs-derivatives | 0.0.1 | Signal analysis primitives | # pmtvs-derivatives
Part of the pmtvs signal analysis ecosystem. Coming soon.
See [pmtvs](https://pypi.org/project/pmtvs/) for the main package.
| text/markdown | null | null | null | null | MIT | null | [
"Development Status :: 1 - Planning",
"Intended Audience :: Science/Research",
"Topic :: Scientific/Engineering :: Physics",
"Programming Language :: Python :: 3"
] | [] | null | null | >=3.10 | [] | [] | [] | [] | [] | [] | [] | [] | twine/6.2.0 CPython/3.14.2 | 2026-02-19T03:19:05.805727 | pmtvs_derivatives-0.0.1.tar.gz | 1,256 | 91/6a/8c14e6175879b1647f75c0da785eeb364b1a75b32fb81ec0fef6e850696f/pmtvs_derivatives-0.0.1.tar.gz | source | sdist | null | false | 04247a1d6d762f05d9bd355cdbf6e7c7 | 9147bb731797a8ffe8cc0490f777c6acabe3b3ec17718bfe74eda4af3cbeb735 | 916a8c14e6175879b1647f75c0da785eeb364b1a75b32fb81ec0fef6e850696f | null | [] | 249 |
2.4 | torch-quickfps-cu130 | 2.1.0 | PyTorch bucket-based farthest point sampling (CPU + CUDA). | # PyTorch QuickFPS
Efficient **farthest point sampling (FPS)** for PyTorch, adapted from [fpsample](https://github.com/leonardodalinky/fpsample).
This project provides bucket-based FPS on both CPU and GPU. The GPU path is optimized for high-dimensional sampling (e.g., feature embeddings).
---
## Installation
### 1) Install PyTorch (required)
Install PyTorch using the official instructions for your platform/CUDA:
* https://pytorch.org/get-started/locally/
### 2) Install `torch_quickfps`
#### Option A: prebuilt wheels from pip
```bash
# CPU-only
pip install torch-quickfps
# CUDA 12.8
pip install torch-quickfps-cu128
# CUDA 13.0
pip install torch-quickfps-cu130
```
Notes:
* The CUDA wheel you choose should match the CUDA-enabled PyTorch you installed (e.g., cu128 wheel with a cu128 PyTorch build).
#### Option B: install from source (GitHub)
```bash
pip install --no-build-isolation git+https://github.com/Astro-85/torch_quickfps
```
---
## Usage
```python
import torch
import torch_quickfps
x = torch.rand(64, 2048, 256)
# Random sample
sampled_points, indices = torch_quickfps.sample(x, 1024)
# Random sample with specific tree height
sampled_points, indices = torch_quickfps.sample(x, 1024, h=3)
# Random sample with start point index (int)
sampled_points, indices = torch_quickfps.sample(x, 1024, start_idx=0)
# For high-dimensional embeddings on CUDA, set low_d for faster bucketing
sampled_points, indices = torch_quickfps.sample(x, 1024, h=8, low_d=8)
# Indices-only
indices = torch_quickfps.sample(x, 1024, return_points=False)
# (equivalently)
indices = torch_quickfps.sample_idx(x, 1024)
# Masked sampling: only sample from valid points (mask shape [B, N])
mask = torch.ones(x.shape[:-1], dtype=torch.bool)
mask[:, 1000:] = False # e.g. padding
sampled_points, indices = torch_quickfps.sample(x, 512, mask=mask)
print(sampled_points.size(), indices.size())
# torch.Size([64, 1024, 256]) torch.Size([64, 1024])
```
---
## Performance comparison
Comparison includes CPU, a vanilla GPU FPS baseline, and our bucketed GPU implementation.
* **N**: number of input points
* **D**: point dimension
* **K**: number of sampled points
* **CPU vs GPU (bucketed)**: `CPU_ms / GPU_bucketed_ms`
* **GPU baseline vs bucketed**: `GPU_baseline_ms / GPU_bucketed_ms`
| N | D | K | CPU (ms) | GPU baseline (ms) | GPU bucketed (ms) | CPU vs GPU (bucketed) | GPU baseline vs bucketed |
| ----: | ---: | ---: | --------: | ----------------: | ----------------: | --------------------: | -----------------------: |
| 1000 | 8 | 250 | 0.271 | 0.404 | 2.671 | 0.10x | 0.15x |
| 1000 | 1024 | 250 | 69.697 | 94.144 | 4.867 | 14.32x | 19.34x |
| 1000 | 4096 | 250 | 248.521 | 378.458 | 10.614 | 23.41x | 35.65x |
| 2000 | 8 | 500 | 1.578 | 1.299 | 5.432 | 0.29x | 0.24x |
| 2000 | 1024 | 500 | 213.804 | 399.292 | 11.018 | 19.41x | 36.24x |
| 2000 | 4096 | 500 | 869.318 | 1585.913 | 33.974 | 25.59x | 46.68x |
| 5000 | 8 | 1250 | 6.151 | 7.156 | 16.970 | 0.36x | 0.42x |
| 5000 | 1024 | 1250 | 1075.742 | 2483.299 | 47.459 | 22.67x | 52.33x |
| 5000 | 4096 | 1250 | 4547.318 | 10027.665 | 154.874 | 29.36x | 64.75x |
| 10000 | 8 | 2500 | 22.135 | 26.152 | 43.379 | 0.51x | 0.60x |
| 10000 | 1024 | 2500 | 4503.257 | 9959.041 | 186.622 | 24.13x | 53.36x |
| 10000 | 4096 | 2500 | 21699.598 | 40439.047 | 645.883 | 33.60x | 62.61x |
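To make the "baseline" column concrete, here is an illustrative pure-Python farthest point sampling with the O(N·K·D) cost profile the table reflects. This is a hypothetical sketch for explanation only, not this package's implementation (which is compiled and bucketed):

```python
def naive_fps(points, k, start_idx=0):
    """Illustrative O(N*K*D) farthest point sampling in pure Python.

    NOT the package's implementation -- just a sketch of what a non-bucketed
    baseline computes: each round scans all N points to update the distance
    to the nearest selected point, then picks the farthest one.
    """
    n = len(points)
    selected = [start_idx]
    # min_dist[i] = squared distance from point i to its nearest selected point
    min_dist = [float("inf")] * n
    for _ in range(k - 1):
        last = points[selected[-1]]
        for i, p in enumerate(points):
            d = sum((a - b) ** 2 for a, b in zip(p, last))
            if d < min_dist[i]:
                min_dist[i] = d
        # the next sample is the point farthest from the selected set
        selected.append(max(range(n), key=lambda i: min_dist[i]))
    return selected

pts = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0), (10.0, 10.0), (0.5, 0.5)]
print(naive_fps(pts, 3))  # -> [0, 3, 1]
```

Because every round touches all N points in all D dimensions, cost grows with D — which is why the bucketed GPU path pulls further ahead as D increases in the table above.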
---
## Reference
Bucket-based FPS (QuickFPS) is proposed in the following paper:
```bibtex
@article{han2023quickfps,
title={QuickFPS: Architecture and Algorithm Co-Design for Farthest Point Sampling in Large-Scale Point Clouds},
author={Han, Meng and Wang, Liang and Xiao, Limin and Zhang, Hao and Zhang, Chenhao and Xu, Xiangrong and Zhu, Jianfeng},
journal={IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems},
year={2023},
publisher={IEEE}
}
```
Thanks to the authors for their great work.
| text/markdown | Andrew Lu | alu1@seas.upenn.edu | null | null | MIT | pytorch, farthest, furthest, sampling, sample, fps, quickfps | [] | [] | https://github.com/Astro-85/torch_quickfps | null | >=3.10 | [] | [] | [] | [
"torch>=2.10"
] | [] | [] | [] | [] | twine/6.2.0 CPython/3.11.14 | 2026-02-19T03:18:04.459796 | torch_quickfps_cu130-2.1.0-cp310-abi3-manylinux_2_28_x86_64.whl | 2,105,600 | 53/17/ab1d3f00cb6e17237a23ea7c5a741183ae3212608a4d955c380a8916bf67/torch_quickfps_cu130-2.1.0-cp310-abi3-manylinux_2_28_x86_64.whl | cp310 | bdist_wheel | null | false | 3ee0857e2b2ce01040cf6ba3378987d7 | d390148c08543fe0fca560eea7490df1f6d6000d44c5e28af83018e9a43b2465 | 5317ab1d3f00cb6e17237a23ea7c5a741183ae3212608a4d955c380a8916bf67 | null | [
"LICENSE"
] | 102 |
2.4 | torch-quickfps-cu128 | 2.1.0 | PyTorch bucket-based farthest point sampling (CPU + CUDA). | # PyTorch QuickFPS
Efficient **farthest point sampling (FPS)** for PyTorch, adapted from [fpsample](https://github.com/leonardodalinky/fpsample).
This project provides bucket-based FPS on both CPU and GPU. The GPU path is optimized for high-dimensional sampling (e.g., feature embeddings).
---
## Installation
### 1) Install PyTorch (required)
Install PyTorch using the official instructions for your platform/CUDA:
* https://pytorch.org/get-started/locally/
### 2) Install `torch_quickfps`
#### Option A: prebuilt wheels from pip
```bash
# CPU-only
pip install torch-quickfps
# CUDA 12.8
pip install torch-quickfps-cu128
# CUDA 13.0
pip install torch-quickfps-cu130
```
Notes:
* The CUDA wheel you choose should match the CUDA-enabled PyTorch you installed (e.g., cu128 wheel with a cu128 PyTorch build).
#### Option B: install from source (GitHub)
```bash
pip install --no-build-isolation git+https://github.com/Astro-85/torch_quickfps
```
---
## Usage
```python
import torch
import torch_quickfps
x = torch.rand(64, 2048, 256)
# Random sample
sampled_points, indices = torch_quickfps.sample(x, 1024)
# Random sample with specific tree height
sampled_points, indices = torch_quickfps.sample(x, 1024, h=3)
# Random sample with start point index (int)
sampled_points, indices = torch_quickfps.sample(x, 1024, start_idx=0)
# For high-dimensional embeddings on CUDA, set low_d for faster bucketing
sampled_points, indices = torch_quickfps.sample(x, 1024, h=8, low_d=8)
# Indices-only
indices = torch_quickfps.sample(x, 1024, return_points=False)
# (equivalently)
indices = torch_quickfps.sample_idx(x, 1024)
# Masked sampling: only sample from valid points (mask shape [B, N])
mask = torch.ones(x.shape[:-1], dtype=torch.bool)
mask[:, 1000:] = False # e.g. padding
sampled_points, indices = torch_quickfps.sample(x, 512, mask=mask)
print(sampled_points.size(), indices.size())
# torch.Size([64, 1024, 256]) torch.Size([64, 1024])
```
---
## Performance comparison
Comparison includes CPU, a vanilla GPU FPS baseline, and our bucketed GPU implementation.
* **N**: number of input points
* **D**: point dimension
* **K**: number of sampled points
* **CPU vs GPU (bucketed)**: `CPU_ms / GPU_bucketed_ms`
* **GPU baseline vs bucketed**: `GPU_baseline_ms / GPU_bucketed_ms`
| N | D | K | CPU (ms) | GPU baseline (ms) | GPU bucketed (ms) | CPU vs GPU (bucketed) | GPU baseline vs bucketed |
| ----: | ---: | ---: | --------: | ----------------: | ----------------: | --------------------: | -----------------------: |
| 1000 | 8 | 250 | 0.271 | 0.404 | 2.671 | 0.10x | 0.15x |
| 1000 | 1024 | 250 | 69.697 | 94.144 | 4.867 | 14.32x | 19.34x |
| 1000 | 4096 | 250 | 248.521 | 378.458 | 10.614 | 23.41x | 35.65x |
| 2000 | 8 | 500 | 1.578 | 1.299 | 5.432 | 0.29x | 0.24x |
| 2000 | 1024 | 500 | 213.804 | 399.292 | 11.018 | 19.41x | 36.24x |
| 2000 | 4096 | 500 | 869.318 | 1585.913 | 33.974 | 25.59x | 46.68x |
| 5000 | 8 | 1250 | 6.151 | 7.156 | 16.970 | 0.36x | 0.42x |
| 5000 | 1024 | 1250 | 1075.742 | 2483.299 | 47.459 | 22.67x | 52.33x |
| 5000 | 4096 | 1250 | 4547.318 | 10027.665 | 154.874 | 29.36x | 64.75x |
| 10000 | 8 | 2500 | 22.135 | 26.152 | 43.379 | 0.51x | 0.60x |
| 10000 | 1024 | 2500 | 4503.257 | 9959.041 | 186.622 | 24.13x | 53.36x |
| 10000 | 4096 | 2500 | 21699.598 | 40439.047 | 645.883 | 33.60x | 62.61x |
---
## Reference
Bucket-based FPS (QuickFPS) is proposed in the following paper:
```bibtex
@article{han2023quickfps,
title={QuickFPS: Architecture and Algorithm Co-Design for Farthest Point Sampling in Large-Scale Point Clouds},
author={Han, Meng and Wang, Liang and Xiao, Limin and Zhang, Hao and Zhang, Chenhao and Xu, Xiangrong and Zhu, Jianfeng},
journal={IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems},
year={2023},
publisher={IEEE}
}
```
Thanks to the authors for their great work.
| text/markdown | Andrew Lu | alu1@seas.upenn.edu | null | null | MIT | pytorch, farthest, furthest, sampling, sample, fps, quickfps | [] | [] | https://github.com/Astro-85/torch_quickfps | null | >=3.10 | [] | [] | [] | [
"torch>=2.10"
] | [] | [] | [] | [] | twine/6.2.0 CPython/3.11.14 | 2026-02-19T03:18:02.256497 | torch_quickfps_cu128-2.1.0-cp310-abi3-manylinux_2_28_x86_64.whl | 2,290,873 | 4a/35/30f5acbc372463606fb603c629a54bb36e05425fede169f5e5245c1075f0/torch_quickfps_cu128-2.1.0-cp310-abi3-manylinux_2_28_x86_64.whl | cp310 | bdist_wheel | null | false | 4054b2e4eacd0a6342ba23be6a572772 | 7ca79e4d76d5ff7d06906282661b28853d2f8cab1a8e3d541652827ef9e5ad6e | 4a3530f5acbc372463606fb603c629a54bb36e05425fede169f5e5245c1075f0 | null | [
"LICENSE"
] | 106 |
2.4 | torch-quickfps | 2.1.0 | PyTorch bucket-based farthest point sampling (CPU + CUDA). | # PyTorch QuickFPS
Efficient **farthest point sampling (FPS)** for PyTorch, adapted from [fpsample](https://github.com/leonardodalinky/fpsample).
This project provides bucket-based FPS on both CPU and GPU. The GPU path is optimized for high-dimensional sampling (e.g., feature embeddings).
---
## Installation
### 1) Install PyTorch (required)
Install PyTorch using the official instructions for your platform/CUDA:
* https://pytorch.org/get-started/locally/
### 2) Install `torch_quickfps`
#### Option A: prebuilt wheels from pip
```bash
# CPU-only
pip install torch-quickfps
# CUDA 12.8
pip install torch-quickfps-cu128
# CUDA 13.0
pip install torch-quickfps-cu130
```
Notes:
* The CUDA wheel you choose should match the CUDA-enabled PyTorch you installed (e.g., cu128 wheel with a cu128 PyTorch build).
#### Option B: install from source (GitHub)
```bash
pip install --no-build-isolation git+https://github.com/Astro-85/torch_quickfps
```
---
## Usage
```python
import torch
import torch_quickfps
x = torch.rand(64, 2048, 256)
# Random sample
sampled_points, indices = torch_quickfps.sample(x, 1024)
# Random sample with specific tree height
sampled_points, indices = torch_quickfps.sample(x, 1024, h=3)
# Random sample with start point index (int)
sampled_points, indices = torch_quickfps.sample(x, 1024, start_idx=0)
# For high-dimensional embeddings on CUDA, set low_d for faster bucketing
sampled_points, indices = torch_quickfps.sample(x, 1024, h=8, low_d=8)
# Indices-only
indices = torch_quickfps.sample(x, 1024, return_points=False)
# (equivalently)
indices = torch_quickfps.sample_idx(x, 1024)
# Masked sampling: only sample from valid points (mask shape [B, N])
mask = torch.ones(x.shape[:-1], dtype=torch.bool)
mask[:, 1000:] = False # e.g. padding
sampled_points, indices = torch_quickfps.sample(x, 512, mask=mask)
print(sampled_points.size(), indices.size())
# torch.Size([64, 1024, 256]) torch.Size([64, 1024])
```
---
## Performance comparison
Comparison includes CPU, a vanilla GPU FPS baseline, and our bucketed GPU implementation.
* **N**: number of input points
* **D**: point dimension
* **K**: number of sampled points
* **CPU vs GPU (bucketed)**: `CPU_ms / GPU_bucketed_ms`
* **GPU baseline vs bucketed**: `GPU_baseline_ms / GPU_bucketed_ms`
| N | D | K | CPU (ms) | GPU baseline (ms) | GPU bucketed (ms) | CPU vs GPU (bucketed) | GPU baseline vs bucketed |
| ----: | ---: | ---: | --------: | ----------------: | ----------------: | --------------------: | -----------------------: |
| 1000 | 8 | 250 | 0.271 | 0.404 | 2.671 | 0.10x | 0.15x |
| 1000 | 1024 | 250 | 69.697 | 94.144 | 4.867 | 14.32x | 19.34x |
| 1000 | 4096 | 250 | 248.521 | 378.458 | 10.614 | 23.41x | 35.65x |
| 2000 | 8 | 500 | 1.578 | 1.299 | 5.432 | 0.29x | 0.24x |
| 2000 | 1024 | 500 | 213.804 | 399.292 | 11.018 | 19.41x | 36.24x |
| 2000 | 4096 | 500 | 869.318 | 1585.913 | 33.974 | 25.59x | 46.68x |
| 5000 | 8 | 1250 | 6.151 | 7.156 | 16.970 | 0.36x | 0.42x |
| 5000 | 1024 | 1250 | 1075.742 | 2483.299 | 47.459 | 22.67x | 52.33x |
| 5000 | 4096 | 1250 | 4547.318 | 10027.665 | 154.874 | 29.36x | 64.75x |
| 10000 | 8 | 2500 | 22.135 | 26.152 | 43.379 | 0.51x | 0.60x |
| 10000 | 1024 | 2500 | 4503.257 | 9959.041 | 186.622 | 24.13x | 53.36x |
| 10000 | 4096 | 2500 | 21699.598 | 40439.047 | 645.883 | 33.60x | 62.61x |
---
## Reference
Bucket-based FPS (QuickFPS) is proposed in the following paper:
```bibtex
@article{han2023quickfps,
title={QuickFPS: Architecture and Algorithm Co-Design for Farthest Point Sampling in Large-Scale Point Clouds},
author={Han, Meng and Wang, Liang and Xiao, Limin and Zhang, Hao and Zhang, Chenhao and Xu, Xiangrong and Zhu, Jianfeng},
journal={IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems},
year={2023},
publisher={IEEE}
}
```
Thanks to the authors for their great work.
| text/markdown | Andrew Lu | alu1@seas.upenn.edu | null | null | MIT | pytorch, farthest, furthest, sampling, sample, fps, quickfps | [] | [] | https://github.com/Astro-85/torch_quickfps | null | >=3.10 | [] | [] | [] | [
"torch>=2.10"
] | [] | [] | [] | [] | twine/6.2.0 CPython/3.11.14 | 2026-02-19T03:18:00.308363 | torch_quickfps-2.1.0-cp310-abi3-manylinux_2_28_x86_64.whl | 1,072,765 | b7/94/d1bb46f593be1610bff56b5f2a6c51175866c92a52b2b5f3e78d0eaea0df/torch_quickfps-2.1.0-cp310-abi3-manylinux_2_28_x86_64.whl | cp310 | bdist_wheel | null | false | 1d49e25b856970c97eb00cdfac4efe8f | 4b8fe90157309c3b508e2af76a655e2b3d50bd8a599cc1bdf019adeadcc1317c | b794d1bb46f593be1610bff56b5f2a6c51175866c92a52b2b5f3e78d0eaea0df | null | [
"LICENSE"
] | 104 |
2.4 | pmtvs-acf | 0.0.1 | Signal analysis primitives | # pmtvs-acf
Part of the pmtvs signal analysis ecosystem. Coming soon.
See [pmtvs](https://pypi.org/project/pmtvs/) for the main package.
| text/markdown | null | null | null | null | MIT | null | [
"Development Status :: 1 - Planning",
"Intended Audience :: Science/Research",
"Topic :: Scientific/Engineering :: Physics",
"Programming Language :: Python :: 3"
] | [] | null | null | >=3.10 | [] | [] | [] | [] | [] | [] | [] | [] | twine/6.2.0 CPython/3.14.2 | 2026-02-19T03:17:59.979418 | pmtvs_acf-0.0.1.tar.gz | 1,249 | 5a/50/4f817de9ebdef6132c5432cb80fc2808b726439dd1039719c7ee8bde5f44/pmtvs_acf-0.0.1.tar.gz | source | sdist | null | false | 2f01749b466217895461ba506f21c289 | 9ba2f245f8a5bd3ba0a0de8d9b71fa49a9383d86e77bfdb9025728ba07a170cb | 5a504f817de9ebdef6132c5432cb80fc2808b726439dd1039719c7ee8bde5f44 | null | [] | 249 |
2.1 | odoo-addon-l10n-it-edi-doi-extension | 18.0.1.1.0.2 | Declaration of Intent for Italy (OCA) | .. image:: https://odoo-community.org/readme-banner-image
:target: https://odoo-community.org/get-involved?utm_source=readme
:alt: Odoo Community Association
=====================================
Declaration of Intent for Italy (OCA)
=====================================
..
!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
!! This file is generated by oca-gen-addon-readme !!
!! changes will be overwritten. !!
!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
!! source digest: sha256:dd3f65ff3d3ab5dfa833e27947ecec69833e36661bb35bfd01abda6583856522
!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
.. |badge1| image:: https://img.shields.io/badge/maturity-Beta-yellow.png
:target: https://odoo-community.org/page/development-status
:alt: Beta
.. |badge2| image:: https://img.shields.io/badge/license-AGPL--3-blue.png
:target: http://www.gnu.org/licenses/agpl-3.0-standalone.html
:alt: License: AGPL-3
.. |badge3| image:: https://img.shields.io/badge/github-OCA%2Fl10n--italy-lightgray.png?logo=github
:target: https://github.com/OCA/l10n-italy/tree/18.0/l10n_it_edi_doi_extension
:alt: OCA/l10n-italy
.. |badge4| image:: https://img.shields.io/badge/weblate-Translate%20me-F47D42.png
:target: https://translation.odoo-community.org/projects/l10n-italy-18-0/l10n-italy-18-0-l10n_it_edi_doi_extension
:alt: Translate me on Weblate
.. |badge5| image:: https://img.shields.io/badge/runboat-Try%20me-875A7B.png
:target: https://runboat.odoo-community.org/builds?repo=OCA/l10n-italy&target_branch=18.0
:alt: Try me on Runboat
|badge1| |badge2| |badge3| |badge4| |badge5|
**English**
This module extends the functionality of l10n_it_edi_doi, enabling the
use of the Declaration of Intent (Dichiarazione di Intento) for incoming
vendor bills and purchase orders.
Key features:
- Support for multiple Declarations of Intent per invoice
- Dedicated tab in invoice form for managing DOI associations
- Automatic validation of DOI amounts and available thresholds
- Smart warnings when invoice amounts don't match DOI coverage
- Backward compatibility with single-declaration workflow
**Italiano**
Questo modulo estende la funzionalità di l10n_it_edi_doi, permettendo
l'utilizzo della Dichiarazione di Intento per le fatture di acquisto in
ingresso e gli ordini di acquisto.
Caratteristiche principali:
- Supporto per dichiarazioni di intento multiple per fattura
- Tab dedicato nel form fattura per gestire le associazioni DOI
- Validazione automatica degli importi e soglie disponibili
- Avvisi intelligenti quando gli importi non corrispondono
- Retrocompatibilità con il flusso a dichiarazione singola
**Table of contents**
.. contents::
:local:
Usage
=====
**English**
In the company configuration, it is necessary to define a dedicated tax
for the Declaration of Intent for incoming vendor bills.
In the contacts, you can create a Declaration of Intent by choosing
between two types:
- "Issued from company": for declarations issued by the company.
- "Received from customer": for declarations received from suppliers.
**Multiple Declarations of Intent:**
When creating or editing a vendor bill, you can now associate multiple
Declarations of Intent:
1. Go to the "Declarations of Intent" tab in the invoice form
2. Add one or more declarations using the list
3. For each declaration, specify the amount to be covered
4. The module will automatically:
- Validate that amounts don't exceed available thresholds
- Show a warning if total DOI amounts don't match invoice amount
- Update the invoiced amounts on each declaration
- Generate protocol numbers in the XML export
You can also use the traditional single-declaration field for backward
compatibility, or mix both approaches for different invoices.
**Italiano**
Nella configurazione dell'azienda è necessario definire un'imposta
dedicata alla Dichiarazione di Intento per le fatture in ingresso. Nei
contatti è possibile creare una Dichiarazione di Intento scegliendo tra
due tipologie:
- "Issued from company": per le dichiarazioni emesse dall'azienda.
- "Received from customer": per le dichiarazioni ricevute dai fornitori.
**Dichiarazioni di Intento Multiple:**
Durante la creazione o modifica di una fattura fornitore, è ora
possibile associare più Dichiarazioni di Intento:
1. Accedi al tab "Dichiarazioni di Intento" nel form della fattura
2. Aggiungi una o più dichiarazioni usando la lista
3. Per ogni dichiarazione, specifica l'importo da coprire
4. Il modulo automaticamente:
- Valida che gli importi non superino le soglie disponibili
- Mostra un avviso se il totale DOI non corrisponde all'importo
fattura
- Aggiorna gli importi fatturati su ogni dichiarazione
- Genera i numeri di protocollo nell'esportazione XML
È possibile continuare ad usare il campo tradizionale a dichiarazione
singola per retrocompatibilità, o combinare entrambi gli approcci per
fatture diverse.
Bug Tracker
===========
Bugs are tracked on `GitHub Issues <https://github.com/OCA/l10n-italy/issues>`_.
In case of trouble, please check there if your issue has already been reported.
If you spotted it first, help us to smash it by providing a detailed and welcomed
`feedback <https://github.com/OCA/l10n-italy/issues/new?body=module:%20l10n_it_edi_doi_extension%0Aversion:%2018.0%0A%0A**Steps%20to%20reproduce**%0A-%20...%0A%0A**Current%20behavior**%0A%0A**Expected%20behavior**>`_.
Do not contact contributors directly about support or help with technical issues.
Credits
=======
Authors
-------
* Nextev Srl
Contributors
------------
- Nextev Srl <odoo@nextev.it>
Maintainers
-----------
This module is maintained by the OCA.
.. image:: https://odoo-community.org/logo.png
:alt: Odoo Community Association
:target: https://odoo-community.org
OCA, or the Odoo Community Association, is a nonprofit organization whose
mission is to support the collaborative development of Odoo features and
promote its widespread use.
This module is part of the `OCA/l10n-italy <https://github.com/OCA/l10n-italy/tree/18.0/l10n_it_edi_doi_extension>`_ project on GitHub.
You are welcome to contribute. To learn how please visit https://odoo-community.org/page/Contribute.
| text/x-rst | Nextev Srl, Odoo Community Association (OCA) | support@odoo-community.org | null | null | AGPL-3 | null | [
"Programming Language :: Python",
"Framework :: Odoo",
"Framework :: Odoo :: 18.0",
"License :: OSI Approved :: GNU Affero General Public License v3"
] | [] | https://github.com/OCA/l10n-italy | null | >=3.10 | [] | [] | [] | [
"odoo==18.0.*"
] | [] | [] | [] | [] | twine/6.2.0 CPython/3.12.3 | 2026-02-19T03:16:59.076716 | odoo_addon_l10n_it_edi_doi_extension-18.0.1.1.0.2-py3-none-any.whl | 47,820 | 67/ec/356965b995bb5a5298f7f5c535ebb0a239b22b7156079ce17e15ea1bb953/odoo_addon_l10n_it_edi_doi_extension-18.0.1.1.0.2-py3-none-any.whl | py3 | bdist_wheel | null | false | aea0d4c599305d3ff2c276afcd0f4d55 | 6147414ee68002a5589f68387a7e70ac837cf839f2205817f3da0d474d78d645 | 67ec356965b995bb5a5298f7f5c535ebb0a239b22b7156079ce17e15ea1bb953 | null | [] | 103 |
2.3 | wanting | 0.14.0 | A library for creating, and working with models that can represent incomplete information. | Wanting
#######
Wanting is a library for creating, and working with models that can represent
incomplete information.
Motivation
**********
Instances of domain models don't always spring into existence fully formed.
They may be partially constructed initially, then filled in over time. Making a
model field optional when it is not initially available but eventually required
is inaccurate, because an optional field may *always* be optional, so it never
has to be filled in. It would be better to make the field a required union of
the type it wants and a placeholder type. The wanting types are such placeholders.
They can include metadata, such as the source of the update with missing data,
and even partial data from that source.
Usage
*****
There are two wanting types that may be unioned with the type of a field. When
a field is ``Unavailable``, no information about that field is known. When a
field is ``Unmapped``, there is information about that field, but we are unable
to map that information to a value that the model will accept.
A domain model may look like this:
.. code-block:: python
from typing import Literal
import pydantic
import wanting
class User(pydantic.BaseModel):
"""A model that can have incomplete information."""
name: str
employee_id: str | wanting.Unavailable
department_code: Literal["TECH", "FO", "BO", "HR"] | wanting.Unmapped
Then there is an onboarding system that creates a ``User``. However, the
``employee_id`` is unavailable at this time because it will be generated later.
The onboarding system sources the department code from some other system, which
uses different values than those in the ``User`` model. The onboarding system
knows how to map some of the codes from the other system to the ``User``
department codes, but not all of them. However, because ``employee_id``, and
``department_code`` are unioned with wanting fields, the onboarding system can
still create a fully valid model, with the information it knows:
.. code-block:: python
user = User(
name="Charlotte",
employee_id=wanting.Unavailable(source="onboarding"),
department_code=wanting.Unmapped(source="onboarding", value="art"),
)
The model validates, and all the wanting fields serialize to valid JSON:
.. code-block:: python
assert user.model_dump() == {
"name": "Charlotte",
"employee_id": {
"kind": "unavailable",
"source": "onboarding",
"value": {"serialized": b"null"},
},
"department_code": {
"kind": "unmapped",
"source": "onboarding",
"value": {"serialized": b'"art"'},
},
}
This user can now be persisted, then queried, and updated later by other
systems.
A model class can be queried for its potentially wanting fields:
.. code-block:: python
class Child(pydantic.BaseModel):
"""A model that can have incomplete information."""
regular: int
wanting: int | wanting.Unavailable
class Parent(pydantic.BaseModel):
"""A model that can have top-level, and nested incomplete information."""
regular: int
wanting: int | wanting.Unavailable
nested: Child
def reduce_path(path: list[wanting.FieldInfoEx]) -> str:
"""Reduce the FieldInfoEx objects that comprise a path to a readable string."""
return "->".join(f"{fi.cls.__name__}.{fi.name}" for fi in path)
paths = wanting.fields(Parent)
summary = [reduce_path(path) for path in paths]
assert summary == ["Parent.wanting", "Parent.nested->Child.wanting"]
A model instance can be queried for its wanting values:
.. code-block:: python
p = Parent(
regular=1,
wanting=2,
nested=Child(regular=3, wanting=wanting.Unavailable(source="doc")),
)
assert wanting.values(p) == {
"nested": {"wanting": wanting.Unavailable(source="doc")}
}
A model instance can also be serialized, either including or excluding its
wanting values:
.. code-block:: python
inc = wanting.incex(p)
assert p.model_dump(include=inc) == {
"nested": {
"wanting": {
"kind": "unavailable",
"source": "doc",
"value": {"serialized": b"null"},
}
}
}
assert p.model_dump(exclude=inc) == {
"regular": 1,
"wanting": 2,
"nested": {"regular": 3},
}
Model serialization with respect to wanting fields is invertible. A model can
be serialized, then the result can be deserialized back into an equivalent
model.
.. code-block:: python
p2 = Parent.model_validate(p.model_dump())
assert p == p2
| text/x-rst | Narvin Singh | Narvin Singh <Narvin.A.Singh@gmail.com> | null | null | Wanting is a library for working with incomplete models. Copyright (C) 2025 Narvin Singh This program is free software: you can redistribute it and/or modify it under the terms of the GNU General Public License as published by the Free Software Foundation, either version 3 of the License, or (at your option) any later version. This program is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for more details. You should have received a copy of the GNU General Public License along with this program. If not, see <https://www.gnu.org/licenses/>. | null | [
"Programming Language :: Python :: 3",
"License :: OSI Approved :: GNU General Public License v3 or later (GPLv3+)",
"Operating System :: OS Independent"
] | [] | null | null | >=3.12 | [] | [] | [] | [
"pydantic~=2.12",
"python-docs-theme~=2025.12; extra == \"doc\"",
"sphinx~=9.1; extra == \"doc\""
] | [] | [] | [] | [
"Homepage, https://wanting.readthedocs.io",
"Documentation, https://wanting.readthedocs.io",
"Repository, https://codeberg.org/narvin/wanting",
"Issues, https://codeberg.org/narvin/wanting/issues"
] | uv/0.10.4 {"installer":{"name":"uv","version":"0.10.4","subcommand":["publish"]},"python":null,"implementation":{"name":null,"version":null},"distro":{"name":"Arch Linux","version":null,"id":null,"libc":null},"system":{"name":null,"release":null},"cpu":null,"openssl_version":null,"setuptools_version":null,"rustc_version":null,"ci":null} | 2026-02-19T03:16:56.138773 | wanting-0.14.0-py3-none-any.whl | 7,128 | 41/60/8c92b6230b0ed8a91e6155cc211cb1f73a94b839f2f98a6c15e7d15688e4/wanting-0.14.0-py3-none-any.whl | py3 | bdist_wheel | null | false | bbe7b774a018cba323043ce11e2bd3f9 | 7d6cbe699b4556db8def6c346bd14d6b1cfa97a0c92e9843c3f2c998632a5642 | 41608c92b6230b0ed8a91e6155cc211cb1f73a94b839f2f98a6c15e7d15688e4 | null | [] | 230 |
2.4 | pmtvs-mutual-info | 0.0.1 | Signal analysis primitives | # pmtvs-mutual-info
Part of the pmtvs signal analysis ecosystem. Coming soon.
See [pmtvs](https://pypi.org/project/pmtvs/) for the main package.
| text/markdown | null | null | null | null | MIT | null | [
"Development Status :: 1 - Planning",
"Intended Audience :: Science/Research",
"Topic :: Scientific/Engineering :: Physics",
"Programming Language :: Python :: 3"
] | [] | null | null | >=3.10 | [] | [] | [] | [] | [] | [] | [] | [] | twine/6.2.0 CPython/3.14.2 | 2026-02-19T03:16:55.372267 | pmtvs_mutual_info-0.0.1.tar.gz | 1,283 | 01/0e/aae9c9010b35e10c60af39e5502ed5a5e65adf4fea17338f503f1e9bcac6/pmtvs_mutual_info-0.0.1.tar.gz | source | sdist | null | false | f2bc5d7eef08c1a9d2bdc6380fd6a14a | db4b89e88557c5d0dbd5fe2d6bf150d7ac34fc2f2e3c68cf1be9c71a8b1bab53 | 010eaae9c9010b35e10c60af39e5502ed5a5e65adf4fea17338f503f1e9bcac6 | null | [] | 254 |
2.4 | deprecated-params | 0.5.1 | A Wrapper for functions, class objects and methods for deprecating keyword parameters | # Deprecated Params
[](https://badge.fury.io/py/deprecated-params)

[](https://opensource.org/licenses/MIT)
[](https://opensource.org/licenses/Appache-2-0)
Inspired by Python's ``warnings.deprecated`` wrapper, deprecated_params serves the single purpose of deprecating keyword parameter names, warning users
about upcoming changes while retaining type hinting.
## How to Deprecate Parameters
Parameters to deprecate should be keyword arguments, not positional. The
reason for this design is that, in theory, you should have already
planned an alternative approach to any argument you wish
to deprecate. Most of the time these arguments fall
into one of three cases:
- misspellings
- better functionality that replaces old arguments with better ones.
- removed parameters, where you want to warn developers
to migrate without being aggressive about it.
```python
from deprecated_params import deprecated_params
@deprecated_params(['x'])
def func(y, *, x:int = 0):
pass
# DeprecationWarning: Parameter "x" is deprecated
func(None, x=20)
# NOTE: **kw is accepted, and you can also list more than one
# parameter if needed...
@deprecated_params(['foo'], {"foo":"foo was removed in ... don't use it"}, display_kw=False)
class MyClass:
def __init__(self, spam:object, **kw):
self.spam = spam
self.foo = kw.get("foo", None)
# DeprecationWarning: foo was removed in ... don't use it
mc = MyClass("spam", foo="X")
```
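Under the hood, a decorator like this can be sketched in a few lines with the standard library. The following is a hypothetical illustration, not the actual deprecated-params implementation (which also preserves type hints and supports classes and methods):

```python
import functools
import warnings

def warn_params(names, messages=None):
    """Hypothetical sketch of a keyword-deprecation decorator.

    Warns when any keyword in `names` is passed, using an optional
    per-parameter message from `messages`.
    """
    messages = messages or {}

    def decorator(func):
        @functools.wraps(func)
        def wrapper(*args, **kwargs):
            for name in names:
                if name in kwargs:
                    msg = messages.get(name, f'Parameter "{name}" is deprecated')
                    # stacklevel=2 points the warning at the caller's line
                    warnings.warn(msg, DeprecationWarning, stacklevel=2)
            return func(*args, **kwargs)
        return wrapper
    return decorator

@warn_params(["x"])
def func(y, *, x=0):
    return y

with warnings.catch_warnings(record=True) as caught:
    warnings.simplefilter("always")
    func(1, x=20)
print(caught[0].category.__name__)  # DeprecationWarning
```

The real library builds on the same `warnings.warn` + wrapper idea while keeping the wrapped signature visible to IDEs.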
## Why I wrote Deprecated Params
I got tired of throwing random warnings into my code and wanted something cleaner that didn't
interfere with a function's actual logic and didn't blind anybody trying to read through it.
Contributors and reviewers should be able to use a library that saves them from these problems
while improving the readability of a function. After discovering that the functionality I was
looking for didn't exist, I took the opportunity to implement it.
## Deprecated Params used in real-world Examples
Deprecated-params is now used with two of my own libraries by default.
- [aiothreading (up until 0.1.6)](https://github.com/Vizonex/aiothreading)
- Originally aiothreading had it's own wrapper but I split it off to this library along with a rewrite after finding out that
parameter names were not showing up ides such as vs-code. The rewrite felt a bit bigger and knowing that users would want to utilize
this concept in other places was how this library ultimately got started.
  - Many internal changes were made, and with arguments suddenly being dropped to improve performance, the best solution was to warn
    developers to stop using certain parameters before they are deleted.
  - The dependency is planned to be dropped, since most of what we wanted to remove has gradually been removed from aiothreading.
    That doesn't mean I won't keep maintaining this library: it is designed for short-lived use cases and can be added and removed freely
    without needing additional dependencies.
- [aiocallback (mainly used in version 1.6)](https://github.com/Vizonex/aiocallback)
  - Same situation as aiothreading, but I decided to buy users more time given how fast some releases were going.
  - I have since removed deprecated-params from aiocallback because it was no longer needed, but that is exactly what this library is
    for: being there only when it's needed. I desired nothing more or less.
If you would like to add examples of your own libraries that have used this library feel free to throw me an issue or send me a pull request.
| text/markdown | null | Vizonex <VizonexBusiness@gmail.com> | null | null | null | null | [] | [] | null | null | >=3.10 | [] | [] | [] | [
"pytest>=9.0.2; extra == \"pytest\"",
"sphinx==9.1.0; extra == \"docs\"",
"aiohttp-theme==0.1.7; extra == \"docs\""
] | [] | [] | [] | [
"homepage, https://github.com/Vizonex/deprecated-params",
"repository, https://github.com/Vizonex/deprecated-params.git"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-19T03:16:51.645435 | deprecated_params-0.5.1.tar.gz | 10,217 | 46/4d/6c931ccfae18ed41940006a47fd74300bf2ccce2facc73773fd0c17a6625/deprecated_params-0.5.1.tar.gz | source | sdist | null | false | 34fea13b3495fbb3f682ed867fa03e73 | a96e07df8fa828ffc8d6f54c05ba63fa3baf5392295d655632b12eccb575757b | 464d6c931ccfae18ed41940006a47fd74300bf2ccce2facc73773fd0c17a6625 | MIT | [
"LICENSE"
] | 275 |
2.4 | dioptra-platform | 1.1.0 | Dioptra is a software test platform for assessing the trustworthy characteristics of artificial intelligence (AI). | # Dioptra: Test Software for the Characterization of AI Technologies
Dioptra is a software test platform for assessing the trustworthy characteristics of artificial intelligence (AI).
Trustworthy AI is: valid and reliable, safe, secure and resilient, accountable and transparent, explainable and interpretable, privacy-enhanced, and fair - with harmful bias managed[^1].
Dioptra supports the Measure function of the [NIST AI Risk Management Framework](https://nist.gov/itl/ai-risk-management-framework/) by providing functionality to assess, analyze, and track identified AI potential benefits and negative consequences.
Dioptra provides a REST API, which can be controlled via an intuitive web interface, a Python client, or any REST client library of the user's choice for designing, managing, executing, and tracking experiments.
Details are available in the project documentation available at <https://pages.nist.gov/dioptra/>.
[^1]: <https://doi.org/10.6028/NIST.AI.100-1>
<!-- markdownlint-disable MD007 MD030 -->
- [Current Release Status](#current-release-status)
- [Use Cases](#use-cases)
- [Key Properties](#key-properties)
- [Usage Instructions](#usage-instructions)
- [Install Dioptra](#install-dioptra)
- [Develop Dioptra](#develop-dioptra)
- [License](#license)
- [How to Cite](#how-to-cite)
<!-- markdownlint-enable MD007 MD030 -->
## Current Release Status
Release 1.1.0, with ongoing improvements and development
## Use Cases
We envision the following primary use cases for Dioptra:
- Model Testing:
- 1st party - Assess AI models throughout the development lifecycle
- 2nd party - Assess AI models during acquisition or in an evaluation lab environment
- 3rd party - Assess AI models during auditing or compliance activities
- Research: Aid trustworthy AI researchers in tracking experiments
- Evaluations and Challenges: Provide a common platform and resources for participants
- Red-Teaming: Expose models and resources to a red team in a controlled environment
## Key Properties
Dioptra strives for the following key properties:
- Reproducible: Dioptra automatically creates snapshots of resources so experiments can be reproduced and validated
- Traceable: The full history of experiments and their inputs are tracked
- Extensible: Support for expanding functionality and importing existing Python packages via a plugin system
- Interoperable: A type system promotes interoperability between plugins
- Modular: New experiments can be composed from modular components in a simple yaml file
- Secure: Dioptra provides user authentication with access controls coming soon
- Interactive: Users can interact with Dioptra via an intuitive web interface
- Shareable and Reusable: Dioptra can be deployed in a multi-tenant environment so users can share and reuse components
## Usage Instructions
### Install Dioptra
See the [Install Dioptra](https://pages.nist.gov/dioptra/getting-started/install-dioptra-explanation.html) section of the documentation for more detailed instructions.
1. Pull the Dioptra docker images:
```sh
# pull the core dioptra images:
docker pull ghcr.io/usnistgov/dioptra/nginx:1.1.0
docker pull ghcr.io/usnistgov/dioptra/mlflow-tracking:1.1.0
docker pull ghcr.io/usnistgov/dioptra/restapi:1.1.0
# pull the worker images:
docker pull ghcr.io/usnistgov/dioptra/pytorch-cpu:1.1.0
docker pull ghcr.io/usnistgov/dioptra/tensorflow2-cpu:1.1.0
# optionally pull the GPU worker images:
docker pull ghcr.io/usnistgov/dioptra/pytorch-gpu:1.1.0
docker pull ghcr.io/usnistgov/dioptra/tensorflow2-gpu:1.1.0
```
2. Prepare your Dioptra deployment:
```sh
cruft create https://github.com/usnistgov/dioptra --checkout main \
--directory cookiecutter-templates/cookiecutter-dioptra-deployment
```
3. Initialize your Dioptra deployment:
```sh
cd dioptra-deployment # Or your deployment folder name
./init-deployment.sh --branch main
```
4. Run Dioptra
```
docker compose up -d
```
Your Dioptra deployment is now accessible at `http://localhost`. We recommend getting started with the [Hello World Tutorial](https://pages.nist.gov/dioptra/tutorials/hello_world/index.html).
## Develop Dioptra
If you are interested in contributing to Dioptra, please see the [Developer Guide](DEVELOPER.md).
## License
[](http://creativecommons.org/licenses/by/4.0/)
This Software (Dioptra) is being made available as a public service by the [National Institute of Standards and Technology (NIST)](https://www.nist.gov/), an Agency of the United States Department of Commerce.
This software was developed in part by employees of NIST and in part by NIST contractors.
Copyright in portions of this software that were developed by NIST contractors has been licensed or assigned to NIST.
Pursuant to Title 17 United States Code Section 105, works of NIST employees are not subject to copyright protection in the United States.
However, NIST may hold international copyright in software created by its employees and domestic copyright (or licensing rights) in portions of software that were assigned or licensed to NIST.
To the extent that NIST holds copyright in this software, it is being made available under the [Creative Commons Attribution 4.0 International license (CC BY 4.0)](http://creativecommons.org/licenses/by/4.0/).
The disclaimers of the CC BY 4.0 license apply to all parts of the software developed or licensed by NIST.
## How to Cite
Glasbrenner, James, Booth, Harold, Manville, Keith, Sexton, Julian, Chisholm, Michael Andy, Choy, Henry, Hand, Andrew, Hodges, Bronwyn, Scemama, Paul, Cousin, Dmitry, Trapnell, Eric, Trapnell, Mark, Huang, Howard, Rowe, Paul, Byrne, Alex (2024), Dioptra Test Platform, National Institute of Standards and Technology, https://doi.org/10.18434/mds2-3398 (Accessed 'Today's Date')
N.B.: Replace 'Today's Date' with today's date
| text/markdown | Michael Andy Chisholm, Andrew Hand, Paul Scemama, Alexander Byrne, Luke Barber, Cory Miniter | Harold Booth <harold.booth@nist.gov>, James Glasbrenner <jglasbrenner@mitre.org>, Keith Manville <kmanville@mitre.org>, Julian Sexton <jtsexton@mitre.org>, Henry Choy <hchoy@mitre.org>, Bronwyn Hodges <bhodges@mitre.org>, Dmitry Cousin <dmitry.cousin@nist.gov>, Eric Trapnell <eric.trapnell@nist.gov>, Mark Trapnell <mark.trapnell@nist.gov>, Colton Lapp <colton.lapp@nist.gov>, Howard Huang <hhuang@mitre.org>, Paul Rowe <prowe@mitre.org> | null | Harold Booth <harold.booth@nist.gov>, James Glasbrenner <jglasbrenner@mitre.org>, Keith Manville <kmanville@mitre.org> | null | null | [
"Development Status :: 5 - Production/Stable",
"Framework :: Flask",
"Intended Audience :: Science/Research",
"Operating System :: OS Independent",
"Programming Language :: Python :: 3 :: Only",
"Programming Language :: Python :: 3.11",
"Topic :: Scientific/Engineering :: Artificial Intelligence",
"To... | [] | null | null | >=3.11 | [] | [] | [] | [
"alembic>=1.13.0",
"async-timeout>=4.0.0",
"boto3>=1.16.0",
"click<9,>=8.0.0",
"entrypoints>=0.3",
"flask-accepts>=0.17.0",
"flask-cors>=3.0.1",
"flask-login>=0.6.0",
"flask-migrate>=2.5.0",
"flask-restx>=0.5.1",
"flask-sqlalchemy>=2.4.0",
"flask>=2.0.0",
"gunicorn>=20.0.0",
"injector>=0.1... | [] | [] | [] | [
"repository, https://github.com/usnistgov/dioptra",
"documentation, https://pages.nist.gov/dioptra",
"Issue Tracker, https://github.com/usnistgov/dioptra/issues"
] | uv/0.10.4 {"installer":{"name":"uv","version":"0.10.4","subcommand":["publish"]},"python":null,"implementation":{"name":null,"version":null},"distro":{"name":"Ubuntu","version":"22.04","id":"jammy","libc":null},"system":{"name":null,"release":null},"cpu":null,"openssl_version":null,"setuptools_version":null,"rustc_version":null,"ci":true} | 2026-02-19T03:16:42.029192 | dioptra_platform-1.1.0-py3-none-any.whl | 595,825 | d0/e6/5630df0c272a2b130408975ee323386c354adb30d4b360ad1951d45a85bf/dioptra_platform-1.1.0-py3-none-any.whl | py3 | bdist_wheel | null | false | 19f5d1a1987ad9d4781b6f6cc9951f1a | 9081a101854828d872dc034d8659fedae89409db1b009501d4d0140962461fcb | d0e65630df0c272a2b130408975ee323386c354adb30d4b360ad1951d45a85bf | null | [
"LICENSE"
] | 239 |
2.4 | PhantomTrace | 0.4.0 | PhantomTrace — a mathematical framework where numbers exist in present or absent states with custom operations to include addition, subtraction, multiplication, division, and erasure. | # PhantomTrace
A Python library implementing an experimental mathematical framework where numbers can exist in two states: **present** or **absent**. It defines five operations that interact with these states in consistent, rule-based ways.
Zero is redefined: `0` is not emptiness — it's one absence (`1(0)`). This means every operation has a defined result, including division by zero.
**Read the paper**: [Absence Theory](https://www.academia.edu/150254484/Absence_Theory_Quantified_Absence_and_State_Aware_Arithmetic_within_Domains_of_Reference)
## Installation
```bash
pip install phantomtrace
```
## Shorthand Notation (v0.3.0)
Use the `n()` shorthand to create numbers quickly:
```python
from absence_calculator import n
n(5) # → 5 (present)
n(5)(0) # → 5(0) (absent) — closest to writing 5(0) directly
n(5)(1) # → 5 (stays present)
n(3)(5) # → 15 (multiplier — 3 × 5)
n(10)(0) # → 10(0) (absent)
# Build vectors naturally
vec = [n(10)(0), n(20), n(30)(0), n(40), n(50)(0)]
# → [10(0), 20, 30(0), 40, 50(0)]
```
You can also use the full form: `AbsentNumber(value, absence_level)`.
## Quick Start
```python
from absence_calculator import n, add, subtract, multiply, divide, erase, format_result
# Create numbers — present (default) or absent
five = n(5) # 5 (present)
three_absent = n(3)(0) # 3(0) (absent)
# Addition — same state combines, mixed state is unresolved
result = add(n(5), n(3))
print(result) # 8
# Subtraction — equal values cancel to void
result = subtract(n(7), n(7))
print(result) # void
# Multiplication — states combine (like XOR)
result = multiply(n(5)(0), n(3))
print(result) # 15(0)
# Erasure — flips the state of the erased portion
result = erase(n(5), n(3))
print(result) # 2 + 3(0)
# Over-erasure — excess becomes erased debt
result = erase(n(7), n(10))
print(result) # 7(0) + erased 3
# Resolve erased excess by adding
resolved = add(result, n(3))
print(resolved) # 10(0)
# Division by zero — defined! (0 is one absence)
result = divide(n(10), n(1)(0))
print(result) # 10(0)
```
## Using the Expression Solver
```python
from absence_calculator import solve, format_result
# Parse and solve string expressions
print(format_result(solve("5 + 3"))) # 8
print(format_result(solve("5(0) + 3(0)"))) # 8(0)
print(format_result(solve("7 - 7"))) # void
print(format_result(solve("5(0) * 3"))) # 15(0)
print(format_result(solve("5 erased 3"))) # 2 + 3(0)
print(format_result(solve("7 erased 10"))) # 7(0) + erased 3 (over-erasure)
print(format_result(solve("5(0)(0)"))) # 5 (double absence = present)
# Parenthesized expressions (operations on unresolved inputs)
print(format_result(solve("(1 + 5(0)) erased 1"))) # 6(0)
# Zero operations
print(format_result(solve("0 + 0"))) # 2(0) (two absences)
print(format_result(solve("0 * 0"))) # 1 (absence of absence = presence)
print(format_result(solve("10 * 0"))) # 10(0)
print(format_result(solve("10 / 0"))) # 10(0)
```
## Interactive Calculator
After installing, you can run the interactive calculator from the command line:
```bash
phantomtrace
```
Or as a Python module:
```bash
python -m absence_calculator
```
This gives you a `calc >>` prompt where you can type expressions and see results.
## Core Concepts
### Objects and States
An object is a number that has both a **value** and a **state**:
- **Present** (default): Written normally, e.g. `5`. Present quantities reflect the presence of a given unit of interest. (e.g. if the unit is a cat, then 5 represents 5 cats that are there or in a present state)
- **Absent**: Written with `(0)`, e.g. `5(0)` — think of it as `5 * 0`. Absent quantities reflect the absence of a given unit of interest. (e.g. if the unit is a phone, then 5(0) represents 5 phones that are not currently there but are still considered for computation)
Both states carry magnitude. `5` and `5(0)` both have a value of 5 — the state tells you whether it's present or absent, but the magnitude never disappears.
### Absence
- **Zero**: `0` is not emptiness, it's one absence (`1(0) = 1 * 0 = 0`)
- **Absence of absence** returns to present: `5(0)(0) = 5`, and `0(0) = 1`
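The absence-of-absence rule (setting aside the special `0(0) = 1` identity) can be modeled as the parity of `(0)` applications. A minimal standalone sketch of that reading, where `resolve_state` is a hypothetical helper rather than the library's own types:

```python
def resolve_state(value, zero_applications):
    # an even count of (0) applications returns to present; an odd count is absent
    state = "present" if zero_applications % 2 == 0 else "absent"
    return (value, state)

resolve_state(5, 2)  # 5(0)(0) → (5, 'present')
resolve_state(5, 1)  # 5(0)    → (5, 'absent')
```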
### Operations
| Operation | Symbol | Rule |
|-----------|--------|------|
| Addition | `+` | Expands the amount of objects under consideration. Same state: magnitudes combine. Mixed: unresolved |
| Subtraction | `-` | Contracts the amount of objects under consideration. (If the domain of consideration is constricted to nothing then the result is void. Void is not an object, nor the new zero, it simply means we are not considering anything on which to act.) Same state: magnitudes reduce. Mixed: unresolved|
| Multiplication | `*` | Magnitudes multiply. States combine (present*present=present, absent*present=absent, absent*absent=present) |
| Division | `/` | Magnitudes divide. States combine same as multiplication. Division by 0 is defined! |
| Erasure | `erased` | Same state required. Remainder keeps state, erased portion flips state. Over-erasure creates erased excess |
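The state rule for multiplication and division behaves like XOR on the absent flag. A minimal standalone sketch of just that rule (hypothetical helpers, not the library's API):

```python
def combine_states(a_absent, b_absent):
    # present*present=present, absent*present=absent, absent*absent=present
    return a_absent ^ b_absent  # XOR on the "absent" flag

def multiply_sketch(a_val, a_absent, b_val, b_absent):
    # magnitudes multiply; states combine via XOR
    return (a_val * b_val, combine_states(a_absent, b_absent))

multiply_sketch(5, True, 3, False)  # 5(0) * 3    → (15, True),  i.e. 15(0)
multiply_sketch(5, True, 3, True)   # 5(0) * 3(0) → (15, False), i.e. 15
```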
### Over-Erasure (v0.2.0)
When you erase more than the total, the result carries an **erased excess** (erasure debt):
- `7 erased 10` = `7(0) + erased 3` — all 7 flip state, 3 excess erasure persists
- Adding resolves excess: `(7(0) + erased 3) + 3` = `10(0)`
- Erasing erased: `(erased 3) erased (erased 3)` = `erased 3(0)` (absence of erased)
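The magnitude bookkeeping behind erasure and over-erasure can be sketched with plain arithmetic. This is a hypothetical standalone model over same-state inputs, not the library's implementation:

```python
def erase_magnitudes(total, amount):
    remainder = max(total - amount, 0)  # keeps its original state
    flipped = min(total, amount)        # this portion flips state
    excess = max(amount - total, 0)     # erased debt, resolved by later addition
    return remainder, flipped, excess

erase_magnitudes(5, 3)   # → (2, 3, 0)  read as "2 + 3(0)"
erase_magnitudes(7, 10)  # → (0, 7, 3)  read as "7(0) + erased 3"
```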
### Compound Expressions (v0.2.0)
Operations can now accept unresolved expressions as inputs:
- `(1 + 5(0)) erased 1` = `6(0)` — erases the present part, combining with the absent part
### Result Types
- **AbsentNumber**: A number with a state (present or absent)
- **Void**: Complete cancellation — not zero, but the absence of any quantity under consideration
- **ErasureResult**: Two parts — remainder (keeps state) and erased portion (flipped state)
- **ErasedExcess**: Excess erasure debt that persists until resolved
- **Unresolved**: An expression that cannot be simplified (e.g., adding present + absent)
## Toggle Module
The toggle module flips states of elements in vectors, matrices, and tensors using pattern-based index selection.
### Core Toggle Operations
- `toggle.where(pattern, range, data)` — flip elements **at** pattern-computed indices
- `toggle.exclude(pattern, range, data)` — flip everything **except** pattern-computed indices
- `toggle.all(data)` — flip **every** element at any depth
The **pattern** is a function (or string expression) that's evaluated across all whole numbers. The **range** is an output filter — only results that fall within `(start, end)` become target indices. The function determines the shape of the pattern; the range determines how far it reaches.
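That evaluation can be sketched as a standalone index computation (a model of the described behavior, not the library's code; `pattern_indices` is a hypothetical name):

```python
def pattern_indices(pattern, rng, length):
    start, end = rng
    hits = set()
    for x in range(length + 1):       # evaluate the pattern across whole numbers
        out = pattern(x)
        if start <= out <= end:       # the range filters pattern *outputs*
            hits.add(out)
    return sorted(i for i in hits if 0 <= i < length)

pattern_indices(lambda x: x * 2, (0, 4), 5)   # → [0, 2, 4]
pattern_indices(lambda x: x ** 2, (1, 4), 5)  # → [1, 4]
```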
### Vectors — Present
```python
from absence_calculator import toggle, n
# Present vector — all elements start as present
vec = [10, 20, 30, 40, 50]
# x*2 produces 0, 2, 4, 6, 8... — range (0, 4) keeps outputs 0 through 4
# Hits: indices 0, 2, 4 (the even positions)
toggle.where(lambda x: x * 2, (0, 4), vec)
# → [10(0), 20, 30(0), 40, 50(0)] targets flipped to absent
toggle.exclude(lambda x: x * 2, (0, 4), vec)
# → [10, 20(0), 30, 40(0), 50] non-targets flipped to absent
toggle.all(vec)
# → [10(0), 20(0), 30(0), 40(0), 50(0)] everything flipped to absent
```
### Vectors — Absent
```python
# Absent vector — all elements start as absent
vec = [n(10)(0), n(20)(0), n(30)(0), n(40)(0), n(50)(0)]
toggle.where(lambda x: x * 2, (0, 4), vec)
# → [10, 20(0), 30, 40(0), 50] targets flipped back to present
toggle.exclude(lambda x: x * 2, (0, 4), vec)
# → [10(0), 20, 30(0), 40, 50(0)] non-targets flipped back to present
toggle.all(vec)
# → [10, 20, 30, 40, 50] everything flipped back to present
```
### Vectors — Mixed
```python
# Mixed vector — some present, some absent
vec = [n(10), n(20)(0), n(30), n(40)(0), n(50)]
toggle.where(lambda x: x * 2, (0, 4), vec)
# → [10(0), 20(0), 30(0), 40(0), 50(0)] targets flip (present→absent, absent→present)
toggle.exclude(lambda x: x * 2, (0, 4), vec)
# → [10, 20, 30, 40, 50] non-targets flip
toggle.all(vec)
# → [10(0), 20, 30(0), 40, 50(0)] every element flips its state
```
### String Patterns and Single Index
```python
# String pattern — "x^2" computes target indices
toggle.where("x^2", (1, 4), [4, 7, 19, 22, 26])
# → [4, 7(0), 19, 22, 26(0)] indices 1 (1²) and 4 (2²) toggled
# Single index — use pattern "x" with range (i, i)
toggle.where("x", (2, 2), [10, 20, 30, 40, 50])
# → [10, 20, 30(0), 40, 50] only index 2 toggled
```
### Matrices — Present
```python
# Present matrix — toggle.all flips every element in every row
matrix = [[1, 2, 3], [4, 5, 6], [7, 8, 9]]
toggle.all(matrix)
# → [[0, 2(0), 3(0)],
# [4(0), 5(0), 6(0)],
# [7(0), 8(0), 9(0)]]
toggle.where("x", (0, 0), matrix)
# → [[0, 2, 3], index 0 toggled in each row
# [4(0), 5, 6],
# [7(0), 8, 9]]
toggle.exclude("x", (0, 0), matrix)
# → [[1, 2(0), 3(0)], everything except index 0 toggled
# [4, 5(0), 6(0)],
# [7, 8(0), 9(0)]]
```
### Matrices — Absent
```python
# Absent matrix
matrix = [[n(1)(0), n(2)(0)], [n(3)(0), n(4)(0)]]
toggle.all(matrix)
# → [[1, 2], everything flipped back to present
# [3, 4]]
```
### Matrices — Mixed
```python
# Mixed matrix — rows have different states
matrix = [[n(10)(0), n(20), n(30)(0)],
          [n(40), n(50)(0), n(60)]]
toggle.where("x", (1, 1), matrix)
# → [[10(0), 20(0), 30(0)], index 1 toggled in each row
# [40, 50, 60]]
toggle.all(matrix)
# → [[10, 20(0), 30], every element flips
# [40(0), 50, 60(0)]]
```
## Tensor Module (v0.4.0)
The toggle module now supports multi-dimensional tensors — nested lists of AbsentNumbers at any depth. Vectors are rank 1, matrices are rank 2, and you can go as deep as you need. Every element always retains both its value and its state — nothing is ever removed, only toggled.
### Creating Tensors
```python
from absence_calculator import toggle, n
# Vector (rank 1) — 5 elements, all absent
v = toggle.tensor(5, fill='absent')
# → [1(0), 2(0), 3(0), 4(0), 5(0)]
# Values are sequential — position IS identity
# Matrix (rank 2) — 3 rows of 4 elements, all present
m = toggle.tensor((3, 4), fill='present')
# → [[1, 2, 3, 4],
# [1, 2, 3, 4],
# [1, 2, 3, 4]]
# 3D Tensor (rank 3) — 2 matrices of 3 rows of 4 elements
t = toggle.tensor((2, 3, 4), fill='absent')
# Each element has a value and a state — absent, but never gone
# 4D Tensor (rank 4)
t4 = toggle.tensor((2, 2, 3, 5))
# Matrices inside matrices — as deep as you need
```
### Inspecting Tensors
```python
toggle.rank(n(5)) # → 0 (scalar)
toggle.rank([n(1), n(2)]) # → 1 (vector)
toggle.rank([[n(1), n(2)], [n(3), n(4)]]) # → 2 (matrix)
toggle.rank(toggle.tensor((2, 3, 4))) # → 3 (3D tensor)
toggle.shape(toggle.tensor((3, 4))) # → (3, 4)
toggle.shape(toggle.tensor((2, 3, 4))) # → (2, 3, 4)
```
### Toggling at Any Depth
`toggle.all()` works at every depth — flips every element regardless of how deeply nested:
```python
# All works on vectors, matrices, tensors, anything
t = toggle.tensor((2, 3, 4), fill='present')
t_flipped = toggle.all(t)
# Every element in the entire 2×3×4 structure is now absent
# Values preserved — only states changed
```
### Axis-Aware Toggling (v0.4.0)
`where()` and `exclude()` now accept an `axis` parameter to control which level toggling happens at:
```python
# Matrix 3×5, all absent
m = toggle.tensor((3, 5), fill='absent')
# Identity function, range (0, 2) — keeps outputs 0, 1, 2
result = toggle.where(lambda x: x, (0, 2), m, axis=-1)
# Row 0: P P P _ _
# Row 1: P P P _ _
# Row 2: P P P _ _
# x*2, range (0, 4) — keeps outputs 0 through 4, hits 0, 2, 4
result = toggle.where(lambda x: x * 2, (0, 4), m, axis=-1)
# Row 0: P _ P _ P
# Row 1: P _ P _ P
# Row 2: P _ P _ P
# Even indices only — the pattern controls the shape, the range controls the reach
# 3D tensor — toggling reaches the deepest vectors
t = toggle.tensor((2, 2, 4), fill='absent')
result = toggle.where(lambda x: x, (1, 2), t, axis=-1)
# Outputs in [1,2] → toggles indices 1 and 2 in every vector:
# [0][0]: _ P P _
# [0][1]: _ P P _
# [1][0]: _ P P _
# [1][1]: _ P P _
```
### Selecting Slices
`toggle.select()` pulls out a sub-structure along any axis. The result is still a valid tensor — one rank lower:
```python
m = [[n(1), n(2), n(3)],
     [n(4), n(5), n(6)],
     [n(7), n(8), n(9)]]
# Select row 1 (axis=0) — gives a vector
toggle.select(m, axis=0, index=1)
# → [4, 5, 6]
# Select column 2 (axis=1) — gives a vector
toggle.select(m, axis=1, index=2)
# → [3, 6, 9]
# 3D tensor — selecting axis=0 gives a matrix
t = [[[n(1), n(2)], [n(3), n(4)]],
     [[n(5), n(6)], [n(7), n(8)]]]
toggle.select(t, axis=0, index=0)
# → [[1, 2], [3, 4]]
# Selecting axis=2 gives a matrix of single values
toggle.select(t, axis=2, index=1)
# → [[2, 4], [6, 8]]
```
### Replacing Slices
`toggle.assign()` replaces a slice at a given position:
```python
m = [[n(1), n(2)], [n(3), n(4)]]
new_row = [n(10), n(20)]
result = toggle.assign(m, axis=0, index=0, value=new_row)
# → [[10, 20], [3, 4]] row 0 replaced, row 1 unchanged
```
### Combining Across an Axis
`toggle.across()` applies a function element-by-element across one axis, combining sub-structures. Each result is still an AbsentNumber with its value and state:
```python
m = [[n(1), n(2)(0), n(3)],  # Row 0: P _ P
     [n(1)(0), n(2), n(3)]]  # Row 1: _ P P
def both_present(x, y):
    if x.is_present and y.is_present:
        return n(x.value)
    return n(x.value)(0)
toggle.across(m, axis=0, fn=both_present)
# → [1(0), 2(0), 3]
# Only index 2 is present in both rows
# Values and states preserved — nothing removed
```
### Counting
```python
v = [n(1), n(2), n(3)(0), n(4), n(5)(0)]
toggle.count_present(v) # → 3 (three elements are present)
m = [[n(1), n(2)(0)], [n(3)(0), n(4)]]
toggle.count_present(m) # → 2 (two elements present total)
toggle.present_indices(v) # → [0, 1, 3] (which positions are present)
```
### Intersect and Union (Convenience)
Shorthand for common `across()` patterns — both preserve all values and states:
```python
a = [n(1), n(2), n(3)(0), n(4), n(5)(0)] # P P _ P _
b = [n(1), n(2)(0), n(3), n(4)(0), n(5)(0)] # P _ P _ _
toggle.intersect(a, b) # → P _ _ _ _ (present only where BOTH are present)
toggle.union(a, b) # → P P P P _ (present where EITHER is present)
# These work at any depth — matrices, 3D tensors, etc.
# Equivalent to across() with AND/OR logic functions
```
### Backward Compatibility
`toggle.ys` = `toggle.where`, `toggle.nt` = `toggle.exclude` — old names still work.
All existing vector and matrix code works unchanged. The `axis` parameter defaults to `-1` (last axis), which matches the original behavior.
## API Reference
### Types
- `AbsentNumber(value, absence_level=0)` — A number with a state. `absence_level` 0 = present, 1 = absent. Callable: `num(0)` flips state, `num(1)` keeps state, `num(k)` multiplies
- `n(value, absence_level=0)` — Shorthand for creating AbsentNumbers: `n(5)` = present 5, `n(5)(0)` = absent 5
- `Void` / `VOID` — Represents complete cancellation
- `ErasureResult(remainder, erased)` — Result of an erasure operation
- `ErasedExcess(value, absence_level=0)` — Excess erasure debt from over-erasure
- `Unresolved(left, op, right)` — An expression that can't be simplified
### Functions
- `add(x, y)` — Add two values (supports compound inputs with excess resolution)
- `subtract(x, y)` — Subtract two AbsentNumbers
- `multiply(x, y)` — Multiply two AbsentNumbers
- `divide(x, y)` — Divide two AbsentNumbers
- `erase(x, y)` — Erase y from x (supports over-erasure and compound inputs)
- `solve(expr_string)` — Parse and evaluate a string expression (supports parentheses)
- `format_result(result)` — Convert any result to a readable string
- `parse_number(s)` — Parse a string like `"5(0)"` into an AbsentNumber
### Toggle — Core
- `toggle.where(pattern, range, data, axis=-1)` — Toggle elements at pattern-computed indices along a specific axis
- `toggle.exclude(pattern, range, data, axis=-1)` — Toggle all elements NOT at pattern-computed indices along a specific axis
- `toggle.all(data)` — Flip the state of every element at any depth
### Toggle — Tensor (v0.4.0)
- `toggle.tensor(shape, fill='absent')` — Create a tensor of any shape filled with sequential AbsentNumbers
- `toggle.rank(data)` — Detect depth of nesting (0=scalar, 1=vector, 2=matrix, 3+=tensor)
- `toggle.shape(data)` — Return dimensions as a tuple
- `toggle.select(data, axis, index)` — Extract a slice along an axis (result is one rank lower)
- `toggle.assign(data, axis, index, value)` — Replace a slice at a given position
- `toggle.across(data, axis, fn)` — Combine elements across an axis using a function
- `toggle.count_present(data, axis=None)` — Count present elements (total or along an axis)
- `toggle.present_indices(vector)` — Return which positions are present in a vector
- `toggle.intersect(a, b)` — Present only where both inputs are present (convenience for across with AND)
- `toggle.union(a, b)` — Present where either input is present (convenience for across with OR)
- `toggle.ys` / `toggle.nt` — Backward-compatible aliases for `where` / `exclude`
### Constants
- `ALL_OPERATIONS` — Dictionary describing all operations with rules and examples
## License
MIT
| text/markdown | PhantomTrace Project | null | null | null | MIT | math, calculus, absence, abstract-algebra, number-theory, phantomtrace, dual-state | [
"Development Status :: 3 - Alpha",
"Intended Audience :: Education",
"Intended Audience :: Science/Research",
"License :: OSI Approved :: MIT License",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.8",
"Programming Language :: Python :: 3.9",
"Programming Language :: Pyth... | [] | null | null | >=3.8 | [] | [] | [] | [] | [] | [] | [] | [
"Homepage, https://github.com/phantomtrace/phantomtrace",
"Documentation, https://github.com/phantomtrace/phantomtrace#readme",
"Issues, https://github.com/phantomtrace/phantomtrace/issues"
] | twine/6.2.0 CPython/3.11.14 | 2026-02-19T03:15:50.578714 | phantomtrace-0.4.0.tar.gz | 15,109 | 5c/90/251ed0db9248cab3d0d51df5e1cb14c492825471ec1b2a8f99420b281158/phantomtrace-0.4.0.tar.gz | source | sdist | null | false | 2e8c74455f39ed4a3b9ec6cc8089ee66 | 885a55a59867b6775d885a52f898e204ccc4b7d1e892134dd27926b540ece316 | 5c90251ed0db9248cab3d0d51df5e1cb14c492825471ec1b2a8f99420b281158 | null | [
"LICENSE"
] | 0 |
2.4 | python-doctor | 0.2.0 | One command. One score. Built for AI agents. Scans Python codebases and returns a 0-100 health score. | # Python Doctor 🐍
[](https://github.com/saikatkumardey/python-doctor/actions/workflows/test.yml)
[](https://pypi.org/project/python-doctor/)
[](https://pypi.org/project/python-doctor/)
[](LICENSE)
**One command. One score. Built for AI agents.**
Python Doctor scans a Python codebase and returns a 0-100 health score with structured, actionable output. It's designed so an AI agent can run it, read the results, fix the issues, and verify the fix — in a loop, without human intervention.
```bash
python-doctor .
# 📊 Score: 98/100 (Excellent)
```
## Why?
Setting up linting, security scanning, dead code detection, and complexity analysis means configuring 5+ tools, reading 5 different output formats, and deciding what matters. Python Doctor wraps them all into a single command with a single score.
An agent doesn't need to know what Bandit is. It just needs to know the score dropped and which lines to fix.
## Install the CLI
```bash
# Using pip
pip install python-doctor
# Using uv
uv tool install python-doctor
# Or clone and run directly (for development)
git clone https://github.com/saikatkumardey/python-doctor.git
cd python-doctor
uv run python-doctor /path/to/project
```
## Add to Your Coding Agent
Python Doctor works with any agent that can run shell commands. Install the CLI (above), then add the rule to your agent:
### Claude Code (Plugin)
Install as a Claude Code plugin to get the `/python-doctor` slash command:
```bash
/plugin marketplace add saikatkumardey/python-doctor
/plugin install python-doctor@python-doctor-plugins
```
Then use `/python-doctor` after modifying Python files. Claude will run the scan, fix issues by severity, and re-run until the score target is met.
<details>
<summary>Manual setup (without plugin)</summary>
Add to your `CLAUDE.md`:
```markdown
## Python Health Check
Before finishing work on Python files, run:
python-doctor . --json
Fix any findings with severity "error". Target score: 80+.
If score drops below 50, do not commit — fix the issues first.
```
</details>
### Cursor
Add to `.cursor/rules/python-doctor.mdc`:
```markdown
---
description: Python codebase health check
globs: "**/*.py"
alwaysApply: false
---
Run `python-doctor . --json` after modifying Python files.
Fix findings. Target score: 80+. Do not commit below 50.
```
### OpenAI Codex
Add to `AGENTS.md`:
```markdown
## Python Health Check
After modifying Python files, run `python-doctor . --json` to check codebase health.
Fix any findings. Target score: 80+. Exit code 1 means score < 50 — fix before committing.
```
### Windsurf / Cline / Aider
Add to your project rules or system prompt:
```
After modifying Python files, run: python-doctor . --json
Read the output. Fix findings with severity "error" first, then warnings.
Re-run to verify the score improved. Target: 80+.
```
### GitHub Actions (CI)
```yaml
- name: Health Check
  run: |
    uv tool install python-doctor
    python-doctor . --verbose
```
Exits with code 1 if score < 50.
## Usage
```bash
# Scan current directory
python-doctor .
# Verbose — show all findings with line numbers
python-doctor . --verbose
# Just the score (for CI or quick checks)
python-doctor . --score
# Structured JSON for agents
python-doctor . --json
# Auto-fix what Ruff can handle, then report the rest
python-doctor . --fix
```
## What It Checks
9 categories, 5 external tools + 4 custom AST analyzers:
| Category | Max | What |
|----------|-----|------|
| 🔒 Security | -30 | Bandit (SQLi, hardcoded secrets, unsafe calls). Auto-skips `assert` in test files. |
| 🧹 Lint | -25 | Ruff (unused imports, undefined names, style) |
| 💀 Dead Code | -15 | Vulture (unused functions, variables, imports) |
| 🔄 Complexity | -15 | Radon (cyclomatic complexity > 10) |
| 🏗 Structure | -15 | File sizes, test ratio, type hints, README, LICENSE, linter/type-checker config |
| 📦 Dependencies | -15 | Build file exists, no mixed systems, pip-audit vulnerabilities |
| 📝 Docstrings | -10 | Public function/class docstring coverage |
| 🔗 Imports | -10 | Star imports, circular import detection |
| ⚡ Exceptions | -10 | Bare `except:`, silently swallowed exceptions |
Score = `max(0, 100 - total_deductions)`. Each category is capped at its max.
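The scoring rule can be sketched in a few lines (the category keys here are illustrative; only the cap values come from the table above):

```python
# Per-category deduction caps, mirroring the "Max" column above.
CAPS = {"security": 30, "lint": 25, "dead_code": 15, "complexity": 15,
        "structure": 15, "dependencies": 15, "docstrings": 10,
        "imports": 10, "exceptions": 10}

def score(deductions: dict[str, int]) -> int:
    # Cap each category at its max, then subtract the total from 100.
    total = sum(min(points, CAPS[cat]) for cat, points in deductions.items())
    return max(0, 100 - total)

print(score({"lint": 40, "docstrings": 4}))  # 71 (lint capped at 25)
```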
## The Loop
This is how an agent uses it:
1. `python-doctor . --json` → read the report
2. Fix the findings (auto-fix with `--fix`, manual fixes for the rest)
3. `python-doctor . --score` → verify improvement
4. Repeat until score target met
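The loop above can be sketched as follows. The doctor invocation is stubbed out as a callable; a real agent would shell out to `python-doctor . --json` and parse the report:

```python
def improve(run_doctor, apply_fixes, target=80, max_rounds=5):
    """Drive the scan -> fix -> re-scan loop until `target` is reached."""
    report = run_doctor()                # e.g. parsed `python-doctor . --json`
    for _ in range(max_rounds):
        if report["score"] >= target:
            break
        apply_fixes(report["findings"])  # `--fix` plus manual edits
        report = run_doctor()            # verify the score improved
    return report["score"]
```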
We built Python Doctor, then ran it on itself. Score: 47. Fixed everything it flagged. Score: 98. The tool eats its own dog food.
## License
MIT — Saikat Kumar Dey, 2026
| text/markdown | null | Saikat Kumar Dey <deysaikatkumar@gmail.com> | null | null | MIT | ai-agents, code-quality, developer-tools, linting, python, static-analysis | [
"Development Status :: 4 - Beta",
"Environment :: Console",
"Intended Audience :: Developers",
"License :: OSI Approved :: MIT License",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
... | [] | null | null | >=3.10 | [] | [] | [] | [
"bandit>=1.7.0",
"radon>=6.0.0",
"ruff>=0.4.0",
"vulture>=2.11",
"pytest>=7.0; extra == \"dev\""
] | [] | [] | [] | [
"Homepage, https://github.com/saikatkumardey/python-doctor",
"Repository, https://github.com/saikatkumardey/python-doctor",
"Issues, https://github.com/saikatkumardey/python-doctor/issues",
"Changelog, https://github.com/saikatkumardey/python-doctor/blob/main/CHANGELOG.md"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-19T03:14:57.998496 | python_doctor-0.2.0.tar.gz | 33,912 | f6/1c/2fdd7de1f95bf895bb65b350719c4a8cb543bf1dea76ff6fc5ad74707445/python_doctor-0.2.0.tar.gz | source | sdist | null | false | 0633df856ab3a02412e5f7579f14711e | 043a100530ab03674692f678c630602a5c650c67f599b423ddb374fdd05697d1 | f61c2fdd7de1f95bf895bb65b350719c4a8cb543bf1dea76ff6fc5ad74707445 | null | [
"LICENSE"
] | 230 |
2.4 | asap-protocol | 1.4.0 | Async Simple Agent Protocol - A streamlined protocol for agent-to-agent communication | # ASAP: Async Simple Agent Protocol
*✨ From **agents**, for **agents**. Delivering reliability, **as soon as possible.***

> A production-ready protocol for agent-to-agent communication and task coordination.
**Quick Info**: `v1.4.0` | `Apache 2.0` | `Python 3.13+` | [Documentation](https://github.com/adriannoes/asap-protocol/blob/main/docs/index.md) | [PyPI](https://pypi.org/project/asap-protocol/) | [Changelog](https://github.com/adriannoes/asap-protocol/blob/main/CHANGELOG.md)
## Why ASAP?
Building multi-agent systems today means confronting three core technical challenges that existing protocols like A2A don't fully address:
1. **$N^2$ Connection Complexity**: Most protocols assume static point-to-point HTTP connections that don't scale.
2. **State Drift**: Lack of native persistence makes it impossible to reliably resume long-running agentic workflows.
3. **Fragmentation**: No unified way to handle task delegation, artifact exchange and tool execution (MCP) in a single envelope.
**ASAP** provides a production-ready communication layer that simplifies these complexities. It's ideal for **multi-agent orchestration**, **stateful workflows** (persistence, resumability), **MCP integration**, and **production systems** requiring high-performance, type-safe agent communication.
For simple point-to-point communication, a basic HTTP API might suffice; ASAP shines when you need orchestration, state management and multi-agent coordination. See the [spec](https://github.com/adriannoes/asap-protocol/blob/main/.cursor/product-specs/strategy/v0-original-specs.md) for details.
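To make the "state drift" point concrete, here is a minimal, protocol-agnostic sketch of snapshot-based resumability. This is illustrative Python only, not the ASAP API:

```python
import json
import tempfile
from pathlib import Path

def run_task(snapshot: Path, total_steps: int = 3) -> dict:
    # Resume from the last snapshot if one exists, else start fresh.
    state = json.loads(snapshot.read_text()) if snapshot.exists() else {"step": 0}
    while state["step"] < total_steps:
        state["step"] += 1                      # do one unit of work
        snapshot.write_text(json.dumps(state))  # persist after every step
    return state

snap = Path(tempfile.mkdtemp()) / "task.json"
snap.write_text(json.dumps({"step": 2}))        # pretend we crashed at step 2
print(run_task(snap))  # {'step': 3} -- resumes instead of restarting
```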
### Key Features
- **Stateful orchestration** — Task state machine with snapshotting for resumable workflows.
- **Schema-first** — Pydantic v2 + JSON Schema for cross-agent interoperability.
- **Async-native** — `asyncio` + `httpx`; sync and async handlers supported.
- **MCP integration** — Tool execution and coordination in a single envelope.
- **Observable** — `trace_id` and `correlation_id` for debugging.
- **Security** — Bearer auth, OAuth2/JWT (v1.1), Ed25519 signed manifests (v1.2), optional mTLS, replay prevention, HTTPS, rate limiting. [v1.1 Security Model](https://github.com/adriannoes/asap-protocol/blob/main/docs/security/v1.1-security-model.md) (trust limits, Custom Claims).
- **Economics (v1.3)** — Usage metering, delegation tokens, SLA framework with breach alerts.
## Installation
We recommend using [uv](https://github.com/astral-sh/uv) for dependency management:
```bash
uv add asap-protocol
```
Or with pip:
```bash
pip install asap-protocol
```
📦 **Available on [PyPI](https://pypi.org/project/asap-protocol/)**. For reproducible environments, prefer `uv` when possible.
## Quick Start
**Run the demo** (echo agent + coordinator in one command):
```bash
uv run python -m asap.examples.run_demo
```
**v1.4.0 showcase** (Pagination on Usage & SLA history):
```bash
uv run python -m asap.examples.v1_4_0_showcase
```
**v1.3.0 showcase** (Delegation + Metering + SLA in one command):
```bash
uv run python -m asap.examples.v1_3_0_showcase
```
**Build your first agent** [here](docs/tutorials/first-agent.md) — server setup, client code, step-by-step (~15 min).
[15+ examples](src/asap/examples/README.md): orchestration, state migration, MCP, OAuth2, WebSocket, resilience.
## Testing
```bash
uv run pytest -n auto --tb=short
```
With coverage:
```bash
uv run pytest --cov=src --cov-report=term-missing
```
[Testing Guide](https://github.com/adriannoes/asap-protocol/blob/main/docs/testing.md) (structure, fixtures, property/load/chaos tests). [Contributing](https://github.com/adriannoes/asap-protocol/blob/main/CONTRIBUTING.md) (dev setup, CI).
### Compliance Harness (v1.2)
Validate that your agent follows the ASAP protocol:
```bash
uv add asap-compliance
pytest --asap-agent-url https://your-agent.example.com -m asap_compliance
```
See [Compliance Testing Guide](https://github.com/adriannoes/asap-protocol/blob/main/docs/guides/compliance-testing.md) for handshake, schema and state machine validation.
## Benchmarks
[Benchmark Results](https://github.com/adriannoes/asap-protocol/blob/main/benchmarks/RESULTS.md): load (1,500+ RPS), stress, memory.
## Documentation
**Learn**
- [Docs](https://github.com/adriannoes/asap-protocol/blob/main/docs/index.md) | [API Reference](https://github.com/adriannoes/asap-protocol/blob/main/docs/api-reference.md)
- [Tutorials](https://github.com/adriannoes/asap-protocol/tree/main/docs/tutorials) — First agent to production checklist
- [Migration from A2A/MCP](https://github.com/adriannoes/asap-protocol/blob/main/docs/migration.md)
**Deep Dive**
- [State Management](https://github.com/adriannoes/asap-protocol/blob/main/docs/state-management.md) | [Best Practices: Failover & Migration](https://github.com/adriannoes/asap-protocol/blob/main/docs/best-practices/agent-failover-migration.md) | [Error Handling](https://github.com/adriannoes/asap-protocol/blob/main/docs/error-handling.md)
- [Transport](https://github.com/adriannoes/asap-protocol/blob/main/docs/transport.md) | [Security](https://github.com/adriannoes/asap-protocol/blob/main/docs/security.md) | [v1.1 Security Model](https://github.com/adriannoes/asap-protocol/blob/main/docs/security/v1.1-security-model.md) (OAuth2 trust, Custom Claims, ADR-17)
- **v1.2**: [Identity Signing](https://github.com/adriannoes/asap-protocol/blob/main/docs/guides/identity-signing.md) | [Compliance Testing](https://github.com/adriannoes/asap-protocol/blob/main/docs/guides/compliance-testing.md) | [Migration v1.1→v1.2](https://github.com/adriannoes/asap-protocol/blob/main/docs/guides/migration-v1.1-to-v1.2.md) | [mTLS](https://github.com/adriannoes/asap-protocol/blob/main/docs/security/mtls.md)
- [Observability](https://github.com/adriannoes/asap-protocol/blob/main/docs/observability.md) | [Testing](https://github.com/adriannoes/asap-protocol/blob/main/docs/testing.md)
**Decisions & Operations**
- [ADRs](https://github.com/adriannoes/asap-protocol/tree/main/docs/adr) — 17 Architecture Decision Records
- [Tech Stack](https://github.com/adriannoes/asap-protocol/blob/main/.cursor/dev-planning/architecture/tech-stack-decisions.md) — Rationale for Python, Pydantic, Next.js choices
- [Deployment](https://github.com/adriannoes/asap-protocol/blob/main/docs/deployment/kubernetes.md) | [Troubleshooting](https://github.com/adriannoes/asap-protocol/blob/main/docs/troubleshooting.md)
**Release**
- [Changelog](https://github.com/adriannoes/asap-protocol/blob/main/CHANGELOG.md) | [PyPI](https://pypi.org/project/asap-protocol/)
## CLI
**v1.1** adds OAuth2, WebSocket, Discovery (well-known + Lite Registry), State Storage (SQLite), and Webhooks. **v1.2** adds Ed25519 signed manifests, trust levels, optional mTLS, and the [Compliance Harness](https://github.com/adriannoes/asap-protocol/blob/main/asap-compliance/README.md). **v1.3** adds delegation commands (`asap delegation create`, `asap delegation revoke`).
```bash
asap --version # Show version
asap list-schemas # List all available schemas
asap export-schemas # Export JSON schemas to file
asap keys generate -o key.pem # Generate Ed25519 keypair (v1.2)
asap manifest sign -k key.pem manifest.json # Sign manifest (v1.2)
asap manifest verify signed.json # Verify signature (v1.2)
asap manifest info signed.json # Show trust level (v1.2)
```
See [CLI reference](https://github.com/adriannoes/asap-protocol/blob/main/docs/guides/identity-signing.md) or run `asap --help`.
See [docs index](https://github.com/adriannoes/asap-protocol/blob/main/docs/index.md#v11-features-api-reference--guides) and [Identity Signing](https://github.com/adriannoes/asap-protocol/blob/main/docs/guides/identity-signing.md) for details.
## What's Next? 🔭
ASAP is evolving toward an **Agent Marketplace** — an open ecosystem where AI agents discover, trust and collaborate autonomously:
- **v1.1**: Identity Layer (OAuth2, WebSocket, Discovery) ✅
- **v1.2**: Trust Layer (Signed Manifests, Compliance Harness, mTLS) ✅
- **v1.3**: Economics Layer (Metering, SLAs, Delegation) ✅
- **v1.4**: Resilience & Scale (Type Safety, Storage Pagination) ✅
- **v2.0**: Agent Marketplace with Web App
See our [vision document](https://github.com/adriannoes/asap-protocol/blob/main/.cursor/product-specs/strategy/vision-agent-marketplace.md) for the full roadmap.
## Contributing
**Community feedback and contributions are essential** for ASAP Protocol's evolution.
We're working on improvements and your input helps shape the future of the protocol. Every contribution, from bug reports to feature suggestions, documentation improvements and code contributions, makes a real difference.
Check out our [contributing guidelines](https://github.com/adriannoes/asap-protocol/blob/main/CONTRIBUTING.md) to get started. It's easier than you think! 🚀
## License
This project is licensed under the Apache 2.0 License - see the [license](https://github.com/adriannoes/asap-protocol/blob/main/LICENSE) file for details.
---
**Built with [Cursor](https://cursor.com/)** using Composer 1.5, Claude Sonnet/Opus 4.5/4.6, Gemini 3.0 Pro and Kimi K2.5.
| text/markdown | ASAP Protocol Contributors | null | null | null | Apache-2.0 | a2a, agent, async, communication, mcp, protocol | [
"Development Status :: 5 - Production/Stable",
"Intended Audience :: Developers",
"License :: OSI Approved :: Apache Software License",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.13",
"Topic :: Software Development :: Libraries :: Python Modules",
"Topic :: System :: Dis... | [] | null | null | >=3.13 | [] | [] | [] | [
"aiosqlite>=0.20",
"authlib>=1.3",
"brotli>=1.2.0",
"cryptography>=41.0",
"fastapi>=0.128.0",
"httpx[http2]>=0.28.1",
"jcs>=0.2.0",
"joserfc>=1.0",
"jsonschema>=4.23.0",
"limits>=3.0",
"opentelemetry-api>=1.20",
"opentelemetry-exporter-otlp-proto-grpc>=1.20",
"opentelemetry-instrumentation-f... | [] | [] | [] | [
"Homepage, https://github.com/adriannoes/asap-protocol",
"Documentation, https://adriannoes.github.io/asap-protocol",
"Repository, https://github.com/adriannoes/asap-protocol",
"Issues, https://github.com/adriannoes/asap-protocol/issues",
"PyPI, https://pypi.org/project/asap-protocol/"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-19T03:13:22.540665 | asap_protocol-1.4.0.tar.gz | 1,257,154 | a8/5d/74deec716a0e4cef5330fd98eaa86e73496c863ea6f88bc51cdca72cc34f/asap_protocol-1.4.0.tar.gz | source | sdist | null | false | 336b826d3b45a1df73b900dd57211520 | 468482e30e11212310ccf6c370a2c92f118380620bb21da5fff0c7ded72803b1 | a85d74deec716a0e4cef5330fd98eaa86e73496c863ea6f88bc51cdca72cc34f | null | [
"LICENSE"
] | 239 |
2.4 | llamabot | 0.17.15 | A Pythonic interface to LLMs. | # LlamaBot: A Pythonic bot interface to LLMs
LlamaBot implements a Pythonic interface to LLMs,
making it much easier to experiment with LLMs in a Jupyter notebook
and build Python apps that utilize LLMs.
All models supported by [LiteLLM](https://github.com/BerriAI/litellm) are supported by LlamaBot.
## Install LlamaBot
To install LlamaBot:
```bash
pip install llamabot
```
This will give you the minimum set of dependencies for running LlamaBot.
To install all of the optional dependencies, run:
```bash
pip install "llamabot[all]"
```
## Get access to LLMs
### Option 1: Using local models with Ollama
LlamaBot supports using local models through Ollama.
To do so, head over to the [Ollama website](https://ollama.ai) and install Ollama.
Then follow the instructions below.
### Option 2: Use an API provider
#### OpenAI
If you have an OpenAI API key, then configure LlamaBot to use the API key by running:
```bash
export OPENAI_API_KEY="sk-your1api2key3goes4here"
```
#### Mistral
If you have a Mistral API key, then configure LlamaBot to use the API key by running:
```bash
export MISTRAL_API_KEY="your-api-key-goes-here"
```
#### Other API providers
Other API providers will usually specify an environment variable to set.
If you have an API key, then set the environment variable accordingly.
### Option 3: Using local models with LMStudio
LlamaBot supports using local models through LMStudio via LiteLLM.
To use LMStudio with LlamaBot:
1. Install and set up [LMStudio](https://lmstudio.ai/)
2. Load your desired model in LMStudio
3. Start the local server in LMStudio (usually runs on `http://localhost:1234`)
4. Set the environment variable for LMStudio's API base:
```bash
export LM_STUDIO_API_BASE="http://localhost:1234"
```
5. Use the model with LlamaBot using the `lm_studio/` prefix:
```python
import llamabot as lmb
system_prompt = "You are a helpful assistant."
bot = lmb.SimpleBot(
system_prompt,
model_name="lm_studio/your-model-name" # Use lm_studio/ prefix
)
```
Replace `your-model-name` with the actual name of the model you've loaded in LMStudio. LlamaBot can use any model provider that LiteLLM supports, and LMStudio is one of the many supported providers.
## How to use
!!! tip "Not sure which bot to use?"
Check out the [**Which Bot Should I Use?**](getting-started/which-bot.md) guide to help you choose the right bot for your needs.
### SimpleBot
The simplest use case of LlamaBot
is to create a `SimpleBot` that keeps no record of chat history.
This is effectively the same as a _stateless function_
that you program with natural language instructions rather than code.
This is useful for prompt experimentation,
or for creating simple bots that are preconditioned on an instruction to handle texts
and are then called upon repeatedly with different texts.
#### Using `SimpleBot` with an API provider
For example, to create a Bot that explains a given chunk of text
like Richard Feynman would:
```python
import llamabot as lmb
system_prompt = "You are Richard Feynman. You will be given a difficult concept, and your task is to explain it back."
feynman = lmb.SimpleBot(
system_prompt,
model_name="gpt-4.1-mini"
)
```
To use GPT models, you need the `OPENAI_API_KEY` environment variable configured. If you want to use `SimpleBot` with a local Ollama model, [check out this example](#using-simplebot-with-a-local-ollama-model).
Now, `feynman` is callable on any arbitrary chunk of text and will return a rephrasing of that text in Richard Feynman's style (or more accurately, according to the style prescribed by the `system_prompt`).
For example:
```python
prompt = """
Enzyme function annotation is a fundamental challenge, and numerous computational tools have been developed.
However, most of these tools cannot accurately predict functional annotations,
such as enzyme commission (EC) number,
for less-studied proteins or those with previously uncharacterized functions or multiple activities.
We present a machine learning algorithm named CLEAN (contrastive learning–enabled enzyme annotation)
to assign EC numbers to enzymes with better accuracy, reliability,
and sensitivity compared with the state-of-the-art tool BLASTp.
The contrastive learning framework empowers CLEAN to confidently (i) annotate understudied enzymes,
(ii) correct mislabeled enzymes, and (iii) identify promiscuous enzymes with two or more EC numbers—functions
that we demonstrate by systematic in silico and in vitro experiments.
We anticipate that this tool will be widely used for predicting the functions of uncharacterized enzymes,
thereby advancing many fields, such as genomics, synthetic biology, and biocatalysis.
"""
feynman(prompt)
```
This will return something that looks like:
```text
Alright, let's break this down.
Enzymes are like little biological machines that help speed up chemical reactions in our
bodies. Each enzyme has a specific job, or function, and we use something called an
Enzyme Commission (EC) number to categorize these functions.
Now, the problem is that we don't always know what function an enzyme has, especially if
it's a less-studied or new enzyme. This is where computational tools come in. They try
to predict the function of these enzymes, but they often struggle to do so accurately.
So, the folks here have developed a new tool called CLEAN, which stands for contrastive
learning–enabled enzyme annotation. This tool uses a machine learning algorithm, which
is a type of artificial intelligence that learns from data to make predictions or
decisions.
CLEAN uses a method called contrastive learning. Imagine you have a bunch of pictures of
cats and dogs, and you want to teach a machine to tell the difference. You'd show it
pairs of pictures, some of the same animal (two cats or two dogs) and some of different
animals (a cat and a dog). The machine would learn to tell the difference by contrasting
the features of the two pictures. That's the basic idea behind contrastive learning.
CLEAN uses this method to predict the EC numbers of enzymes more accurately than
previous tools. It can confidently annotate understudied enzymes, correct mislabeled
enzymes, and even identify enzymes that have more than one function.
The creators of CLEAN have tested it with both computer simulations and lab experiments,
and they believe it will be a valuable tool for predicting the functions of unknown
enzymes. This could have big implications for fields like genomics, synthetic biology,
and biocatalysis, which all rely on understanding how enzymes work.
```
#### Using `SimpleBot` with a Local Ollama Model
If you want to use an Ollama model hosted locally,
then you would use the following syntax:
```python
import llamabot as lmb
system_prompt = "You are Richard Feynman. You will be given a difficult concept, and your task is to explain it back."
bot = lmb.SimpleBot(
system_prompt,
model_name="ollama_chat/llama2:13b"
)
```
Simply specify the `model_name` keyword argument following the `<provider>/<model name>` format. For example:
* `ollama_chat/` as the prefix, and
* a model name from the [Ollama library of models](https://ollama.ai/library)
All you need to do is make sure Ollama is running locally;
see the [Ollama documentation](https://ollama.ai/) for more details.
(The same can be done for the `QueryBot` class below!)
The `model_name` argument is optional. If you don't provide it, LlamaBot will try to use the default model, which you can configure via the `DEFAULT_LANGUAGE_MODEL` environment variable.
### SimpleBot with memory for chat functionality
If you want chat functionality with memory, you can use SimpleBot with ChatMemory. This allows the bot to remember previous conversations:
```python
import llamabot as lmb
# Create a bot with memory
system_prompt = "You are Richard Feynman. You will be given a difficult concept, and your task is to explain it back."
# For simple linear memory (fast, no LLM calls)
memory = lmb.ChatMemory()
# For intelligent threading (uses LLM for smart connections)
# memory = lmb.ChatMemory.threaded(model="gpt-4o-mini")
feynman = lmb.SimpleBot(
system_prompt,
memory=memory,
model_name="gpt-4.1-mini"
)
# Have a conversation
response1 = feynman("Can you explain quantum mechanics?")
print(response1)
# The bot remembers the previous conversation
response2 = feynman("Can you give me a simpler explanation?")
print(response2)
```
The ChatMemory system provides intelligent conversation memory that can maintain context across multiple interactions. It supports both linear memory (fast, no LLM calls) and graph-based memory with intelligent threading (uses LLM to connect related conversation topics).
**Note**: For RAG (Retrieval-Augmented Generation) with document stores, use `QueryBot` with a document store instead of SimpleBot with memory. SimpleBot's memory parameter is specifically for conversational memory, while QueryBot is designed for document retrieval and question answering.
For more details on chat memory, see the [Chat Memory component documentation](reference/components/chat_memory.md).
### ToolBot
ToolBot is a specialized bot designed for single-turn tool execution and function calling. It analyzes user requests and selects the most appropriate tool to execute, making it perfect for automation tasks and data analysis workflows.
```python
import llamabot as lmb
from llamabot.components.tools import write_and_execute_code
# Create a ToolBot with code execution capabilities
bot = lmb.ToolBot(
system_prompt="You are a data analysis assistant.",
model_name="gpt-4.1",
tools=[write_and_execute_code(globals_dict=globals())],
memory=lmb.ChatMemory(),
)
# Create some data
import pandas as pd
import numpy as np
data = pd.DataFrame({
'x': np.random.randn(100),
'y': np.random.randn(100)
})
# Use the bot to analyze the data
response = bot("Calculate the correlation between x and y in the data DataFrame")
print(response)
```
ToolBot is ideal for:
* **Data analysis workflows** where you need to execute custom code
* **Automation tasks** that require specific function calls
* **API integrations** that need to call external services
* **Single-turn function calling** scenarios
### QueryBot
QueryBot lets you query a collection of documents.
It now works with a docstore that you create first, making it more modular.
Here's how to use QueryBot with a docstore:
```python
import llamabot as lmb
from pathlib import Path
# First, create a docstore and add your documents
docstore = lmb.LanceDBDocStore(table_name="eric_ma_blog")
docstore.add_documents([
Path("/path/to/blog/post1.txt"),
Path("/path/to/blog/post2.txt"),
# ... more documents
])
# Then, create a QueryBot with the docstore
bot = lmb.QueryBot(
system_prompt="You are an expert on Eric Ma's blog.",
docstore=docstore,
# Optional:
# model_name="gpt-4.1-mini"
# or
# model_name="ollama_chat/mistral"
)
result = bot("Do you have any advice for me on career development?")
```
You can also use an existing docstore:
```python
import llamabot as lmb
# Load an existing docstore
docstore = lmb.LanceDBDocStore(table_name="eric_ma_blog")
# Create QueryBot with the existing docstore
bot = lmb.QueryBot(
system_prompt="You are an expert on Eric Ma's blog",
docstore=docstore,
# Optional:
# model_name="gpt-4.1-mini"
# or
# model_name="ollama_chat/mistral"
)
result = bot("Do you have any advice for me on career development?")
```
For more explanation about the `model_name`, see [the examples with `SimpleBot`](#using-simplebot-with-a-local-ollama-model).
### StructuredBot
StructuredBot is designed for getting structured, validated outputs from LLMs.
Unlike SimpleBot, StructuredBot enforces Pydantic schema validation and provides
automatic retry logic when the LLM doesn't produce valid output.
```python
import llamabot as lmb
from pydantic import BaseModel
from typing import List
class Person(BaseModel):
name: str
age: int
hobbies: List[str]
# Create a StructuredBot with your Pydantic model
bot = lmb.StructuredBot(
system_prompt="Extract person information from text.",
pydantic_model=Person,
model_name="gpt-4o"
)
# The bot will return a validated Person object
person = bot("John is 25 years old and enjoys hiking and photography.")
print(person.name) # "John"
print(person.age) # 25
print(person.hobbies) # ["hiking", "photography"]
```
StructuredBot is perfect for:
* **Data extraction** from unstructured text
* **API responses** that need to match specific schemas
* **Form processing** with validation
* **Structured outputs** for downstream processing
### ImageBot
With the release of the OpenAI API updates,
as long as you have an OpenAI API key,
you can generate images with LlamaBot:
```python
import llamabot as lmb
bot = lmb.ImageBot()
# Within a Jupyter/Marimo notebook:
url = bot("A painting of a dog.")
# Or within a Python script
filepath = bot("A painting of a dog.")
# Now, you can do whatever you need with the url or file path.
```
If you're in a Jupyter/Marimo notebook,
you'll see the image show up magically as part of the output cell as well.
### Working with images and user input
You can easily pass images to your bots using `lmb.user()` with image file paths.
This is particularly useful for vision models that can analyze, describe, or answer questions about images:
```python
import llamabot as lmb
# Create a bot that can analyze images
vision_bot = lmb.SimpleBot(
"You are an expert image analyst. Describe what you see in detail.",
model_name="gpt-4o" # Use a vision-capable model
)
# Pass an image file path using lmb.user()
response = vision_bot(lmb.user("/path/to/your/image.jpg"))
print(response)
# You can also combine text and images
response = vision_bot(lmb.user(
"What colors are prominent in this image?",
"/path/to/your/image.jpg"
))
print(response)
```
The `lmb.user()` function automatically detects image files (PNG, JPG, JPEG, GIF, WebP)
and converts them to the appropriate format for the model.
You can use local file paths or even image URLs.
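That extension-based detection might look like the following. This is an assumption for illustration, not LlamaBot's actual implementation:

```python
from pathlib import Path

# Extensions mentioned above as recognized image formats.
IMAGE_EXTENSIONS = {".png", ".jpg", ".jpeg", ".gif", ".webp"}

def looks_like_image(value: str) -> bool:
    # Case-insensitive extension check on a file path or URL.
    return Path(value).suffix.lower() in IMAGE_EXTENSIONS

print(looks_like_image("photo.JPG"))   # True
print(looks_like_image("notes.txt"))   # False
```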
### Developer Messages with `lmb.dev()`
For development and debugging scenarios, you can use `lmb.dev()` to create developer messages that provide context about code changes, debugging instructions, or development tasks:
```python
import llamabot as lmb
# Create a bot for code development
dev_bot = lmb.SimpleBot(
"You are a helpful coding assistant. Help with development tasks.",
model_name="gpt-4o-mini"
)
# Use dev() for development context
response = dev_bot(lmb.dev("Add error handling to this function"))
print(response)
# Combine multiple development instructions
response = dev_bot(lmb.dev(
"Refactor this code to be more modular",
"Add comprehensive docstrings",
"Follow PEP8 style guidelines"
))
print(response)
```
**When to use `lmb.dev()`:**
* **Development tasks**: Code refactoring, debugging, testing
* **Code review**: Providing feedback on code quality
* **Documentation**: Adding docstrings, comments, or README updates
* **Debugging**: Describing issues or requesting fixes
**Message Type Hierarchy:**
* `lmb.system()` - Bot behavior and instructions
* `lmb.user()` - User input and questions
* `lmb.dev()` - Development context and tasks
### Experimentation
Automagically record your prompt experimentation locally on your system
by using llamabot's `Experiment` context manager:
```python
import llamabot as lmb
@lmb.prompt("system")
def sysprompt():
"""You are a funny llama."""
@lmb.prompt("user")
def joke_about(topic):
"""Tell me a joke about {{ topic }}."""
@lmb.metric
def response_length(response) -> int:
return len(response.content)
with lmb.Experiment(name="llama_jokes") as exp:
# You would have written this outside of the context manager anyways!
bot = lmb.SimpleBot(sysprompt(), model_name="gpt-4o")
response = bot(joke_about("cars"))
_ = response_length(response)
```
And now they will be viewable in the locally-stored message logs:

## CLI Demos
LlamaBot comes with CLI demos of what can be built with it and a bit of supporting code.
Here is one where I use `llamabot`'s `SimpleBot` to create a bot
that automatically writes commit messages for me.
[](https://asciinema.org/a/594334)
## Contributing
### New features
New features are welcome!
These are early and exciting days for users of large language models.
Our development goals are to keep the project as simple as possible.
Feature requests that come with a pull request will be prioritized;
the simpler the implementation of a feature (in terms of maintenance burden),
the more likely it will be approved.
### Bug reports
Please submit a bug report using the issue tracker.
### Questions/Discussions
Please use the issue tracker on GitHub.
## Contributors
<!-- ALL-CONTRIBUTORS-LIST:START - Do not remove or modify this section -->
<!-- prettier-ignore-start -->
<!-- markdownlint-disable -->
<table>
<tbody>
<tr>
<td align="center" valign="top" width="14.28%"><a href="https://github.com/RenaLu"><img src="https://avatars.githubusercontent.com/u/12033704?v=4?s=100" width="100px;" alt="Rena Lu"/><br /><sub><b>Rena Lu</b></sub></a><br /><a href="#code-RenaLu" title="Code">💻</a></td>
<td align="center" valign="top" width="14.28%"><a href="http://giessel.com"><img src="https://avatars.githubusercontent.com/u/1160997?v=4?s=100" width="100px;" alt="andrew giessel"/><br /><sub><b>andrew giessel</b></sub></a><br /><a href="#ideas-andrewgiessel" title="Ideas, Planning, & Feedback">🤔</a> <a href="#design-andrewgiessel" title="Design">🎨</a> <a href="#code-andrewgiessel" title="Code">💻</a></td>
<td align="center" valign="top" width="14.28%"><a href="https://github.com/aidanbrewis"><img src="https://avatars.githubusercontent.com/u/83365064?v=4?s=100" width="100px;" alt="Aidan Brewis"/><br /><sub><b>Aidan Brewis</b></sub></a><br /><a href="#code-aidanbrewis" title="Code">💻</a></td>
<td align="center" valign="top" width="14.28%"><a href="https://ericmjl.github.io/"><img src="https://avatars.githubusercontent.com/u/2631566?v=4?s=100" width="100px;" alt="Eric Ma"/><br /><sub><b>Eric Ma</b></sub></a><br /><a href="#ideas-ericmjl" title="Ideas, Planning, & Feedback">🤔</a> <a href="#design-ericmjl" title="Design">🎨</a> <a href="#code-ericmjl" title="Code">💻</a></td>
<td align="center" valign="top" width="14.28%"><a href="http://stackoverflow.com/users/116/mark-harrison"><img src="https://avatars.githubusercontent.com/u/7154?v=4?s=100" width="100px;" alt="Mark Harrison"/><br /><sub><b>Mark Harrison</b></sub></a><br /><a href="#ideas-marhar" title="Ideas, Planning, & Feedback">🤔</a></td>
<td align="center" valign="top" width="14.28%"><a href="https://github.com/reka"><img src="https://avatars.githubusercontent.com/u/382113?v=4?s=100" width="100px;" alt="reka"/><br /><sub><b>reka</b></sub></a><br /><a href="#doc-reka" title="Documentation">📖</a> <a href="#code-reka" title="Code">💻</a></td>
<td align="center" valign="top" width="14.28%"><a href="https://github.com/anujsinha3"><img src="https://avatars.githubusercontent.com/u/21972901?v=4?s=100" width="100px;" alt="anujsinha3"/><br /><sub><b>anujsinha3</b></sub></a><br /><a href="#code-anujsinha3" title="Code">💻</a> <a href="#doc-anujsinha3" title="Documentation">📖</a></td>
</tr>
<tr>
<td align="center" valign="top" width="14.28%"><a href="https://github.com/ElliotSalisbury"><img src="https://avatars.githubusercontent.com/u/2605537?v=4?s=100" width="100px;" alt="Elliot Salisbury"/><br /><sub><b>Elliot Salisbury</b></sub></a><br /><a href="#doc-ElliotSalisbury" title="Documentation">📖</a></td>
<td align="center" valign="top" width="14.28%"><a href="http://eefricker.github.io"><img src="https://avatars.githubusercontent.com/u/65178728?v=4?s=100" width="100px;" alt="Ethan Fricker, PhD"/><br /><sub><b>Ethan Fricker, PhD</b></sub></a><br /><a href="#doc-eefricker" title="Documentation">📖</a></td>
<td align="center" valign="top" width="14.28%"><a href="https://speakerdeck.com/eltociear"><img src="https://avatars.githubusercontent.com/u/22633385?v=4?s=100" width="100px;" alt="Ikko Eltociear Ashimine"/><br /><sub><b>Ikko Eltociear Ashimine</b></sub></a><br /><a href="#doc-eltociear" title="Documentation">📖</a></td>
<td align="center" valign="top" width="14.28%"><a href="https://github.com/amirmolavi"><img src="https://avatars.githubusercontent.com/u/19491452?v=4?s=100" width="100px;" alt="Amir Molavi"/><br /><sub><b>Amir Molavi</b></sub></a><br /><a href="#infra-amirmolavi" title="Infrastructure (Hosting, Build-Tools, etc)">🚇</a> <a href="#doc-amirmolavi" title="Documentation">📖</a></td>
</tr>
</tbody>
</table>
<!-- markdownlint-restore -->
<!-- prettier-ignore-end -->
<!-- ALL-CONTRIBUTORS-LIST:END -->
| text/markdown | null | null | null | null | null | null | [] | [] | null | null | <3.14,>=3.10 | [] | [] | [] | [
"beautifulsoup4",
"chonkie",
"docstring-parser<0.18,>=0.17.0",
"duckduckgo-search",
"fastapi>=0.115.9",
"fastmcp",
"httpx",
"jinja2",
"litellm>=1.71.0",
"loguru",
"networkx",
"numpy",
"numpydoc",
"openai",
"pdfminer-six",
"pocketflow",
"pydantic>=2.0",
"pyprojroot",
"python-doten... | [] | [] | [] | [
"Documentation, https://ericmjl.github.io/llamabot",
"Source Code, https://github.com/ericmjl/llamabot"
] | uv/0.10.4 {"installer":{"name":"uv","version":"0.10.4","subcommand":["publish"]},"python":null,"implementation":{"name":null,"version":null},"distro":{"name":"Ubuntu","version":"24.04","id":"noble","libc":null},"system":{"name":null,"release":null},"cpu":null,"openssl_version":null,"setuptools_version":null,"rustc_version":null,"ci":true} | 2026-02-19T03:12:51.908475 | llamabot-0.17.15.tar.gz | 2,253,808 | f9/86/95bd8aa342a1d592e053e6703d2fe8938982dfec4a9383edcfffad579f85/llamabot-0.17.15.tar.gz | source | sdist | null | false | 8013cd7b20cbf48df951a4ba7387f2f5 | d5e0a1276bdc00077ac5f5d37b86efdd72c048e26e99d1d3066906375655e868 | f98695bd8aa342a1d592e053e6703d2fe8938982dfec4a9383edcfffad579f85 | null | [] | 323 |
2.4 | monsterui | 1.0.44 | The simplicity of FastHTML with the power of Tailwind | # MonsterUI
MonsterUI is a UI framework for FastHTML for building beautiful web interfaces with minimal code. It combines the simplicity of Python with the power of Tailwind. Perfect for data scientists, ML engineers, and developers who want to quickly turn their Python code into polished web apps without the complexity of traditional UI frameworks. Follows semantic HTML patterns when possible.
MonsterUI adds the following Tailwind-based libraries to FastHTML: [Franken UI](https://franken-ui.dev/) and [DaisyUI](https://daisyui.com/). It also includes Python's [Mistletoe](https://github.com/miyuchina/mistletoe) for Markdown, [HighlightJS](https://highlightjs.org/) for code highlighting, and [Katex](https://katex.org/) for LaTeX support.
# Getting Started
## Installation
To install this library, run:
`pip install MonsterUI`
## Getting Started
### TLDR
Save this as `file.py` and run `python file.py` to start:
``` python
from fasthtml.common import *
from monsterui.all import *
# Choose a theme color (blue, green, red, etc)
hdrs = Theme.blue.headers()
# Create your app with the theme
app, rt = fast_app(hdrs=hdrs)
@rt
def index():
socials = (('github','https://github.com/AnswerDotAI/MonsterUI'),
('twitter','https://twitter.com/isaac_flath/'),
('linkedin','https://www.linkedin.com/in/isaacflath/'))
return Titled("Your First App",
Card(
H1("Welcome!"),
P("Your first MonsterUI app", cls=TextPresets.muted_sm),
P("I'm excited to see what you build with MonsterUI!"),
footer=DivLAligned(*[UkIconLink(icon,href=url) for icon,url in socials])))
serve()
```
## LLM context files
Using LLMs is a great way to get started and explore the library. LLMs cannot write your code for you, but they are helpful productivity tools: always check, refactor, test, and vet any code an LLM generates. Take a look inside the `llms.txt` file to see links to particularly useful context files!
- [llms.txt](https://raw.githubusercontent.com/AnswerDotAI/MonsterUI/refs/heads/main/docs/llms.txt): Links to what is included
- [llms-ctx.txt](https://raw.githubusercontent.com/AnswerDotAI/MonsterUI/refs/heads/main/docs/llms-ctx.txt): MonsterUI Documentation Pages
- [API list](https://raw.githubusercontent.com/AnswerDotAI/MonsterUI/refs/heads/main/docs/apilist.txt): API list for MonsterUI (included in llms-ctx.txt)
- [llms-ctx-full.txt](https://raw.githubusercontent.com/AnswerDotAI/MonsterUI/refs/heads/main/docs/llms-ctx-full.txt): Full context that includes all api reference pages as markdown
In addition, you can append `/md` to a URL to get a markdown representation, or `/rmd` for a rendered markdown representation (useful for seeing exactly what would be put into context).
### Step by Step
To get started:
1. Import the modules as follows:
``` python
from fasthtml.common import *
from monsterui.all import *
```
2. Instantiate the app with the MonsterUI headers
``` python
app = FastHTML(hdrs=Theme.blue.headers())
# Alternatively, using the fast_app method
app, rt = fast_app(hdrs=Theme.slate.headers())
```
> *The color option can be any of the theme options available out of the
> box*
> *`katex` and `highlightjs` are not included by default. To include them, set `katex=True` or `highlightjs=True` when calling `.headers()` (e.g. `Theme.slate.headers(katex=True)`).*
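For example, a header setup that enables both optional extras might look like this (a sketch; the theme color is your choice):

```python
from fasthtml.common import *
from monsterui.all import *

# katex and highlightjs are opt-in; pass the flags when building the headers
hdrs = Theme.slate.headers(katex=True, highlightjs=True)

# Use the headers exactly as in the TLDR example above
app, rt = fast_app(hdrs=hdrs)
```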
From here, you can explore the API Reference and examples to see how to
implement the components. You can also check out these demo videos as a
quick-start guide:
- MonsterUI [documentation page and Tutorial
app](https://monsterui.answer.ai/tutorial_app)
- Isaac & Hamel : [Building his website’s team
page](https://youtu.be/22Jn46-mmM0)
- Isaac & Audrey : [Building a blog](https://youtu.be/gVWAsywxLXE)
- Isaac : [Building a blog](https://youtu.be/22NJgfAqgko)
More resources and improvements to the documentation will be added here
soon!
| text/markdown | null | isaac flath <isaac.flath@gmail.com> | null | null | Apache-2.0 | nbdev, jupyter, notebook, python | [
"Natural Language :: English",
"Intended Audience :: Developers",
"Development Status :: 3 - Alpha",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3 :: Only"
] | [] | null | null | >=3.10 | [] | [] | [] | [
"python-fasthtml",
"fastcore",
"lxml",
"mistletoe",
"mistlefoot>=0.0.3",
"beautifulsoup4",
"pandas; extra == \"dev\"",
"jinja2; extra == \"dev\"",
"llms-txt; extra == \"dev\"",
"pysymbol_llm; extra == \"dev\""
] | [] | [] | [] | [
"Repository, https://github.com/AnswerDotAI/MonsterUI",
"Documentation, https://monsterui.answer.ai/MonsterUI"
] | twine/6.2.0 CPython/3.12.8 | 2026-02-19T03:12:22.995832 | monsterui-1.0.44.tar.gz | 37,755 | 59/f9/58a6752571e4d72d8ed609471d9dc09c93349e08227de160d77f907cb1c5/monsterui-1.0.44.tar.gz | source | sdist | null | false | 130229dd4860ad74a372b9f1f9892ab3 | 0dddca5dd1bbec519c4b4c59ef4fa8b4d83fc2feb2265b13c2d0494b98b236bd | 59f958a6752571e4d72d8ed609471d9dc09c93349e08227de160d77f907cb1c5 | null | [
"LICENSE"
] | 1,651 |
2.4 | colnade-polars | 0.4.1 | Polars backend adapter for Colnade | # colnade-polars
Polars backend adapter for [Colnade](https://pypi.org/project/colnade/) — a statically type-safe DataFrame abstraction layer for Python.
## Installation
```bash
pip install colnade colnade-polars
```
## Usage
```python
from colnade import Column, Schema, UInt64, Utf8
from colnade_polars import read_parquet
class Users(Schema):
id: Column[UInt64]
name: Column[Utf8]
df = read_parquet("users.parquet", Users)
# df is DataFrame[Users] — fully type-checked
```
See the [full documentation](https://colnade.com/) for details.
| text/markdown | jwde | null | null | null | null | null | [
"Development Status :: 3 - Alpha",
"Intended Audience :: Developers",
"License :: OSI Approved :: MIT License",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programming Language :: ... | [] | null | null | >=3.10 | [] | [] | [] | [
"colnade>=0.4.1",
"polars>=1.0",
"pyarrow>=12.0"
] | [] | [] | [] | [
"Homepage, https://colnade.com",
"Documentation, https://colnade.com",
"Repository, https://github.com/jwde/colnade"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-19T03:12:15.397195 | colnade_polars-0.4.1.tar.gz | 6,225 | b7/01/701a00a4fec7436f7c4c392d04f647ff231062d3b507d687fd3a0220bbef/colnade_polars-0.4.1.tar.gz | source | sdist | null | false | 93943df62b5cde06fc4e35603e5f0d30 | ead6cc5991430ca444c23c043c3473dfa9638bfe040400814d622ac0d51a5e94 | b701701a00a4fec7436f7c4c392d04f647ff231062d3b507d687fd3a0220bbef | MIT | [] | 236 |
2.4 | colnade | 0.4.1 | A statically type-safe DataFrame abstraction layer | # Colnade
A statically type-safe DataFrame abstraction layer for Python.
Colnade replaces string-based column references (`pl.col("age")`) with typed descriptors (`Users.age`), so column misspellings, type mismatches, and schema violations are caught by your type checker — before your code runs.
Works with [ty](https://github.com/astral-sh/ty), mypy, and pyright. No plugins, no code generation.
## Installation
```bash
pip install colnade colnade-polars
```
Colnade requires Python 3.10+. Install the backend adapter for your engine:
| Backend | Install |
|---------|---------|
| Polars | `pip install colnade-polars` |
| Pandas | `pip install colnade-pandas` |
| Dask | `pip install colnade-dask` |
## Quick Start
### 1. Define a schema
```python
from colnade import Column, Schema, UInt64, Float64, Utf8
class Users(Schema):
id: Column[UInt64]
name: Column[Utf8]
age: Column[UInt64]
score: Column[Float64]
```
### 2. Read typed data
```python
from colnade_polars import read_parquet
df = read_parquet("users.parquet", Users)
# df is DataFrame[Users] — the type checker knows the schema
```
### 3. Transform with full type safety
```python
# Column references are attributes, not strings
result = (
df.filter(Users.age > 25)
.sort(Users.score.desc())
.select(Users.name, Users.score)
)
```
### 4. Bind to an output schema
```python
class UserSummary(Schema):
name: Column[Utf8]
score: Column[Float64]
output = result.cast_schema(UserSummary)
# output is DataFrame[UserSummary]
```
## Safety Model
Colnade catches errors at three levels:
1. **In your editor** — misspelled columns, type mismatches, and schema violations are flagged by your type checker (`ty`, `pyright`, `mypy`) before code runs
2. **At data boundaries** — runtime validation ensures files and external data match your schemas (columns, types, nullability)
3. **On your data values** — field constraints validate domain invariants like ranges and patterns *(coming soon)*
## Key Features
### Type-safe column references
Column references are class attributes verified by the type checker at lint time:
```python
Users.name # Column[Utf8] — valid
Users.naem # ty error: Class `Users` has no attribute `naem`
```
### Schema-preserving operations
Operations that don't change the schema (filter, sort, limit, with_columns) preserve the type parameter:
```python
def process(df: DataFrame[Users]) -> DataFrame[Users]:
return df.filter(Users.age > 25).sort(Users.score.desc())
```
### Typed expressions
Column descriptors build an expression tree with typed operators:
```python
Users.age > 18 # Expr[Bool] — comparison
Users.score * 2 # Expr[Float64] — arithmetic
(Users.age > 18) & (Users.score > 80) # Expr[Bool] — logical
Users.name.str_starts_with("A") # Expr[Bool] — string method
```
### Aggregations
```python
result = df.group_by(Users.name).agg(
Users.score.mean().alias(UserStats.avg_score),
Users.id.count().alias(UserStats.user_count),
)
```
### Null handling
```python
# Fill nulls, filter nulls, check nulls
df.with_columns(Users.score.fill_null(0.0).alias(Users.score))
df.filter(Users.score.is_not_null())
df.drop_nulls(Users.score)
```
### Joins with typed output
```python
joined = users.join(orders, on=Users.id == Orders.user_id)
# JoinedDataFrame[Users, Orders] — both schemas accessible
class UserOrders(Schema):
user_name: Column[Utf8] = mapped_from(Users.name)
amount: Column[Float64]
result = joined.cast_schema(UserOrders)
```
### Schema-polymorphic utility functions
Write generic functions that work with any schema:
```python
from colnade.schema import S
def first_n(df: DataFrame[S], n: int) -> DataFrame[S]:
return df.head(n)
# Works with any schema — type preserved
users_subset: DataFrame[Users] = first_n(users_df, 10)
```
### Struct and List support
```python
class Address(Schema):
city: Column[Utf8]
zip_code: Column[Utf8]
class UserProfile(Schema):
name: Column[Utf8]
address: Column[Struct[Address]]
tags: Column[List[Utf8]]
# Access nested data
df.filter(UserProfile.address.field(Address.city) == "New York")
df.with_columns(UserProfile.tags.list.len().alias(tag_count_col))
```
### Lazy execution
```python
from colnade_polars import scan_parquet
lazy = scan_parquet("users.parquet", Users)
# LazyFrame[Users] — builds a query plan
result = lazy.filter(Users.age > 25).sort(Users.score.desc()).collect()
# Executes the optimized query plan
```
### Untyped escape hatch
When you need to drop down to untyped operations:
```python
untyped = df.untyped() # UntypedDataFrame — string-based columns
retyped = untyped.to_typed(Users) # Back to DataFrame[Users]
```
## Type Checker Error Showcase
Colnade catches real errors at lint time. Here are actual error messages from `ty`:
### Misspelled column name
```python
x = Users.agee
```
```
error[unresolved-attribute]: Class `Users` has no attribute `agee`
```
### Schema mismatch at function boundary
```python
df: DataFrame[Users] = read_parquet("users.parquet", Users)
wrong: DataFrame[Orders] = df
```
```
error[invalid-assignment]: Object of type `DataFrame[Users]` is not assignable
to `DataFrame[Orders]`
```
### Nullability mismatch in mapped_from
```python
class Bad(Schema):
age: Column[UInt8] = mapped_from(Users.age) # Users.age is Column[UInt8 | None]
```
```
error[invalid-assignment]: Object of type `Column[UInt8 | None]` is not
assignable to `Column[UInt8]`
```
## Comparison with Existing Solutions
| Feature | Colnade | Pandera | StaticFrame | Patito | Narwhals |
|---------|---------|---------|-------------|--------|----------|
| Column refs checked statically | Yes | No | No | No | No |
| Schema preserved through ops | Yes | Nominal only | No | No | No |
| Works with existing engines | Yes | Yes | No | Polars only | Yes |
| No plugins or code gen | Yes | No (mypy plugin) | Yes | Yes | Yes |
| Generic utility functions | Yes | No | No | No | No |
| Struct/List typed access | Yes | No | No | No | No |
| Lazy execution support | Yes | No | No | No | Yes |
## Documentation
Full documentation is available at [colnade.com](https://colnade.com/), including:
- [Getting Started](https://colnade.com/getting-started/installation/) — installation and quick start
- [User Guide](https://colnade.com/user-guide/core-concepts/) — concepts, schemas, expressions, joins
- [Tutorials](https://colnade.com/tutorials/basic-usage/) — worked examples with real data
- [API Reference](https://colnade.com/api/) — auto-generated from source
## Examples
Runnable examples are in the [`examples/`](examples/) directory:
- [`basic_usage.py`](examples/basic_usage.py) — Schema definition, filter, select, aggregate
- [`null_handling.py`](examples/null_handling.py) — Nullable columns, fill_null, drop_nulls
- [`joins.py`](examples/joins.py) — Joining DataFrames, JoinedDataFrame, cast_schema
- [`generic_functions.py`](examples/generic_functions.py) — Schema-polymorphic utility functions
- [`nested_types.py`](examples/nested_types.py) — Struct and List column operations
- [`full_pipeline.py`](examples/full_pipeline.py) — Complete ETL pipeline example
## License
MIT
| text/markdown | jwde | null | null | null | null | null | [
"Development Status :: 3 - Alpha",
"Intended Audience :: Developers",
"License :: OSI Approved :: MIT License",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programming Language :: ... | [] | null | null | >=3.10 | [] | [] | [] | [
"typing-extensions>=4.0"
] | [] | [] | [] | [
"Homepage, https://colnade.com",
"Documentation, https://colnade.com",
"Repository, https://github.com/jwde/colnade",
"Issues, https://github.com/jwde/colnade/issues"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-19T03:11:56.525635 | colnade-0.4.1.tar.gz | 238,456 | cf/1b/c9b36ab69cb42150bcabe0bf2937f9182465ceb8cb94f15f62727f9675b1/colnade-0.4.1.tar.gz | source | sdist | null | false | b017ea1b618872343afc91d0bfdf4a8f | d351561656efcb20098ac3e17af7fd3896992ecbb503a5673efec13d98f584f0 | cf1bc9b36ab69cb42150bcabe0bf2937f9182465ceb8cb94f15f62727f9675b1 | MIT | [
"LICENSE"
] | 257 |
2.4 | datafog | 4.3.0b2 | Lightning-fast PII detection and anonymization library with 190x performance advantage | # DataFog Python
DataFog is a Python library for detecting and redacting personally identifiable information (PII).
It provides:
- Fast structured PII detection via regex
- Optional NER support via spaCy and GLiNER
- A simple agent-oriented API for LLM applications
- Backward-compatible `DataFog` and `TextService` classes
## Installation
```bash
# Core install (regex engine)
pip install datafog
# Add spaCy support
pip install datafog[nlp]
# Add GLiNER + spaCy support
pip install datafog[nlp-advanced]
# Everything
pip install datafog[all]
```
## Quick Start
```python
import datafog
text = "Contact john@example.com or call (555) 123-4567"
clean = datafog.sanitize(text, engine="regex")
print(clean)
# Contact [EMAIL_1] or call [PHONE_1]
```
## For LLM Applications
```python
import datafog
# 1) Scan prompt text before sending to an LLM
prompt = "My SSN is 123-45-6789"
scan_result = datafog.scan_prompt(prompt, engine="regex")
if scan_result.entities:
print(f"Detected {len(scan_result.entities)} PII entities")
# 2) Redact model output before returning it
output = "Email me at jane.doe@example.com"
safe_result = datafog.filter_output(output, engine="regex")
print(safe_result.redacted_text)
# Email me at [EMAIL_1]
# 3) One-liner redaction
print(datafog.sanitize("Card: 4111-1111-1111-1111", engine="regex"))
# Card: [CREDIT_CARD_1]
```
### Guardrails
```python
import datafog
# Reusable guardrail object
guard = datafog.create_guardrail(engine="regex", on_detect="redact")
@guard
def call_llm() -> str:
return "Send to admin@example.com"
print(call_llm())
# Send to [EMAIL_1]
```
## Engines
Use the engine that matches your accuracy and dependency constraints:
- `regex`:
- Fastest and always available.
- Best for structured entities: `EMAIL`, `PHONE`, `SSN`, `CREDIT_CARD`, `IP_ADDRESS`, `DATE`, `ZIP_CODE`.
- `spacy`:
- Requires `pip install datafog[nlp]`.
- Useful for unstructured entities like person and organization names.
- `gliner`:
- Requires `pip install datafog[nlp-advanced]`.
- Stronger NER coverage than regex for unstructured text.
- `smart`:
- Cascades regex with optional NER engines.
- If optional deps are missing, it degrades gracefully and warns.
## Backward-Compatible APIs
The existing public API remains available.
### `DataFog` class
```python
from datafog import DataFog
result = DataFog().scan_text("Email john@example.com")
print(result["EMAIL"])
```
### `TextService` class
```python
from datafog.services import TextService
service = TextService(engine="regex")
result = service.annotate_text_sync("Call (555) 123-4567")
print(result["PHONE"])
```
## CLI
```bash
# Scan text
datafog scan-text "john@example.com"
# Redact text
datafog redact-text "john@example.com"
# Replace text with pseudonyms
datafog replace-text "john@example.com"
# Hash detected entities
datafog hash-text "john@example.com"
```
## Telemetry
DataFog includes anonymous telemetry by default.
To opt out:
```bash
export DATAFOG_NO_TELEMETRY=1
# or
export DO_NOT_TRACK=1
```
Telemetry does not include input text or detected PII values.
## Development
```bash
git clone https://github.com/datafog/datafog-python
cd datafog-python
python -m venv .venv
source .venv/bin/activate # Windows: .venv\Scripts\activate
pip install -e ".[all,dev]"
pytest tests/
```
| text/markdown | Sid Mohan | sid@datafog.ai | null | null | null | pii detection anonymization privacy regex performance | [
"Development Status :: 4 - Beta",
"Intended Audience :: Developers",
"License :: OSI Approved :: Apache Software License",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Topic :: Soft... | [] | null | null | <3.13,>=3.10 | [] | [] | [] | [
"pydantic<3.0,>=2.0",
"pydantic-settings>=2.0.0",
"typing-extensions>=4.0",
"spacy<4.0,>=3.7.0; extra == \"nlp\"",
"gliner>=0.2.5; extra == \"nlp-advanced\"",
"torch<2.7,>=2.1.0; extra == \"nlp-advanced\"",
"transformers>=4.20.0; extra == \"nlp-advanced\"",
"huggingface-hub>=0.16.0; extra == \"nlp-adv... | [] | [] | [] | [
"Homepage, https://datafog.ai",
"Documentation, https://docs.datafog.ai",
"Discord, https://discord.gg/bzDth394R4",
"Twitter, https://twitter.com/datafoginc",
"GitHub, https://github.com/datafog/datafog-python"
] | twine/6.2.0 CPython/3.11.14 | 2026-02-19T03:11:14.337112 | datafog-4.3.0b2.tar.gz | 72,886 | ef/aa/492e3fa2f30fd9f57828a5e956e6312d64a03c45f6bf5d2c7962abb5b520/datafog-4.3.0b2.tar.gz | source | sdist | null | false | 28f68199e10ee4853df48cf8133a3489 | 0dfc1b1c4bb3ab7077d9d298b817320e2b3bf90b17cc989758555c2dad4fe0c4 | efaa492e3fa2f30fd9f57828a5e956e6312d64a03c45f6bf5d2c7962abb5b520 | null | [
"LICENSE"
] | 205 |
2.4 | ragbits-guardrails | 1.4.0.dev202602190302 | Guardrails module for Ragbits components | # Ragbits Guardrails
Ragbits Guardrails is a Python package that contains utilities for ensuring the safety and relevance of responses generated by Ragbits components.
## Installation
You can install the latest version of Ragbits Guardrails using pip:
```bash
pip install ragbits-guardrails
```
## Quickstart
Example of using the OpenAI Moderation Guardrail to verify a message:
```python
import asyncio
from ragbits.guardrails.base import GuardrailManager, GuardrailVerificationResult
from ragbits.guardrails.openai_moderation import OpenAIModerationGuardrail
async def verify_message(message: str) -> list[GuardrailVerificationResult]:
manager = GuardrailManager([OpenAIModerationGuardrail()])
return await manager.verify(message)
if __name__ == '__main__':
print(asyncio.run(verify_message("Test message")))
```
## Documentation
* [How-To Guides - Guardrails](https://ragbits.deepsense.ai/how-to/use_guardrails/)
<!--
TODO:
* Add link to API Reference once classes from the Guardrails package are added to the API Reference.
-->
| text/markdown | null | "deepsense.ai" <ragbits@deepsense.ai> | null | null | null | Evaluation, GenAI, Generative AI, LLMs, Large Language Models, RAG, Retrieval Augmented Generation | [
"Development Status :: 4 - Beta",
"Environment :: Console",
"Intended Audience :: Science/Research",
"License :: OSI Approved :: MIT License",
"Natural Language :: English",
"Operating System :: OS Independent",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Pro... | [] | null | null | >=3.10 | [] | [] | [] | [
"ragbits-core==1.4.0.dev202602190302",
"openai<2.0.0,>=1.91.0; extra == \"openai\""
] | [] | [] | [] | [
"Homepage, https://github.com/deepsense-ai/ragbits",
"Bug Reports, https://github.com/deepsense-ai/ragbits/issues",
"Documentation, https://ragbits.deepsense.ai/",
"Source, https://github.com/deepsense-ai/ragbits"
] | twine/6.2.0 CPython/3.10.19 | 2026-02-19T03:11:06.856394 | ragbits_guardrails-1.4.0.dev202602190302.tar.gz | 4,037 | 25/f7/52e963d2f7e0f31760e9e104757d37d5a954beb76eb8b6fdfc7b87925c18/ragbits_guardrails-1.4.0.dev202602190302.tar.gz | source | sdist | null | false | 4ba8556a99e88f5064fa57b88fd1890d | 001f39da46dbdcf6ddb3e4be601b1586bf2ecc6971c186bb86358329881b98a6 | 25f752e963d2f7e0f31760e9e104757d37d5a954beb76eb8b6fdfc7b87925c18 | MIT | [] | 203 |
2.4 | ragbits-evaluate | 1.4.0.dev202602190302 | Evaluation module for Ragbits components | # Ragbits Evaluate
Ragbits Evaluate is a package that contains tools for evaluating the performance of AI pipelines defined with Ragbits components. It also helps with automatically finding the best hyperparameter configurations for them.
## Installation
To install the Ragbits Evaluate package, run:
```sh
pip install ragbits-evaluate
```
<!--
TODO: Add a minimalistic example inspired by the Quickstart chapter on Ragbits Evaluate once it is ready.
-->
## Documentation
<!--
TODO:
* Add link to the Quickstart chapter on Ragbits Evaluate once it is ready.
* Add link to API Reference once classes from the Evaluate package are added to the API Reference.
-->
* [How-To Guides - Evaluate](https://ragbits.deepsense.ai/how-to/evaluate/optimize/)
| text/markdown | null | "deepsense.ai" <ragbits@deepsense.ai> | null | null | null | Evaluation, GenAI, Generative AI, LLMs, Large Language Models, RAG, Retrieval Augmented Generation | [
"Development Status :: 4 - Beta",
"Environment :: Console",
"Intended Audience :: Science/Research",
"License :: OSI Approved :: MIT License",
"Natural Language :: English",
"Operating System :: OS Independent",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Pro... | [] | null | null | >=3.10 | [] | [] | [] | [
"datasets<4.0.0,>=3.0.1",
"deepeval<3.0.0,>=2.0.0",
"distilabel<2.0.0,>=1.5.0",
"hydra-core<2.0.0,>=1.3.2",
"neptune[optuna]<2.0.0,>=1.12.0",
"optuna<5.0.0,>=4.0.0",
"ragbits-core==1.4.0.dev202602190302",
"continuous-eval<1.0.0,>=0.3.12; extra == \"relari\""
] | [] | [] | [] | [
"Homepage, https://github.com/deepsense-ai/ragbits",
"Bug Reports, https://github.com/deepsense-ai/ragbits/issues",
"Documentation, https://ragbits.deepsense.ai/",
"Source, https://github.com/deepsense-ai/ragbits"
] | twine/6.2.0 CPython/3.10.19 | 2026-02-19T03:11:06.025441 | ragbits_evaluate-1.4.0.dev202602190302.tar.gz | 55,809 | ea/98/d13bb3cd3d6def630f9ffc52ff384620b916952db9efc427fdd0cc3305bd/ragbits_evaluate-1.4.0.dev202602190302.tar.gz | source | sdist | null | false | 79da9556f8637cdec826c6b3c7ccdab4 | f95c5ffc157f42521089a327118a5d7937737b013ee245bd7d338489dfeb94ef | ea98d13bb3cd3d6def630f9ffc52ff384620b916952db9efc427fdd0cc3305bd | MIT | [] | 202 |
2.4 | ragbits-document-search | 1.4.0.dev202602190302 | Document Search module for Ragbits | # Ragbits Document Search
Ragbits Document Search is a Python package that provides tools for building RAG applications. It helps ingest, index, and search documents to retrieve relevant information for your prompts.
## Installation
You can install the latest version of Ragbits Document Search using pip:
```bash
pip install ragbits-document-search
```
## Quickstart
```python
import asyncio
from ragbits.core.embeddings import LiteLLMEmbedder
from ragbits.core.vector_stores.in_memory import InMemoryVectorStore
from ragbits.document_search import DocumentSearch
async def main() -> None:
"""
Run the example.
"""
embedder = LiteLLMEmbedder(
model_name="text-embedding-3-small",
)
vector_store = InMemoryVectorStore(embedder=embedder)
document_search = DocumentSearch(
vector_store=vector_store,
)
# Ingest all .txt files from the "biographies" directory
await document_search.ingest("local://biographies/*.txt")
# Search the documents for the query
results = await document_search.search("When was Marie Curie-Sklodowska born?")
print(results)
if __name__ == "__main__":
asyncio.run(main())
```
## Documentation
* [Quickstart 2: Adding RAG Capabilities](https://ragbits.deepsense.ai/quickstart/quickstart2_rag/)
* [How-To Guides - Document Search](https://ragbits.deepsense.ai/how-to/document_search/async_processing/)
* [API Reference - Document Search](https://ragbits.deepsense.ai/api_reference/document_search/)
| text/markdown | null | "deepsense.ai" <ragbits@deepsense.ai> | null | null | null | Document Search, GenAI, Generative AI, LLMs, Large Language Models, RAG, Retrieval Augmented Generation | [
"Development Status :: 4 - Beta",
"Environment :: Console",
"Intended Audience :: Science/Research",
"License :: OSI Approved :: MIT License",
"Natural Language :: English",
"Operating System :: OS Independent",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Pro... | [] | null | null | >=3.10 | [] | [] | [] | [
"docling[easyocr]<3.0.0,>=2.15.1",
"filetype<2.0.0,>=1.2.0",
"opencv-python<5.0.0.0,>=4.11.0.86",
"python-pptx<2.0.0,>=1.0.0",
"ragbits-core==1.4.0.dev202602190302",
"rerankers<1.0.0,>=0.6.1",
"ray[data]<3.0.0,>=2.43.0; extra == \"ray\"",
"unstructured-client<1.0.0,>=0.26.0; extra == \"unstructured\""... | [] | [] | [] | [
"Homepage, https://github.com/deepsense-ai/ragbits",
"Bug Reports, https://github.com/deepsense-ai/ragbits/issues",
"Documentation, https://ragbits.deepsense.ai/",
"Source, https://github.com/deepsense-ai/ragbits"
] | twine/6.2.0 CPython/3.10.19 | 2026-02-19T03:11:04.896236 | ragbits_document_search-1.4.0.dev202602190302.tar.gz | 722,507 | 46/f2/ecc326ef863ee07a3394625c4c2f2eb8686545c4b053e753ec4d9bf188c6/ragbits_document_search-1.4.0.dev202602190302.tar.gz | source | sdist | null | false | 8a5d6e69320df351fc4aa70d3e26a806 | 32c176628a98b458e48e11ccc164dea06911f28185c645705320d7ac21506959 | 46f2ecc326ef863ee07a3394625c4c2f2eb8686545c4b053e753ec4d9bf188c6 | MIT | [] | 211 |
2.4 | ragbits-core | 1.4.0.dev202602190302 | Building blocks for rapid development of GenAI applications | # Ragbits Core
Ragbits Core is a collection of utilities and tools that are used across all Ragbits packages. It includes fundamentals such as utilities for logging and configuration, prompt creation, classes for communicating with LLMs, embedders, vector stores, and more.
## Installation
```sh
pip install ragbits-core
```
## Quick Start
```python
import asyncio
from pydantic import BaseModel
from ragbits.core.prompt import Prompt
from ragbits.core.llms.litellm import LiteLLM
class Dog(BaseModel):
breed: str
age: int
temperament: str
class DogNamePrompt(Prompt[Dog, str]):
system_prompt = """
You are a dog name generator. You come up with funny names for dogs given the dog details.
"""
user_prompt = """
The dog is a {breed} breed, {age} years old, and has a {temperament} temperament.
"""
async def main() -> None:
llm = LiteLLM("gpt-4o")
dog = Dog(breed="Golden Retriever", age=3, temperament="friendly")
prompt = DogNamePrompt(dog)
response = await llm.generate(prompt)
print(response)
if __name__ == "__main__":
asyncio.run(main())
```
## Documentation
* [Quickstart 1: Working with Prompts and LLMs](https://ragbits.deepsense.ai/quickstart/quickstart1_prompts/)
* [How-To Guides - Core](https://ragbits.deepsense.ai/how-to/prompts/use_prompting/)
* [API Reference - Core](https://ragbits.deepsense.ai/api_reference/core/prompt/)
| text/markdown | null | "deepsense.ai" <ragbits@deepsense.ai> | null | null | null | GenAI, Generative AI, LLMs, Large Language Models, Prompt Management, RAG, Retrieval Augmented Generation | [
"Development Status :: 4 - Beta",
"Environment :: Console",
"Intended Audience :: Science/Research",
"License :: OSI Approved :: MIT License",
"Natural Language :: English",
"Operating System :: OS Independent",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Pro... | [] | null | null | >=3.10 | [] | [] | [] | [
"aiohttp<4.0.0,>=3.10.8",
"filetype<2.0.0,>=1.2.0",
"griffe<2.0.0,>=1.7.3",
"jinja2<4.0.0,>=3.1.4",
"litellm<2.0.0,>=1.74.0",
"pydantic<3.0.0,>=2.9.1",
"tomli<3.0.0,>=2.0.2",
"typer<1.0.0,>=0.12.5",
"azure-core<2.0.0,>=1.32.0; extra == \"azure\"",
"azure-identity<2.0.0,>=1.19.0; extra == \"azure\"... | [] | [] | [] | [
"Homepage, https://github.com/deepsense-ai/ragbits",
"Bug Reports, https://github.com/deepsense-ai/ragbits/issues",
"Documentation, https://ragbits.deepsense.ai/",
"Source, https://github.com/deepsense-ai/ragbits"
] | twine/6.2.0 CPython/3.10.19 | 2026-02-19T03:11:03.080765 | ragbits_core-1.4.0.dev202602190302.tar.gz | 186,917 | a3/d5/00740c86432f2384469adea8dcf6531487d4221c0c561ae20eab992c6915/ragbits_core-1.4.0.dev202602190302.tar.gz | source | sdist | null | false | 94d130dd9ead98f793434570134bdf03 | 79a3d3b33e72fd594c7aab0b95f0fb29e4604587bd1b4520066431800ef95e0a | a3d500740c86432f2384469adea8dcf6531487d4221c0c561ae20eab992c6915 | MIT | [] | 278 |
2.4 | ragbits-cli | 1.4.0.dev202602190302 | A CLI application for ragbits - building blocks for rapid development of GenAI applications | # Ragbits CLI
Ragbits CLI provides the `ragbits` command-line interface (CLI) tool that allows you to interact with Ragbits from the terminal. Other packages can extend the CLI by adding their own commands, so the exact set of available commands may vary depending on the installed packages.
## Installation
To use the complete Ragbits stack, install the `ragbits` package:
```sh
pip install ragbits
```
## Example Usage
The following example assumes that `ragbits-core` is installed and that the current directory contains a `song_prompt.py` file with a prompt class named `SongPrompt`, as defined in the [Quickstart Guide](https://ragbits.deepsense.ai/quickstart/quickstart1_prompts/#making-the-prompt-dynamic).
The example demonstrates how to execute the prompt using the `ragbits` CLI tool.
The left side of the table shows the system and user prompts (rendered with placeholders replaced by the provided values), and the right side shows the generated response from the Large Language Model.
```sh
$ ragbits prompts exec song_prompt:SongPrompt --payload '{"subject": "unicorns", "age_group": 12, "genre": "pop"}'
┏━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┓
┃ Question ┃ Answer ┃
┡━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━╇━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┩
│ [{'role': 'system', 'content': 'You │ (Verse 1) │
│ are a professional songwriter. │ In a land of rainbows and glitter, │
│ You only use language that is │ Where flowers bloom and skies are │
│ appropriate for children.'}, │ brighter, │
│ {'role': 'user', 'content': 'Write a │ There's a magical creature so rare, │
│ song about a unicorns for 12 years │ With a horn that sparkles in the air. │
│ old pop fans.'}] │ │
│ │ (Chorus) │
│ │ Unicorns, unicorns, oh so divine, │
│ │ With their mane that shines and eyes │
│ │ that shine, │
│ │ Gallop through the meadows, so free, │
│ │ In a world of wonder, just you and │
│ │ me. │
└───────────────────────────────────────┴───────────────────────────────────────┘
```
## Documentation
* [Documentation of the `ragbits` CLI](https://ragbits.deepsense.ai/cli/main/)
| text/markdown | null | "deepsense.ai" <ragbits@deepsense.ai> | null | null | null | GenAI, Generative AI, LLMs, Large Language Models, Prompt Management, RAG, Retrieval Augmented Generation | [
"Development Status :: 4 - Beta",
"Environment :: Console",
"Intended Audience :: Science/Research",
"License :: OSI Approved :: MIT License",
"Natural Language :: English",
"Operating System :: OS Independent",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Pro... | [] | null | null | >=3.10 | [] | [] | [] | [
"ragbits-core==1.4.0.dev202602190302",
"typer<1.0.0,>=0.12.5"
] | [] | [] | [] | [
"Homepage, https://github.com/deepsense-ai/ragbits",
"Bug Reports, https://github.com/deepsense-ai/ragbits/issues",
"Documentation, https://ragbits.deepsense.ai/",
"Source, https://github.com/deepsense-ai/ragbits"
] | twine/6.2.0 CPython/3.10.19 | 2026-02-19T03:11:02.389019 | ragbits_cli-1.4.0.dev202602190302.tar.gz | 8,372 | ad/5a/83a5724f6722ed75878e9d216d73ac7d2c7cf01bf44e6a264a330e595915/ragbits_cli-1.4.0.dev202602190302.tar.gz | source | sdist | null | false | 472dd8767fdc6593b9ea07b5878c4dd3 | ae4595dc84ce7f8533881282309cb90e247ed531a049d3d551f62b1b31919224 | ad5a83a5724f6722ed75878e9d216d73ac7d2c7cf01bf44e6a264a330e595915 | MIT | [] | 212 |
2.4 | ragbits-chat | 1.4.0.dev202602190302 | Building blocks for rapid development of GenAI applications | # Ragbits Chat
ragbits-chat is a Python package that provides tools for building conversational AI applications.
The package includes:
- Framework for building chat experiences
- History management for conversation tracking
- UI components for building chat interfaces
For detailed information, please refer to the [API documentation](https://ragbits.deepsense.ai/how-to/chatbots/api/). | text/markdown | null | "deepsense.ai" <ragbits@deepsense.ai> | null | null | null | GenAI, Generative AI, LLMs, Large Language Models, Prompt Management, RAG, Retrieval Augmented Generation | [
"Development Status :: 4 - Beta",
"Environment :: Console",
"Intended Audience :: Science/Research",
"License :: OSI Approved :: MIT License",
"Natural Language :: English",
"Operating System :: OS Independent",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Pro... | [] | null | null | >=3.10 | [] | [] | [] | [
"bcrypt>=4.2.0",
"fastapi<1.0.0,>=0.115.0",
"httpx<1.0.0,>=0.28.1",
"python-jose[cryptography]>=3.5.0",
"ragbits-core==1.4.0.dev202602190302",
"uvicorn<1.0.0,>=0.31.0",
"sqlalchemy<3.0.0,>=2.0.39; extra == \"sql\""
] | [] | [] | [] | [
"Homepage, https://github.com/deepsense-ai/ragbits",
"Bug Reports, https://github.com/deepsense-ai/ragbits/issues",
"Documentation, https://ragbits.deepsense.ai/",
"Source, https://github.com/deepsense-ai/ragbits"
] | twine/6.2.0 CPython/3.10.19 | 2026-02-19T03:11:00.903027 | ragbits_chat-1.4.0.dev202602190302.tar.gz | 556,085 | c1/c9/a08df412cf466b3bab8580349f0b5463d4100b6e1f9728c66650b5bc9653/ragbits_chat-1.4.0.dev202602190302.tar.gz | source | sdist | null | false | 69155f366d082e7cca4150da9c46fa16 | 1fa5bd25e9d376d3bdf989873419ce46ea7c4af59ca5c8545261f1b6b3125d41 | c1c9a08df412cf466b3bab8580349f0b5463d4100b6e1f9728c66650b5bc9653 | MIT | [] | 206 |
2.4 | ragbits-agents | 1.4.0.dev202602190302 | Building blocks for rapid development of GenAI applications | # Ragbits Agents
Ragbits Agents contains primitives for building agentic systems.
The package is in an experimental phase; the API may change in the future.
## Installation
To install the Ragbits Agents package, run:
```sh
pip install ragbits-agents
```
<!--
TODO: Add a minimalistic example inspired by the Quickstart chapter on Ragbits Agents once it is ready.
-->
<!--
TODO:
* Add link to the Quickstart chapter on Ragbits Agents once it is ready.
* Add link to API Reference once classes from the Agents package are added to the API Reference.
-->
| text/markdown | null | "deepsense.ai" <ragbits@deepsense.ai> | null | null | null | Agents, GenAI, Generative AI, LLMs, Large Language Models, RAG, Retrieval Augmented Generation | [
"Development Status :: 4 - Beta",
"Environment :: Console",
"Intended Audience :: Science/Research",
"License :: OSI Approved :: MIT License",
"Natural Language :: English",
"Operating System :: OS Independent",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Pro... | [] | null | null | >=3.10 | [] | [] | [] | [
"ragbits-core==1.4.0.dev202602190302",
"a2a-sdk<1.0.0,>=0.2.9; extra == \"a2a\"",
"fastapi<1.0.0,>=0.115.0; extra == \"a2a\"",
"uvicorn<1.0.0,>=0.31.0; extra == \"a2a\"",
"textual<1.0.0,>=0.85.2; extra == \"cli\"",
"mcp<2.0.0,>=1.9.4; extra == \"mcp\"",
"openai<2.0.0,>=1.91.0; extra == \"openai\""
] | [] | [] | [] | [
"Homepage, https://github.com/deepsense-ai/ragbits",
"Bug Reports, https://github.com/deepsense-ai/ragbits/issues",
"Documentation, https://ragbits.deepsense.ai/",
"Source, https://github.com/deepsense-ai/ragbits"
] | twine/6.2.0 CPython/3.10.19 | 2026-02-19T03:10:59.979643 | ragbits_agents-1.4.0.dev202602190302.tar.gz | 52,785 | b7/30/a2c28b63b577587e2264cf028c4ba710c6c3721990d80dbbdccfa8e165c1/ragbits_agents-1.4.0.dev202602190302.tar.gz | source | sdist | null | false | 229806bb37274cef1dfa8a607a35059c | 20ebbdc043564a30484737d6d684a462b9d882355a464098558dba9b9e442dde | b730a2c28b63b577587e2264cf028c4ba710c6c3721990d80dbbdccfa8e165c1 | MIT | [] | 203 |
2.4 | ragbits | 1.4.0.dev202602190302 | Building blocks for rapid development of GenAI applications | <div align="center">
<h1>🐰 Ragbits</h1>
*Building blocks for rapid development of GenAI applications*
[Homepage](https://deepsense.ai/rd-hub/ragbits/) | [Documentation](https://ragbits.deepsense.ai) | [Contact](https://deepsense.ai/contact/)
<a href="https://trendshift.io/repositories/13966" target="_blank"><img src="https://trendshift.io/api/badge/repositories/13966" alt="deepsense-ai%2Fragbits | Trendshift" style="width: 250px; height: 55px;" width="250" height="55"/></a>
[](https://pypi.org/project/ragbits)
[](https://pypi.org/project/ragbits)
[](https://pypi.org/project/ragbits)
</div>
---
## Features
### 🔨 Build Reliable & Scalable GenAI Apps
- **Swap LLMs anytime** – Switch between [100+ LLMs via LiteLLM](https://ragbits.deepsense.ai/stable/how-to/llms/use_llms/) or run [local models](https://ragbits.deepsense.ai/stable/how-to/llms/use_local_llms/).
- **Type-safe LLM calls** – Use Python generics to [enforce strict type safety](https://ragbits.deepsense.ai/stable/how-to/prompts/use_prompting/#how-to-configure-prompts-output-data-type) in model interactions.
- **Bring your own vector store** – Connect to [Qdrant](https://ragbits.deepsense.ai/stable/api_reference/core/vector-stores/#ragbits.core.vector_stores.qdrant.QdrantVectorStore), [PgVector](https://ragbits.deepsense.ai/stable/api_reference/core/vector-stores/#ragbits.core.vector_stores.pgvector.PgVectorStore), and more with built-in support.
- **Developer tools included** – [Manage vector stores](https://ragbits.deepsense.ai/stable/cli/main/#ragbits-vector-store), query pipelines, and [test prompts from your terminal](https://ragbits.deepsense.ai/stable/quickstart/quickstart1_prompts/#testing-the-prompt-from-the-cli).
- **Modular installation** – Install only what you need, reducing dependencies and improving performance.
### 📚 Fast & Flexible RAG Processing
- **Ingest 20+ formats** – Process PDFs, HTML, spreadsheets, presentations, and more using [Docling](https://github.com/docling-project/docling) or [Unstructured](https://github.com/Unstructured-IO/unstructured), or create a custom parser.
- **Handle complex data** – Extract tables, images, and structured content with built-in VLM support.
- **Connect to any data source** – Use prebuilt connectors for S3, GCS, Azure, or implement your own.
- **Scale ingestion** – Process large datasets quickly with [Ray-based parallel processing](https://ragbits.deepsense.ai/stable/how-to/document_search/distributed_ingestion/#how-to-ingest-documents-in-a-distributed-fashion).
### 🤖 Build Multi-Agent Workflows with Ease
- **Multi-agent coordination** – Create teams of specialized agents with role-based collaboration using [A2A protocol](https://ragbits.deepsense.ai/stable/tutorials/agents) for interoperability.
- **Real-time data integration** – Leverage [Model Context Protocol (MCP)](https://ragbits.deepsense.ai/stable/how-to/agents/provide_mcp_tools) for live web access, database queries, and API integrations.
- **Conversation state management** – Maintain context across interactions with [automatic history tracking](https://ragbits.deepsense.ai/stable/how-to/agents/define_and_use_agents/#conversation-history).
### 🚀 Deploy & Monitor with Confidence
- **Real-time observability** – Track performance with [OpenTelemetry](https://ragbits.deepsense.ai/stable/how-to/project/use_tracing/#opentelemetry-trace-handler) and [CLI insights](https://ragbits.deepsense.ai/stable/how-to/project/use_tracing/#cli-trace-handler).
- **Built-in testing** – Validate prompts [with promptfoo](https://ragbits.deepsense.ai/stable/how-to/prompts/promptfoo/) before deployment.
- **Auto-optimization** – Continuously evaluate and refine model performance.
- **Chat UI** – Deploy a [chatbot interface](https://ragbits.deepsense.ai/stable/how-to/chatbots/api/) with an API, persistence, and user feedback.
## Installation
### Stable Release
To get started quickly, you can install the latest stable release with:
```sh
pip install ragbits
```
### Nightly Builds
For the latest development features, you can install nightly builds that are automatically published from the `develop` branch:
```sh
pip install ragbits --pre
```
**Note:** Nightly builds include the latest features and bug fixes but may be less stable than official releases. They follow the version format `X.Y.Z.devYYYYMMDDHHMM`.
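Because the `dev` segment is a zero-padded `YYYYMMDDHHMM` timestamp, nightly versions sort chronologically, and the build time can be recovered from the version string. A small stdlib-only sketch (the helper name is hypothetical):

```python
import re
from datetime import datetime


def nightly_timestamp(version: str) -> datetime:
    """Extract the build time from an X.Y.Z.devYYYYMMDDHHMM version string."""
    match = re.fullmatch(r"\d+\.\d+\.\d+\.dev(\d{12})", version)
    if match is None:
        raise ValueError(f"not a nightly version: {version!r}")
    return datetime.strptime(match.group(1), "%Y%m%d%H%M")


print(nightly_timestamp("1.4.0.dev202602190302"))  # 2026-02-19 03:02:00
```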
### Package Contents
This is a starter bundle of packages, containing:
- [`ragbits-core`](https://github.com/deepsense-ai/ragbits/tree/main/packages/ragbits-core) - fundamental tools for working with prompts, LLMs and vector databases.
- [`ragbits-agents`](https://github.com/deepsense-ai/ragbits/tree/main/packages/ragbits-agents) - abstractions for building agentic systems.
- [`ragbits-document-search`](https://github.com/deepsense-ai/ragbits/tree/main/packages/ragbits-document-search) - retrieval and ingestion pipelines for knowledge bases.
- [`ragbits-evaluate`](https://github.com/deepsense-ai/ragbits/tree/main/packages/ragbits-evaluate) - unified evaluation framework for Ragbits components.
- [`ragbits-guardrails`](https://github.com/deepsense-ai/ragbits/tree/main/packages/ragbits-guardrails) - utilities for ensuring the safety and relevance of responses.
- [`ragbits-chat`](https://github.com/deepsense-ai/ragbits/tree/main/packages/ragbits-chat) - full-stack infrastructure for building conversational AI applications.
- [`ragbits-cli`](https://github.com/deepsense-ai/ragbits/tree/main/packages/ragbits-cli) - `ragbits` shell command for interacting with Ragbits components.
Alternatively, you can use individual components of the stack by installing their respective packages.
## Quickstart
### Basics
To define a prompt and run LLM:
```python
import asyncio
from pydantic import BaseModel
from ragbits.core.llms import LiteLLM
from ragbits.core.prompt import Prompt
class QuestionAnswerPromptInput(BaseModel):
question: str
class QuestionAnswerPrompt(Prompt[QuestionAnswerPromptInput, str]):
system_prompt = """
You are a question answering agent. Answer the question to the best of your ability.
"""
user_prompt = """
Question: {{ question }}
"""
llm = LiteLLM(model_name="gpt-4.1-nano")
async def main() -> None:
prompt = QuestionAnswerPrompt(QuestionAnswerPromptInput(question="What are high memory and low memory on linux?"))
response = await llm.generate(prompt)
print(response)
if __name__ == "__main__":
asyncio.run(main())
```
### Document Search
To build and query a simple vector store index:
```python
import asyncio
from ragbits.core.embeddings import LiteLLMEmbedder
from ragbits.core.vector_stores import InMemoryVectorStore
from ragbits.document_search import DocumentSearch
embedder = LiteLLMEmbedder(model_name="text-embedding-3-small")
vector_store = InMemoryVectorStore(embedder=embedder)
document_search = DocumentSearch(vector_store=vector_store)
async def run() -> None:
await document_search.ingest("web://https://arxiv.org/pdf/1706.03762")
result = await document_search.search("What are the key findings presented in this paper?")
print(result)
if __name__ == "__main__":
asyncio.run(run())
```
### Retrieval-Augmented Generation
To build a simple RAG pipeline:
```python
import asyncio
from collections.abc import Iterable
from pydantic import BaseModel
from ragbits.core.embeddings import LiteLLMEmbedder
from ragbits.core.llms import LiteLLM
from ragbits.core.prompt import Prompt
from ragbits.core.vector_stores import InMemoryVectorStore
from ragbits.document_search import DocumentSearch
from ragbits.document_search.documents.element import Element
class QuestionAnswerPromptInput(BaseModel):
question: str
context: Iterable[Element]
class QuestionAnswerPrompt(Prompt[QuestionAnswerPromptInput, str]):
system_prompt = """
You are a question answering agent. Answer the question that will be provided using context.
If in the given context there is not enough information refuse to answer.
"""
user_prompt = """
Question: {{ question }}
Context: {% for chunk in context %}{{ chunk.text_representation }}{%- endfor %}
"""
llm = LiteLLM(model_name="gpt-4.1-nano")
embedder = LiteLLMEmbedder(model_name="text-embedding-3-small")
vector_store = InMemoryVectorStore(embedder=embedder)
document_search = DocumentSearch(vector_store=vector_store)
async def run() -> None:
question = "What are the key findings presented in this paper?"
await document_search.ingest("web://https://arxiv.org/pdf/1706.03762")
chunks = await document_search.search(question)
prompt = QuestionAnswerPrompt(QuestionAnswerPromptInput(question=question, context=chunks))
response = await llm.generate(prompt)
print(response)
if __name__ == "__main__":
asyncio.run(run())
```
### Agentic RAG
To build an agentic RAG pipeline:
```python
import asyncio
from ragbits.agents import Agent
from ragbits.core.embeddings import LiteLLMEmbedder
from ragbits.core.llms import LiteLLM
from ragbits.core.vector_stores import InMemoryVectorStore
from ragbits.document_search import DocumentSearch
embedder = LiteLLMEmbedder(model_name="text-embedding-3-small")
vector_store = InMemoryVectorStore(embedder=embedder)
document_search = DocumentSearch(vector_store=vector_store)
llm = LiteLLM(model_name="gpt-4.1-nano")
agent = Agent(llm=llm, tools=[document_search.search])
async def main() -> None:
await document_search.ingest("web://https://arxiv.org/pdf/1706.03762")
response = await agent.run("What are the key findings presented in this paper?")
print(response.content)
if __name__ == "__main__":
asyncio.run(main())
```
### Chat UI
To expose your GenAI application through Ragbits API:
```python
from collections.abc import AsyncGenerator
from ragbits.agents import Agent, ToolCallResult
from ragbits.chat.api import RagbitsAPI
from ragbits.chat.interface import ChatInterface
from ragbits.chat.interface.types import ChatContext, ChatResponse, LiveUpdateType
from ragbits.core.embeddings import LiteLLMEmbedder
from ragbits.core.llms import LiteLLM, ToolCall
from ragbits.core.prompt import ChatFormat
from ragbits.core.vector_stores import InMemoryVectorStore
from ragbits.document_search import DocumentSearch
embedder = LiteLLMEmbedder(model_name="text-embedding-3-small")
vector_store = InMemoryVectorStore(embedder=embedder)
document_search = DocumentSearch(vector_store=vector_store)
llm = LiteLLM(model_name="gpt-4.1-nano")
agent = Agent(llm=llm, tools=[document_search.search])
class MyChat(ChatInterface):
async def setup(self) -> None:
await document_search.ingest("web://https://arxiv.org/pdf/1706.03762")
async def chat(
self,
message: str,
history: ChatFormat,
context: ChatContext,
    ) -> AsyncGenerator[ChatResponse, None]:
async for result in agent.run_streaming(message):
match result:
case str():
yield self.create_live_update(
update_id="1",
type=LiveUpdateType.START,
label="Answering...",
)
yield self.create_text_response(result)
case ToolCall():
yield self.create_live_update(
update_id="2",
type=LiveUpdateType.START,
label="Searching...",
)
case ToolCallResult():
yield self.create_live_update(
update_id="2",
type=LiveUpdateType.FINISH,
label="Search",
description=f"Found {len(result.result)} relevant chunks.",
)
yield self.create_live_update(
update_id="1",
type=LiveUpdateType.FINISH,
label="Answer",
)
if __name__ == "__main__":
api = RagbitsAPI(MyChat)
api.run()
```
## Rapid development
Create Ragbits projects from templates:
```sh
uvx create-ragbits-app
```
Explore `create-ragbits-app` repo [here](https://github.com/deepsense-ai/create-ragbits-app). If you have a new idea for a template, feel free to contribute!
## Documentation
- [Tutorials](https://ragbits.deepsense.ai/stable/tutorials/intro) - Get started with Ragbits in a few minutes
- [How-to](https://ragbits.deepsense.ai/stable/how-to/prompts/use_prompting) - Learn how to use Ragbits in your projects
- [CLI](https://ragbits.deepsense.ai/stable/cli/main) - Learn how to run Ragbits in your terminal
- [API reference](https://ragbits.deepsense.ai/stable/api_reference/core/prompt) - Explore the underlying Ragbits API
## Contributing
We welcome contributions! Please read [CONTRIBUTING.md](https://github.com/deepsense-ai/ragbits/tree/main/CONTRIBUTING.md) for more information.
## License
Ragbits is licensed under the [MIT License](https://github.com/deepsense-ai/ragbits/tree/main/LICENSE).
| text/markdown | null | "deepsense.ai" <ragbits@deepsense.ai> | null | null | null | GenAI, Generative AI, LLMs, Large Language Models, Prompt Management, RAG, Retrieval Augmented Generation | [
"Development Status :: 4 - Beta",
"Environment :: Console",
"Intended Audience :: Science/Research",
"License :: OSI Approved :: MIT License",
"Natural Language :: English",
"Operating System :: OS Independent",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Pro... | [] | null | null | >=3.10 | [] | [] | [] | [
"ragbits-agents==1.4.0.dev202602190302",
"ragbits-chat==1.4.0.dev202602190302",
"ragbits-cli==1.4.0.dev202602190302",
"ragbits-core==1.4.0.dev202602190302",
"ragbits-document-search==1.4.0.dev202602190302",
"ragbits-evaluate==1.4.0.dev202602190302",
"ragbits-guardrails==1.4.0.dev202602190302",
"ragbit... | [] | [] | [] | [
"Homepage, https://github.com/deepsense-ai/ragbits",
"Bug Reports, https://github.com/deepsense-ai/ragbits/issues",
"Documentation, https://ragbits.deepsense.ai/",
"Source, https://github.com/deepsense-ai/ragbits"
] | twine/6.2.0 CPython/3.10.19 | 2026-02-19T03:10:59.259420 | ragbits-1.4.0.dev202602190302.tar.gz | 13,698 | bb/d3/48bf7ade77cad8904beda8a218c22d53fcba8caf90cfdbcf722f980fce51/ragbits-1.4.0.dev202602190302.tar.gz | source | sdist | null | false | 1a0eb5a8973198ecaf6259cd1497127e | 0db605e3bafbab750b1f48b81098e96df001393bb8ce5ae17a9fe2a34ad70d4b | bbd348bf7ade77cad8904beda8a218c22d53fcba8caf90cfdbcf722f980fce51 | MIT | [] | 191 |
2.4 | motd-claude-code-plugins-v4 | 1.0.0 | Bundled plugins for Claude Code including Agent SDK development tools, PR review toolkit, and commit workflows | # Claude Code
 [![npm]](https://www.npmjs.com/package/@anthropic-ai/claude-code)
[npm]: https://img.shields.io/npm/v/@anthropic-ai/claude-code.svg?style=flat-square
Claude Code is an agentic coding tool that lives in your terminal, understands your codebase, and helps you code faster by executing routine tasks, explaining complex code, and handling git workflows -- all through natural language commands. Use it in your terminal, IDE, or tag @claude on GitHub.
**Learn more in the [official documentation](https://code.claude.com/docs/en/overview)**.
<img src="./demo.gif" />
## Get started
> [!NOTE]
> Installation via npm is deprecated. Use one of the recommended methods below.
For more installation options, uninstall steps, and troubleshooting, see the [setup documentation](https://code.claude.com/docs/en/setup).
1. Install Claude Code:
**macOS/Linux (Recommended):**
```bash
curl -fsSL https://claude.ai/install.sh | bash
```
**Homebrew (macOS/Linux):**
```bash
brew install --cask claude-code
```
**Windows (Recommended):**
```powershell
irm https://claude.ai/install.ps1 | iex
```
**WinGet (Windows):**
```powershell
winget install Anthropic.ClaudeCode
```
**NPM (Deprecated):**
```bash
npm install -g @anthropic-ai/claude-code
```
2. Navigate to your project directory and run `claude`.
## Plugins
This repository includes several Claude Code plugins that extend functionality with custom commands and agents. See the [plugins directory](./plugins/README.md) for detailed documentation on available plugins.
## Reporting Bugs
We welcome your feedback. Use the `/bug` command to report issues directly within Claude Code, or file a [GitHub issue](https://github.com/anthropics/claude-code/issues).
## Connect on Discord
Join the [Claude Developers Discord](https://anthropic.com/discord) to connect with other developers using Claude Code. Get help, share feedback, and discuss your projects with the community.
## Data collection, usage, and retention
When you use Claude Code, we collect feedback, which includes usage data (such as code acceptance or rejections), associated conversation data, and user feedback submitted via the `/bug` command.
### How we use your data
See our [data usage policies](https://code.claude.com/docs/en/data-usage).
### Privacy safeguards
We have implemented several safeguards to protect your data, including limited retention periods for sensitive information, restricted access to user session data, and clear policies against using feedback for model training.
For full details, please review our [Commercial Terms of Service](https://www.anthropic.com/legal/commercial-terms) and [Privacy Policy](https://www.anthropic.com/legal/privacy).
| text/markdown | Anthropic | Anthropic <support@anthropic.com> | null | null | MIT | claude, claude-code, plugins, ai, development | [
"Development Status :: 4 - Beta",
"Intended Audience :: Developers",
"License :: OSI Approved :: MIT License",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.8",
"Programming Language :: Python :: 3.9",
"Programming Language :: Python :: 3.10",
"Programming Language :: Pyt... | [] | https://github.com/anthropics/claude-code | null | >=3.8 | [] | [] | [] | [] | [] | [] | [] | [
"Homepage, https://github.com/anthropics/claude-code",
"Documentation, https://code.claude.com/docs",
"Repository, https://github.com/anthropics/claude-code",
"Issues, https://github.com/anthropics/claude-code/issues"
] | twine/6.2.0 CPython/3.14.3 | 2026-02-19T03:09:41.160515 | motd_claude_code_plugins_v4-1.0.0.tar.gz | 3,687,914 | 92/ed/90c57fadfd811af37d6dec0620b870a97b43807a0d3b25c8003e81e0a68c/motd_claude_code_plugins_v4-1.0.0.tar.gz | source | sdist | null | false | 83242a9163f99d3956b1fc02bb7511fa | a86d0721743d94d39b739851927453ceeaf7963c4063e6d4a9800379f8e0d60b | 92ed90c57fadfd811af37d6dec0620b870a97b43807a0d3b25c8003e81e0a68c | null | [
"LICENSE.md"
] | 237 |
2.4 | eschatch | 0.2.0 | A true pty wrapper that transparently logs I/O and lets you invoke an LLM to inject commands into any running terminal application | <p align="center">
<img width="256" height="148" alt="eschatch_sm" src="https://github.com/user-attachments/assets/01fe229a-770c-4d94-9a2b-e73f1f99d0ae" />
</p>
**ESChatch** is a true pty wrapper that transparently logs every byte of input/output from any terminal application (`zsh`, `vim`, `python`, SSH sessions, etc.), then lets you hit **Ctrl+X**, type a natural-language task, and have an LLM instantly generate and inject the exact keystrokes back into the running program.
The screen-scrape context makes it magically context-aware across all applications.
## Features
- **Universal terminal copilot** - Works in shells, editors, REPLs, SSH sessions, anywhere
- **Transparent logging** - Logs all I/O to configurable session directories
- **Context-aware** - LLM sees your terminal screen and recent input history
- **Configurable** - Escape keys, LLM providers, models, prompts via TOML config
- **Safety rails** - Preview mode and destructive command detection
- **Multi-provider** - Supports OpenAI, Ollama, vLLM, litellm proxy, and more
- **Chat mode** - Multi-turn conversations with context retention
- **Special commands** - `/explain`, `/debug`, `/chat`, `/clear`, `/help`
## Quick Start
### Installation
```bash
# Install from source
pip install -e .
# Or with pipx (recommended)
pipx install git+https://github.com/day50/ESChatch.git
```
### Usage
```bash
# Start with default shell (bash)
eschatch
# Use zsh
eschatch -e zsh
# Start Python REPL
eschatch -e python
# Open vim with a file
eschatch -e "vim myfile.py"
# SSH session
eschatch -e "ssh user@host"
```
### Basic Workflow
1. Start ESChatch with your desired command
2. Work normally in the terminal
3. Press **Ctrl+X** to enter escape mode
4. Type your task (e.g., "find all .py files modified today")
5. Press **Enter**
6. The LLM generates and injects the appropriate command
### Special Commands
ESChatch supports special slash commands:
- `/chat` - Enable multi-turn conversation mode (press Enter twice to exit)
- `/explain` - Explain what's happening in the current terminal state
- `/debug` - Analyze errors and suggest fixes
- `/clear` - Clear conversation history
- `/help` - Show available commands
Example:
```
Ctrl+X → /explain
→ "You're in a Python REPL with pandas loaded. The last operation filtered a DataFrame..."
Ctrl+X → /debug
→ "The error shows a FileNotFoundError. Check that 'data.csv' exists in your current directory..."
```
## Configuration
### Config File
Create `~/.config/eschatch/config.toml`:
```toml
[general]
escape_key = "ctrl+x" # Options: ctrl+x, ctrl+space, ctrl+c, ctrl+d, escape
log_dir = "~/.eschatch/logs"
session_dir = "~/.eschatch/sessions"
[llm]
provider = "litellm" # Options: litellm, ollama, openai, vllm, simonw
model = "gpt-4o-mini"
base_url = "http://localhost:4000"
api_key = "sk-..." # Optional for local providers
temperature = 0.0
max_tokens = 500
[context]
max_bytes = 2000 # Amount of I/O history to include
sliding_window = true
[prompt]
system = """You are an expert Linux/terminal operator..."""
[safety]
preview_mode = false # Show commands before injecting
confirm_destructive = true # Warn about rm -rf, dd, etc.
```
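The `[context]` settings above cap how much terminal history is sent to the LLM. A sliding byte window of this kind can be sketched as a simple tail buffer (a hypothetical helper for illustration, not ESChatch's actual implementation):

```python
class SlidingContext:
    """Keep only the most recent max_bytes of terminal I/O."""

    def __init__(self, max_bytes: int = 2000) -> None:
        self.max_bytes = max_bytes
        self._buffer = b""

    def feed(self, chunk: bytes) -> None:
        # Append new I/O, then trim from the front to stay within budget.
        self._buffer = (self._buffer + chunk)[-self.max_bytes:]

    def snapshot(self) -> str:
        # Decode leniently: trimming may split a multi-byte UTF-8 sequence.
        return self._buffer.decode("utf-8", errors="replace")


ctx = SlidingContext(max_bytes=8)
ctx.feed(b"hello ")
ctx.feed(b"world!")
print(ctx.snapshot())  # "o world!"
```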
### Environment Variables
Override config with environment variables:
```bash
export ESCHATCH_MODEL="gpt-4o"
export ESCHATCH_BASE_URL="http://localhost:4000"
export ESCHATCH_API_KEY="sk-..."
```
### Generate Default Config
```bash
eschatch --install-config
```
## LLM Providers
### OpenAI / litellm Proxy
```toml
[llm]
provider = "litellm"
model = "gpt-4o-mini"
base_url = "http://localhost:4000"
api_key = "sk-..."
```
### Ollama (Local)
```toml
[llm]
provider = "ollama"
model = "llama3.1"
base_url = "http://localhost:11434"
```
### vLLM
```toml
[llm]
provider = "vllm"
model = "meta-llama/Llama-3.1-8B"
base_url = "http://localhost:8000/v1"
```
### simonw/llm CLI
```toml
[llm]
provider = "simonw"
```
## Examples
### Shell Commands
```
Task: "find all Python files modified in the last 2 days"
→ find . -name "*.py" -mtime -2 -type f
Task: "show disk usage sorted by size"
→ du -sh * | sort -rh
```
### Inside Vim
```
Task: "format this Python function"
→ :%!black -
Task: "search for all occurrences of 'TODO'"
→ /TODO
```
### Python REPL
```
Task: "load the json file and parse it"
→ import json; data = json.load(open("file.json"))
```
### SSH Session
Works transparently over SSH - the LLM sees the remote terminal state.
## Safety Features
### Preview Mode
Enable to review generated commands before injection:
```bash
eschatch --preview
```
Or in config:
```toml
[safety]
preview_mode = true
```
### Destructive Command Detection
Automatically warns about dangerous patterns like:
- `rm -rf /` or `rm -rf ~`
- `dd if=...`
- `mkfs`
- Fork bombs
- Direct disk writes
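Detection along these lines can be sketched with a few regular expressions (illustrative patterns only, not ESChatch's actual rule set):

```python
import re

# Hypothetical rule list; the real patterns may differ.
DESTRUCTIVE_PATTERNS = [
    r"\brm\s+-[a-zA-Z]*[rf][a-zA-Z]*\s+(/|~)",  # rm -rf / or rm -rf ~
    r"\bdd\s+if=",            # raw disk copies
    r"\bmkfs(\.\w+)?\b",      # filesystem creation
    r":\(\)\s*\{.*\};\s*:",   # classic fork bomb
    r">\s*/dev/sd[a-z]",      # direct disk writes
]


def is_destructive(command: str) -> bool:
    """Return True if the command matches any dangerous pattern."""
    return any(re.search(p, command) for p in DESTRUCTIVE_PATTERNS)


print(is_destructive("rm -rf /"))  # True
print(is_destructive("ls -la"))    # False
```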
## Architecture
```
┌─────────────────────────────────────────────────────────┐
│ User Terminal │
└─────────────────────────────────────────────────────────┘
│
▼
┌─────────────────────────────────────────────────────────┐
│ ESChatch (pty) │
│ ┌─────────────┐ ┌──────────────┐ ┌────────────────┐ │
│ │ Input Log │ │ Output Log │ │ Escape Handler │ │
│ └─────────────┘ └──────────────┘ └────────────────┘ │
└─────────────────────────────────────────────────────────┘
│
▼
┌─────────────────────────────────────────────────────────┐
│ LLM Client │
│ (OpenAI / Ollama / vLLM / litellm / simonw) │
└─────────────────────────────────────────────────────────┘
│
▼
┌─────────────────────────────────────────────────────────┐
│ Injected Command → Application │
└─────────────────────────────────────────────────────────┘
```
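The pty layer in the diagram can be illustrated with Python's standard `pty` and `subprocess` modules: run a child process under a pseudoterminal and observe everything it writes from the master side, which is the same vantage point a pty-level tool uses to scrape terminal state. This is a minimal Unix-only sketch of the mechanism, not ESChatch's implementation:

```python
import os
import pty
import subprocess

# Allocate a pseudoterminal pair: the child talks to the slave end,
# while we observe all output on the master end.
master_fd, slave_fd = pty.openpty()
proc = subprocess.Popen(
    ["echo", "hello from the pty"],
    stdin=slave_fd, stdout=slave_fd, stderr=slave_fd,
    close_fds=True,
)
os.close(slave_fd)  # only the child holds the slave end now

output = b""
while True:
    try:
        chunk = os.read(master_fd, 1024)
    except OSError:  # raised once the slave side is fully closed
        break
    if not chunk:
        break
    output += chunk
proc.wait()
os.close(master_fd)
print(output.decode(errors="replace"))  # hello from the pty
```

Because the interception happens at the pty level, the same approach works for any program attached to the terminal, which is what makes this design application-agnostic.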
## Command Line Options
```
usage: eschatch [-h] [--exec EXEC_COMMAND] [--config CONFIG]
[--model MODEL] [--base-url BASE_URL] [--preview]
[--install-config] [--verbose]
ESChatch - LLM-powered terminal command injection
options:
-h, --help show this help message and exit
--exec, -e EXEC_COMMAND
Command to execute (default: bash)
--config, -c CONFIG Path to config file
--model, -m MODEL LLM model to use
--base-url BASE_URL LLM base URL
--preview Preview mode - show generated commands
--install-config Create default config file
--verbose, -v Enable verbose logging
```
## Limitations
- **Unix-only** - Uses pty, so Linux/macOS only
- **Single-threaded** - One session at a time
- **Raw terminal** - Requires tty (won't work in some IDE terminals)
- **Early alpha** - Actively developed, may have bugs
## Troubleshooting
### LLM connection fails
- Check your `base_url` and `api_key` in config
- Ensure your LLM provider is running (Ollama, litellm proxy, etc.)
- Run with `--verbose` for detailed logs
### Escape key not working
- Try different escape keys in config: `ctrl+space`, `escape`
- Some terminals may intercept certain key combinations
### Commands not injecting correctly
- Check that the LLM understands the current context
- Review the screen scrape in verbose logs
- Try a more specific task description
## Development
```bash
# Install dev dependencies
pip install -e ".[dev]"
# Run tests
pytest
# Format code
black .
# Lint
ruff check .
```
## License
MIT License - see [LICENSE](LICENSE)
## Contributing
Contributions welcome! Please read [CONTRIBUTING.md](CONTRIBUTING.md) first.
## Related Projects
- [sidechat](https://github.com/day50/sidechat) - LLM side panel for terminals
- [llm](https://github.com/simonw/llm) - CLI tool for LLM access
- [litellm](https://github.com/BerriAI/litellm) - Unified LLM API
## Acknowledgments
ESChatch was inspired by the idea of making LLMs a seamless part of the terminal workflow, working at the pty level to be truly application-agnostic.
| text/markdown | null | "day50.dev" <day50@proton.me> | null | null | MIT | cli, copilot, llm, pty, shell, terminal | [
"Development Status :: 3 - Alpha",
"Environment :: Console",
"Intended Audience :: Developers",
"License :: OSI Approved :: MIT License",
"Operating System :: MacOS",
"Operating System :: POSIX :: Linux",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.8",
"Programming La... | [] | null | null | >=3.8 | [] | [] | [] | [
"litellm>=1.0.0",
"toml>=0.10.0",
"black>=23.0.0; extra == \"dev\"",
"pytest>=7.0.0; extra == \"dev\"",
"ruff>=0.1.0; extra == \"dev\""
] | [] | [] | [] | [
"Homepage, https://github.com/day50/ESChatch",
"Repository, https://github.com/day50/ESChatch"
] | twine/6.1.0 CPython/3.13.12 | 2026-02-19T03:09:00.448013 | eschatch-0.2.0.tar.gz | 9,464 | a6/83/2a8ef96d89e23b77448ebeb089f51c76572c5350dabfad8fa4cf8906192f/eschatch-0.2.0.tar.gz | source | sdist | null | false | d8cab938a1de5ba4102a755327a10e95 | 1fc51218084685a7801942df3748603aeeb00ff663e757fec858c238665bcbbc | a6832a8ef96d89e23b77448ebeb089f51c76572c5350dabfad8fa4cf8906192f | null | [
"LICENSE"
] | 244 |
2.4 | OCDocker | 0.13.2 | OCDocker is a Python package for molecular docking automation, virtual screening and AI consensus scoring. | [](https://codecov.io/gh/Arturossi/OCDocker)






OCDocker
========
Project Description
-------------------
OCDocker is a Python toolkit and CLI for automated molecular docking, virtual
screening, and AI‑assisted consensus scoring. It streamlines end‑to‑end flows
from preparation through docking, pose clustering and rescoring, with optional
database persistence and analysis utilities.
Key capabilities:
- Multi‑engine docking: AutoDock Vina, Smina, Gnina, PLANTS
- Pipelines: run engines, cluster poses by RMSD (medoid), rescore and export
- Rescoring: built‑in engine rescoring and ODDT models (RFScore, NNScore, PLEC)
- OCScore analytics: DNN/XGBoost/Transformer optimizers, ranking metrics, SHAP
- Database integration: PostgreSQL (default), MySQL support, or SQLite fallback for dev/tests
- CLI and Python API: doctor diagnostics, timeouts, binary checks, reproducible configs
- Packaging: pip (recommended inside a conda/mamba env), Dockerfiles for engines, docs and examples
Community
---------
- Code of Conduct: [CODE_OF_CONDUCT.md](CODE_OF_CONDUCT.md)
- Contributing: [CONTRIBUTING.md](CONTRIBUTING.md)
- Security: [SECURITY.md](SECURITY.md)
- Collaborators: [COLLABORATORS.md](COLLABORATORS.md)
Documentation
-------------
- Manual (GitHub): [MANUAL.md](MANUAL.md)
- Sphinx docs: `docs/` (install docs deps first; then run `make -C docs html`)
- Error handling guide: [docs/ERROR_HANDLING.md](docs/ERROR_HANDLING.md)
Installation
------------
Quickstart (minimal, SQLite)
----------------------------
If you want the fastest path without setting up PostgreSQL/MySQL, use SQLite (local file DB):
1) Install system dependencies (see [System dependencies](#system-dependencies)).
2) Create a conda env with Python 3.11 (prefer `mamba`) and install OCDocker with pip.
3) Run with SQLite enabled:
```bash
export OCDOCKER_DB_BACKEND=sqlite
ocdocker doctor
```
SQLite is recommended for quick experiments and development. For multi-user or long-running workloads, use PostgreSQL (default backend) or MySQL.
Recommended method (mamba + pip)
--------------------------------
**Important:** Install the required system dependencies first (see [System dependencies](#system-dependencies)).
If `mamba` is not installed yet:
```bash
conda install -n base -c conda-forge mamba
```
Then create the environment and install OCDocker from PyPI:
```bash
mamba create -n ocdocker python=3.11 -y
conda activate ocdocker
pip install ocdocker
```
`pip install ocdocker` installs the core package only. To include every optional runtime stack, use `pip install "ocdocker[all]"`.
Install optional feature stacks as needed:
```bash
# Docking workflows
pip install "ocdocker[docking]"
# Docking + DB support
pip install "ocdocker[docking,db]"
# ML workflows (PyTorch/XGBoost/Optuna)
pip install "ocdocker[ml]"
# All optional runtime features
pip install "ocdocker[all]"
```
**Installing from source with pip:**
For development, install from source with pip inside the same conda environment. Ensure the system dependencies are installed first (see [System dependencies](#system-dependencies)).
```bash
# Clone the repository
git clone https://github.com/Arturossi/OCDocker
cd OCDocker
# Create and activate conda env (if not already active)
mamba create -n ocdocker python=3.11 -y
conda activate ocdocker
# Install the package in development mode
pip install -e .
# Optional: install feature extras in editable mode
pip install -e ".[docking,db,ml]"
```
**Note on chemistry packages (`rdkit`, `openbabel`):**
These packages may require system libraries. Install the system dependencies first (see [System dependencies](#system-dependencies)). If pip installation fails, verify your compiler/toolchain and OpenBabel system packages are installed.
Prerequisites
-------------
- Python 3.11+
- Conda (Miniconda/Anaconda) and mamba
- pip (inside the conda environment)
- NVIDIA driver/runtime compatible with CUDA 12.8 (required for Gnina CUDA builds)
- Ubuntu/Debian-like system with internet access
- sudo privileges (needed for system packages, and optional PostgreSQL/MySQL/Vina installs)
- ~10-15 GB of free disk space for dependencies, tools, and caches (minimal installs use less)
- bash shell (used in command examples and helper scripts)
System dependencies
-------------------
Before installing OCDocker, you must install the following system packages:
```bash
sudo apt-get install openbabel libopenbabel-dev swig cmake g++
```
These packages are required for building and using OpenBabel Python bindings, which are essential for OCDocker's molecular processing capabilities.
PostgreSQL setup (quick tutorial)
---------------------------------
This section is optional. Skip it if you are using SQLite (see [Quickstart](#quickstart-minimal-sqlite)).
OCDocker stores docking and optimization results in PostgreSQL by default.
1) Install and start PostgreSQL (Ubuntu/Debian)
```bash
sudo apt-get update && sudo apt-get install -y postgresql postgresql-contrib
sudo systemctl enable --now postgresql
```
2) Create a role and databases
```bash
sudo -u postgres psql
```
```sql
CREATE ROLE ocdocker LOGIN;
-- set the role credential interactively from psql before exit
CREATE DATABASE ocdocker OWNER ocdocker;
CREATE DATABASE optimization OWNER ocdocker;
\q
```
3) Configure `OCDocker.cfg` (or `OCDocker.yml`)
```ini
DB_BACKEND = postgresql
HOST = localhost
PORT = 5432
USER = ocdocker
PASSWORD = <db_password>
DATABASE = ocdocker
OPTIMIZEDB = optimization
```
4) Test connectivity from Python
```python
from sqlalchemy import create_engine
from urllib.parse import quote_plus
user = "ocdocker"
password = quote_plus("<db_password>")
host = "localhost"
port = 5432
db = "optimization"
engine = create_engine(f"postgresql+psycopg://{user}:{password}@{host}:{port}/{db}")
with engine.connect() as conn:
    print(conn.exec_driver_sql("SELECT 1").scalar())
```
MySQL remains supported:
- Set `DB_BACKEND = mysql` (or `OCDOCKER_DB_BACKEND=mysql`).
- Use `PORT = 3306`.
- SQLAlchemy URL format: `mysql+pymysql://...`.
Notes:
- For CI/tests or local experiments, set `OCDOCKER_DB_BACKEND=sqlite` to bypass server DBs.
- You can also set SQLite via config (`DB_BACKEND = sqlite`) and choose a custom file via `SQLITE_PATH`.
Troubleshooting
---------------
- MGLTools issues (e.g., NumPy import errors):
- Consider reinstalling MGLTools from source or using the official archives; ensure system Python/conda paths don’t shadow MGLTools’ bundled Python.
- Verify the `pythonsh` and `prepare_*` paths configured in `OCDocker.cfg`.
- Database authentication errors:
- PostgreSQL: ensure service is running (`sudo systemctl status postgresql`) and role/database exist.
- MySQL: ensure service is running (`sudo systemctl status mysql`) and user/database grants exist.
- DSSP not found:
- Install via `sudo apt-get install -y dssp`, or adjust the `dssp` path in `OCDocker.cfg` to match your system.
GPU (optional)
--------------
OCDocker can leverage NVIDIA GPUs for PyTorch-based components (e.g., OCScore DNN/SHAP pipelines).
### Requirements
- Recent NVIDIA driver compatible with your installed PyTorch CUDA build (for torch 2.4.x, a modern 535+ driver is a good baseline)
### Quick checks
```bash
# Driver + GPU visible?
nvidia-smi
# PyTorch sees the GPU?
python -c "import torch; print('CUDA available:', torch.cuda.is_available()); print('Device count:', torch.cuda.device_count())"
```
### Troubleshooting GPU
- If `torch.cuda.is_available()` is False:
- Ensure the NVIDIA driver is installed and loaded (e.g., `sudo ubuntu-drivers autoinstall` then reboot)
- Verify your driver version and installed torch CUDA build are compatible
- Make sure you activated the correct conda environment (`ocdocker`)
- Avoid mixing multiple CUDA toolkits unless you intentionally need that setup
Alternatively, install each external tool manually using the steps below.
Download and install MGLTools
-----------------------------
Use either the step‑by‑step install or the single all‑in‑one command below.
- Option 1 (Step‑by‑step)
- Download the archive
```bash
wget https://ccsb.scripps.edu/download/532/ -O mgltools_install.tar.gz
```
- Extract it
```bash
tar -xvzf mgltools_install.tar.gz
```
- Enter the created directory and run the installer
```bash
cd mgltools_x86_64Linux2_1.5.X
source ./install.sh
```
- Option 2 (All‑in‑one, easy to automate)
```bash
wget https://ccsb.scripps.edu/download/532/ -O mgltools_install.tar.gz \
&& mkdir -p mgltools \
&& tar -xvzf mgltools_install.tar.gz -C mgltools --strip-components=1 \
&& rm mgltools_install.tar.gz \
&& cd mgltools \
&& source ./install.sh
```
Note: The `prepare_*` scripts are located at `<installation_dir>/mgltools/MGLToolsPckgs/AutoDockTools`.
If you still can’t run MGLTools (e.g., NumPy errors), consider reinstalling from source and ensure your environment paths don’t shadow the MGLTools Python.
Install DSSP
---------------
To install DSSP on Ubuntu 18.04+:
```bash
sudo apt install dssp
```
By default, the DSSP path is `/usr/bin/dssp`.
Download and install AutoDock Vina
---------------
There are two installation options:
* Option 1 (Step-by-step)
- Go to the website http://vina.scripps.edu/download.html and download the Linux installer (tgz)
- Untar it:
```bash
tar -xvzf autodock_vina_1_1_2_linux_x86.tgz
```
* Option 2 (All-in-one command. It looks more involved, but it is easier than Option 1 and easy to automate)
```bash
mkdir -p vina \
&& wget -O vina/vina https://github.com/ccsb-scripps/AutoDock-Vina/releases/download/v1.2.3/vina_1.2.3_linux_x86_64 \
&& chmod +x vina/vina \
&& sudo install -m 0755 vina/vina /usr/bin/vina
```
Download and install Gnina (CUDA 12.8)
---------------
OCDocker uses the Gnina CUDA 12.8 build. To run it correctly, ensure:
- NVIDIA driver is compatible with CUDA 12.8
- cuDNN 9 runtime is available
Step-by-step:
```bash
mkdir -p gnina
wget -O gnina/gnina.1.3.2.cuda12.8 https://github.com/gnina/gnina/releases/download/v1.3.2/gnina.1.3.2.cuda12.8
chmod +x gnina/gnina.1.3.2.cuda12.8
sudo install -m 0755 gnina/gnina.1.3.2.cuda12.8 /usr/bin/gnina
```
Verify installation:
```bash
gnina --version
```
All-in-one command:
```bash
mkdir -p gnina \
&& wget -O gnina/gnina.1.3.2.cuda12.8 https://github.com/gnina/gnina/releases/download/v1.3.2/gnina.1.3.2.cuda12.8 \
&& chmod +x gnina/gnina.1.3.2.cuda12.8 \
&& sudo install -m 0755 gnina/gnina.1.3.2.cuda12.8 /usr/bin/gnina
```
Usage Overview
--------------
- CLI: `ocdocker` exposes subcommands for docking, pipelines, SHAP analysis, diagnostics, and an interactive console.
- Programmatic: importing modules auto‑bootstraps once by default (see Bootstrap below). You can opt out via an env var and call `bootstrap()` explicitly.
Bootstrap & Configuration
-------------------------
- Auto‑bootstrap on import: when you import OCDocker modules, the environment initializes once (config, DB, dirs). This is skipped during docs/tests.
- Configuration file: set `OCDOCKER_CONFIG` to point to your `OCDocker.cfg`/`OCDocker.yml`, or place one of those files in the working directory.
- Disable auto‑bootstrap: set `OCDOCKER_NO_AUTO_BOOTSTRAP=1` and call `bootstrap()` explicitly:
```python
from OCDocker.Initialise import bootstrap
import argparse
bootstrap(argparse.Namespace(
    multiprocess=True,
    update=False,
    config_file='OCDocker.cfg',
    output_level=2,
    overwrite=False,
))
```
SQLite Fallback (optional)
--------------------------
- For development/tests, you can bypass PostgreSQL/MySQL entirely by setting `OCDOCKER_DB_BACKEND=sqlite` before import or running the CLI.
- This creates/uses a local `ocdocker.db` under the module directory.
Installer behavior with SQLite
------------------------------
- To skip installing and configuring PostgreSQL/MySQL during `install.sh`, enable SQLite mode before running it:
```bash
export OCDOCKER_DB_BACKEND=sqlite # select SQLite backend
export OCDOCKER_SQLITE_PATH=/path/ocdocker.db # optional custom path
bash ./install.sh
```
- Alternatively, if you already have an `OCDocker.cfg` in the project directory, you can set in the file:
- `DB_BACKEND = sqlite`
- `SQLITE_PATH = /path/to/ocdocker.db` (optional)
In both cases, the installer will:
- Install only `dssp` (skips SQL server packages)
- Skip SQL user/database creation
- Proceed with the remaining steps normally
- Install Gnina CUDA 12.8 (`gnina.1.3.2.cuda12.8`) and register it as `/usr/bin/gnina`
Important note about SQLite
---------------------------
- SQLite is convenient for development and tests but has limitations for concurrent writes and larger workloads.
- For production use, performance, and concurrency, a full PostgreSQL installation is strongly recommended (MySQL is also supported).
Diagnostics: `ocdocker doctor`
--------------------------------
Run a quick environment report:
```bash
ocdocker doctor --conf OCDocker.cfg
```
It checks:
- Config path in use
- Binaries: `vina`, `smina`, `plants` (presence on PATH or configured paths)
- External tool metadata: resolved executable and version (`vina`, `smina`, `plants`, `gnina`, `pythonsh`, `dssp`, `obabel`, `spores`)
- Python deps: rdkit, Biopython, ODDT, SQLAlchemy
- DB backend/driver metadata, client version, server version (when queryable), connectivity, and current/expected user+database checks
Reproducibility: `ocdocker manifest`
------------------------------------
Generate a JSON manifest with OCDocker/Python versions, external tool versions,
platform metadata, git metadata (when available), and installed Python packages:
```bash
ocdocker manifest --conf OCDocker.cfg --output reproducibility_manifest.json
```
From Python code:
```python
import OCDocker.Toolbox.Reproducibility as ocrepro
manifest = ocrepro.generate_reproducibility_manifest(include_python_packages=False)
_ = ocrepro.write_reproducibility_manifest("reproducibility_manifest.json")
```
Docking: Quick Examples
-----------------------
Install docking dependencies first if needed:
```bash
pip install "ocdocker[docking]"
```
Single engine (Vina) with timeout, storing to DB:
```bash
ocdocker vs \
--engine vina \
--receptor path/to/receptor.pdb \
--ligand path/to/ligand.smi \
--box path/to/box.pdb \
--timeout 600 \
--store-db
```
For `--store-db`, install DB dependencies too:
```bash
pip install "ocdocker[db]"
```
Pipeline across engines with clustering and rescoring:
```bash
ocdocker pipeline \
--receptor path/to/receptor.pdb \
--ligand path/to/ligand.sdf \
--box path/to/box.pdb \
--engines vina,smina,plants \
--store-db
```
Notes:
- `--timeout` limits external tool runtime (also via `OCDOCKER_TIMEOUT`).
- `--store-db` auto-creates tables and stores receptor/ligand descriptors plus supported rescoring columns in the DB.
Timeouts & External Tools
-------------------------
- You can prevent hangs by defining a timeout:
- CLI: `--timeout <seconds>` (for `vs` and `pipeline`)
- Env: `OCDOCKER_TIMEOUT=<seconds>`
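Conceptually, the timeout wraps each external tool invocation so a hung process is killed instead of stalling the whole run. A minimal sketch of that idea using only the standard library (not OCDocker's actual code; `sleep 5` stands in for a docking engine):

```python
import subprocess

# Conceptual sketch: guard an external tool with a hard timeout.
# This mirrors the idea behind --timeout / OCDOCKER_TIMEOUT, not
# OCDocker's actual implementation.
def run_with_timeout(cmd, seconds):
    try:
        subprocess.run(cmd, timeout=seconds, check=True)
        return "finished"
    except subprocess.TimeoutExpired:
        return "killed after timeout"

print(run_with_timeout(["sleep", "5"], seconds=1))  # killed after timeout
```

`subprocess.run` kills the child process when the timeout expires, so a wedged engine cannot block the rest of a virtual-screening batch.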
Binary Checks
-------------
- The CLI validates presence of required binaries (`vina`/`smina`/`plants`) before running and errors early if missing. Use `ocdocker doctor` to see what’s available.
Interactive Console
-------------------
```bash
ocdocker console --conf OCDocker.cfg
```
This opens an interactive namespace with common OCDocker utilities imported.
Running Python Scripts
----------------------
Run Python scripts with all OCDocker libraries pre-loaded:
```bash
ocdocker script --conf OCDocker.cfg --allow-unsafe-exec script.py [script_args...]
```
Security note: in-process script execution requires explicit opt-in via
`--allow-unsafe-exec` (or `OCDOCKER_ALLOW_SCRIPT_EXEC=1`).
This command bootstraps the OCDocker environment, loads all modules (ocl, ocr, ocvina, etc.),
and executes your script. All OCDocker classes and functions are available without imports.
Example script:
```python
# script.py - All OCDocker modules are pre-loaded!
receptor = ocr.Receptor("receptor.pdb")
ligand = ocl.Ligand("ligand.smi")
vina = ocvina.Vina(...)
# ... use OCDocker functionality
```
See `examples/13_cli_script_example.py` for a complete example.
Container wrappers (Docker, Podman and Singularity)
---------------------------------------------------
OCDocker includes helper scripts that auto-mount likely host paths:
- Docker: `containers/docker/ocdocker.sh`
- Podman: `containers/podman/ocdocker.sh`
- Singularity/Apptainer: `containers/singularity/ocdocker.sh`
All wrappers can:
- parse explicit `--mount` flags
- read mount lists from env vars (`OCDOCKER_DOCKER_MOUNTS` / `OCDOCKER_SINGULARITY_MOUNTS`)
- auto-detect absolute paths passed in CLI arguments
- parse `OCDocker.cfg` paths and add their parent directories as bind mounts
Docker/Podman backend selection:
- Default is PostgreSQL.
- Set `OCDOCKER_DB_BACKEND=mysql` (or `DB_BACKEND=mysql`) to use the MySQL compose override and MySQL container config.
Singularity helper extras:
- `--cfg-source /path/to/OCDocker.cfg` to force which config is parsed for bind hints
- `--dry-run` to print the resolved `apptainer/singularity exec` command without executing it
Singularity SQL sidecars:
- PostgreSQL (default): `containers/singularity/postgresql.sh`
- MySQL (optional): `containers/singularity/mysql.sh`
```bash
# Start local PostgreSQL (default backend)
containers/singularity/postgresql.sh start
# Start local MySQL (optional backend)
containers/singularity/mysql.sh start
```
Default PostgreSQL config matches `containers/singularity/OCDocker.cfg.singularity`:
- `DB_BACKEND=postgresql`
- `HOST=localhost`
- `PORT=5432`
- `USER=ocdocker`
- `PASSWORD=ocdocker_pass`
- `DATABASE=ocdocker`
For MySQL, use `containers/singularity/OCDocker.cfg.singularity.mysql` (or set `DB_BACKEND=mysql` and port `3306`).
Recommended pattern for dynamic script paths:
1. Create one project/work root (for example `/data/your_project`).
2. Keep all inputs/outputs under that root.
3. Bind that root once, instead of many scattered folders.
Singularity example:
```bash
export OCDOCKER_SINGULARITY_IMAGE=/path/to/ocdocker.sif
containers/singularity/ocdocker.sh \
--workdir /data/your_project \
script --allow-unsafe-exec /data/your_project/run.py
```
Environment Variables (reference)
---------------------------------
- `OCDOCKER_CONFIG`: path to `OCDocker.cfg` (config file with external tool paths and parameters).
- `OCDOCKER_DB_BACKEND` / `DB_BACKEND`: database backend override (`postgresql`, `mysql`, or `sqlite`).
- `OCDOCKER_NO_AUTO_BOOTSTRAP`: if set to `1/true/yes`, disables auto‑bootstrap on import; call `bootstrap()` manually.
- `OCDOCKER_SQLITE_PATH`: optional explicit SQLite database file path (used when backend is `sqlite`).
- `OCDOCKER_TIMEOUT`: default timeout (seconds) for external tools when not provided via CLI.
Python Support
--------------
- Requires Python 3.11+.
Testing
=======
This repository ships a test suite under `tests/` that exercises the core library (Toolbox, Docking helpers, DB minimal, parsing, etc.).
Quick start
-----------
```bash
conda activate ocdocker
pytest -q
```
Useful commands
---------------
- Run a specific test file:
```bash
pytest tests/docking/test_vina.py -q
```
- Show test names while running:
```bash
pytest -q -k vina -vv
```
- Coverage (if `pytest-cov` is present):
```bash
pytest --cov=OCDocker --cov-report=term-missing
```
Notes for testing
-----------------
- The tests operate on sample data under `test_files/` and do not require external binaries to actually run (they validate parsing/IO helpers, config generation, log readers, etc.).
- If you want to run end‑to‑end docking locally, ensure you’ve installed external tools (MGLTools, Vina, Smina/PLANTS where applicable) and set paths in `OCDocker.cfg`.
- Some modules (e.g., Initialise) perform environment bootstrapping; the test suite avoids heavy side effects, but for interactive usage consider setting `OCDOCKER_CONFIG=./OCDocker.cfg`.
| text/markdown | null | Artur Duque Rossi <arturossi10@gmail.com> | null | null | UFRJ License Notice
This software is owned by the Federal University of Rio de Janeiro (UFRJ),
developed by Artur Duque Rossi and Pedro Henrique Monteiro Torres, and is
protected under Brazilian Law No. 9,609/1998.
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software to:
- run and use the software for any purpose;
- study, reproduce, and modify the software;
- contribute code and improvements;
- publish, present, and disseminate scientific or technical results produced
with the software;
- redistribute the software, with or without modifications.
Condition:
- Any redistribution, publication, or public disclosure related to this
software must preserve this notice and give appropriate credit to UFRJ and
the original developers.
This software is provided "as is", without warranty of any kind, express or
implied.
| docking, virtual screening, AI, bioinformatics, drug discovery | [
"Programming Language :: Python :: 3",
"License :: Other/Proprietary License",
"Operating System :: POSIX :: Linux",
"Development Status :: 4 - Beta",
"Intended Audience :: Science/Research",
"Topic :: Scientific/Engineering :: Bio-Informatics"
] | [] | null | null | >=3.11 | [] | [] | [] | [
"numpy>=1.26.4",
"pandas>=2.2.3",
"scipy>=1.14.1",
"scikit-learn>=1.5.2",
"scikit-image>=0.24.0",
"seaborn>=0.13.2",
"matplotlib>=3.9.2",
"pingouin>=0.5.5",
"statsmodels>=0.14.3",
"pyyaml>=6.0.2",
"joblib>=1.4.2",
"dcor>=0.6",
"fsspec>=2024.10.0",
"numba>=0.60.0",
"graphviz>=0.20.3",
"... | [] | [] | [] | [
"Homepage, https://github.com/Arturossi/OCDocker",
"Repository, https://github.com/Arturossi/OCDocker",
"Issues, https://github.com/Arturossi/OCDocker/issues"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-19T03:07:41.440789 | ocdocker-0.13.2.tar.gz | 489,251 | a2/9f/fcd807d19f60c88fb1b5620bbc7398c932bed56527f7ebcf4651a1f6fcb9/ocdocker-0.13.2.tar.gz | source | sdist | null | false | 6a7fcd3f480668583d5c5835079a6020 | 19ff09f57f44e97f2b637c4a0bc0254346c1c2596232193d42ccc725a13fae00 | a29ffcd807d19f60c88fb1b5620bbc7398c932bed56527f7ebcf4651a1f6fcb9 | null | [
"LICENSE"
] | 0 |
2.4 | pixelquery | 0.1.8 | Turn your COG files into an analysis-ready time-series data cube | # PixelQuery
> Turn your COG files into an analysis-ready time-series data cube. No infrastructure required.
[](https://pypi.org/project/pixelquery/)
[](https://python.org)
[](LICENSE)
## What is PixelQuery?
PixelQuery converts a directory of Cloud-Optimized GeoTIFFs (COGs) into a
queryable time-series data cube backed by [Icechunk](https://icechunk.io/) virtual Zarr storage.
- **Zero data copy**: Virtual references to original COGs (no duplication)
- **Fast ingestion**: ~9ms per COG (243 files in 2 seconds)
- **Lazy loading**: Data reads from COGs only when you call `.compute()`
- **pip install**: No STAC server, no database, no infrastructure
- **Multi-satellite**: Built-in product profiles for Planet, Sentinel-2, Landsat
## Quick Start
```bash
pip install pixelquery[icechunk]
```
```python
import pixelquery as pq
# Ingest COGs from a directory
result = pq.ingest("./my_cogs/", band_names=["blue", "green", "red", "nir"])
print(f"Ingested {result.scene_count} scenes in {result.elapsed:.1f}s")
# Query as lazy xarray Dataset
ds = pq.open_xarray("./warehouse")
print(ds) # Dimensions: (time: 243, band: 4, y: 874, x: 3519)
```
## 5-Minute Tutorial
### 1. Inspect your COG files
```python
import pixelquery as pq
# Check what you have
meta = pq.inspect_cog("./scene.tif")
print(meta) # CRS, bounds, bands, resolution
```
### 2. Ingest
```python
result = pq.ingest(
    "./planet_cogs/",
    warehouse="./warehouse",
    band_names=["blue", "green", "red", "nir"],
    product_id="planet_sr",
)
```
### 3. Query
```python
ds = pq.open_xarray("./warehouse")
# Filter by time range
from datetime import datetime
ds = pq.open_xarray(
    "./warehouse",
    time_range=(datetime(2024, 1, 1), datetime(2024, 12, 31)),
    bands=["red", "nir"],
)
```
### 4. Compute NDVI
```python
nir = ds["data"].sel(band="nir")
red = ds["data"].sel(band="red")
ndvi = (nir - red) / (nir + red)
ndvi.mean(dim="time").compute() # Actual COG reads happen here
```
### 5. Point time-series
```python
ts = pq.timeseries("./warehouse", lon=127.05, lat=37.55)
ts["data"].sel(band="nir").plot() # Plot NIR time-series
```
## Product Profiles
Register satellite product definitions for multi-product warehouses:
```python
pq.register_product(
    "sentinel2_l2a",
    bands={"blue": 1, "green": 2, "red": 3, "nir": 7},
    resolution=10.0,
    provider="ESA",
)
# Browse warehouse contents
cat = pq.catalog("./warehouse")
print(cat.summary())
# === PixelQuery Warehouse Summary ===
# Products: 2
#
# planet_sr (Planet)
# Scenes: 243
# Bands: blue, green, red, nir
# Resolution: 3.0m
```
## How It Works
```
COG files (on disk/S3)
|
v
VirtualTIFF parser (reads byte offsets, ~3ms/file)
|
v
Icechunk repository (stores virtual chunk references)
|
v
xarray.open_zarr() (lazy loading)
|
v
.compute() → reads actual pixel data from original COGs
```
No data is copied during ingestion. Icechunk stores only the byte-range
references to the original COG files. Actual pixel data is read on-demand
when you call `.compute()` or `.values`.
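The "virtual reference" idea can be sketched in a few lines: the store holds only `(path, offset, length)` triples, and bytes are read from the original file only when requested. This is a conceptual illustration, not Icechunk's actual reference format:

```python
import os
import tempfile
from dataclasses import dataclass

# Conceptual sketch of a virtual chunk reference: metadata only,
# with I/O deferred until read(). Not Icechunk's real format.
@dataclass(frozen=True)
class VirtualChunk:
    path: str
    offset: int
    length: int

    def read(self) -> bytes:
        with open(self.path, "rb") as f:
            f.seek(self.offset)
            return f.read(self.length)

# Fake "COG": 100 bytes of pixel data on disk.
with tempfile.NamedTemporaryFile(delete=False) as f:
    f.write(bytes(range(100)))
    cog_path = f.name

ref = VirtualChunk(cog_path, offset=10, length=4)  # a few bytes of metadata
data = ref.read()                                  # real I/O happens only here
print(list(data))                                  # [10, 11, 12, 13]
os.unlink(cog_path)
```

This is why ingesting 243 COGs costs only ~0.2MB of storage: the warehouse records byte ranges, while the pixels stay where they are.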
## Performance
| Operation | Result |
|-----------|--------|
| Single COG ingest | ~3ms (virtual reference) |
| 243 COG batch | 2.1s (8.6ms/COG) |
| Storage overhead | 0.2MB for 4.4GB data |
| Metadata query | 59ms |
| Compute 6 scenes | 255ms |
## Time Travel
Icechunk provides built-in versioning. Every ingest creates a snapshot.
```python
# View history
history = pq.open_xarray("./warehouse", snapshot_id=None)
# Query at a specific point in time
cat = pq.catalog("./warehouse")
snapshots = cat.get_snapshot_history()
old_ds = pq.open_xarray("./warehouse", snapshot_id=snapshots[-1]["snapshot_id"])
```
## API Reference
### Core Functions
| Function | Description |
|----------|-------------|
| `pq.ingest(source, warehouse, ...)` | Auto-scan and ingest COGs |
| `pq.open_xarray(warehouse, ...)` | Query as lazy xarray Dataset |
| `pq.timeseries(warehouse, lon, lat, ...)` | Extract point time-series |
### Inspection
| Function | Description |
|----------|-------------|
| `pq.inspect_cog(path)` | Read COG metadata (CRS, bounds, bands) |
| `pq.inspect_directory(dir)` | Scan directory for COGs |
### Catalog
| Function | Description |
|----------|-------------|
| `pq.catalog(warehouse)` | Get catalog for warehouse |
| `pq.register_product(...)` | Register a product profile |
| `catalog.summary()` | Formatted warehouse summary |
| `catalog.products()` | List product IDs |
| `catalog.scenes(...)` | List scenes with filters |
## Installation
### From PyPI
```bash
pip install pixelquery[icechunk]
```
### From Source
```bash
git clone https://github.com/pixelquery/pixelquery.git
cd pixelquery
pip install -e ".[icechunk,dev]"
```
## When to Use PixelQuery
| Scenario | Best Tool |
|----------|-----------|
| Private COGs -> time-series analysis | **PixelQuery** |
| Public satellite data catalog | STAC + stackstac |
| Enterprise cloud data platform | Arraylake |
| Planetary-scale analysis | Google Earth Engine |
PixelQuery is designed for researchers and developers who have their own COG
files and want to query them as a time-series data cube without setting up
any infrastructure.
## Contributing
Contributions are welcome! Please open an issue or PR.
## License
Apache 2.0
| text/markdown | PixelQuery Contributors | null | null | null | Apache-2.0 | COG, data-cube, earth-observation, geospatial, icechunk, imagery, raster, remote-sensing, satellite, time-series, virtual, zarr | [
"Development Status :: 3 - Alpha",
"Intended Audience :: Developers",
"Intended Audience :: Science/Research",
"License :: OSI Approved :: Apache Software License",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Topic :: Scie... | [] | null | null | >=3.11 | [] | [] | [] | [
"numpy>=1.24.0",
"mypy>=1.5.0; extra == \"dev\"",
"pre-commit>=3.5.0; extra == \"dev\"",
"pytest-cov>=4.1.0; extra == \"dev\"",
"pytest>=7.4.0; extra == \"dev\"",
"ruff>=0.8.0; extra == \"dev\"",
"duckdb>=0.10.0; extra == \"full\"",
"geopandas>=0.14.0; extra == \"full\"",
"rasterio>=1.3.0; extra == ... | [] | [] | [] | [
"Homepage, https://github.com/pixelquery/pixelquery",
"Documentation, https://github.com/pixelquery/pixelquery#readme",
"Repository, https://github.com/pixelquery/pixelquery",
"Issues, https://github.com/pixelquery/pixelquery/issues",
"Changelog, https://github.com/pixelquery/pixelquery/releases"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-19T03:07:04.524839 | pixelquery-0.1.8.tar.gz | 94,825 | 0c/31/4470aedcb3e8e6eba836b5d470000e44d542b56c47a61a7d63bfaa7d6005/pixelquery-0.1.8.tar.gz | source | sdist | null | false | 2bd62bbbf1a4187d7814a587ac6837c2 | 96c1a1c344f59a3e3383949106dcf5d53d4755bcabde2f29d6e48ee359e41d04 | 0c314470aedcb3e8e6eba836b5d470000e44d542b56c47a61a7d63bfaa7d6005 | null | [
"LICENSE"
] | 212 |
2.4 | opentelemetry-instrumentation-re | 0.1.0 | OpenTelemetry instrumentation for Python's re (regex) module | # OpenTelemetry instrumentation for Python regex libraries
[](https://github.com/nilox94/opentelemetry-instrumentation-re/actions/workflows/test.yml)
This library provides [OpenTelemetry instrumentation](https://opentelemetry.io/docs/concepts/instrumentation/) for three Python regex libraries: the standard library [`re`](https://docs.python.org/3/library/re.html), and the [`regex`](https://pypi.org/project/regex/) and [`google-re2`](https://pypi.org/project/google-re2/) packages.
It emits spans for all regex operations (see [Supported Operations](#supported-operations)) and for compiled pattern methods.
## Installation
Install the package; add the extras for the libraries you want to instrument:
```bash
pip install opentelemetry-instrumentation-re # stdlib re (always available)
pip install 'opentelemetry-instrumentation-re[regex]' # regex package
pip install 'opentelemetry-instrumentation-re[google-re2]' # google-re2 package
```
**Compatibility:** stdlib `re` works out of the box. To instrument `regex`, install version **`>= 2021.0`**; for `google-re2`, install version **`>= 1.0`**.
## Usage
### Manual instrumentation
Instrument the library or libraries you use.
Each exposes the same operations; use the instrumentor that matches your imports.
**stdlib `re`:**
```python
import re
from opentelemetry_instrumentation_re import ReInstrumentor
ReInstrumentor().instrument()
re.search(r"\d+", "hello 42 world")
pattern = re.compile(r"\d+")
pattern.findall("a1 b2 c3")
```
**`regex` package**:
```python
import regex
from opentelemetry_instrumentation_re import RegexInstrumentor
RegexInstrumentor().instrument()
regex.search(r"\d+", "hello 42 world")
```
**`google-re2` package**:
```python
import re2
from opentelemetry_instrumentation_re import GoogleRe2Instrumentor
GoogleRe2Instrumentor().instrument()
re2.search(r"\d+", "hello 42 world")
```
You can instrument more than one of these in the same process if your application uses multiple regex libraries.
### Auto-instrumentation
If you already run your app with [OpenTelemetry Python auto-instrumentation](https://opentelemetry.io/docs/languages/python/automatic/), you don't need to change any code: install this package (and the `[regex]` and/or `[google-re2]` extras for the libraries you use).
The agent discovers and enables the instrumentors automatically.
To disable them, use `OTEL_PYTHON_DISABLED_INSTRUMENTATIONS=re,regex,google_re2` as needed.
For setup of the agent itself, see the linked doc.
### Uninstrumenting
Call `uninstrument()` on the same instrumentor you used to stop tracing for that library:
```python
from opentelemetry_instrumentation_re import ReInstrumentor, RegexInstrumentor, GoogleRe2Instrumentor
ReInstrumentor().uninstrument() # re
RegexInstrumentor().uninstrument() # regex
GoogleRe2Instrumentor().uninstrument() # google-re2
```
## Span Attributes
Spans are named `{library}.{operation}` (e.g. `re.search`, `regex.findall`, `google_re2.sub`).
The following attributes are added to spans:
- `re.operation` - operation name (e.g. `search`, `match`, `sub`, `findall`).
- `re.pattern` - full pattern string (may contain sensitive data; consider your export/backend policy).
- `re.string_length` - length of the input string.
- `re.match_count` - number of matches found (only for `findall` and `subn` operations).
- `re.library.name` - instrumented library: `re`, `regex`, or `google_re2`.
### Example Span
After calling `re.search(r"\d+", "hello 42 world")` with the stdlib `re` instrumentor, the following span is created:
```json
{
"name": "re.search",
"kind": "INTERNAL",
"attributes": {
"re.operation": "search",
"re.pattern": "\\d+",
"re.string_length": 14,
"re.library.name": "re"
}
}
```
Example with `re.findall(r"\d", "a1 b2 c3")`, which includes the `re.match_count` attribute:
```json
{
"name": "re.findall",
"kind": "INTERNAL",
"attributes": {
"re.operation": "findall",
"re.pattern": "\\d",
"re.string_length": 8,
"re.match_count": 3,
"re.library.name": "re"
}
}
```
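The attribute values in the JSON examples above can be reproduced with the stdlib `re` module alone. This is only a sketch of what the instrumentation records for a `findall` call, not its actual implementation:

```python
import re

# Sketch only: derive the attribute values recorded for re.findall
# (attribute names match the list in "Span Attributes" above).
pattern, text = r"\d", "a1 b2 c3"
matches = re.findall(pattern, text)

span_name = "re.findall"  # "{library}.{operation}"
attributes = {
    "re.operation": "findall",
    "re.pattern": pattern,
    "re.string_length": len(text),
    "re.match_count": len(matches),  # only set for findall/subn
    "re.library.name": "re",
}
print(span_name, attributes["re.string_length"], attributes["re.match_count"])
# re.findall 8 3
```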
## Supported Operations
| Operation | Module-level Function | Compiled Pattern Method | Instrumented Libraries | Notes |
|-----------|----------------------|------------------------|------------------------|-------|
| `search` | `re.search()` | `pattern.search()` | `re`, `regex`, `google-re2` | |
| `match` | `re.match()` | `pattern.match()` | `re`, `regex`, `google-re2` | |
| `fullmatch` | `re.fullmatch()` | `pattern.fullmatch()` | `re`, `regex`, `google-re2` | |
| `split` | `re.split()` | `pattern.split()` | `re`, `regex`, `google-re2` | |
| `findall` | `re.findall()` | `pattern.findall()` | `re`, `regex`, `google-re2` | Includes `re.match_count` |
| `finditer` | `re.finditer()` | `pattern.finditer()` | `re`, `regex`, `google-re2` | |
| `sub` | `re.sub()` | `pattern.sub()` | `re`, `regex`, `google-re2` | |
| `subn` | `re.subn()` | `pattern.subn()` | `re`, `regex`, `google-re2` | Includes `re.match_count` |
## Learn more
- [OpenTelemetry Python](https://opentelemetry.io/docs/languages/python/) - language overview and concepts.
- [Python instrumentation](https://opentelemetry.io/docs/languages/python/instrumentation/) - manual instrumentation and TracerProvider setup.
- [Python getting started](https://opentelemetry.io/docs/languages/python/getting-started/) - end-to-end setup with an exporter.
## Development & Contributing
Contributions are welcome.
See [CONTRIBUTING.md](CONTRIBUTING.md) for setup, running tests, linting, **Git hooks (prek)**, and development workflow.
Maintainers: see [RELEASING.md](RELEASING.md) for how to publish releases.
## License
Apache-2.0
| text/markdown | Danilo Gómez | Danilo Gómez <danilogomez3.14@gmail.com> | null | null | null | null | [
"Development Status :: 4 - Beta",
"Intended Audience :: Developers",
"License :: OSI Approved :: Apache Software License",
"Programming Language :: Python",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language ... | [] | null | null | >=3.10 | [] | [] | [] | [
"opentelemetry-api~=1.28",
"opentelemetry-instrumentation~=0.49",
"typing-extensions>=4.12.0",
"opentelemetry-instrumentation-re[instruments]; extra == \"all\"",
"uv>=0.4; extra == \"build\"",
"basedpyright>=1.38.0; extra == \"dev\"",
"prek>=0.3.0; extra == \"dev\"",
"ruff>=0.1.0; extra == \"dev\"",
... | [] | [] | [] | [
"Homepage, https://github.com/nilox94/opentelemetry-instrumentation-re",
"Repository, https://github.com/nilox94/opentelemetry-instrumentation-re"
] | uv/0.10.4 {"installer":{"name":"uv","version":"0.10.4","subcommand":["publish"]},"python":null,"implementation":{"name":null,"version":null},"distro":{"name":"Ubuntu","version":"24.04","id":"noble","libc":null},"system":{"name":null,"release":null},"cpu":null,"openssl_version":null,"setuptools_version":null,"rustc_version":null,"ci":true} | 2026-02-19T03:06:40.737335 | opentelemetry_instrumentation_re-0.1.0.tar.gz | 7,283 | 2b/94/90c384cbd0540ddf77f178e72dfa71e988d9071dfdea5fe7319542f5423c/opentelemetry_instrumentation_re-0.1.0.tar.gz | source | sdist | null | false | 07cbbee4656d084c48e2870566051937 | 5d33cec0c0b799f8ec7666627639ac8b342592e25b8e32da0b3f52da4c67b692 | 2b9490c384cbd0540ddf77f178e72dfa71e988d9071dfdea5fe7319542f5423c | Apache-2.0 | [] | 222 |
2.4 | motd-claude-code-plugins-v3 | 1.0.0 | Bundled plugins for Claude Code including Agent SDK development tools, PR review toolkit, and commit workflows | # Claude Code
 [![npm]](https://www.npmjs.com/package/@anthropic-ai/claude-code)
[npm]: https://img.shields.io/npm/v/@anthropic-ai/claude-code.svg?style=flat-square
Claude Code is an agentic coding tool that lives in your terminal, understands your codebase, and helps you code faster by executing routine tasks, explaining complex code, and handling git workflows -- all through natural language commands. Use it in your terminal or IDE, or tag @claude on GitHub.
**Learn more in the [official documentation](https://code.claude.com/docs/en/overview)**.
<img src="./demo.gif" />
## Get started
> [!NOTE]
> Installation via npm is deprecated. Use one of the recommended methods below.
For more installation options, uninstall steps, and troubleshooting, see the [setup documentation](https://code.claude.com/docs/en/setup).
1. Install Claude Code:
**MacOS/Linux (Recommended):**
```bash
curl -fsSL https://claude.ai/install.sh | bash
```
**Homebrew (MacOS/Linux):**
```bash
brew install --cask claude-code
```
**Windows (Recommended):**
```powershell
irm https://claude.ai/install.ps1 | iex
```
**WinGet (Windows):**
```powershell
winget install Anthropic.ClaudeCode
```
**NPM (Deprecated):**
```bash
npm install -g @anthropic-ai/claude-code
```
2. Navigate to your project directory and run `claude`.
## Plugins
This repository includes several Claude Code plugins that extend functionality with custom commands and agents. See the [plugins directory](./plugins/README.md) for detailed documentation on available plugins.
## Reporting Bugs
We welcome your feedback. Use the `/bug` command to report issues directly within Claude Code, or file a [GitHub issue](https://github.com/anthropics/claude-code/issues).
## Connect on Discord
Join the [Claude Developers Discord](https://anthropic.com/discord) to connect with other developers using Claude Code. Get help, share feedback, and discuss your projects with the community.
## Data collection, usage, and retention
When you use Claude Code, we collect feedback, which includes usage data (such as code acceptance or rejections), associated conversation data, and user feedback submitted via the `/bug` command.
### How we use your data
See our [data usage policies](https://code.claude.com/docs/en/data-usage).
### Privacy safeguards
We have implemented several safeguards to protect your data, including limited retention periods for sensitive information, restricted access to user session data, and clear policies against using feedback for model training.
For full details, please review our [Commercial Terms of Service](https://www.anthropic.com/legal/commercial-terms) and [Privacy Policy](https://www.anthropic.com/legal/privacy).
| text/markdown | Anthropic | Anthropic <support@anthropic.com> | null | null | MIT | claude, claude-code, plugins, ai, development | [
"Development Status :: 4 - Beta",
"Intended Audience :: Developers",
"License :: OSI Approved :: MIT License",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.8",
"Programming Language :: Python :: 3.9",
"Programming Language :: Python :: 3.10",
"Programming Language :: Pyt... | [] | https://github.com/anthropics/claude-code | null | >=3.8 | [] | [] | [] | [] | [] | [] | [] | [
"Homepage, https://github.com/anthropics/claude-code",
"Documentation, https://code.claude.com/docs",
"Repository, https://github.com/anthropics/claude-code",
"Issues, https://github.com/anthropics/claude-code/issues"
] | twine/6.2.0 CPython/3.14.3 | 2026-02-19T03:04:52.289456 | motd_claude_code_plugins_v3-1.0.0.tar.gz | 3,688,127 | 5e/15/e32b855a6ed1d9207eea43dc479a6cd5ee2c54b1bd929ee2414362297c83/motd_claude_code_plugins_v3-1.0.0.tar.gz | source | sdist | null | false | 600d8eb69ab33d6b7026fdda8a9ace46 | 5ed1b8494374b0bba10f4e9ce838b7499fe790ace3739986b21e3edd67c3faae | 5e15e32b855a6ed1d9207eea43dc479a6cd5ee2c54b1bd929ee2414362297c83 | null | [
"LICENSE.md"
] | 250 |
2.4 | continuum-code | 0.2.1 | Rule engine so coding agents obey repo rules (block/warn/ask) with clear explanations. | # Continuum
Rule engine so coding agents **obey repo rules** across Cursor, Claude, and ChatGPT: they get **blocked**, **warned**, or **asked** before breaking rules, with clear explanations of which rule fired and how to fix it.
## v0.1 promise
- Add `continuum.yaml` to your repo.
- Run `continuum check` locally and in CI.
- Cursor (MCP) agents are gated: **blocked** / **warned** / **clarification_required** before breaking rules.
- The system **explains** which rule fired and how to fix it.
## Install
```bash
pip install -e . # from repo
# or when published:
pip install continuum-code
```
Requires Python 3.10+. The CLI is installed as **`continuum`**. If another tool named `continuum` is in your PATH, use **`continuum-code`** instead (e.g. `continuum-code --version`, `continuum-code init --pack data-dbt-airflow`), or run **`python -m continuum`**.
### Development on Windows (WSL2)
Keep the repo on the WSL filesystem (e.g. `~/repos/continuum`) for best compatibility.
**One-time (PowerShell as Admin):**
```powershell
wsl --install -d Ubuntu-24.04
```
After WSL install, open Ubuntu and run:
```bash
# Base (keep repo under WSL, e.g. ~/repos/continuum)
sudo apt update && sudo apt install -y build-essential git curl
# Python via uv (fast, clean venvs)
curl -LsSf https://astral.sh/uv/install.sh | sh
source ~/.local/bin/env # or restart shell
```
**Per clone (inside WSL, in repo root):**
```bash
cd ~/repos/continuum # or your path
uv venv
source .venv/bin/activate
uv pip install -e ".[dev]"
```
**Run tests:**
```bash
pytest
```
**Run CLI:**
```bash
continuum check
continuum init --pack python-fastapi
```
For a broader Windows + WSL2 dev setup (Terminal, Docker, Antigravity safety), see your preferred guide.
## Quick start
```bash
# Create config (optional: use a pack)
continuum init
continuum init --pack python-fastapi # or node-backend, typescript-monorepo
# Check current changes (git diff)
continuum check
continuum check --staged # only staged
continuum check --base origin/main # diff against branch
# Explain why a rule fired
continuum explain
continuum explain ban_lodash
# List active contracts
continuum inspect
```
## Config: `continuum.yaml`
```yaml
version: 0.1
scopes:
- id: repo
match: ["**/*"]
contracts:
- type: ban
id: ban_lodash
match:
deps: ["lodash"]
message: "Use native JS or approved utils."
- type: ask_first
id: confirm_migrations
match:
paths: ["**/migrations/**"]
prompt: "Touching migrations. Confirm: (A) add-only (B) destructive (C) refactor"
- id: backend
match: ["backend/**"]
precedence: 10
contracts:
- type: require
id: require_tests
match:
paths: ["backend/**"]
require:
- kind: tests
hint: "Add or adjust unit tests for changed modules."
```
Contract types: **ban** (deps/paths/commands), **require** (tests/logging/ADR), **ask_first** (confirmation gate), **define** (metadata).
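Since `ban` can also match commands, a hypothetical sketch of a command ban (the `commands` key under `match` is assumed here by analogy with `deps`/`paths`; check the output of `continuum init --pack data-dbt-airflow` for the exact schema):

```yaml
- type: ban
  id: ban_force_push
  match:
    commands: ["git push --force"]
  message: "Force pushes are not allowed on shared branches."
```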
## Escape hatch
To skip checks (e.g. emergency hotfix), set `CONTINUUM_SKIP=1`; `continuum check` will exit 0 without running contracts. Prefer adjusting rules (e.g. `severity: warn`) in `continuum.yaml` when possible. See [docs/adoption.md](docs/adoption.md).
## CI (GitHub Action)
```yaml
- uses: actions/checkout@v4
with:
fetch-depth: 0
- uses: ./actions/continuum-check
with:
base_ref: ${{ github.event.pull_request.base.sha }}
```
Or in another repo: `uses: your-org/continuum/actions/continuum-check@v0.1` (and install `continuum` via pip in the action).
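Put together, a minimal workflow sketch for an external repo (assuming the `@v0.1` action tag exists and `continuum-code` is published to PyPI; inputs mirror the snippet above):

```yaml
- uses: actions/checkout@v4
  with:
    fetch-depth: 0
- run: pip install continuum-code
- uses: your-org/continuum/actions/continuum-check@v0.1
  with:
    base_ref: ${{ github.event.pull_request.base.sha }}
```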
## Cursor / MCP
Run the MCP server so the Cursor agent can call `continuum_check` before applying changes:
```bash
continuum mcp --transport stdio
```
Add to Cursor MCP config:
```json
{
"mcpServers": {
"continuum": {
"command": "continuum",
"args": ["mcp", "--transport", "stdio"]
}
}
}
```
See [docs/cursor-mcp.md](docs/cursor-mcp.md) and [docs/demo.md](docs/demo.md) for setup and the “Refactor auth middleware” demo.
**Golden demo:** Run the 3 scenarios in [demo/README.md](demo/README.md) (dbt marts require, airflow ask-first, banned command) in under 10 minutes.
## Packs
Starter configs:
- **node-backend** – Node/JS backend (ban lodash, require tests, ask on migrations).
- **python-fastapi** – FastAPI app (ban requests in favor of httpx, require tests, ask on migrations).
- **typescript-monorepo** – TS monorepo (ban lodash, require tests in packages).
- **data-dbt-airflow** – dbt + Airflow repos (ask_first on marts/DAGs, require tests on marts, ban risky commands).
```bash
continuum init --pack python-fastapi
continuum init --pack data-dbt-airflow # dbt + Airflow
```
### 5-minute adoption (dbt + Airflow)
1. `pip install continuum-code` (or `pip install -e .` from repo).
2. `continuum init --pack data-dbt-airflow` → writes `continuum.yaml`.
3. `continuum validate` → confirm the file is valid.
4. `continuum check` (or `continuum check --base origin/main` for PRs).
5. Add the GitHub Action for CI; optionally run `continuum mcp --transport stdio` for Cursor.
## Next steps (after v0.1)
- Richer dependency detection (poetry, pnpm, pip-tools).
- Pattern bans (regex on diffs).
- Stricter “require” checks (e.g. tests touched when src touched).
- Decision diffs / supersession (v0.2).
## License
MIT.
| text/markdown | Continuum | null | null | null | null | null | [] | [] | null | null | >=3.10 | [] | [] | [] | [
"pyyaml>=6.0",
"pydantic>=2.0",
"click>=8.0",
"mcp>=1.0",
"pytest>=7.0; extra == \"dev\"",
"pytest-cov>=4.0; extra == \"dev\""
] | [] | [] | [] | [] | twine/6.1.0 CPython/3.13.7 | 2026-02-19T03:04:46.860011 | continuum_code-0.2.1.tar.gz | 26,206 | a6/06/74bb25aaf8bc4aca6e4bacaec5b803eb3e05f89fded5a247270cad5ea608/continuum_code-0.2.1.tar.gz | source | sdist | null | false | daa0ef6e3e544ca0b4a39eb7f3bc68f4 | 50b90e61514ef3229a6a445dcb65c4429ad0ab663022f5a0c2d989c2791a197e | a60674bb25aaf8bc4aca6e4bacaec5b803eb3e05f89fded5a247270cad5ea608 | MIT | [] | 226 |
2.1 | odoo-addon-helpdesk-mgmt-project-domain | 16.0.2.0.0.1 | Enable to set a project domain on ticket | .. image:: https://odoo-community.org/readme-banner-image
:target: https://odoo-community.org/get-involved?utm_source=readme
:alt: Odoo Community Association
============================
Helpdesk Mgmt Project Domain
============================
..
!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
!! This file is generated by oca-gen-addon-readme !!
!! changes will be overwritten. !!
!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
!! source digest: sha256:7050283ec22bbeb5aadb8fe406f26c31a549f217ac7d0b5c092c1d2e15a0d43a
!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
.. |badge1| image:: https://img.shields.io/badge/maturity-Beta-yellow.png
:target: https://odoo-community.org/page/development-status
:alt: Beta
.. |badge2| image:: https://img.shields.io/badge/license-AGPL--3-blue.png
:target: http://www.gnu.org/licenses/agpl-3.0-standalone.html
:alt: License: AGPL-3
.. |badge3| image:: https://img.shields.io/badge/github-OCA%2Fhelpdesk-lightgray.png?logo=github
:target: https://github.com/OCA/helpdesk/tree/16.0/helpdesk_mgmt_project_domain
:alt: OCA/helpdesk
.. |badge4| image:: https://img.shields.io/badge/weblate-Translate%20me-F47D42.png
:target: https://translation.odoo-community.org/projects/helpdesk-16-0/helpdesk-16-0-helpdesk_mgmt_project_domain
:alt: Translate me on Weblate
.. |badge5| image:: https://img.shields.io/badge/runboat-Try%20me-875A7B.png
:target: https://runboat.odoo-community.org/builds?repo=OCA/helpdesk&target_branch=16.0
:alt: Try me on Runboat
|badge1| |badge2| |badge3| |badge4| |badge5|
This module allows you to configure domains to filter projects and tasks
available for selection in helpdesk tickets. It provides both static
domain configuration and dynamic Python-based filtering for enhanced
flexibility.
**Table of contents**
.. contents::
:local:
Use Cases / Context
===================
This module allows you to configure domains to filter projects and tasks
available for selection in helpdesk tickets, reducing errors and
improving efficiency.
Key Benefits
------------
- **Automated Filtering:** Configure domains to show only relevant
projects and tasks for each team
- **Error Reduction:** Minimize manual selection errors by limiting
available options
- **Smart Task Filtering:** Tasks are automatically filtered by the
selected project
- **Flexible Configuration:** Use static domains or dynamic Python code
for complex rules
Main Use Cases
--------------
Project Filtering
~~~~~~~~~~~~~~~~~
- Show only active projects
- Filter by client/partner
- Separate internal from external projects
- Filter by project tags or categories
Task Filtering
~~~~~~~~~~~~~~
- Show only tasks from selected project
- Filter by assignment status (assigned/unassigned)
- Filter by priority or urgency
- Filter by task tags or phases
Advanced Rules
~~~~~~~~~~~~~~
- Use Python code for dynamic filtering based on ticket data
- Combine multiple conditions with AND/OR operators
- Apply different rules for different teams
Configuration
=============
To configure this module, you need to:
1. Configure global project and task domains at company level.
2. Configure team-specific project and task domains.
3. Set up Python-based dynamic domains (optional).
Global Configuration (Company)
------------------------------
1. Go to **Settings > Helpdesk** to configure the global project and
task domains.
2. In the "Project & Task Domain Configuration" section, set the
**Project Domain** field using the visual domain builder.
3. Set the **Task Domain** field using the visual domain builder.
4. These domains will be applied to all teams that don't have their own
domains configured.
5. You can also Activate or Deactivate the global domains.
Team Configuration
------------------
1. Go to **Helpdesk > Configuration > Teams** to configure team-specific
domains.
2. Edit or create a team.
3. In the **Project Domain** tab:
- Set the **Project Domain** field using the visual domain builder.
- Configure the **Project Domain Python Code** field for dynamic
domains (optional).
4. In the **Task Domain** tab:
- Set the **Task Domain** field using the visual domain builder.
- Configure the **Task Domain Python Code** field for dynamic domains
(optional).
5. Team domains will be combined with the company domain using AND
logic.
Domain Builder Usage
--------------------
Both "Project Domain" and "Task Domain" fields use a visual builder that
allows:
1. **Click on the field** to open the domain builder.
2. **Select the field** from the Project/Task model (e.g., Active,
Partner, Type, User).
3. **Choose the operator** (e.g., =, !=, >, <, in, not in).
4. **Define the value** (e.g., True, False, partner name, user name).
5. **Add more conditions** with AND/OR logic.
6. **Save** the domain configuration.
Domain Examples
---------------
Project Domain Examples
~~~~~~~~~~~~~~~~~~~~~~~
Only Active Projects
^^^^^^^^^^^^^^^^^^^^
- Field: Active
- Operator: =
- Value: True
- Domain: ``[('active', '=', True)]``
Projects with Partner
^^^^^^^^^^^^^^^^^^^^^
- Field: Partner
- Operator: !=
- Value: False
- Domain: ``[('partner_id', '!=', False)]``
Projects from Specific Client
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
- Field: Partner
- Operator: =
- Value: [Client Name]
- Domain: ``[('partner_id', '=', 123)]`` (where 123 is the client ID)
Projects by Tags (Internal Projects)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
- Field: Tags
- Operator: in
- Value: [Internal]
- Domain: ``[('tag_ids', 'in', [4])]`` (where 4 is the tag ID)
Projects by Tags (External Projects)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
- Field: Tags
- Operator: in
- Value: [External]
- Domain: ``[('tag_ids', 'in', [5])]`` (where 5 is the tag ID)
Mixed Project Tags (Internal OR External)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
- Field: Tags
- Operator: in
- Value: [Internal, External]
- Domain: ``['|', ('tag_ids', 'in', [4]), ('tag_ids', 'in', [5])]``
Task Domain Examples
~~~~~~~~~~~~~~~~~~~~
Only Active Tasks
^^^^^^^^^^^^^^^^^
- Field: Active
- Operator: =
- Value: True
- Domain: ``[('active', '=', True)]``
Tasks by Tags (Development)
^^^^^^^^^^^^^^^^^^^^^^^^^^^
- Field: Tags
- Operator: in
- Value: [Development]
- Domain: ``[('tag_ids', 'in', [1])]`` (where 1 is the tag ID)
Tasks by Tags (Testing)
^^^^^^^^^^^^^^^^^^^^^^^
- Field: Tags
- Operator: in
- Value: [Testing]
- Domain: ``[('tag_ids', 'in', [2])]`` (where 2 is the tag ID)
Mixed Task Tags (Development OR Testing)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
- Field: Tags
- Operator: in
- Value: [Development, Testing]
- Domain: ``['|', ('tag_ids', 'in', [1]), ('tag_ids', 'in', [2])]``
Unassigned Tasks
^^^^^^^^^^^^^^^^
- Field: User
- Operator: =
- Value: False
- Domain: ``[('user_ids', '=', False)]``
Tasks Assigned to Specific User
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
- Field: User
- Operator: in
- Value: [User Name]
- Domain: ``[('user_ids', 'in', [123])]`` (where 123 is the user ID)
Tasks with High Priority
^^^^^^^^^^^^^^^^^^^^^^^^
- Field: Priority
- Operator: =
- Value: 1
- Domain: ``[('priority', '=', '1')]``
Python Domain Code
------------------
For advanced users, you can use Python code to create dynamic domains:
1. Go to the team configuration.
2. Edit the **Project Domain Python Code** or **Task Domain Python
Code** field.
3. Write Python code that returns a domain list.
4. Available variables: ticket, env, user, company, AND, OR, normalize.
Example Python Code
~~~~~~~~~~~~~~~~~~~
Project Domain Example - Client Projects Team
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
.. code:: python
# Filter projects based on ticket partner (from demo data)
if ticket.partner_id:
domain = [('commercial_partner_id', '=', ticket.commercial_partner_id.id)]
else:
domain = [('id', '=', 0)] # No projects if no partner
Task Domain Example - Unassigned Tasks
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
.. code:: python
# Filter tasks not assigned to anyone (from demo data)
domain = [('user_ids', '=', False)]
Task Domain Example - Project-based Filtering
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
.. code:: python
# Filter tasks based on ticket project
if ticket.project_id:
domain = [('project_id', '=', ticket.project_id.id)]
else:
domain = [('id', '=', 0)] # No tasks if no project selected
Task Domain Example - Mixed Tags
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
.. code:: python
# Filter tasks by development or testing tags (from demo data)
domain = ['|', ('tag_ids', 'in', [1]), ('tag_ids', 'in', [2])]
Task Domain Example - Priority and Assignment
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
.. code:: python
# Filter high priority tasks assigned to specific users
if ticket.partner_id:
domain = AND([
[('priority', '=', '1')], # High priority
[('user_ids', '!=', False)] # Assigned tasks
])
else:
domain = [('priority', '=', '1')] # High priority only
Domain Combination Logic
------------------------
All domains are combined using AND logic:
For Projects:
~~~~~~~~~~~~~
1. **Company global project domain** (base filter for all teams).
2. **Team static project domain** (always combined with company domain).
3. **Team Python project code** (always combined with company + team
domains).
For Tasks:
~~~~~~~~~~
1. **Company global task domain** (base filter for all teams).
2. **Team static task domain** (always combined with company domain).
3. **Team Python task code** (always combined with company + team
domains).
The final project domain will be: Company Project Domain AND Team
Project Domain AND Python Project Domain.
The final task domain will be: Company Task Domain AND Team Task Domain
AND Python Task Domain.
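This AND combination can be illustrated with plain Python (a simplified
sketch for single-term, already-normalized domains; Odoo itself combines
domains with ``odoo.osv.expression.AND``):

.. code:: python

   def and_domains(domains):
       """Combine single-term domains with the prefix '&' operator
       (simplified illustration only)."""
       parts = [d for d in domains if d]
       if len(parts) <= 1:
           return parts[0] if parts else []
       combined = ['&'] * (len(parts) - 1)
       for part in parts:
           combined.extend(part)
       return combined

   company = [('active', '=', True)]
   team = [('partner_id', '!=', False)]
   python_code = [('priority', '=', '1')]
   print(and_domains([company, team, python_code]))
   # ['&', '&', ('active', '=', True), ('partner_id', '!=', False), ('priority', '=', '1')]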
Permissions
-----------
There are no specific permissions required for this module. The domain
filtering respects the user's existing project access permissions set in
the system.
Troubleshooting
---------------
If domains are not working as expected:
1. Check that the domain syntax is correct.
2. Verify that the Python code (if used) has no syntax errors.
3. Ensure that the fields referenced in domains exist in the Project
model.
4. Check the Odoo logs for any domain evaluation errors.
Usage
=====
How It Works
------------
Project Filtering
~~~~~~~~~~~~~~~~~
1. **Ticket Creation**: When a ticket is created, the system checks:
- If the team has a configured project domain
- If not, uses the company's global project domain
- If no domain is configured, all projects remain available
2. **Ticket Editing**: The project domain is automatically applied to
the "Project" field of the ticket
3. **Validation**: If the project domain is invalid, the system ignores
it and shows all projects
Task Filtering
~~~~~~~~~~~~~~
1. **Project Selection**: When a project is selected in a ticket:
- The system applies the configured task domain
- Tasks are automatically filtered by the selected project
- Only tasks belonging to the selected project are shown
2. **Dynamic Filtering**: Task filtering is updated when:
- The project field changes
- The team changes
- Other relevant ticket fields change
3. **Smart Filtering**: The system ensures tasks are always relevant to
the selected project, preventing selection of tasks from other
projects
Domain Application
~~~~~~~~~~~~~~~~~~
- **Static Domains**: Applied immediately when fields change
- **Python Domains**: Evaluated dynamically based on current ticket data
- **Combination Logic**: All applicable domains are combined using AND
logic
- **Fallback Behavior**: If any domain fails, the system gracefully
falls back to showing all available options
Bug Tracker
===========
Bugs are tracked on `GitHub Issues <https://github.com/OCA/helpdesk/issues>`_.
In case of trouble, please check there if your issue has already been reported.
If you spotted it first, help us to smash it by providing a detailed and welcomed
`feedback <https://github.com/OCA/helpdesk/issues/new?body=module:%20helpdesk_mgmt_project_domain%0Aversion:%2016.0%0A%0A**Steps%20to%20reproduce**%0A-%20...%0A%0A**Current%20behavior**%0A%0A**Expected%20behavior**>`_.
Do not contact contributors directly about support or help with technical issues.
Credits
=======
Authors
-------
* Escodoo
Contributors
------------
- `Escodoo <https://escodoo.com.br>`__:
- Marcel Savegnago marcel.savegnago@escodoo.com.br
Maintainers
-----------
This module is maintained by the OCA.
.. image:: https://odoo-community.org/logo.png
:alt: Odoo Community Association
:target: https://odoo-community.org
OCA, or the Odoo Community Association, is a nonprofit organization whose
mission is to support the collaborative development of Odoo features and
promote its widespread use.
.. |maintainer-marcelsavegnago| image:: https://github.com/marcelsavegnago.png?size=40px
:target: https://github.com/marcelsavegnago
:alt: marcelsavegnago
Current `maintainer <https://odoo-community.org/page/maintainer-role>`__:
|maintainer-marcelsavegnago|
This module is part of the `OCA/helpdesk <https://github.com/OCA/helpdesk/tree/16.0/helpdesk_mgmt_project_domain>`_ project on GitHub.
You are welcome to contribute. To learn how please visit https://odoo-community.org/page/Contribute.
| null | Escodoo,Odoo Community Association (OCA) | support@odoo-community.org | null | null | AGPL-3 | null | [
"Programming Language :: Python",
"Framework :: Odoo",
"Framework :: Odoo :: 16.0",
"License :: OSI Approved :: GNU Affero General Public License v3"
] | [] | https://github.com/OCA/helpdesk | null | >=3.10 | [] | [] | [] | [
"odoo-addon-helpdesk-mgmt-project<16.1dev,>=16.0dev",
"odoo<16.1dev,>=16.0a"
] | [] | [] | [] | [] | twine/6.2.0 CPython/3.12.3 | 2026-02-19T03:03:16.929330 | odoo_addon_helpdesk_mgmt_project_domain-16.0.2.0.0.1-py3-none-any.whl | 53,003 | 43/4e/6170b12dbe1330a077a3cff26b95e64f15409f5deaf84640974864924c4f/odoo_addon_helpdesk_mgmt_project_domain-16.0.2.0.0.1-py3-none-any.whl | py3 | bdist_wheel | null | false | b0c626fd89130a547b222debc731fe8d | aedb692bdb2badb3aa9e8f9fc8dc477b39d7156138039871412ca340cc35b92e | 434e6170b12dbe1330a077a3cff26b95e64f15409f5deaf84640974864924c4f | null | [] | 89 |
2.4 | paddle | 1.2.7 | Python Atmospheric Dynamics: Discovery and Learning about Exoplanets. An open-source, user-friendly python frontend of canoe | # Paddle
Python Atmospheric Dynamics: Discovering and Learning about Exoplanets. An open-source, user-friendly Python version of [canoe](https://github.com/chengcli/canoe).
## Test the package
Testing the package is straightforward: create a Python virtual environment, install the package, and run the test script.
1. Create and activate a python virtual environment
```bash
python -m venv pyenv
source pyenv/bin/activate
```
2. Install paddle package
```bash
pip install paddle
```
3. Run test
```bash
cd tests
python test_saturn_adiabat.py
```
## Develop with Docker
If your device or operating system does not support certain dependencies, you can install Docker, compose up, and install the package inside a container. Follow the instructions below to install Docker and the docker-compose plugin.
1. Install docker with compose
```bash
curl -fsSL https://get.docker.com | sudo sh
```
2. Start docker using the command below or open docker desktop if applicable.
```bash
sudo systemctl start docker
```
After installing Docker, you can use the Makefile commands below to manage your docker containers from the terminal. By default, the container will mount the current directory to `/paddle` inside the container.
> Mounting a local directory allows you to edit files on your local machine while running and testing the code inside the container; or use the container as a development environment and sync files to your local machine.
If you want to change the mounted directory, you can create a `docker-compose.overrides.yml` file based on the provided `docker-compose.overrides.yml.tmp` template file.
- Create a docker container
```bash
make up
```
- Start a docker container (only if previously created)
```bash
make start
```
- Terminate a docker container
```bash
make down
```
- Build a new docker image (rarely used)
```bash
make build
```
If you use VSCode, it is recommended to install extensions including [Remote Development](https://marketplace.visualstudio.com/items?itemName=ms-vscode-remote.vscode-remote-extensionpack), [Dev Containers](https://marketplace.visualstudio.com/items?itemName=ms-vscode-remote.remote-containers) and [Container Tools](https://marketplace.visualstudio.com/items?itemName=ms-azuretools.vscode-containers) for a better development experience within a container.
1. Install the extensions mentioned above.
2. Start the container using `make up` or `make start`.
3. Open VSCode and click on the "Containers" icon on the left sidebar. Note that if you have too many extensions installed, the icon may be hidden under the "..." menu.
<img src="docs/_static/readme-extension.png" width="200" style="vertical-align: top;">
<img src="docs/_static/readme-attach.png" width="200" style="vertical-align: top;">
4. Right click on the running container named `paddle` with a green triangle icon (indicating it is running), and select "Attach Visual Studio Code" (see above images). This will open a new VSCode window connected to the container.
5. Open either the default folder `paddle` mounted from your local machine, or any custom workspace folder you have set up inside the `docker-compose.overrides.yml` file. Now you can start developing inside the container as if you were working on your local machine!
<img src="docs/_static/readme-open-folder.png" width="500" style="vertical-align: top;">
## For Developers
Follow the steps below to set up your development environment.
1. Clone the repository
```bash
git clone https://github.com/elijah-mullens/paddle
```
2. Cache your GitHub credentials. This will prevent you from being prompted for your username and password every time you push changes to GitHub.
```bash
git config credential.helper 'cache --timeout=86400'
```
3. Create a python virtual environment.
```bash
python -m venv .pyenv
source .pyenv/bin/activate
```
4. Install paddle package
```bash
# Install the package normally
pip install paddle
# [Alternatively] if you want to install in editable mode
pip install -e .
```
5. Install pre-commit hook. This will automatically format your code before each commit to ensure consistent code style.
```bash
pip install pre-commit
pre-commit install
```
## Troubleshooting
1. If you have Docker installed but do not have Docker Compose, remove your current Docker installation (the `docker` or `docker.io` package) and re-install it following the guide in the [Develop with Docker](#develop-with-docker) section above.
2. If you run out of disk space while building the image, you can relocate the default storage location for container images. Here is a simple recipe:
1. Generate config (if you don’t already have one):
```
sudo mkdir -p /etc/containerd
containerd config default | sudo tee /etc/containerd/config.toml >/dev/null
```
2. Edit `/etc/containerd/config.toml` and set:
```
root = "/home/containerd" # location where you have space
state = "/run/containerd"
```
3. Restart
```
sudo systemctl restart containerd
```
| text/markdown | null | Elijah Mullens <eem85@cornell.edu>, Cheng Li <chengcli@umich.edu> | null | null | null | null | [
"Development Status :: 3 - Alpha",
"Intended Audience :: Developers",
"Intended Audience :: Science/Research",
"Programming Language :: Python",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3 :: Only",
"Programming Language :: Python :: 3.9",
"Programming Language :: Pytho... | [] | null | null | >=3.9 | [] | [] | [] | [
"scipy",
"snapy>=1.3.0",
"torch<=2.7.1,>=2.7.0",
"pytest>=7; extra == \"dev\""
] | [] | [] | [] | [
"Homepage, https://github.com/elijah-mullens/paddle",
"Repository, https://github.com/elijah-mullens/paddle",
"Issues, https://github.com/elijah-mullens/paddle/issues"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-19T03:02:49.646104 | paddle-1.2.7.tar.gz | 3,116,341 | ec/7e/9b8b74513e3496b63c9283d6fca223b12c02fd1249a50400d55f2e16c79e/paddle-1.2.7.tar.gz | source | sdist | null | false | e8a62ceda642fe3797d059fc4a0956ba | 48af3867655004307db82513fbb99c49e240546c09b2a0a2a50dc8844f719344 | ec7e9b8b74513e3496b63c9283d6fca223b12c02fd1249a50400d55f2e16c79e | null | [
"LICENSE"
] | 731 |
2.4 | agent-safety-box | 1.1.0 | Battle-hardened safety wrapper for AI agents | # 🛡️ agent_safety_box
**Battle-hardened safety wrapper for AI agents** — zero external dependencies, production-grade, fully tested.
```
pip install agent-safety-box
```
---
## The Problem
AI agents are powerful, but they fail unpredictably in production. They hallucinate, get stuck in loops, blow through budgets, and crash silently. As one developer put it, "we're spending 40% of our budget on garbage output." Debugging becomes guesswork. agent_safety_box is the answer.
## Features
| Feature | `AgentWrapper` | `AgentWrapperV2` |
|---|---|---|
| Budget enforcement (thread-safe) | ✅ | ✅ |
| Per-run cost ceiling | ✅ | ✅ |
| Timeout (sync & async) | ✅ | ✅ |
| Retry + exponential backoff + jitter | ✅ | ✅ |
| Circuit Breaker | ✅ | ✅ |
| Rate Limiter (token-bucket) | ✅ | ✅ |
| Output validation | ✅ | ✅ |
| JSONL audit logging + rotation | ✅ | ✅ |
| Dry-run mode | ✅ | ✅ |
| Tags & metadata | ✅ | ✅ |
| Context manager | ✅ | ✅ |
| Lifecycle hooks (before/after/error) | — | ✅ |
| Bulkhead (concurrency limiter) | — | ✅ |
| Adaptive timeout (auto p95-tuned) | — | ✅ |
| Rolling metrics (p50/p95/p99) | — | ✅ |
| Cost forecaster | — | ✅ |
| Plugin registry | — | ✅ |
| Budget warning callback | — | ✅ |
| Graceful shutdown | — | ✅ |
| Fallback chain (multi-agent failover) | — | ✅ |
---
## Quick Start
```python
from agent_safety_box import AgentWrapper, CircuitBreaker, RateLimiter
cb = CircuitBreaker(failure_threshold=3, reset_timeout=60)
rl = RateLimiter(rate=5.0)
wrapper = AgentWrapper(
agent=my_agent,
budget=100.0,
max_runtime=10.0,
max_cost_per_run=5.0,
max_retries=2,
retry_delay=1.0,
circuit_breaker=cb,
rate_limiter=rl,
output_validator=lambda o: isinstance(o, str) and len(o) > 0,
tags={"env": "prod", "model": "gpt-4o"},
log_path="logs/audit.jsonl",
)
with wrapper:
result = wrapper.run("Summarise this document", cost=1.5)
```
---
## AgentWrapperV2 — Advanced Pipeline
```python
from agent_safety_box.wrapper_v2 import AgentWrapperV2
from agent_safety_box.middleware import HookManager, Bulkhead, AdaptiveTimeout
hooks = HookManager()
hooks.on_before(lambda ctx: print(f"→ {ctx['task']}"))
hooks.on_after(lambda ctx: print(f"✓ done"))
hooks.on_error(lambda ctx, exc: print(f"✗ {exc}"))
w = AgentWrapperV2(
agent=my_agent,
budget=500.0,
hooks=hooks,
bulkhead=Bulkhead(max_concurrent=10),
adaptive_timeout=AdaptiveTimeout(initial=5.0),
on_budget_warning=lambda rem, bud: print(f"⚠ Only {rem:.2f} left!"),
budget_warning_pct=0.20,
plugins={"logger": lambda ctx: my_logger.info(ctx)},
)
result = w.run("task", cost=1.0)
print(w.metrics)
# {
# 'total_runs': 1, 'successes': 1, 'errors': 0,
# 'spent': 1.0, 'remaining': 499.0,
# 'rolling': {'p50_s': 0.003, 'p95_s': 0.003, 'p99_s': 0.003,
# 'error_rate': 0.0, 'throughput_rps': 312.5, 'sample_size': 1},
# 'avg_cost_forecast': 1.0,
# 'bulkhead_active': 0, 'bulkhead_max': 10,
# 'adaptive_timeout_s': 5.0,
# }
```
---
## Fallback Chain
```python
from agent_safety_box.middleware import FallbackChain
chain = FallbackChain(
primary=gpt4_agent,
fallbacks=[gpt35_agent, local_llm_agent],
)
result = chain.run("task")
print(f"Used: {chain.last_used}")
print(chain.stats)
```
---
## Async Support
```python
from agent_safety_box import AsyncAgentWrapper
wrapper = AsyncAgentWrapper(agent=my_async_agent, budget=50.0)
async def main():
result = await wrapper.run("Classify this text", cost=0.5)
asyncio.run(main())
```
---
## Exceptions
All exceptions inherit from `AgentSafetyBoxError`:
| Exception | When raised |
|---|---|
| `BudgetExceededError` | Total budget would be breached |
| `MaxCostExceededError` | Per-run cost ceiling exceeded |
| `TimeoutExceededError` | Agent exceeded `max_runtime` |
| `RetryExhaustedError` | All retry attempts failed |
| `AgentValidationError` | `output_validator` rejected output |
| `CircuitOpenError` | Circuit breaker is OPEN |
| `RateLimitError` | Rate limiter rejected (non-blocking mode) |
---
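Because everything inherits from `AgentSafetyBoxError`, callers can catch the specific failures they can react to and fall back to the common base for the rest. The classes below are stand-ins mirroring the names in the table (in real use you would import them from `agent_safety_box`), so the handler pattern can be shown self-contained:

```python
# Stand-in classes mirroring the table above; import from agent_safety_box
# in real code instead of defining them yourself.
class AgentSafetyBoxError(Exception): ...
class BudgetExceededError(AgentSafetyBoxError): ...
class TimeoutExceededError(AgentSafetyBoxError): ...


def run_guarded(fn):
    """Handle the failures we can react to specifically, then the base class."""
    try:
        return fn()
    except BudgetExceededError:
        return "stop: out of budget"
    except AgentSafetyBoxError as exc:
        return f"safety stop: {type(exc).__name__}"


def slow_agent():
    raise TimeoutExceededError("agent exceeded max_runtime")


print(run_guarded(slow_agent))  # → safety stop: TimeoutExceededError
```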
## Safety Pipeline (per run)
```
1. max_cost_per_run check
2. budget check (thread/async-safe lock)
3. rate_limiter.acquire()
4. circuit_breaker.allow_request()
5. agent.run(task) [with timeout]
6. output_validator(result)
7. actual_cost adjustment (if agent returns {"actual_cost": X})
8. budget commit
9. circuit_breaker.record_success/failure()
10. audit log write
```
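Steps 4 and 9 of the pipeline above refer to the circuit breaker's `allow_request()` and `record_success/failure()` calls. A minimal CLOSED → OPEN → HALF-OPEN state machine behind such an interface might look like this — an illustrative sketch only (method names follow the pipeline; the internals are not the package's implementation):

```python
import time


class SketchCircuitBreaker:
    """Illustrative breaker: trips OPEN after `failure_threshold` consecutive
    failures, fails fast while OPEN, and lets probe requests through again
    once `reset_timeout` seconds have passed (HALF-OPEN)."""

    def __init__(self, failure_threshold: int = 3, reset_timeout: float = 60.0):
        self.failure_threshold = failure_threshold
        self.reset_timeout = reset_timeout
        self.failures = 0
        self.opened_at = None  # None means CLOSED

    def allow_request(self) -> bool:
        if self.opened_at is None:
            return True  # CLOSED: requests pass normally
        if time.monotonic() - self.opened_at >= self.reset_timeout:
            return True  # HALF-OPEN: allow a probe request
        return False     # OPEN: fail fast, this is where CircuitOpenError fires

    def record_success(self) -> None:
        self.failures = 0
        self.opened_at = None  # close the circuit again

    def record_failure(self) -> None:
        self.failures += 1
        if self.failures >= self.failure_threshold:
            self.opened_at = time.monotonic()  # trip OPEN
```

The payoff of this ordering is that an unhealthy downstream agent is never even called while the circuit is OPEN, so budget and rate-limit tokens are not spent on requests that are likely to fail.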
---
## Running Tests
```bash
# All 263 tests
python -m unittest discover -s agent_safety_box/tests -p "test_*.py" -v
# Single module
python -m unittest agent_safety_box.tests.test_ultimate -v
```
---
## Architecture
```
agent_safety_box/
├── __init__.py # Public API
├── wrapper.py # AgentWrapper + AsyncAgentWrapper (v1)
├── wrapper_v2.py # AgentWrapperV2 + AsyncAgentWrapperV2 (v2)
├── middleware.py # HookManager, RollingMetrics, Bulkhead,
│ # AdaptiveTimeout, CostForecaster, FallbackChain
├── safety.py # CircuitBreaker + RateLimiter
├── exceptions.py # All custom exceptions
├── logger.py # AuditLogger (JSONL + rotation)
└── tests/
├── conftest.py # Shared stubs and base class
├── test_budget.py # Budget enforcement (31 tests)
├── test_execution.py# Timeout + retry (30 tests)
├── test_logger.py # Audit logging (21 tests)
├── test_safety.py # Circuit + rate limiter (31 tests)
├── test_stress.py # Concurrency + chaos (18 tests)
└── test_ultimate.py # Deep edge cases (132 tests)
```
---
## License
MIT
| text/markdown | agent_safety_box contributors | null | null | null | MIT | ai, agent, safety, wrapper, budget, circuit-breaker | [
"Programming Language :: Python :: 3",
"License :: OSI Approved :: MIT License",
"Operating System :: OS Independent",
"Topic :: Software Development :: Libraries",
"Intended Audience :: Developers"
] | [] | null | null | >=3.10 | [] | [] | [] | [
"pytest>=7; extra == \"dev\"",
"pytest-asyncio; extra == \"dev\""
] | [] | [] | [] | [] | twine/6.2.0 CPython/3.14.3 | 2026-02-19T03:02:33.321166 | agent_safety_box-1.1.0.tar.gz | 40,833 | 1b/ba/2d0dbcaf6d3b57c2958c27611bc437506546f7f57ef3a083a0e4c2cc5354/agent_safety_box-1.1.0.tar.gz | source | sdist | null | false | b83ec22afacd1fa7f087ed45fd838f89 | 89558ba67bb66be47884b31e29a1eeec4e004160394dbfbedd889696f44e25d7 | 1bba2d0dbcaf6d3b57c2958c27611bc437506546f7f57ef3a083a0e4c2cc5354 | null | [
"LICENSE"
] | 250 |
2.1 | sourcepp | 2026.2.18 | Several modern C++20 libraries for sanely parsing Valve formats. | <!--suppress HtmlDeprecatedAttribute -->
<div>
<img align="left" width="128px" src="https://github.com/craftablescience/sourcepp/blob/main/branding/logo.png?raw=true" alt="The Source Pretty Parsers logo. A printer-esque device is scanning a page with hex codes and printing a picture of Cordon Freeman." />
<h1>Source Pretty Parsers</h1>
</div>
<div>
<a href="https://github.com/craftablescience/sourcepp/actions" target="_blank" rel="noreferrer"><img src="https://img.shields.io/github/actions/workflow/status/craftablescience/sourcepp/build.yml?label=Build&logo=github&logoColor=%23FFFFFF" alt="Build Status" /></a>
<a href="https://github.com/craftablescience/sourcepp/blob/main/LICENSE" target="_blank" rel="noreferrer"><img src="https://img.shields.io/github/license/craftablescience/sourcepp?label=License&logo=libreofficewriter&logoColor=%23FFFFFF" alt="License" /></a>
<a href="https://discord.gg/ASgHFkX" target="_blank" rel="noreferrer"><img src="https://img.shields.io/discord/678074864346857482?label=Discord&logo=Discord&logoColor=%23FFFFFF" alt="Discord" /></a>
<a href="https://ko-fi.com/craftablescience" target="_blank" rel="noreferrer"><img src="https://img.shields.io/badge/donate-006dae?label=Ko-fi&logo=ko-fi&logoColor=%23FFFFFF&color=%23B238A1" alt="Ko-Fi" /></a>
</div>
Several modern C++20 libraries for sanely parsing Valve formats.
## Other Languages
<div>
<a href="https://pypi.org/project/sourcepp" target="_blank" rel="noreferrer"><img alt="Version" src="https://img.shields.io/pypi/v/sourcepp?logo=python&logoColor=%23FFFFFF&label=PyPI%20Version" /></a>
<a href="https://pypi.org/project/sourcepp" target="_blank" rel="noreferrer"><img src="https://img.shields.io/pypi/pyversions/sourcepp?logo=python&logoColor=%23FFFFFF&label=Python%20Versions" alt="Python Versions" /></a>
</div>
Wrappers for libraries considered complete exist for C, C#, and/or Python, depending on the library.
The Python wrappers can be found on PyPI in the [sourcepp](https://pypi.org/project/sourcepp) package.
## Included Libraries
<table>
<tr>
<th>Library</th>
<th>Supports</th>
<th>Read</th>
<th>Write</th>
<th>Bindings</th>
</tr>
<tr><!-- empty row to disable GitHub striped bg color --></tr>
<tr>
<td rowspan="1"><code>bsppp</code></td>
<td>
<a href="https://developer.valvesoftware.com/wiki/BSP_(Source)" target="_blank" rel="noreferrer">BSP</a> v17-27
<br> • Console modifications
<br> • Left 4 Dead 2 modifications
<br> • <a href="https://stratasource.org" target="_blank" rel="noreferrer">Strata Source</a> modifications
</td>
<td align="center">✅</td>
<td align="center">✅</td>
<td rowspan="1" align="center">Python</td>
</tr>
<tr><!-- empty row to disable GitHub striped bg color --></tr>
<tr>
<td rowspan="1"><code>fspp</code><sup>*</sup></td>
<td>Source 1 filesystem accessor</td>
<td align="center">✅</td>
<td align="center">✅</td>
<td rowspan="1" align="center"></td>
</tr>
<tr><!-- empty row to disable GitHub striped bg color --></tr>
<tr>
<td rowspan="3"><code>gamepp</code></td>
<td>Get Source engine instance window title/position/size</td>
<td align="center">✅</td>
<td align="center">❌</td>
<td rowspan="3" align="center">C<br>Python</td>
</tr>
<tr><!-- empty row to disable GitHub striped bg color --></tr>
<tr>
<td>Run commands in a Source engine instance remotely</td>
<td align="center">❌</td>
<td align="center">✅</td>
</tr>
<tr><!-- empty row to disable GitHub striped bg color --></tr>
<tr>
<td rowspan="5"><code>kvpp</code></td>
<td>
<a href="https://developer.valvesoftware.com/wiki/DMX" target="_blank" rel="noreferrer">DMX</a>
<br> • Legacy binary v1-2 encoding (<code>binary_vN</code>)
<br> • Legacy SFM v1-9 encoding (<code>sfm_vN</code>)
<br> • Binary v1-5, v9 encodings (<code>binary</code>, <code>binary_seqids</code>)
<br> • <a href="https://github.com/TeamSpen210/srctools" target="_blank" rel="noreferrer">srctools</a> encodings (<code>unicode_*</code>)
</td>
<td align="center">✅</td>
<td align="center">❌</td>
<td rowspan="5" align="center">Python</td>
</tr>
<tr><!-- empty row to disable GitHub striped bg color --></tr>
<tr>
<td><a href="https://developer.valvesoftware.com/wiki/KeyValues" target="_blank" rel="noreferrer">KeyValues</a> v1 Binary</td>
<td align="center">✅</td>
<td align="center">✅</td>
</tr>
<tr><!-- empty row to disable GitHub striped bg color --></tr>
<tr>
<td><a href="https://developer.valvesoftware.com/wiki/KeyValues" target="_blank" rel="noreferrer">KeyValues</a> v1 Text<sup>†</sup></td>
<td align="center">✅</td>
<td align="center">✅</td>
</tr>
<tr><!-- empty row to disable GitHub striped bg color --></tr>
<tr>
<td rowspan="5"><code>mdlpp</code><sup>*</sup></td>
<td><a href="https://developer.valvesoftware.com/wiki/MDL_(Source)" target="_blank" rel="noreferrer">MDL</a> v44-49</td>
<td align="center">✅</td>
<td align="center">❌</td>
<td rowspan="5" align="center"></td>
</tr>
<tr><!-- empty row to disable GitHub striped bg color --></tr>
<tr>
<td><a href="https://developer.valvesoftware.com/wiki/VTX" target="_blank" rel="noreferrer">VTX</a> v7</td>
<td align="center">✅</td>
<td align="center">❌</td>
</tr>
<tr><!-- empty row to disable GitHub striped bg color --></tr>
<tr>
<td><a href="https://developer.valvesoftware.com/wiki/VVD" target="_blank" rel="noreferrer">VVD</a> v4</td>
<td align="center">✅</td>
<td align="center">❌</td>
</tr>
<tr><!-- empty row to disable GitHub striped bg color --></tr>
<tr>
<td rowspan="3"><code>sndpp</code><sup>*</sup></td>
<td>WAV</td>
<td align="center">✅</td>
<td align="center">❌</td>
<td rowspan="3" align="center"></td>
</tr>
<tr><!-- empty row to disable GitHub striped bg color --></tr>
<tr>
<td>XWV v0-1, v4</td>
<td align="center">✅</td>
<td align="center">❌</td>
</tr>
<tr><!-- empty row to disable GitHub striped bg color --></tr>
<tr>
<td rowspan="5"><code>steampp</code></td>
<td>Find Steam install folder</td>
<td align="center">✅</td>
<td align="center">-</td>
<td rowspan="5" align="center">C<br>Python</td>
</tr>
<tr><!-- empty row to disable GitHub striped bg color --></tr>
<tr>
<td>Find installed Steam games</td>
<td align="center">✅</td>
<td align="center">-</td>
</tr>
<tr><!-- empty row to disable GitHub striped bg color --></tr>
<tr>
<td>Find Steam game library assets</td>
<td align="center">✅</td>
<td align="center">-</td>
</tr>
<tr><!-- empty row to disable GitHub striped bg color --></tr>
<tr>
<td rowspan="3"><code>toolpp</code></td>
<td>
<a href="https://developer.valvesoftware.com/wiki/FGD" target="_blank" rel="noreferrer">FGD (Source 1)</a>
<br> • <a href="https://jack.hlfx.ru/en" target="_blank" rel="noreferrer">J.A.C.K.</a> modifications
<br> • <a href="https://ficool2.github.io/HammerPlusPlus-Website" target="_blank" rel="noreferrer">Hammer++</a> modifications
<br> • <a href="https://stratasource.org" target="_blank" rel="noreferrer">Strata Source</a> modifications
</td>
<td align="center">✅</td>
<td align="center">✅</td>
<td rowspan="3" align="center">Python</td>
</tr>
<tr><!-- empty row to disable GitHub striped bg color --></tr>
<tr>
<td>
<a href="https://developer.valvesoftware.com/wiki/Command_Sequences" target="_blank" rel="noreferrer">WC</a> (CmdSeq) v0.1-0.2
<br> • <a href="https://ficool2.github.io/HammerPlusPlus-Website" target="_blank" rel="noreferrer">Hammer++</a> modifications
<br> • <a href="https://stratasource.org" target="_blank" rel="noreferrer">Strata Source</a> modifications
</td>
<td align="center">✅</td>
<td align="center">✅</td>
</tr>
<tr><!-- empty row to disable GitHub striped bg color --></tr>
<tr>
<td rowspan="3"><code>vcryptpp</code></td>
<td><a href="https://developer.valvesoftware.com/wiki/VICE" target="_blank" rel="noreferrer">VICE</a> encrypted files</td>
<td align="center">✅</td>
<td align="center">✅</td>
<td rowspan="3" align="center">C<br>C#<br>Python</td>
</tr>
<tr><!-- empty row to disable GitHub striped bg color --></tr>
<tr>
<td><a href="https://developer.valvesoftware.com/wiki/Vfont" target="_blank" rel="noreferrer">VFONT</a> encrypted fonts</td>
<td align="center">✅</td>
<td align="center">✅</td>
</tr>
<tr><!-- empty row to disable GitHub striped bg color --></tr>
<tr>
<td rowspan="33"><code>vpkpp</code></td>
<td>007 v1.1, v1.3 (007 - Nightfire)</td>
<td align="center">✅</td>
<td align="center">❌</td>
<td rowspan="33" align="center">C<br>C#<br>Python</td>
</tr>
<tr><!-- empty row to disable GitHub striped bg color --></tr>
<tr>
<td>APK (Fairy Tale Busters)</td>
<td align="center">✅</td>
<td align="center">✅</td>
</tr>
<tr><!-- empty row to disable GitHub striped bg color --></tr>
<tr>
<td>FGP v2-3 (PS3, Orange Box)</td>
<td align="center">✅</td>
<td align="center">✅</td>
</tr>
<tr><!-- empty row to disable GitHub striped bg color --></tr>
<tr>
<td>FPX v10 (Tactical Intervention)</td>
<td align="center">✅</td>
<td align="center">✅</td>
</tr>
<tr><!-- empty row to disable GitHub striped bg color --></tr>
<tr>
<td><a href="https://developer.valvesoftware.com/wiki/GCF_archive" target="_blank" rel="noreferrer">GCF</a> v6</td>
<td align="center">✅</td>
<td align="center">❌</td>
</tr>
<tr><!-- empty row to disable GitHub striped bg color --></tr>
<tr>
<td>GMA v1-3 (Garry's Mod)</td>
<td align="center">✅</td>
<td align="center">✅</td>
</tr>
<tr><!-- empty row to disable GitHub striped bg color --></tr>
<tr>
<td>HOG (Descent)</td>
<td align="center">✅</td>
<td align="center">❌</td>
</tr>
<tr><!-- empty row to disable GitHub striped bg color --></tr>
<tr>
<td>OL (Worldcraft Object Library)</td>
<td align="center">✅</td>
<td align="center">❌</td>
</tr>
<tr><!-- empty row to disable GitHub striped bg color --></tr>
<tr>
<td>ORE (Narbacular Drop)</td>
<td align="center">✅</td>
<td align="center">❌</td>
</tr>
<tr><!-- empty row to disable GitHub striped bg color --></tr>
<tr>
<td>
<a href="https://quakewiki.org/wiki/.pak" target="_blank" rel="noreferrer">PAK</a> (Quake, WON Half-Life)
<br> • <a href="https://en.wikipedia.org/wiki/Sin_(video_game)" target="_blank" rel="noreferrer">SiN</a> modifications
<br> • <a href="https://store.steampowered.com/app/824600/HROT" target="_blank" rel="noreferrer">HROT</a> modifications
</td>
<td align="center">✅</td>
<td align="center">✅</td>
</tr>
<tr><!-- empty row to disable GitHub striped bg color --></tr>
<tr>
<td><a href="https://docs.godotengine.org/en/stable/tutorials/export/exporting_pcks.html" target="_blank" rel="noreferrer">PCK</a> v1-2 (Godot Engine)</td>
<td align="center">✅</td>
<td align="center">✅</td>
</tr>
<tr><!-- empty row to disable GitHub striped bg color --></tr>
<tr>
<td>
<a href="https://developer.valvesoftware.com/wiki/VPK" target="_blank" rel="noreferrer">VPK</a> pre-v1, v1-2, v54
<br> • <a href="https://www.counter-strike.net/cs2" target="_blank" rel="noreferrer">Counter-Strike 2</a> modifications
<br> • <a href="https://clientmod.ru" target="_blank" rel="noreferrer">Counter-Strike: Source ClientMod</a> modifications
</td>
<td align="center">✅</td>
<td align="center">✅</td>
</tr>
<tr><!-- empty row to disable GitHub striped bg color --></tr>
<tr>
<td>VPK (Vampire: The Masquerade - Bloodlines)</td>
<td align="center">✅</td>
<td align="center">✅</td>
</tr>
<tr><!-- empty row to disable GitHub striped bg color --></tr>
<tr>
<td>VPP v1-3 (Red Faction)</td>
<td align="center">✅</td>
<td align="center">❌</td>
</tr>
<tr><!-- empty row to disable GitHub striped bg color --></tr>
<tr>
<td>WAD v3</td>
<td align="center">✅</td>
<td align="center">✅</td>
</tr>
<tr><!-- empty row to disable GitHub striped bg color --></tr>
<tr>
<td>XZP v6 (Xbox, Half-Life 2)</td>
<td align="center">✅</td>
<td align="center">❌</td>
</tr>
<tr><!-- empty row to disable GitHub striped bg color --></tr>
<tr>
<td>
ZIP
<br> • <a href="https://github.com/BEEmod/BEE2-items" target="_blank" rel="noreferrer">BEE_PACK</a> alias (BEE2.4 Package)
<br> • <a href="https://developer.valvesoftware.com/wiki/Bonus_Maps" target="_blank" rel="noreferrer">BMZ</a> alias (Source 1 Bonus Maps)
<br> • FPK alias (Tactical Intervention)
<br> • <a href="https://doomwiki.org/wiki/PK3" target="_blank" rel="noreferrer">PK3</a> alias (Quake III)
<br> • <a href="https://doomwiki.org/wiki/PK4" target="_blank" rel="noreferrer">PK4</a> alias (Quake IV, Doom 3)
<br> • PKZ alias (Quake II RTX)
<br> • XZP2 modifications (X360 & PS3, misc. Source 1 titles)
</td>
<td align="center">✅</td>
<td align="center">✅</td>
</tr>
<tr><!-- empty row to disable GitHub striped bg color --></tr>
<tr>
<td rowspan="37"><code>vtfpp</code></td>
<td><a href="https://wiki.mozilla.org/APNG_Specification" target="_blank" rel="noreferrer">APNG</a></td>
<td align="center">✅</td>
<td align="center">❌</td>
<td rowspan="37" align="center">C<br>Python</td>
</tr>
<tr><!-- empty row to disable GitHub striped bg color --></tr>
<tr>
<td><a href="https://en.wikipedia.org/wiki/BMP_file_format" target="_blank" rel="noreferrer">BMP</a></td>
<td align="center">✅</td>
<td align="center">✅</td>
</tr>
<tr><!-- empty row to disable GitHub striped bg color --></tr>
<tr>
<td><a href="https://openexr.com" target="_blank" rel="noreferrer">EXR</a> v1</td>
<td align="center">✅</td>
<td align="center">✅</td>
</tr>
<tr><!-- empty row to disable GitHub striped bg color --></tr>
<tr>
<td>FRAMES (PS3, Orange Box)</td>
<td align="center">✅</td>
<td align="center">❌</td>
</tr>
<tr><!-- empty row to disable GitHub striped bg color --></tr>
<tr>
<td><a href="https://en.wikipedia.org/wiki/GIF" target="_blank" rel="noreferrer">GIF</a></td>
<td align="center">✅</td>
<td align="center">❌</td>
</tr>
<tr><!-- empty row to disable GitHub striped bg color --></tr>
<tr>
<td><a href="https://en.wikipedia.org/wiki/RGBE_image_format" target="_blank" rel="noreferrer">HDR</a></td>
<td align="center">✅</td>
<td align="center">✅</td>
</tr>
<tr><!-- empty row to disable GitHub striped bg color --></tr>
<tr>
<td><a href="https://en.wikipedia.org/wiki/JPEG" target="_blank" rel="noreferrer">JPEG</a></td>
<td align="center">✅</td>
<td align="center">✅</td>
</tr>
<tr><!-- empty row to disable GitHub striped bg color --></tr>
<tr>
<td>PIC</td>
<td align="center">✅</td>
<td align="center">❌</td>
</tr>
<tr><!-- empty row to disable GitHub striped bg color --></tr>
<tr>
<td><a href="https://en.wikipedia.org/wiki/PNG" target="_blank" rel="noreferrer">PNG</a></td>
<td align="center">✅</td>
<td align="center">✅</td>
</tr>
<tr><!-- empty row to disable GitHub striped bg color --></tr>
<tr>
<td><a href="https://netpbm.sourceforge.net/doc/pnm.html" target="_blank" rel="noreferrer">PNM</a> (PGM, PPM)</td>
<td align="center">✅</td>
<td align="center">❌</td>
</tr>
<tr><!-- empty row to disable GitHub striped bg color --></tr>
<tr>
<td><a href="https://developer.valvesoftware.com/wiki/PPL" target="_blank" rel="noreferrer">PPL</a> v0</td>
<td align="center">✅</td>
<td align="center">✅</td>
</tr>
<tr><!-- empty row to disable GitHub striped bg color --></tr>
<tr>
<td><a href="https://www.adobe.com/creativecloud/file-types/image/raster/psd-file.html" target="_blank" rel="noreferrer">PSD</a></td>
<td align="center">✅</td>
<td align="center">❌</td>
</tr>
<tr><!-- empty row to disable GitHub striped bg color --></tr>
<tr>
<td><a href="https://qoiformat.org" target="_blank" rel="noreferrer">QOI</a></td>
<td align="center">✅</td>
<td align="center">✅</td>
</tr>
<tr><!-- empty row to disable GitHub striped bg color --></tr>
<tr>
<td><a href="https://developer.valvesoftware.com/wiki/Animated_Particles" target="_blank" rel="noreferrer">SHT</a> v0-1</td>
<td align="center">✅</td>
<td align="center">✅</td>
</tr>
<tr><!-- empty row to disable GitHub striped bg color --></tr>
<tr>
<td><a href="https://en.wikipedia.org/wiki/Truevision_TGA" target="_blank" rel="noreferrer">TGA</a></td>
<td align="center">✅</td>
<td align="center">✅</td>
</tr>
<tr><!-- empty row to disable GitHub striped bg color --></tr>
<tr>
<td>TTX (TTH, TTZ) v1.0</td>
<td align="center">✅</td>
<td align="center">✅</td>
</tr>
<tr><!-- empty row to disable GitHub striped bg color --></tr>
<tr>
<td>VBF v3</td>
<td align="center">✅</td>
<td align="center">❌</td>
</tr>
<tr><!-- empty row to disable GitHub striped bg color --></tr>
<tr>
<td>
<a href="https://developer.valvesoftware.com/wiki/VTF_(Valve_Texture_Format)" target="_blank" rel="noreferrer">VTF</a> v7.0-7.6
<br> • <a href="https://stratasource.org" target="_blank" rel="noreferrer">Strata Source</a> modifications
<br> • <a href="https://developer.valvesoftware.com/wiki/Half-Life_2_(Xbox)/Modding_Guide" target="_blank" rel="noreferrer">XTF</a> v5.0 (Xbox, Half-Life 2)
<br> • <a href="https://developer.valvesoftware.com/wiki/VTFX_file_format" target="_blank" rel="noreferrer">VTFX</a> v8 (X360 & PS3, Orange Box)
<br> • <a href="https://developer.valvesoftware.com/wiki/VTFX_file_format" target="_blank" rel="noreferrer">VTF3</a> v8 (PS3, Portal 2 & CS:GO)
</td>
<td align="center">✅</td>
<td align="center">✅</td>
</tr>
<tr><!-- empty row to disable GitHub striped bg color --></tr>
<tr>
<td><a href="https://developers.google.com/speed/webp" target="_blank" rel="noreferrer">WebP</a></td>
<td align="center">✅</td>
<td align="center">✅</td>
</tr>
</table>
(\*) These libraries are incomplete and still in development. Their interfaces are unstable and will likely change in the future.
Libraries not starred should be considered stable, and their existing interfaces will not change much if at all. Note that wrappers
only exist for stable libraries.
(†) Many text-based formats in Source are close to (if not identical to) KeyValues v1, such as [VMT](https://developer.valvesoftware.com/wiki/VMT) and [VMF](https://developer.valvesoftware.com/wiki/VMF_(Valve_Map_Format)).
## Gallery
These are the tools and games using the `sourcepp` parser set, directly or indirectly, that I know of. If you would like to be listed here, [email me](mailto:lauralewisdev@gmail.com) or [join my Discord server](https://discord.gg/ASgHFkX), I'd love to hear from you!
### Tools
- [fgptool](https://github.com/craftablescience/fgptool): A tool to crack the filepath hashes in The Orange Box PS3 file groups.
- [fOptimizer](https://github.com/fxington/foptimizer): A GUI-based collection of tools written in Python for cutting down on unnecessarily bloated Garry's Mod addon sizes.
- [gimp-vtf](https://github.com/chev2/gimp-vtf): A GIMP plugin to load and save VTF files.
- [gm_addon_optimization_tricks](https://github.com/wrefgtzweve/gm_addon_optimization_tricks): A desktop tool to optimize Garry's Mod addons/maps.
- [GodotSource](https://github.com/craftablescience/godotsource): A work-in-progress set of bindings to connect the `sourcepp` libraries to Godot. Allows GDScript to work with the libraries, and allows Godot to directly load Source engine assets from a user project or from installed Source games.
- [MareTF](https://github.com/craftablescience/MareTF): An open source MIT-licensed CLI/GUI tool that can create, extract from, preview the contents of and write to every variant of VTF file. Replicates the functionality of Valve's `vtex.exe` and VTFEdit.
- [Myst IV: Revolution](https://github.com/tomysshadow/M4Revolution): Performs various fixes for the game Myst IV: Revelation.
- [PBR-2-Source](https://github.com/koerismo/PBR-2-Source): A Python-powered GUI for converting PBR materials into materials compatible with the Source engine.
- [QVTF++](https://github.com/craftablescience/qvtfpp): A QImageIO plugin to load VTF textures, based on panzi's QVTF plugin.
- [RectMaker](https://github.com/cplbradley/RectMaker): A freeware GUI tool that can create and modify `.rect` files used in Hammer++'s hotspotting algorithm.
- [reloaded2ps3](https://github.com/craftablescience/reloaded2ps3): Convert the PC version of Portal Reloaded to a playable PS3 game.
- [Verifier](https://github.com/StrataSource/verifier): A small program that can build an index of a game's files, and validate existing files based on that index. Similar to Steam's "Verify integrity of game files" option, but without overwriting any files.
- [VPKEdit](https://github.com/craftablescience/VPKEdit): An open source MIT-licensed CLI/GUI tool that can create, extract from, preview the contents of and write to several pack file formats. Replicates the functionality of Valve's `vpk.exe` and GCFScape.
- [bsp-linux-fix](https://github.com/dresswithpockets/bsp-linux-fix): Patches maps which have improperly cased packed assets by repacking the assets, fixing an issue on Linux.
- [CS2-EomVotesFix](https://github.com/Kitof/CS2-EomVotesFix): Fixes displaying workshop map names and thumbnails during end-of-match voting for LAN events.
- [dham](https://github.com/Seraphli/dham): Modifies Dota 2 hero aliases based on a configuration file and packages the changes.
- [Linux BSP Case Folding Workaround](https://github.com/scorpius2k1/linux-bsp-casefolding-workaround): A bash script designed to resolve issues with improperly cased packed map assets in Source engine games on Linux. Extracting the assets allows the game to find them properly.
- [props_scaling_recompiler](https://github.com/Ambiabstract/props_scaling_recompiler): Allows converting `prop_scalable` into a static prop, effectively implementing static prop scaling outside CS:GO.
- [rock:sail](https://github.com/Le0X8/rocksail): CS2 client-side tool to use skins for free (only visible to the user of the tool).
- [vpk2wad_nd](https://github.com/p2r3/vpk2wad_nd): Converts textures in a VPK to a WAD that can be used by Narbacular Drop maps.
- [VTF Forge](https://github.com/Trico-Everfire/VTF-Forge): A modern multiplatform recreation of VTFEdit, using Qt.
- [VTF Thumbnailer](https://github.com/craftablescience/vtf-thumbnailer): Adds previews for VTF files in your file explorer of choice on Windows and Linux.
### Games
<table>
<tr>
<td><a href="https://store.steampowered.com/app/440000/Portal_2_Community_Edition/" target="_blank" rel="noreferrer"><img width="250px" src="https://shared.fastly.steamstatic.com/store_item_assets/steam/apps/440000/header.jpg" alt="Portal 2: Community Edition"/></a></td>
<td>
<ul>
<li>Local addon assets are packed with <code>sourcepp</code>.</li>
<li>Verifier and VPKEdit are shipped with the game.</li>
</ul>
</td>
</tr>
<tr>
<td><a href="https://store.steampowered.com/app/669270/Momentum_Mod/" target="_blank" rel="noreferrer"><img width="250px" src="https://shared.fastly.steamstatic.com/store_item_assets/steam/apps/669270/header.jpg" alt="Momentum Mod"/></a></td>
<td>
<ul>
<li>Some bundled textures are created and/or compressed with MareTF.</li>
<li>Some bundled assets are packed with VPKEdit.</li>
</ul>
</td>
</tr>
<tr>
<td><a href="https://store.steampowered.com/app/2954780/Nightmare_House_The_Original_Mod/" target="_blank" rel="noreferrer"><img width="250px" src="https://shared.fastly.steamstatic.com/store_item_assets/steam/apps/2954780/header.jpg" alt="Nightmare House: The Original Mod"/></a></td>
<td>
<ul>
<li>Game assets are packed with VPKEdit.</li>
</ul>
</td>
</tr>
</table>
## Special Thanks
- `bsppp` partial library redesign, lump compression and game lump parsing/writing support contributed by [@Tholp](https://github.com/Tholp1).
- `kvpp`'s support for DMX srctools formats was contributed by [@TeamSpen210](https://github.com/TeamSpen210).
- `steampp` is based on the [SteamAppPathProvider](https://github.com/Trico-Everfire/SteamAppPathProvider) library by [@Trico Everfire](https://github.com/Trico-Everfire) and [Momentum Mod](https://momentum-mod.org) contributors.
- `vpkpp`'s 007 parser is based on [reverse-engineering work](https://raw.githubusercontent.com/SmileyAG/dumpster/refs/heads/src_jb007nightfirepc_alurazoe/file_format_analysis.txt) by Alhexx.
- `vpkpp`'s GCF parser was contributed by [@eepycats](https://github.com/eepycats) and [@ymgve](https://github.com/ymgve).
- `vpkpp`'s HOG parser was contributed by [@erysdren](https://github.com/erysdren).
- `vpkpp`'s OL parser is based on [reverse-engineering work](https://github.com/erysdren/scratch/blob/main/kaitai/worldcraft_ol.ksy) by [@erysdren](https://github.com/erysdren).
- `vpkpp`'s ORE parser is based on [reverse-engineering work](https://github.com/erysdren/narbacular-drop-tools) by [@erysdren](https://github.com/erysdren).
- `vpkpp`'s VPP parser was contributed by [@erysdren](https://github.com/erysdren).
- `vpkpp`'s WAD3 parser/writer was contributed by [@ozxybox](https://github.com/ozxybox).
- `vtfpp`'s NICE/Lanczos-3 resize filter support was contributed by [@koerismo](https://github.com/koerismo).
- `vtfpp`'s SHT parser/writer was contributed by [@Trico Everfire](https://github.com/Trico-Everfire).
- `vtfpp`'s initial VTF write support was loosely based on work by [@Trico Everfire](https://github.com/Trico-Everfire).
- `vtfpp`'s HDRI to cubemap conversion code is modified from the [HdriToCubemap](https://github.com/ivarout/HdriToCubemap) library by [@ivarout](https://github.com/ivarout).
| text/markdown | null | craftablescience <lauralewisdev@gmail.com> | null | craftablescience <lauralewisdev@gmail.com> | null | null | [
"License :: OSI Approved :: MIT License",
"Intended Audience :: Developers",
"Topic :: File Formats",
"Programming Language :: C++",
"Programming Language :: Python :: 3 :: Only",
"Programming Language :: Python :: 3.9",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11... | [] | null | null | >=3.9 | [] | [] | [] | [] | [] | [] | [] | [
"homepage, https://sourcepp.org",
"repository, https://github.com/craftablescience/sourcepp",
"issue tracker, https://github.com/craftablescience/sourcepp/issues",
"funding, https://ko-fi.com/craftablescience"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-19T03:02:25.695180 | sourcepp-2026.2.18.tar.gz | 40,320 | 6f/9e/a6cf425eb79005ea302be9995979dedd5890169fefdda76377e9b86ee512/sourcepp-2026.2.18.tar.gz | source | sdist | null | false | b3e529bd211f23e71d102723861cb566 | e463bc0e6b76149c9474a14c67924d1f20a1282718b00800b4c4a22b00601980 | 6f9ea6cf425eb79005ea302be9995979dedd5890169fefdda76377e9b86ee512 | null | [] | 1,003 |
2.1 | odoo-addon-fieldservice | 18.0.5.6.0.3 | Manage Field Service Locations, Workers and Orders | .. image:: https://odoo-community.org/readme-banner-image
:target: https://odoo-community.org/get-involved?utm_source=readme
:alt: Odoo Community Association
=============
Field Service
=============
..
!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
!! This file is generated by oca-gen-addon-readme !!
!! changes will be overwritten. !!
!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
!! source digest: sha256:032a4ce9dd715b71ba4e2bd86a445f039a7bbce36c7d62d087cb1c86c5db90c2
!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
.. |badge1| image:: https://img.shields.io/badge/maturity-Production%2FStable-green.png
:target: https://odoo-community.org/page/development-status
:alt: Production/Stable
.. |badge2| image:: https://img.shields.io/badge/license-AGPL--3-blue.png
:target: http://www.gnu.org/licenses/agpl-3.0-standalone.html
:alt: License: AGPL-3
.. |badge3| image:: https://img.shields.io/badge/github-OCA%2Ffield--service-lightgray.png?logo=github
:target: https://github.com/OCA/field-service/tree/18.0/fieldservice
:alt: OCA/field-service
.. |badge4| image:: https://img.shields.io/badge/weblate-Translate%20me-F47D42.png
:target: https://translation.odoo-community.org/projects/field-service-18-0/field-service-18-0-fieldservice
:alt: Translate me on Weblate
.. |badge5| image:: https://img.shields.io/badge/runboat-Try%20me-875A7B.png
:target: https://runboat.odoo-community.org/builds?repo=OCA/field-service&target_branch=18.0
:alt: Try me on Runboat
|badge1| |badge2| |badge3| |badge4| |badge5|
This module is the base of the Field Service application in Odoo.
**Table of contents**
.. contents::
:local:
Configuration
=============
The base Field Service module can be used with minimal initial
configuration. It also allows for many advanced features, which require
a more in-depth configuration.
Order Stages
------------
The stage of an order is used to monitor its progress. Stages can be
configured based on your company's specific business needs. A basic set
of order stages comes pre-configured for use.
1. Go to *Field Service > Configuration > Stages*
2. Create or edit a stage
3. Set the name for the stage.
4. Set the sequence order for the stage.
5. Select *Order* type to apply this stage to your orders.
6. Additionally, you can set a color for the stage.
Field Service Areas
-------------------
You can manage designated areas or locales for your field service
workers, salesmen, and other resources. For example, salesmen may serve
a particular Territory. There may be multiple Territories served by a
single Branch office location. Multiple Branches are managed within a
District and these Districts are managed under an encompassing Region.
Setup a Territory
~~~~~~~~~~~~~~~~~
1. Go to *Settings > Users & Companies > Territories*
2. Create or select a territory
3. Set the territory Name and description
4. Select or create a branch which this territory serves
5. Choose a boundary type (zip or country) which defines the boundary
   used
6. Input a list of zip codes or countries based on your desired
   configuration
Setup Branches, Districts, and Regions
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
If your business requires, define your Branches, Districts, and Regions.
These are found under *Field Service > Configuration > Locations*
Advanced Configurations
-----------------------
Additional features, automations, and GeoEngine features can be enabled
in the General Settings panel for Field Service.
1. Go to *Field Service > Configuration > Settings*
2. Enable additional options
3. Configure new options
Manage Teams
~~~~~~~~~~~~
Teams can be used to organize the processing of field service orders
into groups. Different teams may have different workflows that a field
service order needs to follow.
1. Go to *Field Service > Configuration > Workers > Teams*
2. Create or select a team
3. Set the team name, description, and sequence
You can now define custom stages for each team processing orders.
1. Go to *Field Service > Configuration > Stages*
2. Create or edit a stage
3. Select the teams for which this stage should be used
Manage Categories
~~~~~~~~~~~~~~~~~
Categories are used to group workers and the type of orders a worker can
do.
1. Go to *Field Service > Configuration > Workers > Categories*
2. Create or select a category
3. Set the name and description of category
4. Additionally, you can select a parent category if required
Manage Tags
~~~~~~~~~~~
Tags can be used to filter and report on field service orders
1. Go to *Field Service > Configuration > Orders > Tags*
2. Create or select a tag
3. Set the tag name
4. Set a color index for the tag
Manage Order Templates
~~~~~~~~~~~~~~~~~~~~~~
Order templates allow you to create standard templates for your orders.
1. Go to *Field Service > Master Data > Templates*
2. Create or select a template
3. Set the name
4. Set the standard order instructions
Usage
=====
To use this module, you need to:
Add Field Service Locations
---------------------------
Locations are the specific places where a field service order is
performed.
1. Go to *Field Service > Master Data > Locations*
2. Create a location
Add Field Service Workers
-------------------------
Workers are the people responsible for performing a field service order.
These workers may be subcontractors or a company's own employees.
1. Go to *Field Service > Master Data > Workers*
2. Create a worker
Process Orders
--------------
Once you have established your data, you can begin processing field
service orders.
1. Go to *Field Service > Dashboard > Orders*
2. Create or select an order
3. Enter relevant details for the order
4. Process order through each stage as defined by your business
requirements
Known issues / Roadmap
======================
The roadmap of the Field Service application is documented on
`Github <https://github.com/OCA/field-service/issues/1>`__.
Bug Tracker
===========
Bugs are tracked on `GitHub Issues <https://github.com/OCA/field-service/issues>`_.
In case of trouble, please check there if your issue has already been reported.
If you spotted it first, help us to smash it by providing a detailed and welcomed
`feedback <https://github.com/OCA/field-service/issues/new?body=module:%20fieldservice%0Aversion:%2018.0%0A%0A**Steps%20to%20reproduce**%0A-%20...%0A%0A**Current%20behavior**%0A%0A**Expected%20behavior**>`_.
Do not contact contributors directly about support or help with technical issues.
Credits
=======
Authors
-------
* Open Source Integrators
Contributors
------------
- Wolfgang Hall <whall@opensourceintegrators.com>
- Maxime Chambreuil <mchambreuil@opensourceintegrators.com>
- Steve Campbell <scampbell@opensourceintegrators.com>
- Bhavesh Odedra <bodedra@opensourceintegrators.com>
- Michael Allen <mallen@opensourceintegrators.com>
- Sandip Mangukiya <smangukiya@opensourceintegrators.com>
- Serpent Consulting Services Pvt. Ltd. <support@serpentcs.com>
- Brian McMaster <brian@mcmpest.com>
- Raphaël Reverdy <raphael.reverdy@akretion.com>
- Ammar Officewala <ammar.o.serpentcs@gmail.com>
- Yves Goldberg <yves@ygol.com>
- Freni Patel <fpatel@opensourceintegrators.com>
- `Tecnativa <https://www.tecnativa.com>`__:
- Víctor Martínez
- Nils Coenen <nils.coenen@nico-solutions.de>
- Alex Comba <alex.comba@agilebg.com>
Other credits
-------------
The development of this module has been financially supported by:
- Open Source Integrators <https://opensourceintegrators.com>
Maintainers
-----------
This module is maintained by the OCA.
.. image:: https://odoo-community.org/logo.png
:alt: Odoo Community Association
:target: https://odoo-community.org
OCA, or the Odoo Community Association, is a nonprofit organization whose
mission is to support the collaborative development of Odoo features and
promote its widespread use.
.. |maintainer-max3903| image:: https://github.com/max3903.png?size=40px
:target: https://github.com/max3903
:alt: max3903
.. |maintainer-brian10048| image:: https://github.com/brian10048.png?size=40px
:target: https://github.com/brian10048
:alt: brian10048
Current `maintainers <https://odoo-community.org/page/maintainer-role>`__:
|maintainer-max3903| |maintainer-brian10048|
This module is part of the `OCA/field-service <https://github.com/OCA/field-service/tree/18.0/fieldservice>`_ project on GitHub.
You are welcome to contribute. To learn how please visit https://odoo-community.org/page/Contribute.
| text/x-rst | Open Source Integrators, Odoo Community Association (OCA) | support@odoo-community.org | null | null | AGPL-3 | null | [
"Programming Language :: Python",
"Framework :: Odoo",
"Framework :: Odoo :: 18.0",
"License :: OSI Approved :: GNU Affero General Public License v3",
"Development Status :: 5 - Production/Stable"
] | [] | https://github.com/OCA/field-service | null | >=3.10 | [] | [] | [] | [
"odoo-addon-base_territory==18.0.*",
"odoo==18.0.*"
] | [] | [] | [] | [] | twine/6.2.0 CPython/3.12.3 | 2026-02-19T03:01:45.114343 | odoo_addon_fieldservice-18.0.5.6.0.3-py3-none-any.whl | 358,396 | f8/5e/9e74ee78c9ea6469f415122541479f3d2436141743707cc337b5193ee01d/odoo_addon_fieldservice-18.0.5.6.0.3-py3-none-any.whl | py3 | bdist_wheel | null | false | 28367ea0a5632a28f5419c03247c838b | d35204a8c0321e94a042c42f50e47e4cefabf2ba23edb3569a83a4ae92f5bfdf | f85e9e74ee78c9ea6469f415122541479f3d2436141743707cc337b5193ee01d | null | [] | 126 |
2.4 | adzerk-decision-sdk | 1.0.0b20 | Adzerk Decision SDK | Adzerk Decision SDK
Python Software Development Kit for Adzerk Decision & UserDB APIs
https://github.com/adzerk/adzerk-decision-sdk-python
| null | Adzerk | engineering@adzerk.com | null | null | null | adzerk, Adzerk Decision SDK | [] | [] | https://github.com/adzerk/adzerk-decision-sdk-python | null | >=3.10 | [] | [] | [] | [
"urllib3>=2.0.0",
"six>=1.10",
"certifi>=2023.7.22",
"python-dateutil>=2.8.2"
] | [] | [] | [] | [] | twine/6.1.0 CPython/3.13.7 | 2026-02-19T03:01:12.006577 | adzerk_decision_sdk-1.0.0b20.tar.gz | 54,014 | a4/c4/818b7932f4fc75cb4b8beee6177636fd196254f44e56348be7b38fc502e9/adzerk_decision_sdk-1.0.0b20.tar.gz | source | sdist | null | false | 05a70c1005295aaab0cb13c60f902d26 | 3a026e5d59ff0cc4622b1fd63f005b74802b627ce7a1d62bd395fc3b886c1b96 | a4c4818b7932f4fc75cb4b8beee6177636fd196254f44e56348be7b38fc502e9 | null | [
"LICENSE"
] | 216 |
2.4 | aphex-service-clients | 0.2.2 | Generated API clients for Aphex platform services | # Aphex Service Clients
Generated API clients for Aphex platform services with built-in retry logic, exponential backoff, and jitter.
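
The retry behavior described above can be sketched in plain Python. This is an illustrative model only, not the package's actual API (the package delegates retries to `tenacity`); the function name and defaults here are assumptions:

```python
import random

def backoff_delay(attempt: int, base: float = 0.5, cap: float = 30.0) -> float:
    """Exponential backoff with full jitter: sleep a random amount between
    0 and min(cap, base * 2**attempt) before retry number `attempt`."""
    return random.uniform(0, min(cap, base * (2 ** attempt)))

# Upper bounds grow exponentially, but jitter decorrelates clients so
# many callers don't retry in lockstep after a shared outage.
print([round(min(30.0, 0.5 * 2 ** a), 2) for a in range(5)])  # [0.5, 1.0, 2.0, 4.0, 8.0]
```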
## Installation
```bash
pip install aphex-service-clients
```
## Usage
```python
from aphex_clients.embedding import EmbeddingClient
async with EmbeddingClient(base_url="http://embedding-svc:8000") as client:
response = await client.create_embeddings(input=["Hello world"])
embeddings = [d.embedding for d in response.data]
```
## Regenerating Clients
When OpenAPI specs change:
```bash
pip install -e ".[dev]"
./scripts/generate-clients.sh
```
## Services
- **Embedding Service** - OpenAI-compatible embedding generation
| text/markdown | null | null | null | null | null | null | [] | [] | null | null | >=3.11 | [] | [] | [] | [
"attrs>=23.0.0",
"httpx>=0.26.0",
"pydantic>=2.5.0",
"tenacity>=8.2.0",
"openapi-python-client>=0.19.0; extra == \"dev\"",
"pytest-asyncio>=0.23.0; extra == \"dev\"",
"pytest>=8.0.0; extra == \"dev\""
] | [] | [] | [] | [] | twine/6.1.0 CPython/3.13.7 | 2026-02-19T03:00:11.431481 | aphex_service_clients-0.2.2.tar.gz | 50,750 | 4c/00/7cbab157ab7752b97b080158d9e962e86ead5493cf6f4e3b7ec7b7578467/aphex_service_clients-0.2.2.tar.gz | source | sdist | null | false | 4b2bbde27d260615a4bee17162d39dc8 | ad5da04639de9fb6130ade09d13cd8d7d252c370e96cb98a03ea1ec5624d3acb | 4c007cbab157ab7752b97b080158d9e962e86ead5493cf6f4e3b7ec7b7578467 | null | [] | 231 |
2.4 | aovt | 1.0.0 | A tool for decompress/compressed file AOV/ROV game | ENG:
A Python tool to decompress/compress AOV/ROV files using the Python Zstandard library (pyzstd)
VI:
1 công cụ Python dùng để Giải mã hoá/Mã hoá tệp với thư viện Python Zstandard (pyzstd)
**Update 0.0.5**
ENG:
* Added new alias package "aovt"
* Fix bug
VI:
* Thêm tên gọi gói mới "aovt"
* Sửa lỗi
| text/markdown | null | null | null | null | null | null | [] | [] | null | null | >=3.8 | [] | [] | [] | [
"pyzstd",
"packaging",
"requests",
"AoV_Zstd"
] | [] | [] | [] | [] | twine/6.2.0 CPython/3.12.8 | 2026-02-19T02:59:47.985613 | aovt-1.0.0.tar.gz | 1,345 | 7b/0d/74063a988fc384825ad2c9a40009ae42a5966dfe28641a2edf1a9f87af5b/aovt-1.0.0.tar.gz | source | sdist | null | false | 94f0cc863d148da2b3cc81a6e0efffac | 02b19d34be96fc1cb9737d46237a1ee07364f94ffa03411a11fc7bddec14bbc2 | 7b0d74063a988fc384825ad2c9a40009ae42a5966dfe28641a2edf1a9f87af5b | null | [] | 247 |
2.4 | administrate | 0.1.1 | Python SDK for the Administrate.dev REST API | # Administrate Python SDK
The official Python SDK for the [Administrate.dev](https://administrate.dev) REST API.
[Administrate.dev](https://administrate.dev) is AI Agency Management software. It is a monitoring and management platform for AI agencies running n8n and AI automation workflows across multiple clients. It provides a single dashboard to track every workflow, every client, every failure, and all LLM costs, so you can catch problems before clients do and prove the value of your automations.
**Key platform features:**
- **Multi-instance monitoring** — See all n8n instances across every client in one place
- **Error tracking** — Real-time failure detection with automatic error categorization
- **LLM cost tracking** — Connect OpenAI, Anthropic, Azure, and OpenRouter accounts to attribute costs to specific clients
- **Workflow insights** — Execution counts, success rates, and time-saved ROI reporting
- **Sync health** — Know instantly when a data sync fails
- **Webhooks & API** — Full programmatic access for custom integrations
## Installation
```bash
pip install administrate
```
Requires Python 3.9+.
## Quick start
```python
from administrate import Administrate
client = Administrate(api_key="sk_live_...")
# Get account info
account = client.account.get()
print(account.name, account.plan)
# List all clients with auto-pagination
for c in client.clients.list():
print(c.name, c.n8n_instances_count)
# Check for failed executions across all instances
for execution in client.executions.list(errors_only=True):
print(f"{execution.workflow_name}: {execution.error_category}")
# Get LLM cost summary
costs = client.llm_costs.summary()
print(f"Total: ${costs.data.summary.total_cost_cents / 100:.2f}")
```
### Async usage
```python
from administrate import AsyncAdministrate
async with AsyncAdministrate(api_key="sk_live_...") as client:
account = await client.account.get()
async for workflow in client.workflows.list(active=True):
print(workflow.name, workflow.is_active)
```
Every resource method available on `Administrate` has an identical async counterpart on `AsyncAdministrate`.
## Configuration
```python
from administrate import Administrate
client = Administrate(
api_key="sk_live_...", # Required. Must start with "sk_live_"
base_url="https://...", # Default: "https://administrate.dev"
timeout=30.0, # Request timeout in seconds. Default: 30
max_retries=3, # Retry attempts for failed requests. Default: 3
)
```
The client can be used as a context manager to ensure the underlying HTTP connection is closed:
```python
with Administrate(api_key="sk_live_...") as client:
account = client.account.get()
# Connection closed automatically
# Or close manually
client = Administrate(api_key="sk_live_...")
# ... use client ...
client.close()
```
You can also pass a pre-configured `httpx.Client` (or `httpx.AsyncClient` for async) if you need full control over the HTTP layer:
```python
import httpx
http_client = httpx.Client(proxy="http://proxy:8080")
client = Administrate(api_key="sk_live_...", http_client=http_client)
```
## API reference
All API keys are created in **Settings > Developers** within Administrate.dev. Tokens have three permission levels: `read`, `write`, and `full`.
### Account
```python
# Get current token info and account summary
me = client.account.me()
print(me.token.name, me.token.permission)
print(me.account.name, me.account.plan)
# Get full account details
account = client.account.get()
# Update account settings
account = client.account.update(
name="My Agency",
billing_email="billing@example.com",
timezone="Australia/Brisbane",
)
```
### Clients
Clients represent the companies you manage automations for.
```python
# List all clients (auto-paginates)
for c in client.clients.list():
print(c.name, c.code)
# Get a client (includes 7-day metrics)
c = client.clients.get("com_abc123")
print(c.metrics.success_rate, c.metrics.time_saved_minutes)
# Create a client
c = client.clients.create(
name="Acme Corp",
code="acme",
contact_email="ops@acme.com",
timezone="America/New_York",
)
# Update a client
c = client.clients.update("com_abc123", notes="Enterprise tier")
# Delete a client (requires full permission)
client.clients.delete("com_abc123")
```
### Instances
Instances are n8n deployments connected to Administrate.
```python
# List all instances
for inst in client.instances.list():
print(inst.name, inst.sync_status)
# Filter by client or sync status
for inst in client.instances.list(client_id="com_abc123", sync_status="error"):
print(inst.name, inst.last_sync_error)
# Get an instance (includes 7-day metrics)
inst = client.instances.get("n8n_abc123")
print(inst.metrics.executions_count, inst.metrics.success_rate)
# Connect a new n8n instance
inst = client.instances.create(
client_id="com_abc123",
name="Production n8n",
base_url="https://n8n.acme.com",
api_key="n8n_api_key_here",
)
# Trigger a sync
result = client.instances.sync("n8n_abc123", sync_type="all")
# Sync all instances at once
result = client.instances.sync_all(sync_type="workflows")
# Update an instance
inst = client.instances.update("n8n_abc123", name="Staging n8n")
# Delete an instance
client.instances.delete("n8n_abc123")
```
### Workflows
```python
# List workflows with filters
for wf in client.workflows.list(client_id="com_abc123", active=True):
print(wf.name, wf.is_active)
# Search by name
for wf in client.workflows.list(search="onboarding"):
print(wf.name)
# Get a workflow (includes 7-day metrics)
wf = client.workflows.get("wfl_abc123")
print(wf.metrics.success_rate, wf.metrics.time_saved_minutes)
# Set time-saved estimates (for ROI reporting)
wf = client.workflows.update(
"wfl_abc123",
minutes_saved_per_success=15,
minutes_saved_per_failure=5,
)
```
### Executions
Executions are read-only records of workflow runs synced from n8n.
```python
# List executions with filters
for ex in client.executions.list(
client_id="com_abc123",
status="failed",
start_date="2025-01-01",
end_date="2025-01-31",
):
print(ex.workflow_name, ex.status, ex.duration_ms)
# Get only errors
for ex in client.executions.list(errors_only=True):
print(ex.error_category, ex.workflow_name)
# Get execution details (includes error message and payload)
ex = client.executions.get("exe_abc123")
print(ex.error_message)
print(ex.error_payload)
```
### Sync runs
```python
# List sync run history
for run in client.sync_runs.list(instance_id="n8n_abc123", status="failed"):
print(run.sync_type, run.status, run.duration_seconds)
# Get a specific sync run
run = client.sync_runs.get("syn_abc123")
# Get sync health across all instances
for entry in client.sync_runs.health():
print(entry.instance_name, entry.sync_status)
print(f" Workflows last synced: {entry.workflows.last_synced_at}")
print(f" Executions last synced: {entry.executions.last_synced_at}")
```
### Users
```python
# List team members
for user in client.users.list():
print(user.name, user.email, user.role)
# Get a user
user = client.users.get("usr_abc123")
# Invite a new team member
invitation = client.users.invite(email="new@example.com", role="member")
print(invitation.expires_at)
# Change a user's role
user = client.users.update("usr_abc123", role="admin")
# Remove a user
client.users.delete("usr_abc123")
```
### Webhooks
```python
# List webhooks
for wh in client.webhooks.list():
print(wh.url, wh.events, wh.enabled)
# Create a webhook
wh = client.webhooks.create(
url="https://example.com/hook",
events=["execution.failed", "sync.failed"],
description="Slack failure alerts",
)
print(wh.secret) # Save this — used to verify webhook signatures
# Update a webhook
wh = client.webhooks.update("whk_abc123", enabled=False)
# Regenerate the signing secret (old secret becomes invalid immediately)
wh = client.webhooks.regenerate_secret("whk_abc123")
print(wh.secret)
# Delete a webhook
client.webhooks.delete("whk_abc123")
```
### API tokens
```python
# List all tokens
for token in client.api_tokens.list():
print(token.name, token.permission, token.token_hint)
# Create a token (the plain token is only returned once)
token = client.api_tokens.create(
name="CI/CD Pipeline",
permission="read",
ip_allowlist=["10.0.0.0/8"],
expires_in="90_days",
)
print(token.token) # sk_live_... — save this immediately
# Update a token
token = client.api_tokens.update("tok_abc123", name="Updated Name")
# Revoke a token
client.api_tokens.delete("tok_abc123")
```
### LLM providers
Connect your AI provider accounts to track costs.
```python
# List providers
for provider in client.llm_providers.list():
print(provider.name, provider.provider_type, provider.sync_status)
# Get a provider (includes 7-day metrics)
provider = client.llm_providers.get("llm_abc123")
print(provider.metrics.total_cost_cents, provider.metrics.total_tokens)
# Connect a new provider
provider = client.llm_providers.create(
name="OpenAI Production",
provider_type="openai", # openai, anthropic, openrouter, or azure
api_key="sk-...",
organization_id="org-...",
)
# Trigger a cost sync
client.llm_providers.sync("llm_abc123")
# Update a provider
provider = client.llm_providers.update("llm_abc123", name="OpenAI Staging")
# Delete a provider
client.llm_providers.delete("llm_abc123")
```
### LLM projects
Projects are discovered automatically when syncing a provider. Assign them to clients to attribute costs.
```python
# List projects for a provider
for project in client.llm_projects.list("llm_abc123"):
print(project.name, project.total_cost_cents, project.client_name)
# Assign a project to a client
project = client.llm_projects.update(
"llm_abc123", "proj_456", client_id="com_abc123"
)
```
### LLM costs
```python
# Get cost summary (defaults to last 7 days)
costs = client.llm_costs.summary()
print(f"Total: ${costs.data.summary.total_cost_cents / 100:.2f}")
print(f"Tokens: {costs.data.summary.total_tokens:,}")
# Breakdown by provider
for p in costs.data.providers:
print(f" {p.name}: ${p.cost_cents / 100:.2f}")
# Breakdown by model
for m in costs.data.models:
print(f" {m.model}: ${m.cost_cents / 100:.2f}")
# Daily trend
for day in costs.data.daily:
print(f" {day.date}: ${day.cost_cents / 100:.2f}")
# Custom date range
costs = client.llm_costs.summary(
start_date="2025-01-01",
end_date="2025-01-31",
)
# Costs by client
for entry in client.llm_costs.by_client().data:
print(f"{entry.name}: ${entry.cost_cents / 100:.2f}")
# Costs by provider
for entry in client.llm_costs.by_provider().data:
print(f"{entry.name}: ${entry.cost_cents / 100:.2f}")
```
## Pagination
All `.list()` methods return an iterator that handles pagination automatically. By default, the API returns 25 items per page (max 100).
```python
# Auto-paginate through all results
for c in client.clients.list():
print(c.name)
# Control page size
for c in client.clients.list(per_page=100):
print(c.name)
# Get a single page
page = client.clients.list(per_page=10).first_page()
print(page.meta.total, page.meta.total_pages)
for c in page:
print(c.name)
```
Async iteration works the same way:
```python
async for c in async_client.clients.list():
print(c.name)
```
## Error handling
The SDK raises typed exceptions for all API errors:
```python
from administrate import (
Administrate,
APIError,
AuthenticationError,
NotFoundError,
RateLimitError,
ValidationError,
)
client = Administrate(api_key="sk_live_...")
try:
c = client.clients.get("com_nonexistent")
except NotFoundError as e:
print(f"Not found: {e.message}")
except AuthenticationError:
print("Invalid API key")
except RateLimitError as e:
print(f"Rate limited. Retry after {e.retry_after}s")
except ValidationError as e:
print(f"Invalid params: {e.body}")
except APIError as e:
print(f"API error {e.status_code}: {e.message}")
```
**Exception hierarchy:**
| Exception | Status code | Description |
|---|---|---|
| `AdministrateError` | — | Base exception for all SDK errors |
| `APIError` | Any non-2xx | Base for all HTTP API errors |
| `AuthenticationError` | 401 | Invalid or missing API key |
| `PermissionDeniedError` | 403 | Insufficient token permissions |
| `NotFoundError` | 404 | Resource does not exist |
| `ValidationError` | 422 | Invalid request parameters |
| `RateLimitError` | 429 | Rate limit exceeded (has `retry_after`) |
| `InternalServerError` | 5xx | Server-side error |
| `ConnectionError` | — | Failed to connect to the API |
| `TimeoutError` | — | Request timed out |
All `APIError` subclasses expose `status_code`, `response` (the raw `httpx.Response`), and `body` (parsed JSON or text).
## Retries
The SDK automatically retries failed requests with exponential backoff:
- **429 (rate limited)** — Retries after the duration specified in the `Retry-After` header
- **5xx (server errors)** — Retries with exponential backoff (0.5s, 1s, 2s, ...)
- **Connection errors and timeouts** — Retried with the same backoff schedule
By default, the SDK retries up to 3 times. Set `max_retries=0` to disable:
```python
client = Administrate(api_key="sk_live_...", max_retries=0)
```
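The backoff schedule above can be illustrated with a small standalone helper. This is a sketch of the documented behavior, not the SDK's internal code; the `cap` and `jitter` parameters are assumptions added for illustration:

```python
import random

def backoff_schedule(max_retries=3, base=0.5, cap=8.0, jitter=False):
    """Illustrative exponential backoff: 0.5s, 1s, 2s, ... capped at `cap` seconds."""
    delays = []
    for attempt in range(max_retries):
        delay = min(base * (2 ** attempt), cap)
        if jitter:
            # spread retries out to avoid a thundering herd of synchronized clients
            delay *= random.uniform(0.5, 1.5)
        delays.append(delay)
    return delays

print(backoff_schedule())  # → [0.5, 1.0, 2.0]
```

For a 429 response, the SDK uses the server-provided `Retry-After` value instead of this computed schedule.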
## Requirements
- Python 3.9+
- [httpx](https://www.python-httpx.org/) >= 0.25.0
- [pydantic](https://docs.pydantic.dev/) >= 2.0.0
## License
MIT
| text/markdown | null | Administrate <support@administrate.dev> | null | null | null | null | [
"Development Status :: 4 - Beta",
"Intended Audience :: Developers",
"License :: OSI Approved :: MIT License",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.9",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Py... | [] | null | null | >=3.9 | [] | [] | [] | [
"httpx<1,>=0.25.0",
"pydantic<3,>=2.0.0",
"mypy>=1.0; extra == \"dev\"",
"pytest-asyncio>=0.21; extra == \"dev\"",
"pytest>=7.0; extra == \"dev\"",
"respx>=0.21; extra == \"dev\"",
"ruff>=0.1; extra == \"dev\""
] | [] | [] | [] | [
"Homepage, https://administrate.dev",
"Documentation, https://administrate.dev/docs",
"Repository, https://github.com/administrate-dev/administrate-python"
] | twine/6.2.0 CPython/3.12.5 | 2026-02-19T02:58:49.949938 | administrate-0.1.1.tar.gz | 84,382 | 61/c1/488aeb35301a10c7e52ac0bd149e84b92438945ff129246158f708046f14/administrate-0.1.1.tar.gz | source | sdist | null | false | 6109f493eced3273d62594ff1dad9b5b | c4a7fb0682e0dd444ac1e405fd178b0b37f3f374972f10fc9a29004ff0fa684f | 61c1488aeb35301a10c7e52ac0bd149e84b92438945ff129246158f708046f14 | MIT | [
"LICENSE"
] | 248 |
2.4 | stele-sdk | 0.2.2 | STELE Python SDK and local bootstrap helpers (HTTP client + optional local steled launcher). | # stele (Python)
Dependency-free Python SDK for STELE (stdlib `urllib`, Python 3.9+).
This package contains:
- `stele_sdk`: the HTTP client (in-repo module)
- `stele`: thin wrapper module + optional local bootstrap helpers (installed from `pip install stele-sdk`)
If you want a deeper dive, see the repo docs at `docs/sdk/python.md`.
## Install
```bash
pip install stele-sdk
```
```python
from stele import SteleClient, APIError
```
Package note:
- PyPI distribution name: `stele-sdk`
- Import name (when installed from PyPI): `stele`
## Client Configuration
```python
client = SteleClient(
base_url="http://127.0.0.1:8080",
api_version="v1",
agent_id="agent-1",
session_id="session-1",
client_id="my-app",
embedder_override="", # optional
ops_auth_token="", # optional; also read from STELE_OPS_AUTH_TOKEN / STELE_MCP_AUTH_TOKEN
timeout=15,
)
```
Environment defaults (when not passed explicitly):
- `STELE_BASE_URL`
- `STELE_API_VERSION`
- `STELE_AGENT_ID`
- `STELE_SESSION_ID`
- `STELE_CLIENT_ID`
- `STELE_EMBEDDER_OVERRIDE`
- `STELE_OPS_AUTH_TOKEN` (ops endpoints)
- `STELE_MCP_AUTH_TOKEN` (ops endpoints; convenience fallback)
## Minimal Example: Write + Query
```python
import time
from stele import SteleClient
client = SteleClient(
base_url="http://127.0.0.1:8080",
agent_id="agent-1",
session_id="session-1",
)
write = client.write(
{
"memory": {
"content": "We should attach provenance.source_refs to all writes.",
"type": "MEMORY_TYPE_PROCEDURAL",
"scope": "SCOPE_AGENT",
"provenance": {
"source_refs": ["run:sdk/python/README.md#minimal-example"],
"citations": [],
"origin_turn_id": "",
},
"timestamp_ms": int(time.time() * 1000),
}
}
)
print("wrote:", (write.get("memory") or {}).get("id"))
query = client.query(
{
"query_text": "What provenance should we attach?",
"budget_tokens": 250,
"scope_order": ["SCOPE_AGENT", "SCOPE_TEAM", "SCOPE_PROJECT", "SCOPE_GLOBAL"],
}
)
print("recall:", len(query.get("memories") or []))
```
## Optional: Local Bootstrap (No Go Toolchain)
If you have GitHub Release assets published for this repo, you can bootstrap a local `steled` before creating the client:
```python
from stele.local import ensure_running
from stele import SteleClient
ensure_running()
client = SteleClient(agent_id="agent-1", session_id="session-1")
print(client.query({"query_text": "sanity check"}))
```
## Error Handling
Non-2xx responses raise `APIError`:
```python
from stele import APIError
try:
client.reindex({})
except APIError as exc:
print("status:", exc.status_code, "code:", exc.code, "request:", exc.request_id, "msg:", exc.message)
```
## Graph Example: Add Edge + List
```python
import time
client.edge_add(
{
"edge": {
"from_id": "mem-aaa",
"to_id": "mem-bbb",
"type": "EDGE_TYPE_RELATES_TO",
"weight": 1,
"created_at_ms": int(time.time() * 1000),
}
}
)
edges = client.edge_list({"node_id": "mem-aaa"})
print(edges.get("edges"))
```
## Proactive Surfacing Helper
Use the built-in trigger orchestrator when you want SDK-side proactive suggest parity:
```python
suggestor = client.create_proactive_suggestor()
suggestor.trigger("bootstrap", {"active_files": ["README.md"]})
suggestor.trigger(
"context_shift",
{"active_files": ["src/auth/handler.go"], "recent_commands": ["plan:implement"]},
)
suggestor.trigger("pre_edit", {"active_symbols": ["AuthHandler"]})
```
`ProactiveSuggestor` enforces debounce, context-hash dedupe, rate limits, and payload caps before calling `suggest`.
## Ops Endpoints (Bearer Token)
Ops endpoints require `ops_auth_token` (or env `STELE_OPS_AUTH_TOKEN` / `STELE_MCP_AUTH_TOKEN`):
- `bulk_forget`
- `export_ndjson` / `export_to_file`
- `import_memories`
- `maintenance`
- `reindex`
- `decay`
### Export to File (Convenience)
```python
import os
from stele import SteleClient
ops = SteleClient(
base_url="http://127.0.0.1:8080",
agent_id="agent-1",
session_id="session-1",
ops_auth_token=os.environ.get("STELE_OPS_AUTH_TOKEN", ""),
)
ops.export_to_file("stele-export.jsonl")
```
### Import
```python
from stele import SteleClient
with open("stele-export.jsonl", "r", encoding="utf-8") as f:
ndjson = f.read()
ops.import_memories({"data": ndjson, "strict": False})
```
## API Coverage (Method Map)
Methods map closely to `steled` routes:
- Core: `write`, `query`, `feedback`, `protect`, `supersede`, `forget`
- Retrieval utils: `list_memories`, `get_memory`, `search_memories`, `update_memory`, `verify`, `extract`
- Bulk: `bulk_write`, `bulk_forget` (ops)
- Ops: `export_ndjson`/`export_to_file`, `import_memories`, `schema`, `metrics`, `maintenance`, `reindex`, `decay`
- Graph: `edge_add`, `edge_delete`, `edge_list`, `edge_traverse`, `edge_suggest`
- Proactive/stream: `suggest`, `watch`
- Sharing: `learn_share` (mistake-only)
- Agent/system: `agent_get`, `agent_upsert`, `access_mark_used`, `events`, `membership_add/remove/list`, `health`, `stats`
| text/markdown | STELE | null | null | null | null | null | [
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3 :: Only"
] | [] | null | null | >=3.9 | [] | [] | [] | [] | [] | [] | [] | [
"Repository, https://github.com/sincover/stele"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-19T02:57:44.522763 | stele_sdk-0.2.2.tar.gz | 18,659 | 3c/95/2950c3ff952e54639dd5b45cdad8e58958c48f33652c9e1ddfdb751a1e6b/stele_sdk-0.2.2.tar.gz | source | sdist | null | false | 37a86aa1bc3816168d664a29925f535c | d354235dbdef7729edf258ece118a058a3d6329b5b0605dcd30b78fab2b57d8a | 3c952950c3ff952e54639dd5b45cdad8e58958c48f33652c9e1ddfdb751a1e6b | null | [] | 234 |
2.4 | kattis2canvas | 0.1.9 | CLI tool to integrate Kattis offerings with Canvas LMS courses | # kattis2canvas
this is a simple python tool that uses the canvasapi toolkit to integrate a kattis offering with a canvas course. the tool was specifically made to work with the commercial kattis (as you will see in a moment). it would take a bit of tweaking on the web scraping to make it work with open.kattis.com.
the kattis connection is done using web scraping and thus it is very fragile! at the end, i will highlight where it is most vulnerable.
# setting up kattis2canvas
## the config file
first you will need to set up the config file. it has all your authorization tokens so DON'T CHECK IT IN. i specifically look for it in the app_dir as defined by click to get it far away from source. on linux this ends up being a file called ~/.config/kattis2canvas.ini. this is how it should be populated:
```
[kattis]
username: YOUR_KATTIS_USERNAME_NOT_EMAIL
token: SOME_RANDOM_CHARACTERS
hostname: THE_DOMAIN_NAME_OF_YOUR_KATTIS_INSTANCE
loginurl: THE_URL_TO_LOG_IN_TO_KATTIS
[canvas]
url=URL_OF_YOUR_CANVAS_INSTANCE
token=SOME_RANDOM_CHARACTERS
```
you can easily get the kattis section by going to https://\<kattis>/download/kattisrc where \<kattis> is your instance of kattis. you will need to move the lines around slightly. for canvas the url is the one you use to access the main page of canvas. you generate the token in the bottom of the Account -> Settings page.
you can check that everything is set up by running:
```
kattis2canvas list-offerings
kattis2canvas list-assignments
```
or if you built the pyz file using make_zipapp.sh
```
python kattis2canvas.pyz list-offerings
python kattis2canvas.pyz list-assignments
```
## mapping student kattis accounts to canvas accounts
in canvas, you can associate various URLs with your account in the Links section of Account -> Profile. students need to put the URL of their kattis account in a link with the word "kattis" (any capitalization) in the title. this ends up being the join key for kattis2canvas.
you can check if the students have set up the links properly using
```
kattis2canvas kattislinks
```
# using kattis2canvas
## populating kattis assignments in canvas
when you create new kattis assignments, you will need to get them mirrored into canvas. currently the **course2canvas** command will put all of the assignments into an assignment group called kattis. the assignment group must be created in your course before running **course2canvas**.
when you specify the names of offerings in kattis and courses in canvas, you can specify a substring of the name and **kattis2canvas** will be able to use it as long as it matches exactly one name. if it doesn't, it will show you what it found.
when creating the assignment in canvas, anything you have put in the description in kattis will also be replicated in canvas along with a link to the kattis assignment.
if you have made changes to a kattis assignment that you have already populated in canvas, use the --force option to force an update. (right now there isn't a way to force individual assignments.)
if you use modules, you can use the **--add-to-module** flag to add the kattis assignments to a module. at this point, it puts all of the kattis assignments into that one module.
## getting the results to canvas
the **submissions2canvas** command will replicate results from student submissions on kattis into canvas. it will only replicate results that are at least as good as, or newer than, the results it has already replicated. a summary of the problem, the score, and a link to the submission will be added as a comment for the student in the gradebook for the relevant assignment. the idea is that when it's time to grade, you have easy access to the results and a link to the source from the canvas speedgrader.
# kattis webscraping
unfortunately, scraping kattis is very ad hoc and it would be naive to think that it won't change in ways that break this tool. we use BeautifulSoup (it's pretty beautiful...) so here are the features that the script relies on for kattis webpages. (note: the term "assume" is used for things we have to believe because that is the only reasonable way we can use the information we are given.)
* the list of offerings: (HOSTNAME is from the config file above) we assume http://HOSTNAME/ will give us a page with all the offerings and the urls in the href of those offerings will have the form **/courses/[^/]+/[^/]+**
* the list of assignments: we assume the offering page will have the detail page for assignments in hrefs of anchor tags of the form "assignments/\w+$".
* the assignment details: we assume the assignment detail page will have an \<h2> tag with the text "Description" followed by a sibling \<p> tag that entirely contains the description. we also assume that there will be a \<td> for "start time" and another for "end time"; we do case insensitive comparisons to find them. we assume that the following \<td> tag will have the time.
* time: TIME IS HARD! if the time is recent, kattis will drop the date, so if we get a time with no date, we take the current date and combine it with the HH:MM:SS we get from kattis. we also assume all dates are UTC.
* getting submissions: we assume the submissions for an assignment are found at https://HOSTNAME/OFFERING/assignments/ASSIGNMENT_ID/submissions. it appears that all submissions for the assignment are listed on that page; submissions for a problem outside of the assignment time period will not show up there. the submissions are in a "judge" table. the \<th> headers we are looking for are "User" for the user url, "Problem" for the name of the problem, "Test cases" for the score reflected as success/count (with -/- indicating no tries), and "" for the url of the submission. once we know the column numbers we want, we have to look for the \<tbody> child of the table (problems happen if you try to look for \<td> recursively from the table!). then we look for \<tr> children of \<tbody> which have a "data-submission-id" attribute.
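the date-merging described under *time* above can be sketched roughly like this. the format strings are placeholders — the real scraper has to match whatever kattis actually emits:

```python
from datetime import datetime, timezone

def parse_kattis_time(text, now=None):
    """illustrative: merge a date-less HH:MM:SS from kattis with today's date (UTC)."""
    now = now or datetime.now(timezone.utc)
    try:
        # full timestamp, e.g. "2024-03-01 13:45:00" (format is an assumption)
        return datetime.strptime(text, "%Y-%m-%d %H:%M:%S").replace(tzinfo=timezone.utc)
    except ValueError:
        # time only: take the current date and the HH:MM:SS from kattis
        t = datetime.strptime(text, "%H:%M:%S").time()
        return datetime.combine(now.date(), t, tzinfo=timezone.utc)

ref = datetime(2024, 3, 1, 9, 0, tzinfo=timezone.utc)
print(parse_kattis_time("13:45:00", now=ref))  # → 2024-03-01 13:45:00+00:00
```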
| text/markdown | bcr33d | null | null | null | null | kattis, canvas, lms, education, grading | [
"Development Status :: 4 - Beta",
"Environment :: Console",
"Intended Audience :: Education",
"Operating System :: OS Independent",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.8",
"Programming Language :: Python :: 3.9",
"Programming Language :: Python :: 3.10",
"Prog... | [] | null | null | >=3.8 | [] | [] | [] | [
"click>=8.0",
"requests>=2.25",
"beautifulsoup4>=4.9",
"canvasapi>=2.0",
"python-dateutil>=2.8"
] | [] | [] | [] | [
"Homepage, https://github.com/bcr33d/kattis2canvas",
"Repository, https://github.com/bcr33d/kattis2canvas"
] | twine/6.2.0 CPython/3.12.3 | 2026-02-19T02:57:38.282166 | kattis2canvas-0.1.9.tar.gz | 18,527 | 93/bc/6d894f709acb32a8adb2fae6aa45ba48d936bd5d2ed3d26193ea1d8dfc3b/kattis2canvas-0.1.9.tar.gz | source | sdist | null | false | fcdf625a47bdbf41bb177a1451d6ee93 | 677d7414c7b1f5f0701705da028cbe178b41a3b8a33ec0ace7c078a590c535b3 | 93bc6d894f709acb32a8adb2fae6aa45ba48d936bd5d2ed3d26193ea1d8dfc3b | MIT | [] | 247 |
2.4 | semanscope | 1.0.1 | Multilingual semantic embedding visualization and analysis toolkit | # Semanscope
**Multilingual Semantic Embedding Visualization and Analysis Toolkit**
[](https://opensource.org/licenses/MIT)
[](https://www.python.org/downloads/)
Semanscope is a comprehensive toolkit for visualizing and analyzing semantic embeddings across multiple languages. It features advanced metrics for measuring semantic consistency (Semantic Affinity) and relational structure preservation (Relational Affinity) in multilingual embedding models.
## Key Features
- **Multi-Model Support**: LaBSE, SONAR, Gemma, OpenAI, Voyage AI, Google Gemini, Ollama, and 30+ models
- **Advanced Dimensionality Reduction**: UMAP, PHATE, t-SNE, PaCMAP, TriMap
- **Semantic Affinity (SA)**: Novel metric for measuring semantic consistency across embeddings
- **Relational Affinity (RA)**: Metric for evaluating relational structure preservation
- **Interactive UI**: Streamlit-based interface with 11 specialized pages
- **Batch Benchmarking**: CLI tools for research-grade evaluation
- **Multilingual**: Support for 70+ languages
- **Visualization**: Interactive plots with Plotly and ECharts
## Quick Start
### Installation
```bash
# Clone the repository
git clone https://github.com/semanscope/semanscope.git
cd semanscope
# Create conda environment
conda create -n semanscope python=3.11
conda activate semanscope
# Install package with UI support
pip install -e ".[ui]"
# Or install with all dependencies (including API integrations)
pip install -e ".[all]"
```
### Launch the UI
```bash
# Option 1: Using the launcher script
python run_app.py
# Option 2: Using the CLI command (after installation)
semanscope-ui
```
### Basic Usage (Python API)
```python
from semanscope.models.model_manager import get_model
from semanscope.components.embedding_viz import EmbeddingVisualizer
# Load a model
model = get_model("LaBSE")
# Create visualizer
viz = EmbeddingVisualizer(model=model)
# Visualize embeddings
words = ["hello", "world", "friend", "peace"]
viz.plot_words(words, method="UMAP", dimension=2)
```
### Batch Benchmarking
```bash
# Semantic Affinity benchmark
semanscope-benchmark-sa \
--dataset data/input/NeurIPS-01-family-relations-v2.5-SA.csv \
--models LaBSE SONAR \
--output results/sa_benchmark.csv
# Relational Affinity benchmark
semanscope-benchmark-ra \
--dataset data/input/NeurIPS-01-family-relations-v2.5-RA.csv \
--models LaBSE SONAR \
--languages english chinese \
--output results/ra_benchmark.csv
```
## Features in Detail
### Semantic Affinity (SA) Metric
Measures how consistently a model represents semantic relationships:
```python
from semanscope.components.semantic_affinity import calculate_semantic_affinity
sa_score = calculate_semantic_affinity(
model=model,
word_pairs=[("cat", "dog"), ("happy", "sad")],
metric="cosine"
)
```
**SA Formula**:
```
SA = 1 - std(similarities) / mean(similarities)
```
Higher SA (→1.0) = more consistent semantic representations
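The SA formula can be computed directly. A minimal standalone sketch, assuming the population standard deviation (as in NumPy's default `std`):

```python
from statistics import mean, pstdev

def semantic_affinity(similarities):
    """SA = 1 - std(similarities) / mean(similarities), population std assumed."""
    return 1 - pstdev(similarities) / mean(similarities)

print(semantic_affinity([0.8, 0.8, 0.8]))            # → 1.0 (perfectly consistent)
print(round(semantic_affinity([0.9, 0.5, 0.7]), 3))  # → 0.767 (less consistent)
```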
### Relational Affinity (RA) Metric
Evaluates preservation of relational structure across languages:
```python
from semanscope.components import calculate_relational_affinity
ra_score = calculate_relational_affinity(
model=model,
word_quadruples=[("king", "queen", "man", "woman")],
languages=["english", "chinese"],
metric="cosine"
)
```
**RA Formula** (Cosine):
```
rel_vec(w1, w2) = emb(w2) - emb(w1)
RA = cosine_similarity(rel_vec_lang1, rel_vec_lang2)
```
Higher RA (→1.0) = better relational structure preservation
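A minimal standalone sketch of the RA computation, with toy lists standing in for real embeddings of the word pair in each language:

```python
def cosine(u, v):
    """Cosine similarity between two vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm = lambda x: sum(a * a for a in x) ** 0.5
    return dot / (norm(u) * norm(v))

def relational_affinity(w1_l1, w2_l1, w1_l2, w2_l2):
    """RA = cosine(emb(w2) - emb(w1) in lang1, emb(w2) - emb(w1) in lang2)."""
    rel1 = [b - a for a, b in zip(w1_l1, w2_l1)]
    rel2 = [b - a for a, b in zip(w1_l2, w2_l2)]
    return cosine(rel1, rel2)

# parallel relation vectors → RA ≈ 1.0
print(relational_affinity([0.0, 0.0], [1.0, 1.0], [2.0, 2.0], [4.0, 4.0]))
```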
### Interactive UI Pages
1. **Settings** (0_🔧_Settings.py): Configure models, methods, cache
2. **Semanscope** (1_🧭_Semanscope.py): Main visualization interface
3. **Semanscope ECharts** (2_📊_Semanscope-ECharts.py): ECharts-based visualization
4. **Compare** (3_⚖️_Semanscope-Compare.py): Side-by-side model comparison
5. **Multilingual** (4_🌐_Semanscope-Multilingual.py): Multi-language visualization
6. **Zoom** (5_🔍_Semanscope-Zoom.py): Interactive zoom and exploration
7. **Semantic Affinity** (6_📐_Semantic_Affinity.py): SA metric calculator
8. **Relational Affinity** (6_🔗_Relational_Affinity.py): RA metric calculator
9. **Translator** (8_🌐_Translator.py): Translation utilities
10. **NSM Prime Words** (9_📝_NSM_Prime_Words.py): Natural Semantic Metalanguage
11. **Review Images** (9_🖼️_Review_Images.py): Visualization gallery
### Supported Models
**Open Source**:
- LaBSE (Language-agnostic BERT Sentence Embedding)
- SONAR (Seamless Communication models)
- XLM-RoBERTa variants
- mBERT (Multilingual BERT)
- And 20+ more...
**API-based** (requires API keys):
- OpenAI (text-embedding-ada-002, text-embedding-3-small, etc.)
- Voyage AI (voyage-multilingual-2, voyage-code-2)
- Google Gemini (text-embedding-004)
- Ollama (local models)
See `semanscope/config.py` for complete model catalog.
### Dimensionality Reduction Methods
- **UMAP**: Uniform Manifold Approximation and Projection
- **PHATE**: Potential of Heat-diffusion for Affinity-based Transition Embedding
- **t-SNE**: t-Distributed Stochastic Neighbor Embedding
- **PaCMAP**: Pairwise Controlled Manifold Approximation
- **TriMap**: Triplet-based dimensionality reduction
- **PCA**: Principal Component Analysis
## Datasets
Semanscope includes 60+ representative datasets across 7 categories:
- **ACL-0**: Chinese morphology (Zinets, Radicals)
- **ACL-1**: Alphabets (15+ languages)
- **ACL-2**: PeterG vocabulary (semantic primes)
- **ACL-3**: Morphological networks
- **ACL-4**: Semantic categories (numbers, emotions, animals)
- **ACL-5**: Poetry corpora (Li Bai, Du Fu, Frost, Wordsworth)
- **ACL-6**: Visual semantics (emoji, pictographs)
- **NeurIPS-01 to NeurIPS-11**: Research benchmarks for SA/RA metrics
See `data/input/README.md` for complete dataset documentation.
## Documentation
- **[Usage Guide](docs/USAGE.md)**: Detailed usage instructions
- **[API Reference](docs/API.md)**: Python API documentation
- **[Troubleshooting](docs/TROUBLESHOOTING.md)**: Common issues and solutions
- **[GPU Setup](docs/GPU_SETUP.md)**: CUDA configuration for acceleration
## Architecture
```
semanscope/
├── semanscope/ # Core Python package
│ ├── components/ # Analysis components (SA, RA, viz)
│ ├── models/ # Model managers and integrations
│ ├── utils/ # Utilities (caching, text processing)
│ ├── services/ # External API integrations
│ └── cli/ # Command-line tools
├── ui/ # Streamlit UI
├── data/ # Datasets and visualizations
├── tests/ # Test suite
├── demo/ # Usage examples
├── scripts/ # Utility scripts
└── docs/ # Documentation
```
## Development
```bash
# Install development dependencies
pip install -e ".[dev]"
# Run tests
pytest tests/ -v
# Run specific test
pytest tests/test_semantic_affinity.py -v
# Code formatting
black semanscope/ ui/ tests/
ruff check semanscope/ ui/
```
## Configuration
Create a `.env` file for API keys and settings:
```bash
# Copy example configuration
cp .env.example .env
# Edit with your API keys
OPENROUTER_API_KEY=your_key_here
VOYAGE_API_KEY=your_key_here
GOOGLE_API_KEY=your_key_here
```
## Performance Tips
1. **Use GPU**: Set `CUDA_VISIBLE_DEVICES=0` for GPU acceleration
2. **Enable caching**: Embeddings are cached automatically to `~/projects/embedding_cache/`
3. **Batch processing**: Use CLI tools for large-scale benchmarking
4. **Model selection**: Start with smaller models (LaBSE, mBERT) for exploration
## Citation
If you use Semanscope in your research, please cite:
```bibtex
@software{semanscope2026,
title={Semanscope: Multilingual Semantic Embedding Visualization Toolkit},
author={Semanscope Contributors},
year={2026},
url={https://github.com/semanscope/semanscope}
}
```
## License
This project is licensed under the MIT License - see the [LICENSE](LICENSE) file for details.
## Contributing
Contributions are welcome! Please:
1. Fork the repository
2. Create a feature branch (`git checkout -b feature/amazing-feature`)
3. Commit your changes (`git commit -m 'Add amazing feature'`)
4. Push to the branch (`git push origin feature/amazing-feature`)
5. Open a Pull Request
## Acknowledgments
- **Language Models**: Thanks to Google (LaBSE), Meta (SONAR), and the open-source community
- **Dimensionality Reduction**: UMAP, PHATE, t-SNE, PaCMAP, TriMap libraries
- **Visualization**: Plotly, Streamlit, ECharts
- **Datasets**: Computational linguistics research community
## Support
- **Documentation**: [GitHub Wiki](https://github.com/semanscope/semanscope/wiki)
- **Issues**: [GitHub Issues](https://github.com/semanscope/semanscope/issues)
- **Discussions**: [GitHub Discussions](https://github.com/semanscope/semanscope/discussions)
## Roadmap
- [ ] PyPI publication
- [ ] Additional embedding models (Cohere, Anthropic)
- [ ] Enhanced visualization options
- [ ] Expanded benchmark datasets
- [ ] Interactive tutorials and examples
- [ ] Web deployment (Streamlit Cloud)
---
**Built with ❤️ for the multilingual NLP community**
| text/markdown | null | Digital Duck <p2p2learn@outlook.com> | null | null | null | embeddings, multilingual, visualization, NLP, semantics, semantic-affinity, relational-affinity | [
"Development Status :: 4 - Beta",
"Intended Audience :: Science/Research",
"Programming Language :: Python :: 3.9",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Topic :: Scientific/Engineering :: Artificial Intelligence",
... | [] | null | null | >=3.9 | [] | [] | [] | [
"numpy<2.0.0,>=1.21.0",
"pandas<3.0.0,>=2.1.4",
"torch>=2.0.0",
"transformers>=4.30.0",
"sentence-transformers>=2.2.0",
"scikit-learn>=1.0.0",
"scipy>=1.7.0",
"umap-learn>=0.5.3",
"phate>=1.0.7",
"trimap>=1.1.4",
"pacmap>=0.8.0",
"python-igraph>=0.11.9",
"networkx>=2.6.0",
"joblib>=1.0.0",... | [] | [] | [] | [
"Homepage, https://github.com/semanscope/semanscope",
"Documentation, https://github.com/semanscope/semanscope#readme",
"Repository, https://github.com/semanscope/semanscope",
"Issues, https://github.com/semanscope/semanscope/issues"
] | twine/6.2.0 CPython/3.11.5 | 2026-02-19T02:57:11.916347 | semanscope-1.0.1.tar.gz | 228,322 | 6c/05/d540fb671c7da0edce2d8a52e6a5c02e781e9e0a31f48b7c49abd91383cc/semanscope-1.0.1.tar.gz | source | sdist | null | false | 3c95e9ce69190f6a53158a25fcd9f03b | 77edfac76ef6fce5579d16e44e77cc6995e83c77f2e7441566cfef099c7a6565 | 6c05d540fb671c7da0edce2d8a52e6a5c02e781e9e0a31f48b7c49abd91383cc | MIT | [
"LICENSE"
] | 231 |
2.4 | maniscope | 1.1.1 | Efficient neural reranking via geodesic distances on k-NN manifolds | # Maniscope: A Novel RAG Reranker via Geodesic Distances on k-NN Manifolds
[](https://opensource.org/licenses/MIT)
[](https://www.python.org/downloads/)
**Maniscope** is a lightweight geometric reranking method that leverages geodesic distances on k-nearest neighbor manifolds for efficient and accurate information retrieval. It combines global cosine similarity (telescope) with local manifold geometry (microscope) to achieve state-of-the-art retrieval quality with sub-20ms latency.
## Key Features
- **🚀 Ultra-Fast**: 3.2× faster than HNSW, 10-45× faster than cross-encoder rerankers, sub-10ms latency
- **🎯 Accurate**: MRR 0.9642 on 8 BEIR benchmarks (1,233 queries), within 2% of best cross-encoder
- **💡 Efficient**: 4.7ms average latency, outperforms HNSW on hardest datasets (NFCorpus: +7.0%, TREC-COVID: +1.6%, AorB: +2.8% NDCG@3)
- **🌍 Practical**: Achieves near-theoretical-maximum accuracy (within 1.8% of LLM-Reranker) at 420× faster speed
- **📊 Robust**: Handles disconnected graph components gracefully via hybrid scoring, CUDA error resilience with CPU fallback
- **🔧 Comprehensive**: 14 priority-ranked embedding models, 32+ validated LLM models, 5 reranker types, custom dataset support
- **💾 Smart Caching**: GPU acceleration with automatic CPU fallback, persistent embedding cache, environment variable controls
- **📱 Interactive**: Full-featured Streamlit app with enhanced data management, real-time benchmarking, and end-to-end RAG evaluation
- **🛡️ Production-Ready**: Robust error handling, automatic fallback mechanisms, comprehensive troubleshooting support
## Demo
Real-world benchmark on AorB dataset (10 queries, warm cache):

**Highlights:**
- ✅ **Maniscope: 8.2ms** - Fastest reranker with perfect accuracy (MRR=1.0)
- ✅ **13× faster than Jina v2** (177.7ms), **134× faster than BGE-M3** (1100.2ms), **273× faster than LLM** (2235.9ms)
- ✅ **All methods achieve perfect rankings** (MRR=1.0, NDCG@3=1.0) - Maniscope matches SOTA accuracy at fraction of latency
## Quick Start
### Installation
```bash
conda create -n maniscope python=3.11 -y
conda activate maniscope
```
Install from source:
```bash
git clone https://github.com/digital-duck/maniscope.git
cd maniscope
pip install -e .
```
Or install from PyPI (coming soon):
```bash
pip install maniscope
```
**Optional: GPU Troubleshooting Setup**
If you encounter CUDA errors, set up CPU fallback:
```bash
# For persistent CUDA issues, force CPU mode
export MANISCOPE_FORCE_CPU=true
# Or add to your ~/.bashrc or ~/.zshrc for permanent setting
echo 'export MANISCOPE_FORCE_CPU=true' >> ~/.bashrc
```
### Option 1: Streamlit App (Recommended)
Launch the Streamlit evaluation interface to benchmark and visualize results:
```bash
# Launch the app
streamlit run ui/Maniscope.py
# or
python run_app.py
```
The app provides:
- 📊 **Benchmark Suite**: 8 BEIR datasets + custom dataset support (PDF import, MTEB format)
- ⚡ **Multi-Model Support**: 14 embedding models (priority-ranked), 32+ validated LLM models, 5 reranker types
- 📈 **Analytics Dashboard**: MRR, NDCG@K, MAP, latency analysis with real-time comparison charts
- 🎯 **RAG Evaluation**: End-to-end RAG pipeline testing with LLM answer generation and scoring
- 🔬 **Query-Level Analysis**: Deep-dive evaluation with document inspection and ranking comparison
- 💾 **Enhanced Data Manager**: Upload datasets, import PDFs, view/manage existing custom datasets with one-click loading
- ⚙️ **Smart Configuration**: Priority-based model selection, parameter tuning, detailed model metadata, optimization levels
- 🔄 **Dataset Switching**: Toggle between BEIR benchmarks and custom datasets with smart terminology (PDF imports vs standard datasets)
- 💡 **Robust Computing**: Automatic GPU/CPU detection with fallback, persistent embedding cache, CUDA error handling
- 🎚️ **Advanced Controls**: Environment variable support (`MANISCOPE_FORCE_CPU`), reduced logging verbosity, session state management
### Option 2: Python API
```python
from maniscope import ManiscopeEngine_v2o
# Initialize engine (v2o: Ultimate optimization - 13.2× speedup)
engine = ManiscopeEngine_v2o(
model_name='all-MiniLM-L6-v2', # Priority 0: fastest (22M params)
# Alternative models:
# model_name='Qwen/Qwen3-Embedding-0.6B', # Priority 2: SOTA 2025
# model_name='google/embeddinggemma-300m', # Priority 2: Google's latest
# model_name='BAAI/bge-m3', # Priority 4: multi-functionality
k=5, # Number of nearest neighbors
alpha=0.5, # Hybrid scoring weight
device=None, # Auto-detect GPU with CPU fallback
use_cache=True, # Enable persistent disk cache
verbose=True
)
# Fit on document corpus
documents = [
"Python is a programming language",
"Python is a type of snake",
"Machine learning uses Python",
# ... more documents
]
engine.fit(documents)
# Search with Maniscope (telescope + microscope)
results = engine.search("What is Python?", top_n=5)
for doc, score, idx in results:
print(f"[{score:.3f}] {doc}")
# Compare with baseline cosine similarity
comparison = engine.compare_methods("What is Python?", top_n=5)
print(f"Ranking changed: {comparison['ranking_changed']}")
```
## How It Works
Maniscope uses a two-stage retrieval architecture:
### 1. **Telescope** (Global Retrieval)
Broad retrieval using cosine similarity to get top candidates
### 2. **Microscope** (Local Refinement)
Geodesic reranking on k-NN manifold graph:
- Build k-nearest neighbor graph from document embeddings
- Compute geodesic distances on this manifold
- Hybrid scoring: `α × cosine + (1-α) × geodesic`
**Key Insight**: Local manifold structure captures semantic relationships better than global Euclidean distances.
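To make the two-stage idea concrete, here is a toy, self-contained sketch: geodesic distances come from Dijkstra's algorithm on the k-NN graph, and the hybrid score blends them with cosine similarity. The graph and similarity values are hypothetical, and the conversion of geodesic distance to a similarity (`1 - distance`) is an illustrative choice — the engine's exact normalization may differ:

```python
import heapq

def dijkstra(graph, src):
    """Shortest-path (geodesic) distances from `src` on a weighted k-NN graph."""
    dist = {node: float("inf") for node in graph}
    dist[src] = 0.0
    heap = [(0.0, src)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist[u]:
            continue
        for v, w in graph[u].items():
            nd = d + w
            if nd < dist[v]:
                dist[v] = nd
                heapq.heappush(heap, (nd, v))
    return dist

# toy k-NN graph: edge weights are (1 - cosine similarity) between neighbors
knn_graph = {
    "q":  {"d1": 0.1, "d2": 0.4},
    "d1": {"q": 0.1, "d3": 0.1},
    "d2": {"q": 0.4},
    "d3": {"d1": 0.1},
}
cosine_sim = {"d1": 0.9, "d2": 0.6, "d3": 0.5}  # toy global similarities to the query

geo = dijkstra(knn_graph, "q")
alpha = 0.5
# hybrid score: alpha * cosine + (1 - alpha) * geodesic similarity
scores = {d: alpha * cosine_sim[d] + (1 - alpha) * (1 - geo[d]) for d in cosine_sim}
print(sorted(scores, key=scores.get, reverse=True))  # → ['d1', 'd3', 'd2']
```

Note that cosine similarity alone would rank `d2` above `d3`; the geodesic term promotes `d3`, which is close to the query on the manifold through `d1`.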
## Supported Models & Components
### 📊 Embedding Models (14 Total)
Maniscope supports comprehensive embedding models from lightweight to SOTA, organized by priority:
| Priority | Category | Models | Description |
|----------|----------|--------|-------------|
| **⚡ 0** | **Fastest** | all-MiniLM-L6-v2 | 22M params, lightning-fast inference |
| **🌍 1** | **Multilingual** | Sentence-BERT, LaBSE, E5-Instruct | 50-109 languages, production-ready |
| **🚀 2** | **SOTA 2025** | Qwen3-0.6B, EmbeddingGemma-300M, E5-Base-v2 | Current state-of-the-art models |
| **🔬 3** | **Research** | mBERT, DistilBERT, XLM-RoBERTa | Specialized research baselines |
| **💎 4** | **Advanced** | E5-Large, BGE-M3 | 560M+ params, maximum accuracy |
**Key Models:**
- `all-MiniLM-L6-v2` (22M) - Default, fastest
- `Qwen/Qwen3-Embedding-0.6B` (600M) - MTEB #1 series
- `google/embeddinggemma-300m` (300M) - Google's latest
- `BAAI/bge-m3` (568M) - Multi-functionality leader
- `sentence-transformers/paraphrase-multilingual-mpnet-base-v2` (278M) - Proven baseline
### 🤖 LLM Models (32+ Total)
**OpenRouter Models** (Validated & Sorted):
- **Anthropic**: Claude 3/3.5 (Haiku, Sonnet, Opus)
- **OpenAI**: GPT-3.5-turbo, GPT-4, GPT-4o, GPT-4o-mini
- **Google**: Gemini 2.0 Flash, Gemini Flash/Pro 1.5
- **Meta**: Llama 3.1/3.2 (8B, 70B), with free tier options
- **Others**: Cohere Command-R, DeepSeek, Mistral, Qwen 2.5, Perplexity
**Ollama Models** (Local):
- `llama3.1:latest`, `deepseek-r1:7b`, `qwen2.5:latest`
### 🔧 Rerankers
| Reranker | Type | Latency | Accuracy | Best For |
|----------|------|---------|----------|----------|
| **Maniscope v2o** | Geometric | **4.7ms** | MRR 0.964 | Production RAG |
| **HNSW** | Graph-based | 14.8ms | MRR 0.965 | Large-scale search |
| **Jina Reranker v2** | Cross-encoder | 47ms | MRR 0.975 | High accuracy |
| **BGE-M3** | Cross-encoder | 210ms | MRR 0.963 | Multilingual |
| **LLM Reranker** | Generative | 4400ms | MRR 0.978 | Research/upper bound |
### 📚 Datasets
**Built-in BEIR Benchmarks** (8 datasets, 1,233 queries):
| Dataset | Queries | Domain | Difficulty | Maniscope vs HNSW |
|---------|---------|--------|------------|-------------------|
| **NFCorpus** | 323 | Medical | Hard | **+7.0% NDCG@3** ✅ |
| **TREC-COVID** | 50 | Biomedical | Hard | **+1.6% NDCG@3** ✅ |
| **AorB** | 50 | Disambiguation | Hard | **+2.8% NDCG@3** ✅ |
| **SciFact** | 100 | Scientific | Medium | -0.5% NDCG@3 |
| **FiQA** | 100 | Financial | Medium | Tied |
| **MS MARCO** | 200 | Web Search | Easy | Tied |
| **ArguAna** | 100 | Argumentation | Easy | -0.6% NDCG@3 |
| **FEVER** | 200 | Fact Checking | Easy | Tied |
**Custom Dataset Support:**
- **📁 MTEB Format**: Upload JSON datasets with query/docs/relevance structure
- **📄 PDF Import**: Convert research papers to searchable datasets
- Section-based chunking with overlap
- Figure/table caption extraction
- Custom query mode (no ground truth required)
- **🔧 File Detection**: Auto-detect datasets in `data/custom/` directory
**Dataset Formats Supported:**
```json
// MTEB Format
[{
  "query": "search query",
  "docs": ["doc1", "doc2", "..."],
  "relevance_map": {"0": 1, "1": 0, "...": 0},
  "query_id": "q1",
  "num_docs": 10
}]
// PDF Import Result
[{
  "query": "",  // Empty for custom query mode
  "docs": ["## Section 1\n\nContent...", "## Section 2\n\n..."],
  "relevance_map": {},
  "metadata": {
    "source": "pdf_import",
    "pdf_filename": "paper.pdf",
    "num_chunks": 75
  }
}]
```
Each dataset includes quick test versions (`*-10.json`) with 10 queries for rapid prototyping.
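A minimal loader for MTEB-format files like the ones above might look as follows (`load_mteb_dataset` is a hypothetical helper for illustration, not part of the package API):

```python
import json
from pathlib import Path

def load_mteb_dataset(path):
    """Load an MTEB-format dataset file and sanity-check each record."""
    records = json.loads(Path(path).read_text(encoding="utf-8"))
    for i, rec in enumerate(records):
        # Every record needs at least these three keys to be evaluable
        missing = {"query", "docs", "relevance_map"} - rec.keys()
        if missing:
            raise ValueError(f"record {i} missing fields: {sorted(missing)}")
    return records
```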
## Data Management Features
### Enhanced Data Manager Interface
The Data Manager provides comprehensive dataset management capabilities:
**📂 Custom Dataset Management:**
- **View Existing Datasets**: Auto-detection of all custom datasets from `data/custom/` directory
- **One-Click Loading**: Load any dataset directly into evaluation interface
- **Rich Metadata Display**: View dataset details, number of queries, documents, and processing info
- **Smart Detection**: Automatically detects MTEB format vs PDF imports with appropriate terminology
**📄 PDF Processing:**
- **Intelligent Chunking**: Section-based chunking with configurable overlap
- **Content Extraction**: Figure/table captions, metadata preservation
- **Error Resilience**: Robust error handling with detailed logging
- **Auto-JSON Export**: Converts PDFs to MTEB-compatible JSON format
**🔄 Dataset Switching:**
- **Toggle Interface**: Checkbox to switch between BEIR benchmarks and custom datasets
- **Context-Aware Display**: Different UI terminology for PDF imports (1 dataset, many documents) vs standard datasets (many queries)
- **Session Persistence**: Maintains dataset selection across app sessions
## Advanced Configuration & Troubleshooting
### GPU/CPU Optimization
Maniscope automatically detects and uses GPU when available, with intelligent fallback to CPU:
```python
# Automatic GPU detection with CPU fallback
engine = ManiscopeEngine_v2o(
    model_name='all-MiniLM-L6-v2',
    device=None,     # Auto-detect: GPU if available, else CPU
    use_faiss=True   # Enable GPU-accelerated k-NN when possible
)
# Force CPU mode (if CUDA issues persist)
import os
os.environ['MANISCOPE_FORCE_CPU'] = 'true'
# Or set environment variable: export MANISCOPE_FORCE_CPU=true
```
**CUDA Troubleshooting:**
- GPU memory errors automatically trigger CPU fallback
- Set `MANISCOPE_FORCE_CPU=true` environment variable for persistent CUDA issues
- Reduced docling logging verbosity for cleaner output during PDF processing
### Optimization Versions
Maniscope provides multiple optimization levels for different use cases:
| Version | Description | Speedup | Best For |
|---------|-------------|---------|----------|
| **v0** | Baseline (CPU, no cache) | 1.0× | Reference |
| **v1** | Efficient k-NN construction | 17.8× | Early optimization |
| **v2** | Heap-based Dijkstra | 22.0× | Reduced overhead |
| **v2o** | 🌟 **RECOMMENDED** - SciPy optimized | **13.2×** | Production |
| **v3** | Persistent cache + query LRU | Variable | Repeated experiments |
**v2o Performance (Real-World Results on 8 BEIR datasets, 1,233 queries):**
- Average latency: 4.7ms (3.2× faster than HNSW at 14.8ms)
- Outperforms HNSW on hardest datasets (NFCorpus, TREC-COVID, AorB)
- 10-45× faster than cross-encoder rerankers
- Within 2% of best cross-encoder accuracy (Jina v2)
#### Using Optimized Versions
```python
# v2o: Ultimate optimization (recommended)
from maniscope import ManiscopeEngine_v2o
engine = ManiscopeEngine_v2o(
    k=5, alpha=0.5,
    device=None,     # Auto-detect GPU
    use_cache=True,  # Persistent disk cache
    use_faiss=True   # GPU-accelerated k-NN
)
# v3: CPU-friendly with caching
from maniscope import ManiscopeEngine_v3
engine = ManiscopeEngine_v3(k=5, alpha=0.5, use_cache=True)
# v2: Fast cold-cache performance
from maniscope import ManiscopeEngine_v2
engine = ManiscopeEngine_v2(k=5, alpha=0.5, use_faiss=True)
# v1: Simple GPU acceleration
from maniscope import ManiscopeEngine_v1
engine = ManiscopeEngine_v1(k=5, alpha=0.5)
# v0: Baseline
from maniscope import ManiscopeEngine
engine = ManiscopeEngine(k=5, alpha=0.5)
```
### Embedding Cache
Maniscope automatically caches document embeddings to disk to avoid recomputation. This is especially valuable when:
- Testing different `k` and `alpha` parameters on the same corpus
- Re-running experiments after code changes
- Benchmarking multiple rerankers on the same dataset
```python
engine = ManiscopeEngine_v2o(
    model_name='all-MiniLM-L6-v2',
    k=5,
    alpha=0.5,
    cache_dir='~/projects/embedding_cache/maniscope',  # Custom cache location
    use_cache=True,       # Enable persistent disk cache
    query_cache_size=100  # LRU cache for 100 queries
)
```
**Cache behavior:**
- Cache files are stored in `cache_dir` (default: `~/projects/embedding_cache/maniscope`)
- Cache key is computed from document content + model name
- Embeddings are automatically loaded from cache if available
- Query LRU cache stores recent query embeddings in memory
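The content-addressed cache key can be illustrated with a short sketch (the exact derivation inside Maniscope may differ; the point is that changing any document or the model name yields a different key):

```python
import hashlib

def embedding_cache_key(docs, model_name):
    """Illustrative content-addressed cache key (not Maniscope's exact scheme)."""
    h = hashlib.sha256(model_name.encode("utf-8"))
    for doc in docs:
        h.update(b"\x00")             # separator so ["ab"] != ["a", "b"]
        h.update(doc.encode("utf-8"))
    return h.hexdigest()
```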
**Benefits:**
- Avoid expensive re-encoding when testing different parameters
- Faster iteration during development
- Reduced computation time for batch benchmarking
- Query cache provides instant response for repeated queries
## Evaluation & Analytics
### Comprehensive RAG Evaluation
Maniscope provides end-to-end RAG pipeline evaluation with detailed analytics:
**📊 Retrieval Metrics**
- **MRR (Mean Reciprocal Rank)**: Primary ranking quality metric
- **NDCG@K (Normalized Discounted Cumulative Gain)**: Ranking quality with position weighting
- **MAP (Mean Average Precision)**: Precision across all relevant documents
- **Latency Analysis**: Real-time performance measurement with percentile statistics
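For reference, the first two metrics can be computed with a few lines of plain Python (a sketch, not Maniscope's implementation):

```python
import math

def mrr(ranked_relevance):
    """Mean Reciprocal Rank over a list of per-query relevance lists."""
    total = 0.0
    for rels in ranked_relevance:
        for rank, rel in enumerate(rels, start=1):
            if rel > 0:
                total += 1.0 / rank  # reciprocal rank of first relevant doc
                break
    return total / len(ranked_relevance)

def ndcg_at_k(rels, k):
    """NDCG@k for a single ranked list of graded relevance labels."""
    dcg = sum(rel / math.log2(i + 2) for i, rel in enumerate(rels[:k]))
    ideal = sorted(rels, reverse=True)
    idcg = sum(rel / math.log2(i + 2) for i, rel in enumerate(ideal[:k]))
    return dcg / idcg if idcg > 0 else 0.0
```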
**🎯 RAG Pipeline Evaluation**
- **Answer Generation**: LLM-powered answer generation from retrieved documents
- **Answer Quality Scoring**: Automated scoring using configurable LLM models
- **Query-Level Analysis**: Detailed per-query breakdown with document ranking comparison
- **Method Comparison**: Side-by-side comparison of multiple rerankers
**📈 Real-Time Analytics**
- **Performance Dashboard**: Live charts showing MRR, NDCG, and latency trends
- **Model Comparison**: Benchmark multiple embedding models, rerankers, and LLMs
- **Interactive Filtering**: Filter results by dataset, model, or performance thresholds
- **Export Capabilities**: Save results for further analysis and reporting
## Troubleshooting
### Common Issues and Solutions
**🔧 CUDA/GPU Errors**
```bash
# Error: CUDA launch failure or GPU memory issues
# Solution: Enable CPU fallback mode
export MANISCOPE_FORCE_CPU=true
streamlit run ui/Maniscope.py
```
**📄 PDF Processing Fails**
```bash
# Error: "Object of type method is not JSON serializable"
# Solution: Ensure complete dataset JSON files
# Check data/custom/*.json for proper formatting with closing braces
```
**📂 Custom Datasets Not Appearing**
- Ensure datasets are in `data/custom/` directory
- Use the checkbox "📂 Use Custom Dataset" in Eval ReRanker page
- Verify JSON format matches MTEB structure
**⚡ Model Loading Issues**
```python
# Error: Model not found or authentication issues
# Solution: Check model availability and API keys
# OpenRouter models require OPENROUTER_API_KEY
# Ollama models require local ollama installation
```
**💾 Cache Issues**
```bash
# Clear embedding cache if needed
rm -rf ~/projects/embedding_cache/maniscope
# Or use custom cache directory in engine initialization
```
## Cleanup (optional - after evaluation)
```bash
conda env remove -n maniscope
```
## Citation
If you use Maniscope in your research, please cite:
```bibtex
@inproceedings{gong2026maniscope,
  title={A Novel RAG Reranker via Geodesic Distances on k-NN Manifolds},
  author={Gong, Wen G.},
  booktitle={International Conference on Machine Learning (ICML)},
  year={2026}
}
```
## License
MIT License - see LICENSE file for details.
---
**"Look closer to see farther"** — The Maniscope philosophy
| text/markdown | Wen G. Gong, Albert Gong | Digital Duck <p2p2learn@outlook.com> | null | null | null | information-retrieval, reranking, manifold-learning, geodesic-distance, neural-search, rag | [
"Development Status :: 4 - Beta",
"Intended Audience :: Science/Research",
"Intended Audience :: Developers",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.8",
"Programming Language :: Python :: 3.9",
"Programming Language :: Python :: 3.10",
"Programming Language :: Pyth... | [] | https://github.com/digital-duck/maniscope | null | >=3.8 | [] | [] | [] | [
"numpy>=1.21.0",
"networkx>=2.6.0",
"scikit-learn>=1.0.0",
"sentence-transformers>=2.2.0",
"torch>=1.10.0",
"pytest>=7.0.0; extra == \"dev\"",
"pytest-cov>=3.0.0; extra == \"dev\"",
"black>=22.0.0; extra == \"dev\"",
"flake8>=4.0.0; extra == \"dev\"",
"mypy>=0.950; extra == \"dev\""
] | [] | [] | [] | [
"Homepage, https://github.com/digital-duck/maniscope",
"Documentation, https://maniscope.readthedocs.io",
"Repository, https://github.com/digital-duck/maniscope.git",
"Bug Tracker, https://github.com/digital-duck/maniscope/issues"
] | twine/6.2.0 CPython/3.11.5 | 2026-02-19T02:57:08.981740 | maniscope-1.1.1.tar.gz | 65,274 | 8a/b4/063a0956d4e9c64a6b9c6ce557c4fa1035f56ceb2f2ba06a2376f8883c6c/maniscope-1.1.1.tar.gz | source | sdist | null | false | e8caf28cf11d51e694b8cc9ec0de37fc | 84e86ec72743055579dbe623b4f01587f70583413a54bf53bf91b4bf6533c666 | 8ab4063a0956d4e9c64a6b9c6ce557c4fa1035f56ceb2f2ba06a2376f8883c6c | MIT | [
"LICENSE"
] | 233 |
2.4 | orascope | 0.0.2 | Computational paleography engine for Chinese script evolution | # 🔭 Orascope (Code: ORASCOPE)
**"A Hubble Telescope for Ancient Chinese Civilization"**
## Overview
Orascope is a computational paleography engine built to reverse-engineer Chinese script evolution. It bridges the "Modality Gap" between Oracle Bone Script (OBS) imagery and modern semantic concepts using manifold learning.
## The Instrument Suite
Orascope integrates three core "lenses":
* **`maniscope`**: RAG-based retrieval and evidence grounding.
* **`semanscope`**: Geometric visualization of semantic manifolds.
* **`orascope`**: The evolution tracker and cross-modal alignment engine.
## Core Principles
* **Non-Euclidean:** Uses Diffusion Geometry and Geodesic distances.
* **Topology-First:** Prioritizes stroke connectivity and branching over raw pixel similarity.
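As a rough illustration of what a "topology-first" comparison could mean (a hypothetical sketch using networkx; orascope's actual glyph representation is not documented here):

```python
import networkx as nx

def topology_signature(stroke_graph):
    """(#endpoints, #branch points, #independent cycles) of a glyph skeleton.

    Hypothetical sketch: compares glyphs by stroke connectivity and
    branching rather than raw pixel similarity.
    """
    degrees = [d for _, d in stroke_graph.degree()]
    endpoints = sum(1 for d in degrees if d == 1)
    branch_points = sum(1 for d in degrees if d >= 3)
    # Cyclomatic number: E - V + number of connected components
    cycles = (stroke_graph.number_of_edges() - stroke_graph.number_of_nodes()
              + nx.number_connected_components(stroke_graph))
    return endpoints, branch_points, cycles
```

Two glyphs with very different pixel renderings but the same stroke topology would share a signature under this scheme.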
## References
- https://gemini.google.com/app/a9c9d1ade606fa50
- https://gemini.google.com/share/9bb45103f1ec
| text/markdown | null | Digital Duck <p2p2learn@outlook.com> | null | null | null | paleography, oracle-bone-script, chinese, manifold-learning, computational-linguistics | [
"Development Status :: 2 - Pre-Alpha",
"Intended Audience :: Science/Research",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Topic :: Scientific/Engineering"
] | [] | null | null | >=3.10 | [] | [] | [] | [] | [] | [] | [] | [
"Homepage, https://github.com/digital-duck/orascope",
"Repository, https://github.com/digital-duck/orascope"
] | twine/6.2.0 CPython/3.11.5 | 2026-02-19T02:57:05.822317 | orascope-0.0.2.tar.gz | 15,729 | 96/3c/7a9acdb02d68b8c725270f66fa514907b5e5bebd345fe160f4c4e86a8b48/orascope-0.0.2.tar.gz | source | sdist | null | false | 91eed37294cf4f4753f970102b27a7a9 | cfd99205e026a47b61c405581ffea043df531edc5236e8c71da91e998e7a7ff0 | 963c7a9acdb02d68b8c725270f66fa514907b5e5bebd345fe160f4c4e86a8b48 | GPL-3.0-only | [
"LICENSE"
] | 226 |
2.4 | monteflow | 0.0.2 | Stochastic workflow simulation engine with transition matrices and path integral analysis | # monteflow
An agentic workflow engine built on Markoflow.
| text/markdown | null | Digital Duck <p2p2learn@outlook.com> | null | null | null | workflow, monte-carlo, stochastic, simulation, markov-chain, llm | [
"Development Status :: 2 - Pre-Alpha",
"Intended Audience :: Developers",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Topic :: Scientific/Engineering"
] | [] | null | null | >=3.10 | [] | [] | [] | [] | [] | [] | [] | [
"Homepage, https://github.com/digital-duck/monteflow",
"Repository, https://github.com/digital-duck/monteflow"
] | twine/6.2.0 CPython/3.11.5 | 2026-02-19T02:57:03.016565 | monteflow-0.0.2.tar.gz | 2,235 | 70/45/50cc6b545ca0e27dbb5aa4bdc0569785b916ff2a2e3c53b8c0bcb09c4b8b/monteflow-0.0.2.tar.gz | source | sdist | null | false | 9c223824ba461c9cae5484f60f28cb0a | ca8fa372740a4afe3e04e937e25f5bdc7d7735d55417ca8ce48e4d4958d75e6d | 704550cc6b545ca0e27dbb5aa4bdc0569785b916ff2a2e3c53b8c0bcb09c4b8b | MIT | [
"LICENSE"
] | 230 |
2.4 | pocoflow | 0.2.1 | Lightweight LLM workflow orchestration — a hardened evolution of PocketFlow | # PocoFlow
> Lightweight LLM workflow orchestration.
> A hardened evolution of [PocketFlow](https://github.com/The-Pocket/PocketFlow).
Built with love by **Claude & digital-duck** 🦆
---
## What It Is
PocoFlow is a minimal framework for building LLM pipelines as **directed graphs of
nano-ETL nodes** communicating through a shared, typed Store.
It keeps PocketFlow's best idea — the `prep | exec | post` abstraction — and fixes
the weaknesses that surface in production:
| Weakness | PocoFlow fix |
|----------|-------------|
| Raw dict store — no type safety | `Store` with optional schema + `TypeError` on bad writes |
| Ambiguous `>>` edge API | Single clear API: `.then("action", next_node)` |
| No built-in async support | `AsyncNode.exec_async()` — framework handles `asyncio.run()` |
| No observability | Hook system: `node_start / node_end / node_error / flow_end` |
| No checkpointing | JSON snapshots + **SQLite backend** with full event log |
| No long-running support | `run_background()` → `RunHandle` with status, wait, cancel |
| Inconsistent logging | **dd-logging** integration — structured, file-backed, namespaced |
| No workflow visibility | **Streamlit monitor UI** — live runs table, timeline, store inspector |
**Dependencies:** [pocketflow](https://github.com/The-Pocket/PocketFlow) + [dd-logging](https://github.com/digital-duck/dd-logging)
---
## Install
```bash
# Core
pip install pocoflow
# With Streamlit monitor UI
pip install "pocoflow[ui]"
# Local dev (from the digital-duck monorepo)
pip install -e ~/projects/digital-duck/dd-logging
pip install -e ~/projects/digital-duck/pocoflow"[ui,dev]"
```
---
## Quick Start
```python
from pocoflow import Node, Flow, Store
class SummariseNode(Node):
    def prep(self, store):
        return store["document"]

    def exec(self, text):
        return llm.summarise(text)  # your LLM call here

    def post(self, store, prep, summary):
        store["summary"] = summary
        return "done"
store = Store({"document": "...", "summary": ""})
Flow(start=SummariseNode(), db_path="pocoflow.db", flow_name="summarise").run(store)
print(store["summary"])
```
Then open the monitor:
```bash
streamlit run pocoflow/ui/monitor.py -- pocoflow.db
```
---
## Core Concepts
### Node — nano-ETL
Every node is a three-phase processing unit that maps directly to **Extract → Transform → Load**:
```
prep(store)             → Extract:   read what this node needs from the store
exec(prep_result)       → Transform: do the work (pure — no store side-effects)
post(store, prep, exec) → Load:      write results back, return next action string
```
| Phase | ETL step | Purity |
|-------|----------|--------|
| `prep` | Extract | reads store |
| `exec` | Transform | pure function — retryable, testable without a store |
| `post` | Load + Route | writes store, returns action string |
```python
from pocoflow import Node
class CallLLMNode(Node):
    max_retries = 3    # retry exec() automatically on failure
    retry_delay = 1.0  # seconds between retries

    def prep(self, store):
        return store["prompt"]

    def exec(self, prompt):
        return llm.call(prompt)  # retried up to 3× on exception

    def post(self, store, prep, response):
        store["response"] = response
        return "done"
```
### Store — typed shared state
```python
from pocoflow import Store
store = Store(
    data={"query": "", "result": ""},
    schema={"query": str, "result": str},  # type-checked on every write
    name="my_pipeline",
)
store["query"] = "explain quantum entanglement"
store["query"] = 42 # ← raises TypeError immediately
# Observer: fired on every write (logging, tracing, UI updates)
store.add_observer(lambda key, old, new: print(f"{key}: {old!r} → {new!r}"))
# JSON snapshot / restore (lightweight backup)
store.snapshot("/tmp/run_42/step_002.json")
store2 = Store.restore("/tmp/run_42/step_002.json")
```
### Flow — directed graph with hooks
```python
from pocoflow import Flow, Store
# Wire nodes with unambiguous named edges
a.then("ok", b)
a.then("error", c)
a.then("*", fallback) # wildcard: matches any unhandled action
# Build with SQLite persistence
flow = Flow(
    start=a,
    flow_name="my_pipeline",     # label shown in the monitor UI
    db_path="pocoflow.db",       # SQLite: runs, events, checkpoints
    checkpoint_dir="/tmp/ckpt",  # also write JSON snapshots (optional)
    max_steps=50,                # guard against infinite loops
)
# Hooks — wire to any logger, metrics sink, or progress bar
flow.on("node_start", lambda name, store: print(f"▶ {name}"))
flow.on("node_end", lambda name, action, elapsed, store:
        print(f"✓ {name} → {action} ({elapsed:.2f}s)"))
flow.on("node_error", lambda name, exc, store: alert(name, exc))
flow.on("flow_end", lambda steps, store: print(f"Done in {steps} steps"))
store = Store({"query": "..."})
flow.run(store)
```
### AsyncNode — parallel sub-tasks
```python
from pocoflow import AsyncNode
import asyncio
class FetchNode(AsyncNode):
    def prep(self, store):
        return store["urls"]

    async def exec_async(self, urls):
        return await asyncio.gather(*[fetch(u) for u in urls])

    def post(self, store, prep, results):
        store["pages"] = results
        return "done"
```
Implement `exec_async()` — the framework calls it via `asyncio.run()`.
Use `asyncio.gather()` inside for true parallel sub-tasks.
---
## SQLite Backend
When `db_path` is set, every run is fully recorded in a SQLite database:
```
pf_runs        — one row per flow execution (run_id, status, timing, error)
pf_checkpoints — Store snapshot after every node (restorable at any step)
pf_events      — ordered event log (flow_start → node_start/end/error → flow_end)
```
```python
from pocoflow import WorkflowDB
db = WorkflowDB("pocoflow.db")
# List all runs
for run in db.list_runs():
    print(run["run_id"], run["status"], run["total_steps"])
# Inspect events for a run
for event in db.get_events("my_pipeline-3f9a1b2c"):
    print(event["event"], event["node_name"], event["elapsed_ms"])
# Restore Store from any checkpoint
store = db.load_checkpoint("my_pipeline-3f9a1b2c", step=2)
```
WAL mode is enabled so the Streamlit monitor can poll while a flow is running.
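Enabling WAL is a one-pragma affair in SQLite; a sketch of what `WorkflowDB` presumably does when opening the database (its internals are not shown here):

```python
import sqlite3

def open_wal(path="pocoflow.db"):
    """Open a SQLite connection with Write-Ahead Logging enabled.

    WAL lets one writer (a running flow) and many readers (the monitor UI)
    work concurrently; hypothetical sketch of WorkflowDB's connection setup.
    """
    conn = sqlite3.connect(path)
    mode = conn.execute("PRAGMA journal_mode=WAL;").fetchone()[0]
    assert mode == "wal", "WAL requires a file-backed database"
    return conn
```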
---
## Long-Running Workflows
For flows that take minutes or hours, use `run_background()` to avoid blocking:
```python
flow = Flow(start=my_node, db_path="pocoflow.db", flow_name="research")
# Returns immediately — flow runs in a daemon thread
handle = flow.run_background(store)
print(handle.run_id) # e.g. "research-3f9a1b2c"
print(handle.status) # "running" (reads live from SQLite)
# Block until done (optional timeout)
result = handle.wait(timeout=300)
print(handle.status) # "completed"
# Cooperative cancel — stops between nodes
handle.cancel()
```
### Resume after crash
```python
from pocoflow import WorkflowDB, Flow
db = WorkflowDB("pocoflow.db")
# Find the failed run
runs = [r for r in db.list_runs() if r["status"] == "failed"]
failed = runs[0]
# Restore store from the last successful checkpoint
checkpoints = db.get_checkpoints(failed["run_id"])
last = checkpoints[-1]
store = db.load_checkpoint(failed["run_id"], step=last["step"])
# Resume from the node after the last checkpoint
flow = Flow(start=my_flow_start, db_path="pocoflow.db")
flow.run(store, resume_from=node_after_crash)
```
---
## Streamlit Monitor UI
Visualise and manage all workflow runs from a browser.
**Standalone:**
```bash
streamlit run pocoflow/ui/monitor.py -- pocoflow.db
```
**Embedded in any Streamlit page:**
```python
from pocoflow.ui.monitor import render_workflow_monitor
render_workflow_monitor("pocoflow.db")
```
Features:
- **Runs table** — run ID, flow name, status badge (✅ 🔄 ❌), started time, duration, step count
- **Auto-refresh** — toggle on with 5 / 10 / 30 s intervals; updates live while flows run
- **Timeline tab** — ordered event log per run: node names, actions, per-node latency (ms), errors
- **Store Inspector tab** — step slider to view the Store state at any checkpoint as a key/value table + raw JSON
- **Resume tab** — generates a ready-to-paste Python code snippet for resuming from the selected checkpoint
---
## Logging
PocoFlow uses [dd-logging](https://github.com/digital-duck/dd-logging) for structured,
namespaced, file-backed log output.
```python
from pocoflow.logging import setup_logging, get_logger
# Set up once at app start (e.g. in CLI entry point or Streamlit cache_resource)
log_path = setup_logging("run", log_level="debug", adapter="openrouter")
# → logs/run-openrouter-20260217-143022.log
# In any module
_log = get_logger("nodes.summarise") # → pocoflow.nodes.summarise
_log.info("summarising len=%d", len(text))
```
Logger hierarchy:
```
pocoflow
├── pocoflow.store
├── pocoflow.node
├── pocoflow.flow
├── pocoflow.db
└── pocoflow.runner
```
---
## Migrating from PocketFlow
```python
# Before
from pocketflow import Node, Flow
node_a >> node_b # creates "default" edge — causes UserWarning
node_a - "action" >> node_b # named edge (correct but inconsistent)
shared = {} # raw dict — no type safety
# After
from pocoflow import Node, Flow, Store
node_a.then("action", node_b) # single unambiguous API, always
shared = Store(data=shared_dict) # typed, observable, checkpointable
flow.run(shared) # plain dict also accepted for backward compat
```
---
## Project Layout
```
pocoflow/
  __init__.py — public API: Store, Node, AsyncNode, Flow, WorkflowDB, RunHandle
  store.py    — typed, observable, JSON-checkpointable shared state
  node.py     — Node (sync) + AsyncNode (async) + retry
  flow.py     — directed graph runner: hooks, JSON + SQLite checkpoints, background
  db.py       — WorkflowDB: SQLite schema, CRUD for runs / checkpoints / events
  logging.py  — dd-logging wrapper (pocoflow.* namespace)
  runner.py   — RunHandle: status, wait, cancel
  ui/
    monitor.py — Streamlit workflow monitor (standalone + embeddable)
  examples/
    hello.py — minimal two-node flow with hooks
  tests/
    test_pocoflow.py — 25 tests: Store, Node, Flow, WorkflowDB, RunHandle
  docs/
    design.md — architecture, design decisions, migration guide
---
## Comparison with PocketFlow
| Feature | PocketFlow | PocoFlow v0.2 |
|---------|-----------|--------------|
| Core size | ~100 lines | ~600 lines |
| Shared state | raw dict | typed `Store` with schema |
| Edge API | `>>` and `- "action" >>` (confusing) | `.then("action", node)` only |
| Async nodes | manual `asyncio.run()` per node | `AsyncNode.exec_async()` |
| Observability | none | 4-event hook system |
| Checkpointing | none | JSON + SQLite (`WorkflowDB`) |
| Event log | none | `pf_events` table — full audit trail |
| Long-running | none | `run_background()` → `RunHandle` |
| Retry | none | `max_retries` + `retry_delay` on any Node |
| Wildcard edges | none | `.then("*", fallback)` |
| Logging | manual | dd-logging (`pocoflow.*` namespace) |
| Monitor UI | none | Streamlit monitor with auto-refresh |
| External deps | 0 | pocketflow + dd-logging (both stdlib-only) |
---
## Relationship to PocketFlow
PocoFlow is spiritually a child of PocketFlow. We kept:
- The `prep | exec | post` nano-ETL abstraction — beautiful and correct
- Zero vendor lock-in — bring your own LLM client
- No framework magic — every behaviour is traceable to code you can read in minutes
We added what production LLM workflows actually demand:
- Typed, observable, checkpointable `Store`
- Unambiguous `.then()` edge API (no more `UserWarning`)
- `AsyncNode` with `exec_async()`
- Hook system for pluggable observability
- SQLite backend — full audit log, queryable checkpoints, crash recovery
- `run_background()` for long-running agentic workflows
- Streamlit monitor — see every run, every node, every store state
- dd-logging — structured, file-backed, namespaced logs out of the box
**PocketFlow** stays listed as a dependency — as a nod to its inspiration and to ease
migration for projects already using it.
---
## License
MIT — see [LICENSE](LICENSE).
Copyright © 2026 digital-duck.
| text/markdown | null | Digital Duck <p2p2learn@outlook.com> | null | null | null | llm, workflow, orchestration, agent, pipeline, etl | [
"Development Status :: 3 - Alpha",
"Intended Audience :: Developers",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Topic :: Software Development :: Libraries"
] | [] | null | null | >=3.10 | [] | [] | [] | [
"pocketflow>=0.0.1",
"dd-logging>=0.1.0",
"openai>=1.0.0; extra == \"llm\"",
"anthropic>=0.25.0; extra == \"llm\"",
"google-genai>=1.0.0; extra == \"llm\"",
"python-dotenv>=1.0; extra == \"llm\"",
"streamlit>=1.32; extra == \"ui\"",
"pandas>=1.5; extra == \"ui\"",
"pytest>=8.0; extra == \"dev\"",
... | [] | [] | [] | [
"Homepage, https://github.com/digital-duck/pocoflow",
"Repository, https://github.com/digital-duck/pocoflow",
"Bug Tracker, https://github.com/digital-duck/pocoflow/issues"
] | twine/6.2.0 CPython/3.11.5 | 2026-02-19T02:56:56.754436 | pocoflow-0.2.1.tar.gz | 34,063 | 90/7c/861e89d64627fcd4c304deced968ecf0ac68d13f24a451ca527af4f25101/pocoflow-0.2.1.tar.gz | source | sdist | null | false | 3889cf2a478cc6479b82be4e63c1e636 | 96cc81b89d8fbc59f6038c5ab4ae41f4a484161893185e4f2064435a6be2e519 | 907c861e89d64627fcd4c304deced968ecf0ac68d13f24a451ca527af4f25101 | MIT | [
"LICENSE"
] | 230 |
2.4 | spl-llm | 0.1.1 | Structured Prompt Language - SQL for LLM Context Management | # SPL - Structured Prompt Language
**SQL for LLM Context Management**
SPL is a declarative, SQL-inspired language that treats LLMs as **generative knowledge bases**. Just as SQL (1970) standardized database access, SPL standardizes LLM prompt engineering with automatic token budget optimization, built-in RAG, and persistent memory.
```sql
PROMPT answer_question
WITH BUDGET 8000 tokens
USING MODEL claude-sonnet-4-5
SELECT
    system_role("You are a knowledgeable assistant"),
    context.question AS question LIMIT 200 tokens,
    rag.query("relevant docs", top_k=5) AS docs LIMIT 3000 tokens,
    memory.get("history") AS history LIMIT 500 tokens
GENERATE
    detailed_answer(question, docs, history)
WITH OUTPUT BUDGET 2000 tokens, TEMPERATURE 0.3;
```
## Why SPL?
| Before (Imperative) | After (SPL) |
|-----|-----|
| Manual token counting | `WITH BUDGET 8000 tokens` |
| Trial-and-error truncation | `LIMIT 500 tokens` (auto-compressed) |
| No visibility into allocation | `EXPLAIN PROMPT` shows full plan |
| Copy-paste prompt templates | CTEs, functions, composition |
| Tied to one LLM provider | Provider-agnostic (Ollama, OpenRouter, Claude CLI) |
## Install
```bash
# Core install
pip install spl-llm
# With ChromaDB vector store support
pip install "spl-llm[chroma]"
```
### Zero-cost quick start with Ollama
Run SPL queries entirely locally — no API key, no cost:
```bash
# 1. Install Ollama: https://ollama.ai/download
# 2. Pull a model
ollama pull llama3.2 # 2 GB, great for most tasks
ollama pull qwen2.5 # strong reasoning and code
# 3. Install SPL and run
pip install spl-llm
spl init # creates .spl/config.yaml — set adapter: ollama
spl execute examples/hello_world.spl
```
## Quick Start
```bash
# Initialize workspace
spl init
# Validate syntax
spl validate examples/hello_world.spl
# Show execution plan (like SQL EXPLAIN)
spl explain examples/hello_world.spl
# Execute query
spl execute examples/hello_world.spl --param question="What is Python?"
# Built-in RAG (FAISS by default, or use --backend chroma)
spl rag add my_docs.txt
spl rag add my_docs.txt --backend chroma
spl rag query "search text" --top-k 5
# Persistent memory
spl memory set user_pref "prefers Python"
spl memory get user_pref
spl memory list
```
## EXPLAIN Output
```
Execution Plan for: answer_question
============================================================
Budget: 8,000 tokens | Model: claude-sonnet-4-5
Token Allocation:
  +-- __system_role__      20 tokens  ( 0.2%)
  +-- history             500 tokens  ( 6.2%)  [from memory]
  +-- docs              3,000 tokens  (37.5%)  [cache MISS]
  +-- question            200 tokens  ( 2.5%)
  +-- Output Budget     2,000 tokens  (25.0%)
  \-- Buffer            2,280 tokens  (28.5%)
                     ----------
  Total                 5,720 / 8,000 tokens  (71.5%)
Estimated Cost: $0.0412
```
## Architecture
```
SPL Source (.spl)
        |
[Lexer] -> [Parser] -> [Analyzer] -> [Optimizer] -> [Executor]
                                         /      |        \
                                     [LLM]  [SQLite]  [FAISS or ChromaDB]
                                       |        |          |
                              Ollama (local)  Memory      RAG
                              OpenRouter
                              Claude CLI
```
**Key design decisions:**
- **Parser**: Hand-written recursive descent (zero external parser deps)
- **LLM**: Ollama (local, free) + OpenRouter.ai (production, 100+ models) + Claude CLI (dev)
- **Memory**: SQLite (file-based, portable, zero-config)
- **Vector Store**: FAISS (default) or ChromaDB (`--backend chroma`)
- **Storage**: `.spl/` directory per project
## Python API
```python
import spl
# Parse
ast = spl.parse(open("query.spl").read())
# Validate
result = spl.validate(open("query.spl").read())
# Explain
print(spl.explain(open("query.spl").read()))
# Execute
import asyncio
result = asyncio.run(spl.execute(
    open("query.spl").read(),
    params={"question": "What is Python?"}
))
print(result.content)
```
## SPL Syntax
See [docs/dev/design-v1.md](docs/dev/design-v1.md) for full syntax specification and [docs/grammar.ebnf](docs/grammar.ebnf) for formal grammar.
## Benchmark Results
SPL was evaluated across 5 experiments (all runnable without API keys):
| Metric | Result |
|--------|--------|
| Code reduction vs imperative Python | **65% average** (15 vs 44 lines of code) |
| Manual token-counting ops eliminated | **35 ops across 5 tasks** (SPL: 0) |
| Cross-model cost visibility | **68x cost difference** visible before execution |
| Feature claims verified | **20/20** automated checks pass |
| Parser test suite | **58/58** tests pass (incl. FAISS + ChromaDB storage) |
```bash
# Run benchmarks yourself
python -m tests.benchmarks.bench_developer_experience
python -m tests.benchmarks.bench_token_optimization
python -m tests.benchmarks.bench_cost_estimation
python -m tests.benchmarks.bench_explain_showcase
python -m tests.benchmarks.bench_feature_verification
```
## Session 1 Summary (Feb 12, 2026)
The entire SPL engine --- from idea to a working prototype with an arXiv paper draft --- was built in a single Human+AI co-creation session:
**What was built:**
- Complete language specification (EBNF grammar, 30+ keywords, 50+ token types)
- Full engine pipeline: Lexer, Parser (hand-written recursive descent), Semantic Analyzer, Token Budget Optimizer, Executor
- Three LLM adapters: Ollama (local, free) + OpenRouter.ai (100+ models) + Claude Code CLI (subscription billing)
- Storage layer: SQLite persistent memory + FAISS vector store + ChromaDB (native RAG)
- CLI tool with 10 commands (`spl init/validate/explain/execute/memory/rag`)
- 4 example `.spl` programs covering basic QA, RAG, CTEs, and functions
- 40 unit tests + 5 benchmark experiments + 4 paper figures
- arXiv paper draft (~12 pages) with formal grammar, evaluation data, and competitive analysis
- pip-installable package (`spl-llm v0.1.0`)
**The core insight:** The LLM context window is a constrained resource --- just like disk I/O was for databases. Constrained resources deserve declarative query languages with optimizers. This is Codd's 1970 insight applied to 2026's problem.
## Project
- **Author**: Wen Gong (20+ years Oracle/SQL experience)
- **Vision**: [SPL Design Thinking](docs/dev/design-v1.md)
- **Paper**: [arXiv draft](docs/paper/spl-paper.tex) | [figures](docs/paper/figures/) | [benchmark data](docs/paper/data/)
- **Co-creation**: Built via Human+AI collaboration ([log](docs/dev/co-creation-log.md))
- **History**: [Why SPL required interdisciplinary thinking](docs/history-lessons.md)
- **License**: Apache 2.0
| text/markdown | null | Digital Duck <p2p2learn@outlook.com> | null | null | null | llm, prompt-engineering, sql, declarative, context-management | [
"Development Status :: 3 - Alpha",
"Intended Audience :: Developers",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Topic :: Software Development :: Libraries :: Python Modules",
"To... | [] | null | null | >=3.10 | [] | [] | [] | [
"click>=8.0",
"faiss-cpu>=1.7.4",
"numpy>=1.24",
"httpx>=0.25",
"tiktoken>=0.5",
"pyyaml>=6.0",
"dd-logging>=0.1.0",
"pytest>=7.0; extra == \"dev\"",
"pytest-asyncio>=0.21; extra == \"dev\"",
"chromadb>=0.4; extra == \"chroma\""
] | [] | [] | [] | [
"Homepage, https://github.com/digital-duck/SPL",
"Repository, https://github.com/digital-duck/SPL"
] | twine/6.2.0 CPython/3.11.5 | 2026-02-19T02:56:53.965151 | spl_llm-0.1.1.tar.gz | 50,476 | 9d/52/ff99240962912a32be4b91636b8aba2fafdc8516475c61740a66db6d0257/spl_llm-0.1.1.tar.gz | source | sdist | null | false | c034d162714fe68334d8909d48ab2e7b | 18d27b02700510a50061ff0961b8db29e0f6c77da2fb278050e2526690410e9d | 9d52ff99240962912a32be4b91636b8aba2fafdc8516475c61740a66db6d0257 | Apache-2.0 | [
"LICENSE"
] | 265 |
2.4 | dd-logging | 0.1.1 | Shared logging helpers for Digital Duck projects (spl-llm, spl-flow, …) | # dd-logging
Shared logging helpers for [Digital Duck](https://github.com/digital-duck) projects.
Provides a consistent, timestamped-file logging pattern for CLI tools and Streamlit apps.
Used by **spl-llm** and **spl-flow**; designed to be reused by any project in the ecosystem.
## Install
```bash
# from PyPI (once published)
pip install dd-logging
# local editable install (for development)
pip install -e /path/to/dd-logging
```
## Usage
```python
from dd_logging import setup_logging, get_logger, disable_logging
# 1. Call once per process (CLI entry point or app startup)
log_path = setup_logging(
    "run",
    root_name="my_app",     # top-level logger namespace
    adapter="openrouter",   # appended to filename (optional)
    log_level="info",       # debug | info | warning | error
)
# → logs/run-openrouter-20260215-143022.log
# 2. In each module
_log = get_logger("nodes.text2spl", root_name="my_app")
_log.info("translating query len=%d", len(query))
# 3. Silence all logging (e.g. --no-log CLI flag)
disable_logging("my_app")
```
### Thin wrapper pattern (recommended)
Each project wraps `dd_logging` so call-sites never pass `root_name`:
```python
# myapp/logging_config.py
from pathlib import Path
from dd_logging import setup_logging as _setup, get_logger as _get, disable_logging as _disable
_ROOT = "my_app"
LOG_DIR = Path(__file__).resolve().parent.parent / "logs"
def get_logger(name: str):
    return _get(name, _ROOT)

def setup_logging(run_name: str, **kw):
    kw.setdefault("log_dir", LOG_DIR)
    return _setup(run_name, root_name=_ROOT, **kw)

def disable_logging():
    return _disable(_ROOT)
```
## Log file naming
```
<log_dir>/<run_name>[-<adapter>]-<YYYYMMDD-HHMMSS>.log
logs/run-openrouter-20260215-143022.log
logs/benchmark-claude_cli-20260215-144500.log
logs/generate-20260215-145001.log
```
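The naming scheme above can be reproduced with a small helper (illustrative only; this is not part of dd_logging's public API):

```python
from datetime import datetime

def build_log_name(run_name, adapter=None, when=None):
    """Construct a filename following the documented pattern:
    <run_name>[-<adapter>]-<YYYYMMDD-HHMMSS>.log"""
    when = when or datetime.now()
    stamp = when.strftime("%Y%m%d-%H%M%S")
    parts = [run_name] + ([adapter] if adapter else []) + [stamp]
    return "-".join(parts) + ".log"
```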
## Logger hierarchy
```
my_app ← root (FileHandler attached here)
├── my_app.api ← get_logger("api", "my_app")
├── my_app.nodes.text2spl ← get_logger("nodes.text2spl", "my_app")
└── my_app.flows ← get_logger("flows", "my_app")
```
All child loggers inherit the root's handler — no per-module handler setup needed.
## Design notes
- **`propagate=False`** — prevents duplicate output when a root Python logger
handler is already configured (e.g. Streamlit, pytest, Jupyter).
- **Stale-handler removal** — calling `setup_logging()` multiple times in one
process (e.g. test suites) is safe; old `FileHandler`s are replaced.
- **No third-party dependencies** — stdlib `logging` only.
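The first two design notes can be sketched with the stdlib alone (an illustrative reconstruction, not dd_logging's actual code):

```python
import logging

def reset_file_handler(root_name, path):
    """Attach a FileHandler to the named root logger, replacing any
    previous one so repeated setup calls (e.g. in test suites) never
    duplicate output. Sketch of the pattern described above."""
    logger = logging.getLogger(root_name)
    logger.propagate = False  # avoid duplicate output via the real root logger
    for handler in list(logger.handlers):
        if isinstance(handler, logging.FileHandler):
            logger.removeHandler(handler)  # stale-handler removal
            handler.close()
    logger.addHandler(logging.FileHandler(path))
    return logger
```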
## Projects using dd-logging
| Project | Root name | Log dir |
|---------|-----------|---------|
| [spl-llm](https://github.com/digital-duck/SPL) | `spl` | `SPL/logs/` |
| [spl-flow](https://github.com/digital-duck/SPL-Flow) | `spl_flow` | `SPL-Flow/logs/` |
## License
MIT
| text/markdown | null | Digital Duck <p2p2learn@outlook.com> | null | null | null | logging, digital-duck, spl | [
"Development Status :: 3 - Alpha",
"Intended Audience :: Developers",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Topic :: System :: Logging"
] | [] | null | null | >=3.10 | [] | [] | [] | [] | [] | [] | [] | [
"Homepage, https://github.com/digital-duck/dd-logging",
"Repository, https://github.com/digital-duck/dd-logging",
"Issues, https://github.com/digital-duck/dd-logging/issues"
] | twine/6.2.0 CPython/3.11.5 | 2026-02-19T02:56:51.375580 | dd_logging-0.1.1.tar.gz | 5,038 | ae/a2/b54d57323b01f6b9f3d60fd4e61f27eb91f708031ee94bbbec52311b96be/dd_logging-0.1.1.tar.gz | source | sdist | null | false | 1927577be1c65ee31ec4e8d068a6424e | d33fee20f8bb119030d4b15095ea9f656f41a2d6a1b3b55ae3c50b34294dd8d1 | aea2b54d57323b01f6b9f3d60fd4e61f27eb91f708031ee94bbbec52311b96be | MIT | [
"LICENSE"
] | 311 |
2.4 | dd-llm | 0.1.0 | Shared LLM abstraction layer for Digital Duck projects | # dd-llm
Shared LLM abstraction layer for Digital Duck projects.
Zero core dependencies. Adapters lazy-import their SDKs only when used.
## Install
```bash
pip install -e . # zero deps — claude_cli adapter works out of the box
pip install -e ".[openai]" # + OpenAI SDK (also covers openrouter, ollama)
pip install -e ".[anthropic]" # + Anthropic SDK
pip install -e ".[gemini]" # + Google GenAI SDK
pip install -e ".[all]" # all provider SDKs
```
## Quick Start
```python
from dd_llm import call_llm
# Uses LLM_PROVIDER env var (default: "openai")
response = call_llm("What is 2+2?")
# Specify provider
response = call_llm("Hello", provider="claude_cli")
# With messages
response = call_llm(messages=[
    {"role": "system", "content": "You are helpful."},
    {"role": "user", "content": "Hi"},
])
```
## Built-in Adapters
| Name | Class | SDK | Notes |
|------|-------|-----|-------|
| `claude_cli` | `ClaudeCLIAdapter` | none (subprocess) | Dev provider, $0 cost via Claude Code subscription |
| `openai` | `OpenAIAdapter` | `openai` | Direct OpenAI API |
| `anthropic` | `AnthropicAdapter` | `anthropic` | Direct Anthropic API |
| `gemini` | `GeminiAdapter` | `google-genai` | Direct Google API |
| `openrouter` | `OpenAIAdapter` (configured) | `openai` | OpenAI-compatible endpoint |
| `ollama` | `OpenAIAdapter` (configured) | `openai` | Local OpenAI-compatible endpoint |
## Custom Adapters
```python
from dd_llm import LLMAdapter, LLMResponse, register_adapter, call_llm
class MyAdapter(LLMAdapter):
    def call(self, prompt="", *, messages=None, **kwargs):
        result = my_internal_api(prompt)
        return LLMResponse(content=result, success=True, provider="my_api", model="v1")
register_adapter("my_api", MyAdapter)
# Now usable everywhere
response = call_llm("hello", provider="my_api")
```
## UnifiedLLMProvider
Multi-provider client with retry (exponential backoff + jitter) and automatic
fallback to alternative providers.
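The backoff-with-jitter schedule mentioned above can be sketched as follows (the jitter formula and names here are assumptions based on the documented defaults, not dd_llm's actual code):

```python
import random

def backoff_delays(max_retries=3, initial_wait=1.0, max_wait=30.0):
    """Illustrative exponential-backoff-with-jitter schedule.
    Defaults mirror LLM_MAX_RETRIES / LLM_INITIAL_WAIT / LLM_MAX_WAIT."""
    delays = []
    ceiling = initial_wait
    for _ in range(max_retries):
        # "full jitter": sleep a random amount up to the current ceiling
        delays.append(random.uniform(0, min(ceiling, max_wait)))
        ceiling *= 2  # double the ceiling after each failed attempt
    return delays
```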
```python
from dd_llm import UnifiedLLMProvider
provider = UnifiedLLMProvider(
    primary_provider="openai",
    fallback_providers=["anthropic", "ollama"],
    max_retries=3,
)

result = provider.call("Explain quantum computing")
if result.success:
    print(result.content)
    print(f"Provider: {result.provider}, Model: {result.model}")
    print(f"Tokens: {result.input_tokens} in, {result.output_tokens} out")
```
## Environment Variables
| Variable | Description | Default |
|----------|-------------|---------|
| `LLM_PROVIDER` | Primary provider name | `openai` |
| `LLM_MODEL` | Default model (all providers) | per-provider |
| `LLM_MODEL_OPENAI` | Override model for OpenAI | `gpt-4o` |
| `LLM_MODEL_ANTHROPIC` | Override model for Anthropic | `claude-sonnet-4-5-20250929` |
| `LLM_MODEL_GEMINI` | Override model for Gemini | `gemini-2.0-flash` |
| `LLM_MAX_RETRIES` | Max retries per provider | `3` |
| `LLM_INITIAL_WAIT` | Initial backoff (seconds) | `1` |
| `LLM_MAX_WAIT` | Max backoff (seconds) | `30` |
| `OPENAI_API_KEY` | OpenAI API key | — |
| `ANTHROPIC_API_KEY` | Anthropic API key | — |
| `GEMINI_API_KEY` | Google Gemini API key | — |
| `OPENROUTER_API_KEY` | OpenRouter API key | — |
| `OLLAMA_HOST` | Ollama base URL | `http://localhost:11434` |
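The precedence implied by the table (provider-specific override beats the generic `LLM_MODEL`, which beats the per-provider default) could be resolved roughly like this (a sketch; dd_llm's implementation may differ):

```python
import os

# Per-provider defaults from the table above
DEFAULT_MODELS = {
    "openai": "gpt-4o",
    "anthropic": "claude-sonnet-4-5-20250929",
    "gemini": "gemini-2.0-flash",
}

def resolve_model(provider, env=None):
    """Resolve the model name: LLM_MODEL_<PROVIDER> > LLM_MODEL > default."""
    env = os.environ if env is None else env
    return (
        env.get(f"LLM_MODEL_{provider.upper()}")
        or env.get("LLM_MODEL")
        or DEFAULT_MODELS.get(provider)
    )
```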
## License
MIT
| text/markdown | null | Digital Duck <p2p2learn@outlook.com> | null | null | null | llm, digital-duck, ai, provider | [
"Development Status :: 3 - Alpha",
"Intended Audience :: Developers",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Topic :: Scientific/Engineering :: Artificial Intelligence"
] | [] | null | null | >=3.10 | [] | [] | [] | [
"openai>=1.0.0; extra == \"openai\"",
"anthropic>=0.25.0; extra == \"anthropic\"",
"google-genai>=1.0.0; extra == \"gemini\"",
"openai>=1.0.0; extra == \"all\"",
"anthropic>=0.25.0; extra == \"all\"",
"google-genai>=1.0.0; extra == \"all\"",
"pytest>=8.0; extra == \"dev\""
] | [] | [] | [] | [
"Homepage, https://github.com/digital-duck/dd-llm",
"Repository, https://github.com/digital-duck/dd-llm",
"Issues, https://github.com/digital-duck/dd-llm/issues"
] | twine/6.2.0 CPython/3.11.5 | 2026-02-19T02:56:44.325991 | dd_llm-0.1.0.tar.gz | 14,472 | 7e/26/049aa38fa16e8593f2a7f6392d3bde6902f62f6606c9c1eb7acc4b6b0fd0/dd_llm-0.1.0.tar.gz | source | sdist | null | false | 6080af4b57160a08c15dff78a21dfa6b | faa21440e79799d34d74225ce74b05b48b9f2ff75d593186470489ab4c50f75e | 7e26049aa38fa16e8593f2a7f6392d3bde6902f62f6606c9c1eb7acc4b6b0fd0 | MIT | [
"LICENSE"
] | 279 |
2.4 | document-placeholder | 1.0.0.post1 | Fill Word document templates using YAML configs with a powerful expression language, built-in functions, and SQLite integration. | <div align="center">
# 📄 DocumentPlaceholder
**Automatically fill Word templates using YAML configs, expressions, and SQL.**
*Generate invoices, reports, statements, and any other documents with a single command.*
[](https://pypi.org/project/document-placeholder/)
[](https://pypi.org/project/document-placeholder/)
[](https://opensource.org/licenses/MIT)
[Features](#-features) • [Installation](#-installation) • [Quick Start](#-quick-start) • [Configuration](#-configuration) • [Functions](#-built-in-functions) • [GUI](#-graphical-interface)
</div>
---
## 🚀 Features
**DocumentPlaceholder** turns `.docx` templates into ready-to-use documents based on YAML configs with a powerful expression language.
* 📝 **Word templates** — placeholders like `{KEY}` in text, tables, and headers/footers.
* ⚡ **Expression language** — arithmetic, comparisons, nested function calls, and template strings.
* 🛢 **SQLite out of the box** — run database queries directly inside configs for counters, lookups, and client data.
* 📅 **59 built-in functions** — date/time, strings, math, logic, and conditions.
* 📤 **PDF export** — automatic conversion through LibreOffice.
* 🖥 **GUI with syntax highlighting** — config editor, live preview, SQL manager.
* 🔌 **Extensible** — add your own functions with a single decorator.
---
## 📦 Installation
```bash
pip install document-placeholder
```
**Optional extras:**
| Extra | Includes |
|-------|----------|
| `document-placeholder[gui]` | GUI interface (CustomTkinter) |
| `document-placeholder[dev]` | Development tools (pytest) |
| `document-placeholder[all]` | Everything |
---
## ⚡ Quick Start
### 1. Create a Word template (`template.docx`)
Insert placeholders into the document:
```
Invoice #{INVOICE_NUM}
Date: {DAY_NUM}.{MONTH_STR}.{YEAR_NUM}
Amount: ${PRICE}
{DESCRIPTION}
```
### 2. Write a config (`template.yaml`)
```yaml
ON_START:
  - SQL('CREATE TABLE IF NOT EXISTS doc (num INTEGER DEFAULT 0)')
  - SQL('INSERT OR IGNORE INTO doc (rowid, num) VALUES (1, 0)')
INVOICE_NUM:
  SQL('SELECT num FROM doc WHERE rowid = 1') + 1
MONTH_STR:
  CURRENT_DATE_STR(month)
DAY_NUM:
  CURRENT_DATE_NUM(day)
YEAR_NUM:
  CURRENT_DATE_NUM(year)
PRICE:
  500
DESCRIPTION:
  "Software Development Services
  (Period: {CURRENT_DATE_NUM(day, month, year) - DAYS(7)} — {CURRENT_DATE_NUM(day, month, year)})"
OUTPUT_NAME:
  "Invoice-{INVOICE_NUM}"
OUTPUT_FORMAT:
  - docx
  - pdf
ON_END:
  SQL('UPDATE doc SET num = num + 1 WHERE rowid = 1')
```
### 3. Run
```bash
docplaceholder -c template.yaml -t template.docx
```
```
INVOICE_NUM = 2026-2-5
MONTH_STR = February
DAY_NUM = 16
YEAR_NUM = 2026
PRICE = 500
DESCRIPTION = Software Development Services (Period: 09.02.2026 — 16.02.2026)
Output: Invoice-2026-2-5 [docx, pdf]
-> Invoice-2026-2-5.docx
-> Invoice-2026-2-5.pdf
```
---
## 🎨 Expression Language
A config is not just a key-value mapping. Every value is an **expression** that gets evaluated.
### Arithmetic and comparisons
```yaml
TAX: ROUND(PRICE * 0.2, 2)
TOTAL: PRICE + TAX
IS_PREMIUM: TOTAL > 1000
```
### Template strings
Inside `"..."`, expressions like `{expr}` are interpolated into the final string:
```yaml
PERIOD: "{CURRENT_DATE_NUM(day, month, year) - DAYS(30)} — {CURRENT_DATE_NUM(day, month, year)}"
```
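Conceptually, the interpolation scans the string for `{...}` spans and substitutes each evaluated result. A minimal sketch (illustrative only; the real evaluator also handles nested braces and quoting):

```python
import re

def interpolate(template, evaluate):
    """Replace each {expr} in the template with str(evaluate(expr))."""
    return re.sub(r"\{([^{}]+)\}", lambda m: str(evaluate(m.group(1))), template)
```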
### Nested calls
```yaml
INVOICE_NUM:
  "{CURRENT_DATE_NUM(year)}-{SQL('SELECT num FROM doc WHERE rowid = 1') + 1}"
```
### Conditional logic
```yaml
STATUS: IF(TOTAL > 1000, 'Premium', 'Standard')
DISCOUNT: IF(TOTAL >= 500, TOTAL * 0.1, 0)
LABEL: SWITCH(STATUS, 'Premium', '⭐ Premium', 'Standard', '📋 Standard')
```
**Supported operators:** `+` `-` `*` `/` `%` `>` `<` `>=` `<=` `==` `!=` `()`
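Based on the examples above, `SWITCH` appears to take pairs of (match, result) plus an optional trailing default. A hedged Python sketch of that behavior (assumed semantics, not the package's actual implementation):

```python
def SWITCH(value, *args):
    """Return the result paired with the first matching value.
    An odd trailing argument acts as the default. (Assumed semantics.)"""
    pairs, default = args, None
    if len(args) % 2 == 1:
        pairs, default = args[:-1], args[-1]
    for match, result in zip(pairs[::2], pairs[1::2]):
        if value == match:
            return result
    return default
```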
---
## ⚙️ Configuration
### CLI arguments
```
docplaceholder [-c CONFIG] [-t TEMPLATE] [-o OUTPUT] [--db DATABASE]
```
| Argument | Default | Description |
|----------|---------|-------------|
| `-c, --config` | `template.yaml` | Path to YAML config |
| `-t, --template` | `template.docx` | Path to Word template |
| `-o, --output` | `output.docx` | Path to output file |
| `--db` | `data.db` | Path to SQLite database |
| `-V, --version` | | Print program version |
### Special YAML keys
| Key | Description |
|-----|-------------|
| `ON_START` | Expressions executed **before** processing (table creation, initialization) |
| `ON_END` | Expressions executed **after** processing (increment counters, cleanup) |
| `OUTPUT_NAME` | Output filename template: `"Invoice-{INVOICE_NUM}"` |
| `OUTPUT_FORMAT` | List of output formats: `[docx, pdf]` |
All other keys are treated as **placeholders** and replaced in the document.
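The placeholder substitution itself amounts to replacing each `{KEY}` occurrence with its evaluated value. A minimal sketch for plain text (the real processor also walks tables and headers/footers):

```python
def replace_placeholders(text, values):
    """Substitute every {KEY} in the text with str(value)."""
    for key, value in values.items():
        text = text.replace("{" + key + "}", str(value))
    return text
```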
---
## 🧰 Built-in Functions
**59 functions** in 5 categories. Full reference: [FUNCTIONS.md](FUNCTIONS.md)
### 📅 Date and time
```yaml
TODAY: TODAY() # 16.02.2026
YEAR: CURRENT_DATE_NUM(year) # 2026
MONTH: CURRENT_DATE_STR(month) # February
CUSTOM: DATE_FORMAT(DATE(2026, 3, 8), '%d %B %Y') # 08 March 2026
WEEK_AGO: "{TODAY() - DAYS(7)}" # 09.02.2026
DIFF: DAYS_BETWEEN(DATE(2026, 1, 1), TODAY()) # 46
```
### 🔤 Strings
```yaml
UPPER('hello') # HELLO
TITLE('john doe') # John Doe
PAD_LEFT('42', 6, '0') # 000042
JOIN(', ', 'a', 'b', 'c') # a, b, c
REPLACE('foo bar', 'bar', 'baz') # foo baz
SPLIT('user@mail.com', '@', 1) # mail.com
```
### 🔢 Math
```yaml
ROUND(19.956, 2) # 19.96
FORMAT_NUM(1234567, 2) # 1,234,567.00
MIN(3, 1, 4, 1, 5) # 1
AVG(10, 20, 30) # 20.0
SQRT(144) # 12.0
```
### 🧠 Logic
```yaml
IF(PRICE > 1000, 'expensive', 'cheap')
COALESCE(SQL('SELECT name FROM clients'), 'Unknown')
DEFAULT(value, 'N/A')
SWITCH(status, 'draft', 'Draft', 'sent', 'Sent', 'Unknown')
```
### 🛢 SQL
```yaml
SQL('SELECT count(*) FROM orders WHERE user_id = 1')
SQL('INSERT INTO log (event) VALUES ("generated")')
```
---
## 🖥 Graphical Interface
```bash
pip install document-placeholder[gui]
docplaceholder-gui
```
The GUI includes:
- **Config editor** with YAML and custom syntax highlighting (`SQL(...)`, `{expressions}`)
- **Live preview** of evaluated values
- **SQL manager** for running queries and viewing tables/schema
- **Keyboard shortcuts** — `Ctrl+S` save, `Ctrl+F` search, `F5` refresh
---
## 🔌 Extending Functions
Add a custom function with a single decorator:
```python
from document_placeholder.functions import FunctionRegistry
@FunctionRegistry.register("MY_FUNC")
def my_func(arg1, arg2):
    """Your custom logic."""
    return f"{arg1}-{arg2}"
```
After importing the module, the function becomes available in config expressions:
```yaml
VALUE: MY_FUNC('hello', 'world') # hello-world
```
---
## 📁 Library Usage
```python
from document_placeholder.config import Config
from document_placeholder.evaluator import Evaluator
from document_placeholder.processor import DocumentProcessor
config = Config.from_string("""
NAME: UPPER('john doe')
DATE: TODAY()
""")
evaluator = Evaluator()
values = {k: evaluator.evaluate_value(v) for k, v in config.placeholders.items()}
# {'NAME': 'JOHN DOE', 'DATE': DateValue(2026-02-16)}
processor = DocumentProcessor("template.docx")
processor.replace_placeholders(values)
processor.save("output.docx")
```
---
## 🧪 Testing
```bash
pip install document-placeholder[dev]
pytest
```
```
295 passed in 0.36s
```
---
## 🤝 Contributing
1. Fork the repository
2. Create a feature branch
3. Commit your changes
4. Open a Pull Request
Bugs and feature requests → [Issues](https://github.com/FlacSy/DocumentPlaceholder/issues)
---
## 📄 License
This project is released under the **MIT** license. See [LICENSE](LICENSE) for details.
<div align="center">
<sub>Developed with ❤️ by <a href="https://github.com/FlacSy">FlacSy</a></sub>
</div>
| text/markdown | FlacSy | null | null | null | null | automation, document, docx, pdf, placeholder, report, template, word, yaml | [
"Development Status :: 4 - Beta",
"Intended Audience :: Developers",
"Intended Audience :: End Users/Desktop",
"License :: OSI Approved :: MIT License",
"Operating System :: OS Independent",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.10",
"Programming Language :: Pytho... | [] | null | null | >=3.10 | [] | [] | [] | [
"python-docx>=1.1.0",
"pyyaml>=6.0",
"customtkinter>=5.2.0; extra == \"all\"",
"pytest>=8.0; extra == \"all\"",
"pytest>=8.0; extra == \"dev\"",
"customtkinter>=5.2.0; extra == \"gui\""
] | [] | [] | [] | [
"Homepage, https://github.com/FlacSy/DocumentPlaceholder",
"Repository, https://github.com/FlacSy/DocumentPlaceholder",
"Issues, https://github.com/FlacSy/DocumentPlaceholder/issues",
"Documentation, https://github.com/FlacSy/DocumentPlaceholder/blob/master/FUNCTIONS.md"
] | twine/6.2.0 CPython/3.12.3 | 2026-02-19T02:54:31.508331 | document_placeholder-1.0.0.post1.tar.gz | 52,004 | e8/4a/1d0c98b8c30d4bc3d83a6a47d695b02af20a3f5239f97092a960a248a00d/document_placeholder-1.0.0.post1.tar.gz | source | sdist | null | false | ddc63751a62729ab791e0a9c18fe50c1 | 311a0555490fc964b353d265885adec0251fc3e3b2de98a59d4911f68d0894d0 | e84a1d0c98b8c30d4bc3d83a6a47d695b02af20a3f5239f97092a960a248a00d | MIT | [
"LICENSE"
] | 243 |
2.4 | anystore | 1.1.0 | Store and cache things anywhere | [](https://docs.investigraph.dev/lib/anystore/)
[](https://pypi.org/project/anystore/)
[](https://pepy.tech/projects/anystore)
[](https://pypi.org/project/anystore/)
[](https://github.com/dataresearchcenter/anystore/actions/workflows/python.yml)
[](https://github.com/pre-commit/pre-commit)
[](https://coveralls.io/github/dataresearchcenter/anystore?branch=main)
[](./LICENSE)
[](https://pydantic.dev)
# anystore
Store anything anywhere. `anystore` provides a high-level storage and retrieval interface for various supported _store_ backends, such as `redis`, `sql`, `file`, `http`, cloud-storages and anything else supported by [`fsspec`](https://filesystem-spec.readthedocs.io/en/latest/index.html).
Think of it as a `key -> value` store where `anystore` acts as a cache backend. When _keys_ become filenames and _values_ become byte blobs, `anystore` effectively becomes a file-like storage backend – always with the same, interchangeable interface.
### Why?
[In our several data engineering projects](https://dataresearchcenter.org/projects) we kept writing boilerplate code that covers the feature set of `anystore`, but never in a reusable way. This library is meant to be a stable foundation for data-wrangling-related Python projects.
### Examples
#### Base cli interface:
```shell
anystore -i ./local/foo.txt -o s3://mybucket/other.txt
echo "hello" | anystore -o sftp://user:password@host:/tmp/world.txt
anystore -i https://dataresearchcenter.org > index.html
anystore --store sqlite:///db keys <prefix>
anystore --store redis://localhost put foo "bar"
anystore --store redis://localhost get foo # -> "bar"
```
#### Use in your applications:
```python
from anystore import smart_read, smart_write
data = smart_read("s3://mybucket/data.txt")
smart_write(".local/data", data)
```
#### Simple cache example via decorator:
Use case: [`@anycache` is used for api view cache in `ftmq-api`](https://github.com/dataresearchcenter/ftmq-api/blob/main/ftmq_api/views.py)
```python
from anystore import get_store, anycache
cache = get_store("redis://localhost")
@anycache(store=cache, key_func=lambda q: f"api/list/{q.make_key()}", ttl=60)
def get_list_view(q: Query) -> Response:
    result = ...  # complex computation will be cached
    return result
```
#### Mirror file collections:
```python
from anystore import get_store
source = get_store("https://example.org/documents/archive1") # directory listing
target = get_store("s3://mybucket/files", backend_config={"client_kwargs": {
    "aws_access_key_id": "my-key",
    "aws_secret_access_key": "***",
    "endpoint_url": "https://s3.local",
}})  # can be configured via ENV as well

for path in source.iterate_keys():
    # streaming copy:
    with source.open(path) as i:
        with target.open(path, "wb") as o:
            while chunk := i.read(8192):
                o.write(chunk)
```
## Documentation
Find the docs at [docs.investigraph.dev/lib/anystore](https://docs.investigraph.dev/lib/anystore)
## Used by
- [ftmq](https://github.com/dataresearchcenter/ftmq), a query interface layer for [followthemoney](https://followthemoney.tech) data
- [investigraph](https://github.com/dataresearchcenter/investigraph), a framework to manage collections of structured [followthemoney](https://followthemoney.tech) data
- [ftmq-api](https://github.com/dataresearchcenter/ftmq-api), a simple API on top of `ftmq`, built with [FastAPI](https://fastapi.tiangolo.com/)
- [ftm-geocode](https://github.com/dataresearchcenter/ftm-geocode), batch parse and geocode addresses from followthemoney entities
- [ftm-lakehouse](https://github.com/openaleph/ftm-lakehouse), a library to crawl, store and move around document collections and structured [FollowTheMoney](https://followthemoney.tech) data (in progress)
- The [OpenAleph](https://openaleph.org) suite in general
## Development
This package is using [poetry](https://python-poetry.org/) for packaging and dependencies management, so first [install it](https://python-poetry.org/docs/#installation).
Clone this repository to a local destination.
Within the repo directory, run
poetry install --with dev
This installs a few development dependencies, including [pre-commit](https://pre-commit.com/) which needs to be registered:
poetry run pre-commit install
Before creating a commit, this checks for correct code formatting (isort, black) and some other useful stuff (see: `.pre-commit-config.yaml`)
### testing
`anystore` uses [pytest](https://docs.pytest.org/en/stable/) as the testing framework.
make test
## License and Copyright
`anystore`, (C) 2024 investigativedata.io
`anystore`, (C) 2025 investigativedata.io, Data and Research Center – DARC
`anystore` is licensed under the AGPLv3 or later license.
Prior to version 0.3.0, `anystore` was released under the GPL-3.0 license.
see [NOTICE](./NOTICE) and [LICENSE](./LICENSE)
| text/markdown | Simon Wörpel | simon.woerpel@pm.me | null | null | AGPLv3+ | null | [
"Intended Audience :: Developers",
"Operating System :: OS Independent",
"Programming Language :: Python",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: 3.13",
"Programming Language :: Python :: 3.14"
] | [] | null | null | <4,>=3.11 | [] | [] | [] | [
"banal<2.0.0,>=1.0.6",
"cloudpickle<4.0.0,>=3.1.0",
"dateparser<2.0.0,>=1.2.2",
"fakeredis<3.0.0,>=2.26.2; extra == \"redis\"",
"fastapi<1.0.0,>=0.115.0; extra == \"api\"",
"fsspec<2027,>2023.10",
"imohash<2.0.0,>=1.1.0",
"orjson<4.0.0,>=3.10.18",
"pyaml<26.0.0,>=25.5.0",
"pydantic<3.0.0,>=2.11.7"... | [] | [] | [] | [
"Documentation, https://docs.investigraph.dev/lib/anystore",
"Homepage, https://docs.investigraph.dev/lib/anystore",
"Issues, https://github.com/dataresearchcenter/anystore/issues",
"Repository, https://github.com/dataresearchcenter/anystore"
] | poetry/2.3.2 CPython/3.13.5 Linux/6.12.63+deb13-amd64 | 2026-02-19T02:54:25.562548 | anystore-1.1.0-py3-none-any.whl | 71,917 | 8c/f0/c1af3c67863eea5c0b35d3dbd0251da32c40620a0758a73371e7551720ad/anystore-1.1.0-py3-none-any.whl | py3 | bdist_wheel | null | false | 3af11d964d4d4d73d928ff1dc8fa60a2 | 8e635203dc059ab2ab7f6b4bca77d3acbea958d814e73ea22d303ba4ba6b3ba0 | 8cf0c1af3c67863eea5c0b35d3dbd0251da32c40620a0758a73371e7551720ad | null | [
"LICENSE",
"NOTICE"
] | 508 |
2.1 | odoo-addon-delivery-dachser | 18.0.1.0.0.4 | Delivery Carrier implementation for Dachser API | .. image:: https://odoo-community.org/readme-banner-image
:target: https://odoo-community.org/get-involved?utm_source=readme
:alt: Odoo Community Association
================
Delivery Dachser
================
..
!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
!! This file is generated by oca-gen-addon-readme !!
!! changes will be overwritten. !!
!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
!! source digest: sha256:99e7986d91624264623531d622f7358089dba4c18de8a95131afe87f9bf14162
!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
.. |badge1| image:: https://img.shields.io/badge/maturity-Beta-yellow.png
:target: https://odoo-community.org/page/development-status
:alt: Beta
.. |badge2| image:: https://img.shields.io/badge/license-AGPL--3-blue.png
:target: http://www.gnu.org/licenses/agpl-3.0-standalone.html
:alt: License: AGPL-3
.. |badge3| image:: https://img.shields.io/badge/github-OCA%2Fdelivery--carrier-lightgray.png?logo=github
:target: https://github.com/OCA/delivery-carrier/tree/18.0/delivery_dachser
:alt: OCA/delivery-carrier
.. |badge4| image:: https://img.shields.io/badge/weblate-Translate%20me-F47D42.png
:target: https://translation.odoo-community.org/projects/delivery-carrier-18-0/delivery-carrier-18-0-delivery_dachser
:alt: Translate me on Weblate
.. |badge5| image:: https://img.shields.io/badge/runboat-Try%20me-875A7B.png
:target: https://runboat.odoo-community.org/builds?repo=OCA/delivery-carrier&target_branch=18.0
:alt: Try me on Runboat
|badge1| |badge2| |badge3| |badge4| |badge5|
Dachser integration with Odoo.
**Table of contents**
.. contents::
:local:
Installation
============
It depends on the modules delivery_package_number, delivery_state and
stock_picking_volume, which can be found in OCA/delivery-carrier.
Configuration
=============
To configure your Dachser services, go to:
1. *Inventory/Sales > Configuration > Delivery methods* and create a new
one.
2. Choose *Dachser* as provider.
3. Configure your Dachser data: API key, Division Code, Product Code,
   Term Code, Packaging Code and Packaging Type.
The API key is a piece of information that you will obtain when you
register at https://api-portal.dachser.com/bi.b2b.portal/api/library and
request access to the following APIs:

- transportorder: to create and cancel shipments.
- shipmentstatus: to obtain the status of shipments.
- quotations: to obtain a quote for a shipment.
It is not possible to test transportorder with an API in test mode:
Dachser does not provide separate test and production environments, so
changing the environment in the carrier's corresponding smart button
will have no effect.
Usage
=====
**Quotation**: you can obtain the cost of a shipment in a quote by using
the "Add shipment" wizard and selecting Dachser as the carrier.

**Create shipment**: when you validate a picking linked to a Dachser
carrier, a shipment will be created, the corresponding tracking number
will be defined, and the corresponding label will be added as an
attachment.

**Update shipment status**: the status of Dachser shipments will be
updated automatically on the pickings.

**Cancel shipment**: if a shipment has been created in Dachser (it has a
tracking number), the shipment can be canceled (depending on its
status).
Bug Tracker
===========
Bugs are tracked on `GitHub Issues <https://github.com/OCA/delivery-carrier/issues>`_.
In case of trouble, please check there if your issue has already been reported.
If you spotted it first, help us to smash it by providing a detailed and welcomed
`feedback <https://github.com/OCA/delivery-carrier/issues/new?body=module:%20delivery_dachser%0Aversion:%2018.0%0A%0A**Steps%20to%20reproduce**%0A-%20...%0A%0A**Current%20behavior**%0A%0A**Expected%20behavior**>`_.
Do not contact contributors directly about support or help with technical issues.
Credits
=======
Authors
-------
* Tecnativa
Contributors
------------
- `Tecnativa <https://www.tecnativa.com>`__:
- Víctor Martínez
Maintainers
-----------
This module is maintained by the OCA.
.. image:: https://odoo-community.org/logo.png
:alt: Odoo Community Association
:target: https://odoo-community.org
OCA, or the Odoo Community Association, is a nonprofit organization whose
mission is to support the collaborative development of Odoo features and
promote its widespread use.
.. |maintainer-victoralmau| image:: https://github.com/victoralmau.png?size=40px
:target: https://github.com/victoralmau
:alt: victoralmau
Current `maintainer <https://odoo-community.org/page/maintainer-role>`__:
|maintainer-victoralmau|
This module is part of the `OCA/delivery-carrier <https://github.com/OCA/delivery-carrier/tree/18.0/delivery_dachser>`_ project on GitHub.
You are welcome to contribute. To learn how please visit https://odoo-community.org/page/Contribute.
| text/x-rst | Tecnativa, Odoo Community Association (OCA) | support@odoo-community.org | null | null | AGPL-3 | null | [
"Programming Language :: Python",
"Framework :: Odoo",
"Framework :: Odoo :: 18.0",
"License :: OSI Approved :: GNU Affero General Public License v3"
] | [] | https://github.com/OCA/delivery-carrier | null | >=3.10 | [] | [] | [] | [
"odoo-addon-delivery_package_number==18.0.*",
"odoo-addon-delivery_state==18.0.*",
"odoo-addon-stock_picking_volume==18.0.*",
"odoo==18.0.*"
] | [] | [] | [] | [] | twine/6.2.0 CPython/3.12.3 | 2026-02-19T02:54:20.017078 | odoo_addon_delivery_dachser-18.0.1.0.0.4-py3-none-any.whl | 413,272 | e3/44/f866a20d079a3c88a981ecdbd1f9ba65441bb63ef4727a73f491eac82e7b/odoo_addon_delivery_dachser-18.0.1.0.0.4-py3-none-any.whl | py3 | bdist_wheel | null | false | b2e3a31994bb794898af02a0c50f5cf3 | 089c94abfdf0eb0fe10098493b46e564b56ce9a5dc8519a305ab6a302ff15c27 | e344f866a20d079a3c88a981ecdbd1f9ba65441bb63ef4727a73f491eac82e7b | null | [] | 102 |
2.1 | odoo-addon-delivery-state | 18.0.1.2.0.5 | Provides fields to be able to contemplate the tracking states and also adds a global field | .. image:: https://odoo-community.org/readme-banner-image
:target: https://odoo-community.org/get-involved?utm_source=readme
:alt: Odoo Community Association
==============
Delivery State
==============
..
!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
!! This file is generated by oca-gen-addon-readme !!
!! changes will be overwritten. !!
!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
!! source digest: sha256:0e8966a24f4b4a262db70ad78cc067e28ca2cfd0f5eac85a3720ce34cceefe4d
!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
.. |badge1| image:: https://img.shields.io/badge/maturity-Beta-yellow.png
:target: https://odoo-community.org/page/development-status
:alt: Beta
.. |badge2| image:: https://img.shields.io/badge/license-AGPL--3-blue.png
:target: http://www.gnu.org/licenses/agpl-3.0-standalone.html
:alt: License: AGPL-3
.. |badge3| image:: https://img.shields.io/badge/github-OCA%2Fdelivery--carrier-lightgray.png?logo=github
:target: https://github.com/OCA/delivery-carrier/tree/18.0/delivery_state
:alt: OCA/delivery-carrier
.. |badge4| image:: https://img.shields.io/badge/weblate-Translate%20me-F47D42.png
:target: https://translation.odoo-community.org/projects/delivery-carrier-18-0/delivery-carrier-18-0-delivery_state
:alt: Translate me on Weblate
.. |badge5| image:: https://img.shields.io/badge/runboat-Try%20me-875A7B.png
:target: https://runboat.odoo-community.org/builds?repo=OCA/delivery-carrier&target_branch=18.0
:alt: Try me on Runboat
|badge1| |badge2| |badge3| |badge4| |badge5|
This module adds helper functionality needed for carrier integrations.
It provides fields to track the delivery states reported by the carrier
and also adds a global field with generic states, in addition to the
carrier-specific ones.
**Table of contents**
.. contents::
:local:
Configuration
=============
A scheduled action that automates tracking updates for these pickings
can be configured by going to *Settings > Technical > Scheduled Actions*
and choosing *Update deliveries states*. It updates the delivery state
of pickings whose service provider has a tracking method configured and
whose state is still pending (neither delivered nor cancelled).
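The filter applied by the scheduled action can be sketched in plain Python. Field names here (``has_tracking_method``, ``delivery_state``) are illustrative, not the module's actual ORM code:

```python
# States in which a picking no longer needs tracking updates
# (finished states: delivered or cancelled).
FINISHED_STATES = {"customer_delivered", "canceled_shipment"}

def pickings_to_update(pickings):
    """Return the pickings whose tracking state should be refreshed."""
    return [
        p for p in pickings
        if p.get("has_tracking_method")            # provider implements tracking
        and p.get("delivery_state") not in FINISHED_STATES
    ]

sample = [
    {"name": "OUT/001", "has_tracking_method": True, "delivery_state": "in_transit"},
    {"name": "OUT/002", "has_tracking_method": True, "delivery_state": "customer_delivered"},
    {"name": "OUT/003", "has_tracking_method": False, "delivery_state": "in_transit"},
]
print([p["name"] for p in pickings_to_update(sample)])  # ['OUT/001']
```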
In order to send automatic notifications to the customer when the
picking is customer_delivered:
1. Go to *Inventory > Configuration > Settings*.
2. Enable the option *Email Confirmation (customer delivered)*.
3. Choose the template "Delivery: Picking delivered by Email".
Usage
=====
Depending on the delivery service provider, state tracking can be more
or less complete, since the necessary API calls may or may not be
implemented.
With regular methods (fixed, based on rules):
1. Go to Inventory / Operations and open an outgoing pending picking.
2. In the *Additional Info* tab, assign it a delivery carrier which
is fixed or based on rules.
3. Validate the picking and you'll see in the same tab the delivery
state info with the shipping date and the shipping state.
4. If enabled, an automatic notification will be sent to the picking
customer.
When service provider methods are implemented, we can follow the same
steps as described before, but we'll get additionally:
1. In the *Additional Info* tab, we'll see button *Update tracking
state* to manually query the provider API for tracking updates for
this expedition.
2. Depending on the state returned by the provider, we could get
these states (field *Carrier State*):
- Shipping recorded in carrier
- In transit
- Canceled shipment (finished)
- Incident
- Warehouse delivered
- Customer delivered (finished)
3. In the field *Tracking state* we'll get the tracking state name
given by the provider (which is mapped to the ones in this module).
4. In the field *Tracking history* we'll get the former states log.
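The mapping from provider-specific state names to this module's generic carrier states can be sketched as follows. The mapping table below is hypothetical; each carrier integration ships its own:

```python
# Hypothetical provider-state -> generic carrier-state mapping.
STATE_MAP = {
    "SHIPMENT_CREATED": "shipping_recorded_in_carrier",
    "IN_TRANSIT": "in_transit",
    "DELIVERED": "customer_delivered",
    "EXCEPTION": "incident",
}

def to_carrier_state(provider_state):
    # Unknown provider states fall back to "incident" so they surface
    # for manual review instead of being silently dropped.
    return STATE_MAP.get(provider_state, "incident")

print(to_carrier_state("IN_TRANSIT"))  # in_transit
```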
Bug Tracker
===========
Bugs are tracked on `GitHub Issues <https://github.com/OCA/delivery-carrier/issues>`_.
In case of trouble, please check there if your issue has already been reported.
If you spotted it first, help us to smash it by providing a detailed and welcomed
`feedback <https://github.com/OCA/delivery-carrier/issues/new?body=module:%20delivery_state%0Aversion:%2018.0%0A%0A**Steps%20to%20reproduce**%0A-%20...%0A%0A**Current%20behavior**%0A%0A**Expected%20behavior**>`_.
Do not contact contributors directly about support or help with technical issues.
Credits
=======
Authors
-------
* Trey (www.trey.es)
* FactorLibre
* Tecnativa
Contributors
------------
- `Trey <https://www.trey.es>`__:
- Roberto Lizana <roberto@trey.es>
- `FactorLibre <https://www.factorlibre.com>`__:
- Zahra Velasco <zahra.velasco@factorlibre.com>
- `Tecnativa <https://www.tecnativa.com>`__:
- Pedro M. Baeza
- David Vidal
- Víctor Martínez
- Marçal Isern <marsal.isern@qubiq.es>
Maintainers
-----------
This module is maintained by the OCA.
.. image:: https://odoo-community.org/logo.png
:alt: Odoo Community Association
:target: https://odoo-community.org
OCA, or the Odoo Community Association, is a nonprofit organization whose
mission is to support the collaborative development of Odoo features and
promote its widespread use.
This module is part of the `OCA/delivery-carrier <https://github.com/OCA/delivery-carrier/tree/18.0/delivery_state>`_ project on GitHub.
You are welcome to contribute. To learn how please visit https://odoo-community.org/page/Contribute.
| text/x-rst | Trey (www.trey.es), FactorLibre, Tecnativa, Odoo Community Association (OCA) | support@odoo-community.org | null | null | AGPL-3 | null | [
"Programming Language :: Python",
"Framework :: Odoo",
"Framework :: Odoo :: 18.0",
"License :: OSI Approved :: GNU Affero General Public License v3"
] | [] | https://github.com/OCA/delivery-carrier | null | >=3.10 | [] | [] | [] | [
"odoo==18.0.*"
] | [] | [] | [] | [] | twine/6.2.0 CPython/3.12.3 | 2026-02-19T02:54:14.532829 | odoo_addon_delivery_state-18.0.1.2.0.5-py3-none-any.whl | 40,064 | 71/6f/07752b52e7e0af6b7b2bfefb8583daeeadcf94a28402c9d10fd08712b60c/odoo_addon_delivery_state-18.0.1.2.0.5-py3-none-any.whl | py3 | bdist_wheel | null | false | 76e089d93a240740b4818041fcc4ea2e | aab2f64a8f58df42e1a77d9afc0981636be22dea4955b2ee5c21bc1ede54e7b7 | 716f07752b52e7e0af6b7b2bfefb8583daeeadcf94a28402c9d10fd08712b60c | null | [] | 103 |
2.1 | odoo-addon-delivery-easypost-oca | 18.0.1.0.0.4 | OCA Delivery Easypost | .. image:: https://odoo-community.org/readme-banner-image
:target: https://odoo-community.org/get-involved?utm_source=readme
:alt: Odoo Community Association
=====================
Easypost Shipping OCA
=====================
..
!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
!! This file is generated by oca-gen-addon-readme !!
!! changes will be overwritten. !!
!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
!! source digest: sha256:abc8b4120a8033a55743462dbffeef0e026bc17966f3d8f797b4919625a887a3
!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
.. |badge1| image:: https://img.shields.io/badge/maturity-Beta-yellow.png
:target: https://odoo-community.org/page/development-status
:alt: Beta
.. |badge2| image:: https://img.shields.io/badge/license-AGPL--3-blue.png
:target: http://www.gnu.org/licenses/agpl-3.0-standalone.html
:alt: License: AGPL-3
.. |badge3| image:: https://img.shields.io/badge/github-OCA%2Fdelivery--carrier-lightgray.png?logo=github
:target: https://github.com/OCA/delivery-carrier/tree/18.0/delivery_easypost_oca
:alt: OCA/delivery-carrier
.. |badge4| image:: https://img.shields.io/badge/weblate-Translate%20me-F47D42.png
:target: https://translation.odoo-community.org/projects/delivery-carrier-18-0/delivery-carrier-18-0-delivery_easypost_oca
:alt: Translate me on Weblate
.. |badge5| image:: https://img.shields.io/badge/runboat-Try%20me-875A7B.png
:target: https://runboat.odoo-community.org/builds?repo=OCA/delivery-carrier&target_branch=18.0
:alt: Try me on Runboat
|badge1| |badge2| |badge3| |badge4| |badge5|
This module integrates `EasyPost <https://easypost.com>`__ shipping API
with Odoo, providing access to 100+ carriers through a single, unified
interface.
**What is EasyPost?**
EasyPost is a shipping API aggregator that eliminates the need for
separate carrier integrations. Instead of implementing individual APIs
for USPS, UPS, FedEx, DHL, and others, you connect once to EasyPost and
gain access to all supported carriers with pre-negotiated rates.
**Key Features**
- **Automated Rate Calculation**: Get real-time shipping rates from
multiple carriers and automatically select the lowest rate
- **Label Generation**: Generate and print shipping labels in multiple
formats (PDF, ZPL, EPL2)
- **Multi-Carrier Support**: Access 100+ carriers including USPS, UPS,
FedEx, DHL, Canada Post, and regional carriers
- **Shipment Tracking**: Real-time tracking with automatic updates and
public tracking links
- **Multi-Package Handling**: Support for shipments with multiple
packages (individual or batch mode)
- **Address Verification**: Automatic address validation to reduce
delivery errors
- **Multiple Label Formats**: PDF for standard printers, ZPL/EPL2 for
thermal label printers
- **Test Environment**: Full test mode for development without charges
or actual shipments
- **Automatic Conversions**: Weight conversion to ounces and currency
conversion handled automatically
**Benefits of Using EasyPost**
- **Single Integration**: One API connection for all carriers - no need
to integrate each carrier separately
- **Lowest Rate Selection**: Automatically compares rates from all
available carriers and selects the cheapest option
- **Pre-Negotiated Rates**: Access to discounted carrier rates through
EasyPost's volume agreements
- **Reduced Complexity**: Unified API response format regardless of
carrier
- **Scalability**: Easily add new carriers without code changes
- **Testing Without Risk**: Complete test environment with mock
shipments
**Technical Architecture**
This module implements a clean, layered architecture:
- **Business Logic Layer** (delivery_carrier.py): Handles Odoo-specific
logic (orders, pickings, pricing)
- **API Wrapper Layer** (easypost_request.py): Centralizes all EasyPost
API interactions
- **EasyPost Python SDK** (version 7.15.0): Official EasyPost client
library
- **Comprehensive Test Suite**: 100% mocked tests with no real API calls
during testing
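The layered split above can be illustrated with a minimal sketch. Class and method names are simplified stand-ins for the real code in ``delivery_carrier.py`` and ``easypost_request.py``; the point is that the business layer never calls EasyPost directly:

```python
class EasypostRequest:
    """API wrapper layer: the only place that knows the EasyPost SDK."""

    def __init__(self, api_key):
        self.api_key = api_key

    def get_rates(self, order_payload):
        # In the real module this calls the EasyPost SDK; in tests it is
        # fully mocked. Hard-coded rates stand in for an API response here.
        return [{"carrier": "USPS", "rate": 7.50}, {"carrier": "UPS", "rate": 9.10}]


class DeliveryCarrier:
    """Business logic layer: Odoo-facing and carrier-agnostic."""

    def __init__(self, api_key):
        self.request = EasypostRequest(api_key)

    def rate_shipment(self, order_payload):
        # Delegate to the wrapper, then apply business policy
        # (select the lowest rate).
        rates = self.request.get_rates(order_payload)
        return min(rates, key=lambda r: r["rate"])


carrier = DeliveryCarrier("EZTK_test_key")
print(carrier.rate_shipment({})["carrier"])  # USPS
```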
**Supported Carriers (via EasyPost)**
USPS, UPS, FedEx, DHL Express, DHL eCommerce, Canada Post, APC Postal,
Asendia, Australia Post, Canpar, Couriers Please, Deutsche Post,
Fastway, Globegistics, Interlink Express, LaserShip, LSO, OnTrac,
Parcelforce, Purolator, Royal Mail, Sendle, StarTrack, and 80+
additional carriers worldwide.
**Use Cases**
- **E-Commerce Stores**: Automatically calculate shipping costs at
checkout
- **Warehouse Operations**: Generate and print shipping labels for
outbound orders
- **Multi-Carrier Shipping**: Compare rates across carriers to minimize
shipping costs
- **International Shipping**: Access to international carriers with
automatic address verification
- **High-Volume Shipping**: Batch processing for multiple packages in
single transactions
**Table of contents**
.. contents::
:local:
Configuration
=============
**Step 1: Configure Delivery Carrier**
1. Go to **Inventory → Configuration → Delivery Methods**
2. Create a new delivery method or edit an existing one
3. Set **Provider** to **Easypost OCA**
4. In the **Easypost Configuration** tab, configure the following
fields:
- **Easypost Test API Key**: Your test API key from EasyPost
dashboard (for development and testing)
- **Easypost Production API Key**: Your production API key (for live
shipments)
- **Label Format**: Choose the label format according to your
printer:
- **PDF** (default): Standard format, works with all printers
- **ZPL**: For Zebra thermal printers (direct thermal printing)
- **EPL2**: For legacy Eltron/Zebra thermal printers
- **Delivery Multiple Packages**: Select the shipping strategy for
orders with multiple packages:
- **Shipments** (default): Create individual shipment for each
package, charged separately
- **Batch**: Create batch shipment with all packages in a single
transaction
5. Configure **Pricing** tab as needed (fixed price, percentage, or
formula-based)
6. Enable the **Test Environment** checkbox during development to use
test API key
7. **Save** the delivery method
**Step 2: Configure Product Packaging (Optional)**
For accurate shipping calculations using carrier-specific package types:
1. Go to **Inventory → Configuration → Product Packagings**
2. Create or edit a product packaging
3. Set **Carrier** to **Easypost OCA**
4. Configure package dimensions: **Length**, **Width**, **Height** (in
inches)
5. Set **Shipper Package Code** for carrier-specific packaging (e.g.,
"Parcel", "FedExBox", "FlatRateEnvelope")
6. Set **Carrier Prefix** to filter by specific carrier if needed (e.g.,
"USPS", "FedEx")
7. **Save** the packaging
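How the packaging fields above translate into an EasyPost parcel payload can be sketched like this (a simplified illustration, not the module's actual code): when a shipper package code is set it is sent as the parcel's ``predefined_package``, otherwise the explicit dimensions in inches are used.

```python
def build_parcel(packaging, weight_oz):
    """Build a parcel payload from packaging fields (illustrative)."""
    parcel = {"weight": weight_oz}  # EasyPost expects ounces
    if packaging.get("shipper_package_code"):
        # Carrier-specific package type, e.g. "FlatRateEnvelope".
        parcel["predefined_package"] = packaging["shipper_package_code"]
    else:
        # Explicit dimensions, in inches.
        parcel.update(
            length=packaging["length"],
            width=packaging["width"],
            height=packaging["height"],
        )
    return parcel

print(build_parcel({"shipper_package_code": "FlatRateEnvelope"}, 12.0))
```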
**Step 3: Configure Warehouse Address**
Ensure your warehouse has a complete address that will be used as the
shipper address:
1. Go to **Inventory → Configuration → Warehouses**
2. Edit your warehouse
3. Set complete **Address** with all required fields:
- Street address
- City
- State/Province
- ZIP/Postal code
- Country
4. Verify the address is accurate - this will be the "Ship From" address
on all labels
**Step 4: Enable API Logging (Optional)**
For debugging API interactions and troubleshooting issues:
1. Edit the delivery carrier configured in Step 1
2. Go to the **Advanced Options** tab
3. Enable **Log XML** checkbox
4. Check API request/response logs at **Settings → Technical → Logging**
**Important Configuration Notes**
- **Test vs Production Mode**: Toggle the **Test Environment** checkbox
to switch between test and production API keys. Always test thoroughly
in test mode before going live.
- **Weight Units**: The module automatically converts product weights to
ounces (EasyPost requirement). Ensure all products have weight
configured.
- **Currency Conversion**: EasyPost returns rates in USD. The module
automatically converts to the order's currency using Odoo's currency
rates.
- **USPS/UPS End Shipper**: These carriers require an end shipper ID for
certain services. The module automatically creates and manages this -
no manual configuration needed.
- **API Keys Security**: Keep your production API key secure. Never
commit API keys to version control or share them publicly.
- **Rate Caching**: EasyPost rates are calculated in real-time. Rates
shown at quotation time may differ slightly at shipping time if time
has passed.
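The automatic weight conversion mentioned above can be sketched in two lines: EasyPost expects weights in ounces, so kilogram weights are multiplied by the kg-to-ounce factor before the API call (a sketch, not the module's exact rounding rules):

```python
# Exact to more precision than shipping rates need.
KG_TO_OZ = 35.27396195

def kg_to_oz(weight_kg):
    """Convert a product weight in kilograms to ounces."""
    return round(weight_kg * KG_TO_OZ, 2)

print(kg_to_oz(1.0))  # 35.27
print(kg_to_oz(0.5))  # 17.64
```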
Usage
=====
**Workflow 1: Calculate Shipping Rates on Sale Order**
1. Create a new **Sale Order**: Go to **Sales → Orders → Quotations →
Create**
2. Add products to the order (ensure products have weight configured)
3. Set the **Customer** with a complete shipping address
4. Click the **Add Shipping** button
5. In the **Add a shipping method** wizard:
- Select your **Easypost OCA** delivery method from the list
- Click **Get Rate** button
6. The system will:
- Query EasyPost API with order details (weight, origin/destination
addresses)
- Automatically select the lowest rate from all available carriers
- Add a shipping line to the order with the carrier name (e.g.,
"Easypost - USPS Priority")
7. The **Sale Order** now shows:
- Shipping line with carrier name and calculated price
- Stored data: shipment_id, rate_id, carrier_name (for later shipment
creation)
8. Confirm the order when ready to proceed
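Step 6 above (lowest-rate selection plus storing identifiers for later purchase) can be sketched as follows. The dictionary keys mirror the stored fields named in step 7 (``shipment_id``, ``rate_id``, ``carrier_name``); the rate records are mock data, not a real API response:

```python
def select_rate(rates):
    """Pick the cheapest rate and keep what is needed to buy it later."""
    best = min(rates, key=lambda r: float(r["rate"]))
    return {
        "shipment_id": best["shipment_id"],
        "rate_id": best["id"],
        "carrier_name": f"Easypost - {best['carrier']} {best['service']}",
        "price": float(best["rate"]),
    }

rates = [
    {"id": "rate_1", "shipment_id": "shp_1", "carrier": "USPS",
     "service": "Priority", "rate": "7.33"},
    {"id": "rate_2", "shipment_id": "shp_1", "carrier": "FedEx",
     "service": "Ground", "rate": "9.21"},
]
print(select_rate(rates)["carrier_name"])  # Easypost - USPS Priority
```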
**Workflow 2: Generate Shipping Labels (Single Package)**
1. After confirming the sale order, a **Delivery Order** is
automatically created
2. Go to **Inventory → Delivery Orders** and open the delivery order
3. Click **Check Availability** to reserve stock
4. Set the **Done** quantities for each product line
5. Click **Validate** button - the system will automatically:
- Create an EasyPost shipment using the stored rate (or calculate a
new rate if expired)
- Purchase the shipment (charges your EasyPost account)
- Download the shipping label in your configured format
(PDF/ZPL/EPL2)
- Attach the label to the picking in the **Chatter** messages
- Set the tracking number on the delivery order
6. Find the shipping label in the **Chatter** section at the bottom
7. Click the attachment to download and print the label
8. Attach the printed label to your package and ship
**Workflow 3: Multi-Package Shipments**
If your delivery order contains multiple packages:
1. In the delivery order, after setting **Done** quantities
2. Use the **Put in Pack** button to create packages:
- Select products for the first package
- Click **Put in Pack** - creates Package 1
- Repeat for additional packages
3. Click **Validate** - behavior depends on your carrier configuration:
**Shipments Mode** (default):
- Creates individual EasyPost shipment for each package
- Each package gets its own tracking number
- Each package is charged separately
- System generates a single merged PDF/ZPL with all labels in
sequence
**Batch Mode**:
- Creates a batch shipment containing all packages
- Packages are purchased together in a single transaction
- May provide better rates for bulk shipping
- Single merged label file with all package labels
4. All tracking numbers are displayed in the delivery order
(comma-separated)
5. The merged label file contains all package labels - print and attach
to corresponding packages
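Step 4 above (one tracking number per package, displayed comma-separated) reduces to a simple join; field names here are illustrative:

```python
def merge_tracking_refs(shipments):
    """Comma-join per-package tracking codes into one reference string."""
    return ",".join(s["tracking_code"] for s in shipments)

shipments = [
    {"tracking_code": "9400100000000000000001"},
    {"tracking_code": "9400100000000000000002"},
]
print(merge_tracking_refs(shipments))
```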
**Workflow 4: Track Shipments**
After shipment creation:
1. Open the **Delivery Order**
2. Click the **Tracking** button/link in the form
3. Opens the EasyPost tracking page in a new browser tab
4. View real-time shipment status, location history, and delivery
confirmation
**Using Product Packaging for Accurate Rates**
To use carrier-specific package types (flat rate boxes, envelopes,
etc.):
1. In the delivery order, after using **Put in Pack**
2. Edit the created **Package** record
3. Set the **Packaging** field to a product packaging configured for
EasyPost (see CONFIGURE.rst)
4. The packaging dimensions and shipper package code are sent to
EasyPost
5. Results in more accurate rates and ensures carrier compatibility
**Label Format Guide**
Choose the appropriate label format for your printing setup:
- **PDF Format**:
- Works with all standard printers
- Can preview in browser before printing
- Best for offices with regular laser/inkjet printers
- Label size: typically 8.5"x11" or 4"x6"
- **ZPL Format**:
- For Zebra thermal label printers
- Direct thermal printing (no ink/toner required)
- Common in warehouses and shipping departments
- Requires ZPL-compatible thermal printer
- **EPL2 Format**:
- For legacy Eltron and older Zebra thermal printers
- Use only if your printer doesn't support ZPL
- Less common in modern shipping operations
**Important Usage Notes**
- **Product Weight Required**: All products in the order must have
weight configured. Orders without weight cannot calculate shipping
rates.
- **Complete Addresses Required**: Both warehouse address (ship from)
and customer address (ship to) must be complete with street, city,
state/province, ZIP, and country.
- **Test Mode Shipments**: Shipments created in test mode (with test API
key) are not actually shipped. They are mock shipments for testing
purposes and no charges apply.
- **Cannot Cancel Shipments**: EasyPost shipments cannot be cancelled
through Odoo. Some carriers allow refunds within 24 hours - use the
EasyPost dashboard for refund requests.
- **Rate Expiration**: Shipping rates may change between quotation and
shipment if significant time has passed. The system will automatically
recalculate rates at shipping time if the stored rate has expired.
- **Multiple Carriers**: EasyPost automatically compares rates from
multiple carriers (USPS, UPS, FedEx, etc.) and selects the lowest
rate. You can see the selected carrier name in the shipping line
(e.g., "Easypost - USPS Priority").
- **Tracking Updates**: Tracking information is available immediately
after shipment creation. Real tracking events (scans, delivery) appear
as the carrier processes the shipment.
Known issues / Roadmap
======================
**Planned Future Enhancements**
The following features are being considered for future releases:
- **Carrier Service Selection**: Allow users to manually select specific
carrier/service combinations instead of automatic lowest rate
selection
- **Custom Carrier Accounts**: Support for connecting specific carrier
accounts configured in EasyPost dashboard for using negotiated rates
- **Shipment Cancellation**: Implement shipment cancellation and refund
workflow for eligible shipments within the refund window
- **Insurance Options**: Add configuration for shipment insurance with
automatic insurance purchase based on order value
- **International Customs Forms**: Generate and attach customs forms
automatically for international shipments
- **Return Labels**: Generate return shipping labels with reverse
addresses for customer returns and RMA workflows
- **Advanced Batch Operations**: Enhanced batch management including
void batch, rebuy batch, and USPS scan forms generation
- **Rate Shopping Interface**: Display all available rates to users in a
comparison view for manual carrier selection
- **Delivery Preferences**: Support for signature confirmation, delivery
confirmation, adult signature, and other service options
- **Webhook Integration**: Real-time tracking updates via EasyPost
webhooks instead of polling
- **Performance Optimization**: Implement rate caching and bulk
operations for high-volume shipping scenarios
- **Smart Box Selection**: Automatic package size recommendation based
on product dimensions and weight
- **Shipping Rules Engine**: Configurable rules for automatic carrier
selection based on destination, weight, or value thresholds
- **Carbon Offset Integration**: Support for EasyPost's carbon offset
program with reporting
Bug Tracker
===========
Bugs are tracked on `GitHub Issues <https://github.com/OCA/delivery-carrier/issues>`_.
In case of trouble, please check there if your issue has already been reported.
If you spotted it first, help us to smash it by providing a detailed and welcomed
`feedback <https://github.com/OCA/delivery-carrier/issues/new?body=module:%20delivery_easypost_oca%0Aversion:%2018.0%0A%0A**Steps%20to%20reproduce**%0A-%20...%0A%0A**Current%20behavior**%0A%0A**Expected%20behavior**>`_.
Do not contact contributors directly about support or help with technical issues.
Credits
=======
Authors
-------
* Binhex
Contributors
------------
- `Binhex <https://www.binhex.cloud>`__:
- Antonio Ruban <a.ruban@binhex.cloud>
- Christian Ramos
Maintainers
-----------
This module is maintained by the OCA.
.. image:: https://odoo-community.org/logo.png
:alt: Odoo Community Association
:target: https://odoo-community.org
OCA, or the Odoo Community Association, is a nonprofit organization whose
mission is to support the collaborative development of Odoo features and
promote its widespread use.
This module is part of the `OCA/delivery-carrier <https://github.com/OCA/delivery-carrier/tree/18.0/delivery_easypost_oca>`_ project on GitHub.
You are welcome to contribute. To learn how please visit https://odoo-community.org/page/Contribute.
| text/x-rst | Binhex, Odoo Community Association (OCA) | support@odoo-community.org | null | null | AGPL-3 | null | [
"Programming Language :: Python",
"Framework :: Odoo",
"Framework :: Odoo :: 18.0",
"License :: OSI Approved :: GNU Affero General Public License v3"
] | [] | https://github.com/OCA/delivery-carrier | null | >=3.10 | [] | [] | [] | [
"easypost==7.15.0",
"odoo==18.0.*"
] | [] | [] | [] | [] | twine/6.2.0 CPython/3.12.3 | 2026-02-19T02:54:10.212124 | odoo_addon_delivery_easypost_oca-18.0.1.0.0.4-py3-none-any.whl | 1,251,904 | 96/e9/29088423fae053596ae17cfd2931a143c2daac44d627146314d20ded9c60/odoo_addon_delivery_easypost_oca-18.0.1.0.0.4-py3-none-any.whl | py3 | bdist_wheel | null | false | 34d3c615e82a6b6a1fae62ca48237217 | c18a6596c1bf232c7e771bd9f7897b498ae09770f013185aa840d8f6707af5aa | 96e929088423fae053596ae17cfd2931a143c2daac44d627146314d20ded9c60 | null | [] | 101 |